Mirror of https://github.com/crewAIInc/crewAI.git, synced 2026-04-30 14:52:36 +00:00
Branch: docs/add-vulnerability-reporting-section
27 Commits

68e943be68  feat: emit token usage data in LLMCallCompletedEvent

6b262f5a6d  Fix lock_store crash when redis package is not installed (#4943)
* Fix lock_store crash when redis package is not installed. `REDIS_URL` being set was enough to trigger a Redis lock, which would raise `ImportError` if the `redis` package wasn't available. Added `_redis_available()` to guard on both the env var and the import.
* Simplify tests
* Simplify tests #2
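A minimal sketch of the guard this fix describes (`_redis_available()` and `REDIS_URL` come from the commit; the body is assumed):

```python
import os
from importlib.util import find_spec

def _redis_available() -> bool:
    # Only use a Redis-backed lock when BOTH the env var is set and the
    # redis package is actually importable; either alone is not enough.
    return bool(os.environ.get("REDIS_URL")) and find_spec("redis") is not None
```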

32d7b4a8d4  Lorenze/feat/plan execute pattern (#4817)
* feat: introduce PlanningConfig for enhanced agent planning capabilities (#4344). Adds a PlanningConfig class to manage agent planning configuration, allowing customizable planning behavior before task execution. The existing reasoning parameter is deprecated in favor of this configuration while keeping backward compatibility; the Agent class and related utilities were updated, with tests validating the new planning flow.
* dropping redundancy
* fix test
* revert handle_reasoning here
* refactor: update reasoning handling in Agent class. handle_reasoning is now called conditionally based on the executor class: the legacy CrewAgentExecutor keeps using it, while the new AgentExecutor manages planning internally. PlanningConfig is referenced in the documentation to clarify its role in enabling or disabling planning.
* improve planning prompts
* matching
* refactor: remove default enabled flag from PlanningConfig in Agent class
* more cassettes
* fix test
* refactor: update planning prompt and remove deprecated methods in reasoning handler
* improve planning prompt
* Lorenze/feat planning pt 2 todo list gen (#4449), which repeats the commits above and adds:
  - feat: enhance agent planning with structured todo management. Introduces a planning system within AgentExecutor that creates structured todo items from planning steps; the TodoList and TodoItem models track plan execution, and the reasoning plan now includes a list of steps.
  - improve handler
  - linted (x2)
* Lorenze/feat/planning pt 3 todo list execution (#4450), which repeats the commits above and adds:
  - execute todos and be able to track them
  - feat: introduce PlannerObserver and StepExecutor for enhanced plan execution. Implements the observation phase of the Plan-and-Execute architecture: PlannerObserver analyzes step execution results, determines plan validity, and suggests refinements, while StepExecutor executes individual todo items in isolation. New observation events support monitoring and logging of the planning process.
  - refactor: enhance final answer synthesis in AgentExecutor. Results from multiple todo items are combined via a single LLM call into a polished response, falling back to concatenation if synthesis fails.
* refactor: implement structured output handling in final answer synthesis. When a response model is specified, synthesis produces output conforming to the expected schema; intermediate steps still yield free-text results.
* regen tests
* linted
* fix
* Enhance PlanningConfig and AgentExecutor with reasoning effort levels. A new attribute lets users customize observation and replanning behavior during task execution, routing step observations by the specified effort level: low, medium, or high. Tests validate agent behavior under each configuration.
* regen cassettes for test and fix test; cassette regen; fixing tests
* dry
* Refactor PlannerObserver and StepExecutor to use I18N for prompts. System and user prompts are retrieved from the I18N module, removing hardcoded strings and improving localization and maintainability.
* consolidate agent logic
* fix datetime
* improving step executor
* refactor: streamline observation and refinement process in PlannerObserver
  - Apply structured refinements directly from observations without a second LLM call.
  - Renamed the method for clarity and updated documentation on how refinements are handled.
  - Removed unnecessary LLM message building and parsing logic.
  - Event emissions now include summaries of refinements instead of raw data.
* enhance step executor with tool usage events and validation
  - Emit tool usage started and finished events to track tool execution.
  - Validate that expected tools are called during step execution, raising errors when they are not.
  - Refactored tool execution to include event logging and added a method for parsing tool input into a structured format.
  - Updated tests for the new behavior.
* refactor: enhance final answer synthesis logic in AgentExecutor
  - Skip synthesis when the last todo result suffices as a complete answer, via a new method that decides whether the last result can be used directly.
  - Tests verify synthesis is skipped when appropriate and kept when a response model is set.
* fix: update observation handling in PlannerObserver for LLM errors. Default to a conservative replan when an LLM call fails; return values indicate the step did not complete successfully and a full replan is needed. A new test verifies the replan logic on LLM errors.
* refactor: enhance planning and execution flow in agents
  - PlannerObserver accepts a kickoff input for standalone task execution.
  - StepExecutor supports multi-turn action loops for iterative tool execution and observation.
  - Added a method to extract relevant task sections from descriptions.
  - AgentExecutor handles step failures more effectively, triggering replans only when necessary and preserving completed task history.
  - Updated translations for planning principles and execution prompts, emphasizing concrete, executable steps.
* refactor: update setup_native_tools to return an additional tool_name_mapping; StepExecutor and AgentExecutor accommodate the new return value.
* fix tests; linted (x2)
* feat: enhance image block handling in Anthropic provider and update AgentExecutor logic
  - Convert OpenAI-style image_url blocks to Anthropic's required format.
  - Handle the case where no todos are ready via a new needs_replan return state.
  - Improved fallback answer generation to prevent RuntimeErrors when no final output is produced.
* lint (x2)
* Track failed todos across replans:
  1. Added failed to TodoStatus (planning_types.py): TodoStatus is now Literal[pending, running, completed, failed]; TodoList gains mark_failed(step_number, result) and get_failed_todos(); is_complete treats both completed and failed as terminal; the replace_pending_todos docstring notes that failed items are preserved.
  2. Mark running todos as failed before replan (agent_executor.py): all three effort-level handlers call mark_failed() on the current todo before routing to replan_now (handle_step_observed_low hard-failure branch; handle_step_observed_medium needs_full_replan branch; decide_next_action for both needs_full_replan and step_completed_successfully=False).
  3. Updated _should_replan to use get_failed_todos(): the previous filter on todo.status == failed was dead code; the accessor actually finds failed items.
  What this fixes: previously, a step that triggered a replan stayed in running status permanently, so is_complete never returned True and next_pending skipped it, leading to stuck execution states. Failed steps are now tracked, replanning context reports them, and LiteAgentOutput.failed_todos actually returns results.
* fix test
* imp on failed states
* adjusted the var name from AgentReActState to AgentExecutorState
* addressed p0 bugs; more improvements; linted; regen cassette
* addressing critical comments
* ensure configurable timeouts, max_replans and max step iterations
* adjusted tools; dropping debug statements; addressed comment; fix linter; lints and test fixes
* fix: default observation parse fallback to failure and clean up plan-execute types. When _parse_observation_response fails all parse attempts, default to step_completed_successfully=False instead of True to avoid silently masking failures. Extract the duplicated _extract_task_section into a shared utility in agent_utils. Type PlanningConfig.llm as str | BaseLLM | None instead of str | Any | None. Make StepResult a frozen dataclass for immutability consistency with StepExecutionContext.
* fix: remove Any from function_calling_llm union type in step_executor
* fix: make BaseTool usage count thread-safe for parallel step execution. Add _usage_lock and _claim_usage() to BaseTool for atomic check-and-increment of current_usage_count, preventing race conditions when parallel plan steps invoke the same tool concurrently via execute_todos_parallel. The racy pre-check in execute_single_native_tool_call is removed, since the limit is now enforced atomically inside tool.run().

---------

Co-authored-by: Greyson LaLonde <greyson.r.lalonde@gmail.com>
Co-authored-by: Greyson LaLonde <greyson@crewai.com>
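The thread-safety fix at the end of this entry is a crisp pattern. Here is a minimal sketch of the atomic check-and-increment (only `_usage_lock`, `_claim_usage`, and `current_usage_count` come from the commit; the class shown is an illustrative stand-in, not crewAI's BaseTool):

```python
import threading

class ToolUsageCounter:
    """Illustrative stand-in for BaseTool's usage tracking."""

    def __init__(self, max_usage_count: int | None = None) -> None:
        self.max_usage_count = max_usage_count
        self.current_usage_count = 0
        self._usage_lock = threading.Lock()

    def _claim_usage(self) -> bool:
        # Check and increment under one lock so two parallel steps can't
        # both pass the limit check before either increments the count.
        with self._usage_lock:
            if (self.max_usage_count is not None
                    and self.current_usage_count >= self.max_usage_count):
                return False
            self.current_usage_count += 1
            return True
```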

1ac5801578  fix: inject tool errors as observations and resolve name collisions

d259150d8d  Enhance MCP tool resolution and related events (#4580)
* feat: enhance MCP tool resolution
* feat: emit event when MCP configuration fails
* feat: emit event when MCP tool execution has failed
* style: resolve linter issues
* refactor: use clear and natural mcp tool name resolution
* test: fix broken tests
* fix: resolve MCP connection leaks, slug validation, duplicate connections, and httpx exception handling

---------

Co-authored-by: Greyson LaLonde <greyson.r.lalonde@gmail.com>
Co-authored-by: Greyson LaLonde <greyson@crewai.com>

09e3b81ca3  fix: preserve null types in tool parameter schemas for LLM (#4579)
* fix: preserve null types in tool parameter schemas for LLM. Tool parameter schemas were stripping null from optional fields via generate_model_description, forcing the LLM to provide non-null values for fields that should accept null. Adds a strip_null_types parameter to generate_model_description, passed as False when generating tool schemas, so optional fields keep anyOf: [{type: T}, {type: null}].
* Update lib/crewai/src/crewai/utilities/pydantic_schema_utils.py
  Co-authored-by: Gabe Milani <gabriel@crewai.com>
---------
Co-authored-by: Greyson LaLonde <greyson.r.lalonde@gmail.com>
Co-authored-by: Gabe Milani <gabriel@crewai.com>
Co-authored-by: Lorenze Jay <63378463+lorenzejay@users.noreply.github.com>
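For context, this is the schema shape at stake. A plain pydantic sketch (not crewAI code) showing the anyOf null branch that the fix preserves for optional fields:

```python
from pydantic import BaseModel

class SearchArgs(BaseModel):
    query: str
    limit: int | None = None  # optional field; its schema keeps a null branch

# Pydantic emits the null type for the optional field, e.g.:
#   {"anyOf": [{"type": "integer"}, {"type": "null"}], "default": None}
print(SearchArgs.model_json_schema()["properties"]["limit"])
```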

8102d0a6ca  feat: enhance JSON argument parsing and validation in CrewAgentExecutor and BaseTool
* feat: enhance JSON argument parsing and validation in CrewAgentExecutor and BaseTool
  - Added error handling for malformed JSON tool arguments in CrewAgentExecutor, providing descriptive error messages.
  - Implemented schema validation for tool arguments in BaseTool, so invalid arguments raise appropriate exceptions.
  - Introduced tests verifying behavior for both valid and invalid JSON inputs.
* refactor: improve argument validation in BaseTool
  - Introduced a new private method to handle argument validation for tools, improving clarity and reusability, and updated the calling method to use it for consistent error handling of invalid arguments.
  - Enhanced exception handling to catch the specific validation exception, providing clearer error messages for tool argument validation failures.
* feat: introduce parse_tool_call_args for improved argument parsing
  - Added a utility function, parse_tool_call_args, that parses tool call arguments from JSON strings or dictionaries, with error handling for malformed JSON inputs.
  - Updated CrewAgentExecutor and AgentExecutor to use the new parsing function, streamlining argument validation and improving error reporting.
  - Added unit tests for parse_tool_call_args covering various input scenarios.
* feat: add keyword argument validation in BaseTool and Tool classes
  - Introduced `_validate_kwargs` in BaseTool to validate keyword arguments against the defined schema.
  - Updated the `run` and `arun` methods in both BaseTool and Tool to use the new validation method.
  - Added tests in `TestBaseToolArunValidation` covering asynchronous execution with valid and invalid keyword arguments.
* Potential fix for pull request finding 'Syntax error'

---------

Co-authored-by: Copilot Autofix powered by AI <223894421+github-code-quality[bot]@users.noreply.github.com>
Co-authored-by: lorenzejay <lorenzejaytech@gmail.com>
Co-authored-by: Lorenze Jay <63378463+lorenzejay@users.noreply.github.com>
Co-authored-by: Greyson LaLonde <greyson.r.lalonde@gmail.com>
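A minimal sketch of what parse_tool_call_args plausibly does (the name and purpose come from the commit; the body and error type are assumptions):

```python
import json
from typing import Any

def parse_tool_call_args(raw: str | dict[str, Any]) -> dict[str, Any]:
    """Accept tool call arguments as a JSON string or a dict."""
    if isinstance(raw, dict):
        return raw
    try:
        parsed = json.loads(raw)
    except json.JSONDecodeError as e:
        # Descriptive error instead of a bare crash on malformed JSON.
        raise ValueError(f"Malformed JSON tool arguments: {e}") from e
    if not isinstance(parsed, dict):
        raise ValueError(f"Tool arguments must be a JSON object, got {type(parsed).__name__}")
    return parsed
```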

7377e1aa26  fix: bedrock region was always set to "us-east-1" not respecting the env var. (#4582)
* fix: bedrock region was always set to "us-east-1", not respecting the env var. The code referenced AWS_REGION_NAME but never used it; unified on AWS_DEFAULT_REGION per the documentation.
* DRY code improvement and a fix caught by tests.
* Supporting litellm configuration
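A one-line sketch of the documented precedence (the env var name comes from the commit; the fallback default from the title):

```python
import os

# Respect AWS_DEFAULT_REGION when set; fall back to "us-east-1" otherwise.
region = os.environ.get("AWS_DEFAULT_REGION", "us-east-1")
```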

2ed0c2c043  imp compaction (#4399)
* imp compaction
* fix lint
* cassette gen (x2)
* improve assert
* adding azure
* fix global docstring

576b74b2ef  Add call_id to LLM events for correlating requests (#4281)
When monitoring LLM events, consumers need to know which events belong to the same API call. Before this change, there was no way to correlate the LLMCallStartedEvent, LLMStreamChunkEvent, and LLMCallCompletedEvent belonging to the same request.
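A small sketch of what this enables on the consumer side (the event class names and the call_id field come from the commit; the listener wiring here is illustrative):

```python
from collections import defaultdict
from typing import Any

# Group every event that shares a call_id so one API request's start,
# stream chunks, and completion can be inspected together.
events_by_call: dict[str, list[Any]] = defaultdict(list)

def on_llm_event(event: Any) -> None:
    call_id = getattr(event, "call_id", None)
    if call_id is not None:
        events_by_call[call_id].append(event)
```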

19ce56032c  fix: improve output handling and response model integration in agents (#4307)
* fix: improve output handling and response model integration in agents
  - Refactored output handling in the Agent class to ensure proper conversion and formatting of outputs, including support for BaseModel instances.
  - Enhanced the AgentExecutor class to correctly utilize response models during execution, improving the handling of structured outputs.
  - Updated the Gemini and Anthropic completion providers for compatibility with the new response model handling, including strict mode for function definitions.
  - Improved the OpenAI completion provider to enforce strict adherence to function schemas.
  - Adjusted translations to clarify instructions on output formatting and schema adherence.
* drop what was a print that didn't get deleted properly
* fixes gemini
* azure working
* bedrock works
* added tests
* adjust test
* fix tests and regen (x2)
* refactor: ensure stop words are applied correctly in Azure, Gemini, and OpenAI completions; add tests to validate behavior with structured outputs
* linting

2d05e59223  Lorenze/improve tool response pt2 (#4297)
* no need for post-tool reflection on native tools
* refactor: update prompt generation to prevent thought leakage
  - Agents without tools now use a simplified prompt format, avoiding ReAct instructions.
  - Introduced a new 'task_no_tools' slice for agents lacking tools, ensuring clean output without "Thought:" prefixes.
  - Enhanced test coverage to verify that prompts do not encourage thought leakage, keeping outputs focused and direct.
  - Added integration tests to validate that real LLM calls produce clean outputs without internal reasoning artifacts.
* don't forget the cassettes

06a58e463c  feat: adding response_id in streaming response

c4c9208229  feat: native multimodal file handling; openai responses api
- add input_files parameter to Crew.kickoff(), Flow.kickoff(), Task, and Agent.kickoff()
- add provider-specific file uploaders for OpenAI, Anthropic, Gemini, and Bedrock
- add file type detection, constraint validation, and automatic format conversion
- add URL file source support for multimodal content
- add streaming uploads for large files
- add prompt caching support for Anthropic
- add OpenAI Responses API support
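A hypothetical sketch of the file type detection step listed above (detect_file_type is an invented name; crewAI's actual helper and constraint rules may differ):

```python
import mimetypes
from pathlib import Path

def detect_file_type(path: str) -> str:
    # Guess the MIME type from the extension and reject unknown types
    # before attempting a provider upload.
    mime, _ = mimetypes.guess_type(path)
    if mime is None:
        raise ValueError(f"Unsupported or unknown file type: {Path(path).suffix!r}")
    return mime

print(detect_file_type("chart.png"))  # image/png
```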

bd4d039f63  Lorenze/imp/native tool calling (#4258)
* wip restructuring agent executor and liteagent
* fix: handle None task in AgentExecutor to prevent errors. If the task is None, the method returns early without accessing task properties.
* refactor: streamline AgentExecutor initialization by removing redundant task and crew parameters in standalone mode, keeping backward compatibility.
* wip: clean
* ensure executors work inside a flow, due to the flow-in-flow async structure
* refactor: enhance agent kickoff preparation by separating common logic. A new private method consolidates the common setup for synchronous and asynchronous kickoff, reducing redundancy while supporting both standalone and flow contexts.
* linting and tests; fix test
* refactor: improve test for Agent kickoff parameters; verify configuration after kickoff and update the async-in-flow test to use Agent instead of LiteAgent.
* refactor: update test task guardrail process output for improved validation against the OpenAPI schema, with a more structured request body and updated response handling.
* test fix cassette (x2); working; working cassette
* refactor: streamline agent execution and enhance flow compatibility. Removed the event loop check, clarified sync/async behavior so the method operates seamlessly within flow methods, set the AgentExecutor response model to None, and added test cassettes for agents in flow contexts.
* fixed cassette
* Enhance Flow Execution Logic. Unconditional start methods are prioritized during kickoff; conditional starts execute only if no unconditional starts are present. Cyclic flows can re-execute conditional start methods triggered by routers, and execution chains continue for completed conditional starts.
* Enhance Agent and Flow Execution Logic. The Agent class detects the event loop and returns a coroutine when called within a Flow; Flow executes listeners sequentially to prevent race conditions on shared state; coroutine results from synchronous methods are handled properly.
* Enhance Flow Listener Logic and Agent Imports. Track fired OR listeners so multi-source OR listeners trigger only once per execution; clear them during cyclic flow resets; import Coroutine from collections.abc in the Agent class.
* adjusted test due to new cassette
* ensure native tool calling works with liteagent
* ensure response model is respected
* Enhance Tool Name Handling for LLM Compatibility. Replace invalid characters in function names with underscores and sanitize tool names before validation and registration.
* ensure we don't finalize batch on just a liteagent finishing
* max tools per turn wip, and ensure we drop print times
* fix sync main issues
* fix llm_call_completed event serialization issue
* drop max_tools_iterations
* fix for model dump with state
* Add extract_tool_call_info function to handle various tool call formats. Extracts tool call ID, name, and arguments from OpenAI, Gemini, Anthropic, and dictionary formats, returning a tuple of call ID, function name, and function arguments, or None if the format is unrecognized.
* Refactor AgentExecutor to support batch execution of native tool calls. All tool calls are processed in a single batch, reducing the number of interactions with the LLM; a utility streamlines extraction of tool call details; a now-redundant parameter was removed; logging and message handling were improved.
* Update English translations for tool usage and reasoning instructions. Revised the `post_tool_reasoning` message to clarify the analysis process after tool usage (provide only the final answer if requirements are met) and simplified the `format` message for deciding between using a tool or giving a final answer.
* fix; fixing azure tests
* organize imports; dropped unused
* Remove debug print statements from AgentExecutor, eliminating unnecessary console output during LLM calls and iterations.
* linted
* updated cassette; regen cassette
* revert crew agent executor
* adjust cassettes and dropped tests due to the native tool implementation
* adjust
* ensure we properly fail tools and emit their events
* Enhance tool handling and delegation tracking in agent executors
  - Immediate return for tools with result_as_answer=True in crew_agent_executor.py.
  - Delegation tracking in agent_utils.py increments delegations when specific tools are used.
  - Tool usage logic handles caching more effectively in tool_usage.py.
  - Tests validate the new delegation features and tool caching behavior.
* fix cassettes; fix; regen cassettes; regen gemini
* ensure we support bedrock; supporting bedrock
* regen azure cassettes
* Implement max usage count tracking for tools in agent executors. Check whether a tool has reached its maximum usage count before execution in both crew_agent_executor.py and agent_executor.py; return a message when the limit is reached; increment usage counts in tool_usage.py; add tests for max usage count with native tool calling.
* fix other test; fix test; drop logs; better tests
* regen; regen all azure cassettes; regen again placeholder for cassette matching
* fix: unify tool name sanitization across codebase
* fix: include tool role messages in save_last_messages
* fix: update sanitize_tool_name test expectations to match the unified behavior that lowercases and splits camelCase for LLM provider compatibility.
* fix: apply sanitize_tool_name consistently across the codebase, so tool names shown to LLMs match the tool name matching/lookup logic.
* regen
* fix: sanitize tool names in native tool call processing (extract_tool_call_info, delegation tool matching, crew_agent_executor extraction, experimental agent_executor, LLM.call function lookup, streaming utility, base_agent_executor_mixin delegation check).
* Extract text content from parts directly to avoid a warning about non-text parts
* Add test case for Gemini token usage tracking, with a new YAML cassette and assertions on token usage metrics and response content.

---------

Co-authored-by: Greyson LaLonde <greyson.r.lalonde@gmail.com>
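A sketch of the tool-name sanitization this entry describes (the stated rules are lowercase, split camelCase, replace invalid characters with underscores; the exact regexes are assumed):

```python
import re

def sanitize_tool_name(name: str) -> str:
    name = re.sub(r"(?<=[a-z0-9])(?=[A-Z])", "_", name)  # split camelCase
    name = re.sub(r"[^a-zA-Z0-9_]", "_", name)           # invalid chars -> _
    return name.lower()

print(sanitize_tool_name("My Search-Tool"))  # my_search_tool
print(sanitize_tool_name("webScraper"))      # web_scraper
```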

741bf12bf4  Lorenze/enh decouple executor from crew (#4209)
* wip restructuring agent executor and liteagent
* fix: handle None task in AgentExecutor to prevent errors. Returns early when the task is None instead of accessing task properties.
* refactor: streamline AgentExecutor initialization by removing redundant task and crew parameters in standalone mode, keeping backward compatibility.
* ensure executors work inside a flow, due to the flow-in-flow async structure
* refactor: enhance agent kickoff preparation by separating common logic into a private method shared by synchronous and asynchronous kickoff.
* linting and tests; fix test
* refactor: improve test for Agent kickoff parameters; verify configuration after kickoff and update the async-in-flow test to use Agent instead of LiteAgent.
* refactor: update test task guardrail process output for improved validation against the OpenAPI schema, with a more structured request body and updated response handling.
* test fix cassette (x2); working; working cassette
* refactor: streamline agent execution and enhance flow compatibility. Removed the event loop check, clarified sync/async behavior within flow methods, set the AgentExecutor response model to None, and added cassettes for agents in flow contexts.
* fixed cassette
* Enhance Flow Execution Logic. Unconditional start methods are prioritized during kickoff; conditional starts execute only when no unconditional starts are present; cyclic flows may re-execute conditional start methods triggered by routers; execution chains continue for completed conditional starts.
* Enhance Agent and Flow Execution Logic. The Agent class detects the event loop and returns a coroutine when called within a Flow; listeners execute sequentially to prevent race conditions on shared state; coroutine results from synchronous methods are handled properly.
* Enhance Flow Listener Logic and Agent Imports. Track fired OR listeners so multi-source OR listeners trigger only once; clear them during cyclic flow resets; import Coroutine from collections.abc.
* adjusted test due to new cassette
* ensure we don't finalize batch on just a liteagent finishing
* feat: cancellable parallelized flow methods
* feat: allow methods to be cancelled & run parallelized
* feat: ensure state is thread safe through proxy
* fix: check for proxy state
* fix: mimic BaseModel method
* chore: update final attr checks; test
* better description
* fix test
* chore: update test assumptions
* extra

---------

Co-authored-by: Greyson LaLonde <greyson.r.lalonde@gmail.com>
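A rough sketch of the "state is thread safe through proxy" idea mentioned above; the class and locking scheme here are assumptions, not crewAI's implementation (which also mimics BaseModel methods):

```python
import threading
from typing import Any

class StateProxy:
    """Wrap a shared state dict so attribute reads/writes are serialized."""

    def __init__(self, state: dict[str, Any]) -> None:
        object.__setattr__(self, "_state", state)
        object.__setattr__(self, "_lock", threading.RLock())

    def __getattr__(self, name: str) -> Any:
        with object.__getattribute__(self, "_lock"):
            try:
                return object.__getattribute__(self, "_state")[name]
            except KeyError as e:
                raise AttributeError(name) from e

    def __setattr__(self, name: str, value: Any) -> None:
        with object.__getattribute__(self, "_lock"):
            object.__getattribute__(self, "_state")[name] = value

proxy = StateProxy({"count": 0})
proxy.count += 1  # read and write each take the lock
print(proxy.count)  # 1
```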

8945457883  Lorenze/metrics for human feedback flows (#4188)
* measuring human feedback feat
* add some tests

467ee2917e  Improve EventListener and TraceCollectionListener for improved event… (#4160)
* Refactor EventListener and TraceCollectionListener for improved event handling
  - Removed unused threading and method branches from EventListener to simplify the code.
  - Updated event handling methods in EventListener to use new formatter methods for better clarity and consistency.
  - Refactored TraceCollectionListener to eliminate unnecessary parameters in formatter calls, enhancing readability.
  - Simplified ConsoleFormatter by removing outdated tree management methods in favor of panel-based output for status updates.
  - Enhanced ToolUsage to track run attempts for better tool usage metrics.
* clearer knowledge retrieval and dropped some redundancies
* Refactor EventListener and ConsoleFormatter for improved clarity and consistency
  - Removed the MCPToolExecutionCompletedEvent handler from EventListener to streamline event processing.
  - Added line breaks in ConsoleFormatter status content for better readability.
  - Renamed MCP Tool execution status messages to give clearer context during tool operations.
* fix run attempt incrementation
* task name consistency
* memory events consistency
* ensure hitl works
* linting

38b0b125d3  feat: use json schema for tool argument serialization
- Replace Python representation with JsonSchema for tool arguments
- Remove deprecated PydanticSchemaParser in favor of direct schema generation
- Add handling for VAR_POSITIONAL and VAR_KEYWORD parameters
- Improve tool argument schema collection
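To illustrate the switch this commit describes (plain pydantic, not crewAI internals): tool arguments get serialized as a JSON schema rather than a Python-style rendering of the signature.

```python
from pydantic import BaseModel, Field

class ScrapeArgs(BaseModel):
    url: str = Field(description="Page to fetch")
    timeout: int = 30

# JSON-schema form of the tool's arguments, suitable for an LLM,
# vs. the older Python-style string, e.g. "ScrapeArgs(url: str, timeout: int = 30)".
print(ScrapeArgs.model_json_schema())
```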

e43c7debbd  fix: add idx for task ordering, tests

c925d2d519  chore: restructure test env, cassettes, and conftest; fix flaky tests
Consolidates pytest config, standardizes env handling, reorganizes cassette layout, removes outdated VCR configs, improves sync with threading.Condition, updates event-waiting logic, ensures cleanup, regenerates Gemini cassettes, and reverts unintended test changes.

4ae8c36815  feat: enhance flow event state management (#3952)
* feat: enhance flow event state management
  - Added a `state` attribute to `FlowFinishedEvent` to capture the flow's state as a JSON-serialized dictionary.
  - Updated flow event emissions to include the serialized state, improving traceability and debugging during flow execution.
* fix: improve state serialization in Flow class
  - Enhanced the `_copy_and_serialize_state` method to handle exceptions during JSON serialization of Pydantic models, ensuring robustness in state management.
  - Updated test assertions to access the state as a dictionary, matching the new state structure.

---------

Co-authored-by: Greyson LaLonde <greyson.r.lalonde@gmail.com>
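A guess at the shape of `_copy_and_serialize_state` (the method name and its exception-handling duty come from the commit; the fallback behavior is assumed):

```python
import json
from pydantic import BaseModel

def _copy_and_serialize_state(state: BaseModel | dict) -> dict:
    """Produce a JSON-safe dict copy of flow state for event emission."""
    if isinstance(state, BaseModel):
        try:
            return json.loads(state.model_dump_json())
        except Exception:
            # Assumed fallback: degrade gracefully rather than fail emission.
            return {"state_repr": repr(state)}
    return json.loads(json.dumps(state, default=str))
```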

7e6171d5bc  fix: ensure lite agents course-correct on validation errors
* fix: ensure lite agents course-correct on validation errors
* chore: update cassettes and test expectations
* fix: ensure multiple guardrails propagate

e134e5305b  Gl/feat/a2a refactor (#3793)
* feat: agent metaclass, refactor a2a to wrappers
* feat: a2a schemas and utils
* chore: move agent class, update imports
* refactor: organize imports to avoid circularity, add a2a to console
* feat: pass response_model through call chain
* feat: add standard openapi spec serialization to tools and structured output
* feat: a2a events
* chore: add a2a to pyproject
* docs: minimal base for learn docs
* fix: adjust a2a conversation flow, allow llm to decide exit until max_retries
* fix: inject agent skills into initial prompt
* fix: format agent card as json in prompt
* refactor: simplify A2A agent prompt formatting and improve skill display
* chore: wide cleanup
* chore: cleanup logic, add auth cache, use json for messages in prompt
* chore: update docs
* fix: doc snippets formatting
* feat: optimize A2A agent card fetching and improve error reporting
* chore: move imports to top of file
* chore: refactor hasattr check
* chore: add httpx-auth, update lockfile
* feat: create base public api
* chore: cleanup modules, add docstrings, types
* fix: exclude extra fields in prompt
* chore: update docs
* tests: update to correct import
* chore: lint for ruff, add missing import
* fix: tweak openai streaming logic for response model
* tests: add reimport for test (x2)
* fix: don't set a2a attr if not set (x2)
* chore: update cassettes
* tests: fix tests
* fix: use instructor and don't pass response_format for litellm
* chore: consolidate event listeners, add typing
* fix: address race condition in test, update cassettes
* tests: add correct mocks, rerun cassette for json
* tests: update cassette
* chore: regenerate cassette after new run
* fix: make token manager access-safe (x2)
* merge
* chore: update test and cassette for output pydantic
* fix: tweak to disallow deadlock
* chore: linter
* fix: adjust event ordering for threading
* fix: use conditional for batch check
* tests: tweak for emission
* tests: simplify api + event check
* fix: ensure non-function calling llms see json formatted string
* tests: tweak message comparison
* fix: use internal instructor for litellm structured responses

---------

Co-authored-by: Mike Plachta <mike@crewai.com>

08e15ab267  fix: update default LLM model and improve error logging in LLM utilities (#3785)
* fix: update default LLM model and improve error logging in LLM utilities
  - Updated the default LLM model from "gpt-4o-mini" to "gpt-4.1-mini" for better performance.
  - Enhanced error logging in the LLM utilities to use logger.error instead of logger.debug, so errors are properly reported and raised.
  - Added tests covering a missing OpenAI API key and an unavailable Anthropic dependency, improving robustness of LLM creation.
* fix: update test for default LLM model usage
  - Refactored test_create_llm_with_none_uses_default_model to use the imported DEFAULT_LLM_MODEL constant instead of a hardcoded string, so the test always asserts the current default.
* change default model to gpt-4.1-mini
* change default model to use the default constant

a850813f2b  feat: enhance InternalInstructor to support multiple LLM providers (#3767)
* feat: enhance InternalInstructor to support multiple LLM providers
  - Updated InternalInstructor to conditionally create an instructor client based on the LLM provider.
  - Introduced a new _create_instructor_client method that handles client creation using the modern from_provider pattern.
  - Added functionality to extract the provider from the LLM model name.
  - Implemented tests for InternalInstructor with OpenAI, Anthropic, Gemini, and Azure, covering integration and error handling.
* fix test
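The from_provider pattern the commit mentions looks roughly like this in the instructor library (the model string is illustrative, and exact client usage may vary by instructor version):

```python
import instructor
from pydantic import BaseModel

class Answer(BaseModel):
    text: str

# Provider is inferred from the "provider/model" string.
client = instructor.from_provider("openai/gpt-4o-mini")
answer = client.chat.completions.create(
    response_model=Answer,
    messages=[{"role": "user", "content": "Say hello."}],
)
print(answer.text)
```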

d1343b96ed  Release/v1.0.0 (#3618)
* feat: add `apps` & `actions` attributes to Agent (#3504)
* feat: add app attributes to Agent
* feat: add actions attribute to Agent
* chore: resolve linter issues
* refactor: merge the apps and actions parameters into a single one
* fix: remove unnecessary print
* feat: logging error when CrewaiPlatformTools fails
* chore: export CrewaiPlatformTools directly from crewai_tools
* style: resolve linter issues
* test: fix broken tests
* style: solve linter issues
* fix: fix broken test
* feat: monorepo restructure and test/ci updates
- Add crewai workspace member
- Fix vcr cassette paths and restore test dirs
- Resolve ci failures and update linter/pytest rules
* chore: update python version to 3.13 and package metadata
* feat: add crewai-tools workspace and fix tests/dependencies
* feat: add crewai-tools workspace structure
* Squashed 'temp-crewai-tools/' content from commit 9bae5633
git-subtree-dir: temp-crewai-tools
git-subtree-split: 9bae56339096cb70f03873e600192bd2cd207ac9
* feat: configure crewai-tools workspace package with dependencies
* fix: apply ruff auto-formatting to crewai-tools code
* chore: update lockfile
* fix: don't allow tool tests yet
* fix: comment out extra pytest flags for now
* fix: remove conflicting conftest.py from crewai-tools tests
* fix: resolve dependency conflicts and test issues
- Pin vcrpy to 7.0.0 to fix pytest-recording compatibility
- Comment out types-requests to resolve urllib3 conflict
- Update requests requirement in crewai-tools to >=2.32.0
* chore: update CI workflows and docs for monorepo structure
* chore: update CI workflows and docs for monorepo structure
* fix: actions syntax
* chore: ci publish and pin versions
* fix: add permission to action
* chore: bump version to 1.0.0a1 across all packages
- Updated version to 1.0.0a1 in pyproject.toml for crewai and crewai-tools
- Adjusted version in __init__.py files for consistency
* WIP: v1 docs (#3626)
(cherry picked from commit d46e20fa09bcd2f5916282f5553ddeb7183bd92c)
* docs: parity for all translations
* docs: full name of acronym AMP
* docs: fix lingering unused code
* docs: expand contextual options in docs.json
* docs: add contextual action to request feature on GitHub (#3635)
* chore: apply linting fixes to crewai-tools
* feat: add required env var validation for brightdata
Co-authored-by: Greyson Lalonde <greyson.r.lalonde@gmail.com>
* fix: handle properly anyOf oneOf allOf schema's props
Co-authored-by: Greyson Lalonde <greyson.r.lalonde@gmail.com>
* feat: bump version to 1.0.0a2
* Lorenze/native inference sdks (#3619)
* ruff linted
* using native sdks with litellm fallback
* drop exa
* drop print on completion
* Refactor LLM and utility functions for type consistency
- Updated `max_tokens` parameter in `LLM` class to accept `float` in addition to `int`.
- Modified `create_llm` function to ensure consistent type hints and return types, now returning `LLM | BaseLLM | None`.
- Adjusted type hints for various parameters in `create_llm` and `_llm_via_environment_or_fallback` functions for improved clarity and type safety.
- Enhanced test cases to reflect changes in type handling and ensure proper instantiation of LLM instances.
* fix agent_tests
* fix litellm tests and usagemetrics fix
* drop print
* Refactor LLM event handling and improve test coverage
- Removed commented-out event emission for LLM call failures in `llm.py`.
- Added `from_agent` parameter to `CrewAgentExecutor` for better context in LLM responses.
- Enhanced test for LLM call failure to simulate OpenAI API failure and updated assertions for clarity.
- Updated agent and task ID assertions in tests to ensure they are consistently treated as strings.
* fix test_converter
* fixed tests/agents/test_agent.py
* Refactor LLM context length exception handling and improve provider integration
- Renamed `LLMContextLengthExceededException` to `LLMContextLengthExceededExceptionError` for clarity and consistency.
- Updated LLM class to pass the provider parameter correctly during initialization.
- Enhanced error handling in various LLM provider implementations to raise the new exception type.
- Adjusted tests to reflect the updated exception name and ensure proper error handling in context length scenarios.
* Enhance LLM context window handling across providers
- Introduced CONTEXT_WINDOW_USAGE_RATIO to adjust context window sizes dynamically for Anthropic, Azure, Gemini, and OpenAI LLMs.
- Added validation for context window sizes in Azure and Gemini providers to ensure they fall within acceptable limits.
- Updated context window size calculations to use the new ratio, improving consistency and adaptability across different models.
- Removed hardcoded context window sizes in favor of ratio-based calculations for better flexibility (a small sketch of this idea appears at the end of this section).
* fix test agent again
* fix test agent
* feat: add native LLM providers for Anthropic, Azure, and Gemini
- Introduced new completion implementations for Anthropic, Azure, and Gemini, integrating their respective SDKs.
- Added utility functions for tool validation and extraction to support function calling across LLM providers.
- Enhanced context window management and token usage extraction for each provider.
- Created a common utility module for shared functionality among LLM providers.
* chore: update dependencies and improve context management
- Removed direct dependency on `litellm` from the main dependencies and added it under extras for better modularity.
- Updated the `litellm` dependency specification to allow for greater flexibility in versioning.
- Refactored context length exception handling across various LLM providers to use a consistent error class.
- Enhanced platform-specific dependency markers for NVIDIA packages to ensure compatibility across different systems.
* refactor(tests): update LLM instantiation to include is_litellm flag in test cases
- Modified multiple test cases in test_llm.py to set the is_litellm parameter to True when instantiating the LLM class.
- This change ensures that the tests are aligned with the latest LLM configuration requirements and improves consistency across test scenarios.
- Adjusted relevant assertions and comments to reflect the updated LLM behavior.
* linter
* linted
* revert constants
* fix(tests): correct type hint in expected model description
- Updated the expected description in the test_generate_model_description_dict_field function to use 'Dict' instead of 'dict' for consistency with type hinting conventions.
- This change ensures that the test accurately reflects the expected output format for model descriptions.
* refactor(llm): enhance LLM instantiation and error handling
- Updated the LLM class to include validation for the model parameter, ensuring it is a non-empty string.
- Improved error handling by logging warnings when the native SDK fails, allowing for a fallback to LiteLLM.
- Adjusted the instantiation of LLM in test cases to consistently include the is_litellm flag, aligning with recent changes in LLM configuration.
- Modified relevant tests to reflect these updates, ensuring better coverage and accuracy in testing scenarios.
* fixed test
* refactor(llm): enhance token usage tracking and add copy methods
- Updated the LLM class to track token usage and log callbacks in streaming mode, improving monitoring capabilities.
- Introduced shallow and deep copy methods for the LLM instance, allowing for better management of LLM configurations and parameters.
- Adjusted test cases to instantiate LLM with the is_litellm flag, ensuring alignment with recent changes in LLM configuration.
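The copy support might look roughly like this minimal sketch (the real class carries far more state; `params` is a stand-in):

```python
import copy

class LLM:
    def __init__(self, model: str, params: dict | None = None) -> None:
        self.model = model
        self.params = params or {}

    def __copy__(self) -> "LLM":
        cloned = LLM(self.model)
        cloned.params = self.params  # shallow: shares the params dict
        return cloned

    def __deepcopy__(self, memo: dict) -> "LLM":
        cloned = LLM(self.model)
        cloned.params = copy.deepcopy(self.params, memo)  # fully independent
        return cloned
```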
* refactor(tests): reorganize imports and enhance error messages in test cases
- Cleaned up import statements in test_crew.py for better organization and readability.
- Enhanced error messages in test cases to use `re.escape` for improved regex matching, ensuring more robust error handling.
- Adjusted comments for clarity and consistency across test scenarios.
- Ensured that all necessary modules are imported correctly to avoid potential runtime issues.
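A small illustrative test showing the `re.escape` pattern the commit adopts; `pytest.raises(match=...)` treats its argument as a regex, so literal messages with brackets or parentheses must be escaped:

```python
import re

import pytest

def test_rejects_bad_config():
    # Hypothetical message containing regex metacharacters.
    message = "Invalid value: expected dict[str, Any] (got list)"
    with pytest.raises(ValueError, match=re.escape(message)):
        raise ValueError(message)
```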
* feat: add base devtooling
* fix: ensure dep refs are updated for devtools
* fix: allow pre-release
* feat: allow release after tag
* feat: bump versions to 1.0.0a3
Co-authored-by: Greyson LaLonde <greyson.r.lalonde@gmail.com>
* fix: match tag and release title, ignore devtools build for pypi
* fix: allow failed pypi publish
* feat: introduce trigger listing and execution commands for local development (#3643)
* chore: exclude tests from ruff linting
* chore: exclude tests from GitHub Actions linter
* fix: replace print statements with logger in agent and memory handling
* chore: add noqa for intentional print in printer utility
* fix: resolve linting errors across codebase
* feat: update docs with new approach to consume Platform Actions (#3675)
* fix: remove duplicate line and add explicit env var
* feat: bump versions to 1.0.0a4 (#3686)
* Update triggers docs (#3678)
* docs: introduce triggers list & triggers run command
* docs: add KO triggers docs
* docs: ensure CREWAI_PLATFORM_INTEGRATION_TOKEN is mentioned on docs (#3687)
* Lorenze/bedrock llm (#3693)
* feat: add AWS Bedrock support and update dependencies
- Introduced BedrockCompletion class for AWS Bedrock integration in LLM.
- Added boto3 as a new dependency in both pyproject.toml and uv.lock.
- Updated LLM class to support Bedrock provider.
- Created new files for Bedrock provider implementation.
* using converse api
* converse
* linted
* refactor: update BedrockCompletion class to improve parameter handling
- Changed max_tokens from a fixed integer to an optional integer.
- Simplified model ID assignment by removing the inference profile mapping method.
- Cleaned up comments and unnecessary code related to tool specifications and model-specific parameters.
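For reference, a minimal call against the Converse API the commits above switch to (model ID and region are placeholders):

```python
import boto3

client = boto3.client("bedrock-runtime", region_name="us-east-1")

response = client.converse(
    modelId="anthropic.claude-3-5-sonnet-20240620-v1:0",
    messages=[{"role": "user", "content": [{"text": "Hello, Bedrock"}]}],
    inferenceConfig={"maxTokens": 512, "temperature": 0.7},
)
print(response["output"]["message"]["content"][0]["text"])
```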
* feat: improve event bus thread safety and async support
Add thread-safe, async-compatible event bus with read–write locking and
handler dependency ordering. Remove blinker dependency and implement
direct dispatch. Improve type safety, error handling, and deterministic
event synchronization.
Refactor tests to auto-wait for async handlers, ensure clean teardown,
and add comprehensive concurrency coverage. Replace thread-local state
in AgentEvaluator with instance-based locking for correct cross-thread
access. Enhance tracing reliability and event finalization.
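A classic read–write lock like the one described can be sketched in a few lines. This is a generic textbook implementation, not the project's actual code:

```python
import threading

class ReadWriteLock:
    """Minimal read–write lock: many concurrent readers, exclusive writers."""

    def __init__(self) -> None:
        self._readers = 0
        self._readers_lock = threading.Lock()
        self._write_lock = threading.Lock()

    def acquire_read(self) -> None:
        with self._readers_lock:
            self._readers += 1
            if self._readers == 1:
                self._write_lock.acquire()  # first reader blocks writers

    def release_read(self) -> None:
        with self._readers_lock:
            self._readers -= 1
            if self._readers == 0:
                self._write_lock.release()  # last reader unblocks writers

    def acquire_write(self) -> None:
        self._write_lock.acquire()

    def release_write(self) -> None:
        self._write_lock.release()
```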
* feat: enhance OpenAICompletion class with additional client parameters (#3701)
* feat: enhance OpenAICompletion class with additional client parameters
- Added support for default_headers, default_query, and client_params in the OpenAICompletion class.
- Refactored client initialization to use a dedicated method for client parameter retrieval.
- Introduced new test cases to validate the correct usage of OpenAICompletion with various parameters.
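Illustratively, the added parameters map onto the OpenAI SDK's client constructor like this (header and query values are placeholders):

```python
from openai import OpenAI

client = OpenAI(
    api_key="sk-...",
    default_headers={"X-Request-Source": "crewai"},   # sent with every request
    default_query={"api-version": "2024-02-01"},      # appended to every URL
)
```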
* fix: correct test case for unsupported OpenAI model
- Updated the test_openai.py to ensure that the LLM instance is created before calling the method, maintaining proper error handling for unsupported models.
- This change ensures that the test accurately checks for the NotFoundError when an invalid model is specified.
* fix: enhance error handling in OpenAICompletion class
- Added specific exception handling for NotFoundError and APIConnectionError in the OpenAICompletion class to provide clearer error messages and improve logging.
- Updated the test case for unsupported models to ensure it raises a ValueError with the appropriate message when a non-existent model is specified.
- This change improves the robustness of the OpenAI API integration and enhances the clarity of error reporting.
* fix: improve test for unsupported OpenAI model handling
- Refactored the test case in test_openai.py to create the LLM instance after mocking the OpenAI client, ensuring proper error handling for unsupported models.
- This change enhances the clarity of the test by accurately checking for ValueError when a non-existent model is specified, aligning with recent improvements in error handling for the OpenAICompletion class.
* feat: bump versions to 1.0.0b1 (#3706)
* Lorenze/tools drop litellm (#3710)
* completely drop litellm and correctly pass config for qdrant
* feat: add support for additional embedding models in EmbeddingService
- Expanded the list of supported embedding models to include Google Vertex, Hugging Face, Jina, Ollama, OpenAI, Roboflow, Watson X, custom embeddings, Sentence Transformers, Text2Vec, OpenClip, and Instructor.
- This enhancement improves the versatility of the EmbeddingService by allowing integration with a wider range of embedding providers.
* fix: update collection parameter handling in CrewAIRagAdapter
- Changed the condition for setting vectors_config in the CrewAIRagAdapter to check for QdrantConfig instance instead of using hasattr. This improves type safety and ensures proper configuration handling for Qdrant integration.
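A small sketch of why the `isinstance` check is preferable to `hasattr` here; `QdrantConfig` is reduced to a stand-in dataclass:

```python
from dataclasses import dataclass
from typing import Any

@dataclass
class QdrantConfig:  # stand-in for the real config class
    vectors_config: dict[str, Any]

def resolve_vectors_config(config: object) -> dict[str, Any] | None:
    # isinstance is safer than hasattr: it rejects unrelated objects that
    # merely happen to expose a vectors_config attribute.
    if isinstance(config, QdrantConfig):
        return config.vectors_config
    return None
```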
* moved stagehand as optional dep (#3712)
* feat: bump versions to 1.0.0b2 (#3713)
* feat: enhance AnthropicCompletion class with additional client parame… (#3707)
* feat: enhance AnthropicCompletion class with additional client parameters and tool handling
- Added support for client_params in the AnthropicCompletion class to allow for additional client configuration.
- Refactored client initialization to use a dedicated method for retrieving client parameters.
- Implemented a new method to handle tool use conversation flow, ensuring proper execution and response handling.
- Introduced comprehensive test cases to validate the functionality of the AnthropicCompletion class, including tool use scenarios and parameter handling.
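A hedged sketch of the tool-use conversation flow described above, using the public Anthropic SDK shapes; the tool, the model ID, and the tool's "execution" are placeholders:

```python
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

tools = [{
    "name": "get_weather",  # placeholder tool
    "description": "Get the current weather for a city.",
    "input_schema": {
        "type": "object",
        "properties": {"city": {"type": "string"}},
        "required": ["city"],
    },
}]

messages = [{"role": "user", "content": "What's the weather in Paris?"}]
response = client.messages.create(
    model="claude-3-5-sonnet-20240620", max_tokens=1024,
    messages=messages, tools=tools,
)

# If the model asked for a tool, execute it and feed the result back.
tool_use = next((b for b in response.content if b.type == "tool_use"), None)
if tool_use is not None:
    result = f"Sunny in {tool_use.input['city']}"  # placeholder execution
    messages.append({"role": "assistant", "content": response.content})
    messages.append({
        "role": "user",
        "content": [{
            "type": "tool_result",
            "tool_use_id": tool_use.id,
            "content": result,
        }],
    })
    response = client.messages.create(
        model="claude-3-5-sonnet-20240620", max_tokens=1024,
        messages=messages, tools=tools,
    )
```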
* drop print statements
* test: add fixture to mock ANTHROPIC_API_KEY for tests
- Introduced a pytest fixture to automatically mock the ANTHROPIC_API_KEY environment variable for all tests in the test_anthropic.py module.
- This change ensures that tests can run without requiring a real API key, improving test isolation and reliability.
* refactor: streamline streaming message handling in AnthropicCompletion class
- Removed the 'stream' parameter from the API call as it is set internally by the SDK.
- Simplified the handling of tool use events and response construction by extracting token usage from the final message.
- Enhanced the flow for managing tool use conversation, ensuring proper integration with the streaming API response.
* fix streaming here too
* fix: improve error handling in tool conversion for AnthropicCompletion class
- Enhanced exception handling during tool conversion by catching KeyError and ValueError.
- Added logging for conversion errors to aid in debugging and maintain robustness in tool integration.
* feat: enhance GeminiCompletion class with client parameter support (#3717)
* feat: enhance GeminiCompletion class with client parameter support
- Added support for client_params in the GeminiCompletion class to allow for additional client configuration.
- Refactored client initialization into a dedicated method for improved parameter handling.
- Introduced a new method to retrieve client parameters, ensuring compatibility with the base class.
- Enhanced error handling during client initialization to provide clearer messages for missing configuration.
- Updated documentation to reflect the changes in client parameter usage.
* add optional dependencies
* refactor: update test fixture to mock GOOGLE_API_KEY
- Renamed the fixture from `mock_anthropic_api_key` to `mock_google_api_key` to reflect the change in the environment variable being mocked.
- This update ensures that all tests in the module can run with a mocked GOOGLE_API_KEY, improving test isolation and reliability.
* fix tests
* feat: enhance BedrockCompletion class with advanced features
* feat: enhance BedrockCompletion class with advanced features and error handling
- Added support for guardrail configuration, additional model request fields, and custom response field paths in the BedrockCompletion class.
- Improved error handling for AWS exceptions and added token usage tracking with stop reason logging.
- Enhanced streaming response handling with comprehensive event management, including tool use and content block processing.
- Updated documentation to reflect new features and initialization parameters.
- Introduced a new test suite for BedrockCompletion to validate functionality and ensure robust integration with AWS Bedrock APIs.
* chore: add boto typing
* fix: use typing_extensions.Required for Python 3.10 compatibility
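The compatibility fix boils down to importing `Required` from `typing_extensions`, since it only landed in `typing` in Python 3.11. The TypedDict below is a hypothetical example:

```python
from typing_extensions import Required, TypedDict

class ConverseRequest(TypedDict, total=False):
    modelId: Required[str]  # must always be present despite total=False
    inferenceConfig: dict   # optional
```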
---------
Co-authored-by: Greyson Lalonde <greyson.r.lalonde@gmail.com>
* feat: azure native tests
* feat: add Azure AI Inference support and related tests
- Introduced the `azure-ai-inference` package with version `1.0.0b9` and its dependencies in `uv.lock` and `pyproject.toml`.
- Added new test files for Azure LLM functionality, including tests for Azure completion and tool handling.
- Implemented comprehensive test cases to validate Azure-specific behavior and integration with the CrewAI framework.
- Enhanced the testing framework to mock Azure credentials and ensure proper isolation during tests.
* feat: enhance AzureCompletion class with Azure OpenAI support
- Added support for the Azure OpenAI endpoint in the AzureCompletion class, allowing for flexible endpoint configurations.
- Implemented endpoint validation and correction to ensure proper URL formats for Azure OpenAI deployments.
- Enhanced error handling to provide clearer messages for common HTTP errors, including authentication and rate limit issues.
- Updated tests to validate the new endpoint handling and error messaging, ensuring robust integration with Azure AI Inference.
- Refactored parameter preparation to conditionally include the model parameter based on the endpoint type.
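The endpoint validation and correction might look roughly like this sketch, under the assumption that Azure OpenAI deployments live at `/openai/deployments/{deployment}`; it is not the actual implementation:

```python
from urllib.parse import urlparse

def normalize_azure_endpoint(endpoint: str, deployment: str) -> str:
    """Ensure an Azure OpenAI endpoint carries the deployment path."""
    parsed = urlparse(endpoint)
    base = f"{parsed.scheme}://{parsed.netloc}"
    if "/openai/deployments/" not in parsed.path:
        return f"{base}/openai/deployments/{deployment}"
    return endpoint.rstrip("/")
```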
* refactor: convert project module to metaclass with full typing
* Lorenze/OpenAI base url backwards support (#3723)
* fix: enhance OpenAICompletion class base URL handling
- Updated the base URL assignment in the OpenAICompletion class to prefer the new `api_base` attribute and fall back to the `OPENAI_BASE_URL` environment variable when neither is set.
- Added `api_base` to the list of parameters in the OpenAICompletion class to ensure proper configuration and flexibility in API endpoint management.
* feat: enhance OpenAICompletion class with api_base support
- Added the `api_base` parameter to the OpenAICompletion class to allow for flexible API endpoint configuration.
- Updated the `_get_client_params` method to prioritize `base_url` over `api_base`, ensuring correct URL handling.
- Introduced comprehensive tests to validate the behavior of `api_base` and `base_url` in various scenarios, including environment variable fallback.
- Enhanced test coverage for client parameter retrieval, ensuring robust integration with the OpenAI API.
* fix: improve OpenAICompletion class configuration handling
- Added a debug print statement to log the client configuration parameters during initialization for better traceability.
- Updated the base URL assignment logic to ensure it defaults to None if no valid base URL is provided, enhancing robustness in API endpoint configuration.
- Refined the retrieval of the `api_base` environment variable to streamline the configuration process.
* drop print
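Taken together, the base-URL handling in this PR reduces to a single precedence rule, sketched below (parameter names follow the commits above):

```python
import os

def resolve_base_url(base_url: str | None, api_base: str | None) -> str | None:
    # Precedence: explicit base_url, then api_base, then the
    # OPENAI_BASE_URL environment variable; otherwise None.
    return base_url or api_base or os.environ.get("OPENAI_BASE_URL") or None
```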
* feat: improve native SDK import support (#3725)
* feat: add support for Anthropic provider and enhance logging
- Introduced the `anthropic` package with version `0.69.0` in `pyproject.toml` and `uv.lock`, allowing for integration with the Anthropic API.
- Updated logging in the LLM class to provide clearer error messages when importing native providers, enhancing debugging capabilities.
- Improved error handling in the AnthropicCompletion class to guide users on installation via the updated error message format.
- Refactored import error handling in other provider classes to maintain consistency in error messaging and installation instructions.
* feat: enhance LLM support with Bedrock provider and update dependencies
- Added support for the `bedrock` provider in the LLM class, allowing integration with AWS Bedrock APIs.
- Updated `uv.lock` to replace `boto3` with `bedrock` in the dependencies, reflecting the new provider structure.
- Introduced `SUPPORTED_NATIVE_PROVIDERS` to include `bedrock` and ensure proper error handling when instantiating native providers.
- Enhanced error handling in the LLM class to raise informative errors when native provider instantiation fails.
- Added tests to validate the behavior of the new Bedrock provider and ensure fallback mechanisms work correctly for unsupported providers.
* test: update native provider fallback tests to expect ImportError
* adjust the test with the expected behavior - raising ImportError
* this is expecting the litellm format; all gemini native tests are in test_google.py
---------
Co-authored-by: Greyson LaLonde <greyson.r.lalonde@gmail.com>
* fix: remove stdout prints, improve test determinism, and update trace handling
Removed `print` statements from the `LLMStreamChunkEvent` handler to prevent
LLM response chunks from being written directly to stdout. The listener now
only tracks chunks internally.
Fixes #3715
Added explicit return statements for trace-related tests.
Updated cassette for `test_failed_evaluation` to reflect new behavior where
an empty trace dict is used instead of returning early.
Ensured deterministic cleanup order in test fixtures by making
`clear_event_bus_handlers` depend on `setup_test_environment`. This guarantees
event bus shutdown and file handle cleanup occur before temporary directory
deletion, resolving intermittent “Directory not empty” errors in CI.
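The fixture-ordering trick relies on pytest tearing down dependent fixtures before their dependencies; a minimal sketch with placeholder fixture bodies:

```python
import pytest

@pytest.fixture
def setup_test_environment(tmp_path):
    yield tmp_path
    # Temporary-directory cleanup runs last, after dependents tear down.

@pytest.fixture
def clear_event_bus_handlers(setup_test_environment):
    # Depending on setup_test_environment guarantees this fixture's teardown
    # runs first, so file handles close before the directory is removed.
    yield
    # ... shut down the event bus and release file handles here ...
```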
* chore: remove lib/crewai exclusion from pre-commit hooks
* feat: enhance task guardrail functionality and validation
* feat: enhance task guardrail functionality and validation
- Introduced support for multiple guardrails in the Task class, allowing for sequential processing of guardrails.
- Added a new `guardrails` field to the Task model to accept a list of callable guardrails or string descriptions.
- Implemented validation to ensure guardrails are processed correctly, including handling of retries and error messages.
- Enhanced the `_invoke_guardrail_function` method to manage guardrail execution and integrate with existing task output processing.
- Updated tests to cover various scenarios involving multiple guardrails, including success, failure, and retry mechanisms.
This update improves the flexibility and robustness of task execution by allowing for more complex validation scenarios.
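A hedged usage sketch of the new `guardrails` field; the import path and the `(bool, value)` guardrail return convention are assumptions drawn from the description:

```python
from crewai import Task  # assumed public import path

def no_profanity(output):
    # Guardrails are assumed to return (passed, value_or_error_message).
    if "darn" in output.raw:
        return (False, "Remove profanity from the summary")
    return (True, output)

task = Task(
    description="Summarize the report",
    expected_output="A three-sentence summary",
    guardrails=[no_profanity, "Ensure the summary cites its sources"],
)
```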
* refactor: enhance guardrail type handling in Task model
- Updated the Task class to improve guardrail type definitions, introducing GuardrailType and GuardrailsType for better clarity and type safety.
- Simplified the validation logic for guardrails, ensuring that both single and multiple guardrails are processed correctly.
- Enhanced error messages for guardrail validation to provide clearer feedback when incorrect types are provided.
- This refactor improves the maintainability and robustness of task execution by standardizing guardrail handling.
* feat: implement per-guardrail retry tracking in Task model
- Introduced a new private attribute `_guardrail_retry_counts` to the Task class for tracking retry attempts on a per-guardrail basis.
- Updated the guardrail processing logic to utilize the new retry tracking, allowing for independent retry counts for each guardrail.
- Enhanced error handling to provide clearer feedback when guardrails fail validation after exceeding retry limits.
- Modified existing tests to validate the new retry tracking behavior, ensuring accurate assertions on guardrail retries.
This update improves the robustness and flexibility of task execution by allowing for more granular control over guardrail validation and retry mechanisms.
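Per-guardrail retry tracking reduces to keeping an independent counter per guardrail, roughly as below; apart from `_guardrail_retry_counts`, the names are invented for illustration:

```python
from collections import defaultdict

# One retry counter per guardrail, keyed by its position in the list.
_guardrail_retry_counts: dict[int, int] = defaultdict(int)

def should_retry(guardrail_index: int, max_retries: int) -> bool:
    """Record a failed attempt and report whether another retry is allowed."""
    _guardrail_retry_counts[guardrail_index] += 1
    return _guardrail_retry_counts[guardrail_index] <= max_retries
```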
* chore: 1.0.0b3 bump (#3734)
* chore: full ruff and mypy
- Improved linting, pre-commit setup, and internal architecture.
- Configured Ruff to respect .gitignore, added stricter rules, and introduced a lock pre-commit hook with virtualenv activation.
- Fixed type shadowing in EXASearchTool using a `type_` alias to avoid PEP 563 conflicts and resolved circular imports in the agent executor and guardrail modules.
- Removed agent-ops attributes, deprecated the watson alias, and dropped crewai-enterprise tools with corresponding test updates.
- Refactored cache and memoization for thread safety and cleaned up structured output adapters and related logic.
* New MCP DSL (#3738)
* Adding MCP implementation
* New tests for MCP implementation
* fix tests
* update docs
* Revert "New tests for MCP implementation"
This reverts commit
|