* chore: update crewai-tools dependency to version 0.60.0
- Updated the `pyproject.toml` and `uv.lock` files to reflect the new version of `crewai-tools`.
- This change ensures compatibility with the latest features and improvements in the tools package.
* chore: bump CrewAI version to 0.157.0
- Updated the version in `__init__.py` to reflect the new release.
- Adjusted dependency versions in `pyproject.toml` files for crew, flow, and tool templates to ensure compatibility with the latest features and improvements in CrewAI.
- This change maintains consistency across the project and prepares for upcoming enhancements.
* initial setup
* feat: enhance CrewKickoffCompletedEvent to include total token usage
- Added total_tokens attribute to CrewKickoffCompletedEvent for better tracking of token usage during crew execution.
- Updated Crew class to emit total token usage upon kickoff completion.
- Removed obsolete context handler and execution context tracker files to streamline event handling.
* cleanup
* remove print statements for loggers
* feat: add CrewAI base URL and improve logging in tracing
- Introduced `CREWAI_BASE_URL` constant for easy access to the CrewAI application URL.
- Replaced print statements with logging in the `TraceSender` class for better error tracking.
- Enhanced the `TraceBatchManager` to provide default values for flow names and removed unnecessary comments.
- Implemented singleton pattern in `TraceCollectionListener` to ensure a single instance is used.
- Added a new test case to verify that the trace listener correctly collects events during crew execution.
* clear
* fix: update datetime serialization in tracing interfaces
- Removed the 'Z' suffix from datetime serialization in TraceSender and TraceEvent to ensure consistent ISO format.
- Added new test cases to validate the functionality of the TraceBatchManager and event collection during crew execution.
- Introduced fixtures to clear event bus listeners before each test to maintain isolation.
* test: enhance tracing tests with mock authentication token
- Added a mock authentication token to the tracing tests to ensure proper setup and event collection.
- Updated test methods to include the mock token, improving isolation and reliability of tests related to the TraceListener and BatchManager.
- Ensured that the tests validate the correct behavior of event collection during crew execution.
* test: refactor tracing tests to improve mock usage
- Moved the mock authentication token patching inside the test class to enhance readability and maintainability.
- Updated test methods to remove unnecessary mock parameters, streamlining the test signatures.
- Ensured that the tests continue to validate the correct behavior of event collection during crew execution while improving isolation.
* test: refactor tracing tests for improved mock usage and consistency
- Moved mock authentication token patching into individual test methods for better clarity and maintainability.
- Corrected the backstory string in the `Agent` instantiation to fix a typo.
- Ensured that all tests validate the correct behavior of event collection during crew execution while enhancing isolation and readability.
* test: add new tracing test for disabled trace listener
- Introduced a new test case to verify that the trace listener does not make HTTP calls when tracing is disabled via environment variables.
- Enhanced existing tests by mocking PlusAPI HTTP calls to avoid authentication and network requests, improving test isolation and reliability.
- Updated the test setup to ensure proper initialization of the trace listener and its components during crew execution.
* refactor: update LLM class to utilize new completion function and improve cost calculation
- Replaced direct calls to `litellm.completion` with a new import for better clarity and maintainability.
- Introduced a new optional attribute `completion_cost` in the LLM class to track the cost of completions.
- Updated the handling of completion responses to ensure accurate cost calculations and improved error handling.
- Removed outdated test cassettes for gemini models to streamline test suite and avoid redundancy.
- Enhanced existing tests to reflect changes in the LLM class and ensure proper functionality.
* test: enhance tracing tests with additional request and response scenarios
- Added new test cases to validate the behavior of the trace listener and batch manager when handling 404 responses from the tracing API.
- Updated existing test cassettes to include detailed request and response structures, ensuring comprehensive coverage of edge cases.
- Improved mock setup to avoid unnecessary network calls and enhance test reliability.
- Ensured that the tests validate the correct behavior of event collection during crew execution, particularly in scenarios where the tracing service is unavailable.
* feat: enable conditional tracing based on environment variable
- Added support for enabling or disabling the trace listener based on the `CREWAI_TRACING_ENABLED` environment variable.
- Updated the `Crew` class to conditionally set up the trace listener only when tracing is enabled, improving performance and resource management.
- Refactored test cases to ensure proper cleanup of event bus listeners before and after each test, enhancing test reliability and isolation.
- Improved mock setup in tracing tests to validate the behavior of the trace listener when tracing is disabled.
* fix: downgrade litellm version from 1.74.9 to 1.74.3
- Updated the `pyproject.toml` and `uv.lock` files to reflect the change in the `litellm` dependency version.
- This downgrade addresses compatibility issues and ensures stability in the project environment.
* refactor: improve tracing test setup by moving mock authentication token patching
- Removed the module-level patch for the authentication token and implemented a fixture to mock the token for all tests in the class, enhancing test isolation and readability.
- Updated the event bus clearing logic to ensure original handlers are restored after tests, improving reliability of the test environment.
- This refactor streamlines the test setup and ensures consistent behavior across tracing tests.
* test: enhance tracing test setup with comprehensive mock authentication
- Expanded the mock authentication token patching to cover all instances where `get_auth_token` is used across different modules, ensuring consistent behavior in tests.
- Introduced a new fixture to reset tracing singleton instances between tests, improving test isolation and reliability.
- This update enhances the overall robustness of the tracing tests by ensuring that all necessary components are properly mocked and reset, leading to more reliable test outcomes.
* just drop the test for now
* refactor: comment out completion-related code in LLM and LLM event classes
- Commented out the `completion` and `completion_cost` imports and their usage in the `LLM` class to prevent potential issues during execution.
- Updated the `LLMCallCompletedEvent` class to comment out the `response_cost` attribute, ensuring consistency with the changes in the LLM class.
- This refactor aims to streamline the code and prepare for future updates without affecting current functionality.
* refactor: update LLM response handling in LiteAgent
- Commented out the `response_cost` attribute in the LLM response handling to align with recent refactoring in the LLM class.
- This change aims to maintain consistency in the codebase and prepare for future updates without affecting current functionality.
* refactor: remove commented-out response cost attributes in LLM and LiteAgent
- Commented out the `response_cost` attribute in both the `LiteAgent` and `LLM` classes to maintain consistency with recent refactoring efforts.
- This change aligns with previous updates aimed at streamlining the codebase and preparing for future enhancements without impacting current functionality.
* bring back litellm upgrade version
* feat: support oauth2 config for authentication
* refactor: improve OAuth2 settings management
The CLI now supports seamless integration with other authentication providers, since the client_id, issue, domain are now manage by the user
* feat: support okta Device Authorization flow
* chore: resolve linter issues
* test: fix tests
* test: adding tests for auth providers
* test: fix broken test
* refator: adding WorkOS paramenters as default settings auth
* chore: improve oauth2 attributes description
* refactor: simplify WorkOS getting values
* fix: ensure Auth0 parameters is set when overrinding default auth provider
* chore: remove TODO Auth0 no longer provides default values
---------
Co-authored-by: Heitor Carvalho <heitor.scz@gmail.com>
- Updated `crewai-tools` dependency from `0.58.0` to `0.59.0` in `pyproject.toml` and `uv.lock`.
- Bumped the version of the CrewAI library from `0.150.0` to `0.152.0` in `__init__.py`.
- Updated dependency versions in CLI templates for crew, flow, and tool projects to reflect the new CrewAI version.
- Updated `crewai-tools` dependency from `0.55.0` to `0.58.0` in `pyproject.toml` and `uv.lock`.
- Added new packages `anthropic`, `browserbase`, `playwright`, `pyee`, and `stagehand` with their respective versions in `uv.lock`.
- Bumped the version of the CrewAI library from `0.148.0` to `0.150.0` in `__init__.py`.
- Updated dependency versions in CLI templates for crew, flow, and tool projects to reflect the new CrewAI version.
* Update CrewAI version to 0.148.0 in project templates and dependencies
* Update crewai-tools dependency to version 0.55.0 in pyproject.toml and uv.lock for improved functionality and performance.
- Bump crewAI version to 0.141.0 in __init__.py for alignment with updated dependencies.
- Update `crewai-tools` dependency version to 0.51.0 in pyproject.toml and related template files.
- Add new testing dependencies: pytest-split and pytest-xdist for improved test execution.
- Ensure compatibility with the latest package versions in uv.lock and template files.
* fix: clean up whitespace and update dependencies
* Removed unnecessary whitespace in multiple files for consistency.
* Updated `crewai-tools` dependency version to `0.49.0` in `pyproject.toml` and related template files.
* Bumped CrewAI version to `0.140.0` in `__init__.py` for alignment with updated dependencies.
* chore: update pyproject.toml to exclude documentation from build targets
* Added exclusions for the `docs` directory in both wheel and sdist build targets to streamline the build process and reduce unnecessary file inclusion.
* chore: update uv.lock for dependency resolution and Python version compatibility
* Incremented revision to 2.
* Updated resolution markers to include support for Python 3.13 and adjusted platform checks for better compatibility.
* Added new wheel URLs for zstandard version 0.23.0 to ensure availability across various platforms.
* chore: pin json-repair dependency version in pyproject.toml and uv.lock
* Updated json-repair dependency from a range to a specific version (0.25.2) for consistency and to avoid potential compatibility issues.
* Adjusted related entries in uv.lock to reflect the pinned version, ensuring alignment across project files.
* chore: pin agentops dependency version in pyproject.toml and uv.lock
* Updated agentops dependency from a range to a specific version (0.3.18) for consistency and to avoid potential compatibility issues.
* Adjusted related entries in uv.lock to reflect the pinned version, ensuring alignment across project files.
* test: enhance cache call assertions in crew tests
* Improved the test for cache hitting between agents by filtering mock calls to ensure they include the expected 'tool' and 'input' keywords.
* Added assertions to verify the number of cache calls and their expected arguments, enhancing the reliability of the test.
* Cleaned up whitespace and improved readability in various test cases for better maintainability.
When creating a Crew via the CLI and selecting the Azure provider, the generated .env file had environment variables in lowercase.
This commit ensures that all environment variables are written in uppercase.
* fix: normalize project names by stripping trailing slashes in crew creation
- Strip trailing slashes from project names in create_folder_structure
- Add comprehensive tests for trailing slash scenarios
- Fixes#3059
The issue occurred because trailing slashes in project names like 'hello/'
were directly incorporated into pyproject.toml, creating invalid package
names and script entries. This fix silently normalizes project names by
stripping trailing slashes before processing, maintaining backward
compatibility while fixing the invalid template generation.
Co-Authored-By: João <joao@crewai.com>
* trigger CI re-run to check for flaky test issue
Co-Authored-By: João <joao@crewai.com>
* fix: resolve circular import in CLI authentication module
- Move ToolCommand import to be local inside _poll_for_token method
- Update test mock to patch ToolCommand at correct location
- Resolves Python 3.11 test collection failure in CI
Co-Authored-By: João <joao@crewai.com>
* feat: add comprehensive class name validation for Python identifiers
- Ensure generated class names are always valid Python identifiers
- Handle edge cases: names starting with numbers, special characters, keywords, built-ins
- Add sanitization logic to remove invalid characters and prefix with 'Crew' when needed
- Add comprehensive test coverage for class name validation edge cases
- Addresses GitHub PR comment from lucasgomide about class name validity
Fixes include:
- Names starting with numbers: '123project' -> 'Crew123Project'
- Python built-ins: 'True' -> 'TrueCrew', 'False' -> 'FalseCrew'
- Special characters: 'hello@world' -> 'HelloWorld'
- Empty/whitespace: ' ' -> 'DefaultCrew'
- All generated class names pass isidentifier() and keyword checks
Co-Authored-By: João <joao@crewai.com>
* refactor: change class name validation to raise errors instead of generating defaults
- Remove default value generation (Crew prefix/suffix, DefaultCrew fallback)
- Raise ValueError with descriptive messages for invalid class names
- Update tests to expect validation errors instead of default corrections
- Addresses GitHub comment feedback from lucasgomide about strict validation
Co-Authored-By: João <joao@crewai.com>
* fix: add working directory safety checks to prevent test interference
Co-Authored-By: João <joao@crewai.com>
* fix: standardize working directory handling in tests to prevent corruption
Co-Authored-By: João <joao@crewai.com>
* fix: eliminate os.chdir() usage in tests to prevent working directory corruption
- Replace os.chdir() with parent_folder parameter for create_folder_structure tests
- Mock create_folder_structure directly for create_crew tests to avoid directory changes
- All 12 tests now pass locally without working directory corruption
- Should resolve the 103 failing tests in Python 3.12 CI
Co-Authored-By: João <joao@crewai.com>
* fix: remove unused os import to resolve lint failure
- Remove unused 'import os' statement from test_create_crew.py
- All tests still pass locally after removing unused import
- Should resolve F401 lint error in CI
Co-Authored-By: João <joao@crewai.com>
* feat: add folder name validation for Python module names
- Implement validation to ensure folder_name is valid Python identifier
- Check that folder names don't start with digits
- Validate folder names are not Python keywords
- Sanitize invalid characters from folder names
- Raise ValueError with descriptive messages for invalid cases
- Update tests to validate both folder and class name requirements
- Addresses GitHub comment requiring folder names to be valid Python module names
Co-Authored-By: João <joao@crewai.com>
* fix: correct folder name validation logic to match test expectations
- Fix validation regex to catch names starting with invalid characters like '@#/'
- Ensure validation properly raises ValueError for cases expected by tests
- Maintain support for valid cases like 'my.project/' -> 'myproject'
- Address lucasgomide's comment about valid Python module names
Co-Authored-By: João <joao@crewai.com>
---------
Co-authored-by: Devin AI <158243242+devin-ai-integration[bot]@users.noreply.github.com>
Co-authored-by: João <joao@crewai.com>
Co-authored-by: Lucas Gomide <lucaslg200@gmail.com>
* test: add tests to test get_crews
* feat: improve Crew search while resetting their memories
Some memories couldn't be reset due to their reliance on relative external sources like `PDFKnowledge`. This was caused by the need to run the reset memories command from the `src` directory, which could break when external files weren't accessible from that path.
This commit allows the reset command to be executed from the root of the project — the same location typically used to run a crew — improving compatibility and reducing friction.
* feat: skip cli/templates folder while looking for Crew
* refactor: use console.print instead of print
When running behind cloud-based security users are struggling to donwload LLM data from Github. Usually the following error is raised
```
SSL certificate verification failed: HTTPSConnectionPool(host='raw.githubusercontent.com', port=443): Max retries exceeded with url: /BerriAI/litellm/main/model_prices_and_context_window.json (Caused by SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1010)')))
Current CA bundle path: /usr/local/etc///.pem
```
This commit ensures the SSL config is beign provided while requesting data
- Bump CrewAI version from 0.126.0 to 0.130.0 in pyproject.toml and uv.lock.
- Update optional dependency 'crewai-tools' version from 0.46.0 to 0.47.1.
- Adjust dependency specifications in CLI templates to reflect the new version.
* docs: add organization management in our CLI docs
* feat: improve user feedback when user is not authenticated
* feat: improve logging about current organization while publishing/install a Tool
* feat: improve logging when Agent repository is not found during fetch
* fix linter offences
* test: fix auth token error
* feat: support to list, switch and see your current organization
* feat: store the current org after logged in
* feat: filtering agents, tools and their actions by organization_uuid if present
* fix linter offenses
* refactor: propagate the current org thought Header instead of params
* refactor: rename org column name to ID instead of Handle
---------
Co-authored-by: Tony Kipkemboi <iamtonykipkemboi@gmail.com>
* feat: add capability to see and expose public Tool classes
* feat: persist available Tools from repository on publish
* ci: ignore explictly templates from ruff check
Ruff only applies --exclude to files it discovers itself. So we have to skip manually the same files excluded from `ruff.toml`
* sytle: fix linter issues
* refactor: renaming available_tools_classes by available_exports
* feat: provide more context about exportable tools
* feat: allow to install a Tool from pypi
* test: fix tests
* feat: add env_vars attribute to BaseTool
* remove TODO: security check since we are handle that on enterprise side
* CLI command added
* Added reset agent knowledge function
* Reduced verbose
* Added test cases
* Added docs
* Llama test case failing
* Changed _reset_agent_knowledge function
* Fixed new line error
* Added docs
* fixed the new line error
* Refractored
* Uncommmented some test cases
* ruff check fixed
* fixed run type checks
* fixed run type checks
* fixed run type checks
* Made reset_fn callable by casting to silence run type checks
* Changed the reset_knowledge as it expects only list of knowledge
* Fixed typo in docs
* Refractored the memory_system
* Minor Changes
* fixed test case
* Fixed linting issues
* Network test cases failing
---------
Co-authored-by: Lucas Gomide <lucaslg200@gmail.com>
I used ai.dev as the alternate URL as it takes up less space but if this
is likely to confuse users we can use the long form.
Co-authored-by: Tony Kipkemboi <iamtonykipkemboi@gmail.com>
This commit updates the project version to 0.119.0 and modifies the required version of the `crewai-tools` dependency to 0.44.0 across various configuration files. Additionally, the version number is reflected in the `__init__.py` file and the CLI templates for crew, flow, and tool projects.
* fix: support to reset memories after changing Crew's embedder
The sources must not be added while initializing the Knowledge otherwise we could not reset it
* chore: improve reset memory feedback
Previously, even when no memories were actually erased, we logged that they had been. From now on, the log will specify which memory has been reset.
* feat: improve get_crew discovery from a single file
Crew instances can now be discovered from any function or method with a return type annotation of -> Crew, as well as from module-level attributes assigned to a Crew instance. Additionally, crews can be retrieved from within a Flow
* refactor: make add_sources a public method from Knowledge
Add `__init__.py` files to 20 directories to conform with Python package standards. This ensures directories are properly recognized as packages, enabling cleaner imports.
* added gpt4.1 models and gemini 2.0 and 2.5 models
* added flash model
* Updated test fun to all models
* Added Gemma3 test cases and passed all google test case
* added gemini 2.5 flash
* added gpt4.1 models and gemini 2.0 and 2.5 models
* added flash model
* Updated test fun to all models
* Added Gemma3 test cases and passed all google test case
* added gemini 2.5 flash
* added gpt4.1 models and gemini 2.0 and 2.5 models
* added flash model
* Updated test fun to all models
* Added Gemma3 test cases and passed all google test case
* added gemini 2.5 flash
* test: add missing cassettes
* test: ignore authorization key from gemini/gemma3 request
---------
Co-authored-by: Lucas Gomide <lucaslg200@gmail.com>
Co-authored-by: Lorenze Jay <63378463+lorenzejay@users.noreply.github.com>
* Adjust checking for callable crew object.
Changes back to how it was being done before.
Fixes#2307
* Fix specific memory reset errors.
When not initiated, the function should raise
the "memory system is not initialized" RuntimeError.
* Remove print statement
* Fixes test case
---------
Co-authored-by: Carlos Souza <carloshrsouza@gmail.com>
* Fix#2551: Add Huggingface to provider list in CLI
Co-Authored-By: Joe Moura <joao@crewai.com>
* Update Huggingface API key name to HF_TOKEN and remove base URL prompt
Co-Authored-By: Joe Moura <joao@crewai.com>
* Update Huggingface API key name to HF_TOKEN in documentation
Co-Authored-By: Joe Moura <joao@crewai.com>
* Fix import sorting in test_constants.py
Co-Authored-By: Joe Moura <joao@crewai.com>
* Fix import order in test_constants.py
Co-Authored-By: Joe Moura <joao@crewai.com>
* Fix import formatting in test_constants.py
Co-Authored-By: Joe Moura <joao@crewai.com>
* Skip failing tests in Python 3.11 due to VCR cassette issues
Co-Authored-By: Joe Moura <joao@crewai.com>
* Fix import order in knowledge_test.py
Co-Authored-By: Joe Moura <joao@crewai.com>
* Revert skip decorators to check if tests are flaky
Co-Authored-By: Joe Moura <joao@crewai.com>
* Restore skip decorators for tests with VCR cassette issues in Python 3.11
Co-Authored-By: Joe Moura <joao@crewai.com>
* revert skip pytest decorators
* Remove import sys and skip decorators from test files
Co-Authored-By: Joe Moura <joao@crewai.com>
---------
Co-authored-by: Devin AI <158243242+devin-ai-integration[bot]@users.noreply.github.com>
Co-authored-by: Joe Moura <joao@crewai.com>
Co-authored-by: Lucas Gomide <lucaslg200@gmail.com>
This commit resolves an issue in the crew template generator where the test()
function incorrectly uses 'openai_model_name' as a parameter name when calling
Crew.test(), while the actual implementation expects 'eval_llm'.
The mismatch causes a TypeError when users run the generated test command:
"Crew.test() got an unexpected keyword argument 'openai_model_name'"
This change ensures that templates generated with 'crewai create crew' will
produce code that aligns with the framework's API.
* Add support for custom LLM implementations
Co-Authored-By: Joe Moura <joao@crewai.com>
* Fix import sorting and type annotations
Co-Authored-By: Joe Moura <joao@crewai.com>
* Fix linting issues with import sorting
Co-Authored-By: Joe Moura <joao@crewai.com>
* Fix type errors in crew.py by updating tool-related methods to return List[BaseTool]
Co-Authored-By: Joe Moura <joao@crewai.com>
* Enhance custom LLM implementation with better error handling, documentation, and test coverage
Co-Authored-By: Joe Moura <joao@crewai.com>
* Refactor LLM module by extracting BaseLLM to a separate file
This commit moves the BaseLLM abstract base class from llm.py to a new file llms/base_llm.py to improve code organization. The changes include:
- Creating a new file src/crewai/llms/base_llm.py
- Moving the BaseLLM class to the new file
- Updating imports in __init__.py and llm.py to reflect the new location
- Updating test cases to use the new import path
The refactoring maintains the existing functionality while improving the project's module structure.
* Add AISuite LLM support and update dependencies
- Integrate AISuite as a new third-party LLM option
- Update pyproject.toml and uv.lock to include aisuite package
- Modify BaseLLM to support more flexible initialization
- Remove unnecessary LLM imports across multiple files
- Implement AISuiteLLM with basic chat completion functionality
* Update AISuiteLLM and LLM utility type handling
- Modify AISuiteLLM to support more flexible input types for messages
- Update type hints in AISuiteLLM to allow string or list of message dictionaries
- Enhance LLM utility function to support broader LLM type annotations
- Remove default `self.stop` attribute from BaseLLM initialization
* Update LLM imports and type hints across multiple files
- Modify imports in crew_chat.py to use LLM instead of BaseLLM
- Update type hints in llm_utils.py to use LLM type
- Add optional `stop` parameter to BaseLLM initialization
- Refactor type handling for LLM creation and usage
* Improve stop words handling in CrewAgentExecutor
- Add support for handling existing stop words in LLM configuration
- Ensure stop words are correctly merged and deduplicated
- Update type hints to support both LLM and BaseLLM types
* Remove abstract method set_callbacks from BaseLLM class
* Enhance CustomLLM and JWTAuthLLM initialization with model parameter
- Update CustomLLM to accept a model parameter during initialization
- Modify test cases to include the new model argument
- Ensure JWTAuthLLM and TimeoutHandlingLLM also utilize the model parameter in their constructors
- Update type hints in create_llm function to support both LLM and BaseLLM types
* Enhance create_llm function to support BaseLLM type
- Update the create_llm function to accept both LLM and BaseLLM instances
- Ensure compatibility with existing LLM handling logic
* Update type hint for initialize_chat_llm to support BaseLLM
- Modify the return type of initialize_chat_llm function to allow for both LLM and BaseLLM instances
- Ensure compatibility with recent changes in create_llm function
* Refactor AISuiteLLM to include tools parameter in completion methods
- Update the _prepare_completion_params method to accept an optional tools parameter
- Modify the chat completion method to utilize the new tools parameter for enhanced functionality
- Clean up print statements for better code clarity
* Remove unused tool_calls handling in AISuiteLLM chat completion method for cleaner code.
* Refactor Crew class and LLM hierarchy for improved type handling and code clarity
- Update Crew class methods to enhance readability with consistent formatting and type hints.
- Change LLM class to inherit from BaseLLM for better structure.
- Remove unnecessary type checks and streamline tool handling in CrewAgentExecutor.
- Adjust BaseLLM to provide default implementations for stop words and context window size methods.
- Clean up AISuiteLLM by removing unused methods related to stop words and context window size.
* Remove unused `stream` method from `BaseLLM` class to enhance code clarity and maintainability.
---------
Co-authored-by: Devin AI <158243242+devin-ai-integration[bot]@users.noreply.github.com>
Co-authored-by: Joe Moura <joao@crewai.com>
Co-authored-by: Lorenze Jay <lorenzejaytech@gmail.com>
Co-authored-by: João Moura <joaomdmoura@gmail.com>
Co-authored-by: Brandon Hancock (bhancock_ai) <109994880+bhancockio@users.noreply.github.com>