* Add support for custom LLM implementations
Co-Authored-By: Joe Moura <joao@crewai.com>
* Fix import sorting and type annotations
Co-Authored-By: Joe Moura <joao@crewai.com>
* Fix linting issues with import sorting
Co-Authored-By: Joe Moura <joao@crewai.com>
* Fix type errors in crew.py by updating tool-related methods to return List[BaseTool]
Co-Authored-By: Joe Moura <joao@crewai.com>
* Enhance custom LLM implementation with better error handling, documentation, and test coverage
Co-Authored-By: Joe Moura <joao@crewai.com>
* Refactor LLM module by extracting BaseLLM to a separate file
This commit moves the BaseLLM abstract base class from llm.py to a new file llms/base_llm.py to improve code organization. The changes include:
- Creating a new file src/crewai/llms/base_llm.py
- Moving the BaseLLM class to the new file
- Updating imports in __init__.py and llm.py to reflect the new location
- Updating test cases to use the new import path
The refactoring maintains the existing functionality while improving the project's module structure.
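A minimal sketch of the extracted base class and its new location, assuming the shape described in this log (the constructor parameters anticipate the `model`/`stop` changes in the commits below; the real `src/crewai/llms/base_llm.py` may differ):

```python
# Sketch only: illustrates src/crewai/llms/base_llm.py as described in this log.
from abc import ABC, abstractmethod
from typing import Any, Dict, List, Optional, Union


class BaseLLM(ABC):
    """Abstract base class that custom LLM implementations subclass."""

    def __init__(self, model: str, stop: Optional[List[str]] = None) -> None:
        self.model = model
        self.stop = stop or []

    @abstractmethod
    def call(
        self,
        messages: Union[str, List[Dict[str, str]]],
        callbacks: Optional[List[Any]] = None,
    ) -> str:
        """Return the completion text for the given messages."""
        ...
```

Imports that previously pointed at `crewai.llm` switch to `crewai.llms.base_llm`, with `__init__.py` and `llm.py` updated accordingly, as noted above.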
* Add AISuite LLM support and update dependencies
- Integrate AISuite as a new third-party LLM option
- Update pyproject.toml and uv.lock to include aisuite package
- Modify BaseLLM to support more flexible initialization
- Remove unnecessary LLM imports across multiple files
- Implement AISuiteLLM with basic chat completion functionality
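As a rough illustration of the new wrapper, a hedged sketch assuming aisuite's OpenAI-style client interface (`Client().chat.completions.create(...)`); the actual implementation and signatures may differ:

```python
# Illustrative sketch of the initial wrapper; not the verbatim crewAI code.
from typing import Any, Dict, List, Optional

import aisuite  # new dependency declared in pyproject.toml / uv.lock

from crewai.llms.base_llm import BaseLLM


class AISuiteLLM(BaseLLM):
    """Routes chat completions through aisuite's multi-provider client."""

    def __init__(self, model: str, stop: Optional[List[str]] = None) -> None:
        # aisuite addresses models as "provider:model", e.g. "openai:gpt-4o-mini".
        super().__init__(model=model, stop=stop)
        self.client = aisuite.Client()

    def call(
        self,
        messages: List[Dict[str, str]],
        callbacks: Optional[List[Any]] = None,
    ) -> str:
        response = self.client.chat.completions.create(
            model=self.model, messages=messages
        )
        return response.choices[0].message.content
```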
* Update AISuiteLLM and LLM utility type handling
- Modify AISuiteLLM to support more flexible input types for messages
- Update type hints in AISuiteLLM to allow string or list of message dictionaries
- Enhance LLM utility function to support broader LLM type annotations
- Remove default `self.stop` attribute from BaseLLM initialization
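The flexible input handling described above amounts to promoting a bare prompt string to a one-element chat message list; a self-contained sketch with a hypothetical helper name:

```python
# Minimal sketch of the message normalization implied above (hypothetical helper).
from typing import Dict, List, Union


def normalize_messages(
    messages: Union[str, List[Dict[str, str]]],
) -> List[Dict[str, str]]:
    """Promote a bare prompt string to a single-user-message chat list."""
    if isinstance(messages, str):
        return [{"role": "user", "content": messages}]
    return messages


assert normalize_messages("hi") == [{"role": "user", "content": "hi"}]
assert normalize_messages([{"role": "system", "content": "be brief"}]) == [
    {"role": "system", "content": "be brief"}
]
```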
* Update LLM imports and type hints across multiple files
- Modify imports in crew_chat.py to use LLM instead of BaseLLM
- Update type hints in llm_utils.py to use LLM type
- Add optional `stop` parameter to BaseLLM initialization
- Refactor type handling for LLM creation and usage
* Improve stop words handling in CrewAgentExecutor
- Add support for handling existing stop words in LLM configuration
- Ensure stop words are correctly merged and deduplicated
- Update type hints to support both LLM and BaseLLM types
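A minimal sketch of the merge-and-deduplicate behaviour, assuming the executor appends its own stop sequences to whatever the LLM was already configured with (the helper name and stop values are illustrative):

```python
# Hypothetical sketch of merging executor stop words with an existing LLM config.
from typing import List, Optional


def merge_stop_words(existing: Optional[List[str]], required: List[str]) -> List[str]:
    """Merge executor-required stop words into the LLM's existing list,
    preserving order and dropping duplicates."""
    merged: List[str] = []
    for word in (existing or []) + required:
        if word not in merged:
            merged.append(word)
    return merged


assert merge_stop_words(["END"], ["\nObservation:", "END"]) == ["END", "\nObservation:"]
```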
* Remove abstract method set_callbacks from BaseLLM class
* Enhance CustomLLM and JWTAuthLLM initialization with model parameter
- Update CustomLLM to accept a model parameter during initialization
- Modify test cases to include the new model argument
- Ensure JWTAuthLLM and TimeoutHandlingLLM also utilize the model parameter in their constructors
- Update type hints in create_llm function to support both LLM and BaseLLM types
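For illustration, the updated constructor pattern for the test helpers named above might look as follows; the class bodies and the `jwt_token` parameter are placeholders, not the actual test code:

```python
# Hypothetical test doubles showing the new required `model` argument.
from typing import Any, Dict, List, Optional, Union

from crewai.llms.base_llm import BaseLLM


class CustomLLM(BaseLLM):
    def __init__(self, model: str, stop: Optional[List[str]] = None) -> None:
        super().__init__(model=model, stop=stop)

    def call(
        self,
        messages: Union[str, List[Dict[str, str]]],
        callbacks: Optional[List[Any]] = None,
    ) -> str:
        return f"[{self.model}] canned response"


class JWTAuthLLM(CustomLLM):
    def __init__(self, model: str, jwt_token: str) -> None:
        super().__init__(model=model)
        self.jwt_token = jwt_token  # placeholder attribute for illustration only


llm = JWTAuthLLM(model="test-model", jwt_token="dummy-token")
```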
* Enhance create_llm function to support BaseLLM type
- Update the create_llm function to accept both LLM and BaseLLM instances
- Ensure compatibility with existing LLM handling logic
* Update type hint for initialize_chat_llm to support BaseLLM
- Modify the return type of initialize_chat_llm function to allow for both LLM and BaseLLM instances
- Ensure compatibility with recent changes in create_llm function
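The widened signatures from the two commits above amount roughly to the following; the bodies are simplified placeholders, and the real logic in `llm_utils.py` and `crew_chat.py` differs:

```python
# Sketch of the broadened type hints only; construction details are omitted.
from typing import Any, Optional, Union

from crewai.llm import LLM
from crewai.llms.base_llm import BaseLLM


def create_llm(
    llm_value: Union[str, LLM, BaseLLM, None] = None,
) -> Optional[Union[LLM, BaseLLM]]:
    # Instances of LLM or any BaseLLM subclass pass through unchanged.
    if isinstance(llm_value, (LLM, BaseLLM)):
        return llm_value
    if isinstance(llm_value, str):
        return LLM(model=llm_value)
    return None  # the real implementation handles further cases (defaults, dicts, ...)


def initialize_chat_llm(crew: Any) -> Optional[Union[LLM, BaseLLM]]:
    # Custom BaseLLM subclasses configured on the crew now flow through as-is.
    return create_llm(getattr(crew, "chat_llm", None))
```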
* Refactor AISuiteLLM to include tools parameter in completion methods
- Update the _prepare_completion_params method to accept an optional tools parameter
- Modify the chat completion method to utilize the new tools parameter for enhanced functionality
- Clean up print statements for better code clarity
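A sketch of the extended parameter preparation; shown as a standalone function for brevity, though in the real code it is a method on `AISuiteLLM`, and the body is illustrative:

```python
# Illustrative sketch of _prepare_completion_params with the optional tools argument.
from typing import Any, Dict, List, Optional, Union


def _prepare_completion_params(
    self,
    messages: Union[str, List[Dict[str, str]]],
    tools: Optional[List[Dict[str, Any]]] = None,
) -> Dict[str, Any]:
    if isinstance(messages, str):
        messages = [{"role": "user", "content": messages}]
    params: Dict[str, Any] = {"model": self.model, "messages": messages}
    if tools:
        # Forward tool schemas so the provider can surface tool calls.
        params["tools"] = tools
    return params
```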
* Remove unused tool_calls handling in AISuiteLLM chat completion method for cleaner code.
* Refactor Crew class and LLM hierarchy for improved type handling and code clarity
- Update Crew class methods to enhance readability with consistent formatting and type hints.
- Change LLM class to inherit from BaseLLM for better structure.
- Remove unnecessary type checks and streamline tool handling in CrewAgentExecutor.
- Adjust BaseLLM to provide default implementations for stop words and context window size methods.
- Clean up AISuiteLLM by removing unused methods related to stop words and context window size.
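The reshaped hierarchy after this refactor, with `BaseLLM` supplying overridable defaults so subclasses such as `AISuiteLLM` no longer need their own stop-word or context-window methods (default values shown are illustrative):

```python
# Sketch of the class hierarchy after this refactor; defaults are illustrative.
from abc import ABC, abstractmethod
from typing import Any, Dict, List, Optional, Union


class BaseLLM(ABC):
    def __init__(self, model: str, stop: Optional[List[str]] = None) -> None:
        self.model = model
        self.stop = stop or []

    @abstractmethod
    def call(
        self,
        messages: Union[str, List[Dict[str, str]]],
        callbacks: Optional[List[Any]] = None,
    ) -> str:
        ...

    # Overridable defaults; concrete subclasses only override when needed.
    def supports_stop_words(self) -> bool:
        return True

    def get_context_window_size(self) -> int:
        return 4096  # conservative placeholder default


class LLM(BaseLLM):
    """The existing concrete LLM class, now a BaseLLM subclass.

    The full implementation stays in src/crewai/llm.py; only the inheritance
    and the shared defaults change in this refactor.
    """
```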
* Remove unused `stream` method from `BaseLLM` class to enhance code clarity and maintainability.
---------
Co-authored-by: Devin AI <158243242+devin-ai-integration[bot]@users.noreply.github.com>
Co-authored-by: Joe Moura <joao@crewai.com>
Co-authored-by: Lorenze Jay <lorenzejaytech@gmail.com>
Co-authored-by: João Moura <joaomdmoura@gmail.com>
Co-authored-by: Brandon Hancock (bhancock_ai) <109994880+bhancockio@users.noreply.github.com>
* wip
* More clean up
* Fix error
* clean up test
* Improve chat calling messages
* crewai chat improvements
* working but need to clean up
* Clean up chat
* worked on the foundation for new conversational crews. Now going to work on chatting.
* core loop should be working and ready for testing.
* high level chat working
* it's alive!!
* Added in Joao's feedback to steer crew chats back towards the purpose of the crew
* properly return tool call result
* accessing crew directly instead of through uv commands
* everything is working for conversation now
* Fix linting
* fix llm_utils.py and other type errors
* fix more type errors
* fixing type error
* More fixing of types
* fix failing tests
* Fix more failing tests
* adding tests. cleaning up PR.
* improve
* drop old functions
* improve type hints