* Add support for custom LLM implementations
Co-Authored-By: Joe Moura <joao@crewai.com>
* Fix import sorting and type annotations
Co-Authored-By: Joe Moura <joao@crewai.com>
* Fix linting issues with import sorting
Co-Authored-By: Joe Moura <joao@crewai.com>
* Fix type errors in crew.py by updating tool-related methods to return List[BaseTool]
Co-Authored-By: Joe Moura <joao@crewai.com>
* Enhance custom LLM implementation with better error handling, documentation, and test coverage
Co-Authored-By: Joe Moura <joao@crewai.com>
* Refactor LLM module by extracting BaseLLM to a separate file
This commit moves the BaseLLM abstract base class from llm.py to a new file llms/base_llm.py to improve code organization. The changes include:
- Creating a new file src/crewai/llms/base_llm.py
- Moving the BaseLLM class to the new file
- Updating imports in __init__.py and llm.py to reflect the new location
- Updating test cases to use the new import path
The refactoring maintains the existing functionality while improving the project's module structure.
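For reference, a minimal before/after sketch of the import change (both paths are taken from the commit description above):
```python
# Before the refactor, BaseLLM lived alongside LLM in llm.py:
#   from crewai.llm import BaseLLM
# After the refactor, the abstract base class is imported from its own module:
from crewai.llms.base_llm import BaseLLM

# Downstream code and tests only need this import path updated; behavior is unchanged.
```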
* Add AISuite LLM support and update dependencies
- Integrate AISuite as a new third-party LLM option
- Update pyproject.toml and uv.lock to include aisuite package
- Modify BaseLLM to support more flexible initialization
- Remove unnecessary LLM imports across multiple files
- Implement AISuiteLLM with basic chat completion functionality
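A hedged sketch of what the AISuite-backed class could look like; the constructor and call() signatures follow the BaseLLM changes described in this and the following commits and may differ from the shipped implementation.
```python
import aisuite as ai  # third-party client added to pyproject.toml above

from crewai.llms.base_llm import BaseLLM


class AISuiteLLMSketch(BaseLLM):
    """Illustrative only: routes chat completions through the aisuite client."""

    def __init__(self, model: str = "openai:gpt-4o-mini", **completion_kwargs):
        super().__init__(model=model)
        self.client = ai.Client()
        self.completion_kwargs = completion_kwargs

    def call(self, messages, tools=None, callbacks=None, available_functions=None):
        # Accept either a bare prompt string or a list of chat messages
        # (the next commit relaxes the input type in the same way).
        if isinstance(messages, str):
            messages = [{"role": "user", "content": messages}]
        response = self.client.chat.completions.create(
            model=self.model, messages=messages, **self.completion_kwargs
        )
        return response.choices[0].message.content
```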
* Update AISuiteLLM and LLM utility type handling
- Modify AISuiteLLM to support more flexible input types for messages
- Update type hints in AISuiteLLM to allow string or list of message dictionaries
- Enhance LLM utility function to support broader LLM type annotations
- Remove default `self.stop` attribute from BaseLLM initialization
* Update LLM imports and type hints across multiple files
- Modify imports in crew_chat.py to use LLM instead of BaseLLM
- Update type hints in llm_utils.py to use LLM type
- Add optional `stop` parameter to BaseLLM initialization
- Refactor type handling for LLM creation and usage
* Improve stop words handling in CrewAgentExecutor
- Add support for handling existing stop words in LLM configuration
- Ensure stop words are correctly merged and deduplicated
- Update type hints to support both LLM and BaseLLM types
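A rough sketch of the merge-and-deduplicate behavior described above (the helper name is hypothetical; the real logic lives inside CrewAgentExecutor):
```python
def merge_stop_words(existing: list[str] | None, new: list[str]) -> list[str]:
    """Combine stop words already configured on the LLM with the executor's own
    stop sequences, dropping duplicates while preserving order."""
    merged: list[str] = []
    for word in (existing or []) + new:
        if word not in merged:
            merged.append(word)
    return merged


# merge_stop_words(["\nObservation:"], ["\nObservation:", "\nFinal Answer:"])
# -> ["\nObservation:", "\nFinal Answer:"]
```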
* Remove abstract method set_callbacks from BaseLLM class
* Enhance CustomLLM and JWTAuthLLM initialization with model parameter
- Update CustomLLM to accept a model parameter during initialization
- Modify test cases to include the new model argument
- Ensure JWTAuthLLM and TimeoutHandlingLLM also utilize the model parameter in their constructors
- Update type hints in create_llm function to support both LLM and BaseLLM types
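A hedged sketch of the updated test doubles; the class bodies are simplified assumptions, and the point is only that the model name is now passed through the constructor.
```python
from crewai.llms.base_llm import BaseLLM


class CustomLLM(BaseLLM):
    """Illustrative test double that now takes the model name at construction time."""

    def __init__(self, model: str, response: str = "ok"):
        super().__init__(model=model)
        self.response = response

    def call(self, messages, tools=None, callbacks=None, available_functions=None) -> str:
        return self.response


class JWTAuthLLM(CustomLLM):
    """Illustrative: forwards the model parameter and keeps a bearer token for requests."""

    def __init__(self, model: str, jwt_token: str):
        super().__init__(model=model)
        self.headers = {"Authorization": f"Bearer {jwt_token}"}
```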
* Enhance create_llm function to support BaseLLM type
- Update the create_llm function to accept both LLM and BaseLLM instances
- Ensure compatibility with existing LLM handling logic
* Update type hint for initialize_chat_llm to support BaseLLM
- Modify the return type of initialize_chat_llm function to allow for both LLM and BaseLLM instances
- Ensure compatibility with recent changes in create_llm function
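A signature-level sketch of the widened hints in these two commits; the function bodies are heavily simplified, and the chat_llm attribute name is an assumption.
```python
from typing import Optional, Union

from crewai.llm import LLM
from crewai.llms.base_llm import BaseLLM


def create_llm(llm_value: Union[str, LLM, BaseLLM, None]) -> Optional[Union[LLM, BaseLLM]]:
    # Custom BaseLLM implementations pass straight through; strings fall back
    # to the existing construction logic (elided here).
    if isinstance(llm_value, (LLM, BaseLLM)):
        return llm_value
    if isinstance(llm_value, str):
        return LLM(model=llm_value)
    return None


def initialize_chat_llm(crew) -> Optional[Union[LLM, BaseLLM]]:
    # Reuses create_llm, so crew chat also accepts custom BaseLLM implementations.
    return create_llm(getattr(crew, "chat_llm", None))
```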
* Refactor AISuiteLLM to include tools parameter in completion methods
- Update the _prepare_completion_params method to accept an optional tools parameter
- Modify the chat completion method to utilize the new tools parameter for enhanced functionality
- Clean up print statements for better code clarity
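A simplified sketch of the parameter handling described above; only the optional tools forwarding is the point, and the rest of the real method is elided.
```python
from typing import Any, Dict, List, Optional


class AISuiteParamsSketch:
    """Stand-in for AISuiteLLM, reduced to the parameter-building step."""

    def __init__(self, model: str):
        self.model = model

    def _prepare_completion_params(
        self,
        messages: List[Dict[str, str]],
        tools: Optional[List[Dict[str, Any]]] = None,
    ) -> Dict[str, Any]:
        # Tools are only forwarded to the provider when the caller supplies them.
        params: Dict[str, Any] = {"model": self.model, "messages": messages}
        if tools:
            params["tools"] = tools
        return params
```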
* Remove unused tool_calls handling in AISuiteLLM chat completion method for cleaner code.
* Refactor Crew class and LLM hierarchy for improved type handling and code clarity
- Update Crew class methods to enhance readability with consistent formatting and type hints.
- Change LLM class to inherit from BaseLLM for better structure.
- Remove unnecessary type checks and streamline tool handling in CrewAgentExecutor.
- Adjust BaseLLM to provide default implementations for stop words and context window size methods.
- Clean up AISuiteLLM by removing unused methods related to stop words and context window size.
* Remove unused `stream` method from `BaseLLM` class to enhance code clarity and maintainability.
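A hedged sketch of the default hooks mentioned above, so concrete LLMs only override what they need; the default values shown are assumptions.
```python
from abc import ABC, abstractmethod
from typing import Any, Dict, List, Optional, Union


class BaseLLMSketch(ABC):
    """Simplified stand-in illustrating defaults that subclasses inherit."""

    def __init__(self, model: str, stop: Optional[List[str]] = None):
        self.model = model
        self.stop = stop or []

    @abstractmethod
    def call(self, messages: Union[str, List[Dict[str, str]]], **kwargs: Any) -> str:
        ...

    def supports_stop_words(self) -> bool:
        # Default: assume the provider honors stop sequences.
        return True

    def get_context_window_size(self) -> int:
        # Conservative default; providers with larger windows override this.
        return 4096
```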
---------
Co-authored-by: Devin AI <158243242+devin-ai-integration[bot]@users.noreply.github.com>
Co-authored-by: Joe Moura <joao@crewai.com>
Co-authored-by: Lorenze Jay <lorenzejaytech@gmail.com>
Co-authored-by: João Moura <joaomdmoura@gmail.com>
Co-authored-by: Brandon Hancock (bhancock_ai) <109994880+bhancockio@users.noreply.github.com>
* initial knowledge
* WIP
* Adding core knowledge sources
* Improve types and better support for file paths
* added additional sources
* fix linting
* update yaml to include optional deps
* adding in lorenze feedback
* ensure embeddings are persisted
* improvements all around Knowledge class
* return this
* properly reset memory
* properly reset memory+knowledge
* consolidation and improvements
* linted
* cleanup rm unused embedder
* fix test
* fix duplicate
* generating cassettes for knowledge test
* updated default embedder
* None embedder to use default on pipeline cloning
* improvements
* fixed text_file_knowledge
* mypy src fixes
* type check fixes
* added extra cassette
* just mocks
* linted
* mock knowledge query to not spin up db
* linted
* verbose run
* put a flag
* fix
* adding docs
* better docs
* improvements from review
* more docs
* linted
* rm print
* more fixes
* clearer docs
* added docstrings and type hints for cli
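A hedged end-to-end usage sketch of the knowledge feature built up in the commits above, assuming the string-source API and the knowledge_sources crew parameter:
```python
from crewai import Agent, Crew, Task
from crewai.knowledge.source.string_knowledge_source import StringKnowledgeSource

# A small in-memory knowledge source; file-based sources follow the same pattern.
policy = StringKnowledgeSource(content="Refunds are processed within 14 days.")

agent = Agent(
    role="Support analyst",
    goal="Answer policy questions accurately",
    backstory="Knows the refund policy inside out.",
)
task = Task(
    description="How long do refunds take?",
    expected_output="A one-sentence answer.",
    agent=agent,
)

crew = Crew(
    agents=[agent],
    tasks=[task],
    knowledge_sources=[policy],  # embeddings are persisted and reused across runs
)
# result = crew.kickoff()
```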
---------
Co-authored-by: João Moura <joaomdmoura@gmail.com>
Co-authored-by: Lorenze Jay <lorenzejaytech@gmail.com>
* Almost working!
* It fully works but is not clean enough
* Working but not clean enough
* Everything is working
* WIP. Working on adding and & or conditions to flows. In the middle of setting up a template for flows as well
* template working
* Everything is working
* More changes and todos
* Add more support for @start
* Router working now
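A hedged sketch of the flow shape these commits build toward, using the @start/@listen/@router decorators and unstructured flow state; details may differ from the shipped examples.
```python
from crewai.flow.flow import Flow, listen, router, start


class SignupFlow(Flow):
    @start()
    def collect_email(self):
        # Unstructured flow state is a shared dict-like store across steps.
        self.state["email"] = "user@example.com"

    @router(collect_email)
    def validate(self):
        # Routers return a label; listeners bound to that label run next.
        return "valid" if "@" in self.state["email"] else "invalid"

    @listen("valid")
    def send_welcome(self):
        print("welcome email sent")

    @listen("invalid")
    def ask_again(self):
        print("please re-enter the email address")


# SignupFlow().kickoff()
```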
* minor tweak to
* minor tweak to conditions and event handling
* Update logs
* Too trigger-happy with cleanup
* Added in Thiago's fix
* Flow passing results again
* Working on docs.
* made more progress updates on docs
* Finished talking about controlling flows
* add flow output
* fixed flow output section
* "add crews to flows" docs section is looking good now
* more flow doc changes
* Update docs and add more examples
* drop visualizer
* save visualizer
* pyvis is beginning to work
* pyvis working
* it is working
* regular methods and triggers working. Need to work on router next.
* properly identifying router and router children nodes. Need to fix color
* children router working. Need to support loops
* curving cycles but need to add curve conditionals
* everything is showing up properly; need to fix curves
* all working. needs to be cleaned up
* adjust padding
* drop lib
* clean up prior to PR
* incorporate Joao's feedback
* final tweaks for Joao
* rebuilding executor
* removing langchain
* Making all tests good
* fixing types and adding the ability to not use system prompts
* improving types
* pleasing the types gods
* pleasing the types gods
* fixing parser, tools and executor
* making sure all tests pass
* final pass
* fixing type
* Updating Docs
* preparing to cut new version
* WIP. Procedure appears to be working well. Working on mocking properly for tests
* All tests are passing now
* rshift working
* Add back in Gui's tool_usage fix
* WIP
* Going to start refactoring for pipeline_output
* Update terminology
* new pipeline flow with traces and usage metrics working. need to add more tests and make sure PipelineOutput behaves like CrewOutput
* Fix PipelineOutput to look more like CrewOutput and TaskOutput
* Implemented additional tests for pipeline. One test is failing. Need team support
* Update docs for pipeline
* Update pipeline to properly process input and output dictionaries
* Update Pipeline docs
* Add back in commentary at top of pipeline file
* Starting to work on router
* Drop router for now. will add in separately
* In the middle of fixing router. A ton of circular dependencies. Moving over to a new design.
* WIP.
* Fix circular dependencies and updated PipelineRouter
* Add in Eduardo's feedback. Still need to add in more commentary describing the design decisions for pipeline
* Add developer notes to explain what is going on in pipelines.
* Add doc strings
* Fix missing rag datatype
* WIP. Converting usage metrics from a dict to an object
* Fix tests that were checking usage metrics
* Drop todo
* Fix 1 type error in pipeline
* Update pipeline to use UsageMetric
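A small sketch of the change from a raw dict to a typed object; the field names follow crewai's UsageMetrics model, and the import path is an assumption.
```python
from crewai.types.usage_metrics import UsageMetrics

# Usage metrics are now a typed object instead of a plain dict, so totals are
# read as attributes rather than string keys.
metrics = UsageMetrics(
    total_tokens=1200,
    prompt_tokens=900,
    completion_tokens=300,
    successful_requests=3,
)
print(metrics.total_tokens)  # previously: usage_metrics["total_tokens"]
```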
* Add missing doc string
* WIP.
* Change names
* Rename variables based on Joao's feedback
* Fix critical circular dependency issues. Now needing to fix trace issue.
* Tests working now!
* Add more tests which showed underlying issue with traces
* Fix tests
* Remove overly complicated test
* Add router example to docs
* Clean up end of docs
* Clean up docs
* Working on creating Crew templates and pipeline templates
* WIP.
* WIP
* Fix poetry install from templates
* WIP
* Restructure
* changes for Lorenze
* more todos
* WIP: create pipelines cli working
* wrapped up router
* ignore mypy src on templates
* ignored signature of copy
* fix all verbose
* rm print statements
* brought back correct folders
* fixes missing folders and then rm print statements
* fixed tests
* fixed broken test
* fixed type checker
* fixed type ignore
* ignore types for templates
* needed
* revert
* exclude only required
* rm type errors on templates
* rm excluding type checks for template files on github action
* fixed missing quotes
---------
Co-authored-by: Brandon Hancock <brandon@brandonhancock.io>
* better spacing
* works with llama index
* works on LangChain custom agent; just need delegation to work
* cleanup for custom_agent class
* works with different argument expectations for agent_executor
* cleanup for hierarchical process, better agent_executor args handler and added to the crew agent doc page
* removed code examples for langchain + llama index, added to docs instead
* added key output if return is not a str and added some tests
* added hinting for CustomAgent class
* removed pass as it was not needed
* closer; just need to figure out agentTools
* running agents - llamaindex and langchain with base agent
* some cleanup on baseAgent
* minimum for agent to run for base class and ensure it works with hierarchical process
* cleanup for original agent to take on BaseAgent class
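A hedged sketch of what a third-party adapter built on BaseAgent might look like; the method set shown here is an assumption drawn from these commits, and further hooks (such as the output converter added below) are elided.
```python
from typing import Any, List, Optional

from crewai.agents.agent_builder.base_agent import BaseAgent


class ThirdPartyAgent(BaseAgent):
    """Illustrative adapter wrapping an external runtime such as a LlamaIndex agent."""

    def execute_task(self, task, context: Optional[str] = None, tools: Optional[List[Any]] = None) -> str:
        # Build a prompt from the task (plus optional context) and hand it to the
        # wrapped runtime, returning its text answer to the crew.
        prompt = task.description if context is None else f"{task.description}\n{context}"
        return self._run_external_agent(prompt)

    def create_agent_executor(self, tools: Optional[List[Any]] = None) -> None:
        # External runtimes manage their own execution loop, so nothing to build here.
        return None

    def get_delegation_tools(self, agents: List["BaseAgent"]) -> List[Any]:
        # Delegation tools are what the hierarchical process uses to hand off work.
        return []

    def _run_external_agent(self, prompt: str) -> str:
        raise NotImplementedError("wire the wrapped agent's run/chat call in here")
```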
* Agent takes on LangchainAgent and cleanup across the codebase
* token handling working for usage_metrics to continue working
* installed llama-index, updated docs and added better name
* fixed some type errors
* base agent holds token_process
* hierarchical process uses proper tools and no longer relies on hasattr for token_processes
* removal of test_custom_agent_executions
* this fixes copying agents
* leveraging an executor class to trigger the LlamaIndex agent
* llama index now has ask_human
* executor mixins added
* added output converter base class
* type listed
* cleanup for output conversions and token process; eliminated redundancy
* properly handling tokens
* simplified token calc handling
* original agent with base agent builder structure setup
* better docs
* no more llama-index dep
* cleaner docs
* test fixes
* poetry reverts and better docs
* base_agent_tools set for third party agents
* updated task and test fix