Mirror of https://github.com/crewAIInc/crewAI.git, synced 2026-01-08 15:48:29 +00:00
* Add support for custom LLM implementations
* Fix import sorting and type annotations
* Fix linting issues with import sorting
* Fix type errors in crew.py by updating tool-related methods to return List[BaseTool]
* Enhance custom LLM implementation with better error handling, documentation, and test coverage
* Refactor LLM module by extracting BaseLLM to a separate file
  This commit moves the BaseLLM abstract base class from llm.py to a new file llms/base_llm.py to improve code organization. The changes include:
  - Creating a new file src/crewai/llms/base_llm.py
  - Moving the BaseLLM class to the new file
  - Updating imports in __init__.py and llm.py to reflect the new location
  - Updating test cases to use the new import path
  The refactoring maintains the existing functionality while improving the project's module structure.
* Add AISuite LLM support and update dependencies
  - Integrate AISuite as a new third-party LLM option
  - Update pyproject.toml and uv.lock to include aisuite package
  - Modify BaseLLM to support more flexible initialization
  - Remove unnecessary LLM imports across multiple files
  - Implement AISuiteLLM with basic chat completion functionality
* Update AISuiteLLM and LLM utility type handling
  - Modify AISuiteLLM to support more flexible input types for messages
  - Update type hints in AISuiteLLM to allow string or list of message dictionaries
  - Enhance LLM utility function to support broader LLM type annotations
  - Remove default `self.stop` attribute from BaseLLM initialization
* Update LLM imports and type hints across multiple files
  - Modify imports in crew_chat.py to use LLM instead of BaseLLM
  - Update type hints in llm_utils.py to use LLM type
  - Add optional `stop` parameter to BaseLLM initialization
  - Refactor type handling for LLM creation and usage
* Improve stop words handling in CrewAgentExecutor
  - Add support for handling existing stop words in LLM configuration
  - Ensure stop words are correctly merged and deduplicated
  - Update type hints to support both LLM and BaseLLM types
* Remove abstract method set_callbacks from BaseLLM class
* Enhance CustomLLM and JWTAuthLLM initialization with model parameter
  - Update CustomLLM to accept a model parameter during initialization
  - Modify test cases to include the new model argument
  - Ensure JWTAuthLLM and TimeoutHandlingLLM also utilize the model parameter in their constructors
  - Update type hints in create_llm function to support both LLM and BaseLLM types
* Enhance create_llm function to support BaseLLM type
  - Update the create_llm function to accept both LLM and BaseLLM instances
  - Ensure compatibility with existing LLM handling logic
* Update type hint for initialize_chat_llm to support BaseLLM
  - Modify the return type of initialize_chat_llm function to allow for both LLM and BaseLLM instances
  - Ensure compatibility with recent changes in create_llm function
* Refactor AISuiteLLM to include tools parameter in completion methods
  - Update the _prepare_completion_params method to accept an optional tools parameter
  - Modify the chat completion method to utilize the new tools parameter for enhanced functionality
  - Clean up print statements for better code clarity
* Remove unused tool_calls handling in AISuiteLLM chat completion method for cleaner code.
* Refactor Crew class and LLM hierarchy for improved type handling and code clarity
  - Update Crew class methods to enhance readability with consistent formatting and type hints.
  - Change LLM class to inherit from BaseLLM for better structure.
  - Remove unnecessary type checks and streamline tool handling in CrewAgentExecutor.
  - Adjust BaseLLM to provide default implementations for stop words and context window size methods.
  - Clean up AISuiteLLM by removing unused methods related to stop words and context window size.
* Remove unused `stream` method from `BaseLLM` class to enhance code clarity and maintainability.

---------

Co-authored-by: Devin AI <158243242+devin-ai-integration[bot]@users.noreply.github.com>
Co-authored-by: Joe Moura <joao@crewai.com>
Co-authored-by: Lorenze Jay <lorenzejaytech@gmail.com>
Co-authored-by: João Moura <joaomdmoura@gmail.com>
Co-authored-by: Brandon Hancock (bhancock_ai) <109994880+bhancockio@users.noreply.github.com>
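As a rough illustration of the custom-LLM pattern these commits describe, the sketch below subclasses BaseLLM, imported from the new crewai.llms.base_llm module named above. The constructor arguments follow the notes (a model identifier plus an optional stop list); the call() signature and its keyword names are assumptions inferred from the commit messages, not taken from the released API, so treat the names and parameters as illustrative.

    # Minimal sketch of a custom LLM, assuming the BaseLLM interface described
    # in the commit notes above. The call() signature is an assumption.
    from typing import Any, Dict, List, Optional, Union

    from crewai.llms.base_llm import BaseLLM  # module path per the refactor noted above


    class EchoLLM(BaseLLM):
        """Toy LLM that returns a canned reply instead of calling a provider."""

        def __init__(self, model: str, stop: Optional[List[str]] = None):
            # Per the notes, BaseLLM now takes a model identifier and an
            # optional stop-word list at initialization.
            super().__init__(model=model, stop=stop)

        def call(
            self,
            messages: Union[str, List[Dict[str, str]]],
            tools: Optional[List[dict]] = None,
            callbacks: Optional[List[Any]] = None,
            available_functions: Optional[Dict[str, Any]] = None,
        ) -> str:
            # A real implementation would forward `messages` to a provider SDK
            # (for example an aisuite client) and return the completion text.
            return "Hi! (canned response from EchoLLM)"

Because create_llm and the agent plumbing now accept BaseLLM instances as well as LLM ones (per the notes above), such an object can be passed wherever an LLM is expected, e.g. Agent(role=..., goal=..., backstory=..., llm=EchoLLM(model="echo-1")).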
interactions:
- request:
    body: '{"messages": [{"role": "system", "content": "You are a helpful assistant."},
      {"role": "user", "content": [{"role": "system", "content": "You are Say Hi.
      You just say hi to the user\nYour personal goal is: Say hi to the user\nTo give
      my best complete final answer to the task respond using the exact following
      format:\n\nThought: I now can give a great answer\nFinal Answer: Your final
      answer must be the great and the most complete as possible, it must be outcome
      described.\n\nI MUST use these formats, my job depends on it!"}, {"role": "user",
      "content": "\nCurrent Task: Say hi to the user\n\nThis is the expected criteria
      for your final answer: A greeting to the user\nyou MUST return the actual complete
      content as the final answer, not a summary.\n\nBegin! This is VERY important
      to you, use the tools available and give your best Final Answer, your job depends
      on it!\n\nThought:"}]}], "model": "gpt-4o-mini", "tools": null}'
    headers:
      accept:
      - application/json
      accept-encoding:
      - gzip, deflate
      connection:
      - keep-alive
      content-length:
      - '931'
      content-type:
      - application/json
      host:
      - api.openai.com
      user-agent:
      - OpenAI/Python 1.61.0
      x-stainless-arch:
      - arm64
      x-stainless-async:
      - 'false'
      x-stainless-lang:
      - python
      x-stainless-os:
      - MacOS
      x-stainless-package-version:
      - 1.61.0
      x-stainless-retry-count:
      - '0'
      x-stainless-runtime:
      - CPython
      x-stainless-runtime-version:
      - 3.12.8
    method: POST
    uri: https://api.openai.com/v1/chat/completions
  response:
    content: "{\n  \"error\": {\n    \"message\": \"Missing required parameter: 'messages[1].content[0].type'.\",\n    \"type\": \"invalid_request_error\",\n    \"param\": \"messages[1].content[0].type\",\n    \"code\": \"missing_required_parameter\"\n  }\n}"
    headers:
      CF-RAY:
      - 91b54660799a15b4-SJC
      Connection:
      - keep-alive
      Content-Length:
      - '219'
      Content-Type:
      - application/json
      Date:
      - Tue, 04 Mar 2025 23:50:16 GMT
      Server:
      - cloudflare
      Set-Cookie:
      - __cf_bm=OwS.6cyfDpbxxx8vPp4THv5eNoDMQK0qSVN.wSUyOYk-1741132216-1.0.1.1-QBVd08CjfmDBpNnYQM5ILGbTUWKh6SDM9E4ARG4SV2Z9Q4ltFSFLXoo38OGJApUNZmzn4PtRsyAPsHt_dsrHPF6MD17FPcGtrnAHqCjJrfU; path=/; expires=Wed, 05-Mar-25 00:20:16 GMT; domain=.api.openai.com; HttpOnly; Secure; SameSite=None
      - _cfuvid=n_ebDsAOhJm5Mc7OMx8JDiOaZq5qzHCnVxyS3KN0BwA-1741132216951-0.0.1.1-604800000; path=/; domain=.api.openai.com; HttpOnly; Secure; SameSite=None
      X-Content-Type-Options:
      - nosniff
      access-control-expose-headers:
      - X-Request-ID
      alt-svc:
      - h3=":443"; ma=86400
      cf-cache-status:
      - DYNAMIC
      openai-organization:
      - crewai-iuxna1
      openai-processing-ms:
      - '19'
      openai-version:
      - '2020-10-01'
      strict-transport-security:
      - max-age=31536000; includeSubDomains; preload
      x-ratelimit-limit-requests:
      - '30000'
      x-ratelimit-limit-tokens:
      - '150000000'
      x-ratelimit-remaining-requests:
      - '29999'
      x-ratelimit-remaining-tokens:
      - '149999974'
      x-ratelimit-reset-requests:
      - 2ms
      x-ratelimit-reset-tokens:
      - 0s
      x-request-id:
      - req_042a4e8f9432f6fde7a02037bb6caafa
    http_version: HTTP/1.1
    status_code: 400
- request:
    body: '{"messages": [{"role": "system", "content": "You are a helpful assistant."},
      {"role": "user", "content": [{"role": "system", "content": "You are Say Hi.
      You just say hi to the user\nYour personal goal is: Say hi to the user\nTo give
      my best complete final answer to the task respond using the exact following
      format:\n\nThought: I now can give a great answer\nFinal Answer: Your final
      answer must be the great and the most complete as possible, it must be outcome
      described.\n\nI MUST use these formats, my job depends on it!"}, {"role": "user",
      "content": "\nCurrent Task: Say hi to the user\n\nThis is the expected criteria
      for your final answer: A greeting to the user\nyou MUST return the actual complete
      content as the final answer, not a summary.\n\nBegin! This is VERY important
      to you, use the tools available and give your best Final Answer, your job depends
      on it!\n\nThought:"}]}], "model": "gpt-4o-mini", "tools": null}'
    headers:
      accept:
      - application/json
      accept-encoding:
      - gzip, deflate
      connection:
      - keep-alive
      content-length:
      - '931'
      content-type:
      - application/json
      host:
      - api.openai.com
      user-agent:
      - OpenAI/Python 1.61.0
      x-stainless-arch:
      - arm64
      x-stainless-async:
      - 'false'
      x-stainless-lang:
      - python
      x-stainless-os:
      - MacOS
      x-stainless-package-version:
      - 1.61.0
      x-stainless-retry-count:
      - '0'
      x-stainless-runtime:
      - CPython
      x-stainless-runtime-version:
      - 3.12.8
    method: POST
    uri: https://api.openai.com/v1/chat/completions
  response:
    content: "{\n  \"error\": {\n    \"message\": \"Missing required parameter: 'messages[1].content[0].type'.\",\n    \"type\": \"invalid_request_error\",\n    \"param\": \"messages[1].content[0].type\",\n    \"code\": \"missing_required_parameter\"\n  }\n}"
    headers:
      CF-RAY:
      - 91b54664bb1acef1-SJC
      Connection:
      - keep-alive
      Content-Length:
      - '219'
      Content-Type:
      - application/json
      Date:
      - Tue, 04 Mar 2025 23:50:17 GMT
      Server:
      - cloudflare
      Set-Cookie:
      - __cf_bm=.wGU4pJEajaSzFWjp05TBQwWbCNA2CgpYNu7UYOzbbM-1741132217-1.0.1.1-NoLiAx4qkplllldYYxZCOSQGsX6hsPUJIEyqmt84B3g7hjW1s7.jk9C9PYzXagHWjT0sQ9Ny4LZBA94lDJTfDBZpty8NJQha7ZKW0P_msH8; path=/; expires=Wed, 05-Mar-25 00:20:17 GMT; domain=.api.openai.com; HttpOnly; Secure; SameSite=None
      - _cfuvid=GAjgJjVLtN49bMeWdWZDYLLkEkK51z5kxK4nKqhAzxY-1741132217161-0.0.1.1-604800000; path=/; domain=.api.openai.com; HttpOnly; Secure; SameSite=None
      X-Content-Type-Options:
      - nosniff
      access-control-expose-headers:
      - X-Request-ID
      alt-svc:
      - h3=":443"; ma=86400
      cf-cache-status:
      - DYNAMIC
      openai-organization:
      - crewai-iuxna1
      openai-processing-ms:
      - '25'
      openai-version:
      - '2020-10-01'
      strict-transport-security:
      - max-age=31536000; includeSubDomains; preload
      x-ratelimit-limit-requests:
      - '30000'
      x-ratelimit-limit-tokens:
      - '150000000'
      x-ratelimit-remaining-requests:
      - '29999'
      x-ratelimit-remaining-tokens:
      - '149999974'
      x-ratelimit-reset-requests:
      - 2ms
      x-ratelimit-reset-tokens:
      - 0s
      x-request-id:
      - req_7a1d027da1ef4468e861e570c72e98fb
    http_version: HTTP/1.1
    status_code: 400
- request:
    body: '{"messages": [{"role": "system", "content": "You are a helpful assistant."},
      {"role": "user", "content": [{"role": "system", "content": "You are Say Hi.
      You just say hi to the user\nYour personal goal is: Say hi to the user\nTo give
      my best complete final answer to the task respond using the exact following
      format:\n\nThought: I now can give a great answer\nFinal Answer: Your final
      answer must be the great and the most complete as possible, it must be outcome
      described.\n\nI MUST use these formats, my job depends on it!"}, {"role": "user",
      "content": "\nCurrent Task: Say hi to the user\n\nThis is the expected criteria
      for your final answer: A greeting to the user\nyou MUST return the actual complete
      content as the final answer, not a summary.\n\nBegin! This is VERY important
      to you, use the tools available and give your best Final Answer, your job depends
      on it!\n\nThought:"}]}], "model": "gpt-4o-mini", "tools": null}'
    headers:
      accept:
      - application/json
      accept-encoding:
      - gzip, deflate
      connection:
      - keep-alive
      content-length:
      - '931'
      content-type:
      - application/json
      host:
      - api.openai.com
      user-agent:
      - OpenAI/Python 1.61.0
      x-stainless-arch:
      - arm64
      x-stainless-async:
      - 'false'
      x-stainless-lang:
      - python
      x-stainless-os:
      - MacOS
      x-stainless-package-version:
      - 1.61.0
      x-stainless-retry-count:
      - '0'
      x-stainless-runtime:
      - CPython
      x-stainless-runtime-version:
      - 3.12.8
    method: POST
    uri: https://api.openai.com/v1/chat/completions
  response:
    content: "{\n  \"error\": {\n    \"message\": \"Missing required parameter: 'messages[1].content[0].type'.\",\n    \"type\": \"invalid_request_error\",\n    \"param\": \"messages[1].content[0].type\",\n    \"code\": \"missing_required_parameter\"\n  }\n}"
    headers:
      CF-RAY:
      - 91b54666183beb22-SJC
      Connection:
      - keep-alive
      Content-Length:
      - '219'
      Content-Type:
      - application/json
      Date:
      - Tue, 04 Mar 2025 23:50:17 GMT
      Server:
      - cloudflare
      Set-Cookie:
      - __cf_bm=VwjWHHpkZMJlosI9RbMqxYDBS1t0JK4tWpAy4lST2QM-1741132217-1.0.1.1-u7PU.ZvVBTXNB5R8vaYfWdPXAjWZ3ZcTAy656VaGDZmKIckk5od._eQdn0W0EGVtEMm3TuF60z4GZAPDwMYvb3_3cw1RuEMmQbp4IIrl7VY; path=/; expires=Wed, 05-Mar-25 00:20:17 GMT; domain=.api.openai.com; HttpOnly; Secure; SameSite=None
      - _cfuvid=NglAAsQBoiabMuuHFgilRjflSPFqS38VGKnGyweuCuw-1741132217438-0.0.1.1-604800000; path=/; domain=.api.openai.com; HttpOnly; Secure; SameSite=None
      X-Content-Type-Options:
      - nosniff
      access-control-expose-headers:
      - X-Request-ID
      alt-svc:
      - h3=":443"; ma=86400
      cf-cache-status:
      - DYNAMIC
      openai-organization:
      - crewai-iuxna1
      openai-processing-ms:
      - '56'
      openai-version:
      - '2020-10-01'
      strict-transport-security:
      - max-age=31536000; includeSubDomains; preload
      x-ratelimit-limit-requests:
      - '30000'
      x-ratelimit-limit-tokens:
      - '150000000'
      x-ratelimit-remaining-requests:
      - '29999'
      x-ratelimit-remaining-tokens:
      - '149999974'
      x-ratelimit-reset-requests:
      - 2ms
      x-ratelimit-reset-tokens:
      - 0s
      x-request-id:
      - req_3c335b308b82cc2214783a4bf2fc0fd4
    http_version: HTTP/1.1
    status_code: 400
version: 1
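Each recorded request above sends messages[1].content as a list whose parts are keyed by "role", so all three attempts fail with HTTP 400 and "Missing required parameter: 'messages[1].content[0].type'". For reference only (not part of the recording), a list-style content part the Chat Completions API accepts declares a "type", along the lines of:

    # Illustrative shape of a valid list-style message content for the
    # OpenAI Chat Completions API; each part must declare a "type".
    valid_user_message = {
        "role": "user",
        "content": [
            {"type": "text", "text": "Say hi to the user"},
        ],
    }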