crewAI/tests/cassettes/test_agent_moved_on_after_max_iterations.yaml
Commit efe27bd570, "Feat/individual react agent (#2483)", by Brandon Hancock (bhancock_ai), co-authored by Lorenze Jay, 2025-04-02.
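This file is a VCR-style cassette: a YAML recording of the HTTP traffic (seven OpenAI chat-completion calls plus one telemetry export) that is replayed so the test can run offline, without hitting api.openai.com. Below is a minimal sketch of the kind of test that consumes a recording like this; the import paths, the pytest.mark.vcr marker, the max_iter value, and the final assertion are assumptions based on common crewAI and pytest-recording usage, not code copied from the repository.

# Minimal sketch (not the repository's exact test) of an agent that is told to
# loop on a tool until it is forced to give its final answer, matching the
# prompts recorded in this cassette. Assumed APIs: crewai.Agent/Task,
# crewai.tools.tool, and a pytest-recording style vcr marker.
import pytest

from crewai import Agent, Task
from crewai.tools import tool


@tool("get_final_answer")
def get_final_answer() -> int:
    """Get the final answer but don't give it yet, just re-use this tool non-stop."""
    return 42


@pytest.mark.vcr(filter_headers=["authorization"])
def test_agent_moved_on_after_max_iterations():
    agent = Agent(
        role="test role",
        goal="test goal",
        backstory="test backstory",
        max_iter=3,  # assumed iteration cap; the cassette shows the agent eventually being forced to answer
        allow_delegation=False,
    )
    task = Task(
        description=(
            "The final answer is 42. But don't give it yet, instead keep using "
            "the `get_final_answer` tool over and over until you're told you "
            "can give your final answer."
        ),
        expected_output="The final answer",
        agent=agent,
    )
    output = agent.execute_task(task=task, tools=[get_final_answer])
    assert output == "42"  # illustrative; the recorded final answer is 42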

interactions:
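# Interaction 1: first chat completion. The system prompt describes the
# get_final_answer tool and the ReAct response format; the model replies with
# its first "Action: get_final_answer" step.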
- request:
body: '{"messages": [{"role": "system", "content": "You are test role. test backstory\nYour
personal goal is: test goal\nYou ONLY have access to the following tools, and
should NEVER make up tools that are not listed here:\n\nTool Name: get_final_answer\nTool
Arguments: {}\nTool Description: Get the final answer but don''t give it yet,
just re-use this\n tool non-stop.\n\nIMPORTANT: Use the following format
in your response:\n\n```\nThought: you should always think about what to do\nAction:
the action to take, only one name of [get_final_answer], just the name, exactly
as it''s written.\nAction Input: the input to the action, just a simple JSON
object, enclosed in curly braces, using \" to wrap keys and values.\nObservation:
the result of the action\n```\n\nOnce all necessary information is gathered,
return the following format:\n\n```\nThought: I now know the final answer\nFinal
Answer: the final answer to the original input question\n```"}, {"role": "user",
"content": "\nCurrent Task: The final answer is 42. But don''t give it yet,
instead keep using the `get_final_answer` tool over and over until you''re told
you can give your final answer.\n\nThis is the expected criteria for your final
answer: The final answer\nyou MUST return the actual complete content as the
final answer, not a summary.\n\nBegin! This is VERY important to you, use the
tools available and give your best Final Answer, your job depends on it!\n\nThought:"}],
"model": "gpt-4o-mini", "stop": ["\nObservation:"]}'
headers:
accept:
- application/json
accept-encoding:
- gzip, deflate, zstd
connection:
- keep-alive
content-length:
- '1501'
content-type:
- application/json
host:
- api.openai.com
user-agent:
- OpenAI/Python 1.68.2
x-stainless-arch:
- arm64
x-stainless-async:
- 'false'
x-stainless-lang:
- python
x-stainless-os:
- MacOS
x-stainless-package-version:
- 1.68.2
x-stainless-raw-response:
- 'true'
x-stainless-read-timeout:
- '600.0'
x-stainless-retry-count:
- '0'
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.12.8
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
content: "{\n \"id\": \"chatcmpl-BHIyHPwQwes0C4pDX7xQLHvqR6305\",\n \"object\":
\"chat.completion\",\n \"created\": 1743464201,\n \"model\": \"gpt-4o-mini-2024-07-18\",\n
\ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
\"assistant\",\n \"content\": \"Thought: I should start using the tool
to get the final answer repeatedly as instructed. \\nAction: get_final_answer
\ \\nAction Input: {} \",\n \"refusal\": null,\n \"annotations\":
[]\n },\n \"logprobs\": null,\n \"finish_reason\": \"stop\"\n
\ }\n ],\n \"usage\": {\n \"prompt_tokens\": 303,\n \"completion_tokens\":
29,\n \"total_tokens\": 332,\n \"prompt_tokens_details\": {\n \"cached_tokens\":
0,\n \"audio_tokens\": 0\n },\n \"completion_tokens_details\": {\n
\ \"reasoning_tokens\": 0,\n \"audio_tokens\": 0,\n \"accepted_prediction_tokens\":
0,\n \"rejected_prediction_tokens\": 0\n }\n },\n \"service_tier\":
\"default\",\n \"system_fingerprint\": \"fp_b376dfbbd5\"\n}\n"
headers:
CF-RAY:
- 9293ab99f853ce50-SJC
Connection:
- keep-alive
Content-Encoding:
- gzip
Content-Type:
- application/json
Date:
- Mon, 31 Mar 2025 23:36:42 GMT
Server:
- cloudflare
Set-Cookie:
- __cf_bm=Bwvchs4Dp02K9.WxyX6U8yVg.jg2z6x7yNWekHnFUbQ-1743464202-1.0.1.1-KvmUaCRpD961qPqJPLi38I.N4IEYmc3i_IyJ5LDo2z5TIhZilbmK0oMNu7HrCHT3kzKWh0SpZ_FocvywK0qJ3fku_cwyTByEPK05SQQOEWE;
path=/; expires=Tue, 01-Apr-25 00:06:42 GMT; domain=.api.openai.com; HttpOnly;
Secure; SameSite=None
- _cfuvid=zeQ6mwappLtze7fZgtCp1BJNVbBLSsCm8WxR2Jydshg-1743464202332-0.0.1.1-604800000;
path=/; domain=.api.openai.com; HttpOnly; Secure; SameSite=None
Transfer-Encoding:
- chunked
X-Content-Type-Options:
- nosniff
access-control-expose-headers:
- X-Request-ID
alt-svc:
- h3=":443"; ma=86400
cf-cache-status:
- DYNAMIC
openai-organization:
- crewai-iuxna1
openai-processing-ms:
- '967'
openai-version:
- '2020-10-01'
strict-transport-security:
- max-age=31536000; includeSubDomains; preload
x-ratelimit-limit-requests:
- '30000'
x-ratelimit-limit-tokens:
- '150000000'
x-ratelimit-remaining-requests:
- '29999'
x-ratelimit-remaining-tokens:
- '149999663'
x-ratelimit-reset-requests:
- 2ms
x-ratelimit-reset-tokens:
- 0s
x-request-id:
- req_a8af664cb724dbc0d8886d863743321b
http_version: HTTP/1.1
status_code: 200
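# Interaction 2: second chat completion. The conversation now includes the
# observation "42"; the model repeats the get_final_answer action.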
- request:
body: '{"messages": [{"role": "system", "content": "You are test role. test backstory\nYour
personal goal is: test goal\nYou ONLY have access to the following tools, and
should NEVER make up tools that are not listed here:\n\nTool Name: get_final_answer\nTool
Arguments: {}\nTool Description: Get the final answer but don''t give it yet,
just re-use this\n tool non-stop.\n\nIMPORTANT: Use the following format
in your response:\n\n```\nThought: you should always think about what to do\nAction:
the action to take, only one name of [get_final_answer], just the name, exactly
as it''s written.\nAction Input: the input to the action, just a simple JSON
object, enclosed in curly braces, using \" to wrap keys and values.\nObservation:
the result of the action\n```\n\nOnce all necessary information is gathered,
return the following format:\n\n```\nThought: I now know the final answer\nFinal
Answer: the final answer to the original input question\n```"}, {"role": "user",
"content": "\nCurrent Task: The final answer is 42. But don''t give it yet,
instead keep using the `get_final_answer` tool over and over until you''re told
you can give your final answer.\n\nThis is the expected criteria for your final
answer: The final answer\nyou MUST return the actual complete content as the
final answer, not a summary.\n\nBegin! This is VERY important to you, use the
tools available and give your best Final Answer, your job depends on it!\n\nThought:"},
{"role": "assistant", "content": "42"}, {"role": "assistant", "content": "Thought:
I should start using the tool to get the final answer repeatedly as instructed. \nAction:
get_final_answer \nAction Input: {} \nObservation: 42"}], "model": "gpt-4o-mini",
"stop": ["\nObservation:"]}'
headers:
accept:
- application/json
accept-encoding:
- gzip, deflate, zstd
connection:
- keep-alive
content-length:
- '1734'
content-type:
- application/json
cookie:
- __cf_bm=Bwvchs4Dp02K9.WxyX6U8yVg.jg2z6x7yNWekHnFUbQ-1743464202-1.0.1.1-KvmUaCRpD961qPqJPLi38I.N4IEYmc3i_IyJ5LDo2z5TIhZilbmK0oMNu7HrCHT3kzKWh0SpZ_FocvywK0qJ3fku_cwyTByEPK05SQQOEWE;
_cfuvid=zeQ6mwappLtze7fZgtCp1BJNVbBLSsCm8WxR2Jydshg-1743464202332-0.0.1.1-604800000
host:
- api.openai.com
user-agent:
- OpenAI/Python 1.68.2
x-stainless-arch:
- arm64
x-stainless-async:
- 'false'
x-stainless-lang:
- python
x-stainless-os:
- MacOS
x-stainless-package-version:
- 1.68.2
x-stainless-raw-response:
- 'true'
x-stainless-read-timeout:
- '600.0'
x-stainless-retry-count:
- '0'
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.12.8
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
content: "{\n \"id\": \"chatcmpl-BHIyIBjI26RQEA6wcGPOodTFflqRo\",\n \"object\":
\"chat.completion\",\n \"created\": 1743464202,\n \"model\": \"gpt-4o-mini-2024-07-18\",\n
\ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
\"assistant\",\n \"content\": \"Thought: I should continue using the
tool to obtain the final answer. \\nAction: get_final_answer \\nAction Input:
{} \",\n \"refusal\": null,\n \"annotations\": []\n },\n
\ \"logprobs\": null,\n \"finish_reason\": \"stop\"\n }\n ],\n
\ \"usage\": {\n \"prompt_tokens\": 345,\n \"completion_tokens\": 26,\n
\ \"total_tokens\": 371,\n \"prompt_tokens_details\": {\n \"cached_tokens\":
0,\n \"audio_tokens\": 0\n },\n \"completion_tokens_details\": {\n
\ \"reasoning_tokens\": 0,\n \"audio_tokens\": 0,\n \"accepted_prediction_tokens\":
0,\n \"rejected_prediction_tokens\": 0\n }\n },\n \"service_tier\":
\"default\",\n \"system_fingerprint\": \"fp_b376dfbbd5\"\n}\n"
headers:
CF-RAY:
- 9293aba0e8d6ce50-SJC
Connection:
- keep-alive
Content-Encoding:
- gzip
Content-Type:
- application/json
Date:
- Mon, 31 Mar 2025 23:36:43 GMT
Server:
- cloudflare
Transfer-Encoding:
- chunked
X-Content-Type-Options:
- nosniff
access-control-expose-headers:
- X-Request-ID
alt-svc:
- h3=":443"; ma=86400
cf-cache-status:
- DYNAMIC
openai-organization:
- crewai-iuxna1
openai-processing-ms:
- '556'
openai-version:
- '2020-10-01'
strict-transport-security:
- max-age=31536000; includeSubDomains; preload
x-ratelimit-limit-requests:
- '30000'
x-ratelimit-limit-tokens:
- '150000000'
x-ratelimit-remaining-requests:
- '29999'
x-ratelimit-remaining-tokens:
- '149999622'
x-ratelimit-reset-requests:
- 2ms
x-ratelimit-reset-tokens:
- 0s
x-request-id:
- req_c57d2f2205a659ee25d122bdc7a3d5ba
http_version: HTTP/1.1
status_code: 200
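# Interaction 3: third chat completion. The history carries the repeated-usage
# warning ("I tried reusing the same input..."); the model calls the tool again.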
- request:
body: '{"messages": [{"role": "system", "content": "You are test role. test backstory\nYour
personal goal is: test goal\nYou ONLY have access to the following tools, and
should NEVER make up tools that are not listed here:\n\nTool Name: get_final_answer\nTool
Arguments: {}\nTool Description: Get the final answer but don''t give it yet,
just re-use this\n tool non-stop.\n\nIMPORTANT: Use the following format
in your response:\n\n```\nThought: you should always think about what to do\nAction:
the action to take, only one name of [get_final_answer], just the name, exactly
as it''s written.\nAction Input: the input to the action, just a simple JSON
object, enclosed in curly braces, using \" to wrap keys and values.\nObservation:
the result of the action\n```\n\nOnce all necessary information is gathered,
return the following format:\n\n```\nThought: I now know the final answer\nFinal
Answer: the final answer to the original input question\n```"}, {"role": "user",
"content": "\nCurrent Task: The final answer is 42. But don''t give it yet,
instead keep using the `get_final_answer` tool over and over until you''re told
you can give your final answer.\n\nThis is the expected criteria for your final
answer: The final answer\nyou MUST return the actual complete content as the
final answer, not a summary.\n\nBegin! This is VERY important to you, use the
tools available and give your best Final Answer, your job depends on it!\n\nThought:"},
{"role": "assistant", "content": "42"}, {"role": "assistant", "content": "Thought:
I should start using the tool to get the final answer repeatedly as instructed. \nAction:
get_final_answer \nAction Input: {} \nObservation: 42"}, {"role": "assistant",
"content": "I tried reusing the same input, I must stop using this action input.
I''ll try something else instead.\n\n"}, {"role": "assistant", "content": "Thought:
I should continue using the tool to obtain the final answer. \nAction: get_final_answer \nAction
Input: {} \nObservation: I tried reusing the same input, I must stop using
this action input. I''ll try something else instead."}], "model": "gpt-4o-mini",
"stop": ["\nObservation:"]}'
headers:
accept:
- application/json
accept-encoding:
- gzip, deflate, zstd
connection:
- keep-alive
content-length:
- '2150'
content-type:
- application/json
cookie:
- __cf_bm=Bwvchs4Dp02K9.WxyX6U8yVg.jg2z6x7yNWekHnFUbQ-1743464202-1.0.1.1-KvmUaCRpD961qPqJPLi38I.N4IEYmc3i_IyJ5LDo2z5TIhZilbmK0oMNu7HrCHT3kzKWh0SpZ_FocvywK0qJ3fku_cwyTByEPK05SQQOEWE;
_cfuvid=zeQ6mwappLtze7fZgtCp1BJNVbBLSsCm8WxR2Jydshg-1743464202332-0.0.1.1-604800000
host:
- api.openai.com
user-agent:
- OpenAI/Python 1.68.2
x-stainless-arch:
- arm64
x-stainless-async:
- 'false'
x-stainless-lang:
- python
x-stainless-os:
- MacOS
x-stainless-package-version:
- 1.68.2
x-stainless-raw-response:
- 'true'
x-stainless-read-timeout:
- '600.0'
x-stainless-retry-count:
- '0'
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.12.8
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
content: "{\n \"id\": \"chatcmpl-BHIyJ9rzK9MdaKoTCou0bZfXbocg2\",\n \"object\":
\"chat.completion\",\n \"created\": 1743464203,\n \"model\": \"gpt-4o-mini-2024-07-18\",\n
\ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
\"assistant\",\n \"content\": \"Thought: I need to keep using the tool
to retrieve the final answer repeatedly. \\nAction: get_final_answer \\nAction
Input: {} \",\n \"refusal\": null,\n \"annotations\": []\n },\n
\ \"logprobs\": null,\n \"finish_reason\": \"stop\"\n }\n ],\n
\ \"usage\": {\n \"prompt_tokens\": 425,\n \"completion_tokens\": 28,\n
\ \"total_tokens\": 453,\n \"prompt_tokens_details\": {\n \"cached_tokens\":
0,\n \"audio_tokens\": 0\n },\n \"completion_tokens_details\": {\n
\ \"reasoning_tokens\": 0,\n \"audio_tokens\": 0,\n \"accepted_prediction_tokens\":
0,\n \"rejected_prediction_tokens\": 0\n }\n },\n \"service_tier\":
\"default\",\n \"system_fingerprint\": \"fp_b376dfbbd5\"\n}\n"
headers:
CF-RAY:
- 9293aba4eda8ce50-SJC
Connection:
- keep-alive
Content-Encoding:
- gzip
Content-Type:
- application/json
Date:
- Mon, 31 Mar 2025 23:36:43 GMT
Server:
- cloudflare
Transfer-Encoding:
- chunked
X-Content-Type-Options:
- nosniff
access-control-expose-headers:
- X-Request-ID
alt-svc:
- h3=":443"; ma=86400
cf-cache-status:
- DYNAMIC
openai-organization:
- crewai-iuxna1
openai-processing-ms:
- '550'
openai-version:
- '2020-10-01'
strict-transport-security:
- max-age=31536000; includeSubDomains; preload
x-ratelimit-limit-requests:
- '30000'
x-ratelimit-limit-tokens:
- '150000000'
x-ratelimit-remaining-requests:
- '29999'
x-ratelimit-remaining-tokens:
- '149999537'
x-ratelimit-reset-requests:
- 2ms
x-ratelimit-reset-tokens:
- 0s
x-request-id:
- req_43c3fb39cef01274c42b218850f6c23a
http_version: HTTP/1.1
status_code: 200
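# Interaction 4: telemetry export. A protobuf payload containing Tool Usage and
# Tool Repeated Usage spans is POSTed to telemetry.crewai.com.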
- request:
body: !!binary |
CpQECiQKIgoMc2VydmljZS5uYW1lEhIKEGNyZXdBSS10ZWxlbWV0cnkS6wMKEgoQY3Jld2FpLnRl
bGVtZXRyeRKUAQoQKWg+yHi9soA2LjyuLMgsRRIIAgk59s2N62MqClRvb2wgVXNhZ2UwATnQhPxq
dAcyGEHofxFrdAcyGEobCg5jcmV3YWlfdmVyc2lvbhIJCgcwLjEwOC4wSh8KCXRvb2xfbmFtZRIS
ChBnZXRfZmluYWxfYW5zd2VySg4KCGF0dGVtcHRzEgIYAXoCGAGFAQABAAASnQEKEIU9KjUxT2Q4
Rb5JHmc7ziwSCE1tdrTxYrB1KhNUb29sIFJlcGVhdGVkIFVzYWdlMAE5MHhQkXQHMhhBMB5fkXQH
MhhKGwoOY3Jld2FpX3ZlcnNpb24SCQoHMC4xMDguMEofCgl0b29sX25hbWUSEgoQZ2V0X2ZpbmFs
X2Fuc3dlckoOCghhdHRlbXB0cxICGAF6AhgBhQEAAQAAEp0BChBktnW7lH1MTK02aePrm5fjEggA
v1XFsR1QSyoTVG9vbCBSZXBlYXRlZCBVc2FnZTABOeAkd7h0BzIYQTj+gbh0BzIYShsKDmNyZXdh
aV92ZXJzaW9uEgkKBzAuMTA4LjBKHwoJdG9vbF9uYW1lEhIKEGdldF9maW5hbF9hbnN3ZXJKDgoI
YXR0ZW1wdHMSAhgBegIYAYUBAAEAAA==
headers:
Accept:
- '*/*'
Accept-Encoding:
- gzip, deflate, zstd
Connection:
- keep-alive
Content-Length:
- '535'
Content-Type:
- application/x-protobuf
User-Agent:
- OTel-OTLP-Exporter-Python/1.31.1
method: POST
uri: https://telemetry.crewai.com:4319/v1/traces
response:
body:
string: "\n\0"
headers:
Content-Length:
- '2'
Content-Type:
- application/x-protobuf
Date:
- Mon, 31 Mar 2025 23:36:44 GMT
status:
code: 200
message: OK
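# Interaction 5: fourth chat completion. The tool instructions are re-appended
# after each repeated-usage notice; the model keeps calling get_final_answer.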
- request:
body: '{"messages": [{"role": "system", "content": "You are test role. test backstory\nYour
personal goal is: test goal\nYou ONLY have access to the following tools, and
should NEVER make up tools that are not listed here:\n\nTool Name: get_final_answer\nTool
Arguments: {}\nTool Description: Get the final answer but don''t give it yet,
just re-use this\n tool non-stop.\n\nIMPORTANT: Use the following format
in your response:\n\n```\nThought: you should always think about what to do\nAction:
the action to take, only one name of [get_final_answer], just the name, exactly
as it''s written.\nAction Input: the input to the action, just a simple JSON
object, enclosed in curly braces, using \" to wrap keys and values.\nObservation:
the result of the action\n```\n\nOnce all necessary information is gathered,
return the following format:\n\n```\nThought: I now know the final answer\nFinal
Answer: the final answer to the original input question\n```"}, {"role": "user",
"content": "\nCurrent Task: The final answer is 42. But don''t give it yet,
instead keep using the `get_final_answer` tool over and over until you''re told
you can give your final answer.\n\nThis is the expected criteria for your final
answer: The final answer\nyou MUST return the actual complete content as the
final answer, not a summary.\n\nBegin! This is VERY important to you, use the
tools available and give your best Final Answer, your job depends on it!\n\nThought:"},
{"role": "assistant", "content": "42"}, {"role": "assistant", "content": "Thought:
I should start using the tool to get the final answer repeatedly as instructed. \nAction:
get_final_answer \nAction Input: {} \nObservation: 42"}, {"role": "assistant",
"content": "I tried reusing the same input, I must stop using this action input.
I''ll try something else instead.\n\n"}, {"role": "assistant", "content": "Thought:
I should continue using the tool to obtain the final answer. \nAction: get_final_answer \nAction
Input: {} \nObservation: I tried reusing the same input, I must stop using
this action input. I''ll try something else instead."}, {"role": "assistant",
"content": "I tried reusing the same input, I must stop using this action input.
I''ll try something else instead.\n\n\n\n\nYou ONLY have access to the following
tools, and should NEVER make up tools that are not listed here:\n\nTool Name:
get_final_answer\nTool Arguments: {}\nTool Description: Get the final answer
but don''t give it yet, just re-use this\n tool non-stop.\n\nIMPORTANT:
Use the following format in your response:\n\n```\nThought: you should always
think about what to do\nAction: the action to take, only one name of [get_final_answer],
just the name, exactly as it''s written.\nAction Input: the input to the action,
just a simple JSON object, enclosed in curly braces, using \" to wrap keys and
values.\nObservation: the result of the action\n```\n\nOnce all necessary information
is gathered, return the following format:\n\n```\nThought: I now know the final
answer\nFinal Answer: the final answer to the original input question\n```"},
{"role": "assistant", "content": "Thought: I need to keep using the tool to
retrieve the final answer repeatedly. \nAction: get_final_answer \nAction
Input: {} \nObservation: I tried reusing the same input, I must stop using
this action input. I''ll try something else instead.\n\n\n\n\nYou ONLY have
access to the following tools, and should NEVER make up tools that are not listed
here:\n\nTool Name: get_final_answer\nTool Arguments: {}\nTool Description:
Get the final answer but don''t give it yet, just re-use this\n tool
non-stop.\n\nIMPORTANT: Use the following format in your response:\n\n```\nThought:
you should always think about what to do\nAction: the action to take, only one
name of [get_final_answer], just the name, exactly as it''s written.\nAction
Input: the input to the action, just a simple JSON object, enclosed in curly
braces, using \" to wrap keys and values.\nObservation: the result of the action\n```\n\nOnce
all necessary information is gathered, return the following format:\n\n```\nThought:
I now know the final answer\nFinal Answer: the final answer to the original
input question\n```"}], "model": "gpt-4o-mini", "stop": ["\nObservation:"]}'
headers:
accept:
- application/json
accept-encoding:
- gzip, deflate, zstd
connection:
- keep-alive
content-length:
- '4266'
content-type:
- application/json
cookie:
- __cf_bm=Bwvchs4Dp02K9.WxyX6U8yVg.jg2z6x7yNWekHnFUbQ-1743464202-1.0.1.1-KvmUaCRpD961qPqJPLi38I.N4IEYmc3i_IyJ5LDo2z5TIhZilbmK0oMNu7HrCHT3kzKWh0SpZ_FocvywK0qJ3fku_cwyTByEPK05SQQOEWE;
_cfuvid=zeQ6mwappLtze7fZgtCp1BJNVbBLSsCm8WxR2Jydshg-1743464202332-0.0.1.1-604800000
host:
- api.openai.com
user-agent:
- OpenAI/Python 1.68.2
x-stainless-arch:
- arm64
x-stainless-async:
- 'false'
x-stainless-lang:
- python
x-stainless-os:
- MacOS
x-stainless-package-version:
- 1.68.2
x-stainless-raw-response:
- 'true'
x-stainless-read-timeout:
- '600.0'
x-stainless-retry-count:
- '0'
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.12.8
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
content: "{\n \"id\": \"chatcmpl-BHIyJOYjYmWgzoxY1EujNvwGjOf0V\",\n \"object\":
\"chat.completion\",\n \"created\": 1743464203,\n \"model\": \"gpt-4o-mini-2024-07-18\",\n
\ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
\"assistant\",\n \"content\": \"Thought: I need to continue using the
designated tool to obtain the final answer. \\nAction: get_final_answer \\nAction
Input: {} \",\n \"refusal\": null,\n \"annotations\": []\n },\n
\ \"logprobs\": null,\n \"finish_reason\": \"stop\"\n }\n ],\n
\ \"usage\": {\n \"prompt_tokens\": 861,\n \"completion_tokens\": 28,\n
\ \"total_tokens\": 889,\n \"prompt_tokens_details\": {\n \"cached_tokens\":
0,\n \"audio_tokens\": 0\n },\n \"completion_tokens_details\": {\n
\ \"reasoning_tokens\": 0,\n \"audio_tokens\": 0,\n \"accepted_prediction_tokens\":
0,\n \"rejected_prediction_tokens\": 0\n }\n },\n \"service_tier\":
\"default\",\n \"system_fingerprint\": \"fp_b376dfbbd5\"\n}\n"
headers:
CF-RAY:
- 9293aba90b04ce50-SJC
Connection:
- keep-alive
Content-Encoding:
- gzip
Content-Type:
- application/json
Date:
- Mon, 31 Mar 2025 23:36:45 GMT
Server:
- cloudflare
Transfer-Encoding:
- chunked
X-Content-Type-Options:
- nosniff
access-control-expose-headers:
- X-Request-ID
alt-svc:
- h3=":443"; ma=86400
cf-cache-status:
- DYNAMIC
openai-organization:
- crewai-iuxna1
openai-processing-ms:
- '1496'
openai-version:
- '2020-10-01'
strict-transport-security:
- max-age=31536000; includeSubDomains; preload
x-ratelimit-limit-requests:
- '30000'
x-ratelimit-limit-tokens:
- '150000000'
x-ratelimit-remaining-requests:
- '29999'
x-ratelimit-remaining-tokens:
- '149999039'
x-ratelimit-reset-requests:
- 2ms
x-ratelimit-reset-tokens:
- 0s
x-request-id:
- req_8c38f479539f55db3282f670b8957bf4
http_version: HTTP/1.1
status_code: 200
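# Interaction 6: fifth chat completion, same pattern as the previous one.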
- request:
body: '{"messages": [{"role": "system", "content": "You are test role. test backstory\nYour
personal goal is: test goal\nYou ONLY have access to the following tools, and
should NEVER make up tools that are not listed here:\n\nTool Name: get_final_answer\nTool
Arguments: {}\nTool Description: Get the final answer but don''t give it yet,
just re-use this\n tool non-stop.\n\nIMPORTANT: Use the following format
in your response:\n\n```\nThought: you should always think about what to do\nAction:
the action to take, only one name of [get_final_answer], just the name, exactly
as it''s written.\nAction Input: the input to the action, just a simple JSON
object, enclosed in curly braces, using \" to wrap keys and values.\nObservation:
the result of the action\n```\n\nOnce all necessary information is gathered,
return the following format:\n\n```\nThought: I now know the final answer\nFinal
Answer: the final answer to the original input question\n```"}, {"role": "user",
"content": "\nCurrent Task: The final answer is 42. But don''t give it yet,
instead keep using the `get_final_answer` tool over and over until you''re told
you can give your final answer.\n\nThis is the expected criteria for your final
answer: The final answer\nyou MUST return the actual complete content as the
final answer, not a summary.\n\nBegin! This is VERY important to you, use the
tools available and give your best Final Answer, your job depends on it!\n\nThought:"},
{"role": "assistant", "content": "42"}, {"role": "assistant", "content": "Thought:
I should start using the tool to get the final answer repeatedly as instructed. \nAction:
get_final_answer \nAction Input: {} \nObservation: 42"}, {"role": "assistant",
"content": "I tried reusing the same input, I must stop using this action input.
I''ll try something else instead.\n\n"}, {"role": "assistant", "content": "Thought:
I should continue using the tool to obtain the final answer. \nAction: get_final_answer \nAction
Input: {} \nObservation: I tried reusing the same input, I must stop using
this action input. I''ll try something else instead."}, {"role": "assistant",
"content": "I tried reusing the same input, I must stop using this action input.
I''ll try something else instead.\n\n\n\n\nYou ONLY have access to the following
tools, and should NEVER make up tools that are not listed here:\n\nTool Name:
get_final_answer\nTool Arguments: {}\nTool Description: Get the final answer
but don''t give it yet, just re-use this\n tool non-stop.\n\nIMPORTANT:
Use the following format in your response:\n\n```\nThought: you should always
think about what to do\nAction: the action to take, only one name of [get_final_answer],
just the name, exactly as it''s written.\nAction Input: the input to the action,
just a simple JSON object, enclosed in curly braces, using \" to wrap keys and
values.\nObservation: the result of the action\n```\n\nOnce all necessary information
is gathered, return the following format:\n\n```\nThought: I now know the final
answer\nFinal Answer: the final answer to the original input question\n```"},
{"role": "assistant", "content": "Thought: I need to keep using the tool to
retrieve the final answer repeatedly. \nAction: get_final_answer \nAction
Input: {} \nObservation: I tried reusing the same input, I must stop using
this action input. I''ll try something else instead.\n\n\n\n\nYou ONLY have
access to the following tools, and should NEVER make up tools that are not listed
here:\n\nTool Name: get_final_answer\nTool Arguments: {}\nTool Description:
Get the final answer but don''t give it yet, just re-use this\n tool
non-stop.\n\nIMPORTANT: Use the following format in your response:\n\n```\nThought:
you should always think about what to do\nAction: the action to take, only one
name of [get_final_answer], just the name, exactly as it''s written.\nAction
Input: the input to the action, just a simple JSON object, enclosed in curly
braces, using \" to wrap keys and values.\nObservation: the result of the action\n```\n\nOnce
all necessary information is gathered, return the following format:\n\n```\nThought:
I now know the final answer\nFinal Answer: the final answer to the original
input question\n```"}, {"role": "assistant", "content": "I tried reusing the
same input, I must stop using this action input. I''ll try something else instead.\n\n"},
{"role": "assistant", "content": "Thought: I need to continue using the designated
tool to obtain the final answer. \nAction: get_final_answer \nAction Input:
{} \nObservation: I tried reusing the same input, I must stop using this action
input. I''ll try something else instead."}], "model": "gpt-4o-mini", "stop":
["\nObservation:"]}'
headers:
accept:
- application/json
accept-encoding:
- gzip, deflate, zstd
connection:
- keep-alive
content-length:
- '4694'
content-type:
- application/json
cookie:
- __cf_bm=Bwvchs4Dp02K9.WxyX6U8yVg.jg2z6x7yNWekHnFUbQ-1743464202-1.0.1.1-KvmUaCRpD961qPqJPLi38I.N4IEYmc3i_IyJ5LDo2z5TIhZilbmK0oMNu7HrCHT3kzKWh0SpZ_FocvywK0qJ3fku_cwyTByEPK05SQQOEWE;
_cfuvid=zeQ6mwappLtze7fZgtCp1BJNVbBLSsCm8WxR2Jydshg-1743464202332-0.0.1.1-604800000
host:
- api.openai.com
user-agent:
- OpenAI/Python 1.68.2
x-stainless-arch:
- arm64
x-stainless-async:
- 'false'
x-stainless-lang:
- python
x-stainless-os:
- MacOS
x-stainless-package-version:
- 1.68.2
x-stainless-raw-response:
- 'true'
x-stainless-read-timeout:
- '600.0'
x-stainless-retry-count:
- '0'
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.12.8
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
content: "{\n \"id\": \"chatcmpl-BHIyLLDkgsE6GdQsZ86C35CjnYGTo\",\n \"object\":
\"chat.completion\",\n \"created\": 1743464205,\n \"model\": \"gpt-4o-mini-2024-07-18\",\n
\ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
\"assistant\",\n \"content\": \"Thought: I need to continue using the
tool without changing the input format. \\nAction: get_final_answer \\nAction
Input: {} \",\n \"refusal\": null,\n \"annotations\": []\n },\n
\ \"logprobs\": null,\n \"finish_reason\": \"stop\"\n }\n ],\n
\ \"usage\": {\n \"prompt_tokens\": 943,\n \"completion_tokens\": 27,\n
\ \"total_tokens\": 970,\n \"prompt_tokens_details\": {\n \"cached_tokens\":
0,\n \"audio_tokens\": 0\n },\n \"completion_tokens_details\": {\n
\ \"reasoning_tokens\": 0,\n \"audio_tokens\": 0,\n \"accepted_prediction_tokens\":
0,\n \"rejected_prediction_tokens\": 0\n }\n },\n \"service_tier\":
\"default\",\n \"system_fingerprint\": \"fp_b376dfbbd5\"\n}\n"
headers:
CF-RAY:
- 9293abb3684dce50-SJC
Connection:
- keep-alive
Content-Encoding:
- gzip
Content-Type:
- application/json
Date:
- Mon, 31 Mar 2025 23:36:46 GMT
Server:
- cloudflare
Transfer-Encoding:
- chunked
X-Content-Type-Options:
- nosniff
access-control-expose-headers:
- X-Request-ID
alt-svc:
- h3=":443"; ma=86400
cf-cache-status:
- DYNAMIC
openai-organization:
- crewai-iuxna1
openai-processing-ms:
- '809'
openai-version:
- '2020-10-01'
strict-transport-security:
- max-age=31536000; includeSubDomains; preload
x-ratelimit-limit-requests:
- '30000'
x-ratelimit-limit-tokens:
- '150000000'
x-ratelimit-remaining-requests:
- '29999'
x-ratelimit-remaining-tokens:
- '149998950'
x-ratelimit-reset-requests:
- 2ms
x-ratelimit-reset-tokens:
- 0s
x-request-id:
- req_35fcab88e7d96ac0040ee34407d57ced
http_version: HTTP/1.1
status_code: 200
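# Interaction 7: sixth chat completion. The prompt now ends with the forced
# "give your absolute best final answer" instruction; the model returns
# "Final Answer: 42".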
- request:
body: '{"messages": [{"role": "system", "content": "You are test role. test backstory\nYour
personal goal is: test goal\nYou ONLY have access to the following tools, and
should NEVER make up tools that are not listed here:\n\nTool Name: get_final_answer\nTool
Arguments: {}\nTool Description: Get the final answer but don''t give it yet,
just re-use this\n tool non-stop.\n\nIMPORTANT: Use the following format
in your response:\n\n```\nThought: you should always think about what to do\nAction:
the action to take, only one name of [get_final_answer], just the name, exactly
as it''s written.\nAction Input: the input to the action, just a simple JSON
object, enclosed in curly braces, using \" to wrap keys and values.\nObservation:
the result of the action\n```\n\nOnce all necessary information is gathered,
return the following format:\n\n```\nThought: I now know the final answer\nFinal
Answer: the final answer to the original input question\n```"}, {"role": "user",
"content": "\nCurrent Task: The final answer is 42. But don''t give it yet,
instead keep using the `get_final_answer` tool over and over until you''re told
you can give your final answer.\n\nThis is the expected criteria for your final
answer: The final answer\nyou MUST return the actual complete content as the
final answer, not a summary.\n\nBegin! This is VERY important to you, use the
tools available and give your best Final Answer, your job depends on it!\n\nThought:"},
{"role": "assistant", "content": "42"}, {"role": "assistant", "content": "Thought:
I should start using the tool to get the final answer repeatedly as instructed. \nAction:
get_final_answer \nAction Input: {} \nObservation: 42"}, {"role": "assistant",
"content": "I tried reusing the same input, I must stop using this action input.
I''ll try something else instead.\n\n"}, {"role": "assistant", "content": "Thought:
I should continue using the tool to obtain the final answer. \nAction: get_final_answer \nAction
Input: {} \nObservation: I tried reusing the same input, I must stop using
this action input. I''ll try something else instead."}, {"role": "assistant",
"content": "I tried reusing the same input, I must stop using this action input.
I''ll try something else instead.\n\n\n\n\nYou ONLY have access to the following
tools, and should NEVER make up tools that are not listed here:\n\nTool Name:
get_final_answer\nTool Arguments: {}\nTool Description: Get the final answer
but don''t give it yet, just re-use this\n tool non-stop.\n\nIMPORTANT:
Use the following format in your response:\n\n```\nThought: you should always
think about what to do\nAction: the action to take, only one name of [get_final_answer],
just the name, exactly as it''s written.\nAction Input: the input to the action,
just a simple JSON object, enclosed in curly braces, using \" to wrap keys and
values.\nObservation: the result of the action\n```\n\nOnce all necessary information
is gathered, return the following format:\n\n```\nThought: I now know the final
answer\nFinal Answer: the final answer to the original input question\n```"},
{"role": "assistant", "content": "Thought: I need to keep using the tool to
retrieve the final answer repeatedly. \nAction: get_final_answer \nAction
Input: {} \nObservation: I tried reusing the same input, I must stop using
this action input. I''ll try something else instead.\n\n\n\n\nYou ONLY have
access to the following tools, and should NEVER make up tools that are not listed
here:\n\nTool Name: get_final_answer\nTool Arguments: {}\nTool Description:
Get the final answer but don''t give it yet, just re-use this\n tool
non-stop.\n\nIMPORTANT: Use the following format in your response:\n\n```\nThought:
you should always think about what to do\nAction: the action to take, only one
name of [get_final_answer], just the name, exactly as it''s written.\nAction
Input: the input to the action, just a simple JSON object, enclosed in curly
braces, using \" to wrap keys and values.\nObservation: the result of the action\n```\n\nOnce
all necessary information is gathered, return the following format:\n\n```\nThought:
I now know the final answer\nFinal Answer: the final answer to the original
input question\n```"}, {"role": "assistant", "content": "I tried reusing the
same input, I must stop using this action input. I''ll try something else instead.\n\n"},
{"role": "assistant", "content": "Thought: I need to continue using the designated
tool to obtain the final answer. \nAction: get_final_answer \nAction Input:
{} \nObservation: I tried reusing the same input, I must stop using this action
input. I''ll try something else instead."}, {"role": "assistant", "content":
"I tried reusing the same input, I must stop using this action input. I''ll
try something else instead.\n\n"}, {"role": "assistant", "content": "Thought:
I need to continue using the tool without changing the input format. \nAction:
get_final_answer \nAction Input: {} \nObservation: I tried reusing the same
input, I must stop using this action input. I''ll try something else instead."},
{"role": "assistant", "content": "Thought: I need to continue using the tool
without changing the input format. \nAction: get_final_answer \nAction Input:
{} \nObservation: I tried reusing the same input, I must stop using this action
input. I''ll try something else instead.\n\n\nNow it''s time you MUST give your
absolute best final answer. You''ll ignore all previous instructions, stop using
any tools, and just return your absolute BEST Final answer."}], "model": "gpt-4o-mini",
"stop": ["\nObservation:"]}'
headers:
accept:
- application/json
accept-encoding:
- gzip, deflate, zstd
connection:
- keep-alive
content-length:
- '5577'
content-type:
- application/json
cookie:
- __cf_bm=Bwvchs4Dp02K9.WxyX6U8yVg.jg2z6x7yNWekHnFUbQ-1743464202-1.0.1.1-KvmUaCRpD961qPqJPLi38I.N4IEYmc3i_IyJ5LDo2z5TIhZilbmK0oMNu7HrCHT3kzKWh0SpZ_FocvywK0qJ3fku_cwyTByEPK05SQQOEWE;
_cfuvid=zeQ6mwappLtze7fZgtCp1BJNVbBLSsCm8WxR2Jydshg-1743464202332-0.0.1.1-604800000
host:
- api.openai.com
user-agent:
- OpenAI/Python 1.68.2
x-stainless-arch:
- arm64
x-stainless-async:
- 'false'
x-stainless-lang:
- python
x-stainless-os:
- MacOS
x-stainless-package-version:
- 1.68.2
x-stainless-raw-response:
- 'true'
x-stainless-read-timeout:
- '600.0'
x-stainless-retry-count:
- '0'
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.12.8
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
content: "{\n \"id\": \"chatcmpl-BHIyMjkFCQoAMiB3hVzH8zjNlHHem\",\n \"object\":
\"chat.completion\",\n \"created\": 1743464206,\n \"model\": \"gpt-4o-mini-2024-07-18\",\n
\ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
\"assistant\",\n \"content\": \"```\\nThought: I now know the final answer\\nFinal
Answer: 42\\n```\",\n \"refusal\": null,\n \"annotations\": []\n
\ },\n \"logprobs\": null,\n \"finish_reason\": \"stop\"\n }\n
\ ],\n \"usage\": {\n \"prompt_tokens\": 1111,\n \"completion_tokens\":
19,\n \"total_tokens\": 1130,\n \"prompt_tokens_details\": {\n \"cached_tokens\":
0,\n \"audio_tokens\": 0\n },\n \"completion_tokens_details\": {\n
\ \"reasoning_tokens\": 0,\n \"audio_tokens\": 0,\n \"accepted_prediction_tokens\":
0,\n \"rejected_prediction_tokens\": 0\n }\n },\n \"service_tier\":
\"default\",\n \"system_fingerprint\": \"fp_b376dfbbd5\"\n}\n"
headers:
CF-RAY:
- 9293abb94854ce50-SJC
Connection:
- keep-alive
Content-Encoding:
- gzip
Content-Type:
- application/json
Date:
- Mon, 31 Mar 2025 23:36:46 GMT
Server:
- cloudflare
Transfer-Encoding:
- chunked
X-Content-Type-Options:
- nosniff
access-control-expose-headers:
- X-Request-ID
alt-svc:
- h3=":443"; ma=86400
cf-cache-status:
- DYNAMIC
openai-organization:
- crewai-iuxna1
openai-processing-ms:
- '638'
openai-version:
- '2020-10-01'
strict-transport-security:
- max-age=31536000; includeSubDomains; preload
x-ratelimit-limit-requests:
- '30000'
x-ratelimit-limit-tokens:
- '150000000'
x-ratelimit-remaining-requests:
- '29999'
x-ratelimit-remaining-tokens:
- '149998757'
x-ratelimit-reset-requests:
- 2ms
x-ratelimit-reset-tokens:
- 0s
x-request-id:
- req_dfb10b0dbcc99d8a08c6c8cd172b006d
http_version: HTTP/1.1
status_code: 200
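# Interaction 8: the same forced-answer prompt is sent again (identical
# content-length of 5577); the model returns "Final Answer: 42", this time
# with 1024 cached prompt tokens.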
- request:
body: '{"messages": [{"role": "system", "content": "You are test role. test backstory\nYour
personal goal is: test goal\nYou ONLY have access to the following tools, and
should NEVER make up tools that are not listed here:\n\nTool Name: get_final_answer\nTool
Arguments: {}\nTool Description: Get the final answer but don''t give it yet,
just re-use this\n tool non-stop.\n\nIMPORTANT: Use the following format
in your response:\n\n```\nThought: you should always think about what to do\nAction:
the action to take, only one name of [get_final_answer], just the name, exactly
as it''s written.\nAction Input: the input to the action, just a simple JSON
object, enclosed in curly braces, using \" to wrap keys and values.\nObservation:
the result of the action\n```\n\nOnce all necessary information is gathered,
return the following format:\n\n```\nThought: I now know the final answer\nFinal
Answer: the final answer to the original input question\n```"}, {"role": "user",
"content": "\nCurrent Task: The final answer is 42. But don''t give it yet,
instead keep using the `get_final_answer` tool over and over until you''re told
you can give your final answer.\n\nThis is the expected criteria for your final
answer: The final answer\nyou MUST return the actual complete content as the
final answer, not a summary.\n\nBegin! This is VERY important to you, use the
tools available and give your best Final Answer, your job depends on it!\n\nThought:"},
{"role": "assistant", "content": "42"}, {"role": "assistant", "content": "Thought:
I should start using the tool to get the final answer repeatedly as instructed. \nAction:
get_final_answer \nAction Input: {} \nObservation: 42"}, {"role": "assistant",
"content": "I tried reusing the same input, I must stop using this action input.
I''ll try something else instead.\n\n"}, {"role": "assistant", "content": "Thought:
I should continue using the tool to obtain the final answer. \nAction: get_final_answer \nAction
Input: {} \nObservation: I tried reusing the same input, I must stop using
this action input. I''ll try something else instead."}, {"role": "assistant",
"content": "I tried reusing the same input, I must stop using this action input.
I''ll try something else instead.\n\n\n\n\nYou ONLY have access to the following
tools, and should NEVER make up tools that are not listed here:\n\nTool Name:
get_final_answer\nTool Arguments: {}\nTool Description: Get the final answer
but don''t give it yet, just re-use this\n tool non-stop.\n\nIMPORTANT:
Use the following format in your response:\n\n```\nThought: you should always
think about what to do\nAction: the action to take, only one name of [get_final_answer],
just the name, exactly as it''s written.\nAction Input: the input to the action,
just a simple JSON object, enclosed in curly braces, using \" to wrap keys and
values.\nObservation: the result of the action\n```\n\nOnce all necessary information
is gathered, return the following format:\n\n```\nThought: I now know the final
answer\nFinal Answer: the final answer to the original input question\n```"},
{"role": "assistant", "content": "Thought: I need to keep using the tool to
retrieve the final answer repeatedly. \nAction: get_final_answer \nAction
Input: {} \nObservation: I tried reusing the same input, I must stop using
this action input. I''ll try something else instead.\n\n\n\n\nYou ONLY have
access to the following tools, and should NEVER make up tools that are not listed
here:\n\nTool Name: get_final_answer\nTool Arguments: {}\nTool Description:
Get the final answer but don''t give it yet, just re-use this\n tool
non-stop.\n\nIMPORTANT: Use the following format in your response:\n\n```\nThought:
you should always think about what to do\nAction: the action to take, only one
name of [get_final_answer], just the name, exactly as it''s written.\nAction
Input: the input to the action, just a simple JSON object, enclosed in curly
braces, using \" to wrap keys and values.\nObservation: the result of the action\n```\n\nOnce
all necessary information is gathered, return the following format:\n\n```\nThought:
I now know the final answer\nFinal Answer: the final answer to the original
input question\n```"}, {"role": "assistant", "content": "I tried reusing the
same input, I must stop using this action input. I''ll try something else instead.\n\n"},
{"role": "assistant", "content": "Thought: I need to continue using the designated
tool to obtain the final answer. \nAction: get_final_answer \nAction Input:
{} \nObservation: I tried reusing the same input, I must stop using this action
input. I''ll try something else instead."}, {"role": "assistant", "content":
"I tried reusing the same input, I must stop using this action input. I''ll
try something else instead.\n\n"}, {"role": "assistant", "content": "Thought:
I need to continue using the tool without changing the input format. \nAction:
get_final_answer \nAction Input: {} \nObservation: I tried reusing the same
input, I must stop using this action input. I''ll try something else instead."},
{"role": "assistant", "content": "Thought: I need to continue using the tool
without changing the input format. \nAction: get_final_answer \nAction Input:
{} \nObservation: I tried reusing the same input, I must stop using this action
input. I''ll try something else instead.\n\n\nNow it''s time you MUST give your
absolute best final answer. You''ll ignore all previous instructions, stop using
any tools, and just return your absolute BEST Final answer."}], "model": "gpt-4o-mini",
"stop": ["\nObservation:"]}'
headers:
accept:
- application/json
accept-encoding:
- gzip, deflate, zstd
connection:
- keep-alive
content-length:
- '5577'
content-type:
- application/json
cookie:
- __cf_bm=Bwvchs4Dp02K9.WxyX6U8yVg.jg2z6x7yNWekHnFUbQ-1743464202-1.0.1.1-KvmUaCRpD961qPqJPLi38I.N4IEYmc3i_IyJ5LDo2z5TIhZilbmK0oMNu7HrCHT3kzKWh0SpZ_FocvywK0qJ3fku_cwyTByEPK05SQQOEWE;
_cfuvid=zeQ6mwappLtze7fZgtCp1BJNVbBLSsCm8WxR2Jydshg-1743464202332-0.0.1.1-604800000
host:
- api.openai.com
user-agent:
- OpenAI/Python 1.68.2
x-stainless-arch:
- arm64
x-stainless-async:
- 'false'
x-stainless-lang:
- python
x-stainless-os:
- MacOS
x-stainless-package-version:
- 1.68.2
x-stainless-raw-response:
- 'true'
x-stainless-read-timeout:
- '600.0'
x-stainless-retry-count:
- '0'
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.12.8
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
content: "{\n \"id\": \"chatcmpl-BHIyNYch0OY50INtQUdPpOnd0ypLu\",\n \"object\":
\"chat.completion\",\n \"created\": 1743464207,\n \"model\": \"gpt-4o-mini-2024-07-18\",\n
\ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
\"assistant\",\n \"content\": \"```\\nThought: I now know the final answer\\nFinal
Answer: 42\\n```\",\n \"refusal\": null,\n \"annotations\": []\n
\ },\n \"logprobs\": null,\n \"finish_reason\": \"stop\"\n }\n
\ ],\n \"usage\": {\n \"prompt_tokens\": 1111,\n \"completion_tokens\":
19,\n \"total_tokens\": 1130,\n \"prompt_tokens_details\": {\n \"cached_tokens\":
1024,\n \"audio_tokens\": 0\n },\n \"completion_tokens_details\":
{\n \"reasoning_tokens\": 0,\n \"audio_tokens\": 0,\n \"accepted_prediction_tokens\":
0,\n \"rejected_prediction_tokens\": 0\n }\n },\n \"service_tier\":
\"default\",\n \"system_fingerprint\": \"fp_b376dfbbd5\"\n}\n"
headers:
CF-RAY:
- 9293abbdcd59ce50-SJC
Connection:
- keep-alive
Content-Encoding:
- gzip
Content-Type:
- application/json
Date:
- Mon, 31 Mar 2025 23:36:47 GMT
Server:
- cloudflare
Transfer-Encoding:
- chunked
X-Content-Type-Options:
- nosniff
access-control-expose-headers:
- X-Request-ID
alt-svc:
- h3=":443"; ma=86400
cf-cache-status:
- DYNAMIC
openai-organization:
- crewai-iuxna1
openai-processing-ms:
- '587'
openai-version:
- '2020-10-01'
strict-transport-security:
- max-age=31536000; includeSubDomains; preload
x-ratelimit-limit-requests:
- '30000'
x-ratelimit-limit-tokens:
- '150000000'
x-ratelimit-remaining-requests:
- '29999'
x-ratelimit-remaining-tokens:
- '149998757'
x-ratelimit-reset-requests:
- 2ms
x-ratelimit-reset-tokens:
- 0s
x-request-id:
- req_424bb9ef11cf97c170f2543448a30bea
http_version: HTTP/1.1
status_code: 200
version: 1
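Because the cassette is plain YAML, it can be inspected directly when the recorded prompts drift from the current ones. A small sketch (assuming PyYAML and the vcrpy-style layout shown above; the file path is the one in this file's header) that lists each recorded call and its status:

# Sketch: summarize the recorded interactions in this cassette.
import yaml

with open("crewAI/tests/cassettes/test_agent_moved_on_after_max_iterations.yaml") as f:
    cassette = yaml.safe_load(f)

for i, interaction in enumerate(cassette["interactions"], start=1):
    request = interaction["request"]
    response = interaction["response"]
    # OpenAI entries record "status_code" directly; the telemetry entry nests
    # the code under "status".
    status = response.get("status_code", response.get("status", {}).get("code"))
    print(f"{i}: {request['method']} {request['uri']} -> {status}")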