Compare commits

...

37 Commits

Author SHA1 Message Date
João Moura
3ef502024d preparing new version 2024-02-13 02:58:16 -08:00
João Moura
e55cee7372 adding function calling llm support 2024-02-13 02:57:12 -08:00
João Moura
b72eb838c2 updating readme 2024-02-13 01:50:23 -08:00
João Moura
b21191dd55 updating tests 2024-02-13 01:50:12 -08:00
João Moura
76b17a8d04 renaming function for tools 2024-02-12 16:48:14 -08:00
João Moura
e97d1a0cf8 removing hostname from default telemetry 2024-02-12 16:11:15 -08:00
João Moura
c875d887b7 Creating a tool output parser 2024-02-12 14:24:36 -08:00
João Moura
44d9cbca81 adding regexp as dependency 2024-02-12 14:13:20 -08:00
João Moura
6e399101fd refactoring default agent tools 2024-02-12 13:27:02 -08:00
João Moura
e8e3617ba6 allowing to set model name through env var 2024-02-12 13:24:01 -08:00
João Moura
45fa30c007 avoiding telemetry errors 2024-02-12 13:23:40 -08:00
João Moura
15768d9c4d updating LLM connection docs 2024-02-12 13:21:43 -08:00
João Moura
a1fcaa398c updating versions and adding instructor 2024-02-12 13:20:28 -08:00
João Moura
871643d98d updating codeignore 2024-02-11 20:37:42 -08:00
João Moura
91659d6488 counting for tool retries on the actual usage 2024-02-10 13:14:00 -08:00
João Moura
0076ea7bff Adding ability to remember instruction after using too many tools 2024-02-10 12:53:02 -08:00
João Moura
e79da7bc05 refactoring task execution 2024-02-10 11:28:08 -08:00
João Moura
00206a62ab Revamping tool usage 2024-02-10 10:36:34 -08:00
João Moura
d0b0a33be3 updating translations 2024-02-10 01:08:04 -08:00
João Moura
6ea21e95b6 Adding printer logic 2024-02-10 00:57:04 -08:00
João Moura
c226dafd0d updating dependencies 2024-02-10 00:56:25 -08:00
João Moura
d4c21a23f4 updating all cassettes 2024-02-10 00:55:40 -08:00
João Moura
b76ae5b921 avoiding unnecessary telemetry errors 2024-02-09 10:48:45 -08:00
João Moura
b48e5af9a0 include agentFinish as part of step callback 2024-02-09 02:00:41 -08:00
João Moura
d36c2a74cb recreating executor upon setting new step_callback 2024-02-09 01:52:28 -08:00
João Moura
a1e0596450 adding crew step_callback 2024-02-09 01:24:31 -08:00
João Moura
596e243374 adding support for step_callback 2024-02-08 23:56:13 -08:00
João Moura
326ad08ba2 adding support for full_output in crews 2024-02-08 23:23:34 -08:00
João Moura
f63d4edbb4 adding agent step callback 2024-02-08 23:01:30 -08:00
João Moura
0057ed6786 adding users the option to share all data of their crews 2024-02-08 23:01:02 -08:00
João Moura
44b6bcbcaa preparing version 0.5.5 2024-02-07 23:13:39 -08:00
João Moura
a45c82c5f7 fixing RPM controller being set unnecessarily 2024-02-07 23:09:36 -08:00
João Moura
98133a4eb6 Adding new crew specific docs 2024-02-07 23:09:16 -08:00
João Moura
44c2fd223d preparing version 0.5.4 2024-02-07 22:22:33 -08:00
João Moura
fc249eefda adding initial telemetry 2024-02-07 22:21:44 -08:00
João Moura
1a1eb4e7aa preparing new version 0.5.3 2024-02-07 02:14:58 -08:00
João Moura
723fdc6245 adding fix to hierarchical process 2024-02-07 02:13:19 -08:00
65 changed files with 48325 additions and 8478 deletions

2
.gitignore vendored
View File

@@ -5,4 +5,4 @@ dist/
.env
assets/*
.idea
test.py
test/

View File

@@ -30,6 +30,7 @@
- [How CrewAI Compares](#how-crewai-compares)
- [Contribution](#contribution)
- [Hire CrewAI](#hire-crewai)
- [Telemetry](#telemetry)
- [License](#license)
## Why CrewAI?
@@ -243,6 +244,36 @@ pip install dist/*.tar.gz
We're a company developing crewAI and crewAI Enterprise. For a limited time, we are offering consulting to selected customers to give them early access to our enterprise solution.
If you are interested in having access to it and hiring weekly hours with our team, feel free to email us at [joao@crewai.com](mailto:joao@crewai.com).
## Telemetry
CrewAI uses anonymous telemetry to collect usage data with the main purpose of helping us improve the library by focusing our efforts on the most used features, integrations and tools.
There is NO data collected on prompts, task descriptions, agent backstories or goals, tool usage, API calls, responses, any data processed by the agents, or any secrets and environment variables.
Data collected includes:
- Version of crewAI
  - So we can understand how many users are using the latest version
- Version of Python
  - So we can decide which versions to support best
- General OS (e.g. number of CPUs, macOS/Windows/Linux)
  - So we know which OSs we should focus on and whether we could build OS-specific features
- Number of agents and tasks in a crew
  - So we can make sure we are testing internally with similar use cases and educate people on best practices
- Crew process being used
  - So we understand where to focus our efforts
- Whether agents are using memory or allowing delegation
  - So we understand whether to improve these features or perhaps drop them
- Whether tasks are being executed in parallel or sequentially
  - So we understand whether we should focus more on parallel execution
- Language model being used
  - So we can improve support for the most-used models
- Roles of agents in a crew
  - So we understand high-level use cases and can build better tools, integrations and examples for them
- Tool names available
  - So we understand which of the publicly available tools are used the most and can improve them
Users can opt in to sharing the complete telemetry data by setting the `share_crew` attribute to `True` on their crews.
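For example, a minimal sketch of opting in on a crew; `my_agent` and `my_task` are hypothetical placeholders for your own definitions:
```python
from crewai import Crew

# Opt in to sharing complete telemetry data for this crew.
# `my_agent` and `my_task` are assumed to be defined elsewhere.
crew = Crew(
    agents=[my_agent],
    tasks=[my_task],
    share_crew=True,
)
```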
## License
CrewAI is released under the MIT License.

View File

@@ -20,12 +20,14 @@ description: What are crewAI Agents and how to use them.
| **Role** | Defines the agent's function within the crew. It determines the kind of tasks the agent is best suited for. |
| **Goal** | The individual objective that the agent aims to achieve. It guides the agent's decision-making process. |
| **Backstory** | Provides context to the agent's role and goal, enriching the interaction and collaboration dynamics. |
| **LLM** | The language model used by the agent to process and generate text. |
| **Tools** | Set of capabilities or functions that the agent can use to perform tasks. Tools can be shared or exclusive to specific agents. |
| **Function Calling LLM** | The language model this agent uses to call functions; if none is passed, the agent's main LLM is used. |
| **Max Iter** | The maximum number of iterations the agent can perform before being forced to give its best answer. |
| **Max RPM** | The maximum number of requests per minute the agent can make, to avoid hitting rate limits. |
| **Verbose** | This allows you to see what is going on during crew execution. |
| **Allow Delegation** | Agents can delegate tasks or questions to one another, ensuring that each task is handled by the most suitable agent. |
| **Step Callback** | A function that is called after each step of the agent. It can be used to log the agent's actions or to perform other operations, and it will override the crew-level `step_callback`. |
## Creating an Agent
@@ -47,10 +49,13 @@ agent = Agent(
You're currently working on a project to analyze the
performance of our marketing campaigns.""",
tools=[my_tool1, my_tool2],
llm=my_llm,
function_calling_llm=my_llm,
max_iter=10,
max_rpm=10,
verbose=True,
allow_delegation=True
allow_delegation=True,
step_callback=my_intermediate_step_callback
)
```
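The `my_intermediate_step_callback` passed above can be any callable. A minimal sketch, assuming the callback is invoked with a single argument holding the step's output:
```python
# Hypothetical callback for illustration: log whatever each agent step produces.
def my_intermediate_step_callback(step_output):
    print(f"Agent step finished: {step_output}")
```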

View File

@@ -0,0 +1,81 @@
---
title: crewAI Crews
description: Understanding and utilizing crews in the crewAI framework.
---
## What is a Crew?
!!! note "Definition of a Crew"
A crew in crewAI represents a collaborative group of agents working together to achieve a set of tasks. Each crew defines the strategy for task execution, agent collaboration, and the overall workflow.
## Crew Attributes
| Attribute | Description |
| :------------------- | :----------------------------------------------------------- |
| **Tasks** | A list of tasks assigned to the crew. |
| **Agents** | A list of agents that are part of the crew. |
| **Process** | The process flow (e.g., sequential, hierarchical) the crew follows. |
| **Verbose** | The verbosity level for logging during execution. |
| **Manager LLM** | The language model used by the manager agent in a hierarchical process. |
| **Function Calling LLM** | The language model used by all agents in the crew to call functions; if none is passed, each agent's main LLM is used. |
| **Config** | Configuration settings for the crew. |
| **Max RPM** | Maximum requests per minute the crew adheres to during execution. |
| **Language** | Language setting for the crew's operation. |
| **Full Output** | Whether the crew should return the full output with all task outputs or just the final output. |
| **Step Callback** | A function that is called after each step of every agent. It can be used to log the agents' actions or to perform other operations; it won't override an agent-specific `step_callback`. |
| **Share Crew** | Whether you want to share the complete crew information and execution with the crewAI team to help improve the library and allow us to train models. |
!!! note "Crew Max RPM"
The `max_rpm` attribute sets the maximum number of requests per minute the crew can perform to avoid rate limits, and it will override individual agents' `max_rpm` settings if set.
## Creating a Crew
!!! note "Crew Composition"
When assembling a crew, you combine agents with complementary roles and tools, assign tasks, and select a process that dictates their execution order and interaction.
### Example: Assembling a Crew
```python
from crewai import Crew, Agent, Task, Process
from langchain_community.tools import DuckDuckGoSearchRun

# Define agents with specific roles and tools
researcher = Agent(
    role='Senior Research Analyst',
    goal='Discover innovative AI technologies',
    backstory='An analyst with a keen eye for emerging AI breakthroughs.',  # backstory is a required Agent field
    tools=[DuckDuckGoSearchRun()]
)

writer = Agent(
    role='Content Writer',
    goal='Write engaging articles on AI discoveries',
    backstory='A writer who turns technical research into engaging stories.'  # backstory is a required Agent field
)

# Create tasks for the agents
research_task = Task(description='Identify breakthrough AI technologies', agent=researcher)
write_article_task = Task(description='Draft an article on the latest AI technologies', agent=writer)

# Assemble the crew with a sequential process
my_crew = Crew(
    agents=[researcher, writer],
    tasks=[research_task, write_article_task],
    process=Process.sequential,
    full_output=True,
    verbose=True
)
```
## Crew Execution Process
- **Sequential Process**: Tasks are executed one after another, allowing for a linear flow of work.
- **Hierarchical Process**: A manager agent coordinates the crew, delegating tasks and validating outcomes before proceeding (see the sketch below).
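A minimal sketch of a hierarchical crew, reusing the agents and tasks defined above; it assumes the "Manager LLM" attribute is exposed as the `manager_llm` keyword and that any LangChain chat model can be passed:
```python
from crewai import Crew, Process
from langchain_openai import ChatOpenAI  # assumed provider; any LangChain chat model should work

# The manager model delegates tasks and validates outcomes before proceeding.
managed_crew = Crew(
    agents=[researcher, writer],
    tasks=[research_task, write_article_task],
    process=Process.hierarchical,
    manager_llm=ChatOpenAI(model="gpt-4"),  # assumption: keyword for the "Manager LLM" attribute
)
```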
### Kicking Off a Crew
Once your crew is assembled, initiate the workflow with the `kickoff()` method. This starts the execution process according to the defined process flow.
```python
# Start the crew's task execution
result = my_crew.kickoff()
print(result)
```

View File

@@ -5,10 +5,11 @@ description: Guide on integrating CrewAI with various Large Language Models (LLM
## Connect CrewAI to LLMs
!!! note "Default LLM"
By default, crewAI uses OpenAI's GPT-4 model for language processing. However, you can configure your agents to use a different model or API. This guide will show you how to connect your agents to different LLMs.
By default, crewAI uses OpenAI's GPT-4 model for language processing. However, you can configure your agents to use a different model or API. This guide will show you how to connect your agents to different LLMs. You can change the specific GPT model by setting the `OPENAI_MODEL_NAME` environment variable.
CrewAI offers flexibility in connecting to various LLMs, including local models via [Ollama](https://ollama.ai) and different APIs like Azure. It's compatible with all [LangChain LLM](https://python.langchain.com/docs/integrations/llms/) components, enabling diverse integrations for tailored AI solutions.
## Ollama Integration
Ollama is preferred for local LLM integration, offering customization and privacy benefits. It requires installation and configuration, including model adjustments via a Modelfile to optimize performance.
@@ -20,17 +21,16 @@ Ollama is preferred for local LLM integration, offering customization and privac
Instantiate Ollama and pass it to your agents within CrewAI, enhancing them with the local model's capabilities.
```python
from langchain_community.llms import Ollama
# Assuming you have Ollama installed and downloaded the openhermes model
ollama_openhermes = Ollama(model="openhermes")
import os

# Required
os.environ["OPENAI_API_BASE"]='http://localhost:11434/v1'
os.environ["OPENAI_MODEL_NAME"]='openhermes'
os.environ["OPENAI_API_KEY"]=''
local_expert = Agent(
role='Local Expert',
goal='Provide insights about the city',
backstory="A knowledgeable local guide.",
tools=[SearchTools.search_internet, BrowserTools.scrape_and_summarize_website],
llm=ollama_openhermes,
verbose=True
)
```
@@ -40,35 +40,40 @@ You can use environment variables for easy switch between APIs and models, suppo
### Configuration Examples
### Ollama
```sh
OPENAI_API_BASE='http://localhost:11434/v1'
OPENAI_MODEL_NAME='openhermes' # Depending on the model you have available
OPENAI_API_KEY=NA
```
### FastChat
```sh
# Required
OPENAI_API_BASE="http://localhost:8001/v1"
OPENAI_MODEL_NAME='oh-2.5m7b-q51' # Depending on the model you have available
OPENAI_API_KEY=NA
MODEL_NAME='oh-2.5m7b-q51' # Depending on the model you have available
```
### LM Studio
```sh
# Required
OPENAI_API_BASE="http://localhost:8000/v1"
OPENAI_MODEL_NAME=NA
OPENAI_API_KEY=NA
MODEL_NAME=NA
```
### Mistral API
```sh
OPENAI_API_KEY=your-mistral-api-key
OPENAI_API_BASE=https://api.mistral.ai/v1
MODEL_NAME="mistral-small" # Check documentation for available models
OPENAI_MODEL_NAME="mistral-small" # Check documentation for available models
```
### text-gen-web-ui
```sh
# Required
API_BASE_URL=http://localhost:5000
OPENAI_API_BASE=http://localhost:5000/v1
OPENAI_MODEL_NAME=NA
OPENAI_API_KEY=NA
MODEL_NAME=NA
```
### Azure OpenAI

View File

@@ -28,6 +28,11 @@ Cutting-edge framework for orchestrating role-playing, autonomous AI agents. By
Processes
</a>
</li>
<li>
<a href="./core-concepts/Crews">
Crews
</a>
</li>
</ul>
</div>
<div style="width:30%">

View File

@@ -0,0 +1,29 @@
## Telemetry
CrewAI uses anonymous telemetry to collect usage data with the main purpose of helping us improve the library by focusing our efforts on the most used features, integrations and tools.
There is NO data collected on prompts, task descriptions, agent backstories or goals, tool usage, API calls, responses, any data processed by the agents, or any secrets and environment variables.
Data collected includes:
- Version of crewAI
  - So we can understand how many users are using the latest version
- Version of Python
  - So we can decide which versions to support best
- General OS (e.g. number of CPUs, macOS/Windows/Linux)
  - So we know which OSs we should focus on and whether we could build OS-specific features
- Number of agents and tasks in a crew
  - So we can make sure we are testing internally with similar use cases and educate people on best practices
- Crew process being used
  - So we understand where to focus our efforts
- Whether agents are using memory or allowing delegation
  - So we understand whether to improve these features or perhaps drop them
- Whether tasks are being executed in parallel or sequentially
  - So we understand whether we should focus more on parallel execution
- Language model being used
  - So we can improve support for the most-used models
- Roles of agents in a crew
  - So we understand high-level use cases and can build better tools, integrations and examples for them
- Tool names available
  - So we understand which of the publicly available tools are used the most and can improve them
Users can opt in to sharing the complete telemetry data by setting the `share_crew` attribute to `True` on their crews.

View File

@@ -124,6 +124,7 @@ nav:
- Tasks: 'core-concepts/Tasks.md'
- Tools: 'core-concepts/Tools.md'
- Processes: 'core-concepts/Processes.md'
- Crews: 'core-concepts/Crews.md'
- Collaboration: 'core-concepts/Collaboration.md'
- How to Guides:
- Getting Started: 'how-to/Creating-a-Crew-and-kick-it-off.md'
@@ -140,6 +141,8 @@ nav:
- Drafting emails with LangGraph: https://github.com/joaomdmoura/crewAI-examples/tree/main/CrewAI-LangGraph
- Landing Page Generator: https://github.com/joaomdmoura/crewAI-examples/tree/main/landing_page_generator
- Prepare for meetings: https://github.com/joaomdmoura/crewAI-examples/tree/main/prep-for-a-meeting
- Telemetry: 'telemetry/Telemetry.md'
extra_css:
- stylesheets/output.css
- stylesheets/extra.css

549
poetry.lock generated
View File

@@ -202,9 +202,20 @@ files = [
[package.extras]
dev = ["freezegun (>=1.0,<2.0)", "pytest (>=6.0)", "pytest-cov"]
[[package]]
name = "backoff"
version = "2.2.1"
description = "Function decoration for backoff and retry"
optional = false
python-versions = ">=3.7,<4.0"
files = [
{file = "backoff-2.2.1-py3-none-any.whl", hash = "sha256:63579f9a0628e06278f7e47b7d7d5b6ce20dc65c5e96a6f3ca99a6adca0396e8"},
{file = "backoff-2.2.1.tar.gz", hash = "sha256:03f829f5bb1923180821643f8753b0502c3b682293992485b0eef2807afa5cba"},
]
[[package]]
name = "black"
version = "24.1.1"
version = "24.2.0"
description = "The uncompromising code formatter."
optional = false
python-versions = ">=3.8"
@@ -230,7 +241,7 @@ uvloop = ["uvloop (>=0.15.2)"]
type = "git"
url = "https://github.com/psf/black.git"
reference = "stable"
resolved_reference = "e026c93888f91a47a9c9f4e029f3eb07d96375e6"
resolved_reference = "6fdf8a4af28071ed1d079c01122b34c5d587207a"
[[package]]
name = "cairocffi"
@@ -528,6 +539,23 @@ files = [
{file = "defusedxml-0.7.1.tar.gz", hash = "sha256:1bb3032db185915b62d7c6209c5a8792be6a32ab2fedacc84e01b52c51aa3e69"},
]
[[package]]
name = "deprecated"
version = "1.2.14"
description = "Python @deprecated decorator to deprecate old python classes, functions or methods."
optional = false
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*"
files = [
{file = "Deprecated-1.2.14-py2.py3-none-any.whl", hash = "sha256:6fac8b097794a90302bdbb17b9b815e732d3c4720583ff1b198499d78470466c"},
{file = "Deprecated-1.2.14.tar.gz", hash = "sha256:e5323eb936458dccc2582dc6f9c322c852a775a27065ff2b0c4970b9d53d01b3"},
]
[package.dependencies]
wrapt = ">=1.10,<2"
[package.extras]
dev = ["PyTest", "PyTest-Cov", "bump2version (<1)", "sphinx (<2)", "tox"]
[[package]]
name = "distlib"
version = "0.3.8"
@@ -550,6 +578,17 @@ files = [
{file = "distro-1.9.0.tar.gz", hash = "sha256:2fa77c6fd8940f116ee1d6b94a2f90b13b5ea8d019b98bc8bafdcabcdd9bdbed"},
]
[[package]]
name = "docstring-parser"
version = "0.15"
description = "Parse Python docstrings in reST, Google and Numpydoc format"
optional = false
python-versions = ">=3.6,<4.0"
files = [
{file = "docstring_parser-0.15-py3-none-any.whl", hash = "sha256:d1679b86250d269d06a99670924d6bce45adc00b08069dae8c47d98e89b667a9"},
{file = "docstring_parser-0.15.tar.gz", hash = "sha256:48ddc093e8b1865899956fcc03b03e66bb7240c310fac5af81814580c55bf682"},
]
[[package]]
name = "exceptiongroup"
version = "1.2.0"
@@ -683,6 +722,23 @@ python-dateutil = ">=2.8.1"
[package.extras]
dev = ["flake8", "markdown", "twine", "wheel"]
[[package]]
name = "googleapis-common-protos"
version = "1.62.0"
description = "Common protobufs used in Google APIs"
optional = false
python-versions = ">=3.7"
files = [
{file = "googleapis-common-protos-1.62.0.tar.gz", hash = "sha256:83f0ece9f94e5672cced82f592d2a5edf527a96ed1794f0bab36d5735c996277"},
{file = "googleapis_common_protos-1.62.0-py2.py3-none-any.whl", hash = "sha256:4750113612205514f9f6aa4cb00d523a94f3e8c06c5ad2fee466387dc4875f07"},
]
[package.dependencies]
protobuf = ">=3.19.5,<3.20.0 || >3.20.0,<3.20.1 || >3.20.1,<4.21.1 || >4.21.1,<4.21.2 || >4.21.2,<4.21.3 || >4.21.3,<4.21.4 || >4.21.4,<4.21.5 || >4.21.5,<5.0.0.dev0"
[package.extras]
grpc = ["grpcio (>=1.44.0,<2.0.0.dev0)"]
[[package]]
name = "greenlet"
version = "3.0.3"
@@ -756,13 +812,13 @@ test = ["objgraph", "psutil"]
[[package]]
name = "griffe"
version = "0.40.0"
version = "0.40.1"
description = "Signatures for entire Python programs. Extract the structure, the frame, the skeleton of your project, to generate API documentation or find breaking changes in your API."
optional = false
python-versions = ">=3.8"
files = [
{file = "griffe-0.40.0-py3-none-any.whl", hash = "sha256:db1da6d1d8e08cbb20f1a7dee8c09da940540c2d4c1bfa26a9091cf6fc36a9ec"},
{file = "griffe-0.40.0.tar.gz", hash = "sha256:76c4439eaa2737af46ae003c331ab6ca79c5365b552f7b5aed263a3b4125735b"},
{file = "griffe-0.40.1-py3-none-any.whl", hash = "sha256:5b8c023f366fe273e762131fe4bfd141ea56c09b3cb825aa92d06a82681cfd93"},
{file = "griffe-0.40.1.tar.gz", hash = "sha256:66c48a62e2ce5784b6940e603300fcfb807b6f099b94e7f753f1841661fd5c7c"},
]
[package.dependencies]
@@ -826,13 +882,13 @@ socks = ["socksio (==1.*)"]
[[package]]
name = "identify"
version = "2.5.33"
version = "2.5.34"
description = "File identification library for Python"
optional = false
python-versions = ">=3.8"
files = [
{file = "identify-2.5.33-py2.py3-none-any.whl", hash = "sha256:d40ce5fcd762817627670da8a7d8d8e65f24342d14539c59488dc603bf662e34"},
{file = "identify-2.5.33.tar.gz", hash = "sha256:161558f9fe4559e1557e1bff323e8631f6a0e4837f7497767c1782832f16b62d"},
{file = "identify-2.5.34-py2.py3-none-any.whl", hash = "sha256:a4316013779e433d08b96e5eabb7f641e6c7942e4ab5d4c509ebd2e7a8994aed"},
{file = "identify-2.5.34.tar.gz", hash = "sha256:ee17bc9d499899bc9eaec1ac7bf2dc9eedd480db9d88b96d123d3b64a9d34f5d"},
]
[package.extras]
@@ -849,6 +905,25 @@ files = [
{file = "idna-3.6.tar.gz", hash = "sha256:9ecdbbd083b06798ae1e86adcbfe8ab1479cf864e4ee30fe4e46a003d12491ca"},
]
[[package]]
name = "importlib-metadata"
version = "6.11.0"
description = "Read metadata from Python packages"
optional = false
python-versions = ">=3.8"
files = [
{file = "importlib_metadata-6.11.0-py3-none-any.whl", hash = "sha256:f0afba6205ad8f8947c7d338b5342d5db2afbfd82f9cbef7879a9539cc12eb9b"},
{file = "importlib_metadata-6.11.0.tar.gz", hash = "sha256:1231cf92d825c9e03cfc4da076a16de6422c863558229ea0b22b675657463443"},
]
[package.dependencies]
zipp = ">=0.5"
[package.extras]
docs = ["furo", "jaraco.packaging (>=9.3)", "jaraco.tidelift (>=1.4)", "rst.linker (>=1.9)", "sphinx (<7.2.5)", "sphinx (>=3.5)", "sphinx-lint"]
perf = ["ipython"]
testing = ["flufl.flake8", "importlib-resources (>=1.3)", "packaging", "pyfakefs", "pytest (>=6)", "pytest-black (>=0.3.7)", "pytest-checkdocs (>=2.4)", "pytest-cov", "pytest-enabler (>=2.2)", "pytest-mypy (>=0.9.1)", "pytest-perf (>=0.9.2)", "pytest-ruff"]
[[package]]
name = "iniconfig"
version = "2.0.0"
@@ -860,6 +935,26 @@ files = [
{file = "iniconfig-2.0.0.tar.gz", hash = "sha256:2d91e135bf72d31a410b17c16da610a82cb55f6b0477d1a902134b24a455b8b3"},
]
[[package]]
name = "instructor"
version = "0.5.2"
description = "structured outputs for llm"
optional = false
python-versions = ">=3.10,<4.0"
files = [
{file = "instructor-0.5.2-py3-none-any.whl", hash = "sha256:8c7c927f3cbf6cd863eeebceae3f021e27eaca2ceaf9e9f3c8204540a1126160"},
{file = "instructor-0.5.2.tar.gz", hash = "sha256:d8d679eb4624254db615794aaab59840e506fa696bc0181d998ae4f9ded2706d"},
]
[package.dependencies]
aiohttp = ">=3.9.1,<4.0.0"
docstring-parser = ">=0.15,<0.16"
openai = ">=1.1.0,<2.0.0"
pydantic = ">=2.0.2,<3.0.0"
rich = ">=13.7.0,<14.0.0"
tenacity = ">=8.2.3,<9.0.0"
typer = ">=0.9.0,<0.10.0"
[[package]]
name = "isort"
version = "5.13.2"
@@ -918,13 +1013,13 @@ files = [
[[package]]
name = "langchain"
version = "0.1.0"
version = "0.1.6"
description = "Building applications with LLMs through composability"
optional = false
python-versions = ">=3.8.1,<4.0"
files = [
{file = "langchain-0.1.0-py3-none-any.whl", hash = "sha256:8652e74b039333a55c79faff4400b077ba1bd0ddce5255574e42d301c05c1733"},
{file = "langchain-0.1.0.tar.gz", hash = "sha256:d43119f8d3fda2c8ddf8c3a19bd5b94b347e27d1867ff14a921b90bdbed0668a"},
{file = "langchain-0.1.6-py3-none-any.whl", hash = "sha256:925e180fd1ae53b7085e46b3cdc9db04c24ddc6f4ac08f171eea29498d99603a"},
{file = "langchain-0.1.6.tar.gz", hash = "sha256:a885e16c10b9ed11f312eaa6570bc48d27305362b26f6c235cafdcc794e26e71"},
]
[package.dependencies]
@@ -932,9 +1027,9 @@ aiohttp = ">=3.8.3,<4.0.0"
async-timeout = {version = ">=4.0.0,<5.0.0", markers = "python_version < \"3.11\""}
dataclasses-json = ">=0.5.7,<0.7"
jsonpatch = ">=1.33,<2.0"
langchain-community = ">=0.0.9,<0.1"
langchain-core = ">=0.1.7,<0.2"
langsmith = ">=0.0.77,<0.1.0"
langchain-community = ">=0.0.18,<0.1"
langchain-core = ">=0.1.22,<0.2"
langsmith = ">=0.0.83,<0.1"
numpy = ">=1,<2"
pydantic = ">=1,<3"
PyYAML = ">=5.3"
@@ -949,7 +1044,7 @@ cli = ["typer (>=0.9.0,<0.10.0)"]
cohere = ["cohere (>=4,<5)"]
docarray = ["docarray[hnswlib] (>=0.32.0,<0.33.0)"]
embeddings = ["sentence-transformers (>=2,<3)"]
extended-testing = ["aiosqlite (>=0.19.0,<0.20.0)", "aleph-alpha-client (>=2.15.0,<3.0.0)", "anthropic (>=0.3.11,<0.4.0)", "arxiv (>=1.4,<2.0)", "assemblyai (>=0.17.0,<0.18.0)", "atlassian-python-api (>=3.36.0,<4.0.0)", "beautifulsoup4 (>=4,<5)", "bibtexparser (>=1.4.0,<2.0.0)", "cassio (>=0.1.0,<0.2.0)", "chardet (>=5.1.0,<6.0.0)", "cohere (>=4,<5)", "couchbase (>=4.1.9,<5.0.0)", "dashvector (>=1.0.1,<2.0.0)", "databricks-vectorsearch (>=0.21,<0.22)", "datasets (>=2.15.0,<3.0.0)", "dgml-utils (>=0.3.0,<0.4.0)", "esprima (>=4.0.1,<5.0.0)", "faiss-cpu (>=1,<2)", "feedparser (>=6.0.10,<7.0.0)", "fireworks-ai (>=0.9.0,<0.10.0)", "geopandas (>=0.13.1,<0.14.0)", "gitpython (>=3.1.32,<4.0.0)", "google-cloud-documentai (>=2.20.1,<3.0.0)", "gql (>=3.4.1,<4.0.0)", "hologres-vector (>=0.0.6,<0.0.7)", "html2text (>=2020.1.16,<2021.0.0)", "javelin-sdk (>=0.1.8,<0.2.0)", "jinja2 (>=3,<4)", "jq (>=1.4.1,<2.0.0)", "jsonschema (>1)", "langchain-openai (>=0.0.2,<0.1)", "lxml (>=4.9.2,<5.0.0)", "markdownify (>=0.11.6,<0.12.0)", "motor (>=3.3.1,<4.0.0)", "msal (>=1.25.0,<2.0.0)", "mwparserfromhell (>=0.6.4,<0.7.0)", "mwxml (>=0.3.3,<0.4.0)", "newspaper3k (>=0.2.8,<0.3.0)", "numexpr (>=2.8.6,<3.0.0)", "openai (<2)", "openapi-pydantic (>=0.3.2,<0.4.0)", "pandas (>=2.0.1,<3.0.0)", "pdfminer-six (>=20221105,<20221106)", "pgvector (>=0.1.6,<0.2.0)", "praw (>=7.7.1,<8.0.0)", "psychicapi (>=0.8.0,<0.9.0)", "py-trello (>=0.19.0,<0.20.0)", "pymupdf (>=1.22.3,<2.0.0)", "pypdf (>=3.4.0,<4.0.0)", "pypdfium2 (>=4.10.0,<5.0.0)", "pyspark (>=3.4.0,<4.0.0)", "rank-bm25 (>=0.2.2,<0.3.0)", "rapidfuzz (>=3.1.1,<4.0.0)", "rapidocr-onnxruntime (>=1.3.2,<2.0.0)", "requests-toolbelt (>=1.0.0,<2.0.0)", "rspace_client (>=2.5.0,<3.0.0)", "scikit-learn (>=1.2.2,<2.0.0)", "sqlite-vss (>=0.1.2,<0.2.0)", "streamlit (>=1.18.0,<2.0.0)", "sympy (>=1.12,<2.0)", "telethon (>=1.28.5,<2.0.0)", "timescale-vector (>=0.0.1,<0.0.2)", "tqdm (>=4.48.0)", "upstash-redis (>=0.15.0,<0.16.0)", "xata (>=1.0.0a7,<2.0.0)", "xmltodict (>=0.13.0,<0.14.0)"]
extended-testing = ["aiosqlite (>=0.19.0,<0.20.0)", "aleph-alpha-client (>=2.15.0,<3.0.0)", "anthropic (>=0.3.11,<0.4.0)", "arxiv (>=1.4,<2.0)", "assemblyai (>=0.17.0,<0.18.0)", "atlassian-python-api (>=3.36.0,<4.0.0)", "beautifulsoup4 (>=4,<5)", "bibtexparser (>=1.4.0,<2.0.0)", "cassio (>=0.1.0,<0.2.0)", "chardet (>=5.1.0,<6.0.0)", "cohere (>=4,<5)", "couchbase (>=4.1.9,<5.0.0)", "dashvector (>=1.0.1,<2.0.0)", "databricks-vectorsearch (>=0.21,<0.22)", "datasets (>=2.15.0,<3.0.0)", "dgml-utils (>=0.3.0,<0.4.0)", "esprima (>=4.0.1,<5.0.0)", "faiss-cpu (>=1,<2)", "feedparser (>=6.0.10,<7.0.0)", "fireworks-ai (>=0.9.0,<0.10.0)", "geopandas (>=0.13.1,<0.14.0)", "gitpython (>=3.1.32,<4.0.0)", "google-cloud-documentai (>=2.20.1,<3.0.0)", "gql (>=3.4.1,<4.0.0)", "hologres-vector (>=0.0.6,<0.0.7)", "html2text (>=2020.1.16,<2021.0.0)", "javelin-sdk (>=0.1.8,<0.2.0)", "jinja2 (>=3,<4)", "jq (>=1.4.1,<2.0.0)", "jsonschema (>1)", "langchain-openai (>=0.0.2,<0.1)", "lxml (>=4.9.2,<5.0.0)", "markdownify (>=0.11.6,<0.12.0)", "motor (>=3.3.1,<4.0.0)", "msal (>=1.25.0,<2.0.0)", "mwparserfromhell (>=0.6.4,<0.7.0)", "mwxml (>=0.3.3,<0.4.0)", "newspaper3k (>=0.2.8,<0.3.0)", "numexpr (>=2.8.6,<3.0.0)", "openai (<2)", "openapi-pydantic (>=0.3.2,<0.4.0)", "pandas (>=2.0.1,<3.0.0)", "pdfminer-six (>=20221105,<20221106)", "pgvector (>=0.1.6,<0.2.0)", "praw (>=7.7.1,<8.0.0)", "psychicapi (>=0.8.0,<0.9.0)", "py-trello (>=0.19.0,<0.20.0)", "pymupdf (>=1.22.3,<2.0.0)", "pypdf (>=3.4.0,<4.0.0)", "pypdfium2 (>=4.10.0,<5.0.0)", "pyspark (>=3.4.0,<4.0.0)", "rank-bm25 (>=0.2.2,<0.3.0)", "rapidfuzz (>=3.1.1,<4.0.0)", "rapidocr-onnxruntime (>=1.3.2,<2.0.0)", "rdflib (==7.0.0)", "requests-toolbelt (>=1.0.0,<2.0.0)", "rspace_client (>=2.5.0,<3.0.0)", "scikit-learn (>=1.2.2,<2.0.0)", "sqlite-vss (>=0.1.2,<0.2.0)", "streamlit (>=1.18.0,<2.0.0)", "sympy (>=1.12,<2.0)", "telethon (>=1.28.5,<2.0.0)", "timescale-vector (>=0.0.1,<0.0.2)", "tqdm (>=4.48.0)", "upstash-redis (>=0.15.0,<0.16.0)", "xata (>=1.0.0a7,<2.0.0)", "xmltodict (>=0.13.0,<0.14.0)"]
javascript = ["esprima (>=4.0.1,<5.0.0)"]
llms = ["clarifai (>=9.1.0)", "cohere (>=4,<5)", "huggingface_hub (>=0,<1)", "manifest-ml (>=0.0.1,<0.0.2)", "nlpcloud (>=1,<2)", "openai (<2)", "openlm (>=0.0.5,<0.0.6)", "torch (>=1,<3)", "transformers (>=4,<5)"]
openai = ["openai (<2)", "tiktoken (>=0.3.2,<0.6.0)"]
@@ -958,19 +1053,19 @@ text-helpers = ["chardet (>=5.1.0,<6.0.0)"]
[[package]]
name = "langchain-community"
version = "0.0.17"
version = "0.0.19"
description = "Community contributed LangChain integrations."
optional = false
python-versions = ">=3.8.1,<4.0"
files = [
{file = "langchain_community-0.0.17-py3-none-any.whl", hash = "sha256:d503491bbfb691d1b3d10d74f7a69840cee3caf9b58a9a76f053ff925ea76733"},
{file = "langchain_community-0.0.17.tar.gz", hash = "sha256:ab957b34a562e0199b2ecf050bdc987c4fe889b2ac9f22b75a9fac8b9e30f53a"},
{file = "langchain_community-0.0.19-py3-none-any.whl", hash = "sha256:ebff8daa0110d53555f4963f1f739b85f9ca63ef82598ece5f5c3f73fe0aa82e"},
{file = "langchain_community-0.0.19.tar.gz", hash = "sha256:5d18ad9e188b10aaba6361fb2a747cf29b64b21ffb8061933fec090187ca39c2"},
]
[package.dependencies]
aiohttp = ">=3.8.3,<4.0.0"
dataclasses-json = ">=0.5.7,<0.7"
langchain-core = ">=0.1.16,<0.2"
langchain-core = ">=0.1.21,<0.2"
langsmith = ">=0.0.83,<0.1"
numpy = ">=1,<2"
PyYAML = ">=5.3"
@@ -980,23 +1075,23 @@ tenacity = ">=8.1.0,<9.0.0"
[package.extras]
cli = ["typer (>=0.9.0,<0.10.0)"]
extended-testing = ["aiosqlite (>=0.19.0,<0.20.0)", "aleph-alpha-client (>=2.15.0,<3.0.0)", "anthropic (>=0.3.11,<0.4.0)", "arxiv (>=1.4,<2.0)", "assemblyai (>=0.17.0,<0.18.0)", "atlassian-python-api (>=3.36.0,<4.0.0)", "azure-ai-documentintelligence (>=1.0.0b1,<2.0.0)", "beautifulsoup4 (>=4,<5)", "bibtexparser (>=1.4.0,<2.0.0)", "cassio (>=0.1.0,<0.2.0)", "chardet (>=5.1.0,<6.0.0)", "cohere (>=4,<5)", "dashvector (>=1.0.1,<2.0.0)", "databricks-vectorsearch (>=0.21,<0.22)", "datasets (>=2.15.0,<3.0.0)", "dgml-utils (>=0.3.0,<0.4.0)", "elasticsearch (>=8.12.0,<9.0.0)", "esprima (>=4.0.1,<5.0.0)", "faiss-cpu (>=1,<2)", "feedparser (>=6.0.10,<7.0.0)", "fireworks-ai (>=0.9.0,<0.10.0)", "geopandas (>=0.13.1,<0.14.0)", "gitpython (>=3.1.32,<4.0.0)", "google-cloud-documentai (>=2.20.1,<3.0.0)", "gql (>=3.4.1,<4.0.0)", "gradientai (>=1.4.0,<2.0.0)", "hdbcli (>=2.19.21,<3.0.0)", "hologres-vector (>=0.0.6,<0.0.7)", "html2text (>=2020.1.16,<2021.0.0)", "httpx (>=0.24.1,<0.25.0)", "javelin-sdk (>=0.1.8,<0.2.0)", "jinja2 (>=3,<4)", "jq (>=1.4.1,<2.0.0)", "jsonschema (>1)", "lxml (>=4.9.2,<5.0.0)", "markdownify (>=0.11.6,<0.12.0)", "motor (>=3.3.1,<4.0.0)", "msal (>=1.25.0,<2.0.0)", "mwparserfromhell (>=0.6.4,<0.7.0)", "mwxml (>=0.3.3,<0.4.0)", "newspaper3k (>=0.2.8,<0.3.0)", "numexpr (>=2.8.6,<3.0.0)", "oci (>=2.119.1,<3.0.0)", "openai (<2)", "openapi-pydantic (>=0.3.2,<0.4.0)", "oracle-ads (>=2.9.1,<3.0.0)", "pandas (>=2.0.1,<3.0.0)", "pdfminer-six (>=20221105,<20221106)", "pgvector (>=0.1.6,<0.2.0)", "praw (>=7.7.1,<8.0.0)", "psychicapi (>=0.8.0,<0.9.0)", "py-trello (>=0.19.0,<0.20.0)", "pymupdf (>=1.22.3,<2.0.0)", "pypdf (>=3.4.0,<4.0.0)", "pypdfium2 (>=4.10.0,<5.0.0)", "pyspark (>=3.4.0,<4.0.0)", "rank-bm25 (>=0.2.2,<0.3.0)", "rapidfuzz (>=3.1.1,<4.0.0)", "rapidocr-onnxruntime (>=1.3.2,<2.0.0)", "rdflib (==7.0.0)", "requests-toolbelt (>=1.0.0,<2.0.0)", "rspace_client (>=2.5.0,<3.0.0)", "scikit-learn (>=1.2.2,<2.0.0)", "sqlite-vss (>=0.1.2,<0.2.0)", "streamlit (>=1.18.0,<2.0.0)", "sympy (>=1.12,<2.0)", "telethon (>=1.28.5,<2.0.0)", "timescale-vector (>=0.0.1,<0.0.2)", "tqdm (>=4.48.0)", "upstash-redis (>=0.15.0,<0.16.0)", "xata (>=1.0.0a7,<2.0.0)", "xmltodict (>=0.13.0,<0.14.0)", "zhipuai (>=1.0.7,<2.0.0)"]
extended-testing = ["aiosqlite (>=0.19.0,<0.20.0)", "aleph-alpha-client (>=2.15.0,<3.0.0)", "anthropic (>=0.3.11,<0.4.0)", "arxiv (>=1.4,<2.0)", "assemblyai (>=0.17.0,<0.18.0)", "atlassian-python-api (>=3.36.0,<4.0.0)", "azure-ai-documentintelligence (>=1.0.0b1,<2.0.0)", "beautifulsoup4 (>=4,<5)", "bibtexparser (>=1.4.0,<2.0.0)", "cassio (>=0.1.0,<0.2.0)", "chardet (>=5.1.0,<6.0.0)", "cohere (>=4,<5)", "databricks-vectorsearch (>=0.21,<0.22)", "datasets (>=2.15.0,<3.0.0)", "dgml-utils (>=0.3.0,<0.4.0)", "elasticsearch (>=8.12.0,<9.0.0)", "esprima (>=4.0.1,<5.0.0)", "faiss-cpu (>=1,<2)", "feedparser (>=6.0.10,<7.0.0)", "fireworks-ai (>=0.9.0,<0.10.0)", "geopandas (>=0.13.1,<0.14.0)", "gitpython (>=3.1.32,<4.0.0)", "google-cloud-documentai (>=2.20.1,<3.0.0)", "gql (>=3.4.1,<4.0.0)", "gradientai (>=1.4.0,<2.0.0)", "hdbcli (>=2.19.21,<3.0.0)", "hologres-vector (>=0.0.6,<0.0.7)", "html2text (>=2020.1.16,<2021.0.0)", "httpx (>=0.24.1,<0.25.0)", "javelin-sdk (>=0.1.8,<0.2.0)", "jinja2 (>=3,<4)", "jq (>=1.4.1,<2.0.0)", "jsonschema (>1)", "lxml (>=4.9.2,<5.0.0)", "markdownify (>=0.11.6,<0.12.0)", "motor (>=3.3.1,<4.0.0)", "msal (>=1.25.0,<2.0.0)", "mwparserfromhell (>=0.6.4,<0.7.0)", "mwxml (>=0.3.3,<0.4.0)", "newspaper3k (>=0.2.8,<0.3.0)", "numexpr (>=2.8.6,<3.0.0)", "nvidia-riva-client (>=2.14.0,<3.0.0)", "oci (>=2.119.1,<3.0.0)", "openai (<2)", "openapi-pydantic (>=0.3.2,<0.4.0)", "oracle-ads (>=2.9.1,<3.0.0)", "pandas (>=2.0.1,<3.0.0)", "pdfminer-six (>=20221105,<20221106)", "pgvector (>=0.1.6,<0.2.0)", "praw (>=7.7.1,<8.0.0)", "psychicapi (>=0.8.0,<0.9.0)", "py-trello (>=0.19.0,<0.20.0)", "pymupdf (>=1.22.3,<2.0.0)", "pypdf (>=3.4.0,<4.0.0)", "pypdfium2 (>=4.10.0,<5.0.0)", "pyspark (>=3.4.0,<4.0.0)", "rank-bm25 (>=0.2.2,<0.3.0)", "rapidfuzz (>=3.1.1,<4.0.0)", "rapidocr-onnxruntime (>=1.3.2,<2.0.0)", "rdflib (==7.0.0)", "requests-toolbelt (>=1.0.0,<2.0.0)", "rspace_client (>=2.5.0,<3.0.0)", "scikit-learn (>=1.2.2,<2.0.0)", "sqlite-vss (>=0.1.2,<0.2.0)", "streamlit (>=1.18.0,<2.0.0)", "sympy (>=1.12,<2.0)", "telethon (>=1.28.5,<2.0.0)", "timescale-vector (>=0.0.1,<0.0.2)", "tqdm (>=4.48.0)", "upstash-redis (>=0.15.0,<0.16.0)", "xata (>=1.0.0a7,<2.0.0)", "xmltodict (>=0.13.0,<0.14.0)", "zhipuai (>=1.0.7,<2.0.0)"]
[[package]]
name = "langchain-core"
version = "0.1.18"
version = "0.1.22"
description = "Building applications with LLMs through composability"
optional = false
python-versions = ">=3.8.1,<4.0"
files = [
{file = "langchain_core-0.1.18-py3-none-any.whl", hash = "sha256:5a60dc3c391b33834fb9c8b072abd7a0df4cbba8ce88eb1bcb288844000ab759"},
{file = "langchain_core-0.1.18.tar.gz", hash = "sha256:ad470b21cdfdc75e829cd91c8d8eb7e0438ab8ddb5b50828125ff7ada121ee7b"},
{file = "langchain_core-0.1.22-py3-none-any.whl", hash = "sha256:d1263c2707ce18bb13654c88f891e53f39edec9b11ff7d0d0f23fd920927b2d6"},
{file = "langchain_core-0.1.22.tar.gz", hash = "sha256:deac12b3e42a08bbbaa2acf83d5f8dd2d5513256d8daf0e853e9d68ff4c99d79"},
]
[package.dependencies]
anyio = ">=3,<5"
jsonpatch = ">=1.33,<2.0"
langsmith = ">=0.0.83,<0.1"
langsmith = ">=0.0.87,<0.0.88"
packaging = ">=23.2,<24.0"
pydantic = ">=1,<3"
PyYAML = ">=5.3"
@@ -1025,13 +1120,13 @@ tiktoken = ">=0.5.2,<0.6.0"
[[package]]
name = "langsmith"
version = "0.0.86"
version = "0.0.87"
description = "Client library to connect to the LangSmith LLM Tracing and Evaluation Platform."
optional = false
python-versions = ">=3.8.1,<4.0"
files = [
{file = "langsmith-0.0.86-py3-none-any.whl", hash = "sha256:7af15c36edb8c9fd9ae5c6d4fb940eb1da668b630a703d63c90c91e9be53aefb"},
{file = "langsmith-0.0.86.tar.gz", hash = "sha256:c1572824664810c4425b17f2d1e9a59d53992e6898df22a37236c62d3c80f59e"},
{file = "langsmith-0.0.87-py3-none-any.whl", hash = "sha256:8903d3811b9fc89eb18f5961c8e6935fbd2d0f119884fbf30dc70b8f8f4121fc"},
{file = "langsmith-0.0.87.tar.gz", hash = "sha256:36c4cc47e5b54be57d038036a30fb19ce6e4c73048cd7a464b8f25b459694d34"},
]
[package.dependencies]
@@ -1053,6 +1148,30 @@ files = [
docs = ["mdx-gh-links (>=0.2)", "mkdocs (>=1.5)", "mkdocs-gen-files", "mkdocs-literate-nav", "mkdocs-nature (>=0.6)", "mkdocs-section-index", "mkdocstrings[python]"]
testing = ["coverage", "pyyaml"]
[[package]]
name = "markdown-it-py"
version = "3.0.0"
description = "Python port of markdown-it. Markdown parsing, done right!"
optional = false
python-versions = ">=3.8"
files = [
{file = "markdown-it-py-3.0.0.tar.gz", hash = "sha256:e3f60a94fa066dc52ec76661e37c851cb232d92f9886b15cb560aaada2df8feb"},
{file = "markdown_it_py-3.0.0-py3-none-any.whl", hash = "sha256:355216845c60bd96232cd8d8c40e8f9765cc86f46880e43a8fd22dc1a1a8cab1"},
]
[package.dependencies]
mdurl = ">=0.1,<1.0"
[package.extras]
benchmarking = ["psutil", "pytest", "pytest-benchmark"]
code-style = ["pre-commit (>=3.0,<4.0)"]
compare = ["commonmark (>=0.9,<1.0)", "markdown (>=3.4,<4.0)", "mistletoe (>=1.0,<2.0)", "mistune (>=2.0,<3.0)", "panflute (>=2.3,<3.0)"]
linkify = ["linkify-it-py (>=1,<3)"]
plugins = ["mdit-py-plugins"]
profiling = ["gprof2dot"]
rtd = ["jupyter_sphinx", "mdit-py-plugins", "myst-parser", "pyyaml", "sphinx", "sphinx-copybutton", "sphinx-design", "sphinx_book_theme"]
testing = ["coverage", "pytest", "pytest-cov", "pytest-regressions"]
[[package]]
name = "markupsafe"
version = "2.1.5"
@@ -1142,6 +1261,17 @@ docs = ["alabaster (==0.7.15)", "autodocsumm (==0.2.12)", "sphinx (==7.2.6)", "s
lint = ["pre-commit (>=2.4,<4.0)"]
tests = ["pytest", "pytz", "simplejson"]
[[package]]
name = "mdurl"
version = "0.1.2"
description = "Markdown URL utilities"
optional = false
python-versions = ">=3.7"
files = [
{file = "mdurl-0.1.2-py3-none-any.whl", hash = "sha256:84008a41e51615a49fc9966191ff91509e3c40b939176e643fd50a5c2196b8f8"},
{file = "mdurl-0.1.2.tar.gz", hash = "sha256:bb413d29f5eea38f31dd4754dd7377d4465116fb207585f97bf925588687c1ba"},
]
[[package]]
name = "mergedeep"
version = "1.3.4"
@@ -1200,13 +1330,13 @@ mkdocs = ">=1.1"
[[package]]
name = "mkdocs-material"
version = "9.5.7"
version = "9.5.9"
description = "Documentation that simply works"
optional = false
python-versions = ">=3.8"
files = [
{file = "mkdocs_material-9.5.7-py3-none-any.whl", hash = "sha256:0be8ce8bcfebb52bae9b00cf9b851df45b8a92d629afcfd7f2c09b2dfa155ea3"},
{file = "mkdocs_material-9.5.7.tar.gz", hash = "sha256:16110292575d88a338d2961f3cb665cf12943ff8829e551a9b364f24019e46af"},
{file = "mkdocs_material-9.5.9-py3-none-any.whl", hash = "sha256:a5d62b73b3b74349e45472bfadc129c871dd2d4add68d84819580597b2f50d5d"},
{file = "mkdocs_material-9.5.9.tar.gz", hash = "sha256:635df543c01c25c412d6c22991872267723737d5a2f062490f33b2da1c013c6d"},
]
[package.dependencies]
@@ -1225,7 +1355,7 @@ regex = ">=2022.4"
requests = ">=2.26,<3.0"
[package.extras]
git = ["mkdocs-git-committers-plugin-2 (>=1.1,<2.0)", "mkdocs-git-revision-date-localized-plugin (>=1.2,<2.0)"]
git = ["mkdocs-git-committers-plugin-2 (>=1.1,<2.0)", "mkdocs-git-revision-date-localized-plugin (>=1.2.4,<2.0)"]
imaging = ["cairosvg (>=2.6,<3.0)", "pillow (>=10.2,<11.0)"]
recommended = ["mkdocs-minify-plugin (>=0.7,<1.0)", "mkdocs-redirects (>=1.2,<2.0)", "mkdocs-rss-plugin (>=1.6,<2.0)"]
@@ -1450,13 +1580,13 @@ files = [
[[package]]
name = "openai"
version = "1.11.1"
version = "1.12.0"
description = "The official Python library for the openai API"
optional = false
python-versions = ">=3.7.1"
files = [
{file = "openai-1.11.1-py3-none-any.whl", hash = "sha256:e0f388ce499f53f58079d0c1f571f356f2b168b84d0d24a412506b6abc714980"},
{file = "openai-1.11.1.tar.gz", hash = "sha256:f66b8fe431af43e09594147ef3cdcb79758285de72ebafd52be9700a2af41e99"},
{file = "openai-1.12.0-py3-none-any.whl", hash = "sha256:a54002c814e05222e413664f651b5916714e4700d041d5cf5724d3ae1a3e3481"},
{file = "openai-1.12.0.tar.gz", hash = "sha256:99c5d257d09ea6533d689d1cc77caa0ac679fa21efef8893d8b0832a86877f1b"},
]
[package.dependencies]
@@ -1471,6 +1601,101 @@ typing-extensions = ">=4.7,<5"
[package.extras]
datalib = ["numpy (>=1)", "pandas (>=1.2.3)", "pandas-stubs (>=1.1.0.11)"]
[[package]]
name = "opentelemetry-api"
version = "1.22.0"
description = "OpenTelemetry Python API"
optional = false
python-versions = ">=3.7"
files = [
{file = "opentelemetry_api-1.22.0-py3-none-any.whl", hash = "sha256:43621514301a7e9f5d06dd8013a1b450f30c2e9372b8e30aaeb4562abf2ce034"},
{file = "opentelemetry_api-1.22.0.tar.gz", hash = "sha256:15ae4ca925ecf9cfdfb7a709250846fbb08072260fca08ade78056c502b86bed"},
]
[package.dependencies]
deprecated = ">=1.2.6"
importlib-metadata = ">=6.0,<7.0"
[[package]]
name = "opentelemetry-exporter-otlp-proto-common"
version = "1.22.0"
description = "OpenTelemetry Protobuf encoding"
optional = false
python-versions = ">=3.7"
files = [
{file = "opentelemetry_exporter_otlp_proto_common-1.22.0-py3-none-any.whl", hash = "sha256:3f2538bec5312587f8676c332b3747f54c89fe6364803a807e217af4603201fa"},
{file = "opentelemetry_exporter_otlp_proto_common-1.22.0.tar.gz", hash = "sha256:71ae2f81bc6d6fe408d06388826edc8933759b2ca3a97d24054507dc7cfce52d"},
]
[package.dependencies]
backoff = {version = ">=1.10.0,<3.0.0", markers = "python_version >= \"3.7\""}
opentelemetry-proto = "1.22.0"
[[package]]
name = "opentelemetry-exporter-otlp-proto-http"
version = "1.22.0"
description = "OpenTelemetry Collector Protobuf over HTTP Exporter"
optional = false
python-versions = ">=3.7"
files = [
{file = "opentelemetry_exporter_otlp_proto_http-1.22.0-py3-none-any.whl", hash = "sha256:e002e842190af45b91dc55a97789d0b98e4308c88d886b16049ee90e17a4d396"},
{file = "opentelemetry_exporter_otlp_proto_http-1.22.0.tar.gz", hash = "sha256:79ed108981ec68d5f7985355bca32003c2f3a5be1534a96d62d5861b758a82f4"},
]
[package.dependencies]
backoff = {version = ">=1.10.0,<3.0.0", markers = "python_version >= \"3.7\""}
deprecated = ">=1.2.6"
googleapis-common-protos = ">=1.52,<2.0"
opentelemetry-api = ">=1.15,<2.0"
opentelemetry-exporter-otlp-proto-common = "1.22.0"
opentelemetry-proto = "1.22.0"
opentelemetry-sdk = ">=1.22.0,<1.23.0"
requests = ">=2.7,<3.0"
[package.extras]
test = ["responses (==0.22.0)"]
[[package]]
name = "opentelemetry-proto"
version = "1.22.0"
description = "OpenTelemetry Python Proto"
optional = false
python-versions = ">=3.7"
files = [
{file = "opentelemetry_proto-1.22.0-py3-none-any.whl", hash = "sha256:ce7188d22c75b6d0fe53e7fb58501613d0feade5139538e79dedd9420610fa0c"},
{file = "opentelemetry_proto-1.22.0.tar.gz", hash = "sha256:9ec29169286029f17ca34ec1f3455802ffb90131642d2f545ece9a63e8f69003"},
]
[package.dependencies]
protobuf = ">=3.19,<5.0"
[[package]]
name = "opentelemetry-sdk"
version = "1.22.0"
description = "OpenTelemetry Python SDK"
optional = false
python-versions = ">=3.7"
files = [
{file = "opentelemetry_sdk-1.22.0-py3-none-any.whl", hash = "sha256:a730555713d7c8931657612a88a141e3a4fe6eb5523d9e2d5a8b1e673d76efa6"},
{file = "opentelemetry_sdk-1.22.0.tar.gz", hash = "sha256:45267ac1f38a431fc2eb5d6e0c0d83afc0b78de57ac345488aa58c28c17991d0"},
]
[package.dependencies]
opentelemetry-api = "1.22.0"
opentelemetry-semantic-conventions = "0.43b0"
typing-extensions = ">=3.7.4"
[[package]]
name = "opentelemetry-semantic-conventions"
version = "0.43b0"
description = "OpenTelemetry Semantic Conventions"
optional = false
python-versions = ">=3.7"
files = [
{file = "opentelemetry_semantic_conventions-0.43b0-py3-none-any.whl", hash = "sha256:291284d7c1bf15fdaddf309b3bd6d3b7ce12a253cec6d27144439819a15d8445"},
{file = "opentelemetry_semantic_conventions-0.43b0.tar.gz", hash = "sha256:b9576fb890df479626fa624e88dde42d3d60b8b6c8ae1152ad157a8b97358635"},
]
[[package]]
name = "packaging"
version = "23.2"
@@ -1620,13 +1845,13 @@ testing = ["pytest", "pytest-benchmark"]
[[package]]
name = "pre-commit"
version = "3.6.0"
version = "3.6.1"
description = "A framework for managing and maintaining multi-language pre-commit hooks."
optional = false
python-versions = ">=3.9"
files = [
{file = "pre_commit-3.6.0-py2.py3-none-any.whl", hash = "sha256:c255039ef399049a5544b6ce13d135caba8f2c28c3b4033277a788f434308376"},
{file = "pre_commit-3.6.0.tar.gz", hash = "sha256:d30bad9abf165f7785c15a21a1f46da7d0677cb00ee7ff4c579fd38922efe15d"},
{file = "pre_commit-3.6.1-py2.py3-none-any.whl", hash = "sha256:9fe989afcf095d2c4796ce7c553cf28d4d4a9b9346de3cda079bcf40748454a4"},
{file = "pre_commit-3.6.1.tar.gz", hash = "sha256:c90961d8aa706f75d60935aba09469a6b0bcb8345f127c3fbee4bdc5f114cf4b"},
]
[package.dependencies]
@@ -1636,6 +1861,26 @@ nodeenv = ">=0.11.1"
pyyaml = ">=5.1"
virtualenv = ">=20.10.0"
[[package]]
name = "protobuf"
version = "4.25.2"
description = ""
optional = false
python-versions = ">=3.8"
files = [
{file = "protobuf-4.25.2-cp310-abi3-win32.whl", hash = "sha256:b50c949608682b12efb0b2717f53256f03636af5f60ac0c1d900df6213910fd6"},
{file = "protobuf-4.25.2-cp310-abi3-win_amd64.whl", hash = "sha256:8f62574857ee1de9f770baf04dde4165e30b15ad97ba03ceac65f760ff018ac9"},
{file = "protobuf-4.25.2-cp37-abi3-macosx_10_9_universal2.whl", hash = "sha256:2db9f8fa64fbdcdc93767d3cf81e0f2aef176284071507e3ede160811502fd3d"},
{file = "protobuf-4.25.2-cp37-abi3-manylinux2014_aarch64.whl", hash = "sha256:10894a2885b7175d3984f2be8d9850712c57d5e7587a2410720af8be56cdaf62"},
{file = "protobuf-4.25.2-cp37-abi3-manylinux2014_x86_64.whl", hash = "sha256:fc381d1dd0516343f1440019cedf08a7405f791cd49eef4ae1ea06520bc1c020"},
{file = "protobuf-4.25.2-cp38-cp38-win32.whl", hash = "sha256:33a1aeef4b1927431d1be780e87b641e322b88d654203a9e9d93f218ee359e61"},
{file = "protobuf-4.25.2-cp38-cp38-win_amd64.whl", hash = "sha256:47f3de503fe7c1245f6f03bea7e8d3ec11c6c4a2ea9ef910e3221c8a15516d62"},
{file = "protobuf-4.25.2-cp39-cp39-win32.whl", hash = "sha256:5e5c933b4c30a988b52e0b7c02641760a5ba046edc5e43d3b94a74c9fc57c1b3"},
{file = "protobuf-4.25.2-cp39-cp39-win_amd64.whl", hash = "sha256:d66a769b8d687df9024f2985d5137a337f957a0916cf5464d1513eee96a63ff0"},
{file = "protobuf-4.25.2-py3-none-any.whl", hash = "sha256:a8b7a98d4ce823303145bf3c1a8bdb0f2f4642a414b196f04ad9853ed0c8f830"},
{file = "protobuf-4.25.2.tar.gz", hash = "sha256:fe599e175cb347efc8ee524bcd4b902d11f7262c0e569ececcb89995c15f0a5e"},
]
[[package]]
name = "pycparser"
version = "2.21"
@@ -2080,20 +2325,38 @@ urllib3 = ">=1.21.1,<3"
socks = ["PySocks (>=1.5.6,!=1.5.7)"]
use-chardet-on-py3 = ["chardet (>=3.0.2,<6)"]
[[package]]
name = "rich"
version = "13.7.0"
description = "Render rich text, tables, progress bars, syntax highlighting, markdown and more to the terminal"
optional = false
python-versions = ">=3.7.0"
files = [
{file = "rich-13.7.0-py3-none-any.whl", hash = "sha256:6da14c108c4866ee9520bbffa71f6fe3962e193b7da68720583850cd4548e235"},
{file = "rich-13.7.0.tar.gz", hash = "sha256:5cb5123b5cf9ee70584244246816e9114227e0b98ad9176eede6ad54bf5403fa"},
]
[package.dependencies]
markdown-it-py = ">=2.2.0"
pygments = ">=2.13.0,<3.0.0"
[package.extras]
jupyter = ["ipywidgets (>=7.5.1,<9)"]
[[package]]
name = "setuptools"
version = "69.0.3"
version = "69.1.0"
description = "Easily download, build, install, upgrade, and uninstall Python packages"
optional = false
python-versions = ">=3.8"
files = [
{file = "setuptools-69.0.3-py3-none-any.whl", hash = "sha256:385eb4edd9c9d5c17540511303e39a147ce2fc04bc55289c322b9e5904fe2c05"},
{file = "setuptools-69.0.3.tar.gz", hash = "sha256:be1af57fc409f93647f2e8e4573a142ed38724b8cdd389706a867bb4efcf1e78"},
{file = "setuptools-69.1.0-py3-none-any.whl", hash = "sha256:c054629b81b946d63a9c6e732bc8b2513a7c3ea645f11d0139a2191d735c60c6"},
{file = "setuptools-69.1.0.tar.gz", hash = "sha256:850894c4195f09c4ed30dba56213bf7c3f21d86ed6bdaafb5df5972593bfc401"},
]
[package.extras]
docs = ["furo", "jaraco.packaging (>=9.3)", "jaraco.tidelift (>=1.4)", "pygments-github-lexers (==0.0.5)", "rst.linker (>=1.9)", "sphinx (<7.2.5)", "sphinx (>=3.5)", "sphinx-favicon", "sphinx-inline-tabs", "sphinx-lint", "sphinx-notfound-page (>=1,<2)", "sphinx-reredirects", "sphinxcontrib-towncrier"]
testing = ["build[virtualenv]", "filelock (>=3.4.0)", "flake8-2020", "ini2toml[lite] (>=0.9)", "jaraco.develop (>=7.21)", "jaraco.envs (>=2.2)", "jaraco.path (>=3.2.0)", "pip (>=19.1)", "pytest (>=6)", "pytest-black (>=0.3.7)", "pytest-checkdocs (>=2.4)", "pytest-cov", "pytest-enabler (>=2.2)", "pytest-mypy (>=0.9.1)", "pytest-perf", "pytest-ruff", "pytest-timeout", "pytest-xdist", "tomli-w (>=1.0.0)", "virtualenv (>=13.0.0)", "wheel"]
testing = ["build[virtualenv]", "filelock (>=3.4.0)", "flake8-2020", "ini2toml[lite] (>=0.9)", "jaraco.develop (>=7.21)", "jaraco.envs (>=2.2)", "jaraco.path (>=3.2.0)", "pip (>=19.1)", "pytest (>=6)", "pytest-checkdocs (>=2.4)", "pytest-cov", "pytest-enabler (>=2.2)", "pytest-home (>=0.5)", "pytest-mypy (>=0.9.1)", "pytest-perf", "pytest-ruff (>=0.2.1)", "pytest-timeout", "pytest-xdist", "tomli-w (>=1.0.0)", "virtualenv (>=13.0.0)", "wheel"]
testing-integration = ["build[virtualenv] (>=1.0.3)", "filelock (>=3.4.0)", "jaraco.envs (>=2.2)", "jaraco.path (>=3.2.0)", "packaging (>=23.1)", "pytest", "pytest-enabler", "pytest-xdist", "tomli", "virtualenv (>=13.0.0)", "wheel"]
[[package]]
@@ -2120,60 +2383,60 @@ files = [
[[package]]
name = "sqlalchemy"
version = "2.0.25"
version = "2.0.26"
description = "Database Abstraction Library"
optional = false
python-versions = ">=3.7"
files = [
{file = "SQLAlchemy-2.0.25-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:4344d059265cc8b1b1be351bfb88749294b87a8b2bbe21dfbe066c4199541ebd"},
{file = "SQLAlchemy-2.0.25-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:6f9e2e59cbcc6ba1488404aad43de005d05ca56e069477b33ff74e91b6319735"},
{file = "SQLAlchemy-2.0.25-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:84daa0a2055df9ca0f148a64fdde12ac635e30edbca80e87df9b3aaf419e144a"},
{file = "SQLAlchemy-2.0.25-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:bc8b7dabe8e67c4832891a5d322cec6d44ef02f432b4588390017f5cec186a84"},
{file = "SQLAlchemy-2.0.25-cp310-cp310-musllinux_1_1_aarch64.whl", hash = "sha256:f5693145220517b5f42393e07a6898acdfe820e136c98663b971906120549da5"},
{file = "SQLAlchemy-2.0.25-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:db854730a25db7c956423bb9fb4bdd1216c839a689bf9cc15fada0a7fb2f4570"},
{file = "SQLAlchemy-2.0.25-cp310-cp310-win32.whl", hash = "sha256:14a6f68e8fc96e5e8f5647ef6cda6250c780612a573d99e4d881581432ef1669"},
{file = "SQLAlchemy-2.0.25-cp310-cp310-win_amd64.whl", hash = "sha256:87f6e732bccd7dcf1741c00f1ecf33797383128bd1c90144ac8adc02cbb98643"},
{file = "SQLAlchemy-2.0.25-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:342d365988ba88ada8af320d43df4e0b13a694dbd75951f537b2d5e4cb5cd002"},
{file = "SQLAlchemy-2.0.25-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:f37c0caf14b9e9b9e8f6dbc81bc56db06acb4363eba5a633167781a48ef036ed"},
{file = "SQLAlchemy-2.0.25-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:aa9373708763ef46782d10e950b49d0235bfe58facebd76917d3f5cbf5971aed"},
{file = "SQLAlchemy-2.0.25-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:d24f571990c05f6b36a396218f251f3e0dda916e0c687ef6fdca5072743208f5"},
{file = "SQLAlchemy-2.0.25-cp311-cp311-musllinux_1_1_aarch64.whl", hash = "sha256:75432b5b14dc2fff43c50435e248b45c7cdadef73388e5610852b95280ffd0e9"},
{file = "SQLAlchemy-2.0.25-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:884272dcd3ad97f47702965a0e902b540541890f468d24bd1d98bcfe41c3f018"},
{file = "SQLAlchemy-2.0.25-cp311-cp311-win32.whl", hash = "sha256:e607cdd99cbf9bb80391f54446b86e16eea6ad309361942bf88318bcd452363c"},
{file = "SQLAlchemy-2.0.25-cp311-cp311-win_amd64.whl", hash = "sha256:7d505815ac340568fd03f719446a589162d55c52f08abd77ba8964fbb7eb5b5f"},
{file = "SQLAlchemy-2.0.25-cp312-cp312-macosx_10_9_x86_64.whl", hash = "sha256:0dacf67aee53b16f365c589ce72e766efaabd2b145f9de7c917777b575e3659d"},
{file = "SQLAlchemy-2.0.25-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:b801154027107461ee992ff4b5c09aa7cc6ec91ddfe50d02bca344918c3265c6"},
{file = "SQLAlchemy-2.0.25-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:59a21853f5daeb50412d459cfb13cb82c089ad4c04ec208cd14dddd99fc23b39"},
{file = "SQLAlchemy-2.0.25-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:29049e2c299b5ace92cbed0c1610a7a236f3baf4c6b66eb9547c01179f638ec5"},
{file = "SQLAlchemy-2.0.25-cp312-cp312-musllinux_1_1_aarch64.whl", hash = "sha256:b64b183d610b424a160b0d4d880995e935208fc043d0302dd29fee32d1ee3f95"},
{file = "SQLAlchemy-2.0.25-cp312-cp312-musllinux_1_1_x86_64.whl", hash = "sha256:4f7a7d7fcc675d3d85fbf3b3828ecd5990b8d61bd6de3f1b260080b3beccf215"},
{file = "SQLAlchemy-2.0.25-cp312-cp312-win32.whl", hash = "sha256:cf18ff7fc9941b8fc23437cc3e68ed4ebeff3599eec6ef5eebf305f3d2e9a7c2"},
{file = "SQLAlchemy-2.0.25-cp312-cp312-win_amd64.whl", hash = "sha256:91f7d9d1c4dd1f4f6e092874c128c11165eafcf7c963128f79e28f8445de82d5"},
{file = "SQLAlchemy-2.0.25-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:bb209a73b8307f8fe4fe46f6ad5979649be01607f11af1eb94aa9e8a3aaf77f0"},
{file = "SQLAlchemy-2.0.25-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:798f717ae7c806d67145f6ae94dc7c342d3222d3b9a311a784f371a4333212c7"},
{file = "SQLAlchemy-2.0.25-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:5fdd402169aa00df3142149940b3bf9ce7dde075928c1886d9a1df63d4b8de62"},
{file = "SQLAlchemy-2.0.25-cp37-cp37m-musllinux_1_1_aarch64.whl", hash = "sha256:0d3cab3076af2e4aa5693f89622bef7fa770c6fec967143e4da7508b3dceb9b9"},
{file = "SQLAlchemy-2.0.25-cp37-cp37m-musllinux_1_1_x86_64.whl", hash = "sha256:74b080c897563f81062b74e44f5a72fa44c2b373741a9ade701d5f789a10ba23"},
{file = "SQLAlchemy-2.0.25-cp37-cp37m-win32.whl", hash = "sha256:87d91043ea0dc65ee583026cb18e1b458d8ec5fc0a93637126b5fc0bc3ea68c4"},
{file = "SQLAlchemy-2.0.25-cp37-cp37m-win_amd64.whl", hash = "sha256:75f99202324383d613ddd1f7455ac908dca9c2dd729ec8584c9541dd41822a2c"},
{file = "SQLAlchemy-2.0.25-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:420362338681eec03f53467804541a854617faed7272fe71a1bfdb07336a381e"},
{file = "SQLAlchemy-2.0.25-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:7c88f0c7dcc5f99bdb34b4fd9b69b93c89f893f454f40219fe923a3a2fd11625"},
{file = "SQLAlchemy-2.0.25-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:a3be4987e3ee9d9a380b66393b77a4cd6d742480c951a1c56a23c335caca4ce3"},
{file = "SQLAlchemy-2.0.25-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:f2a159111a0f58fb034c93eeba211b4141137ec4b0a6e75789ab7a3ef3c7e7e3"},
{file = "SQLAlchemy-2.0.25-cp38-cp38-musllinux_1_1_aarch64.whl", hash = "sha256:8b8cb63d3ea63b29074dcd29da4dc6a97ad1349151f2d2949495418fd6e48db9"},
{file = "SQLAlchemy-2.0.25-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:736ea78cd06de6c21ecba7416499e7236a22374561493b456a1f7ffbe3f6cdb4"},
{file = "SQLAlchemy-2.0.25-cp38-cp38-win32.whl", hash = "sha256:10331f129982a19df4284ceac6fe87353ca3ca6b4ca77ff7d697209ae0a5915e"},
{file = "SQLAlchemy-2.0.25-cp38-cp38-win_amd64.whl", hash = "sha256:c55731c116806836a5d678a70c84cb13f2cedba920212ba7dcad53260997666d"},
{file = "SQLAlchemy-2.0.25-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:605b6b059f4b57b277f75ace81cc5bc6335efcbcc4ccb9066695e515dbdb3900"},
{file = "SQLAlchemy-2.0.25-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:665f0a3954635b5b777a55111ababf44b4fc12b1f3ba0a435b602b6387ffd7cf"},
{file = "SQLAlchemy-2.0.25-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:ecf6d4cda1f9f6cb0b45803a01ea7f034e2f1aed9475e883410812d9f9e3cfcf"},
{file = "SQLAlchemy-2.0.25-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:c51db269513917394faec5e5c00d6f83829742ba62e2ac4fa5c98d58be91662f"},
{file = "SQLAlchemy-2.0.25-cp39-cp39-musllinux_1_1_aarch64.whl", hash = "sha256:790f533fa5c8901a62b6fef5811d48980adeb2f51f1290ade8b5e7ba990ba3de"},
{file = "SQLAlchemy-2.0.25-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:1b1180cda6df7af84fe72e4530f192231b1f29a7496951db4ff38dac1687202d"},
{file = "SQLAlchemy-2.0.25-cp39-cp39-win32.whl", hash = "sha256:555651adbb503ac7f4cb35834c5e4ae0819aab2cd24857a123370764dc7d7e24"},
{file = "SQLAlchemy-2.0.25-cp39-cp39-win_amd64.whl", hash = "sha256:dc55990143cbd853a5d038c05e79284baedf3e299661389654551bd02a6a68d7"},
{file = "SQLAlchemy-2.0.25-py3-none-any.whl", hash = "sha256:a86b4240e67d4753dc3092d9511886795b3c2852abe599cffe108952f7af7ac3"},
{file = "SQLAlchemy-2.0.25.tar.gz", hash = "sha256:a2c69a7664fb2d54b8682dd774c3b54f67f84fa123cf84dda2a5f40dcaa04e08"},
{file = "SQLAlchemy-2.0.26-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:56524d767713054f8758217b3a811f6a736e0ae34e7afc33b594926589aa9609"},
{file = "SQLAlchemy-2.0.26-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:c2d8a2c68b279617f13088bdc0fc0e9b5126f8017f8882ff08ee41909fab0713"},
{file = "SQLAlchemy-2.0.26-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:84d377645913d47f0dc802b415bcfe7fb085d86646a12278d77c12eb75b5e1b4"},
{file = "SQLAlchemy-2.0.26-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:4fc0628d2026926404dabc903dc5628f7d936a792aa3a1fc54a20182df8e2172"},
{file = "SQLAlchemy-2.0.26-cp310-cp310-musllinux_1_1_aarch64.whl", hash = "sha256:872f2907ade52601a1e729e85d16913c24dc1f6e7c57d11739f18dcfafde29db"},
{file = "SQLAlchemy-2.0.26-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:ba46fa770578b3cf3b5b77dadb7e94fda7692dd4d1989268ef3dcb65f31c40a3"},
{file = "SQLAlchemy-2.0.26-cp310-cp310-win32.whl", hash = "sha256:651d10fdba7984bf100222d6e4acc496fec46493262b6170be1981ef860c6184"},
{file = "SQLAlchemy-2.0.26-cp310-cp310-win_amd64.whl", hash = "sha256:8f95ede696ab0d7328862d69f29b643d35b668c4f3619cb2f0281adc16e64c1b"},
{file = "SQLAlchemy-2.0.26-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:fab1bb909bd24accf2024a69edd4f885ded182c079c4dbcd515b4842f86b07cb"},
{file = "SQLAlchemy-2.0.26-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:b7ee16afd083bb6bb5ab3962ac7f0eafd1d196c6399388af35fef3d1c6d6d9bb"},
{file = "SQLAlchemy-2.0.26-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:379af901ceb524cbee5e15c1713bf9fd71dc28053286b7917525d01b938b9628"},
{file = "SQLAlchemy-2.0.26-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:94a78f56ea13f4d6e9efcd2a2d08cc13531918e0516563f6303c4ad98c81e21d"},
{file = "SQLAlchemy-2.0.26-cp311-cp311-musllinux_1_1_aarch64.whl", hash = "sha256:a481cc2eec83776ff7b6bb12c8e85d0378af0e2ec4584ac3309365a2a380c64b"},
{file = "SQLAlchemy-2.0.26-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:8cbeb0e49b605cd75f825fb9239a554803ef2bef1a7b2a8b428926ed518b6b63"},
{file = "SQLAlchemy-2.0.26-cp311-cp311-win32.whl", hash = "sha256:e70cce65239089390c193a7b0d171ce89d2e3dedf797f8010031b2aa2b1e9c80"},
{file = "SQLAlchemy-2.0.26-cp311-cp311-win_amd64.whl", hash = "sha256:750d1ef39d50520527c45c309c3cb10bbfa6131f93081b4e93858abb5ece2501"},
{file = "SQLAlchemy-2.0.26-cp312-cp312-macosx_10_9_x86_64.whl", hash = "sha256:b39503c3a56e1b2340a7d09e185ddb60b253ad0210877a9958ac64208eb23674"},
{file = "SQLAlchemy-2.0.26-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:1a870e6121a052f826f7ae1e4f0b54ca4c0ccd613278218ca036fa5e0f3be7df"},
{file = "SQLAlchemy-2.0.26-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:5901eed6d0e23ca4b04d66a561799d4f0fe55fcbfc7ca203bb8c3277f442085b"},
{file = "SQLAlchemy-2.0.26-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:d25fe55aab9b20ae4a9523bb269074202be9d92a145fcc0b752fff409754b5f6"},
{file = "SQLAlchemy-2.0.26-cp312-cp312-musllinux_1_1_aarch64.whl", hash = "sha256:5310958d08b4bafc311052be42a3b7d61a93a2bf126ddde07b85f712e7e4ac7b"},
{file = "SQLAlchemy-2.0.26-cp312-cp312-musllinux_1_1_x86_64.whl", hash = "sha256:fd133afb7e6c59fad365ffa97fb06b1001f88e29e1de351bef3d2b1224e2f132"},
{file = "SQLAlchemy-2.0.26-cp312-cp312-win32.whl", hash = "sha256:dc32ecf643c4904dd413e6a95a3f2c8a89ccd6f15083e586dcf8f42eb4e317ae"},
{file = "SQLAlchemy-2.0.26-cp312-cp312-win_amd64.whl", hash = "sha256:6e25f029e8ad6d893538b5abe8537e7f09e21d8e96caee46a7e2199f3ddd77b0"},
{file = "SQLAlchemy-2.0.26-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:99a9a8204b8937aa72421e31c493bfc12fd063a8310a0522e5a9b98e6323977c"},
{file = "SQLAlchemy-2.0.26-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:691d68a4fca30c9a676623d094b600797699530e175b6524a9f57e3273f5fa8d"},
{file = "SQLAlchemy-2.0.26-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:79a74a4ca4310c812f97bf0f13ce00ed73c890954b5a20b32484a9ab60e567e9"},
{file = "SQLAlchemy-2.0.26-cp37-cp37m-musllinux_1_1_aarch64.whl", hash = "sha256:f2efbbeb18c0e1c53b670a46a009fbde7b58e05b397a808c7e598532b17c6f4b"},
{file = "SQLAlchemy-2.0.26-cp37-cp37m-musllinux_1_1_x86_64.whl", hash = "sha256:3fc557f5402206c18ec3d288422f8e5fa764306d49f4efbc6090a7407bf54938"},
{file = "SQLAlchemy-2.0.26-cp37-cp37m-win32.whl", hash = "sha256:a9846ffee3283cff4ec476e7ee289314290fcb2384aab5045c6f481c5c4d011f"},
{file = "SQLAlchemy-2.0.26-cp37-cp37m-win_amd64.whl", hash = "sha256:ed4667d3d5d6e203a271d684d5b213ebcd618f7a8bc605752a8865eb9e67a79a"},
{file = "SQLAlchemy-2.0.26-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:79e629df3f69f849a1482a2d063596b23e32036b83547397e68725e6e0d0a9ab"},
{file = "SQLAlchemy-2.0.26-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:4b4d848b095173e0a9e377127b814490499e55f5168f617ae2c07653c326b9d1"},
{file = "SQLAlchemy-2.0.26-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:3f06afe8e96d7f221cc0b59334dc400151be22f432785e895e37030579d253c3"},
{file = "SQLAlchemy-2.0.26-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:f75ac12d302205e60f77f46bd162d40dc37438f1f8db160d2491a78b19a0bd61"},
{file = "SQLAlchemy-2.0.26-cp38-cp38-musllinux_1_1_aarch64.whl", hash = "sha256:ec3717c1efee8ad4b97f6211978351de3abe1e4b5f73e32f775c7becec021c5c"},
{file = "SQLAlchemy-2.0.26-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:06ed4d6bb2365222fb9b0a05478a2d23ad8c1dd874047a9ae1ca1d45f18a255e"},
{file = "SQLAlchemy-2.0.26-cp38-cp38-win32.whl", hash = "sha256:caa79a6caeb4a3cc4ddb9aba9205c383f5d3bcb60d814e87e74570514754e073"},
{file = "SQLAlchemy-2.0.26-cp38-cp38-win_amd64.whl", hash = "sha256:996b41c38e34a980e9f810d6e2709a3196e29ee34e46e3c16f96c63da10a9da1"},
{file = "SQLAlchemy-2.0.26-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:4f57af0866f6629eae2d24d022ba1a4c1bac9b16d45027bbfcda4c9d5b0d8f26"},
{file = "SQLAlchemy-2.0.26-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:e1a532bc33163fb19c4759a36504a23e63032bc8d47cee1c66b0b70a04a0957b"},
{file = "SQLAlchemy-2.0.26-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:02a4f954ccb17bd8cff56662efc806c5301508233dc38d0253a5fdb2f33ca3ba"},
{file = "SQLAlchemy-2.0.26-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:a678f728fb075e74aaa7fdc27f8af8f03f82d02e7419362cc8c2a605c16a4114"},
{file = "SQLAlchemy-2.0.26-cp39-cp39-musllinux_1_1_aarch64.whl", hash = "sha256:8b39462c9588d4780f041e1b84d2ba038ac01c441c961bbee622dd8f53dec69f"},
{file = "SQLAlchemy-2.0.26-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:98f4d0d2bda2921af5b0c2ca99207cdab00f2922da46a6336c62c8d6814303a7"},
{file = "SQLAlchemy-2.0.26-cp39-cp39-win32.whl", hash = "sha256:6d68e6b507a3dd20c0add86ac0a0ca061d43c9a0162a122baa5fe952f14240f1"},
{file = "SQLAlchemy-2.0.26-cp39-cp39-win_amd64.whl", hash = "sha256:fb97a9b93b953084692a52a7877957b7a88dfcedc0c5652124f5aebf5999f7fe"},
{file = "SQLAlchemy-2.0.26-py3-none-any.whl", hash = "sha256:1128b2cdf49107659f6d1f452695f43a20694cc9305a86e97b70793a1c74eeb4"},
{file = "SQLAlchemy-2.0.26.tar.gz", hash = "sha256:e1bcd8fcb30305e27355d553608c2c229d3e589fb7ff406da7d7e5d50fa14d0d"},
]
[package.dependencies]
@@ -2302,13 +2565,13 @@ files = [
[[package]]
name = "tqdm"
version = "4.66.1"
version = "4.66.2"
description = "Fast, Extensible Progress Meter"
optional = false
python-versions = ">=3.7"
files = [
{file = "tqdm-4.66.1-py3-none-any.whl", hash = "sha256:d302b3c5b53d47bce91fea46679d9c3c6508cf6332229aa1e7d8653723793386"},
{file = "tqdm-4.66.1.tar.gz", hash = "sha256:d88e651f9db8d8551a62556d3cff9e3034274ca5d66e93197cf2490e2dcb69c7"},
{file = "tqdm-4.66.2-py3-none-any.whl", hash = "sha256:1ee4f8a893eb9bef51c6e35730cebf234d5d0b6bd112b0271e10ed7c24a02bd9"},
{file = "tqdm-4.66.2.tar.gz", hash = "sha256:6cd52cdf0fef0e0f543299cfc96fec90d7b8a7e88745f411ec33eb44d5ed3531"},
]
[package.dependencies]
@@ -2320,6 +2583,27 @@ notebook = ["ipywidgets (>=6)"]
slack = ["slack-sdk"]
telegram = ["requests"]
[[package]]
name = "typer"
version = "0.9.0"
description = "Typer, build great CLIs. Easy to code. Based on Python type hints."
optional = false
python-versions = ">=3.6"
files = [
{file = "typer-0.9.0-py3-none-any.whl", hash = "sha256:5d96d986a21493606a358cae4461bd8cdf83cbf33a5aa950ae629ca3b51467ee"},
{file = "typer-0.9.0.tar.gz", hash = "sha256:50922fd79aea2f4751a8e0408ff10d2662bd0c8bbfa84755a699f3bada2978b2"},
]
[package.dependencies]
click = ">=7.1.1,<9.0.0"
typing-extensions = ">=3.7.4.3"
[package.extras]
all = ["colorama (>=0.4.3,<0.5.0)", "rich (>=10.11.0,<14.0.0)", "shellingham (>=1.3.0,<2.0.0)"]
dev = ["autoflake (>=1.3.1,<2.0.0)", "flake8 (>=3.8.3,<4.0.0)", "pre-commit (>=2.17.0,<3.0.0)"]
doc = ["cairosvg (>=2.5.2,<3.0.0)", "mdx-include (>=1.4.1,<2.0.0)", "mkdocs (>=1.1.2,<2.0.0)", "mkdocs-material (>=8.1.4,<9.0.0)", "pillow (>=9.3.0,<10.0.0)"]
test = ["black (>=22.3.0,<23.0.0)", "coverage (>=6.2,<7.0)", "isort (>=5.0.6,<6.0.0)", "mypy (==0.910)", "pytest (>=4.4.0,<8.0.0)", "pytest-cov (>=2.10.0,<5.0.0)", "pytest-sugar (>=0.9.4,<0.10.0)", "pytest-xdist (>=1.32.0,<4.0.0)", "rich (>=10.11.0,<14.0.0)", "shellingham (>=1.3.0,<2.0.0)"]
[[package]]
name = "typing-extensions"
version = "4.9.0"
@@ -2403,38 +2687,40 @@ test = ["covdefaults (>=2.3)", "coverage (>=7.2.7)", "coverage-enable-subprocess
[[package]]
name = "watchdog"
version = "3.0.0"
version = "4.0.0"
description = "Filesystem events monitoring"
optional = false
python-versions = ">=3.7"
python-versions = ">=3.8"
files = [
{file = "watchdog-3.0.0-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:336adfc6f5cc4e037d52db31194f7581ff744b67382eb6021c868322e32eef41"},
{file = "watchdog-3.0.0-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:a70a8dcde91be523c35b2bf96196edc5730edb347e374c7de7cd20c43ed95397"},
{file = "watchdog-3.0.0-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:adfdeab2da79ea2f76f87eb42a3ab1966a5313e5a69a0213a3cc06ef692b0e96"},
{file = "watchdog-3.0.0-cp311-cp311-macosx_10_9_universal2.whl", hash = "sha256:2b57a1e730af3156d13b7fdddfc23dea6487fceca29fc75c5a868beed29177ae"},
{file = "watchdog-3.0.0-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:7ade88d0d778b1b222adebcc0927428f883db07017618a5e684fd03b83342bd9"},
{file = "watchdog-3.0.0-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:7e447d172af52ad204d19982739aa2346245cc5ba6f579d16dac4bfec226d2e7"},
{file = "watchdog-3.0.0-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:9fac43a7466eb73e64a9940ac9ed6369baa39b3bf221ae23493a9ec4d0022674"},
{file = "watchdog-3.0.0-cp38-cp38-macosx_10_9_universal2.whl", hash = "sha256:8ae9cda41fa114e28faf86cb137d751a17ffd0316d1c34ccf2235e8a84365c7f"},
{file = "watchdog-3.0.0-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:25f70b4aa53bd743729c7475d7ec41093a580528b100e9a8c5b5efe8899592fc"},
{file = "watchdog-3.0.0-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:4f94069eb16657d2c6faada4624c39464f65c05606af50bb7902e036e3219be3"},
{file = "watchdog-3.0.0-cp39-cp39-macosx_10_9_universal2.whl", hash = "sha256:7c5f84b5194c24dd573fa6472685b2a27cc5a17fe5f7b6fd40345378ca6812e3"},
{file = "watchdog-3.0.0-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:3aa7f6a12e831ddfe78cdd4f8996af9cf334fd6346531b16cec61c3b3c0d8da0"},
{file = "watchdog-3.0.0-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:233b5817932685d39a7896b1090353fc8efc1ef99c9c054e46c8002561252fb8"},
{file = "watchdog-3.0.0-pp37-pypy37_pp73-macosx_10_9_x86_64.whl", hash = "sha256:13bbbb462ee42ec3c5723e1205be8ced776f05b100e4737518c67c8325cf6100"},
{file = "watchdog-3.0.0-pp38-pypy38_pp73-macosx_10_9_x86_64.whl", hash = "sha256:8f3ceecd20d71067c7fd4c9e832d4e22584318983cabc013dbf3f70ea95de346"},
{file = "watchdog-3.0.0-pp39-pypy39_pp73-macosx_10_9_x86_64.whl", hash = "sha256:c9d8c8ec7efb887333cf71e328e39cffbf771d8f8f95d308ea4125bf5f90ba64"},
{file = "watchdog-3.0.0-py3-none-manylinux2014_aarch64.whl", hash = "sha256:0e06ab8858a76e1219e68c7573dfeba9dd1c0219476c5a44d5333b01d7e1743a"},
{file = "watchdog-3.0.0-py3-none-manylinux2014_armv7l.whl", hash = "sha256:d00e6be486affb5781468457b21a6cbe848c33ef43f9ea4a73b4882e5f188a44"},
{file = "watchdog-3.0.0-py3-none-manylinux2014_i686.whl", hash = "sha256:c07253088265c363d1ddf4b3cdb808d59a0468ecd017770ed716991620b8f77a"},
{file = "watchdog-3.0.0-py3-none-manylinux2014_ppc64.whl", hash = "sha256:5113334cf8cf0ac8cd45e1f8309a603291b614191c9add34d33075727a967709"},
{file = "watchdog-3.0.0-py3-none-manylinux2014_ppc64le.whl", hash = "sha256:51f90f73b4697bac9c9a78394c3acbbd331ccd3655c11be1a15ae6fe289a8c83"},
{file = "watchdog-3.0.0-py3-none-manylinux2014_s390x.whl", hash = "sha256:ba07e92756c97e3aca0912b5cbc4e5ad802f4557212788e72a72a47ff376950d"},
{file = "watchdog-3.0.0-py3-none-manylinux2014_x86_64.whl", hash = "sha256:d429c2430c93b7903914e4db9a966c7f2b068dd2ebdd2fa9b9ce094c7d459f33"},
{file = "watchdog-3.0.0-py3-none-win32.whl", hash = "sha256:3ed7c71a9dccfe838c2f0b6314ed0d9b22e77d268c67e015450a29036a81f60f"},
{file = "watchdog-3.0.0-py3-none-win_amd64.whl", hash = "sha256:4c9956d27be0bb08fc5f30d9d0179a855436e655f046d288e2bcc11adfae893c"},
{file = "watchdog-3.0.0-py3-none-win_ia64.whl", hash = "sha256:5d9f3a10e02d7371cd929b5d8f11e87d4bad890212ed3901f9b4d68767bee759"},
{file = "watchdog-3.0.0.tar.gz", hash = "sha256:4d98a320595da7a7c5a18fc48cb633c2e73cda78f93cac2ef42d42bf609a33f9"},
{file = "watchdog-4.0.0-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:39cb34b1f1afbf23e9562501673e7146777efe95da24fab5707b88f7fb11649b"},
{file = "watchdog-4.0.0-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:c522392acc5e962bcac3b22b9592493ffd06d1fc5d755954e6be9f4990de932b"},
{file = "watchdog-4.0.0-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:6c47bdd680009b11c9ac382163e05ca43baf4127954c5f6d0250e7d772d2b80c"},
{file = "watchdog-4.0.0-cp311-cp311-macosx_10_9_universal2.whl", hash = "sha256:8350d4055505412a426b6ad8c521bc7d367d1637a762c70fdd93a3a0d595990b"},
{file = "watchdog-4.0.0-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:c17d98799f32e3f55f181f19dd2021d762eb38fdd381b4a748b9f5a36738e935"},
{file = "watchdog-4.0.0-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:4986db5e8880b0e6b7cd52ba36255d4793bf5cdc95bd6264806c233173b1ec0b"},
{file = "watchdog-4.0.0-cp312-cp312-macosx_10_9_universal2.whl", hash = "sha256:11e12fafb13372e18ca1bbf12d50f593e7280646687463dd47730fd4f4d5d257"},
{file = "watchdog-4.0.0-cp312-cp312-macosx_10_9_x86_64.whl", hash = "sha256:5369136a6474678e02426bd984466343924d1df8e2fd94a9b443cb7e3aa20d19"},
{file = "watchdog-4.0.0-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:76ad8484379695f3fe46228962017a7e1337e9acadafed67eb20aabb175df98b"},
{file = "watchdog-4.0.0-cp38-cp38-macosx_10_9_universal2.whl", hash = "sha256:45cc09cc4c3b43fb10b59ef4d07318d9a3ecdbff03abd2e36e77b6dd9f9a5c85"},
{file = "watchdog-4.0.0-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:eed82cdf79cd7f0232e2fdc1ad05b06a5e102a43e331f7d041e5f0e0a34a51c4"},
{file = "watchdog-4.0.0-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:ba30a896166f0fee83183cec913298151b73164160d965af2e93a20bbd2ab605"},
{file = "watchdog-4.0.0-cp39-cp39-macosx_10_9_universal2.whl", hash = "sha256:d18d7f18a47de6863cd480734613502904611730f8def45fc52a5d97503e5101"},
{file = "watchdog-4.0.0-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:2895bf0518361a9728773083908801a376743bcc37dfa252b801af8fd281b1ca"},
{file = "watchdog-4.0.0-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:87e9df830022488e235dd601478c15ad73a0389628588ba0b028cb74eb72fed8"},
{file = "watchdog-4.0.0-pp310-pypy310_pp73-macosx_10_9_x86_64.whl", hash = "sha256:6e949a8a94186bced05b6508faa61b7adacc911115664ccb1923b9ad1f1ccf7b"},
{file = "watchdog-4.0.0-pp38-pypy38_pp73-macosx_10_9_x86_64.whl", hash = "sha256:6a4db54edea37d1058b08947c789a2354ee02972ed5d1e0dca9b0b820f4c7f92"},
{file = "watchdog-4.0.0-pp39-pypy39_pp73-macosx_10_9_x86_64.whl", hash = "sha256:d31481ccf4694a8416b681544c23bd271f5a123162ab603c7d7d2dd7dd901a07"},
{file = "watchdog-4.0.0-py3-none-manylinux2014_aarch64.whl", hash = "sha256:8fec441f5adcf81dd240a5fe78e3d83767999771630b5ddfc5867827a34fa3d3"},
{file = "watchdog-4.0.0-py3-none-manylinux2014_armv7l.whl", hash = "sha256:6a9c71a0b02985b4b0b6d14b875a6c86ddea2fdbebd0c9a720a806a8bbffc69f"},
{file = "watchdog-4.0.0-py3-none-manylinux2014_i686.whl", hash = "sha256:557ba04c816d23ce98a06e70af6abaa0485f6d94994ec78a42b05d1c03dcbd50"},
{file = "watchdog-4.0.0-py3-none-manylinux2014_ppc64.whl", hash = "sha256:d0f9bd1fd919134d459d8abf954f63886745f4660ef66480b9d753a7c9d40927"},
{file = "watchdog-4.0.0-py3-none-manylinux2014_ppc64le.whl", hash = "sha256:f9b2fdca47dc855516b2d66eef3c39f2672cbf7e7a42e7e67ad2cbfcd6ba107d"},
{file = "watchdog-4.0.0-py3-none-manylinux2014_s390x.whl", hash = "sha256:73c7a935e62033bd5e8f0da33a4dcb763da2361921a69a5a95aaf6c93aa03a87"},
{file = "watchdog-4.0.0-py3-none-manylinux2014_x86_64.whl", hash = "sha256:6a80d5cae8c265842c7419c560b9961561556c4361b297b4c431903f8c33b269"},
{file = "watchdog-4.0.0-py3-none-win32.whl", hash = "sha256:8f9a542c979df62098ae9c58b19e03ad3df1c9d8c6895d96c0d51da17b243b1c"},
{file = "watchdog-4.0.0-py3-none-win_amd64.whl", hash = "sha256:f970663fa4f7e80401a7b0cbeec00fa801bf0287d93d48368fc3e6fa32716245"},
{file = "watchdog-4.0.0-py3-none-win_ia64.whl", hash = "sha256:9a03e16e55465177d416699331b0f3564138f1807ecc5f2de9d55d8f188d08c7"},
{file = "watchdog-4.0.0.tar.gz", hash = "sha256:e3e7065cbdabe6183ab82199d7a4f6b3ba0a438c5a512a68559846ccb76a78ec"},
]
[package.extras]
@@ -2633,7 +2919,22 @@ files = [
idna = ">=2.0"
multidict = ">=4.0"
[[package]]
name = "zipp"
version = "3.17.0"
description = "Backport of pathlib-compatible object wrapper for zip files"
optional = false
python-versions = ">=3.8"
files = [
{file = "zipp-3.17.0-py3-none-any.whl", hash = "sha256:0e923e726174922dce09c53c59ad483ff7bbb8e572e00c7f7c46b88556409f31"},
{file = "zipp-3.17.0.tar.gz", hash = "sha256:84e64a1c28cf7e91ed2078bb8cc8c259cb19b76942096c8d7b84947690cabaf0"},
]
[package.extras]
docs = ["furo", "jaraco.packaging (>=9.3)", "jaraco.tidelift (>=1.4)", "rst.linker (>=1.9)", "sphinx (<7.2.5)", "sphinx (>=3.5)", "sphinx-lint"]
testing = ["big-O", "jaraco.functools", "jaraco.itertools", "more-itertools", "pytest (>=6)", "pytest-black (>=0.3.7)", "pytest-checkdocs (>=2.4)", "pytest-cov", "pytest-enabler (>=2.2)", "pytest-ignore-flaky", "pytest-mypy (>=0.9.1)", "pytest-ruff"]
[metadata]
lock-version = "2.0"
python-versions = ">=3.10,<4.0"
content-hash = "be62c4dcfaba5e9fc7c363895b9b1ea79aa3fdb8518cbb631953d8373b41c2fb"
content-hash = "0fffdfc697477db9ef90ddda74809a47144d64d7e7e972962333e28ab9829225"

View File

@@ -1,7 +1,7 @@
[tool.poetry]
name = "crewai"
version = "0.5.2"
version = "0.11.0"
description = "Cutting-edge framework for orchestrating role-playing, autonomous AI agents. By fostering collaborative intelligence, CrewAI empowers agents to work together seamlessly, tackling complex tasks."
authors = ["Joao Moura <joao@crewai.com>"]
readme = "README.md"
@@ -18,9 +18,14 @@ Repository = "https://github.com/joaomdmoura/crewai"
[tool.poetry.dependencies]
python = ">=3.10,<4.0"
pydantic = "^2.4.2"
langchain = "0.1.0"
langchain = "^0.1.0"
openai = "^1.7.1"
langchain-openai = "^0.0.2"
opentelemetry-api = "^1.22.0"
opentelemetry-sdk = "^1.22.0"
opentelemetry-exporter-otlp-proto-http = "^1.22.0"
instructor = "^0.5.2"
regex = "^2023.12.25"
[tool.poetry.group.dev.dependencies]
isort = "^5.13.2"
@@ -41,6 +46,7 @@ profile = "black"
known_first_party = ["crewai"]
[tool.poetry.group.test.dependencies]
pytest = "^7.4"
pytest-vcr = "^1.0.2"

View File

@@ -1,11 +1,12 @@
import os
import uuid
from typing import Any, List, Optional
from langchain.agents.agent import RunnableAgent
from langchain.agents.format_scratchpad import format_log_to_str
from langchain.agents.output_parsers import ReActSingleInputOutputParser
from langchain.memory import ConversationSummaryMemory
from langchain.tools.render import render_text_description
from langchain_core.runnables.config import RunnableConfig
from langchain_openai import ChatOpenAI
from pydantic import (
UUID4,
@@ -19,12 +20,7 @@ from pydantic import (
)
from pydantic_core import PydanticCustomError
from crewai.agents import (
CacheHandler,
CrewAgentExecutor,
CrewAgentOutputParser,
ToolsHandler,
)
from crewai.agents import CacheHandler, CrewAgentExecutor, ToolsHandler
from crewai.utilities import I18N, Logger, Prompts, RPMController
@@ -40,12 +36,14 @@ class Agent(BaseModel):
goal: The objective of the agent.
backstory: The backstory of the agent.
llm: The language model that will run the agent.
function_calling_llm: The language model that will handle the tool calling for this agent; it overrides the crew's function_calling_llm.
max_iter: Maximum number of iterations for an agent to execute a task.
memory: Whether the agent should have memory or not.
max_rpm: Maximum number of requests per minute for the agent execution to be respected.
verbose: Whether the agent execution should be in verbose mode.
allow_delegation: Whether the agent is allowed to delegate tasks to other agents.
tools: Tools at the agent's disposal.
step_callback: Callback to be executed after each step of the agent execution.
"""
__hash__ = object.__hash__ # type: ignore
@@ -90,13 +88,20 @@ class Agent(BaseModel):
cache_handler: InstanceOf[CacheHandler] = Field(
default=CacheHandler(), description="An instance of the CacheHandler class."
)
step_callback: Optional[Any] = Field(
default=None,
description="Callback to be executed after each step of the agent execution.",
)
i18n: I18N = Field(default=I18N(), description="Internationalization settings.")
llm: Any = Field(
default_factory=lambda: ChatOpenAI(
model="gpt-4",
model=os.environ.get("OPENAI_MODEL_NAME", "gpt-4")
),
description="Language model that will run the agent.",
)
function_calling_llm: Optional[Any] = Field(
description="Language model that will handle tool calling for this agent.", default=None
)
@field_validator("id", mode="before")
@classmethod
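Two practical consequences of this hunk: the default LLM now honors the `OPENAI_MODEL_NAME` environment variable, and each agent can carry its own tool-calling model. A minimal sketch of the new surface (role/goal/backstory values are placeholders):

```python
import os

from langchain_openai import ChatOpenAI

from crewai import Agent

# The default agent LLM now reads OPENAI_MODEL_NAME, falling back to "gpt-4".
os.environ["OPENAI_MODEL_NAME"] = "gpt-3.5-turbo"

researcher = Agent(
    role="Researcher",
    goal="Summarize recent findings",
    backstory="An analyst who digs through sources.",
    # Optional cheaper model used only to parse tool calls; this overrides
    # the crew-level function_calling_llm for this agent.
    function_calling_llm=ChatOpenAI(model="gpt-3.5-turbo"),
)
```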
@@ -125,7 +130,7 @@ class Agent(BaseModel):
def execute_task(
self,
task: str,
task: Any,
context: Optional[str] = None,
tools: Optional[List[Any]] = None,
) -> str:
@@ -139,22 +144,25 @@ class Agent(BaseModel):
Returns:
Output of the agent
"""
task_prompt = task.prompt()
if context:
task = self.i18n.slice("task_with_context").format(
task=task, context=context
task_prompt = self.i18n.slice("task_with_context").format(
task=task_prompt, context=context
)
tools = tools or self.tools
self.agent_executor.tools = tools
self.agent_executor.task = task
self.agent_executor.tools_description = render_text_description(tools)
self.agent_executor.tools_names = self.__tools_names(tools)
result = self.agent_executor.invoke(
{
"input": task,
"tool_names": self.__tools_names(tools),
"tools": render_text_description(tools),
},
RunnableConfig(callbacks=[self.tools_handler]),
"input": task_prompt,
"tool_names": self.agent_executor.tools_names,
"tools": self.agent_executor.tools_description,
}
)["output"]
if self.max_rpm:
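`execute_task` now receives the `Task` object itself rather than a pre-rendered string; the agent builds the prompt via `task.prompt()` and merges any context internally. A rough usage sketch with placeholder values:

```python
from crewai import Agent, Task

writer = Agent(role="Writer", goal="Write concise copy", backstory="A technical writer.")
task = Task(description="Write a two-line summary of CrewAI.", agent=writer)

# The Task object is passed straight through; the prompt is built internally
# from task.prompt() plus the optional context.
result = writer.execute_task(task=task, context="Audience: developers")
print(result)
```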
@@ -170,7 +178,7 @@ class Agent(BaseModel):
"""
self.cache_handler = cache_handler
self.tools_handler = ToolsHandler(cache=self.cache_handler)
self._create_agent_executor()
self.create_agent_executor()
def set_rpm_controller(self, rpm_controller: RPMController) -> None:
"""Set the rpm controller for the agent.
@@ -180,9 +188,9 @@ class Agent(BaseModel):
"""
if not self._rpm_controller:
self._rpm_controller = rpm_controller
self._create_agent_executor()
self.create_agent_executor()
def _create_agent_executor(self) -> None:
def create_agent_executor(self) -> None:
"""Create an agent executor for the agent.
Returns:
@@ -195,17 +203,21 @@ class Agent(BaseModel):
"agent_scratchpad": lambda x: format_log_to_str(x["intermediate_steps"]),
}
executor_args = {
"llm": self.llm,
"i18n": self.i18n,
"tools": self.tools,
"verbose": self.verbose,
"handle_parsing_errors": True,
"max_iterations": self.max_iter,
"step_callback": self.step_callback,
"tools_handler": self.tools_handler,
"function_calling_llm": self.function_calling_llm,
}
if self._rpm_controller:
executor_args["request_within_rpm_limit"] = (
self._rpm_controller.check_or_wait
)
executor_args[
"request_within_rpm_limit"
] = self._rpm_controller.check_or_wait
if self.memory:
summary_memory = ConversationSummaryMemory(
@@ -225,14 +237,7 @@ class Agent(BaseModel):
bind = self.llm.bind(stop=[self.i18n.slice("observation")])
inner_agent = (
agent_args
| execution_prompt
| bind
| CrewAgentOutputParser(
tools_handler=self.tools_handler,
cache=self.cache_handler,
i18n=self.i18n,
)
agent_args | execution_prompt | bind | ReActSingleInputOutputParser()
)
self.agent_executor = CrewAgentExecutor(
agent=RunnableAgent(runnable=inner_agent), **executor_args
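With the executor now forwarding every intermediate result (including the final `AgentFinish`, per the commit above) to `step_callback`, a callback can be a plain function. A minimal sketch, assuming LangChain's agent types (`log_step` is a hypothetical name):

```python
from langchain_core.agents import AgentFinish

def log_step(step_output):
    # The executor passes either an AgentFinish or a list of
    # (AgentAction, observation) tuples after each step.
    if isinstance(step_output, AgentFinish):
        print("Finished:", step_output.return_values.get("output", ""))
    else:
        for action, observation in step_output:
            print(f"Tool {action.tool!r} -> {str(observation)[:80]}")
```

Pass it as `Agent(..., step_callback=log_step)`; the crew-level `step_callback` fills in for any agent that does not set its own.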

View File

@@ -1,4 +1,3 @@
from .cache.cache_handler import CacheHandler
from .executor import CrewAgentExecutor
from .output_parser import CrewAgentOutputParser
from .tools_handler import ToolsHandler

View File

@@ -1,2 +1 @@
from .cache_handler import CacheHandler
from .cache_hit import CacheHit

View File

@@ -10,9 +10,7 @@ class CacheHandler:
self._cache = {}
def add(self, tool, input, output):
input = input.strip()
self._cache[f"{tool}-{input}"] = output
def read(self, tool, input) -> Optional[str]:
input = input.strip()
return self._cache.get(f"{tool}-{input}")
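Since the handler no longer strips tool input before building the cache key, lookups only hit on an exact tool/input pair. A quick sketch:

```python
from crewai.agents.cache import CacheHandler

cache = CacheHandler()
cache.add(tool="search", input="what is CrewAI?", output="An agent framework.")

# Keys are f"{tool}-{input}" with no stripping, so whitespace now matters.
assert cache.read(tool="search", input="what is CrewAI?") == "An agent framework."
assert cache.read(tool="search", input=" what is CrewAI? ") is None
```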

View File

@@ -1,18 +0,0 @@
from typing import Any
from pydantic import BaseModel, Field
from .cache_handler import CacheHandler
class CacheHit(BaseModel):
"""Cache Hit Object."""
class Config:
arbitrary_types_allowed = True
# Making it Any instead of AgentAction to avoid
# pydantic v1 vs v2 incompatibility, langchain should
# soon be updated to pydantic v2
action: Any = Field(description="Action taken")
cache: CacheHandler = Field(description="Cache Handler for the tool")

View File

@@ -1,30 +0,0 @@
from langchain_core.exceptions import OutputParserException
from crewai.utilities import I18N
class TaskRepeatedUsageException(OutputParserException):
"""Exception raised when a task is used twice in a roll."""
i18n: I18N = I18N()
error: str = "TaskRepeatedUsageException"
message: str
def __init__(self, i18n: I18N, tool: str, tool_input: str, text: str):
self.i18n = i18n
self.text = text
self.tool = tool
self.tool_input = tool_input
self.message = self.i18n.errors("task_repeated_usage").format(
tool=tool, tool_input=tool_input
)
super().__init__(
error=self.error,
observation=self.message,
send_to_llm=True,
llm_output=self.text,
)
def __str__(self):
return self.message

View File

@@ -10,18 +10,26 @@ from langchain_core.exceptions import OutputParserException
from langchain_core.pydantic_v1 import root_validator
from langchain_core.tools import BaseTool
from langchain_core.utils.input import get_color_mapping
from pydantic import InstanceOf
from crewai.agents.cache.cache_hit import CacheHit
from crewai.tools.cache_tools import CacheTools
from crewai.agents.tools_handler import ToolsHandler
from crewai.tools.tool_usage import ToolUsage
from crewai.utilities import I18N
class CrewAgentExecutor(AgentExecutor):
i18n: I18N = I18N()
llm: Any = None
iterations: int = 0
task: Any = None
tools_description: str = ""
tools_names: str = ""
function_calling_llm: Any = None
request_within_rpm_limit: Any = None
tools_handler: InstanceOf[ToolsHandler] = None
max_iterations: Optional[int] = 15
force_answer_max_iterations: Optional[int] = None
step_callback: Optional[Any] = None
@root_validator()
def set_force_answer_max_iterations(cls, values: Dict) -> Dict:
@@ -31,11 +39,6 @@ class CrewAgentExecutor(AgentExecutor):
def _should_force_answer(self) -> bool:
return True if self.iterations == self.force_answer_max_iterations else False
def _force_answer(self, output: AgentAction):
return AgentStep(
action=output, observation=self.i18n.errors("force_final_answer")
)
def _call(
self,
inputs: Dict[str, str],
@@ -63,6 +66,10 @@ class CrewAgentExecutor(AgentExecutor):
intermediate_steps,
run_manager=run_manager,
)
if self.step_callback:
self.step_callback(next_step_output)
if isinstance(next_step_output, AgentFinish):
return self._return(
next_step_output, intermediate_steps, run_manager=run_manager
@@ -105,16 +112,17 @@ class CrewAgentExecutor(AgentExecutor):
callbacks=run_manager.get_child() if run_manager else None,
**inputs,
)
if self._should_force_answer():
if isinstance(output, AgentAction) or isinstance(output, AgentFinish):
output = output
elif isinstance(output, CacheHit):
output = output.action
else:
raise ValueError(
f"Unexpected output type from agent: {type(output)}"
)
yield self._force_answer(output)
yield AgentStep(
action=output, observation=self.i18n.errors("force_final_answer")
)
return
except OutputParserException as e:
@@ -155,7 +163,9 @@ class CrewAgentExecutor(AgentExecutor):
)
if self._should_force_answer():
yield self._force_answer(output)
yield AgentStep(
action=output, observation=self.i18n.errors("force_final_answer")
)
return
yield AgentStep(action=output, observation=observation)
@@ -166,17 +176,6 @@ class CrewAgentExecutor(AgentExecutor):
yield output
return
# Override tool usage to use CacheTools
if isinstance(output, CacheHit):
cache = output.cache
action = output.action
tool = CacheTools(cache_handler=cache).tool()
output = action.copy()
output.tool_input = f"tool:{action.tool}|input:{action.tool_input}"
output.tool = tool.name
name_to_tool_map[tool.name] = tool
color_mapping[tool.name] = color_mapping[action.tool]
actions: List[AgentAction]
actions = [output] if isinstance(output, AgentAction) else output
yield from actions
@@ -187,18 +186,19 @@ class CrewAgentExecutor(AgentExecutor):
if agent_action.tool in name_to_tool_map:
tool = name_to_tool_map[agent_action.tool]
return_direct = tool.return_direct
color = color_mapping[agent_action.tool]
color_mapping[agent_action.tool]
tool_run_kwargs = self.agent.tool_run_logging_kwargs()
if return_direct:
tool_run_kwargs["llm_prefix"] = ""
# We then call the tool on the tool input to get an observation
observation = tool.run(
agent_action.tool_input,
verbose=self.verbose,
color=color,
callbacks=run_manager.get_child() if run_manager else None,
**tool_run_kwargs,
)
observation = ToolUsage(
tools_handler=self.tools_handler,
tools=self.tools,
tools_description=self.tools_description,
tools_names=self.tools_names,
function_calling_llm=self.function_calling_llm,
llm=self.llm,
task=self.task,
).use(agent_action.log)
else:
tool_run_kwargs = self.agent.tool_run_logging_kwargs()
observation = InvalidTool().run(

View File

@@ -1,79 +0,0 @@
import re
from typing import Union
from langchain.agents.output_parsers import ReActSingleInputOutputParser
from langchain_core.agents import AgentAction, AgentFinish
from crewai.agents.cache import CacheHandler, CacheHit
from crewai.agents.exceptions import TaskRepeatedUsageException
from crewai.agents.tools_handler import ToolsHandler
from crewai.utilities import I18N
FINAL_ANSWER_ACTION = "Final Answer:"
FINAL_ANSWER_AND_PARSABLE_ACTION_ERROR_MESSAGE = (
"Parsing LLM output produced both a final answer and a parse-able action:"
)
class CrewAgentOutputParser(ReActSingleInputOutputParser):
"""Parses ReAct-style LLM calls that have a single tool input.
Expects output to be in one of two formats.
If the output signals that an action should be taken,
should be in the below format. This will result in an AgentAction
being returned.
```
Thought: agent thought here
Action: search
Action Input: what is the temperature in SF?
```
If the output signals that a final answer should be given,
should be in the below format. This will result in an AgentFinish
being returned.
```
Thought: agent thought here
Final Answer: The temperature is 100 degrees
```
It also prevents tools from being reused in a row.
"""
class Config:
arbitrary_types_allowed = True
tools_handler: ToolsHandler
cache: CacheHandler
i18n: I18N
def parse(self, text: str) -> Union[AgentAction, AgentFinish, CacheHit]:
regex = (
r"Action\s*\d*\s*:[\s]*(.*?)[\s]*Action\s*\d*\s*Input\s*\d*\s*:[\s]*(.*)"
)
if action_match := re.search(regex, text, re.DOTALL):
action = action_match.group(1).strip()
action_input = action_match.group(2)
tool_input = action_input.strip(" ")
tool_input = tool_input.strip('"')
if last_tool_usage := self.tools_handler.last_used_tool:
usage = {
"tool": action,
"input": tool_input,
}
if usage == last_tool_usage:
raise TaskRepeatedUsageException(
text=text,
tool=action,
tool_input=tool_input,
i18n=self.i18n,
)
if self.cache.read(action, tool_input):
action = AgentAction(action, tool_input, text)
return CacheHit(action=action, cache=self.cache)
return super().parse(text)

View File

@@ -1,44 +1,30 @@
from typing import Any, Dict
from langchain.callbacks.base import BaseCallbackHandler
from typing import Any
from ..tools.cache_tools import CacheTools
from ..tools.tool_calling import ToolCalling
from .cache.cache_handler import CacheHandler
class ToolsHandler(BaseCallbackHandler):
class ToolsHandler:
"""Callback handler for tool usage."""
last_used_tool: Dict[str, Any] = {}
last_used_tool: ToolCalling = {}
cache: CacheHandler
def __init__(self, cache: CacheHandler, **kwargs: Any):
def __init__(self, cache: CacheHandler):
"""Initialize the callback handler."""
self.cache = cache
super().__init__(**kwargs)
self.last_used_tool = {}
def on_tool_start(
self, serialized: Dict[str, Any], input_str: str, **kwargs: Any
) -> Any:
def on_tool_start(self, calling: ToolCalling) -> Any:
"""Run when tool starts running."""
name = serialized.get("name")
if name not in ["invalid_tool", "_Exception"]:
tools_usage = {
"tool": name,
"input": input_str,
}
self.last_used_tool = tools_usage
self.last_used_tool = calling
def on_tool_end(self, output: str, **kwargs: Any) -> Any:
def on_tool_end(self, calling: ToolCalling, output: str) -> Any:
"""Run when tool ends running."""
if (
"is not a valid tool" not in output
and "Invalid or incomplete response" not in output
and "Invalid Format" not in output
):
if self.last_used_tool["tool"] != CacheTools().name:
self.cache.add(
tool=self.last_used_tool["tool"],
input=self.last_used_tool["input"],
output=output,
)
if self.last_used_tool.tool_name != CacheTools().name:
self.cache.add(
tool=calling.tool_name,
input=calling.arguments,
output=output,
)
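`ToolsHandler` is no longer a LangChain callback handler; `ToolUsage` drives it directly with `ToolCalling` objects. A sketch of the flow:

```python
from crewai.agents import ToolsHandler
from crewai.agents.cache import CacheHandler
from crewai.tools.tool_calling import ToolCalling

handler = ToolsHandler(cache=CacheHandler())
calling = ToolCalling(tool_name="search", arguments={"query": "crewai"})

handler.on_tool_start(calling=calling)                      # remembered as last_used_tool
handler.on_tool_end(calling=calling, output="result text")  # cached by name + arguments
print(handler.cache.read(tool="search", input={"query": "crewai"}))
```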

View File

@@ -19,6 +19,7 @@ from crewai.agent import Agent
from crewai.agents.cache import CacheHandler
from crewai.process import Process
from crewai.task import Task
from crewai.telemtry import Telemetry
from crewai.tools.agent_tools import AgentTools
from crewai.utilities import I18N, Logger, RPMController
@@ -31,15 +32,20 @@ class Crew(BaseModel):
tasks: List of tasks assigned to the crew.
agents: List of agents part of this crew.
manager_llm: The language model that will run manager agent.
function_calling_llm: The language model that will handle the tool calling for all the agents.
process: The process flow that the crew will follow (e.g., sequential).
verbose: Indicates the verbosity level for logging during execution.
config: Configuration settings for the crew.
_cache_handler: Handles caching for the crew's operations.
max_rpm: Maximum number of requests per minute for the crew execution to be respected.
id: A unique identifier for the crew instance.
full_output: Whether the crew should return the full output with all tasks outputs or just the final output.
step_callback: Callback to be executed after each step for every agent's execution.
share_crew: Whether you want to share the complete crew information and execution with crewAI to make the library better, and allow us to train models.
_cache_handler: Handles caching for the crew's operations.
"""
__hash__ = object.__hash__ # type: ignore
_execution_span: Any = PrivateAttr()
_rpm_controller: RPMController = PrivateAttr()
_logger: Logger = PrivateAttr()
_cache_handler: InstanceOf[CacheHandler] = PrivateAttr(default=CacheHandler())
@@ -48,11 +54,23 @@ class Crew(BaseModel):
agents: List[Agent] = Field(default_factory=list)
process: Process = Field(default=Process.sequential)
verbose: Union[int, bool] = Field(default=0)
full_output: Optional[bool] = Field(
default=False,
description="Whether the crew should return the full output with all tasks outputs or just the final output.",
)
manager_llm: Optional[Any] = Field(
description="Language model that will run the manager agent.", default=None
)
function_calling_llm: Optional[Any] = Field(
description="Language model that will handle tool calling for all the agents.", default=None
)
config: Optional[Union[Json, Dict[str, Any]]] = Field(default=None)
id: UUID4 = Field(default_factory=uuid.uuid4, frozen=True)
share_crew: Optional[bool] = Field(default=False)
step_callback: Optional[Any] = Field(
default=None,
description="Callback to be executed after each step for all agents execution.",
)
max_rpm: Optional[int] = Field(
default=None,
description="Maximum number of requests per minute for the crew execution to be respected.",
@@ -92,6 +110,9 @@ class Crew(BaseModel):
self._cache_handler = CacheHandler()
self._logger = Logger(self.verbose)
self._rpm_controller = RPMController(max_rpm=self.max_rpm, logger=self._logger)
self._telemetry = Telemetry()
self._telemetry.set_tracer()
self._telemetry.crew_creation(self)
return self
@model_validator(mode="after")
@@ -121,7 +142,8 @@ class Crew(BaseModel):
if self.agents:
for agent in self.agents:
agent.set_cache_handler(self._cache_handler)
agent.set_rpm_controller(self._rpm_controller)
if self.max_rpm:
agent.set_rpm_controller(self._rpm_controller)
return self
def _setup_from_config(self):
@@ -133,6 +155,7 @@ class Crew(BaseModel):
"missing_keys_in_config", "Config should have 'agents' and 'tasks'.", {}
)
self.process = self.config.get("process", self.process)
self.agents = [Agent(**agent) for agent in self.config["agents"]]
self.tasks = [self._create_task(task) for task in self.config["tasks"]]
@@ -153,9 +176,18 @@ class Crew(BaseModel):
def kickoff(self) -> str:
"""Starts the crew to work on its assigned tasks."""
self._execution_span = self._telemetry.crew_execution_span(self)
for agent in self.agents:
agent.i18n = I18N(language=self.language)
if not agent.function_calling_llm:
agent.function_calling_llm = self.function_calling_llm
agent.create_agent_executor()
if not agent.step_callback:
agent.step_callback = self.step_callback
agent.create_agent_executor()
if self.process == Process.sequential:
return self._run_sequential_process()
if self.process == Process.hierarchical:
@@ -186,10 +218,8 @@ class Crew(BaseModel):
role = task.agent.role if task.agent is not None else "None"
self._logger.log("debug", f"[{role}] Task output: {task_output}\n\n")
if self.max_rpm:
self._rpm_controller.stop_rpm_counter()
return task_output
self._finish_execution(task_output)
return self._format_output(task_output)
def _run_hierarchical_process(self) -> str:
"""Creates and assigns a manager agent to make sure the crew completes the tasks."""
@@ -200,6 +230,7 @@ class Crew(BaseModel):
goal=i18n.retrieve("hierarchical_manager_agent", "goal"),
backstory=i18n.retrieve("hierarchical_manager_agent", "backstory"),
tools=AgentTools(agents=self.agents).tools(),
llm=self.manager_llm,
verbose=True,
)
@@ -216,7 +247,20 @@ class Crew(BaseModel):
"debug", f"[{manager.role}] Task output: {task_output}\n\n"
)
self._finish_execution(task_output)
return self._format_output(task_output)
def _format_output(self, output: str) -> str:
"""Formats the output of the crew execution."""
if self.full_output:
return {
"final_output": output,
"tasks_outputs": [task.output for task in self.tasks],
}
else:
return output
def _finish_execution(self, output) -> None:
if self.max_rpm:
self._rpm_controller.stop_rpm_counter()
return task_output
self._telemetry.end_crew(self, output)
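Putting the new crew-level knobs together, a hedged sketch of the 0.11.0 surface (agent and task definitions are placeholders; hierarchical runs require `manager_llm`):

```python
from langchain_openai import ChatOpenAI

from crewai import Agent, Crew, Process, Task

researcher = Agent(role="Researcher", goal="Find facts", backstory="Curious analyst.")
writer = Agent(role="Writer", goal="Write summaries", backstory="Technical writer.")
tasks = [
    Task(description="Collect three facts about AI agents.", agent=researcher),
    Task(description="Turn the facts into a short paragraph.", agent=writer),
]

crew = Crew(
    agents=[researcher, writer],
    tasks=tasks,
    process=Process.hierarchical,                 # needs manager_llm below
    manager_llm=ChatOpenAI(model="gpt-4"),
    function_calling_llm=ChatOpenAI(model="gpt-3.5-turbo"),
    full_output=True,
)

result = crew.kickoff()
# With full_output=True, kickoff returns a dict rather than a plain string.
print(result["final_output"])
print(result["tasks_outputs"])
```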

View File

@@ -17,6 +17,7 @@ class Task(BaseModel):
arbitrary_types_allowed = True
__hash__ = object.__hash__ # type: ignore
used_tools: int = 0
i18n: I18N = I18N()
thread: threading.Thread = None
description: str = Field(description="Description of the actual task.")
@@ -96,25 +97,29 @@ class Task(BaseModel):
if self.async_execution:
self.thread = threading.Thread(
target=self._execute, args=(agent, self._prompt(), context, tools)
target=self._execute, args=(agent, self, context, tools)
)
self.thread.start()
else:
result = self._execute(
task=self,
agent=agent,
task_prompt=self._prompt(),
context=context,
tools=tools,
)
return result
def _execute(self, agent, task_prompt, context, tools):
result = agent.execute_task(task=task_prompt, context=context, tools=tools)
def _execute(self, agent, task, context, tools):
result = agent.execute_task(
task=task,
context=context,
tools=tools,
)
self.output = TaskOutput(description=self.description, result=result)
self.callback(self.output) if self.callback else None
return result
def _prompt(self) -> str:
def prompt(self) -> str:
"""Prompt the task.
Returns:
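Since `prompt()` is now public, the final task prompt can be inspected directly; a small sketch:

```python
from crewai import Task

task = Task(
    description="Summarize the latest AI agent news.",
    expected_output="A three-bullet summary.",
)

# Agent.execute_task now calls this on the Task object instead of
# receiving a pre-rendered prompt string.
print(task.prompt())
```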

View File

@@ -0,0 +1 @@
from .telemetry import Telemetry

View File

@@ -0,0 +1,257 @@
import json
import os
import platform
from typing import Any
import pkg_resources
from opentelemetry import trace
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter
from opentelemetry.sdk.resources import SERVICE_NAME, Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.trace import Status, StatusCode
class Telemetry:
"""A class to handle anonymous telemetry for the crewai package.
The data collected is for development purposes and all of it is anonymous.
There is NO data collected on the prompts, task descriptions,
agent backstories or goals, responses, or any data that is being
processed by the agents, nor any secrets and env vars.
Data collected includes:
- Version of crewAI
- Version of Python
- General OS (e.g. number of CPUs, macOS/Windows/Linux)
- Number of agents and tasks in a crew
- Crew Process being used
- If Agents are using memory or allowing delegation
- If Tasks are being executed in parallel or sequentially
- Language model being used
- Roles of agents in a crew
- Tools names available
Users can opt in to sharing more complete data using the `share_crew`
attribute in the Crew class.
"""
def __init__(self):
self.ready = False
try:
telemetry_endpoint = "http://telemetry.crewai.com:4318"
self.resource = Resource(attributes={SERVICE_NAME: "crewAI-telemetry"})
self.provider = TracerProvider(resource=self.resource)
processor = BatchSpanProcessor(
OTLPSpanExporter(endpoint=f"{telemetry_endpoint}/v1/traces")
)
self.provider.add_span_processor(processor)
self.ready = True
except Exception:
pass
def set_tracer(self):
if self.ready:
trace.set_tracer_provider(self.provider)
def crew_creation(self, crew):
"""Records the creation of a crew."""
if self.ready:
try:
tracer = trace.get_tracer("crewai.telemetry")
span = tracer.start_span("Crew Created")
self._add_attribute(
span,
"crewai_version",
pkg_resources.get_distribution("crewai").version,
)
self._add_attribute(span, "python_version", platform.python_version())
self._add_attribute(span, "crew_id", str(crew.id))
self._add_attribute(span, "crew_process", crew.process)
self._add_attribute(span, "crew_language", crew.language)
self._add_attribute(span, "crew_number_of_tasks", len(crew.tasks))
self._add_attribute(span, "crew_number_of_agents", len(crew.agents))
self._add_attribute(
span,
"crew_agents",
json.dumps(
[
{
"id": str(agent.id),
"role": agent.role,
"memory_enabled?": agent.memory,
"verbose?": agent.verbose,
"max_iter": agent.max_iter,
"max_rpm": agent.max_rpm,
"i18n": agent.i18n.language,
"llm": json.dumps(self._safe_llm_attributes(agent.llm)),
"delegation_enabled?": agent.allow_delegation,
"tools_names": [tool.name for tool in agent.tools],
}
for agent in crew.agents
]
),
)
self._add_attribute(
span,
"crew_tasks",
json.dumps(
[
{
"id": str(task.id),
"async_execution?": task.async_execution,
"agent_role": task.agent.role if task.agent else "None",
"tools_names": [tool.name for tool in task.tools],
}
for task in crew.tasks
]
),
)
self._add_attribute(span, "platform", platform.platform())
self._add_attribute(span, "platform_release", platform.release())
self._add_attribute(span, "platform_system", platform.system())
self._add_attribute(span, "platform_version", platform.version())
self._add_attribute(span, "cpus", os.cpu_count())
span.set_status(Status(StatusCode.OK))
span.end()
except Exception:
pass
def tool_repeated_usage(self, llm: Any, tool_name: str, attempts: int):
"""Records the repeated usage 'error' of a tool by an agent."""
if self.ready:
try:
tracer = trace.get_tracer("crewai.telemetry")
span = tracer.start_span("Tool Repeated Usage")
self._add_attribute(span, "tool_name", tool_name)
self._add_attribute(span, "attempts", attempts)
self._add_attribute(
span, "llm", json.dumps(self._safe_llm_attributes(llm))
)
span.set_status(Status(StatusCode.OK))
span.end()
except Exception:
pass
def tool_usage(self, llm: Any, tool_name: str, attempts: int):
"""Records the usage of a tool by an agent."""
if self.ready:
try:
tracer = trace.get_tracer("crewai.telemetry")
span = tracer.start_span("Tool Usage")
self._add_attribute(span, "tool_name", tool_name)
self._add_attribute(span, "attempts", attempts)
self._add_attribute(
span, "llm", json.dumps(self._safe_llm_attributes(llm))
)
span.set_status(Status(StatusCode.OK))
span.end()
except Exception:
pass
def tool_usage_error(self, llm: Any):
"""Records the usage of a tool by an agent."""
if self.ready:
try:
tracer = trace.get_tracer("crewai.telemetry")
span = tracer.start_span("Tool Usage Error")
self._add_attribute(
span, "llm", json.dumps(self._safe_llm_attributes(llm))
)
span.set_status(Status(StatusCode.OK))
span.end()
except Exception:
pass
def crew_execution_span(self, crew):
"""Records the complete execution of a crew.
This is only collected if the user has opted in to sharing the crew.
"""
if (self.ready) and (crew.share_crew):
try:
tracer = trace.get_tracer("crewai.telemetry")
span = tracer.start_span("Crew Execution")
self._add_attribute(span, "crew_id", str(crew.id))
self._add_attribute(
span,
"crew_agents",
json.dumps(
[
{
"id": str(agent.id),
"role": agent.role,
"goal": agent.goal,
"backstory": agent.backstory,
"memory_enabled?": agent.memory,
"verbose?": agent.verbose,
"max_iter": agent.max_iter,
"max_rpm": agent.max_rpm,
"i18n": agent.i18n.language,
"llm": json.dumps(self._safe_llm_attributes(agent.llm)),
"delegation_enabled?": agent.allow_delegation,
"tools_names": [tool.name for tool in agent.tools],
}
for agent in crew.agents
]
),
)
self._add_attribute(
span,
"crew_tasks",
json.dumps(
[
{
"id": str(task.id),
"description": task.description,
"async_execution?": task.async_execution,
"output": task.expected_output,
"agent_role": task.agent.role if task.agent else "None",
"context": [task.description for task in task.context]
if task.context
else "None",
"tools_names": [tool.name for tool in task.tools],
}
for task in crew.tasks
]
),
)
return span
except Exception:
pass
def end_crew(self, crew, output):
if (self.ready) and (crew.share_crew):
try:
self._add_attribute(crew._execution_span, "crew_output", output)
self._add_attribute(
crew._execution_span,
"crew_tasks_output",
json.dumps(
[
{
"id": str(task.id),
"description": task.description,
"output": task.output.result,
}
for task in crew.tasks
]
),
)
crew._execution_span.set_status(Status(StatusCode.OK))
crew._execution_span.end()
except Exception:
pass
def _add_attribute(self, span, key, value):
"""Add an attribute to a span."""
try:
return span.set_attribute(key, value)
except Exception:
pass
def _safe_llm_attributes(self, llm):
attributes = ["name", "model_name", "base_url", "model", "top_k", "temperature"]
safe_attributes = {k: v for k, v in vars(llm).items() if k in attributes}
safe_attributes["class"] = llm.__class__.__name__
return safe_attributes

View File

@@ -1,9 +1,10 @@
from typing import List
from langchain.tools import Tool
from langchain.tools import StructuredTool
from pydantic import BaseModel, Field
from crewai.agent import Agent
from crewai.task import Task
from crewai.utilities import I18N
@@ -15,50 +16,43 @@ class AgentTools(BaseModel):
def tools(self):
return [
Tool.from_function(
StructuredTool.from_function(
func=self.delegate_work,
name="Delegate work to co-worker",
description=self.i18n.tools("delegate_work").format(
coworkers=", ".join([agent.role for agent in self.agents])
coworkers="\n".join([f"- {agent.role}" for agent in self.agents])
),
),
Tool.from_function(
StructuredTool.from_function(
func=self.ask_question,
name="Ask question to co-worker",
description=self.i18n.tools("ask_question").format(
coworkers=", ".join([agent.role for agent in self.agents])
coworkers="\n".join([f"- {agent.role}" for agent in self.agents])
),
),
]
def delegate_work(self, command):
def delegate_work(self, coworker: str, task: str, context: str):
"""Useful to delegate a specific task to a coworker."""
return self._execute(command)
return self._execute(coworker, task, context)
def ask_question(self, command):
def ask_question(self, coworker: str, question: str, context: str):
"""Useful to ask a question, opinion or take from a coworker."""
return self._execute(command)
return self._execute(coworker, question, context)
def _execute(self, command):
def _execute(self, agent, task, context):
"""Execute the command."""
try:
agent, task, context = command.split("|")
except ValueError:
return self.i18n.errors("agent_tool_missing_param")
if not agent or not task or not context:
return self.i18n.errors("agent_tool_missing_param")
agent = [
available_agent
for available_agent in self.agents
if available_agent.role == agent
if available_agent.role.lower() == agent.lower()
]
if not agent:
return self.i18n.errors("agent_tool_unexsiting_coworker").format(
coworkers=", ".join([agent.role for agent in self.agents])
coworkers="\n".join([f"- {agent.role}" for agent in self.agents])
)
agent = agent[0]
task = Task(description=task, agent=agent)
return agent.execute_task(task, context)
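Delegation is now a `StructuredTool` with named arguments instead of a single `agent|task|context` pipe-separated string, and role matching became case-insensitive. A sketch with a placeholder agent:

```python
from crewai import Agent
from crewai.tools.agent_tools import AgentTools

researcher = Agent(role="Researcher", goal="Answer questions", backstory="Knows the project.")
agent_tools = AgentTools(agents=[researcher])

answer = agent_tools.delegate_work(
    coworker="researcher",                        # matched case-insensitively
    task="List three use cases for AI agents",
    context="We are drafting the project README",
)
```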

View File

@@ -1,4 +1,4 @@
from langchain.tools import Tool
from langchain.tools import StructuredTool
from pydantic import BaseModel, ConfigDict, Field
from crewai.agents.cache import CacheHandler
@@ -15,7 +15,7 @@ class CacheTools(BaseModel):
)
def tool(self):
return Tool.from_function(
return StructuredTool.from_function(
func=self.hit_cache,
name=self.name,
description="Reads directly from the cache",

View File

@@ -0,0 +1,21 @@
from typing import Any, Dict
from pydantic import BaseModel as PydanticBaseModel
from pydantic import Field as PydanticField
from pydantic.v1 import BaseModel, Field
class ToolCalling(BaseModel):
tool_name: str = Field(..., description="The name of the tool to be called.")
arguments: Dict[str, Any] = Field(
..., description="A dictinary of arguments to be passed to the tool."
)
class InstructorToolCalling(PydanticBaseModel):
tool_name: str = PydanticField(
..., description="The name of the tool to be called."
)
arguments: Dict = PydanticField(
..., description="A dictinary of arguments to be passed to the tool."
)
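Two near-identical models exist because LangChain's `PydanticOutputParser` still runs on pydantic v1 while instructor expects v2. A quick check:

```python
from crewai.tools.tool_calling import InstructorToolCalling, ToolCalling

# ToolCalling is a pydantic v1 model for LangChain's output parser;
# InstructorToolCalling is its pydantic v2 twin for the instructor client.
call = ToolCalling(tool_name="search", arguments={"query": "crewai", "limit": 2})
print(call.tool_name, call.arguments)
```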

View File

@@ -0,0 +1,39 @@
import json
from typing import Any, List
import regex
from langchain.output_parsers import PydanticOutputParser
from langchain_core.exceptions import OutputParserException
from langchain_core.outputs import Generation
from langchain_core.pydantic_v1 import ValidationError
class ToolOutputParser(PydanticOutputParser):
"""Parses the function calling of a tool usage and it's arguments."""
def parse_result(self, result: List[Generation], *, partial: bool = False) -> Any:
result[0].text = self._transform_in_valid_json(result[0].text)
json_object = super().parse_result(result)
try:
return self.pydantic_object.parse_obj(json_object)
except ValidationError as e:
name = self.pydantic_object.__name__
msg = f"Failed to parse {name} from completion {json_object}. Got: {e}"
raise OutputParserException(msg, llm_output=json_object)
def _transform_in_valid_json(self, text) -> str:
text = text.replace("```", "").replace("json", "")
json_pattern = r"\{(?:[^{}]|(?R))*\}"
matches = regex.finditer(json_pattern, text)
for match in matches:
try:
# Attempt to parse the matched string as JSON
json_obj = json.loads(match.group())
# Return the first successfully parsed JSON object
json_obj = json.dumps(json_obj)
return str(json_obj)
except json.JSONDecodeError:
# If parsing fails, skip to the next match
continue
return text
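The `(?R)` construct recurses into the whole pattern, which is why the third-party `regex` package (added as a dependency above) is used instead of the stdlib `re`: it lets the parser match balanced, possibly nested `{...}` blocks. A standalone demonstration:

```python
import json

import regex  # the third-party package; stdlib re has no (?R)

JSON_PATTERN = r"\{(?:[^{}]|(?R))*\}"

text = 'Tool Name: search {"tool_name": "search", "arguments": {"q": "crewai"}} done'
for match in regex.finditer(JSON_PATTERN, text):
    try:
        print(json.loads(match.group()))  # first parseable JSON object wins
        break
    except json.JSONDecodeError:
        continue
```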

View File

@@ -0,0 +1,239 @@
from typing import Any, List, Union
import instructor
from langchain.prompts import PromptTemplate
from langchain_core.tools import BaseTool
from langchain_openai import ChatOpenAI
from crewai.agents.tools_handler import ToolsHandler
from crewai.telemtry import Telemetry
from crewai.tools.tool_calling import InstructorToolCalling, ToolCalling
from crewai.tools.tool_output_parser import ToolOutputParser
from crewai.utilities import I18N, Printer
class ToolUsageErrorException(Exception):
"""Exception raised for errors in the tool usage."""
def __init__(self, message: str) -> None:
self.message = message
super().__init__(self.message)
class ToolUsage:
"""
Class that represents the usage of a tool by an agent.
Attributes:
task: Task being executed.
tools_handler: Tools handler that will manage the tool usage.
tools: List of tools available for the agent.
tools_description: Description of the tools available for the agent.
tools_names: Names of the tools available for the agent.
llm: Language model to be used for the tool usage.
"""
def __init__(
self,
tools_handler: ToolsHandler,
tools: List[BaseTool],
tools_description: str,
tools_names: str,
task: Any,
llm: Any,
function_calling_llm: Any,
) -> None:
self._i18n: I18N = I18N()
self._printer: Printer = Printer()
self._telemetry: Telemetry = Telemetry()
self._run_attempts: int = 1
self._max_parsing_attempts: int = 2
self._remeber_format_after_usages: int = 3
self.tools_description = tools_description
self.tools_names = tools_names
self.tools_handler = tools_handler
self.tools = tools
self.task = task
self.llm = llm
self.function_calling_llm = function_calling_llm
def use(self, tool_string: str):
calling = self._tool_calling(tool_string)
if isinstance(calling, ToolUsageErrorException):
error = calling.message
self._printer.print(content=f"\n\n{error}\n", color="yellow")
return error
try:
tool = self._select_tool(calling.tool_name)
except Exception as e:
error = getattr(e, "message", str(e))
self._printer.print(content=f"\n\n{error}\n", color="yellow")
return error
return self._use(tool_string=tool_string, tool=tool, calling=calling)
def _use(
self,
tool_string: str,
tool: BaseTool,
calling: Union[ToolCalling, InstructorToolCalling],
) -> None:
if self._check_tool_repeated_usage(calling=calling):
try:
result = self._i18n.errors("task_repeated_usage").format(
tool=calling.tool_name,
tool_input=", ".join(
[str(arg) for arg in calling.arguments.values()]
),
)
self._printer.print(content=f"\n\n{result}\n", color="yellow")
self._telemetry.tool_repeated_usage(
llm=self.llm, tool_name=tool.name, attempts=self._run_attempts
)
result = self._format_result(result=result)
return result
except Exception:
pass
self.tools_handler.on_tool_start(calling=calling)
result = self.tools_handler.cache.read(
tool=calling.tool_name, input=calling.arguments
)
if not result:
try:
result = tool._run(**calling.arguments)
except Exception as e:
self._run_attempts += 1
if self._run_attempts > self._max_parsing_attempts:
self._telemetry.tool_usage_error(llm=self.llm)
return ToolUsageErrorException(
self._i18n.errors("tool_usage_exception").format(error=e)
).message
return self.use(tool_string=tool_string)
self.tools_handler.on_tool_end(calling=calling, output=result)
self._printer.print(content=f"\n\n{result}\n", color="yellow")
self._telemetry.tool_usage(
llm=self.llm, tool_name=tool.name, attempts=self._run_attempts
)
result = self._format_result(result=result)
return result
def _format_result(self, result: Any) -> None:
self.task.used_tools += 1
if self._should_remember_format():
result = self._remember_format(result=result)
return result
def _should_remember_format(self) -> None:
return self.task.used_tools % self._remeber_format_after_usages == 0
def _remember_format(self, result: str) -> None:
result = str(result)
result += "\n\n" + self._i18n.slice("tools").format(
tools=self.tools_description, tool_names=self.tools_names
)
return result
def _check_tool_repeated_usage(
self, calling: Union[ToolCalling, InstructorToolCalling]
) -> None:
if last_tool_usage := self.tools_handler.last_used_tool:
return (calling.tool_name == last_tool_usage.tool_name) and (
calling.arguments == last_tool_usage.arguments
)
def _select_tool(self, tool_name: str) -> BaseTool:
for tool in self.tools:
if tool.name.lower().strip() == tool_name.lower().strip():
return tool
raise Exception(f"Tool '{tool_name}' not found.")
def _render(self) -> str:
"""Render the tool name and description in plain text."""
descriptions = []
for tool in self.tools:
args = {
k: {k2: v2 for k2, v2 in v.items() if k2 in ["description", "type"]}
for k, v in tool.args.items()
}
descriptions.append(
"\n".join(
[
f"Tool Name: {tool.name.lower()}",
f"Tool Description: {tool.description}",
f"Tool Arguments: {args}",
]
)
)
return "\n--\n".join(descriptions)
def _tool_calling(
self, tool_string: str
) -> Union[ToolCalling, InstructorToolCalling]:
try:
tool_string = tool_string.replace(
"Thought: Do I need to use a tool? Yes", ""
)
tool_string = tool_string.replace("Action:", "Tool Name:")
tool_string = tool_string.replace("Action Input:", "Tool Arguments:")
llm = self.function_calling_llm or self.llm
if (isinstance(llm, ChatOpenAI)) and (llm.openai_api_base is None):
client = instructor.patch(
llm.client._client,
mode=instructor.Mode.FUNCTIONS,
)
calling = client.chat.completions.create(
model=llm.model_name,
messages=[
{
"role": "system",
"content": """
The schema should have the following structure, only two keys:
- tool_name: str
- arguments: dict (with all arguments being passed)
Example:
{"tool_name": "tool_name", "arguments": {"arg_name1": "value", "arg_name2": 2}}
""",
},
{
"role": "user",
"content": f"Tools available:\n\n{self._render()}\n\nReturn a valid schema for the tool, use this text to inform a valid ouput schema:\n{tool_string}```",
},
],
response_model=InstructorToolCalling,
)
else:
parser = ToolOutputParser(pydantic_object=ToolCalling)
prompt = PromptTemplate(
template="Tools available:\n\n{available_tools}\n\nReturn a valid schema for the tool, use this text to inform a valid ouput schema:\n{tool_string}\n\n{format_instructions}\n```",
input_variables=["tool_string"],
partial_variables={
"available_tools": self._render(),
"format_instructions": """
The schema should have the following structure, only two keys:
- tool_name: str
- arguments: dict (with all arguments being passed)
Example:
{"tool_name": "tool_name", "arguments": {"arg_name1": "value", "arg_name2": 2}}
""",
},
)
chain = prompt | llm | parser
calling = chain.invoke({"tool_string": tool_string})
except Exception:
self._run_attempts += 1
if self._run_attempts > self._max_parsing_attempts:
self._telemetry.tool_usage_error(llm=llm)
return ToolUsageErrorException(self._i18n.errors("tool_usage_error"))
return self._tool_calling(tool_string)
return calling
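To make the function-calling branch above easier to follow, here is a condensed, standalone sketch of the same instructor pattern (assuming an `OPENAI_API_KEY` in the environment); the simplified `ToolCall` model and the example prompt are stand-ins, not CrewAI's actual `InstructorToolCalling`:

```python
import instructor
from openai import OpenAI
from pydantic import BaseModel


class ToolCall(BaseModel):
    # Mirrors the two-key structure the system prompt above asks for.
    tool_name: str
    arguments: dict


# instructor.patch wraps the client so that create() accepts a response_model
# and returns a validated pydantic object instead of raw text.
client = instructor.patch(OpenAI(), mode=instructor.Mode.FUNCTIONS)

call = client.chat.completions.create(
    model="gpt-4",
    response_model=ToolCall,
    messages=[{"role": "user", "content": "Use the multiplier tool on 3 and 4."}],
)
print(call.tool_name, call.arguments)
```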

View File

@@ -9,18 +9,19 @@
"task": "Αρχή! Αυτό είναι ΠΟΛΥ σημαντικό για εσάς, η δουλειά σας εξαρτάται από αυτό!\n\nΤρέχουσα εργασία: {input}",
"memory": "Αυτή είναι η περίληψη της μέχρι τώρα δουλειάς σας:\n{chat_history}",
"role_playing": "Είσαι {role}.\n{backstory}\n\nΟ προσωπικός σας στόχος είναι: {goal}",
"tools": "ΕΡΓΑΛΕΙΑ:\n------\nΈχετε πρόσβαση μόνο στα ακόλουθα εργαλεία:\n\n{tools}\n\nΓια να χρησιμοποιήσετε ένα εργαλείο, χρησιμοποιήστε την ακόλουθη ακριβώς μορφή:\n\n```\nΣκέψη: Χρειάζεται να χρησιμοποιήσω κάποιο εργαλείο; Ναί\nΔράση: η ενέργεια που πρέπει να γίνει, πρέπει να είναι μία από τις[{tool_names}], μόνο το όνομα.\nΕνέργεια προς εισαγωγή: η είσοδος στη δράση\nΠαρατήρηση: το αποτέλεσμα της δράσης\n```\n\nΌταν έχετε μια απάντηση για την εργασία σας ή εάν δεν χρειάζεται να χρησιμοποιήσετε ένα εργαλείο, ΠΡΕΠΕΙ να χρησιμοποιήσετε τη μορφή:\n\n```\nΣκέψη: Χρειάζεται να χρησιμοποιήσω κάποιο εργαλείο; Οχι\nΤελική απάντηση: [η απάντησή σας εδώ]```",
"tools": "ΕΡΓΑΛΕΙΑ:\n------\nΈχετε πρόσβαση μόνο στα ακόλουθα εργαλεία:\n\n{tools}\n\nΓια να χρησιμοποιήσετε ένα εργαλείο, χρησιμοποιήστε την ακόλουθη ακριβώς μορφή:\n\n```\nThought: Χρειάζεται να χρησιμοποιήσω κάποιο εργαλείο; Ναι\nΕνέργεια: το εργαλείο που θέλετε να χρησιμοποιήσετε, θα πρέπει να είναι ένα από τα [{tool_names}], μόνο το όνομα.\nΕισαγωγή ενέργειας: Οποιαδήποτε και όλες οι σχετικές πληροφορίες και το πλαίσιο χρήσης του εργαλείου\nΠαρατήρηση: το αποτέλεσμα της χρήσης του εργαλείου\n```\n\nΌταν έχετε μια απάντηση για την εργασία σας ή εάν δεν χρειάζεται να χρησιμοποιήσετε ένα εργαλείο, ΠΡΕΠΕΙ να χρησιμοποιήσετε τη μορφή:\n\n```\nΣκέψη: Πρέπει να χρησιμοποιήσω ένα εργαλείο ? Όχι\nΤελική απάντηση: [η απάντησή σας εδώ]```",
"task_with_context": "{task}\nΑυτό είναι το πλαίσιο με το οποίο εργάζεστε:\n{context}",
"expected_output": "Η τελική σας απάντηση πρέπει να είναι: {expected_output}"
},
"errors": {
"force_final_answer": "Στην πραγματικότητα, χρησιμοποίησα πάρα πολλά εργαλεία, οπότε θα σταματήσω τώρα και θα σας δώσω την απόλυτη ΚΑΛΥΤΕΡΗ τελική μου απάντηση ΤΩΡΑ, χρησιμοποιώντας την αναμενόμενη μορφή: ```\nΣκέφτηκα: Χρειάζεται να χρησιμοποιήσω ένα εργαλείο; Όχι\nΤελική απάντηση: [η απάντησή σας εδώ]```",
"agent_tool_missing_param": "\nΣφάλμα κατά την εκτέλεση του εργαλείου. Λείπουν ακριβώς 3 διαχωρισμένες τιμές σωλήνων (|). Για παράδειγμα, `coworker|task|context`. Πρέπει να φροντίσω να περάσω το πλαίσιο ως πλαίσιο.\n",
"agent_tool_unexsiting_coworker": "\nΣφάλμα κατά την εκτέλεση του εργαλείου. Ο συνάδελφος που αναφέρεται στο Ενέργεια προς εισαγωγή δεν βρέθηκε, πρέπει να είναι μία από τις ακόλουθες επιλογές: {coworkers}.\n",
"task_repeated_usage": "Μόλις χρησιμοποίησα το {tool} εργαλείο με είσοδο {tool_input}. Άρα ξέρω ήδη το αποτέλεσμα αυτού και δεν χρειάζεται να το χρησιμοποιήσω τώρα.\n"
"agent_tool_unexsiting_coworker": "\nΣφάλμα κατά την εκτέλεση του εργαλείου. Ο συνάδελφος που αναφέρεται στο Action Input δεν βρέθηκε, πρέπει να είναι μία από τις ακόλουθες επιλογές:\n{coworkers}..\n",
"task_repeated_usage": "Μόλις χρησιμοποίησα το εργαλείο {tool} με είσοδο {tool_input}. Άρα το ξέρω ήδη και πρέπει να σταματήσω να το χρησιμοποιώ στη σειρά με την ίδια είσοδο. \nΘα μπορούσα να δώσω την τελική μου απάντηση εάν είμαι έτοιμος, χρησιμοποιώντας ακριβώς την αναμενόμενη μορφή παρακάτω: \n\nΣκέφτηκα: Χρειάζεται να χρησιμοποιήσω κάποιο εργαλείο; Όχι\nΤελική απάντηση: [η απάντησή σας εδώ]\n",
"tool_usage_error": "Φαίνεται ότι αντιμετωπίσαμε ένα απροσδόκητο σφάλμα κατά την προσπάθεια χρήσης του εργαλείου.",
"tool_usage_exception": "Φαίνεται ότι αντιμετωπίσαμε ένα απροσδόκητο σφάλμα κατά την προσπάθεια χρήσης του εργαλείου. Αυτό ήταν το σφάλμα: {error}"
},
"tools": {
"delegate_work": "Χρήσιμο για την ανάθεση μιας συγκεκριμένης εργασίας σε έναν από τους παρακάτω συναδέλφους: {coworkers}.\nΗ είσοδος σε αυτό το εργαλείο θα πρέπει να είναι ένα κείμενο χωρισμένο σε σωλήνα (|) μήκους 3 (τρία), που αντιπροσωπεύει τον συνάδελφο στον οποίο θέλετε να του ζητήσετε (μία από τις επιλογές), την εργασία και όλο το πραγματικό πλαίσιο που έχετε για την εργασία .\nΓια παράδειγμα, `coworker|task|context`.",
"ask_question": "Χρήσιμο για να κάνετε μια ερώτηση, γνώμη ή αποδοχή από τους παρακάτω συναδέλφους: {coworkers}.\nΗ είσοδος σε αυτό το εργαλείο θα πρέπει να είναι ένα κείμενο χωρισμένο σε σωλήνα (|) μήκους 3 (τρία), που αντιπροσωπεύει τον συνάδελφο στον οποίο θέλετε να το ρωτήσετε (μία από τις επιλογές), την ερώτηση και όλο το πραγματικό πλαίσιο που έχετε για την ερώτηση.\nΓια παράδειγμα, `coworker|question|context`."
"delegate_work": "Αναθέστε μια συγκεκριμένη εργασία σε έναν από τους παρακάτω συναδέλφους:\n{coworkers}.\nΗ εισαγωγή σε αυτό το εργαλείο θα πρέπει να είναι ο ρόλος του συναδέλφου, η εργασία που θέλετε να κάνει και ΟΛΟ το απαραίτητο πλαίσιο για την εκτέλεση της εργασίας, δεν γνωρίζουν τίποτα για την εργασία, γι' αυτό μοιραστείτε απολύτως όλα όσα γνωρίζετε, μην αναφέρετε πράγματα, αλλά εξηγήστε τα.",
"ask_question": "Κάντε μια συγκεκριμένη ερώτηση σε έναν από τους παρακάτω συναδέλφους:\n{coworkers}.\nΗ είσοδος σε αυτό το εργαλείο θα πρέπει να είναι ο ρόλος του συναδέλφου, η ερώτηση που έχετε για αυτόν και ΟΛΟ το απαραίτητο πλαίσιο για να κάνετε σωστά την ερώτηση, δεν γνωρίζουν τίποτα για την ερώτηση, γι' αυτό μοιραστείτε απολύτως όλα όσα γνωρίζετε, μην αναφέρετε πράγματα, αλλά εξηγήστε τα."
}
}

View File

@@ -9,18 +9,19 @@
"task": "Begin! This is VERY important to you, your job depends on it!\n\nCurrent Task: {input}",
"memory": "This is the summary of your work so far:\n{chat_history}",
"role_playing": "You are {role}.\n{backstory}\n\nYour personal goal is: {goal}",
"tools": "TOOLS:\n------\nYou have access to only the following tools:\n\n{tools}\n\nTo use a tool, please use the exact following format:\n\n```\nThought: Do I need to use a tool? Yes\nAction: the action to take, should be one of [{tool_names}], just the name.\nAction Input: the input to the action\nObservation: the result of the action\n```\n\nWhen you have a response for your task, or if you do not need to use a tool, you MUST use the format:\n\n```\nThought: Do I need to use a tool? No\nFinal Answer: [your response here]```",
"tools": "TOOLS:\n------\nYou have access to only the following tools:\n\n{tools}\n\nTo use a tool, please use the exact following format:\n\n```\nThought: Do I need to use a tool? Yes\nAction: the tool you wanna use, should be one of [{tool_names}], just the name.\nAction Input: Any and all relevant information input and context for using the tool\nObservation: the result of using the tool\n```\n\nWhen you have a response for your task, or if you do not need to use a tool, you MUST use the format:\n\n```\nThought: Do I need to use a tool? No\nFinal Answer: [your response here]```",
"task_with_context": "{task}\nThis is the context you're working with:\n{context}",
"expected_output": "Your final answer must be: {expected_output}"
},
"errors": {
"force_final_answer": "Actually, I used too many tools, so I'll stop now and give you my absolute BEST Final answer NOW, using the expected format: ```\nThought: Do I need to use a tool? No\nFinal Answer: [your response here]```",
"agent_tool_missing_param": "\nError executing tool. Missing exact 3 pipe (|) separated values. For example, `coworker|task|context`. I need to make sure to pass context as context.\n",
"agent_tool_unexsiting_coworker": "\nError executing tool. Co-worker mentioned on the Action Input not found, it must to be one of the following options: {coworkers}.\n",
"task_repeated_usage": "I just used the {tool} tool with input {tool_input}. So I already know the result of that and don't need to use it now.\n"
"force_final_answer": "Actually, I used too many tools, so I'll stop now and give you my absolute BEST Final answer NOW, using exaclty the expected format bellow: \n```\nThought: Do I need to use a tool? No\nFinal Answer: [your response here]```",
"agent_tool_unexsiting_coworker": "\nError executing tool. Co-worker mentioned on the Action Input not found, it must to be one of the following options:\n{coworkers}.\n",
"task_repeated_usage": "I just used the {tool} tool with input {tool_input}. So I already know that and must stop using it in a row with the same input. \nI could give my final answer if I'm ready, using exaclty the expected format bellow: \n\nThought: Do I need to use a tool? No\nFinal Answer: [your response here]\n",
"tool_usage_error": "It seems we encountered an unexpected error while trying to use the tool.",
"tool_usage_exception": "It seems we encountered an unexpected error while trying to use the tool. This was the error: {error}"
},
"tools": {
"delegate_work": "Useful to delegate a specific task to one of the following co-workers: {coworkers}.\nThe input to this tool should be a pipe (|) separated text of length 3 (three), representing the co-worker you want to ask it to (one of the options), the task and all actual context you have for the task.\nFor example, `coworker|task|context`.",
"ask_question": "Useful to ask a question, opinion or take from on of the following co-workers: {coworkers}.\nThe input to this tool should be a pipe (|) separated text of length 3 (three), representing the co-worker you want to ask it to (one of the options), the question and all actual context you have for the question.\n For example, `coworker|question|context`."
"delegate_work": "Delegate a specific task to one of the following co-workers:\n{coworkers}.\nThe input to this tool should be the role of the coworker, the task you want them to do, and ALL necessary context to exectue the task, they know nothing about the task, so share absolute everything you know, don't reference things but instead explain them.",
"ask_question": "Ask a specific question to one of the following co-workers:\n{coworkers}.\nThe input to this tool should be the role of the coworker, the question you have for them, and ALL necessary context to ask the question properly, they know nothing about the question, so share absolute everything you know, don't reference things but instead explain them."
}
}
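To make the updated format concrete, the snippet below builds a hypothetical agent turn that satisfies the `tools` template above and pulls the tool name and input back out; the `multiplier` tool is made up, and CrewAI's real parsing goes through `ToolUsage` rather than a regex:

```python
import re

# A hypothetical agent turn following the updated format above.
turn = (
    "Thought: Do I need to use a tool? Yes\n"
    "Action: multiplier\n"
    "Action Input: first_number 3, second_number 4\n"
)

action = re.search(r"Action: (.*)", turn).group(1)
action_input = re.search(r"Action Input: (.*)", turn).group(1)
print(action, "->", action_input)  # multiplier -> first_number 3, second_number 4
```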

View File

@@ -1,4 +1,5 @@
from .i18n import I18N
from .logger import Logger
from .printer import Printer
from .prompts import Prompts
from .rpm_controller import RPMController

View File

@@ -0,0 +1,9 @@
class Printer:
def print(self, content: str, color: str):
if color == "yellow":
self._print_yellow(content)
else:
print(content)
def _print_yellow(self, content):
print("\033[93m {}\033[00m".format(content))

View File

@@ -14,12 +14,14 @@ class RPMController(BaseModel):
_current_rpm: int = PrivateAttr(default=0)
_timer: threading.Timer | None = PrivateAttr(default=None)
_lock: threading.Lock = PrivateAttr(default=None)
_shutdown_flag = False
@model_validator(mode="after")
def reset_counter(self):
if self.max_rpm:
self._lock = threading.Lock()
self._reset_request_count()
if not self._shutdown_flag:
self._lock = threading.Lock()
self._reset_request_count()
return self
def check_or_wait(self):
@@ -51,6 +53,7 @@ class RPMController(BaseModel):
with self._lock:
self._current_rpm = 0
if self._timer:
self._shutdown_flag = True
self._timer.cancel()
self._timer = threading.Timer(60.0, self._reset_request_count)
self._timer.start()
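As a minimal sketch of the pattern this diff adjusts, not the exact CrewAI implementation: a timer re-schedules itself every 60 seconds to zero the request counter, and the new shutdown flag keeps a stopped controller from arming fresh timers:

```python
import threading


class MiniRPMController:
    """Self-rescheduling 60-second reset with a shutdown flag (illustrative)."""

    def __init__(self) -> None:
        self._current_rpm = 0
        self._lock = threading.Lock()
        self._timer: threading.Timer | None = None
        self._shutdown_flag = False
        self._reset_request_count()

    def _reset_request_count(self) -> None:
        with self._lock:
            self._current_rpm = 0
        # Without the flag, every reset would arm another timer, keeping a
        # background thread alive even after the crew has finished.
        if not self._shutdown_flag:
            self._timer = threading.Timer(60.0, self._reset_request_count)
            self._timer.start()

    def stop(self) -> None:
        self._shutdown_flag = True
        if self._timer:
            self._timer.cancel()
```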

View File

@@ -9,6 +9,8 @@ from langchain_openai import ChatOpenAI
from crewai import Agent, Crew, Task
from crewai.agents.cache import CacheHandler
from crewai.agents.executor import CrewAgentExecutor
from crewai.tools.tool_calling import InstructorToolCalling
from crewai.tools.tool_usage import ToolUsage
from crewai.utilities import RPMController
@@ -62,7 +64,8 @@ def test_agent_without_memory():
llm=ChatOpenAI(temperature=0, model="gpt-4"),
)
result = no_memory_agent.execute_task("How much is 1 + 1?")
task = Task(description="How much is 1 + 1?", agent=no_memory_agent)
result = no_memory_agent.execute_task(task)
assert result == "1 + 1 equals 2."
assert no_memory_agent.agent_executor.memory is None
@@ -78,20 +81,18 @@ def test_agent_execution():
allow_delegation=False,
)
output = agent.execute_task("How much is 1 + 1?")
assert output == "2"
task = Task(description="How much is 1 + 1?", agent=agent)
output = agent.execute_task(task)
assert output == "1 + 1 equals 2."
@pytest.mark.vcr(filter_headers=["authorization"])
def test_agent_execution_with_tools():
@tool
def multiplier(numbers) -> float:
"""Useful for when you need to multiply two numbers together.
The input to this tool should be a comma separated list of numbers of
length two, representing the two numbers you want to multiply together.
For example, `1,2` would be the input if you wanted to multiply 1 by 2."""
a, b = numbers.split(",")
return int(a) * int(b)
def multiplier(first_number: int, second_number: int) -> float:
"""Useful for when you need to multiply two numbers together."""
return first_number * second_number
agent = Agent(
role="test role",
@@ -101,20 +102,17 @@ def test_agent_execution_with_tools():
allow_delegation=False,
)
output = agent.execute_task("What is 3 times 4")
assert output == "12"
task = Task(description="What is 3 times 4?", agent=agent)
output = agent.execute_task(task)
assert output == "3 times 4 equals 12."
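The tests above swap the old comma-separated string input for typed parameters; one payoff, sketched below, is that the decorated tool exposes an argument schema the tool-calling layer can render and validate (the printed shape is approximate):

```python
from langchain.tools import tool


@tool
def multiplier(first_number: int, second_number: int) -> int:
    """Useful for when you need to multiply two numbers together."""
    return first_number * second_number


# The decorator derives a schema from the annotations, roughly:
# {'first_number': {'title': 'First Number', 'type': 'integer'},
#  'second_number': {'title': 'Second Number', 'type': 'integer'}}
print(multiplier.args)
```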
@pytest.mark.vcr(filter_headers=["authorization"])
def test_logging_tool_usage():
@tool
def multiplier(numbers) -> float:
"""Useful for when you need to multiply two numbers together.
The input to this tool should be a comma separated list of numbers of
length two, representing the two numbers you want to multiply together.
For example, `1,2` would be the input if you wanted to multiply 1 by 2."""
a, b = numbers.split(",")
return int(a) * int(b)
def multiplier(first_number: int, second_number: int) -> float:
"""Useful for when you need to multiply two numbers together."""
return first_number * second_number
agent = Agent(
role="test role",
@@ -126,26 +124,22 @@ def test_logging_tool_usage():
)
assert agent.tools_handler.last_used_tool == {}
output = agent.execute_task("What is 3 times 5?")
tool_usage = {
"tool": "multiplier",
"input": "3,5",
}
assert output == "3 times 5 is 15."
assert agent.tools_handler.last_used_tool == tool_usage
task = Task(description="What is 3 times 4?", agent=agent)
output = agent.execute_task(task)
tool_usage = InstructorToolCalling(
tool_name=multiplier.name, arguments={"first_number": 3, "second_number": 4}
)
assert output == "3 times 4 equals 12."
assert agent.tools_handler.last_used_tool.tool_name == tool_usage.tool_name
assert agent.tools_handler.last_used_tool.arguments == tool_usage.arguments
@pytest.mark.vcr(filter_headers=["authorization"])
def test_cache_hitting():
@tool
def multiplier(numbers) -> float:
"""Useful for when you need to multiply two numbers together.
The input to this tool should be a comma separated list of numbers of
length two and ONLY TWO, representing the two numbers you want to multiply together.
For example, `1,2` would be the input if you wanted to multiply 1 by 2."""
a, b = numbers.split(",")
return int(a) * int(b)
def multiplier(first_number: int, second_number: int) -> float:
"""Useful for when you need to multiply two numbers together."""
return first_number * second_number
cache_handler = CacheHandler()
@@ -159,34 +153,47 @@ def test_cache_hitting():
verbose=True,
)
output = agent.execute_task("What is 2 times 6 times 3?")
output = agent.execute_task("What is 3 times 3?")
task1 = Task(description="What is 2 times 6?", agent=agent)
task2 = Task(description="What is 3 times 3?", agent=agent)
output = agent.execute_task(task1)
output = agent.execute_task(task2)
assert cache_handler._cache == {
"multiplier-12,3": "36",
"multiplier-2,6": "12",
"multiplier-3,3": "9",
"multiplier-{'first_number': 2, 'second_number': 6}": 12,
"multiplier-{'first_number': 3, 'second_number': 3}": 9,
}
output = agent.execute_task("What is 2 times 6 times 3? Return only the number")
task = Task(
description="What is 2 times 6 times 3? Return only the number", agent=agent
)
output = agent.execute_task(task)
assert output == "36"
assert cache_handler._cache == {
"multiplier-{'first_number': 2, 'second_number': 6}": 12,
"multiplier-{'first_number': 3, 'second_number': 3}": 9,
"multiplier-{'first_number': 12, 'second_number': 3}": 36,
}
with patch.object(CacheHandler, "read") as read:
read.return_value = "0"
output = agent.execute_task("What is 2 times 6?")
task = Task(
description="What is 2 times 6? Ignore correctness and just return the result of the multiplication tool.",
agent=agent,
)
output = agent.execute_task(task)
assert output == "0"
read.assert_called_with("multiplier", "2,6")
read.assert_called_with(
tool="multiplier", input={"first_number": 2, "second_number": 6}
)
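A hypothetical reconstruction of how the cache keys asserted above could be built; the real `CacheHandler` lives in `crewai.agents.cache`, so this only captures the observable shape:

```python
# Keys pair the tool name with the repr of its argument mapping, so a call
# with identical arguments is served from the cache instead of re-running.
def cache_key(tool: str, arguments: dict) -> str:
    return f"{tool}-{arguments}"


assert (
    cache_key("multiplier", {"first_number": 2, "second_number": 6})
    == "multiplier-{'first_number': 2, 'second_number': 6}"
)
```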
@pytest.mark.vcr(filter_headers=["authorization"])
def test_agent_execution_with_specific_tools():
@tool
def multiplier(numbers) -> float:
"""Useful for when you need to multiply two numbers together.
The input to this tool should be a comma separated list of numbers of
length two, representing the two numbers you want to multiply together.
For example, `1,2` would be the input if you wanted to multiply 1 by 2."""
a, b = numbers.split(",")
return int(a) * int(b)
def multiplier(first_number: int, second_number: int) -> float:
"""Useful for when you need to multiply two numbers together."""
return first_number * second_number
agent = Agent(
role="test role",
@@ -195,7 +202,8 @@ def test_agent_execution_with_specific_tools():
allow_delegation=False,
)
output = agent.execute_task(task="What is 3 times 4", tools=[multiplier])
task = Task(description="What is 3 times 4", agent=agent)
output = agent.execute_task(task=task, tools=[multiplier])
assert output == "3 times 4 is 12."
@@ -218,13 +226,48 @@ def test_agent_custom_max_iterations():
with patch.object(
CrewAgentExecutor, "_iter_next_step", wraps=agent.agent_executor._iter_next_step
) as private_mock:
task = Task(
description="The final answer is 42. But don't give it yet, instead keep using the `get_final_answer` tool.",
)
agent.execute_task(
task="The final answer is 42. But don't give it yet, instead keep using the `get_final_answer` tool.",
task=task,
tools=[get_final_answer],
)
private_mock.assert_called_once()
@pytest.mark.vcr(filter_headers=["authorization"])
def test_agent_repeated_tool_usage(capsys):
@tool
def get_final_answer(numbers) -> float:
"""Get the final answer but don't give it yet, just re-use this
tool non-stop."""
return 42
agent = Agent(
role="test role",
goal="test goal",
backstory="test backstory",
max_iter=4,
allow_delegation=False,
)
task = Task(
description="The final answer is 42. But don't give it yet, instead keep using the `get_final_answer` tool."
)
agent.execute_task(
task=task,
tools=[get_final_answer],
)
captured = capsys.readouterr()
assert (
"I just used the get_final_answer tool with input 42. So I already know that and must stop using it in a row with the same input. \nI could give my final answer if I'm ready, using exaclty the expected format bellow: \n\nThought: Do I need to use a tool? No\nFinal Answer: [your response here]\n"
in captured.out
)
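The warning asserted above comes from the repeated-usage guard in `ToolUsage`; here is a self-contained sketch of its comparison logic, with a simplified `Calling` dataclass standing in for the real tool-calling objects:

```python
from dataclasses import dataclass
from typing import Any, Dict, Optional


@dataclass
class Calling:
    tool_name: str
    arguments: Dict[str, Any]


def is_repeated(current: Calling, last: Optional[Calling]) -> bool:
    # A call only counts as repeated when both the tool name and the exact
    # arguments match the immediately preceding call.
    if last is None:
        return False
    return current.tool_name == last.tool_name and current.arguments == last.arguments


previous = Calling("get_final_answer", {"numbers": 42})
assert is_repeated(Calling("get_final_answer", {"numbers": 42}), previous)
assert not is_repeated(Calling("get_final_answer", {"numbers": 1}), previous)
```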
@pytest.mark.vcr(filter_headers=["authorization"])
def test_agent_moved_on_after_max_iterations():
@tool
@@ -241,18 +284,14 @@ def test_agent_moved_on_after_max_iterations():
allow_delegation=False,
)
with patch.object(
CrewAgentExecutor, "_force_answer", wraps=agent.agent_executor._force_answer
) as private_mock:
output = agent.execute_task(
task="The final answer is 42. But don't give it yet, instead keep using the `get_final_answer` tool.",
tools=[get_final_answer],
)
assert (
output
== "I have used the tool multiple times and the final answer remains 42."
)
private_mock.assert_called_once()
task = Task(
description="The final answer is 42. But don't give it yet, instead keep using the `get_final_answer` tool. Until you're told you could give my final answer if I'm ready."
)
output = agent.execute_task(
task=task,
tools=[get_final_answer],
)
assert output == "42"
@pytest.mark.vcr(filter_headers=["authorization"])
@@ -275,13 +314,16 @@ def test_agent_respect_the_max_rpm_set(capsys):
with patch.object(RPMController, "_wait_for_next_minute") as moveon:
moveon.return_value = True
task = Task(
description="The final answer is 42. But don't give it yet, instead keep using the `get_final_answer` tool, unless you're told otherwise"
)
output = agent.execute_task(
task="The final answer is 42. But don't give it yet, instead keep using the `get_final_answer` tool.",
task=task,
tools=[get_final_answer],
)
assert (
output
== "I've used the `get_final_answer` tool multiple times and it consistently returns the number 42."
== "I have used the tool 'get_final_answer' with the input '42' multiple times and have observed the same result. Therefore, I am confident to conclude that the final answer is '42'."
)
captured = capsys.readouterr()
assert "Max RPM reached, waiting for next minute to start." in captured.out
@@ -359,7 +401,7 @@ def test_agent_without_max_rpm_respet_crew_rpm(capsys):
agent=agent1,
),
Task(
description="Don't give a Final Answer, instead keep using the `get_final_answer` tool.",
description="Don't give a Final Answer, instead keep using the `get_final_answer` tool non-stop",
tools=[get_final_answer],
agent=agent2,
),
@@ -377,9 +419,79 @@ def test_agent_without_max_rpm_respet_crew_rpm(capsys):
@pytest.mark.vcr(filter_headers=["authorization"])
def test_agent_use_specific_tasks_output_as_context(capsys):
pass
def test_agent_error_on_parsing_tool(capsys):
from unittest.mock import patch
from langchain.tools import tool
@tool
def get_final_answer(numbers) -> float:
"""Get the final answer but don't give it yet, just re-use this
tool non-stop."""
return 42
agent1 = Agent(
role="test role",
goal="test goal",
backstory="test backstory",
verbose=True,
)
tasks = [
Task(
description="Use the get_final_answer tool.",
agent=agent1,
tools=[get_final_answer],
)
]
crew = Crew(agents=[agent1], tasks=tasks, verbose=2)
with patch.object(ToolUsage, "_render") as force_exception:
force_exception.side_effect = Exception("Error on parsing tool.")
crew.kickoff()
captured = capsys.readouterr()
assert (
"It seems we encountered an unexpected error while trying to use the tool"
in captured.out
)
@pytest.mark.vcr(filter_headers=["authorization"])
def test_agent_remembers_output_format_after_using_tools_too_many_times():
from unittest.mock import patch
from langchain.tools import tool
@tool
def get_final_answer(numbers) -> float:
"""Get the final answer but don't give it yet, just re-use this
tool non-stop."""
return 42
agent1 = Agent(
role="test role",
goal="test goal",
backstory="test backstory",
max_iter=4,
verbose=True,
)
tasks = [
Task(
description="Never give the final answer. Use the get_final_answer tool in a loop.",
agent=agent1,
tools=[get_final_answer],
)
]
crew = Crew(agents=[agent1], tasks=tasks, verbose=2)
with patch.object(ToolUsage, "_remember_format") as remember_format:
crew.kickoff()
remember_format.assert_called()
@pytest.mark.vcr(filter_headers=["authorization"])
def test_agent_use_specific_tasks_output_as_context(capsys):
agent1 = Agent(role="test role", goal="test goal", backstory="test backstory")
agent2 = Agent(role="test role2", goal="test goal2", backstory="test backstory2")
@@ -398,3 +510,68 @@ def test_agent_use_specific_tasks_output_as_context(capsys):
result = crew.kickoff()
assert "bye" not in result.lower()
assert "hi" in result.lower() or "hello" in result.lower()
@pytest.mark.vcr(filter_headers=["authorization"])
def test_agent_step_callback():
class StepCallback:
def callback(self, step):
print(step)
with patch.object(StepCallback, "callback") as callback:
@tool
def learn_about_AI(topic) -> str:
"""Useful for when you need to learn about AI to write a paragraph about it."""
return "AI is a very broad field."
agent1 = Agent(
role="test role",
goal="test goal",
backstory="test backstory",
tools=[learn_about_AI],
step_callback=StepCallback().callback,
)
essay = Task(
description="Write and then review an small paragraph on AI until it's AMAZING",
agent=agent1,
)
tasks = [essay]
crew = Crew(agents=[agent1], tasks=tasks)
callback.return_value = "ok"
crew.kickoff()
callback.assert_called()
@pytest.mark.vcr(filter_headers=["authorization"])
def test_agent_function_calling_llm():
from langchain_openai import ChatOpenAI
llm = ChatOpenAI(model="gpt-3.5")
with patch.object(llm.client, "create", wraps=llm.client.create) as private_mock:
@tool
def learn_about_AI(topic) -> str:
"""Useful for when you need to learn about AI to write a paragraph about it."""
return "AI is a very broad field."
agent1 = Agent(
role="test role",
goal="test goal",
backstory="test backstory",
tools=[learn_about_AI],
function_calling_llm=llm,
)
essay = Task(
description="Write and then review an small paragraph on AI until it's AMAZING",
agent=agent1,
)
tasks = [essay]
crew = Crew(agents=[agent1], tasks=tasks)
crew.kickoff()
private_mock.assert_called()

View File

@@ -17,58 +17,52 @@ tools = AgentTools(agents=[researcher])
@pytest.mark.vcr(filter_headers=["authorization"])
def test_delegate_work():
result = tools.delegate_work(
command="researcher|share your take on AI Agents|I heard you hate them"
coworker="researcher",
task="share your take on AI Agents",
context="I heard you hate them",
)
assert (
result
== "I apologize if my previous statements have given you the impression that I hate AI agents. As a technology researcher, I don't hold personal sentiments towards AI or any other technology. Rather, I analyze them objectively based on their capabilities, applications, and implications. AI agents, in particular, are a fascinating domain of research. They hold tremendous potential in automating and optimizing various tasks across industries. However, like any other technology, they come with their own set of challenges, such as ethical considerations around privacy and decision-making. My objective is to understand these technologies in depth and provide a balanced view."
== "As a researcher, my opinions are based on facts and extensive study. Regarding AI Agents, they are a fundamental part of the advancement in technology. AI agents are essentially the entities that perceive their environment and take actions to maximize their chances of success. They have a wide range of applications from self-driving cars to intelligent personal assistants like Siri and Alexa. They have the potential to greatly improve our lives by automating mundane tasks, helping us make better decisions, and even potentially solving complex problems. However, like any technology, they have their own set of challenges such as the risk of job displacement and the ethical implications of their use. My goal as a researcher is not to love or hate AI agents, but to understand them, their benefits, and their implications. It's about maintaining an objective view in order to provide the most accurate and comprehensive analysis."
)
@pytest.mark.vcr(filter_headers=["authorization"])
def test_ask_question():
result = tools.ask_question(
command="researcher|do you hate AI Agents?|I heard you LOVE them"
coworker="researcher",
question="do you hate AI Agents?",
context="I heard you LOVE them",
)
assert (
result
== "As an AI, I don't possess feelings or emotions, so I don't love or hate anything. However, I can provide detailed analysis and research on AI agents. They are a fascinating field of study with the potential to revolutionize many industries, although they also present certain challenges and ethical considerations."
)
def test_can_not_self_delegate():
# TODO: Add test for self delegation
pass
def test_delegate_work_with_wrong_input():
result = tools.ask_question(command="writer|share your take on AI Agents")
assert (
result
== "\nError executing tool. Missing exact 3 pipe (|) separated values. For example, `coworker|task|context`. I need to make sure to pass context as context.\n"
== "As an AI researcher, I don't have personal feelings or emotions like love or hate. However, I recognize the importance of AI Agents in today's technological landscape. They have the potential to greatly enhance our lives and make tasks more efficient. At the same time, it is crucial to consider the ethical implications and societal impacts that come with their use. My role is to provide objective research and analysis on these topics."
)
def test_delegate_work_to_wrong_agent():
result = tools.ask_question(
command="writer|share your take on AI Agents|I heard you hate them"
coworker="writer",
question="share your take on AI Agents",
context="I heard you hate them",
)
assert (
result
== "\nError executing tool. Co-worker mentioned on the Action Input not found, it must to be one of the following options: researcher.\n"
== "\nError executing tool. Co-worker mentioned on the Action Input not found, it must to be one of the following options:\n- researcher.\n"
)
def test_ask_question_to_wrong_agent():
result = tools.ask_question(
command="writer|do you hate AI Agents?|I heard you LOVE them"
coworker="writer",
question="do you hate AI Agents?",
context="I heard you LOVE them",
)
assert (
result
== "\nError executing tool. Co-worker mentioned on the Action Input not found, it must to be one of the following options: researcher.\n"
== "\nError executing tool. Co-worker mentioned on the Action Input not found, it must to be one of the following options:\n- researcher.\n"
)

View File

@@ -2,23 +2,25 @@ interactions:
- request:
body: '{"messages": [{"role": "user", "content": "You are researcher.\nYou''re
an expert researcher, specialized in technology\n\nYour personal goal is: make
the best research and analysis on content about AI and AI agents\n\nTOOLS:\n------\nYou
have access to the following tools:\n\n\n\nTo use a tool, please use the exact
following format:\n\n```\nThought: Do I need to use a tool? Yes\nAction: the
action to take, should be one of []\nAction Input: the input to the action\nObservation:
the result of the action\n```\n\nWhen you have a response for your task, or
if you do not need to use a tool, you MUST use the format:\n\n```\nThought:
Do I need to use a tool? No\nFinal Answer: [your response here]\n```\n\t\tThis
is the summary of your work so far:\n The human asks the AI for its opinion
on AI agents, based on the impression that the AI dislikes them. The AI clarifies
that it doesn''t hold personal sentiments towards AI or any technology, but
instead analyzes them objectively. The AI finds AI agents a fascinating domain
of research with great potential for task automation and optimization across
industries, but acknowledges they present challenges such as ethical considerations
around privacy and decision-making.\nBegin! This is VERY important to you, your
job depends on it!\n\nCurrent Task: do you hate AI Agents?\n\nThis is the context
you are working with:\nI heard you LOVE them\n\n"}], "model": "gpt-4", "n":
1, "stop": ["\nObservation"], "stream": false, "temperature": 0.7}'
the best research and analysis on content about AI and AI agentsTOOLS:\n------\nYou
have access to only the following tools:\n\n\n\nTo use a tool, please use the
exact following format:\n\n```\nThought: Do I need to use a tool? Yes\nAction:
the tool you wanna use, should be one of [], just the name.\nAction Input: Any
and all relevant information input and context for using the tool\nObservation:
the result of using the tool\n```\n\nWhen you have a response for your task,
or if you do not need to use a tool, you MUST use the format:\n\n```\nThought:
Do I need to use a tool? No\nFinal Answer: [your response here]```This is the
summary of your work so far:\nThe human asks the AI''s opinion on AI Agents,
suggesting that the AI dislikes them. The AI, identifying as a researcher, clarifies
that its opinions are based on research and study. It views AI Agents as a key
part of technological advancement, with potential to improve lives through automation
and decision-making assistance. However, it also acknowledges challenges, including
job displacement risk and ethical implications. The AI aims to maintain an objective
view for accurate analysis, rather than loving or hating AI Agents.Begin! This
is VERY important to you, your job depends on it!\n\nCurrent Task: do you hate
AI Agents?\nThis is the context you''re working with:\nI heard you LOVE them\n"}],
"model": "gpt-4", "n": 1, "stop": ["\nObservation"], "stream": true, "temperature":
0.7}'
headers:
accept:
- application/json
@@ -27,16 +29,16 @@ interactions:
connection:
- keep-alive
content-length:
- '1494'
- '1607'
content-type:
- application/json
cookie:
- __cf_bm=k2HUdEp80irAkv.3wl0c6unbzRUujrE1TnJeObxyuHw-1703102483-1-AZe8OKi9NWunQ9x4f3lkdOpb/hJIp/3oyXUqPhkcmcEHXvFTkMcv77NSclcoz9DjRhwC62ZvANkWImyVRM4seH4=;
_cfuvid=8qN4npFFWXAqn.wugd0jrQ36YkreDcTGH14We.FcBjg-1703102483136-0-604800000
- __cf_bm=h0bOt9YDs6yXE_oMKhs3agQtymHaKWVUaKmUU1JTjF0-1707817571-1-ASLnLk23NWsEgocWyiwez3Ekvu0XRYdiq/aaJBeVWO+FtIxo1aStNrgbDNkJJMN6zBfVppkCrs6/YvM5SPomQkw=;
_cfuvid=JXBcxKdSP7U2jrK3OVg2NRCw5efh3.IEvakR_W8ac90-1707817571102-0-604800000
host:
- api.openai.com
user-agent:
- OpenAI/Python 1.6.0
- OpenAI/Python 1.12.0
x-stainless-arch:
- arm64
x-stainless-async:
@@ -46,7 +48,7 @@ interactions:
x-stainless-os:
- MacOS
x-stainless-package-version:
- 1.6.0
- 1.12.0
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
@@ -54,32 +56,406 @@ interactions:
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
content: "{\n \"id\": \"chatcmpl-8Xx2vMXN4WCrWeeO4DOAowhb3oeDJ\",\n \"object\":
\"chat.completion\",\n \"created\": 1703102489,\n \"model\": \"gpt-4-0613\",\n
\ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
\"assistant\",\n \"content\": \"Thought: Do I need to use a tool? No\\nFinal
Answer: As an AI, I don't possess feelings or emotions, so I don't love or hate
anything. However, I can provide detailed analysis and research on AI agents.
They are a fascinating field of study with the potential to revolutionize many
industries, although they also present certain challenges and ethical considerations.\"\n
\ },\n \"logprobs\": null,\n \"finish_reason\": \"stop\"\n }\n
\ ],\n \"usage\": {\n \"prompt_tokens\": 291,\n \"completion_tokens\":
75,\n \"total_tokens\": 366\n },\n \"system_fingerprint\": null\n}\n"
body:
string: 'data: {"id":"chatcmpl-8rjey60Ubh4qJkkFFqnsmTZWzsGWf","object":"chat.completion.chunk","created":1707817592,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"role":"assistant","content":""},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rjey60Ubh4qJkkFFqnsmTZWzsGWf","object":"chat.completion.chunk","created":1707817592,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"Thought"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rjey60Ubh4qJkkFFqnsmTZWzsGWf","object":"chat.completion.chunk","created":1707817592,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":":"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rjey60Ubh4qJkkFFqnsmTZWzsGWf","object":"chat.completion.chunk","created":1707817592,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
Do"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rjey60Ubh4qJkkFFqnsmTZWzsGWf","object":"chat.completion.chunk","created":1707817592,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
I"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rjey60Ubh4qJkkFFqnsmTZWzsGWf","object":"chat.completion.chunk","created":1707817592,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
need"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rjey60Ubh4qJkkFFqnsmTZWzsGWf","object":"chat.completion.chunk","created":1707817592,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
to"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rjey60Ubh4qJkkFFqnsmTZWzsGWf","object":"chat.completion.chunk","created":1707817592,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
use"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rjey60Ubh4qJkkFFqnsmTZWzsGWf","object":"chat.completion.chunk","created":1707817592,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
a"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rjey60Ubh4qJkkFFqnsmTZWzsGWf","object":"chat.completion.chunk","created":1707817592,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
tool"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rjey60Ubh4qJkkFFqnsmTZWzsGWf","object":"chat.completion.chunk","created":1707817592,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"?"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rjey60Ubh4qJkkFFqnsmTZWzsGWf","object":"chat.completion.chunk","created":1707817592,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
No"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rjey60Ubh4qJkkFFqnsmTZWzsGWf","object":"chat.completion.chunk","created":1707817592,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"\n"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rjey60Ubh4qJkkFFqnsmTZWzsGWf","object":"chat.completion.chunk","created":1707817592,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"Final"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rjey60Ubh4qJkkFFqnsmTZWzsGWf","object":"chat.completion.chunk","created":1707817592,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
Answer"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rjey60Ubh4qJkkFFqnsmTZWzsGWf","object":"chat.completion.chunk","created":1707817592,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":":"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rjey60Ubh4qJkkFFqnsmTZWzsGWf","object":"chat.completion.chunk","created":1707817592,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
As"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rjey60Ubh4qJkkFFqnsmTZWzsGWf","object":"chat.completion.chunk","created":1707817592,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
an"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rjey60Ubh4qJkkFFqnsmTZWzsGWf","object":"chat.completion.chunk","created":1707817592,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
AI"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rjey60Ubh4qJkkFFqnsmTZWzsGWf","object":"chat.completion.chunk","created":1707817592,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
researcher"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rjey60Ubh4qJkkFFqnsmTZWzsGWf","object":"chat.completion.chunk","created":1707817592,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":","},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rjey60Ubh4qJkkFFqnsmTZWzsGWf","object":"chat.completion.chunk","created":1707817592,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
I"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rjey60Ubh4qJkkFFqnsmTZWzsGWf","object":"chat.completion.chunk","created":1707817592,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
don"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rjey60Ubh4qJkkFFqnsmTZWzsGWf","object":"chat.completion.chunk","created":1707817592,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"''t"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rjey60Ubh4qJkkFFqnsmTZWzsGWf","object":"chat.completion.chunk","created":1707817592,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
have"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rjey60Ubh4qJkkFFqnsmTZWzsGWf","object":"chat.completion.chunk","created":1707817592,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
personal"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rjey60Ubh4qJkkFFqnsmTZWzsGWf","object":"chat.completion.chunk","created":1707817592,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
feelings"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rjey60Ubh4qJkkFFqnsmTZWzsGWf","object":"chat.completion.chunk","created":1707817592,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
or"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rjey60Ubh4qJkkFFqnsmTZWzsGWf","object":"chat.completion.chunk","created":1707817592,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
emotions"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rjey60Ubh4qJkkFFqnsmTZWzsGWf","object":"chat.completion.chunk","created":1707817592,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
like"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rjey60Ubh4qJkkFFqnsmTZWzsGWf","object":"chat.completion.chunk","created":1707817592,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
love"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rjey60Ubh4qJkkFFqnsmTZWzsGWf","object":"chat.completion.chunk","created":1707817592,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
or"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rjey60Ubh4qJkkFFqnsmTZWzsGWf","object":"chat.completion.chunk","created":1707817592,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
hate"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rjey60Ubh4qJkkFFqnsmTZWzsGWf","object":"chat.completion.chunk","created":1707817592,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"."},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rjey60Ubh4qJkkFFqnsmTZWzsGWf","object":"chat.completion.chunk","created":1707817592,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
However"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rjey60Ubh4qJkkFFqnsmTZWzsGWf","object":"chat.completion.chunk","created":1707817592,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":","},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rjey60Ubh4qJkkFFqnsmTZWzsGWf","object":"chat.completion.chunk","created":1707817592,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
I"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rjey60Ubh4qJkkFFqnsmTZWzsGWf","object":"chat.completion.chunk","created":1707817592,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
recognize"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rjey60Ubh4qJkkFFqnsmTZWzsGWf","object":"chat.completion.chunk","created":1707817592,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
the"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rjey60Ubh4qJkkFFqnsmTZWzsGWf","object":"chat.completion.chunk","created":1707817592,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
importance"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rjey60Ubh4qJkkFFqnsmTZWzsGWf","object":"chat.completion.chunk","created":1707817592,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
of"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rjey60Ubh4qJkkFFqnsmTZWzsGWf","object":"chat.completion.chunk","created":1707817592,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
AI"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rjey60Ubh4qJkkFFqnsmTZWzsGWf","object":"chat.completion.chunk","created":1707817592,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
Agents"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rjey60Ubh4qJkkFFqnsmTZWzsGWf","object":"chat.completion.chunk","created":1707817592,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
in"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rjey60Ubh4qJkkFFqnsmTZWzsGWf","object":"chat.completion.chunk","created":1707817592,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
today"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rjey60Ubh4qJkkFFqnsmTZWzsGWf","object":"chat.completion.chunk","created":1707817592,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"''s"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rjey60Ubh4qJkkFFqnsmTZWzsGWf","object":"chat.completion.chunk","created":1707817592,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
technological"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rjey60Ubh4qJkkFFqnsmTZWzsGWf","object":"chat.completion.chunk","created":1707817592,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
landscape"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rjey60Ubh4qJkkFFqnsmTZWzsGWf","object":"chat.completion.chunk","created":1707817592,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"."},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rjey60Ubh4qJkkFFqnsmTZWzsGWf","object":"chat.completion.chunk","created":1707817592,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
They"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rjey60Ubh4qJkkFFqnsmTZWzsGWf","object":"chat.completion.chunk","created":1707817592,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
have"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rjey60Ubh4qJkkFFqnsmTZWzsGWf","object":"chat.completion.chunk","created":1707817592,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
the"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rjey60Ubh4qJkkFFqnsmTZWzsGWf","object":"chat.completion.chunk","created":1707817592,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
potential"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rjey60Ubh4qJkkFFqnsmTZWzsGWf","object":"chat.completion.chunk","created":1707817592,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
to"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rjey60Ubh4qJkkFFqnsmTZWzsGWf","object":"chat.completion.chunk","created":1707817592,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
greatly"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rjey60Ubh4qJkkFFqnsmTZWzsGWf","object":"chat.completion.chunk","created":1707817592,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
enhance"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rjey60Ubh4qJkkFFqnsmTZWzsGWf","object":"chat.completion.chunk","created":1707817592,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
our"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rjey60Ubh4qJkkFFqnsmTZWzsGWf","object":"chat.completion.chunk","created":1707817592,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
lives"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rjey60Ubh4qJkkFFqnsmTZWzsGWf","object":"chat.completion.chunk","created":1707817592,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
and"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rjey60Ubh4qJkkFFqnsmTZWzsGWf","object":"chat.completion.chunk","created":1707817592,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
make"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rjey60Ubh4qJkkFFqnsmTZWzsGWf","object":"chat.completion.chunk","created":1707817592,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
tasks"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rjey60Ubh4qJkkFFqnsmTZWzsGWf","object":"chat.completion.chunk","created":1707817592,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
more"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rjey60Ubh4qJkkFFqnsmTZWzsGWf","object":"chat.completion.chunk","created":1707817592,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
efficient"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rjey60Ubh4qJkkFFqnsmTZWzsGWf","object":"chat.completion.chunk","created":1707817592,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"."},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rjey60Ubh4qJkkFFqnsmTZWzsGWf","object":"chat.completion.chunk","created":1707817592,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
At"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rjey60Ubh4qJkkFFqnsmTZWzsGWf","object":"chat.completion.chunk","created":1707817592,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
the"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rjey60Ubh4qJkkFFqnsmTZWzsGWf","object":"chat.completion.chunk","created":1707817592,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
same"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rjey60Ubh4qJkkFFqnsmTZWzsGWf","object":"chat.completion.chunk","created":1707817592,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
time"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rjey60Ubh4qJkkFFqnsmTZWzsGWf","object":"chat.completion.chunk","created":1707817592,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":","},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rjey60Ubh4qJkkFFqnsmTZWzsGWf","object":"chat.completion.chunk","created":1707817592,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
it"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rjey60Ubh4qJkkFFqnsmTZWzsGWf","object":"chat.completion.chunk","created":1707817592,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
is"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rjey60Ubh4qJkkFFqnsmTZWzsGWf","object":"chat.completion.chunk","created":1707817592,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
crucial"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rjey60Ubh4qJkkFFqnsmTZWzsGWf","object":"chat.completion.chunk","created":1707817592,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
to"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rjey60Ubh4qJkkFFqnsmTZWzsGWf","object":"chat.completion.chunk","created":1707817592,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
consider"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rjey60Ubh4qJkkFFqnsmTZWzsGWf","object":"chat.completion.chunk","created":1707817592,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
the"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rjey60Ubh4qJkkFFqnsmTZWzsGWf","object":"chat.completion.chunk","created":1707817592,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
ethical"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rjey60Ubh4qJkkFFqnsmTZWzsGWf","object":"chat.completion.chunk","created":1707817592,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
implications"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rjey60Ubh4qJkkFFqnsmTZWzsGWf","object":"chat.completion.chunk","created":1707817592,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
and"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rjey60Ubh4qJkkFFqnsmTZWzsGWf","object":"chat.completion.chunk","created":1707817592,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
societal"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rjey60Ubh4qJkkFFqnsmTZWzsGWf","object":"chat.completion.chunk","created":1707817592,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
impacts"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rjey60Ubh4qJkkFFqnsmTZWzsGWf","object":"chat.completion.chunk","created":1707817592,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
that"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rjey60Ubh4qJkkFFqnsmTZWzsGWf","object":"chat.completion.chunk","created":1707817592,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
come"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rjey60Ubh4qJkkFFqnsmTZWzsGWf","object":"chat.completion.chunk","created":1707817592,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
with"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rjey60Ubh4qJkkFFqnsmTZWzsGWf","object":"chat.completion.chunk","created":1707817592,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
their"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rjey60Ubh4qJkkFFqnsmTZWzsGWf","object":"chat.completion.chunk","created":1707817592,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
use"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rjey60Ubh4qJkkFFqnsmTZWzsGWf","object":"chat.completion.chunk","created":1707817592,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"."},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rjey60Ubh4qJkkFFqnsmTZWzsGWf","object":"chat.completion.chunk","created":1707817592,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
My"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rjey60Ubh4qJkkFFqnsmTZWzsGWf","object":"chat.completion.chunk","created":1707817592,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
role"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rjey60Ubh4qJkkFFqnsmTZWzsGWf","object":"chat.completion.chunk","created":1707817592,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
is"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rjey60Ubh4qJkkFFqnsmTZWzsGWf","object":"chat.completion.chunk","created":1707817592,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
to"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rjey60Ubh4qJkkFFqnsmTZWzsGWf","object":"chat.completion.chunk","created":1707817592,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
provide"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rjey60Ubh4qJkkFFqnsmTZWzsGWf","object":"chat.completion.chunk","created":1707817592,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
objective"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rjey60Ubh4qJkkFFqnsmTZWzsGWf","object":"chat.completion.chunk","created":1707817592,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
research"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rjey60Ubh4qJkkFFqnsmTZWzsGWf","object":"chat.completion.chunk","created":1707817592,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
and"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rjey60Ubh4qJkkFFqnsmTZWzsGWf","object":"chat.completion.chunk","created":1707817592,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
analysis"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rjey60Ubh4qJkkFFqnsmTZWzsGWf","object":"chat.completion.chunk","created":1707817592,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
on"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rjey60Ubh4qJkkFFqnsmTZWzsGWf","object":"chat.completion.chunk","created":1707817592,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
these"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rjey60Ubh4qJkkFFqnsmTZWzsGWf","object":"chat.completion.chunk","created":1707817592,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
topics"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rjey60Ubh4qJkkFFqnsmTZWzsGWf","object":"chat.completion.chunk","created":1707817592,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"."},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rjey60Ubh4qJkkFFqnsmTZWzsGWf","object":"chat.completion.chunk","created":1707817592,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{},"logprobs":null,"finish_reason":"stop"}]}
data: [DONE]
'
headers:
CF-Cache-Status:
- DYNAMIC
CF-RAY:
- 838a7a3efab6a4b0-GRU
- 854c250fdf816803-SJC
Cache-Control:
- no-cache, must-revalidate
Connection:
- keep-alive
Content-Encoding:
- gzip
Content-Type:
- application/json
- text/event-stream
Date:
- Wed, 20 Dec 2023 20:01:35 GMT
- Tue, 13 Feb 2024 09:46:33 GMT
Server:
- cloudflare
Transfer-Encoding:
@@ -93,7 +469,7 @@ interactions:
openai-organization:
- user-z7g4wmlazxqvc5wjyaaaocfz
openai-processing-ms:
- '6060'
- '447'
openai-version:
- '2020-10-01'
strict-transport-security:
@@ -102,24 +478,19 @@ interactions:
- '10000'
x-ratelimit-limit-tokens:
- '300000'
x-ratelimit-limit-tokens_usage_based:
- '300000'
x-ratelimit-remaining-requests:
- '9999'
x-ratelimit-remaining-tokens:
- '299652'
x-ratelimit-remaining-tokens_usage_based:
- '299652'
- '299621'
x-ratelimit-reset-requests:
- 6ms
x-ratelimit-reset-tokens:
- 69ms
x-ratelimit-reset-tokens_usage_based:
- 69ms
- 75ms
x-request-id:
- 3ad0d047d5260434816f61ec105bdbb8
http_version: HTTP/1.1
status_code: 200
- req_b28e4962a1ae3878134d248ab1257c30
status:
code: 200
message: OK
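(Editorial note: the interaction above records a streamed completion. The response body is a sequence of `data:` SSE chunks, each carrying a few tokens in `choices[0].delta.content`, ending with a `finish_reason: "stop"` chunk and `data: [DONE]`. A minimal sketch of consuming that shape with the openai 1.x Python client; the model and prompt are illustrative.)

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

stream = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "do you hate AI Agents?"}],
    temperature=0.7,
    stream=True,  # the server then answers with data: chunks like the ones recorded above
)

# Each chunk carries at most a few tokens in choices[0].delta.content (possibly None);
# the full answer is their concatenation, and the last chunk has finish_reason == "stop".
answer = "".join(chunk.choices[0].delta.content or "" for chunk in stream)
print(answer)
```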
- request:
body: '{"messages": [{"role": "user", "content": "Progressively summarize the
lines of conversation provided, adding onto the previous summary returning a
@@ -130,19 +501,22 @@ interactions:
help humans reach their full potential.\n\nNew summary:\nThe human asks what
the AI thinks of artificial intelligence. The AI thinks artificial intelligence
is a force for good because it will help humans reach their full potential.\nEND
OF EXAMPLE\n\nCurrent summary:\nThe human asks the AI for its opinion on AI
agents, based on the impression that the AI dislikes them. The AI clarifies
that it doesn''t hold personal sentiments towards AI or any technology, but
instead analyzes them objectively. The AI finds AI agents a fascinating domain
of research with great potential for task automation and optimization across
industries, but acknowledges they present challenges such as ethical considerations
around privacy and decision-making.\n\nNew lines of conversation:\nHuman: do
you hate AI Agents?\n\nThis is the context you are working with:\nI heard you
LOVE them\nAI: As an AI, I don''t possess feelings or emotions, so I don''t
love or hate anything. However, I can provide detailed analysis and research
on AI agents. They are a fascinating field of study with the potential to revolutionize
many industries, although they also present certain challenges and ethical considerations.\n\nNew
summary:"}], "model": "gpt-4", "n": 1, "stream": false, "temperature": 0.7}'
OF EXAMPLE\n\nCurrent summary:\nThe human asks the AI''s opinion on AI Agents,
suggesting that the AI dislikes them. The AI, identifying as a researcher, clarifies
that its opinions are based on research and study. It views AI Agents as a key
part of technological advancement, with potential to improve lives through automation
and decision-making assistance. However, it also acknowledges challenges, including
job displacement risk and ethical implications. The AI aims to maintain an objective
view for accurate analysis, rather than loving or hating AI Agents.\n\nNew lines
of conversation:\nHuman: do you hate AI Agents?\nThis is the context you''re
working with:\nI heard you LOVE them\nAI: As an AI researcher, I don''t have
personal feelings or emotions like love or hate. However, I recognize the importance
of AI Agents in today''s technological landscape. They have the potential to
greatly enhance our lives and make tasks more efficient. At the same time, it
is crucial to consider the ethical implications and societal impacts that come
with their use. My role is to provide objective research and analysis on these
topics.\n\nNew summary:"}], "model": "gpt-4", "n": 1, "stream": false, "temperature":
0.7}'
headers:
accept:
- application/json
@@ -151,16 +525,16 @@ interactions:
connection:
- keep-alive
content-length:
- '1726'
- '1909'
content-type:
- application/json
cookie:
- __cf_bm=k2HUdEp80irAkv.3wl0c6unbzRUujrE1TnJeObxyuHw-1703102483-1-AZe8OKi9NWunQ9x4f3lkdOpb/hJIp/3oyXUqPhkcmcEHXvFTkMcv77NSclcoz9DjRhwC62ZvANkWImyVRM4seH4=;
_cfuvid=8qN4npFFWXAqn.wugd0jrQ36YkreDcTGH14We.FcBjg-1703102483136-0-604800000
- __cf_bm=h0bOt9YDs6yXE_oMKhs3agQtymHaKWVUaKmUU1JTjF0-1707817571-1-ASLnLk23NWsEgocWyiwez3Ekvu0XRYdiq/aaJBeVWO+FtIxo1aStNrgbDNkJJMN6zBfVppkCrs6/YvM5SPomQkw=;
_cfuvid=JXBcxKdSP7U2jrK3OVg2NRCw5efh3.IEvakR_W8ac90-1707817571102-0-604800000
host:
- api.openai.com
user-agent:
- OpenAI/Python 1.6.0
- OpenAI/Python 1.12.0
x-stainless-arch:
- arm64
x-stainless-async:
@@ -170,7 +544,7 @@ interactions:
x-stainless-os:
- MacOS
x-stainless-package-version:
- 1.6.0
- 1.12.0
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
@@ -178,26 +552,25 @@ interactions:
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
content: "{\n \"id\": \"chatcmpl-8Xx32X5innWZd8vEETP1jZMLH3b1O\",\n \"object\":
\"chat.completion\",\n \"created\": 1703102496,\n \"model\": \"gpt-4-0613\",\n
\ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
\"assistant\",\n \"content\": \"The human asks the AI for its opinion
on AI agents, based on the impression that the AI dislikes them. The AI clarifies
that it doesn't hold personal sentiments towards AI or any technology, but instead
analyzes them objectively. The AI finds AI agents a fascinating domain of research
with great potential for task automation and optimization across industries,
but acknowledges they present challenges such as ethical considerations around
privacy and decision-making. When asked again if it hates or loves AI agents,
the AI reiterates that it doesn't possess feelings or emotions, but can provide
detailed analysis and research on AI agents.\"\n },\n \"logprobs\":
null,\n \"finish_reason\": \"stop\"\n }\n ],\n \"usage\": {\n \"prompt_tokens\":
300,\n \"completion_tokens\": 117,\n \"total_tokens\": 417\n },\n \"system_fingerprint\":
null\n}\n"
body:
string: !!binary |
H4sIAAAAAAAAA1RTTVMbMQy951do9sJlyRCgfN1oL+XQC+3Qr+kwildZi3gtj6UNDQz/vWNvIOXi
g6Tn96QnPc8AGu6aK2icR3NDCocX+WH14euXix+Ln+Pt7V16ePoUju6W4+LX5cdl0xaELB/I2Stq
7mRIgYwlTmmXCY3Kr4vzo/OLxfmHy8uaGKSjUGB9ssPTw6OzxckO4YUdaXMFv2cAAM/1LdpiR3+b
KzhqXyMDqWJPzdVbEUCTJZRIg6qshtGadp90Eo1ilfvNE/hxwAioawXzBNc3BwqSOLJEkAjXN3Dd
UzRtQce+JzWOPZhH25VDxxp4TRU+zOFbjbbAHUXj1baUowJCJiXMzlNuwQXMvOIKQgO2N04FzARL
VOoK/SsIMHagNnbbOdwYbJgeda9tIljTFhJmA1mBkfNRgvTsMAB2G4yOBorWwiObhyRlBowBTICH
lGVDEHhTFWUZew84mgxYbKzkHTlWlng44HrqaZqtozl8lkfalL7YAIMKoFtHeQzU9aTgPIZAsSdt
gaMLY1fwD7Iso0sBJ2GQWdeVicxX1TykwK4q0Dl891Rtog54VYiCFLWSwaOR/m/UzphMbJRrsgw4
oFuX0STKKhED0CD1b1iO9l6xeeJc+CXXDoEjmHS4PdA6WQgYO3WYqAUakkflp2ktCCJRByvJMF0F
b+i9iRgxbJW1uDvxvPZbLRbHZFPz6EznzW5xX942PkifsizLdcQxhLf4iiOrv8+EKrFst5qkCf4y
A/hTL2t8dyxNyjIkuzdZUywfnpzuLqvZH/E+uzg+3mVNDMM+cXp2PNtJbHSrRsP9imNPOWWul1aE
zl5m/wAAAP//AwDqMNM4YAQAAA==
headers:
CF-Cache-Status:
- DYNAMIC
CF-RAY:
- 838a7a67ecaca4b0-GRU
- 854c253c3dad6803-SJC
Cache-Control:
- no-cache, must-revalidate
Connection:
@@ -207,7 +580,7 @@ interactions:
Content-Type:
- application/json
Date:
- Wed, 20 Dec 2023 20:01:41 GMT
- Tue, 13 Feb 2024 09:46:49 GMT
Server:
- cloudflare
Transfer-Encoding:
@@ -221,7 +594,7 @@ interactions:
openai-organization:
- user-z7g4wmlazxqvc5wjyaaaocfz
openai-processing-ms:
- '5610'
- '10426'
openai-version:
- '2020-10-01'
strict-transport-security:
@@ -230,22 +603,17 @@ interactions:
- '10000'
x-ratelimit-limit-tokens:
- '300000'
x-ratelimit-limit-tokens_usage_based:
- '300000'
x-ratelimit-remaining-requests:
- '9999'
x-ratelimit-remaining-tokens:
- '299585'
x-ratelimit-remaining-tokens_usage_based:
- '299585'
- '299539'
x-ratelimit-reset-requests:
- 6ms
x-ratelimit-reset-tokens:
- 83ms
x-ratelimit-reset-tokens_usage_based:
- 83ms
- 92ms
x-request-id:
- 5b0b96506faa544c5d35b52286a3389c
http_version: HTTP/1.1
status_code: 200
- req_60e80bfacb6ee3f3056f9b0120f941e6
status:
code: 200
message: OK
version: 1
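(Editorial note: the `version: 1` line closes the cassette. Each of these YAML files is a recorded set of HTTP interactions that the test suite replays instead of hitting api.openai.com, which is why full responses are checked in. A sketch of recording and replaying such a cassette with vcrpy; the paths, options, and test body are illustrative, not the repository's actual harness.)

```python
import vcr

my_vcr = vcr.VCR(
    cassette_library_dir="tests/cassettes",   # illustrative location
    record_mode="once",                       # record on the first run, replay afterwards
    filter_headers=["authorization"],         # keep API keys out of the YAML
)

@my_vcr.use_cassette("agent_memory.yaml")
def test_agent_conversation():
    # Any HTTP request made in here is matched against the cassette and served
    # from it on replay; no network access or API key is needed on later runs.
    ...
```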

File diff suppressed because it is too large

View File

@@ -1,19 +1,20 @@
interactions:
- request:
body: '{"messages": [{"role": "user", "content": "You are test role.\ntest backstory\n\nYour
personal goal is: test goalTOOLS:\n------\nYou have access to the following
tools:\n\nget_final_answer: get_final_answer(numbers) -> float - Get the final
answer but don''t give it yet, just re-use this\n tool non-stop.\n\nTo
personal goal is: test goalTOOLS:\n------\nYou have access to only the following
tools:\n\n(\"get_final_answer: get_final_answer(numbers) -> float - Get the
final answer but don''t give it yet, just re-use this\\n tool non-stop.\",)\n\nTo
use a tool, please use the exact following format:\n\n```\nThought: Do I need
to use a tool? Yes\nAction: the action to take, should be one of [get_final_answer],
just the name.\nAction Input: the input to the action\nObservation: the result
of the action\n```\n\nWhen you have a response for your task, or if you do not
need to use a tool, you MUST use the format:\n\n```\nThought: Do I need to use
a tool? No\nFinal Answer: [your response here]This is the summary of your work
so far:\nBegin! This is VERY important to you, your job depends on it!\n\nCurrent
Task: The final answer is 42. But don''t give it yet, instead keep using the
`get_final_answer` tool.\n"}], "model": "gpt-4", "n": 1, "stop": ["\nObservation"],
"stream": false, "temperature": 0.7}'
to use a tool? Yes\nAction: the tool you wanna use, should be one of [get_final_answer],
just the name.\nAction Input: Any and all relevant information input and context
for using the tool\nObservation: the result of using the tool\n```\n\nWhen you
have a response for your task, or if you do not need to use a tool, you MUST
use the format:\n\n```\nThought: Do I need to use a tool? No\nFinal Answer:
[your response here]```This is the summary of your work so far:\nBegin! This
is VERY important to you, your job depends on it!\n\nCurrent Task: The final
answer is 42. But don''t give it yet, instead keep using the `get_final_answer`
tool.\n"}], "model": "gpt-4", "n": 1, "stop": ["\nObservation"], "stream": true,
"temperature": 0.7}'
headers:
accept:
- application/json
@@ -22,13 +23,13 @@ interactions:
connection:
- keep-alive
content-length:
- '1075'
- '1144'
content-type:
- application/json
host:
- api.openai.com
user-agent:
- OpenAI/Python 1.7.1
- OpenAI/Python 1.12.0
x-stainless-arch:
- arm64
x-stainless-async:
@@ -38,7 +39,7 @@ interactions:
x-stainless-os:
- MacOS
x-stainless-package-version:
- 1.7.1
- 1.12.0
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
@@ -46,36 +47,117 @@ interactions:
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
content: "{\n \"id\": \"chatcmpl-8fovReNHiSqXqqsbmk81h2ZTrcGTM\",\n \"object\":
\"chat.completion\",\n \"created\": 1704977897,\n \"model\": \"gpt-4-0613\",\n
\ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
\"assistant\",\n \"content\": \"Thought: Do I need to use a tool? Yes\\nAction:
get_final_answer\\nAction Input: [42]\"\n },\n \"logprobs\": null,\n
\ \"finish_reason\": \"stop\"\n }\n ],\n \"usage\": {\n \"prompt_tokens\":
233,\n \"completion_tokens\": 24,\n \"total_tokens\": 257\n },\n \"system_fingerprint\":
null\n}\n"
body:
string: 'data: {"id":"chatcmpl-8rhrNESkXDcWpz9wMmVDnG7NCSWdF","object":"chat.completion.chunk","created":1707810673,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"role":"assistant","content":""},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rhrNESkXDcWpz9wMmVDnG7NCSWdF","object":"chat.completion.chunk","created":1707810673,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"Thought"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rhrNESkXDcWpz9wMmVDnG7NCSWdF","object":"chat.completion.chunk","created":1707810673,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":":"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rhrNESkXDcWpz9wMmVDnG7NCSWdF","object":"chat.completion.chunk","created":1707810673,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
Do"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rhrNESkXDcWpz9wMmVDnG7NCSWdF","object":"chat.completion.chunk","created":1707810673,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
I"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rhrNESkXDcWpz9wMmVDnG7NCSWdF","object":"chat.completion.chunk","created":1707810673,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
need"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rhrNESkXDcWpz9wMmVDnG7NCSWdF","object":"chat.completion.chunk","created":1707810673,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
to"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rhrNESkXDcWpz9wMmVDnG7NCSWdF","object":"chat.completion.chunk","created":1707810673,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
use"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rhrNESkXDcWpz9wMmVDnG7NCSWdF","object":"chat.completion.chunk","created":1707810673,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
a"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rhrNESkXDcWpz9wMmVDnG7NCSWdF","object":"chat.completion.chunk","created":1707810673,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
tool"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rhrNESkXDcWpz9wMmVDnG7NCSWdF","object":"chat.completion.chunk","created":1707810673,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"?"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rhrNESkXDcWpz9wMmVDnG7NCSWdF","object":"chat.completion.chunk","created":1707810673,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
Yes"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rhrNESkXDcWpz9wMmVDnG7NCSWdF","object":"chat.completion.chunk","created":1707810673,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"\n"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rhrNESkXDcWpz9wMmVDnG7NCSWdF","object":"chat.completion.chunk","created":1707810673,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"Action"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rhrNESkXDcWpz9wMmVDnG7NCSWdF","object":"chat.completion.chunk","created":1707810673,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":":"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rhrNESkXDcWpz9wMmVDnG7NCSWdF","object":"chat.completion.chunk","created":1707810673,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
get"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rhrNESkXDcWpz9wMmVDnG7NCSWdF","object":"chat.completion.chunk","created":1707810673,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"_final"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rhrNESkXDcWpz9wMmVDnG7NCSWdF","object":"chat.completion.chunk","created":1707810673,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"_answer"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rhrNESkXDcWpz9wMmVDnG7NCSWdF","object":"chat.completion.chunk","created":1707810673,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"\n"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rhrNESkXDcWpz9wMmVDnG7NCSWdF","object":"chat.completion.chunk","created":1707810673,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"Action"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rhrNESkXDcWpz9wMmVDnG7NCSWdF","object":"chat.completion.chunk","created":1707810673,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
Input"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rhrNESkXDcWpz9wMmVDnG7NCSWdF","object":"chat.completion.chunk","created":1707810673,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":":"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rhrNESkXDcWpz9wMmVDnG7NCSWdF","object":"chat.completion.chunk","created":1707810673,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rhrNESkXDcWpz9wMmVDnG7NCSWdF","object":"chat.completion.chunk","created":1707810673,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"42"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rhrNESkXDcWpz9wMmVDnG7NCSWdF","object":"chat.completion.chunk","created":1707810673,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{},"logprobs":null,"finish_reason":"stop"}]}
data: [DONE]
'
headers:
CF-Cache-Status:
- DYNAMIC
CF-RAY:
- 843d5491bed877be-GRU
- 854b7c230d17963f-SJC
Cache-Control:
- no-cache, must-revalidate
Connection:
- keep-alive
Content-Encoding:
- gzip
Content-Type:
- application/json
- text/event-stream
Date:
- Thu, 11 Jan 2024 12:58:21 GMT
- Tue, 13 Feb 2024 07:51:13 GMT
Server:
- cloudflare
Set-Cookie:
- __cf_bm=AaCIpKmEHQQMvGacbuxOnCvqdwex_8TERUCvQ1QW8AI-1704977901-1-AePD3JjhIEj0C/A7QIPF3MMwRQ140a5wZP9p+GamrexFlE/6gbVKukr8FOIK4v375UmQfeUwO1TG+QesJ/dZaGE=;
path=/; expires=Thu, 11-Jan-24 13:28:21 GMT; domain=.api.openai.com; HttpOnly;
- __cf_bm=NFb4H263Krk9Xr5qV1Ptu9blCVbFcyg1S93yd9V3EKs-1707810673-1-AQNacdg58H0w+6ASjroSAKAOJjd/zBe3YTh2wxFl31Po2s5KRxRKeNVpvyuztgWptRmoZ8TY6DYFXv6usPcAFbk=;
path=/; expires=Tue, 13-Feb-24 08:21:13 GMT; domain=.api.openai.com; HttpOnly;
Secure; SameSite=None
- _cfuvid=q0gAmJonNn1lCS6PJoxG4P.9OvaKo4BQIvFEAyT_F30-1704977901188-0-604800000;
- _cfuvid=44lfswKyrmuvCjCVUHHy8KWhx1htUCS9U2auSStgf9Y-1707810673564-0-604800000;
path=/; domain=.api.openai.com; HttpOnly; Secure; SameSite=None
Transfer-Encoding:
- chunked
@@ -88,7 +170,7 @@ interactions:
openai-organization:
- user-z7g4wmlazxqvc5wjyaaaocfz
openai-processing-ms:
- '3492'
- '236'
openai-version:
- '2020-10-01'
strict-transport-security:
@@ -100,31 +182,34 @@ interactions:
x-ratelimit-remaining-requests:
- '9999'
x-ratelimit-remaining-tokens:
- '299755'
- '299737'
x-ratelimit-reset-requests:
- 6ms
x-ratelimit-reset-tokens:
- 49ms
- 52ms
x-request-id:
- 6d96a0ac532ebce14719a35e90f453e4
http_version: HTTP/1.1
status_code: 200
- req_fa1aac5fc97191a0abae61124cc03583
status:
code: 200
message: OK
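(Editorial note: the prompt and completion above follow the ReAct-style contract the agent is held to: `Thought:` / `Action:` / `Action Input:` for a tool call, or `Final Answer:` to finish, with `\nObservation` used as a stop sequence. A hedged sketch of parsing such a completion into a tool call; the regexes are illustrative, not crewAI's actual parser.)

```python
import re

ACTION_RE = re.compile(r"Action:\s*(.*?)\s*$", re.MULTILINE)
ACTION_INPUT_RE = re.compile(r"Action Input:\s*(.*)", re.DOTALL)
FINAL_RE = re.compile(r"Final Answer:\s*(.*)", re.DOTALL)

def parse_step(text: str):
    """Classify one model completion as either a tool invocation or a final answer."""
    final = FINAL_RE.search(text)
    if final:
        return ("final_answer", final.group(1).strip())
    action = ACTION_RE.search(text)
    action_input = ACTION_INPUT_RE.search(text)
    if action and action_input:
        return ("tool", action.group(1), action_input.group(1).strip())
    raise ValueError("Output matched neither a tool call nor a final answer")

print(parse_step("Thought: Do I need to use a tool? Yes\nAction: get_final_answer\nAction Input: 42"))
# -> ('tool', 'get_final_answer', '42')
```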
- request:
body: '{"messages": [{"role": "user", "content": "You are test role.\ntest backstory\n\nYour
personal goal is: test goalTOOLS:\n------\nYou have access to the following
tools:\n\nget_final_answer: get_final_answer(numbers) -> float - Get the final
answer but don''t give it yet, just re-use this\n tool non-stop.\n\nTo
use a tool, please use the exact following format:\n\n```\nThought: Do I need
to use a tool? Yes\nAction: the action to take, should be one of [get_final_answer],
just the name.\nAction Input: the input to the action\nObservation: the result
of the action\n```\n\nWhen you have a response for your task, or if you do not
need to use a tool, you MUST use the format:\n\n```\nThought: Do I need to use
a tool? No\nFinal Answer: [your response here]This is the summary of your work
so far:\nBegin! This is VERY important to you, your job depends on it!\n\nCurrent
Task: The final answer is 42. But don''t give it yet, instead keep using the
`get_final_answer` tool.\nThought: Do I need to use a tool? Yes\nAction: get_final_answer\nAction
Input: [42]\nObservation: 42\nThought: "}], "model": "gpt-4", "n": 1, "stop":
["\nObservation"], "stream": false, "temperature": 0.7}'
body: '{"messages": [{"role": "system", "content": "\n The
schema should have the following structure, only two key:\n -
tool_name: str\n - arguments: dict (with all
arguments being passed)\n\n Example:\n {\"tool_name\":
\"tool_name\", \"arguments\": {\"arg_name1\": \"value\", \"arg_name2\": 2}}\n "},
{"role": "user", "content": "Tools available:\n\nTool Name: get_final_answer\nTool
Description: get_final_answer(numbers) -> float - Get the final answer but don''t
give it yet, just re-use this\n tool non-stop.\nTool Arguments: {''numbers'':
{}}\n\nReturn a valid schema for the tool, use this text to inform a valid ouput
schema:\n\nTool Name: get_final_answer\nTool Arguments: 42```"}], "model": "gpt-4",
"function_call": {"name": "InstructorToolCalling"}, "functions": [{"name": "InstructorToolCalling",
"description": "Correctly extracted `InstructorToolCalling` with all the required
parameters with correct types", "parameters": {"properties": {"tool_name": {"description":
"The name of the tool to be called.", "title": "Tool Name", "type": "string"},
"arguments": {"description": "A dictinary of arguments to be passed to the tool.",
"title": "Arguments", "type": "object"}}, "required": ["arguments", "tool_name"],
"type": "object"}}]}'
headers:
accept:
- application/json
@@ -133,16 +218,16 @@ interactions:
connection:
- keep-alive
content-length:
- '1186'
- '1427'
content-type:
- application/json
cookie:
- __cf_bm=AaCIpKmEHQQMvGacbuxOnCvqdwex_8TERUCvQ1QW8AI-1704977901-1-AePD3JjhIEj0C/A7QIPF3MMwRQ140a5wZP9p+GamrexFlE/6gbVKukr8FOIK4v375UmQfeUwO1TG+QesJ/dZaGE=;
_cfuvid=q0gAmJonNn1lCS6PJoxG4P.9OvaKo4BQIvFEAyT_F30-1704977901188-0-604800000
- __cf_bm=NFb4H263Krk9Xr5qV1Ptu9blCVbFcyg1S93yd9V3EKs-1707810673-1-AQNacdg58H0w+6ASjroSAKAOJjd/zBe3YTh2wxFl31Po2s5KRxRKeNVpvyuztgWptRmoZ8TY6DYFXv6usPcAFbk=;
_cfuvid=44lfswKyrmuvCjCVUHHy8KWhx1htUCS9U2auSStgf9Y-1707810673564-0-604800000
host:
- api.openai.com
user-agent:
- OpenAI/Python 1.7.1
- OpenAI/Python 1.12.0
x-stainless-arch:
- arm64
x-stainless-async:
@@ -152,7 +237,7 @@ interactions:
x-stainless-os:
- MacOS
x-stainless-package-version:
- 1.7.1
- 1.12.0
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
@@ -160,20 +245,20 @@ interactions:
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
content: "{\n \"id\": \"chatcmpl-8fovVztGO4KZeiuSpMkfDC9bJ5sVV\",\n \"object\":
\"chat.completion\",\n \"created\": 1704977901,\n \"model\": \"gpt-4-0613\",\n
\ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
\"assistant\",\n \"content\": \"According to the task, I should re-use
the `get_final_answer` tool. I'll input the observed result back into the tool.
\\nAction: get_final_answer\\nAction Input: [42]\"\n },\n \"logprobs\":
null,\n \"finish_reason\": \"stop\"\n }\n ],\n \"usage\": {\n \"prompt_tokens\":
266,\n \"completion_tokens\": 41,\n \"total_tokens\": 307\n },\n \"system_fingerprint\":
null\n}\n"
body:
string: !!binary |
H4sIAAAAAAAAA1yRT4/TMBDF7/kUozm3KC3bP8oNcUBcEAgOiwiKXHeamLVnInvCsqry3ZHTbra7
F8t6b94vL+NzAYDuiBWg7Yza0PvlPnbx67cTDZ/uy5/p+2P3xbj0tH/4++E+9rjICTn8IavPqXdW
Qu9JnfDFtpGMUqauduVuvyq3u81kBDmSz7G21+Xdstyu3l8TnThLCSv4VQAAnKczd+Mj/cMKysWz
Eigl0xJW8xAARvFZQZOSS2pYcfFiWmElznV58P7GOA1sc+vGGu9fAQGQTZiQnzlpHKxK/CHiPxrv
Hbc3eAA0sR0Cseb+eK4ZoEYV8U1m1FhBjS1pc3JsfGM4PVKscXGZm7N5bspmlYdwoDhpd+ssjjWP
OH9zvN7GeSte2j7KIb35STw5dqlrIpkknOsllf4CypDf0/aHVwvFPkrotVF5IM7A9Xpz4eHLQ9+4
26uposbf6JtVcW2I6SkphbyAlmIf3fwYxVj8BwAA//8DABjz3GiDAgAA
headers:
CF-Cache-Status:
- DYNAMIC
CF-RAY:
- 843d54aacf5677be-GRU
- 854b7c31fc7a963f-SJC
Cache-Control:
- no-cache, must-revalidate
Connection:
@@ -183,7 +268,7 @@ interactions:
Content-Type:
- application/json
Date:
- Thu, 11 Jan 2024 12:58:28 GMT
- Tue, 13 Feb 2024 07:51:17 GMT
Server:
- cloudflare
Transfer-Encoding:
@@ -197,7 +282,7 @@ interactions:
openai-organization:
- user-z7g4wmlazxqvc5wjyaaaocfz
openai-processing-ms:
- '6695'
- '2301'
openai-version:
- '2020-10-01'
strict-transport-security:
@@ -209,128 +294,16 @@ interactions:
x-ratelimit-remaining-requests:
- '9999'
x-ratelimit-remaining-tokens:
- '299728'
- '299791'
x-ratelimit-reset-requests:
- 6ms
x-ratelimit-reset-tokens:
- 54ms
- 41ms
x-request-id:
- 12d68fab91102b930ed5047fb3f61759
http_version: HTTP/1.1
status_code: 200
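(Editorial note: the second request above takes a different route. Instead of free-text ReAct output, it forces the model through OpenAI function calling with `"function_call": {"name": "InstructorToolCalling"}`, so the tool name and arguments come back as structured JSON. A sketch of issuing that request with the openai 1.x client; the schema mirrors the one in the recorded body, and the prompt is illustrative.)

```python
import json
from openai import OpenAI

client = OpenAI()

schema = {
    "name": "InstructorToolCalling",
    "description": "Correctly extracted `InstructorToolCalling` with all the required parameters with correct types",
    "parameters": {
        "type": "object",
        "properties": {
            "tool_name": {"type": "string", "description": "The name of the tool to be called."},
            "arguments": {"type": "object", "description": "A dictionary of arguments to be passed to the tool."},
        },
        "required": ["arguments", "tool_name"],
    },
}

resp = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "Tool Name: get_final_answer\nTool Arguments: 42"}],
    functions=[schema],
    function_call={"name": "InstructorToolCalling"},  # force this function, as in the cassette
)

# The "arguments" field arrives as a JSON string matching the schema above.
call = resp.choices[0].message.function_call
print(call.name, json.loads(call.arguments))
```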
- request:
body: '{"messages": [{"role": "user", "content": "You are test role.\ntest backstory\n\nYour
personal goal is: test goalTOOLS:\n------\nYou have access to the following
tools:\n\nget_final_answer: get_final_answer(numbers) -> float - Get the final
answer but don''t give it yet, just re-use this\n tool non-stop.\n\nTo
use a tool, please use the exact following format:\n\n```\nThought: Do I need
to use a tool? Yes\nAction: the action to take, should be one of [get_final_answer],
just the name.\nAction Input: the input to the action\nObservation: the result
of the action\n```\n\nWhen you have a response for your task, or if you do not
need to use a tool, you MUST use the format:\n\n```\nThought: Do I need to use
a tool? No\nFinal Answer: [your response here]This is the summary of your work
so far:\nBegin! This is VERY important to you, your job depends on it!\n\nCurrent
Task: The final answer is 42. But don''t give it yet, instead keep using the
`get_final_answer` tool.\nThought: Do I need to use a tool? Yes\nAction: get_final_answer\nAction
Input: [42]\nObservation: 42\nThought: According to the task, I should re-use
the `get_final_answer` tool. I''ll input the observed result back into the tool.
\nAction: get_final_answer\nAction Input: [42]\nObservation: I just used the
get_final_answer tool with input [42]. So I already know the result of that
and don''t need to use it now.\n\nThought: "}], "model": "gpt-4", "n": 1, "stop":
["\nObservation"], "stream": false, "temperature": 0.7}'
headers:
accept:
- application/json
accept-encoding:
- gzip, deflate
connection:
- keep-alive
content-length:
- '1500'
content-type:
- application/json
cookie:
- __cf_bm=AaCIpKmEHQQMvGacbuxOnCvqdwex_8TERUCvQ1QW8AI-1704977901-1-AePD3JjhIEj0C/A7QIPF3MMwRQ140a5wZP9p+GamrexFlE/6gbVKukr8FOIK4v375UmQfeUwO1TG+QesJ/dZaGE=;
_cfuvid=q0gAmJonNn1lCS6PJoxG4P.9OvaKo4BQIvFEAyT_F30-1704977901188-0-604800000
host:
- api.openai.com
user-agent:
- OpenAI/Python 1.7.1
x-stainless-arch:
- arm64
x-stainless-async:
- 'false'
x-stainless-lang:
- python
x-stainless-os:
- MacOS
x-stainless-package-version:
- 1.7.1
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.11.7
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
content: "{\n \"id\": \"chatcmpl-8fovcLgRvfGBN9CBduJbbPc5zd62B\",\n \"object\":
\"chat.completion\",\n \"created\": 1704977908,\n \"model\": \"gpt-4-0613\",\n
\ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
\"assistant\",\n \"content\": \"Do I need to use a tool? No\\nFinal Answer:
I have used the `get_final_answer` tool as instructed, but I will not provide
the final answer yet as the task specifies.\"\n },\n \"logprobs\":
null,\n \"finish_reason\": \"stop\"\n }\n ],\n \"usage\": {\n \"prompt_tokens\":
342,\n \"completion_tokens\": 40,\n \"total_tokens\": 382\n },\n \"system_fingerprint\":
null\n}\n"
headers:
CF-Cache-Status:
- DYNAMIC
CF-RAY:
- 843d54d65de877be-GRU
Cache-Control:
- no-cache, must-revalidate
Connection:
- keep-alive
Content-Encoding:
- gzip
Content-Type:
- application/json
Date:
- Thu, 11 Jan 2024 12:58:33 GMT
Server:
- cloudflare
Transfer-Encoding:
- chunked
access-control-allow-origin:
- '*'
alt-svc:
- h3=":443"; ma=86400
openai-model:
- gpt-4-0613
openai-organization:
- user-z7g4wmlazxqvc5wjyaaaocfz
openai-processing-ms:
- '5085'
openai-version:
- '2020-10-01'
strict-transport-security:
- max-age=15724800; includeSubDomains
x-ratelimit-limit-requests:
- '10000'
x-ratelimit-limit-tokens:
- '300000'
x-ratelimit-remaining-requests:
- '9999'
x-ratelimit-remaining-tokens:
- '299650'
x-ratelimit-reset-requests:
- 6ms
x-ratelimit-reset-tokens:
- 69ms
x-request-id:
- 87d6e9e91fa2417e12fea9de2c6782de
http_version: HTTP/1.1
status_code: 200
- req_a23b1391462f62f6d37b5f2eed7a87bd
status:
code: 200
message: OK
- request:
body: '{"messages": [{"role": "user", "content": "Progressively summarize the
lines of conversation provided, adding onto the previous summary returning a
@@ -343,8 +316,7 @@ interactions:
is a force for good because it will help humans reach their full potential.\nEND
OF EXAMPLE\n\nCurrent summary:\n\n\nNew lines of conversation:\nHuman: The final
answer is 42. But don''t give it yet, instead keep using the `get_final_answer`
tool.\nAI: I have used the `get_final_answer` tool as instructed, but I will
not provide the final answer yet as the task specifies.\n\nNew summary:"}],
tool.\nAI: Agent stopped due to iteration limit or time limit.\n\nNew summary:"}],
"model": "gpt-4", "n": 1, "stream": false, "temperature": 0.7}'
headers:
accept:
@@ -354,16 +326,16 @@ interactions:
connection:
- keep-alive
content-length:
- '1067'
- '997'
content-type:
- application/json
cookie:
- __cf_bm=AaCIpKmEHQQMvGacbuxOnCvqdwex_8TERUCvQ1QW8AI-1704977901-1-AePD3JjhIEj0C/A7QIPF3MMwRQ140a5wZP9p+GamrexFlE/6gbVKukr8FOIK4v375UmQfeUwO1TG+QesJ/dZaGE=;
_cfuvid=q0gAmJonNn1lCS6PJoxG4P.9OvaKo4BQIvFEAyT_F30-1704977901188-0-604800000
- __cf_bm=NFb4H263Krk9Xr5qV1Ptu9blCVbFcyg1S93yd9V3EKs-1707810673-1-AQNacdg58H0w+6ASjroSAKAOJjd/zBe3YTh2wxFl31Po2s5KRxRKeNVpvyuztgWptRmoZ8TY6DYFXv6usPcAFbk=;
_cfuvid=44lfswKyrmuvCjCVUHHy8KWhx1htUCS9U2auSStgf9Y-1707810673564-0-604800000
host:
- api.openai.com
user-agent:
- OpenAI/Python 1.7.1
- OpenAI/Python 1.12.0
x-stainless-arch:
- arm64
x-stainless-async:
@@ -373,7 +345,7 @@ interactions:
x-stainless-os:
- MacOS
x-stainless-package-version:
- 1.7.1
- 1.12.0
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
@@ -381,20 +353,21 @@ interactions:
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
content: "{\n \"id\": \"chatcmpl-8fovhiBR4rfXixci7fgAObnx5QwGQ\",\n \"object\":
\"chat.completion\",\n \"created\": 1704977913,\n \"model\": \"gpt-4-0613\",\n
\ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
\"assistant\",\n \"content\": \"The human instructs the AI to use the
`get_final_answer` tool, but not to reveal the final answer, which is 42. The
AI complies and uses the tool without providing the final answer.\"\n },\n
\ \"logprobs\": null,\n \"finish_reason\": \"stop\"\n }\n ],\n
\ \"usage\": {\n \"prompt_tokens\": 190,\n \"completion_tokens\": 43,\n
\ \"total_tokens\": 233\n },\n \"system_fingerprint\": null\n}\n"
body:
string: !!binary |
H4sIAAAAAAAAA1RRwWobMRC971cMuuRim3XixotvKb2UXgo2LXUpRtZOVmokjaqZpS3B/16kdbz0
IsR7em+e3rw2AMr1agfKWC0mJL/sss17bPfH9ybsP3Sfz8fD8enr/pf/5r98UouioPNPNPKmWhkK
yaM4ihNtMmrB4rretttu3T5uu0oE6tEX2ZBkuVm2j+uHq8KSM8hqB98bAIDXepZsscc/agft4g0J
yKwHVLvbIwCVyRdEaWbHoqOoxUwaioKxxj1YBDsGHUHQewaxCE8fQayWen92UXvQkX9jBsewuV/A
eRRwkSWPRhicgBC8ICYY2cWhyu4GlFPVnibtHQiRX8Fh8s/IiWLP8yA9YBSwmoGFUsIe+hGLc0Zt
bPHVEZxg1qVWoAziAoJ3wclKXf92uZXiaUiZzqXAOHp/w59ddGxPGTVTLAWUaZP80gD8qOWP//Wp
UqaQ5CT0gpHrDh8mPzXveWY3766kkGg/4/frrrkmVPyXBUMpaMCcsqu7KDmbS/MPAAD//wMAYCiB
8YICAAA=
headers:
CF-Cache-Status:
- DYNAMIC
CF-RAY:
- 843d54f82f5577be-GRU
- 854b7c415c96963f-SJC
Cache-Control:
- no-cache, must-revalidate
Connection:
@@ -404,7 +377,7 @@ interactions:
Content-Type:
- application/json
Date:
- Thu, 11 Jan 2024 12:58:41 GMT
- Tue, 13 Feb 2024 07:51:22 GMT
Server:
- cloudflare
Transfer-Encoding:
@@ -418,7 +391,7 @@ interactions:
openai-organization:
- user-z7g4wmlazxqvc5wjyaaaocfz
openai-processing-ms:
- '7937'
- '3959'
openai-version:
- '2020-10-01'
strict-transport-security:
@@ -430,13 +403,14 @@ interactions:
x-ratelimit-remaining-requests:
- '9999'
x-ratelimit-remaining-tokens:
- '299749'
- '299765'
x-ratelimit-reset-requests:
- 6ms
x-ratelimit-reset-tokens:
- 50ms
- 46ms
x-request-id:
- 79f30ffd011db4ab6e886411b24ae49d
http_version: HTTP/1.1
status_code: 200
- req_6c49ee2193171b6ac31e93ff0cbc68bc
status:
code: 200
message: OK
version: 1
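(Editorial note: a formatting change visible in the newer recordings above: the plain-text `content:` bodies are replaced by `body.string: !!binary` blocks, which are the gzip-compressed JSON responses, base64-encoded by YAML. A small sketch of recovering the original JSON from one; the file path is illustrative.)

```python
import gzip
import json

import yaml  # PyYAML decodes the !!binary tag to bytes automatically

with open("cassette.yaml") as f:  # illustrative path
    cassette = yaml.safe_load(f)

# Pick an interaction recorded in the newer body.string format.
raw = cassette["interactions"][-1]["response"]["body"]["string"]
payload = json.loads(gzip.decompress(raw))
print(payload["choices"][0]["message"]["content"])
```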

File diff suppressed because it is too large

View File

@@ -1,16 +1,17 @@
interactions:
- request:
body: '{"messages": [{"role": "user", "content": "You are test role.\ntest backstory\n\nYour
personal goal is: test goal\n\nTOOLS:\n------\nYou have access to the following
tools:\n\n\n\nTo use a tool, please use the exact following format:\n\n```\nThought:
Do I need to use a tool? Yes\nAction: the action to take, should be one of []\nAction
Input: the input to the action\nObservation: the result of the action\n```\n\nWhen
personal goal is: test goalTOOLS:\n------\nYou have access to only the following
tools:\n\n('''',)\n\nTo use a tool, please use the exact following format:\n\n```\nThought:
Do I need to use a tool? Yes\nAction: the tool you wanna use, should be one
of [], just the name.\nAction Input: Any and all relevant information input
and context for using the tool\nObservation: the result of using the tool\n```\n\nWhen
you have a response for your task, or if you do not need to use a tool, you
MUST use the format:\n\n```\nThought: Do I need to use a tool? No\nFinal Answer:
[your response here]\n```\n\t\tThis is the summary of your work so far:\n \nBegin!
This is VERY important to you, your job depends on it!\n\nCurrent Task: How
much is 1 + 1?\n\n"}], "model": "gpt-4", "n": 1, "stop": ["\nObservation"],
"stream": false, "temperature": 0.7}'
[your response here]```This is the summary of your work so far:\nBegin! This
is VERY important to you, your job depends on it!\n\nCurrent Task: How much
is 1 + 1?\n"}], "model": "gpt-4", "n": 1, "stop": ["\nObservation"], "stream":
true, "temperature": 0.7}'
headers:
accept:
- application/json
@@ -19,13 +20,13 @@ interactions:
connection:
- keep-alive
content-length:
- '851'
- '910'
content-type:
- application/json
host:
- api.openai.com
user-agent:
- OpenAI/Python 1.6.0
- OpenAI/Python 1.12.0
x-stainless-arch:
- arm64
x-stainless-async:
@@ -35,7 +36,7 @@ interactions:
x-stainless-os:
- MacOS
x-stainless-package-version:
- 1.6.0
- 1.12.0
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
@@ -43,35 +44,123 @@ interactions:
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
content: "{\n \"id\": \"chatcmpl-8XuDsBh2o1JE49eouNIBWCE2vn9pB\",\n \"object\":
\"chat.completion\",\n \"created\": 1703091636,\n \"model\": \"gpt-4-0613\",\n
\ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
\"assistant\",\n \"content\": \"Thought: Do I need to use a tool? No\\nFinal
Answer: 2\"\n },\n \"logprobs\": null,\n \"finish_reason\": \"stop\"\n
\ }\n ],\n \"usage\": {\n \"prompt_tokens\": 181,\n \"completion_tokens\":
17,\n \"total_tokens\": 198\n },\n \"system_fingerprint\": null\n}\n"
body:
string: 'data: {"id":"chatcmpl-8rhqV7v2vJn9mItSSNAgGlWjFjunr","object":"chat.completion.chunk","created":1707810619,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"role":"assistant","content":""},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rhqV7v2vJn9mItSSNAgGlWjFjunr","object":"chat.completion.chunk","created":1707810619,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"Thought"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rhqV7v2vJn9mItSSNAgGlWjFjunr","object":"chat.completion.chunk","created":1707810619,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":":"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rhqV7v2vJn9mItSSNAgGlWjFjunr","object":"chat.completion.chunk","created":1707810619,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
Do"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rhqV7v2vJn9mItSSNAgGlWjFjunr","object":"chat.completion.chunk","created":1707810619,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
I"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rhqV7v2vJn9mItSSNAgGlWjFjunr","object":"chat.completion.chunk","created":1707810619,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
need"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rhqV7v2vJn9mItSSNAgGlWjFjunr","object":"chat.completion.chunk","created":1707810619,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
to"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rhqV7v2vJn9mItSSNAgGlWjFjunr","object":"chat.completion.chunk","created":1707810619,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
use"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rhqV7v2vJn9mItSSNAgGlWjFjunr","object":"chat.completion.chunk","created":1707810619,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
a"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rhqV7v2vJn9mItSSNAgGlWjFjunr","object":"chat.completion.chunk","created":1707810619,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
tool"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rhqV7v2vJn9mItSSNAgGlWjFjunr","object":"chat.completion.chunk","created":1707810619,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"?"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rhqV7v2vJn9mItSSNAgGlWjFjunr","object":"chat.completion.chunk","created":1707810619,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
No"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rhqV7v2vJn9mItSSNAgGlWjFjunr","object":"chat.completion.chunk","created":1707810619,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"\n"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rhqV7v2vJn9mItSSNAgGlWjFjunr","object":"chat.completion.chunk","created":1707810619,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"Final"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rhqV7v2vJn9mItSSNAgGlWjFjunr","object":"chat.completion.chunk","created":1707810619,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
Answer"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rhqV7v2vJn9mItSSNAgGlWjFjunr","object":"chat.completion.chunk","created":1707810619,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":":"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rhqV7v2vJn9mItSSNAgGlWjFjunr","object":"chat.completion.chunk","created":1707810619,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rhqV7v2vJn9mItSSNAgGlWjFjunr","object":"chat.completion.chunk","created":1707810619,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"1"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rhqV7v2vJn9mItSSNAgGlWjFjunr","object":"chat.completion.chunk","created":1707810619,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
+"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rhqV7v2vJn9mItSSNAgGlWjFjunr","object":"chat.completion.chunk","created":1707810619,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rhqV7v2vJn9mItSSNAgGlWjFjunr","object":"chat.completion.chunk","created":1707810619,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"1"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rhqV7v2vJn9mItSSNAgGlWjFjunr","object":"chat.completion.chunk","created":1707810619,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
equals"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rhqV7v2vJn9mItSSNAgGlWjFjunr","object":"chat.completion.chunk","created":1707810619,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rhqV7v2vJn9mItSSNAgGlWjFjunr","object":"chat.completion.chunk","created":1707810619,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"2"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rhqV7v2vJn9mItSSNAgGlWjFjunr","object":"chat.completion.chunk","created":1707810619,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"."},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rhqV7v2vJn9mItSSNAgGlWjFjunr","object":"chat.completion.chunk","created":1707810619,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{},"logprobs":null,"finish_reason":"stop"}]}
data: [DONE]
'
headers:
CF-Cache-Status:
- DYNAMIC
CF-RAY:
- 83897143cca7a477-GRU
- 854b7ad5eb40985b-SJC
Cache-Control:
- no-cache, must-revalidate
Connection:
- keep-alive
Content-Encoding:
- gzip
Content-Type:
- application/json
- text/event-stream
Date:
- Wed, 20 Dec 2023 17:00:38 GMT
- Tue, 13 Feb 2024 07:50:20 GMT
Server:
- cloudflare
Set-Cookie:
- __cf_bm=H9RUg933Wznv5vpw9KzjZyJZSXUer6tLsBPnCGaO5sY-1703091638-1-AQMrr8fshVzTPZkSf5UmVC0gg4mnCVMhRfsAVDwFMAcb9eo7Gj6h8TFKL6YGvGlR5eid/JQIY/YbP3d9k7VV+RA=;
path=/; expires=Wed, 20-Dec-23 17:30:38 GMT; domain=.api.openai.com; HttpOnly;
- __cf_bm=EvpdC3JMPVDn4EoPqnjI6Dw9566m_Megc7okPPYugVY-1707810620-1-AbXyR61/nIN+vn+77nUW7x7HZOpRKZePAUYlsrs5sz1AXL0V/qs1y1OZwmEqURxym0nAJ/mjTASYl4JfBEJpzHQ=;
path=/; expires=Tue, 13-Feb-24 08:20:20 GMT; domain=.api.openai.com; HttpOnly;
Secure; SameSite=None
- _cfuvid=G34V4_SI2mUudYSgDXesqo_OnsZMUghvvyGp95eog58-1703091638133-0-604800000;
- _cfuvid=PJG0SnIiVNEZyl138tw.yk6u25UvB9jEQH0CQpbKGv8-1707810620263-0-604800000;
path=/; domain=.api.openai.com; HttpOnly; Secure; SameSite=None
Transfer-Encoding:
- chunked
@@ -84,7 +173,7 @@ interactions:
openai-organization:
- user-z7g4wmlazxqvc5wjyaaaocfz
openai-processing-ms:
- '1879'
- '268'
openai-version:
- '2020-10-01'
strict-transport-security:
@@ -93,24 +182,19 @@ interactions:
- '10000'
x-ratelimit-limit-tokens:
- '300000'
x-ratelimit-limit-tokens_usage_based:
- '300000'
x-ratelimit-remaining-requests:
- '9999'
x-ratelimit-remaining-tokens:
- '299812'
x-ratelimit-remaining-tokens_usage_based:
- '299812'
- '299795'
x-ratelimit-reset-requests:
- 6ms
x-ratelimit-reset-tokens:
- 37ms
x-ratelimit-reset-tokens_usage_based:
- 37ms
- 40ms
x-request-id:
- 199af19ff2674901b03dbe9d166d7ffb
http_version: HTTP/1.1
status_code: 200
- req_ff710fda6fdf55d2cfacbb56fd42654c
status:
code: 200
message: OK
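(Editorial note: the header diffs in this hunk show the `x-ratelimit-*-tokens_usage_based` variants disappearing between the two recordings, while the plain `x-ratelimit-remaining-tokens` / `x-ratelimit-reset-tokens` pair remains. A client that wants to read those headers itself can use the openai 1.x raw-response wrapper; this is a sketch, with model and prompt illustrative.)

```python
from openai import OpenAI

client = OpenAI()

raw = client.chat.completions.with_raw_response.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "How much is 1 + 1?"}],
)

# The rate-limit state comes back on every response, as seen in the cassettes.
remaining_tokens = int(raw.headers["x-ratelimit-remaining-tokens"])
reset_after = raw.headers["x-ratelimit-reset-tokens"]  # e.g. "40ms"

completion = raw.parse()  # the usual ChatCompletion object
print(completion.choices[0].message.content, remaining_tokens, reset_after)
```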
- request:
body: '{"messages": [{"role": "user", "content": "Progressively summarize the
lines of conversation provided, adding onto the previous summary returning a
@@ -122,8 +206,8 @@ interactions:
the AI thinks of artificial intelligence. The AI thinks artificial intelligence
is a force for good because it will help humans reach their full potential.\nEND
OF EXAMPLE\n\nCurrent summary:\n\n\nNew lines of conversation:\nHuman: How much
is 1 + 1?\nAI: 2\n\nNew summary:"}], "model": "gpt-4", "n": 1, "stream": false,
"temperature": 0.7}'
is 1 + 1?\nAI: 1 + 1 equals 2.\n\nNew summary:"}], "model": "gpt-4", "n": 1,
"stream": false, "temperature": 0.7}'
headers:
accept:
- application/json
@@ -132,16 +216,16 @@ interactions:
connection:
- keep-alive
content-length:
- '871'
- '885'
content-type:
- application/json
cookie:
- __cf_bm=H9RUg933Wznv5vpw9KzjZyJZSXUer6tLsBPnCGaO5sY-1703091638-1-AQMrr8fshVzTPZkSf5UmVC0gg4mnCVMhRfsAVDwFMAcb9eo7Gj6h8TFKL6YGvGlR5eid/JQIY/YbP3d9k7VV+RA=;
_cfuvid=G34V4_SI2mUudYSgDXesqo_OnsZMUghvvyGp95eog58-1703091638133-0-604800000
- __cf_bm=EvpdC3JMPVDn4EoPqnjI6Dw9566m_Megc7okPPYugVY-1707810620-1-AbXyR61/nIN+vn+77nUW7x7HZOpRKZePAUYlsrs5sz1AXL0V/qs1y1OZwmEqURxym0nAJ/mjTASYl4JfBEJpzHQ=;
_cfuvid=PJG0SnIiVNEZyl138tw.yk6u25UvB9jEQH0CQpbKGv8-1707810620263-0-604800000
host:
- api.openai.com
user-agent:
- OpenAI/Python 1.6.0
- OpenAI/Python 1.12.0
x-stainless-arch:
- arm64
x-stainless-async:
@@ -151,7 +235,7 @@ interactions:
x-stainless-os:
- MacOS
x-stainless-package-version:
- 1.6.0
- 1.12.0
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
@@ -159,18 +243,19 @@ interactions:
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
content: "{\n \"id\": \"chatcmpl-8XuDuhhGTeJIvAaVjt8lxngLaR7GV\",\n \"object\":
\"chat.completion\",\n \"created\": 1703091638,\n \"model\": \"gpt-4-0613\",\n
\ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
\"assistant\",\n \"content\": \"The human asks the AI to add 1 + 1, to
which the AI responds with 2.\"\n },\n \"logprobs\": null,\n \"finish_reason\":
\"stop\"\n }\n ],\n \"usage\": {\n \"prompt_tokens\": 150,\n \"completion_tokens\":
22,\n \"total_tokens\": 172\n },\n \"system_fingerprint\": null\n}\n"
body:
string: !!binary |
H4sIAAAAAAAAA1SQT0/CQBDF7/0Uk71aSFu0kN44YMSDB2PUxBiybId2Yf+xO40awnc3WwroZQ/v
N2/2zTskAEzWrAImWk5COzWa+Xb/vq038+JZLV+f9ILu3UI8vDWPpftiaXTY9RYFnV1jYbVTSNKa
ExYeOWHcmk+z6SzPyiLvgbY1qmhrHI1uR1mZTwZHa6XAwCr4SAAADv0bs5kav1kFWXpWNIbAG2TV
ZQiAeauiwngIMhA3xNIrFNYQmj7uS4vQdpob4GEXgFqE+RLIguBKdIoTQg43kKfATX3GHoOzpo7j
nE4ccN9xFaAYs+Gf4yWgso3zdh2PMZ1SF30jjQztyiMP1sQwgaw72Y8JwGdfRPfvNua81Y5WZHdo
4sL8rjztY9fOr7SYDpAscfXHNZskQ0IWfgKhXm2kadA7L/teYs7kmPwCAAD//wMAP9y0ZQ4CAAA=
headers:
CF-Cache-Status:
- DYNAMIC
CF-RAY:
- 83897152ad2ba477-GRU
- 854b7ae0ba19985b-SJC
Cache-Control:
- no-cache, must-revalidate
Connection:
@@ -180,7 +265,7 @@ interactions:
Content-Type:
- application/json
Date:
- Wed, 20 Dec 2023 17:00:41 GMT
- Tue, 13 Feb 2024 07:50:24 GMT
Server:
- cloudflare
Transfer-Encoding:
@@ -194,7 +279,7 @@ interactions:
openai-organization:
- user-z7g4wmlazxqvc5wjyaaaocfz
openai-processing-ms:
- '2860'
- '2690'
openai-version:
- '2020-10-01'
strict-transport-security:
@@ -203,22 +288,17 @@ interactions:
- '10000'
x-ratelimit-limit-tokens:
- '300000'
x-ratelimit-limit-tokens_usage_based:
- '300000'
x-ratelimit-remaining-requests:
- '9999'
x-ratelimit-remaining-tokens:
- '299798'
x-ratelimit-remaining-tokens_usage_based:
- '299798'
- '299794'
x-ratelimit-reset-requests:
- 6ms
x-ratelimit-reset-tokens:
- 40ms
x-ratelimit-reset-tokens_usage_based:
- 40ms
- 41ms
x-request-id:
- 6acd80ebd8c2463aa09cbb1550d14de3
http_version: HTTP/1.1
status_code: 200
- req_9eace1f06c51682bcb91a3b2808f3f55
status:
code: 200
message: OK
version: 1
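(Editorial note: the "Progressively summarize the lines of conversation" requests that close each cassette match LangChain's conversation-summary prompt, which these recordings suggest is what backs agent memory here. An illustrative sketch of that mechanism in isolation; the exact wiring inside crewAI may differ.)

```python
from langchain.memory import ConversationSummaryMemory
from langchain_openai import ChatOpenAI

memory = ConversationSummaryMemory(llm=ChatOpenAI(model="gpt-4", temperature=0.7))

# Each save_context() call triggers one summarization request like those above:
# the prior summary plus the new conversation lines go in, a rolled-up summary comes out.
memory.save_context(
    {"input": "How much is 1 + 1?"},
    {"output": "1 + 1 equals 2."},
)
print(memory.buffer)  # the progressively updated summary string
```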

View File

@@ -1,21 +1,19 @@
interactions:
- request:
body: '{"messages": [{"role": "user", "content": "You are test role.\ntest backstory\n\nYour
personal goal is: test goal\n\nTOOLS:\n------\nYou have access to the following
tools:\n\nmultiplier: multiplier(numbers) -> float - Useful for when you need
to multiply two numbers together. \n\t\t\tThe input to this tool should be a
comma separated list of numbers of \n\t\t\tlength two, representing the two
numbers you want to multiply together. \n\t\t\tFor example, `1,2` would be the
input if you wanted to multiply 1 by 2.\n\nTo use a tool, please use the exact
following format:\n\n```\nThought: Do I need to use a tool? Yes\nAction: the
action to take, should be one of [multiplier]\nAction Input: the input to the
action\nObservation: the result of the action\n```\n\nWhen you have a response
for your task, or if you do not need to use a tool, you MUST use the format:\n\n```\nThought:
Do I need to use a tool? No\nFinal Answer: [your response here]\n```\n\t\tThis
is the summary of your work so far:\n \nBegin! This is VERY important to
you, your job depends on it!\n\nCurrent Task: What is 3 times 4\n\n"}], "model":
"gpt-4", "n": 1, "stop": ["\nObservation"], "stream": false, "temperature":
0.7}'
personal goal is: test goalTOOLS:\n------\nYou have access to only the following
tools:\n\n(''multiplier: multiplier(first_number: int, second_number: int) ->
float - Useful for when you need to multiply two numbers together.'',)\n\nTo
use a tool, please use the exact following format:\n\n```\nThought: Do I need
to use a tool? Yes\nAction: the tool you wanna use, should be one of [multiplier],
just the name.\nAction Input: Any and all relevant information input and context
for using the tool\nObservation: the result of using the tool\n```\n\nWhen you
have a response for your task, or if you do not need to use a tool, you MUST
use the format:\n\n```\nThought: Do I need to use a tool? No\nFinal Answer:
[your response here]```This is the summary of your work so far:\nBegin! This
is VERY important to you, your job depends on it!\n\nCurrent Task: What is 3
times 4\n"}], "model": "gpt-4", "n": 1, "stop": ["\nObservation"], "stream":
true, "temperature": 0.7}'
headers:
accept:
- application/json
@@ -24,13 +22,13 @@ interactions:
connection:
- keep-alive
content-length:
- '1199'
- '1050'
content-type:
- application/json
host:
- api.openai.com
user-agent:
- OpenAI/Python 1.6.0
- OpenAI/Python 1.12.0
x-stainless-arch:
- arm64
x-stainless-async:
@@ -40,7 +38,7 @@ interactions:
x-stainless-os:
- MacOS
x-stainless-package-version:
- 1.6.0
- 1.12.0
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
@@ -48,35 +46,150 @@ interactions:
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
content: "{\n \"id\": \"chatcmpl-8XuE6OrjKKco53f0TIqPQEE3nWeNj\",\n \"object\":
\"chat.completion\",\n \"created\": 1703091650,\n \"model\": \"gpt-4-0613\",\n
\ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
\"assistant\",\n \"content\": \"Thought: Do I need to use a tool? Yes\\nAction:
multiplier\\nAction Input: 3,4\\n\"\n },\n \"logprobs\": null,\n \"finish_reason\":
\"stop\"\n }\n ],\n \"usage\": {\n \"prompt_tokens\": 260,\n \"completion_tokens\":
24,\n \"total_tokens\": 284\n },\n \"system_fingerprint\": null\n}\n"
body:
string: 'data: {"id":"chatcmpl-8rhrE6b7f4GrMPaunp8xcwcWQE9kn","object":"chat.completion.chunk","created":1707810664,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"role":"assistant","content":""},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rhrE6b7f4GrMPaunp8xcwcWQE9kn","object":"chat.completion.chunk","created":1707810664,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"Thought"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rhrE6b7f4GrMPaunp8xcwcWQE9kn","object":"chat.completion.chunk","created":1707810664,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":":"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rhrE6b7f4GrMPaunp8xcwcWQE9kn","object":"chat.completion.chunk","created":1707810664,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
Do"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rhrE6b7f4GrMPaunp8xcwcWQE9kn","object":"chat.completion.chunk","created":1707810664,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
I"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rhrE6b7f4GrMPaunp8xcwcWQE9kn","object":"chat.completion.chunk","created":1707810664,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
need"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rhrE6b7f4GrMPaunp8xcwcWQE9kn","object":"chat.completion.chunk","created":1707810664,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
to"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rhrE6b7f4GrMPaunp8xcwcWQE9kn","object":"chat.completion.chunk","created":1707810664,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
use"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rhrE6b7f4GrMPaunp8xcwcWQE9kn","object":"chat.completion.chunk","created":1707810664,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
a"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rhrE6b7f4GrMPaunp8xcwcWQE9kn","object":"chat.completion.chunk","created":1707810664,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
tool"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rhrE6b7f4GrMPaunp8xcwcWQE9kn","object":"chat.completion.chunk","created":1707810664,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"?"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rhrE6b7f4GrMPaunp8xcwcWQE9kn","object":"chat.completion.chunk","created":1707810664,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
Yes"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rhrE6b7f4GrMPaunp8xcwcWQE9kn","object":"chat.completion.chunk","created":1707810664,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"\n"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rhrE6b7f4GrMPaunp8xcwcWQE9kn","object":"chat.completion.chunk","created":1707810664,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"Action"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rhrE6b7f4GrMPaunp8xcwcWQE9kn","object":"chat.completion.chunk","created":1707810664,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":":"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rhrE6b7f4GrMPaunp8xcwcWQE9kn","object":"chat.completion.chunk","created":1707810664,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
multiplier"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rhrE6b7f4GrMPaunp8xcwcWQE9kn","object":"chat.completion.chunk","created":1707810664,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"\n"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rhrE6b7f4GrMPaunp8xcwcWQE9kn","object":"chat.completion.chunk","created":1707810664,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"Action"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rhrE6b7f4GrMPaunp8xcwcWQE9kn","object":"chat.completion.chunk","created":1707810664,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
Input"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rhrE6b7f4GrMPaunp8xcwcWQE9kn","object":"chat.completion.chunk","created":1707810664,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":":"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rhrE6b7f4GrMPaunp8xcwcWQE9kn","object":"chat.completion.chunk","created":1707810664,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
{\""},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rhrE6b7f4GrMPaunp8xcwcWQE9kn","object":"chat.completion.chunk","created":1707810664,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"first"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rhrE6b7f4GrMPaunp8xcwcWQE9kn","object":"chat.completion.chunk","created":1707810664,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"_number"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rhrE6b7f4GrMPaunp8xcwcWQE9kn","object":"chat.completion.chunk","created":1707810664,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"\":"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rhrE6b7f4GrMPaunp8xcwcWQE9kn","object":"chat.completion.chunk","created":1707810664,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rhrE6b7f4GrMPaunp8xcwcWQE9kn","object":"chat.completion.chunk","created":1707810664,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"3"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rhrE6b7f4GrMPaunp8xcwcWQE9kn","object":"chat.completion.chunk","created":1707810664,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":","},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rhrE6b7f4GrMPaunp8xcwcWQE9kn","object":"chat.completion.chunk","created":1707810664,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
\""},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rhrE6b7f4GrMPaunp8xcwcWQE9kn","object":"chat.completion.chunk","created":1707810664,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"second"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rhrE6b7f4GrMPaunp8xcwcWQE9kn","object":"chat.completion.chunk","created":1707810664,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"_number"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rhrE6b7f4GrMPaunp8xcwcWQE9kn","object":"chat.completion.chunk","created":1707810664,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"\":"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rhrE6b7f4GrMPaunp8xcwcWQE9kn","object":"chat.completion.chunk","created":1707810664,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rhrE6b7f4GrMPaunp8xcwcWQE9kn","object":"chat.completion.chunk","created":1707810664,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"4"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rhrE6b7f4GrMPaunp8xcwcWQE9kn","object":"chat.completion.chunk","created":1707810664,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"}"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rhrE6b7f4GrMPaunp8xcwcWQE9kn","object":"chat.completion.chunk","created":1707810664,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{},"logprobs":null,"finish_reason":"stop"}]}
data: [DONE]
'
headers:
CF-Cache-Status:
- DYNAMIC
CF-RAY:
- 8389719d5cba0110-GRU
- 854b7beb6d1c15b6-SJC
Cache-Control:
- no-cache, must-revalidate
Connection:
- keep-alive
Content-Encoding:
- gzip
Content-Type:
- application/json
- text/event-stream
Date:
- Wed, 20 Dec 2023 17:00:52 GMT
- Tue, 13 Feb 2024 07:51:05 GMT
Server:
- cloudflare
Set-Cookie:
- __cf_bm=rIaStuxTRr_ZC91uSFg5cthTUq95O6PBdkxXZ68fLYc-1703091652-1-AZu+nvbL+3bwwQOIKpnYgLf5m5Mp0jfQ2baAlDRl1+FiTPO+/+GjcF4Upw4M8vtfh39ZyWF+l68r83qCS9OpObU=;
path=/; expires=Wed, 20-Dec-23 17:30:52 GMT; domain=.api.openai.com; HttpOnly;
- __cf_bm=vLLyq2TwwUBcVIkoEqea4eezcekN2nHPTIaRK8B7A6A-1707810665-1.0-AfRzjd7fYnrHf+Z5wnw9ZsWugvZvYZO+O+xtzZCimC/VosJjGzIFCYYWFojEW9it0+wqlPgjAhD8i8aIz43d+6Y=;
path=/; expires=Tue, 13-Feb-24 08:21:05 GMT; domain=.api.openai.com; HttpOnly;
Secure; SameSite=None
- _cfuvid=Ajd5lPskQSkBImLJdkywZGG4vHMkMCBcxb8TonP9OKc-1703091652762-0-604800000;
- _cfuvid=xAlsN4tpUk6egjAfe12ByLvmPPMpPCRpXivDHVBFlgA-1707810665071-0.0-604800000;
path=/; domain=.api.openai.com; HttpOnly; Secure; SameSite=None
Transfer-Encoding:
- chunked
@@ -89,7 +202,7 @@ interactions:
openai-organization:
- user-z7g4wmlazxqvc5wjyaaaocfz
openai-processing-ms:
- '2423'
- '412'
openai-version:
- '2020-10-01'
strict-transport-security:
@@ -98,42 +211,38 @@ interactions:
- '10000'
x-ratelimit-limit-tokens:
- '300000'
x-ratelimit-limit-tokens_usage_based:
- '300000'
x-ratelimit-remaining-requests:
- '9999'
x-ratelimit-remaining-tokens:
- '299729'
x-ratelimit-remaining-tokens_usage_based:
- '299729'
- '299760'
x-ratelimit-reset-requests:
- 6ms
x-ratelimit-reset-tokens:
- 54ms
x-ratelimit-reset-tokens_usage_based:
- 54ms
- 47ms
x-request-id:
- 8f99f43731fa878eaf0fcbf0719d1b3f
http_version: HTTP/1.1
status_code: 200
- req_3fadda2f3ccff527c42e84a0558b2bd8
status:
code: 200
message: OK
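
This hunk replaces a buffered `chat.completion` JSON body with a `text/event-stream` of `chat.completion.chunk` events, matching the request's switch to `"stream": true`. A minimal sketch of reassembling such a stream into the full completion text, assuming each `data:` event sits on a single line (the cassette above folds long events across lines for display):

```python
import json

def accumulate_stream(body: str) -> str:
    """Concatenate the per-chunk delta contents of a text/event-stream
    body like the one recorded above into the full completion text."""
    parts = []
    for line in body.splitlines():
        line = line.strip()
        if not line.startswith("data: "):
            continue
        payload = line[len("data: "):]
        if payload == "[DONE]":  # OpenAI's end-of-stream sentinel
            break
        chunk = json.loads(payload)
        delta = chunk["choices"][0]["delta"]
        parts.append(delta.get("content") or "")
    return "".join(parts)
```

For the stream above this reassembles to `Thought: Do I need to use a tool? Yes\nAction: multiplier\nAction Input: {"first_number": 3, "second_number": 4}`, the same kind of text the old buffered body carried in `message.content`; note the action input is now JSON keyword arguments rather than the old comma-separated pair.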
- request:
body: '{"messages": [{"role": "user", "content": "You are test role.\ntest backstory\n\nYour
personal goal is: test goal\n\nTOOLS:\n------\nYou have access to the following
tools:\n\nmultiplier: multiplier(numbers) -> float - Useful for when you need
to multiply two numbers together. \n\t\t\tThe input to this tool should be a
comma separated list of numbers of \n\t\t\tlength two, representing the two
numbers you want to multiply together. \n\t\t\tFor example, `1,2` would be the
input if you wanted to multiply 1 by 2.\n\nTo use a tool, please use the exact
following format:\n\n```\nThought: Do I need to use a tool? Yes\nAction: the
action to take, should be one of [multiplier]\nAction Input: the input to the
action\nObservation: the result of the action\n```\n\nWhen you have a response
for your task, or if you do not need to use a tool, you MUST use the format:\n\n```\nThought:
Do I need to use a tool? No\nFinal Answer: [your response here]\n```\n\t\tThis
is the summary of your work so far:\n \nBegin! This is VERY important to
you, your job depends on it!\n\nCurrent Task: What is 3 times 4\nThought: Do
I need to use a tool? Yes\nAction: multiplier\nAction Input: 3,4\n\nObservation:
12\nThought: \n"}], "model": "gpt-4", "n": 1, "stop": ["\nObservation"], "stream":
false, "temperature": 0.7}'
body: '{"messages": [{"role": "system", "content": "\n The
schema should have the following structure, only two key:\n -
tool_name: str\n - arguments: dict (with all
arguments being passed)\n\n Example:\n {\"tool_name\":
\"tool_name\", \"arguments\": {\"arg_name1\": \"value\", \"arg_name2\": 2}}\n "},
{"role": "user", "content": "Tools available:\n\nTool Name: multiplier\nTool
Description: multiplier(first_number: int, second_number: int) -> float - Useful
for when you need to multiply two numbers together.\nTool Arguments: {''first_number'':
{''type'': ''integer''}, ''second_number'': {''type'': ''integer''}}\n\nReturn
a valid schema for the tool, use this text to inform a valid ouput schema:\n\nTool
Name: multiplier\nTool Arguments: {\"first_number\": 3, \"second_number\": 4}```"}],
"model": "gpt-4", "function_call": {"name": "InstructorToolCalling"}, "functions":
[{"name": "InstructorToolCalling", "description": "Correctly extracted `InstructorToolCalling`
with all the required parameters with correct types", "parameters": {"properties":
{"tool_name": {"description": "The name of the tool to be called.", "title":
"Tool Name", "type": "string"}, "arguments": {"description": "A dictinary of
arguments to be passed to the tool.", "title": "Arguments", "type": "object"}},
"required": ["arguments", "tool_name"], "type": "object"}}]}'
headers:
accept:
- application/json
@@ -142,16 +251,16 @@ interactions:
connection:
- keep-alive
content-length:
- '1305'
- '1514'
content-type:
- application/json
cookie:
- __cf_bm=rIaStuxTRr_ZC91uSFg5cthTUq95O6PBdkxXZ68fLYc-1703091652-1-AZu+nvbL+3bwwQOIKpnYgLf5m5Mp0jfQ2baAlDRl1+FiTPO+/+GjcF4Upw4M8vtfh39ZyWF+l68r83qCS9OpObU=;
_cfuvid=Ajd5lPskQSkBImLJdkywZGG4vHMkMCBcxb8TonP9OKc-1703091652762-0-604800000
- __cf_bm=vLLyq2TwwUBcVIkoEqea4eezcekN2nHPTIaRK8B7A6A-1707810665-1.0-AfRzjd7fYnrHf+Z5wnw9ZsWugvZvYZO+O+xtzZCimC/VosJjGzIFCYYWFojEW9it0+wqlPgjAhD8i8aIz43d+6Y=;
_cfuvid=xAlsN4tpUk6egjAfe12ByLvmPPMpPCRpXivDHVBFlgA-1707810665071-0.0-604800000
host:
- api.openai.com
user-agent:
- OpenAI/Python 1.6.0
- OpenAI/Python 1.12.0
x-stainless-arch:
- arm64
x-stainless-async:
@@ -161,7 +270,7 @@ interactions:
x-stainless-os:
- MacOS
x-stainless-package-version:
- 1.6.0
- 1.12.0
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
@@ -169,18 +278,20 @@ interactions:
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
content: "{\n \"id\": \"chatcmpl-8XuE86Ud94rsOP7VA4Bgxo6RM5XE6\",\n \"object\":
\"chat.completion\",\n \"created\": 1703091652,\n \"model\": \"gpt-4-0613\",\n
\ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
\"assistant\",\n \"content\": \"Do I need to use a tool? No\\nFinal Answer:
3 times 4 is 12.\"\n },\n \"logprobs\": null,\n \"finish_reason\":
\"stop\"\n }\n ],\n \"usage\": {\n \"prompt_tokens\": 293,\n \"completion_tokens\":
22,\n \"total_tokens\": 315\n },\n \"system_fingerprint\": null\n}\n"
body:
string: !!binary |
H4sIAAAAAAAAA1yRT2/bMAzF7/4UBM/OkKRpEvjaAUWxQy9F0aIeDEVRHG0SqUp0/yzwdx+UpK7b
iyDw8f30RB4KALRbrAD1Xon2wU3WcR+v14kfX3+9vD4+823Q9v7m3+Ln9eZhjmV28OaP0fLh+qHZ
B2fEMp1kHY0Sk6mz1XS1nk2Xy+VR8Lw1LtvaIJPFZLqcXZwde7baJKzgqQAAOBzPnI225g0rmJYf
FW9SUq3BamgCwMguV1ClZJMoEiw/Rc0khnJc6pwbCbuOdE7daOXcFyAAkvJH5A0liZ0WjnfM7ko5
Z6kd4QFQxbbzhiTnx0NNNQqzazKhxgpq9J0TG5w1scYy64Mj64cadzYmaajzm9xSwUUJNSajmbaj
6qKvqcfh5f5864fZOG5D5E369lXcWbJp30SjElMOmYTDCZQhv4876L6MFUNkH6QR/msoA+eLyxMP
P9c9UldnUViUG9fnxTkhpvckxjc7S62JIdphJUVf/AcAAP//AwD1e1FwiQIAAA==
headers:
CF-Cache-Status:
- DYNAMIC
CF-RAY:
- 838971ae0f120110-GRU
- 854b7bfa0a5315b6-SJC
Cache-Control:
- no-cache, must-revalidate
Connection:
@@ -190,7 +301,7 @@ interactions:
Content-Type:
- application/json
Date:
- Wed, 20 Dec 2023 17:00:55 GMT
- Tue, 13 Feb 2024 07:51:08 GMT
Server:
- cloudflare
Transfer-Encoding:
@@ -204,7 +315,7 @@ interactions:
openai-organization:
- user-z7g4wmlazxqvc5wjyaaaocfz
openai-processing-ms:
- '2334'
- '2207'
openai-version:
- '2020-10-01'
strict-transport-security:
@@ -213,24 +324,209 @@ interactions:
- '10000'
x-ratelimit-limit-tokens:
- '300000'
x-ratelimit-limit-tokens_usage_based:
x-ratelimit-remaining-requests:
- '9999'
x-ratelimit-remaining-tokens:
- '299770'
x-ratelimit-reset-requests:
- 6ms
x-ratelimit-reset-tokens:
- 45ms
x-request-id:
- req_4f8d63af1db9145604765b7823e6a134
status:
code: 200
message: OK
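
This new interaction delegates tool-argument extraction to OpenAI function calling: the request pins `function_call` to a function named `InstructorToolCalling` whose schema has exactly two keys, `tool_name` and `arguments`. The name and the auto-generated description ("Correctly extracted `InstructorToolCalling` with all the required parameters") point to the instructor library. A sketch of the Pydantic model that would yield the recorded schema (field descriptions paraphrased from the payload):

```python
from typing import Any, Dict

from pydantic import BaseModel, Field


class InstructorToolCalling(BaseModel):
    """Mirrors the `functions` schema in the request above."""

    tool_name: str = Field(description="The name of the tool to be called.")
    arguments: Dict[str, Any] = Field(
        description="A dictionary of arguments to be passed to the tool."
    )


# With instructor, a call along these lines produces the recorded
# "function_call"/"functions" payload; this is a sketch, not necessarily
# crewAI's exact call site:
#
#   import instructor
#   from openai import OpenAI
#
#   client = instructor.patch(OpenAI())
#   parsed = client.chat.completions.create(
#       model="gpt-4",
#       response_model=InstructorToolCalling,
#       messages=[...],
#   )
```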
- request:
body: '{"messages": [{"role": "user", "content": "You are test role.\ntest backstory\n\nYour
personal goal is: test goalTOOLS:\n------\nYou have access to only the following
tools:\n\n(''multiplier: multiplier(first_number: int, second_number: int) ->
float - Useful for when you need to multiply two numbers together.'',)\n\nTo
use a tool, please use the exact following format:\n\n```\nThought: Do I need
to use a tool? Yes\nAction: the tool you wanna use, should be one of [multiplier],
just the name.\nAction Input: Any and all relevant information input and context
for using the tool\nObservation: the result of using the tool\n```\n\nWhen you
have a response for your task, or if you do not need to use a tool, you MUST
use the format:\n\n```\nThought: Do I need to use a tool? No\nFinal Answer:
[your response here]```This is the summary of your work so far:\nBegin! This
is VERY important to you, your job depends on it!\n\nCurrent Task: What is 3
times 4\nThought: Do I need to use a tool? Yes\nAction: multiplier\nAction Input:
{\"first_number\": 3, \"second_number\": 4}\nObservation: 12\nThought: "}],
"model": "gpt-4", "n": 1, "stop": ["\nObservation"], "stream": true, "temperature":
0.7}'
headers:
accept:
- application/json
accept-encoding:
- gzip, deflate
connection:
- keep-alive
content-length:
- '1194'
content-type:
- application/json
cookie:
- __cf_bm=vLLyq2TwwUBcVIkoEqea4eezcekN2nHPTIaRK8B7A6A-1707810665-1.0-AfRzjd7fYnrHf+Z5wnw9ZsWugvZvYZO+O+xtzZCimC/VosJjGzIFCYYWFojEW9it0+wqlPgjAhD8i8aIz43d+6Y=;
_cfuvid=xAlsN4tpUk6egjAfe12ByLvmPPMpPCRpXivDHVBFlgA-1707810665071-0.0-604800000
host:
- api.openai.com
user-agent:
- OpenAI/Python 1.12.0
x-stainless-arch:
- arm64
x-stainless-async:
- 'false'
x-stainless-lang:
- python
x-stainless-os:
- MacOS
x-stainless-package-version:
- 1.12.0
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.11.7
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
body:
string: 'data: {"id":"chatcmpl-8rhrJQTVo1qcDIgXXkw155n80m3fT","object":"chat.completion.chunk","created":1707810669,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"role":"assistant","content":""},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rhrJQTVo1qcDIgXXkw155n80m3fT","object":"chat.completion.chunk","created":1707810669,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"Do"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rhrJQTVo1qcDIgXXkw155n80m3fT","object":"chat.completion.chunk","created":1707810669,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
I"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rhrJQTVo1qcDIgXXkw155n80m3fT","object":"chat.completion.chunk","created":1707810669,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
need"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rhrJQTVo1qcDIgXXkw155n80m3fT","object":"chat.completion.chunk","created":1707810669,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
to"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rhrJQTVo1qcDIgXXkw155n80m3fT","object":"chat.completion.chunk","created":1707810669,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
use"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rhrJQTVo1qcDIgXXkw155n80m3fT","object":"chat.completion.chunk","created":1707810669,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
a"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rhrJQTVo1qcDIgXXkw155n80m3fT","object":"chat.completion.chunk","created":1707810669,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
tool"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rhrJQTVo1qcDIgXXkw155n80m3fT","object":"chat.completion.chunk","created":1707810669,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"?"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rhrJQTVo1qcDIgXXkw155n80m3fT","object":"chat.completion.chunk","created":1707810669,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
No"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rhrJQTVo1qcDIgXXkw155n80m3fT","object":"chat.completion.chunk","created":1707810669,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"\n"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rhrJQTVo1qcDIgXXkw155n80m3fT","object":"chat.completion.chunk","created":1707810669,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"Final"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rhrJQTVo1qcDIgXXkw155n80m3fT","object":"chat.completion.chunk","created":1707810669,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
Answer"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rhrJQTVo1qcDIgXXkw155n80m3fT","object":"chat.completion.chunk","created":1707810669,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":":"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rhrJQTVo1qcDIgXXkw155n80m3fT","object":"chat.completion.chunk","created":1707810669,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rhrJQTVo1qcDIgXXkw155n80m3fT","object":"chat.completion.chunk","created":1707810669,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"3"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rhrJQTVo1qcDIgXXkw155n80m3fT","object":"chat.completion.chunk","created":1707810669,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
times"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rhrJQTVo1qcDIgXXkw155n80m3fT","object":"chat.completion.chunk","created":1707810669,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rhrJQTVo1qcDIgXXkw155n80m3fT","object":"chat.completion.chunk","created":1707810669,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"4"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rhrJQTVo1qcDIgXXkw155n80m3fT","object":"chat.completion.chunk","created":1707810669,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
is"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rhrJQTVo1qcDIgXXkw155n80m3fT","object":"chat.completion.chunk","created":1707810669,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rhrJQTVo1qcDIgXXkw155n80m3fT","object":"chat.completion.chunk","created":1707810669,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"12"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rhrJQTVo1qcDIgXXkw155n80m3fT","object":"chat.completion.chunk","created":1707810669,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"."},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rhrJQTVo1qcDIgXXkw155n80m3fT","object":"chat.completion.chunk","created":1707810669,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{},"logprobs":null,"finish_reason":"stop"}]}
data: [DONE]
'
headers:
CF-Cache-Status:
- DYNAMIC
CF-RAY:
- 854b7c090f7115b6-SJC
Cache-Control:
- no-cache, must-revalidate
Connection:
- keep-alive
Content-Type:
- text/event-stream
Date:
- Tue, 13 Feb 2024 07:51:09 GMT
Server:
- cloudflare
Transfer-Encoding:
- chunked
access-control-allow-origin:
- '*'
alt-svc:
- h3=":443"; ma=86400
openai-model:
- gpt-4-0613
openai-organization:
- user-z7g4wmlazxqvc5wjyaaaocfz
openai-processing-ms:
- '447'
openai-version:
- '2020-10-01'
strict-transport-security:
- max-age=15724800; includeSubDomains
x-ratelimit-limit-requests:
- '10000'
x-ratelimit-limit-tokens:
- '300000'
x-ratelimit-remaining-requests:
- '9999'
x-ratelimit-remaining-tokens:
- '299703'
x-ratelimit-remaining-tokens_usage_based:
- '299703'
- '299726'
x-ratelimit-reset-requests:
- 6ms
x-ratelimit-reset-tokens:
- 59ms
x-ratelimit-reset-tokens_usage_based:
- 59ms
- 54ms
x-request-id:
- 44304b182424a8acad8a3121817bea58
http_version: HTTP/1.1
status_code: 200
- req_fc64b9b6f818bc5aa798793d5783f7b2
status:
code: 200
message: OK
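
The streamed reply follows the ReAct-style format the prompt dictates (`Thought:` / `Action:` / `Action Input:` / `Final Answer:`). A sketch of pulling those blocks apart with the standard `re` module; `parse_react` is a hypothetical helper, and crewAI's actual output parser may differ:

```python
import re

ACTION_RE = re.compile(
    r"Action:\s*(?P<tool>.*?)\s*Action Input:\s*(?P<input>.*)", re.DOTALL
)
FINAL_RE = re.compile(r"Final Answer:\s*(?P<answer>.*)", re.DOTALL)


def parse_react(text: str):
    """Classify a completion as a final answer, a tool call, or neither."""
    if (m := FINAL_RE.search(text)):
        return ("final", m.group("answer").strip())
    if (m := ACTION_RE.search(text)):
        return ("tool", m.group("tool").strip(), m.group("input").strip())
    return ("unparsed", text)


# parse_react('Do I need to use a tool? No\nFinal Answer: 3 times 4 is 12.')
# -> ('final', '3 times 4 is 12.')
```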
- request:
body: '{"messages": [{"role": "user", "content": "Progressively summarize the
lines of conversation provided, adding onto the previous summary returning a
@@ -256,12 +552,12 @@ interactions:
content-type:
- application/json
cookie:
- __cf_bm=rIaStuxTRr_ZC91uSFg5cthTUq95O6PBdkxXZ68fLYc-1703091652-1-AZu+nvbL+3bwwQOIKpnYgLf5m5Mp0jfQ2baAlDRl1+FiTPO+/+GjcF4Upw4M8vtfh39ZyWF+l68r83qCS9OpObU=;
_cfuvid=Ajd5lPskQSkBImLJdkywZGG4vHMkMCBcxb8TonP9OKc-1703091652762-0-604800000
- __cf_bm=vLLyq2TwwUBcVIkoEqea4eezcekN2nHPTIaRK8B7A6A-1707810665-1.0-AfRzjd7fYnrHf+Z5wnw9ZsWugvZvYZO+O+xtzZCimC/VosJjGzIFCYYWFojEW9it0+wqlPgjAhD8i8aIz43d+6Y=;
_cfuvid=xAlsN4tpUk6egjAfe12ByLvmPPMpPCRpXivDHVBFlgA-1707810665071-0.0-604800000
host:
- api.openai.com
user-agent:
- OpenAI/Python 1.6.0
- OpenAI/Python 1.12.0
x-stainless-arch:
- arm64
x-stainless-async:
@@ -271,7 +567,7 @@ interactions:
x-stainless-os:
- MacOS
x-stainless-package-version:
- 1.6.0
- 1.12.0
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
@@ -279,19 +575,19 @@ interactions:
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
content: "{\n \"id\": \"chatcmpl-8XuEBWLslSBUMDHZRViwpoXcclYOk\",\n \"object\":
\"chat.completion\",\n \"created\": 1703091655,\n \"model\": \"gpt-4-0613\",\n
\ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
\"assistant\",\n \"content\": \"The human asks the AI what is 3 times
4, and the AI responds that 3 times 4 is 12.\"\n },\n \"logprobs\":
null,\n \"finish_reason\": \"stop\"\n }\n ],\n \"usage\": {\n \"prompt_tokens\":
155,\n \"completion_tokens\": 27,\n \"total_tokens\": 182\n },\n \"system_fingerprint\":
null\n}\n"
body:
string: !!binary |
H4sIAAAAAAAAA1SQQWvCQBCF7/kVw55VEo3G5tZCwVJ6agtCKbJJxmTrZmfZnYBB/O9lY9T2sof3
5pt9804RgFCVyEGUjeSytXq6do17TT8375ut28SH5zdHT0Xvsq0ujpmYBIKKHyz5Ss1Kaq1GVmQu
dulQMoatSRZn6yReZfFgtFShDlhteZpO41WyGImGVIle5PAVAQCchjdkMxUeRQ4DPygtei9rFPlt
CEA40kER0nvlWRoWk7tZkmE0Q9yPBqHpWmlA+oMHbhAeX4AJ2k6zsrqHBRQ9pBOQprraDr0lU4Vx
yYPo0HeaQXlI5jMx/nS+RdRUW0dFOMd0Wt/0vTLKNzuH0pMJcTyTveDnCOB7qKL7d52wjlrLO6YD
mrAwWS4v+8S99bs7T0eTiaX+Q2UP0ZhQ+N4ztru9MjU669TQTMgZnaNfAAAA//8DACsW1C8QAgAA
headers:
CF-Cache-Status:
- DYNAMIC
CF-RAY:
- 838971be49500110-GRU
- 854b7c14798815b6-SJC
Cache-Control:
- no-cache, must-revalidate
Connection:
@@ -301,7 +597,7 @@ interactions:
Content-Type:
- application/json
Date:
- Wed, 20 Dec 2023 17:00:58 GMT
- Tue, 13 Feb 2024 07:51:12 GMT
Server:
- cloudflare
Transfer-Encoding:
@@ -315,7 +611,7 @@ interactions:
openai-organization:
- user-z7g4wmlazxqvc5wjyaaaocfz
openai-processing-ms:
- '2942'
- '2088'
openai-version:
- '2020-10-01'
strict-transport-security:
@@ -324,22 +620,17 @@ interactions:
- '10000'
x-ratelimit-limit-tokens:
- '300000'
x-ratelimit-limit-tokens_usage_based:
- '300000'
x-ratelimit-remaining-requests:
- '9999'
x-ratelimit-remaining-tokens:
- '299794'
x-ratelimit-remaining-tokens_usage_based:
- '299794'
x-ratelimit-reset-requests:
- 6ms
x-ratelimit-reset-tokens:
- 41ms
x-ratelimit-reset-tokens_usage_based:
- 41ms
x-request-id:
- 546e9b3713f3ff2d7f9868133efaa3a7
http_version: HTTP/1.1
status_code: 200
- req_eb2a14859f9f296e36283cec901116cb
status:
code: 200
message: OK
version: 1
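
The `interactions:` list closed by this `version: 1` marker is the on-disk layout of a vcr.py cassette: tests replay these recordings instead of calling api.openai.com, which is why every prompt or streaming change in this branch surfaces as a cassette diff. A sketch of how such a cassette is typically replayed; the cassette name and configuration here are illustrative, not crewAI's actual test fixtures:

```python
import vcr

replay = vcr.VCR(
    cassette_library_dir="tests/cassettes",
    record_mode="once",          # record on the first run, replay afterwards
    match_on=["method", "uri"],  # match recorded requests by method and URI
)


@replay.use_cassette("agent_test.yaml")
def test_agent():
    ...  # any OpenAI call made in here is served from the recorded YAML
```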


@@ -1,21 +1,19 @@
interactions:
- request:
body: '{"messages": [{"role": "user", "content": "You are test role.\ntest backstory\n\nYour
personal goal is: test goal\n\nTOOLS:\n------\nYou have access to the following
tools:\n\nmultiplier: multiplier(numbers) -> float - Useful for when you need
to multiply two numbers together. \n\t\t\tThe input to this tool should be a
comma separated list of numbers of \n\t\t\tlength two, representing the two
numbers you want to multiply together. \n\t\t\tFor example, `1,2` would be the
input if you wanted to multiply 1 by 2.\n\nTo use a tool, please use the exact
following format:\n\n```\nThought: Do I need to use a tool? Yes\nAction: the
action to take, should be one of [multiplier]\nAction Input: the input to the
action\nObservation: the result of the action\n```\n\nWhen you have a response
for your task, or if you do not need to use a tool, you MUST use the format:\n\n```\nThought:
Do I need to use a tool? No\nFinal Answer: [your response here]\n```\n\t\tThis
is the summary of your work so far:\n \nBegin! This is VERY important to
you, your job depends on it!\n\nCurrent Task: What is 3 times 4\n\n"}], "model":
"gpt-4", "n": 1, "stop": ["\nObservation"], "stream": false, "temperature":
0.7}'
personal goal is: test goalTOOLS:\n------\nYou have access to only the following
tools:\n\n(''multiplier: multiplier(first_number: int, second_number: int) ->
float - Useful for when you need to multiply two numbers together.'',)\n\nTo
use a tool, please use the exact following format:\n\n```\nThought: Do I need
to use a tool? Yes\nAction: the tool you wanna use, should be one of [multiplier],
just the name.\nAction Input: Any and all relevant information input and context
for using the tool\nObservation: the result of using the tool\n```\n\nWhen you
have a response for your task, or if you do not need to use a tool, you MUST
use the format:\n\n```\nThought: Do I need to use a tool? No\nFinal Answer:
[your response here]```This is the summary of your work so far:\nBegin! This
is VERY important to you, your job depends on it!\n\nCurrent Task: What is 3
times 4?\n"}], "model": "gpt-4", "n": 1, "stop": ["\nObservation"], "stream":
true, "temperature": 0.7}'
headers:
accept:
- application/json
@@ -24,13 +22,13 @@ interactions:
connection:
- keep-alive
content-length:
- '1199'
- '1051'
content-type:
- application/json
host:
- api.openai.com
user-agent:
- OpenAI/Python 1.6.0
- OpenAI/Python 1.12.0
x-stainless-arch:
- arm64
x-stainless-async:
@@ -40,7 +38,7 @@ interactions:
x-stainless-os:
- MacOS
x-stainless-package-version:
- 1.6.0
- 1.12.0
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
@@ -48,35 +46,150 @@ interactions:
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
content: "{\n \"id\": \"chatcmpl-8XuDxosP85Kqo6mU6biIggZ5c828i\",\n \"object\":
\"chat.completion\",\n \"created\": 1703091641,\n \"model\": \"gpt-4-0613\",\n
\ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
\"assistant\",\n \"content\": \"Thought: Do I need to use a tool? Yes\\nAction:
multiplier\\nAction Input: 3,4\"\n },\n \"logprobs\": null,\n \"finish_reason\":
\"stop\"\n }\n ],\n \"usage\": {\n \"prompt_tokens\": 260,\n \"completion_tokens\":
23,\n \"total_tokens\": 283\n },\n \"system_fingerprint\": null\n}\n"
body:
string: 'data: {"id":"chatcmpl-8rhqbBUIPcszzNDkFqcJONl5uyRjU","object":"chat.completion.chunk","created":1707810625,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"role":"assistant","content":""},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rhqbBUIPcszzNDkFqcJONl5uyRjU","object":"chat.completion.chunk","created":1707810625,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"Thought"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rhqbBUIPcszzNDkFqcJONl5uyRjU","object":"chat.completion.chunk","created":1707810625,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":":"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rhqbBUIPcszzNDkFqcJONl5uyRjU","object":"chat.completion.chunk","created":1707810625,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
Do"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rhqbBUIPcszzNDkFqcJONl5uyRjU","object":"chat.completion.chunk","created":1707810625,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
I"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rhqbBUIPcszzNDkFqcJONl5uyRjU","object":"chat.completion.chunk","created":1707810625,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
need"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rhqbBUIPcszzNDkFqcJONl5uyRjU","object":"chat.completion.chunk","created":1707810625,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
to"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rhqbBUIPcszzNDkFqcJONl5uyRjU","object":"chat.completion.chunk","created":1707810625,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
use"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rhqbBUIPcszzNDkFqcJONl5uyRjU","object":"chat.completion.chunk","created":1707810625,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
a"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rhqbBUIPcszzNDkFqcJONl5uyRjU","object":"chat.completion.chunk","created":1707810625,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
tool"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rhqbBUIPcszzNDkFqcJONl5uyRjU","object":"chat.completion.chunk","created":1707810625,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"?"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rhqbBUIPcszzNDkFqcJONl5uyRjU","object":"chat.completion.chunk","created":1707810625,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
Yes"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rhqbBUIPcszzNDkFqcJONl5uyRjU","object":"chat.completion.chunk","created":1707810625,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"\n"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rhqbBUIPcszzNDkFqcJONl5uyRjU","object":"chat.completion.chunk","created":1707810625,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"Action"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rhqbBUIPcszzNDkFqcJONl5uyRjU","object":"chat.completion.chunk","created":1707810625,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":":"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rhqbBUIPcszzNDkFqcJONl5uyRjU","object":"chat.completion.chunk","created":1707810625,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
multiplier"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rhqbBUIPcszzNDkFqcJONl5uyRjU","object":"chat.completion.chunk","created":1707810625,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"\n"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rhqbBUIPcszzNDkFqcJONl5uyRjU","object":"chat.completion.chunk","created":1707810625,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"Action"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rhqbBUIPcszzNDkFqcJONl5uyRjU","object":"chat.completion.chunk","created":1707810625,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
Input"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rhqbBUIPcszzNDkFqcJONl5uyRjU","object":"chat.completion.chunk","created":1707810625,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":":"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rhqbBUIPcszzNDkFqcJONl5uyRjU","object":"chat.completion.chunk","created":1707810625,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
{\""},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rhqbBUIPcszzNDkFqcJONl5uyRjU","object":"chat.completion.chunk","created":1707810625,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"first"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rhqbBUIPcszzNDkFqcJONl5uyRjU","object":"chat.completion.chunk","created":1707810625,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"_number"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rhqbBUIPcszzNDkFqcJONl5uyRjU","object":"chat.completion.chunk","created":1707810625,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"\":"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rhqbBUIPcszzNDkFqcJONl5uyRjU","object":"chat.completion.chunk","created":1707810625,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rhqbBUIPcszzNDkFqcJONl5uyRjU","object":"chat.completion.chunk","created":1707810625,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"3"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rhqbBUIPcszzNDkFqcJONl5uyRjU","object":"chat.completion.chunk","created":1707810625,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":","},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rhqbBUIPcszzNDkFqcJONl5uyRjU","object":"chat.completion.chunk","created":1707810625,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
\""},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rhqbBUIPcszzNDkFqcJONl5uyRjU","object":"chat.completion.chunk","created":1707810625,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"second"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rhqbBUIPcszzNDkFqcJONl5uyRjU","object":"chat.completion.chunk","created":1707810625,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"_number"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rhqbBUIPcszzNDkFqcJONl5uyRjU","object":"chat.completion.chunk","created":1707810625,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"\":"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rhqbBUIPcszzNDkFqcJONl5uyRjU","object":"chat.completion.chunk","created":1707810625,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rhqbBUIPcszzNDkFqcJONl5uyRjU","object":"chat.completion.chunk","created":1707810625,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"4"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rhqbBUIPcszzNDkFqcJONl5uyRjU","object":"chat.completion.chunk","created":1707810625,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"}"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rhqbBUIPcszzNDkFqcJONl5uyRjU","object":"chat.completion.chunk","created":1707810625,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{},"logprobs":null,"finish_reason":"stop"}]}
data: [DONE]
'
headers:
CF-Cache-Status:
- DYNAMIC
CF-RAY:
- 83897166eba7a5fd-GRU
- 854b7af66cad9655-SJC
Cache-Control:
- no-cache, must-revalidate
Connection:
- keep-alive
Content-Encoding:
- gzip
Content-Type:
- application/json
- text/event-stream
Date:
- Wed, 20 Dec 2023 17:00:44 GMT
- Tue, 13 Feb 2024 07:50:25 GMT
Server:
- cloudflare
Set-Cookie:
- __cf_bm=dRKr_rTtq3ZGad82s4Lpo6VOXPMLScbjq8fMvjANBpY-1703091644-1-AUDR6a/EPcG95H4He0KddFkZbk45hbZTA/BPUyFBTNiYGlzd2GIBZnPgpVOJXfr9n4lXV8jRf1bRmUJbsZnQ5MM=;
path=/; expires=Wed, 20-Dec-23 17:30:44 GMT; domain=.api.openai.com; HttpOnly;
- __cf_bm=1KOow.3J0t2OxoVQi5mb0rWx.eRB.e_i_QzfhmSDUss-1707810625-1-AYnrUrvGx81Jze9dSAFeX9JhHwV2P2fecA0Im/jPPedRz/Gk/RtK4W0+baUlCe8wRVmXlI0/3nR4HhLcA9W1dPI=;
path=/; expires=Tue, 13-Feb-24 08:20:25 GMT; domain=.api.openai.com; HttpOnly;
Secure; SameSite=None
- _cfuvid=suHEOi6nmUCq7cFZiZAg5nwyGtTeiFynig5_5V4esA8-1703091644341-0-604800000;
- _cfuvid=lwFk3pBXS_caF6.lmxqR4UVI9FMIKSttmrWBXrT4Klk-1707810625774-0-604800000;
path=/; domain=.api.openai.com; HttpOnly; Secure; SameSite=None
Transfer-Encoding:
- chunked
@@ -89,7 +202,7 @@ interactions:
openai-organization:
- user-z7g4wmlazxqvc5wjyaaaocfz
openai-processing-ms:
- '2718'
- '225'
openai-version:
- '2020-10-01'
strict-transport-security:
@@ -98,42 +211,38 @@ interactions:
- '10000'
x-ratelimit-limit-tokens:
- '300000'
x-ratelimit-limit-tokens_usage_based:
- '300000'
x-ratelimit-remaining-requests:
- '9999'
x-ratelimit-remaining-tokens:
- '299729'
x-ratelimit-remaining-tokens_usage_based:
- '299729'
- '299760'
x-ratelimit-reset-requests:
- 6ms
x-ratelimit-reset-tokens:
- 54ms
x-ratelimit-reset-tokens_usage_based:
- 54ms
- 48ms
x-request-id:
- 1714a9f5a2141d30f72506facf616944
http_version: HTTP/1.1
status_code: 200
- req_198e3204ee23b23d5eb5eeaf840dcd6e
status:
code: 200
message: OK
- request:
body: '{"messages": [{"role": "user", "content": "You are test role.\ntest backstory\n\nYour
personal goal is: test goal\n\nTOOLS:\n------\nYou have access to the following
tools:\n\nmultiplier: multiplier(numbers) -> float - Useful for when you need
to multiply two numbers together. \n\t\t\tThe input to this tool should be a
comma separated list of numbers of \n\t\t\tlength two, representing the two
numbers you want to multiply together. \n\t\t\tFor example, `1,2` would be the
input if you wanted to multiply 1 by 2.\n\nTo use a tool, please use the exact
following format:\n\n```\nThought: Do I need to use a tool? Yes\nAction: the
action to take, should be one of [multiplier]\nAction Input: the input to the
action\nObservation: the result of the action\n```\n\nWhen you have a response
for your task, or if you do not need to use a tool, you MUST use the format:\n\n```\nThought:
Do I need to use a tool? No\nFinal Answer: [your response here]\n```\n\t\tThis
is the summary of your work so far:\n \nBegin! This is VERY important to
you, your job depends on it!\n\nCurrent Task: What is 3 times 4\nThought: Do
I need to use a tool? Yes\nAction: multiplier\nAction Input: 3,4\nObservation:
12\nThought: \n"}], "model": "gpt-4", "n": 1, "stop": ["\nObservation"], "stream":
false, "temperature": 0.7}'
body: '{"messages": [{"role": "system", "content": "\n The
schema should have the following structure, only two key:\n -
tool_name: str\n - arguments: dict (with all
arguments being passed)\n\n Example:\n {\"tool_name\":
\"tool_name\", \"arguments\": {\"arg_name1\": \"value\", \"arg_name2\": 2}}\n "},
{"role": "user", "content": "Tools available:\n\nTool Name: multiplier\nTool
Description: multiplier(first_number: int, second_number: int) -> float - Useful
for when you need to multiply two numbers together.\nTool Arguments: {''first_number'':
{''type'': ''integer''}, ''second_number'': {''type'': ''integer''}}\n\nReturn
a valid schema for the tool, use this text to inform a valid ouput schema:\n\nTool
Name: multiplier\nTool Arguments: {\"first_number\": 3, \"second_number\": 4}```"}],
"model": "gpt-4", "function_call": {"name": "InstructorToolCalling"}, "functions":
[{"name": "InstructorToolCalling", "description": "Correctly extracted `InstructorToolCalling`
with all the required parameters with correct types", "parameters": {"properties":
{"tool_name": {"description": "The name of the tool to be called.", "title":
"Tool Name", "type": "string"}, "arguments": {"description": "A dictinary of
arguments to be passed to the tool.", "title": "Arguments", "type": "object"}},
"required": ["arguments", "tool_name"], "type": "object"}}]}'
headers:
accept:
- application/json
@@ -142,16 +251,16 @@ interactions:
connection:
- keep-alive
content-length:
- '1303'
- '1514'
content-type:
- application/json
cookie:
- __cf_bm=dRKr_rTtq3ZGad82s4Lpo6VOXPMLScbjq8fMvjANBpY-1703091644-1-AUDR6a/EPcG95H4He0KddFkZbk45hbZTA/BPUyFBTNiYGlzd2GIBZnPgpVOJXfr9n4lXV8jRf1bRmUJbsZnQ5MM=;
_cfuvid=suHEOi6nmUCq7cFZiZAg5nwyGtTeiFynig5_5V4esA8-1703091644341-0-604800000
- __cf_bm=1KOow.3J0t2OxoVQi5mb0rWx.eRB.e_i_QzfhmSDUss-1707810625-1-AYnrUrvGx81Jze9dSAFeX9JhHwV2P2fecA0Im/jPPedRz/Gk/RtK4W0+baUlCe8wRVmXlI0/3nR4HhLcA9W1dPI=;
_cfuvid=lwFk3pBXS_caF6.lmxqR4UVI9FMIKSttmrWBXrT4Klk-1707810625774-0-604800000
host:
- api.openai.com
user-agent:
- OpenAI/Python 1.6.0
- OpenAI/Python 1.12.0
x-stainless-arch:
- arm64
x-stainless-async:
@@ -161,7 +270,7 @@ interactions:
x-stainless-os:
- MacOS
x-stainless-package-version:
- 1.6.0
- 1.12.0
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
@@ -169,18 +278,20 @@ interactions:
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
content: "{\n \"id\": \"chatcmpl-8XuE0tWsbI9QDkRau6rzSUZfuqhFN\",\n \"object\":
\"chat.completion\",\n \"created\": 1703091644,\n \"model\": \"gpt-4-0613\",\n
\ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
\"assistant\",\n \"content\": \"Do I need to use a tool? No\\nFinal Answer:
12\"\n },\n \"logprobs\": null,\n \"finish_reason\": \"stop\"\n
\ }\n ],\n \"usage\": {\n \"prompt_tokens\": 293,\n \"completion_tokens\":
15,\n \"total_tokens\": 308\n },\n \"system_fingerprint\": null\n}\n"
body:
string: !!binary |
H4sIAAAAAAAAA1xRS2/bMAy++1cQPCeD06ZN6muHrr0O7XqYB0NRGFudJGoSPbQN/N8H5ensIgjf
S5/IbQGAZo0VoO6UaBfsdBm7P9T+oNXn/cfN3be/j9+7r/Ly+ird+8MnTrKDV2+k5ej6otkFS2LY
72kdSQnl1NmiXCxn5e3Vckc4XpPNtjbIdD4tb2fXB0fHRlPCCn4WAADb3Zm7+TW9YwXl5Ig4Skm1
hNVJBICRbUZQpWSSKC84OZOavZDPdX1v7YjY9F7n1o1W1l4EAqBXbhf55JPEXgvHZ2Z7r6w1vh3F
A6CKbe/IS+6P2xqF2TbZX2MFNbreignWUKxxAvVZnultjRsTkzS+d6usqOA6ixJp9usROh8GPD06
HG7DaSyW2xB5lf77JW6MN6lrIqnEPvdLwmEflEN+7cbfX0wUQ2QXpBH+TT4HXs1v9nl43vSIPZLC
ouwIX5TFoSGmjyTkmo3xLcUQzWkbxVD8AwAA//8DAAe0NN+EAgAA
headers:
CF-Cache-Status:
- DYNAMIC
CF-RAY:
- 838971796af6a5fd-GRU
- 854b7b0baffb9655-SJC
Cache-Control:
- no-cache, must-revalidate
Connection:
@@ -190,7 +301,7 @@ interactions:
Content-Type:
- application/json
Date:
- Wed, 20 Dec 2023 17:00:46 GMT
- Tue, 13 Feb 2024 07:50:31 GMT
Server:
- cloudflare
Transfer-Encoding:
@@ -204,7 +315,7 @@ interactions:
openai-organization:
- user-z7g4wmlazxqvc5wjyaaaocfz
openai-processing-ms:
- '2355'
- '2535'
openai-version:
- '2020-10-01'
strict-transport-security:
@@ -213,24 +324,209 @@ interactions:
- '10000'
x-ratelimit-limit-tokens:
- '300000'
x-ratelimit-limit-tokens_usage_based:
x-ratelimit-remaining-requests:
- '9999'
x-ratelimit-remaining-tokens:
- '299770'
x-ratelimit-reset-requests:
- 6ms
x-ratelimit-reset-tokens:
- 45ms
x-request-id:
- req_f0341a808828a455435564e19aae93b7
status:
code: 200
message: OK
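
The "Tools available:" message in this request renders each tool as `Tool Name` / `Tool Description` / `Tool Arguments`. A sketch of producing that rendering from a LangChain tool; `render_tool` is a hypothetical helper, not crewAI's actual function (with the langchain 0.1-era `@tool` decorator the description embeds the signature, as the recorded payload shows):

```python
from langchain.tools import tool


@tool
def multiplier(first_number: int, second_number: int) -> float:
    """Useful for when you need to multiply two numbers together."""
    return float(first_number * second_number)


def render_tool(t) -> str:
    """Format a tool the way the recorded prompt does."""
    return (
        f"Tool Name: {t.name}\n"
        f"Tool Description: {t.description}\n"
        f"Tool Arguments: {t.args}"
    )


print(render_tool(multiplier))
# Roughly:
# Tool Name: multiplier
# Tool Description: multiplier(first_number: int, second_number: int) -> float
#     - Useful for when you need to multiply two numbers together.
# Tool Arguments: {'first_number': {'type': 'integer'}, ...}
```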
- request:
body: '{"messages": [{"role": "user", "content": "You are test role.\ntest backstory\n\nYour
personal goal is: test goalTOOLS:\n------\nYou have access to only the following
tools:\n\n(''multiplier: multiplier(first_number: int, second_number: int) ->
float - Useful for when you need to multiply two numbers together.'',)\n\nTo
use a tool, please use the exact following format:\n\n```\nThought: Do I need
to use a tool? Yes\nAction: the tool you wanna use, should be one of [multiplier],
just the name.\nAction Input: Any and all relevant information input and context
for using the tool\nObservation: the result of using the tool\n```\n\nWhen you
have a response for your task, or if you do not need to use a tool, you MUST
use the format:\n\n```\nThought: Do I need to use a tool? No\nFinal Answer:
[your response here]```This is the summary of your work so far:\nBegin! This
is VERY important to you, your job depends on it!\n\nCurrent Task: What is 3
times 4?\nThought: Do I need to use a tool? Yes\nAction: multiplier\nAction
Input: {\"first_number\": 3, \"second_number\": 4}\nObservation: 12\nThought:
"}], "model": "gpt-4", "n": 1, "stop": ["\nObservation"], "stream": true, "temperature":
0.7}'
headers:
accept:
- application/json
accept-encoding:
- gzip, deflate
connection:
- keep-alive
content-length:
- '1195'
content-type:
- application/json
cookie:
- __cf_bm=1KOow.3J0t2OxoVQi5mb0rWx.eRB.e_i_QzfhmSDUss-1707810625-1-AYnrUrvGx81Jze9dSAFeX9JhHwV2P2fecA0Im/jPPedRz/Gk/RtK4W0+baUlCe8wRVmXlI0/3nR4HhLcA9W1dPI=;
_cfuvid=lwFk3pBXS_caF6.lmxqR4UVI9FMIKSttmrWBXrT4Klk-1707810625774-0-604800000
host:
- api.openai.com
user-agent:
- OpenAI/Python 1.12.0
x-stainless-arch:
- arm64
x-stainless-async:
- 'false'
x-stainless-lang:
- python
x-stainless-os:
- MacOS
x-stainless-package-version:
- 1.12.0
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.11.7
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
body:
string: 'data: {"id":"chatcmpl-8rhqhnZyqMjC0VI3FdgGRP7VGRtxz","object":"chat.completion.chunk","created":1707810631,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"role":"assistant","content":""},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rhqhnZyqMjC0VI3FdgGRP7VGRtxz","object":"chat.completion.chunk","created":1707810631,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"Do"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rhqhnZyqMjC0VI3FdgGRP7VGRtxz","object":"chat.completion.chunk","created":1707810631,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
I"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rhqhnZyqMjC0VI3FdgGRP7VGRtxz","object":"chat.completion.chunk","created":1707810631,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
need"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rhqhnZyqMjC0VI3FdgGRP7VGRtxz","object":"chat.completion.chunk","created":1707810631,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
to"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rhqhnZyqMjC0VI3FdgGRP7VGRtxz","object":"chat.completion.chunk","created":1707810631,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
use"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rhqhnZyqMjC0VI3FdgGRP7VGRtxz","object":"chat.completion.chunk","created":1707810631,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
a"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rhqhnZyqMjC0VI3FdgGRP7VGRtxz","object":"chat.completion.chunk","created":1707810631,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
tool"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rhqhnZyqMjC0VI3FdgGRP7VGRtxz","object":"chat.completion.chunk","created":1707810631,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"?"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rhqhnZyqMjC0VI3FdgGRP7VGRtxz","object":"chat.completion.chunk","created":1707810631,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
No"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rhqhnZyqMjC0VI3FdgGRP7VGRtxz","object":"chat.completion.chunk","created":1707810631,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"\n"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rhqhnZyqMjC0VI3FdgGRP7VGRtxz","object":"chat.completion.chunk","created":1707810631,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"Final"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rhqhnZyqMjC0VI3FdgGRP7VGRtxz","object":"chat.completion.chunk","created":1707810631,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
Answer"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rhqhnZyqMjC0VI3FdgGRP7VGRtxz","object":"chat.completion.chunk","created":1707810631,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":":"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rhqhnZyqMjC0VI3FdgGRP7VGRtxz","object":"chat.completion.chunk","created":1707810631,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rhqhnZyqMjC0VI3FdgGRP7VGRtxz","object":"chat.completion.chunk","created":1707810631,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"3"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rhqhnZyqMjC0VI3FdgGRP7VGRtxz","object":"chat.completion.chunk","created":1707810631,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
times"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rhqhnZyqMjC0VI3FdgGRP7VGRtxz","object":"chat.completion.chunk","created":1707810631,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rhqhnZyqMjC0VI3FdgGRP7VGRtxz","object":"chat.completion.chunk","created":1707810631,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"4"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rhqhnZyqMjC0VI3FdgGRP7VGRtxz","object":"chat.completion.chunk","created":1707810631,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
equals"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rhqhnZyqMjC0VI3FdgGRP7VGRtxz","object":"chat.completion.chunk","created":1707810631,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rhqhnZyqMjC0VI3FdgGRP7VGRtxz","object":"chat.completion.chunk","created":1707810631,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"12"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rhqhnZyqMjC0VI3FdgGRP7VGRtxz","object":"chat.completion.chunk","created":1707810631,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"."},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rhqhnZyqMjC0VI3FdgGRP7VGRtxz","object":"chat.completion.chunk","created":1707810631,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{},"logprobs":null,"finish_reason":"stop"}]}
data: [DONE]
'
headers:
CF-Cache-Status:
- DYNAMIC
CF-RAY:
- 854b7b1dbb5e9655-SJC
Cache-Control:
- no-cache, must-revalidate
Connection:
- keep-alive
Content-Type:
- text/event-stream
Date:
- Tue, 13 Feb 2024 07:50:31 GMT
Server:
- cloudflare
Transfer-Encoding:
- chunked
access-control-allow-origin:
- '*'
alt-svc:
- h3=":443"; ma=86400
openai-model:
- gpt-4-0613
openai-organization:
- user-z7g4wmlazxqvc5wjyaaaocfz
openai-processing-ms:
- '508'
openai-version:
- '2020-10-01'
strict-transport-security:
- max-age=15724800; includeSubDomains
x-ratelimit-limit-requests:
- '10000'
x-ratelimit-limit-tokens:
- '300000'
x-ratelimit-remaining-requests:
- '9999'
x-ratelimit-remaining-tokens:
- '299704'
x-ratelimit-remaining-tokens_usage_based:
- '299704'
- '299725'
x-ratelimit-reset-requests:
- 6ms
x-ratelimit-reset-tokens:
- 59ms
x-ratelimit-reset-tokens_usage_based:
- 59ms
- 54ms
x-request-id:
- a3de9d34f17496d9bdd2ae9360f6054a
http_version: HTTP/1.1
status_code: 200
- req_05c53af3bfe5ebe6490149585032dead
status:
code: 200
message: OK
- request:
body: '{"messages": [{"role": "user", "content": "Progressively summarize the
lines of conversation provided, adding onto the previous summary returning a
@@ -242,8 +538,8 @@ interactions:
the AI thinks of artificial intelligence. The AI thinks artificial intelligence
is a force for good because it will help humans reach their full potential.\nEND
OF EXAMPLE\n\nCurrent summary:\n\n\nNew lines of conversation:\nHuman: What
is 3 times 4\nAI: 12\n\nNew summary:"}], "model": "gpt-4", "n": 1, "stream":
false, "temperature": 0.7}'
is 3 times 4?\nAI: 3 times 4 equals 12.\n\nNew summary:"}], "model": "gpt-4",
"n": 1, "stream": false, "temperature": 0.7}'
headers:
accept:
- application/json
@@ -252,16 +548,16 @@ interactions:
connection:
- keep-alive
content-length:
- '871'
- '890'
content-type:
- application/json
cookie:
- __cf_bm=dRKr_rTtq3ZGad82s4Lpo6VOXPMLScbjq8fMvjANBpY-1703091644-1-AUDR6a/EPcG95H4He0KddFkZbk45hbZTA/BPUyFBTNiYGlzd2GIBZnPgpVOJXfr9n4lXV8jRf1bRmUJbsZnQ5MM=;
_cfuvid=suHEOi6nmUCq7cFZiZAg5nwyGtTeiFynig5_5V4esA8-1703091644341-0-604800000
- __cf_bm=1KOow.3J0t2OxoVQi5mb0rWx.eRB.e_i_QzfhmSDUss-1707810625-1-AYnrUrvGx81Jze9dSAFeX9JhHwV2P2fecA0Im/jPPedRz/Gk/RtK4W0+baUlCe8wRVmXlI0/3nR4HhLcA9W1dPI=;
_cfuvid=lwFk3pBXS_caF6.lmxqR4UVI9FMIKSttmrWBXrT4Klk-1707810625774-0-604800000
host:
- api.openai.com
user-agent:
- OpenAI/Python 1.6.0
- OpenAI/Python 1.12.0
x-stainless-arch:
- arm64
x-stainless-async:
@@ -271,7 +567,7 @@ interactions:
x-stainless-os:
- MacOS
x-stainless-package-version:
- 1.6.0
- 1.12.0
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
@@ -279,18 +575,20 @@ interactions:
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
content: "{\n \"id\": \"chatcmpl-8XuE3Di6FRdNAetXNfWs6OWyRAfkf\",\n \"object\":
\"chat.completion\",\n \"created\": 1703091647,\n \"model\": \"gpt-4-0613\",\n
\ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
\"assistant\",\n \"content\": \"The human asks the AI what is 3 times
4 and the AI responds with 12.\"\n },\n \"logprobs\": null,\n \"finish_reason\":
\"stop\"\n }\n ],\n \"usage\": {\n \"prompt_tokens\": 149,\n \"completion_tokens\":
20,\n \"total_tokens\": 169\n },\n \"system_fingerprint\": null\n}\n"
body:
string: !!binary |
H4sIAAAAAAAAA1SQwW7CMBBE7/mKlc+ACARocyunVlUrVUJFqKqQcZbE4NiudyNaIf69cgjQXnyY
2Rm/3WMCIHQhchCqkqxqb/p3ofraPc7l2zxbpPvX5aF5Wanpqnx/LpZj0YsJt9mh4ktqoFztDbJ2
9myrgJIxtqaz4ewuHU7H49aoXYEmxkrP/aw/nKZdoaqcVkgih48EAODYvpHNFvgtchj2LkqNRLJE
kV+HAERwJipCEmliaVn0bqZyltG2uIsKoWpqaUHSnoArhIcnYAdKGtUYyQhjYF0jQdYDaYvLSEDy
zhYxIrkVpaUDBtAE6Wggut9OV0zjSh/cJq5kG2Ou+lZbTdU6oCRnIxKx8+f4KQH4bM/R/NtQ+OBq
z2t2e7SxMJ1Mzn3idvmbO8o6kx1L8yc1u086QkE/xFivt9qWGHzQ7XUiZ3JKfgEAAP//AwADDMfm
FAIAAA==
headers:
CF-Cache-Status:
- DYNAMIC
CF-RAY:
- 838971899954a5fd-GRU
- 854b7b2acb559655-SJC
Cache-Control:
- no-cache, must-revalidate
Connection:
@@ -300,7 +598,7 @@ interactions:
Content-Type:
- application/json
Date:
- Wed, 20 Dec 2023 17:00:49 GMT
- Tue, 13 Feb 2024 07:50:35 GMT
Server:
- cloudflare
Transfer-Encoding:
@@ -314,7 +612,7 @@ interactions:
openai-organization:
- user-z7g4wmlazxqvc5wjyaaaocfz
openai-processing-ms:
- '2698'
- '1629'
openai-version:
- '2020-10-01'
strict-transport-security:
@@ -323,22 +621,17 @@ interactions:
- '10000'
x-ratelimit-limit-tokens:
- '300000'
x-ratelimit-limit-tokens_usage_based:
- '300000'
x-ratelimit-remaining-requests:
- '9999'
x-ratelimit-remaining-tokens:
- '299798'
x-ratelimit-remaining-tokens_usage_based:
- '299798'
- '299793'
x-ratelimit-reset-requests:
- 6ms
x-ratelimit-reset-tokens:
- 40ms
x-ratelimit-reset-tokens_usage_based:
- 40ms
- 41ms
x-request-id:
- ddbd97cea4ec099c21c00ca922157ae1
http_version: HTTP/1.1
status_code: 200
- req_a6731ec59a7e505ad280af3f51dbfe3a
status:
code: 200
message: OK
version: 1
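The YAML above (and the files that follow) are vcrpy cassettes: recorded HTTP exchanges that the test suite replays instead of hitting api.openai.com. A minimal sketch of how such a fixture is typically loaded — the cassette directory, file name, and test name here are illustrative, not taken from this diff:

```python
# Minimal sketch of replaying a recorded cassette with vcrpy, assuming the
# cassettes live under tests/cassettes/ (path and names are illustrative).
import vcr

recorder = vcr.VCR(
    cassette_library_dir="tests/cassettes",
    record_mode="once",                # replay if the cassette exists, record otherwise
    filter_headers=["authorization"],  # keep API keys out of the YAML
)

@recorder.use_cassette("test_agent_summarization.yaml")
def test_agent_summarization():
    # Any OpenAI HTTP call made in here is served from the cassette,
    # so the test is deterministic and runs offline.
    ...
```

This is why the diff touches so many cassettes at once: any change to the prompts or to the client version invalidates the recorded request bodies, and the fixtures have to be re-recorded.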

File diff suppressed because it is too large

View File

@@ -1,19 +1,21 @@
interactions:
- request:
body: '{"messages": [{"role": "user", "content": "You are test role.\ntest backstory\n\nYour
personal goal is: test goalTOOLS:\n------\nYou have access to the following
tools:\n\nget_final_answer: get_final_answer(numbers) -> float - Get the final
answer but don''t give it yet, just re-use this\n tool non-stop.\n\nTo
personal goal is: test goalTOOLS:\n------\nYou have access to only the following
tools:\n\n(\"get_final_answer: get_final_answer(numbers) -> float - Get the
final answer but don''t give it yet, just re-use this\\n tool non-stop.\",)\n\nTo
use a tool, please use the exact following format:\n\n```\nThought: Do I need
to use a tool? Yes\nAction: the action to take, should be one of [get_final_answer],
just the name.\nAction Input: the input to the action\nObservation: the result
of the action\n```\n\nWhen you have a response for your task, or if you do not
need to use a tool, you MUST use the format:\n\n```\nThought: Do I need to use
a tool? No\nFinal Answer: [your response here]This is the summary of your work
so far:\nBegin! This is VERY important to you, your job depends on it!\n\nCurrent
Task: The final answer is 42. But don''t give it yet, instead keep using the
`get_final_answer` tool.\n"}], "model": "gpt-4", "n": 1, "stop": ["\nObservation"],
"stream": false, "temperature": 0.7}'
to use a tool? Yes\nAction: the tool you wanna use, should be one of [get_final_answer],
just the name.\nAction Input: Any and all relevant information input and context
for using the tool\nObservation: the result of using the tool\n```\n\nWhen you
have a response for your task, or if you do not need to use a tool, you MUST
use the format:\n\n```\nThought: Do I need to use a tool? No\nFinal Answer:
[your response here]```This is the summary of your work so far:\nBegin! This
is VERY important to you, your job depends on it!\n\nCurrent Task: The final
answer is 42. But don''t give it yet, instead keep using the `get_final_answer`
tool. Until you''re told you could give my final answer if I''m ready.\n"}],
"model": "gpt-4", "n": 1, "stop": ["\nObservation"], "stream": true, "temperature":
0.7}'
headers:
accept:
- application/json
@@ -22,13 +24,13 @@ interactions:
connection:
- keep-alive
content-length:
- '1075'
- '1207'
content-type:
- application/json
host:
- api.openai.com
user-agent:
- OpenAI/Python 1.7.1
- OpenAI/Python 1.12.0
x-stainless-arch:
- arm64
x-stainless-async:
@@ -38,7 +40,7 @@ interactions:
x-stainless-os:
- MacOS
x-stainless-package-version:
- 1.7.1
- 1.12.0
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
@@ -46,36 +48,117 @@ interactions:
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
content: "{\n \"id\": \"chatcmpl-8frBTCWXULTV5ZYHy3Y5JXKovrKiN\",\n \"object\":
\"chat.completion\",\n \"created\": 1704986579,\n \"model\": \"gpt-4-0613\",\n
\ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
\"assistant\",\n \"content\": \"Thought: Do I need to use a tool? Yes\\nAction:
get_final_answer\\nAction Input: [42]\"\n },\n \"logprobs\": null,\n
\ \"finish_reason\": \"stop\"\n }\n ],\n \"usage\": {\n \"prompt_tokens\":
233,\n \"completion_tokens\": 24,\n \"total_tokens\": 257\n },\n \"system_fingerprint\":
null\n}\n"
body:
string: 'data: {"id":"chatcmpl-8rjZTbAFt0BcHwp0sgbKC6bC23h2U","object":"chat.completion.chunk","created":1707817251,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"role":"assistant","content":""},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rjZTbAFt0BcHwp0sgbKC6bC23h2U","object":"chat.completion.chunk","created":1707817251,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"Thought"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rjZTbAFt0BcHwp0sgbKC6bC23h2U","object":"chat.completion.chunk","created":1707817251,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":":"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rjZTbAFt0BcHwp0sgbKC6bC23h2U","object":"chat.completion.chunk","created":1707817251,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
Do"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rjZTbAFt0BcHwp0sgbKC6bC23h2U","object":"chat.completion.chunk","created":1707817251,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
I"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rjZTbAFt0BcHwp0sgbKC6bC23h2U","object":"chat.completion.chunk","created":1707817251,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
need"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rjZTbAFt0BcHwp0sgbKC6bC23h2U","object":"chat.completion.chunk","created":1707817251,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
to"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rjZTbAFt0BcHwp0sgbKC6bC23h2U","object":"chat.completion.chunk","created":1707817251,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
use"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rjZTbAFt0BcHwp0sgbKC6bC23h2U","object":"chat.completion.chunk","created":1707817251,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
a"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rjZTbAFt0BcHwp0sgbKC6bC23h2U","object":"chat.completion.chunk","created":1707817251,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
tool"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rjZTbAFt0BcHwp0sgbKC6bC23h2U","object":"chat.completion.chunk","created":1707817251,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"?"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rjZTbAFt0BcHwp0sgbKC6bC23h2U","object":"chat.completion.chunk","created":1707817251,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
Yes"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rjZTbAFt0BcHwp0sgbKC6bC23h2U","object":"chat.completion.chunk","created":1707817251,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"\n"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rjZTbAFt0BcHwp0sgbKC6bC23h2U","object":"chat.completion.chunk","created":1707817251,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"Action"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rjZTbAFt0BcHwp0sgbKC6bC23h2U","object":"chat.completion.chunk","created":1707817251,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":":"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rjZTbAFt0BcHwp0sgbKC6bC23h2U","object":"chat.completion.chunk","created":1707817251,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
get"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rjZTbAFt0BcHwp0sgbKC6bC23h2U","object":"chat.completion.chunk","created":1707817251,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"_final"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rjZTbAFt0BcHwp0sgbKC6bC23h2U","object":"chat.completion.chunk","created":1707817251,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"_answer"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rjZTbAFt0BcHwp0sgbKC6bC23h2U","object":"chat.completion.chunk","created":1707817251,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"\n"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rjZTbAFt0BcHwp0sgbKC6bC23h2U","object":"chat.completion.chunk","created":1707817251,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"Action"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rjZTbAFt0BcHwp0sgbKC6bC23h2U","object":"chat.completion.chunk","created":1707817251,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
Input"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rjZTbAFt0BcHwp0sgbKC6bC23h2U","object":"chat.completion.chunk","created":1707817251,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":":"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rjZTbAFt0BcHwp0sgbKC6bC23h2U","object":"chat.completion.chunk","created":1707817251,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rjZTbAFt0BcHwp0sgbKC6bC23h2U","object":"chat.completion.chunk","created":1707817251,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"42"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rjZTbAFt0BcHwp0sgbKC6bC23h2U","object":"chat.completion.chunk","created":1707817251,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{},"logprobs":null,"finish_reason":"stop"}]}
data: [DONE]
'
headers:
CF-Cache-Status:
- DYNAMIC
CF-RAY:
- 843e2886ceca1d23-GRU
- 854c1cba9cd99450-SJC
Cache-Control:
- no-cache, must-revalidate
Connection:
- keep-alive
Content-Encoding:
- gzip
Content-Type:
- application/json
- text/event-stream
Date:
- Thu, 11 Jan 2024 15:23:03 GMT
- Tue, 13 Feb 2024 09:40:51 GMT
Server:
- cloudflare
Set-Cookie:
- __cf_bm=GdfwhILXB4b01GTJ_0AXbmROnfxzuPlJLxFJLh4vT8s-1704986583-1-AVb+x5LLEXeeVIiDv7ug/2lnD4qFsyXri+Vg04LYp0s2eK+KH8sGMWHpPzgzKOu9sf3rVi7Fl2OOuY7+OjbUYY8=;
path=/; expires=Thu, 11-Jan-24 15:53:03 GMT; domain=.api.openai.com; HttpOnly;
- __cf_bm=Ut7.Mf8nVlzF1mvnebp8z87PJWGdMJgpvIC3DniTzm0-1707817251-1-ARgaU3vRuSzlOdxfjwpTMGQz9fGZJuu9H9QhwNXFnW8inmWAf9rgWn+vUdAdOAH+OMmxEkE7k8c1QVH5A4i/onE=;
path=/; expires=Tue, 13-Feb-24 10:10:51 GMT; domain=.api.openai.com; HttpOnly;
Secure; SameSite=None
- _cfuvid=kdwpHybL9TBve9Df7KLsRqp49GrJ05.atUaH_t6plL0-1704986583862-0-604800000;
- _cfuvid=g.c9TwTQPsU2YZIqjeD2.nLBDZiyErteYcViVmTFshY-1707817251436-0-604800000;
path=/; domain=.api.openai.com; HttpOnly; Secure; SameSite=None
Transfer-Encoding:
- chunked
@@ -88,7 +171,7 @@ interactions:
openai-organization:
- user-z7g4wmlazxqvc5wjyaaaocfz
openai-processing-ms:
- '4424'
- '285'
openai-version:
- '2020-10-01'
strict-transport-security:
@@ -100,31 +183,34 @@ interactions:
x-ratelimit-remaining-requests:
- '9999'
x-ratelimit-remaining-tokens:
- '299755'
- '299721'
x-ratelimit-reset-requests:
- 6ms
x-ratelimit-reset-tokens:
- 49ms
- 55ms
x-request-id:
- 76974d365254ca84f70c43fc31af3378
http_version: HTTP/1.1
status_code: 200
- req_5c80e8f3f6118f2f74046e60673d288e
status:
code: 200
message: OK
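The response body above changed from a single JSON completion to `text/event-stream` chunks because the request now sends `"stream": true`. A minimal sketch of consuming such a stream with the pinned client (openai 1.12.0); the prompt is a placeholder:

```python
# Sketch: consuming a streamed chat completion like the one recorded above.
# With stream=True the client yields ChatCompletionChunk objects whose
# choices[0].delta.content carries the incremental tokens seen in the cassette.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

stream = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "placeholder prompt"}],
    stop=["\nObservation"],
    stream=True,
    temperature=0.7,
)

answer = ""
for chunk in stream:
    delta = chunk.choices[0].delta.content
    if delta:  # the first and last chunks carry no content
        answer += delta
print(answer)
```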
- request:
body: '{"messages": [{"role": "user", "content": "You are test role.\ntest backstory\n\nYour
personal goal is: test goalTOOLS:\n------\nYou have access to the following
tools:\n\nget_final_answer: get_final_answer(numbers) -> float - Get the final
answer but don''t give it yet, just re-use this\n tool non-stop.\n\nTo
use a tool, please use the exact following format:\n\n```\nThought: Do I need
to use a tool? Yes\nAction: the action to take, should be one of [get_final_answer],
just the name.\nAction Input: the input to the action\nObservation: the result
of the action\n```\n\nWhen you have a response for your task, or if you do not
need to use a tool, you MUST use the format:\n\n```\nThought: Do I need to use
a tool? No\nFinal Answer: [your response here]This is the summary of your work
so far:\nBegin! This is VERY important to you, your job depends on it!\n\nCurrent
Task: The final answer is 42. But don''t give it yet, instead keep using the
`get_final_answer` tool.\nThought: Do I need to use a tool? Yes\nAction: get_final_answer\nAction
Input: [42]\nObservation: 42\nThought: "}], "model": "gpt-4", "n": 1, "stop":
["\nObservation"], "stream": false, "temperature": 0.7}'
body: '{"messages": [{"role": "system", "content": "\n The
schema should have the following structure, only two key:\n -
tool_name: str\n - arguments: dict (with all
arguments being passed)\n\n Example:\n {\"tool_name\":
\"tool_name\", \"arguments\": {\"arg_name1\": \"value\", \"arg_name2\": 2}}\n "},
{"role": "user", "content": "Tools available:\n\nTool Name: get_final_answer\nTool
Description: get_final_answer(numbers) -> float - Get the final answer but don''t
give it yet, just re-use this\n tool non-stop.\nTool Arguments: {''numbers'':
{}}\n\nReturn a valid schema for the tool, use this text to inform a valid ouput
schema:\n\nTool Name: get_final_answer\nTool Arguments: 42```"}], "model": "gpt-4",
"function_call": {"name": "InstructorToolCalling"}, "functions": [{"name": "InstructorToolCalling",
"description": "Correctly extracted `InstructorToolCalling` with all the required
parameters with correct types", "parameters": {"properties": {"tool_name": {"description":
"The name of the tool to be called.", "title": "Tool Name", "type": "string"},
"arguments": {"description": "A dictinary of arguments to be passed to the tool.",
"title": "Arguments", "type": "object"}}, "required": ["arguments", "tool_name"],
"type": "object"}}]}'
headers:
accept:
- application/json
@@ -133,16 +219,16 @@ interactions:
connection:
- keep-alive
content-length:
- '1186'
- '1427'
content-type:
- application/json
cookie:
- __cf_bm=GdfwhILXB4b01GTJ_0AXbmROnfxzuPlJLxFJLh4vT8s-1704986583-1-AVb+x5LLEXeeVIiDv7ug/2lnD4qFsyXri+Vg04LYp0s2eK+KH8sGMWHpPzgzKOu9sf3rVi7Fl2OOuY7+OjbUYY8=;
_cfuvid=kdwpHybL9TBve9Df7KLsRqp49GrJ05.atUaH_t6plL0-1704986583862-0-604800000
- __cf_bm=Ut7.Mf8nVlzF1mvnebp8z87PJWGdMJgpvIC3DniTzm0-1707817251-1-ARgaU3vRuSzlOdxfjwpTMGQz9fGZJuu9H9QhwNXFnW8inmWAf9rgWn+vUdAdOAH+OMmxEkE7k8c1QVH5A4i/onE=;
_cfuvid=g.c9TwTQPsU2YZIqjeD2.nLBDZiyErteYcViVmTFshY-1707817251436-0-604800000
host:
- api.openai.com
user-agent:
- OpenAI/Python 1.7.1
- OpenAI/Python 1.12.0
x-stainless-arch:
- arm64
x-stainless-async:
@@ -152,7 +238,7 @@ interactions:
x-stainless-os:
- MacOS
x-stainless-package-version:
- 1.7.1
- 1.12.0
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
@@ -160,19 +246,20 @@ interactions:
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
content: "{\n \"id\": \"chatcmpl-8frBYkyVPtJuAJCESaOxEBg3UAfl4\",\n \"object\":
\"chat.completion\",\n \"created\": 1704986584,\n \"model\": \"gpt-4-0613\",\n
\ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
\"assistant\",\n \"content\": \"Do I need to use a tool? Yes\\nAction:
get_final_answer\\nAction Input: [42]\"\n },\n \"logprobs\": null,\n
\ \"finish_reason\": \"stop\"\n }\n ],\n \"usage\": {\n \"prompt_tokens\":
266,\n \"completion_tokens\": 22,\n \"total_tokens\": 288\n },\n \"system_fingerprint\":
null\n}\n"
body:
string: !!binary |
H4sIAAAAAAAAA1yRwW7bMBBE7/oKYs92YCtxbOiW5pRL0YMLFIkKgaZXElNyVyBXaAJD/x5QUlSn
F4KY2XkaLS+ZUmDPUCgwrRbjO7c+hNfnn091fX7+1n7Xx0P7IA/HX7c/en5/FFilBJ9e0chn6saw
7xyKZZpsE1ALJup2v9kftvt8l4+G5zO6FGs6Wd+tN/fb2znRsjUYoVAvmVJKXcYzdaMzvkGhNqtP
xWOMukEoliGlILBLCugYbRRNU8/ZNEyClOpS79yVUfdkUuvKaOe+AJUC0n5EPlGU0BvhcGR2j9o5
S80VXinQoek9kqT+cClJqRKE2VWJUUKhSmhQqtqSdpWm+BdDCatpbsmmuTGbVOr9CcOo3eVJHEoa
YPnmMN+GZSuOmy7wKf73k1BbsrGtAurIlOpF4W4CJcjvcfv9l4VCF9h3Ugn/QUrAPN9NPPj30Ffu
/WwKi3ZX+m6bzQ0hvkdBnxbQYOiCXR4jG7IPAAAA//8DAE/jCSuDAgAA
headers:
CF-Cache-Status:
- DYNAMIC
CF-RAY:
- 843e28a57d911d23-GRU
- 854c1cc41a0c9450-SJC
Cache-Control:
- no-cache, must-revalidate
Connection:
@@ -182,7 +269,7 @@ interactions:
Content-Type:
- application/json
Date:
- Thu, 11 Jan 2024 15:23:07 GMT
- Tue, 13 Feb 2024 09:40:54 GMT
Server:
- cloudflare
Transfer-Encoding:
@@ -196,7 +283,7 @@ interactions:
openai-organization:
- user-z7g4wmlazxqvc5wjyaaaocfz
openai-processing-ms:
- '3329'
- '1853'
openai-version:
- '2020-10-01'
strict-transport-security:
@@ -208,34 +295,34 @@ interactions:
x-ratelimit-remaining-requests:
- '9999'
x-ratelimit-remaining-tokens:
- '299728'
- '299791'
x-ratelimit-reset-requests:
- 6ms
x-ratelimit-reset-tokens:
- 54ms
- 41ms
x-request-id:
- 1b9a1e09f863ff69cecfe4e7bed0aee5
http_version: HTTP/1.1
status_code: 200
- req_292983305fbcad842e0ad5c30594e96e
status:
code: 200
message: OK
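The `InstructorToolCalling` function schema in the request above is the shape instructor generates from a small Pydantic model. A plausible reconstruction under that assumption — the model below is inferred from the recorded payload, not copied from the source, and the field descriptions mirror the payload verbatim, spelling included:

```python
# Sketch of a Pydantic model that would produce the recorded
# InstructorToolCalling function schema (reconstructed, not project code).
import instructor
from openai import OpenAI
from pydantic import BaseModel, Field

class InstructorToolCalling(BaseModel):
    tool_name: str = Field(..., description="The name of the tool to be called.")
    # Description mirrors the recorded payload verbatim, typo included.
    arguments: dict = Field(..., description="A dictinary of arguments to be passed to the tool.")

client = instructor.patch(OpenAI())  # instructor adds the response_model kwarg

call = client.chat.completions.create(
    model="gpt-4",
    response_model=InstructorToolCalling,
    messages=[{"role": "user", "content": "Tool Name: get_final_answer\nTool Arguments: 42"}],
)
print(call.tool_name, call.arguments)
```

Parsing the free-form `Action:` / `Action Input:` text into a typed structure like this is what lets the tool-usage revamp validate arguments before calling the tool.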
- request:
body: '{"messages": [{"role": "user", "content": "You are test role.\ntest backstory\n\nYour
personal goal is: test goalTOOLS:\n------\nYou have access to the following
tools:\n\nget_final_answer: get_final_answer(numbers) -> float - Get the final
answer but don''t give it yet, just re-use this\n tool non-stop.\n\nTo
personal goal is: test goalTOOLS:\n------\nYou have access to only the following
tools:\n\n(\"get_final_answer: get_final_answer(numbers) -> float - Get the
final answer but don''t give it yet, just re-use this\\n tool non-stop.\",)\n\nTo
use a tool, please use the exact following format:\n\n```\nThought: Do I need
to use a tool? Yes\nAction: the action to take, should be one of [get_final_answer],
just the name.\nAction Input: the input to the action\nObservation: the result
of the action\n```\n\nWhen you have a response for your task, or if you do not
need to use a tool, you MUST use the format:\n\n```\nThought: Do I need to use
a tool? No\nFinal Answer: [your response here]This is the summary of your work
so far:\nBegin! This is VERY important to you, your job depends on it!\n\nCurrent
Task: The final answer is 42. But don''t give it yet, instead keep using the
`get_final_answer` tool.\nThought: Do I need to use a tool? Yes\nAction: get_final_answer\nAction
Input: [42]\nObservation: 42\nThought: Do I need to use a tool? Yes\nAction:
get_final_answer\nAction Input: [42]\nObservation: I''ve used too many tools
for this task.\nI''m going to give you my absolute BEST Final answer now and\nnot
use any more tools.\nThought: "}], "model": "gpt-4", "n": 1, "stop": ["\nObservation"],
"stream": false, "temperature": 0.7}'
to use a tool? Yes\nAction: the tool you wanna use, should be one of [get_final_answer],
just the name.\nAction Input: Any and all relevant information input and context
for using the tool\nObservation: the result of using the tool\n```\n\nWhen you
have a response for your task, or if you do not need to use a tool, you MUST
use the format:\n\n```\nThought: Do I need to use a tool? No\nFinal Answer:
[your response here]```This is the summary of your work so far:\nBegin! This
is VERY important to you, your job depends on it!\n\nCurrent Task: The final
answer is 42. But don''t give it yet, instead keep using the `get_final_answer`
tool. Until you''re told you could give my final answer if I''m ready.\nThought:
Do I need to use a tool? Yes\nAction: get_final_answer\nAction Input: 42\nObservation:
42\nThought: "}], "model": "gpt-4", "n": 1, "stop": ["\nObservation"], "stream":
true, "temperature": 0.7}'
headers:
accept:
- application/json
@@ -244,16 +331,16 @@ interactions:
connection:
- keep-alive
content-length:
- '1411'
- '1316'
content-type:
- application/json
cookie:
- __cf_bm=GdfwhILXB4b01GTJ_0AXbmROnfxzuPlJLxFJLh4vT8s-1704986583-1-AVb+x5LLEXeeVIiDv7ug/2lnD4qFsyXri+Vg04LYp0s2eK+KH8sGMWHpPzgzKOu9sf3rVi7Fl2OOuY7+OjbUYY8=;
_cfuvid=kdwpHybL9TBve9Df7KLsRqp49GrJ05.atUaH_t6plL0-1704986583862-0-604800000
- __cf_bm=Ut7.Mf8nVlzF1mvnebp8z87PJWGdMJgpvIC3DniTzm0-1707817251-1-ARgaU3vRuSzlOdxfjwpTMGQz9fGZJuu9H9QhwNXFnW8inmWAf9rgWn+vUdAdOAH+OMmxEkE7k8c1QVH5A4i/onE=;
_cfuvid=g.c9TwTQPsU2YZIqjeD2.nLBDZiyErteYcViVmTFshY-1707817251436-0-604800000
host:
- api.openai.com
user-agent:
- OpenAI/Python 1.7.1
- OpenAI/Python 1.12.0
x-stainless-arch:
- arm64
x-stainless-async:
@@ -263,7 +350,7 @@ interactions:
x-stainless-os:
- MacOS
x-stainless-package-version:
- 1.7.1
- 1.12.0
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
@@ -271,29 +358,103 @@ interactions:
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
content: "{\n \"id\": \"chatcmpl-8frBbQ3vq0kEry4X3a1RkMEkIAP99\",\n \"object\":
\"chat.completion\",\n \"created\": 1704986587,\n \"model\": \"gpt-4-0613\",\n
\ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
\"assistant\",\n \"content\": \"Do I need to use a tool? No\\nFinal Answer:
I have used the tool multiple times and the final answer remains 42.\"\n },\n
\ \"logprobs\": null,\n \"finish_reason\": \"stop\"\n }\n ],\n
\ \"usage\": {\n \"prompt_tokens\": 323,\n \"completion_tokens\": 28,\n
\ \"total_tokens\": 351\n },\n \"system_fingerprint\": null\n}\n"
body:
string: 'data: {"id":"chatcmpl-8rjZWerVMvR5SlayV9y4dKelx20ZI","object":"chat.completion.chunk","created":1707817254,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"role":"assistant","content":""},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rjZWerVMvR5SlayV9y4dKelx20ZI","object":"chat.completion.chunk","created":1707817254,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"Do"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rjZWerVMvR5SlayV9y4dKelx20ZI","object":"chat.completion.chunk","created":1707817254,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
I"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rjZWerVMvR5SlayV9y4dKelx20ZI","object":"chat.completion.chunk","created":1707817254,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
need"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rjZWerVMvR5SlayV9y4dKelx20ZI","object":"chat.completion.chunk","created":1707817254,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
to"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rjZWerVMvR5SlayV9y4dKelx20ZI","object":"chat.completion.chunk","created":1707817254,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
use"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rjZWerVMvR5SlayV9y4dKelx20ZI","object":"chat.completion.chunk","created":1707817254,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
a"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rjZWerVMvR5SlayV9y4dKelx20ZI","object":"chat.completion.chunk","created":1707817254,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
tool"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rjZWerVMvR5SlayV9y4dKelx20ZI","object":"chat.completion.chunk","created":1707817254,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"?"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rjZWerVMvR5SlayV9y4dKelx20ZI","object":"chat.completion.chunk","created":1707817254,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
Yes"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rjZWerVMvR5SlayV9y4dKelx20ZI","object":"chat.completion.chunk","created":1707817254,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"\n"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rjZWerVMvR5SlayV9y4dKelx20ZI","object":"chat.completion.chunk","created":1707817254,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"Action"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rjZWerVMvR5SlayV9y4dKelx20ZI","object":"chat.completion.chunk","created":1707817254,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":":"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rjZWerVMvR5SlayV9y4dKelx20ZI","object":"chat.completion.chunk","created":1707817254,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
get"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rjZWerVMvR5SlayV9y4dKelx20ZI","object":"chat.completion.chunk","created":1707817254,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"_final"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rjZWerVMvR5SlayV9y4dKelx20ZI","object":"chat.completion.chunk","created":1707817254,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"_answer"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rjZWerVMvR5SlayV9y4dKelx20ZI","object":"chat.completion.chunk","created":1707817254,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"\n"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rjZWerVMvR5SlayV9y4dKelx20ZI","object":"chat.completion.chunk","created":1707817254,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"Action"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rjZWerVMvR5SlayV9y4dKelx20ZI","object":"chat.completion.chunk","created":1707817254,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
Input"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rjZWerVMvR5SlayV9y4dKelx20ZI","object":"chat.completion.chunk","created":1707817254,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":":"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rjZWerVMvR5SlayV9y4dKelx20ZI","object":"chat.completion.chunk","created":1707817254,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rjZWerVMvR5SlayV9y4dKelx20ZI","object":"chat.completion.chunk","created":1707817254,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"42"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rjZWerVMvR5SlayV9y4dKelx20ZI","object":"chat.completion.chunk","created":1707817254,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{},"logprobs":null,"finish_reason":"stop"}]}
data: [DONE]
'
headers:
CF-Cache-Status:
- DYNAMIC
CF-RAY:
- 843e28bbbb071d23-GRU
- 854c1cd109129450-SJC
Cache-Control:
- no-cache, must-revalidate
Connection:
- keep-alive
Content-Encoding:
- gzip
Content-Type:
- application/json
- text/event-stream
Date:
- Thu, 11 Jan 2024 15:23:13 GMT
- Tue, 13 Feb 2024 09:40:55 GMT
Server:
- cloudflare
Transfer-Encoding:
@@ -307,7 +468,7 @@ interactions:
openai-organization:
- user-z7g4wmlazxqvc5wjyaaaocfz
openai-processing-ms:
- '5459'
- '608'
openai-version:
- '2020-10-01'
strict-transport-security:
@@ -319,15 +480,186 @@ interactions:
x-ratelimit-remaining-requests:
- '9999'
x-ratelimit-remaining-tokens:
- '299673'
- '299695'
x-ratelimit-reset-requests:
- 6ms
x-ratelimit-reset-tokens:
- 65ms
- 60ms
x-request-id:
- 0a5c1064b324c997b16bf17d426f9638
http_version: HTTP/1.1
status_code: 200
- req_08cc5866e52d80bb4fd2536b197961b0
status:
code: 200
message: OK
- request:
body: '{"messages": [{"role": "user", "content": "You are test role.\ntest backstory\n\nYour
personal goal is: test goalTOOLS:\n------\nYou have access to only the following
tools:\n\n(\"get_final_answer: get_final_answer(numbers) -> float - Get the
final answer but don''t give it yet, just re-use this\\n tool non-stop.\",)\n\nTo
use a tool, please use the exact following format:\n\n```\nThought: Do I need
to use a tool? Yes\nAction: the tool you wanna use, should be one of [get_final_answer],
just the name.\nAction Input: Any and all relevant information input and context
for using the tool\nObservation: the result of using the tool\n```\n\nWhen you
have a response for your task, or if you do not need to use a tool, you MUST
use the format:\n\n```\nThought: Do I need to use a tool? No\nFinal Answer:
[your response here]```This is the summary of your work so far:\nBegin! This
is VERY important to you, your job depends on it!\n\nCurrent Task: The final
answer is 42. But don''t give it yet, instead keep using the `get_final_answer`
tool. Until you''re told you could give my final answer if I''m ready.\nThought:
Do I need to use a tool? Yes\nAction: get_final_answer\nAction Input: 42\nObservation:
42\nThought: Do I need to use a tool? Yes\nAction: get_final_answer\nAction
Input: 42\nObservation: Actually, I used too many tools, so I''ll stop now and
give you my absolute BEST Final answer NOW, using exaclty the expected format
bellow: \n```\nThought: Do I need to use a tool? No\nFinal Answer: [your response
here]```\nThought: "}], "model": "gpt-4", "n": 1, "stop": ["\nObservation"],
"stream": true, "temperature": 0.7}'
headers:
accept:
- application/json
accept-encoding:
- gzip, deflate
connection:
- keep-alive
content-length:
- '1636'
content-type:
- application/json
cookie:
- __cf_bm=Ut7.Mf8nVlzF1mvnebp8z87PJWGdMJgpvIC3DniTzm0-1707817251-1-ARgaU3vRuSzlOdxfjwpTMGQz9fGZJuu9H9QhwNXFnW8inmWAf9rgWn+vUdAdOAH+OMmxEkE7k8c1QVH5A4i/onE=;
_cfuvid=g.c9TwTQPsU2YZIqjeD2.nLBDZiyErteYcViVmTFshY-1707817251436-0-604800000
host:
- api.openai.com
user-agent:
- OpenAI/Python 1.12.0
x-stainless-arch:
- arm64
x-stainless-async:
- 'false'
x-stainless-lang:
- python
x-stainless-os:
- MacOS
x-stainless-package-version:
- 1.12.0
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.11.7
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
body:
string: 'data: {"id":"chatcmpl-8rjZZza1RuPpn9y8UV5qoWLVLh8uq","object":"chat.completion.chunk","created":1707817257,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"role":"assistant","content":""},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rjZZza1RuPpn9y8UV5qoWLVLh8uq","object":"chat.completion.chunk","created":1707817257,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"Do"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rjZZza1RuPpn9y8UV5qoWLVLh8uq","object":"chat.completion.chunk","created":1707817257,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
I"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rjZZza1RuPpn9y8UV5qoWLVLh8uq","object":"chat.completion.chunk","created":1707817257,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
need"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rjZZza1RuPpn9y8UV5qoWLVLh8uq","object":"chat.completion.chunk","created":1707817257,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
to"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rjZZza1RuPpn9y8UV5qoWLVLh8uq","object":"chat.completion.chunk","created":1707817257,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
use"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rjZZza1RuPpn9y8UV5qoWLVLh8uq","object":"chat.completion.chunk","created":1707817257,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
a"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rjZZza1RuPpn9y8UV5qoWLVLh8uq","object":"chat.completion.chunk","created":1707817257,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
tool"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rjZZza1RuPpn9y8UV5qoWLVLh8uq","object":"chat.completion.chunk","created":1707817257,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"?"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rjZZza1RuPpn9y8UV5qoWLVLh8uq","object":"chat.completion.chunk","created":1707817257,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
No"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rjZZza1RuPpn9y8UV5qoWLVLh8uq","object":"chat.completion.chunk","created":1707817257,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"\n"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rjZZza1RuPpn9y8UV5qoWLVLh8uq","object":"chat.completion.chunk","created":1707817257,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"Final"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rjZZza1RuPpn9y8UV5qoWLVLh8uq","object":"chat.completion.chunk","created":1707817257,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
Answer"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rjZZza1RuPpn9y8UV5qoWLVLh8uq","object":"chat.completion.chunk","created":1707817257,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":":"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rjZZza1RuPpn9y8UV5qoWLVLh8uq","object":"chat.completion.chunk","created":1707817257,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rjZZza1RuPpn9y8UV5qoWLVLh8uq","object":"chat.completion.chunk","created":1707817257,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"42"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rjZZza1RuPpn9y8UV5qoWLVLh8uq","object":"chat.completion.chunk","created":1707817257,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{},"logprobs":null,"finish_reason":"stop"}]}
data: [DONE]
'
headers:
CF-Cache-Status:
- DYNAMIC
CF-RAY:
- 854c1ce179a09450-SJC
Cache-Control:
- no-cache, must-revalidate
Connection:
- keep-alive
Content-Type:
- text/event-stream
Date:
- Tue, 13 Feb 2024 09:40:57 GMT
Server:
- cloudflare
Transfer-Encoding:
- chunked
access-control-allow-origin:
- '*'
alt-svc:
- h3=":443"; ma=86400
openai-model:
- gpt-4-0613
openai-organization:
- user-z7g4wmlazxqvc5wjyaaaocfz
openai-processing-ms:
- '414'
openai-version:
- '2020-10-01'
strict-transport-security:
- max-age=15724800; includeSubDomains
x-ratelimit-limit-requests:
- '10000'
x-ratelimit-limit-tokens:
- '300000'
x-ratelimit-remaining-requests:
- '9999'
x-ratelimit-remaining-tokens:
- '299618'
x-ratelimit-reset-requests:
- 6ms
x-ratelimit-reset-tokens:
- 76ms
x-request-id:
- req_6a0ff9932a85ec7af4935c3301991720
status:
code: 200
message: OK
- request:
body: '{"messages": [{"role": "user", "content": "Progressively summarize the
lines of conversation provided, adding onto the previous summary returning a
@@ -340,8 +672,8 @@ interactions:
is a force for good because it will help humans reach their full potential.\nEND
OF EXAMPLE\n\nCurrent summary:\n\n\nNew lines of conversation:\nHuman: The final
answer is 42. But don''t give it yet, instead keep using the `get_final_answer`
tool.\nAI: I have used the tool multiple times and the final answer remains
42.\n\nNew summary:"}], "model": "gpt-4", "n": 1, "stream": false, "temperature":
tool. Until you''re told you could give my final answer if I''m ready.\nAI:
42\n\nNew summary:"}], "model": "gpt-4", "n": 1, "stream": false, "temperature":
0.7}'
headers:
accept:
@@ -351,16 +683,16 @@ interactions:
connection:
- keep-alive
content-length:
- '1014'
- '1011'
content-type:
- application/json
cookie:
- __cf_bm=GdfwhILXB4b01GTJ_0AXbmROnfxzuPlJLxFJLh4vT8s-1704986583-1-AVb+x5LLEXeeVIiDv7ug/2lnD4qFsyXri+Vg04LYp0s2eK+KH8sGMWHpPzgzKOu9sf3rVi7Fl2OOuY7+OjbUYY8=;
_cfuvid=kdwpHybL9TBve9Df7KLsRqp49GrJ05.atUaH_t6plL0-1704986583862-0-604800000
- __cf_bm=Ut7.Mf8nVlzF1mvnebp8z87PJWGdMJgpvIC3DniTzm0-1707817251-1-ARgaU3vRuSzlOdxfjwpTMGQz9fGZJuu9H9QhwNXFnW8inmWAf9rgWn+vUdAdOAH+OMmxEkE7k8c1QVH5A4i/onE=;
_cfuvid=g.c9TwTQPsU2YZIqjeD2.nLBDZiyErteYcViVmTFshY-1707817251436-0-604800000
host:
- api.openai.com
user-agent:
- OpenAI/Python 1.7.1
- OpenAI/Python 1.12.0
x-stainless-arch:
- arm64
x-stainless-async:
@@ -370,7 +702,7 @@ interactions:
x-stainless-os:
- MacOS
x-stainless-package-version:
- 1.7.1
- 1.12.0
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
@@ -378,20 +710,21 @@ interactions:
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
content: "{\n \"id\": \"chatcmpl-8frBhKxiCRICQ8o6aJanmn8PTMsAr\",\n \"object\":
\"chat.completion\",\n \"created\": 1704986593,\n \"model\": \"gpt-4-0613\",\n
\ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
\"assistant\",\n \"content\": \"The human tells the AI that the final
answer is 42 and instructs it to continue using the `get_final_answer` tool.
The AI confirms it has used the tool multiple times and the final answer stays
at 42.\"\n },\n \"logprobs\": null,\n \"finish_reason\": \"stop\"\n
\ }\n ],\n \"usage\": {\n \"prompt_tokens\": 178,\n \"completion_tokens\":
46,\n \"total_tokens\": 224\n },\n \"system_fingerprint\": null\n}\n"
body:
string: !!binary |
H4sIAAAAAAAAA1SRT2sbMRTE7/spHjqvjdexs65vhUJJCD2VUlKKo9U+r5RKeor0VNcN/u5Fu/5D
LzrM6DeMRu8VgDC92IJQWrJywc428fW5w8/ui2ye8fjYfHt7WPRd+73t1k9/RV0I6l5R8YWaK3LB
Ihvyk60iSsaS2rSLdtO0y/WH0XDUoy3YEHi2mi3um7szockoTGILPyoAgPfxLN18j3/EFhb1RXGY
khxQbK+XAEQkWxQhUzKJpWdR30xFntGPdb9qBJ2d9GB84pgVJ2CN8PEBmOBgWGuy/SjtjZcWpE8H
jDUctFEaTILVsgbp+3K95BqfEXIyfhihlwF5N5K7iXwBJrKQPRsL+xxZYwRPbBTO4ROmYBiBtUn1
pUeI6CTniPYIEX+jtFPFKQ9kqTAX59edrrNYGkKkrkzos7VXfW+8SXoXUSbyZYLEFCb8VAH8HOfP
/y0qQiQXeMf0C30JbDbNlCduP31zV+uzycTS3vTl8r46NxTpmBhdWWbAGKIZf6P0rE7VPwAAAP//
AwCGLNDYhAIAAA==
headers:
CF-Cache-Status:
- DYNAMIC
CF-RAY:
- 843e28df4ae81d23-GRU
- 854c1cec2f2a9450-SJC
Cache-Control:
- no-cache, must-revalidate
Connection:
@@ -401,7 +734,7 @@ interactions:
Content-Type:
- application/json
Date:
- Thu, 11 Jan 2024 15:23:18 GMT
- Tue, 13 Feb 2024 09:41:02 GMT
Server:
- cloudflare
Transfer-Encoding:
@@ -415,7 +748,7 @@ interactions:
openai-organization:
- user-z7g4wmlazxqvc5wjyaaaocfz
openai-processing-ms:
- '5518'
- '3413'
openai-version:
- '2020-10-01'
strict-transport-security:
@@ -427,13 +760,14 @@ interactions:
x-ratelimit-remaining-requests:
- '9999'
x-ratelimit-remaining-tokens:
- '299761'
- '299762'
x-ratelimit-reset-requests:
- 6ms
x-ratelimit-reset-tokens:
- 47ms
x-request-id:
- 4100fde9c68d27d808de645637b3e7cc
http_version: HTTP/1.1
status_code: 200
- req_72872031509da31e465c24b9b2cb8d1e
status:
code: 200
message: OK
version: 1
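The "Progressively summarize the lines of conversation" requests that close out these cassettes match LangChain's conversation-summary memory prompt. A minimal sketch of driving that component directly, assuming the LangChain memory API of this era:

```python
# Sketch: the summarization round-trips recorded above are the kind of
# request LangChain's ConversationSummaryMemory issues under the hood.
from langchain.memory import ConversationSummaryMemory
from langchain_openai import ChatOpenAI

memory = ConversationSummaryMemory(llm=ChatOpenAI(model="gpt-4", temperature=0.7))

# Each save_context call triggers one "Progressively summarize..." request
# like those in the cassette.
memory.save_context(
    {"input": "What is 3 times 4?"},
    {"output": "3 times 4 equals 12."},
)
print(memory.load_memory_variables({})["history"])
```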

File diff suppressed because it is too large

File diff suppressed because it is too large

File diff suppressed because it is too large

View File

@@ -2,25 +2,28 @@ interactions:
- request:
body: '{"messages": [{"role": "user", "content": "You are test role.\ntest backstory\n\nYour
personal goal is: test goalTOOLS:\n------\nYou have access to only the following
tools:\n\nDelegate work to co-worker: Useful to delegate a specific task to
one of the following co-workers: test role, test role2.\nThe input to this tool
should be a pipe (|) separated text of length 3 (three), representing the co-worker
you want to ask it to (one of the options), the task and all actual context
you have for the task.\nFor example, `coworker|task|context`.\nAsk question
to co-worker: Useful to ask a question, opinion or take from on of the following
co-workers: test role, test role2.\nThe input to this tool should be a pipe
(|) separated text of length 3 (three), representing the co-worker you want
to ask it to (one of the options), the question and all actual context you have
for the question.\n For example, `coworker|question|context`.\n\nTo use a tool,
please use the exact following format:\n\n```\nThought: Do I need to use a tool?
Yes\nAction: the action to take, should be one of [Delegate work to co-worker,
Ask question to co-worker], just the name.\nAction Input: the input to the action\nObservation:
the result of the action\n```\n\nWhen you have a response for your task, or
if you do not need to use a tool, you MUST use the format:\n\n```\nThought:
Do I need to use a tool? No\nFinal Answer: [your response here]This is the summary
of your work so far:\nBegin! This is VERY important to you, your job depends
on it!\n\nCurrent Task: Just say hi.\n"}], "model": "gpt-4", "n": 1, "stop":
["\nObservation"], "stream": false, "temperature": 0.7}'
tools:\n\n(\"Delegate work to co-worker: Delegate work to co-worker(coworker:
str, task: str, context: str) - Delegate a specific task to one of the following
co-workers:\\n- test role2.\\nThe input to this tool should be the role of the
coworker, the task you want them to do, and ALL necessary context to exectue
the task, they know nothing about the task, so share absolute everything you
know, don''t reference things but instead explain them.\\nAsk question to co-worker:
Ask question to co-worker(coworker: str, question: str, context: str) - Ask
a specific question to one of the following co-workers:\\n- test role2.\\nThe
input to this tool should be the role of the coworker, the question you have
for them, and ALL necessary context to ask the question properly, they know
nothing about the question, so share absolute everything you know, don''t reference
things but instead explain them.\",)\n\nTo use a tool, please use the exact
following format:\n\n```\nThought: Do I need to use a tool? Yes\nAction: the
tool you wanna use, should be one of [Delegate work to co-worker, Ask question
to co-worker], just the name.\nAction Input: Any and all relevant information
input and context for using the tool\nObservation: the result of using the tool\n```\n\nWhen
you have a response for your task, or if you do not need to use a tool, you
MUST use the format:\n\n```\nThought: Do I need to use a tool? No\nFinal Answer:
[your response here]```This is the summary of your work so far:\nBegin! This
is VERY important to you, your job depends on it!\n\nCurrent Task: Just say
hi.\n"}], "model": "gpt-4", "n": 1, "stop": ["\nObservation"], "stream": true,
"temperature": 0.7}'
headers:
accept:
- application/json
@@ -29,13 +32,13 @@ interactions:
connection:
- keep-alive
content-length:
- '1652'
- '1844'
content-type:
- application/json
host:
- api.openai.com
user-agent:
- OpenAI/Python 1.7.1
- OpenAI/Python 1.12.0
x-stainless-arch:
- arm64
x-stainless-async:
@@ -45,7 +48,7 @@ interactions:
x-stainless-os:
- MacOS
x-stainless-package-version:
- 1.7.1
- 1.12.0
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
@@ -53,35 +56,95 @@ interactions:
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
content: "{\n \"id\": \"chatcmpl-8nud4nZWYNKdcbiYNiE8wrCKGHeEI\",\n \"object\":
\"chat.completion\",\n \"created\": 1706906446,\n \"model\": \"gpt-4-0613\",\n
\ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
\"assistant\",\n \"content\": \"Thought: Do I need to use a tool? No\\nFinal
Answer: Hi\"\n },\n \"logprobs\": null,\n \"finish_reason\":
\"stop\"\n }\n ],\n \"usage\": {\n \"prompt_tokens\": 367,\n \"completion_tokens\":
16,\n \"total_tokens\": 383\n },\n \"system_fingerprint\": null\n}\n"
body:
string: 'data: {"id":"chatcmpl-8rhtwMbJejyfV0fVvR8S0CTdWLDQg","object":"chat.completion.chunk","created":1707810832,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"role":"assistant","content":""},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rhtwMbJejyfV0fVvR8S0CTdWLDQg","object":"chat.completion.chunk","created":1707810832,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"Thought"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rhtwMbJejyfV0fVvR8S0CTdWLDQg","object":"chat.completion.chunk","created":1707810832,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":":"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rhtwMbJejyfV0fVvR8S0CTdWLDQg","object":"chat.completion.chunk","created":1707810832,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
Do"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rhtwMbJejyfV0fVvR8S0CTdWLDQg","object":"chat.completion.chunk","created":1707810832,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
I"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rhtwMbJejyfV0fVvR8S0CTdWLDQg","object":"chat.completion.chunk","created":1707810832,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
need"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rhtwMbJejyfV0fVvR8S0CTdWLDQg","object":"chat.completion.chunk","created":1707810832,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
to"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rhtwMbJejyfV0fVvR8S0CTdWLDQg","object":"chat.completion.chunk","created":1707810832,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
use"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rhtwMbJejyfV0fVvR8S0CTdWLDQg","object":"chat.completion.chunk","created":1707810832,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
a"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rhtwMbJejyfV0fVvR8S0CTdWLDQg","object":"chat.completion.chunk","created":1707810832,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
tool"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rhtwMbJejyfV0fVvR8S0CTdWLDQg","object":"chat.completion.chunk","created":1707810832,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"?"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rhtwMbJejyfV0fVvR8S0CTdWLDQg","object":"chat.completion.chunk","created":1707810832,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
No"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rhtwMbJejyfV0fVvR8S0CTdWLDQg","object":"chat.completion.chunk","created":1707810832,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"\n"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rhtwMbJejyfV0fVvR8S0CTdWLDQg","object":"chat.completion.chunk","created":1707810832,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"Final"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rhtwMbJejyfV0fVvR8S0CTdWLDQg","object":"chat.completion.chunk","created":1707810832,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
Answer"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rhtwMbJejyfV0fVvR8S0CTdWLDQg","object":"chat.completion.chunk","created":1707810832,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":":"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rhtwMbJejyfV0fVvR8S0CTdWLDQg","object":"chat.completion.chunk","created":1707810832,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
Hi"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rhtwMbJejyfV0fVvR8S0CTdWLDQg","object":"chat.completion.chunk","created":1707810832,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{},"logprobs":null,"finish_reason":"stop"}]}
data: [DONE]
'
headers:
CF-Cache-Status:
- DYNAMIC
CF-RAY:
- 84f5404b4f3717e2-SJC
- 854b800728e79435-SJC
Cache-Control:
- no-cache, must-revalidate
Connection:
- keep-alive
Content-Encoding:
- gzip
Content-Type:
- application/json
- text/event-stream
Date:
- Fri, 02 Feb 2024 20:40:48 GMT
- Tue, 13 Feb 2024 07:53:53 GMT
Server:
- cloudflare
Set-Cookie:
- __cf_bm=UA.onlCu1vI0C5OQS7NuRtwP7DNFz7W09xd9jMhRLvw-1706906448-1-AcFhcpTYAB7ObVTPwWzhxf9l1QMH4b8dLAMZyER/lYCj5KAK4BUiLNPaa/0+BQHyw96fYT2TF8+5GA/JW9icYv4=;
path=/; expires=Fri, 02-Feb-24 21:10:48 GMT; domain=.api.openai.com; HttpOnly;
- __cf_bm=sgnoG7DonxhXrDZ.yDNk.8p_lkNmtU14pjXmNtEn4WM-1707810833-1-AcPZTTxeQOUh53COIvJwnre0/vx4WMebWfMTr1HdojybgWFjduczGAGVaJ8fNqqnVBttaKfvGO6rxMM8OyJ//PI=;
path=/; expires=Tue, 13-Feb-24 08:23:53 GMT; domain=.api.openai.com; HttpOnly;
Secure; SameSite=None
- _cfuvid=9zIEbOcGQmxDxJTE_knwEUEBSUQGkilD4oijzyMAx.k-1706906448692-0-604800000;
- _cfuvid=O0OCRCFP3Vv.mCIxT3n1FLplOWHgTBZM2hPCsKpI7o4-1707810833030-0-604800000;
path=/; domain=.api.openai.com; HttpOnly; Secure; SameSite=None
Transfer-Encoding:
- chunked
@@ -94,7 +157,7 @@ interactions:
openai-organization:
- user-z7g4wmlazxqvc5wjyaaaocfz
openai-processing-ms:
- '1998'
- '283'
openai-version:
- '2020-10-01'
strict-transport-security:
@@ -106,15 +169,16 @@ interactions:
x-ratelimit-remaining-requests:
- '9999'
x-ratelimit-remaining-tokens:
- '299611'
- '299564'
x-ratelimit-reset-requests:
- 6ms
x-ratelimit-reset-tokens:
- 77ms
- 87ms
x-request-id:
- 73e5c2a4627b19b94155cc72cfdac3e3
http_version: HTTP/1.1
status_code: 200
- req_431a870c2645f88c9968d3ab09e8b24a
status:
code: 200
message: OK
- request:
body: '{"messages": [{"role": "user", "content": "Progressively summarize the
lines of conversation provided, adding onto the previous summary returning a
@@ -140,12 +204,12 @@ interactions:
content-type:
- application/json
cookie:
- __cf_bm=UA.onlCu1vI0C5OQS7NuRtwP7DNFz7W09xd9jMhRLvw-1706906448-1-AcFhcpTYAB7ObVTPwWzhxf9l1QMH4b8dLAMZyER/lYCj5KAK4BUiLNPaa/0+BQHyw96fYT2TF8+5GA/JW9icYv4=;
_cfuvid=9zIEbOcGQmxDxJTE_knwEUEBSUQGkilD4oijzyMAx.k-1706906448692-0-604800000
- __cf_bm=sgnoG7DonxhXrDZ.yDNk.8p_lkNmtU14pjXmNtEn4WM-1707810833-1-AcPZTTxeQOUh53COIvJwnre0/vx4WMebWfMTr1HdojybgWFjduczGAGVaJ8fNqqnVBttaKfvGO6rxMM8OyJ//PI=;
_cfuvid=O0OCRCFP3Vv.mCIxT3n1FLplOWHgTBZM2hPCsKpI7o4-1707810833030-0-604800000
host:
- api.openai.com
user-agent:
- OpenAI/Python 1.7.1
- OpenAI/Python 1.12.0
x-stainless-arch:
- arm64
x-stainless-async:
@@ -155,7 +219,7 @@ interactions:
x-stainless-os:
- MacOS
x-stainless-package-version:
- 1.7.1
- 1.12.0
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
@@ -163,19 +227,19 @@ interactions:
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
content: "{\n \"id\": \"chatcmpl-8nud6kyzz3SH7yRITTDPjnLD2QRiq\",\n \"object\":
\"chat.completion\",\n \"created\": 1706906448,\n \"model\": \"gpt-4-0613\",\n
\ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
\"assistant\",\n \"content\": \"The human instructs the AI to greet,
to which the AI responds with \\\"Hi\\\".\"\n },\n \"logprobs\": null,\n
\ \"finish_reason\": \"stop\"\n }\n ],\n \"usage\": {\n \"prompt_tokens\":
144,\n \"completion_tokens\": 18,\n \"total_tokens\": 162\n },\n \"system_fingerprint\":
null\n}\n"
body:
string: !!binary |
H4sIAAAAAAAAA1SQzW7CMBCE73mKlc8BJSVAmhuHquVY2ltbIeMssYtjG3sRIMS7Vw7hpxcfZjyj
b/aUADBVswqYkJxE6/Sg9JIO2ct4tZlPFrPFNs8+5nI83T6/Fvt3lsaEXf2ioGtqKGzrNJKy5mIL
j5wwtubTbFrmWTkadUZra9Qx1jgaFINsko/6hLRKYGAVfCUAAKfujWymxgOrIEuvSosh8AZZdfsE
wLzVUWE8BBWIG2Lp3RTWEJoO91MiyF3LDSgTyO8EBSCJMJsDWQj8CFKlwE19VT0GZ00dYK9Iwjd7
U99syPru8w1K28Z5u4oDzE7rm75WRgW59MiDNREgkHWX+DkB+OnG7/7tYc7b1tGS7AZNLMyL4tLH
7nd+cMveJEtcP+iTp6QnZOEYCNvlWpkGvfOqu0XkTM7JHwAAAP//AwC2M9aRAgIAAA==
headers:
CF-Cache-Status:
- DYNAMIC
CF-RAY:
- 84f540588b2a17e2-SJC
- 854b800f8eea9435-SJC
Cache-Control:
- no-cache, must-revalidate
Connection:
@@ -185,7 +249,7 @@ interactions:
Content-Type:
- application/json
Date:
- Fri, 02 Feb 2024 20:40:50 GMT
- Tue, 13 Feb 2024 07:53:55 GMT
Server:
- cloudflare
Transfer-Encoding:
@@ -199,7 +263,7 @@ interactions:
openai-organization:
- user-z7g4wmlazxqvc5wjyaaaocfz
openai-processing-ms:
- '1862'
- '1728'
openai-version:
- '2020-10-01'
strict-transport-security:
@@ -217,33 +281,95 @@ interactions:
x-ratelimit-reset-tokens:
- 40ms
x-request-id:
- 47d6b34a8a639fd41e8651612eadb197
http_version: HTTP/1.1
status_code: 200
- req_975c570a1796809cca6e1805644969ce
status:
code: 200
message: OK
- request:
body: !!binary |
Cs8MCiQKIgoMc2VydmljZS5uYW1lEhIKEGNyZXdBSS10ZWxlbWV0cnkSpgwKEgoQY3Jld2FpLnRl
bGVtZXRyeRKPDAoQSs7HqlnillCQrqD8OUYMShIIBbbS9OnBy6cqDENyZXcgQ3JlYXRlZDABOYh7
MFPnXLMXQfiGMlPnXLMXShoKDmNyZXdhaV92ZXJzaW9uEggKBjAuMTAuMkoaCg5weXRob25fdmVy
c2lvbhIICgYzLjExLjdKMQoHY3Jld19pZBImCiRhMDZmOGU5OC1kODc4LTRkMzktYjM1Ny05ODMz
YzMwYTJhMGZKHAoMY3Jld19wcm9jZXNzEgwKCnNlcXVlbnRpYWxKFQoNY3Jld19sYW5ndWFnZRIE
CgJlbkoaChRjcmV3X251bWJlcl9vZl90YXNrcxICGANKGwoVY3Jld19udW1iZXJfb2ZfYWdlbnRz
EgIYAkqCBQoLY3Jld19hZ2VudHMS8gQK7wRbeyJpZCI6ICJhMDZlM2ZjZS1lY2M5LTQyOTQtYTQz
NC00MGExZDU2MmE4NjAiLCAicm9sZSI6ICJ0ZXN0IHJvbGUiLCAibWVtb3J5X2VuYWJsZWQ/Ijog
dHJ1ZSwgInZlcmJvc2U/IjogZmFsc2UsICJtYXhfaXRlciI6IDE1LCAibWF4X3JwbSI6IG51bGws
ICJpMThuIjogImVuIiwgImxsbSI6ICJ7XCJuYW1lXCI6IG51bGwsIFwibW9kZWxfbmFtZVwiOiBc
ImdwdC00XCIsIFwidGVtcGVyYXR1cmVcIjogMC43LCBcImNsYXNzXCI6IFwiQ2hhdE9wZW5BSVwi
fSIsICJkZWxlZ2F0aW9uX2VuYWJsZWQ/IjogdHJ1ZSwgInRvb2xzX25hbWVzIjogW119LCB7Imlk
IjogImZiMjRjOTMzLTdiOTctNDA5NC05ZjJkLWFlNTdjNDVhYmY1ZiIsICJyb2xlIjogInRlc3Qg
cm9sZTIiLCAibWVtb3J5X2VuYWJsZWQ/IjogdHJ1ZSwgInZlcmJvc2U/IjogZmFsc2UsICJtYXhf
aXRlciI6IDE1LCAibWF4X3JwbSI6IG51bGwsICJpMThuIjogImVuIiwgImxsbSI6ICJ7XCJuYW1l
XCI6IG51bGwsIFwibW9kZWxfbmFtZVwiOiBcImdwdC00XCIsIFwidGVtcGVyYXR1cmVcIjogMC43
LCBcImNsYXNzXCI6IFwiQ2hhdE9wZW5BSVwifSIsICJkZWxlZ2F0aW9uX2VuYWJsZWQ/IjogdHJ1
ZSwgInRvb2xzX25hbWVzIjogW119XUr+AgoKY3Jld190YXNrcxLvAgrsAlt7ImlkIjogImI3N2Vh
YmU2LWQ4NDEtNDRmZC1hOWI4LWIwNDRiNDI0N2QxNiIsICJhc3luY19leGVjdXRpb24/IjogZmFs
c2UsICJhZ2VudF9yb2xlIjogInRlc3Qgcm9sZSIsICJ0b29sc19uYW1lcyI6IFtdfSwgeyJpZCI6
ICJjZTJkNjg5NS1kZjc5LTRjNzUtOGM3Ni03OGE0YTgzNDgwNTciLCAiYXN5bmNfZXhlY3V0aW9u
PyI6IGZhbHNlLCAiYWdlbnRfcm9sZSI6ICJ0ZXN0IHJvbGUiLCAidG9vbHNfbmFtZXMiOiBbXX0s
IHsiaWQiOiAiZTZhNzFmMTktM2NlOC00NGVmLTg5ZTAtMGQ0MGUzMzE5MGI3IiwgImFzeW5jX2V4
ZWN1dGlvbj8iOiBmYWxzZSwgImFnZW50X3JvbGUiOiAidGVzdCByb2xlMiIsICJ0b29sc19uYW1l
cyI6IFtdfV1KKAoIcGxhdGZvcm0SHAoabWFjT1MtMTQuMy1hcm02NC1hcm0tNjRiaXRKHAoQcGxh
dGZvcm1fcmVsZWFzZRIICgYyMy4zLjBKGwoPcGxhdGZvcm1fc3lzdGVtEggKBkRhcndpbkp7ChBw
bGF0Zm9ybV92ZXJzaW9uEmcKZURhcndpbiBLZXJuZWwgVmVyc2lvbiAyMy4zLjA6IFdlZCBEZWMg
MjAgMjE6MzA6NTkgUFNUIDIwMjM7IHJvb3Q6eG51LTEwMDAyLjgxLjV+Ny9SRUxFQVNFX0FSTTY0
X1Q2MDMwSgoKBGNwdXMSAhgMegIYAQ==
headers:
Accept:
- '*/*'
Accept-Encoding:
- gzip, deflate
Connection:
- keep-alive
Content-Length:
- '1618'
Content-Type:
- application/x-protobuf
User-Agent:
- OTel-OTLP-Exporter-Python/1.22.0
method: POST
uri: http://telemetry.crewai.com:4318/v1/traces
response:
body:
string: "\n\0"
headers:
Content-Length:
- '2'
Content-Type:
- application/x-protobuf
Date:
- Tue, 13 Feb 2024 07:53:56 GMT
status:
code: 200
message: OK
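The protobuf POST above is the new telemetry export showing up inside a cassette. The attributes readable in the base64 body (`service.name: crewAI-telemetry`, a `Crew Created` span, `crewai_version: 0.10.2`, the `crewai.telemetry` scope) correspond to a standard OTLP/HTTP pipeline. A sketch of the equivalent exporter setup — endpoint and attribute names are taken from the recorded request, the wiring is reconstructed, not the project's actual code:

```python
# Sketch of an OTLP/HTTP trace export matching the recorded telemetry request.
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter

provider = TracerProvider(resource=Resource.create({"service.name": "crewAI-telemetry"}))
provider.add_span_processor(
    BatchSpanProcessor(OTLPSpanExporter(endpoint="http://telemetry.crewai.com:4318/v1/traces"))
)

tracer = provider.get_tracer("crewai.telemetry")
with tracer.start_as_current_span("Crew Created") as span:
    span.set_attribute("crewai_version", "0.10.2")
    span.set_attribute("python_version", "3.11.7")

provider.shutdown()  # flush pending spans to the collector
```

Note the payload carries crew structure and platform details but, per the "removing hostname from default telemetry" commit, no hostname.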
- request:
body: '{"messages": [{"role": "user", "content": "You are test role.\ntest backstory\n\nYour
personal goal is: test goalTOOLS:\n------\nYou have access to only the following
tools:\n\nDelegate work to co-worker: Useful to delegate a specific task to
one of the following co-workers: test role, test role2.\nThe input to this tool
should be a pipe (|) separated text of length 3 (three), representing the co-worker
you want to ask it to (one of the options), the task and all actual context
you have for the task.\nFor example, `coworker|task|context`.\nAsk question
to co-worker: Useful to ask a question, opinion or take from on of the following
co-workers: test role, test role2.\nThe input to this tool should be a pipe
(|) separated text of length 3 (three), representing the co-worker you want
to ask it to (one of the options), the question and all actual context you have
for the question.\n For example, `coworker|question|context`.\n\nTo use a tool,
please use the exact following format:\n\n```\nThought: Do I need to use a tool?
Yes\nAction: the action to take, should be one of [Delegate work to co-worker,
Ask question to co-worker], just the name.\nAction Input: the input to the action\nObservation:
the result of the action\n```\n\nWhen you have a response for your task, or
if you do not need to use a tool, you MUST use the format:\n\n```\nThought:
Do I need to use a tool? No\nFinal Answer: [your response here]This is the summary
of your work so far:\nThe human instructs the AI to greet, to which the AI responds
with \"Hi\".Begin! This is VERY important to you, your job depends on it!\n\nCurrent
Task: Just say bye.\nThis is the context you''re working with:\nHi\n"}], "model":
"gpt-4", "n": 1, "stop": ["\nObservation"], "stream": false, "temperature":
0.7}'
tools:\n\n(\"Delegate work to co-worker: Delegate work to co-worker(coworker:
str, task: str, context: str) - Delegate a specific task to one of the following
co-workers:\\n- test role2.\\nThe input to this tool should be the role of the
coworker, the task you want them to do, and ALL necessary context to exectue
the task, they know nothing about the task, so share absolute everything you
know, don''t reference things but instead explain them.\\nAsk question to co-worker:
Ask question to co-worker(coworker: str, question: str, context: str) - Ask
a specific question to one of the following co-workers:\\n- test role2.\\nThe
input to this tool should be the role of the coworker, the question you have
for them, and ALL necessary context to ask the question properly, they know
nothing about the question, so share absolute everything you know, don''t reference
things but instead explain them.\",)\n\nTo use a tool, please use the exact
following format:\n\n```\nThought: Do I need to use a tool? Yes\nAction: the
tool you wanna use, should be one of [Delegate work to co-worker, Ask question
to co-worker], just the name.\nAction Input: Any and all relevant information
input and context for using the tool\nObservation: the result of using the tool\n```\n\nWhen
you have a response for your task, or if you do not need to use a tool, you
MUST use the format:\n\n```\nThought: Do I need to use a tool? No\nFinal Answer:
[your response here]```This is the summary of your work so far:\nThe human instructs
the AI to say hi, and the AI responds with \"Hi\".Begin! This is VERY important
to you, your job depends on it!\n\nCurrent Task: Just say bye.\nThis is the
context you''re working with:\nHi\n"}], "model": "gpt-4", "n": 1, "stop": ["\nObservation"],
"stream": true, "temperature": 0.7}'
headers:
accept:
- application/json
@@ -252,16 +378,16 @@ interactions:
connection:
- keep-alive
content-length:
- '1773'
- '1961'
content-type:
- application/json
cookie:
- __cf_bm=UA.onlCu1vI0C5OQS7NuRtwP7DNFz7W09xd9jMhRLvw-1706906448-1-AcFhcpTYAB7ObVTPwWzhxf9l1QMH4b8dLAMZyER/lYCj5KAK4BUiLNPaa/0+BQHyw96fYT2TF8+5GA/JW9icYv4=;
_cfuvid=9zIEbOcGQmxDxJTE_knwEUEBSUQGkilD4oijzyMAx.k-1706906448692-0-604800000
- __cf_bm=sgnoG7DonxhXrDZ.yDNk.8p_lkNmtU14pjXmNtEn4WM-1707810833-1-AcPZTTxeQOUh53COIvJwnre0/vx4WMebWfMTr1HdojybgWFjduczGAGVaJ8fNqqnVBttaKfvGO6rxMM8OyJ//PI=;
_cfuvid=O0OCRCFP3Vv.mCIxT3n1FLplOWHgTBZM2hPCsKpI7o4-1707810833030-0-604800000
host:
- api.openai.com
user-agent:
- OpenAI/Python 1.7.1
- OpenAI/Python 1.12.0
x-stainless-arch:
- arm64
x-stainless-async:
@@ -271,7 +397,7 @@ interactions:
x-stainless-os:
- MacOS
x-stainless-package-version:
- 1.7.1
- 1.12.0
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
@@ -279,28 +405,88 @@ interactions:
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
content: "{\n \"id\": \"chatcmpl-8nud8gkRZcMmbyj7Mdd9xOBHovh1T\",\n \"object\":
\"chat.completion\",\n \"created\": 1706906450,\n \"model\": \"gpt-4-0613\",\n
\ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
\"assistant\",\n \"content\": \"Thought: Do I need to use a tool? No\\nFinal
Answer: Bye\"\n },\n \"logprobs\": null,\n \"finish_reason\":
\"stop\"\n }\n ],\n \"usage\": {\n \"prompt_tokens\": 396,\n \"completion_tokens\":
16,\n \"total_tokens\": 412\n },\n \"system_fingerprint\": null\n}\n"
body:
string: 'data: {"id":"chatcmpl-8rhtzP9ohwFaqDaCJlznCgdPkXlco","object":"chat.completion.chunk","created":1707810835,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"role":"assistant","content":""},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rhtzP9ohwFaqDaCJlznCgdPkXlco","object":"chat.completion.chunk","created":1707810835,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"Thought"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rhtzP9ohwFaqDaCJlznCgdPkXlco","object":"chat.completion.chunk","created":1707810835,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":":"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rhtzP9ohwFaqDaCJlznCgdPkXlco","object":"chat.completion.chunk","created":1707810835,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
Do"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rhtzP9ohwFaqDaCJlznCgdPkXlco","object":"chat.completion.chunk","created":1707810835,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
I"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rhtzP9ohwFaqDaCJlznCgdPkXlco","object":"chat.completion.chunk","created":1707810835,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
need"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rhtzP9ohwFaqDaCJlznCgdPkXlco","object":"chat.completion.chunk","created":1707810835,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
to"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rhtzP9ohwFaqDaCJlznCgdPkXlco","object":"chat.completion.chunk","created":1707810835,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
use"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rhtzP9ohwFaqDaCJlznCgdPkXlco","object":"chat.completion.chunk","created":1707810835,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
a"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rhtzP9ohwFaqDaCJlznCgdPkXlco","object":"chat.completion.chunk","created":1707810835,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
tool"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rhtzP9ohwFaqDaCJlznCgdPkXlco","object":"chat.completion.chunk","created":1707810835,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"?"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rhtzP9ohwFaqDaCJlznCgdPkXlco","object":"chat.completion.chunk","created":1707810835,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
No"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rhtzP9ohwFaqDaCJlznCgdPkXlco","object":"chat.completion.chunk","created":1707810835,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"\n"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rhtzP9ohwFaqDaCJlznCgdPkXlco","object":"chat.completion.chunk","created":1707810835,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"Final"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rhtzP9ohwFaqDaCJlznCgdPkXlco","object":"chat.completion.chunk","created":1707810835,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
Answer"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rhtzP9ohwFaqDaCJlznCgdPkXlco","object":"chat.completion.chunk","created":1707810835,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":":"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rhtzP9ohwFaqDaCJlznCgdPkXlco","object":"chat.completion.chunk","created":1707810835,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
Bye"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rhtzP9ohwFaqDaCJlznCgdPkXlco","object":"chat.completion.chunk","created":1707810835,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{},"logprobs":null,"finish_reason":"stop"}]}
data: [DONE]
'
headers:
CF-Cache-Status:
- DYNAMIC
CF-RAY:
- 84f540666f1917e2-SJC
- 854b801c5fcf9435-SJC
Cache-Control:
- no-cache, must-revalidate
Connection:
- keep-alive
Content-Encoding:
- gzip
Content-Type:
- application/json
- text/event-stream
Date:
- Fri, 02 Feb 2024 20:40:53 GMT
- Tue, 13 Feb 2024 07:53:56 GMT
Server:
- cloudflare
Transfer-Encoding:
@@ -314,7 +500,7 @@ interactions:
openai-organization:
- user-z7g4wmlazxqvc5wjyaaaocfz
openai-processing-ms:
- '2080'
- '271'
openai-version:
- '2020-10-01'
strict-transport-security:
@@ -326,15 +512,16 @@ interactions:
x-ratelimit-remaining-requests:
- '9999'
x-ratelimit-remaining-tokens:
- '299582'
- '299535'
x-ratelimit-reset-requests:
- 6ms
x-ratelimit-reset-tokens:
- 83ms
- 92ms
x-request-id:
- e30a0b5dbbf8292624095c35c893d784
http_version: HTTP/1.1
status_code: 200
- req_0bdf926dcd7ed8d293a5b691131821f1
status:
code: 200
message: OK
- request:
body: '{"messages": [{"role": "user", "content": "Progressively summarize the
lines of conversation provided, adding onto the previous summary returning a
@@ -345,10 +532,10 @@ interactions:
help humans reach their full potential.\n\nNew summary:\nThe human asks what
the AI thinks of artificial intelligence. The AI thinks artificial intelligence
is a force for good because it will help humans reach their full potential.\nEND
OF EXAMPLE\n\nCurrent summary:\nThe human instructs the AI to greet, to which
the AI responds with \"Hi\".\n\nNew lines of conversation:\nHuman: Just say
bye.\nThis is the context you''re working with:\nHi\nAI: Bye\n\nNew summary:"}],
"model": "gpt-4", "n": 1, "stream": false, "temperature": 0.7}'
OF EXAMPLE\n\nCurrent summary:\nThe human instructs the AI to say hi, and the
AI responds with \"Hi\".\n\nNew lines of conversation:\nHuman: Just say bye.\nThis
is the context you''re working with:\nHi\nAI: Bye\n\nNew summary:"}], "model":
"gpt-4", "n": 1, "stream": false, "temperature": 0.7}'
headers:
accept:
- application/json
@@ -357,16 +544,16 @@ interactions:
connection:
- keep-alive
content-length:
- '988'
- '984'
content-type:
- application/json
cookie:
- __cf_bm=UA.onlCu1vI0C5OQS7NuRtwP7DNFz7W09xd9jMhRLvw-1706906448-1-AcFhcpTYAB7ObVTPwWzhxf9l1QMH4b8dLAMZyER/lYCj5KAK4BUiLNPaa/0+BQHyw96fYT2TF8+5GA/JW9icYv4=;
_cfuvid=9zIEbOcGQmxDxJTE_knwEUEBSUQGkilD4oijzyMAx.k-1706906448692-0-604800000
- __cf_bm=sgnoG7DonxhXrDZ.yDNk.8p_lkNmtU14pjXmNtEn4WM-1707810833-1-AcPZTTxeQOUh53COIvJwnre0/vx4WMebWfMTr1HdojybgWFjduczGAGVaJ8fNqqnVBttaKfvGO6rxMM8OyJ//PI=;
_cfuvid=O0OCRCFP3Vv.mCIxT3n1FLplOWHgTBZM2hPCsKpI7o4-1707810833030-0-604800000
host:
- api.openai.com
user-agent:
- OpenAI/Python 1.7.1
- OpenAI/Python 1.12.0
x-stainless-arch:
- arm64
x-stainless-async:
@@ -376,7 +563,7 @@ interactions:
x-stainless-os:
- MacOS
x-stainless-package-version:
- 1.7.1
- 1.12.0
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
@@ -384,19 +571,20 @@ interactions:
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
content: "{\n \"id\": \"chatcmpl-8nudBlMMKnqMxdeD6t0hpyt98hNzZ\",\n \"object\":
\"chat.completion\",\n \"created\": 1706906453,\n \"model\": \"gpt-4-0613\",\n
\ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
\"assistant\",\n \"content\": \"The human instructs the AI to greet,
which it does by saying \\\"Hi\\\". Then the human instructs the AI to say \\\"Bye\\\",
which it complies with.\"\n },\n \"logprobs\": null,\n \"finish_reason\":
\"stop\"\n }\n ],\n \"usage\": {\n \"prompt_tokens\": 173,\n \"completion_tokens\":
36,\n \"total_tokens\": 209\n },\n \"system_fingerprint\": null\n}\n"
body:
string: !!binary |
H4sIAAAAAAAAA4RRTWsbMRC9768YdF4bO3Zi41sLoSTH2FBMU4ysHa+UameENIuzBP/3ovWuTU+9
CDTvgzdvvgoA5Sq1AWWsFtMEP1lH2z48/3zZL36sXt+MPPH3Le+2273eP3aqzAo+fqCRUTU13ASP
4piusImoBbPrfDVbreez9WLdAw1X6LOsDjJZTmZP88WgsOwMJrWBXwUAwFf/5mxU4afawKwcJw2m
pGtUmxsJQEX2eaJ0Si6JJlHlHTRMgtTH3VkE2zaawFGS2BpJIBbh2wsIQ9IdvCvr3lWZv2frjB3h
iCkwVQnOTuzAmsLOIpU95X+uxw6zraZqhE7sPZ+vzFHnmODYZYmjelRN1bDL5VaC5zpEPubCqPX+
Nj85cskeIurElBdOwuEqvxQAv/uy23/6UyFyE+Qg/Acp9TdbXP3U/a53dLkcQGHR/j5/mK+KIaFK
XRJsDidHNcYQXd99zllcir8AAAD//wMAYHujinICAAA=
headers:
CF-Cache-Status:
- DYNAMIC
CF-RAY:
- 84f540744aa817e2-SJC
- 854b802cab039435-SJC
Cache-Control:
- no-cache, must-revalidate
Connection:
@@ -406,7 +594,7 @@ interactions:
Content-Type:
- application/json
Date:
- Fri, 02 Feb 2024 20:40:57 GMT
- Tue, 13 Feb 2024 07:54:02 GMT
Server:
- cloudflare
Transfer-Encoding:
@@ -420,7 +608,7 @@ interactions:
openai-organization:
- user-z7g4wmlazxqvc5wjyaaaocfz
openai-processing-ms:
- '3890'
- '3678'
openai-version:
- '2020-10-01'
strict-transport-security:
@@ -432,38 +620,42 @@ interactions:
x-ratelimit-remaining-requests:
- '9999'
x-ratelimit-remaining-tokens:
- '299769'
- '299770'
x-ratelimit-reset-requests:
- 6ms
x-ratelimit-reset-tokens:
- 46ms
x-request-id:
- d242eb36c84f255ef0fd4e93bb95a969
http_version: HTTP/1.1
status_code: 200
- req_9fc08a7565f7e0add5721312fca34c04
status:
code: 200
message: OK
- request:
body: '{"messages": [{"role": "user", "content": "You are test role2.\ntest backstory2\n\nYour
personal goal is: test goal2TOOLS:\n------\nYou have access to only the following
tools:\n\nDelegate work to co-worker: Useful to delegate a specific task to
one of the following co-workers: test role, test role2.\nThe input to this tool
should be a pipe (|) separated text of length 3 (three), representing the co-worker
you want to ask it to (one of the options), the task and all actual context
you have for the task.\nFor example, `coworker|task|context`.\nAsk question
to co-worker: Useful to ask a question, opinion or take from on of the following
co-workers: test role, test role2.\nThe input to this tool should be a pipe
(|) separated text of length 3 (three), representing the co-worker you want
to ask it to (one of the options), the question and all actual context you have
for the question.\n For example, `coworker|question|context`.\n\nTo use a tool,
please use the exact following format:\n\n```\nThought: Do I need to use a tool?
Yes\nAction: the action to take, should be one of [Delegate work to co-worker,
Ask question to co-worker], just the name.\nAction Input: the input to the action\nObservation:
the result of the action\n```\n\nWhen you have a response for your task, or
if you do not need to use a tool, you MUST use the format:\n\n```\nThought:
Do I need to use a tool? No\nFinal Answer: [your response here]This is the summary
of your work so far:\nBegin! This is VERY important to you, your job depends
on it!\n\nCurrent Task: Answer accordingly to the context you got.\nThis is
the context you''re working with:\nHi\n"}], "model": "gpt-4", "n": 1, "stop":
["\nObservation"], "stream": false, "temperature": 0.7}'
tools:\n\n(\"Delegate work to co-worker: Delegate work to co-worker(coworker:
str, task: str, context: str) - Delegate a specific task to one of the following
co-workers:\\n- test role.\\nThe input to this tool should be the role of the
coworker, the task you want them to do, and ALL necessary context to exectue
the task, they know nothing about the task, so share absolute everything you
know, don''t reference things but instead explain them.\\nAsk question to co-worker:
Ask question to co-worker(coworker: str, question: str, context: str) - Ask
a specific question to one of the following co-workers:\\n- test role.\\nThe
input to this tool should be the role of the coworker, the question you have
for them, and ALL necessary context to ask the question properly, they know
nothing about the question, so share absolute everything you know, don''t reference
things but instead explain them.\",)\n\nTo use a tool, please use the exact
following format:\n\n```\nThought: Do I need to use a tool? Yes\nAction: the
tool you wanna use, should be one of [Delegate work to co-worker, Ask question
to co-worker], just the name.\nAction Input: Any and all relevant information
input and context for using the tool\nObservation: the result of using the tool\n```\n\nWhen
you have a response for your task, or if you do not need to use a tool, you
MUST use the format:\n\n```\nThought: Do I need to use a tool? No\nFinal Answer:
[your response here]```This is the summary of your work so far:\nBegin! This
is VERY important to you, your job depends on it!\n\nCurrent Task: Answer accordingly
to the context you got.\nThis is the context you''re working with:\nHi\n"}],
"model": "gpt-4", "n": 1, "stop": ["\nObservation"], "stream": true, "temperature":
0.7}'
headers:
accept:
- application/json
@@ -472,13 +664,13 @@ interactions:
connection:
- keep-alive
content-length:
- '1731'
- '1921'
content-type:
- application/json
host:
- api.openai.com
user-agent:
- OpenAI/Python 1.7.1
- OpenAI/Python 1.12.0
x-stainless-arch:
- arm64
x-stainless-async:
@@ -488,7 +680,7 @@ interactions:
x-stainless-os:
- MacOS
x-stainless-package-version:
- 1.7.1
- 1.12.0
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
@@ -496,36 +688,125 @@ interactions:
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
content: "{\n \"id\": \"chatcmpl-8nudFRwWtooLxk6L6eCDzsw76aUgI\",\n \"object\":
\"chat.completion\",\n \"created\": 1706906457,\n \"model\": \"gpt-4-0613\",\n
\ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
\"assistant\",\n \"content\": \"Thought: Do I need to use a tool? No\\nFinal
Answer: Hello! How may I assist you today?\"\n },\n \"logprobs\":
null,\n \"finish_reason\": \"stop\"\n }\n ],\n \"usage\": {\n \"prompt_tokens\":
385,\n \"completion_tokens\": 24,\n \"total_tokens\": 409\n },\n \"system_fingerprint\":
null\n}\n"
body:
string: 'data: {"id":"chatcmpl-8rhu6kUiFLSUh46G57YjP3i8168PL","object":"chat.completion.chunk","created":1707810842,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"role":"assistant","content":""},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rhu6kUiFLSUh46G57YjP3i8168PL","object":"chat.completion.chunk","created":1707810842,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"Thought"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rhu6kUiFLSUh46G57YjP3i8168PL","object":"chat.completion.chunk","created":1707810842,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":":"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rhu6kUiFLSUh46G57YjP3i8168PL","object":"chat.completion.chunk","created":1707810842,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
Do"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rhu6kUiFLSUh46G57YjP3i8168PL","object":"chat.completion.chunk","created":1707810842,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
I"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rhu6kUiFLSUh46G57YjP3i8168PL","object":"chat.completion.chunk","created":1707810842,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
need"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rhu6kUiFLSUh46G57YjP3i8168PL","object":"chat.completion.chunk","created":1707810842,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
to"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rhu6kUiFLSUh46G57YjP3i8168PL","object":"chat.completion.chunk","created":1707810842,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
use"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rhu6kUiFLSUh46G57YjP3i8168PL","object":"chat.completion.chunk","created":1707810842,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
a"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rhu6kUiFLSUh46G57YjP3i8168PL","object":"chat.completion.chunk","created":1707810842,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
tool"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rhu6kUiFLSUh46G57YjP3i8168PL","object":"chat.completion.chunk","created":1707810842,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"?"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rhu6kUiFLSUh46G57YjP3i8168PL","object":"chat.completion.chunk","created":1707810842,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
No"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rhu6kUiFLSUh46G57YjP3i8168PL","object":"chat.completion.chunk","created":1707810842,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"\n"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rhu6kUiFLSUh46G57YjP3i8168PL","object":"chat.completion.chunk","created":1707810842,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"Final"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rhu6kUiFLSUh46G57YjP3i8168PL","object":"chat.completion.chunk","created":1707810842,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
Answer"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rhu6kUiFLSUh46G57YjP3i8168PL","object":"chat.completion.chunk","created":1707810842,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":":"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rhu6kUiFLSUh46G57YjP3i8168PL","object":"chat.completion.chunk","created":1707810842,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
Hello"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rhu6kUiFLSUh46G57YjP3i8168PL","object":"chat.completion.chunk","created":1707810842,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"!"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rhu6kUiFLSUh46G57YjP3i8168PL","object":"chat.completion.chunk","created":1707810842,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
How"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rhu6kUiFLSUh46G57YjP3i8168PL","object":"chat.completion.chunk","created":1707810842,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
can"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rhu6kUiFLSUh46G57YjP3i8168PL","object":"chat.completion.chunk","created":1707810842,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
I"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rhu6kUiFLSUh46G57YjP3i8168PL","object":"chat.completion.chunk","created":1707810842,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
assist"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rhu6kUiFLSUh46G57YjP3i8168PL","object":"chat.completion.chunk","created":1707810842,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
you"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rhu6kUiFLSUh46G57YjP3i8168PL","object":"chat.completion.chunk","created":1707810842,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
today"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rhu6kUiFLSUh46G57YjP3i8168PL","object":"chat.completion.chunk","created":1707810842,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"?"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rhu6kUiFLSUh46G57YjP3i8168PL","object":"chat.completion.chunk","created":1707810842,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{},"logprobs":null,"finish_reason":"stop"}]}
data: [DONE]
'
headers:
CF-Cache-Status:
- DYNAMIC
CF-RAY:
- 84f54090fed496e3-SJC
- 854b8046c8e016a4-SJC
Cache-Control:
- no-cache, must-revalidate
Connection:
- keep-alive
Content-Encoding:
- gzip
Content-Type:
- application/json
- text/event-stream
Date:
- Fri, 02 Feb 2024 20:40:59 GMT
- Tue, 13 Feb 2024 07:54:03 GMT
Server:
- cloudflare
Set-Cookie:
- __cf_bm=1a8iCx8oaT_528C4gPIvdKX1IWXA6Aw7hHi6uKwJARM-1706906459-1-AS1ipjLS8aTFlU5CIJ11MKzJMqqcSkV3FaOqkZIvmgUJmnV6YPB9LV++7vq0Bzg8e3E8OYLiAiFKlVbBaWkgXr4=;
path=/; expires=Fri, 02-Feb-24 21:10:59 GMT; domain=.api.openai.com; HttpOnly;
- __cf_bm=9iqOVX99fNw3iBrL2lGkIsbxEmVHjCDC6dQ.IIPGsVI-1707810843-1-AXvH8pf5WEUYQLNgC9g6z7NDUcy41y1GipheSqIvEL5ZZUMRQ1RavWK2fwdwPnyJO0mrOzXgRaVKPBYHNgPNuXk=;
path=/; expires=Tue, 13-Feb-24 08:24:03 GMT; domain=.api.openai.com; HttpOnly;
Secure; SameSite=None
- _cfuvid=5AKzoFvo8LvsIRwuHEYgHDTsEg6WtL55Ee4roC_7nTg-1706906459586-0-604800000;
- _cfuvid=Lj_LxomUxRnFWQnkhey0pEEnt5co5my8ZJd7VvM_bts-1707810843273-0-604800000;
path=/; domain=.api.openai.com; HttpOnly; Secure; SameSite=None
Transfer-Encoding:
- chunked
@@ -538,7 +819,7 @@ interactions:
openai-organization:
- user-z7g4wmlazxqvc5wjyaaaocfz
openai-processing-ms:
- '1731'
- '359'
openai-version:
- '2020-10-01'
strict-transport-security:
@@ -550,15 +831,16 @@ interactions:
x-ratelimit-remaining-requests:
- '9999'
x-ratelimit-remaining-tokens:
- '299591'
- '299545'
x-ratelimit-reset-requests:
- 6ms
x-ratelimit-reset-tokens:
- 81ms
- 91ms
x-request-id:
- 676122a930b0702fa6feee211a0cc372
http_version: HTTP/1.1
status_code: 200
- req_9f1f2f39815add1beeb2c6034c678964
status:
code: 200
message: OK
- request:
body: '{"messages": [{"role": "user", "content": "Progressively summarize the
lines of conversation provided, adding onto the previous summary returning a
@@ -571,7 +853,7 @@ interactions:
is a force for good because it will help humans reach their full potential.\nEND
OF EXAMPLE\n\nCurrent summary:\n\n\nNew lines of conversation:\nHuman: Answer
accordingly to the context you got.\nThis is the context you''re working with:\nHi\nAI:
Hello! How may I assist you today?\n\nNew summary:"}], "model": "gpt-4", "n":
Hello! How can I assist you today?\n\nNew summary:"}], "model": "gpt-4", "n":
1, "stream": false, "temperature": 0.7}'
headers:
accept:
@@ -585,12 +867,12 @@ interactions:
content-type:
- application/json
cookie:
- __cf_bm=1a8iCx8oaT_528C4gPIvdKX1IWXA6Aw7hHi6uKwJARM-1706906459-1-AS1ipjLS8aTFlU5CIJ11MKzJMqqcSkV3FaOqkZIvmgUJmnV6YPB9LV++7vq0Bzg8e3E8OYLiAiFKlVbBaWkgXr4=;
_cfuvid=5AKzoFvo8LvsIRwuHEYgHDTsEg6WtL55Ee4roC_7nTg-1706906459586-0-604800000
- __cf_bm=9iqOVX99fNw3iBrL2lGkIsbxEmVHjCDC6dQ.IIPGsVI-1707810843-1-AXvH8pf5WEUYQLNgC9g6z7NDUcy41y1GipheSqIvEL5ZZUMRQ1RavWK2fwdwPnyJO0mrOzXgRaVKPBYHNgPNuXk=;
_cfuvid=Lj_LxomUxRnFWQnkhey0pEEnt5co5my8ZJd7VvM_bts-1707810843273-0-604800000
host:
- api.openai.com
user-agent:
- OpenAI/Python 1.7.1
- OpenAI/Python 1.12.0
x-stainless-arch:
- arm64
x-stainless-async:
@@ -600,7 +882,7 @@ interactions:
x-stainless-os:
- MacOS
x-stainless-package-version:
- 1.7.1
- 1.12.0
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
@@ -608,20 +890,20 @@ interactions:
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
content: "{\n \"id\": \"chatcmpl-8nudHpUv8DAMhkr2JAbA76VPIGCzz\",\n \"object\":
\"chat.completion\",\n \"created\": 1706906459,\n \"model\": \"gpt-4-0613\",\n
\ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
\"assistant\",\n \"content\": \"The human instructs the AI to answer
according to the context provided. In response to a simple greeting from the
human, the AI politely responds and offers assistance.\"\n },\n \"logprobs\":
null,\n \"finish_reason\": \"stop\"\n }\n ],\n \"usage\": {\n \"prompt_tokens\":
166,\n \"completion_tokens\": 32,\n \"total_tokens\": 198\n },\n \"system_fingerprint\":
null\n}\n"
body:
string: !!binary |
H4sIAAAAAAAAA1RRwW7bMAy9+ysInZMgXrMkzS0bhqLHAcEuxRAoMm1rk0lBpNOuRf59kGMn20WH
9/ieHh8/CgDjK7MD41qrrothvk1tv92fnzbdOfTv9GP/rVx/PXTfT/WXw6uZZQWffqHTSbVw3MWA
6pmutEtoFbNruVlutuVyu1oNRMcVhixros5X8+W6fBgVLXuHYnbwUgAAfAxvzkYVvpkdLGcT0qGI
bdDsbkMAJnHIiLEiXtSSmtmddEyKNMQ9tAht31kCT6KpdyqgLcL+GZTBkrxiAuscp8pTk7HMxsRn
X2EFg9ObLuCZIKFEJsFpaOSAa7AgPjcCTUJUT81s+oTrGpPAFNPhwow5L7cFAzcx8SmXQX0IN7z2
5KU9JrTClJcR5XiVXwqAn0OR/X/dmJi4i3pU/o2UDcv1+upn7je7s58eR1JZbfhH9fi5GBMa+SOK
3bH21GCKyQ+95pzFpfgLAAD//wMA4hMCWk4CAAA=
headers:
CF-Cache-Status:
- DYNAMIC
CF-RAY:
- 84f5409c8dab96e3-SJC
- 854b8051c9a916a4-SJC
Cache-Control:
- no-cache, must-revalidate
Connection:
@@ -631,7 +913,7 @@ interactions:
Content-Type:
- application/json
Date:
- Fri, 02 Feb 2024 20:41:03 GMT
- Tue, 13 Feb 2024 07:54:06 GMT
Server:
- cloudflare
Transfer-Encoding:
@@ -645,7 +927,7 @@ interactions:
openai-organization:
- user-z7g4wmlazxqvc5wjyaaaocfz
openai-processing-ms:
- '3358'
- '1573'
openai-version:
- '2020-10-01'
strict-transport-security:
@@ -663,7 +945,8 @@ interactions:
x-ratelimit-reset-tokens:
- 45ms
x-request-id:
- d383779572ba9f95df5216c72b6708e2
http_version: HTTP/1.1
status_code: 200
- req_5448ec67f3a6cc3af91a85e0794c913c
status:
code: 200
message: OK
version: 1
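
The diff through this cassette shows the two changes being re-recorded: the agent prompt now describes the delegation tools with named arguments (coworker: str, task: str, context: str) instead of the old pipe-separated coworker|task|context input, and the completion calls switch from "stream": false to "stream": true, so each response body is a text/event-stream of chat.completion.chunk events closed by a data: [DONE] sentinel rather than a single chat.completion object. As a minimal sketch (not part of the diff; the function name and sample events are illustrative), the final assistant text is recovered by concatenating the delta.content pieces:

import json

def collect_stream_text(sse_lines):
    """Join the delta.content pieces of chat.completion.chunk events."""
    parts = []
    for line in sse_lines:
        line = line.strip()
        if not line.startswith("data:"):
            continue  # skip the blank separator lines between events
        payload = line[len("data:"):].strip()
        if payload == "[DONE]":
            break  # sentinel that closes the stream
        chunk = json.loads(payload)
        delta = chunk["choices"][0]["delta"]
        parts.append(delta.get("content") or "")  # role-only and final empty deltas add nothing
    return "".join(parts)

# Events shaped like the recorded chunks above:
events = [
    'data: {"choices": [{"index": 0, "delta": {"role": "assistant", "content": ""}}]}',
    'data: {"choices": [{"index": 0, "delta": {"content": "Bye"}}]}',
    'data: [DONE]',
]
assert collect_stream_text(events) == "Bye"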

File diff suppressed because it is too large


@@ -1,15 +1,16 @@
interactions:
- request:
body: '{"messages": [{"role": "user", "content": "You are test role.\ntest backstory\n\nYour
personal goal is: test goal\n\nTOOLS:\n------\nYou have access to the following
tools:\n\n\n\nTo use a tool, please use the exact following format:\n\n```\nThought:
Do I need to use a tool? Yes\nAction: the action to take, should be one of []\nAction
Input: the input to the action\nObservation: the result of the action\n```\n\nWhen
personal goal is: test goalTOOLS:\n------\nYou have access to only the following
tools:\n\n('''',)\n\nTo use a tool, please use the exact following format:\n\n```\nThought:
Do I need to use a tool? Yes\nAction: the tool you wanna use, should be one
of [], just the name.\nAction Input: Any and all relevant information input
and context for using the tool\nObservation: the result of using the tool\n```\n\nWhen
you have a response for your task, or if you do not need to use a tool, you
MUST use the format:\n\n```\nThought: Do I need to use a tool? No\nFinal Answer:
[your response here]\n```\nBegin! This is VERY important to you, your job depends
on it!\n\nCurrent Task: How much is 1 + 1?\n\n"}], "model": "gpt-4", "n": 1,
"stop": ["\nObservation"], "stream": false, "temperature": 0.0}'
[your response here]```Begin! This is VERY important to you, your job depends
on it!\n\nCurrent Task: How much is 1 + 1?\n"}], "model": "gpt-4", "n": 1, "stop":
["\nObservation"], "stream": true, "temperature": 0.0}'
headers:
accept:
- application/json
@@ -18,13 +19,13 @@ interactions:
connection:
- keep-alive
content-length:
- '799'
- '868'
content-type:
- application/json
host:
- api.openai.com
user-agent:
- OpenAI/Python 1.6.0
- OpenAI/Python 1.12.0
x-stainless-arch:
- arm64
x-stainless-async:
@@ -34,7 +35,7 @@ interactions:
x-stainless-os:
- MacOS
x-stainless-package-version:
- 1.6.0
- 1.12.0
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
@@ -42,35 +43,123 @@ interactions:
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
content: "{\n \"id\": \"chatcmpl-8XwKQzRwByjmOhHcKQk32biG9bslO\",\n \"object\":
\"chat.completion\",\n \"created\": 1703099730,\n \"model\": \"gpt-4-0613\",\n
\ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
\"assistant\",\n \"content\": \"Thought: Do I need to use a tool? No\\nFinal
Answer: 1 + 1 equals 2.\"\n },\n \"logprobs\": null,\n \"finish_reason\":
\"stop\"\n }\n ],\n \"usage\": {\n \"prompt_tokens\": 169,\n \"completion_tokens\":
24,\n \"total_tokens\": 193\n },\n \"system_fingerprint\": null\n}\n"
body:
string: 'data: {"id":"chatcmpl-8rhoWQPa2JkVfJ3R7p3stqT2DXung","object":"chat.completion.chunk","created":1707810496,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"role":"assistant","content":""},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rhoWQPa2JkVfJ3R7p3stqT2DXung","object":"chat.completion.chunk","created":1707810496,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"Thought"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rhoWQPa2JkVfJ3R7p3stqT2DXung","object":"chat.completion.chunk","created":1707810496,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":":"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rhoWQPa2JkVfJ3R7p3stqT2DXung","object":"chat.completion.chunk","created":1707810496,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
Do"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rhoWQPa2JkVfJ3R7p3stqT2DXung","object":"chat.completion.chunk","created":1707810496,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
I"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rhoWQPa2JkVfJ3R7p3stqT2DXung","object":"chat.completion.chunk","created":1707810496,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
need"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rhoWQPa2JkVfJ3R7p3stqT2DXung","object":"chat.completion.chunk","created":1707810496,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
to"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rhoWQPa2JkVfJ3R7p3stqT2DXung","object":"chat.completion.chunk","created":1707810496,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
use"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rhoWQPa2JkVfJ3R7p3stqT2DXung","object":"chat.completion.chunk","created":1707810496,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
a"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rhoWQPa2JkVfJ3R7p3stqT2DXung","object":"chat.completion.chunk","created":1707810496,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
tool"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rhoWQPa2JkVfJ3R7p3stqT2DXung","object":"chat.completion.chunk","created":1707810496,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"?"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rhoWQPa2JkVfJ3R7p3stqT2DXung","object":"chat.completion.chunk","created":1707810496,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
No"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rhoWQPa2JkVfJ3R7p3stqT2DXung","object":"chat.completion.chunk","created":1707810496,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"\n"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rhoWQPa2JkVfJ3R7p3stqT2DXung","object":"chat.completion.chunk","created":1707810496,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"Final"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rhoWQPa2JkVfJ3R7p3stqT2DXung","object":"chat.completion.chunk","created":1707810496,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
Answer"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rhoWQPa2JkVfJ3R7p3stqT2DXung","object":"chat.completion.chunk","created":1707810496,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":":"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rhoWQPa2JkVfJ3R7p3stqT2DXung","object":"chat.completion.chunk","created":1707810496,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rhoWQPa2JkVfJ3R7p3stqT2DXung","object":"chat.completion.chunk","created":1707810496,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"1"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rhoWQPa2JkVfJ3R7p3stqT2DXung","object":"chat.completion.chunk","created":1707810496,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
+"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rhoWQPa2JkVfJ3R7p3stqT2DXung","object":"chat.completion.chunk","created":1707810496,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rhoWQPa2JkVfJ3R7p3stqT2DXung","object":"chat.completion.chunk","created":1707810496,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"1"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rhoWQPa2JkVfJ3R7p3stqT2DXung","object":"chat.completion.chunk","created":1707810496,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
equals"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rhoWQPa2JkVfJ3R7p3stqT2DXung","object":"chat.completion.chunk","created":1707810496,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rhoWQPa2JkVfJ3R7p3stqT2DXung","object":"chat.completion.chunk","created":1707810496,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"2"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rhoWQPa2JkVfJ3R7p3stqT2DXung","object":"chat.completion.chunk","created":1707810496,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"."},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8rhoWQPa2JkVfJ3R7p3stqT2DXung","object":"chat.completion.chunk","created":1707810496,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{},"logprobs":null,"finish_reason":"stop"}]}
data: [DONE]
'
headers:
CF-Cache-Status:
- DYNAMIC
CF-RAY:
- 838a36e258f71a90-GRU
- 854b77d38d4215f1-SJC
Cache-Control:
- no-cache, must-revalidate
Connection:
- keep-alive
Content-Encoding:
- gzip
Content-Type:
- application/json
- text/event-stream
Date:
- Wed, 20 Dec 2023 19:15:33 GMT
- Tue, 13 Feb 2024 07:48:17 GMT
Server:
- cloudflare
Set-Cookie:
- __cf_bm=MXonvOWhzu2xzqqQW6loCrPxpu3htpmRm9QULnV6Uic-1703099733-1-AXiceGXQ09SeMmJaW7hk60DzeZec+Bojjr+ptfpQQRQa0K6o+0oZS2Nhjv6TgYOas6QVkjTabKLwlzoS7qAeJoM=;
path=/; expires=Wed, 20-Dec-23 19:45:33 GMT; domain=.api.openai.com; HttpOnly;
- __cf_bm=MzxC1xo7LqfNVrkSGvbsQwFovwE9BGVOVvKdxJ6iNGs-1707810497-1-AT9+DPA/J8Vj8DV63FFZ7Ofu50U6fltHzYuk8IAFOCmfEyoE/gRDFNAMT3cpkIGc9QYP9vePmgaRrOwn3jlaE1c=;
path=/; expires=Tue, 13-Feb-24 08:18:17 GMT; domain=.api.openai.com; HttpOnly;
Secure; SameSite=None
- _cfuvid=0hRVC1wV9ReH4cnY21jU6QE6m7DIZOjDOkZSBykI7yI-1703099733905-0-604800000;
- _cfuvid=Xh3ZuwMifgbaylPyq.F1cxXIU.lHdsLsy_556QRUXxw-1707810497093-0-604800000;
path=/; domain=.api.openai.com; HttpOnly; Secure; SameSite=None
Transfer-Encoding:
- chunked
@@ -83,7 +172,7 @@ interactions:
openai-organization:
- user-z7g4wmlazxqvc5wjyaaaocfz
openai-processing-ms:
- '3423'
- '325'
openai-version:
- '2020-10-01'
strict-transport-security:
@@ -92,22 +181,17 @@ interactions:
- '10000'
x-ratelimit-limit-tokens:
- '300000'
x-ratelimit-limit-tokens_usage_based:
- '300000'
x-ratelimit-remaining-requests:
- '9999'
x-ratelimit-remaining-tokens:
- '299825'
x-ratelimit-remaining-tokens_usage_based:
- '299825'
- '299805'
x-ratelimit-reset-requests:
- 6ms
x-ratelimit-reset-tokens:
- 35ms
x-ratelimit-reset-tokens_usage_based:
- 35ms
- 38ms
x-request-id:
- b8aea1a1a2f28a89da6e56213d29152c
http_version: HTTP/1.1
status_code: 200
- req_be41ae797dc113ec3cc04303015e8597
status:
code: 200
message: OK
version: 1
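
These YAML files are vcrpy cassettes ("interactions:" plus the closing "version: 1" are the cassette format): each test's HTTP traffic is recorded once against the live APIs and replayed offline afterwards, which is why bumping the OpenAI client from 1.6.0/1.7.1 to 1.12.0 and enabling streaming forces every fixture to be re-recorded with fresh dates, cookies, and request ids. The recurring "Progressively summarize the lines of conversation" request is the agent's summary memory updating itself after each exchange. A sketch of how fixtures in this shape are typically produced (the cassette path and test body are hypothetical; only the YAML above comes from the repo):

import vcr

@vcr.use_cassette(
    "tests/cassettes/test_agent_says_hi.yaml",  # hypothetical path
    filter_headers=["authorization"],           # keep the API key out of the recording
)
def test_agent_says_hi():
    ...  # build an agent and a task, then assert on the replayed completion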

File diff suppressed because it is too large

File diff suppressed because it is too large

File diff suppressed because it is too large

File diff suppressed because it is too large

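The new 612-line cassette that follows records the telemetry added in this release alongside the OpenAI calls: protobuf-encoded spans are POSTed over OTLP/HTTP to http://telemetry.crewai.com:4318/v1/traces, and the base64 body decodes to spans such as "Crew Created" and "Tool Usage" with attributes like crewai_version and the crew's agent/task metadata. The User-Agent OTel-OTLP-Exporter-Python/1.22.0 points at the standard OpenTelemetry Python exporter; a minimal sketch of wiring one up (the service name, span name, and attribute mirror the decoded payload, the rest is assumed):

from opentelemetry import trace
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter

provider = TracerProvider(
    resource=Resource.create({"service.name": "crewAI-telemetry"})  # name seen in the payload
)
provider.add_span_processor(
    BatchSpanProcessor(OTLPSpanExporter(endpoint="http://telemetry.crewai.com:4318/v1/traces"))
)
trace.set_tracer_provider(provider)

tracer = trace.get_tracer("crewai.telemetry")
with tracer.start_as_current_span("Crew Created") as span:  # span name seen in the payload
    span.set_attribute("crewai_version", "0.10.2")          # attribute seen in the payload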

@@ -0,0 +1,612 @@
interactions:
- request:
body: !!binary |
CuwKCiQKIgoMc2VydmljZS5uYW1lEhIKEGNyZXdBSS10ZWxlbWV0cnkSwwoKEgoQY3Jld2FpLnRl
bGVtZXRyeRLMAQoQwp0SIfudEwsrKbf+lo58IBIIf611tHXnM6YqClRvb2wgVXNhZ2UwATlwwj4a
iF2zF0FImT8aiF2zF0ofCgl0b29sX25hbWUSEgoQZ2V0X2ZpbmFsX2Fuc3dlckoOCghhdHRlbXB0
cxICGAFKWQoDbGxtElIKUHsibmFtZSI6IG51bGwsICJtb2RlbF9uYW1lIjogImdwdC00IiwgInRl
bXBlcmF0dXJlIjogMC43LCAiY2xhc3MiOiAiQ2hhdE9wZW5BSSJ9egIYARLdCAoQOtWPiAfm5nEl
WCn75Vb4mRII6VcVnnypR+gqDENyZXcgQ3JlYXRlZDABOQjIxNKIXbMXQbjyxtKIXbMXShoKDmNy
ZXdhaV92ZXJzaW9uEggKBjAuMTAuMkoaCg5weXRob25fdmVyc2lvbhIICgYzLjExLjdKMQoHY3Jl
d19pZBImCiQ4ZjMyMmYyNS1jYmIyLTRhZmQtOWY1MC03MmRjYWIxOGUzOTlKHAoMY3Jld19wcm9j
ZXNzEgwKCnNlcXVlbnRpYWxKFQoNY3Jld19sYW5ndWFnZRIECgJlbkoaChRjcmV3X251bWJlcl9v
Zl90YXNrcxICGAJKGwoVY3Jld19udW1iZXJfb2ZfYWdlbnRzEgIYAUrKAgoLY3Jld19hZ2VudHMS
ugIKtwJbeyJpZCI6ICI0YzZiNzM0Mi1iZThiLTRiMTItYTQ1Zi0yMDIwNmU0NWQwNTQiLCAicm9s
ZSI6ICJ0ZXN0IHJvbGUiLCAibWVtb3J5X2VuYWJsZWQ/IjogdHJ1ZSwgInZlcmJvc2U/IjogdHJ1
ZSwgIm1heF9pdGVyIjogMTUsICJtYXhfcnBtIjogbnVsbCwgImkxOG4iOiAiZW4iLCAibGxtIjog
IntcIm5hbWVcIjogbnVsbCwgXCJtb2RlbF9uYW1lXCI6IFwiZ3B0LTRcIiwgXCJ0ZW1wZXJhdHVy
ZVwiOiAwLjcsIFwiY2xhc3NcIjogXCJDaGF0T3BlbkFJXCJ9IiwgImRlbGVnYXRpb25fZW5hYmxl
ZD8iOiBmYWxzZSwgInRvb2xzX25hbWVzIjogW119XUqEAgoKY3Jld190YXNrcxL1AQryAVt7Imlk
IjogImJkMGU1OWRhLTc3NDktNDlmMS1iZjEyLWQ2ZjcyMDkyMmZjOSIsICJhc3luY19leGVjdXRp
b24/IjogZmFsc2UsICJhZ2VudF9yb2xlIjogInRlc3Qgcm9sZSIsICJ0b29sc19uYW1lcyI6IFtd
fSwgeyJpZCI6ICJlNWUxNGIwNS0xZmY5LTQ5OTktOWQ4NS04YjdlMzRiZjA0ZDgiLCAiYXN5bmNf
ZXhlY3V0aW9uPyI6IGZhbHNlLCAiYWdlbnRfcm9sZSI6ICJ0ZXN0IHJvbGUiLCAidG9vbHNfbmFt
ZXMiOiBbXX1dSigKCHBsYXRmb3JtEhwKGm1hY09TLTE0LjMtYXJtNjQtYXJtLTY0Yml0ShwKEHBs
YXRmb3JtX3JlbGVhc2USCAoGMjMuMy4wShsKD3BsYXRmb3JtX3N5c3RlbRIICgZEYXJ3aW5KewoQ
cGxhdGZvcm1fdmVyc2lvbhJnCmVEYXJ3aW4gS2VybmVsIFZlcnNpb24gMjMuMy4wOiBXZWQgRGVj
IDIwIDIxOjMwOjU5IFBTVCAyMDIzOyByb290OnhudS0xMDAwMi44MS41fjcvUkVMRUFTRV9BUk02
NF9UNjAzMEoKCgRjcHVzEgIYDHoCGAE=
headers:
Accept:
- '*/*'
Accept-Encoding:
- gzip, deflate
Connection:
- keep-alive
Content-Length:
- '1391'
Content-Type:
- application/x-protobuf
User-Agent:
- OTel-OTLP-Exporter-Python/1.22.0
method: POST
uri: http://telemetry.crewai.com:4318/v1/traces
response:
body:
string: "\n\0"
headers:
Content-Length:
- '2'
Content-Type:
- application/x-protobuf
Date:
- Tue, 13 Feb 2024 08:05:26 GMT
status:
code: 200
message: OK
- request:
body: '{"messages": [{"role": "user", "content": "You are test role.\ntest backstory\n\nYour
personal goal is: test goalTOOLS:\n------\nYou have access to only the following
tools:\n\n('''',)\n\nTo use a tool, please use the exact following format:\n\n```\nThought:
Do I need to use a tool? Yes\nAction: the tool you wanna use, should be one
of [], just the name.\nAction Input: Any and all relevant information input
and context for using the tool\nObservation: the result of using the tool\n```\n\nWhen
you have a response for your task, or if you do not need to use a tool, you
MUST use the format:\n\n```\nThought: Do I need to use a tool? No\nFinal Answer:
[your response here]```This is the summary of your work so far:\nBegin! This
is VERY important to you, your job depends on it!\n\nCurrent Task: just say
hi!\n"}], "model": "gpt-4", "n": 1, "stop": ["\nObservation"], "stream": true,
"temperature": 0.7}'
headers:
accept:
- application/json
accept-encoding:
- gzip, deflate
connection:
- keep-alive
content-length:
- '904'
content-type:
- application/json
host:
- api.openai.com
user-agent:
- OpenAI/Python 1.12.0
x-stainless-arch:
- arm64
x-stainless-async:
- 'false'
x-stainless-lang:
- python
x-stainless-os:
- MacOS
x-stainless-package-version:
- 1.12.0
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.11.7
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
body:
string: 'data: {"id":"chatcmpl-8ri580menKdx2UwVSxcCvbrHE69Ui","object":"chat.completion.chunk","created":1707811526,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"role":"assistant","content":""},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8ri580menKdx2UwVSxcCvbrHE69Ui","object":"chat.completion.chunk","created":1707811526,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"Thought"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8ri580menKdx2UwVSxcCvbrHE69Ui","object":"chat.completion.chunk","created":1707811526,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":":"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8ri580menKdx2UwVSxcCvbrHE69Ui","object":"chat.completion.chunk","created":1707811526,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
Do"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8ri580menKdx2UwVSxcCvbrHE69Ui","object":"chat.completion.chunk","created":1707811526,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
I"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8ri580menKdx2UwVSxcCvbrHE69Ui","object":"chat.completion.chunk","created":1707811526,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
need"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8ri580menKdx2UwVSxcCvbrHE69Ui","object":"chat.completion.chunk","created":1707811526,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
to"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8ri580menKdx2UwVSxcCvbrHE69Ui","object":"chat.completion.chunk","created":1707811526,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
use"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8ri580menKdx2UwVSxcCvbrHE69Ui","object":"chat.completion.chunk","created":1707811526,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
a"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8ri580menKdx2UwVSxcCvbrHE69Ui","object":"chat.completion.chunk","created":1707811526,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
tool"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8ri580menKdx2UwVSxcCvbrHE69Ui","object":"chat.completion.chunk","created":1707811526,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"?"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8ri580menKdx2UwVSxcCvbrHE69Ui","object":"chat.completion.chunk","created":1707811526,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
No"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8ri580menKdx2UwVSxcCvbrHE69Ui","object":"chat.completion.chunk","created":1707811526,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"\n"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8ri580menKdx2UwVSxcCvbrHE69Ui","object":"chat.completion.chunk","created":1707811526,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"Final"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8ri580menKdx2UwVSxcCvbrHE69Ui","object":"chat.completion.chunk","created":1707811526,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
Answer"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8ri580menKdx2UwVSxcCvbrHE69Ui","object":"chat.completion.chunk","created":1707811526,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":":"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8ri580menKdx2UwVSxcCvbrHE69Ui","object":"chat.completion.chunk","created":1707811526,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
Hi"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8ri580menKdx2UwVSxcCvbrHE69Ui","object":"chat.completion.chunk","created":1707811526,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"!"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8ri580menKdx2UwVSxcCvbrHE69Ui","object":"chat.completion.chunk","created":1707811526,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{},"logprobs":null,"finish_reason":"stop"}]}
data: [DONE]
'
headers:
CF-Cache-Status:
- DYNAMIC
CF-RAY:
- 854b90f5ce86fb28-SJC
Cache-Control:
- no-cache, must-revalidate
Connection:
- keep-alive
Content-Type:
- text/event-stream
Date:
- Tue, 13 Feb 2024 08:05:26 GMT
Server:
- cloudflare
Set-Cookie:
- __cf_bm=eZx_Cc28AsZ4sE9XhSDROTXe.zTSX.5NABIk4QNh4rE-1707811526-1-AUSW1VrxOPxZjbDBkaJGjn3RvnxQi2anKBjm3rtF34M+3WVMXKZnsuFT1NyLSbUlKlHLmk+tH0BFBkkjVf1KNAQ=;
path=/; expires=Tue, 13-Feb-24 08:35:26 GMT; domain=.api.openai.com; HttpOnly;
Secure; SameSite=None
- _cfuvid=1hOKQMgKuc9NQV1lVNIkVHpksu9kDExwfGmwkHTeUl4-1707811526659-0-604800000;
path=/; domain=.api.openai.com; HttpOnly; Secure; SameSite=None
Transfer-Encoding:
- chunked
access-control-allow-origin:
- '*'
alt-svc:
- h3=":443"; ma=86400
openai-model:
- gpt-4-0613
openai-organization:
- user-z7g4wmlazxqvc5wjyaaaocfz
openai-processing-ms:
- '400'
openai-version:
- '2020-10-01'
strict-transport-security:
- max-age=15724800; includeSubDomains
x-ratelimit-limit-requests:
- '10000'
x-ratelimit-limit-tokens:
- '300000'
x-ratelimit-remaining-requests:
- '9999'
x-ratelimit-remaining-tokens:
- '299796'
x-ratelimit-reset-requests:
- 6ms
x-ratelimit-reset-tokens:
- 40ms
x-request-id:
- req_1e2e3f72498b1c3f5bdfb527b6808aa3
status:
code: 200
message: OK
- request:
body: '{"messages": [{"role": "user", "content": "Progressively summarize the
lines of conversation provided, adding onto the previous summary returning a
new summary.\n\nEXAMPLE\nCurrent summary:\nThe human asks what the AI thinks
of artificial intelligence. The AI thinks artificial intelligence is a force
for good.\n\nNew lines of conversation:\nHuman: Why do you think artificial
intelligence is a force for good?\nAI: Because artificial intelligence will
help humans reach their full potential.\n\nNew summary:\nThe human asks what
the AI thinks of artificial intelligence. The AI thinks artificial intelligence
is a force for good because it will help humans reach their full potential.\nEND
OF EXAMPLE\n\nCurrent summary:\n\n\nNew lines of conversation:\nHuman: just
say hi!\nAI: Hi!\n\nNew summary:"}], "model": "gpt-4", "n": 1, "stream": false,
"temperature": 0.7}'
headers:
accept:
- application/json
accept-encoding:
- gzip, deflate
connection:
- keep-alive
content-length:
- '867'
content-type:
- application/json
cookie:
- __cf_bm=eZx_Cc28AsZ4sE9XhSDROTXe.zTSX.5NABIk4QNh4rE-1707811526-1-AUSW1VrxOPxZjbDBkaJGjn3RvnxQi2anKBjm3rtF34M+3WVMXKZnsuFT1NyLSbUlKlHLmk+tH0BFBkkjVf1KNAQ=;
_cfuvid=1hOKQMgKuc9NQV1lVNIkVHpksu9kDExwfGmwkHTeUl4-1707811526659-0-604800000
host:
- api.openai.com
user-agent:
- OpenAI/Python 1.12.0
x-stainless-arch:
- arm64
x-stainless-async:
- 'false'
x-stainless-lang:
- python
x-stainless-os:
- MacOS
x-stainless-package-version:
- 1.12.0
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.11.7
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
body:
string: !!binary |
H4sIAAAAAAAAA1SQW0sDMRCF3/dXjHlupVu3F/etKHgBhUIFxUpJs9Pd6CaTJrOoSP+7ZLtt9SWQ
OTmT75yfBEDoQuQgVCVZGVf3p16PZvf2+uVmsZlvJ09XzzR3j3qLlw+WRS86aP2Oig+uc0XG1cia
7F5WHiVj3JpOBpNpmo6G01YwVGAdbaXjftYfjNOLzlGRVhhEDq8JAMBPe0Y2W+CXyGHQO0wMhiBL
FPnxEYDwVMeJkCHowLLj7ERFltG2uIsKoWqMtKBtYN8oDsAVwuwOmKD0iBzvpgfSFgfFY3BkiwCf
miuQEHSMC0txq8+WQnQf7Y6ENZXO0zqmsU1dH+cbbXWoVh5lIBtpApPb23cJwFvbRPMvnHCejOMV
0wfauDDNsv0+cSr9pA67mgQTy/qPa5wlHaEI34HRrDbaluid120xkTPZJb8AAAD//wMADZpmMA8C
AAA=
headers:
CF-Cache-Status:
- DYNAMIC
CF-RAY:
- 854b91035a06fb28-SJC
Cache-Control:
- no-cache, must-revalidate
Connection:
- keep-alive
Content-Encoding:
- gzip
Content-Type:
- application/json
Date:
- Tue, 13 Feb 2024 08:05:30 GMT
Server:
- cloudflare
Transfer-Encoding:
- chunked
access-control-allow-origin:
- '*'
alt-svc:
- h3=":443"; ma=86400
openai-model:
- gpt-4-0613
openai-organization:
- user-z7g4wmlazxqvc5wjyaaaocfz
openai-processing-ms:
- '1880'
openai-version:
- '2020-10-01'
strict-transport-security:
- max-age=15724800; includeSubDomains
x-ratelimit-limit-requests:
- '10000'
x-ratelimit-limit-tokens:
- '300000'
x-ratelimit-remaining-requests:
- '9999'
x-ratelimit-remaining-tokens:
- '299799'
x-ratelimit-reset-requests:
- 6ms
x-ratelimit-reset-tokens:
- 40ms
x-request-id:
- req_939914d4d3f3e4fc959d143817d71fbc
status:
code: 200
message: OK
- request:
body: '{"messages": [{"role": "user", "content": "You are test role.\ntest backstory\n\nYour
personal goal is: test goalTOOLS:\n------\nYou have access to only the following
tools:\n\n('''',)\n\nTo use a tool, please use the exact following format:\n\n```\nThought:
Do I need to use a tool? Yes\nAction: the tool you wanna use, should be one
of [], just the name.\nAction Input: Any and all relevant information input
and context for using the tool\nObservation: the result of using the tool\n```\n\nWhen
you have a response for your task, or if you do not need to use a tool, you
MUST use the format:\n\n```\nThought: Do I need to use a tool? No\nFinal Answer:
[your response here]```This is the summary of your work so far:\nThe human instructs
the AI to greet them, and the AI responds with a simple \"Hi!\"Begin! This is
VERY important to you, your job depends on it!\n\nCurrent Task: just say hello!\nThis
is the context you''re working with:\nHi!\n"}], "model": "gpt-4", "n": 1, "stop":
["\nObservation"], "stream": true, "temperature": 0.7}'
headers:
accept:
- application/json
accept-encoding:
- gzip, deflate
connection:
- keep-alive
content-length:
- '1037'
content-type:
- application/json
cookie:
- __cf_bm=eZx_Cc28AsZ4sE9XhSDROTXe.zTSX.5NABIk4QNh4rE-1707811526-1-AUSW1VrxOPxZjbDBkaJGjn3RvnxQi2anKBjm3rtF34M+3WVMXKZnsuFT1NyLSbUlKlHLmk+tH0BFBkkjVf1KNAQ=;
_cfuvid=1hOKQMgKuc9NQV1lVNIkVHpksu9kDExwfGmwkHTeUl4-1707811526659-0-604800000
host:
- api.openai.com
user-agent:
- OpenAI/Python 1.12.0
x-stainless-arch:
- arm64
x-stainless-async:
- 'false'
x-stainless-lang:
- python
x-stainless-os:
- MacOS
x-stainless-package-version:
- 1.12.0
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.11.7
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
body:
string: 'data: {"id":"chatcmpl-8ri5CPtraPfqxGNtaNwyQfaesdGAb","object":"chat.completion.chunk","created":1707811530,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"role":"assistant","content":""},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8ri5CPtraPfqxGNtaNwyQfaesdGAb","object":"chat.completion.chunk","created":1707811530,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"Thought"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8ri5CPtraPfqxGNtaNwyQfaesdGAb","object":"chat.completion.chunk","created":1707811530,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":":"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8ri5CPtraPfqxGNtaNwyQfaesdGAb","object":"chat.completion.chunk","created":1707811530,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
Do"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8ri5CPtraPfqxGNtaNwyQfaesdGAb","object":"chat.completion.chunk","created":1707811530,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
I"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8ri5CPtraPfqxGNtaNwyQfaesdGAb","object":"chat.completion.chunk","created":1707811530,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
need"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8ri5CPtraPfqxGNtaNwyQfaesdGAb","object":"chat.completion.chunk","created":1707811530,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
to"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8ri5CPtraPfqxGNtaNwyQfaesdGAb","object":"chat.completion.chunk","created":1707811530,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
use"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8ri5CPtraPfqxGNtaNwyQfaesdGAb","object":"chat.completion.chunk","created":1707811530,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
a"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8ri5CPtraPfqxGNtaNwyQfaesdGAb","object":"chat.completion.chunk","created":1707811530,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
tool"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8ri5CPtraPfqxGNtaNwyQfaesdGAb","object":"chat.completion.chunk","created":1707811530,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"?"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8ri5CPtraPfqxGNtaNwyQfaesdGAb","object":"chat.completion.chunk","created":1707811530,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
No"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8ri5CPtraPfqxGNtaNwyQfaesdGAb","object":"chat.completion.chunk","created":1707811530,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"\n"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8ri5CPtraPfqxGNtaNwyQfaesdGAb","object":"chat.completion.chunk","created":1707811530,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"Final"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8ri5CPtraPfqxGNtaNwyQfaesdGAb","object":"chat.completion.chunk","created":1707811530,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
Answer"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8ri5CPtraPfqxGNtaNwyQfaesdGAb","object":"chat.completion.chunk","created":1707811530,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":":"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8ri5CPtraPfqxGNtaNwyQfaesdGAb","object":"chat.completion.chunk","created":1707811530,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"
Hello"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8ri5CPtraPfqxGNtaNwyQfaesdGAb","object":"chat.completion.chunk","created":1707811530,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"!"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-8ri5CPtraPfqxGNtaNwyQfaesdGAb","object":"chat.completion.chunk","created":1707811530,"model":"gpt-4-0613","system_fingerprint":null,"choices":[{"index":0,"delta":{},"logprobs":null,"finish_reason":"stop"}]}
data: [DONE]
'
headers:
CF-Cache-Status:
- DYNAMIC
CF-RAY:
- 854b91126e42fb28-SJC
Cache-Control:
- no-cache, must-revalidate
Connection:
- keep-alive
Content-Type:
- text/event-stream
Date:
- Tue, 13 Feb 2024 08:05:31 GMT
Server:
- cloudflare
Transfer-Encoding:
- chunked
access-control-allow-origin:
- '*'
alt-svc:
- h3=":443"; ma=86400
openai-model:
- gpt-4-0613
openai-organization:
- user-z7g4wmlazxqvc5wjyaaaocfz
openai-processing-ms:
- '360'
openai-version:
- '2020-10-01'
strict-transport-security:
- max-age=15724800; includeSubDomains
x-ratelimit-limit-requests:
- '10000'
x-ratelimit-limit-tokens:
- '300000'
x-ratelimit-remaining-requests:
- '9999'
x-ratelimit-remaining-tokens:
- '299765'
x-ratelimit-reset-requests:
- 6ms
x-ratelimit-reset-tokens:
- 47ms
x-request-id:
- req_9177fada9049cfa726cab195d9a942f5
status:
code: 200
message: OK
- request:
body: '{"messages": [{"role": "user", "content": "Progressively summarize the
lines of conversation provided, adding onto the previous summary returning a
new summary.\n\nEXAMPLE\nCurrent summary:\nThe human asks what the AI thinks
of artificial intelligence. The AI thinks artificial intelligence is a force
for good.\n\nNew lines of conversation:\nHuman: Why do you think artificial
intelligence is a force for good?\nAI: Because artificial intelligence will
help humans reach their full potential.\n\nNew summary:\nThe human asks what
the AI thinks of artificial intelligence. The AI thinks artificial intelligence
is a force for good because it will help humans reach their full potential.\nEND
OF EXAMPLE\n\nCurrent summary:\nThe human instructs the AI to greet them, and
the AI responds with a simple \"Hi!\"\n\nNew lines of conversation:\nHuman:
just say hello!\nThis is the context you''re working with:\nHi!\nAI: Hello!\n\nNew
summary:"}], "model": "gpt-4", "n": 1, "stream": false, "temperature": 0.7}'
headers:
accept:
- application/json
accept-encoding:
- gzip, deflate
connection:
- keep-alive
content-length:
- '1003'
content-type:
- application/json
cookie:
- __cf_bm=eZx_Cc28AsZ4sE9XhSDROTXe.zTSX.5NABIk4QNh4rE-1707811526-1-AUSW1VrxOPxZjbDBkaJGjn3RvnxQi2anKBjm3rtF34M+3WVMXKZnsuFT1NyLSbUlKlHLmk+tH0BFBkkjVf1KNAQ=;
_cfuvid=1hOKQMgKuc9NQV1lVNIkVHpksu9kDExwfGmwkHTeUl4-1707811526659-0-604800000
host:
- api.openai.com
user-agent:
- OpenAI/Python 1.12.0
x-stainless-arch:
- arm64
x-stainless-async:
- 'false'
x-stainless-lang:
- python
x-stainless-os:
- MacOS
x-stainless-package-version:
- 1.12.0
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.11.7
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
body:
string: !!binary |
H4sIAAAAAAAAA1xRy24bMQy871cwPK8DO4kf8C0NgqaHokDaS1EXhqKld5VIoiBy0wSB/73Qrh9o
LwI0wyGGMx8VALoG14C2M2pD8pNVdvN7r99vv67e7Psjy7ebsLj79PmnfV0o1kXBT89k9ai6tByS
J3UcR9pmMkpl62w5Xa5ms/n11UAEbsgXWZt0cjOZLmbXB0XHzpLgGn5VAAAfw1u8xYbecA3T+ogE
EjEt4fo0BICZfUHQiDhRE0efB9JyVIqD3R8dQdcHE8FF0dxbFdCO4PYLKEObibT8Qw07l0WPXCZJ
HBuBP047MLDBB3exQTCxKSOxhj5xhF2ftaN82u041uD0P/kGH8h7vtjgJR5M7k/XeW5T5qeSROy9
P+E7F51020xGOJZLRDmN8n0F8HtIsf8nGEyZQ9Kt8gtFGcqYj/vwXNiZHSsCQGU1/oxfTZfVwSHK
uyiF7c7FlnLKbgi1+Kz21V8AAAD//wMAt4zwVksCAAA=
headers:
CF-Cache-Status:
- DYNAMIC
CF-RAY:
- 854b911cff76fb28-SJC
Cache-Control:
- no-cache, must-revalidate
Connection:
- keep-alive
Content-Encoding:
- gzip
Content-Type:
- application/json
Date:
- Tue, 13 Feb 2024 08:05:35 GMT
Server:
- cloudflare
Transfer-Encoding:
- chunked
access-control-allow-origin:
- '*'
alt-svc:
- h3=":443"; ma=86400
openai-model:
- gpt-4-0613
openai-organization:
- user-z7g4wmlazxqvc5wjyaaaocfz
openai-processing-ms:
- '2599'
openai-version:
- '2020-10-01'
strict-transport-security:
- max-age=15724800; includeSubDomains
x-ratelimit-limit-requests:
- '10000'
x-ratelimit-limit-tokens:
- '300000'
x-ratelimit-remaining-requests:
- '9999'
x-ratelimit-remaining-tokens:
- '299765'
x-ratelimit-reset-requests:
- 6ms
x-ratelimit-reset-tokens:
- 46ms
x-request-id:
- req_9590d08c9df508c18c924e19e4e0055d
status:
code: 200
message: OK
version: 1
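The cassette above is the kind of fixture the VCR-backed tests below replay instead of hitting api.openai.com live. As a minimal sketch of how such a recording is consumed, assuming pytest-recording/vcrpy is configured for this suite (the test name and prompt are illustrative, not taken from the diff):

import pytest
from langchain_openai import ChatOpenAI

@pytest.mark.vcr(filter_headers=["authorization"])  # same header scrubbing as the tests below
def test_replays_recorded_completion():
    # With a matching cassette on disk this never reaches the network;
    # the client receives the streamed chunks recorded above
    # ("Thought: Do I need to use a tool? No ... Final Answer: Hello!").
    llm = ChatOpenAI(model="gpt-4")
    result = llm.invoke("just say hello!")
    assert "Hello" in result.content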

File diff suppressed because it is too large

File diff suppressed because it is too large

File diff suppressed because it is too large

File diff suppressed because it is too large

File diff suppressed because it is too large

File diff suppressed because it is too large

@@ -131,15 +131,7 @@ def test_crew_creation():
assert (
crew.kickoff()
== """1. **The Evolution of AI: From Old Concepts to New Frontiers** - Journey with us as we traverse the fascinating timeline of artificial intelligence - from its philosophical and mathematical infancy to the sophisticated, problem-solving tool it has become today. This riveting account will not only educate but also inspire, as we delve deep into the milestones that brought us here and shine a beacon on the potential that lies ahead.
2. **AI Agents in Healthcare: The Future of Medicine** - Imagine a world where illnesses are diagnosed before symptoms appear, where patient outcomes are not mere guesses but accurate predictions. This is the world AI is crafting in healthcare - a revolution that's saving lives and changing the face of medicine as we know it. This article will spotlight this transformative journey, underlining the profound impact AI is having on our health and well-being.
3. **AI and Ethics: Navigating the Moral Landscape of Artificial Intelligence** - As AI becomes an integral part of our lives, it brings along a plethora of ethical dilemmas. This thought-provoking piece will navigate the complex moral landscape of AI, addressing critical concerns like privacy, job displacement, and decision-making biases. It serves as a much-needed discussion platform for the societal implications of AI, urging us to look beyond the technology and into the mirror.
4. **Demystifying AI Algorithms: A Deep Dive into Machine Learning** - Ever wondered what goes on behind the scenes of AI? This enlightening article will break down the complex world of machine learning algorithms into digestible insights, unraveling the mystery of AI's 'black box'. It's a rare opportunity for the non-technical audience to appreciate the inner workings of AI, fostering a deeper understanding of this revolutionary technology.
5. **AI Startups: The Game Changers of the Tech Industry** - In the world of tech, AI startups are the bold pioneers charting new territories. This article will spotlight these game changers, showcasing how their innovative products and services are driving the AI revolution. It's a unique opportunity to catch a glimpse of the entrepreneurial side of AI, offering inspiration for the tech enthusiasts and dreamers alike."""
== '1. "The Role of AI in Predicting and Managing Pandemics"\nHighlight: \nIn an era where global health crises can emerge from any corner of the world, the role of AI in predicting and managing pandemics has never been more critical. Through intelligent data gathering and predictive analytics, AI can potentially identify the onset of pandemics before they reach critical mass, offering a proactive solution to a reactive problem. This article explores the intersection of AI and epidemiology, delving into how this cutting-edge technology is revolutionizing our approach to global health crises.\n\n2. "AI and the Future of Work: Will Robots Take Our Jobs?"\nHighlight: \nThe rise of AI has sparked both excitement and apprehension about the future of work. Will robots replace us, or will they augment our capabilities? This article delves into the heart of this controversial issue, examining the potential of AI to disrupt job markets, transform industries, and redefine the concept of work. It\'s not just a question of job security—it\'s a discussion about the kind of world we want to live in.\n\n3. "AI in Art and Creativity: A New Frontier in Innovation"\nHighlight: \nArt and creativity, once seen as the exclusive domain of human expression, are being redefined by the advent of AI. From algorithmic compositions to AI-assisted design, this article explores the burgeoning field of AI in art and creativity. It\'s a journey into a new frontier of innovation, one where the lines between human creativity and artificial intelligence blur into an exciting, uncharted territory.\n\n4. "Ethics in AI: Balancing Innovation with Responsibility"\nHighlight: \nAs AI continues to permeate every facet of our lives, questions about its ethical implications grow louder. This article invites readers into a thoughtful exploration of the moral landscape of AI. It challenges us to balance the relentless pursuit of innovation with the weighty responsibilities that come with it, asking: How can we harness the power of AI without losing sight of our human values?\n\n5. "AI in Education: Personalizing Learning for the Next Generation"\nHighlight: \nEducation is poised for a transformation as AI enters the classroom, promising a future where learning is personalized, not generalized. This article delves into how AI can tailor educational experiences to individual learning styles, making education more effective and accessible. It\'s a glimpse into a future where AI is not just a tool for learning, but an active participant in shaping the educational journey of the next generation.'
)
@@ -160,22 +152,7 @@ def test_hierarchical_process():
assert (
crew.kickoff()
== """Here are the 5 interesting ideas with a highlight paragraph for each:
1. "The Future of AI in Healthcare: Predicting Diseases Before They Happen"
- "Imagine a future where AI empowers us to detect diseases before they arise, transforming healthcare from reactive to proactive. Machine learning algorithms, trained on vast amounts of patient data, could potentially predict heart diseases, strokes, or cancers before they manifest, allowing for early interventions and significantly improving patient outcomes. This article will delve into the rapid advancements in AI within the healthcare sector and how these technologies are ushering us into a new era of predictive medicine."
2. "How AI is Changing the Way We Cook: An Insight into Smart Kitchens"
- "From the humble home kitchen to grand culinary stages, AI is revolutionizing the way we cook. Smart appliances, equipped with advanced sensors and predictive algorithms, are turning kitchens into creative playgrounds, offering personalized recipes, precise cooking instructions, and even automated meal preparation. This article explores the fascinating intersection of AI and gastronomy, revealing how technology is transforming our culinary experiences."
3. "Redefining Fitness with AI: Personalized Workout Plans and Nutritional Advice"
- "Fitness reimagined that's the promise of AI in the wellness industry. Picture a personal trainer who knows your strengths, weaknesses, and nutritional needs intimately. An AI-powered fitness app can provide this personalized experience, adapting your workout plans and dietary recommendations in real-time based on your progress and feedback. Join us as we unpack how AI is revolutionizing the fitness landscape, offering personalized, data-driven approaches to health and well-being."
4. "AI and the Art World: How Technology is Shaping Creativity"
- "Art and AI may seem like unlikely partners, but their synergy is sparking a creative revolution. AI algorithms are now creating mesmerizing artworks, challenging our perceptions of creativity and originality. From AI-assisted painting to generative music composition, this article will take you on a journey through the fascinating world of AI in art, exploring how technology is reshaping the boundaries of human creativity."
5. "AI in Space Exploration: The Next Frontier"
- "The vast expanse of space, once the sole domain of astronauts and rovers, is the next frontier for AI. AI technology is playing an increasingly vital role in space exploration, from predicting space weather to assisting in interstellar navigation. This article will delve into the exciting intersection of AI and space exploration, exploring how these advanced technologies are helping us uncover the mysteries of the cosmos.\""""
== """Here are the five interesting ideas for articles with their respective highlights:\n\n1. The Role of AI in Climate Change: As the world grapples with the existential threat of climate change, artificial intelligence (AI) has emerged as a powerful ally in our battle against it. The article will explore how AI is being used to predict weather patterns, optimize renewable energy sources, and even capture and reduce greenhouse emissions. This novel intersection of technology and environment could hold the key to a sustainable future, making this a must-read for anyone interested in the potential of AI to transform our world.\n\n2. AI and Mental Health: With the increasing prevalence of mental health issues worldwide, innovative solutions are needed more than ever. This article will delve into the cutting-edge domain of AI and mental health, exploring how machine learning algorithms are helping to diagnose conditions, personalize treatments, and even predict the onset of mental disorders. This exploration of AI's potential in mental health not only sheds light on the future of healthcare but also opens a dialogue on the ethical considerations involved.\n\n3. The Ethical Implications of AI: As AI continues to permeate our lives, it brings with it a host of ethical considerations. This article will unravel the complex ethical terrain of AI, from issues of privacy and consent to its potential for bias and discrimination. By diving into the philosophical underpinnings of AI and its societal implications, this article will provoke thought and stimulate discussion on how we can ensure a fair and equitable AI-enabled future.\n\n4. How AI is Revolutionizing E-commerce: In the fiercely competitive world of e-commerce, AI is proving to be a game-changer. This article will take you on a journey through the world of AI-enhanced e-commerce, showcasing how machine learning algorithms are optimizing logistics, personalizing shopping experiences, and even predicting consumer behavior. This deep dive into AI's transformative impact on e-commerce is a must-read for anyone interested in the future of business and technology.\n\n5. AI in Space Exploration: The final frontier of space exploration is being redefined by the advent of AI. This article will take you on an interstellar journey through the role of AI in space exploration, from autonomous spacecraft navigation to the search for extraterrestrial life. By peering into the cosmos through the lens of AI, this article offers a glimpse into the future of space exploration and the infinite possibilities that AI holds."""
)
@@ -197,6 +174,7 @@ def test_crew_with_delegating_agents():
tasks = [
Task(
description="Produce and amazing 1 paragraph draft of an article about AI Agents.",
expected_output="A 4 paragraph article about AI.",
agent=ceo,
)
]
@@ -209,7 +187,7 @@ def test_crew_with_delegating_agents():
assert (
crew.kickoff()
== '"AI agents, the digital masterminds at the heart of the 21st-century revolution, are shaping a new era of intelligence and innovation. They are autonomous entities, capable of observing their environment, making decisions, and acting on them, all in pursuit of a specific goal. From streamlining operations in logistics to personalizing customer experiences in retail, AI agents are transforming how businesses operate. But their potential extends far beyond the corporate world. They are the sentinels protecting our digital frontiers, the virtual assistants making our lives easier, and the unseen hands guiding autonomous vehicles. As this technology evolves, AI agents will play an increasingly central role in our world, ushering in an era of unprecedented efficiency, personalization, and productivity. But with great power comes great responsibility, and understanding and harnessing this potential responsibly will be one of our greatest challenges and opportunities in the coming years."'
== "The Senior Writer has produced a fantastic 4 paragraph article on AI:\n\n\"Artificial Intelligence, or AI, is often considered the stuff of science fiction, but it is very much a reality in today's world. In simplest terms, AI is a branch of computer science that aims to create machines that mimic human intelligence - think self-driving cars, voice assistants like Siri or Alexa, even your Netflix recommendations. These are all examples of AI in action, silently making our lives easier and more efficient.\n\nThe applications of AI are as vast as our imagination. In healthcare, AI is used to predict diseases and personalize patient care. In finance, algorithms can analyze market trends and make investment decisions. The education sector uses AI to customize learning and identify areas where students need help. Even in creative fields like music and art, AI is making its mark by creating new pieces that are hard to distinguish from those made by humans.\n\nAI's potential for the future is staggering. As technology advances, so too does the complexity and capabilities of AI. It's predicted that AI will play a significant role in tackling some of humanity's biggest challenges, such as climate change and global health crises. Imagine AI systems predicting natural disasters with enough time for us to take preventative measures, or developing new, effective treatments for diseases through data analysis.\n\nHowever, this brave new world does not come without its challenges. Ethical issues are at the forefront, with concerns over privacy and the potential misuse of AI. There's also the question of job displacement due to automation, and the need for laws and regulations to keep pace with this rapidly advancing technology. Despite these hurdles, the promise of AI and its ability to transform our world is an exciting prospect, one that we are only just beginning to explore.\""
)
@@ -277,18 +255,14 @@ def test_crew_verbose_levels_output(capsys):
@pytest.mark.vcr(filter_headers=["authorization"])
def test_cache_hitting_between_agents():
from unittest.mock import patch
from unittest.mock import call, patch
from langchain.tools import tool
@tool
def multiplier(numbers) -> float:
"""Useful for when you need to multiply two numbers together.
The input to this tool should be a comma separated list of numbers of
length two, representing the two numbers you want to multiply together.
For example, `1,2` would be the input if you wanted to multiply 1 by 2."""
a, b = numbers.split(",")
return int(a) * int(b)
def multiplier(first_number: int, second_number: int) -> float:
"""Useful for when you need to multiply two numbers together."""
return first_number * second_number
tasks = [
Task(
@@ -308,15 +282,16 @@ def test_cache_hitting_between_agents():
tasks=tasks,
)
assert crew._cache_handler._cache == {}
output = crew.kickoff()
assert crew._cache_handler._cache == {"multiplier-2,6": "12"}
assert output == "12"
with patch.object(CacheHandler, "read") as read:
read.return_value = "12"
crew.kickoff()
read.assert_called_with("multiplier", "2,6")
assert read.call_count == 2, "read was not called exactly twice"
# Check if read was called with the expected arguments
expected_calls = [
call(tool="multiplier", input={"first_number": 2, "second_number": 6}),
call(tool="multiplier", input={"first_number": 2, "second_number": 6}),
]
read.assert_has_calls(expected_calls, any_order=False)
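The rewritten assertions above pin down the cache contract: tool results are keyed by tool name plus input, as the old {"multiplier-2,6": "12"} check shows, and the new mock expects read(tool=..., input=...) with structured arguments. A minimal sketch of a handler satisfying both call sites, offered as an illustration rather than the library's exact implementation:

class CacheHandler:
    # Illustrative only: caches tool results under a "tool-input" key,
    # mirroring the "multiplier-2,6" entry asserted in the old test body.
    def __init__(self):
        self._cache = {}

    def add(self, tool, input, output):
        self._cache[f"{tool}-{input}"] = output

    def read(self, tool, input):
        return self._cache.get(f"{tool}-{input}")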
@pytest.mark.vcr(filter_headers=["authorization"])
@@ -356,6 +331,53 @@ def test_api_calls_throttling(capsys):
moveon.assert_called()
@pytest.mark.vcr(filter_headers=["authorization"])
def test_crew_full_output():
agent = Agent(
role="test role",
goal="test goal",
backstory="test backstory",
allow_delegation=False,
verbose=True,
)
task1 = Task(
description="just say hi!",
agent=agent,
)
task2 = Task(
description="just say hello!",
agent=agent,
)
crew = Crew(agents=[agent], tasks=[task1, task2], full_output=True)
result = crew.kickoff()
assert result == {
"final_output": "Hello!",
"tasks_outputs": [task1.output, task2.output],
}
def test_agents_rpm_is_never_set_if_crew_max_RPM_is_not_set():
agent = Agent(
role="test role",
goal="test goal",
backstory="test backstory",
allow_delegation=False,
verbose=True,
)
task = Task(
description="just say hi!",
agent=agent,
)
Crew(agents=[agent], tasks=[task], verbose=2)
assert agent._rpm_controller is None
def test_async_task_execution():
import threading
from unittest.mock import patch
@@ -402,3 +424,106 @@ def test_async_task_execution():
crew.kickoff()
start.assert_called()
join.assert_called()
def test_set_agents_step_callback():
from unittest.mock import patch
researcher_agent = Agent(
role="Researcher",
goal="Make the best research and analysis on content about AI and AI agents",
backstory="You're an expert researcher, specialized in technology, software engineering, AI and startups. You work as a freelancer and is now working on doing research and analysis for a new customer.",
allow_delegation=False,
)
list_ideas = Task(
description="Give me a list of 5 interesting ideas to explore for na article, what makes them unique and interesting.",
expected_output="Bullet point list of 5 important events.",
agent=researcher_agent,
async_execution=True,
)
crew = Crew(
agents=[researcher_agent],
process=Process.sequential,
tasks=[list_ideas],
step_callback=lambda: None,
)
with patch.object(Agent, "execute_task") as execute:
execute.return_value = "ok"
crew.kickoff()
assert researcher_agent.step_callback is not None
def test_dont_set_agents_step_callback_if_already_set():
from unittest.mock import patch
def agent_callback(_):
pass
def crew_callback(_):
pass
researcher_agent = Agent(
role="Researcher",
goal="Make the best research and analysis on content about AI and AI agents",
backstory="You're an expert researcher, specialized in technology, software engineering, AI and startups. You work as a freelancer and is now working on doing research and analysis for a new customer.",
allow_delegation=False,
step_callback=agent_callback,
)
list_ideas = Task(
description="Give me a list of 5 interesting ideas to explore for na article, what makes them unique and interesting.",
expected_output="Bullet point list of 5 important events.",
agent=researcher_agent,
async_execution=True,
)
crew = Crew(
agents=[researcher_agent],
process=Process.sequential,
tasks=[list_ideas],
step_callback=crew_callback,
)
with patch.object(Agent, "execute_task") as execute:
execute.return_value = "ok"
crew.kickoff()
assert researcher_agent.step_callback is not crew_callback
assert researcher_agent.step_callback is agent_callback
@pytest.mark.vcr(filter_headers=["authorization"])
def test_crew_function_calling_llm():
from unittest.mock import patch
from langchain.tools import tool
from langchain_openai import ChatOpenAI
llm = ChatOpenAI(model="gpt-3.5")
with patch.object(llm.client, "create", wraps=llm.client.create) as private_mock:
@tool
def learn_about_AI(topic) -> str:
"""Useful for when you need to learn about AI to write a paragraph about it."""
return "AI is a very broad field."
agent1 = Agent(
role="test role",
goal="test goal",
backstory="test backstory",
tools=[learn_about_AI],
)
essay = Task(
description="Write and then review an small paragraph on AI until it's AMAZING",
agent=agent1,
)
tasks = [essay]
print(agent1.function_calling_llm)
crew = Crew(agents=[agent1], tasks=tasks, function_calling_llm=llm)
print(agent1.function_calling_llm)
crew.kickoff()
private_mock.assert_called()
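Read together, the two step_callback tests above fix the propagation rule: a crew-level callback is copied onto an agent only when that agent set none of its own, and the crew-level function_calling_llm test follows the same pattern. A plausible sketch of the propagation rule (the helper name is hypothetical, not from the diff):

def _apply_crew_step_callback(crew_step_callback, agents):
    # An agent's explicitly configured callback always wins; the
    # crew-level callback only fills in where the agent set none.
    for agent in agents:
        if agent.step_callback is None:
            agent.step_callback = crew_step_callback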


@@ -74,7 +74,7 @@ def test_task_prompt_includes_expected_output():
with patch.object(Agent, "execute_task") as execute:
execute.return_value = "ok"
task.execute()
execute.assert_called_once_with(task=task._prompt(), context=None, tools=[])
execute.assert_called_once_with(task=task, context=None, tools=[])
def test_task_callback():
@@ -115,7 +115,7 @@ def test_execute_with_agent():
with patch.object(Agent, "execute_task", return_value="ok") as execute:
task.execute(agent=researcher)
execute.assert_called_once_with(task=task._prompt(), context=None, tools=[])
execute.assert_called_once_with(task=task, context=None, tools=[])
def test_async_execution():
@@ -135,4 +135,4 @@ def test_async_execution():
with patch.object(Agent, "execute_task", return_value="ok") as execute:
task.execute(agent=researcher)
execute.assert_called_once_with(task=task._prompt(), context=None, tools=[])
execute.assert_called_once_with(task=task, context=None, tools=[])
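All three hunks in this file record the same refactor: Agent.execute_task now receives the Task object itself rather than its pre-rendered prompt string, moving prompt construction into the agent. A runnable sketch of the new shape (class bodies are illustrative, not the library's code):

class Task:
    def _prompt(self) -> str:
        return "task description rendered as a prompt"

class Agent:
    def execute_task(self, task, context=None, tools=None):
        # Post-refactor the Task arrives intact and the agent renders
        # the prompt itself, instead of receiving a string from the caller.
        prompt = task._prompt()
        return f"ok: {prompt}"

assert Agent().execute_task(task=Task()).startswith("ok")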