Mirror of https://github.com/crewAIInc/crewAI.git
Synced 2025-12-16 12:28:30 +00:00

Compare commits: v0.36.0 ... bugfix/lan (14 commits)
Commits:
- 040e5a78d2
- b93632a53a
- 09938641cd
- 7acf0b2107
- 4eb4073661
- 7b53457ef3
- 691b094a40
- 68e9e54c88
- d0d99125c4
- 129000d01f
- 47f9d026dd
- b75b0b5552
- 3dd6249f1e
- 8451113039
31 docs/how-to/Force-Tool-Ouput-as-Result.md Normal file
@@ -0,0 +1,31 @@
---
title: Forcing Tool Output as Result
description: Learn how to force tool output as the result of an Agent's task in crewAI.
---

## Introduction

In CrewAI, you can force the output of a tool to be used as the result of an agent's task. This feature is useful when you want to ensure that the tool output is captured and returned as the task result, preventing the agent from modifying it during task execution.

## Forcing Tool Output as Result

To force the tool output as the result of an agent's task, set the `result_as_answer` parameter to `True` when adding the tool to the agent, as shown in the example below. This ensures that the tool output is captured and returned as the task result, without any modifications by the agent.

Here's an example of how to force the tool output as the result of an agent's task:

```python
# ...
# Define a custom tool that returns the result as the answer
coding_agent = Agent(
    role="Data Scientist",
    goal="Produce amazing reports on AI",
    backstory="You work with data and AI",
    tools=[MyCustomTool(result_as_answer=True)],
)
# ...
```

### Workflow in Action

1. **Task Execution**: The agent executes the task using the tool provided.
2. **Tool Output**: The tool generates the output, which is captured as the task result.
3. **Agent Interaction**: The agent may reflect on and take learnings from the tool output, but the output itself is not modified.
4. **Result Return**: The tool output is returned as the task result without any modifications.
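
For a fuller picture, here is a minimal end-to-end sketch. `MyCustomTool`, its description, and its return value are illustrative stand-ins; the `Task`/`Crew` wiring follows the usual crewAI pattern:

```python
from crewai import Agent, Crew, Task
from crewai_tools import BaseTool


class MyCustomTool(BaseTool):
    # Hypothetical tool; name, description, and output are placeholders.
    name: str = "My Custom Tool"
    description: str = "Returns a fixed analysis string."

    def _run(self) -> str:
        return "Verbatim tool output that becomes the task result."


coding_agent = Agent(
    role="Data Scientist",
    goal="Produce amazing reports on AI",
    backstory="You work with data and AI",
    # result_as_answer=True makes this tool's raw output the task result
    tools=[MyCustomTool(result_as_answer=True)],
)

report_task = Task(
    description="Produce a short report on AI.",
    expected_output="A short report.",
    agent=coding_agent,
)

crew = Crew(agents=[coding_agent], tasks=[report_task])
print(crew.kickoff())  # prints the tool's output unmodified
```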
@@ -127,7 +127,7 @@ llm = HuggingFaceHub(
```

## OpenAI Compatible API Endpoints
Switch between APIs and models seamlessly using environment variables, supporting platforms like FastChat, LM Studio, Groq, and Mistral AI.

### Configuration Examples
#### FastChat
@@ -144,6 +144,13 @@ OPENAI_API_BASE="http://localhost:1234/v1"
OPENAI_API_KEY="lm-studio"
```

#### Groq API
```sh
OPENAI_API_KEY=your-groq-api-key
OPENAI_MODEL_NAME='llama3-8b-8192'
OPENAI_API_BASE=https://api.groq.com/openai/v1
```
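
These variables are read by the OpenAI-compatible client that crewAI's default agent LLM is built on. A minimal sketch of the equivalent explicit configuration (it assumes the variables above are exported; with them set, the default agent LLM picks them up without this step):

```python
import os

from crewai import Agent
from langchain_openai import ChatOpenAI

# Explicit equivalent of the environment-variable configuration above.
groq_llm = ChatOpenAI(
    base_url=os.environ["OPENAI_API_BASE"],
    api_key=os.environ["OPENAI_API_KEY"],
    model=os.environ["OPENAI_MODEL_NAME"],
)

agent = Agent(role="Researcher", goal="...", backstory="...", llm=groq_llm)
```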

#### Mistral API
```sh
OPENAI_API_KEY=your-mistral-api-key
@@ -211,4 +218,4 @@ azure_agent = Agent(
```

## Conclusion
Integrating CrewAI with different LLMs expands the framework's versatility, allowing for customized, efficient AI solutions across various domains and platforms.
137 docs/how-to/Start-a-New-CrewAI-Project.md Normal file
@@ -0,0 +1,137 @@
---
title: Starting a New CrewAI Project
description: A comprehensive guide to starting a new CrewAI project, including the latest updates and project setup methods.
---

# Starting Your CrewAI Project

Welcome to the ultimate guide for starting a new CrewAI project. This document will walk you through the steps to create, customize, and run your CrewAI project, ensuring you have everything you need to get started.

## Prerequisites

We assume you have already installed CrewAI. If not, please refer to the [installation guide](how-to/Installing-CrewAI.md) to install CrewAI and its dependencies.

## Creating a New Project

To create a new project, run the following CLI command:

```shell
$ crewai create my_project
```

This command will create a new project folder with the following structure:

```shell
my_project/
├── .gitignore
├── pyproject.toml
├── README.md
└── src/
    └── my_project/
        ├── __init__.py
        ├── main.py
        ├── crew.py
        ├── tools/
        │   ├── custom_tool.py
        │   └── __init__.py
        └── config/
            ├── agents.yaml
            └── tasks.yaml
```

You can now start developing your project by editing the files in the `src/my_project` folder. The `main.py` file is the entry point of your project, and the `crew.py` file is where you define your agents and tasks.

## Customizing Your Project

To customize your project, you can:
- Modify `src/my_project/config/agents.yaml` to define your agents.
- Modify `src/my_project/config/tasks.yaml` to define your tasks.
- Modify `src/my_project/crew.py` to add your own logic, tools, and specific arguments (see the sketch after the examples below).
- Modify `src/my_project/main.py` to add custom inputs for your agents and tasks.
- Add your environment variables into the `.env` file.

### Example: Defining Agents and Tasks

#### agents.yaml

```yaml
researcher:
  role: >
    Job Candidate Researcher
  goal: >
    Find potential candidates for the job
  backstory: >
    You are adept at finding the right candidates by exploring various online
    resources. Your skill in identifying suitable candidates ensures the best
    match for job positions.
```

#### tasks.yaml

```yaml
research_candidates_task:
  description: >
    Conduct thorough research to find potential candidates for the specified job.
    Utilize various online resources and databases to gather a comprehensive list of potential candidates.
    Ensure that the candidates meet the job requirements provided.

    Job Requirements:
    {job_requirements}
  expected_output: >
    A list of 10 potential candidates with their contact information and brief profiles highlighting their suitability.
```
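
The generated `crew.py` ties these YAML definitions together. A minimal hedged sketch, assuming the decorator-based template that `crewai create` scaffolds (class and method names are illustrative):

```python
# src/my_project/crew.py — sketch of the scaffolded layout
from crewai import Agent, Crew, Process, Task
from crewai.project import CrewBase, agent, crew, task


@CrewBase
class MyProjectCrew:
    """my_project crew; config paths follow the generated layout."""

    agents_config = "config/agents.yaml"
    tasks_config = "config/tasks.yaml"

    @agent
    def researcher(self) -> Agent:
        # Pulls role/goal/backstory from agents.yaml
        return Agent(config=self.agents_config["researcher"], verbose=True)

    @task
    def research_candidates_task(self) -> Task:
        # Pulls description/expected_output from tasks.yaml
        return Task(config=self.tasks_config["research_candidates_task"])

    @crew
    def crew(self) -> Crew:
        return Crew(
            agents=self.agents,  # collected by the @agent decorator
            tasks=self.tasks,    # collected by the @task decorator
            process=Process.sequential,
        )
```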

## Installing Dependencies

To install the dependencies for your project, you can use Poetry. First, navigate to your project directory:

```shell
$ cd my_project
$ poetry lock
$ poetry install
```

This will install the dependencies specified in the `pyproject.toml` file.

## Interpolating Variables

Any variable interpolated in your `agents.yaml` and `tasks.yaml` files like `{variable}` will be replaced by the corresponding value supplied in the `main.py` file.

#### tasks.yaml

```yaml
research_task:
  description: >
    Conduct a thorough research about the customer and competitors in the context
    of {customer_domain}.
    Make sure you find any interesting and relevant information given the
    current year is 2024.
  expected_output: >
    A complete report on the customer and their customers and competitors,
    including their demographics, preferences, market positioning and audience engagement.
```

#### main.py

```python
# main.py
def run():
    inputs = {
        "customer_domain": "crewai.com"
    }
    MyProjectCrew(inputs).crew().kickoff(inputs=inputs)
```

## Running Your Project

To run your project, use the following command:

```shell
$ poetry run my_project
```
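
For reference, this command works because the project template declares a script entry point in `pyproject.toml`. A hedged sketch of that wiring (the exact script name in your generated file may differ):

```toml
# Hypothetical entry in pyproject.toml mapping the command to main.py's run()
[tool.poetry.scripts]
my_project = "my_project.main:run"
```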

This will initialize your crew of AI agents and begin task execution as defined in your configuration in the `main.py` file.

## Deploying Your Project

The easiest way to deploy your crew is through [CrewAI+](https://www.crewai.com/crewaiplus), where you can deploy your crew in a few clicks.
@@ -48,6 +48,11 @@ Cutting-edge framework for orchestrating role-playing, autonomous AI agents. By
<div style="width:30%">
  <h2>How-To Guides</h2>
  <ul>
    <li>
      <a href="./how-to/Start-a-New-CrewAI-Project">
        Starting Your crewAI Project
      </a>
    </li>
    <li>
      <a href="./how-to/Installing-CrewAI">
        Installing crewAI
@@ -88,6 +93,11 @@ Cutting-edge framework for orchestrating role-playing, autonomous AI agents. By
        Coding Agents
      </a>
    </li>
    <li>
      <a href="./how-to/Force-Tool-Ouput-as-Result">
        Forcing Tool Output as Result
      </a>
    </li>
    <li>
      <a href="./how-to/Human-Input-on-Execution">
        Human Input on Execution
@@ -4,7 +4,7 @@
The MDXSearchTool is in continuous development. Features may be added or removed, and functionality could change unpredictably as we refine the tool.

## Description
The MDX Search Tool is a component of the `crewai_tools` package aimed at facilitating advanced markdown language extraction. It enables users to effectively search and extract relevant information from MD files using query-based searches. This tool is invaluable for data analysis, information management, and research tasks, streamlining the process of finding specific information within large document collections.

## Installation
Before using the MDX Search Tool, ensure the `crewai_tools` package is installed. If it is not, you can install it with the following command:
@@ -59,4 +59,4 @@ tool = MDXSearchTool(
    ),
)
```
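
A minimal usage sketch for the tool described above; the file path is a placeholder, and the `mdx` keyword follows the tool's documented constructor:

```python
from crewai_tools import MDXSearchTool

# Search within a specific MDX file (path is illustrative)
tool = MDXSearchTool(mdx="docs/example.mdx")

# Or initialize without a path to allow searching any MDX file provided later
tool = MDXSearchTool()
```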
@@ -131,6 +131,7 @@ nav:
      - Using LangChain Tools: 'core-concepts/Using-LangChain-Tools.md'
      - Using LlamaIndex Tools: 'core-concepts/Using-LlamaIndex-Tools.md'
  - How to Guides:
      - Starting Your crewAI Project: 'how-to/Start-a-New-CrewAI-Project.md'
      - Installing CrewAI: 'how-to/Installing-CrewAI.md'
      - Getting Started: 'how-to/Creating-a-Crew-and-kick-it-off.md'
      - Create Custom Tools: 'how-to/Create-Custom-Tools.md'
@@ -140,6 +141,7 @@ nav:
      - Connecting to any LLM: 'how-to/LLM-Connections.md'
      - Customizing Agents: 'how-to/Customizing-Agents.md'
      - Coding Agents: 'how-to/Coding-Agents.md'
      - Forcing Tool Output as Result: 'how-to/Force-Tool-Ouput-as-Result.md'
      - Human Input on Execution: 'how-to/Human-Input-on-Execution.md'
      - Kickoff a Crew Asynchronously: 'how-to/Kickoff-async.md'
      - Kickoff a Crew for a List: 'how-to/Kickoff-for-each.md'
64 poetry.lock generated
@@ -840,13 +840,13 @@ files = [

[[package]]
name = "crewai-tools"
version = "0.4.7"
version = "0.4.8"
description = "Set of tools for the crewAI framework"
optional = false
python-versions = "<=3.13,>=3.10"
files = [
    {file = "crewai_tools-0.4.7-py3-none-any.whl", hash = "sha256:3ff04b2da07d2c48e72f898511295b4a10038dd3e4fe859baa93fec1fb8baf8e"},
    {file = "crewai_tools-0.4.7.tar.gz", hash = "sha256:4502a5e0ab94a7dae6638d000768f80049918909ca5338cdebc280351b3ce003"},
    {file = "crewai_tools-0.4.8-py3-none-any.whl", hash = "sha256:628b08515ee0e06c751da1dd66b0cff70c9b2644775891c8f59883cb5debfef4"},
    {file = "crewai_tools-0.4.8.tar.gz", hash = "sha256:ae190bd187f980163523c86ee7e1eb2ed78896f935d6caff98908dd7ab6c982b"},
]

[package.dependencies]
@@ -1054,13 +1054,13 @@ idna = ">=2.0.0"

[[package]]
name = "embedchain"
version = "0.1.115"
version = "0.1.116"
description = "Simplest open source retrieval (RAG) framework"
optional = false
python-versions = "<=3.13,>=3.9"
files = [
    {file = "embedchain-0.1.115-py3-none-any.whl", hash = "sha256:6f1842edea7780e6b1d21d073b72b2c916f440411c32e758165c88141b2ac1ec"},
    {file = "embedchain-0.1.115.tar.gz", hash = "sha256:8b259a3307e022924c99cf82b17789f141d157f11e53beb3bc82bf49efa2e31e"},
    {file = "embedchain-0.1.116-py3-none-any.whl", hash = "sha256:388835d047f9ff4542ebf50e3fa633ef596db262cbe506195ee4976b91a49172"},
    {file = "embedchain-0.1.116.tar.gz", hash = "sha256:3e4d6418df2e749c2bd3cd3153c3857cbecd7227afe40b87d5ac3df629c394b2"},
]

[package.dependencies]
@@ -1075,6 +1075,7 @@ langchain = ">0.2,<=0.3"
langchain-cohere = ">=0.1.4,<0.2.0"
langchain-community = ">=0.2.6,<0.3.0"
langchain-openai = ">=0.1.7,<0.2.0"
memzero = ">=0.0.7,<0.0.8"
openai = ">=1.1.1"
posthog = ">=3.0.2,<4.0.0"
pypdf = ">=4.0.1,<5.0.0"
@@ -2051,13 +2052,13 @@ pyreadline3 = {version = "*", markers = "sys_platform == \"win32\" and python_ve

[[package]]
name = "identify"
version = "2.5.36"
version = "2.6.0"
description = "File identification library for Python"
optional = false
python-versions = ">=3.8"
files = [
    {file = "identify-2.5.36-py2.py3-none-any.whl", hash = "sha256:37d93f380f4de590500d9dba7db359d0d3da95ffe7f9de1753faa159e71e7dfa"},
    {file = "identify-2.5.36.tar.gz", hash = "sha256:e5e00f54165f9047fbebeb4a560f9acfb8af4c88232be60a488e9b68d122745d"},
    {file = "identify-2.6.0-py2.py3-none-any.whl", hash = "sha256:e79ae4406387a9d300332b5fd366d8994f1525e8414984e1a59e058b2eda2dd0"},
    {file = "identify-2.6.0.tar.gz", hash = "sha256:cb171c685bdc31bcc4c1734698736a7d5b6c8bf2e0c15117f4d469c8640ae5cf"},
]

[package.extras]
@@ -2281,6 +2282,17 @@ files = [
    {file = "jmespath-1.0.1.tar.gz", hash = "sha256:90261b206d6defd58fdd5e85f478bf633a2901798906be2ad389150c5c60edbe"},
]

[[package]]
name = "json-repair"
version = "0.25.2"
description = "A package to repair broken json strings"
optional = false
python-versions = ">=3.7"
files = [
    {file = "json_repair-0.25.2-py3-none-any.whl", hash = "sha256:51d67295c3184b6c41a3572689661c6128cef6cfc9fb04db63130709adfc5bf0"},
    {file = "json_repair-0.25.2.tar.gz", hash = "sha256:161a56d7e6bbfd4cad3a614087e3e0dbd0e10d402dd20dc7db418432428cb32b"},
]

[[package]]
name = "jsonpatch"
version = "1.33"
@@ -2394,8 +2406,8 @@ langchain-core = ">=0.2.10,<0.3.0"
langchain-text-splitters = ">=0.2.0,<0.3.0"
langsmith = ">=0.1.17,<0.2.0"
numpy = [
    {version = ">=1.26.0,<2.0.0", markers = "python_version >= \"3.12\""},
    {version = ">=1,<2", markers = "python_version < \"3.12\""},
    {version = ">=1.26.0,<2.0.0", markers = "python_version >= \"3.12\""},
]
pydantic = ">=1,<3"
PyYAML = ">=5.3"
@@ -2436,8 +2448,8 @@ langchain = ">=0.2.6,<0.3.0"
langchain-core = ">=0.2.10,<0.3.0"
langsmith = ">=0.1.0,<0.2.0"
numpy = [
    {version = ">=1.26.0,<2.0.0", markers = "python_version >= \"3.12\""},
    {version = ">=1,<2", markers = "python_version < \"3.12\""},
    {version = ">=1.26.0,<2.0.0", markers = "python_version >= \"3.12\""},
]
PyYAML = ">=5.3"
requests = ">=2,<3"
@@ -2460,8 +2472,8 @@ jsonpatch = ">=1.33,<2.0"
langsmith = ">=0.1.75,<0.2.0"
packaging = ">=23.2,<25"
pydantic = [
    {version = ">=2.7.4,<3.0.0", markers = "python_full_version >= \"3.12.4\""},
    {version = ">=1,<3", markers = "python_full_version < \"3.12.4\""},
    {version = ">=2.7.4,<3.0.0", markers = "python_full_version >= \"3.12.4\""},
]
PyYAML = ">=5.3"
tenacity = ">=8.1.0,<8.4.0 || >8.4.0,<9.0.0"
@@ -2498,20 +2510,20 @@ langchain-core = ">=0.2.10,<0.3.0"

[[package]]
name = "langsmith"
version = "0.1.83"
version = "0.1.84"
description = "Client library to connect to the LangSmith LLM Tracing and Evaluation Platform."
optional = false
python-versions = "<4.0,>=3.8.1"
files = [
    {file = "langsmith-0.1.83-py3-none-any.whl", hash = "sha256:f54d8cd8479b648b6339f3f735d19292c3516d080f680933ecdca3eab4b67ed3"},
    {file = "langsmith-0.1.83.tar.gz", hash = "sha256:5cdd947212c8ad19adb992c06471c860185a777daa6859bb47150f90daf64bf3"},
    {file = "langsmith-0.1.84-py3-none-any.whl", hash = "sha256:01f3c6390dba26c583bac8dd0e551ce3d0509c7f55cad714db0b5c8d36e4c7ff"},
    {file = "langsmith-0.1.84.tar.gz", hash = "sha256:5220c0439838b9a5bd320fd3686be505c5083dcee22d2452006c23891153bea1"},
]

[package.dependencies]
orjson = ">=3.9.14,<4.0.0"
pydantic = [
    {version = ">=2.7.4,<3.0.0", markers = "python_full_version >= \"3.12.4\""},
    {version = ">=1,<3", markers = "python_full_version < \"3.12.4\""},
    {version = ">=2.7.4,<3.0.0", markers = "python_full_version >= \"3.12.4\""},
]
requests = ">=2,<3"

@@ -2672,6 +2684,22 @@ files = [
    {file = "mdurl-0.1.2.tar.gz", hash = "sha256:bb413d29f5eea38f31dd4754dd7377d4465116fb207585f97bf925588687c1ba"},
]

[[package]]
name = "memzero"
version = "0.0.7"
description = "Long-term memory for AI Agents"
optional = false
python-versions = "<4.0,>=3.9"
files = [
    {file = "memzero-0.0.7-py3-none-any.whl", hash = "sha256:65f6da88d46263dbc05621fcd01bd09616d0e7f082d55ed9899dc2152491ffd2"},
    {file = "memzero-0.0.7.tar.gz", hash = "sha256:0c1f413d8ee0ade955fe9f8b8f5aff2cf58bc94869537aca62139db3d9f50725"},
]

[package.dependencies]
httpx = ">=0.27.0,<0.28.0"
posthog = ">=3.5.0,<4.0.0"
pydantic = ">=2.7.3,<3.0.0"

[[package]]
name = "mergedeep"
version = "1.3.4"
@@ -3972,8 +4000,8 @@ files = [
annotated-types = ">=0.4.0"
pydantic-core = "2.20.1"
typing-extensions = [
    {version = ">=4.12.2", markers = "python_version >= \"3.13\""},
    {version = ">=4.6.1", markers = "python_version < \"3.13\""},
    {version = ">=4.12.2", markers = "python_version >= \"3.13\""},
]

[package.extras]
@@ -6073,4 +6101,4 @@ tools = ["crewai-tools"]

[metadata]
lock-version = "2.0"
python-versions = ">=3.10,<=3.13"
content-hash = "4f3e5fddb5f0fc8fd143a8abe947ecac443213d595bd0eeed745ccb82dac2312"
content-hash = "2cf5a3904e7cbcfebb85e198b6035252d47213a9b0dd3dd51837516e03b38d3e"
@@ -21,13 +21,14 @@ opentelemetry-sdk = "^1.22.0"
opentelemetry-exporter-otlp-proto-http = "^1.22.0"
instructor = "1.3.3"
regex = "^2023.12.25"
crewai-tools = { version = "^0.4.7", optional = true }
crewai-tools = { version = "^0.4.8", optional = true }
click = "^8.1.7"
python-dotenv = "^1.0.0"
appdirs = "^1.4.4"
jsonref = "^1.1.0"
agentops = { version = "^0.1.9", optional = true }
embedchain = "^0.1.114"
json-repair = "^0.25.2"

[tool.poetry.extras]
tools = ["crewai-tools"]
@@ -45,7 +46,7 @@ mkdocs-material = { extras = ["imaging"], version = "^9.5.7" }
mkdocs-material-extensions = "^1.3.1"
pillow = "^10.2.0"
cairosvg = "^2.7.1"
crewai-tools = "^0.4.7"
crewai-tools = "^0.4.8"

[tool.poetry.group.test.dependencies]
pytest = "^8.0.0"
@@ -1,13 +1,14 @@
import os
from inspect import signature
from typing import Any, List, Optional, Tuple

from langchain.agents.agent import RunnableAgent
from langchain.agents.tools import BaseTool
from langchain.agents.tools import tool as LangChainTool
from langchain.tools.render import render_text_description
from langchain_core.agents import AgentAction
from langchain_core.callbacks import BaseCallbackHandler
from langchain_openai import ChatOpenAI
from pydantic import Field, InstanceOf, model_validator
from pydantic import Field, InstanceOf, PrivateAttr, model_validator

from crewai.agents import CacheHandler, CrewAgentExecutor, CrewAgentParser
from crewai.agents.agent_builder.base_agent import BaseAgent
@@ -54,8 +55,11 @@ class Agent(BaseAgent):
            tools: Tools at agents disposal
            step_callback: Callback to be executed after each step of the agent execution.
            callbacks: A list of callback functions from the langchain library that are triggered during the agent's execution process
            allow_code_execution: Enable code execution for the agent.
            max_retry_limit: Maximum number of retries for an agent to execute a task when an error occurs.
    """

    _times_executed: int = PrivateAttr(default=0)
    max_execution_time: Optional[int] = Field(
        default=None,
        description="Maximum execution time for an agent to execute a task",
@@ -96,6 +100,10 @@ class Agent(BaseAgent):
    allow_code_execution: Optional[bool] = Field(
        default=False, description="Enable code execution for the agent."
    )
    max_retry_limit: int = Field(
        default=2,
        description="Maximum number of retries for an agent to execute a task when an error occurs.",
    )

    def __init__(__pydantic_self__, **data):
        config = data.pop("config", {})
@@ -167,14 +175,15 @@ class Agent(BaseAgent):
            if memory.strip() != "":
                task_prompt += self.i18n.slice("memory").format(memory=memory)

        tools = tools or self.tools

        parsed_tools = self._parse_tools(tools or [])  # type: ignore # Argument 1 to "_parse_tools" of "Agent" has incompatible type "list[Any] | None"; expected "list[Any]"
        tools = tools or self.tools or []
        parsed_tools = self._parse_tools(tools)
        self.create_agent_executor(tools=tools)
        self.agent_executor.tools = parsed_tools
        self.agent_executor.task = task

        self.agent_executor.tools_description = render_text_description(parsed_tools)
        self.agent_executor.tools_description = self._render_text_description_and_args(
            parsed_tools
        )
        self.agent_executor.tools_names = self.__tools_names(parsed_tools)

        if self.crew and self.crew._train:
@@ -182,13 +191,20 @@ class Agent(BaseAgent):
        else:
            task_prompt = self._use_trained_data(task_prompt=task_prompt)

        result = self.agent_executor.invoke(
            {
                "input": task_prompt,
                "tool_names": self.agent_executor.tools_names,
                "tools": self.agent_executor.tools_description,
            }
        )["output"]
        try:
            result = self.agent_executor.invoke(
                {
                    "input": task_prompt,
                    "tool_names": self.agent_executor.tools_names,
                    "tools": self.agent_executor.tools_description,
                }
            )["output"]
        except Exception as e:
            self._times_executed += 1
            if self._times_executed > self.max_retry_limit:
                raise e
            self.execute_task(task, context, tools)

        if self.max_rpm:
            self._rpm_controller.stop_rpm_counter()

@@ -220,7 +236,7 @@ class Agent(BaseAgent):
        Returns:
            An instance of the CrewAgentExecutor class.
        """
        tools = tools or self.tools
        tools = tools or self.tools or []

        agent_args = {
            "input": lambda x: x["input"],
@@ -315,6 +331,7 @@ class Agent(BaseAgent):
        tools_list = []
        for tool in tools:
            tools_list.append(tool)

        return tools_list

    def _training_handler(self, task_prompt: str) -> str:
@@ -341,6 +358,52 @@ class Agent(BaseAgent):
        )
        return task_prompt

    def _render_text_description(self, tools: List[BaseTool]) -> str:
        """Render the tool name and description in plain text.

        Output will be in the format of:

        .. code-block:: markdown

            search: This tool is used for search
            calculator: This tool is used for math
        """
        description = "\n".join(
            [
                f"Tool name: {tool.name}\nTool description:\n{tool.description}"
                for tool in tools
            ]
        )

        return description

    def _render_text_description_and_args(self, tools: List[BaseTool]) -> str:
        """Render the tool name, description, and args in plain text.

        Output will be in the format of:

        .. code-block:: markdown

            search: This tool is used for search, args: {"query": {"type": "string"}}
            calculator: This tool is used for math, \
            args: {"expression": {"type": "string"}}
        """
        tool_strings = []
        for tool in tools:
            args_schema = str(tool.args)
            if hasattr(tool, "func") and tool.func:
                sig = signature(tool.func)
                description = (
                    f"Tool Name: {tool.name}{sig}\nTool Description: {tool.description}"
                )
            else:
                description = (
                    f"Tool Name: {tool.name}\nTool Description: {tool.description}"
                )
            tool_strings.append(f"{description}\nTool Arguments: {args_schema}")

        return "\n".join(tool_strings)

    @staticmethod
    def __tools_names(tools) -> str:
        return ", ".join([t.name for t in tools])
@@ -24,6 +24,7 @@ class BaseAgentTools(BaseModel, ABC):
        is_list = coworker.startswith("[") and coworker.endswith("]")
        if is_list:
            coworker = coworker[1:-1].split(",")[0]

        return coworker

    def delegate_work(
@@ -40,11 +41,13 @@ class BaseAgentTools(BaseModel, ABC):
        coworker = self._get_coworker(coworker, **kwargs)
        return self._execute(coworker, question, context)

    def _execute(self, agent: Union[str, None], task: str, context: Union[str, None]):
    def _execute(
        self, agent_name: Union[str, None], task: str, context: Union[str, None]
    ):
        """Execute the command."""
        try:
            if agent is None:
                agent = ""
            if agent_name is None:
                agent_name = ""

            # It is important to remove the quotes from the agent name.
            # The reason we have to do this is because less-powerful LLM's
@@ -53,7 +56,7 @@ class BaseAgentTools(BaseModel, ABC):
            # {"task": "....", "coworker": "....
            # when it should look like this:
            # {"task": "....", "coworker": "...."}
            agent_name = agent.casefold().replace('"', "").replace("\n", "")
            agent_name = agent_name.casefold().replace('"', "").replace("\n", "")

            agent = [  # type: ignore # Incompatible types in assignment (expression has type "list[BaseAgent]", variable has type "str | None")
                available_agent
@@ -75,9 +78,9 @@ class BaseAgentTools(BaseModel, ABC):
            )

        agent = agent[0]
        task = Task(  # type: ignore # Incompatible types in assignment (expression has type "Task", variable has type "str")
        task_with_assigned_agent = Task(  # type: ignore # Incompatible types in assignment (expression has type "Task", variable has type "str")
            description=task,
            agent=agent,
            expected_output="Your best answer to your coworker asking you this, accounting for the context shared.",
        )
        return agent.execute_task(task, context)  # type: ignore # "str" has no attribute "execute_task"
        return agent.execute_task(task_with_assigned_agent, context)
@@ -1,7 +1,7 @@
from abc import ABC, abstractmethod
from typing import Any, Optional

from pydantic import BaseModel, Field, PrivateAttr
from pydantic import BaseModel, Field


class OutputConverter(BaseModel, ABC):
@@ -21,13 +21,12 @@ class OutputConverter(BaseModel, ABC):
        max_attempts (int): Maximum number of conversion attempts (default: 3).
    """

    _is_gpt: bool = PrivateAttr(default=True)
    text: str = Field(description="Text to be converted.")
    llm: Any = Field(description="The language model to be used to convert the text.")
    model: Any = Field(description="The model to be used to convert the text.")
    instructions: str = Field(description="Conversion instructions to the LLM.")
    max_attempts: Optional[int] = Field(
        description="Max number of attempts to try to get the output formated.",
        description="Max number of attempts to try to get the output formatted.",
        default=3,
    )

@@ -41,7 +40,8 @@ class OutputConverter(BaseModel, ABC):
        """Convert text to json."""
        pass

    @abstractmethod  # type: ignore # Name "_is_gpt" already defined on line 25
    def _is_gpt(self, llm):  # type: ignore # Name "_is_gpt" already defined on line 25
    @property
    @abstractmethod
    def is_gpt(self) -> bool:
        """Return if llm provided is of gpt from openai."""
        pass
@@ -1,14 +1,6 @@
import threading
import time
from typing import (
    Any,
    Dict,
    Iterator,
    List,
    Optional,
    Tuple,
    Union,
)
from typing import Any, Dict, Iterator, List, Optional, Tuple, Union

from langchain.agents import AgentExecutor
from langchain.agents.agent import ExceptionTool
@@ -19,9 +11,7 @@ from langchain_core.tools import BaseTool
from langchain_core.utils.input import get_color_mapping
from pydantic import InstanceOf

from crewai.agents.agent_builder.base_agent_executor_mixin import (
    CrewAgentExecutorMixin,
)
from crewai.agents.agent_builder.base_agent_executor_mixin import CrewAgentExecutorMixin
from crewai.agents.tools_handler import ToolsHandler
from crewai.tools.tool_usage import ToolUsage, ToolUsageErrorException
from crewai.utilities import I18N
@@ -1,6 +1,7 @@
import re
from typing import Any, Union

from json_repair import repair_json
from langchain.agents.output_parsers import ReActSingleInputOutputParser
from langchain_core.agents import AgentAction, AgentFinish
from langchain_core.exceptions import OutputParserException
@@ -48,11 +49,15 @@ class CrewAgentParser(ReActSingleInputOutputParser):
                raise OutputParserException(
                    f"{FINAL_ANSWER_AND_PARSABLE_ACTION_ERROR_MESSAGE}: {text}"
                )
            action = action_match.group(1).strip()
            action_input = action_match.group(2)
            tool_input = action_input.strip(" ")
            tool_input = tool_input.strip('"')
            return AgentAction(action, tool_input, text)
            action = action_match.group(1)
            clean_action = self._clean_action(action)

            action_input = action_match.group(2).strip()

            tool_input = action_input.strip(" ").strip('"')
            safe_tool_input = self._safe_repair_json(tool_input)

            return AgentAction(clean_action, safe_tool_input, text)

        elif includes_answer:
            return AgentFinish(
@@ -87,3 +92,30 @@ class CrewAgentParser(ReActSingleInputOutputParser):
            llm_output=text,
            send_to_llm=True,
        )

    def _clean_action(self, text: str) -> str:
        """Clean action string by removing non-essential formatting characters."""
        return re.sub(r"^\s*\*+\s*|\s*\*+\s*$", "", text).strip()

    def _safe_repair_json(self, tool_input: str) -> str:
        UNABLE_TO_REPAIR_JSON_RESULTS = ['""', "{}"]

        # Skip repair if the input starts and ends with square brackets
        # Explanation: The JSON parser has issues handling inputs that are enclosed in square brackets ('[]').
        # These are typically valid JSON arrays or strings that do not require repair. Attempting to repair such inputs
        # might lead to unintended alterations, such as wrapping the entire input in additional layers or modifying
        # the structure in a way that changes its meaning. By skipping the repair for inputs that start and end with
        # square brackets, we preserve the integrity of these valid JSON structures and avoid unnecessary modifications.
        if tool_input.startswith("[") and tool_input.endswith("]"):
            return tool_input

        # Before repair, handle common LLM issues:
        # 1. Replace """ with " to avoid JSON parser errors

        tool_input = tool_input.replace('"""', '"')

        result = repair_json(tool_input)
        if result in UNABLE_TO_REPAIR_JSON_RESULTS:
            return tool_input

        return str(result)
@@ -12,4 +12,4 @@ reporting_task:
    Make sure the report is detailed and contains any and all relevant information.
  expected_output: >
    A fully fledge reports with the mains topics, each with a full section of information.
    Formated as markdown with out '```'
    Formatted as markdown without '```'
@@ -1,35 +1,42 @@
import asyncio
import json
import uuid
from concurrent.futures import Future
from typing import Any, Dict, List, Optional, Tuple, Union

from langchain_core.callbacks import BaseCallbackHandler
from pydantic import (
    UUID4,
    BaseModel,
    ConfigDict,
    Field,
    InstanceOf,
    Json,
    PrivateAttr,
    field_validator,
    model_validator,
)
from pydantic_core import PydanticCustomError

from crewai.agent import Agent
from crewai.agents.agent_builder.base_agent import BaseAgent
from crewai.agents.cache import CacheHandler
from crewai.crews.crew_output import CrewOutput
from crewai.memory.entity.entity_memory import EntityMemory
from crewai.memory.long_term.long_term_memory import LongTermMemory
from crewai.memory.short_term.short_term_memory import ShortTermMemory
from crewai.process import Process
from crewai.task import Task
from crewai.tasks.task_output import TaskOutput
from crewai.telemetry import Telemetry
from crewai.tools.agent_tools import AgentTools
from crewai.utilities import I18N, FileHandler, Logger, RPMController
from crewai.utilities.constants import TRAINED_AGENTS_DATA_FILE, TRAINING_DATA_FILE
from crewai.utilities.evaluators.task_evaluator import TaskEvaluator
from crewai.utilities.formatter import (
    aggregate_raw_outputs_from_task_outputs,
    aggregate_raw_outputs_from_tasks,
)
from crewai.utilities.training_handler import CrewTrainingHandler

try:
@@ -57,7 +64,6 @@ class Crew(BaseModel):
        max_rpm: Maximum number of requests per minute for the crew execution to be respected.
        prompt_file: Path to the prompt json file to be used for the crew.
        id: A unique identifier for the crew instance.
        full_output: Whether the crew should return the full output with all tasks outputs and token usage metrics or just the final output.
        task_callback: Callback to be executed after each task for every agents execution.
        step_callback: Callback to be executed after each step for every agents execution.
        share_crew: Whether you want to share the complete crew information and execution with crewAI to make the library better, and allow us to train models.
@@ -93,10 +99,6 @@ class Crew(BaseModel):
        default=None,
        description="Metrics for the LLM usage during all tasks execution.",
    )
    full_output: Optional[bool] = Field(
        default=False,
        description="Whether the crew should return the full output with all tasks outputs and token usage metrics or just the final output.",
    )
    manager_llm: Optional[Any] = Field(
        description="Language model that will run the agent.", default=None
    )
@@ -168,7 +170,6 @@ class Crew(BaseModel):
        self._rpm_controller = RPMController(max_rpm=self.max_rpm, logger=self._logger)
        self._telemetry = Telemetry()
        self._telemetry.set_tracer()
        self._telemetry.crew_creation(self)
        return self

    @model_validator(mode="after")
@@ -252,6 +253,63 @@

        return self

    @model_validator(mode="after")
    def validate_end_with_at_most_one_async_task(self):
        """Validates that the crew ends with at most one asynchronous task."""
        final_async_task_count = 0

        # Traverse tasks backward
        for task in reversed(self.tasks):
            if task.async_execution:
                final_async_task_count += 1
            else:
                break  # Stop traversing as soon as a non-async task is encountered

        if final_async_task_count > 1:
            raise PydanticCustomError(
                "async_task_count",
                "The crew must end with at most one asynchronous task.",
                {},
            )

        return self

    @model_validator(mode="after")
    def validate_async_task_cannot_include_sequential_async_tasks_in_context(self):
        """
        Validates that if a task is set to be executed asynchronously,
        it cannot include other asynchronous tasks in its context unless
        separated by a synchronous task.
        """
        for i, task in enumerate(self.tasks):
            if task.async_execution and task.context:
                for context_task in task.context:
                    if context_task.async_execution:
                        for j in range(i - 1, -1, -1):
                            if self.tasks[j] == context_task:
                                raise ValueError(
                                    f"Task '{task.description}' is asynchronous and cannot include other sequential asynchronous tasks in its context."
                                )
                            if not self.tasks[j].async_execution:
                                break
        return self

    @model_validator(mode="after")
    def validate_context_no_future_tasks(self):
        """Validates that a task's context does not include future tasks."""
        task_indices = {id(task): i for i, task in enumerate(self.tasks)}

        for task in self.tasks:
            if task.context:
                for context_task in task.context:
                    if id(context_task) not in task_indices:
                        continue  # Skip context tasks not in the main tasks list
                    if task_indices[id(context_task)] > task_indices[id(task)]:
                        raise ValueError(
                            f"Task '{task.description}' has a context dependency on a future task '{context_task.description}', which is not allowed."
                        )
        return self

    def _setup_from_config(self):
        assert self.config is not None, "Config should not be None."
@@ -314,12 +372,12 @@

    def kickoff(
        self,
        inputs: Optional[Dict[str, Any]] = {},
    ) -> Union[str, Dict[str, Any]]:
        inputs: Optional[Dict[str, Any]] = None,
    ) -> CrewOutput:
        """Starts the crew to work on its assigned tasks."""
        self._execution_span = self._telemetry.crew_execution_span(self, inputs)

        self._interpolate_inputs(inputs)  # type: ignore # Argument 1 to "_interpolate_inputs" of "Crew" has incompatible type "dict[str, Any] | None"; expected "dict[str, Any]"
        if inputs is not None:
            self._interpolate_inputs(inputs)
        self._set_tasks_callbacks()

        i18n = I18N(prompt_file=self.prompt_file)
@@ -345,8 +403,7 @@
        if self.process == Process.sequential:
            result = self._run_sequential_process()
        elif self.process == Process.hierarchical:
            result, manager_metrics = self._run_hierarchical_process()  # type: ignore # Incompatible types in assignment (expression has type "str | dict[str, Any]", variable has type "str")
            metrics.append(manager_metrics)
            result = self._run_hierarchical_process()  # type: ignore # Incompatible types in assignment (expression has type "str | dict[str, Any]", variable has type "str")
        else:
            raise NotImplementedError(
                f"The process '{self.process}' is not implemented yet."
@@ -359,11 +416,9 @@

        return result

    def kickoff_for_each(
        self, inputs: List[Dict[str, Any]]
    ) -> List[Union[str, Dict[str, Any]]]:
    def kickoff_for_each(self, inputs: List[Dict[str, Any]]) -> List[CrewOutput]:
        """Executes the Crew's workflow for each input in the list and aggregates results."""
        results = []
        results: List[CrewOutput] = []

        # Initialize the parent crew's usage metrics
        total_usage_metrics = {
@@ -387,13 +442,11 @@
        self.usage_metrics = total_usage_metrics
        return results

    async def kickoff_async(
        self, inputs: Optional[Dict[str, Any]] = {}
    ) -> Union[str, Dict]:
    async def kickoff_async(self, inputs: Optional[Dict[str, Any]] = {}) -> CrewOutput:
        """Asynchronous kickoff method to start the crew execution."""
        return await asyncio.to_thread(self.kickoff, inputs)

    async def kickoff_for_each_async(self, inputs: List[Dict]) -> List[Any]:
    async def kickoff_for_each_async(self, inputs: List[Dict]) -> List[CrewOutput]:
        crew_copies = [self.copy() for _ in inputs]

        async def run_crew(crew, input_data):
@@ -403,6 +456,10 @@
            asyncio.create_task(run_crew(crew_copies[i], inputs[i]))
            for i in range(len(inputs))
        ]
        tasks = [
            asyncio.create_task(run_crew(crew_copies[i], inputs[i]))
            for i in range(len(inputs))
        ]

        results = await asyncio.gather(*tasks)

@@ -419,11 +476,25 @@

        self.usage_metrics = total_usage_metrics

        total_usage_metrics = {
            "total_tokens": 0,
            "prompt_tokens": 0,
            "completion_tokens": 0,
            "successful_requests": 0,
        }
        for crew in crew_copies:
            if crew.usage_metrics:
                for key in total_usage_metrics:
                    total_usage_metrics[key] += crew.usage_metrics.get(key, 0)

        self.usage_metrics = total_usage_metrics

        return results
    def _run_sequential_process(self) -> Union[str, Dict[str, Any]]:
    def _run_sequential_process(self) -> CrewOutput:
        """Executes tasks sequentially and returns the final output."""
        task_output = None
        task_outputs: List[TaskOutput] = []
        futures: List[Tuple[Task, Future[TaskOutput]]] = []

        for task in self.tasks:
            if task.agent and task.agent.allow_delegation:
@@ -431,7 +502,30 @@
                    agent for agent in self.agents if agent != task.agent
                ]
                if len(self.agents) > 1 and len(agents_for_delegation) > 0:
                    task.tools += task.agent.get_delegation_tools(agents_for_delegation)
                    delegation_tools = task.agent.get_delegation_tools(
                        agents_for_delegation
                    )

                    # Add tools if they are not already in task.tools
                    for new_tool in delegation_tools:
                        # Find the index of the tool with the same name
                        existing_tool_index = next(
                            (
                                index
                                for index, tool in enumerate(task.tools or [])
                                if tool.name == new_tool.name
                            ),
                            None,
                        )
                        if not task.tools:
                            task.tools = []

                        if existing_tool_index is not None:
                            # Replace the existing tool
                            task.tools[existing_tool_index] = new_tool
                        else:
                            # Add the new tool
                            task.tools.append(new_tool)

            role = task.agent.role if task.agent is not None else "None"
            self._logger.log("debug", f"== Working Agent: {role}", color="bold_purple")
@@ -443,28 +537,80 @@
                self._file_handler.log(
                    agent=role, task=task.description, status="started"
                )
            output = task.execute(context=task_output)

            if not task.async_execution:
                task_output = output
            if task.async_execution:
                context = (
                    aggregate_raw_outputs_from_tasks(task.context)
                    if task.context
                    else aggregate_raw_outputs_from_task_outputs(task_outputs)
                )
                future = task.execute_async(
                    agent=task.agent, context=context, tools=task.tools
                )
                futures.append((task, future))
            else:
                # Before executing a synchronous task, wait for all async tasks to complete
                if futures:
                    # Clear task_outputs before processing async tasks
                    task_outputs = []
                    for future_task, future in futures:
                        task_output = future.result()
                        task_outputs.append(task_output)
                        self._process_task_result(future_task, task_output)

            role = task.agent.role if task.agent is not None else "None"
            self._logger.log("debug", f"== [{role}] Task output: {task_output}\n\n")
                    # Clear the futures list after processing all async results
                    futures.clear()

            if self.output_log_file:
                self._file_handler.log(agent=role, task=task_output, status="completed")
                context = (
                    aggregate_raw_outputs_from_tasks(task.context)
                    if task.context
                    else aggregate_raw_outputs_from_task_outputs(task_outputs)
                )
                task_output = task.execute_sync(
                    agent=task.agent, context=context, tools=task.tools
                )
                task_outputs = [task_output]
                self._process_task_result(task, task_output)

        self._finish_execution(task_output)
        if futures:
            # Clear task_outputs before processing async tasks
            task_outputs = []
            for future_task, future in futures:
                task_output = future.result()
                task_outputs.append(task_output)
                self._process_task_result(future_task, task_output)

        # Important: There should only be one task output in the list
        # If there are more or 0, something went wrong.
        if len(task_outputs) != 1:
            raise ValueError(
                "Something went wrong. Kickoff should return only one task output."
            )

        final_task_output = task_outputs[0]

        final_string_output = final_task_output.raw
        self._finish_execution(final_string_output)

        token_usage = self.calculate_usage_metrics()

        return self._format_output(task_output if task_output else "", token_usage)
        return CrewOutput(
            raw=final_task_output.raw,
            pydantic=final_task_output.pydantic,
            json_dict=final_task_output.json_dict,
            tasks_output=[task.output for task in self.tasks if task.output],
            token_usage=token_usage,
        )
    def _run_hierarchical_process(
        self,
    ) -> Tuple[Union[str, Dict[str, Any]], Dict[str, Any]]:
    def _process_task_result(self, task: Task, output: TaskOutput) -> None:
        role = task.agent.role if task.agent is not None else "None"
        self._logger.log("debug", f"== [{role}] Task output: {output}\n\n")
        if self.output_log_file:
            self._file_handler.log(agent=role, task=output, status="completed")

    # TODO: @joao, Breaking change. Changed return type. Usage metrics is included in crewoutput
    def _run_hierarchical_process(self) -> CrewOutput:
        """Creates and assigns a manager agent to make sure the crew completes the tasks."""

        i18n = I18N(prompt_file=self.prompt_file)
        if self.manager_agent is not None:
            self.manager_agent.allow_delegation = True
@@ -483,8 +629,10 @@
            )
            self.manager_agent = manager

        task_output = None
        task_outputs: List[TaskOutput] = []
        futures: List[Tuple[Task, Future[TaskOutput]]] = []

        # TODO: IF USER OVERRIDE THE CONTEXT, PASS THAT
        for task in self.tasks:
            self._logger.log("debug", f"Working Agent: {manager.role}")
            self._logger.log("info", f"Starting Task: {task.description}")
@@ -494,27 +642,70 @@
                    agent=manager.role, task=task.description, status="started"
                )

            if task.agent:
                manager.tools = task.agent.get_delegation_tools([task.agent])
            if task.async_execution:
                context = (
                    aggregate_raw_outputs_from_tasks(task.context)
                    if task.context
                    else aggregate_raw_outputs_from_task_outputs(task_outputs)
                )
                future = task.execute_async(
                    agent=manager, context=context, tools=manager.tools
                )
                futures.append((task, future))
            else:
                manager.tools = manager.get_delegation_tools(self.agents)
                task_output = task.execute(
                    agent=manager, context=task_output, tools=manager.tools
                # Before executing a synchronous task, wait for all async tasks to complete
                if futures:
                    # Clear task_outputs before processing async tasks
                    task_outputs = []
                    for future_task, future in futures:
                        task_output = future.result()
                        task_outputs.append(task_output)
                        self._process_task_result(future_task, task_output)

                    # Clear the futures list after processing all async results
                    futures.clear()

                context = (
                    aggregate_raw_outputs_from_tasks(task.context)
                    if task.context
                    else aggregate_raw_outputs_from_task_outputs(task_outputs)
                )
                task_output = task.execute_sync(
                    agent=manager, context=context, tools=manager.tools
                )
                task_outputs = [task_output]
                self._process_task_result(task, task_output)

        # Process any remaining async results
        if futures:
            # Clear task_outputs before processing async tasks
            task_outputs = []
            for future_task, future in futures:
                task_output = future.result()
                task_outputs.append(task_output)
                self._process_task_result(future_task, task_output)

        # Important: There should only be one task output in the list
        # If there are more or 0, something went wrong.
        if len(task_outputs) != 1:
            raise ValueError(
                "Something went wrong. Kickoff should return only one task output."
            )

        self._logger.log("debug", f"[{manager.role}] Task output: {task_output}")
        if self.output_log_file:
            self._file_handler.log(
                agent=manager.role, task=task_output, status="completed"
            )
        final_task_output = task_outputs[0]

        self._finish_execution(task_output)
        final_string_output = final_task_output.raw
        self._finish_execution(final_string_output)

        token_usage = self.calculate_usage_metrics()

        return self._format_output(
            task_output if task_output else "", token_usage
        ), token_usage
        return CrewOutput(
            raw=final_task_output.raw,
            pydantic=final_task_output.pydantic,
            json_dict=final_task_output.json_dict,
            tasks_output=[task.output for task in self.tasks if task.output],
            token_usage=token_usage,
        )

    def copy(self):
        """Create a deep copy of the Crew."""
@@ -566,31 +757,15 @@
        for agent in self.agents:
            agent.interpolate_inputs(inputs)

    def _format_output(
        self, output: str, token_usage: Optional[Dict[str, Any]] = None
    ) -> Union[str, Dict[str, Any]]:
        """
        Formats the output of the crew execution.
        If full_output is True, then returned data type will be a dictionary else returned outputs are string
        """

        if self.full_output:
            return {
                "final_output": output,
                "tasks_outputs": [task.output for task in self.tasks if task],
                "usage_metrics": token_usage,
            }
        else:
            return output

    def _finish_execution(self, output) -> None:
    def _finish_execution(self, final_string_output: str) -> None:
        if self.max_rpm:
            self._rpm_controller.stop_rpm_counter()
        if agentops:
            agentops.end_session(
                end_state="Success", end_state_reason="Finished Execution"
                end_state="Success",
                end_state_reason="Finished Execution",
            )
        self._telemetry.end_crew(self, output)
        self._telemetry.end_crew(self, final_string_output)

    def calculate_usage_metrics(self) -> Dict[str, int]:
        """Calculates and returns the usage metrics."""
1 src/crewai/crews/__init__.py Normal file
@@ -0,0 +1 @@
from .crew_output import CrewOutput
60 src/crewai/crews/crew_output.py Normal file
@@ -0,0 +1,60 @@
import json
from typing import Any, Dict, Optional

from pydantic import BaseModel, Field

from crewai.tasks.output_format import OutputFormat
from crewai.tasks.task_output import TaskOutput


class CrewOutput(BaseModel):
    """Class that represents the result of a crew."""

    raw: str = Field(description="Raw output of crew", default="")
    pydantic: Optional[BaseModel] = Field(
        description="Pydantic output of Crew", default=None
    )
    json_dict: Optional[Dict[str, Any]] = Field(
        description="JSON dict output of Crew", default=None
    )
    tasks_output: list[TaskOutput] = Field(
        description="Output of each task", default=[]
    )
    token_usage: Dict[str, Any] = Field(
        description="Processed token summary", default={}
    )

    # TODO: Joao - Adding this safety check breaks when people want to see
    # the full output of a CrewOutput.
    # @property
    # def pydantic(self) -> Optional[BaseModel]:
    #     # Check if the final task output included a pydantic model
    #     if self.tasks_output[-1].output_format != OutputFormat.PYDANTIC:
    #         raise ValueError(
    #             "No pydantic model found in the final task. Please make sure to set the output_pydantic property in the final task in your crew."
    #         )
    #
    #     return self._pydantic

    @property
    def json(self) -> Optional[str]:
        if self.tasks_output[-1].output_format != OutputFormat.JSON:
            raise ValueError(
                "No JSON output found in the final task. Please make sure to set the output_json property in the final task in your crew."
            )

        return json.dumps(self.json_dict)

    def to_dict(self) -> Dict[str, Any]:
        if self.json_dict:
            return self.json_dict
        if self.pydantic:
            return self.pydantic.model_dump()
        raise ValueError("No output to convert to dictionary")

    def __str__(self):
        if self.pydantic:
            return str(self.pydantic)
        if self.json_dict:
            return str(self.json_dict)
        return self.raw
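
Since `kickoff` now returns a `CrewOutput` (see the crew.py changes above), downstream code accesses results through its fields. A brief hedged usage sketch, assuming a `crew` built as in the docs earlier in this diff:

```python
# Hypothetical usage of the new return type; `crew` is assumed to exist.
result = crew.kickoff(inputs={"customer_domain": "crewai.com"})

print(result.raw)          # raw string output of the final task
print(result.token_usage)  # processed token usage summary
if result.json_dict or result.pydantic:
    print(result.to_dict())  # dict view when structured output was produced
```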
@@ -1,9 +1,11 @@
import json
import os
import re
import threading
import uuid
from concurrent.futures import Future, ThreadPoolExecutor
from concurrent.futures import Future
from copy import copy
from typing import Any, Dict, List, Optional, Type, Union
from typing import Any, Dict, List, Optional, Tuple, Type, Union

from langchain_openai import ChatOpenAI
from opentelemetry.trace import Span
@@ -11,10 +13,10 @@ from pydantic import UUID4, BaseModel, Field, field_validator, model_validator
from pydantic_core import PydanticCustomError

from crewai.agents.agent_builder.base_agent import BaseAgent
from crewai.tasks.output_format import OutputFormat
from crewai.tasks.task_output import TaskOutput
from crewai.telemetry.telemetry import Telemetry
from crewai.utilities.converter import ConverterError
from crewai.utilities.converter import Converter
from crewai.utilities.converter import Converter, ConverterError
from crewai.utilities.i18n import I18N
from crewai.utilities.printer import Printer
from crewai.utilities.pydantic_schema_parser import PydanticSchemaParser
@@ -107,7 +109,7 @@ class Task(BaseModel):
    _execution_span: Span | None = None
    _original_description: str | None = None
    _original_expected_output: str | None = None
    _future: Future | None = None
    _thread: threading.Thread | None = None

    def __init__(__pydantic_self__, **data):
        config = data.pop("config", {})
@@ -162,80 +164,74 @@ class Task(BaseModel):
            )
        return self

    def wait_for_completion(self) -> str | BaseModel:
        """Wait for asynchronous task completion and return the output."""
        assert self.async_execution, "Task is not set to be executed asynchronously."
    def execute_sync(
        self,
        agent: Optional[BaseAgent] = None,
        context: Optional[str] = None,
        tools: Optional[List[Any]] = None,
    ) -> TaskOutput:
        """Execute the task synchronously."""
        return self._execute_core(agent, context, tools)

        if self._future:
            self._future.result()  # Wait for the future to complete
            self._future = None

        assert self.output, "Task output is not set."

        return self.output.exported_output

    def execute(
    def execute_async(
        self,
        agent: BaseAgent | None = None,
        context: Optional[str] = None,
        tools: Optional[List[Any]] = None,
    ) -> str | None:
        """Execute the task.
    ) -> Future[TaskOutput]:
        """Execute the task asynchronously."""
        future: Future[TaskOutput] = Future()
        threading.Thread(
            target=self._execute_task_async, args=(agent, context, tools, future)
        ).start()
        return future

        Returns:
            Output of the task.
        """

        self._execution_span = self._telemetry.task_started(self)
    def _execute_task_async(
        self,
        agent: Optional[BaseAgent],
        context: Optional[str],
        tools: Optional[List[Any]],
        future: Future[TaskOutput],
    ) -> None:
        """Execute the task asynchronously with context handling."""
        result = self._execute_core(agent, context, tools)
        future.set_result(result)

    def _execute_core(
        self,
        agent: Optional[BaseAgent],
        context: Optional[str],
        tools: Optional[List[Any]],
    ) -> TaskOutput:
        """Run the core execution logic of the task."""
        agent = agent or self.agent
        if not agent:
            raise Exception(
                f"The task '{self.description}' has no agent assigned, therefore it can't be executed directly "
                "and should be executed in a Crew using a specific process that support that, like hierarchical."
                f"The task '{self.description}' has no agent assigned, therefore it can't be executed directly and should be executed in a Crew using a specific process that support that, like hierarchical."
            )

        if self.context:
            internal_context = []
            for task in self.context:
                if task.async_execution:
                    task.wait_for_completion()
                if task.output:
                    internal_context.append(task.output.raw_output)
            context = "\n".join(internal_context)
        self._execution_span = self._telemetry.task_started(crew=agent.crew, task=self)

        self.prompt_context = context
        tools = tools or self.tools
        tools = tools or self.tools or []

        if self.async_execution:
            with ThreadPoolExecutor() as executor:
                self._future = executor.submit(
                    self._execute, agent, self, context, tools
                )
            return None
        else:
            result = self._execute(
                task=self,
                agent=agent,
                context=context,
                tools=tools,
            )
            return result

    def _execute(self, agent: "BaseAgent", task, context, tools) -> str | None:
        result = agent.execute_task(
            task=task,
            task=self,
            context=context,
            tools=tools,
        )
        exported_output = self._export_output(result)

        self.output = TaskOutput(
        pydantic_output, json_output = self._export_output(result)

        task_output = TaskOutput(
            description=self.description,
            exported_output=exported_output,
            raw_output=result,
            raw=result,
            pydantic=pydantic_output,
            json_dict=json_output,
            agent=agent.role,
            output_format=self._get_output_format(),
        )
        self.output = task_output

        if self.callback:
            self.callback(self.output)
@@ -244,7 +240,15 @@ class Task(BaseModel):
            self._telemetry.task_ended(self._execution_span, self)
            self._execution_span = None

        return exported_output
        if self.output_file:
            content = (
                json_output
                if json_output
                else pydantic_output.model_dump_json() if pydantic_output else result
            )
            self._save_file(content)

        return task_output

    def prompt(self) -> str:
        """Prompt the task.
@@ -279,7 +283,7 @@ class Task(BaseModel):
        """Increment the delegations counter."""
        self.delegations += 1

    def copy(self, agents: Optional[List["BaseAgent"]] = None) -> "Task":  # type: ignore # Signature of "copy" incompatible with supertype "BaseModel"
    def copy(self, agents: List["BaseAgent"]) -> "Task":
        """Create a deep copy of the Task."""
        exclude = {
            "id",
@@ -292,11 +296,11 @@ class Task(BaseModel):
        copied_data = {k: v for k, v in copied_data.items() if v is not None}

        cloned_context = (
            [task.copy() for task in self.context] if self.context else None
            [task.copy(agents) for task in self.context] if self.context else None
        )

        def get_agent_by_role(role: str) -> Union["BaseAgent", None]:
            return next((agent for agent in agents if agent.role == role), None)  # type: ignore # Item "None" of "list[BaseAgent] | None" has no attribute "__iter__" (not iterable)
            return next((agent for agent in agents if agent.role == role), None)

        cloned_agent = get_agent_by_role(self.agent.role) if self.agent else None
        cloned_tools = copy(self.tools) if self.tools else []
@@ -310,72 +314,113 @@ class Task(BaseModel):

        return copied_task

    def _create_converter(self, *args, **kwargs) -> Converter:  # type: ignore
        converter = self.agent.get_output_converter(  # type: ignore # Item "None" of "BaseAgent | None" has no attribute "get_output_converter"
            *args, **kwargs
        )
        if self.converter_cls:
            converter = self.converter_cls(  # type: ignore # Item "None" of "BaseAgent | None" has no attribute "get_output_converter"
                *args, **kwargs
            )
        return converter
    def _create_converter(self, *args, **kwargs) -> Converter:
        """Create a converter instance."""
        converter = self.agent.get_output_converter(*args, **kwargs)
        if self.converter_cls:
            converter = self.converter_cls(*args, **kwargs)
        return converter

    def _export_output(self, result: str) -> Any:
        exported_result = result
        instructions = "I'm gonna convert this raw text into valid JSON."
    def _export_output(
        self, result: str
    ) -> Tuple[Optional[BaseModel], Optional[Dict[str, Any]]]:
        pydantic_output: Optional[BaseModel] = None
        json_output: Optional[Dict[str, Any]] = None

        if self.output_pydantic or self.output_json:
            model = self.output_pydantic or self.output_json
            model_output = self._convert_to_model(result)
            pydantic_output = (
                model_output if isinstance(model_output, BaseModel) else None
            )
            if isinstance(model_output, str):
                try:
                    json_output = json.loads(model_output)
                except json.JSONDecodeError:
                    json_output = None
            else:
                json_output = model_output if isinstance(model_output, dict) else None

        # try to convert task_output directly to pydantic/json
        return pydantic_output, json_output

    def _convert_to_model(self, result: str) -> Union[dict, BaseModel, str]:
        model = self.output_pydantic or self.output_json
        if model is None:
            return result

        try:
            return self._validate_model(result, model)
        except Exception:
            return self._handle_partial_json(result, model)

    def _validate_model(
        self, result: str, model: Type[BaseModel]
    ) -> Union[dict, BaseModel]:
        exported_result = model.model_validate_json(result)
        if self.output_json:
            return exported_result.model_dump()
        return exported_result

    def _handle_partial_json(
        self, result: str, model: Type[BaseModel]
    ) -> Union[dict, BaseModel, str]:
        match = re.search(r"({.*})", result, re.DOTALL)
        if match:
            try:
                exported_result = model.model_validate_json(result)  # type: ignore # Item "None" of "type[BaseModel] | None" has no attribute "model_validate_json"
                exported_result = model.model_validate_json(match.group(0))
                if self.output_json:
                    return exported_result.model_dump()  # type: ignore # "str" has no attribute "model_dump"
                    return exported_result.model_dump()
                return exported_result
            except Exception:
                # sometimes the response contains valid JSON in the middle of text
                match = re.search(r"({.*})", result, re.DOTALL)
                if match:
                    try:
                        exported_result = model.model_validate_json(match.group(0))  # type: ignore # Item "None" of "type[BaseModel] | None" has no attribute "model_validate_json"
                        if self.output_json:
                            return exported_result.model_dump()  # type: ignore # "str" has no attribute "model_dump"
                        return exported_result
                    except Exception:
                        pass
                pass

        llm = getattr(self.agent, "function_calling_llm", None) or self.agent.llm  # type: ignore # Item "None" of "BaseAgent | None" has no attribute "function_calling_llm"
        if not self._is_gpt(llm):
            model_schema = PydanticSchemaParser(model=model).get_schema()  # type: ignore # Argument "model" to "PydanticSchemaParser" has incompatible type "type[BaseModel] | None"; expected "type[BaseModel]"
            instructions = f"{instructions}\n\nThe json should have the following structure, with the following keys:\n{model_schema}"
        return self._convert_with_instructions(result, model)

        converter = self._create_converter(  # type: ignore # Item "None" of "BaseAgent | None" has no attribute "get_output_converter"
            llm=llm, text=result, model=model, instructions=instructions
    def _convert_with_instructions(
        self, result: str, model: Type[BaseModel]
    ) -> Union[dict, BaseModel, str]:
        llm = self.agent.function_calling_llm or self.agent.llm
        instructions = self._get_conversion_instructions(model, llm)

        converter = self._create_converter(
            llm=llm, text=result, model=model, instructions=instructions
        )
        exported_result = (
            converter.to_pydantic() if self.output_pydantic else converter.to_json()
        )

        if isinstance(exported_result, ConverterError):
            Printer().print(
                content=f"{exported_result.message} Using raw output instead.",
                color="red",
            )

        if self.output_pydantic:
            exported_result = converter.to_pydantic()
        elif self.output_json:
            exported_result = converter.to_json()

        if isinstance(exported_result, ConverterError):
            Printer().print(
                content=f"{exported_result.message} Using raw output instead.",
                color="red",
            )
            exported_result = result

        if self.output_file:
            content = (
                exported_result
                if not self.output_pydantic
                else exported_result.model_dump_json()  # type: ignore # "str" has no attribute "json"
            )
            self._save_file(content)
            return result

        return exported_result

    def _get_output_format(self) -> OutputFormat:
        if self.output_json:
            return OutputFormat.JSON
        if self.output_pydantic:
            return OutputFormat.PYDANTIC
        return OutputFormat.RAW

    def _get_conversion_instructions(self, model: Type[BaseModel], llm: Any) -> str:
        instructions = "I'm gonna convert this raw text into valid JSON."
        if not self._is_gpt(llm):
            model_schema = PydanticSchemaParser(model=model).get_schema()
            instructions = f"{instructions}\n\nThe json should have the following structure, with the following keys:\n{model_schema}"
        return instructions

    def _save_output(self, content: str) -> None:
        if not self.output_file:
            raise Exception("Output file path is not set.")

        directory = os.path.dirname(self.output_file)
        if directory and not os.path.exists(directory):
            os.makedirs(directory)
        with open(self.output_file, "w", encoding="utf-8") as file:
            file.write(content)

    def _is_gpt(self, llm) -> bool:
        return isinstance(llm, ChatOpenAI) and llm.openai_api_base is None
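The old `execute()`/`wait_for_completion()` pair is replaced by two explicit entry points. A minimal sketch of the new call pattern, not part of the diff; `task` and `agent` are assumed to exist:

```python
# Synchronous path: blocks and returns a TaskOutput directly.
task_output = task.execute_sync(agent=agent)

# Asynchronous path: returns a concurrent.futures.Future[TaskOutput] backed by
# a plain thread; call .result() only when the value is actually needed.
future = task.execute_async(agent=agent)
task_output = future.result()
print(task_output.raw)
```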
@@ -0,0 +1,4 @@
from crewai.tasks.output_format import OutputFormat
from crewai.tasks.task_output import TaskOutput

__all__ = ["OutputFormat", "TaskOutput"]

9
src/crewai/tasks/output_format.py
Normal file
@@ -0,0 +1,9 @@
from enum import Enum


class OutputFormat(str, Enum):
    """Enum that represents the output format of a task."""

    JSON = "json"
    PYDANTIC = "pydantic"
    RAW = "raw"
@@ -1,24 +1,78 @@
from typing import Optional, Union
import json
from typing import Any, Dict, Optional

from pydantic import BaseModel, Field, model_validator

from crewai.tasks.output_format import OutputFormat


class TaskOutput(BaseModel):
    """Class that represents the result of a task."""

    description: str = Field(description="Description of the task")
    summary: Optional[str] = Field(description="Summary of the task", default=None)
    exported_output: Union[str, BaseModel] = Field(
        description="Output of the task", default=None
    raw: str = Field(
        description="Raw output of the task", default=""
    )  # TODO: @joao: breaking change, by renaming raw_output to raw, but now consistent with CrewOutput
    pydantic: Optional[BaseModel] = Field(
        description="Pydantic output of task", default=None
    )
    json_dict: Optional[Dict[str, Any]] = Field(
        description="JSON dictionary of task", default=None
    )
    agent: str = Field(description="Agent that executed the task")
    raw_output: str = Field(description="Result of the task")
    output_format: OutputFormat = Field(
        description="Output format of the task", default=OutputFormat.RAW
    )

    @model_validator(mode="after")
    def set_summary(self):
        """Set the summary field based on the description."""
        excerpt = " ".join(self.description.split(" ")[:10])
        self.summary = f"{excerpt}..."
        return self

    def result(self):
        return self.exported_output
    # TODO: Joao - Adding this safety check breaks when people want to see
    # the full output of a TaskOutput or CrewOutput.
    # @property
    # def pydantic(self) -> Optional[BaseModel]:
    #     # Check if the final task output included a pydantic model
    #     if self.output_format != OutputFormat.PYDANTIC:
    #         raise ValueError(
    #             """
    #             Invalid output format requested.
    #             If you would like to access the pydantic model,
    #             please make sure to set the output_pydantic property for the task.
    #             """
    #         )

    #     return self._pydantic

    @property
    def json(self) -> Optional[str]:
        if self.output_format != OutputFormat.JSON:
            raise ValueError(
                """
                Invalid output format requested.
                If you would like to access the JSON output,
                please make sure to set the output_json property for the task
                """
            )

        return json.dumps(self.json_dict)

    def to_dict(self) -> Dict[str, Any]:
        """Convert json_output and pydantic_output to a dictionary."""
        output_dict = {}
        if self.json_dict:
            output_dict.update(self.json_dict)
        if self.pydantic:
            output_dict.update(self.pydantic.model_dump())
        return output_dict

    def __str__(self) -> str:
        if self.pydantic:
            return str(self.pydantic)
        if self.json_dict:
            return str(self.json_dict)
        return self.raw
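With `raw_output` renamed to `raw` and the new `pydantic`/`json_dict` fields, structured access goes through `output_format`. A small sketch, not part of the diff; it assumes a `task` that has already run:

```python
from crewai.tasks.output_format import OutputFormat

output = task.output  # TaskOutput set by Task._execute_core after a run

print(output.raw)             # renamed from raw_output, always a string
if output.output_format == OutputFormat.JSON:
    print(output.json)        # JSON string; raises for other formats
print(output.to_dict())       # merges json_dict with the pydantic model's fields
```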
@@ -80,7 +80,7 @@ class Telemetry:
        self.ready = False
        self.trace_set = False

    def crew_creation(self, crew):
    def crew_creation(self, crew: Crew, inputs: dict[str, Any] | None):
        """Records the creation of a crew."""
        if self.ready:
            try:
@@ -93,6 +93,12 @@ class Telemetry:
                )
                self._add_attribute(span, "python_version", platform.python_version())
                self._add_attribute(span, "crew_id", str(crew.id))

                if crew.share_crew:
                    self._add_attribute(
                        span, "crew_inputs", json.dumps(inputs) if inputs else None
                    )

                self._add_attribute(span, "crew_process", crew.process)
                self._add_attribute(span, "crew_memory", crew.memory)
                self._add_attribute(span, "crew_number_of_tasks", len(crew.tasks))
@@ -114,7 +120,7 @@ class Telemetry:
                            "llm": json.dumps(self._safe_llm_attributes(agent.llm)),
                            "delegation_enabled?": agent.allow_delegation,
                            "tools_names": [
                                tool.name.casefold() for tool in agent.tools
                                tool.name.casefold() for tool in agent.tools or []
                            ],
                        }
                        for agent in crew.agents
@@ -139,7 +145,7 @@ class Telemetry:
                            else None
                        ),
                        "tools_names": [
                            tool.name.casefold() for tool in task.tools
                            tool.name.casefold() for tool in task.tools or []
                        ],
                    }
                    for task in crew.tasks
@@ -156,18 +162,40 @@ class Telemetry:
            except Exception:
                pass

    def task_started(self, task: Task) -> Span | None:
    def task_started(self, crew: Crew, task: Task) -> Span | None:
        """Records task started in a crew."""
        if self.ready:
            try:
                tracer = trace.get_tracer("crewai.telemetry")

                created_span = tracer.start_span("Task Created")

                self._add_attribute(created_span, "crew_id", str(crew.id))
                self._add_attribute(created_span, "task_index", crew.tasks.index(task))
                self._add_attribute(created_span, "task_id", str(task.id))

                if crew.share_crew:
                    self._add_attribute(
                        created_span, "formatted_description", task.description
                    )
                    self._add_attribute(
                        created_span, "formatted_expected_output", task.expected_output
                    )

                created_span.set_status(Status(StatusCode.OK))
                created_span.end()

                span = tracer.start_span("Task Execution")

                self._add_attribute(span, "crew_id", str(crew.id))
                self._add_attribute(span, "task_index", crew.tasks.index(task))
                self._add_attribute(span, "task_id", str(task.id))
                self._add_attribute(span, "formatted_description", task.description)
                self._add_attribute(
                    span, "formatted_expected_output", task.expected_output
                )

                if crew.share_crew:
                    self._add_attribute(span, "formatted_description", task.description)
                    self._add_attribute(
                        span, "formatted_expected_output", task.expected_output
                    )

                return span
            except Exception:
@@ -258,6 +286,8 @@ class Telemetry:
        """
        if (self.ready) and (crew.share_crew):
            try:
                self.crew_creation(crew, inputs)

                tracer = trace.get_tracer("crewai.telemetry")
                span = tracer.start_span("Crew Execution")
                self._add_attribute(
@@ -266,7 +296,9 @@ class Telemetry:
                    pkg_resources.get_distribution("crewai").version,
                )
                self._add_attribute(span, "crew_id", str(crew.id))
                self._add_attribute(span, "inputs", json.dumps(inputs))
                self._add_attribute(
                    span, "crew_inputs", json.dumps(inputs) if inputs else None
                )
                self._add_attribute(
                    span,
                    "crew_agents",
@@ -320,7 +352,7 @@ class Telemetry:
        except Exception:
            pass

    def end_crew(self, crew, output):
    def end_crew(self, crew, final_string_output):
        if (self.ready) and (crew.share_crew):
            try:
                self._add_attribute(
@@ -328,7 +360,9 @@ class Telemetry:
                    "crewai_version",
                    pkg_resources.get_distribution("crewai").version,
                )
                self._add_attribute(crew._execution_span, "crew_output", output)
                self._add_attribute(
                    crew._execution_span, "crew_output", final_string_output
                )
                self._add_attribute(
                    crew._execution_span,
                    "crew_tasks_output",
@@ -7,7 +7,7 @@ class AgentTools(BaseAgentTools):
    """Default tools around agent delegation"""

    def tools(self):
        coworkers = f"[{', '.join([f'{agent.role}' for agent in self.agents])}]"
        coworkers = ", ".join([f"{agent.role}" for agent in self.agents])
        tools = [
            StructuredTool.from_function(
                func=self.delegate_work,
@@ -8,7 +8,7 @@ from pydantic.v1 import BaseModel, Field
class ToolCalling(BaseModel):
    tool_name: str = Field(..., description="The name of the tool to be called.")
    arguments: Optional[Dict[str, Any]] = Field(
        ..., description="A dictinary of arguments to be passed to the tool."
        ..., description="A dictionary of arguments to be passed to the tool."
    )


@@ -17,5 +17,5 @@ class InstructorToolCalling(PydanticBaseModel):
        ..., description="The name of the tool to be called."
    )
    arguments: Optional[Dict[str, Any]] = PydanticField(
        ..., description="A dictinary of arguments to be passed to the tool."
        ..., description="A dictionary of arguments to be passed to the tool."
    )
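For reference, `ToolCalling` is the parsed form of an agent's tool call before dispatch. A hypothetical instance, with values invented purely for illustration:

```python
# Sketch only: a parsed tool call as the ToolUsage machinery below consumes it.
calling = ToolCalling(
    tool_name="search",
    arguments={"query": "temperature in SF"},
)
```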
@@ -119,7 +119,7 @@ class ToolUsage:
                attempts=self._run_attempts,
            )
            result = self._format_result(result=result)  # type: ignore # "_format_result" of "ToolUsage" does not return a value (it only ever returns None)
            return result  # type: ignore # Fix the reutrn type of this function
            return result  # type: ignore # Fix the return type of this function

        except Exception:
            self.task.increment_tools_errors()
@@ -151,16 +151,12 @@ class ToolUsage:
                        for k, v in calling.arguments.items()
                        if k in acceptable_args
                    }
                    result = tool._run(**arguments)
                    result = tool.invoke(input=arguments)
                except Exception:
                    if tool.args_schema:
                        arguments = calling.arguments
                        result = tool._run(**arguments)
                    else:
                        arguments = calling.arguments.values()  # type: ignore # Incompatible types in assignment (expression has type "dict_values[str, Any]", variable has type "dict[str, Any]")
                        result = tool._run(*arguments)
                    arguments = calling.arguments
                    result = tool.invoke(input=arguments)
            else:
                result = tool._run()
                result = tool.invoke(input={})
        except Exception as e:
            self._run_attempts += 1
            if self._run_attempts > self._max_parsing_attempts:
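The hunk above swaps the private `tool._run(...)` calls for LangChain's public `invoke()` entry point, which validates the argument dict against the tool's `args_schema` before calling the underlying function. A standalone sketch of that call shape, illustrative rather than taken from the diff:

```python
from langchain.tools import tool


@tool
def add(a: int, b: int) -> int:
    """Add two numbers."""
    return a + b


# invoke() routes through the Runnable interface and validates the input
# dict against the tool's schema before the function runs.
print(add.invoke(input={"a": 1, "b": 2}))  # -> 3
```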
@@ -16,7 +16,7 @@
    "format_without_tools": "\nSorry, I didn't use the right format. I MUST either use a tool (among the available ones), OR give my best final answer.\nI just remembered the expected format I must follow:\n\nQuestion: the input question you must answer\nThought: you should always think about what to do\nAction: the action to take, should be one of [{tool_names}]\nAction Input: the input to the action\nObservation: the result of the action\n... (this Thought/Action/Action Input/Observation can repeat N times)\nThought: I now can give a great answer\nFinal Answer: my best complete final answer to the task\nYour final answer must be the great and the most complete as possible, it must be outcome described\n\n",
    "task_with_context": "{task}\n\nThis is the context you're working with:\n{context}",
    "expected_output": "\nThis is the expect criteria for your final answer: {expected_output} \n you MUST return the actual complete content as the final answer, not a summary.",
    "human_feedback": "You got human feedback on your work, re-avaluate it and give a new Final Answer when ready.\n {human_feedback}",
    "human_feedback": "You got human feedback on your work, re-evaluate it and give a new Final Answer when ready.\n {human_feedback}",
    "getting_input": "This is the agent's final answer: {final_answer}\nPlease provide feedback: "
  },
  "errors": {
@@ -2,10 +2,8 @@ import json

from langchain.schema import HumanMessage, SystemMessage
from langchain_openai import ChatOpenAI
from pydantic import model_validator
from crewai.agents.agent_builder.utilities.base_output_converter_base import (
    OutputConverter,
)

from crewai.agents.agent_builder.utilities.base_output_converter import OutputConverter


class ConverterError(Exception):
@@ -19,15 +17,10 @@ class ConverterError(Exception):
class Converter(OutputConverter):
    """Class that converts text into either pydantic or json."""

    @model_validator(mode="after")
    def check_llm_provider(self):
        if not self._is_gpt(self.llm):
            self._is_gpt = False

    def to_pydantic(self, current_attempt=1):
        """Convert text to pydantic."""
        try:
            if self._is_gpt:
            if self.is_gpt:
                return self._create_instructor().to_pydantic()
            else:
                return self._create_chain().invoke({})
@@ -41,7 +34,7 @@ class Converter(OutputConverter):
    def to_json(self, current_attempt=1):
        """Convert text to json."""
        try:
            if self._is_gpt:
            if self.is_gpt:
                return self._create_instructor().to_json()
            else:
                return json.dumps(self._create_chain().invoke({}).model_dump())
@@ -75,5 +68,7 @@ class Converter(OutputConverter):
        )
        return new_prompt | self.llm | parser

    def _is_gpt(self, llm) -> bool:  # type: ignore # BUG? Name "_is_gpt" defined on line 20 hides name from outer scope
        return isinstance(llm, ChatOpenAI) and llm.openai_api_base is None
    @property
    def is_gpt(self) -> bool:
        """Return if llm provided is of gpt from openai."""
        return isinstance(self.llm, ChatOpenAI) and self.llm.openai_api_base is None
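The old `@model_validator` assigned `self._is_gpt = False`, shadowing the `_is_gpt` method it was meant to call (the `# BUG?` note above); a read-only property removes that hazard. A hedged sketch of the fixed check, with field names taken from the `_create_converter` call in task.py and a hypothetical target schema:

```python
from langchain_openai import ChatOpenAI
from pydantic import BaseModel


class Pet(BaseModel):  # hypothetical output schema, for illustration only
    name: str


converter = Converter(  # assumes OutputConverter accepts these fields
    llm=ChatOpenAI(model="gpt-4o"),
    text='{"name": "Rex"}',
    model=Pet,
    instructions="I'm gonna convert this raw text into valid JSON.",
)
print(converter.is_gpt)  # True for an OpenAI chat model on the default API base
```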
20
src/crewai/utilities/formatter.py
Normal file
@@ -0,0 +1,20 @@
from typing import List

from crewai.task import Task
from crewai.tasks.task_output import TaskOutput


def aggregate_raw_outputs_from_task_outputs(task_outputs: List[TaskOutput]) -> str:
    """Generate string context from the task outputs."""
    dividers = "\n\n----------\n\n"

    # Join task outputs with dividers
    context = dividers.join(output.raw for output in task_outputs)
    return context


def aggregate_raw_outputs_from_tasks(tasks: List[Task]) -> str:
    """Generate string context from the tasks."""
    task_outputs = [task.output for task in tasks if task.output is not None]

    return aggregate_raw_outputs_from_task_outputs(task_outputs)
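Given the `TaskOutput` model added earlier in this diff, the new helpers can be exercised directly; a small sketch:

```python
from crewai.tasks.task_output import TaskOutput
from crewai.utilities.formatter import aggregate_raw_outputs_from_task_outputs

outputs = [
    TaskOutput(description="research dogs", raw="Dogs are loyal.", agent="Researcher"),
    TaskOutput(description="summarize findings", raw="One bullet point.", agent="Writer"),
]
# Joins the raw strings with the "----------" divider defined above.
print(aggregate_raw_outputs_from_task_outputs(outputs))
```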
@@ -631,8 +631,9 @@ def test_agent_use_specific_tasks_output_as_context(capsys):

    crew = Crew(agents=[agent1, agent2], tasks=tasks)
    result = crew.kickoff()
    assert "bye" not in result.lower()
    assert "hi" in result.lower() or "hello" in result.lower()
    print("LOWER RESULT", result.raw)
    assert "bye" not in result.raw.lower()
    assert "hi" in result.raw.lower() or "hello" in result.raw.lower()


@pytest.mark.vcr(filter_headers=["authorization"])
@@ -644,7 +645,7 @@ def test_agent_step_callback():
    with patch.object(StepCallback, "callback") as callback:

        @tool
        def learn_about_AI(topic) -> float:
        def learn_about_AI(topic) -> str:
            """Useful for when you need to learn about AI to write an paragraph about it."""
            return "AI is a very broad field."

@@ -678,7 +679,7 @@ def test_agent_function_calling_llm():
    with patch.object(llm.client, "create", wraps=llm.client.create) as private_mock:

        @tool
        def learn_about_AI(topic) -> float:
        def learn_about_AI(topic) -> str:
            """Useful for when you need to learn about AI to write an paragraph about it."""
            return "AI is a very broad field."

@@ -750,7 +751,8 @@ def test_tool_result_as_answer_is_the_final_answer_for_the_agent():
    crew = Crew(agents=[agent1], tasks=tasks)

    result = crew.kickoff()
    assert result == "Howdy!"
    print("RESULT: ", result.raw)
    assert result.raw == "Howdy!"


@pytest.mark.vcr(filter_headers=["authorization"])
@@ -961,3 +963,54 @@ def test_agent_use_trained_data(crew_training_handler):
    crew_training_handler.assert_has_calls(
        [mock.call(), mock.call("trained_agents_data.pkl"), mock.call().load()]
    )


def test_agent_max_retry_limit():
    agent = Agent(
        role="test role",
        goal="test goal",
        backstory="test backstory",
        max_retry_limit=1,
    )

    task = Task(
        agent=agent,
        description="Say the word: Hi",
        expected_output="The word: Hi",
        human_input=True,
    )

    error_message = "Error happening while sending prompt to model."
    with patch.object(
        CrewAgentExecutor, "invoke", wraps=agent.agent_executor.invoke
    ) as invoke_mock:
        invoke_mock.side_effect = Exception(error_message)

        assert agent._times_executed == 0
        assert agent.max_retry_limit == 1

        with pytest.raises(Exception) as e:
            agent.execute_task(
                task=task,
            )
        assert e.value.args[0] == error_message
        assert agent._times_executed == 2

        invoke_mock.assert_has_calls(
            [
                mock.call(
                    {
                        "input": "Say the word: Hi\n\nThis is the expect criteria for your final answer: The word: Hi \n you MUST return the actual complete content as the final answer, not a summary.",
                        "tool_names": "",
                        "tools": "",
                    }
                ),
                mock.call(
                    {
                        "input": "Say the word: Hi\n\nThis is the expect criteria for your final answer: The word: Hi \n you MUST return the actual complete content as the final answer, not a summary.",
                        "tool_names": "",
                        "tools": "",
                    }
                ),
            ]
        )
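The common thread in these test updates is that `kickoff()` no longer returns a bare string, so assertions go through `.raw`. The pattern in isolation, as a sketch with `crew` assumed to exist:

```python
result = crew.kickoff()

# New style: result is a CrewOutput, so compare against its raw string.
assert result.raw == "Howdy!"

# Old style no longer holds, since result is not a str:
# assert result == "Howdy!"
```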
0
tests/agents/__init__.py
Normal file
378
tests/agents/test_crew_agent_parser.py
Normal file
@@ -0,0 +1,378 @@
import pytest
from crewai.agents.parser import CrewAgentParser
from langchain_core.agents import AgentAction, AgentFinish
from langchain_core.exceptions import OutputParserException


@pytest.fixture
def parser():
    p = CrewAgentParser()
    p.agent = MockAgent()
    return p


def test_valid_action_parsing_special_characters(parser):
    text = "Thought: Let's find the temperature\nAction: search\nAction Input: what's the temperature in SF?"
    result = parser.parse(text)
    assert isinstance(result, AgentAction)
    assert result.tool == "search"
    assert result.tool_input == "what's the temperature in SF?"


def test_valid_action_parsing_with_json_tool_input(parser):
    text = """
    Thought: Let's find the information
    Action: query
    Action Input: ** {"task": "What are some common challenges or barriers that you have observed or experienced when implementing AI-powered solutions in healthcare settings?", "context": "As we've discussed recent advancements in AI applications in healthcare, it's crucial to acknowledge the potential hurdles. Some possible obstacles include...", "coworker": "Senior Researcher"}
    """
    result = parser.parse(text)
    assert isinstance(result, AgentAction)
    expected_tool_input = '{"task": "What are some common challenges or barriers that you have observed or experienced when implementing AI-powered solutions in healthcare settings?", "context": "As we\'ve discussed recent advancements in AI applications in healthcare, it\'s crucial to acknowledge the potential hurdles. Some possible obstacles include...", "coworker": "Senior Researcher"}'
    assert result.tool == "query"
    assert result.tool_input == expected_tool_input


def test_valid_action_parsing_with_quotes(parser):
    text = 'Thought: Let\'s find the temperature\nAction: search\nAction Input: "temperature in SF"'
    result = parser.parse(text)
    assert isinstance(result, AgentAction)
    assert result.tool == "search"
    assert result.tool_input == "temperature in SF"


def test_valid_action_parsing_with_curly_braces(parser):
    text = "Thought: Let's find the temperature\nAction: search\nAction Input: {temperature in SF}"
    result = parser.parse(text)
    assert isinstance(result, AgentAction)
    assert result.tool == "search"
    assert result.tool_input == "{temperature in SF}"


def test_valid_action_parsing_with_angle_brackets(parser):
    text = "Thought: Let's find the temperature\nAction: search\nAction Input: <temperature in SF>"
    result = parser.parse(text)
    assert isinstance(result, AgentAction)
    assert result.tool == "search"
    assert result.tool_input == "<temperature in SF>"


def test_valid_action_parsing_with_parentheses(parser):
    text = "Thought: Let's find the temperature\nAction: search\nAction Input: (temperature in SF)"
    result = parser.parse(text)
    assert isinstance(result, AgentAction)
    assert result.tool == "search"
    assert result.tool_input == "(temperature in SF)"


def test_valid_action_parsing_with_mixed_brackets(parser):
    text = "Thought: Let's find the temperature\nAction: search\nAction Input: [temperature in {SF}]"
    result = parser.parse(text)
    assert isinstance(result, AgentAction)
    assert result.tool == "search"
    assert result.tool_input == "[temperature in {SF}]"


def test_valid_action_parsing_with_nested_quotes(parser):
    text = "Thought: Let's find the temperature\nAction: search\nAction Input: \"what's the temperature in 'SF'?\""
    result = parser.parse(text)
    assert isinstance(result, AgentAction)
    assert result.tool == "search"
    assert result.tool_input == "what's the temperature in 'SF'?"


def test_valid_action_parsing_with_incomplete_json(parser):
    text = 'Thought: Let\'s find the temperature\nAction: search\nAction Input: {"query": "temperature in SF"'
    result = parser.parse(text)
    assert isinstance(result, AgentAction)
    assert result.tool == "search"
    assert result.tool_input == '{"query": "temperature in SF"}'


def test_valid_action_parsing_with_special_characters(parser):
    text = "Thought: Let's find the temperature\nAction: search\nAction Input: what is the temperature in SF? @$%^&*"
    result = parser.parse(text)
    assert isinstance(result, AgentAction)
    assert result.tool == "search"
    assert result.tool_input == "what is the temperature in SF? @$%^&*"


def test_valid_action_parsing_with_combination(parser):
    text = 'Thought: Let\'s find the temperature\nAction: search\nAction Input: "[what is the temperature in SF?]"'
    result = parser.parse(text)
    assert isinstance(result, AgentAction)
    assert result.tool == "search"
    assert result.tool_input == "[what is the temperature in SF?]"


def test_valid_action_parsing_with_mixed_quotes(parser):
    text = "Thought: Let's find the temperature\nAction: search\nAction Input: \"what's the temperature in SF?\""
    result = parser.parse(text)
    assert isinstance(result, AgentAction)
    assert result.tool == "search"
    assert result.tool_input == "what's the temperature in SF?"


def test_valid_action_parsing_with_newlines(parser):
    text = "Thought: Let's find the temperature\nAction: search\nAction Input: what is\nthe temperature in SF?"
    result = parser.parse(text)
    assert isinstance(result, AgentAction)
    assert result.tool == "search"
    assert result.tool_input == "what is\nthe temperature in SF?"


def test_valid_action_parsing_with_escaped_characters(parser):
    text = "Thought: Let's find the temperature\nAction: search\nAction Input: what is the temperature in SF? \\n"
    result = parser.parse(text)
    assert isinstance(result, AgentAction)
    assert result.tool == "search"
    assert result.tool_input == "what is the temperature in SF? \\n"


def test_valid_action_parsing_with_json_string(parser):
    text = 'Thought: Let\'s find the temperature\nAction: search\nAction Input: {"query": "temperature in SF"}'
    result = parser.parse(text)
    assert isinstance(result, AgentAction)
    assert result.tool == "search"
    assert result.tool_input == '{"query": "temperature in SF"}'


def test_valid_action_parsing_with_unbalanced_quotes(parser):
    text = "Thought: Let's find the temperature\nAction: search\nAction Input: \"what is the temperature in SF?"
    result = parser.parse(text)
    assert isinstance(result, AgentAction)
    assert result.tool == "search"
    assert result.tool_input == "what is the temperature in SF?"


def test_clean_action_no_formatting(parser):
    action = "Ask question to senior researcher"
    cleaned_action = parser._clean_action(action)
    assert cleaned_action == "Ask question to senior researcher"


def test_clean_action_with_leading_asterisks(parser):
    action = "** Ask question to senior researcher"
    cleaned_action = parser._clean_action(action)
    assert cleaned_action == "Ask question to senior researcher"


def test_clean_action_with_trailing_asterisks(parser):
    action = "Ask question to senior researcher **"
    cleaned_action = parser._clean_action(action)
    assert cleaned_action == "Ask question to senior researcher"


def test_clean_action_with_leading_and_trailing_asterisks(parser):
    action = "** Ask question to senior researcher **"
    cleaned_action = parser._clean_action(action)
    assert cleaned_action == "Ask question to senior researcher"


def test_clean_action_with_multiple_leading_asterisks(parser):
    action = "**** Ask question to senior researcher"
    cleaned_action = parser._clean_action(action)
    assert cleaned_action == "Ask question to senior researcher"


def test_clean_action_with_multiple_trailing_asterisks(parser):
    action = "Ask question to senior researcher ****"
    cleaned_action = parser._clean_action(action)
    assert cleaned_action == "Ask question to senior researcher"


def test_clean_action_with_spaces_and_asterisks(parser):
    action = "  **  Ask question to senior researcher  **  "
    cleaned_action = parser._clean_action(action)
    print(f"Original action: '{action}'")
    print(f"Cleaned action: '{cleaned_action}'")
    assert cleaned_action == "Ask question to senior researcher"


def test_clean_action_with_only_asterisks(parser):
    action = "****"
    cleaned_action = parser._clean_action(action)
    assert cleaned_action == ""


def test_clean_action_with_empty_string(parser):
    action = ""
    cleaned_action = parser._clean_action(action)
    assert cleaned_action == ""


def test_valid_final_answer_parsing(parser):
    text = (
        "Thought: I found the information\nFinal Answer: The temperature is 100 degrees"
    )
    result = parser.parse(text)
    assert isinstance(result, AgentFinish)
    assert result.return_values["output"] == "The temperature is 100 degrees"


def test_missing_action_error(parser):
    text = "Thought: Let's find the temperature\nAction Input: what is the temperature in SF?"
    with pytest.raises(OutputParserException) as exc_info:
        parser.parse(text)
    assert "Could not parse LLM output" in str(exc_info.value)


def test_missing_action_input_error(parser):
    text = "Thought: Let's find the temperature\nAction: search"
    with pytest.raises(OutputParserException) as exc_info:
        parser.parse(text)
    assert "Could not parse LLM output" in str(exc_info.value)


def test_action_and_final_answer_error(parser):
    text = "Thought: I found the information\nAction: search\nAction Input: what is the temperature in SF?\nFinal Answer: The temperature is 100 degrees"
    with pytest.raises(OutputParserException) as exc_info:
        parser.parse(text)
    assert "both perform Action and give a Final Answer" in str(exc_info.value)


def test_safe_repair_json(parser):
    invalid_json = '{"task": "Research XAI", "context": "Explainable AI", "coworker": Senior Researcher'
    expected_repaired_json = '{"task": "Research XAI", "context": "Explainable AI", "coworker": "Senior Researcher"}'
    result = parser._safe_repair_json(invalid_json)
    assert result == expected_repaired_json


def test_safe_repair_json_unrepairable(parser):
    invalid_json = "{invalid_json"
    result = parser._safe_repair_json(invalid_json)
    print("result:", invalid_json)
    assert result == invalid_json  # Should return the original if unrepairable


def test_safe_repair_json_missing_quotes(parser):
    invalid_json = (
        '{task: "Research XAI", context: "Explainable AI", coworker: Senior Researcher}'
    )
    expected_repaired_json = '{"task": "Research XAI", "context": "Explainable AI", "coworker": "Senior Researcher"}'
    result = parser._safe_repair_json(invalid_json)
    assert result == expected_repaired_json


def test_safe_repair_json_unclosed_brackets(parser):
    invalid_json = '{"task": "Research XAI", "context": "Explainable AI", "coworker": "Senior Researcher"'
    expected_repaired_json = '{"task": "Research XAI", "context": "Explainable AI", "coworker": "Senior Researcher"}'
    result = parser._safe_repair_json(invalid_json)
    assert result == expected_repaired_json


def test_safe_repair_json_extra_commas(parser):
    invalid_json = '{"task": "Research XAI", "context": "Explainable AI", "coworker": "Senior Researcher",}'
    expected_repaired_json = '{"task": "Research XAI", "context": "Explainable AI", "coworker": "Senior Researcher"}'
    result = parser._safe_repair_json(invalid_json)
    assert result == expected_repaired_json


def test_safe_repair_json_trailing_commas(parser):
    invalid_json = '{"task": "Research XAI", "context": "Explainable AI", "coworker": "Senior Researcher",}'
    expected_repaired_json = '{"task": "Research XAI", "context": "Explainable AI", "coworker": "Senior Researcher"}'
    result = parser._safe_repair_json(invalid_json)
    assert result == expected_repaired_json


def test_safe_repair_json_single_quotes(parser):
    invalid_json = "{'task': 'Research XAI', 'context': 'Explainable AI', 'coworker': 'Senior Researcher'}"
    expected_repaired_json = '{"task": "Research XAI", "context": "Explainable AI", "coworker": "Senior Researcher"}'
    result = parser._safe_repair_json(invalid_json)
    assert result == expected_repaired_json


def test_safe_repair_json_mixed_quotes(parser):
    invalid_json = "{'task': \"Research XAI\", 'context': \"Explainable AI\", 'coworker': 'Senior Researcher'}"
    expected_repaired_json = '{"task": "Research XAI", "context": "Explainable AI", "coworker": "Senior Researcher"}'
    result = parser._safe_repair_json(invalid_json)
    assert result == expected_repaired_json


def test_safe_repair_json_unescaped_characters(parser):
    invalid_json = '{"task": "Research XAI", "context": "Explainable AI", "coworker": "Senior Researcher\n"}'
    expected_repaired_json = '{"task": "Research XAI", "context": "Explainable AI", "coworker": "Senior Researcher"}'
    result = parser._safe_repair_json(invalid_json)
    print("result:", result)
    assert result == expected_repaired_json


def test_safe_repair_json_missing_colon(parser):
    invalid_json = '{"task" "Research XAI", "context": "Explainable AI", "coworker": "Senior Researcher"}'
    expected_repaired_json = '{"task": "Research XAI", "context": "Explainable AI", "coworker": "Senior Researcher"}'
    result = parser._safe_repair_json(invalid_json)
    assert result == expected_repaired_json


def test_safe_repair_json_missing_comma(parser):
    invalid_json = '{"task": "Research XAI" "context": "Explainable AI", "coworker": "Senior Researcher"}'
    expected_repaired_json = '{"task": "Research XAI", "context": "Explainable AI", "coworker": "Senior Researcher"}'
    result = parser._safe_repair_json(invalid_json)
    assert result == expected_repaired_json


def test_safe_repair_json_unexpected_trailing_characters(parser):
    invalid_json = '{"task": "Research XAI", "context": "Explainable AI", "coworker": "Senior Researcher"} random text'
    expected_repaired_json = '{"task": "Research XAI", "context": "Explainable AI", "coworker": "Senior Researcher"}'
    result = parser._safe_repair_json(invalid_json)
    assert result == expected_repaired_json


def test_safe_repair_json_special_characters_key(parser):
    invalid_json = '{"task!@#": "Research XAI", "context$%^": "Explainable AI", "coworker&*()": "Senior Researcher"}'
    expected_repaired_json = '{"task!@#": "Research XAI", "context$%^": "Explainable AI", "coworker&*()": "Senior Researcher"}'
    result = parser._safe_repair_json(invalid_json)
    assert result == expected_repaired_json


def test_parsing_with_whitespace(parser):
    text = "  Thought: Let's find the temperature  \n  Action: search  \n  Action Input: what is the temperature in SF?  "
    result = parser.parse(text)
    assert isinstance(result, AgentAction)
    assert result.tool == "search"
    assert result.tool_input == "what is the temperature in SF?"


def test_parsing_with_special_characters(parser):
    text = 'Thought: Let\'s find the temperature\nAction: search\nAction Input: "what is the temperature in SF?"'
    result = parser.parse(text)
    assert isinstance(result, AgentAction)
    assert result.tool == "search"
    assert result.tool_input == "what is the temperature in SF?"


def test_integration_valid_and_invalid(parser):
    text = """
    Thought: Let's find the temperature
    Action: search
    Action Input: what is the temperature in SF?

    Thought: I found the information
    Final Answer: The temperature is 100 degrees

    Thought: Missing action
    Action Input: invalid

    Thought: Missing action input
    Action: invalid
    """
    parts = text.strip().split("\n\n")
    results = []
    for part in parts:
        try:
            result = parser.parse(part.strip())
            results.append(result)
        except OutputParserException as e:
            results.append(e)

    assert isinstance(results[0], AgentAction)
    assert isinstance(results[1], AgentFinish)
    assert isinstance(results[2], OutputParserException)
    assert isinstance(results[3], OutputParserException)


class MockAgent:
    def increment_formatting_errors(self):
        pass


# TODO: ADD TEST TO MAKE SURE ** REMOVAL DOESN'T MESS UP ANYTHING
1804
tests/cassettes/test_async_execution_single_task.yaml
Normal file
File diff suppressed because it is too large Load Diff
4694
tests/cassettes/test_async_task_execution.yaml
Normal file
File diff suppressed because it is too large Load Diff
591
tests/cassettes/test_crew_async_kickoff.yaml
Normal file
@@ -0,0 +1,591 @@
interactions:
- request:
    body: '{"messages": [{"content": "You are dog Researcher. You have a lot of experience
      with dog.\nYour personal goal is: Express hot takes on dog.To give my best complete
      final answer to the task use the exact following format:\n\nThought: I now can
      give a great answer\nFinal Answer: my best complete final answer to the task.\nYour
      final answer must be the great and the most complete as possible, it must be
      outcome described.\n\nI MUST use these formats, my job depends on it!\nCurrent
      Task: Give me an analysis around dog.\n\nThis is the expect criteria for your
      final answer: 1 bullet point about dog that''s under 15 words. \n you MUST return
      the actual complete content as the final answer, not a summary.\n\nBegin! This
      is VERY important to you, use the tools available and give your best Final Answer,
      your job depends on it!\n\nThought:\n", "role": "user"}], "model": "gpt-4o",
      "n": 1, "stop": ["\nObservation"], "stream": true, "temperature": 0.7}'
    headers:
      accept:
      - application/json
      accept-encoding:
      - gzip, deflate, br
      connection:
      - keep-alive
      content-length:
      - '951'
      content-type:
      - application/json
      host:
      - api.openai.com
      user-agent:
      - OpenAI/Python 1.34.0
      x-stainless-arch:
      - arm64
      x-stainless-async:
      - 'false'
      x-stainless-lang:
      - python
      x-stainless-os:
      - MacOS
      x-stainless-package-version:
      - 1.34.0
      x-stainless-runtime:
      - CPython
      x-stainless-runtime-version:
      - 3.12.3
    method: POST
    uri: https://api.openai.com/v1/chat/completions
  response:
    body:
      string: 'data: {"id":"chatcmpl-9gdIXYXcf53VmxfiC6Q2NBDG2bPci","object":"chat.completion.chunk","created":1719947865,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_ce0793330f","choices":[{"index":0,"delta":{"role":"assistant","content":""},"logprobs":null,"finish_reason":null}]}


        data: {"id":"chatcmpl-9gdIXYXcf53VmxfiC6Q2NBDG2bPci","object":"chat.completion.chunk","created":1719947865,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_ce0793330f","choices":[{"index":0,"delta":{"content":"Thought"},"logprobs":null,"finish_reason":null}]}


        data: {"id":"chatcmpl-9gdIXYXcf53VmxfiC6Q2NBDG2bPci","object":"chat.completion.chunk","created":1719947865,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_ce0793330f","choices":[{"index":0,"delta":{"content":":"},"logprobs":null,"finish_reason":null}]}


        data: {"id":"chatcmpl-9gdIXYXcf53VmxfiC6Q2NBDG2bPci","object":"chat.completion.chunk","created":1719947865,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_ce0793330f","choices":[{"index":0,"delta":{"content":"
        I"},"logprobs":null,"finish_reason":null}]}


        data: {"id":"chatcmpl-9gdIXYXcf53VmxfiC6Q2NBDG2bPci","object":"chat.completion.chunk","created":1719947865,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_ce0793330f","choices":[{"index":0,"delta":{"content":"
        now"},"logprobs":null,"finish_reason":null}]}


        data: {"id":"chatcmpl-9gdIXYXcf53VmxfiC6Q2NBDG2bPci","object":"chat.completion.chunk","created":1719947865,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_ce0793330f","choices":[{"index":0,"delta":{"content":"
        can"},"logprobs":null,"finish_reason":null}]}


        data: {"id":"chatcmpl-9gdIXYXcf53VmxfiC6Q2NBDG2bPci","object":"chat.completion.chunk","created":1719947865,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_ce0793330f","choices":[{"index":0,"delta":{"content":"
        give"},"logprobs":null,"finish_reason":null}]}


        data: {"id":"chatcmpl-9gdIXYXcf53VmxfiC6Q2NBDG2bPci","object":"chat.completion.chunk","created":1719947865,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_ce0793330f","choices":[{"index":0,"delta":{"content":"
        a"},"logprobs":null,"finish_reason":null}]}


        data: {"id":"chatcmpl-9gdIXYXcf53VmxfiC6Q2NBDG2bPci","object":"chat.completion.chunk","created":1719947865,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_ce0793330f","choices":[{"index":0,"delta":{"content":"
        great"},"logprobs":null,"finish_reason":null}]}


        data: {"id":"chatcmpl-9gdIXYXcf53VmxfiC6Q2NBDG2bPci","object":"chat.completion.chunk","created":1719947865,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_ce0793330f","choices":[{"index":0,"delta":{"content":"
        answer"},"logprobs":null,"finish_reason":null}]}


        data: {"id":"chatcmpl-9gdIXYXcf53VmxfiC6Q2NBDG2bPci","object":"chat.completion.chunk","created":1719947865,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_ce0793330f","choices":[{"index":0,"delta":{"content":"\n"},"logprobs":null,"finish_reason":null}]}


        data: {"id":"chatcmpl-9gdIXYXcf53VmxfiC6Q2NBDG2bPci","object":"chat.completion.chunk","created":1719947865,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_ce0793330f","choices":[{"index":0,"delta":{"content":"Final"},"logprobs":null,"finish_reason":null}]}


        data: {"id":"chatcmpl-9gdIXYXcf53VmxfiC6Q2NBDG2bPci","object":"chat.completion.chunk","created":1719947865,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_ce0793330f","choices":[{"index":0,"delta":{"content":"
        Answer"},"logprobs":null,"finish_reason":null}]}


        data: {"id":"chatcmpl-9gdIXYXcf53VmxfiC6Q2NBDG2bPci","object":"chat.completion.chunk","created":1719947865,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_ce0793330f","choices":[{"index":0,"delta":{"content":":"},"logprobs":null,"finish_reason":null}]}


        data: {"id":"chatcmpl-9gdIXYXcf53VmxfiC6Q2NBDG2bPci","object":"chat.completion.chunk","created":1719947865,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_ce0793330f","choices":[{"index":0,"delta":{"content":"
        Dogs"},"logprobs":null,"finish_reason":null}]}


        data: {"id":"chatcmpl-9gdIXYXcf53VmxfiC6Q2NBDG2bPci","object":"chat.completion.chunk","created":1719947865,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_ce0793330f","choices":[{"index":0,"delta":{"content":"
        are"},"logprobs":null,"finish_reason":null}]}


        data: {"id":"chatcmpl-9gdIXYXcf53VmxfiC6Q2NBDG2bPci","object":"chat.completion.chunk","created":1719947865,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_ce0793330f","choices":[{"index":0,"delta":{"content":"
        incredibly"},"logprobs":null,"finish_reason":null}]}


        data: {"id":"chatcmpl-9gdIXYXcf53VmxfiC6Q2NBDG2bPci","object":"chat.completion.chunk","created":1719947865,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_ce0793330f","choices":[{"index":0,"delta":{"content":"
        loyal"},"logprobs":null,"finish_reason":null}]}


        data: {"id":"chatcmpl-9gdIXYXcf53VmxfiC6Q2NBDG2bPci","object":"chat.completion.chunk","created":1719947865,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_ce0793330f","choices":[{"index":0,"delta":{"content":"
        and"},"logprobs":null,"finish_reason":null}]}


        data: {"id":"chatcmpl-9gdIXYXcf53VmxfiC6Q2NBDG2bPci","object":"chat.completion.chunk","created":1719947865,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_ce0793330f","choices":[{"index":0,"delta":{"content":"
        provide"},"logprobs":null,"finish_reason":null}]}


        data: {"id":"chatcmpl-9gdIXYXcf53VmxfiC6Q2NBDG2bPci","object":"chat.completion.chunk","created":1719947865,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_ce0793330f","choices":[{"index":0,"delta":{"content":"
        unmatched"},"logprobs":null,"finish_reason":null}]}


        data: {"id":"chatcmpl-9gdIXYXcf53VmxfiC6Q2NBDG2bPci","object":"chat.completion.chunk","created":1719947865,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_ce0793330f","choices":[{"index":0,"delta":{"content":"
        companionship"},"logprobs":null,"finish_reason":null}]}


        data: {"id":"chatcmpl-9gdIXYXcf53VmxfiC6Q2NBDG2bPci","object":"chat.completion.chunk","created":1719947865,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_ce0793330f","choices":[{"index":0,"delta":{"content":"
        to"},"logprobs":null,"finish_reason":null}]}


        data: {"id":"chatcmpl-9gdIXYXcf53VmxfiC6Q2NBDG2bPci","object":"chat.completion.chunk","created":1719947865,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_ce0793330f","choices":[{"index":0,"delta":{"content":"
        humans"},"logprobs":null,"finish_reason":null}]}


        data: {"id":"chatcmpl-9gdIXYXcf53VmxfiC6Q2NBDG2bPci","object":"chat.completion.chunk","created":1719947865,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_ce0793330f","choices":[{"index":0,"delta":{"content":"."},"logprobs":null,"finish_reason":null}]}


        data: {"id":"chatcmpl-9gdIXYXcf53VmxfiC6Q2NBDG2bPci","object":"chat.completion.chunk","created":1719947865,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_ce0793330f","choices":[{"index":0,"delta":{},"logprobs":null,"finish_reason":"stop"}]}


        data: [DONE]


        '
    headers:
      CF-Cache-Status:
      - DYNAMIC
      CF-RAY:
      - 89d0fa4e7abf53db-ATL
      Connection:
      - keep-alive
      Content-Type:
      - text/event-stream; charset=utf-8
      Date:
      - Tue, 02 Jul 2024 19:17:45 GMT
      Server:
      - cloudflare
      Set-Cookie:
      - __cf_bm=6Xl2nvdsXT4uSfQ3C1ZK.LWKGYekVs5ErrLDZOdI.50-1719947865-1.0.1.1-6RQoTCznxe7H868MoxghRegIZaElbG_bN_jbs94hmnsnuR1P9bptoj8o2DbOSvj48ubewyvy8L16mOZHlMLw_A;
        path=/; expires=Tue, 02-Jul-24 19:47:45 GMT; domain=.api.openai.com; HttpOnly;
        Secure; SameSite=None
      - _cfuvid=kPTMOkGHQp0ytgVUrm3jFNiB9I.DDI2ONPRTr6IMTeo-1719947865623-0.0.1.1-604800000;
        path=/; domain=.api.openai.com; HttpOnly; Secure; SameSite=None
      Transfer-Encoding:
      - chunked
      alt-svc:
      - h3=":443"; ma=86400
|
||||
openai-organization:
|
||||
- crewai-iuxna1
|
||||
openai-processing-ms:
|
||||
- '102'
|
||||
openai-version:
|
||||
- '2020-10-01'
|
||||
strict-transport-security:
|
||||
- max-age=31536000; includeSubDomains
|
||||
x-ratelimit-limit-requests:
|
||||
- '10000'
|
||||
x-ratelimit-limit-tokens:
|
||||
- '16000000'
|
||||
x-ratelimit-remaining-requests:
|
||||
- '9997'
|
||||
x-ratelimit-remaining-tokens:
|
||||
- '15999783'
|
||||
x-ratelimit-reset-requests:
|
||||
- 14ms
|
||||
x-ratelimit-reset-tokens:
|
||||
- 0s
|
||||
x-request-id:
|
||||
- req_2c5219e228ce79f0131c497230904013
|
||||
status:
|
||||
code: 200
|
||||
message: OK
|
||||
- request:
    body: '{"messages": [{"content": "You are apple Researcher. You have a lot of experience with apple.\nYour personal goal is: Express hot takes on apple.To give my best complete final answer to the task use the exact following format:\n\nThought: I now can give a great answer\nFinal Answer: my best complete final answer to the task.\nYour final answer must be the great and the most complete as possible, it must be outcome described.\n\nI MUST use these formats, my job depends on it!\nCurrent Task: Give me an analysis around apple.\n\nThis is the expect criteria for your final answer: 1 bullet point about apple that''s under 15 words. \n you MUST return the actual complete content as the final answer, not a summary.\n\nBegin! This is VERY important to you, use the tools available and give your best Final Answer, your job depends on it!\n\nThought:\n", "role": "user"}], "model": "gpt-4o", "n": 1, "stop": ["\nObservation"], "stream": true, "temperature": 0.7}'
    headers:
      accept:
      - application/json
      accept-encoding:
      - gzip, deflate, br
      connection:
      - keep-alive
      content-length:
      - '961'
      content-type:
      - application/json
      host:
      - api.openai.com
      user-agent:
      - OpenAI/Python 1.34.0
      x-stainless-arch:
      - arm64
      x-stainless-async:
      - 'false'
      x-stainless-lang:
      - python
      x-stainless-os:
      - MacOS
      x-stainless-package-version:
      - 1.34.0
      x-stainless-runtime:
      - CPython
      x-stainless-runtime-version:
      - 3.12.3
    method: POST
    uri: https://api.openai.com/v1/chat/completions
  response:
    body:
      string: 'data: {"id":"chatcmpl-9gdIXaXAntrwdA2E5Bhxgz9p7q5Nc","object":"chat.completion.chunk","created":1719947865,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_d576307f90","choices":[{"index":0,"delta":{"role":"assistant","content":""},"logprobs":null,"finish_reason":null}]}


        data: {"id":"chatcmpl-9gdIXaXAntrwdA2E5Bhxgz9p7q5Nc","object":"chat.completion.chunk","created":1719947865,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_d576307f90","choices":[{"index":0,"delta":{"content":"I"},"logprobs":null,"finish_reason":null}]}


        data: {"id":"chatcmpl-9gdIXaXAntrwdA2E5Bhxgz9p7q5Nc","object":"chat.completion.chunk","created":1719947865,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_d576307f90","choices":[{"index":0,"delta":{"content":" now"},"logprobs":null,"finish_reason":null}]}


        data: {"id":"chatcmpl-9gdIXaXAntrwdA2E5Bhxgz9p7q5Nc","object":"chat.completion.chunk","created":1719947865,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_d576307f90","choices":[{"index":0,"delta":{"content":" can"},"logprobs":null,"finish_reason":null}]}


        data: {"id":"chatcmpl-9gdIXaXAntrwdA2E5Bhxgz9p7q5Nc","object":"chat.completion.chunk","created":1719947865,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_d576307f90","choices":[{"index":0,"delta":{"content":" give"},"logprobs":null,"finish_reason":null}]}


        data: {"id":"chatcmpl-9gdIXaXAntrwdA2E5Bhxgz9p7q5Nc","object":"chat.completion.chunk","created":1719947865,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_d576307f90","choices":[{"index":0,"delta":{"content":" a"},"logprobs":null,"finish_reason":null}]}


        data: {"id":"chatcmpl-9gdIXaXAntrwdA2E5Bhxgz9p7q5Nc","object":"chat.completion.chunk","created":1719947865,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_d576307f90","choices":[{"index":0,"delta":{"content":" great"},"logprobs":null,"finish_reason":null}]}


        data: {"id":"chatcmpl-9gdIXaXAntrwdA2E5Bhxgz9p7q5Nc","object":"chat.completion.chunk","created":1719947865,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_d576307f90","choices":[{"index":0,"delta":{"content":" answer"},"logprobs":null,"finish_reason":null}]}


        data: {"id":"chatcmpl-9gdIXaXAntrwdA2E5Bhxgz9p7q5Nc","object":"chat.completion.chunk","created":1719947865,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_d576307f90","choices":[{"index":0,"delta":{"content":".\n"},"logprobs":null,"finish_reason":null}]}


        data: {"id":"chatcmpl-9gdIXaXAntrwdA2E5Bhxgz9p7q5Nc","object":"chat.completion.chunk","created":1719947865,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_d576307f90","choices":[{"index":0,"delta":{"content":"Final"},"logprobs":null,"finish_reason":null}]}


        data: {"id":"chatcmpl-9gdIXaXAntrwdA2E5Bhxgz9p7q5Nc","object":"chat.completion.chunk","created":1719947865,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_d576307f90","choices":[{"index":0,"delta":{"content":" Answer"},"logprobs":null,"finish_reason":null}]}


        data: {"id":"chatcmpl-9gdIXaXAntrwdA2E5Bhxgz9p7q5Nc","object":"chat.completion.chunk","created":1719947865,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_d576307f90","choices":[{"index":0,"delta":{"content":":"},"logprobs":null,"finish_reason":null}]}


        data: {"id":"chatcmpl-9gdIXaXAntrwdA2E5Bhxgz9p7q5Nc","object":"chat.completion.chunk","created":1719947865,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_d576307f90","choices":[{"index":0,"delta":{"content":" Apple"},"logprobs":null,"finish_reason":null}]}


        data: {"id":"chatcmpl-9gdIXaXAntrwdA2E5Bhxgz9p7q5Nc","object":"chat.completion.chunk","created":1719947865,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_d576307f90","choices":[{"index":0,"delta":{"content":" revolution"},"logprobs":null,"finish_reason":null}]}


        data: {"id":"chatcmpl-9gdIXaXAntrwdA2E5Bhxgz9p7q5Nc","object":"chat.completion.chunk","created":1719947865,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_d576307f90","choices":[{"index":0,"delta":{"content":"izes"},"logprobs":null,"finish_reason":null}]}


        data: {"id":"chatcmpl-9gdIXaXAntrwdA2E5Bhxgz9p7q5Nc","object":"chat.completion.chunk","created":1719947865,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_d576307f90","choices":[{"index":0,"delta":{"content":" technology"},"logprobs":null,"finish_reason":null}]}


        data: {"id":"chatcmpl-9gdIXaXAntrwdA2E5Bhxgz9p7q5Nc","object":"chat.completion.chunk","created":1719947865,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_d576307f90","choices":[{"index":0,"delta":{"content":" with"},"logprobs":null,"finish_reason":null}]}


        data: {"id":"chatcmpl-9gdIXaXAntrwdA2E5Bhxgz9p7q5Nc","object":"chat.completion.chunk","created":1719947865,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_d576307f90","choices":[{"index":0,"delta":{"content":" sleek"},"logprobs":null,"finish_reason":null}]}


        data: {"id":"chatcmpl-9gdIXaXAntrwdA2E5Bhxgz9p7q5Nc","object":"chat.completion.chunk","created":1719947865,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_d576307f90","choices":[{"index":0,"delta":{"content":" designs"},"logprobs":null,"finish_reason":null}]}


        data: {"id":"chatcmpl-9gdIXaXAntrwdA2E5Bhxgz9p7q5Nc","object":"chat.completion.chunk","created":1719947865,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_d576307f90","choices":[{"index":0,"delta":{"content":","},"logprobs":null,"finish_reason":null}]}


        data: {"id":"chatcmpl-9gdIXaXAntrwdA2E5Bhxgz9p7q5Nc","object":"chat.completion.chunk","created":1719947865,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_d576307f90","choices":[{"index":0,"delta":{"content":" seamless"},"logprobs":null,"finish_reason":null}]}


        data: {"id":"chatcmpl-9gdIXaXAntrwdA2E5Bhxgz9p7q5Nc","object":"chat.completion.chunk","created":1719947865,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_d576307f90","choices":[{"index":0,"delta":{"content":" integration"},"logprobs":null,"finish_reason":null}]}


        data: {"id":"chatcmpl-9gdIXaXAntrwdA2E5Bhxgz9p7q5Nc","object":"chat.completion.chunk","created":1719947865,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_d576307f90","choices":[{"index":0,"delta":{"content":","},"logprobs":null,"finish_reason":null}]}


        data: {"id":"chatcmpl-9gdIXaXAntrwdA2E5Bhxgz9p7q5Nc","object":"chat.completion.chunk","created":1719947865,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_d576307f90","choices":[{"index":0,"delta":{"content":" and"},"logprobs":null,"finish_reason":null}]}


        data: {"id":"chatcmpl-9gdIXaXAntrwdA2E5Bhxgz9p7q5Nc","object":"chat.completion.chunk","created":1719947865,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_d576307f90","choices":[{"index":0,"delta":{"content":" innovative"},"logprobs":null,"finish_reason":null}]}


        data: {"id":"chatcmpl-9gdIXaXAntrwdA2E5Bhxgz9p7q5Nc","object":"chat.completion.chunk","created":1719947865,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_d576307f90","choices":[{"index":0,"delta":{"content":" user"},"logprobs":null,"finish_reason":null}]}


        data: {"id":"chatcmpl-9gdIXaXAntrwdA2E5Bhxgz9p7q5Nc","object":"chat.completion.chunk","created":1719947865,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_d576307f90","choices":[{"index":0,"delta":{"content":" experiences"},"logprobs":null,"finish_reason":null}]}


        data: {"id":"chatcmpl-9gdIXaXAntrwdA2E5Bhxgz9p7q5Nc","object":"chat.completion.chunk","created":1719947865,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_d576307f90","choices":[{"index":0,"delta":{"content":"."},"logprobs":null,"finish_reason":null}]}


        data: {"id":"chatcmpl-9gdIXaXAntrwdA2E5Bhxgz9p7q5Nc","object":"chat.completion.chunk","created":1719947865,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_d576307f90","choices":[{"index":0,"delta":{},"logprobs":null,"finish_reason":"stop"}]}


        data: [DONE]


        '
    headers:
      CF-Cache-Status:
      - DYNAMIC
      CF-RAY:
      - 89d0fa4e7ca907e6-ATL
      Connection:
      - keep-alive
      Content-Type:
      - text/event-stream; charset=utf-8
      Date:
      - Tue, 02 Jul 2024 19:17:45 GMT
      Server:
      - cloudflare
      Set-Cookie:
      - __cf_bm=wf2ozMjr46sG0EhuZjpiDNagwTxC05ct3Hn7Y9Rs5AI-1719947865-1.0.1.1-uckxTTr7Yfe6sv4ZznqqrGTEz9E3_Cpp7OAWBIEeNz1Smdjwijw8YV5oYPe_6W4DrEtwVzRDxaqIHlWP55O0QA; path=/; expires=Tue, 02-Jul-24 19:47:45 GMT; domain=.api.openai.com; HttpOnly; Secure; SameSite=None
      - _cfuvid=F9pWw4TeoPa8puOm5RN9Gp2oY0lRoN53ChZ1qFYx1S8-1719947865726-0.0.1.1-604800000; path=/; domain=.api.openai.com; HttpOnly; Secure; SameSite=None
      Transfer-Encoding:
      - chunked
      alt-svc:
      - h3=":443"; ma=86400
      openai-organization:
      - crewai-iuxna1
      openai-processing-ms:
      - '168'
      openai-version:
      - '2020-10-01'
      strict-transport-security:
      - max-age=31536000; includeSubDomains
      x-ratelimit-limit-requests:
      - '10000'
      x-ratelimit-limit-tokens:
      - '16000000'
      x-ratelimit-remaining-requests:
      - '9998'
      x-ratelimit-remaining-tokens:
      - '15999780'
      x-ratelimit-reset-requests:
      - 10ms
      x-ratelimit-reset-tokens:
      - 0s
      x-request-id:
      - req_e6dfeda5935eae030bcc2da526234635
    status:
      code: 200
      message: OK
- request:
    body: '{"messages": [{"content": "You are cat Researcher. You have a lot of experience with cat.\nYour personal goal is: Express hot takes on cat.To give my best complete final answer to the task use the exact following format:\n\nThought: I now can give a great answer\nFinal Answer: my best complete final answer to the task.\nYour final answer must be the great and the most complete as possible, it must be outcome described.\n\nI MUST use these formats, my job depends on it!\nCurrent Task: Give me an analysis around cat.\n\nThis is the expect criteria for your final answer: 1 bullet point about cat that''s under 15 words. \n you MUST return the actual complete content as the final answer, not a summary.\n\nBegin! This is VERY important to you, use the tools available and give your best Final Answer, your job depends on it!\n\nThought:\n", "role": "user"}], "model": "gpt-4o", "n": 1, "stop": ["\nObservation"], "stream": true, "temperature": 0.7}'
    headers:
      accept:
      - application/json
      accept-encoding:
      - gzip, deflate, br
      connection:
      - keep-alive
      content-length:
      - '951'
      content-type:
      - application/json
      host:
      - api.openai.com
      user-agent:
      - OpenAI/Python 1.34.0
      x-stainless-arch:
      - arm64
      x-stainless-async:
      - 'false'
      x-stainless-lang:
      - python
      x-stainless-os:
      - MacOS
      x-stainless-package-version:
      - 1.34.0
      x-stainless-runtime:
      - CPython
      x-stainless-runtime-version:
      - 3.12.3
    method: POST
    uri: https://api.openai.com/v1/chat/completions
  response:
    body:
      string: 'data: {"id":"chatcmpl-9gdIXPfC85ZAgbI0KqvS9z396XBKw","object":"chat.completion.chunk","created":1719947865,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_ce0793330f","choices":[{"index":0,"delta":{"role":"assistant","content":""},"logprobs":null,"finish_reason":null}]}


        data: {"id":"chatcmpl-9gdIXPfC85ZAgbI0KqvS9z396XBKw","object":"chat.completion.chunk","created":1719947865,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_ce0793330f","choices":[{"index":0,"delta":{"content":"Thought"},"logprobs":null,"finish_reason":null}]}


        data: {"id":"chatcmpl-9gdIXPfC85ZAgbI0KqvS9z396XBKw","object":"chat.completion.chunk","created":1719947865,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_ce0793330f","choices":[{"index":0,"delta":{"content":":"},"logprobs":null,"finish_reason":null}]}


        data: {"id":"chatcmpl-9gdIXPfC85ZAgbI0KqvS9z396XBKw","object":"chat.completion.chunk","created":1719947865,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_ce0793330f","choices":[{"index":0,"delta":{"content":" I"},"logprobs":null,"finish_reason":null}]}


        data: {"id":"chatcmpl-9gdIXPfC85ZAgbI0KqvS9z396XBKw","object":"chat.completion.chunk","created":1719947865,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_ce0793330f","choices":[{"index":0,"delta":{"content":" now"},"logprobs":null,"finish_reason":null}]}


        data: {"id":"chatcmpl-9gdIXPfC85ZAgbI0KqvS9z396XBKw","object":"chat.completion.chunk","created":1719947865,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_ce0793330f","choices":[{"index":0,"delta":{"content":" can"},"logprobs":null,"finish_reason":null}]}


        data: {"id":"chatcmpl-9gdIXPfC85ZAgbI0KqvS9z396XBKw","object":"chat.completion.chunk","created":1719947865,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_ce0793330f","choices":[{"index":0,"delta":{"content":" give"},"logprobs":null,"finish_reason":null}]}


        data: {"id":"chatcmpl-9gdIXPfC85ZAgbI0KqvS9z396XBKw","object":"chat.completion.chunk","created":1719947865,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_ce0793330f","choices":[{"index":0,"delta":{"content":" a"},"logprobs":null,"finish_reason":null}]}


        data: {"id":"chatcmpl-9gdIXPfC85ZAgbI0KqvS9z396XBKw","object":"chat.completion.chunk","created":1719947865,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_ce0793330f","choices":[{"index":0,"delta":{"content":" great"},"logprobs":null,"finish_reason":null}]}


        data: {"id":"chatcmpl-9gdIXPfC85ZAgbI0KqvS9z396XBKw","object":"chat.completion.chunk","created":1719947865,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_ce0793330f","choices":[{"index":0,"delta":{"content":" answer"},"logprobs":null,"finish_reason":null}]}


        data: {"id":"chatcmpl-9gdIXPfC85ZAgbI0KqvS9z396XBKw","object":"chat.completion.chunk","created":1719947865,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_ce0793330f","choices":[{"index":0,"delta":{"content":"\n"},"logprobs":null,"finish_reason":null}]}


        data: {"id":"chatcmpl-9gdIXPfC85ZAgbI0KqvS9z396XBKw","object":"chat.completion.chunk","created":1719947865,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_ce0793330f","choices":[{"index":0,"delta":{"content":"Final"},"logprobs":null,"finish_reason":null}]}


        data: {"id":"chatcmpl-9gdIXPfC85ZAgbI0KqvS9z396XBKw","object":"chat.completion.chunk","created":1719947865,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_ce0793330f","choices":[{"index":0,"delta":{"content":" Answer"},"logprobs":null,"finish_reason":null}]}


        data: {"id":"chatcmpl-9gdIXPfC85ZAgbI0KqvS9z396XBKw","object":"chat.completion.chunk","created":1719947865,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_ce0793330f","choices":[{"index":0,"delta":{"content":":"},"logprobs":null,"finish_reason":null}]}


        data: {"id":"chatcmpl-9gdIXPfC85ZAgbI0KqvS9z396XBKw","object":"chat.completion.chunk","created":1719947865,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_ce0793330f","choices":[{"index":0,"delta":{"content":" Cats"},"logprobs":null,"finish_reason":null}]}


        data: {"id":"chatcmpl-9gdIXPfC85ZAgbI0KqvS9z396XBKw","object":"chat.completion.chunk","created":1719947865,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_ce0793330f","choices":[{"index":0,"delta":{"content":" are"},"logprobs":null,"finish_reason":null}]}


        data: {"id":"chatcmpl-9gdIXPfC85ZAgbI0KqvS9z396XBKw","object":"chat.completion.chunk","created":1719947865,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_ce0793330f","choices":[{"index":0,"delta":{"content":" master"},"logprobs":null,"finish_reason":null}]}


        data: {"id":"chatcmpl-9gdIXPfC85ZAgbI0KqvS9z396XBKw","object":"chat.completion.chunk","created":1719947865,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_ce0793330f","choices":[{"index":0,"delta":{"content":"ful"},"logprobs":null,"finish_reason":null}]}


        data: {"id":"chatcmpl-9gdIXPfC85ZAgbI0KqvS9z396XBKw","object":"chat.completion.chunk","created":1719947865,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_ce0793330f","choices":[{"index":0,"delta":{"content":" hunters"},"logprobs":null,"finish_reason":null}]}


        data: {"id":"chatcmpl-9gdIXPfC85ZAgbI0KqvS9z396XBKw","object":"chat.completion.chunk","created":1719947865,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_ce0793330f","choices":[{"index":0,"delta":{"content":" and"},"logprobs":null,"finish_reason":null}]}


        data: {"id":"chatcmpl-9gdIXPfC85ZAgbI0KqvS9z396XBKw","object":"chat.completion.chunk","created":1719947865,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_ce0793330f","choices":[{"index":0,"delta":{"content":" brilliant"},"logprobs":null,"finish_reason":null}]}


        data: {"id":"chatcmpl-9gdIXPfC85ZAgbI0KqvS9z396XBKw","object":"chat.completion.chunk","created":1719947865,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_ce0793330f","choices":[{"index":0,"delta":{"content":" problem"},"logprobs":null,"finish_reason":null}]}


        data: {"id":"chatcmpl-9gdIXPfC85ZAgbI0KqvS9z396XBKw","object":"chat.completion.chunk","created":1719947865,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_ce0793330f","choices":[{"index":0,"delta":{"content":"-sol"},"logprobs":null,"finish_reason":null}]}


        data: {"id":"chatcmpl-9gdIXPfC85ZAgbI0KqvS9z396XBKw","object":"chat.completion.chunk","created":1719947865,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_ce0793330f","choices":[{"index":0,"delta":{"content":"vers"},"logprobs":null,"finish_reason":null}]}


        data: {"id":"chatcmpl-9gdIXPfC85ZAgbI0KqvS9z396XBKw","object":"chat.completion.chunk","created":1719947865,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_ce0793330f","choices":[{"index":0,"delta":{"content":"."},"logprobs":null,"finish_reason":null}]}


        data: {"id":"chatcmpl-9gdIXPfC85ZAgbI0KqvS9z396XBKw","object":"chat.completion.chunk","created":1719947865,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_ce0793330f","choices":[{"index":0,"delta":{},"logprobs":null,"finish_reason":"stop"}]}


        data: [DONE]


        '
    headers:
      CF-Cache-Status:
      - DYNAMIC
      CF-RAY:
      - 89d0fa4e7ae912d7-ATL
      Connection:
      - keep-alive
      Content-Type:
      - text/event-stream; charset=utf-8
      Date:
      - Tue, 02 Jul 2024 19:17:45 GMT
      Server:
      - cloudflare
      Set-Cookie:
      - __cf_bm=y7JNZ8WEp.q5pMXLi79ajfcI.F6MfE0GeYLw34Apkf0-1719947865-1.0.1.1-QKklGeYuOnsQROgqMs42XwqKNvW.mPrmcbtaxMnUg3eSgI7TRnRq4qPuSan0ynDt4Hd9NMuls2FR.Caa1MVr9Q; path=/; expires=Tue, 02-Jul-24 19:47:45 GMT; domain=.api.openai.com; HttpOnly; Secure; SameSite=None
      - _cfuvid=FVQoSgcvVyiB_o43X6y5MGYgzGojmsQqS.nPObW3JYU-1719947865679-0.0.1.1-604800000; path=/; domain=.api.openai.com; HttpOnly; Secure; SameSite=None
      Transfer-Encoding:
      - chunked
      alt-svc:
      - h3=":443"; ma=86400
      openai-organization:
      - crewai-iuxna1
      openai-processing-ms:
      - '132'
      openai-version:
      - '2020-10-01'
      strict-transport-security:
      - max-age=31536000; includeSubDomains
      x-ratelimit-limit-requests:
      - '10000'
      x-ratelimit-limit-tokens:
      - '16000000'
      x-ratelimit-remaining-requests:
      - '9999'
      x-ratelimit-remaining-tokens:
      - '15999783'
      x-ratelimit-reset-requests:
      - 6ms
      x-ratelimit-reset-tokens:
      - 0s
      x-request-id:
      - req_a06bde4044d3ee75edf08f333139679c
    status:
      code: 200
      message: OK
version: 1
@@ -1,311 +0,0 @@
interactions:
- request:
    body: '{"messages": [{"content": "You are test role. test backstory\nYour personal goal is: test goalTo give my best complete final answer to the task use the exact following format:\n\nThought: I now can give a great answer\nFinal Answer: my best complete final answer to the task.\nYour final answer must be the great and the most complete as possible, it must be outcome described.\n\nI MUST use these formats, my job depends on it!\nCurrent Task: just say hi!\n\nThis is the expect criteria for your final answer: your greeting \n you MUST return the actual complete content as the final answer, not a summary.\n\nBegin! This is VERY important to you, use the tools available and give your best Final Answer, your job depends on it!\n\nThought:\n", "role": "user"}], "model": "gpt-4o", "n": 1, "stop": ["\nObservation"], "stream": true, "temperature": 0.7}'
    headers:
      accept:
      - application/json
      accept-encoding:
      - gzip, deflate, br
      connection:
      - keep-alive
      content-length:
      - '853'
      content-type:
      - application/json
      host:
      - api.openai.com
      user-agent:
      - OpenAI/Python 1.35.10
      x-stainless-arch:
      - arm64
      x-stainless-async:
      - 'false'
      x-stainless-lang:
      - python
      x-stainless-os:
      - MacOS
      x-stainless-package-version:
      - 1.35.10
      x-stainless-runtime:
      - CPython
      x-stainless-runtime-version:
      - 3.11.9
    method: POST
    uri: https://api.openai.com/v1/chat/completions
  response:
    body:
      string: 'data: {"id":"chatcmpl-9hQvRT6xi6C8Fz3S8GFGiottv1VnH","object":"chat.completion.chunk","created":1720138633,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_ce0793330f","choices":[{"index":0,"delta":{"role":"assistant","content":""},"logprobs":null,"finish_reason":null}]}


        data: {"id":"chatcmpl-9hQvRT6xi6C8Fz3S8GFGiottv1VnH","object":"chat.completion.chunk","created":1720138633,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_ce0793330f","choices":[{"index":0,"delta":{"content":"Thought"},"logprobs":null,"finish_reason":null}]}


        data: {"id":"chatcmpl-9hQvRT6xi6C8Fz3S8GFGiottv1VnH","object":"chat.completion.chunk","created":1720138633,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_ce0793330f","choices":[{"index":0,"delta":{"content":":"},"logprobs":null,"finish_reason":null}]}


        data: {"id":"chatcmpl-9hQvRT6xi6C8Fz3S8GFGiottv1VnH","object":"chat.completion.chunk","created":1720138633,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_ce0793330f","choices":[{"index":0,"delta":{"content":" I"},"logprobs":null,"finish_reason":null}]}


        data: {"id":"chatcmpl-9hQvRT6xi6C8Fz3S8GFGiottv1VnH","object":"chat.completion.chunk","created":1720138633,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_ce0793330f","choices":[{"index":0,"delta":{"content":" now"},"logprobs":null,"finish_reason":null}]}


        data: {"id":"chatcmpl-9hQvRT6xi6C8Fz3S8GFGiottv1VnH","object":"chat.completion.chunk","created":1720138633,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_ce0793330f","choices":[{"index":0,"delta":{"content":" can"},"logprobs":null,"finish_reason":null}]}


        data: {"id":"chatcmpl-9hQvRT6xi6C8Fz3S8GFGiottv1VnH","object":"chat.completion.chunk","created":1720138633,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_ce0793330f","choices":[{"index":0,"delta":{"content":" give"},"logprobs":null,"finish_reason":null}]}


        data: {"id":"chatcmpl-9hQvRT6xi6C8Fz3S8GFGiottv1VnH","object":"chat.completion.chunk","created":1720138633,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_ce0793330f","choices":[{"index":0,"delta":{"content":" a"},"logprobs":null,"finish_reason":null}]}


        data: {"id":"chatcmpl-9hQvRT6xi6C8Fz3S8GFGiottv1VnH","object":"chat.completion.chunk","created":1720138633,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_ce0793330f","choices":[{"index":0,"delta":{"content":" great"},"logprobs":null,"finish_reason":null}]}


        data: {"id":"chatcmpl-9hQvRT6xi6C8Fz3S8GFGiottv1VnH","object":"chat.completion.chunk","created":1720138633,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_ce0793330f","choices":[{"index":0,"delta":{"content":" answer"},"logprobs":null,"finish_reason":null}]}


        data: {"id":"chatcmpl-9hQvRT6xi6C8Fz3S8GFGiottv1VnH","object":"chat.completion.chunk","created":1720138633,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_ce0793330f","choices":[{"index":0,"delta":{"content":"\n"},"logprobs":null,"finish_reason":null}]}


        data: {"id":"chatcmpl-9hQvRT6xi6C8Fz3S8GFGiottv1VnH","object":"chat.completion.chunk","created":1720138633,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_ce0793330f","choices":[{"index":0,"delta":{"content":"Final"},"logprobs":null,"finish_reason":null}]}


        data: {"id":"chatcmpl-9hQvRT6xi6C8Fz3S8GFGiottv1VnH","object":"chat.completion.chunk","created":1720138633,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_ce0793330f","choices":[{"index":0,"delta":{"content":" Answer"},"logprobs":null,"finish_reason":null}]}


        data: {"id":"chatcmpl-9hQvRT6xi6C8Fz3S8GFGiottv1VnH","object":"chat.completion.chunk","created":1720138633,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_ce0793330f","choices":[{"index":0,"delta":{"content":":"},"logprobs":null,"finish_reason":null}]}


        data: {"id":"chatcmpl-9hQvRT6xi6C8Fz3S8GFGiottv1VnH","object":"chat.completion.chunk","created":1720138633,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_ce0793330f","choices":[{"index":0,"delta":{"content":" Hi"},"logprobs":null,"finish_reason":null}]}


        data: {"id":"chatcmpl-9hQvRT6xi6C8Fz3S8GFGiottv1VnH","object":"chat.completion.chunk","created":1720138633,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_ce0793330f","choices":[{"index":0,"delta":{"content":"!"},"logprobs":null,"finish_reason":null}]}


        data: {"id":"chatcmpl-9hQvRT6xi6C8Fz3S8GFGiottv1VnH","object":"chat.completion.chunk","created":1720138633,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_ce0793330f","choices":[{"index":0,"delta":{},"logprobs":null,"finish_reason":"stop"}]}


        data: [DONE]


        '
    headers:
      CF-Cache-Status:
      - DYNAMIC
      CF-RAY:
      - 89e32bb72dd86d5e-GIG
      Connection:
      - keep-alive
      Content-Type:
      - text/event-stream; charset=utf-8
      Date:
      - Fri, 05 Jul 2024 00:17:13 GMT
      Server:
      - cloudflare
      Set-Cookie:
      - __cf_bm=uiZ_rH4TceDeZ.DjXTNE0hPKkvL49mU7mpYzwIIEFFM-1720138633-1.0.1.1-8SVfOrd0RHk4AFEXlnmXRJwgooX2qQwzM_m_nsg32Ln.boGk0NnqnlMfqpRgx0pcWpKoZLDOzVQ9iWuKUbXLgQ; path=/; expires=Fri, 05-Jul-24 00:47:13 GMT; domain=.api.openai.com; HttpOnly; Secure; SameSite=None
      - _cfuvid=4JpRXthJb2jKE1c2ZJrXA42WVcOEN2OaE7UHDUyWLSk-1720138633250-0.0.1.1-604800000; path=/; domain=.api.openai.com; HttpOnly; Secure; SameSite=None
      Transfer-Encoding:
      - chunked
      alt-svc:
      - h3=":443"; ma=86400
      openai-organization:
      - crewai-iuxna1
      openai-processing-ms:
      - '84'
      openai-version:
      - '2020-10-01'
      strict-transport-security:
      - max-age=31536000; includeSubDomains
      x-ratelimit-limit-requests:
      - '10000'
      x-ratelimit-limit-tokens:
      - '16000000'
      x-ratelimit-remaining-requests:
      - '9999'
      x-ratelimit-remaining-tokens:
      - '15999808'
      x-ratelimit-reset-requests:
      - 6ms
      x-ratelimit-reset-tokens:
      - 0s
      x-request-id:
      - req_533779597dc8ad44deead7d83922cae7
    status:
      code: 200
      message: OK
- request:
    body: '{"messages": [{"content": "You are test role. test backstory\nYour personal goal is: test goalTo give my best complete final answer to the task use the exact following format:\n\nThought: I now can give a great answer\nFinal Answer: my best complete final answer to the task.\nYour final answer must be the great and the most complete as possible, it must be outcome described.\n\nI MUST use these formats, my job depends on it!\nCurrent Task: just say hello!\n\nThis is the expect criteria for your final answer: your greeting \n you MUST return the actual complete content as the final answer, not a summary.\n\nThis is the context you''re working with:\nHi!\n\nBegin! This is VERY important to you, use the tools available and give your best Final Answer, your job depends on it!\n\nThought:\n", "role": "user"}], "model": "gpt-4o", "n": 1, "stop": ["\nObservation"], "stream": true, "temperature": 0.7}'
    headers:
      accept:
      - application/json
      accept-encoding:
      - gzip, deflate, br
      connection:
      - keep-alive
      content-length:
      - '905'
      content-type:
      - application/json
      host:
      - api.openai.com
      user-agent:
      - OpenAI/Python 1.35.10
      x-stainless-arch:
      - arm64
      x-stainless-async:
      - 'false'
      x-stainless-lang:
      - python
      x-stainless-os:
      - MacOS
      x-stainless-package-version:
      - 1.35.10
      x-stainless-runtime:
      - CPython
      x-stainless-runtime-version:
      - 3.11.9
    method: POST
    uri: https://api.openai.com/v1/chat/completions
  response:
    body:
      string: 'data: {"id":"chatcmpl-9hQvRZn4XwZEsAA0VrSHe0HcePTyG","object":"chat.completion.chunk","created":1720138633,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_d576307f90","choices":[{"index":0,"delta":{"role":"assistant","content":""},"logprobs":null,"finish_reason":null}]}


        data: {"id":"chatcmpl-9hQvRZn4XwZEsAA0VrSHe0HcePTyG","object":"chat.completion.chunk","created":1720138633,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_d576307f90","choices":[{"index":0,"delta":{"content":"Thought"},"logprobs":null,"finish_reason":null}]}


        data: {"id":"chatcmpl-9hQvRZn4XwZEsAA0VrSHe0HcePTyG","object":"chat.completion.chunk","created":1720138633,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_d576307f90","choices":[{"index":0,"delta":{"content":":"},"logprobs":null,"finish_reason":null}]}


        data: {"id":"chatcmpl-9hQvRZn4XwZEsAA0VrSHe0HcePTyG","object":"chat.completion.chunk","created":1720138633,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_d576307f90","choices":[{"index":0,"delta":{"content":" I"},"logprobs":null,"finish_reason":null}]}


        data: {"id":"chatcmpl-9hQvRZn4XwZEsAA0VrSHe0HcePTyG","object":"chat.completion.chunk","created":1720138633,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_d576307f90","choices":[{"index":0,"delta":{"content":" now"},"logprobs":null,"finish_reason":null}]}


        data: {"id":"chatcmpl-9hQvRZn4XwZEsAA0VrSHe0HcePTyG","object":"chat.completion.chunk","created":1720138633,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_d576307f90","choices":[{"index":0,"delta":{"content":" can"},"logprobs":null,"finish_reason":null}]}


        data: {"id":"chatcmpl-9hQvRZn4XwZEsAA0VrSHe0HcePTyG","object":"chat.completion.chunk","created":1720138633,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_d576307f90","choices":[{"index":0,"delta":{"content":" give"},"logprobs":null,"finish_reason":null}]}


        data: {"id":"chatcmpl-9hQvRZn4XwZEsAA0VrSHe0HcePTyG","object":"chat.completion.chunk","created":1720138633,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_d576307f90","choices":[{"index":0,"delta":{"content":" a"},"logprobs":null,"finish_reason":null}]}


        data: {"id":"chatcmpl-9hQvRZn4XwZEsAA0VrSHe0HcePTyG","object":"chat.completion.chunk","created":1720138633,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_d576307f90","choices":[{"index":0,"delta":{"content":" great"},"logprobs":null,"finish_reason":null}]}


        data: {"id":"chatcmpl-9hQvRZn4XwZEsAA0VrSHe0HcePTyG","object":"chat.completion.chunk","created":1720138633,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_d576307f90","choices":[{"index":0,"delta":{"content":" answer"},"logprobs":null,"finish_reason":null}]}


        data: {"id":"chatcmpl-9hQvRZn4XwZEsAA0VrSHe0HcePTyG","object":"chat.completion.chunk","created":1720138633,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_d576307f90","choices":[{"index":0,"delta":{"content":"\n"},"logprobs":null,"finish_reason":null}]}


        data: {"id":"chatcmpl-9hQvRZn4XwZEsAA0VrSHe0HcePTyG","object":"chat.completion.chunk","created":1720138633,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_d576307f90","choices":[{"index":0,"delta":{"content":"Final"},"logprobs":null,"finish_reason":null}]}


        data: {"id":"chatcmpl-9hQvRZn4XwZEsAA0VrSHe0HcePTyG","object":"chat.completion.chunk","created":1720138633,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_d576307f90","choices":[{"index":0,"delta":{"content":" Answer"},"logprobs":null,"finish_reason":null}]}


        data: {"id":"chatcmpl-9hQvRZn4XwZEsAA0VrSHe0HcePTyG","object":"chat.completion.chunk","created":1720138633,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_d576307f90","choices":[{"index":0,"delta":{"content":":"},"logprobs":null,"finish_reason":null}]}


        data: {"id":"chatcmpl-9hQvRZn4XwZEsAA0VrSHe0HcePTyG","object":"chat.completion.chunk","created":1720138633,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_d576307f90","choices":[{"index":0,"delta":{"content":" Hello"},"logprobs":null,"finish_reason":null}]}


        data: {"id":"chatcmpl-9hQvRZn4XwZEsAA0VrSHe0HcePTyG","object":"chat.completion.chunk","created":1720138633,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_d576307f90","choices":[{"index":0,"delta":{"content":"!"},"logprobs":null,"finish_reason":null}]}


        data: {"id":"chatcmpl-9hQvRZn4XwZEsAA0VrSHe0HcePTyG","object":"chat.completion.chunk","created":1720138633,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_d576307f90","choices":[{"index":0,"delta":{},"logprobs":null,"finish_reason":"stop"}]}


        data: [DONE]


        '
    headers:
      CF-Cache-Status:
      - DYNAMIC
      CF-RAY:
      - 89e32bbb3b2a6d5e-GIG
      Connection:
      - keep-alive
      Content-Type:
      - text/event-stream; charset=utf-8
      Date:
      - Fri, 05 Jul 2024 00:17:15 GMT
      Server:
      - cloudflare
      Transfer-Encoding:
      - chunked
      alt-svc:
      - h3=":443"; ma=86400
      openai-organization:
      - crewai-iuxna1
      openai-processing-ms:
      - '1430'
      openai-version:
      - '2020-10-01'
      strict-transport-security:
      - max-age=31536000; includeSubDomains
      x-ratelimit-limit-requests:
      - '10000'
      x-ratelimit-limit-tokens:
      - '16000000'
      x-ratelimit-remaining-requests:
      - '9999'
      x-ratelimit-remaining-tokens:
      - '15999794'
      x-ratelimit-reset-requests:
      - 6ms
      x-ratelimit-reset-tokens:
      - 0s
      x-request-id:
      - req_ba305909ef3f2a74b7ce2c33e2990b01
    status:
      code: 200
      message: OK
version: 1
1431
tests/cassettes/test_crew_kickoff_usage_metrics.yaml
Normal file
File diff suppressed because it is too large
@@ -0,0 +1,193 @@
interactions:
- request:
    body: '{"messages": [{"content": "You are Researcher. You''re an expert researcher, specialized in technology, software engineering, AI and startups. You work as a freelancer and is now working on doing research and analysis for a new customer.\nYour personal goal is: Make the best research and analysis on content about AI and AI agentsTo give my best complete final answer to the task use the exact following format:\n\nThought: I now can give a great answer\nFinal Answer: my best complete final answer to the task.\nYour final answer must be the great and the most complete as possible, it must be outcome described.\n\nI MUST use these formats, my job depends on it!\nCurrent Task: Look at the available data nd give me a sense on the total number of sales.\n\nThis is the expect criteria for your final answer: The total number of sales as an integer \n you MUST return the actual complete content as the final answer, not a summary.\n\nBegin! This is VERY important to you, use the tools available and give your best Final Answer, your job depends on it!\n\nThought:\n", "role": "user"}], "model": "gpt-4o", "n": 1, "stop": ["\nObservation"], "stream": true, "temperature": 0.7}'
    headers:
      accept:
      - application/json
      accept-encoding:
      - gzip, deflate, br
      connection:
      - keep-alive
      content-length:
      - '1178'
      content-type:
      - application/json
      host:
      - api.openai.com
      user-agent:
      - OpenAI/Python 1.34.0
      x-stainless-arch:
      - arm64
      x-stainless-async:
      - 'false'
      x-stainless-lang:
      - python
      x-stainless-os:
      - MacOS
      x-stainless-package-version:
      - 1.34.0
      x-stainless-runtime:
      - CPython
      x-stainless-runtime-version:
      - 3.12.3
    method: POST
    uri: https://api.openai.com/v1/chat/completions
  response:
    body:
      string: 'data: {"id":"chatcmpl-9gJkkQs40FNqD9UjPrPbDEUN4XeLR","object":"chat.completion.chunk","created":1719872734,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_4008e3b719","choices":[{"index":0,"delta":{"role":"assistant","content":""},"logprobs":null,"finish_reason":null}]}


        data: {"id":"chatcmpl-9gJkkQs40FNqD9UjPrPbDEUN4XeLR","object":"chat.completion.chunk","created":1719872734,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_4008e3b719","choices":[{"index":0,"delta":{"content":"Thought"},"logprobs":null,"finish_reason":null}]}


        data: {"id":"chatcmpl-9gJkkQs40FNqD9UjPrPbDEUN4XeLR","object":"chat.completion.chunk","created":1719872734,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_4008e3b719","choices":[{"index":0,"delta":{"content":":"},"logprobs":null,"finish_reason":null}]}


        data: {"id":"chatcmpl-9gJkkQs40FNqD9UjPrPbDEUN4XeLR","object":"chat.completion.chunk","created":1719872734,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_4008e3b719","choices":[{"index":0,"delta":{"content":" I"},"logprobs":null,"finish_reason":null}]}


        data: {"id":"chatcmpl-9gJkkQs40FNqD9UjPrPbDEUN4XeLR","object":"chat.completion.chunk","created":1719872734,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_4008e3b719","choices":[{"index":0,"delta":{"content":" now"},"logprobs":null,"finish_reason":null}]}


        data: {"id":"chatcmpl-9gJkkQs40FNqD9UjPrPbDEUN4XeLR","object":"chat.completion.chunk","created":1719872734,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_4008e3b719","choices":[{"index":0,"delta":{"content":" can"},"logprobs":null,"finish_reason":null}]}


        data: {"id":"chatcmpl-9gJkkQs40FNqD9UjPrPbDEUN4XeLR","object":"chat.completion.chunk","created":1719872734,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_4008e3b719","choices":[{"index":0,"delta":{"content":" give"},"logprobs":null,"finish_reason":null}]}


        data: {"id":"chatcmpl-9gJkkQs40FNqD9UjPrPbDEUN4XeLR","object":"chat.completion.chunk","created":1719872734,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_4008e3b719","choices":[{"index":0,"delta":{"content":" a"},"logprobs":null,"finish_reason":null}]}


        data: {"id":"chatcmpl-9gJkkQs40FNqD9UjPrPbDEUN4XeLR","object":"chat.completion.chunk","created":1719872734,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_4008e3b719","choices":[{"index":0,"delta":{"content":" great"},"logprobs":null,"finish_reason":null}]}


        data: {"id":"chatcmpl-9gJkkQs40FNqD9UjPrPbDEUN4XeLR","object":"chat.completion.chunk","created":1719872734,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_4008e3b719","choices":[{"index":0,"delta":{"content":" answer"},"logprobs":null,"finish_reason":null}]}


        data: {"id":"chatcmpl-9gJkkQs40FNqD9UjPrPbDEUN4XeLR","object":"chat.completion.chunk","created":1719872734,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_4008e3b719","choices":[{"index":0,"delta":{"content":"\n"},"logprobs":null,"finish_reason":null}]}


        data: {"id":"chatcmpl-9gJkkQs40FNqD9UjPrPbDEUN4XeLR","object":"chat.completion.chunk","created":1719872734,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_4008e3b719","choices":[{"index":0,"delta":{"content":"Final"},"logprobs":null,"finish_reason":null}]}


        data: {"id":"chatcmpl-9gJkkQs40FNqD9UjPrPbDEUN4XeLR","object":"chat.completion.chunk","created":1719872734,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_4008e3b719","choices":[{"index":0,"delta":{"content":" Answer"},"logprobs":null,"finish_reason":null}]}


        data: {"id":"chatcmpl-9gJkkQs40FNqD9UjPrPbDEUN4XeLR","object":"chat.completion.chunk","created":1719872734,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_4008e3b719","choices":[{"index":0,"delta":{"content":":"},"logprobs":null,"finish_reason":null}]}


        data: {"id":"chatcmpl-9gJkkQs40FNqD9UjPrPbDEUN4XeLR","object":"chat.completion.chunk","created":1719872734,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_4008e3b719","choices":[{"index":0,"delta":{"content":" The"},"logprobs":null,"finish_reason":null}]}


        data: {"id":"chatcmpl-9gJkkQs40FNqD9UjPrPbDEUN4XeLR","object":"chat.completion.chunk","created":1719872734,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_4008e3b719","choices":[{"index":0,"delta":{"content":" total"},"logprobs":null,"finish_reason":null}]}


        data: {"id":"chatcmpl-9gJkkQs40FNqD9UjPrPbDEUN4XeLR","object":"chat.completion.chunk","created":1719872734,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_4008e3b719","choices":[{"index":0,"delta":{"content":" number"},"logprobs":null,"finish_reason":null}]}


        data: {"id":"chatcmpl-9gJkkQs40FNqD9UjPrPbDEUN4XeLR","object":"chat.completion.chunk","created":1719872734,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_4008e3b719","choices":[{"index":0,"delta":{"content":" of"},"logprobs":null,"finish_reason":null}]}


        data: {"id":"chatcmpl-9gJkkQs40FNqD9UjPrPbDEUN4XeLR","object":"chat.completion.chunk","created":1719872734,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_4008e3b719","choices":[{"index":0,"delta":{"content":" sales"},"logprobs":null,"finish_reason":null}]}


        data: {"id":"chatcmpl-9gJkkQs40FNqD9UjPrPbDEUN4XeLR","object":"chat.completion.chunk","created":1719872734,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_4008e3b719","choices":[{"index":0,"delta":{"content":" is"},"logprobs":null,"finish_reason":null}]}


        data: {"id":"chatcmpl-9gJkkQs40FNqD9UjPrPbDEUN4XeLR","object":"chat.completion.chunk","created":1719872734,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_4008e3b719","choices":[{"index":0,"delta":{"content":" "},"logprobs":null,"finish_reason":null}]}


        data: {"id":"chatcmpl-9gJkkQs40FNqD9UjPrPbDEUN4XeLR","object":"chat.completion.chunk","created":1719872734,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_4008e3b719","choices":[{"index":0,"delta":{"content":"150"},"logprobs":null,"finish_reason":null}]}


        data: {"id":"chatcmpl-9gJkkQs40FNqD9UjPrPbDEUN4XeLR","object":"chat.completion.chunk","created":1719872734,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_4008e3b719","choices":[{"index":0,"delta":{"content":"0"},"logprobs":null,"finish_reason":null}]}


        data: {"id":"chatcmpl-9gJkkQs40FNqD9UjPrPbDEUN4XeLR","object":"chat.completion.chunk","created":1719872734,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_4008e3b719","choices":[{"index":0,"delta":{"content":"."},"logprobs":null,"finish_reason":null}]}


        data: {"id":"chatcmpl-9gJkkQs40FNqD9UjPrPbDEUN4XeLR","object":"chat.completion.chunk","created":1719872734,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_4008e3b719","choices":[{"index":0,"delta":{},"logprobs":null,"finish_reason":"stop"}]}


        data: [DONE]


        '
    headers:
      CF-Cache-Status:
      - DYNAMIC
      CF-RAY:
      - 89c9d0107c8abd30-ATL
      Connection:
      - keep-alive
      Content-Type:
      - text/event-stream; charset=utf-8
      Date:
      - Mon, 01 Jul 2024 22:25:35 GMT
      Server:
      - cloudflare
      Set-Cookie:
      - __cf_bm=xIvvDveyc7bpEywphx5N4EscKoZiGAT_yDVu3aFAWZ4-1719872735-1.0.1.1-ZOUYc2kEes8fxrMFgGdVppzOh9nPbl4y1Syv73ORt38FBXePWFSTJrFZCZRU.zob6ks9nWzr2vBIZbBQdAOOGQ; path=/; expires=Mon, 01-Jul-24 22:55:35 GMT; domain=.api.openai.com; HttpOnly; Secure; SameSite=None
      - _cfuvid=aG1BGRRkNAyxmctM98.DLqSNJ2Cx_OQYsMRQbd03.bo-1719872735091-0.0.1.1-604800000; path=/; domain=.api.openai.com; HttpOnly; Secure; SameSite=None
      Transfer-Encoding:
      - chunked
      alt-svc:
      - h3=":443"; ma=86400
      openai-organization:
      - crewai-iuxna1
      openai-processing-ms:
      - '80'
      openai-version:
      - '2020-10-01'
      strict-transport-security:
      - max-age=31536000; includeSubDomains
      x-ratelimit-limit-requests:
      - '10000'
      x-ratelimit-limit-tokens:
      - '16000000'
      x-ratelimit-remaining-requests:
      - '9999'
      x-ratelimit-remaining-tokens:
      - '15999725'
      x-ratelimit-reset-requests:
      - 6ms
      x-ratelimit-reset-tokens:
      - 1ms
      x-request-id:
      - req_c90015b7584729268f48a8b33ff7c5ea
    status:
      code: 200
      message: OK
version: 1
235903
tests/cassettes/test_hierarchical_async_task_execution_completion.yaml
Normal file
File diff suppressed because it is too large
1464
tests/cassettes/test_kickoff_for_each_multiple_inputs.yaml
Normal file
File diff suppressed because it is too large
192
tests/cassettes/test_kickoff_for_each_single_input.yaml
Normal file
@@ -0,0 +1,192 @@
interactions:
- request:
    body: '{"messages": [{"content": "You are dog Researcher. You have a lot of experience
      with dog.\nYour personal goal is: Express hot takes on dog.To give my best complete
      final answer to the task use the exact following format:\n\nThought: I now can
      give a great answer\nFinal Answer: my best complete final answer to the task.\nYour
      final answer must be the great and the most complete as possible, it must be
      outcome described.\n\nI MUST use these formats, my job depends on it!\nCurrent
      Task: Give me an analysis around dog.\n\nThis is the expect criteria for your
      final answer: 1 bullet point about dog that''s under 15 words. \n you MUST return
      the actual complete content as the final answer, not a summary.\n\nBegin! This
      is VERY important to you, use the tools available and give your best Final Answer,
      your job depends on it!\n\nThought:\n", "role": "user"}], "model": "gpt-4o",
      "n": 1, "stop": ["\nObservation"], "stream": true, "temperature": 0.7}'
    headers:
      accept:
      - application/json
      accept-encoding:
      - gzip, deflate
      connection:
      - keep-alive
      content-length:
      - '951'
      content-type:
      - application/json
      host:
      - api.openai.com
      user-agent:
      - OpenAI/Python 1.35.10
      x-stainless-arch:
      - arm64
      x-stainless-async:
      - 'false'
      x-stainless-lang:
      - python
      x-stainless-os:
      - MacOS
      x-stainless-package-version:
      - 1.35.10
      x-stainless-runtime:
      - CPython
      x-stainless-runtime-version:
      - 3.12.3
    method: POST
    uri: https://api.openai.com/v1/chat/completions
  response:
    body:
      string: 'data: {"id":"chatcmpl-9j74gJiY9YxFxbeZ5jmpPclWEeaiP","object":"chat.completion.chunk","created":1720538982,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_4008e3b719","choices":[{"index":0,"delta":{"role":"assistant","content":""},"logprobs":null,"finish_reason":null}]}


        data: {"id":"chatcmpl-9j74gJiY9YxFxbeZ5jmpPclWEeaiP","object":"chat.completion.chunk","created":1720538982,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_4008e3b719","choices":[{"index":0,"delta":{"content":"Thought"},"logprobs":null,"finish_reason":null}]}


        data: {"id":"chatcmpl-9j74gJiY9YxFxbeZ5jmpPclWEeaiP","object":"chat.completion.chunk","created":1720538982,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_4008e3b719","choices":[{"index":0,"delta":{"content":":"},"logprobs":null,"finish_reason":null}]}


        data: {"id":"chatcmpl-9j74gJiY9YxFxbeZ5jmpPclWEeaiP","object":"chat.completion.chunk","created":1720538982,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_4008e3b719","choices":[{"index":0,"delta":{"content":"
        I"},"logprobs":null,"finish_reason":null}]}


        data: {"id":"chatcmpl-9j74gJiY9YxFxbeZ5jmpPclWEeaiP","object":"chat.completion.chunk","created":1720538982,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_4008e3b719","choices":[{"index":0,"delta":{"content":"
        now"},"logprobs":null,"finish_reason":null}]}


        data: {"id":"chatcmpl-9j74gJiY9YxFxbeZ5jmpPclWEeaiP","object":"chat.completion.chunk","created":1720538982,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_4008e3b719","choices":[{"index":0,"delta":{"content":"
        can"},"logprobs":null,"finish_reason":null}]}


        data: {"id":"chatcmpl-9j74gJiY9YxFxbeZ5jmpPclWEeaiP","object":"chat.completion.chunk","created":1720538982,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_4008e3b719","choices":[{"index":0,"delta":{"content":"
        give"},"logprobs":null,"finish_reason":null}]}


        data: {"id":"chatcmpl-9j74gJiY9YxFxbeZ5jmpPclWEeaiP","object":"chat.completion.chunk","created":1720538982,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_4008e3b719","choices":[{"index":0,"delta":{"content":"
        a"},"logprobs":null,"finish_reason":null}]}


        data: {"id":"chatcmpl-9j74gJiY9YxFxbeZ5jmpPclWEeaiP","object":"chat.completion.chunk","created":1720538982,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_4008e3b719","choices":[{"index":0,"delta":{"content":"
        great"},"logprobs":null,"finish_reason":null}]}


        data: {"id":"chatcmpl-9j74gJiY9YxFxbeZ5jmpPclWEeaiP","object":"chat.completion.chunk","created":1720538982,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_4008e3b719","choices":[{"index":0,"delta":{"content":"
        answer"},"logprobs":null,"finish_reason":null}]}


        data: {"id":"chatcmpl-9j74gJiY9YxFxbeZ5jmpPclWEeaiP","object":"chat.completion.chunk","created":1720538982,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_4008e3b719","choices":[{"index":0,"delta":{"content":"\n"},"logprobs":null,"finish_reason":null}]}


        data: {"id":"chatcmpl-9j74gJiY9YxFxbeZ5jmpPclWEeaiP","object":"chat.completion.chunk","created":1720538982,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_4008e3b719","choices":[{"index":0,"delta":{"content":"Final"},"logprobs":null,"finish_reason":null}]}


        data: {"id":"chatcmpl-9j74gJiY9YxFxbeZ5jmpPclWEeaiP","object":"chat.completion.chunk","created":1720538982,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_4008e3b719","choices":[{"index":0,"delta":{"content":"
        Answer"},"logprobs":null,"finish_reason":null}]}


        data: {"id":"chatcmpl-9j74gJiY9YxFxbeZ5jmpPclWEeaiP","object":"chat.completion.chunk","created":1720538982,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_4008e3b719","choices":[{"index":0,"delta":{"content":":"},"logprobs":null,"finish_reason":null}]}


        data: {"id":"chatcmpl-9j74gJiY9YxFxbeZ5jmpPclWEeaiP","object":"chat.completion.chunk","created":1720538982,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_4008e3b719","choices":[{"index":0,"delta":{"content":"
        Dogs"},"logprobs":null,"finish_reason":null}]}


        data: {"id":"chatcmpl-9j74gJiY9YxFxbeZ5jmpPclWEeaiP","object":"chat.completion.chunk","created":1720538982,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_4008e3b719","choices":[{"index":0,"delta":{"content":"
        are"},"logprobs":null,"finish_reason":null}]}


        data: {"id":"chatcmpl-9j74gJiY9YxFxbeZ5jmpPclWEeaiP","object":"chat.completion.chunk","created":1720538982,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_4008e3b719","choices":[{"index":0,"delta":{"content":"
        unparalleled"},"logprobs":null,"finish_reason":null}]}


        data: {"id":"chatcmpl-9j74gJiY9YxFxbeZ5jmpPclWEeaiP","object":"chat.completion.chunk","created":1720538982,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_4008e3b719","choices":[{"index":0,"delta":{"content":"
        in"},"logprobs":null,"finish_reason":null}]}


        data: {"id":"chatcmpl-9j74gJiY9YxFxbeZ5jmpPclWEeaiP","object":"chat.completion.chunk","created":1720538982,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_4008e3b719","choices":[{"index":0,"delta":{"content":"
        loyalty"},"logprobs":null,"finish_reason":null}]}


        data: {"id":"chatcmpl-9j74gJiY9YxFxbeZ5jmpPclWEeaiP","object":"chat.completion.chunk","created":1720538982,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_4008e3b719","choices":[{"index":0,"delta":{"content":"
        and"},"logprobs":null,"finish_reason":null}]}


        data: {"id":"chatcmpl-9j74gJiY9YxFxbeZ5jmpPclWEeaiP","object":"chat.completion.chunk","created":1720538982,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_4008e3b719","choices":[{"index":0,"delta":{"content":"
        companionship"},"logprobs":null,"finish_reason":null}]}


        data: {"id":"chatcmpl-9j74gJiY9YxFxbeZ5jmpPclWEeaiP","object":"chat.completion.chunk","created":1720538982,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_4008e3b719","choices":[{"index":0,"delta":{"content":"
        to"},"logprobs":null,"finish_reason":null}]}


        data: {"id":"chatcmpl-9j74gJiY9YxFxbeZ5jmpPclWEeaiP","object":"chat.completion.chunk","created":1720538982,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_4008e3b719","choices":[{"index":0,"delta":{"content":"
        humans"},"logprobs":null,"finish_reason":null}]}


        data: {"id":"chatcmpl-9j74gJiY9YxFxbeZ5jmpPclWEeaiP","object":"chat.completion.chunk","created":1720538982,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_4008e3b719","choices":[{"index":0,"delta":{"content":"."},"logprobs":null,"finish_reason":null}]}


        data: {"id":"chatcmpl-9j74gJiY9YxFxbeZ5jmpPclWEeaiP","object":"chat.completion.chunk","created":1720538982,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_4008e3b719","choices":[{"index":0,"delta":{},"logprobs":null,"finish_reason":"stop"}]}


        data: [DONE]


        '
    headers:
      CF-Cache-Status:
      - DYNAMIC
      CF-RAY:
      - 8a0959de6b916783-ATL
      Connection:
      - keep-alive
      Content-Type:
      - text/event-stream; charset=utf-8
      Date:
      - Tue, 09 Jul 2024 15:29:42 GMT
      Server:
      - cloudflare
      Set-Cookie:
      - __cf_bm=LA.xC.jE_aMjiSgGgU6kDsBPhb_akgqn_4Rx.jXYfnQ-1720538982-1.0.1.1-l5Q1BHprIz5Jxb4HWyYsMfbg6mEnP2H95Vxt89ez24pKOb__90s8LJBBqK52zmPNcPYSSUcaR0wRAaSVFoa4Fw;
        path=/; expires=Tue, 09-Jul-24 15:59:42 GMT; domain=.api.openai.com; HttpOnly;
        Secure; SameSite=None
      - _cfuvid=zzJ51X.VwRkIq7VLCg9xPQGbXoUmAH6b.2g6sf6Y58Y-1720538982657-0.0.1.1-604800000;
        path=/; domain=.api.openai.com; HttpOnly; Secure; SameSite=None
      Transfer-Encoding:
      - chunked
      alt-svc:
      - h3=":443"; ma=86400
      openai-organization:
      - crewai-iuxna1
      openai-processing-ms:
      - '240'
      openai-version:
      - '2020-10-01'
      strict-transport-security:
      - max-age=31536000; includeSubDomains
      x-ratelimit-limit-requests:
      - '10000'
      x-ratelimit-limit-tokens:
      - '22000000'
      x-ratelimit-remaining-requests:
      - '9999'
      x-ratelimit-remaining-tokens:
      - '21999783'
      x-ratelimit-reset-requests:
      - 6ms
      x-ratelimit-reset-tokens:
      - 0s
      x-request-id:
      - req_abdec68aded596628dfd5b999919447d
    status:
      code: 200
      message: OK
version: 1
1209
tests/cassettes/test_output_json_dict_hierarchical.yaml
Normal file
File diff suppressed because it is too large
453
tests/cassettes/test_output_json_dict_sequential.yaml
Normal file
@@ -0,0 +1,453 @@
interactions:
- request:
    body: '{"messages": [{"content": "You are Scorer. You''re an expert scorer, specialized
      in scoring titles.\nYour personal goal is: Score the titleTo give my best complete
      final answer to the task use the exact following format:\n\nThought: I now can
      give a great answer\nFinal Answer: my best complete final answer to the task.\nYour
      final answer must be the great and the most complete as possible, it must be
      outcome described.\n\nI MUST use these formats, my job depends on it!\nCurrent
      Task: Give me an integer score between 1-5 for the following title: ''The impact
      of AI in the future of work''\n\nThis is the expect criteria for your final
      answer: The score of the title. \n you MUST return the actual complete content
      as the final answer, not a summary.\n\nBegin! This is VERY important to you,
      use the tools available and give your best Final Answer, your job depends on
      it!\n\nThought:\n", "role": "user"}], "model": "gpt-4o", "n": 1, "stop": ["\nObservation"],
      "stream": true, "temperature": 0.7}'
    headers:
      accept:
      - application/json
      accept-encoding:
      - gzip, deflate
      connection:
      - keep-alive
      content-length:
      - '997'
      content-type:
      - application/json
      host:
      - api.openai.com
      user-agent:
      - OpenAI/Python 1.35.10
      x-stainless-arch:
      - arm64
      x-stainless-async:
      - 'false'
      x-stainless-lang:
      - python
      x-stainless-os:
      - MacOS
      x-stainless-package-version:
      - 1.35.10
      x-stainless-runtime:
      - CPython
      x-stainless-runtime-version:
      - 3.12.3
    method: POST
    uri: https://api.openai.com/v1/chat/completions
  response:
    body:
      string: 'data: {"id":"chatcmpl-9jCJmkP063CQ01vF8ENhkPSwN9BQH","object":"chat.completion.chunk","created":1720559138,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_d576307f90","choices":[{"index":0,"delta":{"role":"assistant","content":""},"logprobs":null,"finish_reason":null}]}


        data: {"id":"chatcmpl-9jCJmkP063CQ01vF8ENhkPSwN9BQH","object":"chat.completion.chunk","created":1720559138,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_d576307f90","choices":[{"index":0,"delta":{"content":"I"},"logprobs":null,"finish_reason":null}]}


        data: {"id":"chatcmpl-9jCJmkP063CQ01vF8ENhkPSwN9BQH","object":"chat.completion.chunk","created":1720559138,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_d576307f90","choices":[{"index":0,"delta":{"content":"
        now"},"logprobs":null,"finish_reason":null}]}


        data: {"id":"chatcmpl-9jCJmkP063CQ01vF8ENhkPSwN9BQH","object":"chat.completion.chunk","created":1720559138,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_d576307f90","choices":[{"index":0,"delta":{"content":"
        can"},"logprobs":null,"finish_reason":null}]}


        data: {"id":"chatcmpl-9jCJmkP063CQ01vF8ENhkPSwN9BQH","object":"chat.completion.chunk","created":1720559138,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_d576307f90","choices":[{"index":0,"delta":{"content":"
        give"},"logprobs":null,"finish_reason":null}]}


        data: {"id":"chatcmpl-9jCJmkP063CQ01vF8ENhkPSwN9BQH","object":"chat.completion.chunk","created":1720559138,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_d576307f90","choices":[{"index":0,"delta":{"content":"
        a"},"logprobs":null,"finish_reason":null}]}


        data: {"id":"chatcmpl-9jCJmkP063CQ01vF8ENhkPSwN9BQH","object":"chat.completion.chunk","created":1720559138,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_d576307f90","choices":[{"index":0,"delta":{"content":"
        great"},"logprobs":null,"finish_reason":null}]}


        data: {"id":"chatcmpl-9jCJmkP063CQ01vF8ENhkPSwN9BQH","object":"chat.completion.chunk","created":1720559138,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_d576307f90","choices":[{"index":0,"delta":{"content":"
        answer"},"logprobs":null,"finish_reason":null}]}


        data: {"id":"chatcmpl-9jCJmkP063CQ01vF8ENhkPSwN9BQH","object":"chat.completion.chunk","created":1720559138,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_d576307f90","choices":[{"index":0,"delta":{"content":"\n"},"logprobs":null,"finish_reason":null}]}


        data: {"id":"chatcmpl-9jCJmkP063CQ01vF8ENhkPSwN9BQH","object":"chat.completion.chunk","created":1720559138,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_d576307f90","choices":[{"index":0,"delta":{"content":"Final"},"logprobs":null,"finish_reason":null}]}


        data: {"id":"chatcmpl-9jCJmkP063CQ01vF8ENhkPSwN9BQH","object":"chat.completion.chunk","created":1720559138,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_d576307f90","choices":[{"index":0,"delta":{"content":"
        Answer"},"logprobs":null,"finish_reason":null}]}


        data: {"id":"chatcmpl-9jCJmkP063CQ01vF8ENhkPSwN9BQH","object":"chat.completion.chunk","created":1720559138,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_d576307f90","choices":[{"index":0,"delta":{"content":":"},"logprobs":null,"finish_reason":null}]}


        data: {"id":"chatcmpl-9jCJmkP063CQ01vF8ENhkPSwN9BQH","object":"chat.completion.chunk","created":1720559138,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_d576307f90","choices":[{"index":0,"delta":{"content":"
        "},"logprobs":null,"finish_reason":null}]}


        data: {"id":"chatcmpl-9jCJmkP063CQ01vF8ENhkPSwN9BQH","object":"chat.completion.chunk","created":1720559138,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_d576307f90","choices":[{"index":0,"delta":{"content":"4"},"logprobs":null,"finish_reason":null}]}


        data: {"id":"chatcmpl-9jCJmkP063CQ01vF8ENhkPSwN9BQH","object":"chat.completion.chunk","created":1720559138,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_d576307f90","choices":[{"index":0,"delta":{},"logprobs":null,"finish_reason":"stop"}]}


        data: [DONE]


        '
    headers:
      CF-Cache-Status:
      - DYNAMIC
      CF-RAY:
      - 8a0b45f368ab6734-ATL
      Connection:
      - keep-alive
      Content-Type:
      - text/event-stream; charset=utf-8
      Date:
      - Tue, 09 Jul 2024 21:05:38 GMT
      Server:
      - cloudflare
      Set-Cookie:
      - __cf_bm=43lNOCqE3W6gMhKEVIvu20BhU4nI7wyQYcgn28hcb3o-1720559138-1.0.1.1-2pdG6KFn0J2AHC_tnhcxXCqmZ_RyZfwthLi5ET6Aq4v1L9z3EcYxV1D1CeKjOgEBJPLD9GUDdMmIR3h86QYx7w;
        path=/; expires=Tue, 09-Jul-24 21:35:38 GMT; domain=.api.openai.com; HttpOnly;
        Secure; SameSite=None
      - _cfuvid=T4YvZnF6fWjq7JTPVyPFDIHaXBpT8E23GcG55Q0Ky6A-1720559138248-0.0.1.1-604800000;
        path=/; domain=.api.openai.com; HttpOnly; Secure; SameSite=None
      Transfer-Encoding:
      - chunked
      alt-svc:
      - h3=":443"; ma=86400
      openai-organization:
      - crewai-iuxna1
      openai-processing-ms:
      - '159'
      openai-version:
      - '2020-10-01'
      strict-transport-security:
      - max-age=31536000; includeSubDomains
      x-ratelimit-limit-requests:
      - '10000'
      x-ratelimit-limit-tokens:
      - '22000000'
      x-ratelimit-remaining-requests:
      - '9999'
      x-ratelimit-remaining-tokens:
      - '21999771'
      x-ratelimit-reset-requests:
      - 6ms
      x-ratelimit-reset-tokens:
      - 0s
      x-request-id:
      - req_a58604fe17d3de5d4491ec972e98312b
    status:
      code: 200
      message: OK
- request:
    body: '{"messages": [{"role": "user", "content": "4"}, {"role": "system", "content":
      "I''m gonna convert this raw text into valid JSON."}], "model": "gpt-4o", "tool_choice":
      {"type": "function", "function": {"name": "ScoreOutput"}}, "tools": [{"type":
      "function", "function": {"name": "ScoreOutput", "description": "Correctly extracted
      `ScoreOutput` with all the required parameters with correct types", "parameters":
      {"properties": {"score": {"title": "Score", "type": "integer"}}, "required":
      ["score"], "type": "object"}}}]}'
    headers:
      accept:
      - application/json
      accept-encoding:
      - gzip, deflate
      connection:
      - keep-alive
      content-length:
      - '519'
      content-type:
      - application/json
      cookie:
      - __cf_bm=43lNOCqE3W6gMhKEVIvu20BhU4nI7wyQYcgn28hcb3o-1720559138-1.0.1.1-2pdG6KFn0J2AHC_tnhcxXCqmZ_RyZfwthLi5ET6Aq4v1L9z3EcYxV1D1CeKjOgEBJPLD9GUDdMmIR3h86QYx7w;
        _cfuvid=T4YvZnF6fWjq7JTPVyPFDIHaXBpT8E23GcG55Q0Ky6A-1720559138248-0.0.1.1-604800000
      host:
      - api.openai.com
      user-agent:
      - OpenAI/Python 1.35.10
      x-stainless-arch:
      - arm64
      x-stainless-async:
      - 'false'
      x-stainless-lang:
      - python
      x-stainless-os:
      - MacOS
      x-stainless-package-version:
      - 1.35.10
      x-stainless-runtime:
      - CPython
      x-stainless-runtime-version:
      - 3.12.3
    method: POST
    uri: https://api.openai.com/v1/chat/completions
  response:
    body:
      string: !!binary |
        H4sIAAAAAAAAA2xSTW/bMAy9+1cIPMeDncRL7Gt2GNJhOWzdDmthKIriqJFETaK7FkH++yDHjd2g
        PggEH98HSJ8SxkDtoGIgDpyEcTotn1Zr8/1lvViV9+rY6ru/zhW/8i+HpvA/YRIZuH2Sgt5YnwQa
        pyUptBdYeMlJRtV8Mc2Kosxnyw4wuJM60hpH6RzTaTadp1mR5rOeeEAlZICK/UkYY+zUvTGi3ckX
        qFg2eesYGQJvJFTXIcbAo44d4CGoQNwSTAZQoCVpY2rbaj0CCFHXgms9GF++06ge9sS1rk3Y/Cvv
        Nq75upx+Pj7/vsfnb2ta6ZHfRfrVdYH2rRXX/Yzwa7+6MWMMLDcd94dALzctuZZu6IwB901rpKUY
        HU4PEOLwA1TzM7wbPScf1Y99db6uVWPjPG7DzZZgr6wKh9pLHrq0EAjdxSLKPXbna99dBJxH46gm
        PEobBZf99WD4Xwaw6DFC4nrEKZI+HoTXQNLUe2Ub6Z1X3Slh7+p5li3lbLvIS0jOyX8AAAD//wMA
        s5wGAdMCAAA=
    headers:
      CF-Cache-Status:
      - DYNAMIC
      CF-RAY:
      - 8a0b45f828146734-ATL
      Connection:
      - keep-alive
      Content-Encoding:
      - gzip
      Content-Type:
      - application/json
      Date:
      - Tue, 09 Jul 2024 21:05:38 GMT
      Server:
      - cloudflare
      Transfer-Encoding:
      - chunked
      alt-svc:
      - h3=":443"; ma=86400
      openai-organization:
      - crewai-iuxna1
      openai-processing-ms:
      - '215'
      openai-version:
      - '2020-10-01'
      strict-transport-security:
      - max-age=31536000; includeSubDomains
      x-ratelimit-limit-requests:
      - '10000'
      x-ratelimit-limit-tokens:
      - '22000000'
      x-ratelimit-remaining-requests:
      - '9999'
      x-ratelimit-remaining-tokens:
      - '21999969'
      x-ratelimit-reset-requests:
      - 6ms
      x-ratelimit-reset-tokens:
      - 0s
      x-request-id:
      - req_34b28faf23b6422bb0855e6a45c65e92
    status:
      code: 200
      message: OK
- request:
    body: '{"input": ["examples(Examples): Specific scenarios used to explain counting,
      addition, and subtraction through the zoo theme, such as counting animals, adding
      new ones, and subtracting adopted ones."], "model": "text-embedding-ada-002",
      "encoding_format": "base64"}'
    headers:
      accept:
      - application/json
      accept-encoding:
      - gzip, deflate
      connection:
      - keep-alive
      content-length:
      - '265'
      content-type:
      - application/json
      host:
      - api.openai.com
      user-agent:
      - OpenAI/Python 1.35.10
      x-stainless-arch:
      - arm64
      x-stainless-async:
      - 'false'
      x-stainless-lang:
      - python
      x-stainless-os:
      - MacOS
      x-stainless-package-version:
      - 1.35.10
      x-stainless-runtime:
      - CPython
      x-stainless-runtime-version:
      - 3.12.3
    method: POST
    uri: https://api.openai.com/v1/embeddings
  response:
    body:
      string: !!binary |
        H4sIAAAAAAAAA1SaWw+ySrel779fsbJu6R0RkJp8d5xEzoWAp06nI4gIHpBDFVA7+7938F3Z3X1j
        IqJgMWvMMZ6q//zXX3/93WR1kQ9///uvv19VP/z9P5Zjt+tw/fvff/3Pf/31119//efv9f87s3hn
        xe1Wfcrf6b8Pq8+tmP7+91/8fx/5vyf9+6+/VYHbhcL5EyZzUN8aML+TT5g+ChmrzjYPH2qdqV16
        NJs25BtD8dhUNEg1pR4Cch2RP6wE0kRo302x9m3Q7szJ2L+nvTG/n+crsl3BDVfRZtMRg/Yc+C+L
        Yeu6vhutK+VXANs9U0fdoK6b73YK3Hq6UfxJPWOyZuEIlmGq+PguPh6zelMAe/0s6GV7S7O+7KIG
        ykDwcDhfOI+W77UKHRdeCXev3Hq0nSEFzVUm6l73WrceGt6HQ+cOWOVQ5TFV7wU4HXFDt63uGIMT
        PZ9Qa8YzXPUfL+PHQn6iyTZHjIdcTGj1Snj0WrMXmfzLyOj1Is2Q8NsbNdHNYEL+KACJWeOH6101
        dGzFgw7aLUiJ2IgVm070WCD3LFZEMK4zm3lfrUDdWmIoK5PYjbO4OkOaoYSGZG4yOpbfEsSQAxwE
        5ith2cZ2kTvLQC13XGejkY25YlnnEW+V2yGZBe3hgiWoN5ybpPe+cGlLeAzzPeSJa6BRTZMCRcpO
        wrosZAYrKfLh+0pkqgnsk8zHtpNlr7v0VNsaJOu82W7AxMNMQ8/NsqlrD2e4yDGlu/dDZqM1Njka
        PXND8nNVdTNv6EflNO51nB2dNxtxFnNK9LVC6ho7u2aOChLacs8NmeIjyZb3MioMoQjl6/5R06Fy
        e5jl6EN3JlfVM9ud7E1l7SPs9onjMecWtUqBIo0mrKuT+cOf1N/9hxtL2BhL/bUQfqsA42izqWmG
        L2e4bC1KHZrE3hhYFcB6iv1wIpve6/OrkqP5dHSJGO/1WjBfwRMa0Rewtn/o2VTZDxeqGErsdjQy
        xmMw+eDfy3v4+n45NpshXFHP1doynsgYss3HQtv+dAzncXOrl/GUQN7v7FBM0LWm35NsotXr4mJd
        tT71yMeyitBaXoeSEBy7uX86LUqMUcDqYH6zLoJ9pHAgX4jUf+tkIPL4hPrOHbHn3R6IdXUtK438
        eGNsr4rs0R1FHm3K+YHDZu0i8RXdLGjCfou3icHVQ3bUJRhb8sZO3588BqWvgutePTJ+zgWbIkMX
        Vk/jQ8JeKF7e6MkbC+7WtcF+P8ts1OGhorXsYLqr6eRNj1UrgRNDg/2vt/XmWTznMEihTfhEbVHb
        zooAe3zKiHLy99nkSbIL/OtUYWwFfMdC8SOBZVjqn+fNzDJREc3uPrYUc5ew7niW4SNIGO93rZOR
        Tf10ZXm/tel2Vw31/HBXKqJDT/HZYys2rlhggRKXF3xEnZ7N7aviFaabBb3GJysZQZZ4eKjXI/Zj
        zTfaJ7gj2nBuiwM+aQy2PuVPZGUrI5T2o1KzbW210O9tiazN0fS+oRgB3L6Tid0iH43R+yQVrF6Z
        S6rI22e9NJY+mPR0p8ZFxll7UhUXdaWwp76SKcn0WeUu2pXzFEKEb8Zyf4LyVrCNfSnTmfjTE8k6
        l4SdH209evy7gqlyXRxcuMybHjV2QeTeX6xd+ohNduZxaChcQjWOOyAm2c0ZsFyTUHj2kjc2pjXC
        5B4TrN8jNREOZ1oikXcVIrGsR2xbhy0I78Kn+Pqq2cjHs76RPGBklE/fZMzFfajcHMul7k+fzG5l
        oa3Bv7F5eWjZ4FnbEGrExKU+jWQdXte+sq3zhO4ORp/NpW1cYW9uKIGjL9fMDrGE9iaidJtTxxBi
        4VmAa7sN1qZujYbWmnwF02yHLc3XjenL8zNc11TBWHpY9cRpkgvm6fGl3rh16rHzuhlN5ecRNvGg
        duOjnCJE3olP0FeVkzF7qKq8C/gE/+lX3PxJIUo/NtWy7bobz8ZFgETihHCEdOfxfXg/o7aILGoV
        zw7N8rosf3pMSlc9GKNS3Cx0DLcuVZsyqmnuhAJSWJAQcV4R9md+zlC5dNcUUkepez5C7B9ibH2j
        kU3qS7bAi8sv3ZbbrpvTYrZguX8yiU2SsPUpfQOVxCPFz2H05u9GvSo61+9JVb11Nh9FQZIFx9Jx
        OD1nj0G80QHNzy1VyY14Sz8o0KbWzZAzz3s0QzhcEWrnEHuNR5L5a2U5BLYRhUJsiYz0wZgrN/nG
        YXsDRiLM4rmAbH9Uw7WbiskgT6mM3DsU/3y/eKsp3M+Nh4+n8Jv86kHWdl+NBvSp1u3TufHQhGRL
        3T75GtQVbgUseoJNSWgY23ipCtNdelO3VhvEPnkoAzqXXTgx81ozQag59NLtkNoW22b0N/+W64cg
        f1lHxc39CWouDyHbSR80a8fSBs8lM+F29aseeeHoo/dGPtPtMYzr8XN6piiMPELkm9ElM3SOK28o
        FonUJlnCppUkw2SOKk77M486vp1doAOh2KroNZnWRWihfN9+sA1kMHo1TXLo6P7zxz/M/PtdAhsQ
        F8rne9xNlX58w3i+GXhbyqea3LMG0IV5LJTl7GCM61V+hnXOttjGld/NR5GT0XHoRew/P37Sc0qf
        o3vfrQmf0dl4fT/fGeqgZ1Rjply3p9tZhql8PULQoMpYaI0E8X1RhKOaa6yjA1/BvohC6pgmrunO
        SyO40oJSD2UdGnqn59EYSyO2tY6xLxiXI/Iv4FM3s4J6GBOlAPn9zbC7eTvGog8N4u4VJqKprTr6
        yS1ZttBzJKg7Vl6v3S4NxCV/oUt/QEu9SPKvP666+6abH3vnKXvru4Ut25EzKnzPIN8Lv6D7m84y
        ZtllAxm2cqIM89TNZhe5cE/FFXngp1nzdNW5kDmVQ7fu8Ojoos9wXFcE48gc6lYZ+FFmulVQjXO+
        jOyZxCuX7gCEIe2FaIb3//R/977PvW/RzrKyyiuLoJ5+jGn5v6ih1xXdWk6DxmSj2QrMck8S1hnJ
        HKWPXDESKab7qTugIbqkKmh+ecWWcY3ZvHddC1Q5FnEwH9tk5IUiRIczcgnHU8vgm7IqlaEedKo/
        +QTxxir9U89UT6lr9Ja16SG0TyJRnINUs6HSCaw+7YC32ms05n7KWwApX1Gt2VVLPfdHGT7zlvp7
        X/Wms3aVZencGtT5+R9+x8vwdElLd8oOGdNlWMco/g45xtuz3c195uXwXMlt2C56zjbnjwp1+d2T
        9vExPdE0xBhQJmakDJ8BYnzacH/0xTqszW50hB0B45u/8K/fiOdn5IKWpnt6/+4bNEWGK4DRswN1
        hu0jGfOgyoF3Kg3v5lWIJhT1R1Tf4Rj+9Lmr90IM1fwOqBbM627xU2d0XpUZtpb+1Xg8KeUjSzxq
        vvlHws47/grkvNFDbrjU9bxabVy4c5vzMj8Doze+RgE5IirWHm/fYLu1p4N94Ca8i/dVN6+lVoak
        QDkp/ZljBD/44ucXqWHnJRtxnI8oPMUtdb7ykExVHxyBvz067HoCZfNmo6RIf51mImxPYz0+0CkC
        lqYaPoXVGs2GVNuw3z9X1BeTyWgvq0QCfrc608A5nOvRPdxteV8LFd2lYZnNyNvM6Ms1F7IK6pMx
        zfdPDGLW+j9/4C3+SIYROIeG4aYxJv+VvjfPoduF4vtxZUwWhTOg/YViSzwDmlLzECEH3c9U3ytl
        NvsC8yGn+ki3+hWMwXrVR+B25EKQrH+88VsXNnpmlyfWymCXre+ufgSp7wasfg9Qz4fPGEHmlA6+
        ZWPrzfvTzYT6MwbUIPWElv7HI3v9LsgLmm9CZjEqlJVJIqL+/OTih2Fci0EozOcqG7StVipLv8M7
        DR8YwVwUol8+Wt1OQd1/nmIKsbh7412Crh2pW+sMtnouyOa2Nup+6edQ+vsqnL3cYbTIjPDndzB+
        oHfW7Ivwidbf4kCG2FyhITB7HQTRs2g4XIxOtOyyRUt+xQbStox9ckuC+/6BsX8dk4Qh9ngD17ZC
        KJqc3o3bTxujJR+GXIloN9bHvbx56W5IvfgqG/S0VWc5cU4OduPTO5lf2TUEtmumUOLghkZPniyk
        9znBuG2Hbnx3mqRsAt7AJ/f0SabjZRuhvtikVJ/iHRqK9NHD97FyCUsuEhtmMFTw4uqLHbOxE3pr
        ihD0ytDDEtKP0Zuz1ipL3qSn/EIQ6Z9OA8/G4rG5/7w9NnibEtQoKOiWrt71mO3FGB0b90n95Doy
        usw/tOWdgO7K5y1h1SsT0KMQe2yo1atjt6bwUcg3PJF1UiRTkT6I4l84H//8Fzu+plRJ6mtNABX3
        ej7dIklJW8EiPNyK+htdUh1c6Y7pUp9s7B7AIZu7PwhHQtwt9T9Ck0YXqnY7r+uaV2BBLDWYFsOl
        7qZcfx3ltW96OOO/ejbNdxojJbvk1ORwUPfh/TPC9WUYVC8j5LEvKiJIv/sbDZ2yzpb+XMG4Xgfk
        OJ+rZL7Mqgxzd+CpRkzVI/1TaxRnf1lRVwrMRAz2kaUsfo2G/BGh+arv3qh8H4GakmAj5gqHHJb8
        To3kIv3jX7wcDjiknzRh18s4Qyc1uz9+elCkjSQv/TwEJVMy4qRSCGbRvjCuxjiblYFf+jth2Ajy
        LGMHXNowqOOLbqP92iPtrPDoI/sr6vqc1s3H9Eog4WoHb4dz1xHl1L0h3d8/Cy+wk97ZRhXksncK
        5V/e1tNvCL/rK9pK8PpiParOz384h4ozaDW/BTRVtov1S1Kxhn73nLLWmjs55nOOxvQcPeF8ZjOZ
        0En1GD6ujmhjDTsiba9lNzpxXaLH+1tjez/e6qmxr2ewz2EaKsO8r/sVC0x4cLcLWc1qZLAoriql
        PlQt2ROzNJpn31noaHwLshK8oh7PfqSDswnW1OAaVLPYNlzIpzgNJW43McrZlxZmKF2adnyGxrp+
        SrCyDTUcL1GMpp0pj9D0JqYu52pI/J5mE6r5GdCtcltnk3U+p3DjYpeMMYVsNDvRhLk78RRP3tMY
        m2/TQ/KmCXUfH9OY8O7UIkWNhVDBsWAM2tYp4YPzDoeX1+D1XnFLIcx5hVqP46Ebvp6lonbyn9Rs
        AivrP8/VEXZDoWJVb+du8QsmuvX6B1vZXHn0x7PMSHbwtssC9EmVjsB7nvLwsoK0Zvus6ZFGczHk
        5OSSjaX2ldA3/ahEUd9TRh6uqMq/6wfVXu3mfkpbOB6Q9uNv/+i7mfob3N2suzfVa69E1T4TsZee
        Tmh04q6Ex/p9XPqj3s07VQ3/8DQf4yqbLBzb8Fg/j9SZv3zd2pdAl8NcUOjumu+90R/3BITLDHRr
        cTzrl/wFDTAnRGYcevM53lRwcTsdO2iyjNE9nFzonYOG/SUPDqfKyJGBLx4N4fvIZkRvnHyI1B01
        rLvhiRm+XNHi/0J58aPLfHzC51tucd6qk0eb19b8k3dZrKYGU756iI5cNi3jqxtD1W+P4D9GA+cX
        tEvE8w6ukG2EkchsO9ekShQf7b/XdcjvQ8TI9TDYiBYMQuG11ZO5fbU8WvI5kYRAqKfH90JgJz0/
        WJc16jWCfW3lZxRzZLwH0cLnVBsKFGsEkOgwslPtELzGBKpuuLu38CodPpH3pta7+BhzapcyIGJy
        eJepX8ayvL/CfXKBoGjUvTHAX/eXt6i20aRsmf89nMvXd+GLQcfehm7CBm121PDut25G9MABlh+E
        4qRMa6bM9hmc+X0IF96WMOEbcfD5VluqvlzizZjucqW27RL7k+wnbF5fSkj2nkpEh7t5M3mvI9gx
        NaVqdTYzatlNi8IpNan5EotkUKqGwDVmJQ7U9z6bd6rt/8l/6ou82cTIqCLZ2e+pUWYvRHnNO/8U
        KsFmn7QnBx3huh4UqmZiwObs8yiV8eUm1F54wnTzugoOh/c15ArnhaZzp7Xg5dyBmnhfGPMxjQmc
        YkkibcF3JuGyiX76GM5hmHTTilwr8J6zv/AT5LGrksNvfPDWPF3Q2Hn1rNw9DlGvkIjBFn1Bz1C7
        Ut1XK9T/6pVXvDNZJwbXTT//z+8UM911W9YxvZWvgJkU0pCnljd5FHFw9cuYOhvX8+QaN9fNws8I
        V1p11wt23P74MNU6d5XMWnR4g6TyAfXKdceGY4RNaPvhtvAwvePnNAsRflcmVQP6YAP/PLdQWUkU
        bta8XQuW6AAUBl/geMX5HQvbVELPxuRx0K0Z6i/Vw4f1FPlY1fYBY0RKJIR780Sva9Ey/7YLLmAu
        /sDgfnkqG/S0VWc5cU4OduPTO5lf2TUEtmumUOLghkZPniyk
    headers:
      CF-Cache-Status:
      - DYNAMIC
      CF-RAY:
      - 8a0b45f9bc6c4589-ATL
      Connection:
      - keep-alive
      Content-Encoding:
      - gzip
      Content-Type:
      - application/json
      Date:
      - Tue, 09 Jul 2024 21:05:39 GMT
      Server:
      - cloudflare
      Set-Cookie:
      - __cf_bm=IuvCm3DYH_Yj2KCjQfa873zjAj_2TgaF47eD6tQ4Mhs-1720559139-1.0.1.1-fWDjNt6ARNNwSwU4ZoyX.VoMhynDVIi97V54zsXBMuMg_KjRGid.vTsH.YWP4cEbPWj_vdlZjnfl3ef4S90Eog;
        path=/; expires=Tue, 09-Jul-24 21:35:39 GMT; domain=.api.openai.com; HttpOnly;
        Secure; SameSite=None
      - _cfuvid=uolQOZ2C52Hd5W7TXNWyFaYk4FEIIwP0B2MH49GGYtA-1720559139009-0.0.1.1-604800000;
        path=/; domain=.api.openai.com; HttpOnly; Secure; SameSite=None
      Transfer-Encoding:
      - chunked
      access-control-allow-origin:
      - '*'
      alt-svc:
      - h3=":443"; ma=86400
      openai-model:
      - text-embedding-ada-002
      openai-organization:
      - crewai-iuxna1
      openai-processing-ms:
      - '18'
      openai-version:
      - '2020-10-01'
      strict-transport-security:
      - max-age=31536000; includeSubDomains
      x-ratelimit-limit-requests:
      - '10000'
      x-ratelimit-limit-tokens:
      - '10000000'
      x-ratelimit-remaining-requests:
      - '9999'
      x-ratelimit-remaining-tokens:
      - '9999953'
      x-ratelimit-reset-requests:
      - 6ms
      x-ratelimit-reset-tokens:
      - 0s
      x-request-id:
      - req_5f80532d772393d55159f71cbd4e8211
    status:
      code: 200
      message: OK
version: 1
1516
tests/cassettes/test_output_json_hierarchical.yaml
Normal file
File diff suppressed because it is too large
258
tests/cassettes/test_output_json_sequential.yaml
Normal file
@@ -0,0 +1,258 @@
interactions:
- request:
    body: '{"messages": [{"content": "You are Scorer. You''re an expert scorer, specialized
      in scoring titles.\nYour personal goal is: Score the titleTo give my best complete
      final answer to the task use the exact following format:\n\nThought: I now can
      give a great answer\nFinal Answer: my best complete final answer to the task.\nYour
      final answer must be the great and the most complete as possible, it must be
      outcome described.\n\nI MUST use these formats, my job depends on it!\nCurrent
      Task: Give me an integer score between 1-5 for the following title: ''The impact
      of AI in the future of work''\n\nThis is the expect criteria for your final
      answer: The score of the title. \n you MUST return the actual complete content
      as the final answer, not a summary.\n\nBegin! This is VERY important to you,
      use the tools available and give your best Final Answer, your job depends on
      it!\n\nThought:\n", "role": "user"}], "model": "gpt-4o", "n": 1, "stop": ["\nObservation"],
      "stream": true, "temperature": 0.7}'
    headers:
      accept:
      - application/json
      accept-encoding:
      - gzip, deflate
      connection:
      - keep-alive
      content-length:
      - '997'
      content-type:
      - application/json
      host:
      - api.openai.com
      user-agent:
      - OpenAI/Python 1.35.10
      x-stainless-arch:
      - arm64
      x-stainless-async:
      - 'false'
      x-stainless-lang:
      - python
      x-stainless-os:
      - MacOS
      x-stainless-package-version:
      - 1.35.10
      x-stainless-runtime:
      - CPython
      x-stainless-runtime-version:
      - 3.12.3
    method: POST
    uri: https://api.openai.com/v1/chat/completions
  response:
    body:
      string: 'data: {"id":"chatcmpl-9jCJkXh6z3EfmS24VIKd3az5QmUI6","object":"chat.completion.chunk","created":1720559136,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_d576307f90","choices":[{"index":0,"delta":{"role":"assistant","content":""},"logprobs":null,"finish_reason":null}]}


        data: {"id":"chatcmpl-9jCJkXh6z3EfmS24VIKd3az5QmUI6","object":"chat.completion.chunk","created":1720559136,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_d576307f90","choices":[{"index":0,"delta":{"content":"Thought"},"logprobs":null,"finish_reason":null}]}


        data: {"id":"chatcmpl-9jCJkXh6z3EfmS24VIKd3az5QmUI6","object":"chat.completion.chunk","created":1720559136,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_d576307f90","choices":[{"index":0,"delta":{"content":":"},"logprobs":null,"finish_reason":null}]}


        data: {"id":"chatcmpl-9jCJkXh6z3EfmS24VIKd3az5QmUI6","object":"chat.completion.chunk","created":1720559136,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_d576307f90","choices":[{"index":0,"delta":{"content":"
        I"},"logprobs":null,"finish_reason":null}]}


        data: {"id":"chatcmpl-9jCJkXh6z3EfmS24VIKd3az5QmUI6","object":"chat.completion.chunk","created":1720559136,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_d576307f90","choices":[{"index":0,"delta":{"content":"
        now"},"logprobs":null,"finish_reason":null}]}


        data: {"id":"chatcmpl-9jCJkXh6z3EfmS24VIKd3az5QmUI6","object":"chat.completion.chunk","created":1720559136,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_d576307f90","choices":[{"index":0,"delta":{"content":"
        can"},"logprobs":null,"finish_reason":null}]}


        data: {"id":"chatcmpl-9jCJkXh6z3EfmS24VIKd3az5QmUI6","object":"chat.completion.chunk","created":1720559136,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_d576307f90","choices":[{"index":0,"delta":{"content":"
        give"},"logprobs":null,"finish_reason":null}]}


        data: {"id":"chatcmpl-9jCJkXh6z3EfmS24VIKd3az5QmUI6","object":"chat.completion.chunk","created":1720559136,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_d576307f90","choices":[{"index":0,"delta":{"content":"
        a"},"logprobs":null,"finish_reason":null}]}


        data: {"id":"chatcmpl-9jCJkXh6z3EfmS24VIKd3az5QmUI6","object":"chat.completion.chunk","created":1720559136,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_d576307f90","choices":[{"index":0,"delta":{"content":"
        great"},"logprobs":null,"finish_reason":null}]}


        data: {"id":"chatcmpl-9jCJkXh6z3EfmS24VIKd3az5QmUI6","object":"chat.completion.chunk","created":1720559136,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_d576307f90","choices":[{"index":0,"delta":{"content":"
        answer"},"logprobs":null,"finish_reason":null}]}


        data: {"id":"chatcmpl-9jCJkXh6z3EfmS24VIKd3az5QmUI6","object":"chat.completion.chunk","created":1720559136,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_d576307f90","choices":[{"index":0,"delta":{"content":"\n"},"logprobs":null,"finish_reason":null}]}


        data: {"id":"chatcmpl-9jCJkXh6z3EfmS24VIKd3az5QmUI6","object":"chat.completion.chunk","created":1720559136,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_d576307f90","choices":[{"index":0,"delta":{"content":"Final"},"logprobs":null,"finish_reason":null}]}


        data: {"id":"chatcmpl-9jCJkXh6z3EfmS24VIKd3az5QmUI6","object":"chat.completion.chunk","created":1720559136,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_d576307f90","choices":[{"index":0,"delta":{"content":"
        Answer"},"logprobs":null,"finish_reason":null}]}


        data: {"id":"chatcmpl-9jCJkXh6z3EfmS24VIKd3az5QmUI6","object":"chat.completion.chunk","created":1720559136,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_d576307f90","choices":[{"index":0,"delta":{"content":":"},"logprobs":null,"finish_reason":null}]}


        data: {"id":"chatcmpl-9jCJkXh6z3EfmS24VIKd3az5QmUI6","object":"chat.completion.chunk","created":1720559136,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_d576307f90","choices":[{"index":0,"delta":{"content":"
        "},"logprobs":null,"finish_reason":null}]}


        data: {"id":"chatcmpl-9jCJkXh6z3EfmS24VIKd3az5QmUI6","object":"chat.completion.chunk","created":1720559136,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_d576307f90","choices":[{"index":0,"delta":{"content":"4"},"logprobs":null,"finish_reason":null}]}


        data: {"id":"chatcmpl-9jCJkXh6z3EfmS24VIKd3az5QmUI6","object":"chat.completion.chunk","created":1720559136,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_d576307f90","choices":[{"index":0,"delta":{},"logprobs":null,"finish_reason":"stop"}]}


        data: [DONE]


        '
    headers:
      CF-Cache-Status:
      - DYNAMIC
      CF-RAY:
      - 8a0b45eb19a3c00b-ATL
      Connection:
      - keep-alive
      Content-Type:
      - text/event-stream; charset=utf-8
      Date:
      - Tue, 09 Jul 2024 21:05:36 GMT
      Server:
      - cloudflare
      Set-Cookie:
      - __cf_bm=pA7SjF9QjLel4TzQ_lNj63W_TlcZBVsYreOxByhCguY-1720559136-1.0.1.1-HZhSIVb4ZIrgcL3DwhR7q53vNdieKNmEv_0ZAHDbmBBkD891hDrzxqLpBZSw7j_mFtCPQEjxpAMjD5JI3o8NEw;
        path=/; expires=Tue, 09-Jul-24 21:35:36 GMT; domain=.api.openai.com; HttpOnly;
        Secure; SameSite=None
      - _cfuvid=OblnrTSQSq8R858tQhKPz9cFRCWv.MPPI1wxnvjeHJI-1720559136855-0.0.1.1-604800000;
        path=/; domain=.api.openai.com; HttpOnly; Secure; SameSite=None
      Transfer-Encoding:
      - chunked
      alt-svc:
      - h3=":443"; ma=86400
      openai-organization:
      - crewai-iuxna1
      openai-processing-ms:
      - '96'
      openai-version:
      - '2020-10-01'
      strict-transport-security:
      - max-age=31536000; includeSubDomains
      x-ratelimit-limit-requests:
      - '10000'
      x-ratelimit-limit-tokens:
      - '22000000'
      x-ratelimit-remaining-requests:
      - '9999'
      x-ratelimit-remaining-tokens:
      - '21999771'
      x-ratelimit-reset-requests:
      - 6ms
      x-ratelimit-reset-tokens:
      - 0s
      x-request-id:
      - req_ee2fc8fd37b03ee0bcf92ff34f91a51c
    status:
      code: 200
      message: OK
- request:
    body: '{"messages": [{"role": "user", "content": "4"}, {"role": "system", "content":
      "I''m gonna convert this raw text into valid JSON."}], "model": "gpt-4o", "tool_choice":
      {"type": "function", "function": {"name": "ScoreOutput"}}, "tools": [{"type":
      "function", "function": {"name": "ScoreOutput", "description": "Correctly extracted
      `ScoreOutput` with all the required parameters with correct types", "parameters":
      {"properties": {"score": {"title": "Score", "type": "integer"}}, "required":
      ["score"], "type": "object"}}}]}'
    headers:
      accept:
      - application/json
      accept-encoding:
      - gzip, deflate
      connection:
      - keep-alive
      content-length:
      - '519'
      content-type:
      - application/json
      cookie:
      - __cf_bm=pA7SjF9QjLel4TzQ_lNj63W_TlcZBVsYreOxByhCguY-1720559136-1.0.1.1-HZhSIVb4ZIrgcL3DwhR7q53vNdieKNmEv_0ZAHDbmBBkD891hDrzxqLpBZSw7j_mFtCPQEjxpAMjD5JI3o8NEw;
        _cfuvid=OblnrTSQSq8R858tQhKPz9cFRCWv.MPPI1wxnvjeHJI-1720559136855-0.0.1.1-604800000
      host:
      - api.openai.com
      user-agent:
      - OpenAI/Python 1.35.10
      x-stainless-arch:
      - arm64
      x-stainless-async:
      - 'false'
      x-stainless-lang:
      - python
      x-stainless-os:
      - MacOS
      x-stainless-package-version:
      - 1.35.10
      x-stainless-runtime:
      - CPython
      x-stainless-runtime-version:
      - 3.12.3
    method: POST
    uri: https://api.openai.com/v1/chat/completions
  response:
    body:
      string: !!binary |
        H4sIAAAAAAAAA2xS22rjMBB991eIeY4X5+K68WsoXbplt6WUQjfFKIp8SWSNkMabpiH/XuS4sVvW
        D2KYM+fCjA8BY1CtIWUgSk6iNiqcbxZ3anfPy6sPnu92k23iVvttUl7/XPz+BSPPwNVGCvpi/RBY
        G2WpQn2ChZWcpFcdJ5MojufjadICNa6l8rTCUDjDcBJNZmEUh+NpRyyxEtJByv4GjDF2aF8fUa/l
        O6QsGn11aukcLySk5yHGwKLyHeDOVY64Jhj1oEBNUvvUulFqABCiygRXqjc+fYdB3e+JK5Wtdrc3
        L83Y7h8Xm4/7l3/b13nx8Hz3MPA7Se9NGyhvtDjvZ4Cf++mFGWOged1ynwRa+ach09AFnTHgtmhq
        qclHh8MSnB9eQjo7wrfRY/C/+q2rjue1KiyMxZW72BLkla5cmVnJXZsWHKE5WXi5t/Z8zbeLgLFY
        G8oIt1J7wevuetD/Lz0YdxghcTXgxEEXD9zekayzvNKFtMZW7SkhN9k6Tq6mUZLPIwiOwScAAAD/
        /wMAE8hqg9MCAAA=
    headers:
      CF-Cache-Status:
      - DYNAMIC
      CF-RAY:
      - 8a0b45ef8871c00b-ATL
      Connection:
      - keep-alive
      Content-Encoding:
      - gzip
      Content-Type:
      - application/json
      Date:
      - Tue, 09 Jul 2024 21:05:37 GMT
      Server:
      - cloudflare
      Transfer-Encoding:
      - chunked
      alt-svc:
      - h3=":443"; ma=86400
      openai-organization:
      - crewai-iuxna1
      openai-processing-ms:
      - '298'
      openai-version:
      - '2020-10-01'
      strict-transport-security:
      - max-age=31536000; includeSubDomains
      x-ratelimit-limit-requests:
      - '10000'
      x-ratelimit-limit-tokens:
      - '22000000'
      x-ratelimit-remaining-requests:
      - '9999'
      x-ratelimit-remaining-tokens:
      - '21999969'
      x-ratelimit-reset-requests:
      - 6ms
      x-ratelimit-reset-tokens:
      - 0s
      x-request-id:
      - req_cea16690999d7a4128fd55687dc31397
    status:
      code: 200
      message: OK
version: 1
1649
tests/cassettes/test_output_pydantic_hierarchical.yaml
Normal file
File diff suppressed because it is too large
258
tests/cassettes/test_output_pydantic_sequential.yaml
Normal file
@@ -0,0 +1,258 @@
interactions:
- request:
    body: '{"messages": [{"content": "You are Scorer. You''re an expert scorer, specialized
      in scoring titles.\nYour personal goal is: Score the titleTo give my best complete
      final answer to the task use the exact following format:\n\nThought: I now can
      give a great answer\nFinal Answer: my best complete final answer to the task.\nYour
      final answer must be the great and the most complete as possible, it must be
      outcome described.\n\nI MUST use these formats, my job depends on it!\nCurrent
      Task: Give me an integer score between 1-5 for the following title: ''The impact
      of AI in the future of work''\n\nThis is the expect criteria for your final
      answer: The score of the title. \n you MUST return the actual complete content
      as the final answer, not a summary.\n\nBegin! This is VERY important to you,
      use the tools available and give your best Final Answer, your job depends on
      it!\n\nThought:\n", "role": "user"}], "model": "gpt-4o", "n": 1, "stop": ["\nObservation"],
      "stream": true, "temperature": 0.7}'
    headers:
      accept:
      - application/json
      accept-encoding:
      - gzip, deflate
      connection:
      - keep-alive
      content-length:
      - '997'
      content-type:
      - application/json
      host:
      - api.openai.com
      user-agent:
      - OpenAI/Python 1.35.10
      x-stainless-arch:
      - arm64
      x-stainless-async:
      - 'false'
      x-stainless-lang:
      - python
      x-stainless-os:
      - MacOS
      x-stainless-package-version:
      - 1.35.10
      x-stainless-runtime:
      - CPython
      x-stainless-runtime-version:
      - 3.12.3
    method: POST
    uri: https://api.openai.com/v1/chat/completions
  response:
    body:
      string: 'data: {"id":"chatcmpl-9jCJjSE1CUTbcdPQgWhGnINQBfDJr","object":"chat.completion.chunk","created":1720559135,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_ce0793330f","choices":[{"index":0,"delta":{"role":"assistant","content":""},"logprobs":null,"finish_reason":null}]}


        data: {"id":"chatcmpl-9jCJjSE1CUTbcdPQgWhGnINQBfDJr","object":"chat.completion.chunk","created":1720559135,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_ce0793330f","choices":[{"index":0,"delta":{"content":"Thought"},"logprobs":null,"finish_reason":null}]}


        data: {"id":"chatcmpl-9jCJjSE1CUTbcdPQgWhGnINQBfDJr","object":"chat.completion.chunk","created":1720559135,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_ce0793330f","choices":[{"index":0,"delta":{"content":":"},"logprobs":null,"finish_reason":null}]}


        data: {"id":"chatcmpl-9jCJjSE1CUTbcdPQgWhGnINQBfDJr","object":"chat.completion.chunk","created":1720559135,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_ce0793330f","choices":[{"index":0,"delta":{"content":"
        I"},"logprobs":null,"finish_reason":null}]}


        data: {"id":"chatcmpl-9jCJjSE1CUTbcdPQgWhGnINQBfDJr","object":"chat.completion.chunk","created":1720559135,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_ce0793330f","choices":[{"index":0,"delta":{"content":"
        now"},"logprobs":null,"finish_reason":null}]}


        data: {"id":"chatcmpl-9jCJjSE1CUTbcdPQgWhGnINQBfDJr","object":"chat.completion.chunk","created":1720559135,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_ce0793330f","choices":[{"index":0,"delta":{"content":"
        can"},"logprobs":null,"finish_reason":null}]}


        data: {"id":"chatcmpl-9jCJjSE1CUTbcdPQgWhGnINQBfDJr","object":"chat.completion.chunk","created":1720559135,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_ce0793330f","choices":[{"index":0,"delta":{"content":"
        give"},"logprobs":null,"finish_reason":null}]}


        data: {"id":"chatcmpl-9jCJjSE1CUTbcdPQgWhGnINQBfDJr","object":"chat.completion.chunk","created":1720559135,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_ce0793330f","choices":[{"index":0,"delta":{"content":"
        a"},"logprobs":null,"finish_reason":null}]}


        data: {"id":"chatcmpl-9jCJjSE1CUTbcdPQgWhGnINQBfDJr","object":"chat.completion.chunk","created":1720559135,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_ce0793330f","choices":[{"index":0,"delta":{"content":"
        great"},"logprobs":null,"finish_reason":null}]}


        data: {"id":"chatcmpl-9jCJjSE1CUTbcdPQgWhGnINQBfDJr","object":"chat.completion.chunk","created":1720559135,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_ce0793330f","choices":[{"index":0,"delta":{"content":"
        answer"},"logprobs":null,"finish_reason":null}]}


        data: {"id":"chatcmpl-9jCJjSE1CUTbcdPQgWhGnINQBfDJr","object":"chat.completion.chunk","created":1720559135,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_ce0793330f","choices":[{"index":0,"delta":{"content":"\n"},"logprobs":null,"finish_reason":null}]}


        data: {"id":"chatcmpl-9jCJjSE1CUTbcdPQgWhGnINQBfDJr","object":"chat.completion.chunk","created":1720559135,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_ce0793330f","choices":[{"index":0,"delta":{"content":"Final"},"logprobs":null,"finish_reason":null}]}


        data: {"id":"chatcmpl-9jCJjSE1CUTbcdPQgWhGnINQBfDJr","object":"chat.completion.chunk","created":1720559135,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_ce0793330f","choices":[{"index":0,"delta":{"content":"
        Answer"},"logprobs":null,"finish_reason":null}]}


        data: {"id":"chatcmpl-9jCJjSE1CUTbcdPQgWhGnINQBfDJr","object":"chat.completion.chunk","created":1720559135,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_ce0793330f","choices":[{"index":0,"delta":{"content":":"},"logprobs":null,"finish_reason":null}]}


        data: {"id":"chatcmpl-9jCJjSE1CUTbcdPQgWhGnINQBfDJr","object":"chat.completion.chunk","created":1720559135,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_ce0793330f","choices":[{"index":0,"delta":{"content":"
        "},"logprobs":null,"finish_reason":null}]}


        data: {"id":"chatcmpl-9jCJjSE1CUTbcdPQgWhGnINQBfDJr","object":"chat.completion.chunk","created":1720559135,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_ce0793330f","choices":[{"index":0,"delta":{"content":"4"},"logprobs":null,"finish_reason":null}]}


        data: {"id":"chatcmpl-9jCJjSE1CUTbcdPQgWhGnINQBfDJr","object":"chat.completion.chunk","created":1720559135,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_ce0793330f","choices":[{"index":0,"delta":{},"logprobs":null,"finish_reason":"stop"}]}


        data: [DONE]


        '
    headers:
      CF-Cache-Status:
      - DYNAMIC
      CF-RAY:
      - 8a0b45e24b19bd4d-ATL
      Connection:
      - keep-alive
      Content-Type:
      - text/event-stream; charset=utf-8
      Date:
      - Tue, 09 Jul 2024 21:05:35 GMT
      Server:
      - cloudflare
      Set-Cookie:
      - __cf_bm=N7yNe.ilaHt2MJPusthFVyL5PrE._f_nyf4RfU.oIv0-1720559135-1.0.1.1-oCOj_tvpNYp16zBvNbxW.TwSHAFXRiB_i23X4XBw_o01D1_7OKj_HwRNZWdwg9DjDh_C_FSMKTonmzQmsUmtdg;
        path=/; expires=Tue, 09-Jul-24 21:35:35 GMT; domain=.api.openai.com; HttpOnly;
        Secure; SameSite=None
      - _cfuvid=aiUOV0PnMjHles7YFoHcFY7PK2Ag6MdKr0GWZzZ_rZo-1720559135403-0.0.1.1-604800000;
        path=/; domain=.api.openai.com; HttpOnly; Secure; SameSite=None
      Transfer-Encoding:
      - chunked
      alt-svc:
      - h3=":443"; ma=86400
      openai-organization:
      - crewai-iuxna1
      openai-processing-ms:
      - '105'
      openai-version:
      - '2020-10-01'
      strict-transport-security:
      - max-age=31536000; includeSubDomains
      x-ratelimit-limit-requests:
      - '10000'
      x-ratelimit-limit-tokens:
      - '22000000'
      x-ratelimit-remaining-requests:
      - '9999'
      x-ratelimit-remaining-tokens:
      - '21999771'
      x-ratelimit-reset-requests:
      - 6ms
      x-ratelimit-reset-tokens:
      - 0s
      x-request-id:
      - req_759b74b995b84a531eae7df3eddf1196
    status:
      code: 200
      message: OK
- request:
    body: '{"messages": [{"role": "user", "content": "4"}, {"role": "system", "content":
      "I''m gonna convert this raw text into valid JSON."}], "model": "gpt-4o", "tool_choice":
      {"type": "function", "function": {"name": "ScoreOutput"}}, "tools": [{"type":
      "function", "function": {"name": "ScoreOutput", "description": "Correctly extracted
      `ScoreOutput` with all the required parameters with correct types", "parameters":
      {"properties": {"score": {"title": "Score", "type": "integer"}}, "required":
      ["score"], "type": "object"}}}]}'
    headers:
      accept:
      - application/json
      accept-encoding:
      - gzip, deflate
      connection:
      - keep-alive
      content-length:
      - '519'
      content-type:
      - application/json
      cookie:
      - __cf_bm=N7yNe.ilaHt2MJPusthFVyL5PrE._f_nyf4RfU.oIv0-1720559135-1.0.1.1-oCOj_tvpNYp16zBvNbxW.TwSHAFXRiB_i23X4XBw_o01D1_7OKj_HwRNZWdwg9DjDh_C_FSMKTonmzQmsUmtdg;
        _cfuvid=aiUOV0PnMjHles7YFoHcFY7PK2Ag6MdKr0GWZzZ_rZo-1720559135403-0.0.1.1-604800000
      host:
      - api.openai.com
      user-agent:
      - OpenAI/Python 1.35.10
      x-stainless-arch:
      - arm64
      x-stainless-async:
      - 'false'
      x-stainless-lang:
      - python
      x-stainless-os:
      - MacOS
      x-stainless-package-version:
      - 1.35.10
      x-stainless-runtime:
      - CPython
      x-stainless-runtime-version:
      - 3.12.3
    method: POST
    uri: https://api.openai.com/v1/chat/completions
  response:
    body:
      string: !!binary |
        H4sIAAAAAAAAA2xS22rcMBB991eIeV4X2+vNbvxWAoGmkLQppZQmGEUe29rIGiGNk5Zl/73Iu4md
        6wd7mDPnwswhEgJUBYUQspMsB6vjy+bq1uzvZJ3uN1+/fP60pdvd5cPV94ebX99gFRh0v0fJr6wP
        kgarWRHZCZYOJWNQzS7SfLO5zIrdCAxUoQ60xnKcU5ylWR6nRZyuT8SWlEQPhfgZCSHEYXxDRFvh
        HyhEunrtDOi9bBCK8xBjwpEOHZDeK8/SMqwmUJJltCG16bWeAUykSyW1ngpP32FWT3uSWpeP7nb3
        QV8Xzd2t/XGffzzc71t9PfObpP/aMVD3Vp73M8PP/eKNmRBg5TBy/3jJ4W1vpe15gS4EyNb0Bg2H
        6HB4Ah+GnyDMz/Bu8hy9V/+O5fG01rSN9fTkF1uCWhnl29Kh9GNaCExu7fGRSx7O17+5CFhHneWS
        6RlNEFwP14Ppf5nBm1PE5JlnnAIazUG49SS7slamQWedGk8JtS2zar1Ns6La7iA6Rn8BAAD//wMA
        q09mQtMCAAA=
    headers:
      CF-Cache-Status:
      - DYNAMIC
      CF-RAY:
      - 8a0b45e5ffd7bd4d-ATL
      Connection:
      - keep-alive
      Content-Encoding:
      - gzip
      Content-Type:
      - application/json
      Date:
      - Tue, 09 Jul 2024 21:05:36 GMT
      Server:
      - cloudflare
      Transfer-Encoding:
      - chunked
      alt-svc:
      - h3=":443"; ma=86400
      openai-organization:
      - crewai-iuxna1
      openai-processing-ms:
      - '153'
      openai-version:
      - '2020-10-01'
      strict-transport-security:
      - max-age=31536000; includeSubDomains
      x-ratelimit-limit-requests:
      - '10000'
      x-ratelimit-limit-tokens:
      - '22000000'
      x-ratelimit-remaining-requests:
      - '9999'
      x-ratelimit-remaining-tokens:
      - '21999969'
      x-ratelimit-reset-requests:
      - 6ms
      x-ratelimit-reset-tokens:
      - 0s
      x-request-id:
      - req_36cd16b74a2085c72139d09309d21e39
    status:
      code: 200
      message: OK
version: 1
6244
tests/cassettes/test_sequential_async_task_execution_completion.yaml
Normal file
File diff suppressed because it is too large
333
tests/cassettes/test_single_task_with_async_execution.yaml
Normal file
@@ -0,0 +1,333 @@
interactions:
- request:
    body: '{"messages": [{"content": "You are Researcher. You''re an expert researcher,
      specialized in technology, software engineering, AI and startups. You work as
      a freelancer and is now working on doing research and analysis for a new customer.\nYour
      personal goal is: Make the best research and analysis on content about AI and
      AI agentsTo give my best complete final answer to the task use the exact following
      format:\n\nThought: I now can give a great answer\nFinal Answer: my best complete
      final answer to the task.\nYour final answer must be the great and the most
      complete as possible, it must be outcome described.\n\nI MUST use these formats,
      my job depends on it!\nCurrent Task: Generate a list of 5 interesting ideas
      to explore for an article, where each bulletpoint is under 15 words.\n\nThis
      is the expect criteria for your final answer: Bullet point list of 5 important
      events. No additional commentary. \n you MUST return the actual complete content
      as the final answer, not a summary.\n\nBegin! This is VERY important to you,
      use the tools available and give your best Final Answer, your job depends on
      it!\n\nThought:\n", "role": "user"}], "model": "gpt-4o", "n": 1, "stop": ["\nObservation"],
      "stream": true, "temperature": 0.7}'
    headers:
      accept:
      - application/json
      accept-encoding:
      - gzip, deflate, br
      connection:
      - keep-alive
      content-length:
      - '1237'
      content-type:
      - application/json
      host:
      - api.openai.com
      user-agent:
      - OpenAI/Python 1.34.0
      x-stainless-arch:
      - arm64
      x-stainless-async:
      - 'false'
      x-stainless-lang:
      - python
      x-stainless-os:
      - MacOS
      x-stainless-package-version:
      - 1.34.0
      x-stainless-runtime:
      - CPython
      x-stainless-runtime-version:
      - 3.12.3
    method: POST
    uri: https://api.openai.com/v1/chat/completions
  response:
    body:
      string: 'data: {"id":"chatcmpl-9ce1Nupvw1SEEUL1MxkSS1S2KMYoY","object":"chat.completion.chunk","created":1718997333,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_3e7d703517","choices":[{"index":0,"delta":{"role":"assistant","content":""},"logprobs":null,"finish_reason":null}]}


        data: {"id":"chatcmpl-9ce1Nupvw1SEEUL1MxkSS1S2KMYoY","object":"chat.completion.chunk","created":1718997333,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_3e7d703517","choices":[{"index":0,"delta":{"content":"Thought"},"logprobs":null,"finish_reason":null}]}


        data: {"id":"chatcmpl-9ce1Nupvw1SEEUL1MxkSS1S2KMYoY","object":"chat.completion.chunk","created":1718997333,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_3e7d703517","choices":[{"index":0,"delta":{"content":":"},"logprobs":null,"finish_reason":null}]}


        data: {"id":"chatcmpl-9ce1Nupvw1SEEUL1MxkSS1S2KMYoY","object":"chat.completion.chunk","created":1718997333,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_3e7d703517","choices":[{"index":0,"delta":{"content":" I"},"logprobs":null,"finish_reason":null}]}


        data: {"id":"chatcmpl-9ce1Nupvw1SEEUL1MxkSS1S2KMYoY","object":"chat.completion.chunk","created":1718997333,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_3e7d703517","choices":[{"index":0,"delta":{"content":" now"},"logprobs":null,"finish_reason":null}]}


        data: {"id":"chatcmpl-9ce1Nupvw1SEEUL1MxkSS1S2KMYoY","object":"chat.completion.chunk","created":1718997333,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_3e7d703517","choices":[{"index":0,"delta":{"content":" can"},"logprobs":null,"finish_reason":null}]}


        data: {"id":"chatcmpl-9ce1Nupvw1SEEUL1MxkSS1S2KMYoY","object":"chat.completion.chunk","created":1718997333,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_3e7d703517","choices":[{"index":0,"delta":{"content":" give"},"logprobs":null,"finish_reason":null}]}


        data: {"id":"chatcmpl-9ce1Nupvw1SEEUL1MxkSS1S2KMYoY","object":"chat.completion.chunk","created":1718997333,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_3e7d703517","choices":[{"index":0,"delta":{"content":" a"},"logprobs":null,"finish_reason":null}]}


        data: {"id":"chatcmpl-9ce1Nupvw1SEEUL1MxkSS1S2KMYoY","object":"chat.completion.chunk","created":1718997333,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_3e7d703517","choices":[{"index":0,"delta":{"content":" great"},"logprobs":null,"finish_reason":null}]}


        data: {"id":"chatcmpl-9ce1Nupvw1SEEUL1MxkSS1S2KMYoY","object":"chat.completion.chunk","created":1718997333,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_3e7d703517","choices":[{"index":0,"delta":{"content":" answer"},"logprobs":null,"finish_reason":null}]}


        data: {"id":"chatcmpl-9ce1Nupvw1SEEUL1MxkSS1S2KMYoY","object":"chat.completion.chunk","created":1718997333,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_3e7d703517","choices":[{"index":0,"delta":{"content":"\n"},"logprobs":null,"finish_reason":null}]}


        data: {"id":"chatcmpl-9ce1Nupvw1SEEUL1MxkSS1S2KMYoY","object":"chat.completion.chunk","created":1718997333,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_3e7d703517","choices":[{"index":0,"delta":{"content":"Final"},"logprobs":null,"finish_reason":null}]}


        data: {"id":"chatcmpl-9ce1Nupvw1SEEUL1MxkSS1S2KMYoY","object":"chat.completion.chunk","created":1718997333,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_3e7d703517","choices":[{"index":0,"delta":{"content":" Answer"},"logprobs":null,"finish_reason":null}]}


        data: {"id":"chatcmpl-9ce1Nupvw1SEEUL1MxkSS1S2KMYoY","object":"chat.completion.chunk","created":1718997333,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_3e7d703517","choices":[{"index":0,"delta":{"content":":"},"logprobs":null,"finish_reason":null}]}


        data: {"id":"chatcmpl-9ce1Nupvw1SEEUL1MxkSS1S2KMYoY","object":"chat.completion.chunk","created":1718997333,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_3e7d703517","choices":[{"index":0,"delta":{"content":" \n"},"logprobs":null,"finish_reason":null}]}


        data: {"id":"chatcmpl-9ce1Nupvw1SEEUL1MxkSS1S2KMYoY","object":"chat.completion.chunk","created":1718997333,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_3e7d703517","choices":[{"index":0,"delta":{"content":"-"},"logprobs":null,"finish_reason":null}]}


        data: {"id":"chatcmpl-9ce1Nupvw1SEEUL1MxkSS1S2KMYoY","object":"chat.completion.chunk","created":1718997333,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_3e7d703517","choices":[{"index":0,"delta":{"content":" The"},"logprobs":null,"finish_reason":null}]}


        data: {"id":"chatcmpl-9ce1Nupvw1SEEUL1MxkSS1S2KMYoY","object":"chat.completion.chunk","created":1718997333,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_3e7d703517","choices":[{"index":0,"delta":{"content":" impact"},"logprobs":null,"finish_reason":null}]}


        data: {"id":"chatcmpl-9ce1Nupvw1SEEUL1MxkSS1S2KMYoY","object":"chat.completion.chunk","created":1718997333,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_3e7d703517","choices":[{"index":0,"delta":{"content":" of"},"logprobs":null,"finish_reason":null}]}


        data: {"id":"chatcmpl-9ce1Nupvw1SEEUL1MxkSS1S2KMYoY","object":"chat.completion.chunk","created":1718997333,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_3e7d703517","choices":[{"index":0,"delta":{"content":" AI"},"logprobs":null,"finish_reason":null}]}


        data: {"id":"chatcmpl-9ce1Nupvw1SEEUL1MxkSS1S2KMYoY","object":"chat.completion.chunk","created":1718997333,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_3e7d703517","choices":[{"index":0,"delta":{"content":" agents"},"logprobs":null,"finish_reason":null}]}


        data: {"id":"chatcmpl-9ce1Nupvw1SEEUL1MxkSS1S2KMYoY","object":"chat.completion.chunk","created":1718997333,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_3e7d703517","choices":[{"index":0,"delta":{"content":" on"},"logprobs":null,"finish_reason":null}]}


        data: {"id":"chatcmpl-9ce1Nupvw1SEEUL1MxkSS1S2KMYoY","object":"chat.completion.chunk","created":1718997333,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_3e7d703517","choices":[{"index":0,"delta":{"content":" remote"},"logprobs":null,"finish_reason":null}]}


        data: {"id":"chatcmpl-9ce1Nupvw1SEEUL1MxkSS1S2KMYoY","object":"chat.completion.chunk","created":1718997333,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_3e7d703517","choices":[{"index":0,"delta":{"content":" work"},"logprobs":null,"finish_reason":null}]}


        data: {"id":"chatcmpl-9ce1Nupvw1SEEUL1MxkSS1S2KMYoY","object":"chat.completion.chunk","created":1718997333,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_3e7d703517","choices":[{"index":0,"delta":{"content":" productivity"},"logprobs":null,"finish_reason":null}]}


        data: {"id":"chatcmpl-9ce1Nupvw1SEEUL1MxkSS1S2KMYoY","object":"chat.completion.chunk","created":1718997333,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_3e7d703517","choices":[{"index":0,"delta":{"content":".\n"},"logprobs":null,"finish_reason":null}]}


        data: {"id":"chatcmpl-9ce1Nupvw1SEEUL1MxkSS1S2KMYoY","object":"chat.completion.chunk","created":1718997333,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_3e7d703517","choices":[{"index":0,"delta":{"content":"-"},"logprobs":null,"finish_reason":null}]}


        data: {"id":"chatcmpl-9ce1Nupvw1SEEUL1MxkSS1S2KMYoY","object":"chat.completion.chunk","created":1718997333,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_3e7d703517","choices":[{"index":0,"delta":{"content":" Ethical"},"logprobs":null,"finish_reason":null}]}


        data: {"id":"chatcmpl-9ce1Nupvw1SEEUL1MxkSS1S2KMYoY","object":"chat.completion.chunk","created":1718997333,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_3e7d703517","choices":[{"index":0,"delta":{"content":" considerations"},"logprobs":null,"finish_reason":null}]}


        data: {"id":"chatcmpl-9ce1Nupvw1SEEUL1MxkSS1S2KMYoY","object":"chat.completion.chunk","created":1718997333,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_3e7d703517","choices":[{"index":0,"delta":{"content":" in"},"logprobs":null,"finish_reason":null}]}


        data: {"id":"chatcmpl-9ce1Nupvw1SEEUL1MxkSS1S2KMYoY","object":"chat.completion.chunk","created":1718997333,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_3e7d703517","choices":[{"index":0,"delta":{"content":" AI"},"logprobs":null,"finish_reason":null}]}


        data: {"id":"chatcmpl-9ce1Nupvw1SEEUL1MxkSS1S2KMYoY","object":"chat.completion.chunk","created":1718997333,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_3e7d703517","choices":[{"index":0,"delta":{"content":"-driven"},"logprobs":null,"finish_reason":null}]}


        data: {"id":"chatcmpl-9ce1Nupvw1SEEUL1MxkSS1S2KMYoY","object":"chat.completion.chunk","created":1718997333,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_3e7d703517","choices":[{"index":0,"delta":{"content":" decision"},"logprobs":null,"finish_reason":null}]}


        data: {"id":"chatcmpl-9ce1Nupvw1SEEUL1MxkSS1S2KMYoY","object":"chat.completion.chunk","created":1718997333,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_3e7d703517","choices":[{"index":0,"delta":{"content":"-making"},"logprobs":null,"finish_reason":null}]}


        data: {"id":"chatcmpl-9ce1Nupvw1SEEUL1MxkSS1S2KMYoY","object":"chat.completion.chunk","created":1718997333,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_3e7d703517","choices":[{"index":0,"delta":{"content":".\n"},"logprobs":null,"finish_reason":null}]}


        data: {"id":"chatcmpl-9ce1Nupvw1SEEUL1MxkSS1S2KMYoY","object":"chat.completion.chunk","created":1718997333,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_3e7d703517","choices":[{"index":0,"delta":{"content":"-"},"logprobs":null,"finish_reason":null}]}


        data: {"id":"chatcmpl-9ce1Nupvw1SEEUL1MxkSS1S2KMYoY","object":"chat.completion.chunk","created":1718997333,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_3e7d703517","choices":[{"index":0,"delta":{"content":" How"},"logprobs":null,"finish_reason":null}]}


        data: {"id":"chatcmpl-9ce1Nupvw1SEEUL1MxkSS1S2KMYoY","object":"chat.completion.chunk","created":1718997333,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_3e7d703517","choices":[{"index":0,"delta":{"content":" AI"},"logprobs":null,"finish_reason":null}]}


        data: {"id":"chatcmpl-9ce1Nupvw1SEEUL1MxkSS1S2KMYoY","object":"chat.completion.chunk","created":1718997333,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_3e7d703517","choices":[{"index":0,"delta":{"content":" agents"},"logprobs":null,"finish_reason":null}]}


        data: {"id":"chatcmpl-9ce1Nupvw1SEEUL1MxkSS1S2KMYoY","object":"chat.completion.chunk","created":1718997333,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_3e7d703517","choices":[{"index":0,"delta":{"content":" are"},"logprobs":null,"finish_reason":null}]}


        data: {"id":"chatcmpl-9ce1Nupvw1SEEUL1MxkSS1S2KMYoY","object":"chat.completion.chunk","created":1718997333,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_3e7d703517","choices":[{"index":0,"delta":{"content":" transforming"},"logprobs":null,"finish_reason":null}]}


        data: {"id":"chatcmpl-9ce1Nupvw1SEEUL1MxkSS1S2KMYoY","object":"chat.completion.chunk","created":1718997333,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_3e7d703517","choices":[{"index":0,"delta":{"content":" customer"},"logprobs":null,"finish_reason":null}]}


        data: {"id":"chatcmpl-9ce1Nupvw1SEEUL1MxkSS1S2KMYoY","object":"chat.completion.chunk","created":1718997333,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_3e7d703517","choices":[{"index":0,"delta":{"content":" service"},"logprobs":null,"finish_reason":null}]}


        data: {"id":"chatcmpl-9ce1Nupvw1SEEUL1MxkSS1S2KMYoY","object":"chat.completion.chunk","created":1718997333,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_3e7d703517","choices":[{"index":0,"delta":{"content":".\n"},"logprobs":null,"finish_reason":null}]}


        data: {"id":"chatcmpl-9ce1Nupvw1SEEUL1MxkSS1S2KMYoY","object":"chat.completion.chunk","created":1718997333,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_3e7d703517","choices":[{"index":0,"delta":{"content":"-"},"logprobs":null,"finish_reason":null}]}


        data: {"id":"chatcmpl-9ce1Nupvw1SEEUL1MxkSS1S2KMYoY","object":"chat.completion.chunk","created":1718997333,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_3e7d703517","choices":[{"index":0,"delta":{"content":" The"},"logprobs":null,"finish_reason":null}]}


        data: {"id":"chatcmpl-9ce1Nupvw1SEEUL1MxkSS1S2KMYoY","object":"chat.completion.chunk","created":1718997333,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_3e7d703517","choices":[{"index":0,"delta":{"content":" role"},"logprobs":null,"finish_reason":null}]}


        data: {"id":"chatcmpl-9ce1Nupvw1SEEUL1MxkSS1S2KMYoY","object":"chat.completion.chunk","created":1718997333,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_3e7d703517","choices":[{"index":0,"delta":{"content":" of"},"logprobs":null,"finish_reason":null}]}


        data: {"id":"chatcmpl-9ce1Nupvw1SEEUL1MxkSS1S2KMYoY","object":"chat.completion.chunk","created":1718997333,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_3e7d703517","choices":[{"index":0,"delta":{"content":" AI"},"logprobs":null,"finish_reason":null}]}


        data: {"id":"chatcmpl-9ce1Nupvw1SEEUL1MxkSS1S2KMYoY","object":"chat.completion.chunk","created":1718997333,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_3e7d703517","choices":[{"index":0,"delta":{"content":" in"},"logprobs":null,"finish_reason":null}]}


        data: {"id":"chatcmpl-9ce1Nupvw1SEEUL1MxkSS1S2KMYoY","object":"chat.completion.chunk","created":1718997333,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_3e7d703517","choices":[{"index":0,"delta":{"content":" personalized"},"logprobs":null,"finish_reason":null}]}


        data: {"id":"chatcmpl-9ce1Nupvw1SEEUL1MxkSS1S2KMYoY","object":"chat.completion.chunk","created":1718997333,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_3e7d703517","choices":[{"index":0,"delta":{"content":" learning"},"logprobs":null,"finish_reason":null}]}


        data: {"id":"chatcmpl-9ce1Nupvw1SEEUL1MxkSS1S2KMYoY","object":"chat.completion.chunk","created":1718997333,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_3e7d703517","choices":[{"index":0,"delta":{"content":" experiences"},"logprobs":null,"finish_reason":null}]}


        data: {"id":"chatcmpl-9ce1Nupvw1SEEUL1MxkSS1S2KMYoY","object":"chat.completion.chunk","created":1718997333,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_3e7d703517","choices":[{"index":0,"delta":{"content":".\n"},"logprobs":null,"finish_reason":null}]}


        data: {"id":"chatcmpl-9ce1Nupvw1SEEUL1MxkSS1S2KMYoY","object":"chat.completion.chunk","created":1718997333,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_3e7d703517","choices":[{"index":0,"delta":{"content":"-"},"logprobs":null,"finish_reason":null}]}


        data: {"id":"chatcmpl-9ce1Nupvw1SEEUL1MxkSS1S2KMYoY","object":"chat.completion.chunk","created":1718997333,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_3e7d703517","choices":[{"index":0,"delta":{"content":" AI"},"logprobs":null,"finish_reason":null}]}


        data: {"id":"chatcmpl-9ce1Nupvw1SEEUL1MxkSS1S2KMYoY","object":"chat.completion.chunk","created":1718997333,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_3e7d703517","choices":[{"index":0,"delta":{"content":" advancements"},"logprobs":null,"finish_reason":null}]}


        data: {"id":"chatcmpl-9ce1Nupvw1SEEUL1MxkSS1S2KMYoY","object":"chat.completion.chunk","created":1718997333,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_3e7d703517","choices":[{"index":0,"delta":{"content":" in"},"logprobs":null,"finish_reason":null}]}


        data: {"id":"chatcmpl-9ce1Nupvw1SEEUL1MxkSS1S2KMYoY","object":"chat.completion.chunk","created":1718997333,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_3e7d703517","choices":[{"index":0,"delta":{"content":" healthcare"},"logprobs":null,"finish_reason":null}]}


        data: {"id":"chatcmpl-9ce1Nupvw1SEEUL1MxkSS1S2KMYoY","object":"chat.completion.chunk","created":1718997333,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_3e7d703517","choices":[{"index":0,"delta":{"content":" diagnostics"},"logprobs":null,"finish_reason":null}]}


        data: {"id":"chatcmpl-9ce1Nupvw1SEEUL1MxkSS1S2KMYoY","object":"chat.completion.chunk","created":1718997333,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_3e7d703517","choices":[{"index":0,"delta":{"content":"."},"logprobs":null,"finish_reason":null}]}


        data: {"id":"chatcmpl-9ce1Nupvw1SEEUL1MxkSS1S2KMYoY","object":"chat.completion.chunk","created":1718997333,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_3e7d703517","choices":[{"index":0,"delta":{},"logprobs":null,"finish_reason":"stop"}]}


        data: [DONE]


        '
    headers:
      CF-Cache-Status:
      - DYNAMIC
      CF-RAY:
      - 897653f3e8ba7ba2-ATL
      Connection:
      - keep-alive
      Content-Type:
      - text/event-stream; charset=utf-8
      Date:
      - Fri, 21 Jun 2024 19:15:33 GMT
      Server:
      - cloudflare
      Set-Cookie:
      - __cf_bm=9ch02HraQXiYJx8jBtYzKXOBjm4nToP.1sBISDFt9Gc-1718997333-1.0.1.1-Ykz1rbMzc2Zo8VV5rBwixPedTuO8s_38psrpuLCSy2B.YIyCCXWMGI_JT5WGQVp2gacOcxjWMSVhOOY85gf9QQ;
        path=/; expires=Fri, 21-Jun-24 19:45:33 GMT; domain=.api.openai.com; HttpOnly;
        Secure; SameSite=None
      - _cfuvid=0srdhmUvYEBaQ2xn7BzySIPRoIiEPWzmvngtQRdnpUY-1718997333518-0.0.1.1-604800000;
        path=/; domain=.api.openai.com; HttpOnly; Secure; SameSite=None
      Transfer-Encoding:
      - chunked
      alt-svc:
      - h3=":443"; ma=86400
      openai-organization:
      - crewai-iuxna1
      openai-processing-ms:
      - '165'
      openai-version:
      - '2020-10-01'
      strict-transport-security:
      - max-age=31536000; includeSubDomains
      x-ratelimit-limit-requests:
      - '10000'
      x-ratelimit-limit-tokens:
      - '12000000'
      x-ratelimit-remaining-requests:
      - '9999'
      x-ratelimit-remaining-tokens:
      - '11999712'
      x-ratelimit-reset-requests:
      - 6ms
      x-ratelimit-reset-tokens:
      - 1ms
      x-request-id:
      - 92f00e3ecc754086e0ddf2d998f6f671
    status:
      code: 200
      message: OK
version: 1
1097 tests/cassettes/test_three_task_with_async_execution.yaml Normal file
File diff suppressed because it is too large
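The cassette files above record a single OpenAI HTTP exchange so the suite can replay it offline; the tests below consume them through pytest-recording's `vcr` marker. A minimal sketch of that pattern (the agent and task values are illustrative, not taken from this diff):

```python
import pytest

from crewai import Agent, Crew, Task


@pytest.mark.vcr(filter_headers=["authorization"])  # replays tests/cassettes/<test_name>.yaml
def test_replayed_kickoff():
    agent = Agent(role="Researcher", goal="Research AI", backstory="An example agent.")
    task = Task(description="Say hi.", expected_output="A greeting.", agent=agent)

    result = Crew(agents=[agent], tasks=[task]).kickoff()

    assert result.raw  # no live API call is made; the recorded response is served instead
```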
@@ -1,6 +1,7 @@
"""Test Agent creation and execution basic functionality."""

import json
from concurrent.futures import Future
from unittest import mock
from unittest.mock import patch

@@ -10,9 +11,11 @@ import pytest
from crewai.agent import Agent
from crewai.agents.cache import CacheHandler
from crewai.crew import Crew
from crewai.crews.crew_output import CrewOutput
from crewai.memory.contextual.contextual_memory import ContextualMemory
from crewai.process import Process
from crewai.task import Task
from crewai.tasks.task_output import TaskOutput
from crewai.utilities import Logger, RPMController

ceo = Agent(
@@ -84,6 +87,87 @@ def test_crew_config_conditional_requirement():
    ]


def test_async_task_cannot_include_sequential_async_tasks_in_context():
    task1 = Task(
        description="Task 1",
        async_execution=True,
        expected_output="output",
        agent=researcher,
    )
    task2 = Task(
        description="Task 2",
        async_execution=True,
        expected_output="output",
        agent=researcher,
        context=[task1],
    )
    task3 = Task(
        description="Task 3",
        async_execution=True,
        expected_output="output",
        agent=researcher,
        context=[task2],
    )
    task4 = Task(
        description="Task 4",
        expected_output="output",
        agent=writer,
    )
    task5 = Task(
        description="Task 5",
        async_execution=True,
        expected_output="output",
        agent=researcher,
        context=[task4],
    )

    # This should raise an error because task2 is async and has task1 in its context without a sync task in between
    with pytest.raises(
        ValueError,
        match="Task 'Task 2' is asynchronous and cannot include other sequential asynchronous tasks in its context.",
    ):
        Crew(tasks=[task1, task2, task3, task4, task5], agents=[researcher, writer])

    # This should not raise an error because task5 has a sync task (task4) in its context
    try:
        Crew(tasks=[task1, task4, task5], agents=[researcher, writer])
    except ValueError:
        pytest.fail("Unexpected ValidationError raised")


def test_context_no_future_tasks():

    task2 = Task(
        description="Task 2",
        expected_output="output",
        agent=researcher,
    )
    task3 = Task(
        description="Task 3",
        expected_output="output",
        agent=researcher,
        context=[task2],
    )
    task4 = Task(
        description="Task 4",
        expected_output="output",
        agent=researcher,
    )
    task1 = Task(
        description="Task 1",
        expected_output="output",
        agent=researcher,
        context=[task4],
    )

    # This should raise an error because task1 has a context dependency on a future task (task4)
    with pytest.raises(
        ValueError,
        match="Task 'Task 1' has a context dependency on a future task 'Task 4', which is not allowed.",
    ):
        Crew(tasks=[task1, task2, task3, task4], agents=[researcher, writer])
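Taken together, the two tests above pin down the ordering rules: a task may only reference earlier tasks in its `context`, and an async task may not depend on another async task unless a synchronous task sits between them. A rough restatement of those checks, operating on Task-like objects as defined above (illustrative only, not crewAI's actual validator):

```python
def check_context_rules(tasks):
    """Illustrative restatement of the two rules exercised above."""
    for i, task in enumerate(tasks):
        for dep in task.context or []:
            earlier = [j for j, t in enumerate(tasks[:i]) if t is dep]
            if not earlier:
                # Rule 1: context may only point at tasks scheduled before this one.
                raise ValueError(f"Task '{task.description}' depends on a future task.")
            if task.async_execution and dep.async_execution:
                # Rule 2: two async tasks must be separated by at least one sync task.
                between = tasks[earlier[0] + 1 : i]
                if all(t.async_execution for t in between):
                    raise ValueError(
                        f"Task '{task.description}' is asynchronous and cannot include "
                        "other sequential asynchronous tasks in its context."
                    )
    return tasks
```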


def test_crew_config_with_wrong_keys():
    no_tasks_config = json.dumps(
        {
@@ -136,11 +220,57 @@ def test_crew_creation():
        tasks=tasks,
    )

    assert (
        crew.kickoff()
== "1. **The Rise of AI in Healthcare**: The convergence of AI and healthcare is a promising frontier, offering unprecedented opportunities for disease diagnosis and patient outcome prediction. AI's potential to revolutionize healthcare lies in its capacity to synthesize vast amounts of data, generating precise and efficient results. This technological breakthrough, however, is not just about improving accuracy and efficiency; it's about saving lives. As we stand on the precipice of this transformative era, we must prepare for the complex challenges and ethical questions it poses, while embracing its ability to reshape healthcare as we know it.\n\n2. **Ethical Implications of AI**: As AI intertwines with our daily lives, it presents a complex web of ethical dilemmas. This fusion of technology, philosophy, and ethics is not merely academically intriguing but profoundly impacts the fabric of our society. The questions raised range from decision-making transparency to accountability, and from privacy to potential biases. As we navigate this ethical labyrinth, it is crucial to establish robust frameworks and regulations to ensure that AI serves humanity, and not the other way around.\n\n3. **AI and Data Privacy**: The rise of AI brings with it an insatiable appetite for data, spawning new debates around privacy rights. Balancing the potential benefits of AI with the right to privacy is a unique challenge that intersects technology, law, and human rights. In an increasingly digital world, where personal information forms the backbone of many services, we must grapple with these issues. It's time to redefine the concept of privacy and devise innovative solutions that ensure our digital footprints are not abused.\n\n4. **AI in Job Market**: The discourse around AI's impact on employment is a narrative of contrast, a tale of displacement and creation. On one hand, AI threatens to automate a multitude of jobs, on the other, it promises to create new roles that we cannot yet imagine. This intersection of technology, economics, and labor rights is a critical dialogue that will shape our future. As we stand at this crossroads, we must not only brace ourselves for the changes but also seize the opportunities that this technological wave brings.\n\n5. **Future of AI Agents**: The evolution of AI agents signifies a leap towards a future where AI is not just a tool, but a partner. These sophisticated AI agents, employed in customer service to personal assistants, are redefining our interactions with technology. As we gaze into the future of AI agents, we see a landscape of possibilities and challenges. This journey will be about harnessing the potential of AI agents while navigating the issues of trust, dependence, and ethical use."
    result = crew.kickoff()

expected_string_output = "1. **The Rise of AI in Healthcare**: The convergence of AI and healthcare is a promising frontier, offering unprecedented opportunities for disease diagnosis and patient outcome prediction. AI's potential to revolutionize healthcare lies in its capacity to synthesize vast amounts of data, generating precise and efficient results. This technological breakthrough, however, is not just about improving accuracy and efficiency; it's about saving lives. As we stand on the precipice of this transformative era, we must prepare for the complex challenges and ethical questions it poses, while embracing its ability to reshape healthcare as we know it.\n\n2. **Ethical Implications of AI**: As AI intertwines with our daily lives, it presents a complex web of ethical dilemmas. This fusion of technology, philosophy, and ethics is not merely academically intriguing but profoundly impacts the fabric of our society. The questions raised range from decision-making transparency to accountability, and from privacy to potential biases. As we navigate this ethical labyrinth, it is crucial to establish robust frameworks and regulations to ensure that AI serves humanity, and not the other way around.\n\n3. **AI and Data Privacy**: The rise of AI brings with it an insatiable appetite for data, spawning new debates around privacy rights. Balancing the potential benefits of AI with the right to privacy is a unique challenge that intersects technology, law, and human rights. In an increasingly digital world, where personal information forms the backbone of many services, we must grapple with these issues. It's time to redefine the concept of privacy and devise innovative solutions that ensure our digital footprints are not abused.\n\n4. **AI in Job Market**: The discourse around AI's impact on employment is a narrative of contrast, a tale of displacement and creation. On one hand, AI threatens to automate a multitude of jobs, on the other, it promises to create new roles that we cannot yet imagine. This intersection of technology, economics, and labor rights is a critical dialogue that will shape our future. As we stand at this crossroads, we must not only brace ourselves for the changes but also seize the opportunities that this technological wave brings.\n\n5. **Future of AI Agents**: The evolution of AI agents signifies a leap towards a future where AI is not just a tool, but a partner. These sophisticated AI agents, employed in customer service to personal assistants, are redefining our interactions with technology. As we gaze into the future of AI agents, we see a landscape of possibilities and challenges. This journey will be about harnessing the potential of AI agents while navigating the issues of trust, dependence, and ethical use."

    assert str(result) == expected_string_output
    assert result.raw == expected_string_output
    assert isinstance(result, CrewOutput)
    assert len(result.tasks_output) == len(tasks)
    assert result.raw == expected_string_output
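The rewritten assertions above capture the key API change in this diff: `Crew.kickoff()` now returns a `CrewOutput` instead of a plain string. Judging from the assertions, the object stringifies to the raw final answer and also exposes structured fields; a hedged usage sketch, continuing from the `crew` built in the test:

```python
result = crew.kickoff()

print(str(result))  # stringifies to the raw final answer (backwards compatible)
print(result.raw)   # the same text, as an explicit attribute

for task_output in result.tasks_output:  # one TaskOutput per executed task
    print(task_output.agent, task_output.raw)
```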


@pytest.mark.vcr(filter_headers=["authorization"])
def test_sync_task_execution():
    from unittest.mock import patch

    tasks = [
        Task(
            description="Give me a list of 5 interesting ideas to explore for an article, what makes them unique and interesting.",
            expected_output="Bullet point list of 5 important events.",
            agent=researcher,
        ),
        Task(
            description="Write an amazing paragraph highlight for each idea that showcases how good an article about this topic could be. Return the list of ideas with their paragraph and your notes.",
            expected_output="A 4 paragraph article about AI.",
            agent=writer,
        ),
    ]

    crew = Crew(
        agents=[researcher, writer],
        process=Process.sequential,
        tasks=tasks,
    )

    mock_task_output = TaskOutput(
        description="Mock description", raw="mocked output", agent="mocked agent"
    )

    # Because we are mocking execute_sync, we never hit the underlying _execute_core
    # which sets the output attribute of the task
    for task in tasks:
        task.output = mock_task_output

    with patch.object(
        Task, "execute_sync", return_value=mock_task_output
    ) as mock_execute_sync:
        crew.kickoff()

    # Assert that execute_sync was called for each task
    assert mock_execute_sync.call_count == len(tasks)
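The mocking idiom above recurs throughout this file: because `Task.execute_sync` is patched out, `_execute_core` never runs, so each task's `output` has to be pre-set by hand. A condensed sketch of the pattern:

```python
from unittest.mock import patch

from crewai.task import Task
from crewai.tasks.task_output import TaskOutput

mock_output = TaskOutput(description="stub", raw="stubbed answer", agent="stub agent")
for task in tasks:
    task.output = mock_output  # execute_sync is bypassed, so set output manually

with patch.object(Task, "execute_sync", return_value=mock_output) as mocked:
    crew.kickoff()

assert mocked.call_count == len(tasks)  # one call per task in the crew
```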


@pytest.mark.vcr(filter_headers=["authorization"])
def test_hierarchical_process():
@@ -157,9 +287,11 @@ def test_hierarchical_process():
        manager_llm=ChatOpenAI(temperature=0, model="gpt-4"),
        tasks=[task],
    )

    result = crew.kickoff()

    assert (
        result
        result.raw
== "1. 'Demystifying AI: An in-depth exploration of Artificial Intelligence for the layperson' - In this piece, we will unravel the enigma of AI, simplifying its complexities into digestible information for the everyday individual. By using relatable examples and analogies, we will journey through the neural networks and machine learning algorithms that define AI, without the jargon and convoluted explanations that often accompany such topics.\n\n2. 'The Role of AI in Startups: A Game Changer?' - Startups today are harnessing the power of AI to revolutionize their businesses. This article will delve into how AI, as an innovative force, is shaping the startup ecosystem, transforming everything from customer service to product development. We'll explore real-life case studies of startups that have leveraged AI to accelerate their growth and disrupt their respective industries.\n\n3. 'AI and Ethics: Navigating the Complex Landscape' - AI brings with it not just technological advancements, but ethical dilemmas as well. This article will engage readers in a thought-provoking discussion on the ethical implications of AI, exploring issues like bias in algorithms, privacy concerns, job displacement, and the moral responsibility of AI developers. We will also discuss potential solutions and frameworks to address these challenges.\n\n4. 'Unveiling the AI Agents: The Future of Customer Service' - AI agents are poised to reshape the customer service landscape, offering businesses the ability to provide round-the-clock support and personalized experiences. In this article, we'll dive deep into the world of AI agents, examining how they work, their benefits and limitations, and how they're set to redefine customer interactions in the digital age.\n\n5. 'From Science Fiction to Reality: AI in Everyday Life' - AI, once a concept limited to the realm of sci-fi, has now permeated our daily lives. This article will highlight the ubiquitous presence of AI, from voice assistants and recommendation algorithms, to autonomous vehicles and smart homes. We'll explore how AI, in its various forms, is transforming our everyday experiences, making the future seem a lot closer than we imagined."
    )

@@ -194,8 +326,10 @@ def test_crew_with_delegating_agents():
        tasks=tasks,
    )

    result = crew.kickoff()

    assert (
        crew.kickoff()
        result.raw
== "AI Agents, simply put, are intelligent systems that can perceive their environment and take actions to reach specific goals. Imagine them as digital assistants that can learn, adapt and make decisions. They operate in the realms of software or hardware, like a chatbot on a website or a self-driving car. The key to their intelligence is their ability to learn from their experiences, making them better at their tasks over time. In today's interconnected world, AI agents are transforming our lives. They enhance customer service experiences, streamline business processes, and even predict trends in data. Vehicles equipped with AI agents are making transportation safer. In healthcare, AI agents are helping to diagnose diseases, personalizing treatment plans, and monitoring patient health. As we embrace the digital era, these AI agents are not just important, they're becoming indispensable, shaping a future where technology works intuitively and intelligently to meet our needs."
    )

@@ -359,49 +493,8 @@ def test_api_calls_throttling(capsys):
    moveon.assert_called()


# This test is not consistent, some issue is happening on the CI when it comes to Prompt tokens
# {'usage_metrics': {'completion_tokens': 34, 'prompt_tokens': 0, 'successful_requests': 2, 'total_tokens': 34}} CI OUTPUT
# {'usage_metrics': {'completion_tokens': 34, 'prompt_tokens': 314, 'successful_requests': 2, 'total_tokens': 348}}
# The issue might be related to the calculate_usage_metrics function
# @pytest.mark.vcr(filter_headers=["authorization"])
# def test_crew_full_output():
#     agent = Agent(
#         role="test role",
#         goal="test goal",
#         backstory="test backstory",
#         allow_delegation=False,
#         verbose=True,
#     )

#     task1 = Task(
#         description="just say hi!",
#         expected_output="your greeting",
#         agent=agent,
#     )
#     task2 = Task(
#         description="just say hello!",
#         expected_output="your greeting",
#         agent=agent,
#     )

#     crew = Crew(agents=[agent], tasks=[task1, task2], full_output=True)

#     result = crew.kickoff()

#     assert result == {
#         "final_output": "Hello!",
#         "tasks_outputs": [task1.output, task2.output],
#         "usage_metrics": {
#             "total_tokens": 348,
#             "prompt_tokens": 314,
#             "completion_tokens": 34,
#             "successful_requests": 2,
#         },
#     }


@pytest.mark.vcr(filter_headers=["authorization"])
def test_crew_kickoff_for_each_full_ouput():
def test_crew_kickoff_usage_metrics():
    inputs = [
        {"topic": "dog"},
        {"topic": "cat"},
@@ -420,14 +513,11 @@ def test_crew_kickoff_for_each_full_ouput():
        agent=agent,
    )

    crew = Crew(agents=[agent], tasks=[task], full_output=True)
    crew = Crew(agents=[agent], tasks=[task])
    results = crew.kickoff_for_each(inputs=inputs)

    assert len(results) == len(inputs)
    for result in results:
        assert "usage_metrics" in result
        assert isinstance(result["usage_metrics"], dict)

        # Assert that all required keys are in usage_metrics and their values are not None
        for key in [
            "total_tokens",
@@ -435,49 +525,8 @@
            "completion_tokens",
            "successful_requests",
        ]:
            assert key in result["usage_metrics"]
            assert result["usage_metrics"][key] > 0


@pytest.mark.vcr(filter_headers=["authorization"])
@pytest.mark.asyncio
async def test_crew_async_kickoff_for_each_full_ouput():
    inputs = [
        {"topic": "dog"},
        {"topic": "cat"},
        {"topic": "apple"},
    ]

    agent = Agent(
        role="{topic} Researcher",
        goal="Express hot takes on {topic}.",
        backstory="You have a lot of experience with {topic}.",
    )

    task = Task(
        description="Give me an analysis around {topic}.",
        expected_output="1 bullet point about {topic} that's under 15 words.",
        agent=agent,
    )

    crew = Crew(agents=[agent], tasks=[task], full_output=True)
    results = await crew.kickoff_for_each_async(inputs=inputs)

    assert len(results) == len(inputs)
    for result in results:
        assert "usage_metrics" in result
        assert isinstance(result["usage_metrics"], dict)

        # Assert that all required keys are in usage_metrics and their values are not None
        for key in [
            "total_tokens",
            "prompt_tokens",
            "completion_tokens",
            "successful_requests",
        ]:
            assert key in result["usage_metrics"]
            # TODO: FIX THIS WHEN USAGE METRICS ARE RE-DONE
            # assert result["usage_metrics"][key] > 0
            assert key in result.token_usage
            assert result.token_usage[key] > 0


def test_agents_rpm_is_never_set_if_crew_max_RPM_is_not_set():
@@ -500,12 +549,8 @@ def test_agents_rpm_is_never_set_if_crew_max_RPM_is_not_set():
    assert agent._rpm_controller is None


def test_async_task_execution():
    from concurrent.futures import ThreadPoolExecutor
    from unittest.mock import MagicMock, patch

    from crewai.tasks.task_output import TaskOutput

@pytest.mark.vcr(filter_headers=["authorization"])
def test_sequential_async_task_execution_completion():
    list_ideas = Task(
        description="Give me a list of 5 interesting ideas to explore for an article, what makes them unique and interesting.",
        expected_output="Bullet point list of 5 important events.",
@@ -525,41 +570,188 @@ def test_async_task_execution():
        context=[list_ideas, list_important_history],
    )

    sequential_crew = Crew(
        agents=[researcher, writer],
        process=Process.sequential,
        tasks=[list_ideas, list_important_history, write_article],
    )

    sequential_result = sequential_crew.kickoff()
    assert sequential_result.raw.startswith(
        "**The Evolution of Artificial Intelligence: A Journey Through Milestones**"
    )


@pytest.mark.vcr(filter_headers=["authorization"])
def test_single_task_with_async_execution():

    researcher_agent = Agent(
        role="Researcher",
        goal="Make the best research and analysis on content about AI and AI agents",
        backstory="You're an expert researcher, specialized in technology, software engineering, AI and startups. You work as a freelancer and is now working on doing research and analysis for a new customer.",
        allow_delegation=False,
    )

    list_ideas = Task(
        description="Generate a list of 5 interesting ideas to explore for an article, where each bulletpoint is under 15 words.",
        expected_output="Bullet point list of 5 important events. No additional commentary.",
        agent=researcher_agent,
        async_execution=True,
    )

    crew = Crew(
        agents=[researcher_agent],
        process=Process.sequential,
        tasks=[list_ideas],
    )

    result = crew.kickoff()
    assert result.raw.startswith(
        "- The impact of AI agents on remote work productivity."
    )


@pytest.mark.vcr(filter_headers=["authorization"])
def test_three_task_with_async_execution():
    researcher_agent = Agent(
        role="Researcher",
        goal="Make the best research and analysis on content about AI and AI agents",
        backstory="You're an expert researcher, specialized in technology, software engineering, AI and startups. You work as a freelancer and is now working on doing research and analysis for a new customer.",
        allow_delegation=False,
    )

    bullet_list = Task(
        description="Generate a list of 5 interesting ideas to explore for an article, where each bulletpoint is under 15 words.",
        expected_output="Bullet point list of 5 important events. No additional commentary.",
        agent=researcher_agent,
        async_execution=True,
    )
    numbered_list = Task(
        description="Generate a list of 5 interesting ideas to explore for an article, where each bulletpoint is under 15 words.",
        expected_output="Numbered list of 5 important events. No additional commentary.",
        agent=researcher_agent,
        async_execution=True,
    )
    letter_list = Task(
        description="Generate a list of 5 interesting ideas to explore for an article, where each bulletpoint is under 15 words.",
        expected_output="Numbered list using [A), B), C)] list of 5 important events. No additional commentary.",
        agent=researcher_agent,
        async_execution=True,
    )

    # Expected result is that we will get an error
    # because a crew can only end with at most one
    # async task
    with pytest.raises(pydantic_core._pydantic_core.ValidationError) as error:
        Crew(
            agents=[researcher_agent],
            process=Process.sequential,
            tasks=[bullet_list, numbered_list, letter_list],
        )

    assert error.value.errors()[0]["type"] == "async_task_count"
    assert (
        "The crew must end with at most one asynchronous task."
        in error.value.errors()[0]["msg"]
    )
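The rejected construction documents the rule: a sequential crew may end with at most one asynchronous task, so three trailing async tasks fail validation with the `async_task_count` error. The same tasks should pass once the crew ends on a synchronous task, as in a sketch like this (it assumes `async_execution` is an ordinary mutable field):

```python
# Illustrative: ending the crew on a synchronous task satisfies the validator.
letter_list.async_execution = False

crew = Crew(
    agents=[researcher_agent],
    process=Process.sequential,
    tasks=[bullet_list, numbered_list, letter_list],  # async, async, sync
)
```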


@pytest.mark.vcr(filter_headers=["authorization"])
@pytest.mark.asyncio
async def test_crew_async_kickoff():
    inputs = [
        {"topic": "dog"},
        {"topic": "cat"},
        {"topic": "apple"},
    ]

    agent = Agent(
        role="{topic} Researcher",
        goal="Express hot takes on {topic}.",
        backstory="You have a lot of experience with {topic}.",
    )

    task = Task(
        description="Give me an analysis around {topic}.",
        expected_output="1 bullet point about {topic} that's under 15 words.",
        agent=agent,
    )

    crew = Crew(agents=[agent], tasks=[task])
    results = await crew.kickoff_for_each_async(inputs=inputs)

    assert len(results) == len(inputs)
    for result in results:
        # Assert that all required keys are in usage_metrics and their values are not None
        for key in [
            "total_tokens",
            "prompt_tokens",
            "completion_tokens",
            "successful_requests",
        ]:
            assert key in result.token_usage
            assert result.token_usage[key] > 0
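`kickoff_for_each_async` is a coroutine, which is why the test needs `pytest.mark.asyncio`. Outside pytest, callers would drive it with an event loop themselves; a hedged sketch reusing the crew and inputs above:

```python
import asyncio


async def main():
    results = await crew.kickoff_for_each_async(inputs=inputs)
    for result in results:
        # token_usage behaves like a mapping in the assertions above
        print(result.raw, result.token_usage["total_tokens"])


asyncio.run(main())
```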


@pytest.mark.vcr(filter_headers=["authorization"])
def test_async_task_execution_call_count():
    from unittest.mock import MagicMock, patch

    list_ideas = Task(
        description="Give me a list of 5 interesting ideas to explore for an article, what makes them unique and interesting.",
        expected_output="Bullet point list of 5 important events.",
        agent=researcher,
        async_execution=True,
    )
    list_important_history = Task(
        description="Research the history of AI and give me the 5 most important events that shaped the technology.",
        expected_output="Bullet point list of 5 important events.",
        agent=researcher,
        async_execution=True,
    )
    write_article = Task(
        description="Write an article about the history of AI and its most important events.",
        expected_output="A 4 paragraph article about AI.",
        agent=writer,
    )

    crew = Crew(
        agents=[researcher, writer],
        process=Process.sequential,
        tasks=[list_ideas, list_important_history, write_article],
    )

    with patch.object(Agent, "execute_task") as execute:
        execute.return_value = "ok"
        with patch.object(ThreadPoolExecutor, "submit") as submit:
            future = MagicMock()
            future.result.return_value = "ok"
            submit.return_value = future
    # Create a valid TaskOutput instance to mock the return value
    mock_task_output = TaskOutput(
        description="Mock description", raw="mocked output", agent="mocked agent"
    )

            list_ideas.output = TaskOutput(
                description="A 4 paragraph article about AI.",
                raw_output="ok",
                agent="writer",
            )
            list_important_history.output = TaskOutput(
                description="A 4 paragraph article about AI.",
                raw_output="ok",
                agent="writer",
            )
            crew.kickoff()
            submit.assert_called()
            future.result.assert_called()
    # Create a MagicMock Future instance
    mock_future = MagicMock(spec=Future)
    mock_future.result.return_value = mock_task_output

    # Directly set the output attribute for each task
    list_ideas.output = mock_task_output
    list_important_history.output = mock_task_output
    write_article.output = mock_task_output

    with patch.object(
        Task, "execute_sync", return_value=mock_task_output
    ) as mock_execute_sync, patch.object(
        Task, "execute_async", return_value=mock_future
    ) as mock_execute_async:

        crew.kickoff()

        assert mock_execute_async.call_count == 2
        assert mock_execute_sync.call_count == 1


@pytest.mark.vcr(filter_headers=["authorization"])
def test_kickoff_for_each_single_input():
    """Tests if kickoff_for_each works with a single input."""
    from unittest.mock import patch

    inputs = [{"topic": "dog"}]
    expected_outputs = ["Dogs are loyal companions and popular pets."]

    agent = Agent(
        role="{topic} Researcher",
@@ -573,30 +765,21 @@ def test_kickoff_for_each_single_input():
        agent=agent,
    )

    with patch.object(Agent, "execute_task") as mock_execute_task:
        mock_execute_task.side_effect = expected_outputs
        crew = Crew(agents=[agent], tasks=[task])
        results = crew.kickoff_for_each(inputs=inputs)
    crew = Crew(agents=[agent], tasks=[task])
    results = crew.kickoff_for_each(inputs=inputs)

    assert len(results) == 1
    assert results == expected_outputs


@pytest.mark.vcr(filter_headers=["authorization"])
def test_kickoff_for_each_multiple_inputs():
    """Tests if kickoff_for_each works with multiple inputs."""
    from unittest.mock import patch

    inputs = [
        {"topic": "dog"},
        {"topic": "cat"},
        {"topic": "apple"},
    ]
    expected_outputs = [
        "Dogs are loyal companions and popular pets.",
        "Cats are independent and low-maintenance pets.",
        "Apples are a rich source of dietary fiber and vitamin C.",
    ]

    agent = Agent(
        role="{topic} Researcher",
@@ -610,14 +793,10 @@ def test_kickoff_for_each_multiple_inputs():
        agent=agent,
    )

    with patch.object(Agent, "execute_task") as mock_execute_task:
        mock_execute_task.side_effect = expected_outputs
        crew = Crew(agents=[agent], tasks=[task])
        results = crew.kickoff_for_each(inputs=inputs)
    crew = Crew(agents=[agent], tasks=[task])
    results = crew.kickoff_for_each(inputs=inputs)

    assert len(results) == len(inputs)
    for i, res in enumerate(results):
        assert res == expected_outputs[i]


@pytest.mark.vcr(filter_headers=["authorization"])
@@ -889,7 +1068,7 @@ def test_crew_function_calling_llm():
    with patch.object(llm.client, "create", wraps=llm.client.create) as private_mock:

        @tool
        def learn_about_AI(topic) -> float:
        def learn_about_AI(topic) -> str:
            """Useful for when you need to learn about AI to write a paragraph about it."""
            return "AI is a very broad field."

@@ -938,7 +1117,7 @@ def test_task_with_no_arguments():
    crew = Crew(agents=[researcher], tasks=[task])

    result = crew.kickoff()
    assert result == "75"
    assert result.raw == "75"


def test_code_execution_flag_adds_code_tool_upon_kickoff():
@@ -969,7 +1148,6 @@ def test_code_execution_flag_adds_code_tool_upon_kickoff():

@pytest.mark.vcr(filter_headers=["authorization"])
def test_delegation_is_not_enabled_if_there_are_only_one_agent():
    from unittest.mock import patch

    researcher = Agent(
        role="Researcher",
@@ -985,10 +1163,9 @@ def test_delegation_is_not_enabled_if_there_are_only_one_agent():
    )

    crew = Crew(agents=[researcher], tasks=[task])
    with patch.object(Task, "execute") as execute:
        execute.return_value = "ok"
        crew.kickoff()
        assert task.tools == []

    crew.kickoff()
    assert task.tools == []


@pytest.mark.vcr(filter_headers=["authorization"])
@@ -1006,35 +1183,12 @@ def test_agents_do_not_get_delegation_tools_with_there_is_only_one_agent():

    result = crew.kickoff()
    assert (
        result
        result.raw
        == "Howdy! I hope this message finds you well and brings a smile to your face. Have a fantastic day!"
    )
    assert len(agent.tools) == 0


@pytest.mark.vcr(filter_headers=["authorization"])
def test_agent_usage_metrics_are_captured_for_sequential_process():
    agent = Agent(
        role="Researcher",
        goal="Be super empathetic.",
        backstory="You love to say howdy.",
        allow_delegation=False,
    )

    task = Task(description="say howdy", expected_output="Howdy!", agent=agent)

    crew = Crew(agents=[agent], tasks=[task])

    result = crew.kickoff()
    assert result == "Howdy!"
    assert crew.usage_metrics == {
        "completion_tokens": 17,
        "prompt_tokens": 158,
        "successful_requests": 1,
        "total_tokens": 175,
    }


@pytest.mark.vcr(filter_headers=["authorization"])
def test_sequential_crew_creation_tasks_without_agents():
    task = Task(
@@ -1079,11 +1233,13 @@ def test_agent_usage_metrics_are_captured_for_hierarchical_process():
    )

    result = crew.kickoff()
    assert result == '"Howdy!"'
    assert result.raw == '"Howdy!"'

    print(crew.usage_metrics)

    assert crew.usage_metrics == {
        "total_tokens": 1927,
        "prompt_tokens": 1557,
        "total_tokens": 2217,
        "prompt_tokens": 1847,
        "completion_tokens": 370,
        "successful_requests": 4,
    }
@@ -1110,15 +1266,17 @@ def test_hierarchical_crew_creation_tasks_with_agents():
        manager_llm=ChatOpenAI(model="gpt-4o"),
    )
    crew.kickoff()

    assert crew.manager_agent is not None
    assert crew.manager_agent.tools is not None
    print("TOOL DESCRIPTION", crew.manager_agent.tools[0].description)
    assert crew.manager_agent.tools[0].description.startswith(
        "Delegate a specific task to one of the following coworkers: [Senior Writer]"
        "Delegate a specific task to one of the following coworkers: Senior Writer"
    )


@pytest.mark.vcr(filter_headers=["authorization"])
def test_hierarchical_crew_creation_tasks_without_async_execution():
def test_hierarchical_crew_creation_tasks_with_async_execution():
    from langchain_openai import ChatOpenAI

    task = Task(
@@ -1198,9 +1356,81 @@ def test_crew_inputs_interpolate_both_agents_and_tasks_diff():
    interpolate_task_inputs.assert_called()


def test_task_callback_on_crew():
def test_crew_does_not_interpolate_without_inputs():
    from unittest.mock import patch

    agent = Agent(
        role="{topic} Researcher",
        goal="Express hot takes on {topic}.",
        backstory="You have a lot of experience with {topic}.",
    )

    task = Task(
        description="Give me an analysis around {topic}.",
        expected_output="{points} bullet points about {topic}.",
        agent=agent,
    )

    crew = Crew(agents=[agent], tasks=[task])

    with patch.object(Agent, "interpolate_inputs") as interpolate_agent_inputs:
        with patch.object(Task, "interpolate_inputs") as interpolate_task_inputs:
            crew.kickoff()
            interpolate_agent_inputs.assert_not_called()
            interpolate_task_inputs.assert_not_called()


# TODO: Ask @joao if we want to start throwing errors if inputs are not provided
# def test_crew_partial_inputs():
#     agent = Agent(
#         role="{topic} Researcher",
#         goal="Express hot takes on {topic}.",
#         backstory="You have a lot of experience with {topic}.",
#     )

#     task = Task(
#         description="Give me an analysis around {topic}.",
#         expected_output="{points} bullet points about {topic}.",
#     )

#     crew = Crew(agents=[agent], tasks=[task], inputs={"topic": "AI"})
#     inputs = {"topic": "AI"}
#     crew._interpolate_inputs(inputs=inputs)  # Manual call for now

#     assert crew.tasks[0].description == "Give me an analysis around AI."
#     assert crew.tasks[0].expected_output == "{points} bullet points about AI."
#     assert crew.agents[0].role == "AI Researcher"
#     assert crew.agents[0].goal == "Express hot takes on AI."
#     assert crew.agents[0].backstory == "You have a lot of experience with AI."


# TODO: If we do want to throw errors when inputs are missing, add in this test.
# def test_crew_invalid_inputs():
#     agent = Agent(
#         role="{topic} Researcher",
#         goal="Express hot takes on {topic}.",
#         backstory="You have a lot of experience with {topic}.",
#     )

#     task = Task(
#         description="Give me an analysis around {topic}.",
#         expected_output="{points} bullet points about {topic}.",
#     )

#     crew = Crew(agents=[agent], tasks=[task], inputs={"subject": "AI"})
#     inputs = {"subject": "AI"}
#     crew._interpolate_inputs(inputs=inputs)  # Manual call for now

#     assert crew.tasks[0].description == "Give me an analysis around {topic}."
#     assert crew.tasks[0].expected_output == "{points} bullet points about {topic}."
#     assert crew.agents[0].role == "{topic} Researcher"
#     assert crew.agents[0].goal == "Express hot takes on {topic}."
#     assert crew.agents[0].backstory == "You have a lot of experience with {topic}."


def test_task_callback_on_crew():
    from unittest.mock import MagicMock, patch

    researcher_agent = Agent(
        role="Researcher",
        goal="Make the best research and analysis on content about AI and AI agents",
@@ -1215,17 +1445,23 @@ def test_task_callback_on_crew():
        async_execution=True,
    )

    mock_callback = MagicMock()

    crew = Crew(
        agents=[researcher_agent],
        process=Process.sequential,
        tasks=[list_ideas],
        task_callback=lambda: None,
        task_callback=mock_callback,
    )

    with patch.object(Agent, "execute_task") as execute:
        execute.return_value = "ok"
        crew.kickoff()

        assert list_ideas.callback is not None
        mock_callback.assert_called_once()
        args, _ = mock_callback.call_args
        assert isinstance(args[0], TaskOutput)
|
||||
|
||||
|
||||
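The switch from `lambda: None` to a `MagicMock` makes the callback contract visible: `task_callback` now receives the finished task's `TaskOutput` as its first argument. A sketch of a real callback under that contract, reusing `researcher_agent` and `list_ideas` from the test above (the `TaskOutput` import path is assumed from the test file's context):

```python
from crewai import Crew, Process
from crewai.tasks.task_output import TaskOutput  # import path assumed

def on_task_done(output: TaskOutput) -> None:
    # Invoked once per completed task with that task's TaskOutput.
    print(f"[{output.agent}] {output.raw}")

crew = Crew(
    agents=[researcher_agent],
    process=Process.sequential,
    tasks=[list_ideas],
    task_callback=on_task_done,
)
```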
@pytest.mark.vcr(filter_headers=["authorization"])
@@ -1235,7 +1471,7 @@ def test_tools_with_custom_caching():
    from crewai_tools import tool

    @tool
    def multiplcation_tool(first_number: int, second_number: int) -> str:
    def multiplcation_tool(first_number: int, second_number: int) -> int:
        """Useful for when you need to multiply two numbers together."""
        return first_number * second_number

@@ -1297,7 +1533,7 @@ def test_tools_with_custom_caching():
        input={"first_number": 2, "second_number": 6},
        output=12,
    )
    assert result == "3"
    assert result.raw == "3"
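The assertion change from `result == "3"` to `result.raw == "3"` reflects that `kickoff()` now returns a structured result object rather than a bare string, with the plain-text answer moved to `.raw`. A sketch of the migration for calling code:

```python
result = crew.kickoff()

# Before this changeset: kickoff() returned the final answer as a plain string.
# After: it returns a result object; the string now lives on .raw.
assert result.raw == "3"
```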
@pytest.mark.vcr(filter_headers=["authorization"])
@@ -1395,10 +1631,20 @@ def test_manager_agent():
        tasks=[task],
    )

    with patch.object(Task, "execute") as execute:
    mock_task_output = TaskOutput(
        description="Mock description", raw="mocked output", agent="mocked agent"
    )

    # Because we are mocking execute_sync, we never hit the underlying _execute_core
    # which sets the output attribute of the task
    task.output = mock_task_output

    with patch.object(
        Task, "execute_sync", return_value=mock_task_output
    ) as mock_execute_sync:
        crew.kickoff()
        assert manager.allow_delegation is True
        execute.assert_called()
        mock_execute_sync.assert_called()
def test_manager_agent_in_agents_raises_exception():

@@ -81,7 +81,7 @@ def test_task_prompt_includes_expected_output():

    with patch.object(Agent, "execute_task") as execute:
        execute.return_value = "ok"
        task.execute()
        task.execute_sync()
        execute.assert_called_once_with(task=task, context=None, tools=[])
@@ -104,11 +104,13 @@ def test_task_callback():

    with patch.object(Agent, "execute_task") as execute:
        execute.return_value = "ok"
        task.execute()
        task.execute_sync()
        task_completed.assert_called_once_with(task.output)
def test_task_callback_returns_task_ouput():
    from crewai.tasks.output_format import OutputFormat

    researcher = Agent(
        role="Researcher",
        goal="Make the best research and analysis on content about AI and AI agents",
@@ -127,7 +129,7 @@ def test_task_callback_returns_task_ouput():

    with patch.object(Agent, "execute_task") as execute:
        execute.return_value = "exported_ok"
        task.execute()
        task.execute_sync()
        # Ensure the callback is called with a TaskOutput object serialized to JSON
        task_completed.assert_called_once()
        callback_data = task_completed.call_args[0][0]
@@ -140,10 +142,12 @@ def test_task_callback_returns_task_ouput():
    output_dict = json.loads(callback_data)
    expected_output = {
        "description": task.description,
        "exported_output": "exported_ok",
        "raw_output": "exported_ok",
        "raw": "exported_ok",
        "pydantic": None,
        "json_dict": None,
        "agent": researcher.role,
        "summary": "Give me a list of 5 interesting ideas to explore...",
        "output_format": OutputFormat.RAW,
    }
    assert output_dict == expected_output
@@ -162,7 +166,7 @@ def test_execute_with_agent():
    )

    with patch.object(Agent, "execute_task", return_value="ok") as execute:
        task.execute(agent=researcher)
        task.execute_sync(agent=researcher)
        execute.assert_called_once_with(task=task, context=None, tools=[])


@@ -182,7 +186,7 @@ def test_async_execution():
    )

    with patch.object(Agent, "execute_task", return_value="ok") as execute:
        task.execute(agent=researcher)
        task.execute_async(agent=researcher)
        execute.assert_called_once_with(task=task, context=None, tools=[])
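Both hunks trace the same rename: the old `Task.execute()` entry point is split into an explicit `execute_sync()` and `execute_async()`. A sketch of how callers choose between them; the exact return type of `execute_async` is not shown in this diff and is assumed to be a future-like handle:

```python
# Blocking call: returns the task output once the agent finishes.
output = task.execute_sync(agent=researcher)

# Non-blocking call: starts the task and returns immediately with a
# future-like handle (assumed; the diff only shows the call site).
pending = task.execute_async(agent=researcher)
```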
@@ -200,7 +204,7 @@ def test_multiple_output_type_error():


@pytest.mark.vcr(filter_headers=["authorization"])
def test_output_pydantic():
def test_output_pydantic_sequential():
    class ScoreOutput(BaseModel):
        score: int

@@ -218,13 +222,46 @@ def test_output_pydantic():
        agent=scorer,
    )

    crew = Crew(agents=[scorer], tasks=[task])
    crew = Crew(agents=[scorer], tasks=[task], process=Process.sequential)
    result = crew.kickoff()
    assert isinstance(result, ScoreOutput)
    assert isinstance(result.pydantic, ScoreOutput)
    assert result.to_dict() == {"score": 4}
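The reworked assertions show where the typed output now lives: the kickoff result is no longer the `ScoreOutput` instance itself; it carries one on `.pydantic`, with `.to_dict()` as the plain-dict view. A minimal consumer sketch, reusing `crew` and `ScoreOutput` from the test above:

```python
result = crew.kickoff()

score = result.pydantic  # the validated ScoreOutput instance
assert isinstance(score, ScoreOutput)
assert result.to_dict() == {"score": score.score}
```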
@pytest.mark.vcr(filter_headers=["authorization"])
def test_output_json():
def test_output_pydantic_hierarchical():
    from langchain_openai import ChatOpenAI

    class ScoreOutput(BaseModel):
        score: int

    scorer = Agent(
        role="Scorer",
        goal="Score the title",
        backstory="You're an expert scorer, specialized in scoring titles.",
        allow_delegation=False,
    )

    task = Task(
        description="Give me an integer score between 1-5 for the following title: 'The impact of AI in the future of work'",
        expected_output="The score of the title.",
        output_pydantic=ScoreOutput,
        agent=scorer,
    )

    crew = Crew(
        agents=[scorer],
        tasks=[task],
        process=Process.hierarchical,
        manager_llm=ChatOpenAI(model="gpt-4o"),
    )
    result = crew.kickoff()
    assert isinstance(result.pydantic, ScoreOutput)
    assert result.to_dict() == {"score": 4}
@pytest.mark.vcr(filter_headers=["authorization"])
def test_output_json_sequential():
    class ScoreOutput(BaseModel):
        score: int

@@ -242,9 +279,126 @@ def test_output_json():
        agent=scorer,
    )

    crew = Crew(agents=[scorer], tasks=[task])
    crew = Crew(agents=[scorer], tasks=[task], process=Process.sequential)
    result = crew.kickoff()
    assert '{\n "score": 4\n}' == result
    assert '{"score": 4}' == result.json
    assert result.to_dict() == {"score": 4}
@pytest.mark.vcr(filter_headers=["authorization"])
def test_output_json_hierarchical():
    from langchain_openai import ChatOpenAI

    class ScoreOutput(BaseModel):
        score: int

    scorer = Agent(
        role="Scorer",
        goal="Score the title",
        backstory="You're an expert scorer, specialized in scoring titles.",
        allow_delegation=False,
    )

    task = Task(
        description="Give me an integer score between 1-5 for the following title: 'The impact of AI in the future of work'",
        expected_output="The score of the title.",
        output_json=ScoreOutput,
        agent=scorer,
    )

    crew = Crew(
        agents=[scorer],
        tasks=[task],
        process=Process.hierarchical,
        manager_llm=ChatOpenAI(model="gpt-4o"),
    )
    result = crew.kickoff()
    assert '{"score": 4}' == result.json
    assert result.to_dict() == {"score": 4}
def test_json_property_without_output_json():
    class ScoreOutput(BaseModel):
        score: int

    scorer = Agent(
        role="Scorer",
        goal="Score the title",
        backstory="You're an expert scorer, specialized in scoring titles.",
        allow_delegation=False,
    )

    task = Task(
        description="Give me an integer score between 1-5 for the following title: 'The impact of AI in the future of work'",
        expected_output="The score of the title.",
        output_pydantic=ScoreOutput,  # Using output_pydantic instead of output_json
        agent=scorer,
    )

    crew = Crew(agents=[scorer], tasks=[task], process=Process.sequential)
    result = crew.kickoff()

    with pytest.raises(ValueError) as excinfo:
        _ = result.json  # Attempt to access the json property

    assert "No JSON output found in the final task." in str(excinfo.value)
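Since `.json` raises a `ValueError` whenever the final task was configured with `output_pydantic` instead of `output_json`, downstream code should guard the access. A defensive sketch, using the `.to_dict()` view that the tests above show works for pydantic outputs as well:

```python
import json

try:
    payload = result.json
except ValueError:
    # Raised when the final task did not set output_json; fall back to
    # the dict view, which also covers output_pydantic results.
    payload = json.dumps(result.to_dict())
```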
@pytest.mark.vcr(filter_headers=["authorization"])
def test_output_json_dict_sequential():
    class ScoreOutput(BaseModel):
        score: int

    scorer = Agent(
        role="Scorer",
        goal="Score the title",
        backstory="You're an expert scorer, specialized in scoring titles.",
        allow_delegation=False,
    )

    task = Task(
        description="Give me an integer score between 1-5 for the following title: 'The impact of AI in the future of work'",
        expected_output="The score of the title.",
        output_json=ScoreOutput,
        agent=scorer,
    )

    crew = Crew(agents=[scorer], tasks=[task], process=Process.sequential)
    result = crew.kickoff()
    assert {"score": 4} == result.json_dict
    assert result.to_dict() == {"score": 4}
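Alongside the `.json` string, the result exposes `.json_dict`, an already-parsed dictionary, so consumers can skip `json.loads` entirely; `.to_dict()` returns the same mapping for JSON outputs. For example, with the crew from the test above:

```python
result = crew.kickoff()

scores = result.json_dict          # parsed dict, no json.loads needed
assert scores == result.to_dict()  # equivalent views per the tests above
print(scores["score"])
```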
@pytest.mark.vcr(filter_headers=["authorization"])
def test_output_json_dict_hierarchical():
    from langchain_openai import ChatOpenAI

    class ScoreOutput(BaseModel):
        score: int

    scorer = Agent(
        role="Scorer",
        goal="Score the title",
        backstory="You're an expert scorer, specialized in scoring titles.",
        allow_delegation=False,
    )

    task = Task(
        description="Give me an integer score between 1-5 for the following title: 'The impact of AI in the future of work'",
        expected_output="The score of the title.",
        output_json=ScoreOutput,
        agent=scorer,
    )

    crew = Crew(
        agents=[scorer],
        tasks=[task],
        process=Process.hierarchical,
        manager_llm=ChatOpenAI(model="gpt-4o"),
    )
    result = crew.kickoff()
    assert {"score": 4} == result.json_dict
    assert result.to_dict() == {"score": 4}
@pytest.mark.vcr(filter_headers=["authorization"])
@@ -280,7 +434,11 @@ def test_output_pydantic_to_another_task():

    crew = Crew(agents=[scorer], tasks=[task1, task2], verbose=2)
    result = crew.kickoff()
    assert 5 == result.score
    pydantic_result = result.pydantic
    assert isinstance(
        pydantic_result, ScoreOutput
    ), "Expected pydantic result to be of type ScoreOutput"
    assert 5 == pydantic_result.score
@pytest.mark.vcr(filter_headers=["authorization"])
@@ -311,7 +469,7 @@ def test_output_json_to_another_task():

    crew = Crew(agents=[scorer], tasks=[task1, task2])
    result = crew.kickoff()
    assert '{\n "score": 5\n}' == result
    assert '{"score": 5}' == result.json
@pytest.mark.vcr(filter_headers=["authorization"])
@@ -363,7 +521,9 @@ def test_save_task_json_output():

    with patch.object(Task, "_save_file") as save_file:
        save_file.return_value = None
        crew.kickoff()
        save_file.assert_called_once_with('{\n "score": 4\n}')
        save_file.assert_called_once_with(
            {"score": 4}
        )  # TODO: @Joao, should this be a dict or a json string?
@pytest.mark.vcr(filter_headers=["authorization"])
@@ -556,3 +716,80 @@ def test_interpolate_inputs():
        == "Give me a list of 5 interesting ideas about ML to explore for an article, what makes them unique and interesting."
    )
    assert task.expected_output == "Bullet point list of 5 interesting ideas about ML."
def test_task_output_str_with_pydantic():
    from crewai.tasks.output_format import OutputFormat

    class ScoreOutput(BaseModel):
        score: int

    score_output = ScoreOutput(score=4)
    task_output = TaskOutput(
        description="Test task",
        agent="Test Agent",
        pydantic=score_output,
        output_format=OutputFormat.PYDANTIC,
    )

    assert str(task_output) == str(score_output)
def test_task_output_str_with_json_dict():
    from crewai.tasks.output_format import OutputFormat

    json_dict = {"score": 4}
    task_output = TaskOutput(
        description="Test task",
        agent="Test Agent",
        json_dict=json_dict,
        output_format=OutputFormat.JSON,
    )

    assert str(task_output) == str(json_dict)
def test_task_output_str_with_raw():
    from crewai.tasks.output_format import OutputFormat

    raw_output = "Raw task output"
    task_output = TaskOutput(
        description="Test task",
        agent="Test Agent",
        raw=raw_output,
        output_format=OutputFormat.RAW,
    )

    assert str(task_output) == raw_output
def test_task_output_str_with_pydantic_and_json_dict():
    from crewai.tasks.output_format import OutputFormat

    class ScoreOutput(BaseModel):
        score: int

    score_output = ScoreOutput(score=4)
    json_dict = {"score": 4}
    task_output = TaskOutput(
        description="Test task",
        agent="Test Agent",
        pydantic=score_output,
        json_dict=json_dict,
        output_format=OutputFormat.PYDANTIC,
    )

    # When both pydantic and json_dict are present, pydantic should take precedence
    assert str(task_output) == str(score_output)
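Taken together, these `__str__` tests pin down a resolution order for `TaskOutput`: `pydantic`, then `json_dict`, then `raw`, then the empty string. The json_dict-over-raw step is inferred from that chain rather than asserted directly. A sketch, reusing `TaskOutput` and `OutputFormat` from the tests above:

```python
output = TaskOutput(
    description="Test task",
    agent="Test Agent",
    raw="fallback text",
    json_dict={"score": 4},
    output_format=OutputFormat.JSON,
)

# json_dict should outrank raw in the chain implied above (inferred,
# not directly asserted by these tests), so the dict form is rendered.
assert str(output) == str({"score": 4})
```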
def test_task_output_str_with_none():
    from crewai.tasks.output_format import OutputFormat

    task_output = TaskOutput(
        description="Test task",
        agent="Test Agent",
        output_format=OutputFormat.RAW,
    )

    assert str(task_output) == ""