Compare commits

..

26 Commits

Author SHA1 Message Date
Rip&Tear
a54d34ea5b update readme.md 2024-09-10 21:56:46 +08:00
João Moura
bc793749a5 preparing enw version with deploy 2024-09-07 11:17:12 -07:00
João Moura
a9916940ef preparing new verison 0.55.1 2024-09-07 10:16:07 -07:00
João Moura
b7f4931de5 updating dependencies 2024-09-07 00:55:21 -07:00
João Moura
327b728bef preparing to cut new version 2024-09-07 00:34:34 -07:00
Sean
a9510eec88 Update LLM-Connections.md (#1181)
* Update LLM-Connections.md

* Update LLM-Connections.md

---------

Co-authored-by: João Moura <joaomdmoura@gmail.com>
2024-09-07 04:31:09 -03:00
Brandon Hancock (bhancock_ai)
d6db557f50 Update regex (#1228) 2024-09-07 04:27:58 -03:00
Brandon Hancock (bhancock_ai)
5ae56e3f72 add in 2 small improvements based on joao feedback (#1264) 2024-09-07 04:13:23 -03:00
Astha Puri
1c9ebb59b1 Update Start-a-New-CrewAI-Project-Template-Method.md (#1276)
* Update Start-a-New-CrewAI-Project-Template-Method.md

* Update Start-a-New-CrewAI-Project-Template-Method.md

---------

Co-authored-by: João Moura <joaomdmoura@gmail.com>
2024-09-07 04:12:51 -03:00
Astha Puri
f520ceeb0d Add missing virtual environment commands (#1277)
* Add missing virtual environment commands

* Update Start-a-New-CrewAI-Project-Template-Method.md

---------

Co-authored-by: João Moura <joaomdmoura@gmail.com>
2024-09-07 04:12:04 -03:00
Astha Puri
0df4d2fd4b Update Tasks.md (#1279) 2024-09-07 04:05:56 -03:00
Rip&Tear
596491d932 Update readme.md (#1294)
* Update pyproject.toml

More GH link updates

* Added FAQ section in README.md

---------

Co-authored-by: João Moura <joaomdmoura@gmail.com>
2024-09-07 03:59:48 -03:00
Ali Waleed
72fb109147 Fix: Langtrace Docs (#1297)
* fix langtrace docs

* remove gif for size constraint
2024-09-07 03:58:27 -03:00
Brandon Hancock (bhancock_ai)
40b336d2a5 Brandon/cre 256 default template crew isnt running properly (#1299)
* Update config typecheck to accept agents

* Clean up prints
2024-09-07 03:57:36 -03:00
anmol-aidora
5958df71a2 Updated CrewAI Documentation and Repository link in tools.poetry.urls (#1305)
* Updated CrewAI Documentation and Repository link in tools.poetry.urls

* Update pyproject.toml

---------

Co-authored-by: João Moura <joaomdmoura@gmail.com>
2024-09-07 03:55:02 -03:00
Brandon Hancock (bhancock_ai)
26d9af8367 Brandon/cre 252 add agent to crewai test (#1308)
* Update config typecheck to accept agents

* Clean up prints

* Adding agents to crew evaluator output table

* Properly generating table now

* Update tests
2024-09-07 03:53:23 -03:00
Brandon Hancock (bhancock_ai)
cdaf2d41c7 move away from pydantic v1 (#1284) 2024-09-06 14:22:01 -04:00
Paul Nugent
d9ee104167 Merge pull request #1290 from crewAIInc/DOCS/readme_update
Docs/readme update
2024-09-06 16:18:31 +01:00
Rip&Tear
0b9eeb7cdb Revert "feat: Improve documentation for Conditional Tasks in crewAI"
This reverts commit 18a2722e4d.
2024-09-05 10:30:08 +08:00
Rip&Tear
9b558ddc51 Revert "docs: Improve "Creating and Utilizing Tools in crewAI" documentation"
This reverts commit b955416458.
2024-09-05 10:30:00 +08:00
Rip&Tear
b857afe45b Revert "feat: Improve documentation for TXTSearchTool"
This reverts commit d2fab55561.
2024-09-05 10:29:03 +08:00
Rip&Tear
1d77c8de10 feat: Improve documentation for TXTSearchTool
Updated wording positioning
2024-09-05 10:27:11 +08:00
Rip&Tear
503f3a6372 Update README.md
Updated  GitHub links to point to new Repos
2024-09-05 10:17:46 +08:00
Rip&Tear (aider)
d2fab55561 feat: Improve documentation for TXTSearchTool 2024-09-05 00:06:11 +08:00
Rip&Tear (aider)
b955416458 docs: Improve "Creating and Utilizing Tools in crewAI" documentation 2024-09-04 18:31:09 +08:00
Rip&Tear (aider)
18a2722e4d feat: Improve documentation for Conditional Tasks in crewAI 2024-09-04 18:26:12 +08:00
31 changed files with 341 additions and 312 deletions

View File

@@ -8,11 +8,11 @@
<h3> <h3>
[Homepage](https://www.crewai.io/) | [Documentation](https://docs.crewai.com/) | [Chat with Docs](https://chatg.pt/DWjSBZn) | [Examples](https://github.com/joaomdmoura/crewai-examples) | [Discord](https://discord.com/invite/X4JWnZnxPb) [Homepage](https://www.crewai.com/) | [Documentation](https://docs.crewai.com/) | [Chat with Docs](https://chatg.pt/DWjSBZn) | [Examples](https://github.com/crewAIInc/crewAI-examples) | [Discourse](https://community.crewai.com)
</h3> </h3>
[![GitHub Repo stars](https://img.shields.io/github/stars/joaomdmoura/crewAI)](https://github.com/joaomdmoura/crewAI) [![GitHub Repo stars](https://img.shields.io/github/stars/joaomdmoura/crewAI)](https://github.com/crewAIInc/crewAI)
[![License: MIT](https://img.shields.io/badge/License-MIT-green.svg)](https://opensource.org/licenses/MIT) [![License: MIT](https://img.shields.io/badge/License-MIT-green.svg)](https://opensource.org/licenses/MIT)
</div> </div>
@@ -154,12 +154,12 @@ In addition to the sequential process, you can use the hierarchical process, whi
## Examples ## Examples
You can test different real life examples of AI crews in the [crewAI-examples repo](https://github.com/joaomdmoura/crewAI-examples?tab=readme-ov-file): You can test different real life examples of AI crews in the [crewAI-examples repo](https://github.com/crewAIInc/crewAI-examples?tab=readme-ov-file):
- [Landing Page Generator](https://github.com/joaomdmoura/crewAI-examples/tree/main/landing_page_generator) - [Landing Page Generator](https://github.com/crewAIInc/crewAI-examples/tree/main/landing_page_generator)
- [Having Human input on the execution](https://docs.crewai.com/how-to/Human-Input-on-Execution) - [Having Human input on the execution](https://docs.crewai.com/how-to/Human-Input-on-Execution)
- [Trip Planner](https://github.com/joaomdmoura/crewAI-examples/tree/main/trip_planner) - [Trip Planner](https://github.com/crewAIInc/crewAI-examples/tree/main/trip_planner)
- [Stock Analysis](https://github.com/joaomdmoura/crewAI-examples/tree/main/stock_analysis) - [Stock Analysis](https://github.com/crewAIInc/crewAI-examples/tree/main/stock_analysis)
### Quick Tutorial ### Quick Tutorial
@@ -167,19 +167,19 @@ You can test different real life examples of AI crews in the [crewAI-examples re
### Write Job Descriptions ### Write Job Descriptions
[Check out code for this example](https://github.com/joaomdmoura/crewAI-examples/tree/main/job-posting) or watch a video below: [Check out code for this example](https://github.com/crewAIInc/crewAI-examples/tree/main/job-posting) or watch a video below:
[![Jobs postings](https://img.youtube.com/vi/u98wEMz-9to/maxresdefault.jpg)](https://www.youtube.com/watch?v=u98wEMz-9to "Jobs postings") [![Jobs postings](https://img.youtube.com/vi/u98wEMz-9to/maxresdefault.jpg)](https://www.youtube.com/watch?v=u98wEMz-9to "Jobs postings")
### Trip Planner ### Trip Planner
[Check out code for this example](https://github.com/joaomdmoura/crewAI-examples/tree/main/trip_planner) or watch a video below: [Check out code for this example](https://github.com/crewAIInc/crewAI-examples/tree/main/trip_planner) or watch a video below:
[![Trip Planner](https://img.youtube.com/vi/xis7rWp-hjs/maxresdefault.jpg)](https://www.youtube.com/watch?v=xis7rWp-hjs "Trip Planner") [![Trip Planner](https://img.youtube.com/vi/xis7rWp-hjs/maxresdefault.jpg)](https://www.youtube.com/watch?v=xis7rWp-hjs "Trip Planner")
### Stock Analysis ### Stock Analysis
[Check out code for this example](https://github.com/joaomdmoura/crewAI-examples/tree/main/stock_analysis) or watch a video below: [Check out code for this example](https://github.com/crewAIInc/crewAI-examples/tree/main/stock_analysis) or watch a video below:
[![Stock Analysis](https://img.youtube.com/vi/e0Uj4yWdaAg/maxresdefault.jpg)](https://www.youtube.com/watch?v=e0Uj4yWdaAg "Stock Analysis") [![Stock Analysis](https://img.youtube.com/vi/e0Uj4yWdaAg/maxresdefault.jpg)](https://www.youtube.com/watch?v=e0Uj4yWdaAg "Stock Analysis")
@@ -191,13 +191,12 @@ Please refer to the [Connect crewAI to LLMs](https://docs.crewai.com/how-to/LLM-
## How CrewAI Compares ## How CrewAI Compares
**CrewAI's Advantage**: CrewAI is built with production in mind. It offers the flexibility of Autogen's conversational agents and the structured process approach of ChatDev, but without the rigidity. CrewAI's processes are designed to be dynamic and adaptable, fitting seamlessly into both development and production workflows.
- **Autogen**: While Autogen does good in creating conversational agents capable of working together, it lacks an inherent concept of process. In Autogen, orchestrating agents' interactions requires additional programming, which can become complex and cumbersome as the scale of tasks grows. - **Autogen**: While Autogen does good in creating conversational agents capable of working together, it lacks an inherent concept of process. In Autogen, orchestrating agents' interactions requires additional programming, which can become complex and cumbersome as the scale of tasks grows.
- **ChatDev**: ChatDev introduced the idea of processes into the realm of AI agents, but its implementation is quite rigid. Customizations in ChatDev are limited and not geared towards production environments, which can hinder scalability and flexibility in real-world applications. - **ChatDev**: ChatDev introduced the idea of processes into the realm of AI agents, but its implementation is quite rigid. Customizations in ChatDev are limited and not geared towards production environments, which can hinder scalability and flexibility in real-world applications.
**CrewAI's Advantage**: CrewAI is built with production in mind. It offers the flexibility of Autogen's conversational agents and the structured process approach of ChatDev, but without the rigidity. CrewAI's processes are designed to be dynamic and adaptable, fitting seamlessly into both development and production workflows.
## Contribution ## Contribution
CrewAI is open-source and we welcome contributions. If you're looking to contribute, please: CrewAI is open-source and we welcome contributions. If you're looking to contribute, please:
@@ -285,3 +284,39 @@ Users can opt-in to Further Telemetry, sharing the complete telemetry data by se
## License ## License
CrewAI is released under the MIT License. CrewAI is released under the MIT License.
## Frequently Asked Questions (FAQ)
### Q: What is CrewAI?
A: CrewAI is a cutting-edge framework for orchestrating role-playing, autonomous AI agents. It enables agents to work together seamlessly, tackling complex tasks through collaborative intelligence.
### Q: How do I install CrewAI?
A: You can install CrewAI using pip:
```shell
pip install crewai
```
For additional tools, use:
```shell
pip install 'crewai[tools]'
```
### Q: Can I use CrewAI with local models?
A: Yes, CrewAI supports various LLMs, including local models. You can configure your agents to use local models via tools like Ollama & LM Studio. Check the [LLM Connections documentation](https://docs.crewai.com/how-to/LLM-Connections/) for more details.
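For illustration, a minimal sketch of wiring an agent to a local model; the LangChain `Ollama` wrapper and the `llama3` model name are assumptions, and any locally hosted LLM object your setup supports can be passed the same way:

```python
from crewai import Agent
from langchain_community.llms import Ollama  # assumed local-model wrapper

# Assumes a local Ollama server is running and the model has been pulled,
# e.g. with `ollama pull llama3`.
local_llm = Ollama(model="llama3")

researcher = Agent(
    role="Researcher",
    goal="Summarize recent findings on a topic",
    backstory="An analyst working entirely against a locally hosted model.",
    llm=local_llm,  # the agent now calls the local model instead of a hosted API
)
```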
### Q: What are the key features of CrewAI?
A: Key features include role-based agent design, autonomous inter-agent delegation, flexible task management, process-driven execution, output saving as files, and compatibility with both open-source and proprietary models.
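As a rough sketch of how those pieces fit together (the role, goal, and task text below are placeholders):

```python
from crewai import Agent, Task, Crew, Process

writer = Agent(
    role="Tech Writer",
    goal="Explain CrewAI features in plain language",
    backstory="A concise technical writer.",
)

summary_task = Task(
    description="Write one short paragraph summarizing CrewAI's key features.",
    expected_output="A single paragraph of under 120 words.",
    agent=writer,
)

# A sequential, single-agent crew; delegation and multi-agent setups build on the same shape.
crew = Crew(agents=[writer], tasks=[summary_task], process=Process.sequential)
result = crew.kickoff()
print(result)
```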
### Q: How does CrewAI compare to other AI orchestration tools?
A: CrewAI is designed with production in mind, offering flexibility similar to Autogen's conversational agents and structured processes like ChatDev, but with more adaptability for real-world applications.
### Q: Is CrewAI open-source?
A: Yes, CrewAI is open-source and welcomes contributions from the community.
### Q: Does CrewAI collect any data?
A: CrewAI uses anonymous telemetry to collect usage data for improvement purposes. No sensitive data (like prompts, task descriptions, or API calls) is collected. Users can opt-in to share more detailed data by setting `share_crew=True` on their Crews.
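For example, the opt-in is a single flag on the crew (agent and task contents here are placeholders):

```python
from crewai import Agent, Task, Crew

agent = Agent(role="Analyst", goal="Answer questions", backstory="Example agent")
task = Task(description="Answer one question", expected_output="A short answer", agent=agent)

# share_crew defaults to False; set it to True to share more detailed telemetry.
crew = Crew(agents=[agent], tasks=[task], share_crew=True)
```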
### Q: Where can I find examples of CrewAI in action?
A: You can find various real-life examples in the [crewAI-examples repository](https://github.com/crewAIInc/crewAI-examples), including trip planners, stock analysis tools, and more.
### Q: How can I contribute to CrewAI?
A: Contributions are welcome! You can fork the repository, create a new branch for your feature, add your improvement, and send a pull request. Check the Contribution section in the README for more details.

Binary files changed (contents not shown):

- Binary file (name not shown): Before, Size 1.0 MiB
- Binary file (name not shown): Before, Size 810 KiB
- docs/assets/langtrace1.png (new file): After, Size 223 KiB
- docs/assets/langtrace2.png (new file): After, Size 204 KiB
- docs/assets/langtrace3.png (new file): After, Size 295 KiB

View File

@@ -33,7 +33,7 @@ Each input creates its own run, flowing through all stages of the pipeline. Mult
## Pipeline Attributes ## Pipeline Attributes
| Attribute | Parameters | Description | | Attribute | Parameters | Description |
| :--------- | :--------- | :------------------------------------------------------------------------------------ | | :--------- | :--------- | :---------------------------------------------------------------------------------------------- |
| **Stages** | `stages` | A list of crews, lists of crews, or routers representing the stages to be executed in sequence. | | **Stages** | `stages` | A list of crews, lists of crews, or routers representing the stages to be executed in sequence. |
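As a minimal sketch of the `stages` attribute (the import path and the single-crew pipeline below are assumptions for illustration):

```python
from crewai import Agent, Task, Crew
from crewai.pipeline import Pipeline  # assumed import path

agent = Agent(role="Classifier", goal="Label incoming text", backstory="Example agent")
classify = Task(
    description="Label the input as urgent or normal.",
    expected_output="One word: urgent or normal.",
    agent=agent,
)
classification_crew = Crew(agents=[agent], tasks=[classify])

# stages is a list; each entry may be a crew, a list of crews (run in parallel), or a router.
pipeline = Pipeline(stages=[classification_crew])
```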
## Creating a Pipeline ## Creating a Pipeline
@@ -239,7 +239,7 @@ email_router = Router(
pipeline=normal_pipeline pipeline=normal_pipeline
) )
}, },
default=Pipeline(stages=[normal_pipeline]) # Default to just classification if no urgency score default=Pipeline(stages=[normal_pipeline]) # Default to just normal if no urgency score
) )
# Use the router in a main pipeline # Use the router in a main pipeline

View File

@@ -131,6 +131,7 @@ research_agent = Agent(
verbose=True verbose=True
) )
# to perform a semantic search for a specified query from a text's content across the internet
search_tool = SerperDevTool() search_tool = SerperDevTool()
task = Task( task = Task(
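A fuller sketch of the pattern shown in this hunk, assuming a Serper API key is available (the role and task text are placeholders):

```python
import os
from crewai import Agent, Task, Crew
from crewai_tools import SerperDevTool

# SerperDevTool reads the SERPER_API_KEY environment variable.
os.environ["SERPER_API_KEY"] = "your-serper-api-key"

# to perform a semantic search for a specified query from a text's content across the internet
search_tool = SerperDevTool()

research_agent = Agent(
    role="Researcher",
    goal="Find and summarize the latest AI news",
    backstory="A diligent researcher with web access.",
    tools=[search_tool],
    verbose=True,
)

task = Task(
    description="Search the internet for the latest AI news and summarize the top result.",
    expected_output="A short summary that cites the source URL.",
    agent=research_agent,
)

crew = Crew(agents=[research_agent], tasks=[task])
print(crew.kickoff())
```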

View File

@@ -17,40 +17,12 @@ Before we start, there are a couple of things to note:
Before getting started with CrewAI, make sure that you have installed it via pip: Before getting started with CrewAI, make sure that you have installed it via pip:
```shell ```shell
$ pip install crewai crewai-tools $ pip install 'crewai[tools]'
``` ```
### Virtual Environments
It is highly recommended that you use virtual environments to ensure that your CrewAI project is isolated from other projects and dependencies. Virtual environments provide a clean, separate workspace for each project, preventing conflicts between different versions of packages and libraries. This isolation is crucial for maintaining consistency and reproducibility in your development process. You have multiple options for setting up virtual environments depending on your operating system and Python version:
1. Use venv (Python's built-in virtual environment tool):
venv is included with Python 3.3 and later, making it a convenient choice for many developers. It's lightweight and easy to use, perfect for simple project setups.
To set up virtual environments with venv, refer to the official [Python documentation](https://docs.python.org/3/tutorial/venv.html).
2. Use Conda (A Python virtual environment manager):
Conda is an open-source package manager and environment management system for Python. It's widely used by data scientists, developers, and researchers to manage dependencies and environments in a reproducible way.
To set up virtual environments with Conda, refer to the official [Conda documentation](https://docs.conda.io/projects/conda/en/stable/user-guide/getting-started.html).
3. Use Poetry (A Python package manager and dependency management tool):
Poetry is an open-source Python package manager that simplifies the installation of packages and their dependencies. Poetry offers a convenient way to manage virtual environments and dependencies.
Poetry is CrewAI's preferred tool for package / dependency management in CrewAI.
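For the venv option above, a typical setup looks like this, assuming a Unix-like shell (activate the environment before installing):

```shell
python -m venv .venv
source .venv/bin/activate   # on Windows: .venv\Scripts\activate
pip install 'crewai[tools]'
```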
### Code IDEs
Most users of CrewAI use a Code Editor / Integrated Development Environment (IDE) for building their Crews. You can use any code IDE of your choice. See below for some popular options for Code Editors / Integrated Development Environments (IDE):
- [Visual Studio Code](https://code.visualstudio.com/) - Most popular
- [PyCharm](https://www.jetbrains.com/pycharm/)
- [Cursor AI](https://cursor.com)
Pick one that suits your style and needs.
## Creating a New Project ## Creating a New Project
In this example, we will be using Venv as our virtual environment manager. In this example, we will be using poetry as our virtual environment manager.
To set up a virtual environment, run the following CLI command:
To create a new CrewAI project, run the following CLI command: To create a new CrewAI project, run the following CLI command:
```shell ```shell

View File

@@ -143,13 +143,12 @@ os.environ["OPENAI_MODEL_NAME"]='mistral-small'
from langchain_community.chat_models.solar import SolarChat from langchain_community.chat_models.solar import SolarChat
``` ```
```sh ```sh
os.environ["SOLAR_API_BASE"]='https://api.upstage.ai/v1/solar' os.environ[SOLAR_API_BASE]="https://api.upstage.ai/v1/solar"
os.environ["SOLAR_API_KEY"]='your-solar-api-key' os.environ[SOLAR_API_KEY]="your-solar-api-key"
```
# Free developer API key available here: https://console.upstage.ai/services/solar # Free developer API key available here: https://console.upstage.ai/services/solar
# Langchain Example: https://github.com/langchain-ai/langchain/pull/18556 # Langchain Example: https://github.com/langchain-ai/langchain/pull/18556
```
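Note that, as plain Python, the environment variable names need to stay quoted strings; a minimal sketch of the Solar setup, where the `max_tokens` value is only an example and the `SolarChat` keyword argument is an assumption:

```python
import os
from langchain_community.chat_models.solar import SolarChat

os.environ["SOLAR_API_BASE"] = "https://api.upstage.ai/v1/solar"
os.environ["SOLAR_API_KEY"] = "your-solar-api-key"

# Pass the Solar chat model to your agents as their llm.
llm = SolarChat(max_tokens=1024)  # max_tokens is an arbitrary example value
```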
### Cohere ### Cohere
```python ```python

View File

@@ -7,10 +7,14 @@ description: How to monitor cost, latency, and performance of CrewAI Agents usin
Langtrace is an open-source, external tool that helps you set up observability and evaluations for Large Language Models (LLMs), LLM frameworks, and Vector Databases. While not built directly into CrewAI, Langtrace can be used alongside CrewAI to gain deep visibility into the cost, latency, and performance of your CrewAI Agents. This integration allows you to log hyperparameters, monitor performance regressions, and establish a process for continuous improvement of your Agents. Langtrace is an open-source, external tool that helps you set up observability and evaluations for Large Language Models (LLMs), LLM frameworks, and Vector Databases. While not built directly into CrewAI, Langtrace can be used alongside CrewAI to gain deep visibility into the cost, latency, and performance of your CrewAI Agents. This integration allows you to log hyperparameters, monitor performance regressions, and establish a process for continuous improvement of your Agents.
![Overview of a select series of agent session runs](..%2Fassets%2Flangtrace1.png)
![Overview of agent traces](..%2Fassets%2Flangtrace2.png)
![Overview of llm traces in details](..%2Fassets%2Flangtrace3.png)
## Setup Instructions ## Setup Instructions
1. Sign up for [Langtrace](https://langtrace.ai/) by visiting [https://langtrace.ai/signup](https://langtrace.ai/signup). 1. Sign up for [Langtrace](https://langtrace.ai/) by visiting [https://langtrace.ai/signup](https://langtrace.ai/signup).
2. Create a project and generate an API key. 2. Create a project, set the project type to crewAI & generate an API key.
3. Install Langtrace in your CrewAI project using the following commands: 3. Install Langtrace in your CrewAI project using the following commands:
```bash ```bash
@@ -32,58 +36,29 @@ langtrace.init(api_key='<LANGTRACE_API_KEY>')
from crewai import Agent, Task, Crew from crewai import Agent, Task, Crew
``` ```
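Putting the setup steps together, a minimal end-to-end sketch (agent and task contents are placeholders; Langtrace is initialized before importing CrewAI, as in the snippet above):

```python
from langtrace_python_sdk import langtrace

# Initialize tracing first so subsequent CrewAI/LLM calls are instrumented.
langtrace.init(api_key="<LANGTRACE_API_KEY>")

from crewai import Agent, Task, Crew

analyst = Agent(role="Analyst", goal="Answer a question", backstory="Example agent")
task = Task(
    description="Answer briefly: what does Langtrace observe in a CrewAI run?",
    expected_output="A one-sentence answer.",
    agent=analyst,
)
crew = Crew(agents=[analyst], tasks=[task])

# Token usage, cost, and latency for this run show up in the Langtrace dashboard.
result = crew.kickoff()
print(result)
```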
2. Create your CrewAI agents and tasks as usual.
3. Use Langtrace's tracking functions to monitor your CrewAI operations. For example:
```python
with langtrace.trace("CrewAI Task Execution"):
result = crew.kickoff()
```
### Features and Their Application to CrewAI ### Features and Their Application to CrewAI
1. **LLM Token and Cost Tracking** 1. **LLM Token and Cost Tracking**
- Monitor the token usage and associated costs for each CrewAI agent interaction. - Monitor the token usage and associated costs for each CrewAI agent interaction.
- Example:
```python
with langtrace.trace("Agent Interaction"):
agent_response = agent.execute(task)
```
2. **Trace Graph for Execution Steps** 2. **Trace Graph for Execution Steps**
- Visualize the execution flow of your CrewAI tasks, including latency and logs. - Visualize the execution flow of your CrewAI tasks, including latency and logs.
- Useful for identifying bottlenecks in your agent workflows. - Useful for identifying bottlenecks in your agent workflows.
3. **Dataset Curation with Manual Annotation** 3. **Dataset Curation with Manual Annotation**
- Create datasets from your CrewAI task outputs for future training or evaluation. - Create datasets from your CrewAI task outputs for future training or evaluation.
- Example:
```python
langtrace.log_dataset_item(task_input, agent_output, {"task_type": "research"})
```
4. **Prompt Versioning and Management** 4. **Prompt Versioning and Management**
- Keep track of different versions of prompts used in your CrewAI agents. - Keep track of different versions of prompts used in your CrewAI agents.
- Useful for A/B testing and optimizing agent performance. - Useful for A/B testing and optimizing agent performance.
5. **Prompt Playground with Model Comparisons** 5. **Prompt Playground with Model Comparisons**
- Test and compare different prompts and models for your CrewAI agents before deployment. - Test and compare different prompts and models for your CrewAI agents before deployment.
6. **Testing and Evaluations** 6. **Testing and Evaluations**
- Set up automated tests for your CrewAI agents and tasks. - Set up automated tests for your CrewAI agents and tasks.
- Example:
```python
langtrace.evaluate(agent_output, expected_output, "accuracy")
```
## Monitoring New CrewAI Features
CrewAI has introduced several new features that can be monitored using Langtrace:
1. **Code Execution**: Monitor the performance and output of code executed by agents.
```python
with langtrace.trace("Agent Code Execution"):
code_output = agent.execute_code(code_snippet)
```
2. **Third-party Agent Integration**: Track interactions with LlamaIndex, LangChain, and Autogen agents.

241
poetry.lock generated
View File

@@ -1,4 +1,4 @@
# This file is automatically @generated by Poetry 1.8.3 and should not be changed by hand. # This file is automatically @generated by Poetry 1.7.1 and should not be changed by hand.
[[package]] [[package]]
name = "agentops" name = "agentops"
@@ -390,17 +390,17 @@ lxml = ["lxml"]
[[package]] [[package]]
name = "boto3" name = "boto3"
version = "1.35.13" version = "1.35.14"
description = "The AWS SDK for Python" description = "The AWS SDK for Python"
optional = false optional = false
python-versions = ">=3.8" python-versions = ">=3.8"
files = [ files = [
{file = "boto3-1.35.13-py3-none-any.whl", hash = "sha256:6e220eae161a4c0ed21e2561edcb0fd9603fa621692c50bc099db318ed3e3ad4"}, {file = "boto3-1.35.14-py3-none-any.whl", hash = "sha256:c3e138e9041d59cd34cdc28a587dfdc899dba02ea26ebc3e10fb4bc88e5cf31b"},
{file = "boto3-1.35.13.tar.gz", hash = "sha256:4af17bd7bada591ddaa835d774b242705210e5d45133e25bd73417daa42e53e7"}, {file = "boto3-1.35.14.tar.gz", hash = "sha256:7bc78d7140c353b10a637927fe4bc4c4d95a464d1b8f515d5844def2ee52cbd5"},
] ]
[package.dependencies] [package.dependencies]
botocore = ">=1.35.13,<1.36.0" botocore = ">=1.35.14,<1.36.0"
jmespath = ">=0.7.1,<2.0.0" jmespath = ">=0.7.1,<2.0.0"
s3transfer = ">=0.10.0,<0.11.0" s3transfer = ">=0.10.0,<0.11.0"
@@ -409,13 +409,13 @@ crt = ["botocore[crt] (>=1.21.0,<2.0a0)"]
[[package]] [[package]]
name = "botocore" name = "botocore"
version = "1.35.13" version = "1.35.14"
description = "Low-level, data-driven core of boto 3." description = "Low-level, data-driven core of boto 3."
optional = false optional = false
python-versions = ">=3.8" python-versions = ">=3.8"
files = [ files = [
{file = "botocore-1.35.13-py3-none-any.whl", hash = "sha256:dd8a8bb1946187c8eb902a3b856d3b24df63917e4f2c61a6bce7f3ea9f112761"}, {file = "botocore-1.35.14-py3-none-any.whl", hash = "sha256:24823135232f88266b66ae8e1d0f3d40872c14cd976781f7fe52b8f0d79035a0"},
{file = "botocore-1.35.13.tar.gz", hash = "sha256:f7ae62eab44d731a5ad8917788378316c79c7bceb530a8307ed0f3bca7037341"}, {file = "botocore-1.35.14.tar.gz", hash = "sha256:8515a2fc7ca5bcf0b10016ba05ccf2d642b7cb77d8773026ff2fa5aa3bf38d2e"},
] ]
[package.dependencies] [package.dependencies]
@@ -428,13 +428,13 @@ crt = ["awscrt (==0.21.2)"]
[[package]] [[package]]
name = "build" name = "build"
version = "1.2.1" version = "1.2.2"
description = "A simple, correct Python build frontend" description = "A simple, correct Python build frontend"
optional = false optional = false
python-versions = ">=3.8" python-versions = ">=3.8"
files = [ files = [
{file = "build-1.2.1-py3-none-any.whl", hash = "sha256:75e10f767a433d9a86e50d83f418e83efc18ede923ee5ff7df93b6cb0306c5d4"}, {file = "build-1.2.2-py3-none-any.whl", hash = "sha256:277ccc71619d98afdd841a0e96ac9fe1593b823af481d3b0cea748e8894e0613"},
{file = "build-1.2.1.tar.gz", hash = "sha256:526263f4870c26f26c433545579475377b2b7588b6f1eac76a001e873ae3e19d"}, {file = "build-1.2.2.tar.gz", hash = "sha256:119b2fb462adef986483438377a13b2f42064a2a3a4161f24a0cca698a07ac8c"},
] ]
[package.dependencies] [package.dependencies]
@@ -849,13 +849,13 @@ cron = ["capturer (>=2.4)"]
[[package]] [[package]]
name = "crewai-tools" name = "crewai-tools"
version = "0.8.3" version = "0.12.0"
description = "Set of tools for the crewAI framework" description = "Set of tools for the crewAI framework"
optional = false optional = false
python-versions = "<=3.13,>=3.10" python-versions = "<=3.13,>=3.10"
files = [ files = [
{file = "crewai_tools-0.8.3-py3-none-any.whl", hash = "sha256:a54a10c36b8403250e13d6594bd37db7e7deb3f9fabc77e8720c081864ae6189"}, {file = "crewai_tools-0.12.0-py3-none-any.whl", hash = "sha256:cd1fce27960a1dee99f63f3ae276797feed1b09443d6fdeb62c557855188cc50"},
{file = "crewai_tools-0.8.3.tar.gz", hash = "sha256:f0317ea1d926221b22fcf4b816d71916fe870aa66ed7ee2a0067dba42b5634eb"}, {file = "crewai_tools-0.12.0.tar.gz", hash = "sha256:1f327d4cf436d6f34cf440c0067fac7a67d5256dc279fe8d829219eddbf79ee1"},
] ]
[package.dependencies] [package.dependencies]
@@ -2570,13 +2570,13 @@ langchain-core = ">=0.2.38,<0.3.0"
[[package]] [[package]]
name = "langsmith" name = "langsmith"
version = "0.1.115" version = "0.1.116"
description = "Client library to connect to the LangSmith LLM Tracing and Evaluation Platform." description = "Client library to connect to the LangSmith LLM Tracing and Evaluation Platform."
optional = false optional = false
python-versions = "<4.0,>=3.8.1" python-versions = "<4.0,>=3.8.1"
files = [ files = [
{file = "langsmith-0.1.115-py3-none-any.whl", hash = "sha256:04e35cfd4c2d4ff1ea10bb577ff43957b05ebb3d9eb4e06e200701f4a2b4ac9f"}, {file = "langsmith-0.1.116-py3-none-any.whl", hash = "sha256:4b5ea64c81ba5ca309695c85dc3fb4617429a985129ed5d9eca00d1c9d6483f4"},
{file = "langsmith-0.1.115.tar.gz", hash = "sha256:3b775377d858d32354f3ee0dd1ed637068cfe9a1f13e7b3bfa82db1615cdffc9"}, {file = "langsmith-0.1.116.tar.gz", hash = "sha256:5ccd7f5c1840f7c507ab3ee56334a1391de28c8bf72669782e2d82cafeefffa7"},
] ]
[package.dependencies] [package.dependencies]
@@ -4072,19 +4072,6 @@ files = [
{file = "pyarrow-17.0.0-cp312-cp312-win_amd64.whl", hash = "sha256:392bc9feabc647338e6c89267635e111d71edad5fcffba204425a7c8d13610d7"}, {file = "pyarrow-17.0.0-cp312-cp312-win_amd64.whl", hash = "sha256:392bc9feabc647338e6c89267635e111d71edad5fcffba204425a7c8d13610d7"},
{file = "pyarrow-17.0.0-cp38-cp38-macosx_10_15_x86_64.whl", hash = "sha256:af5ff82a04b2171415f1410cff7ebb79861afc5dae50be73ce06d6e870615204"}, {file = "pyarrow-17.0.0-cp38-cp38-macosx_10_15_x86_64.whl", hash = "sha256:af5ff82a04b2171415f1410cff7ebb79861afc5dae50be73ce06d6e870615204"},
{file = "pyarrow-17.0.0-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:edca18eaca89cd6382dfbcff3dd2d87633433043650c07375d095cd3517561d8"}, {file = "pyarrow-17.0.0-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:edca18eaca89cd6382dfbcff3dd2d87633433043650c07375d095cd3517561d8"},
{file = "pyarrow-17.0.0-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:7c7916bff914ac5d4a8fe25b7a25e432ff921e72f6f2b7547d1e325c1ad9d155"},
{file = "pyarrow-17.0.0-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:f553ca691b9e94b202ff741bdd40f6ccb70cdd5fbf65c187af132f1317de6145"},
{file = "pyarrow-17.0.0-cp38-cp38-manylinux_2_28_aarch64.whl", hash = "sha256:0cdb0e627c86c373205a2f94a510ac4376fdc523f8bb36beab2e7f204416163c"},
{file = "pyarrow-17.0.0-cp38-cp38-manylinux_2_28_x86_64.whl", hash = "sha256:d7d192305d9d8bc9082d10f361fc70a73590a4c65cf31c3e6926cd72b76bc35c"},
{file = "pyarrow-17.0.0-cp38-cp38-win_amd64.whl", hash = "sha256:02dae06ce212d8b3244dd3e7d12d9c4d3046945a5933d28026598e9dbbda1fca"},
{file = "pyarrow-17.0.0-cp39-cp39-macosx_10_15_x86_64.whl", hash = "sha256:13d7a460b412f31e4c0efa1148e1d29bdf18ad1411eb6757d38f8fbdcc8645fb"},
{file = "pyarrow-17.0.0-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:9b564a51fbccfab5a04a80453e5ac6c9954a9c5ef2890d1bcf63741909c3f8df"},
{file = "pyarrow-17.0.0-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:32503827abbc5aadedfa235f5ece8c4f8f8b0a3cf01066bc8d29de7539532687"},
{file = "pyarrow-17.0.0-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:a155acc7f154b9ffcc85497509bcd0d43efb80d6f733b0dc3bb14e281f131c8b"},
{file = "pyarrow-17.0.0-cp39-cp39-manylinux_2_28_aarch64.whl", hash = "sha256:dec8d129254d0188a49f8a1fc99e0560dc1b85f60af729f47de4046015f9b0a5"},
{file = "pyarrow-17.0.0-cp39-cp39-manylinux_2_28_x86_64.whl", hash = "sha256:a48ddf5c3c6a6c505904545c25a4ae13646ae1f8ba703c4df4a1bfe4f4006bda"},
{file = "pyarrow-17.0.0-cp39-cp39-win_amd64.whl", hash = "sha256:42bf93249a083aca230ba7e2786c5f673507fa97bbd9725a1e2754715151a204"},
{file = "pyarrow-17.0.0.tar.gz", hash = "sha256:4beca9521ed2c0921c1023e68d097d0299b62c362639ea315572a58f3f50fd28"},
] ]
[package.dependencies] [package.dependencies]
@@ -5475,13 +5462,13 @@ typing-extensions = ">=3.7.4.3"
[[package]] [[package]]
name = "types-requests" name = "types-requests"
version = "2.32.0.20240905" version = "2.32.0.20240907"
description = "Typing stubs for requests" description = "Typing stubs for requests"
optional = false optional = false
python-versions = ">=3.8" python-versions = ">=3.8"
files = [ files = [
{file = "types-requests-2.32.0.20240905.tar.gz", hash = "sha256:e97fd015a5ed982c9ddcd14cc4afba9d111e0e06b797c8f776d14602735e9bd6"}, {file = "types-requests-2.32.0.20240907.tar.gz", hash = "sha256:ff33935f061b5e81ec87997e91050f7b4af4f82027a7a7a9d9aaea04a963fdf8"},
{file = "types_requests-2.32.0.20240905-py3-none-any.whl", hash = "sha256:f46ecb55f5e1a37a58be684cf3f013f166da27552732ef2469a0cc8e62a72881"}, {file = "types_requests-2.32.0.20240907-py3-none-any.whl", hash = "sha256:1d1e79faeaf9d42def77f3c304893dea17a97cae98168ac69f3cb465516ee8da"},
] ]
[package.dependencies] [package.dependencies]
@@ -6004,103 +5991,103 @@ h11 = ">=0.9.0,<1"
[[package]] [[package]]
name = "yarl" name = "yarl"
version = "1.9.11" version = "1.10.0"
description = "Yet another URL library" description = "Yet another URL library"
optional = false optional = false
python-versions = ">=3.8" python-versions = ">=3.8"
files = [ files = [
{file = "yarl-1.9.11-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:79e08c691deae6fcac2fdde2e0515ac561dd3630d7c8adf7b1e786e22f1e193b"}, {file = "yarl-1.10.0-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:1718c0bca5a61edac7a57dcc11856cb01bde13a9360a3cb6baf384b89cfc0b40"},
{file = "yarl-1.9.11-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:752f4b5cf93268dc73c2ae994cc6d684b0dad5118bc87fbd965fd5d6dca20f45"}, {file = "yarl-1.10.0-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:e4657fd290d556a5f3018d07c7b7deadcb622760c0125277d10a11471c340054"},
{file = "yarl-1.9.11-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:441049d3a449fb8756b0535be72c6a1a532938a33e1cf03523076700a5f87a01"}, {file = "yarl-1.10.0-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:044b76d069e69c6b0246f071ebac0576f89c772f806d66ef51e662bd015d03c7"},
{file = "yarl-1.9.11-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:b3dfe17b4aed832c627319da22a33f27f282bd32633d6b145c726d519c89fbaf"}, {file = "yarl-1.10.0-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:c5527d32506c11150ca87f33820057dc284e2a01a87f0238555cada247a8b278"},
{file = "yarl-1.9.11-cp310-cp310-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:67abcb7df27952864440c9c85f1c549a4ad94afe44e2655f77d74b0d25895454"}, {file = "yarl-1.10.0-cp310-cp310-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:36d12d78b8b0d46099d413c8689b5510ad9ce5e443363d1c37b6ac5b3d7cbdfb"},
{file = "yarl-1.9.11-cp310-cp310-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:6de3fa29e76fd1518a80e6af4902c44f3b1b4d7fed28eb06913bba4727443de3"}, {file = "yarl-1.10.0-cp310-cp310-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:11f7f8a72b3e26c533fa7ffa7a8068f4e3aad7b67c5cf7b17ea8c79fc81d9830"},
{file = "yarl-1.9.11-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:fee45b3bd4d8d5786472e056aa1359cc4dc9da68aded95a10cd7929a0ec661fe"}, {file = "yarl-1.10.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:88173836a25b7e5dce989eeee3b92d8ef5cdf512830d4155c6212de98e616f70"},
{file = "yarl-1.9.11-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:c59b23886234abeba62087fd97d10fb6b905d9e36e2f3465d1886ce5c0ca30df"}, {file = "yarl-1.10.0-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:c382e189af10070bcb39caa9406b9cc47b26c1d2257979f11fe03a38be09fea9"},
{file = "yarl-1.9.11-cp310-cp310-musllinux_1_2_aarch64.whl", hash = "sha256:d93c612b2024ac25a3dc01341fd98fdd19c8c5e2011f3dcd084b3743cba8d756"}, {file = "yarl-1.10.0-cp310-cp310-musllinux_1_2_aarch64.whl", hash = "sha256:534b8bc181dca1691cf491c263e084af678a8fb6b6181687c788027d8c317026"},
{file = "yarl-1.9.11-cp310-cp310-musllinux_1_2_i686.whl", hash = "sha256:4d368e3b9ecd50fa22017a20c49e356471af6ae91c4d788c6e9297e25ddf5a62"}, {file = "yarl-1.10.0-cp310-cp310-musllinux_1_2_i686.whl", hash = "sha256:5f3372f9ae1d1f001826b77d0b29d4220e84f6c5f53915e71a825cdd02600065"},
{file = "yarl-1.9.11-cp310-cp310-musllinux_1_2_ppc64le.whl", hash = "sha256:5b593acd45cdd4cf6664d342ceacedf25cd95263b83b964fddd6c78930ea5211"}, {file = "yarl-1.10.0-cp310-cp310-musllinux_1_2_ppc64le.whl", hash = "sha256:4cca9ba00be4bb8a051c4007b60fc91d6c9728c8b70c86cee4c24be9d641002f"},
{file = "yarl-1.9.11-cp310-cp310-musllinux_1_2_s390x.whl", hash = "sha256:224f8186c220ff00079e64bf193909829144d4e5174bb58665ef0da8bf6955c4"}, {file = "yarl-1.10.0-cp310-cp310-musllinux_1_2_s390x.whl", hash = "sha256:a9d8c4be5658834dc688072239d220631ad4b71ff79a5f3d17fb653f16d10759"},
{file = "yarl-1.9.11-cp310-cp310-musllinux_1_2_x86_64.whl", hash = "sha256:91c478741d7563a12162f7a2db96c0d23d93b0521563f1f1f0ece46ea1702d33"}, {file = "yarl-1.10.0-cp310-cp310-musllinux_1_2_x86_64.whl", hash = "sha256:ff45a655ca51e1cb778abbb586083fddb7d896332f47bb3b03bc75e30c25649f"},
{file = "yarl-1.9.11-cp310-cp310-win32.whl", hash = "sha256:1cdb8f5bb0534986776a43df84031da7ff04ac0cf87cb22ae8a6368231949c40"}, {file = "yarl-1.10.0-cp310-cp310-win32.whl", hash = "sha256:9ef7ce61958b3c7b2e2e0927c52d35cf367c5ee410e06e1337ecc83a90c23b95"},
{file = "yarl-1.9.11-cp310-cp310-win_amd64.whl", hash = "sha256:498439af143b43a2b2314451ffd0295410aa0dcbdac5ee18fc8633da4670b605"}, {file = "yarl-1.10.0-cp310-cp310-win_amd64.whl", hash = "sha256:48a48261f8d610b0e15fed033e74798763bc2f8f2c0d769a2a0732511af71f1e"},
{file = "yarl-1.9.11-cp311-cp311-macosx_10_9_universal2.whl", hash = "sha256:9e290de5db4fd4859b4ed57cddfe793fcb218504e65781854a8ac283ab8d5518"}, {file = "yarl-1.10.0-cp311-cp311-macosx_10_9_universal2.whl", hash = "sha256:308d1cce071b5b500e3d95636bbf15dfdb8e87ed081b893555658a7f9869a156"},
{file = "yarl-1.9.11-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:e5f50a2e26cc2b89186f04c97e0ec0ba107ae41f1262ad16832d46849864f914"}, {file = "yarl-1.10.0-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:bc66927f6362ed613a483c22618f88f014994ccbd0b7a25ec1ebc8c472d4b40a"},
{file = "yarl-1.9.11-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:b4a0e724a28d7447e4d549c8f40779f90e20147e94bf949d490402eee09845c6"}, {file = "yarl-1.10.0-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:c4d13071c5b99974cfe2f94c749ecc4baf882f7c4b6e4c40ca3d15d1b7e81f24"},
{file = "yarl-1.9.11-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:85333d38a4fa5997fa2ff6fd169be66626d814b34fa35ec669e8c914ca50a097"}, {file = "yarl-1.10.0-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:348ad53acd41caa489df7db352d620c982ab069855d9635dda73d685bbbc3636"},
{file = "yarl-1.9.11-cp311-cp311-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:6ff184002ee72e4b247240e35d5dce4c2d9a0e81fdbef715dde79ab4718aa541"}, {file = "yarl-1.10.0-cp311-cp311-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:293f7c2b30d015de3f1441c4ee764963b86636fde881b4d6093498d1e8711f69"},
{file = "yarl-1.9.11-cp311-cp311-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:675004040f847c0284827f44a1fa92d8baf425632cc93e7e0aa38408774b07c1"}, {file = "yarl-1.10.0-cp311-cp311-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:315e8853d0ea46aabdce01f1f248fff7b9743de89b555c5f0487f54ac84beae8"},
{file = "yarl-1.9.11-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:b30703a7ade2b53f02e09a30685b70cd54f65ed314a8d9af08670c9a5391af1b"}, {file = "yarl-1.10.0-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:012c506b2c23be4500fb97509aa7e6a575996fb317b80667fa26899d456e2aaf"},
{file = "yarl-1.9.11-cp311-cp311-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:7230007ab67d43cf19200ec15bc6b654e6b85c402f545a6fc565d254d34ff754"}, {file = "yarl-1.10.0-cp311-cp311-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:5f769c2708c31227c5349c3e4c668c8b4b2e25af3e7263723f2ef33e8e3906a0"},
{file = "yarl-1.9.11-cp311-cp311-musllinux_1_2_aarch64.whl", hash = "sha256:8c2cf0c7ad745e1c6530fe6521dfb19ca43338239dfcc7da165d0ef2332c0882"}, {file = "yarl-1.10.0-cp311-cp311-musllinux_1_2_aarch64.whl", hash = "sha256:4f6ac063a4e9bbd4f6cc88cc621516a44d6aec66862ea8399ba063374e4b12c7"},
{file = "yarl-1.9.11-cp311-cp311-musllinux_1_2_i686.whl", hash = "sha256:4567cc08f479ad80fb07ed0c9e1bcb363a4f6e3483a490a39d57d1419bf1c4c7"}, {file = "yarl-1.10.0-cp311-cp311-musllinux_1_2_i686.whl", hash = "sha256:18b7ce6d8c35da8e16dcc8de124a80e250fc8c73f8c02663acf2485c874f1972"},
{file = "yarl-1.9.11-cp311-cp311-musllinux_1_2_ppc64le.whl", hash = "sha256:95adc179a02949c4560ef40f8f650a008380766eb253d74232eb9c024747c111"}, {file = "yarl-1.10.0-cp311-cp311-musllinux_1_2_ppc64le.whl", hash = "sha256:b80246bdee036381636e73ef0f19b032912064622b0e5ee44f6960fd11df12aa"},
{file = "yarl-1.9.11-cp311-cp311-musllinux_1_2_s390x.whl", hash = "sha256:755ae9cff06c429632d750aa8206f08df2e3d422ca67be79567aadbe74ae64cc"}, {file = "yarl-1.10.0-cp311-cp311-musllinux_1_2_s390x.whl", hash = "sha256:183dd37bb5471e8017ab8a998c1ea070b4a0b08a97a7c4e20e0c7ccbe8ebb999"},
{file = "yarl-1.9.11-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:94f71d54c5faf715e92c8434b4a0b968c4d1043469954d228fc031d51086f143"}, {file = "yarl-1.10.0-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:9b6d0d7522b514f054b359409817af4c5ed76fa4fe42d8bd1ed12956804cf595"},
{file = "yarl-1.9.11-cp311-cp311-win32.whl", hash = "sha256:4ae079573efeaa54e5978ce86b77f4175cd32f42afcaf9bfb8a0677e91f84e4e"}, {file = "yarl-1.10.0-cp311-cp311-win32.whl", hash = "sha256:6026a6ef14d038a38ca9d81422db4b6bb7d5da94f9d08f21e0ad9ebd9c4bc3bb"},
{file = "yarl-1.9.11-cp311-cp311-win_amd64.whl", hash = "sha256:9fae7ec5c9a4fe22abb995804e6ce87067dfaf7e940272b79328ce37c8f22097"}, {file = "yarl-1.10.0-cp311-cp311-win_amd64.whl", hash = "sha256:190e70d2f9f16f1c9d666c103d635c9ed4bf8de7803e9fa0495eec405a3e96a8"},
{file = "yarl-1.9.11-cp312-cp312-macosx_10_9_universal2.whl", hash = "sha256:614fa50fd0db41b79f426939a413d216cdc7bab8d8c8a25844798d286a999c5a"}, {file = "yarl-1.10.0-cp312-cp312-macosx_10_9_universal2.whl", hash = "sha256:6bc602c7413e1b5223bc988947125998cb54d6184de45a871985daacc23e6c8c"},
{file = "yarl-1.9.11-cp312-cp312-macosx_10_9_x86_64.whl", hash = "sha256:ff64f575d71eacb5a4d6f0696bfe991993d979423ea2241f23ab19ff63f0f9d1"}, {file = "yarl-1.10.0-cp312-cp312-macosx_10_9_x86_64.whl", hash = "sha256:bf733c835ebbd52bd78a52b919205e0f06d8571f71976a0259e5bcc20d0a2f44"},
{file = "yarl-1.9.11-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:5c23f6dc3d7126b4c64b80aa186ac2bb65ab104a8372c4454e462fb074197bc6"}, {file = "yarl-1.10.0-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:6e91ed5f6818e1e3806eaeb7b14d9e17b90340f23089451ea59a89a29499d760"},
{file = "yarl-1.9.11-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:b8f847cc092c2b85d22e527f91ea83a6cf51533e727e2461557a47a859f96734"}, {file = "yarl-1.10.0-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:23057a004bc9735008eb2a04b6ce94c6c06219cdf2b193997fd3ae6039eb3196"},
{file = "yarl-1.9.11-cp312-cp312-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:63a5dc2866791236779d99d7a422611d22bb3a3d50935bafa4e017ea13e51469"}, {file = "yarl-1.10.0-cp312-cp312-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:2b922c32a1cff62bc43d408d1a8745abeed0a705793f2253c622bf3521922198"},
{file = "yarl-1.9.11-cp312-cp312-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:c335342d482e66254ae94b1231b1532790afb754f89e2e0c646f7f19d09740aa"}, {file = "yarl-1.10.0-cp312-cp312-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:be199fed28861d72df917e355287ad6835555d8210e7f8203060561f24d7d842"},
{file = "yarl-1.9.11-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:e4a8c3dedd081cca134a21179aebe58b6e426e8d1e0202da9d1cafa56e01af3c"}, {file = "yarl-1.10.0-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:5cece693380c1c4a606cdcaa0c54eda8f72cfe1ba83f5149b9023bb955e8fa8e"},
{file = "yarl-1.9.11-cp312-cp312-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:504d19320c92532cabc3495fb7ed6bb599f3c2bfb45fed432049bf4693dbd6d0"}, {file = "yarl-1.10.0-cp312-cp312-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:ff8e803d8ca170e632fb3b4df1bfd29ba29be8edc3e9306c5ffa5fadea234a4f"},
{file = "yarl-1.9.11-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:0b2a8e5eb18181060197e3d5db7e78f818432725c0759bc1e5a9d603d9246389"}, {file = "yarl-1.10.0-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:30dde3a8b88c80a4f049eb4dd240d2a02e89174da6be2525541f949bf9fa38ab"},
{file = "yarl-1.9.11-cp312-cp312-musllinux_1_2_i686.whl", hash = "sha256:f568d70b7187f4002b6b500c0996c37674a25ce44b20716faebe5fdb8bd356e7"}, {file = "yarl-1.10.0-cp312-cp312-musllinux_1_2_i686.whl", hash = "sha256:dff84623e7098cf9bfbb5187f9883051af652b0ce08b9f7084cc8630b87b6457"},
{file = "yarl-1.9.11-cp312-cp312-musllinux_1_2_ppc64le.whl", hash = "sha256:735b285ea46ca7e86ad261a462a071d0968aade44e1a3ea2b7d4f3d63b5aab12"}, {file = "yarl-1.10.0-cp312-cp312-musllinux_1_2_ppc64le.whl", hash = "sha256:8e69b55965a47dd6c79e578abd7d85637b1bb4a7565436630826bdb28aa9b7ad"},
{file = "yarl-1.9.11-cp312-cp312-musllinux_1_2_s390x.whl", hash = "sha256:2d1c81c3b92bef0c1c180048e43a5a85754a61b4f69d6f84df8e4bd615bef25d"}, {file = "yarl-1.10.0-cp312-cp312-musllinux_1_2_s390x.whl", hash = "sha256:5d0c9e1dcc92d46ca89608fe4763fc2362f1e81c19a922c67dbc0f20951466e4"},
{file = "yarl-1.9.11-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:8d6e1c1562b53bd26efd38e886fc13863b8d904d559426777990171020c478a9"}, {file = "yarl-1.10.0-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:32e79d5ae975f7c2cc29f7104691fc9be5ee3724f24e1a7254d72f6219672108"},
{file = "yarl-1.9.11-cp312-cp312-win32.whl", hash = "sha256:aeba4aaa59cb709edb824fa88a27cbbff4e0095aaf77212b652989276c493c00"}, {file = "yarl-1.10.0-cp312-cp312-win32.whl", hash = "sha256:762a196612c2aba4197cd271da65fe08308f7ddf130dc63842c7a76d774b6a2c"},
{file = "yarl-1.9.11-cp312-cp312-win_amd64.whl", hash = "sha256:569309a3efb8369ff5d32edb2a0520ebaf810c3059f11d34477418c90aa878fd"}, {file = "yarl-1.10.0-cp312-cp312-win_amd64.whl", hash = "sha256:8c6214071f653d21bb7b43f7ee519afcbf7084263bb43408f4939d14558290db"},
{file = "yarl-1.9.11-cp313-cp313-macosx_10_13_universal2.whl", hash = "sha256:4915818ac850c3b0413e953af34398775b7a337babe1e4d15f68c8f5c4872553"}, {file = "yarl-1.10.0-cp313-cp313-macosx_10_13_universal2.whl", hash = "sha256:0e0aea8319fdc1ac340236e58b0b7dc763621bce6ce98124a9d58104cafd0aaa"},
{file = "yarl-1.9.11-cp313-cp313-macosx_10_13_x86_64.whl", hash = "sha256:ef9610b2f5a73707d4d8bac040f0115ca848e510e3b1f45ca53e97f609b54130"}, {file = "yarl-1.10.0-cp313-cp313-macosx_10_13_x86_64.whl", hash = "sha256:0b3bf343b4ef9ec600d75363eb9b48ab3bd53b53d4e1c5a9fbf0cfe7ba73a47f"},
{file = "yarl-1.9.11-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:47c0a3dc8076a8dd159de10628dea04215bc7ddaa46c5775bf96066a0a18f82b"}, {file = "yarl-1.10.0-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:05b07e6e0f715eaae9d927a302d9220724392f3c0b4e7f8dfa174bf2e1b8433e"},
{file = "yarl-1.9.11-cp313-cp313-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:545f2fbfa0c723b446e9298b5beba0999ff82ce2c126110759e8dac29b5deaf4"}, {file = "yarl-1.10.0-cp313-cp313-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:8d7bd531d7eec4aa7ef8a99fef91962eeea5158a53af0ec507c476ddf8ebc29c"},
{file = "yarl-1.9.11-cp313-cp313-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:9137975a4ccc163ad5d7a75aad966e6e4e95dedee08d7995eab896a639a0bce2"}, {file = "yarl-1.10.0-cp313-cp313-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:183136dc5d5411872e7529c924189a2e26fac5a7f9769cf13ef854d1d653ad36"},
{file = "yarl-1.9.11-cp313-cp313-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:0b0c70c451d2a86f8408abced5b7498423e2487543acf6fcf618b03f6e669b0a"}, {file = "yarl-1.10.0-cp313-cp313-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:c77a3c10af4aaf8891578fe492ef0990c65cf7005dd371f5ea8007b420958bf6"},
{file = "yarl-1.9.11-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:ce2bd986b1e44528677c237b74d59f215c8bfcdf2d69442aa10f62fd6ab2951c"}, {file = "yarl-1.10.0-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:030d41d48217b180c5a176e59c49d212d54d89f6f53640fa4c1a1766492aec27"},
{file = "yarl-1.9.11-cp313-cp313-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:8d7b717f77846a9631046899c6cc730ea469c0e2fb252ccff1cc119950dbc296"}, {file = "yarl-1.10.0-cp313-cp313-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:6f4f43ba30d604ba391bc7fe2dd104d6b87b62b0de4bbde79e362524b8a1eb75"},
{file = "yarl-1.9.11-cp313-cp313-musllinux_1_2_aarch64.whl", hash = "sha256:3a26a24bbd19241283d601173cea1e5b93dec361a223394e18a1e8e5b0ef20bd"}, {file = "yarl-1.10.0-cp313-cp313-musllinux_1_2_aarch64.whl", hash = "sha256:637dd0f55d1781d4634c23994101c509e455b5ab61af9086b5763b7eca9359aa"},
{file = "yarl-1.9.11-cp313-cp313-musllinux_1_2_i686.whl", hash = "sha256:c189bf01af155ac9882e128d9f3b3ad68a1f2c2f51404afad7201305df4e12b1"}, {file = "yarl-1.10.0-cp313-cp313-musllinux_1_2_i686.whl", hash = "sha256:99e7459ee86a3b81e57777afd3825b8b1acaac8a99f9c0bd02415d80eb3c371b"},
{file = "yarl-1.9.11-cp313-cp313-musllinux_1_2_ppc64le.whl", hash = "sha256:0cbcc2c54084b2bda4109415631db017cf2960f74f9e8fd1698e1400e4f8aae2"}, {file = "yarl-1.10.0-cp313-cp313-musllinux_1_2_ppc64le.whl", hash = "sha256:a80cdb3c15c15b33ecdb080546dcb022789b0084ca66ad41ffa0fe09857fca11"},
{file = "yarl-1.9.11-cp313-cp313-musllinux_1_2_s390x.whl", hash = "sha256:30f201bc65941a4aa59c1236783efe89049ec5549dafc8cd2b63cc179d3767b0"}, {file = "yarl-1.10.0-cp313-cp313-musllinux_1_2_s390x.whl", hash = "sha256:1824bfb932d8100e5c94f4f98c078f23ebc6f6fa93acc3d95408762089c54a06"},
{file = "yarl-1.9.11-cp313-cp313-musllinux_1_2_x86_64.whl", hash = "sha256:922ba3b74f0958a0b5b9c14ff1ef12714a381760c08018f2b9827632783a590c"}, {file = "yarl-1.10.0-cp313-cp313-musllinux_1_2_x86_64.whl", hash = "sha256:90fd64ce00f594db02f603efa502521c440fa1afcf6266be82eb31f19d2d9561"},
{file = "yarl-1.9.11-cp313-cp313-win32.whl", hash = "sha256:17107b4b8c43e66befdcbe543fff2f9c93f7a3a9f8e3a9c9ac42bffeba0e8828"}, {file = "yarl-1.10.0-cp313-cp313-win32.whl", hash = "sha256:687131ee4d045f3d58128ca28f5047ec902f7760545c39bbe003cc737c5a02b5"},
{file = "yarl-1.9.11-cp313-cp313-win_amd64.whl", hash = "sha256:0324506afab4f2e176a93cb08b8abcb8b009e1f324e6cbced999a8f5dd9ddb76"}, {file = "yarl-1.10.0-cp313-cp313-win_amd64.whl", hash = "sha256:493ad061ee025c5ed3a60893cd70204eead1b3f60ccc90682e752f95b845bd46"},
{file = "yarl-1.9.11-cp38-cp38-macosx_10_9_universal2.whl", hash = "sha256:4e4f820fde9437bb47297194f43d29086433e6467fa28fe9876366ad357bd7bb"}, {file = "yarl-1.10.0-cp38-cp38-macosx_10_9_universal2.whl", hash = "sha256:cd65588273d19f8483bc8f32a6fcf602e94a9a7ba287a1725977bd9527cd6c0c"},
{file = "yarl-1.9.11-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:dfa9b9d5c9c0dbe69670f5695264452f5e40947590ec3a38cfddc9640ae8ff89"}, {file = "yarl-1.10.0-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:6f64f8681671624f539eea5564518bc924524c25eb90ab24a7eddc2d872e668e"},
{file = "yarl-1.9.11-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:e700eb26635ce665c018c8cfea058baff9b843ed0cc77aa61849d807bb82a64c"}, {file = "yarl-1.10.0-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:3576ed2c51f8525d4ff5c3279247aacff9540bb43b292c4a37a8e6c6e1691adb"},
{file = "yarl-1.9.11-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:c305c1bdf10869b5e51facf50bd5b15892884aeae81962ae4ba061fc11217103"}, {file = "yarl-1.10.0-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:ca42a9281807fdf8fba86e671d8fdd76f92e9302a6d332957f2bae51c774f8a7"},
{file = "yarl-1.9.11-cp38-cp38-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:c5b7b307140231ea4f7aad5b69355aba2a67f2d7bc34271cffa3c9c324d35b27"}, {file = "yarl-1.10.0-cp38-cp38-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:54a4b5e6a060d46cad6a3cf340f4cb268e6fbc89c589d82a2da58f7db47c47c8"},
{file = "yarl-1.9.11-cp38-cp38-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:a744bdeda6c86cf3025c94eb0e01ccabe949cf385cd75b6576a3ac9669404b68"}, {file = "yarl-1.10.0-cp38-cp38-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:6eec21d8c3aa932c5a89480b58fa877e9c48092ab838ccc76788cbc917ceec0d"},
{file = "yarl-1.9.11-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:8e8ed183c7a8f75e40068333fc185566472a8f6c77a750cf7541e11810576ea5"}, {file = "yarl-1.10.0-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:273baee8a8af5989d5aab51c740e65bc2b1fc6619b9dd192cd16a3fae51100be"},
{file = "yarl-1.9.11-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:c1db9a4384694b5d20bdd9cb53f033b0831ac816416ab176c8d0997835015d22"}, {file = "yarl-1.10.0-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:c1bf63ba496cd4f12d30e916d9a52daa6c91433fedd9cd0d99fef3e13232836f"},
{file = "yarl-1.9.11-cp38-cp38-musllinux_1_2_aarch64.whl", hash = "sha256:70194da6e99713250aa3f335a7fa246b36adf53672a2bcd0ddaa375d04e53dc0"}, {file = "yarl-1.10.0-cp38-cp38-musllinux_1_2_aarch64.whl", hash = "sha256:f8e24b9a4afdffab399191a9f0b0e80eabc7b7fdb9f2dbccdeb8e4d28e5c57bb"},
{file = "yarl-1.9.11-cp38-cp38-musllinux_1_2_i686.whl", hash = "sha256:ddad5cfcda729e22422bb1c85520bdf2770ce6d975600573ac9017fe882f4b7e"}, {file = "yarl-1.10.0-cp38-cp38-musllinux_1_2_i686.whl", hash = "sha256:4c46454fafa31f7241083a0dd21814f63e0fcb4ae49662dc7e286fd6a5160ea1"},
{file = "yarl-1.9.11-cp38-cp38-musllinux_1_2_ppc64le.whl", hash = "sha256:ca35996e0a4bed28fa0640d9512d37952f6b50dea583bcc167d4f0b1e112ac7f"}, {file = "yarl-1.10.0-cp38-cp38-musllinux_1_2_ppc64le.whl", hash = "sha256:beda87b63c08fb4df8cc5353eeefe68efe12aa4f5284958bd1466b14c85e508e"},
{file = "yarl-1.9.11-cp38-cp38-musllinux_1_2_s390x.whl", hash = "sha256:61ec0e80970b21a8f3c4b97fa6c6d181c6c6a135dbc7b4a601a78add3feeb209"}, {file = "yarl-1.10.0-cp38-cp38-musllinux_1_2_s390x.whl", hash = "sha256:9a8d6a0e2b5617b5c15c59db25f20ba429f1fea810f2c09fbf93067cb21ab085"},
{file = "yarl-1.9.11-cp38-cp38-musllinux_1_2_x86_64.whl", hash = "sha256:9636e4519f6c7558fdccf8f91e6e3b98df2340dc505c4cc3286986d33f2096c2"}, {file = "yarl-1.10.0-cp38-cp38-musllinux_1_2_x86_64.whl", hash = "sha256:b453b3dbc1ed4c2907632d05b378123f3fb411cad05d8d96de7d95104ef11c70"},
{file = "yarl-1.9.11-cp38-cp38-win32.whl", hash = "sha256:58081cea14b8feda57c7ce447520e9d0a96c4d010cce54373d789c13242d7083"}, {file = "yarl-1.10.0-cp38-cp38-win32.whl", hash = "sha256:1ea30675fbf0ad6795c100da677ef6a8960a7db05ac5293f02a23c2230203c89"},
{file = "yarl-1.9.11-cp38-cp38-win_amd64.whl", hash = "sha256:7d2dee7d6485807c0f64dd5eab9262b7c0b34f760e502243dd83ec09d647d5e1"}, {file = "yarl-1.10.0-cp38-cp38-win_amd64.whl", hash = "sha256:347011ad09a8f9be3d41fe2d7d611c3a4de4d49aa77bcb9a8c03c7a82fc45248"},
{file = "yarl-1.9.11-cp39-cp39-macosx_10_9_universal2.whl", hash = "sha256:d65ad67f981e93ea11f87815f67d086c4f33da4800cf2106d650dd8a0b79dda4"}, {file = "yarl-1.10.0-cp39-cp39-macosx_10_9_universal2.whl", hash = "sha256:18bc4600eed1907762c1816bb16ac63bc52912e53b5e9a353eb0935a78e95496"},
{file = "yarl-1.9.11-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:752c0d33b4aacdb147871d0754b88f53922c6dc2aff033096516b3d5f0c02a0f"}, {file = "yarl-1.10.0-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:eeb6a40c5ae2616fd38c1e039c6dd50031bbfbc2acacfd7b70a5d64fafc70901"},
{file = "yarl-1.9.11-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:54cc24be98d7f4ff355ca2e725a577e19909788c0db6beead67a0dda70bd3f82"}, {file = "yarl-1.10.0-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:bc544248b5263e1c0f61332ccf35e37404b54213f77ed17457f857f40af51452"},
{file = "yarl-1.9.11-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:1c82126817492bb2ebc946e74af1ffa10aacaca81bee360858477f96124be39a"}, {file = "yarl-1.10.0-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:3352c69dc235850d6bf8ddad915931f00dcab208ac4248b9af46175204c2f5f9"},
{file = "yarl-1.9.11-cp39-cp39-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:8503989860d7ac10c85cb5b607fec003a45049cf7a5b4b72451e87893c6bb990"}, {file = "yarl-1.10.0-cp39-cp39-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:af5b52bfbbd5eb208cf1afe23c5ada443929e9b9d79e9fbc66cacc07e4e39748"},
{file = "yarl-1.9.11-cp39-cp39-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:475e09a67f8b09720192a170ad9021b7abf7827ffd4f3a83826317a705be06b7"}, {file = "yarl-1.10.0-cp39-cp39-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:1eafa7317063de4bc310716cdd9026c13f00b1629e649079a6908c3aafdf5046"},
{file = "yarl-1.9.11-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:afcac5bda602b74ff701e1f683feccd8cce0d5a21dbc68db81bf9bd8fd93ba56"}, {file = "yarl-1.10.0-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:a162cf04fd1e8d81025ec651d14cac4f6e0ca73a3c0a9482de8691b944e3098a"},
{file = "yarl-1.9.11-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:aaeffcb84faceb2923a94a8a9aaa972745d3c728ab54dd011530cc30a3d5d0c1"}, {file = "yarl-1.10.0-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:179b1df5e9cd99234ea65e63d5bfc6dd524b2c3b6cf68a14b94ccbe01ab37ddd"},
{file = "yarl-1.9.11-cp39-cp39-musllinux_1_2_aarch64.whl", hash = "sha256:51a6f770ac86477cd5c553f88a77a06fe1f6f3b643b053fcc7902ab55d6cbe14"}, {file = "yarl-1.10.0-cp39-cp39-musllinux_1_2_aarch64.whl", hash = "sha256:32d2e46848dea122484317485129f080220aa84aeb6a9572ad9015107cebeb07"},
{file = "yarl-1.9.11-cp39-cp39-musllinux_1_2_i686.whl", hash = "sha256:3fcd056cb7dff3aea5b1ee1b425b0fbaa2fbf6a1c6003e88caf524f01de5f395"}, {file = "yarl-1.10.0-cp39-cp39-musllinux_1_2_i686.whl", hash = "sha256:aa1aeb99408be0ca774c5126977eb085fedda6dd7d9198ce4ceb2d06a44325c7"},
{file = "yarl-1.9.11-cp39-cp39-musllinux_1_2_ppc64le.whl", hash = "sha256:21e56c30e39a1833e4e3fd0112dde98c2abcbc4c39b077e6105c76bb63d2aa04"}, {file = "yarl-1.10.0-cp39-cp39-musllinux_1_2_ppc64le.whl", hash = "sha256:d2366e2f987f69752f0588d2035321aaf24272693d75f7f6bb7e8a0f48f7ccdd"},
{file = "yarl-1.9.11-cp39-cp39-musllinux_1_2_s390x.whl", hash = "sha256:0a205ec6349879f5e75dddfb63e069a24f726df5330b92ce76c4752a436aac01"}, {file = "yarl-1.10.0-cp39-cp39-musllinux_1_2_s390x.whl", hash = "sha256:e8da33665ecc64cd3e593098adb449f9c65b4e3bc6338e75ad592da15453d898"},
{file = "yarl-1.9.11-cp39-cp39-musllinux_1_2_x86_64.whl", hash = "sha256:a5706821e1cf3c70dfea223e4e0958ea354f4e2af9420a1bd45c6b547297fb97"}, {file = "yarl-1.10.0-cp39-cp39-musllinux_1_2_x86_64.whl", hash = "sha256:5b46c603bee1f2dd407b8358c2afc9b0472a22ccca528f114e1f4cd30dfecd22"},
{file = "yarl-1.9.11-cp39-cp39-win32.whl", hash = "sha256:cc295969f8c2172b5d013c0871dccfec7a0e1186cf961e7ea575d47b4d5cbd32"}, {file = "yarl-1.10.0-cp39-cp39-win32.whl", hash = "sha256:96422a3322b4d954f4c52403a2fc129ad118c151ee60a717847fb46a8480d1e1"},
{file = "yarl-1.9.11-cp39-cp39-win_amd64.whl", hash = "sha256:55a67dd29367ce7c08a0541bb602ec0a2c10d46c86b94830a1a665f7fd093dfa"}, {file = "yarl-1.10.0-cp39-cp39-win_amd64.whl", hash = "sha256:52d1ae09b0764017e330bb5bf9af760c0168c564225085bb806f687bccffda8a"},
{file = "yarl-1.9.11-py3-none-any.whl", hash = "sha256:c6f6c87665a9e18a635f0545ea541d9640617832af2317d4f5ad389686b4ed3d"}, {file = "yarl-1.10.0-py3-none-any.whl", hash = "sha256:99eaa7d53f509ba1c2fea8fdfec15ba3cd36caca31d57ec6665073b148b5f260"},
{file = "yarl-1.9.11.tar.gz", hash = "sha256:c7548a90cb72b67652e2cd6ae80e2683ee08fde663104528ac7df12d8ef271d2"}, {file = "yarl-1.10.0.tar.gz", hash = "sha256:3bf10a395adac62177ba8ea738617e8de6cbb1cea6aa5d5dd2accde704fc8195"},
] ]
[package.dependencies] [package.dependencies]
@@ -6133,4 +6120,4 @@ tools = ["crewai-tools"]
[metadata] [metadata]
lock-version = "2.0" lock-version = "2.0"
python-versions = ">=3.10,<=3.13" python-versions = ">=3.10,<=3.13"
content-hash = "269990492b05eaecacda4b7164c98b8b5cf545e589faff8c9485a5297b8a745a" content-hash = "0675e966c403260634207bd30679662291cceac370137563681fa02a2538947b"

View File

@@ -1,6 +1,6 @@
[tool.poetry] [tool.poetry]
name = "crewai" name = "crewai"
version = "0.54.0" version = "0.55.2"
description = "Cutting-edge framework for orchestrating role-playing, autonomous AI agents. By fostering collaborative intelligence, CrewAI empowers agents to work together seamlessly, tackling complex tasks." description = "Cutting-edge framework for orchestrating role-playing, autonomous AI agents. By fostering collaborative intelligence, CrewAI empowers agents to work together seamlessly, tackling complex tasks."
authors = ["Joao Moura <joao@crewai.com>"] authors = ["Joao Moura <joao@crewai.com>"]
readme = "README.md" readme = "README.md"
@@ -8,8 +8,8 @@ packages = [{ include = "crewai", from = "src" }]
[tool.poetry.urls] [tool.poetry.urls]
Homepage = "https://crewai.com" Homepage = "https://crewai.com"
Documentation = "https://github.com/joaomdmoura/CrewAI/wiki/Index" Documentation = "https://docs.crewai.com"
Repository = "https://github.com/joaomdmoura/crewai" Repository = "https://github.com/crewAIInc/crewAI"
[tool.poetry.dependencies] [tool.poetry.dependencies]
python = ">=3.10,<=3.13" python = ">=3.10,<=3.13"
@@ -21,7 +21,7 @@ opentelemetry-sdk = "^1.22.0"
opentelemetry-exporter-otlp-proto-http = "^1.22.0" opentelemetry-exporter-otlp-proto-http = "^1.22.0"
instructor = "1.3.3" instructor = "1.3.3"
regex = "^2024.7.24" regex = "^2024.7.24"
crewai-tools = { version = "^0.8.3", optional = true } crewai-tools = { version = "^0.12.0", optional = true }
click = "^8.1.7" click = "^8.1.7"
python-dotenv = "^1.0.0" python-dotenv = "^1.0.0"
appdirs = "^1.4.4" appdirs = "^1.4.4"
@@ -47,7 +47,7 @@ mkdocs-material = { extras = ["imaging"], version = "^9.5.7" }
mkdocs-material-extensions = "^1.3.1" mkdocs-material-extensions = "^1.3.1"
pillow = "^10.2.0" pillow = "^10.2.0"
cairosvg = "^2.7.1" cairosvg = "^2.7.1"
crewai-tools = "^0.8.3" crewai-tools = "^0.12.0"
[tool.poetry.group.test.dependencies] [tool.poetry.group.test.dependencies]
pytest = "^8.0.0" pytest = "^8.0.0"
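The release bump above raises the package to 0.55.2 and moves the optional `crewai-tools` extra to ^0.12.0. To confirm what a given environment actually resolved to after upgrading, a quick check (assuming both packages are installed) might look like:

```python
from importlib.metadata import version

# Expected targets for this release; actual output depends on the environment.
print(version("crewai"))        # 0.55.2 for this release
print(version("crewai-tools"))  # a 0.12.x release when installed via the "tools" extra
```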

View File

@@ -2,6 +2,7 @@ from crewai.agent import Agent
from crewai.crew import Crew from crewai.crew import Crew
from crewai.pipeline import Pipeline from crewai.pipeline import Pipeline
from crewai.process import Process from crewai.process import Process
from crewai.routers import Router
from crewai.task import Task from crewai.task import Task
__all__ = ["Agent", "Crew", "Process", "Task", "Pipeline"] __all__ = ["Agent", "Crew", "Process", "Task", "Pipeline", "Router"]
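With `Router` re-exported from the package root and added to `__all__`, downstream code can import it alongside the other top-level names. A minimal sketch (only the import path is taken from the diff; the Router API itself is not shown here):

```python
# Router is now importable from the package root instead of crewai.routers.
from crewai import Agent, Crew, Pipeline, Process, Router, Task

print(Router.__module__)  # resolves to the crewai.routers submodule it is re-exported from
```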

View File

@@ -1,4 +1,4 @@
ALGORITHMS = ["RS256"] ALGORITHMS = ["RS256"]
AUTH0_DOMAIN = "dev-jzsr0j8zs0atl5ha.us.auth0.com" AUTH0_DOMAIN = "crewai.us.auth0.com"
AUTH0_CLIENT_ID = "CZtyRHuVW80HbLSjk4ggXNzjg4KAt7Oe" AUTH0_CLIENT_ID = "DEVC5Fw6NlRoSzmDCcOhVq85EfLBjKa8"
AUTH0_AUDIENCE = "https://dev-jzsr0j8zs0atl5ha.us.auth0.com/api/v2/" AUTH0_AUDIENCE = "https://crewai.us.auth0.com/api/v2/"

View File

@@ -203,10 +203,11 @@ def deploy():
@deploy.command(name="create") @deploy.command(name="create")
def deploy_create(): @click.option("-y", "--yes", is_flag=True, help="Skip the confirmation prompt")
def deploy_create(yes: bool):
"""Create a Crew deployment.""" """Create a Crew deployment."""
deploy_cmd = DeployCommand() deploy_cmd = DeployCommand()
deploy_cmd.create_crew() deploy_cmd.create_crew(yes)
@deploy.command(name="list") @deploy.command(name="list")

View File

@@ -18,7 +18,7 @@ class CrewAPI:
"User-Agent": f"CrewAI-CLI/{get_crewai_version()}", "User-Agent": f"CrewAI-CLI/{get_crewai_version()}",
} }
self.base_url = getenv( self.base_url = getenv(
"CREWAI_BASE_URL", "https://dev.crewai.com/crewai_plus/api/v1/crews" "CREWAI_BASE_URL", "https://crewai.com/crewai_plus/api/v1/crews"
) )
def _make_request(self, method: str, endpoint: str, **kwargs) -> requests.Response: def _make_request(self, method: str, endpoint: str, **kwargs) -> requests.Response:

View File

@@ -28,9 +28,7 @@ class DeployCommand:
self._telemetry.set_tracer() self._telemetry.set_tracer()
access_token = get_auth_token() access_token = get_auth_token()
except Exception: except Exception:
self._deploy_signup_error_span = self._telemetry.deploy_signup_error_span( self._deploy_signup_error_span = self._telemetry.deploy_signup_error_span()
self
)
console.print( console.print(
"Please sign up/login to CrewAI+ before using the CLI.", "Please sign up/login to CrewAI+ before using the CLI.",
style="bold red", style="bold red",
@@ -40,7 +38,10 @@ class DeployCommand:
self.project_name = get_project_name() self.project_name = get_project_name()
if self.project_name is None: if self.project_name is None:
console.print("No project name found. Please ensure your project has a valid pyproject.toml file.", style="bold red") console.print(
"No project name found. Please ensure your project has a valid pyproject.toml file.",
style="bold red",
)
raise SystemExit raise SystemExit
self.client = CrewAPI(api_key=access_token) self.client = CrewAPI(api_key=access_token)
@@ -100,7 +101,7 @@ class DeployCommand:
Args: Args:
uuid (Optional[str]): The UUID of the crew to deploy. uuid (Optional[str]): The UUID of the crew to deploy.
""" """
self._start_deployment_span = self._telemetry.start_deployment_span(self, uuid) self._start_deployment_span = self._telemetry.start_deployment_span(uuid)
console.print("Starting deployment...", style="bold blue") console.print("Starting deployment...", style="bold blue")
if uuid: if uuid:
response = self.client.deploy_by_uuid(uuid) response = self.client.deploy_by_uuid(uuid)
@@ -116,12 +117,12 @@ class DeployCommand:
else: else:
self._handle_error(json_response) self._handle_error(json_response)
def create_crew(self) -> None: def create_crew(self, confirm: bool) -> None:
""" """
Create a new crew deployment. Create a new crew deployment.
""" """
self._create_crew_deployment_span = self._telemetry.create_crew_deployment_span( self._create_crew_deployment_span = (
self self._telemetry.create_crew_deployment_span()
) )
console.print("Creating deployment...", style="bold blue") console.print("Creating deployment...", style="bold blue")
env_vars = fetch_and_json_env_file() env_vars = fetch_and_json_env_file()
@@ -129,10 +130,13 @@ class DeployCommand:
if remote_repo_url is None: if remote_repo_url is None:
console.print("No remote repository URL found.", style="bold red") console.print("No remote repository URL found.", style="bold red")
console.print("Please ensure your project has a valid remote repository.", style="yellow") console.print(
"Please ensure your project has a valid remote repository.",
style="yellow",
)
return return
self._confirm_input(env_vars, remote_repo_url) self._confirm_input(env_vars, remote_repo_url, confirm)
payload = self._create_payload(env_vars, remote_repo_url) payload = self._create_payload(env_vars, remote_repo_url)
response = self.client.create_crew(payload) response = self.client.create_crew(payload)
@@ -141,14 +145,18 @@ class DeployCommand:
else: else:
self._handle_error(response.json()) self._handle_error(response.json())
def _confirm_input(self, env_vars: Dict[str, str], remote_repo_url: str) -> None: def _confirm_input(
self, env_vars: Dict[str, str], remote_repo_url: str, confirm: bool
) -> None:
""" """
Confirm input parameters with the user. Confirm input parameters with the user.
Args: Args:
env_vars (Dict[str, str]): Environment variables. env_vars (Dict[str, str]): Environment variables.
remote_repo_url (str): Remote repository URL. remote_repo_url (str): Remote repository URL.
confirm (bool): Whether to confirm input.
""" """
if not confirm:
input(f"Press Enter to continue with the following Env vars: {env_vars}") input(f"Press Enter to continue with the following Env vars: {env_vars}")
input( input(
f"Press Enter to continue with the following remote repository: {remote_repo_url}\n" f"Press Enter to continue with the following remote repository: {remote_repo_url}\n"
@@ -266,9 +274,7 @@ class DeployCommand:
uuid (Optional[str]): The UUID of the crew to get logs for. uuid (Optional[str]): The UUID of the crew to get logs for.
log_type (str): The type of logs to retrieve (default: "deployment"). log_type (str): The type of logs to retrieve (default: "deployment").
""" """
self._get_crew_logs_span = self._telemetry.get_crew_logs_span( self._get_crew_logs_span = self._telemetry.get_crew_logs_span(uuid, log_type)
self, uuid, log_type
)
console.print(f"Fetching {log_type} logs...", style="bold blue") console.print(f"Fetching {log_type} logs...", style="bold blue")
if uuid: if uuid:
@@ -291,7 +297,7 @@ class DeployCommand:
Args: Args:
uuid (Optional[str]): The UUID of the crew to remove. uuid (Optional[str]): The UUID of the crew to remove.
""" """
self._remove_crew_span = self._telemetry.remove_crew_span(self, uuid) self._remove_crew_span = self._telemetry.remove_crew_span(uuid)
console.print("Removing deployment...", style="bold blue") console.print("Removing deployment...", style="bold blue")
if uuid: if uuid:
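The `-y/--yes` flag is declared on the CLI command and threaded down to `create_crew`, which only pauses for the interactive `input()` prompts when the flag is absent. A self-contained sketch of that pattern (names and values here are illustrative, not the actual CrewAI CLI):

```python
import click

def create_deployment(skip_confirm: bool) -> None:
    env_vars = {"OPENAI_API_KEY": "***"}             # illustrative values
    repo_url = "https://example.com/org/repo.git"
    if not skip_confirm:
        # Same idea as DeployCommand._confirm_input: pause unless --yes was passed.
        input(f"Press Enter to continue with the following Env vars: {env_vars}")
        input(f"Press Enter to continue with the following remote repository: {repo_url}\n")
    print("Creating deployment...")

@click.command(name="create")
@click.option("-y", "--yes", is_flag=True, help="Skip the confirmation prompt")
def deploy_create(yes: bool) -> None:
    """Create a deployment, optionally skipping interactive confirmation."""
    create_deployment(skip_confirm=yes)

if __name__ == "__main__":
    deploy_create()
```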

View File

@@ -6,7 +6,7 @@ authors = ["Your Name <you@example.com>"]
[tool.poetry.dependencies] [tool.poetry.dependencies]
python = ">=3.10,<=3.13" python = ">=3.10,<=3.13"
crewai = { extras = ["tools"], version = ">=0.54.0,<1.0.0" } crewai = { extras = ["tools"], version = ">=0.55.2,<1.0.0" }
[tool.poetry.scripts] [tool.poetry.scripts]

View File

@@ -6,7 +6,7 @@ authors = ["Your Name <you@example.com>"]
[tool.poetry.dependencies] [tool.poetry.dependencies]
python = ">=3.10,<=3.13" python = ">=3.10,<=3.13"
crewai = { extras = ["tools"], version = ">=0.54.0,<1.0.0" } crewai = { extras = ["tools"], version = ">=0.55.2,<1.0.0" }
asyncio = "*" asyncio = "*"
[tool.poetry.scripts] [tool.poetry.scripts]

View File

@@ -6,7 +6,7 @@ authors = ["Your Name <you@example.com>"]
[tool.poetry.dependencies] [tool.poetry.dependencies]
python = ">=3.10,<=3.13" python = ">=3.10,<=3.13"
crewai = { extras = ["tools"], version = ">=0.54.0,<1.0.0" } crewai = { extras = ["tools"], version = ">=0.55.2,<1.0.0" }
[tool.poetry.scripts] [tool.poetry.scripts]

View File

@@ -584,7 +584,10 @@ class Crew(BaseModel):
self.manager_agent.allow_delegation = True self.manager_agent.allow_delegation = True
manager = self.manager_agent manager = self.manager_agent
if manager.tools is not None and len(manager.tools) > 0: if manager.tools is not None and len(manager.tools) > 0:
raise Exception("Manager agent should not have tools") self._logger.log(
"warning", "Manager agent should not have tools", color="orange"
)
manager.tools = []
manager.tools = self.manager_agent.get_delegation_tools(self.agents) manager.tools = self.manager_agent.get_delegation_tools(self.agents)
else: else:
manager = Agent( manager = Agent(
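The hierarchical-process change above swaps a hard failure for a warn-and-recover path: a manager agent that arrives with tools has them logged away and replaced by the delegation tools. A small standalone sketch of that behavior (the `FakeManager` stand-in is illustrative, not the real Agent class):

```python
import logging
from dataclasses import dataclass, field

logging.basicConfig(level=logging.WARNING)
logger = logging.getLogger("crew")

@dataclass
class FakeManager:                     # stand-in for the manager Agent, for illustration only
    tools: list = field(default_factory=list)

def prepare_manager(manager: FakeManager, delegation_tools: list) -> FakeManager:
    if manager.tools:                  # previously: raise Exception("Manager agent should not have tools")
        logger.warning("Manager agent should not have tools")
        manager.tools = []
    manager.tools = delegation_tools   # the hierarchical process installs delegation tools instead
    return manager

manager = prepare_manager(FakeManager(tools=["web_search"]), ["Delegate work to coworker"])
print(manager.tools)                   # ['Delegate work to coworker']
```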

View File

@@ -103,7 +103,8 @@ def crew(func):
for task_name in sorted_task_names: for task_name in sorted_task_names:
task_instance = tasks[task_name]() task_instance = tasks[task_name]()
instantiated_tasks.append(task_instance) instantiated_tasks.append(task_instance)
if hasattr(task_instance, "agent"): agent_instance = getattr(task_instance, "agent", None)
if agent_instance is not None:
agent_instance = task_instance.agent agent_instance = task_instance.agent
if agent_instance.role not in agent_roles: if agent_instance.role not in agent_roles:
instantiated_agents.append(agent_instance) instantiated_agents.append(agent_instance)
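The decorator fix matters because a task can carry an `agent` attribute that is present but `None`; `hasattr` alone passes, and the following `agent_instance.role` access then fails. A tiny illustration of why `getattr` with a default is the safer check:

```python
class TaskStub:
    agent = None          # the attribute exists, but no agent was assigned

task = TaskStub()

print(hasattr(task, "agent"))          # True, even though the value is None

agent = getattr(task, "agent", None)   # treats "missing" and "None" the same way
if agent is not None:
    print(agent.role)
else:
    print("no agent assigned to this task")
```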

View File

@@ -6,7 +6,7 @@ import uuid
from concurrent.futures import Future from concurrent.futures import Future
from copy import copy from copy import copy
from hashlib import md5 from hashlib import md5
from typing import Any, Dict, List, Optional, Tuple, Type, Union from typing import Any, Dict, List, Optional, Set, Tuple, Type, Union
from opentelemetry.trace import Span from opentelemetry.trace import Span
from pydantic import ( from pydantic import (
@@ -108,6 +108,7 @@ class Task(BaseModel):
description="A converter class used to export structured output", description="A converter class used to export structured output",
default=None, default=None,
) )
processed_by_agents: Set[str] = Field(default_factory=set)
_telemetry: Telemetry = PrivateAttr(default_factory=Telemetry) _telemetry: Telemetry = PrivateAttr(default_factory=Telemetry)
_execution_span: Optional[Span] = PrivateAttr(default=None) _execution_span: Optional[Span] = PrivateAttr(default=None)
@@ -241,6 +242,8 @@ class Task(BaseModel):
self.prompt_context = context self.prompt_context = context
tools = tools or self.tools or [] tools = tools or self.tools or []
self.processed_by_agents.add(agent.role)
result = agent.execute_task( result = agent.execute_task(
task=self, task=self,
context=context, context=context,
@@ -273,9 +276,7 @@ class Task(BaseModel):
content = ( content = (
json_output json_output
if json_output if json_output
else pydantic_output.model_dump_json() else pydantic_output.model_dump_json() if pydantic_output else result
if pydantic_output
else result
) )
self._save_file(content) self._save_file(content)
@@ -310,8 +311,10 @@ class Task(BaseModel):
"""Increment the tools errors counter.""" """Increment the tools errors counter."""
self.tools_errors += 1 self.tools_errors += 1
def increment_delegations(self) -> None: def increment_delegations(self, agent_name: Optional[str]) -> None:
"""Increment the delegations counter.""" """Increment the delegations counter."""
if agent_name:
self.processed_by_agents.add(agent_name)
self.delegations += 1 self.delegations += 1
def copy(self, agents: List["BaseAgent"]) -> "Task": def copy(self, agents: List["BaseAgent"]) -> "Task":
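The new `processed_by_agents` field records every agent role that touched a task: the executing agent is added before `execute_task`, and `increment_delegations` now also records the coworker that received a delegation. A stripped-down pydantic sketch of that bookkeeping (a hypothetical `TaskLike`, not the real Task model):

```python
from typing import Optional, Set
from pydantic import BaseModel, Field

class TaskLike(BaseModel):
    description: str
    delegations: int = 0
    processed_by_agents: Set[str] = Field(default_factory=set)

    def record_execution(self, agent_role: str) -> None:
        # Mirrors the executing agent's role being added before execute_task
        self.processed_by_agents.add(agent_role)

    def increment_delegations(self, agent_name: Optional[str]) -> None:
        # Mirrors the diff: the coworker receiving the delegation is recorded too
        if agent_name:
            self.processed_by_agents.add(agent_name)
        self.delegations += 1

task = TaskLike(description="Research AI LLMs")
task.record_execution("Senior Researcher")
task.increment_delegations("Reporting Analyst")
print(task.processed_by_agents)  # {'Senior Researcher', 'Reporting Analyst'}
```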

View File

@@ -1,8 +1,8 @@
from typing import Any, Dict, Optional from typing import Any, Dict, Optional
from pydantic import BaseModel, Field
from pydantic import BaseModel as PydanticBaseModel from pydantic import BaseModel as PydanticBaseModel
from pydantic import Field as PydanticField from pydantic import Field as PydanticField
from pydantic.v1 import BaseModel, Field
class ToolCalling(BaseModel): class ToolCalling(BaseModel):

View File

@@ -5,7 +5,7 @@ import regex
from langchain.output_parsers import PydanticOutputParser from langchain.output_parsers import PydanticOutputParser
from langchain_core.exceptions import OutputParserException from langchain_core.exceptions import OutputParserException
from langchain_core.outputs import Generation from langchain_core.outputs import Generation
from langchain_core.pydantic_v1 import ValidationError from pydantic import ValidationError
class ToolOutputParser(PydanticOutputParser): class ToolOutputParser(PydanticOutputParser):
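Both hunks above belong to the broader move away from pydantic v1: `BaseModel` and `Field` now come from `pydantic` itself rather than the `pydantic.v1` shim, and `ValidationError` is imported from `pydantic` instead of `langchain_core.pydantic_v1`. A short sketch of what model definitions and error handling look like on the v2 side (the model here is illustrative, not the real ToolCalling):

```python
from pydantic import BaseModel, Field, ValidationError  # pydantic v2; no pydantic.v1 shim

class ToolCallingSketch(BaseModel):
    tool_name: str = Field(..., description="Name of the tool to be called")
    arguments: dict | None = Field(None, description="Keyword arguments for the tool")

try:
    ToolCallingSketch(arguments={"query": "crewai"})     # tool_name is missing
except ValidationError as exc:
    print(exc.error_count(), "validation error(s)")      # v2-style error object
```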

View File

@@ -8,6 +8,7 @@ from langchain_core.tools import BaseTool
from langchain_openai import ChatOpenAI from langchain_openai import ChatOpenAI
from crewai.agents.tools_handler import ToolsHandler from crewai.agents.tools_handler import ToolsHandler
from crewai.task import Task
from crewai.telemetry import Telemetry from crewai.telemetry import Telemetry
from crewai.tools.tool_calling import InstructorToolCalling, ToolCalling from crewai.tools.tool_calling import InstructorToolCalling, ToolCalling
from crewai.utilities import I18N, Converter, ConverterError, Printer from crewai.utilities import I18N, Converter, ConverterError, Printer
@@ -51,7 +52,7 @@ class ToolUsage:
original_tools: List[Any], original_tools: List[Any],
tools_description: str, tools_description: str,
tools_names: str, tools_names: str,
task: Any, task: Task,
function_calling_llm: Any, function_calling_llm: Any,
agent: Any, agent: Any,
action: Any, action: Any,
@@ -154,7 +155,10 @@ class ToolUsage:
"Delegate work to coworker", "Delegate work to coworker",
"Ask question to coworker", "Ask question to coworker",
]: ]:
self.task.increment_delegations() coworker = (
calling.arguments.get("coworker") if calling.arguments else None
)
self.task.increment_delegations(coworker)
if calling.arguments: if calling.arguments:
try: try:
@@ -241,7 +245,7 @@ class ToolUsage:
result = self._remember_format(result=result) # type: ignore # "_remember_format" of "ToolUsage" does not return a value (it only ever returns None) result = self._remember_format(result=result) # type: ignore # "_remember_format" of "ToolUsage" does not return a value (it only ever returns None)
return result return result
def _should_remember_format(self) -> None: def _should_remember_format(self) -> bool:
return self.task.used_tools % self._remember_format_after_usages == 0 return self.task.used_tools % self._remember_format_after_usages == 0
def _remember_format(self, result: str) -> None: def _remember_format(self, result: str) -> None:
@@ -353,10 +357,10 @@ class ToolUsage:
return ToolUsageErrorException( # type: ignore # Incompatible return value type (got "ToolUsageErrorException", expected "ToolCalling | InstructorToolCalling") return ToolUsageErrorException( # type: ignore # Incompatible return value type (got "ToolUsageErrorException", expected "ToolCalling | InstructorToolCalling")
f'{self._i18n.errors("tool_arguments_error")}' f'{self._i18n.errors("tool_arguments_error")}'
) )
calling = ToolCalling( # type: ignore # Unexpected keyword argument "log" for "ToolCalling" calling = ToolCalling(
tool_name=tool.name, tool_name=tool.name,
arguments=arguments, arguments=arguments,
log=tool_string, log=tool_string, # type: ignore
) )
except Exception as e: except Exception as e:
self._run_attempts += 1 self._run_attempts += 1
@@ -404,19 +408,19 @@ class ToolUsage:
'"' + value.replace('"', '\\"') + '"' '"' + value.replace('"', '\\"') + '"'
) # Re-encapsulate with double quotes ) # Re-encapsulate with double quotes
elif value.isdigit(): # Check if value is a digit, hence integer elif value.isdigit(): # Check if value is a digit, hence integer
formatted_value = value value = value
elif value.lower() in [ elif value.lower() in [
"true", "true",
"false", "false",
"null", "null",
]: # Check for boolean and null values ]: # Check for boolean and null values
formatted_value = value.lower() value = value.lower()
else: else:
# Assume the value is a string and needs quotes # Assume the value is a string and needs quotes
formatted_value = '"' + value.replace('"', '\\"') + '"' value = '"' + value.replace('"', '\\"') + '"'
# Rebuild the entry with proper quoting # Rebuild the entry with proper quoting
formatted_entry = f'"{key}": {formatted_value}' formatted_entry = f'"{key}": {value}'
formatted_entries.append(formatted_entry) formatted_entries.append(formatted_entry)
# Reconstruct the JSON string # Reconstruct the JSON string
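The final hunk folds `formatted_value` back into `value`, but the repair heuristic itself is unchanged: digits pass through unquoted, the literals true/false/null are lower-cased, and everything else is re-quoted as a JSON string. A standalone sketch of that rule (illustrative, not the full tool-input validation method):

```python
import json

def quote_value(value: str) -> str:
    """Re-quote a single scraped value so it becomes valid JSON."""
    value = value.strip()
    if value.isdigit():                                   # integers pass through unquoted
        return value
    if value.lower() in ("true", "false", "null"):        # JSON literals are lower-cased
        return value.lower()
    return '"' + value.replace('"', '\\"') + '"'          # everything else becomes a string

entries = {"count": "3", "done": "True", "query": 'say "hi"'}
repaired = "{" + ", ".join(f'"{key}": {quote_value(val)}' for key, val in entries.items()) + "}"
print(json.loads(repaired))   # {'count': 3, 'done': True, 'query': 'say "hi"'}
```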

View File

@@ -23,17 +23,16 @@ def process_config(
# Copy values from config (originally from YAML) to the model's attributes. # Copy values from config (originally from YAML) to the model's attributes.
# Only copy if the attribute isn't already set, preserving any explicitly defined values. # Only copy if the attribute isn't already set, preserving any explicitly defined values.
for key, value in config.items(): for key, value in config.items():
if key not in model_class.model_fields: if key not in model_class.model_fields or values.get(key) is not None:
continue continue
if values.get(key) is not None:
continue if isinstance(value, dict):
if isinstance(value, (str, int, float, bool, list)):
values[key] = value
elif isinstance(value, dict):
if isinstance(values.get(key), dict): if isinstance(values.get(key), dict):
values[key].update(value) values[key].update(value)
else: else:
values[key] = value values[key] = value
else:
values[key] = value
# Remove the config from values to avoid duplicate processing # Remove the config from values to avoid duplicate processing
values.pop("config", None) values.pop("config", None)
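The refactor collapses the type-specific branches into one rule: a config entry is skipped when the field is unknown to the model or already set explicitly, nested dicts are merged into an existing dict, and everything else is copied as-is. A standalone sketch of that precedence, with hypothetical field names rather than the real Agent/Task models:

```python
def merge_config(config: dict, values: dict, known_fields: set) -> dict:
    """Copy YAML config entries into `values` without clobbering explicit settings."""
    for key, value in config.items():
        # Skip unknown fields and anything the caller already set explicitly.
        if key not in known_fields or values.get(key) is not None:
            continue
        if isinstance(value, dict) and isinstance(values.get(key), dict):
            values[key].update(value)   # merge into an existing dict
        else:
            values[key] = value         # otherwise copy the config value as-is
    values.pop("config", None)          # avoid processing the raw config twice
    return values

values = {"role": "Researcher", "goal": None, "config": {"goal": "ignored"}}
config = {"goal": "Summarize findings", "verbose": True, "unknown": 1}
print(merge_config(config, values, known_fields={"role", "goal", "verbose"}))
# {'role': 'Researcher', 'goal': 'Summarize findings', 'verbose': True}
```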

View File

@@ -5,8 +5,7 @@ import regex
from langchain.output_parsers import PydanticOutputParser from langchain.output_parsers import PydanticOutputParser
from langchain_core.exceptions import OutputParserException from langchain_core.exceptions import OutputParserException
from langchain_core.outputs import Generation from langchain_core.outputs import Generation
from langchain_core.pydantic_v1 import ValidationError from pydantic import BaseModel, ValidationError
from pydantic import BaseModel
class CrewPydanticOutputParser(PydanticOutputParser): class CrewPydanticOutputParser(PydanticOutputParser):

View File

@@ -1,14 +1,14 @@
from collections import defaultdict from collections import defaultdict
from langchain_openai import ChatOpenAI
from pydantic import BaseModel, Field
from rich.console import Console
from rich.table import Table
from crewai.agent import Agent from crewai.agent import Agent
from crewai.task import Task from crewai.task import Task
from crewai.tasks.task_output import TaskOutput from crewai.tasks.task_output import TaskOutput
from crewai.telemetry import Telemetry from crewai.telemetry import Telemetry
from langchain_openai import ChatOpenAI
from pydantic import BaseModel, Field
from rich.box import HEAVY_EDGE
from rich.console import Console
from rich.table import Table
class TaskEvaluationPydanticOutput(BaseModel): class TaskEvaluationPydanticOutput(BaseModel):
@@ -77,50 +77,72 @@ class CrewEvaluator:
def print_crew_evaluation_result(self) -> None: def print_crew_evaluation_result(self) -> None:
""" """
Prints the evaluation result of the crew in a table. Prints the evaluation result of the crew in a table.
A Crew with 2 tasks using the command crewai test -n 2 A Crew with 2 tasks using the command crewai test -n 3
will output the following table: will output the following table:
Task Scores Tasks Scores
(1-10 Higher is better) (1-10 Higher is better)
┏━━━━━━━━━━━━┳━━━━━━━┳━━━━━━━┳━━━━━━━━━━━━┓ ━━━━━━━━━━━━━━━━━━━━┳━━━━━━━┳━━━━━━━┳━━━━━━━┳━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┓
┃ Tasks/Crew ┃ Run 1 ┃ Run 2 ┃ Avg. Total ┃ ┃ Tasks/Crew/Agents ┃ Run 1 ┃ Run 2 ┃ Run 3 ┃ Avg. Total ┃ Agents ┃
┡━━━━━━━━━━━━╇━━━━━━━╇━━━━━━━╇━━━━━━━━━━━━┩ ━━━━━━━━━━━━━━━━━━━━╇━━━━━━━╇━━━━━━━╇━━━━━━━╇━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┩
│ Task 1 │ 10.0 │ 9.0 │ 9.5 │ Task 1 │ 9.0 │ 10.0 │ 9.0 │ 9.3 │ - AI LLMs Senior Researcher
│ Task 29.09.09.0 │ │ │ - AI LLMs Reporting Analyst
│ Crew │ 9.5 │ 9.0 │ 9.2 │ │ │ │ │ │
└────────────┴───────┴───────┴────────────┘ │ Task 2 │ 9.0 │ 9.0 │ 9.0 │ 9.0 │ - AI LLMs Senior Researcher │
│ │ │ │ │ │ - AI LLMs Reporting Analyst │
│ │ │ │ │ │ │
│ Crew │ 9.0 │ 9.5 │ 9.0 │ 9.2 │ │
│ Execution Time (s) │ 42 │ 79 │ 52 │ 57 │ │
└────────────────────┴───────┴───────┴───────┴────────────┴──────────────────────────────┘
""" """
task_averages = [ task_averages = [
sum(scores) / len(scores) for scores in zip(*self.tasks_scores.values()) sum(scores) / len(scores) for scores in zip(*self.tasks_scores.values())
] ]
crew_average = sum(task_averages) / len(task_averages) crew_average = sum(task_averages) / len(task_averages)
# Create a table table = Table(title="Tasks Scores \n (1-10 Higher is better)", box=HEAVY_EDGE)
table = Table(title="Tasks Scores \n (1-10 Higher is better)")
# Add columns for the table table.add_column("Tasks/Crew/Agents", style="cyan")
table.add_column("Tasks/Crew")
for run in range(1, len(self.tasks_scores) + 1): for run in range(1, len(self.tasks_scores) + 1):
table.add_column(f"Run {run}") table.add_column(f"Run {run}", justify="center")
table.add_column("Avg. Total") table.add_column("Avg. Total", justify="center")
table.add_column("Agents", style="green")
# Add rows for each task for task_index, task in enumerate(self.crew.tasks):
for task_index in range(len(task_averages)):
task_scores = [ task_scores = [
self.tasks_scores[run][task_index] self.tasks_scores[run][task_index]
for run in range(1, len(self.tasks_scores) + 1) for run in range(1, len(self.tasks_scores) + 1)
] ]
avg_score = task_averages[task_index] avg_score = task_averages[task_index]
agents = list(task.processed_by_agents)
# Add the task row with the first agent
table.add_row( table.add_row(
f"Task {task_index + 1}", *map(str, task_scores), f"{avg_score:.1f}" f"Task {task_index + 1}",
*[f"{score:.1f}" for score in task_scores],
f"{avg_score:.1f}",
f"- {agents[0]}" if agents else "",
) )
# Add a row for the crew average # Add rows for additional agents
for agent in agents[1:]:
table.add_row("", "", "", "", "", f"- {agent}")
# Add a blank separator row if it's not the last task
if task_index < len(self.crew.tasks) - 1:
table.add_row("", "", "", "", "", "")
# Add Crew and Execution Time rows
crew_scores = [ crew_scores = [
sum(self.tasks_scores[run]) / len(self.tasks_scores[run]) sum(self.tasks_scores[run]) / len(self.tasks_scores[run])
for run in range(1, len(self.tasks_scores) + 1) for run in range(1, len(self.tasks_scores) + 1)
] ]
table.add_row("Crew", *map(str, crew_scores), f"{crew_average:.1f}") table.add_row(
"Crew",
*[f"{score:.2f}" for score in crew_scores],
f"{crew_average:.1f}",
"",
)
run_exec_times = [ run_exec_times = [
int(sum(tasks_exec_times)) int(sum(tasks_exec_times))
@@ -128,11 +150,9 @@ class CrewEvaluator:
] ]
execution_time_avg = int(sum(run_exec_times) / len(run_exec_times)) execution_time_avg = int(sum(run_exec_times) / len(run_exec_times))
table.add_row( table.add_row(
"Execution Time (s)", "Execution Time (s)", *map(str, run_exec_times), f"{execution_time_avg}", ""
*map(str, run_exec_times),
f"{execution_time_avg}",
) )
# Display the table in the terminal
console = Console() console = Console()
console.print(table) console.print(table)
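The rewritten printer produces the wider table shown in the docstring: a heavy-edged box, centered per-run columns, and a green Agents column populated from each task's `processed_by_agents`. A minimal rich sketch of that layout, reusing the scores from the docstring example:

```python
from rich.box import HEAVY_EDGE
from rich.console import Console
from rich.table import Table

table = Table(title="Tasks Scores \n (1-10 Higher is better)", box=HEAVY_EDGE)
table.add_column("Tasks/Crew/Agents", style="cyan")
for run in (1, 2, 3):
    table.add_column(f"Run {run}", justify="center")
table.add_column("Avg. Total", justify="center")
table.add_column("Agents", style="green")

table.add_row("Task 1", "9.0", "10.0", "9.0", "9.3", "- AI LLMs Senior Researcher")
table.add_row("", "", "", "", "", "- AI LLMs Reporting Analyst")   # extra agent on its own row
table.add_row("", "", "", "", "", "")                              # blank separator between tasks
table.add_row("Crew", "9.00", "9.50", "9.00", "9.2", "")
table.add_row("Execution Time (s)", "42", "79", "52", "57", "")

Console().print(table)
```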

View File

@@ -1,7 +1,6 @@
from unittest import mock from unittest import mock
import pytest import pytest
from crewai.agent import Agent from crewai.agent import Agent
from crewai.crew import Crew from crewai.crew import Crew
from crewai.task import Task from crewai.task import Task
@@ -80,6 +79,7 @@ class TestCrewEvaluator:
@mock.patch("crewai.utilities.evaluators.crew_evaluator_handler.Console") @mock.patch("crewai.utilities.evaluators.crew_evaluator_handler.Console")
@mock.patch("crewai.utilities.evaluators.crew_evaluator_handler.Table") @mock.patch("crewai.utilities.evaluators.crew_evaluator_handler.Table")
def test_print_crew_evaluation_result(self, table, console, crew_planner): def test_print_crew_evaluation_result(self, table, console, crew_planner):
# Set up task scores and execution times
crew_planner.tasks_scores = { crew_planner.tasks_scores = {
1: [10, 9, 8], 1: [10, 9, 8],
2: [9, 8, 7], 2: [9, 8, 7],
@@ -89,22 +89,45 @@ class TestCrewEvaluator:
2: [55, 33, 67], 2: [55, 33, 67],
} }
# Mock agents and assign them to tasks
crew_planner.crew.agents = [
mock.Mock(role="Agent 1"),
mock.Mock(role="Agent 2"),
]
crew_planner.crew.tasks = [
mock.Mock(
agent=crew_planner.crew.agents[0], processed_by_agents=["Agent 1"]
),
mock.Mock(
agent=crew_planner.crew.agents[1], processed_by_agents=["Agent 2"]
),
]
# Run the method
crew_planner.print_crew_evaluation_result() crew_planner.print_crew_evaluation_result()
# Verify that the table is created with the appropriate structure and rows
table.assert_has_calls( table.assert_has_calls(
[ [
mock.call(title="Tasks Scores \n (1-10 Higher is better)"), mock.call(
mock.call().add_column("Tasks/Crew"), title="Tasks Scores \n (1-10 Higher is better)", box=mock.ANY
mock.call().add_column("Run 1"), ), # Title and styling
mock.call().add_column("Run 2"), mock.call().add_column("Tasks/Crew/Agents", style="cyan"), # Columns
mock.call().add_column("Avg. Total"), mock.call().add_column("Run 1", justify="center"),
mock.call().add_row("Task 1", "10", "9", "9.5"), mock.call().add_column("Run 2", justify="center"),
mock.call().add_row("Task 2", "9", "8", "8.5"), mock.call().add_column("Avg. Total", justify="center"),
mock.call().add_row("Task 3", "8", "7", "7.5"), mock.call().add_column("Agents", style="green"),
mock.call().add_row("Crew", "9.0", "8.0", "8.5"), # Verify rows for tasks with agents
mock.call().add_row("Execution Time (s)", "135", "155", "145"), mock.call().add_row("Task 1", "10.0", "9.0", "9.5", "- Agent 1"),
mock.call().add_row("", "", "", "", "", ""), # Blank row between tasks
mock.call().add_row("Task 2", "9.0", "8.0", "8.5", "- Agent 2"),
# Add crew averages and execution times
mock.call().add_row("Crew", "9.00", "8.00", "8.5", ""),
mock.call().add_row("Execution Time (s)", "135", "155", "145", ""),
] ]
) )
# Ensure the console prints the table
console.assert_has_calls([mock.call(), mock.call().print(table())]) console.assert_has_calls([mock.call(), mock.call().print(table())])
def test_evaluate(self, crew_planner): def test_evaluate(self, crew_planner):