Compare commits

...

19 Commits

Author SHA1 Message Date
Eduardo Chiarotti
6ebfe5fab0 feat: fix tests 2024-07-29 22:55:35 -03:00
Eduardo Chiarotti
03a9861e73 feat: change test_crew to evaluate_crew to avoid issues with testing libs 2024-07-29 21:41:14 -03:00
Eduardo Chiarotti
fd30f8ce3e feat: Remove unused functions 2024-07-29 21:37:43 -03:00
Eduardo Chiarotti
71ddf60d5f feat: Add execution time to both task and testing feature 2024-07-29 21:35:07 -03:00
Brandon Hancock (bhancock_ai)
fa4393d77e Add in missing triple quote and execution time to resume agent functionality. (#1025)
* Add in missing triple quote and execution time to resume agent functionality

* Fixing broken kwargs and other issues causing our tests to fail
2024-07-29 14:39:02 -03:00
Rip&Tear
25c314befc Minor fixes and updates (#1019)
Co-authored-by: theCyberTech <mattrapidb@gmail.com>
2024-07-29 03:24:23 -03:00
Rip&Tear
2fe79e68cd Small 404 error fixes (#1018)
* Updated Docs: New Getting started section + content update / addition

* fixed indentation issue

* Minor updates to fix typos

* Fixed up 404 error on latest commit

---------

Co-authored-by: theCyberTech <the_t3ch@pm.me>
Co-authored-by: theCyberTech <mattrapidb@gmail.com>
2024-07-28 22:01:04 -03:00
Nuraly
37d05a2365 Update Force-Tool-Ouput-as-Result.md (#964)
I think there is a mistake: there is no such parameter as force_output_result, and as the code shows, the correct parameter, result_as_answer, is set during agent creation, not on the task.
2024-07-28 15:41:56 -03:00
Carine Bruyndoncx
0111d261a4 Update Crews.md - correct result variable to crew_output (#972) 2024-07-28 15:40:36 -03:00
Taleb
0a23e1dc13 Performed spell check across the rest of the code base, and enhanced the yaml parser code a little (#895)
* Performed spell check across the entire documentation

Thank you once again!

* Performed spell check across most of the code base
Folders checked:
- agents
- cli
- memory
- project
- tasks
- telemetry
- tools
- translations

* Trying to add a max_tokens option for the agents, so they are limited by the number of tokens.

* Performed spell check across the rest of the code base, and enhanced the yaml parser code a little

* Small change in the main agent doc

* Improve _save_file method to handle both dict and str inputs

- Add check for dict type input
- Use json.dump for dict serialization
- Convert non-dict inputs to string
- Remove type ignore comments

---------

Co-authored-by: João Moura <joaomdmoura@gmail.com>
2024-07-28 15:39:54 -03:00
Henri Wenlin
ef5ff71346 feat: add verbose option for printing in ToolUsage (#990) 2024-07-28 15:12:10 -03:00
Samuel Mallet
1697b4cacb Add docs for new parameters to SerperDevTool (#993) 2024-07-28 15:09:55 -03:00
Taleb
6b4710a8d1 Improve _save_file method to handle both dict and str inputs (#1011)
- Add check for dict type input
- Use json.dump for dict serialization
- Convert non-dict inputs to string
- Remove type ignore comments
2024-07-28 15:03:18 -03:00
Lennex Zinyando
6f2a8f08ba Fixes getting started section links (#1016) 2024-07-28 15:02:41 -03:00
João Moura
4e6abf596d updating test 2024-07-28 13:23:03 -04:00
Rip&Tear
9018e2ab6a Docs update (#1008)
* Updated Docs: New Getting started section + content update / addition

* fixed indentation issue

* Minor updates to fix typos

---------

Co-authored-by: theCyberTech <the_t3ch@pm.me>
2024-07-28 11:55:09 -03:00
ResearchAI
99d023c5f3 Update reset_memories_command.py (#974) 2024-07-26 14:40:47 -07:00
Brandon Hancock (bhancock_ai)
da7d8256eb Json Task Output Truncation with Escape Characters (#1009)
* Fixed special character issue when converting json to models. Added numerous tests to ensure things work properly.

* Fix linting error and cleaned up tests

* Fix customer_converter_cls test failure

* Fixed tests. Thank you Lorenze for pointing that out. Added a few more to ensure converter creation works properly

* Address lorenze feedback

* Fix linting issues
2024-07-26 17:27:01 -04:00
Brandon Hancock (bhancock_ai)
88bffaa0d0 Merge pull request #1012 from crewAIInc/fix/breaking-test-task-eval
fix test due to asserting instructions model_schema change
2024-07-26 16:55:26 -04:00
31 changed files with 410336 additions and 7215 deletions

View File

@@ -114,7 +114,7 @@ from langchain.agents import load_tools
langchain_tools = load_tools(["google-serper"], llm=llm)
agent1 = CustomAgent(
role="backstory agent",
role="agent role",
goal="who is {input}?",
backstory="agent backstory",
verbose=True,
@@ -127,7 +127,7 @@ task1 = Task(
)
agent2 = Agent(
role="bio agent",
role="agent role",
goal="summarize the short bio for {input} and if needed do more research",
backstory="agent backstory",
verbose=True,

View File

@@ -137,7 +137,7 @@ crew = Crew(
verbose=2
)
result = crew.kickoff()
crew_output = crew.kickoff()
# Accessing the crew output
print(f"Raw Output: {crew_output.raw}")

View File

@@ -18,4 +18,7 @@ pip install crewai
# Install the main crewAI package and the tools package
# that includes a series of helpful tools for your agents
pip install 'crewai[tools]'
# Alternatively, you can also use:
pip install crewai crewai-tools
```

View File

@@ -1,5 +1,5 @@
---
title: Starting a New CrewAI Project
title: Starting a New CrewAI Project - Using Template
description: A comprehensive guide to starting a new CrewAI project, including the latest updates and project setup methods.
---
@@ -7,13 +7,62 @@ description: A comprehensive guide to starting a new CrewAI project, including t
Welcome to the ultimate guide for starting a new CrewAI project. This document will walk you through the steps to create, customize, and run your CrewAI project, ensuring you have everything you need to get started.
Before we start, there are a couple of things to note:
1. CrewAI is a Python package and requires Python >=3.10 and <=3.13 to run.
2. The preferred way of setting up CrewAI is using the `crewai create` command. This will create a new project folder and install a skeleton template for you to work on.
## Prerequisites
We assume you have already installed CrewAI. If not, please refer to the [installation guide](https://docs.crewai.com/how-to/Installing-CrewAI/) to install CrewAI and its dependencies.
Before getting started with CrewAI, make sure that you have installed it via pip:
```shell
$ pip install crewai crewai-tools
```
### Virtual Environments
It is highly recommended that you use virtual environments to ensure that your CrewAI project is isolated from other projects and dependencies. Virtual environments provide a clean, separate workspace for each project, preventing conflicts between different versions of packages and libraries. This isolation is crucial for maintaining consistency and reproducibility in your development process. You have multiple options for setting up virtual environments depending on your operating system and Python version:
1. Use venv (Python's built-in virtual environment tool):
venv is included with Python 3.3 and later, making it a convenient choice for many developers. It's lightweight and easy to use, perfect for simple project setups.
To set up virtual environments with venv, refer to the official [Python documentation](https://docs.python.org/3/tutorial/venv.html).
2. Use Conda (A Python virtual environment manager):
Conda is an open-source package manager and environment management system for Python. It's widely used by data scientists, developers, and researchers to manage dependencies and environments in a reproducible way.
To set up virtual environments with Conda, refer to the official [Conda documentation](https://docs.conda.io/projects/conda/en/stable/user-guide/getting-started.html).
3. Use Poetry (A Python package manager and dependency management tool):
Poetry is an open-source Python package manager that simplifies the installation of packages and their dependencies. Poetry offers a convenient way to manage virtual environments and dependencies.
Poetry is CrewAI's preferred tool for package and dependency management.
### Code IDEs
Most users of CrewAI use a code editor / Integrated Development Environment (IDE) for building their Crews. You can use any IDE of your choice. See below for some popular options:
- [Visual Studio Code](https://code.visualstudio.com/) - Most popular
- [PyCharm](https://www.jetbrains.com/pycharm/)
- [Cursor AI](https://cursor.com)
Pick one that suits your style and needs.
## Creating a New Project
In this example we will be using venv as our virtual environment manager.
To create a new project, run the following CLI command:
To set up a virtual environment, run the following CLI command:
```shell
$ python3 -m venv <venv-name>
```
Activate your virtual environment by running the following CLI command:
```shell
$ source <venv-name>/bin/activate
```
Now, to create a new CrewAI project, run the following CLI command:
```shell
$ crewai create <project_name>

View File

@@ -1,84 +0,0 @@
---
title: Assembling and Activating Your CrewAI Team
description: A comprehensive guide to creating a dynamic CrewAI team for your projects, with updated functionalities including verbose mode, memory capabilities, asynchronous execution, output customization, language model configuration, code execution, integration with third-party agents, and improved task management.
---
## Introduction
Embark on your CrewAI journey by setting up your environment and initiating your AI crew with the latest features. This guide ensures a smooth start, incorporating all recent updates for an enhanced experience, including code execution capabilities, integration with third-party agents, and advanced task management.
## Step 0: Installation
Install CrewAI and any necessary packages for your project. CrewAI is compatible with Python >=3.10,<=3.13.
```shell
pip install crewai
pip install 'crewai[tools]'
```
## Step 1: Assemble Your Agents
Define your agents with distinct roles, backstories, and enhanced capabilities. The Agent class now supports a wide range of attributes for fine-tuned control over agent behavior and interactions, including code execution and integration with third-party agents.
```python
import os
from langchain.llms import OpenAI
from crewai import Agent
from crewai_tools import SerperDevTool, BrowserbaseLoadTool, EXASearchTool
os.environ["OPENAI_API_KEY"] = "Your OpenAI Key"
os.environ["SERPER_API_KEY"] = "Your Serper Key"
os.environ["BROWSERBASE_API_KEY"] = "Your BrowserBase Key"
os.environ["BROWSERBASE_PROJECT_ID"] = "Your BrowserBase Project Id"
search_tool = SerperDevTool()
browser_tool = BrowserbaseLoadTool()
exa_search_tool = EXASearchTool()
# Creating a senior researcher agent with advanced configurations
researcher = Agent(
role='Senior Researcher',
goal='Uncover groundbreaking technologies in {topic}',
backstory=("Driven by curiosity, you're at the forefront of innovation, "
"eager to explore and share knowledge that could change the world."),
memory=True,
verbose=True,
allow_delegation=False,
tools=[search_tool, browser_tool],
allow_code_execution=False, # New attribute for enabling code execution
max_iter=15, # Maximum number of iterations for task execution
max_rpm=100, # Maximum requests per minute
max_execution_time=3600, # Maximum execution time in seconds
system_template="Your custom system template here", # Custom system template
prompt_template="Your custom prompt template here", # Custom prompt template
response_template="Your custom response template here", # Custom response template
)
# Creating a writer agent with custom tools and specific configurations
writer = Agent(
role='Writer',
goal='Narrate compelling tech stories about {topic}',
backstory=("With a flair for simplifying complex topics, you craft engaging "
"narratives that captivate and educate, bringing new discoveries to light."),
verbose=True,
allow_delegation=False,
memory=True,
tools=[exa_search_tool],
function_calling_llm=OpenAI(model_name="gpt-3.5-turbo"), # Separate LLM for function calling
)
# Setting a specific manager agent
manager = Agent(
role='Manager',
goal='Ensure the smooth operation and coordination of the team',
verbose=True,
backstory=(
"As a seasoned project manager, you excel in organizing "
"tasks, managing timelines, and ensuring the team stays on track."
),
allow_code_execution=True, # Enable code execution for the manager
)
```
### New Agent Attributes and Features
1. `allow_code_execution`: Enable or disable code execution capabilities for the agent (default is False).
2. `max_execution_time`: Set a maximum execution time (in seconds) for the agent to complete a task.
3. `function_calling_llm`: Specify a separate language model for function calling.

View File

@@ -7,7 +7,7 @@ description: Learn how to force tool output as the result of an Agent's task
In CrewAI, you can force the output of a tool as the result of an agent's task. This feature is useful when you want to ensure that the tool output is captured and returned as the task result, and avoid the agent modifying the output during the task execution.
## Forcing Tool Output as Result
To force the tool output as the result of an agent's task, you can set the `force_tool_output` parameter to `True` when creating the task. This parameter ensures that the tool output is captured and returned as the task result, without any modifications by the agent.
To force the tool output as the result of an agent's task, you can set the `result_as_answer` parameter to `True` when creating the agent. This parameter ensures that the tool output is captured and returned as the task result, without any modifications by the agent.
Here's an example of how to force the tool output as the result of an agent's task:
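The example itself is cut off in this diff excerpt. Below is a minimal sketch consistent with the corrected text, assuming `result_as_answer` is a flag on the tool instance passed in at agent creation; the tool class, its fields, and the agent attributes here are illustrative, not the repo's exact example:

```python
from crewai import Agent
from crewai_tools import BaseTool

class StockPriceTool(BaseTool):  # hypothetical tool for illustration
    name: str = "Stock price lookup"
    description: str = "Returns the latest price for a ticker."

    def _run(self, ticker: str) -> str:
        return f"{ticker}: 42.00 USD"  # stand-in for a real API call

# result_as_answer=True makes the raw tool output the task result,
# bypassing any rewriting by the agent (per the corrected doc above).
analyst = Agent(
    role="Market Analyst",
    goal="Report the latest stock price",
    backstory="You report tool data verbatim.",
    tools=[StockPriceTool(result_as_answer=True)],
)
```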

View File

@@ -5,6 +5,19 @@
Cutting-edge framework for orchestrating role-playing, autonomous AI agents. By fostering collaborative intelligence, CrewAI empowers agents to work together seamlessly, tackling complex tasks.
<div style="display:flex; margin:0 auto; justify-content: center;">
<div style="width:25%">
<h2>Getting Started</h2>
<ul>
<li><a href='./getting-started/Installing-CrewAI'>
Installing CrewAI
</a>
</li>
<li><a href='./getting-started/Start-a-New-CrewAI-Project-Template-Method'>
Start a New CrewAI Project: Template Method
</a>
</li>
</ul>
</div>
<div style="width:25%">
<h2>Core Concepts</h2>
<ul>
@@ -53,21 +66,6 @@ Cutting-edge framework for orchestrating role-playing, autonomous AI agents. By
<div style="width:30%">
<h2>How-To Guides</h2>
<ul>
<li>
<a href="./how-to/Start-a-New-CrewAI-Project">
Starting Your crewAI Project
</a>
</li>
<li>
<a href="./how-to/Installing-CrewAI">
Installing crewAI
</a>
</li>
<li>
<a href="./how-to/Creating-a-Crew-and-kick-it-off">
Getting Started
</a>
</li>
<li>
<a href="./how-to/Create-Custom-Tools">
Create Custom Tools

View File

@@ -29,5 +29,70 @@ To effectively use the `SerperDevTool`, follow these steps:
2. **API Key Acquisition**: Acquire a `serper.dev` API key by registering for a free account at `serper.dev`.
3. **Environment Configuration**: Store your obtained API key in an environment variable named `SERPER_API_KEY` to facilitate its use by the tool.
## Parameters
The `SerperDevTool` comes with several parameters that will be passed to the API:
- **search_url**: The URL endpoint for the search API. (Default is `https://google.serper.dev/search`)
- **country**: Optional. Specify the country for the search results.
- **location**: Optional. Specify the location for the search results.
- **locale**: Optional. Specify the locale for the search results.
- **n_results**: Number of search results to return. Default is `10`.
The values for `country`, `location`, `locale`, and `search_url` can be found on the [Serper Playground](https://serper.dev/playground).
## Example with Parameters
Here is an example demonstrating how to use the tool with additional parameters:
```python
from crewai_tools import SerperDevTool
tool = SerperDevTool(
search_url="https://google.serper.dev/scholar",
n_results=2,
)
print(tool.run(search_query="ChatGPT"))
# Using Tool: Search the internet
# Search results: Title: Role of chat gpt in public health
# Link: https://link.springer.com/article/10.1007/s10439-023-03172-7
# Snippet: … ChatGPT in public health. In this overview, we will examine the potential uses of ChatGPT in
# ---
# Title: Potential use of chat gpt in global warming
# Link: https://link.springer.com/article/10.1007/s10439-023-03171-8
# Snippet: … as ChatGPT, have the potential to play a critical role in advancing our understanding of climate
# ---
```
```python
from crewai_tools import SerperDevTool
tool = SerperDevTool(
country="fr",
locale="fr",
location="Paris, Paris, Ile-de-France, France",
n_results=2,
)
print(tool.run(search_query="Jeux Olympiques"))
# Using Tool: Search the internet
# Search results: Title: Jeux Olympiques de Paris 2024 - Actualités, calendriers, résultats
# Link: https://olympics.com/fr/paris-2024
# Snippet: Quels sont les sports présents aux Jeux Olympiques de Paris 2024 ? · Athlétisme · Aviron · Badminton · Basketball · Basketball 3x3 · Boxe · Breaking · Canoë ...
# ---
# Title: Billetterie Officielle de Paris 2024 - Jeux Olympiques et Paralympiques
# Link: https://tickets.paris2024.org/
# Snippet: Achetez vos billets exclusivement sur le site officiel de la billetterie de Paris 2024 pour participer au plus grand événement sportif au monde.
# ---
```
## Conclusion
By integrating the `SerperDevTool` into Python projects, users gain the ability to conduct real-time, relevant searches across the internet directly from their applications. By adhering to the setup and usage guidelines provided, incorporating this tool into projects is streamlined and straightforward.
By integrating the `SerperDevTool` into Python projects, users gain the ability to conduct real-time, relevant searches across the internet directly from their applications. The updated parameters allow for more customized and localized search results. By adhering to the setup and usage guidelines provided, incorporating this tool into projects is streamlined and straightforward.

View File

@@ -119,6 +119,9 @@ theme:
nav:
- Home: '/'
- Getting Started:
- Installing CrewAI: 'getting-started/Installing-CrewAI.md'
- Starting a new CrewAI project: 'getting-started/Start-a-New-CrewAI-Project-Template-Method.md'
- Core Concepts:
- Agents: 'core-concepts/Agents.md'
- Tasks: 'core-concepts/Tasks.md'

poetry.lock (generated, 146 changed lines)
View File

@@ -1,14 +1,14 @@
# This file is automatically @generated by Poetry 1.7.1 and should not be changed by hand.
# This file is automatically @generated by Poetry 1.8.3 and should not be changed by hand.
[[package]]
name = "agentops"
version = "0.3.0"
version = "0.3.2"
description = "Python SDK for developing AI agent evals and observability"
optional = true
python-versions = ">=3.7"
files = [
{file = "agentops-0.3.0-py3-none-any.whl", hash = "sha256:22aeb3355e66b32a2b2a9f676048b81979b2488feddb088f9266034b3ed50539"},
{file = "agentops-0.3.0.tar.gz", hash = "sha256:6c0c08a57410fa5e826a7bafa1deeba9f7b3524709427d9e1abbd0964caaf76b"},
{file = "agentops-0.3.2-py3-none-any.whl", hash = "sha256:b35988e04378624204572bb3d7a454094f879ea573f05b57d4e75ab0bfbb82af"},
{file = "agentops-0.3.2.tar.gz", hash = "sha256:55559ac4a43634831dfa8937c2597c28e332809dc7c6bb3bc3c8b233442e224c"},
]
[package.dependencies]
@@ -294,38 +294,38 @@ files = [
[[package]]
name = "bcrypt"
version = "4.1.3"
version = "4.2.0"
description = "Modern password hashing for your software and your servers"
optional = false
python-versions = ">=3.7"
files = [
{file = "bcrypt-4.1.3-cp37-abi3-macosx_10_12_universal2.whl", hash = "sha256:48429c83292b57bf4af6ab75809f8f4daf52aa5d480632e53707805cc1ce9b74"},
{file = "bcrypt-4.1.3-cp37-abi3-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:4a8bea4c152b91fd8319fef4c6a790da5c07840421c2b785084989bf8bbb7455"},
{file = "bcrypt-4.1.3-cp37-abi3-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:3d3b317050a9a711a5c7214bf04e28333cf528e0ed0ec9a4e55ba628d0f07c1a"},
{file = "bcrypt-4.1.3-cp37-abi3-manylinux_2_28_aarch64.whl", hash = "sha256:094fd31e08c2b102a14880ee5b3d09913ecf334cd604af27e1013c76831f7b05"},
{file = "bcrypt-4.1.3-cp37-abi3-manylinux_2_28_x86_64.whl", hash = "sha256:4fb253d65da30d9269e0a6f4b0de32bd657a0208a6f4e43d3e645774fb5457f3"},
{file = "bcrypt-4.1.3-cp37-abi3-musllinux_1_1_aarch64.whl", hash = "sha256:193bb49eeeb9c1e2db9ba65d09dc6384edd5608d9d672b4125e9320af9153a15"},
{file = "bcrypt-4.1.3-cp37-abi3-musllinux_1_1_x86_64.whl", hash = "sha256:8cbb119267068c2581ae38790e0d1fbae65d0725247a930fc9900c285d95725d"},
{file = "bcrypt-4.1.3-cp37-abi3-musllinux_1_2_aarch64.whl", hash = "sha256:6cac78a8d42f9d120b3987f82252bdbeb7e6e900a5e1ba37f6be6fe4e3848286"},
{file = "bcrypt-4.1.3-cp37-abi3-musllinux_1_2_x86_64.whl", hash = "sha256:01746eb2c4299dd0ae1670234bf77704f581dd72cc180f444bfe74eb80495b64"},
{file = "bcrypt-4.1.3-cp37-abi3-win32.whl", hash = "sha256:037c5bf7c196a63dcce75545c8874610c600809d5d82c305dd327cd4969995bf"},
{file = "bcrypt-4.1.3-cp37-abi3-win_amd64.whl", hash = "sha256:8a893d192dfb7c8e883c4576813bf18bb9d59e2cfd88b68b725990f033f1b978"},
{file = "bcrypt-4.1.3-cp39-abi3-macosx_10_12_universal2.whl", hash = "sha256:0d4cf6ef1525f79255ef048b3489602868c47aea61f375377f0d00514fe4a78c"},
{file = "bcrypt-4.1.3-cp39-abi3-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:f5698ce5292a4e4b9e5861f7e53b1d89242ad39d54c3da451a93cac17b61921a"},
{file = "bcrypt-4.1.3-cp39-abi3-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:ec3c2e1ca3e5c4b9edb94290b356d082b721f3f50758bce7cce11d8a7c89ce84"},
{file = "bcrypt-4.1.3-cp39-abi3-manylinux_2_28_aarch64.whl", hash = "sha256:3a5be252fef513363fe281bafc596c31b552cf81d04c5085bc5dac29670faa08"},
{file = "bcrypt-4.1.3-cp39-abi3-manylinux_2_28_x86_64.whl", hash = "sha256:5f7cd3399fbc4ec290378b541b0cf3d4398e4737a65d0f938c7c0f9d5e686611"},
{file = "bcrypt-4.1.3-cp39-abi3-musllinux_1_1_aarch64.whl", hash = "sha256:c4c8d9b3e97209dd7111bf726e79f638ad9224b4691d1c7cfefa571a09b1b2d6"},
{file = "bcrypt-4.1.3-cp39-abi3-musllinux_1_1_x86_64.whl", hash = "sha256:31adb9cbb8737a581a843e13df22ffb7c84638342de3708a98d5c986770f2834"},
{file = "bcrypt-4.1.3-cp39-abi3-musllinux_1_2_aarch64.whl", hash = "sha256:551b320396e1d05e49cc18dd77d970accd52b322441628aca04801bbd1d52a73"},
{file = "bcrypt-4.1.3-cp39-abi3-musllinux_1_2_x86_64.whl", hash = "sha256:6717543d2c110a155e6821ce5670c1f512f602eabb77dba95717ca76af79867d"},
{file = "bcrypt-4.1.3-cp39-abi3-win32.whl", hash = "sha256:6004f5229b50f8493c49232b8e75726b568535fd300e5039e255d919fc3a07f2"},
{file = "bcrypt-4.1.3-cp39-abi3-win_amd64.whl", hash = "sha256:2505b54afb074627111b5a8dc9b6ae69d0f01fea65c2fcaea403448c503d3991"},
{file = "bcrypt-4.1.3-pp310-pypy310_pp73-manylinux_2_28_aarch64.whl", hash = "sha256:cb9c707c10bddaf9e5ba7cdb769f3e889e60b7d4fea22834b261f51ca2b89fed"},
{file = "bcrypt-4.1.3-pp310-pypy310_pp73-manylinux_2_28_x86_64.whl", hash = "sha256:9f8ea645eb94fb6e7bea0cf4ba121c07a3a182ac52876493870033141aa687bc"},
{file = "bcrypt-4.1.3-pp39-pypy39_pp73-manylinux_2_28_aarch64.whl", hash = "sha256:f44a97780677e7ac0ca393bd7982b19dbbd8d7228c1afe10b128fd9550eef5f1"},
{file = "bcrypt-4.1.3-pp39-pypy39_pp73-manylinux_2_28_x86_64.whl", hash = "sha256:d84702adb8f2798d813b17d8187d27076cca3cd52fe3686bb07a9083930ce650"},
{file = "bcrypt-4.1.3.tar.gz", hash = "sha256:2ee15dd749f5952fe3f0430d0ff6b74082e159c50332a1413d51b5689cf06623"},
{file = "bcrypt-4.2.0-cp37-abi3-macosx_10_12_universal2.whl", hash = "sha256:096a15d26ed6ce37a14c1ac1e48119660f21b24cba457f160a4b830f3fe6b5cb"},
{file = "bcrypt-4.2.0-cp37-abi3-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:c02d944ca89d9b1922ceb8a46460dd17df1ba37ab66feac4870f6862a1533c00"},
{file = "bcrypt-4.2.0-cp37-abi3-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:1d84cf6d877918620b687b8fd1bf7781d11e8a0998f576c7aa939776b512b98d"},
{file = "bcrypt-4.2.0-cp37-abi3-manylinux_2_28_aarch64.whl", hash = "sha256:1bb429fedbe0249465cdd85a58e8376f31bb315e484f16e68ca4c786dcc04291"},
{file = "bcrypt-4.2.0-cp37-abi3-manylinux_2_28_x86_64.whl", hash = "sha256:655ea221910bcac76ea08aaa76df427ef8625f92e55a8ee44fbf7753dbabb328"},
{file = "bcrypt-4.2.0-cp37-abi3-musllinux_1_1_aarch64.whl", hash = "sha256:1ee38e858bf5d0287c39b7a1fc59eec64bbf880c7d504d3a06a96c16e14058e7"},
{file = "bcrypt-4.2.0-cp37-abi3-musllinux_1_1_x86_64.whl", hash = "sha256:0da52759f7f30e83f1e30a888d9163a81353ef224d82dc58eb5bb52efcabc399"},
{file = "bcrypt-4.2.0-cp37-abi3-musllinux_1_2_aarch64.whl", hash = "sha256:3698393a1b1f1fd5714524193849d0c6d524d33523acca37cd28f02899285060"},
{file = "bcrypt-4.2.0-cp37-abi3-musllinux_1_2_x86_64.whl", hash = "sha256:762a2c5fb35f89606a9fde5e51392dad0cd1ab7ae64149a8b935fe8d79dd5ed7"},
{file = "bcrypt-4.2.0-cp37-abi3-win32.whl", hash = "sha256:5a1e8aa9b28ae28020a3ac4b053117fb51c57a010b9f969603ed885f23841458"},
{file = "bcrypt-4.2.0-cp37-abi3-win_amd64.whl", hash = "sha256:8f6ede91359e5df88d1f5c1ef47428a4420136f3ce97763e31b86dd8280fbdf5"},
{file = "bcrypt-4.2.0-cp39-abi3-macosx_10_12_universal2.whl", hash = "sha256:c52aac18ea1f4a4f65963ea4f9530c306b56ccd0c6f8c8da0c06976e34a6e841"},
{file = "bcrypt-4.2.0-cp39-abi3-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:3bbbfb2734f0e4f37c5136130405332640a1e46e6b23e000eeff2ba8d005da68"},
{file = "bcrypt-4.2.0-cp39-abi3-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:3413bd60460f76097ee2e0a493ccebe4a7601918219c02f503984f0a7ee0aebe"},
{file = "bcrypt-4.2.0-cp39-abi3-manylinux_2_28_aarch64.whl", hash = "sha256:8d7bb9c42801035e61c109c345a28ed7e84426ae4865511eb82e913df18f58c2"},
{file = "bcrypt-4.2.0-cp39-abi3-manylinux_2_28_x86_64.whl", hash = "sha256:3d3a6d28cb2305b43feac298774b997e372e56c7c7afd90a12b3dc49b189151c"},
{file = "bcrypt-4.2.0-cp39-abi3-musllinux_1_1_aarch64.whl", hash = "sha256:9c1c4ad86351339c5f320ca372dfba6cb6beb25e8efc659bedd918d921956bae"},
{file = "bcrypt-4.2.0-cp39-abi3-musllinux_1_1_x86_64.whl", hash = "sha256:27fe0f57bb5573104b5a6de5e4153c60814c711b29364c10a75a54bb6d7ff48d"},
{file = "bcrypt-4.2.0-cp39-abi3-musllinux_1_2_aarch64.whl", hash = "sha256:8ac68872c82f1add6a20bd489870c71b00ebacd2e9134a8aa3f98a0052ab4b0e"},
{file = "bcrypt-4.2.0-cp39-abi3-musllinux_1_2_x86_64.whl", hash = "sha256:cb2a8ec2bc07d3553ccebf0746bbf3d19426d1c6d1adbd4fa48925f66af7b9e8"},
{file = "bcrypt-4.2.0-cp39-abi3-win32.whl", hash = "sha256:77800b7147c9dc905db1cba26abe31e504d8247ac73580b4aa179f98e6608f34"},
{file = "bcrypt-4.2.0-cp39-abi3-win_amd64.whl", hash = "sha256:61ed14326ee023917ecd093ee6ef422a72f3aec6f07e21ea5f10622b735538a9"},
{file = "bcrypt-4.2.0-pp310-pypy310_pp73-manylinux_2_28_aarch64.whl", hash = "sha256:39e1d30c7233cfc54f5c3f2c825156fe044efdd3e0b9d309512cc514a263ec2a"},
{file = "bcrypt-4.2.0-pp310-pypy310_pp73-manylinux_2_28_x86_64.whl", hash = "sha256:f4f4acf526fcd1c34e7ce851147deedd4e26e6402369304220250598b26448db"},
{file = "bcrypt-4.2.0-pp39-pypy39_pp73-manylinux_2_28_aarch64.whl", hash = "sha256:1ff39b78a52cf03fdf902635e4c81e544714861ba3f0efc56558979dd4f09170"},
{file = "bcrypt-4.2.0-pp39-pypy39_pp73-manylinux_2_28_x86_64.whl", hash = "sha256:373db9abe198e8e2c70d12b479464e0d5092cc122b20ec504097b5f2297ed184"},
{file = "bcrypt-4.2.0.tar.gz", hash = "sha256:cf69eaf5185fd58f268f805b505ce31f9b9fc2d64b376642164e9244540c1221"},
]
[package.extras]
@@ -355,17 +355,17 @@ lxml = ["lxml"]
[[package]]
name = "boto3"
version = "1.34.145"
version = "1.34.146"
description = "The AWS SDK for Python"
optional = false
python-versions = ">=3.8"
files = [
{file = "boto3-1.34.145-py3-none-any.whl", hash = "sha256:69d5afb7a017d07dd6bdfb680d2912d5d369b3fafa0a45161207d9f393b14d7e"},
{file = "boto3-1.34.145.tar.gz", hash = "sha256:ac770fb53dde1743aec56bd8e56b7ee2e2f5ad42a37825968ec4ff8428822640"},
{file = "boto3-1.34.146-py3-none-any.whl", hash = "sha256:7ec568fb19bce82a70be51f08fddac1ef927ca3fb0896cbb34303a012ba228d8"},
{file = "boto3-1.34.146.tar.gz", hash = "sha256:5686fe2a6d1aa1de8a88e9589cdcc33361640d3d7a13da718a30717248886124"},
]
[package.dependencies]
botocore = ">=1.34.145,<1.35.0"
botocore = ">=1.34.146,<1.35.0"
jmespath = ">=0.7.1,<2.0.0"
s3transfer = ">=0.10.0,<0.11.0"
@@ -374,13 +374,13 @@ crt = ["botocore[crt] (>=1.21.0,<2.0a0)"]
[[package]]
name = "botocore"
version = "1.34.145"
version = "1.34.146"
description = "Low-level, data-driven core of boto 3."
optional = false
python-versions = ">=3.8"
files = [
{file = "botocore-1.34.145-py3-none-any.whl", hash = "sha256:2e72e262de02adcb0264ac2bac159a28f55dbba8d9e52aa0308773a42950dff5"},
{file = "botocore-1.34.145.tar.gz", hash = "sha256:edf0fb4c02186ae29b76263ac5fda18b0a085d334a310551c9984407cf1079e6"},
{file = "botocore-1.34.146-py3-none-any.whl", hash = "sha256:3fd4782362bd29c192704ebf859c5c8c5189ad05719e391eefe23088434427ae"},
{file = "botocore-1.34.146.tar.gz", hash = "sha256:849cb8e54e042443aeabcd7822b5f2b76cb5cfe33fe3a71f91c7c069748a869c"},
]
[package.dependencies]
@@ -747,13 +747,13 @@ colorama = {version = "*", markers = "platform_system == \"Windows\""}
[[package]]
name = "cohere"
version = "5.6.1"
version = "5.6.2"
description = ""
optional = false
python-versions = "<4.0,>=3.8"
files = [
{file = "cohere-5.6.1-py3-none-any.whl", hash = "sha256:1c8bcd39a54622d64b83cafb865f102cd2565ce091b0856fd5ce11bf7169109a"},
{file = "cohere-5.6.1.tar.gz", hash = "sha256:5d7efda64f0e512d4cc35aa04b17a6f74b3d8c175a99f2797991a7f31dfac349"},
{file = "cohere-5.6.2-py3-none-any.whl", hash = "sha256:cfecf1343bcaa4091266c5a231fbcb3ccbd80cad05ea093ef80024a117aa3a2f"},
{file = "cohere-5.6.2.tar.gz", hash = "sha256:6bb901afdfb02f62ad8ed2d82f12d8ea87a6869710f5f880cb89190c4e994805"},
]
[package.dependencies]
@@ -1517,13 +1517,13 @@ protobuf = ">=3.20.2,<4.21.0 || >4.21.0,<4.21.1 || >4.21.1,<4.21.2 || >4.21.2,<4
[[package]]
name = "google-cloud-storage"
version = "2.17.0"
version = "2.18.0"
description = "Google Cloud Storage API client library"
optional = false
python-versions = ">=3.7"
files = [
{file = "google-cloud-storage-2.17.0.tar.gz", hash = "sha256:49378abff54ef656b52dca5ef0f2eba9aa83dc2b2c72c78714b03a1a95fe9388"},
{file = "google_cloud_storage-2.17.0-py2.py3-none-any.whl", hash = "sha256:5b393bc766b7a3bc6f5407b9e665b2450d36282614b7945e570b3480a456d1e1"},
{file = "google_cloud_storage-2.18.0-py2.py3-none-any.whl", hash = "sha256:e8e1a9577952143c3fca8163005ecfadd2d70ec080fa158a8b305000e2c22fbb"},
{file = "google_cloud_storage-2.18.0.tar.gz", hash = "sha256:0aa3f7c57f3632f81b455d91558d2b27ada96eee2de3aaa17f689db1470d9578"},
]
[package.dependencies]
@@ -1535,7 +1535,8 @@ google-resumable-media = ">=2.6.0"
requests = ">=2.18.0,<3.0.0dev"
[package.extras]
protobuf = ["protobuf (<5.0.0dev)"]
protobuf = ["protobuf (<6.0.0dev)"]
tracing = ["opentelemetry-api (>=1.1.0)"]
[[package]]
name = "google-crc32c"
@@ -2865,13 +2866,13 @@ pyyaml = ">=5.1"
[[package]]
name = "mkdocs-material"
version = "9.5.29"
version = "9.5.30"
description = "Documentation that simply works"
optional = false
python-versions = ">=3.8"
files = [
{file = "mkdocs_material-9.5.29-py3-none-any.whl", hash = "sha256:afc1f508e2662ded95f0a35a329e8a5acd73ee88ca07ba73836eb6fcdae5d8b4"},
{file = "mkdocs_material-9.5.29.tar.gz", hash = "sha256:3e977598ec15a4ddad5c4dfc9e08edab6023edb51e88f0729bd27be77e3d322a"},
{file = "mkdocs_material-9.5.30-py3-none-any.whl", hash = "sha256:fc070689c5250a180e9b9d79d8491ef9a3a7acb240db0728728d6c31eeb131d4"},
{file = "mkdocs_material-9.5.30.tar.gz", hash = "sha256:3fd417dd42d679e3ba08b9e2d72cd8b8af142cc4a3969676ad6b00993dd182ec"},
]
[package.dependencies]
@@ -3337,13 +3338,13 @@ sympy = "*"
[[package]]
name = "openai"
version = "1.36.0"
version = "1.37.0"
description = "The official Python library for the openai API"
optional = false
python-versions = ">=3.7.1"
files = [
{file = "openai-1.36.0-py3-none-any.whl", hash = "sha256:82b74ded1fe2ea94abb19a007178bc143675f1b6903cebd63e2968d654bb0a6f"},
{file = "openai-1.36.0.tar.gz", hash = "sha256:a124baf0e1657d6156e12248642f88489cd030be8655b69bc1c13eb50e71a93d"},
{file = "openai-1.37.0-py3-none-any.whl", hash = "sha256:a903245c0ecf622f2830024acdaa78683c70abb8e9d37a497b851670864c9f73"},
{file = "openai-1.37.0.tar.gz", hash = "sha256:dc8197fc40ab9d431777b6620d962cc49f4544ffc3011f03ce0a805e6eb54adb"},
]
[package.dependencies]
@@ -4085,6 +4086,19 @@ files = [
{file = "pyarrow-17.0.0-cp312-cp312-win_amd64.whl", hash = "sha256:392bc9feabc647338e6c89267635e111d71edad5fcffba204425a7c8d13610d7"},
{file = "pyarrow-17.0.0-cp38-cp38-macosx_10_15_x86_64.whl", hash = "sha256:af5ff82a04b2171415f1410cff7ebb79861afc5dae50be73ce06d6e870615204"},
{file = "pyarrow-17.0.0-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:edca18eaca89cd6382dfbcff3dd2d87633433043650c07375d095cd3517561d8"},
{file = "pyarrow-17.0.0-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:7c7916bff914ac5d4a8fe25b7a25e432ff921e72f6f2b7547d1e325c1ad9d155"},
{file = "pyarrow-17.0.0-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:f553ca691b9e94b202ff741bdd40f6ccb70cdd5fbf65c187af132f1317de6145"},
{file = "pyarrow-17.0.0-cp38-cp38-manylinux_2_28_aarch64.whl", hash = "sha256:0cdb0e627c86c373205a2f94a510ac4376fdc523f8bb36beab2e7f204416163c"},
{file = "pyarrow-17.0.0-cp38-cp38-manylinux_2_28_x86_64.whl", hash = "sha256:d7d192305d9d8bc9082d10f361fc70a73590a4c65cf31c3e6926cd72b76bc35c"},
{file = "pyarrow-17.0.0-cp38-cp38-win_amd64.whl", hash = "sha256:02dae06ce212d8b3244dd3e7d12d9c4d3046945a5933d28026598e9dbbda1fca"},
{file = "pyarrow-17.0.0-cp39-cp39-macosx_10_15_x86_64.whl", hash = "sha256:13d7a460b412f31e4c0efa1148e1d29bdf18ad1411eb6757d38f8fbdcc8645fb"},
{file = "pyarrow-17.0.0-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:9b564a51fbccfab5a04a80453e5ac6c9954a9c5ef2890d1bcf63741909c3f8df"},
{file = "pyarrow-17.0.0-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:32503827abbc5aadedfa235f5ece8c4f8f8b0a3cf01066bc8d29de7539532687"},
{file = "pyarrow-17.0.0-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:a155acc7f154b9ffcc85497509bcd0d43efb80d6f733b0dc3bb14e281f131c8b"},
{file = "pyarrow-17.0.0-cp39-cp39-manylinux_2_28_aarch64.whl", hash = "sha256:dec8d129254d0188a49f8a1fc99e0560dc1b85f60af729f47de4046015f9b0a5"},
{file = "pyarrow-17.0.0-cp39-cp39-manylinux_2_28_x86_64.whl", hash = "sha256:a48ddf5c3c6a6c505904545c25a4ae13646ae1f8ba703c4df4a1bfe4f4006bda"},
{file = "pyarrow-17.0.0-cp39-cp39-win_amd64.whl", hash = "sha256:42bf93249a083aca230ba7e2786c5f673507fa97bbd9725a1e2754715151a204"},
{file = "pyarrow-17.0.0.tar.gz", hash = "sha256:4beca9521ed2c0921c1023e68d097d0299b62c362639ea315572a58f3f50fd28"},
]
[package.dependencies]
@@ -4321,13 +4335,13 @@ extra = ["pygments (>=2.12)"]
[[package]]
name = "pypdf"
version = "4.3.0"
version = "4.3.1"
description = "A pure-python PDF library capable of splitting, merging, cropping, and transforming PDF files"
optional = false
python-versions = ">=3.6"
files = [
{file = "pypdf-4.3.0-py3-none-any.whl", hash = "sha256:eeea4d019b57c099d02a0e1692eaaab23341ae3f255c1dafa3c8566b4636496d"},
{file = "pypdf-4.3.0.tar.gz", hash = "sha256:0d7a4c67fd03782f5a09d3f48c11c7a31e0bb9af78861a25229bb49259ed0504"},
{file = "pypdf-4.3.1-py3-none-any.whl", hash = "sha256:64b31da97eda0771ef22edb1bfecd5deee4b72c3d1736b7df2689805076d6418"},
{file = "pypdf-4.3.1.tar.gz", hash = "sha256:b2f37fe9a3030aa97ca86067a56ba3f9d3565f9a791b305c7355d8392c30d91b"},
]
[package.dependencies]
@@ -4414,13 +4428,13 @@ files = [
[[package]]
name = "pytest"
version = "8.2.2"
version = "8.3.1"
description = "pytest: simple powerful testing with Python"
optional = false
python-versions = ">=3.8"
files = [
{file = "pytest-8.2.2-py3-none-any.whl", hash = "sha256:c434598117762e2bd304e526244f67bf66bbd7b5d6cf22138be51ff661980343"},
{file = "pytest-8.2.2.tar.gz", hash = "sha256:de4bb8104e201939ccdc688b27a89a7be2079b22e2bd2b07f806b6ba71117977"},
{file = "pytest-8.3.1-py3-none-any.whl", hash = "sha256:e9600ccf4f563976e2c99fa02c7624ab938296551f280835ee6516df8bc4ae8c"},
{file = "pytest-8.3.1.tar.gz", hash = "sha256:7e8e5c5abd6e93cb1cc151f23e57adc31fcf8cfd2a3ff2da63e23f732de35db6"},
]
[package.dependencies]
@@ -4428,7 +4442,7 @@ colorama = {version = "*", markers = "sys_platform == \"win32\""}
exceptiongroup = {version = ">=1.0.0rc8", markers = "python_version < \"3.11\""}
iniconfig = "*"
packaging = "*"
pluggy = ">=1.5,<2.0"
pluggy = ">=1.5,<2"
tomli = {version = ">=1", markers = "python_version < \"3.11\""}
[package.extras]
@@ -4899,19 +4913,19 @@ files = [
[[package]]
name = "setuptools"
version = "71.0.4"
version = "71.1.0"
description = "Easily download, build, install, upgrade, and uninstall Python packages"
optional = false
python-versions = ">=3.8"
files = [
{file = "setuptools-71.0.4-py3-none-any.whl", hash = "sha256:ed2feca703be3bdbd94e6bb17365d91c6935c6b2a8d0bb09b66a2c435ba0b1a5"},
{file = "setuptools-71.0.4.tar.gz", hash = "sha256:48297e5d393a62b7cb2a10b8f76c63a73af933bd809c9e0d0d6352a1a0135dd8"},
{file = "setuptools-71.1.0-py3-none-any.whl", hash = "sha256:33874fdc59b3188304b2e7c80d9029097ea31627180896fb549c578ceb8a0855"},
{file = "setuptools-71.1.0.tar.gz", hash = "sha256:032d42ee9fb536e33087fb66cac5f840eb9391ed05637b3f2a76a7c8fb477936"},
]
[package.extras]
core = ["importlib-metadata (>=6)", "importlib-resources (>=5.10.2)", "jaraco.text (>=3.7)", "more-itertools (>=8.8)", "ordered-set (>=3.1.1)", "packaging (>=24)", "platformdirs (>=2.6.2)", "tomli (>=2.0.1)", "wheel (>=0.43.0)"]
doc = ["furo", "jaraco.packaging (>=9.3)", "jaraco.tidelift (>=1.4)", "pygments-github-lexers (==0.0.5)", "pyproject-hooks (!=1.1)", "rst.linker (>=1.9)", "sphinx (>=3.5)", "sphinx-favicon", "sphinx-inline-tabs", "sphinx-lint", "sphinx-notfound-page (>=1,<2)", "sphinx-reredirects", "sphinxcontrib-towncrier"]
test = ["build[virtualenv] (>=1.0.3)", "filelock (>=3.4.0)", "importlib-metadata", "ini2toml[lite] (>=0.14)", "jaraco.develop (>=7.21)", "jaraco.envs (>=2.2)", "jaraco.path (>=3.2.0)", "jaraco.test", "mypy (==1.10.0)", "packaging (>=23.2)", "pip (>=19.1)", "pyproject-hooks (!=1.1)", "pytest (>=6,!=8.1.*)", "pytest-checkdocs (>=2.4)", "pytest-cov", "pytest-enabler (>=2.2)", "pytest-home (>=0.5)", "pytest-mypy", "pytest-perf", "pytest-ruff (<0.4)", "pytest-ruff (>=0.2.1)", "pytest-ruff (>=0.3.2)", "pytest-subprocess", "pytest-timeout", "pytest-xdist (>=3)", "tomli", "tomli-w (>=1.0.0)", "virtualenv (>=13.0.0)", "wheel"]
test = ["build[virtualenv] (>=1.0.3)", "filelock (>=3.4.0)", "importlib-metadata", "ini2toml[lite] (>=0.14)", "jaraco.develop (>=7.21)", "jaraco.envs (>=2.2)", "jaraco.path (>=3.2.0)", "jaraco.test", "mypy (==1.11.*)", "packaging (>=23.2)", "pip (>=19.1)", "pyproject-hooks (!=1.1)", "pytest (>=6,!=8.1.*)", "pytest-checkdocs (>=2.4)", "pytest-cov", "pytest-enabler (>=2.2)", "pytest-home (>=0.5)", "pytest-mypy", "pytest-perf", "pytest-ruff (<0.4)", "pytest-ruff (>=0.2.1)", "pytest-ruff (>=0.3.2)", "pytest-subprocess", "pytest-timeout", "pytest-xdist (>=3)", "tomli", "tomli-w (>=1.0.0)", "virtualenv (>=13.0.0)", "wheel"]
[[package]]
name = "shapely"
@@ -5256,13 +5270,13 @@ test = ["pytest", "ruff"]
[[package]]
name = "together"
version = "1.2.2"
version = "1.2.3"
description = "Python client for Together's Cloud Platform!"
optional = false
python-versions = "<4.0,>=3.8"
files = [
{file = "together-1.2.2-py3-none-any.whl", hash = "sha256:7ce89f902dbaca67e46e693d90182514494f510f3bc16cb89d816a5031ab0433"},
{file = "together-1.2.2.tar.gz", hash = "sha256:fd026f4a604e1fb3ee2fa5803f31e5e36ad31b3d182ef47f611326de66907d13"},
{file = "together-1.2.3-py3-none-any.whl", hash = "sha256:bbafb4b8340e0f7e0ddb11ad447eb3467c591090910d0291cfbf74b47af045c1"},
{file = "together-1.2.3.tar.gz", hash = "sha256:4ea7626a9581d16fbf293e3eaf91557c43dea044627cf6dbe458bbf43408a6b2"},
]
[package.dependencies]

View File

@@ -55,8 +55,6 @@ class Agent(BaseAgent):
tools: Tools at the agent's disposal
step_callback: Callback to be executed after each step of the agent execution.
callbacks: A list of callback functions from the langchain library that are triggered during the agent's execution process
allow_code_execution: Enable code execution for the agent.
max_retry_limit: Maximum number of retries for an agent to execute a task when an error occurs.
"""
_times_executed: int = PrivateAttr(default=0)
@@ -262,6 +260,7 @@ class Agent(BaseAgent):
"tools_handler": self.tools_handler,
"function_calling_llm": self.function_calling_llm,
"callbacks": self.callbacks,
"max_tokens": self.max_tokens,
}
if self._rpm_controller:

View File

@@ -45,6 +45,7 @@ class BaseAgent(ABC, BaseModel):
i18n (I18N): Internationalization settings.
cache_handler (InstanceOf[CacheHandler]): An instance of the CacheHandler class.
tools_handler (InstanceOf[ToolsHandler]): An instance of the ToolsHandler class.
max_tokens: Maximum number of tokens for the agent to generate in a response.
Methods:
@@ -118,6 +119,9 @@ class BaseAgent(ABC, BaseModel):
tools_handler: InstanceOf[ToolsHandler] = Field(
default=None, description="An instance of the ToolsHandler class."
)
max_tokens: Optional[int] = Field(
default=None, description="Maximum number of tokens for the agent's execution."
)
_original_role: str | None = None
_original_goal: str | None = None
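The new field flows into the executor through the Agent diff above (`"max_tokens": self.max_tokens`). A minimal sketch of how it might be set; the usage is assumed from the diff, and the other constructor arguments are illustrative:

```python
from crewai import Agent

# Assumed usage of the new BaseAgent.max_tokens field added above:
# cap the number of tokens the agent may generate per response.
researcher = Agent(
    role="Senior Researcher",
    goal="Summarize findings on {topic}",
    backstory="Brief and precise.",
    max_tokens=512,  # forwarded to the executor as shown in the Agent diff
)
```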

View File

@@ -6,9 +6,9 @@ from crewai.memory.storage.kickoff_task_outputs_storage import (
)
from .create_crew import create_crew
from .evaluate_crew import evaluate_crew
from .replay_from_task import replay_task_command
from .reset_memories_command import reset_memories_command
from .test_crew import test_crew
from .train_crew import train_crew
@@ -144,7 +144,7 @@ def reset_memories(long, short, entities, kickoff_outputs, all):
def test(n_iterations: int, model: str):
"""Test the crew and evaluate the results."""
click.echo(f"Testing the crew for {n_iterations} iterations with model {model}")
test_crew(n_iterations, model)
evaluate_crew(n_iterations, model)
if __name__ == "__main__":

View File

@@ -3,9 +3,9 @@ import subprocess
import click
def test_crew(n_iterations: int, model: str) -> None:
def evaluate_crew(n_iterations: int, model: str) -> None:
"""
Test the crew by running a command in the Poetry environment.
Test and Evaluate the crew by running a command in the Poetry environment.
Args:
n_iterations (int): The number of iterations to test the crew.

View File

@@ -9,10 +9,14 @@ from crewai.utilities.task_output_storage_handler import TaskOutputStorageHandle
def reset_memories_command(long, short, entity, kickoff_outputs, all) -> None:
"""
Replay the crew execution from a specific task.
Reset the crew memories.
Args:
task_id (str): The ID of the task to replay from.
long (bool): Whether to reset the long-term memory.
short (bool): Whether to reset the short-term memory.
entity (bool): Whether to reset the entity memory.
kickoff_outputs (bool): Whether to reset the latest kickoff task outputs.
all (bool): Whether to reset all memories.
"""
try:

View File

@@ -19,7 +19,7 @@ class ShortTermMemory(Memory):
super().__init__(storage)
def save(self, item: ShortTermMemoryItem) -> None:
super().save(item.data, item.metadata, item.agent)
super().save(value=item.data, metadata=item.metadata, agent=item.agent)
def search(self, query: str, score_threshold: float = 0.35):
return self.storage.search(query=query, score_threshold=score_threshold) # type: ignore # BUG? The reference is to the parent class, but the parent class does not have these parameters
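A toy illustration of why the `save` call above moved to keyword arguments: positional calls silently break if the parent signature's parameter order ever changes, while keywords bind by name. The class below is a stand-in, not crewai's `Memory`:

```python
class Storage:  # hypothetical stand-in for the parent Memory class
    def save(self, value, metadata=None, agent=None):
        print(f"value={value!r} metadata={metadata!r} agent={agent!r}")

# Keyword arguments stay correct even if the signature is later reordered.
Storage().save(
    value={"fact": "Paris is in France"},
    metadata={"source": "task-1"},
    agent="Researcher",
)
```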

View File

@@ -1,6 +1,6 @@
import datetime
import json
import os
import re
import threading
import uuid
from concurrent.futures import Future
@@ -8,7 +8,6 @@ from copy import copy
from hashlib import md5
from typing import Any, Dict, List, Optional, Tuple, Type, Union
from langchain_openai import ChatOpenAI
from opentelemetry.trace import Span
from pydantic import UUID4, BaseModel, Field, field_validator, model_validator
from pydantic_core import PydanticCustomError
@@ -17,10 +16,8 @@ from crewai.agents.agent_builder.base_agent import BaseAgent
from crewai.tasks.output_format import OutputFormat
from crewai.tasks.task_output import TaskOutput
from crewai.telemetry.telemetry import Telemetry
from crewai.utilities.converter import Converter, ConverterError
from crewai.utilities.converter import Converter, convert_to_model
from crewai.utilities.i18n import I18N
from crewai.utilities.printer import Printer
from crewai.utilities.pydantic_schema_parser import PydanticSchemaParser
class Task(BaseModel):
@@ -111,6 +108,7 @@ class Task(BaseModel):
_original_description: str | None = None
_original_expected_output: str | None = None
_thread: threading.Thread | None = None
_execution_time: float | None = None
def __init__(__pydantic_self__, **data):
config = data.pop("config", {})
@@ -124,9 +122,15 @@ class Task(BaseModel):
"may_not_set_field", "This field is not to be set by the user.", {}
)
def _set_start_execution_time(self) -> float:
return datetime.datetime.now().timestamp()
def _set_end_execution_time(self, start_time: float) -> None:
self._execution_time = datetime.datetime.now().timestamp() - start_time
@field_validator("output_file")
@classmethod
def output_file_validattion(cls, value: str) -> str:
def output_file_validation(cls, value: str) -> str:
"""Validate the output file path by removing the / from the beginning of the path."""
if value.startswith("/"):
return value[1:]
@@ -220,6 +224,7 @@ class Task(BaseModel):
f"The task '{self.description}' has no agent assigned, therefore it can't be executed directly and should be executed in a Crew using a specific process that support that, like hierarchical."
)
start_time = self._set_start_execution_time()
self._execution_span = self._telemetry.task_started(crew=agent.crew, task=self)
self.prompt_context = context
@@ -243,6 +248,7 @@ class Task(BaseModel):
)
self.output = task_output
self._set_end_execution_time(start_time)
if self.callback:
self.callback(self.output)
@@ -326,18 +332,6 @@ class Task(BaseModel):
return copied_task
def _create_converter(self, *args, **kwargs) -> Converter:
"""Create a converter instance."""
if self.agent and not self.converter_cls:
converter = self.agent.get_output_converter(*args, **kwargs)
elif self.converter_cls:
converter = self.converter_cls(*args, **kwargs)
if not converter:
raise Exception("No output converter found or set.")
return converter
def _export_output(
self, result: str
) -> Tuple[Optional[BaseModel], Optional[Dict[str, Any]]]:
@@ -345,75 +339,26 @@ class Task(BaseModel):
json_output: Optional[Dict[str, Any]] = None
if self.output_pydantic or self.output_json:
model_output = self._convert_to_model(result)
pydantic_output = (
model_output if isinstance(model_output, BaseModel) else None
model_output = convert_to_model(
result,
self.output_pydantic,
self.output_json,
self.agent,
self.converter_cls,
)
if isinstance(model_output, str):
if isinstance(model_output, BaseModel):
pydantic_output = model_output
elif isinstance(model_output, dict):
json_output = model_output
elif isinstance(model_output, str):
try:
json_output = json.loads(model_output)
except json.JSONDecodeError:
json_output = None
else:
json_output = model_output if isinstance(model_output, dict) else None
return pydantic_output, json_output
def _convert_to_model(self, result: str) -> Union[dict, BaseModel, str]:
model = self.output_pydantic or self.output_json
if model is None:
return result
try:
return self._validate_model(result, model)
except Exception:
return self._handle_partial_json(result, model)
def _validate_model(
self, result: str, model: Type[BaseModel]
) -> Union[dict, BaseModel]:
exported_result = model.model_validate_json(result)
if self.output_json:
return exported_result.model_dump()
return exported_result
def _handle_partial_json(
self, result: str, model: Type[BaseModel]
) -> Union[dict, BaseModel, str]:
match = re.search(r"({.*})", result, re.DOTALL)
if match:
try:
exported_result = model.model_validate_json(match.group(0))
if self.output_json:
return exported_result.model_dump()
return exported_result
except Exception:
pass
return self._convert_with_instructions(result, model)
def _convert_with_instructions(
self, result: str, model: Type[BaseModel]
) -> Union[dict, BaseModel, str]:
llm = self.agent.function_calling_llm or self.agent.llm # type: ignore # Item "None" of "BaseAgent | None" has no attribute "function_calling_llm"
instructions = self._get_conversion_instructions(model, llm)
converter = self._create_converter(
llm=llm, text=result, model=model, instructions=instructions
)
exported_result = (
converter.to_pydantic() if self.output_pydantic else converter.to_json()
)
if isinstance(exported_result, ConverterError):
Printer().print(
content=f"{exported_result.message} Using raw output instead.",
color="red",
)
return result
return exported_result
def _get_output_format(self) -> OutputFormat:
if self.output_json:
return OutputFormat.JSON
@@ -421,34 +366,22 @@ class Task(BaseModel):
return OutputFormat.PYDANTIC
return OutputFormat.RAW
def _get_conversion_instructions(self, model: Type[BaseModel], llm: Any) -> str:
instructions = "I'm gonna convert this raw text into valid JSON."
if not self._is_gpt(llm):
model_schema = PydanticSchemaParser(model=model).get_schema()
instructions = f"{instructions}\n\nThe json should have the following structure, with the following keys:\n{model_schema}"
return instructions
def _save_output(self, content: str) -> None:
if not self.output_file:
raise Exception("Output file path is not set.")
directory = os.path.dirname(self.output_file)
if directory and not os.path.exists(directory):
os.makedirs(directory)
with open(self.output_file, "w", encoding="utf-8") as file:
file.write(content)
def _is_gpt(self, llm) -> bool:
return isinstance(llm, ChatOpenAI) and llm.openai_api_base is None
def _save_file(self, result: Any) -> None:
if self.output_file is None:
raise ValueError("output_file is not set.")
directory = os.path.dirname(self.output_file) # type: ignore # Value of type variable "AnyOrLiteralStr" of "dirname" cannot be "str | None"
if directory and not os.path.exists(directory):
os.makedirs(directory)
with open(self.output_file, "w", encoding="utf-8") as file: # type: ignore # Argument 1 to "open" has incompatible type "str | None"; expected "int | str | bytes | PathLike[str] | PathLike[bytes]"
file.write(result)
with open(self.output_file, "w", encoding="utf-8") as file:
if isinstance(result, dict):
import json
json.dump(result, file, ensure_ascii=False, indent=2)
else:
file.write(str(result))
return None
def __repr__(self):

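The reworked `_save_file` above now serializes dicts as JSON and writes everything else as text. A standalone toy version of just that branch (a plain function, not the `Task` method itself):

```python
import json

def save_file(path: str, result) -> None:
    # Mirrors the branching added to Task._save_file above.
    with open(path, "w", encoding="utf-8") as file:
        if isinstance(result, dict):
            json.dump(result, file, ensure_ascii=False, indent=2)
        else:
            file.write(str(result))

save_file("report.json", {"score": 9, "notes": "ok"})  # pretty-printed JSON
save_file("report.txt", "plain text result")           # written verbatim
```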
View File

@@ -86,7 +86,8 @@ class ToolUsage:
) -> str:
if isinstance(calling, ToolUsageErrorException):
error = calling.message
self._printer.print(content=f"\n\n{error}\n", color="red")
if self.agent.verbose:
self._printer.print(content=f"\n\n{error}\n", color="red")
self.task.increment_tools_errors()
return error
@@ -96,7 +97,8 @@ class ToolUsage:
except Exception as e:
error = getattr(e, "message", str(e))
self.task.increment_tools_errors()
self._printer.print(content=f"\n\n{error}\n", color="red")
if self.agent.verbose:
self._printer.print(content=f"\n\n{error}\n", color="red")
return error
return f"{self._use(tool_string=tool_string, tool=tool, calling=calling)}" # type: ignore # BUG?: "_use" of "ToolUsage" does not return a value (it only ever returns None)
@@ -112,7 +114,8 @@ class ToolUsage:
result = self._i18n.errors("task_repeated_usage").format(
tool_names=self.tools_names
)
self._printer.print(content=f"\n\n{result}\n", color="purple")
if self.agent.verbose:
self._printer.print(content=f"\n\n{result}\n", color="purple")
self._telemetry.tool_repeated_usage(
llm=self.function_calling_llm,
tool_name=tool.name,
@@ -168,7 +171,10 @@ class ToolUsage:
f'\n{error_message}.\nMoving on then. {self._i18n.slice("format").format(tool_names=self.tools_names)}'
).message
self.task.increment_tools_errors()
self._printer.print(content=f"\n\n{error_message}\n", color="red")
if self.agent.verbose:
self._printer.print(
content=f"\n\n{error_message}\n", color="red"
)
return error # type: ignore # No return value expected
self.task.increment_tools_errors()
@@ -192,7 +198,8 @@ class ToolUsage:
calling=calling, output=result, should_cache=should_cache
)
self._printer.print(content=f"\n\n{result}\n", color="purple")
if self.agent.verbose:
self._printer.print(content=f"\n\n{result}\n", color="purple")
if agentops:
agentops.record(tool_event)
self._telemetry.tool_usage(
@@ -346,7 +353,8 @@ class ToolUsage:
if self._run_attempts > self._max_parsing_attempts:
self._telemetry.tool_usage_error(llm=self.function_calling_llm)
self.task.increment_tools_errors()
self._printer.print(content=f"\n\n{e}\n", color="red")
if self.agent.verbose:
self._printer.print(content=f"\n\n{e}\n", color="red")
return ToolUsageErrorException( # type: ignore # Incompatible return value type (got "ToolUsageErrorException", expected "ToolCalling | InstructorToolCalling")
f'{self._i18n.errors("tool_usage_error").format(error=e)}\nMoving on then. {self._i18n.slice("format").format(tool_names=self.tools_names)}'
)
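Net effect of the diff above: ToolUsage error and result messages are printed only when the owning agent is verbose. A sketch of opting in; the other Agent fields are illustrative:

```python
from crewai import Agent

# With the gating added above, tool-usage errors/results print only for verbose agents.
debug_agent = Agent(
    role="Debugger",
    goal="Exercise tools and surface their errors",
    backstory="Runs tools noisily while troubleshooting.",
    verbose=True,  # enables the self._printer.print(...) calls shown in the diff
)
```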

View File

@@ -1,9 +1,14 @@
import json
import re
from typing import Any, Optional, Type, Union
from langchain.schema import HumanMessage, SystemMessage
from langchain_openai import ChatOpenAI
from pydantic import BaseModel, ValidationError
from crewai.agents.agent_builder.utilities.base_output_converter import OutputConverter
from crewai.utilities.printer import Printer
from crewai.utilities.pydantic_schema_parser import PydanticSchemaParser
class ConverterError(Exception):
@@ -72,3 +77,153 @@ class Converter(OutputConverter):
def is_gpt(self) -> bool:
"""Return if llm provided is of gpt from openai."""
return isinstance(self.llm, ChatOpenAI) and self.llm.openai_api_base is None
def convert_to_model(
result: str,
output_pydantic: Optional[Type[BaseModel]],
output_json: Optional[Type[BaseModel]],
agent: Any,
converter_cls: Optional[Type[Converter]] = None,
) -> Union[dict, BaseModel, str]:
model = output_pydantic or output_json
if model is None:
return result
try:
escaped_result = json.dumps(json.loads(result, strict=False))
return validate_model(escaped_result, model, bool(output_json))
except json.JSONDecodeError as e:
Printer().print(
content=f"Error parsing JSON: {e}. Attempting to handle partial JSON.",
color="yellow",
)
return handle_partial_json(
result, model, bool(output_json), agent, converter_cls
)
except ValidationError as e:
Printer().print(
content=f"Pydantic validation error: {e}. Attempting to handle partial JSON.",
color="yellow",
)
return handle_partial_json(
result, model, bool(output_json), agent, converter_cls
)
except Exception as e:
Printer().print(
content=f"Unexpected error during model conversion: {type(e).__name__}: {e}. Returning original result.",
color="red",
)
return result
def validate_model(
result: str, model: Type[BaseModel], is_json_output: bool
) -> Union[dict, BaseModel]:
exported_result = model.model_validate_json(result)
if is_json_output:
return exported_result.model_dump()
return exported_result
def handle_partial_json(
result: str,
model: Type[BaseModel],
is_json_output: bool,
agent: Any,
converter_cls: Optional[Type[Converter]] = None,
) -> Union[dict, BaseModel, str]:
match = re.search(r"({.*})", result, re.DOTALL)
if match:
try:
exported_result = model.model_validate_json(match.group(0))
if is_json_output:
return exported_result.model_dump()
return exported_result
except json.JSONDecodeError as e:
Printer().print(
content=f"Error parsing JSON: {e}. The extracted JSON-like string is not valid JSON. Attempting alternative conversion method.",
color="yellow",
)
except ValidationError as e:
Printer().print(
content=f"Pydantic validation error: {e}. The JSON structure doesn't match the expected model. Attempting alternative conversion method.",
color="yellow",
)
except Exception as e:
Printer().print(
content=f"Unexpected error during partial JSON handling: {type(e).__name__}: {e}. Attempting alternative conversion method.",
color="red",
)
return convert_with_instructions(
result, model, is_json_output, agent, converter_cls
)
def convert_with_instructions(
result: str,
model: Type[BaseModel],
is_json_output: bool,
agent: Any,
converter_cls: Optional[Type[Converter]] = None,
) -> Union[dict, BaseModel, str]:
llm = agent.function_calling_llm or agent.llm
instructions = get_conversion_instructions(model, llm)
converter = create_converter(
agent=agent,
converter_cls=converter_cls,
llm=llm,
text=result,
model=model,
instructions=instructions,
)
exported_result = (
converter.to_pydantic() if not is_json_output else converter.to_json()
)
if isinstance(exported_result, ConverterError):
Printer().print(
content=f"{exported_result.message} Using raw output instead.",
color="red",
)
return result
return exported_result
def get_conversion_instructions(model: Type[BaseModel], llm: Any) -> str:
instructions = "I'm gonna convert this raw text into valid JSON."
if not is_gpt(llm):
model_schema = PydanticSchemaParser(model=model).get_schema()
instructions = f"{instructions}\n\nThe json should have the following structure, with the following keys:\n{model_schema}"
return instructions
def is_gpt(llm: Any) -> bool:
from langchain_openai import ChatOpenAI
return isinstance(llm, ChatOpenAI) and llm.openai_api_base is None
def create_converter(
agent: Optional[Any] = None,
converter_cls: Optional[Type[Converter]] = None,
*args,
**kwargs,
) -> Converter:
if agent and not converter_cls:
if hasattr(agent, "get_output_converter"):
converter = agent.get_output_converter(*args, **kwargs)
else:
raise AttributeError("Agent does not have a 'get_output_converter' method")
elif converter_cls:
converter = converter_cls(*args, **kwargs)
else:
raise ValueError("Either agent or converter_cls must be provided")
if not converter:
raise Exception("No output converter found or set.")
return converter
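
The core of the change above is the escape-then-validate step in convert_to_model: json.loads(result, strict=False) accepts raw control characters (such as CR/LF) inside string values, and json.dumps re-serializes them as proper \r / \n escapes, so model_validate_json receives strictly valid JSON. A minimal sketch of that round-trip, with a hypothetical Reply model standing in for the task's output model:

import json
from pydantic import BaseModel

class Reply(BaseModel):  # hypothetical model, for illustration only
    body: str

raw = '{"body": "Hi Tom,\r\n\r\nSee attached"}'  # literal CR/LF inside the string value
escaped = json.dumps(json.loads(raw, strict=False))  # control characters now escaped
print(Reply.model_validate_json(escaped).body)  # 'Hi Tom,\r\n\r\nSee attached'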


@@ -28,6 +28,7 @@ class CrewEvaluator:
"""
tasks_scores: defaultdict = defaultdict(list)
run_execution_times: defaultdict = defaultdict(list)
iteration: int = 0
def __init__(self, crew, openai_model_name: str):
@@ -40,9 +41,6 @@ class CrewEvaluator:
for task in self.crew.tasks:
task.callback = self.evaluate
def set_iteration(self, iteration: int) -> None:
self.iteration = iteration
def _evaluator_agent(self):
return Agent(
role="Task Execution Evaluator",
@@ -71,6 +69,9 @@ class CrewEvaluator:
output_pydantic=TaskEvaluationPydanticOutput,
)
def set_iteration(self, iteration: int) -> None:
self.iteration = iteration
def print_crew_evaluation_result(self) -> None:
"""
Prints the evaluation result of the crew in a table.
@@ -119,6 +120,16 @@ class CrewEvaluator:
]
table.add_row("Crew", *map(str, crew_scores), f"{crew_average:.1f}")
run_exec_times = [
int(sum(tasks_exec_times))
for _, tasks_exec_times in self.run_execution_times.items()
]
execution_time_avg = int(sum(run_exec_times) / len(run_exec_times))
table.add_row(
"Execution Time (s)",
*map(str, run_exec_times),
f"{execution_time_avg}",
)
# Display the table in the terminal
console = Console()
console.print(table)
@@ -145,5 +156,8 @@ class CrewEvaluator:
if isinstance(evaluation_result.pydantic, TaskEvaluationPydanticOutput):
self.tasks_scores[self.iteration].append(evaluation_result.pydantic.quality)
self.run_execution_times[self.iteration].append(
current_task._execution_time
)
else:
raise ValueError("Evaluation result is not in the expected format")


@@ -54,12 +54,12 @@ class TaskEvaluator:
def __init__(self, original_agent):
self.llm = original_agent.llm
def evaluate(self, task, ouput) -> TaskEvaluation:
def evaluate(self, task, output) -> TaskEvaluation:
evaluation_query = (
f"Assess the quality of the task completed based on the description, expected output, and actual results.\n\n"
f"Task Description:\n{task.description}\n\n"
f"Expected Output:\n{task.expected_output}\n\n"
f"Actual Output:\n{ouput}\n\n"
f"Actual Output:\n{output}\n\n"
"Please provide:\n"
"- Bullet points suggestions to improve future similar tasks\n"
"- A score from 0 to 10 evaluating on completion, quality, and overall performance"


@@ -1,17 +1,28 @@
import re
class YamlParser:
@staticmethod
def parse(file):
"""
Parses a YAML file, modifies specific patterns, and checks for unsupported 'context' usage.
Args:
file (file object): The YAML file to parse.
Returns:
str: The modified content of the YAML file.
Raises:
ValueError: If 'context:' is used incorrectly.
"""
content = file.read()
# Replace single { and } with doubled ones, while leaving already-doubled braces and the special sequences {# and {% intact
modified_content = re.sub(r"(?<!\{){(?!\{)(?!\#)(?!\%)", "{{", content)
modified_content = re.sub(
r"(?<!\})(?<!\%)(?<!\#)\}(?!})", "}}", modified_content
)
modified_content = re.sub(r"(?<!\})(?<!\%)(?<!\#)\}(?!})", "}}", modified_content)
# Check for 'context:' not followed by '[' and raise an error
if re.search(r"context:(?!\s*\[)", modified_content):
raise ValueError(
"Context is currently only supported in code when creating a task. Please use the 'context' key in the task configuration."
"Context is currently only supported in code when creating a task. "
"Please use the 'context' key in the task configuration."
)
return modified_content
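
A quick standalone check of what the two substitutions accept and skip (the sample content here is made up): single braces are doubled, while already-doubled braces and the {% ... %} / {# ... #} delimiters pass through untouched.

import re

content = "greeting: Hello {name}! Keep {{literal}} and {% raw %} {# note #} as-is."
out = re.sub(r"(?<!\{){(?!\{)(?!\#)(?!\%)", "{{", content)
out = re.sub(r"(?<!\})(?<!\%)(?<!\#)\}(?!})", "}}", out)
print(out)
# greeting: Hello {{name}}! Keep {{literal}} and {% raw %} {# note #} as-is.

Note the sample deliberately avoids a bare "context:" key, which the parser now rejects with a ValueError.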


@@ -397,7 +397,7 @@ def test_agent_moved_on_after_max_iterations():
)
task = Task(
description="The final answer is 42. But don't give it yet, instead keep using the `get_final_answer` tool over and over until you're told you can give yout final answer.",
description="The final answer is 42. But don't give it yet, instead keep using the `get_final_answer` tool over and over until you're told you can give your final answer.",
expected_output="The final answer",
)
output = agent.execute_task(
@@ -948,7 +948,7 @@ def test_agent_use_trained_data(crew_training_handler):
crew_training_handler().load.return_value = {
agent.role: {
"suggestions": [
"The result of the math operatio must be right.",
"The result of the math operation must be right.",
"Result must be better than 1.",
]
}
@@ -958,7 +958,7 @@ def test_agent_use_trained_data(crew_training_handler):
assert (
result == "What is 1 + 1?You MUST follow these feedbacks: \n "
"The result of the math operatio must be right.\n - Result must be better than 1."
"The result of the math operation must be right.\n - Result must be better than 1."
)
crew_training_handler.assert_has_calls(
[mock.call(), mock.call("trained_agents_data.pkl"), mock.call().load()]


@@ -135,29 +135,29 @@ def test_version_command_with_tools(runner):
)
@mock.patch("crewai.cli.cli.test_crew")
def test_test_default_iterations(test_crew, runner):
@mock.patch("crewai.cli.cli.evaluate_crew")
def test_test_default_iterations(evaluate_crew, runner):
result = runner.invoke(test)
test_crew.assert_called_once_with(3, "gpt-4o-mini")
evaluate_crew.assert_called_once_with(3, "gpt-4o-mini")
assert result.exit_code == 0
assert "Testing the crew for 3 iterations with model gpt-4o-mini" in result.output
@mock.patch("crewai.cli.cli.test_crew")
def test_test_custom_iterations(test_crew, runner):
@mock.patch("crewai.cli.cli.evaluate_crew")
def test_test_custom_iterations(evaluate_crew, runner):
result = runner.invoke(test, ["--n_iterations", "5", "--model", "gpt-4o"])
test_crew.assert_called_once_with(5, "gpt-4o")
evaluate_crew.assert_called_once_with(5, "gpt-4o")
assert result.exit_code == 0
assert "Testing the crew for 5 iterations with model gpt-4o" in result.output
@mock.patch("crewai.cli.cli.test_crew")
def test_test_invalid_string_iterations(test_crew, runner):
@mock.patch("crewai.cli.cli.evaluate_crew")
def test_test_invalid_string_iterations(evaluate_crew, runner):
result = runner.invoke(test, ["--n_iterations", "invalid"])
test_crew.assert_not_called()
evaluate_crew.assert_not_called()
assert result.exit_code == 2
assert (
"Usage: test [OPTIONS]\nTry 'test --help' for help.\n\nError: Invalid value for '-n' / '--n_iterations': 'invalid' is not a valid integer.\n"

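The invalid-input case relies on click's own option type-checking: an int option given "invalid" aborts with exit code 2 before the command body runs, which is why evaluate_crew is never called. A self-contained illustration with a throwaway command (the option names mirror the ones above; the command itself is hypothetical):

import click
from click.testing import CliRunner

@click.command(name="test")
@click.option("-n", "--n_iterations", type=int, default=3)
@click.option("--model", default="gpt-4o-mini")
def demo(n_iterations, model):  # hypothetical stand-in for the real CLI command
    click.echo(f"Testing the crew for {n_iterations} iterations with model {model}")

runner = CliRunner()
assert runner.invoke(demo, []).exit_code == 0
assert runner.invoke(demo, ["--n_iterations", "invalid"]).exit_code == 2  # click usage error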

@@ -3,7 +3,7 @@ from unittest import mock
import pytest
from crewai.cli import test_crew
from crewai.cli import evaluate_crew
@pytest.mark.parametrize(
@@ -14,13 +14,13 @@ from crewai.cli import test_crew
(10, "gpt-4"),
],
)
@mock.patch("crewai.cli.test_crew.subprocess.run")
@mock.patch("crewai.cli.evaluate_crew.subprocess.run")
def test_crew_success(mock_subprocess_run, n_iterations, model):
"""Test the crew function for successful execution."""
mock_subprocess_run.return_value = subprocess.CompletedProcess(
args=f"poetry run test {n_iterations} {model}", returncode=0
)
result = test_crew.test_crew(n_iterations, model)
result = evaluate_crew.evaluate_crew(n_iterations, model)
mock_subprocess_run.assert_called_once_with(
["poetry", "run", "test", str(n_iterations), model],
@@ -31,26 +31,26 @@ def test_crew_success(mock_subprocess_run, n_iterations, model):
assert result is None
@mock.patch("crewai.cli.test_crew.click")
@mock.patch("crewai.cli.evaluate_crew.click")
def test_test_crew_zero_iterations(click):
test_crew.test_crew(0, "gpt-4o")
evaluate_crew.evaluate_crew(0, "gpt-4o")
click.echo.assert_called_once_with(
"An unexpected error occurred: The number of iterations must be a positive integer.",
err=True,
)
@mock.patch("crewai.cli.test_crew.click")
@mock.patch("crewai.cli.evaluate_crew.click")
def test_test_crew_negative_iterations(click):
test_crew.test_crew(-2, "gpt-4o")
evaluate_crew.evaluate_crew(-2, "gpt-4o")
click.echo.assert_called_once_with(
"An unexpected error occurred: The number of iterations must be a positive integer.",
err=True,
)
@mock.patch("crewai.cli.test_crew.click")
@mock.patch("crewai.cli.test_crew.subprocess.run")
@mock.patch("crewai.cli.evaluate_crew.click")
@mock.patch("crewai.cli.evaluate_crew.subprocess.run")
def test_test_crew_called_process_error(mock_subprocess_run, click):
n_iterations = 5
mock_subprocess_run.side_effect = subprocess.CalledProcessError(
@@ -59,7 +59,7 @@ def test_test_crew_called_process_error(mock_subprocess_run, click):
output="Error",
stderr="Some error occurred",
)
test_crew.test_crew(n_iterations, "gpt-4o")
evaluate_crew.evaluate_crew(n_iterations, "gpt-4o")
mock_subprocess_run.assert_called_once_with(
["poetry", "run", "test", "5", "gpt-4o"],
@@ -78,13 +78,13 @@ def test_test_crew_called_process_error(mock_subprocess_run, click):
)
@mock.patch("crewai.cli.test_crew.click")
@mock.patch("crewai.cli.test_crew.subprocess.run")
@mock.patch("crewai.cli.evaluate_crew.click")
@mock.patch("crewai.cli.evaluate_crew.subprocess.run")
def test_test_crew_unexpected_exception(mock_subprocess_run, click):
# Arrange
n_iterations = 5
mock_subprocess_run.side_effect = Exception("Unexpected error")
test_crew.test_crew(n_iterations, "gpt-4o")
evaluate_crew.evaluate_crew(n_iterations, "gpt-4o")
mock_subprocess_run.assert_called_once_with(
["poetry", "run", "test", "5", "gpt-4o"],

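Read together, these tests pin down evaluate_crew's error handling: a non-positive iteration count and a subprocess failure both surface through click.echo(..., err=True) rather than raising. A rough reconstruction of that control flow, inferred only from the assertions above (the real function body and the exact subprocess.run kwargs may differ):

import subprocess
import click

def evaluate_crew_sketch(n_iterations: int, model: str) -> None:
    # inferred from the tests; not the actual crewai.cli.evaluate_crew source
    try:
        if n_iterations <= 0:
            raise ValueError("The number of iterations must be a positive integer.")
        subprocess.run(["poetry", "run", "test", str(n_iterations), model], check=True)
    except subprocess.CalledProcessError as e:
        click.echo(f"An error occurred while testing the crew: {e}", err=True)
    except Exception as e:
        click.echo(f"An unexpected error occurred: {e}", err=True)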

@@ -8,7 +8,6 @@ from unittest.mock import MagicMock, patch
import pydantic_core
import pytest
from crewai.agent import Agent
from crewai.agents.cache import CacheHandler
from crewai.crew import Crew
@@ -69,7 +68,7 @@ def test_crew_config_conditional_requirement():
"agent": "Senior Researcher",
},
{
"description": "Write a 1 amazing paragraph highlight for each idead that showcases how good an article about this topic could be, check references if necessary or search for more content but make sure it's unique, interesting and well written. Return the list of ideas with their paragraph and your notes.",
"description": "Write a 1 amazing paragraph highlight for each idea that showcases how good an article about this topic could be, check references if necessary or search for more content but make sure it's unique, interesting and well written. Return the list of ideas with their paragraph and your notes.",
"expected_output": "A 4 paragraph article about AI.",
"agent": "Senior Writer",
},
@@ -633,18 +632,21 @@ def test_sequential_async_task_execution_completion():
list_ideas = Task(
description="Give me a list of 5 interesting ideas to explore for an article, what makes them unique and interesting.",
expected_output="Bullet point list of 5 important events.",
max_retry_limit=3,
agent=researcher,
async_execution=True,
)
list_important_history = Task(
description="Research the history of AI and give me the 5 most important events that shaped the technology.",
expected_output="Bullet point list of 5 important events.",
max_retry_limit=3,
agent=researcher,
async_execution=True,
)
write_article = Task(
description="Write an article about the history of AI and its most important events.",
expected_output="A 4 paragraph article about AI.",
max_retry_limit=3,
agent=writer,
context=[list_ideas, list_important_history],
)
@@ -657,7 +659,7 @@ def test_sequential_async_task_execution_completion():
sequential_result = sequential_crew.kickoff()
assert sequential_result.raw.startswith(
"**The Evolution of Artificial Intelligence: A Journey Through Milestones**"
"The history of artificial intelligence (AI) is marked by several pivotal events that have shaped its evolution and impact on various sectors."
)
@@ -1189,7 +1191,7 @@ def test_task_with_no_arguments():
)
task = Task(
description="Look at the available data nd give me a sense on the total number of sales.",
description="Look at the available data and give me a sense on the total number of sales.",
expected_output="The total number of sales as an integer",
agent=researcher,
)
@@ -1236,7 +1238,7 @@ def test_delegation_is_not_enabled_if_there_are_only_one_agent():
)
task = Task(
description="Look at the available data nd give me a sense on the total number of sales.",
description="Look at the available data and give me a sense on the total number of sales.",
expected_output="The total number of sales as an integer",
agent=researcher,
)
@@ -1312,14 +1314,14 @@ def test_agent_usage_metrics_are_captured_for_hierarchical_process():
)
result = crew.kickoff()
assert result.raw == '"Howdy!"'
assert result.raw == "Howdy!"
print(crew.usage_metrics)
assert crew.usage_metrics == {
"total_tokens": 311,
"prompt_tokens": 224,
"completion_tokens": 87,
"total_tokens": 219,
"prompt_tokens": 201,
"completion_tokens": 18,
"successful_requests": 1,
}
@@ -1599,16 +1601,16 @@ def test_tools_with_custom_caching():
writer1 = Agent(
role="Writer",
goal="You write lesssons of math for kids.",
backstory="You're an expert in writting and you love to teach kids but you know nothing of math.",
goal="You write lessons of math for kids.",
backstory="You're an expert in writing and you love to teach kids but you know nothing of math.",
tools=[multiplcation_tool],
allow_delegation=False,
)
writer2 = Agent(
role="Writer",
goal="You write lesssons of math for kids.",
backstory="You're an expert in writting and you love to teach kids but you know nothing of math.",
goal="You write lessons of math for kids.",
backstory="You're an expert in writing and you love to teach kids but you know nothing of math.",
tools=[multiplcation_tool],
allow_delegation=False,
)


@@ -5,13 +5,12 @@ import json
from unittest.mock import MagicMock, patch
import pytest
from pydantic import BaseModel
from pydantic_core import ValidationError
from crewai import Agent, Crew, Process, Task
from crewai.tasks.conditional_task import ConditionalTask
from crewai.tasks.task_output import TaskOutput
from crewai.utilities.converter import Converter
from pydantic import BaseModel
from pydantic_core import ValidationError
def test_task_tool_reflect_agent_tools():
@@ -110,7 +109,7 @@ def test_task_callback():
task_completed.assert_called_once_with(task.output)
def test_task_callback_returns_task_ouput():
def test_task_callback_returns_task_output():
from crewai.tasks.output_format import OutputFormat
researcher = Agent(


@@ -84,6 +84,10 @@ class TestCrewEvaluator:
1: [10, 9, 8],
2: [9, 8, 7],
}
crew_planner.run_execution_times = {
1: [24, 45, 66],
2: [55, 33, 67],
}
crew_planner.print_crew_evaluation_result()
@@ -98,6 +102,7 @@ class TestCrewEvaluator:
mock.call().add_row("Task 2", "9", "8", "8.5"),
mock.call().add_row("Task 3", "8", "7", "7.5"),
mock.call().add_row("Crew", "9.0", "8.0", "8.5"),
mock.call().add_row("Execution Time (s)", "135", "155", "145"),
]
)
console.assert_has_calls([mock.call(), mock.call().print(table())])


@@ -0,0 +1,266 @@
import json
from unittest.mock import MagicMock, Mock, patch
import pytest
from crewai.utilities.converter import (
Converter,
ConverterError,
convert_to_model,
convert_with_instructions,
create_converter,
get_conversion_instructions,
handle_partial_json,
is_gpt,
validate_model,
)
from pydantic import BaseModel
# Sample Pydantic models for testing
class EmailResponse(BaseModel):
previous_message_content: str
class EmailResponses(BaseModel):
responses: list[EmailResponse]
class SimpleModel(BaseModel):
name: str
age: int
class NestedModel(BaseModel):
id: int
data: SimpleModel
# Fixtures
@pytest.fixture
def mock_agent():
agent = Mock()
agent.function_calling_llm = None
agent.llm = Mock()
return agent
# Tests for convert_to_model
def test_convert_to_model_with_valid_json():
result = '{"name": "John", "age": 30}'
output = convert_to_model(result, SimpleModel, None, None)
assert isinstance(output, SimpleModel)
assert output.name == "John"
assert output.age == 30
def test_convert_to_model_with_invalid_json():
result = '{"name": "John", "age": "thirty"}'
with patch("crewai.utilities.converter.handle_partial_json") as mock_handle:
mock_handle.return_value = "Fallback result"
output = convert_to_model(result, SimpleModel, None, None)
assert output == "Fallback result"
def test_convert_to_model_with_no_model():
result = "Plain text"
output = convert_to_model(result, None, None, None)
assert output == "Plain text"
def test_convert_to_model_with_special_characters():
json_string_test = """
{
"responses": [
{
"previous_message_content": "Hi Tom,\r\n\r\nNiamh has chosen the Mika phonics on"
}
]
}
"""
output = convert_to_model(json_string_test, EmailResponses, None, None)
assert isinstance(output, EmailResponses)
assert len(output.responses) == 1
assert (
output.responses[0].previous_message_content
== "Hi Tom,\r\n\r\nNiamh has chosen the Mika phonics on"
)
def test_convert_to_model_with_escaped_special_characters():
json_string_test = json.dumps(
{
"responses": [
{
"previous_message_content": "Hi Tom,\r\n\r\nNiamh has chosen the Mika phonics on"
}
]
}
)
output = convert_to_model(json_string_test, EmailResponses, None, None)
assert isinstance(output, EmailResponses)
assert len(output.responses) == 1
assert (
output.responses[0].previous_message_content
== "Hi Tom,\r\n\r\nNiamh has chosen the Mika phonics on"
)
def test_convert_to_model_with_multiple_special_characters():
json_string_test = """
{
"responses": [
{
"previous_message_content": "Line 1\r\nLine 2\tTabbed\nLine 3\r\n\rEscaped newline"
}
]
}
"""
output = convert_to_model(json_string_test, EmailResponses, None, None)
assert isinstance(output, EmailResponses)
assert len(output.responses) == 1
assert (
output.responses[0].previous_message_content
== "Line 1\r\nLine 2\tTabbed\nLine 3\r\n\rEscaped newline"
)
# Tests for validate_model
def test_validate_model_pydantic_output():
result = '{"name": "Alice", "age": 25}'
output = validate_model(result, SimpleModel, False)
assert isinstance(output, SimpleModel)
assert output.name == "Alice"
assert output.age == 25
def test_validate_model_json_output():
result = '{"name": "Bob", "age": 40}'
output = validate_model(result, SimpleModel, True)
assert isinstance(output, dict)
assert output == {"name": "Bob", "age": 40}
# Tests for handle_partial_json
def test_handle_partial_json_with_valid_partial():
result = 'Some text {"name": "Charlie", "age": 35} more text'
output = handle_partial_json(result, SimpleModel, False, None)
assert isinstance(output, SimpleModel)
assert output.name == "Charlie"
assert output.age == 35
def test_handle_partial_json_with_invalid_partial(mock_agent):
result = "No valid JSON here"
with patch("crewai.utilities.converter.convert_with_instructions") as mock_convert:
mock_convert.return_value = "Converted result"
output = handle_partial_json(result, SimpleModel, False, mock_agent)
assert output == "Converted result"
# Tests for convert_with_instructions
@patch("crewai.utilities.converter.create_converter")
@patch("crewai.utilities.converter.get_conversion_instructions")
def test_convert_with_instructions_success(
mock_get_instructions, mock_create_converter, mock_agent
):
mock_get_instructions.return_value = "Instructions"
mock_converter = Mock()
mock_converter.to_pydantic.return_value = SimpleModel(name="David", age=50)
mock_create_converter.return_value = mock_converter
result = "Some text to convert"
output = convert_with_instructions(result, SimpleModel, False, mock_agent)
assert isinstance(output, SimpleModel)
assert output.name == "David"
assert output.age == 50
@patch("crewai.utilities.converter.create_converter")
@patch("crewai.utilities.converter.get_conversion_instructions")
def test_convert_with_instructions_failure(
mock_get_instructions, mock_create_converter, mock_agent
):
mock_get_instructions.return_value = "Instructions"
mock_converter = Mock()
mock_converter.to_pydantic.return_value = ConverterError("Conversion failed")
mock_create_converter.return_value = mock_converter
result = "Some text to convert"
with patch("crewai.utilities.converter.Printer") as mock_printer:
output = convert_with_instructions(result, SimpleModel, False, mock_agent)
assert output == result
mock_printer.return_value.print.assert_called_once()
# Tests for get_conversion_instructions
def test_get_conversion_instructions_gpt():
mock_llm = Mock()
mock_llm.openai_api_base = None
with patch("crewai.utilities.converter.is_gpt", return_value=True):
instructions = get_conversion_instructions(SimpleModel, mock_llm)
assert instructions == "I'm gonna convert this raw text into valid JSON."
def test_get_conversion_instructions_non_gpt():
mock_llm = Mock()
with patch("crewai.utilities.converter.is_gpt", return_value=False):
with patch("crewai.utilities.converter.PydanticSchemaParser") as mock_parser:
mock_parser.return_value.get_schema.return_value = "Sample schema"
instructions = get_conversion_instructions(SimpleModel, mock_llm)
assert "Sample schema" in instructions
# Tests for is_gpt
def test_is_gpt_true():
from langchain_openai import ChatOpenAI
mock_llm = Mock(spec=ChatOpenAI)
mock_llm.openai_api_base = None
assert is_gpt(mock_llm) is True
def test_is_gpt_false():
mock_llm = Mock()
assert is_gpt(mock_llm) is False
class CustomConverter(Converter):
pass
def test_create_converter_with_mock_agent():
mock_agent = MagicMock()
mock_agent.get_output_converter.return_value = MagicMock(spec=Converter)
converter = create_converter(
agent=mock_agent,
llm=Mock(),
text="Sample",
model=SimpleModel,
instructions="Convert",
)
assert isinstance(converter, Converter)
mock_agent.get_output_converter.assert_called_once()
def test_create_converter_with_custom_converter():
converter = create_converter(
converter_cls=CustomConverter,
llm=Mock(),
text="Sample",
model=SimpleModel,
instructions="Convert",
)
assert isinstance(converter, CustomConverter)
def test_create_converter_fails_without_agent_or_converter_cls():
with pytest.raises(
ValueError, match="Either agent or converter_cls must be provided"
):
create_converter(
llm=Mock(), text="Sample", model=SimpleModel, instructions="Convert"
)
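
The partial-JSON path these fixtures exercise boils down to the re.search(r"({.*})", result, re.DOTALL) extraction from the converter hunk earlier: grab the outermost brace-delimited span and try to validate it. In isolation, using the same fixture text as test_handle_partial_json_with_valid_partial:

import re
from pydantic import BaseModel

class SimpleModel(BaseModel):
    name: str
    age: int

text = 'Some text {"name": "Charlie", "age": 35} more text'
match = re.search(r"({.*})", text, re.DOTALL)  # same pattern handle_partial_json uses
print(SimpleModel.model_validate_json(match.group(0)))  # name='Charlie' age=35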