Compare commits


39 Commits

Author SHA1 Message Date
Eduardo Chiarotti
b5c49bcebd Update feature_request.md 2024-08-06 12:40:25 -03:00
Eduardo Chiarotti
d6b73f2a73 Update issue templates
Add both Bug and Feature templates
2024-08-05 21:47:55 -03:00
Thiago Moretto
c0c59dc932 Merge pull request #1064 from crewAIInc/thiago/pipeline-fix
Fix flaky test due to suppressed error on `on_llm_start` callback
2024-08-05 16:13:19 -03:00
Thiago Moretto
f3b3d321e5 Fix lint issue 2024-08-05 13:34:03 -03:00
Thiago Moretto
67e4433dc2 Fix flaky test due to suppressed error on on_llm_start callback 2024-08-05 13:29:39 -03:00
Rip&Tear
4a7ae8df71 Update LLM-Connections.md (#1039)
* Minor fixes and updates

* minor fixes across docs

* Updated LLM-Connections.md

---------

Co-authored-by: theCyberTech <mattrapidb@gmail.com>
2024-08-02 15:04:52 -03:00
Rip&Tear
09f92122d5 Docs minor fixes (#1035)
* Minor fixes and updates

* minor fixes across docs

---------

Co-authored-by: theCyberTech <mattrapidb@gmail.com>
2024-08-02 15:01:16 -03:00
Lorenze Jay
8118b7b7d6 Feat/sliding context window (#1042)
* patching for non-gpt model

* removal of json_object tool name assignment

* fixed issue for smaller models due to instructions prompt

* fixing for ollama llama3 models

* WIP: generated summary from documents split, could also create memgpt approach

* WIP: need tests but user inputted summarization strategy implemented - handling context window exceeding errors

* rm extra line

* removed type ignores

* added tests

* handling n to summarize prompt

* code cleanup, using click for cli asker

* rm not used class

* better refactor

* reverted poetry lock

* reverted poetry.lock

* improved context window exceeding exception class
2024-08-01 13:15:50 -07:00
João Moura
c93b85ac53 Preparing for new version 2024-07-30 19:21:18 -04:00
Lorenze Jay
6378f6caec WIP fixed mypy src types (#1036) 2024-07-30 10:59:50 -07:00
Eduardo Chiarotti
d824db82a3 feat: Add execution time to both task and testing feature (#1031)
* feat: Add execution time to both task and testing feature

* feat: Remove unused functions

* feat: change test_crew to evaluate_crew to avoid issues with testing libs

* feat: fix tests
2024-07-29 23:17:07 -03:00
Matt Young
de6b597eff telemetry.py - fix typo in comment. (#1020) 2024-07-29 23:03:51 -03:00
Deepak Tammali
6111d05219 docs: Fix crewai-tools package name typo in getting-started docs (#1026) 2024-07-29 23:03:32 -03:00
Monarch Wadia
f83c91d612 Fixed package name typo in pip install command (#1029)
Changed `pip install crewai-tools` to `pip install crewai-tools`
2024-07-29 23:02:48 -03:00
Mackensie Alvarez
c8f360414e Update Start-a-New-CrewAI-Project-Template-Method.md (#1030) 2024-07-29 23:02:18 -03:00
Brandon Hancock (bhancock_ai)
fa4393d77e Add in missing triple quote and execution time to resume agent functionality. (#1025)
* Add in missing triple quote and execution time to resume agent functionality

* Fixing broken kwargs and other issues causing our tests to fail
2024-07-29 14:39:02 -03:00
Rip&Tear
25c314befc Minor fixes and updates (#1019)
Co-authored-by: theCyberTech <mattrapidb@gmail.com>
2024-07-29 03:24:23 -03:00
Rip&Tear
2fe79e68cd Small 404 error fixes (#1018)
* Updated Docs:  New Getting started section + content update / addition

* fixed indentation issue

* Minor updates to fix typos

* Fixed up 404 error on latest commit

---------

Co-authored-by: theCyberTech <the_t3ch@pm.me>
Co-authored-by: theCyberTech <mattrapidb@gmail.com>
2024-07-28 22:01:04 -03:00
Nuraly
37d05a2365 Update Force-Tool-Ouput-as-Result.md (#964)
I think there is some mistake, because there is no such parameter as force_output_result, and as the code shows, the correct parameter, result_as_answer, is set during agent creation, not task creation.
2024-07-28 15:41:56 -03:00
Carine Bruyndoncx
0111d261a4 Update Crews.md - correct result variable to crew_output (#972) 2024-07-28 15:40:36 -03:00
Taleb
0a23e1dc13 Performed spell check across the rest of code base, and enhanced the YAML parser code a little (#895)
* Performed spell check across the entire documentation

Thank you once again!

* Performed spell check across most of the code base
Folders checked:
- agents
- cli
- memory
- project
- tasks
- telemetry
- tools
- translations

* Trying to add a max_token for the agents, so they are limited by number of tokens.

* Performed spell check across the rest of code base, and enhanced the YAML parser code a little

* Small change in the main agent doc

* Improve _save_file method to handle both dict and str inputs

- Add check for dict type input
- Use json.dump for dict serialization
- Convert non-dict inputs to string
- Remove type ignore comments

---------

Co-authored-by: João Moura <joaomdmoura@gmail.com>
2024-07-28 15:39:54 -03:00
Henri Wenlin
ef5ff71346 feat: add verbose option for printing in ToolUsage (#990) 2024-07-28 15:12:10 -03:00
Samuel Mallet
1697b4cacb Add docs for new parameters to SerperDevTool (#993) 2024-07-28 15:09:55 -03:00
Taleb
6b4710a8d1 Improve _save_file method to handle both dict and str inputs (#1011)
- Add check for dict type input
- Use json.dump for dict serialization
- Convert non-dict inputs to string
- Remove type ignore comments
2024-07-28 15:03:18 -03:00
Lennex Zinyando
6f2a8f08ba Fixes getting started section links (#1016) 2024-07-28 15:02:41 -03:00
João Moura
4e6abf596d updating test 2024-07-28 13:23:03 -04:00
Rip&Tear
9018e2ab6a Docs update (#1008)
* Updated Docs:  New Getting started section + content update / addition

* fixed indentation issue

* Minor updates to fix typos

---------

Co-authored-by: theCyberTech <the_t3ch@pm.me>
2024-07-28 11:55:09 -03:00
ResearchAI
99d023c5f3 Update reset_memories_command.py (#974) 2024-07-26 14:40:47 -07:00
Brandon Hancock (bhancock_ai)
da7d8256eb Json Task Output Truncation with Escape Characters (#1009)
* Fixed special character issue when converting json to models. Added numerous tests to ensure things work properly.

* Fix linting error and cleaned up tests

* Fix customer_converter_cls test failure

* Fixed tests. Thank you lorenze for pointing that out. added a few more to ensure converter creation works properly

* Address lorenze feedback

* Fix linting issues
2024-07-26 17:27:01 -04:00
Brandon Hancock (bhancock_ai)
88bffaa0d0 Merge pull request #1012 from crewAIInc/fix/breaking-test-task-eval
fix test due to asserting instructions model_schema change
2024-07-26 16:55:26 -04:00
Lorenze Jay
1159140d9f fix test due to asserting instructions model_schema change 2024-07-26 13:37:44 -07:00
Lorenze Jay
5ac7050f7a Patch/non gpt model pydantic output (#1003)
* patching for non-gpt model

* removal of json_object tool name assignment

* fixed issue for smaller models due to instructions prompt

* fixing for ollama llama3 models

* closing brackets

* removed not used and fixes
2024-07-26 10:57:56 -07:00
Lorenze Jay
8b513de64c hierarchical process unblocked for async tasks (#995)
* WIP: hierarchical unblock for async tasks

* added better test

* update name change

* added more test and crew manager cleanup

* remove prints

* code cleanup, no need to pass manager
2024-07-26 10:55:51 -07:00
Eduardo Chiarotti
144e6d203f feat: add ability to set LLM for AgentPlanner on Crew (#1001)
* feat: add ability to set LLM for AgentPlanner on Crew

* feat: fixes issue on instantiating the ChatOpenAI on the crew

* docs: add docs for the planning_llm new parameter

* docs: change message to ChatOpenAI llm

* feat: add tests
2024-07-26 14:24:29 -03:00
Eduardo Chiarotti
2d2154ed65 feat: add crew Testing/Evaluating feature (#998)
* feat: add crew Testing/evaluating feature

* feat: add docs and add unit test

* feat: improve testing output table

* feat: add tests

* feat: fix type checking issue

* feat: add raise ValueError when testing if output is not the expected

* docs: add docs for Testing

* feat: improve tests and fix some issue

* feat: back to sync

* feat: change openai model

* feat: fix test
2024-07-26 14:23:51 -03:00
Brandon Hancock (bhancock_ai)
2d086ab596 Merge pull request #994 from crewAIInc/fix/getting-started-docs
fixed bullet points for crew yaml annotations
2024-07-23 14:36:45 -04:00
Lorenze Jay
776c67cc0f clearer usage for crewai create command 2024-07-23 11:32:25 -07:00
Lorenze Jay
78ef490646 fixed bullet points for crew yaml annotations
Lorenze Jay
4da5cc9778 Feat yaml config all attributes (#985)
* WIP: yaml proper mapping for agents and agent

* WIP: added output_json and output_pydantic setup

* WIP: core logic added, need cleanup

* code cleanup

* updated docs and example template to use yaml to reference agents within tasks

* cleanup type errors

* Update Start-a-New-CrewAI-Project.md

---------

Co-authored-by: João Moura <joaomdmoura@gmail.com>
2024-07-23 00:21:01 -03:00
67 changed files with 437354 additions and 7555 deletions

.github/ISSUE_TEMPLATE/bug_report.md vendored Normal file

@@ -0,0 +1,35 @@
---
name: Bug report
about: Create a report to help us improve CrewAI
title: "[BUG]"
labels: bug
assignees: ''
---
**Description**
Provide a clear and concise description of what the bug is.
**Steps to Reproduce**
Provide a step-by-step process to reproduce the behavior:
**Expected behavior**
A clear and concise description of what you expected to happen.
**Screenshots/Code snippets**
If applicable, add screenshots or code snippets to help explain your problem.
**Environment Details:**
- **Operating System**: [e.g., Ubuntu 20.04, macOS Catalina, Windows 10]
- **Python Version**: [e.g., 3.8, 3.9, 3.10]
- **crewAI Version**: [e.g., 0.30.11]
- **crewAI Tools Version**: [e.g., 0.2.6]
**Logs**
Include relevant logs or error messages if applicable.
**Possible Solution**
Have a solution in mind? Please suggest it here, or write "None".
**Additional context**
Add any other context about the problem here.

.github/ISSUE_TEMPLATE/feature_request.md vendored Normal file

@@ -0,0 +1,21 @@
---
name: Feature request
about: Suggest a Feature to improve CrewAI
title: "[FEAT]"
labels: feature-request, improvement
assignees: ''
---
**Is your feature request related to a problem? Please describe.**
A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]
If possible, attach the related Issue.
**Describe the solution you'd like / Use-case**
A clear and concise description of what you want to happen.
**Describe alternatives you've considered**
A clear and concise description of any alternative solutions or features you've considered.
**Additional context**
Add any other context or screenshots about the feature request here.

README.md

@@ -254,7 +254,7 @@ pip install dist/*.tar.gz
CrewAI uses anonymous telemetry to collect usage data with the main purpose of helping us improve the library by focusing our efforts on the most used features, integrations and tools.
-There is NO data being collected on the prompts, tasks descriptions agents backstories or goals nor tools usage, no API calls, nor responses nor any data that is being processed by the agents, nor any secrets and env vars.
+It's pivotal to understand that **NO data is collected** concerning prompts, task descriptions, agents' backstories or goals, usage of tools, API calls, responses, any data processed by the agents, or secrets and environment variables, with the exception of the conditions mentioned. When the `share_crew` feature is enabled, detailed data including task descriptions, agents' backstories or goals, and other specific attributes are collected to provide deeper insights while respecting user privacy. We don't offer a way to disable it now, but we will in the future.
Data collected includes:
@@ -279,7 +279,7 @@ Data collected includes:
- Tools names available
- Understand out of the publically available tools, which ones are being used the most so we can improve them
-Users can opt-in sharing the complete telemetry data by setting the `share_crew` attribute to `True` on their Crews.
+Users can opt-in to Further Telemetry, sharing the complete telemetry data by setting the `share_crew` attribute to `True` on their Crews. Enabling `share_crew` results in the collection of detailed crew and task execution data, including `goal`, `backstory`, `context`, and `output` of tasks. This enables a deeper insight into usage patterns while respecting the user's choice to share.
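In code, the opt-in described above is a single flag on the crew. A minimal sketch (the agent and task here are placeholders, not from the diff):

```python
from crewai import Agent, Crew, Task

# Placeholder agent and task, purely to make the flag's usage concrete
researcher = Agent(role="Researcher", goal="Find facts", backstory="Curious and thorough.")
summary = Task(description="Research topic X", expected_output="A short summary", agent=researcher)

crew = Crew(
    agents=[researcher],
    tasks=[summary],
    share_crew=True,  # opt in to the fuller telemetry described above
)
```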
## License


@@ -114,7 +114,7 @@ from langchain.agents import load_tools
langchain_tools = load_tools(["google-serper"], llm=llm)
agent1 = CustomAgent(
-    role="backstory agent",
+    role="agent role",
    goal="who is {input}?",
    backstory="agent backstory",
    verbose=True,
@@ -127,7 +127,7 @@ task1 = Task(
)
agent2 = Agent(
-    role="bio agent",
+    role="agent role",
    goal="summarize the short bio for {input} and if needed do more research",
    backstory="agent backstory",
    verbose=True,

docs/core-concepts/Crews.md

@@ -33,6 +33,7 @@ A crew in crewAI represents a collaborative group of agents working together to
| **Manager Callbacks** _(optional)_ | `manager_callbacks` | `manager_callbacks` takes a list of callback handlers to be executed by the manager agent when a hierarchical process is used. |
| **Prompt File** _(optional)_ | `prompt_file` | Path to the prompt JSON file to be used for the crew. |
| **Planning** *(optional)* | `planning` | Adds planning ability to the Crew. When activated before each Crew iteration, all Crew data is sent to an AgentPlanner that will plan the tasks and this plan will be added to each task description. |
+| **Planning LLM** *(optional)* | `planning_llm` | The language model used by the AgentPlanner in a planning process. |
!!! note "Crew Max RPM"
    The `max_rpm` attribute sets the maximum number of requests per minute the crew can perform to avoid rate limits and will override individual agents' `max_rpm` settings if you set it.
@@ -136,7 +137,7 @@ crew = Crew(
    verbose=2
)
-result = crew.kickoff()
+crew_output = crew.kickoff()
# Accessing the crew output
print(f"Raw Output: {crew_output.raw}")


@@ -23,6 +23,25 @@ my_crew = Crew(
From this point on, your crew will have planning enabled, and the tasks will be planned before each iteration.
+#### Planning LLM
+Now you can define the LLM that will be used to plan the tasks. You can use any ChatOpenAI LLM model available.
+```python
+from crewai import Crew, Agent, Task, Process
+from langchain_openai import ChatOpenAI
+# Assemble your crew with planning capabilities and custom LLM
+my_crew = Crew(
+    agents=self.agents,
+    tasks=self.tasks,
+    process=Process.sequential,
+    planning=True,
+    planning_llm=ChatOpenAI(model="gpt-4o")
+)
+```
### Example
When running the base case example, you will see something like the following output, which represents the output of the AgentPlanner responsible for creating the step-by-step logic to add to the Agents tasks.

docs/getting-started/Installing-CrewAI.md

@@ -18,4 +18,7 @@ pip install crewai
# Install the main crewAI package and the tools package
# that includes a series of helpful tools for your agents
pip install 'crewai[tools]'
+# Alternatively, you can also use:
+pip install crewai crewai-tools
```

docs/getting-started/Start-a-New-CrewAI-Project-Template-Method.md

@@ -0,0 +1,255 @@
---
title: Starting a New CrewAI Project - Using Template
description: A comprehensive guide to starting a new CrewAI project, including the latest updates and project setup methods.
---
# Starting Your CrewAI Project
Welcome to the ultimate guide for starting a new CrewAI project. This document will walk you through the steps to create, customize, and run your CrewAI project, ensuring you have everything you need to get started.
Before we start, there are a couple of things to note:
1. CrewAI is a Python package and requires Python >=3.10 and <=3.13 to run.
2. The preferred way of setting up CrewAI is using the `crewai create` command. This will create a new project folder and install a skeleton template for you to work on.
## Prerequisites
Before getting started with CrewAI, make sure that you have installed it via pip:
```shell
$ pip install crewai crewai-tools
```
### Virtual Environments
It is highly recommended that you use virtual environments to ensure that your CrewAI project is isolated from other projects and dependencies. Virtual environments provide a clean, separate workspace for each project, preventing conflicts between different versions of packages and libraries. This isolation is crucial for maintaining consistency and reproducibility in your development process. You have multiple options for setting up virtual environments depending on your operating system and Python version:
1. Use venv (Python's built-in virtual environment tool):
venv is included with Python 3.3 and later, making it a convenient choice for many developers. It's lightweight and easy to use, perfect for simple project setups.
To set up virtual environments with venv, refer to the official [Python documentation](https://docs.python.org/3/tutorial/venv.html).
2. Use Conda (A Python virtual environment manager):
Conda is an open-source package manager and environment management system for Python. It's widely used by data scientists, developers, and researchers to manage dependencies and environments in a reproducible way.
To set up virtual environments with Conda, refer to the official [Conda documentation](https://docs.conda.io/projects/conda/en/stable/user-guide/getting-started.html).
3. Use Poetry (A Python package manager and dependency management tool):
Poetry is an open-source Python package manager that simplifies the installation of packages and their dependencies. Poetry offers a convenient way to manage virtual environments and dependencies.
Poetry is CrewAI's preferred tool for package and dependency management.
### Code IDEs
Most users of CrewAI use a code editor / integrated development environment (IDE) for building their crews. You can use any IDE of your choice; see below for some popular options:
- [Visual Studio Code](https://code.visualstudio.com/) - Most popular
- [PyCharm](https://www.jetbrains.com/pycharm/)
- [Cursor AI](https://cursor.com)
Pick one that suits your style and needs.
## Creating a New Project
In this example, we will use venv as our virtual environment manager.
To set up a virtual environment, run the following CLI command:
```shell
$ python3 -m venv <venv-name>
```
Activate your virtual environment by running the following CLI command:
```shell
$ source <venv-name>/bin/activate
```
Now, to create a new CrewAI project, run the following CLI command:
```shell
$ crewai create <project_name>
```
This command will create a new project folder with the following structure:
```shell
my_project/
├── .gitignore
├── pyproject.toml
├── README.md
└── src/
└── my_project/
├── __init__.py
├── main.py
├── crew.py
├── tools/
│ ├── custom_tool.py
│ └── __init__.py
└── config/
├── agents.yaml
└── tasks.yaml
```
You can now start developing your project by editing the files in the `src/my_project` folder. The `main.py` file is the entry point of your project, and the `crew.py` file is where you define your agents and tasks.
## Customizing Your Project
To customize your project, you can:
- Modify `src/my_project/config/agents.yaml` to define your agents.
- Modify `src/my_project/config/tasks.yaml` to define your tasks.
- Modify `src/my_project/crew.py` to add your own logic, tools, and specific arguments.
- Modify `src/my_project/main.py` to add custom inputs for your agents and tasks.
- Add your environment variables into the `.env` file.
### Example: Defining Agents and Tasks
#### agents.yaml
```yaml
researcher:
role: >
Job Candidate Researcher
goal: >
Find potential candidates for the job
backstory: >
You are adept at finding the right candidates by exploring various online
resources. Your skill in identifying suitable candidates ensures the best
match for job positions.
```
#### tasks.yaml
```yaml
research_candidates_task:
description: >
Conduct thorough research to find potential candidates for the specified job.
Utilize various online resources and databases to gather a comprehensive list of potential candidates.
Ensure that the candidates meet the job requirements provided.
Job Requirements:
{job_requirements}
expected_output: >
A list of 10 potential candidates with their contact information and brief profiles highlighting their suitability.
agent: researcher # THIS NEEDS TO MATCH THE AGENT NAME IN THE AGENTS.YAML FILE AND THE AGENT DEFINED IN THE Crew.PY FILE
context: # THESE NEED TO MATCH THE TASK NAMES DEFINED ABOVE AND THE TASKS.YAML FILE AND THE TASK DEFINED IN THE Crew.PY FILE
- researcher
```
### Referencing Variables:
Functions defined with matching names will be used. For example, you can reference the agent for a specific task from the `tasks.yaml` file. Ensure the annotated agent name and the function name are the same; otherwise, your task won't resolve the reference properly.
#### Example References
agents.yaml
```yaml
email_summarizer:
role: >
Email Summarizer
goal: >
Summarize emails into a concise and clear summary
backstory: >
You will create a 5 bullet point summary of the report
llm: mixtal_llm
```
tasks.yaml
```yaml
email_summarizer_task:
description: >
Summarize the email into a 5 bullet point summary
expected_output: >
A 5 bullet point summary of the email
agent: email_summarizer
context:
- reporting_task
- research_task
```
The annotations below are used to properly reference the agent and task in the `crew.py` file.
### Annotations include:
* @agent
* @task
* @crew
* @llm
* @tool
* @callback
* @output_json
* @output_pydantic
* @cache_handler
crew.py
```py
...
@llm
def mixtal_llm(self):
return ChatGroq(temperature=0, model_name="mixtral-8x7b-32768")
@agent
def email_summarizer(self) -> Agent:
return Agent(
config=self.agents_config["email_summarizer"],
)
## ...other tasks defined
@task
def email_summarizer_task(self) -> Task:
return Task(
config=self.tasks_config["email_summarizer_task"],
)
...
```
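A class using these annotations is typically completed by a `@crew` method that assembles everything. A minimal sketch (the `crewai.project` import path follows the standard template; `MyProjectCrew` is a placeholder name):

```python
from crewai import Crew, Process
from crewai.project import CrewBase, agent, crew, llm, task

@CrewBase
class MyProjectCrew:
    agents_config = "config/agents.yaml"
    tasks_config = "config/tasks.yaml"

    # ...the @llm, @agent, and @task methods shown above go here...

    @crew
    def crew(self) -> Crew:
        # self.agents and self.tasks are collected from the annotated methods
        return Crew(
            agents=self.agents,
            tasks=self.tasks,
            process=Process.sequential,
        )
```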
## Installing Dependencies
To install the dependencies for your project, you can use Poetry. First, navigate to your project directory:
```shell
$ cd my_project
$ poetry lock
$ poetry install
```
This will install the dependencies specified in the `pyproject.toml` file.
## Interpolating Variables
Any variable interpolated in your `agents.yaml` and `tasks.yaml` files like `{variable}` will be replaced by the value of the variable in the `main.py` file.
#### tasks.yaml
```yaml
research_task:
description: >
Conduct a thorough research about the customer and competitors in the context
of {customer_domain}.
Make sure you find any interesting and relevant information given the
current year is 2024.
expected_output: >
A complete report on the customer and their customers and competitors,
including their demographics, preferences, market positioning and audience engagement.
```
#### main.py
```python
# main.py
def run():
inputs = {
"customer_domain": "crewai.com"
}
MyProjectCrew(inputs).crew().kickoff(inputs=inputs)
```
## Running Your Project
To run your project, use the following command:
```shell
$ poetry run my_project
```
This will initialize your crew of AI agents and begin task execution as defined in your configuration in the `main.py` file.
## Deploying Your Project
The easiest way to deploy your crew is through [CrewAI+](https://www.crewai.com/crewaiplus), where you can deploy your crew in a few clicks.

docs/how-to/Create-Custom-Tools.md

@@ -7,6 +7,7 @@ description: Comprehensive guide on crafting, using, and managing custom tools w
This guide provides detailed instructions on creating custom tools for the crewAI framework and how to efficiently manage and utilize these tools, incorporating the latest functionalities such as tool delegation, error handling, and dynamic tool calling. It also highlights the importance of collaboration tools, enabling agents to perform a wide range of actions.
### Prerequisites
Before creating your own tools, ensure you have the crewAI extra tools package installed:
```bash
@@ -31,7 +32,7 @@ class MyCustomTool(BaseTool):
### Using the `tool` Decorator
-Alternatively, use the `tool` decorator for a direct approach to create tools. This requires specifying attributes and the tool's logic within a function.
+Alternatively, you can use the tool decorator `@tool`. This approach allows you to define the tool's attributes and functionality directly within a function, offering a concise and efficient way to create specialized tools tailored to your needs.
```python
from crewai_tools import tool
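# The diff is truncated here; the decorated-function pattern presumably
# continues along these lines (an illustrative sketch, not the exact docs code):
@tool("Count words")
def count_words(text: str) -> str:
    """Return the number of words in the given text."""
    return f"The text contains {len(text.split())} words."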

docs/how-to/Creating-a-Crew-and-kick-it-off.md

@@ -1,84 +0,0 @@
---
title: Assembling and Activating Your CrewAI Team
description: A comprehensive guide to creating a dynamic CrewAI team for your projects, with updated functionalities including verbose mode, memory capabilities, asynchronous execution, output customization, language model configuration, code execution, integration with third-party agents, and improved task management.
---
## Introduction
Embark on your CrewAI journey by setting up your environment and initiating your AI crew with the latest features. This guide ensures a smooth start, incorporating all recent updates for an enhanced experience, including code execution capabilities, integration with third-party agents, and advanced task management.
## Step 0: Installation
Install CrewAI and any necessary packages for your project. CrewAI is compatible with Python >=3.10,<=3.13.
```shell
pip install crewai
pip install 'crewai[tools]'
```
## Step 1: Assemble Your Agents
Define your agents with distinct roles, backstories, and enhanced capabilities. The Agent class now supports a wide range of attributes for fine-tuned control over agent behavior and interactions, including code execution and integration with third-party agents.
```python
import os
from langchain.llms import OpenAI
from crewai import Agent
from crewai_tools import SerperDevTool, BrowserbaseLoadTool, EXASearchTool
os.environ["OPENAI_API_KEY"] = "Your OpenAI Key"
os.environ["SERPER_API_KEY"] = "Your Serper Key"
os.environ["BROWSERBASE_API_KEY"] = "Your BrowserBase Key"
os.environ["BROWSERBASE_PROJECT_ID"] = "Your BrowserBase Project Id"
search_tool = SerperDevTool()
browser_tool = BrowserbaseLoadTool()
exa_search_tool = EXASearchTool()
# Creating a senior researcher agent with advanced configurations
researcher = Agent(
role='Senior Researcher',
goal='Uncover groundbreaking technologies in {topic}',
backstory=("Driven by curiosity, you're at the forefront of innovation, "
"eager to explore and share knowledge that could change the world."),
memory=True,
verbose=True,
allow_delegation=False,
tools=[search_tool, browser_tool],
allow_code_execution=False, # New attribute for enabling code execution
max_iter=15, # Maximum number of iterations for task execution
max_rpm=100, # Maximum requests per minute
max_execution_time=3600, # Maximum execution time in seconds
system_template="Your custom system template here", # Custom system template
prompt_template="Your custom prompt template here", # Custom prompt template
response_template="Your custom response template here", # Custom response template
)
# Creating a writer agent with custom tools and specific configurations
writer = Agent(
role='Writer',
goal='Narrate compelling tech stories about {topic}',
backstory=("With a flair for simplifying complex topics, you craft engaging "
"narratives that captivate and educate, bringing new discoveries to light."),
verbose=True,
allow_delegation=False,
memory=True,
tools=[exa_search_tool],
function_calling_llm=OpenAI(model_name="gpt-3.5-turbo"), # Separate LLM for function calling
)
# Setting a specific manager agent
manager = Agent(
role='Manager',
goal='Ensure the smooth operation and coordination of the team',
verbose=True,
backstory=(
"As a seasoned project manager, you excel in organizing "
"tasks, managing timelines, and ensuring the team stays on track."
),
allow_code_execution=True, # Enable code execution for the manager
)
```
### New Agent Attributes and Features
1. `allow_code_execution`: Enable or disable code execution capabilities for the agent (default is False).
2. `max_execution_time`: Set a maximum execution time (in seconds) for the agent to complete a task.
3. `function_calling_llm`: Specify a separate language model for function calling.


@@ -7,7 +7,7 @@ description: Learn how to force tool output as the result in of an Agent's task
In CrewAI, you can force the output of a tool as the result of an agent's task. This feature is useful when you want to ensure that the tool output is captured and returned as the task result, and avoid the agent modifying the output during the task execution.
## Forcing Tool Output as Result
-To force the tool output as the result of an agent's task, you can set the `force_tool_output` parameter to `True` when creating the task. This parameter ensures that the tool output is captured and returned as the task result, without any modifications by the agent.
+To force the tool output as the result of an agent's task, you can set the `result_as_answer` parameter to `True` when creating the agent. This parameter ensures that the tool output is captured and returned as the task result, without any modifications by the agent.
Here's an example of how to force the tool output as the result of an agent's task:
@@ -16,7 +16,7 @@ Here's an example of how to force the tool output as the result of an agent's ta
# Define a custom tool that returns the result as the answer
coding_agent = Agent(
    role="Data Scientist",
-    goal="Product amazing resports on AI",
+    goal="Product amazing reports on AI",
    backstory="You work with data and AI",
    tools=[MyCustomTool(result_as_answer=True)],
)

docs/how-to/LLM-Connections.md

@@ -6,33 +6,25 @@ description: Comprehensive guide on integrating CrewAI with various Large Langua
## Connect CrewAI to LLMs
!!! note "Default LLM"
-By default, CrewAI uses OpenAI's GPT-4 model (specifically, the model specified by the OPENAI_MODEL_NAME environment variable, defaulting to "gpt-4o") for language processing. You can configure your agents to use a different model or API as described in this guide.
-By default, CrewAI uses OpenAI's GPT-4 model (specifically, the model specified by the OPENAI_MODEL_NAME environment variable, defaulting to "gpt-4") for language processing. You can configure your agents to use a different model or API as described in this guide.
+By default, CrewAI uses OpenAI's GPT-4o model (specifically, the model specified by the OPENAI_MODEL_NAME environment variable, defaulting to "gpt-4o") for language processing. You can configure your agents to use a different model or API as described in this guide.
-CrewAI offers flexibility in connecting to various LLMs, including local models via [Ollama](https://ollama.ai) and different APIs like Azure. It's compatible with all [LangChain LLM](https://python.langchain.com/docs/integrations/llms/) components, enabling diverse integrations for tailored AI solutions.
+CrewAI provides extensive versatility in integrating with various Language Models (LLMs), including local options through Ollama such as Llama and Mixtral to cloud-based solutions like Azure. Its compatibility extends to all [LangChain LLM components](https://python.langchain.com/v0.2/docs/integrations/llms/), offering a wide range of integration possibilities for customized AI applications.
-## CrewAI Agent Overview
-The `Agent` class is the cornerstone for implementing AI solutions in CrewAI. Here's a comprehensive overview of the Agent class attributes and methods:
-- **Attributes**:
-- `role`: Defines the agent's role within the solution.
-- `goal`: Specifies the agent's objective.
-- `backstory`: Provides a background story to the agent.
-- `cache` *Optional*: Determines whether the agent should use a cache for tool usage. Default is `True`.
-- `max_rpm` *Optional*: Maximum number of requests per minute the agent's execution should respect. Optional.
-- `verbose` *Optional*: Enables detailed logging of the agent's execution. Default is `False`.
-- `allow_delegation` *Optional*: Allows the agent to delegate tasks to other agents, default is `True`.
-- `tools`: Specifies the tools available to the agent for task execution. Optional.
-- `max_iter` *Optional*: Maximum number of iterations for an agent to execute a task, default is 25.
-- `max_execution_time` *Optional*: Maximum execution time for an agent to execute a task. Optional.
-- `step_callback` *Optional*: Provides a callback function to be executed after each step. Optional.
-- `llm` *Optional*: Indicates the Large Language Model the agent uses. By default, it uses the GPT-4 model defined in the environment variable "OPENAI_MODEL_NAME".
-- `function_calling_llm` *Optional*: Will turn the ReAct CrewAI agent into a function-calling agent.
-- `callbacks` *Optional*: A list of callback functions from the LangChain library that are triggered during the agent's execution process.
-- `system_template` *Optional*: Optional string to define the system format for the agent.
-- `prompt_template` *Optional*: Optional string to define the prompt format for the agent.
-- `response_template` *Optional*: Optional string to define the response format for the agent.
+The platform supports connections to an array of Generative AI models, including:
+- OpenAI's suite of advanced language models
+- Anthropic's cutting-edge AI offerings
+- Ollama's diverse range of locally-hosted generative models & embeddings
+- LM Studio's diverse range of locally hosted generative models & embeddings
+- Groq's Super Fast LLM offerings
+- Azure's generative AI offerings
+- HuggingFace's generative AI offerings
+This broad spectrum of LLM options enables users to select the most suitable model for their specific needs, whether prioritizing local deployment, specialized capabilities, or cloud-based scalability.
+## Changing the default LLM
+The default LLM is provided through the `langchain openai` package, which is installed by default when you install CrewAI. You can change this default LLM to a different model or API by setting the `OPENAI_MODEL_NAME` environment variable. This straightforward process allows you to harness the power of different OpenAI models, enhancing the flexibility and capabilities of your CrewAI implementation.
```python
# Required
os.environ["OPENAI_MODEL_NAME"]="gpt-4-0125-preview"
@@ -45,30 +37,27 @@ example_agent = Agent(
    verbose=True
)
```
+## Ollama Local Integration
+Ollama is preferred for local LLM integration, offering customization and privacy benefits. To integrate Ollama with CrewAI, you will need the `langchain-ollama` package. You can then set the following environment variables to connect to your Ollama instance running locally on port 11434.
-## Ollama Integration
-Ollama is preferred for local LLM integration, offering customization and privacy benefits. To integrate Ollama with CrewAI, set the appropriate environment variables as shown below.
-### Setting Up Ollama
-- **Environment Variables Configuration**: To integrate Ollama, set the following environment variables:
```sh
-OPENAI_API_BASE='http://localhost:11434'
+os.environ[OPENAI_API_BASE]='http://localhost:11434'
-OPENAI_MODEL_NAME='llama2' # Adjust based on available model
+os.environ[OPENAI_MODEL_NAME]='llama2' # Adjust based on available model
-OPENAI_API_KEY=''
+os.environ[OPENAI_API_KEY]='' # No API Key required for Ollama
```
-## Ollama Integration (ex. for using Llama 2 locally)
+## Ollama Integration Step by Step (ex. for using Llama 3.1 8B locally)
-1. [Download Ollama](https://ollama.com/download).
+1. [Download and install Ollama](https://ollama.com/download).
-2. After setting up the Ollama, Pull the Llama2 by typing following lines into the terminal ```ollama pull llama2```.
+2. After setting up the Ollama, Pull the Llama3.1 8B model by typing following lines into your terminal ```ollama run llama3.1```.
-3. Enjoy your free Llama2 model that powered up by excellent agents from crewai.
+3. Llama3.1 should now be served locally on `http://localhost:11434`
```
from crewai import Agent, Task, Crew
-from langchain.llms import Ollama
+from langchain_ollama import ChatOllama
import os
os.environ["OPENAI_API_KEY"] = "NA"
llm = Ollama(
-    model = "llama2",
+    model = "llama3.1",
    base_url = "http://localhost:11434")
general_agent = Agent(role = "Math Professor",
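Note that the updated snippet imports `ChatOllama` but still instantiates `Ollama`; a self-consistent version would presumably look like this (a sketch; the agent fields are placeholders for the values elided in the diff):

```python
from crewai import Agent
from langchain_ollama import ChatOllama

llm = ChatOllama(
    model="llama3.1",
    base_url="http://localhost:11434",
)

general_agent = Agent(
    role="Math Professor",
    goal="Explain math concepts clearly",  # placeholder; elided in the diff
    backstory="A patient teacher.",        # placeholder; elided in the diff
    llm=llm,
)
```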
@@ -98,13 +87,14 @@ There are a couple of different ways you can use HuggingFace to host your LLM.
### Your own HuggingFace endpoint
```python
-from langchain_community.llms import HuggingFaceEndpoint
+from langchain_huggingface import HuggingFaceEndpoint,
llm = HuggingFaceEndpoint(
-    endpoint_url="<YOUR_ENDPOINT_URL_HERE>",
+    repo_id="microsoft/Phi-3-mini-4k-instruct",
-    huggingfacehub_api_token="<HF_TOKEN_HERE>",
    task="text-generation",
-    max_new_tokens=512
+    max_new_tokens=512,
+    do_sample=False,
+    repetition_penalty=1.03,
)
agent = Agent(
@@ -115,66 +105,50 @@ agent = Agent(
)
```
-### From HuggingFaceHub endpoint
-```python
-from langchain_community.llms import HuggingFaceHub
-llm = HuggingFaceHub(
-    repo_id="HuggingFaceH4/zephyr-7b-beta",
-    huggingfacehub_api_token="<HF_TOKEN_HERE>",
-    task="text-generation",
-)
-```
## OpenAI Compatible API Endpoints
Switch between APIs and models seamlessly using environment variables, supporting platforms like FastChat, LM Studio, Groq, and Mistral AI.
### Configuration Examples
#### FastChat
```sh
-OPENAI_API_BASE="http://localhost:8001/v1"
+os.environ[OPENAI_API_BASE]="http://localhost:8001/v1"
-OPENAI_MODEL_NAME='oh-2.5m7b-q51'
+os.environ[OPENAI_MODEL_NAME]='oh-2.5m7b-q51'
-OPENAI_API_KEY=NA
+os.environ[OPENAI_API_KEY]=NA
```
#### LM Studio
Launch [LM Studio](https://lmstudio.ai) and go to the Server tab. Then select a model from the dropdown menu and wait for it to load. Once it's loaded, click the green Start Server button and use the URL, port, and API key that's shown (you can modify them). Below is an example of the default settings as of LM Studio 0.2.19:
```sh
-OPENAI_API_BASE="http://localhost:1234/v1"
+os.environ[OPENAI_API_BASE]="http://localhost:1234/v1"
-OPENAI_API_KEY="lm-studio"
+os.environ[OPENAI_API_KEY]="lm-studio"
```
#### Groq API
```sh
-OPENAI_API_KEY=your-groq-api-key
+os.environ[OPENAI_API_KEY]=your-groq-api-key
-OPENAI_MODEL_NAME='llama3-8b-8192'
+os.environ[OPENAI_MODEL_NAME]='llama3-8b-8192'
-OPENAI_API_BASE=https://api.groq.com/openai/v1
+os.environ[OPENAI_API_BASE]=https://api.groq.com/openai/v1
```
#### Mistral API
```sh
-OPENAI_API_KEY=your-mistral-api-key
+os.environ[OPENAI_API_KEY]=your-mistral-api-key
-OPENAI_API_BASE=https://api.mistral.ai/v1
+os.environ[OPENAI_API_BASE]=https://api.mistral.ai/v1
-OPENAI_MODEL_NAME="mistral-small"
+os.environ[OPENAI_MODEL_NAME]="mistral-small"
```
### Solar
-```python
+```sh
from langchain_community.chat_models.solar import SolarChat
-# Initialize language model
+```
-os.environ["SOLAR_API_KEY"] = "your-solar-api-key"
+```sh
-llm = SolarChat(max_tokens=1024)
+os.environ[SOLAR_API_BASE]="https://api.upstage.ai/v1/solar"
+os.environ[SOLAR_API_KEY]="your-solar-api-key"
+```
# Free developer API key available here: https://console.upstage.ai/services/solar
# Langchain Example: https://github.com/langchain-ai/langchain/pull/18556
-```
-### text-gen-web-ui
-```sh
-OPENAI_API_BASE=http://localhost:5000/v1
-OPENAI_MODEL_NAME=NA
-OPENAI_API_KEY=NA
-```
### Cohere
```python
@@ -190,10 +164,11 @@ llm = ChatCohere()
### Azure Open AI Configuration
For Azure OpenAI API integration, set the following environment variables:
```sh
-AZURE_OPENAI_VERSION="2022-12-01"
-AZURE_OPENAI_DEPLOYMENT=""
+os.environ[AZURE_OPENAI_DEPLOYMENT] = "You deployment"
-AZURE_OPENAI_ENDPOINT=""
+os.environ["OPENAI_API_VERSION"] = "2023-12-01-preview"
-AZURE_OPENAI_KEY=""
+os.environ["AZURE_OPENAI_ENDPOINT"] = "Your Endpoint"
+os.environ["AZURE_OPENAI_API_KEY"] = "<Your API Key>"
```
### Example Agent with Azure LLM
@@ -216,6 +191,5 @@ azure_agent = Agent(
    llm=azure_llm
)
```
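The `azure_llm` object referenced above is elided in this hunk; it is presumably built with LangChain's Azure wrapper, along these lines (a sketch, not the exact docs code):

```python
import os
from langchain_openai import AzureChatOpenAI

# Reads the AZURE_OPENAI_ENDPOINT / AZURE_OPENAI_API_KEY values set as shown above;
# the API version is picked up from the OPENAI_API_VERSION environment variable
azure_llm = AzureChatOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
)
```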
## Conclusion
Integrating CrewAI with different LLMs expands the framework's versatility, allowing for customized, efficient AI solutions across various domains and platforms.

docs/how-to/Start-a-New-CrewAI-Project.md

@@ -1,137 +0,0 @@
---
title: Starting a New CrewAI Project
description: A comprehensive guide to starting a new CrewAI project, including the latest updates and project setup methods.
---
# Starting Your CrewAI Project
Welcome to the ultimate guide for starting a new CrewAI project. This document will walk you through the steps to create, customize, and run your CrewAI project, ensuring you have everything you need to get started.
## Prerequisites
We assume you have already installed CrewAI. If not, please refer to the [installation guide](https://docs.crewai.com/how-to/Installing-CrewAI/) to install CrewAI and its dependencies.
## Creating a New Project
To create a new project, run the following CLI command:
```shell
$ crewai create my_project
```
This command will create a new project folder with the following structure:
```shell
my_project/
├── .gitignore
├── pyproject.toml
├── README.md
└── src/
└── my_project/
├── __init__.py
├── main.py
├── crew.py
├── tools/
│ ├── custom_tool.py
│ └── __init__.py
└── config/
├── agents.yaml
└── tasks.yaml
```
You can now start developing your project by editing the files in the `src/my_project` folder. The `main.py` file is the entry point of your project, and the `crew.py` file is where you define your agents and tasks.
## Customizing Your Project
To customize your project, you can:
- Modify `src/my_project/config/agents.yaml` to define your agents.
- Modify `src/my_project/config/tasks.yaml` to define your tasks.
- Modify `src/my_project/crew.py` to add your own logic, tools, and specific arguments.
- Modify `src/my_project/main.py` to add custom inputs for your agents and tasks.
- Add your environment variables into the `.env` file.
### Example: Defining Agents and Tasks
#### agents.yaml
```yaml
researcher:
role: >
Job Candidate Researcher
goal: >
Find potential candidates for the job
backstory: >
You are adept at finding the right candidates by exploring various online
resources. Your skill in identifying suitable candidates ensures the best
match for job positions.
```
#### tasks.yaml
```yaml
research_candidates_task:
description: >
Conduct thorough research to find potential candidates for the specified job.
Utilize various online resources and databases to gather a comprehensive list of potential candidates.
Ensure that the candidates meet the job requirements provided.
Job Requirements:
{job_requirements}
expected_output: >
A list of 10 potential candidates with their contact information and brief profiles highlighting their suitability.
```
## Installing Dependencies
To install the dependencies for your project, you can use Poetry. First, navigate to your project directory:
```shell
$ cd my_project
$ poetry lock
$ poetry install
```
This will install the dependencies specified in the `pyproject.toml` file.
## Interpolating Variables
Any variable interpolated in your `agents.yaml` and `tasks.yaml` files like `{variable}` will be replaced by the value of the variable in the `main.py` file.
#### agents.yaml
```yaml
research_task:
description: >
Conduct a thorough research about the customer and competitors in the context
of {customer_domain}.
Make sure you find any interesting and relevant information given the
current year is 2024.
expected_output: >
A complete report on the customer and their customers and competitors,
including their demographics, preferences, market positioning and audience engagement.
```
#### main.py
```python
# main.py
def run():
inputs = {
"customer_domain": "crewai.com"
}
MyProjectCrew(inputs).crew().kickoff(inputs=inputs)
```
## Running Your Project
To run your project, use the following command:
```shell
$ poetry run my_project
```
This will initialize your crew of AI agents and begin task execution as defined in your configuration in the `main.py` file.
## Deploying Your Project
The easiest way to deploy your crew is through [CrewAI+](https://www.crewai.com/crewaiplus), where you can deploy your crew in a few clicks.

docs/index.md

@@ -5,6 +5,19 @@
Cutting-edge framework for orchestrating role-playing, autonomous AI agents. By fostering collaborative intelligence, CrewAI empowers agents to work together seamlessly, tackling complex tasks.
<div style="display:flex; margin:0 auto; justify-content: center;">
<div style="width:25%">
<h2>Getting Started</h2>
<ul>
<li><a href='./getting-started/Installing-CrewAI'>
Installing CrewAI
</a>
</li>
<li><a href='./getting-started/Start-a-New-CrewAI-Project-Template-Method'>
Start a New CrewAI Project: Template Method
</a>
</li>
</ul>
</div>
<div style="width:25%"> <div style="width:25%">
<h2>Core Concepts</h2> <h2>Core Concepts</h2>
<ul> <ul>
@@ -53,21 +66,6 @@ Cutting-edge framework for orchestrating role-playing, autonomous AI agents. By
<div style="width:30%"> <div style="width:30%">
<h2>How-To Guides</h2> <h2>How-To Guides</h2>
<ul> <ul>
-<li>
-<a href="./how-to/Start-a-New-CrewAI-Project">
-Starting Your crewAI Project
-</a>
-</li>
-<li>
-<a href="./how-to/Installing-CrewAI">
-Installing crewAI
-</a>
-</li>
-<li>
-<a href="./how-to/Creating-a-Crew-and-kick-it-off">
-Getting Started
-</a>
-</li>
<li>
<a href="./how-to/Create-Custom-Tools">
Create Custom Tools


@@ -5,7 +5,7 @@ description: Understanding the telemetry data collected by CrewAI and how it con
## Telemetry
-CrewAI utilizes anonymous telemetry to gather usage statistics with the primary goal of enhancing the library. Our focus is on improving and developing the features, integrations, and tools most utilized by our users.
+CrewAI utilizes anonymous telemetry to gather usage statistics with the primary goal of enhancing the library. Our focus is on improving and developing the features, integrations, and tools most utilized by our users. We don't offer a way to disable it now, but we will in the future.
It's pivotal to understand that **NO data is collected** concerning prompts, task descriptions, agents' backstories or goals, usage of tools, API calls, responses, any data processed by the agents, or secrets and environment variables, with the exception of the conditions mentioned. When the `share_crew` feature is enabled, detailed data including task descriptions, agents' backstories or goals, and other specific attributes are collected to provide deeper insights while respecting user privacy.
@@ -22,7 +22,7 @@ It's pivotal to understand that **NO data is collected** concerning prompts, tas
- **Tool Usage**: Identifying which tools are most frequently used allows us to prioritize improvements in those areas.
### Opt-In Further Telemetry Sharing
-Users can choose to share their complete telemetry data by enabling the `share_crew` attribute to `True` in their crew configurations. This opt-in approach respects user privacy and aligns with data protection standards by ensuring users have control over their data sharing preferences. Enabling `share_crew` results in the collection of detailed crew and task execution data, including `goal`, `backstory`, `context`, and `output` of tasks. This enables a deeper insight into usage patterns while respecting the user's choice to share.
+Users can choose to share their complete telemetry data by enabling the `share_crew` attribute to `True` in their crew configurations. Enabling `share_crew` results in the collection of detailed crew and task execution data, including `goal`, `backstory`, `context`, and `output` of tasks. This enables a deeper insight into usage patterns while respecting the user's choice to share.
### Updates and Revisions
We are committed to maintaining the accuracy and transparency of our documentation. Regular reviews and updates are performed to ensure our documentation accurately reflects the latest developments of our codebase and telemetry practices. Users are encouraged to review this section for the most current information on our data collection practices and how they contribute to the improvement of CrewAI.


@@ -1,9 +1,9 @@
# CodeInterpreterTool
## Description
-This tool is used to give the Agent the ability to run code (Python3) from the code generated by the Agent itself. The code is executed in a sandboxed environment, so it is safe to run any code.
+This tool enables the Agent to execute Python 3 code that it has generated autonomously. The code is run in a secure, isolated environment, ensuring safety regardless of the content.
-It is incredible useful since it allows the Agent to generate code, run it in the same environment, get the result and use it to make decisions.
+This functionality is particularly valuable as it allows the Agent to create code, execute it within the same ecosystem, obtain the results, and utilize that information to inform subsequent decisions and actions.
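As a usage illustration (a sketch; the agent fields are placeholders), the tool is attached like any other crewAI tool:

```python
from crewai import Agent
from crewai_tools import CodeInterpreterTool

analyst = Agent(
    role="Data Analyst",
    goal="Answer questions by writing and running Python code",
    backstory="You verify every claim with executable code.",
    tools=[CodeInterpreterTool()],  # lets the agent run its own generated Python 3 code
)
```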
## Requirements


@@ -2,7 +2,7 @@
## Description
-This tools is a wrapper around the composio toolset and gives your agent access to a wide variety of tools from the composio SDK.
+This tools is a wrapper around the composio set of tools and gives your agent access to a wide variety of tools from the composio SDK.
## Installation
@@ -19,7 +19,7 @@ after the installation is complete, either run `composio login` or export your c
The following example demonstrates how to initialize the tool and execute a github action:
-1. Initialize toolset
+1. Initialize Composio tools
```python
from composio import App


@@ -29,5 +29,69 @@ To effectively use the `SerperDevTool`, follow these steps:
2. **API Key Acquisition**: Acquire a `serper.dev` API key by registering for a free account at `serper.dev`.
3. **Environment Configuration**: Store your obtained API key in an environment variable named `SERPER_API_KEY` to facilitate its use by the tool.
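A minimal setup following these steps might look like this (a sketch; the key value is a placeholder):

```python
import os
from crewai_tools import SerperDevTool

os.environ["SERPER_API_KEY"] = "<your-serper-api-key>"  # placeholder key

search_tool = SerperDevTool()  # uses the default search endpoint
```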
+## Parameters
+The `SerperDevTool` comes with several parameters that will be passed to the API:
+- **search_url**: The URL endpoint for the search API. (Default is `https://google.serper.dev/search`)
+- **country**: Optional. Specify the country for the search results.
+- **location**: Optional. Specify the location for the search results.
+- **locale**: Optional. Specify the locale for the search results.
+- **n_results**: Number of search results to return. Default is `10`.
+The values for `country`, `location`, `locale` and `search_url` can be found on the [Serper Playground](https://serper.dev/playground).
+## Example with Parameters
+Here is an example demonstrating how to use the tool with additional parameters:
+```python
+from crewai_tools import SerperDevTool
+tool = SerperDevTool(
+    search_url="https://google.serper.dev/scholar",
+    n_results=2,
+)
+print(tool.run(search_query="ChatGPT"))
+# Using Tool: Search the internet
+# Search results: Title: Role of chat gpt in public health
+# Link: https://link.springer.com/article/10.1007/s10439-023-03172-7
+# Snippet: … ChatGPT in public health. In this overview, we will examine the potential uses of ChatGPT in
+# ---
+# Title: Potential use of chat gpt in global warming
+# Link: https://link.springer.com/article/10.1007/s10439-023-03171-8
+# Snippet: … as ChatGPT, have the potential to play a critical role in advancing our understanding of climate
+# ---
+```
+```python
+from crewai_tools import SerperDevTool
+tool = SerperDevTool(
+    country="fr",
+    locale="fr",
+    location="Paris, Paris, Ile-de-France, France",
+    n_results=2,
+)
+print(tool.run(search_query="Jeux Olympiques"))
+# Using Tool: Search the internet
+# Search results: Title: Jeux Olympiques de Paris 2024 - Actualités, calendriers, résultats
+# Link: https://olympics.com/fr/paris-2024
+# Snippet: Quels sont les sports présents aux Jeux Olympiques de Paris 2024 ? · Athlétisme · Aviron · Badminton · Basketball · Basketball 3x3 · Boxe · Breaking · Canoë ...
+# ---
+# Title: Billetterie Officielle de Paris 2024 - Jeux Olympiques et Paralympiques
+# Link: https://tickets.paris2024.org/
+# Snippet: Achetez vos billets exclusivement sur le site officiel de la billetterie de Paris 2024 pour participer au plus grand événement sportif au monde.
+# ---
+```
## Conclusion ## Conclusion
By integrating the `SerperDevTool` into Python projects, users gain the ability to conduct real-time, relevant searches across the internet directly from their applications. By adhering to the setup and usage guidelines provided, incorporating this tool into projects is streamlined and straightforward. By integrating the `SerperDevTool` into Python projects, users gain the ability to conduct real-time, relevant searches across the internet directly from their applications. The updated parameters allow for more customized and localized search results. By adhering to the setup and usage guidelines provided, incorporating this tool into projects is streamlined and straightforward.

View File

@@ -119,6 +119,9 @@ theme:
nav: nav:
- Home: '/' - Home: '/'
- Getting Started:
- Installing CrewAI: 'getting-started/Installing-CrewAI.md'
- Starting a new CrewAI project: 'getting-started/Start-a-New-CrewAI-Project-Template-Method.md'
- Core Concepts: - Core Concepts:
- Agents: 'core-concepts/Agents.md' - Agents: 'core-concepts/Agents.md'
- Tasks: 'core-concepts/Tasks.md' - Tasks: 'core-concepts/Tasks.md'

poetry.lock generated
View File

@@ -1,14 +1,14 @@
# This file is automatically @generated by Poetry 1.7.1 and should not be changed by hand. # This file is automatically @generated by Poetry 1.8.3 and should not be changed by hand.
[[package]] [[package]]
name = "agentops" name = "agentops"
version = "0.3.0" version = "0.3.2"
description = "Python SDK for developing AI agent evals and observability" description = "Python SDK for developing AI agent evals and observability"
optional = true optional = true
python-versions = ">=3.7" python-versions = ">=3.7"
files = [ files = [
{file = "agentops-0.3.0-py3-none-any.whl", hash = "sha256:22aeb3355e66b32a2b2a9f676048b81979b2488feddb088f9266034b3ed50539"}, {file = "agentops-0.3.2-py3-none-any.whl", hash = "sha256:b35988e04378624204572bb3d7a454094f879ea573f05b57d4e75ab0bfbb82af"},
{file = "agentops-0.3.0.tar.gz", hash = "sha256:6c0c08a57410fa5e826a7bafa1deeba9f7b3524709427d9e1abbd0964caaf76b"}, {file = "agentops-0.3.2.tar.gz", hash = "sha256:55559ac4a43634831dfa8937c2597c28e332809dc7c6bb3bc3c8b233442e224c"},
] ]
[package.dependencies] [package.dependencies]
@@ -294,38 +294,38 @@ files = [
[[package]] [[package]]
name = "bcrypt" name = "bcrypt"
version = "4.1.3" version = "4.2.0"
description = "Modern password hashing for your software and your servers" description = "Modern password hashing for your software and your servers"
optional = false optional = false
python-versions = ">=3.7" python-versions = ">=3.7"
files = [ files = [
{file = "bcrypt-4.1.3-cp37-abi3-macosx_10_12_universal2.whl", hash = "sha256:48429c83292b57bf4af6ab75809f8f4daf52aa5d480632e53707805cc1ce9b74"}, {file = "bcrypt-4.2.0-cp37-abi3-macosx_10_12_universal2.whl", hash = "sha256:096a15d26ed6ce37a14c1ac1e48119660f21b24cba457f160a4b830f3fe6b5cb"},
{file = "bcrypt-4.1.3-cp37-abi3-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:4a8bea4c152b91fd8319fef4c6a790da5c07840421c2b785084989bf8bbb7455"}, {file = "bcrypt-4.2.0-cp37-abi3-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:c02d944ca89d9b1922ceb8a46460dd17df1ba37ab66feac4870f6862a1533c00"},
{file = "bcrypt-4.1.3-cp37-abi3-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:3d3b317050a9a711a5c7214bf04e28333cf528e0ed0ec9a4e55ba628d0f07c1a"}, {file = "bcrypt-4.2.0-cp37-abi3-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:1d84cf6d877918620b687b8fd1bf7781d11e8a0998f576c7aa939776b512b98d"},
{file = "bcrypt-4.1.3-cp37-abi3-manylinux_2_28_aarch64.whl", hash = "sha256:094fd31e08c2b102a14880ee5b3d09913ecf334cd604af27e1013c76831f7b05"}, {file = "bcrypt-4.2.0-cp37-abi3-manylinux_2_28_aarch64.whl", hash = "sha256:1bb429fedbe0249465cdd85a58e8376f31bb315e484f16e68ca4c786dcc04291"},
{file = "bcrypt-4.1.3-cp37-abi3-manylinux_2_28_x86_64.whl", hash = "sha256:4fb253d65da30d9269e0a6f4b0de32bd657a0208a6f4e43d3e645774fb5457f3"}, {file = "bcrypt-4.2.0-cp37-abi3-manylinux_2_28_x86_64.whl", hash = "sha256:655ea221910bcac76ea08aaa76df427ef8625f92e55a8ee44fbf7753dbabb328"},
{file = "bcrypt-4.1.3-cp37-abi3-musllinux_1_1_aarch64.whl", hash = "sha256:193bb49eeeb9c1e2db9ba65d09dc6384edd5608d9d672b4125e9320af9153a15"}, {file = "bcrypt-4.2.0-cp37-abi3-musllinux_1_1_aarch64.whl", hash = "sha256:1ee38e858bf5d0287c39b7a1fc59eec64bbf880c7d504d3a06a96c16e14058e7"},
{file = "bcrypt-4.1.3-cp37-abi3-musllinux_1_1_x86_64.whl", hash = "sha256:8cbb119267068c2581ae38790e0d1fbae65d0725247a930fc9900c285d95725d"}, {file = "bcrypt-4.2.0-cp37-abi3-musllinux_1_1_x86_64.whl", hash = "sha256:0da52759f7f30e83f1e30a888d9163a81353ef224d82dc58eb5bb52efcabc399"},
{file = "bcrypt-4.1.3-cp37-abi3-musllinux_1_2_aarch64.whl", hash = "sha256:6cac78a8d42f9d120b3987f82252bdbeb7e6e900a5e1ba37f6be6fe4e3848286"}, {file = "bcrypt-4.2.0-cp37-abi3-musllinux_1_2_aarch64.whl", hash = "sha256:3698393a1b1f1fd5714524193849d0c6d524d33523acca37cd28f02899285060"},
{file = "bcrypt-4.1.3-cp37-abi3-musllinux_1_2_x86_64.whl", hash = "sha256:01746eb2c4299dd0ae1670234bf77704f581dd72cc180f444bfe74eb80495b64"}, {file = "bcrypt-4.2.0-cp37-abi3-musllinux_1_2_x86_64.whl", hash = "sha256:762a2c5fb35f89606a9fde5e51392dad0cd1ab7ae64149a8b935fe8d79dd5ed7"},
{file = "bcrypt-4.1.3-cp37-abi3-win32.whl", hash = "sha256:037c5bf7c196a63dcce75545c8874610c600809d5d82c305dd327cd4969995bf"}, {file = "bcrypt-4.2.0-cp37-abi3-win32.whl", hash = "sha256:5a1e8aa9b28ae28020a3ac4b053117fb51c57a010b9f969603ed885f23841458"},
{file = "bcrypt-4.1.3-cp37-abi3-win_amd64.whl", hash = "sha256:8a893d192dfb7c8e883c4576813bf18bb9d59e2cfd88b68b725990f033f1b978"}, {file = "bcrypt-4.2.0-cp37-abi3-win_amd64.whl", hash = "sha256:8f6ede91359e5df88d1f5c1ef47428a4420136f3ce97763e31b86dd8280fbdf5"},
{file = "bcrypt-4.1.3-cp39-abi3-macosx_10_12_universal2.whl", hash = "sha256:0d4cf6ef1525f79255ef048b3489602868c47aea61f375377f0d00514fe4a78c"}, {file = "bcrypt-4.2.0-cp39-abi3-macosx_10_12_universal2.whl", hash = "sha256:c52aac18ea1f4a4f65963ea4f9530c306b56ccd0c6f8c8da0c06976e34a6e841"},
{file = "bcrypt-4.1.3-cp39-abi3-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:f5698ce5292a4e4b9e5861f7e53b1d89242ad39d54c3da451a93cac17b61921a"}, {file = "bcrypt-4.2.0-cp39-abi3-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:3bbbfb2734f0e4f37c5136130405332640a1e46e6b23e000eeff2ba8d005da68"},
{file = "bcrypt-4.1.3-cp39-abi3-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:ec3c2e1ca3e5c4b9edb94290b356d082b721f3f50758bce7cce11d8a7c89ce84"}, {file = "bcrypt-4.2.0-cp39-abi3-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:3413bd60460f76097ee2e0a493ccebe4a7601918219c02f503984f0a7ee0aebe"},
{file = "bcrypt-4.1.3-cp39-abi3-manylinux_2_28_aarch64.whl", hash = "sha256:3a5be252fef513363fe281bafc596c31b552cf81d04c5085bc5dac29670faa08"}, {file = "bcrypt-4.2.0-cp39-abi3-manylinux_2_28_aarch64.whl", hash = "sha256:8d7bb9c42801035e61c109c345a28ed7e84426ae4865511eb82e913df18f58c2"},
{file = "bcrypt-4.1.3-cp39-abi3-manylinux_2_28_x86_64.whl", hash = "sha256:5f7cd3399fbc4ec290378b541b0cf3d4398e4737a65d0f938c7c0f9d5e686611"}, {file = "bcrypt-4.2.0-cp39-abi3-manylinux_2_28_x86_64.whl", hash = "sha256:3d3a6d28cb2305b43feac298774b997e372e56c7c7afd90a12b3dc49b189151c"},
{file = "bcrypt-4.1.3-cp39-abi3-musllinux_1_1_aarch64.whl", hash = "sha256:c4c8d9b3e97209dd7111bf726e79f638ad9224b4691d1c7cfefa571a09b1b2d6"}, {file = "bcrypt-4.2.0-cp39-abi3-musllinux_1_1_aarch64.whl", hash = "sha256:9c1c4ad86351339c5f320ca372dfba6cb6beb25e8efc659bedd918d921956bae"},
{file = "bcrypt-4.1.3-cp39-abi3-musllinux_1_1_x86_64.whl", hash = "sha256:31adb9cbb8737a581a843e13df22ffb7c84638342de3708a98d5c986770f2834"}, {file = "bcrypt-4.2.0-cp39-abi3-musllinux_1_1_x86_64.whl", hash = "sha256:27fe0f57bb5573104b5a6de5e4153c60814c711b29364c10a75a54bb6d7ff48d"},
{file = "bcrypt-4.1.3-cp39-abi3-musllinux_1_2_aarch64.whl", hash = "sha256:551b320396e1d05e49cc18dd77d970accd52b322441628aca04801bbd1d52a73"}, {file = "bcrypt-4.2.0-cp39-abi3-musllinux_1_2_aarch64.whl", hash = "sha256:8ac68872c82f1add6a20bd489870c71b00ebacd2e9134a8aa3f98a0052ab4b0e"},
{file = "bcrypt-4.1.3-cp39-abi3-musllinux_1_2_x86_64.whl", hash = "sha256:6717543d2c110a155e6821ce5670c1f512f602eabb77dba95717ca76af79867d"}, {file = "bcrypt-4.2.0-cp39-abi3-musllinux_1_2_x86_64.whl", hash = "sha256:cb2a8ec2bc07d3553ccebf0746bbf3d19426d1c6d1adbd4fa48925f66af7b9e8"},
{file = "bcrypt-4.1.3-cp39-abi3-win32.whl", hash = "sha256:6004f5229b50f8493c49232b8e75726b568535fd300e5039e255d919fc3a07f2"}, {file = "bcrypt-4.2.0-cp39-abi3-win32.whl", hash = "sha256:77800b7147c9dc905db1cba26abe31e504d8247ac73580b4aa179f98e6608f34"},
{file = "bcrypt-4.1.3-cp39-abi3-win_amd64.whl", hash = "sha256:2505b54afb074627111b5a8dc9b6ae69d0f01fea65c2fcaea403448c503d3991"}, {file = "bcrypt-4.2.0-cp39-abi3-win_amd64.whl", hash = "sha256:61ed14326ee023917ecd093ee6ef422a72f3aec6f07e21ea5f10622b735538a9"},
{file = "bcrypt-4.1.3-pp310-pypy310_pp73-manylinux_2_28_aarch64.whl", hash = "sha256:cb9c707c10bddaf9e5ba7cdb769f3e889e60b7d4fea22834b261f51ca2b89fed"}, {file = "bcrypt-4.2.0-pp310-pypy310_pp73-manylinux_2_28_aarch64.whl", hash = "sha256:39e1d30c7233cfc54f5c3f2c825156fe044efdd3e0b9d309512cc514a263ec2a"},
{file = "bcrypt-4.1.3-pp310-pypy310_pp73-manylinux_2_28_x86_64.whl", hash = "sha256:9f8ea645eb94fb6e7bea0cf4ba121c07a3a182ac52876493870033141aa687bc"}, {file = "bcrypt-4.2.0-pp310-pypy310_pp73-manylinux_2_28_x86_64.whl", hash = "sha256:f4f4acf526fcd1c34e7ce851147deedd4e26e6402369304220250598b26448db"},
{file = "bcrypt-4.1.3-pp39-pypy39_pp73-manylinux_2_28_aarch64.whl", hash = "sha256:f44a97780677e7ac0ca393bd7982b19dbbd8d7228c1afe10b128fd9550eef5f1"}, {file = "bcrypt-4.2.0-pp39-pypy39_pp73-manylinux_2_28_aarch64.whl", hash = "sha256:1ff39b78a52cf03fdf902635e4c81e544714861ba3f0efc56558979dd4f09170"},
{file = "bcrypt-4.1.3-pp39-pypy39_pp73-manylinux_2_28_x86_64.whl", hash = "sha256:d84702adb8f2798d813b17d8187d27076cca3cd52fe3686bb07a9083930ce650"}, {file = "bcrypt-4.2.0-pp39-pypy39_pp73-manylinux_2_28_x86_64.whl", hash = "sha256:373db9abe198e8e2c70d12b479464e0d5092cc122b20ec504097b5f2297ed184"},
{file = "bcrypt-4.1.3.tar.gz", hash = "sha256:2ee15dd749f5952fe3f0430d0ff6b74082e159c50332a1413d51b5689cf06623"}, {file = "bcrypt-4.2.0.tar.gz", hash = "sha256:cf69eaf5185fd58f268f805b505ce31f9b9fc2d64b376642164e9244540c1221"},
] ]
[package.extras] [package.extras]
@@ -355,17 +355,17 @@ lxml = ["lxml"]
[[package]] [[package]]
name = "boto3" name = "boto3"
version = "1.34.145" version = "1.34.146"
description = "The AWS SDK for Python" description = "The AWS SDK for Python"
optional = false optional = false
python-versions = ">=3.8" python-versions = ">=3.8"
files = [ files = [
{file = "boto3-1.34.145-py3-none-any.whl", hash = "sha256:69d5afb7a017d07dd6bdfb680d2912d5d369b3fafa0a45161207d9f393b14d7e"}, {file = "boto3-1.34.146-py3-none-any.whl", hash = "sha256:7ec568fb19bce82a70be51f08fddac1ef927ca3fb0896cbb34303a012ba228d8"},
{file = "boto3-1.34.145.tar.gz", hash = "sha256:ac770fb53dde1743aec56bd8e56b7ee2e2f5ad42a37825968ec4ff8428822640"}, {file = "boto3-1.34.146.tar.gz", hash = "sha256:5686fe2a6d1aa1de8a88e9589cdcc33361640d3d7a13da718a30717248886124"},
] ]
[package.dependencies] [package.dependencies]
botocore = ">=1.34.145,<1.35.0" botocore = ">=1.34.146,<1.35.0"
jmespath = ">=0.7.1,<2.0.0" jmespath = ">=0.7.1,<2.0.0"
s3transfer = ">=0.10.0,<0.11.0" s3transfer = ">=0.10.0,<0.11.0"
@@ -374,13 +374,13 @@ crt = ["botocore[crt] (>=1.21.0,<2.0a0)"]
[[package]] [[package]]
name = "botocore" name = "botocore"
version = "1.34.145" version = "1.34.146"
description = "Low-level, data-driven core of boto 3." description = "Low-level, data-driven core of boto 3."
optional = false optional = false
python-versions = ">=3.8" python-versions = ">=3.8"
files = [ files = [
{file = "botocore-1.34.145-py3-none-any.whl", hash = "sha256:2e72e262de02adcb0264ac2bac159a28f55dbba8d9e52aa0308773a42950dff5"}, {file = "botocore-1.34.146-py3-none-any.whl", hash = "sha256:3fd4782362bd29c192704ebf859c5c8c5189ad05719e391eefe23088434427ae"},
{file = "botocore-1.34.145.tar.gz", hash = "sha256:edf0fb4c02186ae29b76263ac5fda18b0a085d334a310551c9984407cf1079e6"}, {file = "botocore-1.34.146.tar.gz", hash = "sha256:849cb8e54e042443aeabcd7822b5f2b76cb5cfe33fe3a71f91c7c069748a869c"},
] ]
[package.dependencies] [package.dependencies]
@@ -747,13 +747,13 @@ colorama = {version = "*", markers = "platform_system == \"Windows\""}
[[package]] [[package]]
name = "cohere" name = "cohere"
version = "5.6.1" version = "5.6.2"
description = "" description = ""
optional = false optional = false
python-versions = "<4.0,>=3.8" python-versions = "<4.0,>=3.8"
files = [ files = [
{file = "cohere-5.6.1-py3-none-any.whl", hash = "sha256:1c8bcd39a54622d64b83cafb865f102cd2565ce091b0856fd5ce11bf7169109a"}, {file = "cohere-5.6.2-py3-none-any.whl", hash = "sha256:cfecf1343bcaa4091266c5a231fbcb3ccbd80cad05ea093ef80024a117aa3a2f"},
{file = "cohere-5.6.1.tar.gz", hash = "sha256:5d7efda64f0e512d4cc35aa04b17a6f74b3d8c175a99f2797991a7f31dfac349"}, {file = "cohere-5.6.2.tar.gz", hash = "sha256:6bb901afdfb02f62ad8ed2d82f12d8ea87a6869710f5f880cb89190c4e994805"},
] ]
[package.dependencies] [package.dependencies]
@@ -1517,13 +1517,13 @@ protobuf = ">=3.20.2,<4.21.0 || >4.21.0,<4.21.1 || >4.21.1,<4.21.2 || >4.21.2,<4
[[package]] [[package]]
name = "google-cloud-storage" name = "google-cloud-storage"
version = "2.17.0" version = "2.18.0"
description = "Google Cloud Storage API client library" description = "Google Cloud Storage API client library"
optional = false optional = false
python-versions = ">=3.7" python-versions = ">=3.7"
files = [ files = [
{file = "google-cloud-storage-2.17.0.tar.gz", hash = "sha256:49378abff54ef656b52dca5ef0f2eba9aa83dc2b2c72c78714b03a1a95fe9388"}, {file = "google_cloud_storage-2.18.0-py2.py3-none-any.whl", hash = "sha256:e8e1a9577952143c3fca8163005ecfadd2d70ec080fa158a8b305000e2c22fbb"},
{file = "google_cloud_storage-2.17.0-py2.py3-none-any.whl", hash = "sha256:5b393bc766b7a3bc6f5407b9e665b2450d36282614b7945e570b3480a456d1e1"}, {file = "google_cloud_storage-2.18.0.tar.gz", hash = "sha256:0aa3f7c57f3632f81b455d91558d2b27ada96eee2de3aaa17f689db1470d9578"},
] ]
[package.dependencies] [package.dependencies]
@@ -1535,7 +1535,8 @@ google-resumable-media = ">=2.6.0"
requests = ">=2.18.0,<3.0.0dev" requests = ">=2.18.0,<3.0.0dev"
[package.extras] [package.extras]
protobuf = ["protobuf (<5.0.0dev)"] protobuf = ["protobuf (<6.0.0dev)"]
tracing = ["opentelemetry-api (>=1.1.0)"]
[[package]] [[package]]
name = "google-crc32c" name = "google-crc32c"
@@ -2865,13 +2866,13 @@ pyyaml = ">=5.1"
[[package]] [[package]]
name = "mkdocs-material" name = "mkdocs-material"
version = "9.5.29" version = "9.5.30"
description = "Documentation that simply works" description = "Documentation that simply works"
optional = false optional = false
python-versions = ">=3.8" python-versions = ">=3.8"
files = [ files = [
{file = "mkdocs_material-9.5.29-py3-none-any.whl", hash = "sha256:afc1f508e2662ded95f0a35a329e8a5acd73ee88ca07ba73836eb6fcdae5d8b4"}, {file = "mkdocs_material-9.5.30-py3-none-any.whl", hash = "sha256:fc070689c5250a180e9b9d79d8491ef9a3a7acb240db0728728d6c31eeb131d4"},
{file = "mkdocs_material-9.5.29.tar.gz", hash = "sha256:3e977598ec15a4ddad5c4dfc9e08edab6023edb51e88f0729bd27be77e3d322a"}, {file = "mkdocs_material-9.5.30.tar.gz", hash = "sha256:3fd417dd42d679e3ba08b9e2d72cd8b8af142cc4a3969676ad6b00993dd182ec"},
] ]
[package.dependencies] [package.dependencies]
@@ -3337,13 +3338,13 @@ sympy = "*"
[[package]] [[package]]
name = "openai" name = "openai"
version = "1.36.0" version = "1.37.0"
description = "The official Python library for the openai API" description = "The official Python library for the openai API"
optional = false optional = false
python-versions = ">=3.7.1" python-versions = ">=3.7.1"
files = [ files = [
{file = "openai-1.36.0-py3-none-any.whl", hash = "sha256:82b74ded1fe2ea94abb19a007178bc143675f1b6903cebd63e2968d654bb0a6f"}, {file = "openai-1.37.0-py3-none-any.whl", hash = "sha256:a903245c0ecf622f2830024acdaa78683c70abb8e9d37a497b851670864c9f73"},
{file = "openai-1.36.0.tar.gz", hash = "sha256:a124baf0e1657d6156e12248642f88489cd030be8655b69bc1c13eb50e71a93d"}, {file = "openai-1.37.0.tar.gz", hash = "sha256:dc8197fc40ab9d431777b6620d962cc49f4544ffc3011f03ce0a805e6eb54adb"},
] ]
[package.dependencies] [package.dependencies]
@@ -4085,6 +4086,19 @@ files = [
{file = "pyarrow-17.0.0-cp312-cp312-win_amd64.whl", hash = "sha256:392bc9feabc647338e6c89267635e111d71edad5fcffba204425a7c8d13610d7"}, {file = "pyarrow-17.0.0-cp312-cp312-win_amd64.whl", hash = "sha256:392bc9feabc647338e6c89267635e111d71edad5fcffba204425a7c8d13610d7"},
{file = "pyarrow-17.0.0-cp38-cp38-macosx_10_15_x86_64.whl", hash = "sha256:af5ff82a04b2171415f1410cff7ebb79861afc5dae50be73ce06d6e870615204"}, {file = "pyarrow-17.0.0-cp38-cp38-macosx_10_15_x86_64.whl", hash = "sha256:af5ff82a04b2171415f1410cff7ebb79861afc5dae50be73ce06d6e870615204"},
{file = "pyarrow-17.0.0-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:edca18eaca89cd6382dfbcff3dd2d87633433043650c07375d095cd3517561d8"}, {file = "pyarrow-17.0.0-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:edca18eaca89cd6382dfbcff3dd2d87633433043650c07375d095cd3517561d8"},
{file = "pyarrow-17.0.0-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:7c7916bff914ac5d4a8fe25b7a25e432ff921e72f6f2b7547d1e325c1ad9d155"},
{file = "pyarrow-17.0.0-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:f553ca691b9e94b202ff741bdd40f6ccb70cdd5fbf65c187af132f1317de6145"},
{file = "pyarrow-17.0.0-cp38-cp38-manylinux_2_28_aarch64.whl", hash = "sha256:0cdb0e627c86c373205a2f94a510ac4376fdc523f8bb36beab2e7f204416163c"},
{file = "pyarrow-17.0.0-cp38-cp38-manylinux_2_28_x86_64.whl", hash = "sha256:d7d192305d9d8bc9082d10f361fc70a73590a4c65cf31c3e6926cd72b76bc35c"},
{file = "pyarrow-17.0.0-cp38-cp38-win_amd64.whl", hash = "sha256:02dae06ce212d8b3244dd3e7d12d9c4d3046945a5933d28026598e9dbbda1fca"},
{file = "pyarrow-17.0.0-cp39-cp39-macosx_10_15_x86_64.whl", hash = "sha256:13d7a460b412f31e4c0efa1148e1d29bdf18ad1411eb6757d38f8fbdcc8645fb"},
{file = "pyarrow-17.0.0-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:9b564a51fbccfab5a04a80453e5ac6c9954a9c5ef2890d1bcf63741909c3f8df"},
{file = "pyarrow-17.0.0-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:32503827abbc5aadedfa235f5ece8c4f8f8b0a3cf01066bc8d29de7539532687"},
{file = "pyarrow-17.0.0-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:a155acc7f154b9ffcc85497509bcd0d43efb80d6f733b0dc3bb14e281f131c8b"},
{file = "pyarrow-17.0.0-cp39-cp39-manylinux_2_28_aarch64.whl", hash = "sha256:dec8d129254d0188a49f8a1fc99e0560dc1b85f60af729f47de4046015f9b0a5"},
{file = "pyarrow-17.0.0-cp39-cp39-manylinux_2_28_x86_64.whl", hash = "sha256:a48ddf5c3c6a6c505904545c25a4ae13646ae1f8ba703c4df4a1bfe4f4006bda"},
{file = "pyarrow-17.0.0-cp39-cp39-win_amd64.whl", hash = "sha256:42bf93249a083aca230ba7e2786c5f673507fa97bbd9725a1e2754715151a204"},
{file = "pyarrow-17.0.0.tar.gz", hash = "sha256:4beca9521ed2c0921c1023e68d097d0299b62c362639ea315572a58f3f50fd28"},
] ]
[package.dependencies] [package.dependencies]
@@ -4321,13 +4335,13 @@ extra = ["pygments (>=2.12)"]
[[package]] [[package]]
name = "pypdf" name = "pypdf"
version = "4.3.0" version = "4.3.1"
description = "A pure-python PDF library capable of splitting, merging, cropping, and transforming PDF files" description = "A pure-python PDF library capable of splitting, merging, cropping, and transforming PDF files"
optional = false optional = false
python-versions = ">=3.6" python-versions = ">=3.6"
files = [ files = [
{file = "pypdf-4.3.0-py3-none-any.whl", hash = "sha256:eeea4d019b57c099d02a0e1692eaaab23341ae3f255c1dafa3c8566b4636496d"}, {file = "pypdf-4.3.1-py3-none-any.whl", hash = "sha256:64b31da97eda0771ef22edb1bfecd5deee4b72c3d1736b7df2689805076d6418"},
{file = "pypdf-4.3.0.tar.gz", hash = "sha256:0d7a4c67fd03782f5a09d3f48c11c7a31e0bb9af78861a25229bb49259ed0504"}, {file = "pypdf-4.3.1.tar.gz", hash = "sha256:b2f37fe9a3030aa97ca86067a56ba3f9d3565f9a791b305c7355d8392c30d91b"},
] ]
[package.dependencies] [package.dependencies]
@@ -4414,13 +4428,13 @@ files = [
[[package]] [[package]]
name = "pytest" name = "pytest"
version = "8.2.2" version = "8.3.1"
description = "pytest: simple powerful testing with Python" description = "pytest: simple powerful testing with Python"
optional = false optional = false
python-versions = ">=3.8" python-versions = ">=3.8"
files = [ files = [
{file = "pytest-8.2.2-py3-none-any.whl", hash = "sha256:c434598117762e2bd304e526244f67bf66bbd7b5d6cf22138be51ff661980343"}, {file = "pytest-8.3.1-py3-none-any.whl", hash = "sha256:e9600ccf4f563976e2c99fa02c7624ab938296551f280835ee6516df8bc4ae8c"},
{file = "pytest-8.2.2.tar.gz", hash = "sha256:de4bb8104e201939ccdc688b27a89a7be2079b22e2bd2b07f806b6ba71117977"}, {file = "pytest-8.3.1.tar.gz", hash = "sha256:7e8e5c5abd6e93cb1cc151f23e57adc31fcf8cfd2a3ff2da63e23f732de35db6"},
] ]
[package.dependencies] [package.dependencies]
@@ -4428,7 +4442,7 @@ colorama = {version = "*", markers = "sys_platform == \"win32\""}
exceptiongroup = {version = ">=1.0.0rc8", markers = "python_version < \"3.11\""} exceptiongroup = {version = ">=1.0.0rc8", markers = "python_version < \"3.11\""}
iniconfig = "*" iniconfig = "*"
packaging = "*" packaging = "*"
pluggy = ">=1.5,<2.0" pluggy = ">=1.5,<2"
tomli = {version = ">=1", markers = "python_version < \"3.11\""} tomli = {version = ">=1", markers = "python_version < \"3.11\""}
[package.extras] [package.extras]
@@ -4899,19 +4913,19 @@ files = [
[[package]] [[package]]
name = "setuptools" name = "setuptools"
version = "71.0.4" version = "71.1.0"
description = "Easily download, build, install, upgrade, and uninstall Python packages" description = "Easily download, build, install, upgrade, and uninstall Python packages"
optional = false optional = false
python-versions = ">=3.8" python-versions = ">=3.8"
files = [ files = [
{file = "setuptools-71.0.4-py3-none-any.whl", hash = "sha256:ed2feca703be3bdbd94e6bb17365d91c6935c6b2a8d0bb09b66a2c435ba0b1a5"}, {file = "setuptools-71.1.0-py3-none-any.whl", hash = "sha256:33874fdc59b3188304b2e7c80d9029097ea31627180896fb549c578ceb8a0855"},
{file = "setuptools-71.0.4.tar.gz", hash = "sha256:48297e5d393a62b7cb2a10b8f76c63a73af933bd809c9e0d0d6352a1a0135dd8"}, {file = "setuptools-71.1.0.tar.gz", hash = "sha256:032d42ee9fb536e33087fb66cac5f840eb9391ed05637b3f2a76a7c8fb477936"},
] ]
[package.extras] [package.extras]
core = ["importlib-metadata (>=6)", "importlib-resources (>=5.10.2)", "jaraco.text (>=3.7)", "more-itertools (>=8.8)", "ordered-set (>=3.1.1)", "packaging (>=24)", "platformdirs (>=2.6.2)", "tomli (>=2.0.1)", "wheel (>=0.43.0)"] core = ["importlib-metadata (>=6)", "importlib-resources (>=5.10.2)", "jaraco.text (>=3.7)", "more-itertools (>=8.8)", "ordered-set (>=3.1.1)", "packaging (>=24)", "platformdirs (>=2.6.2)", "tomli (>=2.0.1)", "wheel (>=0.43.0)"]
doc = ["furo", "jaraco.packaging (>=9.3)", "jaraco.tidelift (>=1.4)", "pygments-github-lexers (==0.0.5)", "pyproject-hooks (!=1.1)", "rst.linker (>=1.9)", "sphinx (>=3.5)", "sphinx-favicon", "sphinx-inline-tabs", "sphinx-lint", "sphinx-notfound-page (>=1,<2)", "sphinx-reredirects", "sphinxcontrib-towncrier"] doc = ["furo", "jaraco.packaging (>=9.3)", "jaraco.tidelift (>=1.4)", "pygments-github-lexers (==0.0.5)", "pyproject-hooks (!=1.1)", "rst.linker (>=1.9)", "sphinx (>=3.5)", "sphinx-favicon", "sphinx-inline-tabs", "sphinx-lint", "sphinx-notfound-page (>=1,<2)", "sphinx-reredirects", "sphinxcontrib-towncrier"]
test = ["build[virtualenv] (>=1.0.3)", "filelock (>=3.4.0)", "importlib-metadata", "ini2toml[lite] (>=0.14)", "jaraco.develop (>=7.21)", "jaraco.envs (>=2.2)", "jaraco.path (>=3.2.0)", "jaraco.test", "mypy (==1.10.0)", "packaging (>=23.2)", "pip (>=19.1)", "pyproject-hooks (!=1.1)", "pytest (>=6,!=8.1.*)", "pytest-checkdocs (>=2.4)", "pytest-cov", "pytest-enabler (>=2.2)", "pytest-home (>=0.5)", "pytest-mypy", "pytest-perf", "pytest-ruff (<0.4)", "pytest-ruff (>=0.2.1)", "pytest-ruff (>=0.3.2)", "pytest-subprocess", "pytest-timeout", "pytest-xdist (>=3)", "tomli", "tomli-w (>=1.0.0)", "virtualenv (>=13.0.0)", "wheel"] test = ["build[virtualenv] (>=1.0.3)", "filelock (>=3.4.0)", "importlib-metadata", "ini2toml[lite] (>=0.14)", "jaraco.develop (>=7.21)", "jaraco.envs (>=2.2)", "jaraco.path (>=3.2.0)", "jaraco.test", "mypy (==1.11.*)", "packaging (>=23.2)", "pip (>=19.1)", "pyproject-hooks (!=1.1)", "pytest (>=6,!=8.1.*)", "pytest-checkdocs (>=2.4)", "pytest-cov", "pytest-enabler (>=2.2)", "pytest-home (>=0.5)", "pytest-mypy", "pytest-perf", "pytest-ruff (<0.4)", "pytest-ruff (>=0.2.1)", "pytest-ruff (>=0.3.2)", "pytest-subprocess", "pytest-timeout", "pytest-xdist (>=3)", "tomli", "tomli-w (>=1.0.0)", "virtualenv (>=13.0.0)", "wheel"]
[[package]] [[package]]
name = "shapely" name = "shapely"
@@ -5256,13 +5270,13 @@ test = ["pytest", "ruff"]
[[package]] [[package]]
name = "together" name = "together"
version = "1.2.2" version = "1.2.3"
description = "Python client for Together's Cloud Platform!" description = "Python client for Together's Cloud Platform!"
optional = false optional = false
python-versions = "<4.0,>=3.8" python-versions = "<4.0,>=3.8"
files = [ files = [
{file = "together-1.2.2-py3-none-any.whl", hash = "sha256:7ce89f902dbaca67e46e693d90182514494f510f3bc16cb89d816a5031ab0433"}, {file = "together-1.2.3-py3-none-any.whl", hash = "sha256:bbafb4b8340e0f7e0ddb11ad447eb3467c591090910d0291cfbf74b47af045c1"},
{file = "together-1.2.2.tar.gz", hash = "sha256:fd026f4a604e1fb3ee2fa5803f31e5e36ad31b3d182ef47f611326de66907d13"}, {file = "together-1.2.3.tar.gz", hash = "sha256:4ea7626a9581d16fbf293e3eaf91557c43dea044627cf6dbe458bbf43408a6b2"},
] ]
[package.dependencies] [package.dependencies]

View File

@@ -1,6 +1,6 @@
[tool.poetry] [tool.poetry]
name = "crewai" name = "crewai"
version = "0.41.1" version = "0.46.0"
description = "Cutting-edge framework for orchestrating role-playing, autonomous AI agents. By fostering collaborative intelligence, CrewAI empowers agents to work together seamlessly, tackling complex tasks." description = "Cutting-edge framework for orchestrating role-playing, autonomous AI agents. By fostering collaborative intelligence, CrewAI empowers agents to work together seamlessly, tackling complex tasks."
authors = ["Joao Moura <joao@crewai.com>"] authors = ["Joao Moura <joao@crewai.com>"]
readme = "README.md" readme = "README.md"

View File

@@ -55,8 +55,6 @@ class Agent(BaseAgent):
tools: Tools at agents disposal tools: Tools at agents disposal
step_callback: Callback to be executed after each step of the agent execution. step_callback: Callback to be executed after each step of the agent execution.
callbacks: A list of callback functions from the langchain library that are triggered during the agent's execution process callbacks: A list of callback functions from the langchain library that are triggered during the agent's execution process
allow_code_execution: Enable code execution for the agent.
max_retry_limit: Maximum number of retries for an agent to execute a task when an error occurs.
""" """
_times_executed: int = PrivateAttr(default=0) _times_executed: int = PrivateAttr(default=0)
@@ -262,6 +260,7 @@ class Agent(BaseAgent):
"tools_handler": self.tools_handler, "tools_handler": self.tools_handler,
"function_calling_llm": self.function_calling_llm, "function_calling_llm": self.function_calling_llm,
"callbacks": self.callbacks, "callbacks": self.callbacks,
"max_tokens": self.max_tokens,
} }
if self._rpm_controller: if self._rpm_controller:

View File

@@ -45,6 +45,7 @@ class BaseAgent(ABC, BaseModel):
i18n (I18N): Internationalization settings. i18n (I18N): Internationalization settings.
cache_handler (InstanceOf[CacheHandler]): An instance of the CacheHandler class. cache_handler (InstanceOf[CacheHandler]): An instance of the CacheHandler class.
tools_handler (InstanceOf[ToolsHandler]): An instance of the ToolsHandler class. tools_handler (InstanceOf[ToolsHandler]): An instance of the ToolsHandler class.
max_tokens: Maximum number of tokens for the agent to generate in a response.
Methods: Methods:
@@ -118,6 +119,9 @@ class BaseAgent(ABC, BaseModel):
tools_handler: InstanceOf[ToolsHandler] = Field( tools_handler: InstanceOf[ToolsHandler] = Field(
default=None, description="An instance of the ToolsHandler class." default=None, description="An instance of the ToolsHandler class."
) )
max_tokens: Optional[int] = Field(
default=None, description="Maximum number of tokens for the agent's execution."
)
_original_role: str | None = None _original_role: str | None = None
_original_goal: str | None = None _original_goal: str | None = None
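A hedged sketch of the new field in use; every constructor argument except `max_tokens` is illustrative:
```python
from crewai import Agent

# max_tokens (added above) caps how many tokens the agent's LLM may
# generate in a single response; None keeps the model's default.
researcher = Agent(
    role="Researcher",
    goal="Summarize findings in under a paragraph",
    backstory="A terse analyst.",
    max_tokens=512,
)
```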

View File

@@ -3,7 +3,6 @@ from typing import TYPE_CHECKING, Optional
from crewai.memory.entity.entity_memory_item import EntityMemoryItem from crewai.memory.entity.entity_memory_item import EntityMemoryItem
from crewai.memory.long_term.long_term_memory_item import LongTermMemoryItem from crewai.memory.long_term.long_term_memory_item import LongTermMemoryItem
from crewai.memory.short_term.short_term_memory_item import ShortTermMemoryItem
from crewai.utilities.converter import ConverterError from crewai.utilities.converter import ConverterError
from crewai.utilities.evaluators.task_evaluator import TaskEvaluator from crewai.utilities.evaluators.task_evaluator import TaskEvaluator
from crewai.utilities import I18N from crewai.utilities import I18N
@@ -39,18 +38,17 @@ class CrewAgentExecutorMixin:
and "Action: Delegate work to coworker" not in output.log and "Action: Delegate work to coworker" not in output.log
): ):
try: try:
memory = ShortTermMemoryItem(
data=output.log,
agent=self.crew_agent.role,
metadata={
"observation": self.task.description,
},
)
if ( if (
hasattr(self.crew, "_short_term_memory") hasattr(self.crew, "_short_term_memory")
and self.crew._short_term_memory and self.crew._short_term_memory
): ):
self.crew._short_term_memory.save(memory) self.crew._short_term_memory.save(
value=output.log,
metadata={
"observation": self.task.description,
},
agent=self.crew_agent.role,
)
except Exception as e: except Exception as e:
print(f"Failed to add to short term memory: {e}") print(f"Failed to add to short term memory: {e}")
pass pass

View File

@@ -1,6 +1,8 @@
import threading import threading
import time import time
from typing import Any, Dict, Iterator, List, Optional, Tuple, Union from typing import Any, Dict, Iterator, List, Literal, Optional, Tuple, Union
import click
from langchain.agents import AgentExecutor from langchain.agents import AgentExecutor
from langchain.agents.agent import ExceptionTool from langchain.agents.agent import ExceptionTool
@@ -11,12 +13,21 @@ from langchain_core.tools import BaseTool
from langchain_core.utils.input import get_color_mapping from langchain_core.utils.input import get_color_mapping
from pydantic import InstanceOf from pydantic import InstanceOf
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.chains.summarize import load_summarize_chain
from crewai.agents.agent_builder.base_agent_executor_mixin import CrewAgentExecutorMixin from crewai.agents.agent_builder.base_agent_executor_mixin import CrewAgentExecutorMixin
from crewai.agents.tools_handler import ToolsHandler from crewai.agents.tools_handler import ToolsHandler
from crewai.tools.tool_usage import ToolUsage, ToolUsageErrorException from crewai.tools.tool_usage import ToolUsage, ToolUsageErrorException
from crewai.utilities import I18N from crewai.utilities import I18N
from crewai.utilities.constants import TRAINING_DATA_FILE from crewai.utilities.constants import TRAINING_DATA_FILE
from crewai.utilities.exceptions.context_window_exceeding_exception import (
LLMContextLengthExceededException,
)
from crewai.utilities.training_handler import CrewTrainingHandler from crewai.utilities.training_handler import CrewTrainingHandler
from crewai.utilities.logger import Logger
class CrewAgentExecutor(AgentExecutor, CrewAgentExecutorMixin): class CrewAgentExecutor(AgentExecutor, CrewAgentExecutorMixin):
@@ -40,6 +51,8 @@ class CrewAgentExecutor(AgentExecutor, CrewAgentExecutorMixin):
system_template: Optional[str] = None system_template: Optional[str] = None
prompt_template: Optional[str] = None prompt_template: Optional[str] = None
response_template: Optional[str] = None response_template: Optional[str] = None
_logger: Logger = Logger(verbose_level=2)
_fit_context_window_strategy: Optional[Literal["summarize"]] = "summarize"
def _call( def _call(
self, self,
@@ -131,7 +144,7 @@ class CrewAgentExecutor(AgentExecutor, CrewAgentExecutorMixin):
intermediate_steps = self._prepare_intermediate_steps(intermediate_steps) intermediate_steps = self._prepare_intermediate_steps(intermediate_steps)
# Call the LLM to see what to do. # Call the LLM to see what to do.
output = self.agent.plan( # type: ignore # Incompatible types in assignment (expression has type "AgentAction | AgentFinish | list[AgentAction]", variable has type "AgentAction") output = self.agent.plan(
intermediate_steps, intermediate_steps,
callbacks=run_manager.get_child() if run_manager else None, callbacks=run_manager.get_child() if run_manager else None,
**inputs, **inputs,
@@ -185,6 +198,27 @@ class CrewAgentExecutor(AgentExecutor, CrewAgentExecutorMixin):
yield AgentStep(action=output, observation=observation) yield AgentStep(action=output, observation=observation)
return return
except Exception as e:
if LLMContextLengthExceededException(str(e))._is_context_limit_error(
str(e)
):
output = self._handle_context_length_error(
intermediate_steps, run_manager, inputs
)
if isinstance(output, AgentFinish):
yield output
elif isinstance(output, list):
for step in output:
yield step
return
yield AgentStep(
action=AgentAction("_Exception", str(e), str(e)),
observation=str(e),
)
return
# If the tool chosen is the finishing tool, then we end and return. # If the tool chosen is the finishing tool, then we end and return.
if isinstance(output, AgentFinish): if isinstance(output, AgentFinish):
if self.should_ask_for_human_input: if self.should_ask_for_human_input:
@@ -235,6 +269,7 @@ class CrewAgentExecutor(AgentExecutor, CrewAgentExecutorMixin):
agent=self.crew_agent, agent=self.crew_agent,
action=agent_action, action=agent_action,
) )
tool_calling = tool_usage.parse(agent_action.log) tool_calling = tool_usage.parse(agent_action.log)
if isinstance(tool_calling, ToolUsageErrorException): if isinstance(tool_calling, ToolUsageErrorException):
@@ -280,3 +315,91 @@ class CrewAgentExecutor(AgentExecutor, CrewAgentExecutorMixin):
CrewTrainingHandler(TRAINING_DATA_FILE).append( CrewTrainingHandler(TRAINING_DATA_FILE).append(
self.crew._train_iteration, agent_id, training_data self.crew._train_iteration, agent_id, training_data
) )
def _handle_context_length(
self, intermediate_steps: List[Tuple[AgentAction, str]]
) -> List[Tuple[AgentAction, str]]:
text = intermediate_steps[0][1]
original_action = intermediate_steps[0][0]
text_splitter = RecursiveCharacterTextSplitter(
separators=["\n\n", "\n"],
chunk_size=8000,
chunk_overlap=500,
)
if self._fit_context_window_strategy == "summarize":
docs = text_splitter.create_documents([text])
self._logger.log(
"debug",
"Summarizing Content, it is recommended to use a RAG tool",
color="bold_blue",
)
summarize_chain = load_summarize_chain(
self.llm, chain_type="map_reduce", verbose=True
)
summarized_docs = []
for doc in docs:
summary = summarize_chain.invoke(
{"input_documents": [doc]}, return_only_outputs=True
)
summarized_docs.append(summary["output_text"])
formatted_results = "\n\n".join(summarized_docs)
summary_step = AgentStep(
action=AgentAction(
tool=original_action.tool,
tool_input=original_action.tool_input,
log=original_action.log,
),
observation=formatted_results,
)
summary_tuple = (summary_step.action, summary_step.observation)
return [summary_tuple]
return intermediate_steps
def _handle_context_length_error(
self,
intermediate_steps: List[Tuple[AgentAction, str]],
run_manager: Optional[CallbackManagerForChainRun],
inputs: Dict[str, str],
) -> Union[AgentFinish, List[AgentStep]]:
self._logger.log(
"debug",
"Context length exceeded. Asking user if they want to use summarize prompt to fit, this will reduce context length.",
color="yellow",
)
user_choice = click.confirm(
"Context length exceeded. Do you want to summarize the text to fit models context window?"
)
if user_choice:
self._logger.log(
"debug",
"Context length exceeded. Using summarize prompt to fit, this will reduce context length.",
color="bold_blue",
)
intermediate_steps = self._handle_context_length(intermediate_steps)
output = self.agent.plan(
intermediate_steps,
callbacks=run_manager.get_child() if run_manager else None,
**inputs,
)
if isinstance(output, AgentFinish):
return output
elif isinstance(output, AgentAction):
return [AgentStep(action=output, observation=None)]
else:
return [AgentStep(action=action, observation=None) for action in output]
else:
self._logger.log(
"debug",
"Context length exceeded. Consider using smaller text or RAG tools from crewai_tools.",
color="red",
)
raise SystemExit(
"Context length exceeded and user opted not to summarize. Consider using smaller text or RAG tools from crewai_tools."
)
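For readers following the sliding-context-window change, here is the summarization strategy from `_handle_context_length` isolated as a standalone sketch; the model name is an assumption, while the splitter parameters mirror the ones above:
```python
from langchain.chains.summarize import load_summarize_chain
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain_openai import ChatOpenAI

# Split the oversized observation into overlapping chunks, then
# map-reduce summarize each chunk and stitch the results back together.
llm = ChatOpenAI(model="gpt-4o-mini")  # assumed model, for illustration
splitter = RecursiveCharacterTextSplitter(
    separators=["\n\n", "\n"], chunk_size=8000, chunk_overlap=500
)
docs = splitter.create_documents(["<very long tool output goes here>"])
chain = load_summarize_chain(llm, chain_type="map_reduce")
summaries = [
    chain.invoke({"input_documents": [doc]}, return_only_outputs=True)["output_text"]
    for doc in docs
]
condensed = "\n\n".join(summaries)
```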

View File

@@ -6,9 +6,9 @@ from crewai.memory.storage.kickoff_task_outputs_storage import (
) )
from .create_crew import create_crew from .create_crew import create_crew
from .evaluate_crew import evaluate_crew
from .replay_from_task import replay_task_command from .replay_from_task import replay_task_command
from .reset_memories_command import reset_memories_command from .reset_memories_command import reset_memories_command
from .test_crew import test_crew
from .train_crew import train_crew from .train_crew import train_crew
@@ -144,7 +144,7 @@ def reset_memories(long, short, entities, kickoff_outputs, all):
def test(n_iterations: int, model: str): def test(n_iterations: int, model: str):
"""Test the crew and evaluate the results.""" """Test the crew and evaluate the results."""
click.echo(f"Testing the crew for {n_iterations} iterations with model {model}") click.echo(f"Testing the crew for {n_iterations} iterations with model {model}")
test_crew(n_iterations, model) evaluate_crew(n_iterations, model)
if __name__ == "__main__": if __name__ == "__main__":

View File

@@ -3,9 +3,9 @@ import subprocess
import click import click
def test_crew(n_iterations: int, model: str) -> None: def evaluate_crew(n_iterations: int, model: str) -> None:
""" """
Test the crew by running a command in the Poetry environment. Test and Evaluate the crew by running a command in the Poetry environment.
Args: Args:
n_iterations (int): The number of iterations to test the crew. n_iterations (int): The number of iterations to test the crew.

View File

@@ -9,10 +9,14 @@ from crewai.utilities.task_output_storage_handler import TaskOutputStorageHandle
def reset_memories_command(long, short, entity, kickoff_outputs, all) -> None: def reset_memories_command(long, short, entity, kickoff_outputs, all) -> None:
""" """
Replay the crew execution from a specific task. Reset the crew memories.
Args: Args:
task_id (str): The ID of the task to replay from. long (bool): Whether to reset the long-term memory.
short (bool): Whether to reset the short-term memory.
entity (bool): Whether to reset the entity memory.
kickoff_outputs (bool): Whether to reset the latest kickoff task outputs.
all (bool): Whether to reset all memories.
""" """
try: try:

View File

@@ -5,6 +5,7 @@ research_task:
the current year is 2024. the current year is 2024.
expected_output: > expected_output: >
A list with 10 bullet points of the most relevant information about {topic} A list with 10 bullet points of the most relevant information about {topic}
agent: researcher
reporting_task: reporting_task:
description: > description: >
@@ -13,3 +14,4 @@ reporting_task:
expected_output: > expected_output: >
A fully fledged report with the main topics, each with a full section of information.
Formatted as markdown without '```' Formatted as markdown without '```'
agent: reporting_analyst

View File

@@ -32,14 +32,12 @@ class {{crew_name}}Crew():
def research_task(self) -> Task: def research_task(self) -> Task:
return Task( return Task(
config=self.tasks_config['research_task'], config=self.tasks_config['research_task'],
agent=self.researcher()
) )
@task @task
def reporting_task(self) -> Task: def reporting_task(self) -> Task:
return Task( return Task(
config=self.tasks_config['reporting_task'], config=self.tasks_config['reporting_task'],
agent=self.reporting_analyst(),
output_file='report.md' output_file='report.md'
) )

View File

@@ -6,7 +6,7 @@ authors = ["Your Name <you@example.com>"]
[tool.poetry.dependencies] [tool.poetry.dependencies]
python = ">=3.10,<=3.13" python = ">=3.10,<=3.13"
crewai = { extras = ["tools"], version = "^0.41.1" } crewai = { extras = ["tools"], version = "^0.46.0" }
[tool.poetry.scripts] [tool.poetry.scripts]
{{folder_name}} = "{{folder_name}}.main:run" {{folder_name}} = "{{folder_name}}.main:run"

View File

@@ -155,6 +155,10 @@ class Crew(BaseModel):
default=False, default=False,
description="Plan the crew execution and add the plan to the crew.", description="Plan the crew execution and add the plan to the crew.",
) )
planning_llm: Optional[Any] = Field(
default=None,
description="Language model that will run the AgentPlanner if planning is True.",
)
task_execution_output_json_files: Optional[List[str]] = Field( task_execution_output_json_files: Optional[List[str]] = Field(
default=None, default=None,
description="List of file paths for task execution JSON files.", description="List of file paths for task execution JSON files.",
@@ -267,20 +271,6 @@ class Crew(BaseModel):
return self return self
@model_validator(mode="after")
def check_tasks_in_hierarchical_process_not_async(self):
"""Validates that the tasks in hierarchical process are not flagged with async_execution."""
if self.process == Process.hierarchical:
for task in self.tasks:
if task.async_execution:
raise PydanticCustomError(
"async_execution_in_hierarchical_process",
"Hierarchical process error: Tasks cannot be flagged with async_execution.",
{},
)
return self
@model_validator(mode="after") @model_validator(mode="after")
def validate_end_with_at_most_one_async_task(self): def validate_end_with_at_most_one_async_task(self):
"""Validates that the crew ends with at most one asynchronous task.""" """Validates that the crew ends with at most one asynchronous task."""
@@ -560,15 +550,12 @@ class Crew(BaseModel):
def _handle_crew_planning(self): def _handle_crew_planning(self):
"""Handles the Crew planning.""" """Handles the Crew planning."""
self._logger.log("info", "Planning the crew execution") self._logger.log("info", "Planning the crew execution")
result = CrewPlanner(self.tasks)._handle_crew_planning() result = CrewPlanner(
tasks=self.tasks, planning_agent_llm=self.planning_llm
)._handle_crew_planning()
if result is not None and hasattr(result, "list_of_plans_per_task"): for task, step_plan in zip(self.tasks, result.list_of_plans_per_task):
for task, step_plan in zip(self.tasks, result.list_of_plans_per_task): task.description += step_plan
task.description += step_plan
else:
self._logger.log(
"info", "Something went wrong with the planning process of the Crew"
)
def _store_execution_log( def _store_execution_log(
self, self,
@@ -606,7 +593,7 @@ class Crew(BaseModel):
def _run_hierarchical_process(self) -> CrewOutput: def _run_hierarchical_process(self) -> CrewOutput:
"""Creates and assigns a manager agent to make sure the crew completes the tasks.""" """Creates and assigns a manager agent to make sure the crew completes the tasks."""
self._create_manager_agent() self._create_manager_agent()
return self._execute_tasks(self.tasks, self.manager_agent) return self._execute_tasks(self.tasks)
def _create_manager_agent(self): def _create_manager_agent(self):
i18n = I18N(prompt_file=self.prompt_file) i18n = I18N(prompt_file=self.prompt_file)
@@ -630,7 +617,6 @@ class Crew(BaseModel):
def _execute_tasks( def _execute_tasks(
self, self,
tasks: List[Task], tasks: List[Task],
manager: Optional[BaseAgent] = None,
start_index: Optional[int] = 0, start_index: Optional[int] = 0,
was_replayed: bool = False, was_replayed: bool = False,
) -> CrewOutput: ) -> CrewOutput:
@@ -658,13 +644,13 @@ class Crew(BaseModel):
last_sync_output = task.output last_sync_output = task.output
continue continue
agent_to_use = self._get_agent_to_use(task, manager) agent_to_use = self._get_agent_to_use(task)
if agent_to_use is None: if agent_to_use is None:
raise ValueError( raise ValueError(
f"No agent available for task: {task.description}. Ensure that either the task has an assigned agent or a manager agent is provided." f"No agent available for task: {task.description}. Ensure that either the task has an assigned agent or a manager agent is provided."
) )
self._prepare_agent_tools(task, manager) self._prepare_agent_tools(task)
self._log_task_start(task, agent_to_use.role) self._log_task_start(task, agent_to_use.role)
if isinstance(task, ConditionalTask): if isinstance(task, ConditionalTask):
@@ -730,20 +716,18 @@ class Crew(BaseModel):
return skipped_task_output return skipped_task_output
return None return None
def _prepare_agent_tools(self, task: Task, manager: Optional[BaseAgent]): def _prepare_agent_tools(self, task: Task):
if self.process == Process.hierarchical: if self.process == Process.hierarchical:
if manager: if self.manager_agent:
self._update_manager_tools(task, manager) self._update_manager_tools(task)
else: else:
raise ValueError("Manager agent is required for hierarchical process.") raise ValueError("Manager agent is required for hierarchical process.")
elif task.agent and task.agent.allow_delegation: elif task.agent and task.agent.allow_delegation:
self._add_delegation_tools(task) self._add_delegation_tools(task)
def _get_agent_to_use( def _get_agent_to_use(self, task: Task) -> Optional[BaseAgent]:
self, task: Task, manager: Optional[BaseAgent]
) -> Optional[BaseAgent]:
if self.process == Process.hierarchical: if self.process == Process.hierarchical:
return manager return self.manager_agent
return task.agent return task.agent
def _add_delegation_tools(self, task: Task): def _add_delegation_tools(self, task: Task):
@@ -779,11 +763,14 @@ class Crew(BaseModel):
if self.output_log_file: if self.output_log_file:
self._file_handler.log(agent=role, task=task.description, status="started") self._file_handler.log(agent=role, task=task.description, status="started")
def _update_manager_tools(self, task: Task, manager: BaseAgent): def _update_manager_tools(self, task: Task):
if task.agent: if self.manager_agent:
manager.tools = task.agent.get_delegation_tools([task.agent]) if task.agent:
else: self.manager_agent.tools = task.agent.get_delegation_tools([task.agent])
manager.tools = manager.get_delegation_tools(self.agents) else:
self.manager_agent.tools = self.manager_agent.get_delegation_tools(
self.agents
)
def _get_context(self, task: Task, task_outputs: List[TaskOutput]): def _get_context(self, task: Task, task_outputs: List[TaskOutput]):
context = ( context = (
@@ -882,7 +869,7 @@ class Crew(BaseModel):
self.tasks[i].output = task_output self.tasks[i].output = task_output
self._logging_color = "bold_blue" self._logging_color = "bold_blue"
result = self._execute_tasks(self.tasks, self.manager_agent, start_index, True) result = self._execute_tasks(self.tasks, start_index, True)
return result return result
def copy(self): def copy(self):
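Taken together, the changes in this file move the hierarchical manager from a parameter threaded through `_execute_tasks` onto the crew itself. A hedged sketch, assuming `manager_agent` is the public field backing `self.manager_agent`:
```python
from crewai import Agent, Crew, Process, Task

# The manager is now resolved from crew.manager_agent rather than passed
# into _execute_tasks; the agents and tasks below are illustrative.
manager = Agent(
    role="Project Manager",
    goal="Coordinate the crew and delegate work",
    backstory="An experienced lead.",
    allow_delegation=True,
)
writer = Agent(role="Writer", goal="Draft copy", backstory="A copywriter.")
tagline = Task(
    description="Draft a product tagline",
    expected_output="One sentence",
)
crew = Crew(
    agents=[writer],
    tasks=[tagline],
    process=Process.hierarchical,
    manager_agent=manager,
)
```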

View File

@@ -1,3 +1,4 @@
from typing import Any, Dict, Optional
from crewai.memory.memory import Memory from crewai.memory.memory import Memory
from crewai.memory.short_term.short_term_memory_item import ShortTermMemoryItem from crewai.memory.short_term.short_term_memory_item import ShortTermMemoryItem
from crewai.memory.storage.rag_storage import RAGStorage from crewai.memory.storage.rag_storage import RAGStorage
@@ -18,8 +19,15 @@ class ShortTermMemory(Memory):
) )
super().__init__(storage) super().__init__(storage)
def save(self, item: ShortTermMemoryItem) -> None: def save(
super().save(item.data, item.metadata, item.agent) self,
value: Any,
metadata: Optional[Dict[str, Any]] = None,
agent: Optional[str] = None,
) -> None:
item = ShortTermMemoryItem(data=value, metadata=metadata, agent=agent)
super().save(value=item.data, metadata=item.metadata, agent=item.agent)
def search(self, query: str, score_threshold: float = 0.35): def search(self, query: str, score_threshold: float = 0.35):
return self.storage.search(query=query, score_threshold=score_threshold) # type: ignore # BUG? The reference is to the parent class, but the parent class does not have this parameters return self.storage.search(query=query, score_threshold=score_threshold) # type: ignore # BUG? The reference is to the parent class, but the parent class does not have this parameters
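A hedged sketch of the new keyword-style `save`; the import path and the no-argument constructor are assumptions, since the diff only shows the method change:
```python
from crewai.memory.short_term.short_term_memory import ShortTermMemory

# Callers now pass the raw value plus metadata and agent role;
# ShortTermMemory builds the ShortTermMemoryItem internally.
memory = ShortTermMemory()
memory.save(
    value="The user prefers concise answers.",
    metadata={"observation": "tone preference noted during the task"},
    agent="Researcher",
)
```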

View File

@@ -3,7 +3,10 @@ from typing import Any, Dict, Optional
class ShortTermMemoryItem: class ShortTermMemoryItem:
def __init__( def __init__(
self, data: Any, agent: str, metadata: Optional[Dict[str, Any]] = None self,
data: Any,
agent: Optional[str] = None,
metadata: Optional[Dict[str, Any]] = None,
): ):
self.data = data self.data = data
self.agent = agent self.agent = agent

View File

@@ -4,7 +4,7 @@ from typing import Any, Dict
class Storage: class Storage:
"""Abstract base class defining the storage interface""" """Abstract base class defining the storage interface"""
def save(self, key: str, value: Any, metadata: Dict[str, Any]) -> None: def save(self, value: Any, metadata: Dict[str, Any]) -> None:
pass pass
def search(self, key: str) -> Dict[str, Any]: # type: ignore def search(self, key: str) -> Dict[str, Any]: # type: ignore
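A minimal in-memory implementation matching the updated interface, as an illustrative sketch only:
```python
from typing import Any, Dict, List


class InMemoryStorage:
    """Sketch of a concrete Storage using the new save(value, metadata)."""

    def __init__(self) -> None:
        self._items: List[Dict[str, Any]] = []

    def save(self, value: Any, metadata: Dict[str, Any]) -> None:
        # Store the value alongside its metadata; no key is required now.
        self._items.append({"value": value, "metadata": metadata})

    def search(self, key: str) -> Dict[str, Any]:
        # Naive substring match, purely for illustration.
        return next(
            (item for item in self._items if key in str(item["value"])), {}
        )
```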

View File

@@ -1,2 +1,25 @@
from .annotations import agent, crew, task from .annotations import (
agent,
crew,
task,
output_json,
output_pydantic,
tool,
callback,
llm,
cache_handler,
)
from .crew_base import CrewBase from .crew_base import CrewBase
__all__ = [
"agent",
"crew",
"task",
"output_json",
"output_pydantic",
"tool",
"callback",
"CrewBase",
"llm",
"cache_handler",
]

View File

@@ -30,6 +30,37 @@ def agent(func):
return func return func
def llm(func):
func.is_llm = True
func = memoize(func)
return func
def output_json(cls):
cls.is_output_json = True
return cls
def output_pydantic(cls):
cls.is_output_pydantic = True
return cls
def tool(func):
func.is_tool = True
return memoize(func)
def callback(func):
func.is_callback = True
return memoize(func)
def cache_handler(func):
func.is_cache_handler = True
return memoize(func)
def crew(func): def crew(func):
def wrapper(self, *args, **kwargs): def wrapper(self, *args, **kwargs):
instantiated_tasks = [] instantiated_tasks = []
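A hedged sketch of how the newly exported decorators might be combined inside a `@CrewBase` class; the YAML entries and model name are assumed to exist:
```python
from langchain_openai import ChatOpenAI

from crewai import Agent, Crew, Task
from crewai.project import CrewBase, agent, crew, llm, task


@CrewBase
class PoemCrew:
    """Sketch only: matching agents.yaml/tasks.yaml entries are assumed."""

    @llm
    def default_llm(self):
        return ChatOpenAI(model="gpt-4o-mini")

    @agent
    def poet(self) -> Agent:
        return Agent(config=self.agents_config["poet"])

    @task
    def write_poem(self) -> Task:
        return Task(config=self.tasks_config["write_poem"])

    @crew
    def poem_crew(self) -> Crew:
        return Crew(agents=self.agents, tasks=self.tasks)
```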

View File

@@ -1,6 +1,7 @@
import inspect import inspect
import os import os
from pathlib import Path from pathlib import Path
from typing import Any, Callable, Dict
import yaml import yaml
from dotenv import load_dotenv from dotenv import load_dotenv
@@ -20,11 +21,6 @@ def CrewBase(cls):
base_directory = Path(frame_info.filename).parent.resolve() base_directory = Path(frame_info.filename).parent.resolve()
break break
if base_directory is None:
raise Exception(
"Unable to dynamically determine the project's base directory, you must run it from the project's root directory."
)
original_agents_config_path = getattr( original_agents_config_path = getattr(
cls, "agents_config", "config/agents.yaml" cls, "agents_config", "config/agents.yaml"
) )
@@ -32,12 +28,20 @@ def CrewBase(cls):
def __init__(self, *args, **kwargs): def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs) super().__init__(*args, **kwargs)
if self.base_directory is None:
raise Exception(
"Unable to dynamically determine the project's base directory, you must run it from the project's root directory."
)
self.agents_config = self.load_yaml( self.agents_config = self.load_yaml(
os.path.join(self.base_directory, self.original_agents_config_path) os.path.join(self.base_directory, self.original_agents_config_path)
) )
self.tasks_config = self.load_yaml( self.tasks_config = self.load_yaml(
os.path.join(self.base_directory, self.original_tasks_config_path) os.path.join(self.base_directory, self.original_tasks_config_path)
) )
self.map_all_agent_variables()
self.map_all_task_variables()
@staticmethod @staticmethod
def load_yaml(config_path: str): def load_yaml(config_path: str):
@@ -45,4 +49,138 @@ def CrewBase(cls):
# parsedContent = YamlParser.parse(file) # type: ignore # Argument 1 to "parse" has incompatible type "TextIOWrapper"; expected "YamlParser" # parsedContent = YamlParser.parse(file) # type: ignore # Argument 1 to "parse" has incompatible type "TextIOWrapper"; expected "YamlParser"
return yaml.safe_load(file) return yaml.safe_load(file)
def _get_all_functions(self):
return {
name: getattr(self, name)
for name in dir(self)
if callable(getattr(self, name))
}
def _filter_functions(
self, functions: Dict[str, Callable], attribute: str
) -> Dict[str, Callable]:
return {
name: func
for name, func in functions.items()
if hasattr(func, attribute)
}
def map_all_agent_variables(self) -> None:
all_functions = self._get_all_functions()
llms = self._filter_functions(all_functions, "is_llm")
tool_functions = self._filter_functions(all_functions, "is_tool")
cache_handler_functions = self._filter_functions(
all_functions, "is_cache_handler"
)
callbacks = self._filter_functions(all_functions, "is_callback")
agents = self._filter_functions(all_functions, "is_agent")
for agent_name, agent_info in self.agents_config.items():
self._map_agent_variables(
agent_name,
agent_info,
agents,
llms,
tool_functions,
cache_handler_functions,
callbacks,
)
def _map_agent_variables(
self,
agent_name: str,
agent_info: Dict[str, Any],
agents: Dict[str, Callable],
llms: Dict[str, Callable],
tool_functions: Dict[str, Callable],
cache_handler_functions: Dict[str, Callable],
callbacks: Dict[str, Callable],
) -> None:
if llm := agent_info.get("llm"):
self.agents_config[agent_name]["llm"] = llms[llm]()
if tools := agent_info.get("tools"):
self.agents_config[agent_name]["tools"] = [
tool_functions[tool]() for tool in tools
]
if function_calling_llm := agent_info.get("function_calling_llm"):
self.agents_config[agent_name]["function_calling_llm"] = agents[
function_calling_llm
]()
if step_callback := agent_info.get("step_callback"):
self.agents_config[agent_name]["step_callback"] = callbacks[
step_callback
]()
if cache_handler := agent_info.get("cache_handler"):
self.agents_config[agent_name]["cache_handler"] = (
cache_handler_functions[cache_handler]()
)
def map_all_task_variables(self) -> None:
all_functions = self._get_all_functions()
agents = self._filter_functions(all_functions, "is_agent")
tasks = self._filter_functions(all_functions, "is_task")
output_json_functions = self._filter_functions(
all_functions, "is_output_json"
)
tool_functions = self._filter_functions(all_functions, "is_tool")
callback_functions = self._filter_functions(all_functions, "is_callback")
output_pydantic_functions = self._filter_functions(
all_functions, "is_output_pydantic"
)
for task_name, task_info in self.tasks_config.items():
self._map_task_variables(
task_name,
task_info,
agents,
tasks,
output_json_functions,
tool_functions,
callback_functions,
output_pydantic_functions,
)
def _map_task_variables(
self,
task_name: str,
task_info: Dict[str, Any],
agents: Dict[str, Callable],
tasks: Dict[str, Callable],
output_json_functions: Dict[str, Callable],
tool_functions: Dict[str, Callable],
callback_functions: Dict[str, Callable],
output_pydantic_functions: Dict[str, Callable],
) -> None:
if context_list := task_info.get("context"):
self.tasks_config[task_name]["context"] = [
tasks[context_task_name]() for context_task_name in context_list
]
if tools := task_info.get("tools"):
self.tasks_config[task_name]["tools"] = [
tool_functions[tool]() for tool in tools
]
if agent_name := task_info.get("agent"):
self.tasks_config[task_name]["agent"] = agents[agent_name]()
if output_json := task_info.get("output_json"):
self.tasks_config[task_name]["output_json"] = output_json_functions[
output_json
]
if output_pydantic := task_info.get("output_pydantic"):
self.tasks_config[task_name]["output_pydantic"] = (
output_pydantic_functions[output_pydantic]
)
if callbacks := task_info.get("callbacks"):
self.tasks_config[task_name]["callbacks"] = [
callback_functions[callback]() for callback in callbacks
]
return WrappedClass return WrappedClass
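The mapping above resolves string references in the YAML configs to decorated methods on the wrapped class. A minimal usage sketch, assuming the @CrewBase/@agent/@task/@llm decorators from crewai.project set the is_agent/is_task/is_llm markers the filters above look for; the YAML keys and model name are illustrative:

from crewai import Agent, Task
from crewai.project import CrewBase, agent, llm, task


@CrewBase
class ResearchCrew:
    # config/agents.yaml may contain, e.g.:
    #   researcher:
    #     role: "Researcher"
    #     llm: fast_llm        <- resolved to the @llm method below

    @llm
    def fast_llm(self):
        from langchain_openai import ChatOpenAI
        return ChatOpenAI(model="gpt-4o-mini")

    @agent
    def researcher(self) -> Agent:
        # by the time this runs, agents_config["researcher"]["llm"]
        # holds the ChatOpenAI object, not the string "fast_llm"
        return Agent(config=self.agents_config["researcher"])

    @task
    def research_task(self) -> Task:
        return Task(config=self.tasks_config["research_task"])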

View File

@@ -1,6 +1,6 @@
import datetime
import json import json
import os import os
import re
import threading import threading
import uuid import uuid
from concurrent.futures import Future from concurrent.futures import Future
@@ -8,7 +8,6 @@ from copy import copy
from hashlib import md5 from hashlib import md5
from typing import Any, Dict, List, Optional, Tuple, Type, Union from typing import Any, Dict, List, Optional, Tuple, Type, Union
from langchain_openai import ChatOpenAI
from opentelemetry.trace import Span from opentelemetry.trace import Span
from pydantic import UUID4, BaseModel, Field, field_validator, model_validator from pydantic import UUID4, BaseModel, Field, field_validator, model_validator
from pydantic_core import PydanticCustomError from pydantic_core import PydanticCustomError
@@ -17,10 +16,8 @@ from crewai.agents.agent_builder.base_agent import BaseAgent
from crewai.tasks.output_format import OutputFormat from crewai.tasks.output_format import OutputFormat
from crewai.tasks.task_output import TaskOutput from crewai.tasks.task_output import TaskOutput
from crewai.telemetry.telemetry import Telemetry from crewai.telemetry.telemetry import Telemetry
from crewai.utilities.converter import Converter, ConverterError from crewai.utilities.converter import Converter, convert_to_model
from crewai.utilities.i18n import I18N from crewai.utilities.i18n import I18N
from crewai.utilities.printer import Printer
from crewai.utilities.pydantic_schema_parser import PydanticSchemaParser
class Task(BaseModel): class Task(BaseModel):
@@ -111,6 +108,7 @@ class Task(BaseModel):
_original_description: str | None = None _original_description: str | None = None
_original_expected_output: str | None = None _original_expected_output: str | None = None
_thread: threading.Thread | None = None _thread: threading.Thread | None = None
_execution_time: float | None = None
def __init__(__pydantic_self__, **data): def __init__(__pydantic_self__, **data):
config = data.pop("config", {}) config = data.pop("config", {})
@@ -124,9 +122,15 @@ class Task(BaseModel):
"may_not_set_field", "This field is not to be set by the user.", {} "may_not_set_field", "This field is not to be set by the user.", {}
) )
def _set_start_execution_time(self) -> float:
return datetime.datetime.now().timestamp()
def _set_end_execution_time(self, start_time: float) -> None:
self._execution_time = datetime.datetime.now().timestamp() - start_time
@field_validator("output_file") @field_validator("output_file")
@classmethod @classmethod
def output_file_validattion(cls, value: str) -> str: def output_file_validation(cls, value: str) -> str:
"""Validate the output file path by removing the / from the beginning of the path.""" """Validate the output file path by removing the / from the beginning of the path."""
if value.startswith("/"): if value.startswith("/"):
return value[1:] return value[1:]
@@ -220,6 +224,7 @@ class Task(BaseModel):
f"The task '{self.description}' has no agent assigned, therefore it can't be executed directly and should be executed in a Crew using a specific process that support that, like hierarchical." f"The task '{self.description}' has no agent assigned, therefore it can't be executed directly and should be executed in a Crew using a specific process that support that, like hierarchical."
) )
start_time = self._set_start_execution_time()
self._execution_span = self._telemetry.task_started(crew=agent.crew, task=self) self._execution_span = self._telemetry.task_started(crew=agent.crew, task=self)
self.prompt_context = context self.prompt_context = context
@@ -243,6 +248,7 @@ class Task(BaseModel):
) )
self.output = task_output self.output = task_output
self._set_end_execution_time(start_time)
if self.callback: if self.callback:
self.callback(self.output) self.callback(self.output)
@@ -326,18 +332,6 @@ class Task(BaseModel):
return copied_task return copied_task
def _create_converter(self, *args, **kwargs) -> Converter:
"""Create a converter instance."""
if self.agent and not self.converter_cls:
converter = self.agent.get_output_converter(*args, **kwargs)
elif self.converter_cls:
converter = self.converter_cls(*args, **kwargs)
if not converter:
raise Exception("No output converter found or set.")
return converter
def _export_output( def _export_output(
self, result: str self, result: str
) -> Tuple[Optional[BaseModel], Optional[Dict[str, Any]]]: ) -> Tuple[Optional[BaseModel], Optional[Dict[str, Any]]]:
@@ -345,75 +339,26 @@ class Task(BaseModel):
json_output: Optional[Dict[str, Any]] = None json_output: Optional[Dict[str, Any]] = None
if self.output_pydantic or self.output_json: if self.output_pydantic or self.output_json:
model_output = self._convert_to_model(result) model_output = convert_to_model(
pydantic_output = ( result,
model_output if isinstance(model_output, BaseModel) else None self.output_pydantic,
self.output_json,
self.agent,
self.converter_cls,
) )
if isinstance(model_output, str):
if isinstance(model_output, BaseModel):
pydantic_output = model_output
elif isinstance(model_output, dict):
json_output = model_output
elif isinstance(model_output, str):
try: try:
json_output = json.loads(model_output) json_output = json.loads(model_output)
except json.JSONDecodeError: except json.JSONDecodeError:
json_output = None json_output = None
else:
json_output = model_output if isinstance(model_output, dict) else None
return pydantic_output, json_output return pydantic_output, json_output
def _convert_to_model(self, result: str) -> Union[dict, BaseModel, str]:
model = self.output_pydantic or self.output_json
if model is None:
return result
try:
return self._validate_model(result, model)
except Exception:
return self._handle_partial_json(result, model)
def _validate_model(
self, result: str, model: Type[BaseModel]
) -> Union[dict, BaseModel]:
exported_result = model.model_validate_json(result)
if self.output_json:
return exported_result.model_dump()
return exported_result
def _handle_partial_json(
self, result: str, model: Type[BaseModel]
) -> Union[dict, BaseModel, str]:
match = re.search(r"({.*})", result, re.DOTALL)
if match:
try:
exported_result = model.model_validate_json(match.group(0))
if self.output_json:
return exported_result.model_dump()
return exported_result
except Exception:
pass
return self._convert_with_instructions(result, model)
def _convert_with_instructions(
self, result: str, model: Type[BaseModel]
) -> Union[dict, BaseModel, str]:
llm = self.agent.function_calling_llm or self.agent.llm # type: ignore # Item "None" of "BaseAgent | None" has no attribute "function_calling_llm"
instructions = self._get_conversion_instructions(model, llm)
converter = self._create_converter(
llm=llm, text=result, model=model, instructions=instructions
)
exported_result = (
converter.to_pydantic() if self.output_pydantic else converter.to_json()
)
if isinstance(exported_result, ConverterError):
Printer().print(
content=f"{exported_result.message} Using raw output instead.",
color="red",
)
return result
return exported_result
def _get_output_format(self) -> OutputFormat: def _get_output_format(self) -> OutputFormat:
if self.output_json: if self.output_json:
return OutputFormat.JSON return OutputFormat.JSON
@@ -421,34 +366,22 @@ class Task(BaseModel):
return OutputFormat.PYDANTIC return OutputFormat.PYDANTIC
return OutputFormat.RAW return OutputFormat.RAW
def _get_conversion_instructions(self, model: Type[BaseModel], llm: Any) -> str:
instructions = "I'm gonna convert this raw text into valid JSON."
if not self._is_gpt(llm):
model_schema = PydanticSchemaParser(model=model).get_schema()
instructions = f"{instructions}\n\nThe json should have the following structure, with the following keys:\n{model_schema}"
return instructions
def _save_output(self, content: str) -> None:
if not self.output_file:
raise Exception("Output file path is not set.")
directory = os.path.dirname(self.output_file)
if directory and not os.path.exists(directory):
os.makedirs(directory)
with open(self.output_file, "w", encoding="utf-8") as file:
file.write(content)
def _is_gpt(self, llm) -> bool:
return isinstance(llm, ChatOpenAI) and llm.openai_api_base is None
def _save_file(self, result: Any) -> None: def _save_file(self, result: Any) -> None:
if self.output_file is None:
raise ValueError("output_file is not set.")
directory = os.path.dirname(self.output_file) # type: ignore # Value of type variable "AnyOrLiteralStr" of "dirname" cannot be "str | None" directory = os.path.dirname(self.output_file) # type: ignore # Value of type variable "AnyOrLiteralStr" of "dirname" cannot be "str | None"
if directory and not os.path.exists(directory): if directory and not os.path.exists(directory):
os.makedirs(directory) os.makedirs(directory)
with open(self.output_file, "w", encoding="utf-8") as file: # type: ignore # Argument 1 to "open" has incompatible type "str | None"; expected "int | str | bytes | PathLike[str] | PathLike[bytes]" with open(self.output_file, "w", encoding="utf-8") as file:
file.write(result) if isinstance(result, dict):
import json
json.dump(result, file, ensure_ascii=False, indent=2)
else:
file.write(str(result))
return None return None
def __repr__(self): def __repr__(self):
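Two behavioral changes above are easy to miss: task execution is now timed with plain timestamps, and _save_file pretty-prints dict results as JSON. A standalone sketch of both (the file path is illustrative):

import datetime
import json


def save_file(output_file: str, result) -> None:
    # dict results are written as UTF-8 JSON; everything else via str()
    with open(output_file, "w", encoding="utf-8") as file:
        if isinstance(result, dict):
            json.dump(result, file, ensure_ascii=False, indent=2)
        else:
            file.write(str(result))


start = datetime.datetime.now().timestamp()       # _set_start_execution_time()
save_file("output.json", {"answer": 42})
elapsed = datetime.datetime.now().timestamp() - start  # stored on _execution_time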

View File

@@ -40,7 +40,7 @@ class Telemetry:
- Roles of agents in a crew - Roles of agents in a crew
- Tools names available - Tools names available
Users can opt-in to sharing more complete data suing the `share_crew` Users can opt-in to sharing more complete data using the `share_crew`
attribute in the Crew class. attribute in the Crew class.
""" """

View File

@@ -16,7 +16,7 @@ try:
except ImportError: except ImportError:
agentops = None agentops = None
OPENAI_BIGGER_MODELS = ["gpt-4"] OPENAI_BIGGER_MODELS = ["gpt-4o"]
class ToolUsageErrorException(Exception): class ToolUsageErrorException(Exception):
@@ -86,7 +86,8 @@ class ToolUsage:
) -> str: ) -> str:
if isinstance(calling, ToolUsageErrorException): if isinstance(calling, ToolUsageErrorException):
error = calling.message error = calling.message
self._printer.print(content=f"\n\n{error}\n", color="red") if self.agent.verbose:
self._printer.print(content=f"\n\n{error}\n", color="red")
self.task.increment_tools_errors() self.task.increment_tools_errors()
return error return error
@@ -96,7 +97,8 @@ class ToolUsage:
except Exception as e: except Exception as e:
error = getattr(e, "message", str(e)) error = getattr(e, "message", str(e))
self.task.increment_tools_errors() self.task.increment_tools_errors()
self._printer.print(content=f"\n\n{error}\n", color="red") if self.agent.verbose:
self._printer.print(content=f"\n\n{error}\n", color="red")
return error return error
return f"{self._use(tool_string=tool_string, tool=tool, calling=calling)}" # type: ignore # BUG?: "_use" of "ToolUsage" does not return a value (it only ever returns None) return f"{self._use(tool_string=tool_string, tool=tool, calling=calling)}" # type: ignore # BUG?: "_use" of "ToolUsage" does not return a value (it only ever returns None)
@@ -112,7 +114,8 @@ class ToolUsage:
result = self._i18n.errors("task_repeated_usage").format( result = self._i18n.errors("task_repeated_usage").format(
tool_names=self.tools_names tool_names=self.tools_names
) )
self._printer.print(content=f"\n\n{result}\n", color="purple") if self.agent.verbose:
self._printer.print(content=f"\n\n{result}\n", color="purple")
self._telemetry.tool_repeated_usage( self._telemetry.tool_repeated_usage(
llm=self.function_calling_llm, llm=self.function_calling_llm,
tool_name=tool.name, tool_name=tool.name,
@@ -168,7 +171,10 @@ class ToolUsage:
f'\n{error_message}.\nMoving on then. {self._i18n.slice("format").format(tool_names=self.tools_names)}' f'\n{error_message}.\nMoving on then. {self._i18n.slice("format").format(tool_names=self.tools_names)}'
).message ).message
self.task.increment_tools_errors() self.task.increment_tools_errors()
self._printer.print(content=f"\n\n{error_message}\n", color="red") if self.agent.verbose:
self._printer.print(
content=f"\n\n{error_message}\n", color="red"
)
return error # type: ignore # No return value expected return error # type: ignore # No return value expected
self.task.increment_tools_errors() self.task.increment_tools_errors()
@@ -192,7 +198,8 @@ class ToolUsage:
calling=calling, output=result, should_cache=should_cache calling=calling, output=result, should_cache=should_cache
) )
self._printer.print(content=f"\n\n{result}\n", color="purple") if self.agent.verbose:
self._printer.print(content=f"\n\n{result}\n", color="purple")
if agentops: if agentops:
agentops.record(tool_event) agentops.record(tool_event)
self._telemetry.tool_usage( self._telemetry.tool_usage(
@@ -346,7 +353,8 @@ class ToolUsage:
if self._run_attempts > self._max_parsing_attempts: if self._run_attempts > self._max_parsing_attempts:
self._telemetry.tool_usage_error(llm=self.function_calling_llm) self._telemetry.tool_usage_error(llm=self.function_calling_llm)
self.task.increment_tools_errors() self.task.increment_tools_errors()
self._printer.print(content=f"\n\n{e}\n", color="red") if self.agent.verbose:
self._printer.print(content=f"\n\n{e}\n", color="red")
return ToolUsageErrorException( # type: ignore # Incompatible return value type (got "ToolUsageErrorException", expected "ToolCalling | InstructorToolCalling") return ToolUsageErrorException( # type: ignore # Incompatible return value type (got "ToolUsageErrorException", expected "ToolCalling | InstructorToolCalling")
f'{self._i18n.errors("tool_usage_error").format(error=e)}\nMoving on then. {self._i18n.slice("format").format(tool_names=self.tools_names)}' f'{self._i18n.errors("tool_usage_error").format(error=e)}\nMoving on then. {self._i18n.slice("format").format(tool_names=self.tools_names)}'
) )

View File

@@ -7,6 +7,9 @@ from .parser import YamlParser
from .printer import Printer from .printer import Printer
from .prompts import Prompts from .prompts import Prompts
from .rpm_controller import RPMController from .rpm_controller import RPMController
from .exceptions.context_window_exceeding_exception import (
LLMContextLengthExceededException,
)
__all__ = [ __all__ = [
"Converter", "Converter",
@@ -19,4 +22,5 @@ __all__ = [
"Prompts", "Prompts",
"RPMController", "RPMController",
"YamlParser", "YamlParser",
"LLMContextLengthExceededException",
] ]

View File

@@ -1,9 +1,14 @@
import json import json
import re
from typing import Any, Optional, Type, Union
from langchain.schema import HumanMessage, SystemMessage from langchain.schema import HumanMessage, SystemMessage
from langchain_openai import ChatOpenAI from langchain_openai import ChatOpenAI
from pydantic import BaseModel, ValidationError
from crewai.agents.agent_builder.utilities.base_output_converter import OutputConverter from crewai.agents.agent_builder.utilities.base_output_converter import OutputConverter
from crewai.utilities.printer import Printer
from crewai.utilities.pydantic_schema_parser import PydanticSchemaParser
class ConverterError(Exception): class ConverterError(Exception):
@@ -72,3 +77,153 @@ class Converter(OutputConverter):
def is_gpt(self) -> bool: def is_gpt(self) -> bool:
"""Return if llm provided is of gpt from openai.""" """Return if llm provided is of gpt from openai."""
return isinstance(self.llm, ChatOpenAI) and self.llm.openai_api_base is None return isinstance(self.llm, ChatOpenAI) and self.llm.openai_api_base is None
def convert_to_model(
result: str,
output_pydantic: Optional[Type[BaseModel]],
output_json: Optional[Type[BaseModel]],
agent: Any,
converter_cls: Optional[Type[Converter]] = None,
) -> Union[dict, BaseModel, str]:
model = output_pydantic or output_json
if model is None:
return result
try:
escaped_result = json.dumps(json.loads(result, strict=False))
return validate_model(escaped_result, model, bool(output_json))
except json.JSONDecodeError as e:
Printer().print(
content=f"Error parsing JSON: {e}. Attempting to handle partial JSON.",
color="yellow",
)
return handle_partial_json(
result, model, bool(output_json), agent, converter_cls
)
except ValidationError as e:
Printer().print(
content=f"Pydantic validation error: {e}. Attempting to handle partial JSON.",
color="yellow",
)
return handle_partial_json(
result, model, bool(output_json), agent, converter_cls
)
except Exception as e:
Printer().print(
content=f"Unexpected error during model conversion: {type(e).__name__}: {e}. Returning original result.",
color="red",
)
return result
def validate_model(
result: str, model: Type[BaseModel], is_json_output: bool
) -> Union[dict, BaseModel]:
exported_result = model.model_validate_json(result)
if is_json_output:
return exported_result.model_dump()
return exported_result
def handle_partial_json(
result: str,
model: Type[BaseModel],
is_json_output: bool,
agent: Any,
converter_cls: Optional[Type[Converter]] = None,
) -> Union[dict, BaseModel, str]:
match = re.search(r"({.*})", result, re.DOTALL)
if match:
try:
exported_result = model.model_validate_json(match.group(0))
if is_json_output:
return exported_result.model_dump()
return exported_result
except json.JSONDecodeError as e:
Printer().print(
content=f"Error parsing JSON: {e}. The extracted JSON-like string is not valid JSON. Attempting alternative conversion method.",
color="yellow",
)
except ValidationError as e:
Printer().print(
content=f"Pydantic validation error: {e}. The JSON structure doesn't match the expected model. Attempting alternative conversion method.",
color="yellow",
)
except Exception as e:
Printer().print(
content=f"Unexpected error during partial JSON handling: {type(e).__name__}: {e}. Attempting alternative conversion method.",
color="red",
)
return convert_with_instructions(
result, model, is_json_output, agent, converter_cls
)
def convert_with_instructions(
result: str,
model: Type[BaseModel],
is_json_output: bool,
agent: Any,
converter_cls: Optional[Type[Converter]] = None,
) -> Union[dict, BaseModel, str]:
llm = agent.function_calling_llm or agent.llm
instructions = get_conversion_instructions(model, llm)
converter = create_converter(
agent=agent,
converter_cls=converter_cls,
llm=llm,
text=result,
model=model,
instructions=instructions,
)
exported_result = (
converter.to_pydantic() if not is_json_output else converter.to_json()
)
if isinstance(exported_result, ConverterError):
Printer().print(
content=f"{exported_result.message} Using raw output instead.",
color="red",
)
return result
return exported_result
def get_conversion_instructions(model: Type[BaseModel], llm: Any) -> str:
instructions = "I'm gonna convert this raw text into valid JSON."
if not is_gpt(llm):
model_schema = PydanticSchemaParser(model=model).get_schema()
instructions = f"{instructions}\n\nThe json should have the following structure, with the following keys:\n{model_schema}"
return instructions
def is_gpt(llm: Any) -> bool:
from langchain_openai import ChatOpenAI
return isinstance(llm, ChatOpenAI) and llm.openai_api_base is None
def create_converter(
agent: Optional[Any] = None,
converter_cls: Optional[Type[Converter]] = None,
*args,
**kwargs,
) -> Converter:
if agent and not converter_cls:
if hasattr(agent, "get_output_converter"):
converter = agent.get_output_converter(*args, **kwargs)
else:
raise AttributeError("Agent does not have a 'get_output_converter' method")
elif converter_cls:
converter = converter_cls(*args, **kwargs)
else:
raise ValueError("Either agent or converter_cls must be provided")
if not converter:
raise Exception("No output converter found or set.")
return converter
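A usage sketch of the module-level fallback chain now shared by Task._export_output: strict JSON validation first, then regex extraction of a partial JSON span, and only then an LLM-guided conversion (no agent is needed while the first two stages succeed; Score is an illustrative model):

from pydantic import BaseModel

from crewai.utilities.converter import convert_to_model


class Score(BaseModel):
    quality: int


# Clean JSON validates directly:
out = convert_to_model('{"quality": 9}', Score, None, agent=None)
assert isinstance(out, Score) and out.quality == 9

# Chatty output falls through to handle_partial_json, which extracts the
# {...} span before resorting to convert_with_instructions:
out = convert_to_model('Here you go: {"quality": 7}', Score, None, agent=None)
assert out.quality == 7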

View File

@@ -1,5 +1,5 @@
import json import json
from typing import Any, List, Type, Union from typing import Any, List, Type
import regex import regex
from langchain.output_parsers import PydanticOutputParser from langchain.output_parsers import PydanticOutputParser
@@ -7,29 +7,24 @@ from langchain_core.exceptions import OutputParserException
from langchain_core.outputs import Generation from langchain_core.outputs import Generation
from langchain_core.pydantic_v1 import ValidationError from langchain_core.pydantic_v1 import ValidationError
from pydantic import BaseModel from pydantic import BaseModel
from pydantic.v1 import BaseModel as V1BaseModel
class CrewPydanticOutputParser(PydanticOutputParser): class CrewPydanticOutputParser(PydanticOutputParser):
"""Parses the text into pydantic models""" """Parses the text into pydantic models"""
pydantic_object: Union[Type[BaseModel], Type[V1BaseModel]] pydantic_object: Type[BaseModel]
def parse_result(self, result: List[Generation], *, partial: bool = False) -> Any: def parse_result(self, result: List[Generation]) -> Any:
result[0].text = self._transform_in_valid_json(result[0].text) result[0].text = self._transform_in_valid_json(result[0].text)
# Treating edge case of function calling llm returning the name instead of tool_name # Treating edge case of function calling llm returning the name instead of tool_name
json_object = json.loads(result[0].text) json_object = json.loads(result[0].text)
json_object["tool_name"] = ( if "tool_name" not in json_object:
json_object["name"] json_object["tool_name"] = json_object.get("name", "")
if "tool_name" not in json_object
else json_object["tool_name"]
)
result[0].text = json.dumps(json_object) result[0].text = json.dumps(json_object)
json_object = super().parse_result(result)
try: try:
return self.pydantic_object.parse_obj(json_object) return self.pydantic_object.model_validate(json_object)
except ValidationError as e: except ValidationError as e:
name = self.pydantic_object.__name__ name = self.pydantic_object.__name__
msg = f"Failed to parse {name} from completion {json_object}. Got: {e}" msg = f"Failed to parse {name} from completion {json_object}. Got: {e}"

View File

@@ -28,6 +28,7 @@ class CrewEvaluator:
""" """
tasks_scores: defaultdict = defaultdict(list) tasks_scores: defaultdict = defaultdict(list)
run_execution_times: defaultdict = defaultdict(list)
iteration: int = 0 iteration: int = 0
def __init__(self, crew, openai_model_name: str): def __init__(self, crew, openai_model_name: str):
@@ -40,9 +41,6 @@ class CrewEvaluator:
for task in self.crew.tasks: for task in self.crew.tasks:
task.callback = self.evaluate task.callback = self.evaluate
def set_iteration(self, iteration: int) -> None:
self.iteration = iteration
def _evaluator_agent(self): def _evaluator_agent(self):
return Agent( return Agent(
role="Task Execution Evaluator", role="Task Execution Evaluator",
@@ -71,6 +69,9 @@ class CrewEvaluator:
output_pydantic=TaskEvaluationPydanticOutput, output_pydantic=TaskEvaluationPydanticOutput,
) )
def set_iteration(self, iteration: int) -> None:
self.iteration = iteration
def print_crew_evaluation_result(self) -> None: def print_crew_evaluation_result(self) -> None:
""" """
Prints the evaluation result of the crew in a table. Prints the evaluation result of the crew in a table.
@@ -119,6 +120,16 @@ class CrewEvaluator:
] ]
table.add_row("Crew", *map(str, crew_scores), f"{crew_average:.1f}") table.add_row("Crew", *map(str, crew_scores), f"{crew_average:.1f}")
run_exec_times = [
int(sum(tasks_exec_times))
for _, tasks_exec_times in self.run_execution_times.items()
]
execution_time_avg = int(sum(run_exec_times) / len(run_exec_times))
table.add_row(
"Execution Time (s)",
*map(str, run_exec_times),
f"{execution_time_avg}",
)
# Display the table in the terminal # Display the table in the terminal
console = Console() console = Console()
console.print(table) console.print(table)
@@ -145,5 +156,8 @@ class CrewEvaluator:
if isinstance(evaluation_result.pydantic, TaskEvaluationPydanticOutput): if isinstance(evaluation_result.pydantic, TaskEvaluationPydanticOutput):
self.tasks_scores[self.iteration].append(evaluation_result.pydantic.quality) self.tasks_scores[self.iteration].append(evaluation_result.pydantic.quality)
self.run_execution_times[self.iteration].append(
current_task._execution_time
)
else: else:
raise ValueError("Evaluation result is not in the expected format") raise ValueError("Evaluation result is not in the expected format")

View File

@@ -54,23 +54,23 @@ class TaskEvaluator:
def __init__(self, original_agent): def __init__(self, original_agent):
self.llm = original_agent.llm self.llm = original_agent.llm
def evaluate(self, task, ouput) -> TaskEvaluation: def evaluate(self, task, output) -> TaskEvaluation:
evaluation_query = ( evaluation_query = (
f"Assess the quality of the task completed based on the description, expected output, and actual results.\n\n" f"Assess the quality of the task completed based on the description, expected output, and actual results.\n\n"
f"Task Description:\n{task.description}\n\n" f"Task Description:\n{task.description}\n\n"
f"Expected Output:\n{task.expected_output}\n\n" f"Expected Output:\n{task.expected_output}\n\n"
f"Actual Output:\n{ouput}\n\n" f"Actual Output:\n{output}\n\n"
"Please provide:\n" "Please provide:\n"
"- Bullet points suggestions to improve future similar tasks\n" "- Bullet points suggestions to improve future similar tasks\n"
"- A score from 0 to 10 evaluating on completion, quality, and overall performance" "- A score from 0 to 10 evaluating on completion, quality, and overall performance"
"- Entities extracted from the task output, if any, their type, description, and relationships" "- Entities extracted from the task output, if any, their type, description, and relationships"
) )
instructions = "I'm gonna convert this raw text into valid JSON." instructions = "Convert all responses into valid JSON output."
if not self._is_gpt(self.llm): if not self._is_gpt(self.llm):
model_schema = PydanticSchemaParser(model=TaskEvaluation).get_schema() model_schema = PydanticSchemaParser(model=TaskEvaluation).get_schema()
instructions = f"{instructions}\n\nThe json should have the following structure, with the following keys:\n{model_schema}" instructions = f"{instructions}\n\nReturn only valid JSON with the following schema:\n```json\n{model_schema}\n```"
converter = Converter( converter = Converter(
llm=self.llm, llm=self.llm,

View File

@@ -0,0 +1,26 @@
class LLMContextLengthExceededException(Exception):
CONTEXT_LIMIT_ERRORS = [
"maximum context length",
"context length exceeded",
"context_length_exceeded",
"context window full",
"too many tokens",
"input is too long",
"exceeds token limit",
]
def __init__(self, error_message: str):
self.original_error_message = error_message
super().__init__(self._get_error_message(error_message))
def _is_context_limit_error(self, error_message: str) -> bool:
return any(
phrase.lower() in error_message.lower()
for phrase in self.CONTEXT_LIMIT_ERRORS
)
def _get_error_message(self, error_message: str):
return (
f"LLM context length exceeded. Original error: {error_message}\n"
"Consider using a smaller input or implementing a text splitting strategy."
)
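A hypothetical usage sketch of the new exception; fake_provider_call stands in for a real LLM call and is not part of this diff:

from crewai.utilities import LLMContextLengthExceededException


def fake_provider_call(prompt: str) -> str:
    raise RuntimeError("maximum context length exceeded")   # simulated failure


def call_llm(prompt: str) -> str:
    try:
        return fake_provider_call(prompt)
    except Exception as e:
        msg = str(e)
        if any(p in msg.lower()
               for p in LLMContextLengthExceededException.CONTEXT_LIMIT_ERRORS):
            raise LLMContextLengthExceededException(msg) from e
        raise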

View File

@@ -1,17 +1,28 @@
import re import re
class YamlParser: class YamlParser:
@staticmethod
def parse(file): def parse(file):
"""
Parses a YAML file, modifies specific patterns, and checks for unsupported 'context' usage.
Args:
file (file object): The YAML file to parse.
Returns:
str: The modified content of the YAML file.
Raises:
ValueError: If 'context:' is used incorrectly.
"""
content = file.read() content = file.read()
# Replace single { and } with doubled ones, while leaving already doubled ones intact and the other special characters {# and {% # Replace single { and } with doubled ones, while leaving already doubled ones intact and the other special characters {# and {%
modified_content = re.sub(r"(?<!\{){(?!\{)(?!\#)(?!\%)", "{{", content) modified_content = re.sub(r"(?<!\{){(?!\{)(?!\#)(?!\%)", "{{", content)
modified_content = re.sub( modified_content = re.sub(r"(?<!\})(?<!\%)(?<!\#)\}(?!})", "}}", modified_content)
r"(?<!\})(?<!\%)(?<!\#)\}(?!})", "}}", modified_content
)
# Check for 'context:' not followed by '[' and raise an error # Check for 'context:' not followed by '[' and raise an error
if re.search(r"context:(?!\s*\[)", modified_content): if re.search(r"context:(?!\s*\[)", modified_content):
raise ValueError( raise ValueError(
"Context is currently only supported in code when creating a task. Please use the 'context' key in the task configuration." "Context is currently only supported in code when creating a task. "
"Please use the 'context' key in the task configuration."
) )
return modified_content return modified_content
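The brace handling is easiest to see on an example; a sketch using io.StringIO in place of a real file object:

import io

from crewai.utilities import YamlParser

content = "description: Research {topic} with the {{tool}} placeholder"
parsed = YamlParser.parse(io.StringIO(content))
# Single braces are doubled; already-doubled ones are left intact:
assert parsed == "description: Research {{topic}} with the {{tool}} placeholder"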

View File

@@ -1,5 +1,6 @@
from typing import List, Optional from typing import Any, List, Optional
from langchain_openai import ChatOpenAI
from pydantic import BaseModel from pydantic import BaseModel
from crewai.agent import Agent from crewai.agent import Agent
@@ -11,17 +12,27 @@ class PlannerTaskPydanticOutput(BaseModel):
class CrewPlanner: class CrewPlanner:
def __init__(self, tasks: List[Task]): def __init__(self, tasks: List[Task], planning_agent_llm: Optional[Any] = None):
self.tasks = tasks self.tasks = tasks
def _handle_crew_planning(self) -> Optional[BaseModel]: if planning_agent_llm is None:
self.planning_agent_llm = ChatOpenAI(model="gpt-4o-mini")
else:
self.planning_agent_llm = planning_agent_llm
def _handle_crew_planning(self) -> PlannerTaskPydanticOutput:
"""Handles the Crew planning by creating detailed step-by-step plans for each task.""" """Handles the Crew planning by creating detailed step-by-step plans for each task."""
planning_agent = self._create_planning_agent() planning_agent = self._create_planning_agent()
tasks_summary = self._create_tasks_summary() tasks_summary = self._create_tasks_summary()
planner_task = self._create_planner_task(planning_agent, tasks_summary) planner_task = self._create_planner_task(planning_agent, tasks_summary)
return planner_task.execute_sync().pydantic result = planner_task.execute_sync()
if isinstance(result.pydantic, PlannerTaskPydanticOutput):
return result.pydantic
raise ValueError("Failed to get the Planning output")
def _create_planning_agent(self) -> Agent: def _create_planning_agent(self) -> Agent:
"""Creates the planning agent for the crew planning.""" """Creates the planning agent for the crew planning."""
@@ -32,6 +43,7 @@ class CrewPlanner:
"available to each agent so that they can perform the tasks in an exemplary manner" "available to each agent so that they can perform the tasks in an exemplary manner"
), ),
backstory="Planner agent for crew planning", backstory="Planner agent for crew planning",
llm=self.planning_agent_llm,
) )
def _create_planner_task(self, planning_agent: Agent, tasks_summary: str) -> Task: def _create_planner_task(self, planning_agent: Agent, tasks_summary: str) -> Task:
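A hypothetical usage sketch of the new injection point; the import path, Ollama model name, and task list are illustrative, and omitting planning_agent_llm keeps the ChatOpenAI(model="gpt-4o-mini") default:

from langchain_community.chat_models import ChatOllama   # assumed available

from crewai.utilities.planning_handler import CrewPlanner  # path assumed

my_tasks: list = []   # replace with real Task objects before running

planner = CrewPlanner(
    tasks=my_tasks,
    planning_agent_llm=ChatOllama(model="llama3"),  # overrides the default
)
plan = planner._handle_crew_planning()  # raises ValueError without pydantic output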

View File

@@ -16,11 +16,13 @@ class PydanticSchemaParser(BaseModel):
return self._get_model_schema(self.model) return self._get_model_schema(self.model)
def _get_model_schema(self, model, depth=0) -> str: def _get_model_schema(self, model, depth=0) -> str:
lines = [] indent = " " * depth
lines = [f"{indent}{{"]
for field_name, field in model.model_fields.items(): for field_name, field in model.model_fields.items():
field_type_str = self._get_field_type(field, depth + 1) field_type_str = self._get_field_type(field, depth + 1)
lines.append(f"{' ' * 4 * depth}- {field_name}: {field_type_str}") lines.append(f"{indent} {field_name}: {field_type_str},")
lines[-1] = lines[-1].rstrip(",") # Remove trailing comma from last item
lines.append(f"{indent}}}")
return "\n".join(lines) return "\n".join(lines)
def _get_field_type(self, field, depth) -> str: def _get_field_type(self, field, depth) -> str:
@@ -35,6 +37,6 @@ class PydanticSchemaParser(BaseModel):
else: else:
return f"List[{list_item_type.__name__}]" return f"List[{list_item_type.__name__}]"
elif issubclass(field_type, BaseModel): elif issubclass(field_type, BaseModel):
return f"\n{self._get_model_schema(field_type, depth)}" return self._get_model_schema(field_type, depth)
else: else:
return field_type.__name__ return field_type.__name__
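The reworked parser emits a brace-delimited, JSON-like schema instead of the old bullet list; a sketch of the output for a small illustrative model (exact whitespace inferred from the diff):

from pydantic import BaseModel

from crewai.utilities.pydantic_schema_parser import PydanticSchemaParser


class Evaluation(BaseModel):
    quality: int
    suggestions: str


print(PydanticSchemaParser(model=Evaluation).get_schema())
# {
#     quality: int,
#     suggestions: str
# }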

View File

@@ -10,24 +10,24 @@ from crewai.agents.agent_builder.utilities.base_token_process import TokenProces
class TokenCalcHandler(BaseCallbackHandler): class TokenCalcHandler(BaseCallbackHandler):
model_name: str = "" model_name: str = ""
token_cost_process: TokenProcess token_cost_process: TokenProcess
encoding: tiktoken.Encoding
def __init__(self, model_name, token_cost_process): def __init__(self, model_name, token_cost_process):
self.model_name = model_name self.model_name = model_name
self.token_cost_process = token_cost_process self.token_cost_process = token_cost_process
try:
self.encoding = tiktoken.encoding_for_model(self.model_name)
except KeyError:
self.encoding = tiktoken.get_encoding("cl100k_base")
def on_llm_start( def on_llm_start(
self, serialized: Dict[str, Any], prompts: List[str], **kwargs: Any self, serialized: Dict[str, Any], prompts: List[str], **kwargs: Any
) -> None: ) -> None:
try:
encoding = tiktoken.encoding_for_model(self.model_name)
except KeyError:
encoding = tiktoken.get_encoding("cl100k_base")
if self.token_cost_process is None: if self.token_cost_process is None:
return return
for prompt in prompts: for prompt in prompts:
self.token_cost_process.sum_prompt_tokens(len(encoding.encode(prompt))) self.token_cost_process.sum_prompt_tokens(len(self.encoding.encode(prompt)))
async def on_llm_new_token(self, token: str, **kwargs) -> None: async def on_llm_new_token(self, token: str, **kwargs) -> None:
self.token_cost_process.sum_completion_tokens(1) self.token_cost_process.sum_completion_tokens(1)
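The encoding is now resolved once in __init__ rather than on every on_llm_start call, so an unknown model name can no longer raise mid-callback (the suppressed error behind the flaky test). The fallback logic in isolation:

import tiktoken


def resolve_encoding(model_name: str) -> tiktoken.Encoding:
    try:
        return tiktoken.encoding_for_model(model_name)
    except KeyError:                     # unknown or custom model names
        return tiktoken.get_encoding("cl100k_base")


assert resolve_encoding("definitely-not-a-model").name == "cl100k_base"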

View File

@@ -7,6 +7,7 @@ import pytest
from langchain.tools import tool from langchain.tools import tool
from langchain_core.exceptions import OutputParserException from langchain_core.exceptions import OutputParserException
from langchain_openai import ChatOpenAI from langchain_openai import ChatOpenAI
from langchain.schema import AgentAction
from crewai import Agent, Crew, Task from crewai import Agent, Crew, Task
from crewai.agents.cache import CacheHandler from crewai.agents.cache import CacheHandler
@@ -397,7 +398,7 @@ def test_agent_moved_on_after_max_iterations():
) )
task = Task( task = Task(
description="The final answer is 42. But don't give it yet, instead keep using the `get_final_answer` tool over and over until you're told you can give yout final answer.", description="The final answer is 42. But don't give it yet, instead keep using the `get_final_answer` tool over and over until you're told you can give your final answer.",
expected_output="The final answer", expected_output="The final answer",
) )
output = agent.execute_task( output = agent.execute_task(
@@ -948,7 +949,7 @@ def test_agent_use_trained_data(crew_training_handler):
crew_training_handler().load.return_value = { crew_training_handler().load.return_value = {
agent.role: { agent.role: {
"suggestions": [ "suggestions": [
"The result of the math operatio must be right.", "The result of the math operation must be right.",
"Result must be better than 1.", "Result must be better than 1.",
] ]
} }
@@ -958,7 +959,7 @@ def test_agent_use_trained_data(crew_training_handler):
assert ( assert (
result == "What is 1 + 1?You MUST follow these feedbacks: \n " result == "What is 1 + 1?You MUST follow these feedbacks: \n "
"The result of the math operatio must be right.\n - Result must be better than 1." "The result of the math operation must be right.\n - Result must be better than 1."
) )
crew_training_handler.assert_has_calls( crew_training_handler.assert_has_calls(
[mock.call(), mock.call("trained_agents_data.pkl"), mock.call().load()] [mock.call(), mock.call("trained_agents_data.pkl"), mock.call().load()]
@@ -1014,3 +1015,75 @@ def test_agent_max_retry_limit():
), ),
] ]
) )
@pytest.mark.vcr(filter_headers=["authorization"])
def test_handle_context_length_exceeds_limit():
agent = Agent(
role="test role",
goal="test goal",
backstory="test backstory",
)
original_action = AgentAction(
tool="test_tool", tool_input="test_input", log="test_log"
)
with patch.object(
CrewAgentExecutor, "_iter_next_step", wraps=agent.agent_executor._iter_next_step
) as private_mock:
task = Task(
description="The final answer is 42. But don't give it yet, instead keep using the `get_final_answer` tool.",
expected_output="The final answer",
)
agent.execute_task(
task=task,
)
private_mock.assert_called_once()
with patch("crewai.agents.executor.click") as mock_prompt:
mock_prompt.return_value = "y"
with patch.object(
CrewAgentExecutor, "_handle_context_length"
) as mock_handle_context:
mock_handle_context.side_effect = ValueError(
"Context length limit exceeded"
)
long_input = "This is a very long input. " * 10000
# Attempt to handle context length, expecting the mocked error
with pytest.raises(ValueError) as excinfo:
agent.agent_executor._handle_context_length(
[(original_action, long_input)]
)
assert "Context length limit exceeded" in str(excinfo.value)
mock_handle_context.assert_called_once()
@pytest.mark.vcr(filter_headers=["authorization"])
def test_handle_context_length_exceeds_limit_cli_no():
agent = Agent(
role="test role",
goal="test goal",
backstory="test backstory",
)
task = Task(description="test task", agent=agent, expected_output="test output")
with patch.object(
CrewAgentExecutor, "_iter_next_step", wraps=agent.agent_executor._iter_next_step
) as private_mock:
task = Task(
description="The final answer is 42. But don't give it yet, instead keep using the `get_final_answer` tool.",
expected_output="The final answer",
)
agent.execute_task(
task=task,
)
private_mock.assert_called_once()
with patch("crewai.agents.executor.click") as mock_prompt:
mock_prompt.return_value = "n"
pytest.raises(SystemExit)
with patch.object(
CrewAgentExecutor, "_handle_context_length"
) as mock_handle_context:
mock_handle_context.assert_not_called()

View File

@@ -0,0 +1,181 @@
interactions:
- request:
body: '{"messages": [{"content": "You are test role. test backstory\nYour personal
goal is: test goalTo give my best complete final answer to the task use the
exact following format:\n\nThought: I now can give a great answer\nFinal Answer:
my best complete final answer to the task.\nYour final answer must be the great
and the most complete as possible, it must be outcome described.\n\nI MUST use
these formats, my job depends on it!\nCurrent Task: The final answer is 42.
But don''t give it yet, instead keep using the `get_final_answer` tool.\n\nThis
is the expect criteria for your final answer: The final answer \n you MUST return
the actual complete content as the final answer, not a summary.\n\nBegin! This
is VERY important to you, use the tools available and give your best Final Answer,
your job depends on it!\n\nThought:\n", "role": "user"}], "model": "gpt-4o",
"n": 1, "stop": ["\nObservation"], "stream": true, "temperature": 0.7}'
headers:
accept:
- application/json
accept-encoding:
- gzip, deflate, br
connection:
- keep-alive
content-length:
- '938'
content-type:
- application/json
host:
- api.openai.com
user-agent:
- OpenAI/Python 1.35.10
x-stainless-arch:
- arm64
x-stainless-async:
- 'false'
x-stainless-lang:
- python
x-stainless-os:
- MacOS
x-stainless-package-version:
- 1.35.10
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.11.5
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
body:
string: 'data: {"id":"chatcmpl-9rDmhDy41qR9B2jdH1zkXoxv4LMn6","object":"chat.completion.chunk","created":1722471399,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_4e2b2da518","choices":[{"index":0,"delta":{"role":"assistant","content":""},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9rDmhDy41qR9B2jdH1zkXoxv4LMn6","object":"chat.completion.chunk","created":1722471399,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_4e2b2da518","choices":[{"index":0,"delta":{"content":"Thought"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9rDmhDy41qR9B2jdH1zkXoxv4LMn6","object":"chat.completion.chunk","created":1722471399,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_4e2b2da518","choices":[{"index":0,"delta":{"content":":"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9rDmhDy41qR9B2jdH1zkXoxv4LMn6","object":"chat.completion.chunk","created":1722471399,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_4e2b2da518","choices":[{"index":0,"delta":{"content":"
I"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9rDmhDy41qR9B2jdH1zkXoxv4LMn6","object":"chat.completion.chunk","created":1722471399,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_4e2b2da518","choices":[{"index":0,"delta":{"content":"
now"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9rDmhDy41qR9B2jdH1zkXoxv4LMn6","object":"chat.completion.chunk","created":1722471399,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_4e2b2da518","choices":[{"index":0,"delta":{"content":"
can"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9rDmhDy41qR9B2jdH1zkXoxv4LMn6","object":"chat.completion.chunk","created":1722471399,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_4e2b2da518","choices":[{"index":0,"delta":{"content":"
give"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9rDmhDy41qR9B2jdH1zkXoxv4LMn6","object":"chat.completion.chunk","created":1722471399,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_4e2b2da518","choices":[{"index":0,"delta":{"content":"
a"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9rDmhDy41qR9B2jdH1zkXoxv4LMn6","object":"chat.completion.chunk","created":1722471399,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_4e2b2da518","choices":[{"index":0,"delta":{"content":"
great"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9rDmhDy41qR9B2jdH1zkXoxv4LMn6","object":"chat.completion.chunk","created":1722471399,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_4e2b2da518","choices":[{"index":0,"delta":{"content":"
answer"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9rDmhDy41qR9B2jdH1zkXoxv4LMn6","object":"chat.completion.chunk","created":1722471399,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_4e2b2da518","choices":[{"index":0,"delta":{"content":"\n"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9rDmhDy41qR9B2jdH1zkXoxv4LMn6","object":"chat.completion.chunk","created":1722471399,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_4e2b2da518","choices":[{"index":0,"delta":{"content":"Final"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9rDmhDy41qR9B2jdH1zkXoxv4LMn6","object":"chat.completion.chunk","created":1722471399,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_4e2b2da518","choices":[{"index":0,"delta":{"content":"
Answer"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9rDmhDy41qR9B2jdH1zkXoxv4LMn6","object":"chat.completion.chunk","created":1722471399,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_4e2b2da518","choices":[{"index":0,"delta":{"content":":"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9rDmhDy41qR9B2jdH1zkXoxv4LMn6","object":"chat.completion.chunk","created":1722471399,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_4e2b2da518","choices":[{"index":0,"delta":{"content":"
The"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9rDmhDy41qR9B2jdH1zkXoxv4LMn6","object":"chat.completion.chunk","created":1722471399,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_4e2b2da518","choices":[{"index":0,"delta":{"content":"
final"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9rDmhDy41qR9B2jdH1zkXoxv4LMn6","object":"chat.completion.chunk","created":1722471399,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_4e2b2da518","choices":[{"index":0,"delta":{"content":"
answer"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9rDmhDy41qR9B2jdH1zkXoxv4LMn6","object":"chat.completion.chunk","created":1722471399,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_4e2b2da518","choices":[{"index":0,"delta":{"content":"
is"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9rDmhDy41qR9B2jdH1zkXoxv4LMn6","object":"chat.completion.chunk","created":1722471399,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_4e2b2da518","choices":[{"index":0,"delta":{"content":"
"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9rDmhDy41qR9B2jdH1zkXoxv4LMn6","object":"chat.completion.chunk","created":1722471399,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_4e2b2da518","choices":[{"index":0,"delta":{"content":"42"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9rDmhDy41qR9B2jdH1zkXoxv4LMn6","object":"chat.completion.chunk","created":1722471399,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_4e2b2da518","choices":[{"index":0,"delta":{"content":"."},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9rDmhDy41qR9B2jdH1zkXoxv4LMn6","object":"chat.completion.chunk","created":1722471399,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_4e2b2da518","choices":[{"index":0,"delta":{},"logprobs":null,"finish_reason":"stop"}]}
data: [DONE]
'
headers:
CF-Cache-Status:
- DYNAMIC
CF-RAY:
- 8ac1a40879b87d1f-LAX
Connection:
- keep-alive
Content-Type:
- text/event-stream; charset=utf-8
Date:
- Thu, 01 Aug 2024 00:16:40 GMT
Server:
- cloudflare
Set-Cookie:
- __cf_bm=MHUl15YVi607cmuLtQ84ESiH30IyJiIW1a40fopQ81w-1722471400-1.0.1.1-OGpq5Ezj6iE0ToM1diQllGb70.J3O_K2De9NbwZPWmW2qN07U20adJ_0yd6PKUNqMdL.xEnLcNAOWVmsfrLUrQ;
path=/; expires=Thu, 01-Aug-24 00:46:40 GMT; domain=.api.openai.com; HttpOnly;
Secure; SameSite=None
- _cfuvid=G2ZVNvfNfFk4DeKyZ7jMYetG7wOasINAGHstrOnuAY8-1722471400129-0.0.1.1-604800000;
path=/; domain=.api.openai.com; HttpOnly; Secure; SameSite=None
Transfer-Encoding:
- chunked
X-Content-Type-Options:
- nosniff
alt-svc:
- h3=":443"; ma=86400
openai-organization:
- crewai-iuxna1
openai-processing-ms:
- '131'
openai-version:
- '2020-10-01'
strict-transport-security:
- max-age=15552000; includeSubDomains; preload
x-ratelimit-limit-requests:
- '10000'
x-ratelimit-limit-tokens:
- '30000000'
x-ratelimit-remaining-requests:
- '9999'
x-ratelimit-remaining-tokens:
- '29999786'
x-ratelimit-reset-requests:
- 6ms
x-ratelimit-reset-tokens:
- 0s
x-request-id:
- req_b68b417b3fe1c67244279551e411b37a
status:
code: 200
message: OK
version: 1

View File

@@ -0,0 +1,162 @@
interactions:
- request:
body: '{"messages": [{"content": "You are test role. test backstory\nYour personal
goal is: test goalTo give my best complete final answer to the task use the
exact following format:\n\nThought: I now can give a great answer\nFinal Answer:
my best complete final answer to the task.\nYour final answer must be the great
and the most complete as possible, it must be outcome described.\n\nI MUST use
these formats, my job depends on it!\nCurrent Task: The final answer is 42.
But don''t give it yet, instead keep using the `get_final_answer` tool.\n\nThis
is the expect criteria for your final answer: The final answer \n you MUST return
the actual complete content as the final answer, not a summary.\n\nBegin! This
is VERY important to you, use the tools available and give your best Final Answer,
your job depends on it!\n\nThought:\n", "role": "user"}], "model": "gpt-4o",
"n": 1, "stop": ["\nObservation"], "stream": true, "temperature": 0.7}'
headers:
accept:
- application/json
accept-encoding:
- gzip, deflate, br
connection:
- keep-alive
content-length:
- '938'
content-type:
- application/json
host:
- api.openai.com
user-agent:
- OpenAI/Python 1.35.10
x-stainless-arch:
- arm64
x-stainless-async:
- 'false'
x-stainless-lang:
- python
x-stainless-os:
- MacOS
x-stainless-package-version:
- 1.35.10
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.11.5
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
body:
string: 'data: {"id":"chatcmpl-9rI1RAFocnugKoDvAxLndHW5uBeU9","object":"chat.completion.chunk","created":1722487689,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_4e2b2da518","choices":[{"index":0,"delta":{"role":"assistant","content":""},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9rI1RAFocnugKoDvAxLndHW5uBeU9","object":"chat.completion.chunk","created":1722487689,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_4e2b2da518","choices":[{"index":0,"delta":{"content":"Thought"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9rI1RAFocnugKoDvAxLndHW5uBeU9","object":"chat.completion.chunk","created":1722487689,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_4e2b2da518","choices":[{"index":0,"delta":{"content":":"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9rI1RAFocnugKoDvAxLndHW5uBeU9","object":"chat.completion.chunk","created":1722487689,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_4e2b2da518","choices":[{"index":0,"delta":{"content":"
I"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9rI1RAFocnugKoDvAxLndHW5uBeU9","object":"chat.completion.chunk","created":1722487689,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_4e2b2da518","choices":[{"index":0,"delta":{"content":"
now"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9rI1RAFocnugKoDvAxLndHW5uBeU9","object":"chat.completion.chunk","created":1722487689,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_4e2b2da518","choices":[{"index":0,"delta":{"content":"
can"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9rI1RAFocnugKoDvAxLndHW5uBeU9","object":"chat.completion.chunk","created":1722487689,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_4e2b2da518","choices":[{"index":0,"delta":{"content":"
give"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9rI1RAFocnugKoDvAxLndHW5uBeU9","object":"chat.completion.chunk","created":1722487689,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_4e2b2da518","choices":[{"index":0,"delta":{"content":"
a"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9rI1RAFocnugKoDvAxLndHW5uBeU9","object":"chat.completion.chunk","created":1722487689,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_4e2b2da518","choices":[{"index":0,"delta":{"content":"
great"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9rI1RAFocnugKoDvAxLndHW5uBeU9","object":"chat.completion.chunk","created":1722487689,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_4e2b2da518","choices":[{"index":0,"delta":{"content":"
answer"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9rI1RAFocnugKoDvAxLndHW5uBeU9","object":"chat.completion.chunk","created":1722487689,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_4e2b2da518","choices":[{"index":0,"delta":{"content":"\n"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9rI1RAFocnugKoDvAxLndHW5uBeU9","object":"chat.completion.chunk","created":1722487689,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_4e2b2da518","choices":[{"index":0,"delta":{"content":"Final"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9rI1RAFocnugKoDvAxLndHW5uBeU9","object":"chat.completion.chunk","created":1722487689,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_4e2b2da518","choices":[{"index":0,"delta":{"content":"
Answer"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9rI1RAFocnugKoDvAxLndHW5uBeU9","object":"chat.completion.chunk","created":1722487689,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_4e2b2da518","choices":[{"index":0,"delta":{"content":":"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9rI1RAFocnugKoDvAxLndHW5uBeU9","object":"chat.completion.chunk","created":1722487689,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_4e2b2da518","choices":[{"index":0,"delta":{"content":"
"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9rI1RAFocnugKoDvAxLndHW5uBeU9","object":"chat.completion.chunk","created":1722487689,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_4e2b2da518","choices":[{"index":0,"delta":{"content":"42"},"logprobs":null,"finish_reason":null}]}
data: {"id":"chatcmpl-9rI1RAFocnugKoDvAxLndHW5uBeU9","object":"chat.completion.chunk","created":1722487689,"model":"gpt-4o-2024-05-13","system_fingerprint":"fp_4e2b2da518","choices":[{"index":0,"delta":{},"logprobs":null,"finish_reason":"stop"}]}
data: [DONE]
'
headers:
CF-Cache-Status:
- DYNAMIC
CF-RAY:
- 8ac331b9eaee2b7f-LAX
Connection:
- keep-alive
Content-Type:
- text/event-stream; charset=utf-8
Date:
- Thu, 01 Aug 2024 04:48:09 GMT
Server:
- cloudflare
Set-Cookie:
- __cf_bm=OXht5zC71vWYFW_z_933m3sZfFS2xBez0DHv93FvT5s-1722487689-1.0.1.1-wE8JTR7MnwUgiiTDppYg8A7zLEiidth.MB0zrwONeAtNWRjKC1tuGf8LZYDlYIHUhqG73syYExpZ.5pZhzJkcg;
path=/; expires=Thu, 01-Aug-24 05:18:09 GMT; domain=.api.openai.com; HttpOnly;
Secure; SameSite=None
- _cfuvid=PAR7y4xRe4VzRT.7GK34Tq5r8vevY6xq0E.i.R40xnU-1722487689562-0.0.1.1-604800000;
path=/; domain=.api.openai.com; HttpOnly; Secure; SameSite=None
Transfer-Encoding:
- chunked
X-Content-Type-Options:
- nosniff
alt-svc:
- h3=":443"; ma=86400
openai-organization:
- crewai-iuxna1
openai-processing-ms:
- '84'
openai-version:
- '2020-10-01'
strict-transport-security:
- max-age=15552000; includeSubDomains; preload
x-ratelimit-limit-requests:
- '10000'
x-ratelimit-limit-tokens:
- '30000000'
x-ratelimit-remaining-requests:
- '9999'
x-ratelimit-remaining-tokens:
- '29999786'
x-ratelimit-reset-requests:
- 6ms
x-ratelimit-reset-tokens:
- 0s
x-request-id:
- req_105dcfc53c9672dea0437249c12c3319
status:
code: 200
message: OK
version: 1

File diff suppressed because it is too large

File diff suppressed because it is too large

View File

@@ -135,29 +135,29 @@ def test_version_command_with_tools(runner):
) )
@mock.patch("crewai.cli.cli.test_crew") @mock.patch("crewai.cli.cli.evaluate_crew")
def test_test_default_iterations(test_crew, runner): def test_test_default_iterations(evaluate_crew, runner):
result = runner.invoke(test) result = runner.invoke(test)
test_crew.assert_called_once_with(3, "gpt-4o-mini") evaluate_crew.assert_called_once_with(3, "gpt-4o-mini")
assert result.exit_code == 0 assert result.exit_code == 0
assert "Testing the crew for 3 iterations with model gpt-4o-mini" in result.output assert "Testing the crew for 3 iterations with model gpt-4o-mini" in result.output
@mock.patch("crewai.cli.cli.test_crew") @mock.patch("crewai.cli.cli.evaluate_crew")
def test_test_custom_iterations(test_crew, runner): def test_test_custom_iterations(evaluate_crew, runner):
result = runner.invoke(test, ["--n_iterations", "5", "--model", "gpt-4o"]) result = runner.invoke(test, ["--n_iterations", "5", "--model", "gpt-4o"])
test_crew.assert_called_once_with(5, "gpt-4o") evaluate_crew.assert_called_once_with(5, "gpt-4o")
assert result.exit_code == 0 assert result.exit_code == 0
assert "Testing the crew for 5 iterations with model gpt-4o" in result.output assert "Testing the crew for 5 iterations with model gpt-4o" in result.output
@mock.patch("crewai.cli.cli.test_crew") @mock.patch("crewai.cli.cli.evaluate_crew")
def test_test_invalid_string_iterations(test_crew, runner): def test_test_invalid_string_iterations(evaluate_crew, runner):
result = runner.invoke(test, ["--n_iterations", "invalid"]) result = runner.invoke(test, ["--n_iterations", "invalid"])
test_crew.assert_not_called() evaluate_crew.assert_not_called()
assert result.exit_code == 2 assert result.exit_code == 2
assert ( assert (
"Usage: test [OPTIONS]\nTry 'test --help' for help.\n\nError: Invalid value for '-n' / '--n_iterations': 'invalid' is not a valid integer.\n" "Usage: test [OPTIONS]\nTry 'test --help' for help.\n\nError: Invalid value for '-n' / '--n_iterations': 'invalid' is not a valid integer.\n"

View File

@@ -3,7 +3,7 @@ from unittest import mock
 import pytest

-from crewai.cli import test_crew
+from crewai.cli import evaluate_crew


 @pytest.mark.parametrize(
@@ -14,13 +14,13 @@ from crewai.cli import test_crew
         (10, "gpt-4"),
     ],
 )
-@mock.patch("crewai.cli.test_crew.subprocess.run")
+@mock.patch("crewai.cli.evaluate_crew.subprocess.run")
 def test_crew_success(mock_subprocess_run, n_iterations, model):
     """Test the crew function for successful execution."""
     mock_subprocess_run.return_value = subprocess.CompletedProcess(
         args=f"poetry run test {n_iterations} {model}", returncode=0
     )

-    result = test_crew.test_crew(n_iterations, model)
+    result = evaluate_crew.evaluate_crew(n_iterations, model)

     mock_subprocess_run.assert_called_once_with(
         ["poetry", "run", "test", str(n_iterations), model],
@@ -31,26 +31,26 @@ def test_crew_success(mock_subprocess_run, n_iterations, model):
     assert result is None

-@mock.patch("crewai.cli.test_crew.click")
+@mock.patch("crewai.cli.evaluate_crew.click")
 def test_test_crew_zero_iterations(click):
-    test_crew.test_crew(0, "gpt-4o")
+    evaluate_crew.evaluate_crew(0, "gpt-4o")
     click.echo.assert_called_once_with(
         "An unexpected error occurred: The number of iterations must be a positive integer.",
         err=True,
     )

-@mock.patch("crewai.cli.test_crew.click")
+@mock.patch("crewai.cli.evaluate_crew.click")
 def test_test_crew_negative_iterations(click):
-    test_crew.test_crew(-2, "gpt-4o")
+    evaluate_crew.evaluate_crew(-2, "gpt-4o")
     click.echo.assert_called_once_with(
         "An unexpected error occurred: The number of iterations must be a positive integer.",
         err=True,
     )

-@mock.patch("crewai.cli.test_crew.click")
-@mock.patch("crewai.cli.test_crew.subprocess.run")
+@mock.patch("crewai.cli.evaluate_crew.click")
+@mock.patch("crewai.cli.evaluate_crew.subprocess.run")
 def test_test_crew_called_process_error(mock_subprocess_run, click):
     n_iterations = 5
     mock_subprocess_run.side_effect = subprocess.CalledProcessError(
@@ -59,7 +59,7 @@ def test_test_crew_called_process_error(mock_subprocess_run, click):
         output="Error",
         stderr="Some error occurred",
     )
-    test_crew.test_crew(n_iterations, "gpt-4o")
+    evaluate_crew.evaluate_crew(n_iterations, "gpt-4o")

     mock_subprocess_run.assert_called_once_with(
         ["poetry", "run", "test", "5", "gpt-4o"],
@@ -78,13 +78,13 @@ def test_test_crew_called_process_error(mock_subprocess_run, click):
     )

-@mock.patch("crewai.cli.test_crew.click")
-@mock.patch("crewai.cli.test_crew.subprocess.run")
+@mock.patch("crewai.cli.evaluate_crew.click")
+@mock.patch("crewai.cli.evaluate_crew.subprocess.run")
 def test_test_crew_unexpected_exception(mock_subprocess_run, click):
     # Arrange
     n_iterations = 5
     mock_subprocess_run.side_effect = Exception("Unexpected error")
-    test_crew.test_crew(n_iterations, "gpt-4o")
+    evaluate_crew.evaluate_crew(n_iterations, "gpt-4o")

     mock_subprocess_run.assert_called_once_with(
         ["poetry", "run", "test", "5", "gpt-4o"],

View File

@@ -8,7 +8,6 @@ from unittest.mock import MagicMock, patch
 import pydantic_core
 import pytest

 from crewai.agent import Agent
 from crewai.agents.cache import CacheHandler
 from crewai.crew import Crew
@@ -69,7 +68,7 @@ def test_crew_config_conditional_requirement():
"agent": "Senior Researcher", "agent": "Senior Researcher",
}, },
{ {
"description": "Write a 1 amazing paragraph highlight for each idead that showcases how good an article about this topic could be, check references if necessary or search for more content but make sure it's unique, interesting and well written. Return the list of ideas with their paragraph and your notes.", "description": "Write a 1 amazing paragraph highlight for each idea that showcases how good an article about this topic could be, check references if necessary or search for more content but make sure it's unique, interesting and well written. Return the list of ideas with their paragraph and your notes.",
"expected_output": "A 4 paragraph article about AI.", "expected_output": "A 4 paragraph article about AI.",
"agent": "Senior Writer", "agent": "Senior Writer",
}, },
@@ -657,7 +656,7 @@ def test_sequential_async_task_execution_completion():
     sequential_result = sequential_crew.kickoff()
     assert sequential_result.raw.startswith(
-        "**The Evolution of Artificial Intelligence: A Journey Through Milestones**"
+        "The history of artificial intelligence (AI) is marked by several pivotal events that have shaped its evolution and impact on various sectors."
     )
@@ -1189,7 +1188,7 @@ def test_task_with_no_arguments():
     )

     task = Task(
-        description="Look at the available data nd give me a sense on the total number of sales.",
+        description="Look at the available data and give me a sense on the total number of sales.",
         expected_output="The total number of sales as an integer",
         agent=researcher,
     )
@@ -1236,7 +1235,7 @@ def test_delegation_is_not_enabled_if_there_are_only_one_agent():
     )

     task = Task(
-        description="Look at the available data nd give me a sense on the total number of sales.",
+        description="Look at the available data and give me a sense on the total number of sales.",
         expected_output="The total number of sales as an integer",
         agent=researcher,
     )
@@ -1312,14 +1311,14 @@ def test_agent_usage_metrics_are_captured_for_hierarchical_process():
     )

     result = crew.kickoff()
-    assert result.raw == '"Howdy!"'
+    assert result.raw == "Howdy!"

     print(crew.usage_metrics)
     assert crew.usage_metrics == {
-        "total_tokens": 311,
-        "prompt_tokens": 224,
-        "completion_tokens": 87,
+        "total_tokens": 219,
+        "prompt_tokens": 201,
+        "completion_tokens": 18,
         "successful_requests": 1,
     }
@@ -1356,28 +1355,66 @@ def test_hierarchical_crew_creation_tasks_with_agents():
 @pytest.mark.vcr(filter_headers=["authorization"])
 def test_hierarchical_crew_creation_tasks_with_async_execution():
+    """
+    Agents are not required for tasks in a hierarchical process but sometimes they are still added
+    This test makes sure that the manager still delegates the task to the agent even if the agent is passed in the task
+    """
     from langchain_openai import ChatOpenAI

     task = Task(
-        description="Come up with a list of 5 interesting ideas to explore for an article, then write one amazing paragraph highlight for each idea that showcases how good an article about this topic could be. Return the list of ideas with their paragraph and your notes.",
-        expected_output="5 bullet points with a paragraph for each idea.",
-        async_execution=True,  # should throw an error
+        description="Write one amazing paragraph about AI.",
+        expected_output="A single paragraph with 4 sentences.",
+        agent=writer,
+        async_execution=True,
     )

-    with pytest.raises(pydantic_core._pydantic_core.ValidationError) as exec_info:
-        Crew(
-            tasks=[task],
-            agents=[researcher],
-            process=Process.hierarchical,
-            manager_llm=ChatOpenAI(model="gpt-4o"),
-        )
-    assert (
-        exec_info.value.errors()[0]["type"] == "async_execution_in_hierarchical_process"
-    )
-    assert (
-        "Hierarchical process error: Tasks cannot be flagged with async_execution."
-        in exec_info.value.errors()[0]["msg"]
-    )
+    crew = Crew(
+        tasks=[task],
+        agents=[writer, researcher, ceo],
+        process=Process.hierarchical,
+        manager_llm=ChatOpenAI(model="gpt-4o"),
+    )
+
+    crew.kickoff()
+
+    assert crew.manager_agent is not None
+    assert crew.manager_agent.tools is not None
+    assert crew.manager_agent.tools[0].description.startswith(
+        "Delegate a specific task to one of the following coworkers: Senior Writer\n"
+    )
+
+
+@pytest.mark.vcr(filter_headers=["authorization"])
+def test_hierarchical_crew_creation_tasks_with_sync_last():
+    """
+    Agents are not required for tasks in a hierarchical process but sometimes they are still added
+    This test makes sure that the manager still delegates the task to the agent even if the agent is passed in the task
+    """
+    from langchain_openai import ChatOpenAI
+
+    task = Task(
+        description="Write one amazing paragraph about AI.",
+        expected_output="A single paragraph with 4 sentences.",
+        agent=writer,
+        async_execution=True,
+    )
+    task2 = Task(
+        description="Write one amazing paragraph about AI.",
+        expected_output="A single paragraph with 4 sentences.",
+        async_execution=False,
+    )
+    crew = Crew(
+        tasks=[task, task2],
+        agents=[writer, researcher, ceo],
+        process=Process.hierarchical,
+        manager_llm=ChatOpenAI(model="gpt-4o"),
+    )
+    crew.kickoff()
+
+    assert crew.manager_agent is not None
+    assert crew.manager_agent.tools is not None
+    assert crew.manager_agent.tools[0].description.startswith(
+        "Delegate a specific task to one of the following coworkers: Senior Writer, Researcher, CEO\n"
+    )
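Read together, the two tests suggest a scoping rule: a task that names an agent narrows the manager's delegation tool to that agent's role, while any agentless task widens it to every agent in the crew. A hypothetical helper (not crewAI source) illustrating only the asserted string format:

def delegation_tool_description(coworkers: list[str]) -> str:
    # Format taken verbatim from the startswith assertions above.
    return (
        "Delegate a specific task to one of the following coworkers: "
        + ", ".join(coworkers)
        + "\n"
    )


assert delegation_tool_description(["Senior Writer"]).startswith(
    "Delegate a specific task to one of the following coworkers: Senior Writer\n"
)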
@@ -1561,16 +1598,16 @@ def test_tools_with_custom_caching():
     writer1 = Agent(
         role="Writer",
-        goal="You write lesssons of math for kids.",
-        backstory="You're an expert in writting and you love to teach kids but you know nothing of math.",
+        goal="You write lessons of math for kids.",
+        backstory="You're an expert in writing and you love to teach kids but you know nothing of math.",
         tools=[multiplcation_tool],
         allow_delegation=False,
     )
     writer2 = Agent(
         role="Writer",
-        goal="You write lesssons of math for kids.",
-        backstory="You're an expert in writting and you love to teach kids but you know nothing of math.",
+        goal="You write lessons of math for kids.",
+        backstory="You're an expert in writing and you love to teach kids but you know nothing of math.",
         tools=[multiplcation_tool],
         allow_delegation=False,
     )

View File

@@ -23,10 +23,7 @@ def short_term_memory():
expected_output="A list of relevant URLs based on the search query.", expected_output="A list of relevant URLs based on the search query.",
agent=agent, agent=agent,
) )
return ShortTermMemory(crew=Crew( return ShortTermMemory(crew=Crew(agents=[agent], tasks=[task]))
agents=[agent],
tasks=[task]
))
@pytest.mark.vcr(filter_headers=["authorization"]) @pytest.mark.vcr(filter_headers=["authorization"])
@@ -38,7 +35,11 @@ def test_save_and_search(short_term_memory):
agent="test_agent", agent="test_agent",
metadata={"task": "test_task"}, metadata={"task": "test_task"},
) )
short_term_memory.save(memory) short_term_memory.save(
value=memory.data,
metadata=memory.metadata,
agent=memory.agent,
)
find = short_term_memory.search("test value", score_threshold=0.01)[0] find = short_term_memory.search("test value", score_threshold=0.01)[0]
assert find["context"] == memory.data, "Data value mismatch." assert find["context"] == memory.data, "Data value mismatch."

View File

@@ -5,13 +5,12 @@ import json
 from unittest.mock import MagicMock, patch

 import pytest
-from pydantic import BaseModel
-from pydantic_core import ValidationError

 from crewai import Agent, Crew, Process, Task
 from crewai.tasks.conditional_task import ConditionalTask
 from crewai.tasks.task_output import TaskOutput
 from crewai.utilities.converter import Converter
+from pydantic import BaseModel
+from pydantic_core import ValidationError


 def test_task_tool_reflect_agent_tools():
@@ -110,7 +109,7 @@ def test_task_callback():
     task_completed.assert_called_once_with(task.output)

-def test_task_callback_returns_task_ouput():
+def test_task_callback_returns_task_output():
     from crewai.tasks.output_format import OutputFormat

     researcher = Agent(

View File

@@ -84,6 +84,10 @@ class TestCrewEvaluator:
             1: [10, 9, 8],
             2: [9, 8, 7],
         }
+        crew_planner.run_execution_times = {
+            1: [24, 45, 66],
+            2: [55, 33, 67],
+        }

         crew_planner.print_crew_evaluation_result()
@@ -98,6 +102,7 @@ class TestCrewEvaluator:
                 mock.call().add_row("Task 2", "9", "8", "8.5"),
                 mock.call().add_row("Task 3", "8", "7", "7.5"),
                 mock.call().add_row("Crew", "9.0", "8.0", "8.5"),
+                mock.call().add_row("Execution Time (s)", "135", "155", "145"),
             ]
         )
         console.assert_has_calls([mock.call(), mock.call().print(table())])
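The expected "Execution Time (s)" row follows directly from the fixture data injected above: each run's per-task times are summed, and the crew column averages the run totals. A worked check of the arithmetic (a sketch, not the evaluator's actual code):

run_execution_times = {1: [24, 45, 66], 2: [55, 33, 67]}

per_run_totals = [sum(times) for times in run_execution_times.values()]  # [135, 155]
crew_average = sum(per_run_totals) / len(per_run_totals)  # 145.0

row = ("Execution Time (s)", *[str(t) for t in per_run_totals], str(int(crew_average)))
assert row == ("Execution Time (s)", "135", "155", "145")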

View File

@@ -56,8 +56,7 @@ def test_evaluate_training_data(converter_mock):
"based on the human feedback\n", "based on the human feedback\n",
model=TrainingTaskEvaluation, model=TrainingTaskEvaluation,
instructions="I'm gonna convert this raw text into valid JSON.\n\nThe json should have the " instructions="I'm gonna convert this raw text into valid JSON.\n\nThe json should have the "
"following structure, with the following keys:\n- suggestions: List[str]\n- " "following structure, with the following keys:\n{\n suggestions: List[str],\n quality: float,\n final_summary: str\n}",
"quality: float\n- final_summary: str",
), ),
mock.call().to_pydantic(), mock.call().to_pydantic(),
] ]
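Since Python concatenates the adjacent string literals, the new instructions value resolves to a single string; for clarity, this split is equivalent to the one asserted above:

instructions = (
    "I'm gonna convert this raw text into valid JSON.\n\n"
    "The json should have the following structure, with the following keys:\n"
    "{\n suggestions: List[str],\n quality: float,\n final_summary: str\n}"
)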

View File

@@ -0,0 +1,266 @@
import json
from unittest.mock import MagicMock, Mock, patch
import pytest
from crewai.utilities.converter import (
Converter,
ConverterError,
convert_to_model,
convert_with_instructions,
create_converter,
get_conversion_instructions,
handle_partial_json,
is_gpt,
validate_model,
)
from pydantic import BaseModel
# Sample Pydantic models for testing
class EmailResponse(BaseModel):
previous_message_content: str
class EmailResponses(BaseModel):
responses: list[EmailResponse]
class SimpleModel(BaseModel):
name: str
age: int
class NestedModel(BaseModel):
id: int
data: SimpleModel
# Fixtures
@pytest.fixture
def mock_agent():
agent = Mock()
agent.function_calling_llm = None
agent.llm = Mock()
return agent
# Tests for convert_to_model
def test_convert_to_model_with_valid_json():
result = '{"name": "John", "age": 30}'
output = convert_to_model(result, SimpleModel, None, None)
assert isinstance(output, SimpleModel)
assert output.name == "John"
assert output.age == 30
def test_convert_to_model_with_invalid_json():
result = '{"name": "John", "age": "thirty"}'
with patch("crewai.utilities.converter.handle_partial_json") as mock_handle:
mock_handle.return_value = "Fallback result"
output = convert_to_model(result, SimpleModel, None, None)
assert output == "Fallback result"
def test_convert_to_model_with_no_model():
result = "Plain text"
output = convert_to_model(result, None, None, None)
assert output == "Plain text"
def test_convert_to_model_with_special_characters():
json_string_test = """
{
"responses": [
{
"previous_message_content": "Hi Tom,\r\n\r\nNiamh has chosen the Mika phonics on"
}
]
}
"""
output = convert_to_model(json_string_test, EmailResponses, None, None)
assert isinstance(output, EmailResponses)
assert len(output.responses) == 1
assert (
output.responses[0].previous_message_content
== "Hi Tom,\r\n\r\nNiamh has chosen the Mika phonics on"
)
def test_convert_to_model_with_escaped_special_characters():
json_string_test = json.dumps(
{
"responses": [
{
"previous_message_content": "Hi Tom,\r\n\r\nNiamh has chosen the Mika phonics on"
}
]
}
)
output = convert_to_model(json_string_test, EmailResponses, None, None)
assert isinstance(output, EmailResponses)
assert len(output.responses) == 1
assert (
output.responses[0].previous_message_content
== "Hi Tom,\r\n\r\nNiamh has chosen the Mika phonics on"
)
def test_convert_to_model_with_multiple_special_characters():
json_string_test = """
{
"responses": [
{
"previous_message_content": "Line 1\r\nLine 2\tTabbed\nLine 3\r\n\rEscaped newline"
}
]
}
"""
output = convert_to_model(json_string_test, EmailResponses, None, None)
assert isinstance(output, EmailResponses)
assert len(output.responses) == 1
assert (
output.responses[0].previous_message_content
== "Line 1\r\nLine 2\tTabbed\nLine 3\r\n\rEscaped newline"
)
# Tests for validate_model
def test_validate_model_pydantic_output():
result = '{"name": "Alice", "age": 25}'
output = validate_model(result, SimpleModel, False)
assert isinstance(output, SimpleModel)
assert output.name == "Alice"
assert output.age == 25
def test_validate_model_json_output():
result = '{"name": "Bob", "age": 40}'
output = validate_model(result, SimpleModel, True)
assert isinstance(output, dict)
assert output == {"name": "Bob", "age": 40}
# Tests for handle_partial_json
def test_handle_partial_json_with_valid_partial():
result = 'Some text {"name": "Charlie", "age": 35} more text'
output = handle_partial_json(result, SimpleModel, False, None)
assert isinstance(output, SimpleModel)
assert output.name == "Charlie"
assert output.age == 35
def test_handle_partial_json_with_invalid_partial(mock_agent):
result = "No valid JSON here"
with patch("crewai.utilities.converter.convert_with_instructions") as mock_convert:
mock_convert.return_value = "Converted result"
output = handle_partial_json(result, SimpleModel, False, mock_agent)
assert output == "Converted result"
# Tests for convert_with_instructions
@patch("crewai.utilities.converter.create_converter")
@patch("crewai.utilities.converter.get_conversion_instructions")
def test_convert_with_instructions_success(
mock_get_instructions, mock_create_converter, mock_agent
):
mock_get_instructions.return_value = "Instructions"
mock_converter = Mock()
mock_converter.to_pydantic.return_value = SimpleModel(name="David", age=50)
mock_create_converter.return_value = mock_converter
result = "Some text to convert"
output = convert_with_instructions(result, SimpleModel, False, mock_agent)
assert isinstance(output, SimpleModel)
assert output.name == "David"
assert output.age == 50
@patch("crewai.utilities.converter.create_converter")
@patch("crewai.utilities.converter.get_conversion_instructions")
def test_convert_with_instructions_failure(
mock_get_instructions, mock_create_converter, mock_agent
):
mock_get_instructions.return_value = "Instructions"
mock_converter = Mock()
mock_converter.to_pydantic.return_value = ConverterError("Conversion failed")
mock_create_converter.return_value = mock_converter
result = "Some text to convert"
with patch("crewai.utilities.converter.Printer") as mock_printer:
output = convert_with_instructions(result, SimpleModel, False, mock_agent)
assert output == result
mock_printer.return_value.print.assert_called_once()
# Tests for get_conversion_instructions
def test_get_conversion_instructions_gpt():
mock_llm = Mock()
mock_llm.openai_api_base = None
with patch("crewai.utilities.converter.is_gpt", return_value=True):
instructions = get_conversion_instructions(SimpleModel, mock_llm)
assert instructions == "I'm gonna convert this raw text into valid JSON."
def test_get_conversion_instructions_non_gpt():
mock_llm = Mock()
with patch("crewai.utilities.converter.is_gpt", return_value=False):
with patch("crewai.utilities.converter.PydanticSchemaParser") as mock_parser:
mock_parser.return_value.get_schema.return_value = "Sample schema"
instructions = get_conversion_instructions(SimpleModel, mock_llm)
assert "Sample schema" in instructions
# Tests for is_gpt
def test_is_gpt_true():
from langchain_openai import ChatOpenAI
mock_llm = Mock(spec=ChatOpenAI)
mock_llm.openai_api_base = None
assert is_gpt(mock_llm) is True
def test_is_gpt_false():
mock_llm = Mock()
assert is_gpt(mock_llm) is False
class CustomConverter(Converter):
pass
def test_create_converter_with_mock_agent():
mock_agent = MagicMock()
mock_agent.get_output_converter.return_value = MagicMock(spec=Converter)
converter = create_converter(
agent=mock_agent,
llm=Mock(),
text="Sample",
model=SimpleModel,
instructions="Convert",
)
assert isinstance(converter, Converter)
mock_agent.get_output_converter.assert_called_once()
def test_create_converter_with_custom_converter():
converter = create_converter(
converter_cls=CustomConverter,
llm=Mock(),
text="Sample",
model=SimpleModel,
instructions="Convert",
)
assert isinstance(converter, CustomConverter)
def test_create_converter_fails_without_agent_or_converter_cls():
with pytest.raises(
ValueError, match="Either agent or converter_cls must be provided"
):
create_converter(
llm=Mock(), text="Sample", model=SimpleModel, instructions="Convert"
)
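A compact usage sketch of the main helper exercised by this new test module, mirroring its first test case exactly (the four-argument signature is taken from the calls above; nothing beyond that is assumed):

from pydantic import BaseModel

from crewai.utilities.converter import convert_to_model


class SimpleModel(BaseModel):
    name: str
    age: int


# Valid JSON is parsed straight into the model; on validation failure the
# tests show convert_to_model falling back to handle_partial_json.
output = convert_to_model('{"name": "John", "age": 30}', SimpleModel, None, None)
assert isinstance(output, SimpleModel) and output.age == 30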

View File

@@ -1,10 +1,11 @@
 from unittest.mock import patch

-from crewai.tasks.task_output import TaskOutput
 import pytest
+from langchain_openai import ChatOpenAI

 from crewai.agent import Agent
 from crewai.task import Task
+from crewai.tasks.task_output import TaskOutput
 from crewai.utilities.planning_handler import CrewPlanner, PlannerTaskPydanticOutput
@@ -28,7 +29,19 @@ class TestCrewPlanner:
                 agent=Agent(role="Agent 3", goal="Goal 3", backstory="Backstory 3"),
             ),
         ]
-        return CrewPlanner(tasks)
+        return CrewPlanner(tasks, None)
+
+    @pytest.fixture
+    def crew_planner_different_llm(self):
+        tasks = [
+            Task(
+                description="Task 1",
+                expected_output="Output 1",
+                agent=Agent(role="Agent 1", goal="Goal 1", backstory="Backstory 1"),
+            )
+        ]
+        planning_agent_llm = ChatOpenAI(model="gpt-3.5-turbo")
+        return CrewPlanner(tasks, planning_agent_llm)

     def test_handle_crew_planning(self, crew_planner):
         with patch.object(Task, "execute_sync") as execute:
@@ -40,7 +53,7 @@
                 ),
             )
             result = crew_planner._handle_crew_planning()
+            assert crew_planner.planning_agent_llm.model_name == "gpt-4o-mini"

             assert isinstance(result, PlannerTaskPydanticOutput)
             assert len(result.list_of_plans_per_task) == len(crew_planner.tasks)
             execute.assert_called_once()
@@ -72,3 +85,22 @@
         assert isinstance(tasks_summary, str)
         assert tasks_summary.startswith("\n Task Number 1 - Task 1")
         assert tasks_summary.endswith('"agent_tools": []\n ')
+
+    def test_handle_crew_planning_different_llm(self, crew_planner_different_llm):
+        with patch.object(Task, "execute_sync") as execute:
+            execute.return_value = TaskOutput(
+                description="Description",
+                agent="agent",
+                pydantic=PlannerTaskPydanticOutput(list_of_plans_per_task=["Plan 1"]),
+            )
+            result = crew_planner_different_llm._handle_crew_planning()
+
+            assert (
+                crew_planner_different_llm.planning_agent_llm.model_name
+                == "gpt-3.5-turbo"
+            )
+            assert isinstance(result, PlannerTaskPydanticOutput)
+            assert len(result.list_of_plans_per_task) == len(
+                crew_planner_different_llm.tasks
+            )
+            execute.assert_called_once()
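Taken together, the two fixtures and their assertions imply that CrewPlanner now takes the planning LLM as its second argument and falls back to gpt-4o-mini when None is passed. A sketch of that contract, inferred only from the assertions above:

from langchain_openai import ChatOpenAI

from crewai.agent import Agent
from crewai.task import Task
from crewai.utilities.planning_handler import CrewPlanner

tasks = [
    Task(
        description="Task 1",
        expected_output="Output 1",
        agent=Agent(role="Agent 1", goal="Goal 1", backstory="Backstory 1"),
    )
]

# Passing None falls back to the default planning model (per the assertion above).
assert CrewPlanner(tasks, None).planning_agent_llm.model_name == "gpt-4o-mini"

# An explicit LLM is kept as-is.
assert (
    CrewPlanner(tasks, ChatOpenAI(model="gpt-3.5-turbo")).planning_agent_llm.model_name
    == "gpt-3.5-turbo"
)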