Compare commits


1 Commit

Author: Eduardo Chiarotti
SHA1: 95626da37e
Message: docs: fix references to annotations
Date: 2024-08-13 12:40:45 -03:00
43 changed files with 362 additions and 676 deletions

.github/ISSUE_TEMPLATE/bug_report.md (new file, 35 lines)

@@ -0,0 +1,35 @@
---
name: Bug report
about: Create a report to help us improve CrewAI
title: "[BUG]"
labels: bug
assignees: ''
---
**Description**
Provide a clear and concise description of what the bug is.

**Steps to Reproduce**
Provide a step-by-step process to reproduce the behavior:

**Expected behavior**
A clear and concise description of what you expected to happen.

**Screenshots/Code snippets**
If applicable, add screenshots or code snippets to help explain your problem.

**Environment Details:**
- **Operating System**: [e.g., Ubuntu 20.04, macOS Catalina, Windows 10]
- **Python Version**: [e.g., 3.8, 3.9, 3.10]
- **crewAI Version**: [e.g., 0.30.11]
- **crewAI Tools Version**: [e.g., 0.2.6]

**Logs**
Include relevant logs or error messages if applicable.

**Possible Solution**
Have a solution in mind? Please suggest it here, or write "None".

**Additional context**
Add any other context about the problem here.


@@ -1,116 +0,0 @@
name: Bug report
description: Create a report to help us improve CrewAI
title: "[BUG]"
labels: ["bug"]
assignees: []
body:
  - type: textarea
    id: description
    attributes:
      label: Description
      description: Provide a clear and concise description of what the bug is.
    validations:
      required: true
  - type: textarea
    id: steps-to-reproduce
    attributes:
      label: Steps to Reproduce
      description: Provide a step-by-step process to reproduce the behavior.
      placeholder: |
        1. Go to '...'
        2. Click on '....'
        3. Scroll down to '....'
        4. See error
    validations:
      required: true
  - type: textarea
    id: expected-behavior
    attributes:
      label: Expected behavior
      description: A clear and concise description of what you expected to happen.
    validations:
      required: true
  - type: textarea
    id: screenshots-code
    attributes:
      label: Screenshots/Code snippets
      description: If applicable, add screenshots or code snippets to help explain your problem.
    validations:
      required: true
  - type: dropdown
    id: os
    attributes:
      label: Operating System
      description: Select the operating system you're using
      options:
        - Ubuntu 20.04
        - Ubuntu 22.04
        - Ubuntu 24.04
        - macOS Catalina
        - macOS Big Sur
        - macOS Monterey
        - macOS Ventura
        - macOS Sonoma
        - Windows 10
        - Windows 11
        - Other (specify in additional context)
    validations:
      required: true
  - type: dropdown
    id: python-version
    attributes:
      label: Python Version
      description: Version of Python your Crew is running on
      options:
        - '3.10'
        - '3.11'
        - '3.12'
        - '3.13'
    validations:
      required: true
  - type: input
    id: crewai-version
    attributes:
      label: crewAI Version
      description: What version of CrewAI are you using
    validations:
      required: true
  - type: input
    id: crewai-tools-version
    attributes:
      label: crewAI Tools Version
      description: What version of CrewAI Tools are you using
    validations:
      required: true
  - type: dropdown
    id: virtual-environment
    attributes:
      label: Virtual Environment
      description: What Virtual Environment are you running your crew in?
      options:
        - Venv
        - Conda
        - Poetry
    validations:
      required: true
  - type: textarea
    id: evidence
    attributes:
      label: Evidence
      description: Include relevant information, logs or error messages. These can be screenshots.
    validations:
      required: true
  - type: textarea
    id: possible-solution
    attributes:
      label: Possible Solution
      description: Have a solution in mind? Please suggest it here, or write "None".
    validations:
      required: true
  - type: textarea
    id: additional-context
    attributes:
      label: Additional context
      description: Add any other context about the problem here.
    validations:
      required: true


@@ -1 +0,0 @@
blank_issues_enabled: false

.github/ISSUE_TEMPLATE/custom.md (new file, 24 lines)

@@ -0,0 +1,24 @@
---
name: Custom issue template
about: Describe this issue template's purpose here.
title: "[DOCS]"
labels: documentation
assignees: ''
---
## Documentation Page
<!-- Provide a link to the documentation page that needs improvement -->
## Description
<!-- Describe what needs to be changed or improved in the documentation -->
## Suggested Changes
<!-- If possible, provide specific suggestions for how to improve the documentation -->
## Additional Context
<!-- Add any other context about the documentation issue here -->
## Checklist
- [ ] I have searched the existing issues to make sure this is not a duplicate
- [ ] I have checked the latest version of the documentation to ensure this hasn't been addressed


@@ -1,65 +0,0 @@
name: Feature request
description: Suggest a new feature for CrewAI
title: "[FEATURE]"
labels: ["feature-request"]
assignees: []
body:
  - type: markdown
    attributes:
      value: |
        Thanks for taking the time to fill out this feature request!
  - type: dropdown
    id: feature-area
    attributes:
      label: Feature Area
      description: Which area of CrewAI does this feature primarily relate to?
      options:
        - Core functionality
        - Agent capabilities
        - Task management
        - Integration with external tools
        - Performance optimization
        - Documentation
        - Other (please specify in additional context)
    validations:
      required: true
  - type: textarea
    id: problem
    attributes:
      label: Is your feature request related to an existing bug? Please link it here.
      description: A link to the bug, or NA if not related to an existing bug.
    validations:
      required: true
  - type: textarea
    id: solution
    attributes:
      label: Describe the solution you'd like
      description: A clear and concise description of what you want to happen.
    validations:
      required: true
  - type: textarea
    id: alternatives
    attributes:
      label: Describe alternatives you've considered
      description: A clear and concise description of any alternative solutions or features you've considered.
    validations:
      required: false
  - type: textarea
    id: context
    attributes:
      label: Additional context
      description: Add any other context, screenshots, or examples about the feature request here.
    validations:
      required: false
  - type: dropdown
    id: willingness-to-contribute
    attributes:
      label: Willingness to Contribute
      description: Would you be willing to contribute to the implementation of this feature?
      options:
        - Yes, I'd be happy to submit a pull request
        - I could provide more detailed specifications
        - I can test the feature once it's implemented
        - No, I'm just suggesting the idea
    validations:
      required: true


@@ -1,23 +0,0 @@
name: Security Checker

on: [pull_request]

jobs:
  security-check:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v4
      - name: Set up Python
        uses: actions/setup-python@v4
        with:
          python-version: "3.11.9"
      - name: Install dependencies
        run: pip install bandit
      - name: Run Bandit
        run: bandit -c pyproject.toml -r src/ -lll


@@ -1,129 +0,0 @@
# Creating a CrewAI Pipeline Project
Welcome to the comprehensive guide for creating a new CrewAI pipeline project. This document will walk you through the steps to create, customize, and run your CrewAI pipeline project, ensuring you have everything you need to get started.
To learn more about CrewAI pipelines, visit the [CrewAI documentation](https://docs.crewai.com/core-concepts/Pipeline/).
## Prerequisites
Before getting started with CrewAI pipelines, make sure that you have installed CrewAI via pip:
```shell
$ pip install crewai crewai-tools
```
The same prerequisites for virtual environments and Code IDEs apply as in regular CrewAI projects.
## Creating a New Pipeline Project
To create a new CrewAI pipeline project, you have two options:
1. For a basic pipeline template:
```shell
$ crewai create pipeline <project_name>
```
2. For a pipeline example that includes a router:
```shell
$ crewai create pipeline --router <project_name>
```
These commands will create a new project folder with the following structure:
```
<project_name>/
├── README.md
├── poetry.lock
├── pyproject.toml
├── src/
│   └── <project_name>/
│       ├── __init__.py
│       ├── main.py
│       ├── crews/
│       │   ├── crew1/
│       │   │   ├── crew1.py
│       │   │   └── config/
│       │   │       ├── agents.yaml
│       │   │       └── tasks.yaml
│       │   └── crew2/
│       │       ├── crew2.py
│       │       └── config/
│       │           ├── agents.yaml
│       │           └── tasks.yaml
│       ├── pipelines/
│       │   ├── __init__.py
│       │   ├── pipeline1.py
│       │   └── pipeline2.py
│       └── tools/
│           ├── __init__.py
│           └── custom_tool.py
└── tests/
```
## Customizing Your Pipeline Project
To customize your pipeline project, you can:
1. Modify the crew files in `src/<project_name>/crews/` to define your agents and tasks for each crew.
2. Modify the pipeline files in `src/<project_name>/pipelines/` to define your pipeline structure.
3. Modify `src/<project_name>/main.py` to set up and run your pipelines.
4. Add your environment variables into the `.env` file.
### Example: Defining a Pipeline
Here's an example of how to define a pipeline in `src/<project_name>/pipelines/normal_pipeline.py`:
```python
from crewai import Pipeline
from crewai.project import PipelineBase
from ..crews.normal_crew import NormalCrew


@PipelineBase
class NormalPipeline:
    def __init__(self):
        # Initialize crews
        self.normal_crew = NormalCrew().crew()

    def create_pipeline(self):
        return Pipeline(
            stages=[
                self.normal_crew
            ]
        )

    async def kickoff(self, inputs):
        pipeline = self.create_pipeline()
        results = await pipeline.kickoff(inputs)
        return results
```
### Annotations
The main annotation you'll use for pipelines is `@PipelineBase`. This annotation is used to decorate your pipeline classes, similar to how `@CrewBase` is used for crews.
## Installing Dependencies
To install the dependencies for your project, use Poetry:
```shell
$ cd <project_name>
$ crewai install
```
## Running Your Pipeline Project
To run your pipeline project, use the following command:
```shell
$ crewai run
```
This will initialize your pipeline and begin task execution as defined in your `main.py` file.
## Deploying Your Pipeline Project
Pipelines can be deployed in the same way as regular CrewAI projects. The easiest way is through [CrewAI+](https://www.crewai.com/crewaiplus), where you can deploy your pipeline in a few clicks.
Remember, when working with pipelines, you're orchestrating multiple crews to work together in a sequence or parallel fashion. This allows for more complex workflows and information processing tasks.
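The sequence-or-parallel idea can be sketched with plain `asyncio`; the crew coroutines below are hypothetical stand-ins, not the CrewAI API:

```python
import asyncio


# Hypothetical stand-ins for crews; not the CrewAI API.
async def research_crew(inputs: dict) -> dict:
    return {**inputs, "research": "done"}


async def writing_crew(inputs: dict) -> dict:
    return {**inputs, "draft": "done"}


async def run_pipeline(inputs: dict):
    # Sequential stage: the writer consumes the researcher's output.
    researched = await research_crew(inputs)
    # Parallel stage: independent crews run concurrently on the same input.
    return await asyncio.gather(writing_crew(researched), research_crew(researched))


results = asyncio.run(run_pipeline({"topic": "AI"}))
```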


@@ -191,7 +191,8 @@ To install the dependencies for your project, you can use Poetry. First, navigat
 ```shell
 $ cd my_project
-$ crewai install
+$ poetry lock
+$ poetry install
 ```
 
 This will install the dependencies specified in the `pyproject.toml` file.
@@ -232,6 +233,11 @@ To run your project, use the following command:
 ```shell
 $ crewai run
 ```
+or
+```shell
+$ poetry run my_project
+```
 
 This will initialize your crew of AI agents and begin task execution as defined in your configuration in the `main.py` file.
 
 ### Replay Tasks from Latest Crew Kickoff


@@ -8,20 +8,13 @@ Cutting-edge framework for orchestrating role-playing, autonomous AI agents. By
 <div style="width:25%">
 <h2>Getting Started</h2>
 <ul>
-<li>
-<a href='./getting-started/Installing-CrewAI'>
+<li><a href='./getting-started/Installing-CrewAI'>
 Installing CrewAI
 </a>
 </li>
-<li>
-<a href='./getting-started/Start-a-New-CrewAI-Project-Template-Method'>
+<li><a href='./getting-started/Start-a-New-CrewAI-Project-Template-Method'>
 Start a New CrewAI Project: Template Method
 </a>
-</li>
-<li>
-<a href='./getting-started/Create-a-New-CrewAI-Pipeline-Template-Method'>
-Create a New CrewAI Pipeline: Template Method
-</a>
 </li>
 </ul>
 </div>


@@ -129,7 +129,6 @@ nav:
- Processes: 'core-concepts/Processes.md' - Processes: 'core-concepts/Processes.md'
- Crews: 'core-concepts/Crews.md' - Crews: 'core-concepts/Crews.md'
- Collaboration: 'core-concepts/Collaboration.md' - Collaboration: 'core-concepts/Collaboration.md'
- Pipeline: 'core-concepts/Pipeline.md'
- Training: 'core-concepts/Training-Crew.md' - Training: 'core-concepts/Training-Crew.md'
- Memory: 'core-concepts/Memory.md' - Memory: 'core-concepts/Memory.md'
- Planning: 'core-concepts/Planning.md' - Planning: 'core-concepts/Planning.md'

poetry.lock (generated, 61 lines changed)

@@ -1,4 +1,4 @@
-# This file is automatically @generated by Poetry 1.8.3 and should not be changed by hand.
+# This file is automatically @generated by Poetry 1.7.1 and should not be changed by hand.
 
 [[package]]
 name = "agentops"
@@ -829,27 +829,29 @@ name = "crewai-tools"
 version = "0.8.3"
 description = "Set of tools for the crewAI framework"
 optional = false
-python-versions = "<=3.13,>=3.10"
-files = [
-    {file = "crewai_tools-0.8.3-py3-none-any.whl", hash = "sha256:a54a10c36b8403250e13d6594bd37db7e7deb3f9fabc77e8720c081864ae6189"},
-    {file = "crewai_tools-0.8.3.tar.gz", hash = "sha256:f0317ea1d926221b22fcf4b816d71916fe870aa66ed7ee2a0067dba42b5634eb"},
-]
+python-versions = ">=3.10,<=3.13"
+files = []
+develop = false
 
 [package.dependencies]
-beautifulsoup4 = ">=4.12.3,<5.0.0"
-chromadb = ">=0.4.22,<0.5.0"
-docker = ">=7.1.0,<8.0.0"
-docx2txt = ">=0.8,<0.9"
-embedchain = ">=0.1.114,<0.2.0"
-lancedb = ">=0.5.4,<0.6.0"
+beautifulsoup4 = "^4.12.3"
+chromadb = "^0.4.22"
+docker = "^7.1.0"
+docx2txt = "^0.8"
+embedchain = "^0.1.114"
+lancedb = "^0.5.4"
 langchain = ">0.2,<=0.3"
-openai = ">=1.12.0,<2.0.0"
-pydantic = ">=2.6.1,<3.0.0"
-pyright = ">=1.1.350,<2.0.0"
-pytest = ">=8.0.0,<9.0.0"
-pytube = ">=15.0.0,<16.0.0"
-requests = ">=2.31.0,<3.0.0"
-selenium = ">=4.18.1,<5.0.0"
+openai = "^1.12.0"
+pydantic = "^2.6.1"
+pyright = "^1.1.350"
+pytest = "^8.0.0"
+pytube = "^15.0.0"
+requests = "^2.31.0"
+selenium = "^4.18.1"
+
+[package.source]
+type = "directory"
+url = "../crewai-tools"
 
 [[package]]
 name = "cssselect2"
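The removed and added requirement lines above express the same constraints in two notations: Poetry's caret shorthand (`^4.12.3`) and the explicit range it expands to (`>=4.12.3,<5.0.0`). A small sketch of the expansion rule for versions whose major component is at least 1 (the `^0.x` rule differs):

```python
def caret_to_range(version: str) -> str:
    """Expand a caret requirement, e.g. "4.12.3" -> ">=4.12.3,<5.0.0".

    Only handles major >= 1; Poetry treats ^0.x.y differently
    (the upper bound then bumps the minor component instead).
    """
    major, minor, patch = version.split(".")
    return f">={major}.{minor}.{patch},<{int(major) + 1}.0.0"
```

For example, `caret_to_range("2.6.1")` gives `">=2.6.1,<3.0.0"`, matching the pydantic pair in the hunk.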
@@ -1319,12 +1321,12 @@ files = [
 google-auth = ">=2.14.1,<3.0.dev0"
 googleapis-common-protos = ">=1.56.2,<2.0.dev0"
 grpcio = [
-    {version = ">=1.33.2,<2.0dev", optional = true, markers = "python_version < \"3.11\" and extra == \"grpc\""},
     {version = ">=1.49.1,<2.0dev", optional = true, markers = "python_version >= \"3.11\" and extra == \"grpc\""},
+    {version = ">=1.33.2,<2.0dev", optional = true, markers = "python_version < \"3.11\" and extra == \"grpc\""},
 ]
 grpcio-status = [
-    {version = ">=1.33.2,<2.0.dev0", optional = true, markers = "python_version < \"3.11\" and extra == \"grpc\""},
     {version = ">=1.49.1,<2.0.dev0", optional = true, markers = "python_version >= \"3.11\" and extra == \"grpc\""},
+    {version = ">=1.33.2,<2.0.dev0", optional = true, markers = "python_version < \"3.11\" and extra == \"grpc\""},
 ]
 proto-plus = ">=1.22.3,<2.0.0dev"
 protobuf = ">=3.19.5,<3.20.0 || >3.20.0,<3.20.1 || >3.20.1,<4.21.0 || >4.21.0,<4.21.1 || >4.21.1,<4.21.2 || >4.21.2,<4.21.3 || >4.21.3,<4.21.4 || >4.21.4,<4.21.5 || >4.21.5,<6.0.0.dev0"
@@ -3626,8 +3628,8 @@ files = [
 
 [package.dependencies]
 numpy = [
-    {version = ">=1.22.4", markers = "python_version < \"3.11\""},
     {version = ">=1.23.2", markers = "python_version == \"3.11\""},
+    {version = ">=1.22.4", markers = "python_version < \"3.11\""},
     {version = ">=1.26.0", markers = "python_version >= \"3.12\""},
 ]
 python-dateutil = ">=2.8.2"
@@ -4025,19 +4027,6 @@ files = [
 {file = "pyarrow-17.0.0-cp312-cp312-win_amd64.whl", hash = "sha256:392bc9feabc647338e6c89267635e111d71edad5fcffba204425a7c8d13610d7"},
 {file = "pyarrow-17.0.0-cp38-cp38-macosx_10_15_x86_64.whl", hash = "sha256:af5ff82a04b2171415f1410cff7ebb79861afc5dae50be73ce06d6e870615204"},
 {file = "pyarrow-17.0.0-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:edca18eaca89cd6382dfbcff3dd2d87633433043650c07375d095cd3517561d8"},
-{file = "pyarrow-17.0.0-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:7c7916bff914ac5d4a8fe25b7a25e432ff921e72f6f2b7547d1e325c1ad9d155"},
-{file = "pyarrow-17.0.0-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:f553ca691b9e94b202ff741bdd40f6ccb70cdd5fbf65c187af132f1317de6145"},
-{file = "pyarrow-17.0.0-cp38-cp38-manylinux_2_28_aarch64.whl", hash = "sha256:0cdb0e627c86c373205a2f94a510ac4376fdc523f8bb36beab2e7f204416163c"},
-{file = "pyarrow-17.0.0-cp38-cp38-manylinux_2_28_x86_64.whl", hash = "sha256:d7d192305d9d8bc9082d10f361fc70a73590a4c65cf31c3e6926cd72b76bc35c"},
-{file = "pyarrow-17.0.0-cp38-cp38-win_amd64.whl", hash = "sha256:02dae06ce212d8b3244dd3e7d12d9c4d3046945a5933d28026598e9dbbda1fca"},
-{file = "pyarrow-17.0.0-cp39-cp39-macosx_10_15_x86_64.whl", hash = "sha256:13d7a460b412f31e4c0efa1148e1d29bdf18ad1411eb6757d38f8fbdcc8645fb"},
-{file = "pyarrow-17.0.0-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:9b564a51fbccfab5a04a80453e5ac6c9954a9c5ef2890d1bcf63741909c3f8df"},
-{file = "pyarrow-17.0.0-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:32503827abbc5aadedfa235f5ece8c4f8f8b0a3cf01066bc8d29de7539532687"},
-{file = "pyarrow-17.0.0-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:a155acc7f154b9ffcc85497509bcd0d43efb80d6f733b0dc3bb14e281f131c8b"},
-{file = "pyarrow-17.0.0-cp39-cp39-manylinux_2_28_aarch64.whl", hash = "sha256:dec8d129254d0188a49f8a1fc99e0560dc1b85f60af729f47de4046015f9b0a5"},
-{file = "pyarrow-17.0.0-cp39-cp39-manylinux_2_28_x86_64.whl", hash = "sha256:a48ddf5c3c6a6c505904545c25a4ae13646ae1f8ba703c4df4a1bfe4f4006bda"},
-{file = "pyarrow-17.0.0-cp39-cp39-win_amd64.whl", hash = "sha256:42bf93249a083aca230ba7e2786c5f673507fa97bbd9725a1e2754715151a204"},
-{file = "pyarrow-17.0.0.tar.gz", hash = "sha256:4beca9521ed2c0921c1023e68d097d0299b62c362639ea315572a58f3f50fd28"},
 ]
 
 [package.dependencies]
@@ -6073,4 +6062,4 @@ tools = ["crewai-tools"]
 
 [metadata]
 lock-version = "2.0"
 python-versions = ">=3.10,<=3.13"
-content-hash = "91ba982ea96ca7be017d536784223d4ef83e86de05d11eb1c3ce0fc1b726f283"
+content-hash = "fc1b510ea9c814db67ac69d2454071b718cb7f6846bd845f7f48561cb0397ce1"


@@ -62,9 +62,6 @@ ignore_missing_imports = true
 disable_error_code = 'import-untyped'
 exclude = ["cli/templates"]
 
-[tool.bandit]
-exclude_dirs = ["src/crewai/cli/templates"]
-
 [build-system]
 requires = ["poetry-core"]
 build-backend = "poetry.core.masonry.api"


@@ -113,11 +113,10 @@ class Agent(BaseAgent):
         description="Maximum number of retries for an agent to execute a task when an error occurs.",
     )
 
-    @model_validator(mode="after")
-    def set_agent_ops_agent_name(self) -> "Agent":
-        """Set agent ops agent name."""
-        self.agent_ops_agent_name = self.role
-        return self
+    def __init__(__pydantic_self__, **data):
+        config = data.pop("config", {})
+        super().__init__(**config, **data)
+        __pydantic_self__.agent_ops_agent_name = __pydantic_self__.role
 
     @model_validator(mode="after")
     def set_agent_executor(self) -> "Agent":
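The added `__init__` pops a `config` dict out of the keyword arguments and feeds its entries to the parent constructor alongside the explicit ones. A self-contained sketch of that merging pattern (plain Python, no pydantic; the class name is hypothetical):

```python
class ConfigurableThing:
    """Sketch of the config-popping constructor pattern from the diff above."""

    def __init__(self, **data):
        config = data.pop("config", {})
        # Note: calling f(**config, **data) raises TypeError if a key appears
        # in both, so config entries and explicit kwargs must not collide.
        for key, value in {**config, **data}.items():
            setattr(self, key, value)


thing = ConfigurableThing(config={"role": "Researcher"}, goal="Find facts")
```

After construction, `thing.role` comes from the config dict and `thing.goal` from the explicit keyword argument.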
@@ -214,7 +213,7 @@ class Agent(BaseAgent):
                 raise e
 
         result = self.execute_task(task, context, tools)
 
-        if self.max_rpm and self._rpm_controller:
+        if self.max_rpm:
             self._rpm_controller.stop_rpm_counter()
 
         # If there was any tool in self.tools_results that had result_as_answer


@@ -7,6 +7,7 @@ from typing import Any, Dict, List, Optional, TypeVar
 from pydantic import (
     UUID4,
     BaseModel,
+    ConfigDict,
     Field,
     InstanceOf,
     PrivateAttr,
@@ -73,17 +74,12 @@ class BaseAgent(ABC, BaseModel):
     """
 
     __hash__ = object.__hash__  # type: ignore
-    _logger: Logger = PrivateAttr(default_factory=lambda: Logger(verbose=False))
-    _rpm_controller: Optional[RPMController] = PrivateAttr(default=None)
+    _logger: Logger = PrivateAttr()
+    _rpm_controller: RPMController = PrivateAttr(default=None)
     _request_within_rpm_limit: Any = PrivateAttr(default=None)
-    _original_role: Optional[str] = PrivateAttr(default=None)
-    _original_goal: Optional[str] = PrivateAttr(default=None)
-    _original_backstory: Optional[str] = PrivateAttr(default=None)
-    _token_process: TokenProcess = PrivateAttr(default_factory=TokenProcess)
+    formatting_errors: int = 0
+    model_config = ConfigDict(arbitrary_types_allowed=True)
     id: UUID4 = Field(default_factory=uuid.uuid4, frozen=True)
-    formatting_errors: int = Field(
-        default=0, description="Number of formatting errors."
-    )
     role: str = Field(description="Role of the agent")
     goal: str = Field(description="Objective of the agent")
     backstory: str = Field(description="Backstory of the agent")
@@ -127,6 +123,15 @@ class BaseAgent(ABC, BaseModel):
         default=None, description="Maximum number of tokens for the agent's execution."
     )
 
+    _original_role: str | None = None
+    _original_goal: str | None = None
+    _original_backstory: str | None = None
+    _token_process: TokenProcess = TokenProcess()
+
+    def __init__(__pydantic_self__, **data):
+        config = data.pop("config", {})
+        super().__init__(**config, **data)
+
     @model_validator(mode="after")
     def set_config_attributes(self):
         if self.config:
@@ -165,7 +170,7 @@ class BaseAgent(ABC, BaseModel):
     @property
     def key(self):
         source = [self.role, self.goal, self.backstory]
-        return md5("|".join(source).encode(), usedforsecurity=False).hexdigest()
+        return md5("|".join(source).encode()).hexdigest()
 
     @abstractmethod
     def execute_task(


@@ -1,12 +1,13 @@
-from typing import Any, Dict, Optional
-
-from pydantic import BaseModel, PrivateAttr
+from typing import Optional
 
-class CacheHandler(BaseModel):
+class CacheHandler:
     """Callback handler for tool usage."""
 
-    _cache: Dict[str, Any] = PrivateAttr(default_factory=dict)
+    _cache: dict = {}
+
+    def __init__(self):
+        self._cache = {}
 
     def add(self, tool, input, output):
         self._cache[f"{tool}-{input}"] = output
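Both versions implement the same idea: tool results cached under a key built from the tool name and its input. A runnable sketch (the `read` accessor is an assumption for illustration, mirroring how such a cache is typically consulted):

```python
class ToolCache:
    """Sketch of a tool-usage cache keyed by f"{tool}-{input}"."""

    def __init__(self):
        self._cache = {}

    def add(self, tool, input, output):
        self._cache[f"{tool}-{input}"] = output

    def read(self, tool, input):
        # Returns None on a cache miss; `read` is assumed for illustration.
        return self._cache.get(f"{tool}-{input}")


cache = ToolCache()
cache.add("search", "crewai docs", "https://docs.crewai.com")
```

A repeated call with the same tool and input can then be served from memory instead of re-running the tool.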


@@ -1,29 +1,33 @@
 import threading
 import time
 from typing import Any, Dict, Iterator, List, Literal, Optional, Tuple, Union
 
 import click
 from langchain.agents import AgentExecutor
 from langchain.agents.agent import ExceptionTool
 from langchain.callbacks.manager import CallbackManagerForChainRun
-from langchain.chains.summarize import load_summarize_chain
-from langchain.text_splitter import RecursiveCharacterTextSplitter
 from langchain_core.agents import AgentAction, AgentFinish, AgentStep
 from langchain_core.exceptions import OutputParserException
 from langchain_core.tools import BaseTool
 from langchain_core.utils.input import get_color_mapping
 from pydantic import InstanceOf
+from langchain.text_splitter import RecursiveCharacterTextSplitter
+from langchain.chains.summarize import load_summarize_chain
 
 from crewai.agents.agent_builder.base_agent_executor_mixin import CrewAgentExecutorMixin
 from crewai.agents.tools_handler import ToolsHandler
 from crewai.tools.tool_usage import ToolUsage, ToolUsageErrorException
 from crewai.utilities import I18N
 from crewai.utilities.constants import TRAINING_DATA_FILE
 from crewai.utilities.exceptions.context_window_exceeding_exception import (
     LLMContextLengthExceededException,
 )
-from crewai.utilities.logger import Logger
 from crewai.utilities.training_handler import CrewTrainingHandler
+from crewai.utilities.logger import Logger
 
 
 class CrewAgentExecutor(AgentExecutor, CrewAgentExecutorMixin):
@@ -209,7 +213,11 @@ class CrewAgentExecutor(AgentExecutor, CrewAgentExecutorMixin):
                     yield step
                 return
-            raise e
+            yield AgentStep(
+                action=AgentAction("_Exception", str(e), str(e)),
+                observation=str(e),
+            )
+            return
 
         # If the tool chosen is the finishing tool, then we end and return.
         if isinstance(output, AgentFinish):


@@ -8,7 +8,6 @@ from crewai.memory.storage.kickoff_task_outputs_storage import (
 )
 
 from .evaluate_crew import evaluate_crew
-from .install_crew import install_crew
 from .replay_from_task import replay_task_command
 from .reset_memories_command import reset_memories_command
 from .run_crew import run_crew

@@ -166,16 +165,10 @@ def test(n_iterations: int, model: str):
     evaluate_crew(n_iterations, model)
 
-@crewai.command()
-def install():
-    """Install the Crew."""
-    install_crew()
-
 @crewai.command()
 def run():
-    """Run the Crew."""
-    click.echo("Running the Crew")
+    """Run the crew."""
+    click.echo("Running the crew")
     run_crew()


@@ -1,21 +0,0 @@
import subprocess

import click


def install_crew() -> None:
    """
    Install the crew by running the Poetry command to lock and install.
    """
    try:
        subprocess.run(["poetry", "lock"], check=True, capture_output=False, text=True)
        subprocess.run(
            ["poetry", "install"], check=True, capture_output=False, text=True
        )
    except subprocess.CalledProcessError as e:
        click.echo(f"An error occurred while running the crew: {e}", err=True)
        click.echo(e.output, err=True)
    except Exception as e:
        click.echo(f"An unexpected error occurred: {e}", err=True)
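The deleted helper's control flow, running each command with `check=True` and reporting a `CalledProcessError`, can be exercised with a generic command (using `sys.executable` here because Poetry may not be installed):

```python
import subprocess
import sys


def run_step(cmd) -> bool:
    """Run one command; check=True makes a non-zero exit raise
    CalledProcessError, as in the deleted install_crew helper."""
    try:
        subprocess.run(cmd, check=True, capture_output=True, text=True)
        return True
    except subprocess.CalledProcessError as exc:
        print(f"step failed with exit code {exc.returncode}")
        return False


ok = run_step([sys.executable, "-c", "print('ok')"])
failed = run_step([sys.executable, "-c", "raise SystemExit(1)"])
```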


@@ -14,9 +14,12 @@ pip install poetry
 
 Next, navigate to your project directory and install the dependencies:
 
-1. First lock the dependencies and install them by using the CLI command:
+1. First lock the dependencies and then install them:
 ```bash
-crewai install
+poetry lock
+```
+```bash
+poetry install
 ```
 
 ### Customizing

@@ -34,6 +37,10 @@ To kickstart your crew of AI agents and begin task execution, run this from the
 ```bash
 $ crewai run
 ```
+or
+```bash
+poetry run {{folder_name}}
+```
 
 This command initializes the {{name}} Crew, assembling the agents and assigning them tasks as defined in your configuration.


@@ -6,8 +6,7 @@ authors = ["Your Name <you@example.com>"]
 
 [tool.poetry.dependencies]
 python = ">=3.10,<=3.13"
-crewai = { extras = ["tools"], version = ">=0.51.0,<1.0.0" }
+crewai = { extras = ["tools"], version = "^0.51.0" }
 
 [tool.poetry.scripts]
 {{folder_name}} = "{{folder_name}}.main:run"


@@ -15,11 +15,12 @@ pip install poetry
 
 Next, navigate to your project directory and install the dependencies:
 
 1. First lock the dependencies and then install them:
 ```bash
-crewai install
+poetry lock
+```
+```bash
+poetry install
 ```
 
 ### Customizing
 
 **Add your `OPENAI_API_KEY` into the `.env` file**

@@ -34,7 +35,7 @@ crewai install
 To kickstart your crew of AI agents and begin task execution, run this from the root folder of your project:
 
 ```bash
-crewai run
+poetry run {{folder_name}}
 ```
 
 This command initializes the {{name}} Crew, assembling the agents and assigning them tasks as defined in your configuration.

@@ -48,7 +49,6 @@ The {{name}} Crew is composed of multiple AI agents, each with unique roles, goa
 ## Support
 
 For support, questions, or feedback regarding the {{crew_name}} Crew or crewAI.
 - Visit our [documentation](https://docs.crewai.com)
 - Reach out to us through our [GitHub repository](https://github.com/joaomdmoura/crewai)
 - [Join our Discord](https://discord.com/invite/X4JWnZnxPb)


@@ -6,7 +6,7 @@ authors = ["Your Name <you@example.com>"]
 
 [tool.poetry.dependencies]
 python = ">=3.10,<=3.13"
-crewai = { extras = ["tools"], version = ">=0.51.0,<1.0.0" }
+crewai = { extras = ["tools"], version = "^0.51.0" }
 asyncio = "*"
 
 [tool.poetry.scripts]


@@ -16,7 +16,10 @@ Next, navigate to your project directory and install the dependencies:
 1. First lock the dependencies and then install them:
 ```bash
-crewai install
+poetry lock
+```
+```bash
+poetry install
 ```
 ### Customizing
@@ -32,7 +35,7 @@ crewai install
 To kickstart your crew of AI agents and begin task execution, run this from the root folder of your project:
 ```bash
-crewai run
+poetry run {{folder_name}}
 ```
 This command initializes the {{name}} Crew, assembling the agents and assigning them tasks as defined in your configuration.


@@ -6,8 +6,7 @@ authors = ["Your Name <you@example.com>"]
 [tool.poetry.dependencies]
 python = ">=3.10,<=3.13"
-crewai = { extras = ["tools"], version = ">=0.51.0,<1.0.0" }
+crewai = { extras = ["tools"], version = "^0.51.0" }
 [tool.poetry.scripts]
 {{folder_name}} = "{{folder_name}}.main:main"


@@ -1,15 +1,16 @@
 import asyncio
 import json
-import os
 import uuid
 from concurrent.futures import Future
 from hashlib import md5
+import os
 from typing import TYPE_CHECKING, Any, Dict, List, Optional, Tuple, Union
 from langchain_core.callbacks import BaseCallbackHandler
 from pydantic import (
     UUID4,
     BaseModel,
+    ConfigDict,
     Field,
     InstanceOf,
     Json,
@@ -47,10 +48,11 @@ from crewai.utilities.planning_handler import CrewPlanner
 from crewai.utilities.task_output_storage_handler import TaskOutputStorageHandler
 from crewai.utilities.training_handler import CrewTrainingHandler
 agentops = None
 if os.environ.get("AGENTOPS_API_KEY"):
     try:
-        import agentops  # type: ignore
+        import agentops
     except ImportError:
         pass
@@ -104,6 +106,7 @@ class Crew(BaseModel):
     name: Optional[str] = Field(default=None)
     cache: bool = Field(default=True)
+    model_config = ConfigDict(arbitrary_types_allowed=True)
     tasks: List[Task] = Field(default_factory=list)
     agents: List[BaseAgent] = Field(default_factory=list)
     process: Process = Field(default=Process.sequential)
@@ -361,7 +364,7 @@ class Crew(BaseModel):
         source = [agent.key for agent in self.agents] + [
             task.key for task in self.tasks
         ]
-        return md5("|".join(source).encode(), usedforsecurity=False).hexdigest()
+        return md5("|".join(source).encode()).hexdigest()
     def _setup_from_config(self):
         assert self.config is not None, "Config should not be None."
@@ -538,7 +541,7 @@ class Crew(BaseModel):
         )._handle_crew_planning()
         for task, step_plan in zip(self.tasks, result.list_of_plans_per_task):
-            task.description += step_plan.plan
+            task.description += step_plan
     def _store_execution_log(
         self,
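The fingerprint hunk above toggles `usedforsecurity=False` on the MD5 call. As a standalone sketch (the `fingerprint` helper is an illustrative name, not crewAI's API), the stable-key idea looks like this; the flag, available since Python 3.9, marks MD5 as a non-cryptographic checksum so FIPS-restricted builds do not reject the call:

```python
from hashlib import md5

def fingerprint(parts):
    # Join the stable keys and hash them. usedforsecurity=False flags this
    # as a non-cryptographic use of MD5 (checksum/identity, not security).
    return md5("|".join(parts).encode(), usedforsecurity=False).hexdigest()

key = fingerprint(["agent-1", "task-a"])
# The digest is deterministic for the same inputs.
assert key == fingerprint(["agent-1", "task-a"])
```

Both sides of the diff produce the same 32-character hex digest; the flag only affects whether restricted interpreter builds allow the call.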


@@ -6,20 +6,12 @@ def task(func):
     task.registration_order = []
     func.is_task = True
-    memoized_func = memoize(func)
+    wrapped_func = memoize(func)
     # Append the function name to the registration order list
     task.registration_order.append(func.__name__)
-    def wrapper(*args, **kwargs):
-        result = memoized_func(*args, **kwargs)
-        if not result.name:
-            result.name = func.__name__
-        return result
-    return wrapper
+    return wrapped_func
 def agent(func):
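The `memoize` helper used by this decorator is not shown in the hunk. A common sketch of that pattern (assumed here, not necessarily crewAI's exact implementation) caches results by call arguments, so a decorated `@task` method builds its object only once per argument set:

```python
import functools

def memoize(func):
    cache = {}

    @functools.wraps(func)
    def memoized(*args, **kwargs):
        # Key the cache on positional and sorted keyword arguments.
        key = (args, tuple(sorted(kwargs.items())))
        if key not in cache:
            cache[key] = func(*args, **kwargs)
        return cache[key]

    return memoized

calls = []

@memoize
def build(x):
    calls.append(x)
    return {"value": x}

first = build(1)
second = build(1)
assert first is second  # repeat calls return the same cached object
assert calls == [1]     # the underlying function ran only once
```

Memoizing matters here because the surrounding framework may call the same `@task` method repeatedly while wiring up a crew.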


@@ -1,45 +1,56 @@
 import inspect
+import os
 from pathlib import Path
 from typing import Any, Callable, Dict
 import yaml
 from dotenv import load_dotenv
+from pydantic import ConfigDict
 load_dotenv()
 def CrewBase(cls):
     class WrappedClass(cls):
+        model_config = ConfigDict(arbitrary_types_allowed=True)
         is_crew_class: bool = True  # type: ignore
-        # Get the directory of the class being decorated
-        base_directory = Path(inspect.getfile(cls)).parent
+        base_directory = None
+        for frame_info in inspect.stack():
+            if "site-packages" not in frame_info.filename:
+                base_directory = Path(frame_info.filename).parent.resolve()
+                break
         original_agents_config_path = getattr(
             cls, "agents_config", "config/agents.yaml"
         )
         original_tasks_config_path = getattr(cls, "tasks_config", "config/tasks.yaml")
         def __init__(self, *args, **kwargs):
             super().__init__(*args, **kwargs)
-            agents_config_path = self.base_directory / self.original_agents_config_path
-            tasks_config_path = self.base_directory / self.original_tasks_config_path
-            self.agents_config = self.load_yaml(agents_config_path)
-            self.tasks_config = self.load_yaml(tasks_config_path)
+            if self.base_directory is None:
+                raise Exception(
+                    "Unable to dynamically determine the project's base directory, you must run it from the project's root directory."
+                )
+            self.agents_config = self.load_yaml(
+                os.path.join(self.base_directory, self.original_agents_config_path)
+            )
+            self.tasks_config = self.load_yaml(
+                os.path.join(self.base_directory, self.original_tasks_config_path)
+            )
             self.map_all_agent_variables()
             self.map_all_task_variables()
         @staticmethod
-        def load_yaml(config_path: Path):
-            try:
-                with open(config_path, "r") as file:
-                    return yaml.safe_load(file)
-            except FileNotFoundError:
-                print(f"File not found: {config_path}")
-                raise
+        def load_yaml(config_path: str):
+            with open(config_path, "r") as file:
+                # parsedContent = YamlParser.parse(file)  # type: ignore # Argument 1 to "parse" has incompatible type "TextIOWrapper"; expected "YamlParser"
+                return yaml.safe_load(file)
         def _get_all_functions(self):
             return {

@@ -1,24 +1,24 @@
-from typing import Any, Callable, Dict, List, Type, Union
+from typing import Callable, Dict
+from pydantic import ConfigDict
 from crewai.crew import Crew
 from crewai.pipeline.pipeline import Pipeline
 from crewai.routers.router import Router
-PipelineStage = Union[Crew, List[Crew], Router]
 # TODO: Could potentially remove. Need to check with @joao and @gui if this is needed for CrewAI+
-def PipelineBase(cls: Type[Any]) -> Type[Any]:
+def PipelineBase(cls):
     class WrappedClass(cls):
-        is_pipeline_class: bool = True  # type: ignore
-        stages: List[PipelineStage]
-        def __init__(self, *args: Any, **kwargs: Any) -> None:
+        model_config = ConfigDict(arbitrary_types_allowed=True)
+        is_pipeline_class: bool = True
+        def __init__(self, *args, **kwargs):
             super().__init__(*args, **kwargs)
             self.stages = []
             self._map_pipeline_components()
-        def _get_all_functions(self) -> Dict[str, Callable[..., Any]]:
+        def _get_all_functions(self):
             return {
                 name: getattr(self, name)
                 for name in dir(self)
@@ -26,15 +26,15 @@ def PipelineBase(cls: Type[Any]) -> Type[Any]:
             }
         def _filter_functions(
-            self, functions: Dict[str, Callable[..., Any]], attribute: str
-        ) -> Dict[str, Callable[..., Any]]:
+            self, functions: Dict[str, Callable], attribute: str
+        ) -> Dict[str, Callable]:
             return {
                 name: func
                 for name, func in functions.items()
                 if hasattr(func, attribute)
             }
-        def _map_pipeline_components(self) -> None:
+        def _map_pipeline_components(self):
             all_functions = self._get_all_functions()
             crew_functions = self._filter_functions(all_functions, "is_crew")
             router_functions = self._filter_functions(all_functions, "is_router")


@@ -1,26 +1,32 @@
from copy import deepcopy from copy import deepcopy
from typing import Any, Callable, Dict, Tuple from typing import Any, Callable, Dict, Generic, Tuple, TypeVar
from pydantic import BaseModel, Field, PrivateAttr from pydantic import BaseModel, Field, PrivateAttr
T = TypeVar("T", bound=Dict[str, Any])
class Route(BaseModel): U = TypeVar("U")
condition: Callable[[Dict[str, Any]], bool]
pipeline: Any
class Router(BaseModel): class Route(Generic[T, U]):
routes: Dict[str, Route] = Field( condition: Callable[[T], bool]
pipeline: U
def __init__(self, condition: Callable[[T], bool], pipeline: U):
self.condition = condition
self.pipeline = pipeline
class Router(BaseModel, Generic[T, U]):
routes: Dict[str, Route[T, U]] = Field(
default_factory=dict, default_factory=dict,
description="Dictionary of route names to (condition, pipeline) tuples", description="Dictionary of route names to (condition, pipeline) tuples",
) )
default: Any = Field(..., description="Default pipeline if no conditions are met") default: U = Field(..., description="Default pipeline if no conditions are met")
_route_types: Dict[str, type] = PrivateAttr(default_factory=dict) _route_types: Dict[str, type] = PrivateAttr(default_factory=dict)
class Config: model_config = {"arbitrary_types_allowed": True}
arbitrary_types_allowed = True
def __init__(self, routes: Dict[str, Route], default: Any, **data): def __init__(self, routes: Dict[str, Route[T, U]], default: U, **data):
super().__init__(routes=routes, default=default, **data) super().__init__(routes=routes, default=default, **data)
self._check_copyable(default) self._check_copyable(default)
for name, route in routes.items(): for name, route in routes.items():
@@ -28,16 +34,16 @@ class Router(BaseModel):
self._route_types[name] = type(route.pipeline) self._route_types[name] = type(route.pipeline)
@staticmethod @staticmethod
def _check_copyable(obj: Any) -> None: def _check_copyable(obj):
if not hasattr(obj, "copy") or not callable(getattr(obj, "copy")): if not hasattr(obj, "copy") or not callable(getattr(obj, "copy")):
raise ValueError(f"Object of type {type(obj)} must have a 'copy' method") raise ValueError(f"Object of type {type(obj)} must have a 'copy' method")
def add_route( def add_route(
self, self,
name: str, name: str,
condition: Callable[[Dict[str, Any]], bool], condition: Callable[[T], bool],
pipeline: Any, pipeline: U,
) -> "Router": ) -> "Router[T, U]":
""" """
Add a named route with its condition and corresponding pipeline to the router. Add a named route with its condition and corresponding pipeline to the router.
@@ -54,7 +60,7 @@ class Router(BaseModel):
self._route_types[name] = type(pipeline) self._route_types[name] = type(pipeline)
return self return self
def route(self, input_data: Dict[str, Any]) -> Tuple[Any, str]: def route(self, input_data: T) -> Tuple[U, str]:
""" """
Evaluate the input against the conditions and return the appropriate pipeline. Evaluate the input against the conditions and return the appropriate pipeline.
@@ -70,15 +76,15 @@ class Router(BaseModel):
return self.default, "default" return self.default, "default"
def copy(self) -> "Router": def copy(self) -> "Router[T, U]":
"""Create a deep copy of the Router.""" """Create a deep copy of the Router."""
new_routes = { new_routes = {
name: Route( name: Route(
condition=deepcopy(route.condition), condition=deepcopy(route.condition),
pipeline=route.pipeline.copy(), pipeline=route.pipeline.copy(), # type: ignore
) )
for name, route in self.routes.items() for name, route in self.routes.items()
} }
new_default = self.default.copy() new_default = self.default.copy() # type: ignore
return Router(routes=new_routes, default=new_default) return Router(routes=new_routes, default=new_default)
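Both sides of this hunk implement the same core behavior: evaluate named conditions in order and fall back to a default. Stripped of the pydantic and typing changes, that logic can be sketched minimally (`MiniRouter` and the route names are illustrative, not the crewAI class):

```python
class MiniRouter:
    def __init__(self, routes, default):
        # routes: dict mapping name -> (condition, pipeline)
        self.routes = routes
        self.default = default

    def route(self, input_data):
        # Return the first pipeline whose condition matches, plus its name.
        for name, (condition, pipeline) in self.routes.items():
            if condition(input_data):
                return pipeline, name
        return self.default, "default"

router = MiniRouter(
    routes={"high": (lambda d: d.get("score", 0) > 80, "high_pipeline")},
    default="default_pipeline",
)
assert router.route({"score": 90}) == ("high_pipeline", "high")
assert router.route({"score": 10}) == ("default_pipeline", "default")
```

The real class additionally deep-copies conditions and requires each pipeline to expose a `copy` method so routed runs do not share mutable state.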


@@ -9,14 +9,7 @@ from hashlib import md5
 from typing import Any, Dict, List, Optional, Tuple, Type, Union
 from opentelemetry.trace import Span
-from pydantic import (
-    UUID4,
-    BaseModel,
-    Field,
-    PrivateAttr,
-    field_validator,
-    model_validator,
-)
+from pydantic import UUID4, BaseModel, Field, field_validator, model_validator
 from pydantic_core import PydanticCustomError
 from crewai.agents.agent_builder.base_agent import BaseAgent
@@ -46,6 +39,9 @@ class Task(BaseModel):
         tools: List of tools/resources limited for task execution.
     """
+    class Config:
+        arbitrary_types_allowed = True
     __hash__ = object.__hash__  # type: ignore
     used_tools: int = 0
     tools_errors: int = 0
@@ -108,12 +104,16 @@ class Task(BaseModel):
         default=None,
     )
-    _telemetry: Telemetry = PrivateAttr(default_factory=Telemetry)
-    _execution_span: Optional[Span] = PrivateAttr(default=None)
-    _original_description: Optional[str] = PrivateAttr(default=None)
-    _original_expected_output: Optional[str] = PrivateAttr(default=None)
-    _thread: Optional[threading.Thread] = PrivateAttr(default=None)
-    _execution_time: Optional[float] = PrivateAttr(default=None)
+    _telemetry: Telemetry
+    _execution_span: Span | None = None
+    _original_description: str | None = None
+    _original_expected_output: str | None = None
+    _thread: threading.Thread | None = None
+    _execution_time: float | None = None
+    def __init__(__pydantic_self__, **data):
+        config = data.pop("config", {})
+        super().__init__(**config, **data)
     @field_validator("id", mode="before")
     @classmethod
@@ -137,6 +137,12 @@ class Task(BaseModel):
             return value[1:]
         return value
+    @model_validator(mode="after")
+    def set_private_attrs(self) -> "Task":
+        """Set private attributes."""
+        self._telemetry = Telemetry()
+        return self
     @model_validator(mode="after")
     def set_attributes_based_on_config(self) -> "Task":
         """Set attributes based on the agent configuration."""
@@ -179,7 +185,7 @@ class Task(BaseModel):
         expected_output = self._original_expected_output or self.expected_output
         source = [description, expected_output]
-        return md5("|".join(source).encode(), usedforsecurity=False).hexdigest()
+        return md5("|".join(source).encode()).hexdigest()
     def execute_async(
         self,
@@ -234,9 +240,7 @@ class Task(BaseModel):
         pydantic_output, json_output = self._export_output(result)
         task_output = TaskOutput(
-            name=self.name,
             description=self.description,
-            expected_output=self.expected_output,
             raw=result,
             pydantic=pydantic_output,
             json_dict=json_output,
@@ -257,7 +261,7 @@ class Task(BaseModel):
         content = (
             json_output
             if json_output
-            else pydantic_output.model_dump_json() if pydantic_output else result
+            else pydantic_output.model_dump_json()
+            if pydantic_output
+            else result
         )
         self._save_file(content)
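The `__init__` added in this hunk folds a nested `config` dict into the model's top-level fields before delegating to the normal constructor. The pattern in isolation (a plain-class sketch with illustrative field names; the real `Task` is a pydantic model):

```python
class TaskSketch:
    def __init__(self, **data):
        # Merge a nested `config` dict into the top-level keyword arguments,
        # so TaskSketch(config={"description": "d"}) behaves like
        # TaskSketch(description="d"). Explicit kwargs win over config keys.
        config = data.pop("config", {})
        merged = {**config, **data}
        self.description = merged.get("description", "")
        self.expected_output = merged.get("expected_output", "")

t = TaskSketch(config={"description": "d"}, expected_output="o")
assert t.description == "d"
assert t.expected_output == "o"
```

In the pydantic version, passing the same key both ways would instead raise a duplicate-argument error, which is a reasonable trade-off for catching conflicting configuration early.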


@@ -10,10 +10,6 @@ class TaskOutput(BaseModel):
     """Class that represents the result of a task."""
     description: str = Field(description="Description of the task")
-    name: Optional[str] = Field(description="Name of the task", default=None)
-    expected_output: Optional[str] = Field(
-        description="Expected output of the task", default=None
-    )
     summary: Optional[str] = Field(description="Summary of the task", default=None)
     raw: str = Field(description="Raw output of the task", default="")
     pydantic: Optional[BaseModel] = Field(


@@ -295,7 +295,7 @@ class Telemetry:
             pass
     def individual_test_result_span(
-        self, crew: Crew, quality: float, exec_time: int, model_name: str
+        self, crew: Crew, quality: int, exec_time: int, model_name: str
     ):
         if self.ready:
             try:


@@ -1,5 +1,5 @@
 from langchain.tools import StructuredTool
-from pydantic import BaseModel, Field
+from pydantic import BaseModel, ConfigDict, Field
 from crewai.agents.cache import CacheHandler
@@ -7,10 +7,11 @@ from crewai.agents.cache import CacheHandler
 class CacheTools(BaseModel):
     """Default tools to hit the cache."""
+    model_config = ConfigDict(arbitrary_types_allowed=True)
     name: str = "Hit Cache"
     cache_handler: CacheHandler = Field(
         description="Cache Handler for the crew",
-        default_factory=CacheHandler,
+        default=CacheHandler(),
    )
     def tool(self):


@@ -1,6 +1,6 @@
 import ast
-import os
 from difflib import SequenceMatcher
+import os
 from textwrap import dedent
 from typing import Any, List, Union
@@ -15,7 +15,7 @@ from crewai.utilities import I18N, Converter, ConverterError, Printer
 agentops = None
 if os.environ.get("AGENTOPS_API_KEY"):
     try:
-        import agentops  # type: ignore
+        import agentops
     except ImportError:
         pass
@@ -71,14 +71,14 @@ class ToolUsage:
         self.task = task
         self.action = action
         self.function_calling_llm = function_calling_llm
         # Handling bug (see https://github.com/langchain-ai/langchain/pull/16395): raise an error if tools_names have space for ChatOpenAI
         if isinstance(self.function_calling_llm, ChatOpenAI):
             if " " in self.tools_names:
                 raise Exception(
                     "Tools names should not have spaces for ChatOpenAI models."
                 )
         # Set the maximum parsing attempts for bigger models
         if (isinstance(self.function_calling_llm, ChatOpenAI)) and (
             self.function_calling_llm.openai_api_base is None
@@ -118,7 +118,7 @@ class ToolUsage:
         tool: BaseTool,
         calling: Union[ToolCalling, InstructorToolCalling],
     ) -> str:  # TODO: Fix this return type
-        tool_event = agentops.ToolEvent(name=calling.tool_name) if agentops else None  # type: ignore
+        tool_event = agentops.ToolEvent(name=calling.tool_name) if agentops else None
         if self._check_tool_repeated_usage(calling=calling):  # type: ignore # _check_tool_repeated_usage of "ToolUsage" does not return a value (it only ever returns None)
             try:
                 result = self._i18n.errors("task_repeated_usage").format(


@@ -1,13 +1,13 @@
 from datetime import datetime
-from pydantic import BaseModel, Field, PrivateAttr
 from crewai.utilities.printer import Printer
-class Logger(BaseModel):
-    verbose: bool = Field(default=False)
-    _printer: Printer = PrivateAttr(default_factory=Printer)
+class Logger:
+    _printer = Printer()
+    def __init__(self, verbose=False):
+        self.verbose = verbose
     def log(self, level, message, color="bold_green"):
         if self.verbose:


@@ -1,25 +1,14 @@
 from typing import Any, List, Optional
 from langchain_openai import ChatOpenAI
-from pydantic import BaseModel, Field
+from pydantic import BaseModel
 from crewai.agent import Agent
 from crewai.task import Task
-class PlanPerTask(BaseModel):
-    task: str = Field(..., description="The task for which the plan is created")
-    plan: str = Field(
-        ...,
-        description="The step by step plan on how the agents can execute their tasks using the available tools with mastery",
-    )
 class PlannerTaskPydanticOutput(BaseModel):
-    list_of_plans_per_task: List[PlanPerTask] = Field(
-        ...,
-        description="Step by step plan on how the agents can execute their tasks using the available tools with mastery",
-    )
+    list_of_plans_per_task: List[str]
 class CrewPlanner:


@@ -1,50 +1,44 @@
 import threading
 import time
-from typing import Optional
-from pydantic import BaseModel, Field, PrivateAttr, model_validator
+from typing import Union
+from pydantic import BaseModel, ConfigDict, Field, PrivateAttr, model_validator
 from crewai.utilities.logger import Logger
 class RPMController(BaseModel):
-    max_rpm: Optional[int] = Field(default=None)
-    logger: Logger = Field(default_factory=lambda: Logger(verbose=False))
+    model_config = ConfigDict(arbitrary_types_allowed=True)
+    max_rpm: Union[int, None] = Field(default=None)
+    logger: Logger = Field(default=None)
     _current_rpm: int = PrivateAttr(default=0)
-    _timer: Optional[threading.Timer] = PrivateAttr(default=None)
-    _lock: Optional[threading.Lock] = PrivateAttr(default=None)
-    _shutdown_flag: bool = PrivateAttr(default=False)
+    _timer: threading.Timer | None = PrivateAttr(default=None)
+    _lock: threading.Lock = PrivateAttr(default=None)
+    _shutdown_flag = False
     @model_validator(mode="after")
     def reset_counter(self):
-        if self.max_rpm is not None:
+        if self.max_rpm:
             if not self._shutdown_flag:
                 self._lock = threading.Lock()
                 self._reset_request_count()
         return self
     def check_or_wait(self):
-        if self.max_rpm is None:
+        if not self.max_rpm:
             return True
-        def _check_and_increment():
-            if self.max_rpm is not None and self._current_rpm < self.max_rpm:
-                self._current_rpm += 1
-                return True
-            elif self.max_rpm is not None:
-                self.logger.log(
-                    "info", "Max RPM reached, waiting for next minute to start."
-                )
-                self._wait_for_next_minute()
-                self._current_rpm = 1
-                return True
-            return True
-        if self._lock:
-            with self._lock:
-                return _check_and_increment()
-        else:
-            return _check_and_increment()
+        with self._lock:
+            if self._current_rpm < self.max_rpm:
+                self._current_rpm += 1
+                return True
+            else:
+                self.logger.log(
+                    "info", "Max RPM reached, waiting for next minute to start."
+                )
+                self._wait_for_next_minute()
+                self._current_rpm = 1
+                return True
     def stop_rpm_counter(self):
         if self._timer:
@@ -56,18 +50,10 @@ class RPMController(BaseModel):
         self._current_rpm = 0
     def _reset_request_count(self):
-        def _reset():
-            self._current_rpm = 0
-            if not self._shutdown_flag:
-                self._timer = threading.Timer(60.0, self._reset_request_count)
-                self._timer.start()
-        if self._lock:
-            with self._lock:
-                _reset()
-        else:
-            _reset()
+        with self._lock:
+            self._current_rpm = 0
         if self._timer:
             self._shutdown_flag = True
             self._timer.cancel()
+        self._timer = threading.Timer(60.0, self._reset_request_count)
+        self._timer.start()
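Both versions of `RPMController` implement the same idea: a per-minute counter guarded by a lock, reset periodically by a `threading.Timer`. A compact standalone sketch without the pydantic wiring (class and method names are illustrative):

```python
import threading

class MinuteRateLimiter:
    def __init__(self, max_calls):
        self.max_calls = max_calls
        self._count = 0
        self._lock = threading.Lock()

    def try_acquire(self):
        # Return True if another call is allowed in the current window.
        with self._lock:
            if self._count < self.max_calls:
                self._count += 1
                return True
            return False

    def reset(self):
        # In the real controller, a threading.Timer invokes this every 60 s;
        # calling it manually here keeps the example deterministic.
        with self._lock:
            self._count = 0

limiter = MinuteRateLimiter(max_calls=2)
assert limiter.try_acquire()
assert limiter.try_acquire()
assert not limiter.try_acquire()  # third call in the window is refused
limiter.reset()
assert limiter.try_acquire()
```

The real controller blocks until the next minute instead of returning `False`, but the lock-guarded counter and timer-driven reset are the same mechanism.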


@@ -4,6 +4,11 @@ from unittest import mock
 from unittest.mock import patch
 import pytest
+from langchain.tools import tool
+from langchain_core.exceptions import OutputParserException
+from langchain_openai import ChatOpenAI
+from langchain.schema import AgentAction
 from crewai import Agent, Crew, Task
 from crewai.agents.cache import CacheHandler
 from crewai.agents.executor import CrewAgentExecutor
@@ -11,10 +16,6 @@ from crewai.agents.parser import CrewAgentParser
 from crewai.tools.tool_calling import InstructorToolCalling
 from crewai.tools.tool_usage import ToolUsage
 from crewai.utilities import RPMController
-from langchain.schema import AgentAction
-from langchain.tools import tool
-from langchain_core.exceptions import OutputParserException
-from langchain_openai import ChatOpenAI
 def test_agent_creation():
@@ -816,7 +817,7 @@ def test_agent_definition_based_on_dict():
         "verbose": True,
     }
-    agent = Agent(**config)
+    agent = Agent(config=config)
     assert agent.role == "test role"
     assert agent.goal == "test goal"
@@ -836,7 +837,7 @@ def test_agent_human_input():
         "backstory": "test backstory",
     }
-    agent = Agent(**config)
+    agent = Agent(config=config)
     task = Task(
         agent=agent,


@@ -8,6 +8,7 @@ from unittest.mock import MagicMock, patch
 import pydantic_core
 import pytest
+from crewai.agent import Agent
 from crewai.agents.cache import CacheHandler
 from crewai.crew import Crew


@@ -25,20 +25,14 @@ def mock_crew_factory():
MockCrewClass = type("MockCrew", (MagicMock, Crew), {}) MockCrewClass = type("MockCrew", (MagicMock, Crew), {})
class MockCrew(MockCrewClass): class MockCrew(MockCrewClass):
def __deepcopy__(self): def __deepcopy__(self, memo):
result = MockCrewClass() result = MockCrewClass()
result.kickoff_async = self.kickoff_async result.kickoff_async = self.kickoff_async
result.name = self.name result.name = self.name
return result return result
def copy(
self,
):
return self
crew = MockCrew() crew = MockCrew()
crew.name = name crew.name = name
task_output = TaskOutput( task_output = TaskOutput(
description="Test task", raw="Task output", agent="Test Agent" description="Test task", raw="Task output", agent="Test Agent"
) )
@@ -50,15 +44,9 @@ def mock_crew_factory():
pydantic=pydantic_output, pydantic=pydantic_output,
) )
async def kickoff_async(inputs=None): async def async_kickoff(inputs=None):
return crew_output return crew_output
# Create an AsyncMock for kickoff_async
crew.kickoff_async = AsyncMock(side_effect=kickoff_async)
# Mock the synchronous kickoff method
crew.kickoff = MagicMock(return_value=crew_output)
# Add more attributes that Procedure might be expecting # Add more attributes that Procedure might be expecting
crew.verbose = False crew.verbose = False
crew.output_log_file = None crew.output_log_file = None
@@ -68,16 +56,30 @@ def mock_crew_factory():
crew.config = None crew.config = None
crew.cache = True crew.cache = True
# Add non-empty agents and tasks # # Create a valid Agent instance
mock_agent = MagicMock(spec=Agent) mock_agent = Agent(
mock_task = MagicMock(spec=Task) name="Mock Agent",
mock_task.agent = mock_agent role="Mock Role",
mock_task.async_execution = False goal="Mock Goal",
mock_task.context = None backstory="Mock Backstory",
allow_delegation=False,
verbose=False,
)
# Create a valid Task instance
mock_task = Task(
description="Return: Test output",
expected_output="Test output",
agent=mock_agent,
async_execution=False,
context=None,
)
crew.agents = [mock_agent] crew.agents = [mock_agent]
crew.tasks = [mock_task] crew.tasks = [mock_task]
crew.kickoff_async = AsyncMock(side_effect=async_kickoff)
return crew return crew
return _create_mock_crew return _create_mock_crew
@@ -113,7 +115,9 @@ def mock_router_factory(mock_crew_factory):
( (
"route1" "route1"
if x.get("score", 0) > 80 if x.get("score", 0) > 80
else "route2" if x.get("score", 0) > 50 else "default" else "route2"
if x.get("score", 0) > 50
else "default"
), ),
) )
) )
@@ -473,17 +477,31 @@ async def test_pipeline_with_parallel_stages_end_in_single_stage(mock_crew_facto
     """
     Test that Pipeline correctly handles parallel stages.
     """
-    crew1 = mock_crew_factory(name="Crew 1")
-    crew2 = mock_crew_factory(name="Crew 2")
-    crew3 = mock_crew_factory(name="Crew 3")
-    crew4 = mock_crew_factory(name="Crew 4")
+    crew1 = Crew(name="Crew 1", tasks=[task], agents=[agent])
+    crew2 = Crew(name="Crew 2", tasks=[task], agents=[agent])
+    crew3 = Crew(name="Crew 3", tasks=[task], agents=[agent])
+    crew4 = Crew(name="Crew 4", tasks=[task], agents=[agent])

     pipeline = Pipeline(stages=[crew1, [crew2, crew3], crew4])
     input_data = [{"initial": "data"}]
     pipeline_result = await pipeline.kickoff(input_data)

-    crew1.kickoff_async.assert_called_once_with(inputs={"initial": "data"})
+    with patch.object(Crew, "kickoff_async") as mock_kickoff:
+        mock_kickoff.return_value = CrewOutput(
+            raw="Test output",
+            tasks_output=[
+                TaskOutput(
+                    description="Test task", raw="Task output", agent="Test Agent"
+                )
+            ],
+            token_usage=DEFAULT_TOKEN_USAGE,
+            json_dict=None,
+            pydantic=None,
+        )
+        pipeline_result = await pipeline.kickoff(input_data)
+        mock_kickoff.assert_called_with(inputs={"initial": "data"})

     assert len(pipeline_result) == 1
     pipeline_result_1 = pipeline_result[0]
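The rewritten test patches `Crew.kickoff_async` on the class instead of injecting factory-built mocks. The general shape of that technique, sketched with stand-in classes rather than the real crewAI API:

```python
import asyncio
from unittest.mock import patch


class Stage:
    async def kickoff_async(self, inputs):
        raise RuntimeError("should not run in tests")


async def run_pipeline(stage, data):
    # The code under test awaits the stage's async method.
    return await stage.kickoff_async(inputs=data)


# patch.object swaps the method on the class for the with-block only;
# patch detects the async function and substitutes an AsyncMock.
with patch.object(Stage, "kickoff_async") as mock_kickoff:
    mock_kickoff.return_value = {"raw": "Test output"}
    result = asyncio.run(run_pipeline(Stage(), {"initial": "data"}))
    mock_kickoff.assert_called_with(inputs={"initial": "data"})

print(result)  # {'raw': 'Test output'}
```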
@@ -631,21 +649,33 @@ Options:
 @pytest.mark.asyncio
-async def test_pipeline_data_accumulation(mock_crew_factory):
-    crew1 = mock_crew_factory(name="Crew 1", output_json_dict={"key1": "value1"})
-    crew2 = mock_crew_factory(name="Crew 2", output_json_dict={"key2": "value2"})
+async def test_pipeline_data_accumulation():
+    crew1 = Crew(name="Crew 1", tasks=[task], agents=[agent])
+    crew2 = Crew(name="Crew 2", tasks=[task], agents=[agent])

     pipeline = Pipeline(stages=[crew1, crew2])
     input_data = [{"initial": "data"}]
     results = await pipeline.kickoff(input_data)

-    # Check that crew1 was called with only the initial input
-    crew1.kickoff_async.assert_called_once_with(inputs={"initial": "data"})
+    with patch.object(Crew, "kickoff_async") as mock_kickoff:
+        mock_kickoff.side_effect = [
+            CrewOutput(
+                raw="Test output from Crew 1",
+                tasks_output=[],
+                token_usage=DEFAULT_TOKEN_USAGE,
+                json_dict={"key1": "value1"},
+                pydantic=None,
+            ),
+            CrewOutput(
+                raw="Test output from Crew 2",
+                tasks_output=[],
+                token_usage=DEFAULT_TOKEN_USAGE,
+                json_dict={"key2": "value2"},
+                pydantic=None,
+            ),
+        ]

-    # Check that crew2 was called with the combined input from the initial data and crew1's output
-    crew2.kickoff_async.assert_called_once_with(
-        inputs={"initial": "data", "key1": "value1"}
-    )
+        results = await pipeline.kickoff(input_data)

     # Check the final output
     assert len(results) == 1
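Here a single patched method gets a list `side_effect`, so successive calls return successive `CrewOutput` values. The underlying mock behavior in isolation (the dict values are made up):

```python
from unittest.mock import Mock

# A list side_effect yields one item per call, in order; calling
# again past the end raises StopIteration.
step = Mock(side_effect=[{"key1": "value1"}, {"key2": "value2"}])

first = step()
second = step()

print(first, second)  # {'key1': 'value1'} {'key2': 'value2'}
assert step.call_count == 2
```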


@@ -14,14 +14,6 @@ class SimpleCrew:
     def simple_task(self):
         return Task(description="Simple Description", expected_output="Simple Output")

-    @task
-    def custom_named_task(self):
-        return Task(
-            description="Simple Description",
-            expected_output="Simple Output",
-            name="Custom",
-        )

 def test_agent_memoization():
     crew = SimpleCrew()

@@ -41,15 +33,3 @@
     assert (
         first_call_result is second_call_result
     ), "Task memoization is not working as expected"

-def test_task_name():
-    simple_task = SimpleCrew().simple_task()
-    assert (
-        simple_task.name == "simple_task"
-    ), "Task name is not inferred from function name as expected"

-    custom_named_task = SimpleCrew().custom_named_task()
-    assert (
-        custom_named_task.name == "Custom"
-    ), "Custom task name is not being set as expected"
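The deleted tests checked that decorated task methods are memoized, asserting object identity (`is`, not just `==`). One common way to get that behavior is `functools.cache`; this is a hedged sketch of the idea, not the actual crewAI decorator:

```python
from functools import cache


class SimpleCrew:
    @cache
    def simple_task(self):
        # Built once per instance; the same object is returned afterward.
        return {"description": "Simple Description"}


crew = SimpleCrew()
first = crew.simple_task()
second = crew.simple_task()
assert first is second  # memoized: identical object, not just equal
```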


@@ -1,8 +1,8 @@
 """Test Agent creation and execution basic functionality."""

-import os
 import hashlib
 import json
+import os
 from unittest.mock import MagicMock, patch

 import pytest
@@ -98,7 +98,6 @@ def test_task_callback():
     task_completed = MagicMock(return_value="done")
     task = Task(
-        name="Brainstorm",
         description="Give me a list of 5 interesting ideas to explore for na article, what makes them unique and interesting.",
         expected_output="Bullet point list of 5 interesting ideas.",
         agent=researcher,
@@ -110,10 +109,6 @@ def test_task_callback():
     task.execute_sync(agent=researcher)

     task_completed.assert_called_once_with(task.output)
-    assert task.output.description == task.description
-    assert task.output.expected_output == task.expected_output
-    assert task.output.name == task.name

 def test_task_callback_returns_task_output():
     from crewai.tasks.output_format import OutputFormat
@@ -154,8 +149,6 @@ def test_task_callback_returns_task_output():
         "json_dict": None,
         "agent": researcher.role,
         "summary": "Give me a list of 5 interesting ideas to explore...",
-        "name": None,
-        "expected_output": "Bullet point list of 5 interesting ideas.",
         "output_format": OutputFormat.RAW,
     }
     assert output_dict == expected_output
@@ -703,7 +696,7 @@ def test_task_definition_based_on_dict():
         "expected_output": "The score of the title.",
     }

-    task = Task(**config)
+    task = Task(config=config)

     assert task.description == config["description"]
     assert task.expected_output == config["expected_output"]
@@ -716,7 +709,7 @@ def test_conditional_task_definition_based_on_dict():
         "expected_output": "The score of the title.",
     }

-    task = ConditionalTask(**config, condition=lambda x: True)
+    task = ConditionalTask(config=config, condition=lambda x: True)

     assert task.description == config["description"]
     assert task.expected_output == config["expected_output"]
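The change from `Task(**config)` to `Task(config=config)` stops unpacking the dict into keyword arguments and instead hands it whole to a single `config` parameter. The difference in plain Python, using a toy class rather than the real `Task`:

```python
class Task:
    def __init__(self, description=None, expected_output=None, config=None):
        # When a config dict is given, its keys fill in the fields.
        if config is not None:
            description = config.get("description", description)
            expected_output = config.get("expected_output", expected_output)
        self.description = description
        self.expected_output = expected_output


config = {
    "description": "Give me a title.",
    "expected_output": "The score of the title.",
}

spread = Task(**config)        # dict unpacked into keyword arguments
wrapped = Task(config=config)  # dict passed whole; __init__ unpacks it

assert spread.description == wrapped.description == config["description"]
```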


@@ -6,11 +6,7 @@ from langchain_openai import ChatOpenAI
 from crewai.agent import Agent
 from crewai.task import Task
 from crewai.tasks.task_output import TaskOutput
-from crewai.utilities.planning_handler import (
-    CrewPlanner,
-    PlannerTaskPydanticOutput,
-    PlanPerTask,
-)
+from crewai.utilities.planning_handler import CrewPlanner, PlannerTaskPydanticOutput

 class TestCrewPlanner:
@@ -48,17 +44,12 @@ class TestCrewPlanner:
         return CrewPlanner(tasks, planning_agent_llm)

     def test_handle_crew_planning(self, crew_planner):
-        list_of_plans_per_task = [
-            PlanPerTask(task="Task1", plan="Plan 1"),
-            PlanPerTask(task="Task2", plan="Plan 2"),
-            PlanPerTask(task="Task3", plan="Plan 3"),
-        ]
         with patch.object(Task, "execute_sync") as execute:
             execute.return_value = TaskOutput(
                 description="Description",
                 agent="agent",
                 pydantic=PlannerTaskPydanticOutput(
-                    list_of_plans_per_task=list_of_plans_per_task
+                    list_of_plans_per_task=["Plan 1", "Plan 2", "Plan 3"]
                 ),
             )
             result = crew_planner._handle_crew_planning()
@@ -100,9 +91,7 @@ class TestCrewPlanner:
             execute.return_value = TaskOutput(
                 description="Description",
                 agent="agent",
-                pydantic=PlannerTaskPydanticOutput(
-                    list_of_plans_per_task=[PlanPerTask(task="Task1", plan="Plan 1")]
-                ),
+                pydantic=PlannerTaskPydanticOutput(list_of_plans_per_task=["Plan 1"]),
             )
             result = crew_planner_different_llm._handle_crew_planning()
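These planner hunks replace the nested `PlanPerTask` model with a bare list of strings in `PlannerTaskPydanticOutput`. Sketched below with dataclass stand-ins (the real classes are Pydantic models; only the field names follow the diff):

```python
from dataclasses import dataclass, field


# Before: each plan was a small nested model.
@dataclass
class PlanPerTask:
    task: str
    plan: str


# After: the output model holds bare strings, so tests can build it
# without constructing nested objects first.
@dataclass
class PlannerTaskPydanticOutput:
    list_of_plans_per_task: list = field(default_factory=list)


nested = PlannerTaskPydanticOutput(
    list_of_plans_per_task=[PlanPerTask(task="Task1", plan="Plan 1")]
)
flat = PlannerTaskPydanticOutput(list_of_plans_per_task=["Plan 1"])

print(flat.list_of_plans_per_task)  # ['Plan 1']
```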