Mirror of https://github.com/crewAIInc/crewAI.git, synced 2025-12-20 14:28:28 +00:00.

Compare commits: 17 commits (git-temapl ... feat/updat)
| Author | SHA1 | Date |
|---|---|---|
| | 5136df8cc6 | |
| | 5495825b1d | |
| | 6e36f84cc6 | |
| | cddf2d8f7c | |
| | 5f17e35c5a | |
| | 231a833ad0 | |
| | a870295d42 | |
| | ddda8f6bda | |
| | bf7372fefa | |
| | 3451b6fc7a | |
| | dbf2570353 | |
| | d0707fac91 | |
| | 35ebdd6022 | |
| | 92a77e5cac | |
| | a2922c9ad5 | |
| | 9f9b52dd26 | |
| | 2482c7ab68 | |
.github/ISSUE_TEMPLATE/bug_report.md (vendored, 35 lines deleted)

@@ -1,35 +0,0 @@
---
name: Bug report
about: Create a report to help us improve CrewAI
title: "[BUG]"
labels: bug
assignees: ''

---

**Description**
Provide a clear and concise description of what the bug is.

**Steps to Reproduce**
Provide a step-by-step process to reproduce the behavior:

**Expected behavior**
A clear and concise description of what you expected to happen.

**Screenshots/Code snippets**
If applicable, add screenshots or code snippets to help explain your problem.

**Environment Details:**
- **Operating System**: [e.g., Ubuntu 20.04, macOS Catalina, Windows 10]
- **Python Version**: [e.g., 3.8, 3.9, 3.10]
- **crewAI Version**: [e.g., 0.30.11]
- **crewAI Tools Version**: [e.g., 0.2.6]

**Logs**
Include relevant logs or error messages if applicable.

**Possible Solution**
Have a solution in mind? Please suggest it here, or write "None".

**Additional context**
Add any other context about the problem here.
.github/ISSUE_TEMPLATE/config.yml (vendored, new file, 1 line)

@@ -0,0 +1 @@
blank_issues_enabled: false
.github/ISSUE_TEMPLATE/custom.md (vendored, 24 lines deleted)

@@ -1,24 +0,0 @@
---
name: Custom issue template
about: Describe this issue template's purpose here.
title: "[DOCS]"
labels: documentation
assignees: ''

---

## Documentation Page
<!-- Provide a link to the documentation page that needs improvement -->

## Description
<!-- Describe what needs to be changed or improved in the documentation -->

## Suggested Changes
<!-- If possible, provide specific suggestions for how to improve the documentation -->

## Additional Context
<!-- Add any other context about the documentation issue here -->

## Checklist
- [ ] I have searched the existing issues to make sure this is not a duplicate
- [ ] I have checked the latest version of the documentation to ensure this hasn't been addressed
.github/ISSUE_TEMPLATE/feature_request.yml (vendored, new file, 65 lines)

@@ -0,0 +1,65 @@
name: Feature request
description: Suggest a new feature for CrewAI
title: "[FEATURE]"
labels: ["feature-request"]
assignees: []
body:
  - type: markdown
    attributes:
      value: |
        Thanks for taking the time to fill out this feature request!

  - type: dropdown
    id: feature-area
    attributes:
      label: Feature Area
      description: Which area of CrewAI does this feature primarily relate to?
      options:
        - Core functionality
        - Agent capabilities
        - Task management
        - Integration with external tools
        - Performance optimization
        - Documentation
        - Other (please specify in additional context)
    validations:
      required: true

  - type: textarea
    id: problem
    attributes:
      label: Is your feature request related to a an existing bug? Please link it here.
      description: A link to the bug or NA if not related to an existing bug.
    validations:
      required: true

  - type: textarea
    id: solution
    attributes:
      label: Describe the solution you'd like
      description: A clear and concise description of what you want to happen.
    validations:
      required: true

  - type: textarea
    id: alternatives
    attributes:
      label: Describe alternatives you've considered
      description: A clear and concise description of any alternative solutions or features you've considered.
    validations:
      required: false

  - type: textarea
    id: context
    attributes:
      label: Additional context
      description: Add any other context, screenshots, or examples about the feature request here.
    validations:
      required: false

  - type: dropdown
    id: willingness-to-contribute
    attributes:
      label: Willingness to Contribute
      description: Would you be willing to contribute to the implementation of this feature?
      options:
        - Yes, I'd be happy to submit a pull request
        - I could provide more detailed specifications
        - I can test the feature once it's implemented
        - No, I'm just suggesting the idea
    validations:
      required: true
.github/workflows/mkdocs.yml (vendored, 6 lines changed)

@@ -1,10 +1,8 @@
 name: Deploy MkDocs
 
 on:
-  workflow_dispatch:
-  push:
-    branches:
-      - main
+  release:
+    types: [published]
 
 permissions:
   contents: write
.github/workflows/security-checker.yml (vendored, new file, 23 lines)

@@ -0,0 +1,23 @@
name: Security Checker

on: [pull_request]

jobs:
  security-check:
    runs-on: ubuntu-latest

    steps:
      - name: Checkout code
        uses: actions/checkout@v4

      - name: Set up Python
        uses: actions/setup-python@v4
        with:
          python-version: "3.11.9"

      - name: Install dependencies
        run: pip install bandit

      - name: Run Bandit
        run: bandit -c pyproject.toml -r src/ -lll
.github/workflows/stale.yml (vendored, 1 line added)

@@ -24,3 +24,4 @@ jobs:
       stale-pr-message: 'This PR is stale because it has been open for 45 days with no activity.'
       days-before-pr-stale: 45
       days-before-pr-close: -1
+      operations-per-run: 500
docs/core-concepts/Cli.md (new file, 142 lines)

@@ -0,0 +1,142 @@

# CrewAI CLI Documentation

The CrewAI CLI provides a set of commands to interact with CrewAI, allowing you to create, train, run, and manage crews and pipelines.

## Installation

To use the CrewAI CLI, make sure you have CrewAI & Poetry installed:

```
pip install crewai poetry
```

## Basic Usage

The basic structure of a CrewAI CLI command is:

```
crewai [COMMAND] [OPTIONS] [ARGUMENTS]
```

## Available Commands

### 1. create

Create a new crew or pipeline.

```
crewai create [OPTIONS] TYPE NAME
```

- `TYPE`: Choose between "crew" or "pipeline"
- `NAME`: Name of the crew or pipeline
- `--router`: (Optional) Create a pipeline with router functionality

Example:
```
crewai create crew my_new_crew
crewai create pipeline my_new_pipeline --router
```

### 2. version

Show the installed version of CrewAI.

```
crewai version [OPTIONS]
```

- `--tools`: (Optional) Show the installed version of CrewAI tools

Example:
```
crewai version
crewai version --tools
```

### 3. train

Train the crew for a specified number of iterations.

```
crewai train [OPTIONS]
```

- `-n, --n_iterations INTEGER`: Number of iterations to train the crew (default: 5)
- `-f, --filename TEXT`: Path to a custom file for training (default: "trained_agents_data.pkl")

Example:
```
crewai train -n 10 -f my_training_data.pkl
```

### 4. replay

Replay the crew execution from a specific task.

```
crewai replay [OPTIONS]
```

- `-t, --task_id TEXT`: Replay the crew from this task ID, including all subsequent tasks

Example:
```
crewai replay -t task_123456
```

### 5. log_tasks_outputs

Retrieve your latest crew.kickoff() task outputs.

```
crewai log_tasks_outputs
```

### 6. reset_memories

Reset the crew memories (long, short, entity, latest_crew_kickoff_outputs).

```
crewai reset_memories [OPTIONS]
```

- `-l, --long`: Reset LONG TERM memory
- `-s, --short`: Reset SHORT TERM memory
- `-e, --entities`: Reset ENTITIES memory
- `-k, --kickoff-outputs`: Reset LATEST KICKOFF TASK OUTPUTS
- `-a, --all`: Reset ALL memories

Example:
```
crewai reset_memories --long --short
crewai reset_memories --all
```

### 7. test

Test the crew and evaluate the results.

```
crewai test [OPTIONS]
```

- `-n, --n_iterations INTEGER`: Number of iterations to test the crew (default: 3)
- `-m, --model TEXT`: LLM Model to run the tests on the Crew (default: "gpt-4o-mini")

Example:
```
crewai test -n 5 -m gpt-3.5-turbo
```

### 8. run

Run the crew.

```
crewai run
```

## Note

Make sure to run these commands from the directory where your CrewAI project is set up. Some commands may require additional configuration or setup within your project structure.
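The `run` command ultimately invokes the crew defined in your project's `main.py`. As a rough, hedged sketch of what that entry point typically contains (the field names follow common CrewAI examples; treat the exact constructor arguments as assumptions for your installed version):

```python
from crewai import Agent, Crew, Task  # assumed public API, as used in CrewAI examples

researcher = Agent(
    role="Researcher",
    goal="Summarize recent CrewAI CLI changes",
    backstory="You track developer tooling closely.",
)

summary_task = Task(
    description="Write a three-bullet summary of the CrewAI CLI commands.",
    expected_output="Three bullet points.",
    agent=researcher,
)

crew = Crew(agents=[researcher], tasks=[summary_task])

if __name__ == "__main__":
    # This is roughly what `crewai run` triggers through the project's main.py.
    result = crew.kickoff()
    print(result)
```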
@@ -0,0 +1,129 @@
|
|||||||
|
# Creating a CrewAI Pipeline Project
|
||||||
|
|
||||||
|
Welcome to the comprehensive guide for creating a new CrewAI pipeline project. This document will walk you through the steps to create, customize, and run your CrewAI pipeline project, ensuring you have everything you need to get started.
|
||||||
|
|
||||||
|
To learn more about CrewAI pipelines, visit the [CrewAI documentation](https://docs.crewai.com/core-concepts/Pipeline/).
|
||||||
|
|
||||||
|
## Prerequisites
|
||||||
|
|
||||||
|
Before getting started with CrewAI pipelines, make sure that you have installed CrewAI via pip:
|
||||||
|
|
||||||
|
```shell
|
||||||
|
$ pip install crewai crewai-tools
|
||||||
|
```
|
||||||
|
|
||||||
|
The same prerequisites for virtual environments and Code IDEs apply as in regular CrewAI projects.
|
||||||
|
|
||||||
|
## Creating a New Pipeline Project
|
||||||
|
|
||||||
|
To create a new CrewAI pipeline project, you have two options:
|
||||||
|
|
||||||
|
1. For a basic pipeline template:
|
||||||
|
|
||||||
|
```shell
|
||||||
|
$ crewai create pipeline <project_name>
|
||||||
|
```
|
||||||
|
|
||||||
|
2. For a pipeline example that includes a router:
|
||||||
|
|
||||||
|
```shell
|
||||||
|
$ crewai create pipeline --router <project_name>
|
||||||
|
```
|
||||||
|
|
||||||
|
These commands will create a new project folder with the following structure:
|
||||||
|
|
||||||
|
```
|
||||||
|
<project_name>/
|
||||||
|
├── README.md
|
||||||
|
├── poetry.lock
|
||||||
|
├── pyproject.toml
|
||||||
|
├── src/
|
||||||
|
│ └── <project_name>/
|
||||||
|
│ ├── __init__.py
|
||||||
|
│ ├── main.py
|
||||||
|
│ ├── crews/
|
||||||
|
│ │ ├── crew1/
|
||||||
|
│ │ │ ├── crew1.py
|
||||||
|
│ │ │ └── config/
|
||||||
|
│ │ │ ├── agents.yaml
|
||||||
|
│ │ │ └── tasks.yaml
|
||||||
|
│ │ ├── crew2/
|
||||||
|
│ │ │ ├── crew2.py
|
||||||
|
│ │ │ └── config/
|
||||||
|
│ │ │ ├── agents.yaml
|
||||||
|
│ │ │ └── tasks.yaml
|
||||||
|
│ ├── pipelines/
|
||||||
|
│ │ ├── __init__.py
|
||||||
|
│ │ ├── pipeline1.py
|
||||||
|
│ │ └── pipeline2.py
|
||||||
|
│ └── tools/
|
||||||
|
│ ├── __init__.py
|
||||||
|
│ └── custom_tool.py
|
||||||
|
└── tests/
|
||||||
|
```
|
||||||
|
|
||||||
|
## Customizing Your Pipeline Project
|
||||||
|
|
||||||
|
To customize your pipeline project, you can:
|
||||||
|
|
||||||
|
1. Modify the crew files in `src/<project_name>/crews/` to define your agents and tasks for each crew.
|
||||||
|
2. Modify the pipeline files in `src/<project_name>/pipelines/` to define your pipeline structure.
|
||||||
|
3. Modify `src/<project_name>/main.py` to set up and run your pipelines.
|
||||||
|
4. Add your environment variables into the `.env` file.
|
||||||
|
|
||||||
|
### Example: Defining a Pipeline
|
||||||
|
|
||||||
|
Here's an example of how to define a pipeline in `src/<project_name>/pipelines/normal_pipeline.py`:
|
||||||
|
|
||||||
|
```python
|
||||||
|
from crewai import Pipeline
|
||||||
|
from crewai.project import PipelineBase
|
||||||
|
from ..crews.normal_crew import NormalCrew
|
||||||
|
|
||||||
|
@PipelineBase
|
||||||
|
class NormalPipeline:
|
||||||
|
def __init__(self):
|
||||||
|
# Initialize crews
|
||||||
|
self.normal_crew = NormalCrew().crew()
|
||||||
|
|
||||||
|
def create_pipeline(self):
|
||||||
|
return Pipeline(
|
||||||
|
stages=[
|
||||||
|
self.normal_crew
|
||||||
|
]
|
||||||
|
)
|
||||||
|
|
||||||
|
async def kickoff(self, inputs):
|
||||||
|
pipeline = self.create_pipeline()
|
||||||
|
results = await pipeline.kickoff(inputs)
|
||||||
|
return results
|
||||||
|
```
|
||||||
|
|
||||||
|
### Annotations
|
||||||
|
|
||||||
|
The main annotation you'll use for pipelines is `@PipelineBase`. This annotation is used to decorate your pipeline classes, similar to how `@CrewBase` is used for crews.
|
||||||
|
|
||||||
|
## Installing Dependencies
|
||||||
|
|
||||||
|
To install the dependencies for your project, use Poetry:
|
||||||
|
|
||||||
|
```shell
|
||||||
|
$ cd <project_name>
|
||||||
|
$ crewai install
|
||||||
|
```
|
||||||
|
|
||||||
|
## Running Your Pipeline Project
|
||||||
|
|
||||||
|
To run your pipeline project, use the following command:
|
||||||
|
|
||||||
|
```shell
|
||||||
|
$ crewai run
|
||||||
|
```
|
||||||
|
|
||||||
|
This will initialize your pipeline and begin task execution as defined in your `main.py` file.
|
||||||
|
|
||||||
|
## Deploying Your Pipeline Project
|
||||||
|
|
||||||
|
Pipelines can be deployed in the same way as regular CrewAI projects. The easiest way is through [CrewAI+](https://www.crewai.com/crewaiplus), where you can deploy your pipeline in a few clicks.
|
||||||
|
|
||||||
|
Remember, when working with pipelines, you're orchestrating multiple crews to work together in a sequence or parallel fashion. This allows for more complex workflows and information processing tasks.
|
||||||
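For orientation, a minimal `main.py` for such a project might look like the sketch below. It reuses the `NormalPipeline` class from the documentation example above; the module path, the input payload shape, and the `asyncio.run` entry point are assumptions rather than the exact generated template.

```python
import asyncio

from .pipelines.normal_pipeline import NormalPipeline  # assumed module path


async def run():
    # Build the pipeline and kick it off with a list of input payloads.
    pipeline = NormalPipeline()
    results = await pipeline.kickoff([{"topic": "AI agents"}])  # assumed input shape
    for result in results:
        print(result)


def main():
    asyncio.run(run())


if __name__ == "__main__":
    main()
```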
@@ -191,8 +191,7 @@ To install the dependencies for your project, you can use Poetry. First, navigat
 
 ```shell
 $ cd my_project
-$ poetry lock
-$ poetry install
+$ crewai install
 ```
 
 This will install the dependencies specified in the `pyproject.toml` file.

@@ -233,11 +232,6 @@ To run your project, use the following command:
 ```shell
 $ crewai run
 ```
-or
-```shell
-$ poetry run my_project
-```
-
 This will initialize your crew of AI agents and begin task execution as defined in your configuration in the `main.py` file.
 
 ### Replay Tasks from Latest Crew Kickoff
@@ -8,14 +8,21 @@ Cutting-edge framework for orchestrating role-playing, autonomous AI agents. By
 <div style="width:25%">
   <h2>Getting Started</h2>
   <ul>
-    <li><a href='./getting-started/Installing-CrewAI'>
+    <li>
+      <a href='./getting-started/Installing-CrewAI'>
       Installing CrewAI
       </a>
     </li>
-    <li><a href='./getting-started/Start-a-New-CrewAI-Project-Template-Method'>
+    <li>
+      <a href='./getting-started/Start-a-New-CrewAI-Project-Template-Method'>
       Start a New CrewAI Project: Template Method
       </a>
     </li>
+    <li>
+      <a href='./getting-started/Create-a-New-CrewAI-Pipeline-Template-Method'>
+      Create a New CrewAI Pipeline: Template Method
+      </a>
+    </li>
   </ul>
 </div>
 <div style="width:25%">
@@ -129,6 +129,7 @@ nav:
     - Processes: 'core-concepts/Processes.md'
     - Crews: 'core-concepts/Crews.md'
     - Collaboration: 'core-concepts/Collaboration.md'
+    - Pipeline: 'core-concepts/Pipeline.md'
     - Training: 'core-concepts/Training-Crew.md'
     - Memory: 'core-concepts/Memory.md'
     - Planning: 'core-concepts/Planning.md'
poetry.lock (generated, 61 lines changed)

@@ -1,4 +1,4 @@
-# This file is automatically @generated by Poetry 1.7.1 and should not be changed by hand.
+# This file is automatically @generated by Poetry 1.8.3 and should not be changed by hand.
 
 [[package]]
 name = "agentops"

@@ -829,29 +829,27 @@ name = "crewai-tools"
 version = "0.8.3"
 description = "Set of tools for the crewAI framework"
 optional = false
-python-versions = ">=3.10,<=3.13"
+python-versions = "<=3.13,>=3.10"
-files = []
-develop = false
+files = [
+    {file = "crewai_tools-0.8.3-py3-none-any.whl", hash = "sha256:a54a10c36b8403250e13d6594bd37db7e7deb3f9fabc77e8720c081864ae6189"},
+    {file = "crewai_tools-0.8.3.tar.gz", hash = "sha256:f0317ea1d926221b22fcf4b816d71916fe870aa66ed7ee2a0067dba42b5634eb"},
+]
 
 [package.dependencies]
-beautifulsoup4 = "^4.12.3"
+beautifulsoup4 = ">=4.12.3,<5.0.0"
-chromadb = "^0.4.22"
+chromadb = ">=0.4.22,<0.5.0"
-docker = "^7.1.0"
+docker = ">=7.1.0,<8.0.0"
-docx2txt = "^0.8"
+docx2txt = ">=0.8,<0.9"
-embedchain = "^0.1.114"
+embedchain = ">=0.1.114,<0.2.0"
-lancedb = "^0.5.4"
+lancedb = ">=0.5.4,<0.6.0"
 langchain = ">0.2,<=0.3"
-openai = "^1.12.0"
+openai = ">=1.12.0,<2.0.0"
-pydantic = "^2.6.1"
+pydantic = ">=2.6.1,<3.0.0"
-pyright = "^1.1.350"
+pyright = ">=1.1.350,<2.0.0"
-pytest = "^8.0.0"
+pytest = ">=8.0.0,<9.0.0"
-pytube = "^15.0.0"
+pytube = ">=15.0.0,<16.0.0"
-requests = "^2.31.0"
+requests = ">=2.31.0,<3.0.0"
-selenium = "^4.18.1"
+selenium = ">=4.18.1,<5.0.0"
 
-[package.source]
-type = "directory"
-url = "../crewai-tools"
-
 [[package]]
 name = "cssselect2"

@@ -1321,12 +1319,12 @@ files = [
 google-auth = ">=2.14.1,<3.0.dev0"
 googleapis-common-protos = ">=1.56.2,<2.0.dev0"
 grpcio = [
-    {version = ">=1.49.1,<2.0dev", optional = true, markers = "python_version >= \"3.11\" and extra == \"grpc\""},
     {version = ">=1.33.2,<2.0dev", optional = true, markers = "python_version < \"3.11\" and extra == \"grpc\""},
+    {version = ">=1.49.1,<2.0dev", optional = true, markers = "python_version >= \"3.11\" and extra == \"grpc\""},
 ]
 grpcio-status = [
-    {version = ">=1.49.1,<2.0.dev0", optional = true, markers = "python_version >= \"3.11\" and extra == \"grpc\""},
     {version = ">=1.33.2,<2.0.dev0", optional = true, markers = "python_version < \"3.11\" and extra == \"grpc\""},
+    {version = ">=1.49.1,<2.0.dev0", optional = true, markers = "python_version >= \"3.11\" and extra == \"grpc\""},
 ]
 proto-plus = ">=1.22.3,<2.0.0dev"
 protobuf = ">=3.19.5,<3.20.0 || >3.20.0,<3.20.1 || >3.20.1,<4.21.0 || >4.21.0,<4.21.1 || >4.21.1,<4.21.2 || >4.21.2,<4.21.3 || >4.21.3,<4.21.4 || >4.21.4,<4.21.5 || >4.21.5,<6.0.0.dev0"

@@ -3628,8 +3626,8 @@ files = [
 
 [package.dependencies]
 numpy = [
-    {version = ">=1.23.2", markers = "python_version == \"3.11\""},
     {version = ">=1.22.4", markers = "python_version < \"3.11\""},
+    {version = ">=1.23.2", markers = "python_version == \"3.11\""},
     {version = ">=1.26.0", markers = "python_version >= \"3.12\""},
 ]
 python-dateutil = ">=2.8.2"

@@ -4027,6 +4025,19 @@ files = [
     {file = "pyarrow-17.0.0-cp312-cp312-win_amd64.whl", hash = "sha256:392bc9feabc647338e6c89267635e111d71edad5fcffba204425a7c8d13610d7"},
     {file = "pyarrow-17.0.0-cp38-cp38-macosx_10_15_x86_64.whl", hash = "sha256:af5ff82a04b2171415f1410cff7ebb79861afc5dae50be73ce06d6e870615204"},
     {file = "pyarrow-17.0.0-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:edca18eaca89cd6382dfbcff3dd2d87633433043650c07375d095cd3517561d8"},
+    {file = "pyarrow-17.0.0-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:7c7916bff914ac5d4a8fe25b7a25e432ff921e72f6f2b7547d1e325c1ad9d155"},
+    {file = "pyarrow-17.0.0-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:f553ca691b9e94b202ff741bdd40f6ccb70cdd5fbf65c187af132f1317de6145"},
+    {file = "pyarrow-17.0.0-cp38-cp38-manylinux_2_28_aarch64.whl", hash = "sha256:0cdb0e627c86c373205a2f94a510ac4376fdc523f8bb36beab2e7f204416163c"},
+    {file = "pyarrow-17.0.0-cp38-cp38-manylinux_2_28_x86_64.whl", hash = "sha256:d7d192305d9d8bc9082d10f361fc70a73590a4c65cf31c3e6926cd72b76bc35c"},
+    {file = "pyarrow-17.0.0-cp38-cp38-win_amd64.whl", hash = "sha256:02dae06ce212d8b3244dd3e7d12d9c4d3046945a5933d28026598e9dbbda1fca"},
+    {file = "pyarrow-17.0.0-cp39-cp39-macosx_10_15_x86_64.whl", hash = "sha256:13d7a460b412f31e4c0efa1148e1d29bdf18ad1411eb6757d38f8fbdcc8645fb"},
+    {file = "pyarrow-17.0.0-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:9b564a51fbccfab5a04a80453e5ac6c9954a9c5ef2890d1bcf63741909c3f8df"},
+    {file = "pyarrow-17.0.0-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:32503827abbc5aadedfa235f5ece8c4f8f8b0a3cf01066bc8d29de7539532687"},
+    {file = "pyarrow-17.0.0-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:a155acc7f154b9ffcc85497509bcd0d43efb80d6f733b0dc3bb14e281f131c8b"},
+    {file = "pyarrow-17.0.0-cp39-cp39-manylinux_2_28_aarch64.whl", hash = "sha256:dec8d129254d0188a49f8a1fc99e0560dc1b85f60af729f47de4046015f9b0a5"},
+    {file = "pyarrow-17.0.0-cp39-cp39-manylinux_2_28_x86_64.whl", hash = "sha256:a48ddf5c3c6a6c505904545c25a4ae13646ae1f8ba703c4df4a1bfe4f4006bda"},
+    {file = "pyarrow-17.0.0-cp39-cp39-win_amd64.whl", hash = "sha256:42bf93249a083aca230ba7e2786c5f673507fa97bbd9725a1e2754715151a204"},
+    {file = "pyarrow-17.0.0.tar.gz", hash = "sha256:4beca9521ed2c0921c1023e68d097d0299b62c362639ea315572a58f3f50fd28"},
 ]
 
 [package.dependencies]

@@ -6062,4 +6073,4 @@ tools = ["crewai-tools"]
 [metadata]
 lock-version = "2.0"
 python-versions = ">=3.10,<=3.13"
-content-hash = "fc1b510ea9c814db67ac69d2454071b718cb7f6846bd845f7f48561cb0397ce1"
+content-hash = "91ba982ea96ca7be017d536784223d4ef83e86de05d11eb1c3ce0fc1b726f283"
@@ -62,6 +62,9 @@ ignore_missing_imports = true
 disable_error_code = 'import-untyped'
 exclude = ["cli/templates"]
 
+[tool.bandit]
+exclude_dirs = ["src/crewai/cli/templates"]
+
 [build-system]
 requires = ["poetry-core"]
 build-backend = "poetry.core.masonry.api"
@@ -113,10 +113,11 @@ class Agent(BaseAgent):
         description="Maximum number of retries for an agent to execute a task when an error occurs.",
     )
 
-    def __init__(__pydantic_self__, **data):
-        config = data.pop("config", {})
-        super().__init__(**config, **data)
-        __pydantic_self__.agent_ops_agent_name = __pydantic_self__.role
+    @model_validator(mode="after")
+    def set_agent_ops_agent_name(self) -> "Agent":
+        """Set agent ops agent name."""
+        self.agent_ops_agent_name = self.role
+        return self
 
     @model_validator(mode="after")
     def set_agent_executor(self) -> "Agent":

@@ -213,7 +214,7 @@ class Agent(BaseAgent):
             raise e
         result = self.execute_task(task, context, tools)
 
-        if self.max_rpm:
+        if self.max_rpm and self._rpm_controller:
             self._rpm_controller.stop_rpm_counter()
 
         # If there was any tool in self.tools_results that had result_as_answer
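The refactor above replaces a custom `__init__` with a Pydantic `@model_validator(mode="after")`. As a minimal, generic illustration of that pattern (not CrewAI's actual class), a validator can derive one field from another after the model has been constructed:

```python
from typing import Optional

from pydantic import BaseModel, model_validator


class TrackedAgent(BaseModel):
    role: str
    display_name: Optional[str] = None

    @model_validator(mode="after")
    def derive_display_name(self) -> "TrackedAgent":
        # Runs after field validation, so `role` is guaranteed to be populated.
        if self.display_name is None:
            self.display_name = self.role
        return self


print(TrackedAgent(role="Researcher").display_name)  # "Researcher"
```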
@@ -7,7 +7,6 @@ from typing import Any, Dict, List, Optional, TypeVar
 from pydantic import (
     UUID4,
     BaseModel,
-    ConfigDict,
     Field,
     InstanceOf,
     PrivateAttr,

@@ -74,12 +73,17 @@ class BaseAgent(ABC, BaseModel):
     """
 
     __hash__ = object.__hash__  # type: ignore
-    _logger: Logger = PrivateAttr()
-    _rpm_controller: RPMController = PrivateAttr(default=None)
+    _logger: Logger = PrivateAttr(default_factory=lambda: Logger(verbose=False))
+    _rpm_controller: Optional[RPMController] = PrivateAttr(default=None)
     _request_within_rpm_limit: Any = PrivateAttr(default=None)
-    formatting_errors: int = 0
-    model_config = ConfigDict(arbitrary_types_allowed=True)
+    _original_role: Optional[str] = PrivateAttr(default=None)
+    _original_goal: Optional[str] = PrivateAttr(default=None)
+    _original_backstory: Optional[str] = PrivateAttr(default=None)
+    _token_process: TokenProcess = PrivateAttr(default_factory=TokenProcess)
     id: UUID4 = Field(default_factory=uuid.uuid4, frozen=True)
+    formatting_errors: int = Field(
+        default=0, description="Number of formatting errors."
+    )
     role: str = Field(description="Role of the agent")
     goal: str = Field(description="Objective of the agent")
     backstory: str = Field(description="Backstory of the agent")

@@ -123,15 +127,6 @@ class BaseAgent(ABC, BaseModel):
         default=None, description="Maximum number of tokens for the agent's execution."
     )
 
-    _original_role: str | None = None
-    _original_goal: str | None = None
-    _original_backstory: str | None = None
-    _token_process: TokenProcess = TokenProcess()
-
-    def __init__(__pydantic_self__, **data):
-        config = data.pop("config", {})
-        super().__init__(**config, **data)
-
     @model_validator(mode="after")
     def set_config_attributes(self):
         if self.config:

@@ -170,7 +165,7 @@ class BaseAgent(ABC, BaseModel):
     @property
     def key(self):
         source = [self.role, self.goal, self.backstory]
-        return md5("|".join(source).encode()).hexdigest()
+        return md5("|".join(source).encode(), usedforsecurity=False).hexdigest()
 
     @abstractmethod
     def execute_task(
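The `usedforsecurity=False` flag tells `hashlib` (and security scanners such as Bandit, which this changeset adds to CI) that MD5 is being used only as a non-cryptographic fingerprint. A standalone sketch of the same idea, independent of the CrewAI classes:

```python
from hashlib import md5


def agent_key(role: str, goal: str, backstory: str) -> str:
    """Stable, non-cryptographic fingerprint for identifying an agent definition."""
    source = "|".join([role, goal, backstory])
    # usedforsecurity=False (Python 3.9+) marks this as a checksum, not a secret.
    return md5(source.encode(), usedforsecurity=False).hexdigest()


print(agent_key("Researcher", "Find sources", "Veteran analyst"))
```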
src/crewai/agents/cache/cache_handler.py (vendored, 11 lines changed)

@@ -1,13 +1,12 @@
-from typing import Optional
+from typing import Any, Dict, Optional
 
+from pydantic import BaseModel, PrivateAttr
+
 
-class CacheHandler:
+class CacheHandler(BaseModel):
     """Callback handler for tool usage."""
 
-    _cache: dict = {}
-
-    def __init__(self):
-        self._cache = {}
+    _cache: Dict[str, Any] = PrivateAttr(default_factory=dict)
 
     def add(self, tool, input, output):
         self._cache[f"{tool}-{input}"] = output
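Moving the cache to a Pydantic `PrivateAttr(default_factory=dict)` gives every handler instance its own dictionary instead of relying on a mutable class-level default. A minimal, generic sketch of the difference (not the CrewAI class itself):

```python
from typing import Any, Dict

from pydantic import BaseModel, PrivateAttr


class Cache(BaseModel):
    # Each instance gets a fresh dict; no state is shared between instances.
    _store: Dict[str, Any] = PrivateAttr(default_factory=dict)

    def add(self, tool: str, args: str, output: Any) -> None:
        self._store[f"{tool}-{args}"] = output

    def read(self, tool: str, args: str) -> Any:
        return self._store.get(f"{tool}-{args}")


a, b = Cache(), Cache()
a.add("search", "query", "result")
print(a.read("search", "query"))  # "result"
print(b.read("search", "query"))  # None, because b has its own cache
```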
@@ -1,33 +1,29 @@
 import threading
 import time
 from typing import Any, Dict, Iterator, List, Literal, Optional, Tuple, Union
 
 import click
+
 from langchain.agents import AgentExecutor
 from langchain.agents.agent import ExceptionTool
 from langchain.callbacks.manager import CallbackManagerForChainRun
+from langchain.chains.summarize import load_summarize_chain
+from langchain.text_splitter import RecursiveCharacterTextSplitter
 from langchain_core.agents import AgentAction, AgentFinish, AgentStep
 from langchain_core.exceptions import OutputParserException
 from langchain_core.tools import BaseTool
 from langchain_core.utils.input import get_color_mapping
 from pydantic import InstanceOf
 
-from langchain.text_splitter import RecursiveCharacterTextSplitter
-from langchain.chains.summarize import load_summarize_chain
-
 from crewai.agents.agent_builder.base_agent_executor_mixin import CrewAgentExecutorMixin
 from crewai.agents.tools_handler import ToolsHandler
 from crewai.tools.tool_usage import ToolUsage, ToolUsageErrorException
 from crewai.utilities import I18N
 from crewai.utilities.constants import TRAINING_DATA_FILE
 from crewai.utilities.exceptions.context_window_exceeding_exception import (
     LLMContextLengthExceededException,
 )
-from crewai.utilities.training_handler import CrewTrainingHandler
 from crewai.utilities.logger import Logger
+from crewai.utilities.training_handler import CrewTrainingHandler
 
 
 class CrewAgentExecutor(AgentExecutor, CrewAgentExecutorMixin):

@@ -213,11 +209,7 @@ class CrewAgentExecutor(AgentExecutor, CrewAgentExecutorMixin):
                 yield step
             return
 
-            yield AgentStep(
-                action=AgentAction("_Exception", str(e), str(e)),
-                observation=str(e),
-            )
-            return
+            raise e
 
         # If the tool chosen is the finishing tool, then we end and return.
         if isinstance(output, AgentFinish):
@@ -8,6 +8,7 @@ from crewai.memory.storage.kickoff_task_outputs_storage import (
 )
 
 from .evaluate_crew import evaluate_crew
+from .install_crew import install_crew
 from .replay_from_task import replay_task_command
 from .reset_memories_command import reset_memories_command
 from .run_crew import run_crew

@@ -165,10 +166,16 @@ def test(n_iterations: int, model: str):
     evaluate_crew(n_iterations, model)
 
 
+@crewai.command()
+def install():
+    """Install the Crew."""
+    install_crew()
+
+
 @crewai.command()
 def run():
-    """Run the crew."""
-    click.echo("Running the crew")
+    """Run the Crew."""
+    click.echo("Running the Crew")
     run_crew()
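The new `install` subcommand follows the standard Click group/command pattern. A simplified, self-contained sketch of that pattern (the group and command names mirror the hunk above, but the body is a stand-in rather than the real implementation):

```python
import click


@click.group()
def crewai():
    """Top-level CLI group (sketch of the structure used above)."""


@crewai.command()
def install():
    """Install the Crew."""
    # The real command delegates to install_crew(); here we just echo.
    click.echo("Installing the Crew")


if __name__ == "__main__":
    crewai()
```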
src/crewai/cli/install_crew.py (new file, 21 lines)

@@ -0,0 +1,21 @@
import subprocess

import click


def install_crew() -> None:
    """
    Install the crew by running the Poetry command to lock and install.
    """
    try:
        subprocess.run(["poetry", "lock"], check=True, capture_output=False, text=True)
        subprocess.run(
            ["poetry", "install"], check=True, capture_output=False, text=True
        )

    except subprocess.CalledProcessError as e:
        click.echo(f"An error occurred while running the crew: {e}", err=True)
        click.echo(e.output, err=True)

    except Exception as e:
        click.echo(f"An unexpected error occurred: {e}", err=True)
@@ -14,12 +14,9 @@ pip install poetry
 
 Next, navigate to your project directory and install the dependencies:
 
-1. First lock the dependencies and then install them:
+1. First lock the dependencies and install them by using the CLI command:
 ```bash
-poetry lock
-```
-```bash
-poetry install
+crewai install
 ```
 ### Customizing
 

@@ -37,10 +34,6 @@ To kickstart your crew of AI agents and begin task execution, run this from the
 ```bash
 $ crewai run
 ```
-or
-```bash
-poetry run {{folder_name}}
-```
-
 This command initializes the {{name}} Crew, assembling the agents and assigning them tasks as defined in your configuration.
 
@@ -6,7 +6,8 @@ authors = ["Your Name <you@example.com>"]
 
 [tool.poetry.dependencies]
 python = ">=3.10,<=3.13"
-crewai = { extras = ["tools"], version = "^0.51.0" }
+crewai = { extras = ["tools"], version = ">=0.51.0,<1.0.0" }
+
 
 [tool.poetry.scripts]
 {{folder_name}} = "{{folder_name}}.main:run"
@@ -15,12 +15,11 @@ pip install poetry
 Next, navigate to your project directory and install the dependencies:
 
 1. First lock the dependencies and then install them:
 
 ```bash
-poetry lock
-```
-```bash
-poetry install
+crewai install
 ```
 
 ### Customizing
 
 **Add your `OPENAI_API_KEY` into the `.env` file**

@@ -35,7 +34,7 @@ poetry install
 To kickstart your crew of AI agents and begin task execution, run this from the root folder of your project:
 
 ```bash
-poetry run {{folder_name}}
+crewai run
 ```
 
 This command initializes the {{name}} Crew, assembling the agents and assigning them tasks as defined in your configuration.

@@ -49,6 +48,7 @@ The {{name}} Crew is composed of multiple AI agents, each with unique roles, goa
 ## Support
 
 For support, questions, or feedback regarding the {{crew_name}} Crew or crewAI.
 
 - Visit our [documentation](https://docs.crewai.com)
 - Reach out to us through our [GitHub repository](https://github.com/joaomdmoura/crewai)
 - [Join our Discord](https://discord.com/invite/X4JWnZnxPb)
@@ -6,7 +6,7 @@ authors = ["Your Name <you@example.com>"]
 
 [tool.poetry.dependencies]
 python = ">=3.10,<=3.13"
-crewai = { extras = ["tools"], version = "^0.51.0" }
+crewai = { extras = ["tools"], version = ">=0.51.0,<1.0.0" }
 asyncio = "*"
 
 [tool.poetry.scripts]
@@ -16,10 +16,7 @@ Next, navigate to your project directory and install the dependencies:
 
 1. First lock the dependencies and then install them:
 ```bash
-poetry lock
-```
-```bash
-poetry install
+crewai install
 ```
 ### Customizing
 

@@ -35,7 +32,7 @@ poetry install
 To kickstart your crew of AI agents and begin task execution, run this from the root folder of your project:
 
 ```bash
-poetry run {{folder_name}}
+crewai run
 ```
 
 This command initializes the {{name}} Crew, assembling the agents and assigning them tasks as defined in your configuration.
@@ -6,7 +6,8 @@ authors = ["Your Name <you@example.com>"]
 
 [tool.poetry.dependencies]
 python = ">=3.10,<=3.13"
-crewai = { extras = ["tools"], version = "^0.51.0" }
+crewai = { extras = ["tools"], version = ">=0.51.0,<1.0.0" }
+
 
 [tool.poetry.scripts]
 {{folder_name}} = "{{folder_name}}.main:main"
@@ -1,16 +1,15 @@
 import asyncio
 import json
+import os
 import uuid
 from concurrent.futures import Future
 from hashlib import md5
-import os
 from typing import TYPE_CHECKING, Any, Dict, List, Optional, Tuple, Union
 
 from langchain_core.callbacks import BaseCallbackHandler
 from pydantic import (
     UUID4,
     BaseModel,
-    ConfigDict,
     Field,
     InstanceOf,
     Json,

@@ -48,11 +47,10 @@ from crewai.utilities.planning_handler import CrewPlanner
 from crewai.utilities.task_output_storage_handler import TaskOutputStorageHandler
 from crewai.utilities.training_handler import CrewTrainingHandler
 
-
 agentops = None
 if os.environ.get("AGENTOPS_API_KEY"):
     try:
-        import agentops
+        import agentops  # type: ignore
     except ImportError:
         pass
 

@@ -106,7 +104,6 @@ class Crew(BaseModel):
 
     name: Optional[str] = Field(default=None)
     cache: bool = Field(default=True)
-    model_config = ConfigDict(arbitrary_types_allowed=True)
     tasks: List[Task] = Field(default_factory=list)
     agents: List[BaseAgent] = Field(default_factory=list)
     process: Process = Field(default=Process.sequential)

@@ -364,7 +361,7 @@ class Crew(BaseModel):
         source = [agent.key for agent in self.agents] + [
             task.key for task in self.tasks
         ]
-        return md5("|".join(source).encode()).hexdigest()
+        return md5("|".join(source).encode(), usedforsecurity=False).hexdigest()
 
     def _setup_from_config(self):
         assert self.config is not None, "Config should not be None."

@@ -541,7 +538,7 @@ class Crew(BaseModel):
         )._handle_crew_planning()
 
         for task, step_plan in zip(self.tasks, result.list_of_plans_per_task):
-            task.description += step_plan
+            task.description += step_plan.plan
 
     def _store_execution_log(
         self,
@@ -6,12 +6,20 @@ def task(func):
         task.registration_order = []
 
     func.is_task = True
-    wrapped_func = memoize(func)
+    memoized_func = memoize(func)
 
     # Append the function name to the registration order list
     task.registration_order.append(func.__name__)
 
-    return wrapped_func
+    def wrapper(*args, **kwargs):
+        result = memoized_func(*args, **kwargs)
+
+        if not result.name:
+            result.name = func.__name__
+
+        return result
+
+    return wrapper
 
 
 def agent(func):
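The change above means a task created through the `@task` decorator now falls back to the method's own name when no explicit name was set on the result. A generic, self-contained sketch of that decorator pattern, using a stand-in `memoize` helper and result object rather than CrewAI's real ones:

```python
import functools


def memoize(func):
    """Cache results per argument tuple (simplified stand-in)."""
    cache = {}

    @functools.wraps(func)
    def memoized(*args, **kwargs):
        key = (args, tuple(sorted(kwargs.items())))
        if key not in cache:
            cache[key] = func(*args, **kwargs)
        return cache[key]

    return memoized


class TaskResult:
    def __init__(self, name=None):
        self.name = name


def task(func):
    func.is_task = True
    memoized_func = memoize(func)

    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        result = memoized_func(*args, **kwargs)
        # Fall back to the decorated function's name when none was provided.
        if not result.name:
            result.name = func.__name__
        return result

    return wrapper


@task
def research_task():
    return TaskResult()


print(research_task().name)  # "research_task"
```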
@@ -1,56 +1,45 @@
 import inspect
-import os
 from pathlib import Path
 from typing import Any, Callable, Dict
 
 import yaml
 from dotenv import load_dotenv
-from pydantic import ConfigDict
 
 load_dotenv()
 
 
 def CrewBase(cls):
     class WrappedClass(cls):
-        model_config = ConfigDict(arbitrary_types_allowed=True)
         is_crew_class: bool = True  # type: ignore
 
-        base_directory = None
-        for frame_info in inspect.stack():
-            if "site-packages" not in frame_info.filename:
-                base_directory = Path(frame_info.filename).parent.resolve()
-                break
+        # Get the directory of the class being decorated
+        base_directory = Path(inspect.getfile(cls)).parent
 
         original_agents_config_path = getattr(
             cls, "agents_config", "config/agents.yaml"
         )
         original_tasks_config_path = getattr(cls, "tasks_config", "config/tasks.yaml")
 
         def __init__(self, *args, **kwargs):
             super().__init__(*args, **kwargs)
-            if self.base_directory is None:
-                raise Exception(
-                    "Unable to dynamically determine the project's base directory, you must run it from the project's root directory."
-                )
-
-            self.agents_config = self.load_yaml(
-                os.path.join(self.base_directory, self.original_agents_config_path)
-            )
-            self.tasks_config = self.load_yaml(
-                os.path.join(self.base_directory, self.original_tasks_config_path)
-            )
+            agents_config_path = self.base_directory / self.original_agents_config_path
+            tasks_config_path = self.base_directory / self.original_tasks_config_path
+
+            self.agents_config = self.load_yaml(agents_config_path)
+            self.tasks_config = self.load_yaml(tasks_config_path)
+
             self.map_all_agent_variables()
             self.map_all_task_variables()
 
         @staticmethod
-        def load_yaml(config_path: str):
-            with open(config_path, "r") as file:
-                # parsedContent = YamlParser.parse(file)  # type: ignore # Argument 1 to "parse" has incompatible type "TextIOWrapper"; expected "YamlParser"
-                return yaml.safe_load(file)
+        def load_yaml(config_path: Path):
+            try:
+                with open(config_path, "r") as file:
+                    return yaml.safe_load(file)
+            except FileNotFoundError:
+                print(f"File not found: {config_path}")
+                raise
 
         def _get_all_functions(self):
             return {
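The decorator now resolves config paths relative to the decorated class's own source file via `inspect.getfile`, instead of walking the call stack. A standalone sketch of just that resolution step, assuming a `config/agents.yaml` sitting next to the module that defines the class:

```python
import inspect
from pathlib import Path

import yaml


def load_config_for(cls, relative_path: str = "config/agents.yaml") -> dict:
    # Resolve relative to the file that defines `cls`, not the current working directory.
    base_directory = Path(inspect.getfile(cls)).parent
    config_path = base_directory / relative_path
    try:
        with open(config_path, "r") as file:
            return yaml.safe_load(file)
    except FileNotFoundError:
        print(f"File not found: {config_path}")
        raise
```

This is why, after the change, a crew no longer has to be launched from the project root for its YAML configs to be found.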
@@ -1,24 +1,24 @@
-from typing import Callable, Dict
-
-from pydantic import ConfigDict
+from typing import Any, Callable, Dict, List, Type, Union
 
 from crewai.crew import Crew
 from crewai.pipeline.pipeline import Pipeline
 from crewai.routers.router import Router
 
+PipelineStage = Union[Crew, List[Crew], Router]
+
 
 # TODO: Could potentially remove. Need to check with @joao and @gui if this is needed for CrewAI+
-def PipelineBase(cls):
+def PipelineBase(cls: Type[Any]) -> Type[Any]:
     class WrappedClass(cls):
-        model_config = ConfigDict(arbitrary_types_allowed=True)
-        is_pipeline_class: bool = True
+        is_pipeline_class: bool = True  # type: ignore
+        stages: List[PipelineStage]
 
-        def __init__(self, *args, **kwargs):
+        def __init__(self, *args: Any, **kwargs: Any) -> None:
             super().__init__(*args, **kwargs)
             self.stages = []
             self._map_pipeline_components()
 
-        def _get_all_functions(self):
+        def _get_all_functions(self) -> Dict[str, Callable[..., Any]]:
             return {
                 name: getattr(self, name)
                 for name in dir(self)

@@ -26,15 +26,15 @@ def PipelineBase(cls):
             }
 
         def _filter_functions(
-            self, functions: Dict[str, Callable], attribute: str
-        ) -> Dict[str, Callable]:
+            self, functions: Dict[str, Callable[..., Any]], attribute: str
+        ) -> Dict[str, Callable[..., Any]]:
             return {
                 name: func
                 for name, func in functions.items()
                 if hasattr(func, attribute)
             }
 
-        def _map_pipeline_components(self):
+        def _map_pipeline_components(self) -> None:
             all_functions = self._get_all_functions()
             crew_functions = self._filter_functions(all_functions, "is_crew")
             router_functions = self._filter_functions(all_functions, "is_router")
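`_get_all_functions` and `_filter_functions` implement a simple marker-attribute discovery pattern: decorators tag methods (for example with `is_crew` or `is_router`) and the wrapper later collects them by attribute. A generic, runnable sketch of that pattern, independent of CrewAI's decorators:

```python
from typing import Any, Callable, Dict


def mark(attribute: str):
    """Decorator factory that tags a function with a marker attribute."""
    def decorator(func: Callable[..., Any]) -> Callable[..., Any]:
        setattr(func, attribute, True)
        return func
    return decorator


class Registry:
    @mark("is_crew")
    def research_crew(self) -> str:
        return "crew"

    @mark("is_router")
    def choose_route(self) -> str:
        return "router"

    def _filter_functions(self, attribute: str) -> Dict[str, Callable[..., Any]]:
        # Collect bound methods that carry the marker attribute.
        functions = {
            name: getattr(self, name)
            for name in dir(self)
            if callable(getattr(self, name))
        }
        return {name: f for name, f in functions.items() if hasattr(f, attribute)}


print(list(Registry()._filter_functions("is_crew")))  # ['research_crew']
```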
@@ -1,32 +1,26 @@
 from copy import deepcopy
-from typing import Any, Callable, Dict, Generic, Tuple, TypeVar
+from typing import Any, Callable, Dict, Tuple

 from pydantic import BaseModel, Field, PrivateAttr

-T = TypeVar("T", bound=Dict[str, Any])
-U = TypeVar("U")
+class Route(BaseModel):
+condition: Callable[[Dict[str, Any]], bool]
+pipeline: Any


-class Route(Generic[T, U]):
+class Router(BaseModel):
-condition: Callable[[T], bool]
+routes: Dict[str, Route] = Field(
-pipeline: U

-def __init__(self, condition: Callable[[T], bool], pipeline: U):
-self.condition = condition
-self.pipeline = pipeline


-class Router(BaseModel, Generic[T, U]):
-routes: Dict[str, Route[T, U]] = Field(
 default_factory=dict,
 description="Dictionary of route names to (condition, pipeline) tuples",
 )
-default: U = Field(..., description="Default pipeline if no conditions are met")
+default: Any = Field(..., description="Default pipeline if no conditions are met")
 _route_types: Dict[str, type] = PrivateAttr(default_factory=dict)

-model_config = {"arbitrary_types_allowed": True}
+class Config:
+arbitrary_types_allowed = True

-def __init__(self, routes: Dict[str, Route[T, U]], default: U, **data):
+def __init__(self, routes: Dict[str, Route], default: Any, **data):
 super().__init__(routes=routes, default=default, **data)
 self._check_copyable(default)
 for name, route in routes.items():
@@ -34,16 +28,16 @@ class Router(BaseModel, Generic[T, U]):
 self._route_types[name] = type(route.pipeline)

 @staticmethod
-def _check_copyable(obj):
+def _check_copyable(obj: Any) -> None:
 if not hasattr(obj, "copy") or not callable(getattr(obj, "copy")):
 raise ValueError(f"Object of type {type(obj)} must have a 'copy' method")

 def add_route(
 self,
 name: str,
-condition: Callable[[T], bool],
+condition: Callable[[Dict[str, Any]], bool],
-pipeline: U,
+pipeline: Any,
-) -> "Router[T, U]":
+) -> "Router":
 """
 Add a named route with its condition and corresponding pipeline to the router.

@@ -60,7 +54,7 @@ class Router(BaseModel, Generic[T, U]):
 self._route_types[name] = type(pipeline)
 return self

-def route(self, input_data: T) -> Tuple[U, str]:
+def route(self, input_data: Dict[str, Any]) -> Tuple[Any, str]:
 """
 Evaluate the input against the conditions and return the appropriate pipeline.

@@ -76,15 +70,15 @@ class Router(BaseModel, Generic[T, U]):

 return self.default, "default"

-def copy(self) -> "Router[T, U]":
+def copy(self) -> "Router":
 """Create a deep copy of the Router."""
 new_routes = {
 name: Route(
 condition=deepcopy(route.condition),
-pipeline=route.pipeline.copy(), # type: ignore
+pipeline=route.pipeline.copy(),
 )
 for name, route in self.routes.items()
 }
-new_default = self.default.copy() # type: ignore
+new_default = self.default.copy()

 return Router(routes=new_routes, default=new_default)
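The refactor above drops the `Generic[T, U]` parameters in favor of plain `Dict[str, Any]` conditions and `Any` pipelines. The sketch below shows how a router of this shape is typically used. The classes are simplified stand-ins for the `Route`/`Router` in the diff, the `route()` body is an assumption about the unchanged logic (first matching condition wins, otherwise the default), and `FakePipeline` is a hypothetical object with the `copy()` method the real router requires.

from typing import Any, Callable, Dict, Tuple

from pydantic import BaseModel, Field


class Route(BaseModel):
    # Same shape as the Route in the diff: a predicate over the input dict
    # plus the pipeline to run when it matches.
    condition: Callable[[Dict[str, Any]], bool]
    pipeline: Any


class Router(BaseModel):
    routes: Dict[str, Route] = Field(default_factory=dict)
    default: Any = Field(...)

    def route(self, input_data: Dict[str, Any]) -> Tuple[Any, str]:
        # Assumed behavior: return the first route whose condition matches,
        # otherwise fall back to the default pipeline.
        for name, route in self.routes.items():
            if route.condition(input_data):
                return route.pipeline, name
        return self.default, "default"


class FakePipeline:
    def __init__(self, label: str) -> None:
        self.label = label

    def copy(self) -> "FakePipeline":
        return FakePipeline(self.label)


router = Router(
    routes={
        "high": Route(condition=lambda x: x.get("score", 0) > 80, pipeline=FakePipeline("high")),
        "mid": Route(condition=lambda x: x.get("score", 0) > 50, pipeline=FakePipeline("mid")),
    },
    default=FakePipeline("default"),
)

pipeline, name = router.route({"score": 90})
print(name, pipeline.label)  # high high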
@@ -9,7 +9,14 @@ from hashlib import md5
 from typing import Any, Dict, List, Optional, Tuple, Type, Union

 from opentelemetry.trace import Span
-from pydantic import UUID4, BaseModel, Field, field_validator, model_validator
+from pydantic import (
+UUID4,
+BaseModel,
+Field,
+PrivateAttr,
+field_validator,
+model_validator,
+)
 from pydantic_core import PydanticCustomError

 from crewai.agents.agent_builder.base_agent import BaseAgent
@@ -39,9 +46,6 @@ class Task(BaseModel):
 tools: List of tools/resources limited for task execution.
 """

-class Config:
-arbitrary_types_allowed = True

 __hash__ = object.__hash__ # type: ignore
 used_tools: int = 0
 tools_errors: int = 0
@@ -104,16 +108,12 @@ class Task(BaseModel):
 default=None,
 )

-_telemetry: Telemetry
+_telemetry: Telemetry = PrivateAttr(default_factory=Telemetry)
-_execution_span: Span | None = None
+_execution_span: Optional[Span] = PrivateAttr(default=None)
-_original_description: str | None = None
+_original_description: Optional[str] = PrivateAttr(default=None)
-_original_expected_output: str | None = None
+_original_expected_output: Optional[str] = PrivateAttr(default=None)
-_thread: threading.Thread | None = None
+_thread: Optional[threading.Thread] = PrivateAttr(default=None)
-_execution_time: float | None = None
+_execution_time: Optional[float] = PrivateAttr(default=None)

-def __init__(__pydantic_self__, **data):
-config = data.pop("config", {})
-super().__init__(**config, **data)

 @field_validator("id", mode="before")
 @classmethod
@@ -137,12 +137,6 @@ class Task(BaseModel):
 return value[1:]
 return value

-@model_validator(mode="after")
-def set_private_attrs(self) -> "Task":
-"""Set private attributes."""
-self._telemetry = Telemetry()
-return self

 @model_validator(mode="after")
 def set_attributes_based_on_config(self) -> "Task":
 """Set attributes based on the agent configuration."""
@@ -185,7 +179,7 @@ class Task(BaseModel):
 expected_output = self._original_expected_output or self.expected_output
 source = [description, expected_output]

-return md5("|".join(source).encode()).hexdigest()
+return md5("|".join(source).encode(), usedforsecurity=False).hexdigest()

 def execute_async(
 self,
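For reference, `usedforsecurity=False` marks the MD5 call above as a plain fingerprint rather than a cryptographic use, which keeps it working on FIPS-restricted Python builds; the keyword is available on Python 3.9 and newer. A quick standalone check of the same call shape:

from hashlib import md5

# Fingerprint a task the same way the hunk above does: join the pieces and
# hash them, flagging the digest as non-security use (Python 3.9+).
source = ["Describe the task", "Expected output"]
fingerprint = md5("|".join(source).encode(), usedforsecurity=False).hexdigest()
print(fingerprint)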
@@ -240,7 +234,9 @@ class Task(BaseModel):
 pydantic_output, json_output = self._export_output(result)

 task_output = TaskOutput(
+name=self.name,
 description=self.description,
+expected_output=self.expected_output,
 raw=result,
 pydantic=pydantic_output,
 json_dict=json_output,
@@ -261,9 +257,7 @@ class Task(BaseModel):
 content = (
 json_output
 if json_output
-else pydantic_output.model_dump_json()
+else pydantic_output.model_dump_json() if pydantic_output else result
-if pydantic_output
-else result
 )
 self._save_file(content)

@@ -10,6 +10,10 @@ class TaskOutput(BaseModel):
 """Class that represents the result of a task."""

 description: str = Field(description="Description of the task")
+name: Optional[str] = Field(description="Name of the task", default=None)
+expected_output: Optional[str] = Field(
+description="Expected output of the task", default=None
+)
 summary: Optional[str] = Field(description="Summary of the task", default=None)
 raw: str = Field(description="Raw output of the task", default="")
 pydantic: Optional[BaseModel] = Field(
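Both new fields default to `None`, which is why the test expectations later in this diff include `"name": None` when a task has no explicit name. A small sketch of the field shape, assuming pydantic v2's `model_dump`; this is a simplified stand-in, not the real `TaskOutput`:

from typing import Optional

from pydantic import BaseModel, Field


class TaskOutputSketch(BaseModel):
    # Simplified stand-in for the TaskOutput fields touched in this hunk.
    description: str = Field(description="Description of the task")
    name: Optional[str] = Field(description="Name of the task", default=None)
    expected_output: Optional[str] = Field(
        description="Expected output of the task", default=None
    )
    raw: str = Field(description="Raw output of the task", default="")


out = TaskOutputSketch(description="Test task", raw="Task output")
print(out.model_dump())  # name and expected_output default to None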
@@ -295,7 +295,7 @@ class Telemetry:
 pass

 def individual_test_result_span(
-self, crew: Crew, quality: int, exec_time: int, model_name: str
+self, crew: Crew, quality: float, exec_time: int, model_name: str
 ):
 if self.ready:
 try:
@@ -1,5 +1,5 @@
 from langchain.tools import StructuredTool
-from pydantic import BaseModel, ConfigDict, Field
+from pydantic import BaseModel, Field

 from crewai.agents.cache import CacheHandler

@@ -7,11 +7,10 @@ from crewai.agents.cache import CacheHandler
 class CacheTools(BaseModel):
 """Default tools to hit the cache."""

-model_config = ConfigDict(arbitrary_types_allowed=True)
 name: str = "Hit Cache"
 cache_handler: CacheHandler = Field(
 description="Cache Handler for the crew",
-default=CacheHandler(),
+default_factory=CacheHandler,
 )

 def tool(self):
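The `default=CacheHandler()` to `default_factory=CacheHandler` change matters because a `default=` value is built once, while the class body executes, whereas a `default_factory` is called again for every model instance. A minimal, runnable illustration of that timing difference, assuming pydantic v2 and using a hypothetical `Handler` class as a stand-in for the real `CacheHandler`:

from pydantic import BaseModel, ConfigDict, Field


class Handler:
    constructed = 0

    def __init__(self) -> None:
        Handler.constructed += 1


class EagerDefault(BaseModel):
    model_config = ConfigDict(arbitrary_types_allowed=True)

    # Handler() runs right here, while the class body is being defined.
    handler: Handler = Field(default=Handler())


print(Handler.constructed)  # 1 -- built before any model instance exists


class LazyDefault(BaseModel):
    model_config = ConfigDict(arbitrary_types_allowed=True)

    # The factory is only called when an instance is actually created.
    handler: Handler = Field(default_factory=Handler)


print(Handler.constructed)  # still 1
LazyDefault()
LazyDefault()
print(Handler.constructed)  # 3 -- one fresh Handler per instance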
@@ -1,6 +1,6 @@
 import ast
-from difflib import SequenceMatcher
 import os
+from difflib import SequenceMatcher
 from textwrap import dedent
 from typing import Any, List, Union

@@ -15,7 +15,7 @@ from crewai.utilities import I18N, Converter, ConverterError, Printer
 agentops = None
 if os.environ.get("AGENTOPS_API_KEY"):
 try:
-import agentops
+import agentops # type: ignore
 except ImportError:
 pass

@@ -118,7 +118,7 @@ class ToolUsage:
 tool: BaseTool,
 calling: Union[ToolCalling, InstructorToolCalling],
 ) -> str: # TODO: Fix this return type
-tool_event = agentops.ToolEvent(name=calling.tool_name) if agentops else None
+tool_event = agentops.ToolEvent(name=calling.tool_name) if agentops else None # type: ignore
 if self._check_tool_repeated_usage(calling=calling): # type: ignore # _check_tool_repeated_usage of "ToolUsage" does not return a value (it only ever returns None)
 try:
 result = self._i18n.errors("task_repeated_usage").format(
@@ -1,13 +1,13 @@
 from datetime import datetime

+from pydantic import BaseModel, Field, PrivateAttr

 from crewai.utilities.printer import Printer


-class Logger:
+class Logger(BaseModel):
-_printer = Printer()
+verbose: bool = Field(default=False)
+_printer: Printer = PrivateAttr(default_factory=Printer)

-def __init__(self, verbose=False):
-self.verbose = verbose

 def log(self, level, message, color="bold_green"):
 if self.verbose:
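Rewriting `Logger` as a pydantic model replaces the hand-written `__init__` with a declared `verbose` field and moves the printer into a `PrivateAttr`, so it is created per instance and excluded from validation and serialization. A runnable sketch of the same shape; `FakePrinter` and the timestamped message format are assumptions, since `crewai.utilities.printer.Printer` and the body of `log` are not shown in this hunk:

from datetime import datetime

from pydantic import BaseModel, Field, PrivateAttr


class FakePrinter:
    # Stand-in for crewai's Printer; just writes to stdout.
    def print(self, content: str, color: str = "bold_green") -> None:
        print(f"[{color}] {content}")


class Logger(BaseModel):
    verbose: bool = Field(default=False)
    _printer: FakePrinter = PrivateAttr(default_factory=FakePrinter)

    def log(self, level: str, message: str, color: str = "bold_green") -> None:
        if self.verbose:
            timestamp = datetime.now().strftime("%Y-%m-%d %H:%M:%S")
            self._printer.print(f"[{timestamp}][{level.upper()}]: {message}", color=color)


Logger(verbose=True).log("info", "pydantic-managed logger")
Logger().log("info", "suppressed because verbose defaults to False")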
@@ -1,14 +1,25 @@
 from typing import Any, List, Optional

 from langchain_openai import ChatOpenAI
-from pydantic import BaseModel
+from pydantic import BaseModel, Field

 from crewai.agent import Agent
 from crewai.task import Task


+class PlanPerTask(BaseModel):
+task: str = Field(..., description="The task for which the plan is created")
+plan: str = Field(
+...,
+description="The step by step plan on how the agents can execute their tasks using the available tools with mastery",
+)


 class PlannerTaskPydanticOutput(BaseModel):
-list_of_plans_per_task: List[str]
+list_of_plans_per_task: List[PlanPerTask] = Field(
+...,
+description="Step by step plan on how the agents can execute their tasks using the available tools with mastery",
+)


 class CrewPlanner:
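Moving from `List[str]` to `List[PlanPerTask]` gives the planner output a structured schema instead of free-form strings. A small sketch of what the new output shape looks like when populated; the field names come from the diff, the shortened descriptions and sample data are illustrative, and the surrounding `CrewPlanner` is omitted:

from typing import List

from pydantic import BaseModel, Field


class PlanPerTask(BaseModel):
    task: str = Field(..., description="The task for which the plan is created")
    plan: str = Field(..., description="Step by step plan for executing the task")


class PlannerTaskPydanticOutput(BaseModel):
    list_of_plans_per_task: List[PlanPerTask] = Field(...)


output = PlannerTaskPydanticOutput(
    list_of_plans_per_task=[
        PlanPerTask(task="Research", plan="1. Gather sources 2. Summarize findings"),
        PlanPerTask(task="Write", plan="1. Draft outline 2. Expand each section"),
    ]
)
print(output.model_dump_json(indent=2))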
@@ -1,44 +1,50 @@
 import threading
 import time
-from typing import Union
+from typing import Optional

-from pydantic import BaseModel, ConfigDict, Field, PrivateAttr, model_validator
+from pydantic import BaseModel, Field, PrivateAttr, model_validator

 from crewai.utilities.logger import Logger


 class RPMController(BaseModel):
-model_config = ConfigDict(arbitrary_types_allowed=True)
+max_rpm: Optional[int] = Field(default=None)
-max_rpm: Union[int, None] = Field(default=None)
+logger: Logger = Field(default_factory=lambda: Logger(verbose=False))
-logger: Logger = Field(default=None)
 _current_rpm: int = PrivateAttr(default=0)
-_timer: threading.Timer | None = PrivateAttr(default=None)
+_timer: Optional[threading.Timer] = PrivateAttr(default=None)
-_lock: threading.Lock = PrivateAttr(default=None)
+_lock: Optional[threading.Lock] = PrivateAttr(default=None)
-_shutdown_flag = False
+_shutdown_flag: bool = PrivateAttr(default=False)

 @model_validator(mode="after")
 def reset_counter(self):
-if self.max_rpm:
+if self.max_rpm is not None:
 if not self._shutdown_flag:
 self._lock = threading.Lock()
 self._reset_request_count()
 return self

 def check_or_wait(self):
-if not self.max_rpm:
+if self.max_rpm is None:
 return True

-with self._lock:
+def _check_and_increment():
-if self._current_rpm < self.max_rpm:
+if self.max_rpm is not None and self._current_rpm < self.max_rpm:
 self._current_rpm += 1
 return True
-else:
+elif self.max_rpm is not None:
 self.logger.log(
 "info", "Max RPM reached, waiting for next minute to start."
 )
 self._wait_for_next_minute()
 self._current_rpm = 1
 return True
+return True

+if self._lock:
+with self._lock:
+return _check_and_increment()
+else:
+return _check_and_increment()

 def stop_rpm_counter(self):
 if self._timer:
@@ -50,10 +56,18 @@ class RPMController(BaseModel):
 self._current_rpm = 0

 def _reset_request_count(self):
-with self._lock:
+def _reset():
 self._current_rpm = 0
+if not self._shutdown_flag:
+self._timer = threading.Timer(60.0, self._reset_request_count)
+self._timer.start()

+if self._lock:
+with self._lock:
+_reset()
+else:
+_reset()

 if self._timer:
 self._shutdown_flag = True
 self._timer.cancel()
-self._timer = threading.Timer(60.0, self._reset_request_count)
-self._timer.start()
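The reworked `check_or_wait` wraps the counter update in a nested `_check_and_increment` helper and only takes the lock when one was actually created, which happens only when `max_rpm` is set. A stripped-down, runnable sketch of that guard pattern, without the pydantic plumbing, the timer, or the real wait-for-next-minute logic, which are simplified away here:

import threading
from typing import Optional


class RateLimiter:
    def __init__(self, max_rpm: Optional[int] = None) -> None:
        self.max_rpm = max_rpm
        self._current_rpm = 0
        # The lock only exists when a limit is configured, as in the diff.
        self._lock: Optional[threading.Lock] = threading.Lock() if max_rpm else None

    def check_or_wait(self) -> bool:
        if self.max_rpm is None:
            return True  # no limit configured: never block

        def _check_and_increment() -> bool:
            if self._current_rpm < self.max_rpm:
                self._current_rpm += 1
                return True
            # The real controller waits for the next minute before resetting;
            # here we just report that the limit was hit and start a new window.
            print("Max RPM reached")
            self._current_rpm = 1
            return True

        if self._lock:
            with self._lock:
                return _check_and_increment()
        return _check_and_increment()


limiter = RateLimiter(max_rpm=2)
for _ in range(3):
    limiter.check_or_wait()  # the third call prints "Max RPM reached"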
@@ -4,11 +4,6 @@ from unittest import mock
 from unittest.mock import patch

 import pytest
-from langchain.tools import tool
-from langchain_core.exceptions import OutputParserException
-from langchain_openai import ChatOpenAI
-from langchain.schema import AgentAction

 from crewai import Agent, Crew, Task
 from crewai.agents.cache import CacheHandler
 from crewai.agents.executor import CrewAgentExecutor
@@ -16,6 +11,10 @@ from crewai.agents.parser import CrewAgentParser
 from crewai.tools.tool_calling import InstructorToolCalling
 from crewai.tools.tool_usage import ToolUsage
 from crewai.utilities import RPMController
+from langchain.schema import AgentAction
+from langchain.tools import tool
+from langchain_core.exceptions import OutputParserException
+from langchain_openai import ChatOpenAI


 def test_agent_creation():
@@ -817,7 +816,7 @@ def test_agent_definition_based_on_dict():
 "verbose": True,
 }

-agent = Agent(config=config)
+agent = Agent(**config)

 assert agent.role == "test role"
 assert agent.goal == "test goal"
@@ -837,7 +836,7 @@ def test_agent_human_input():
 "backstory": "test backstory",
 }

-agent = Agent(config=config)
+agent = Agent(**config)

 task = Task(
 agent=agent,
@@ -8,7 +8,6 @@ from unittest.mock import MagicMock, patch

 import pydantic_core
 import pytest

 from crewai.agent import Agent
 from crewai.agents.cache import CacheHandler
 from crewai.crew import Crew

@@ -25,14 +25,20 @@ def mock_crew_factory():
 MockCrewClass = type("MockCrew", (MagicMock, Crew), {})

 class MockCrew(MockCrewClass):
-def __deepcopy__(self, memo):
+def __deepcopy__(self):
 result = MockCrewClass()
 result.kickoff_async = self.kickoff_async
 result.name = self.name
 return result

+def copy(
+self,
+):
+return self

 crew = MockCrew()
 crew.name = name

 task_output = TaskOutput(
 description="Test task", raw="Task output", agent="Test Agent"
 )
@@ -44,9 +50,15 @@ def mock_crew_factory():
 pydantic=pydantic_output,
 )

-async def async_kickoff(inputs=None):
+async def kickoff_async(inputs=None):
 return crew_output

+# Create an AsyncMock for kickoff_async
+crew.kickoff_async = AsyncMock(side_effect=kickoff_async)

+# Mock the synchronous kickoff method
+crew.kickoff = MagicMock(return_value=crew_output)

 # Add more attributes that Procedure might be expecting
 crew.verbose = False
 crew.output_log_file = None
@@ -56,30 +68,16 @@ def mock_crew_factory():
 crew.config = None
 crew.cache = True

-# # Create a valid Agent instance
+# Add non-empty agents and tasks
-mock_agent = Agent(
+mock_agent = MagicMock(spec=Agent)
-name="Mock Agent",
+mock_task = MagicMock(spec=Task)
-role="Mock Role",
+mock_task.agent = mock_agent
-goal="Mock Goal",
+mock_task.async_execution = False
-backstory="Mock Backstory",
+mock_task.context = None
-allow_delegation=False,
-verbose=False,
-)

-# Create a valid Task instance
-mock_task = Task(
-description="Return: Test output",
-expected_output="Test output",
-agent=mock_agent,
-async_execution=False,
-context=None,
-)

 crew.agents = [mock_agent]
 crew.tasks = [mock_task]

-crew.kickoff_async = AsyncMock(side_effect=async_kickoff)

 return crew

 return _create_mock_crew
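The reworked fixture replaces real `Agent`/`Task` objects with `MagicMock(spec=...)` stand-ins and wires `kickoff_async` through an `AsyncMock` whose `side_effect` is a small coroutine, so the assertions later in the tests can inspect the recorded calls. A self-contained sketch of that mocking pattern; the names below are illustrative, not the fixture's:

import asyncio
from unittest.mock import AsyncMock, MagicMock


async def main() -> None:
    crew_output = {"raw": "Test output"}

    async def kickoff_async(inputs=None):
        # Coroutine used as the mock's side_effect; it is awaited on each call.
        return crew_output

    crew = MagicMock()
    crew.kickoff_async = AsyncMock(side_effect=kickoff_async)
    crew.kickoff = MagicMock(return_value=crew_output)

    result = await crew.kickoff_async(inputs={"initial": "data"})
    assert result == crew_output
    crew.kickoff_async.assert_called_once_with(inputs={"initial": "data"})


asyncio.run(main())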
@@ -115,9 +113,7 @@ def mock_router_factory(mock_crew_factory):
 (
 "route1"
 if x.get("score", 0) > 80
-else "route2"
+else "route2" if x.get("score", 0) > 50 else "default"
-if x.get("score", 0) > 50
-else "default"
 ),
 )
 )
@@ -477,31 +473,17 @@ async def test_pipeline_with_parallel_stages_end_in_single_stage(mock_crew_facto
 """
 Test that Pipeline correctly handles parallel stages.
 """
-crew1 = Crew(name="Crew 1", tasks=[task], agents=[agent])
+crew1 = mock_crew_factory(name="Crew 1")
-crew2 = Crew(name="Crew 2", tasks=[task], agents=[agent])
+crew2 = mock_crew_factory(name="Crew 2")
-crew3 = Crew(name="Crew 3", tasks=[task], agents=[agent])
+crew3 = mock_crew_factory(name="Crew 3")
-crew4 = Crew(name="Crew 4", tasks=[task], agents=[agent])
+crew4 = mock_crew_factory(name="Crew 4")

 pipeline = Pipeline(stages=[crew1, [crew2, crew3], crew4])
 input_data = [{"initial": "data"}]

 pipeline_result = await pipeline.kickoff(input_data)

-with patch.object(Crew, "kickoff_async") as mock_kickoff:
+crew1.kickoff_async.assert_called_once_with(inputs={"initial": "data"})
-mock_kickoff.return_value = CrewOutput(
-raw="Test output",
-tasks_output=[
-TaskOutput(
-description="Test task", raw="Task output", agent="Test Agent"
-)
-],
-token_usage=DEFAULT_TOKEN_USAGE,
-json_dict=None,
-pydantic=None,
-)
-pipeline_result = await pipeline.kickoff(input_data)

-mock_kickoff.assert_called_with(inputs={"initial": "data"})

 assert len(pipeline_result) == 1
 pipeline_result_1 = pipeline_result[0]
@@ -649,33 +631,21 @@ Options:


 @pytest.mark.asyncio
-async def test_pipeline_data_accumulation():
+async def test_pipeline_data_accumulation(mock_crew_factory):
-crew1 = Crew(name="Crew 1", tasks=[task], agents=[agent])
+crew1 = mock_crew_factory(name="Crew 1", output_json_dict={"key1": "value1"})
-crew2 = Crew(name="Crew 2", tasks=[task], agents=[agent])
+crew2 = mock_crew_factory(name="Crew 2", output_json_dict={"key2": "value2"})

 pipeline = Pipeline(stages=[crew1, crew2])
 input_data = [{"initial": "data"}]
 results = await pipeline.kickoff(input_data)

-with patch.object(Crew, "kickoff_async") as mock_kickoff:
+# Check that crew1 was called with only the initial input
-mock_kickoff.side_effect = [
+crew1.kickoff_async.assert_called_once_with(inputs={"initial": "data"})
-CrewOutput(
-raw="Test output from Crew 1",
-tasks_output=[],
-token_usage=DEFAULT_TOKEN_USAGE,
-json_dict={"key1": "value1"},
-pydantic=None,
-),
-CrewOutput(
-raw="Test output from Crew 2",
-tasks_output=[],
-token_usage=DEFAULT_TOKEN_USAGE,
-json_dict={"key2": "value2"},
-pydantic=None,
-),
-]

-results = await pipeline.kickoff(input_data)
+# Check that crew2 was called with the combined input from the initial data and crew1's output
+crew2.kickoff_async.assert_called_once_with(
+inputs={"initial": "data", "key1": "value1"}
+)

 # Check the final output
 assert len(results) == 1
@@ -14,6 +14,14 @@ class SimpleCrew:
 def simple_task(self):
 return Task(description="Simple Description", expected_output="Simple Output")

+@task
+def custom_named_task(self):
+return Task(
+description="Simple Description",
+expected_output="Simple Output",
+name="Custom",
+)


 def test_agent_memoization():
 crew = SimpleCrew()
@@ -33,3 +41,15 @@ def test_task_memoization():
 assert (
 first_call_result is second_call_result
 ), "Task memoization is not working as expected"


+def test_task_name():
+simple_task = SimpleCrew().simple_task()
+assert (
+simple_task.name == "simple_task"
+), "Task name is not inferred from function name as expected"

+custom_named_task = SimpleCrew().custom_named_task()
+assert (
+custom_named_task.name == "Custom"
+), "Custom task name is not being set as expected"
@@ -1,8 +1,8 @@
 """Test Agent creation and execution basic functionality."""

-import os
 import hashlib
 import json
+import os
 from unittest.mock import MagicMock, patch

 import pytest
@@ -98,6 +98,7 @@ def test_task_callback():
 task_completed = MagicMock(return_value="done")

 task = Task(
+name="Brainstorm",
 description="Give me a list of 5 interesting ideas to explore for na article, what makes them unique and interesting.",
 expected_output="Bullet point list of 5 interesting ideas.",
 agent=researcher,
@@ -109,6 +110,10 @@ def test_task_callback():
 task.execute_sync(agent=researcher)
 task_completed.assert_called_once_with(task.output)

+assert task.output.description == task.description
+assert task.output.expected_output == task.expected_output
+assert task.output.name == task.name


 def test_task_callback_returns_task_output():
 from crewai.tasks.output_format import OutputFormat
@@ -149,6 +154,8 @@ def test_task_callback_returns_task_output():
 "json_dict": None,
 "agent": researcher.role,
 "summary": "Give me a list of 5 interesting ideas to explore...",
+"name": None,
+"expected_output": "Bullet point list of 5 interesting ideas.",
 "output_format": OutputFormat.RAW,
 }
 assert output_dict == expected_output
@@ -696,7 +703,7 @@ def test_task_definition_based_on_dict():
 "expected_output": "The score of the title.",
 }

-task = Task(config=config)
+task = Task(**config)

 assert task.description == config["description"]
 assert task.expected_output == config["expected_output"]
@@ -709,7 +716,7 @@ def test_conditional_task_definition_based_on_dict():
 "expected_output": "The score of the title.",
 }

-task = ConditionalTask(config=config, condition=lambda x: True)
+task = ConditionalTask(**config, condition=lambda x: True)

 assert task.description == config["description"]
 assert task.expected_output == config["expected_output"]
@@ -6,7 +6,11 @@ from langchain_openai import ChatOpenAI
 from crewai.agent import Agent
 from crewai.task import Task
 from crewai.tasks.task_output import TaskOutput
-from crewai.utilities.planning_handler import CrewPlanner, PlannerTaskPydanticOutput
+from crewai.utilities.planning_handler import (
+CrewPlanner,
+PlannerTaskPydanticOutput,
+PlanPerTask,
+)


 class TestCrewPlanner:
@@ -44,12 +48,17 @@ class TestCrewPlanner:
 return CrewPlanner(tasks, planning_agent_llm)

 def test_handle_crew_planning(self, crew_planner):
+list_of_plans_per_task = [
+PlanPerTask(task="Task1", plan="Plan 1"),
+PlanPerTask(task="Task2", plan="Plan 2"),
+PlanPerTask(task="Task3", plan="Plan 3"),
+]
 with patch.object(Task, "execute_sync") as execute:
 execute.return_value = TaskOutput(
 description="Description",
 agent="agent",
 pydantic=PlannerTaskPydanticOutput(
-list_of_plans_per_task=["Plan 1", "Plan 2", "Plan 3"]
+list_of_plans_per_task=list_of_plans_per_task
 ),
 )
 result = crew_planner._handle_crew_planning()
@@ -91,7 +100,9 @@ class TestCrewPlanner:
 execute.return_value = TaskOutput(
 description="Description",
 agent="agent",
-pydantic=PlannerTaskPydanticOutput(list_of_plans_per_task=["Plan 1"]),
+pydantic=PlannerTaskPydanticOutput(
+list_of_plans_per_task=[PlanPerTask(task="Task1", plan="Plan 1")]
+),
 )
 result = crew_planner_different_llm._handle_crew_planning()
