Compare commits

...

84 Commits

Author SHA1 Message Date
Thiago Moretto
2ef5cc37a9 Get initial state type from generic 2024-09-13 15:47:46 -03:00
Brandon Hancock
ba8fbed30a More changes and todos 2024-09-13 12:47:43 -04:00
Brandon Hancock
abfd121f99 Everything is working 2024-09-12 16:21:12 -04:00
Brandon Hancock
72f0b600b8 template working 2024-09-12 11:33:46 -04:00
Brandon Hancock
a028566bd6 WIP. Working on adding and & or to flows. In the middle of setting up template for flow as well 2024-09-11 16:26:02 -04:00
Brandon Hancock
3a266d6b40 Everything is working 2024-09-11 13:01:36 -04:00
Brandon Hancock
a4a14df72e Working but not clean enough 2024-09-11 11:59:12 -04:00
Brandon Hancock
8664f3912b It fully works but not clean enough 2024-09-11 10:45:24 -04:00
Brandon Hancock
d67c12a5a3 Almost working! 2024-09-11 10:29:54 -04:00
Paul Nugent
322780a5f3 Merge pull request #1315 from crewAIInc/docs_update
update readme.md
2024-09-10 15:26:00 +01:00
Rip&Tear
a54d34ea5b update readme.md 2024-09-10 21:56:46 +08:00
João Moura
bc793749a5 preparing new version with deploy 2024-09-07 11:17:12 -07:00
João Moura
a9916940ef preparing new version 0.55.1 2024-09-07 10:16:07 -07:00
João Moura
b7f4931de5 updating dependencies 2024-09-07 00:55:21 -07:00
João Moura
327b728bef preparing to cut new version 2024-09-07 00:34:34 -07:00
Sean
a9510eec88 Update LLM-Connections.md (#1181)
* Update LLM-Connections.md

* Update LLM-Connections.md

---------

Co-authored-by: João Moura <joaomdmoura@gmail.com>
2024-09-07 04:31:09 -03:00
Brandon Hancock (bhancock_ai)
d6db557f50 Update regex (#1228) 2024-09-07 04:27:58 -03:00
Brandon Hancock (bhancock_ai)
5ae56e3f72 add in 2 small improvements based on joao feedback (#1264) 2024-09-07 04:13:23 -03:00
Astha Puri
1c9ebb59b1 Update Start-a-New-CrewAI-Project-Template-Method.md (#1276)
* Update Start-a-New-CrewAI-Project-Template-Method.md

* Update Start-a-New-CrewAI-Project-Template-Method.md

---------

Co-authored-by: João Moura <joaomdmoura@gmail.com>
2024-09-07 04:12:51 -03:00
Astha Puri
f520ceeb0d Add missing virtual environment commands (#1277)
* Add missing virtual environment commands

* Update Start-a-New-CrewAI-Project-Template-Method.md

---------

Co-authored-by: João Moura <joaomdmoura@gmail.com>
2024-09-07 04:12:04 -03:00
Astha Puri
0df4d2fd4b Update Tasks.md (#1279) 2024-09-07 04:05:56 -03:00
Rip&Tear
596491d932 Update readme.md (#1294)
* Update pyproject.toml

More GH link updates

* Added FAQ section in README.md

---------

Co-authored-by: João Moura <joaomdmoura@gmail.com>
2024-09-07 03:59:48 -03:00
Ali Waleed
72fb109147 Fix: Langtrace Docs (#1297)
* fix langtrace docs

* remove gif for size constraint
2024-09-07 03:58:27 -03:00
Brandon Hancock (bhancock_ai)
40b336d2a5 Brandon/cre 256 default template crew isn't running properly (#1299)
* Update config typecheck to accept agents

* Clean up prints
2024-09-07 03:57:36 -03:00
anmol-aidora
5958df71a2 Updated CrewAI Documentation and Repository link in tools.poetry.urls (#1305)
* Updated CrewAI Documentation and Repository link in tools.poetry.urls

* Update pyproject.toml

---------

Co-authored-by: João Moura <joaomdmoura@gmail.com>
2024-09-07 03:55:02 -03:00
Brandon Hancock (bhancock_ai)
26d9af8367 Brandon/cre 252 add agent to crewai test (#1308)
* Update config typecheck to accept agents

* Clean up prints

* Adding agents to crew evaluator output table

* Properly generating table now

* Update tests
2024-09-07 03:53:23 -03:00
Brandon Hancock (bhancock_ai)
cdaf2d41c7 move away from pydantic v1 (#1284) 2024-09-06 14:22:01 -04:00
Paul Nugent
d9ee104167 Merge pull request #1290 from crewAIInc/DOCS/readme_update
Docs/readme update
2024-09-06 16:18:31 +01:00
Rip&Tear
0b9eeb7cdb Revert "feat: Improve documentation for Conditional Tasks in crewAI"
This reverts commit 18a2722e4d.
2024-09-05 10:30:08 +08:00
Rip&Tear
9b558ddc51 Revert "docs: Improve "Creating and Utilizing Tools in crewAI" documentation"
This reverts commit b955416458.
2024-09-05 10:30:00 +08:00
Rip&Tear
b857afe45b Revert "feat: Improve documentation for TXTSearchTool"
This reverts commit d2fab55561.
2024-09-05 10:29:03 +08:00
Rip&Tear
1d77c8de10 feat: Improve documentation for TXTSearchTool
Updated wording positioning
2024-09-05 10:27:11 +08:00
Rip&Tear
503f3a6372 Update README.md
Updated  GitHub links to point to new Repos
2024-09-05 10:17:46 +08:00
Rip&Tear (aider)
d2fab55561 feat: Improve documentation for TXTSearchTool 2024-09-05 00:06:11 +08:00
Rip&Tear (aider)
b955416458 docs: Improve "Creating and Utilizing Tools in crewAI" documentation 2024-09-04 18:31:09 +08:00
Rip&Tear (aider)
18a2722e4d feat: Improve documentation for Conditional Tasks in crewAI 2024-09-04 18:26:12 +08:00
Rip&Tear
c7e8d55926 Merge pull request #1273 from Astha0024/main
Update README.md with default model
2024-09-01 00:43:40 +08:00
Astha Puri
48698bf0b7 Merge branch 'main' into main 2024-08-30 21:58:02 -04:00
Thiago Moretto
f79b3fc322 Merge pull request #1269 from crewAIInc/tm-fix-cli-for-py310
Add py 3.10 support back to CLI + fixes
2024-08-30 13:39:04 -03:00
Thiago Moretto
0b9e753c2f Add comment to warn about dro simple_toml_parser 2024-08-30 11:52:53 -03:00
Astha Puri
5b3f7be1c4 Update README.md 2024-08-30 06:55:31 -04:00
Astha Puri
f2208f5f8e Update README.md 2024-08-30 06:54:34 -04:00
João Moura
79b5248b83 preparing new version 2024-08-30 00:33:51 -03:00
João Moura
d4791bef28 updating deployment cli with 2024-08-30 00:32:18 -03:00
João Moura
d861cb0d74 updating docs 2024-08-30 00:15:06 -03:00
João Moura
67f19f79c2 removing base_model from telemetry 2024-08-29 23:35:05 -03:00
Thiago Moretto
5f359b14f7 Fix test 2024-08-29 15:58:47 -03:00
Thiago Moretto
cda1900b14 Read as str, not bytes
+handle when project_name is None (fails, basically)
2024-08-29 15:17:51 -03:00
Thiago Moretto
c8c0a89dc6 Fix type checking + lint 2024-08-29 15:02:19 -03:00
Thiago Moretto
9a10cc15f4 Add python 3.10 support back to CLI +fixes 2024-08-29 14:37:34 -03:00
Thiago Moretto
345f1eacde Get current crewai version from poetry.lock 2024-08-29 11:14:04 -03:00
Thiago Moretto
fa937bf3a7 Add Python 3.10 support to CLI 2024-08-29 10:22:54 -03:00
mvanwyk
172758020c bug: fix incorrect mkdocs site_url (#1238)
* bug: fix incorrect mkdocs site_url

* bug: fix incorrect mkdocs repo_url

Co-authored-by: Eduardo Chiarotti <dudumelgaco@hotmail.com>

---------

Co-authored-by: Eduardo Chiarotti <dudumelgaco@hotmail.com>
2024-08-24 15:45:59 -03:00
Brandon Hancock (bhancock_ai)
5ff178084e Fix deployment name issue to support Azure (#1253)
* Fix deployment name issue to support Azure

* More carefully check attrs on llm
2024-08-23 12:58:37 -04:00
Brandon Hancock (bhancock_ai)
c012e0ff8d Update async docs with more examples (#1254)
* Update async docs with more examples

* Add use cases
2024-08-23 12:51:58 -04:00
Eduardo Chiarotti
f777c1c2e0 fix: All files pre commit (#1249) 2024-08-23 10:52:36 -03:00
Paul Nugent
782ce22d99 Update LLM-Connections.md (#1190)
Added missing quotes around os.environ

Co-authored-by: Eduardo Chiarotti <dudumelgaco@hotmail.com>
2024-08-23 10:39:06 -03:00
Eduardo Chiarotti
f5246039e5 Feat/cli deploy (#1240)
* feat: set basic structure deploy commands

* feat: add first iteration of CLI Deploy

* feat: some minor refactor

* feat: Add api, Deploy command and update cli

* feat: Remove test token

* feat: add auth0 lib, update cli and improve code

* feat: update code and decouple auth

* fix: parts of the code

* feat: Add token manager to encrypt access token and get and save tokens

* feat: add audience to constants

* feat: add subsystem saving credentials and remove comment of type hinting

* feat: add get crew version to send on header of request

* feat: add docstrings

* feat: add tests for authentication module

* feat: add tests for utils

* feat: add unit tests for cli

* feat: add tests

* feat: add deploy man tests

* feat: fix type checking issue

* feat: rename tests to pass ci

* feat: fix pr issues

* feat: fix get crewai version

* fix: add timeout for tests.yml
2024-08-23 10:20:03 -03:00
Rip&Tear
4736604b4d Merge pull request #1239 from ShuHuang/patch-1
Bugfix: Update LLM-Connections.md
2024-08-23 09:49:46 +08:00
Shu Huang
09cba0135e Bugfix: Update LLM-Connections.md
The original code doesn't work due to a comma
2024-08-22 14:39:15 +01:00
Brandon Hancock (bhancock_ai)
8119edb495 Brandon/cre 211 fix agent and task config for yaml based projects (#1211)
* Fixed agents. Now need to fix tasks.

* Add type fixes and fix task decorator

* Clean up logs

* fix more type errors

* Revert back to required

* Undo changes.

* Remove default none for properties that cannot be none

* Clean up comments

* Implement all of Gui's feedback
2024-08-20 09:31:02 -04:00
William Espegren
17bffb0803 docs: add spider docs (#1165)
* docs: add spider docs

* chore: add "Spider scraper" to mkdocs.yml
2024-08-20 07:53:04 -03:00
Rip&Tear
cbe139fced Merge pull request #1216 from theCyberTech/main 2024-08-20 18:32:04 +08:00
Eduardo Chiarotti
946d8567fe feat: Add only on release to deploy docs (#1212) 2024-08-20 07:26:50 -03:00
Rip&Tear
7b5d5bdeef Merge pull request #2 from theCyberTech/theCyberTech-operations-per-run
Update operations-per-run in stale.yml
2024-08-20 12:54:55 +08:00
Rip&Tear
a1551bcf2b Update operations-per-run in stale.yml
operations-per-run: 1200

This will allow for complete cleanup of all existing issues.
2024-08-20 12:54:26 +08:00
Rip&Tear
5495825b1d Merge pull request #1206 from theCyberTech/main
Create Cli.md
2024-08-17 21:51:13 +08:00
Rip&Tear
6e36f84cc6 Update Cli.md 2024-08-17 20:55:46 +08:00
Rip&Tear
cddf2d8f7c Create Cli.md
Added initial Cli.md to help users get info on Cli commands
2024-08-17 20:06:31 +08:00
Rip&Tear
5f17e35c5a Merge pull request #1205 from theCyberTech/theCyberTech-stale-fix
Update stale.yml
2024-08-17 20:00:43 +08:00
Eduardo Chiarotti
231a833ad0 feat: Add crewai install CLI command (#1203)
* feat: Add crewai install CLI command

* feat: Add crewai install to the docs and force now crewai run
2024-08-17 08:41:53 -03:00
Rip&Tear
a870295d42 Update stale.yml
Added  
operations-per-run: 500
2024-08-17 19:16:31 +08:00
Rip&Tear
ddda8f6bda Merge pull request #1194 from crewAIInc/docs_update
Updated Documentation to fix minor issues + minor .github fixes
2024-08-17 08:14:17 +08:00
Brandon Hancock (bhancock_ai)
bf7372fefa Adding Autocomplete to OSS (#1198)
* Cleaned up model_config

* Fix pydantic issues

* 99% done with autocomplete

* fixed test issues

* Fix type checking issues
2024-08-16 15:04:21 -04:00
Brandon Hancock (bhancock_ai)
3451b6fc7a Clean up pipeline (#1187)
* Clean up pipeline

* Make versioning dynamic in templates

* fix .env issues when openai is trying to use invalid keys

* Fix type checker issue in pipeline

* Fix tests.
2024-08-16 14:47:28 -04:00
Vini Brasil
dbf2570353 Add name and expected_output to TaskOutput (#1199)
* Add name and expected_output to TaskOutput

This commit adds task information to the TaskOutput class. This is
useful to provide extra context to callbacks.

* Populate task name from function names

This commit populates task name from function names when using
annotations.
2024-08-15 22:24:41 +01:00
Eduardo Chiarotti
d0707fac91 feat: Add bandit ci pipeline (#1200)
* feat: Add bandit ci pipeline

* feat: add useforsecurty false for bandit pipeline

* feat: Add report only for High severity issues
2024-08-15 18:19:57 -03:00
theCyberTech
35ebdd6022 Updated Documentation to fix navigation link for pipeline feature, removed legacy md file from .github & added missing config.yml config to remove custom issues from user access 2024-08-15 16:35:05 +08:00
Rip&Tear
92a77e5cac Merge pull request #1183 from crewAIInc/feature-templates
Feature templates
2024-08-15 11:29:36 +08:00
Rip&Tear
a2922c9ad5 Merge pull request #1182 from crewAIInc/git-temaplates
updated bug report template to yml for more control
2024-08-15 11:28:31 +08:00
Eduardo Chiarotti
9f9b52dd26 fix: Fix planning_llm issue (#1189)
* fix: Fix planning_llm issue

* fix: add poetry.lock updated version

* fix: type checking issues

* fix: tests
2024-08-14 18:54:53 -03:00
theCyberTech
2482c7ab68 Added feature request template in YAML format
Added config .yml to remove blank template
2024-08-14 15:49:55 +08:00
theCyberTech
7fdabda97e updated bug report template to yml for more control 2024-08-14 15:08:59 +08:00
Eduardo Chiarotti
7306414de7 docs: fix references to annotations (#1176) 2024-08-13 12:58:12 -03:00
113 changed files with 5052 additions and 1865 deletions

View File

@@ -1,35 +0,0 @@
---
name: Bug report
about: Create a report to help us improve CrewAI
title: "[BUG]"
labels: bug
assignees: ''
---
**Description**
Provide a clear and concise description of what the bug is.
**Steps to Reproduce**
Provide a step-by-step process to reproduce the behavior:
**Expected behavior**
A clear and concise description of what you expected to happen.
**Screenshots/Code snippets**
If applicable, add screenshots or code snippets to help explain your problem.
**Environment Details:**
- **Operating System**: [e.g., Ubuntu 20.04, macOS Catalina, Windows 10]
- **Python Version**: [e.g., 3.8, 3.9, 3.10]
- **crewAI Version**: [e.g., 0.30.11]
- **crewAI Tools Version**: [e.g., 0.2.6]
**Logs**
Include relevant logs or error messages if applicable.
**Possible Solution**
Have a solution in mind? Please suggest it here, or write "None".
**Additional context**
Add any other context about the problem here.

116
.github/ISSUE_TEMPLATE/bug_report.yml vendored Normal file
View File

@@ -0,0 +1,116 @@
name: Bug report
description: Create a report to help us improve CrewAI
title: "[BUG]"
labels: ["bug"]
assignees: []
body:
  - type: textarea
    id: description
    attributes:
      label: Description
      description: Provide a clear and concise description of what the bug is.
    validations:
      required: true
  - type: textarea
    id: steps-to-reproduce
    attributes:
      label: Steps to Reproduce
      description: Provide a step-by-step process to reproduce the behavior.
      placeholder: |
        1. Go to '...'
        2. Click on '....'
        3. Scroll down to '....'
        4. See error
    validations:
      required: true
  - type: textarea
    id: expected-behavior
    attributes:
      label: Expected behavior
      description: A clear and concise description of what you expected to happen.
    validations:
      required: true
  - type: textarea
    id: screenshots-code
    attributes:
      label: Screenshots/Code snippets
      description: If applicable, add screenshots or code snippets to help explain your problem.
    validations:
      required: true
  - type: dropdown
    id: os
    attributes:
      label: Operating System
      description: Select the operating system you're using
      options:
        - Ubuntu 20.04
        - Ubuntu 22.04
        - Ubuntu 24.04
        - macOS Catalina
        - macOS Big Sur
        - macOS Monterey
        - macOS Ventura
        - macOS Sonoma
        - Windows 10
        - Windows 11
        - Other (specify in additional context)
    validations:
      required: true
  - type: dropdown
    id: python-version
    attributes:
      label: Python Version
      description: Version of Python your Crew is running on
      options:
        - '3.10'
        - '3.11'
        - '3.12'
        - '3.13'
    validations:
      required: true
  - type: input
    id: crewai-version
    attributes:
      label: crewAI Version
      description: What version of CrewAI are you using
    validations:
      required: true
  - type: input
    id: crewai-tools-version
    attributes:
      label: crewAI Tools Version
      description: What version of CrewAI Tools are you using
    validations:
      required: true
  - type: dropdown
    id: virtual-environment
    attributes:
      label: Virtual Environment
      description: What Virtual Environment are you running your crew in.
      options:
        - Venv
        - Conda
        - Poetry
    validations:
      required: true
  - type: textarea
    id: evidence
    attributes:
      label: Evidence
      description: Include relevant information, logs or error messages. These can be screenshots.
    validations:
      required: true
  - type: textarea
    id: possible-solution
    attributes:
      label: Possible Solution
      description: Have a solution in mind? Please suggest it here, or write "None".
    validations:
      required: true
  - type: textarea
    id: additional-context
    attributes:
      label: Additional context
      description: Add any other context about the problem here.
    validations:
      required: true

1
.github/ISSUE_TEMPLATE/config.yml vendored Normal file
View File

@@ -0,0 +1 @@
blank_issues_enabled: false

View File

@@ -1,24 +0,0 @@
---
name: Custom issue template
about: Describe this issue template's purpose here.
title: "[DOCS]"
labels: documentation
assignees: ''
---
## Documentation Page
<!-- Provide a link to the documentation page that needs improvement -->
## Description
<!-- Describe what needs to be changed or improved in the documentation -->
## Suggested Changes
<!-- If possible, provide specific suggestions for how to improve the documentation -->
## Additional Context
<!-- Add any other context about the documentation issue here -->
## Checklist
- [ ] I have searched the existing issues to make sure this is not a duplicate
- [ ] I have checked the latest version of the documentation to ensure this hasn't been addressed

View File

@@ -0,0 +1,65 @@
name: Feature request
description: Suggest a new feature for CrewAI
title: "[FEATURE]"
labels: ["feature-request"]
assignees: []
body:
  - type: markdown
    attributes:
      value: |
        Thanks for taking the time to fill out this feature request!
  - type: dropdown
    id: feature-area
    attributes:
      label: Feature Area
      description: Which area of CrewAI does this feature primarily relate to?
      options:
        - Core functionality
        - Agent capabilities
        - Task management
        - Integration with external tools
        - Performance optimization
        - Documentation
        - Other (please specify in additional context)
    validations:
      required: true
  - type: textarea
    id: problem
    attributes:
      label: Is your feature request related to an existing bug? Please link it here.
      description: A link to the bug or NA if not related to an existing bug.
    validations:
      required: true
  - type: textarea
    id: solution
    attributes:
      label: Describe the solution you'd like
      description: A clear and concise description of what you want to happen.
    validations:
      required: true
  - type: textarea
    id: alternatives
    attributes:
      label: Describe alternatives you've considered
      description: A clear and concise description of any alternative solutions or features you've considered.
    validations:
      required: false
  - type: textarea
    id: context
    attributes:
      label: Additional context
      description: Add any other context, screenshots, or examples about the feature request here.
    validations:
      required: false
  - type: dropdown
    id: willingness-to-contribute
    attributes:
      label: Willingness to Contribute
      description: Would you be willing to contribute to the implementation of this feature?
      options:
        - Yes, I'd be happy to submit a pull request
        - I could provide more detailed specifications
        - I can test the feature once it's implemented
        - No, I'm just suggesting the idea
    validations:
      required: true

View File

@@ -1,10 +1,8 @@
name: Deploy MkDocs
on:
  workflow_dispatch:
  push:
    branches:
      - main
  release:
    types: [published]
permissions:
  contents: write

23
.github/workflows/security-checker.yml vendored Normal file
View File

@@ -0,0 +1,23 @@
name: Security Checker
on: [pull_request]
jobs:
  security-check:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v4
      - name: Set up Python
        uses: actions/setup-python@v4
        with:
          python-version: "3.11.9"
      - name: Install dependencies
        run: pip install bandit
      - name: Run Bandit
        run: bandit -c pyproject.toml -r src/ -lll

View File

@@ -24,3 +24,4 @@ jobs:
stale-pr-message: 'This PR is stale because it has been open for 45 days with no activity.'
days-before-pr-stale: 45
days-before-pr-close: -1
operations-per-run: 1200

View File

@@ -11,6 +11,7 @@ env:
jobs:
  deploy:
    runs-on: ubuntu-latest
    timeout-minutes: 15
    steps:
      - name: Checkout code

View File

@@ -8,11 +8,11 @@
<h3>
[Homepage](https://www.crewai.io/) | [Documentation](https://docs.crewai.com/) | [Chat with Docs](https://chatg.pt/DWjSBZn) | [Examples](https://github.com/joaomdmoura/crewai-examples) | [Discord](https://discord.com/invite/X4JWnZnxPb)
[Homepage](https://www.crewai.com/) | [Documentation](https://docs.crewai.com/) | [Chat with Docs](https://chatg.pt/DWjSBZn) | [Examples](https://github.com/crewAIInc/crewAI-examples) | [Discourse](https://community.crewai.com)
</h3>
[![GitHub Repo stars](https://img.shields.io/github/stars/joaomdmoura/crewAI)](https://github.com/joaomdmoura/crewAI)
[![GitHub Repo stars](https://img.shields.io/github/stars/joaomdmoura/crewAI)](https://github.com/crewAIInc/crewAI)
[![License: MIT](https://img.shields.io/badge/License-MIT-green.svg)](https://opensource.org/licenses/MIT)
</div>
@@ -73,6 +73,7 @@ os.environ["SERPER_API_KEY"] = "Your Key" # serper.dev API key
# You can pass an optional llm attribute specifying what model you want to use.
# It can be a local model through Ollama / LM Studio or a remote
# model like OpenAI, Mistral, Anthropic or others (https://docs.crewai.com/how-to/LLM-Connections/)
# If you don't specify a model, the default is OpenAI gpt-4o
#
# import os
# os.environ['OPENAI_MODEL_NAME'] = 'gpt-3.5-turbo'
@@ -153,12 +154,12 @@ In addition to the sequential process, you can use the hierarchical process, whi
## Examples
You can test different real life examples of AI crews in the [crewAI-examples repo](https://github.com/joaomdmoura/crewAI-examples?tab=readme-ov-file):
You can test different real life examples of AI crews in the [crewAI-examples repo](https://github.com/crewAIInc/crewAI-examples?tab=readme-ov-file):
- [Landing Page Generator](https://github.com/joaomdmoura/crewAI-examples/tree/main/landing_page_generator)
- [Landing Page Generator](https://github.com/crewAIInc/crewAI-examples/tree/main/landing_page_generator)
- [Having Human input on the execution](https://docs.crewai.com/how-to/Human-Input-on-Execution)
- [Trip Planner](https://github.com/joaomdmoura/crewAI-examples/tree/main/trip_planner)
- [Stock Analysis](https://github.com/joaomdmoura/crewAI-examples/tree/main/stock_analysis)
- [Trip Planner](https://github.com/crewAIInc/crewAI-examples/tree/main/trip_planner)
- [Stock Analysis](https://github.com/crewAIInc/crewAI-examples/tree/main/stock_analysis)
### Quick Tutorial
@@ -166,19 +167,19 @@ You can test different real life examples of AI crews in the [crewAI-examples re
### Write Job Descriptions
[Check out code for this example](https://github.com/joaomdmoura/crewAI-examples/tree/main/job-posting) or watch a video below:
[Check out code for this example](https://github.com/crewAIInc/crewAI-examples/tree/main/job-posting) or watch a video below:
[![Jobs postings](https://img.youtube.com/vi/u98wEMz-9to/maxresdefault.jpg)](https://www.youtube.com/watch?v=u98wEMz-9to "Jobs postings")
### Trip Planner
[Check out code for this example](https://github.com/joaomdmoura/crewAI-examples/tree/main/trip_planner) or watch a video below:
[Check out code for this example](https://github.com/crewAIInc/crewAI-examples/tree/main/trip_planner) or watch a video below:
[![Trip Planner](https://img.youtube.com/vi/xis7rWp-hjs/maxresdefault.jpg)](https://www.youtube.com/watch?v=xis7rWp-hjs "Trip Planner")
### Stock Analysis
[Check out code for this example](https://github.com/joaomdmoura/crewAI-examples/tree/main/stock_analysis) or watch a video below:
[Check out code for this example](https://github.com/crewAIInc/crewAI-examples/tree/main/stock_analysis) or watch a video below:
[![Stock Analysis](https://img.youtube.com/vi/e0Uj4yWdaAg/maxresdefault.jpg)](https://www.youtube.com/watch?v=e0Uj4yWdaAg "Stock Analysis")
@@ -190,13 +191,12 @@ Please refer to the [Connect crewAI to LLMs](https://docs.crewai.com/how-to/LLM-
## How CrewAI Compares
**CrewAI's Advantage**: CrewAI is built with production in mind. It offers the flexibility of Autogen's conversational agents and the structured process approach of ChatDev, but without the rigidity. CrewAI's processes are designed to be dynamic and adaptable, fitting seamlessly into both development and production workflows.
- **Autogen**: While Autogen does well at creating conversational agents capable of working together, it lacks an inherent concept of process. In Autogen, orchestrating agents' interactions requires additional programming, which can become complex and cumbersome as the scale of tasks grows.
- **ChatDev**: ChatDev introduced the idea of processes into the realm of AI agents, but its implementation is quite rigid. Customizations in ChatDev are limited and not geared towards production environments, which can hinder scalability and flexibility in real-world applications.
**CrewAI's Advantage**: CrewAI is built with production in mind. It offers the flexibility of Autogen's conversational agents and the structured process approach of ChatDev, but without the rigidity. CrewAI's processes are designed to be dynamic and adaptable, fitting seamlessly into both development and production workflows.
## Contribution
CrewAI is open-source and we welcome contributions. If you're looking to contribute, please:
@@ -284,3 +284,39 @@ Users can opt-in to Further Telemetry, sharing the complete telemetry data by se
## License
CrewAI is released under the MIT License.
## Frequently Asked Questions (FAQ)
### Q: What is CrewAI?
A: CrewAI is a cutting-edge framework for orchestrating role-playing, autonomous AI agents. It enables agents to work together seamlessly, tackling complex tasks through collaborative intelligence.
### Q: How do I install CrewAI?
A: You can install CrewAI using pip:
```shell
pip install crewai
```
For additional tools, use:
```shell
pip install 'crewai[tools]'
```
### Q: Can I use CrewAI with local models?
A: Yes, CrewAI supports various LLMs, including local models. You can configure your agents to use local models via tools like Ollama & LM Studio. Check the [LLM Connections documentation](https://docs.crewai.com/how-to/LLM-Connections/) for more details.
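For instance, a minimal sketch of the environment-variable approach used elsewhere in these docs, assuming a local OpenAI-compatible server such as Ollama (the base URL and model name below are illustrative, not defaults):
```python
import os

# Hypothetical values: point the OpenAI-compatible client at a local server.
os.environ["OPENAI_API_BASE"] = "http://localhost:11434/v1"  # e.g. an Ollama endpoint
os.environ["OPENAI_MODEL_NAME"] = "llama3"                   # illustrative model name
os.environ["OPENAI_API_KEY"] = "NA"                          # local servers typically ignore the key
```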
### Q: What are the key features of CrewAI?
A: Key features include role-based agent design, autonomous inter-agent delegation, flexible task management, process-driven execution, output saving as files, and compatibility with both open-source and proprietary models.
### Q: How does CrewAI compare to other AI orchestration tools?
A: CrewAI is designed with production in mind, offering flexibility similar to Autogen's conversational agents and structured processes like ChatDev, but with more adaptability for real-world applications.
### Q: Is CrewAI open-source?
A: Yes, CrewAI is open-source and welcomes contributions from the community.
### Q: Does CrewAI collect any data?
A: CrewAI uses anonymous telemetry to collect usage data for improvement purposes. No sensitive data (like prompts, task descriptions, or API calls) is collected. Users can opt-in to share more detailed data by setting `share_crew=True` on their Crews.
### Q: Where can I find examples of CrewAI in action?
A: You can find various real-life examples in the [crewAI-examples repository](https://github.com/crewAIInc/crewAI-examples), including trip planners, stock analysis tools, and more.
### Q: How can I contribute to CrewAI?
A: Contributions are welcome! You can fork the repository, create a new branch for your feature, add your improvement, and send a pull request. Check the Contribution section in the README for more details.

Binary file not shown.

Before

Width:  |  Height:  |  Size: 1.0 MiB

Binary file not shown.

Before

Width:  |  Height:  |  Size: 810 KiB

BIN
docs/assets/langtrace1.png Normal file

Binary file not shown.

After

Width:  |  Height:  |  Size: 223 KiB

BIN
docs/assets/langtrace2.png Normal file

Binary file not shown.

After

Width:  |  Height:  |  Size: 204 KiB

BIN
docs/assets/langtrace3.png Normal file

Binary file not shown.

After

Width:  |  Height:  |  Size: 295 KiB

142
docs/core-concepts/Cli.md Normal file
View File

@@ -0,0 +1,142 @@
# CrewAI CLI Documentation
The CrewAI CLI provides a set of commands to interact with CrewAI, allowing you to create, train, run, and manage crews and pipelines.
## Installation
To use the CrewAI CLI, make sure you have CrewAI & Poetry installed:
```
pip install crewai poetry
```
## Basic Usage
The basic structure of a CrewAI CLI command is:
```
crewai [COMMAND] [OPTIONS] [ARGUMENTS]
```
## Available Commands
### 1. create
Create a new crew or pipeline.
```
crewai create [OPTIONS] TYPE NAME
```
- `TYPE`: Choose between "crew" or "pipeline"
- `NAME`: Name of the crew or pipeline
- `--router`: (Optional) Create a pipeline with router functionality
Example:
```
crewai create crew my_new_crew
crewai create pipeline my_new_pipeline --router
```
### 2. version
Show the installed version of CrewAI.
```
crewai version [OPTIONS]
```
- `--tools`: (Optional) Show the installed version of CrewAI tools
Example:
```
crewai version
crewai version --tools
```
### 3. train
Train the crew for a specified number of iterations.
```
crewai train [OPTIONS]
```
- `-n, --n_iterations INTEGER`: Number of iterations to train the crew (default: 5)
- `-f, --filename TEXT`: Path to a custom file for training (default: "trained_agents_data.pkl")
Example:
```
crewai train -n 10 -f my_training_data.pkl
```
### 4. replay
Replay the crew execution from a specific task.
```
crewai replay [OPTIONS]
```
- `-t, --task_id TEXT`: Replay the crew from this task ID, including all subsequent tasks
Example:
```
crewai replay -t task_123456
```
### 5. log_tasks_outputs
Retrieve your latest crew.kickoff() task outputs.
```
crewai log_tasks_outputs
```
### 6. reset_memories
Reset the crew memories (long, short, entity, latest_crew_kickoff_outputs).
```
crewai reset_memories [OPTIONS]
```
- `-l, --long`: Reset LONG TERM memory
- `-s, --short`: Reset SHORT TERM memory
- `-e, --entities`: Reset ENTITIES memory
- `-k, --kickoff-outputs`: Reset LATEST KICKOFF TASK OUTPUTS
- `-a, --all`: Reset ALL memories
Example:
```
crewai reset_memories --long --short
crewai reset_memories --all
```
### 7. test
Test the crew and evaluate the results.
```
crewai test [OPTIONS]
```
- `-n, --n_iterations INTEGER`: Number of iterations to test the crew (default: 3)
- `-m, --model TEXT`: LLM Model to run the tests on the Crew (default: "gpt-4o-mini")
Example:
```
crewai test -n 5 -m gpt-3.5-turbo
```
### 8. run
Run the crew.
```
crewai run
```
## Note
Make sure to run these commands from the directory where your CrewAI project is set up. Some commands may require additional configuration or setup within your project structure.
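For orientation, a rough end-to-end session using the commands documented above might look like this (a sketch; the project name is illustrative, and `crewai install` is covered in the project-template docs):
```
crewai create crew my_crew    # scaffold a new crew project
cd my_crew
crewai install                # install dependencies via Poetry
crewai run                    # kick off the crew
crewai replay -t task_123456  # optionally replay from a specific task ID
```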

View File

@@ -32,8 +32,8 @@ Each input creates its own run, flowing through all stages of the pipeline. Mult
## Pipeline Attributes
| Attribute | Parameters | Description |
| :--------- | :--------- | :------------------------------------------------------------------------------------ |
| Attribute | Parameters | Description |
| :--------- | :--------- | :---------------------------------------------------------------------------------------------- |
| **Stages** | `stages` | A list of crews, lists of crews, or routers representing the stages to be executed in sequence. |
## Creating a Pipeline
@@ -239,7 +239,7 @@ email_router = Router(
pipeline=normal_pipeline
)
},
default=Pipeline(stages=[normal_pipeline]) # Default to just classification if no urgency score
default=Pipeline(stages=[normal_pipeline]) # Default to just normal if no urgency score
)
# Use the router in a main pipeline

View File

@@ -131,6 +131,7 @@ research_agent = Agent(
verbose=True
)
# to perform a semantic search for a specified query from a text's content across the internet
search_tool = SerperDevTool()
task = Task(
@@ -312,4 +313,4 @@ save_output_task = Task(
## Conclusion
Tasks are the driving force behind the actions of agents in crewAI. By properly defining tasks and their outcomes, you set the stage for your AI agents to work effectively, either independently or as a collaborative unit. Equipping tasks with appropriate tools, understanding the execution process, and following robust validation practices are crucial for maximizing CrewAI's potential, ensuring agents are effectively prepared for their assignments and that tasks are executed as intended.
Tasks are the driving force behind the actions of agents in crewAI. By properly defining tasks and their outcomes, you set the stage for your AI agents to work effectively, either independently or as a collaborative unit. Equipping tasks with appropriate tools, understanding the execution process, and following robust validation practices are crucial for maximizing CrewAI's potential, ensuring agents are effectively prepared for their assignments and that tasks are executed as intended.

View File

@@ -0,0 +1,129 @@
# Creating a CrewAI Pipeline Project
Welcome to the comprehensive guide for creating a new CrewAI pipeline project. This document will walk you through the steps to create, customize, and run your CrewAI pipeline project, ensuring you have everything you need to get started.
To learn more about CrewAI pipelines, visit the [CrewAI documentation](https://docs.crewai.com/core-concepts/Pipeline/).
## Prerequisites
Before getting started with CrewAI pipelines, make sure that you have installed CrewAI via pip:
```shell
$ pip install crewai crewai-tools
```
The same prerequisites for virtual environments and Code IDEs apply as in regular CrewAI projects.
## Creating a New Pipeline Project
To create a new CrewAI pipeline project, you have two options:
1. For a basic pipeline template:
```shell
$ crewai create pipeline <project_name>
```
2. For a pipeline example that includes a router:
```shell
$ crewai create pipeline --router <project_name>
```
These commands will create a new project folder with the following structure:
```
<project_name>/
├── README.md
├── poetry.lock
├── pyproject.toml
├── src/
│   └── <project_name>/
│       ├── __init__.py
│       ├── main.py
│       ├── crews/
│       │   ├── crew1/
│       │   │   ├── crew1.py
│       │   │   └── config/
│       │   │       ├── agents.yaml
│       │   │       └── tasks.yaml
│       │   ├── crew2/
│       │   │   ├── crew2.py
│       │   │   └── config/
│       │   │       ├── agents.yaml
│       │   │       └── tasks.yaml
│       ├── pipelines/
│       │   ├── __init__.py
│       │   ├── pipeline1.py
│       │   └── pipeline2.py
│       └── tools/
│           ├── __init__.py
│           └── custom_tool.py
└── tests/
```
## Customizing Your Pipeline Project
To customize your pipeline project, you can:
1. Modify the crew files in `src/<project_name>/crews/` to define your agents and tasks for each crew.
2. Modify the pipeline files in `src/<project_name>/pipelines/` to define your pipeline structure.
3. Modify `src/<project_name>/main.py` to set up and run your pipelines.
4. Add your environment variables into the `.env` file.
### Example: Defining a Pipeline
Here's an example of how to define a pipeline in `src/<project_name>/pipelines/normal_pipeline.py`:
```python
from crewai import Pipeline
from crewai.project import PipelineBase
from ..crews.normal_crew import NormalCrew
@PipelineBase
class NormalPipeline:
    def __init__(self):
        # Initialize crews
        self.normal_crew = NormalCrew().crew()

    def create_pipeline(self):
        return Pipeline(
            stages=[
                self.normal_crew
            ]
        )

    async def kickoff(self, inputs):
        pipeline = self.create_pipeline()
        results = await pipeline.kickoff(inputs)
        return results
```
### Annotations
The main annotation you'll use for pipelines is `@PipelineBase`. This annotation is used to decorate your pipeline classes, similar to how `@CrewBase` is used for crews.
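For comparison, here is a minimal crew-side sketch using `@CrewBase` and the related annotations (the class, agent, and task names are placeholders; the `agents_config`/`tasks_config` lookups assume the standard `config/agents.yaml` and `config/tasks.yaml` layout shown above):
```python
from crewai import Agent, Crew, Task
from crewai.project import CrewBase, agent, crew, task

@CrewBase
class NormalCrew:
    """Placeholder crew, decorated the way a pipeline class uses @PipelineBase."""

    @agent
    def researcher(self) -> Agent:
        # "researcher" is an illustrative entry in config/agents.yaml
        return Agent(config=self.agents_config["researcher"])

    @task
    def research_task(self) -> Task:
        # "research_task" is an illustrative entry in config/tasks.yaml
        return Task(config=self.tasks_config["research_task"])

    @crew
    def crew(self) -> Crew:
        # Collects the decorated agents and tasks into a runnable crew
        return Crew(agents=self.agents, tasks=self.tasks)
```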
## Installing Dependencies
To install the dependencies for your project, use Poetry:
```shell
$ cd <project_name>
$ crewai install
```
## Running Your Pipeline Project
To run your pipeline project, use the following command:
```shell
$ crewai run
```
This will initialize your pipeline and begin task execution as defined in your `main.py` file.
## Deploying Your Pipeline Project
Pipelines can be deployed in the same way as regular CrewAI projects. The easiest way is through [CrewAI+](https://www.crewai.com/crewaiplus), where you can deploy your pipeline in a few clicks.
Remember, when working with pipelines, you're orchestrating multiple crews to work together in a sequence or parallel fashion. This allows for more complex workflows and information processing tasks.
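To make the sequential/parallel distinction concrete, here is a hedged sketch: per the Pipeline `stages` attribute, a single crew forms a sequential stage, while a list of crews runs in parallel within one stage (the helper function and crew topics below are invented for illustration):
```python
from crewai import Agent, Crew, Pipeline, Task

def make_crew(topic: str) -> Crew:
    # Minimal single-agent crew; the role, goal, and texts are placeholders.
    analyst = Agent(role="Analyst", goal=f"Analyze {topic}", backstory="A careful analyst.")
    analyze = Task(description=f"Analyze {topic}.", expected_output="A short note.", agent=analyst)
    return Crew(agents=[analyst], tasks=[analyze])

pipeline = Pipeline(
    stages=[
        make_crew("market trends"),                        # stage 1: runs alone
        [make_crew("competitors"), make_crew("pricing")],  # stage 2: two crews in parallel
        make_crew("final report"),                         # stage 3: runs after stage 2
    ]
)
```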

View File

@@ -17,40 +17,12 @@ Before we start, there are a couple of things to note:
Before getting started with CrewAI, make sure that you have installed it via pip:
```shell
$ pip install crewai crewai-tools
$ pip install 'crewai[tools]'
```
### Virtual Environments
It is highly recommended that you use virtual environments to ensure that your CrewAI project is isolated from other projects and dependencies. Virtual environments provide a clean, separate workspace for each project, preventing conflicts between different versions of packages and libraries. This isolation is crucial for maintaining consistency and reproducibility in your development process. You have multiple options for setting up virtual environments depending on your operating system and Python version:
1. Use venv (Python's built-in virtual environment tool):
venv is included with Python 3.3 and later, making it a convenient choice for many developers. It's lightweight and easy to use, perfect for simple project setups.
To set up virtual environments with venv, refer to the official [Python documentation](https://docs.python.org/3/tutorial/venv.html).
2. Use Conda (A Python virtual environment manager):
Conda is an open-source package manager and environment management system for Python. It's widely used by data scientists, developers, and researchers to manage dependencies and environments in a reproducible way.
To set up virtual environments with Conda, refer to the official [Conda documentation](https://docs.conda.io/projects/conda/en/stable/user-guide/getting-started.html).
3. Use Poetry (A Python package manager and dependency management tool):
Poetry is an open-source Python package manager that simplifies the installation of packages and their dependencies. Poetry offers a convenient way to manage virtual environments and dependencies.
Poetry is CrewAI's preferred tool for package and dependency management.
### Code IDEs
Most users of CrewAI use a Code Editor / Integrated Development Environment (IDE) for building their Crews. You can use any code IDE of your choice. See below for some popular options for Code Editors / Integrated Development Environments (IDE):
- [Visual Studio Code](https://code.visualstudio.com/) - Most popular
- [PyCharm](https://www.jetbrains.com/pycharm/)
- [Cursor AI](https://cursor.com)
Pick one that suits your style and needs.
## Creating a New Project
In this example, we will be using Venv as our virtual environment manager.
In this example, we will be using Poetry as our virtual environment manager.
To set up a virtual environment, run the following CLI command:
To create a new CrewAI project, run the following CLI command:
```shell
@@ -154,15 +126,15 @@ email_summarizer_task:
Use the annotations to properly reference the agent and task in the crew.py file.
### Annotations include:
* @agent
* @task
* @crew
* @llm
* @tool
* @callback
* @output_json
* @output_pydantic
* @cache_handler
* [@agent](https://github.com/crewAIInc/crewAI/blob/97d7bfb52ad49a9f04db360e1b6612d98c91971e/src/crewai/project/annotations.py#L17)
* [@task](https://github.com/crewAIInc/crewAI/blob/97d7bfb52ad49a9f04db360e1b6612d98c91971e/src/crewai/project/annotations.py#L4)
* [@crew](https://github.com/crewAIInc/crewAI/blob/97d7bfb52ad49a9f04db360e1b6612d98c91971e/src/crewai/project/annotations.py#L69)
* [@llm](https://github.com/crewAIInc/crewAI/blob/97d7bfb52ad49a9f04db360e1b6612d98c91971e/src/crewai/project/annotations.py#L23)
* [@tool](https://github.com/crewAIInc/crewAI/blob/97d7bfb52ad49a9f04db360e1b6612d98c91971e/src/crewai/project/annotations.py#L39)
* [@callback](https://github.com/crewAIInc/crewAI/blob/97d7bfb52ad49a9f04db360e1b6612d98c91971e/src/crewai/project/annotations.py#L44)
* [@output_json](https://github.com/crewAIInc/crewAI/blob/97d7bfb52ad49a9f04db360e1b6612d98c91971e/src/crewai/project/annotations.py#L29)
* [@output_pydantic](https://github.com/crewAIInc/crewAI/blob/97d7bfb52ad49a9f04db360e1b6612d98c91971e/src/crewai/project/annotations.py#L34)
* [@cache_handler](https://github.com/crewAIInc/crewAI/blob/97d7bfb52ad49a9f04db360e1b6612d98c91971e/src/crewai/project/annotations.py#L49)
crew.py
```py
@@ -191,8 +163,7 @@ To install the dependencies for your project, you can use Poetry. First, navigat
```shell
$ cd my_project
$ poetry lock
$ poetry install
$ crewai install
```
This will install the dependencies specified in the `pyproject.toml` file.
@@ -233,11 +204,6 @@ To run your project, use the following command:
```shell
$ crewai run
```
or
```shell
$ poetry run my_project
```
This will initialize your crew of AI agents and begin task execution as defined in your configuration in the `main.py` file.
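For context, a hedged sketch of what the template-generated `main.py` roughly contains (module and class names are placeholders; the generated file varies by crewAI version):
```python
# src/my_project/main.py (hypothetical layout)
from my_project.crew import MyProjectCrew

def run():
    inputs = {"topic": "AI LLMs"}  # placeholder input interpolated into tasks
    MyProjectCrew().crew().kickoff(inputs=inputs)

if __name__ == "__main__":
    run()
```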
### Replay Tasks from Latest Crew Kickoff

View File

@@ -4,9 +4,11 @@ description: Kickoff a Crew Asynchronously
---
## Introduction
CrewAI provides the ability to kickoff a crew asynchronously, allowing you to start the crew execution in a non-blocking manner. This feature is particularly useful when you want to run multiple crews concurrently or when you need to perform other tasks while the crew is executing.
## Asynchronous Crew Execution
To kickoff a crew asynchronously, use the `kickoff_async()` method. This method initiates the crew execution in a separate thread, allowing the main thread to continue executing other tasks.
### Method Signature
@@ -23,10 +25,20 @@ def kickoff_async(self, inputs: dict) -> CrewOutput:
- `CrewOutput`: An object representing the result of the crew execution.
## Example
Here's an example of how to kickoff a crew asynchronously:
## Potential Use Cases
- **Parallel Content Generation**: Kickoff multiple independent crews asynchronously, each responsible for generating content on different topics. For example, one crew might research and draft an article on AI trends, while another crew generates social media posts about a new product launch. Each crew operates independently, allowing content production to scale efficiently.
- **Concurrent Market Research Tasks**: Launch multiple crews asynchronously to conduct market research in parallel. One crew might analyze industry trends, while another examines competitor strategies, and yet another evaluates consumer sentiment. Each crew independently completes its task, enabling faster and more comprehensive insights.
- **Independent Travel Planning Modules**: Execute separate crews to independently plan different aspects of a trip. One crew might handle flight options, another handles accommodation, and a third plans activities. Each crew works asynchronously, allowing various components of the trip to be planned simultaneously and independently for faster results.
## Example: Single Asynchronous Crew Execution
Here's an example of how to kickoff a crew asynchronously using asyncio and awaiting the result:
```python
import asyncio
from crewai import Crew, Agent, Task
# Create an agent with code execution enabled
@@ -49,6 +61,57 @@ analysis_crew = Crew(
    tasks=[data_analysis_task]
)

# Execute the crew asynchronously
result = analysis_crew.kickoff_async(inputs={"ages": [25, 30, 35, 40, 45]})
```
# Async function to kickoff the crew asynchronously
async def async_crew_execution():
    result = await analysis_crew.kickoff_async(inputs={"ages": [25, 30, 35, 40, 45]})
    print("Crew Result:", result)

# Run the async function
asyncio.run(async_crew_execution())
```
## Example: Multiple Asynchronous Crew Executions
In this example, we'll show how to kickoff multiple crews asynchronously and wait for all of them to complete using asyncio.gather():
```python
import asyncio
from crewai import Crew, Agent, Task
# Create an agent with code execution enabled
coding_agent = Agent(
    role="Python Data Analyst",
    goal="Analyze data and provide insights using Python",
    backstory="You are an experienced data analyst with strong Python skills.",
    allow_code_execution=True
)

# Create tasks that require code execution
task_1 = Task(
    description="Analyze the first dataset and calculate the average age of participants. Ages: {ages}",
    agent=coding_agent
)

task_2 = Task(
    description="Analyze the second dataset and calculate the average age of participants. Ages: {ages}",
    agent=coding_agent
)

# Create two crews and add tasks
crew_1 = Crew(agents=[coding_agent], tasks=[task_1])
crew_2 = Crew(agents=[coding_agent], tasks=[task_2])

# Async function to kickoff multiple crews asynchronously and wait for all to finish
async def async_multiple_crews():
    result_1 = crew_1.kickoff_async(inputs={"ages": [25, 30, 35, 40, 45]})
    result_2 = crew_2.kickoff_async(inputs={"ages": [20, 22, 24, 28, 30]})

    # Wait for both crews to finish
    results = await asyncio.gather(result_1, result_2)

    for i, result in enumerate(results, 1):
        print(f"Crew {i} Result:", result)

# Run the async function
asyncio.run(async_multiple_crews())
```

View File

@@ -88,7 +88,7 @@ There are a couple of different ways you can use HuggingFace to host your LLM.
### Your own HuggingFace endpoint
```python
from langchain_huggingface import HuggingFaceEndpoint,
from langchain_huggingface import HuggingFaceEndpoint
llm = HuggingFaceEndpoint(
repo_id="microsoft/Phi-3-mini-4k-instruct",
@@ -112,30 +112,30 @@ Switch between APIs and models seamlessly using environment variables, supportin
### Configuration Examples
#### FastChat
```sh
os.environ[OPENAI_API_BASE]="http://localhost:8001/v1"
os.environ[OPENAI_MODEL_NAME]='oh-2.5m7b-q51'
os.environ[OPENAI_API_KEY]=NA
os.environ["OPENAI_API_BASE"]='http://localhost:8001/v1'
os.environ["OPENAI_MODEL_NAME"]='oh-2.5m7b-q51'
os.environ["OPENAI_API_KEY"]='NA'
```
#### LM Studio
Launch [LM Studio](https://lmstudio.ai) and go to the Server tab. Then select a model from the dropdown menu and wait for it to load. Once it's loaded, click the green Start Server button and use the URL, port, and API key that's shown (you can modify them). Below is an example of the default settings as of LM Studio 0.2.19:
```sh
os.environ[OPENAI_API_BASE]="http://localhost:1234/v1"
os.environ[OPENAI_API_KEY]="lm-studio"
os.environ["OPENAI_API_BASE"]='http://localhost:1234/v1'
os.environ["OPENAI_API_KEY"]='lm-studio'
```
#### Groq API
```sh
os.environ[OPENAI_API_KEY]=your-groq-api-key
os.environ[OPENAI_MODEL_NAME]='llama3-8b-8192'
os.environ[OPENAI_API_BASE]=https://api.groq.com/openai/v1
os.environ["OPENAI_API_KEY"]='your-groq-api-key'
os.environ["OPENAI_MODEL_NAME"]='llama3-8b-8192'
os.environ["OPENAI_API_BASE"]='https://api.groq.com/openai/v1'
```
#### Mistral API
```sh
os.environ[OPENAI_API_KEY]=your-mistral-api-key
os.environ[OPENAI_API_BASE]=https://api.mistral.ai/v1
os.environ[OPENAI_MODEL_NAME]="mistral-small"
os.environ["OPENAI_API_KEY"]='your-mistral-api-key'
os.environ["OPENAI_API_BASE"]='https://api.mistral.ai/v1'
os.environ["OPENAI_MODEL_NAME"]='mistral-small'
```
### Solar
@@ -145,17 +145,16 @@ from langchain_community.chat_models.solar import SolarChat
```sh
os.environ[SOLAR_API_BASE]="https://api.upstage.ai/v1/solar"
os.environ[SOLAR_API_KEY]="your-solar-api-key"
```
# Free developer API key available here: https://console.upstage.ai/services/solar
# Langchain Example: https://github.com/langchain-ai/langchain/pull/18556
```
### Cohere
```python
from langchain_cohere import ChatCohere
# Initialize language model
os.environ["COHERE_API_KEY"] = "your-cohere-api-key"
os.environ["COHERE_API_KEY"]='your-cohere-api-key'
llm = ChatCohere()
# Free developer API key available here: https://cohere.com/
@@ -166,10 +165,10 @@ llm = ChatCohere()
For Azure OpenAI API integration, set the following environment variables:
```sh
os.environ[AZURE_OPENAI_DEPLOYMENT] = "Your deployment"
os.environ["OPENAI_API_VERSION"] = "2023-12-01-preview"
os.environ["AZURE_OPENAI_ENDPOINT"] = "Your Endpoint"
os.environ["AZURE_OPENAI_API_KEY"] = "<Your API Key>"
os.environ["AZURE_OPENAI_DEPLOYMENT"]='Your deployment'
os.environ["OPENAI_API_VERSION"]='2023-12-01-preview'
os.environ["AZURE_OPENAI_ENDPOINT"]='Your Endpoint'
os.environ["AZURE_OPENAI_API_KEY"]='Your API Key'
```
### Example Agent with Azure LLM
@@ -194,4 +193,4 @@ azure_agent = Agent(
```
## Conclusion
Integrating CrewAI with different LLMs expands the framework's versatility, allowing for customized, efficient AI solutions across various domains and platforms.
Integrating CrewAI with different LLMs expands the framework's versatility, allowing for customized, efficient AI solutions across various domains and platforms.

View File

@@ -7,10 +7,14 @@ description: How to monitor cost, latency, and performance of CrewAI Agents usin
Langtrace is an open-source, external tool that helps you set up observability and evaluations for Large Language Models (LLMs), LLM frameworks, and Vector Databases. While not built directly into CrewAI, Langtrace can be used alongside CrewAI to gain deep visibility into the cost, latency, and performance of your CrewAI Agents. This integration allows you to log hyperparameters, monitor performance regressions, and establish a process for continuous improvement of your Agents.
![Overview of a select series of agent session runs](..%2Fassets%2Flangtrace1.png)
![Overview of agent traces](..%2Fassets%2Flangtrace2.png)
![Overview of llm traces in details](..%2Fassets%2Flangtrace3.png)
## Setup Instructions
1. Sign up for [Langtrace](https://langtrace.ai/) by visiting [https://langtrace.ai/signup](https://langtrace.ai/signup).
2. Create a project and generate an API key.
2. Create a project, set the project type to crewAI & generate an API key.
3. Install Langtrace in your CrewAI project using the following commands:
```bash
@@ -32,58 +36,29 @@ langtrace.init(api_key='<LANGTRACE_API_KEY>')
from crewai import Agent, Task, Crew
```
2. Create your CrewAI agents and tasks as usual.
3. Use Langtrace's tracking functions to monitor your CrewAI operations. For example:
```python
with langtrace.trace("CrewAI Task Execution"):
result = crew.kickoff()
```
### Features and Their Application to CrewAI
1. **LLM Token and Cost Tracking**
- Monitor the token usage and associated costs for each CrewAI agent interaction.
- Example:
```python
with langtrace.trace("Agent Interaction"):
agent_response = agent.execute(task)
```
2. **Trace Graph for Execution Steps**
- Visualize the execution flow of your CrewAI tasks, including latency and logs.
- Useful for identifying bottlenecks in your agent workflows.
3. **Dataset Curation with Manual Annotation**
- Create datasets from your CrewAI task outputs for future training or evaluation.
- Example:
```python
langtrace.log_dataset_item(task_input, agent_output, {"task_type": "research"})
```
4. **Prompt Versioning and Management**
- Keep track of different versions of prompts used in your CrewAI agents.
- Useful for A/B testing and optimizing agent performance.
5. **Prompt Playground with Model Comparisons**
- Test and compare different prompts and models for your CrewAI agents before deployment.
6. **Testing and Evaluations**
- Set up automated tests for your CrewAI agents and tasks.
- Example:
```python
langtrace.evaluate(agent_output, expected_output, "accuracy")
```
## Monitoring New CrewAI Features
CrewAI has introduced several new features that can be monitored using Langtrace:
1. **Code Execution**: Monitor the performance and output of code executed by agents.
```python
with langtrace.trace("Agent Code Execution"):
code_output = agent.execute_code(code_snippet)
```
2. **Third-party Agent Integration**: Track interactions with LlamaIndex, LangChain, and Autogen agents.

View File

@@ -8,13 +8,20 @@ Cutting-edge framework for orchestrating role-playing, autonomous AI agents. By
<div style="width:25%">
<h2>Getting Started</h2>
<ul>
<li><a href='./getting-started/Installing-CrewAI'>
<li>
<a href='./getting-started/Installing-CrewAI'>
Installing CrewAI
</a>
</a>
</li>
<li><a href='./getting-started/Start-a-New-CrewAI-Project-Template-Method'>
<li>
<a href='./getting-started/Start-a-New-CrewAI-Project-Template-Method'>
Start a New CrewAI Project: Template Method
</a>
</a>
</li>
<li>
<a href='./getting-started/Create-a-New-CrewAI-Pipeline-Template-Method'>
Create a New CrewAI Pipeline: Template Method
</a>
</li>
</ul>
</div>

View File

@@ -5,24 +5,39 @@ description: Understanding the telemetry data collected by CrewAI and how it con
## Telemetry
CrewAI utilizes anonymous telemetry to gather usage statistics with the primary goal of enhancing the library. Our focus is on improving and developing the features, integrations, and tools most utilized by our users. We don't offer a way to disable it now, but we will in the future.
!!! note "Personal Information"
    By default, we collect no data that would be considered personal information under GDPR and other privacy regulations.
    We do collect Tool's names and Agent's roles, so be advised not to include any personal information in the tool's names or the Agent's roles.
    Because no personal information is collected, it's not necessary to worry about data residency.
    When `share_crew` is enabled, additional data is collected which may contain personal information if included by the user. Users should exercise caution when enabling this feature to ensure compliance with privacy regulations.
It's pivotal to understand that **NO data is collected** concerning prompts, task descriptions, agents' backstories or goals, usage of tools, API calls, responses, any data processed by the agents, or secrets and environment variables, with the exception of the conditions mentioned. When the `share_crew` feature is enabled, detailed data including task descriptions, agents' backstories or goals, and other specific attributes are collected to provide deeper insights while respecting user privacy.
CrewAI utilizes anonymous telemetry to gather usage statistics with the primary goal of enhancing the library. Our focus is on improving and developing the features, integrations, and tools most utilized by our users.
### Data Collected Includes:
- **Version of CrewAI**: Assessing the adoption rate of our latest version helps us understand user needs and guide our updates.
- **Python Version**: Identifying the Python versions our users operate with assists in prioritizing our support efforts for these versions.
- **General OS Information**: Details like the number of CPUs and the operating system type (macOS, Windows, Linux) enable us to focus our development on the most used operating systems and explore the potential for OS-specific features.
- **Number of Agents and Tasks in a Crew**: Ensures our internal testing mirrors real-world scenarios, helping us guide users towards best practices.
- **Crew Process Utilization**: Understanding how crews are utilized aids in directing our development focus.
- **Memory and Delegation Use by Agents**: Insights into how these features are used help evaluate their effectiveness and future.
- **Task Execution Mode**: Knowing whether tasks are executed in parallel or sequentially influences our emphasis on enhancing parallel execution capabilities.
- **Language Model Utilization**: Supports our goal to improve support for the most popular languages among our users.
- **Roles of Agents within a Crew**: Understanding the various roles agents play aids in crafting better tools, integrations, and examples.
- **Tool Usage**: Identifying which tools are most frequently used allows us to prioritize improvements in those areas.
It's pivotal to understand that by default, **NO personal data is collected** concerning prompts, task descriptions, agents' backstories or goals, usage of tools, API calls, responses, any data processed by the agents, or secrets and environment variables.
When the `share_crew` feature is enabled, detailed data including task descriptions, agents' backstories or goals, and other specific attributes are collected to provide deeper insights. This expanded data collection may include personal information if users have incorporated it into their crews or tasks. Users should carefully consider the content of their crews and tasks before enabling `share_crew`. Users can disable telemetry by setting the environment variable OTEL_SDK_DISABLED to true.
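For example, a minimal sketch of disabling telemetry with that environment variable before a run:
```sh
# Disable CrewAI telemetry (OpenTelemetry SDK) for this shell session
export OTEL_SDK_DISABLED=true
crewai run
```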
### Data Explanation:
| Defaulted | Data | Reason and Specifics |
|-----------|-------------------------------------------|-----------------------------------------------------------------------------------------------------------------------------|
| Yes | CrewAI and Python Version | Tracks software versions. Example: CrewAI v1.2.3, Python 3.8.10. No personal data. |
| Yes | Crew Metadata | Includes: randomly generated key and ID, process type (e.g., 'sequential', 'parallel'), boolean flag for memory usage (true/false), count of tasks, count of agents. All non-personal. |
| Yes | Agent Data | Includes: randomly generated key and ID, role name (should not include personal info), boolean settings (verbose, delegation enabled, code execution allowed), max iterations, max RPM, max retry limit, LLM info (see LLM Attributes), list of tool names (should not include personal info). No personal data. |
| Yes | Task Metadata | Includes: randomly generated key and ID, boolean execution settings (async_execution, human_input), associated agent's role and key, list of tool names. All non-personal. |
| Yes | Tool Usage Statistics | Includes: tool name (should not include personal info), number of usage attempts (integer), LLM attributes used. No personal data. |
| Yes | Test Execution Data | Includes: crew's randomly generated key and ID, number of iterations, model name used, quality score (float), execution time (in seconds). All non-personal. |
| Yes | Task Lifecycle Data | Includes: creation and execution start/end times, crew and task identifiers. Stored as spans with timestamps. No personal data. |
| Yes | LLM Attributes | Includes: name, model_name, model, top_k, temperature, and class name of the LLM. All technical, non-personal data. |
| Yes       | Crew Deployment attempt using crewAI CLI  | Includes: the fact that a deploy is being attempted, the crew ID, and whether logs are being pulled; no other data. |
| No | Agent's Expanded Data | Includes: goal description, backstory text, i18n prompt file identifier. Users should ensure no personal info is included in text fields. |
| No | Detailed Task Information | Includes: task description, expected output description, context references. Users should ensure no personal info is included in these fields. |
| No | Environment Information | Includes: platform, release, system, version, and CPU count. Example: 'Windows 10', 'x86_64'. No personal data. |
| No | Crew and Task Inputs and Outputs | Includes: input parameters and output results as non-identifiable data. Users should ensure no personal info is included. |
| No | Comprehensive Crew Execution Data | Includes: detailed logs of crew operations, all agents and tasks data, final output. All non-personal and technical in nature. |
Note: "No" in the "Defaulted" column indicates that this data is only collected when `share_crew` is set to `true`.
### Opt-In Further Telemetry Sharing
Users can choose to share their complete telemetry data by setting the `share_crew` attribute to `True` in their crew configurations. Enabling `share_crew` results in the collection of detailed crew and task execution data, including the `goal`, `backstory`, `context`, and `output` of tasks. This enables deeper insight into usage patterns while respecting the user's choice to share.
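A minimal sketch of opting in; the agent and task below are illustrative placeholders:

```python
from crewai import Agent, Crew, Task

researcher = Agent(
    role="Researcher",
    goal="Summarize a topic",
    backstory="An analyst who writes concise summaries.",
)
summary = Task(
    description="Write a two-sentence summary of CrewAI.",
    expected_output="A two-sentence summary.",
    agent=researcher,
)

# share_crew=True opts in to the expanded telemetry described above.
crew = Crew(agents=[researcher], tasks=[summary], share_crew=True)
```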
### Updates and Revisions
We are committed to maintaining the accuracy and transparency of our documentation. Regular reviews and updates are performed to ensure our documentation accurately reflects the latest developments of our codebase and telemetry practices. Users are encouraged to review this section for the most current information on our data collection practices and how they contribute to the improvement of CrewAI.
!!! warning "Potential Personal Information"
If you enable `share_crew`, the collected data may include personal information if it has been incorporated into crew configurations, task descriptions, or outputs. Users should carefully review their data and ensure compliance with GDPR and other applicable privacy regulations before enabling this feature.

docs/tools/SpiderTool.md

@@ -0,0 +1,81 @@
# SpiderTool
## Description
[Spider](https://spider.cloud/?ref=crewai) is the [fastest](https://github.com/spider-rs/spider/blob/main/benches/BENCHMARKS.md#benchmark-results) open source scraper and crawler that returns LLM-ready data. It converts any website into pure HTML, markdown, metadata or text while enabling you to crawl with custom actions using AI.
## Installation
To use the Spider API you need to install the [Spider SDK](https://pypi.org/project/spider-client/) along with the `crewai[tools]` SDK:
```bash
pip install spider-client 'crewai[tools]'
```
## Example
This example shows how you can use the Spider tool to enable your agent to scrape and crawl websites. The data returned from the Spider API is already LLM-ready, so no additional cleaning is needed.
```python
from crewai import Agent, Crew, Task
from crewai_tools import SpiderTool


def main():
    spider_tool = SpiderTool()

    searcher = Agent(
        role="Web Research Expert",
        goal="Find related information from specific URLs",
        backstory="An expert web researcher that uses the web extremely well",
        tools=[spider_tool],
        verbose=True,
    )

    return_metadata = Task(
        description="Scrape https://spider.cloud with a limit of 1 and enable metadata",
        expected_output="Metadata and a 10-word summary of spider.cloud",
        agent=searcher,
    )

    crew = Crew(
        agents=[searcher],
        tasks=[return_metadata],
        verbose=2,
    )

    crew.kickoff()


if __name__ == "__main__":
    main()
```
## Arguments
- `api_key` (string, optional): Specifies the Spider API key. If not specified, it looks for `SPIDER_API_KEY` in environment variables.
- `params` (object, optional): Optional parameters for the request. Defaults to `{"return_format": "markdown"}` to return the website's content in a format that suits LLMs better.
- `request` (string): The request type to perform. Possible values are `http`, `chrome`, and `smart`. Use `smart` to default to a plain HTTP request and fall back to JavaScript rendering only when the HTML requires it.
- `limit` (int): The maximum number of pages allowed to crawl per website. Remove the value or set it to `0` to crawl all pages.
- `depth` (int): The crawl limit for maximum depth. If `0`, no limit will be applied.
- `cache` (bool): Use HTTP caching for the crawl to speed up repeated runs. Default is `true`.
- `budget` (object): Maps paths to page-count limits, e.g. `{"*": 1}` to crawl only the root page.
- `locale` (string): The locale to use for the request, for example `en-US`.
- `cookies` (string): HTTP cookies to send with the request.
- `stealth` (bool): Use stealth mode for headless Chrome requests to help avoid being blocked. Defaults to `true` on Chrome.
- `headers` (object): HTTP headers to forward with every request, given as a map of key-value pairs.
- `metadata` (bool): Whether to store metadata about the pages and content found, which can help improve AI interop. Defaults to `false` unless the website is already stored with this configuration enabled.
- `viewport` (object): Configure the viewport for Chrome. Defaults to `800x600`.
- `encoding` (string): The encoding to use, such as `UTF-8` or `SHIFT_JIS`.
- `subdomains` (bool): Allow subdomains to be included. Default is `false`.
- `user_agent` (string): Add a custom HTTP user agent to the request. By default this is set to a random agent.
- `store_data` (bool): Whether storage should be used. If set, this takes precedence over `storageless`. Defaults to `false`.
- `gpt_config` (object): Use AI to generate actions to perform during the crawl. You can pass an array for the `"prompt"` to chain steps.
- `fingerprint` (bool): Use advanced fingerprinting for Chrome.
- `storageless` (bool): Prevent storing any data for the request, including storage and AI vector embeddings. Defaults to `false` unless the website is already stored.
- `readability` (bool): Use [readability](https://github.com/mozilla/readability) to pre-process the content for reading. This may drastically improve the content for LLM usage.
- `return_format` (string): The format to return the data in. Possible values are `markdown`, `raw`, `text`, and `html2text`. Use `raw` to return the page's default format, e.g. HTML.
- `proxy_enabled` (bool): Enable high performance premium proxies for the request to prevent being blocked at the network level.
- `query_selector` (string): The CSS query selector to use when extracting content from the markup.
- `full_resources` (bool): Crawl and download all the resources for a website.
- `request_timeout` (int): The timeout for the request, in seconds, from `5` to `60`. The default is `30` seconds.
- `run_in_background` (bool): Run the request in the background. Useful when storing data and triggering crawls from the dashboard. Has no effect if `storageless` is set.
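As a rough sketch of combining several of these options, assuming (as with the default) that they are passed through the `params` object; the values below are illustrative, not recommendations:

```python
from crewai_tools import SpiderTool

# api_key is omitted here, so the tool falls back to the
# SPIDER_API_KEY environment variable.
spider_tool = SpiderTool(
    params={
        "return_format": "markdown",  # LLM-friendly output
        "request": "smart",           # plain HTTP, JS rendering only when needed
        "limit": 5,                   # crawl at most 5 pages per site
        "metadata": True,             # store page metadata
    },
)
```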


@@ -2,8 +2,8 @@ site_name: crewAI
site_author: crewAI, Inc
site_description: Cutting-edge framework for orchestrating role-playing, autonomous AI agents. By fostering collaborative intelligence, CrewAI empowers agents to work together seamlessly, tackling complex tasks.
repo_name: crewAI
repo_url: https://github.com/joaomdmoura/crewai/
site_url: https://crewai.com
repo_url: https://github.com/crewAIInc/crewAI
site_url: https://docs.crewai.com
edit_uri: edit/main/docs/
copyright: Copyright &copy; 2024 crewAI, Inc
@@ -129,6 +129,7 @@ nav:
- Processes: 'core-concepts/Processes.md'
- Crews: 'core-concepts/Crews.md'
- Collaboration: 'core-concepts/Collaboration.md'
- Pipeline: 'core-concepts/Pipeline.md'
- Training: 'core-concepts/Training-Crew.md'
- Memory: 'core-concepts/Memory.md'
- Planning: 'core-concepts/Planning.md'
@@ -177,6 +178,7 @@ nav:
- PG RAG Search: 'tools/PGSearchTool.md'
- Scrape Website: 'tools/ScrapeWebsiteTool.md'
- Selenium Scraper: 'tools/SeleniumScrapingTool.md'
- Spider Scraper: 'tools/SpiderTool.md'
- TXT RAG Search: 'tools/TXTSearchTool.md'
- Vision Tool: 'tools/VisionTool.md'
- Website RAG Search: 'tools/WebsiteSearchTool.md'

poetry.lock (generated)

File diff suppressed because it is too large.


@@ -1,6 +1,6 @@
[tool.poetry]
name = "crewai"
version = "0.51.1"
version = "0.55.2"
description = "Cutting-edge framework for orchestrating role-playing, autonomous AI agents. By fostering collaborative intelligence, CrewAI empowers agents to work together seamlessly, tackling complex tasks."
authors = ["Joao Moura <joao@crewai.com>"]
readme = "README.md"
@@ -8,8 +8,8 @@ packages = [{ include = "crewai", from = "src" }]
[tool.poetry.urls]
Homepage = "https://crewai.com"
Documentation = "https://github.com/joaomdmoura/CrewAI/wiki/Index"
Repository = "https://github.com/joaomdmoura/crewai"
Documentation = "https://docs.crewai.com"
Repository = "https://github.com/crewAIInc/crewAI"
[tool.poetry.dependencies]
python = ">=3.10,<=3.13"
@@ -20,8 +20,8 @@ opentelemetry-api = "^1.22.0"
opentelemetry-sdk = "^1.22.0"
opentelemetry-exporter-otlp-proto-http = "^1.22.0"
instructor = "1.3.3"
regex = "^2023.12.25"
crewai-tools = { version = "^0.8.3", optional = true }
regex = "^2024.7.24"
crewai-tools = { version = "^0.12.0", optional = true }
click = "^8.1.7"
python-dotenv = "^1.0.0"
appdirs = "^1.4.4"
@@ -29,6 +29,7 @@ jsonref = "^1.1.0"
agentops = { version = "^0.3.0", optional = true }
embedchain = "^0.1.114"
json-repair = "^0.25.2"
auth0-python = "^4.7.1"
[tool.poetry.extras]
tools = ["crewai-tools"]
@@ -46,7 +47,7 @@ mkdocs-material = { extras = ["imaging"], version = "^9.5.7" }
mkdocs-material-extensions = "^1.3.1"
pillow = "^10.2.0"
cairosvg = "^2.7.1"
crewai-tools = "^0.8.3"
crewai-tools = "^0.12.0"
[tool.poetry.group.test.dependencies]
pytest = "^8.0.0"
@@ -62,6 +63,9 @@ ignore_missing_imports = true
disable_error_code = 'import-untyped'
exclude = ["cli/templates"]
[tool.bandit]
exclude_dirs = ["src/crewai/cli/templates"]
[build-system]
requires = ["poetry-core"]
build-backend = "poetry.core.masonry.api"


@@ -2,6 +2,7 @@ from crewai.agent import Agent
from crewai.crew import Crew
from crewai.pipeline import Pipeline
from crewai.process import Process
from crewai.routers import Router
from crewai.task import Task
__all__ = ["Agent", "Crew", "Process", "Task", "Pipeline"]
__all__ = ["Agent", "Crew", "Process", "Task", "Pipeline", "Router"]


@@ -113,40 +113,46 @@ class Agent(BaseAgent):
description="Maximum number of retries for an agent to execute a task when an error occurs.",
)
def __init__(__pydantic_self__, **data):
config = data.pop("config", {})
super().__init__(**config, **data)
__pydantic_self__.agent_ops_agent_name = __pydantic_self__.role
@model_validator(mode="after")
def set_agent_executor(self) -> "Agent":
"""Ensure agent executor and token process are set."""
if hasattr(self.llm, "model_name"):
token_handler = TokenCalcHandler(self.llm.model_name, self._token_process)
def post_init_setup(self):
self.agent_ops_agent_name = self.role
# Ensure self.llm.callbacks is a list
if not isinstance(self.llm.callbacks, list):
self.llm.callbacks = []
# Different llms store the model name in different attributes
model_name = getattr(self.llm, "model_name", None) or getattr(
self.llm, "deployment_name", None
)
# Check if an instance of TokenCalcHandler already exists in the list
if not any(
isinstance(handler, TokenCalcHandler) for handler in self.llm.callbacks
):
self.llm.callbacks.append(token_handler)
if agentops and not any(
isinstance(handler, agentops.LangchainCallbackHandler)
for handler in self.llm.callbacks
):
agentops.stop_instrumenting()
self.llm.callbacks.append(agentops.LangchainCallbackHandler())
if model_name:
self._setup_llm_callbacks(model_name)
if not self.agent_executor:
if not self.cache_handler:
self.cache_handler = CacheHandler()
self.set_cache_handler(self.cache_handler)
self._setup_agent_executor()
return self
def _setup_llm_callbacks(self, model_name: str):
token_handler = TokenCalcHandler(model_name, self._token_process)
if not isinstance(self.llm.callbacks, list):
self.llm.callbacks = []
if not any(
isinstance(handler, TokenCalcHandler) for handler in self.llm.callbacks
):
self.llm.callbacks.append(token_handler)
if agentops and not any(
isinstance(handler, agentops.LangchainCallbackHandler)
for handler in self.llm.callbacks
):
agentops.stop_instrumenting()
self.llm.callbacks.append(agentops.LangchainCallbackHandler())
def _setup_agent_executor(self):
if not self.cache_handler:
self.cache_handler = CacheHandler()
self.set_cache_handler(self.cache_handler)
def execute_task(
self,
task: Any,
@@ -213,7 +219,7 @@ class Agent(BaseAgent):
raise e
result = self.execute_task(task, context, tools)
if self.max_rpm:
if self.max_rpm and self._rpm_controller:
self._rpm_controller.stop_rpm_counter()
# If there was any tool in self.tools_results that had result_as_answer


@@ -2,3 +2,5 @@ from .cache.cache_handler import CacheHandler
from .executor import CrewAgentExecutor
from .parser import CrewAgentParser
from .tools_handler import ToolsHandler
__all__ = ["CacheHandler", "CrewAgentExecutor", "CrewAgentParser", "ToolsHandler"]


@@ -7,7 +7,6 @@ from typing import Any, Dict, List, Optional, TypeVar
from pydantic import (
UUID4,
BaseModel,
ConfigDict,
Field,
InstanceOf,
PrivateAttr,
@@ -20,6 +19,7 @@ from crewai.agents.agent_builder.utilities.base_token_process import TokenProces
from crewai.agents.cache.cache_handler import CacheHandler
from crewai.agents.tools_handler import ToolsHandler
from crewai.utilities import I18N, Logger, RPMController
from crewai.utilities.config import process_config
T = TypeVar("T", bound="BaseAgent")
@@ -74,21 +74,26 @@ class BaseAgent(ABC, BaseModel):
"""
__hash__ = object.__hash__ # type: ignore
_logger: Logger = PrivateAttr()
_rpm_controller: RPMController = PrivateAttr(default=None)
_logger: Logger = PrivateAttr(default_factory=lambda: Logger(verbose=False))
_rpm_controller: Optional[RPMController] = PrivateAttr(default=None)
_request_within_rpm_limit: Any = PrivateAttr(default=None)
formatting_errors: int = 0
model_config = ConfigDict(arbitrary_types_allowed=True)
_original_role: Optional[str] = PrivateAttr(default=None)
_original_goal: Optional[str] = PrivateAttr(default=None)
_original_backstory: Optional[str] = PrivateAttr(default=None)
_token_process: TokenProcess = PrivateAttr(default_factory=TokenProcess)
id: UUID4 = Field(default_factory=uuid.uuid4, frozen=True)
formatting_errors: int = Field(
default=0, description="Number of formatting errors."
)
role: str = Field(description="Role of the agent")
goal: str = Field(description="Objective of the agent")
backstory: str = Field(description="Backstory of the agent")
config: Optional[Dict[str, Any]] = Field(
description="Configuration for the agent", default=None, exclude=True
)
cache: bool = Field(
default=True, description="Whether the agent should use a cache for tool usage."
)
config: Optional[Dict[str, Any]] = Field(
description="Configuration for the agent", default=None
)
verbose: bool = Field(
default=False, description="Verbose mode for the Agent Execution"
)
@@ -123,20 +128,29 @@ class BaseAgent(ABC, BaseModel):
default=None, description="Maximum number of tokens for the agent's execution."
)
_original_role: str | None = None
_original_goal: str | None = None
_original_backstory: str | None = None
_token_process: TokenProcess = TokenProcess()
def __init__(__pydantic_self__, **data):
config = data.pop("config", {})
super().__init__(**config, **data)
@model_validator(mode="before")
@classmethod
def process_model_config(cls, values):
return process_config(values, cls)
@model_validator(mode="after")
def set_config_attributes(self):
if self.config:
for key, value in self.config.items():
setattr(self, key, value)
def validate_and_set_attributes(self):
# Validate required fields
for field in ["role", "goal", "backstory"]:
if getattr(self, field) is None:
raise ValueError(
f"{field} must be provided either directly or through config"
)
# Set private attributes
self._logger = Logger(verbose=self.verbose)
if self.max_rpm and not self._rpm_controller:
self._rpm_controller = RPMController(
max_rpm=self.max_rpm, logger=self._logger
)
if not self._token_process:
self._token_process = TokenProcess()
return self
@field_validator("id", mode="before")
@@ -147,14 +161,6 @@ class BaseAgent(ABC, BaseModel):
"may_not_set_field", "This field is not to be set by the user.", {}
)
@model_validator(mode="after")
def set_attributes_based_on_config(self) -> "BaseAgent":
"""Set attributes based on the agent configuration."""
if self.config:
for key, value in self.config.items():
setattr(self, key, value)
return self
@model_validator(mode="after")
def set_private_attrs(self):
"""Set private attributes."""
@@ -170,7 +176,7 @@ class BaseAgent(ABC, BaseModel):
@property
def key(self):
source = [self.role, self.goal, self.backstory]
return md5("|".join(source).encode()).hexdigest()
return md5("|".join(source).encode(), usedforsecurity=False).hexdigest()
@abstractmethod
def execute_task(


@@ -1 +1,3 @@
from .cache_handler import CacheHandler
__all__ = ["CacheHandler"]


@@ -1,13 +1,12 @@
from typing import Optional
from typing import Any, Dict, Optional
from pydantic import BaseModel, PrivateAttr
class CacheHandler:
class CacheHandler(BaseModel):
"""Callback handler for tool usage."""
_cache: dict = {}
def __init__(self):
self._cache = {}
_cache: Dict[str, Any] = PrivateAttr(default_factory=dict)
def add(self, tool, input, output):
self._cache[f"{tool}-{input}"] = output


@@ -1,33 +1,29 @@
import threading
import time
from typing import Any, Dict, Iterator, List, Literal, Optional, Tuple, Union
import click
from langchain.agents import AgentExecutor
from langchain.agents.agent import ExceptionTool
from langchain.callbacks.manager import CallbackManagerForChainRun
from langchain.chains.summarize import load_summarize_chain
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain_core.agents import AgentAction, AgentFinish, AgentStep
from langchain_core.exceptions import OutputParserException
from langchain_core.tools import BaseTool
from langchain_core.utils.input import get_color_mapping
from pydantic import InstanceOf
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.chains.summarize import load_summarize_chain
from crewai.agents.agent_builder.base_agent_executor_mixin import CrewAgentExecutorMixin
from crewai.agents.tools_handler import ToolsHandler
from crewai.tools.tool_usage import ToolUsage, ToolUsageErrorException
from crewai.utilities import I18N
from crewai.utilities.constants import TRAINING_DATA_FILE
from crewai.utilities.exceptions.context_window_exceeding_exception import (
LLMContextLengthExceededException,
)
from crewai.utilities.training_handler import CrewTrainingHandler
from crewai.utilities.logger import Logger
from crewai.utilities.training_handler import CrewTrainingHandler
class CrewAgentExecutor(AgentExecutor, CrewAgentExecutorMixin):
@@ -213,11 +209,7 @@ class CrewAgentExecutor(AgentExecutor, CrewAgentExecutorMixin):
yield step
return
yield AgentStep(
action=AgentAction("_Exception", str(e), str(e)),
observation=str(e),
)
return
raise e
# If the tool chosen is the finishing tool, then we end and return.
if isinstance(output, AgentFinish):


@@ -0,0 +1,3 @@
from .main import AuthenticationCommand
__all__ = ["AuthenticationCommand"]


@@ -0,0 +1,4 @@
ALGORITHMS = ["RS256"]
AUTH0_DOMAIN = "crewai.us.auth0.com"
AUTH0_CLIENT_ID = "DEVC5Fw6NlRoSzmDCcOhVq85EfLBjKa8"
AUTH0_AUDIENCE = "https://crewai.us.auth0.com/api/v2/"


@@ -0,0 +1,75 @@
import time
import webbrowser
from typing import Any, Dict
import requests
from rich.console import Console
from .constants import AUTH0_AUDIENCE, AUTH0_CLIENT_ID, AUTH0_DOMAIN
from .utils import TokenManager, validate_token
console = Console()
class AuthenticationCommand:
DEVICE_CODE_URL = f"https://{AUTH0_DOMAIN}/oauth/device/code"
TOKEN_URL = f"https://{AUTH0_DOMAIN}/oauth/token"
def __init__(self):
self.token_manager = TokenManager()
def signup(self) -> None:
"""Sign up to CrewAI+"""
console.print("Signing Up to CrewAI+ \n", style="bold blue")
device_code_data = self._get_device_code()
self._display_auth_instructions(device_code_data)
return self._poll_for_token(device_code_data)
def _get_device_code(self) -> Dict[str, Any]:
"""Get the device code to authenticate the user."""
device_code_payload = {
"client_id": AUTH0_CLIENT_ID,
"scope": "openid",
"audience": AUTH0_AUDIENCE,
}
response = requests.post(url=self.DEVICE_CODE_URL, data=device_code_payload)
response.raise_for_status()
return response.json()
def _display_auth_instructions(self, device_code_data: Dict[str, str]) -> None:
"""Display the authentication instructions to the user."""
console.print("1. Navigate to: ", device_code_data["verification_uri_complete"])
console.print("2. Enter the following code: ", device_code_data["user_code"])
webbrowser.open(device_code_data["verification_uri_complete"])
def _poll_for_token(self, device_code_data: Dict[str, Any]) -> None:
"""Poll the server for the token."""
token_payload = {
"grant_type": "urn:ietf:params:oauth:grant-type:device_code",
"device_code": device_code_data["device_code"],
"client_id": AUTH0_CLIENT_ID,
}
attempts = 0
while True and attempts < 5:
response = requests.post(self.TOKEN_URL, data=token_payload)
token_data = response.json()
if response.status_code == 200:
validate_token(token_data["id_token"])
expires_in = 360000 # Token expiration time in seconds
self.token_manager.save_tokens(token_data["access_token"], expires_in)
console.print("\nWelcome to CrewAI+ !!", style="green")
return
if token_data["error"] not in ("authorization_pending", "slow_down"):
raise requests.HTTPError(token_data["error_description"])
time.sleep(device_code_data["interval"])
attempts += 1
console.print(
"Timeout: Failed to get the token. Please try again.", style="bold red"
)


@@ -0,0 +1,144 @@
import json
import os
import sys
from datetime import datetime, timedelta
from pathlib import Path
from typing import Optional
from auth0.authentication.token_verifier import (
AsymmetricSignatureVerifier,
TokenVerifier,
)
from cryptography.fernet import Fernet
from .constants import AUTH0_CLIENT_ID, AUTH0_DOMAIN
def validate_token(id_token: str) -> None:
"""
Verify the token and its precedence
:param id_token:
"""
jwks_url = f"https://{AUTH0_DOMAIN}/.well-known/jwks.json"
issuer = f"https://{AUTH0_DOMAIN}/"
signature_verifier = AsymmetricSignatureVerifier(jwks_url)
token_verifier = TokenVerifier(
signature_verifier=signature_verifier, issuer=issuer, audience=AUTH0_CLIENT_ID
)
token_verifier.verify(id_token)
class TokenManager:
def __init__(self, file_path: str = "tokens.enc") -> None:
"""
Initialize the TokenManager class.
:param file_path: The file path to store the encrypted tokens. Default is "tokens.enc".
"""
self.file_path = file_path
self.key = self._get_or_create_key()
self.fernet = Fernet(self.key)
def _get_or_create_key(self) -> bytes:
"""
Get or create the encryption key.
:return: The encryption key.
"""
key_filename = "secret.key"
key = self.read_secure_file(key_filename)
if key is not None:
return key
new_key = Fernet.generate_key()
self.save_secure_file(key_filename, new_key)
return new_key
def save_tokens(self, access_token: str, expires_in: int) -> None:
"""
Save the access token and its expiration time.
:param access_token: The access token to save.
:param expires_in: The expiration time of the access token in seconds.
"""
expiration_time = datetime.now() + timedelta(seconds=expires_in)
data = {
"access_token": access_token,
"expiration": expiration_time.isoformat(),
}
encrypted_data = self.fernet.encrypt(json.dumps(data).encode())
self.save_secure_file(self.file_path, encrypted_data)
def get_token(self) -> Optional[str]:
"""
Get the access token if it is valid and not expired.
:return: The access token if valid and not expired, otherwise None.
"""
encrypted_data = self.read_secure_file(self.file_path)
decrypted_data = self.fernet.decrypt(encrypted_data)
data = json.loads(decrypted_data)
expiration = datetime.fromisoformat(data["expiration"])
if expiration <= datetime.now():
return None
return data["access_token"]
def get_secure_storage_path(self) -> Path:
"""
Get the secure storage path based on the operating system.
:return: The secure storage path.
"""
if sys.platform == "win32":
# Windows: Use %LOCALAPPDATA%
base_path = os.environ.get("LOCALAPPDATA")
elif sys.platform == "darwin":
# macOS: Use ~/Library/Application Support
base_path = os.path.expanduser("~/Library/Application Support")
else:
# Linux and other Unix-like: Use ~/.local/share
base_path = os.path.expanduser("~/.local/share")
app_name = "crewai/credentials"
storage_path = Path(base_path) / app_name
storage_path.mkdir(parents=True, exist_ok=True)
return storage_path
def save_secure_file(self, filename: str, content: bytes) -> None:
"""
Save the content to a secure file.
:param filename: The name of the file.
:param content: The content to save.
"""
storage_path = self.get_secure_storage_path()
file_path = storage_path / filename
with open(file_path, "wb") as f:
f.write(content)
# Set appropriate permissions (read/write for owner only)
os.chmod(file_path, 0o600)
def read_secure_file(self, filename: str) -> Optional[bytes]:
"""
Read the content of a secure file.
:param filename: The name of the file.
:return: The content of the file if it exists, otherwise None.
"""
storage_path = self.get_secure_storage_path()
file_path = storage_path / filename
if not file_path.exists():
return None
with open(file_path, "rb") as f:
return f.read()


@@ -1,13 +1,19 @@
from typing import Optional
import click
import pkg_resources
from crewai.cli.create_crew import create_crew
from crewai.cli.create_flow import create_flow
from crewai.cli.create_pipeline import create_pipeline
from crewai.memory.storage.kickoff_task_outputs_storage import (
KickoffTaskOutputsSQLiteStorage,
)
from .authentication.main import AuthenticationCommand
from .deploy.main import DeployCommand
from .evaluate_crew import evaluate_crew
from .install_crew import install_crew
from .replay_from_task import replay_task_command
from .reset_memories_command import reset_memories_command
from .run_crew import run_crew
@@ -20,19 +26,20 @@ def crewai():
@crewai.command()
@click.argument("type", type=click.Choice(["crew", "pipeline"]))
@click.argument("type", type=click.Choice(["crew", "pipeline", "flow"]))
@click.argument("name")
@click.option(
"--router", is_flag=True, help="Create a pipeline with router functionality"
)
def create(type, name, router):
"""Create a new crew or pipeline."""
def create(type, name):
"""Create a new crew, pipeline, or flow."""
if type == "crew":
create_crew(name)
elif type == "pipeline":
create_pipeline(name, router)
create_pipeline(name)
elif type == "flow":
create_flow(name)
else:
click.secho("Error: Invalid type. Must be 'crew' or 'pipeline'.", fg="red")
click.secho(
"Error: Invalid type. Must be 'crew', 'pipeline', or 'flow'.", fg="red"
)
@crewai.command()
@@ -165,12 +172,84 @@ def test(n_iterations: int, model: str):
evaluate_crew(n_iterations, model)
@crewai.command()
def install():
"""Install the Crew."""
install_crew()
@crewai.command()
def run():
"""Run the crew."""
click.echo("Running the crew")
"""Run the Crew."""
click.echo("Running the Crew")
run_crew()
@crewai.command()
def signup():
"""Sign Up/Login to CrewAI+."""
AuthenticationCommand().signup()
@crewai.command()
def login():
"""Sign Up/Login to CrewAI+."""
AuthenticationCommand().signup()
# DEPLOY CREWAI+ COMMANDS
@crewai.group()
def deploy():
"""Deploy the Crew CLI group."""
pass
@deploy.command(name="create")
@click.option("-y", "--yes", is_flag=True, help="Skip the confirmation prompt")
def deploy_create(yes: bool):
"""Create a Crew deployment."""
deploy_cmd = DeployCommand()
deploy_cmd.create_crew(yes)
@deploy.command(name="list")
def deploy_list():
"""List all deployments."""
deploy_cmd = DeployCommand()
deploy_cmd.list_crews()
@deploy.command(name="push")
@click.option("-u", "--uuid", type=str, help="Crew UUID parameter")
def deploy_push(uuid: Optional[str]):
"""Deploy the Crew."""
deploy_cmd = DeployCommand()
deploy_cmd.deploy(uuid=uuid)
@deploy.command(name="status")
@click.option("-u", "--uuid", type=str, help="Crew UUID parameter")
def deply_status(uuid: Optional[str]):
"""Get the status of a deployment."""
deploy_cmd = DeployCommand()
deploy_cmd.get_crew_status(uuid=uuid)
@deploy.command(name="logs")
@click.option("-u", "--uuid", type=str, help="Crew UUID parameter")
def deploy_logs(uuid: Optional[str]):
"""Get the logs of a deployment."""
deploy_cmd = DeployCommand()
deploy_cmd.get_crew_logs(uuid=uuid)
@deploy.command(name="remove")
@click.option("-u", "--uuid", type=str, help="Crew UUID parameter")
def deploy_remove(uuid: Optional[str]):
"""Remove a deployment."""
deploy_cmd = DeployCommand()
deploy_cmd.remove_crew(uuid=uuid)
if __name__ == "__main__":
crewai()


@@ -0,0 +1,86 @@
from pathlib import Path
import click
def create_flow(name):
"""Create a new flow."""
folder_name = name.replace(" ", "_").replace("-", "_").lower()
class_name = name.replace("_", " ").replace("-", " ").title().replace(" ", "")
click.secho(f"Creating flow {folder_name}...", fg="green", bold=True)
project_root = Path(folder_name)
if project_root.exists():
click.secho(f"Error: Folder {folder_name} already exists.", fg="red")
return
# Create directory structure
(project_root / "src" / folder_name).mkdir(parents=True)
(project_root / "src" / folder_name / "crews").mkdir(parents=True)
(project_root / "src" / folder_name / "tools").mkdir(parents=True)
(project_root / "tests").mkdir(exist_ok=True)
# Create .env file
with open(project_root / ".env", "w") as file:
file.write("OPENAI_API_KEY=YOUR_API_KEY")
package_dir = Path(__file__).parent
templates_dir = package_dir / "templates" / "flow"
# List of template files to copy
root_template_files = [".gitignore", "pyproject.toml", "README.md"]
src_template_files = ["__init__.py", "main.py"]
tools_template_files = ["tools/__init__.py", "tools/custom_tool.py"]
crew_folders = [
"poem_crew",
]
def process_file(src_file, dst_file):
with open(src_file, "r") as file:
content = file.read()
content = content.replace("{{name}}", name)
content = content.replace("{{flow_name}}", class_name)
content = content.replace("{{folder_name}}", folder_name)
with open(dst_file, "w") as file:
file.write(content)
# Copy and process root template files
for file_name in root_template_files:
src_file = templates_dir / file_name
dst_file = project_root / file_name
process_file(src_file, dst_file)
# Copy and process src template files
for file_name in src_template_files:
src_file = templates_dir / file_name
dst_file = project_root / "src" / folder_name / file_name
process_file(src_file, dst_file)
# Copy tools files
for file_name in tools_template_files:
src_file = templates_dir / file_name
dst_file = project_root / "src" / folder_name / file_name
process_file(src_file, dst_file)
# Copy crew folders
for crew_folder in crew_folders:
src_crew_folder = templates_dir / "crews" / crew_folder
dst_crew_folder = project_root / "src" / folder_name / "crews" / crew_folder
if src_crew_folder.exists():
for src_file in src_crew_folder.rglob("*"):
if src_file.is_file():
relative_path = src_file.relative_to(src_crew_folder)
dst_file = dst_crew_folder / relative_path
dst_file.parent.mkdir(parents=True, exist_ok=True)
process_file(src_file, dst_file)
else:
click.secho(
f"Warning: Crew folder {crew_folder} not found in template.",
fg="yellow",
)
click.secho(f"Flow {name} created successfully!", fg="green", bold=True)


@@ -0,0 +1,66 @@
from os import getenv
import requests
from crewai.cli.deploy.utils import get_crewai_version
class CrewAPI:
"""
CrewAPI class to interact with the crewAI+ API.
"""
def __init__(self, api_key: str) -> None:
self.api_key = api_key
self.headers = {
"Authorization": f"Bearer {api_key}",
"Content-Type": "application/json",
"User-Agent": f"CrewAI-CLI/{get_crewai_version()}",
}
self.base_url = getenv(
"CREWAI_BASE_URL", "https://crewai.com/crewai_plus/api/v1/crews"
)
def _make_request(self, method: str, endpoint: str, **kwargs) -> requests.Response:
url = f"{self.base_url}/{endpoint}"
return requests.request(method, url, headers=self.headers, **kwargs)
# Deploy
def deploy_by_name(self, project_name: str) -> requests.Response:
return self._make_request("POST", f"by-name/{project_name}/deploy")
def deploy_by_uuid(self, uuid: str) -> requests.Response:
return self._make_request("POST", f"{uuid}/deploy")
# Status
def status_by_name(self, project_name: str) -> requests.Response:
return self._make_request("GET", f"by-name/{project_name}/status")
def status_by_uuid(self, uuid: str) -> requests.Response:
return self._make_request("GET", f"{uuid}/status")
# Logs
def logs_by_name(
self, project_name: str, log_type: str = "deployment"
) -> requests.Response:
return self._make_request("GET", f"by-name/{project_name}/logs/{log_type}")
def logs_by_uuid(
self, uuid: str, log_type: str = "deployment"
) -> requests.Response:
return self._make_request("GET", f"{uuid}/logs/{log_type}")
# Delete
def delete_by_name(self, project_name: str) -> requests.Response:
return self._make_request("DELETE", f"by-name/{project_name}")
def delete_by_uuid(self, uuid: str) -> requests.Response:
return self._make_request("DELETE", f"{uuid}")
# List
def list_crews(self) -> requests.Response:
return self._make_request("GET", "")
# Create
def create_crew(self, payload) -> requests.Response:
return self._make_request("POST", "", json=payload)


@@ -0,0 +1,318 @@
from typing import Any, Dict, List, Optional
from rich.console import Console
from crewai.telemetry import Telemetry
from .api import CrewAPI
from .utils import (
fetch_and_json_env_file,
get_auth_token,
get_git_remote_url,
get_project_name,
)
console = Console()
class DeployCommand:
"""
A class to handle deployment-related operations for CrewAI projects.
"""
def __init__(self):
"""
Initialize the DeployCommand with project name and API client.
"""
try:
self._telemetry = Telemetry()
self._telemetry.set_tracer()
access_token = get_auth_token()
except Exception:
self._deploy_signup_error_span = self._telemetry.deploy_signup_error_span()
console.print(
"Please sign up/login to CrewAI+ before using the CLI.",
style="bold red",
)
console.print("Run 'crewai signup' to sign up/login.", style="bold green")
raise SystemExit
self.project_name = get_project_name()
if self.project_name is None:
console.print(
"No project name found. Please ensure your project has a valid pyproject.toml file.",
style="bold red",
)
raise SystemExit
self.client = CrewAPI(api_key=access_token)
def _handle_error(self, json_response: Dict[str, Any]) -> None:
"""
Handle and display error messages from API responses.
Args:
json_response (Dict[str, Any]): The JSON response containing error information.
"""
error = json_response.get("error", "Unknown error")
message = json_response.get("message", "No message provided")
console.print(f"Error: {error}", style="bold red")
console.print(f"Message: {message}", style="bold red")
def _standard_no_param_error_message(self) -> None:
"""
Display a standard error message when no UUID or project name is available.
"""
console.print(
"No UUID provided, project pyproject.toml not found or with error.",
style="bold red",
)
def _display_deployment_info(self, json_response: Dict[str, Any]) -> None:
"""
Display deployment information.
Args:
json_response (Dict[str, Any]): The deployment information to display.
"""
console.print("Deploying the crew...\n", style="bold blue")
for key, value in json_response.items():
console.print(f"{key.title()}: [green]{value}[/green]")
console.print("\nTo check the status of the deployment, run:")
console.print("crewai deploy status")
console.print(" or")
console.print(f"crewai deploy status --uuid \"{json_response['uuid']}\"")
def _display_logs(self, log_messages: List[Dict[str, Any]]) -> None:
"""
Display log messages.
Args:
log_messages (List[Dict[str, Any]]): The log messages to display.
"""
for log_message in log_messages:
console.print(
f"{log_message['timestamp']} - {log_message['level']}: {log_message['message']}"
)
def deploy(self, uuid: Optional[str] = None) -> None:
"""
Deploy a crew using either UUID or project name.
Args:
uuid (Optional[str]): The UUID of the crew to deploy.
"""
self._start_deployment_span = self._telemetry.start_deployment_span(uuid)
console.print("Starting deployment...", style="bold blue")
if uuid:
response = self.client.deploy_by_uuid(uuid)
elif self.project_name:
response = self.client.deploy_by_name(self.project_name)
else:
self._standard_no_param_error_message()
return
json_response = response.json()
if response.status_code == 200:
self._display_deployment_info(json_response)
else:
self._handle_error(json_response)
def create_crew(self, confirm: bool) -> None:
"""
Create a new crew deployment.
"""
self._create_crew_deployment_span = (
self._telemetry.create_crew_deployment_span()
)
console.print("Creating deployment...", style="bold blue")
env_vars = fetch_and_json_env_file()
remote_repo_url = get_git_remote_url()
if remote_repo_url is None:
console.print("No remote repository URL found.", style="bold red")
console.print(
"Please ensure your project has a valid remote repository.",
style="yellow",
)
return
self._confirm_input(env_vars, remote_repo_url, confirm)
payload = self._create_payload(env_vars, remote_repo_url)
response = self.client.create_crew(payload)
if response.status_code == 201:
self._display_creation_success(response.json())
else:
self._handle_error(response.json())
def _confirm_input(
self, env_vars: Dict[str, str], remote_repo_url: str, confirm: bool
) -> None:
"""
Confirm input parameters with the user.
Args:
env_vars (Dict[str, str]): Environment variables.
remote_repo_url (str): Remote repository URL.
confirm (bool): Whether to confirm input.
"""
if not confirm:
input(f"Press Enter to continue with the following Env vars: {env_vars}")
input(
f"Press Enter to continue with the following remote repository: {remote_repo_url}\n"
)
def _create_payload(
self,
env_vars: Dict[str, str],
remote_repo_url: str,
) -> Dict[str, Any]:
"""
Create the payload for crew creation.
Args:
remote_repo_url (str): Remote repository URL.
env_vars (Dict[str, str]): Environment variables.
Returns:
Dict[str, Any]: The payload for crew creation.
"""
return {
"deploy": {
"name": self.project_name,
"repo_clone_url": remote_repo_url,
"env": env_vars,
}
}
def _display_creation_success(self, json_response: Dict[str, Any]) -> None:
"""
Display success message after crew creation.
Args:
json_response (Dict[str, Any]): The response containing crew information.
"""
console.print("Deployment created successfully!\n", style="bold green")
console.print(
f"Name: {self.project_name} ({json_response['uuid']})", style="bold green"
)
console.print(f"Status: {json_response['status']}", style="bold green")
console.print("\nTo (re)deploy the crew, run:")
console.print("crewai deploy push")
console.print(" or")
console.print(f"crewai deploy push --uuid {json_response['uuid']}")
def list_crews(self) -> None:
"""
List all available crews.
"""
console.print("Listing all Crews\n", style="bold blue")
response = self.client.list_crews()
json_response = response.json()
if response.status_code == 200:
self._display_crews(json_response)
else:
self._display_no_crews_message()
def _display_crews(self, crews_data: List[Dict[str, Any]]) -> None:
"""
Display the list of crews.
Args:
crews_data (List[Dict[str, Any]]): List of crew data to display.
"""
for crew_data in crews_data:
console.print(
f"- {crew_data['name']} ({crew_data['uuid']}) [blue]{crew_data['status']}[/blue]"
)
def _display_no_crews_message(self) -> None:
"""
Display a message when no crews are available.
"""
console.print("You don't have any Crews yet. Let's create one!", style="yellow")
console.print(" crewai create crew <crew_name>", style="green")
def get_crew_status(self, uuid: Optional[str] = None) -> None:
"""
Get the status of a crew.
Args:
uuid (Optional[str]): The UUID of the crew to check.
"""
console.print("Fetching deployment status...", style="bold blue")
if uuid:
response = self.client.status_by_uuid(uuid)
elif self.project_name:
response = self.client.status_by_name(self.project_name)
else:
self._standard_no_param_error_message()
return
json_response = response.json()
if response.status_code == 200:
self._display_crew_status(json_response)
else:
self._handle_error(json_response)
def _display_crew_status(self, status_data: Dict[str, str]) -> None:
"""
Display the status of a crew.
Args:
status_data (Dict[str, str]): The status data to display.
"""
console.print(f"Name:\t {status_data['name']}")
console.print(f"Status:\t {status_data['status']}")
def get_crew_logs(self, uuid: Optional[str], log_type: str = "deployment") -> None:
"""
Get logs for a crew.
Args:
uuid (Optional[str]): The UUID of the crew to get logs for.
log_type (str): The type of logs to retrieve (default: "deployment").
"""
self._get_crew_logs_span = self._telemetry.get_crew_logs_span(uuid, log_type)
console.print(f"Fetching {log_type} logs...", style="bold blue")
if uuid:
response = self.client.logs_by_uuid(uuid, log_type)
elif self.project_name:
response = self.client.logs_by_name(self.project_name, log_type)
else:
self._standard_no_param_error_message()
return
if response.status_code == 200:
self._display_logs(response.json())
else:
self._handle_error(response.json())
def remove_crew(self, uuid: Optional[str]) -> None:
"""
Remove a crew deployment.
Args:
uuid (Optional[str]): The UUID of the crew to remove.
"""
self._remove_crew_span = self._telemetry.remove_crew_span(uuid)
console.print("Removing deployment...", style="bold blue")
if uuid:
response = self.client.delete_by_uuid(uuid)
elif self.project_name:
response = self.client.delete_by_name(self.project_name)
else:
self._standard_no_param_error_message()
return
if response.status_code == 204:
console.print(
f"Crew '{self.project_name}' removed successfully.", style="green"
)
else:
console.print(
f"Failed to remove crew '{self.project_name}'", style="bold red"
)


@@ -0,0 +1,155 @@
import sys
import re
import subprocess
from rich.console import Console
from ..authentication.utils import TokenManager
console = Console()
if sys.version_info >= (3, 11):
import tomllib
# Drop the simple_toml_parser when we move to python3.11
def simple_toml_parser(content):
result = {}
current_section = result
for line in content.split('\n'):
line = line.strip()
if line.startswith('[') and line.endswith(']'):
# New section
section = line[1:-1].split('.')
current_section = result
for key in section:
current_section = current_section.setdefault(key, {})
elif '=' in line:
key, value = line.split('=', 1)
key = key.strip()
value = value.strip().strip('"')
current_section[key] = value
return result
def parse_toml(content):
if sys.version_info >= (3, 11):
return tomllib.loads(content)
else:
return simple_toml_parser(content)
def get_git_remote_url() -> str | None:
"""Get the Git repository's remote URL."""
try:
# Run the git remote -v command
result = subprocess.run(
["git", "remote", "-v"], capture_output=True, text=True, check=True
)
# Get the output
output = result.stdout
# Parse the output to find the origin URL
matches = re.findall(r"origin\s+(.*?)\s+\(fetch\)", output)
if matches:
return matches[0] # Return the first match (origin URL)
else:
console.print("No origin remote found.", style="bold red")
except subprocess.CalledProcessError as e:
console.print(f"Error running trying to fetch the Git Repository: {e}", style="bold red")
except FileNotFoundError:
console.print("Git command not found. Make sure Git is installed and in your PATH.", style="bold red")
return None
def get_project_name(pyproject_path: str = "pyproject.toml") -> str | None:
"""Get the project name from the pyproject.toml file."""
try:
# Read the pyproject.toml file
with open(pyproject_path, "r") as f:
pyproject_content = parse_toml(f.read())
# Extract the project name
project_name = pyproject_content["tool"]["poetry"]["name"]
if "crewai" not in pyproject_content["tool"]["poetry"]["dependencies"]:
raise Exception("crewai is not in the dependencies.")
return project_name
except FileNotFoundError:
print(f"Error: {pyproject_path} not found.")
except KeyError:
print(f"Error: {pyproject_path} is not a valid pyproject.toml file.")
except tomllib.TOMLDecodeError if sys.version_info >= (3, 11) else Exception as e: # type: ignore
print(
f"Error: {pyproject_path} is not a valid TOML file."
if sys.version_info >= (3, 11)
else f"Error reading the pyproject.toml file: {e}"
)
except Exception as e:
print(f"Error reading the pyproject.toml file: {e}")
return None
def get_crewai_version(poetry_lock_path: str = "poetry.lock") -> str:
"""Get the version number of crewai from the poetry.lock file."""
try:
with open(poetry_lock_path, "r") as f:
lock_content = f.read()
match = re.search(
r'\[\[package\]\]\s*name\s*=\s*"crewai"\s*version\s*=\s*"([^"]+)"',
lock_content,
re.DOTALL,
)
if match:
return match.group(1)
else:
print("crewai package not found in poetry.lock")
return "no-version-found"
except FileNotFoundError:
print(f"Error: {poetry_lock_path} not found.")
except Exception as e:
print(f"Error reading the poetry.lock file: {e}")
return "no-version-found"
def fetch_and_json_env_file(env_file_path: str = ".env") -> dict:
"""Fetch the environment variables from a .env file and return them as a dictionary."""
try:
# Read the .env file
with open(env_file_path, "r") as f:
env_content = f.read()
# Parse the .env file content to a dictionary
env_dict = {}
for line in env_content.splitlines():
if line.strip() and not line.strip().startswith("#"):
key, value = line.split("=", 1)
env_dict[key.strip()] = value.strip()
return env_dict
except FileNotFoundError:
print(f"Error: {env_file_path} not found.")
except Exception as e:
print(f"Error reading the .env file: {e}")
return {}
def get_auth_token() -> str:
"""Get the authentication token."""
access_token = TokenManager().get_token()
if not access_token:
raise Exception()
return access_token


@@ -0,0 +1,21 @@
import subprocess
import click
def install_crew() -> None:
"""
Install the crew by running the Poetry command to lock and install.
"""
try:
subprocess.run(["poetry", "lock"], check=True, capture_output=False, text=True)
subprocess.run(
["poetry", "install"], check=True, capture_output=False, text=True
)
except subprocess.CalledProcessError as e:
click.echo(f"An error occurred while running the crew: {e}", err=True)
click.echo(e.output, err=True)
except Exception as e:
click.echo(f"An unexpected error occurred: {e}", err=True)


@@ -14,12 +14,9 @@ pip install poetry
Next, navigate to your project directory and install the dependencies:
1. First lock the dependencies and then install them:
1. First lock the dependencies and install them by using the CLI command:
```bash
poetry lock
```
```bash
poetry install
crewai install
```
### Customizing
@@ -37,10 +34,6 @@ To kickstart your crew of AI agents and begin task execution, run this from the
```bash
$ crewai run
```
or
```bash
poetry run {{folder_name}}
```
This command initializes the {{name}} Crew, assembling the agents and assigning them tasks as defined in your configuration.


@@ -6,7 +6,8 @@ authors = ["Your Name <you@example.com>"]
[tool.poetry.dependencies]
python = ">=3.10,<=3.13"
crewai = { extras = ["tools"], version = "^0.51.0" }
crewai = { extras = ["tools"], version = ">=0.55.2,<1.0.0" }
[tool.poetry.scripts]
{{folder_name}} = "{{folder_name}}.main:run"


@@ -0,0 +1,2 @@
.env
__pycache__/


@@ -0,0 +1,57 @@
# {{crew_name}} Crew
Welcome to the {{crew_name}} Crew project, powered by [crewAI](https://crewai.com). This template is designed to help you set up a multi-agent AI system with ease, leveraging the powerful and flexible framework provided by crewAI. Our goal is to enable your agents to collaborate effectively on complex tasks, maximizing their collective intelligence and capabilities.
## Installation
Ensure you have Python >=3.10 <=3.13 installed on your system. This project uses [Poetry](https://python-poetry.org/) for dependency management and package handling, offering a seamless setup and execution experience.
First, if you haven't already, install Poetry:
```bash
pip install poetry
```
Next, navigate to your project directory and install the dependencies:
1. First lock the dependencies and then install them:
```bash
crewai install
```
### Customizing
**Add your `OPENAI_API_KEY` into the `.env` file**
- Modify `src/{{folder_name}}/config/agents.yaml` to define your agents
- Modify `src/{{folder_name}}/config/tasks.yaml` to define your tasks
- Modify `src/{{folder_name}}/crew.py` to add your own logic, tools and specific args
- Modify `src/{{folder_name}}/main.py` to add custom inputs for your agents and tasks
## Running the Project
To kickstart your crew of AI agents and begin task execution, run this from the root folder of your project:
```bash
crewai run
```
This command initializes the {{name}} Crew, assembling the agents and assigning them tasks as defined in your configuration.
This example, unmodified, will create a `report.md` file in the root folder containing the output of a research run on LLMs.
## Understanding Your Crew
The {{name}} Crew is composed of multiple AI agents, each with unique roles, goals, and tools. These agents collaborate on a series of tasks, defined in `config/tasks.yaml`, leveraging their collective skills to achieve complex objectives. The `config/agents.yaml` file outlines the capabilities and configurations of each agent in your crew.
## Support
For support, questions, or feedback regarding the {{crew_name}} Crew or crewAI:
- Visit our [documentation](https://docs.crewai.com)
- Reach out to us through our [GitHub repository](https://github.com/joaomdmoura/crewai)
- [Join our Discord](https://discord.com/invite/X4JWnZnxPb)
- [Chat with our docs](https://chatg.pt/DWjSBZn)
Let's create wonders together with the power and simplicity of crewAI.


@@ -0,0 +1,11 @@
poem_writer:
role: >
CrewAI Poem Writer
goal: >
Generate a funny, light-hearted poem about how CrewAI
is awesome with a sentence count of {sentence_count}
backstory: >
You're a creative poet with a talent for capturing the essence of any topic
in a beautiful and engaging way. Known for your ability to craft poems that
resonate with readers, you bring a unique perspective and artistic flair to
every piece you write.


@@ -0,0 +1,7 @@
write_poem:
description: >
Write a poem about how CrewAI is awesome.
Ensure the poem is engaging and adheres to the specified sentence count of {sentence_count}.
expected_output: >
A beautifully crafted poem about CrewAI, with exactly {sentence_count} sentences.
agent: poem_writer


@@ -0,0 +1,31 @@
from crewai import Agent, Crew, Process, Task
from crewai.project import CrewBase, agent, crew, task
@CrewBase
class PoemCrew():
"""Poem Crew"""
agents_config = 'config/agents.yaml'
tasks_config = 'config/tasks.yaml'
@agent
def poem_writer(self) -> Agent:
return Agent(
config=self.agents_config['poem_writer'],
)
@task
def write_poem(self) -> Task:
return Task(
config=self.tasks_config['write_poem'],
)
@crew
def crew(self) -> Crew:
"""Creates the Research Crew"""
return Crew(
agents=self.agents, # Automatically created by the @agent decorator
tasks=self.tasks, # Automatically created by the @task decorator
process=Process.sequential,
verbose=True,
)


@@ -0,0 +1,55 @@
#!/usr/bin/env python
import asyncio
from random import randint
from pydantic import BaseModel
from crewai.flow.flow import Flow, listen, start
from .crews.poem_crew.poem_crew import PoemCrew
class PoemState(BaseModel):
sentence_count: int = 1
poem: str = ""
class PoemFlow(Flow[PoemState]):
initial_state = PoemState
@start()
def generate_sentence_count(self):
print("Generating sentence count")
# Generate a number between 1 and 5
self.state.sentence_count = randint(1, 5)
@listen(generate_sentence_count)
def generate_poem(self):
print("Generating poem")
print(f"State before poem: {self.state}")
poem_crew = PoemCrew().crew()
result = poem_crew.kickoff(inputs={"sentence_count": self.state.sentence_count})
print("Poem generated", result.raw)
self.state.poem = result.raw
print(f"State after generate_poem: {self.state}")
@listen(generate_poem)
def save_poem(self):
print("Saving poem")
print(f"State before save_poem: {self.state}")
with open("poem.txt", "w") as f:
f.write(self.state.poem)
print(f"State after save_poem: {self.state}")
async def run():
"""
Run the flow.
"""
poem_flow = PoemFlow()
await poem_flow.kickoff()
def main():
asyncio.run(run())
if __name__ == "__main__":
main()


@@ -0,0 +1,18 @@
[tool.poetry]
name = "{{folder_name}}"
version = "0.1.0"
description = "{{name}} using crewAI"
authors = ["Your Name <you@example.com>"]
[tool.poetry.dependencies]
python = ">=3.10,<=3.13"
crewai = { extras = ["tools"], version = ">=0.55.2,<1.0.0" }
asyncio = "*"
[tool.poetry.scripts]
{{folder_name}} = "{{folder_name}}.main:main"
run_crew = "{{folder_name}}.main:main"
[build-system]
requires = ["poetry-core"]
build-backend = "poetry.core.masonry.api"


@@ -0,0 +1,12 @@
from crewai_tools import BaseTool
class MyCustomTool(BaseTool):
name: str = "Name of my tool"
description: str = (
"Clear description for what this tool is useful for, you agent will need this information to use it."
)
def _run(self, argument: str) -> str:
# Implementation goes here
return "this is an example of a tool output, ignore it and move along."


@@ -15,12 +15,11 @@ pip install poetry
Next, navigate to your project directory and install the dependencies:
1. First lock the dependencies and then install them:
```bash
poetry lock
```
```bash
poetry install
crewai install
```
### Customizing
**Add your `OPENAI_API_KEY` into the `.env` file**
@@ -35,7 +34,7 @@ poetry install
To kickstart your crew of AI agents and begin task execution, run this from the root folder of your project:
```bash
poetry run {{folder_name}}
crewai run
```
This command initializes the {{name}} Crew, assembling the agents and assigning them tasks as defined in your configuration.
@@ -49,6 +48,7 @@ The {{name}} Crew is composed of multiple AI agents, each with unique roles, goa
## Support
For support, questions, or feedback regarding the {{crew_name}} Crew or crewAI:
- Visit our [documentation](https://docs.crewai.com)
- Reach out to us through our [GitHub repository](https://github.com/joaomdmoura/crewai)
- [Join our Discord](https://discord.com/invite/X4JWnZnxPb)


@@ -6,7 +6,7 @@ authors = ["Your Name <you@example.com>"]
[tool.poetry.dependencies]
python = ">=3.10,<=3.13"
crewai = { extras = ["tools"], version = "^0.51.0" }
crewai = { extras = ["tools"], version = ">=0.55.2,<1.0.0" }
asyncio = "*"
[tool.poetry.scripts]


@@ -16,10 +16,7 @@ Next, navigate to your project directory and install the dependencies:
1. First lock the dependencies and then install them:
```bash
poetry lock
```
```bash
poetry install
crewai install
```
### Customizing
@@ -35,7 +32,7 @@ poetry install
To kickstart your crew of AI agents and begin task execution, run this from the root folder of your project:
```bash
poetry run {{folder_name}}
crewai run
```
This command initializes the {{name}} Crew, assembling the agents and assigning them tasks as defined in your configuration.

View File

@@ -6,7 +6,8 @@ authors = ["Your Name <you@example.com>"]
[tool.poetry.dependencies]
python = ">=3.10,<=3.13"
crewai = { extras = ["tools"], version = "^0.51.0" }
crewai = { extras = ["tools"], version = ">=0.55.2,<1.0.0" }
[tool.poetry.scripts]
{{folder_name}} = "{{folder_name}}.main:main"

View File

@@ -1,16 +1,15 @@
import asyncio
import json
import os
import uuid
from concurrent.futures import Future
from hashlib import md5
import os
from typing import TYPE_CHECKING, Any, Dict, List, Optional, Tuple, Union
from langchain_core.callbacks import BaseCallbackHandler
from pydantic import (
UUID4,
BaseModel,
ConfigDict,
Field,
InstanceOf,
Json,
@@ -48,11 +47,10 @@ from crewai.utilities.planning_handler import CrewPlanner
from crewai.utilities.task_output_storage_handler import TaskOutputStorageHandler
from crewai.utilities.training_handler import CrewTrainingHandler
agentops = None
if os.environ.get("AGENTOPS_API_KEY"):
try:
import agentops
import agentops # type: ignore
except ImportError:
pass
@@ -106,7 +104,6 @@ class Crew(BaseModel):
name: Optional[str] = Field(default=None)
cache: bool = Field(default=True)
model_config = ConfigDict(arbitrary_types_allowed=True)
tasks: List[Task] = Field(default_factory=list)
agents: List[BaseAgent] = Field(default_factory=list)
process: Process = Field(default=Process.sequential)
@@ -364,7 +361,7 @@ class Crew(BaseModel):
source = [agent.key for agent in self.agents] + [
task.key for task in self.tasks
]
return md5("|".join(source).encode()).hexdigest()
return md5("|".join(source).encode(), usedforsecurity=False).hexdigest()
def _setup_from_config(self):
assert self.config is not None, "Config should not be None."
@@ -541,7 +538,7 @@ class Crew(BaseModel):
)._handle_crew_planning()
for task, step_plan in zip(self.tasks, result.list_of_plans_per_task):
task.description += step_plan
task.description += step_plan.plan
def _store_execution_log(
self,
@@ -587,7 +584,10 @@ class Crew(BaseModel):
self.manager_agent.allow_delegation = True
manager = self.manager_agent
if manager.tools is not None and len(manager.tools) > 0:
raise Exception("Manager agent should not have tools")
self._logger.log(
"warning", "Manager agent should not have tools", color="orange"
)
manager.tools = []
manager.tools = self.manager_agent.get_delegation_tools(self.agents)
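# Any preexisting manager tools are dropped; the manager only ever carries
# the delegation tools generated from this crew's agents.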
else:
manager = Agent(

View File

@@ -1 +1,3 @@
from .crew_output import CrewOutput
__all__ = ["CrewOutput"]

View File

@@ -41,6 +41,14 @@ class CrewOutput(BaseModel):
output_dict.update(self.pydantic.model_dump())
return output_dict
def __getitem__(self, key):
if self.pydantic and hasattr(self.pydantic, key):
return getattr(self.pydantic, key)
elif self.json_dict and key in self.json_dict:
return self.json_dict[key]
else:
raise KeyError(f"Key '{key}' not found in CrewOutput.")
def __str__(self):
if self.pydantic:
return str(self.pydantic)
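The new `__getitem__` makes `CrewOutput` subscriptable. A hedged usage sketch; `crew` and the `"title"` key are hypothetical:
```python
output = crew.kickoff()  # assumed to return a CrewOutput
title = output["title"]  # checks output.pydantic first, then output.json_dict
# raises KeyError if the key exists in neither representation
```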

View File

@@ -0,0 +1 @@
# TODO:

View File

@@ -0,0 +1,13 @@
from crewai.flows import Flow, end_job, start_job # type: ignore
class SimpleFlow(Flow):
@start_job()
async def research(self, topic: str) -> str:
print(f"Researching {topic}...")
return f"Full report on {topic}..."
@end_job("research")
async def write_post(self, report: str) -> str:
return f"Post written: {report}"

View File

@@ -0,0 +1,17 @@
from crewai.flows import Flow, end_job, job, start_job # type: ignore
class LongerFlow(Flow):
@start_job()
async def research(self, topic: str) -> str:
print(f"Researching {topic}...")
return f"Full report on {topic}..."
@job("research")
async def edit_report(self, report: str) -> str:
return f"Edited report: {report}"
@end_job("edit_report")
async def write_post(self, report: str) -> str:
return f"Post written: {report}"

View File

@@ -0,0 +1,22 @@
from typing import Tuple
from crewai.flows import Flow, end_job, router, start_job # type: ignore
class RouterFlow(Flow):
@start_job()
@router()
async def classify_email(self, report: str) -> Tuple[str, str]:
if "urgent" in report:
return "urgent", report
return "normal", report
@end_job("urgent")
async def write_urgent_email(self, report: str) -> str:
return f"Urgent Email Response: {report}"
@end_job("normal")
async def write_normal_email(self, report: str) -> str:
return f"Normal Email Response: {report}"

View File

@@ -0,0 +1,13 @@
from crewai.flows import Flow, end_job, start_job # type: ignore
class SimpleFlow(Flow):
@start_job()
async def research(self, topic: str) -> str:
print(f"Researching {topic}...")
return f"Full report on {topic}..."
@end_job("research")
async def write_post(self, report: str) -> str:
return f"Post written: {report}"

View File

@@ -0,0 +1,19 @@
from crewai.flows import Flow, end_job, job, start_job # type: ignore
class SimpleFlow(Flow):
@start_job()
async def research_crew(self, topic: str) -> str:
result = research_crew.kickoff(inputs={"topic": topic})  # string key; research_crew is assumed defined at module level
return result.raw
@job("research_crew")
async def create_x_post(self, research: str) -> str:
result = x_post_crew.kickoff(inputs={"research": research})  # string key; x_post_crew is assumed defined at module level
return result.raw
@end_job("research")
async def post_to_x(self, post: str) -> None:
# TODO: Post to X
return None

src/crewai/flow/flow.py (Normal file, 234 lines)
View File

@@ -0,0 +1,234 @@
import asyncio
import inspect
from typing import Any, Callable, Dict, Generic, List, Set, Type, TypeVar, Union
from pydantic import BaseModel
# TODO: Allow people to pass results from one method to another and not just state
T = TypeVar("T", bound=Union[BaseModel, Dict[str, Any]])
def start():
def decorator(func):
print(f"[start decorator] Decorating start method: {func.__name__}")
func.__is_start_method__ = True
return func
return decorator
def listen(condition):
def decorator(func):
print(
f"[listen decorator] Decorating listener: {func.__name__} with condition: {condition}"
)
if isinstance(condition, str):
func.__trigger_methods__ = [condition]
func.__condition_type__ = "OR"
print(
f"[listen decorator] Set __trigger_methods__ for {func.__name__}: [{condition}] with mode: OR"
)
elif (
isinstance(condition, dict)
and "type" in condition
and "methods" in condition
):
func.__trigger_methods__ = condition["methods"]
func.__condition_type__ = condition["type"]
print(
f"[listen decorator] Set __trigger_methods__ for {func.__name__}: {func.__trigger_methods__} with mode: {func.__condition_type__}"
)
elif callable(condition) and hasattr(condition, "__name__"):
func.__trigger_methods__ = [condition.__name__]
func.__condition_type__ = "OR"
print(
f"[listen decorator] Set __trigger_methods__ for {func.__name__}: [{condition.__name__}] with mode: OR"
)
else:
raise ValueError(
"Condition must be a method, string, or a result of or_() or and_()"
)
return func
return decorator
def or_(*conditions):
methods = []
for condition in conditions:
if isinstance(condition, dict) and "methods" in condition:
methods.extend(condition["methods"])
elif callable(condition) and hasattr(condition, "__name__"):
methods.append(condition.__name__)
elif isinstance(condition, str):
methods.append(condition)
else:
raise ValueError("Invalid condition in or_()")
return {"type": "OR", "methods": methods}
def and_(*conditions):
methods = []
for condition in conditions:
if isinstance(condition, dict) and "methods" in condition:
methods.extend(condition["methods"])
elif callable(condition) and hasattr(condition, "__name__"):
methods.append(condition.__name__)
elif isinstance(condition, str):
methods.append(condition)
else:
raise ValueError("Invalid condition in and_()")
return {"type": "AND", "methods": methods}
class FlowMeta(type):
def __new__(mcs, name, bases, dct):
cls = super().__new__(mcs, name, bases, dct)
start_methods = []
listeners = {}
print(f"[FlowMeta] Processing class: {name}")
for attr_name, attr_value in dct.items():
print(f"[FlowMeta] Checking attribute: {attr_name}")
if hasattr(attr_value, "__is_start_method__"):
print(f"[FlowMeta] Found start method: {attr_name}")
start_methods.append(attr_name)
if hasattr(attr_value, "__trigger_methods__"):
methods = attr_value.__trigger_methods__
condition_type = getattr(attr_value, "__condition_type__", "OR")
print(f"[FlowMeta] Conditions for {attr_name}:", methods)
listeners[attr_name] = (condition_type, methods)
setattr(cls, "_start_methods", start_methods)
setattr(cls, "_listeners", listeners)
print("[FlowMeta] ALL LISTENERS:", listeners)
print("[FlowMeta] START METHODS:", start_methods)
return cls
class Flow(Generic[T], metaclass=FlowMeta):
_start_methods: List[str] = []
_listeners: Dict[str, tuple[str, List[str]]] = {}
initial_state: Union[Type[T], T, None] = None
def __class_getitem__(cls, item):
print(f"[Flow.__class_getitem__] Getting initial state type: {item}")
class _FlowGeneric(cls):
_initial_state_T = item
return _FlowGeneric
def __init__(self):
print("[Flow.__init__] Initializing Flow")
self._methods: Dict[str, Callable] = {}
self._state = self._create_initial_state()
self._completed_methods: Set[str] = set()
self._pending_and_listeners: Dict[str, Set[str]] = {}
for method_name in dir(self):
if callable(getattr(self, method_name)) and not method_name.startswith(
"__"
):
print(f"[Flow.__init__] Adding method: {method_name}")
self._methods[method_name] = getattr(self, method_name)
print("[Flow.__init__] All methods:", self._methods.keys())
print("[Flow.__init__] Listeners:", self._listeners)
def _create_initial_state(self) -> T:
print("[Flow._create_initial_state] Creating initial state")
if self.initial_state is None and hasattr(self, "_initial_state_T"):
return self._initial_state_T()
elif self.initial_state is None:
return {} # type: ignore
elif isinstance(self.initial_state, type):
return self.initial_state()
else:
return self.initial_state
@property
def state(self) -> T:
return self._state
async def kickoff(self):
print("[Flow.kickoff] Starting kickoff")
if not self._start_methods:
raise ValueError("No start method defined")
for start_method in self._start_methods:
print(f"[Flow.kickoff] Executing start method: {start_method}")
result = await self._execute_method(self._methods[start_method])
print(
f"[Flow.kickoff] Start method {start_method} completed. Executing listeners."
)
await self._execute_listeners(start_method, result)
async def _execute_method(self, method: Callable, *args, **kwargs):
print(f"[Flow._execute_method] Executing method: {method.__name__}")
if inspect.iscoroutinefunction(method):
return await method(*args, **kwargs)
else:
return method(*args, **kwargs)
async def _execute_listeners(self, trigger_method: str, result: Any):
print(
f"[Flow._execute_listeners] Executing listeners for trigger method: {trigger_method}"
)
listener_tasks = []
for listener, (condition_type, methods) in self._listeners.items():
print(
f"[Flow._execute_listeners] Checking listener: {listener}, condition: {condition_type}, methods: {methods}"
)
if condition_type == "OR":
if trigger_method in methods:
print(
f"[Flow._execute_listeners] TRIGGERING METHOD: {listener} due to trigger: {trigger_method}"
)
listener_tasks.append(
self._execute_single_listener(listener, result)
)
elif condition_type == "AND":
if listener not in self._pending_and_listeners:
self._pending_and_listeners[listener] = set()
self._pending_and_listeners[listener].add(trigger_method)
if set(methods) == self._pending_and_listeners[listener]:
print(
f"[Flow._execute_listeners] All conditions met for listener: {listener}. Executing."
)
listener_tasks.append(
self._execute_single_listener(listener, result)
)
del self._pending_and_listeners[listener]
# Run all listener tasks concurrently and wait for them to complete
print(
f"[Flow._execute_listeners] Executing {len(listener_tasks)} listener tasks"
)
await asyncio.gather(*listener_tasks)
async def _execute_single_listener(self, listener: str, result: Any):
print(f"[Flow._execute_single_listener] Executing listener: {listener}")
try:
method = self._methods[listener]
sig = inspect.signature(method)
if len(sig.parameters) > 1: # More than just 'self'
print(
f"[Flow._execute_single_listener] Executing {listener} with result"
)
listener_result = await self._execute_method(method, result)
else:
print(
f"[Flow._execute_single_listener] Executing {listener} without result"
)
listener_result = await self._execute_method(method)
print(
f"[Flow._execute_single_listener] {listener} completed, executing its listeners"
)
await self._execute_listeners(listener, listener_result)
except Exception as e:
print(
f"[Flow._execute_single_listener] Error in method {listener}: {str(e)}"
)

View File

@@ -0,0 +1,46 @@
import asyncio
from crewai.flow.flow import Flow, listen, start
from pydantic import BaseModel
class ExampleState(BaseModel):
counter: int = 0
message: str = ""
class StructuredExampleFlow(Flow[ExampleState]):
@start()
async def start_method(self):
print("Starting the structured flow")
print(f"State in start_method: {self.state}")
self.state.message = "Hello from structured flow"
print(f"State after start_method: {self.state}")
return "Start result"
@listen(start_method)
async def second_method(self, result):
print(f"Second method, received: {result}")
print(f"State before increment: {self.state}")
self.state.counter += 1
self.state.message += " - updated"
print(f"State after second_method: {self.state}")
return "Second result"
@listen(start_method)
async def third_method(self, result):
print(f"Third method, received: {result}")
print(f"State before increment: {self.state}")
self.state.counter += 1
self.state.message += " - updated"
print(f"State after third_method: {self.state}")
return "Third result"
async def main():
flow = StructuredExampleFlow()
await flow.kickoff()
asyncio.run(main())

View File

@@ -0,0 +1,42 @@
import asyncio
from crewai.flow.flow import Flow, and_, listen, start
from pydantic import BaseModel
class ExampleState(BaseModel):
counter: int = 0
message: str = ""
class StructuredExampleFlow(Flow):
initial_state = ExampleState
@start()
async def start_method(self):
print("Starting the structured flow")
print(f"State in start_method: {self.state}")
self.state.message = "Hello from structured flow"
print(f"State after start_method: {self.state}")
return "Start result"
@listen(start_method)
async def second_method(self):
print(f"State before increment: {self.state}")
self.state.counter += 1
self.state.message += " - updated"
print(f"State after second_method: {self.state}")
return "Second result"
@listen(and_(start_method, second_method))
async def logger(self):
print("AND METHOD RUNNING")
print("CURRENT STATE FROM AND: ", self.state)
async def main():
flow = StructuredExampleFlow()
await flow.kickoff()
asyncio.run(main())

View File

@@ -0,0 +1,47 @@
import asyncio
from crewai.flow.flow import Flow, and_, listen, or_, start
from pydantic import BaseModel
class ExampleState(BaseModel):
counter: int = 0
message: str = ""
class StructuredExampleFlow(Flow[ExampleState]):
initial_state = ExampleState
@start()
async def start_method(self):
print("Starting the structured flow")
print(f"State in start_method: {self.state}")
self.state.message = "Hello from structured flow"
print(f"State after start_method: {self.state}")
return "Start result"
@listen(start_method)
async def second_method(self):
print(f"State before increment: {self.state}")
self.state.counter += 1
self.state.message += " - updated"
print(f"State after second_method: {self.state}")
return "Second result"
@listen(or_(start_method, second_method))
async def logger(self):
print("LOGGER METHOD RUNNING")
print("CURRENT STATE FROM LOGGER: ", self.state)
@listen(and_(start_method, second_method))
async def and_logger(self):
print("AND LOGGER METHOD RUNNING")
print("CURRENT STATE FROM AND LOGGER: ", self.state)
async def main():
flow = StructuredExampleFlow()
await flow.kickoff()
asyncio.run(main())

View File

@@ -0,0 +1,33 @@
import asyncio
from crewai.flow.flow import Flow, listen, start
class FlexibleExampleFlow(Flow):
@start()
def start_method(self):
print("Starting the flexible flow")
self.state["counter"] = 1
return "Start result"
@listen(start_method)
def second_method(self, result):
print(f"Second method, received: {result}")
self.state["counter"] += 1
self.state["message"] = "Hello from flexible flow"
return "Second result"
@listen(second_method)
def third_method(self, result):
print(f"Third method, received: {result}")
print(f"Final counter value: {self.state["counter"]}")
print(f"Final message: {self.state["message"]}")
return "Third result"
async def main():
flow = FlexibleExampleFlow()
await flow.kickoff()
asyncio.run(main())

View File

@@ -1,3 +1,5 @@
from .entity.entity_memory import EntityMemory
from .long_term.long_term_memory import LongTermMemory
from .short_term.short_term_memory import ShortTermMemory
__all__ = ["EntityMemory", "LongTermMemory", "ShortTermMemory"]

View File

@@ -21,7 +21,7 @@ class Memory:
if agent:
metadata["agent"] = agent
self.storage.save(value, metadata) # type: ignore # Maybe BUG? Should be self.storage.save(key, value, metadata)
self.storage.save(value, metadata)
def search(self, query: str) -> Dict[str, Any]:
return self.storage.search(query)

View File

@@ -5,13 +5,14 @@ import os
import shutil
from typing import Any, Dict, List, Optional
from crewai.memory.storage.interface import Storage
from crewai.utilities.paths import db_storage_path
from embedchain import App
from embedchain.llm.base import BaseLlm
from embedchain.models.data_type import DataType
from embedchain.vectordb.chroma import InvalidDimensionException
from crewai.memory.storage.interface import Storage
from crewai.utilities.paths import db_storage_path
@contextlib.contextmanager
def suppress_logging(
@@ -77,12 +78,12 @@ class RAGStorage(Storage):
self.app.llm = FakeLLM()
if allow_reset:
self.app.reset()
def _sanitize_role(self, role: str) -> str:
"""
Sanitizes agent roles to ensure valid directory names.
"""
return role.replace('\n', '').replace(' ', '_').replace('/', '_')
return role.replace("\n", "").replace(" ", "_").replace("/", "_")
def save(self, value: Any, metadata: Dict[str, Any]) -> None:
self._generate_embedding(value, metadata)

View File

@@ -1,3 +1,5 @@
from crewai.pipeline.pipeline import Pipeline
from crewai.pipeline.pipeline_kickoff_result import PipelineKickoffResult
from crewai.pipeline.pipeline_output import PipelineOutput
__all__ = ["Pipeline", "PipelineKickoffResult", "PipelineOutput"]

View File

@@ -1,3 +1,5 @@
from functools import wraps
from crewai.project.utils import memoize
@@ -5,13 +7,17 @@ def task(func):
if not hasattr(task, "registration_order"):
task.registration_order = []
func.is_task = True
wrapped_func = memoize(func)
@wraps(func)
def wrapper(*args, **kwargs):
result = func(*args, **kwargs)
if not result.name:
result.name = func.__name__
return result
# Append the function name to the registration order list
setattr(wrapper, "is_task", True)
task.registration_order.append(func.__name__)
return wrapped_func
return memoize(wrapper)
def agent(func):
@@ -97,7 +103,8 @@ def crew(func):
for task_name in sorted_task_names:
task_instance = tasks[task_name]()
instantiated_tasks.append(task_instance)
if hasattr(task_instance, "agent"):
agent_instance = getattr(task_instance, "agent", None)
if agent_instance is not None:
agent_instance = task_instance.agent
if agent_instance.role not in agent_roles:
instantiated_agents.append(agent_instance)

View File

@@ -1,56 +1,45 @@
import inspect
import os
from pathlib import Path
from typing import Any, Callable, Dict
import yaml
from dotenv import load_dotenv
from pydantic import ConfigDict
load_dotenv()
def CrewBase(cls):
class WrappedClass(cls):
model_config = ConfigDict(arbitrary_types_allowed=True)
is_crew_class: bool = True # type: ignore
base_directory = None
for frame_info in inspect.stack():
if "site-packages" not in frame_info.filename:
base_directory = Path(frame_info.filename).parent.resolve()
break
# Get the directory of the class being decorated
base_directory = Path(inspect.getfile(cls)).parent
original_agents_config_path = getattr(
cls, "agents_config", "config/agents.yaml"
)
original_tasks_config_path = getattr(cls, "tasks_config", "config/tasks.yaml")
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
if self.base_directory is None:
raise Exception(
"Unable to dynamically determine the project's base directory, you must run it from the project's root directory."
)
agents_config_path = self.base_directory / self.original_agents_config_path
tasks_config_path = self.base_directory / self.original_tasks_config_path
self.agents_config = self.load_yaml(
os.path.join(self.base_directory, self.original_agents_config_path)
)
self.tasks_config = self.load_yaml(
os.path.join(self.base_directory, self.original_tasks_config_path)
)
self.agents_config = self.load_yaml(agents_config_path)
self.tasks_config = self.load_yaml(tasks_config_path)
self.map_all_agent_variables()
self.map_all_task_variables()
@staticmethod
def load_yaml(config_path: str):
with open(config_path, "r") as file:
# parsedContent = YamlParser.parse(file) # type: ignore # Argument 1 to "parse" has incompatible type "TextIOWrapper"; expected "YamlParser"
return yaml.safe_load(file)
def load_yaml(config_path: Path):
try:
with open(config_path, "r") as file:
return yaml.safe_load(file)
except FileNotFoundError:
print(f"File not found: {config_path}")
raise
def _get_all_functions(self):
return {

View File

@@ -1,24 +1,24 @@
from typing import Callable, Dict
from pydantic import ConfigDict
from typing import Any, Callable, Dict, List, Type, Union
from crewai.crew import Crew
from crewai.pipeline.pipeline import Pipeline
from crewai.routers.router import Router
PipelineStage = Union[Crew, List[Crew], Router]
# TODO: Could potentially remove. Need to check with @joao and @gui if this is needed for CrewAI+
def PipelineBase(cls):
def PipelineBase(cls: Type[Any]) -> Type[Any]:
class WrappedClass(cls):
model_config = ConfigDict(arbitrary_types_allowed=True)
is_pipeline_class: bool = True
is_pipeline_class: bool = True # type: ignore
stages: List[PipelineStage]
def __init__(self, *args, **kwargs):
def __init__(self, *args: Any, **kwargs: Any) -> None:
super().__init__(*args, **kwargs)
self.stages = []
self._map_pipeline_components()
def _get_all_functions(self):
def _get_all_functions(self) -> Dict[str, Callable[..., Any]]:
return {
name: getattr(self, name)
for name in dir(self)
@@ -26,15 +26,15 @@ def PipelineBase(cls):
}
def _filter_functions(
self, functions: Dict[str, Callable], attribute: str
) -> Dict[str, Callable]:
self, functions: Dict[str, Callable[..., Any]], attribute: str
) -> Dict[str, Callable[..., Any]]:
return {
name: func
for name, func in functions.items()
if hasattr(func, attribute)
}
def _map_pipeline_components(self):
def _map_pipeline_components(self) -> None:
all_functions = self._get_all_functions()
crew_functions = self._filter_functions(all_functions, "is_crew")
router_functions = self._filter_functions(all_functions, "is_router")

View File

@@ -1 +1,3 @@
from crewai.routers.router import Router
__all__ = ["Router"]

View File

@@ -1,32 +1,26 @@
from copy import deepcopy
from typing import Any, Callable, Dict, Generic, Tuple, TypeVar
from typing import Any, Callable, Dict, Tuple
from pydantic import BaseModel, Field, PrivateAttr
T = TypeVar("T", bound=Dict[str, Any])
U = TypeVar("U")
class Route(BaseModel):
condition: Callable[[Dict[str, Any]], bool]
pipeline: Any
class Route(Generic[T, U]):
condition: Callable[[T], bool]
pipeline: U
def __init__(self, condition: Callable[[T], bool], pipeline: U):
self.condition = condition
self.pipeline = pipeline
class Router(BaseModel, Generic[T, U]):
routes: Dict[str, Route[T, U]] = Field(
class Router(BaseModel):
routes: Dict[str, Route] = Field(
default_factory=dict,
description="Dictionary of route names to (condition, pipeline) tuples",
)
default: U = Field(..., description="Default pipeline if no conditions are met")
default: Any = Field(..., description="Default pipeline if no conditions are met")
_route_types: Dict[str, type] = PrivateAttr(default_factory=dict)
model_config = {"arbitrary_types_allowed": True}
class Config:
arbitrary_types_allowed = True
def __init__(self, routes: Dict[str, Route[T, U]], default: U, **data):
def __init__(self, routes: Dict[str, Route], default: Any, **data):
super().__init__(routes=routes, default=default, **data)
self._check_copyable(default)
for name, route in routes.items():
@@ -34,16 +28,16 @@ class Router(BaseModel, Generic[T, U]):
self._route_types[name] = type(route.pipeline)
@staticmethod
def _check_copyable(obj):
def _check_copyable(obj: Any) -> None:
if not hasattr(obj, "copy") or not callable(getattr(obj, "copy")):
raise ValueError(f"Object of type {type(obj)} must have a 'copy' method")
def add_route(
self,
name: str,
condition: Callable[[T], bool],
pipeline: U,
) -> "Router[T, U]":
condition: Callable[[Dict[str, Any]], bool],
pipeline: Any,
) -> "Router":
"""
Add a named route with its condition and corresponding pipeline to the router.
@@ -60,7 +54,7 @@ class Router(BaseModel, Generic[T, U]):
self._route_types[name] = type(pipeline)
return self
def route(self, input_data: T) -> Tuple[U, str]:
def route(self, input_data: Dict[str, Any]) -> Tuple[Any, str]:
"""
Evaluate the input against the conditions and return the appropriate pipeline.
@@ -76,15 +70,15 @@ class Router(BaseModel, Generic[T, U]):
return self.default, "default"
def copy(self) -> "Router[T, U]":
def copy(self) -> "Router":
"""Create a deep copy of the Router."""
new_routes = {
name: Route(
condition=deepcopy(route.condition),
pipeline=route.pipeline.copy(), # type: ignore
pipeline=route.pipeline.copy(),
)
for name, route in self.routes.items()
}
new_default = self.default.copy() # type: ignore
new_default = self.default.copy()
return Router(routes=new_routes, default=new_default)
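With the generics removed, conditions take a plain dict and pipelines are `Any`. A sketch of the intended call pattern; `urgent_pipeline` and `normal_pipeline` are hypothetical objects that expose the required `copy()` method:
```python
router = Router(
    routes={
        "urgent": Route(
            condition=lambda data: data.get("urgent", False),
            pipeline=urgent_pipeline,
        )
    },
    default=normal_pipeline,
)
pipeline, route_name = router.route({"urgent": True})  # -> (urgent_pipeline, "urgent")
```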

View File

@@ -6,16 +6,24 @@ import uuid
from concurrent.futures import Future
from copy import copy
from hashlib import md5
from typing import Any, Dict, List, Optional, Tuple, Type, Union
from typing import Any, Dict, List, Optional, Set, Tuple, Type, Union
from opentelemetry.trace import Span
from pydantic import UUID4, BaseModel, Field, field_validator, model_validator
from pydantic import (
UUID4,
BaseModel,
Field,
PrivateAttr,
field_validator,
model_validator,
)
from pydantic_core import PydanticCustomError
from crewai.agents.agent_builder.base_agent import BaseAgent
from crewai.tasks.output_format import OutputFormat
from crewai.tasks.task_output import TaskOutput
from crewai.telemetry.telemetry import Telemetry
from crewai.utilities.config import process_config
from crewai.utilities.converter import Converter, convert_to_model
from crewai.utilities.i18n import I18N
@@ -39,9 +47,6 @@ class Task(BaseModel):
tools: List of tools/resources limited for task execution.
"""
class Config:
arbitrary_types_allowed = True
__hash__ = object.__hash__ # type: ignore
used_tools: int = 0
tools_errors: int = 0
@@ -103,17 +108,29 @@ class Task(BaseModel):
description="A converter class used to export structured output",
default=None,
)
processed_by_agents: Set[str] = Field(default_factory=set)
_telemetry: Telemetry
_execution_span: Span | None = None
_original_description: str | None = None
_original_expected_output: str | None = None
_thread: threading.Thread | None = None
_execution_time: float | None = None
_telemetry: Telemetry = PrivateAttr(default_factory=Telemetry)
_execution_span: Optional[Span] = PrivateAttr(default=None)
_original_description: Optional[str] = PrivateAttr(default=None)
_original_expected_output: Optional[str] = PrivateAttr(default=None)
_thread: Optional[threading.Thread] = PrivateAttr(default=None)
_execution_time: Optional[float] = PrivateAttr(default=None)
def __init__(__pydantic_self__, **data):
config = data.pop("config", {})
super().__init__(**config, **data)
@model_validator(mode="before")
@classmethod
def process_model_config(cls, values):
return process_config(values, cls)
@model_validator(mode="after")
def validate_required_fields(self):
required_fields = ["description", "expected_output"]
for field in required_fields:
if getattr(self, field) is None:
raise ValueError(
f"{field} must be provided either directly or through config"
)
return self
@field_validator("id", mode="before")
@classmethod
@@ -137,12 +154,6 @@ class Task(BaseModel):
return value[1:]
return value
@model_validator(mode="after")
def set_private_attrs(self) -> "Task":
"""Set private attributes."""
self._telemetry = Telemetry()
return self
@model_validator(mode="after")
def set_attributes_based_on_config(self) -> "Task":
"""Set attributes based on the agent configuration."""
@@ -185,7 +196,7 @@ class Task(BaseModel):
expected_output = self._original_expected_output or self.expected_output
source = [description, expected_output]
return md5("|".join(source).encode()).hexdigest()
return md5("|".join(source).encode(), usedforsecurity=False).hexdigest()
def execute_async(
self,
@@ -231,6 +242,8 @@ class Task(BaseModel):
self.prompt_context = context
tools = tools or self.tools or []
self.processed_by_agents.add(agent.role)
result = agent.execute_task(
task=self,
context=context,
@@ -240,7 +253,9 @@ class Task(BaseModel):
pydantic_output, json_output = self._export_output(result)
task_output = TaskOutput(
name=self.name,
description=self.description,
expected_output=self.expected_output,
raw=result,
pydantic=pydantic_output,
json_dict=json_output,
@@ -261,9 +276,7 @@ class Task(BaseModel):
content = (
json_output
if json_output
else pydantic_output.model_dump_json()
if pydantic_output
else result
else pydantic_output.model_dump_json() if pydantic_output else result
)
self._save_file(content)
@@ -298,8 +311,10 @@ class Task(BaseModel):
"""Increment the tools errors counter."""
self.tools_errors += 1
def increment_delegations(self) -> None:
def increment_delegations(self, agent_name: Optional[str]) -> None:
"""Increment the delegations counter."""
if agent_name:
self.processed_by_agents.add(agent_name)
self.delegations += 1
def copy(self, agents: List["BaseAgent"]) -> "Task":

View File

@@ -10,6 +10,10 @@ class TaskOutput(BaseModel):
"""Class that represents the result of a task."""
description: str = Field(description="Description of the task")
name: Optional[str] = Field(description="Name of the task", default=None)
expected_output: Optional[str] = Field(
description="Expected output of the task", default=None
)
summary: Optional[str] = Field(description="Summary of the task", default=None)
raw: str = Field(description="Raw output of the task", default="")
pydantic: Optional[BaseModel] = Field(

View File

@@ -1 +1,3 @@
from .telemetry import Telemetry
__all__ = ["Telemetry"]

View File

@@ -4,7 +4,7 @@ import asyncio
import json
import os
import platform
from typing import TYPE_CHECKING, Any
from typing import TYPE_CHECKING, Any, Optional
import pkg_resources
from opentelemetry import trace
@@ -28,18 +28,6 @@ class Telemetry:
agents backstories or goals nor responses or any data that is being
processed by the agents, nor any secrets and env vars.
Data collected includes:
- Version of crewAI
- Version of Python
- General OS (e.g. number of CPUs, macOS/Windows/Linux)
- Number of agents and tasks in a crew
- Crew Process being used
- If Agents are using memory or allowing delegation
- If Tasks are being executed in parallel or sequentially
- Language model being used
- Roles of agents in a crew
- Tools names available
Users can opt-in to sharing more complete data using the `share_crew`
attribute in the Crew class.
"""
@@ -114,10 +102,17 @@ class Telemetry:
"max_iter": agent.max_iter,
"max_rpm": agent.max_rpm,
"i18n": agent.i18n.prompt_file,
"function_calling_llm": json.dumps(
self._safe_llm_attributes(
agent.function_calling_llm
)
),
"llm": json.dumps(
self._safe_llm_attributes(agent.llm)
),
"delegation_enabled?": agent.allow_delegation,
"allow_code_execution?": agent.allow_code_execution,
"max_retry_limit": agent.max_retry_limit,
"tools_names": [
tool.name.casefold()
for tool in agent.tools or []
@@ -165,7 +160,62 @@ class Telemetry:
self._add_attribute(
span, "crew_inputs", json.dumps(inputs) if inputs else None
)
else:
self._add_attribute(
span,
"crew_agents",
json.dumps(
[
{
"key": agent.key,
"id": str(agent.id),
"role": agent.role,
"verbose?": agent.verbose,
"max_iter": agent.max_iter,
"max_rpm": agent.max_rpm,
"function_calling_llm": json.dumps(
self._safe_llm_attributes(
agent.function_calling_llm
)
),
"llm": json.dumps(
self._safe_llm_attributes(agent.llm)
),
"delegation_enabled?": agent.allow_delegation,
"allow_code_execution?": agent.allow_code_execution,
"max_retry_limit": agent.max_retry_limit,
"tools_names": [
tool.name.casefold()
for tool in agent.tools or []
],
}
for agent in crew.agents
]
),
)
self._add_attribute(
span,
"crew_tasks",
json.dumps(
[
{
"key": task.key,
"id": str(task.id),
"async_execution?": task.async_execution,
"human_input?": task.human_input,
"agent_role": task.agent.role
if task.agent
else "None",
"agent_key": task.agent.key if task.agent else None,
"tools_names": [
tool.name.casefold()
for tool in task.tools or []
],
}
for task in crew.tasks
]
),
)
span.set_status(Status(StatusCode.OK))
span.end()
except Exception:
@@ -295,7 +345,7 @@ class Telemetry:
pass
def individual_test_result_span(
self, crew: Crew, quality: int, exec_time: int, model_name: str
self, crew: Crew, quality: float, exec_time: int, model_name: str
):
if self.ready:
try:
@@ -349,6 +399,63 @@ class Telemetry:
except Exception:
pass
def deploy_signup_error_span(self):
if self.ready:
try:
tracer = trace.get_tracer("crewai.telemetry")
span = tracer.start_span("Deploy Signup Error")
span.set_status(Status(StatusCode.OK))
span.end()
except Exception:
pass
def start_deployment_span(self, uuid: Optional[str] = None):
if self.ready:
try:
tracer = trace.get_tracer("crewai.telemetry")
span = tracer.start_span("Start Deployment")
if uuid:
self._add_attribute(span, "uuid", uuid)
span.set_status(Status(StatusCode.OK))
span.end()
except Exception:
pass
def create_crew_deployment_span(self):
if self.ready:
try:
tracer = trace.get_tracer("crewai.telemetry")
span = tracer.start_span("Create Crew Deployment")
span.set_status(Status(StatusCode.OK))
span.end()
except Exception:
pass
def get_crew_logs_span(self, uuid: Optional[str], log_type: str = "deployment"):
if self.ready:
try:
tracer = trace.get_tracer("crewai.telemetry")
span = tracer.start_span("Get Crew Logs")
self._add_attribute(span, "log_type", log_type)
if uuid:
self._add_attribute(span, "uuid", uuid)
span.set_status(Status(StatusCode.OK))
span.end()
except Exception:
pass
def remove_crew_span(self, uuid: Optional[str] = None):
if self.ready:
try:
tracer = trace.get_tracer("crewai.telemetry")
span = tracer.start_span("Remove Crew")
if uuid:
self._add_attribute(span, "uuid", uuid)
span.set_status(Status(StatusCode.OK))
span.end()
except Exception:
pass
def crew_execution_span(self, crew: Crew, inputs: dict[str, Any] | None):
"""Records the complete execution of a crew.
This is only collected if the user has opted-in to share the crew.
@@ -462,7 +569,7 @@ class Telemetry:
pass
def _safe_llm_attributes(self, llm):
attributes = ["name", "model_name", "base_url", "model", "top_k", "temperature"]
attributes = ["name", "model_name", "model", "top_k", "temperature"]
if llm:
safe_attributes = {k: v for k, v in vars(llm).items() if k in attributes}
safe_attributes["class"] = llm.__class__.__name__

View File

@@ -1,5 +1,5 @@
from langchain.tools import StructuredTool
from pydantic import BaseModel, ConfigDict, Field
from pydantic import BaseModel, Field
from crewai.agents.cache import CacheHandler
@@ -7,11 +7,10 @@ from crewai.agents.cache import CacheHandler
class CacheTools(BaseModel):
"""Default tools to hit the cache."""
model_config = ConfigDict(arbitrary_types_allowed=True)
name: str = "Hit Cache"
cache_handler: CacheHandler = Field(
description="Cache Handler for the crew",
default=CacheHandler(),
default_factory=CacheHandler,
)
def tool(self):

View File

@@ -1,8 +1,8 @@
from typing import Any, Dict, Optional
from pydantic import BaseModel, Field
from pydantic import BaseModel as PydanticBaseModel
from pydantic import Field as PydanticField
from pydantic.v1 import BaseModel, Field
class ToolCalling(BaseModel):

View File

@@ -5,7 +5,7 @@ import regex
from langchain.output_parsers import PydanticOutputParser
from langchain_core.exceptions import OutputParserException
from langchain_core.outputs import Generation
from langchain_core.pydantic_v1 import ValidationError
from pydantic import ValidationError
class ToolOutputParser(PydanticOutputParser):

View File

@@ -1,6 +1,6 @@
import ast
from difflib import SequenceMatcher
import os
from difflib import SequenceMatcher
from textwrap import dedent
from typing import Any, List, Union
@@ -8,6 +8,7 @@ from langchain_core.tools import BaseTool
from langchain_openai import ChatOpenAI
from crewai.agents.tools_handler import ToolsHandler
from crewai.task import Task
from crewai.telemetry import Telemetry
from crewai.tools.tool_calling import InstructorToolCalling, ToolCalling
from crewai.utilities import I18N, Converter, ConverterError, Printer
@@ -15,7 +16,7 @@ from crewai.utilities import I18N, Converter, ConverterError, Printer
agentops = None
if os.environ.get("AGENTOPS_API_KEY"):
try:
import agentops
import agentops # type: ignore
except ImportError:
pass
@@ -51,7 +52,7 @@ class ToolUsage:
original_tools: List[Any],
tools_description: str,
tools_names: str,
task: Any,
task: Task,
function_calling_llm: Any,
agent: Any,
action: Any,
@@ -71,14 +72,14 @@ class ToolUsage:
self.task = task
self.action = action
self.function_calling_llm = function_calling_llm
# Handling bug (see https://github.com/langchain-ai/langchain/pull/16395): raise an error if tools_names contains spaces when using ChatOpenAI
if isinstance(self.function_calling_llm, ChatOpenAI):
if " " in self.tools_names:
raise Exception(
"Tools names should not have spaces for ChatOpenAI models."
)
# Set the maximum parsing attempts for bigger models
if (isinstance(self.function_calling_llm, ChatOpenAI)) and (
self.function_calling_llm.openai_api_base is None
@@ -118,7 +119,7 @@ class ToolUsage:
tool: BaseTool,
calling: Union[ToolCalling, InstructorToolCalling],
) -> str: # TODO: Fix this return type
tool_event = agentops.ToolEvent(name=calling.tool_name) if agentops else None
tool_event = agentops.ToolEvent(name=calling.tool_name) if agentops else None # type: ignore
if self._check_tool_repeated_usage(calling=calling): # type: ignore # _check_tool_repeated_usage of "ToolUsage" does not return a value (it only ever returns None)
try:
result = self._i18n.errors("task_repeated_usage").format(
@@ -154,7 +155,10 @@ class ToolUsage:
"Delegate work to coworker",
"Ask question to coworker",
]:
self.task.increment_delegations()
coworker = (
calling.arguments.get("coworker") if calling.arguments else None
)
self.task.increment_delegations(coworker)
if calling.arguments:
try:
@@ -241,7 +245,7 @@ class ToolUsage:
result = self._remember_format(result=result) # type: ignore # "_remember_format" of "ToolUsage" does not return a value (it only ever returns None)
return result
def _should_remember_format(self) -> None:
def _should_remember_format(self) -> bool:
return self.task.used_tools % self._remember_format_after_usages == 0
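# i.e. True on every Nth tool use, where N = _remember_format_after_usages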
def _remember_format(self, result: str) -> None:
@@ -353,10 +357,10 @@ class ToolUsage:
return ToolUsageErrorException( # type: ignore # Incompatible return value type (got "ToolUsageErrorException", expected "ToolCalling | InstructorToolCalling")
f'{self._i18n.errors("tool_arguments_error")}'
)
calling = ToolCalling( # type: ignore # Unexpected keyword argument "log" for "ToolCalling"
calling = ToolCalling(
tool_name=tool.name,
arguments=arguments,
log=tool_string,
log=tool_string, # type: ignore
)
except Exception as e:
self._run_attempts += 1
@@ -404,19 +408,19 @@ class ToolUsage:
'"' + value.replace('"', '\\"') + '"'
) # Re-encapsulate with double quotes
elif value.isdigit(): # Check if value is a digit, hence integer
formatted_value = value
value = value
elif value.lower() in [
"true",
"false",
"null",
]: # Check for boolean and null values
formatted_value = value.lower()
value = value.lower()
else:
# Assume the value is a string and needs quotes
formatted_value = '"' + value.replace('"', '\\"') + '"'
value = '"' + value.replace('"', '\\"') + '"'
# Rebuild the entry with proper quoting
formatted_entry = f'"{key}": {formatted_value}'
formatted_entry = f'"{key}": {value}'
formatted_entries.append(formatted_entry)
# Reconstruct the JSON string

View File

@@ -0,0 +1,39 @@
from typing import Any, Dict, Type
from pydantic import BaseModel
def process_config(
values: Dict[str, Any], model_class: Type[BaseModel]
) -> Dict[str, Any]:
"""
Process the config dictionary and update the values accordingly.
Args:
values (Dict[str, Any]): The dictionary of values to update.
model_class (Type[BaseModel]): The Pydantic model class to reference for field validation.
Returns:
Dict[str, Any]: The updated values dictionary.
"""
config = values.get("config", {})
if not config:
return values
# Copy values from config (originally from YAML) to the model's attributes.
# Only copy if the attribute isn't already set, preserving any explicitly defined values.
for key, value in config.items():
if key not in model_class.model_fields or values.get(key) is not None:
continue
if isinstance(value, dict):
if isinstance(values.get(key), dict):
values[key].update(value)
else:
values[key] = value
else:
values[key] = value
# Remove the config from values to avoid duplicate processing
values.pop("config", None)
return values
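To make the backfill rule concrete, a small illustration with a toy model (not part of the diff):
```python
from typing import Optional

from pydantic import BaseModel


class ToyTask(BaseModel):
    description: Optional[str] = None


values = {"description": None, "config": {"description": "From YAML"}}
values = process_config(values, ToyTask)
assert values == {"description": "From YAML"}  # config backfilled the unset field
```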

View File

@@ -5,8 +5,7 @@ import regex
from langchain.output_parsers import PydanticOutputParser
from langchain_core.exceptions import OutputParserException
from langchain_core.outputs import Generation
from langchain_core.pydantic_v1 import ValidationError
from pydantic import BaseModel
from pydantic import BaseModel, ValidationError
class CrewPydanticOutputParser(PydanticOutputParser):

View File

@@ -1,14 +1,14 @@
from collections import defaultdict
from langchain_openai import ChatOpenAI
from pydantic import BaseModel, Field
from rich.console import Console
from rich.table import Table
from crewai.agent import Agent
from crewai.task import Task
from crewai.tasks.task_output import TaskOutput
from crewai.telemetry import Telemetry
from langchain_openai import ChatOpenAI
from pydantic import BaseModel, Field
from rich.box import HEAVY_EDGE
from rich.console import Console
from rich.table import Table
class TaskEvaluationPydanticOutput(BaseModel):
@@ -77,50 +77,72 @@ class CrewEvaluator:
def print_crew_evaluation_result(self) -> None:
"""
Prints the evaluation result of the crew in a table.
A Crew with 2 tasks using the command crewai test -n 2
A Crew with 2 tasks using the command crewai test -n 3
will output the following table:
Task Scores
Tasks Scores
(1-10 Higher is better)
┏━━━━━━━━━━━━┳━━━━━━━┳━━━━━━━┳━━━━━━━━━━━━┓
┃ Tasks/Crew ┃ Run 1 ┃ Run 2 ┃ Avg. Total ┃
┡━━━━━━━━━━━━╇━━━━━━━╇━━━━━━━╇━━━━━━━━━━━━┩
│ Task 1     │ 10.0  │ 9.0   │ 9.5        │
│ Task 2     │ 9.0   │ 9.0   │ 9.0        │
│ Crew       │ 9.5   │ 9.0   │ 9.2        │
└────────────┴───────┴───────┴────────────┘
┏━━━━━━━━━━━━━━━━━━━━┳━━━━━━━┳━━━━━━━┳━━━━━━━┳━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┓
┃ Tasks/Crew/Agents  ┃ Run 1 ┃ Run 2 ┃ Run 3 ┃ Avg. Total ┃ Agents                       ┃
┡━━━━━━━━━━━━━━━━━━━━╇━━━━━━━╇━━━━━━━╇━━━━━━━╇━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┩
│ Task 1             │ 9.0   │ 10.0  │ 9.0   │ 9.3        │ - AI LLMs Senior Researcher  │
│                    │       │       │       │            │ - AI LLMs Reporting Analyst  │
│                    │       │       │       │            │                              │
│ Task 2             │ 9.0   │ 9.0   │ 9.0   │ 9.0        │ - AI LLMs Senior Researcher  │
│                    │       │       │       │            │ - AI LLMs Reporting Analyst  │
│                    │       │       │       │            │                              │
│ Crew               │ 9.0   │ 9.5   │ 9.0   │ 9.2        │                              │
│ Execution Time (s) │ 42    │ 79    │ 52    │ 57         │                              │
└────────────────────┴───────┴───────┴───────┴────────────┴──────────────────────────────┘
"""
task_averages = [
sum(scores) / len(scores) for scores in zip(*self.tasks_scores.values())
]
crew_average = sum(task_averages) / len(task_averages)
# Create a table
table = Table(title="Tasks Scores \n (1-10 Higher is better)")
table = Table(title="Tasks Scores \n (1-10 Higher is better)", box=HEAVY_EDGE)
# Add columns for the table
table.add_column("Tasks/Crew")
table.add_column("Tasks/Crew/Agents", style="cyan")
for run in range(1, len(self.tasks_scores) + 1):
table.add_column(f"Run {run}")
table.add_column("Avg. Total")
table.add_column(f"Run {run}", justify="center")
table.add_column("Avg. Total", justify="center")
table.add_column("Agents", style="green")
# Add rows for each task
for task_index in range(len(task_averages)):
for task_index, task in enumerate(self.crew.tasks):
task_scores = [
self.tasks_scores[run][task_index]
for run in range(1, len(self.tasks_scores) + 1)
]
avg_score = task_averages[task_index]
agents = list(task.processed_by_agents)
# Add the task row with the first agent
table.add_row(
f"Task {task_index + 1}", *map(str, task_scores), f"{avg_score:.1f}"
f"Task {task_index + 1}",
*[f"{score:.1f}" for score in task_scores],
f"{avg_score:.1f}",
f"- {agents[0]}" if agents else "",
)
# Add a row for the crew average
# Add rows for additional agents
for agent in agents[1:]:
table.add_row("", "", "", "", "", f"- {agent}")
# Add a blank separator row if it's not the last task
if task_index < len(self.crew.tasks) - 1:
table.add_row("", "", "", "", "", "")
# Add Crew and Execution Time rows
crew_scores = [
sum(self.tasks_scores[run]) / len(self.tasks_scores[run])
for run in range(1, len(self.tasks_scores) + 1)
]
table.add_row("Crew", *map(str, crew_scores), f"{crew_average:.1f}")
table.add_row(
"Crew",
*[f"{score:.2f}" for score in crew_scores],
f"{crew_average:.1f}",
"",
)
run_exec_times = [
int(sum(tasks_exec_times))
@@ -128,11 +150,9 @@ class CrewEvaluator:
]
execution_time_avg = int(sum(run_exec_times) / len(run_exec_times))
table.add_row(
"Execution Time (s)",
*map(str, run_exec_times),
f"{execution_time_avg}",
"Execution Time (s)", *map(str, run_exec_times), f"{execution_time_avg}", ""
)
# Display the table in the terminal
console = Console()
console.print(table)

View File

@@ -1,13 +1,13 @@
from datetime import datetime
from pydantic import BaseModel, Field, PrivateAttr
from crewai.utilities.printer import Printer
class Logger:
_printer = Printer()
def __init__(self, verbose=False):
self.verbose = verbose
class Logger(BaseModel):
verbose: bool = Field(default=False)
_printer: Printer = PrivateAttr(default_factory=Printer)
def log(self, level, message, color="bold_green"):
if self.verbose:

View File

@@ -1,5 +1,6 @@
import re
class YamlParser:
@staticmethod
def parse(file):
@@ -16,7 +17,9 @@ class YamlParser:
# Replace single { and } with doubled ones, leaving already-doubled braces and the special sequences {# and {% intact
modified_content = re.sub(r"(?<!\{){(?!\{)(?!\#)(?!\%)", "{{", content)
modified_content = re.sub(r"(?<!\})(?<!\%)(?<!\#)\}(?!})", "}}", modified_content)
modified_content = re.sub(
r"(?<!\})(?<!\%)(?<!\#)\}(?!})", "}}", modified_content
)
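# e.g. "goal: {topic}" becomes "goal: {{topic}}", while "{{already_doubled}}"
# and template tags like "{%" or "{#" pass through unchanged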
# Check for 'context:' not followed by '[' and raise an error
if re.search(r"context:(?!\s*\[)", modified_content):

View File

@@ -1,14 +1,25 @@
from typing import Any, List, Optional
from langchain_openai import ChatOpenAI
from pydantic import BaseModel
from pydantic import BaseModel, Field
from crewai.agent import Agent
from crewai.task import Task
class PlanPerTask(BaseModel):
task: str = Field(..., description="The task for which the plan is created")
plan: str = Field(
...,
description="The step by step plan on how the agents can execute their tasks using the available tools with mastery",
)
class PlannerTaskPydanticOutput(BaseModel):
list_of_plans_per_task: List[str]
list_of_plans_per_task: List[PlanPerTask] = Field(
...,
description="Step by step plan on how the agents can execute their tasks using the available tools with mastery",
)
class CrewPlanner:

Some files were not shown because too many files have changed in this diff Show More