Compare commits

...

158 Commits

Author SHA1 Message Date
GabeKoga
59a5f51fd7 remove extra folder 2024-03-22 18:55:17 -03:00
GabeKoga
375946c15a Fixed: use absolute import, run main as app 2024-03-19 18:15:35 -03:00
João Moura
637bd885cf adding auto flake 2024-03-11 23:27:19 -03:00
João Moura
337afe228f cutting new version with proper imports 2024-03-11 23:27:04 -03:00
João Moura
4541835487 adding autoflake 2024-03-11 22:56:14 -03:00
João Moura
04d9603449 cutting new version 2024-03-11 22:55:56 -03:00
João Moura
671a8d0180 preparring new version that autoloads env 2024-03-11 22:19:47 -03:00
João Moura
3950878690 preparring to cut new version 2024-03-11 19:54:27 -03:00
João Moura
eaac627600 updating CLI template and guaranteeing tasks order 2024-03-11 19:53:34 -03:00
João Moura
35f8919e73 Preparing new version 2024-03-11 17:37:12 -03:00
João Moura
cb5a528550 Improving agent logging 2024-03-11 17:05:54 -03:00
João Moura
1f95d7b982 Improve tempalte readme 2024-03-11 17:05:20 -03:00
Abe Gong
46971ee985 Fix typo in Tools.md (#300) 2024-03-11 16:45:28 -03:00
Selim Erhan
e67009ee2e Update Create-Custom-Tools.md (#311)
Added the langchain "Tool" functionality by creating a python function and then adding the functionality of that function to the tool by 'func' variable in the 'Tool' function.
2024-03-11 16:44:04 -03:00
Johan
9d3da98251 Update Tools.md (#326)
* Update Tools.md

Fixing typo on the instantiation part

* Update Tools.md

Update tool naming
2024-03-11 16:41:14 -03:00
Bill Chambers
b94de6e947 Update Crews.md (#331) 2024-03-11 16:40:45 -03:00
Chris Pang
f8a1d4f414 added langchain callback to agents (#333)
Co-authored-by: Chris Pang <chris_pang@racv.com.au>
2024-03-11 16:40:10 -03:00
Merbin J Anselm
7deb268de8 docs: fix formatting in Human-Input-on-Execution.md (#335) 2024-03-11 16:38:59 -03:00
João Moura
47b5cbd211 adding initial CLI support 2024-03-11 16:37:32 -03:00
João Moura
a4e9b9ccfe removing double space on logs 2024-03-11 16:23:00 -03:00
João Moura
99be4f5a61 Overridding classes __repr__ 2024-03-05 10:12:49 -03:00
João Moura
ba28ab1680 adding support for agents and tasks to be defined of configs 2024-03-05 01:26:07 -03:00
João Moura
e51b8aadae fix readme 2024-03-05 00:31:52 -03:00
João Moura
33354aa07e udpatign readme example 2024-03-05 00:29:55 -03:00
João Moura
730b71fad8 update serper doc 2024-03-04 11:15:49 -03:00
João Moura
364cf216a0 updating docs disclaimer 2024-03-04 09:59:01 -03:00
João Moura
3cb48ac562 updating docs 2024-03-04 01:29:27 -03:00
João Moura
ea65283023 updating docs 2024-03-03 22:43:51 -03:00
João Moura
d2003cc32d fix docs path 2024-03-03 22:18:48 -03:00
João Moura
1766e27337 Adding tool specific docs 2024-03-03 22:14:53 -03:00
João Moura
442c324243 Updating dependencies, cutting new version 2024-03-03 21:23:42 -03:00
João Moura
3134711240 Updating Docs 2024-03-03 20:54:15 -03:00
João Moura
546fc965f8 updating README 2024-03-03 20:54:15 -03:00
João Moura
9ab45d9118 preparing new version 2024-03-03 20:54:15 -03:00
João Moura
b1ae86757b preparing 0.17.0rc0 2024-03-03 20:54:15 -03:00
João Moura
42eeec5897 Update inner tool usage logic to support both regular and function calling 2024-03-03 20:54:15 -03:00
João Moura
c12283bb16 Small docs update 2024-03-03 20:54:15 -03:00
João Moura
b856b21fc6 updating tests 2024-03-03 20:54:15 -03:00
Jay Mathis
72a0d1edef Update README.md (#301)
Fix a very minor typo
2024-03-03 12:41:35 -03:00
heyfixit
c0a0e01cf6 fix directory typo (#295) 2024-03-03 12:41:14 -03:00
João Moura
78bf008c36 cutting a new version addressin backward compatibility 2024-02-28 12:04:13 -03:00
Hongbo
5857c22daf correct a typo in tool_usage.py (#276) 2024-02-28 09:25:27 -03:00
Gordon Stein
5f73ba1371 Update en.json (#281) 2024-02-28 09:24:44 -03:00
Selim Erhan
4c09835abc Update Tools.md (#283)
Added the link to LangChain built-in toolkits
2024-02-28 09:22:51 -03:00
João Moura
0a025901c5 cutting new versions that doens't include cli just yet 2024-02-28 09:16:13 -03:00
João Moura
9768e4518f Fixing bug preparing new version 2024-02-28 09:09:37 -03:00
João Moura
1f802ccb5a removing logs and preping new version 2024-02-28 03:44:23 -03:00
João Moura
e1306a8e6a removing necessary crewai-tools dependency 2024-02-28 03:44:23 -03:00
João Moura
997c906b5f adding support for input interpolation for tasks and agents 2024-02-28 03:44:23 -03:00
João Moura
2530196cf8 fixing tests 2024-02-28 03:44:23 -03:00
João Moura
340bea3271 Adding ability to track tools_errors and delegations 2024-02-28 03:44:23 -03:00
João Moura
3df3bba756 changing method naming to increment 2024-02-28 03:44:23 -03:00
João Moura
a9863fe670 Adding overall usage_metrics to crew and not adding delegation tools if there no agents the allow delegation 2024-02-28 03:44:23 -03:00
João Moura
7b49b4e985 Adding initial formatting error counting and token counter 2024-02-28 03:44:23 -03:00
João Moura
577db88f8e Updating README 2024-02-28 03:44:23 -03:00
João Moura
01a2e650a4 Adding write job description example 2024-02-28 03:44:23 -03:00
BR
cd9f7931c9 Fix Creating-a-Crew-and-kick-it-off.md so it can run (#280)
* Fix Creating-a-Crew-and-kick-it-off.md

- Update deps to include `crewai[tools]`
- Remove invalid `max_inter` arg from Task constructor call

* Update Creating-a-Crew-and-kick-it-off.md

---------

Co-authored-by: João Moura <joaomdmoura@gmail.com>
2024-02-27 14:23:19 -03:00
João Moura
2b04ae4e4a updating docs 2024-02-26 15:54:06 -03:00
João Moura
cd0b82e794 Cutting new version removing crewai-tool as a mandatory dependency 2024-02-26 15:27:04 -03:00
João Moura
0ddcffe601 updating telemetry timeout 2024-02-26 13:40:41 -03:00
João Moura
712d106a44 updating docs 2024-02-26 13:38:14 -03:00
João Moura
34c5560cb0 updating telemetry code and gitignore 2024-02-24 16:18:26 -03:00
João Moura
dcba1488a6 make agents not have a memory by default 2024-02-24 03:33:05 -03:00
João Moura
8e4b156f11 preparing new version 2024-02-24 03:30:12 -03:00
João Moura
ab98c3bd28 Avoid empty task outputs 2024-02-24 03:11:41 -03:00
João Moura
7f98a99e90 Adding support for agents without tools 2024-02-24 01:39:29 -03:00
João Moura
101b80c234 updating broken doc link 2024-02-24 01:38:16 -03:00
João Moura
44598babcb startign support to crew docs 2024-02-24 01:38:04 -03:00
João Moura
51edfb4604 reducing telemetry timeout 2024-02-23 16:02:24 -03:00
João Moura
12d6fa1494 Reducing telemetry timeout 2024-02-23 15:54:22 -03:00
João Moura
99a15ac2ae preping new version 2024-02-23 15:24:16 -03:00
João Moura
093a9c8174 bringing TaskOutput.result back to avoind breakign change 2024-02-23 15:23:58 -03:00
João Moura
464dfc4e67 preparing new version 0.14.0 2024-02-22 16:10:17 -03:00
João Moura
1c7f9826b4 adding new converter logic 2024-02-22 15:16:17 -03:00
João Moura
e397a49c23 Updatign prompts 2024-02-22 15:13:41 -03:00
João Moura
8c925237e7 preparing new RC 2024-02-20 17:56:55 -03:00
João Moura
0593d52b91 Improving inner prompts 2024-02-20 17:53:30 -03:00
João Moura
7b7d714109 preparing new version 2024-02-20 10:40:57 -03:00
João Moura
e9aa87f62b Updating tests 2024-02-20 10:40:37 -03:00
João Moura
8f5d735b2f bug fixing 2024-02-20 10:40:16 -03:00
João Moura
e24f4867df Preparing new version 2024-02-19 22:50:38 -03:00
João Moura
ef024ca106 improving reliability for agent tools 2024-02-19 22:48:47 -03:00
João Moura
4c519d9d98 updating tests 2024-02-19 22:48:34 -03:00
João Moura
94cb96b288 Increasing timeout for telemetry 2024-02-19 22:48:14 -03:00
João Moura
108a0d36b7 Adding support to export tasks as json, pydantic objects, and save as file 2024-02-19 22:46:34 -03:00
João Moura
efb097a76b Adding new tool usage and parsing logic 2024-02-19 22:43:10 -03:00
João Moura
af03042852 Updating docs 2024-02-19 22:01:09 -03:00
João Moura
21667bc7e1 adding more error logging and preparing new version 2024-02-15 23:49:30 -03:00
João Moura
19b6c15fff Cutting new version with tool ussage bug fix 2024-02-15 23:19:12 -03:00
João Moura
3ef502024d preparing new version 2024-02-13 02:58:16 -08:00
João Moura
e55cee7372 adding function calling llm support 2024-02-13 02:57:12 -08:00
João Moura
b72eb838c2 updating readme 2024-02-13 01:50:23 -08:00
João Moura
b21191dd55 updating tests 2024-02-13 01:50:12 -08:00
João Moura
76b17a8d04 renaming function for tools 2024-02-12 16:48:14 -08:00
João Moura
e97d1a0cf8 removing hostname from default telemetry 2024-02-12 16:11:15 -08:00
João Moura
c875d887b7 Crewating a tool output parser 2024-02-12 14:24:36 -08:00
João Moura
44d9cbca81 adding regexp as dependency 2024-02-12 14:13:20 -08:00
João Moura
6e399101fd refactoring default agent tools 2024-02-12 13:27:02 -08:00
João Moura
e8e3617ba6 allowing to set model naem through env var 2024-02-12 13:24:01 -08:00
João Moura
45fa30c007 avoinding telemetry errors 2024-02-12 13:23:40 -08:00
João Moura
15768d9c4d updating LLM connection docs 2024-02-12 13:21:43 -08:00
João Moura
a1fcaa398c updating versions and adding instructor 2024-02-12 13:20:28 -08:00
João Moura
871643d98d updating codeignore 2024-02-11 20:37:42 -08:00
João Moura
91659d6488 counting for tool retries on the acutal usage 2024-02-10 13:14:00 -08:00
João Moura
0076ea7bff Adding ability to remember instruction after using too many tools 2024-02-10 12:53:02 -08:00
João Moura
e79da7bc05 refactoring task execution 2024-02-10 11:28:08 -08:00
João Moura
00206a62ab Revamping tool usage 2024-02-10 10:36:34 -08:00
João Moura
d0b0a33be3 updating translations 2024-02-10 01:08:04 -08:00
João Moura
6ea21e95b6 Adding printer logic 2024-02-10 00:57:04 -08:00
João Moura
c226dafd0d updating dependencies 2024-02-10 00:56:25 -08:00
João Moura
d4c21a23f4 updating all cassettes 2024-02-10 00:55:40 -08:00
João Moura
b76ae5b921 avoind unnecesarry telemetry errors 2024-02-09 10:48:45 -08:00
João Moura
b48e5af9a0 include agentFinish as part of step callback 2024-02-09 02:00:41 -08:00
João Moura
d36c2a74cb recreating executor upon setting new step_callback 2024-02-09 01:52:28 -08:00
João Moura
a1e0596450 adding crew step_callback 2024-02-09 01:24:31 -08:00
João Moura
596e243374 adding support for step_callback 2024-02-08 23:56:13 -08:00
João Moura
326ad08ba2 adding support for full_ouput in crews 2024-02-08 23:23:34 -08:00
João Moura
f63d4edbb4 adding agent step callback 2024-02-08 23:01:30 -08:00
João Moura
0057ed6786 adding user the otpion to share all data of their crews 2024-02-08 23:01:02 -08:00
João Moura
44b6bcbcaa preparing verison 0.5.5 2024-02-07 23:13:39 -08:00
João Moura
a45c82c5f7 fixing RPM controlelr being set unencessarily 2024-02-07 23:09:36 -08:00
João Moura
98133a4eb6 Adding new crew specific docs 2024-02-07 23:09:16 -08:00
João Moura
44c2fd223d preparing version 0.5.4 2024-02-07 22:22:33 -08:00
João Moura
fc249eefda adding initial telemetry 2024-02-07 22:21:44 -08:00
João Moura
1a1eb4e7aa preparing new version 0.5.3 2024-02-07 02:14:58 -08:00
João Moura
723fdc6245 adding fix to hierarchical process 2024-02-07 02:13:19 -08:00
João Moura
43a47b8bdf preparing v0.5.2 2024-02-06 00:04:53 -08:00
João Moura
ab5647145f updating RPM and max_inter logic 2024-02-05 23:14:22 -08:00
João Moura
856981e0ed updating docs and readme 2024-02-05 23:13:10 -08:00
João Moura
09bec0e28b adding manager_llm 2024-02-05 20:46:47 -08:00
João Moura
2f0bf3b325 updating readme 2024-02-04 13:13:42 -08:00
João Moura
51278424c1 moving dependencies 2024-02-04 12:11:11 -08:00
João Moura
bfe26de026 updating readme 2024-02-04 12:07:40 -08:00
João Moura
db100439cb preparing new version 0.5.0 2024-02-04 12:01:05 -08:00
João Moura
c37f54c86f installing mkdocs dependencies 2024-02-04 11:58:21 -08:00
João Moura
e0262d9712 fixing dependencies for mkdocs 2024-02-04 11:51:44 -08:00
João Moura
63fb5a22be adding new docs and smaller fixes 2024-02-04 11:47:49 -08:00
João Moura
05dda59cf6 Adding multi thread execution 2024-02-03 23:24:41 -08:00
João Moura
5628bcca78 updating docs 2024-02-03 23:23:47 -08:00
João Moura
6042d9a7d8 Update README.md 2024-02-03 05:48:54 -03:00
João Moura
144239394d simplifying README 2024-02-03 00:04:33 -08:00
João Moura
d712ee8451 adding ability to pass context to tasks 2024-02-02 23:17:02 -08:00
João Moura
a8c1348235 Update README.md 2024-02-03 02:26:10 -03:00
João Moura
148d9202bf Update README.md 2024-02-03 01:33:59 -03:00
Ilya Sudakov
44442e6407 Update README.md: new header, text clean up, fix broken links (#210)
* Update README.MD

* Update examples section in README.md
2024-02-03 01:29:04 -03:00
Gui Vieira
c78237cb86 Hierarchical process (#206)
* Hierarchical process +  Docs
Co-authored-by: João Moura <joaomdmoura@gmail.com>
2024-02-02 13:56:35 -03:00
João Moura
8fc0f33dd5 adding task callback 2024-01-30 22:46:20 -03:00
João Moura
2010702880 Update README.md 2024-01-29 22:49:17 -03:00
Guilherme Vieira
29c31a2404 Fix static typing errors (#187)
Co-authored-by: João Moura <joaomdmoura@gmail.com>
2024-01-29 19:52:14 -03:00
João Moura
cd77981102 Adding support for expected output 2024-01-29 00:11:30 -03:00
IT Lackey
4f78d1e29c Feature: Documentation Site (#188) 2024-01-28 23:43:23 -03:00
Eliad Cohen
5be79454c3 Addresses typo and clarifiction in comments (#191)
Minor changes include a typo fixed and enhancing
an example for using OpenAI as an agent model with Ollama
via langchain

Resolves #189 #190
2024-01-28 23:42:31 -03:00
João Moura
d8c14ff31e updating website for crewai 2024-01-28 23:36:39 -03:00
João Moura
9e1be4ecd2 Update README.md 2024-01-22 11:05:01 -03:00
scott------
327d5c3a53 Update agent.py (#161)
adding tools to the list of attribute descriptions
2024-01-21 16:56:19 -03:00
Greyson LaLonde
852ca21e38 Update some docstrings / typehints (#144) 2024-01-21 16:55:17 -03:00
Prabha Arivalagan
23a549ac65 Fixed the small typo (#168) 2024-01-21 16:54:19 -03:00
João Moura
3e9630afe8 cutting new version 2024-01-14 11:25:09 -03:00
209 changed files with 77208 additions and 10705 deletions

[Binary files changed: numerous image assets and other binary files were added or updated; previews are not shown.]

View File

@@ -1,10 +1,10 @@
name: Lint
on: [push, pull_request]
on: [pull_request]
jobs:
lint:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- uses: psf/black@stable
- uses: psf/black@stable

View File

@@ -1,6 +1,7 @@
name: Deploy MkDocs
on:
workflow_dispatch:
push:
branches:
- main
@@ -21,11 +22,23 @@ jobs:
with:
python-version: '3.10'
- name: Calculate requirements hash
id: req-hash
run: echo "::set-output name=hash::$(sha256sum requirements-doc.txt | awk '{print $1}')"
- name: Setup cache
uses: actions/cache@v3
with:
key: mkdocs-material-${{ steps.req-hash.outputs.hash }}
path: .cache
restore-keys: |
mkdocs-material-
- name: Install Requirements
run: |
sudo apt-get update &&
sudo apt-get install pngquant &&
pip install mkdocs-material
pip install mkdocs-material mkdocs-material-extensions pillow cairosvg
env:
GH_TOKEN: ${{ secrets.GH_TOKEN }}

View File

@@ -1,6 +1,6 @@
name: Run Tests
on: [push, pull_request]
on: [pull_request]
permissions:
contents: write

30
.github/workflows/type-checker.yml vendored Normal file
View File

@@ -0,0 +1,30 @@
name: Run Type Checks
on: [pull_request]
permissions:
contents: write
jobs:
type-checker:
runs-on: ubuntu-latest
steps:
- name: Checkout code
uses: actions/checkout@v2
- name: Setup Python
uses: actions/setup-python@v4
with:
python-version: '3.10'
- name: Install Requirements
run: |
sudo apt-get update &&
pip install poetry &&
poetry lock &&
poetry install
- name: Run type checks
run: poetry run pyright

4
.gitignore vendored
View File

@@ -5,4 +5,6 @@ dist/
.env
assets/*
.idea
test.py
test/
docs_crew/
chroma.sqlite3

View File

@@ -1,11 +1,11 @@
repos:
- repo: https://github.com/psf/black-pre-commit-mirror
rev: 23.12.1
hooks:
- id: black
language_version: python3.11
files: \.(py)$
exclude: 'src/crewai/cli/templates/(crew|main)\.py'
- repo: https://github.com/pycqa/isort
rev: 5.13.2

210
README.md
View File

@@ -1,18 +1,37 @@
# crewAI
<div align="center">
![Logo of crewAI, tow people rowing on a boat](./docs/crewai_logo.png)
![Logo of crewAI, two people rowing on a boat](./docs/crewai_logo.png)
🤖 Cutting-edge framework for orchestrating role-playing, autonomous AI agents. By fostering collaborative intelligence, CrewAI empowers agents to work together seamlessly, tackling complex tasks.
# **crewAI**
- [Why CrewAI](#why-crewai)
🤖 **crewAI**: Cutting-edge framework for orchestrating role-playing, autonomous AI agents. By fostering collaborative intelligence, CrewAI empowers agents to work together seamlessly, tackling complex tasks.
<h3>
[Homepage](https://www.crewai.io/) | [Documentation](https://docs.crewai.com/) | [Chat with Docs](https://chatg.pt/DWjSBZn) | [Examples](https://github.com/joaomdmoura/crewai-examples) | [Discord](https://discord.com/invite/X4JWnZnxPb)
</h3>
[![GitHub Repo stars](https://img.shields.io/github/stars/joaomdmoura/crewAI)](https://github.com/joaomdmoura/crewAI)
[![License: MIT](https://img.shields.io/badge/License-MIT-green.svg)](https://opensource.org/licenses/MIT)
</div>
## Table of contents
- [Why CrewAI?](#why-crewai)
- [Getting Started](#getting-started)
- [Key Features](#key-features)
- [Examples](#examples)
- [Local Open Source Models](#local-open-source-models)
- [CrewAI x AutoGen x ChatDev](#how-crewai-compares)
- [Quick Tutorial](#quick-tutorial)
- [Write Job Descriptions](#write-job-descriptions)
- [Trip Planner](#trip-planner)
- [Stock Analysis](#stock-analysis)
- [Connecting Your Crew to a Model](#connecting-your-crew-to-a-model)
- [How CrewAI Compares](#how-crewai-compares)
- [Contribution](#contribution)
- [💬 CrewAI Discord Community](https://discord.com/invite/X4JWnZnxPb)
- [Hire Consulting](#hire-consulting)
- [Hire CrewAI](#hire-crewai)
- [Telemetry](#telemetry)
- [License](#license)
## Why CrewAI?
@@ -20,42 +39,39 @@
The power of AI collaboration has too much to offer.
CrewAI is designed to enable AI agents to assume roles, share goals, and operate in a cohesive unit - much like a well-oiled crew. Whether you're building a smart assistant platform, an automated customer service ensemble, or a multi-agent research team, CrewAI provides the backbone for sophisticated multi-agent interactions.
- 🤖 [Talk with the Docs](https://chatg.pt/DWjSBZn)
- 📄 [Documentation Wiki](https://github.com/joaomdmoura/CrewAI/wiki)
## Getting Started
To get started with CrewAI, follow these simple steps:
1. **Installation**:
### 1. Installation
```shell
pip install crewai
```
The example below also uses duckduckgo, so also install that
If you also want to install crewai-tools, an optional package with extra tools the agents can use (at the cost of additional dependencies), you can install it with the command below; the example that follows uses it:
```shell
pip install duckduckgo-search
pip install 'crewai[tools]'
```
2. **Setting Up Your Crew**:
### 2. Setting Up Your Crew
```python
import os
from crewai import Agent, Task, Crew, Process
from crewai_tools import SerperDevTool
os.environ["OPENAI_API_KEY"] = "YOUR KEY"
os.environ["OPENAI_API_KEY"] = "YOUR_API_KEY"
os.environ["SERPER_API_KEY"] = "Your Key" # serper.dev API key
# You can choose to use a local model through Ollama for example.
#
# from langchain.llms import Ollama
# ollama_llm = Ollama(model="openhermes")
# You can choose to use a local model through Ollama for example. See https://docs.crewai.com/how-to/LLM-Connections/ for more information.
# Install duckduckgo-search for this example:
# !pip install -U duckduckgo-search
# os.environ["OPENAI_API_BASE"] = 'http://localhost:11434/v1'
# os.environ["OPENAI_MODEL_NAME"] ='openhermes' # Adjust based on available model
# os.environ["OPENAI_API_KEY"] ='sk-111111111111111111111111111111111111111111111111'
from langchain.tools import DuckDuckGoSearchRun
search_tool = DuckDuckGoSearchRun()
search_tool = SerperDevTool()
# Define your agents with roles and goals
researcher = Agent(
@@ -63,35 +79,36 @@ researcher = Agent(
goal='Uncover cutting-edge developments in AI and data science',
backstory="""You work at a leading tech think tank.
Your expertise lies in identifying emerging trends.
You have a knack for dissecting complex data and presenting
actionable insights.""",
You have a knack for dissecting complex data and presenting actionable insights.""",
verbose=True,
allow_delegation=False,
tools=[search_tool]
# You can pass an optional llm attribute specifying which model you want to use.
# It can be a local model through Ollama / LM Studio or a remote
# model like OpenAI, Mistral, Antrophic of others (https://python.langchain.com/docs/integrations/llms/)
# model like OpenAI, Mistral, Anthropic or others (https://docs.crewai.com/how-to/LLM-Connections/)
#
# Examples:
# llm=ollama_llm # was defined above in the file
# import os
# os.environ['OPENAI_MODEL_NAME'] = 'gpt-3.5-turbo'
#
# OR
#
# from langchain_openai import ChatOpenAI
# llm=ChatOpenAI(model_name="gpt-3.5", temperature=0.7)
)
writer = Agent(
role='Tech Content Strategist',
goal='Craft compelling content on tech advancements',
backstory="""You are a renowned Content Strategist, known for
your insightful and engaging articles.
backstory="""You are a renowned Content Strategist, known for your insightful and engaging articles.
You transform complex concepts into compelling narratives.""",
verbose=True,
allow_delegation=True,
# (optional) llm=ollama_llm
allow_delegation=True
)
# Create tasks for your agents
task1 = Task(
description="""Conduct a comprehensive analysis of the latest advancements in AI in 2024.
Identify key trends, breakthrough technologies, and potential industry impacts.
Your final answer MUST be a full analysis report""",
Identify key trends, breakthrough technologies, and potential industry impacts.""",
expected_output="Full analysis report in bullet points",
agent=researcher
)
@@ -99,8 +116,8 @@ task2 = Task(
description="""Using the insights provided, develop an engaging blog
post that highlights the most significant AI advancements.
Your post should be informative yet accessible, catering to a tech-savvy audience.
Make it sound cool, avoid complex words so it doesn't sound like AI.
Your final answer MUST be the full blog post of at least 4 paragraphs.""",
Make it sound cool, avoid complex words so it doesn't sound like AI.""",
expected_output="Full blog post of at least 4 paragraphs",
agent=writer
)
@@ -118,68 +135,60 @@ print("######################")
print(result)
```
Currently the only supported process is `Process.sequential`, where one task is executed after the other and the outcome of one is passed as extra content into this next.
In addition to the sequential process, you can use the hierarchical process, which automatically assigns a manager to the defined crew to properly coordinate the planning and execution of tasks through delegation and validation of results. [See more about the processes here](https://docs.crewai.com/core-concepts/Processes/).
## Key Features
- **Role-Based Agent Design**: Customize agents with specific roles, goals, and tools.
- **Autonomous Inter-Agent Delegation**: Agents can autonomously delegate tasks and inquire amongst themselves, enhancing problem-solving efficiency.
- **Flexible Task Management**: Define tasks with customizable tools and assign them to agents dynamically.
- **Processes Driven**: Currently only supports `sequential` task execution but more complex processes like consensual and hierarchical being worked on.
- **Processes Driven**: Currently only supports `sequential` task execution and `hierarchical` processes, but more complex processes like consensual and autonomous are being worked on.
- **Save output as file**: Save the output of individual tasks as a file, so you can use it later.
- **Parse output as Pydantic or Json**: Parse the output of individual tasks as a Pydantic model or as a Json if you want to.
- **Works with Open Source Models**: Run your crew using OpenAI or open source models; refer to the [Connect crewAI to LLMs](https://docs.crewai.com/how-to/LLM-Connections/) page for details on configuring your agents' connections to models, even ones running locally!
![CrewAI Mind Map](./docs/crewAI-mindmap.png "CrewAI Mind Map")
## Examples
You can test different real life examples of AI crews [in the examples repo](https://github.com/joaomdmoura/crewAI-examples?tab=readme-ov-file)
### Code
You can test different real life examples of AI crews in the [crewAI-examples repo](https://github.com/joaomdmoura/crewAI-examples?tab=readme-ov-file):
- [Landing Page Generator](https://github.com/joaomdmoura/crewAI-examples/tree/main/landing_page_generator)
- [Having Human input on the execution](https://docs.crewai.com/how-to/Human-Input-on-Execution)
- [Trip Planner](https://github.com/joaomdmoura/crewAI-examples/tree/main/trip_planner)
- [Stock Analysis](https://github.com/joaomdmoura/crewAI-examples/tree/main/stock_analysis)
- [Landing Page Generator](https://github.com/joaomdmoura/crewAI-examples/tree/main/landing_page_generator)
- [Having Human input on the execution](https://github.com/joaomdmoura/crewAI/wiki/Human-Input-on-Execution)
### Video
#### Quick Tutorial
[![CrewAI Tutorial](https://img.youtube.com/vi/tnejrr-0a94/0.jpg)](https://www.youtube.com/watch?v=tnejrr-0a94 "CrewAI Tutorial")
### Quick Tutorial
#### Trip Planner
[![Trip Planner](https://img.youtube.com/vi/xis7rWp-hjs/0.jpg)](https://www.youtube.com/watch?v=xis7rWp-hjs "Trip Planner")
[![CrewAI Tutorial](https://img.youtube.com/vi/tnejrr-0a94/maxresdefault.jpg)](https://www.youtube.com/watch?v=tnejrr-0a94 "CrewAI Tutorial")
#### Stock Analysis
[![Stock Analysis](https://img.youtube.com/vi/e0Uj4yWdaAg/0.jpg)](https://www.youtube.com/watch?v=e0Uj4yWdaAg "Stock Analysis")
### Write Job Descriptions
## Local Open Source Models
crewAI supports integration with local models, through tools such as [Ollama](https://ollama.ai/), for enhanced flexibility and customization. This allows you to utilize your own models, which can be particularly useful for specialized tasks or data privacy concerns.
[Check out code for this example](https://github.com/joaomdmoura/crewAI-examples/tree/main/job-posting) or watch a video below:
### Setting Up Ollama
- **Install Ollama**: Ensure that Ollama is properly installed in your environment. Follow the installation guide provided by Ollama for detailed instructions.
- **Configure Ollama**: Set up Ollama to work with your local model. You will probably need to [tweak the model using a Modelfile](https://github.com/jmorganca/ollama/blob/main/docs/modelfile.md). I'd recommend adding `Observation` as a stop word and playing with `top_p` and `temperature`.
[![Jobs postings](https://img.youtube.com/vi/u98wEMz-9to/maxresdefault.jpg)](https://www.youtube.com/watch?v=u98wEMz-9to "Jobs postings")
### Integrating Ollama with CrewAI
- Instantiate Ollama Model: Create an instance of the Ollama model. You can specify the model and the base URL during instantiation. For example:
### Trip Planner
```python
from langchain.llms import Ollama
ollama_openhermes = Ollama(model="openhermes")
# Pass Ollama Model to Agents: When creating your agents within the CrewAI framework, you can pass the Ollama model as an argument to the Agent constructor. For instance:
[Check out code for this example](https://github.com/joaomdmoura/crewAI-examples/tree/main/trip_planner) or watch a video below:
local_expert = Agent(
role='Local Expert at this city',
goal='Provide the BEST insights about the selected city',
backstory="""A knowledgeable local guide with extensive information
about the city, its attractions and customs""",
tools=[
SearchTools.search_internet,
BrowserTools.scrape_and_summarize_website,
],
llm=ollama_openhermes, # Ollama model passed here
verbose=True
)
```
[![Trip Planner](https://img.youtube.com/vi/xis7rWp-hjs/maxresdefault.jpg)](https://www.youtube.com/watch?v=xis7rWp-hjs "Trip Planner")
### Stock Analysis
[Check out code for this example](https://github.com/joaomdmoura/crewAI-examples/tree/main/stock_analysis) or watch a video below:
[![Stock Analysis](https://img.youtube.com/vi/e0Uj4yWdaAg/maxresdefault.jpg)](https://www.youtube.com/watch?v=e0Uj4yWdaAg "Stock Analysis")
## Connecting Your Crew to a Model
crewAI supports using various LLMs through a variety of connection options. By default your agents will use the OpenAI API when querying the model. However, there are several other ways to allow your agents to connect to models. For example, you can configure your agents to use a local model via the Ollama tool.
Please refer to the [Connect crewAI to LLMs](https://docs.crewai.com/how-to/LLM-Connections/) page for details on configuring your agents' connections to models.
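A minimal sketch of one such option, based on the commented-out snippet in the Getting Started example above: pointing the default OpenAI-compatible client at a local Ollama server via environment variables. The base URL, model name, and placeholder key are assumptions you would adjust for your own setup; set them before creating agents.
```python
import os

# Point the OpenAI-compatible client at a local Ollama endpoint (assumed URL/port).
os.environ["OPENAI_API_BASE"] = "http://localhost:11434/v1"
os.environ["OPENAI_MODEL_NAME"] = "openhermes"  # adjust to a model you have pulled locally
os.environ["OPENAI_API_KEY"] = "sk-not-needed-for-local"  # placeholder; local servers ignore it

from crewai import Agent

local_agent = Agent(
    role="Researcher",
    goal="Summarize recent AI news",
    backstory="An analyst who works entirely against a locally hosted model.",
    verbose=True,
)
```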
## How CrewAI Compares
- **Autogen**: While Autogen excels in creating conversational agents capable of working together, it lacks an inherent concept of process. In Autogen, orchestrating agents' interactions requires additional programming, which can become complex and cumbersome as the scale of tasks grows.
- **Autogen**: While Autogen does a good job of creating conversational agents capable of working together, it lacks an inherent concept of process. In Autogen, orchestrating agents' interactions requires additional programming, which can become complex and cumbersome as the scale of tasks grows.
- **ChatDev**: ChatDev introduced the idea of processes into the realm of AI agents, but its implementation is quite rigid. Customizations in ChatDev are limited and not geared towards production environments, which can hinder scalability and flexibility in real-world applications.
@@ -196,12 +205,14 @@ CrewAI is open-source and we welcome contributions. If you're looking to contrib
- We appreciate your input!
### Installing Dependencies
```bash
poetry lock
poetry install
```
### Virtual Env
```bash
poetry shell
```
@@ -213,25 +224,64 @@ pre-commit install
```
### Running Tests
```bash
poetry run pytest
```
### Running static type checks
```bash
poetry run pyright
```
### Packaging
```bash
poetry build
```
### Installing Locally
```bash
pip install dist/*.tar.gz
```
## Hire Consulting
I, [@joaomdmoura](https://github.com/joaomdmoura) (creator of crewAI), offer consulting through my LLC ([AI Nest Labs](https://ainestlabs.com)).
If you are interested in hiring weekly hours with me on a retainer, feel free to email me at [joao@ainestlabs.com](mailto:joao@ainestlabs.com).
## Hire CrewAI
We are the company developing crewAI and crewAI Enterprise, and for a limited time we are offering consulting to selected customers to give them early access to our enterprise solution.
If you are interested in getting access and hiring weekly hours with our team, feel free to email us at [joao@crewai.com](mailto:joao@crewai.com).
## Telemetry
CrewAI uses anonymous telemetry to collect usage data with the main purpose of helping us improve the library by focusing our efforts on the most used features, integrations and tools.
NO data is collected on prompts, task descriptions, agents' backstories or goals, tool usage, API calls, responses, any data processed by the agents, or any secrets and environment variables.
Data collected includes:
- Version of crewAI
  - So we can understand how many users are using the latest version
- Version of Python
  - So we can decide which versions to better support
- General OS (e.g. number of CPUs, macOS/Windows/Linux)
  - So we know which OS we should focus on and whether we could build OS-specific features
- Number of agents and tasks in a crew
  - So we can make sure we are testing internally with similar use cases and educate people on best practices
- Crew Process being used
  - So we understand where we should focus our efforts
- Whether agents are using memory or allowing delegation
  - So we understand whether we improved these features or should maybe even drop them
- Whether tasks are being executed in parallel or sequentially
  - So we understand whether we should focus more on parallel execution
- Language model being used
  - So we can improve support for the most used models
- Roles of agents in a crew
  - So we understand high-level use cases and can build better tools, integrations, and examples around them
- Tool names available
  - So we understand which of the publicly available tools are used the most and can improve them
Users can opt in to sharing the complete telemetry data by setting the `share_crew` attribute to `True` on their crews.
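A minimal sketch of opting in, using the `share_crew` attribute named above; the agents and tasks are placeholders assumed to be defined as in the earlier examples.
```python
from crewai import Crew

# Opt in to sharing complete crew telemetry with the crewAI team (off by default).
crew = Crew(
    agents=[researcher, writer],  # placeholder agents defined elsewhere
    tasks=[task1, task2],         # placeholder tasks defined elsewhere
    share_crew=True,
)
```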
## License
CrewAI is released under the MIT License
CrewAI is released under the MIT License.

View File

@@ -0,0 +1,66 @@
---
title: crewAI Agents
description: What are crewAI Agents and how to use them.
---
## What is an Agent?
!!! note "What is an Agent?"
An agent is an **autonomous unit** programmed to:
<ul>
<li class='leading-3'>Perform tasks</li>
<li class='leading-3'>Make decisions</li>
<li class='leading-3'>Communicate with other agents</li>
</ul>
<br/>
Think of an agent as a member of a team, with specific skills and a particular job to do. Agents can have different roles like 'Researcher', 'Writer', or 'Customer Support', each contributing to the overall goal of the crew.
## Agent Attributes
| Attribute | Description |
| :--------------------- | :--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| **Role** | Defines the agent's function within the crew. It determines the kind of tasks the agent is best suited for. |
| **Goal** | The individual objective that the agent aims to achieve. It guides the agent's decision-making process. |
| **Backstory** | Provides context to the agent's role and goal, enriching the interaction and collaboration dynamics. |
| **LLM** *(optional)* | The language model used by the agent to process and generate text. It dynamically fetches the model name from the `OPENAI_MODEL_NAME` environment variable, defaulting to "gpt-4" if not specified. |
| **Tools** *(optional)* | Set of capabilities or functions that the agent can use to perform tasks. Tools can be shared or exclusive to specific agents. It's an attribute that can be set during the initialization of an agent, with a default value of an empty list. |
| **Function Calling LLM** *(optional)* | If passed, this agent will use this LLM to execute function calling for tools instead of relying on the main LLM output. |
| **Max Iter** *(optional)* | The maximum number of iterations the agent can perform before being forced to give its best answer. Default is `15`. |
| **Max RPM** *(optional)* | The maximum number of requests per minute the agent can perform to avoid rate limits. It's optional and can be left unspecified, with a default value of `None`. |
| **Verbose** *(optional)* | Enables detailed logging of the agent's execution for debugging or monitoring purposes when set to True. Default is `False`. |
| **Allow Delegation** *(optional)* | Agents can delegate tasks or questions to one another, ensuring that each task is handled by the most suitable agent. Default is `True`. |
| **Step Callback** *(optional)* | A function that is called after each step of the agent. This can be used to log the agent's actions or to perform other operations. It will overwrite the crew `step_callback`. |
| **Memory** *(optional)* | Indicates whether the agent should have memory or not, with a default value of False. This impacts the agent's ability to remember past interactions. Default is `False`. |
## Creating an Agent
!!! note "Agent Interaction"
Agents can interact with each other using crewAI's built-in delegation and communication mechanisms. This allows for dynamic task management and problem-solving within the crew.
To create an agent, you would typically initialize an instance of the `Agent` class with the desired properties. Here's a conceptual example including all attributes:
```python
# Example: Creating an agent with all attributes
from crewai import Agent
agent = Agent(
role='Data Analyst',
goal='Extract actionable insights',
backstory="""You're a data analyst at a large company.
You're responsible for analyzing data and providing insights
to the business.
You're currently working on a project to analyze the
performance of our marketing campaigns.""",
tools=[my_tool1, my_tool2], # Optional, defaults to an empty list
llm=my_llm, # Optional
function_calling_llm=my_llm, # Optional
max_iter=15, # Optional
max_rpm=None, # Optional
verbose=True, # Optional
allow_delegation=True, # Optional
step_callback=my_intermediate_step_callback, # Optional
memory=True # Optional
)
```
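The example above references `my_intermediate_step_callback` without defining it. A minimal sketch of such a callback is shown below; the exact shape of the payload is an assumption, since the docs only state that the function is called after each agent step, so it simply accepts whatever step output the framework passes.
```python
def my_intermediate_step_callback(step_output):
    # Called after each agent step; useful for logging or monitoring.
    # The structure of `step_output` depends on the crewAI version in use.
    print(f"[agent step] {step_output}")
```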
## Conclusion
Agents are the building blocks of the CrewAI framework. By understanding how to define and interact with agents, you can create sophisticated AI systems that leverage the power of collaborative intelligence.

View File

@@ -0,0 +1,38 @@
---
title: How Agents Collaborate in CrewAI
description: Exploring the dynamics of agent collaboration within the CrewAI framework, focusing on the newly integrated features for enhanced functionality.
---
## Collaboration Fundamentals
!!! note "Core of Agent Interaction"
Collaboration in CrewAI is fundamental, enabling agents to combine their skills, share information, and assist each other in task execution, embodying a truly cooperative ecosystem.
- **Information Sharing**: Ensures all agents are well-informed and can contribute effectively by sharing data and findings.
- **Task Assistance**: Allows agents to seek help from peers with the required expertise for specific tasks.
- **Resource Allocation**: Optimizes task execution through the efficient distribution and sharing of resources among agents.
## Enhanced Attributes for Improved Collaboration
The `Crew` class has been enriched with several attributes to support advanced functionalities:
- **Language Model Management (`manager_llm`, `function_calling_llm`)**: Manages language models for executing tasks and tools, facilitating sophisticated agent-tool interactions. It's important to note that `manager_llm` is mandatory when using a hierarchical process for ensuring proper execution flow.
- **Process Flow (`process`)**: Defines the execution logic (e.g., sequential, hierarchical) to streamline task distribution and execution.
- **Verbose Logging (`verbose`)**: Offers detailed logging capabilities for monitoring and debugging purposes. It supports both integer and boolean types to indicate the verbosity level.
- **Configuration (`config`)**: Allows extensive customization to tailor the crew's behavior according to specific requirements.
- **Rate Limiting (`max_rpm`)**: Ensures efficient utilization of resources by limiting requests per minute.
- **Internationalization Support (`language`)**: Facilitates operation in multiple languages, enhancing global usability.
- **Execution and Output Handling (`full_output`)**: Distinguishes between full and final outputs for nuanced control over task results.
- **Callback and Telemetry (`step_callback`)**: Integrates callbacks for step-wise execution monitoring and telemetry for performance analytics.
- **Crew Sharing (`share_crew`)**: Enables sharing of crew information with CrewAI for continuous improvement and training models.
- **Usage Metrics (`usage_metrics`)**: Stores all metrics for language model (LLM) usage across all tasks' execution, providing insights into operational efficiency and areas for improvement; you can check it after the crew has finished running, as shown in the sketch below.
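A minimal sketch tying several of these attributes together; the agents, tasks, and model choice are placeholders, and only the keyword arguments named in the list above (and in the Crews documentation) are taken from the docs.
```python
from crewai import Crew, Process
from langchain_openai import ChatOpenAI

crew = Crew(
    agents=[researcher, writer],            # placeholder agents defined elsewhere
    tasks=[research_task, write_task],      # placeholder tasks defined elsewhere
    process=Process.hierarchical,           # hierarchical flow requires a manager_llm
    manager_llm=ChatOpenAI(model="gpt-4"),
    verbose=True,
    max_rpm=10,                             # rate limiting
    full_output=True,                       # return all task outputs, not just the final one
    step_callback=lambda step: print(step), # step-wise execution monitoring
)

result = crew.kickoff()
print(crew.usage_metrics)  # LLM usage metrics, available after execution
```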
## Delegation: Dividing to Conquer
Delegation enhances functionality by allowing agents to intelligently assign tasks or seek help, thereby amplifying the crew's overall capability.
## Implementing Collaboration and Delegation
Setting up a crew involves defining the roles and capabilities of each agent. CrewAI seamlessly manages their interactions, ensuring efficient collaboration and delegation, with enhanced customization and monitoring features to adapt to various operational needs.
## Example Scenario
Consider a crew with a researcher agent tasked with data gathering and a writer agent responsible for compiling reports. The integration of advanced language model management and process flow attributes allows for more sophisticated interactions, such as the writer delegating complex research tasks to the researcher or querying specific information, thereby facilitating a seamless workflow.
## Conclusion
The integration of advanced attributes and functionalities into the CrewAI framework significantly enriches the agent collaboration ecosystem. These enhancements not only simplify interactions but also offer unprecedented flexibility and control, paving the way for sophisticated AI-driven solutions capable of tackling complex tasks through intelligent collaboration and delegation.

View File

@@ -0,0 +1,98 @@
---
title: crewAI Crews
description: Understanding and utilizing crews in the crewAI framework.
---
## What is a Crew?
!!! note "Definition of a Crew"
A crew in crewAI represents a collaborative group of agents working together to achieve a set of tasks. Each crew defines the strategy for task execution, agent collaboration, and the overall workflow.
## Crew Attributes
| Attribute | Description |
| :---------------------- | :----------------------------------------------------------- |
| **Tasks** | A list of tasks assigned to the crew. |
| **Agents** | A list of agents that are part of the crew. |
| **Process** *(optional)* | The process flow (e.g., sequential, hierarchical) the crew follows. |
| **Verbose** *(optional)* | The verbosity level for logging during execution. |
| **Manager LLM** *(optional)* | The language model used by the manager agent in a hierarchical process. **Required when using a hierarchical process.** |
| **Function Calling LLM** *(optional)* | If passed, the crew will use this LLM to do function calling for tools for all agents in the crew. Each agent can have its own LLM, which overrides the crew's LLM for function calling. |
| **Config** *(optional)* | Optional configuration settings for the crew, in `Json` or `Dict[str, Any]` format. |
| **Max RPM** *(optional)* | Maximum requests per minute the crew adheres to during execution. |
| **Language** *(optional)* | Language used for the crew, defaults to English. |
| **Full Output** *(optional)* | Whether the crew should return the full output with all tasks outputs or just the final output. |
| **Step Callback** *(optional)* | A function that is called after each step of every agent. This can be used to log the agent's actions or to perform other operations; it won't override the agent-specific `step_callback`. |
| **Share Crew** *(optional)* | Whether you want to share the complete crew information and execution with the crewAI team to make the library better, and allow us to train models. |
!!! note "Crew Max RPM"
The `max_rpm` attribute sets the maximum number of requests per minute the crew can perform to avoid rate limits and will override individual agents' `max_rpm` settings if you set it.
## Creating a Crew
!!! note "Crew Composition"
When assembling a crew, you combine agents with complementary roles and tools, assign tasks, and select a process that dictates their execution order and interaction.
### Example: Assembling a Crew
```python
from crewai import Crew, Agent, Task, Process
from langchain_community.tools import DuckDuckGoSearchRun
# Define agents with specific roles and tools
researcher = Agent(
role='Senior Research Analyst',
goal='Discover innovative AI technologies',
tools=[DuckDuckGoSearchRun()]
)
writer = Agent(
role='Content Writer',
goal='Write engaging articles on AI discoveries',
verbose=True
)
# Create tasks for the agents
research_task = Task(
description='Identify breakthrough AI technologies',
agent=researcher
)
write_article_task = Task(
description='Draft an article on the latest AI technologies',
agent=writer
)
# Assemble the crew with a sequential process
my_crew = Crew(
agents=[researcher, writer],
tasks=[research_task, write_article_task],
process=Process.sequential,
full_output=True,
verbose=True,
)
```
## Crew Usage Metrics
After the crew execution, you can access the `usage_metrics` attribute to view the language model (LLM) usage metrics for all tasks executed by the crew. This provides insights into operational efficiency and areas for improvement.
```python
# Access the crew's usage metrics
crew = Crew(agents=[agent1, agent2], tasks=[task1, task2])
crew.kickoff()
print(crew.usage_metrics)
```
## Crew Execution Process
- **Sequential Process**: Tasks are executed one after another, allowing for a linear flow of work.
- **Hierarchical Process**: A manager agent coordinates the crew, delegating tasks and validating outcomes before proceeding. **Note**: A `manager_llm` is required for this process and it's essential for validating the process flow.
### Kicking Off a Crew
Once your crew is assembled, initiate the workflow with the `kickoff()` method. This starts the execution process according to the defined process flow.
```python
# Start the crew's task execution
result = my_crew.kickoff()
print(result)
```

View File

@@ -0,0 +1,60 @@
---
title: Managing Processes in CrewAI
description: Detailed guide on workflow management through processes in CrewAI, with updated implementation details.
---
## Understanding Processes
!!! note "Core Concept"
In CrewAI, processes orchestrate the execution of tasks by agents, akin to project management in human teams. These processes ensure tasks are distributed and executed efficiently, in alignment with a predefined strategy.
## Process Implementations
- **Sequential**: Executes tasks sequentially, ensuring tasks are completed in an orderly progression.
- **Hierarchical**: Organizes tasks in a managerial hierarchy, where tasks are delegated and executed based on a structured chain of command. Note: A manager language model (`manager_llm`) must be specified in the crew to enable the hierarchical process, allowing for the creation and management of tasks by the manager.
- **Consensual Process (Planned)**: Currently under consideration for future development, this process type aims for collaborative decision-making among agents on task execution, introducing a more democratic approach to task management within CrewAI. As of now, it is not implemented in the codebase.
## The Role of Processes in Teamwork
Processes enable individual agents to operate as a cohesive unit, streamlining their efforts to achieve common objectives with efficiency and coherence.
## Assigning Processes to a Crew
To assign a process to a crew, specify the process type upon crew creation to set the execution strategy. Note: For a hierarchical process, ensure to define `manager_llm` for the manager agent.
```python
from crewai import Crew
from crewai.process import Process
from langchain_openai import ChatOpenAI
# Example: Creating a crew with a sequential process
crew = Crew(
agents=my_agents,
tasks=my_tasks,
process=Process.sequential
)
# Example: Creating a crew with a hierarchical process
# Ensure to provide a manager_llm
crew = Crew(
agents=my_agents,
tasks=my_tasks,
process=Process.hierarchical,
manager_llm=ChatOpenAI(model="gpt-4")
)
```
**Note:** Ensure `my_agents` and `my_tasks` are defined prior to creating a `Crew` object, and for the hierarchical process, `manager_llm` is also required.
## Sequential Process
This method mirrors dynamic team workflows, progressing through tasks in a thoughtful and systematic manner. Task execution follows the predefined order in the task list, with the output of one task serving as context for the next.
To customize task context, utilize the `context` parameter in the `Task` class to specify outputs that should be used as context for subsequent tasks.
## Hierarchical Process
Emulating a corporate hierarchy, crewAI automatically creates a manager for you, requiring the specification of a manager language model (`manager_llm`) for the manager agent. This agent oversees task execution, including planning, delegation, and validation. Tasks are not pre-assigned; the manager allocates tasks to agents based on their capabilities, reviews outputs, and assesses task completion.
## Process Class: Detailed Overview
The `Process` class is implemented as an enumeration (`Enum`), ensuring type safety and restricting process values to the defined types (`sequential`, `hierarchical`, and future `consensual`). This design choice guarantees that only valid processes are utilized within the CrewAI framework.
## Planned Future Processes
- **Consensual Process**: This collaborative decision-making process among agents on task execution is under consideration but not currently implemented. This future enhancement aims to introduce a more democratic approach to task management within CrewAI.
## Conclusion
The structured collaboration facilitated by processes within CrewAI is crucial for enabling systematic teamwork among agents. Documentation will be regularly updated to reflect new processes and enhancements, ensuring users have access to the most current and comprehensive information.

227
docs/core-concepts/Tasks.md Normal file
View File

@@ -0,0 +1,227 @@
---
title: crewAI Tasks
description: Overview and management of tasks within the crewAI framework.
---
## Overview of a Task
!!! note "What is a Task?"
In the CrewAI framework, tasks are individual assignments that agents complete. They encapsulate the necessary information for execution, including a description, an assigned agent, and required tools, offering flexibility for various action complexities.
Tasks in CrewAI can be designed to require collaboration between agents. For example, one agent might gather data while another analyzes it. This collaborative approach can be defined within the task properties and managed by the Crew's process.
## Task Attributes
| Attribute | Description |
| :------------- | :----------------------------------- |
| **Description** | A clear, concise statement of what the task entails. |
| **Agent** | Optionally, you can specify which agent is responsible for the task. If not, the crew's process will determine who takes it on. |
| **Expected Output** | Clear and detailed definition of expected output for the task. |
| **Tools** *(optional)* | These are the functions or capabilities the agent can utilize to perform the task. They can be anything from simple actions like 'search' to more complex interactions with other agents or APIs. |
| **Async Execution** *(optional)* | Indicates whether the task should be executed asynchronously, allowing the crew to continue with the next task without waiting for completion. |
| **Context** *(optional)* | Other tasks that will have their output used as context for this task. If a task is asynchronous, the system will wait for that to finish before using its output as context. |
| **Output JSON** *(optional)* | Takes a pydantic model and returns the output as a JSON object. **The agent's LLM needs to use an OpenAI-compatible client; it could be Ollama, for example, as long as it goes through the OpenAI wrapper.** |
| **Output Pydantic** *(optional)* | Takes a pydantic model and returns the output as a pydantic object. **The agent's LLM needs to use an OpenAI-compatible client; it could be Ollama, for example, as long as it goes through the OpenAI wrapper.** |
| **Output File** *(optional)* | Takes a file path and saves the output of the task on it. |
| **Callback** *(optional)* | A function to be executed after the task is completed. |
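A minimal sketch of the output-related attributes; the keyword arguments (`output_json`, `output_pydantic`, `output_file`) are assumed to match the attribute names in the table above, and `research_agent` is a placeholder agent defined elsewhere.
```python
from pydantic import BaseModel
from crewai import Task

class NewsSummary(BaseModel):
    title: str
    bullets: list[str]

summary_task = Task(
    description="Summarize the latest AI news",
    expected_output="A short structured summary",
    agent=research_agent,              # placeholder agent defined elsewhere
    output_pydantic=NewsSummary,       # parse the result into this pydantic model
    # output_json=NewsSummary,         # ...or return it as a JSON object instead
    output_file="ai_news_summary.md",  # also save the raw output to a file
)
```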
## Creating a Task
This is the simplest example of creating a task: it involves defining its scope and agent, but there are optional attributes that can provide a lot of flexibility:
```python
from crewai import Task
task = Task(
description='Find and summarize the latest and most relevant news on AI',
agent=sales_agent
)
```
!!! note "Task Assignment"
Tasks can be assigned directly by specifying an `agent`, or they can be assigned at run time by CrewAI's `hierarchical` process, considering roles, availability, or other criteria.
## Integrating Tools with Tasks
Tools from the [crewAI Toolkit](https://github.com/joaomdmoura/crewai-tools) and [LangChain Tools](https://python.langchain.com/docs/integrations/tools) enhance task performance, allowing agents to interact more effectively with their environment. Assigning specific tools to tasks can tailor agent capabilities to particular needs.
## Creating a Task with Tools
```python
import os
os.environ["OPENAI_API_KEY"] = "Your Key"
os.environ["SERPER_API_KEY"] = "Your Key" # serper.dev API key
from crewai import Agent, Task, Crew
from crewai_tools import SerperDevTool
research_agent = Agent(
role='Researcher',
goal='Find and summarize the latest AI news',
backstory="""You're a researcher at a large company.
You're responsible for analyzing data and providing insights
to the business."""
verbose=True
)
search_tool = SerperDevTool()
task = Task(
description='Find and summarize the latest AI news',
expected_output='A bullet list summary of the top 5 most important AI news',
agent=research_agent,
tools=[search_tool]
)
crew = Crew(
agents=[research_agent],
tasks=[task],
verbose=2
)
result = crew.kickoff()
print(result)
```
This demonstrates how tasks with specific tools can override an agent's default set for tailored task execution.
## Referring to Other Tasks
In crewAI, the output of one task is automatically relayed into the next one, but you can explicitly define which tasks' outputs (including multiple ones) should be used as context for another task.
This is useful when you have a task that depends on the output of another task that is not performed immediately after it. This is done through the `context` attribute of the task:
```python
# ...
research_ai_task = Task(
description='Find and summarize the latest AI news',
expected_output='A bullet list summary of the top 5 most important AI news',
async_execution=True,
agent=research_agent,
tools=[search_tool]
)
research_ops_task = Task(
description='Find and summarize the latest AI Ops news',
expected_output='A bullet list summary of the top 5 most important AI Ops news',
async_execution=True,
agent=research_agent,
tools=[search_tool]
)
write_blog_task = Task(
description="Write a full blog post about the importance of AI and its latest news",
expected_output='Full blog post that is 4 paragraphs long',
agent=writer_agent,
context=[research_ai_task, research_ops_task]
)
#...
```
## Asynchronous Execution
You can define a task to be executed asynchronously. This means that the crew will not wait for it to be completed to continue with the next task. This is useful for tasks that take a long time to be completed, or that are not crucial for the next tasks to be performed.
You can then use the `context` attribute to define in a future task that it should wait for the output of the asynchronous task to be completed.
```python
#...
list_ideas = Task(
description="List of 5 interesting ideas to explore for an article about AI.",
expected_output="Bullet point list of 5 ideas for an article.",
agent=researcher,
async_execution=True # Will be executed asynchronously
)
list_important_history = Task(
description="Research the history of AI and give me the 5 most important events.",
expected_output="Bullet point list of 5 important events.",
agent=researcher,
async_execution=True # Will be executed asynchronously
)
write_article = Task(
description="Write an article about AI, its history, and interesting ideas.",
expected_output="A 4 paragraph article about AI.",
agent=writer,
context=[list_ideas, list_important_history] # Will wait for the output of the two tasks to be completed
)
#...
```
## Callback Mechanism
The callback function is executed after the task is completed, allowing for actions or notifications to be triggered based on the task's outcome.
```python
# ...
def callback_function(output: TaskOutput):
# Do something after the task is completed
# Example: Send an email to the manager
print(f"""
Task completed!
Task: {output.description}
Output: {output.raw_output}
""")
research_task = Task(
description='Find and summarize the latest AI news',
expected_output='A bullet list summary of the top 5 most important AI news',
agent=research_agent,
tools=[search_tool],
callback=callback_function
)
#...
```
## Accessing a Specific Task Output
Once a crew finishes running, you can access the output of a specific task by using the `output` attribute of the task object:
```python
# ...
task1 = Task(
description='Find and summarize the latest AI news',
expected_output='A bullet list summary of the top 5 most important AI news',
agent=research_agent,
tools=[search_tool]
)
#...
crew = Crew(
agents=[research_agent],
tasks=[task1, task2, task3],
verbose=2
)
result = crew.kickoff()
# Returns a TaskOutput object with the description and results of the task
print(f"""
Task completed!
Task: {task1.output.description}
Output: {task1.output.raw_output}
""")
```
## Tool Override Mechanism
Specifying tools in a task allows for dynamic adaptation of agent capabilities, emphasizing CrewAI's flexibility.
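As a rough sketch of the override, reusing the `SerperDevTool` and `WebsiteSearchTool` shown earlier (the agent and task below are illustrative):
```python
from crewai import Agent, Task
from crewai_tools import SerperDevTool, WebsiteSearchTool

search_tool = SerperDevTool()
web_rag_tool = WebsiteSearchTool()

# The agent's default toolset is a general web search tool
writer = Agent(
    role='Writer',
    goal='Summarize web content',
    backstory='A concise technical writer.',
    tools=[search_tool]
)

# For this task only, the agent uses the website RAG tool instead of its default tools
summarize_task = Task(
    description='Summarize the content of the provided website',
    expected_output='A short summary of the site',
    agent=writer,
    tools=[web_rag_tool]
)
```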
## Error Handling and Validation Mechanisms
While creating and executing tasks, certain validation mechanisms are in place to ensure the robustness and reliability of task attributes. These include but are not limited to:
- Ensuring only one output type is set per task to maintain clear output expectations.
- Preventing the manual assignment of the `id` attribute to uphold the integrity of the unique identifier system.
These validations help in maintaining the consistency and reliability of task executions within the crewAI framework.
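For example, here is a rough sketch of the first rule in action; the exact exception type and message depend on your crewAI version:
```python
from pydantic import BaseModel
from crewai import Task

class Summary(BaseModel):
    text: str

try:
    Task(
        description='Summarize the latest AI news',
        expected_output='A short summary',
        output_json=Summary,
        output_pydantic=Summary  # setting two output types on one task is rejected
    )
except Exception as error:
    print(f"Validation failed as expected: {error}")
```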
## Conclusion
Tasks are the driving force behind the actions of agents in crewAI. By properly defining tasks and their outcomes, you set the stage for your AI agents to work effectively, either independently or as a collaborative unit. Equipping tasks with appropriate tools, understanding the execution process, and following robust validation practices are crucial for maximizing CrewAI's potential, ensuring agents are effectively prepared for their assignments and that tasks are executed as intended.

231
docs/core-concepts/Tools.md Normal file

@@ -0,0 +1,231 @@
---
title: crewAI Tools
description: Understanding and leveraging tools within the crewAI framework for agent collaboration and task execution.
---
## Introduction
CrewAI tools empower agents with capabilities ranging from web searching and data analysis to collaboration and delegating tasks among coworkers. This documentation outlines how to create, integrate, and leverage these tools within the CrewAI framework, including a new focus on collaboration tools.
## What is a Tool?
!!! note "Definition"
A tool in CrewAI is a skill or function that agents can utilize to perform various actions. This includes tools from the [crewAI Toolkit](https://github.com/joaomdmoura/crewai-tools) and [LangChain Tools](https://python.langchain.com/docs/integrations/tools), enabling everything from simple searches to complex interactions and effective teamwork among agents.
## Key Characteristics of Tools
- **Utility**: Crafted for tasks such as web searching, data analysis, content generation, and agent collaboration.
- **Integration**: Boosts agent capabilities by seamlessly integrating tools into their workflow.
- **Customizability**: Provides the flexibility to develop custom tools or utilize existing ones, catering to the specific needs of agents.
## Using crewAI Tools
To enhance your agents' capabilities with crewAI tools, begin by installing our extra tools package:
```bash
pip install 'crewai[tools]'
```
Here's an example demonstrating their use:
```python
import os
from crewai import Agent, Task, Crew
# Importing crewAI tools
from crewai_tools import (
DirectoryReadTool,
FileReadTool,
SerperDevTool,
WebsiteSearchTool
)
# Set up API keys
os.environ["SERPER_API_KEY"] = "Your Key" # serper.dev API key
os.environ["OPENAI_API_KEY"] = "Your Key"
# Instantiate tools
docs_tool = DirectoryReadTool(directory='./blog-posts')
file_tool = FileReadTool()
search_tool = SerperDevTool()
web_rag_tool = WebsiteSearchTool()
# Create agents
researcher = Agent(
role='Market Research Analyst',
goal='Provide up-to-date market analysis of the AI industry',
backstory='An expert analyst with a keen eye for market trends.',
tools=[search_tool, web_rag_tool],
verbose=True
)
writer = Agent(
role='Content Writer',
goal='Craft engaging blog posts about the AI industry',
backstory='A skilled writer with a passion for technology.',
tools=[docs_tool, file_tool],
verbose=True
)
# Define tasks
research = Task(
description='Research the latest trends in the AI industry and provide a summary.',
expected_output='A summary of the top 3 trending developments in the AI industry with a unique perspective on their significance.',
agent=researcher
)
write = Task(
description='Write an engaging blog post about the AI industry, based on the research analysts summary. Draw inspiration from the latest blog posts in the directory.',
expected_output='A 4-paragraph blog post formatted in markdown with engaging, informative, and accessible content, avoiding complex jargon.',
agent=writer,
output_file='blog-posts/new_post.md' # The final blog post will be saved here
)
# Assemble a crew
crew = Crew(
agents=[researcher, writer],
tasks=[research, write],
verbose=2
)
# Execute tasks
crew.kickoff()
```
## Available crewAI Tools
Most tools in the crewAI toolkit can either be initialized with specific arguments or left open-ended so the agent decides at run time. For example:
```python
from crewai_tools import DirectoryReadTool
# This will allow the agent with this tool to read any directory it wants during its execution
tool = DirectoryReadTool()
# OR
# This will allow the agent with this tool to read only the specified directory during its execution
tool = DirectoryReadTool(directory='./directory')
```
Specific per tool docs are coming soon.
Here is a list of the available tools and their descriptions:
| Tool | Description |
| :-------------------------- | :-------------------------------------------------------------------------------------------- |
| **CodeDocsSearchTool** | A RAG tool optimized for searching through code documentation and related technical documents.|
| **CSVSearchTool** | A RAG tool designed for searching within CSV files, tailored to handle structured data. |
| **DirectorySearchTool** | A RAG tool for searching within directories, useful for navigating through file systems. |
| **DOCXSearchTool** | A RAG tool aimed at searching within DOCX documents, ideal for processing Word files. |
| **DirectoryReadTool** | Facilitates reading and processing of directory structures and their contents. |
| **FileReadTool** | Enables reading and extracting data from files, supporting various file formats. |
| **GithubSearchTool** | A RAG tool for searching within GitHub repositories, useful for code and documentation search.|
| **SerperDevTool**           | A search tool that queries the serper.dev API, useful for retrieving up-to-date search results. |
| **TXTSearchTool** | A RAG tool focused on searching within text (.txt) files, suitable for unstructured data. |
| **JSONSearchTool** | A RAG tool designed for searching within JSON files, catering to structured data handling. |
| **MDXSearchTool** | A RAG tool tailored for searching within Markdown (MDX) files, useful for documentation. |
| **PDFSearchTool** | A RAG tool aimed at searching within PDF documents, ideal for processing scanned documents. |
| **PGSearchTool** | A RAG tool optimized for searching within PostgreSQL databases, suitable for database queries. |
| **RagTool** | A general-purpose RAG tool capable of handling various data sources and types. |
| **ScrapeElementFromWebsiteTool** | Enables scraping specific elements from websites, useful for targeted data extraction. |
| **ScrapeWebsiteTool** | Facilitates scraping entire websites, ideal for comprehensive data collection. |
| **WebsiteSearchTool** | A RAG tool for searching website content, optimized for web data extraction. |
| **XMLSearchTool** | A RAG tool designed for searching within XML files, suitable for structured data formats. |
| **YoutubeChannelSearchTool**| A RAG tool for searching within YouTube channels, useful for video content analysis. |
| **YoutubeVideoSearchTool** | A RAG tool aimed at searching within YouTube videos, ideal for video data extraction. |
## Creating your own Tools
!!! example "Custom Tool Creation"
Developers can craft custom tools tailored for their agents' needs or utilize pre-built options:
To create your own crewAI tools you will need to install our extra tools package:
```bash
pip install 'crewai[tools]'
```
Once you do that, there are two main ways to create a crewAI tool:
### Subclassing `BaseTool`
```python
from crewai_tools import BaseTool
class MyCustomTool(BaseTool):
    name: str = "Name of my tool"
    description: str = "Clear description for what this tool is useful for. Your agent will need this information to use it."

    def _run(self, argument: str) -> str:
        # Implementation goes here
        return "Result from custom tool"
```
Define a new class inheriting from `BaseTool`, specifying `name`, `description`, and the `_run` method for operational logic.
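As a quick illustration of how such a subclass is typically wired into an agent (the tool instance and the analyst agent below are just a sketch):
```python
from crewai import Agent

my_tool = MyCustomTool()

analyst = Agent(
    role='Analyst',
    goal='Answer questions using the custom tool',
    backstory='An analyst who relies on in-house tooling.',
    tools=[my_tool]
)
```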
### Utilizing the `tool` Decorator
For a simpler approach, use the `tool` decorator: it turns a plain Python function into a tool, using the function's docstring as the tool's description.
```python
from crewai_tools import tool

@tool("Name of my tool")
def my_tool(question: str) -> str:
    """Clear description for what this tool is useful for. Your agent will need this information to use it."""
    # Function logic here
```
```python
import json
import requests
from crewai import Agent
from crewai_tools import tool
from unstructured.partition.html import partition_html
# Annotate the function with the tool decorator from crewAI
@tool("Integration with a given API")
def integration_tool(argument: str) -> str:
"""Integration with a given API"""
# Code here
return resutls # string to be sent back to the agent
# Assign the scraping tool to an agent
agent = Agent(
role='Research Analyst',
goal='Provide up-to-date market analysis',
backstory='An expert analyst with a keen eye for market trends.',
tools=[integration_tool]
)
```
## Using LangChain Tools
!!! info "LangChain Integration"
CrewAI seamlessly integrates with LangChain's comprehensive toolkit for search-based queries and more; see the built-in tools offered by LangChain in the [LangChain Toolkit](https://python.langchain.com/docs/integrations/tools/):
```python
import os
from crewai import Agent
from langchain.agents import Tool
from langchain.utilities import GoogleSerperAPIWrapper
# Setup API keys
os.environ["SERPER_API_KEY"] = "Your Key"
search = GoogleSerperAPIWrapper()
# Create and assign the search tool to an agent
serper_tool = Tool(
name="Intermediate Answer",
func=search.run,
description="Useful for search-based queries",
)
agent = Agent(
role='Research Analyst',
goal='Provide up-to-date market analysis',
backstory='An expert analyst with a keen eye for market trends.',
tools=[serper_tool]
)
# rest of the code ...
```
## Conclusion
Tools are pivotal in extending the capabilities of CrewAI agents, enabling them to undertake a broad spectrum of tasks and collaborate effectively. When building solutions with CrewAI, leverage both custom and existing tools to empower your agents and enhance the AI ecosystem.

BIN
docs/crew_only_logo.png Normal file

Binary file added (94 KiB)


@@ -0,0 +1,91 @@
---
title: Creating your own Tools
description: Guide on how to create and use custom tools within the crewAI framework.
---
## Creating your own Tools
!!! example "Custom Tool Creation"
Developers can craft custom tools tailored for their agents' needs or utilize pre-built options:
To create your own crewAI tools you will need to install our extra tools package:
```bash
pip install 'crewai[tools]'
```
Once you do that, there are two main ways to create a crewAI tool:
### Subclassing `BaseTool`
```python
from crewai_tools import BaseTool
class MyCustomTool(BaseTool):
    name: str = "Name of my tool"
    description: str = "Clear description for what this tool is useful for. Your agent will need this information to use it."

    def _run(self, argument: str) -> str:
        # Implementation goes here
        return "Result from custom tool"
```
Define a new class inheriting from `BaseTool`, specifying `name`, `description`, and the `_run` method for operational logic.
### Utilizing the `tool` Decorator
For a simpler approach, use the `tool` decorator: it turns a plain Python function into a tool, using the function's docstring as the tool's description.
```python
from crewai_tools import tool

@tool("Name of my tool")
def my_tool(question: str) -> str:
    """Clear description for what this tool is useful for. Your agent will need this information to use it."""
    # Function logic here
```
```python
import json
import requests
from crewai import Agent
from crewai_tools import tool
from unstructured.partition.html import partition_html
# Annotate the function with the tool decorator from crewAI
@tool("Integration with a given API")
def integtation_tool(argument: str) -> str:
"""Integration with a given API"""
# Code here
return resutls # string to be sent back to the agent
# Assign the scraping tool to an agent
agent = Agent(
role='Research Analyst',
goal='Provide up-to-date market analysis',
backstory='An expert analyst with a keen eye for market trends.',
tools=[integration_tool]
)
```
### Using the `Tool` function from LangChain
Another simple approach is to write a plain Python function and then wrap it with LangChain's `Tool` class.
```python
def combine(a, b):
    return a + b
```
Then you can wire that function into the tool by passing it to the `func` argument of `Tool`:
```python
from langchain.agents import Tool

math_tool = Tool(
    name="Math tool",
    func=combine,
    description="Useful for adding two numbers together, in other words combining them."
)
```
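Once wrapped, the tool can be handed to an agent like any other; a minimal sketch, assuming the `math_tool` defined above:
```python
from crewai import Agent

calculator_agent = Agent(
    role='Calculator',
    goal='Perform simple arithmetic when asked',
    backstory='A meticulous assistant for numeric questions.',
    tools=[math_tool]
)
```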


@@ -0,0 +1,118 @@
---
title: Assembling and Activating Your CrewAI Team
description: A comprehensive guide to creating a dynamic CrewAI team for your projects, with updated functionalities including verbose mode, memory capabilities, and more.
---
## Introduction
Embark on your CrewAI journey by setting up your environment and initiating your AI crew with enhanced features. This guide ensures a seamless start, incorporating the latest updates.
## Step 0: Installation
Install CrewAI and any necessary packages for your project.
```shell
pip install crewai
pip install 'crewai[tools]'
```
## Step 1: Assemble Your Agents
Define your agents with distinct roles, backstories, and now, enhanced capabilities such as verbose mode and memory usage. These elements add depth and guide their task execution and interaction within the crew.
```python
import os
os.environ["SERPER_API_KEY"] = "Your Key" # serper.dev API key
os.environ["OPENAI_API_KEY"] = "Your Key"
from crewai import Agent
from crewai_tools import SerperDevTool
search_tool = SerperDevTool()
# Creating a senior researcher agent with memory and verbose mode
researcher = Agent(
role='Senior Researcher',
goal='Uncover groundbreaking technologies in {topic}',
verbose=True,
memory=True,
backstory=(
"Driven by curiosity, you're at the forefront of"
"innovation, eager to explore and share knowledge that could change"
"the world."
),
tools=[search_tool],
allow_delegation=True
)
# Creating a writer agent with custom tools and delegation capability
writer = Agent(
role='Writer',
goal='Narrate compelling tech stories about {topic}',
verbose=True,
memory=True,
backstory=(
"With a flair for simplifying complex topics, you craft"
"engaging narratives that captivate and educate, bringing new"
"discoveries to light in an accessible manner."
),
tools=[search_tool],
allow_delegation=False
)
```
## Step 2: Define the Tasks
Detail the specific objectives for your agents, including new features for asynchronous execution and output customization. These tasks ensure a targeted approach to their roles.
```python
from crewai import Task
# Research task
research_task = Task(
description=(
"Identify the next big trend in {topic}."
"Focus on identifying pros and cons and the overall narrative."
"Your final report should clearly articulate the key points"
"its market opportunities, and potential risks."
),
expected_output='A comprehensive 3 paragraphs long report on the latest AI trends.',
tools=[search_tool],
agent=researcher,
)
# Writing task with language model configuration
write_task = Task(
description=(
"Compose an insightful article on {topic}."
"Focus on the latest trends and how it's impacting the industry."
"This article should be easy to understand, engaging, and positive."
),
expected_output='A 4 paragraph article on {topic} advancements formatted as markdown.',
tools=[search_tool],
agent=writer,
async_execution=False,
output_file='new-blog-post.md' # Example of output customization
)
```
## Step 3: Form the Crew
Combine your agents into a crew, setting the workflow process they'll follow to accomplish the tasks, now with the option to configure language models for enhanced interaction.
```python
from crewai import Crew, Process
# Forming the tech-focused crew with enhanced configurations
crew = Crew(
agents=[researcher, writer],
tasks=[research_task, write_task],
process=Process.sequential # Optional: Sequential task execution is default
)
```
## Step 4: Kick It Off
Initiate the process with your enhanced crew ready. Observe as your agents collaborate, leveraging their new capabilities for a successful project outcome. You can also pass the inputs that will be interpolated into the agents and tasks.
```python
# Starting the task execution process with enhanced feedback
result = crew.kickoff(inputs={'topic': 'AI in healthcare'})
print(result)
```
## Conclusion
Building and activating a crew in CrewAI has evolved with new functionalities. By incorporating verbose mode, memory capabilities, asynchronous task execution, output customization, and language model configuration, your AI team is more equipped than ever to tackle challenges efficiently. The depth of agent backstories and the precision of their objectives enrich collaboration, leading to successful project outcomes.


@@ -0,0 +1,83 @@
---
title: Customizing Agents in CrewAI
description: A comprehensive guide to tailoring agents for specific roles, tasks, and advanced customizations within the CrewAI framework.
---
## Customizable Attributes
Crafting an efficient CrewAI team hinges on the ability to tailor your AI agents dynamically to meet the unique requirements of any project. This section covers the foundational attributes you can customize.
### Key Attributes for Customization
- **Role**: Specifies the agent's job within the crew, such as 'Analyst' or 'Customer Service Rep'.
- **Goal**: Defines what the agent aims to achieve, in alignment with its role and the overarching objectives of the crew.
- **Backstory**: Provides depth to the agent's persona, enriching its motivations and engagements within the crew.
- **Tools**: Represents the capabilities or methods the agent uses to perform tasks, from simple functions to intricate integrations.
## Advanced Customization Options
Beyond the basic attributes, CrewAI allows for deeper customization to enhance an agent's behavior and capabilities significantly.
### Language Model Customization
Agents can be customized with specific language models (`llm`) and function-calling language models (`function_calling_llm`), offering advanced control over their processing and decision-making abilities.
By default, crewAI agents are ReAct agents, but by setting the `function_calling_llm` you can turn them into function-calling agents.
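A minimal sketch of both attributes, assuming OpenAI models served through `langchain_openai` (any LangChain-compatible chat model could be swapped in):
```python
from langchain_openai import ChatOpenAI
from crewai import Agent

agent = Agent(
    role='Research Analyst',
    goal='Provide up-to-date market analysis',
    backstory='An expert analyst with a keen eye for market trends.',
    llm=ChatOpenAI(model="gpt-4", temperature=0.7),          # model driving the agent's reasoning
    function_calling_llm=ChatOpenAI(model="gpt-3.5-turbo")   # model used for tool/function calling
)
```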
### Enabling Memory for Agents
CrewAI supports memory for agents, enabling them to remember past interactions. This feature is critical for tasks requiring awareness of previous contexts or decisions.
## Performance and Debugging Settings
Adjusting an agent's performance and monitoring its operations are crucial for efficient task execution.
### Verbose Mode and RPM Limit
- **Verbose Mode**: Enables detailed logging of an agent's actions, useful for debugging and optimization. Specifically, it provides insights into agent execution processes, aiding in the optimization of performance.
- **RPM Limit**: Sets the maximum number of requests per minute (`max_rpm`), controlling the agent's query frequency to external services.
### Maximum Iterations for Task Execution
The `max_iter` attribute allows users to define the maximum number of iterations an agent can perform for a single task, preventing infinite loops or excessively long executions. The default value is set to 15, providing a balance between thoroughness and efficiency. Once the agent approaches this number, it will do its best to give a good answer.
## Customizing Agents and Tools
Agents are customized by defining their attributes and tools during initialization. Tools are critical for an agent's functionality, enabling them to perform specialized tasks. In this example we will use the crewAI tools package to create a tool for a research analyst agent.
```shell
pip install 'crewai[tools]'
```
### Example: Assigning Tools to an Agent
```python
import os
from crewai import Agent
from crewai_tools import SerperDevTool
# Set API keys for tool initialization
os.environ["OPENAI_API_KEY"] = "Your Key"
os.environ["SERPER_API_KEY"] = "Your Key"
# Initialize a search tool
search_tool = SerperDevTool()
# Initialize the agent with advanced options
agent = Agent(
role='Research Analyst',
goal='Provide up-to-date market analysis',
backstory='An expert analyst with a keen eye for market trends.',
tools=[search_tool],
memory=True,
verbose=True,
max_rpm=10, # Optional: Limit requests to 10 per minute, preventing API abuse
max_iter=5, # Optional: Limit task iterations to 5 before the agent tries to give its best answer
allow_delegation=False
)
```
## Delegation and Autonomy
Controlling an agent's ability to delegate tasks or ask questions is vital for tailoring its autonomy and collaborative dynamics within the crewAI framework. By default, the `allow_delegation` attribute is set to `True`, enabling agents to seek assistance or delegate tasks as needed. This default behavior promotes collaborative problem-solving and efficiency within the crewAI ecosystem.
### Example: Disabling Delegation for an Agent
```python
agent = Agent(
role='Content Writer',
goal='Write engaging content on market trends',
backstory='A seasoned writer with expertise in market analysis.',
allow_delegation=False
)
```
## Conclusion
Customizing agents in CrewAI by setting their roles, goals, backstories, and tools, alongside advanced options like language model customization, memory, performance settings, and delegation preferences, equips a nuanced and capable AI team ready for complex challenges.


@@ -0,0 +1,61 @@
---
title: Implementing the Hierarchical Process in CrewAI
description: A comprehensive guide to understanding and applying the hierarchical process within your CrewAI projects, updated to reflect the latest coding practices and functionalities.
---
## Introduction
The hierarchical process in CrewAI introduces a structured approach to task management, simulating traditional organizational hierarchies for efficient task delegation and execution. This systematic workflow enhances project outcomes by ensuring tasks are handled with optimal efficiency and accuracy.
!!! note "Complexity and Efficiency"
The hierarchical process is designed to leverage advanced models like GPT-4, optimizing token usage while handling complex tasks with greater efficiency.
## Hierarchical Process Overview
By default, tasks in CrewAI are managed through a sequential process. However, adopting a hierarchical approach allows for a clear hierarchy in task management, where a 'manager' agent coordinates the workflow, delegates tasks, and validates outcomes for streamlined and effective execution. This manager agent is automatically created by crewAI so you don't need to worry about it.
### Key Features
- **Task Delegation**: A manager agent allocates tasks among crew members based on their roles and capabilities.
- **Result Validation**: The manager evaluates outcomes to ensure they meet the required standards.
- **Efficient Workflow**: Emulates corporate structures, providing an organized approach to task management.
## Implementing the Hierarchical Process
To utilize the hierarchical process, it's essential to explicitly set the process attribute to `Process.hierarchical`, as the default behavior is `Process.sequential`. Define a crew with a designated manager and establish a clear chain of command.
!!! note "Tools and Agent Assignment"
Assign tools at the agent level to facilitate task delegation and execution by the designated agents under the manager's guidance.
!!! note "Manager LLM Requirement"
Configuring the `manager_llm` parameter is crucial for the hierarchical process. The system requires a manager LLM to be set up for proper function, ensuring tailored decision-making.
```python
from langchain_openai import ChatOpenAI
from crewai import Crew, Process, Agent
# Agents are defined with an optional tools parameter
researcher = Agent(
role='Researcher',
goal='Conduct in-depth analysis',
# tools=[] # This can be optionally specified; defaults to an empty list
)
writer = Agent(
role='Writer',
goal='Create engaging content',
# tools=[] # Optionally specify tools; defaults to an empty list
)
# Establishing the crew with a hierarchical process
project_crew = Crew(
tasks=[...], # Tasks to be delegated and executed under the manager's supervision
agents=[researcher, writer],
manager_llm=ChatOpenAI(temperature=0, model="gpt-4"), # Mandatory for hierarchical process
process=Process.hierarchical # Specifies the hierarchical management approach
)
```
### Workflow in Action
1. **Task Assignment**: The manager assigns tasks strategically, considering each agent's capabilities.
2. **Execution and Review**: Agents complete their tasks, with the manager ensuring quality standards.
3. **Sequential Task Progression**: Despite being a hierarchical process, tasks follow a logical order for smooth progression, facilitated by the manager's oversight.
## Conclusion
Adopting the hierarchical process in crewAI, with the correct configurations and understanding of the system's capabilities, facilitates an organized and efficient approach to project management.


@@ -0,0 +1,94 @@
---
title: Human Input on Execution
description: Comprehensive guide on integrating CrewAI with human input during execution, for complex decision-making processes or when agents need help during complex tasks.
---
# Human Input in Agent Execution
Human input plays a pivotal role in several agent execution scenarios, enabling agents to seek additional information or clarification when necessary. This capability is invaluable in complex decision-making processes or when agents need more details to complete a task effectively.
## Using Human Input with CrewAI
Incorporating human input with CrewAI is straightforward, enhancing the agent's ability to make informed decisions. While the documentation previously mentioned using a "LangChain Tool" and a specific "DuckDuckGoSearchRun" tool from `langchain_community.tools`, it's important to clarify that the integration of such tools should align with the actual capabilities and configurations defined within your `Agent` class setup.
### Example:
```shell
pip install crewai
pip install 'crewai[tools]'
```
```python
import os
from crewai import Agent, Task, Crew
from crewai_tools import SerperDevTool
from langchain.agents import load_tools
os.environ["SERPER_API_KEY"] = "Your Key" # serper.dev API key
os.environ["OPENAI_API_KEY"] = "Your Key"
# Loading Human Tools
human_tools = load_tools(["human"])
search_tool = SerperDevTool()
# Define your agents with roles, goals, and tools
researcher = Agent(
role='Senior Research Analyst',
goal='Uncover cutting-edge developments in AI and data science',
backstory=(
"You are a Senior Research Analyst at a leading tech think tank."
"Your expertise lies in identifying emerging trends and technologies in AI and data science."
"You have a knack for dissecting complex data and presenting actionable insights."
),
verbose=True,
allow_delegation=False,
tools=[search_tool]+human_tools # Passing human tools to the agent
)
writer = Agent(
role='Tech Content Strategist',
goal='Craft compelling content on tech advancements',
backstory=(
"You are a renowned Tech Content Strategist, known for your insightful and engaging articles on technology and innovation."
"With a deep understanding of the tech industry, you transform complex concepts into compelling narratives."
),
verbose=True,
allow_delegation=True
)
# Create tasks for your agents
task1 = Task(
description=(
"Conduct a comprehensive analysis of the latest advancements in AI in 2024."
"Identify key trends, breakthrough technologies, and potential industry impacts."
"Compile your findings in a detailed report."
"Make sure to check with a human if the draft is good before finalizing your answer."
),
expected_output='A comprehensive full report on the latest AI advancements in 2024, leave nothing out',
agent=researcher,
)
task2 = Task(
description=(
"Using the insights from the researcher's report, develop an engaging blog post that highlights the most significant AI advancements."
"Your post should be informative yet accessible, catering to a tech-savvy audience."
"Aim for a narrative that captures the essence of these breakthroughs and their implications for the future."
),
expected_output='A compelling 3 paragraphs blog post formatted as markdown about the latest AI advancements in 2024',
agent=writer
)
# Instantiate your crew with a sequential process
crew = Crew(
agents=[researcher, writer],
tasks=[task1, task2],
verbose=2
)
# Get your crew to work!
result = crew.kickoff()
print("######################")
print(result)
```


@@ -0,0 +1,109 @@
---
title: Connect CrewAI to LLMs
description: Comprehensive guide on integrating CrewAI with various Large Language Models (LLMs), including detailed class attributes and methods.
---
## Connect CrewAI to LLMs
!!! note "Default LLM"
By default, CrewAI uses OpenAI's GPT-4 model for language processing. However, you can configure your agents to use a different model or API. This guide will show you how to connect your agents to different LLMs through environment variables and direct instantiation.
CrewAI offers flexibility in connecting to various LLMs, including local models via [Ollama](https://ollama.ai) and different APIs like Azure. It's compatible with all [LangChain LLM](https://python.langchain.com/docs/integrations/llms/) components, enabling diverse integrations for tailored AI solutions.
## CrewAI Agent Overview
The `Agent` class is the cornerstone for implementing AI solutions in CrewAI. Here's an updated overview reflecting the latest codebase changes:
- **Attributes**:
- `role`: Defines the agent's role within the solution.
- `goal`: Specifies the agent's objective.
- `backstory`: Provides a background story to the agent.
- `llm`: Indicates the Large Language Model the agent uses.
- `function_calling_llm` *Optional*: Turns the ReAct crewAI agent into a function-calling agent.
- `max_iter`: Maximum number of iterations for an agent to execute a task, default is 15.
- `memory`: Enables the agent to retain information during the execution.
- `max_rpm`: Sets the maximum number of requests per minute.
- `verbose`: Enables detailed logging of the agent's execution.
- `allow_delegation`: Allows the agent to delegate tasks to other agents, default is `True`.
- `tools`: Specifies the tools available to the agent for task execution.
- `step_callback`: Provides a callback function to be executed after each step.
```python
import os
from crewai import Agent

# Required
os.environ["OPENAI_MODEL_NAME"]="gpt-4-0125-preview"
# Agent will automatically use the model defined in the environment variable
example_agent = Agent(
role='Local Expert',
goal='Provide insights about the city',
backstory="A knowledgeable local guide.",
verbose=True
)
```
## Ollama Integration
Ollama is preferred for local LLM integration, offering customization and privacy benefits. To integrate Ollama with CrewAI, set the appropriate environment variables as shown below. Note: Detailed Ollama setup is beyond this document's scope, but general guidance is provided.
### Setting Up Ollama
- **Environment Variables Configuration**: To integrate Ollama, set the following environment variables:
```sh
OPENAI_API_BASE='http://localhost:11434/v1'
OPENAI_MODEL_NAME='openhermes' # Adjust based on available model
OPENAI_API_KEY=''
```
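Alternatively, since Ollama exposes an OpenAI-compatible endpoint, the model can be instantiated directly and passed to an agent; a rough sketch, assuming `langchain_openai` is installed and an `openhermes` model has been pulled locally:
```python
from langchain_openai import ChatOpenAI
from crewai import Agent

# Point the OpenAI-compatible client at the local Ollama server
ollama_llm = ChatOpenAI(
    base_url="http://localhost:11434/v1",
    api_key="NA",         # Ollama ignores the key, but the client requires a value
    model="openhermes"    # adjust to whichever model you have pulled
)

local_agent = Agent(
    role='Local Expert',
    goal='Provide insights about the city',
    backstory='A knowledgeable local guide.',
    llm=ollama_llm
)
```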
## OpenAI Compatible API Endpoints
Switch between APIs and models seamlessly using environment variables, supporting platforms like FastChat, LM Studio, and Mistral AI.
### Configuration Examples
#### FastChat
```sh
OPENAI_API_BASE="http://localhost:8001/v1"
OPENAI_MODEL_NAME='oh-2.5m7b-q51'
OPENAI_API_KEY=NA
```
#### LM Studio
```sh
OPENAI_API_BASE="http://localhost:8000/v1"
OPENAI_MODEL_NAME=NA
OPENAI_API_KEY=NA
```
#### Mistral API
```sh
OPENAI_API_KEY=your-mistral-api-key
OPENAI_API_BASE=https://api.mistral.ai/v1
OPENAI_MODEL_NAME="mistral-small"
```
### Azure Open AI Configuration
For Azure OpenAI API integration, set the following environment variables:
```sh
AZURE_OPENAI_VERSION="2022-12-01"
AZURE_OPENAI_DEPLOYMENT=""
AZURE_OPENAI_ENDPOINT=""
AZURE_OPENAI_KEY=""
```
### Example Agent with Azure LLM
```python
import os
from dotenv import load_dotenv
from crewai import Agent
from langchain_openai import AzureChatOpenAI
load_dotenv()
azure_llm = AzureChatOpenAI(
azure_endpoint=os.environ.get("AZURE_OPENAI_ENDPOINT"),
api_key=os.environ.get("AZURE_OPENAI_KEY")
)
azure_agent = Agent(
role='Example Agent',
goal='Demonstrate custom LLM configuration',
backstory='A diligent explorer of GitHub docs.',
llm=azure_llm
)
```
## Conclusion
Integrating CrewAI with different LLMs expands the framework's versatility, allowing for customized, efficient AI solutions across various domains and platforms.

60
docs/how-to/Sequential.md Normal file

@@ -0,0 +1,60 @@
---
title: Using the Sequential Process in crewAI
description: A comprehensive guide to utilizing the sequential process for task execution in crewAI projects.
---
## Introduction
CrewAI offers a flexible framework for executing tasks in a structured manner, supporting both sequential and hierarchical processes. This guide outlines how to effectively implement these processes to ensure efficient task execution and project completion.
## Sequential Process Overview
The sequential process ensures tasks are executed one after the other, following a linear progression. This approach is ideal for projects requiring tasks to be completed in a specific order.
### Key Features
- **Linear Task Flow**: Ensures orderly progression by handling tasks in a predetermined sequence.
- **Simplicity**: Best suited for projects with clear, step-by-step tasks.
- **Easy Monitoring**: Facilitates easy tracking of task completion and project progress.
## Implementing the Sequential Process
Assemble your crew and define tasks in the order they need to be executed.
```python
from crewai import Crew, Process, Agent, Task
# Define your agents
researcher = Agent(
role='Researcher',
goal='Conduct foundational research',
backstory='An experienced researcher with a passion for uncovering insights'
)
analyst = Agent(
role='Data Analyst',
goal='Analyze research findings',
backstory='A meticulous analyst with a knack for uncovering patterns'
)
writer = Agent(
role='Writer',
goal='Draft the final report',
backstory='A skilled writer with a talent for crafting compelling narratives'
)
# Define the tasks in sequence
research_task = Task(description='Gather relevant data...', agent=researcher)
analysis_task = Task(description='Analyze the data...', agent=analyst)
writing_task = Task(description='Compose the report...', agent=writer)
# Form the crew with a sequential process
report_crew = Crew(
agents=[researcher, analyst, writer],
tasks=[research_task, analysis_task, writing_task],
process=Process.sequential
)
```
### Workflow in Action
1. **Initial Task**: In a sequential process, the first agent completes their task and signals completion.
2. **Subsequent Tasks**: Agents pick up their tasks based on the process type, with outcomes of preceding tasks or manager directives guiding their execution.
3. **Completion**: The process concludes once the final task is executed, leading to project completion.
## Conclusion
The sequential process in CrewAI provides a clear, straightforward path for task execution. It's particularly suited for projects requiring a logical progression of tasks, ensuring each step is completed before the next begins, thereby facilitating a cohesive final product.


@@ -1,8 +1,118 @@
<img src='./crewai_logo.png' width='250'/>
<img src='./crew_only_logo.png' width='250' class='mb-10'/>
# Welcome to crewAI Documentation
🤖 Cutting-edge framework for orchestrating role-playing, autonomous AI agents. By fostering collaborative intelligence, CrewAI empowers agents to work together seamlessly, tackling complex tasks.
# crewAI Documentation
<p align="center">
<img src='./crewAI-mindmap.png' />
</p>
Cutting-edge framework for orchestrating role-playing, autonomous AI agents. By fostering collaborative intelligence, CrewAI empowers agents to work together seamlessly, tackling complex tasks.
<div style="display:flex; margin:0 auto; justify-content: center;">
<div style="width:25%">
<h2>Core Concepts</h2>
<ul>
<li>
<a href="./core-concepts/Agents">
Agents
</a>
</li>
<li>
<a href="./core-concepts/Tasks">
Tasks
</a>
</li>
<li>
<a href="./core-concepts/Tools">
Tools
</a>
</li>
<li>
<a href="./core-concepts/Processes">
Processes
</a>
</li>
<li>
<a href="./core-concepts/Crews">
Crews
</a>
</li>
</ul>
</div>
<div style="width:30%">
<h2>How-To Guides</h2>
<ul>
<li>
<a href="./how-to/Creating-a-Crew-and-kick-it-off">
Getting Started
</a>
</li>
<li>
<a href="./how-to/Create-Custom-Tools">
Create Custom Tools
</a>
</li>
<li>
<a href="./how-to/Sequential">
Using Sequential Process
</a>
</li>
<li>
<a href="./how-to/Hierarchical">
Using Hierarchical Process
</a>
</li>
<li>
<a href="./how-to/LLM-Connections">
Connecting to LLMs
</a>
</li>
<li>
<a href="./how-to/Customizing-Agents">
Customizing Agents
</a>
</li>
<li>
<a href="./how-to/Human-Input-on-Execution">
Human Input on Execution
</a>
</li>
</ul>
</div>
<div style="width:30%">
<h2>Examples</h2>
<ul>
<li>
<a target='_blank' href="https://github.com/joaomdmoura/crewAI-examples/tree/main/prep-for-a-meeting">
Prepare for meetings
</a>
</li>
<li>
<a target='_blank' href="https://github.com/joaomdmoura/crewAI-examples/tree/main/trip_planner">
Trip Planner Crew
</a>
</li>
<li>
<a target='_blank' href="https://github.com/joaomdmoura/crewAI-examples/tree/main/instagram_post">
Create Instagram Post
</a>
</li>
<li>
<a target='_blank' href="https://github.com/joaomdmoura/crewAI-examples/tree/main/stock_analysis">
Stock Analysis
</a>
</li>
<li>
<a target='_blank' href="https://github.com/joaomdmoura/crewAI-examples/tree/main/game-builder-crew">
Game Generator
</a>
</li>
<li>
<a target='_blank' href="https://github.com/joaomdmoura/crewAI-examples/tree/main/CrewAI-LangGraph">
Drafting emails with LangGraph
</a>
</li>
<li>
<a target='_blank' href="https://github.com/joaomdmoura/crewAI-examples/tree/main/landing_page_generator">
Landing Page Generator
</a>
</li>
</ul>
</div>
</div>

3
docs/postcss.config.js Normal file

@@ -0,0 +1,3 @@
module.exports = {
plugins: [require('tailwindcss'), require('autoprefixer')]
}


@@ -0,0 +1,3 @@
.md-typeset .admonition-title {
margin-bottom: 10px;
}

565
docs/stylesheets/output.css Normal file

@@ -0,0 +1,565 @@
/*
! tailwindcss v3.4.1 | MIT License | https://tailwindcss.com
*/
/*
1. Prevent padding and border from affecting element width. (https://github.com/mozdevs/cssremedy/issues/4)
2. Allow adding a border to an element by just adding a border-width. (https://github.com/tailwindcss/tailwindcss/pull/116)
*/
*,
::before,
::after {
box-sizing: border-box;
/* 1 */
border-width: 0;
/* 2 */
border-style: solid;
/* 2 */
border-color: #e5e7eb;
/* 2 */
}
::before,
::after {
--tw-content: '';
}
/*
1. Use a consistent sensible line-height in all browsers.
2. Prevent adjustments of font size after orientation changes in iOS.
3. Use a more readable tab size.
4. Use the user's configured `sans` font-family by default.
5. Use the user's configured `sans` font-feature-settings by default.
6. Use the user's configured `sans` font-variation-settings by default.
7. Disable tap highlights on iOS
*/
html,
:host {
line-height: 1.5;
/* 1 */
-webkit-text-size-adjust: 100%;
/* 2 */
-moz-tab-size: 4;
/* 3 */
-o-tab-size: 4;
tab-size: 4;
/* 3 */
font-family: ui-sans-serif, system-ui, sans-serif, "Apple Color Emoji", "Segoe UI Emoji", "Segoe UI Symbol", "Noto Color Emoji";
/* 4 */
font-feature-settings: normal;
/* 5 */
font-variation-settings: normal;
/* 6 */
-webkit-tap-highlight-color: transparent;
/* 7 */
}
/*
1. Remove the margin in all browsers.
2. Inherit line-height from `html` so users can set them as a class directly on the `html` element.
*/
body {
margin: 0;
/* 1 */
line-height: inherit;
/* 2 */
}
/*
1. Add the correct height in Firefox.
2. Correct the inheritance of border color in Firefox. (https://bugzilla.mozilla.org/show_bug.cgi?id=190655)
3. Ensure horizontal rules are visible by default.
*/
hr {
height: 0;
/* 1 */
color: inherit;
/* 2 */
border-top-width: 1px;
/* 3 */
}
/*
Add the correct text decoration in Chrome, Edge, and Safari.
*/
abbr:where([title]) {
-webkit-text-decoration: underline dotted;
text-decoration: underline dotted;
}
/*
Remove the default font size and weight for headings.
*/
h1,
h2,
h3,
h4,
h5,
h6 {
font-size: inherit;
font-weight: inherit;
}
/*
Reset links to optimize for opt-in styling instead of opt-out.
*/
a {
color: inherit;
text-decoration: inherit;
}
/*
Add the correct font weight in Edge and Safari.
*/
b,
strong {
font-weight: bolder;
}
/*
1. Use the user's configured `mono` font-family by default.
2. Use the user's configured `mono` font-feature-settings by default.
3. Use the user's configured `mono` font-variation-settings by default.
4. Correct the odd `em` font sizing in all browsers.
*/
code,
kbd,
samp,
pre {
font-family: ui-monospace, SFMono-Regular, Menlo, Monaco, Consolas, "Liberation Mono", "Courier New", monospace;
/* 1 */
font-feature-settings: normal;
/* 2 */
font-variation-settings: normal;
/* 3 */
font-size: 1em;
/* 4 */
}
/*
Add the correct font size in all browsers.
*/
small {
font-size: 80%;
}
/*
Prevent `sub` and `sup` elements from affecting the line height in all browsers.
*/
sub,
sup {
font-size: 75%;
line-height: 0;
position: relative;
vertical-align: baseline;
}
sub {
bottom: -0.25em;
}
sup {
top: -0.5em;
}
/*
1. Remove text indentation from table contents in Chrome and Safari. (https://bugs.chromium.org/p/chromium/issues/detail?id=999088, https://bugs.webkit.org/show_bug.cgi?id=201297)
2. Correct table border color inheritance in all Chrome and Safari. (https://bugs.chromium.org/p/chromium/issues/detail?id=935729, https://bugs.webkit.org/show_bug.cgi?id=195016)
3. Remove gaps between table borders by default.
*/
table {
text-indent: 0;
/* 1 */
border-color: inherit;
/* 2 */
border-collapse: collapse;
/* 3 */
}
/*
1. Change the font styles in all browsers.
2. Remove the margin in Firefox and Safari.
3. Remove default padding in all browsers.
*/
button,
input,
optgroup,
select,
textarea {
font-family: inherit;
/* 1 */
font-feature-settings: inherit;
/* 1 */
font-variation-settings: inherit;
/* 1 */
font-size: 100%;
/* 1 */
font-weight: inherit;
/* 1 */
line-height: inherit;
/* 1 */
color: inherit;
/* 1 */
margin: 0;
/* 2 */
padding: 0;
/* 3 */
}
/*
Remove the inheritance of text transform in Edge and Firefox.
*/
button,
select {
text-transform: none;
}
/*
1. Correct the inability to style clickable types in iOS and Safari.
2. Remove default button styles.
*/
button,
[type='button'],
[type='reset'],
[type='submit'] {
-webkit-appearance: button;
/* 1 */
background-color: transparent;
/* 2 */
background-image: none;
/* 2 */
}
/*
Use the modern Firefox focus style for all focusable elements.
*/
:-moz-focusring {
outline: auto;
}
/*
Remove the additional `:invalid` styles in Firefox. (https://github.com/mozilla/gecko-dev/blob/2f9eacd9d3d995c937b4251a5557d95d494c9be1/layout/style/res/forms.css#L728-L737)
*/
:-moz-ui-invalid {
box-shadow: none;
}
/*
Add the correct vertical alignment in Chrome and Firefox.
*/
progress {
vertical-align: baseline;
}
/*
Correct the cursor style of increment and decrement buttons in Safari.
*/
::-webkit-inner-spin-button,
::-webkit-outer-spin-button {
height: auto;
}
/*
1. Correct the odd appearance in Chrome and Safari.
2. Correct the outline style in Safari.
*/
[type='search'] {
-webkit-appearance: textfield;
/* 1 */
outline-offset: -2px;
/* 2 */
}
/*
Remove the inner padding in Chrome and Safari on macOS.
*/
::-webkit-search-decoration {
-webkit-appearance: none;
}
/*
1. Correct the inability to style clickable types in iOS and Safari.
2. Change font properties to `inherit` in Safari.
*/
::-webkit-file-upload-button {
-webkit-appearance: button;
/* 1 */
font: inherit;
/* 2 */
}
/*
Add the correct display in Chrome and Safari.
*/
summary {
display: list-item;
}
/*
Removes the default spacing and border for appropriate elements.
*/
blockquote,
dl,
dd,
h1,
h2,
h3,
h4,
h5,
h6,
hr,
figure,
p,
pre {
margin: 0;
}
fieldset {
margin: 0;
padding: 0;
}
legend {
padding: 0;
}
ol,
ul,
menu {
list-style: none;
margin: 0;
padding: 0;
}
/*
Reset default styling for dialogs.
*/
dialog {
padding: 0;
}
/*
Prevent resizing textareas horizontally by default.
*/
textarea {
resize: vertical;
}
/*
1. Reset the default placeholder opacity in Firefox. (https://github.com/tailwindlabs/tailwindcss/issues/3300)
2. Set the default placeholder color to the user's configured gray 400 color.
*/
input::-moz-placeholder, textarea::-moz-placeholder {
opacity: 1;
/* 1 */
color: #9ca3af;
/* 2 */
}
input::placeholder,
textarea::placeholder {
opacity: 1;
/* 1 */
color: #9ca3af;
/* 2 */
}
/*
Set the default cursor for buttons.
*/
button,
[role="button"] {
cursor: pointer;
}
/*
Make sure disabled buttons don't get the pointer cursor.
*/
:disabled {
cursor: default;
}
/*
1. Make replaced elements `display: block` by default. (https://github.com/mozdevs/cssremedy/issues/14)
2. Add `vertical-align: middle` to align replaced elements more sensibly by default. (https://github.com/jensimmons/cssremedy/issues/14#issuecomment-634934210)
This can trigger a poorly considered lint error in some tools but is included by design.
*/
img,
svg,
video,
canvas,
audio,
iframe,
embed,
object {
display: block;
/* 1 */
vertical-align: middle;
/* 2 */
}
/*
Constrain images and videos to the parent width and preserve their intrinsic aspect ratio. (https://github.com/mozdevs/cssremedy/issues/14)
*/
img,
video {
max-width: 100%;
height: auto;
}
/* Make elements with the HTML hidden attribute stay hidden by default */
[hidden] {
display: none;
}
*, ::before, ::after {
--tw-border-spacing-x: 0;
--tw-border-spacing-y: 0;
--tw-translate-x: 0;
--tw-translate-y: 0;
--tw-rotate: 0;
--tw-skew-x: 0;
--tw-skew-y: 0;
--tw-scale-x: 1;
--tw-scale-y: 1;
--tw-pan-x: ;
--tw-pan-y: ;
--tw-pinch-zoom: ;
--tw-scroll-snap-strictness: proximity;
--tw-gradient-from-position: ;
--tw-gradient-via-position: ;
--tw-gradient-to-position: ;
--tw-ordinal: ;
--tw-slashed-zero: ;
--tw-numeric-figure: ;
--tw-numeric-spacing: ;
--tw-numeric-fraction: ;
--tw-ring-inset: ;
--tw-ring-offset-width: 0px;
--tw-ring-offset-color: #fff;
--tw-ring-color: rgb(59 130 246 / 0.5);
--tw-ring-offset-shadow: 0 0 #0000;
--tw-ring-shadow: 0 0 #0000;
--tw-shadow: 0 0 #0000;
--tw-shadow-colored: 0 0 #0000;
--tw-blur: ;
--tw-brightness: ;
--tw-contrast: ;
--tw-grayscale: ;
--tw-hue-rotate: ;
--tw-invert: ;
--tw-saturate: ;
--tw-sepia: ;
--tw-drop-shadow: ;
--tw-backdrop-blur: ;
--tw-backdrop-brightness: ;
--tw-backdrop-contrast: ;
--tw-backdrop-grayscale: ;
--tw-backdrop-hue-rotate: ;
--tw-backdrop-invert: ;
--tw-backdrop-opacity: ;
--tw-backdrop-saturate: ;
--tw-backdrop-sepia: ;
}
::backdrop {
--tw-border-spacing-x: 0;
--tw-border-spacing-y: 0;
--tw-translate-x: 0;
--tw-translate-y: 0;
--tw-rotate: 0;
--tw-skew-x: 0;
--tw-skew-y: 0;
--tw-scale-x: 1;
--tw-scale-y: 1;
--tw-pan-x: ;
--tw-pan-y: ;
--tw-pinch-zoom: ;
--tw-scroll-snap-strictness: proximity;
--tw-gradient-from-position: ;
--tw-gradient-via-position: ;
--tw-gradient-to-position: ;
--tw-ordinal: ;
--tw-slashed-zero: ;
--tw-numeric-figure: ;
--tw-numeric-spacing: ;
--tw-numeric-fraction: ;
--tw-ring-inset: ;
--tw-ring-offset-width: 0px;
--tw-ring-offset-color: #fff;
--tw-ring-color: rgb(59 130 246 / 0.5);
--tw-ring-offset-shadow: 0 0 #0000;
--tw-ring-shadow: 0 0 #0000;
--tw-shadow: 0 0 #0000;
--tw-shadow-colored: 0 0 #0000;
--tw-blur: ;
--tw-brightness: ;
--tw-contrast: ;
--tw-grayscale: ;
--tw-hue-rotate: ;
--tw-invert: ;
--tw-saturate: ;
--tw-sepia: ;
--tw-drop-shadow: ;
--tw-backdrop-blur: ;
--tw-backdrop-brightness: ;
--tw-backdrop-contrast: ;
--tw-backdrop-grayscale: ;
--tw-backdrop-hue-rotate: ;
--tw-backdrop-invert: ;
--tw-backdrop-opacity: ;
--tw-backdrop-saturate: ;
--tw-backdrop-sepia: ;
}
.mb-10 {
margin-bottom: 2.5rem;
}
.transform {
transform: translate(var(--tw-translate-x), var(--tw-translate-y)) rotate(var(--tw-rotate)) skewX(var(--tw-skew-x)) skewY(var(--tw-skew-y)) scaleX(var(--tw-scale-x)) scaleY(var(--tw-scale-y));
}
.leading-3 {
line-height: .75rem;
}
.transition {
transition-property: color, background-color, border-color, text-decoration-color, fill, stroke, opacity, box-shadow, transform, filter, -webkit-backdrop-filter;
transition-property: color, background-color, border-color, text-decoration-color, fill, stroke, opacity, box-shadow, transform, filter, backdrop-filter;
transition-property: color, background-color, border-color, text-decoration-color, fill, stroke, opacity, box-shadow, transform, filter, backdrop-filter, -webkit-backdrop-filter;
transition-timing-function: cubic-bezier(0.4, 0, 0.2, 1);
transition-duration: 150ms;
}


@@ -0,0 +1,3 @@
@import 'tailwindcss/base';
@import 'tailwindcss/components';
@import 'tailwindcss/utilities';

9
docs/tailwind.config.js Normal file

@@ -0,0 +1,9 @@
/** @type {import('tailwindcss').Config} */
module.exports = {
content: ["./**/*.md"],
theme: {
extend: {},
},
plugins: [],
}


@@ -0,0 +1,27 @@
---
title: Telemetry
description: Understanding the telemetry data collected by CrewAI and how it contributes to the enhancement of the library.
---
## Telemetry
CrewAI utilizes anonymous telemetry to gather usage statistics with the primary goal of enhancing the library. Our focus is on improving and developing the features, integrations, and tools most utilized by our users.
It's pivotal to understand that **NO data is collected** concerning prompts, task descriptions, agents' backstories or goals, usage of tools, API calls, responses, any data processed by the agents, or secrets and environment variables, except under the conditions described below. When the `share_crew` feature is enabled, detailed data including task descriptions, agents' backstories or goals, and other specific attributes is collected to provide deeper insights while respecting user privacy.
### Data Collected Includes:
- **Version of CrewAI**: Assessing the adoption rate of our latest version helps us understand user needs and guide our updates.
- **Python Version**: Identifying the Python versions our users operate with assists in prioritizing our support efforts for these versions.
- **General OS Information**: Details like the number of CPUs and the operating system type (macOS, Windows, Linux) enable us to focus our development on the most used operating systems and explore the potential for OS-specific features.
- **Number of Agents and Tasks in a Crew**: Ensures our internal testing mirrors real-world scenarios, helping us guide users towards best practices.
- **Crew Process Utilization**: Understanding how crews are utilized aids in directing our development focus.
- **Memory and Delegation Use by Agents**: Insights into how these features are used help evaluate their effectiveness and future.
- **Task Execution Mode**: Knowing whether tasks are executed in parallel or sequentially influences our emphasis on enhancing parallel execution capabilities.
- **Language Model Utilization**: Supports our goal of improving support for the language models most popular among our users.
- **Roles of Agents within a Crew**: Understanding the various roles agents play aids in crafting better tools, integrations, and examples.
- **Tool Usage**: Identifying which tools are most frequently used allows us to prioritize improvements in those areas.
### Opt-In Further Telemetry Sharing
Users can choose to share their complete telemetry data by enabling the `share_crew` attribute to `True` in their crew configurations. This opt-in approach respects user privacy and aligns with data protection standards by ensuring users have control over their data sharing preferences. Enabling `share_crew` results in the collection of detailed `crew` and `task` execution data, including `goal`, `backstory`, `context`, and `output` of tasks. This enables a deeper insight into usage patterns while respecting the user's choice to share.
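A minimal sketch of opting in, assuming agents and tasks defined as in the earlier guides:
```python
from crewai import Crew

crew = Crew(
    agents=[researcher, writer],        # defined elsewhere
    tasks=[research_task, write_task],  # defined elsewhere
    share_crew=True                     # opt in to sharing the full telemetry data described above
)
```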
### Updates and Revisions
We are committed to maintaining the accuracy and transparency of our documentation. Regular reviews and updates are performed to ensure our documentation accurately reflects the latest developments of our codebase and telemetry practices. Users are encouraged to review this section for the most current information on our data collection practices and how they contribute to the improvement of CrewAI.


@@ -0,0 +1,37 @@
# CSVSearchTool
!!! note "Depend on OpenAI"
All RAG tools at the moment can only use openAI to generate embeddings, we are working on adding support for other providers.
!!! note "Experimental"
We are still working on improving tools, so there might be unexpected behavior or changes in the future.
## Description
This tool is used to perform a RAG (Retrieval-Augmented Generation) search within a CSV file's content. It allows users to semantically search for queries in the content of a specified CSV file. This feature is particularly useful for extracting information from large CSV datasets where traditional search methods might be inefficient. All tools with "Search" in their name, including CSVSearchTool, are RAG tools designed for searching different sources of data.
## Installation
Install the crewai_tools package
```shell
pip install 'crewai[tools]'
```
## Example
```python
from crewai_tools import CSVSearchTool
# Initialize the tool with a specific CSV file. This setup allows the agent to only search the given CSV file.
tool = CSVSearchTool(csv='path/to/your/csvfile.csv')
# OR
# Initialize the tool without a specific CSV file. Agent will need to provide the CSV path at runtime.
tool = CSVSearchTool()
```
## Arguments
- `csv` : The path to the CSV file you want to search. This is a mandatory argument if the tool was initialized without a specific CSV file; otherwise, it is optional.
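As with the other crewai_tools utilities, the CSVSearchTool is typically handed to an agent through its `tools` list rather than called directly. A minimal sketch follows, assuming a hypothetical agent and CSV path:
```python
from crewai import Agent
from crewai_tools import CSVSearchTool

# Hypothetical CSV path, used for illustration only
csv_tool = CSVSearchTool(csv='path/to/your/csvfile.csv')

# The agent gains access to the tool through its `tools` list
data_analyst = Agent(
    role='Data Analyst',
    goal='Answer questions about the dataset',
    backstory='An analyst who digs through tabular data for insights.',
    tools=[csv_tool],
)
```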

View File

@@ -0,0 +1,35 @@
# DOCXSearchTool
!!! note "Depend on OpenAI"
All RAG tools at the moment can only use openAI to generate embeddings, we are working on adding support for other providers.
!!! note "Experimental"
We are still working on improving tools, so there might be unexpected behavior or changes in the future.
## Description
The DOCXSearchTool is a RAG tool designed for semantic searching within DOCX documents. It enables users to effectively search and extract relevant information from DOCX files using query-based searches. This tool is invaluable for data analysis, information management, and research tasks, streamlining the process of finding specific information within large document collections.
## Installation
Install the crewai_tools package by running the following command in your terminal:
```shell
pip install 'crewai[tools]'
```
## Example
The following example demonstrates initializing the DOCXSearchTool to search within any DOCX file's content or with a specific DOCX file path.
```python
from crewai_tools import DOCXSearchTool
# Initialize the tool to search within any DOCX file's content
tool = DOCXSearchTool()
# OR
# Initialize the tool with a specific DOCX file, so the agent can only search the content of the specified DOCX file
tool = DOCXSearchTool(docx='path/to/your/document.docx')
```
## Arguments
- `docx`: An optional file path to a specific DOCX document you wish to search. If not provided during initialization, the tool allows for later specification of any DOCX file's content path for searching.

View File

@@ -0,0 +1,36 @@
# DirectoryReadTool
!!! note "Experimental"
We are still working on improving tools, so there might be unexpected behavior or changes in the future.
## Description
The DirectoryReadTool is a highly efficient utility designed for the comprehensive listing of directory contents. It recursively navigates through the specified directory, providing users with a detailed enumeration of all files, including those nested within subdirectories. This tool is indispensable for tasks requiring a thorough inventory of directory structures or for validating the organization of files within directories.
## Installation
Install the `crewai_tools` package to use the DirectoryReadTool in your project. If you haven't added this package to your environment, you can easily install it with pip using the following command:
```shell
pip install 'crewai[tools]'
```
This installs the latest version of the `crewai_tools` package, allowing access to the DirectoryReadTool and other utilities.
## Example
The DirectoryReadTool is simple to use. The code snippet below shows how to set up and use the tool to list the contents of a specified directory:
```python
from crewai_tools import DirectoryReadTool
# Initialize the tool so the agent can read any directory's content it learns about during execution
tool = DirectoryReadTool()
# OR
# Initialize the tool with a specific directory, so the agent can only read the content of the specified directory
tool = DirectoryReadTool(directory='/path/to/your/directory')
```
## Arguments
The DirectoryReadTool requires minimal configuration for use. The essential argument for this tool is as follows:
- `directory`: **Optional.** An argument that specifies the path to the directory whose contents you wish to list. It accepts both absolute and relative paths, guiding the tool to the desired directory for content listing.

View File

@@ -0,0 +1,33 @@
# DirectorySearchTool
!!! note "Depend on OpenAI"
All RAG tools at the moment can only use openAI to generate embeddings, we are working on adding support for other providers.
!!! note "Experimental"
We are still working on improving tools, so there might be unexpected behavior or changes in the future.
## Description
This tool is designed to perform a semantic search for queries within the content of a specified directory. Utilizing the RAG (Retrieval-Augmented Generation) methodology, it offers a powerful means to semantically navigate through the files of a given directory. The tool can be dynamically set to search any directory specified at runtime or can be pre-configured to search within a specific directory upon initialization.
## Installation
To start using the DirectorySearchTool, you need to install the crewai_tools package. Execute the following command in your terminal:
```shell
pip install 'crewai[tools]'
```
## Example
The following examples demonstrate how to initialize the DirectorySearchTool for different use cases and how to perform a search:
```python
from crewai_tools import DirectorySearchTool
# To enable searching within any specified directory at runtime
tool = DirectorySearchTool()
# Alternatively, to restrict searches to a specific directory
tool = DirectorySearchTool(directory='/path/to/directory')
```
## Arguments
- `directory` : This string argument specifies the directory within which to search. It is mandatory if the tool has not been initialized with a directory; otherwise, the tool will only search within the initialized directory.

View File

@@ -0,0 +1,32 @@
# FileReadTool
!!! note "Experimental"
We are still working on improving tools, so there might be unexpected behavior or changes in the future.
## Description
The FileReadTool is a versatile component of the crewai_tools package, designed to streamline the process of reading and retrieving content from files. It is particularly useful in scenarios such as batch text file processing, runtime configuration file reading, and data importation for analytics. This tool supports various text-based file formats including `.txt`, `.csv`, `.json` and more, and adapts its functionality based on the file type, for instance, converting JSON content into a Python dictionary for easy use.
## Installation
Install the crewai_tools package to use the FileReadTool in your projects:
```shell
pip install 'crewai[tools]'
```
## Example
To get started with the FileReadTool:
```python
from crewai_tools import FileReadTool
# Initialize the tool to read any file the agent knows about or learns the path to during execution
file_read_tool = FileReadTool()
# OR
# Initialize the tool with a specific file path, so the agent can only read the content of the specified file
file_read_tool = FileReadTool(file_path='path/to/your/file.txt')
```
## Arguments
- `file_path`: The path to the file you want to read. It accepts both absolute and relative paths. Ensure the file exists and you have the necessary permissions to access it.

View File

@@ -0,0 +1,42 @@
# GitHubSearchTool
!!! note "Depend on OpenAI"
All RAG tools at the moment can only use openAI to generate embeddings, we are working on adding support for other providers.
!!! note "Experimental"
We are still working on improving tools, so there might be unexpected behavior or changes in the future.
## Description
The GitHubSearchTool is a RAG (Retrieval-Augmented Generation) tool specifically designed for conducting semantic searches within GitHub repositories. Utilizing advanced semantic search capabilities, it sifts through code, pull requests, issues, and repositories, making it an essential tool for developers, researchers, or anyone in need of precise information from GitHub.
## Installation
To use the GitHubSearchTool, first ensure the crewai_tools package is installed in your Python environment:
```shell
pip install 'crewai[tools]'
```
This command installs the necessary package to run the GitHubSearchTool along with any other tools included in the crewai_tools package.
## Example
Here's how you can use the GitHubSearchTool to perform semantic searches within a GitHub repository:
```python
from crewai_tools import GitHubSearchTool
# Initialize the tool for semantic searches within a specific GitHub repository
tool = GitHubSearchTool(
github_repo='https://github.com/example/repo',
content_types=['code', 'issue'] # Options: code, repo, pr, issue
)
# OR
# Initialize the tool without a specific repository, so the agent can search any repository it learns about during its execution
tool = GitHubSearchTool(
content_types=['code', 'issue'] # Options: code, repo, pr, issue
)
```
## Arguments
- `github_repo` : The URL of the GitHub repository where the search will be conducted. If provided at initialization, the search is confined to that repository; otherwise, the agent must specify the target repository at runtime.
- `content_types` : Specifies the types of content to include in your search. You must provide a list of content types from the following options: `code` for searching within the code, `repo` for searching within the repository's general information, `pr` for searching within pull requests, and `issue` for searching within issues. This field is mandatory and allows tailoring the search to specific content types within the GitHub repository.

View File

@@ -0,0 +1,33 @@
# JSONSearchTool
!!! note "Depend on OpenAI"
All RAG tools at the moment can only use openAI to generate embeddings, we are working on adding support for other providers.
!!! note "Experimental"
We are still working on improving tools, so there might be unexpected behavior or changes in the future.
## Description
This tool is used to perform a RAG search within a JSON file's content. It allows users to initiate a search with a specific JSON path, focusing the search operation within that particular JSON file. If the path is provided at initialization, the tool restricts its search scope to the specified JSON file, thereby enhancing the precision of search results.
## Installation
Install the crewai_tools package by executing the following command in your terminal:
```shell
pip install 'crewai[tools]'
```
## Example
Below are examples demonstrating how to use the JSONSearchTool for searching within JSON files. You can either search any JSON content or restrict the search to a specific JSON file.
```python
from crewai_tools import JSONSearchTool
# Example 1: Initialize the tool for a general search across any JSON content. This is useful when the path is known or can be discovered during execution.
tool = JSONSearchTool()
# Example 2: Initialize the tool with a specific JSON path, limiting the search to a particular JSON file.
tool = JSONSearchTool(json_path='./path/to/your/file.json')
```
## Arguments
- `json_path` (str): An optional argument that defines the path to the JSON file to be searched. If provided at initialization, the search is restricted to that file; otherwise, the agent must supply a JSON path at runtime.

View File

@@ -0,0 +1,35 @@
# MDXSearchTool
!!! note "Depend on OpenAI"
All RAG tools at the moment can only use openAI to generate embeddings, we are working on adding support for other providers.
!!! note "Experimental"
We are still working on improving tools, so there might be unexpected behavior or changes in the future.
## Description
The MDXSearchTool, a key component of the `crewai_tools` package, is a RAG tool designed for semantic searching within MDX file content. It enables researchers and analysts to query and extract relevant information from MDX documents efficiently, streamlining the process of acquiring, reading, and organizing that content.
## Installation
To utilize the MDX Search Tool, ensure the `crewai_tools` package is installed. If not already present, install it using the following command:
```shell
pip install 'crewai[tools]'
```
## Example
Configuring and using the MDXSearchTool involves setting up the required environment variables (such as the OpenAI API key used for embeddings) and utilizing the tool within a crewAI project. Here's a simple example:
```python
from crewai_tools import MDXSearchTool
# Initialize the tool so the agent can search any MDX content it learns about during its execution
tool = MDXSearchTool()
# OR
# Initialize the tool with a specific MDX file path for exclusive search within that document
tool = MDXSearchTool(mdx='path/to/your/document.mdx')
```
## Arguments
- `mdx`: **Optional.** The MDX file path for the search. Can be provided at initialization.

View File

@@ -0,0 +1,35 @@
# PDFSearchTool
!!! note "Depend on OpenAI"
All RAG tools at the moment can only use openAI to generate embeddings, we are working on adding support for other providers.
!!! note "Experimental"
We are still working on improving tools, so there might be unexpected behavior or changes in the future.
## Description
The PDFSearchTool is a RAG tool designed for semantic searches within PDF content. It allows for inputting a search query and a PDF document, leveraging advanced search techniques to find relevant content efficiently. This capability makes it especially useful for extracting specific information from large PDF files quickly.
## Installation
To get started with the PDFSearchTool, first, ensure the crewai_tools package is installed with the following command:
```shell
pip install 'crewai[tools]'
```
## Example
Here's how to use the PDFSearchTool to search within a PDF document:
```python
from crewai_tools import PDFSearchTool
# Initialize the tool allowing for any PDF content search if the path is provided during execution
tool = PDFSearchTool()
# OR
# Initialize the tool with a specific PDF path for exclusive search within that document
tool = PDFSearchTool(pdf='path/to/your/document.pdf')
```
## Arguments
- `pdf`: **Optional.** The PDF path for the search. Can be provided at initialization or within the `run` method's arguments. If provided at initialization, the tool confines its search to the specified document.
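Because the `pdf` path can also be supplied through the `run` method, the tool can be invoked directly, as sketched below; note that the `search_query` keyword name is an assumption and may differ between crewai_tools versions:
```python
from crewai_tools import PDFSearchTool

# Hypothetical PDF path, used for illustration only
tool = PDFSearchTool(pdf='path/to/your/document.pdf')

# Direct invocation sketch; the `search_query` keyword is an assumption
# and may vary across crewai_tools versions.
result = tool.run(search_query='What does the document say about pricing?')
print(result)
```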

Some files were not shown because too many files have changed in this diff.