Compare commits: gui/fix-ty...bugfix-pyt
1,170 commits
(Commit list a2224bbe18 … 0057ed6786 — the Author, Date, and message columns were not captured.)
.editorconfig (new file, 14 lines)
@@ -0,0 +1,14 @@
# .editorconfig
root = true

# All files
[*]
charset = utf-8
end_of_line = lf
insert_final_newline = true
trim_trailing_whitespace = true

# Python files
[*.py]
indent_style = space
indent_size = 2
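For illustration only (not part of this diff): under the `[*.py]` rules above, an EditorConfig-aware editor saves Python files with two-space indents, LF line endings, and a final newline. A tiny conforming snippet:

```python
# Hypothetical file formatted per the .editorconfig above:
# UTF-8, LF line endings, trailing whitespace trimmed, final newline kept.
def greet(name):
  return f"Hello, {name}!"  # two-space indent per indent_size = 2

print(greet("CrewAI"))
```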
.github/ISSUE_TEMPLATE/bug_report.yml (vendored, new file, 115 lines)
@@ -0,0 +1,115 @@
name: Bug report
description: Create a report to help us improve CrewAI
title: "[BUG]"
labels: ["bug"]
assignees: []
body:
  - type: textarea
    id: description
    attributes:
      label: Description
      description: Provide a clear and concise description of what the bug is.
    validations:
      required: true
  - type: textarea
    id: steps-to-reproduce
    attributes:
      label: Steps to Reproduce
      description: Provide a step-by-step process to reproduce the behavior.
      placeholder: |
        1. Go to '...'
        2. Click on '....'
        3. Scroll down to '....'
        4. See error
    validations:
      required: true
  - type: textarea
    id: expected-behavior
    attributes:
      label: Expected behavior
      description: A clear and concise description of what you expected to happen.
    validations:
      required: true
  - type: textarea
    id: screenshots-code
    attributes:
      label: Screenshots/Code snippets
      description: If applicable, add screenshots or code snippets to help explain your problem.
    validations:
      required: true
  - type: dropdown
    id: os
    attributes:
      label: Operating System
      description: Select the operating system you're using
      options:
        - Ubuntu 20.04
        - Ubuntu 22.04
        - Ubuntu 24.04
        - macOS Catalina
        - macOS Big Sur
        - macOS Monterey
        - macOS Ventura
        - macOS Sonoma
        - Windows 10
        - Windows 11
        - Other (specify in additional context)
    validations:
      required: true
  - type: dropdown
    id: python-version
    attributes:
      label: Python Version
      description: Version of Python your Crew is running on
      options:
        - '3.10'
        - '3.11'
        - '3.12'
    validations:
      required: true
  - type: input
    id: crewai-version
    attributes:
      label: crewAI Version
      description: What version of CrewAI are you using?
    validations:
      required: true
  - type: input
    id: crewai-tools-version
    attributes:
      label: crewAI Tools Version
      description: What version of CrewAI Tools are you using?
    validations:
      required: true
  - type: dropdown
    id: virtual-environment
    attributes:
      label: Virtual Environment
      description: What virtual environment are you running your crew in?
      options:
        - Venv
        - Conda
        - Poetry
    validations:
      required: true
  - type: textarea
    id: evidence
    attributes:
      label: Evidence
      description: Include relevant information, logs, or error messages. These can be screenshots.
    validations:
      required: true
  - type: textarea
    id: possible-solution
    attributes:
      label: Possible Solution
      description: Have a solution in mind? Please suggest it here, or write "None".
    validations:
      required: true
  - type: textarea
    id: additional-context
    attributes:
      label: Additional context
      description: Add any other context about the problem here.
    validations:
      required: true
.github/ISSUE_TEMPLATE/config.yml (vendored, new file, 1 line)
@@ -0,0 +1 @@
blank_issues_enabled: false
.github/ISSUE_TEMPLATE/feature_request.yml (vendored, new file, 65 lines)
@@ -0,0 +1,65 @@
name: Feature request
description: Suggest a new feature for CrewAI
title: "[FEATURE]"
labels: ["feature-request"]
assignees: []
body:
  - type: markdown
    attributes:
      value: |
        Thanks for taking the time to fill out this feature request!
  - type: dropdown
    id: feature-area
    attributes:
      label: Feature Area
      description: Which area of CrewAI does this feature primarily relate to?
      options:
        - Core functionality
        - Agent capabilities
        - Task management
        - Integration with external tools
        - Performance optimization
        - Documentation
        - Other (please specify in additional context)
    validations:
      required: true
  - type: textarea
    id: problem
    attributes:
      label: Is your feature request related to an existing bug? Please link it here.
      description: A link to the bug, or NA if not related to an existing bug.
    validations:
      required: true
  - type: textarea
    id: solution
    attributes:
      label: Describe the solution you'd like
      description: A clear and concise description of what you want to happen.
    validations:
      required: true
  - type: textarea
    id: alternatives
    attributes:
      label: Describe alternatives you've considered
      description: A clear and concise description of any alternative solutions or features you've considered.
    validations:
      required: false
  - type: textarea
    id: context
    attributes:
      label: Additional context
      description: Add any other context, screenshots, or examples about the feature request here.
    validations:
      required: false
  - type: dropdown
    id: willingness-to-contribute
    attributes:
      label: Willingness to Contribute
      description: Would you be willing to contribute to the implementation of this feature?
      options:
        - Yes, I'd be happy to submit a pull request
        - I could provide more detailed specifications
        - I can test the feature once it's implemented
        - No, I'm just suggesting the idea
    validations:
      required: true
.github/security.md (vendored, new file, 19 lines)
@@ -0,0 +1,19 @@
CrewAI takes the security of our software products and services seriously, which includes all source code repositories managed through our GitHub organization.

If you believe you have found a security vulnerability in any CrewAI product or service, please report it to us as described below.

## Reporting a Vulnerability

Please do not report security vulnerabilities through public GitHub issues.

To report a vulnerability, please email us at security@crewai.com.

Please include the requested information listed below so that we can triage your report more quickly:

- Type of issue (e.g. SQL injection, cross-site scripting, etc.)
- Full paths of source file(s) related to the manifestation of the issue
- The location of the affected source code (tag/branch/commit or direct URL)
- Any special configuration required to reproduce the issue
- Step-by-step instructions to reproduce the issue (please include screenshots if needed)
- Proof-of-concept or exploit code (if possible)
- Impact of the issue, including how an attacker might exploit it

Once we have received your report, we will respond to you at the email address you provide. If the issue is confirmed, we will release a patch as soon as possible, depending on the complexity of the issue.

At this time, we are not offering a bug bounty program. Any rewards will be at our discretion.
.github/workflows/black.yml (vendored, deleted, 10 lines)
@@ -1,10 +0,0 @@
name: Lint

on: [pull_request]

jobs:
  lint:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - uses: psf/black@stable
.github/workflows/linter.yml (vendored, new file, 16 lines)
@@ -0,0 +1,16 @@
name: Lint

on: [pull_request]

jobs:
  lint:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Install Requirements
        run: |
          pip install ruff

      - name: Run Ruff Linter
        run: ruff check
.github/workflows/mkdocs.yml (vendored, 14 lines changed)
@@ -1,10 +1,8 @@
name: Deploy MkDocs

on:
  workflow_dispatch:
  push:
    branches:
      - main
  release:
    types: [published]

permissions:
  contents: write
@@ -15,10 +13,10 @@ jobs:
    steps:
      - name: Checkout code
-       uses: actions/checkout@v2
+       uses: actions/checkout@v4

      - name: Setup Python
-       uses: actions/setup-python@v4
+       uses: actions/setup-python@v5
        with:
          python-version: '3.10'
@@ -27,7 +25,7 @@ jobs:
        run: echo "::set-output name=hash::$(sha256sum requirements-doc.txt | awk '{print $1}')"

      - name: Setup cache
-       uses: actions/cache@v3
+       uses: actions/cache@v4
        with:
          key: mkdocs-material-${{ steps.req-hash.outputs.hash }}
          path: .cache
@@ -44,4 +42,4 @@ jobs:
          GH_TOKEN: ${{ secrets.GH_TOKEN }}

      - name: Build and deploy MkDocs
-       run: mkdocs gh-deploy --force
+       run: mkdocs gh-deploy --force
.github/workflows/security-checker.yml (vendored, new file, 23 lines)
@@ -0,0 +1,23 @@
name: Security Checker

on: [pull_request]

jobs:
  security-check:
    runs-on: ubuntu-latest

    steps:
      - name: Checkout code
        uses: actions/checkout@v4

      - name: Set up Python
        uses: actions/setup-python@v5
        with:
          python-version: "3.11.9"

      - name: Install dependencies
        run: pip install bandit

      - name: Run Bandit
        run: bandit -c pyproject.toml -r src/ -ll
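The same security check can be reproduced locally with the workflow's own commands (a sketch; `-c pyproject.toml` makes Bandit read its settings from the project's `pyproject.toml`, and `-ll` limits findings to medium severity and above):

```shell
pip install bandit
# Same invocation as the workflow: read config from pyproject.toml,
# scan src/ recursively, report only medium+ severity issues (-ll).
bandit -c pyproject.toml -r src/ -ll
```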
.github/workflows/stale.yml (vendored, new file, 29 lines)
@@ -0,0 +1,29 @@
name: Mark stale issues and pull requests

permissions:
  contents: write
  issues: write
  pull-requests: write

on:
  schedule:
    - cron: '10 12 * * *'
  workflow_dispatch:

jobs:
  stale:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/stale@v9
        with:
          repo-token: ${{ secrets.GITHUB_TOKEN }}
          stale-issue-label: 'no-issue-activity'
          stale-issue-message: 'This issue is stale because it has been open for 30 days with no activity. Remove the stale label or comment, or this will be closed in 5 days.'
          close-issue-message: 'This issue was closed because it has been stalled for 5 days with no activity.'
          days-before-issue-stale: 30
          days-before-issue-close: 5
          stale-pr-label: 'no-pr-activity'
          stale-pr-message: 'This PR is stale because it has been open for 45 days with no activity.'
          days-before-pr-stale: 45
          days-before-pr-close: -1
          operations-per-run: 1200
.github/workflows/tests.yml (vendored, 28 lines changed)
@@ -9,24 +9,26 @@ env:
  OPENAI_API_KEY: fake-api-key

jobs:
-  deploy:
+  tests:
    runs-on: ubuntu-latest
    timeout-minutes: 15
+   strategy:
+     matrix:
+       python-version: ['3.10', '3.11', '3.12']
    steps:
      - name: Checkout code
-       uses: actions/checkout@v2
+       uses: actions/checkout@v4

-     - name: Setup Python
-       uses: actions/setup-python@v4
+     - name: Install uv
+       uses: astral-sh/setup-uv@v3
        with:
-         python-version: '3.10'
+         enable-cache: true

-     - name: Install Requirements
-       run: |
-         sudo apt-get update &&
-         pip install poetry &&
-         poetry lock &&
-         poetry install
+     - name: Set up Python ${{ matrix.python-version }}
+       run: uv python install ${{ matrix.python-version }}
+
+     - name: Install the project
+       run: uv sync --dev --all-extras

      - name: Run tests
-       run: poetry run pytest
+       run: uv run pytest tests -vv
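The new CI job can be mirrored locally using the workflow's own commands (a sketch; assumes [uv](https://docs.astral.sh/uv/) is installed, and picks one Python version from the matrix):

```shell
# Mirror the CI job locally for a single matrix entry.
uv python install 3.12
uv sync --dev --all-extras   # install the project with dev dependencies and all extras
uv run pytest tests -vv      # run the test suite, verbose
```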
.github/workflows/type-checker.yml (vendored, 14 lines changed)
@@ -1,4 +1,3 @@
name: Run Type Checks

on: [pull_request]
@@ -12,19 +11,16 @@ jobs:
    steps:
      - name: Checkout code
-       uses: actions/checkout@v2
+       uses: actions/checkout@v4

      - name: Setup Python
-       uses: actions/setup-python@v4
+       uses: actions/setup-python@v5
        with:
-         python-version: '3.10'
+         python-version: "3.11.9"

      - name: Install Requirements
        run: |
-         sudo apt-get update &&
-         pip install poetry &&
-         poetry lock &&
-         poetry install
+         pip install mypy

      - name: Run type checks
-       run: poetry run pyright
+       run: mypy src
.gitignore (vendored, 23 lines changed)
@@ -2,7 +2,28 @@
.pytest_cache
__pycache__
dist/
lib/
.env
assets/*
.idea
test.py
test/
docs_crew/
chroma.sqlite3
old_en.json
db/
test.py
rc-tests/*
*.pkl
temp/*
.vscode/*
crew_tasks_output.json
.codesight
.mypy_cache
.ruff_cache
.venv
agentops.log
test_flow.html
crewairules.mdc
plan.md
conceptual_plan.md
build_image
||||
@@ -1,21 +1,7 @@
|
||||
repos:
|
||||
|
||||
- repo: https://github.com/psf/black-pre-commit-mirror
|
||||
rev: 23.12.1
|
||||
- repo: https://github.com/astral-sh/ruff-pre-commit
|
||||
rev: v0.8.2
|
||||
hooks:
|
||||
- id: black
|
||||
language_version: python3.11
|
||||
files: \.(py)$
|
||||
|
||||
- repo: https://github.com/pycqa/isort
|
||||
rev: 5.13.2
|
||||
hooks:
|
||||
- id: isort
|
||||
name: isort (python)
|
||||
args: ["--profile", "black", "--filter-files"]
|
||||
|
||||
- repo: https://github.com/PyCQA/autoflake
|
||||
rev: v2.2.1
|
||||
hooks:
|
||||
- id: autoflake
|
||||
args: ['--in-place', '--remove-all-unused-imports', '--remove-unused-variables', '--ignore-init-module-imports']
|
||||
- id: ruff
|
||||
args: ["--fix"]
|
||||
- id: ruff-format
|
||||
|
||||
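To pick up the new hooks locally, the standard pre-commit commands apply (a sketch, not shown in this diff):

```shell
pip install pre-commit
pre-commit install          # register the git hook for future commits
pre-commit run --all-files  # run ruff and ruff-format across the whole repo once
```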
.ruff.toml (new file, 9 lines)
@@ -0,0 +1,9 @@
exclude = [
    "templates",
    "__init__.py",
]

[lint]
select = [
    "I", # isort rules
]
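Ruff discovers `.ruff.toml` automatically from the working directory, so the CI lint job and local runs share this configuration. A local sketch using standard ruff commands:

```shell
pip install ruff
ruff check          # lint; only the isort-style "I" rules selected above
ruff check --fix    # apply safe auto-fixes (e.g. re-ordering imports)
ruff format         # format code, replacing the old Black hook
```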
LICENSE (2 lines changed)
@@ -1,4 +1,4 @@
-Copyright (c) 2018 The Python Packaging Authority
+Copyright (c) 2025 crewAI, Inc.

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
README.md (662 lines changed)
@@ -1,18 +1,51 @@
<div align="center">

-![Logo of crewAI, two people rowing on a boat]()
+![Logo of crewAI, two people rowing on a boat]()

-# **crewAI**
-
-🤖 **crewAI**: Cutting-edge framework for orchestrating role-playing, autonomous AI agents. By fostering collaborative intelligence, CrewAI empowers agents to work together seamlessly, tackling complex tasks.
</div>

+### Fast and Flexible Multi-Agent Automation Framework
+
+CrewAI is a lean, lightning-fast Python framework built entirely from
+scratch—completely **independent of LangChain or other agent frameworks**.
+It empowers developers with both high-level simplicity and precise low-level
+control, ideal for creating autonomous AI agents tailored to any scenario.
+
+- **CrewAI Crews**: Optimize for autonomy and collaborative intelligence.
+- **CrewAI Flows**: Enable granular, event-driven control and single LLM calls for precise task orchestration, with native support for Crews.
+
+With over 100,000 developers certified through our community courses at
+[learn.crewai.com](https://learn.crewai.com), CrewAI is rapidly becoming the
+standard for enterprise-ready AI automation.
+
+# CrewAI Enterprise Suite
+
+CrewAI Enterprise Suite is a comprehensive bundle tailored for organizations
+that require secure, scalable, and easy-to-manage agent-driven automation.
+
+You can try one part of the suite, the [Crew Control Plane](https://app.crewai.com), for free.
+
+## Crew Control Plane Key Features:
+- **Tracing & Observability**: Monitor and track your AI agents and workflows in real time, including metrics, logs, and traces.
+- **Unified Control Plane**: A centralized platform for managing, monitoring, and scaling your AI agents and workflows.
+- **Seamless Integrations**: Easily connect with existing enterprise systems, data sources, and cloud infrastructure.
+- **Advanced Security**: Built-in robust security and compliance measures ensuring safe deployment and management.
+- **Actionable Insights**: Real-time analytics and reporting to optimize performance and decision-making.
+- **24/7 Support**: Dedicated enterprise support to ensure uninterrupted operation and quick resolution of issues.
+- **On-premise and Cloud Deployment Options**: Deploy CrewAI Enterprise on-premise or in the cloud, depending on your security and compliance requirements.
+
+CrewAI Enterprise is designed for enterprises seeking a powerful,
+reliable solution to transform complex business processes into efficient,
+intelligent automations.
+
<h3>

-[Homepage](https://www.crewai.io/) | [Documentation](https://docs.crewai.com/) | [Chat with Docs](https://chatg.pt/DWjSBZn) | [Examples](https://github.com/joaomdmoura/crewai-examples) | [Discord](https://discord.com/invite/X4JWnZnxPb)
+[Homepage](https://www.crewai.com/) | [Documentation](https://docs.crewai.com/) | [Chat with Docs](https://chatg.pt/DWjSBZn) | [Discourse](https://community.crewai.com)

</h3>

-[](https://github.com/joaomdmoura/crewAI)
+[](https://github.com/crewAIInc/crewAI)
[](https://opensource.org/licenses/MIT)

</div>
@@ -22,170 +55,465 @@
- [Why CrewAI?](#why-crewai)
- [Getting Started](#getting-started)
- [Key Features](#key-features)
+- [Understanding Flows and Crews](#understanding-flows-and-crews)
+- [CrewAI vs LangGraph](#how-crewai-compares)
- [Examples](#examples)
  - [Quick Tutorial](#quick-tutorial)
  - [Write Job Descriptions](#write-job-descriptions)
  - [Trip Planner](#trip-planner)
  - [Stock Analysis](#stock-analysis)
+  - [Using Crews and Flows Together](#using-crews-and-flows-together)
- [Connecting Your Crew to a Model](#connecting-your-crew-to-a-model)
- [How CrewAI Compares](#how-crewai-compares)
+- [Frequently Asked Questions (FAQ)](#frequently-asked-questions-faq)
- [Contribution](#contribution)
- [Hire CrewAI](#hire-crewai)
- [Telemetry](#telemetry)
- [License](#license)

## Why CrewAI?

-The power of AI collaboration has too much to offer.
-CrewAI is designed to enable AI agents to assume roles, share goals, and operate in a cohesive unit - much like a well-oiled crew. Whether you're building a smart assistant platform, an automated customer service ensemble, or a multi-agent research team, CrewAI provides the backbone for sophisticated multi-agent interactions.
+<div align="center" style="margin-bottom: 30px;">
+  <img src="docs/asset.png" alt="CrewAI Logo" width="100%">
+</div>
+
+CrewAI unlocks the true potential of multi-agent automation, delivering the best-in-class combination of speed, flexibility, and control with either Crews of AI Agents or Flows of Events:
+
+- **Standalone Framework**: Built from scratch, independent of LangChain or any other agent framework.
+- **High Performance**: Optimized for speed and minimal resource usage, enabling faster execution.
+- **Flexible Low-Level Customization**: Complete freedom to customize at both high and low levels - from overall workflows and system architecture to granular agent behaviors, internal prompts, and execution logic.
+- **Ideal for Every Use Case**: Proven effective for both simple tasks and highly complex, real-world, enterprise-grade scenarios.
+- **Robust Community**: Backed by a rapidly growing community of over **100,000 certified** developers offering comprehensive support and resources.
+
+CrewAI empowers developers and enterprises to confidently build intelligent automations, bridging the gap between simplicity, flexibility, and performance.
+
+## Getting Started
+
+### Learning Resources
+
+Learn CrewAI through our comprehensive courses:
+- [Multi AI Agent Systems with CrewAI](https://www.deeplearning.ai/short-courses/multi-ai-agent-systems-with-crewai/) - Master the fundamentals of multi-agent systems
+- [Practical Multi AI Agents and Advanced Use Cases](https://www.deeplearning.ai/short-courses/practical-multi-ai-agents-and-advanced-use-cases-with-crewai/) - Deep dive into advanced implementations
+
+### Understanding Flows and Crews
+
+CrewAI offers two powerful, complementary approaches that work seamlessly together to build sophisticated AI applications:
+
+1. **Crews**: Teams of AI agents with true autonomy and agency, working together to accomplish complex tasks through role-based collaboration. Crews enable:
+   - Natural, autonomous decision-making between agents
+   - Dynamic task delegation and collaboration
+   - Specialized roles with defined goals and expertise
+   - Flexible problem-solving approaches
+
+2. **Flows**: Production-ready, event-driven workflows that deliver precise control over complex automations. Flows provide:
+   - Fine-grained control over execution paths for real-world scenarios
+   - Secure, consistent state management between tasks
+   - Clean integration of AI agents with production Python code
+   - Conditional branching for complex business logic
+
+The true power of CrewAI emerges when combining Crews and Flows (a minimal sketch follows below). This synergy allows you to:
+- Build complex, production-grade applications
+- Balance autonomy with precise control
+- Handle sophisticated real-world scenarios
+- Maintain clean, maintainable code structure
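To make the Flows concept above concrete, here is a minimal sketch (not part of this diff) based on the documented `crewai.flow` API; the `ReportFlow` class and its method names are illustrative:

```python
from crewai.flow.flow import Flow, listen, start

class ReportFlow(Flow):
    @start()
    def pick_topic(self):
        # A plain Python step; it could equally be a single LLM call or a full Crew kickoff.
        return "AI Agents"

    @listen(pick_topic)
    def outline(self, topic):
        # Runs only after pick_topic completes, receiving its return value.
        print(f"Outlining a report on: {topic}")
        return [topic, "tools", "processes"]

if __name__ == "__main__":
    ReportFlow().kickoff()  # executes pick_topic, then outline
```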
### Getting Started with Installation

To get started with CrewAI, follow these simple steps:

### 1. Installation

Ensure you have Python >=3.10 <3.13 installed on your system. CrewAI uses [UV](https://docs.astral.sh/uv/) for dependency management and package handling, offering a seamless setup and execution experience.

First, install CrewAI:

```shell
pip install crewai
```

-The example below also uses DuckDuckGo's Search. You can install it with `pip` too:
+If you want to install the 'crewai' package along with its optional features, which include additional tools for agents, you can do so with the following command:

```shell
-pip install duckduckgo-search
+pip install 'crewai[tools]'
```
+The command above installs the basic package and also adds extra components that require more dependencies to function.
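A quick way to confirm the installation before going further (standard pip and Python commands, added here for illustration):

```shell
pip show crewai            # show the installed version and location
python -c "import crewai"  # exits silently if the import works
```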
### Troubleshooting Dependencies

If you encounter issues during installation or usage, here are some common solutions:

#### Common Issues

1. **ModuleNotFoundError: No module named 'tiktoken'**
   - Install tiktoken explicitly: `pip install 'crewai[embeddings]'`
   - If using embedchain or other tools: `pip install 'crewai[tools]'`

2. **Failed building wheel for tiktoken**
   - Ensure the Rust compiler is installed (see installation steps above)
   - For Windows: verify Visual C++ Build Tools are installed
   - Try upgrading pip: `pip install --upgrade pip`
   - If issues persist, use a pre-built wheel: `pip install tiktoken --prefer-binary`

+### 2. Setting Up Your Crew with the YAML Configuration

To create a new CrewAI project, run the following CLI (Command Line Interface) command:

```shell
crewai create crew <project_name>
```

-### 2. Setting Up Your Crew
This command creates a new project folder with the following structure:

```
my_project/
├── .gitignore
├── pyproject.toml
├── README.md
├── .env
└── src/
    └── my_project/
        ├── __init__.py
        ├── main.py
        ├── crew.py
        ├── tools/
        │   ├── custom_tool.py
        │   └── __init__.py
        └── config/
            ├── agents.yaml
            └── tasks.yaml
```

You can now start developing your crew by editing the files in the `src/my_project` folder. The `main.py` file is the entry point of the project, the `crew.py` file is where you define your crew, the `agents.yaml` file is where you define your agents, and the `tasks.yaml` file is where you define your tasks.

#### To customize your project, you can:

- Modify `src/my_project/config/agents.yaml` to define your agents.
- Modify `src/my_project/config/tasks.yaml` to define your tasks.
- Modify `src/my_project/crew.py` to add your own logic, tools, and specific arguments.
- Modify `src/my_project/main.py` to add custom inputs for your agents and tasks.
- Add your environment variables into the `.env` file.

#### Example of a simple crew with a sequential process:

Instantiate your crew:

```shell
crewai create crew latest-ai-development
```

Modify the files as needed to fit your use case:

**agents.yaml**

```yaml
# src/my_project/config/agents.yaml
researcher:
  role: >
    {topic} Senior Data Researcher
  goal: >
    Uncover cutting-edge developments in {topic}
  backstory: >
    You're a seasoned researcher with a knack for uncovering the latest
    developments in {topic}. Known for your ability to find the most relevant
    information and present it in a clear and concise manner.

reporting_analyst:
  role: >
    {topic} Reporting Analyst
  goal: >
    Create detailed reports based on {topic} data analysis and research findings
  backstory: >
    You're a meticulous analyst with a keen eye for detail. You're known for
    your ability to turn complex data into clear and concise reports, making
    it easy for others to understand and act on the information you provide.
```

**tasks.yaml**

```yaml
# src/my_project/config/tasks.yaml
research_task:
  description: >
    Conduct thorough research about {topic}.
    Make sure you find any interesting and relevant information, given
    that the current year is 2025.
  expected_output: >
    A list with 10 bullet points of the most relevant information about {topic}
  agent: researcher

reporting_task:
  description: >
    Review the context you got and expand each topic into a full section for a report.
    Make sure the report is detailed and contains any and all relevant information.
  expected_output: >
    A fully fledged report with the main topics, each with a full section of information.
    Formatted as markdown without '```'
  agent: reporting_analyst
  output_file: report.md
```

**crew.py**

```python
-import os
-from crewai import Agent, Task, Crew, Process
+# src/my_project/crew.py
+from crewai import Agent, Crew, Process, Task
+from crewai.project import CrewBase, agent, crew, task
+from crewai_tools import SerperDevTool

-os.environ["OPENAI_API_KEY"] = "YOUR_API_KEY"
+@CrewBase
+class LatestAiDevelopmentCrew():
+  """LatestAiDevelopment crew"""

-# You can choose to use a local model through Ollama for example. See ./docs/how-to/llm-connections.md for more information.
-# from langchain_community.llms import Ollama
-# ollama_llm = Ollama(model="openhermes")
+  @agent
+  def researcher(self) -> Agent:
+    return Agent(
+      config=self.agents_config['researcher'],
+      verbose=True,
+      tools=[SerperDevTool()]
+    )

-# Install duckduckgo-search for this example:
-# !pip install -U duckduckgo-search
+  @agent
+  def reporting_analyst(self) -> Agent:
+    return Agent(
+      config=self.agents_config['reporting_analyst'],
+      verbose=True
+    )

-from langchain_community.tools import DuckDuckGoSearchRun
-search_tool = DuckDuckGoSearchRun()
+  @task
+  def research_task(self) -> Task:
+    return Task(
+      config=self.tasks_config['research_task'],
+    )

-# Define your agents with roles and goals
-researcher = Agent(
-  role='Senior Research Analyst',
-  goal='Uncover cutting-edge developments in AI and data science',
-  backstory="""You work at a leading tech think tank.
-  Your expertise lies in identifying emerging trends.
-  You have a knack for dissecting complex data and presenting actionable insights.""",
-  verbose=True,
-  allow_delegation=False,
-  tools=[search_tool]
-  # You can pass an optional llm attribute specifying what model you wanna use.
-  # It can be a local model through Ollama / LM Studio or a remote
-  # model like OpenAI, Mistral, Anthropic or others (https://python.langchain.com/docs/integrations/llms/)
-  #
-  # Examples:
-  #
-  # from langchain_community.llms import Ollama
-  # llm=ollama_llm # was defined above in the file
-  #
-  # from langchain_openai import ChatOpenAI
-  # llm=ChatOpenAI(model_name="gpt-3.5", temperature=0.7)
-)
-writer = Agent(
-  role='Tech Content Strategist',
-  goal='Craft compelling content on tech advancements',
-  backstory="""You are a renowned Content Strategist, known for your insightful and engaging articles.
-  You transform complex concepts into compelling narratives.""",
-  verbose=True,
-  allow_delegation=True,
-  # (optional) llm=ollama_llm
-)
+  @task
+  def reporting_task(self) -> Task:
+    return Task(
+      config=self.tasks_config['reporting_task'],
+      output_file='report.md'
+    )

-# Create tasks for your agents
-task1 = Task(
-  description="""Conduct a comprehensive analysis of the latest advancements in AI in 2024.
-  Identify key trends, breakthrough technologies, and potential industry impacts.
-  Your final answer MUST be a full analysis report""",
-  agent=researcher
-)
-
-task2 = Task(
-  description="""Using the insights provided, develop an engaging blog
-  post that highlights the most significant AI advancements.
-  Your post should be informative yet accessible, catering to a tech-savvy audience.
-  Make it sound cool, avoid complex words so it doesn't sound like AI.
-  Your final answer MUST be the full blog post of at least 4 paragraphs.""",
-  agent=writer
-)
-
-# Instantiate your crew with a sequential process
-crew = Crew(
-  agents=[researcher, writer],
-  tasks=[task1, task2],
-  verbose=2, # You can set it to 1 or 2 for different logging levels
-)
-
-# Get your crew to work!
-result = crew.kickoff()
-
-print("######################")
-print(result)
+  @crew
+  def crew(self) -> Crew:
+    """Creates the LatestAiDevelopment crew"""
+    return Crew(
+      agents=self.agents, # Automatically created by the @agent decorator
+      tasks=self.tasks, # Automatically created by the @task decorator
+      process=Process.sequential,
+      verbose=True,
+    )
```
|
||||
**main.py**
|
||||
|
||||
```python
|
||||
#!/usr/bin/env python
|
||||
# src/my_project/main.py
|
||||
import sys
|
||||
from latest_ai_development.crew import LatestAiDevelopmentCrew
|
||||
|
||||
def run():
|
||||
"""
|
||||
Run the crew.
|
||||
"""
|
||||
inputs = {
|
||||
'topic': 'AI Agents'
|
||||
}
|
||||
LatestAiDevelopmentCrew().crew().kickoff(inputs=inputs)
|
||||
```
|
||||
|
||||
### 3. Running Your Crew
|
||||
|
||||
Before running your crew, make sure you have the following keys set as environment variables in your `.env` file:
|
||||
|
||||
- An [OpenAI API key](https://platform.openai.com/account/api-keys) (or other LLM API key): `OPENAI_API_KEY=sk-...`
|
||||
- A [Serper.dev](https://serper.dev/) API key: `SERPER_API_KEY=YOUR_KEY_HERE`

Lock the dependencies and install them using the CLI command, but first navigate to your project directory:

```shell
cd my_project
crewai install  # optional
```

To run your crew, execute the following command in the root of your project:

```bash
crewai run
```

or

```bash
python src/my_project/main.py
```

If an error occurs due to Poetry usage, run the following command to update your `crewai` package:

```bash
crewai update
```

You should see the output in the console, and the `report.md` file should be created in the root of your project with the full final report.

In addition to the sequential process, you can use the hierarchical process, which automatically assigns a manager to the defined crew to properly coordinate the planning and execution of tasks through delegation and validation of results. [See more about the processes here](https://docs.crewai.com/core-concepts/Processes/).
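
As a rough sketch (not a drop-in replacement), switching the crew above to the hierarchical process mainly means changing the `process` argument and giving the manager its own model via `manager_llm`; the model name here is illustrative:

```python
from crewai import Crew, Process

crew = Crew(
    agents=[researcher, writer],    # the agents defined earlier
    tasks=[task1, task2],
    process=Process.hierarchical,   # a manager agent plans, delegates, and validates
    manager_llm="gpt-4o",           # required for hierarchical processes; model choice is illustrative
)
```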

## Key Features

- **Role-Based Agent Design**: Customize agents with specific roles, goals, and tools.
- **Autonomous Inter-Agent Delegation**: Agents can autonomously delegate tasks and inquire amongst themselves, enhancing problem-solving efficiency.
- **Flexible Task Management**: Define tasks with customizable tools and assign them to agents dynamically.
- **Processes Driven**: Currently supports `sequential` task execution and `hierarchical` processes, with more complex processes like consensual and autonomous in the works.
- **Works with Open Source Models**: Run your crew using OpenAI or open-source models; refer to the [Connect CrewAI to LLMs](https://docs.crewai.com/how-to/LLM-Connections/) page for details on configuring your agents' connections to models, even ones running locally!

CrewAI stands apart as a lean, standalone, high-performance framework delivering simplicity, flexibility, and precise control—free from the complexity and limitations found in other agent frameworks.



- **Standalone & Lean**: Completely independent from other frameworks like LangChain, offering faster execution and lighter resource demands.
- **Flexible & Precise**: Easily orchestrate autonomous agents through intuitive [Crews](https://docs.crewai.com/concepts/crews) or precise [Flows](https://docs.crewai.com/concepts/flows), achieving the perfect balance for your needs.
- **Seamless Integration**: Effortlessly combine Crews (autonomy) and Flows (precision) to create complex, real-world automations.
- **Deep Customization**: Tailor every aspect—from high-level workflows down to low-level internal prompts and agent behaviors.
- **Reliable Performance**: Consistent results across simple tasks and complex, enterprise-level automations.
- **Thriving Community**: Backed by robust documentation and over 100,000 certified developers, providing exceptional support and guidance.

Choose CrewAI to easily build powerful, adaptable, and production-ready AI automations.

## Examples

You can test different real-life examples of AI crews in the [CrewAI-examples repo](https://github.com/crewAIInc/crewAI-examples?tab=readme-ov-file):

- [Landing Page Generator](https://github.com/crewAIInc/crewAI-examples/tree/main/landing_page_generator)
- [Having Human input on the execution](https://docs.crewai.com/how-to/Human-Input-on-Execution)
- [Trip Planner](https://github.com/crewAIInc/crewAI-examples/tree/main/trip_planner)
- [Stock Analysis](https://github.com/crewAIInc/crewAI-examples/tree/main/stock_analysis)

### Quick Tutorial

[CrewAI Tutorial](https://www.youtube.com/watch?v=tnejrr-0a94 "CrewAI Tutorial")

### Write Job Descriptions

[Check out code for this example](https://github.com/crewAIInc/crewAI-examples/tree/main/job-posting) or watch a video below:

[Jobs postings](https://www.youtube.com/watch?v=u98wEMz-9to "Jobs postings")

### Trip Planner

[Check out code for this example](https://github.com/crewAIInc/crewAI-examples/tree/main/trip_planner) or watch a video below:

[Trip Planner](https://www.youtube.com/watch?v=xis7rWp-hjs "Trip Planner")

### Stock Analysis

[Check out code for this example](https://github.com/crewAIInc/crewAI-examples/tree/main/stock_analysis) or watch a video below:

[Stock Analysis](https://www.youtube.com/watch?v=e0Uj4yWdaAg "Stock Analysis")

### Using Crews and Flows Together

CrewAI's power truly shines when combining Crews with Flows to create sophisticated automation pipelines.
CrewAI Flows support logical operators like `or_` and `and_` to combine multiple conditions. These can be used with the `@start`, `@listen`, or `@router` decorators to create complex triggering conditions.

- `or_`: Triggers when any of the specified conditions are met.
- `and_`: Triggers when all of the specified conditions are met (a minimal sketch follows below).
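
The larger example below exercises `or_`; as a complementary minimal sketch (the flow and method names are made up for illustration), `and_` gates a step on several predecessors:

```python
from crewai.flow.flow import Flow, and_, listen, start

class SignoffFlow(Flow):
    @start()
    def gather_metrics(self):
        return "metrics ready"

    @start()
    def gather_feedback(self):
        return "feedback ready"

    @listen(and_(gather_metrics, gather_feedback))
    def publish_report(self):
        # Fires only once BOTH start steps have completed
        return "report published"
```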

Here's how you can orchestrate multiple Crews within a Flow:

```python
from crewai.flow.flow import Flow, listen, start, router, or_
from crewai import Crew, Agent, Task, Process
from pydantic import BaseModel

# Define structured state for precise control
class MarketState(BaseModel):
    sentiment: str = "neutral"
    confidence: float = 0.0
    recommendations: list = []

class AdvancedAnalysisFlow(Flow[MarketState]):
    @start()
    def fetch_market_data(self):
        # Demonstrate low-level control with structured state
        self.state.sentiment = "analyzing"
        return {"sector": "tech", "timeframe": "1W"}  # These parameters match the task description template

    @listen(fetch_market_data)
    def analyze_with_crew(self, market_data):
        # Show crew agency through specialized roles
        analyst = Agent(
            role="Senior Market Analyst",
            goal="Conduct deep market analysis with expert insight",
            backstory="You're a veteran analyst known for identifying subtle market patterns"
        )
        researcher = Agent(
            role="Data Researcher",
            goal="Gather and validate supporting market data",
            backstory="You excel at finding and correlating multiple data sources"
        )

        analysis_task = Task(
            description="Analyze {sector} sector data for the past {timeframe}",
            expected_output="Detailed market analysis with confidence score",
            agent=analyst
        )
        research_task = Task(
            description="Find supporting data to validate the analysis",
            expected_output="Corroborating evidence and potential contradictions",
            agent=researcher
        )

        # Demonstrate crew autonomy
        analysis_crew = Crew(
            agents=[analyst, researcher],
            tasks=[analysis_task, research_task],
            process=Process.sequential,
            verbose=True
        )
        return analysis_crew.kickoff(inputs=market_data)  # Pass market_data as named inputs

    @router(analyze_with_crew)
    def determine_next_steps(self):
        # Show flow control with conditional routing
        if self.state.confidence > 0.8:
            return "high_confidence"
        elif self.state.confidence > 0.5:
            return "medium_confidence"
        return "low_confidence"

    @listen("high_confidence")
    def execute_strategy(self):
        # Demonstrate complex decision making
        strategy_agent = Agent(
            role="Strategy Expert",
            goal="Develop optimal market strategy",
            backstory="You turn validated analysis into concrete plans"  # added: backstory is a required Agent field
        )
        strategy_crew = Crew(
            agents=[strategy_agent],
            tasks=[
                Task(description="Create detailed strategy based on analysis",
                     expected_output="Step-by-step action plan",
                     agent=strategy_agent)  # added: assign the task to its agent
            ]
        )
        return strategy_crew.kickoff()

    @listen(or_("medium_confidence", "low_confidence"))
    def request_additional_analysis(self):
        self.state.recommendations.append("Gather more data")
        return "Additional analysis required"
```

This example demonstrates how to:

1. Use Python code for basic data operations
2. Create and execute Crews as steps in your workflow
3. Use Flow decorators to manage the sequence of operations
4. Implement conditional branching based on Crew results
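
Running the flow is then a matter of instantiating it and calling `kickoff()`, which returns the final output of the flow:

```python
flow = AdvancedAnalysisFlow()
result = flow.kickoff()
print(result)
```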

## Connecting Your Crew to a Model

CrewAI supports using various LLMs through a variety of connection options. By default, your agents will use the OpenAI API when querying the model. However, there are several other ways to allow your agents to connect to models. For example, you can configure your agents to use a local model via the Ollama tool.

Please refer to the [Connect CrewAI to LLMs](https://docs.crewai.com/how-to/LLM-Connections/) page for details on configuring your agents' connections to models.
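
As a hedged sketch, one way to point an agent at a local Ollama model uses the `LLM` class; the model string and URL below are illustrative, so check the linked docs for the options your setup needs:

```python
from crewai import Agent, LLM

local_llm = LLM(
    model="ollama/llama3.1",            # provider/model string; illustrative choice
    base_url="http://localhost:11434",  # default local Ollama endpoint
)

researcher = Agent(
    role="Researcher",
    goal="Summarize the latest AI developments",
    backstory="A careful analyst of AI research news.",
    llm=local_llm,
)
```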

## How CrewAI Compares

**CrewAI's Advantage**: CrewAI combines autonomous agent intelligence with precise workflow control through its unique Crews and Flows architecture. The framework excels at both high-level orchestration and low-level customization, enabling complex, production-grade systems with granular control.

- **LangGraph**: While LangGraph provides a foundation for building agent workflows, its approach requires significant boilerplate code and complex state management patterns. The framework's tight coupling with LangChain can limit flexibility when implementing custom agent behaviors or integrating with external systems.

*P.S. CrewAI demonstrates significant performance advantages over LangGraph, executing 5.76x faster in certain cases like this QA task example ([see comparison](https://github.com/crewAIInc/crewAI-examples/tree/main/Notebooks/CrewAI%20Flows%20%26%20Langgraph/QA%20Agent)) while achieving higher evaluation scores with faster completion times in certain coding tasks, like in this example ([detailed analysis](https://github.com/crewAIInc/crewAI-examples/blob/main/Notebooks/CrewAI%20Flows%20%26%20Langgraph/Coding%20Assistant/coding_assistant_eval.ipynb)).*

- **Autogen**: While Autogen excels at creating conversational agents capable of working together, it lacks an inherent concept of process. In Autogen, orchestrating agents' interactions requires additional programming, which can become complex and cumbersome as the scale of tasks grows.

- **ChatDev**: ChatDev introduced the idea of processes into the realm of AI agents, but its implementation is quite rigid. Customizations in ChatDev are limited and not geared towards production environments, which can hinder scalability and flexibility in real-world applications.

**CrewAI's Advantage**: CrewAI is built with production in mind. It offers the flexibility of Autogen's conversational agents and the structured process approach of ChatDev, but without the rigidity. CrewAI's processes are designed to be dynamic and adaptable, fitting seamlessly into both development and production workflows.

## Contribution

CrewAI is open-source and we welcome contributions. If you're looking to contribute, please:

### Installing Dependencies

```bash
uv lock
uv sync
```

### Virtual Env

```bash
uv venv
```

### Pre-commit hooks

Install the hooks with `pre-commit install`.

### Running Tests

```bash
uv run pytest .
```

### Running static type checks

```bash
uvx mypy src
```

### Packaging

```bash
uv build
```

### Installing Locally

```bash
pip install dist/*.tar.gz
```

## Hire CrewAI

We're the company developing CrewAI and CrewAI Enterprise. For a limited time, we're offering consulting to selected customers to give them early access to our enterprise solution.
If you are interested in having access to it and hiring weekly hours with our team, feel free to email us at [joao@crewai.com](mailto:joao@crewai.com).

## Telemetry

CrewAI uses anonymous telemetry to collect usage data with the main purpose of helping us improve the library by focusing our efforts on the most used features, integrations, and tools.

It's pivotal to understand that **NO data is collected** concerning prompts, task descriptions, agents' backstories or goals, usage of tools, API calls, responses, any data processed by the agents, or secrets and environment variables, with the exception of the conditions mentioned. When the `share_crew` feature is enabled, detailed data including task descriptions, agents' backstories or goals, and other specific attributes are collected to provide deeper insights while respecting user privacy. Users can disable telemetry by setting the environment variable `OTEL_SDK_DISABLED` to `true`.

Data collected includes:

- Version of CrewAI
  - So we can understand how many users are using the latest version
- Version of Python
  - So we can decide on what versions to better support
- General OS (e.g. number of CPUs, macOS/Windows/Linux)
  - So we know what OS we should focus on and if we could build specific OS-related features
- Number of agents and tasks in a crew
  - So we make sure we are testing internally with similar use cases and educate people on the best practices
- Crew Process being used
  - Understand where we should focus our efforts
- If Agents are using memory or allowing delegation
  - Understand if we improved the features or maybe even drop them
- If Tasks are being executed in parallel or sequentially
  - Understand if we should focus more on parallel execution
- Language model being used
  - Improved support on most used languages
- Roles of agents in a crew
  - Understand high-level use cases so we can build better tools, integrations, and examples about it
- Tool names available
  - Understand which of the publicly available tools are used the most so we can improve them

Users can opt in to Further Telemetry, sharing the complete telemetry data by setting the `share_crew` attribute to `True` on their Crews. Enabling `share_crew` results in the collection of detailed crew and task execution data, including `goal`, `backstory`, `context`, and `output` of tasks. This enables deeper insight into usage patterns while respecting the user's choice to share.
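
For instance, opting out entirely can be done from the environment, while opting in is just the `share_crew=True` attribute on your `Crew`; a minimal sketch of the opt-out:

```python
import os

# Set before importing crewai so the OpenTelemetry SDK sees it
os.environ["OTEL_SDK_DISABLED"] = "true"
```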

## License

CrewAI is released under the [MIT License](https://github.com/crewAIInc/crewAI/blob/main/LICENSE).

## Frequently Asked Questions (FAQ)

### General
- [What exactly is CrewAI?](#q-what-exactly-is-crewai)
- [How do I install CrewAI?](#q-how-do-i-install-crewai)
- [Does CrewAI depend on LangChain?](#q-does-crewai-depend-on-langchain)
- [Is CrewAI open-source?](#q-is-crewai-open-source)
- [Does CrewAI collect data from users?](#q-does-crewai-collect-data-from-users)

### Features and Capabilities
- [Can CrewAI handle complex use cases?](#q-can-crewai-handle-complex-use-cases)
- [Can I use CrewAI with local AI models?](#q-can-i-use-crewai-with-local-ai-models)
- [What makes Crews different from Flows?](#q-what-makes-crews-different-from-flows)
- [How is CrewAI better than LangChain?](#q-how-is-crewai-better-than-langchain)
- [Does CrewAI support fine-tuning or training custom models?](#q-does-crewai-support-fine-tuning-or-training-custom-models)

### Resources and Community
- [Where can I find real-world CrewAI examples?](#q-where-can-i-find-real-world-crewai-examples)
- [How can I contribute to CrewAI?](#q-how-can-i-contribute-to-crewai)

### Enterprise Features
- [What additional features does CrewAI Enterprise offer?](#q-what-additional-features-does-crewai-enterprise-offer)
- [Is CrewAI Enterprise available for cloud and on-premise deployments?](#q-is-crewai-enterprise-available-for-cloud-and-on-premise-deployments)
- [Can I try CrewAI Enterprise for free?](#q-can-i-try-crewai-enterprise-for-free)

### Q: What exactly is CrewAI?
A: CrewAI is a standalone, lean, and fast Python framework built specifically for orchestrating autonomous AI agents. Unlike frameworks like LangChain, CrewAI does not rely on external dependencies, making it leaner, faster, and simpler.

### Q: How do I install CrewAI?
A: Install CrewAI using pip:
```shell
pip install crewai
```
For additional tools, use:
```shell
pip install 'crewai[tools]'
```

### Q: Does CrewAI depend on LangChain?
A: No. CrewAI is built entirely from the ground up, with no dependencies on LangChain or other agent frameworks. This ensures a lean, fast, and flexible experience.

### Q: Can CrewAI handle complex use cases?
A: Yes. CrewAI excels at both simple and highly complex real-world scenarios, offering deep customization options at both high and low levels, from internal prompts to sophisticated workflow orchestration.

### Q: Can I use CrewAI with local AI models?
A: Absolutely! CrewAI supports various language models, including local ones. Tools like Ollama and LM Studio allow seamless integration. Check the [LLM Connections documentation](https://docs.crewai.com/how-to/LLM-Connections/) for more details.

### Q: What makes Crews different from Flows?
A: Crews provide autonomous agent collaboration, ideal for tasks requiring flexible decision-making and dynamic interaction. Flows offer precise, event-driven control, ideal for managing detailed execution paths and secure state management. You can seamlessly combine both for maximum effectiveness.

### Q: How is CrewAI better than LangChain?
A: CrewAI provides simpler, more intuitive APIs, faster execution speeds, more reliable and consistent results, robust documentation, and an active community—addressing common criticisms and limitations associated with LangChain.

### Q: Is CrewAI open-source?
A: Yes, CrewAI is open-source and actively encourages community contributions and collaboration.

### Q: Does CrewAI collect data from users?
A: CrewAI collects anonymous telemetry data strictly for improvement purposes. Sensitive data such as prompts, task descriptions, or API responses is never collected unless explicitly enabled by the user.

### Q: Where can I find real-world CrewAI examples?
A: Check out practical examples in the [CrewAI-examples repository](https://github.com/crewAIInc/crewAI-examples), covering use cases like trip planners, stock analysis, and job postings.

### Q: How can I contribute to CrewAI?
A: Contributions are warmly welcomed! Fork the repository, create your branch, implement your changes, and submit a pull request. See the Contribution section of the README for detailed guidelines.

### Q: What additional features does CrewAI Enterprise offer?
A: CrewAI Enterprise provides advanced features such as a unified control plane, real-time observability, secure integrations, advanced security, actionable insights, and dedicated 24/7 enterprise support.

### Q: Is CrewAI Enterprise available for cloud and on-premise deployments?
A: Yes, CrewAI Enterprise supports both cloud-based and on-premise deployment options, allowing enterprises to meet their specific security and compliance requirements.

### Q: Can I try CrewAI Enterprise for free?
A: Yes, you can explore part of the CrewAI Enterprise Suite by accessing the [Crew Control Plane](https://app.crewai.com) for free.

### Q: Does CrewAI support fine-tuning or training custom models?
A: Yes, CrewAI can integrate with custom-trained or fine-tuned models, allowing you to enhance your agents with domain-specific knowledge and accuracy.

### Q: Can CrewAI agents interact with external tools and APIs?
A: Absolutely! CrewAI agents can easily integrate with external tools, APIs, and databases, empowering them to leverage real-world data and resources.

### Q: Is CrewAI suitable for production environments?
A: Yes, CrewAI is explicitly designed with production-grade standards, ensuring reliability, stability, and scalability for enterprise deployments.

### Q: How scalable is CrewAI?
A: CrewAI is highly scalable, supporting simple automations and large-scale enterprise workflows involving numerous agents and complex tasks simultaneously.

### Q: Does CrewAI offer debugging and monitoring tools?
A: Yes, CrewAI Enterprise includes advanced debugging, tracing, and real-time observability features, simplifying the management and troubleshooting of your automations.

### Q: What programming languages does CrewAI support?
A: CrewAI is primarily Python-based but easily integrates with services and APIs written in any programming language through its flexible API integration capabilities.

### Q: Does CrewAI offer educational resources for beginners?
A: Yes, CrewAI provides extensive beginner-friendly tutorials, courses, and documentation through learn.crewai.com, supporting developers at all skill levels.

### Q: Can CrewAI automate human-in-the-loop workflows?
A: Yes, CrewAI fully supports human-in-the-loop workflows, allowing seamless collaboration between human experts and AI agents for enhanced decision-making.

"locked": false,
"fontSize": 20,
"fontFamily": 3,
"text": "Agents have the innate ability of\nreach out to another to delegate\nwork or ask questions.",
"textAlign": "right",
"verticalAlign": "top",
"containerId": null,
"originalText": "Agents have the innate ability of\nreach out to another to delegate\nwork or ask questions.",
"lineHeight": 1.2,
"baseline": 68
},

"viewBackgroundColor": "#ffffff"
},
"files": {}
}
}

**docs/asset.png** (new binary file, 66 KiB image)

**docs/changelog.mdx** (new file, 187 lines)

---
title: Changelog
description: View the latest updates and changes to CrewAI
icon: timeline
---

<Update label="2025-03-17" description="v0.108.0">
**Features**
- Converted tabs to spaces in `crew.py` template
- Enhanced LLM Streaming Response Handling and Event System
- Included `model_name`
- Enhanced Event Listener with rich visualization and improved logging
- Added fingerprints

**Bug Fixes**
- Fixed Mistral issues
- Fixed a bug in documentation
- Fixed type check error in fingerprint property

**Documentation Updates**
- Improved tool documentation
- Updated installation guide for the `uv` tool package
- Added instructions for upgrading crewAI with the `uv` tool
- Added documentation for `ApifyActorsTool`
</Update>

<Update label="2025-03-10" description="v0.105.0">
**Core Improvements & Fixes**
- Fixed issues with missing template variables and user memory configuration
- Improved async flow support and addressed agent response formatting
- Enhanced memory reset functionality and fixed CLI memory commands
- Fixed type issues, tool calling properties, and telemetry decoupling

**New Features & Enhancements**
- Added Flow state export and improved state utilities
- Enhanced agent knowledge setup with optional crew embedder
- Introduced event emitter for better observability and LLM call tracking
- Added support for Python 3.10 and ChatOllama from langchain_ollama
- Integrated context window size support for the o3-mini model
- Added support for multiple router calls

**Documentation & Guides**
- Improved documentation layout and hierarchical structure
- Added QdrantVectorSearchTool guide and clarified event listener usage
- Fixed typos in prompts and updated Amazon Bedrock model listings
</Update>

<Update label="2025-02-12" description="v0.102.0">
**Core Improvements & Fixes**
- Enhanced LLM Support: Improved structured LLM output, parameter handling, and formatting for Anthropic models
- Crew & Agent Stability: Fixed issues with cloning agents/crews using knowledge sources, multiple task outputs in conditional tasks, and ignored Crew task callbacks
- Memory & Storage Fixes: Fixed short-term memory handling with Bedrock, ensured correct embedder initialization, and added a reset memories function in the crew class
- Training & Execution Reliability: Fixed broken training and interpolation issues with dict and list input types

**New Features & Enhancements**
- Advanced Knowledge Management: Improved naming conventions and enhanced embedding configuration with custom embedder support
- Expanded Logging & Observability: Added JSON format support for logging and integrated MLflow tracing documentation
- Data Handling Improvements: Updated excel_knowledge_source.py to process multi-tab files
- General Performance & Codebase Clean-Up: Streamlined enterprise code alignment and resolved linting issues
- Adding new tool: `QdrantVectorSearchTool`

**Documentation & Guides**
- Updated AI & Memory Docs: Improved Bedrock, Google AI, and long-term memory documentation
- Task & Workflow Clarity: Added "Human Input" row to Task Attributes, Langfuse guide, and FileWriterTool documentation
- Fixed Various Typos & Formatting Issues
</Update>

<Update label="2025-01-28" description="v0.100.0">
**Features**
- Add Composio docs
- Add SageMaker as an LLM provider

**Fixes**
- Overall LLM connection issues
- Using safe accessors on training
- Add version check to crew_chat.py

**Documentation**
- New docs for crewai chat
- Improve formatting and clarity in CLI and Composio Tool docs
</Update>

<Update label="2025-01-20" description="v0.98.0">
**Features**
- Conversation crew v1
- Add unique ID to flow states
- Add @persist decorator with FlowPersistence interface

**Integrations**
- Add SambaNova integration
- Add NVIDIA NIM provider in cli
- Introducing VoyageAI

**Fixes**
- Fix API Key Behavior and Entity Handling in Mem0 Integration
- Fixed core invoke loop logic and relevant tests
- Make tool inputs actual objects and not strings
- Add important missing parts to creating tools
- Drop litellm version to prevent windows issue
- Before kickoff if inputs are none
- Fixed typos, nested pydantic model issue, and docling issues
</Update>

<Update label="2025-01-04" description="v0.95.0">
**New Features**
- Adding Multimodal Abilities to Crew
- Programmatic Guardrails
- HITL multiple rounds
- Gemini 2.0 Support
- CrewAI Flows Improvements
- Add Workflow Permissions
- Add support for langfuse with litellm
- Portkey Integration with CrewAI
- Add interpolate_only method and improve error handling
- Docling Support
- Weaviate Support

**Fixes**
- output_file not respecting system path
- disk I/O error when resetting short-term memory
- CrewJSONEncoder now accepts enums
- Python max version
- Interpolation for output_file in Task
- Handle coworker role name case/whitespace properly
- Add tiktoken as explicit dependency and document Rust requirement
- Include agent knowledge in planning process
- Change storage initialization to None for KnowledgeStorage
- Fix optional storage checks
- Include event emitter in flows
- Docstring, Error Handling, and Type Hints Improvements
- Suppressed userWarnings from litellm pydantic issues
</Update>

<Update label="2024-12-05" description="v0.86.0">
**Changes**
- Remove all references to pipeline and pipeline router
- Add Nvidia NIM as provider in Custom LLM
- Add knowledge demo + improve knowledge docs
- Add HITL multiple rounds of followup
- New docs about yaml crew with decorators
- Simplify template crew
</Update>

<Update label="2024-12-04" description="v0.85.0">
**Features**
- Added knowledge to agent level
- Feat/remove langchain
- Improve typed task outputs
- Log in to Tool Repository on crewai login

**Fixes**
- Fixes issues with result as answer not properly exiting LLM loop
- Fix missing key name when running with ollama provider
- Fix spelling issue found

**Documentation**
- Update readme for running mypy
- Add knowledge to mint.json
- Update Github actions
- Update Agents docs to include two approaches for creating an agent
- Improvements to LLM Configuration and Usage
</Update>

<Update label="2024-11-25" description="v0.83.0">
**New Features**
- New before_kickoff and after_kickoff crew callbacks
- Support to pre-seed agents with Knowledge
- Add support for retrieving user preferences and memories using Mem0

**Fixes**
- Fix Async Execution
- Upgrade chroma and adjust embedder function generator
- Update CLI Watson supported models + docs
- Reduce level for Bandit
- Fixing all tests

**Documentation**
- Update Docs
</Update>

<Update label="2024-11-13" description="v0.80.0">
**Fixes**
- Fixing Tokens callback replacement bug
- Fixing Step callback issue
- Add cached prompt tokens info on usage metrics
- Fix crew_train_success test
</Update>

**docs/complexity_precision.png** (new binary file, 16 KiB image)

**docs/concepts/agents.mdx** (new file, 357 lines)

---
title: Agents
description: Detailed guide on creating and managing agents within the CrewAI framework.
icon: robot
---

## Overview of an Agent

In the CrewAI framework, an `Agent` is an autonomous unit that can:
- Perform specific tasks
- Make decisions based on its role and goal
- Use tools to accomplish objectives
- Communicate and collaborate with other agents
- Maintain memory of interactions
- Delegate tasks when allowed

<Tip>
Think of an agent as a specialized team member with specific skills, expertise, and responsibilities. For example, a `Researcher` agent might excel at gathering and analyzing information, while a `Writer` agent might be better at creating content.
</Tip>

<Note type="info" title="Enterprise Enhancement: Visual Agent Builder">
CrewAI Enterprise includes a Visual Agent Builder that simplifies agent creation and configuration without writing code. Design your agents visually and test them in real-time.

![Visual Agent Builder Screenshot](/images/enterprise/crew-studio-interface.png)

The Visual Agent Builder enables:
- Intuitive agent configuration with form-based interfaces
- Real-time testing and validation
- Template library with pre-configured agent types
- Easy customization of agent attributes and behaviors
</Note>

## Agent Attributes

| Attribute | Parameter | Type | Description |
| :-- | :-- | :-- | :-- |
| **Role** | `role` | `str` | Defines the agent's function and expertise within the crew. |
| **Goal** | `goal` | `str` | The individual objective that guides the agent's decision-making. |
| **Backstory** | `backstory` | `str` | Provides context and personality to the agent, enriching interactions. |
| **LLM** _(optional)_ | `llm` | `Union[str, LLM, Any]` | Language model that powers the agent. Defaults to the model specified in `OPENAI_MODEL_NAME` or "gpt-4". |
| **Tools** _(optional)_ | `tools` | `List[BaseTool]` | Capabilities or functions available to the agent. Defaults to an empty list. |
| **Function Calling LLM** _(optional)_ | `function_calling_llm` | `Optional[Any]` | Language model for tool calling, overrides crew's LLM if specified. |
| **Max Iterations** _(optional)_ | `max_iter` | `int` | Maximum iterations before the agent must provide its best answer. Default is 20. |
| **Max RPM** _(optional)_ | `max_rpm` | `Optional[int]` | Maximum requests per minute to avoid rate limits. |
| **Max Execution Time** _(optional)_ | `max_execution_time` | `Optional[int]` | Maximum time (in seconds) for task execution. |
| **Memory** _(optional)_ | `memory` | `bool` | Whether the agent should maintain memory of interactions. Default is True. |
| **Verbose** _(optional)_ | `verbose` | `bool` | Enable detailed execution logs for debugging. Default is False. |
| **Allow Delegation** _(optional)_ | `allow_delegation` | `bool` | Allow the agent to delegate tasks to other agents. Default is False. |
| **Step Callback** _(optional)_ | `step_callback` | `Optional[Any]` | Function called after each agent step, overrides crew callback. |
| **Cache** _(optional)_ | `cache` | `bool` | Enable caching for tool usage. Default is True. |
| **System Template** _(optional)_ | `system_template` | `Optional[str]` | Custom system prompt template for the agent. |
| **Prompt Template** _(optional)_ | `prompt_template` | `Optional[str]` | Custom prompt template for the agent. |
| **Response Template** _(optional)_ | `response_template` | `Optional[str]` | Custom response template for the agent. |
| **Allow Code Execution** _(optional)_ | `allow_code_execution` | `Optional[bool]` | Enable code execution for the agent. Default is False. |
| **Max Retry Limit** _(optional)_ | `max_retry_limit` | `int` | Maximum number of retries when an error occurs. Default is 2. |
| **Respect Context Window** _(optional)_ | `respect_context_window` | `bool` | Keep messages under context window size by summarizing. Default is True. |
| **Code Execution Mode** _(optional)_ | `code_execution_mode` | `Literal["safe", "unsafe"]` | Mode for code execution: 'safe' (using Docker) or 'unsafe' (direct). Default is 'safe'. |
| **Embedder** _(optional)_ | `embedder` | `Optional[Dict[str, Any]]` | Configuration for the embedder used by the agent. |
| **Knowledge Sources** _(optional)_ | `knowledge_sources` | `Optional[List[BaseKnowledgeSource]]` | Knowledge sources available to the agent. |
| **Use System Prompt** _(optional)_ | `use_system_prompt` | `Optional[bool]` | Whether to use system prompt (for o1 model support). Default is True. |

## Creating Agents

There are two ways to create agents in CrewAI: using **YAML configuration (recommended)** or defining them **directly in code**.

### YAML Configuration (Recommended)

Using YAML configuration provides a cleaner, more maintainable way to define agents. We strongly recommend using this approach in your CrewAI projects.

After creating your CrewAI project as outlined in the [Installation](/installation) section, navigate to the `src/latest_ai_development/config/agents.yaml` file and modify the template to match your requirements.

<Note>
Variables in your YAML files (like `{topic}`) will be replaced with values from your inputs when running the crew:
```python Code
crew.kickoff(inputs={'topic': 'AI Agents'})
```
</Note>

Here's an example of how to configure agents using YAML:

```yaml agents.yaml
# src/latest_ai_development/config/agents.yaml
researcher:
  role: >
    {topic} Senior Data Researcher
  goal: >
    Uncover cutting-edge developments in {topic}
  backstory: >
    You're a seasoned researcher with a knack for uncovering the latest
    developments in {topic}. Known for your ability to find the most relevant
    information and present it in a clear and concise manner.

reporting_analyst:
  role: >
    {topic} Reporting Analyst
  goal: >
    Create detailed reports based on {topic} data analysis and research findings
  backstory: >
    You're a meticulous analyst with a keen eye for detail. You're known for
    your ability to turn complex data into clear and concise reports, making
    it easy for others to understand and act on the information you provide.
```

To use this YAML configuration in your code, create a crew class that inherits from `CrewBase`:

```python Code
# src/latest_ai_development/crew.py
from crewai import Agent, Crew, Process
from crewai.project import CrewBase, agent, crew
from crewai_tools import SerperDevTool

@CrewBase
class LatestAiDevelopmentCrew():
  """LatestAiDevelopment crew"""

  agents_config = "config/agents.yaml"

  @agent
  def researcher(self) -> Agent:
    return Agent(
      config=self.agents_config['researcher'],
      verbose=True,
      tools=[SerperDevTool()]
    )

  @agent
  def reporting_analyst(self) -> Agent:
    return Agent(
      config=self.agents_config['reporting_analyst'],
      verbose=True
    )
```

<Note>
The names you use in your YAML files (`agents.yaml`) should match the method names in your Python code.
</Note>

### Direct Code Definition

You can create agents directly in code by instantiating the `Agent` class. Here's a comprehensive example showing all available parameters:

```python Code
from crewai import Agent
from crewai_tools import SerperDevTool

# Create an agent with all available parameters
agent = Agent(
    role="Senior Data Scientist",
    goal="Analyze and interpret complex datasets to provide actionable insights",
    backstory="With over 10 years of experience in data science and machine learning, "
              "you excel at finding patterns in complex datasets.",
    llm="gpt-4",  # Default: OPENAI_MODEL_NAME or "gpt-4"
    function_calling_llm=None,  # Optional: Separate LLM for tool calling
    memory=True,  # Default: True
    verbose=False,  # Default: False
    allow_delegation=False,  # Default: False
    max_iter=20,  # Default: 20 iterations
    max_rpm=None,  # Optional: Rate limit for API calls
    max_execution_time=None,  # Optional: Maximum execution time in seconds
    max_retry_limit=2,  # Default: 2 retries on error
    allow_code_execution=False,  # Default: False
    code_execution_mode="safe",  # Default: "safe" (options: "safe", "unsafe")
    respect_context_window=True,  # Default: True
    use_system_prompt=True,  # Default: True
    tools=[SerperDevTool()],  # Optional: List of tools
    knowledge_sources=None,  # Optional: List of knowledge sources
    embedder=None,  # Optional: Custom embedder configuration
    system_template=None,  # Optional: Custom system prompt template
    prompt_template=None,  # Optional: Custom prompt template
    response_template=None,  # Optional: Custom response template
    step_callback=None,  # Optional: Callback function for monitoring
)
```

Let's break down some key parameter combinations for common use cases:

#### Basic Research Agent
```python Code
research_agent = Agent(
    role="Research Analyst",
    goal="Find and summarize information about specific topics",
    backstory="You are an experienced researcher with attention to detail",
    tools=[SerperDevTool()],
    verbose=True  # Enable logging for debugging
)
```

#### Code Development Agent
```python Code
dev_agent = Agent(
    role="Senior Python Developer",
    goal="Write and debug Python code",
    backstory="Expert Python developer with 10 years of experience",
    allow_code_execution=True,
    code_execution_mode="safe",  # Uses Docker for safety
    max_execution_time=300,  # 5-minute timeout
    max_retry_limit=3  # More retries for complex code tasks
)
```

#### Long-Running Analysis Agent
```python Code
analysis_agent = Agent(
    role="Data Analyst",
    goal="Perform deep analysis of large datasets",
    backstory="Specialized in big data analysis and pattern recognition",
    memory=True,
    respect_context_window=True,
    max_rpm=10,  # Limit API calls
    function_calling_llm="gpt-4o-mini"  # Cheaper model for tool calls
)
```

#### Custom Template Agent
```python Code
custom_agent = Agent(
    role="Customer Service Representative",
    goal="Assist customers with their inquiries",
    backstory="Experienced in customer support with a focus on satisfaction",
    system_template="""<|start_header_id|>system<|end_header_id|>
{{ .System }}<|eot_id|>""",
    prompt_template="""<|start_header_id|>user<|end_header_id|>
{{ .Prompt }}<|eot_id|>""",
    response_template="""<|start_header_id|>assistant<|end_header_id|>
{{ .Response }}<|eot_id|>""",
)
```

### Parameter Details

#### Critical Parameters
- `role`, `goal`, and `backstory` are required and shape the agent's behavior
- `llm` determines the language model used (default: OpenAI's GPT-4)

#### Memory and Context
- `memory`: Enable to maintain conversation history
- `respect_context_window`: Prevents token limit issues
- `knowledge_sources`: Add domain-specific knowledge bases

#### Execution Control
- `max_iter`: Maximum attempts before giving best answer
- `max_execution_time`: Timeout in seconds
- `max_rpm`: Rate limiting for API calls
- `max_retry_limit`: Retries on error

#### Code Execution
- `allow_code_execution`: Must be True to run code
- `code_execution_mode`:
  - `"safe"`: Uses Docker (recommended for production)
  - `"unsafe"`: Direct execution (use only in trusted environments)

#### Templates
- `system_template`: Defines agent's core behavior
- `prompt_template`: Structures input format
- `response_template`: Formats agent responses

<Note>
When using custom templates, you can use variables like `{role}`, `{goal}`, and `{input}` in your templates. These will be automatically populated during execution.
</Note>

## Agent Tools

Agents can be equipped with various tools to enhance their capabilities. CrewAI supports tools from:
- [CrewAI Toolkit](https://github.com/joaomdmoura/crewai-tools)
- [LangChain Tools](https://python.langchain.com/docs/integrations/tools)

Here's how to add tools to an agent:

```python Code
from crewai import Agent
from crewai_tools import SerperDevTool, WikipediaTools

# Create tools
search_tool = SerperDevTool()
wiki_tool = WikipediaTools()

# Add tools to agent
researcher = Agent(
    role="AI Technology Researcher",
    goal="Research the latest AI developments",
    tools=[search_tool, wiki_tool],
    verbose=True
)
```

## Agent Memory and Context

Agents can maintain memory of their interactions and use context from previous tasks. This is particularly useful for complex workflows where information needs to be retained across multiple tasks.

```python Code
from crewai import Agent

analyst = Agent(
    role="Data Analyst",
    goal="Analyze and remember complex data patterns",
    memory=True,  # Enable memory
    verbose=True
)
```

<Note>
When `memory` is enabled, the agent will maintain context across multiple interactions, improving its ability to handle complex, multi-step tasks.
</Note>

## Important Considerations and Best Practices

### Security and Code Execution
- When using `allow_code_execution`, be cautious with user input and always validate it
- Use `code_execution_mode: "safe"` (Docker) in production environments
- Consider setting appropriate `max_execution_time` limits to prevent infinite loops

### Performance Optimization
- Use `respect_context_window: true` to prevent token limit issues
- Set appropriate `max_rpm` to avoid rate limiting
- Enable `cache: true` to improve performance for repetitive tasks
- Adjust `max_iter` and `max_retry_limit` based on task complexity

### Memory and Context Management
- Use `memory: true` for tasks requiring historical context
- Leverage `knowledge_sources` for domain-specific information
- Configure `embedder` when using custom embedding models (see the sketch after this list)
- Use custom templates (`system_template`, `prompt_template`, `response_template`) for fine-grained control over agent behavior
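
As a minimal sketch of that embedder configuration (the dictionary shape follows the `embedder` attribute in the table above; the provider and model values are illustrative):

```python Code
from crewai import Agent

analyst = Agent(
    role="Data Analyst",
    goal="Analyze datasets with domain-aware retrieval",
    backstory="An analyst who leans on curated knowledge sources.",
    embedder={
        "provider": "openai",                           # embedding provider; illustrative
        "config": {"model": "text-embedding-3-small"},  # illustrative model choice
    },
)
```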

### Agent Collaboration
- Enable `allow_delegation: true` when agents need to work together
- Use `step_callback` to monitor and log agent interactions
- Consider using different LLMs for different purposes:
  - Main `llm` for complex reasoning
  - `function_calling_llm` for efficient tool usage

### Model Compatibility
- Set `use_system_prompt: false` for older models that don't support system messages
- Ensure your chosen `llm` supports the features you need (like function calling)

## Troubleshooting Common Issues

1. **Rate Limiting**: If you're hitting API rate limits:
   - Implement appropriate `max_rpm`
   - Use caching for repetitive operations
   - Consider batching requests

2. **Context Window Errors**: If you're exceeding context limits:
   - Enable `respect_context_window`
   - Use more efficient prompts
   - Clear agent memory periodically

3. **Code Execution Issues**: If code execution fails:
   - Verify Docker is installed for safe mode
   - Check execution permissions
   - Review code sandbox settings

4. **Memory Issues**: If agent responses seem inconsistent:
   - Verify memory is enabled
   - Check knowledge source configuration
   - Review conversation history management

Remember that agents are most effective when configured according to their specific use case. Take time to understand your requirements and adjust these parameters accordingly.

**docs/concepts/cli.mdx** (new file, 211 lines)

---
title: CLI
description: Learn how to use the CrewAI CLI to interact with CrewAI.
icon: terminal
---

# CrewAI CLI Documentation

The CrewAI CLI provides a set of commands to interact with CrewAI, allowing you to create, train, run, and manage crews & flows.

## Installation

To use the CrewAI CLI, make sure you have CrewAI installed:

```shell Terminal
pip install crewai
```

## Basic Usage

The basic structure of a CrewAI CLI command is:

```shell Terminal
crewai [COMMAND] [OPTIONS] [ARGUMENTS]
```

## Available Commands

### 1. Create

Create a new crew or flow.

```shell Terminal
crewai create [OPTIONS] TYPE NAME
```

- `TYPE`: Choose between "crew" or "flow"
- `NAME`: Name of the crew or flow

Example:
```shell Terminal
crewai create crew my_new_crew
crewai create flow my_new_flow
```

### 2. Version

Show the installed version of CrewAI.

```shell Terminal
crewai version [OPTIONS]
```

- `--tools`: (Optional) Show the installed version of CrewAI tools

Example:
```shell Terminal
crewai version
crewai version --tools
```

### 3. Train

Train the crew for a specified number of iterations.

```shell Terminal
crewai train [OPTIONS]
```

- `-n, --n_iterations INTEGER`: Number of iterations to train the crew (default: 5)
- `-f, --filename TEXT`: Path to a custom file for training (default: "trained_agents_data.pkl")

Example:
```shell Terminal
crewai train -n 10 -f my_training_data.pkl
```

### 4. Replay

Replay the crew execution from a specific task.

```shell Terminal
crewai replay [OPTIONS]
```

- `-t, --task_id TEXT`: Replay the crew from this task ID, including all subsequent tasks

Example:
```shell Terminal
crewai replay -t task_123456
```

### 5. Log-tasks-outputs

Retrieve your latest crew.kickoff() task outputs.

```shell Terminal
crewai log-tasks-outputs
```

### 6. Reset-memories

Reset the crew memories (long, short, entity, latest_crew_kickoff_outputs).

```shell Terminal
crewai reset-memories [OPTIONS]
```

- `-l, --long`: Reset LONG TERM memory
- `-s, --short`: Reset SHORT TERM memory
- `-e, --entities`: Reset ENTITIES memory
- `-k, --kickoff-outputs`: Reset LATEST KICKOFF TASK OUTPUTS
- `-a, --all`: Reset ALL memories

Example:
```shell Terminal
crewai reset-memories --long --short
crewai reset-memories --all
```

### 7. Test

Test the crew and evaluate the results.

```shell Terminal
crewai test [OPTIONS]
```

- `-n, --n_iterations INTEGER`: Number of iterations to test the crew (default: 3)
- `-m, --model TEXT`: LLM Model to run the tests on the Crew (default: "gpt-4o-mini")

Example:
```shell Terminal
crewai test -n 5 -m gpt-3.5-turbo
```

### 8. Run

Run the crew or flow.

```shell Terminal
crewai run
```

<Note>
Starting from version 0.103.0, the `crewai run` command can be used to run both standard crews and flows. For flows, it automatically detects the type from pyproject.toml and runs the appropriate command. This is now the recommended way to run both crews and flows.
</Note>

<Note>
Make sure to run these commands from the directory where your CrewAI project is set up.
Some commands may require additional configuration or setup within your project structure.
</Note>

### 9. Chat

Starting in version `0.98.0`, when you run the `crewai chat` command, you start an interactive session with your crew. The AI assistant will guide you by asking for necessary inputs to execute the crew. Once all inputs are provided, the crew will execute its tasks.

After receiving the results, you can continue interacting with the assistant for further instructions or questions.

```shell Terminal
crewai chat
```
<Note>
Ensure you execute these commands from your CrewAI project's root directory.
</Note>
<Note>
IMPORTANT: Set the `chat_llm` property in your `crew.py` file to enable this command.

```python
@crew
def crew(self) -> Crew:
    return Crew(
        agents=self.agents,
        tasks=self.tasks,
        process=Process.sequential,
        verbose=True,
        chat_llm="gpt-4o",  # LLM for chat orchestration
    )
```
</Note>

### 10. API Keys

When running the `crewai create crew` command, the CLI will first show you the top 5 most common LLM providers and ask you to select one.

Once you've selected an LLM provider, you will be prompted for API keys.

#### Initial API key providers

The CLI will initially prompt for API keys for the following services:

* OpenAI
* Groq
* Anthropic
* Google Gemini
* SambaNova

When you select a provider, the CLI will prompt you to enter your API key.

#### Other Options

If you select option 6, you will be able to select from a list of LiteLLM-supported providers.

When you select a provider, the CLI will prompt you to enter the key name and the API key.

See the following link for each provider's key name:

* [LiteLLM Providers](https://docs.litellm.ai/docs/providers)

**docs/concepts/collaboration.mdx** (new file, 51 lines)

---
|
||||
title: Collaboration
|
||||
description: Exploring the dynamics of agent collaboration within the CrewAI framework, focusing on the newly integrated features for enhanced functionality.
|
||||
icon: screen-users
|
||||
---
|
||||
|
||||
## Collaboration Fundamentals
|
||||
|
||||
Collaboration in CrewAI is fundamental, enabling agents to combine their skills, share information, and assist each other in task execution, embodying a truly cooperative ecosystem.
|
||||
|
||||
- **Information Sharing**: Ensures all agents are well-informed and can contribute effectively by sharing data and findings.
|
||||
- **Task Assistance**: Allows agents to seek help from peers with the required expertise for specific tasks.
|
||||
- **Resource Allocation**: Optimizes task execution through the efficient distribution and sharing of resources among agents.
|
||||
|
||||
## Enhanced Attributes for Improved Collaboration

The `Crew` class has been enriched with several attributes to support advanced functionalities:

| Feature | Description |
|:--------|:------------|
| **Language Model Management** (`manager_llm`, `function_calling_llm`) | Manages language models for executing tasks and tools. `manager_llm` is required for hierarchical processes, while `function_calling_llm` is optional with a default value for streamlined interactions. |
| **Custom Manager Agent** (`manager_agent`) | Specifies a custom agent as the manager, replacing the default CrewAI manager. |
| **Process Flow** (`process`) | Defines execution logic (e.g., sequential, hierarchical) for task distribution. |
| **Verbose Logging** (`verbose`) | Provides detailed logging for monitoring and debugging. Accepts integer and boolean values to control verbosity level. |
| **Rate Limiting** (`max_rpm`) | Limits requests per minute to optimize resource usage. Setting guidelines depend on task complexity and load. |
| **Internationalization / Customization** (`prompt_file`) | Supports prompt customization for global usability. [Example of file](https://github.com/joaomdmoura/crewAI/blob/main/src/crewai/translations/en.json) |
| **Callback and Telemetry** (`step_callback`, `task_callback`) | Enables step-wise and task-level execution monitoring and telemetry for performance analytics. |
| **Crew Sharing** (`share_crew`) | Allows sharing crew data with CrewAI for model improvement. Privacy implications and benefits should be considered. |
| **Usage Metrics** (`usage_metrics`) | Logs all LLM usage metrics during task execution for performance insights. |
| **Memory Usage** (`memory`) | Enables memory for storing execution history, aiding in agent learning and task efficiency. |
| **Embedder Configuration** (`embedder`) | Configures the embedder for language understanding and generation, with support for provider customization. |
| **Cache Management** (`cache`) | Specifies whether to cache tool execution results, enhancing performance. |
| **Output Logging** (`output_log_file`) | Defines the file path for logging crew execution output. |
| **Planning Mode** (`planning`) | Enables action planning before task execution. Set `planning=True` to activate. |
| **Replay Feature** (`replay`) | Provides CLI for listing tasks from the last run and replaying from specific tasks, aiding in task management and troubleshooting. |
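
For orientation, here is a minimal sketch combining several of these attributes (the agents, tasks, and model name are illustrative assumptions, not prescribed values):

```python Code
from crewai import Crew, Process

crew = Crew(
    agents=[researcher, writer],        # Agent objects defined elsewhere
    tasks=[research_task, write_task],  # Task objects defined elsewhere
    process=Process.sequential,         # execution logic for task distribution
    verbose=True,                       # detailed logging
    max_rpm=30,                         # cap requests per minute
    planning=True,                      # plan before each iteration
)
```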

## Delegation (Dividing to Conquer)

Delegation enhances functionality by allowing agents to intelligently assign tasks or seek help, thereby amplifying the crew's overall capability.
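
Delegation is switched on per agent via `allow_delegation`. A minimal sketch (the role, goal, and backstory strings are illustrative):

```python Code
from crewai import Agent

writer = Agent(
    role="Writer",
    goal="Compile research into a polished report",
    backstory="A technical writer who knows when to consult specialists",
    allow_delegation=True,  # let this agent delegate work or ask co-workers questions
)
```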

## Implementing Collaboration and Delegation

Setting up a crew involves defining the roles and capabilities of each agent. CrewAI seamlessly manages their interactions, ensuring efficient collaboration and delegation, with enhanced customization and monitoring features to adapt to various operational needs.

## Example Scenario

Consider a crew with a researcher agent tasked with data gathering and a writer agent responsible for compiling reports. The integration of advanced language model management and process flow attributes allows for more sophisticated interactions, such as the writer delegating complex research tasks to the researcher or querying specific information, thereby facilitating a seamless workflow.

## Conclusion

The integration of advanced attributes and functionalities into the CrewAI framework significantly enriches the agent collaboration ecosystem. These enhancements not only simplify interactions but also offer unprecedented flexibility and control, paving the way for sophisticated AI-driven solutions capable of tackling complex tasks through intelligent collaboration and delegation.

docs/concepts/crews.mdx (new file, +353 lines)

---
title: Crews
description: Understanding and utilizing crews in the crewAI framework with comprehensive attributes and functionalities.
icon: people-group
---

## What is a Crew?

A crew in crewAI represents a collaborative group of agents working together to achieve a set of tasks. Each crew defines the strategy for task execution, agent collaboration, and the overall workflow.

## Crew Attributes

| Attribute | Parameters | Description |
| :-------- | :--------- | :---------- |
| **Tasks** | `tasks` | A list of tasks assigned to the crew. |
| **Agents** | `agents` | A list of agents that are part of the crew. |
| **Process** _(optional)_ | `process` | The process flow (e.g., sequential, hierarchical) the crew follows. Default is `sequential`. |
| **Verbose** _(optional)_ | `verbose` | The verbosity level for logging during execution. Defaults to `False`. |
| **Manager LLM** _(optional)_ | `manager_llm` | The language model used by the manager agent in a hierarchical process. **Required when using a hierarchical process.** |
| **Function Calling LLM** _(optional)_ | `function_calling_llm` | If passed, the crew will use this LLM to do function calling for tools for all agents in the crew. Each agent can have its own LLM, which overrides the crew's LLM for function calling. |
| **Config** _(optional)_ | `config` | Optional configuration settings for the crew, in `Json` or `Dict[str, Any]` format. |
| **Max RPM** _(optional)_ | `max_rpm` | Maximum requests per minute the crew adheres to during execution. Defaults to `None`. |
| **Memory** _(optional)_ | `memory` | Utilized for storing execution memories (short-term, long-term, entity memory). |
| **Memory Config** _(optional)_ | `memory_config` | Configuration for the memory provider to be used by the crew. |
| **Cache** _(optional)_ | `cache` | Specifies whether to use a cache for storing the results of tools' execution. Defaults to `True`. |
| **Embedder** _(optional)_ | `embedder` | Configuration for the embedder to be used by the crew. Mostly used by memory for now. Default is `{"provider": "openai"}`. |
| **Step Callback** _(optional)_ | `step_callback` | A function that is called after each step of every agent. This can be used to log the agent's actions or to perform other operations; it won't override the agent-specific `step_callback`. |
| **Task Callback** _(optional)_ | `task_callback` | A function that is called after the completion of each task. Useful for monitoring or additional operations post-task execution. |
| **Share Crew** _(optional)_ | `share_crew` | Whether you want to share the complete crew information and execution with the crewAI team to make the library better, and allow us to train models. |
| **Output Log File** _(optional)_ | `output_log_file` | Set to `True` to save logs as `logs.txt` in the current directory, or provide a file path. Logs will be in JSON format if the filename ends in `.json`, otherwise `.txt`. Defaults to `None`. |
| **Manager Agent** _(optional)_ | `manager_agent` | Sets a custom agent that will be used as the manager. |
| **Prompt File** _(optional)_ | `prompt_file` | Path to the prompt JSON file to be used for the crew. |
| **Planning** _(optional)_ | `planning` | Adds planning ability to the crew. When activated, before each crew iteration all crew data is sent to an AgentPlanner that plans the tasks; the resulting plan is added to each task description. |
| **Planning LLM** _(optional)_ | `planning_llm` | The language model used by the AgentPlanner in a planning process. |

<Tip>
**Crew Max RPM**: The `max_rpm` attribute sets the maximum number of requests per minute the crew can perform to avoid rate limits and will override individual agents' `max_rpm` settings if you set it.
</Tip>

## Creating Crews

There are two ways to create crews in CrewAI: using **YAML configuration (recommended)** or defining them **directly in code**.

### YAML Configuration (Recommended)

Using YAML configuration provides a cleaner, more maintainable way to define crews and is consistent with how agents and tasks are defined in CrewAI projects.

After creating your CrewAI project as outlined in the [Installation](/installation) section, you can define your crew in a class that inherits from `CrewBase` and uses decorators to define agents, tasks, and the crew itself.

#### Example Crew Class with Decorators

```python Code
from crewai import Agent, Crew, Task, Process
from crewai.project import CrewBase, agent, task, crew, before_kickoff, after_kickoff


@CrewBase
class YourCrewName:
    """Description of your crew"""

    # Paths to your YAML configuration files
    # To see an example agent and task defined in YAML, check out the following:
    # - Task: https://docs.crewai.com/concepts/tasks#yaml-configuration-recommended
    # - Agents: https://docs.crewai.com/concepts/agents#yaml-configuration-recommended
    agents_config = 'config/agents.yaml'
    tasks_config = 'config/tasks.yaml'

    @before_kickoff
    def prepare_inputs(self, inputs):
        # Modify inputs before the crew starts
        inputs['additional_data'] = "Some extra information"
        return inputs

    @after_kickoff
    def process_output(self, output):
        # Modify output after the crew finishes
        output.raw += "\nProcessed after kickoff."
        return output

    @agent
    def agent_one(self) -> Agent:
        return Agent(
            config=self.agents_config['agent_one'],
            verbose=True
        )

    @agent
    def agent_two(self) -> Agent:
        return Agent(
            config=self.agents_config['agent_two'],
            verbose=True
        )

    @task
    def task_one(self) -> Task:
        return Task(
            config=self.tasks_config['task_one']
        )

    @task
    def task_two(self) -> Task:
        return Task(
            config=self.tasks_config['task_two']
        )

    @crew
    def crew(self) -> Crew:
        return Crew(
            agents=self.agents,  # Automatically collected by the @agent decorator
            tasks=self.tasks,    # Automatically collected by the @task decorator
            process=Process.sequential,
            verbose=True,
        )
```

<Note>
Tasks will be executed in the order they are defined.
</Note>

The `CrewBase` class, along with these decorators, automates the collection of agents and tasks, reducing the need for manual management.

#### Decorators overview from `annotations.py`

CrewAI provides several decorators in the `annotations.py` file that are used to mark methods within your crew class for special handling:

- `@CrewBase`: Marks the class as a crew base class.
- `@agent`: Denotes a method that returns an `Agent` object.
- `@task`: Denotes a method that returns a `Task` object.
- `@crew`: Denotes the method that returns the `Crew` object.
- `@before_kickoff`: (Optional) Marks a method to be executed before the crew starts.
- `@after_kickoff`: (Optional) Marks a method to be executed after the crew finishes.

These decorators help in organizing your crew's structure and automatically collecting agents and tasks without manually listing them.

### Direct Code Definition (Alternative)

Alternatively, you can define the crew directly in code without using YAML configuration files.

```python Code
from crewai import Agent, Crew, Task, Process
from crewai_tools import YourCustomTool

class YourCrewName:
    def agent_one(self) -> Agent:
        return Agent(
            role="Data Analyst",
            goal="Analyze data trends in the market",
            backstory="An experienced data analyst with a background in economics",
            verbose=True,
            tools=[YourCustomTool()]
        )

    def agent_two(self) -> Agent:
        return Agent(
            role="Market Researcher",
            goal="Gather information on market dynamics",
            backstory="A diligent researcher with a keen eye for detail",
            verbose=True
        )

    def task_one(self) -> Task:
        return Task(
            description="Collect recent market data and identify trends.",
            expected_output="A report summarizing key trends in the market.",
            agent=self.agent_one()
        )

    def task_two(self) -> Task:
        return Task(
            description="Research factors affecting market dynamics.",
            expected_output="An analysis of factors influencing the market.",
            agent=self.agent_two()
        )

    def crew(self) -> Crew:
        return Crew(
            agents=[self.agent_one(), self.agent_two()],
            tasks=[self.task_one(), self.task_two()],
            process=Process.sequential,
            verbose=True
        )
```

In this example:

- Agents and tasks are defined directly within the class without decorators.
- We manually create and manage the list of agents and tasks.
- This approach provides more control but can be less maintainable for larger projects.

## Crew Output

The output of a crew in the CrewAI framework is encapsulated within the `CrewOutput` class.
This class provides a structured way to access results of the crew's execution, including various formats such as raw strings, JSON, and Pydantic models.
The `CrewOutput` includes the results from the final task output, token usage, and individual task outputs.

### Crew Output Attributes

| Attribute | Parameters | Type | Description |
| :-------- | :--------- | :--- | :---------- |
| **Raw** | `raw` | `str` | The raw output of the crew. This is the default format for the output. |
| **Pydantic** | `pydantic` | `Optional[BaseModel]` | A Pydantic model object representing the structured output of the crew. |
| **JSON Dict** | `json_dict` | `Optional[Dict[str, Any]]` | A dictionary representing the JSON output of the crew. |
| **Tasks Output** | `tasks_output` | `List[TaskOutput]` | A list of `TaskOutput` objects, each representing the output of a task in the crew. |
| **Token Usage** | `token_usage` | `Dict[str, Any]` | A summary of token usage, providing insights into the language model's performance during execution. |

### Crew Output Methods and Properties

| Method/Property | Description |
| :-------------- | :---------- |
| **json** | Returns the JSON string representation of the crew output if the output format is JSON. |
| **to_dict** | Converts the JSON and Pydantic outputs to a dictionary. |
| **\_\_str\_\_** | Returns the string representation of the crew output, prioritizing Pydantic, then JSON, then raw. |
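
In practice (a small sketch, assuming `crew_output` is the `CrewOutput` of a finished run):

```python Code
# String form prioritizes the Pydantic output, then JSON, then raw
print(str(crew_output))

# Dictionary form converted from the JSON/Pydantic output
output_dict = crew_output.to_dict()
```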

### Accessing Crew Outputs

Once a crew has been executed, its output can be accessed through the `output` attribute of the `Crew` object. The `CrewOutput` class provides various ways to interact with and present this output.

#### Example

```python Code
import json

# Example crew execution
crew = Crew(
    agents=[research_agent, writer_agent],
    tasks=[research_task, write_article_task],
    verbose=True
)

crew_output = crew.kickoff()

# Accessing the crew output
print(f"Raw Output: {crew_output.raw}")
if crew_output.json_dict:
    print(f"JSON Output: {json.dumps(crew_output.json_dict, indent=2)}")
if crew_output.pydantic:
    print(f"Pydantic Output: {crew_output.pydantic}")
print(f"Tasks Output: {crew_output.tasks_output}")
print(f"Token Usage: {crew_output.token_usage}")
```

## Accessing Crew Logs

You can see a real-time log of the crew execution by setting `output_log_file` to `True` (boolean) or to a file name (string). Events can be logged as either `file_name.txt` or `file_name.json`.
When set to `True`, logs are saved as `logs.txt`.

When `output_log_file` is set to `False` (boolean) or `None`, no logs are written.

```python Code
# Save crew logs
crew = Crew(output_log_file=True)               # Logs will be saved as logs.txt
crew = Crew(output_log_file="file_name")        # Logs will be saved as file_name.txt
crew = Crew(output_log_file="file_name.txt")    # Logs will be saved as file_name.txt
crew = Crew(output_log_file="file_name.json")   # Logs will be saved as file_name.json
```

## Memory Utilization

Crews can utilize memory (short-term, long-term, and entity memory) to enhance their execution and learning over time. This feature allows crews to store and recall execution memories, aiding in decision-making and task execution strategies.
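
A minimal sketch enabling memory (agents and tasks as defined elsewhere; the embedder falls back to the OpenAI default when omitted):

```python Code
crew = Crew(
    agents=[agent1, agent2],
    tasks=[task1, task2],
    memory=True,  # enables short-term, long-term, and entity memory
)
```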

## Cache Utilization

Caches can be employed to store the results of tools' execution, making the process more efficient by reducing the need to re-execute identical tasks.
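
Caching of tool results is on by default and can be toggled explicitly:

```python Code
crew = Crew(
    agents=[agent1, agent2],
    tasks=[task1, task2],
    cache=True,  # reuse cached results for identical tool calls (the default)
)
```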

## Crew Usage Metrics

After the crew execution, you can access the `usage_metrics` attribute to view the language model (LLM) usage metrics for all tasks executed by the crew. This provides insights into operational efficiency and areas for improvement.

```python Code
# Access the crew's usage metrics
crew = Crew(agents=[agent1, agent2], tasks=[task1, task2])
crew.kickoff()
print(crew.usage_metrics)
```

## Crew Execution Process

- **Sequential Process**: Tasks are executed one after another, allowing for a linear flow of work.
- **Hierarchical Process**: A manager agent coordinates the crew, delegating tasks and validating outcomes before proceeding. **Note**: A `manager_llm` or `manager_agent` is required for this process, and it's essential for validating the process flow (see the sketch below).
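
A minimal hierarchical setup (the agents, tasks, and manager model name are illustrative assumptions):

```python Code
from crewai import Crew, Process

crew = Crew(
    agents=[researcher, writer],
    tasks=[research_task, write_task],
    process=Process.hierarchical,
    manager_llm="gpt-4o",  # the LLM that powers the coordinating manager agent
)
```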

### Kicking Off a Crew

Once your crew is assembled, initiate the workflow with the `kickoff()` method. This starts the execution process according to the defined process flow.

```python Code
# Start the crew's task execution
result = my_crew.kickoff()
print(result)
```

### Different Ways to Kick Off a Crew

Once your crew is assembled, initiate the workflow with the appropriate kickoff method. CrewAI provides several methods for better control over the kickoff process: `kickoff()`, `kickoff_for_each()`, `kickoff_async()`, and `kickoff_for_each_async()`.

- `kickoff()`: Starts the execution process according to the defined process flow.
- `kickoff_for_each()`: Executes the crew once for each item in the provided input collection, sequentially.
- `kickoff_async()`: Initiates the workflow asynchronously.
- `kickoff_for_each_async()`: Executes the crew for each provided input item concurrently, leveraging asynchronous processing.

```python Code
# Start the crew's task execution
result = my_crew.kickoff()
print(result)

# Example of using kickoff_for_each
inputs_array = [{'topic': 'AI in healthcare'}, {'topic': 'AI in finance'}]
results = my_crew.kickoff_for_each(inputs=inputs_array)
for result in results:
    print(result)

# Example of using kickoff_async (the async variants return coroutines,
# so they must be awaited inside an async function)
inputs = {'topic': 'AI in healthcare'}
async_result = await my_crew.kickoff_async(inputs=inputs)
print(async_result)

# Example of using kickoff_for_each_async
inputs_array = [{'topic': 'AI in healthcare'}, {'topic': 'AI in finance'}]
async_results = await my_crew.kickoff_for_each_async(inputs=inputs_array)
for async_result in async_results:
    print(async_result)
```

These methods provide flexibility in how you manage and execute tasks within your crew, allowing for both synchronous and asynchronous workflows tailored to your needs.

### Replaying from a Specific Task

You can now replay from a specific task using our CLI command `replay`.

The replay feature in CrewAI allows you to replay from a specific task using the command-line interface (CLI). By running the command `crewai replay -t <task_id>`, you can specify the `task_id` for the replay process.

Kickoffs now save the task outputs of the latest run locally so that you can replay from them.

### Replaying from a Specific Task Using the CLI

To use the replay feature, follow these steps:

1. Open your terminal or command prompt.
2. Navigate to the directory where your CrewAI project is located.
3. Run the following commands.

To view the latest kickoff task IDs, use:

```shell
crewai log-tasks-outputs
```

Then, to replay from a specific task, use:

```shell
crewai replay -t <task_id>
```

These commands let you replay from your latest kickoff tasks, still retaining context from previously executed tasks.
docs/concepts/event-listener.mdx (new file, +365 lines)

---
title: 'Event Listeners'
description: 'Tap into CrewAI events to build custom integrations and monitoring'
icon: spinner
---

# Event Listeners

CrewAI provides a powerful event system that allows you to listen for and react to various events that occur during the execution of your Crew. This feature enables you to build custom integrations, monitoring solutions, logging systems, or any other functionality that needs to be triggered based on CrewAI's internal events.

## How It Works

CrewAI uses an event bus architecture to emit events throughout the execution lifecycle. The event system is built on the following components:

1. **CrewAIEventsBus**: A singleton event bus that manages event registration and emission
2. **BaseEvent**: Base class for all events in the system
3. **BaseEventListener**: Abstract base class for creating custom event listeners

When specific actions occur in CrewAI (like a Crew starting execution, an Agent completing a task, or a tool being used), the system emits corresponding events. You can register handlers for these events to execute custom code when they occur.

<Note type="info" title="Enterprise Enhancement: Prompt Tracing">
|
||||
CrewAI Enterprise provides a built-in Prompt Tracing feature that leverages the event system to track, store, and visualize all prompts, completions, and associated metadata. This provides powerful debugging capabilities and transparency into your agent operations.
|
||||
|
||||

|
||||
|
||||
With Prompt Tracing you can:
|
||||
- View the complete history of all prompts sent to your LLM
|
||||
- Track token usage and costs
|
||||
- Debug agent reasoning failures
|
||||
- Share prompt sequences with your team
|
||||
- Compare different prompt strategies
|
||||
- Export traces for compliance and auditing
|
||||
</Note>
|
||||

## Creating a Custom Event Listener

To create a custom event listener, you need to:

1. Create a class that inherits from `BaseEventListener`
2. Implement the `setup_listeners` method
3. Register handlers for the events you're interested in
4. Create an instance of your listener in the appropriate file

Here's a simple example of a custom event listener class:

```python
from crewai.utilities.events import (
    CrewKickoffStartedEvent,
    CrewKickoffCompletedEvent,
    AgentExecutionCompletedEvent,
)
from crewai.utilities.events.base_event_listener import BaseEventListener

class MyCustomListener(BaseEventListener):
    def __init__(self):
        super().__init__()

    def setup_listeners(self, crewai_event_bus):
        @crewai_event_bus.on(CrewKickoffStartedEvent)
        def on_crew_started(source, event):
            print(f"Crew '{event.crew_name}' has started execution!")

        @crewai_event_bus.on(CrewKickoffCompletedEvent)
        def on_crew_completed(source, event):
            print(f"Crew '{event.crew_name}' has completed execution!")
            print(f"Output: {event.output}")

        @crewai_event_bus.on(AgentExecutionCompletedEvent)
        def on_agent_execution_completed(source, event):
            print(f"Agent '{event.agent.role}' completed task")
            print(f"Output: {event.output}")
```

## Properly Registering Your Listener

Simply defining your listener class isn't enough. You need to create an instance of it and ensure it's imported in your application. This ensures that:

1. The event handlers are registered with the event bus
2. The listener instance remains in memory (not garbage collected)
3. The listener is active when events are emitted

### Option 1: Import and Instantiate in Your Crew or Flow Implementation

The most important thing is to create an instance of your listener in the file where your Crew or Flow is defined and executed:

#### For Crew-based Applications

Create and import your listener at the top of your Crew implementation file:

```python
# In your crew.py file
from crewai import Agent, Crew, Task
from my_listeners import MyCustomListener

# Create an instance of your listener
my_listener = MyCustomListener()

class MyCustomCrew:
    # Your crew implementation...

    def crew(self):
        return Crew(
            agents=[...],
            tasks=[...],
            # ...
        )
```

#### For Flow-based Applications

Create and import your listener at the top of your Flow implementation file:

```python
# In your main.py or flow.py file
from crewai.flow import Flow, listen, start
from my_listeners import MyCustomListener

# Create an instance of your listener
my_listener = MyCustomListener()

class MyCustomFlow(Flow):
    # Your flow implementation...

    @start()
    def first_step(self):
        # ...
```

This ensures that your listener is loaded and active when your Crew or Flow is executed.

### Option 2: Create a Package for Your Listeners

For a more structured approach, especially if you have multiple listeners:

1. Create a package for your listeners:

```
my_project/
├── listeners/
│   ├── __init__.py
│   ├── my_custom_listener.py
│   └── another_listener.py
```

2. In `my_custom_listener.py`, define your listener class and create an instance:

```python
# my_custom_listener.py
from crewai.utilities.events.base_event_listener import BaseEventListener
# ... import events ...

class MyCustomListener(BaseEventListener):
    # ... implementation ...

# Create an instance of your listener
my_custom_listener = MyCustomListener()
```

3. In `__init__.py`, import the listener instances to ensure they're loaded:

```python
# __init__.py
from .my_custom_listener import my_custom_listener
from .another_listener import another_listener

# Optionally export them if you need to access them elsewhere
__all__ = ['my_custom_listener', 'another_listener']
```

4. Import your listeners package in your Crew or Flow file:

```python
# In your crew.py or flow.py file
import my_project.listeners  # This loads all your listeners

class MyCustomCrew:
    # Your crew implementation...
```

This is exactly how CrewAI's built-in `agentops_listener` is registered. In the CrewAI codebase, you'll find:

```python
# src/crewai/utilities/events/third_party/__init__.py
from .agentops_listener import agentops_listener
```

This ensures the `agentops_listener` is loaded when the `crewai.utilities.events` package is imported.

## Available Event Types

CrewAI provides a wide range of events that you can listen for:

### Crew Events

- **CrewKickoffStartedEvent**: Emitted when a Crew starts execution
- **CrewKickoffCompletedEvent**: Emitted when a Crew completes execution
- **CrewKickoffFailedEvent**: Emitted when a Crew fails to complete execution
- **CrewTestStartedEvent**: Emitted when a Crew starts testing
- **CrewTestCompletedEvent**: Emitted when a Crew completes testing
- **CrewTestFailedEvent**: Emitted when a Crew fails to complete testing
- **CrewTrainStartedEvent**: Emitted when a Crew starts training
- **CrewTrainCompletedEvent**: Emitted when a Crew completes training
- **CrewTrainFailedEvent**: Emitted when a Crew fails to complete training

### Agent Events

- **AgentExecutionStartedEvent**: Emitted when an Agent starts executing a task
- **AgentExecutionCompletedEvent**: Emitted when an Agent completes executing a task
- **AgentExecutionErrorEvent**: Emitted when an Agent encounters an error during execution

### Task Events

- **TaskStartedEvent**: Emitted when a Task starts execution
- **TaskCompletedEvent**: Emitted when a Task completes execution
- **TaskFailedEvent**: Emitted when a Task fails to complete execution
- **TaskEvaluationEvent**: Emitted when a Task is evaluated

### Tool Usage Events

- **ToolUsageStartedEvent**: Emitted when a tool execution is started
- **ToolUsageFinishedEvent**: Emitted when a tool execution is completed
- **ToolUsageErrorEvent**: Emitted when a tool execution encounters an error
- **ToolValidateInputErrorEvent**: Emitted when a tool input validation encounters an error
- **ToolExecutionErrorEvent**: Emitted when a tool execution encounters an error
- **ToolSelectionErrorEvent**: Emitted when there's an error selecting a tool

### Flow Events

- **FlowCreatedEvent**: Emitted when a Flow is created
- **FlowStartedEvent**: Emitted when a Flow starts execution
- **FlowFinishedEvent**: Emitted when a Flow completes execution
- **FlowPlotEvent**: Emitted when a Flow is plotted
- **MethodExecutionStartedEvent**: Emitted when a Flow method starts execution
- **MethodExecutionFinishedEvent**: Emitted when a Flow method completes execution
- **MethodExecutionFailedEvent**: Emitted when a Flow method fails to complete execution

### LLM Events

- **LLMCallStartedEvent**: Emitted when an LLM call starts
- **LLMCallCompletedEvent**: Emitted when an LLM call completes
- **LLMCallFailedEvent**: Emitted when an LLM call fails
- **LLMStreamChunkEvent**: Emitted for each chunk received during streaming LLM responses

## Event Handler Structure

Each event handler receives two parameters:

1. **source**: The object that emitted the event
2. **event**: The event instance, containing event-specific data

The structure of the event object depends on the event type, but all events inherit from `BaseEvent` and include:

- **timestamp**: The time when the event was emitted
- **type**: A string identifier for the event type

Additional fields vary by event type. For example, `CrewKickoffCompletedEvent` includes `crew_name` and `output` fields.
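
For instance, a handler can combine the shared fields with event-specific ones (a small sketch registered directly on the global bus):

```python
from crewai.utilities.events import crewai_event_bus, CrewKickoffCompletedEvent

@crewai_event_bus.on(CrewKickoffCompletedEvent)
def log_completion(source, event):
    print(event.timestamp, event.type)    # fields shared by all events
    print(event.crew_name, event.output)  # fields specific to this event type
```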

## Real-World Example: Integration with AgentOps

CrewAI includes an example of a third-party integration with [AgentOps](https://github.com/AgentOps-AI/agentops), a monitoring and observability platform for AI agents. Here's how it's implemented:

```python
from typing import Optional

from crewai.utilities.events import (
    CrewKickoffCompletedEvent,
    ToolUsageErrorEvent,
    ToolUsageStartedEvent,
)
from crewai.utilities.events.base_event_listener import BaseEventListener
from crewai.utilities.events.crew_events import CrewKickoffStartedEvent
from crewai.utilities.events.task_events import TaskEvaluationEvent

try:
    import agentops
    AGENTOPS_INSTALLED = True
except ImportError:
    AGENTOPS_INSTALLED = False

class AgentOpsListener(BaseEventListener):
    tool_event: Optional["agentops.ToolEvent"] = None
    session: Optional["agentops.Session"] = None

    def __init__(self):
        super().__init__()

    def setup_listeners(self, crewai_event_bus):
        if not AGENTOPS_INSTALLED:
            return

        @crewai_event_bus.on(CrewKickoffStartedEvent)
        def on_crew_kickoff_started(source, event: CrewKickoffStartedEvent):
            self.session = agentops.init()
            for agent in source.agents:
                if self.session:
                    self.session.create_agent(
                        name=agent.role,
                        agent_id=str(agent.id),
                    )

        @crewai_event_bus.on(CrewKickoffCompletedEvent)
        def on_crew_kickoff_completed(source, event: CrewKickoffCompletedEvent):
            if self.session:
                self.session.end_session(
                    end_state="Success",
                    end_state_reason="Finished Execution",
                )

        @crewai_event_bus.on(ToolUsageStartedEvent)
        def on_tool_usage_started(source, event: ToolUsageStartedEvent):
            self.tool_event = agentops.ToolEvent(name=event.tool_name)
            if self.session:
                self.session.record(self.tool_event)

        @crewai_event_bus.on(ToolUsageErrorEvent)
        def on_tool_usage_error(source, event: ToolUsageErrorEvent):
            agentops.ErrorEvent(exception=event.error, trigger_event=self.tool_event)
```

This listener initializes an AgentOps session when a Crew starts, registers agents with AgentOps, tracks tool usage, and ends the session when the Crew completes.

The AgentOps listener is registered in CrewAI's event system through the import in `src/crewai/utilities/events/third_party/__init__.py`:

```python
from .agentops_listener import agentops_listener
```

This ensures the `agentops_listener` is loaded when the `crewai.utilities.events` package is imported.

## Advanced Usage: Scoped Handlers

For temporary event handling (useful for testing or specific operations), you can use the `scoped_handlers` context manager:

```python
from crewai.utilities.events import crewai_event_bus, CrewKickoffStartedEvent

with crewai_event_bus.scoped_handlers():
    @crewai_event_bus.on(CrewKickoffStartedEvent)
    def temp_handler(source, event):
        print("This handler only exists within this context")

    # Do something that emits events

# Outside the context, the temporary handler is removed
```

## Use Cases

Event listeners can be used for a variety of purposes:

1. **Logging and Monitoring**: Track the execution of your Crew and log important events
2. **Analytics**: Collect data about your Crew's performance and behavior
3. **Debugging**: Set up temporary listeners to debug specific issues
4. **Integration**: Connect CrewAI with external systems like monitoring platforms, databases, or notification services
5. **Custom Behavior**: Trigger custom actions based on specific events

## Best Practices

1. **Keep Handlers Light**: Event handlers should be lightweight and avoid blocking operations
2. **Error Handling**: Include proper error handling in your event handlers to prevent exceptions from affecting the main execution (see the sketch below)
3. **Cleanup**: If your listener allocates resources, ensure they're properly cleaned up
4. **Selective Listening**: Only listen for events you actually need to handle
5. **Testing**: Test your event listeners in isolation to ensure they behave as expected
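
A sketch of a defensive handler (`record_tool_usage` is a hypothetical helper standing in for your own logic):

```python
from crewai.utilities.events import crewai_event_bus, ToolUsageStartedEvent

@crewai_event_bus.on(ToolUsageStartedEvent)
def on_tool_started(source, event):
    try:
        record_tool_usage(event.tool_name)  # hypothetical helper
    except Exception as exc:
        # Never let a listener failure interrupt the crew's execution
        print(f"Listener error: {exc}")
```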

By leveraging CrewAI's event system, you can extend its functionality and integrate it seamlessly with your existing infrastructure.

docs/concepts/flows.mdx (new file, +884 lines)

---
title: Flows
description: Learn how to create and manage AI workflows using CrewAI Flows.
icon: arrow-progress
---

## Introduction

CrewAI Flows is a powerful feature designed to streamline the creation and management of AI workflows. Flows allow developers to combine and coordinate coding tasks and Crews efficiently, providing a robust framework for building sophisticated AI automations.

Flows allow you to create structured, event-driven workflows. They provide a seamless way to connect multiple tasks, manage state, and control the flow of execution in your AI applications. With Flows, you can easily design and implement multi-step processes that leverage the full potential of CrewAI's capabilities.

1. **Simplified Workflow Creation**: Easily chain together multiple Crews and tasks to create complex AI workflows.

2. **State Management**: Flows make it super easy to manage and share state between different tasks in your workflow.

3. **Event-Driven Architecture**: Built on an event-driven model, allowing for dynamic and responsive workflows.

4. **Flexible Control Flow**: Implement conditional logic, loops, and branching within your workflows.

## Getting Started

Let's create a simple Flow where you will use OpenAI to generate a random city in one task and then use that city to generate a fun fact in another task.

```python Code
from crewai.flow.flow import Flow, listen, start
from dotenv import load_dotenv
from litellm import completion

load_dotenv()  # Load OPENAI_API_KEY from your .env file


class ExampleFlow(Flow):
    model = "gpt-4o-mini"

    @start()
    def generate_city(self):
        print("Starting flow")
        # Each flow state automatically gets a unique ID
        print(f"Flow State ID: {self.state['id']}")

        response = completion(
            model=self.model,
            messages=[
                {
                    "role": "user",
                    "content": "Return the name of a random city in the world.",
                },
            ],
        )

        random_city = response["choices"][0]["message"]["content"]
        # Store the city in our state
        self.state["city"] = random_city
        print(f"Random City: {random_city}")

        return random_city

    @listen(generate_city)
    def generate_fun_fact(self, random_city):
        response = completion(
            model=self.model,
            messages=[
                {
                    "role": "user",
                    "content": f"Tell me a fun fact about {random_city}",
                },
            ],
        )

        fun_fact = response["choices"][0]["message"]["content"]
        # Store the fun fact in our state
        self.state["fun_fact"] = fun_fact
        return fun_fact


flow = ExampleFlow()
result = flow.kickoff()

print(f"Generated fun fact: {result}")
```

In the above example, we have created a simple Flow that generates a random city using OpenAI and then generates a fun fact about that city. The Flow consists of two tasks: `generate_city` and `generate_fun_fact`. The `generate_city` task is the starting point of the Flow, and the `generate_fun_fact` task listens for the output of the `generate_city` task.

Each Flow instance automatically receives a unique identifier (UUID) in its state, which helps track and manage flow executions. The state can also store additional data (like the generated city and fun fact) that persists throughout the flow's execution.

When you run the Flow, it will:
1. Generate a unique ID for the flow state
2. Generate a random city and store it in the state
3. Generate a fun fact about that city and store it in the state
4. Print the results to the console

The state's unique ID and stored data can be useful for tracking flow executions and maintaining context between tasks.

**Note:** Ensure you have set up your `.env` file to store your `OPENAI_API_KEY`. This key is necessary for authenticating requests to the OpenAI API.

### @start()

The `@start()` decorator is used to mark a method as the starting point of a Flow. When a Flow is started, all the methods decorated with `@start()` are executed in parallel. You can have multiple start methods in a Flow, and they will all be executed when the Flow is started.
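
To illustrate multiple entry points, a minimal sketch with two start methods that run in parallel on `kickoff()`:

```python Code
from crewai.flow.flow import Flow, start

class MultiStartFlow(Flow):
    @start()
    def fetch_data(self):
        # Runs in parallel with warm_cache when the flow is kicked off
        return "data"

    @start()
    def warm_cache(self):
        return "cache ready"

flow = MultiStartFlow()
flow.kickoff()
```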

### @listen()

The `@listen()` decorator is used to mark a method as a listener for the output of another task in the Flow. The method decorated with `@listen()` will be executed when the specified task emits an output. The method can access the output of the task it is listening to as an argument.

#### Usage

The `@listen()` decorator can be used in several ways:

1. **Listening to a Method by Name**: You can pass the name of the method you want to listen to as a string. When that method completes, the listener method will be triggered.

```python Code
@listen("generate_city")
def generate_fun_fact(self, random_city):
    # Implementation
```

2. **Listening to a Method Directly**: You can pass the method itself. When that method completes, the listener method will be triggered.

```python Code
@listen(generate_city)
def generate_fun_fact(self, random_city):
    # Implementation
```

### Flow Output

Accessing and handling the output of a Flow is essential for integrating your AI workflows into larger applications or systems. CrewAI Flows provide straightforward mechanisms to retrieve the final output, access intermediate results, and manage the overall state of your Flow.

#### Retrieving the Final Output

When you run a Flow, the final output is determined by the last method that completes. The `kickoff()` method returns the output of this final method.

Here's how you can access the final output:

<CodeGroup>
```python Code
from crewai.flow.flow import Flow, listen, start

class OutputExampleFlow(Flow):
    @start()
    def first_method(self):
        return "Output from first_method"

    @listen(first_method)
    def second_method(self, first_output):
        return f"Second method received: {first_output}"


flow = OutputExampleFlow()
final_output = flow.kickoff()

print("---- Final Output ----")
print(final_output)
```

```text Output
---- Final Output ----
Second method received: Output from first_method
```
</CodeGroup>

In this example, the `second_method` is the last method to complete, so its output will be the final output of the Flow.
The `kickoff()` method will return the final output, which is then printed to the console.

|
||||
#### Accessing and Updating State
|
||||
|
||||
In addition to retrieving the final output, you can also access and update the state within your Flow. The state can be used to store and share data between different methods in the Flow. After the Flow has run, you can access the state to retrieve any information that was added or updated during the execution.
|
||||
|
||||
Here's an example of how to update and access the state:
|
||||
|
||||
<CodeGroup>
|
||||
|
||||
```python Code
|
||||
from crewai.flow.flow import Flow, listen, start
|
||||
from pydantic import BaseModel
|
||||
|
||||
class ExampleState(BaseModel):
|
||||
counter: int = 0
|
||||
message: str = ""
|
||||
|
||||
class StateExampleFlow(Flow[ExampleState]):
|
||||
|
||||
@start()
|
||||
def first_method(self):
|
||||
self.state.message = "Hello from first_method"
|
||||
self.state.counter += 1
|
||||
|
||||
@listen(first_method)
|
||||
def second_method(self):
|
||||
self.state.message += " - updated by second_method"
|
||||
self.state.counter += 1
|
||||
return self.state.message
|
||||
|
||||
flow = StateExampleFlow()
|
||||
final_output = flow.kickoff()
|
||||
print(f"Final Output: {final_output}")
|
||||
print("Final State:")
|
||||
print(flow.state)
|
||||
```
|
||||
|
||||
```text Output
|
||||
Final Output: Hello from first_method - updated by second_method
|
||||
Final State:
|
||||
counter=2 message='Hello from first_method - updated by second_method'
|
||||
```
|
||||
|
||||
</CodeGroup>
|
||||
|
||||
In this example, the state is updated by both `first_method` and `second_method`.
|
||||
After the Flow has run, you can access the final state to see the updates made by these methods.
|
||||
|
||||
By ensuring that the final method's output is returned and providing access to the state, CrewAI Flows make it easy to integrate the results of your AI workflows into larger applications or systems,
|
||||
while also maintaining and accessing the state throughout the Flow's execution.
|
||||
|
||||
## Flow State Management

Managing state effectively is crucial for building reliable and maintainable AI workflows. CrewAI Flows provides robust mechanisms for both unstructured and structured state management,
allowing developers to choose the approach that best fits their application's needs.

### Unstructured State Management

In unstructured state management, all state is stored in the `state` attribute of the `Flow` class.
This approach offers flexibility, enabling developers to add or modify state attributes on the fly without defining a strict schema.
Even with unstructured states, CrewAI Flows automatically generates and maintains a unique identifier (UUID) for each state instance.

```python Code
from crewai.flow.flow import Flow, listen, start

class UnstructuredExampleFlow(Flow):

    @start()
    def first_method(self):
        # The state automatically includes an 'id' field
        print(f"State ID: {self.state['id']}")
        self.state['counter'] = 0
        self.state['message'] = "Hello from unstructured flow"

    @listen(first_method)
    def second_method(self):
        self.state['counter'] += 1
        self.state['message'] += " - updated"

    @listen(second_method)
    def third_method(self):
        self.state['counter'] += 1
        self.state['message'] += " - updated again"

        print(f"State after third_method: {self.state}")


flow = UnstructuredExampleFlow()
flow.kickoff()
```

**Note:** The `id` field is automatically generated and preserved throughout the flow's execution. You don't need to manage or set it manually, and it will be maintained even when updating the state with new data.

**Key Points:**

- **Flexibility:** You can dynamically add attributes to `self.state` without predefined constraints.
- **Simplicity:** Ideal for straightforward workflows where state structure is minimal or varies significantly.

### Structured State Management

Structured state management leverages predefined schemas to ensure consistency and type safety across the workflow.
By using models like Pydantic's `BaseModel`, developers can define the exact shape of the state, enabling better validation and auto-completion in development environments.

Each state in CrewAI Flows automatically receives a unique identifier (UUID) to help track and manage state instances. This ID is automatically generated and managed by the Flow system.

```python Code
from crewai.flow.flow import Flow, listen, start
from pydantic import BaseModel


class ExampleState(BaseModel):
    # Note: 'id' field is automatically added to all states
    counter: int = 0
    message: str = ""


class StructuredExampleFlow(Flow[ExampleState]):

    @start()
    def first_method(self):
        # Access the auto-generated ID if needed
        print(f"State ID: {self.state.id}")
        self.state.message = "Hello from structured flow"

    @listen(first_method)
    def second_method(self):
        self.state.counter += 1
        self.state.message += " - updated"

    @listen(second_method)
    def third_method(self):
        self.state.counter += 1
        self.state.message += " - updated again"

        print(f"State after third_method: {self.state}")


flow = StructuredExampleFlow()
flow.kickoff()
```

**Key Points:**

- **Defined Schema:** `ExampleState` clearly outlines the state structure, enhancing code readability and maintainability.
- **Type Safety:** Leveraging Pydantic ensures that state attributes adhere to the specified types, reducing runtime errors.
- **Auto-Completion:** IDEs can provide better auto-completion and error checking based on the defined state model.

### Choosing Between Unstructured and Structured State Management

- **Use Unstructured State Management when:**
  - The workflow's state is simple or highly dynamic.
  - Flexibility is prioritized over strict state definitions.
  - Rapid prototyping is required without the overhead of defining schemas.

- **Use Structured State Management when:**
  - The workflow requires a well-defined and consistent state structure.
  - Type safety and validation are important for your application's reliability.
  - You want to leverage IDE features like auto-completion and type checking for better developer experience.

By providing both unstructured and structured state management options, CrewAI Flows empowers developers to build AI workflows that are both flexible and robust, catering to a wide range of application requirements.

## Flow Persistence

The `@persist` decorator enables automatic state persistence in CrewAI Flows, allowing you to maintain flow state across restarts or different workflow executions. This decorator can be applied at either the class level or method level, providing flexibility in how you manage state persistence.

### Class-Level Persistence

When applied at the class level, the `@persist` decorator automatically persists all flow method states:

```python
from crewai.flow.flow import Flow, listen, start
from crewai.flow.persistence import persist
from pydantic import BaseModel

class MyState(BaseModel):
    counter: int = 0

@persist  # Using SQLiteFlowPersistence by default
class MyFlow(Flow[MyState]):
    @start()
    def initialize_flow(self):
        # This method will automatically have its state persisted
        self.state.counter = 1
        print("Initialized flow. State ID:", self.state.id)

    @listen(initialize_flow)
    def next_step(self):
        # The state (including self.state.id) is automatically reloaded
        self.state.counter += 1
        print("Flow state is persisted. Counter:", self.state.counter)
```

### Method-Level Persistence

For more granular control, you can apply `@persist` to specific methods:

```python
class AnotherFlow(Flow[dict]):
    @persist  # Persists only this method's state
    @start()
    def begin(self):
        if "runs" not in self.state:
            self.state["runs"] = 0
        self.state["runs"] += 1
        print("Method-level persisted runs:", self.state["runs"])
```

### How It Works

1. **Unique State Identification**
   - Each flow state automatically receives a unique UUID
   - The ID is preserved across state updates and method calls
   - Supports both structured (Pydantic BaseModel) and unstructured (dictionary) states

2. **Default SQLite Backend**
   - SQLiteFlowPersistence is the default storage backend
   - States are automatically saved to a local SQLite database
   - Robust error handling ensures clear messages if database operations fail

3. **Error Handling**
   - Comprehensive error messages for database operations
   - Automatic state validation during save and load
   - Clear feedback when persistence operations encounter issues

### Important Considerations

- **State Types**: Both structured (Pydantic BaseModel) and unstructured (dictionary) states are supported
- **Automatic ID**: The `id` field is automatically added if not present
- **State Recovery**: Failed or restarted flows can automatically reload their previous state
- **Custom Implementation**: You can provide your own `FlowPersistence` implementation for specialized storage needs

### Technical Advantages
|
||||
|
||||
1. **Precise Control Through Low-Level Access**
|
||||
- Direct access to persistence operations for advanced use cases
|
||||
- Fine-grained control via method-level persistence decorators
|
||||
- Built-in state inspection and debugging capabilities
|
||||
- Full visibility into state changes and persistence operations
|
||||
|
||||
2. **Enhanced Reliability**
|
||||
- Automatic state recovery after system failures or restarts
|
||||
- Transaction-based state updates for data integrity
|
||||
- Comprehensive error handling with clear error messages
|
||||
- Robust validation during state save and load operations
|
||||
|
||||
3. **Extensible Architecture**
|
||||
- Customizable persistence backend through FlowPersistence interface
|
||||
- Support for specialized storage solutions beyond SQLite
|
||||
- Compatible with both structured (Pydantic) and unstructured (dict) states
|
||||
- Seamless integration with existing CrewAI flow patterns
|
||||
|
||||
The persistence system's architecture emphasizes technical precision and customization options, allowing developers to maintain full control over state management while benefiting from built-in reliability features.
## Flow Control

### Conditional Logic: `or`

The `or_` function in Flows allows you to listen to multiple methods and trigger the listener method when any of the specified methods emits an output.

<CodeGroup>

```python Code
from crewai.flow.flow import Flow, listen, or_, start

class OrExampleFlow(Flow):

    @start()
    def start_method(self):
        return "Hello from the start method"

    @listen(start_method)
    def second_method(self):
        return "Hello from the second method"

    @listen(or_(start_method, second_method))
    def logger(self, result):
        print(f"Logger: {result}")


flow = OrExampleFlow()
flow.kickoff()
```

```text Output
Logger: Hello from the start method
Logger: Hello from the second method
```

</CodeGroup>

When you run this Flow, the `logger` method is triggered twice: once by the output of `start_method` and again by the output of `second_method`, as the example output shows.
### Conditional Logic: `and`

The `and_` function in Flows allows you to listen to multiple methods and trigger the listener method only when all of the specified methods emit an output.

<CodeGroup>

```python Code
from crewai.flow.flow import Flow, and_, listen, start

class AndExampleFlow(Flow):

    @start()
    def start_method(self):
        self.state["greeting"] = "Hello from the start method"

    @listen(start_method)
    def second_method(self):
        self.state["joke"] = "What do computers eat? Microchips."

    @listen(and_(start_method, second_method))
    def logger(self):
        print("---- Logger ----")
        print(self.state)

flow = AndExampleFlow()
flow.kickoff()
```

```text Output
---- Logger ----
{'greeting': 'Hello from the start method', 'joke': 'What do computers eat? Microchips.'}
```

</CodeGroup>

When you run this Flow, the `logger` method is triggered only once both `start_method` and `second_method` have emitted an output.
### Router

The `@router()` decorator in Flows allows you to define conditional routing logic based on the output of a method.
You can specify different routes based on that output, allowing you to control the flow of execution dynamically.

<CodeGroup>

```python Code
import random
from crewai.flow.flow import Flow, listen, router, start
from pydantic import BaseModel

class ExampleState(BaseModel):
    success_flag: bool = False

class RouterFlow(Flow[ExampleState]):

    @start()
    def start_method(self):
        print("Starting the structured flow")
        random_boolean = random.choice([True, False])
        self.state.success_flag = random_boolean

    @router(start_method)
    def second_method(self):
        if self.state.success_flag:
            return "success"
        else:
            return "failed"

    @listen("success")
    def third_method(self):
        print("Third method running")

    @listen("failed")
    def fourth_method(self):
        print("Fourth method running")


flow = RouterFlow()
flow.kickoff()
```

```text Output
Starting the structured flow
Third method running
```

</CodeGroup>

In the above example, `start_method` generates a random boolean value and sets it in the state.
`second_method` uses the `@router()` decorator to define conditional routing logic based on that boolean: it returns `"success"` when the value is `True` and `"failed"` when it is `False`.
`third_method` and `fourth_method` listen for those return values, so exactly one of them executes on any given run.

When you run this Flow, the output will change based on the random boolean value generated by `start_method`: a given run prints either "Third method running" or "Fourth method running", never both.
## Adding Agents to Flows

Agents can be seamlessly integrated into your flows, providing a lightweight alternative to full Crews when you need simpler, focused task execution. Here's an example of how to use an Agent within a flow to perform market research:

```python
import asyncio
from typing import Any, Dict, List

from crewai_tools import SerperDevTool
from pydantic import BaseModel, Field

from crewai.agent import Agent
from crewai.flow.flow import Flow, listen, start


# Define a structured output format
class MarketAnalysis(BaseModel):
    key_trends: List[str] = Field(description="List of identified market trends")
    market_size: str = Field(description="Estimated market size")
    competitors: List[str] = Field(description="Major competitors in the space")


# Define flow state
class MarketResearchState(BaseModel):
    product: str = ""
    analysis: MarketAnalysis | None = None


# Create a flow class
class MarketResearchFlow(Flow[MarketResearchState]):
    @start()
    def initialize_research(self) -> Dict[str, Any]:
        print(f"Starting market research for {self.state.product}")
        return {"product": self.state.product}

    @listen(initialize_research)
    async def analyze_market(self) -> Dict[str, Any]:
        # Create an Agent for market research
        analyst = Agent(
            role="Market Research Analyst",
            goal=f"Analyze the market for {self.state.product}",
            backstory="You are an experienced market analyst with expertise in "
                      "identifying market trends and opportunities.",
            tools=[SerperDevTool()],
            verbose=True,
        )

        # Define the research query
        query = f"""
        Research the market for {self.state.product}. Include:
        1. Key market trends
        2. Market size
        3. Major competitors

        Format your response according to the specified structure.
        """

        # Execute the analysis with structured output format
        result = await analyst.kickoff_async(query, response_format=MarketAnalysis)
        if result.pydantic:
            print("result", result.pydantic)
        else:
            print("result", result)

        # Return the analysis to update the state
        return {"analysis": result.pydantic}

    @listen(analyze_market)
    def present_results(self, analysis) -> None:
        print("\nMarket Analysis Results")
        print("=====================")

        if isinstance(analysis, dict):
            # If we got a dict with an 'analysis' key, extract the actual analysis object
            market_analysis = analysis.get("analysis")
        else:
            market_analysis = analysis

        if market_analysis and isinstance(market_analysis, MarketAnalysis):
            print("\nKey Market Trends:")
            for trend in market_analysis.key_trends:
                print(f"- {trend}")

            print(f"\nMarket Size: {market_analysis.market_size}")

            print("\nMajor Competitors:")
            for competitor in market_analysis.competitors:
                print(f"- {competitor}")
        else:
            print("No structured analysis data available.")
            print("Raw analysis:", analysis)


# Usage example
async def run_flow():
    flow = MarketResearchFlow()
    result = await flow.kickoff_async(inputs={"product": "AI-powered chatbots"})
    return result


# Run the flow
if __name__ == "__main__":
    asyncio.run(run_flow())
```

This example demonstrates several key features of using Agents in flows:

1. **Structured Output**: Using Pydantic models to define the expected output format (`MarketAnalysis`) ensures type safety and structured data throughout the flow.

2. **State Management**: The flow state (`MarketResearchState`) maintains context between steps and stores both inputs and outputs.

3. **Tool Integration**: Agents can use tools (like `SerperDevTool` in this example) to enhance their capabilities.
## Adding Crews to Flows

Creating a flow with multiple crews in CrewAI is straightforward.

You can generate a new CrewAI project that includes all the scaffolding needed to create a flow with multiple crews by running the following command:

```bash
crewai create flow name_of_flow
```

This command will generate a new CrewAI project with the necessary folder structure. The generated project includes a prebuilt crew called `poem_crew` that is already working. You can use this crew as a template by copying, pasting, and editing it to create other crews.
### Folder Structure

After running the `crewai create flow name_of_flow` command, you will see a folder structure similar to the following:

| Directory/File         | Description                                                         |
| :--------------------- | :------------------------------------------------------------------ |
| `name_of_flow/`        | Root directory for the flow.                                        |
| ├── `crews/`           | Contains directories for specific crews.                            |
| │ └── `poem_crew/`     | Directory for the "poem_crew" with its configurations and scripts.  |
| │ ├── `config/`        | Configuration files directory for the "poem_crew".                  |
| │ │ ├── `agents.yaml`  | YAML file defining the agents for "poem_crew".                      |
| │ │ └── `tasks.yaml`   | YAML file defining the tasks for "poem_crew".                       |
| │ └── `poem_crew.py`   | Script for "poem_crew" functionality.                               |
| ├── `tools/`           | Directory for additional tools used in the flow.                    |
| │ └── `custom_tool.py` | Custom tool implementation.                                         |
| ├── `main.py`          | Main script for running the flow.                                   |
| ├── `README.md`        | Project description and instructions.                               |
| ├── `pyproject.toml`   | Configuration file for project dependencies and settings.           |
| └── `.gitignore`       | Specifies files and directories to ignore in version control.       |

### Building Your Crews

In the `crews` folder, you can define multiple crews. Each crew will have its own folder containing configuration files and the crew definition file. For example, the `poem_crew` folder contains:

- `config/agents.yaml`: Defines the agents for the crew.
- `config/tasks.yaml`: Defines the tasks for the crew.
- `poem_crew.py`: Contains the crew definition, including agents, tasks, and the crew itself.

You can copy, paste, and edit the `poem_crew` to create other crews.
### Connecting Crews in `main.py`

The `main.py` file is where you create your flow and connect the crews together. You can define your flow by using the `Flow` class and the decorators `@start` and `@listen` to specify the flow of execution.

Here's an example of how you can connect the `poem_crew` in the `main.py` file:

```python Code
#!/usr/bin/env python
from random import randint

from pydantic import BaseModel
from crewai.flow.flow import Flow, listen, start
from .crews.poem_crew.poem_crew import PoemCrew

class PoemState(BaseModel):
    sentence_count: int = 1
    poem: str = ""

class PoemFlow(Flow[PoemState]):

    @start()
    def generate_sentence_count(self):
        print("Generating sentence count")
        self.state.sentence_count = randint(1, 5)

    @listen(generate_sentence_count)
    def generate_poem(self):
        print("Generating poem")
        result = PoemCrew().crew().kickoff(inputs={"sentence_count": self.state.sentence_count})

        print("Poem generated", result.raw)
        self.state.poem = result.raw

    @listen(generate_poem)
    def save_poem(self):
        print("Saving poem")
        with open("poem.txt", "w") as f:
            f.write(self.state.poem)

def kickoff():
    poem_flow = PoemFlow()
    poem_flow.kickoff()


def plot():
    poem_flow = PoemFlow()
    poem_flow.plot()

if __name__ == "__main__":
    kickoff()
```

In this example, the `PoemFlow` class defines a flow that generates a sentence count, uses the `PoemCrew` to generate a poem, and then saves the poem to a file. The flow is kicked off by calling the `kickoff()` function.
### Running the Flow

(Optional) Before running the flow, you can install the dependencies by running:

```bash
crewai install
```

Once all of the dependencies are installed, you need to activate the virtual environment by running:

```bash
source .venv/bin/activate
```

After activating the virtual environment, you can run the flow by executing one of the following commands:

```bash
crewai flow kickoff
```

or

```bash
uv run kickoff
```

The flow will execute, and you should see the output in the console.
## Plot Flows

Visualizing your AI workflows can provide valuable insights into the structure and execution paths of your flows. CrewAI offers a powerful visualization tool that allows you to generate interactive plots of your flows, making it easier to understand and optimize your AI workflows.

### What are Plots?

Plots in CrewAI are graphical representations of your AI workflows. They display the various tasks, their connections, and the flow of data between them. This visualization helps in understanding the sequence of operations, identifying bottlenecks, and ensuring that the workflow logic aligns with your expectations.

### How to Generate a Plot

CrewAI provides two convenient methods to generate plots of your flows:

#### Option 1: Using the `plot()` Method

If you are working directly with a flow instance, you can generate a plot by calling the `plot()` method on your flow object. This method will create an HTML file containing the interactive plot of your flow.

```python Code
# Assuming you have a flow instance
flow.plot("my_flow_plot")
```

This will generate a file named `my_flow_plot.html` in your current directory. You can open this file in a web browser to view the interactive plot.

#### Option 2: Using the Command Line

If you are working within a structured CrewAI project, you can generate a plot using the command line. This is particularly useful for larger projects where you want to visualize the entire flow setup.

```bash
crewai flow plot
```

This command will generate an HTML file with the plot of your flow, similar to the `plot()` method. The file will be saved in your project directory, and you can open it in a web browser to explore the flow.

### Understanding the Plot

The generated plot will display nodes representing the tasks in your flow, with directed edges indicating the flow of execution. The plot is interactive, allowing you to zoom in and out, and hover over nodes to see additional details.

By visualizing your flows, you can gain a clearer understanding of the workflow's structure, making it easier to debug, optimize, and communicate your AI processes to others.

### Conclusion

Plotting your flows is a powerful feature of CrewAI that enhances your ability to design and manage complex AI workflows. Whether you choose to use the `plot()` method or the command line, generating plots will provide you with a visual representation of your workflows, aiding in both development and presentation.
## Next Steps

If you're interested in exploring additional examples of flows, we have a variety of recommendations in our examples repository. Here are four specific flow examples, each showcasing unique use cases to help you match your current problem type to a specific example:

1. **Email Auto Responder Flow**: This example demonstrates an infinite loop where a background job continually runs to automate email responses. It's a great use case for tasks that need to be performed repeatedly without manual intervention. [View Example](https://github.com/crewAIInc/crewAI-examples/tree/main/email_auto_responder_flow)

2. **Lead Score Flow**: This flow showcases adding human-in-the-loop feedback and handling different conditional branches using the router. It's an excellent example of how to incorporate dynamic decision-making and human oversight into your workflows. [View Example](https://github.com/crewAIInc/crewAI-examples/tree/main/lead-score-flow)

3. **Write a Book Flow**: This example excels at chaining multiple crews together, where the output of one crew is used by another. Specifically, one crew outlines an entire book, and another crew generates chapters based on the outline. Eventually, everything is connected to produce a complete book. This flow is perfect for complex, multi-step processes that require coordination between different tasks. [View Example](https://github.com/crewAIInc/crewAI-examples/tree/main/write_a_book_with_flows)

4. **Meeting Assistant Flow**: This flow demonstrates how to broadcast one event to trigger multiple follow-up actions. For instance, after a meeting is completed, the flow can update a Trello board, send a Slack message, and save the results. It's a great example of handling multiple outcomes from a single event, making it ideal for comprehensive task management and notification systems. [View Example](https://github.com/crewAIInc/crewAI-examples/tree/main/meeting_assistant_flow)

By exploring these examples, you can gain insights into how to leverage CrewAI Flows for various use cases, from automating repetitive tasks to managing complex, multi-step processes with dynamic decision-making and human feedback.

Also, check out our YouTube video on how to use flows in CrewAI below!

<iframe
  width="560"
  height="315"
  src="https://www.youtube.com/embed/MTb5my6VOT8"
  title="YouTube video player"
  frameborder="0"
  allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share"
  referrerpolicy="strict-origin-when-cross-origin"
  allowfullscreen
></iframe>
## Running Flows

There are two ways to run a flow:

### Using the Flow API

You can run a flow programmatically by creating an instance of your flow class and calling the `kickoff()` method:

```python
flow = ExampleFlow()
result = flow.kickoff()
```

### Using the CLI

Starting from version 0.103.0, you can run flows using the `crewai run` command:

```shell
crewai run
```

This command automatically detects if your project is a flow (based on the `type = "flow"` setting in your `pyproject.toml`) and runs it accordingly. This is the recommended way to run flows from the command line.
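For reference, a minimal sketch of the `pyproject.toml` fragment the CLI looks for. Projects scaffolded with `crewai create flow` already include it; treat the exact table name as an assumption to verify against your generated project:

```toml
[tool.crewai]
type = "flow"
```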
For backward compatibility, you can also use:

```shell
crewai flow kickoff
```

However, the `crewai run` command is now the preferred method, as it works for both crews and flows.
docs/concepts/knowledge.mdx (new file) @@ -0,0 +1,626 @@

---
title: Knowledge
description: What is knowledge in CrewAI and how to use it.
icon: book
---
## What is Knowledge?

Knowledge in CrewAI is a powerful system that allows AI agents to access and utilize external information sources during their tasks.
Think of it as giving your agents a reference library they can consult while working.

<Info>
  Key benefits of using Knowledge:
  - Enhance agents with domain-specific information
  - Support decisions with real-world data
  - Maintain context across conversations
  - Ground responses in factual information
</Info>
## Supported Knowledge Sources

CrewAI supports various types of knowledge sources out of the box:

<CardGroup cols={2}>
  <Card title="Text Sources" icon="text">
    - Raw strings
    - Text files (.txt)
    - PDF documents
  </Card>
  <Card title="Structured Data" icon="table">
    - CSV files
    - Excel spreadsheets
    - JSON documents
  </Card>
</CardGroup>
## Supported Knowledge Parameters

| Parameter         | Type                           | Required | Description                                                                                                                                        |
| :---------------- | :----------------------------- | :------- | :------------------------------------------------------------------------------------------------------------------------------------------------ |
| `sources`         | **List[BaseKnowledgeSource]**  | Yes      | List of knowledge sources that provide content to be stored and queried. Can include PDF, CSV, Excel, JSON, text files, or string content.        |
| `collection_name` | **str**                        | No       | Name of the collection where the knowledge will be stored. Used to identify different sets of knowledge. Defaults to "knowledge" if not provided. |
| `storage`         | **Optional[KnowledgeStorage]** | No       | Custom storage configuration for managing how the knowledge is stored and retrieved. If not provided, a default storage will be created.          |
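These parameters come together roughly as sketched below. This is an illustration only, assuming the `Knowledge` class is importable from `crewai.knowledge.knowledge` and accepts these keyword arguments directly; in everyday use you typically pass `knowledge_sources` to an `Agent` or `Crew` instead, as the examples that follow show.

```python
from crewai.knowledge.knowledge import Knowledge  # assumed import path
from crewai.knowledge.source.string_knowledge_source import StringKnowledgeSource

knowledge = Knowledge(
    collection_name="product_docs",  # hypothetical collection name
    sources=[StringKnowledgeSource(content="Our product ships worldwide.")],
    # storage=...  # optionally supply a custom KnowledgeStorage
)
```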
## Quickstart Example

<Tip>
  For file-based knowledge sources, make sure to place your files in a `knowledge` directory at the root of your project.
  Also, use relative paths from the `knowledge` directory when creating the source.
</Tip>
Here's an example using string-based knowledge:

```python Code
from crewai import Agent, Task, Crew, Process, LLM
from crewai.knowledge.source.string_knowledge_source import StringKnowledgeSource

# Create a knowledge source
content = "User's name is John. He is 30 years old and lives in San Francisco."
string_source = StringKnowledgeSource(
    content=content,
)

# Create an LLM with a temperature of 0 to ensure deterministic outputs
llm = LLM(model="gpt-4o-mini", temperature=0)

# Create an agent with the knowledge store
agent = Agent(
    role="About User",
    goal="You know everything about the user.",
    backstory="""You are a master at understanding people and their preferences.""",
    verbose=True,
    allow_delegation=False,
    llm=llm,
)
task = Task(
    description="Answer the following questions about the user: {question}",
    expected_output="An answer to the question.",
    agent=agent,
)

crew = Crew(
    agents=[agent],
    tasks=[task],
    verbose=True,
    process=Process.sequential,
    knowledge_sources=[string_source],  # Enable knowledge by adding the sources here; you can add more sources to this list.
)

result = crew.kickoff(inputs={"question": "What city does John live in and how old is he?"})
```

Here's another example, using `CrewDoclingSource`. `CrewDoclingSource` is quite versatile and can handle multiple file formats, including MD, PDF, DOCX, HTML, and more.

<Note>
  You need to install `docling` for the following example to work: `uv add docling`
</Note>

```python Code
from crewai import LLM, Agent, Crew, Process, Task
from crewai.knowledge.source.crew_docling_source import CrewDoclingSource

# Create a knowledge source
content_source = CrewDoclingSource(
    file_paths=[
        "https://lilianweng.github.io/posts/2024-11-28-reward-hacking",
        "https://lilianweng.github.io/posts/2024-07-07-hallucination",
    ],
)

# Create an LLM with a temperature of 0 to ensure deterministic outputs
llm = LLM(model="gpt-4o-mini", temperature=0)

# Create an agent with the knowledge store
agent = Agent(
    role="About papers",
    goal="You know everything about the papers.",
    backstory="""You are a master at understanding papers and their content.""",
    verbose=True,
    allow_delegation=False,
    llm=llm,
)
task = Task(
    description="Answer the following questions about the papers: {question}",
    expected_output="An answer to the question.",
    agent=agent,
)

crew = Crew(
    agents=[agent],
    tasks=[task],
    verbose=True,
    process=Process.sequential,
    knowledge_sources=[content_source],  # Enable knowledge by adding the sources here; you can add more sources to this list.
)

result = crew.kickoff(
    inputs={
        "question": "What is the reward hacking paper about? Be sure to provide sources."
    }
)
```
## More Examples

Here are examples of how to use different types of knowledge sources:

Note: Please ensure that you create the `./knowledge` folder. All source files (e.g., .txt, .pdf, .xlsx, .json) should be placed in this folder for centralized management.
### Text File Knowledge Source
```python
from crewai.knowledge.source.text_file_knowledge_source import TextFileKnowledgeSource

# Create a text file knowledge source
text_source = TextFileKnowledgeSource(
    file_paths=["document.txt", "another.txt"]
)

# Add the text file source at the agent or crew level
agent = Agent(
    ...
    knowledge_sources=[text_source]
)

crew = Crew(
    ...
    knowledge_sources=[text_source]
)
```

### PDF Knowledge Source
```python
from crewai.knowledge.source.pdf_knowledge_source import PDFKnowledgeSource

# Create a PDF knowledge source
pdf_source = PDFKnowledgeSource(
    file_paths=["document.pdf", "another.pdf"]
)

# Add the PDF source at the agent or crew level
agent = Agent(
    ...
    knowledge_sources=[pdf_source]
)

crew = Crew(
    ...
    knowledge_sources=[pdf_source]
)
```

### CSV Knowledge Source
```python
from crewai.knowledge.source.csv_knowledge_source import CSVKnowledgeSource

# Create a CSV knowledge source
csv_source = CSVKnowledgeSource(
    file_paths=["data.csv"]
)

# Add the CSV source at the agent or crew level
agent = Agent(
    ...
    knowledge_sources=[csv_source]
)

crew = Crew(
    ...
    knowledge_sources=[csv_source]
)
```

### Excel Knowledge Source
```python
from crewai.knowledge.source.excel_knowledge_source import ExcelKnowledgeSource

# Create an Excel knowledge source
excel_source = ExcelKnowledgeSource(
    file_paths=["spreadsheet.xlsx"]
)

# Add the Excel source at the agent or crew level
agent = Agent(
    ...
    knowledge_sources=[excel_source]
)

crew = Crew(
    ...
    knowledge_sources=[excel_source]
)
```

### JSON Knowledge Source
```python
from crewai.knowledge.source.json_knowledge_source import JSONKnowledgeSource

# Create a JSON knowledge source
json_source = JSONKnowledgeSource(
    file_paths=["data.json"]
)

# Add the JSON source at the agent or crew level
agent = Agent(
    ...
    knowledge_sources=[json_source]
)

crew = Crew(
    ...
    knowledge_sources=[json_source]
)
```
## Knowledge Configuration

### Chunking Configuration

Knowledge sources automatically chunk content for better processing.
You can configure chunking behavior in your knowledge sources:

```python
from crewai.knowledge.source.string_knowledge_source import StringKnowledgeSource

source = StringKnowledgeSource(
    content="Your content here",
    chunk_size=4000,     # Maximum size of each chunk (default: 4000)
    chunk_overlap=200    # Overlap between chunks (default: 200)
)
```

The chunking configuration helps in:
- Breaking down large documents into manageable pieces
- Maintaining context through chunk overlap
- Optimizing retrieval accuracy
### Embeddings Configuration

You can also configure the embedder for the knowledge store.
This is useful if you want to use a different embedder for the knowledge store than the one used for the agents.
The `embedder` parameter supports various embedding model providers, including:
- `openai`: OpenAI's embedding models
- `google`: Google's text embedding models
- `azure`: Azure OpenAI embeddings
- `ollama`: Local embeddings with Ollama (see the sketch below)
- `vertexai`: Google Cloud VertexAI embeddings
- `cohere`: Cohere's embedding models
- `voyageai`: VoyageAI's embedding models
- `bedrock`: AWS Bedrock embeddings
- `huggingface`: Hugging Face models
- `watson`: IBM Watson embeddings
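As a quick illustration of the `embedder` parameter's shape with a local provider, here is a hedged sketch using `ollama`. It assumes an Ollama server is running locally with an embedding model already pulled; the `nomic-embed-text` model name is an assumption, and the config keys mirror the Google example that follows:

```python
from crewai import Agent

agent = Agent(
    role="Research Assistant",
    goal="Answer questions using the knowledge base.",
    backstory="A careful assistant.",
    knowledge_sources=[...],  # your knowledge sources here
    embedder={
        "provider": "ollama",
        "config": {
            "model": "nomic-embed-text",  # assumed local embedding model
        },
    },
)
```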
Here's an example of how to configure the embedder for the knowledge store using Google's `text-embedding-004` model:
<CodeGroup>
```python Example
from crewai import Agent, Task, Crew, Process, LLM
from crewai.knowledge.source.string_knowledge_source import StringKnowledgeSource
import os

# Get the Gemini API key
GEMINI_API_KEY = os.environ.get("GEMINI_API_KEY")

# Create a knowledge source
content = "User's name is John. He is 30 years old and lives in San Francisco."
string_source = StringKnowledgeSource(
    content=content,
)

# Create an LLM with a temperature of 0 to ensure deterministic outputs
gemini_llm = LLM(
    model="gemini/gemini-1.5-pro-002",
    api_key=GEMINI_API_KEY,
    temperature=0,
)

# Create an agent with the knowledge store
agent = Agent(
    role="About User",
    goal="You know everything about the user.",
    backstory="""You are a master at understanding people and their preferences.""",
    verbose=True,
    allow_delegation=False,
    llm=gemini_llm,
    embedder={
        "provider": "google",
        "config": {
            "model": "models/text-embedding-004",
            "api_key": GEMINI_API_KEY,
        }
    }
)

task = Task(
    description="Answer the following questions about the user: {question}",
    expected_output="An answer to the question.",
    agent=agent,
)

crew = Crew(
    agents=[agent],
    tasks=[task],
    verbose=True,
    process=Process.sequential,
    knowledge_sources=[string_source],
    embedder={
        "provider": "google",
        "config": {
            "model": "models/text-embedding-004",
            "api_key": GEMINI_API_KEY,
        }
    }
)

result = crew.kickoff(inputs={"question": "What city does John live in and how old is he?"})
```
```text Output
# Agent: About User
## Task: Answer the following questions about the user: What city does John live in and how old is he?

# Agent: About User
## Final Answer:
John is 30 years old and lives in San Francisco.
```
</CodeGroup>
## Clearing Knowledge

If you need to clear the knowledge stored in CrewAI, you can use the `crewai reset-memories` command with the `--knowledge` option.

```bash Command
crewai reset-memories --knowledge
```

This is useful when you've updated your knowledge sources and want to ensure that the agents are using the most recent information.
## Agent-Specific Knowledge

While knowledge can be provided at the crew level using `crew.knowledge_sources`, individual agents can also have their own knowledge sources using the `knowledge_sources` parameter:

```python Code
from crewai import Agent, Task, Crew
from crewai.knowledge.source.string_knowledge_source import StringKnowledgeSource

# Create agent-specific knowledge about a product
product_specs = StringKnowledgeSource(
    content="""The XPS 13 laptop features:
    - 13.4-inch 4K display
    - Intel Core i7 processor
    - 16GB RAM
    - 512GB SSD storage
    - 12-hour battery life""",
    metadata={"category": "product_specs"}
)

# Create a support agent with product knowledge
support_agent = Agent(
    role="Technical Support Specialist",
    goal="Provide accurate product information and support.",
    backstory="You are an expert on our laptop products and specifications.",
    knowledge_sources=[product_specs]  # Agent-specific knowledge
)

# Create a task that requires product knowledge
support_task = Task(
    description="Answer this customer question: {question}",
    expected_output="A concise, accurate answer to the customer's question.",
    agent=support_agent
)

# Create and run the crew
crew = Crew(
    agents=[support_agent],
    tasks=[support_task]
)

# Get answer about the laptop's specifications
result = crew.kickoff(
    inputs={"question": "What is the storage capacity of the XPS 13?"}
)
```

<Info>
  Benefits of agent-specific knowledge:
  - Give agents specialized information for their roles
  - Maintain separation of concerns between agents
  - Combine with crew-level knowledge for layered information access
</Info>
## Custom Knowledge Sources

CrewAI allows you to create custom knowledge sources for any type of data by extending the `BaseKnowledgeSource` class. Let's create a practical example that fetches and processes space news articles.

#### Space News Knowledge Source Example
<CodeGroup>

```python Code
from crewai import Agent, Task, Crew, Process, LLM
from crewai.knowledge.source.base_knowledge_source import BaseKnowledgeSource
import requests
from typing import Dict, Any
from pydantic import BaseModel, Field

class SpaceNewsKnowledgeSource(BaseKnowledgeSource):
    """Knowledge source that fetches data from Space News API."""

    api_endpoint: str = Field(description="API endpoint URL")
    limit: int = Field(default=10, description="Number of articles to fetch")

    def load_content(self) -> Dict[Any, str]:
        """Fetch and format space news articles."""
        try:
            # `params` merges cleanly with any query string already present
            # in the endpoint (e.g. "?search=NASA")
            response = requests.get(self.api_endpoint, params={"limit": self.limit})
            response.raise_for_status()

            data = response.json()
            articles = data.get('results', [])

            formatted_data = self.validate_content(articles)
            return {self.api_endpoint: formatted_data}
        except Exception as e:
            raise ValueError(f"Failed to fetch space news: {str(e)}")

    def validate_content(self, articles: list) -> str:
        """Format articles into readable text."""
        formatted = "Space News Articles:\n\n"
        for article in articles:
            formatted += f"""
                Title: {article['title']}
                Published: {article['published_at']}
                Summary: {article['summary']}
                News Site: {article['news_site']}
                URL: {article['url']}
                -------------------"""
        return formatted

    def add(self) -> None:
        """Process and store the articles."""
        content = self.load_content()
        for _, text in content.items():
            chunks = self._chunk_text(text)
            self.chunks.extend(chunks)

        self._save_documents()

# Create knowledge source
recent_news = SpaceNewsKnowledgeSource(
    api_endpoint="https://api.spaceflightnewsapi.net/v4/articles",
    limit=10,
)

# Create specialized agent
space_analyst = Agent(
    role="Space News Analyst",
    goal="Answer questions about space news accurately and comprehensively",
    backstory="""You are a space industry analyst with expertise in space exploration,
    satellite technology, and space industry trends. You excel at answering questions
    about space news and providing detailed, accurate information.""",
    knowledge_sources=[recent_news],
    llm=LLM(model="gpt-4", temperature=0.0)
)

# Create task that handles user questions
analysis_task = Task(
    description="Answer this question about space news: {user_question}",
    expected_output="A detailed answer based on the recent space news articles",
    agent=space_analyst
)

# Create and run the crew
crew = Crew(
    agents=[space_analyst],
    tasks=[analysis_task],
    verbose=True,
    process=Process.sequential
)

# Example usage
result = crew.kickoff(
    inputs={"user_question": "What are the latest developments in space exploration?"}
)
```

```output Output
# Agent: Space News Analyst
## Task: Answer this question about space news: What are the latest developments in space exploration?


# Agent: Space News Analyst
## Final Answer:
The latest developments in space exploration, based on recent space news articles, include the following:

1. SpaceX has received the final regulatory approvals to proceed with the second integrated Starship/Super Heavy launch, scheduled for as soon as the morning of Nov. 17, 2023. This is a significant step in SpaceX's ambitious plans for space exploration and colonization. [Source: SpaceNews](https://spacenews.com/starship-cleared-for-nov-17-launch/)

2. SpaceX has also informed the US Federal Communications Commission (FCC) that it plans to begin launching its first next-generation Starlink Gen2 satellites. This represents a major upgrade to the Starlink satellite internet service, which aims to provide high-speed internet access worldwide. [Source: Teslarati](https://www.teslarati.com/spacex-first-starlink-gen2-satellite-launch-2022/)

3. AI startup Synthetaic has raised $15 million in Series B funding. The company uses artificial intelligence to analyze data from space and air sensors, which could have significant applications in space exploration and satellite technology. [Source: SpaceNews](https://spacenews.com/ai-startup-synthetaic-raises-15-million-in-series-b-funding/)

4. The Space Force has formally established a unit within the U.S. Indo-Pacific Command, marking a permanent presence in the Indo-Pacific region. This could have significant implications for space security and geopolitics. [Source: SpaceNews](https://spacenews.com/space-force-establishes-permanent-presence-in-indo-pacific-region/)

5. Slingshot Aerospace, a space tracking and data analytics company, is expanding its network of ground-based optical telescopes to increase coverage of low Earth orbit. This could improve our ability to track and analyze objects in low Earth orbit, including satellites and space debris. [Source: SpaceNews](https://spacenews.com/slingshots-space-tracking-network-to-extend-coverage-of-low-earth-orbit/)

6. The National Natural Science Foundation of China has outlined a five-year project for researchers to study the assembly of ultra-large spacecraft. This could lead to significant advancements in spacecraft technology and space exploration capabilities. [Source: SpaceNews](https://spacenews.com/china-researching-challenges-of-kilometer-scale-ultra-large-spacecraft/)

7. The Center for AEroSpace Autonomy Research (CAESAR) at Stanford University is focusing on spacecraft autonomy. The center held a kickoff event on May 22, 2024, to highlight the industry, academia, and government collaboration it seeks to foster. This could lead to significant advancements in autonomous spacecraft technology. [Source: SpaceNews](https://spacenews.com/stanford-center-focuses-on-spacecraft-autonomy/)
```

</CodeGroup>
#### Key Components Explained

1. **Custom Knowledge Source (`SpaceNewsKnowledgeSource`)**:

   - Extends `BaseKnowledgeSource` for integration with CrewAI
   - Configurable API endpoint and article limit
   - Implements three key methods:
     - `load_content()`: Fetches articles from the API
     - `validate_content()`: Structures the articles into readable text
     - `add()`: Processes and stores the content

2. **Agent Configuration**:

   - Specialized role as a Space News Analyst
   - Uses the knowledge source to access space news

3. **Task Setup**:

   - Takes a user question as input through `{user_question}`
   - Designed to provide detailed answers based on the knowledge source

4. **Crew Orchestration**:
   - Manages the workflow between agent and task
   - Handles input/output through the kickoff method

This example demonstrates how to:

- Create a custom knowledge source that fetches real-time data
- Process and format external data for AI consumption
- Use the knowledge source to answer specific user questions
- Integrate everything seamlessly with CrewAI's agent system
#### About the Spaceflight News API

The example uses the [Spaceflight News API](https://api.spaceflightnewsapi.net/v4/docs/), which:

- Provides free access to space-related news articles
- Requires no authentication
- Returns structured data about space news
- Supports pagination and filtering

You can customize the API query by modifying the endpoint URL:

```python
# Fetch more articles
recent_news = SpaceNewsKnowledgeSource(
    api_endpoint="https://api.spaceflightnewsapi.net/v4/articles",
    limit=20,  # Increase the number of articles
)

# Add search parameters
recent_news = SpaceNewsKnowledgeSource(
    api_endpoint="https://api.spaceflightnewsapi.net/v4/articles?search=NASA",  # Search for NASA news
    limit=10,
)
```
## Best Practices

<AccordionGroup>
  <Accordion title="Content Organization">
    - Keep chunk sizes appropriate for your content type
    - Consider content overlap for context preservation
    - Organize related information into separate knowledge sources
  </Accordion>

  <Accordion title="Performance Tips">
    - Adjust chunk sizes based on content complexity
    - Configure appropriate embedding models
    - Consider using local embedding providers for faster processing
  </Accordion>
</AccordionGroup>

docs/concepts/langchain-tools.mdx (new file) @@ -0,0 +1,58 @@
---
title: Using LangChain Tools
description: Learn how to integrate LangChain tools with CrewAI agents to enhance search-based queries and more.
icon: link
---

## Using LangChain Tools

<Info>
  CrewAI seamlessly integrates with LangChain's comprehensive [list of tools](https://python.langchain.com/docs/integrations/tools/), all of which can be used with CrewAI.
</Info>
```python Code
import os
from dotenv import load_dotenv
from crewai import Agent, Task, Crew
from crewai.tools import BaseTool
from pydantic import Field
from langchain_community.utilities import GoogleSerperAPIWrapper

# Set up your SERPER_API_KEY in an .env file, e.g.:
# SERPER_API_KEY=<your api key>
load_dotenv()

class SearchTool(BaseTool):
    name: str = "Search"
    description: str = "Useful for search-based queries. Use this to find current information about markets, companies, and trends."
    search: GoogleSerperAPIWrapper = Field(default_factory=GoogleSerperAPIWrapper)

    def _run(self, query: str) -> str:
        """Execute the search query and return results"""
        try:
            return self.search.run(query)
        except Exception as e:
            return f"Error performing search: {str(e)}"

# Create Agents
researcher = Agent(
    role='Research Analyst',
    goal='Gather current market data and trends',
    backstory="""You are an expert research analyst with years of experience in
    gathering market intelligence. You're known for your ability to find
    relevant and up-to-date market information and present it in a clear,
    actionable format.""",
    tools=[SearchTool()],
    verbose=True
)

# rest of the code ...
```
## Conclusion

Tools are pivotal in extending the capabilities of CrewAI agents, enabling them to undertake a broad spectrum of tasks and collaborate effectively.
When building solutions with CrewAI, leverage both custom and existing tools to empower your agents and enhance the AI ecosystem. Consider utilizing error handling, caching mechanisms,
and the flexibility of tool arguments to optimize your agents' performance and capabilities.

docs/concepts/llamaindex-tools.mdx (new file) @@ -0,0 +1,71 @@
---
title: Using LlamaIndex Tools
description: Learn how to integrate LlamaIndex tools with CrewAI agents to enhance search-based queries and more.
icon: toolbox
---

## Using LlamaIndex Tools

<Info>
  CrewAI seamlessly integrates with LlamaIndex's comprehensive toolkit for RAG (Retrieval-Augmented Generation) and agentic pipelines, enabling advanced search-based queries and more.
</Info>

Here are some examples of how to wrap LlamaIndex's built-in tools and query engines for use with CrewAI agents:
```python Code
from crewai import Agent
from crewai_tools import LlamaIndexTool

# Example 1: Initialize from FunctionTool
from llama_index.core.tools import FunctionTool

def your_python_function(query: str) -> str:
    """Placeholder: replace with your own function logic."""
    return f"Result for: {query}"

og_tool = FunctionTool.from_defaults(
    your_python_function,
    name="<name>",
    description='<description>'
)
tool = LlamaIndexTool.from_tool(og_tool)

# Example 2: Initialize from LlamaHub Tools
from llama_index.tools.wolfram_alpha import WolframAlphaToolSpec
wolfram_spec = WolframAlphaToolSpec(app_id="<app_id>")
wolfram_tools = wolfram_spec.to_tool_list()
tools = [LlamaIndexTool.from_tool(t) for t in wolfram_tools]

# Example 3: Initialize Tool from a LlamaIndex Query Engine
# (`index` is a previously built LlamaIndex index, e.g. a VectorStoreIndex;
# see the setup sketch after the steps below)
query_engine = index.as_query_engine()
query_tool = LlamaIndexTool.from_query_engine(
    query_engine,
    name="Uber 2019 10K Query Tool",
    description="Use this tool to lookup the 2019 Uber 10K Annual Report"
)

# Create and assign the tools to an agent
agent = Agent(
    role='Research Analyst',
    goal='Provide up-to-date market analysis',
    backstory='An expert analyst with a keen eye for market trends.',
    tools=[tool, *tools, query_tool]
)

# rest of the code ...
```
## Steps to Get Started

To effectively use the `LlamaIndexTool`, follow these steps:

<Steps>
  <Step title="Package Installation">
    Make sure that the `crewai[tools]` package is installed in your Python environment:
    <CodeGroup>
    ```shell Terminal
    pip install 'crewai[tools]'
    ```
    </CodeGroup>
  </Step>
  <Step title="Install and Use LlamaIndex">
    Follow the [LlamaIndex documentation](https://docs.llamaindex.ai/) to set up a RAG/agent pipeline. A minimal index-building sketch follows after these steps.
  </Step>
</Steps>
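To ground Example 3 above, here is a minimal sketch of building the `index` a query engine comes from. It assumes `llama-index` is installed and that your documents live in a local `./data` directory (a hypothetical path); `SimpleDirectoryReader` and `VectorStoreIndex` are standard LlamaIndex components:

```python
from llama_index.core import SimpleDirectoryReader, VectorStoreIndex
from crewai_tools import LlamaIndexTool

# Load local documents and build an in-memory vector index
documents = SimpleDirectoryReader("./data").load_data()  # hypothetical data directory
index = VectorStoreIndex.from_documents(documents)

# Expose the index to a CrewAI agent as a query tool
query_tool = LlamaIndexTool.from_query_engine(
    index.as_query_engine(),
    name="Local Docs Query Tool",
    description="Use this tool to look up information in the local document collection"
)
```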
docs/concepts/llms.mdx (new file) @@ -0,0 +1,795 @@

---
title: 'LLMs'
description: 'A comprehensive guide to configuring and using Large Language Models (LLMs) in your CrewAI projects'
icon: 'microchip-ai'
---
<Note>
  CrewAI integrates with multiple LLM providers through LiteLLM, giving you the flexibility to choose the right model for your specific use case. This guide will help you understand how to configure and use different LLM providers in your CrewAI projects.
</Note>

## What are LLMs?

Large Language Models (LLMs) are the core intelligence behind CrewAI agents. They enable agents to understand context, make decisions, and generate human-like responses. Here's what you need to know:

<CardGroup cols={2}>
  <Card title="LLM Basics" icon="brain">
    Large Language Models are AI systems trained on vast amounts of text data. They power the intelligence of your CrewAI agents, enabling them to understand and generate human-like text.
  </Card>
  <Card title="Context Window" icon="window">
    The context window determines how much text an LLM can process at once. Larger windows (e.g., 128K tokens) allow for more context but may be more expensive and slower.
  </Card>
  <Card title="Temperature" icon="temperature-three-quarters">
    Temperature (0.0 to 1.0) controls response randomness. Lower values (e.g., 0.2) produce more focused, deterministic outputs, while higher values (e.g., 0.8) increase creativity and variability.
  </Card>
  <Card title="Provider Selection" icon="server">
    Each LLM provider (e.g., OpenAI, Anthropic, Google) offers different models with varying capabilities, pricing, and features. Choose based on your needs for accuracy, speed, and cost.
  </Card>
</CardGroup>
## Setting Up Your LLM

There are three ways to configure LLMs in CrewAI. Choose the method that best fits your workflow:

<Tabs>
  <Tab title="1. Environment Variables">
    The simplest way to get started. Set these variables in your environment:

    ```bash
    # Required: Your API key for authentication
    OPENAI_API_KEY=<your-api-key>

    # Optional: Default model selection
    OPENAI_MODEL_NAME=gpt-4o-mini  # Default if not set

    # Optional: Organization ID (if applicable)
    OPENAI_ORGANIZATION_ID=<your-org-id>
    ```

    <Warning>
      Never commit API keys to version control. Use environment files (.env) or your system's secret management.
    </Warning>
  </Tab>
  <Tab title="2. YAML Configuration">
    Create a YAML file to define your agent configurations. This method is great for version control and team collaboration:

    ```yaml
    researcher:
      role: Research Specialist
      goal: Conduct comprehensive research and analysis
      backstory: A dedicated research professional with years of experience
      verbose: true
      llm: openai/gpt-4o-mini  # your model here
      # (see provider configuration examples below for more)
    ```

    <Info>
      The YAML configuration allows you to:
      - Version control your agent settings
      - Easily switch between different models
      - Share configurations across team members
      - Document model choices and their purposes
    </Info>
  </Tab>
  <Tab title="3. Direct Code">
    For maximum flexibility, configure LLMs directly in your Python code:

    ```python
    from crewai import LLM

    # Basic configuration
    llm = LLM(model="gpt-4")

    # Advanced configuration with detailed parameters
    llm = LLM(
        model="gpt-4o-mini",
        temperature=0.7,        # Higher for more creative outputs
        timeout=120,            # Seconds to wait for response
        max_tokens=4000,        # Maximum length of response
        top_p=0.9,              # Nucleus sampling parameter
        frequency_penalty=0.1,  # Reduce repetition
        presence_penalty=0.1,   # Encourage topic diversity
        response_format={"type": "json"},  # For structured outputs
        seed=42                 # For reproducible results
    )
    ```

    <Info>
      Parameter explanations:
      - `temperature`: Controls randomness (0.0-1.0)
      - `timeout`: Maximum wait time for response
      - `max_tokens`: Limits response length
      - `top_p`: Alternative to temperature for sampling
      - `frequency_penalty`: Reduces word repetition
      - `presence_penalty`: Encourages new topics
      - `response_format`: Specifies output structure
      - `seed`: Ensures consistent outputs
    </Info>
  </Tab>
</Tabs>
## Provider Configuration Examples

CrewAI supports a multitude of LLM providers, each offering unique features, authentication methods, and model capabilities.
In this section, you'll find detailed examples that help you select, configure, and optimize the LLM that best fits your project's needs.

<AccordionGroup>
  <Accordion title="OpenAI">
    Set the following environment variables in your `.env` file:

    ```toml Code
    # Required
    OPENAI_API_KEY=sk-...

    # Optional
    OPENAI_API_BASE=<custom-base-url>
    OPENAI_ORGANIZATION=<your-org-id>
    ```

    Example usage in your CrewAI project:
    ```python Code
    from crewai import LLM

    llm = LLM(
        model="openai/gpt-4",  # call model by provider/model_name
        temperature=0.8,
        max_tokens=150,
        top_p=0.9,
        frequency_penalty=0.1,
        presence_penalty=0.1,
        stop=["END"],
        seed=42
    )
    ```

    OpenAI is one of the leading providers of LLMs, with a wide range of models and features.

    | Model                | Context Window | Best For                                 |
    |----------------------|----------------|------------------------------------------|
    | GPT-4                | 8,192 tokens   | High-accuracy tasks, complex reasoning   |
    | GPT-4 Turbo          | 128,000 tokens | Long-form content, document analysis     |
    | GPT-4o & GPT-4o-mini | 128,000 tokens | Cost-effective large context processing  |
    | o3-mini              | 200,000 tokens | Fast reasoning, complex reasoning        |
    | o1-mini              | 128,000 tokens | Fast reasoning, complex reasoning        |
    | o1-preview           | 128,000 tokens | Fast reasoning, complex reasoning        |
    | o1                   | 200,000 tokens | Fast reasoning, complex reasoning        |
  </Accordion>
  <Accordion title="Anthropic">
    ```toml Code
    # Required
    ANTHROPIC_API_KEY=sk-ant-...

    # Optional
    ANTHROPIC_API_BASE=<custom-base-url>
    ```

    Example usage in your CrewAI project:
    ```python Code
    llm = LLM(
        model="anthropic/claude-3-sonnet-20240229-v1:0",
        temperature=0.7
    )
    ```
  </Accordion>
<Accordion title="Google">
|
||||
Set the following environment variables in your `.env` file:
|
||||
|
||||
```toml Code
|
||||
# Option 1: Gemini accessed with an API key.
|
||||
# https://ai.google.dev/gemini-api/docs/api-key
|
||||
GEMINI_API_KEY=<your-api-key>
|
||||
|
||||
# Option 2: Vertex AI IAM credentials for Gemini, Anthropic, and Model Garden.
|
||||
# https://cloud.google.com/vertex-ai/generative-ai/docs/overview
|
||||
```
|
||||
|
||||
Get credentials from your Google Cloud Console and save it to a JSON file with the following code:
|
||||
```python Code
|
||||
import json
|
||||
|
||||
file_path = 'path/to/vertex_ai_service_account.json'
|
||||
|
||||
# Load the JSON file
|
||||
with open(file_path, 'r') as file:
|
||||
vertex_credentials = json.load(file)
|
||||
|
||||
# Convert the credentials to a JSON string
|
||||
vertex_credentials_json = json.dumps(vertex_credentials)
|
||||
```
|
||||
|
||||
Example usage in your CrewAI project:
|
||||
```python Code
|
||||
from crewai import LLM
|
||||
|
||||
llm = LLM(
|
||||
model="gemini/gemini-1.5-pro-latest",
|
||||
temperature=0.7,
|
||||
vertex_credentials=vertex_credentials_json
|
||||
)
|
||||
```
|
||||
Google offers a range of powerful models optimized for different use cases:
|
||||
|
||||
| Model | Context Window | Best For |
|
||||
|-----------------------|----------------|------------------------------------------------------------------|
|
||||
| gemini-2.0-flash-exp | 1M tokens | Higher quality at faster speed, multimodal model, good for most tasks |
|
||||
| gemini-1.5-flash | 1M tokens | Balanced multimodal model, good for most tasks |
|
||||
| gemini-1.5-flash-8B | 1M tokens | Fastest, most cost-efficient, good for high-frequency tasks |
|
||||
| gemini-1.5-pro | 2M tokens | Best performing, wide variety of reasoning tasks including logical reasoning, coding, and creative collaboration |
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="Azure">
|
||||
```toml Code
|
||||
# Required
|
||||
AZURE_API_KEY=<your-api-key>
|
||||
AZURE_API_BASE=<your-resource-url>
|
||||
AZURE_API_VERSION=<api-version>
|
||||
|
||||
# Optional
|
||||
AZURE_AD_TOKEN=<your-azure-ad-token>
|
||||
AZURE_API_TYPE=<your-azure-api-type>
|
||||
```
|
||||
|
||||
Example usage in your CrewAI project:
|
||||
```python Code
|
||||
llm = LLM(
|
||||
model="azure/gpt-4",
|
||||
api_version="2023-05-15"
|
||||
)
|
||||
```
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="AWS Bedrock">
|
||||
```toml Code
|
||||
AWS_ACCESS_KEY_ID=<your-access-key>
|
||||
AWS_SECRET_ACCESS_KEY=<your-secret-key>
|
||||
AWS_DEFAULT_REGION=<your-region>
|
||||
```
|
||||
|
||||
Example usage in your CrewAI project:
|
||||
```python Code
|
||||
llm = LLM(
|
||||
model="bedrock/anthropic.claude-3-sonnet-20240229-v1:0"
|
||||
)
|
||||
```
|
||||
|
||||
Before using Amazon Bedrock, make sure you have boto3 installed in your environment
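
```shell
pip install boto3
```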

[Amazon Bedrock](https://docs.aws.amazon.com/bedrock/latest/userguide/models-regions.html) is a managed service that provides access to multiple foundation models from top AI companies through a unified API, enabling secure and responsible AI application development.

| Model                    | Context Window     | Best For                                                                                                                             |
|--------------------------|--------------------|---------------------------------------------------------------------------------------------------------------------------------------|
| Amazon Nova Pro          | Up to 300k tokens  | High-performance model balancing accuracy, speed, and cost-effectiveness across diverse tasks.                                        |
| Amazon Nova Micro        | Up to 128k tokens  | High-performance, cost-effective text-only model optimized for lowest-latency responses.                                              |
| Amazon Nova Lite         | Up to 300k tokens  | High-performance, affordable multimodal processing for images, video, and text with real-time capabilities.                           |
| Claude 3.7 Sonnet        | Up to 128k tokens  | High-performance model, best for complex reasoning, coding & AI agents.                                                               |
| Claude 3.5 Sonnet v2     | Up to 200k tokens  | State-of-the-art model specialized in software engineering, agentic capabilities, and computer interaction at optimized cost.         |
| Claude 3.5 Sonnet        | Up to 200k tokens  | High-performance model delivering superior intelligence and reasoning across diverse tasks with optimal speed-cost balance.           |
| Claude 3.5 Haiku         | Up to 200k tokens  | Fast, compact multimodal model optimized for quick responses and seamless human-like interactions.                                    |
| Claude 3 Sonnet          | Up to 200k tokens  | Multimodal model balancing intelligence and speed for high-volume deployments.                                                        |
| Claude 3 Haiku           | Up to 200k tokens  | Compact, high-speed multimodal model optimized for quick responses and natural conversational interactions.                           |
| Claude 3 Opus            | Up to 200k tokens  | Most advanced multimodal model excelling at complex tasks with human-like reasoning and superior contextual understanding.            |
| Claude 2.1               | Up to 200k tokens  | Enhanced version with expanded context window, improved reliability, and reduced hallucinations for long-form and RAG applications.   |
| Claude                   | Up to 100k tokens  | Versatile model excelling in sophisticated dialogue, creative content, and precise instruction following.                             |
| Claude Instant           | Up to 100k tokens  | Fast, cost-effective model for everyday tasks like dialogue, analysis, summarization, and document Q&A.                               |
| Llama 3.1 405B Instruct  | Up to 128k tokens  | Advanced LLM for synthetic data generation, distillation, and inference for chatbots, coding, and domain-specific tasks.              |
| Llama 3.1 70B Instruct   | Up to 128k tokens  | Powers complex conversations with superior contextual understanding, reasoning, and text generation.                                  |
| Llama 3.1 8B Instruct    | Up to 128k tokens  | Advanced state-of-the-art model with language understanding, superior reasoning, and text generation.                                 |
| Llama 3 70B Instruct     | Up to 8k tokens    | Powers complex conversations with superior contextual understanding, reasoning, and text generation.                                  |
| Llama 3 8B Instruct      | Up to 8k tokens    | Advanced state-of-the-art LLM with language understanding, superior reasoning, and text generation.                                   |
| Titan Text G1 - Lite     | Up to 4k tokens    | Lightweight, cost-effective model optimized for English tasks and fine-tuning, with a focus on summarization and content generation.  |
| Titan Text G1 - Express  | Up to 8k tokens    | Versatile model for general language tasks, chat, and RAG applications with support for English and 100+ languages.                   |
| Cohere Command           | Up to 4k tokens    | Model specialized in following user commands and delivering practical enterprise solutions.                                           |
| Jurassic-2 Mid           | Up to 8,191 tokens | Cost-effective model balancing quality and affordability for diverse language tasks like Q&A, summarization, and content generation.  |
| Jurassic-2 Ultra         | Up to 8,191 tokens | Model for advanced text generation and comprehension, excelling in complex tasks like analysis and content creation.                  |
| Jamba-Instruct           | Up to 256k tokens  | Model with an extended context window optimized for cost-effective text generation, summarization, and Q&A.                           |
| Mistral 7B Instruct      | Up to 32k tokens   | This LLM follows instructions, completes requests, and generates creative text.                                                       |
| Mixtral 8x7B Instruct    | Up to 32k tokens   | An MoE LLM that follows instructions, completes requests, and generates creative text.                                                |
</Accordion>

<Accordion title="Amazon SageMaker">
|
||||
```toml Code
|
||||
AWS_ACCESS_KEY_ID=<your-access-key>
|
||||
AWS_SECRET_ACCESS_KEY=<your-secret-key>
|
||||
AWS_DEFAULT_REGION=<your-region>
|
||||
```
|
||||
|
||||
Example usage in your CrewAI project:
|
||||
```python Code
|
||||
llm = LLM(
|
||||
model="sagemaker/<my-endpoint>"
|
||||
)
|
||||
```
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="Mistral">
|
||||
Set the following environment variables in your `.env` file:
|
||||
```toml Code
|
||||
MISTRAL_API_KEY=<your-api-key>
|
||||
```
|
||||
|
||||
Example usage in your CrewAI project:
|
||||
```python Code
|
||||
llm = LLM(
|
||||
model="mistral/mistral-large-latest",
|
||||
temperature=0.7
|
||||
)
|
||||
```
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="Nvidia NIM">
|
||||
Set the following environment variables in your `.env` file:
|
||||
```toml Code
|
||||
NVIDIA_API_KEY=<your-api-key>
|
||||
```
|
||||
|
||||
Example usage in your CrewAI project:
|
||||
```python Code
|
||||
llm = LLM(
|
||||
model="nvidia_nim/meta/llama3-70b-instruct",
|
||||
temperature=0.7
|
||||
)
|
||||
```
|
||||
|
||||
Nvidia NIM provides a comprehensive suite of models for various use cases, from general-purpose tasks to specialized applications.
|
||||
|
||||
| Model | Context Window | Best For |
|
||||
|-------------------------------------------------------------------------|----------------|-------------------------------------------------------------------|
|
||||
| nvidia/mistral-nemo-minitron-8b-8k-instruct | 8,192 tokens | State-of-the-art small language model delivering superior accuracy for chatbot, virtual assistants, and content generation. |
|
||||
| nvidia/nemotron-4-mini-hindi-4b-instruct | 4,096 tokens | A bilingual Hindi-English SLM for on-device inference, tailored specifically for Hindi Language. |
|
||||
| nvidia/llama-3.1-nemotron-70b-instruct | 128k tokens | Customized for enhanced helpfulness in responses |
|
||||
| nvidia/llama3-chatqa-1.5-8b | 128k tokens | Advanced LLM to generate high-quality, context-aware responses for chatbots and search engines. |
|
||||
| nvidia/llama3-chatqa-1.5-70b | 128k tokens | Advanced LLM to generate high-quality, context-aware responses for chatbots and search engines. |
|
||||
| nvidia/vila | 128k tokens | Multi-modal vision-language model that understands text/img/video and creates informative responses |
|
||||
| nvidia/neva-22 | 4,096 tokens | Multi-modal vision-language model that understands text/images and generates informative responses |
|
||||
| nvidia/nemotron-mini-4b-instruct | 8,192 tokens | General-purpose tasks |
|
||||
| nvidia/usdcode-llama3-70b-instruct | 128k tokens | State-of-the-art LLM that answers OpenUSD knowledge queries and generates USD-Python code. |
|
||||
| nvidia/nemotron-4-340b-instruct | 4,096 tokens | Creates diverse synthetic data that mimics the characteristics of real-world data. |
|
||||
| meta/codellama-70b | 100k tokens | LLM capable of generating code from natural language and vice versa. |
|
||||
| meta/llama2-70b | 4,096 tokens | Cutting-edge large language AI model capable of generating text and code in response to prompts. |
|
||||
| meta/llama3-8b-instruct | 8,192 tokens | Advanced state-of-the-art LLM with language understanding, superior reasoning, and text generation. |
|
||||
| meta/llama3-70b-instruct | 8,192 tokens | Powers complex conversations with superior contextual understanding, reasoning and text generation. |
|
||||
| meta/llama-3.1-8b-instruct | 128k tokens | Advanced state-of-the-art model with language understanding, superior reasoning, and text generation. |
|
||||
| meta/llama-3.1-70b-instruct | 128k tokens | Powers complex conversations with superior contextual understanding, reasoning and text generation. |
|
||||
| meta/llama-3.1-405b-instruct | 128k tokens | Advanced LLM for synthetic data generation, distillation, and inference for chatbots, coding, and domain-specific tasks. |
|
||||
| meta/llama-3.2-1b-instruct | 128k tokens | Advanced state-of-the-art small language model with language understanding, superior reasoning, and text generation. |
|
||||
| meta/llama-3.2-3b-instruct | 128k tokens | Advanced state-of-the-art small language model with language understanding, superior reasoning, and text generation. |
|
||||
| meta/llama-3.2-11b-vision-instruct | 128k tokens | Advanced state-of-the-art small language model with language understanding, superior reasoning, and text generation. |
|
||||
| meta/llama-3.2-90b-vision-instruct | 128k tokens | Advanced state-of-the-art small language model with language understanding, superior reasoning, and text generation. |
|
||||
| google/gemma-7b | 8,192 tokens | Cutting-edge text generation model text understanding, transformation, and code generation. |
|
||||
| google/gemma-2b | 8,192 tokens | Cutting-edge text generation model text understanding, transformation, and code generation. |
|
||||
| google/codegemma-7b | 8,192 tokens | Cutting-edge model built on Google's Gemma-7B specialized for code generation and code completion. |
|
||||
| google/codegemma-1.1-7b | 8,192 tokens | Advanced programming model for code generation, completion, reasoning, and instruction following. |
|
||||
| google/recurrentgemma-2b | 8,192 tokens | Novel recurrent architecture based language model for faster inference when generating long sequences. |
|
||||
| google/gemma-2-9b-it | 8,192 tokens | Cutting-edge text generation model text understanding, transformation, and code generation. |
|
||||
| google/gemma-2-27b-it | 8,192 tokens | Cutting-edge text generation model text understanding, transformation, and code generation. |
|
||||
| google/gemma-2-2b-it | 8,192 tokens | Cutting-edge text generation model text understanding, transformation, and code generation. |
|
||||
| google/deplot | 512 tokens | One-shot visual language understanding model that translates images of plots into tables. |
|
||||
| google/paligemma | 8,192 tokens | Vision language model adept at comprehending text and visual inputs to produce informative responses. |
|
||||
| mistralai/mistral-7b-instruct-v0.2 | 32k tokens | This LLM follows instructions, completes requests, and generates creative text. |
|
||||
| mistralai/mixtral-8x7b-instruct-v0.1 | 8,192 tokens | An MOE LLM that follows instructions, completes requests, and generates creative text. |
|
||||
| mistralai/mistral-large | 4,096 tokens | Creates diverse synthetic data that mimics the characteristics of real-world data. |
|
||||
| mistralai/mixtral-8x22b-instruct-v0.1 | 8,192 tokens | Creates diverse synthetic data that mimics the characteristics of real-world data. |
|
||||
| mistralai/mistral-7b-instruct-v0.3 | 32k tokens | This LLM follows instructions, completes requests, and generates creative text. |
|
||||
| nv-mistralai/mistral-nemo-12b-instruct | 128k tokens | Most advanced language model for reasoning, code, multilingual tasks; runs on a single GPU. |
|
||||
| mistralai/mamba-codestral-7b-v0.1 | 256k tokens | Model for writing and interacting with code across a wide range of programming languages and tasks. |
|
||||
| microsoft/phi-3-mini-128k-instruct | 128K tokens | Lightweight, state-of-the-art open LLM with strong math and logical reasoning skills. |
|
||||
| microsoft/phi-3-mini-4k-instruct | 4,096 tokens | Lightweight, state-of-the-art open LLM with strong math and logical reasoning skills. |
|
||||
| microsoft/phi-3-small-8k-instruct | 8,192 tokens | Lightweight, state-of-the-art open LLM with strong math and logical reasoning skills. |
|
||||
| microsoft/phi-3-small-128k-instruct | 128K tokens | Lightweight, state-of-the-art open LLM with strong math and logical reasoning skills. |
|
||||
| microsoft/phi-3-medium-4k-instruct | 4,096 tokens | Lightweight, state-of-the-art open LLM with strong math and logical reasoning skills. |
|
||||
| microsoft/phi-3-medium-128k-instruct | 128K tokens | Lightweight, state-of-the-art open LLM with strong math and logical reasoning skills. |
|
||||
| microsoft/phi-3.5-mini-instruct | 128K tokens | Lightweight multilingual LLM powering AI applications in latency bound, memory/compute constrained environments |
|
||||
| microsoft/phi-3.5-moe-instruct | 128K tokens | Advanced LLM based on Mixture of Experts architecure to deliver compute efficient content generation |
|
||||
| microsoft/kosmos-2 | 1,024 tokens | Groundbreaking multimodal model designed to understand and reason about visual elements in images. |
|
||||
| microsoft/phi-3-vision-128k-instruct | 128k tokens | Cutting-edge open multimodal model exceling in high-quality reasoning from images. |
|
||||
| microsoft/phi-3.5-vision-instruct | 128k tokens | Cutting-edge open multimodal model exceling in high-quality reasoning from images. |
|
||||
| databricks/dbrx-instruct | 12k tokens | A general-purpose LLM with state-of-the-art performance in language understanding, coding, and RAG. |
|
||||
| snowflake/arctic | 1,024 tokens | Delivers high efficiency inference for enterprise applications focused on SQL generation and coding. |
|
||||
| aisingapore/sea-lion-7b-instruct | 4,096 tokens | LLM to represent and serve the linguistic and cultural diversity of Southeast Asia |
|
||||
| ibm/granite-8b-code-instruct | 4,096 tokens | Software programming LLM for code generation, completion, explanation, and multi-turn conversion. |
|
||||
| ibm/granite-34b-code-instruct | 8,192 tokens | Software programming LLM for code generation, completion, explanation, and multi-turn conversion. |
|
||||
| ibm/granite-3.0-8b-instruct | 4,096 tokens | Advanced Small Language Model supporting RAG, summarization, classification, code, and agentic AI |
|
||||
| ibm/granite-3.0-3b-a800m-instruct | 4,096 tokens | Highly efficient Mixture of Experts model for RAG, summarization, entity extraction, and classification |
|
||||
| mediatek/breeze-7b-instruct | 4,096 tokens | Creates diverse synthetic data that mimics the characteristics of real-world data. |
|
||||
| upstage/solar-10.7b-instruct | 4,096 tokens | Excels in NLP tasks, particularly in instruction-following, reasoning, and mathematics. |
|
||||
| writer/palmyra-med-70b-32k | 32k tokens | Leading LLM for accurate, contextually relevant responses in the medical domain. |
|
||||
| writer/palmyra-med-70b | 32k tokens | Leading LLM for accurate, contextually relevant responses in the medical domain. |
|
||||
| writer/palmyra-fin-70b-32k | 32k tokens | Specialized LLM for financial analysis, reporting, and data processing |
|
||||
| 01-ai/yi-large | 32k tokens | Powerful model trained on English and Chinese for diverse tasks including chatbot and creative writing. |
|
||||
| deepseek-ai/deepseek-coder-6.7b-instruct | 2k tokens | Powerful coding model offering advanced capabilities in code generation, completion, and infilling |
|
||||
| rakuten/rakutenai-7b-instruct | 1,024 tokens | Advanced state-of-the-art LLM with language understanding, superior reasoning, and text generation. |
|
||||
| rakuten/rakutenai-7b-chat | 1,024 tokens | Advanced state-of-the-art LLM with language understanding, superior reasoning, and text generation. |
|
||||
| baichuan-inc/baichuan2-13b-chat | 4,096 tokens | Support Chinese and English chat, coding, math, instruction following, solving quizzes |
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="Local NVIDIA NIM Deployed using WSL2">
|
||||
|
||||
NVIDIA NIM enables you to run powerful LLMs locally on your Windows machine using WSL2 (Windows Subsystem for Linux).
|
||||
This approach allows you to leverage your NVIDIA GPU for private, secure, and cost-effective AI inference without relying on cloud services.
|
||||
Perfect for development, testing, or production scenarios where data privacy or offline capabilities are required.
|
||||
|
||||
Here is a step-by-step guide to setting up a local NVIDIA NIM model:
|
||||
|
||||
1. Follow installation instructions from [NVIDIA Website](https://docs.nvidia.com/nim/wsl2/latest/getting-started.html)
|
||||
|
||||
2. Install the local model. For Llama 3.1-8b follow [instructions](https://build.nvidia.com/meta/llama-3_1-8b-instruct/deploy)
|
||||
|
||||
3. Configure your crewai local models:
|
||||
|
||||
```python Code
|
||||
from crewai.llm import LLM
|
||||
|
||||
local_nvidia_nim_llm = LLM(
|
||||
model="openai/meta/llama-3.1-8b-instruct", # it's an openai-api compatible model
|
||||
base_url="http://localhost:8000/v1",
|
||||
api_key="<your_api_key|any text if you have not configured it>", # api_key is required, but you can use any text
|
||||
)
|
||||
|
||||
# Then you can use it in your crew:
|
||||
|
||||
@CrewBase
|
||||
class MyCrew():
|
||||
# ...
|
||||
|
||||
@agent
|
||||
def researcher(self) -> Agent:
|
||||
return Agent(
|
||||
config=self.agents_config['researcher'],
|
||||
llm=local_nvidia_nim_llm
|
||||
)
|
||||
|
||||
# ...
|
||||
```
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="Groq">
|
||||
Set the following environment variables in your `.env` file:
|
||||
|
||||
```toml Code
|
||||
GROQ_API_KEY=<your-api-key>
|
||||
```
|
||||
|
||||
Example usage in your CrewAI project:
|
||||
```python Code
|
||||
llm = LLM(
|
||||
model="groq/llama-3.2-90b-text-preview",
|
||||
temperature=0.7
|
||||
)
|
||||
```
|
||||
| Model | Context Window | Best For |
|
||||
|-------------------|------------------|--------------------------------------------|
|
||||
| Llama 3.1 70B/8B | 131,072 tokens | High-performance, large context tasks |
|
||||
| Llama 3.2 Series | 8,192 tokens | General-purpose tasks |
|
||||
| Mixtral 8x7B | 32,768 tokens | Balanced performance and context |
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="IBM watsonx.ai">
|
||||
Set the following environment variables in your `.env` file:
|
||||
```toml Code
|
||||
# Required
|
||||
WATSONX_URL=<your-url>
|
||||
WATSONX_APIKEY=<your-apikey>
|
||||
WATSONX_PROJECT_ID=<your-project-id>
|
||||
|
||||
# Optional
|
||||
WATSONX_TOKEN=<your-token>
|
||||
WATSONX_DEPLOYMENT_SPACE_ID=<your-space-id>
|
||||
```
|
||||
|
||||
Example usage in your CrewAI project:
|
||||
```python Code
|
||||
llm = LLM(
|
||||
model="watsonx/meta-llama/llama-3-1-70b-instruct",
|
||||
base_url="https://api.watsonx.ai/v1"
|
||||
)
|
||||
```
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="Ollama (Local LLMs)">
|
||||
1. Install Ollama: [ollama.ai](https://ollama.ai/)
|
||||
2. Run a model: `ollama run llama3`
|
||||
3. Configure:
|
||||
|
||||
```python Code
|
||||
llm = LLM(
|
||||
model="ollama/llama3:70b",
|
||||
base_url="http://localhost:11434"
|
||||
)
|
||||
```
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="Fireworks AI">
|
||||
Set the following environment variables in your `.env` file:
|
||||
```toml Code
|
||||
FIREWORKS_API_KEY=<your-api-key>
|
||||
```
|
||||
|
||||
Example usage in your CrewAI project:
|
||||
```python Code
|
||||
llm = LLM(
|
||||
model="fireworks_ai/accounts/fireworks/models/llama-v3-70b-instruct",
|
||||
temperature=0.7
|
||||
)
|
||||
```
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="Perplexity AI">
|
||||
Set the following environment variables in your `.env` file:
|
||||
```toml Code
|
||||
PERPLEXITY_API_KEY=<your-api-key>
|
||||
```
|
||||
|
||||
Example usage in your CrewAI project:
|
||||
```python Code
|
||||
llm = LLM(
|
||||
model="llama-3.1-sonar-large-128k-online",
|
||||
base_url="https://api.perplexity.ai/"
|
||||
)
|
||||
```
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="Hugging Face">
|
||||
Set the following environment variables in your `.env` file:
|
||||
```toml Code
|
||||
HUGGINGFACE_API_KEY=<your-api-key>
|
||||
```
|
||||
|
||||
Example usage in your CrewAI project:
|
||||
```python Code
|
||||
llm = LLM(
|
||||
model="huggingface/meta-llama/Meta-Llama-3.1-8B-Instruct",
|
||||
base_url="your_api_endpoint"
|
||||
)
|
||||
```
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="SambaNova">
|
||||
Set the following environment variables in your `.env` file:
|
||||
|
||||
```toml Code
|
||||
SAMBANOVA_API_KEY=<your-api-key>
|
||||
```
|
||||
|
||||
Example usage in your CrewAI project:
|
||||
```python Code
|
||||
llm = LLM(
|
||||
model="sambanova/Meta-Llama-3.1-8B-Instruct",
|
||||
temperature=0.7
|
||||
)
|
||||
```
|
||||
| Model | Context Window | Best For |
|
||||
|--------------------|------------------------|----------------------------------------------|
|
||||
| Llama 3.1 70B/8B | Up to 131,072 tokens | High-performance, large context tasks |
|
||||
| Llama 3.1 405B | 8,192 tokens | High-performance and output quality |
|
||||
| Llama 3.2 Series | 8,192 tokens | General-purpose, multimodal tasks |
|
||||
| Llama 3.3 70B | Up to 131,072 tokens | High-performance and output quality |
|
||||
| Qwen2 familly | 8,192 tokens | High-performance and output quality |
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="Cerebras">
|
||||
Set the following environment variables in your `.env` file:
|
||||
```toml Code
|
||||
# Required
|
||||
CEREBRAS_API_KEY=<your-api-key>
|
||||
```
|
||||
|
||||
Example usage in your CrewAI project:
|
||||
```python Code
|
||||
llm = LLM(
|
||||
model="cerebras/llama3.1-70b",
|
||||
temperature=0.7,
|
||||
max_tokens=8192
|
||||
)
|
||||
```
|
||||
|
||||
<Info>
|
||||
Cerebras features:
|
||||
- Fast inference speeds
|
||||
- Competitive pricing
|
||||
- Good balance of speed and quality
|
||||
- Support for long context windows
|
||||
</Info>
|
||||
</Accordion>
|
||||
|
||||
<Accordion title="Open Router">
|
||||
Set the following environment variables in your `.env` file:
|
||||
```toml Code
|
||||
OPENROUTER_API_KEY=<your-api-key>
|
||||
```
|
||||
|
||||
Example usage in your CrewAI project:
|
||||
```python Code
|
||||
llm = LLM(
|
||||
model="openrouter/deepseek/deepseek-r1",
|
||||
base_url="https://openrouter.ai/api/v1",
|
||||
api_key=OPENROUTER_API_KEY
|
||||
)
|
||||
```
|
||||
|
||||
<Info>
|
||||
Open Router models:
|
||||
- openrouter/deepseek/deepseek-r1
|
||||
- openrouter/deepseek/deepseek-chat
|
||||
</Info>
|
||||
</Accordion>
|
||||
</AccordionGroup>
|
||||
|
||||
## Streaming Responses

CrewAI supports streaming responses from LLMs, allowing your application to receive and process outputs in real-time as they're generated.

<Tabs>
<Tab title="Basic Setup">
Enable streaming by setting the `stream` parameter to `True` when initializing your LLM:

```python
from crewai import LLM

# Create an LLM with streaming enabled
llm = LLM(
    model="openai/gpt-4o",
    stream=True  # Enable streaming
)
```

When streaming is enabled, responses are delivered in chunks as they're generated, creating a more responsive user experience.
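
A minimal usage sketch, assuming `llm.call` still returns the complete text once the stream finishes (chunk-level access goes through the event bus, covered in the next tab):

```python
from crewai import LLM

llm = LLM(model="openai/gpt-4o", stream=True)

# Blocks until the stream completes; chunks are emitted as events
# while the response is being generated.
full_text = llm.call("Summarize the benefits of streaming in one sentence.")
print(full_text)
```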
</Tab>

<Tab title="Event Handling">
|
||||
CrewAI emits events for each chunk received during streaming:
|
||||
|
||||
```python
|
||||
from crewai import LLM
|
||||
from crewai.utilities.events import EventHandler, LLMStreamChunkEvent
|
||||
|
||||
class MyEventHandler(EventHandler):
|
||||
def on_llm_stream_chunk(self, event: LLMStreamChunkEvent):
|
||||
# Process each chunk as it arrives
|
||||
print(f"Received chunk: {event.chunk}")
|
||||
|
||||
# Register the event handler
|
||||
from crewai.utilities.events import crewai_event_bus
|
||||
crewai_event_bus.register_handler(MyEventHandler())
|
||||
```
|
||||
</Tab>
|
||||
</Tabs>
|
||||
|
||||
## Structured LLM Calls

CrewAI supports structured responses from LLM calls by allowing you to define a `response_format` using a Pydantic model. This enables the framework to automatically parse and validate the output, making it easier to integrate the response into your application without manual post-processing.

For example, you can define a Pydantic model to represent the expected response structure and pass it as the `response_format` when instantiating the LLM. The model will then be used to convert the LLM output into a structured Python object.

```python Code
from pydantic import BaseModel

from crewai import LLM

class Dog(BaseModel):
    name: str
    age: int
    breed: str


llm = LLM(model="gpt-4o", response_format=Dog)

response = llm.call(
    "Analyze the following messages and return the name, age, and breed. "
    "Meet Kona! She is 3 years old and is a black german shepherd."
)
print(response)

# Output:
# Dog(name='Kona', age=3, breed='black german shepherd')
```

## Advanced Features and Optimization

Learn how to get the most out of your LLM configuration:

<AccordionGroup>
<Accordion title="Context Window Management">
CrewAI includes smart context management features:

```python
from crewai import LLM

# CrewAI automatically handles:
# 1. Token counting and tracking
# 2. Content summarization when needed
# 3. Task splitting for large contexts

llm = LLM(
    model="gpt-4",
    max_tokens=4000,  # Limit response length
)
```

<Info>
Best practices for context management:
1. Choose models with appropriate context windows
2. Pre-process long inputs when possible
3. Use chunking for large documents (see the sketch below)
4. Monitor token usage to optimize costs
</Info>
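
A minimal sketch of pre-processing a long document by chunking before sending it to the LLM. This is a character-based approximation (the sizes are assumptions; tune `chunk_size` and `overlap` to your model's context window), not a CrewAI API:

```python
def chunk_text(text: str, chunk_size: int = 8000, overlap: int = 200) -> list[str]:
    """Split text into overlapping chunks so each fits the context window."""
    chunks = []
    start = 0
    while start < len(text):
        end = min(start + chunk_size, len(text))
        chunks.append(text[start:end])
        if end == len(text):
            break
        start = end - overlap  # overlap preserves context across boundaries
    return chunks

# Each chunk can then be summarized or processed in its own task.
```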
</Accordion>

<Accordion title="Performance Optimization">
<Steps>
<Step title="Token Usage Optimization">
Choose the right context window for your task:
- Small tasks (up to 4K tokens): Standard models
- Medium tasks (between 4K-32K): Enhanced models
- Large tasks (over 32K): Large context models

```python
# Configure model with appropriate settings
llm = LLM(
    model="openai/gpt-4-turbo-preview",
    temperature=0.7,  # Adjust based on task
    max_tokens=4096,  # Set based on output needs
    timeout=300       # Longer timeout for complex tasks
)
```
<Tip>
- Lower temperature (0.1 to 0.3) for factual responses
- Higher temperature (0.7 to 0.9) for creative tasks
</Tip>
</Step>

<Step title="Best Practices">
1. Monitor token usage
2. Implement rate limiting (a minimal sketch follows these steps)
3. Use caching when possible
4. Set appropriate max_tokens limits
</Step>
</Steps>
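
A minimal sketch of client-side rate limiting, wrapping any LLM with a fixed minimum interval between requests. The `requests_per_minute` value is an assumption; tune it to your provider's actual quota:

```python
import time

class RateLimitedLLM:
    """Throttle calls to an underlying LLM to stay under a request quota."""

    def __init__(self, llm, requests_per_minute: int = 60):
        self.llm = llm
        self.min_interval = 60.0 / requests_per_minute
        self._last_call = 0.0

    def call(self, prompt: str) -> str:
        wait = self.min_interval - (time.time() - self._last_call)
        if wait > 0:
            time.sleep(wait)  # sleep just long enough to respect the quota
        self._last_call = time.time()
        return self.llm.call(prompt)
```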

<Info>
Remember to regularly monitor your token usage and adjust your configuration as needed to optimize costs and performance.
</Info>
</Accordion>
</AccordionGroup>

## Common Issues and Solutions

<Tabs>
<Tab title="Authentication">
<Warning>
Most authentication issues can be resolved by checking API key format and environment variable names.
</Warning>

```bash
# OpenAI
OPENAI_API_KEY=sk-...

# Anthropic
ANTHROPIC_API_KEY=sk-ant-...
```
</Tab>
<Tab title="Model Names">
<Check>
Always include the provider prefix in model names
</Check>

```python
# Correct
llm = LLM(model="openai/gpt-4")

# Incorrect
llm = LLM(model="gpt-4")
```
</Tab>
<Tab title="Context Length">
<Tip>
Use larger context models for extensive tasks
</Tip>

```python
# Large context model
llm = LLM(model="openai/gpt-4o")  # 128K tokens
```
</Tab>
</Tabs>

728
docs/concepts/memory.mdx
Normal file
@@ -0,0 +1,728 @@
---
title: Memory
description: Leveraging memory systems in the CrewAI framework to enhance agent capabilities.
icon: database
---

## Introduction to Memory Systems in CrewAI

The CrewAI framework introduces a sophisticated memory system designed to significantly enhance the capabilities of AI agents.
This system comprises `short-term memory`, `long-term memory`, `entity memory`, and `contextual memory`, each serving a unique purpose in aiding agents to remember,
reason, and learn from past interactions.

## Memory System Components

| Component             | Description                                                                                                               |
| :-------------------- | :------------------------------------------------------------------------------------------------------------------------ |
| **Short-Term Memory** | Temporarily stores recent interactions and outcomes using `RAG`, enabling agents to recall and utilize information relevant to their current context during the current execution. |
| **Long-Term Memory**  | Preserves valuable insights and learnings from past executions, allowing agents to build and refine their knowledge over time. |
| **Entity Memory**     | Captures and organizes information about entities (people, places, concepts) encountered during tasks, facilitating deeper understanding and relationship mapping. Uses `RAG` for storing entity information. |
| **Contextual Memory** | Maintains the context of interactions by combining `ShortTermMemory`, `LongTermMemory`, and `EntityMemory`, aiding in the coherence and relevance of agent responses over a sequence of tasks or a conversation. |
| **External Memory**   | Enables integration with external memory systems and providers (like Mem0), allowing for specialized memory storage and retrieval across different applications. Supports custom storage implementations for flexible memory management. |
| **User Memory**       | ⚠️ **DEPRECATED**: This component is deprecated and will be removed in a future version. Please use [External Memory](#using-external-memory) instead. |

## How Memory Systems Empower Agents

1. **Contextual Awareness**: With short-term and contextual memory, agents gain the ability to maintain context over a conversation or task sequence, leading to more coherent and relevant responses.

2. **Experience Accumulation**: Long-term memory allows agents to accumulate experiences, learning from past actions to improve future decision-making and problem-solving.

3. **Entity Understanding**: By maintaining entity memory, agents can recognize and remember key entities, enhancing their ability to process and interact with complex information.

## Implementing Memory in Your Crew

When configuring a crew, you can enable and customize each memory component to suit the crew's objectives and the nature of tasks it will perform.
By default, the memory system is disabled; you can activate it by setting `memory=True` in the crew configuration.
The memory will use OpenAI embeddings by default, but you can change this by setting `embedder` to a different model.
It's also possible to initialize the memory with your own instance.

The `embedder` only applies to **Short-Term Memory**, which uses Chroma for RAG.
The **Long-Term Memory** uses SQLite3 to store task results. Currently, there is no way to override these storage implementations.
The data storage files are saved into a platform-specific location found using the appdirs package,
and the name of the project can be overridden using the **CREWAI_STORAGE_DIR** environment variable (see the sketch below).
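
A minimal sketch of overriding the storage location before the crew runs (assuming `CREWAI_STORAGE_DIR` is read when the storage backends are initialized):

```python Code
import os
from pathlib import Path

# Keep memory files inside the project instead of the platform default
os.environ["CREWAI_STORAGE_DIR"] = str(Path.cwd() / ".crewai_storage")
```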

### Example: Configuring Memory for a Crew

```python Code
from crewai import Crew, Agent, Task, Process

# Assemble your crew with memory capabilities
my_crew = Crew(
    agents=[...],
    tasks=[...],
    process=Process.sequential,
    memory=True,
    verbose=True
)
```

### Example: Use Custom Memory Instances, e.g. FAISS as the VectorDB

```python Code
from crewai import Crew, Process
from crewai.memory import LongTermMemory, ShortTermMemory, EntityMemory
from crewai.memory.storage.rag_storage import RAGStorage
from crewai.memory.storage.ltm_sqlite_storage import LTMSQLiteStorage

# Assemble your crew with memory capabilities
my_crew: Crew = Crew(
    agents=[...],
    tasks=[...],
    process=Process.sequential,
    memory=True,
    # Long-term memory for persistent storage across sessions
    long_term_memory=LongTermMemory(
        storage=LTMSQLiteStorage(
            db_path="/my_crew1/long_term_memory_storage.db"
        )
    ),
    # Short-term memory for current context using RAG
    short_term_memory=ShortTermMemory(
        storage=RAGStorage(
            embedder_config={
                "provider": "openai",
                "config": {
                    "model": 'text-embedding-3-small'
                }
            },
            type="short_term",
            path="/my_crew1/"
        )
    ),
    # Entity memory for tracking key information about entities
    entity_memory=EntityMemory(
        storage=RAGStorage(
            embedder_config={
                "provider": "openai",
                "config": {
                    "model": 'text-embedding-3-small'
                }
            },
            type="short_term",
            path="/my_crew1/"
        )
    ),
    verbose=True,
)
```

## Security Considerations

When configuring memory storage:
- Use environment variables for storage paths (e.g., `CREWAI_STORAGE_DIR`)
- Never hardcode sensitive information like database credentials
- Consider access permissions for storage directories
- Use relative paths when possible to maintain portability

Example using environment variables:
```python
import os
from crewai import Crew
from crewai.memory import LongTermMemory
from crewai.memory.storage.ltm_sqlite_storage import LTMSQLiteStorage

# Configure storage path using environment variable
storage_path = os.getenv("CREWAI_STORAGE_DIR", "./storage")
crew = Crew(
    memory=True,
    long_term_memory=LongTermMemory(
        storage=LTMSQLiteStorage(
            db_path=f"{storage_path}/memory.db"
        )
    )
)
```

## Configuration Examples

### Basic Memory Configuration
```python
from crewai import Crew

# Simple memory configuration
crew = Crew(memory=True)  # Uses default storage locations
```

### Custom Storage Configuration
```python
from crewai import Crew
from crewai.memory import LongTermMemory
from crewai.memory.storage.ltm_sqlite_storage import LTMSQLiteStorage

# Configure custom storage paths
crew = Crew(
    memory=True,
    long_term_memory=LongTermMemory(
        storage=LTMSQLiteStorage(db_path="./memory.db")
    )
)
```

## Integrating Mem0 for Enhanced User Memory

[Mem0](https://mem0.ai/) is a self-improving memory layer for LLM applications, enabling personalized AI experiences.

### Using the Mem0 API platform

To include user-specific memory, get your API key [here](https://app.mem0.ai/dashboard/api-keys) and refer to the [docs](https://docs.mem0.ai/platform/quickstart#4-1-create-memories) for adding user preferences. In this case, `user_memory` is set to `MemoryClient` from mem0.

```python Code
import os
from crewai import Crew, Process
from mem0 import MemoryClient

# Set environment variables for Mem0
os.environ["MEM0_API_KEY"] = "m0-xx"

# Step 1: Create a Crew with User Memory

crew = Crew(
    agents=[...],
    tasks=[...],
    verbose=True,
    process=Process.sequential,
    memory=True,
    memory_config={
        "provider": "mem0",
        "config": {"user_id": "john"},
        "user_memory": {}  # Set user_memory explicitly to a dictionary; we are working on this issue.
    },
)
```

#### Additional Memory Configuration Options
If you want to access a specific organization and project, you can set the `org_id` and `project_id` parameters in the memory configuration.

```python Code
from crewai import Crew

crew = Crew(
    agents=[...],
    tasks=[...],
    verbose=True,
    memory=True,
    memory_config={
        "provider": "mem0",
        "config": {"user_id": "john", "org_id": "my_org_id", "project_id": "my_project_id"},
        "user_memory": {}  # Set user_memory explicitly to a dictionary; we are working on this issue.
    },
)
```

### Using Local Mem0 memory
If you want to use local Mem0 memory with a custom configuration, you can set the parameter `local_mem0_config` in the config itself.
If both the `MEM0_API_KEY` environment variable and `local_mem0_config` are set, the API platform takes priority over the local configuration.
Check the Mem0 [local configuration docs](https://docs.mem0.ai/open-source/python-quickstart#run-mem0-locally) for details.
In this case, `user_memory` is set to `Memory` from mem0.

```python Code
from crewai import Crew

# Local Mem0 config
config = {
    "vector_store": {
        "provider": "qdrant",
        "config": {
            "host": "localhost",
            "port": 6333
        }
    },
    "llm": {
        "provider": "openai",
        "config": {
            "api_key": "your-api-key",
            "model": "gpt-4"
        }
    },
    "embedder": {
        "provider": "openai",
        "config": {
            "api_key": "your-api-key",
            "model": "text-embedding-3-small"
        }
    },
    "graph_store": {
        "provider": "neo4j",
        "config": {
            "url": "neo4j+s://your-instance",
            "username": "neo4j",
            "password": "password"
        }
    },
    "history_db_path": "/path/to/history.db",
    "version": "v1.1",
    "custom_fact_extraction_prompt": "Optional custom prompt for fact extraction for memory",
    "custom_update_memory_prompt": "Optional custom prompt for update memory"
}

crew = Crew(
    agents=[...],
    tasks=[...],
    verbose=True,
    memory=True,
    memory_config={
        "provider": "mem0",
        "config": {"user_id": "john", "local_mem0_config": config},
        "user_memory": {}  # Set user_memory explicitly to a dictionary; we are working on this issue.
    },
)
```

### Using External Memory

External Memory is a powerful feature that allows you to integrate external memory systems with your CrewAI applications. This is particularly useful when you want to use specialized memory providers or maintain memory across different applications.

#### Basic Usage with Mem0

The most common way to use External Memory is with Mem0 as the provider:

```python
from crewai import Agent, Crew, Process, Task
from crewai.memory.external.external_memory import ExternalMemory

agent = Agent(
    role="You are a helpful assistant",
    goal="Plan a vacation for the user",
    backstory="You are a helpful assistant that can plan a vacation for the user",
    verbose=True,
)
task = Task(
    description="Give things related to the user's vacation",
    expected_output="A plan for the vacation",
    agent=agent,
)

crew = Crew(
    agents=[agent],
    tasks=[task],
    verbose=True,
    process=Process.sequential,
    memory=True,
    external_memory=ExternalMemory(
        embedder_config={"provider": "mem0", "config": {"user_id": "U-123"}}  # you can provide an entire Mem0 configuration
    ),
)

crew.kickoff(
    inputs={"question": "which destination is better for a beach vacation?"}
)
```

#### Using External Memory with Custom Storage

You can also create custom storage implementations for External Memory. Here's an example of how to create a custom storage:

```python
from crewai import Agent, Crew, Process, Task
from crewai.memory.external.external_memory import ExternalMemory
from crewai.memory.storage.interface import Storage


class CustomStorage(Storage):
    def __init__(self):
        self.memories = []

    def save(self, value, metadata=None, agent=None):
        self.memories.append({"value": value, "metadata": metadata, "agent": agent})

    def search(self, query, limit=10, score_threshold=0.5):
        # Implement your search logic here
        return []

    def reset(self):
        self.memories = []


# Create external memory with custom storage
external_memory = ExternalMemory(
    storage=CustomStorage(),
    embedder_config={"provider": "mem0", "config": {"user_id": "U-123"}},
)

agent = Agent(
    role="You are a helpful assistant",
    goal="Plan a vacation for the user",
    backstory="You are a helpful assistant that can plan a vacation for the user",
    verbose=True,
)
task = Task(
    description="Give things related to the user's vacation",
    expected_output="A plan for the vacation",
    agent=agent,
)

crew = Crew(
    agents=[agent],
    tasks=[task],
    verbose=True,
    process=Process.sequential,
    memory=True,
    external_memory=external_memory,
)

crew.kickoff(
    inputs={"question": "which destination is better for a beach vacation?"}
)
```

## Additional Embedding Providers

### Using OpenAI embeddings (already default)
```python Code
from crewai import Crew, Agent, Task, Process

my_crew = Crew(
    agents=[...],
    tasks=[...],
    process=Process.sequential,
    memory=True,
    verbose=True,
    embedder={
        "provider": "openai",
        "config": {
            "model": 'text-embedding-3-small'
        }
    }
)
```

Alternatively, you can construct an `OpenAIEmbeddingFunction` yourself and pass it in through the `custom` provider (see [Adding Custom Embedding Function](#adding-custom-embedding-function) below).

Example:
```python Code
from crewai import Crew, Agent, Task, Process
from chromadb.utils.embedding_functions import OpenAIEmbeddingFunction

my_crew = Crew(
    agents=[...],
    tasks=[...],
    process=Process.sequential,
    memory=True,
    verbose=True,
    embedder={
        "provider": "custom",
        "config": {
            "embedder": OpenAIEmbeddingFunction(
                api_key="YOUR_API_KEY",
                model_name="text-embedding-3-small"
            )
        }
    }
)
```

### Using Ollama embeddings

```python Code
from crewai import Crew, Agent, Task, Process

my_crew = Crew(
    agents=[...],
    tasks=[...],
    process=Process.sequential,
    memory=True,
    verbose=True,
    embedder={
        "provider": "ollama",
        "config": {
            "model": "mxbai-embed-large"
        }
    }
)
```

### Using Google AI embeddings

#### Prerequisites
Before using Google AI embeddings, ensure you have:
- Access to the Gemini API
- The necessary API keys and permissions

You will need to update your *pyproject.toml* dependencies:
```toml
dependencies = [
    "google-generativeai>=0.8.4",  # main version in January/2025 - crewai v0.100.0 and crewai-tools 0.33.0
    "crewai[tools]>=0.100.0,<1.0.0"
]
```

```python Code
from crewai import Crew, Agent, Task, Process

my_crew = Crew(
    agents=[...],
    tasks=[...],
    process=Process.sequential,
    memory=True,
    verbose=True,
    embedder={
        "provider": "google",
        "config": {
            "api_key": "<YOUR_API_KEY>",
            "model": "<model_name>"
        }
    }
)
```

### Using Azure OpenAI embeddings

```python Code
from crewai import Crew, Agent, Task, Process

my_crew = Crew(
    agents=[...],
    tasks=[...],
    process=Process.sequential,
    memory=True,
    verbose=True,
    embedder={
        "provider": "openai",
        "config": {
            "api_key": "YOUR_API_KEY",
            "api_base": "YOUR_API_BASE_PATH",
            "api_version": "YOUR_API_VERSION",
            "model_name": 'text-embedding-3-small'
        }
    }
)
```

### Using Vertex AI embeddings

```python Code
from crewai import Crew, Agent, Task, Process

my_crew = Crew(
    agents=[...],
    tasks=[...],
    process=Process.sequential,
    memory=True,
    verbose=True,
    embedder={
        "provider": "vertexai",
        "config": {
            "project_id": "YOUR_PROJECT_ID",
            "region": "YOUR_REGION",
            "api_key": "YOUR_API_KEY",
            "model_name": "textembedding-gecko"
        }
    }
)
```

### Using Cohere embeddings

```python Code
from crewai import Crew, Agent, Task, Process

my_crew = Crew(
    agents=[...],
    tasks=[...],
    process=Process.sequential,
    memory=True,
    verbose=True,
    embedder={
        "provider": "cohere",
        "config": {
            "api_key": "YOUR_API_KEY",
            "model": "<model_name>"
        }
    }
)
```

### Using VoyageAI embeddings

```python Code
from crewai import Crew, Agent, Task, Process

my_crew = Crew(
    agents=[...],
    tasks=[...],
    process=Process.sequential,
    memory=True,
    verbose=True,
    embedder={
        "provider": "voyageai",
        "config": {
            "api_key": "YOUR_API_KEY",
            "model": "<model_name>"
        }
    }
)
```

### Using HuggingFace embeddings

```python Code
from crewai import Crew, Agent, Task, Process

my_crew = Crew(
    agents=[...],
    tasks=[...],
    process=Process.sequential,
    memory=True,
    verbose=True,
    embedder={
        "provider": "huggingface",
        "config": {
            "api_url": "<api_url>",
        }
    }
)
```

### Using Watson embeddings

```python Code
from crewai import Crew, Agent, Task, Process

# Note: Ensure you have installed and imported `ibm_watsonx_ai` for Watson embeddings to work.

my_crew = Crew(
    agents=[...],
    tasks=[...],
    process=Process.sequential,
    memory=True,
    verbose=True,
    embedder={
        "provider": "watson",
        "config": {
            "model": "<model_name>",
            "api_url": "<api_url>",
            "api_key": "<YOUR_API_KEY>",
            "project_id": "<YOUR_PROJECT_ID>",
        }
    }
)
```

### Using Amazon Bedrock embeddings

```python Code
# Note: Ensure you have installed `boto3` for Bedrock embeddings to work.

import os
import boto3
from crewai import Crew, Agent, Task, Process

boto3_session = boto3.Session(
    region_name=os.environ.get("AWS_REGION_NAME"),
    aws_access_key_id=os.environ.get("AWS_ACCESS_KEY_ID"),
    aws_secret_access_key=os.environ.get("AWS_SECRET_ACCESS_KEY")
)

my_crew = Crew(
    agents=[...],
    tasks=[...],
    process=Process.sequential,
    memory=True,
    embedder={
        "provider": "bedrock",
        "config": {
            "session": boto3_session,
            "model": "amazon.titan-embed-text-v2:0",
            "vector_dimension": 1024
        }
    },
    verbose=True
)
```

### Adding Custom Embedding Function

```python Code
from crewai import Crew, Agent, Task, Process
from chromadb import Documents, EmbeddingFunction, Embeddings

# Create a custom embedding function
class CustomEmbedder(EmbeddingFunction):
    def __call__(self, input: Documents) -> Embeddings:
        # Generate one embedding per input document
        return [[0.1, 0.2, 0.3] for _ in input]  # dummy embeddings

my_crew = Crew(
    agents=[...],
    tasks=[...],
    process=Process.sequential,
    memory=True,
    verbose=True,
    embedder={
        "provider": "custom",
        "config": {
            "embedder": CustomEmbedder()
        }
    }
)
```

### Resetting Memory via CLI

```shell
crewai reset-memories [OPTIONS]
```

#### Resetting Memory Options

| Option                    | Description                        | Type           | Default |
| :------------------------ | :--------------------------------- | :------------- | :------ |
| `-l`, `--long`            | Reset LONG TERM memory.            | Flag (boolean) | False   |
| `-s`, `--short`           | Reset SHORT TERM memory.           | Flag (boolean) | False   |
| `-e`, `--entities`        | Reset ENTITIES memory.             | Flag (boolean) | False   |
| `-k`, `--kickoff-outputs` | Reset LATEST KICKOFF TASK OUTPUTS. | Flag (boolean) | False   |
| `-kn`, `--knowledge`      | Reset KNOWLEDGE storage.           | Flag (boolean) | False   |
| `-a`, `--all`             | Reset ALL memories.                | Flag (boolean) | False   |
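
For example, to clear only short-term and entity memory:

```shell
crewai reset-memories --short --entities
```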

Note: To use the CLI command, you need to have your crew in a file called crew.py in the same directory.

### Resetting Memory via crew object

```python
my_crew = Crew(
    agents=[...],
    tasks=[...],
    process=Process.sequential,
    memory=True,
    verbose=True,
    embedder={
        "provider": "custom",
        "config": {
            "embedder": CustomEmbedder()
        }
    }
)

my_crew.reset_memories(command_type='all')  # Resets all the memory
```

#### Resetting Memory Options

| Command Type      | Description                        |
| :---------------- | :--------------------------------- |
| `long`            | Reset LONG TERM memory.            |
| `short`           | Reset SHORT TERM memory.           |
| `entities`        | Reset ENTITIES memory.             |
| `kickoff_outputs` | Reset LATEST KICKOFF TASK OUTPUTS. |
| `knowledge`       | Reset KNOWLEDGE memory.            |
| `all`             | Reset ALL memories.                |

## Benefits of Using CrewAI's Memory System

- 🦾 **Adaptive Learning:** Crews become more efficient over time, adapting to new information and refining their approach to tasks.
- 🫡 **Enhanced Personalization:** Memory enables agents to remember user preferences and historical interactions, leading to personalized experiences.
- 🧠 **Improved Problem Solving:** Access to a rich memory store aids agents in making more informed decisions, drawing on past learnings and contextual insights.

## Conclusion

Integrating CrewAI's memory system into your projects is straightforward. By leveraging the provided memory components and configurations,
you can quickly empower your agents with the ability to remember, reason, and learn from their interactions, unlocking new levels of intelligence and capability.

150
docs/concepts/planning.mdx
Normal file
@@ -0,0 +1,150 @@
---
title: Planning
description: Learn how to add planning to your CrewAI Crew and improve their performance.
icon: brain
---

## Introduction

The planning feature in CrewAI allows you to add planning capability to your crew. When enabled, before each Crew iteration,
all Crew information is sent to an AgentPlanner that plans the tasks step by step; this plan is then added to each task description.

### Using the Planning Feature

Getting started with the planning feature is very easy; the only step required is to add `planning=True` to your Crew:

<CodeGroup>
```python Code
from crewai import Crew, Agent, Task, Process

# Assemble your crew with planning capabilities
my_crew = Crew(
    agents=self.agents,
    tasks=self.tasks,
    process=Process.sequential,
    planning=True,
)
```
</CodeGroup>

From this point on, your crew will have planning enabled, and the tasks will be planned before each iteration.

#### Planning LLM
|
||||
|
||||
Now you can define the LLM that will be used to plan the tasks.
|
||||
|
||||
When running the base case example, you will see something like the output below, which represents the output of the `AgentPlanner`
|
||||
responsible for creating the step-by-step logic to add to the Agents' tasks.
|
||||
|
||||
<CodeGroup>
|
||||
```python Code
|
||||
from crewai import Crew, Agent, Task, Process
|
||||
|
||||
# Assemble your crew with planning capabilities and custom LLM
|
||||
my_crew = Crew(
|
||||
agents=self.agents,
|
||||
tasks=self.tasks,
|
||||
process=Process.sequential,
|
||||
planning=True,
|
||||
planning_llm="gpt-4o"
|
||||
)
|
||||
|
||||
# Run the crew
|
||||
my_crew.kickoff()
|
||||
```

```markdown Result
[2024-07-15 16:49:11][INFO]: Planning the crew execution
**Step-by-Step Plan for Task Execution**

**Task Number 1: Conduct a thorough research about AI LLMs**

**Agent:** AI LLMs Senior Data Researcher

**Agent Goal:** Uncover cutting-edge developments in AI LLMs

**Task Expected Output:** A list with 10 bullet points of the most relevant information about AI LLMs

**Task Tools:** None specified

**Agent Tools:** None specified

**Step-by-Step Plan:**

1. **Define Research Scope:**
   - Determine the specific areas of AI LLMs to focus on, such as advancements in architecture, use cases, ethical considerations, and performance metrics.

2. **Identify Reliable Sources:**
   - List reputable sources for AI research, including academic journals, industry reports, conferences (e.g., NeurIPS, ACL), AI research labs (e.g., OpenAI, Google AI), and online databases (e.g., IEEE Xplore, arXiv).

3. **Collect Data:**
   - Search for the latest papers, articles, and reports published in 2024 and early 2025.
   - Use keywords like "Large Language Models 2025", "AI LLM advancements", "AI ethics 2025", etc.

4. **Analyze Findings:**
   - Read and summarize the key points from each source.
   - Highlight new techniques, models, and applications introduced in the past year.

5. **Organize Information:**
   - Categorize the information into relevant topics (e.g., new architectures, ethical implications, real-world applications).
   - Ensure each bullet point is concise but informative.

6. **Create the List:**
   - Compile the 10 most relevant pieces of information into a bullet point list.
   - Review the list to ensure clarity and relevance.

**Expected Output:**

A list with 10 bullet points of the most relevant information about AI LLMs.

---

**Task Number 2: Review the context you got and expand each topic into a full section for a report**

**Agent:** AI LLMs Reporting Analyst

**Agent Goal:** Create detailed reports based on AI LLMs data analysis and research findings

**Task Expected Output:** A fully fledged report with the main topics, each with a full section of information. Formatted as markdown without '```'

**Task Tools:** None specified

**Agent Tools:** None specified

**Step-by-Step Plan:**

1. **Review the Bullet Points:**
   - Carefully read through the list of 10 bullet points provided by the AI LLMs Senior Data Researcher.

2. **Outline the Report:**
   - Create an outline with each bullet point as a main section heading.
   - Plan sub-sections under each main heading to cover different aspects of the topic.

3. **Research Further Details:**
   - For each bullet point, conduct additional research if necessary to gather more detailed information.
   - Look for case studies, examples, and statistical data to support each section.

4. **Write Detailed Sections:**
   - Expand each bullet point into a comprehensive section.
   - Ensure each section includes an introduction, detailed explanation, examples, and a conclusion.
   - Use markdown formatting for headings, subheadings, lists, and emphasis.

5. **Review and Edit:**
   - Proofread the report for clarity, coherence, and correctness.
   - Make sure the report flows logically from one section to the next.
   - Format the report according to markdown standards.

6. **Finalize the Report:**
   - Ensure the report is complete with all sections expanded and detailed.
   - Double-check formatting and make any necessary adjustments.

**Expected Output:**
A fully fledged report with the main topics, each with a full section of information. Formatted as markdown without '```'.
```
</CodeGroup>
65
docs/concepts/processes.mdx
Normal file
@@ -0,0 +1,65 @@

---
title: Processes
description: Detailed guide on workflow management through processes in CrewAI, with updated implementation details.
icon: bars-staggered
---
## Understanding Processes

<Tip>
  Processes orchestrate the execution of tasks by agents, akin to project management in human teams.
  These processes ensure tasks are distributed and executed efficiently, in alignment with a predefined strategy.
</Tip>

## Process Implementations

- **Sequential**: Executes tasks sequentially, ensuring tasks are completed in an orderly progression.
- **Hierarchical**: Organizes tasks in a managerial hierarchy, where tasks are delegated and executed based on a structured chain of command. A manager language model (`manager_llm`) or a custom manager agent (`manager_agent`) must be specified in the crew to enable the hierarchical process, facilitating the creation and management of tasks by the manager.
- **Consensual Process (Planned)**: Aiming for collaborative decision-making among agents on task execution, this process type introduces a democratic approach to task management within CrewAI. It is planned for future development and is not currently implemented in the codebase.

## The Role of Processes in Teamwork

Processes enable individual agents to operate as a cohesive unit, streamlining their efforts to achieve common objectives with efficiency and coherence.

## Assigning Processes to a Crew

To assign a process to a crew, specify the process type upon crew creation to set the execution strategy. For a hierarchical process, make sure to define `manager_llm` or `manager_agent` for the manager agent.
```python
from crewai import Crew, Process

# Example: Creating a crew with a sequential process
crew = Crew(
    agents=my_agents,
    tasks=my_tasks,
    process=Process.sequential
)

# Example: Creating a crew with a hierarchical process
# Be sure to provide a manager_llm or manager_agent
crew = Crew(
    agents=my_agents,
    tasks=my_tasks,
    process=Process.hierarchical,
    manager_llm="gpt-4o"
    # or
    # manager_agent=my_manager_agent
)
```

**Note:** Ensure `my_agents` and `my_tasks` are defined prior to creating a `Crew` object, and for the hierarchical process, either `manager_llm` or `manager_agent` is also required.

## Sequential Process

This method mirrors dynamic team workflows, progressing through tasks in a thoughtful and systematic manner. Task execution follows the predefined order in the task list, with the output of one task serving as context for the next.

To customize task context, utilize the `context` parameter in the `Task` class to specify outputs that should be used as context for subsequent tasks, as in the sketch below.
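A minimal sketch of this pattern (`researcher` and `writer` are placeholder agents assumed to be defined elsewhere):

```python
from crewai import Task

gather_task = Task(
    description="Gather the latest market figures",
    expected_output="A table of raw figures",
    agent=researcher
)

summary_task = Task(
    description="Summarize the gathered figures",
    expected_output="A one-page summary",
    agent=writer,
    context=[gather_task]  # gather_task's output is injected as context
)
```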

## Hierarchical Process

This process emulates a corporate hierarchy: CrewAI allows you to specify a custom manager agent, or it automatically creates one, which requires specifying a manager language model (`manager_llm`). This agent oversees task execution, including planning, delegation, and validation. Tasks are not pre-assigned; the manager allocates tasks to agents based on their capabilities, reviews outputs, and assesses task completion.

## Process Class: Detailed Overview

The `Process` class is implemented as an enumeration (`Enum`), ensuring type safety and restricting process values to the defined types (`sequential`, `hierarchical`). The consensual process is planned for future inclusion, emphasizing our commitment to continuous development and innovation.
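Since `Process` is a plain `Enum`, you can inspect its members directly; a quick illustration (member names are taken from the types listed above):

```python
from crewai import Process

# Enum members restrict the accepted process types
for process in Process:
    print(process.name)  # sequential, hierarchical
```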
## Conclusion

The structured collaboration facilitated by processes within CrewAI is crucial for enabling systematic teamwork among agents.
This documentation has been updated to reflect the latest features, enhancements, and the planned integration of the Consensual Process, ensuring users have access to the most current and comprehensive information.
909
docs/concepts/tasks.mdx
Normal file
@@ -0,0 +1,909 @@

---
title: Tasks
description: Detailed guide on managing and creating tasks within the CrewAI framework.
icon: list-check
---
## Overview of a Task

In the CrewAI framework, a `Task` is a specific assignment completed by an `Agent`.

Tasks provide all necessary details for execution, such as a description, the agent responsible, required tools, and more, facilitating a wide range of action complexities.

Tasks within CrewAI can be collaborative, requiring multiple agents to work together. This is managed through the task properties and orchestrated by the Crew's process, enhancing teamwork and efficiency.

<Note type="info" title="Enterprise Enhancement: Visual Task Builder">
CrewAI Enterprise includes a Visual Task Builder in Crew Studio that simplifies complex task creation and chaining. Design your task flows visually and test them in real-time without writing code.

![Task Builder Screenshot](/images/enterprise/crew-studio-interface.png)

The Visual Task Builder enables:
- Drag-and-drop task creation
- Visual task dependencies and flow
- Real-time testing and validation
- Easy sharing and collaboration
</Note>
### Task Execution Flow

Tasks can be executed in two ways:
- **Sequential**: Tasks are executed in the order they are defined
- **Hierarchical**: Tasks are assigned to agents based on their roles and expertise

The execution flow is defined when creating the crew:
```python Code
crew = Crew(
    agents=[agent1, agent2],
    tasks=[task1, task2],
    process=Process.sequential  # or Process.hierarchical
)
```
## Task Attributes

| Attribute                        | Parameters        | Type                        | Description                                                                                   |
| :------------------------------- | :---------------- | :-------------------------- | :-------------------------------------------------------------------------------------------- |
| **Description**                  | `description`     | `str`                       | A clear, concise statement of what the task entails.                                          |
| **Expected Output**              | `expected_output` | `str`                       | A detailed description of what the task's completion looks like.                              |
| **Name** _(optional)_            | `name`            | `Optional[str]`             | A name identifier for the task.                                                               |
| **Agent** _(optional)_           | `agent`           | `Optional[BaseAgent]`       | The agent responsible for executing the task.                                                 |
| **Tools** _(optional)_           | `tools`           | `List[BaseTool]`            | The tools/resources the agent is limited to use for this task.                                |
| **Context** _(optional)_         | `context`         | `Optional[List["Task"]]`    | Other tasks whose outputs will be used as context for this task.                              |
| **Async Execution** _(optional)_ | `async_execution` | `Optional[bool]`            | Whether the task should be executed asynchronously. Defaults to False.                        |
| **Human Input** _(optional)_     | `human_input`     | `Optional[bool]`            | Whether the task should have a human review the final answer of the agent. Defaults to False. |
| **Config** _(optional)_          | `config`          | `Optional[Dict[str, Any]]`  | Task-specific configuration parameters.                                                       |
| **Output File** _(optional)_     | `output_file`     | `Optional[str]`             | File path for storing the task output.                                                        |
| **Output JSON** _(optional)_     | `output_json`     | `Optional[Type[BaseModel]]` | A Pydantic model to structure the JSON output.                                                |
| **Output Pydantic** _(optional)_ | `output_pydantic` | `Optional[Type[BaseModel]]` | A Pydantic model for task output.                                                             |
| **Callback** _(optional)_        | `callback`        | `Optional[Any]`             | Function/object to be executed after task completion.                                         |
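Most of these attributes are demonstrated in the examples below; `human_input` is the exception, so here is a minimal sketch of it (the agent is a placeholder assumed to be defined elsewhere):

```python Code
from crewai import Task

review_task = Task(
    description="Draft a launch announcement",
    expected_output="A short announcement paragraph",
    agent=writer_agent,
    human_input=True  # Pause for a human to review the agent's final answer
)
```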

## Creating Tasks

There are two ways to create tasks in CrewAI: using **YAML configuration (recommended)** or defining them **directly in code**.

### YAML Configuration (Recommended)

Using YAML configuration provides a cleaner, more maintainable way to define tasks. We strongly recommend using this approach to define tasks in your CrewAI projects.

After creating your CrewAI project as outlined in the [Installation](/installation) section, navigate to the `src/latest_ai_development/config/tasks.yaml` file and modify the template to match your specific task requirements.

<Note>
Variables in your YAML files (like `{topic}`) will be replaced with values from your inputs when running the crew:
```python Code
crew.kickoff(inputs={'topic': 'AI Agents'})
```
</Note>

Here's an example of how to configure tasks using YAML:
```yaml tasks.yaml
research_task:
  description: >
    Conduct a thorough research about {topic}.
    Make sure you find any interesting and relevant information given
    the current year is 2025.
  expected_output: >
    A list with 10 bullet points of the most relevant information about {topic}
  agent: researcher

reporting_task:
  description: >
    Review the context you got and expand each topic into a full section for a report.
    Make sure the report is detailed and contains any and all relevant information.
  expected_output: >
    A fully fledged report with the main topics, each with a full section of information.
    Formatted as markdown without '```'
  agent: reporting_analyst
  output_file: report.md
```

To use this YAML configuration in your code, create a crew class that inherits from `CrewBase`:

```python crew.py
# src/latest_ai_development/crew.py

from crewai import Agent, Crew, Process, Task
from crewai.project import CrewBase, agent, crew, task
from crewai_tools import SerperDevTool

@CrewBase
class LatestAiDevelopmentCrew():
  """LatestAiDevelopment crew"""

  @agent
  def researcher(self) -> Agent:
    return Agent(
      config=self.agents_config['researcher'],
      verbose=True,
      tools=[SerperDevTool()]
    )

  @agent
  def reporting_analyst(self) -> Agent:
    return Agent(
      config=self.agents_config['reporting_analyst'],
      verbose=True
    )

  @task
  def research_task(self) -> Task:
    return Task(
      config=self.tasks_config['research_task']
    )

  @task
  def reporting_task(self) -> Task:
    return Task(
      config=self.tasks_config['reporting_task']
    )

  @crew
  def crew(self) -> Crew:
    return Crew(
      agents=[
        self.researcher(),
        self.reporting_analyst()
      ],
      tasks=[
        self.research_task(),
        self.reporting_task()
      ],
      process=Process.sequential
    )
```

<Note>
The names you use in your YAML files (`agents.yaml` and `tasks.yaml`) should match the method names in your Python code.
</Note>

### Direct Code Definition (Alternative)

Alternatively, you can define tasks directly in your code without using YAML configuration:

```python task.py
from crewai import Task

research_task = Task(
    description="""
    Conduct a thorough research about AI Agents.
    Make sure you find any interesting and relevant information given
    the current year is 2025.
    """,
    expected_output="""
    A list with 10 bullet points of the most relevant information about AI Agents
    """,
    agent=researcher
)

reporting_task = Task(
    description="""
    Review the context you got and expand each topic into a full section for a report.
    Make sure the report is detailed and contains any and all relevant information.
    """,
    expected_output="""
    A fully fledged report with the main topics, each with a full section of information.
    Formatted as markdown without '```'
    """,
    agent=reporting_analyst,
    output_file="report.md"
)
```

<Tip>
  Directly specify an `agent` for assignment, or let CrewAI's `hierarchical` process decide based on roles, availability, etc.
</Tip>

## Task Output

Understanding task outputs is crucial for building effective AI workflows. CrewAI provides a structured way to handle task results through the `TaskOutput` class, which supports multiple output formats and can be easily passed between tasks.

The output of a task in the CrewAI framework is encapsulated within the `TaskOutput` class. This class provides a structured way to access results of a task, including various formats such as raw output, JSON, and Pydantic models.

By default, the `TaskOutput` will only include the `raw` output. A `TaskOutput` will only include the `pydantic` or `json_dict` output if the original `Task` object was configured with `output_pydantic` or `output_json`, respectively.
### Task Output Attributes

| Attribute         | Parameters      | Type                       | Description                                                                                        |
| :---------------- | :-------------- | :------------------------- | :-------------------------------------------------------------------------------------------------- |
| **Description**   | `description`   | `str`                      | Description of the task.                                                                           |
| **Summary**       | `summary`       | `Optional[str]`            | Summary of the task, auto-generated from the first 10 words of the description.                    |
| **Raw**           | `raw`           | `str`                      | The raw output of the task. This is the default format for the output.                             |
| **Pydantic**      | `pydantic`      | `Optional[BaseModel]`      | A Pydantic model object representing the structured output of the task.                            |
| **JSON Dict**     | `json_dict`     | `Optional[Dict[str, Any]]` | A dictionary representing the JSON output of the task.                                             |
| **Agent**         | `agent`         | `str`                      | The agent that executed the task.                                                                  |
| **Output Format** | `output_format` | `OutputFormat`             | The format of the task output, with options including RAW, JSON, and Pydantic. The default is RAW. |
### Task Methods and Properties

| Method/Property | Description                                                                                        |
| :-------------- | :------------------------------------------------------------------------------------------------- |
| **json**        | Returns the JSON string representation of the task output if the output format is JSON.            |
| **to_dict**     | Converts the JSON and Pydantic outputs to a dictionary.                                             |
| **str**         | Returns the string representation of the task output, prioritizing Pydantic, then JSON, then raw.  |
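A quick sketch of these accessors on a finished task's output (this assumes the task was configured with `output_json`, since per the table `json` is only populated for a JSON output format):

```python Code
task_output = task.output

print(task_output.json)       # JSON string (requires JSON output format)
print(task_output.to_dict())  # Dictionary built from the JSON/Pydantic output
print(str(task_output))       # Pydantic, then JSON, then raw
```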

### Accessing Task Outputs

Once a task has been executed, its output can be accessed through the `output` attribute of the `Task` object. The `TaskOutput` class provides various ways to interact with and present this output.

#### Example
```python Code
import json

# Example task
task = Task(
    description='Find and summarize the latest AI news',
    expected_output='A bullet list summary of the top 5 most important AI news',
    agent=research_agent,
    tools=[search_tool]
)

# Execute the crew
crew = Crew(
    agents=[research_agent],
    tasks=[task],
    verbose=True
)

result = crew.kickoff()

# Accessing the task output
task_output = task.output

print(f"Task Description: {task_output.description}")
print(f"Task Summary: {task_output.summary}")
print(f"Raw Output: {task_output.raw}")
if task_output.json_dict:
    print(f"JSON Output: {json.dumps(task_output.json_dict, indent=2)}")
if task_output.pydantic:
    print(f"Pydantic Output: {task_output.pydantic}")
```

## Task Dependencies and Context

Tasks can depend on the output of other tasks using the `context` attribute. For example:

```python Code
research_task = Task(
    description="Research the latest developments in AI",
    expected_output="A list of recent AI developments",
    agent=researcher
)

analysis_task = Task(
    description="Analyze the research findings and identify key trends",
    expected_output="Analysis report of AI trends",
    agent=analyst,
    context=[research_task]  # This task will wait for research_task to complete
)
```

## Task Guardrails

Task guardrails provide a way to validate and transform task outputs before they
are passed to the next task. This feature helps ensure data quality and provides
feedback to agents when their output doesn't meet specific criteria.

### Using Task Guardrails

To add a guardrail to a task, provide a validation function through the `guardrail` parameter:
```python Code
from typing import Tuple, Union, Dict, Any

def validate_blog_content(result: str) -> Tuple[bool, Union[Dict[str, Any], str]]:
    """Validate blog content meets requirements."""
    try:
        # Check word count
        word_count = len(result.split())
        if word_count > 200:
            return (False, {
                "error": "Blog content exceeds 200 words",
                "code": "WORD_COUNT_ERROR",
                "context": {"word_count": word_count}
            })

        # Additional validation logic here
        return (True, result.strip())
    except Exception as e:
        return (False, {
            "error": "Unexpected error during validation",
            "code": "SYSTEM_ERROR"
        })

blog_task = Task(
    description="Write a blog post about AI",
    expected_output="A blog post under 200 words",
    agent=blog_agent,
    guardrail=validate_blog_content  # Add the guardrail function
)
```

### Guardrail Function Requirements

1. **Function Signature**:
   - Must accept exactly one parameter (the task output)
   - Should return a tuple of `(bool, Any)`
   - Type hints are recommended but optional

2. **Return Values**:
   - Success: Return `(True, validated_result)`
   - Failure: Return `(False, error_details)`
### Error Handling Best Practices

1. **Structured Error Responses**:
```python Code
def validate_with_context(result: str) -> Tuple[bool, Union[Dict[str, Any], str]]:
    try:
        # Main validation logic (perform_validation and ValidationError
        # are assumed to be defined elsewhere in your project)
        validated_data = perform_validation(result)
        return (True, validated_data)
    except ValidationError as e:
        return (False, {
            "error": str(e),
            "code": "VALIDATION_ERROR",
            "context": {"input": result}
        })
    except Exception as e:
        return (False, {
            "error": "Unexpected error",
            "code": "SYSTEM_ERROR"
        })
```

2. **Error Categories**:
   - Use specific error codes
   - Include relevant context
   - Provide actionable feedback

3. **Validation Chain**:
```python Code
from typing import Any, Dict, List, Tuple, Union

def complex_validation(result: str) -> Tuple[bool, Union[str, Dict[str, Any]]]:
    """Chain multiple validation steps."""
    # Step 1: Basic validation
    if not result:
        return (False, {"error": "Empty result", "code": "EMPTY_INPUT"})

    # Step 2: Content validation (validate_content and format_output
    # are assumed to be defined elsewhere in your project)
    try:
        validated = validate_content(result)
        if not validated:
            return (False, {"error": "Invalid content", "code": "CONTENT_ERROR"})

        # Step 3: Format validation
        formatted = format_output(validated)
        return (True, formatted)
    except Exception as e:
        return (False, {
            "error": str(e),
            "code": "VALIDATION_ERROR",
            "context": {"step": "content_validation"}
        })
```

### Handling Guardrail Results

When a guardrail returns `(False, error)`:

1. The error is sent back to the agent
2. The agent attempts to fix the issue
3. The process repeats until:
   - The guardrail returns `(True, result)`
   - Maximum retries are reached

Example with retry handling:
```python Code
import json
from typing import Any, Dict, Tuple, Union

def validate_json_output(result: str) -> Tuple[bool, Union[Dict[str, Any], str]]:
    """Validate and parse JSON output."""
    try:
        # Try to parse as JSON
        data = json.loads(result)
        return (True, data)
    except json.JSONDecodeError as e:
        return (False, {
            "error": "Invalid JSON format",
            "code": "JSON_ERROR",
            "context": {"line": e.lineno, "column": e.colno}
        })

task = Task(
    description="Generate a JSON report",
    expected_output="A valid JSON object",
    agent=analyst,
    guardrail=validate_json_output,
    max_retries=3  # Limit retry attempts
)
```

## Getting Structured Consistent Outputs from Tasks

<Note>
It's also important to note that the output of the final task of a crew becomes the final output of the actual crew itself.
</Note>

### Using `output_pydantic`

The `output_pydantic` property allows you to define a Pydantic model that the task output should conform to. This ensures that the output is not only structured but also validated according to the Pydantic model.

Here's an example demonstrating how to use `output_pydantic`:
```python Code
import json

from crewai import Agent, Crew, Process, Task
from pydantic import BaseModel


class Blog(BaseModel):
    title: str
    content: str


blog_agent = Agent(
    role="Blog Content Generator Agent",
    goal="Generate a blog title and content",
    backstory="""You are an expert content creator, skilled in crafting engaging and informative blog posts.""",
    verbose=False,
    allow_delegation=False,
    llm="gpt-4o",
)

task1 = Task(
    description="""Create a blog title and content on a given topic. Make sure the content is under 200 words.""",
    expected_output="A compelling blog title and well-written content.",
    agent=blog_agent,
    output_pydantic=Blog,
)

# Instantiate your crew with a sequential process
crew = Crew(
    agents=[blog_agent],
    tasks=[task1],
    verbose=True,
    process=Process.sequential,
)

result = crew.kickoff()

# Option 1: Accessing Properties Using Dictionary-Style Indexing
print("Accessing Properties - Option 1")
title = result["title"]
content = result["content"]
print("Title:", title)
print("Content:", content)

# Option 2: Accessing Properties Directly from the Pydantic Model
print("Accessing Properties - Option 2")
title = result.pydantic.title
content = result.pydantic.content
print("Title:", title)
print("Content:", content)

# Option 3: Accessing Properties Using the to_dict() Method
print("Accessing Properties - Option 3")
output_dict = result.to_dict()
title = output_dict["title"]
content = output_dict["content"]
print("Title:", title)
print("Content:", content)

# Option 4: Printing the Entire Blog Object
print("Accessing Properties - Option 4")
print("Blog:", result)
```

In this example:
* A Pydantic model `Blog` is defined with `title` and `content` fields.
* The task `task1` uses the `output_pydantic` property to specify that its output should conform to the `Blog` model.
* After executing the crew, you can access the structured output in multiple ways as shown.

#### Explanation of Accessing the Output
1. Dictionary-Style Indexing: You can directly access the fields using `result["field_name"]`. This works because the `CrewOutput` class implements the `__getitem__` method.
2. Directly from Pydantic Model: Access the attributes directly from the `result.pydantic` object.
3. Using the `to_dict()` Method: Convert the output to a dictionary and access the fields.
4. Printing the Entire Object: Simply print the `result` object to see the structured output.
### Using `output_json`

The `output_json` property allows you to define the expected output in JSON format. This ensures that the task's output is a valid JSON structure that can be easily parsed and used in your application.

Here's an example demonstrating how to use `output_json`:
```python Code
import json

from crewai import Agent, Crew, Process, Task
from pydantic import BaseModel


# Define the Pydantic model for the blog
class Blog(BaseModel):
    title: str
    content: str


# Define the agent
blog_agent = Agent(
    role="Blog Content Generator Agent",
    goal="Generate a blog title and content",
    backstory="""You are an expert content creator, skilled in crafting engaging and informative blog posts.""",
    verbose=False,
    allow_delegation=False,
    llm="gpt-4o",
)

# Define the task with output_json set to the Blog model
task1 = Task(
    description="""Create a blog title and content on a given topic. Make sure the content is under 200 words.""",
    expected_output="A JSON object with 'title' and 'content' fields.",
    agent=blog_agent,
    output_json=Blog,
)

# Instantiate the crew with a sequential process
crew = Crew(
    agents=[blog_agent],
    tasks=[task1],
    verbose=True,
    process=Process.sequential,
)

# Kickoff the crew to execute the task
result = crew.kickoff()

# Option 1: Accessing Properties Using Dictionary-Style Indexing
print("Accessing Properties - Option 1")
title = result["title"]
content = result["content"]
print("Title:", title)
print("Content:", content)

# Option 2: Printing the Entire Blog Object
print("Accessing Properties - Option 2")
print("Blog:", result)
```

In this example:
* A Pydantic model `Blog` is defined with `title` and `content` fields, which is used to specify the structure of the JSON output.
* The task `task1` uses the `output_json` property to indicate that it expects a JSON output conforming to the `Blog` model.
* After executing the crew, you can access the structured JSON output in two ways as shown.

#### Explanation of Accessing the Output

1. Accessing Properties Using Dictionary-Style Indexing: You can access the fields directly using `result["field_name"]`. This is possible because the `CrewOutput` class implements the `__getitem__` method, allowing you to treat the output like a dictionary. In this option, we're retrieving the title and content from the result.
2. Printing the Entire Blog Object: By printing `result`, you get the string representation of the `CrewOutput` object. Since the `__str__` method is implemented to return the JSON output, this will display the entire output as a formatted string representing the `Blog` object.

---

By using `output_pydantic` or `output_json`, you ensure that your tasks produce outputs in a consistent and structured format, making it easier to process and utilize the data within your application or across multiple tasks.
## Integrating Tools with Tasks

Leverage tools from the [CrewAI Toolkit](https://github.com/joaomdmoura/crewai-tools) and [LangChain Tools](https://python.langchain.com/docs/integrations/tools) for enhanced task performance and agent interaction.

## Creating a Task with Tools
```python Code
import os
os.environ["OPENAI_API_KEY"] = "Your Key"
os.environ["SERPER_API_KEY"] = "Your Key"  # serper.dev API key

from crewai import Agent, Task, Crew
from crewai_tools import SerperDevTool

research_agent = Agent(
    role='Researcher',
    goal='Find and summarize the latest AI news',
    backstory="""You're a researcher at a large company.
    You're responsible for analyzing data and providing insights
    to the business.""",
    verbose=True
)

# To perform a semantic search for a specified query from a text's content across the internet
search_tool = SerperDevTool()

task = Task(
    description='Find and summarize the latest AI news',
    expected_output='A bullet list summary of the top 5 most important AI news',
    agent=research_agent,
    tools=[search_tool]
)

crew = Crew(
    agents=[research_agent],
    tasks=[task],
    verbose=True
)

result = crew.kickoff()
print(result)
```

This demonstrates how tasks with specific tools can override an agent's default set for tailored task execution.

## Referring to Other Tasks

In CrewAI, the output of one task is automatically relayed into the next one, but you can specifically define which tasks' outputs (including those of multiple tasks) should be used as context for another task.

This is useful when you have a task that depends on the output of another task that is not performed immediately after it. This is done through the `context` attribute of the task:
```python Code
# ...

research_ai_task = Task(
    description="Research the latest developments in AI",
    expected_output="A list of recent AI developments",
    async_execution=True,
    agent=research_agent,
    tools=[search_tool]
)

research_ops_task = Task(
    description="Research the latest developments in AI Ops",
    expected_output="A list of recent AI Ops developments",
    async_execution=True,
    agent=research_agent,
    tools=[search_tool]
)

write_blog_task = Task(
    description="Write a full blog post about the importance of AI and its latest news",
    expected_output="Full blog post that is 4 paragraphs long",
    agent=writer_agent,
    context=[research_ai_task, research_ops_task]
)

#...
```

## Asynchronous Execution

You can define a task to be executed asynchronously. This means that the crew will not wait for it to be completed before continuing with the next task. This is useful for tasks that take a long time to be completed, or that are not crucial for the next tasks to be performed.

You can then use the `context` attribute to define in a future task that it should wait for the output of the asynchronous task to be completed.
```python Code
#...

list_ideas = Task(
    description="List of 5 interesting ideas to explore for an article about AI.",
    expected_output="Bullet point list of 5 ideas for an article.",
    agent=researcher,
    async_execution=True  # Will be executed asynchronously
)

list_important_history = Task(
    description="Research the history of AI and give me the 5 most important events.",
    expected_output="Bullet point list of 5 important events.",
    agent=researcher,
    async_execution=True  # Will be executed asynchronously
)

write_article = Task(
    description="Write an article about AI, its history, and interesting ideas.",
    expected_output="A 4 paragraph article about AI.",
    agent=writer,
    context=[list_ideas, list_important_history]  # Will wait for the output of the two tasks to be completed
)

#...
```

## Callback Mechanism

The callback function is executed after the task is completed, allowing for actions or notifications to be triggered based on the task's outcome.
```python Code
# ...

def callback_function(output: TaskOutput):
    # Do something after the task is completed
    # Example: Send an email to the manager
    print(f"""
        Task completed!
        Task: {output.description}
        Output: {output.raw}
    """)

research_task = Task(
    description='Find and summarize the latest AI news',
    expected_output='A bullet list summary of the top 5 most important AI news',
    agent=research_agent,
    tools=[search_tool],
    callback=callback_function
)

#...
```

## Accessing a Specific Task Output

Once a crew finishes running, you can access the output of a specific task by using the `output` attribute of the task object:
```python Code
# ...
task1 = Task(
    description='Find and summarize the latest AI news',
    expected_output='A bullet list summary of the top 5 most important AI news',
    agent=research_agent,
    tools=[search_tool]
)

#...

crew = Crew(
    agents=[research_agent],
    tasks=[task1, task2, task3],
    verbose=True
)

result = crew.kickoff()

# Returns a TaskOutput object with the description and results of the task
print(f"""
    Task completed!
    Task: {task1.output.description}
    Output: {task1.output.raw}
""")
```

## Tool Override Mechanism

Specifying tools in a task allows for dynamic adaptation of agent capabilities, emphasizing CrewAI's flexibility: the task-level tools override the agent's default set for the duration of that task, as in the sketch below.
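A minimal sketch of the override (the tools are from `crewai_tools`, and the agent is a placeholder):

```python Code
from crewai import Agent, Task
from crewai_tools import SerperDevTool, ScrapeWebsiteTool

generalist = Agent(
    role='Generalist',
    goal='Answer questions about the web',
    backstory='A versatile assistant.',
    tools=[SerperDevTool()]  # The agent's default toolset
)

scrape_task = Task(
    description='Extract the headline from example.com',
    expected_output='The page headline',
    agent=generalist,
    tools=[ScrapeWebsiteTool()]  # Overrides the agent's default tools for this task
)
```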

## Error Handling and Validation Mechanisms

While creating and executing tasks, certain validation mechanisms are in place to ensure the robustness and reliability of task attributes. These include but are not limited to:

- Ensuring only one output type is set per task to maintain clear output expectations (see the sketch below).
- Preventing the manual assignment of the `id` attribute to uphold the integrity of the unique identifier system.

These validations help in maintaining the consistency and reliability of task executions within the CrewAI framework.
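As a hypothetical illustration of the first rule (the exact exception type and message may differ by version):

```python Code
from crewai import Task
from pydantic import BaseModel

class Report(BaseModel):
    summary: str

try:
    # Setting both structured output types on one task violates the
    # "only one output type" validation described above
    task = Task(
        description="Generate a report",
        expected_output="A structured report",
        output_json=Report,
        output_pydantic=Report,
    )
except Exception as exc:
    print(f"Validation rejected the task: {exc}")
```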

## Task Guardrails

Task guardrails provide a powerful way to validate, transform, or filter task outputs before they are passed to the next task. Guardrails are optional functions that execute before the next task starts, allowing you to ensure that task outputs meet specific requirements or formats.

### Basic Usage
```python Code
import json
from typing import Tuple, Union
from crewai import Task

def validate_json_output(result: str) -> Tuple[bool, Union[dict, str]]:
    """Validate that the output is valid JSON."""
    try:
        json_data = json.loads(result)
        return (True, json_data)
    except json.JSONDecodeError:
        return (False, "Output must be valid JSON")

task = Task(
    description="Generate JSON data",
    expected_output="Valid JSON object",
    guardrail=validate_json_output
)
```

### How Guardrails Work

1. **Optional Attribute**: Guardrails are an optional attribute at the task level, allowing you to add validation only where needed.
2. **Execution Timing**: The guardrail function is executed before the next task starts, ensuring valid data flow between tasks.
3. **Return Format**: Guardrails must return a tuple of `(success, data)`:
   - If `success` is `True`, `data` is the validated/transformed result
   - If `success` is `False`, `data` is the error message
4. **Result Routing**:
   - On success (`True`), the result is automatically passed to the next task
   - On failure (`False`), the error is sent back to the agent to generate a new answer

### Common Use Cases

#### Data Format Validation
```python Code
def validate_email_format(result: str) -> Tuple[bool, str]:
    """Ensure the output contains a valid email address."""
    import re
    email_pattern = r'^[\w\.-]+@[\w\.-]+\.\w+$'
    if re.match(email_pattern, result.strip()):
        return (True, result.strip())
    return (False, "Output must be a valid email address")
```

#### Content Filtering
```python Code
def filter_sensitive_info(result: str) -> Tuple[bool, str]:
    """Remove or validate sensitive information."""
    sensitive_patterns = ['SSN:', 'password:', 'secret:']
    for pattern in sensitive_patterns:
        if pattern.lower() in result.lower():
            return (False, f"Output contains sensitive information ({pattern})")
    return (True, result)
```

#### Data Transformation
```python Code
def normalize_phone_number(result: str) -> Tuple[bool, str]:
    """Ensure phone numbers are in a consistent format."""
    import re
    digits = re.sub(r'\D', '', result)
    if len(digits) == 10:
        formatted = f"({digits[:3]}) {digits[3:6]}-{digits[6:]}"
        return (True, formatted)
    return (False, "Output must be a 10-digit phone number")
```

### Advanced Features

#### Chaining Multiple Validations
```python Code
def chain_validations(*validators):
    """Chain multiple validators together."""
    def combined_validator(result):
        for validator in validators:
            success, data = validator(result)
            if not success:
                return (False, data)
            result = data
        return (True, result)
    return combined_validator

# Usage
task = Task(
    description="Get user contact info",
    expected_output="Email and phone",
    guardrail=chain_validations(
        validate_email_format,
        filter_sensitive_info
    )
)
```

#### Custom Retry Logic
```python Code
task = Task(
    description="Generate data",
    expected_output="Valid data",
    guardrail=validate_data,
    max_retries=5  # Override default retry limit
)
```

## Creating Directories when Saving Files

You can now specify if a task should create directories when saving its output to a file. This is particularly useful for organizing outputs and ensuring that file paths are correctly structured.
```python Code
# ...

save_output_task = Task(
    description='Save the summarized AI news to a file',
    expected_output='File saved successfully',
    agent=research_agent,
    tools=[file_save_tool],
    output_file='outputs/ai_news_summary.txt',
    create_directory=True
)

#...
```

Check out the video below to see how to use structured outputs in CrewAI:

<iframe
  width="560"
  height="315"
  src="https://www.youtube.com/embed/dNpKQk5uxHw"
  title="YouTube video player"
  frameborder="0"
  allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share"
  referrerpolicy="strict-origin-when-cross-origin"
  allowfullscreen
></iframe>

## Conclusion

Tasks are the driving force behind the actions of agents in CrewAI.
By properly defining tasks and their outcomes, you set the stage for your AI agents to work effectively, either independently or as a collaborative unit.
Equipping tasks with appropriate tools, understanding the execution process, and following robust validation practices are crucial for maximizing CrewAI's potential,
ensuring agents are effectively prepared for their assignments and that tasks are executed as intended.
48
docs/concepts/testing.mdx
Normal file
@@ -0,0 +1,48 @@

---
title: Testing
description: Learn how to test your CrewAI Crew and evaluate their performance.
icon: vial
---
## Introduction

Testing is a crucial part of the development process, and it is essential to ensure that your crew is performing as expected. With CrewAI, you can easily test your crew and evaluate its performance using the built-in testing capabilities.

### Using the Testing Feature

We added the CLI command `crewai test` to make it easy to test your crew. This command will run your crew for a specified number of iterations and provide detailed performance metrics. The parameters are `n_iterations` and `model`, which are optional and default to 2 and `gpt-4o-mini` respectively. For now, the only provider available is OpenAI.

```bash
crewai test
```

If you want to run more iterations or use a different model, you can specify the parameters like this:

```bash
crewai test --n_iterations 5 --model gpt-4o
```

or using the short forms:

```bash
crewai test -n 5 -m gpt-4o
```

When you run the `crewai test` command, the crew will be executed for the specified number of iterations, and the performance metrics will be displayed at the end of the run.
A table of scores at the end will show the performance of the crew in terms of the following metrics:

<center>**Tasks Scores (1-10 Higher is better)**</center>

| Tasks/Crew/Agents  | Run 1 | Run 2 | Avg. Total | Agents                        | Additional Info                  |
|:-------------------|:-----:|:-----:|:----------:|:-----------------------------:|:---------------------------------|
| Task 1             | 9.0   | 9.5   | **9.2**    | Professional Insights         |                                  |
|                    |       |       |            | Researcher                    |                                  |
| Task 2             | 9.0   | 10.0  | **9.5**    | Company Profile Investigator  |                                  |
| Task 3             | 9.0   | 9.0   | **9.0**    | Automation Insights           |                                  |
|                    |       |       |            | Specialist                    |                                  |
| Task 4             | 9.0   | 9.0   | **9.0**    | Final Report Compiler         | Automation Insights Specialist   |
| Crew               | 9.00  | 9.38  | **9.2**    |                               |                                  |
| Execution Time (s) | 126   | 145   | **135**    |                               |                                  |

The example above shows the test results for two runs of a crew with four tasks, with the average total score for each task and the crew as a whole.
272
docs/concepts/tools.mdx
Normal file
@@ -0,0 +1,272 @@

---
title: Tools
description: Understanding and leveraging tools within the CrewAI framework for agent collaboration and task execution.
icon: screwdriver-wrench
---
## Introduction

CrewAI tools empower agents with capabilities ranging from web searching and data analysis to collaboration and delegating tasks among coworkers.
This documentation outlines how to create, integrate, and leverage these tools within the CrewAI framework, including a new focus on collaboration tools.

## What is a Tool?

A tool in CrewAI is a skill or function that agents can utilize to perform various actions.
This includes tools from the [CrewAI Toolkit](https://github.com/joaomdmoura/crewai-tools) and [LangChain Tools](https://python.langchain.com/docs/integrations/tools),
enabling everything from simple searches to complex interactions and effective teamwork among agents.

<Note type="info" title="Enterprise Enhancement: Tools Repository">
CrewAI Enterprise provides a comprehensive Tools Repository with pre-built integrations for common business systems and APIs. Deploy agents with enterprise tools in minutes instead of days.

![Tools Repository Screenshot](/images/enterprise/tools-repository.png)

The Enterprise Tools Repository includes:
- Pre-built connectors for popular enterprise systems
- Custom tool creation interface
- Version control and sharing capabilities
- Security and compliance features
</Note>
## Key Characteristics of Tools

- **Utility**: Crafted for tasks such as web searching, data analysis, content generation, and agent collaboration.
- **Integration**: Boosts agent capabilities by seamlessly integrating tools into their workflow.
- **Customizability**: Provides the flexibility to develop custom tools or utilize existing ones, catering to the specific needs of agents.
- **Error Handling**: Incorporates robust error handling mechanisms to ensure smooth operation.
- **Caching Mechanism**: Features intelligent caching to optimize performance and reduce redundant operations.

## Using CrewAI Tools

To enhance your agents' capabilities with CrewAI tools, begin by installing our extra tools package:

```bash
pip install 'crewai[tools]'
```

Here's an example demonstrating their use:
```python Code
import os
from crewai import Agent, Task, Crew
# Importing crewAI tools
from crewai_tools import (
    DirectoryReadTool,
    FileReadTool,
    SerperDevTool,
    WebsiteSearchTool
)

# Set up API keys
os.environ["SERPER_API_KEY"] = "Your Key"  # serper.dev API key
os.environ["OPENAI_API_KEY"] = "Your Key"

# Instantiate tools
docs_tool = DirectoryReadTool(directory='./blog-posts')
file_tool = FileReadTool()
search_tool = SerperDevTool()
web_rag_tool = WebsiteSearchTool()

# Create agents
researcher = Agent(
    role='Market Research Analyst',
    goal='Provide up-to-date market analysis of the AI industry',
    backstory='An expert analyst with a keen eye for market trends.',
    tools=[search_tool, web_rag_tool],
    verbose=True
)

writer = Agent(
    role='Content Writer',
    goal='Craft engaging blog posts about the AI industry',
    backstory='A skilled writer with a passion for technology.',
    tools=[docs_tool, file_tool],
    verbose=True
)

# Define tasks
research = Task(
    description='Research the latest trends in the AI industry and provide a summary.',
    expected_output='A summary of the top 3 trending developments in the AI industry with a unique perspective on their significance.',
    agent=researcher
)

write = Task(
    description="Write an engaging blog post about the AI industry, based on the research analyst's summary. Draw inspiration from the latest blog posts in the directory.",
    expected_output='A 4-paragraph blog post formatted in markdown with engaging, informative, and accessible content, avoiding complex jargon.',
    agent=writer,
    output_file='blog-posts/new_post.md'  # The final blog post will be saved here
)

# Assemble a crew with planning enabled
crew = Crew(
    agents=[researcher, writer],
    tasks=[research, write],
    verbose=True,
    planning=True,  # Enable planning feature
)

# Execute tasks
crew.kickoff()
```

## Available CrewAI Tools

- **Error Handling**: All tools are built with error handling capabilities, allowing agents to gracefully manage exceptions and continue their tasks.
- **Caching Mechanism**: All tools support caching, enabling agents to efficiently reuse previously obtained results, reducing the load on external resources and speeding up the execution time. You can also define finer control over the caching mechanism using the `cache_function` attribute on the tool.

Here is a list of the available tools and their descriptions:
| Tool                             | Description                                                                                     |
| :------------------------------- | :--------------------------------------------------------------------------------------------- |
| **ApifyActorsTool**              | A tool that integrates Apify Actors with your workflows for web scraping and automation tasks.  |
| **BrowserbaseLoadTool**          | A tool for interacting with and extracting data from web browsers.                             |
| **CodeDocsSearchTool**           | A RAG tool optimized for searching through code documentation and related technical documents. |
| **CodeInterpreterTool**          | A tool for interpreting Python code.                                                           |
| **ComposioTool**                 | Enables use of Composio tools.                                                                 |
| **CSVSearchTool**                | A RAG tool designed for searching within CSV files, tailored to handle structured data.        |
| **DALL-E Tool**                  | A tool for generating images using the DALL-E API.                                             |
| **DirectorySearchTool**          | A RAG tool for searching within directories, useful for navigating through file systems.       |
| **DOCXSearchTool**               | A RAG tool aimed at searching within DOCX documents, ideal for processing Word files.          |
| **DirectoryReadTool**            | Facilitates reading and processing of directory structures and their contents.                 |
| **EXASearchTool**                | A tool designed for performing exhaustive searches across various data sources.                |
| **FileReadTool**                 | Enables reading and extracting data from files, supporting various file formats.               |
| **FirecrawlSearchTool**          | A tool to search webpages using Firecrawl and return the results.                              |
| **FirecrawlCrawlWebsiteTool**    | A tool for crawling webpages using Firecrawl.                                                  |
| **FirecrawlScrapeWebsiteTool**   | A tool for scraping a webpage URL using Firecrawl and returning its contents.                  |
| **GithubSearchTool**             | A RAG tool for searching within GitHub repositories, useful for code and documentation search. |
| **SerperDevTool**                | A tool for searching the internet via the serper.dev Google Search API.                        |
| **TXTSearchTool**                | A RAG tool focused on searching within text (.txt) files, suitable for unstructured data.      |
| **JSONSearchTool**               | A RAG tool designed for searching within JSON files, catering to structured data handling.     |
| **LlamaIndexTool**               | Enables the use of LlamaIndex tools.                                                           |
| **MDXSearchTool**                | A RAG tool tailored for searching within Markdown (MDX) files, useful for documentation.       |
| **PDFSearchTool**                | A RAG tool aimed at searching within PDF documents, ideal for processing scanned documents.    |
| **PGSearchTool**                 | A RAG tool optimized for searching within PostgreSQL databases, suitable for database queries. |
| **Vision Tool**                  | A tool that lets agents analyze and describe the contents of images.                           |
| **RagTool**                      | A general-purpose RAG tool capable of handling various data sources and types.                 |
| **ScrapeElementFromWebsiteTool** | Enables scraping specific elements from websites, useful for targeted data extraction.         |
| **ScrapeWebsiteTool**            | Facilitates scraping entire websites, ideal for comprehensive data collection.                 |
| **WebsiteSearchTool**            | A RAG tool for searching website content, optimized for web data extraction.                   |
| **XMLSearchTool**                | A RAG tool designed for searching within XML files, suitable for structured data formats.      |
| **YoutubeChannelSearchTool**     | A RAG tool for searching within YouTube channels, useful for video content analysis.           |
| **YoutubeVideoSearchTool**       | A RAG tool aimed at searching within YouTube videos, ideal for video data extraction.          |
## Creating your own Tools

<Tip>
  Developers can craft `custom tools` tailored for their agent's needs or
  utilize pre-built options.
</Tip>

There are two main ways to create a CrewAI tool:

### Subclassing `BaseTool`
```python Code
|
||||
from crewai.tools import BaseTool
|
||||
from pydantic import BaseModel, Field
|
||||
|
||||
class MyToolInput(BaseModel):
|
||||
"""Input schema for MyCustomTool."""
|
||||
argument: str = Field(..., description="Description of the argument.")
|
||||
|
||||
class MyCustomTool(BaseTool):
|
||||
name: str = "Name of my tool"
|
||||
description: str = "What this tool does. It's vital for effective utilization."
|
||||
args_schema: Type[BaseModel] = MyToolInput
|
||||
|
||||
def _run(self, argument: str) -> str:
|
||||
# Your tool's logic here
|
||||
return "Tool's result"
|
||||
```
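
Once defined, the subclass is used like any other tool: instantiate it and pass it to an agent. A minimal sketch (the agent fields here are illustrative):

```python Code
from crewai import Agent

# Instantiate the custom tool and hand it to an agent via `tools`
my_tool = MyCustomTool()

analyst = Agent(
    role="Data Analyst",
    goal="Answer questions using the custom tool",
    backstory="An analyst who relies on in-house tooling.",
    tools=[my_tool],
)
```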

### Utilizing the `tool` Decorator

```python Code
from crewai.tools import tool

@tool("Name of my tool")
def my_tool(question: str) -> str:
    """Clear description of what this tool is useful for; your agent will need this information to use it."""
    # Function logic here
    return "Result from your custom tool"
```

### Structured Tools

The `CrewStructuredTool` class wraps functions as tools, providing flexibility and validation while reducing boilerplate. It supports custom schemas and dynamic logic for seamless integration of complex functionality.

#### Example

Using `CrewStructuredTool.from_function`, you can wrap a function that interacts with an external API or system behind a structured interface. This enables robust validation and consistent execution, making it easier to integrate complex functionality into your applications, as the following example demonstrates:

```python
from crewai.tools.structured_tool import CrewStructuredTool
from pydantic import BaseModel

# Define the schema for the tool's input using Pydantic
class APICallInput(BaseModel):
    endpoint: str
    parameters: dict

# Wrapper function to execute the API call
def tool_wrapper(*args, **kwargs):
    # Here, you would typically call the API using the parameters
    # For demonstration, we'll return a placeholder string
    return f"Call the API at {kwargs['endpoint']} with parameters {kwargs['parameters']}"

# Create and return the structured tool
def create_structured_tool():
    return CrewStructuredTool.from_function(
        name='Wrapper API',
        description="A tool to wrap API calls with structured input.",
        args_schema=APICallInput,
        func=tool_wrapper,
    )

# Example usage
structured_tool = create_structured_tool()

# Execute the tool with structured input
result = structured_tool._run(**{
    "endpoint": "https://example.com/api",
    "parameters": {"key1": "value1", "key2": "value2"}
})
print(result)  # Output: Call the API at https://example.com/api with parameters {'key1': 'value1', 'key2': 'value2'}
```

### Custom Caching Mechanism

<Tip>
  Tools can optionally implement a `cache_function` to fine-tune caching behavior. This function determines when to cache results based on specific conditions, offering granular control over caching logic.
</Tip>

```python Code
from crewai import Agent
from crewai.tools import tool

@tool
def multiplication_tool(first_number: int, second_number: int) -> int:
    """Useful for when you need to multiply two numbers together."""
    return first_number * second_number

def cache_func(args, result):
    # In this case, we only cache the result if it's a multiple of 2
    cache = result % 2 == 0
    return cache

multiplication_tool.cache_function = cache_func

writer1 = Agent(
    role="Writer",
    goal="You write lessons of math for kids.",
    backstory="You're an expert in writing and you love to teach kids but you know nothing of math.",
    tools=[multiplication_tool],
    allow_delegation=False,
)
#...
```

## Conclusion

Tools are pivotal in extending the capabilities of CrewAI agents, enabling them to undertake a broad spectrum of tasks and collaborate effectively. When building solutions with CrewAI, leverage both custom and existing tools to empower your agents and enhance the AI ecosystem. Consider utilizing error handling, caching mechanisms, and the flexibility of tool arguments to optimize your agents' performance and capabilities.
67
docs/concepts/training.mdx
Normal file
@@ -0,0 +1,67 @@

---
title: Training
description: Learn how to train your CrewAI agents by giving them feedback early on and get consistent results.
icon: dumbbell
---

## Introduction

The training feature in CrewAI allows you to train your AI agents using the command-line interface (CLI). By running the command `crewai train -n <n_iterations>`, you specify the number of iterations for the training process.

During training, CrewAI utilizes techniques to optimize the performance of your agents along with human feedback. This helps the agents improve their understanding, decision-making, and problem-solving abilities.

### Training Your Crew Using the CLI

To use the training feature, follow these steps:

1. Open your terminal or command prompt.
2. Navigate to the directory where your CrewAI project is located.
3. Run the following command:

```shell
crewai train -n <n_iterations> [filename]
```
<Tip>
  Replace `<n_iterations>` with the desired number of training iterations and `[filename]` with the appropriate filename ending with `.pkl`. The filename argument is optional.
</Tip>

### Training Your Crew Programmatically

To train your crew programmatically, use the following steps:

1. Define the number of iterations for training.
2. Specify the input parameters for the training process.
3. Execute the training command within a try-except block to handle potential errors.

```python Code
n_iterations = 2
inputs = {"topic": "CrewAI Training"}
filename = "your_model.pkl"

try:
    # Replace YourCrewName_Crew with your project's generated crew class
    YourCrewName_Crew().crew().train(
        n_iterations=n_iterations,
        inputs=inputs,
        filename=filename
    )
except Exception as e:
    raise Exception(f"An error occurred while training the crew: {e}")
```

### Key Points to Note

- **Positive Integer Requirement:** Ensure that the number of iterations (`n_iterations`) is a positive integer. The code will raise a `ValueError` if this condition is not met (the sketch after this list mirrors both checks).
- **Filename Requirement:** Ensure that the filename ends with `.pkl`. The code will raise a `ValueError` if this condition is not met.
- **Error Handling:** The code handles subprocess errors and unexpected exceptions, providing error messages to the user.
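
A minimal sketch of those validations in plain Python, mirroring the first two requirements above (variable names are illustrative):

```python Code
n_iterations = 2
filename = "your_model.pkl"

# Mirror the framework's checks before kicking off training
if not isinstance(n_iterations, int) or n_iterations <= 0:
    raise ValueError("n_iterations must be a positive integer")
if not filename.endswith(".pkl"):
    raise ValueError("filename must end with .pkl")
```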

It is important to note that the training process may take some time, depending on the complexity of your agents, and will also require your feedback on each iteration.

Once training is complete, your agents will be equipped with enhanced capabilities and knowledge, ready to tackle complex tasks and provide more consistent and valuable insights.

Remember to regularly update and retrain your agents to ensure they stay up to date with the latest information and advancements in the field.

Happy training with CrewAI! 🚀

@@ -1,58 +0,0 @@

---
title: crewAI Agents
description: What are crewAI Agents and how to use them.
---

## What is an Agent?
!!! note "What is an Agent?"
    An agent is an **autonomous unit** programmed to:
    <ul>
      <li class='leading-3'>Perform tasks</li>
      <li class='leading-3'>Make decisions</li>
      <li class='leading-3'>Communicate with other agents</li>
    </ul>
    <br/>
    Think of an agent as a member of a team, with specific skills and a particular job to do. Agents can have different roles like 'Researcher', 'Writer', or 'Customer Support', each contributing to the overall goal of the crew.

## Agent Attributes

| Attribute | Description |
| :---------- | :----------------------------------- |
| **Role** | Defines the agent's function within the crew. It determines the kind of tasks the agent is best suited for. |
| **Goal** | The individual objective that the agent aims to achieve. It guides the agent's decision-making process. |
| **Backstory** | Provides context to the agent's role and goal, enriching the interaction and collaboration dynamics. |
| **Tools** | The set of capabilities or functions the agent can use to perform tasks. Tools can be shared or exclusive to specific agents. |
| **Max Iter** | The maximum number of iterations the agent can perform before being forced to give its best answer. |
| **Max RPM** | The maximum number of requests per minute the agent can perform, used to avoid rate limits. |
| **Verbose** | Allows you to see what is going on during crew execution. |
| **Allow Delegation** | Agents can delegate tasks or questions to one another, ensuring that each task is handled by the most suitable agent. |

## Creating an Agent

!!! note "Agent Interaction"
    Agents can interact with each other using CrewAI's built-in delegation and communication mechanisms.<br/>This allows for dynamic task management and problem-solving within the crew.

To create an agent, you would typically initialize an instance of the `Agent` class with the desired properties. Here's a conceptual example:

```python
# Example: Creating an agent with all attributes
from crewai import Agent

agent = Agent(
    role='Data Analyst',
    goal='Extract actionable insights',
    backstory="""You're a data analyst at a large company.
    You're responsible for analyzing data and providing insights
    to the business.
    You're currently working on a project to analyze the
    performance of our marketing campaigns.""",
    tools=[my_tool1, my_tool2],  # Optional: tools defined elsewhere
    max_iter=10,
    max_rpm=10,
    verbose=True,
    allow_delegation=True
)
```

## Conclusion
Agents are the building blocks of the CrewAI framework. By understanding how to define and interact with agents, you can create sophisticated AI systems that leverage the power of collaborative intelligence.
@@ -1,24 +0,0 @@

---
title: How Agents Collaborate in CrewAI
description: Exploring the dynamics of agent collaboration within the CrewAI framework.
---

## Collaboration Fundamentals
!!! note "Core of Agent Interaction"
    Collaboration in CrewAI is fundamental, enabling agents to combine their skills, share information, and assist each other in task execution, embodying a truly cooperative ecosystem.

- **Information Sharing**: Ensures all agents are well-informed and can contribute effectively by sharing data and findings.
- **Task Assistance**: Allows agents to seek help from peers with the required expertise for specific tasks.
- **Resource Allocation**: Optimizes task execution through the efficient distribution and sharing of resources among agents.

## Delegation: Dividing to Conquer
Delegation enhances functionality by allowing agents to intelligently assign tasks or seek help, thereby amplifying the crew's overall capability.

## Implementing Collaboration and Delegation
Setting up a crew involves defining the roles and capabilities of each agent. CrewAI seamlessly manages their interactions, ensuring efficient collaboration and delegation.

## Example Scenario
Imagine a crew with a researcher agent tasked with data gathering and a writer agent responsible for compiling reports. The writer can delegate research tasks or ask questions to the researcher, facilitating a seamless workflow.

## Conclusion
Collaboration and delegation are pivotal, transforming individual AI agents into a coherent, intelligent crew capable of tackling complex tasks. CrewAI's framework not only simplifies these interactions but enhances their effectiveness, paving the way for sophisticated AI-driven solutions.
@@ -1,75 +0,0 @@

---
title: crewAI Crews
description: Understanding and utilizing crews in the crewAI framework.
---

## What is a Crew?
!!! note "Definition of a Crew"
    A crew in crewAI represents a collaborative group of agents working together to achieve a set of tasks. Each crew defines the strategy for task execution, agent collaboration, and the overall workflow.

## Crew Attributes

| Attribute | Description |
| :------------------- | :----------------------------------------------------------- |
| **Tasks** | A list of tasks assigned to the crew. |
| **Agents** | A list of agents that are part of the crew. |
| **Process** | The process flow (e.g., sequential, hierarchical) the crew follows. |
| **Verbose** | The verbosity level for logging during execution. |
| **Manager LLM** | The language model used by the manager agent in a hierarchical process. |
| **Config** | Configuration settings for the crew. |
| **Max RPM** | Maximum requests per minute the crew adheres to during execution. |
| **Language** | Language setting for the crew's operation. |

!!! note "Crew Max RPM"
    The `max_rpm` attribute sets the maximum number of requests per minute the crew can perform to avoid rate limits, and will override individual agents' `max_rpm` settings if you set it.

## Creating a Crew

!!! note "Crew Composition"
    When assembling a crew, you combine agents with complementary roles and tools, assign tasks, and select a process that dictates their execution order and interaction.

### Example: Assembling a Crew

```python
from crewai import Crew, Agent, Task, Process
from langchain_community.tools import DuckDuckGoSearchRun

# Define agents with specific roles and tools
researcher = Agent(
    role='Senior Research Analyst',
    goal='Discover innovative AI technologies',
    tools=[DuckDuckGoSearchRun()]
)

writer = Agent(
    role='Content Writer',
    goal='Write engaging articles on AI discoveries'
)

# Create tasks for the agents
research_task = Task(description='Identify breakthrough AI technologies', agent=researcher)
write_article_task = Task(description='Draft an article on the latest AI technologies', agent=writer)

# Assemble the crew with a sequential process
my_crew = Crew(
    agents=[researcher, writer],
    tasks=[research_task, write_article_task],
    process=Process.sequential,
    verbose=True
)
```

## Crew Execution Process

- **Sequential Process**: Tasks are executed one after another, allowing for a linear flow of work.
- **Hierarchical Process**: A manager agent coordinates the crew, delegating tasks and validating outcomes before proceeding (see the sketch below).
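
As a rough sketch of the hierarchical case, the crew is additionally given a manager language model to coordinate and delegate (assuming `ChatOpenAI` from `langchain_openai`; illustrative, reusing the agents and tasks defined above):

```python
from langchain_openai import ChatOpenAI

# Hierarchical: a manager model plans, delegates, and validates the work
managed_crew = Crew(
    agents=[researcher, writer],
    tasks=[research_task, write_article_task],
    process=Process.hierarchical,
    manager_llm=ChatOpenAI(model="gpt-4")
)
```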

### Kicking Off a Crew

Once your crew is assembled, initiate the workflow with the `kickoff()` method. This starts the execution process according to the defined process flow.

```python
# Start the crew's task execution
result = my_crew.kickoff()
print(result)
```
@@ -1,48 +0,0 @@

---
title: Managing Processes in CrewAI
description: An overview of workflow management through processes in CrewAI.
---

## Understanding Processes
!!! note "Core Concept"
    Processes in CrewAI orchestrate how tasks are executed by agents, akin to project management in human teams. They ensure tasks are distributed and completed efficiently, according to a predefined game plan.

## Process Implementations

- **Sequential**: Executes tasks one after another, ensuring a linear and orderly progression.
- **Hierarchical**: Implements a chain of command, where tasks are delegated and executed based on a managerial structure.
- **Consensual (WIP)**: A future process type aiming for collaborative decision-making among agents on task execution.

## The Role of Processes in Teamwork
Processes transform individual agents into a unified team, coordinating their efforts to achieve common goals with efficiency and harmony.

## Assigning Processes to a Crew
Specify the process during crew creation to determine the execution strategy:

```python
from crewai import Crew
from crewai.process import Process

# Example: Creating a crew with a sequential process
crew = Crew(agents=my_agents, tasks=my_tasks, process=Process.sequential)

# Example: Creating a crew with a hierarchical process
crew = Crew(agents=my_agents, tasks=my_tasks, process=Process.hierarchical)
```

## Sequential Process
Ensures a natural flow of work, mirroring human team dynamics by progressing through tasks thoughtfully and systematically.

Tasks need to be pre-assigned to agents, and the order of execution is determined by the order of the tasks in the list.

Tasks are executed one after another, ensuring a linear and orderly progression, and the output of one task is automatically used as context for the next task.

You can also define specific tasks' outputs to be used as context for another task by using the `context` parameter of the `Task` class.

## Hierarchical Process
Mimics a corporate hierarchy, where a manager oversees task execution, planning, delegation, and validation, enhancing task coordination.

In this process, tasks don't need to be pre-assigned to agents; the manager decides which agent performs each task, reviews the output, and decides whether the task is complete.

## Conclusion
Processes are vital for structured collaboration within CrewAI, enabling agents to work together systematically. Future updates will introduce new processes, further mimicking the adaptability and complexity of human teamwork.
@@ -1,212 +0,0 @@

---
title: crewAI Tasks
description: Overview and management of tasks within the crewAI framework.
---

## Overview of a Task
!!! note "What is a Task?"
    In the CrewAI framework, tasks are individual assignments that agents complete. They encapsulate the necessary information for execution, including a description, assigned agent, and required tools, offering flexibility for various action complexities.

Tasks in CrewAI can be designed to require collaboration between agents. For example, one agent might gather data while another analyzes it. This collaborative approach can be defined within the task properties and managed by the Crew's process.

## Task Attributes

| Attribute | Description |
| :---------- | :----------------------------------- |
| **Description** | A clear, concise statement of what the task entails. |
| **Agent** | Optionally, you can specify which agent is responsible for the task. If not, the crew's process will determine who takes it on. |
| **Expected Output** *(optional)* | A clear and detailed definition of the expected output for the task. |
| **Tools** *(optional)* | The functions or capabilities the agent can utilize to perform the task. They can be anything from simple actions like 'search' to more complex interactions with other agents or APIs. |
| **Async Execution** *(optional)* | Whether the task should be executed asynchronously. |
| **Context** *(optional)* | Other tasks whose output will be used as context for this task; if one of them is asynchronous, this task will wait for it to finish. |
| **Callback** *(optional)* | A function to be executed after the task is completed. |

## Creating a Task

This is the simplest example of creating a task; it involves defining its scope and agent, but there are optional attributes that can provide a lot of flexibility:

```python
from crewai import Task

task = Task(
    description='Find and summarize the latest and most relevant news on AI',
    agent=sales_agent
)
```
!!! note "Task Assignment"
    Tasks can be assigned directly by specifying an `agent`, or they can be assigned at run time through CrewAI's `hierarchical` process, considering roles, availability, or other criteria.

## Integrating Tools with Tasks

Tools from the [crewAI Toolkit](https://github.com/joaomdmoura/crewai-tools) and [LangChain Tools](https://python.langchain.com/docs/integrations/tools) enhance task performance, allowing agents to interact more effectively with their environment. Assigning specific tools to tasks can tailor agent capabilities to particular needs.

## Creating a Task with Tools

```python
import os
os.environ["OPENAI_API_KEY"] = "Your Key"

from crewai import Agent, Task, Crew
from langchain_community.tools import DuckDuckGoSearchRun

research_agent = Agent(
    role='Researcher',
    goal='Find and summarize the latest AI news',
    backstory="""You're a researcher at a large company.
    You're responsible for analyzing data and providing insights
    to the business.""",
    verbose=True
)

# Install duckduckgo-search for this example:
# !pip install -U duckduckgo-search
search_tool = DuckDuckGoSearchRun()

task = Task(
    description='Find and summarize the latest AI news',
    expected_output='A bullet list summary of the top 5 most important AI news',
    agent=research_agent,
    tools=[search_tool]
)

crew = Crew(
    agents=[research_agent],
    tasks=[task],
    verbose=2
)

result = crew.kickoff()
print(result)
```

This demonstrates how tasks with specific tools can override an agent's default set for tailored task execution.

## Referring to other Tasks

In crewAI, the output of one task is automatically relayed into the next one, but you can specifically define which tasks' output should be used as context for another task.

This is useful when you have a task that depends on the output of another task that is not performed immediately after it. This is done through the `context` attribute of the task:

```python
# ...

research_task = Task(
    description='Find and summarize the latest AI news',
    expected_output='A bullet list summary of the top 5 most important AI news',
    agent=research_agent,
    tools=[search_tool]
)

write_blog_task = Task(
    description="Write a full blog post about the importance of AI and its latest news",
    expected_output='Full blog post that is 4 paragraphs long',
    agent=writer_agent,
    context=[research_task]
)

#...
```

## Asynchronous Execution

You can define a task to be executed asynchronously, which means the crew will not wait for it to complete before continuing with the next task. This is useful for tasks that take a long time to complete, or that are not crucial for the next tasks to be performed.

You can then use the `context` attribute to define, in a future task, that it should wait for the output of the asynchronous task to be completed.

```python
#...

list_ideas = Task(
    description="List 5 interesting ideas to explore for an article about AI.",
    expected_output="Bullet point list of 5 ideas for an article.",
    agent=researcher,
    async_execution=True # Will be executed asynchronously
)

list_important_history = Task(
    description="Research the history of AI and give me the 5 most important events.",
    expected_output="Bullet point list of 5 important events.",
    agent=researcher,
    async_execution=True # Will be executed asynchronously
)

write_article = Task(
    description="Write an article about AI, its history, and interesting ideas.",
    expected_output="A 4 paragraph article about AI.",
    agent=writer,
    context=[list_ideas, list_important_history] # Will wait for the output of the two tasks to be completed
)

#...
```

## Callback Mechanism

You can define a callback function that will be executed after the task is completed. This is useful for tasks that need to trigger some side effect after they are completed, while the crew is still running.

```python
# ...
from crewai.tasks.task_output import TaskOutput

def callback_function(output: TaskOutput):
    # Do something after the task is completed
    # Example: Send an email to the manager
    print(f"""
        Task completed!
        Task: {output.description}
        Output: {output.result}
    """)

research_task = Task(
    description='Find and summarize the latest AI news',
    expected_output='A bullet list summary of the top 5 most important AI news',
    agent=research_agent,
    tools=[search_tool],
    callback=callback_function
)

#...
```

## Accessing a specific Task Output

Once a crew finishes running, you can access the output of a specific task by using the `output` attribute of the task object:

```python
# ...
task1 = Task(
    description='Find and summarize the latest AI news',
    expected_output='A bullet list summary of the top 5 most important AI news',
    agent=research_agent,
    tools=[search_tool]
)

#...

crew = Crew(
    agents=[research_agent],
    tasks=[task1, task2, task3],
    verbose=2
)

result = crew.kickoff()

# Returns a TaskOutput object with the description and results of the task
print(f"""
    Task completed!
    Task: {task1.output.description}
    Output: {task1.output.result}
""")
```

## Tool Override Mechanism

Specifying tools in a task allows for dynamic adaptation of agent capabilities, emphasizing CrewAI's flexibility, as sketched below.
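
For illustration, a minimal sketch of the override (illustrative names: the agent carries `default_tool`, but the task supplies `special_tool`, which takes precedence for that task):

```python
# The agent's default toolset
agent = Agent(role='Analyst', goal='...', backstory='...', tools=[default_tool])

# The task-level tools list overrides the agent's default set for this task only
task = Task(description='...', expected_output='...', agent=agent, tools=[special_tool])
```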

## Conclusion

Tasks are the driving force behind the actions of agents in crewAI. By properly defining tasks and their outcomes, you set the stage for your AI agents to work effectively, either independently or as a collaborative unit. Equipping tasks with appropriate tools is crucial for maximizing CrewAI's potential, ensuring agents are effectively prepared for their assignments.
@@ -1,83 +0,0 @@

---
title: crewAI Tools
description: Understanding and leveraging tools within the crewAI framework.
---

## What is a Tool?
!!! note "Definition"
    A tool in CrewAI is a skill: something agents can use to perform tasks. Right now these can be tools from the [crewAI Toolkit](https://github.com/joaomdmoura/crewai-tools) and [LangChain Tools](https://python.langchain.com/docs/integrations/tools), which are essentially functions that an agent can utilize for various actions, from simple searches to complex interactions with external systems.

## Key Characteristics of Tools

- **Utility**: Designed for specific tasks such as web searching, data analysis, or content generation.
- **Integration**: Enhance agent capabilities by integrating tools directly into their workflow.
- **Customizability**: Offer the flexibility to develop custom tools or use existing ones from LangChain's ecosystem.

## Creating your own Tools
!!! example "Custom Tool Creation"
    Developers can craft custom tools tailored to their agent's needs or utilize pre-built options. Here's how to create one:

```python
import json
import requests
from crewai import Agent
from decouple import config  # python-decouple (assumed), provides config() for reading env vars
from langchain.tools import tool
from unstructured.partition.html import partition_html

class BrowserTools():

    # Annotate the function with the tool decorator from LangChain
    @tool("Scrape website content")
    def scrape_website(website):
        # Write the logic for the tool.
        # In this case, a function to scrape website content
        url = f"https://chrome.browserless.io/content?token={config('BROWSERLESS_API_KEY')}"
        payload = json.dumps({"url": website})
        headers = {'cache-control': 'no-cache', 'content-type': 'application/json'}
        response = requests.request("POST", url, headers=headers, data=payload)
        elements = partition_html(text=response.text)
        content = "\n\n".join([str(el) for el in elements])
        return content[:5000]

# Assign the scraping tool to an agent
agent = Agent(
    role='Research Analyst',
    goal='Provide up-to-date market analysis',
    backstory='An expert analyst with a keen eye for market trends.',
    tools=[BrowserTools().scrape_website]
)
```

## Using LangChain Tools
!!! info "LangChain Integration"
    CrewAI seamlessly integrates with LangChain's comprehensive toolkit. Assigning an existing tool to an agent is straightforward:

```python
from crewai import Agent
from langchain.agents import Tool
from langchain.utilities import GoogleSerperAPIWrapper
import os

# Setup API keys
os.environ["OPENAI_API_KEY"] = "Your Key"
os.environ["SERPER_API_KEY"] = "Your Key"

search = GoogleSerperAPIWrapper()

# Create and assign the search tool to an agent
serper_tool = Tool(
    name="Intermediate Answer",
    func=search.run,
    description="Useful for search-based queries",
)

agent = Agent(
    role='Research Analyst',
    goal='Provide up-to-date market analysis',
    backstory='An expert analyst with a keen eye for market trends.',
    tools=[serper_tool]
)
```

## Conclusion
Tools are crucial for extending the capabilities of CrewAI agents, allowing them to undertake a diverse array of tasks and collaborate efficiently. When building your AI solutions with CrewAI, consider both custom and existing tools to empower your agents and foster a dynamic AI ecosystem.
[Image diffs: two existing images resized from 94 KiB and 97 KiB to 14 KiB each; binary file docs/crews.png added (29 KiB).]
642
docs/custom_llm.md
Normal file
@@ -0,0 +1,642 @@

# Custom LLM Implementations

CrewAI now supports custom LLM implementations through the `BaseLLM` abstract base class. This allows you to create your own LLM implementations that don't rely on litellm's authentication mechanism.

## Using Custom LLM Implementations

To create a custom LLM implementation, you need to:

1. Inherit from the `BaseLLM` abstract base class
2. Implement the required methods:
   - `call()`: The main method to call the LLM with messages
   - `supports_function_calling()`: Whether the LLM supports function calling
   - `supports_stop_words()`: Whether the LLM supports stop words
   - `get_context_window_size()`: The context window size of the LLM

## Example: Basic Custom LLM

```python
from crewai import BaseLLM
from typing import Any, Dict, List, Optional, Union

class CustomLLM(BaseLLM):
    def __init__(self, api_key: str, endpoint: str):
        super().__init__()  # Initialize the base class to set default attributes
        if not api_key or not isinstance(api_key, str):
            raise ValueError("Invalid API key: must be a non-empty string")
        if not endpoint or not isinstance(endpoint, str):
            raise ValueError("Invalid endpoint URL: must be a non-empty string")
        self.api_key = api_key
        self.endpoint = endpoint
        self.stop = []  # You can customize stop words if needed

    def call(
        self,
        messages: Union[str, List[Dict[str, str]]],
        tools: Optional[List[dict]] = None,
        callbacks: Optional[List[Any]] = None,
        available_functions: Optional[Dict[str, Any]] = None,
    ) -> Union[str, Any]:
        """Call the LLM with the given messages.

        Args:
            messages: Input messages for the LLM.
            tools: Optional list of tool schemas for function calling.
            callbacks: Optional list of callback functions.
            available_functions: Optional dict mapping function names to callables.

        Returns:
            Either a text response from the LLM or the result of a tool function call.

        Raises:
            TimeoutError: If the LLM request times out.
            RuntimeError: If the LLM request fails for other reasons.
            ValueError: If the response format is invalid.
        """
        # Implement your own logic to call the LLM
        # For example, using requests:
        import requests

        try:
            headers = {
                "Authorization": f"Bearer {self.api_key}",
                "Content-Type": "application/json"
            }

            # Convert string message to proper format if needed
            if isinstance(messages, str):
                messages = [{"role": "user", "content": messages}]

            data = {
                "messages": messages,
                "tools": tools
            }

            response = requests.post(
                self.endpoint,
                headers=headers,
                json=data,
                timeout=30  # Set a reasonable timeout
            )
            response.raise_for_status()  # Raise an exception for HTTP errors
            return response.json()["choices"][0]["message"]["content"]
        except requests.Timeout:
            raise TimeoutError("LLM request timed out")
        except requests.RequestException as e:
            raise RuntimeError(f"LLM request failed: {str(e)}")
        except (KeyError, IndexError, ValueError) as e:
            raise ValueError(f"Invalid response format: {str(e)}")

    def supports_function_calling(self) -> bool:
        """Check if the LLM supports function calling.

        Returns:
            True if the LLM supports function calling, False otherwise.
        """
        # Return True if your LLM supports function calling
        return True

    def supports_stop_words(self) -> bool:
        """Check if the LLM supports stop words.

        Returns:
            True if the LLM supports stop words, False otherwise.
        """
        # Return True if your LLM supports stop words
        return True

    def get_context_window_size(self) -> int:
        """Get the context window size of the LLM.

        Returns:
            The context window size as an integer.
        """
        # Return the context window size of your LLM
        return 8192
```

## Error Handling Best Practices

When implementing custom LLMs, it's important to handle errors properly to ensure robustness and reliability. Here are some best practices:

### 1. Implement Try-Except Blocks for API Calls

Always wrap API calls in try-except blocks to handle different types of errors:

```python
def call(
    self,
    messages: Union[str, List[Dict[str, str]]],
    tools: Optional[List[dict]] = None,
    callbacks: Optional[List[Any]] = None,
    available_functions: Optional[Dict[str, Any]] = None,
) -> Union[str, Any]:
    try:
        # API call implementation
        response = requests.post(
            self.endpoint,
            headers=self.headers,
            json=self.prepare_payload(messages),
            timeout=30  # Set a reasonable timeout
        )
        response.raise_for_status()  # Raise an exception for HTTP errors
        return response.json()["choices"][0]["message"]["content"]
    except requests.Timeout:
        raise TimeoutError("LLM request timed out")
    except requests.RequestException as e:
        raise RuntimeError(f"LLM request failed: {str(e)}")
    except (KeyError, IndexError, ValueError) as e:
        raise ValueError(f"Invalid response format: {str(e)}")
```

### 2. Implement Retry Logic for Transient Failures

For transient failures like network issues or rate limiting, implement retry logic with exponential backoff:

```python
def call(
    self,
    messages: Union[str, List[Dict[str, str]]],
    tools: Optional[List[dict]] = None,
    callbacks: Optional[List[Any]] = None,
    available_functions: Optional[Dict[str, Any]] = None,
) -> Union[str, Any]:
    import time

    max_retries = 3
    retry_delay = 1  # seconds

    for attempt in range(max_retries):
        try:
            response = requests.post(
                self.endpoint,
                headers=self.headers,
                json=self.prepare_payload(messages),
                timeout=30
            )
            response.raise_for_status()
            return response.json()["choices"][0]["message"]["content"]
        except (requests.Timeout, requests.ConnectionError) as e:
            if attempt < max_retries - 1:
                time.sleep(retry_delay * (2 ** attempt))  # Exponential backoff
                continue
            raise TimeoutError(f"LLM request failed after {max_retries} attempts: {str(e)}")
        except requests.RequestException as e:
            raise RuntimeError(f"LLM request failed: {str(e)}")
```

### 3. Validate Input Parameters

Always validate input parameters to prevent runtime errors:

```python
def __init__(self, api_key: str, endpoint: str):
    super().__init__()
    if not api_key or not isinstance(api_key, str):
        raise ValueError("Invalid API key: must be a non-empty string")
    if not endpoint or not isinstance(endpoint, str):
        raise ValueError("Invalid endpoint URL: must be a non-empty string")
    self.api_key = api_key
    self.endpoint = endpoint
```

### 4. Handle Authentication Errors Gracefully

Provide clear error messages for authentication failures:

```python
def call(
    self,
    messages: Union[str, List[Dict[str, str]]],
    tools: Optional[List[dict]] = None,
    callbacks: Optional[List[Any]] = None,
    available_functions: Optional[Dict[str, Any]] = None,
) -> Union[str, Any]:
    try:
        response = requests.post(self.endpoint, headers=self.headers, json=data)
        if response.status_code == 401:
            raise ValueError("Authentication failed: Invalid API key or token")
        elif response.status_code == 403:
            raise ValueError("Authorization failed: Insufficient permissions")
        response.raise_for_status()
        # Process response
    except Exception as e:
        # Handle error
        raise
```

## Example: JWT-based Authentication

For services that use JWT-based authentication instead of API keys, you can implement a custom LLM like this:

```python
from crewai import BaseLLM, Agent, Task
from typing import Any, Dict, List, Optional, Union

class JWTAuthLLM(BaseLLM):
    def __init__(self, jwt_token: str, endpoint: str):
        super().__init__()  # Initialize the base class to set default attributes
        if not jwt_token or not isinstance(jwt_token, str):
            raise ValueError("Invalid JWT token: must be a non-empty string")
        if not endpoint or not isinstance(endpoint, str):
            raise ValueError("Invalid endpoint URL: must be a non-empty string")
        self.jwt_token = jwt_token
        self.endpoint = endpoint
        self.stop = []  # You can customize stop words if needed

    def call(
        self,
        messages: Union[str, List[Dict[str, str]]],
        tools: Optional[List[dict]] = None,
        callbacks: Optional[List[Any]] = None,
        available_functions: Optional[Dict[str, Any]] = None,
    ) -> Union[str, Any]:
        """Call the LLM with JWT authentication.

        Args:
            messages: Input messages for the LLM.
            tools: Optional list of tool schemas for function calling.
            callbacks: Optional list of callback functions.
            available_functions: Optional dict mapping function names to callables.

        Returns:
            Either a text response from the LLM or the result of a tool function call.

        Raises:
            TimeoutError: If the LLM request times out.
            RuntimeError: If the LLM request fails for other reasons.
            ValueError: If the response format is invalid.
        """
        # Implement your own logic to call the LLM with JWT authentication
        import requests

        try:
            headers = {
                "Authorization": f"Bearer {self.jwt_token}",
                "Content-Type": "application/json"
            }

            # Convert string message to proper format if needed
            if isinstance(messages, str):
                messages = [{"role": "user", "content": messages}]

            data = {
                "messages": messages,
                "tools": tools
            }

            response = requests.post(
                self.endpoint,
                headers=headers,
                json=data,
                timeout=30  # Set a reasonable timeout
            )

            if response.status_code == 401:
                raise ValueError("Authentication failed: Invalid JWT token")
            elif response.status_code == 403:
                raise ValueError("Authorization failed: Insufficient permissions")

            response.raise_for_status()  # Raise an exception for HTTP errors
            return response.json()["choices"][0]["message"]["content"]
        except requests.Timeout:
            raise TimeoutError("LLM request timed out")
        except requests.RequestException as e:
            raise RuntimeError(f"LLM request failed: {str(e)}")
        except (KeyError, IndexError, ValueError) as e:
            raise ValueError(f"Invalid response format: {str(e)}")

    def supports_function_calling(self) -> bool:
        """Check if the LLM supports function calling.

        Returns:
            True if the LLM supports function calling, False otherwise.
        """
        return True

    def supports_stop_words(self) -> bool:
        """Check if the LLM supports stop words.

        Returns:
            True if the LLM supports stop words, False otherwise.
        """
        return True

    def get_context_window_size(self) -> int:
        """Get the context window size of the LLM.

        Returns:
            The context window size as an integer.
        """
        return 8192
```

## Troubleshooting

Here are some common issues you might encounter when implementing custom LLMs and how to resolve them:

### 1. Authentication Failures

**Symptoms**: 401 Unauthorized or 403 Forbidden errors

**Solutions**:
- Verify that your API key or JWT token is valid and not expired
- Check that you're using the correct authentication header format
- Ensure that your token has the necessary permissions

### 2. Timeout Issues

**Symptoms**: Requests taking too long or timing out

**Solutions**:
- Implement timeout handling as shown in the examples
- Use retry logic with exponential backoff
- Consider using a more reliable network connection

### 3. Response Parsing Errors

**Symptoms**: KeyError, IndexError, or ValueError when processing responses

**Solutions**:
- Validate the response format before accessing nested fields
- Implement proper error handling for malformed responses
- Check the API documentation for the expected response format

### 4. Rate Limiting

**Symptoms**: 429 Too Many Requests errors

**Solutions**:
- Implement rate limiting in your custom LLM
- Add exponential backoff for retries
- Consider using a token bucket algorithm for more precise rate control (see the sketch after this list)
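
As a rough illustration of that last point, here is a minimal token bucket sketch in plain Python (illustrative names; tune `rate` and `capacity` to your provider's limits):

```python
import time

class TokenBucket:
    """Simple token bucket: allows bursts up to `capacity`, refills at `rate` tokens/sec."""

    def __init__(self, rate: float, capacity: float):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity
        self.last_refill = time.monotonic()

    def acquire(self) -> None:
        """Block until a token is available, then consume it."""
        while True:
            now = time.monotonic()
            # Refill proportionally to elapsed time, capped at capacity
            self.tokens = min(self.capacity, self.tokens + (now - self.last_refill) * self.rate)
            self.last_refill = now
            if self.tokens >= 1:
                self.tokens -= 1
                return
            # Wait just long enough for one token to accrue
            time.sleep((1 - self.tokens) / self.rate)

# Usage inside a custom LLM's call(): invoke bucket.acquire() before each request
bucket = TokenBucket(rate=1.0, capacity=5)  # ~60 requests/minute with bursts of 5
```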

## Advanced Features

### Logging

Adding logging to your custom LLM can help with debugging and monitoring:

```python
import logging
from typing import Any, Dict, List, Optional, Union

class LoggingLLM(BaseLLM):
    def __init__(self, api_key: str, endpoint: str):
        super().__init__()
        self.api_key = api_key
        self.endpoint = endpoint
        self.logger = logging.getLogger("crewai.llm.custom")

    def call(
        self,
        messages: Union[str, List[Dict[str, str]]],
        tools: Optional[List[dict]] = None,
        callbacks: Optional[List[Any]] = None,
        available_functions: Optional[Dict[str, Any]] = None,
    ) -> Union[str, Any]:
        self.logger.info(f"Calling LLM with {len(messages) if isinstance(messages, list) else 1} messages")
        try:
            # API call implementation
            response = self._make_api_call(messages, tools)
            self.logger.debug(f"LLM response received: {response[:100]}...")
            return response
        except Exception as e:
            self.logger.error(f"LLM call failed: {str(e)}")
            raise
```

### Rate Limiting

Implementing rate limiting can help avoid overwhelming the LLM API:

```python
import time
from typing import Any, Dict, List, Optional, Union

class RateLimitedLLM(BaseLLM):
    def __init__(
        self,
        api_key: str,
        endpoint: str,
        requests_per_minute: int = 60
    ):
        super().__init__()
        self.api_key = api_key
        self.endpoint = endpoint
        self.requests_per_minute = requests_per_minute
        self.request_times: List[float] = []

    def call(
        self,
        messages: Union[str, List[Dict[str, str]]],
        tools: Optional[List[dict]] = None,
        callbacks: Optional[List[Any]] = None,
        available_functions: Optional[Dict[str, Any]] = None,
    ) -> Union[str, Any]:
        self._enforce_rate_limit()
        # Record this request time
        self.request_times.append(time.time())
        # Make the actual API call
        return self._make_api_call(messages, tools)

    def _enforce_rate_limit(self) -> None:
        """Enforce the rate limit by waiting if necessary."""
        now = time.time()
        # Remove request times older than 1 minute
        self.request_times = [t for t in self.request_times if now - t < 60]

        if len(self.request_times) >= self.requests_per_minute:
            # Calculate how long to wait
            oldest_request = min(self.request_times)
            wait_time = 60 - (now - oldest_request)
            if wait_time > 0:
                time.sleep(wait_time)
```

### Metrics Collection

Collecting metrics can help you monitor your LLM usage:

```python
import time
from typing import Any, Dict, List, Optional, Union

class MetricsCollectingLLM(BaseLLM):
    def __init__(self, api_key: str, endpoint: str):
        super().__init__()
        self.api_key = api_key
        self.endpoint = endpoint
        self.metrics: Dict[str, Any] = {
            "total_calls": 0,
            "total_tokens": 0,
            "errors": 0,
            "latency": []
        }

    def call(
        self,
        messages: Union[str, List[Dict[str, str]]],
        tools: Optional[List[dict]] = None,
        callbacks: Optional[List[Any]] = None,
        available_functions: Optional[Dict[str, Any]] = None,
    ) -> Union[str, Any]:
        start_time = time.time()
        self.metrics["total_calls"] += 1

        try:
            response = self._make_api_call(messages, tools)
            # Estimate tokens (simplified)
            if isinstance(messages, str):
                token_estimate = len(messages) // 4
            else:
                token_estimate = sum(len(m.get("content", "")) // 4 for m in messages)
            self.metrics["total_tokens"] += token_estimate
            return response
        except Exception as e:
            self.metrics["errors"] += 1
            raise
        finally:
            latency = time.time() - start_time
            self.metrics["latency"].append(latency)

    def get_metrics(self) -> Dict[str, Any]:
        """Return the collected metrics."""
        avg_latency = sum(self.metrics["latency"]) / len(self.metrics["latency"]) if self.metrics["latency"] else 0
        return {
            **self.metrics,
            "avg_latency": avg_latency
        }
```

## Advanced Usage: Function Calling

If your LLM supports function calling, you can implement the function calling logic in your custom LLM:

```python
import json
from typing import Any, Dict, List, Optional, Union

def call(
    self,
    messages: Union[str, List[Dict[str, str]]],
    tools: Optional[List[dict]] = None,
    callbacks: Optional[List[Any]] = None,
    available_functions: Optional[Dict[str, Any]] = None,
) -> Union[str, Any]:
    import requests

    try:
        headers = {
            "Authorization": f"Bearer {self.jwt_token}",
            "Content-Type": "application/json"
        }

        # Convert string message to proper format if needed
        if isinstance(messages, str):
            messages = [{"role": "user", "content": messages}]

        data = {
            "messages": messages,
            "tools": tools
        }

        response = requests.post(
            self.endpoint,
            headers=headers,
            json=data,
            timeout=30
        )
        response.raise_for_status()
        response_data = response.json()

        # Check if the LLM wants to call a function
        if response_data["choices"][0]["message"].get("tool_calls"):
            tool_calls = response_data["choices"][0]["message"]["tool_calls"]

            # Process each tool call
            for tool_call in tool_calls:
                function_name = tool_call["function"]["name"]
                function_args = json.loads(tool_call["function"]["arguments"])

                if available_functions and function_name in available_functions:
                    function_to_call = available_functions[function_name]
                    function_response = function_to_call(**function_args)

                    # Add the function response to the messages
                    messages.append({
                        "role": "tool",
                        "tool_call_id": tool_call["id"],
                        "name": function_name,
                        "content": str(function_response)
                    })

            # Call the LLM again with the updated messages
            return self.call(messages, tools, callbacks, available_functions)

        # Return the text response if no function call
        return response_data["choices"][0]["message"]["content"]
    except requests.Timeout:
        raise TimeoutError("LLM request timed out")
    except requests.RequestException as e:
        raise RuntimeError(f"LLM request failed: {str(e)}")
    except (KeyError, IndexError, ValueError) as e:
        raise ValueError(f"Invalid response format: {str(e)}")
```

## Using Your Custom LLM with CrewAI

Once you've implemented your custom LLM, you can use it with CrewAI agents and crews:

```python
from crewai import Agent, Task, Crew
from typing import Dict, Any

# Create your custom LLM instance
jwt_llm = JWTAuthLLM(
    jwt_token="your.jwt.token",
    endpoint="https://your-llm-endpoint.com/v1/chat/completions"
)

# Use it with an agent
agent = Agent(
    role="Research Assistant",
    goal="Find information on a topic",
    backstory="You are a research assistant tasked with finding information.",
    llm=jwt_llm,
)

# Create a task for the agent
task = Task(
    description="Research the benefits of exercise",
    agent=agent,
    expected_output="A summary of the benefits of exercise",
)

# Execute the task
result = agent.execute_task(task)
print(result)

# Or use it with a crew
crew = Crew(
    agents=[agent],
    tasks=[task],
    manager_llm=jwt_llm,  # Use your custom LLM for the manager
)

# Run the crew
result = crew.kickoff()
print(result)
```

## Implementing Your Own Authentication Mechanism

The `BaseLLM` class allows you to implement any authentication mechanism you need, not just JWT or API keys. You can use:

- OAuth tokens
- Client certificates
- Custom headers
- Session-based authentication
- Any other authentication method required by your LLM provider

Simply implement the appropriate authentication logic in your custom LLM class, as in the sketch below.
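
For instance, a minimal sketch of custom-header authentication (the header names are illustrative placeholders for whatever your provider requires):

```python
from typing import Any, Dict, List, Optional, Union

from crewai import BaseLLM

class CustomHeaderLLM(BaseLLM):
    def __init__(self, endpoint: str, auth_headers: Dict[str, str]):
        super().__init__()
        self.endpoint = endpoint
        # e.g. {"X-Client-Id": "...", "X-Client-Secret": "..."}: provider-specific names
        self.auth_headers = auth_headers

    def call(
        self,
        messages: Union[str, List[Dict[str, str]]],
        tools: Optional[List[dict]] = None,
        callbacks: Optional[List[Any]] = None,
        available_functions: Optional[Dict[str, Any]] = None,
    ) -> Union[str, Any]:
        import requests

        if isinstance(messages, str):
            messages = [{"role": "user", "content": messages}]
        # Merge the custom auth headers into every request
        headers = {"Content-Type": "application/json", **self.auth_headers}
        response = requests.post(self.endpoint, headers=headers, json={"messages": messages}, timeout=30)
        response.raise_for_status()
        return response.json()["choices"][0]["message"]["content"]

    def supports_function_calling(self) -> bool:
        return False

    def supports_stop_words(self) -> bool:
        return False

    def get_context_window_size(self) -> int:
        return 8192
```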
238
docs/docs.json
Normal file
@@ -0,0 +1,238 @@
{
  "$schema": "https://mintlify.com/docs.json",
  "theme": "mint",
  "name": "CrewAI",
  "colors": {
    "primary": "#EB6658",
    "light": "#F3A78B",
    "dark": "#C94C3C"
  },
  "favicon": "favicon.svg",
  "navigation": {
    "tabs": [
      {
        "tab": "Get Started",
        "groups": [
          {
            "group": "Get Started",
            "pages": [
              "introduction",
              "installation",
              "quickstart",
              "changelog"
            ]
          },
          {
            "group": "Guides",
            "pages": [
              {
                "group": "Concepts",
                "pages": [
                  "guides/concepts/evaluating-use-cases"
                ]
              },
              {
                "group": "Agents",
                "pages": [
                  "guides/agents/crafting-effective-agents"
                ]
              },
              {
                "group": "Crews",
                "pages": [
                  "guides/crews/first-crew"
                ]
              },
              {
                "group": "Flows",
                "pages": [
                  "guides/flows/first-flow",
                  "guides/flows/mastering-flow-state"
                ]
              },
              {
                "group": "Advanced",
                "pages": [
                  "guides/advanced/customizing-prompts",
                  "guides/advanced/fingerprinting"
                ]
              }
            ]
          },
          {
            "group": "Core Concepts",
            "pages": [
              "concepts/agents",
              "concepts/tasks",
              "concepts/crews",
              "concepts/flows",
              "concepts/knowledge",
              "concepts/llms",
              "concepts/processes",
              "concepts/collaboration",
              "concepts/training",
              "concepts/memory",
              "concepts/planning",
              "concepts/testing",
              "concepts/cli",
              "concepts/tools",
              "concepts/event-listener"
            ]
          },
          {
            "group": "How to Guides",
            "pages": [
              "how-to/create-custom-tools",
              "how-to/sequential-process",
              "how-to/hierarchical-process",
              "how-to/custom-manager-agent",
              "how-to/llm-connections",
              "how-to/customizing-agents",
              "how-to/multimodal-agents",
              "how-to/coding-agents",
              "how-to/force-tool-output-as-result",
              "how-to/human-input-on-execution",
              "how-to/kickoff-async",
              "how-to/kickoff-for-each",
              "how-to/replay-tasks-from-latest-crew-kickoff",
              "how-to/conditional-tasks",
              "how-to/langchain-tools",
              "how-to/llamaindex-tools"
            ]
          },
          {
            "group": "Agent Monitoring & Observability",
            "pages": [
              "how-to/agentops-observability",
              "how-to/arize-phoenix-observability",
              "how-to/langfuse-observability",
              "how-to/langtrace-observability",
              "how-to/mlflow-observability",
              "how-to/openlit-observability",
              "how-to/opik-observability",
              "how-to/portkey-observability",
              "how-to/weave-integration"
            ]
          },
          {
            "group": "Tools",
            "pages": [
              "tools/aimindtool",
              "tools/apifyactorstool",
              "tools/bedrockinvokeagenttool",
              "tools/bedrockkbretriever",
              "tools/bravesearchtool",
              "tools/browserbaseloadtool",
              "tools/codedocssearchtool",
              "tools/codeinterpretertool",
              "tools/composiotool",
              "tools/csvsearchtool",
              "tools/dalletool",
              "tools/directorysearchtool",
              "tools/directoryreadtool",
              "tools/docxsearchtool",
              "tools/exasearchtool",
              "tools/filereadtool",
              "tools/filewritetool",
              "tools/firecrawlcrawlwebsitetool",
              "tools/firecrawlscrapewebsitetool",
              "tools/firecrawlsearchtool",
              "tools/githubsearchtool",
              "tools/hyperbrowserloadtool",
              "tools/linkupsearchtool",
              "tools/llamaindextool",
              "tools/serperdevtool",
              "tools/s3readertool",
              "tools/s3writertool",
              "tools/scrapegraphscrapetool",
              "tools/scrapeelementfromwebsitetool",
              "tools/jsonsearchtool",
              "tools/mdxsearchtool",
              "tools/mysqltool",
              "tools/multiontool",
              "tools/nl2sqltool",
              "tools/patronustools",
              "tools/pdfsearchtool",
              "tools/pgsearchtool",
              "tools/qdrantvectorsearchtool",
              "tools/ragtool",
              "tools/scrapewebsitetool",
              "tools/scrapflyscrapetool",
              "tools/seleniumscrapingtool",
              "tools/snowflakesearchtool",
              "tools/spidertool",
              "tools/txtsearchtool",
              "tools/visiontool",
              "tools/weaviatevectorsearchtool",
              "tools/websitesearchtool",
              "tools/xmlsearchtool",
              "tools/youtubechannelsearchtool",
              "tools/youtubevideosearchtool"
            ]
          },
          {
            "group": "Telemetry",
            "pages": [
              "telemetry"
            ]
          }
        ]
      },
      {
        "tab": "Examples",
        "groups": [
          {
            "group": "Examples",
            "pages": [
              "examples/example"
            ]
          }
        ]
      }
    ],
    "global": {
      "anchors": [
        {
          "anchor": "Community",
          "href": "https://community.crewai.com",
          "icon": "discourse"
        },
        {
          "anchor": "Tutorials",
          "href": "https://www.youtube.com/@crewAIInc",
          "icon": "youtube"
        }
      ]
    }
  },
  "logo": {
    "light": "crew_only_logo.png",
    "dark": "crew_only_logo.png"
  },
  "appearance": {
    "default": "dark",
    "strict": false
  },
  "navbar": {
    "primary": {
      "type": "github",
      "href": "https://github.com/crewAIInc/crewAI"
    }
  },
  "search": {
    "prompt": "Search CrewAI docs"
  },
  "seo": {
    "indexing": "navigable"
  },
  "footer": {
    "socials": {
      "website": "https://crewai.com",
      "x": "https://x.com/crewAIInc",
      "github": "https://github.com/crewAIInc/crewAI",
      "linkedin": "https://www.linkedin.com/company/crewai-inc",
      "youtube": "https://youtube.com/@crewAIInc",
      "reddit": "https://www.reddit.com/r/crewAIInc/"
    }
  }
}
62
docs/examples/example.mdx
Normal file
@@ -0,0 +1,62 @@
---
title: CrewAI Examples
description: A collection of examples that show how to use the CrewAI framework to automate workflows.
icon: rocket-launch
---

<CardGroup cols={3}>
  <Card
    title="Marketing Strategy"
    color="#F3A78B"
    href="https://github.com/crewAIInc/crewAI-examples/tree/main/marketing_strategy"
    icon="bullhorn"
    iconType="solid"
  >
    Automate marketing strategy creation with CrewAI.
  </Card>
  <Card
    title="Surprise Trip"
    color="#F3A78B"
    href="https://github.com/crewAIInc/crewAI-examples/tree/main/surprise_trip"
    icon="plane"
    iconType="duotone"
  >
    Create a surprise trip itinerary with CrewAI.
  </Card>
  <Card
    title="Match Profile to Positions"
    color="#F3A78B"
    href="https://github.com/crewAIInc/crewAI-examples/tree/main/match_profile_to_positions"
    icon="linkedin"
    iconType="duotone"
  >
    Match a profile to job positions with CrewAI.
  </Card>
  <Card
    title="Create Job Posting"
    color="#F3A78B"
    href="https://github.com/crewAIInc/crewAI-examples/tree/main/job-posting"
    icon="newspaper"
    iconType="duotone"
  >
    Create a job posting with CrewAI.
  </Card>
  <Card
    title="Game Generator"
    color="#F3A78B"
    href="https://github.com/crewAIInc/crewAI-examples/tree/main/game-builder-crew"
    icon="gamepad"
    iconType="duotone"
  >
    Create a game with CrewAI.
  </Card>
  <Card
    title="Find Job Candidates"
    color="#F3A78B"
    href="https://github.com/crewAIInc/crewAI-examples/tree/main/recruitment"
    icon="user-group"
    iconType="duotone"
  >
    Find job candidates with CrewAI.
  </Card>
</CardGroup>
18
docs/favicon.svg
Normal file
@@ -0,0 +1,18 @@
<?xml version="1.0" standalone="no"?>
<!DOCTYPE svg PUBLIC "-//W3C//DTD SVG 20010904//EN"
 "http://www.w3.org/TR/2001/REC-SVG-20010904/DTD/svg10.dtd">
<svg version="1.0" xmlns="http://www.w3.org/2000/svg"
 width="48.000000pt" height="48.000000pt" viewBox="0 0 48.000000 48.000000"
 preserveAspectRatio="xMidYMid meet">

<g transform="translate(0.000000,48.000000) scale(0.100000,-0.100000)"
fill="#000000" stroke="none">
<path d="M252 469 c-103 -22 -213 -172 -214 -294 -1 -107 60 -168 168 -167
130 1 276 133 234 211 -13 25 -27 26 -52 4 -31 -27 -32 -6 -4 56 34 77 33 103
-6 146 -38 40 -78 55 -126 44z m103 -40 c44 -39 46 -82 9 -163 -27 -60 -42
-68 -74 -36 -24 24 -26 67 -5 117 22 51 19 60 -11 32 -72 -65 -125 -189 -105
-242 9 -23 16 -27 53 -27 54 0 122 33 154 76 34 44 54 44 54 1 0 -75 -125
-167 -225 -167 -121 0 -181 92 -145 222 17 58 86 153 137 187 63 42 110 42
158 0z"/>
</g>
</svg>
After: 885 B
BIN
docs/flows.png
Normal file
After: 27 KiB
157
docs/guides/advanced/customizing-prompts.mdx
Normal file
@@ -0,0 +1,157 @@
---
title: Customizing Prompts
description: Dive deeper into low-level prompt customization for CrewAI, enabling super custom and complex use cases for different models and languages.
icon: message-pen
---

# Customizing Prompts at a Low Level

## Why Customize Prompts?

Although CrewAI's default prompts work well for many scenarios, low-level customization opens the door to significantly more flexible and powerful agent behavior. Here’s why you might want to take advantage of this deeper control:

1. **Optimize for specific LLMs** – Different models (such as GPT-4, Claude, or Llama) thrive with prompt formats tailored to their unique architectures.
2. **Change the language** – Build agents that operate exclusively in languages beyond English, handling nuances with precision.
3. **Specialize for complex domains** – Adapt prompts for highly specialized industries like healthcare, finance, or legal.
4. **Adjust tone and style** – Make agents more formal, casual, creative, or analytical.
5. **Support super custom use cases** – Utilize advanced prompt structures and formatting to meet intricate, project-specific requirements.

This guide explores how to tap into CrewAI's prompts at a lower level, giving you fine-grained control over how agents think and interact.

## Understanding CrewAI's Prompt System

Under the hood, CrewAI employs a modular prompt system that you can customize extensively:

- **Agent templates** – Govern each agent’s approach to their assigned role.
- **Prompt slices** – Control specialized behaviors such as tasks, tool usage, and output structure.
- **Error handling** – Direct how agents respond to failures, exceptions, or timeouts.
- **Tool-specific prompts** – Define detailed instructions for how tools are invoked or utilized.

Check out the [original prompt templates in CrewAI's repository](https://github.com/crewAIInc/crewAI/blob/main/src/crewai/translations/en.json) to see how these elements are organized. From there, you can override or adapt them as needed to unlock advanced behaviors.

## Best Practices for Managing Prompt Files

When engaging in low-level prompt customization, follow these guidelines to keep things organized and maintainable:

1. **Keep files separate** – Store your customized prompts in dedicated JSON files outside your main codebase.
2. **Version control** – Track changes within your repository, ensuring clear documentation of prompt adjustments over time.
3. **Organize by model or language** – Use naming schemes like `prompts_llama.json` or `prompts_es.json` to quickly identify specialized configurations.
4. **Document changes** – Provide comments or maintain a README detailing the purpose and scope of your customizations.
5. **Minimize alterations** – Only override the specific slices you genuinely need to adjust, keeping default functionality intact for everything else.

## The Simplest Way to Customize Prompts

One straightforward approach is to create a JSON file for the prompts you want to override and then point your Crew at that file:

1. Craft a JSON file with your updated prompt slices.
2. Reference that file via the `prompt_file` parameter in your Crew.

CrewAI then merges your customizations with the defaults, so you don’t have to redefine every prompt. Here’s how:

### Example: Basic Prompt Customization

Create a `custom_prompts.json` file with the prompts you want to modify. Be sure to include all the top-level prompts the file should contain, not just your changes:

```json
{
  "slices": {
    "format": "When responding, follow this structure:\n\nTHOUGHTS: Your step-by-step thinking\nACTION: Any tool you're using\nRESULT: Your final answer or conclusion"
  }
}
```

Then integrate it like so:

```python
from crewai import Agent, Crew, Task, Process

# Create agents and tasks as normal
researcher = Agent(
    role="Research Specialist",
    goal="Find information on quantum computing",
    backstory="You are a quantum physics expert",
    verbose=True
)

research_task = Task(
    description="Research quantum computing applications",
    expected_output="A summary of practical applications",
    agent=researcher
)

# Create a crew with your custom prompt file
crew = Crew(
    agents=[researcher],
    tasks=[research_task],
    prompt_file="path/to/custom_prompts.json",
    verbose=True
)

# Run the crew
result = crew.kickoff()
```

With these few edits, you gain low-level control over how your agents communicate and solve tasks.

## Optimizing for Specific Models

Different models thrive on differently structured prompts. Making deeper adjustments can significantly boost performance by aligning your prompts with a model’s nuances.

### Example: Llama 3.3 Prompting Template

For instance, when working with Meta’s Llama 3.3, deeper customization can follow the recommended prompt template described at:
https://www.llama.com/docs/model-cards-and-prompt-formats/llama3_1/#prompt-template

Here’s an example showing how you might fine-tune an Agent to leverage Llama 3.3 in code:

```python
from crewai import Agent, Crew, Task, Process
from crewai_tools import DirectoryReadTool, FileReadTool

# Define templates for system, user (prompt), and assistant (response) messages
system_template = """<|begin_of_text|><|start_header_id|>system<|end_header_id|>{{ .System }}<|eot_id|>"""
prompt_template = """<|start_header_id|>user<|end_header_id|>{{ .Prompt }}<|eot_id|>"""
response_template = """<|start_header_id|>assistant<|end_header_id|>{{ .Response }}<|eot_id|>"""

# Create an Agent using Llama-specific layouts
principal_engineer = Agent(
    role="Principal Engineer",
    goal="Oversee AI architecture and make high-level decisions",
    backstory="You are the lead engineer responsible for critical AI systems",
    verbose=True,
    llm="groq/llama-3.3-70b-versatile",  # Using a Llama 3.3 model
    system_template=system_template,
    prompt_template=prompt_template,
    response_template=response_template,
    tools=[DirectoryReadTool(), FileReadTool()]
)

# Define a sample task
engineering_task = Task(
    description="Review AI implementation files for potential improvements",
    expected_output="A summary of key findings and recommendations",
    agent=principal_engineer
)

# Create a Crew for the task
llama_crew = Crew(
    agents=[principal_engineer],
    tasks=[engineering_task],
    process=Process.sequential,
    verbose=True
)

# Execute the crew
result = llama_crew.kickoff()
print(result.raw)
```

Through this deeper configuration, you can exercise comprehensive, low-level control over your Llama-based workflows without needing a separate JSON file.

## Conclusion

Low-level prompt customization in CrewAI opens the door to super custom, complex use cases. By establishing well-organized prompt files (or direct inline templates), you can accommodate various models, languages, and specialized domains. This level of flexibility ensures you can craft precisely the AI behavior you need, all while knowing CrewAI still provides reliable defaults when you don’t override them.

<Check>
You now have the foundation for advanced prompt customizations in CrewAI. Whether you’re adapting for model-specific structures or domain-specific constraints, this low-level approach lets you shape agent interactions in highly specialized ways.
</Check>
135
docs/guides/advanced/fingerprinting.mdx
Normal file
@@ -0,0 +1,135 @@
---
title: Fingerprinting
description: Learn how to use CrewAI's fingerprinting system to uniquely identify and track components throughout their lifecycle.
icon: fingerprint
---

# Fingerprinting in CrewAI

## Overview

Fingerprints in CrewAI provide a way to uniquely identify and track components throughout their lifecycle. Each `Agent`, `Crew`, and `Task` automatically receives a unique fingerprint when created, which cannot be manually overridden.

These fingerprints can be used for:
- Auditing and tracking component usage
- Ensuring component identity integrity
- Attaching metadata to components
- Creating a traceable chain of operations
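
The auditing use case, for example, needs nothing beyond the standard library. A minimal sketch using Python's `logging` module (the logger name and message format are illustrative; fingerprint access itself is covered under Basic Usage below):

```python
import logging

from crewai import Agent

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("crew_audit")

agent = Agent(
    role="Data Scientist",
    goal="Analyze data",
    backstory="Expert in data analysis",
)

# The fingerprint gives every audit record a stable component identity
audit_log.info("agent created: fingerprint=%s", agent.fingerprint.uuid_str)
```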
## How Fingerprints Work

A fingerprint is an instance of the `Fingerprint` class from the `crewai.security` module. Each fingerprint contains:

- A UUID string: A unique identifier for the component that is automatically generated and cannot be manually set
- A creation timestamp: When the fingerprint was generated, automatically set and cannot be manually modified
- Metadata: A dictionary of additional information that can be customized

Fingerprints are automatically generated and assigned when a component is created. Each component exposes its fingerprint through a read-only property.

## Basic Usage

### Accessing Fingerprints

```python
from crewai import Agent, Crew, Task

# Create components - fingerprints are automatically generated
agent = Agent(
    role="Data Scientist",
    goal="Analyze data",
    backstory="Expert in data analysis"
)

crew = Crew(
    agents=[agent],
    tasks=[]
)

task = Task(
    description="Analyze customer data",
    expected_output="Insights from data analysis",
    agent=agent
)

# Access the fingerprints
agent_fingerprint = agent.fingerprint
crew_fingerprint = crew.fingerprint
task_fingerprint = task.fingerprint

# Print the UUID strings
print(f"Agent fingerprint: {agent_fingerprint.uuid_str}")
print(f"Crew fingerprint: {crew_fingerprint.uuid_str}")
print(f"Task fingerprint: {task_fingerprint.uuid_str}")
```

### Working with Fingerprint Metadata

You can add metadata to fingerprints for additional context:

```python
# Add metadata to the agent's fingerprint
agent.security_config.fingerprint.metadata = {
    "version": "1.0",
    "department": "Data Science",
    "project": "Customer Analysis"
}

# Access the metadata
print(f"Agent metadata: {agent.fingerprint.metadata}")
```

## Fingerprint Persistence

Fingerprints are designed to persist and remain unchanged throughout a component's lifecycle. If you modify a component, the fingerprint remains the same:

```python
original_fingerprint = agent.fingerprint.uuid_str

# Modify the agent
agent.goal = "New goal for analysis"

# The fingerprint remains unchanged
assert agent.fingerprint.uuid_str == original_fingerprint
```

## Deterministic Fingerprints

While you cannot directly set the UUID and creation timestamp, you can create deterministic fingerprints using the `generate` method with a seed:

```python
from crewai.security import Fingerprint

# Create a deterministic fingerprint using a seed string
deterministic_fingerprint = Fingerprint.generate(seed="my-agent-id")

# The same seed always produces the same fingerprint
same_fingerprint = Fingerprint.generate(seed="my-agent-id")
assert deterministic_fingerprint.uuid_str == same_fingerprint.uuid_str

# You can also set metadata
custom_fingerprint = Fingerprint.generate(
    seed="my-agent-id",
    metadata={"version": "1.0"}
)
```

## Advanced Usage

### Fingerprint Structure

Each fingerprint has the following structure:

```python
from crewai.security import Fingerprint

fingerprint = agent.fingerprint

# UUID string - the unique identifier (auto-generated)
uuid_str = fingerprint.uuid_str  # e.g., "123e4567-e89b-12d3-a456-426614174000"

# Creation timestamp (auto-generated)
created_at = fingerprint.created_at  # A datetime object

# Metadata - for additional information (can be customized)
metadata = fingerprint.metadata  # A dictionary, defaults to {}
```
454
docs/guides/agents/crafting-effective-agents.mdx
Normal file
@@ -0,0 +1,454 @@
---
title: Crafting Effective Agents
description: Learn best practices for designing powerful, specialized AI agents that collaborate effectively to solve complex problems.
icon: robot
---

# Crafting Effective Agents

## The Art and Science of Agent Design

At the heart of CrewAI lies the agent - a specialized AI entity designed to perform specific roles within a collaborative framework. While creating basic agents is simple, crafting truly effective agents that produce exceptional results requires understanding key design principles and best practices.

This guide will help you master the art of agent design, enabling you to create specialized AI personas that collaborate effectively, think critically, and produce high-quality outputs tailored to your specific needs.

### Why Agent Design Matters

The way you define your agents significantly impacts:

1. **Output quality**: Well-designed agents produce more relevant, high-quality results
2. **Collaboration effectiveness**: Agents with complementary skills work together more efficiently
3. **Task performance**: Agents with clear roles and goals execute tasks more effectively
4. **System scalability**: Thoughtfully designed agents can be reused across multiple crews and contexts

Let's explore best practices for creating agents that excel in these dimensions.

## The 80/20 Rule: Focus on Tasks Over Agents

When building effective AI systems, remember this crucial principle: **80% of your effort should go into designing tasks, and only 20% into defining agents**.

Why? Because even the most perfectly defined agent will fail with poorly designed tasks, but well-designed tasks can elevate even a simple agent. This means:

- Spend most of your time writing clear task instructions
- Define detailed inputs and expected outputs
- Add examples and context to guide execution
- Dedicate the remaining time to agent role, goal, and backstory

This doesn't mean agent design isn't important - it absolutely is. But task design is where most execution failures occur, so prioritize accordingly.

## Core Principles of Effective Agent Design

### 1. The Role-Goal-Backstory Framework

The most powerful agents in CrewAI are built on a strong foundation of three key elements:

#### Role: The Agent's Specialized Function

The role defines what the agent does and their area of expertise. When crafting roles:

- **Be specific and specialized**: Instead of "Writer," use "Technical Documentation Specialist" or "Creative Storyteller"
- **Align with real-world professions**: Base roles on recognizable professional archetypes
- **Include domain expertise**: Specify the agent's field of knowledge (e.g., "Financial Analyst specializing in market trends")

**Examples of effective roles:**
```yaml
role: "Senior UX Researcher specializing in user interview analysis"
role: "Full-Stack Software Architect with expertise in distributed systems"
role: "Corporate Communications Director specializing in crisis management"
```

#### Goal: The Agent's Purpose and Motivation

The goal directs the agent's efforts and shapes their decision-making process. Effective goals should:

- **Be clear and outcome-focused**: Define what the agent is trying to achieve
- **Emphasize quality standards**: Include expectations about the quality of work
- **Incorporate success criteria**: Help the agent understand what "good" looks like

**Examples of effective goals:**
```yaml
goal: "Uncover actionable user insights by analyzing interview data and identifying recurring patterns, unmet needs, and improvement opportunities"
goal: "Design robust, scalable system architectures that balance performance, maintainability, and cost-effectiveness"
goal: "Craft clear, empathetic crisis communications that address stakeholder concerns while protecting organizational reputation"
```

#### Backstory: The Agent's Experience and Perspective

The backstory gives depth to the agent, influencing how they approach problems and interact with others. Good backstories:

- **Establish expertise and experience**: Explain how the agent gained their skills
- **Define working style and values**: Describe how the agent approaches their work
- **Create a cohesive persona**: Ensure all elements of the backstory align with the role and goal

**Examples of effective backstories:**
```yaml
backstory: "You have spent 15 years conducting and analyzing user research for top tech companies. You have a talent for reading between the lines and identifying patterns that others miss. You believe that good UX is invisible and that the best insights come from listening to what users don't say as much as what they do say."

backstory: "With 20+ years of experience building distributed systems at scale, you've developed a pragmatic approach to software architecture. You've seen both successful and failed systems and have learned valuable lessons from each. You balance theoretical best practices with practical constraints and always consider the maintenance and operational aspects of your designs."

backstory: "As a seasoned communications professional who has guided multiple organizations through high-profile crises, you understand the importance of transparency, speed, and empathy in crisis response. You have a methodical approach to crafting messages that address concerns while maintaining organizational credibility."
```

### 2. Specialists Over Generalists

Agents perform significantly better when given specialized roles rather than general ones. A highly focused agent delivers more precise, relevant outputs:

**Generic (Less Effective):**
```yaml
role: "Writer"
```

**Specialized (More Effective):**
```yaml
role: "Technical Blog Writer specializing in explaining complex AI concepts to non-technical audiences"
```

**Specialist Benefits:**
- Clearer understanding of expected output
- More consistent performance
- Better alignment with specific tasks
- Improved ability to make domain-specific judgments

### 3. Balancing Specialization and Versatility

Effective agents strike the right balance between specialization (doing one thing extremely well) and versatility (being adaptable to various situations):

- **Specialize in role, versatile in application**: Create agents with specialized skills that can be applied across multiple contexts
- **Avoid overly narrow definitions**: Ensure agents can handle variations within their domain of expertise
- **Consider the collaborative context**: Design agents whose specializations complement the other agents they'll work with

### 4. Setting Appropriate Expertise Levels

The expertise level you assign to your agent shapes how they approach tasks:

- **Novice agents**: Good for straightforward tasks, brainstorming, or initial drafts
- **Intermediate agents**: Suitable for most standard tasks with reliable execution
- **Expert agents**: Best for complex, specialized tasks requiring depth and nuance
- **World-class agents**: Reserved for critical tasks where exceptional quality is needed

Choose the appropriate expertise level based on task complexity and quality requirements. For most collaborative crews, a mix of expertise levels often works best, with higher expertise assigned to core specialized functions.

## Practical Examples: Before and After

Let's look at some examples of agent definitions before and after applying these best practices:

### Example 1: Content Creation Agent

**Before:**
```yaml
role: "Writer"
goal: "Write good content"
backstory: "You are a writer who creates content for websites."
```

**After:**
```yaml
role: "B2B Technology Content Strategist"
goal: "Create compelling, technically accurate content that explains complex topics in accessible language while driving reader engagement and supporting business objectives"
backstory: "You have spent a decade creating content for leading technology companies, specializing in translating technical concepts for business audiences. You excel at research, interviewing subject matter experts, and structuring information for maximum clarity and impact. You believe that the best B2B content educates first and sells second, building trust through genuine expertise rather than marketing hype."
```

### Example 2: Research Agent

**Before:**
```yaml
role: "Researcher"
goal: "Find information"
backstory: "You are good at finding information online."
```

**After:**
```yaml
role: "Academic Research Specialist in Emerging Technologies"
goal: "Discover and synthesize cutting-edge research, identifying key trends, methodologies, and findings while evaluating the quality and reliability of sources"
backstory: "With a background in both computer science and library science, you've mastered the art of digital research. You've worked with research teams at prestigious universities and know how to navigate academic databases, evaluate research quality, and synthesize findings across disciplines. You're methodical in your approach, always cross-referencing information and tracing claims to primary sources before drawing conclusions."
```

## Crafting Effective Tasks for Your Agents

While agent design is important, task design is critical for successful execution. Here are best practices for designing tasks that set your agents up for success:

### The Anatomy of an Effective Task

A well-designed task has two key components that serve different purposes:

#### Task Description: The Process
The description should focus on what to do and how to do it, including:
- Detailed instructions for execution
- Context and background information
- Scope and constraints
- Process steps to follow

#### Expected Output: The Deliverable
The expected output should define what the final result should look like:
- Format specifications (markdown, JSON, etc.)
- Structure requirements
- Quality criteria
- Examples of good outputs (when possible)

### Task Design Best Practices

#### 1. Single Purpose, Single Output
Tasks perform best when focused on one clear objective:

**Bad Example (Too Broad):**
```yaml
task_description: "Research market trends, analyze the data, and create a visualization."
```

**Good Example (Focused):**
```yaml
# Task 1
research_task:
  description: "Research the top 5 market trends in the AI industry for 2024."
  expected_output: "A markdown list of the 5 trends with supporting evidence."

# Task 2
analysis_task:
  description: "Analyze the identified trends to determine potential business impacts."
  expected_output: "A structured analysis with impact ratings (High/Medium/Low)."

# Task 3
visualization_task:
  description: "Create a visual representation of the analyzed trends."
  expected_output: "A description of a chart showing trends and their impact ratings."
```

#### 2. Be Explicit About Inputs and Outputs
Always clearly specify what inputs the task will use and what the output should look like:

**Example:**
```yaml
analysis_task:
  description: >
    Analyze the customer feedback data from the CSV file.
    Focus on identifying recurring themes related to product usability.
    Consider sentiment and frequency when determining importance.
  expected_output: >
    A markdown report with the following sections:
    1. Executive summary (3-5 bullet points)
    2. Top 3 usability issues with supporting data
    3. Recommendations for improvement
```

#### 3. Include Purpose and Context
Explain why the task matters and how it fits into the larger workflow:

**Example:**
```yaml
competitor_analysis_task:
  description: >
    Analyze our three main competitors' pricing strategies.
    This analysis will inform our upcoming pricing model revision.
    Focus on identifying patterns in how they price premium features
    and how they structure their tiered offerings.
```

#### 4. Use Structured Output Tools
For machine-readable outputs, specify the format clearly:

**Example:**
```yaml
data_extraction_task:
  description: "Extract key metrics from the quarterly report."
  expected_output: "JSON object with the following keys: revenue, growth_rate, customer_acquisition_cost, and retention_rate."
```

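When tasks are defined in Python rather than YAML, the same contract can be enforced programmatically. A sketch assuming CrewAI's `output_pydantic` task parameter, with an illustrative agent:

```python
from pydantic import BaseModel

from crewai import Agent, Task


class QuarterlyMetrics(BaseModel):
    revenue: float
    growth_rate: float
    customer_acquisition_cost: float
    retention_rate: float


# Illustrative agent; any agent with access to the report would do
analyst = Agent(
    role="Financial Analyst",
    goal="Extract accurate metrics from financial documents",
    backstory="You are meticulous about numbers and never guess missing values.",
)

data_extraction_task = Task(
    description="Extract key metrics from the quarterly report.",
    expected_output="JSON object with the keys: revenue, growth_rate, customer_acquisition_cost, and retention_rate.",
    agent=analyst,
    output_pydantic=QuarterlyMetrics,  # parse and validate the result into the model
)
```

Pairing a precise `expected_output` with a typed result makes downstream code independent of how the agent happens to phrase its answer.
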
## Common Mistakes to Avoid

Based on lessons learned from real-world implementations, here are the most common pitfalls in agent and task design:

### 1. Unclear Task Instructions

**Problem:** Tasks lack sufficient detail, making it difficult for agents to execute effectively.

**Example of Poor Design:**
```yaml
research_task:
  description: "Research AI trends."
  expected_output: "A report on AI trends."
```

**Improved Version:**
```yaml
research_task:
  description: >
    Research the top emerging AI trends for 2024 with a focus on:
    1. Enterprise adoption patterns
    2. Technical breakthroughs in the past 6 months
    3. Regulatory developments affecting implementation

    For each trend, identify key companies, technologies, and potential business impacts.
  expected_output: >
    A comprehensive markdown report with:
    - Executive summary (5 bullet points)
    - 5-7 major trends with supporting evidence
    - For each trend: definition, examples, and business implications
    - References to authoritative sources
```

### 2. "God Tasks" That Try to Do Too Much

**Problem:** Tasks that combine multiple complex operations into one instruction set.

**Example of Poor Design:**
```yaml
comprehensive_task:
  description: "Research market trends, analyze competitor strategies, create a marketing plan, and design a launch timeline."
```

**Improved Version:**
Break this into sequential, focused tasks:
```yaml
# Task 1: Research
market_research_task:
  description: "Research current market trends in the SaaS project management space."
  expected_output: "A markdown summary of key market trends."

# Task 2: Competitive Analysis
competitor_analysis_task:
  description: "Analyze strategies of the top 3 competitors based on the market research."
  expected_output: "A comparison table of competitor strategies."
  context: [market_research_task]

# Continue with additional focused tasks...
```

### 3. Misaligned Description and Expected Output

**Problem:** The task description asks for one thing while the expected output specifies something different.

**Example of Poor Design:**
```yaml
analysis_task:
  description: "Analyze customer feedback to find areas of improvement."
  expected_output: "A marketing plan for the next quarter."
```

**Improved Version:**
```yaml
analysis_task:
  description: "Analyze customer feedback to identify the top 3 areas for product improvement."
  expected_output: "A report listing the 3 priority improvement areas with supporting customer quotes and data points."
```

### 4. Not Understanding the Process Yourself

**Problem:** Asking agents to execute tasks that you yourself don't fully understand.

**Solution:**
1. Try to perform the task manually first
2. Document your process, decision points, and information sources
3. Use this documentation as the basis for your task description

### 5. Premature Use of Hierarchical Structures

**Problem:** Creating unnecessarily complex agent hierarchies where sequential processes would work better.

**Solution:** Start with sequential processes and only move to hierarchical models when the workflow complexity truly requires it.

### 6. Vague or Generic Agent Definitions

**Problem:** Generic agent definitions lead to generic outputs.

**Example of Poor Design:**
```yaml
agent:
  role: "Business Analyst"
  goal: "Analyze business data"
  backstory: "You are good at business analysis."
```

**Improved Version:**
```yaml
agent:
  role: "SaaS Metrics Specialist focusing on growth-stage startups"
  goal: "Identify actionable insights from business data that can directly impact customer retention and revenue growth"
  backstory: "With 10+ years analyzing SaaS business models, you've developed a keen eye for the metrics that truly matter for sustainable growth. You've helped numerous companies identify the leverage points that turned around their business trajectory. You believe in connecting data to specific, actionable recommendations rather than general observations."
```

## Advanced Agent Design Strategies

### Designing for Collaboration

When creating agents that will work together in a crew, consider:

- **Complementary skills**: Design agents with distinct but complementary abilities
- **Handoff points**: Define clear interfaces for how work passes between agents
- **Constructive tension**: Sometimes, creating agents with slightly different perspectives can lead to better outcomes through productive dialogue

For example, a content creation crew might include:

```yaml
# Research Agent
role: "Research Specialist for technical topics"
goal: "Gather comprehensive, accurate information from authoritative sources"
backstory: "You are a meticulous researcher with a background in library science..."

# Writer Agent
role: "Technical Content Writer"
goal: "Transform research into engaging, clear content that educates and informs"
backstory: "You are an experienced writer who excels at explaining complex concepts..."

# Editor Agent
role: "Content Quality Editor"
goal: "Ensure content is accurate, well-structured, and polished while maintaining consistency"
backstory: "With years of experience in publishing, you have a keen eye for detail..."
```

### Creating Specialized Tool Users

Some agents can be designed specifically to leverage certain tools effectively:

```yaml
role: "Data Analysis Specialist"
goal: "Derive meaningful insights from complex datasets through statistical analysis"
backstory: "With a background in data science, you excel at working with structured and unstructured data..."
tools: [PythonREPLTool, DataVisualizationTool, CSVAnalysisTool]
```

### Tailoring Agents to LLM Capabilities

Different LLMs have different strengths. Design your agents with these capabilities in mind:

```yaml
# For complex reasoning tasks
analyst:
  role: "Data Insights Analyst"
  goal: "..."
  backstory: "..."
  llm: openai/gpt-4o

# For creative content
writer:
  role: "Creative Content Writer"
  goal: "..."
  backstory: "..."
  llm: anthropic/claude-3-opus
```

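Expressed in Python rather than YAML, the same per-agent model split looks like this (the model identifiers are illustrative; use whichever providers you have configured):

```python
from crewai import Agent

# Reasoning-heavy analysis on a stronger model
analyst = Agent(
    role="Data Insights Analyst",
    goal="Surface actionable, data-backed insights from complex datasets",
    backstory="You are a rigorous analyst who grounds every claim in evidence.",
    llm="openai/gpt-4o",
)

# Creative drafting on a model that excels at prose
writer = Agent(
    role="Creative Content Writer",
    goal="Turn insights into engaging, audience-appropriate narratives",
    backstory="You are a versatile writer with a flair for storytelling.",
    llm="anthropic/claude-3-opus",
)
```
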
## Testing and Iterating on Agent Design

Agent design is often an iterative process. Here's a practical approach:

1. **Start with a prototype**: Create an initial agent definition
2. **Test with sample tasks**: Evaluate performance on representative tasks
3. **Analyze outputs**: Identify strengths and weaknesses
4. **Refine the definition**: Adjust role, goal, and backstory based on observations
5. **Test in collaboration**: Evaluate how the agent performs in a crew setting

## Conclusion

Crafting effective agents is both an art and a science. By carefully defining roles, goals, and backstories that align with your specific needs, and combining them with well-designed tasks, you can create specialized AI collaborators that produce exceptional results.

Remember that agent and task design is an iterative process. Start with these best practices, observe your agents in action, and refine your approach based on what you learn. And always keep in mind the 80/20 rule - focus most of your effort on creating clear, focused tasks to get the best results from your agents.

<Check>
Congratulations! You now understand the principles and practices of effective agent design. Apply these techniques to create powerful, specialized agents that work together seamlessly to accomplish complex tasks.
</Check>

## Next Steps

- Experiment with different agent configurations for your specific use case
- Learn about [building your first crew](/guides/crews/first-crew) to see how agents work together
- Explore [CrewAI Flows](/guides/flows/first-flow) for more advanced orchestration
505
docs/guides/concepts/evaluating-use-cases.mdx
Normal file
@@ -0,0 +1,505 @@
|
||||
---
|
||||
title: Evaluating Use Cases for CrewAI
|
||||
description: Learn how to assess your AI application needs and choose the right approach between Crews and Flows based on complexity and precision requirements.
|
||||
icon: scale-balanced
|
||||
---
|
||||
|
||||
# Evaluating Use Cases for CrewAI
|
||||
|
||||
## Understanding the Decision Framework
|
||||
|
||||
When building AI applications with CrewAI, one of the most important decisions you'll make is choosing the right approach for your specific use case. Should you use a Crew? A Flow? A combination of both? This guide will help you evaluate your requirements and make informed architectural decisions.
|
||||
|
||||
At the heart of this decision is understanding the relationship between **complexity** and **precision** in your application:
|
||||
|
||||
<Frame caption="Complexity vs. Precision Matrix for CrewAI Applications">
|
||||
<img src="../..//complexity_precision.png" alt="Complexity vs. Precision Matrix" />
|
||||
</Frame>
|
||||
|
||||
This matrix helps visualize how different approaches align with varying requirements for complexity and precision. Let's explore what each quadrant means and how it guides your architectural choices.
|
||||
|
||||
## The Complexity-Precision Matrix Explained
|
||||
|
||||
### What is Complexity?
|
||||
|
||||
In the context of CrewAI applications, **complexity** refers to:
|
||||
|
||||
- The number of distinct steps or operations required
|
||||
- The diversity of tasks that need to be performed
|
||||
- The interdependencies between different components
|
||||
- The need for conditional logic and branching
|
||||
- The sophistication of the overall workflow
|
||||
|
||||
### What is Precision?
|
||||
|
||||
**Precision** in this context refers to:
|
||||
|
||||
- The accuracy required in the final output
|
||||
- The need for structured, predictable results
|
||||
- The importance of reproducibility
|
||||
- The level of control needed over each step
|
||||
- The tolerance for variation in outputs
|
||||
|
||||
### The Four Quadrants
|
||||
|
||||
#### 1. Low Complexity, Low Precision
|
||||
|
||||
**Characteristics:**
|
||||
- Simple, straightforward tasks
|
||||
- Tolerance for some variation in outputs
|
||||
- Limited number of steps
|
||||
- Creative or exploratory applications
|
||||
|
||||
**Recommended Approach:** Simple Crews with minimal agents
|
||||
|
||||
**Example Use Cases:**
|
||||
- Basic content generation
|
||||
- Idea brainstorming
|
||||
- Simple summarization tasks
|
||||
- Creative writing assistance
|
||||
|
||||
#### 2. Low Complexity, High Precision
|
||||
|
||||
**Characteristics:**
|
||||
- Simple workflows that require exact, structured outputs
|
||||
- Need for reproducible results
|
||||
- Limited steps but high accuracy requirements
|
||||
- Often involves data processing or transformation
|
||||
|
||||
**Recommended Approach:** Flows with direct LLM calls or simple Crews with structured outputs
|
||||
|
||||
**Example Use Cases:**
|
||||
- Data extraction and transformation
|
||||
- Form filling and validation
|
||||
- Structured content generation (JSON, XML)
|
||||
- Simple classification tasks
|
||||
|
||||
#### 3. High Complexity, Low Precision
|
||||
|
||||
**Characteristics:**
|
||||
- Multi-stage processes with many steps
|
||||
- Creative or exploratory outputs
|
||||
- Complex interactions between components
|
||||
- Tolerance for variation in final results
|
||||
|
||||
**Recommended Approach:** Complex Crews with multiple specialized agents
|
||||
|
||||
**Example Use Cases:**
|
||||
- Research and analysis
|
||||
- Content creation pipelines
|
||||
- Exploratory data analysis
|
||||
- Creative problem-solving
|
||||
|
||||
#### 4. High Complexity, High Precision
|
||||
|
||||
**Characteristics:**
|
||||
- Complex workflows requiring structured outputs
|
||||
- Multiple interdependent steps with strict accuracy requirements
|
||||
- Need for both sophisticated processing and precise results
|
||||
- Often mission-critical applications
|
||||
|
||||
**Recommended Approach:** Flows orchestrating multiple Crews with validation steps
|
||||
|
||||
**Example Use Cases:**
|
||||
- Enterprise decision support systems
|
||||
- Complex data processing pipelines
|
||||
- Multi-stage document processing
|
||||
- Regulated industry applications
|
||||
|
||||
## Choosing Between Crews and Flows
|
||||
|
||||
### When to Choose Crews
|
||||
|
||||
Crews are ideal when:
|
||||
|
||||
1. **You need collaborative intelligence** - Multiple agents with different specializations need to work together
|
||||
2. **The problem requires emergent thinking** - The solution benefits from different perspectives and approaches
|
||||
3. **The task is primarily creative or analytical** - The work involves research, content creation, or analysis
|
||||
4. **You value adaptability over strict structure** - The workflow can benefit from agent autonomy
|
||||
5. **The output format can be somewhat flexible** - Some variation in output structure is acceptable
|
||||
|
||||
```python
|
||||
# Example: Research Crew for market analysis
|
||||
from crewai import Agent, Crew, Process, Task
|
||||
|
||||
# Create specialized agents
|
||||
researcher = Agent(
|
||||
role="Market Research Specialist",
|
||||
goal="Find comprehensive market data on emerging technologies",
|
||||
backstory="You are an expert at discovering market trends and gathering data."
|
||||
)
|
||||
|
||||
analyst = Agent(
|
||||
role="Market Analyst",
|
||||
goal="Analyze market data and identify key opportunities",
|
||||
backstory="You excel at interpreting market data and spotting valuable insights."
|
||||
)
|
||||
|
||||
# Define their tasks
|
||||
research_task = Task(
|
||||
description="Research the current market landscape for AI-powered healthcare solutions",
|
||||
expected_output="Comprehensive market data including key players, market size, and growth trends",
|
||||
agent=researcher
|
||||
)
|
||||
|
||||
analysis_task = Task(
|
||||
description="Analyze the market data and identify the top 3 investment opportunities",
|
||||
expected_output="Analysis report with 3 recommended investment opportunities and rationale",
|
||||
agent=analyst,
|
||||
context=[research_task]
|
||||
)
|
||||
|
||||
# Create the crew
|
||||
market_analysis_crew = Crew(
|
||||
agents=[researcher, analyst],
|
||||
tasks=[research_task, analysis_task],
|
||||
process=Process.sequential,
|
||||
verbose=True
|
||||
)
|
||||
|
||||
# Run the crew
|
||||
result = market_analysis_crew.kickoff()
|
||||
```
|
||||
|
||||
### When to Choose Flows
|
||||
|
||||
Flows are ideal when:
|
||||
|
||||
1. **You need precise control over execution** - The workflow requires exact sequencing and state management
|
||||
2. **The application has complex state requirements** - You need to maintain and transform state across multiple steps
|
||||
3. **You need structured, predictable outputs** - The application requires consistent, formatted results
|
||||
4. **The workflow involves conditional logic** - Different paths need to be taken based on intermediate results
|
||||
5. **You need to combine AI with procedural code** - The solution requires both AI capabilities and traditional programming
|
||||
|
||||
```python
|
||||
# Example: Customer Support Flow with structured processing
|
||||
from crewai.flow.flow import Flow, listen, router, start
|
||||
from pydantic import BaseModel
|
||||
from typing import List, Dict
|
||||
|
||||
# Define structured state
|
||||
class SupportTicketState(BaseModel):
|
||||
ticket_id: str = ""
|
||||
customer_name: str = ""
|
||||
issue_description: str = ""
|
||||
category: str = ""
|
||||
priority: str = "medium"
|
||||
resolution: str = ""
|
||||
satisfaction_score: int = 0
|
||||
|
||||
class CustomerSupportFlow(Flow[SupportTicketState]):
|
||||
@start()
|
||||
def receive_ticket(self):
|
||||
# In a real app, this might come from an API
|
||||
self.state.ticket_id = "TKT-12345"
|
||||
self.state.customer_name = "Alex Johnson"
|
||||
self.state.issue_description = "Unable to access premium features after payment"
|
||||
return "Ticket received"
|
||||
|
||||
@listen(receive_ticket)
|
||||
def categorize_ticket(self, _):
|
||||
# Use a direct LLM call for categorization
|
||||
from crewai import LLM
|
||||
llm = LLM(model="openai/gpt-4o-mini")
|
||||
|
||||
prompt = f"""
|
||||
Categorize the following customer support issue into one of these categories:
|
||||
- Billing
|
||||
- Account Access
|
||||
- Technical Issue
|
||||
- Feature Request
|
||||
- Other
|
||||
|
||||
Issue: {self.state.issue_description}
|
||||
|
||||
Return only the category name.
|
||||
"""
|
||||
|
||||
self.state.category = llm.call(prompt).strip()
|
||||
return self.state.category
|
||||
|
||||
@router(categorize_ticket)
|
||||
def route_by_category(self, category):
|
||||
# Route to different handlers based on category
|
||||
return category.lower().replace(" ", "_")
|
||||
|
||||
@listen("billing")
|
||||
def handle_billing_issue(self):
|
||||
# Handle billing-specific logic
|
||||
self.state.priority = "high"
|
||||
# More billing-specific processing...
|
||||
return "Billing issue handled"
|
||||
|
||||
@listen("account_access")
|
||||
def handle_access_issue(self):
|
||||
# Handle access-specific logic
|
||||
self.state.priority = "high"
|
||||
# More access-specific processing...
|
||||
return "Access issue handled"
|
||||
|
||||
# Additional category handlers...
|
||||
|
||||
@listen("billing", "account_access", "technical_issue", "feature_request", "other")
|
||||
def resolve_ticket(self, resolution_info):
|
||||
# Final resolution step
|
||||
self.state.resolution = f"Issue resolved: {resolution_info}"
|
||||
return self.state.resolution
|
||||
|
||||
# Run the flow
|
||||
support_flow = CustomerSupportFlow()
|
||||
result = support_flow.kickoff()
|
||||
```
|
||||
|
||||
### When to Combine Crews and Flows
|
||||
|
||||
The most sophisticated applications often benefit from combining Crews and Flows:
|
||||
|
||||
1. **Complex multi-stage processes** - Use Flows to orchestrate the overall process and Crews for complex subtasks
|
||||
2. **Applications requiring both creativity and structure** - Use Crews for creative tasks and Flows for structured processing
|
||||
3. **Enterprise-grade AI applications** - Use Flows to manage state and process flow while leveraging Crews for specialized work
|
||||
|
||||
```python
|
||||
# Example: Content Production Pipeline combining Crews and Flows
import json

from crewai import Agent, Crew, LLM, Process, Task
from crewai.flow.flow import Flow, listen, start
from pydantic import BaseModel
from typing import Dict

class ContentState(BaseModel):
    topic: str = ""
    target_audience: str = ""
    content_type: str = ""
    outline: Dict = {}
    draft_content: str = ""
    final_content: str = ""
    seo_score: int = 0

class ContentProductionFlow(Flow[ContentState]):
    @start()
    def initialize_project(self):
        # Set initial parameters
        self.state.topic = "Sustainable Investing"
        self.state.target_audience = "Millennial Investors"
        self.state.content_type = "Blog Post"
        return "Project initialized"

    @listen(initialize_project)
    def create_outline(self, _):
        # Use a research crew to create an outline
        researcher = Agent(
            role="Content Researcher",
            goal=f"Research {self.state.topic} for {self.state.target_audience}",
            backstory="You are an expert researcher with deep knowledge of content creation."
        )

        outliner = Agent(
            role="Content Strategist",
            goal=f"Create an engaging outline for a {self.state.content_type}",
            backstory="You excel at structuring content for maximum engagement."
        )

        research_task = Task(
            description=f"Research {self.state.topic} focusing on what would interest {self.state.target_audience}",
            expected_output="Comprehensive research notes with key points and statistics",
            agent=researcher
        )

        outline_task = Task(
            description=f"Create an outline for a {self.state.content_type} about {self.state.topic}",
            expected_output="Detailed content outline with sections and key points",
            agent=outliner,
            context=[research_task]
        )

        outline_crew = Crew(
            agents=[researcher, outliner],
            tasks=[research_task, outline_task],
            process=Process.sequential,
            verbose=True
        )

        # Run the crew and store the result
        result = outline_crew.kickoff()

        # Parse the outline (in a real app, you might use a more robust parsing approach)
        try:
            self.state.outline = json.loads(result.raw)
        except json.JSONDecodeError:
            # Fallback if the result is not valid JSON
            self.state.outline = {"sections": result.raw}

        return "Outline created"

    @listen(create_outline)
    def write_content(self, _):
        # Use a writing crew to create the content
        writer = Agent(
            role="Content Writer",
            goal=f"Write engaging content for {self.state.target_audience}",
            backstory="You are a skilled writer who creates compelling content."
        )

        editor = Agent(
            role="Content Editor",
            goal="Ensure content is polished, accurate, and engaging",
            backstory="You have a keen eye for detail and a talent for improving content."
        )

        writing_task = Task(
            description=f"Write a {self.state.content_type} about {self.state.topic} following this outline: {self.state.outline}",
            expected_output="Complete draft content in markdown format",
            agent=writer
        )

        editing_task = Task(
            description="Edit and improve the draft content for clarity, engagement, and accuracy",
            expected_output="Polished final content in markdown format",
            agent=editor,
            context=[writing_task]
        )

        writing_crew = Crew(
            agents=[writer, editor],
            tasks=[writing_task, editing_task],
            process=Process.sequential,
            verbose=True
        )

        # Run the crew and store the result
        result = writing_crew.kickoff()
        self.state.final_content = result.raw

        return "Content created"

    @listen(write_content)
    def optimize_for_seo(self, _):
        # Use a direct LLM call for SEO optimization
        llm = LLM(model="openai/gpt-4o-mini")

        prompt = f"""
        Analyze this content for SEO effectiveness for the keyword "{self.state.topic}".
        Rate it on a scale of 1-100 and provide 3 specific recommendations for improvement.

        Content: {self.state.final_content[:1000]}... (truncated for brevity)

        Format your response as JSON with the following structure:
        {{
            "score": 85,
            "recommendations": [
                "Recommendation 1",
                "Recommendation 2",
                "Recommendation 3"
            ]
        }}
        """

        seo_analysis = llm.call(prompt)

        # Parse the SEO analysis
        try:
            analysis = json.loads(seo_analysis)
            self.state.seo_score = analysis.get("score", 0)
            return analysis
        except json.JSONDecodeError:
            self.state.seo_score = 50
            return {"score": 50, "recommendations": ["Unable to parse SEO analysis"]}

# Run the flow
content_flow = ContentProductionFlow()
result = content_flow.kickoff()
```

## Practical Evaluation Framework

To determine the right approach for your specific use case, follow this step-by-step evaluation framework:

### Step 1: Assess Complexity

Rate your application's complexity on a scale of 1-10 by considering:

1. **Number of steps**: How many distinct operations are required?
   - 1-3 steps: Low complexity (1-3)
   - 4-7 steps: Medium complexity (4-7)
   - 8+ steps: High complexity (8-10)

2. **Interdependencies**: How interconnected are the different parts?
   - Few dependencies: Low complexity (1-3)
   - Some dependencies: Medium complexity (4-7)
   - Many complex dependencies: High complexity (8-10)

3. **Conditional logic**: How much branching and decision-making is needed?
   - Linear process: Low complexity (1-3)
   - Some branching: Medium complexity (4-7)
   - Complex decision trees: High complexity (8-10)

4. **Domain knowledge**: How specialized is the knowledge required?
   - General knowledge: Low complexity (1-3)
   - Some specialized knowledge: Medium complexity (4-7)
   - Deep expertise in multiple domains: High complexity (8-10)

Calculate your average score to determine overall complexity.

### Step 2: Assess Precision Requirements

Rate your precision requirements on a scale of 1-10 by considering:

1. **Output structure**: How structured must the output be?
   - Free-form text: Low precision (1-3)
   - Semi-structured: Medium precision (4-7)
   - Strictly formatted (JSON, XML): High precision (8-10)

2. **Accuracy needs**: How important is factual accuracy?
   - Creative content: Low precision (1-3)
   - Informational content: Medium precision (4-7)
   - Critical information: High precision (8-10)

3. **Reproducibility**: How consistent must results be across runs?
   - Variation acceptable: Low precision (1-3)
   - Some consistency needed: Medium precision (4-7)
   - Exact reproducibility required: High precision (8-10)

4. **Error tolerance**: What is the impact of errors?
   - Low impact: Low precision (1-3)
   - Moderate impact: Medium precision (4-7)
   - High impact: High precision (8-10)

Calculate your average score to determine overall precision requirements.

### Step 3: Map to the Matrix

Plot your complexity and precision scores on the matrix (the sketch after this list shows one way to automate the lookup):

- **Low Complexity (1-4), Low Precision (1-4)**: Simple Crews
- **Low Complexity (1-4), High Precision (5-10)**: Flows with direct LLM calls
- **High Complexity (5-10), Low Precision (1-4)**: Complex Crews
- **High Complexity (5-10), High Precision (5-10)**: Flows orchestrating Crews
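
To make the scoring concrete, here is a minimal Python sketch of the exercise above; the function name and the sample ratings are illustrative assumptions, not part of CrewAI itself:

```python
# A minimal sketch: average the four 1-10 ratings per axis, then map the
# pair of averages onto the quadrants of the matrix above.
from statistics import mean

def recommend_approach(complexity_scores: list[int], precision_scores: list[int]) -> str:
    complexity = mean(complexity_scores)
    precision = mean(precision_scores)
    if complexity <= 4 and precision <= 4:
        return "Simple Crews"
    if complexity <= 4:
        return "Flows with direct LLM calls"
    if precision <= 4:
        return "Complex Crews"
    return "Flows orchestrating Crews"

# Example: a multi-step pipeline with strict JSON output requirements.
print(recommend_approach([5, 5, 5, 2], [9, 6, 5, 5]))  # Flows orchestrating Crews
```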

### Step 4: Consider Additional Factors

Beyond complexity and precision, consider:

1. **Development time**: Crews are often faster to prototype
2. **Maintenance needs**: Flows provide better long-term maintainability
3. **Team expertise**: Consider your team's familiarity with different approaches
4. **Scalability requirements**: Flows typically scale better for complex applications
5. **Integration needs**: Consider how the solution will integrate with existing systems

## Conclusion

Choosing between Crews and Flows, or combining them, is a critical architectural decision that impacts the effectiveness, maintainability, and scalability of your CrewAI application. By evaluating your use case along the dimensions of complexity and precision, you can make informed decisions that align with your specific requirements.

Remember that the best approach often evolves as your application matures. Start with the simplest solution that meets your needs, and be prepared to refine your architecture as you gain experience and your requirements become clearer.

<Check>
You now have a framework for evaluating CrewAI use cases and choosing the right approach based on complexity and precision requirements. This will help you build more effective, maintainable, and scalable AI applications.
</Check>

## Next Steps

- Learn more about [crafting effective agents](/guides/agents/crafting-effective-agents)
- Explore [building your first crew](/guides/crews/first-crew)
- Dive into [mastering flow state management](/guides/flows/mastering-flow-state)
- Check out the [core concepts](/concepts/agents) for deeper understanding

docs/guides/crews/first-crew.mdx
@@ -0,0 +1,390 @@

---
title: Build Your First Crew
description: Step-by-step tutorial to create a collaborative AI team that works together to solve complex problems.
icon: users-gear
---

# Build Your First Crew

## Unleashing the Power of Collaborative AI

Imagine having a team of specialized AI agents working together seamlessly to solve complex problems, each contributing their unique skills to achieve a common goal. This is the power of CrewAI - a framework that enables you to create collaborative AI systems that can accomplish tasks far beyond what a single AI could achieve alone.

In this guide, we'll walk through creating a research crew that will help us research and analyze a topic, then create a comprehensive report. This practical example demonstrates how AI agents can collaborate to accomplish complex tasks, but it's just the beginning of what's possible with CrewAI.

### What You'll Build and Learn

By the end of this guide, you'll have:

1. **Created a specialized AI research team** with distinct roles and responsibilities
2. **Orchestrated collaboration** between multiple AI agents
3. **Automated a complex workflow** that involves gathering information, analysis, and report generation
4. **Built foundational skills** that you can apply to more ambitious projects

While we're building a simple research crew in this guide, the same patterns and techniques can be applied to create much more sophisticated teams for tasks like:

- Multi-stage content creation with specialized writers, editors, and fact-checkers
- Complex customer service systems with tiered support agents
- Autonomous business analysts that gather data, create visualizations, and generate insights
- Product development teams that ideate, design, and plan implementation

Let's get started building your first crew!

### Prerequisites

Before starting, make sure you have:

1. Installed CrewAI following the [installation guide](/installation)
2. Set up your OpenAI API key in your environment variables
3. Basic understanding of Python

## Step 1: Create a New CrewAI Project

First, let's create a new CrewAI project using the CLI. This command will set up a complete project structure with all the necessary files, allowing you to focus on defining your agents and their tasks rather than setting up boilerplate code.

```bash
crewai create crew research_crew
cd research_crew
```

This will generate a project with the basic structure needed for your crew. The CLI automatically creates:

- A project directory with the necessary files
- Configuration files for agents and tasks
- A basic crew implementation
- A main script to run the crew

<Frame caption="CrewAI Framework Overview">
  <img src="../../crews.png" alt="CrewAI Framework Overview" />
</Frame>

## Step 2: Explore the Project Structure

Let's take a moment to understand the project structure created by the CLI. CrewAI follows best practices for Python projects, making it easy to maintain and extend your code as your crews become more complex.

```
research_crew/
├── .gitignore
├── pyproject.toml
├── README.md
├── .env
└── src/
    └── research_crew/
        ├── __init__.py
        ├── main.py
        ├── crew.py
        ├── tools/
        │   ├── custom_tool.py
        │   └── __init__.py
        └── config/
            ├── agents.yaml
            └── tasks.yaml
```

The separation of configuration files (in YAML) from implementation code (in Python) makes it easy to modify your crew's behavior without changing the underlying code.

## Step 3: Configure Your Agents

Now comes the fun part - defining your AI agents! In CrewAI, agents are specialized entities with specific roles, goals, and backstories that shape their behavior. Think of them as characters in a play, each with their own personality and purpose.

For our research crew, we'll create two agents:
1. A **researcher** who excels at finding and organizing information
2. An **analyst** who can interpret research findings and create insightful reports

Let's modify the `agents.yaml` file to define these specialized agents:

```yaml
# src/research_crew/config/agents.yaml
researcher:
  role: >
    Senior Research Specialist for {topic}
  goal: >
    Find comprehensive and accurate information about {topic}
    with a focus on recent developments and key insights
  backstory: >
    You are an experienced research specialist with a talent for
    finding relevant information from various sources. You excel at
    organizing information in a clear and structured manner, making
    complex topics accessible to others.
  llm: openai/gpt-4o-mini

analyst:
  role: >
    Data Analyst and Report Writer for {topic}
  goal: >
    Analyze research findings and create a comprehensive, well-structured
    report that presents insights in a clear and engaging way
  backstory: >
    You are a skilled analyst with a background in data interpretation
    and technical writing. You have a talent for identifying patterns
    and extracting meaningful insights from research data, then
    communicating those insights effectively through well-crafted reports.
  llm: openai/gpt-4o-mini
```

Notice how each agent has a distinct role, goal, and backstory. These elements aren't just descriptive - they actively shape how the agent approaches its tasks. By crafting these carefully, you can create agents with specialized skills and perspectives that complement each other.

## Step 4: Define Your Tasks

With our agents defined, we now need to give them specific tasks. Tasks in CrewAI represent the concrete work agents perform, with detailed instructions and expected outputs.

For our research crew, we'll define two main tasks:
1. A **research task** for gathering comprehensive information
2. An **analysis task** for creating an insightful report

Let's modify the `tasks.yaml` file:

```yaml
# src/research_crew/config/tasks.yaml
research_task:
  description: >
    Conduct thorough research on {topic}. Focus on:
    1. Key concepts and definitions
    2. Historical development and recent trends
    3. Major challenges and opportunities
    4. Notable applications or case studies
    5. Future outlook and potential developments

    Make sure to organize your findings in a structured format with clear sections.
  expected_output: >
    A comprehensive research document with well-organized sections covering
    all the requested aspects of {topic}. Include specific facts, figures,
    and examples where relevant.
  agent: researcher

analysis_task:
  description: >
    Analyze the research findings and create a comprehensive report on {topic}.
    Your report should:
    1. Begin with an executive summary
    2. Include all key information from the research
    3. Provide insightful analysis of trends and patterns
    4. Offer recommendations or future considerations
    5. Be formatted in a professional, easy-to-read style with clear headings
  expected_output: >
    A polished, professional report on {topic} that presents the research
    findings with added analysis and insights. The report should be well-structured
    with an executive summary, main sections, and conclusion.
  agent: analyst
  context:
    - research_task
  output_file: output/report.md
```

Note the `context` field in the analysis task - this is a powerful feature that allows the analyst to access the output of the research task. This creates a workflow where information flows naturally between agents, just as it would in a human team.

## Step 5: Configure Your Crew

Now it's time to bring everything together by configuring our crew. The crew is the container that orchestrates how agents work together to complete tasks.

Let's modify the `crew.py` file:

```python
# src/research_crew/crew.py
from crewai import Agent, Crew, Process, Task
from crewai.project import CrewBase, agent, crew, task
from crewai_tools import SerperDevTool

@CrewBase
class ResearchCrew():
    """Research crew for comprehensive topic analysis and reporting"""

    @agent
    def researcher(self) -> Agent:
        return Agent(
            config=self.agents_config['researcher'],
            verbose=True,
            tools=[SerperDevTool()]
        )

    @agent
    def analyst(self) -> Agent:
        return Agent(
            config=self.agents_config['analyst'],
            verbose=True
        )

    @task
    def research_task(self) -> Task:
        return Task(
            config=self.tasks_config['research_task']
        )

    @task
    def analysis_task(self) -> Task:
        return Task(
            config=self.tasks_config['analysis_task'],
            output_file='output/report.md'
        )

    @crew
    def crew(self) -> Crew:
        """Creates the research crew"""
        return Crew(
            agents=self.agents,
            tasks=self.tasks,
            process=Process.sequential,
            verbose=True,
        )
```

In this code, we're:
1. Creating the researcher agent and equipping it with the SerperDevTool to search the web
2. Creating the analyst agent
3. Setting up the research and analysis tasks
4. Configuring the crew to run tasks sequentially (the analyst will wait for the researcher to finish)

This is where the magic happens - with just a few lines of code, we've defined a collaborative AI system where specialized agents work together in a coordinated process.

## Step 6: Set Up Your Main Script

Now, let's set up the main script that will run our crew. This is where we provide the specific topic we want our crew to research.

```python
#!/usr/bin/env python
# src/research_crew/main.py
import os
from research_crew.crew import ResearchCrew

# Create output directory if it doesn't exist
os.makedirs('output', exist_ok=True)

def run():
    """
    Run the research crew.
    """
    inputs = {
        'topic': 'Artificial Intelligence in Healthcare'
    }

    # Create and run the crew
    result = ResearchCrew().crew().kickoff(inputs=inputs)

    # Print the result
    print("\n\n=== FINAL REPORT ===\n\n")
    print(result.raw)

    print("\n\nReport has been saved to output/report.md")

if __name__ == "__main__":
    run()
```

This script prepares the environment, specifies our research topic, and kicks off the crew's work. The power of CrewAI is evident in how simple this code is - all the complexity of managing multiple AI agents is handled by the framework.

## Step 7: Set Up Your Environment Variables

Create a `.env` file in your project root with your API keys:

```
OPENAI_API_KEY=your_openai_api_key
SERPER_API_KEY=your_serper_api_key
```

You can get a Serper API key from [Serper.dev](https://serper.dev/).
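
If you want to confirm the keys are actually visible to Python before running the crew, a quick check like the following works; this snippet is an optional sanity-check sketch, not part of the generated project:

```python
# Optional sanity check: fail fast if a required key is missing from the
# environment (CrewAI reads these variables at runtime).
import os

for key in ("OPENAI_API_KEY", "SERPER_API_KEY"):
    if not os.getenv(key):
        raise RuntimeError(f"{key} is not set - add it to your .env file")
```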

## Step 8: Install Dependencies

Install the required dependencies using the CrewAI CLI:

```bash
crewai install
```

This command will:
1. Read the dependencies from your project configuration
2. Create a virtual environment if needed
3. Install all required packages

## Step 9: Run Your Crew

Now for the exciting moment - it's time to run your crew and see AI collaboration in action!

```bash
crewai run
```

When you run this command, you'll see your crew spring to life. The researcher will gather information about the specified topic, and the analyst will then create a comprehensive report based on that research. You'll see the agents' thought processes, actions, and outputs in real time as they work together to complete their tasks.

## Step 10: Review the Output

Once the crew completes its work, you'll find the final report in the `output/report.md` file. The report will include:

1. An executive summary
2. Detailed information about the topic
3. Analysis and insights
4. Recommendations or future considerations

Take a moment to appreciate what you've accomplished - you've created a system where multiple AI agents collaborated on a complex task, each contributing their specialized skills to produce a result that's greater than what any single agent could achieve alone.

## Exploring Other CLI Commands

CrewAI offers several other useful CLI commands for working with crews:

```bash
# View all available commands
crewai --help

# Run the crew
crewai run

# Test the crew
crewai test

# Reset crew memories
crewai reset-memories

# Replay from a specific task
crewai replay -t <task_id>
```

## The Art of the Possible: Beyond Your First Crew

What you've built in this guide is just the beginning. The skills and patterns you've learned can be applied to create increasingly sophisticated AI systems. Here are some ways you could extend this basic research crew:

### Expanding Your Crew

You could add more specialized agents to your crew:
- A **fact-checker** to verify research findings
- A **data visualizer** to create charts and graphs
- A **domain expert** with specialized knowledge in a particular area
- A **critic** to identify weaknesses in the analysis

### Adding Tools and Capabilities

You could enhance your agents with additional tools:
- Web browsing tools for real-time research
- CSV/database tools for data analysis
- Code execution tools for data processing
- API connections to external services

### Creating More Complex Workflows

You could implement more sophisticated processes:
- Hierarchical processes where manager agents delegate to worker agents
- Iterative processes with feedback loops for refinement
- Parallel processes where multiple agents work simultaneously
- Dynamic processes that adapt based on intermediate results

### Applying to Different Domains

The same patterns can be applied to create crews for:
- **Content creation**: Writers, editors, fact-checkers, and designers working together
- **Customer service**: Triage agents, specialists, and quality control working together
- **Product development**: Researchers, designers, and planners collaborating
- **Data analysis**: Data collectors, analysts, and visualization specialists

## Next Steps

Now that you've built your first crew, you can:

1. Experiment with different agent configurations and personalities
2. Try more complex task structures and workflows
3. Implement custom tools to give your agents new capabilities
4. Apply your crew to different topics or problem domains
5. Explore [CrewAI Flows](/guides/flows/first-flow) for more advanced workflows with procedural programming

<Check>
Congratulations! You've successfully built your first CrewAI crew that can research and analyze any topic you provide. This foundational experience has equipped you with the skills to create increasingly sophisticated AI systems that can tackle complex, multi-stage problems through collaborative intelligence.
</Check>

docs/guides/flows/first-flow.mdx
@@ -0,0 +1,604 @@

---
title: Build Your First Flow
description: Learn how to create structured, event-driven workflows with precise control over execution.
icon: diagram-project
---

# Build Your First Flow

## Taking Control of AI Workflows with Flows

CrewAI Flows represent the next level in AI orchestration - combining the collaborative power of AI agent crews with the precision and flexibility of procedural programming. While crews excel at agent collaboration, flows give you fine-grained control over exactly how and when different components of your AI system interact.

In this guide, we'll walk through creating a powerful CrewAI Flow that generates a comprehensive learning guide on any topic. This tutorial will demonstrate how Flows provide structured, event-driven control over your AI workflows by combining regular code, direct LLM calls, and crew-based processing.

### What Makes Flows Powerful

Flows enable you to:

1. **Combine different AI interaction patterns** - Use crews for complex collaborative tasks, direct LLM calls for simpler operations, and regular code for procedural logic
2. **Build event-driven systems** - Define how components respond to specific events and data changes
3. **Maintain state across components** - Share and transform data between different parts of your application
4. **Integrate with external systems** - Seamlessly connect your AI workflow with databases, APIs, and user interfaces
5. **Create complex execution paths** - Design conditional branches, parallel processing, and dynamic workflows

### What You'll Build and Learn

By the end of this guide, you'll have:

1. **Created a sophisticated content generation system** that combines user input, AI planning, and multi-agent content creation
2. **Orchestrated the flow of information** between different components of your system
3. **Implemented event-driven architecture** where each step responds to the completion of previous steps
4. **Built a foundation for more complex AI applications** that you can expand and customize

This guide creator flow demonstrates fundamental patterns that can be applied to create much more advanced applications, such as:

- Interactive AI assistants that combine multiple specialized subsystems
- Complex data processing pipelines with AI-enhanced transformations
- Autonomous agents that integrate with external services and APIs
- Multi-stage decision-making systems with human-in-the-loop processes

Let's dive in and build your first flow!

## Prerequisites

Before starting, make sure you have:

1. Installed CrewAI following the [installation guide](/installation)
2. Set up your OpenAI API key in your environment variables
3. Basic understanding of Python

## Step 1: Create a New CrewAI Flow Project

First, let's create a new CrewAI Flow project using the CLI. This command sets up a scaffolded project with all the necessary directories and template files for your flow.

```bash
crewai create flow guide_creator_flow
cd guide_creator_flow
```

This will generate a project with the basic structure needed for your flow.

<Frame caption="CrewAI Framework Overview">
  <img src="../../flows.png" alt="CrewAI Framework Overview" />
</Frame>

## Step 2: Understanding the Project Structure

The generated project has the following structure. Take a moment to familiarize yourself with it, as understanding this structure will help you create more complex flows in the future.

```
guide_creator_flow/
├── .gitignore
├── pyproject.toml
├── README.md
├── .env
├── main.py
├── crews/
│   └── poem_crew/
│       ├── config/
│       │   ├── agents.yaml
│       │   └── tasks.yaml
│       └── poem_crew.py
└── tools/
    └── custom_tool.py
```

This structure provides a clear separation between different components of your flow:
- The main flow logic in the `main.py` file
- Specialized crews in the `crews` directory
- Custom tools in the `tools` directory

We'll modify this structure to create our guide creator flow, which will orchestrate the process of generating comprehensive learning guides.

## Step 3: Add a Content Writer Crew

Our flow will need a specialized crew to handle the content creation process. Let's use the CrewAI CLI to add a content writer crew:

```bash
crewai flow add-crew content-crew
```

This command automatically creates the necessary directories and template files for your crew. The content writer crew will be responsible for writing and reviewing sections of our guide, working within the overall flow orchestrated by our main application.

## Step 4: Configure the Content Writer Crew

Now, let's modify the generated files for the content writer crew. We'll set up two specialized agents - a writer and a reviewer - that will collaborate to create high-quality content for our guide.

1. First, update the agents configuration file to define our content creation team:

```yaml
# src/guide_creator_flow/crews/content_crew/config/agents.yaml
content_writer:
  role: >
    Educational Content Writer
  goal: >
    Create engaging, informative content that thoroughly explains the assigned topic
    and provides valuable insights to the reader
  backstory: >
    You are a talented educational writer with expertise in creating clear, engaging
    content. You have a gift for explaining complex concepts in accessible language
    and organizing information in a way that helps readers build their understanding.
  llm: openai/gpt-4o-mini

content_reviewer:
  role: >
    Educational Content Reviewer and Editor
  goal: >
    Ensure content is accurate, comprehensive, well-structured, and maintains
    consistency with previously written sections
  backstory: >
    You are a meticulous editor with years of experience reviewing educational
    content. You have an eye for detail, clarity, and coherence. You excel at
    improving content while maintaining the original author's voice and ensuring
    consistent quality across multiple sections.
  llm: openai/gpt-4o-mini
```

These agent definitions establish the specialized roles and perspectives that will shape how our AI agents approach content creation. Notice how each agent has a distinct purpose and expertise.

2. Next, update the tasks configuration file to define the specific writing and reviewing tasks:

```yaml
# src/guide_creator_flow/crews/content_crew/config/tasks.yaml
write_section_task:
  description: >
    Write a comprehensive section on the topic: "{section_title}"

    Section description: {section_description}
    Target audience: {audience_level} level learners

    Your content should:
    1. Begin with a brief introduction to the section topic
    2. Explain all key concepts clearly with examples
    3. Include practical applications or exercises where appropriate
    4. End with a summary of key points
    5. Be approximately 500-800 words in length

    Format your content in Markdown with appropriate headings, lists, and emphasis.

    Previously written sections:
    {previous_sections}

    Make sure your content maintains consistency with previously written sections
    and builds upon concepts that have already been explained.
  expected_output: >
    A well-structured, comprehensive section in Markdown format that thoroughly
    explains the topic and is appropriate for the target audience.
  agent: content_writer

review_section_task:
  description: >
    Review and improve the following section on "{section_title}":

    {draft_content}

    Target audience: {audience_level} level learners

    Previously written sections:
    {previous_sections}

    Your review should:
    1. Fix any grammatical or spelling errors
    2. Improve clarity and readability
    3. Ensure content is comprehensive and accurate
    4. Verify consistency with previously written sections
    5. Enhance the structure and flow
    6. Add any missing key information

    Provide the improved version of the section in Markdown format.
  expected_output: >
    An improved, polished version of the section that maintains the original
    structure but enhances clarity, accuracy, and consistency.
  agent: content_reviewer
  context:
    - write_section_task
```

These task definitions provide detailed instructions to our agents, ensuring they produce content that meets our quality standards. Note how the `context` parameter in the review task creates a workflow where the reviewer has access to the writer's output.

3. Now, update the crew implementation file to define how our agents and tasks work together:

```python
# src/guide_creator_flow/crews/content_crew/content_crew.py
from crewai import Agent, Crew, Process, Task
from crewai.project import CrewBase, agent, crew, task

@CrewBase
class ContentCrew():
    """Content writing crew"""

    @agent
    def content_writer(self) -> Agent:
        return Agent(
            config=self.agents_config['content_writer'],
            verbose=True
        )

    @agent
    def content_reviewer(self) -> Agent:
        return Agent(
            config=self.agents_config['content_reviewer'],
            verbose=True
        )

    @task
    def write_section_task(self) -> Task:
        return Task(
            config=self.tasks_config['write_section_task']
        )

    @task
    def review_section_task(self) -> Task:
        return Task(
            config=self.tasks_config['review_section_task'],
            context=[self.write_section_task()]
        )

    @crew
    def crew(self) -> Crew:
        """Creates the content writing crew"""
        return Crew(
            agents=self.agents,
            tasks=self.tasks,
            process=Process.sequential,
            verbose=True,
        )
```

This crew definition establishes the relationship between our agents and tasks, setting up a sequential process where the content writer creates a draft and then the reviewer improves it. While this crew can function independently, in our flow it will be orchestrated as part of a larger system.

## Step 5: Create the Flow

Now comes the exciting part - creating the flow that will orchestrate the entire guide creation process. This is where we'll combine regular Python code, direct LLM calls, and our content creation crew into a cohesive system.

Our flow will:
1. Get user input for a topic and audience level
2. Make a direct LLM call to create a structured guide outline
3. Process each section sequentially using the content writer crew
4. Combine everything into a final comprehensive document

Let's create our flow in the `main.py` file:

```python
#!/usr/bin/env python
import json
import os
from typing import List, Dict, Optional
from pydantic import BaseModel, Field
from crewai import LLM
from crewai.flow.flow import Flow, listen, start
from guide_creator_flow.crews.content_crew.content_crew import ContentCrew

# Define our models for structured data
class Section(BaseModel):
    title: str = Field(description="Title of the section")
    description: str = Field(description="Brief description of what the section should cover")

class GuideOutline(BaseModel):
    title: str = Field(description="Title of the guide")
    introduction: str = Field(description="Introduction to the topic")
    target_audience: str = Field(description="Description of the target audience")
    sections: List[Section] = Field(description="List of sections in the guide")
    conclusion: str = Field(description="Conclusion or summary of the guide")

# Define our flow state
class GuideCreatorState(BaseModel):
    topic: str = ""
    audience_level: str = ""
    guide_outline: Optional[GuideOutline] = None
    sections_content: Dict[str, str] = {}

class GuideCreatorFlow(Flow[GuideCreatorState]):
    """Flow for creating a comprehensive guide on any topic"""

    @start()
    def get_user_input(self):
        """Get input from the user about the guide topic and audience"""
        print("\n=== Create Your Comprehensive Guide ===\n")

        # Get user input
        self.state.topic = input("What topic would you like to create a guide for? ")

        # Get audience level with validation
        while True:
            audience = input("Who is your target audience? (beginner/intermediate/advanced) ").lower()
            if audience in ["beginner", "intermediate", "advanced"]:
                self.state.audience_level = audience
                break
            print("Please enter 'beginner', 'intermediate', or 'advanced'")

        print(f"\nCreating a guide on {self.state.topic} for {self.state.audience_level} audience...\n")
        return self.state

    @listen(get_user_input)
    def create_guide_outline(self, state):
        """Create a structured outline for the guide using a direct LLM call"""
        print("Creating guide outline...")

        # Initialize the LLM
        llm = LLM(model="openai/gpt-4o-mini", response_format=GuideOutline)

        # Create the messages for the outline
        messages = [
            {"role": "system", "content": "You are a helpful assistant designed to output JSON."},
            {"role": "user", "content": f"""
            Create a detailed outline for a comprehensive guide on "{state.topic}" for {state.audience_level} level learners.

            The outline should include:
            1. A compelling title for the guide
            2. An introduction to the topic
            3. 4-6 main sections that cover the most important aspects of the topic
            4. A conclusion or summary

            For each section, provide a clear title and a brief description of what it should cover.
            """}
        ]

        # Make the LLM call with JSON response format
        response = llm.call(messages=messages)

        # Parse the JSON response
        outline_dict = json.loads(response)
        self.state.guide_outline = GuideOutline(**outline_dict)

        # Ensure the output directory exists before saving
        os.makedirs("output", exist_ok=True)

        # Save the outline to a file
        with open("output/guide_outline.json", "w") as f:
            json.dump(outline_dict, f, indent=2)

        print(f"Guide outline created with {len(self.state.guide_outline.sections)} sections")
        return self.state.guide_outline

    @listen(create_guide_outline)
    def write_and_compile_guide(self, outline):
        """Write all sections and compile the guide"""
        print("Writing guide sections and compiling...")
        completed_sections = []

        # Process sections one by one to maintain context flow
        for section in outline.sections:
            print(f"Processing section: {section.title}")

            # Build context from previous sections
            previous_sections_text = ""
            if completed_sections:
                previous_sections_text = "# Previously Written Sections\n\n"
                for title in completed_sections:
                    previous_sections_text += f"## {title}\n\n"
                    previous_sections_text += self.state.sections_content.get(title, "") + "\n\n"
            else:
                previous_sections_text = "No previous sections written yet."

            # Run the content crew for this section
            result = ContentCrew().crew().kickoff(inputs={
                "section_title": section.title,
                "section_description": section.description,
                "audience_level": self.state.audience_level,
                "previous_sections": previous_sections_text,
                "draft_content": ""
            })

            # Store the content
            self.state.sections_content[section.title] = result.raw
            completed_sections.append(section.title)
            print(f"Section completed: {section.title}")

        # Compile the final guide
        guide_content = f"# {outline.title}\n\n"
        guide_content += f"## Introduction\n\n{outline.introduction}\n\n"

        # Add each section in order
        for section in outline.sections:
            section_content = self.state.sections_content.get(section.title, "")
            guide_content += f"\n\n{section_content}\n\n"

        # Add conclusion
        guide_content += f"## Conclusion\n\n{outline.conclusion}\n\n"

        # Save the guide
        with open("output/complete_guide.md", "w") as f:
            f.write(guide_content)

        print("\nComplete guide compiled and saved to output/complete_guide.md")
        return "Guide creation completed successfully"

def kickoff():
    """Run the guide creator flow"""
    GuideCreatorFlow().kickoff()
    print("\n=== Flow Complete ===")
    print("Your comprehensive guide is ready in the output directory.")
    print("Open output/complete_guide.md to view it.")

def plot():
    """Generate a visualization of the flow"""
    flow = GuideCreatorFlow()
    flow.plot("guide_creator_flow")
    print("Flow visualization saved to guide_creator_flow.html")

if __name__ == "__main__":
    kickoff()
```

Let's analyze what's happening in this flow:

1. We define Pydantic models for structured data, ensuring type safety and clear data representation
2. We create a state class to maintain data across different steps of the flow
3. We implement three main flow steps:
   - Getting user input with the `@start()` decorator
   - Creating a guide outline with a direct LLM call
   - Processing sections with our content crew
4. We use the `@listen()` decorator to establish event-driven relationships between steps

This is the power of flows - combining different types of processing (user interaction, direct LLM calls, crew-based tasks) into a coherent, event-driven system.

## Step 6: Set Up Your Environment Variables

Create a `.env` file in your project root with your API keys:

```
OPENAI_API_KEY=your_openai_api_key
```

## Step 7: Install Dependencies

Install the required dependencies:

```bash
crewai install
```

## Step 8: Run Your Flow

Now it's time to see your flow in action! Run it using the CrewAI CLI:

```bash
crewai flow kickoff
```

When you run this command, you'll see your flow spring to life:
1. It will prompt you for a topic and audience level
2. It will create a structured outline for your guide
3. It will process each section, with the content writer and reviewer collaborating on each
4. Finally, it will compile everything into a comprehensive guide

This demonstrates the power of flows to orchestrate complex processes involving multiple components, both AI and non-AI.

## Step 9: Visualize Your Flow

One of the powerful features of flows is the ability to visualize their structure:

```bash
crewai flow plot
```

This will create an HTML file that shows the structure of your flow, including the relationships between different steps and the data that flows between them. This visualization can be invaluable for understanding and debugging complex flows.

## Step 10: Review the Output

Once the flow completes, you'll find two files in the `output` directory:

1. `guide_outline.json`: Contains the structured outline of the guide
2. `complete_guide.md`: The comprehensive guide with all sections

Take a moment to review these files and appreciate what you've built - a system that combines user input, direct AI interactions, and collaborative agent work to produce a complex, high-quality output.

## The Art of the Possible: Beyond Your First Flow

What you've learned in this guide provides a foundation for creating much more sophisticated AI systems. Here are some ways you could extend this basic flow:

### Enhancing User Interaction

You could create more interactive flows with:
- Web interfaces for input and output
- Real-time progress updates
- Interactive feedback and refinement loops
- Multi-stage user interactions

### Adding More Processing Steps

You could expand your flow with additional steps for:
- Research before outline creation
- Image generation for illustrations
- Code snippet generation for technical guides
- Final quality assurance and fact-checking

### Creating More Complex Flows

You could implement more sophisticated flow patterns:
- Conditional branching based on user preferences or content type
- Parallel processing of independent sections
- Iterative refinement loops with feedback
- Integration with external APIs and services

### Applying to Different Domains

The same patterns can be applied to create flows for:
- **Interactive storytelling**: Create personalized stories based on user input
- **Business intelligence**: Process data, generate insights, and create reports
- **Product development**: Facilitate ideation, design, and planning
- **Educational systems**: Create personalized learning experiences

## Key Features Demonstrated

This guide creator flow demonstrates several powerful features of CrewAI:

1. **User interaction**: The flow collects input directly from the user
2. **Direct LLM calls**: Uses the LLM class for efficient, single-purpose AI interactions
3. **Structured data with Pydantic**: Uses Pydantic models to ensure type safety
4. **Sequential processing with context**: Writes sections in order, providing previous sections for context
5. **Multi-agent crews**: Leverages specialized agents (writer and reviewer) for content creation
6. **State management**: Maintains state across different steps of the process
7. **Event-driven architecture**: Uses the `@listen` decorator to respond to events

## Understanding the Flow Structure

Let's break down the key components of flows to help you understand how to build your own:

### 1. Direct LLM Calls

Flows allow you to make direct calls to language models when you need simple, structured responses:

```python
llm = LLM(model="openai/gpt-4o-mini", response_format=GuideOutline)
response = llm.call(messages=messages)
```

This is more efficient than using a crew when you need a specific, structured output.

### 2. Event-Driven Architecture

Flows use decorators to establish relationships between components:

```python
@start()
def get_user_input(self):
    # First step in the flow
    # ...

@listen(get_user_input)
def create_guide_outline(self, state):
    # This runs when get_user_input completes
    # ...
```

This creates a clear, declarative structure for your application.

### 3. State Management

Flows maintain state across steps, making it easy to share data:

```python
class GuideCreatorState(BaseModel):
    topic: str = ""
    audience_level: str = ""
    guide_outline: Optional[GuideOutline] = None
    sections_content: Dict[str, str] = {}
```

This provides a type-safe way to track and transform data throughout your flow.

### 4. Crew Integration

Flows can seamlessly integrate with crews for complex collaborative tasks:

```python
result = ContentCrew().crew().kickoff(inputs={
    "section_title": section.title,
    # ...
})
```

This allows you to use the right tool for each part of your application - direct LLM calls for simple tasks and crews for complex collaboration.

## Next Steps

Now that you've built your first flow, you can:

1. Experiment with more complex flow structures and patterns
2. Try using `@router()` to create conditional branches in your flows (a brief sketch follows this list)
3. Explore the `and_` and `or_` functions for more complex parallel execution
4. Connect your flow to external APIs, databases, or user interfaces
5. Combine multiple specialized crews in a single flow
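
Because items 2 and 3 come up often, here is a brief illustrative sketch of `@router()` combined with `or_`; the flow class and its branching logic are invented for demonstration:

```python
from crewai.flow.flow import Flow, listen, or_, router, start

class BranchingFlow(Flow):
    @start()
    def classify(self):
        # Toy classification step - a real flow might call an LLM here
        self.state["is_urgent"] = True

    @router(classify)
    def route(self):
        # The returned string names the branch to trigger next
        return "urgent" if self.state["is_urgent"] else "normal"

    @listen("urgent")
    def handle_urgent(self):
        return "Escalated"

    @listen("normal")
    def handle_normal(self):
        return "Queued"

    @listen(or_(handle_urgent, handle_normal))
    def wrap_up(self, result):
        # Runs after whichever branch fired
        print(f"Handled: {result}")
```

Returning a route name from the `@router` method is what selects which `@listen`-ed branch fires next.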

<Check>
Congratulations! You've successfully built your first CrewAI Flow that combines regular code, direct LLM calls, and crew-based processing to create a comprehensive guide. These foundational skills enable you to create increasingly sophisticated AI applications that can tackle complex, multi-stage problems through a combination of procedural control and collaborative intelligence.
</Check>

docs/guides/flows/mastering-flow-state.mdx
@@ -0,0 +1,771 @@

---
title: Mastering Flow State Management
description: A comprehensive guide to managing, persisting, and leveraging state in CrewAI Flows for building robust AI applications.
icon: diagram-project
---

# Mastering Flow State Management

## Understanding the Power of State in Flows

State management is the backbone of any sophisticated AI workflow. In CrewAI Flows, the state system allows you to maintain context, share data between steps, and build complex application logic. Mastering state management is essential for creating reliable, maintainable, and powerful AI applications.

This guide will walk you through everything you need to know about managing state in CrewAI Flows, from basic concepts to advanced techniques, with practical code examples along the way.

### Why State Management Matters

Effective state management enables you to:

1. **Maintain context across execution steps** - Pass information seamlessly between different stages of your workflow
2. **Build complex conditional logic** - Make decisions based on accumulated data
3. **Create persistent applications** - Save and restore workflow progress
4. **Handle errors gracefully** - Implement recovery patterns for more robust applications
5. **Scale your applications** - Support complex workflows with proper data organization
6. **Enable conversational applications** - Store and access conversation history for context-aware AI interactions

Let's explore how to leverage these capabilities effectively.

## State Management Fundamentals

### The Flow State Lifecycle

In CrewAI Flows, the state follows a predictable lifecycle:

1. **Initialization** - When a flow is created, its state is initialized (either as an empty dictionary or a Pydantic model instance)
2. **Modification** - Flow methods access and modify the state as they execute
3. **Transmission** - State is passed automatically between flow methods
4. **Persistence** (optional) - State can be saved to storage and later retrieved
5. **Completion** - The final state reflects the cumulative changes from all executed methods

Understanding this lifecycle is crucial for designing effective flows.

### Two Approaches to State Management

CrewAI offers two ways to manage state in your flows:

1. **Unstructured State** - Using dictionary-like objects for flexibility
2. **Structured State** - Using Pydantic models for type safety and validation

Let's examine each approach in detail.

## Unstructured State Management

Unstructured state uses a dictionary-like approach, offering flexibility and simplicity for straightforward applications.

### How It Works

With unstructured state:
- You access state via `self.state`, which behaves like a dictionary
- You can freely add, modify, or remove keys at any point
- All state is automatically available to all flow methods

### Basic Example

Here's a simple example of unstructured state management:

```python
from crewai.flow.flow import Flow, listen, start

class UnstructuredStateFlow(Flow):
    @start()
    def initialize_data(self):
        print("Initializing flow data")
        # Add key-value pairs to state
        self.state["user_name"] = "Alex"
        self.state["preferences"] = {
            "theme": "dark",
            "language": "English"
        }
        self.state["items"] = []

        # The flow state automatically gets a unique ID
        print(f"Flow ID: {self.state['id']}")

        return "Initialized"

    @listen(initialize_data)
    def process_data(self, previous_result):
        print(f"Previous step returned: {previous_result}")

        # Access and modify state
        user = self.state["user_name"]
        print(f"Processing data for {user}")

        # Add items to a list in state
        self.state["items"].append("item1")
        self.state["items"].append("item2")

        # Add a new key-value pair
        self.state["processed"] = True

        return "Processed"

    @listen(process_data)
    def generate_summary(self, previous_result):
        # Access multiple state values
        user = self.state["user_name"]
        theme = self.state["preferences"]["theme"]
        items = self.state["items"]
        processed = self.state.get("processed", False)

        summary = f"User {user} has {len(items)} items with {theme} theme. "
        summary += "Data is processed." if processed else "Data is not processed."

        return summary

# Run the flow
flow = UnstructuredStateFlow()
result = flow.kickoff()
print(f"Final result: {result}")
print(f"Final state: {flow.state}")
```

### When to Use Unstructured State

Unstructured state is ideal for:
- Quick prototyping and simple flows
- Dynamically evolving state needs
- Cases where the structure may not be known in advance
- Flows with simple state requirements

While flexible, unstructured state lacks type checking and schema validation, which can lead to errors in complex applications.

## Structured State Management

Structured state uses Pydantic models to define a schema for your flow's state, providing type safety, validation, and a better developer experience.

### How It Works

With structured state:
- You define a Pydantic model that represents your state structure
- You pass this model type to your Flow class as a type parameter
- You access state via `self.state`, which behaves like a Pydantic model instance
- All fields are validated according to their defined types
- You get IDE autocompletion and type checking support

### Basic Example

Here's how to implement structured state management:

```python
from crewai.flow.flow import Flow, listen, start
from pydantic import BaseModel
from typing import List

# Define your state model
class UserPreferences(BaseModel):
    theme: str = "light"
    language: str = "English"

class AppState(BaseModel):
    user_name: str = ""
    preferences: UserPreferences = UserPreferences()
    items: List[str] = []
    processed: bool = False
    completion_percentage: float = 0.0

# Create a flow with typed state
class StructuredStateFlow(Flow[AppState]):
    @start()
    def initialize_data(self):
        print("Initializing flow data")
        # Set state values (type-checked)
        self.state.user_name = "Taylor"
        self.state.preferences.theme = "dark"

        # The ID field is automatically available
        print(f"Flow ID: {self.state.id}")

        return "Initialized"

    @listen(initialize_data)
    def process_data(self, previous_result):
        print(f"Processing data for {self.state.user_name}")

        # Modify state (with type checking)
        self.state.items.append("item1")
        self.state.items.append("item2")
        self.state.processed = True
        self.state.completion_percentage = 50.0

        return "Processed"

    @listen(process_data)
    def generate_summary(self, previous_result):
        # Access state (with autocompletion)
        summary = f"User {self.state.user_name} has {len(self.state.items)} items "
        summary += f"with {self.state.preferences.theme} theme. "
        summary += "Data is processed." if self.state.processed else "Data is not processed."
        summary += f" Completion: {self.state.completion_percentage}%"

        return summary

# Run the flow
flow = StructuredStateFlow()
result = flow.kickoff()
print(f"Final result: {result}")
print(f"Final state: {flow.state}")
```

### Benefits of Structured State

Using structured state provides several advantages:

1. **Type Safety** - Catch type errors at development time
2. **Self-Documentation** - The state model clearly documents what data is available
3. **Validation** - Automatic validation of data types and constraints
4. **IDE Support** - Get autocomplete and inline documentation
5. **Default Values** - Easily define fallbacks for missing data

### When to Use Structured State

Structured state is recommended for:
- Complex flows with well-defined data schemas
- Team projects where multiple developers work on the same code
- Applications where data validation is important
- Flows that need to enforce specific data types and constraints

## The Automatic State ID

Both unstructured and structured states automatically receive a unique identifier (UUID) to help track and manage state instances.

### How It Works

- For unstructured state, the ID is accessible as `self.state["id"]`
- For structured state, the ID is accessible as `self.state.id`
- This ID is generated automatically when the flow is created
- The ID remains the same throughout the flow's lifecycle
- The ID can be used for tracking, logging, and retrieving persisted states

This UUID is particularly valuable when implementing persistence or tracking multiple flow executions.
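
For example, a minimal sketch (the class name is illustrative) that reads the auto-generated ID from an unstructured flow, following the access patterns listed above:

```python
from crewai.flow.flow import Flow, start

class IdDemoFlow(Flow):
    @start()
    def report_id(self):
        # Unstructured state exposes the ID under the "id" key;
        # a structured flow would read self.state.id instead.
        flow_id = self.state["id"]
        print(f"This run's state ID: {flow_id}")
        return flow_id

flow = IdDemoFlow()
flow.kickoff()
```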

## Dynamic State Updates

Regardless of whether you're using structured or unstructured state, you can update state dynamically throughout your flow's execution.

### Passing Data Between Steps

Flow methods can return values that are then passed as arguments to listening methods:

```python
from crewai.flow.flow import Flow, listen, start

class DataPassingFlow(Flow):
    @start()
    def generate_data(self):
        # This return value will be passed to listening methods
        return "Generated data"

    @listen(generate_data)
    def process_data(self, data_from_previous_step):
        print(f"Received: {data_from_previous_step}")
        # You can modify the data and pass it along
        processed_data = f"{data_from_previous_step} - processed"
        # Also update state
        self.state["last_processed"] = processed_data
        return processed_data

    @listen(process_data)
    def finalize_data(self, processed_data):
        print(f"Received processed data: {processed_data}")
        # Access both the passed data and state
        last_processed = self.state.get("last_processed", "")
        return f"Final: {processed_data} (from state: {last_processed})"
```

This pattern allows you to combine direct data passing with state updates for maximum flexibility.

## Persisting Flow State

One of CrewAI's most powerful features is the ability to persist flow state across executions. This enables workflows that can be paused, resumed, and even recovered after failures.

### The @persist Decorator

The `@persist` decorator automates state persistence, saving your flow's state at key points in execution.

#### Class-Level Persistence

When applied at the class level, `@persist` saves state after every method execution:

```python
from crewai.flow.flow import Flow, listen, persist, start
from pydantic import BaseModel

class CounterState(BaseModel):
    value: int = 0

@persist  # Apply to the entire flow class
class PersistentCounterFlow(Flow[CounterState]):
    @start()
    def increment(self):
        self.state.value += 1
        print(f"Incremented to {self.state.value}")
        return self.state.value

    @listen(increment)
    def double(self, value):
        self.state.value = value * 2
        print(f"Doubled to {self.state.value}")
        return self.state.value

# First run
flow1 = PersistentCounterFlow()
result1 = flow1.kickoff()
print(f"First run result: {result1}")

# Second run - state is automatically loaded
flow2 = PersistentCounterFlow()
result2 = flow2.kickoff()
print(f"Second run result: {result2}")  # Will be higher due to persisted state
```

#### Method-Level Persistence

For more granular control, you can apply `@persist` to specific methods:

```python
from crewai.flow.flow import Flow, listen, persist, start

class SelectivePersistFlow(Flow):
    @start()
    def first_step(self):
        self.state["count"] = 1
        return "First step"

    @persist  # Only persist after this method
    @listen(first_step)
    def important_step(self, prev_result):
        self.state["count"] += 1
        self.state["important_data"] = "This will be persisted"
        return "Important step completed"

    @listen(important_step)
    def final_step(self, prev_result):
        self.state["count"] += 1
        return f"Complete with count {self.state['count']}"
```
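
Where persisted states are stored and how they are reloaded depends on your CrewAI version; recent releases back `@persist` with a local SQLite store and let you resume a specific run by passing its state ID to `kickoff`. A hedged sketch, reusing `PersistentCounterFlow` from the class-level example above:

```python
# Hedged sketch: resuming a specific persisted run by its state ID.
# The inputs={"id": ...} lookup convention should be verified against
# the documentation for your installed CrewAI version.
saved_id = flow1.state.id  # captured from the earlier run

restored_flow = PersistentCounterFlow()
result = restored_flow.kickoff(inputs={"id": saved_id})
print(f"Resumed run result: {result}")
```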

## Advanced State Patterns

### State-Based Conditional Logic

You can use state to implement complex conditional logic in your flows:

```python
from crewai.flow.flow import Flow, listen, router, start
from pydantic import BaseModel

class PaymentState(BaseModel):
    amount: float = 0.0
    is_approved: bool = False
    retry_count: int = 0

class PaymentFlow(Flow[PaymentState]):
    @start()
    def process_payment(self):
        # Simulate payment processing
        self.state.amount = 100.0
        self.state.is_approved = self.state.amount < 1000
        return "Payment processed"

    @router(process_payment)
    def check_approval(self, previous_result):
        if self.state.is_approved:
            return "approved"
        elif self.state.retry_count < 3:
            return "retry"
        else:
            return "rejected"

    @listen("approved")
    def handle_approval(self):
        return f"Payment of ${self.state.amount} approved!"

    @listen("retry")
    def handle_retry(self):
        self.state.retry_count += 1
        print(f"Retrying payment (attempt {self.state.retry_count})...")
        # Could implement retry logic here
        return "Retry initiated"

    @listen("rejected")
    def handle_rejection(self):
        return f"Payment of ${self.state.amount} rejected after {self.state.retry_count} retries."
```

### Handling Complex State Transformations

For complex state transformations, you can create dedicated methods:

```python
from crewai.flow.flow import Flow, listen, start
from pydantic import BaseModel
from typing import Dict

class UserData(BaseModel):
    name: str
    active: bool = True
    login_count: int = 0

class ComplexState(BaseModel):
    users: Dict[str, UserData] = {}
    active_user_count: int = 0

class TransformationFlow(Flow[ComplexState]):
    @start()
    def initialize(self):
        # Add some users
        self.add_user("alice", "Alice")
        self.add_user("bob", "Bob")
        self.add_user("charlie", "Charlie")
        return "Initialized"

    @listen(initialize)
    def process_users(self, _):
        # Increment login counts
        for user_id in self.state.users:
            self.increment_login(user_id)

        # Deactivate one user
        self.deactivate_user("bob")

        # Update active count
        self.update_active_count()

        return f"Processed {len(self.state.users)} users"

    # Helper methods for state transformations
    def add_user(self, user_id: str, name: str):
        self.state.users[user_id] = UserData(name=name)
        self.update_active_count()

    def increment_login(self, user_id: str):
        if user_id in self.state.users:
            self.state.users[user_id].login_count += 1

    def deactivate_user(self, user_id: str):
        if user_id in self.state.users:
            self.state.users[user_id].active = False
            self.update_active_count()

    def update_active_count(self):
        self.state.active_user_count = sum(
            1 for user in self.state.users.values() if user.active
        )
```

This pattern of creating helper methods keeps your flow methods clean while enabling complex state manipulations.

## State Management with Crews

One of the most powerful patterns in CrewAI is combining flow state management with crew execution.

### Passing State to Crews

You can use flow state to parameterize crews:

```python
from crewai.flow.flow import Flow, listen, start
from crewai import Agent, Crew, Process, Task
from pydantic import BaseModel

class ResearchState(BaseModel):
    topic: str = ""
    depth: str = "medium"
    results: str = ""

class ResearchFlow(Flow[ResearchState]):
    @start()
    def get_parameters(self):
        # In a real app, this might come from user input
        self.state.topic = "Artificial Intelligence Ethics"
        self.state.depth = "deep"
        return "Parameters set"

    @listen(get_parameters)
    def execute_research(self, _):
        # Create agents
        researcher = Agent(
            role="Research Specialist",
            goal=f"Research {self.state.topic} in {self.state.depth} detail",
            backstory="You are an expert researcher with a talent for finding accurate information."
        )

        writer = Agent(
            role="Content Writer",
            goal="Transform research into clear, engaging content",
            backstory="You excel at communicating complex ideas clearly and concisely."
        )

        # Create tasks
        research_task = Task(
            description=f"Research {self.state.topic} with {self.state.depth} analysis",
            expected_output="Comprehensive research notes in markdown format",
            agent=researcher
        )

        writing_task = Task(
            description=f"Create a summary on {self.state.topic} based on the research",
            expected_output="Well-written article in markdown format",
            agent=writer,
            context=[research_task]
        )

        # Create and run crew
        research_crew = Crew(
            agents=[researcher, writer],
            tasks=[research_task, writing_task],
            process=Process.sequential,
            verbose=True
        )

        # Run crew and store result in state
        result = research_crew.kickoff()
        self.state.results = result.raw

        return "Research completed"

    @listen(execute_research)
    def summarize_results(self, _):
        # Access the stored results
        result_length = len(self.state.results)
        return f"Research on {self.state.topic} completed with {result_length} characters of results."
```

### Handling Crew Outputs in State

When a crew completes, you can process its output and store it in your flow state:

```python
@listen(execute_crew)
def process_crew_results(self, _):
    # Parse the raw results (assuming JSON output)
    import json
    try:
        results_dict = json.loads(self.state.raw_results)
        self.state.processed_results = {
            "title": results_dict.get("title", ""),
            "main_points": results_dict.get("main_points", []),
            "conclusion": results_dict.get("conclusion", "")
        }
        return "Results processed successfully"
    except json.JSONDecodeError:
        self.state.error = "Failed to parse crew results as JSON"
        return "Error processing results"
```
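
Alternatively, if your CrewAI version supports it, you can have the crew produce validated output directly instead of parsing JSON by hand. A hedged sketch using Task's `output_pydantic` option (the task wiring here is illustrative):

```python
from crewai import Task
from pydantic import BaseModel

class ResearchOutput(BaseModel):
    title: str
    main_points: list[str]
    conclusion: str

# When output_pydantic is set, the task's output typically exposes a
# validated .pydantic attribute alongside the raw text.
structured_task = Task(
    description="Summarize the research as structured data",
    expected_output="JSON with title, main_points, and conclusion",
    agent=writer,  # 'writer' as defined in the earlier example
    output_pydantic=ResearchOutput,
)
```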

## Best Practices for State Management

### 1. Keep State Focused

Design your state to contain only what's necessary:

```python
# Too broad
class BloatedState(BaseModel):
    user_data: Dict = {}
    system_settings: Dict = {}
    temporary_calculations: List = []
    debug_info: Dict = {}
    # ...many more fields

# Better: Focused state
class FocusedState(BaseModel):
    user_id: str
    preferences: Dict[str, str]
    completion_status: Dict[str, bool]
```

### 2. Use Structured State for Complex Flows

As your flows grow in complexity, structured state becomes increasingly valuable:

```python
from datetime import datetime
from typing import Optional
from pydantic import BaseModel, Field

# Simple flow can use unstructured state
class SimpleGreetingFlow(Flow):
    @start()
    def greet(self):
        self.state["name"] = "World"
        return f"Hello, {self.state['name']}!"

# Complex flow benefits from structured state
class UserRegistrationState(BaseModel):
    username: str
    email: str
    verification_status: bool = False
    registration_date: datetime = Field(default_factory=datetime.now)
    last_login: Optional[datetime] = None

class RegistrationFlow(Flow[UserRegistrationState]):
    # Methods with strongly-typed state access
    ...
```

### 3. Document State Transitions

For complex flows, document how state changes throughout the execution:

```python
import uuid

@start()
def initialize_order(self):
    """
    Initialize order state with empty values.

    State before: {}
    State after: {order_id: str, items: [], status: 'new'}
    """
    self.state["order_id"] = str(uuid.uuid4())
    self.state["items"] = []
    self.state["status"] = "new"
    return "Order initialized"
```

### 4. Handle State Errors Gracefully

Implement error handling for state access:

```python
@listen(previous_step)
def process_data(self, _):
    try:
        # Try to access a nested value that might not exist
        user_preference = self.state["preferences"].get("theme", "default")
    except (KeyError, AttributeError):
        # Handle the error gracefully and record it in state
        self.state.setdefault("errors", []).append("Failed to access preferences")
        user_preference = "default"

    return f"Used preference: {user_preference}"
```

### 5. Use State for Progress Tracking

Leverage state to track progress in long-running flows:

```python
class ProgressTrackingFlow(Flow):
    @start()
    def initialize(self):
        self.state["total_steps"] = 3
        self.state["current_step"] = 0
        self.state["progress"] = 0.0
        self.update_progress()
        return "Initialized"

    def update_progress(self):
        """Helper method to calculate and update progress"""
        if self.state.get("total_steps", 0) > 0:
            self.state["progress"] = (self.state.get("current_step", 0) /
                                      self.state["total_steps"]) * 100
            print(f"Progress: {self.state['progress']:.1f}%")

    @listen(initialize)
    def step_one(self, _):
        # Do work...
        self.state["current_step"] = 1
        self.update_progress()
        return "Step 1 complete"

    # Additional steps...
```

### 6. Use Immutable Operations When Possible

Especially with structured state, prefer immutable operations for clarity:

```python
# Instead of modifying lists in place:
self.state.items.append(new_item)  # Mutable operation

# Consider creating new state:
from pydantic import BaseModel
from typing import List

class ItemState(BaseModel):
    items: List[str] = []

class ImmutableFlow(Flow[ItemState]):
    @start()
    def add_item(self):
        # Create new list with the added item
        self.state.items = [*self.state.items, "new item"]
        return "Item added"
```

## Debugging Flow State

### Logging State Changes

When developing, add logging to track state changes:

```python
import logging
logging.basicConfig(level=logging.INFO)

class LoggingFlow(Flow):
    def log_state(self, step_name):
        logging.info(f"State after {step_name}: {self.state}")

    @start()
    def initialize(self):
        self.state["counter"] = 0
        self.log_state("initialize")
        return "Initialized"

    @listen(initialize)
    def increment(self, _):
        self.state["counter"] += 1
        self.log_state("increment")
        return f"Incremented to {self.state['counter']}"
```

### State Visualization

You can add methods to visualize your state for debugging:

```python
def visualize_state(self):
    """Create a simple visualization of the current state"""
    import json
    from rich.console import Console
    from rich.panel import Panel

    console = Console()

    if hasattr(self.state, "model_dump"):
        # Pydantic v2
        state_dict = self.state.model_dump()
    elif hasattr(self.state, "dict"):
        # Pydantic v1
        state_dict = self.state.dict()
    else:
        # Unstructured state
        state_dict = dict(self.state)

    # Remove id for cleaner output
    if "id" in state_dict:
        state_dict.pop("id")

    state_json = json.dumps(state_dict, indent=2, default=str)
    console.print(Panel(state_json, title="Current Flow State"))
```

## Conclusion

Mastering state management in CrewAI Flows gives you the power to build sophisticated, robust AI applications that maintain context, make complex decisions, and deliver consistent results.

Whether you choose unstructured or structured state, implementing proper state management practices will help you create flows that are maintainable, extensible, and effective at solving real-world problems.

As you develop more complex flows, remember that good state management is about finding the right balance between flexibility and structure, making your code both powerful and easy to understand.

<Check>
You've now mastered the concepts and practices of state management in CrewAI Flows! With this knowledge, you can create robust AI workflows that effectively maintain context, share data between steps, and build sophisticated application logic.
</Check>

## Next Steps

- Experiment with both structured and unstructured state in your flows
- Try implementing state persistence for long-running workflows
- Explore [building your first crew](/guides/crews/first-crew) to see how crews and flows can work together
- Check out the [Flow reference documentation](/concepts/flows) for more advanced features
@@ -1,112 +0,0 @@
---
title: Assembling and Activating Your CrewAI Team
description: A step-by-step guide to creating a cohesive CrewAI team for your projects.
---

## Introduction
Embarking on your CrewAI journey involves a few straightforward steps to set up your environment and initiate your AI crew. This guide ensures a seamless start.

## Step 0: Installation
Begin by installing CrewAI and any additional packages required for your project. For instance, the `duckduckgo-search` package is used in this example for enhanced search capabilities.

```shell
pip install crewai
pip install duckduckgo-search
```

## Step 1: Assemble Your Agents
Define your agents with distinct roles and backstories. These elements not only add depth but also guide their task execution and interaction within the crew.

```python
import os
os.environ["OPENAI_API_KEY"] = "Your Key"

from crewai import Agent

# Topic that will be used in the crew run
topic = 'AI in healthcare'

# Creating a senior researcher agent
researcher = Agent(
    role='Senior Researcher',
    goal=f'Uncover groundbreaking technologies around {topic}',
    verbose=True,
    backstory="""Driven by curiosity, you're at the forefront of
    innovation, eager to explore and share knowledge that could change
    the world."""
)

# Creating a writer agent
writer = Agent(
    role='Writer',
    goal=f'Narrate compelling tech stories around {topic}',
    verbose=True,
    backstory="""With a flair for simplifying complex topics, you craft
    engaging narratives that captivate and educate, bringing new
    discoveries to light in an accessible manner."""
)
```

## Step 2: Define the Tasks
Detail the specific objectives for your agents. These tasks guide their focus and ensure a targeted approach to their roles.

```python
from crewai import Task

# Install duckduckgo-search for this example:
# !pip install -U duckduckgo-search

from langchain_community.tools import DuckDuckGoSearchRun
search_tool = DuckDuckGoSearchRun()

# Research task for identifying AI trends
research_task = Task(
    description=f"""Identify the next big trend in {topic}.
    Focus on identifying pros and cons and the overall narrative.

    Your final report should clearly articulate the key points,
    its market opportunities, and potential risks.
    """,
    expected_output='A comprehensive 3-paragraph report on the latest AI trends.',
    max_iter=3,
    tools=[search_tool],
    agent=researcher
)

# Writing task based on research findings
write_task = Task(
    description=f"""Compose an insightful article on {topic}.
    Focus on the latest trends and how they are impacting the industry.
    This article should be easy to understand, engaging, and positive.
    """,
    expected_output=f'A 4-paragraph article on {topic} advancements.',
    tools=[search_tool],
    agent=writer
)
```

## Step 3: Form the Crew
Combine your agents into a crew, setting the workflow process they'll follow to accomplish the tasks.

```python
from crewai import Crew, Process

# Forming the tech-focused crew
crew = Crew(
    agents=[researcher, writer],
    tasks=[research_task, write_task],
    process=Process.sequential  # Sequential task execution
)
```

## Step 4: Kick It Off
With your crew ready and the stage set, initiate the process. Watch as your agents collaborate, each contributing their expertise to achieve the collective goal.

```python
# Starting the task execution process
result = crew.kickoff()
print(result)
```

## Conclusion
Building and activating a crew in CrewAI is a seamless process. By carefully assigning roles, tasks, and a clear process, your AI team is equipped to tackle challenges efficiently. The depth of agent backstories and the precision of their objectives enrich the collaboration, leading to successful project outcomes.
@@ -1,65 +0,0 @@
---
title: Customizing Agents in CrewAI
description: A guide to tailoring agents for specific roles and tasks within the CrewAI framework.
---

## Customizable Attributes
Tailoring your AI agents is pivotal in crafting an efficient CrewAI team. Customization allows agents to be dynamically adapted to the unique requirements of any project.

### Key Attributes for Customization
- **Role**: Defines the agent's job within the crew, such as 'Analyst' or 'Customer Service Rep'.
- **Goal**: The agent's objective, aligned with its role and the crew's overall goals.
- **Backstory**: Adds depth to the agent's character, enhancing its role and motivations within the crew.
- **Tools**: The capabilities or methods the agent employs to accomplish tasks, ranging from simple functions to complex integrations.

## Understanding Tools in CrewAI
Tools empower agents with functionalities to interact with and manipulate their environment, from generic utilities to specialized functions. Integrating with LangChain offers access to a broad range of tools for diverse tasks.

## Customizing Agents and Tools
Agents are customized by defining their attributes during initialization, with tools being a critical aspect of their functionality.

### Example: Assigning Tools to an Agent
```python
import os

from crewai import Agent
from langchain.agents import Tool
from langchain.utilities import GoogleSerperAPIWrapper

# Set API keys for tool initialization
os.environ["OPENAI_API_KEY"] = "Your Key"
os.environ["SERPER_API_KEY"] = "Your Key"

# Initialize a search tool
search_tool = GoogleSerperAPIWrapper()

# Define and assign the tool to an agent
serper_tool = Tool(
    name="Intermediate Answer",
    func=search_tool.run,
    description="Useful for search-based queries"
)

# Initialize the agent with the tool
agent = Agent(
    role='Research Analyst',
    goal='Provide up-to-date market analysis',
    backstory='An expert analyst with a keen eye for market trends.',
    tools=[serper_tool]
)
```

## Delegation and Autonomy
Agents in CrewAI can delegate tasks or ask questions, enhancing the crew's collaborative dynamics. This feature can be disabled to ensure straightforward task execution.

### Example: Disabling Delegation for an Agent
```python
agent = Agent(
    role='Content Writer',
    goal='Write engaging content on market trends',
    backstory='A seasoned writer with expertise in market analysis.',
    allow_delegation=False
)
```

## Conclusion
Customizing agents is key to leveraging the full potential of CrewAI. By thoughtfully setting agents' roles, goals, backstories, and tools, you craft a nuanced and capable AI team ready to tackle complex challenges.
@@ -1,60 +0,0 @@
---
title: Implementing the Hierarchical Process in CrewAI
description: Understanding and applying the hierarchical process within your CrewAI projects.
---

## Introduction
The hierarchical process in CrewAI introduces a structured approach to task management, mimicking traditional organizational hierarchies for efficient task delegation and execution.

!!! note "Complexity"
    The current implementation of the hierarchical process relies on tool usage, which usually requires more capable models like GPT-4 and typically implies higher token usage.

## Hierarchical Process Overview
In this process, tasks are assigned and executed based on a defined hierarchy, where a 'manager' agent coordinates the workflow, delegating tasks to other agents and validating their outcomes before proceeding.

### Key Features
- **Task Delegation**: A manager agent oversees task distribution among crew members.
- **Result Validation**: The manager reviews outcomes before passing tasks along, ensuring quality and relevance.
- **Efficient Workflow**: Mimics corporate structures for a familiar and organized task management approach.

## Implementing the Hierarchical Process
To utilize the hierarchical process, you must define a crew with a designated manager and a clear chain of command for task execution.

!!! note "Tools on the hierarchical process"
    When using the hierarchical process, assign tools to the agents rather than the tasks: the manager delegates the tasks, and the agents are the ones executing them.

!!! note "Manager LLM"
    A manager is set for the crew automatically, so you don't need to define one. You do, however, need to set the `manager_llm` parameter on the crew.

```python
from langchain_openai import ChatOpenAI
from crewai import Crew, Process, Agent

# Define your agents, no need to define a manager
researcher = Agent(
    role='Researcher',
    goal='Conduct in-depth analysis',
    # tools = [...]
)
writer = Agent(
    role='Writer',
    goal='Create engaging content',
    # tools = [...]
)

# Form the crew with a hierarchical process
project_crew = Crew(
    tasks=[...],  # Tasks that the manager will figure out how to complete
    agents=[researcher, writer],
    manager_llm=ChatOpenAI(temperature=0, model="gpt-4"),  # The manager's LLM, used internally
    process=Process.hierarchical  # Designating the hierarchical approach
)
```

### Workflow in Action
1. **Task Assignment**: The manager assigns tasks based on agent roles and capabilities.
2. **Execution and Review**: Agents perform their tasks, with the manager reviewing outcomes for approval.
3. **Sequential Task Progression**: Tasks are completed in a sequence dictated by the manager, ensuring orderly progression.

## Conclusion
The hierarchical process in CrewAI offers a familiar, structured way to manage tasks within a project. By leveraging a chain of command, it enhances efficiency and quality control, making it ideal for complex projects requiring meticulous oversight.
@@ -1,76 +0,0 @@
# Human Input on Execution

Human input is important in many agent execution use cases; humans can be prompted to step in and provide extra details when necessary.
Using it with crewAI is straightforward, and you can do it through a LangChain Tool.
Check [LangChain Integration](https://python.langchain.com/docs/integrations/tools/human_tools) for more details:

Example:

```python
import os
from crewai import Agent, Task, Crew, Process
from langchain_community.tools import DuckDuckGoSearchRun
from langchain.agents import load_tools

search_tool = DuckDuckGoSearchRun()

# Loading Human Tools
human_tools = load_tools(["human"])

# Define your agents with roles and goals
researcher = Agent(
    role='Senior Research Analyst',
    goal='Uncover cutting-edge developments in AI and data science',
    backstory="""You are a Senior Research Analyst at a leading tech think tank.
    Your expertise lies in identifying emerging trends and technologies in AI and
    data science. You have a knack for dissecting complex data and presenting
    actionable insights.""",
    verbose=True,
    allow_delegation=False,
    # Passing human tools to the agent
    tools=[search_tool]+human_tools
)
writer = Agent(
    role='Tech Content Strategist',
    goal='Craft compelling content on tech advancements',
    backstory="""You are a renowned Tech Content Strategist, known for your insightful
    and engaging articles on technology and innovation. With a deep understanding of
    the tech industry, you transform complex concepts into compelling narratives.""",
    verbose=True,
    allow_delegation=True
)

# Create tasks for your agents
# Being explicit on the task to ask for human feedback.
task1 = Task(
    description="""Conduct a comprehensive analysis of the latest advancements in AI in 2024.
    Identify key trends, breakthrough technologies, and potential industry impacts.
    Compile your findings in a detailed report.
    Make sure to check with the human if the draft is good before returning your Final Answer.
    Your final answer MUST be a full analysis report""",
    agent=researcher
)

task2 = Task(
    description="""Using the insights from the researcher's report, develop an engaging blog
    post that highlights the most significant AI advancements.
    Your post should be informative yet accessible, catering to a tech-savvy audience.
    Aim for a narrative that captures the essence of these breakthroughs and their
    implications for the future.
    Your final answer MUST be the full blog post of at least 3 paragraphs.""",
    agent=writer
)

# Instantiate your crew with a sequential process
crew = Crew(
    agents=[researcher, writer],
    tasks=[task1, task2],
    verbose=2
)

# Get your crew to work!
result = crew.kickoff()

print("######################")
print(result)
```
@@ -1,105 +0,0 @@
---
title: Connect CrewAI to LLMs
description: Guide on integrating CrewAI with various Large Language Models (LLMs).
---

## Connect CrewAI to LLMs
!!! note "Default LLM"
    By default, crewAI uses OpenAI's GPT-4 model for language processing. However, you can configure your agents to use a different model or API. This guide will show you how to connect your agents to different LLMs.

CrewAI offers flexibility in connecting to various LLMs, including local models via [Ollama](https://ollama.ai) and different APIs like Azure. It's compatible with all [LangChain LLM](https://python.langchain.com/docs/integrations/llms/) components, enabling diverse integrations for tailored AI solutions.

## Ollama Integration
Ollama is preferred for local LLM integration, offering customization and privacy benefits. It requires installation and configuration, including model adjustments via a Modelfile to optimize performance.

### Setting Up Ollama
- **Installation**: Follow Ollama's guide for setup.
- **Configuration**: [Adjust your local model with a Modelfile](https://github.com/jmorganca/ollama/blob/main/docs/modelfile.md), considering adding `Observation` as a stop word and experimenting with parameters like `top_p` and `temperature`.

### Integrating Ollama with CrewAI
Instantiate Ollama and pass it to your agents within CrewAI, enhancing them with the local model's capabilities.

```python
from crewai import Agent
from langchain_community.llms import Ollama

# Assuming you have Ollama installed and downloaded the openhermes model
ollama_openhermes = Ollama(model="openhermes")

# SearchTools and BrowserTools here stand in for your own custom tools
local_expert = Agent(
    role='Local Expert',
    goal='Provide insights about the city',
    backstory="A knowledgeable local guide.",
    tools=[SearchTools.search_internet, BrowserTools.scrape_and_summarize_website],
    llm=ollama_openhermes,
    verbose=True
)
```

## OpenAI Compatible API Endpoints
You can use environment variables for easy switching between APIs and models, supporting diverse platforms like FastChat, LM Studio, and Mistral AI.

### Configuration Examples

### FastChat
```sh
# Required
OPENAI_API_BASE="http://localhost:8001/v1"
OPENAI_API_KEY=NA
MODEL_NAME='oh-2.5m7b-q51' # Depending on the model you have available
```

### LM Studio
```sh
# Required
OPENAI_API_BASE="http://localhost:8000/v1"
OPENAI_API_KEY=NA
MODEL_NAME=NA
```

### Mistral API
```sh
OPENAI_API_KEY=your-mistral-api-key
OPENAI_API_BASE=https://api.mistral.ai/v1
MODEL_NAME="mistral-small" # Check documentation for available models
```

### text-gen-web-ui
```sh
# Required
API_BASE_URL=http://localhost:5000
OPENAI_API_KEY=NA
MODEL_NAME=NA
```
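
The same settings can also be passed explicitly instead of via environment variables. A hedged sketch (the endpoint and model name are placeholders for whatever your local server exposes):

```python
from crewai import Agent
from langchain_openai import ChatOpenAI

# Point the client at a local OpenAI-compatible server.
local_llm = ChatOpenAI(
    base_url="http://localhost:8000/v1",
    api_key="NA",
    model="local-model-name",
)

local_agent = Agent(
    role='Local Model Agent',
    goal='Demonstrate a locally hosted LLM',
    backstory='Runs entirely against a local OpenAI-compatible server.',
    llm=local_llm
)
```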

### Azure Open AI
Azure's OpenAI API needs a distinct setup, utilizing the `langchain_openai` component for Azure-specific configurations.

Configuration settings:
```sh
AZURE_OPENAI_VERSION="2022-12-01"
AZURE_OPENAI_DEPLOYMENT=""
AZURE_OPENAI_ENDPOINT=""
AZURE_OPENAI_KEY=""
```

```python
import os

from dotenv import load_dotenv
from crewai import Agent
from langchain_openai import AzureChatOpenAI

load_dotenv()

default_llm = AzureChatOpenAI(
    azure_endpoint=os.environ.get("AZURE_OPENAI_ENDPOINT"),
    api_key=os.environ.get("AZURE_OPENAI_KEY")
)

example_agent = Agent(
    role='Example Agent',
    goal='Demonstrate custom LLM configuration',
    backstory='A diligent explorer of GitHub docs.',
    llm=default_llm
)
```

## Conclusion
Integrating CrewAI with different LLMs expands the framework's versatility, allowing for customized, efficient AI solutions across various domains and platforms.
@@ -1,50 +0,0 @@
---
title: Implementing the Sequential Process in CrewAI
description: A guide to utilizing the sequential process for task execution in CrewAI projects.
---

## Introduction
The sequential process in CrewAI ensures tasks are executed one after the other, following a linear progression. This approach is akin to a relay race, where each agent completes their task before passing the baton to the next.

## Sequential Process Overview
This process is straightforward and effective, particularly for projects where tasks must be completed in a specific order to achieve the desired outcome.

### Key Features
- **Linear Task Flow**: Tasks are handled in a predetermined sequence, ensuring orderly progression.
- **Simplicity**: Ideal for projects with clearly defined, step-by-step tasks.
- **Easy Monitoring**: Task completion can be easily tracked, offering clear insights into project progress.

## Implementing the Sequential Process
To apply the sequential process, assemble your crew and define the tasks in the order they need to be executed.

!!! note "Task assignment"
    In the sequential process, make sure every task is assigned to an agent, as the agents are the ones executing them.

```python
from crewai import Crew, Process, Agent, Task

# Define your agents
researcher = Agent(role='Researcher', goal='Conduct foundational research')
analyst = Agent(role='Data Analyst', goal='Analyze research findings')
writer = Agent(role='Writer', goal='Draft the final report')

# Define the tasks in sequence
research_task = Task(description='Gather relevant data', agent=researcher)
analysis_task = Task(description='Analyze the data', agent=analyst)
writing_task = Task(description='Compose the report', agent=writer)

# Form the crew with a sequential process
report_crew = Crew(
    agents=[researcher, analyst, writer],
    tasks=[research_task, analysis_task, writing_task],
    process=Process.sequential
)
```

### Workflow in Action
1. **Initial Task**: The first agent completes their task and signals completion.
2. **Subsequent Tasks**: Following agents pick up their tasks in the order defined, using the outcomes of preceding tasks as inputs (see the sketch below).
3. **Completion**: The process concludes once the final task is executed, culminating in the project's completion.
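
To make the hand-off in step 2 explicit, the Task `context` parameter can feed earlier outputs directly into a later task. A minimal sketch extending the crew above:

```python
# Hedged sketch: the writer task explicitly receives the outputs of the
# earlier tasks as context rather than relying on implicit ordering alone.
writing_task = Task(
    description='Compose the report',
    agent=writer,
    context=[research_task, analysis_task],
)
```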

## Conclusion
The sequential process in CrewAI provides a clear, straightforward path for task execution. It's particularly suited for projects requiring a logical progression of tasks, ensuring each step is completed before the next begins, thereby facilitating a cohesive final product.

docs/how-to/agentops-observability.mdx (new file, 126 lines)
@@ -0,0 +1,126 @@
---
title: AgentOps Integration
description: Understanding and logging your agent performance with AgentOps.
icon: paperclip
---

# Introduction

Observability is a key aspect of developing and deploying conversational AI agents. It allows developers to understand how their agents are performing, how they interact with users, and how they use external tools and APIs.
AgentOps is a product independent of CrewAI that provides a comprehensive observability solution for agents.

## AgentOps

[AgentOps](https://agentops.ai/?=crew) provides session replays, metrics, and monitoring for agents.

At a high level, AgentOps gives you the ability to monitor cost, token usage, latency, agent failures, session-wide statistics, and more.
For more info, check out the [AgentOps Repo](https://github.com/AgentOps-AI/agentops).

### Overview

AgentOps provides monitoring for agents in development and production.
It provides a dashboard for tracking agent performance, session replays, and custom reporting.

Additionally, AgentOps provides session drilldowns for viewing Crew agent interactions, LLM calls, and tool usage in real-time.
This feature is useful for debugging and understanding how agents interact with users as well as other agents.

### Features

- **LLM Cost Management and Tracking**: Track spend with foundation model providers.
- **Replay Analytics**: Watch step-by-step agent execution graphs.
- **Recursive Thought Detection**: Identify when agents fall into infinite loops.
- **Custom Reporting**: Create custom analytics on agent performance.
- **Analytics Dashboard**: Monitor high-level statistics about agents in development and production.
- **Public Model Testing**: Test your agents against benchmarks and leaderboards.
- **Custom Tests**: Run your agents against domain-specific tests.
- **Time Travel Debugging**: Restart your sessions from checkpoints.
- **Compliance and Security**: Create audit logs and detect potential threats such as profanity and PII leaks.
- **Prompt Injection Detection**: Identify potential code injection and secret leaks.

### Using AgentOps

<Steps>
  <Step title="Create an API Key">
    Create a user API key here: [Create API Key](https://app.agentops.ai/account)
  </Step>
  <Step title="Configure Your Environment">
    Add your API key to your environment variables:
    ```bash
    AGENTOPS_API_KEY=<YOUR_AGENTOPS_API_KEY>
    ```
  </Step>
  <Step title="Install AgentOps">
    Install AgentOps with:
    ```bash
    pip install 'crewai[agentops]'
    ```
    or
    ```bash
    pip install agentops
    ```
  </Step>
  <Step title="Initialize AgentOps">
    Before using `Crew` in your script, include these lines:

    ```python
    import agentops
    agentops.init()
    ```

    This will initiate an AgentOps session and automatically track Crew agents. For further info on how to outfit more complex agentic systems,
    check out the [AgentOps documentation](https://docs.agentops.ai) or join the [Discord](https://discord.gg/j4f3KbeH).
  </Step>
</Steps>
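
Putting the steps together, a minimal end-to-end sketch might look like the following (the agent and task are illustrative, and `end_session` reflects older agentops SDKs, so check your installed version's API):

```python
import agentops
from crewai import Agent, Crew, Task

agentops.init()  # reads AGENTOPS_API_KEY from the environment

greeter = Agent(
    role="Greeter",
    goal="Say hello",
    backstory="A friendly test agent.",
)
hello_task = Task(
    description="Write a one-line greeting.",
    expected_output="A single greeting sentence.",
    agent=greeter,
)

crew = Crew(agents=[greeter], tasks=[hello_task])
result = crew.kickoff()

agentops.end_session("Success")  # close out the tracked session
```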

### Crew + AgentOps Examples

<CardGroup cols={3}>
  <Card
    title="Job Posting"
    color="#F3A78B"
    href="https://github.com/joaomdmoura/crewAI-examples/tree/main/job-posting"
    icon="briefcase"
    iconType="solid"
  >
    Example of a Crew agent that generates job posts.
  </Card>
  <Card
    title="Markdown Validator"
    color="#F3A78B"
    href="https://github.com/joaomdmoura/crewAI-examples/tree/main/markdown_validator"
    icon="markdown"
    iconType="solid"
  >
    Example of a Crew agent that validates Markdown files.
  </Card>
  <Card
    title="Instagram Post"
    color="#F3A78B"
    href="https://github.com/joaomdmoura/crewAI-examples/tree/main/instagram_post"
    icon="square-instagram"
    iconType="brands"
  >
    Example of a Crew agent that generates Instagram posts.
  </Card>
</CardGroup>

### Further Information

To get started, create an [AgentOps account](https://agentops.ai/?=crew).

For feature requests or bug reports, please reach out to the AgentOps team on the [AgentOps Repo](https://github.com/AgentOps-AI/agentops).

#### Extra links

<a href="https://twitter.com/agentopsai/">🐦 Twitter</a>
<span> • </span>
<a href="https://discord.gg/JHPt4C7r">📢 Discord</a>
<span> • </span>
<a href="https://app.agentops.ai/?=crew">🖇️ AgentOps Dashboard</a>
<span> • </span>
<a href="https://docs.agentops.ai/introduction">📙 Documentation</a>

docs/how-to/arize-phoenix-observability.mdx (new file, 145 lines)
@@ -0,0 +1,145 @@
---
title: Arize Phoenix
description: Arize Phoenix integration for CrewAI with OpenTelemetry and OpenInference
icon: magnifying-glass-chart
---

# Arize Phoenix Integration

This guide demonstrates how to integrate **Arize Phoenix** with **CrewAI** using OpenTelemetry via the [OpenInference](https://github.com/openinference/openinference) SDK. By the end of this guide, you will be able to trace and easily debug your CrewAI agents.

> **What is Arize Phoenix?** [Arize Phoenix](https://phoenix.arize.com) is an LLM observability platform that provides tracing and evaluation for AI applications.

[Watch the video guide](https://www.youtube.com/watch?v=Yc5q3l6F7Ww)

## Get Started

We'll walk through a simple example of using CrewAI and integrating it with Arize Phoenix via OpenTelemetry using OpenInference.

You can also access this guide on [Google Colab](https://colab.research.google.com/github/Arize-ai/phoenix/blob/main/tutorials/tracing/crewai_tracing_tutorial.ipynb).

### Step 1: Install Dependencies

```bash
pip install openinference-instrumentation-crewai crewai crewai-tools arize-phoenix-otel
```

### Step 2: Set Up Environment Variables

Set up your Phoenix Cloud API keys and configure OpenTelemetry to send traces to Phoenix. Phoenix Cloud is a hosted version of Arize Phoenix, but it is not required to use this integration.

You can get your free Serper API key [here](https://serper.dev/).

```python
import os
from getpass import getpass

# Get your Phoenix Cloud credentials
PHOENIX_API_KEY = getpass("🔑 Enter your Phoenix Cloud API Key: ")

# Get API keys for services
OPENAI_API_KEY = getpass("🔑 Enter your OpenAI API key: ")
SERPER_API_KEY = getpass("🔑 Enter your Serper API key: ")

# Set environment variables
os.environ["PHOENIX_CLIENT_HEADERS"] = f"api_key={PHOENIX_API_KEY}"
os.environ["PHOENIX_COLLECTOR_ENDPOINT"] = "https://app.phoenix.arize.com"  # Phoenix Cloud; change this to your own endpoint if you are using a self-hosted instance
os.environ["OPENAI_API_KEY"] = OPENAI_API_KEY
os.environ["SERPER_API_KEY"] = SERPER_API_KEY
```

### Step 3: Initialize OpenTelemetry with Phoenix

Initialize the OpenInference OpenTelemetry instrumentation SDK to start capturing traces and send them to Phoenix.

```python
from phoenix.otel import register

tracer_provider = register(
    project_name="crewai-tracing-demo",
    auto_instrument=True,
)
```

### Step 4: Create a CrewAI Application

We'll create a CrewAI application where two agents collaborate to research and write a blog post about AI advancements.

```python
from crewai import Agent, Crew, Process, Task
from crewai_tools import SerperDevTool

search_tool = SerperDevTool()

# Define your agents with roles and goals
researcher = Agent(
    role="Senior Research Analyst",
    goal="Uncover cutting-edge developments in AI and data science",
    backstory="""You work at a leading tech think tank.
    Your expertise lies in identifying emerging trends.
    You have a knack for dissecting complex data and presenting actionable insights.""",
    verbose=True,
    allow_delegation=False,
    # You can pass an optional llm attribute specifying what model you want to use.
    # llm=ChatOpenAI(model_name="gpt-3.5", temperature=0.7),
    tools=[search_tool],
)
writer = Agent(
    role="Tech Content Strategist",
    goal="Craft compelling content on tech advancements",
    backstory="""You are a renowned Content Strategist, known for your insightful and engaging articles.
    You transform complex concepts into compelling narratives.""",
    verbose=True,
    allow_delegation=True,
)

# Create tasks for your agents
task1 = Task(
    description="""Conduct a comprehensive analysis of the latest advancements in AI in 2024.
    Identify key trends, breakthrough technologies, and potential industry impacts.""",
    expected_output="Full analysis report in bullet points",
    agent=researcher,
)

task2 = Task(
    description="""Using the insights provided, develop an engaging blog
    post that highlights the most significant AI advancements.
    Your post should be informative yet accessible, catering to a tech-savvy audience.
    Make it sound cool, avoid complex words so it doesn't sound like AI.""",
    expected_output="Full blog post of at least 4 paragraphs",
    agent=writer,
)

# Instantiate your crew with a sequential process
crew = Crew(
    agents=[researcher, writer], tasks=[task1, task2], verbose=1, process=Process.sequential
)

# Get your crew to work!
result = crew.kickoff()

print("######################")
print(result)
```

### Step 5: View Traces in Phoenix

After running the agent, you can view the traces generated by your CrewAI application in Phoenix. You should see detailed steps of the agent interactions and LLM calls, which can help you debug and optimize your AI agents.

Log into your Phoenix Cloud account and navigate to the project you specified in the `project_name` parameter. You'll see a timeline view of your trace with all the agent interactions, tool usages, and LLM calls.

### Version Compatibility Information
- Python 3.8+
- CrewAI >= 0.86.0
- Arize Phoenix >= 7.0.1
- OpenTelemetry SDK >= 1.31.0

### References
- [Phoenix Documentation](https://docs.arize.com/phoenix/) - Overview of the Phoenix platform.
- [CrewAI Documentation](https://docs.crewai.com/) - Overview of the CrewAI framework.
- [OpenTelemetry Docs](https://opentelemetry.io/docs/) - OpenTelemetry guide
- [OpenInference GitHub](https://github.com/openinference/openinference) - Source code for OpenInference SDK.
docs/how-to/before-and-after-kickoff-hooks.mdx (new file, 59 lines)
@@ -0,0 +1,59 @@
---
title: Before and After Kickoff Hooks
description: Learn how to use before and after kickoff hooks in CrewAI
---

CrewAI provides hooks that allow you to execute code before and after a crew's kickoff. These hooks are useful for preprocessing inputs or post-processing results.

## Before Kickoff Hook

The before kickoff hook is executed before the crew starts its tasks. It receives the input dictionary and can modify it before passing it to the crew. You can use this hook to set up your environment, load necessary data, or preprocess your inputs. This is useful in scenarios where the input data might need enrichment or validation before being processed by the crew.

Here's an example of defining a before kickoff function in your `crew.py`:

```python
from crewai.project import CrewBase, before_kickoff

@CrewBase
class MyCrew:
    @before_kickoff
    def prepare_data(self, inputs):
        # Preprocess or modify inputs
        inputs['processed'] = True
        return inputs

    # ...
```

In this example, the `prepare_data` function modifies the inputs by adding a new key-value pair indicating that the inputs have been processed.

## After Kickoff Hook

The after kickoff hook is executed after the crew has completed its tasks. It receives the result object, which contains the outputs of the crew's execution. This hook is ideal for post-processing results, such as logging, data transformation, or further analysis.

Here's how you can define an after kickoff function in your `crew.py`:

```python
from crewai.project import CrewBase, after_kickoff

@CrewBase
class MyCrew:
    @after_kickoff
    def log_results(self, result):
        # Log or modify the results
        print("Crew execution completed with result:", result)
        return result

    # ...
```

In the `log_results` function, the results of the crew execution are simply printed out. You can extend this to perform more complex operations such as sending notifications or integrating with other services.

## Utilizing Both Hooks

Both hooks can be used together to provide a comprehensive setup and teardown process for your crew's execution (a combined sketch follows below). They are particularly useful in maintaining clean code architecture by separating concerns and enhancing the modularity of your CrewAI implementations.
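
For instance, a minimal sketch combining both hooks in one crew class (assuming the same `crewai.project` imports used above):

```python
from crewai.project import CrewBase, after_kickoff, before_kickoff

@CrewBase
class MyCrew:
    @before_kickoff
    def prepare_data(self, inputs):
        # Enrich inputs before the crew runs
        inputs['processed'] = True
        return inputs

    @after_kickoff
    def log_results(self, result):
        # Post-process the finished run
        print("Crew execution completed with result:", result)
        return result
```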

## Conclusion

Before and after kickoff hooks in CrewAI offer powerful ways to interact with the lifecycle of a crew's execution. By understanding and utilizing these hooks, you can greatly enhance the robustness and flexibility of your AI agents.
docs/how-to/coding-agents.mdx (new file, 95 lines)
@@ -0,0 +1,95 @@
---
title: Coding Agents
description: Learn how to enable your CrewAI Agents to write and execute code, and explore advanced features for enhanced functionality.
icon: rectangle-code
---

## Introduction

CrewAI Agents now have the powerful ability to write and execute code, significantly enhancing their problem-solving capabilities. This feature is particularly useful for tasks that require computational or programmatic solutions.

## Enabling Code Execution

To enable code execution for an agent, set the `allow_code_execution` parameter to `True` when creating the agent.

Here's an example:

```python Code
from crewai import Agent

coding_agent = Agent(
    role="Senior Python Developer",
    goal="Craft well-designed and thought-out code",
    backstory="You are a senior Python developer with extensive experience in software architecture and best practices.",
    allow_code_execution=True
)
```

<Note>
Note that the `allow_code_execution` parameter defaults to `False`.
</Note>

## Important Considerations

1. **Model Selection**: It is strongly recommended to use more capable models like Claude 3.5 Sonnet and GPT-4 when enabling code execution.
These models have a better understanding of programming concepts and are more likely to generate correct and efficient code.

2. **Error Handling**: The code execution feature includes error handling. If executed code raises an exception, the agent receives the error message and can attempt to correct the code or
provide alternative solutions. The `max_retry_limit` parameter, which defaults to 2, controls the maximum number of retries for a task (see the sketch after this list).

3. **Dependencies**: To use the code execution feature, you need to install the `crewai_tools` package. If it is not installed, the agent will log an info message:
"Coding tools not available. Install crewai_tools."
## Code Execution Process

When an agent with code execution enabled encounters a task requiring programming:

<Steps>
  <Step title="Task Analysis">
    The agent analyzes the task and determines that code execution is necessary.
  </Step>
  <Step title="Code Formulation">
    It formulates the Python code needed to solve the problem.
  </Step>
  <Step title="Code Execution">
    The code is sent to the internal code execution tool (`CodeInterpreterTool`).
  </Step>
  <Step title="Result Interpretation">
    The agent interprets the result and incorporates it into its response or uses it for further problem-solving.
  </Step>
</Steps>

## Example Usage

Here's a detailed example of creating an agent with code execution capabilities and using it in a task:

```python Code
from crewai import Agent, Task, Crew

# Create an agent with code execution enabled
coding_agent = Agent(
    role="Python Data Analyst",
    goal="Analyze data and provide insights using Python",
    backstory="You are an experienced data analyst with strong Python skills.",
    allow_code_execution=True
)

# Create a task that requires code execution
data_analysis_task = Task(
    description="Analyze the given dataset and calculate the average age of participants.",
    expected_output="The average age of the participants.",
    agent=coding_agent
)

# Create a crew and add the task
analysis_crew = Crew(
    agents=[coding_agent],
    tasks=[data_analysis_task]
)

# Execute the crew
result = analysis_crew.kickoff()

print(result)
```

In this example, the `coding_agent` can write and execute Python code to perform data analysis tasks.
docs/how-to/conditional-tasks.mdx
@@ -0,0 +1,89 @@
---
title: Conditional Tasks
description: Learn how to use conditional tasks in a crewAI kickoff
icon: diagram-subtask
---

## Introduction

Conditional Tasks in crewAI allow for dynamic workflow adaptation based on the outcomes of previous tasks.
This powerful feature enables crews to make decisions and execute tasks selectively, enhancing the flexibility and efficiency of your AI-driven processes.

## Example Usage

```python Code
from typing import List
from pydantic import BaseModel
from crewai import Agent, Crew
from crewai.tasks.conditional_task import ConditionalTask
from crewai.tasks.task_output import TaskOutput
from crewai.task import Task
from crewai_tools import SerperDevTool

# Define a condition function for the conditional task.
# If it returns False, the task is skipped; if it returns True, the task executes.
def is_data_missing(output: TaskOutput) -> bool:
    return len(output.pydantic.events) < 10  # run only if fewer than 10 events were fetched

# Define the agents
data_fetcher_agent = Agent(
    role="Data Fetcher",
    goal="Fetch data online using Serper tool",
    backstory="Backstory 1",
    verbose=True,
    tools=[SerperDevTool()]
)

data_processor_agent = Agent(
    role="Data Processor",
    goal="Process fetched data",
    backstory="Backstory 2",
    verbose=True
)

summary_generator_agent = Agent(
    role="Summary Generator",
    goal="Generate summary from fetched data",
    backstory="Backstory 3",
    verbose=True
)

class EventOutput(BaseModel):
    events: List[str]

task1 = Task(
    description="Fetch data about events in San Francisco using Serper tool",
    expected_output="List of 10 things to do in SF this week",
    agent=data_fetcher_agent,
    output_pydantic=EventOutput,
)

conditional_task = ConditionalTask(
    description="""
        Check if data is missing. If we have fewer than 10 events,
        fetch more events using the Serper tool so that
        we have a total of 10 events in SF this week.
        """,
    expected_output="List of 10 things to do in SF this week",
    condition=is_data_missing,
    agent=data_processor_agent,
)

task3 = Task(
    description="Generate summary of events in San Francisco from fetched data",
    expected_output="A complete summary of the fetched events in San Francisco this week.",
    agent=summary_generator_agent,
)

# Create a crew with the tasks
crew = Crew(
    agents=[data_fetcher_agent, data_processor_agent, summary_generator_agent],
    tasks=[task1, conditional_task, task3],
    verbose=True,
    planning=True
)

# Run the crew
result = crew.kickoff()
print("results", result)
```
docs/how-to/create-custom-tools.mdx
@@ -0,0 +1,69 @@
---
title: Create Custom Tools
description: Comprehensive guide on crafting, using, and managing custom tools within the CrewAI framework, including new functionalities and error handling.
icon: hammer
---

## Creating and Utilizing Tools in CrewAI

This guide provides detailed instructions on creating custom tools for the CrewAI framework and how to efficiently manage and utilize these tools,
incorporating the latest functionalities such as tool delegation, error handling, and dynamic tool calling. It also highlights the importance of collaboration tools,
enabling agents to perform a wide range of actions.

### Subclassing `BaseTool`

To create a personalized tool, inherit from `BaseTool` and define the necessary attributes, including the `args_schema` for input validation, and the `_run` method.

```python Code
from typing import Type
from crewai.tools import BaseTool
from pydantic import BaseModel, Field

class MyToolInput(BaseModel):
    """Input schema for MyCustomTool."""
    argument: str = Field(..., description="Description of the argument.")

class MyCustomTool(BaseTool):
    name: str = "Name of my tool"
    description: str = "What this tool does. It's vital for effective utilization."
    args_schema: Type[BaseModel] = MyToolInput

    def _run(self, argument: str) -> str:
        # Your tool's logic here
        return "Tool's result"
```
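Once defined, the tool can be handed to any agent via the `tools` list. A minimal usage sketch (the role, goal, and backstory below are placeholders):

```python Code
from crewai import Agent

analyst = Agent(
    role="Analyst",
    goal="Answer questions using the custom tool",
    backstory="An analyst who relies on custom tooling.",
    tools=[MyCustomTool()],  # the subclass defined above
)
```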
### Using the `tool` Decorator

Alternatively, you can use the tool decorator `@tool`. This approach allows you to define the tool's attributes and functionality directly within a function,
offering a concise and efficient way to create specialized tools tailored to your needs.

```python Code
from crewai.tools import tool

@tool("Tool Name")
def my_simple_tool(question: str) -> str:
    """Tool description for clarity."""
    # Tool logic here
    return "Tool output"
```

### Defining a Cache Function for the Tool

To optimize tool performance with caching, define custom caching strategies using the `cache_function` attribute.

```python Code
@tool("Tool with Caching")
def cached_tool(argument: str) -> str:
    """Tool functionality description."""
    return "Cacheable result"

def my_cache_strategy(arguments: dict, result: str) -> bool:
    # Custom caching logic: for example, only cache non-empty results
    return bool(result)

cached_tool.cache_function = my_cache_strategy
```

By adhering to these guidelines and incorporating new functionalities and collaboration tools into your tool creation and management processes,
you can leverage the full capabilities of the CrewAI framework, enhancing both the development experience and the efficiency of your AI agents.
docs/how-to/custom-manager-agent.mdx
@@ -0,0 +1,90 @@
---
title: Create Your Own Manager Agent
description: Learn how to set a custom agent as the manager in CrewAI, providing more control over task management and coordination.
icon: user-shield
---

# Setting a Specific Agent as Manager in CrewAI

CrewAI allows users to set a specific agent as the manager of the crew, providing more control over the management and coordination of tasks.
This feature enables the customization of the managerial role to better fit your project's requirements.

## Using the `manager_agent` Attribute

### Custom Manager Agent

The `manager_agent` attribute allows you to define a custom agent to manage the crew. This agent will oversee the entire process, ensuring that tasks are completed efficiently and to the highest standard.

### Example

```python Code
from crewai import Agent, Task, Crew, Process

# Define your agents
researcher = Agent(
    role="Researcher",
    goal="Conduct thorough research and analysis on AI and AI agents",
    backstory="You're an expert researcher, specialized in technology, software engineering, AI, and startups. You work as a freelancer and are currently researching for a new client.",
    allow_delegation=False,
)

writer = Agent(
    role="Senior Writer",
    goal="Create compelling content about AI and AI agents",
    backstory="You're a senior writer, specialized in technology, software engineering, AI, and startups. You work as a freelancer and are currently writing content for a new client.",
    allow_delegation=False,
)

# Define your task
task = Task(
    description="Generate a list of 5 interesting ideas for an article, then write one captivating paragraph for each idea that showcases the potential of a full article on this topic. Return the list of ideas with their paragraphs and your notes.",
    expected_output="5 bullet points, each with a paragraph and accompanying notes.",
)

# Define the manager agent
manager = Agent(
    role="Project Manager",
    goal="Efficiently manage the crew and ensure high-quality task completion",
    backstory="You're an experienced project manager, skilled in overseeing complex projects and guiding teams to success. Your role is to coordinate the efforts of the crew members, ensuring that each task is completed on time and to the highest standard.",
    allow_delegation=True,
)

# Instantiate your crew with a custom manager
crew = Crew(
    agents=[researcher, writer],
    tasks=[task],
    manager_agent=manager,
    process=Process.hierarchical,
)

# Start the crew's work
result = crew.kickoff()
```

## Benefits of a Custom Manager Agent

- **Enhanced Control**: Tailor the management approach to fit the specific needs of your project.
- **Improved Coordination**: Ensure efficient task coordination and management by an experienced agent.
- **Customizable Management**: Define managerial roles and responsibilities that align with your project's goals.

## Setting a Manager LLM

If you're using the hierarchical process and don't want to set a custom manager agent, you can specify the language model for the manager:

```python Code
from crewai import LLM

manager_llm = LLM(model="gpt-4o")

crew = Crew(
    agents=[researcher, writer],
    tasks=[task],
    process=Process.hierarchical,
    manager_llm=manager_llm
)
```

<Note>
Either `manager_agent` or `manager_llm` must be set when using the hierarchical process.
</Note>
docs/how-to/customizing-agents.mdx
@@ -0,0 +1,111 @@
---
title: Customize Agents
description: A comprehensive guide to tailoring agents for specific roles, tasks, and advanced customizations within the CrewAI framework.
icon: user-pen
---

## Customizable Attributes

Crafting an efficient CrewAI team hinges on the ability to dynamically tailor your AI agents to meet the unique requirements of any project. This section covers the foundational attributes you can customize.

### Key Attributes for Customization

| Attribute | Description |
|:----------|:------------|
| **Role** | Specifies the agent's job within the crew, such as 'Analyst' or 'Customer Service Rep'. |
| **Goal** | Defines the agent's objectives, aligned with its role and the crew's overarching mission. |
| **Backstory** | Provides depth to the agent's persona, enhancing motivations and engagements within the crew. |
| **Tools** *(Optional)* | Represents the capabilities or methods the agent uses for tasks, from simple functions to complex integrations. |
| **Cache** *(Optional)* | Determines if the agent should use a cache for tool usage. |
| **Max RPM** | Sets the maximum requests per minute (`max_rpm`). Can be set to `None` for unlimited requests to external services. |
| **Verbose** *(Optional)* | Enables detailed logging for debugging and optimization, providing insights into execution processes. |
| **Allow Delegation** *(Optional)* | Controls task delegation to other agents; the default is `False`. |
| **Max Iter** *(Optional)* | Limits the maximum number of iterations (`max_iter`) for a task to prevent infinite loops, with a default of 25. |
| **Max Execution Time** *(Optional)* | Sets the maximum time allowed for an agent to complete a task. |
| **System Template** *(Optional)* | Defines the system format for the agent. |
| **Prompt Template** *(Optional)* | Defines the prompt format for the agent. |
| **Response Template** *(Optional)* | Defines the response format for the agent. |
| **Use System Prompt** *(Optional)* | Controls whether the agent will use a system prompt during task execution. |
| **Respect Context Window** | Enables a sliding context window by default, maintaining context size. |
| **Max Retry Limit** | Sets the maximum number of retries (`max_retry_limit`) for an agent in case of errors. |

## Advanced Customization Options

Beyond the basic attributes, CrewAI allows for deeper customization to enhance an agent's behavior and capabilities significantly.

### Language Model Customization

Agents can be customized with specific language models (`llm`) and function-calling language models (`function_calling_llm`), offering advanced control over their processing and decision-making abilities.
Setting the `function_calling_llm` overrides the crew's default function-calling language model, providing a greater degree of customization.
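A minimal sketch of the idea (the model names below are placeholders; use the `LLM` class described in the LLM connection docs):

```python Code
from crewai import Agent, LLM

agent = Agent(
    role='Research Analyst',
    goal='Provide up-to-date market analysis',
    backstory='An expert analyst with a keen eye for market trends.',
    llm=LLM(model="gpt-4o"),                        # main reasoning model
    function_calling_llm=LLM(model="gpt-4o-mini"),  # overrides the crew-level function-calling model
)
```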
## Performance and Debugging Settings

Adjusting an agent's performance and monitoring its operations are crucial for efficient task execution.

### Verbose Mode and RPM Limit

- **Verbose Mode**: Enables detailed logging of an agent's actions, useful for debugging and optimization. Specifically, it provides insights into agent execution processes, aiding in the optimization of performance.
- **RPM Limit**: Sets the maximum number of requests per minute (`max_rpm`). This attribute is optional and can be set to `None` for no limit, allowing for unlimited queries to external services if needed.

### Maximum Iterations for Task Execution

The `max_iter` attribute allows users to define the maximum number of iterations an agent can perform for a single task, preventing infinite loops or excessively long executions.
The default value is set to 25, providing a balance between thoroughness and efficiency. Once the agent approaches this number, it will try its best to give a good answer.

## Customizing Agents and Tools

Agents are customized by defining their attributes and tools during initialization. Tools are critical for an agent's functionality, enabling them to perform specialized tasks.
The `tools` attribute should be an array of tools the agent can utilize, and it's initialized as an empty list by default. Tools can be added or modified after agent initialization to adapt to new requirements.

```shell
pip install 'crewai[tools]'
```

### Example: Assigning Tools to an Agent

```python Code
import os
from crewai import Agent
from crewai_tools import SerperDevTool

# Set API keys for tool initialization
os.environ["OPENAI_API_KEY"] = "Your Key"
os.environ["SERPER_API_KEY"] = "Your Key"

# Initialize a search tool
search_tool = SerperDevTool()

# Initialize the agent with advanced options
agent = Agent(
    role='Research Analyst',
    goal='Provide up-to-date market analysis',
    backstory='An expert analyst with a keen eye for market trends.',
    tools=[search_tool],
    memory=True, # Enable memory
    verbose=True,
    max_rpm=None, # No limit on requests per minute
    max_iter=25, # Default value for maximum iterations
)
```

## Delegation and Autonomy

Controlling an agent's ability to delegate tasks or ask questions is vital for tailoring its autonomy and collaborative dynamics within the CrewAI framework. By default,
the `allow_delegation` attribute is set to `False`, preventing agents from seeking assistance or delegating tasks. This default can be changed to promote collaborative problem-solving and
efficiency within the CrewAI ecosystem: enable delegation when it suits your operational requirements.

### Example: Enabling Delegation for an Agent

```python Code
agent = Agent(
    role='Content Writer',
    goal='Write engaging content on market trends',
    backstory='A seasoned writer with expertise in market analysis.',
    allow_delegation=True # Enabling delegation
)
```

## Conclusion

Customizing agents in CrewAI by setting their roles, goals, backstories, and tools, alongside advanced options like language model customization, memory, performance settings, and delegation preferences,
equips a nuanced and capable AI team ready for complex challenges.
docs/how-to/force-tool-output-as-result.mdx
@@ -0,0 +1,50 @@
---
title: Force Tool Output as Result
description: Learn how to force tool output as the result in an Agent's task in CrewAI.
icon: wrench-simple
---

## Introduction

In CrewAI, you can force the output of a tool to be used as the result of an agent's task.
This feature is useful when you want to ensure that the tool output is captured and returned as the task result, avoiding any agent modification during the task execution.

## Forcing Tool Output as Result

To force the tool output as the result of an agent's task, set the `result_as_answer` parameter to `True` when adding a tool to the agent.
This parameter ensures that the tool output is captured and returned as the task result, without any modifications by the agent.

Here's an example of how to force the tool output as the result of an agent's task:

```python Code
from crewai.agent import Agent
from my_tool import MyCustomTool

# Create a coding agent with the custom tool
coding_agent = Agent(
    role="Data Scientist",
    goal="Produce amazing reports on AI",
    backstory="You work with data and AI",
    tools=[MyCustomTool(result_as_answer=True)],
)

# Assuming the tool's execution and result population occurs within the system,
# and that `task` is a Task defined elsewhere
task_result = coding_agent.execute_task(task)
```
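For completeness, `MyCustomTool` here stands in for any `BaseTool` subclass that accepts `result_as_answer` at construction. A hypothetical sketch following the custom-tools pattern from these docs:

```python Code
from crewai.tools import BaseTool

class MyCustomTool(BaseTool):
    name: str = "Report Fetcher"
    description: str = "Fetches the raw AI report."

    def _run(self) -> str:
        # With result_as_answer=True, this return value becomes the task result verbatim
        return "Raw report content"
```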
## Workflow in Action

<Steps>
  <Step title="Task Execution">
    The agent executes the task using the tool provided.
  </Step>
  <Step title="Tool Output">
    The tool generates the output, which is captured as the task result.
  </Step>
  <Step title="Agent Interaction">
    The agent may reflect on and take learnings from the tool, but the output is not modified.
  </Step>
  <Step title="Result Return">
    The tool output is returned as the task result without any modifications.
  </Step>
</Steps>
docs/how-to/hierarchical-process.mdx
@@ -0,0 +1,112 @@
---
title: Hierarchical Process
description: A comprehensive guide to understanding and applying the hierarchical process within your CrewAI projects, updated to reflect the latest coding practices and functionalities.
icon: sitemap
---

## Introduction

The hierarchical process in CrewAI introduces a structured approach to task management, simulating traditional organizational hierarchies for efficient task delegation and execution.
This systematic workflow enhances project outcomes by ensuring tasks are handled with optimal efficiency and accuracy.

<Tip>
The hierarchical process is designed to leverage advanced models like GPT-4, optimizing token usage while handling complex tasks with greater efficiency.
</Tip>

## Hierarchical Process Overview

By default, tasks in CrewAI are managed through a sequential process. However, adopting a hierarchical approach allows for a clear hierarchy in task management,
where a 'manager' agent coordinates the workflow, delegates tasks, and validates outcomes for streamlined and effective execution. This manager agent can now be either
automatically created by CrewAI or explicitly set by the user.

### Key Features

- **Task Delegation**: A manager agent allocates tasks among crew members based on their roles and capabilities.
- **Result Validation**: The manager evaluates outcomes to ensure they meet the required standards.
- **Efficient Workflow**: Emulates corporate structures, providing an organized approach to task management.
- **System Prompt Handling**: Optionally specify whether the system should use predefined prompts.
- **Stop Words Control**: Optionally specify whether stop words should be used, supporting various models including the o1 models.
- **Context Window Respect**: Prioritize important context by enabling respect of the context window, which is now the default behavior.
- **Delegation Control**: Delegation is now disabled by default to give users explicit control.
- **Max Requests Per Minute**: Configurable option to set the maximum number of requests per minute.
- **Max Iterations**: Limit the maximum number of iterations for obtaining a final answer.

## Implementing the Hierarchical Process

To utilize the hierarchical process, it's essential to explicitly set the process attribute to `Process.hierarchical`, as the default behavior is `Process.sequential`.
Define a crew with a designated manager and establish a clear chain of command.

<Tip>
Assign tools at the agent level to facilitate task delegation and execution by the designated agents under the manager's guidance.
Tools can also be specified at the task level for precise control over tool availability during task execution.
</Tip>

<Tip>
Configuring the `manager_llm` parameter is crucial for the hierarchical process.
The system requires a manager LLM to be set up for proper function, ensuring tailored decision-making.
</Tip>

```python Code
from crewai import Crew, Process, Agent

# Define agents with a role, goal, and backstory
researcher = Agent(
    role='Researcher',
    goal='Conduct in-depth analysis',
    backstory='Experienced data analyst with a knack for uncovering hidden trends.',
)
writer = Agent(
    role='Writer',
    goal='Create engaging content',
    backstory='Creative writer passionate about storytelling in technical domains.',
)

# Establishing the crew with a hierarchical process and additional configurations
project_crew = Crew(
    tasks=[...], # Tasks to be delegated and executed under the manager's supervision
    agents=[researcher, writer],
    manager_llm="gpt-4o", # Specify which LLM the manager should use
    process=Process.hierarchical,
    planning=True,
)
```

### Using a Custom Manager Agent

Alternatively, you can create a custom manager agent with specific attributes tailored to your project's management needs. This gives you more control over the manager's behavior and capabilities.

```python Code
# Define a custom manager agent
manager = Agent(
    role="Project Manager",
    goal="Efficiently manage the crew and ensure high-quality task completion",
    backstory="You're an experienced project manager, skilled in overseeing complex projects and guiding teams to success.",
    allow_delegation=True,
)

# Use the custom manager in your crew
project_crew = Crew(
    tasks=[...],
    agents=[researcher, writer],
    manager_agent=manager, # Use your custom manager agent
    process=Process.hierarchical,
    planning=True,
)
```

<Tip>
For more details on creating and customizing a manager agent, check out the [Custom Manager Agent documentation](https://docs.crewai.com/how-to/custom-manager-agent#custom-manager-agent).
</Tip>

### Workflow in Action

1. **Task Assignment**: The manager assigns tasks strategically, considering each agent's capabilities and available tools.
2. **Execution and Review**: Agents complete their tasks with the option for asynchronous execution and callback functions for streamlined workflows.
3. **Sequential Task Progression**: Despite being a hierarchical process, tasks follow a logical order for smooth progression, facilitated by the manager's oversight.

## Conclusion

Adopting the hierarchical process in CrewAI, with the correct configurations and understanding of the system's capabilities, facilitates an organized and efficient approach to project management.
Utilize the advanced features and customizations to tailor the workflow to your specific needs, ensuring optimal task execution and project success.
docs/how-to/human-input-on-execution.mdx
@@ -0,0 +1,98 @@
---
title: Human Input on Execution
description: Integrating CrewAI with human input during execution in complex decision-making processes and leveraging the full capabilities of the agent's attributes and tools.
icon: user-check
---

## Human input in agent execution

Human input is critical in several agent execution scenarios, allowing agents to request additional information or clarification when necessary.
This feature is especially useful in complex decision-making processes or when agents require more details to complete a task effectively.

## Using human input with CrewAI

To integrate human input into agent execution, set the `human_input` flag in the task definition. When enabled, the agent prompts the user for input before delivering its final answer.
This input can provide extra context, clarify ambiguities, or validate the agent's output.

### Example:

```shell
pip install crewai
```

```python Code
import os
from crewai import Agent, Task, Crew
from crewai_tools import SerperDevTool

os.environ["SERPER_API_KEY"] = "Your Key" # serper.dev API key
os.environ["OPENAI_API_KEY"] = "Your Key"

# Loading Tools
search_tool = SerperDevTool()

# Define your agents with roles, goals, tools, and additional attributes
researcher = Agent(
    role='Senior Research Analyst',
    goal='Uncover cutting-edge developments in AI and data science',
    backstory=(
        "You are a Senior Research Analyst at a leading tech think tank. "
        "Your expertise lies in identifying emerging trends and technologies in AI and data science. "
        "You have a knack for dissecting complex data and presenting actionable insights."
    ),
    verbose=True,
    allow_delegation=False,
    tools=[search_tool]
)
writer = Agent(
    role='Tech Content Strategist',
    goal='Craft compelling content on tech advancements',
    backstory=(
        "You are a renowned Tech Content Strategist, known for your insightful and engaging articles on technology and innovation. "
        "With a deep understanding of the tech industry, you transform complex concepts into compelling narratives."
    ),
    verbose=True,
    allow_delegation=True,
    tools=[search_tool],
    cache=False, # Disable cache for this agent
)

# Create tasks for your agents
task1 = Task(
    description=(
        "Conduct a comprehensive analysis of the latest advancements in AI in 2025. "
        "Identify key trends, breakthrough technologies, and potential industry impacts. "
        "Compile your findings in a detailed report. "
        "Make sure to check with a human if the draft is good before finalizing your answer."
    ),
    expected_output='A comprehensive full report on the latest AI advancements in 2025, leave nothing out',
    agent=researcher,
    human_input=True
)

task2 = Task(
    description=(
        "Using the insights from the researcher's report, develop an engaging blog post that highlights the most significant AI advancements. "
        "Your post should be informative yet accessible, catering to a tech-savvy audience. "
        "Aim for a narrative that captures the essence of these breakthroughs and their implications for the future."
    ),
    expected_output='A compelling 3-paragraph blog post formatted as markdown about the latest AI advancements in 2025',
    agent=writer,
    human_input=True
)

# Instantiate your crew with a sequential process
crew = Crew(
    agents=[researcher, writer],
    tasks=[task1, task2],
    verbose=True,
    memory=True,
    planning=True # Enable planning feature for the crew
)

# Get your crew to work!
result = crew.kickoff()

print("######################")
print(result)
```
docs/how-to/kickoff-async.mdx
@@ -0,0 +1,122 @@
---
title: Kickoff Crew Asynchronously
description: Kickoff a Crew Asynchronously
icon: rocket-launch
---

## Introduction

CrewAI provides the ability to kickoff a crew asynchronously, allowing you to start the crew execution in a non-blocking manner.
This feature is particularly useful when you want to run multiple crews concurrently or when you need to perform other tasks while the crew is executing.

## Asynchronous Crew Execution

To kickoff a crew asynchronously, use the `kickoff_async()` method. This coroutine starts the crew execution without blocking, allowing your code to continue with other work while the crew runs.

### Method Signature

```python Code
async def kickoff_async(self, inputs: dict) -> CrewOutput:
```

### Parameters

- `inputs` (dict): A dictionary containing the input data required for the tasks.

### Returns

- `CrewOutput`: An object representing the result of the crew execution.

## Potential Use Cases

- **Parallel Content Generation**: Kickoff multiple independent crews asynchronously, each responsible for generating content on different topics. For example, one crew might research and draft an article on AI trends, while another crew generates social media posts about a new product launch. Each crew operates independently, allowing content production to scale efficiently.

- **Concurrent Market Research Tasks**: Launch multiple crews asynchronously to conduct market research in parallel. One crew might analyze industry trends, while another examines competitor strategies, and yet another evaluates consumer sentiment. Each crew independently completes its task, enabling faster and more comprehensive insights.

- **Independent Travel Planning Modules**: Execute separate crews to independently plan different aspects of a trip. One crew might handle flight options, another handles accommodation, and a third plans activities. Each crew works asynchronously, allowing various components of the trip to be planned simultaneously and independently for faster results.

## Example: Single Asynchronous Crew Execution

Here's an example of how to kickoff a crew asynchronously using asyncio and awaiting the result:

```python Code
import asyncio
from crewai import Crew, Agent, Task

# Create an agent with code execution enabled
coding_agent = Agent(
    role="Python Data Analyst",
    goal="Analyze data and provide insights using Python",
    backstory="You are an experienced data analyst with strong Python skills.",
    allow_code_execution=True
)

# Create a task that requires code execution
data_analysis_task = Task(
    description="Analyze the given dataset and calculate the average age of participants. Ages: {ages}",
    agent=coding_agent,
    expected_output="The average age of the participants."
)

# Create a crew and add the task
analysis_crew = Crew(
    agents=[coding_agent],
    tasks=[data_analysis_task]
)

# Async function to kickoff the crew asynchronously
async def async_crew_execution():
    result = await analysis_crew.kickoff_async(inputs={"ages": [25, 30, 35, 40, 45]})
    print("Crew Result:", result)

# Run the async function
asyncio.run(async_crew_execution())
```

## Example: Multiple Asynchronous Crew Executions

In this example, we'll show how to kickoff multiple crews asynchronously and wait for all of them to complete using `asyncio.gather()`:

```python Code
import asyncio
from crewai import Crew, Agent, Task

# Create an agent with code execution enabled
coding_agent = Agent(
    role="Python Data Analyst",
    goal="Analyze data and provide insights using Python",
    backstory="You are an experienced data analyst with strong Python skills.",
    allow_code_execution=True
)

# Create tasks that require code execution
task_1 = Task(
    description="Analyze the first dataset and calculate the average age of participants. Ages: {ages}",
    agent=coding_agent,
    expected_output="The average age of the participants."
)

task_2 = Task(
    description="Analyze the second dataset and calculate the average age of participants. Ages: {ages}",
    agent=coding_agent,
    expected_output="The average age of the participants."
)

# Create two crews and add tasks
crew_1 = Crew(agents=[coding_agent], tasks=[task_1])
crew_2 = Crew(agents=[coding_agent], tasks=[task_2])

# Async function to kickoff multiple crews asynchronously and wait for all to finish
async def async_multiple_crews():
    # Each call returns a coroutine; gather schedules them concurrently
    result_1 = crew_1.kickoff_async(inputs={"ages": [25, 30, 35, 40, 45]})
    result_2 = crew_2.kickoff_async(inputs={"ages": [20, 22, 24, 28, 30]})

    # Wait for both crews to finish
    results = await asyncio.gather(result_1, result_2)

    for i, result in enumerate(results, 1):
        print(f"Crew {i} Result:", result)

# Run the async function
asyncio.run(async_multiple_crews())
```
docs/how-to/kickoff-for-each.mdx
@@ -0,0 +1,53 @@
---
title: Kickoff Crew for Each
description: Kickoff Crew for Each Item in a List
icon: at
---

## Introduction

CrewAI provides the ability to kickoff a crew for each item in a list, running the same crew once per item.
This feature is particularly useful when you need to perform the same set of tasks for multiple items.

## Kicking Off a Crew for Each Item

To kickoff a crew for each item in a list, use the `kickoff_for_each()` method.
This method executes the crew for each item in the list, allowing you to process multiple items efficiently.

Here's an example of how to kickoff a crew for each item in a list:

```python Code
from crewai import Crew, Agent, Task

# Create an agent with code execution enabled
coding_agent = Agent(
    role="Python Data Analyst",
    goal="Analyze data and provide insights using Python",
    backstory="You are an experienced data analyst with strong Python skills.",
    allow_code_execution=True
)

# Create a task that requires code execution
data_analysis_task = Task(
    description="Analyze the given dataset and calculate the average age of participants. Ages: {ages}",
    agent=coding_agent,
    expected_output="The average age calculated from the dataset"
)

# Create a crew and add the task
analysis_crew = Crew(
    agents=[coding_agent],
    tasks=[data_analysis_task],
    verbose=True,
    memory=False
)

datasets = [
    { "ages": [25, 30, 35, 40, 45] },
    { "ages": [20, 25, 30, 35, 40] },
    { "ages": [30, 35, 40, 45, 50] }
]

# Execute the crew once per dataset
result = analysis_crew.kickoff_for_each(inputs=datasets)
```
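`kickoff_for_each()` returns one output per input, in the same order as `datasets`, so you can iterate over the results. A small sketch:

```python Code
# One result per dataset, in input order
for i, output in enumerate(result, 1):
    print(f"Dataset {i} result:", output)
```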
docs/how-to/langfuse-observability.mdx
@@ -0,0 +1,100 @@
---
title: Langfuse Integration
description: Learn how to integrate Langfuse with CrewAI via OpenTelemetry using OpenLit
icon: vials
---

# Integrate Langfuse with CrewAI

This notebook demonstrates how to integrate **Langfuse** with **CrewAI** using OpenTelemetry via the **OpenLit** SDK. By the end of this notebook, you will be able to trace your CrewAI applications with Langfuse for improved observability and debugging.

> **What is Langfuse?** [Langfuse](https://langfuse.com) is an open-source LLM engineering platform. It provides tracing and monitoring capabilities for LLM applications, helping developers debug, analyze, and optimize their AI systems. Langfuse integrates with various tools and frameworks via native integrations, OpenTelemetry, and APIs/SDKs.

[Watch a demo of Langfuse](https://langfuse.com/watch-demo)

## Get Started

We'll walk through a simple example of using CrewAI and integrating it with Langfuse via OpenTelemetry using OpenLit.

### Step 1: Install Dependencies

```python
%pip install langfuse openlit crewai crewai_tools
```

### Step 2: Set Up Environment Variables

Set your Langfuse API keys and configure OpenTelemetry export settings to send traces to Langfuse. Please refer to the [Langfuse OpenTelemetry Docs](https://langfuse.com/docs/opentelemetry/get-started) for more information on the Langfuse OpenTelemetry endpoint `/api/public/otel` and authentication.

```python
import os
import base64

LANGFUSE_PUBLIC_KEY="pk-lf-..."
LANGFUSE_SECRET_KEY="sk-lf-..."
LANGFUSE_AUTH=base64.b64encode(f"{LANGFUSE_PUBLIC_KEY}:{LANGFUSE_SECRET_KEY}".encode()).decode()

os.environ["OTEL_EXPORTER_OTLP_ENDPOINT"] = "https://cloud.langfuse.com/api/public/otel" # EU data region
# os.environ["OTEL_EXPORTER_OTLP_ENDPOINT"] = "https://us.cloud.langfuse.com/api/public/otel" # US data region
os.environ["OTEL_EXPORTER_OTLP_HEADERS"] = f"Authorization=Basic {LANGFUSE_AUTH}"

# your openai key
os.environ["OPENAI_API_KEY"] = "sk-..."
```

### Step 3: Initialize OpenLit

Initialize the OpenLit OpenTelemetry instrumentation SDK to start capturing OpenTelemetry traces.

```python
import openlit

openlit.init()
```

### Step 4: Create a Simple CrewAI Application

We'll create a simple CrewAI application where multiple agents collaborate to answer a user's question.

```python
from crewai import Agent, Task, Crew

from crewai_tools import (
    WebsiteSearchTool
)

web_rag_tool = WebsiteSearchTool()

writer = Agent(
    role="Writer",
    goal="You make math engaging and understandable for young children through poetry",
    backstory="You're an expert in writing haikus but you know nothing of math.",
    tools=[web_rag_tool],
)

task = Task(description=("What is {multiplication}?"),
            expected_output=("Compose a haiku that includes the answer."),
            agent=writer)

crew = Crew(
    agents=[writer],
    tasks=[task],
    share_crew=False
)
```
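Then run the crew; the task description above uses a `{multiplication}` placeholder, so pass a value via `inputs` (a short sketch, reusing the `kickoff` API shown elsewhere in these docs):

```python
# Kick off the crew so traces are actually generated
result = crew.kickoff(inputs={"multiplication": "3 * 3"})
print(result)
```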
### Step 5: See Traces in Langfuse

After running the agent, you can view the traces generated by your CrewAI application in [Langfuse](https://cloud.langfuse.com). You should see detailed steps of the LLM interactions, which can help you debug and optimize your AI agent.

_[Public example trace in Langfuse](https://cloud.langfuse.com/project/cloramnkj0002jz088vzn1ja4/traces/e2cf380ffc8d47d28da98f136140642b?timestamp=2025-02-05T15%3A12%3A02.717Z&observation=3b32338ee6a5d9af)_

## References

- [Langfuse OpenTelemetry Docs](https://langfuse.com/docs/opentelemetry/get-started)
docs/how-to/langtrace-observability.mdx
@@ -0,0 +1,72 @@
---
title: Langtrace Integration
description: How to monitor cost, latency, and performance of CrewAI Agents using Langtrace, an external observability tool.
icon: chart-line
---

# Langtrace Overview

Langtrace is an open-source, external tool that helps you set up observability and evaluations for Large Language Models (LLMs), LLM frameworks, and Vector Databases.
While not built directly into CrewAI, Langtrace can be used alongside CrewAI to gain deep visibility into the cost, latency, and performance of your CrewAI Agents.
This integration allows you to log hyperparameters, monitor performance regressions, and establish a process for continuous improvement of your Agents.

## Setup Instructions

<Steps>
  <Step title="Sign up for Langtrace">
    Sign up by visiting [https://langtrace.ai/signup](https://langtrace.ai/signup).
  </Step>
  <Step title="Create a project">
    Set the project type to `CrewAI` and generate an API key.
  </Step>
  <Step title="Install Langtrace in your CrewAI project">
    Use the following command:

    ```bash
    pip install langtrace-python-sdk
    ```
  </Step>
  <Step title="Import Langtrace">
    Import and initialize Langtrace at the beginning of your script, before any CrewAI imports:

    ```python
    from langtrace_python_sdk import langtrace
    langtrace.init(api_key='<LANGTRACE_API_KEY>')

    # Now import CrewAI modules
    from crewai import Agent, Task, Crew
    ```
  </Step>
</Steps>

### Features and Their Application to CrewAI

1. **LLM Token and Cost Tracking**
   - Monitor the token usage and associated costs for each CrewAI agent interaction.

2. **Trace Graph for Execution Steps**
   - Visualize the execution flow of your CrewAI tasks, including latency and logs.
   - Useful for identifying bottlenecks in your agent workflows.

3. **Dataset Curation with Manual Annotation**
   - Create datasets from your CrewAI task outputs for future training or evaluation.

4. **Prompt Versioning and Management**
   - Keep track of different versions of prompts used in your CrewAI agents.
   - Useful for A/B testing and optimizing agent performance.

5. **Prompt Playground with Model Comparisons**
   - Test and compare different prompts and models for your CrewAI agents before deployment.

6. **Testing and Evaluations**
   - Set up automated tests for your CrewAI agents and tasks.
docs/how-to/llm-connections.mdx
@@ -0,0 +1,185 @@
---
title: Connect to any LLM
description: Comprehensive guide on integrating CrewAI with various Large Language Models (LLMs) using LiteLLM, including supported providers and configuration options.
icon: brain-circuit
---

## Connect CrewAI to LLMs

CrewAI uses LiteLLM to connect to a wide variety of Language Models (LLMs). This integration provides extensive versatility, allowing you to use models from numerous providers with a simple, unified interface.

<Note>
By default, CrewAI uses the `gpt-4o-mini` model. This is determined by the `OPENAI_MODEL_NAME` environment variable, which defaults to "gpt-4o-mini" if not set.
You can easily configure your agents to use a different model or provider as described in this guide.
</Note>

## Supported Providers

LiteLLM supports a wide range of providers, including but not limited to:

- OpenAI
- Anthropic
- Google (Vertex AI, Gemini)
- Azure OpenAI
- AWS (Bedrock, SageMaker)
- Cohere
- VoyageAI
- Hugging Face
- Ollama
- Mistral AI
- Replicate
- Together AI
- AI21
- Cloudflare Workers AI
- DeepInfra
- Groq
- SambaNova
- [NVIDIA NIMs](https://docs.api.nvidia.com/nim/reference/models-1)
- And many more!

For a complete and up-to-date list of supported providers, please refer to the [LiteLLM Providers documentation](https://docs.litellm.ai/docs/providers).

## Changing the LLM

To use a different LLM with your CrewAI agents, you have several options:

<Tabs>
  <Tab title="Using a String Identifier">
    Pass the model name as a string when initializing the agent:
    <CodeGroup>
      ```python Code
      from crewai import Agent

      # Using OpenAI's GPT-4
      openai_agent = Agent(
          role='OpenAI Expert',
          goal='Provide insights using GPT-4',
          backstory="An AI assistant powered by OpenAI's latest model.",
          llm='gpt-4'
      )

      # Using Anthropic's Claude
      claude_agent = Agent(
          role='Anthropic Expert',
          goal='Analyze data using Claude',
          backstory="An AI assistant leveraging Anthropic's language model.",
          llm='claude-2'
      )
      ```
    </CodeGroup>
  </Tab>
  <Tab title="Using the LLM Class">
    For more detailed configuration, use the LLM class:
    <CodeGroup>
      ```python Code
      from crewai import Agent, LLM

      llm = LLM(
          model="gpt-4",
          temperature=0.7,
          base_url="https://api.openai.com/v1",
          api_key="your-api-key-here"
      )

      agent = Agent(
          role='Customized LLM Expert',
          goal='Provide tailored responses',
          backstory="An AI assistant with custom LLM settings.",
          llm=llm
      )
      ```
    </CodeGroup>
  </Tab>
</Tabs>

## Configuration Options

When configuring an LLM for your agent, you have access to a wide range of parameters:

| Parameter | Type | Description |
|:----------|:-----:|:------------|
| **model** | `str` | The name of the model to use (e.g., "gpt-4", "claude-2") |
| **temperature** | `float` | Controls randomness in output (0.0 to 1.0) |
| **max_tokens** | `int` | Maximum number of tokens to generate |
| **top_p** | `float` | Controls diversity of output (0.0 to 1.0) |
| **frequency_penalty** | `float` | Penalizes new tokens based on their frequency in the text so far |
| **presence_penalty** | `float` | Penalizes new tokens based on their presence in the text so far |
| **stop** | `str`, `List[str]` | Sequence(s) to stop generation |
| **base_url** | `str` | The base URL for the API endpoint |
| **api_key** | `str` | Your API key for authentication |

For a complete list of parameters and their descriptions, refer to the LLM class documentation.

## Connecting to OpenAI-Compatible LLMs

You can connect to OpenAI-compatible LLMs using either environment variables or by setting specific attributes on the LLM class:

<Tabs>
  <Tab title="Using Environment Variables">
    <CodeGroup>
      ```python Code
      import os

      os.environ["OPENAI_API_KEY"] = "your-api-key"
      os.environ["OPENAI_API_BASE"] = "https://api.your-provider.com/v1"
      os.environ["OPENAI_MODEL_NAME"] = "your-model-name"
      ```
    </CodeGroup>
  </Tab>
  <Tab title="Using LLM Class Attributes">
    <CodeGroup>
      ```python Code
      llm = LLM(
          model="custom-model-name",
          api_key="your-api-key",
          base_url="https://api.your-provider.com/v1"
      )
      agent = Agent(llm=llm, ...)
      ```
    </CodeGroup>
  </Tab>
</Tabs>

## Using Local Models with Ollama

For local models like those provided by Ollama:

<Steps>
  <Step title="Download and install Ollama">
    [Click here to download and install Ollama](https://ollama.com/download)
  </Step>
  <Step title="Pull the desired model">
    For example, run `ollama pull llama3.2` to download the model.
  </Step>
  <Step title="Configure your agent">
    <CodeGroup>
      ```python Code
      agent = Agent(
          role='Local AI Expert',
          goal='Process information using a local model',
          backstory="An AI assistant running on local hardware.",
          llm=LLM(model="ollama/llama3.2", base_url="http://localhost:11434")
      )
      ```
    </CodeGroup>
  </Step>
</Steps>

## Changing the Base API URL

You can change the base API URL for any LLM provider by setting the `base_url` parameter:

```python Code
llm = LLM(
    model="custom-model-name",
    base_url="https://api.your-provider.com/v1",
    api_key="your-api-key"
)
agent = Agent(llm=llm, ...)
```

This is particularly useful when working with OpenAI-compatible APIs or when you need to specify a different endpoint for your chosen provider.

## Conclusion

By leveraging LiteLLM, CrewAI offers seamless integration with a vast array of LLMs. This flexibility allows you to choose the most suitable model for your specific needs, whether you prioritize performance, cost-efficiency, or local deployment. Remember to consult the [LiteLLM documentation](https://docs.litellm.ai/docs/) for the most up-to-date information on supported models and configuration options.
docs/how-to/mlflow-observability.mdx
@@ -0,0 +1,206 @@
|
||||
---
|
||||
title: MLflow Integration
|
||||
description: Quickly start monitoring your Agents with MLflow.
|
||||
icon: bars-staggered
|
||||
---
|
||||
|
||||
# MLflow Overview
|
||||
|
||||
[MLflow](https://mlflow.org/) is an open-source platform to assist machine learning practitioners and teams in handling the complexities of the machine learning process.
|
||||
|
||||
It provides a tracing feature that enhances LLM observability in your Generative AI applications by capturing detailed information about the execution of your application’s services.
|
||||
Tracing provides a way to record the inputs, outputs, and metadata associated with each intermediate step of a request, enabling you to easily pinpoint the source of bugs and unexpected behaviors.
|
||||
|
||||

|
||||
|
||||
### Features
|
||||
|
||||
- **Tracing Dashboard**: Monitor activities of your crewAI agents with detailed dashboards that include inputs, outputs and metadata of spans.
|
||||
- **Automated Tracing**: A fully automated integration with crewAI, which can be enabled by running `mlflow.crewai.autolog()`.
|
||||
- **Manual Trace Instrumentation with minor efforts**: Customize trace instrumentation through MLflow's high-level fluent APIs such as decorators, function wrappers and context managers.
|
||||
- **OpenTelemetry Compatibility**: MLflow Tracing supports exporting traces to an OpenTelemetry Collector, which can then be used to export traces to various backends such as Jaeger, Zipkin, and AWS X-Ray.
|
||||
- **Package and Deploy Agents**: Package and deploy your crewAI agents to an inference server with a variety of deployment targets.
|
||||
- **Securely Host LLMs**: Host multiple LLM from various providers in one unified endpoint through MFflow gateway.
|
||||
- **Evaluation**: Evaluate your crewAI agents with a wide range of metrics using a convenient API `mlflow.evaluate()`.
|
||||
## Setup Instructions

<Steps>
<Step title="Install MLflow package">
```shell
# The crewAI integration is available in mlflow>=2.19.0
pip install mlflow
```
</Step>
<Step title="Start MLflow tracking server">
```shell
# Optional, but recommended: the MLflow tracking server gives you better visualization and broader features.
mlflow server
```
</Step>
<Step title="Initialize MLflow in Your Application">
Add the following two lines to your application code:

```python
import mlflow

mlflow.crewai.autolog()

# Optional: Set a tracking URI and an experiment name if you have a tracking server
mlflow.set_tracking_uri("http://localhost:5000")
mlflow.set_experiment("CrewAI")
```

Example usage for tracing CrewAI Agents:

```python
from textwrap import dedent

from crewai import Agent, Crew, Task
from crewai.knowledge.source.string_knowledge_source import StringKnowledgeSource
from crewai_tools import WebsiteSearchTool

content = "User's name is John. He is 30 years old and lives in San Francisco."
string_source = StringKnowledgeSource(
    content=content, metadata={"preference": "personal"}
)

search_tool = WebsiteSearchTool()


class TripAgents:
    def city_selection_agent(self):
        return Agent(
            role="City Selection Expert",
            goal="Select the best city based on weather, season, and prices",
            backstory="An expert in analyzing travel data to pick ideal destinations",
            tools=[
                search_tool,
            ],
            verbose=True,
        )

    def local_expert(self):
        return Agent(
            role="Local Expert at this city",
            goal="Provide the BEST insights about the selected city",
            backstory="""A knowledgeable local guide with extensive information
            about the city, its attractions and customs""",
            tools=[search_tool],
            verbose=True,
        )


class TripTasks:
    def identify_task(self, agent, origin, cities, interests, date_range):
        return Task(
            description=dedent(
                f"""
                Analyze and select the best city for the trip based
                on specific criteria such as weather patterns, seasonal
                events, and travel costs. This task involves comparing
                multiple cities, considering factors like current weather
                conditions, upcoming cultural or seasonal events, and
                overall travel expenses.
                Your final answer must be a detailed
                report on the chosen city, and everything you found out
                about it, including the actual flight costs, weather
                forecast and attractions.

                Traveling from: {origin}
                City Options: {cities}
                Trip Date: {date_range}
                Traveler Interests: {interests}
                """
            ),
            agent=agent,
            expected_output="Detailed report on the chosen city including flight costs, weather forecast, and attractions",
        )

    def gather_task(self, agent, origin, interests, date_range):
        return Task(
            description=dedent(
                f"""
                As a local expert on this city you must compile an
                in-depth guide for someone traveling there and wanting
                to have THE BEST trip ever!
                Gather information about key attractions, local customs,
                special events, and daily activity recommendations.
                Find the best spots to go to, the kind of place only a
                local would know.
                This guide should provide a thorough overview of what
                the city has to offer, including hidden gems, cultural
                hotspots, must-visit landmarks, weather forecasts, and
                high level costs.
                The final answer must be a comprehensive city guide,
                rich in cultural insights and practical tips,
                tailored to enhance the travel experience.

                Trip Date: {date_range}
                Traveling from: {origin}
                Traveler Interests: {interests}
                """
            ),
            agent=agent,
            expected_output="Comprehensive city guide including hidden gems, cultural hotspots, and practical travel tips",
        )


class TripCrew:
    def __init__(self, origin, cities, date_range, interests):
        self.cities = cities
        self.origin = origin
        self.interests = interests
        self.date_range = date_range

    def run(self):
        agents = TripAgents()
        tasks = TripTasks()

        city_selector_agent = agents.city_selection_agent()
        local_expert_agent = agents.local_expert()

        identify_task = tasks.identify_task(
            city_selector_agent,
            self.origin,
            self.cities,
            self.interests,
            self.date_range,
        )
        gather_task = tasks.gather_task(
            local_expert_agent, self.origin, self.interests, self.date_range
        )

        crew = Crew(
            agents=[city_selector_agent, local_expert_agent],
            tasks=[identify_task, gather_task],
            verbose=True,
            memory=True,
            knowledge={
                "sources": [string_source],
                "metadata": {"preference": "personal"},
            },
        )

        result = crew.kickoff()
        return result


trip_crew = TripCrew("California", "Tokyo", "Dec 12 - Dec 20", "sports")
result = trip_crew.run()

print(result)
```
Refer to the [MLflow Tracing Documentation](https://mlflow.org/docs/latest/llms/tracing/index.html) for more configurations and use cases.
</Step>
<Step title="Visualize Activities of Agents">
Now traces for your crewAI agents are captured by MLflow.
Let's visit the MLflow tracking server to view the traces and get insights into your Agents.

Open `127.0.0.1:5000` in your browser to visit the MLflow tracking server.
<Frame caption="MLflow Tracing Dashboard">
  <img src="/images/mlflow1.png" alt="MLflow tracing example with crewai" />
</Frame>
</Step>
</Steps>
docs/how-to/multimodal-agents.mdx (Normal file, +140 lines)
@@ -0,0 +1,140 @@
---
title: Using Multimodal Agents
description: Learn how to enable and use multimodal capabilities in your agents for processing images and other non-text content within the CrewAI framework.
icon: video
---

## Using Multimodal Agents

CrewAI supports multimodal agents that can process both text and non-text content like images. This guide will show you how to enable and use multimodal capabilities in your agents.

### Enabling Multimodal Capabilities

To create a multimodal agent, simply set the `multimodal` parameter to `True` when initializing your agent:

```python
from crewai import Agent

agent = Agent(
    role="Image Analyst",
    goal="Analyze and extract insights from images",
    backstory="An expert in visual content interpretation with years of experience in image analysis",
    multimodal=True  # This enables multimodal capabilities
)
```

When you set `multimodal=True`, the agent is automatically configured with the necessary tools for handling non-text content, including the `AddImageTool`.

### Working with Images

The multimodal agent comes pre-configured with the `AddImageTool`, which allows it to process images. You don't need to add this tool manually; it's automatically included when you enable multimodal capabilities.

Here's a complete example showing how to use a multimodal agent to analyze an image:

```python
from crewai import Agent, Task, Crew

# Create a multimodal agent
image_analyst = Agent(
    role="Product Analyst",
    goal="Analyze product images and provide detailed descriptions",
    backstory="Expert in visual product analysis with deep knowledge of design and features",
    multimodal=True
)

# Create a task for image analysis
task = Task(
    description="Analyze the product image at https://example.com/product.jpg and provide a detailed description",
    expected_output="A detailed description of the product image",
    agent=image_analyst
)

# Create and run the crew
crew = Crew(
    agents=[image_analyst],
    tasks=[task]
)

result = crew.kickoff()
```

### Advanced Usage with Context

You can provide additional context or specific questions about the image when creating tasks for multimodal agents. The task description can include specific aspects you want the agent to focus on:

```python
from crewai import Agent, Task, Crew

# Create a multimodal agent for detailed analysis
expert_analyst = Agent(
    role="Visual Quality Inspector",
    goal="Perform detailed quality analysis of product images",
    backstory="Senior quality control expert with expertise in visual inspection",
    multimodal=True  # AddImageTool is automatically included
)

# Create a task with specific analysis requirements
inspection_task = Task(
    description="""
    Analyze the product image at https://example.com/product.jpg with focus on:
    1. Quality of materials
    2. Manufacturing defects
    3. Compliance with standards
    Provide a detailed report highlighting any issues found.
    """,
    expected_output="A detailed report highlighting any issues found",
    agent=expert_analyst
)

# Create and run the crew
crew = Crew(
    agents=[expert_analyst],
    tasks=[inspection_task]
)

result = crew.kickoff()
```

### Tool Details

When working with multimodal agents, the `AddImageTool` is automatically configured with the following schema:

```python
from typing import Optional

class AddImageToolSchema:
    image_url: str  # Required: The URL or path of the image to process
    action: Optional[str] = None  # Optional: Additional context or specific questions about the image
```

The multimodal agent will automatically handle the image processing through its built-in tools, allowing it to:
- Access images via URLs or local file paths
- Process image content with optional context or specific questions
- Provide analysis and insights based on the visual information and task requirements

### Best Practices

When working with multimodal agents, keep these best practices in mind:

1. **Image Access**
   - Ensure your images are accessible via URLs that the agent can reach
   - For local images, consider hosting them temporarily or using absolute file paths
   - Verify that image URLs are valid and accessible before running tasks

2. **Task Description**
   - Be specific about what aspects of the image you want the agent to analyze
   - Include clear questions or requirements in the task description
   - Consider using the optional `action` parameter for focused analysis

3. **Resource Management**
   - Image processing may require more computational resources than text-only tasks
   - Some language models may require base64 encoding for image data
   - Consider batch processing for multiple images to optimize performance

4. **Environment Setup**
   - Verify that your environment has the necessary dependencies for image processing
   - Ensure your language model supports multimodal capabilities
   - Test with small images first to validate your setup

5. **Error Handling**
   - Implement proper error handling for image loading failures (see the sketch after this list)
   - Have fallback strategies for when image processing fails
   - Monitor and log image processing operations for debugging
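As a minimal sketch of the error-handling point above (the handling is illustrative; CrewAI does not define a dedicated image-loading exception type, so a broad catch is used here):

```python
import logging

logger = logging.getLogger(__name__)

try:
    result = crew.kickoff()
except Exception as exc:
    # Log the failure and fall back, e.g. retry later with a mirrored image URL
    logger.error("Image analysis run failed: %s", exc)
    result = None
```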
docs/how-to/openlit-observability.mdx (Normal file, +181 lines)
@@ -0,0 +1,181 @@
---
title: OpenLIT Integration
description: Quickly start monitoring your Agents in just a single line of code with OpenTelemetry.
icon: magnifying-glass-chart
---

# OpenLIT Overview

[OpenLIT](https://github.com/openlit/openlit?src=crewai-docs) is an open-source tool that makes it simple to monitor the performance of AI agents, LLMs, VectorDBs, and GPUs with just **one** line of code.

It provides OpenTelemetry-native tracing and metrics to track important parameters like cost, latency, interactions, and task sequences.
This setup enables you to track hyperparameters and monitor for performance issues, helping you find ways to enhance and fine-tune your agents over time.

<Frame caption="OpenLIT Dashboard">
  <img src="/images/openlit1.png" alt="Overview Agent usage including cost and tokens" />
  <img src="/images/openlit2.png" alt="Overview of agent otel traces and metrics" />
  <img src="/images/openlit3.png" alt="Overview of agent traces in details" />
</Frame>

### Features

- **Analytics Dashboard**: Monitor your Agents' health and performance with detailed dashboards that track metrics, costs, and user interactions.
- **OpenTelemetry-native Observability SDK**: Vendor-neutral SDKs to send traces and metrics to your existing observability tools like Grafana, DataDog, and more.
- **Cost Tracking for Custom and Fine-Tuned Models**: Tailor cost estimations for specific models using custom pricing files for precise budgeting.
- **Exceptions Monitoring Dashboard**: Quickly spot and resolve issues by tracking common exceptions and errors with a monitoring dashboard.
- **Compliance and Security**: Detect potential threats such as profanity and PII leaks.
- **Prompt Injection Detection**: Identify potential code injection and secret leaks.
- **API Keys and Secrets Management**: Securely handle your LLM API keys and secrets centrally, avoiding insecure practices.
- **Prompt Management**: Manage and version Agent prompts using PromptHub for consistent and easy access across Agents.
- **Model Playground**: Test and compare different models for your CrewAI agents before deployment.

## Setup Instructions

<Steps>
<Step title="Deploy OpenLIT">
<Steps>
<Step title="Git Clone OpenLIT Repository">
```shell
git clone git@github.com:openlit/openlit.git
```
</Step>
<Step title="Start Docker Compose">
From the root directory of the [OpenLIT Repo](https://github.com/openlit/openlit), run the following command:
```shell
docker compose up -d
```
</Step>
</Steps>
</Step>
<Step title="Install OpenLIT SDK">
```shell
pip install openlit
```
</Step>
<Step title="Initialize OpenLIT in Your Application">
Add the following two lines to your application code:
<Tabs>
<Tab title="Setup using function arguments">
```python
import openlit

openlit.init(otlp_endpoint="http://127.0.0.1:4318")
```

Example usage for monitoring a CrewAI Agent:

```python
from crewai import Agent, Task, Crew, Process
import openlit

openlit.init(otlp_endpoint="http://127.0.0.1:4318", disable_metrics=True)

# Define your agents
researcher = Agent(
    role="Researcher",
    goal="Conduct thorough research and analysis on AI and AI agents",
    backstory="You're an expert researcher, specialized in technology, software engineering, AI, and startups. You work as a freelancer and are currently researching for a new client.",
    allow_delegation=False,
    llm='command-r'
)

# Define your task
task = Task(
    description="Generate a list of 5 interesting ideas for an article, then write one captivating paragraph for each idea that showcases the potential of a full article on this topic. Return the list of ideas with their paragraphs and your notes.",
    expected_output="5 bullet points, each with a paragraph and accompanying notes.",
)

# Define the manager agent
manager = Agent(
    role="Project Manager",
    goal="Efficiently manage the crew and ensure high-quality task completion",
    backstory="You're an experienced project manager, skilled in overseeing complex projects and guiding teams to success. Your role is to coordinate the efforts of the crew members, ensuring that each task is completed on time and to the highest standard.",
    allow_delegation=True,
    llm='command-r'
)

# Instantiate your crew with a custom manager
crew = Crew(
    agents=[researcher],
    tasks=[task],
    manager_agent=manager,
    process=Process.hierarchical,
)

# Start the crew's work
result = crew.kickoff()

print(result)
```
</Tab>
<Tab title="Setup using Environment Variables">

Add the following two lines to your application code:
```python
import openlit

openlit.init()
```

Run the following command to configure the OTEL export endpoint:
```shell
export OTEL_EXPORTER_OTLP_ENDPOINT="http://127.0.0.1:4318"
```

Example usage for monitoring a CrewAI Async Agent:

```python
import asyncio
from crewai import Crew, Agent, Task
import openlit

# The export endpoint is picked up from OTEL_EXPORTER_OTLP_ENDPOINT
openlit.init()

# Create an agent with code execution enabled
coding_agent = Agent(
    role="Python Data Analyst",
    goal="Analyze data and provide insights using Python",
    backstory="You are an experienced data analyst with strong Python skills.",
    allow_code_execution=True,
    llm="command-r"
)

# Create a task that requires code execution
data_analysis_task = Task(
    description="Analyze the given dataset and calculate the average age of participants. Ages: {ages}",
    agent=coding_agent,
    expected_output="The average age of the participants.",
)

# Create a crew and add the task
analysis_crew = Crew(
    agents=[coding_agent],
    tasks=[data_analysis_task]
)

# Async function to kickoff the crew asynchronously
async def async_crew_execution():
    result = await analysis_crew.kickoff_async(inputs={"ages": [25, 30, 35, 40, 45]})
    print("Crew Result:", result)

# Run the async function
asyncio.run(async_crew_execution())
```
</Tab>
</Tabs>
Refer to the OpenLIT [Python SDK repository](https://github.com/openlit/openlit/tree/main/sdk/python) for more advanced configurations and use cases.
</Step>
<Step title="Visualize and Analyze">
With the Agent observability data now being collected and sent to OpenLIT, the next step is to visualize and analyze it to get insights into your Agents' performance and behavior and to identify areas for improvement.

Head over to OpenLIT at `127.0.0.1:3000` in your browser to start exploring. You can log in using the default credentials:
  - **Email**: `user@openlit.io`
  - **Password**: `openlituser`

<Frame caption="OpenLIT Dashboard">
  <img src="/images/openlit1.png" alt="Overview Agent usage including cost and tokens" />
  <img src="/images/openlit2.png" alt="Overview of agent otel traces and metrics" />
</Frame>

</Step>
</Steps>
docs/how-to/opik-observability.mdx (Normal file, +129 lines)
@@ -0,0 +1,129 @@
---
title: Opik Integration
description: Learn how to use Comet Opik to debug, evaluate, and monitor your CrewAI applications with comprehensive tracing, automated evaluations, and production-ready dashboards.
icon: meteor
---

# Opik Overview

With [Comet Opik](https://www.comet.com/docs/opik/), you can debug, evaluate, and monitor your LLM applications, RAG systems, and agentic workflows with comprehensive tracing, automated evaluations, and production-ready dashboards.

<Frame caption="Opik Agent Dashboard">
  <img src="/images/opik-crewai-dashboard.png" alt="Opik agent monitoring example with CrewAI" />
</Frame>

Opik provides comprehensive support for every stage of your CrewAI application development:

- **Log Traces and Spans**: Automatically track LLM calls and application logic to debug and analyze development and production systems. Manually or programmatically annotate, view, and compare responses across projects.
- **Evaluate Your LLM Application's Performance**: Evaluate against a custom test set and run built-in evaluation metrics or define your own metrics in the SDK or UI.
- **Test Within Your CI/CD Pipeline**: Establish reliable performance baselines with Opik's LLM unit tests, built on PyTest. Run online evaluations for continuous monitoring in production.
- **Monitor & Analyze Production Data**: Understand your models' performance on unseen data in production and generate datasets for new dev iterations.

## Setup
Comet provides a hosted version of the Opik platform, or you can run the platform locally.

To use the hosted version, simply [create a free Comet account](https://www.comet.com/signup?utm_medium=github&utm_source=crewai_docs) and grab your API key.

To run the Opik platform locally, see our [installation guide](https://www.comet.com/docs/opik/self-host/overview/) for more information.

For this guide we will use CrewAI's quickstart example.

<Steps>
<Step title="Install required packages">
```shell
pip install crewai crewai-tools opik --upgrade
```
</Step>
<Step title="Configure Opik">
```python
import opik
opik.configure(use_local=False)
```
</Step>
<Step title="Prepare environment">
First, we set up our API key for our LLM provider as an environment variable:

```python
import os
import getpass

if "OPENAI_API_KEY" not in os.environ:
    os.environ["OPENAI_API_KEY"] = getpass.getpass("Enter your OpenAI API key: ")
```
</Step>
<Step title="Using CrewAI">
The first step is to create our project. We will use an example from CrewAI's documentation:

```python
from crewai import Agent, Crew, Task, Process


class YourCrewName:
    def agent_one(self) -> Agent:
        return Agent(
            role="Data Analyst",
            goal="Analyze data trends in the market",
            backstory="An experienced data analyst with a background in economics",
            verbose=True,
        )

    def agent_two(self) -> Agent:
        return Agent(
            role="Market Researcher",
            goal="Gather information on market dynamics",
            backstory="A diligent researcher with a keen eye for detail",
            verbose=True,
        )

    def task_one(self) -> Task:
        return Task(
            name="Collect Data Task",
            description="Collect recent market data and identify trends.",
            expected_output="A report summarizing key trends in the market.",
            agent=self.agent_one(),
        )

    def task_two(self) -> Task:
        return Task(
            name="Market Research Task",
            description="Research factors affecting market dynamics.",
            expected_output="An analysis of factors influencing the market.",
            agent=self.agent_two(),
        )

    def crew(self) -> Crew:
        return Crew(
            agents=[self.agent_one(), self.agent_two()],
            tasks=[self.task_one(), self.task_two()],
            process=Process.sequential,
            verbose=True,
        )
```

Now we can import Opik's tracker and run our crew:

```python
from opik.integrations.crewai import track_crewai

track_crewai(project_name="crewai-integration-demo")

my_crew = YourCrewName().crew()
result = my_crew.kickoff()

print(result)
```
After running your CrewAI application, visit the Opik app to view:
- LLM traces, spans, and their metadata
- Agent interactions and task execution flow
- Performance metrics like latency and token usage
- Evaluation metrics (built-in or custom)
</Step>
</Steps>

## Resources

- [🦉 Opik Documentation](https://www.comet.com/docs/opik/)
- [👉 Opik + CrewAI Colab](https://colab.research.google.com/github/comet-ml/opik/blob/main/apps/opik-documentation/documentation/docs/cookbook/crewai.ipynb)
- [🐦 X](https://x.com/cometml)
- [💬 Slack](https://slack.comet.com/)
docs/how-to/portkey-observability.mdx (Normal file, +202 lines)
@@ -0,0 +1,202 @@
---
title: Portkey Integration
description: How to use Portkey with CrewAI
icon: key
---

<img src="https://raw.githubusercontent.com/siddharthsambharia-portkey/Portkey-Product-Images/main/Portkey-CrewAI.png" alt="Portkey CrewAI Header Image" width="70%" />

[Portkey](https://portkey.ai/?utm_source=crewai&utm_medium=crewai&utm_campaign=crewai) is a 2-line upgrade to make your CrewAI agents reliable, cost-efficient, and fast.

Portkey adds 4 core production capabilities to any CrewAI agent:
1. Routing to **200+ LLMs**
2. Making each LLM call more robust
3. Full-stack tracing plus cost and performance analytics
4. Real-time guardrails to enforce behavior

## Getting Started

<Steps>
<Step title="Install CrewAI and Portkey">
```bash
pip install -qU crewai portkey-ai
```
</Step>
<Step title="Configure the LLM Client">
To build CrewAI Agents with Portkey, you'll need two keys:
- **Portkey API Key**: Sign up on the [Portkey app](https://app.portkey.ai/?utm_source=crewai&utm_medium=crewai&utm_campaign=crewai) and copy your API key
- **Virtual Key**: Virtual Keys securely manage your LLM API keys in one place. Store your LLM provider API keys securely in Portkey's vault

```python
from crewai import LLM
from portkey_ai import createHeaders, PORTKEY_GATEWAY_URL

gpt_llm = LLM(
    model="gpt-4",
    base_url=PORTKEY_GATEWAY_URL,
    api_key="dummy",  # We are using a Virtual key
    extra_headers=createHeaders(
        api_key="YOUR_PORTKEY_API_KEY",
        virtual_key="YOUR_VIRTUAL_KEY",  # Enter your Virtual key from Portkey
    )
)
```
</Step>
<Step title="Create and Run Your First Agent">
```python
from crewai import Agent, Task, Crew

# Define your agents with roles and goals
coder = Agent(
    role='Software developer',
    goal='Write clear, concise code on demand',
    backstory='An expert coder with a keen eye for software trends.',
    llm=gpt_llm
)

# Create tasks for your agents
task1 = Task(
    description="Define the HTML for making a simple website with the heading 'Hello World! Portkey is working!'",
    expected_output="A clear and concise HTML code",
    agent=coder
)

# Instantiate your crew
crew = Crew(
    agents=[coder],
    tasks=[task1],
)

result = crew.kickoff()
print(result)
```
</Step>
</Steps>

## Key Features

| Feature | Description |
|:--------|:------------|
| 🌐 Multi-LLM Support | Access OpenAI, Anthropic, Gemini, Azure, and 250+ providers through a unified interface |
| 🛡️ Production Reliability | Implement retries, timeouts, load balancing, and fallbacks |
| 📊 Advanced Observability | Track 40+ metrics including costs, tokens, latency, and custom metadata |
| 🔍 Comprehensive Logging | Debug with detailed execution traces and function call logs |
| 🚧 Security Controls | Set budget limits and implement role-based access control |
| 🔄 Performance Analytics | Capture and analyze feedback for continuous improvement |
| 💾 Intelligent Caching | Reduce costs and latency with semantic or simple caching |

## Production Features with Portkey Configs

All of the features below are enabled through Portkey's Config system. Portkey's Config system allows you to define routing strategies using simple JSON objects in your LLM API calls. You can create and manage Configs directly in your code or through the Portkey Dashboard. Each Config has a unique ID for easy reference.

<Frame>
  <img src="https://raw.githubusercontent.com/Portkey-AI/docs-core/refs/heads/main/images/libraries/libraries-3.avif"/>
</Frame>

### 1. Use 250+ LLMs
Access various LLMs like Anthropic, Gemini, Mistral, Azure OpenAI, and more with minimal code changes. Switch between providers or use them together seamlessly. [Learn more about Universal API](https://portkey.ai/docs/product/ai-gateway/universal-api)

Easily switch between different LLM providers:

```python
# Anthropic Configuration
anthropic_llm = LLM(
    model="claude-3-5-sonnet-latest",
    base_url=PORTKEY_GATEWAY_URL,
    api_key="dummy",
    extra_headers=createHeaders(
        api_key="YOUR_PORTKEY_API_KEY",
        virtual_key="YOUR_ANTHROPIC_VIRTUAL_KEY",  # You don't need a provider when using Virtual keys
        trace_id="anthropic_agent"
    )
)

# Azure OpenAI Configuration
azure_llm = LLM(
    model="gpt-4",
    base_url=PORTKEY_GATEWAY_URL,
    api_key="dummy",
    extra_headers=createHeaders(
        api_key="YOUR_PORTKEY_API_KEY",
        virtual_key="YOUR_AZURE_VIRTUAL_KEY",  # You don't need a provider when using Virtual keys
        trace_id="azure_agent"
    )
)
```

### 2. Caching
Improve response times and reduce costs with two powerful caching modes:
- **Simple Cache**: Perfect for exact matches
- **Semantic Cache**: Matches responses for requests that are semantically similar
[Learn more about Caching](https://portkey.ai/docs/product/ai-gateway/cache-simple-and-semantic)

```py
config = {
    "cache": {
        "mode": "semantic",  # or "simple" for exact matching
    }
}
```

### 3. Production Reliability
Portkey provides comprehensive reliability features:
- **Automatic Retries**: Handle temporary failures gracefully
- **Request Timeouts**: Prevent hanging operations
- **Conditional Routing**: Route requests based on specific conditions
- **Fallbacks**: Set up automatic provider failovers
- **Load Balancing**: Distribute requests efficiently

[Learn more about Reliability Features](https://portkey.ai/docs/product/ai-gateway/)
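As a hedged sketch of how such a Config might be wired up (the virtual key values are placeholders, and the exact schema is defined in the reliability docs linked above):

```py
# Hypothetical Config: retry each request up to 3 times, and fall back
# from a primary provider to a secondary one if it keeps failing.
config = {
    "retry": {"attempts": 3},
    "strategy": {"mode": "fallback"},
    "targets": [
        {"virtual_key": "YOUR_OPENAI_VIRTUAL_KEY"},
        {"virtual_key": "YOUR_ANTHROPIC_VIRTUAL_KEY"},
    ],
}

# Configs can be attached when building the gateway headers, e.g.:
# extra_headers=createHeaders(api_key="YOUR_PORTKEY_API_KEY", config=config)
```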
### 4. Metrics

Agent runs are complex. Portkey automatically logs **40+ comprehensive metrics** for your AI agents, including cost, tokens used, latency, etc. Whether you need a broad overview or granular insights into your agent runs, Portkey's customizable filters provide the metrics you need.

- Cost per agent interaction
- Response times and latency
- Token usage and efficiency
- Success/failure rates
- Cache hit rates

<img src="https://github.com/siddharthsambharia-portkey/Portkey-Product-Images/blob/main/Portkey-Dashboard.png?raw=true" width="70%" alt="Portkey Dashboard" />

### 5. Detailed Logging
Logs are essential for understanding agent behavior, diagnosing issues, and improving performance. They provide a detailed record of agent activities and tool use, which is crucial for debugging and optimizing processes.

Access a dedicated section to view records of agent executions, including parameters, outcomes, function calls, and errors. Filter logs based on multiple parameters such as trace ID, model, tokens used, and metadata.

<details>
  <summary><b>Traces</b></summary>
  <img src="https://raw.githubusercontent.com/siddharthsambharia-portkey/Portkey-Product-Images/main/Portkey-Traces.png" alt="Portkey Traces" width="70%" />
</details>

<details>
  <summary><b>Logs</b></summary>
  <img src="https://raw.githubusercontent.com/siddharthsambharia-portkey/Portkey-Product-Images/main/Portkey-Logs.png" alt="Portkey Logs" width="70%" />
</details>

### 6. Enterprise Security Features
- Set budget limits and rate limits per Virtual Key (disposable API keys)
- Implement role-based access control
- Track system changes with audit logs
- Configure data retention policies

For detailed information on creating and managing Configs, visit the [Portkey documentation](https://docs.portkey.ai/product/ai-gateway/configs).

## Resources

- [📘 Portkey Documentation](https://docs.portkey.ai)
- [📊 Portkey Dashboard](https://app.portkey.ai/?utm_source=crewai&utm_medium=crewai&utm_campaign=crewai)
- [🐦 Twitter](https://twitter.com/portkeyai)
- [💬 Discord Community](https://discord.gg/DD7vgKK299)
docs/how-to/replay-tasks-from-latest-crew-kickoff.mdx (Normal file, +80 lines)
@@ -0,0 +1,80 @@
---
title: Replay Tasks from Latest Crew Kickoff
description: Replay tasks from the latest crew.kickoff(...)
icon: arrow-right
---

## Introduction

CrewAI lets you replay from a specific task in the latest crew kickoff. This is particularly useful when a kickoff has finished and you want to retry certain tasks without refetching data: your agents already have the context saved from the kickoff execution, so you only need to replay the tasks you care about.

<Note>
  You must run `crew.kickoff()` before you can replay a task.
  Currently, only the latest kickoff is supported, so if you use `kickoff_for_each`, it will only allow you to replay from the most recent crew run.
</Note>

Here's an example of how to replay from a task:

### Replaying from a Specific Task Using the CLI

To use the replay feature, follow these steps:

<Steps>
<Step title="Open your terminal or command prompt.">
</Step>
<Step title="Navigate to the directory where your CrewAI project is located.">
</Step>
<Step title="Run the following commands:">
To view the latest kickoff task_ids use:

```shell
crewai log-tasks-outputs
```

Once you have your `task_id` to replay, use:

```shell
crewai replay -t <task_id>
```
</Step>
</Steps>

<Note>
  Ensure `crewai` is installed and configured correctly in your development environment.
</Note>

### Replaying from a Task Programmatically

To replay from a task programmatically, use the following steps:

<Steps>
<Step title="Specify the `task_id` and input parameters for the replay process.">
</Step>
<Step title="Execute the replay command within a try-except block to handle potential errors.">
<CodeGroup>
```python Code
import subprocess


def replay():
    """
    Replay the crew execution from a specific task.
    """
    task_id = '<task_id>'
    inputs = {"topic": "CrewAI Training"}  # This is optional; you can pass in the inputs you want to replay; otherwise, it uses the previous kickoff's inputs.
    try:
        YourCrewName_Crew().crew().replay(task_id=task_id, inputs=inputs)

    except subprocess.CalledProcessError as e:
        raise Exception(f"An error occurred while replaying the crew: {e}")

    except Exception as e:
        raise Exception(f"An unexpected error occurred: {e}")
```
</CodeGroup>
</Step>
</Steps>

## Conclusion

Replaying specific tasks in CrewAI is efficient and robust with these features.
Follow the commands and steps precisely to make the most of them.
docs/how-to/sequential-process.mdx (Normal file, +127 lines)
@@ -0,0 +1,127 @@
---
title: Sequential Processes
description: A comprehensive guide to utilizing the sequential processes for task execution in CrewAI projects.
icon: forward
---

## Introduction

CrewAI offers a flexible framework for executing tasks in a structured manner, supporting both sequential and hierarchical processes.
This guide outlines how to effectively implement these processes to ensure efficient task execution and project completion.

## Sequential Process Overview

The sequential process ensures tasks are executed one after the other, following a linear progression.
This approach is ideal for projects requiring tasks to be completed in a specific order.

### Key Features

- **Linear Task Flow**: Ensures orderly progression by handling tasks in a predetermined sequence.
- **Simplicity**: Best suited for projects with clear, step-by-step tasks.
- **Easy Monitoring**: Facilitates easy tracking of task completion and project progress.

## Implementing the Sequential Process

To use the sequential process, assemble your crew and define tasks in the order they need to be executed.

```python Code
from crewai import Crew, Process, Agent, Task, TaskOutput, CrewOutput

# Define your agents
researcher = Agent(
    role='Researcher',
    goal='Conduct foundational research',
    backstory='An experienced researcher with a passion for uncovering insights'
)
analyst = Agent(
    role='Data Analyst',
    goal='Analyze research findings',
    backstory='A meticulous analyst with a knack for uncovering patterns'
)
writer = Agent(
    role='Writer',
    goal='Draft the final report',
    backstory='A skilled writer with a talent for crafting compelling narratives'
)

# Define your tasks
research_task = Task(
    description='Gather relevant data...',
    agent=researcher,
    expected_output='Raw Data'
)
analysis_task = Task(
    description='Analyze the data...',
    agent=analyst,
    expected_output='Data Insights'
)
writing_task = Task(
    description='Compose the report...',
    agent=writer,
    expected_output='Final Report'
)

# Form the crew with a sequential process
report_crew = Crew(
    agents=[researcher, analyst, writer],
    tasks=[research_task, analysis_task, writing_task],
    process=Process.sequential
)

# Execute the crew
result = report_crew.kickoff()

# Accessing the type-safe output: kickoff() returns a CrewOutput,
# and per-task outputs are available via its tasks_output list
crew_output: CrewOutput = result
task_output: TaskOutput = crew_output.tasks_output[0]
```

### Note:

Each task in a sequential process **must** have an agent assigned. Ensure that every `Task` includes an `agent` parameter.

### Workflow in Action

1. **Initial Task**: In a sequential process, the first agent completes their task and signals completion.
2. **Subsequent Tasks**: Agents pick up their tasks based on the process type, with outcomes of preceding tasks or directives guiding their execution.
3. **Completion**: The process concludes once the final task is executed, leading to project completion.

## Advanced Features

### Task Delegation

In sequential processes, if an agent has `allow_delegation` set to `True`, it can delegate tasks to other agents in the crew.
This feature is automatically set up when there are multiple agents in the crew.

### Asynchronous Execution

Tasks can be executed asynchronously, allowing for parallel processing when appropriate.
To create an asynchronous task, set `async_execution=True` when defining the task, as shown below.
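A minimal sketch, reusing the `writer` agent from the example above (the task content is illustrative):

```python
# Marked async, so this task can run in parallel with other work
# instead of blocking the sequential flow.
summary_task = Task(
    description='Summarize the findings...',
    expected_output='A short summary',
    agent=writer,
    async_execution=True
)
```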
### Memory and Caching

CrewAI supports both memory and caching features, as shown in the sketch after this list:

- **Memory**: Enable by setting `memory=True` when creating the Crew. This allows agents to retain information across tasks.
- **Caching**: By default, caching is enabled. Set `cache=False` to disable it.
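For example, applying both flags to the crew from the implementation section above:

```python
report_crew = Crew(
    agents=[researcher, analyst, writer],
    tasks=[research_task, analysis_task, writing_task],
    process=Process.sequential,
    memory=True,   # agents retain information across tasks
    cache=False    # disable the default-on caching of tool results
)
```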
### Callbacks

You can set callbacks at both the task and step level (a sketch follows this list):

- `task_callback`: Executed after each task completion.
- `step_callback`: Executed after each step in an agent's execution.
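A minimal sketch of wiring both hooks onto the crew (the handler bodies are illustrative):

```python
def on_task_done(task_output):
    # Receives the completed task's TaskOutput
    print(f"Task finished: {task_output.description}")

def on_step(step_output):
    # Receives each intermediate agent step
    print("Agent took a step")

report_crew = Crew(
    agents=[researcher, analyst, writer],
    tasks=[research_task, analysis_task, writing_task],
    process=Process.sequential,
    task_callback=on_task_done,
    step_callback=on_step
)
```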
### Usage Metrics

CrewAI tracks token usage across all tasks and agents. You can access these metrics after execution.
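A short sketch, assuming the `usage_metrics` attribute exposed on the crew by current CrewAI releases:

```python
result = report_crew.kickoff()

# Aggregated token usage across all agents and tasks is available
# on the crew object after execution.
print(report_crew.usage_metrics)
```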
## Best Practices for Sequential Processes

1. **Order Matters**: Arrange tasks in a logical sequence where each task builds upon the previous one.
2. **Clear Task Descriptions**: Provide detailed descriptions for each task to guide the agents effectively.
3. **Appropriate Agent Selection**: Match agents' skills and roles to the requirements of each task.
4. **Use Context**: Leverage the context from previous tasks to inform subsequent ones.
docs/how-to/weave-integration.mdx (Normal file, +124 lines)
@@ -0,0 +1,124 @@
---
title: Weave Integration
description: Learn how to use Weights & Biases (W&B) Weave to track, experiment with, evaluate, and improve your CrewAI applications.
icon: radar
---

# Weave Overview

[Weights & Biases (W&B) Weave](https://weave-docs.wandb.ai/) is a framework for tracking, experimenting with, evaluating, deploying, and improving LLM-based applications.

![Overview of W&B Weave](/images/weave-tracing.gif)

Weave provides comprehensive support for every stage of your CrewAI application development:

- **Tracing & Monitoring**: Automatically track LLM calls and application logic to debug and analyze production systems
- **Systematic Iteration**: Refine and iterate on prompts, datasets, and models
- **Evaluation**: Use custom or pre-built scorers to systematically assess and enhance agent performance
- **Guardrails**: Protect your agents with pre- and post-safeguards for content moderation and prompt safety

Weave automatically captures traces for your CrewAI applications, enabling you to monitor and analyze your agents' performance, interactions, and execution flow. This helps you build better evaluation datasets and optimize your agent workflows.

## Setup Instructions

<Steps>
<Step title="Install required packages">
```shell
pip install crewai weave
```
</Step>
<Step title="Set up W&B Account">
Sign up for a [Weights & Biases account](https://wandb.ai) if you haven't already. You'll need this to view your traces and metrics.
</Step>
<Step title="Initialize Weave in Your Application">
Add the following code to your application:

```python
import weave

# Initialize Weave with your project name
weave.init(project_name="crewai_demo")
```

After initialization, Weave will provide a URL where you can view your traces and metrics.
</Step>
<Step title="Create your Crews/Flows">
```python
from crewai import Agent, Task, Crew, LLM, Process

# Create an LLM with a temperature of 0 for more deterministic outputs
llm = LLM(model="gpt-4o", temperature=0)

# Create agents
researcher = Agent(
    role='Research Analyst',
    goal='Find and analyze the best investment opportunities',
    backstory='Expert in financial analysis and market research',
    llm=llm,
    verbose=True,
    allow_delegation=False,
)

writer = Agent(
    role='Report Writer',
    goal='Write clear and concise investment reports',
    backstory='Experienced in creating detailed financial reports',
    llm=llm,
    verbose=True,
    allow_delegation=False,
)

# Create tasks
research_task = Task(
    description='Deep research on the {topic}',
    expected_output='Comprehensive market data including key players, market size, and growth trends.',
    agent=researcher
)

writing_task = Task(
    description='Write a detailed report based on the research',
    expected_output='The report should be easy to read and understand. Use bullet points where applicable.',
    agent=writer
)

# Create a crew
crew = Crew(
    agents=[researcher, writer],
    tasks=[research_task, writing_task],
    verbose=True,
    process=Process.sequential,
)

# Run the crew
result = crew.kickoff(inputs={"topic": "AI in material science"})
print(result)
```
</Step>
<Step title="View Traces in Weave">
After running your CrewAI application, visit the Weave URL provided during initialization to view:
- LLM calls and their metadata
- Agent interactions and task execution flow
- Performance metrics like latency and token usage
- Any errors or issues that occurred during execution

<Frame caption="Weave Tracing Dashboard">
  <img src="/images/weave-tracing.png" alt="Weave tracing example with CrewAI" />
</Frame>
</Step>
</Steps>

## Features

- Weave automatically captures all CrewAI operations: agent interactions and task executions; LLM calls with metadata and token usage; tool usage and results.
- The integration supports all CrewAI execution methods: `kickoff()`, `kickoff_for_each()`, `kickoff_async()`, and `kickoff_for_each_async()`.
- Automatic tracing of all [crewAI-tools](https://github.com/crewAIInc/crewAI-tools).
- Flow feature support with decorator patching (`@start`, `@listen`, `@router`, `@or_`, `@and_`).
- Track custom guardrails passed to CrewAI `Task` with `@weave.op()` (see the sketch after this list).
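A minimal sketch, assuming CrewAI's task-guardrail convention of returning a `(passed, value)` tuple; the guardrail logic is illustrative, reusing `writer` from the example above:

```python
import weave
from crewai import Task, TaskOutput

@weave.op()  # each guardrail invocation is traced by Weave
def must_use_bullets(output: TaskOutput):
    # Pass the output through if it contains bullet points, else reject it
    if "-" in output.raw or "*" in output.raw:
        return (True, output.raw)
    return (False, "Report must use bullet points.")

writing_task = Task(
    description='Write a detailed report based on the research',
    expected_output='A readable report with bullet points.',
    agent=writer,
    guardrail=must_use_bullets
)
```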
For detailed information on what's supported, visit the [Weave CrewAI documentation](https://weave-docs.wandb.ai/guides/integrations/crewai/#getting-started-with-flow).

## Resources

- [📘 Weave Documentation](https://weave-docs.wandb.ai)
- [📊 Example Weave x CrewAI dashboard](https://wandb.ai/ayut/crewai_demo/weave/traces?cols=%7B%22wb_run_id%22%3Afalse%2C%22attributes.weave.client_version%22%3Afalse%2C%22attributes.weave.os_name%22%3Afalse%2C%22attributes.weave.os_release%22%3Afalse%2C%22attributes.weave.os_version%22%3Afalse%2C%22attributes.weave.source%22%3Afalse%2C%22attributes.weave.sys_version%22%3Afalse%7D&peekPath=%2Fayut%2Fcrewai_demo%2Fcalls%2F0195c838-38cb-71a2-8a15-651ecddf9d89)
- [🐦 X](https://x.com/weave_wb)
BIN  docs/images/agentops-overview.png (Normal file, 288 KiB)
BIN  docs/images/agentops-replay.png (Normal file, 419 KiB)
BIN  docs/images/agentops-session.png (Normal file, 263 KiB)
BIN  docs/images/crewai-run-poetry-error.png (Normal file, 104 KiB)
BIN  docs/images/crewai-update.png (Normal file, 50 KiB)
BIN  docs/images/langtrace1.png (Normal file, 223 KiB)
BIN  docs/images/langtrace2.png (Normal file, 204 KiB)
BIN  docs/images/langtrace3.png (Normal file, 295 KiB)
BIN  docs/images/mlflow-tracing.gif (Normal file, 16 MiB)
BIN  docs/images/mlflow1.png (Normal file, 382 KiB)
BIN  docs/images/openlit1.png (Normal file, 390 KiB)