Compare commits

...

726 Commits

Author SHA1 Message Date
Rip&Tear
a54d34ea5b update readme.md 2024-09-10 21:56:46 +08:00
João Moura
bc793749a5 preparing new version with deploy 2024-09-07 11:17:12 -07:00
João Moura
a9916940ef preparing new version 0.55.1 2024-09-07 10:16:07 -07:00
João Moura
b7f4931de5 updating dependencies 2024-09-07 00:55:21 -07:00
João Moura
327b728bef preparing to cut new version 2024-09-07 00:34:34 -07:00
Sean
a9510eec88 Update LLM-Connections.md (#1181)
* Update LLM-Connections.md

* Update LLM-Connections.md

---------

Co-authored-by: João Moura <joaomdmoura@gmail.com>
2024-09-07 04:31:09 -03:00
Brandon Hancock (bhancock_ai)
d6db557f50 Update regex (#1228) 2024-09-07 04:27:58 -03:00
Brandon Hancock (bhancock_ai)
5ae56e3f72 add in 2 small improvements based on joao feedback (#1264) 2024-09-07 04:13:23 -03:00
Astha Puri
1c9ebb59b1 Update Start-a-New-CrewAI-Project-Template-Method.md (#1276)
* Update Start-a-New-CrewAI-Project-Template-Method.md

* Update Start-a-New-CrewAI-Project-Template-Method.md

---------

Co-authored-by: João Moura <joaomdmoura@gmail.com>
2024-09-07 04:12:51 -03:00
Astha Puri
f520ceeb0d Add missing virtual environment commands (#1277)
* Add missing virtual environment commands

* Update Start-a-New-CrewAI-Project-Template-Method.md

---------

Co-authored-by: João Moura <joaomdmoura@gmail.com>
2024-09-07 04:12:04 -03:00
Astha Puri
0df4d2fd4b Update Tasks.md (#1279) 2024-09-07 04:05:56 -03:00
Rip&Tear
596491d932 Update readme.md (#1294)
* Update pyproject.toml

More GH link updates

* Added FAQ section in README.md

---------

Co-authored-by: João Moura <joaomdmoura@gmail.com>
2024-09-07 03:59:48 -03:00
Ali Waleed
72fb109147 Fix: Langtrace Docs (#1297)
* fix langtrace docs

* remove gif for size constraint
2024-09-07 03:58:27 -03:00
Brandon Hancock (bhancock_ai)
40b336d2a5 Brandon/cre 256 default template crew isnt running properly (#1299)
* Update config typecheck to accept agents

* Clean up prints
2024-09-07 03:57:36 -03:00
anmol-aidora
5958df71a2 Updated CrewAI Documentation and Repository link in tools.poetry.urls (#1305)
* Updated CrewAI Documentation and Repository link in tools.poetry.urls

* Update pyproject.toml

---------

Co-authored-by: João Moura <joaomdmoura@gmail.com>
2024-09-07 03:55:02 -03:00
Brandon Hancock (bhancock_ai)
26d9af8367 Brandon/cre 252 add agent to crewai test (#1308)
* Update config typecheck to accept agents

* Clean up prints

* Adding agents to crew evaluator output table

* Properly generating table now

* Update tests
2024-09-07 03:53:23 -03:00
Brandon Hancock (bhancock_ai)
cdaf2d41c7 move away from pydantic v1 (#1284) 2024-09-06 14:22:01 -04:00
Paul Nugent
d9ee104167 Merge pull request #1290 from crewAIInc/DOCS/readme_update
Docs/readme update
2024-09-06 16:18:31 +01:00
Rip&Tear
0b9eeb7cdb Revert "feat: Improve documentation for Conditional Tasks in crewAI"
This reverts commit 18a2722e4d.
2024-09-05 10:30:08 +08:00
Rip&Tear
9b558ddc51 Revert "docs: Improve "Creating and Utilizing Tools in crewAI" documentation"
This reverts commit b955416458.
2024-09-05 10:30:00 +08:00
Rip&Tear
b857afe45b Revert "feat: Improve documentation for TXTSearchTool"
This reverts commit d2fab55561.
2024-09-05 10:29:03 +08:00
Rip&Tear
1d77c8de10 feat: Improve documentation for TXTSearchTool
Updated wording positioning
2024-09-05 10:27:11 +08:00
Rip&Tear
503f3a6372 Update README.md
Updated GitHub links to point to new repos
2024-09-05 10:17:46 +08:00
Rip&Tear (aider)
d2fab55561 feat: Improve documentation for TXTSearchTool 2024-09-05 00:06:11 +08:00
Rip&Tear (aider)
b955416458 docs: Improve "Creating and Utilizing Tools in crewAI" documentation 2024-09-04 18:31:09 +08:00
Rip&Tear (aider)
18a2722e4d feat: Improve documentation for Conditional Tasks in crewAI 2024-09-04 18:26:12 +08:00
Rip&Tear
c7e8d55926 Merge pull request #1273 from Astha0024/main
Update README.md with default model
2024-09-01 00:43:40 +08:00
Astha Puri
48698bf0b7 Merge branch 'main' into main 2024-08-30 21:58:02 -04:00
Thiago Moretto
f79b3fc322 Merge pull request #1269 from crewAIInc/tm-fix-cli-for-py310
Add py 3.10 support back to CLI + fixes
2024-08-30 13:39:04 -03:00
Thiago Moretto
0b9e753c2f Add comment to warn about dro simple_toml_parser 2024-08-30 11:52:53 -03:00
Astha Puri
5b3f7be1c4 Update README.md 2024-08-30 06:55:31 -04:00
Astha Puri
f2208f5f8e Update README.md 2024-08-30 06:54:34 -04:00
João Moura
79b5248b83 preparing new version 2024-08-30 00:33:51 -03:00
João Moura
d4791bef28 updating deployment cli with 2024-08-30 00:32:18 -03:00
João Moura
d861cb0d74 updating docs 2024-08-30 00:15:06 -03:00
João Moura
67f19f79c2 removing base_model from telemetry 2024-08-29 23:35:05 -03:00
Thiago Moretto
5f359b14f7 Fix test 2024-08-29 15:58:47 -03:00
Thiago Moretto
cda1900b14 Read as str, not bytes
+handle when project_name is None (fails, basically)
2024-08-29 15:17:51 -03:00
Thiago Moretto
c8c0a89dc6 Fix type checking + lint 2024-08-29 15:02:19 -03:00
Thiago Moretto
9a10cc15f4 Add python 3.10 support back to CLI +fixes 2024-08-29 14:37:34 -03:00
Thiago Moretto
345f1eacde Get current crewai version from poetry.lock 2024-08-29 11:14:04 -03:00
Thiago Moretto
fa937bf3a7 Add Python 3.10 support to CLI 2024-08-29 10:22:54 -03:00
mvanwyk
172758020c bug: fix incorrect mkdocs site_url (#1238)
* bug: fix incorrect mkdocs site_url

* bug: fix incorrect mkdocs repo_url

Co-authored-by: Eduardo Chiarotti <dudumelgaco@hotmail.com>

---------

Co-authored-by: Eduardo Chiarotti <dudumelgaco@hotmail.com>
2024-08-24 15:45:59 -03:00
Brandon Hancock (bhancock_ai)
5ff178084e Fix deployment name issue to support Azure (#1253)
* Fix deployment name issue to support Azure

* More carefully check attrs on llm
2024-08-23 12:58:37 -04:00
Brandon Hancock (bhancock_ai)
c012e0ff8d Update async docs with more examples (#1254)
* Update async docs with more examples

* Add use cases
2024-08-23 12:51:58 -04:00
Eduardo Chiarotti
f777c1c2e0 fix: All files pre commit (#1249) 2024-08-23 10:52:36 -03:00
Paul Nugent
782ce22d99 Update LLM-Connections.md (#1190)
Added missing quotes around os.environ

Co-authored-by: Eduardo Chiarotti <dudumelgaco@hotmail.com>
2024-08-23 10:39:06 -03:00
Eduardo Chiarotti
f5246039e5 Feat/cli deploy (#1240)
* feat: set basic structure deploy commands

* feat: add first iteration of CLI Deploy

* feat: some minor refactor

* feat: Add api, Deploy command and update cli

* feat: Remove test token

* feat: add auth0 lib, update cli and improve code

* feat: update code and decouple auth

* fix: parts of the code

* feat: Add token manager to encrypt access token and get and save tokens

* feat: add audience to constants

* feat: add subsystem saving credentials and remove comment of type hinting

* feat: add get crew version to send on header of request

* feat: add docstrings

* feat: add tests for authentication module

* feat: add tests for utils

* feat: add unit tests for cli

* feat: add tests

* feat: add deploy man tests

* feat: fix type checking issue

* feat: rename tests to pass ci

* feat: fix pr issues

* feat: fix get crewai version

* fix: add timeout for tests.yml
2024-08-23 10:20:03 -03:00
Rip&Tear
4736604b4d Merge pull request #1239 from ShuHuang/patch-1
Bugfix: Update LLM-Connections.md
2024-08-23 09:49:46 +08:00
Shu Huang
09cba0135e Bugfix: Update LLM-Connections.md
The original code doesn't work due to a comma
2024-08-22 14:39:15 +01:00
Brandon Hancock (bhancock_ai)
8119edb495 Brandon/cre 211 fix agent and task config for yaml based projects (#1211)
* Fixed agents. Now need to fix tasks.

* Add type fixes and fix task decorator

* Clean up logs

* fix more type errors

* Revert back to required

* Undo changes.

* Remove default none for properties that cannot be none

* Clean up comments

* Implement all of Gui's feedback
2024-08-20 09:31:02 -04:00
William Espegren
17bffb0803 docs: add spider docs (#1165)
* docs: add spider docs

* chore: add "Spider scraper" to mkdocs.yml
2024-08-20 07:53:04 -03:00
Rip&Tear
cbe139fced Merge pull request #1216 from theCyberTech/main 2024-08-20 18:32:04 +08:00
Eduardo Chiarotti
946d8567fe feat: Add only on release to deploy docs (#1212) 2024-08-20 07:26:50 -03:00
Rip&Tear
7b5d5bdeef Merge pull request #2 from theCyberTech/theCyberTech-operations-per-run
Update operations-per-run in stale.yml
2024-08-20 12:54:55 +08:00
Rip&Tear
a1551bcf2b Update operations-per-run in stale.yml
operations-per-run: 1200

this will allow for complete cleanup of all existing issues
2024-08-20 12:54:26 +08:00
Rip&Tear
5495825b1d Merge pull request #1206 from theCyberTech/main
Create Cli.md
2024-08-17 21:51:13 +08:00
Rip&Tear
6e36f84cc6 Update Cli.md 2024-08-17 20:55:46 +08:00
Rip&Tear
cddf2d8f7c Create Cli.md
Added initial Cli.md to help users get info on Cli commands
2024-08-17 20:06:31 +08:00
Rip&Tear
5f17e35c5a Merge pull request #1205 from theCyberTech/theCyberTech-stale-fix
Update stale.yml
2024-08-17 20:00:43 +08:00
Eduardo Chiarotti
231a833ad0 feat: Add crewai install CLI command (#1203)
* feat: Add crewai install CLI command

* feat: Add crewai install to the docs and force now crewai run
2024-08-17 08:41:53 -03:00
Rip&Tear
a870295d42 Update stale.yml
Added operations-per-run: 500
2024-08-17 19:16:31 +08:00
Rip&Tear
ddda8f6bda Merge pull request #1194 from crewAIInc/docs_update
Updated Documentation to fix minor issues + minor .github fixes
2024-08-17 08:14:17 +08:00
Brandon Hancock (bhancock_ai)
bf7372fefa Adding Autocomplete to OSS (#1198)
* Cleaned up model_config

* Fix pydantic issues

* 99% done with autocomplete

* fixed test issues

* Fix type checking issues
2024-08-16 15:04:21 -04:00
Brandon Hancock (bhancock_ai)
3451b6fc7a Clean up pipeline (#1187)
* Clean up pipeline

* Make versioning dynamic in templates

* fix .env issues when openai is trying to use invalid keys

* Fix type checker issue in pipeline

* Fix tests.
2024-08-16 14:47:28 -04:00
Vini Brasil
dbf2570353 Add name and expected_output to TaskOutput (#1199)
* Add name and expected_output to TaskOutput

This commit adds task information to the TaskOutput class. This is
useful to provide extra context to callbacks.

* Populate task name from function names

This commit populates task name from function names when using
annotations.
2024-08-15 22:24:41 +01:00
Eduardo Chiarotti
d0707fac91 feat: Add bandit ci pipeline (#1200)
* feat: Add bandit ci pipeline

* feat: add useforsecurty false for bandit pipeline

* feat: Add report only for High severity issues
2024-08-15 18:19:57 -03:00
theCyberTech
35ebdd6022 Updated Documentation to fix navigation link for pipeline feature, removed legacy md file from .github & added missing config.yml config to remove custom issues from user access 2024-08-15 16:35:05 +08:00
Rip&Tear
92a77e5cac Merge pull request #1183 from crewAIInc/feature-templates
Feature templates
2024-08-15 11:29:36 +08:00
Rip&Tear
a2922c9ad5 Merge pull request #1182 from crewAIInc/git-temaplates
updated bug report template to yml for more control
2024-08-15 11:28:31 +08:00
Eduardo Chiarotti
9f9b52dd26 fix: Fix planning_llm issue (#1189)
* fix: Fix planning_llm issue

* fix: add poetry.lock updated version

* fix: type checking issues

* fix: tests
2024-08-14 18:54:53 -03:00
theCyberTech
2482c7ab68 Added feature request template in YAML format
Added config .yml to remove blank template
2024-08-14 15:49:55 +08:00
theCyberTech
7fdabda97e updated bug report template to yml for more control 2024-08-14 15:08:59 +08:00
Eduardo Chiarotti
7306414de7 docs: fix references to annotations (#1176) 2024-08-13 12:58:12 -03:00
Eduardo Chiarotti
97d7bfb52a docs: Update Dalle, FileWrite, Nl2Sql and Side menu Tools (#1175)
* docs: Update Dalle, FileWrite, Nl2Sql and Side menu Tools

* docs: remove unused phrase

* docs: fix indentation
2024-08-13 12:29:34 -03:00
Rafael Miller
9f85a2a011 Added Firecrawl tools to docs (#628) 2024-08-13 12:09:11 -03:00
João Moura
ab47d276db preparing new version 2024-08-11 22:07:54 -03:00
João Moura
44e38b1d5e Fixing telemetry condition that was missing 2024-08-11 22:07:45 -03:00
João Moura
e9fa2bb556 fix broken link 2024-08-11 15:52:25 -03:00
João Moura
183f466ac4 adding new docs 2024-08-11 15:50:42 -03:00
João Moura
cc7b7e2b79 adding testing link 2024-08-11 15:39:30 -03:00
João Moura
a17fa70b1b Updating docs 2024-08-11 15:04:45 -03:00
João Moura
7b63b6f485 preparing new version 2024-08-11 01:33:20 -03:00
João Moura
ed5d81fa1a Fixing evaluator reporter 2024-08-11 01:32:40 -03:00
João Moura
c2d12b2de2 Updating templates to new versions 2024-08-11 01:02:47 -03:00
João Moura
8966dc2f2f Preparing new version 2024-08-11 00:58:41 -03:00
João Moura
59ab1ef9f4 adding docs for new tools 2024-08-11 00:07:00 -03:00
João Moura
227cca00a2 preparing new version 2024-08-10 17:59:17 -03:00
João Moura
16dab8e583 missing arg 2024-08-10 17:58:54 -03:00
João Moura
1c97b916d9 fixing mock_agent_ops_provider 2024-08-10 17:26:45 -03:00
João Moura
94b52cfd87 fixing mock_agent_ops_provider 2024-08-10 17:21:21 -03:00
Abebe M.
82b1db1711 Handle minor issue: tools name shouldn't contain space for openai (#961)
As per (https://github.com/langchain-ai/langchain/pull/16395), OpenAI functions don't accept tool names with space. Therefore, I added an exception handling snippet to raise an issue if a custom tool name has a space.
2024-08-10 16:51:08 -03:00
Joshua Harper
638a8f03f0 Sanitize agent roles to ensure valid directory names (#1037) 2024-08-10 09:50:38 -03:00
Vikram Guhan Subbiah
dbce944934 AgentOps ENG-525: Decouple CrewAI and AgentOps (#1033)
* Make AgentOps import optional upon AGENTOPS_API_KEY
    being set

Co-authored-by: João Moura <joaomdmoura@gmail.com>
2024-08-10 09:47:13 -03:00
David
f1ad137fb7 remove broken links (#1043) 2024-08-10 09:45:21 -03:00
Jason Wu
5eb1cff9b5 Update AgentOps-Observability.md (#1044)
Fix the incorrectly formatted external link
2024-08-10 09:43:22 -03:00
Thiago Moretto
b074138e39 Increase test coverage for output to file (#1049) 2024-08-10 09:42:47 -03:00
Constantin Schreiber
6ca051e5f3 Update Start-a-New-CrewAI-Project-Template-Method.md (#1054)
Fixed grammar and typo

Co-authored-by: João Moura <joaomdmoura@gmail.com>
2024-08-10 09:39:49 -03:00
fastali
fd87d930a7 Update LLM-Connections.md (#1071)
ollama integration example code bug fixed.
2024-08-10 08:56:30 -03:00
Chris Johnston
95a9691a8b Update Start-a-New-CrewAI-Project-Template-Method.md (#1081)
I helped 💚
2024-08-10 08:55:39 -03:00
maf-rnmourao
e2d6e2649e Fix misplaced task info from process doc (#1098)
Co-authored-by: rnmourao <robertonunesmourao@yahoo.com.br>
2024-08-10 08:55:18 -03:00
Giulio De Luise
d3ff1bf01d Fix documentation typo. (#1153) 2024-08-10 08:54:40 -03:00
Muhammad Hakim Asy'ari
d68b8cf6e4 Remove orphan links (#1163)
Remove deprecated links, related to #1019
2024-08-10 08:54:12 -03:00
João Moura
6615ab2fba preparing new version 2024-08-10 03:28:53 -07:00
João Moura
5e83a36009 adding test results telemetry 2024-08-10 03:13:11 -07:00
Eduardo Chiarotti
51ee483e9d feat: add ability to train on custom file (#1161)
* feat: add ability to train on custom file

* feat: add pkl file validation

* feat: fix tests

* feat: fix tests

* feat: fix tests
2024-08-09 19:41:58 -03:00
Lorenze Jay
62f5b2fb2e Brandon/cre 130 pipeline project structure (#1066)
* WIP. Procedure appears to be working well. Working on mocking properly for tests

* All tests are passing now

* rshift working

* Add back in Gui's tool_usage fix

* WIP

* Going to start refactoring for pipeline_output

* Update terminology

* new pipeline flow with traces and usage metrics working. need to add more tests and make sure PipelineOutput behaves like CrewOutput

* Fix pipelineoutput to look more like crewoutput and taskoutput

* Implemented additional tests for pipeline. One test is failing. Need team support

* Update docs for pipeline

* Update pipeline to properly process input and output dictionary

* Update Pipeline docs

* Add back in commentary at top of pipeline file

* Starting to work on router

* Drop router for now. will add in separately

* In the middle of fixing router. A ton of circular dependencies. Moving over to a new design.

* WIP.

* Fix circular dependencies and updated PipelineRouter

* Add in Eduardo feedback. Still need to add in more commentary describing the design decisions for pipeline

* Add developer notes to explain what is going on in pipelines.

* Add doc strings

* Fix missing rag datatype

* WIP. Converting usage metrics from a dict to an object

* Fix tests that were checking usage metrics

* Drop todo

* Fix 1 type error in pipeline

* Update pipeline to use UsageMetric

* Add missing doc string

* WIP.

* Change names

* Rename variables based on joaos feedback

* Fix critical circular dependency issues. Now needing to fix trace issue.

* Tests working now!

* Add more tests which showed underlying issue with traces

* Fix tests

* Remove overly complicated test

* Add router example to docs

* Clean up end of docs

* Clean up docs

* Working on creating Crew templates and pipeline templates

* WIP.

* WIP

* Fix poetry install from templates

* WIP

* Restructure

* changes for lorenze

* more todos

* WIP: create pipelines cli working

* wrapped up router

* ignore mypy src on templates

* ignored signature of copy

* fix all verbose

* rm print statements

* brought back correct folders

* fixes missing folders and then rm print statements

* fixed tests

* fixed broken test

* fixed type checker

* fixed type ignore

* ignore types for templates

* needed

* revert

* exclude only required

* rm type errors on templates

* rm excluding type checks for template files on github action

* fixed missing quotes

---------

Co-authored-by: Brandon Hancock <brandon@brandonhancock.io>
2024-08-09 14:13:29 -07:00
Eduardo Chiarotti
6583f31459 Update issue templates (#1076)
* Update issue templates

* Update custom.md
2024-08-08 20:10:41 -03:00
Eduardo Chiarotti
217f5fc5ac Create stale.yml (#1158) 2024-08-08 11:54:13 -03:00
Eduardo Chiarotti
297dc93fb4 feat: add cli to run the crew (#1080)
* feat: add cli to run the crew

* feat: change command to run_crew

* feat: change pyproject to run_crew

* docs: change docs to address crewai run
2024-08-08 10:48:22 -03:00
Lorenze Jay
86c6760f58 Fix logging types to bool (#1051)
* fixes pydantic validations hierarchical

* more tests

* logger logs everything or not

* verbose rm levels to bool

* updated readme verbose levels
2024-08-07 10:31:18 -07:00
Eduardo Chiarotti
498e96a419 Update issue templates (#1067)
* Update issue templates

Add both Bug and Feature templates

* Update feature_request.md
2024-08-06 14:47:00 -03:00
Thiago Moretto
c0c59dc932 Merge pull request #1064 from crewAIInc/thiago/pipeline-fix
Fix flaky test due to suppressed error on `on_llm_start` callback
2024-08-05 16:13:19 -03:00
Thiago Moretto
f3b3d321e5 Fix lint issue 2024-08-05 13:34:03 -03:00
Thiago Moretto
67e4433dc2 Fix flaky test due to suppressed error on on_llm_start callback 2024-08-05 13:29:39 -03:00
Rip&Tear
4a7ae8df71 Update LLM-Connections.md (#1039)
* Minor fixes and updates

* minor fixes across docs

* Updated LLM-Connections.md

---------

Co-authored-by: theCyberTech <mattrapidb@gmail.com>
2024-08-02 15:04:52 -03:00
Rip&Tear
09f92122d5 Docs minor fixes (#1035)
* Minor fixes and updates

* minor fixes across docs

---------

Co-authored-by: theCyberTech <mattrapidb@gmail.com>
2024-08-02 15:01:16 -03:00
Lorenze Jay
8118b7b7d6 Feat/sliding context window (#1042)
* patching for non-gpt model

* removal of json_object tool name assignment

* fixed issue for smaller models due to instructions prompt

* fixing for ollama llama3 models

* WIP: generated summary from documents split, could also create memgpt approach

* WIP: need tests but user inputted summarization strategy implemented - handling context window exceeding errors

* rm extra line

* removed type ignores

* added tests

* handling n to summarize prompt

* code cleanup, using click for cli asker

* rm not used class

* better refactor

* reverted poetry lock

* reverted poetry.lock

* improved context window exceeding exception class
2024-08-01 13:15:50 -07:00
João Moura
c93b85ac53 Preparing for new version 2024-07-30 19:21:18 -04:00
Lorenze Jay
6378f6caec WIP fixed mypy src types (#1036) 2024-07-30 10:59:50 -07:00
Eduardo Chiarotti
d824db82a3 feat: Add execution time to both task and testing feature (#1031)
* feat: Add execution time to both task and testing feature

* feat: Remove unused functions

* feat: change test_crew to evaluate_crew to avoid issues with testing libs

* feat: fix tests
2024-07-29 23:17:07 -03:00
Matt Young
de6b597eff telemetry.py - fix typo in comment. (#1020) 2024-07-29 23:03:51 -03:00
Deepak Tammali
6111d05219 docs: Fix crewai-tools package name typo in getting-started docs (#1026) 2024-07-29 23:03:32 -03:00
Monarch Wadia
f83c91d612 Fixed package name typo in pip install command (#1029)
Changed `pip install crewai-tools` to `pip install crewai-tools`
2024-07-29 23:02:48 -03:00
Mackensie Alvarez
c8f360414e Update Start-a-New-CrewAI-Project-Template-Method.md (#1030) 2024-07-29 23:02:18 -03:00
Brandon Hancock (bhancock_ai)
fa4393d77e Add in missing triple quote and execution time to resume agent functionality. (#1025)
* Add in missing triple quote and execution time to resume agent functionality

* Fixing broken kwargs and other issues causing our tests to fail
2024-07-29 14:39:02 -03:00
Rip&Tear
25c314befc Minor fixes and updates (#1019)
Co-authored-by: theCyberTech <mattrapidb@gmail.com>
2024-07-29 03:24:23 -03:00
Rip&Tear
2fe79e68cd Small 404 error fixes (#1018)
* Updated Docs:  New Getting started section + content update / addition

* fixed indentation issue

* Minor updates to fix typos

* Fixed up 404 error on latest commit

---------

Co-authored-by: theCyberTech <the_t3ch@pm.me>
Co-authored-by: theCyberTech <mattrapidb@gmail.com>
2024-07-28 22:01:04 -03:00
Nuraly
37d05a2365 Update Force-Tool-Ouput-as-Result.md (#964)
I think there is some mistake, because there is no such parameter as force_output_result; as the code shows, the correct parameter, result_as_answer, is set during agent creation, not task creation.
2024-07-28 15:41:56 -03:00
Carine Bruyndoncx
0111d261a4 Update Crews.md - correct result variable to crew_output (#972) 2024-07-28 15:40:36 -03:00
Taleb
0a23e1dc13 Performed spell check across the rest of the code base, and enhanced the yaml parser code a little (#895)
* Performed spell check across the entire documentation

Thank you once again!

* Performed spell check across the most of code base
Folders checked:
- agents
- cli
- memory
- project
- tasks
- telemetry
- tools
- translations

* Trying to add a max_token for the agents, so they are limited by the number of tokens.

* Performed spell check across the rest of the code base, and enhanced the yaml parser code a little

* Small change in the main agent doc

* Improve _save_file method to handle both dict and str inputs

- Add check for dict type input
- Use json.dump for dict serialization
- Convert non-dict inputs to string
- Remove type ignore comments

---------

Co-authored-by: João Moura <joaomdmoura@gmail.com>
2024-07-28 15:39:54 -03:00
Henri Wenlin
ef5ff71346 feat: add verbose option for printing in ToolUsage (#990) 2024-07-28 15:12:10 -03:00
Samuel Mallet
1697b4cacb Add docs for new parameters to SerperDevTool (#993) 2024-07-28 15:09:55 -03:00
Taleb
6b4710a8d1 Improve _save_file method to handle both dict and str inputs (#1011)
- Add check for dict type input
- Use json.dump for dict serialization
- Convert non-dict inputs to string
- Remove type ignore comments
2024-07-28 15:03:18 -03:00
Lennex Zinyando
6f2a8f08ba Fixes getting started section links (#1016) 2024-07-28 15:02:41 -03:00
João Moura
4e6abf596d updating test 2024-07-28 13:23:03 -04:00
Rip&Tear
9018e2ab6a Docs update (#1008)
* Updated Docs:  New Getting started section + content update / addition

* fixed indentation issue

* Minor updates to fix typos

---------

Co-authored-by: theCyberTech <the_t3ch@pm.me>
2024-07-28 11:55:09 -03:00
ResearchAI
99d023c5f3 Update reset_memories_command.py (#974) 2024-07-26 14:40:47 -07:00
Brandon Hancock (bhancock_ai)
da7d8256eb Json Task Output Truncation with Escape Characters (#1009)
* Fixed special character issue when converting json to models. Added numerous tests to ensure things work properly.

* Fix linting error and cleaned up tests

* Fix customer_converter_cls test failure

* Fixed tests. Thank you lorenze for pointing that out. added a few more to ensure converter creation works properly

* Address lorenze feedback

* Fix linting issues
2024-07-26 17:27:01 -04:00
Brandon Hancock (bhancock_ai)
88bffaa0d0 Merge pull request #1012 from crewAIInc/fix/breaking-test-task-eval
fix test due to asserting instructions model_schema change
2024-07-26 16:55:26 -04:00
Lorenze Jay
1159140d9f fix test due to asserting instructions model_schema change 2024-07-26 13:37:44 -07:00
Lorenze Jay
5ac7050f7a Patch/non gpt model pydantic output (#1003)
* patching for non-gpt model

* removal of json_object tool name assignment

* fixed issue for smaller models due to instructions prompt

* fixing for ollama llama3 models

* closing brackets

* removed not used and fixes
2024-07-26 10:57:56 -07:00
Lorenze Jay
8b513de64c hierarchical process unblocked for async tasks (#995)
* WIP: hierarchical unblock for async tasks

* added better test

* update name change

* added more test and crew manager cleanup

* remove prints

* code cleanup, no need to pass manager
2024-07-26 10:55:51 -07:00
Eduardo Chiarotti
144e6d203f feat: add ability to set LLM for AgentPLanner on Crew (#1001)
* feat: add ability to set LLM for AgentPLanner on Crew

* feat: fixes issue on instantiating the ChatOpenAI on the crew

* docs: add docs for the planning_llm new parameter

* docs: change message to ChatOpenAI llm

* feat: add tests
2024-07-26 14:24:29 -03:00
Eduardo Chiarotti
2d2154ed65 feat: add crew Testing/Evaluating feature (#998)
* feat: add crew Testing/evaluating feature

* feat: add docs and add unit test

* feat: improve testing output table

* feat: add tests

* feat: fix type checking issue

* feat: add raise ValueError when testing if the output is not the expected one

* docs: add docs for Testing

* feat: improve tests and fix some issue

* feat: back to sync

* feat: change openai model

* feat: fix test
2024-07-26 14:23:51 -03:00
Brandon Hancock (bhancock_ai)
2d086ab596 Merge pull request #994 from crewAIInc/fix/getting-started-docs
fixed bullet points for crew yaml annotations
2024-07-23 14:36:45 -04:00
Lorenze Jay
776c67cc0f clearer usage for crewai create command 2024-07-23 11:32:25 -07:00
Lorenze Jay
78ef490646 fixed bullet points for crew yaml annotations 2024-07-23 11:31:09 -07:00
Lorenze Jay
4da5cc9778 Feat yaml config all attributes (#985)
* WIP: yaml proper mapping for agents and agent

* WIP: added output_json and output_pydantic setup

* WIP: core logic added, need cleanup

* code cleanup

* updated docs and example template to use yaml to reference agents within tasks

* cleanup type errors

* Update Start-a-New-CrewAI-Project.md

---------

Co-authored-by: João Moura <joaomdmoura@gmail.com>
2024-07-23 00:21:01 -03:00
Eduardo Chiarotti
6930656897 feat: add crewai test feature (#984)
* feat: add crewai test feature

* fix: remove unused import

* feat: update docstring

* fix: tests
2024-07-22 17:21:05 -03:00
João Moura
349753a013 prepping new version 2024-07-20 12:26:32 -04:00
Eduardo Chiarotti
f53a3a00e1 fix: planning feature output (#969)
* fix: planning feature output

* fix: add validation for planning result
2024-07-20 11:56:53 -03:00
João Moura
e2113fe417 preparing new version 2024-07-19 13:22:28 -04:00
Eduardo Chiarotti
f9288295e6 fix: agent missing fix (#966) 2024-07-19 13:15:33 -03:00
João Moura
fcc57f2fc0 removing extra logging 2024-07-19 01:16:15 -04:00
Dev Khant
5cb6ee9eeb Docs: Update info about tools (#896) 2024-07-19 01:38:42 -03:00
ariel
b38f0825e7 Fix broken link to the installation guide (#912)
Updated the installation guide link to use the absolute URL instead of a relative path, ensuring it correctly points to 'https://docs.crewai.com/how-to/Installing-CrewAI/'.
2024-07-19 01:37:54 -03:00
Salman Faroz
f51e94dede Update Crews.md (#889)
To solve :
I encountered an error while trying to use the tool. This was the error: DuckDuckGoSearchRun._run() got an unexpected keyword argument 'q'.
 Tool duckduckgo_search accepts these inputs: A wrapper around DuckDuckGo Search. Useful for when you need to answer questions about current events. Input should be a search query.

refer : https://github.com/joaomdmoura/crewAI/issues/316
2024-07-19 01:37:24 -03:00
robbyriverside
47bf93d291 Update Memory.md (#728)
The memory documentation left me with a lot of questions, so I went through the code to find answers. I added this paragraph to explain what I found. Hope this is helpful.
2024-07-19 01:36:54 -03:00
Braelyn Boynton
41fd1c6124 upgrade agentops to 0.3 (#957)
* upgrade agentops to 0.3

* lockfile
2024-07-18 13:30:04 -03:00
Lorenze Jay
be1b9a3994 Reset memory (#958)
* reseting memory on cli

* using storage.reset

* deleting memories on command

* added tests

* handle when no flags are used

* added docs
2024-07-18 13:29:42 -03:00
Eduardo Chiarotti
61a196394b feat: Add planning feature to crew (#919)
* feat: add planning feature to crew

* feat: add test to planning handler and change to execute_async method

* docs: add planning parameter to the Core documentation

* docs: add planning docs

* fix: fix type checking issue

* fix: test and logic
2024-07-18 13:15:08 -03:00
Lorenze Jay
5b442e4350 Merge pull request #951 from crewAIInc/test-hierarchical-tools-proper-setup
Test hierarchical tools proper setup
2024-07-17 08:53:23 -07:00
Lorenze Jay
c9920b9823 better spacing 2024-07-17 08:40:52 -07:00
Lorenze Jay
2faa2dbddb code cleanup 2024-07-17 08:39:57 -07:00
Lorenze Jay
76607062f0 using gpt4o 2024-07-17 08:27:43 -07:00
Lorenze Jay
a8cac9b7e9 Merge branch 'main' of github.com:joaomdmoura/crewAI into test-hierarchical-tools-proper-setup 2024-07-17 08:21:13 -07:00
Brandon Hancock (bhancock_ai)
dfacc8832f Merge pull request #954 from crewAIInc/hotfix/improve-async-logging
Fix logging for async and sync tasks
2024-07-17 11:20:13 -04:00
Lorenze Jay
93f643f851 fixed test 2024-07-17 08:20:05 -07:00
Brandon Hancock
cbf5d548be Merge branch 'main' into hotfix/improve-async-logging 2024-07-17 11:17:23 -04:00
Lorenze Jay
6946b89e17 Merge branch 'main' of github.com:joaomdmoura/crewAI into test-hierarchical-tools-proper-setup 2024-07-17 08:16:44 -07:00
Brandon Hancock (bhancock_ai)
dc4911b1ca Merge pull request #950 from crewAIInc/conditional-task-f
conditional task feat
2024-07-17 11:08:06 -04:00
Brandon Hancock
6ad218f9a0 Fix issues found by linter 2024-07-17 11:05:31 -04:00
Brandon Hancock
36efa172ee Add more tests. Clean up docs. Improve conditional task 2024-07-17 11:03:11 -04:00
Brandon Hancock
a7a2dfd296 Fix logging 2024-07-17 10:10:34 -04:00
João Moura
7baaeacac3 Adding better support for open source tool calling models (#952)
* Adding better support for open source tool calling models

* making sure the right tool is called

* fixing tests

* better support opensource models
2024-07-17 05:54:13 -03:00
Lorenze Jay
021f2eb8a1 Merge branch 'conditional-task-f' of github.com:joaomdmoura/crewAI into test-hierarchical-tools-proper-setup 2024-07-16 20:35:27 -07:00
Lorenze Jay
cb720143c7 Merge branch 'main' of github.com:joaomdmoura/crewAI into conditional-task-f 2024-07-16 20:34:35 -07:00
Lorenze Jay
731de2ff31 Merge branch 'test-hierarchical-tools-proper-setup' of github.com:joaomdmoura/crewAI into test-hierarchical-tools-proper-setup 2024-07-16 20:31:42 -07:00
Lorenze Jay
24e28da203 Merge branch 'conditional-task-f' of github.com:joaomdmoura/crewAI into test-hierarchical-tools-proper-setup 2024-07-16 20:28:50 -07:00
Lorenze Jay
bde0a3e99c code cleanup 2024-07-16 20:11:52 -07:00
Lorenze Jay
0415b9982b code cleanup 2024-07-16 20:07:05 -07:00
Brandon Hancock (bhancock_ai)
99ada42d97 Merge pull request #941 from crewAIInc/bugfix/minor-max-retry-recursion-fix
Properly capture result from max retry recursive call
2024-07-16 22:05:58 -04:00
Lorenze Jay
ee32d36312 Merge branch 'conditional-task-f' of github.com:joaomdmoura/crewAI into test-hierarchical-tools-proper-setup 2024-07-16 16:05:09 -07:00
Lorenze Jay
ef928ee3cb added docs and tests 2024-07-16 16:04:41 -07:00
Lorenze Jay
c66559345f Merge branch 'conditional-task-f' of github.com:joaomdmoura/crewAI into test-hierarchical-tools-proper-setup 2024-07-16 15:20:46 -07:00
Lorenze Jay
3ad95d50d4 ensures _update_manager_tools has a manager otherwise throw error 2024-07-16 15:15:50 -07:00
Lorenze Jay
bc7f601f84 updated fixes for conditional tasks 2024-07-16 15:10:13 -07:00
Lorenze Jay
e8cbdb7881 fixed hierarchial manager tools when assigned an agent 2024-07-16 14:00:25 -07:00
Lorenze Jay
b0c2b15a3e better code spacing 2024-07-16 13:07:31 -07:00
Lorenze Jay
c0f04bbb37 removing unused code 2024-07-16 13:06:50 -07:00
Lorenze Jay
c320fc655e conditional task feat 2024-07-16 12:04:34 -07:00
Brandon Hancock (bhancock_ai)
ac2815c781 Add docs for crewoutput and taskoutput (#943)
* Add docs for crewoutput and taskoutput

* Add reference to change log
2024-07-15 21:39:15 -03:00
Gui Vieira
dd8a199e99 Introduce structure keys (#902)
* Introduce structure keys

* Add agent key to tasks

* Rebasing is hard

* Rename task output telemetry

* Feedback
2024-07-15 19:37:07 -03:00
Gui Vieira
161c4a6856 Fix crew creation telemetry (#939)
* Fix crew creation telemetry

* Remove task index
2024-07-15 17:43:57 -03:00
Lorenze Jay
67b04b30bf Replay feat using db (#930)
* Cleaned up task execution to now have separate paths for async and sync execution. Updating all kickoff functions to return CrewOutput. WIP. Waiting for Joao feedback on async task execution with task_output

* Consistently storing async and sync output for context

* outline tests I need to create going forward

* Major rehaul of TaskOutput and CrewOutput. Updated all tests to work with new change. Need to add in a few final tricky async tests and add a few more to verify output types on TaskOutput and CrewOutput.

* Encountering issues with callback. Need to test on main. WIP

* working on tests. WIP

* WIP. Figuring out disconnect issue.

* Cleaned up logs now that I've isolated the issue to the LLM

* more wip.

* WIP. It looks like usage metrics has always been broken for async

* Update parent crew who is managing for_each loop

* Merge in main to bugfix/kickoff-for-each-usage-metrics

* Clean up code for review

* Add new tests

* Final cleanup. Ready for review.

* Moving copy functionality from Agent to BaseAgent

* Fix renaming issue

* Fix linting errors

* use BaseAgent instead of Agent where applicable

* Fixing missing function. Working on tests.

* WIP. Needing team to review change

* Fixing issues brought about by merge

* WIP: need to fix json encoder

* WIP need to fix encoder

* WIP

* WIP: replay working with async. need to add tests

* Implement major fixes from yesterday's group conversation. Now working on tests.

* The majority of tasks are working now. Need to fix converter class

* Fix final failing test

* Fix linting and type-checker issues

* Add more tests to fully test CrewOutput and TaskOutput changes

* Add in validation for async cannot depend on other async tasks.

* WIP: working replay feat fixing inputs, need tests

* WIP: core logic of seq and hier for executing tasks added into one

* Update validators and tests

* better logic for seq and hier

* replay working for both seq and hier just need tests

* fixed context

* added cli command + code cleanup TODO: need better refactoring

* refactoring for cleaner code

* added better tests

* removed todo comments and fixed some tests

* fix logging now all tests should pass

* cleaner code

* ensure replay is declared when replaying specific tasks

* ensure hierarchical works

* better typing for stored_outputs and separated task_output_handler

* added better tests

* added replay feature to crew docs

* easier cli command name

* fixing changes

* using sqlite instead of .json file for logging previous task_outputs

* tools fix

* added to docs and fixed tests

* fixed .db

* fixed docs and removed unneeded comments

* separating ltm and replay db

* fixed printing colors

* added how to doc

---------

Co-authored-by: Brandon Hancock <brandon@brandonhancock.io>
2024-07-15 17:14:10 -03:00
Gui Vieira
7696b45fc3 Fix tool usage (#925)
* Fix tool usage

* new tests

---------

Co-authored-by: João Moura <joaomdmoura@gmail.com>
2024-07-15 17:13:35 -03:00
Brandon Hancock
641921eb6c capture result from recursive call 2024-07-15 13:59:58 -04:00
Brandon Hancock
a02d2fb93e Add return statement to recursive call 2024-07-15 13:40:51 -04:00
Gui Vieira
b93632a53a [DO NOT MERGE] Provide inputs on crew creation (#898)
* Provide inputs on crew creation

* Better naming

* Add crew id and task index to tasks

* Fix type again
2024-07-15 09:00:02 -03:00
Eduardo Chiarotti
09938641cd feat: add max retry limit to agent execution (#899)
* feat: add max retry limit to agent execution

* feat: add test to max retry limit feature

* feat: add code execution docstring

---------

Co-authored-by: João Moura <joaomdmoura@gmail.com>
2024-07-15 08:58:50 -03:00
Brandon Hancock (bhancock_ai)
7acf0b2107 Feature/use converter instead of manually trimming (#894)
* Exploring output being passed to tool selector to see if we can better format data

* WIP. Adding JSON repair functionality

* Almost done implementing JSON repair. Testing fixes vs current base case.

* More action cleanup with additional tests

* WIP. Trying to figure out what is going on with tool descriptions

* Update tool description generation

* WIP. Trying to find out what is causing the tools to duplicate

* Replacing tools properly instead of duplicating them accidentally

* Fixing issues for MR

* Update dependencies for JSON_REPAIR

* More cleaning up pull request

* prepping for call

* Fix type-checking issues

---------

Co-authored-by: João Moura <joaomdmoura@gmail.com>
2024-07-15 08:53:41 -03:00
OP (oppenheimer)
4eb4073661 Add Groq - OpenAI Compatible API - details (#934) 2024-07-14 16:11:54 -03:00
Brandon Hancock (bhancock_ai)
7b53457ef3 Feature/kickoff consistent output (#847)
* Cleaned up task execution to now have separate paths for async and sync execution. Updating all kickoff functions to return CrewOutput. WIP. Waiting for Joao feedback on async task execution with task_output

* Consistently storing async and sync output for context

* outline tests I need to create going forward

* Major rehaul of TaskOutput and CrewOutput. Updated all tests to work with new change. Need to add in a few final tricky async tests and add a few more to verify output types on TaskOutput and CrewOutput.

* Encountering issues with callback. Need to test on main. WIP

* working on tests. WIP

* WIP. Figuring out disconnect issue.

* Cleaned up logs now that I've isolated the issue to the LLM

* more wip.

* WIP. It looks like usage metrics has always been broken for async

* Update parent crew who is managing for_each loop

* Merge in main to bugfix/kickoff-for-each-usage-metrics

* Clean up code for review

* Add new tests

* Final cleanup. Ready for review.

* Moving copy functionality from Agent to BaseAgent

* Fix renaming issue

* Fix linting errors

* use BaseAgent instead of Agent where applicable

* Fixing missing function. Working on tests.

* WIP. Needing team to review change

* Fixing issues brought about by merge

* WIP

* Implement major fixes from yesterday's group conversation. Now working on tests.

* The majority of tasks are working now. Need to fix converter class

* Fix final failing test

* Fix linting and type-checker issues

* Add more tests to fully test CrewOutput and TaskOutput changes

* Add in validation for async cannot depend on other async tasks.

* Update validators and tests
2024-07-11 00:35:02 -03:00
João Moura
691b094a40 adding new docs 2024-07-08 03:15:14 -04:00
prime-computing-lab
68e9e54c88 Update MDXSearchTool.md (#745)
description fixed to markdown language instead of marketing search
2024-07-08 02:21:00 -03:00
João Moura
d0d99125c4 updating crewAI-tools verison 2024-07-08 01:17:22 -04:00
Taleb
129000d01f Performed spell check across most of code base (#882)
* Performed spell check across the entire documentation

Thank you once again!

* Performed spell check across the most of code base
Folders checked:
- agents
- cli
- memory
- project
- tasks
- telemetry
- tools
- translations
2024-07-07 13:00:05 -03:00
WellyngtonF
47f9d026dd passing cloned agents when copying context (#885) 2024-07-07 12:58:38 -03:00
Gui Vieira
b75b0b5552 Emit task created (#875)
* Emit task created

* Limit data to shared crews
2024-07-07 12:58:24 -03:00
João Moura
3dd6249f1e TYPO 2024-07-06 20:03:54 -04:00
João Moura
8451113039 new docs 2024-07-06 16:32:00 -04:00
João Moura
a79b216875 preparing new version 2024-07-06 12:26:41 -04:00
João Moura
52217c2f63 updating dependencies and fixing tests (#878) 2024-07-06 02:14:52 -03:00
Eelke van den Bos
7edacf6e24 Add converter_cls option to Task (#800)
* Add converter_cls option to Task

Fixes #799

* Update task_test.py

* Update task.py

* Update task.py

* Update task_test.py

* Update task.py

* Update task.py

* Update task.py

* Update task.py

---------

Co-authored-by: João Moura <joaomdmoura@gmail.com>
2024-07-06 02:01:39 -03:00
João Moura
58558a1950 TYPO 2024-07-06 00:34:50 -04:00
Ikko Eltociear Ashimine
1607c85ae5 chore: fix typo (#810)
* chore: update converter.py

attemps -> attempts

* chore: update tool_usage.py

attemps -> attempts
2024-07-06 01:33:48 -03:00
Alex Brinsmead
a6ff342948 Fix incorrect definition of RAG in GithubTool docs (#864) 2024-07-06 01:31:51 -03:00
Taleb
d2eb54ebf8 Performed spell check across the entire documentation (#872)
Thank you once again!
2024-07-06 01:30:40 -03:00
Eduardo Chiarotti
a41bd18599 Fix/async tasks (#877)
* fix: async tasks calls

* fix: some issue along with some type check errors

* fix: some issue along with some type check errors

* fix: async test
2024-07-06 01:30:07 -03:00
Eduardo Chiarotti
bb64c80964 fix: Fix tests (#873)
* fix: call asserts

* fix: test_increment_tool_errors

* fix: test_increment_delegations_for_sequential_process

* fix: test_increment_delegations_for_hierarchical_process

* fix: test_code_execution_flag_adds_code_tool_upon_kickoff

* fix: test_tool_usage_information_is_appended_to_agent

* fix: try to fix test_crew_full_output

* fix: try to fix test_crew_full_output

* fix: test remove vcr to test crew_test test

* fix: comment test to see if ci passes

* fix: comment test to see if ci passes

* fix: test changing prompt tokens to get error on CI

* fix: test changing prompt tokens to get error on CI

* fix: test changing prompt tokens to get error on CI

* fix: test changing prompt tokens to get error on CI

* fix: test new approach

* fix: comment function not working in CI

* fix: github python version

* fix: remove need of vcr

* fix: fix and add comments for all type checking errors
2024-07-05 09:06:56 -03:00
João Moura
2fb56f1f9f Adding support to force a tool return to be the final answer. (#867)
* Adding support to force a tool return to be the final answer.
This will, at the end of the execution, return the tool output.
It will return the output of the latest tool that has the flag set.

* Update src/crewai/agent.py

Co-authored-by: Gui Vieira <guilherme_vieira@me.com>

* Update tests/agent_test.py

Co-authored-by: Gui Vieira <guilherme_vieira@me.com>

---------

Co-authored-by: Gui Vieira <guilherme_vieira@me.com>
2024-07-04 16:36:00 -03:00
MO Jr
35676fe2f5 Update Crews.md (#868)
Fix misspelling
2024-07-04 16:35:07 -03:00
Eduardo Chiarotti
81ed6f177e fix: file_handler issue (#869)
* fix: file_handler issue

* fix: add logic for the trained_agent data
2024-07-04 16:34:43 -03:00
João Moura
4bcd1df6bb TYPO 2024-07-03 18:41:52 -04:00
João Moura
6fae56dd60 TYPO 2024-07-03 18:41:52 -04:00
João Moura
430f0e9013 TYPO 2024-07-03 18:41:52 -04:00
João Moura
d7f080a978 fix agentops attribute 2024-07-03 18:41:52 -04:00
Lorenze Jay
5d18f73654 Lj/optional agent in task bug (#843)
* fixed bug for manager overriding task agent and then added pydantic validators to sequential when no agent is added to task

* better test and fixed task.agent logic

* fixed tests and better validator message

* added validator for async_execution true in tasks whenever in hierarchical run
2024-07-03 18:45:53 -03:00
Brandon Hancock (bhancock_ai)
57fc079267 Bugfix/kickoff for each usage metrics (#844)
* WIP. Figuring out disconnect issue.

* Cleaned up logs now that I've isolated the issue to the LLM

* more wip.

* WIP. It looks like usage metrics has always been broken for async

* Update parent crew who is managing for_each loop

* Merge in main to bugfix/kickoff-for-each-usage-metrics

* Clean up code for review

* Add new tests

* Final cleanup. Ready for review.

* Moving copy functionality from Agent to BaseAgent

* Fix renaming issue

* Fix linting errors

* use BaseAgent instead of Agent where applicable
2024-07-03 15:30:53 -03:00
Alex Brinsmead
706f4cd74a Fix typos in EN "human_feedback" string (#859)
* Fix typo in EN "human_feedback" string

* Fix typos in EN "human_feedback" string
2024-07-03 15:26:58 -03:00
Taleb
2e3646cc96 Improved documentation for training module usage (#860)
- Added detailed steps for training the crew programmatically.
- Clarified the distinction between using the CLI and programmatic approaches.

This update makes it easier for users to understand how to train their crew both through the CLI and programmatically, whether using a UI or API endpoints.

Again Thank you to the author for the great project and the excellent foundation provided!
2024-07-03 15:26:32 -03:00
Brandon Hancock (bhancock_ai)
844cc515d5 Fix agentops poetry install issue (#863)
* Fix agentops poetry install issue

* Updated install requirements tests to fail if .lock becomes out of sync with poetry install. Cleaned up old issues that were merged back in.
2024-07-03 15:22:32 -03:00
Braelyn Boynton
f47904134b Add back AgentOps as Optional Dependency (#543)
* implements agentops with a langchain handler, agent tracking and tool call recording

* track tool usage

* end session after completion

* track tool usage time

* better tool and llm tracking

* code cleanup

* make agentops optional

* optional dependency usage

* remove telemetry code

* optional agentops

* agentops version bump

* remove org key

* true dependency

* add crew org key to agentops

* cleanup

* Update pyproject.toml

* Revert "true dependency"

This reverts commit e52e8e9568.

* Revert "cleanup"

This reverts commit 7f5635fb9e.

* optional parent key

* agentops 0.1.5

* Revert "Revert "cleanup""

This reverts commit cea33d9a5d.

* Revert "Revert "true dependency""

This reverts commit 4d1b460b

* cleanup

* Forcing version 0.1.5

* Update pyproject.toml

* agentops update

* noop

* add crew tag

* black formatting

* use langchain callback handler to support all LLMs

* agentops version bump

* track task evaluator

* merge upstream

* Fix typo in instruction en.json (#676)

* Enable search in docs (#663)

* Clarify text in docstring (#662)

* Update agent.py (#655)

Changed default model value from gpt-4 to gpt-4o.
Reasoning: gpt-4 costs $30 per million tokens while gpt-4o costs $5,
which is more cost-friendly for the default option.

* Update README.md (#652)

Rework example so that if you use a custom LLM, uncommenting it doesn't throw code errors.

* Update BrowserbaseLoadTool.md (#647)

* Update crew.py (#644)

Fixed Type on line 53

* fixes #665 (#666)

* Added timestamp to logger (#646)

* Added timestamp to logger

Updated the logger.py file to include timestamps when logging output. For example:

 [2024-05-20 15:32:48][DEBUG]: == Working Agent: Researcher
 [2024-05-20 15:32:48][INFO]: == Starting Task: Research the topic
 [2024-05-20 15:33:22][DEBUG]: == [Researcher] Task output:

* Update tool_usage.py

* Revert "Update tool_usage.py"

This reverts commit 95d18d5b6f.

incorrect branch for this commit

* support skip auto end session

* conditional protect agentops use

* fix crew logger bug

* fix crew logger bug

* Update crew.py

* Update tool_usage.py

---------

Co-authored-by: João Moura <joaomdmoura@gmail.com>
Co-authored-by: Howard Gil <howardbgil@gmail.com>
Co-authored-by: Olivier Roberdet <niox5199@gmail.com>
Co-authored-by: Paul Sanders <psanders1@gmail.com>
Co-authored-by: Anudeep Kolluri <50168940+Anudeep-Kolluri@users.noreply.github.com>
Co-authored-by: Mike Heavers <heaversm@users.noreply.github.com>
Co-authored-by: Mish Ushakov <10400064+mishushakov@users.noreply.github.com>
Co-authored-by: theCyberTech - Rip&Tear <84775494+theCyberTech@users.noreply.github.com>
Co-authored-by: Saif Mahmud <60409889+vmsaif@users.noreply.github.com>
2024-07-02 21:52:15 -03:00
Salman Faroz
d72b00af3c Update Sequential.md (#849)
To Resolve : 
pydantic_core._pydantic_core.ValidationError: 1 validation error for Task
expected_output
Field required [type=missing, input_value=, input_type=dict]
For further information visit https://errors.pydantic.dev/2.6/v/missing

"Expected Output" is mandatory now as it forces people to be specific about the expected result and get better result


refer : https://github.com/joaomdmoura/crewAI/issues/308
2024-07-02 21:17:53 -03:00
Taleb
bd053a98c7 Enhanced documentation for readability and clarity (#855)
- Added a "Parameters" column to attribute tables. Improved overall document formatting for enhanced readability and ease of use.

Thank you to the author for the great project and the excellent foundation provided!
2024-07-02 21:17:04 -03:00
Lorenze Jay
c18208ca59 fixed mixin (#831)
* fixed mixin

* WIP: fixing types

* type fixes on mixin
2024-07-02 21:16:26 -03:00
João Moura
acbe5af8ce preparing new version 2024-07-02 09:03:20 -07:00
Eduardo Chiarotti
c81146505a docs: Update training feature/code interpreter docs (#852)
* docs: remove training docs from README

* docs: add CodeinterpreterTool to docs and update docs

* docs: fix name of tool
2024-07-02 13:00:37 -03:00
João Moura
6b9a1d4040 adding link to docs 2024-07-01 18:41:31 -07:00
João Moura
508fbd49e9 preparing new version 2024-07-01 18:28:11 -07:00
João Moura
e18a6c6bb8 updating tools 2024-07-01 15:25:29 -07:00
João Moura
16237ef393 rollback update to new version 2024-07-01 15:25:10 -07:00
João Moura
5332d02f36 preparing new version 2024-07-01 15:12:22 -07:00
João Moura
7258120a0d preparing new version 2024-07-01 15:10:13 -07:00
João Moura
8b7bc69ba1 preparing new version 2024-07-01 08:41:13 -07:00
João Moura
5a807eb93f preparing new version 2024-07-01 06:08:46 -07:00
João Moura
130682c93b preparing new version 2024-07-01 05:48:47 -07:00
João Moura
02e29e4681 new docs 2024-07-01 05:32:22 -07:00
João Moura
6943eb4463 small formatting details 2024-07-01 05:32:22 -07:00
João Moura
939a18a4d2 Updating docs 2024-07-01 05:32:22 -07:00
João Moura
ccbe415315 updating docs 2024-07-01 05:32:22 -07:00
João Moura
511af98dea small refactoring for new version 2024-07-01 05:32:22 -07:00
gpu7
a9d94112f5 bugfix in python script sample code (#787)
Add the line:

process = Process.sequential
2024-07-01 00:23:06 -03:00
JoePro
1bca6029fe Update LLM-Connections.md (#796)
Revised to utilize Ollama from langchain.llms instead as the functionality from the other method simply doesn't work when delegating.

Co-authored-by: João Moura <joaomdmoura@gmail.com>
2024-07-01 00:22:38 -03:00
Eelke van den Bos
c027aa8bf6 Set manager verbosity to crew verbosity by default (#797)
Fixes #793
2024-07-01 00:20:39 -03:00
finecwg
ce7d86e0df Update tool_usage.py (#828)
fixed error for some cases with Pandas DataFrame:

ValueError: The truth value of a DataFrame is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all().
2024-07-01 00:19:36 -03:00
Bruno Tanabe
5dfaf866c9 fix: Fix grammar error in documentation 'Crew Attributes' (#836)
Correction of grammar error in the CrewAI documentation, on the page 'https://docs.crewai.com/core-concepts/Crews/' it says 'ustom' instead of 'Custom'.
2024-07-01 00:16:06 -03:00
Gui Vieira
5b66e87621 Improve telemetry (#818)
* Improve telemetry

* Minor adjustments

* Try to fix typing error

* Try to fix typing error [2]
2024-06-28 20:05:47 -03:00
João Moura
851dd0f84f preparing new version 2024-06-27 11:04:08 -07:00
Eduardo Chiarotti
2188358f13 docs: add docs for training (#824) 2024-06-27 14:56:32 -03:00
Lorenze Jay
10997dd175 Lorenzejay/byoa (#776)
* better spacing

* works with llama index

* works on langchain custom just need delegation to work

* cleanup for custom_agent class

* works with different argument expectations for agent_executor

* cleanup for hierarchical process, better agent_executor args handler and added to the crew agent doc page

* removed code examples for langchain + llama index, added to docs instead

* added key output if return is not a str, and added some tests

* added hinting for CustomAgent class

* removed pass as it was not needed

* closer, just need to figure out agentTools

* running agents - llamaindex and langchain with base agent

* some cleanup on baseAgent

* minimum for agent to run for base class and ensure it works with hierarchical process

* cleanup for original agent to take on BaseAgent class

* Agent takes on langchainagent and cleanup across

* token handling working for usage_metrics to continue working

* installed llama-index, updated docs and added better name

* fixed some type errors

* base agent holds token_process

* hierarchical process uses proper tools and no longer relies on hasattr for token_processes

* removal of test_custom_agent_executions

* this fixes copying agents

* leveraging an executor class for triggering the llamaindex agent

* llama index now has ask_human

* executor mixins added

* added output converter base class

* type listed

* cleanup for output conversions and tokenprocess eliminated redundancy

* properly handling tokens

* simplified token calc handling

* original agent with base agent builder structure setup

* better docs

* no more llama-index dep

* cleaner docs

* test fixes

* poetry reverts and better docs

* base_agent_tools set for third party agents

* updated task and test fix
2024-06-27 14:56:08 -03:00
Eduardo Chiarotti
da9cc5f097 fix: fix training_data error (#820)
* fix: fix training_data error

* fix: fix lack of crew on agent

* fix: fix lack of crew on agent executor
2024-06-27 12:58:20 -03:00
Eduardo Chiarotti
c005ec3f78 fix: fix tests (#814) 2024-06-27 05:45:23 -03:00
Eduardo Chiarotti
6018fe5872 feat: add CodeInterpreterTool to run when enable code execution (#804)
* feat: add CodeInterpreterTool to run when enable code execution is allowed on agent

* feat: change to allow_code_execution

* feat: add readme for CodeInterpreterTool
2024-06-27 02:25:39 -03:00
Nuraly
bf0e70999e Update Agents.md (#816)
Made a space to ensure that Header formatting is displayed correctly on the website
2024-06-27 02:23:18 -03:00
Eduardo Chiarotti
175d5b3dd6 feat: Add Train feature for Crews (#686)
* feat: add training logic to agent and crew

* feat: add training logic to agent executor

* feat: add input parameter to cli command

* feat: add utilities for the training logic

* feat: polish code, logic and add private variables

* feat: add docstring and type hinting to executor

* feat: add constant file, add constant to code

* feat: fix name of training handler function

* feat: remove unused var

* feat: change file handler file name

* feat: Add training handler file, class and change on the code

* feat: fix name error from file

* fix: change import to adapt to logic

* feat: add training handler test

* feat: add tests for file and training_handler

* feat: add test for task evaluator function

* feat: change text to fit in-screen

* feat: add test for train function

* feat: add test for agent training_handler function

* feat: add test for agent._use_trained_data
2024-06-27 02:22:34 -03:00
Bruno Tanabe
9e61b8325b fix: Fix grammar error in documentation in PDF Search Tool (#819)
Correction of grammar error in the CrewAI documentation, on the page 'https://docs.crewai.com/tools/PDFSearchTool/' it says 'Optinal' instead of 'Optional'.
2024-06-27 00:41:22 -03:00
João Moura
c4d76cde8f updating docs 2024-06-22 19:49:50 -03:00
João Moura
9c44fd8c4a preparing new version 2024-06-22 17:47:35 -03:00
João Moura
f9f8c8f336 Preparing new version 2024-06-22 17:01:22 -03:00
João Moura
0fb3ccb9e9 preparing to cut new version 2024-06-20 12:58:50 -03:00
João Moura
0e5fd0be2c adding new kickoff docs 2024-06-20 02:46:13 -03:00
João Moura
1b45daee49 adding new docs to the menu 2024-06-20 02:24:02 -03:00
João Moura
9f384e3fc1 Updating Docs 2024-06-20 02:19:35 -03:00
Brandon Hancock (bhancock_ai)
377f919d42 Resolved Merge Conflicts for PR #712: Remove Hyphen in co-workers (#786)
* removed hyphen in co-workers

* Fix issue with AgentTool agent selection. The LLM included double quotes in the agent name which messed up the string comparison. Added additional types. Cleaned up error messaging.

* Remove duplicate import

* Improve explanation

* Revert poetry.lock changes

* Fix missing line in poetry.lock

---------

Co-authored-by: madmag77 <goncharov.artemv@gmail.com>
2024-06-18 16:57:56 -03:00
João Moura
e6445afac5 fixing bug to multiple crews on yaml format in the same project 2024-06-18 02:32:53 -03:00
Lorenze Jay
095015d397 Lorenzejay/crew kickoff union type (#767)
* added extra parameter for kickoff to return token usage count after result

* added output_token_usage to class and in full_output

* logger duplicated

* added more types

* added usage_metrics to full output instead

* added more to the description on full_output

* possible misspacing

* updated kickoff return types to be either string or dict applicable when full_output is set

* removed duplicates
2024-06-14 14:23:55 -03:00
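A sketch of the full_output / usage_metrics behavior referenced in this entry; the result keys follow the PR description and may differ between versions:

```python
from crewai import Agent, Task, Crew

writer = Agent(role="Writer", goal="Write a haiku", backstory="A concise poet.")
task = Task(description="Write a haiku about teamwork.", expected_output="A haiku.", agent=writer)

# With full_output=True, kickoff() returns a dict bundling the final result with
# token accounting instead of just the final string. Key names below are taken
# from the PR description and may differ between versions.
crew = Crew(agents=[writer], tasks=[task], full_output=True)
output = crew.kickoff()
print(output["final_output"])
print(output["usage_metrics"])
```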
Lorenze Jay
614183cbb1 fixes crewai docs assembling crew code block example code (#768) 2024-06-14 14:23:30 -03:00
Dan McKinley
0bc92a284d updates instructor to the latest version. (#760)
* updates instructor to the latest version. adds jsonref, which instructor seems to depend on.

* updates embedchain reference, necessary for python 3.12
2024-06-14 01:57:40 -03:00
Lorenze Jay
d3b6640b4a added usage_metrics to full output (#756)
* added extra parameter for kickoff to return token usage count after result

* added output_token_usage to class and in full_output

* logger duplicated

* added more types

* added usage_metrics to full output instead

* added more to the description on full_output

* possible misspacing
2024-06-12 14:18:52 -03:00
Guangqiang Lu
a1a48888c3 add datetime import for logger.py (#702) 2024-06-11 16:43:15 -03:00
Matt Thompson
bb622bf747 fix: correct default model (gpt-4o), correct token counts, and correct TaskOutput attributes (added agent) (#749)
* fix: 'from datetime import datetime for logging' to print the timestamp

* fix: correct default model (gpt-4o), correct token counts, and correct TaskOutput attributes (added agent)

* test: verify Task callback data is an instance of TaskOutput
2024-06-11 15:29:22 -03:00
Brandon Hancock (bhancock_ai)
946c56494e Feature/kickoff for each sync (#680)
* Sync with deep copy working now

* async working!!

* Clean up code for review

* Fix naming

---------

Co-authored-by: João Moura <joaomdmoura@gmail.com>
2024-06-11 12:51:39 -03:00
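A minimal example of the kickoff_for_each behavior added here (the async variant follows the same shape):

```python
from crewai import Agent, Task, Crew

analyst = Agent(role="Analyst", goal="Summarize topics", backstory="Brief and factual.")
task = Task(
    description="Summarize {topic} in one paragraph.",
    expected_output="One paragraph.",
    agent=analyst,
)
crew = Crew(agents=[analyst], tasks=[task])

# kickoff_for_each deep-copies the crew and runs it once per input dict,
# returning one result per input.
results = crew.kickoff_for_each(inputs=[{"topic": "solar power"}, {"topic": "wind power"}])
for result in results:
    print(result)
```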
Taradepan R
2a0e21ca76 updated the import for cohere llm (#696) 2024-06-04 03:32:23 -03:00
Karthik Kalyanaraman
ea893432e8 Add Langtrace to the "How to" docs for CrewAI Agent Observability (#634)
* Add files via upload

* Create Langtrace-Observability.md

* Rename crewai-agentops-stats.png to crewai-langtrace-stats.png
2024-05-29 02:14:29 -03:00
theCyberTech - Rip&Tear
bf40956491 Added timestamp to logger (#646)
* Added timestamp to logger

Updated the logger.py file to include timestamps when logging output. For example:

 [2024-05-20 15:32:48][DEBUG]: == Working Agent: Researcher
 [2024-05-20 15:32:48][INFO]: == Starting Task: Research the topic
 [2024-05-20 15:33:22][DEBUG]: == [Researcher] Task output:

* Update tool_usage.py

* Revert "Update tool_usage.py"

This reverts commit 95d18d5b6f.

incorrect branch for this commit
2024-05-26 01:32:16 -03:00
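The snippet below is not the repository's Logger implementation, just a minimal illustration of the "[timestamp][LEVEL]: message" format the commit adds:

```python
from datetime import datetime

def log(level: str, message: str) -> None:
    # Reproduces the "[YYYY-MM-DD HH:MM:SS][LEVEL]:" prefix shown in the example output.
    timestamp = datetime.now().strftime("%Y-%m-%d %H:%M:%S")
    print(f"[{timestamp}][{level.upper()}]: {message}")

log("debug", "== Working Agent: Researcher")
log("info", "== Starting Task: Research the topic")
```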
Saif Mahmud
48948e1217 fixes #665 (#666) 2024-05-26 01:31:28 -03:00
theCyberTech - Rip&Tear
27412c89dd Update crew.py (#644)
Fixed typo on line 53
2024-05-24 00:06:27 -03:00
Mish Ushakov
56f1d24e9d Update BrowserbaseLoadTool.md (#647) 2024-05-24 00:05:52 -03:00
Mike Heavers
ab066a11a8 Update README.md (#652)
Rework example so that uncommenting the custom LLM lines doesn't throw code errors.
2024-05-24 00:05:32 -03:00
Anudeep Kolluri
e35e81e554 Update agent.py (#655)
Changed the default model value from gpt-4 to gpt-4o.
Reasoning: gpt-4 costs $30 per million tokens while gpt-4o costs $5,
which is more cost-friendly as the default option.
2024-05-24 00:04:53 -03:00
Paul Sanders
551e48da4f Clarify text in docstring (#662) 2024-05-24 00:04:01 -03:00
Paul Sanders
21ce0aa17e Enable search in docs (#663) 2024-05-24 00:03:31 -03:00
Olivier Roberdet
2d6f2830e1 Fix typo in instruction en.json (#676) 2024-05-24 00:03:07 -03:00
Eduardo Chiarotti
24ed8a2549 feat: Add crew train cli (#624)
* fix: fix crewai-tools cli command

* feat: add crewai train CLI command

* feat: add the tests

* fix: fix typing hinting issue on code

* fix: test.yml

* fix: fix test

* fix: removed fix since it didn't change the test
2024-05-23 18:46:45 -03:00
João Moura
a336381849 adding agent to task output 2024-05-16 05:12:32 -03:00
Jason Schrader
208c3a780c Add version command to CLI (#348)
* feat: add version command to cli with tools flag

* test: check output of version and tools flag

* fix: add version tool info to cli outputs
2024-05-15 19:50:49 -03:00
João Moura
1e112fa50a fixing crew base 2024-05-14 17:40:38 -03:00
João Moura
38fc5510ed preparing new version 0.30.9 2024-05-14 11:32:05 -03:00
João Moura
1a1f4717aa cutting new version with no yaml parsing 2024-05-13 23:09:29 -03:00
João Moura
977c6114ba preparing new version 2024-05-13 22:32:24 -03:00
João Moura
27fddae286 New version, updating dependencies, fixing memory 2024-05-13 22:26:41 -03:00
João Moura
615ac7f297 preparing new version 2024-05-13 12:59:55 -03:00
João Moura
87d28e896d preparing new version 2024-05-13 02:35:46 -03:00
Saif Mahmud
23f10418d7 Fixes #603 (#604) 2024-05-13 02:34:52 -03:00
João Moura
27e7f48a44 Adding new tests 2024-05-13 02:34:33 -03:00
João Moura
7fd8850ddb Small RC Fixes (#608)
* mentioning ollama on the docs as embedder

* lowering barrier to match tool with similar name

* Fixing agent tools to support co_worker

* Adding new tests

* Fixing typo

* updating tests

* fixing conflict
2024-05-13 02:29:04 -03:00
Ítalo Vieira
7a4d3dd496 fix typo exectue -> execute (#607) 2024-05-13 02:19:06 -03:00
João Moura
c1d7936689 preparing new version 2024-05-12 19:56:40 -03:00
Eduardo Chiarotti
1ec4da6947 feat: add mypy as type checker, update code and add comment to reference (#591)
* fix: fix test actually running

* fix: fix test to not send request to openai

* fix: fix linting to remove cli files

* fix: exclude only files that breaks black

* fix: Fix all Ruff checkings on the code and Fix Test with repeated name

* fix: Change linter name on yml file

* feat: update pre-commit

* feat: remove need for isort on the code

* feat: add mypy as type checker, update code and add comment to reference

* feat: remove black linter

* feat: remove poetry to run the command

* feat: change logic to test mypy

* feat: update tests yml to try to fix the tests gh action

* feat: try to add just mypy to run on gh action

* feat: fix yml file

* feat: add comment to avoid issue on gh action

* feat: decouple pytest from the necessity of poetry install

* feat: change tests.yml to test different approach

* feat: change to poetry run

* fix: parameter field on yml file

* fix: update parameters to be on the pyproject

* fix: update pyproject to remove import untyped errors
2024-05-10 16:37:52 -03:00
Steven Edwards
8430c2f9af Task needs an expected_output field in docs. (#568)
* Task needs an expected_output field in docs.

* Add missing comma.
2024-05-10 11:55:10 -03:00
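For context, a minimal example of the required expected_output field the docs fix refers to:

```python
from crewai import Agent, Task

researcher = Agent(role="Researcher", goal="Find key facts", backstory="Thorough and concise.")

# expected_output is a required Task field, which is why the docs examples needed it added.
task = Task(
    description="List three recent developments in battery technology.",
    expected_output="A bullet list with three items, one sentence each.",
    agent=researcher,
)
```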
Ayo Ayibiowu
7cc6bccdec feat: adds support to automatically fallback to the default encoding (#596)
* feat: adds support to automatically fallback to the default encoding

* fix: use the correct method
2024-05-10 11:54:45 -03:00
Eduardo Chiarotti
aeba64feaf Feat: Add Ruff to improve linting/formatting (#588)
* fix: fix test actually running

* fix: fix test to not send request to openai

* fix: fix linting to remove cli files

* fix: exclude only files that breaks black

* fix: Fix all Ruff checkings on the code and Fix Test with repeated name

* fix: Change linter name on yml file

* feat: update pre-commit

* feat: remove need for isort on the code

* feat: remove black linter

* feat: update tests yml to try to fix the tests gh action
2024-05-10 11:53:53 -03:00
GabeKoga
04b4191de5 Fix/yaml formatting (#590)
* Bug/curly_braces_yaml

Added parser to help users on yaml syntax

* context error

Patch and later will prioritize this again to have context work with the yaml
2024-05-09 21:35:21 -03:00
Eduardo Chiarotti
1da7473f26 fix: fix test actually running (#587)
* fix: fix test actually running

* fix: fix test to not send request to openai

* fix: fix linting to remove cli files

* fix: exclude only files that breaks black
2024-05-09 21:33:48 -03:00
João Moura
95d13bd033 prepping new version 2024-05-09 09:12:57 -03:00
Eduardo Chiarotti
7eb4fcdaf4 fix: Add validation to fix output_file issue when the path contains '/' (#585)
* fix: Add validation to fix output_file issue when the path contains /

* fix: run black to format code

* fix: run black to format code
2024-05-09 08:11:00 -03:00
João Moura
809b4b227c Revert "Fix .md doc file 404 error on github (#564)" (#567)
This reverts commit 2bd30af72b.
2024-05-05 10:35:46 -03:00
Alex Fazio
ff51a2da9b corrected imprecision in the instantiation (#555) 2024-05-05 03:55:13 -03:00
João Moura
be83681665 preparing new RC version 2024-05-05 02:57:29 -03:00
Jackie Qi
2bd30af72b Fix .md doc file 404 error on github (#564)
* fix md file link not working on github

* miss one changed file
2024-05-05 02:53:20 -03:00
João Moura
d7b021061b updating .gitignore 2024-05-05 02:52:43 -03:00
João Moura
73647f1669 TYPO 2024-05-05 02:14:49 -03:00
João Moura
d341cb3d5c Fixing manager_agent_support 2024-05-05 00:51:18 -03:00
João Moura
30438410d6 cutting new RC 2024-05-03 00:55:32 -03:00
João Moura
b264ebabc0 adding memoization to crewai project annotations 2024-05-03 00:49:37 -03:00
tarekadam
2edc88e0a1 Update LLM-Connections.md (#553)
fixes command to lower case
2024-05-03 00:25:03 -03:00
João Moura
552dda46f8 updating manager llm pydantic error 2024-05-02 23:39:56 -03:00
João Moura
2340a127d6 cutting new rc 2024-05-02 23:22:02 -03:00
João Moura
ecde504a79 updating gitignore 2024-05-02 21:57:49 -03:00
João Moura
0b781065d2 Better json parsing for smaller models 2024-05-02 21:57:41 -03:00
João Moura
bcb57ce5f9 updating git ignore 2024-05-02 20:52:43 -03:00
David Solito
6392a8cdd0 Update crew.py (#551)
Add manager_agent description in crew docstring
2024-05-02 19:21:22 -03:00
João Moura
34e3dd24b4 new version 2024-05-02 05:00:29 -03:00
João Moura
c303d3730c cutting new version 2024-05-02 05:00:29 -03:00
João Moura
0a53ce17a2 small improvements for i18n 2024-05-02 05:00:29 -03:00
João Moura
7973651e05 new version 2024-05-02 05:00:29 -03:00
João Moura
672b150972 adding initial support for external prompt file 2024-05-02 05:00:29 -03:00
Jason Schrader
d8bcbd7d0a fix typos in generated readme (#345)
small things I noticed while upgrading our setup!
2024-05-02 03:32:18 -03:00
Dmitri Khokhlov
ff2f1477bb fix: TypeError: LongTermMemory.search() missing 1 required positional argument: 'latest_n' (#488)
Signed-off-by: Dmitri Khokhlov <dkhokhlov@gmail.com>
2024-05-02 03:28:36 -03:00
Ikko Eltociear Ashimine
1139073297 fix typo (#489)
* Update test_crew_function_calling_llm.yaml

ouput -> output

* Update tool_usage.py

ouput -> output
2024-05-02 03:27:40 -03:00
Sarvajith Adyanthaya
39deac2747 Changed "Inert" to "Innate" #470 (#490) 2024-05-02 03:27:09 -03:00
ftoppi
0a35868367 Update task.py: try to find json in task output using regex (#491)
* Update task.py: try to find json in task output using regex

Sometimes the model replies with valid JSON plus additional text, so let's try to extract and validate it first. It's cheaper than calling the LLM for that (see the sketch after this entry).

* Update task.py

---------

Co-authored-by: João Moura <joaomdmoura@gmail.com>
2024-05-02 03:26:34 -03:00
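An illustrative sketch of the approach (not the exact regex used in task.py): pull a JSON object out of a mixed reply and only fall back to the LLM-based converter if parsing fails.

```python
import json
import re

def extract_json(text: str):
    # Grab the outermost {...} block, if any, then try to parse it.
    match = re.search(r"\{.*\}", text, re.DOTALL)
    if not match:
        return None
    try:
        return json.loads(match.group(0))
    except json.JSONDecodeError:
        return None

reply = 'Sure! Here is the result: {"title": "Report", "score": 7} Hope that helps.'
print(extract_json(reply))  # {'title': 'Report', 'score': 7}
```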
Mosta
608f869789 Update PGSearchTool.md (#492)
typo on code snippet
2024-05-02 03:22:18 -03:00
Samuel Kocúr
c30bd1a18e fix db_storage_path handling to use env variable or cwd (#507) 2024-05-02 03:16:54 -03:00
Mish Ushakov
20a81af95f Added Browserbase loader to the docs (#508)
* Create BrowserbaseLoadTool.md

* added browserbase loader
2024-05-02 03:15:59 -03:00
deadlious
531c70b476 Tool name recognition based on string distance (#521)
* adding variations of ask question and delegate work tools

* Revert "adding variations of ask question and delegate work tools"

This reverts commit 38d4589be8.

* adding distance calculation for tool names.

* proper formatting

* remove brackets
2024-05-02 03:15:34 -03:00
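A sketch of the string-distance idea using the standard library; the PR's actual metric and threshold may differ:

```python
import difflib
from typing import List, Optional

def resolve_tool_name(requested: str, available: List[str], cutoff: float = 0.85) -> Optional[str]:
    # Pick the registered tool whose name is closest to what the LLM asked for,
    # instead of requiring an exact string match.
    matches = difflib.get_close_matches(
        requested.strip().casefold(),
        [name.casefold() for name in available],
        n=1,
        cutoff=cutoff,
    )
    if not matches:
        return None
    # Map the lower-cased match back to the original tool name.
    return next(name for name in available if name.casefold() == matches[0])

print(resolve_tool_name("Delegate work to coworker",
                        ["Delegate work to co-worker", "Ask question to co-worker"]))
```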
Victor Carvalho Tavernari
dae0aedc99 Add conditional check for output file directory creation (#523)
This commit adds a conditional check that creates the output file's directory when it does not already exist, so the code no longer
fails when saving results to a missing directory. The check is added in the `_save_file` method of the `Task` class, ensuring
that the correct behavior is maintained for saving results to a file (a sketch of the pattern follows this entry).
2024-05-02 03:13:51 -03:00
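A minimal sketch of the guard described above, creating the output file's parent directory when it is missing:

```python
import os

def save_output(path: str, content: str) -> None:
    # Create the parent directory only if it doesn't exist yet, then write the result.
    directory = os.path.dirname(path)
    if directory and not os.path.exists(directory):
        os.makedirs(directory)
    with open(path, "w", encoding="utf-8") as handle:
        handle.write(content)

save_output("outputs/reports/summary.md", "# Summary\n")
```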
Jim Collins
5fde03f4b0 Update README.md (#525)
Reworded "If you want to also install crewai-tools, which is a package with tools that can be used by the agents, but more dependencies, you can install it with, example below uses it:" for clarity
2024-05-02 03:12:03 -03:00
Alex Fazio
48f53b529b fix to import statement PGSearchTool.md (#548)
fix to the import statement in PGSearchTool documentation
2024-05-02 03:10:43 -03:00
João Moura
4d9b0c6138 small fixes and better guardrails for parsing small models' tool usage 2024-05-02 02:21:59 -03:00
João Moura
70cabec876 Adding support for system, prompt and answer templates 2024-05-02 02:21:59 -03:00
João Moura
60423376cf removing unnecessary test 2024-05-02 02:21:59 -03:00
João Moura
22c646294a unifying co-worker string 2024-05-02 02:21:59 -03:00
João Moura
10b317cf34 removing blank line 2024-05-02 02:21:59 -03:00
João Moura
03f0c44cac Fixing task callback 2024-05-02 02:21:59 -03:00
João Moura
caa0e5db8d Revert "AgentOps Implementation (#411)"
This reverts commit 3d5257592b.
2024-05-02 02:21:59 -03:00
Alex Fazio
b862e464f8 docs fix to xml tool import statement (#546)
* docs fix to xml tool import statement

* Update XMLSearchTool.md
2024-05-01 12:53:49 -03:00
Braelyn Boynton
3d5257592b AgentOps Implementation (#411)
* implements agentops with a langchain handler, agent tracking and tool call recording

* track tool usage

* end session after completion

* track tool usage time

* better tool and llm tracking

* code cleanup

* make agentops optional

* optional dependency usage

* remove telemetry code

* optional agentops

* agentops version bump

* remove org key

* true dependency

* add crew org key to agentops

* cleanup

* Update pyproject.toml

* Revert "true dependency"

This reverts commit e52e8e9568.

* Revert "cleanup"

This reverts commit 7f5635fb9e.

* optional parent key

* agentops 0.1.5

* Revert "Revert "cleanup""

This reverts commit cea33d9a5d.

* Revert "Revert "true dependency""

This reverts commit 4d1b460b

* cleanup

* Forcing version 0.1.5

* Update pyproject.toml

---------

Co-authored-by: João Moura <joaomdmoura@gmail.com>
2024-04-20 12:20:13 -03:00
Elijas Dapšauskas
ff76715cd2 Allow minor version patches to python-dotenv (#339)
Co-authored-by: João Moura <joaomdmoura@gmail.com>
2024-04-19 02:44:08 -03:00
Emmanuel Crown
cdb0a9c953 Fixed a typo in the main readme on the llm selection , options for an agent (#349) 2024-04-19 02:42:04 -03:00
Sajal Sharma
b0acae81b0 Update LLM-Connections.md (#353)
Co-authored-by: João Moura <joaomdmoura@gmail.com>
2024-04-19 02:41:36 -03:00
Kaushal Powar
afc616d263 Update GitHubSearchTool.md (#357)
GithubSearchTool was misspelled as GitHubSearchTool

Co-authored-by: João Moura <joaomdmoura@gmail.com>
2024-04-19 02:40:38 -03:00
Selim Erhan
e066b4dcb1 Update LLM-Connections.md (#359)
Created short documentation on how to use Llama2 locally with crewAI, with the help of Ollama.

Co-authored-by: João Moura <joaomdmoura@gmail.com>
2024-04-19 02:39:33 -03:00
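A hedged sketch of the documented setup: pointing an agent at a locally served Llama2 model via Ollama (import paths depend on your langchain version):

```python
# Older langchain releases used `from langchain.llms import Ollama`;
# this assumes a local Ollama server with the llama2 model already pulled.
from langchain_community.llms import Ollama
from crewai import Agent

llama2 = Ollama(model="llama2")

local_researcher = Agent(
    role="Researcher",
    goal="Answer questions using a locally hosted model",
    backstory="Runs fully offline via Ollama.",
    llm=llama2,
)
```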
Christian24
9ea495902e Fix lockfile (#477) 2024-04-18 11:28:06 -03:00
João Moura
d786c367b4 Update README.md 2024-04-17 00:02:49 -03:00
João Moura
a391004432 Adding manager llm 2024-04-16 16:50:44 -03:00
João Moura
dd97a2674d adding new crew installation docs 2024-04-16 16:50:44 -03:00
Joseph Bastulli
437c4c91bc fix: swapped the task callback assignment (#443) 2024-04-16 15:54:42 -03:00
Jack Hayter
575f1f98b0 Prevent duplicate TokenCalcHandler callbacks on Agent (#475) 2024-04-16 15:54:02 -03:00
Alex Reibman
2ee6ab6332 Incorrect documentation link for AgentOps (#458)
* remove .md

* made language more clear

* update images and documentation for spelling

* update typos and links

* update repo placement

* update wording

* clarify

* update wording

* Added clearer features

---------

Co-authored-by: João Moura <joaomdmoura@gmail.com>
2024-04-16 08:24:30 -03:00
Jonathan Morales Vélez
3d862538d2 fix link to observability (#461) 2024-04-16 08:22:11 -03:00
Preston Badeer
4bd36e0460 Update LLM-Connections.md with up to date LM Studio instructions (#468)
Co-authored-by: Preston Badeer <467756+pbadeer@users.noreply.github.com>
2024-04-16 08:20:56 -03:00
Eivind Hyldmo
7fbf0f1988 Fixed typo in Tools.md (#472) 2024-04-16 08:20:25 -03:00
Lennart J. Kurzweg
066127013b Added optional manager_agent parameter (#474)
* Added optional manager_agent parameter

* Update crew.py

---------

Co-authored-by: Lennart J. Kurzweg (Nx2) <git@nx2.site>
Co-authored-by: João Moura <joaomdmoura@gmail.com>
2024-04-16 08:18:36 -03:00
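A short example of the new manager_agent parameter with the hierarchical process; previously the manager could only be derived from manager_llm:

```python
from crewai import Agent, Task, Crew, Process

manager = Agent(
    role="Project Manager",
    goal="Delegate work and assemble the final answer",
    backstory="Coordinates the crew.",
    allow_delegation=True,
)
writer = Agent(role="Writer", goal="Draft short texts", backstory="Clear and concise.")
task = Task(description="Draft a two-sentence product blurb.", expected_output="Two sentences.", agent=writer)

# Supply a pre-configured manager agent instead of letting the crew build one from manager_llm.
crew = Crew(
    agents=[writer],
    tasks=[task],
    process=Process.hierarchical,
    manager_agent=manager,
)
```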
João Moura
f675208d72 cutting new version with updated cli template 2024-04-11 11:30:30 -03:00
João Moura
36aa69cf66 Preparing new version to use new version of crewai-tools 2024-04-10 11:52:12 -03:00
Cfomodz
66b77ffd08 Update README.md (#442) 2024-04-08 05:59:04 -03:00
João Moura
d2a3e4869a preparing new version 2024-04-08 02:08:57 -03:00
João Moura
a2dc7c7f31 adding missing import 2024-04-08 02:08:43 -03:00
João Moura
55ac69776a preparing new version 2024-04-08 01:39:22 -03:00
João Moura
7a7c9b0076 removing unnecessary certificate 2024-04-08 01:39:11 -03:00
João Moura
77d40230a8 preparing new version 2024-04-07 14:55:45 -03:00
João Moura
e4556040a8 fixing long term memory interpolation 2024-04-07 14:55:35 -03:00
João Moura
755b3934a4 prepping new version with new tools package 2024-04-07 14:19:50 -03:00
João Moura
2d77fb72a5 preparing new version 2024-04-07 04:18:05 -03:00
rajib
106b0df42e The suggestions were getting split at character level and not at sentence level (#436)
* fix the issue where the suggestions were split at character level

* Update contextual_memory.py

---------

Co-authored-by: rajib76 <rajib76@yahoo.com>
Co-authored-by: João Moura <joaomdmoura@gmail.com>
2024-04-07 02:57:23 -03:00
João Moura
c31ac4cf7e Updating tool dependency 2024-04-05 22:46:32 -03:00
João Moura
7b309df0c5 preparing new version 2024-04-05 19:52:13 -03:00
shivam singh
326f524e7c doc: Add documentation to Task model. (#363) 2024-04-05 19:49:36 -03:00
高璟琦
315ad20111 add solar example (#373)
Co-authored-by: João Moura <joaomdmoura@gmail.com>
2024-04-05 19:48:27 -03:00
Rueben Ramirez
b1daf17a61 whitespace consistency across docs (#407)
I saw a rendered whitespace inconsistency in the Tasks docs here:
ed31860071/docs/core-concepts/Tasks.md (L173)

So I set out to patch that up to make it easier to read.  I then noticed
there were a few whitespace inconsistencies:
- 2 spaces
- 4 whitespaces
- tabs

It appears that 4 spaces is the prevalent whitespace usage, so
I overwrote other whitespace usages with that in this commit.

Co-authored-by: Rueben Ramirez <rramirez@ruebens-mbp.tail7c016.ts.net>
Co-authored-by: João Moura <joaomdmoura@gmail.com>
2024-04-05 19:47:09 -03:00
GabeKoga
9db99befb6 Feature: Log files (#423)
* log_file

feature: added a new parameter for crew that creates a txt file to log agent execution

* unit tests and documentation

unit tests check that the file is created, but not what is inside it
2024-04-05 19:44:50 -03:00
GabeKoga
aebc443b62 purple (#428)
changed from yellow to purple for visibility
2024-04-05 18:25:59 -03:00
João Moura
2c0e5586e8 TYPO 2024-04-05 09:37:51 -03:00
João Moura
25f7557751 fixing memory docs 2024-04-05 08:59:54 -03:00
João Moura
59ebf7b762 adding specific memory docs 2024-04-05 08:59:20 -03:00
João Moura
1abe9db8e0 Increasing default max inter 2024-04-05 08:36:09 -03:00
João Moura
e4363f9ed8 updating tests 2024-04-05 08:33:31 -03:00
João Moura
e00b545548 adding max execution time 2024-04-05 08:31:25 -03:00
João Moura
1aa32c2036 preparing new version 2024-04-05 08:24:41 -03:00
João Moura
65824ef814 not overriding llm callbacks 2024-04-05 08:24:20 -03:00
João Moura
d17bc33bfb fix docs 2024-04-04 17:36:50 -03:00
João Moura
d874ac92b4 preparing new version 0.27.0 2024-04-04 15:29:45 -03:00
João Moura
0362449fe4 Adding new test for crew memory 2024-04-04 15:29:45 -03:00
João Moura
0d4c062487 Adding link to agentops docs 2024-04-04 15:29:45 -03:00
João Moura
ec622022f9 updating dependencies 2024-04-04 15:29:45 -03:00
João Moura
e9adc3fa4e Removing memory flag from agent in favor of crew memory 2024-04-04 15:29:45 -03:00
João Moura
5bc63a321c TYPO 2024-04-04 15:29:45 -03:00
João Moura
6317380c8d updating tools dependency 2024-04-04 15:29:45 -03:00
João Moura
a7f007f475 Updating docs 2024-04-04 15:29:45 -03:00
Braelyn Boynton
fcffc4a898 AgentOps Docs (#412)
Agentops documentation
2024-04-04 15:09:31 -03:00
ftoppi
8ed4c66117 tasks.py: don't call Converter when model response is valid (#406)
* tasks.py: don't call Converter when model response is valid

Try to convert the task output to the expected Pydantic model before sending it to Converter, maybe the model got it right.
2024-04-04 10:11:46 -03:00
ftoppi
38486223b2 Update Creating-a-Crew-and-kick-it-off.md: add compatible python versions (#420)
* Update Creating-a-Crew-and-kick-it-off.md: add compatible python versions

* Update Creating-a-Crew-and-kick-it-off.md

---------

Co-authored-by: João Moura <joaomdmoura@gmail.com>
2024-04-03 19:10:11 -03:00
João Moura
ac5e7d2b1e preparing new rc 2024-04-03 08:11:30 -03:00
João Moura
cf4138f385 setting fake openai key 2024-04-03 06:56:02 -03:00
João Moura
af7803e94b updating dependencies 2024-04-03 06:03:18 -03:00
João Moura
10b631bfb4 force resetting db in case of change in dimensions 2024-04-03 05:52:35 -03:00
João Moura
76f1c194dc Fixing db path 2024-04-03 05:45:59 -03:00
João Moura
0c9bc95dfc creating db file based on package name 2024-04-03 05:22:20 -03:00
João Moura
6f0d19d916 preparing new version 2024-04-03 05:04:26 -03:00
João Moura
427d3169b6 adding initial memory docs 2024-04-03 05:04:14 -03:00
João Moura
0fc828c816 updating gitignore 2024-04-03 05:04:00 -03:00
João Moura
2d97177eff checking crew before using memory 2024-04-03 05:03:43 -03:00
João Moura
33dfcc700b cutting new version, adding cache_function docs 2024-04-02 14:26:22 -03:00
João Moura
09c8193c8f updating specs 2024-04-02 13:51:16 -03:00
João Moura
4f4128075f updating db storage and dependencies 2024-04-02 13:51:05 -03:00
João Moura
9ab3e67ba2 preparing RC 2024-04-01 14:38:26 -03:00
João Moura
ed31860071 update docs 2024-04-01 11:14:06 -03:00
João Moura
ddb84cc16d Starting i18n language file support 2024-04-01 10:45:17 -03:00
João Moura
5b59e450f7 Adding long term, short term, entity and contextual memory 2024-04-01 10:45:17 -03:00
João Moura
a6c3b1f1d4 updating gitignore 2024-04-01 10:45:17 -03:00
João Moura
bf6b09b9f5 updating dependencies 2024-04-01 10:45:17 -03:00
João Moura
c95eed3fe0 adding editor config 2024-04-01 10:45:17 -03:00
João Moura
9d7cdd56b5 using .casefold() instead of lower 2024-04-01 10:45:17 -03:00
João Moura
0d70302963 updating git ignore 2024-04-01 10:45:17 -03:00
João Moura
32a09660b4 updating i18n to take on translation files 2024-04-01 10:45:17 -03:00
João Moura
0612097f81 improving agent tools descriptions 2024-04-01 10:45:17 -03:00
João Moura
b0c373b6af updating gitignore 2024-04-01 10:45:17 -03:00
João Moura
4839cdf261 improving original prompts 2024-04-01 10:45:14 -03:00
João Moura
5977c442b1 Adding custom caching 2024-04-01 10:43:05 -03:00
João Moura
d05dcac16f updating dependencies 2024-04-01 10:43:05 -03:00
João Moura
2cdfe459be adding proper docs for crewAI 2024-04-01 10:43:05 -03:00
João Moura
721b27d222 Ability to disable cache at agent and crew level 2024-04-01 10:43:05 -03:00
João Moura
be2def3fc8 Adding HuggingFace docs 2024-04-01 10:43:05 -03:00
João Moura
7259dba90d fixing warnings 2024-04-01 10:43:05 -03:00
João Moura
ef5bfcb48b updating telemetry to use https 2024-04-01 10:43:05 -03:00
João Moura
446baff697 Updating crewai-tools dependency 2024-04-01 10:43:05 -03:00
GabeKoga
bcf701b287 feature: human input per task (#395)
* feature: human input per task

* Update executor.py

* Update executor.py

* Update executor.py

* Update executor.py

* Update executor.py

* feat: change human input for unit testing
added documentation and unit test

* Create test_agent_human_input.yaml

add yaml for test

---------

Co-authored-by: João Moura <joaomdmoura@gmail.com>
2024-04-01 10:04:56 -03:00
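A minimal example of the per-task human input flag added here:

```python
from crewai import Agent, Task

reviewer = Agent(role="Editor", goal="Polish drafts", backstory="Detail-oriented.")

# human_input=True pauses after the agent produces its answer and asks the person
# running the crew for confirmation or feedback before the task completes.
task = Task(
    description="Write a headline for the launch announcement.",
    expected_output="One headline.",
    agent=reviewer,
    human_input=True,
)
```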
Elle Neal
22ab99cbd6 Update LLM-Connections.md (#252)
Adding Cohere LLM

Co-authored-by: João Moura <joaomdmoura@gmail.com>
2024-04-01 10:03:16 -03:00
sebestyenmiklos1
98ee60e06f Update Tasks.md (#240)
Fix missing comma in example code.

Co-authored-by: João Moura <joaomdmoura@gmail.com>
2024-03-31 20:40:51 -03:00
Ken Jenney
a3abdb5d19 Update ScrapeWebsiteTool.md (#385) 2024-03-30 11:57:08 -03:00
chowderhead
e3ebeb9dde Update GitHubSearchTool.md (#390)
Import statement has a lower case h
2024-03-30 11:56:34 -03:00
Ikko Eltociear Ashimine
646ed4f132 Update README.md (#391)
bellow -> below
2024-03-30 11:56:08 -03:00
Gui Vieira
128ce91951 Fix input interpolation bug (#369) 2024-03-22 03:08:54 -03:00
Gui Vieira
aa0eb02968 Custom model docs (#368) 2024-03-22 03:01:34 -03:00
João Moura
637bd885cf adding auto flake 2024-03-11 23:27:19 -03:00
João Moura
337afe228f cutting new version with proper imports 2024-03-11 23:27:04 -03:00
João Moura
4541835487 adding autoflake 2024-03-11 22:56:14 -03:00
João Moura
04d9603449 cutting new version 2024-03-11 22:55:56 -03:00
João Moura
671a8d0180 preparing new version that autoloads env 2024-03-11 22:19:47 -03:00
João Moura
3950878690 preparing to cut new version 2024-03-11 19:54:27 -03:00
João Moura
eaac627600 updating CLI template and guaranteeing tasks order 2024-03-11 19:53:34 -03:00
João Moura
35f8919e73 Preparing new version 2024-03-11 17:37:12 -03:00
João Moura
cb5a528550 Improving agent logging 2024-03-11 17:05:54 -03:00
João Moura
1f95d7b982 Improve template readme 2024-03-11 17:05:20 -03:00
Abe Gong
46971ee985 Fix typo in Tools.md (#300) 2024-03-11 16:45:28 -03:00
Selim Erhan
e67009ee2e Update Create-Custom-Tools.md (#311)
Added the langchain "Tool" functionality by creating a Python function and then wiring that function into the tool via the 'func' argument of the 'Tool' constructor.
2024-03-11 16:44:04 -03:00
Johan
9d3da98251 Update Tools.md (#326)
* Update Tools.md

Fixing typo on the instantiation part

* Update Tools.md

Update tool naming
2024-03-11 16:41:14 -03:00
Bill Chambers
b94de6e947 Update Crews.md (#331) 2024-03-11 16:40:45 -03:00
Chris Pang
f8a1d4f414 added langchain callback to agents (#333)
Co-authored-by: Chris Pang <chris_pang@racv.com.au>
2024-03-11 16:40:10 -03:00
Merbin J Anselm
7deb268de8 docs: fix formatting in Human-Input-on-Execution.md (#335) 2024-03-11 16:38:59 -03:00
João Moura
47b5cbd211 adding initial CLI support 2024-03-11 16:37:32 -03:00
João Moura
a4e9b9ccfe removing double space on logs 2024-03-11 16:23:00 -03:00
João Moura
99be4f5a61 Overriding classes' __repr__ 2024-03-05 10:12:49 -03:00
João Moura
ba28ab1680 adding support for agents and tasks to be defined from configs 2024-03-05 01:26:07 -03:00
João Moura
e51b8aadae fix readme 2024-03-05 00:31:52 -03:00
João Moura
33354aa07e updating readme example 2024-03-05 00:29:55 -03:00
João Moura
730b71fad8 update serper doc 2024-03-04 11:15:49 -03:00
João Moura
364cf216a0 updating docs disclaimer 2024-03-04 09:59:01 -03:00
João Moura
3cb48ac562 updating docs 2024-03-04 01:29:27 -03:00
João Moura
ea65283023 updating docs 2024-03-03 22:43:51 -03:00
João Moura
d2003cc32d fix docs path 2024-03-03 22:18:48 -03:00
João Moura
1766e27337 Adding tool specific docs 2024-03-03 22:14:53 -03:00
João Moura
442c324243 Updating dependencies, cutting new version 2024-03-03 21:23:42 -03:00
João Moura
3134711240 Updating Docs 2024-03-03 20:54:15 -03:00
João Moura
546fc965f8 updating README 2024-03-03 20:54:15 -03:00
João Moura
9ab45d9118 preparing new version 2024-03-03 20:54:15 -03:00
João Moura
b1ae86757b preparing 0.17.0rc0 2024-03-03 20:54:15 -03:00
João Moura
42eeec5897 Update inner tool usage logic to support both regular and function calling 2024-03-03 20:54:15 -03:00
João Moura
c12283bb16 Small docs update 2024-03-03 20:54:15 -03:00
João Moura
b856b21fc6 updating tests 2024-03-03 20:54:15 -03:00
Jay Mathis
72a0d1edef Update README.md (#301)
Fix a very minor typo
2024-03-03 12:41:35 -03:00
heyfixit
c0a0e01cf6 fix directory typo (#295) 2024-03-03 12:41:14 -03:00
João Moura
78bf008c36 cutting a new version addressing backward compatibility 2024-02-28 12:04:13 -03:00
Hongbo
5857c22daf correct a typo in tool_usage.py (#276) 2024-02-28 09:25:27 -03:00
Gordon Stein
5f73ba1371 Update en.json (#281) 2024-02-28 09:24:44 -03:00
Selim Erhan
4c09835abc Update Tools.md (#283)
Added the link to LangChain built-in toolkits
2024-02-28 09:22:51 -03:00
João Moura
0a025901c5 cutting new versions that doens't include cli just yet 2024-02-28 09:16:13 -03:00
João Moura
9768e4518f Fixing bug preparing new version 2024-02-28 09:09:37 -03:00
João Moura
1f802ccb5a removing logs and prepping new version 2024-02-28 03:44:23 -03:00
João Moura
e1306a8e6a removing crewai-tools as a required dependency 2024-02-28 03:44:23 -03:00
João Moura
997c906b5f adding support for input interpolation for tasks and agents 2024-02-28 03:44:23 -03:00
João Moura
2530196cf8 fixing tests 2024-02-28 03:44:23 -03:00
João Moura
340bea3271 Adding ability to track tools_errors and delegations 2024-02-28 03:44:23 -03:00
João Moura
3df3bba756 changing method naming to increment 2024-02-28 03:44:23 -03:00
João Moura
a9863fe670 Adding overall usage_metrics to crew and not adding delegation tools if there are no agents that allow delegation 2024-02-28 03:44:23 -03:00
João Moura
7b49b4e985 Adding initial formatting error counting and token counter 2024-02-28 03:44:23 -03:00
João Moura
577db88f8e Updating README 2024-02-28 03:44:23 -03:00
João Moura
01a2e650a4 Adding write job description example 2024-02-28 03:44:23 -03:00
BR
cd9f7931c9 Fix Creating-a-Crew-and-kick-it-off.md so it can run (#280)
* Fix Creating-a-Crew-and-kick-it-off.md

- Update deps to include `crewai[tools]`
- Remove invalid `max_inter` arg from Task constructor call

* Update Creating-a-Crew-and-kick-it-off.md

---------

Co-authored-by: João Moura <joaomdmoura@gmail.com>
2024-02-27 14:23:19 -03:00
João Moura
2b04ae4e4a updating docs 2024-02-26 15:54:06 -03:00
João Moura
cd0b82e794 Cutting new version removing crewai-tool as a mandatory dependency 2024-02-26 15:27:04 -03:00
João Moura
0ddcffe601 updating telemetry timeout 2024-02-26 13:40:41 -03:00
João Moura
712d106a44 updating docs 2024-02-26 13:38:14 -03:00
João Moura
34c5560cb0 updating telemetry code and gitignore 2024-02-24 16:18:26 -03:00
João Moura
dcba1488a6 make agents not have a memory by default 2024-02-24 03:33:05 -03:00
João Moura
8e4b156f11 preparing new version 2024-02-24 03:30:12 -03:00
João Moura
ab98c3bd28 Avoid empty task outputs 2024-02-24 03:11:41 -03:00
João Moura
7f98a99e90 Adding support for agents without tools 2024-02-24 01:39:29 -03:00
João Moura
101b80c234 updating broken doc link 2024-02-24 01:38:16 -03:00
João Moura
44598babcb starting support for crew docs 2024-02-24 01:38:04 -03:00
João Moura
51edfb4604 reducing telemetry timeout 2024-02-23 16:02:24 -03:00
João Moura
12d6fa1494 Reducing telemetry timeout 2024-02-23 15:54:22 -03:00
João Moura
99a15ac2ae prepping new version 2024-02-23 15:24:16 -03:00
João Moura
093a9c8174 bringing TaskOutput.result back to avoid a breaking change 2024-02-23 15:23:58 -03:00
João Moura
464dfc4e67 preparing new version 0.14.0 2024-02-22 16:10:17 -03:00
João Moura
1c7f9826b4 adding new converter logic 2024-02-22 15:16:17 -03:00
João Moura
e397a49c23 Updating prompts 2024-02-22 15:13:41 -03:00
João Moura
8c925237e7 preparing new RC 2024-02-20 17:56:55 -03:00
João Moura
0593d52b91 Improving inner prompts 2024-02-20 17:53:30 -03:00
João Moura
7b7d714109 preparing new version 2024-02-20 10:40:57 -03:00
João Moura
e9aa87f62b Updating tests 2024-02-20 10:40:37 -03:00
João Moura
8f5d735b2f bug fixing 2024-02-20 10:40:16 -03:00
João Moura
e24f4867df Preparing new version 2024-02-19 22:50:38 -03:00
João Moura
ef024ca106 improving reliability for agent tools 2024-02-19 22:48:47 -03:00
João Moura
4c519d9d98 updating tests 2024-02-19 22:48:34 -03:00
João Moura
94cb96b288 Increasing timeout for telemetry 2024-02-19 22:48:14 -03:00
João Moura
108a0d36b7 Adding support to export tasks as json, pydantic objects, and save as file 2024-02-19 22:46:34 -03:00
João Moura
efb097a76b Adding new tool usage and parsing logic 2024-02-19 22:43:10 -03:00
João Moura
af03042852 Updating docs 2024-02-19 22:01:09 -03:00
João Moura
21667bc7e1 adding more error logging and preparing new version 2024-02-15 23:49:30 -03:00
João Moura
19b6c15fff Cutting new version with tool usage bug fix 2024-02-15 23:19:12 -03:00
João Moura
3ef502024d preparing new version 2024-02-13 02:58:16 -08:00
João Moura
e55cee7372 adding function calling llm support 2024-02-13 02:57:12 -08:00
João Moura
b72eb838c2 updating readme 2024-02-13 01:50:23 -08:00
João Moura
b21191dd55 updating tests 2024-02-13 01:50:12 -08:00
João Moura
76b17a8d04 renaming function for tools 2024-02-12 16:48:14 -08:00
João Moura
e97d1a0cf8 removing hostname from default telemetry 2024-02-12 16:11:15 -08:00
João Moura
c875d887b7 Creating a tool output parser 2024-02-12 14:24:36 -08:00
João Moura
44d9cbca81 adding regexp as dependency 2024-02-12 14:13:20 -08:00
João Moura
6e399101fd refactoring default agent tools 2024-02-12 13:27:02 -08:00
João Moura
e8e3617ba6 allowing to set model name through env var 2024-02-12 13:24:01 -08:00
João Moura
45fa30c007 avoiding telemetry errors 2024-02-12 13:23:40 -08:00
João Moura
15768d9c4d updating LLM connection docs 2024-02-12 13:21:43 -08:00
João Moura
a1fcaa398c updating versions and adding instructor 2024-02-12 13:20:28 -08:00
João Moura
871643d98d updating codeignore 2024-02-11 20:37:42 -08:00
João Moura
91659d6488 counting tool retries on the actual usage 2024-02-10 13:14:00 -08:00
João Moura
0076ea7bff Adding ability to remember instruction after using too many tools 2024-02-10 12:53:02 -08:00
João Moura
e79da7bc05 refactoring task execution 2024-02-10 11:28:08 -08:00
João Moura
00206a62ab Revamping tool usage 2024-02-10 10:36:34 -08:00
João Moura
d0b0a33be3 updating translations 2024-02-10 01:08:04 -08:00
João Moura
6ea21e95b6 Adding printer logic 2024-02-10 00:57:04 -08:00
João Moura
c226dafd0d updating dependencies 2024-02-10 00:56:25 -08:00
João Moura
d4c21a23f4 updating all cassettes 2024-02-10 00:55:40 -08:00
João Moura
b76ae5b921 avoiding unnecessary telemetry errors 2024-02-09 10:48:45 -08:00
João Moura
b48e5af9a0 include agentFinish as part of step callback 2024-02-09 02:00:41 -08:00
João Moura
d36c2a74cb recreating executor upon setting new step_callback 2024-02-09 01:52:28 -08:00
João Moura
a1e0596450 adding crew step_callback 2024-02-09 01:24:31 -08:00
João Moura
596e243374 adding support for step_callback 2024-02-08 23:56:13 -08:00
João Moura
326ad08ba2 adding support for full_output in crews 2024-02-08 23:23:34 -08:00
João Moura
f63d4edbb4 adding agent step callback 2024-02-08 23:01:30 -08:00
João Moura
0057ed6786 adding the option for users to share all data of their crews 2024-02-08 23:01:02 -08:00
João Moura
44b6bcbcaa preparing version 0.5.5 2024-02-07 23:13:39 -08:00
João Moura
a45c82c5f7 fixing RPM controller being set unnecessarily 2024-02-07 23:09:36 -08:00
João Moura
98133a4eb6 Adding new crew specific docs 2024-02-07 23:09:16 -08:00
João Moura
44c2fd223d preparing version 0.5.4 2024-02-07 22:22:33 -08:00
João Moura
fc249eefda adding initial telemetry 2024-02-07 22:21:44 -08:00
João Moura
1a1eb4e7aa preparing new version 0.5.3 2024-02-07 02:14:58 -08:00
João Moura
723fdc6245 adding fix to hierarchical process 2024-02-07 02:13:19 -08:00
João Moura
43a47b8bdf preparing v0.5.2 2024-02-06 00:04:53 -08:00
João Moura
ab5647145f updating RPM and max_inter logic 2024-02-05 23:14:22 -08:00
João Moura
856981e0ed updating docs and readme 2024-02-05 23:13:10 -08:00
João Moura
09bec0e28b adding manager_llm 2024-02-05 20:46:47 -08:00
João Moura
2f0bf3b325 updating readme 2024-02-04 13:13:42 -08:00
João Moura
51278424c1 moving dependencies 2024-02-04 12:11:11 -08:00
João Moura
bfe26de026 updating readme 2024-02-04 12:07:40 -08:00
João Moura
db100439cb preparing new version 0.5.0 2024-02-04 12:01:05 -08:00
João Moura
c37f54c86f installing mkdocs dependencies 2024-02-04 11:58:21 -08:00
João Moura
e0262d9712 fixing dependencies for mkdocs 2024-02-04 11:51:44 -08:00
João Moura
63fb5a22be adding new docs and smaller fixes 2024-02-04 11:47:49 -08:00
João Moura
05dda59cf6 Adding multi thread execution 2024-02-03 23:24:41 -08:00
João Moura
5628bcca78 updating docs 2024-02-03 23:23:47 -08:00
João Moura
6042d9a7d8 Update README.md 2024-02-03 05:48:54 -03:00
João Moura
144239394d simplifying README 2024-02-03 00:04:33 -08:00
João Moura
d712ee8451 adding ability to pass context to tasks 2024-02-02 23:17:02 -08:00
João Moura
a8c1348235 Update README.md 2024-02-03 02:26:10 -03:00
João Moura
148d9202bf Update README.md 2024-02-03 01:33:59 -03:00
Ilya Sudakov
44442e6407 Update README.md: new header, text clean up, fix broken links (#210)
* Update README.MD

* Update examples section in README.md
2024-02-03 01:29:04 -03:00
Gui Vieira
c78237cb86 Hierarchical process (#206)
* Hierarchical process +  Docs
Co-authored-by: João Moura <joaomdmoura@gmail.com>
2024-02-02 13:56:35 -03:00
João Moura
8fc0f33dd5 adding task callback 2024-01-30 22:46:20 -03:00
João Moura
2010702880 Update README.md 2024-01-29 22:49:17 -03:00
Guilherme Vieira
29c31a2404 Fix static typing errors (#187)
Co-authored-by: João Moura <joaomdmoura@gmail.com>
2024-01-29 19:52:14 -03:00
João Moura
cd77981102 Adding support for expected output 2024-01-29 00:11:30 -03:00
IT Lackey
4f78d1e29c Feature: Documentation Site (#188) 2024-01-28 23:43:23 -03:00
Eliad Cohen
5be79454c3 Addresses typo and clarifiction in comments (#191)
Minor changes include a typo fixed and enhancing
an example for using OpenAI as an agent model with Ollama
via langchain

Resolves #189 #190
2024-01-28 23:42:31 -03:00
João Moura
d8c14ff31e updating website for crewai 2024-01-28 23:36:39 -03:00
João Moura
9e1be4ecd2 Update README.md 2024-01-22 11:05:01 -03:00
scott------
327d5c3a53 Update agent.py (#161)
adding tools to the list of attribute descriptions
2024-01-21 16:56:19 -03:00
Greyson LaLonde
852ca21e38 Update some docstrings / typehints (#144) 2024-01-21 16:55:17 -03:00
Prabha Arivalagan
23a549ac65 Fixed the small typo (#168) 2024-01-21 16:54:19 -03:00
João Moura
3e9630afe8 cutting new version 2024-01-14 11:25:09 -03:00
João Moura
2bf924b732 Add RPM control to both agents and crews (#133)
* moving file into utilities
* creating Logger and RPMController
* Adding support for RPM to agents and crew
2024-01-14 00:22:11 -03:00
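A sketch of the RPM throttling added in this entry and the related crew-level support below; max_rpm caps requests per minute at the agent and crew level, with the RPMController handling the waiting:

```python
from crewai import Agent, Task, Crew

researcher = Agent(
    role="Researcher",
    goal="Gather facts",
    backstory="Methodical.",
    max_rpm=10,  # per-agent cap on requests per minute
)
task = Task(
    description="List three facts about the Moon.",
    expected_output="Three bullet points.",
    agent=researcher,
)

# A crew-level max_rpm throttles the whole run; when the per-minute budget is
# spent, execution waits until the next window.
crew = Crew(agents=[researcher], tasks=[task], max_rpm=20)
```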
João Moura
3686804f7e Update tests.yml 2024-01-14 00:11:53 -03:00
João Moura
4b8f99d7a3 slightly improving prompts 2024-01-13 11:32:32 -03:00
Jimmy Kounelis
4d996044e6 Adding Greek translation (#122)
* Adding Greek translation
Co-authored-by: JimJim12 <loljk@Madness>
2024-01-13 11:22:23 -03:00
João Moura
53a32153a5 Adding support for Crew throttling using RPM (#124)
* Add translations
* fixing translations
* Adding support for Crew throttling with RPM
2024-01-13 11:20:30 -03:00
Greyson LaLonde
cbe688adbc Add github action for black (#116) 2024-01-12 22:06:13 -03:00
João Moura
8e7772c9c3 Adding support for translations (#120)
Add translations support
2024-01-12 14:49:36 -03:00
João Moura
ea7759b322 Revamp max iteration Logic (#111)
This will now allow adding a max_inter option to agents while also making sure to force the agent to give its best final answer before running out of its max_inter.
2024-01-11 12:32:54 -03:00
Greyson LaLonde
8cc51d5e9e Bump to langchain0.1.0 (#108)
* Bump `langchain`, `openai`; add `langchain-openai`

* Update imports to fix warnings
2024-01-11 09:33:43 -03:00
João Moura
fdd36b0766 Update README.md 2024-01-11 09:31:45 -03:00
João Moura
4f22bbf4d4 Update README.md 2024-01-10 21:00:37 -03:00
João Moura
34c1c0d76a starting to revamp docs 2024-01-10 13:12:31 -03:00
João Moura
feafa586ae fixing github action 2024-01-10 12:24:37 -03:00
João Moura
786691e97e replacing circleci with github actions 2024-01-10 12:05:42 -03:00
Greyson LaLonde
155368be3b Move to src dir usage (#99) 2024-01-10 11:39:36 -03:00
João Moura
a944cfc8d0 installing mkdocs as part of the github workflow 2024-01-10 00:46:56 -03:00
João Moura
bc7366b862 TYPO 2024-01-10 00:42:12 -03:00
João Moura
bb080c47f6 starting github actions for docs 2024-01-10 00:40:56 -03:00
João Moura
402137711c starting to setup new documentation 2024-01-10 00:30:18 -03:00
Greyson LaLonde
002da5a6f5 Add imports (#98) 2024-01-10 00:13:06 -03:00
João Moura
376fee952d updating logo 2024-01-10 00:08:39 -03:00
SashaXser
761f682d44 Refactoring (#88)
Co-authored-by: João Moura <joaomdmoura@gmail.com>
2024-01-10 00:04:13 -03:00
João Moura
40aea44470 bringing output log 2024-01-09 23:57:35 -03:00
yanzz
8eba7aab89 improved readability (#90) 2024-01-09 23:29:50 -03:00
Chris
bc54d310f2 update example usage in README (#97) 2024-01-09 23:22:42 -03:00
João Moura
f102c2e7dd cutting new version v0.1.24 2024-01-07 21:36:14 -03:00
João Moura
1ce9a8540b removing reference for pydantic v1 2024-01-07 21:35:30 -03:00
João Moura
f101dc5592 Improving agent delegation prompt 2024-01-07 21:35:27 -03:00
Ikko Eltociear Ashimine
55de63f6fa Update README.md (#81)
bellow -> below
2024-01-07 13:37:30 -03:00
João Moura
7954f6b51c Reliability improvements (#77)
* fixing indentation for AgentTools
* updating gitignore to exclude quick test script
* starting prompt translation
* supporting individual task output
* adding agent to task output
* cutting new version
* Updating README example
2024-01-07 12:43:23 -03:00
João Moura
234a2c72b0 Tools cache and delegation improvements (#68)
* Fixing repeated tool usage treatment
* Improving agent delegation prompt
2024-01-06 11:46:34 -03:00
João Moura
7a22b03713 Update README.md 2024-01-06 01:36:00 -03:00
Chris Bruner
52d404a267 Updated the main example in README.md (#61)
Update Example to mention local LLMs
2024-01-06 00:34:28 -03:00
João Moura
6e086fe574 Update README.md 2024-01-06 00:03:03 -03:00
João Moura
8206eb8915 Update README.md 2024-01-06 00:01:39 -03:00
João Moura
8288f38281 Update README.md 2024-01-06 00:01:07 -03:00
João Moura
99efb33b3f Update README.md 2024-01-05 16:06:48 -03:00
João Moura
57c870e15d Update README.md 2024-01-05 13:50:48 -03:00
João Moura
3f9c4df32d Better agent execution error handling (#54)
A few quality of life improvements around cache handling and repeated tool usage
2024-01-05 11:04:59 -03:00
João Moura
6b054651a7 Refactoring task cache to be a tool (#50)
* Refactoring task cache to be a tool

The previous implementation of the task caching system was exiting
the agent executor early because it returned an AgentFinish object.

This now refactors it to use a cache-specific tool that is dynamically
added and forced onto the agent when a task execution with the same
input has already been run.
2024-01-04 21:29:42 -03:00
João Moura
fe6bef0af1 Update README.md 2024-01-04 10:06:08 -03:00
João Moura
358e5fa534 Update README.md 2024-01-04 10:04:56 -03:00
João Moura
b5e9173cbb Update README.md 2024-01-04 10:04:31 -03:00
João Moura
14a081b814 Proper README example (#48) 2024-01-04 10:03:23 -03:00
João Moura
9a9319eea9 Update README.md 2024-01-03 20:21:59 -03:00
João Moura
05984093f0 bumping langchain version and cutting new version 2024-01-03 18:58:45 -03:00
João Moura
2c4851bd2e Updating README example 2024-01-03 18:58:45 -03:00
Scott Stoltzman
c2f403f0eb Change "agent" to "openhermes" in Ollama example (#33) 2024-01-03 10:38:14 -03:00
SuperMalinge
00e584312c Update output_parser.py (#42) 2024-01-02 20:52:12 -03:00
João Moura
f6c042e58e Update README.md 2024-01-02 18:51:44 -03:00
João Moura
fddeb0e672 Update README.md 2023-12-31 17:41:50 -03:00
Greyson LaLonde
f311afaab3 Remove model inheritance (#30) 2023-12-31 10:52:08 -03:00
Greyson LaLonde
0323191436 Implement CrewAIBaseModel and Update to ConfigDict (#29)
New CrewAIBaseModel:
- Base for Agent, Crew, Task.
- Includes a generated, frozen UUID.
- Adds hashing capability.

Migrate to ConfigDict:
- Replaces class Config with model_config (see Pydantic's deprecation note).

Benefits:
- Adds auditing capability with frozen UUIDs.
2023-12-30 21:52:04 -03:00
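A generic Pydantic v2 sketch of the migration described above; the field and model names here are illustrative, not the actual CrewAI classes:

```python
from uuid import UUID, uuid4
from pydantic import BaseModel, ConfigDict, Field

class CrewAIBaseModel(BaseModel):
    # model_config replaces the deprecated nested `class Config`.
    model_config = ConfigDict(arbitrary_types_allowed=True)

    # Generated once, then frozen so the identity of the object can't change.
    id: UUID = Field(default_factory=uuid4, frozen=True)

class Agent(CrewAIBaseModel):
    role: str

agent = Agent(role="Researcher")
print(agent.id)  # stable, auto-generated UUID
```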
Ikko Eltociear Ashimine
fd4c850df7 Update README.md (#27)
Documention -> Documentation
2023-12-30 21:49:20 -03:00
João Moura
45ee442b4c Cutting a new version 0.1.14 2023-12-30 11:03:03 -03:00
João Moura
f887d9bd79 Small updates to the code formatting 2023-12-30 10:53:10 -03:00
João Moura
d6c60f873a Adding verbose levels 2023-12-30 07:41:38 -03:00
Greyson LaLonde
ff46652752 Update to use absolute imports (#17)
Update to use absolute imports
2023-12-29 22:39:59 -03:00
João Moura
af9e749edb Adding tool caching and loop execution prevention. (#25)
* Adding tool caching and loop execution prevention.

This adds some guardrails to both prevent the same tool from being used
consecutively and cache tool results across the entire crew,
cutting down execution time and eventual LLM calls.

This plays a huge role for smaller open-source models that usually fall
into those behavior patterns.

It also includes some smaller improvements around the tool prompt and
agent tools, all with the same intention of guiding models to
better conform with agent instructions.
2023-12-29 22:35:23 -03:00
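A generic illustration of the two guardrails described above (cache tool results by tool and input, and refuse an immediately repeated identical call); this is not the actual CacheHandler code:

```python
from typing import Callable, Dict, Optional, Tuple

class ToolCache:
    """Caches tool results by (tool, input) and blocks immediately repeated calls."""

    def __init__(self) -> None:
        self._cache: Dict[Tuple[str, str], str] = {}
        self._last_call: Optional[Tuple[str, str]] = None

    def run(self, tool_name: str, tool_input: str, tool_fn: Callable[[str], str]) -> str:
        key = (tool_name, tool_input)
        if key == self._last_call:
            # Consecutive identical call: nudge the model to reuse the previous result.
            return "Tool was just used with the same input; reuse the previous result."
        self._last_call = key
        if key not in self._cache:
            self._cache[key] = tool_fn(tool_input)  # only pay for the call once per crew run
        return self._cache[key]

cache = ToolCache()
print(cache.run("search", "crewai", lambda q: f"results for {q}"))
print(cache.run("search", "crewai", lambda q: f"results for {q}"))  # blocked as an immediate repeat
```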
Greyson LaLonde
5cc230263c Refactor Codebase to Use Pydantic v2 and Enhance Type Hints, Documentation (#24)
Update to Pydantic v2:

Transitioned all references from pydantic.v1 to pydantic (v2), ensuring compatibility with the latest Pydantic features and improvements.
Affected components include agent tools, prompts, crew, and task modules.
Refactoring & Alignment with Pydantic Standards:

Refactored the agent module away from traditional __init__ to align more closely with Pydantic best practices.
Updated the crew module to Pydantic v2 and enhanced configurations, allowing JSON and dictionary inputs. Additionally, some (not all) exceptions have been migrated to leverage Pydantic's error-handling capabilities.
Enhancements to Validators and Typings:

Improved validators and type annotations across multiple modules, enhancing code readability and maintainability.
Streamlined the validation process in line with Pydantic v2's methodologies.
Import and Configuration Adjustments:

Updated to test-related absolute imports due to issues with Pytest finding packages through relative imports.
2023-12-29 21:24:30 -03:00
João Moura
3b5515c5c2 Add .circleci/config.yml (#26)
* Add .circleci/config.yml

---------

Co-authored-by: João Moura <joaomdmoura@mgail.com>
2023-12-29 21:14:15 -03:00
João Moura
6adfa6fe07 Merge pull request #15 from greysonlalonde/gl/devops/ci-code-formatting-enhancements
Update Python to 3.9, Add Code Quality Tools, & Update Lockfile
2023-12-27 17:34:56 -03:00
Greyson Lalonde
92f192fc5e Make tools a subpackage 2023-12-27 15:13:42 -05:00
Greyson Lalonde
a4e93cea75 Run pre-commit hooks
In the title !
2023-12-27 15:13:42 -05:00
Greyson Lalonde
542a794e64 Update autoflake args
This won't format automatically unless --in-place is passed, and will remove __init__ imports when --ignore-init-module-imports is missing
2023-12-27 15:09:05 -05:00
Greyson Lalonde
b104d1ee44 Update readme to reflect pre-commit 2023-12-27 15:09:05 -05:00
Greyson Lalonde
6716a78aa0 Add pre-commit config w/ new dev deps 2023-12-27 15:09:05 -05:00
Greyson Lalonde
03140d3dd5 Bump min py to 3.9; add formatting deps
Increased minimum Python version from 3.8.1 to 3.9 - most projects align with this; added pre-commit hooks, isort, black, & autoflake for code quality; updated lock file.
2023-12-27 15:09:05 -05:00
João Moura
99853e55cd removing AgentVote class 2023-12-27 16:18:08 -03:00
João Moura
f36372c7bc allowing cassettes to be versioned 2023-12-27 16:18:08 -03:00
João Moura
6b2234fcef Adding VCR and cassettes 2023-12-27 16:18:08 -03:00
João Moura
b8974c1f91 Merge pull request #14 from jerryjliu/jerry/fix_typo
fix prompt typo
2023-12-27 15:05:50 -03:00
Jerry Liu
10556d0886 cr 2023-12-27 09:27:15 -08:00
João Moura
d6be9ca0ef small updates 2023-12-25 11:18:47 -03:00
João Moura
2aa76dbc3d Updating readme 2023-12-25 11:17:11 -03:00
João Moura
9d0f41f32a adding more specific guidelines to agent delegation tools 2023-12-25 11:13:46 -03:00
João Moura
1e7bda63bc Merge pull request #12 from JamesChannel1/main
Update agent.py
2023-12-25 09:25:01 -03:00
JamesChannel1
d6c35cee0f Update agent.py
updated docstring
2023-12-25 00:38:21 +00:00
João Moura
f2c5e838bf Update README.md 2023-12-23 10:18:10 -03:00
João Moura
133fd10324 Merge pull request #9 from llxxxll/develop
Update README.md
2023-12-22 10:24:35 -03:00
LiuYongFeng
dfddb83d02 Update README.md
This example can be run faster for openai users.
2023-12-22 11:40:22 +08:00
LiuYongFeng
367e190773 Update README.md
Fix the 'SyntaxError: invalid syntax. Perhaps you forgot a comma?' error in this code
2023-12-22 11:35:06 +08:00
João Moura
db01df68aa Updating openai version 2023-12-20 17:21:48 -03:00
João Moura
d1ecbc035e updating specs 2023-12-20 17:20:55 -03:00
João Moura
d43f2df4f0 Adding proper support to memory-less agents 2023-12-20 11:30:56 -03:00
João Moura
09812e4249 cutting new version 2023-12-19 20:00:50 -03:00
Joao Moura
126a38fecc Updating to the latest version of langchain 2023-12-19 20:00:50 -03:00
João Moura
290d915f57 Update README.md 2023-12-19 11:06:27 -03:00
João Moura
4cd146cb34 Merge pull request #7 from shreyaskarnik/main
Fix typo in readme for valid syntax in example code.
2023-12-18 01:10:49 -03:00
Joao Moura
d70cfd696d rolling back version upgrade for now 2023-12-18 01:10:02 -03:00
Joao Moura
f6e166aa5c adding allow_delegation=False to the readme example 2023-12-18 01:09:34 -03:00
Joao Moura
4c3902b018 fixing readme 2023-12-18 01:05:03 -03:00
Shreyas Karnik
1a8445f2b3 Fix typo in readme for valid syntax in example code. 2023-12-09 22:46:33 +00:00
Joao Moura
0b9ad08155 rolling back prompt with --- 2023-12-05 00:09:44 -08:00
Joao Moura
9be65e03d7 Making config optional with a default value as it's WIP, and adding new treatment for wrong agent tool calls 2023-12-04 23:58:48 -08:00
Joao Moura
2ff9ad8a7f Preparing to cut new version 2023-12-04 00:13:42 -08:00
Joao Moura
53f6b0f844 slight modifications on prompt 2023-12-04 00:12:36 -08:00
Joao Moura
da00aa2668 cutting new version 2023-11-24 17:09:43 -03:00
Joao Moura
7ad5680453 Allowing use of other LLMs that are not OpenAI 2023-11-24 17:09:06 -03:00
Joao Moura
96a2b5b236 adding index to README 2023-11-20 18:37:42 -03:00
Joao Moura
13c19c8032 Preparing to cut new version 2023-11-18 22:11:10 -03:00
João Moura
5163a3a7b5 Merge pull request #3 from manuel-soria/fix-typo-in-readme
fix typo in quickstart snippet
2023-11-17 15:56:13 -03:00
Manuel Soria
7ec8beaf7e another missing comma 2023-11-17 15:13:31 -03:00
Manuel Soria
6031a654a3 fix typo in quickstart snippet 2023-11-17 15:11:31 -03:00
João Moura
cbc7847bbe Merge pull request #2 from joaomdmoura/joaomdmoura/adding-crew-specific-verbose-logs
Adding crew specific verbose logs
2023-11-16 19:17:11 -03:00
João Moura
e4840ddf0a Merge branch 'main' into joaomdmoura/adding-crew-specific-verbose-logs 2023-11-16 19:17:04 -03:00
Joao Moura
afca1a585e Updating README to include crew verbose mode 2023-11-16 18:51:48 -03:00
João Moura
27c1e76606 Merge pull request #1 from franzejr/patch-1
Fix tiny typo
2023-11-16 18:48:56 -03:00
Franze M
caf4f51110 Update README.md 2023-11-16 08:17:18 -03:00
Joao Moura
f1b3875073 Adding verbose option for crew 2023-11-15 17:29:47 -03:00
Joao Moura
62b65401bb Adding Assistant to the README 2023-11-15 17:29:25 -03:00
Joao Moura
492f361634 adding extra detail on delegation errors so LLM can recover 2023-11-15 00:41:17 -03:00
Joao Moura
7cb13cae0e Updating README 2023-11-15 00:39:52 -03:00
459 changed files with 1659227 additions and 1355 deletions

[Binary files changed: dozens of images (17 KiB to 55 KiB each) and other binary files were added or updated; previews not shown.]

.editorconfig (new file, 14 lines)

@@ -0,0 +1,14 @@
# .editorconfig
root = true

# All files
[*]
charset = utf-8
end_of_line = lf
insert_final_newline = true
trim_trailing_whitespace = true

# Python files
[*.py]
indent_style = space
indent_size = 2

116
.github/ISSUE_TEMPLATE/bug_report.yml vendored Normal file
View File

@@ -0,0 +1,116 @@
name: Bug report
description: Create a report to help us improve CrewAI
title: "[BUG]"
labels: ["bug"]
assignees: []
body:
- type: textarea
id: description
attributes:
label: Description
description: Provide a clear and concise description of what the bug is.
validations:
required: true
- type: textarea
id: steps-to-reproduce
attributes:
label: Steps to Reproduce
description: Provide a step-by-step process to reproduce the behavior.
placeholder: |
1. Go to '...'
2. Click on '....'
3. Scroll down to '....'
4. See error
validations:
required: true
- type: textarea
id: expected-behavior
attributes:
label: Expected behavior
description: A clear and concise description of what you expected to happen.
validations:
required: true
- type: textarea
id: screenshots-code
attributes:
label: Screenshots/Code snippets
description: If applicable, add screenshots or code snippets to help explain your problem.
validations:
required: true
- type: dropdown
id: os
attributes:
label: Operating System
description: Select the operating system you're using
options:
- Ubuntu 20.04
- Ubuntu 22.04
- Ubuntu 24.04
- macOS Catalina
- macOS Big Sur
- macOS Monterey
- macOS Ventura
- macOS Sonoma
- Windows 10
- Windows 11
- Other (specify in additional context)
validations:
required: true
- type: dropdown
id: python-version
attributes:
label: Python Version
description: Version of Python your Crew is running on
options:
- '3.10'
- '3.11'
- '3.12'
- '3.13'
validations:
required: true
- type: input
id: crewai-version
attributes:
label: crewAI Version
description: What version of CrewAI are you using
validations:
required: true
- type: input
id: crewai-tools-version
attributes:
label: crewAI Tools Version
description: What version of CrewAI Tools are you using
validations:
required: true
- type: dropdown
id: virtual-environment
attributes:
label: Virtual Environment
description: What Virtual Environment are you running your crew in?
options:
- Venv
- Conda
- Poetry
validations:
required: true
- type: textarea
id: evidence
attributes:
label: Evidence
description: Include relevant information, logs or error messages. These can be screenshots.
validations:
required: true
- type: textarea
id: possible-solution
attributes:
label: Possible Solution
description: Have a solution in mind? Please suggest it here, or write "None".
validations:
required: true
- type: textarea
id: additional-context
attributes:
label: Additional context
description: Add any other context about the problem here.
validations:
required: true

1
.github/ISSUE_TEMPLATE/config.yml vendored Normal file
View File

@@ -0,0 +1 @@
blank_issues_enabled: false

View File

@@ -0,0 +1,65 @@
name: Feature request
description: Suggest a new feature for CrewAI
title: "[FEATURE]"
labels: ["feature-request"]
assignees: []
body:
- type: markdown
attributes:
value: |
Thanks for taking the time to fill out this feature request!
- type: dropdown
id: feature-area
attributes:
label: Feature Area
description: Which area of CrewAI does this feature primarily relate to?
options:
- Core functionality
- Agent capabilities
- Task management
- Integration with external tools
- Performance optimization
- Documentation
- Other (please specify in additional context)
validations:
required: true
- type: textarea
id: problem
attributes:
label: Is your feature request related to an existing bug? Please link it here.
description: A link to the bug or NA if not related to an existing bug.
validations:
required: true
- type: textarea
id: solution
attributes:
label: Describe the solution you'd like
description: A clear and concise description of what you want to happen.
validations:
required: true
- type: textarea
id: alternatives
attributes:
label: Describe alternatives you've considered
description: A clear and concise description of any alternative solutions or features you've considered.
validations:
required: false
- type: textarea
id: context
attributes:
label: Additional context
description: Add any other context, screenshots, or examples about the feature request here.
validations:
required: false
- type: dropdown
id: willingness-to-contribute
attributes:
label: Willingness to Contribute
description: Would you be willing to contribute to the implementation of this feature?
options:
- Yes, I'd be happy to submit a pull request
- I could provide more detailed specifications
- I can test the feature once it's implemented
- No, I'm just suggesting the idea
validations:
required: true

16
.github/workflows/linter.yml vendored Normal file
View File

@@ -0,0 +1,16 @@
name: Lint
on: [pull_request]
jobs:
lint:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- name: Install Requirements
run: |
pip install ruff
- name: Run Ruff Linter
run: ruff check --exclude "templates","__init__.py"

45
.github/workflows/mkdocs.yml vendored Normal file
View File

@@ -0,0 +1,45 @@
name: Deploy MkDocs
on:
release:
types: [published]
permissions:
contents: write
jobs:
deploy:
runs-on: ubuntu-latest
steps:
- name: Checkout code
uses: actions/checkout@v2
- name: Setup Python
uses: actions/setup-python@v4
with:
python-version: '3.10'
- name: Calculate requirements hash
id: req-hash
run: echo "::set-output name=hash::$(sha256sum requirements-doc.txt | awk '{print $1}')"
- name: Setup cache
uses: actions/cache@v3
with:
key: mkdocs-material-${{ steps.req-hash.outputs.hash }}
path: .cache
restore-keys: |
mkdocs-material-
- name: Install Requirements
run: |
sudo apt-get update &&
sudo apt-get install pngquant &&
pip install mkdocs-material mkdocs-material-extensions pillow cairosvg
env:
GH_TOKEN: ${{ secrets.GH_TOKEN }}
- name: Build and deploy MkDocs
run: mkdocs gh-deploy --force

23
.github/workflows/security-checker.yml vendored Normal file
View File

@@ -0,0 +1,23 @@
name: Security Checker
on: [pull_request]
jobs:
security-check:
runs-on: ubuntu-latest
steps:
- name: Checkout code
uses: actions/checkout@v4
- name: Set up Python
uses: actions/setup-python@v4
with:
python-version: "3.11.9"
- name: Install dependencies
run: pip install bandit
- name: Run Bandit
run: bandit -c pyproject.toml -r src/ -lll

27
.github/workflows/stale.yml vendored Normal file
View File

@@ -0,0 +1,27 @@
name: Mark stale issues and pull requests
on:
schedule:
- cron: '10 12 * * *'
workflow_dispatch:
jobs:
stale:
runs-on: ubuntu-latest
permissions:
issues: write
pull-requests: write
steps:
- uses: actions/stale@v9
with:
repo-token: ${{ secrets.GITHUB_TOKEN }}
stale-issue-label: 'no-issue-activity'
stale-issue-message: 'This issue is stale because it has been open for 30 days with no activity. Remove stale label or comment or this will be closed in 5 days.'
close-issue-message: 'This issue was closed because it has been stalled for 5 days with no activity.'
days-before-issue-stale: 30
days-before-issue-close: 5
stale-pr-label: 'no-pr-activity'
stale-pr-message: 'This PR is stale because it has been open for 45 days with no activity.'
days-before-pr-stale: 45
days-before-pr-close: -1
operations-per-run: 1200

32
.github/workflows/tests.yml vendored Normal file
View File

@@ -0,0 +1,32 @@
name: Run Tests
on: [pull_request]
permissions:
contents: write
env:
OPENAI_API_KEY: fake-api-key
jobs:
deploy:
runs-on: ubuntu-latest
timeout-minutes: 15
steps:
- name: Checkout code
uses: actions/checkout@v4
- name: Setup Python
uses: actions/setup-python@v4
with:
python-version: "3.11.9"
- name: Install Requirements
run: |
set -e
pip install poetry
poetry install
- name: Run tests
run: poetry run pytest

26
.github/workflows/type-checker.yml vendored Normal file
View File

@@ -0,0 +1,26 @@
name: Run Type Checks
on: [pull_request]
permissions:
contents: write
jobs:
type-checker:
runs-on: ubuntu-latest
steps:
- name: Checkout code
uses: actions/checkout@v4
- name: Setup Python
uses: actions/setup-python@v4
with:
python-version: "3.10"
- name: Install Requirements
run: |
pip install mypy
- name: Run type checks
run: mypy src

16
.gitignore vendored
View File

@@ -2,5 +2,17 @@
.pytest_cache
__pycache__
dist/
*/**/cassettes/*
.env
.env
assets/*
.idea
test/
docs_crew/
chroma.sqlite3
old_en.json
db/
test.py
rc-tests/*
*.pkl
temp/*
.vscode/*
crew_tasks_output.json

9
.pre-commit-config.yaml Normal file
View File

@@ -0,0 +1,9 @@
repos:
- repo: https://github.com/astral-sh/ruff-pre-commit
rev: v0.4.4
hooks:
- id: ruff
args: ["--fix"]
exclude: "templates"
- id: ruff-format
exclude: "templates"

257
README.md
View File

@@ -1,78 +1,202 @@
# CrewAI
<div align="center">
🤖 Cutting-edge framework for orchestrating role-playing, autonomous AI agents. By fostering collaborative intelligence, CrewAI empowers agents to work together seamlessly, tackling complex tasks.
![Logo of crewAI, two people rowing on a boat](./docs/crewai_logo.png)
# **crewAI**
🤖 **crewAI**: Cutting-edge framework for orchestrating role-playing, autonomous AI agents. By fostering collaborative intelligence, CrewAI empowers agents to work together seamlessly, tackling complex tasks.
<h3>
[Homepage](https://www.crewai.com/) | [Documentation](https://docs.crewai.com/) | [Chat with Docs](https://chatg.pt/DWjSBZn) | [Examples](https://github.com/crewAIInc/crewAI-examples) | [Discourse](https://community.crewai.com)
</h3>
[![GitHub Repo stars](https://img.shields.io/github/stars/joaomdmoura/crewAI)](https://github.com/crewAIInc/crewAI)
[![License: MIT](https://img.shields.io/badge/License-MIT-green.svg)](https://opensource.org/licenses/MIT)
</div>
## Table of contents
- [Why CrewAI?](#why-crewai)
- [Getting Started](#getting-started)
- [Key Features](#key-features)
- [Examples](#examples)
- [Quick Tutorial](#quick-tutorial)
- [Write Job Descriptions](#write-job-descriptions)
- [Trip Planner](#trip-planner)
- [Stock Analysis](#stock-analysis)
- [Connecting Your Crew to a Model](#connecting-your-crew-to-a-model)
- [How CrewAI Compares](#how-crewai-compares)
- [Contribution](#contribution)
- [Telemetry](#telemetry)
- [License](#license)
## Why CrewAI?
The power of AI collaboration has too much to offer.
CrewAI is designed to enable AI agents to assume roles, share goals, and operate in a cohesive unit - much like a well-oiled crew. Whether you're building a smart assistant platform, an automated customer service ensemble, or a multi-agent research team, CrewAI provides the backbone for sophisticated multi-agent interactions.
[Documention Wiki](https://github.com/joaomdmoura/CrewAI/wiki)
## Getting Started
To get started with CrewAI, follow these simple steps:
1. **Installation**:
### 1. Installation
```shell
pip install crewai
```
2. **Setting Up Your Crew**:
If you want to install the 'crewai' package along with its optional features, which include additional tools for agents, use the command below. It installs the base package plus extra components that require additional dependencies.
```shell
pip install 'crewai[tools]'
```
### 2. Setting Up Your Crew
```python
import os
from crewai import Agent, Task, Crew, Process
from crewai_tools import SerperDevTool
os.environ["OPENAI_API_KEY"] = "YOUR_API_KEY"
os.environ["SERPER_API_KEY"] = "Your Key" # serper.dev API key
# You can choose to use a local model through Ollama for example. See https://docs.crewai.com/how-to/LLM-Connections/ for more information.
# os.environ["OPENAI_API_BASE"] = 'http://localhost:11434/v1'
# os.environ["OPENAI_MODEL_NAME"] ='openhermes' # Adjust based on available model
# os.environ["OPENAI_API_KEY"] ='sk-111111111111111111111111111111111111111111111111'
# You can pass an optional llm attribute specifying what model you wanna use.
# It can be a local model through Ollama / LM Studio or a remote
# model like OpenAI, Mistral, Antrophic or others (https://docs.crewai.com/how-to/LLM-Connections/)
# If you don't specify a model, the default is OpenAI gpt-4o
#
# import os
# os.environ['OPENAI_MODEL_NAME'] = 'gpt-3.5-turbo'
#
# OR
#
# from langchain_openai import ChatOpenAI
search_tool = SerperDevTool()
# Define your agents with roles and goals
researcher = Agent(
role='Researcher',
goal='Discover new insights',
backstory="You're a world class researcher working on a amjor data science company",
verbose=True
role='Senior Research Analyst',
goal='Uncover cutting-edge developments in AI and data science',
backstory="""You work at a leading tech think tank.
Your expertise lies in identifying emerging trends.
You have a knack for dissecting complex data and presenting actionable insights.""",
verbose=True,
allow_delegation=False,
# You can pass an optional llm attribute specifying what model you wanna use.
# llm=ChatOpenAI(model_name="gpt-3.5", temperature=0.7),
tools=[search_tool]
)
writer = Agent(
role='Writer',
goal='Create engaging content',
backstory="You're a famous technical writer, specialized on writing data related content"
verbose=True
role='Tech Content Strategist',
goal='Craft compelling content on tech advancements',
backstory="""You are a renowned Content Strategist, known for your insightful and engaging articles.
You transform complex concepts into compelling narratives.""",
verbose=True,
allow_delegation=True
)
# Create tasks for your agents
task1 = Task(description='Investigate the latest AI trends', agent=researcher)
task2 = Task(description='Write a blog post on AI advancements', agent=writer)
task1 = Task(
description="""Conduct a comprehensive analysis of the latest advancements in AI in 2024.
Identify key trends, breakthrough technologies, and potential industry impacts.""",
expected_output="Full analysis report in bullet points",
agent=researcher
)
task2 = Task(
description="""Using the insights provided, develop an engaging blog
post that highlights the most significant AI advancements.
Your post should be informative yet accessible, catering to a tech-savvy audience.
Make it sound cool, avoid complex words so it doesn't sound like AI.""",
expected_output="Full blog post of at least 4 paragraphs",
agent=writer
)
# Instantiate your crew with a sequential process
crew = Crew(
agents=[researcher, writer],
tasks=[task1, task2],
process=Process.sequential # Sequential process will have tasks executed one after the other and the outcome of the previous one is passed as extra content into this next.
verbose=True,
process = Process.sequential
)
# Get your crew to work!
crew.kickoff()
result = crew.kickoff()
print("######################")
print(result)
```
Currently the only supported process is `Process.sequential`, where one task is executed after the other and the outcome of one is passed as extra content into this next.
In addition to the sequential process, you can use the hierarchical process, which automatically assigns a manager to the defined crew to properly coordinate the planning and execution of tasks through delegation and validation of results. [See more about the processes here](https://docs.crewai.com/core-concepts/Processes/).
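For instance, a minimal sketch of the hierarchical process (assuming the `researcher`/`writer` agents and `task1`/`task2` from the example above; the manager model shown is illustrative, not a recommendation):
```python
# Rough sketch only: re-running the agents and tasks defined above with the
# hierarchical process, which uses a manager LLM to coordinate planning,
# delegation and validation of results.
from crewai import Crew, Process
from langchain_openai import ChatOpenAI  # assumed manager model provider for this sketch

hierarchical_crew = Crew(
    agents=[researcher, writer],
    tasks=[task1, task2],
    process=Process.hierarchical,
    manager_llm=ChatOpenAI(model_name="gpt-4o", temperature=0.7),  # required for hierarchical runs
)
result = hierarchical_crew.kickoff()
print(result)
```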
## Key Features
- **Role-Based Agent Design**: Customize agents with specific roles, goals, and tools.
- **Autonomous Inter-Agent Delegation**: Agents can autonomously delegate tasks and inquire amongst themselves, enhancing problem-solving efficiency.
- **Flexible Task Management**: Define tasks with customizable tools and assign them to agents dynamically.
- **Processes Driven**: Currently only supports `sequential` task execution but more complex processes like consensual and hierarchical being worked on.
- **Processes Driven**: Currently only supports `sequential` task execution and `hierarchical` processes, but more complex processes like consensual and autonomous are being worked on.
- **Save output as file**: Save the output of individual tasks as a file, so you can use it later.
- **Parse output as Pydantic or Json**: Parse the output of individual tasks as a Pydantic model or as JSON if you want to (see the short sketch after this list).
- **Works with Open Source Models**: Run your crew using Open AI or open source models refer to the [Connect crewAI to LLMs](https://docs.crewai.com/how-to/LLM-Connections/) page for details on configuring your agents' connections to models, even ones running locally!
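A hedged sketch of the two output-handling features above; `output_file` and `output_pydantic` are assumed `Task` parameters for these features, and the values are illustrative:
```python
# Hedged sketch: saving a task's result to a file and parsing it into a Pydantic model.
# `output_file` and `output_pydantic` are assumed Task parameters, not confirmed by this README.
from pydantic import BaseModel
from crewai import Task

class TrendSummary(BaseModel):
    title: str
    summary: str

analysis_task = Task(
    description="Summarize the most important AI trend of 2024.",
    expected_output="A title and a one-paragraph summary",
    agent=researcher,               # the researcher agent from the example above
    output_file="ai_trend.md",      # save this task's output as a file
    output_pydantic=TrendSummary,   # parse the output into the TrendSummary model
)
```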
![CrewAI Mind Map](/crewAI-mindmap.png "CrewAI Mind Map")
![CrewAI Mind Map](./docs/crewAI-mindmap.png "CrewAI Mind Map")
## Examples
You can test different real life examples of AI crews in the [crewAI-examples repo](https://github.com/crewAIInc/crewAI-examples?tab=readme-ov-file):
- [Landing Page Generator](https://github.com/crewAIInc/crewAI-examples/tree/main/landing_page_generator)
- [Having Human input on the execution](https://docs.crewai.com/how-to/Human-Input-on-Execution)
- [Trip Planner](https://github.com/crewAIInc/crewAI-examples/tree/main/trip_planner)
- [Stock Analysis](https://github.com/crewAIInc/crewAI-examples/tree/main/stock_analysis)
### Quick Tutorial
[![CrewAI Tutorial](https://img.youtube.com/vi/tnejrr-0a94/maxresdefault.jpg)](https://www.youtube.com/watch?v=tnejrr-0a94 "CrewAI Tutorial")
### Write Job Descriptions
[Check out code for this example](https://github.com/crewAIInc/crewAI-examples/tree/main/job-posting) or watch a video below:
[![Jobs postings](https://img.youtube.com/vi/u98wEMz-9to/maxresdefault.jpg)](https://www.youtube.com/watch?v=u98wEMz-9to "Jobs postings")
### Trip Planner
[Check out code for this example](https://github.com/crewAIInc/crewAI-examples/tree/main/trip_planner) or watch a video below:
[![Trip Planner](https://img.youtube.com/vi/xis7rWp-hjs/maxresdefault.jpg)](https://www.youtube.com/watch?v=xis7rWp-hjs "Trip Planner")
### Stock Analysis
[Check out code for this example](https://github.com/crewAIInc/crewAI-examples/tree/main/stock_analysis) or watch a video below:
[![Stock Analysis](https://img.youtube.com/vi/e0Uj4yWdaAg/maxresdefault.jpg)](https://www.youtube.com/watch?v=e0Uj4yWdaAg "Stock Analysis")
## Connecting Your Crew to a Model
crewAI supports using various LLMs through a variety of connection options. By default your agents will use the OpenAI API when querying the model. However, there are several other ways to allow your agents to connect to models. For example, you can configure your agents to use a local model via the Ollama tool.
Please refer to the [Connect crewAI to LLMs](https://docs.crewai.com/how-to/LLM-Connections/) page for details on configuring your agents' connections to models.
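As a minimal sketch, a local Ollama setup reuses the environment variables shown (commented out) in the setup example above; the model name and key value here are placeholders:
```python
# Minimal sketch: pointing agents at a local model served by Ollama via the
# OpenAI-compatible environment variables shown earlier in this README.
import os

os.environ["OPENAI_API_BASE"] = "http://localhost:11434/v1"  # Ollama's OpenAI-compatible endpoint
os.environ["OPENAI_MODEL_NAME"] = "openhermes"               # any model you have pulled locally
os.environ["OPENAI_API_KEY"] = "sk-placeholder"              # local servers typically ignore this value
```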
## How CrewAI Compares
- **Autogen**: While Autogen excels in creating conversational agents capable of working together, it lacks an inherent concept of process. In Autogen, orchestrating agents' interactions requires additional programming, which can become complex and cumbersome as the scale of tasks grows.
**CrewAI's Advantage**: CrewAI is built with production in mind. It offers the flexibility of Autogen's conversational agents and the structured process approach of ChatDev, but without the rigidity. CrewAI's processes are designed to be dynamic and adaptable, fitting seamlessly into both development and production workflows.
- **Autogen**: While Autogen does good in creating conversational agents capable of working together, it lacks an inherent concept of process. In Autogen, orchestrating agents' interactions requires additional programming, which can become complex and cumbersome as the scale of tasks grows.
- **ChatDev**: ChatDev introduced the idea of processes into the realm of AI agents, but its implementation is quite rigid. Customizations in ChatDev are limited and not geared towards production environments, which can hinder scalability and flexibility in real-world applications.
**CrewAI's Advantage**: CrewAI is built with production in mind. It offers the flexibility of Autogen's conversational agents and the structured process approach of ChatDev, but without the rigidity. CrewAI's processes are designed to be dynamic and adaptable, fitting seamlessly into both development and production workflows.
## Contribution
CrewAI is open-source and we welcome contributions. If you're looking to contribute, please:
@@ -84,32 +208,115 @@ CrewAI is open-source and we welcome contributions. If you're looking to contrib
- We appreciate your input!
### Installing Dependencies
```bash
poetry lock
poetry install
```
### Virtual Env
```bash
poetry shell
```
### Pre-commit hooks
```bash
pre-commit install
```
### Running Tests
```bash
poetry run pytest
```
### Running static type checks
```bash
poetry run mypy
```
### Packaging
```bash
poetry build
```
### Installing Locally
```bash
pip install dist/*.tar.gz
```
## Telemetry
CrewAI uses anonymous telemetry to collect usage data with the main purpose of helping us improve the library by focusing our efforts on the most used features, integrations and tools.
It's pivotal to understand that **NO data is collected** concerning prompts, task descriptions, agents' backstories or goals, usage of tools, API calls, responses, any data processed by the agents, or secrets and environment variables, except under the condition described next: when the `share_crew` feature is enabled, detailed data including task descriptions, agents' backstories or goals, and other specific attributes is collected to provide deeper insights while respecting user privacy. We don't offer a way to disable telemetry yet, but we plan to in the future.
Data collected includes:
- Version of crewAI
- So we can understand how many users are using the latest version
- Version of Python
- So we can decide on what versions to better support
- General OS (e.g. number of CPUs, macOS/Windows/Linux)
- So we know what OS we should focus on and if we could build specific OS related features
- Number of agents and tasks in a crew
- So we make sure we are testing internally with similar use cases and educate people on the best practices
- Crew Process being used
- Understand where we should focus our efforts
- If Agents are using memory or allowing delegation
- Understand if we improved the features or maybe even drop them
- If Tasks are being executed in parallel or sequentially
- Understand if we should focus more on parallel execution
- Language model being used
- Improved support on most used languages
- Roles of agents in a crew
- Understand high level use cases so we can build better tools, integrations and examples about it
- Tools names available
- Understand which of the publicly available tools are being used the most so we can improve them
Users can opt-in to Further Telemetry, sharing the complete telemetry data by setting the `share_crew` attribute to `True` on their Crews. Enabling `share_crew` results in the collection of detailed crew and task execution data, including `goal`, `backstory`, `context`, and `output` of tasks. This enables a deeper insight into usage patterns while respecting the user's choice to share.
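For example, a minimal sketch of opting in (assuming the agents and tasks from the earlier example):
```python
# Minimal sketch: opting in to the extended telemetry described above.
from crewai import Crew

crew = Crew(
    agents=[researcher, writer],
    tasks=[task1, task2],
    share_crew=True,  # opt in to sharing detailed crew and task execution data
)
```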
## License
CrewAI is released under the MIT License
CrewAI is released under the MIT License.
## Frequently Asked Questions (FAQ)
### Q: What is CrewAI?
A: CrewAI is a cutting-edge framework for orchestrating role-playing, autonomous AI agents. It enables agents to work together seamlessly, tackling complex tasks through collaborative intelligence.
### Q: How do I install CrewAI?
A: You can install CrewAI using pip:
```shell
pip install crewai
```
For additional tools, use:
```shell
pip install 'crewai[tools]'
```
### Q: Can I use CrewAI with local models?
A: Yes, CrewAI supports various LLMs, including local models. You can configure your agents to use local models via tools like Ollama & LM Studio. Check the [LLM Connections documentation](https://docs.crewai.com/how-to/LLM-Connections/) for more details.
### Q: What are the key features of CrewAI?
A: Key features include role-based agent design, autonomous inter-agent delegation, flexible task management, process-driven execution, output saving as files, and compatibility with both open-source and proprietary models.
### Q: How does CrewAI compare to other AI orchestration tools?
A: CrewAI is designed with production in mind, offering flexibility similar to Autogen's conversational agents and structured processes like ChatDev, but with more adaptability for real-world applications.
### Q: Is CrewAI open-source?
A: Yes, CrewAI is open-source and welcomes contributions from the community.
### Q: Does CrewAI collect any data?
A: CrewAI uses anonymous telemetry to collect usage data for improvement purposes. No sensitive data (like prompts, task descriptions, or API calls) is collected. Users can opt-in to share more detailed data by setting `share_crew=True` on their Crews.
### Q: Where can I find examples of CrewAI in action?
A: You can find various real-life examples in the [crewAI-examples repository](https://github.com/crewAIInc/crewAI-examples), including trip planners, stock analysis tools, and more.
### Q: How can I contribute to CrewAI?
A: Contributions are welcome! You can fork the repository, create a new branch for your feature, add your improvement, and send a pull request. Check the Contribution section in the README for more details.

Binary file changed (preview not shown; previous size 431 KiB).

View File

@@ -1463,11 +1463,11 @@
"locked": false,
"fontSize": 20,
"fontFamily": 3,
"text": "Agents have the inert ability of\nreach out to another to delegate\nwork or ask questions.",
"text": "Agents have the innate ability of\nreach out to another to delegate\nwork or ask questions.",
"textAlign": "right",
"verticalAlign": "top",
"containerId": null,
"originalText": "Agents have the inert ability of\nreach out to another to delegate\nwork or ask questions.",
"originalText": "Agents have the innate ability of\nreach out to another to delegate\nwork or ask questions.",
"lineHeight": 1.2,
"baseline": 68
},
@@ -1734,4 +1734,4 @@
"viewBackgroundColor": "#ffffff"
},
"files": {}
}
}

View File

@@ -1,4 +0,0 @@
from .task import Task
from .crew import Crew
from .agent import Agent
from .process import Process

View File

@@ -1,99 +0,0 @@
"""Generic agent."""
from typing import List, Any, Optional
from pydantic.v1 import BaseModel, Field, root_validator
from langchain.agents import AgentExecutor
from langchain.chat_models import ChatOpenAI as OpenAI
from langchain.tools.render import render_text_description
from langchain.agents.format_scratchpad import format_log_to_str
from langchain.agents.output_parsers import ReActSingleInputOutputParser
from langchain.memory import ConversationSummaryMemory
from .prompts import Prompts
class Agent(BaseModel):
"""Generic agent implementation."""
agent_executor: AgentExecutor = None
role: str = Field(description="Role of the agent")
goal: str = Field(description="Objective of the agent")
backstory: str = Field(description="Backstory of the agent")
llm: Optional[OpenAI] = Field(description="LLM that will run the agent")
verbose: bool = Field(
description="Verbose mode for the Agent Execution",
default=False
)
allow_delegation: bool = Field(
description="Allow delegation of tasks to agents",
default=True
)
tools: List[Any] = Field(
description="Tools at agents disposal",
default=[]
)
@root_validator(pre=True)
def check_llm(_cls, values):
if not values.get('llm'):
values['llm'] = OpenAI(
temperature=0.7,
model_name="gpt-4"
)
return values
def __init__(self, **data):
super().__init__(**data)
execution_prompt = Prompts.TASK_EXECUTION_PROMPT.partial(
goal=self.goal,
role=self.role,
backstory=self.backstory,
)
llm_with_bind = self.llm.bind(stop=["\nObservation"])
inner_agent = {
"input": lambda x: x["input"],
"tools": lambda x: x["tools"],
"tool_names": lambda x: x["tool_names"],
"chat_history": lambda x: x["chat_history"],
"agent_scratchpad": lambda x: format_log_to_str(x['intermediate_steps']),
} | execution_prompt | llm_with_bind | ReActSingleInputOutputParser()
summary_memory = ConversationSummaryMemory(
llm=self.llm,
memory_key='chat_history',
input_key="input"
)
self.agent_executor = AgentExecutor(
agent=inner_agent,
tools=self.tools,
memory=summary_memory,
verbose=self.verbose,
handle_parsing_errors=True,
)
def execute_task(self, task: str, context: str = None, tools: List[Any] = None) -> str:
"""
Execute a task with the agent.
Parameters:
task (str): Task to execute
Returns:
output (str): Output of the agent
"""
if context:
task = "\n".join([
task,
"\nThis is the context you are working with:",
context
])
tools = tools or self.tools
self.agent_executor.tools = tools
return self.agent_executor.invoke({
"input": task,
"tool_names": self.__tools_names(tools),
"tools": render_text_description(tools),
})['output']
def __tools_names(self, tools) -> str:
return ", ".join([t.name for t in tools])

View File

@@ -1,5 +0,0 @@
from pydantic.v1 import BaseModel, Field
class AgentVote(BaseModel):
task: str = Field(description="Task to be executed by the agent")
agent_vote: str = Field(description="Agent that will execute the task")

View File

@@ -1,73 +0,0 @@
import json
from typing import List, Optional
from pydantic.v1 import BaseModel, Field, Json, root_validator
from .process import Process
from .agent import Agent
from .task import Task
from .tools.agent_tools import AgentTools
class Crew(BaseModel):
"""
Class that represents a group of agents, how they should work together and
their tasks.
"""
config: Optional[Json] = Field(description="Configuration of the crew.")
tasks: Optional[List[Task]] = Field(description="List of tasks")
agents: Optional[List[Agent]] = Field(description="List of agents in this crew.")
process: Process = Field(
description="Process that the crew will follow.",
default=Process.sequential
)
@root_validator(pre=True)
def check_config(_cls, values):
if (
not values.get('config')
and (
not values.get('agents') and not values.get('tasks')
)
):
raise ValueError('Either agents and task need to be set or config.')
if values.get('config'):
config = json.loads(values.get('config'))
if not config.get('agents') or not config.get('tasks'):
raise ValueError('Config should have agents and tasks.')
values['agents'] = [Agent(**agent) for agent in config['agents']]
tasks = []
for task in config['tasks']:
task_agent = [agt for agt in values['agents'] if agt.role == task['agent']][0]
del task['agent']
tasks.append(Task(**task, agent=task_agent))
values['tasks'] = tasks
return values
def kickoff(self) -> str:
"""
Kickoff the crew to work on it's tasks.
Returns:
output (List[str]): Output of the crew for each task.
"""
if self.process == Process.sequential:
return self.__sequential_loop()
return "Crew is executing task"
def __sequential_loop(self) -> str:
"""
Loop that executes the sequential process.
Returns:
output (str): Output of the crew.
"""
task_outcome = None
for task in self.tasks:
# Add delegation tools to the task if the agent allows it
if task.agent.allow_delegation:
tools = AgentTools(agents=self.agents).tools()
task.tools += tools
task_outcome = task.execute(task_outcome)
return task_outcome

View File

@@ -1,9 +0,0 @@
from enum import Enum
class Process(str, Enum):
"""
Class representing the different processes that can be used to tackle tasks
"""
sequential = 'sequential'
# TODO: consensual = 'consensual'
# TODO: hierarchical = 'hierarchical'

View File

@@ -1,71 +0,0 @@
"""Prompts for generic agent."""
from textwrap import dedent
from typing import ClassVar
from pydantic.v1 import BaseModel
from langchain.prompts import PromptTemplate
class Prompts(BaseModel):
"""Prompts for generic agent."""
TASK_SLICE: ClassVar[str] = dedent("""\
Begin! This is VERY important to you, your job depends on it!
Current Task: {input}
{agent_scratchpad}
""")
MEMORY_SLICE: ClassVar[str] = dedent("""\
This is the summary of your work so far:
{chat_history}
""")
ROLE_PLAYING_SLICE: ClassVar[str] = dedent("""\
You are {role}.
{backstory}
Your personal goal is: {goal}
""")
TOOLS_SLICE: ClassVar[str] = dedent("""\
TOOLS:
------
You have access to the following tools:
{tools}
To use a tool, please use the following format:
```
Thought: Do I need to use a tool? Yes
Action: the action to take, should be one of [{tool_names}]
Action Input: the input to the action
Observation: the result of the action
```
When you have a response for your task, or if you do not need to use a tool, you MUST use the format:
```
Thought: Do I need to use a tool? No
Final Answer: [your response here]
```
""")
VOTING_SLICE: ClassVar[str] = dedent("""\
You are working on a crew with your co-workers and need to decide who will execute the task.
These are tyour format instructions:
{format_instructions}
These are your co-workers and their roles:
{coworkers}
""")
TASK_EXECUTION_PROMPT: ClassVar[str] = PromptTemplate.from_template(
ROLE_PLAYING_SLICE + TOOLS_SLICE + MEMORY_SLICE + TASK_SLICE
)
CONSENSUNS_VOTING_PROMPT: ClassVar[str] = PromptTemplate.from_template(
ROLE_PLAYING_SLICE + VOTING_SLICE + TASK_SLICE
)

View File

@@ -1,42 +0,0 @@
from typing import List, Optional
from pydantic.v1 import BaseModel, Field, root_validator
from langchain.tools import Tool
from .agent import Agent
class Task(BaseModel):
"""
Class that represent a task to be executed.
"""
description: str = Field(description="Description of the actual task.")
agent: Optional[Agent] = Field(
description="Agent responsible for the task.",
default=None
)
tools: Optional[List[Tool]] = Field(
description="Tools the agent are limited to use for this task.",
default=[]
)
@root_validator(pre=False)
def _set_tools(_cls, values):
if (values.get('agent')) and not (values.get('tools')):
values['tools'] = values.get('agent').tools
return values
def execute(self, context: str = None) -> str:
"""
Execute the task.
Returns:
output (str): Output of the task.
"""
if self.agent:
return self.agent.execute_task(
task = self.description,
context = context,
tools = self.tools
)
else:
raise Exception(f"The task '{self.description}' has no agent assigned, therefore it can't be executed directly and should be executed in a Crew using a specific process that support that, either consensual or hierarchical.")

View File

@@ -1,57 +0,0 @@
from typing import List, Any
from pydantic.v1 import BaseModel, Field
from textwrap import dedent
from langchain.tools import Tool
from ..agent import Agent
class AgentTools(BaseModel):
"""Tools for generic agent."""
agents: List[Agent] = Field(description="List of agents in this crew.")
def tools(self):
return [
Tool.from_function(
func=self.delegate_work,
name="Delegate Work to Co-Worker",
description=dedent(f"""Useful to delegate a specific task to one of the
following co-workers: [{', '.join([agent.role for agent in self.agents])}].
The input to this tool should be a pipe (|) separated text of length
three, representing the role you want to delegate it to, the task and
information necessary. For example, `coworker|task|information`.
""")
),
Tool.from_function(
func=self.ask_question,
name="Ask Question to Co-Worker",
description=dedent(f"""Useful to ask a question, opinion or take from on
of the following co-workers: [{', '.join([agent.role for agent in self.agents])}].
The input to this tool should be a pipe (|) separated text of length
three, representing the role you want to ask it to, the question and
information necessary. For example, `coworker|question|information`.
""")
),
]
def delegate_work(self, command):
"""Useful to delegate a specific task to a coworker."""
return self.__execute(command)
def ask_question(self, command):
"""Useful to ask a question, opinion or take from a coworker."""
return self.__execute(command)
def __execute(self, command):
"""Execute the command."""
agent, task, information = command.split("|")
if not agent or not task or not information:
return "Error executing tool."
agent = [available_agent for available_agent in self.agents if available_agent.role == agent]
if len(agent) == 0:
return "Error executing tool."
agent = agent[0]
result = agent.execute_task(task, information)
return result

1
docs/CNAME Normal file
View File

@@ -0,0 +1 @@
docs.crewai.com

Binary image files added (previews and dimensions not shown).

BIN
docs/assets/langtrace1.png Normal file (new image, 223 KiB; preview not shown)

BIN
docs/assets/langtrace2.png Normal file (new image, 204 KiB; preview not shown)

BIN
docs/assets/langtrace3.png Normal file (new image, 295 KiB; preview not shown)

View File

@@ -0,0 +1,151 @@
---
title: crewAI Agents
description: What are crewAI Agents and how to use them.
---
## What is an Agent?
!!! note "What is an Agent?"
An agent is an **autonomous unit** programmed to:
<ul>
<li class='leading-3'>Perform tasks</li>
<li class='leading-3'>Make decisions</li>
<li class='leading-3'>Communicate with other agents</li>
</ul>
<br/>
Think of an agent as a member of a team, with specific skills and a particular job to do. Agents can have different roles like 'Researcher', 'Writer', or 'Customer Support', each contributing to the overall goal of the crew.
## Agent Attributes
| Attribute | Parameter | Description |
| :------------------------- | :---- | :--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| **Role** | `role` | Defines the agent's function within the crew. It determines the kind of tasks the agent is best suited for. |
| **Goal** | `goal` | The individual objective that the agent aims to achieve. It guides the agent's decision-making process. |
| **Backstory** | `backstory` | Provides context to the agent's role and goal, enriching the interaction and collaboration dynamics. |
| **LLM** *(optional)* | `llm` | Represents the language model that will run the agent. It dynamically fetches the model name from the `OPENAI_MODEL_NAME` environment variable, defaulting to "gpt-4" if not specified. |
| **Tools** *(optional)* | `tools` | Set of capabilities or functions that the agent can use to perform tasks. Expected to be instances of custom classes compatible with the agent's execution environment. Tools are initialized with a default value of an empty list. |
| **Function Calling LLM** *(optional)* | `function_calling_llm` | Specifies the language model that will handle the tool calling for this agent, overriding the crew function calling LLM if passed. Default is `None`. |
| **Max Iter** *(optional)* | `max_iter` | Max Iter is the maximum number of iterations the agent can perform before being forced to give its best answer. Default is `25`. |
| **Max RPM** *(optional)* | `max_rpm` | Max RPM is the maximum number of requests per minute the agent can perform to avoid rate limits. It's optional and can be left unspecified, with a default value of `None`. |
| **Max Execution Time** *(optional)* | `max_execution_time` | Max Execution Time is the maximum execution time for an agent to execute a task. It's optional and can be left unspecified, with a default value of `None`, meaning no max execution time. |
| **Verbose** *(optional)* | `verbose` | Setting this to `True` configures the internal logger to provide detailed execution logs, aiding in debugging and monitoring. Default is `False`. |
| **Allow Delegation** *(optional)* | `allow_delegation` | Agents can delegate tasks or questions to one another, ensuring that each task is handled by the most suitable agent. Default is `True`. |
| **Step Callback** *(optional)* | `step_callback` | A function that is called after each step of the agent. This can be used to log the agent's actions or to perform other operations. It will overwrite the crew `step_callback`. |
| **Cache** *(optional)* | `cache` | Indicates if the agent should use a cache for tool usage. Default is `True`. |
| **System Template** *(optional)* | `system_template` | Specifies the system format for the agent. Default is `None`. |
| **Prompt Template** *(optional)* | `prompt_template` | Specifies the prompt format for the agent. Default is `None`. |
| **Response Template** *(optional)* | `response_template` | Specifies the response format for the agent. Default is `None`. |
| **Allow Code Execution** *(optional)* | `allow_code_execution` | Enable code execution for the agent. Default is `False`. |
| **Max Retry Limit** *(optional)* | `max_retry_limit` | Maximum number of retries for an agent to execute a task when an error occurs. Default is `2`. |
## Creating an Agent
!!! note "Agent Interaction"
Agents can interact with each other using crewAI's built-in delegation and communication mechanisms. This allows for dynamic task management and problem-solving within the crew.
To create an agent, you would typically initialize an instance of the `Agent` class with the desired properties. Here's a conceptual example including all attributes:
```python
# Example: Creating an agent with all attributes
from crewai import Agent
agent = Agent(
role='Data Analyst',
goal='Extract actionable insights',
backstory="""You're a data analyst at a large company.
You're responsible for analyzing data and providing insights
to the business.
You're currently working on a project to analyze the
performance of our marketing campaigns.""",
tools=[my_tool1, my_tool2], # Optional, defaults to an empty list
llm=my_llm, # Optional
function_calling_llm=my_llm, # Optional
max_iter=15, # Optional
max_rpm=None, # Optional
max_execution_time=None, # Optional
verbose=True, # Optional
allow_delegation=True, # Optional
step_callback=my_intermediate_step_callback, # Optional
cache=True, # Optional
system_template=my_system_template, # Optional
prompt_template=my_prompt_template, # Optional
response_template=my_response_template, # Optional
config=my_config, # Optional
crew=my_crew, # Optional
tools_handler=my_tools_handler, # Optional
cache_handler=my_cache_handler, # Optional
callbacks=[callback1, callback2], # Optional
allow_code_execution=True,  # Optional
max_retry_limit=2, # Optional
)
```
## Setting prompt templates
Prompt templates are used to format the prompt for the agent. You can use them to update the system, regular, and response templates for the agent. Here's an example of how to set prompt templates:
```python
agent = Agent(
role="{topic} specialist",
goal="Figure {goal} out",
backstory="I am the master of {role}",
system_template="""<|start_header_id|>system<|end_header_id|>
{{ .System }}<|eot_id|>""",
prompt_template="""<|start_header_id|>user<|end_header_id|>
{{ .Prompt }}<|eot_id|>""",
response_template="""<|start_header_id|>assistant<|end_header_id|>
{{ .Response }}<|eot_id|>""",
)
```
## Bring your Third Party Agents
!!! note "Extend your Third Party Agents like LlamaIndex, Langchain, Autogen or fully custom agents using crewAI's BaseAgent class."
BaseAgent includes attributes and methods required to integrate with your crews to run and delegate tasks to other agents within your own crew.
CrewAI is a universal multi-agent framework that allows all agents to work together to automate tasks and solve problems.
```py
from crewai import Agent, Task, Crew
from custom_agent import CustomAgent # You need to build and extend your own agent logic with the CrewAI BaseAgent class then import it here.
from langchain.agents import load_tools
langchain_tools = load_tools(["google-serper"], llm=llm)
agent1 = CustomAgent(
role="agent role",
goal="who is {input}?",
backstory="agent backstory",
verbose=True,
)
task1 = Task(
expected_output="a short biography of {input}",
description="a short biography of {input}",
agent=agent1,
)
agent2 = Agent(
role="agent role",
goal="summarize the short bio for {input} and if needed do more research",
backstory="agent backstory",
verbose=True,
)
task2 = Task(
description="a tldr summary of the short biography",
expected_output="5 bullet point summary of the biography",
agent=agent2,
context=[task1],
)
my_crew = Crew(agents=[agent1, agent2], tasks=[task1, task2])
crew = my_crew.kickoff(inputs={"input": "Mark Twain"})
```
## Conclusion
Agents are the building blocks of the CrewAI framework. By understanding how to define and interact with agents, you can create sophisticated AI systems that leverage the power of collaborative intelligence.

142
docs/core-concepts/Cli.md Normal file
View File

@@ -0,0 +1,142 @@
# CrewAI CLI Documentation
The CrewAI CLI provides a set of commands to interact with CrewAI, allowing you to create, train, run, and manage crews and pipelines.
## Installation
To use the CrewAI CLI, make sure you have CrewAI & Poetry installed:
```
pip install crewai poetry
```
## Basic Usage
The basic structure of a CrewAI CLI command is:
```
crewai [COMMAND] [OPTIONS] [ARGUMENTS]
```
## Available Commands
### 1. create
Create a new crew or pipeline.
```
crewai create [OPTIONS] TYPE NAME
```
- `TYPE`: Choose between "crew" or "pipeline"
- `NAME`: Name of the crew or pipeline
- `--router`: (Optional) Create a pipeline with router functionality
Example:
```
crewai create crew my_new_crew
crewai create pipeline my_new_pipeline --router
```
### 2. version
Show the installed version of CrewAI.
```
crewai version [OPTIONS]
```
- `--tools`: (Optional) Show the installed version of CrewAI tools
Example:
```
crewai version
crewai version --tools
```
### 3. train
Train the crew for a specified number of iterations.
```
crewai train [OPTIONS]
```
- `-n, --n_iterations INTEGER`: Number of iterations to train the crew (default: 5)
- `-f, --filename TEXT`: Path to a custom file for training (default: "trained_agents_data.pkl")
Example:
```
crewai train -n 10 -f my_training_data.pkl
```
### 4. replay
Replay the crew execution from a specific task.
```
crewai replay [OPTIONS]
```
- `-t, --task_id TEXT`: Replay the crew from this task ID, including all subsequent tasks
Example:
```
crewai replay -t task_123456
```
### 5. log_tasks_outputs
Retrieve your latest crew.kickoff() task outputs.
```
crewai log_tasks_outputs
```
### 6. reset_memories
Reset the crew memories (long, short, entity, latest_crew_kickoff_outputs).
```
crewai reset_memories [OPTIONS]
```
- `-l, --long`: Reset LONG TERM memory
- `-s, --short`: Reset SHORT TERM memory
- `-e, --entities`: Reset ENTITIES memory
- `-k, --kickoff-outputs`: Reset LATEST KICKOFF TASK OUTPUTS
- `-a, --all`: Reset ALL memories
Example:
```
crewai reset_memories --long --short
crewai reset_memories --all
```
### 7. test
Test the crew and evaluate the results.
```
crewai test [OPTIONS]
```
- `-n, --n_iterations INTEGER`: Number of iterations to test the crew (default: 3)
- `-m, --model TEXT`: LLM Model to run the tests on the Crew (default: "gpt-4o-mini")
Example:
```
crewai test -n 5 -m gpt-3.5-turbo
```
### 8. run
Run the crew.
```
crewai run
```
## Note
Make sure to run these commands from the directory where your CrewAI project is set up. Some commands may require additional configuration or setup within your project structure.

View File

@@ -0,0 +1,44 @@
---
title: How Agents Collaborate in CrewAI
description: Exploring the dynamics of agent collaboration within the CrewAI framework, focusing on the newly integrated features for enhanced functionality.
---
## Collaboration Fundamentals
!!! note "Core of Agent Interaction"
Collaboration in CrewAI is fundamental, enabling agents to combine their skills, share information, and assist each other in task execution, embodying a truly cooperative ecosystem.
- **Information Sharing**: Ensures all agents are well-informed and can contribute effectively by sharing data and findings.
- **Task Assistance**: Allows agents to seek help from peers with the required expertise for specific tasks.
- **Resource Allocation**: Optimizes task execution through the efficient distribution and sharing of resources among agents.
## Enhanced Attributes for Improved Collaboration
The `Crew` class has been enriched with several attributes to support advanced functionalities:
- **Language Model Management (`manager_llm`, `function_calling_llm`)**: Manages language models for executing tasks and tools, facilitating sophisticated agent-tool interactions. Note that while `manager_llm` is mandatory for hierarchical processes to ensure proper execution flow, `function_calling_llm` is optional, with a default value provided for streamlined tool interaction.
- **Custom Manager Agent (`manager_agent`)**: Allows specifying a custom agent as the manager instead of using the default manager provided by CrewAI.
- **Process Flow (`process`)**: Defines the execution logic (e.g., sequential, hierarchical) to streamline task distribution and execution.
- **Verbose Logging (`verbose`)**: Offers detailed logging capabilities for monitoring and debugging purposes. It supports both integer and boolean types to indicate the verbosity level. For example, setting `verbose` to 1 might enable basic logging, whereas setting it to True enables more detailed logs.
- **Rate Limiting (`max_rpm`)**: Ensures efficient utilization of resources by limiting requests per minute. Guidelines for setting `max_rpm` should consider the complexity of tasks and the expected load on resources.
- **Internationalization / Customization Support (`language`, `prompt_file`)**: Facilitates full customization of the inner prompts, enhancing global usability. Supported languages and the process for utilizing the `prompt_file` attribute for customization should be clearly documented. [Example of file](https://github.com/joaomdmoura/crewAI/blob/main/src/crewai/translations/en.json)
- **Execution and Output Handling (`full_output`)**: Distinguishes between full and final outputs for nuanced control over task results. Examples showcasing the difference in outputs can aid in understanding the practical implications of this attribute.
- **Callback and Telemetry (`step_callback`, `task_callback`)**: Integrates callbacks for step-wise and task-level execution monitoring, alongside telemetry for performance analytics. The purpose and usage of `task_callback` alongside `step_callback` for granular monitoring should be clearly explained.
- **Crew Sharing (`share_crew`)**: Enables sharing of crew information with CrewAI for continuous improvement and training models. The privacy implications and benefits of this feature, including how it contributes to model improvement, should be outlined.
- **Usage Metrics (`usage_metrics`)**: Stores all metrics for the language model (LLM) usage during all tasks' execution, providing insights into operational efficiency and areas for improvement. Detailed information on accessing and interpreting these metrics for performance analysis should be provided.
- **Memory Usage (`memory`)**: Indicates whether the crew should use memory to store memories of its execution, enhancing task execution and agent learning.
- **Embedder Configuration (`embedder`)**: Specifies the configuration for the embedder to be used by the crew for understanding and generating language. This attribute supports customization of the language model provider.
- **Cache Management (`cache`)**: Determines whether the crew should use a cache to store the results of tool executions, optimizing performance.
- **Output Logging (`output_log_file`)**: Specifies the file path for logging the output of the crew execution.
- **Planning Mode (`planning`)**: Allows crews to plan their actions before executing tasks by setting `planning=True` when creating the `Crew` instance. This feature enhances coordination and efficiency.
- **Replay Feature**: Introduces a new CLI for listing tasks from the last run and replaying from a specific task, enhancing task management and troubleshooting.
## Delegation: Dividing to Conquer
Delegation enhances functionality by allowing agents to intelligently assign tasks or seek help, thereby amplifying the crew's overall capability.
## Implementing Collaboration and Delegation
Setting up a crew involves defining the roles and capabilities of each agent. CrewAI seamlessly manages their interactions, ensuring efficient collaboration and delegation, with enhanced customization and monitoring features to adapt to various operational needs.
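As a rough, illustrative sketch (agent and task definitions are placeholders, and the attribute values are examples rather than recommendations):
```python
# Rough sketch: a crew where the writer may delegate back to the researcher,
# combined with a few of the enhanced Crew attributes described above.
from crewai import Agent, Crew, Task, Process

researcher = Agent(role="Researcher", goal="Gather data", backstory="Data specialist.", allow_delegation=False)
writer = Agent(role="Writer", goal="Compile the report", backstory="Report author.", allow_delegation=True)

research_task = Task(description="Gather the data", expected_output="Raw findings", agent=researcher)
write_task = Task(description="Compile the report", expected_output="Final report", agent=writer)

crew = Crew(
    agents=[researcher, writer],
    tasks=[research_task, write_task],
    process=Process.sequential,
    planning=True,                        # plan before each crew iteration
    memory=True,                          # keep execution memories
    max_rpm=30,                           # cap requests per minute
    output_log_file="crew_run_log.txt",   # or True for a default logs.txt
)
result = crew.kickoff()
```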
## Example Scenario
Consider a crew with a researcher agent tasked with data gathering and a writer agent responsible for compiling reports. The integration of advanced language model management and process flow attributes allows for more sophisticated interactions, such as the writer delegating complex research tasks to the researcher or querying specific information, thereby facilitating a seamless workflow.
## Conclusion
The integration of advanced attributes and functionalities into the CrewAI framework significantly enriches the agent collaboration ecosystem. These enhancements not only simplify interactions but also offer unprecedented flexibility and control, paving the way for sophisticated AI-driven solutions capable of tackling complex tasks through intelligent collaboration and delegation.

248
docs/core-concepts/Crews.md Normal file
View File

@@ -0,0 +1,248 @@
---
title: crewAI Crews
description: Understanding and utilizing crews in the crewAI framework with comprehensive attributes and functionalities.
---
## What is a Crew?
A crew in crewAI represents a collaborative group of agents working together to achieve a set of tasks. Each crew defines the strategy for task execution, agent collaboration, and the overall workflow.
## Crew Attributes
| Attribute | Parameters | Description |
| :------------------------------------ | :--------------------- | :-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| **Tasks** | `tasks` | A list of tasks assigned to the crew. |
| **Agents** | `agents` | A list of agents that are part of the crew. |
| **Process** _(optional)_ | `process` | The process flow (e.g., sequential, hierarchical) the crew follows. |
| **Verbose** _(optional)_ | `verbose` | The verbosity level for logging during execution. |
| **Manager LLM** _(optional)_ | `manager_llm` | The language model used by the manager agent in a hierarchical process. **Required when using a hierarchical process.** |
| **Function Calling LLM** _(optional)_ | `function_calling_llm` | If passed, the crew will use this LLM to do function calling for tools for all agents in the crew. Each agent can have its own LLM, which overrides the crew's LLM for function calling. |
| **Config** _(optional)_ | `config` | Optional configuration settings for the crew, in `Json` or `Dict[str, Any]` format. |
| **Max RPM** _(optional)_ | `max_rpm` | Maximum requests per minute the crew adheres to during execution. |
| **Language** _(optional)_ | `language` | Language used for the crew, defaults to English. |
| **Language File** _(optional)_ | `language_file` | Path to the language file to be used for the crew. |
| **Memory** _(optional)_ | `memory` | Utilized for storing execution memories (short-term, long-term, entity memory). |
| **Cache** _(optional)_ | `cache` | Specifies whether to use a cache for storing the results of tools' execution. |
| **Embedder** _(optional)_ | `embedder` | Configuration for the embedder to be used by the crew. Mostly used by memory for now. |
| **Full Output** _(optional)_ | `full_output` | Whether the crew should return the full output with all tasks outputs or just the final output. |
| **Step Callback** _(optional)_ | `step_callback` | A function that is called after each step of every agent. This can be used to log the agent's actions or to perform other operations; it won't override the agent-specific `step_callback`. |
| **Task Callback** _(optional)_ | `task_callback` | A function that is called after the completion of each task. Useful for monitoring or additional operations post-task execution. |
| **Share Crew** _(optional)_ | `share_crew` | Whether you want to share the complete crew information and execution with the crewAI team to make the library better, and allow us to train models. |
| **Output Log File** _(optional)_ | `output_log_file` | Whether you want a file with the complete crew output and execution. Set it to `True` to write `logs.txt` in the current folder, or pass a string with the full path and name of the file. |
| **Manager Agent** _(optional)_ | `manager_agent` | `manager` sets a custom agent that will be used as a manager. |
| **Manager Callbacks** _(optional)_ | `manager_callbacks` | `manager_callbacks` takes a list of callback handlers to be executed by the manager agent when a hierarchical process is used. |
| **Prompt File** _(optional)_ | `prompt_file` | Path to the prompt JSON file to be used for the crew. |
| **Planning** *(optional)* | `planning` | Adds planning ability to the Crew. When activated before each Crew iteration, all Crew data is sent to an AgentPlanner that will plan the tasks and this plan will be added to each task description. |
| **Planning LLM** *(optional)* | `planning_llm` | The language model used by the AgentPlanner in a planning process. |
!!! note "Crew Max RPM"
The `max_rpm` attribute sets the maximum number of requests per minute the crew can perform to avoid rate limits and will override individual agents' `max_rpm` settings if you set it.
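For illustration, the sketch below combines a few of these attributes, including a custom `manager_agent` for a hierarchical crew. It is a minimal example, not a complete recipe: the manager's role, goal, and backstory are placeholder values, and the agents and tasks are assumed to be defined as in the example in the next section.

```python
from crewai import Agent, Crew, Process

# Placeholder manager agent used only for illustration
manager = Agent(
    role='Project Manager',
    goal='Coordinate the crew and validate task outcomes',
    backstory='An experienced coordinator who delegates work and reviews results.',
    allow_delegation=True,
)

project_crew = Crew(
    agents=[researcher, writer],                # assumed to exist (see next section)
    tasks=[research_task, write_article_task],  # assumed to exist (see next section)
    process=Process.hierarchical,               # a hierarchical crew needs a manager
    manager_agent=manager,                      # custom manager instead of manager_llm
    max_rpm=10,                                 # crew-wide rate limit
    verbose=True,
)
```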
## Creating a Crew
When assembling a crew, you combine agents with complementary roles and tools, assign tasks, and select a process that dictates their execution order and interaction.
### Example: Assembling a Crew
```python
from crewai import Crew, Agent, Task, Process
from langchain_community.tools import DuckDuckGoSearchRun
from crewai_tools import tool

@tool('DuckDuckGoSearch')
def search(search_query: str):
    """Search the web for information on a given topic."""
    return DuckDuckGoSearchRun().run(search_query)

# Define agents with specific roles and tools
researcher = Agent(
    role='Senior Research Analyst',
    goal='Discover innovative AI technologies',
    backstory="""You're a senior research analyst at a large company.
    You're responsible for analyzing data and providing insights
    to the business.
    You're currently working on a project to analyze the
    trends and innovations in the space of artificial intelligence.""",
    tools=[search]
)

writer = Agent(
    role='Content Writer',
    goal='Write engaging articles on AI discoveries',
    backstory="""You're a senior writer at a large company.
    You're responsible for creating content for the business.
    You're currently working on a project to write about trends
    and innovations in the space of AI for your next meeting.""",
    verbose=True
)

# Create tasks for the agents
research_task = Task(
    description='Identify breakthrough AI technologies',
    agent=researcher,
    expected_output='A bullet list summary of the top 5 most important AI news'
)

write_article_task = Task(
    description='Draft an article on the latest AI technologies',
    agent=writer,
    expected_output='3 paragraph blog post on the latest AI technologies'
)

# Assemble the crew with a sequential process
my_crew = Crew(
    agents=[researcher, writer],
    tasks=[research_task, write_article_task],
    process=Process.sequential,
    full_output=True,
    verbose=True,
)
```
## Crew Output
!!! note "Understanding Crew Outputs"
The output of a crew in the crewAI framework is encapsulated within the `CrewOutput` class.
This class provides a structured way to access results of the crew's execution, including various formats such as raw strings, JSON, and Pydantic models.
The `CrewOutput` includes the results from the final task output, token usage, and individual task outputs.
### Crew Output Attributes
| Attribute | Parameters | Type | Description |
| :--------------- | :------------- | :------------------------- | :--------------------------------------------------------------------------------------------------- |
| **Raw** | `raw` | `str` | The raw output of the crew. This is the default format for the output. |
| **Pydantic** | `pydantic` | `Optional[BaseModel]` | A Pydantic model object representing the structured output of the crew. |
| **JSON Dict** | `json_dict` | `Optional[Dict[str, Any]]` | A dictionary representing the JSON output of the crew. |
| **Tasks Output** | `tasks_output` | `List[TaskOutput]` | A list of `TaskOutput` objects, each representing the output of a task in the crew. |
| **Token Usage** | `token_usage` | `Dict[str, Any]` | A summary of token usage, providing insights into the language model's performance during execution. |
### Crew Output Methods and Properties
| Method/Property | Description |
| :-------------- | :------------------------------------------------------------------------------------------------ |
| **json** | Returns the JSON string representation of the crew output if the output format is JSON. |
| **to_dict** | Converts the JSON and Pydantic outputs to a dictionary. |
| **\_\_str\_\_** | Returns the string representation of the crew output, prioritizing Pydantic, then JSON, then raw. |
### Accessing Crew Outputs
Once a crew has been executed, its output can be accessed through the `output` attribute of the `Crew` object. The `CrewOutput` class provides various ways to interact with and present this output.
#### Example
```python
import json

# Example crew execution
crew = Crew(
    agents=[research_agent, writer_agent],
    tasks=[research_task, write_article_task],
    verbose=True
)
crew_output = crew.kickoff()

# Accessing the crew output
print(f"Raw Output: {crew_output.raw}")
if crew_output.json_dict:
    print(f"JSON Output: {json.dumps(crew_output.json_dict, indent=2)}")
if crew_output.pydantic:
    print(f"Pydantic Output: {crew_output.pydantic}")
print(f"Tasks Output: {crew_output.tasks_output}")
print(f"Token Usage: {crew_output.token_usage}")
```
## Memory Utilization
Crews can utilize memory (short-term, long-term, and entity memory) to enhance their execution and learning over time. This feature allows crews to store and recall execution memories, aiding in decision-making and task execution strategies.
## Cache Utilization
Caches can be employed to store the results of tools' execution, making the process more efficient by reducing the need to re-execute identical tasks.
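Both memory and caching are enabled with flags on the `Crew` constructor. Below is a minimal sketch that reuses the agents and tasks from the example above:

```python
from crewai import Crew, Process

# Enable memory and tool-result caching for an existing set of agents and tasks
cached_crew = Crew(
    agents=[researcher, writer],
    tasks=[research_task, write_article_task],
    process=Process.sequential,
    memory=True,   # short-term, long-term, and entity memory
    cache=True,    # reuse cached tool results for identical tool calls
)
```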
## Crew Usage Metrics
After the crew execution, you can access the `usage_metrics` attribute to view the language model (LLM) usage metrics for all tasks executed by the crew. This provides insights into operational efficiency and areas for improvement.
```python
# Access the crew's usage metrics
crew = Crew(agents=[agent1, agent2], tasks=[task1, task2])
crew.kickoff()
print(crew.usage_metrics)
```
## Crew Execution Process
- **Sequential Process**: Tasks are executed one after another, allowing for a linear flow of work.
- **Hierarchical Process**: A manager agent coordinates the crew, delegating tasks and validating outcomes before proceeding. **Note**: A `manager_llm` or `manager_agent` is required for this process and it's essential for validating the process flow.
### Kicking Off a Crew
Once your crew is assembled, initiate the workflow with the `kickoff()` method. This starts the execution process according to the defined process flow.
```python
# Start the crew's task execution
result = my_crew.kickoff()
print(result)
```
### Different Ways to Kick Off a Crew
Once your crew is assembled, initiate the workflow with the appropriate kickoff method. CrewAI provides several methods for better control over the kickoff process: `kickoff()`, `kickoff_for_each()`, `kickoff_async()`, and `kickoff_for_each_async()`.
- `kickoff()`: Starts the execution process according to the defined process flow.
- `kickoff_for_each()`: Executes the crew once for each item in a provided list of inputs, returning one result per input.
- `kickoff_async()`: Initiates the workflow asynchronously, returning a coroutine you can await.
- `kickoff_for_each_async()`: Executes the crew for each input asynchronously, leveraging asynchronous processing.
```python
import asyncio

# Start the crew's task execution
result = my_crew.kickoff()
print(result)

# Example of using kickoff_for_each: one execution per input
inputs_array = [{'topic': 'AI in healthcare'}, {'topic': 'AI in finance'}]
results = my_crew.kickoff_for_each(inputs=inputs_array)
for result in results:
    print(result)

# Example of using kickoff_async (a coroutine, so run it in an event loop)
inputs = {'topic': 'AI in healthcare'}
async_result = asyncio.run(my_crew.kickoff_async(inputs=inputs))
print(async_result)

# Example of using kickoff_for_each_async
inputs_array = [{'topic': 'AI in healthcare'}, {'topic': 'AI in finance'}]
async_results = asyncio.run(my_crew.kickoff_for_each_async(inputs=inputs_array))
for async_result in async_results:
    print(async_result)
```
These methods provide flexibility in how you manage and execute tasks within your crew, allowing for both synchronous and asynchronous workflows tailored to your needs.
### Replaying from a Specific Task
You can replay the crew from a specific task using the `replay` CLI command.
The replay feature in CrewAI lets you restart execution from a chosen task via the command-line interface (CLI): running `crewai replay -t <task_id>` specifies the `task_id` to replay from.
Each kickoff saves its returned task outputs locally, so you can replay from them later.
### Replaying from a Specific Task Using the CLI
To use the replay feature, follow these steps:
1. Open your terminal or command prompt.
2. Navigate to the directory where your CrewAI project is located.
3. Run the following command to view the latest kickoff task IDs:
```shell
crewai log-tasks-outputs
```
Then, to replay from a specific task, use:
```shell
crewai replay -t <task_id>
```
These commands let you replay from your latest kickoff tasks, still retaining context from previously executed tasks.


@@ -0,0 +1,206 @@
---
title: crewAI Memory Systems
description: Leveraging memory systems in the crewAI framework to enhance agent capabilities.
---
## Introduction to Memory Systems in crewAI
!!! note "Enhancing Agent Intelligence"
The crewAI framework introduces a sophisticated memory system designed to significantly enhance the capabilities of AI agents. This system comprises short-term memory, long-term memory, entity memory, and contextual memory, each serving a unique purpose in aiding agents to remember, reason, and learn from past interactions.
## Memory System Components
| Component | Description |
| :------------------- | :----------------------------------------------------------- |
| **Short-Term Memory**| Temporarily stores recent interactions and outcomes, enabling agents to recall and utilize information relevant to their current context during the current execution. |
| **Long-Term Memory** | Preserves valuable insights and learnings from past executions, allowing agents to build and refine their knowledge over time, so agents can remember what they did well and what went wrong across multiple executions. |
| **Entity Memory** | Captures and organizes information about entities (people, places, concepts) encountered during tasks, facilitating deeper understanding and relationship mapping. |
| **Contextual Memory**| Maintains the context of interactions by combining `ShortTermMemory`, `LongTermMemory`, and `EntityMemory`, aiding in the coherence and relevance of agent responses over a sequence of tasks or a conversation. |
## How Memory Systems Empower Agents
1. **Contextual Awareness**: With short-term and contextual memory, agents gain the ability to maintain context over a conversation or task sequence, leading to more coherent and relevant responses.
2. **Experience Accumulation**: Long-term memory allows agents to accumulate experiences, learning from past actions to improve future decision-making and problem-solving.
3. **Entity Understanding**: By maintaining entity memory, agents can recognize and remember key entities, enhancing their ability to process and interact with complex information.
## Implementing Memory in Your Crew
When configuring a crew, you can enable and customize each memory component to suit the crew's objectives and the nature of tasks it will perform.
By default, the memory system is disabled; you can activate it by setting `memory=True` in the crew configuration. Memory uses OpenAI embeddings by default, but you can change this by setting `embedder` to a different model.
The `embedder` only applies to **Short-Term Memory**, which uses Chroma for RAG via the EmbedChain package.
**Long-Term Memory** uses SQLite3 to store task results. Currently, there is no way to override these storage implementations.
The data storage files are saved to a platform-specific location determined by the appdirs package and the project name; this location can be overridden with the **CREWAI_STORAGE_DIR** environment variable.
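As a minimal sketch (the directory below is a placeholder), the override can be set from Python before the crew and its memory are created so that it is picked up:

```python
import os

# Placeholder path: point crewAI's memory/storage files at a custom directory.
# Set this before the crew (and its memory) is created.
os.environ["CREWAI_STORAGE_DIR"] = "/tmp/my_crew_storage"
```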
### Example: Configuring Memory for a Crew
```python
from crewai import Crew, Agent, Task, Process
# Assemble your crew with memory capabilities
my_crew = Crew(
agents=[...],
tasks=[...],
process=Process.sequential,
memory=True,
verbose=True
)
```
## Additional Embedding Providers
### Using OpenAI embeddings (already default)
```python
from crewai import Crew, Agent, Task, Process
my_crew = Crew(
agents=[...],
tasks=[...],
process=Process.sequential,
memory=True,
verbose=True,
embedder={
"provider": "openai",
"config":{
"model": 'text-embedding-3-small'
}
}
)
```
### Using Google AI embeddings
```python
from crewai import Crew, Agent, Task, Process
my_crew = Crew(
agents=[...],
tasks=[...],
process=Process.sequential,
memory=True,
verbose=True,
embedder={
"provider": "google",
"config":{
"model": 'models/embedding-001',
"task_type": "retrieval_document",
"title": "Embeddings for Embedchain"
}
}
)
```
### Using Azure OpenAI embeddings
```python
from crewai import Crew, Agent, Task, Process
my_crew = Crew(
agents=[...],
tasks=[...],
process=Process.sequential,
memory=True,
verbose=True,
embedder={
"provider": "azure_openai",
"config":{
"model": 'text-embedding-ada-002',
"deployment_name": "your_embedding_model_deployment_name"
}
}
)
```
### Using GPT4ALL embeddings
```python
from crewai import Crew, Agent, Task, Process
my_crew = Crew(
agents=[...],
tasks=[...],
process=Process.sequential,
memory=True,
verbose=True,
embedder={
"provider": "gpt4all"
}
)
```
### Using Vertex AI embeddings
```python
from crewai import Crew, Agent, Task, Process
my_crew = Crew(
agents=[...],
tasks=[...],
process=Process.sequential,
memory=True,
verbose=True,
embedder={
"provider": "vertexai",
"config":{
"model": 'textembedding-gecko'
}
}
)
```
### Using Cohere embeddings
```python
from crewai import Crew, Agent, Task, Process
my_crew = Crew(
agents=[...],
tasks=[...],
process=Process.sequential,
memory=True,
verbose=True,
embedder={
"provider": "cohere",
"config":{
"model": "embed-english-v3.0",
"vector_dimension": 1024
}
}
)
```
### Resetting Memory
To clear stored memories or the latest kickoff outputs, use the `crewai reset_memories` CLI command:
```sh
crewai reset_memories [OPTIONS]
```
#### Resetting Memory Options
- **`-l, --long`**
- **Description:** Reset LONG TERM memory.
- **Type:** Flag (boolean)
- **Default:** False
- **`-s, --short`**
- **Description:** Reset SHORT TERM memory.
- **Type:** Flag (boolean)
- **Default:** False
- **`-e, --entities`**
- **Description:** Reset ENTITIES memory.
- **Type:** Flag (boolean)
- **Default:** False
- **`-k, --kickoff-outputs`**
- **Description:** Reset LATEST KICKOFF TASK OUTPUTS.
- **Type:** Flag (boolean)
- **Default:** False
- **`-a, --all`**
- **Description:** Reset ALL memories.
- **Type:** Flag (boolean)
- **Default:** False
## Benefits of Using crewAI's Memory System
- **Adaptive Learning:** Crews become more efficient over time, adapting to new information and refining their approach to tasks.
- **Enhanced Personalization:** Memory enables agents to remember user preferences and historical interactions, leading to personalized experiences.
- **Improved Problem Solving:** Access to a rich memory store aids agents in making more informed decisions, drawing on past learnings and contextual insights.
## Getting Started
Integrating crewAI's memory system into your projects is straightforward. By leveraging the provided memory components and configurations, you can quickly empower your agents with the ability to remember, reason, and learn from their interactions, unlocking new levels of intelligence and capability.


@@ -0,0 +1,267 @@
---
title: crewAI Pipelines
description: Understanding and utilizing pipelines in the crewAI framework for efficient multi-stage task processing.
---
## What is a Pipeline?
A pipeline in crewAI represents a structured workflow that allows for the sequential or parallel execution of multiple crews. It provides a way to organize complex processes involving multiple stages, where the output of one stage can serve as input for subsequent stages.
## Key Terminology
Understanding the following terms is crucial for working effectively with pipelines:
- **Stage**: A distinct part of the pipeline, which can be either sequential (a single crew) or parallel (multiple crews executing concurrently).
- **Run**: A specific execution of the pipeline for a given set of inputs, representing a single instance of processing through the pipeline.
- **Branch**: Parallel executions within a stage (e.g., concurrent crew operations).
- **Trace**: The journey of an individual input through the entire pipeline, capturing the path and transformations it undergoes.
Example pipeline structure:
```
crew1 >> [crew2, crew3] >> crew4
```
This represents a pipeline with three stages:
1. A sequential stage (crew1)
2. A parallel stage with two branches (crew2 and crew3 executing concurrently)
3. Another sequential stage (crew4)
Each input creates its own run, flowing through all stages of the pipeline. Multiple runs can be processed concurrently, each following the defined pipeline structure.
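As a brief sketch, assuming `crew1` through `crew4` are existing `Crew` objects, this structure maps onto the `stages` attribute described below, where a nested list marks a parallel stage:

```python
from crewai import Pipeline

# crew1..crew4 are assumed to be already-configured Crew instances
my_pipeline = Pipeline(
    stages=[
        crew1,            # stage 1: sequential
        [crew2, crew3],   # stage 2: parallel branches
        crew4,            # stage 3: sequential
    ]
)
```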
## Pipeline Attributes
| Attribute | Parameters | Description |
| :--------- | :--------- | :---------------------------------------------------------------------------------------------- |
| **Stages** | `stages` | A list of crews, lists of crews, or routers representing the stages to be executed in sequence. |
## Creating a Pipeline
When creating a pipeline, you define a series of stages, each consisting of either a single crew or a list of crews for parallel execution. The pipeline ensures that each stage is executed in order, with the output of one stage feeding into the next.
### Example: Assembling a Pipeline
```python
from crewai import Crew, Agent, Task, Process, Pipeline

# Define your crews
research_crew = Crew(
    agents=[researcher],
    tasks=[research_task],
    process=Process.sequential
)

analysis_crew = Crew(
    agents=[analyst],
    tasks=[analysis_task],
    process=Process.sequential
)

writing_crew = Crew(
    agents=[writer],
    tasks=[writing_task],
    process=Process.sequential
)

# Assemble the pipeline
my_pipeline = Pipeline(
    stages=[research_crew, analysis_crew, writing_crew]
)
```
## Pipeline Methods
| Method | Description |
| :--------------- | :----------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| **process_runs** | Executes the pipeline, processing all stages and returning the results. This method initiates one or more runs through the pipeline, handling the flow of data between stages. |
## Pipeline Output
!!! note "Understanding Pipeline Outputs"
The output of a pipeline in the crewAI framework is encapsulated within the `PipelineKickoffResult` class. This class provides a structured way to access the results of the pipeline's execution, including various formats such as raw strings, JSON, and Pydantic models.
### Pipeline Output Attributes
| Attribute | Parameters | Type | Description |
| :-------------- | :------------ | :------------------------ | :-------------------------------------------------------------------------------------------------------- |
| **ID** | `id` | `UUID4` | A unique identifier for the pipeline output. |
| **Run Results** | `run_results` | `List[PipelineRunResult]` | A list of `PipelineRunResult` objects, each representing the output of a single run through the pipeline. |
### Pipeline Output Methods
| Method/Property | Description |
| :----------------- | :----------------------------------------------------- |
| **add_run_result** | Adds a `PipelineRunResult` to the list of run results. |
### Pipeline Run Result Attributes
| Attribute | Parameters | Type | Description |
| :---------------- | :-------------- | :------------------------- | :-------------------------------------------------------------------------------------------- |
| **ID** | `id` | `UUID4` | A unique identifier for the run result. |
| **Raw** | `raw` | `str` | The raw output of the final stage in the pipeline run. |
| **Pydantic** | `pydantic` | `Optional[BaseModel]` | A Pydantic model object representing the structured output of the final stage, if applicable. |
| **JSON Dict** | `json_dict` | `Optional[Dict[str, Any]]` | A dictionary representing the JSON output of the final stage, if applicable. |
| **Token Usage** | `token_usage` | `Dict[str, Any]` | A summary of token usage across all stages of the pipeline run. |
| **Trace** | `trace` | `List[Any]` | A trace of the journey of inputs through the pipeline run. |
| **Crews Outputs** | `crews_outputs` | `List[CrewOutput]` | A list of `CrewOutput` objects, representing the outputs from each crew in the pipeline run. |
### Pipeline Run Result Methods and Properties
| Method/Property | Description |
| :-------------- | :------------------------------------------------------------------------------------------------------- |
| **json** | Returns the JSON string representation of the run result if the output format of the final task is JSON. |
| **to_dict** | Converts the JSON and Pydantic outputs to a dictionary. |
| **\_\_str\_\_** | Returns the string representation of the run result, prioritizing Pydantic, then JSON, then raw. |
### Accessing Pipeline Outputs
Once a pipeline has been executed, its output can be accessed through the `PipelineOutput` object returned by the `process_runs` method. The `PipelineOutput` class provides access to individual `PipelineRunResult` objects, each representing a single run through the pipeline.
#### Example
```python
import json

# Define input data for the pipeline
input_data = [{"initial_query": "Latest advancements in AI"}, {"initial_query": "Future of robotics"}]

# Execute the pipeline (process_runs is a coroutine, so await it inside an async function)
pipeline_output = await my_pipeline.process_runs(input_data)

# Access the results
for run_result in pipeline_output.run_results:
    print(f"Run ID: {run_result.id}")
    print(f"Final Raw Output: {run_result.raw}")
    if run_result.json_dict:
        print(f"JSON Output: {json.dumps(run_result.json_dict, indent=2)}")
    if run_result.pydantic:
        print(f"Pydantic Output: {run_result.pydantic}")
    print(f"Token Usage: {run_result.token_usage}")
    print(f"Trace: {run_result.trace}")
    print("Crew Outputs:")
    for crew_output in run_result.crews_outputs:
        print(f"  Crew: {crew_output.raw}")
    print("\n")
```
This example demonstrates how to access and work with the pipeline output, including individual run results and their associated data.
## Using Pipelines
Pipelines are particularly useful for complex workflows that involve multiple stages of processing, analysis, or content generation. They allow you to:
1. **Sequence Operations**: Execute crews in a specific order, ensuring that the output of one crew is available as input to the next.
2. **Parallel Processing**: Run multiple crews concurrently within a stage for increased efficiency.
3. **Manage Complex Workflows**: Break down large tasks into smaller, manageable steps executed by specialized crews.
### Example: Running a Pipeline
```python
# Define input data for the pipeline
input_data = [{"initial_query": "Latest advancements in AI"}]
# Execute the pipeline, initiating a run for each input
results = await my_pipeline.process_runs(input_data)
# Access the results
for result in results:
    print(f"Final Output: {result.raw}")
    print(f"Token Usage: {result.token_usage}")
    print(f"Trace: {result.trace}")  # Shows the path of the input through all stages
```
## Advanced Features
### Parallel Execution within Stages
You can define parallel execution within a stage by providing a list of crews, creating multiple branches:
```python
parallel_analysis_crew = Crew(agents=[financial_analyst], tasks=[financial_analysis_task])
market_analysis_crew = Crew(agents=[market_analyst], tasks=[market_analysis_task])
my_pipeline = Pipeline(
stages=[
research_crew,
[parallel_analysis_crew, market_analysis_crew], # Parallel execution (branching)
writing_crew
]
)
```
### Routers in Pipelines
Routers are a powerful feature in crewAI pipelines that allow for dynamic decision-making and branching within your workflow. They enable you to direct the flow of execution based on specific conditions or criteria, making your pipelines more flexible and adaptive.
#### What is a Router?
A router in crewAI is a special component that can be included as a stage in your pipeline. It evaluates the input data and determines which path the execution should take next. This allows for conditional branching in your pipeline, where different crews or sub-pipelines can be executed based on the router's decision.
#### Key Components of a Router
1. **Routes**: A dictionary of named routes, each associated with a condition and a pipeline to execute if the condition is met.
2. **Default Route**: A fallback pipeline that is executed if none of the defined route conditions are met.
#### Creating a Router
Here's an example of how to create a router:
```python
from crewai import Router, Route, Pipeline, Crew, Agent, Task
# Define your agents
classifier = Agent(name="Classifier", role="Email Classifier")
urgent_handler = Agent(name="Urgent Handler", role="Urgent Email Processor")
normal_handler = Agent(name="Normal Handler", role="Normal Email Processor")
# Define your tasks
classify_task = Task(description="Classify the email based on its content and metadata.")
urgent_task = Task(description="Process and respond to urgent email quickly.")
normal_task = Task(description="Process and respond to normal email thoroughly.")
# Define your crews
classification_crew = Crew(agents=[classifier], tasks=[classify_task]) # classify email between high and low urgency 1-10
urgent_crew = Crew(agents=[urgent_handler], tasks=[urgent_task])
normal_crew = Crew(agents=[normal_handler], tasks=[normal_task])
# Create pipelines for different urgency levels
urgent_pipeline = Pipeline(stages=[urgent_crew])
normal_pipeline = Pipeline(stages=[normal_crew])
# Create a router
email_router = Router(
routes={
"high_urgency": Route(
condition=lambda x: x.get("urgency_score", 0) > 7,
pipeline=urgent_pipeline
),
"low_urgency": Route(
condition=lambda x: x.get("urgency_score", 0) <= 7,
pipeline=normal_pipeline
)
},
default=Pipeline(stages=[normal_pipeline]) # Default to just normal if no urgency score
)
# Use the router in a main pipeline
main_pipeline = Pipeline(stages=[classification_crew, email_router])
inputs = [{"email": "..."}, {"email": "..."}] # List of email data
main_pipeline.kickoff(inputs=inputs)
```
In this example, the router decides between an urgent pipeline and a normal pipeline based on the urgency score of the email. If the urgency score is greater than 7, it routes to the urgent pipeline; otherwise, it uses the normal pipeline. If the input doesn't include an urgency score at all, it falls back to the default, which here is the normal pipeline.
#### Benefits of Using Routers
1. **Dynamic Workflow**: Adapt your pipeline's behavior based on input characteristics or intermediate results.
2. **Efficiency**: Route urgent tasks to quicker processes, reserving more thorough pipelines for less time-sensitive inputs.
3. **Flexibility**: Easily modify or extend your pipeline's logic without changing the core structure.
4. **Scalability**: Handle a wide range of email types and urgency levels with a single pipeline structure.
### Error Handling and Validation
The Pipeline class includes validation mechanisms to ensure the robustness of the pipeline structure:
- Validates that stages contain only Crew instances or lists of Crew instances.
- Prevents double nesting of stages to maintain a clear structure.
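As a hedged illustration of these checks (assuming validation errors are raised when the `Pipeline` is constructed, and reusing the crews from the earlier example), a doubly nested stage would be rejected:

```python
from crewai import Pipeline

try:
    # Invalid: a parallel stage may contain crews, but not another nested list
    bad_pipeline = Pipeline(
        stages=[
            research_crew,
            [analysis_crew, [writing_crew]],  # double nesting is not allowed
        ]
    )
except Exception as error:  # exact exception type may vary (e.g. a pydantic ValidationError)
    print(f"Pipeline validation failed: {error}")
```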


@@ -0,0 +1,134 @@
---
title: crewAI Planning
description: Learn how to add planning to your crewAI Crew and improve their performance.
---
## Introduction
The planning feature in CrewAI allows you to add planning capability to your crew. When enabled, before each Crew iteration, all Crew information is sent to an AgentPlanner that will plan the tasks step by step, and this plan will be added to each task description.
### Using the Planning Feature
Getting started with the planning feature is very easy; the only step required is to add `planning=True` to your Crew:
```python
from crewai import Crew, Agent, Task, Process
# Assemble your crew with planning capabilities
my_crew = Crew(
agents=self.agents,
tasks=self.tasks,
process=Process.sequential,
planning=True,
)
```
From this point on, your crew will have planning enabled, and the tasks will be planned before each iteration.
#### Planning LLM
Now you can define the LLM that will be used to plan the tasks. You can use any available `ChatOpenAI` LLM model.
```python
from crewai import Crew, Agent, Task, Process
from langchain_openai import ChatOpenAI
# Assemble your crew with planning capabilities and custom LLM
my_crew = Crew(
agents=self.agents,
tasks=self.tasks,
process=Process.sequential,
planning=True,
planning_llm=ChatOpenAI(model="gpt-4o")
)
```
### Example
When running the base case example, you will see something like the following output, which represents the output of the AgentPlanner responsible for creating the step-by-step logic that is added to the agents' tasks.
```
[2024-07-15 16:49:11][INFO]: Planning the crew execution
**Step-by-Step Plan for Task Execution**
**Task Number 1: Conduct a thorough research about AI LLMs**
**Agent:** AI LLMs Senior Data Researcher
**Agent Goal:** Uncover cutting-edge developments in AI LLMs
**Task Expected Output:** A list with 10 bullet points of the most relevant information about AI LLMs
**Task Tools:** None specified
**Agent Tools:** None specified
**Step-by-Step Plan:**
1. **Define Research Scope:**
- Determine the specific areas of AI LLMs to focus on, such as advancements in architecture, use cases, ethical considerations, and performance metrics.
2. **Identify Reliable Sources:**
- List reputable sources for AI research, including academic journals, industry reports, conferences (e.g., NeurIPS, ACL), AI research labs (e.g., OpenAI, Google AI), and online databases (e.g., IEEE Xplore, arXiv).
3. **Collect Data:**
- Search for the latest papers, articles, and reports published in 2023 and early 2024.
- Use keywords like "Large Language Models 2024", "AI LLM advancements", "AI ethics 2024", etc.
4. **Analyze Findings:**
- Read and summarize the key points from each source.
- Highlight new techniques, models, and applications introduced in the past year.
5. **Organize Information:**
- Categorize the information into relevant topics (e.g., new architectures, ethical implications, real-world applications).
- Ensure each bullet point is concise but informative.
6. **Create the List:**
- Compile the 10 most relevant pieces of information into a bullet point list.
- Review the list to ensure clarity and relevance.
**Expected Output:**
A list with 10 bullet points of the most relevant information about AI LLMs.
---
**Task Number 2: Review the context you got and expand each topic into a full section for a report**
**Agent:** AI LLMs Reporting Analyst
**Agent Goal:** Create detailed reports based on AI LLMs data analysis and research findings
**Task Expected Output:** A fully fledge report with the main topics, each with a full section of information. Formatted as markdown without '```'
**Task Tools:** None specified
**Agent Tools:** None specified
**Step-by-Step Plan:**
1. **Review the Bullet Points:**
- Carefully read through the list of 10 bullet points provided by the AI LLMs Senior Data Researcher.
2. **Outline the Report:**
- Create an outline with each bullet point as a main section heading.
- Plan sub-sections under each main heading to cover different aspects of the topic.
3. **Research Further Details:**
- For each bullet point, conduct additional research if necessary to gather more detailed information.
- Look for case studies, examples, and statistical data to support each section.
4. **Write Detailed Sections:**
- Expand each bullet point into a comprehensive section.
- Ensure each section includes an introduction, detailed explanation, examples, and a conclusion.
- Use markdown formatting for headings, subheadings, lists, and emphasis.
5. **Review and Edit:**
- Proofread the report for clarity, coherence, and correctness.
- Make sure the report flows logically from one section to the next.
- Format the report according to markdown standards.
6. **Finalize the Report:**
- Ensure the report is complete with all sections expanded and detailed.
- Double-check formatting and make any necessary adjustments.
**Expected Output:**
A fully-fledged report with the main topics, each with a full section of information. Formatted as markdown without '```'.
```


@@ -0,0 +1,59 @@
---
title: Managing Processes in CrewAI
description: Detailed guide on workflow management through processes in CrewAI, with updated implementation details.
---
## Understanding Processes
!!! note "Core Concept"
In CrewAI, processes orchestrate the execution of tasks by agents, akin to project management in human teams. These processes ensure tasks are distributed and executed efficiently, in alignment with a predefined strategy.
## Process Implementations
- **Sequential**: Executes tasks sequentially, ensuring tasks are completed in an orderly progression.
- **Hierarchical**: Organizes tasks in a managerial hierarchy, where tasks are delegated and executed based on a structured chain of command. A manager language model (`manager_llm`) or a custom manager agent (`manager_agent`) must be specified in the crew to enable the hierarchical process, facilitating the creation and management of tasks by the manager.
- **Consensual Process (Planned)**: Aiming for collaborative decision-making among agents on task execution, this process type introduces a democratic approach to task management within CrewAI. It is planned for future development and is not currently implemented in the codebase.
## The Role of Processes in Teamwork
Processes enable individual agents to operate as a cohesive unit, streamlining their efforts to achieve common objectives with efficiency and coherence.
## Assigning Processes to a Crew
To assign a process to a crew, specify the process type upon crew creation to set the execution strategy. For a hierarchical process, ensure to define `manager_llm` or `manager_agent` for the manager agent.
```python
from crewai import Crew
from crewai.process import Process
from langchain_openai import ChatOpenAI
# Example: Creating a crew with a sequential process
crew = Crew(
agents=my_agents,
tasks=my_tasks,
process=Process.sequential
)
# Example: Creating a crew with a hierarchical process
# Ensure to provide a manager_llm or manager_agent
crew = Crew(
agents=my_agents,
tasks=my_tasks,
process=Process.hierarchical,
manager_llm=ChatOpenAI(model="gpt-4")
# or
# manager_agent=my_manager_agent
)
```
**Note:** Ensure `my_agents` and `my_tasks` are defined prior to creating a `Crew` object, and for the hierarchical process, either `manager_llm` or `manager_agent` is also required.
## Sequential Process
This method mirrors dynamic team workflows, progressing through tasks in a thoughtful and systematic manner. Task execution follows the predefined order in the task list, with the output of one task serving as context for the next.
To customize task context, utilize the `context` parameter in the `Task` class to specify outputs that should be used as context for subsequent tasks.
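For example, a brief sketch (with hypothetical agents assumed to be defined elsewhere) of passing specific task outputs as context to a later task:

```python
from crewai import Task

research_task = Task(
    description="Research the latest AI developments",
    expected_output="A short research summary",
    agent=researcher,   # hypothetical agent
)
analysis_task = Task(
    description="Analyze the research findings",
    expected_output="Key takeaways from the research",
    agent=analyst,      # hypothetical agent
)
write_task = Task(
    description="Write a report based on the research and analysis",
    expected_output="A markdown report",
    agent=writer,       # hypothetical agent
    context=[research_task, analysis_task],  # use these tasks' outputs as context
)
```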
## Hierarchical Process
Emulating a corporate hierarchy, CrewAI lets you specify a custom manager agent or automatically creates one, which requires specifying a manager language model (`manager_llm`). This agent oversees task execution, including planning, delegation, and validation. Tasks are not pre-assigned; the manager allocates tasks to agents based on their capabilities, reviews outputs, and assesses task completion.
## Process Class: Detailed Overview
The `Process` class is implemented as an enumeration (`Enum`), ensuring type safety and restricting process values to the defined types (`sequential`, `hierarchical`). The consensual process is planned for future inclusion, emphasizing our commitment to continuous development and innovation.
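For reference, a small sketch of how the enumeration restricts values (using the import path shown in the example above):

```python
from crewai.process import Process

print(list(Process))             # the defined process types, e.g. sequential and hierarchical
print(Process.sequential.value)  # the underlying string value of a member
```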
## Conclusion
The structured collaboration facilitated by processes within CrewAI is crucial for enabling systematic teamwork among agents. This documentation has been updated to reflect the latest features, enhancements, and the planned integration of the Consensual Process, ensuring users have access to the most current and comprehensive information.
