Compare commits

...

183 Commits

Author SHA1 Message Date
Lorenze Jay
ae2e9de893 added validator for async_execution=True on tasks when running a hierarchical process 2024-07-03 12:27:21 -07:00
Lorenze Jay
9efd03e2df Merge branch 'main' of github.com:joaomdmoura/crewAI into lj/optional-agent-in-task-bug 2024-07-03 11:48:35 -07:00
Brandon Hancock (bhancock_ai)
57fc079267 Bugfix/kickoff for each usage metrics (#844)
* WIP. Figuring out disconnect issue.

* Cleaned up logs now that I've isolated the issue to the LLM

* more wip.

* WIP. It looks like usage metrics has always been broken for async

* Update parent crew who is managing for_each loop

* Merge in main to bugfix/kickoff-for-each-usage-metrics

* Clean up code for review

* Add new tests

* Final cleanup. Ready for review.

* Moving copy functionality from Agent to BaseAgent

* Fix renaming issue

* Fix linting errors

* use BaseAgent instead of Agent where applicable
2024-07-03 15:30:53 -03:00
Alex Brinsmead
706f4cd74a Fix typos in EN "human_feedback" string (#859)
* Fix typo in EN "human_feedback" string

* Fix typos in EN "human_feedback" string
2024-07-03 15:26:58 -03:00
Taleb
2e3646cc96 Improved documentation for training module usage (#860)
- Added detailed steps for training the crew programmatically.
- Clarified the distinction between using the CLI and programmatic approaches.

This update makes it easier for users to understand how to train their crew both through the CLI and programmatically, whether using a UI or API endpoints.

Again Thank you to the author for the great project and the excellent foundation provided!
2024-07-03 15:26:32 -03:00
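A minimal sketch of the programmatic training approach this doc update describes, assuming the `crew.train()` API added in #686 (the keyword name `n_iterations` and all field values are illustrative):

```python
from crewai import Agent, Crew, Task

agent = Agent(role="Researcher", goal="Answer questions", backstory="A curious analyst")
task = Task(description="Say hello", expected_output="A one-line greeting", agent=agent)
crew = Crew(agents=[agent], tasks=[task])

# Each training iteration runs the crew and collects human feedback
# that is used to refine the agents on subsequent runs.
crew.train(n_iterations=2)
```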
Brandon Hancock (bhancock_ai)
844cc515d5 Fix agentops poetry install issue (#863)
* Fix agentops poetry install issue

* Updated install requirements tests to fail if .lock becomes out of sync with poetry install. Cleaned up old issues that were merged back in.
2024-07-03 15:22:32 -03:00
Lorenze Jay
09ded56ae5 fixed tests and better validator message 2024-07-03 11:14:13 -07:00
Lorenze Jay
229c1d74b6 Merge branch 'main' of github.com:joaomdmoura/crewAI into lj/optional-agent-in-task-bug 2024-07-03 07:59:15 -07:00
Braelyn Boynton
f47904134b Add back AgentOps as Optional Dependency (#543)
* implements agentops with a langchain handler, agent tracking and tool call recording

* track tool usage

* end session after completion

* track tool usage time

* better tool and llm tracking

* code cleanup

* make agentops optional

* optional dependency usage

* remove telemetry code

* optional agentops

* agentops version bump

* remove org key

* true dependency

* add crew org key to agentops

* cleanup

* Update pyproject.toml

* Revert "true dependency"

This reverts commit e52e8e9568.

* Revert "cleanup"

This reverts commit 7f5635fb9e.

* optional parent key

* agentops 0.1.5

* Revert "Revert "cleanup""

This reverts commit cea33d9a5d.

* Revert "Revert "true dependency""

This reverts commit 4d1b460b

* cleanup

* Forcing version 0.1.5

* Update pyproject.toml

* agentops update

* noop

* add crew tag

* black formatting

* use langchain callback handler to support all LLMs

* agentops version bump

* track task evaluator

* merge upstream

* Fix typo in instruction en.json (#676)

* Enable search in docs (#663)

* Clarify text in docstring (#662)

* Update agent.py (#655)

Changed default model value from gpt-4 to gpt-4o.
Reasoning:
gpt-4 costs $30 per million tokens while gpt-4o costs $5.
This is more cost-friendly for the default option.

* Update README.md (#652)

Reworked the example so that if you use a custom LLM by uncommenting the relevant lines, it doesn't throw code errors.

* Update BrowserbaseLoadTool.md (#647)

* Update crew.py (#644)

Fixed Type on line 53

* fixes #665 (#666)

* Added timestamp to logger (#646)

* Added timestamp to logger

Updated the logger.py file to include timestamps when logging output. For example:

 [2024-05-20 15:32:48][DEBUG]: == Working Agent: Researcher
 [2024-05-20 15:32:48][INFO]: == Starting Task: Research the topic
 [2024-05-20 15:33:22][DEBUG]: == [Researcher] Task output:

* Update tool_usage.py

* Revert "Update tool_usage.py"

This reverts commit 95d18d5b6f.

incorrect branch for this commit

* support skip auto end session

* conditional protect agentops use

* fix crew logger bug

* fix crew logger bug

* Update crew.py

* Update tool_usage.py

---------

Co-authored-by: João Moura <joaomdmoura@gmail.com>
Co-authored-by: Howard Gil <howardbgil@gmail.com>
Co-authored-by: Olivier Roberdet <niox5199@gmail.com>
Co-authored-by: Paul Sanders <psanders1@gmail.com>
Co-authored-by: Anudeep Kolluri <50168940+Anudeep-Kolluri@users.noreply.github.com>
Co-authored-by: Mike Heavers <heaversm@users.noreply.github.com>
Co-authored-by: Mish Ushakov <10400064+mishushakov@users.noreply.github.com>
Co-authored-by: theCyberTech - Rip&Tear <84775494+theCyberTech@users.noreply.github.com>
Co-authored-by: Saif Mahmud <60409889+vmsaif@users.noreply.github.com>
2024-07-02 21:52:15 -03:00
Salman Faroz
d72b00af3c Update Sequential.md (#849)
To resolve:
pydantic_core._pydantic_core.ValidationError: 1 validation error for Task
expected_output
Field required [type=missing, input_value=, input_type=dict]
For further information visit https://errors.pydantic.dev/2.6/v/missing

"Expected Output" is mandatory now as it forces people to be specific about the expected result and get better result


refer : https://github.com/joaomdmoura/crewAI/issues/308
2024-07-02 21:17:53 -03:00
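A minimal sketch of a `Task` that satisfies the now-mandatory field (field values are illustrative):

```python
from crewai import Task

task = Task(
    description="Research the latest AI trends",
    # Omitting expected_output raises the pydantic "Field required"
    # error quoted above; being explicit also yields better results.
    expected_output="A bullet-point list of the 5 most notable trends",
)
```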
Taleb
bd053a98c7 Enhanced documentation for readability and clarity (#855)
- Added a "Parameters" column to attribute tables. Improved overall document formatting for enhanced readability and ease of use.

Thank you to the author for the great project and the excellent foundation provided!
2024-07-02 21:17:04 -03:00
Lorenze Jay
c18208ca59 fixed mixin (#831)
* fixed mixin

* WIP: fixing types

* type fixes on mixin
2024-07-02 21:16:26 -03:00
Lorenze Jay
7c0a55c158 better test and fixed task.agent logic 2024-07-02 16:22:25 -07:00
Lorenze Jay
c2d7678812 Merge branch 'main' of github.com:joaomdmoura/crewAI into lj/optional-agent-in-task-bug 2024-07-02 15:40:25 -07:00
João Moura
acbe5af8ce preparing new version 2024-07-02 09:03:20 -07:00
Eduardo Chiarotti
c81146505a docs: Update training feature/code interpreter docs (#852)
* docs: remove training docs from README

* docs: add CodeinterpreterTool to docs and update docs

* docs: fix name of tool
2024-07-02 13:00:37 -03:00
João Moura
6b9a1d4040 adding link to docs 2024-07-01 18:41:31 -07:00
João Moura
508fbd49e9 preparing new version 2024-07-01 18:28:11 -07:00
João Moura
e18a6c6bb8 updating tools 2024-07-01 15:25:29 -07:00
João Moura
16237ef393 rollback update to new version 2024-07-01 15:25:10 -07:00
João Moura
5332d02f36 preparing new version 2024-07-01 15:12:22 -07:00
João Moura
7258120a0d preparing new version 2024-07-01 15:10:13 -07:00
Lorenze Jay
feb9d6938c Merge branch 'main' of github.com:joaomdmoura/crewAI into lj/optional-agent-in-task-bug 2024-07-01 09:33:08 -07:00
Lorenze Jay
9392788ed0 fixed bug for manager overriding task agent and added pydantic validators for sequential process when no agent is added to task 2024-07-01 09:32:43 -07:00
João Moura
8b7bc69ba1 preparing new version 2024-07-01 08:41:13 -07:00
João Moura
5a807eb93f preparing new version 2024-07-01 06:08:46 -07:00
João Moura
130682c93b preparing new version 2024-07-01 05:48:47 -07:00
João Moura
02e29e4681 new docs 2024-07-01 05:32:22 -07:00
João Moura
6943eb4463 small formatting details 2024-07-01 05:32:22 -07:00
João Moura
939a18a4d2 Updating docs 2024-07-01 05:32:22 -07:00
João Moura
ccbe415315 updating docs 2024-07-01 05:32:22 -07:00
João Moura
511af98dea small refactoring for new version 2024-07-01 05:32:22 -07:00
gpu7
a9d94112f5 bugfix in python script sample code (#787)
Add the line:

process = Process.sequential
2024-07-01 00:23:06 -03:00
JoePro
1bca6029fe Update LLM-Connections.md (#796)
Revised to utilize Ollama from langchain.llms instead, as the functionality from the other method simply doesn't work when delegating.

Co-authored-by: João Moura <joaomdmoura@gmail.com>
2024-07-01 00:22:38 -03:00
Eelke van den Bos
c027aa8bf6 Set manager verbosity to crew verbosity by default (#797)
Fixes #793
2024-07-01 00:20:39 -03:00
finecwg
ce7d86e0df Update tool_usage.py (#828)
fixed error for some cases with Pandas DataFrame:

ValueError: The truth value of a DataFrame is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all().
2024-07-01 00:19:36 -03:00
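The ambiguity arises because pandas refuses to coerce a DataFrame to a boolean; a sketch of the kind of explicit check that avoids it (the exact fix in tool_usage.py may differ):

```python
import pandas as pd

df = pd.DataFrame({"a": [1, 2]})

# `if df:` raises "The truth value of a DataFrame is ambiguous"
if df is not None and not df.empty:  # explicit None/emptiness checks instead
    print(df.head())
```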
Bruno Tanabe
5dfaf866c9 fix: Fix grammar error in documentation 'Crew Attributes' (#836)
Correction of grammar error in the CrewAI documentation, on the page 'https://docs.crewai.com/core-concepts/Crews/' it says 'ustom' instead of 'Custom'.
2024-07-01 00:16:06 -03:00
Gui Vieira
5b66e87621 Improve telemetry (#818)
* Improve telemetry

* Minor adjustments

* Try to fix typing error

* Try to fix typing error [2]
2024-06-28 20:05:47 -03:00
João Moura
851dd0f84f preparing new version 2024-06-27 11:04:08 -07:00
Eduardo Chiarotti
2188358f13 docs: add docs for training (#824) 2024-06-27 14:56:32 -03:00
Lorenze Jay
10997dd175 Lorenzejay/byoa (#776)
* better spacing

* works with llama index

* works on langchain custom just need delegation to work

* cleanup for custom_agent class

* works with different argument expectations for agent_executor

* cleanup for hierarchical process, better agent_executor args handler and added to the crew agent doc page

* removed code examples for langchain + llama index, added to docs instead

* added key output if return is not a str and added some tests

* added hinting for CustomAgent class

* removed pass as it was not needed

* closer, just need to figure out agentTools

* running agents - llamaindex and langchain with base agent

* some cleanup on baseAgent

* minimum for agent to run for base class and ensure it works with hierarchical process

* cleanup for original agent to take on BaseAgent class

* Agent takes on langchainagent and cleanup across

* token handling updated so usage_metrics continues working

* installed llama-index, updated docs and added better name

* fixed some type errors

* base agent holds token_process

* hierarchical process uses proper tools and no longer relies on hasattr for token_processes

* removal of test_custom_agent_executions

* this fixes copying agents

* leveraging an executor class for triggering the llamaindex agent

* llama index now has ask_human

* executor mixins added

* added output converter base class

* type listed

* cleanup for output conversions and tokenprocess eliminated redundancy

* properly handling tokens

* simplified token calc handling

* original agent with base agent builder structure setup

* better docs

* no more llama-index dep

* cleaner docs

* test fixes

* poetry reverts and better docs

* base_agent_tools set for third party agents

* updated task and test fix
2024-06-27 14:56:08 -03:00
Eduardo Chiarotti
da9cc5f097 fix: fix training_data error (#820)
* fix: fix training_data error

* fix: fix lack of crew on agent

* fix: fix lack of crew on agent executor
2024-06-27 12:58:20 -03:00
Eduardo Chiarotti
c005ec3f78 fix: fix tests (#814) 2024-06-27 05:45:23 -03:00
Eduardo Chiarotti
6018fe5872 feat: add CodeInterpreterTool to run when enable code execution (#804)
* feat: add CodeInterpreterTool to run when enable code execution is allowed on agent

* feat: change to allow_code_execution

* feat: add readme for CodeInterpreterTool
2024-06-27 02:25:39 -03:00
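A sketch of opting in, using the `allow_code_execution` flag named in this PR (other field values are illustrative):

```python
from crewai import Agent

coder = Agent(
    role="Python developer",
    goal="Write and run small scripts",
    backstory="A careful engineer",
    # With this enabled, the CodeInterpreterTool is attached to the
    # agent so that generated code can actually be executed.
    allow_code_execution=True,
)
```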
Nuraly
bf0e70999e Update Agents.md (#816)
Added a space to ensure that header formatting is displayed correctly on the website
2024-06-27 02:23:18 -03:00
Eduardo Chiarotti
175d5b3dd6 feat: Add Train feature for Crews (#686)
* feat: add training logic to agent and crew

* feat: add training logic to agent executor

* feat: add input parameter  to cli command

* feat: add utilities for the training logic

* feat: polish code, logic and add private variables

* feat: add docstring and type hinting to executor

* feat: add constant file, add constant to code

* feat: fix name of training handler function

* feat: remove unused var

* feat: change file handler file name

* feat: Add training handler file, class and change on the code

* feat: fix name error from file

* fix: change import to adapt to logic

* feat: add training handler test

* feat: add tests for file and training_handler

* feat: add test for task evaluator function

* feat: change text to fit in-screen

* feat: add test for train function

* feat: add test for agent training_handler function

* feat: add test for agent._use_trained_data
2024-06-27 02:22:34 -03:00
Bruno Tanabe
9e61b8325b fix: Fix grammar error in documentation in PDF Search Tool (#819)
Correction of grammar error in the CrewAI documentation, on the page 'https://docs.crewai.com/tools/PDFSearchTool/' it says 'Optinal' instead of 'Optional'.
2024-06-27 00:41:22 -03:00
João Moura
c4d76cde8f updating docs 2024-06-22 19:49:50 -03:00
João Moura
9c44fd8c4a preparing new version 2024-06-22 17:47:35 -03:00
João Moura
f9f8c8f336 Preparing new version 2024-06-22 17:01:22 -03:00
João Moura
0fb3ccb9e9 preparing to cut new version 2024-06-20 12:58:50 -03:00
João Moura
0e5fd0be2c adding new kickoff docs 2024-06-20 02:46:13 -03:00
João Moura
1b45daee49 adding new docs to the menu 2024-06-20 02:24:02 -03:00
João Moura
9f384e3fc1 Updating Docs 2024-06-20 02:19:35 -03:00
Brandon Hancock (bhancock_ai)
377f919d42 Resolved Merge Conflicts for PR #712: Remove Hyphen in co-workers (#786)
* removed hyphen in co-workers

* Fix issue with AgentTool agent selection. The LLM included double quotes in the agent name which messed up the string comparison. Added additional types. Cleaned up error messaging.

* Remove duplicate import

* Improve explanation

* Revert poetry.lock changes

* Fix missing line in poetry.lock

---------

Co-authored-by: madmag77 <goncharov.artemv@gmail.com>
2024-06-18 16:57:56 -03:00
João Moura
e6445afac5 fixing bug with multiple crews in yaml format in the same project 2024-06-18 02:32:53 -03:00
Lorenze Jay
095015d397 Lorenzejay/crew kickoff union type (#767)
* added extra parameter for kickoff to return token usage count after result

* added output_token_usage to class and in full_output

* logger duplicated

* added more types

* added usage_metrics to full output instead

* added more to the description on full_output

* possible misspacing

* updated kickoff return types to be either string or dict applicable when full_output is set

* removed duplicates
2024-06-14 14:23:55 -03:00
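A sketch of the `full_output` behavior these commits describe; the exact keys of the returned dict are an assumption:

```python
from crewai import Agent, Crew, Task

agent = Agent(role="Researcher", goal="Answer questions", backstory="A curious analyst")
task = Task(description="Say hello", expected_output="A one-line greeting", agent=agent)

crew = Crew(agents=[agent], tasks=[task], full_output=True)
result = crew.kickoff()

# With full_output=True the return type is a dict rather than a plain
# string, and per these commits it now carries usage_metrics as well.
print(result["final_output"], result["usage_metrics"])
```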
Lorenze Jay
614183cbb1 fixes crewai docs assembling crew code block example code (#768) 2024-06-14 14:23:30 -03:00
Dan McKinley
0bc92a284d updates instructor to the latest version. (#760)
* updates instructor to the latest version. adds jsonref, which instructor seems to depend on.

* updates embedchain reference, necessary for python 3.12
2024-06-14 01:57:40 -03:00
Lorenze Jay
d3b6640b4a added usage_metrics to full output (#756)
* added extra parameter for kickoff to return token usage count after result

* added output_token_usage to class and in full_output

* logger duplicated

* added more types

* added usage_metrics to full output instead

* added more to the description on full_output

* possible misspacing
2024-06-12 14:18:52 -03:00
Guangqiang Lu
a1a48888c3 add datetime import for logger.py (#702) 2024-06-11 16:43:15 -03:00
Matt Thompson
bb622bf747 fix: correct default model (gpt-4o), correct token counts, and correct TaskOutput attributes (added agent) (#749)
* fix: 'from datetime import datetime for logging' to print the timestamp

* fix: correct default model (gpt-4o), correct token counts, and correct TaskOutput attributes (added agent)

* test: verify Task callback data is an instance of TaskOutput
2024-06-11 15:29:22 -03:00
Brandon Hancock (bhancock_ai)
946c56494e Feature/kickoff for each sync (#680)
* Sync with deep copy working now

* async working!!

* Clean up code for review

* Fix naming

---------

Co-authored-by: João Moura <joaomdmoura@gmail.com>
2024-06-11 12:51:39 -03:00
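A sketch of the `kickoff_for_each` usage this PR enables; the `{topic}` interpolation and field values are illustrative:

```python
from crewai import Agent, Crew, Task

agent = Agent(role="Writer", goal="Explain {topic}", backstory="A clear communicator")
task = Task(description="Explain {topic} in one line", expected_output="One line", agent=agent)
crew = Crew(agents=[agent], tasks=[task])

# Each input dict runs against its own deep copy of the crew, so
# per-run state such as usage metrics no longer clobbers other runs.
results = crew.kickoff_for_each(inputs=[{"topic": "AI"}, {"topic": "robotics"}])
```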
Taradepan R
2a0e21ca76 updated the import for cohere llm (#696) 2024-06-04 03:32:23 -03:00
Karthik Kalyanaraman
ea893432e8 Add Langtrace to the "How to" docs for CrewAI Agent Observability (#634)
* Add files via upload

* Create Langtrace-Observability.md

* Rename crewai-agentops-stats.png to crewai-langtrace-stats.png
2024-05-29 02:14:29 -03:00
theCyberTech - Rip&Tear
bf40956491 Added timestamp to logger (#646)
* Added timestamp to logger

Updated the logger.py file to include timestamps when logging output. For example:

 [2024-05-20 15:32:48][DEBUG]: == Working Agent: Researcher
 [2024-05-20 15:32:48][INFO]: == Starting Task: Research the topic
 [2024-05-20 15:33:22][DEBUG]: == [Researcher] Task output:

* Update tool_usage.py

* Revert "Update tool_usage.py"

This reverts commit 95d18d5b6f.

incorrect branch for this commit
2024-05-26 01:32:16 -03:00
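A minimal sketch of the timestamped format shown above (the real logger.py implementation may differ):

```python
from datetime import datetime

def log(level: str, message: str) -> None:
    # Prefix each entry with a timestamp, matching the example output above
    timestamp = datetime.now().strftime("%Y-%m-%d %H:%M:%S")
    print(f"[{timestamp}][{level.upper()}]: {message}")

log("debug", "== Working Agent: Researcher")
```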
Saif Mahmud
48948e1217 fixes #665 (#666) 2024-05-26 01:31:28 -03:00
theCyberTech - Rip&Tear
27412c89dd Update crew.py (#644)
Fixed Type on line 53
2024-05-24 00:06:27 -03:00
Mish Ushakov
56f1d24e9d Update BrowserbaseLoadTool.md (#647) 2024-05-24 00:05:52 -03:00
Mike Heavers
ab066a11a8 Update README.md (#652)
Reworked the example so that if you use a custom LLM by uncommenting the relevant lines, it doesn't throw code errors.
2024-05-24 00:05:32 -03:00
Anudeep Kolluri
e35e81e554 Update agent.py (#655)
Changed default model value from gpt-4 to gpt-4o.
Reasoning:
gpt-4 costs $30 per million tokens while gpt-4o costs $5.
This is more cost-friendly for the default option.
2024-05-24 00:04:53 -03:00
Paul Sanders
551e48da4f Clarify text in docstring (#662) 2024-05-24 00:04:01 -03:00
Paul Sanders
21ce0aa17e Enable search in docs (#663) 2024-05-24 00:03:31 -03:00
Olivier Roberdet
2d6f2830e1 Fix typo in instruction en.json (#676) 2024-05-24 00:03:07 -03:00
Eduardo Chiarotti
24ed8a2549 feat: Add crew train cli (#624)
* fix: fix crewai-tools cli command

* feat: add crewai train CLI command

* feat: add the tests

* fix: fix typing hinting issue on code

* fix: test.yml

* fix: fix test

* fix: removed fix since it didnt changed the test
2024-05-23 18:46:45 -03:00
João Moura
a336381849 adding agent to task output 2024-05-16 05:12:32 -03:00
Jason Schrader
208c3a780c Add version command to CLI (#348)
* feat: add version command to cli with tools flag

* test: check output of version and tools flag

* fix: add version tool info to cli outputs
2024-05-15 19:50:49 -03:00
João Moura
1e112fa50a fixing crew base 2024-05-14 17:40:38 -03:00
João Moura
38fc5510ed preparing new version 0.30.9 2024-05-14 11:32:05 -03:00
João Moura
1a1f4717aa cutting new version with no yaml parsing 2024-05-13 23:09:29 -03:00
João Moura
977c6114ba preparing new version 2024-05-13 22:32:24 -03:00
João Moura
27fddae286 New version, updating dependencies, fixing memory 2024-05-13 22:26:41 -03:00
João Moura
615ac7f297 preparing new version 2024-05-13 12:59:55 -03:00
João Moura
87d28e896d preparing new version 2024-05-13 02:35:46 -03:00
Saif Mahmud
23f10418d7 Fixes #603 (#604) 2024-05-13 02:34:52 -03:00
João Moura
27e7f48a44 Adding new tests 2024-05-13 02:34:33 -03:00
João Moura
7fd8850ddb Small RC Fixes (#608)
* mentioning ollama on the docs as embedder

* lowering barrier to match tool with similar name

* Fixing agent tools to support co_worker

* Adding new tests

* Fixing type

* updating tests

* fixing conflict
2024-05-13 02:29:04 -03:00
Ítalo Vieira
7a4d3dd496 fix typo exectue -> execute (#607) 2024-05-13 02:19:06 -03:00
João Moura
c1d7936689 preparing new version 2024-05-12 19:56:40 -03:00
Eduardo Chiarotti
1ec4da6947 feat: add mypy as type checker, update code and add comment to reference (#591)
* fix: fix test actually running

* fix: fix test to not send request to openai

* fix: fix linting to remove cli files

* fix: exclude only files that breaks black

* fix: Fix all Ruff checkings on the code and Fix Test with repeated name

* fix: Change linter name on yml file

* feat: update pre-commit

* feat: remove need for isort on the code

* feat: add mypy as type checker, update code and add comment to reference

* feat: remove black linter

* feat: remove poetry to run the command

* feat: change logic to test mypy

* feat: update tests yml to try to fix the tests gh action

* feat: try to add just mypy to run on gh action

* feat: fix yml file

* feat: add comment to avoid issue on gh action

* feat: decouple pytest from the necessity of poetry install

* feat: change tests.yml to test different approach

* feat: change to poetry run

* fix: parameter field on yml file

* fix: update parameters to be on the pyproject

* fix: update pyproject to remove import untyped errors
2024-05-10 16:37:52 -03:00
Steven Edwards
8430c2f9af Task needs an expected_output field in docs. (#568)
* Task needs an expected_output field in docs..

* Add missing comma.
2024-05-10 11:55:10 -03:00
Ayo Ayibiowu
7cc6bccdec feat: adds support to automatically fallback to the default encoding (#596)
* feat: adds support to automatically fallback to the default encoding

* fix: use the correct method
2024-05-10 11:54:45 -03:00
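A sketch of the fallback pattern, assuming the token counter is tiktoken-based (the default encoding name is an assumption):

```python
import tiktoken

def get_encoding(model: str) -> tiktoken.Encoding:
    try:
        return tiktoken.encoding_for_model(model)
    except KeyError:
        # Unknown model names raise KeyError; fall back to a default encoding
        return tiktoken.get_encoding("cl100k_base")
```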
Eduardo Chiarotti
aeba64feaf Feat: Add Ruff to improve linting/formatting (#588)
* fix: fix test actually running

* fix: fix test to not send request to openai

* fix: fix linting to remove cli files

* fix: exclude only files that breaks black

* fix: Fix all Ruff checkings on the code and Fix Test with repeated name

* fix: Change linter name on yml file

* feat: update pre-commit

* feat: remove need for isort on the code

* feat: remove black linter

* feat: update tests yml to try to fix the tests gh action
2024-05-10 11:53:53 -03:00
GabeKoga
04b4191de5 Fix/yaml formatting (#590)
* Bug/curly_braces_yaml

Added parser to help users on yaml syntax

* context error

Patch and later will prioritize this again to have context work with the yaml
2024-05-09 21:35:21 -03:00
Eduardo Chiarotti
1da7473f26 fix: fix test actually running (#587)
* fix: fix test actually running

* fix: fix test to not send request to openai

* fix: fix linting to remove cli files

* fix: exclude only files that breaks black
2024-05-09 21:33:48 -03:00
João Moura
95d13bd033 prepping new version 2024-05-09 09:12:57 -03:00
Eduardo Chiarotti
7eb4fcdaf4 fix: Add validation fix output_file issue when have '/' (#585)
* fix: Add validation fix output_file issue when have /

* fix: run black to format code

* fix: run black to format code
2024-05-09 08:11:00 -03:00
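A sketch of the kind of validation described, using a pydantic v2 field validator (the actual rule in #585 may differ; the model and method names are illustrative):

```python
from typing import Optional

from pydantic import BaseModel, field_validator

class TaskModel(BaseModel):
    output_file: Optional[str] = None

    @field_validator("output_file")
    @classmethod
    def strip_leading_slash(cls, value: Optional[str]) -> Optional[str]:
        # Treat a leading '/' as a relative path rather than an absolute one
        if value and value.startswith("/"):
            return value[1:]
        return value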
João Moura
809b4b227c Revert "Fix .md doc file 404 error on github (#564)" (#567)
This reverts commit 2bd30af72b.
2024-05-05 10:35:46 -03:00
Alex Fazio
ff51a2da9b corrected imprecision in the instantiation (#555) 2024-05-05 03:55:13 -03:00
João Moura
be83681665 preparing new RC version 2024-05-05 02:57:29 -03:00
Jackie Qi
2bd30af72b Fix .md doc file 404 error on github (#564)
* fix md file link not working on github

* miss one changed file
2024-05-05 02:53:20 -03:00
João Moura
d7b021061b updating .gitignore 2024-05-05 02:52:43 -03:00
João Moura
73647f1669 TYPO 2024-05-05 02:14:49 -03:00
João Moura
d341cb3d5c Fixing manager_agent_support 2024-05-05 00:51:18 -03:00
João Moura
30438410d6 cutting new RC 2024-05-03 00:55:32 -03:00
João Moura
b264ebabc0 adding memoization to crewai project annotations 2024-05-03 00:49:37 -03:00
tarekadam
2edc88e0a1 Update LLM-Connections.md (#553)
fixes command to lower case
2024-05-03 00:25:03 -03:00
João Moura
552dda46f8 updating manager llm pydantic error 2024-05-02 23:39:56 -03:00
João Moura
2340a127d6 cutting new rc 2024-05-02 23:22:02 -03:00
João Moura
ecde504a79 updating gitignore 2024-05-02 21:57:49 -03:00
João Moura
0b781065d2 Better json parsing for smaller models 2024-05-02 21:57:41 -03:00
João Moura
bcb57ce5f9 updating git ignore 2024-05-02 20:52:43 -03:00
David Solito
6392a8cdd0 Update crew.py (#551)
Add manager_agent description in crew docstring
2024-05-02 19:21:22 -03:00
João Moura
34e3dd24b4 new version 2024-05-02 05:00:29 -03:00
João Moura
c303d3730c cutting new version 2024-05-02 05:00:29 -03:00
João Moura
0a53ce17a2 small improvements for i18n 2024-05-02 05:00:29 -03:00
João Moura
7973651e05 new version 2024-05-02 05:00:29 -03:00
João Moura
672b150972 adding initial support for external prompt file 2024-05-02 05:00:29 -03:00
Jason Schrader
d8bcbd7d0a fix typos in generated readme (#345)
small things I noticed while upgrading our setup!
2024-05-02 03:32:18 -03:00
Dmitri Khokhlov
ff2f1477bb fix: TypeError: LongTermMemory.search() missing 1 required positional argument: 'latest_n' (#488)
Signed-off-by: Dmitri Khokhlov <dkhokhlov@gmail.com>
2024-05-02 03:28:36 -03:00
Ikko Eltociear Ashimine
1139073297 fix typo (#489)
* Update test_crew_function_calling_llm.yaml

ouput -> output

* Update tool_usage.py

ouput -> output
2024-05-02 03:27:40 -03:00
Sarvajith Adyanthaya
39deac2747 Changed "Inert" to "Innate" #470 (#490) 2024-05-02 03:27:09 -03:00
ftoppi
0a35868367 Update task.py: try to find json in task output using regex (#491)
* Update task.py: try to find json in task output using regex

Sometimes the model replies with valid JSON plus additional text; let's try to extract and validate it first. It's cheaper than calling the LLM for that.

* Update task.py

---------

Co-authored-by: João Moura <joaomdmoura@gmail.com>
2024-05-02 03:26:34 -03:00
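A sketch of the regex-first extraction the commit describes (the pattern and fallback behavior are assumptions):

```python
import json
import re
from typing import Optional

def extract_json(text: str) -> Optional[dict]:
    # Pull a JSON object out of surrounding prose before resorting to
    # an LLM-based conversion -- much cheaper than a model call.
    match = re.search(r"\{.*\}", text, re.DOTALL)
    if match:
        try:
            return json.loads(match.group())
        except json.JSONDecodeError:
            return None
    return None

print(extract_json('Sure! Here is the result: {"answer": 42}'))
```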
Mosta
608f869789 Update PGSearchTool.md (#492)
typo on code snippet
2024-05-02 03:22:18 -03:00
Samuel Kocúr
c30bd1a18e fix db_storage_path handling to use env variable or cwd (#507) 2024-05-02 03:16:54 -03:00
Mish Ushakov
20a81af95f Added Browserbase loader to the docs (#508)
* Create BrowserbaseLoadTool.md

* added browserbase loader
2024-05-02 03:15:59 -03:00
deadlious
531c70b476 Tool name recognition based on string distance (#521)
* adding variations of ask question and delegate work tools

* Revert "adding variations of ask question and delegate work tools"

This reverts commit 38d4589be8.

* adding distance calculation for tool names.

* proper formatting

* remove brackets
2024-05-02 03:15:34 -03:00
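A sketch of string-distance matching using the standard library (`difflib`); the PR may use a different metric or cutoff:

```python
import difflib
from typing import List, Optional

def match_tool_name(requested: str, available: List[str]) -> Optional[str]:
    # Tolerates small LLM typos such as stray quotes or a missing hyphen
    matches = difflib.get_close_matches(requested, available, n=1, cutoff=0.8)
    return matches[0] if matches else None

tools = ["Delegate work to co-worker", "Ask question to co-worker"]
print(match_tool_name("Delegate work to coworker", tools))
```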
Victor Carvalho Tavernari
dae0aedc99 Add conditional check for output file directory creation (#523)
This commit adds a conditional check to ensure that the output file directory exists before attempting to save the file, creating it when needed. This ensures that the code does not
fail in cases where the directory does not yet exist. The condition is added in the `_save_file` method of the `Task` class, ensuring
that the correct behavior is maintained for saving results to a file.
2024-05-02 03:13:51 -03:00
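A sketch of the conditional check described for `Task._save_file` (the helper name is illustrative):

```python
import os

def save_file(path: str, content: str) -> None:
    directory = os.path.dirname(path)
    # Only create the directory when the path actually contains one
    # and it does not exist yet, so saving never fails on a fresh tree.
    if directory and not os.path.exists(directory):
        os.makedirs(directory)
    with open(path, "w", encoding="utf-8") as handle:
        handle.write(content)
```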
Jim Collins
5fde03f4b0 Update README.md (#525)
Reworded "If you want to also install crewai-tools, which is a package with tools that can be used by the agents, but more dependencies, you can install it with, example below uses it:" for clarity
2024-05-02 03:12:03 -03:00
Alex Fazio
48f53b529b fix to import statement PGSearchTool.md (#548)
fix to the import statement in PGSearchTool documentation
2024-05-02 03:10:43 -03:00
João Moura
4d9b0c6138 small fixes and better guardrails for parsing small models' tool usage 2024-05-02 02:21:59 -03:00
João Moura
70cabec876 Adding support for system, prompt and answer templates 2024-05-02 02:21:59 -03:00
João Moura
60423376cf removing unnecessary test 2024-05-02 02:21:59 -03:00
João Moura
22c646294a unifying co-worker string 2024-05-02 02:21:59 -03:00
João Moura
10b317cf34 removing blank line 2024-05-02 02:21:59 -03:00
João Moura
03f0c44cac Fixing task callback 2024-05-02 02:21:59 -03:00
João Moura
caa0e5db8d Revert "AgentOps Implementation (#411)"
This reverts commit 3d5257592b.
2024-05-02 02:21:59 -03:00
Alex Fazio
b862e464f8 docs fix to xml tool import statement (#546)
* docs fix to xml tool import statement

* Update XMLSearchTool.md
2024-05-01 12:53:49 -03:00
Braelyn Boynton
3d5257592b AgentOps Implementation (#411)
* implements agentops with a langchain handler, agent tracking and tool call recording

* track tool usage

* end session after completion

* track tool usage time

* better tool and llm tracking

* code cleanup

* make agentops optional

* optional dependency usage

* remove telemetry code

* optional agentops

* agentops version bump

* remove org key

* true dependency

* add crew org key to agentops

* cleanup

* Update pyproject.toml

* Revert "true dependency"

This reverts commit e52e8e9568.

* Revert "cleanup"

This reverts commit 7f5635fb9e.

* optional parent key

* agentops 0.1.5

* Revert "Revert "cleanup""

This reverts commit cea33d9a5d.

* Revert "Revert "true dependency""

This reverts commit 4d1b460b

* cleanup

* Forcing version 0.1.5

* Update pyproject.toml

---------

Co-authored-by: João Moura <joaomdmoura@gmail.com>
2024-04-20 12:20:13 -03:00
Elijas Dapšauskas
ff76715cd2 Allow minor version patches to python-dotenv (#339)
Co-authored-by: João Moura <joaomdmoura@gmail.com>
2024-04-19 02:44:08 -03:00
Emmanuel Crown
cdb0a9c953 Fixed a typo in the main readme on the llm selection, options for an agent (#349) 2024-04-19 02:42:04 -03:00
Sajal Sharma
b0acae81b0 Update LLM-Connections.md (#353)
Co-authored-by: João Moura <joaomdmoura@gmail.com>
2024-04-19 02:41:36 -03:00
Kaushal Powar
afc616d263 Update GitHubSearchTool.md (#357)
GithubSearchTool was misspelled as GitHubSearchTool

Co-authored-by: João Moura <joaomdmoura@gmail.com>
2024-04-19 02:40:38 -03:00
Selim Erhan
e066b4dcb1 Update LLM-Connections.md (#359)
Created short documentation on how to use Llama2 locally with crewAI, thanks to the help of Ollama.

Co-authored-by: João Moura <joaomdmoura@gmail.com>
2024-04-19 02:39:33 -03:00
Christian24
9ea495902e Fix lockfile (#477) 2024-04-18 11:28:06 -03:00
João Moura
d786c367b4 Update README.md 2024-04-17 00:02:49 -03:00
João Moura
a391004432 Adding manager llm 2024-04-16 16:50:44 -03:00
João Moura
dd97a2674d adding new installing crew docs 2024-04-16 16:50:44 -03:00
Joseph Bastulli
437c4c91bc fix: swapped the task callback assignment (#443) 2024-04-16 15:54:42 -03:00
Jack Hayter
575f1f98b0 Prevent duplicate TokenCalcHandler callbacks on Agent (#475) 2024-04-16 15:54:02 -03:00
Alex Reibman
2ee6ab6332 Incorrect documentation link for AgentOps (#458)
* remove .md

* made language more clear

* update images and documentation for spelling

* update typos and links

* update repo placement

* update wording

* clarify

* update wording

* Added clearer features

---------

Co-authored-by: João Moura <joaomdmoura@gmail.com>
2024-04-16 08:24:30 -03:00
Jonathan Morales Vélez
3d862538d2 fix link to observability (#461) 2024-04-16 08:22:11 -03:00
Preston Badeer
4bd36e0460 Update LLM-Connections.md with up to date LM Studio instructions (#468)
Co-authored-by: Preston Badeer <467756+pbadeer@users.noreply.github.com>
2024-04-16 08:20:56 -03:00
Eivind Hyldmo
7fbf0f1988 Fixed typo in Tools.md (#472) 2024-04-16 08:20:25 -03:00
Lennart J. Kurzweg
066127013b Added optional manager_agent parameter (#474)
* Added optional manager_agent parameter

* Update crew.py

---------

Co-authored-by: Lennart J. Kurzweg (Nx2) <git@nx2.site>
Co-authored-by: João Moura <joaomdmoura@gmail.com>
2024-04-16 08:18:36 -03:00
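A sketch of the optional parameter in use (field values are illustrative):

```python
from crewai import Agent, Crew, Process, Task

worker = Agent(role="Researcher", goal="Gather facts", backstory="A diligent analyst")
manager = Agent(
    role="Project manager",
    goal="Coordinate the crew",
    backstory="A seasoned team lead",
    allow_delegation=True,
)
task = Task(description="Summarize a topic", expected_output="A short summary")

crew = Crew(
    agents=[worker],
    tasks=[task],
    process=Process.hierarchical,
    manager_agent=manager,  # replaces the default manager provided by CrewAI
)
```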
João Moura
f675208d72 cutting new version with updated cli template 2024-04-11 11:30:30 -03:00
João Moura
36aa69cf66 Preparing new version to use new version of crewai-tools 2024-04-10 11:52:12 -03:00
Cfomodz
66b77ffd08 Update README.md (#442) 2024-04-08 05:59:04 -03:00
João Moura
d2a3e4869a preparing new version 2024-04-08 02:08:57 -03:00
João Moura
a2dc7c7f31 adding missing import 2024-04-08 02:08:43 -03:00
João Moura
55ac69776a preparing new version 2024-04-08 01:39:22 -03:00
João Moura
7a7c9b0076 removing unnecessary certificate 2024-04-08 01:39:11 -03:00
João Moura
77d40230a8 preparing new version 2024-04-07 14:55:45 -03:00
João Moura
e4556040a8 fixing long term memory interpolation 2024-04-07 14:55:35 -03:00
João Moura
755b3934a4 prepping new version with new tools package 2024-04-07 14:19:50 -03:00
João Moura
2d77fb72a5 preparing new version 2024-04-07 04:18:05 -03:00
rajib
106b0df42e The suggestions were getting split at character level and not at sentence level (#436)
* fix the issue where the suggestions were split at character level

* Update contextual_memory.py

---------

Co-authored-by: rajib76 <rajib76@yahoo.com>
Co-authored-by: João Moura <joaomdmoura@gmail.com>
2024-04-07 02:57:23 -03:00
João Moura
c31ac4cf7e Updating tool dependency 2024-04-05 22:46:32 -03:00
João Moura
7b309df0c5 preparing new version 2024-04-05 19:52:13 -03:00
shivam singh
326f524e7c doc: Add documentation to Task model. (#363) 2024-04-05 19:49:36 -03:00
高璟琦
315ad20111 add solar example (#373)
Co-authored-by: João Moura <joaomdmoura@gmail.com>
2024-04-05 19:48:27 -03:00
Rueben Ramirez
b1daf17a61 whitespace consistency across docs (#407)
I saw a rendered whitespace inconsistency in the Tasks docs here:
ed31860071/docs/core-concepts/Tasks.md (L173)

So I set out to patch that up to make it easier to read.  I then noticed
there were a few whitespace inconsistencies:
- 2 spaces
- 4 whitespaces
- tabs

It appears that 4 spaces is the prevalent whitespace usage, so
I overwrote the other whitespace usages with that in this commit.

Co-authored-by: Rueben Ramirez <rramirez@ruebens-mbp.tail7c016.ts.net>
Co-authored-by: João Moura <joaomdmoura@gmail.com>
2024-04-05 19:47:09 -03:00
GabeKoga
9db99befb6 Feature: Log files (#423)
* log_file

feature: added a new parameter for crew that creates a txt file to log agent execution

* unit tests and documentation

unit tests check whether the file is created but not what is inside the file
2024-04-05 19:44:50 -03:00
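A sketch of the new parameter (the name `output_log_file` matches the Crew docs later in this diff; other values are illustrative):

```python
from crewai import Agent, Crew, Task

agent = Agent(role="Writer", goal="Draft text", backstory="A concise author")
task = Task(description="Write one sentence", expected_output="One sentence", agent=agent)

# Agent execution is logged to a text file alongside the console output
crew = Crew(agents=[agent], tasks=[task], output_log_file="crew_log.txt")
```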
GabeKoga
aebc443b62 purple (#428)
changed from yellow to purple for visibility
2024-04-05 18:25:59 -03:00
João Moura
2c0e5586e8 TYPO 2024-04-05 09:37:51 -03:00
João Moura
25f7557751 fixing memory docs 2024-04-05 08:59:54 -03:00
João Moura
59ebf7b762 adding specific memory docs 2024-04-05 08:59:20 -03:00
João Moura
1abe9db8e0 Increasing default max iter 2024-04-05 08:36:09 -03:00
João Moura
e4363f9ed8 updating tests 2024-04-05 08:33:31 -03:00
João Moura
e00b545548 adding max execution time 2024-04-05 08:31:25 -03:00
João Moura
1aa32c2036 preparing new version 2024-04-05 08:24:41 -03:00
João Moura
65824ef814 not overriding llm callbacks 2024-04-05 08:24:20 -03:00
João Moura
d17bc33bfb fix docs 2024-04-04 17:36:50 -03:00
151 changed files with 632410 additions and 14114 deletions

View File

@@ -1,10 +0,0 @@
name: Lint
on: [pull_request]
jobs:
lint:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- uses: psf/black@stable

.github/workflows/linter.yml vendored Normal file
View File

@@ -0,0 +1,16 @@
name: Lint
on: [pull_request]
jobs:
lint:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- name: Install Requirements
run: |
pip install ruff
- name: Run Ruff Linter
run: ruff check --exclude "templates","__init__.py"

View File

@@ -14,19 +14,18 @@ jobs:
steps:
- name: Checkout code
uses: actions/checkout@v2
uses: actions/checkout@v4
- name: Setup Python
uses: actions/setup-python@v4
with:
python-version: '3.10'
python-version: "3.10"
- name: Install Requirements
run: |
sudo apt-get update &&
pip install poetry &&
poetry lock &&
set -e
pip install poetry
poetry install
- name: Run tests
run: poetry run pytest
run: poetry run pytest tests

View File

@@ -1,4 +1,3 @@
name: Run Type Checks
on: [pull_request]
@@ -12,19 +11,16 @@ jobs:
steps:
- name: Checkout code
uses: actions/checkout@v2
uses: actions/checkout@v4
- name: Setup Python
uses: actions/setup-python@v4
with:
python-version: '3.10'
python-version: "3.10"
- name: Install Requirements
run: |
sudo apt-get update &&
pip install poetry &&
poetry lock &&
poetry install
pip install mypy
- name: Run type checks
run: poetry run pyright
run: mypy src

.gitignore vendored
View File

@@ -8,4 +8,10 @@ assets/*
test/
docs_crew/
chroma.sqlite3
old_en.json
old_en.json
db/
test.py
rc-tests/*
*.pkl
temp/*
.vscode/*

View File

@@ -1,21 +1,9 @@
repos:
- repo: https://github.com/psf/black-pre-commit-mirror
rev: 23.12.1
- repo: https://github.com/astral-sh/ruff-pre-commit
rev: v0.4.4
hooks:
- id: black
language_version: python3.11
files: \.(py)$
exclude: 'src/crewai/cli/templates/(crew|main)\.py'
- repo: https://github.com/pycqa/isort
rev: 5.13.2
hooks:
- id: isort
name: isort (python)
args: ["--profile", "black", "--filter-files"]
- repo: https://github.com/PyCQA/autoflake
rev: v2.2.1
hooks:
- id: autoflake
args: ['--in-place', '--remove-all-unused-imports', '--remove-unused-variables', '--ignore-init-module-imports']
- id: ruff
args: ["--fix"]
exclude: "templates"
- id: ruff-format
exclude: "templates"

View File

@@ -30,7 +30,6 @@
- [Connecting Your Crew to a Model](#connecting-your-crew-to-a-model)
- [How CrewAI Compares](#how-crewai-compares)
- [Contribution](#contribution)
- [Hire CrewAI](#hire-crewai)
- [Telemetry](#telemetry)
- [License](#license)
@@ -49,7 +48,7 @@ To get started with CrewAI, follow these simple steps:
pip install crewai
```
If you want to also install crewai-tools, which is a package with tools that can be used by the agents, but more dependencies, you can install it with, example below uses it:
If you want to install the 'crewai' package along with its optional features that include additional tools for agents, you can do so by using the following command: pip install 'crewai[tools]'. This command installs the basic package and also adds extra components which require more dependencies to function.
```shell
pip install 'crewai[tools]'
@@ -71,6 +70,17 @@ os.environ["SERPER_API_KEY"] = "Your Key" # serper.dev API key
# os.environ["OPENAI_MODEL_NAME"] ='openhermes' # Adjust based on available model
# os.environ["OPENAI_API_KEY"] ='sk-111111111111111111111111111111111111111111111111'
# You can pass an optional llm attribute specifying what model you wanna use.
# It can be a local model through Ollama / LM Studio or a remote
# model like OpenAI, Mistral, Antrophic or others (https://docs.crewai.com/how-to/LLM-Connections/)
#
# import os
# os.environ['OPENAI_MODEL_NAME'] = 'gpt-3.5-turbo'
#
# OR
#
# from langchain_openai import ChatOpenAI
search_tool = SerperDevTool()
# Define your agents with roles and goals
@@ -82,18 +92,9 @@ researcher = Agent(
You have a knack for dissecting complex data and presenting actionable insights.""",
verbose=True,
allow_delegation=False,
# You can pass an optional llm attribute specifying what model you wanna use.
# llm=ChatOpenAI(model_name="gpt-3.5", temperature=0.7),
tools=[search_tool]
# You can pass an optional llm attribute specifying what mode you wanna use.
# It can be a local model through Ollama / LM Studio or a remote
# model like OpenAI, Mistral, Antrophic or others (https://docs.crewai.com/how-to/LLM-Connections/)
#
# import os
# os.environ['OPENAI_MODEL_NAME'] = 'gpt-3.5-turbo'
#
# OR
#
# from langchain_openai import ChatOpenAI
# llm=ChatOpenAI(model_name="gpt-3.5", temperature=0.7)
)
writer = Agent(
role='Tech Content Strategist',
@@ -126,6 +127,7 @@ crew = Crew(
agents=[researcher, writer],
tasks=[task1, task2],
verbose=2, # You can set it to 1 or 2 to different logging levels
process = Process.sequential
)
# Get your crew to work!
@@ -194,6 +196,7 @@ Please refer to the [Connect crewAI to LLMs](https://docs.crewai.com/how-to/LLM-
**CrewAI's Advantage**: CrewAI is built with production in mind. It offers the flexibility of Autogen's conversational agents and the structured process approach of ChatDev, but without the rigidity. CrewAI's processes are designed to be dynamic and adaptable, fitting seamlessly into both development and production workflows.
## Contribution
CrewAI is open-source and we welcome contributions. If you're looking to contribute, please:
@@ -232,7 +235,7 @@ poetry run pytest
### Running static type checks
```bash
poetry run pyright
poetry run mypy
```
### Packaging
@@ -247,11 +250,6 @@ poetry build
pip install dist/*.tar.gz
```
## Hire CrewAI
We're a company developing crewAI and crewAI Enterprise, we for a limited time are offer consulting with selected customers, to get them early access to our enterprise solution
If you are interested on having access to it and hiring weekly hours with our team, feel free to email us at [joao@crewai.com](mailto:joao@crewai.com).
## Telemetry
CrewAI uses anonymous telemetry to collect usage data with the main purpose of helping us improve the library by focusing our efforts on the most used features, integrations and tools.
@@ -259,6 +257,7 @@ CrewAI uses anonymous telemetry to collect usage data with the main purpose of h
There is NO data being collected on the prompts, task descriptions, agents' backstories or goals, tool usage, API calls, responses, any data that is being processed by the agents, or any secrets and env vars.
Data collected includes:
- Version of crewAI
- So we can understand how many users are using the latest version
- Version of Python

View File

@@ -1463,11 +1463,11 @@
"locked": false,
"fontSize": 20,
"fontFamily": 3,
"text": "Agents have the inert ability of\nreach out to another to delegate\nwork or ask questions.",
"text": "Agents have the innate ability of\nreach out to another to delegate\nwork or ask questions.",
"textAlign": "right",
"verticalAlign": "top",
"containerId": null,
"originalText": "Agents have the inert ability of\nreach out to another to delegate\nwork or ask questions.",
"originalText": "Agents have the innate ability of\nreach out to another to delegate\nwork or ask questions.",
"lineHeight": 1.2,
"baseline": 68
},
@@ -1734,4 +1734,4 @@
"viewBackgroundColor": "#ffffff"
},
"files": {}
}
}

Binary file not shown. Before: 272 KiB | After: 288 KiB

Binary file not shown. Before: 190 KiB | After: 419 KiB

Binary file not shown. Before: 176 KiB | After: 263 KiB

Binary file not shown. After: 1.0 MiB

Binary file not shown. After: 810 KiB
View File

@@ -1,4 +1,3 @@
```markdown
---
title: crewAI Agents
description: What are crewAI Agents and how to use them.
@@ -17,20 +16,24 @@ description: What are crewAI Agents and how to use them.
## Agent Attributes
| Attribute | Description |
| :------------------------- | :--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| **Role** | Defines the agent's function within the crew. It determines the kind of tasks the agent is best suited for. |
| **Goal** | The individual objective that the agent aims to achieve. It guides the agent's decision-making process. |
| **Backstory** | Provides context to the agent's role and goal, enriching the interaction and collaboration dynamics. |
| **LLM** *(optional)* | Represents the language model that will run the agent. It dynamically fetches the model name from the `OPENAI_MODEL_NAME` environment variable, defaulting to "gpt-4" if not specified. |
| **Tools** *(optional)* | Set of capabilities or functions that the agent can use to perform tasks. Expected to be instances of custom classes compatible with the agent's execution environment. Tools are initialized with a default value of an empty list. |
| **Function Calling LLM** *(optional)* | Specifies the language model that will handle the tool calling for this agent, overriding the crew function calling LLM if passed. Default is `None`. |
| **Max Iter** *(optional)* | The maximum number of iterations the agent can perform before being forced to give its best answer. Default is `15`. |
| **Max RPM** *(optional)* | The maximum number of requests per minute the agent can perform to avoid rate limits. It's optional and can be left unspecified, with a default value of `None`. |
| **Verbose** *(optional)* | Setting this to `True` configures the internal logger to provide detailed execution logs, aiding in debugging and monitoring. Default is `False`. |
| **Allow Delegation** *(optional)* | Agents can delegate tasks or questions to one another, ensuring that each task is handled by the most suitable agent. Default is `True`. |
| **Step Callback** *(optional)* | A function that is called after each step of the agent. This can be used to log the agent's actions or to perform other operations. It will overwrite the crew `step_callback`. |
| **Cache** *(optional)* | Indicates if the agent should use a cache for tool usage. Default is `True`. |
| Attribute | Parameter | Description |
| :------------------------- | :---- | :--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| **Role** | `role` | Defines the agent's function within the crew. It determines the kind of tasks the agent is best suited for. |
| **Goal** | `goal` | The individual objective that the agent aims to achieve. It guides the agent's decision-making process. |
| **Backstory** | `backstory` | Provides context to the agent's role and goal, enriching the interaction and collaboration dynamics. |
| **LLM** *(optional)* | `llm` | Represents the language model that will run the agent. It dynamically fetches the model name from the `OPENAI_MODEL_NAME` environment variable, defaulting to "gpt-4" if not specified. |
| **Tools** *(optional)* | `tools` | Set of capabilities or functions that the agent can use to perform tasks. Expected to be instances of custom classes compatible with the agent's execution environment. Tools are initialized with a default value of an empty list. |
| **Function Calling LLM** *(optional)* | `function_calling_llm` | Specifies the language model that will handle the tool calling for this agent, overriding the crew function calling LLM if passed. Default is `None`. |
| **Max Iter** *(optional)* | `max_iter` | Max Iter is the maximum number of iterations the agent can perform before being forced to give its best answer. Default is `25`. |
| **Max RPM** *(optional)* | `max_rpm` | Max RPM is the maximum number of requests per minute the agent can perform to avoid rate limits. It's optional and can be left unspecified, with a default value of `None`. |
| **Max Execution Time** *(optional)* | `max_execution_time` | Max Execution Time is the maximum execution time for an agent to execute a task. It's optional and can be left unspecified, with a default value of `None`, meaning no max execution time. |
| **Verbose** *(optional)* | `verbose` | Setting this to `True` configures the internal logger to provide detailed execution logs, aiding in debugging and monitoring. Default is `False`. |
| **Allow Delegation** *(optional)* | `allow_delegation` | Agents can delegate tasks or questions to one another, ensuring that each task is handled by the most suitable agent. Default is `True`. |
| **Step Callback** *(optional)* | `step_callback` | A function that is called after each step of the agent. This can be used to log the agent's actions or to perform other operations. It will overwrite the crew `step_callback`. |
| **Cache** *(optional)* | `cache` | Indicates if the agent should use a cache for tool usage. Default is `True`. |
| **System Template** *(optional)* | `system_template` | Specifies the system format for the agent. Default is `None`. |
| **Prompt Template** *(optional)* | `prompt_template` | Specifies the prompt format for the agent. Default is `None`. |
| **Response Template** *(optional)* | `response_template` | Specifies the response format for the agent. Default is `None`. |
## Creating an Agent
@@ -56,13 +59,91 @@ agent = Agent(
function_calling_llm=my_llm, # Optional
max_iter=15, # Optional
max_rpm=None, # Optional
max_execution_time=None, # Optional
verbose=True, # Optional
allow_delegation=True, # Optional
step_callback=my_intermediate_step_callback, # Optional
cache=True # Optional
cache=True, # Optional
system_template=my_system_template, # Optional
prompt_template=my_prompt_template, # Optional
response_template=my_response_template, # Optional
config=my_config, # Optional
crew=my_crew, # Optional
tools_handler=my_tools_handler, # Optional
cache_handler=my_cache_handler, # Optional
callbacks=[callback1, callback2], # Optional
agent_executor=my_agent_executor # Optional
)
```
## Setting prompt templates
Prompt templates are used to format the prompt for the agent. You can use them to update the system, regular and response templates for the agent. Here's an example of how to set prompt templates:
```python
agent = Agent(
role="{topic} specialist",
goal="Figure {goal} out",
backstory="I am the master of {role}",
system_template="""<|start_header_id|>system<|end_header_id|>
{{ .System }}<|eot_id|>""",
prompt_template="""<|start_header_id|>user<|end_header_id|>
{{ .Prompt }}<|eot_id|>""",
response_template="""<|start_header_id|>assistant<|end_header_id|>
{{ .Response }}<|eot_id|>""",
)
```
## Bring your Third Party Agents
!!! note "Extend your Third Party Agents like LlamaIndex, Langchain, Autogen or fully custom agents using crewai's BaseAgent class."
BaseAgent includes attributes and methods required to integrate with your crews to run and delegate tasks to other agents within your own crew.
CrewAI is a universal multi-agent framework that allows all agents to work together to automate tasks and solve problems.
```py
from crewai import Agent, Task, Crew
from custom_agent import CustomAgent # You need to build and extend your own agent logic with the CrewAI BaseAgent class then import it here.
from langchain.agents import load_tools
langchain_tools = load_tools(["google-serper"], llm=llm)
agent1 = CustomAgent(
role="backstory agent",
goal="who is {input}?",
backstory="agent backstory",
verbose=True,
)
task1 = Task(
expected_output="a short biography of {input}",
description="a short biography of {input}",
agent=agent1,
)
agent2 = Agent(
role="bio agent",
goal="summarize the short bio for {input} and if needed do more research",
backstory="agent backstory",
verbose=True,
)
task2 = Task(
description="a tldr summary of the short biography",
expected_output="5 bullet point summary of the biography",
agent=agent2,
context=[task1],
)
my_crew = Crew(agents=[agent1, agent2], tasks=[task1, task2])
crew = my_crew.kickoff(inputs={"input": "Mark Twain"})
```
## Conclusion
Agents are the building blocks of the CrewAI framework. By understanding how to define and interact with agents, you can create sophisticated AI systems that leverage the power of collaborative intelligence.
```

View File

@@ -15,16 +15,19 @@ description: Exploring the dynamics of agent collaboration within the CrewAI fra
The `Crew` class has been enriched with several attributes to support advanced functionalities:
- **Language Model Management (`manager_llm`, `function_calling_llm`)**: Manages language models for executing tasks and tools, facilitating sophisticated agent-tool interactions. Note that while `manager_llm` is mandatory for hierarchical processes to ensure proper execution flow, `function_calling_llm` is optional, with a default value provided for streamlined tool interaction.
- **Custom Manager Agent (`manager_agent`)**: Allows specifying a custom agent as the manager instead of using the default manager provided by CrewAI.
- **Process Flow (`process`)**: Defines the execution logic (e.g., sequential, hierarchical) to streamline task distribution and execution.
- **Verbose Logging (`verbose`)**: Offers detailed logging capabilities for monitoring and debugging purposes. It supports both integer and boolean types to indicate the verbosity level. For example, setting `verbose` to 1 might enable basic logging, whereas setting it to True enables more detailed logs.
- **Rate Limiting (`max_rpm`)**: Ensures efficient utilization of resources by limiting requests per minute. Guidelines for setting `max_rpm` should consider the complexity of tasks and the expected load on resources.
- **Internationalization Support (`language`, `language_file`)**: Facilitates operation in multiple languages, enhancing global usability. Supported languages and the process for utilizing the `language_file` attribute for customization should be clearly documented.
- **Internationalization / Customization Support (`language`, `prompt_file`)**: Facilitates full customization of the inner prompts, enhancing global usability. Supported languages and the process for utilizing the `prompt_file` attribute for customization should be clearly documented. [Example of file](https://github.com/joaomdmoura/crewAI/blob/main/src/crewai/translations/en.json)
- **Execution and Output Handling (`full_output`)**: Distinguishes between full and final outputs for nuanced control over task results. Examples showcasing the difference in outputs can aid in understanding the practical implications of this attribute.
- **Callback and Telemetry (`step_callback`, `task_callback`)**: Integrates callbacks for step-wise and task-level execution monitoring, alongside telemetry for performance analytics. The purpose and usage of `task_callback` alongside `step_callback` for granular monitoring should be clearly explained.
- **Crew Sharing (`share_crew`)**: Enables sharing of crew information with CrewAI for continuous improvement and training models. The privacy implications and benefits of this feature, including how it contributes to model improvement, should be outlined.
- **Usage Metrics (`usage_metrics`)**: Stores all metrics for the language model (LLM) usage during all tasks' execution, providing insights into operational efficiency and areas for improvement. Detailed information on accessing and interpreting these metrics for performance analysis should be provided.
- **Memory Usage (`memory`)**: Indicates whether the crew should use memory to store memories of its execution, enhancing task execution and agent learning.
- **Embedder Configuration (`embedder`)**: Specifies the configuration for the embedder to be used by the crew for understanding and generating language. This attribute supports customization of the language model provider.
- **Cache Management (`cache`)**: Determines whether the crew should use a cache to store the results of tool executions, optimizing performance.
- **Output Logging (`output_log_file`)**: Specifies the file path for logging the output of the crew execution.
## Delegation: Dividing to Conquer
Delegation enhances functionality by allowing agents to intelligently assign tasks or seek help, thereby amplifying the crew's overall capability.
@@ -36,4 +39,4 @@ Setting up a crew involves defining the roles and capabilities of each agent. Cr
Consider a crew with a researcher agent tasked with data gathering and a writer agent responsible for compiling reports. The integration of advanced language model management and process flow attributes allows for more sophisticated interactions, such as the writer delegating complex research tasks to the researcher or querying specific information, thereby facilitating a seamless workflow.
## Conclusion
The integration of advanced attributes and functionalities into the CrewAI framework significantly enriches the agent collaboration ecosystem. These enhancements not only simplify interactions but also offer unprecedented flexibility and control, paving the way for sophisticated AI-driven solutions capable of tackling complex tasks through intelligent collaboration and delegation.
The integration of advanced attributes and functionalities into the CrewAI framework significantly enriches the agent collaboration ecosystem. These enhancements not only simplify interactions but also offer unprecedented flexibility and control, paving the way for sophisticated AI-driven solutions capable of tackling complex tasks through intelligent collaboration and delegation.

View File

@@ -8,25 +8,29 @@ A crew in crewAI represents a collaborative group of agents working together to
## Crew Attributes
| Attribute | Description |
| :-------------------------- | :----------------------------------------------------------- |
| **Tasks** | A list of tasks assigned to the crew. |
| **Agents** | A list of agents that are part of the crew. |
| **Process** *(optional)* | The process flow (e.g., sequential, hierarchical) the crew follows. |
| **Verbose** *(optional)* | The verbosity level for logging during execution. |
| **Manager LLM** *(optional)*| The language model used by the manager agent in a hierarchical process. **Required when using a hierarchical process.** |
| **Function Calling LLM** *(optional)* | If passed, the crew will use this LLM to do function calling for tools for all agents in the crew. Each agent can have its own LLM, which overrides the crew's LLM for function calling. |
| **Config** *(optional)* | Optional configuration settings for the crew, in `Json` or `Dict[str, Any]` format. |
| **Max RPM** *(optional)* | Maximum requests per minute the crew adheres to during execution. |
| **Language** *(optional)* | Language used for the crew, defaults to English. |
| **Language File** *(optional)* | Path to the language file to be used for the crew. |
| **Memory** *(optional)* | Utilized for storing execution memories (short-term, long-term, entity memory). |
| **Cache** *(optional)* | Specifies whether to use a cache for storing the results of tools' execution. |
| **Embedder** *(optional)* | Configuration for the embedder to be used by the crew. mostly used by memory for now |
| **Full Output** *(optional)*| Whether the crew should return the full output with all tasks outputs or just the final output. |
| **Step Callback** *(optional)* | A function that is called after each step of every agent. This can be used to log the agent's actions or to perform other operations; it won't override the agent-specific `step_callback`. |
| **Task Callback** *(optional)* | A function that is called after the completion of each task. Useful for monitoring or additional operations post-task execution. |
| **Share Crew** *(optional)* | Whether you want to share the complete crew information and execution with the crewAI team to make the library better, and allow us to train models. |
| Attribute | Parameters | Description |
| :-------------------------- | :------------------ | :------------------------------------------------------------------------------------------------------- |
| **Tasks** | `tasks` | A list of tasks assigned to the crew. |
| **Agents** | `agents` | A list of agents that are part of the crew. |
| **Process** *(optional)* | `process` | The process flow (e.g., sequential, hierarchical) the crew follows. |
| **Verbose** *(optional)* | `verbose` | The verbosity level for logging during execution. |
| **Manager LLM** *(optional)*| `manager_llm` | The language model used by the manager agent in a hierarchical process. **Required when using a hierarchical process.** |
| **Function Calling LLM** *(optional)* | `function_calling_llm` | If passed, the crew will use this LLM to do function calling for tools for all agents in the crew. Each agent can have its own LLM, which overrides the crew's LLM for function calling. |
| **Config** *(optional)* | `config` | Optional configuration settings for the crew, in `Json` or `Dict[str, Any]` format. |
| **Max RPM** *(optional)* | `max_rpm` | Maximum requests per minute the crew adheres to during execution. |
| **Language** *(optional)* | `language` | Language used for the crew, defaults to English. |
| **Language File** *(optional)* | `language_file` | Path to the language file to be used for the crew. |
| **Memory** *(optional)* | `memory` | Utilized for storing execution memories (short-term, long-term, entity memory). |
| **Cache** *(optional)* | `cache` | Specifies whether to use a cache for storing the results of tools' execution. |
| **Embedder** *(optional)* | `embedder` | Configuration for the embedder to be used by the crew. Mostly used by memory for now. |
| **Full Output** *(optional)*| `full_output` | Whether the crew should return the full output with all tasks' outputs or just the final output. |
| **Step Callback** *(optional)* | `step_callback` | A function that is called after each step of every agent. This can be used to log the agent's actions or to perform other operations; it won't override the agent-specific `step_callback`. |
| **Task Callback** *(optional)* | `task_callback` | A function that is called after the completion of each task. Useful for monitoring or additional operations post-task execution. |
| **Share Crew** *(optional)* | `share_crew` | Whether you want to share the complete crew information and execution with the crewAI team to make the library better, and allow us to train models. |
| **Output Log File** *(optional)* | `output_log_file` | Whether you want a file with the complete crew output and execution. Set it to `True` to write a `logs.txt` file in the current folder, or pass a string with the full path and name of the file. |
| **Manager Agent** *(optional)* | `manager_agent` | Sets a custom agent that will be used as the manager. |
| **Manager Callbacks** *(optional)* | `manager_callbacks` | A list of callback handlers to be executed by the manager agent when a hierarchical process is used. |
| **Prompt File** *(optional)* | `prompt_file` | Path to the prompt JSON file to be used for the crew. |
!!! note "Crew Max RPM"
The `max_rpm` attribute sets the maximum number of requests per minute the crew can perform to avoid rate limits and will override individual agents' `max_rpm` settings if you set it.
@@ -45,23 +49,34 @@ from langchain_community.tools import DuckDuckGoSearchRun
researcher = Agent(
    role='Senior Research Analyst',
    goal='Discover innovative AI technologies',
    backstory="""You're a senior research analyst at a large company.
    You're responsible for analyzing data and providing insights
    to the business.
    You're currently working on a project to analyze the
    trends and innovations in the space of artificial intelligence.""",
    tools=[DuckDuckGoSearchRun()]
)

writer = Agent(
    role='Content Writer',
    goal='Write engaging articles on AI discoveries',
    backstory="""You're a senior writer at a large company.
    You're responsible for creating content for the business.
    You're currently working on a project to write about trends
    and innovations in the space of AI for your next meeting.""",
    verbose=True
)
# Create tasks for the agents
research_task = Task(
    description='Identify breakthrough AI technologies',
    agent=researcher
    agent=researcher,
    expected_output='A bullet list summary of the top 5 most important AI news'
)
write_article_task = Task(
    description='Draft an article on the latest AI technologies',
    agent=writer
    agent=writer,
    expected_output='3 paragraph blog post on the latest AI technologies'
)
# Assemble the crew with a sequential process
@@ -96,7 +111,7 @@ print(crew.usage_metrics)
## Crew Execution Process
- **Sequential Process**: Tasks are executed one after another, allowing for a linear flow of work.
- **Hierarchical Process**: A manager agent coordinates the crew, delegating tasks and validating outcomes before proceeding. **Note**: A `manager_llm` is required for this process and it's essential for validating the process flow.
- **Hierarchical Process**: A manager agent coordinates the crew, delegating tasks and validating outcomes before proceeding. **Note**: A `manager_llm` or `manager_agent` is required for this process and it's essential for validating the process flow.
### Kicking Off a Crew
@@ -107,3 +122,37 @@ Once your crew is assembled, initiate the workflow with the `kickoff()` method.
result = my_crew.kickoff()
print(result)
```
### Different Ways to Kick Off a Crew
Once your crew is assembled, initiate the workflow with the appropriate kickoff method. CrewAI provides several methods for better control over the kickoff process: `kickoff()`, `kickoff_for_each()`, `kickoff_async()`, and `kickoff_for_each_async()`.
- `kickoff()`: Starts the execution process according to the defined process flow.
- `kickoff_for_each()`: Executes the crew once for each item in a list of inputs, returning one result per input.
- `kickoff_async()`: Initiates the workflow asynchronously.
- `kickoff_for_each_async()`: Executes the crew for each item in a list of inputs, processing them asynchronously.
```python
import asyncio

# Start the crew's task execution
result = my_crew.kickoff()
print(result)

# Example of using kickoff_for_each
inputs_array = [{'topic': 'AI in healthcare'}, {'topic': 'AI in finance'}]
results = my_crew.kickoff_for_each(inputs=inputs_array)
for result in results:
    print(result)

# Example of using kickoff_async (a coroutine, so it must be awaited or run)
inputs = {'topic': 'AI in healthcare'}
async_result = asyncio.run(my_crew.kickoff_async(inputs=inputs))
print(async_result)

# Example of using kickoff_for_each_async (also a coroutine)
inputs_array = [{'topic': 'AI in healthcare'}, {'topic': 'AI in finance'}]
async_results = asyncio.run(my_crew.kickoff_for_each_async(inputs=inputs_array))
for async_result in async_results:
    print(async_result)
```
These methods provide flexibility in how you manage and execute tasks within your crew, allowing for both synchronous and asynchronous workflows tailored to your needs.
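If you are already running inside an event loop, a minimal sketch (assuming the same `my_crew`) awaits the async variants directly instead of calling `asyncio.run`:
```python
import asyncio

async def run_crews():
    # Await the async variants directly from within a coroutine
    single = await my_crew.kickoff_async(inputs={'topic': 'AI in healthcare'})
    print(single)

    batch = await my_crew.kickoff_for_each_async(
        inputs=[{'topic': 'AI in healthcare'}, {'topic': 'AI in finance'}]
    )
    for result in batch:
        print(result)

asyncio.run(run_crews())
```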

View File

@@ -1,4 +1,3 @@
```markdown
---
title: crewAI Memory Systems
description: Leveraging memory systems in the crewAI framework to enhance agent capabilities.
@@ -6,16 +5,16 @@ description: Leveraging memory systems in the crewAI framework to enhance agent
## Introduction to Memory Systems in crewAI
!!! note "Enhancing Agent Intelligence"
The crewAI framework introduces a sophisticated memory system designed to significantly enhance the capabilities of AI agents. This system comprises short-term memory, long-term memory, entity memory, and newly identified contextual memory, each serving a unique purpose in aiding agents to remember, reason, and learn from past interactions.
The crewAI framework introduces a sophisticated memory system designed to significantly enhance the capabilities of AI agents. This system comprises short-term memory, long-term memory, entity memory, and contextual memory, each serving a unique purpose in aiding agents to remember, reason, and learn from past interactions.
## Memory System Components
| Component | Description |
| :------------------- | :----------------------------------------------------------- |
| **Short-Term Memory**| Temporarily stores recent interactions and outcomes, enabling agents to recall and utilize information relevant to their current context. |
| **Long-Term Memory** | Preserves valuable insights and learnings from past executions, allowing agents to build and refine their knowledge over time. |
| **Short-Term Memory**| Temporarily stores recent interactions and outcomes, enabling agents to recall and utilize information relevant to their current context during the current execution. |
| **Long-Term Memory** | Preserves valuable insights and learnings from past executions, allowing agents to build and refine their knowledge over time, so agents can remember what they did right and wrong across multiple executions. |
| **Entity Memory** | Captures and organizes information about entities (people, places, concepts) encountered during tasks, facilitating deeper understanding and relationship mapping. |
| **Contextual Memory**| Maintains the context of interactions, aiding in the coherence and relevance of agent responses over a sequence of tasks or a conversation. |
| **Contextual Memory**| Maintains the context of interactions by combining `ShortTermMemory`, `LongTermMemory`, and `EntityMemory`, aiding in the coherence and relevance of agent responses over a sequence of tasks or a conversation. |
## How Memory Systems Empower Agents
@@ -28,8 +27,7 @@ description: Leveraging memory systems in the crewAI framework to enhance agent
## Implementing Memory in Your Crew
When configuring a crew, you can enable and customize each memory component to suit the crew's objectives and the nature of tasks it will perform.
By default, the memory system is disabled, and you can ensure it is active by setting `memory=True` in the crew configuration. The memory will use OpenAI Embeddings by default, but you can change it by setting `embedder` to a different model.
### Example: Configuring Memory for a Crew
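A minimal sketch, assuming agents and tasks are defined elsewhere; the embedder value follows the common provider/config shape, and the model name is illustrative:
```python
from crewai import Crew, Process

my_crew = Crew(
    agents=[researcher, writer],
    tasks=[research_task, write_task],
    process=Process.sequential,
    memory=True,  # activates short-term, long-term, and entity memory
    embedder={
        "provider": "openai",
        "config": {"model": "text-embedding-3-small"},
    },
)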

View File

@@ -10,14 +10,14 @@ description: Detailed guide on workflow management through processes in CrewAI,
## Process Implementations
- **Sequential**: Executes tasks sequentially, ensuring tasks are completed in an orderly progression.
- **Hierarchical**: Organizes tasks in a managerial hierarchy, where tasks are delegated and executed based on a structured chain of command. A manager language model (`manager_llm`) must be specified in the crew to enable the hierarchical process, facilitating the creation and management of tasks by the manager.
- **Hierarchical**: Organizes tasks in a managerial hierarchy, where tasks are delegated and executed based on a structured chain of command. A manager language model (`manager_llm`) or a custom manager agent (`manager_agent`) must be specified in the crew to enable the hierarchical process, facilitating the creation and management of tasks by the manager.
- **Consensual Process (Planned)**: Aiming for collaborative decision-making among agents on task execution, this process type introduces a democratic approach to task management within CrewAI. It is planned for future development and is not currently implemented in the codebase.
## The Role of Processes in Teamwork
Processes enable individual agents to operate as a cohesive unit, streamlining their efforts to achieve common objectives with efficiency and coherence.
## Assigning Processes to a Crew
To assign a process to a crew, specify the process type upon crew creation to set the execution strategy. For a hierarchical process, ensure to define `manager_llm` for the manager agent.
To assign a process to a crew, specify the process type upon crew creation to set the execution strategy. For a hierarchical process, be sure to define `manager_llm` or `manager_agent` for the manager agent.
```python
from crewai import Crew
@@ -32,15 +32,17 @@ crew = Crew(
)
# Example: Creating a crew with a hierarchical process
# Ensure to provide a manager_llm
# Ensure to provide a manager_llm or manager_agent
crew = Crew(
    agents=my_agents,
    tasks=my_tasks,
    process=Process.hierarchical,
    manager_llm=ChatOpenAI(model="gpt-4")
    # or
    # manager_agent=my_manager_agent
)
```
**Note:** Ensure `my_agents` and `my_tasks` are defined prior to creating a `Crew` object, and for the hierarchical process, `manager_llm` is also required.
**Note:** Ensure `my_agents` and `my_tasks` are defined prior to creating a `Crew` object, and for the hierarchical process, either `manager_llm` or `manager_agent` is also required.
## Sequential Process
This method mirrors dynamic team workflows, progressing through tasks in a thoughtful and systematic manner. Task execution follows the predefined order in the task list, with the output of one task serving as context for the next.
@@ -48,7 +50,7 @@ This method mirrors dynamic team workflows, progressing through tasks in a thoug
To customize task context, utilize the `context` parameter in the `Task` class to specify outputs that should be used as context for subsequent tasks.
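A minimal sketch, assuming `research_task` and an `analyst` agent are defined elsewhere:
```python
from crewai import Task

analysis_task = Task(
    description='Analyze the research findings and highlight key trends',
    expected_output='A short analysis memo',
    agent=analyst,
    context=[research_task],  # research_task's output is passed in as context
)
```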
## Hierarchical Process
Emulates a corporate hierarchy, CrewAI automatically creates a manager for you, requiring the specification of a manager language model (`manager_llm`) for the manager agent. This agent oversees task execution, including planning, delegation, and validation. Tasks are not pre-assigned; the manager allocates tasks to agents based on their capabilities, reviews outputs, and assesses task completion.
Emulating a corporate hierarchy, CrewAI allows you to specify a custom manager agent or will automatically create one for you, which requires specifying a manager language model (`manager_llm`). This agent oversees task execution, including planning, delegation, and validation. Tasks are not pre-assigned; the manager allocates tasks to agents based on their capabilities, reviews outputs, and assesses task completion.
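A minimal sketch of supplying your own manager, assuming `my_agents` and `my_tasks` are defined as above:
```python
from crewai import Agent, Crew, Process

manager = Agent(
    role='Project Manager',
    goal='Coordinate the crew and validate task outcomes',
    backstory='A seasoned manager who excels at delegation and review.',
    allow_delegation=True,
)

crew = Crew(
    agents=my_agents,
    tasks=my_tasks,
    process=Process.hierarchical,
    manager_agent=manager,  # used instead of manager_llm
)
```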
## Process Class: Detailed Overview
The `Process` class is implemented as an enumeration (`Enum`), ensuring type safety and restricting process values to the defined types (`sequential`, `hierarchical`). The consensual process is planned for future inclusion, emphasizing our commitment to continuous development and innovation.

View File

@@ -11,20 +11,20 @@ Tasks within crewAI can be collaborative, requiring multiple agents to work toge
## Task Attributes
| Attribute | Description |
| :----------------------| :-------------------------------------------------------------------------------------------- |
| **Description** | A clear, concise statement of what the task entails. |
| **Agent** | The agent responsible for the task, assigned either directly or by the crew's process. |
| **Expected Output** | A detailed description of what the task's completion looks like. |
| **Tools** *(optional)* | The functions or capabilities the agent can utilize to perform the task. |
| **Async Execution** *(optional)* | If set, the task executes asynchronously, allowing progression without waiting for completion.|
| **Context** *(optional)* | Specifies tasks whose outputs are used as context for this task. |
| **Config** *(optional)* | Additional configuration details for the agent executing the task, allowing further customization. |
| **Output JSON** *(optional)* | Outputs a JSON object, requiring an OpenAI client. Only one output format can be set. |
| **Output Pydantic** *(optional)* | Outputs a Pydantic model object, requiring an OpenAI client. Only one output format can be set. |
| **Output File** *(optional)* | Saves the task output to a file. If used with `Output JSON` or `Output Pydantic`, specifies how the output is saved. |
| **Callback** *(optional)* | A Python callable that is executed with the task's output upon completion. |
| **Human Input** *(optional)* | Indicates if the task requires human feedback at the end, useful for tasks needing human oversight. |
| Attribute | Parameters | Description |
| :----------------------| :------------------- | :-------------------------------------------------------------------------------------------- |
| **Description** | `description` | A clear, concise statement of what the task entails. |
| **Agent** | `agent` | The agent responsible for the task, assigned either directly or by the crew's process. |
| **Expected Output** | `expected_output` | A detailed description of what the task's completion looks like. |
| **Tools** *(optional)* | `tools` | The functions or capabilities the agent can utilize to perform the task. |
| **Async Execution** *(optional)* | `async_execution` | If set, the task executes asynchronously, allowing progression without waiting for completion.|
| **Context** *(optional)* | `context` | Specifies tasks whose outputs are used as context for this task. |
| **Config** *(optional)* | `config` | Additional configuration details for the agent executing the task, allowing further customization. |
| **Output JSON** *(optional)* | `output_json` | Outputs a JSON object, requiring an OpenAI client. Only one output format can be set. |
| **Output Pydantic** *(optional)* | `output_pydantic` | Outputs a Pydantic model object, requiring an OpenAI client. Only one output format can be set. |
| **Output File** *(optional)* | `output_file` | Saves the task output to a file. If used with `Output JSON` or `Output Pydantic`, specifies how the output is saved. |
| **Callback** *(optional)* | `callback` | A Python callable that is executed with the task's output upon completion. |
| **Human Input** *(optional)* | `human_input` | Indicates if the task requires human feedback at the end, useful for tasks needing human oversight. |
## Creating a Task
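At its simplest, creating a task means defining its description, expected output, and agent. A minimal sketch using the attributes above, assuming `research_agent` is defined elsewhere (the file name is illustrative):
```python
from crewai import Task

task = Task(
    description='Find and summarize the latest AI news',
    expected_output='A bullet list summary of the top 5 most important AI news',
    agent=research_agent,
    async_execution=False,
    human_input=True,          # request human feedback when the task ends
    output_file='ai_news.md',  # save the task output to a file
)
```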
@@ -88,7 +88,7 @@ This demonstrates how tasks with specific tools can override an agent's default
## Referring to Other Tasks
In crewAI, the output of one task is automatically relayed into the next one, but you can specifically define what tasks' output, including multiple should be used as context for another task.
In crewAI, the output of one task is automatically relayed into the next one, but you can explicitly define which tasks' outputs, including multiple ones, should be used as context for another task.
This is useful when you have a task that depends on the output of another task that is not performed immediately after it. This is done through the `context` attribute of the task:
@@ -96,26 +96,26 @@ This is useful when you have a task that depends on the output of another task t
# ...
research_ai_task = Task(
    description='Find and summarize the latest AI news',
    expected_output='A bullet list summary of the top 5 most important AI news',
    async_execution=True,
    agent=research_agent,
    tools=[search_tool]
)

research_ops_task = Task(
    description='Find and summarize the latest AI Ops news',
    expected_output='A bullet list summary of the top 5 most important AI Ops news',
    async_execution=True,
    agent=research_agent,
    tools=[search_tool]
)

write_blog_task = Task(
    description="Write a full blog post about the importance of AI and its latest news",
    expected_output='Full blog post that is 4 paragraphs long',
    agent=writer_agent,
    context=[research_ai_task, research_ops_task]
)
#...
@@ -131,24 +131,24 @@ You can then use the `context` attribute to define in a future task that it shou
#...
list_ideas = Task(
    description="List of 5 interesting ideas to explore for an article about AI.",
    expected_output="Bullet point list of 5 ideas for an article.",
    agent=researcher,
    async_execution=True  # Will be executed asynchronously
)

list_important_history = Task(
    description="Research the history of AI and give me the 5 most important events.",
    expected_output="Bullet point list of 5 important events.",
    agent=researcher,
    async_execution=True  # Will be executed asynchronously
)

write_article = Task(
    description="Write an article about AI, its history, and interesting ideas.",
    expected_output="A 4 paragraph article about AI.",
    agent=writer,
    context=[list_ideas, list_important_history]  # Will wait for the output of the two tasks to be completed
)
#...
@@ -162,20 +162,20 @@ The callback function is executed after the task is completed, allowing for acti
# ...
def callback_function(output: TaskOutput):
    # Do something after the task is completed
    # Example: Send an email to the manager
    print(f"""
        Task completed!
        Task: {output.description}
        Output: {output.raw_output}
    """)

research_task = Task(
    description='Find and summarize the latest AI news',
    expected_output='A bullet list summary of the top 5 most important AI news',
    agent=research_agent,
    tools=[search_tool],
    callback=callback_function
)
#...
@@ -188,27 +188,27 @@ Once a crew finishes running, you can access the output of a specific task by us
```python
# ...
task1 = Task(
    description='Find and summarize the latest AI news',
    expected_output='A bullet list summary of the top 5 most important AI news',
    agent=research_agent,
    tools=[search_tool]
)

#...

crew = Crew(
    agents=[research_agent],
    tasks=[task1, task2, task3],
    verbose=2
)

result = crew.kickoff()

# Returns a TaskOutput object with the description and results of the task
print(f"""
    Task completed!
    Task: {task1.output.description}
    Output: {task1.output.raw_output}
""")
```
@@ -225,6 +225,25 @@ While creating and executing tasks, certain validation mechanisms are in place t
These validations help in maintaining the consistency and reliability of task executions within the crewAI framework.
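For instance, a minimal sketch of how one of these validations can surface, assuming `expected_output` is mandatory as described in the attributes table above:
```python
from crewai import Task

# Sketch: constructing a task without a required field raises a validation error
try:
    bad_task = Task(description='Find and summarize the latest AI news')
except Exception as e:
    print(f"Task validation failed: {e}")
```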
## Creating Directories when Saving Files
You can now specify if a task should create directories when saving its output to a file. This is particularly useful for organizing outputs and ensuring that file paths are correctly structured.
```python
# ...
save_output_task = Task(
description='Save the summarized AI news to a file',
expected_output='File saved successfully',
agent=research_agent,
tools=[file_save_tool],
output_file='outputs/ai_news_summary.txt',
create_directory=True
)
#...
```
## Conclusion
Tasks are the driving force behind the actions of agents in crewAI. By properly defining tasks and their outcomes, you set the stage for your AI agents to work effectively, either independently or as a collaborative unit. Equipping tasks with appropriate tools, understanding the execution process, and following robust validation practices are crucial for maximizing CrewAI's potential, ensuring agents are effectively prepared for their assignments and that tasks are executed as intended.

View File

@@ -94,7 +94,7 @@ crew.kickoff()
## Available crewAI Tools
- **Error Handling**: All tools are built with error handling capabilities, allowing agents to gracefully manage exceptions and continue their tasks.
- **Caching Mechanism**: All tools support caching, enabling agents to efficiently reuse previously obtained results, reducing the load on external resources and speeding up the execution time, you can also define finner control over the caching mechanism, using `cache_function` attribute on the tool.
- **Caching Mechanism**: All tools support caching, enabling agents to efficiently reuse previously obtained results, reducing the load on external resources and speeding up the execution time. You can also define finer control over the caching mechanism using the `cache_function` attribute on the tool.
Here is a list of the available tools and their descriptions:
@@ -107,7 +107,7 @@ Here is a list of the available tools and their descriptions:
| **DirectoryReadTool** | Facilitates reading and processing of directory structures and their contents. |
| **FileReadTool** | Enables reading and extracting data from files, supporting various file formats. |
| **GithubSearchTool** | A RAG tool for searching within GitHub repositories, useful for code and documentation search.|
| **SeperDevTool** | A specialized tool for development purposes, with specific functionalities under development. |
| **SerperDevTool** | A specialized search tool that queries the serper.dev API, useful for retrieving up-to-date web search results. |
| **TXTSearchTool** | A RAG tool focused on searching within text (.txt) files, suitable for unstructured data. |
| **JSONSearchTool** | A RAG tool designed for searching within JSON files, catering to structured data handling. |
| **MDXSearchTool** | A RAG tool tailored for searching within Markdown (MDX) files, useful for documentation. |
@@ -120,13 +120,14 @@ Here is a list of the available tools and their descriptions:
| **XMLSearchTool** | A RAG tool designed for searching within XML files, suitable for structured data formats. |
| **YoutubeChannelSearchTool**| A RAG tool for searching within YouTube channels, useful for video content analysis. |
| **YoutubeVideoSearchTool** | A RAG tool aimed at searching within YouTube videos, ideal for video data extraction. |
| **BrowserbaseTool** | A tool for interacting with and extracting data from web browsers. |
| **ExaSearchTool** | A tool designed for performing exhaustive searches across various data sources. |
## Creating your own Tools
!!! example "Custom Tool Creation"
Developers can craft custom tools tailored for their agents' needs or utilize pre-built options:
To create your own crewAI tools you will need to install our extra tools package:
```bash
@@ -141,7 +142,7 @@ from crewai_tools import BaseTool
class MyCustomTool(BaseTool):
    name: str = "Name of my tool"
    description: str = "Clear description for what this tool is useful for, you agent will need this information to use it."
    description: str = "Clear description for what this tool is useful for; your agent will need this information to use it."

    def _run(self, argument: str) -> str:
        # Implementation goes here
@@ -154,7 +155,7 @@ class MyCustomTool(BaseTool):
from crewai_tools import tool
@tool("Name of my tool")
def my_tool(question: str) -> str:
"""Clear description for what this tool is useful for, you agent will need this information to use it."""
"""Clear description for what this tool is useful for, your agent will need this information to use it."""
# Function logic here
return "Result from your custom tool"
```
@@ -180,45 +181,14 @@ multiplication_tool.cache_function = cache_func
writer1 = Agent(
role="Writer",
goal="You write lesssons of math for kids.",
backstory="You're an expert in writting and you love to teach kids but you know nothing of math.",
tools=[multiplcation_tool],
goal="You write lessons of math for kids.",
backstory="You're an expert in writing and you love to teach kids but you know nothing of math.",
tools=[multiplication_tool],
allow_delegation=False,
)
#...
```
## Using LangChain Tools
!!! info "LangChain Integration"
CrewAI seamlessly integrates with LangChains comprehensive toolkit for search-based queries and more, here are the available built-in tools that are offered by Langchain [LangChain Toolkit](https://python.langchain.com/docs/integrations/tools/)
:
```python
from crewai import Agent
from langchain.agents import Tool
from langchain.utilities import GoogleSerperAPIWrapper
# Setup API keys
os.environ["SERPER_API_KEY"] = "Your Key"
search = GoogleSerperAPIWrapper()
# Create and assign the search tool to an agent
serper_tool = Tool(
name="Intermediate Answer",
func=search.run,
description="Useful for search-based queries",
)
agent = Agent(
role='Research Analyst',
goal='Provide up-to-date market analysis',
backstory='An expert analyst with a keen eye for market trends.',
tools=[serper_tool]
)
# rest of the code ...
```
## Conclusion
Tools are pivotal in extending the capabilities of CrewAI agents, enabling them to undertake a broad spectrum of tasks and collaborate effectively. When building solutions with CrewAI, leverage both custom and existing tools to empower your agents and enhance the AI ecosystem. Consider utilizing error handling, caching mechanisms, and the flexibility of tool arguments to optimize your agents' performance and capabilities.

View File

@@ -0,0 +1,53 @@
---
title: crewAI Train
description: Learn how to train your crewAI agents by giving them feedback early on and get consistent results.
---
## Introduction
The training feature in CrewAI allows you to train your AI agents using the command-line interface (CLI). By running the command `crewai train -n <n_iterations>`, you can specify the number of iterations for the training process.
During training, CrewAI combines optimization techniques with human feedback to improve the performance of your agents. This helps the agents improve their understanding, decision-making, and problem-solving abilities.
### Training Your Crew Using the CLI
To use the training feature, follow these steps:
1. Open your terminal or command prompt.
2. Navigate to the directory where your CrewAI project is located.
3. Run the following command:
```shell
crewai train -n <n_iterations>
```
### Training Your Crew Programmatically
To train your crew programmatically, use the following steps:
1. Define the number of iterations for training.
2. Specify the input parameters for the training process.
3. Execute the training command within a try-except block to handle potential errors.
```python
n_iterations = 2
inputs = {"topic": "CrewAI Training"}
try:
    YourCrewName_Crew().crew().train(n_iterations=n_iterations, inputs=inputs)
except Exception as e:
    raise Exception(f"An error occurred while training the crew: {e}")
```
!!! note "Replace `<n_iterations>` with the desired number of training iterations. This determines how many times the agents will go through the training process."
### Key Points to Note:
- **Positive Integer Requirement:** Ensure that the number of iterations (`n_iterations`) is a positive integer. The code will raise a `ValueError` if this condition is not met.
- **Error Handling:** The code handles subprocess errors and unexpected exceptions, providing error messages to the user.
It is important to note that the training process may take some time, depending on the complexity of your agents, and will also require your feedback on each iteration.
Once the training is complete, your agents will be equipped with enhanced capabilities and knowledge, ready to tackle complex tasks and provide more consistent and valuable insights.
Remember to regularly update and retrain your agents to ensure they stay up-to-date with the latest information and advancements in the field.
Happy training with CrewAI!

View File

@@ -0,0 +1,38 @@
---
title: Using LangChain Tools
description: Learn how to integrate LangChain tools with CrewAI agents to enhance search-based queries and more.
---
## Using LangChain Tools
!!! info "LangChain Integration"
CrewAI seamlessly integrates with LangChain's comprehensive toolkit for search-based queries and more. Here are the available built-in tools offered by LangChain: [LangChain Toolkit](https://python.langchain.com/docs/integrations/tools/)
```python
import os

from crewai import Agent
from langchain.agents import Tool
from langchain.utilities import GoogleSerperAPIWrapper

# Setup API keys
os.environ["SERPER_API_KEY"] = "Your Key"

search = GoogleSerperAPIWrapper()

# Create and assign the search tool to an agent
serper_tool = Tool(
    name="Intermediate Answer",
    func=search.run,
    description="Useful for search-based queries",
)

agent = Agent(
    role='Research Analyst',
    goal='Provide up-to-date market analysis',
    backstory='An expert analyst with a keen eye for market trends.',
    tools=[serper_tool]
)

# rest of the code ...
```
## Conclusion
Tools are pivotal in extending the capabilities of CrewAI agents, enabling them to undertake a broad spectrum of tasks and collaborate effectively. When building solutions with CrewAI, leverage both custom and existing tools to empower your agents and enhance the AI ecosystem. Consider utilizing error handling, caching mechanisms, and the flexibility of tool arguments to optimize your agents' performance and capabilities.

View File

@@ -0,0 +1,57 @@
---
title: Using LlamaIndex Tools
description: Learn how to integrate LlamaIndex tools with CrewAI agents to enhance search-based queries and more.
---
## Using LlamaIndex Tools
!!! info "LlamaIndex Integration"
CrewAI seamlessly integrates with LlamaIndex's comprehensive toolkit for RAG (Retrieval-Augmented Generation) and agentic pipelines, enabling advanced search-based queries and more. Here are the available built-in tools offered by LlamaIndex.
```python
from crewai import Agent
from crewai_tools import LlamaIndexTool
# Example 1: Initialize from FunctionTool
from llama_index.core.tools import FunctionTool

your_python_function = lambda x: x  # placeholder; replace with your own function logic
og_tool = FunctionTool.from_defaults(your_python_function, name="<name>", description='<description>')
tool = LlamaIndexTool.from_tool(og_tool)

# Example 2: Initialize from LlamaHub Tools
from llama_index.tools.wolfram_alpha import WolframAlphaToolSpec
wolfram_spec = WolframAlphaToolSpec(app_id="<app_id>")
wolfram_tools = wolfram_spec.to_tool_list()
tools = [LlamaIndexTool.from_tool(t) for t in wolfram_tools]

# Example 3: Initialize Tool from a LlamaIndex Query Engine
# (assumes `index` is an existing LlamaIndex index, e.g. a VectorStoreIndex)
query_engine = index.as_query_engine()
query_tool = LlamaIndexTool.from_query_engine(
    query_engine,
    name="Uber 2019 10K Query Tool",
    description="Use this tool to lookup the 2019 Uber 10K Annual Report"
)

# Create and assign the tools to an agent
agent = Agent(
    role='Research Analyst',
    goal='Provide up-to-date market analysis',
    backstory='An expert analyst with a keen eye for market trends.',
    tools=[tool, *tools, query_tool]
)
# rest of the code ...
```
## Steps to Get Started
To effectively use the LlamaIndexTool, follow these steps:
1. **Package Installation**: Confirm that the `crewai[tools]` package is installed in your Python environment.
```shell
pip install 'crewai[tools]'
```
2. **Install and Use LlamaIndex**: Follow the LlamaIndex documentation [LlamaIndex Documentation](https://docs.llamaindex.ai/) to set up a RAG/agent pipeline; a minimal sketch follows below.
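For instance, a minimal sketch (assuming LlamaIndex >= 0.10 and a local `data/` directory of documents) that builds the `index` backing the query-engine example above:
```python
from llama_index.core import SimpleDirectoryReader, VectorStoreIndex

# Load documents from a local folder and build an in-memory vector index
documents = SimpleDirectoryReader("data").load_data()
index = VectorStoreIndex.from_documents(documents)

query_engine = index.as_query_engine()
```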

View File

@@ -1,72 +1,86 @@
---
title: (AgentOps) Observability using AgentOps
title: Agent Monitoring with AgentOps
description: Understanding and logging your agent performance with AgentOps.
---
# Intro
Observability is a key aspect of developing and deploying conversational AI agents. It allows developers to understand how the agent is performing, how users are interacting with the agent, and how the agent is responding to user inputs.
AgentOps is a product, idependent of crewAI that provides a comprehensive observability solution for agents.
This notebook will provide an overview of AgentOps and how to use it with crewAI.
Observability is a key aspect of developing and deploying conversational AI agents. It allows developers to understand how their agents are performing, how their agents are interacting with users, and how their agents use external tools and APIs. AgentOps is a product independent of CrewAI that provides a comprehensive observability solution for agents.
## AgentOps
[AgentOps](https://agentops.ai) provides session replays, metrics, and monitoring for agents.
[AgentOps Repo](https://github.com/AgentOps-AI/agentops)
[AgentOps](https://agentops.ai/?=crew) provides session replays, metrics, and monitoring for agents.
At a high level, AgentOps gives you the ability to monitor cost, token usage, latency, agent failures, session-wide statistics, and more. For more info, check out the [AgentOps Repo](https://github.com/AgentOps-AI/agentops).
### Overview
AgentOps provides monotoring for agents in development and production. It provides a dashboard for monitoring agent performance, session replays, and custom reporting.
AgentOps provides monitoring for agents in development and production. It provides a dashboard for tracking agent performance, session replays, and custom reporting.
![agentops-overview.png](..%2Fassets%2Fagentops-overview.png)
Additionally, AgentOps provides session drilldowns for viewing Crew agent interactions, LLM calls, and tool usage in real-time. This feature is useful for debugging and understanding how agents interact with users as well as other agents.
Additionally, AgentOps provides session drilldowns that allows users to view the agent's interactions with users in real-time. This feature is useful for debugging and understanding how the agent interacts with users.
![agentops-session.png](..%2Fassets%2Fagentops-session.png)
![agentops-replay.png](..%2Fassets%2Fagentops-replay.png)
![Overview of a select series of agent session runs](..%2Fassets%2Fagentops-overview.png)
![Overview of session drilldowns for examining agent runs](..%2Fassets%2Fagentops-session.png)
![Viewing a step-by-step agent replay execution graph](..%2Fassets%2Fagentops-replay.png)
### Features
- LLM Cost management and tracking
- Replay Analytics
- Recursive thought detection
- Custom Reporting
- Analytics Dashboard
- Public Model Testing
- Custom Tests
- Time Travel Debugging
- Compliance and Security
- **LLM Cost Management and Tracking**: Track spend with foundation model providers.
- **Replay Analytics**: Watch step-by-step agent execution graphs.
- **Recursive Thought Detection**: Identify when agents fall into infinite loops.
- **Custom Reporting**: Create custom analytics on agent performance.
- **Analytics Dashboard**: Monitor high-level statistics about agents in development and production.
- **Public Model Testing**: Test your agents against benchmarks and leaderboards.
- **Custom Tests**: Run your agents against domain-specific tests.
- **Time Travel Debugging**: Restart your sessions from checkpoints.
- **Compliance and Security**: Create audit logs and detect potential threats such as profanity and PII leaks.
- **Prompt Injection Detection**: Identify potential code injection and secret leaks.
### Using AgentOps
Create a user API key here: app.agentops.ai/account
1. **Create an API Key:**
Create a user API key here: [Create API Key](https://app.agentops.ai/account)
2. **Configure Your Environment:**
Add your API key to your environment variables:
```
AGENTOPS_API_KEY=<YOUR_AGENTOPS_API_KEY>
```
```bash
AGENTOPS_API_KEY=<YOUR_AGENTOPS_API_KEY>
```
Install AgentOps with:
```
pip install crewai[agentops]
```
or
```
pip install agentops
```
3. **Install AgentOps:**
Install AgentOps with:
```bash
pip install crewai[agentops]
```
or
```bash
pip install agentops
```
Before using `Crew` in your script, include these lines:
```python
import agentops

agentops.init()
```
This will initiate an AgentOps session as well as automatically track Crew agents. For further info on how to outfit more complex agentic systems, check out the [AgentOps documentation](https://docs.agentops.ai) or join the [Discord](https://discord.gg/j4f3KbeH).
### Crew + AgentOps Examples
- [Job Posting](https://github.com/joaomdmoura/crewAI-examples/tree/main/job-posting)
- [Markdown Validator](https://github.com/joaomdmoura/crewAI-examples/tree/main/markdown_validator)
- [Instagram Post](https://github.com/joaomdmoura/crewAI-examples/tree/main/instagram_post)
### Further Information
### Futher Information
To implement more features and better observability, please see the [AgentOps Repo](https://github.com/AgentOps-AI/agentops)
To get started, create an [AgentOps account](https://agentops.ai/?=crew).
For feature requests or bug reports, please reach out to the AgentOps team on the [AgentOps Repo](https://github.com/AgentOps-AI/agentops).
#### Extra links
<a href="https://twitter.com/agentopsai/">🐦 Twitter</a>
<span>&nbsp;&nbsp;•&nbsp;&nbsp;</span>
<a href="https://discord.gg/JHPt4C7r">📢 Discord</a>
<span>&nbsp;&nbsp;•&nbsp;&nbsp;</span>
<a href="https://app.agentops.ai/?=crew">🖇️ AgentOps Dashboard</a>
<span>&nbsp;&nbsp;•&nbsp;&nbsp;</span>
<a href="https://docs.agentops.ai/introduction">📙 Documentation</a>

View File

@@ -0,0 +1,76 @@
---
title: Coding Agents
description: Learn how to enable your crewAI Agents to write and execute code, and explore advanced features for enhanced functionality.
---
## Introduction
crewAI Agents now have the powerful ability to write and execute code, significantly enhancing their problem-solving capabilities. This feature is particularly useful for tasks that require computational or programmatic solutions.
## Enabling Code Execution
To enable code execution for an agent, set the `allow_code_execution` parameter to `True` when creating the agent. Here's an example:
```python
from crewai import Agent
coding_agent = Agent(
    role="Senior Python Developer",
    goal="Craft well-designed and thought-out code",
    backstory="You are a senior Python developer with extensive experience in software architecture and best practices.",
    allow_code_execution=True
)
```
## Important Considerations
1. **Model Selection**: It is strongly recommended to use more capable models like Claude 3.5 Sonnet and GPT-4 when enabling code execution. These models have a better understanding of programming concepts and are more likely to generate correct and efficient code.
2. **Error Handling**: The code execution feature includes error handling. If executed code raises an exception, the agent will receive the error message and can attempt to correct the code or provide alternative solutions.
3. **Dependencies**: To use the code execution feature, you need to install the `crewai_tools` package. If not installed, the agent will log an info message: "Coding tools not available. Install crewai_tools."
## Code Execution Process
When an agent with code execution enabled encounters a task requiring programming:
1. The agent analyzes the task and determines that code execution is necessary.
2. It formulates the Python code needed to solve the problem.
3. The code is sent to the internal code execution tool (`CodeInterpreterTool`).
4. The tool executes the code in a controlled environment and returns the result.
5. The agent interprets the result and incorporates it into its response or uses it for further problem-solving.
## Example Usage
Here's a detailed example of creating an agent with code execution capabilities and using it in a task:
```python
from crewai import Agent, Task, Crew
# Create an agent with code execution enabled
coding_agent = Agent(
    role="Python Data Analyst",
    goal="Analyze data and provide insights using Python",
    backstory="You are an experienced data analyst with strong Python skills.",
    allow_code_execution=True
)

# Create a task that requires code execution
data_analysis_task = Task(
    description="Analyze the given dataset and calculate the average age of participants.",
    expected_output="The calculated average age of the participants.",
    agent=coding_agent
)

# Create a crew and add the task
analysis_crew = Crew(
    agents=[coding_agent],
    tasks=[data_analysis_task]
)

# Execute the crew
result = analysis_crew.kickoff()
print(result)
```
In this example, the `coding_agent` can write and execute Python code to perform data analysis tasks.

View File

@@ -42,6 +42,7 @@ def my_simple_tool(question: str) -> str:
# Tool logic here
return "Tool output"
```
### Defining a Cache Function for the Tool
To optimize tool performance with caching, define custom caching strategies using the `cache_function` attribute.
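A minimal sketch, mirroring the multiplication example from the tools documentation: the cache function receives the tool's arguments and result and returns whether to cache that result.
```python
from crewai_tools import tool

@tool("Multiplication Tool")
def multiplication_tool(first_number: int, second_number: int) -> int:
    """Useful when you need to multiply two numbers together."""
    return first_number * second_number

def cache_func(args, result):
    # Cache only even results; anything else is recomputed on each call
    return result % 2 == 0

multiplication_tool.cache_function = cache_func
```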

View File

@@ -1,11 +1,10 @@
---
title: Assembling and Activating Your CrewAI Team
description: A comprehensive guide to creating a dynamic CrewAI team for your projects, with updated functionalities including verbose mode, memory capabilities, asynchronous execution, output customization, language model configuration, and more.
description: A comprehensive guide to creating a dynamic CrewAI team for your projects, with updated functionalities including verbose mode, memory capabilities, asynchronous execution, output customization, language model configuration, code execution, integration with third-party agents, and improved task management.
---
## Introduction
Embark on your CrewAI journey by setting up your environment and initiating your AI crew with the latest features. This guide ensures a smooth start, incorporating all recent updates for an enhanced experience.
Embark on your CrewAI journey by setting up your environment and initiating your AI crew with the latest features. This guide ensures a smooth start, incorporating all recent updates for an enhanced experience, including code execution capabilities, integration with third-party agents, and advanced task management.
## Step 0: Installation
Install CrewAI and any necessary packages for your project. CrewAI is compatible with Python >=3.10,<=3.13.
@@ -16,108 +15,69 @@ pip install 'crewai[tools]'
```
## Step 1: Assemble Your Agents
Define your agents with distinct roles, backstories, and enhanced capabilities like verbose mode and memory usage. These elements add depth and guide their task execution and interaction within the crew.
Define your agents with distinct roles, backstories, and enhanced capabilities. The Agent class now supports a wide range of attributes for fine-tuned control over agent behavior and interactions, including code execution and integration with third-party agents.
```python
import os
os.environ["SERPER_API_KEY"] = "Your Key" # serper.dev API key
os.environ["OPENAI_API_KEY"] = "Your Key"
from langchain.llms import OpenAI
from crewai import Agent
from crewai_tools import SerperDevTool
from crewai_tools import SerperDevTool, BrowserbaseTool, ExaSearchTool
os.environ["OPENAI_API_KEY"] = "Your OpenAI Key"
os.environ["SERPER_API_KEY"] = "Your Serper Key"
search_tool = SerperDevTool()
browser_tool = BrowserbaseTool()
exa_search_tool = ExaSearchTool()
# Creating a senior researcher agent with memory and verbose mode
# Creating a senior researcher agent with advanced configurations
researcher = Agent(
role='Senior Researcher',
goal='Uncover groundbreaking technologies in {topic}',
verbose=True,
memory=True,
backstory=(
"Driven by curiosity, you're at the forefront of"
"innovation, eager to explore and share knowledge that could change"
"the world."
),
tools=[search_tool],
allow_delegation=True
role='Senior Researcher',
goal='Uncover groundbreaking technologies in {topic}',
backstory=("Driven by curiosity, you're at the forefront of innovation, "
"eager to explore and share knowledge that could change the world."),
memory=True,
verbose=True,
allow_delegation=False,
tools=[search_tool, browser_tool],
allow_code_execution=False, # New attribute for enabling code execution
max_iter=15, # Maximum number of iterations for task execution
max_rpm=100, # Maximum requests per minute
max_execution_time=3600, # Maximum execution time in seconds
system_template="Your custom system template here", # Custom system template
prompt_template="Your custom prompt template here", # Custom prompt template
response_template="Your custom response template here", # Custom response template
)
# Creating a writer agent with custom tools and delegation capability
# Creating a writer agent with custom tools and specific configurations
writer = Agent(
role='Writer',
goal='Narrate compelling tech stories about {topic}',
role='Writer',
goal='Narrate compelling tech stories about {topic}',
backstory=("With a flair for simplifying complex topics, you craft engaging "
"narratives that captivate and educate, bringing new discoveries to light."),
verbose=True,
allow_delegation=False,
memory=True,
tools=[exa_search_tool],
function_calling_llm=OpenAI(model_name="gpt-3.5-turbo"), # Separate LLM for function calling
)
# Setting a specific manager agent
manager = Agent(
role='Manager',
goal='Ensure the smooth operation and coordination of the team',
verbose=True,
memory=True,
backstory=(
"With a flair for simplifying complex topics, you craft"
"engaging narratives that captivate and educate, bringing new"
"discoveries to light in an accessible manner."
"As a seasoned project manager, you excel in organizing "
"tasks, managing timelines, and ensuring the team stays on track."
),
tools=[search_tool],
allow_delegation=False
allow_code_execution=True, # Enable code execution for the manager
)
```
## Step 2: Define the Tasks
Detail the specific objectives for your agents, including new features for asynchronous execution and output customization. These tasks ensure a targeted approach to their roles.
### New Agent Attributes and Features
```python
from crewai import Task
# Research task
research_task = Task(
description=(
"Identify the next big trend in {topic}."
"Focus on identifying pros and cons and the overall narrative."
"Your final report should clearly articulate the key points,"
"its market opportunities, and potential risks."
),
expected_output='A comprehensive 3 paragraphs long report on the latest AI trends.',
tools=[search_tool],
agent=researcher,
)
# Writing task with language model configuration
write_task = Task(
description=(
"Compose an insightful article on {topic}."
"Focus on the latest trends and how it's impacting the industry."
"This article should be easy to understand, engaging, and positive."
),
expected_output='A 4 paragraph article on {topic} advancements formatted as markdown.',
tools=[search_tool],
agent=writer,
async_execution=False,
output_file='new-blog-post.md' # Example of output customization
)
```
## Step 3: Form the Crew
Combine your agents into a crew, setting the workflow process they'll follow to accomplish the tasks. Now with options to configure language models for enhanced interaction and additional configurations for optimizing performance.
```python
from crewai import Crew, Process
# Forming the tech-focused crew with some enhanced configurations
crew = Crew(
    agents=[researcher, writer],
    tasks=[research_task, write_task],
    process=Process.sequential,  # Optional: Sequential task execution is default
    memory=True,
    cache=True,
    max_rpm=100,
    share_crew=True
)
```
## Step 4: Kick It Off
Initiate the process with your enhanced crew ready. Observe as your agents collaborate, leveraging their new capabilities for a successful project outcome. Input variables will be interpolated into the agents and tasks for a personalized approach.
```python
# Starting the task execution process with enhanced feedback
result = crew.kickoff(inputs={'topic': 'AI in healthcare'})
print(result)
```
## Conclusion
Building and activating a crew in CrewAI has evolved with new functionalities. By incorporating verbose mode, memory capabilities, asynchronous task execution, output customization, language model configuration, and enhanced crew configurations, your AI team is more equipped than ever to tackle challenges efficiently. The depth of agent backstories and the precision of their objectives enrich collaboration, leading to successful project outcomes. This guide aims to provide you with a clear and detailed understanding of setting up and utilizing the CrewAI framework to its full potential.
1. `allow_code_execution`: Enable or disable code execution capabilities for the agent (default is False).
2. `max_execution_time`: Set a maximum execution time (in seconds) for the agent to complete a task.
3. `function_calling_llm`: Specify a separate language model for function calling.
4. `system_template`, `prompt_template`, `response_template`: Provide custom system, prompt, and response templates for the agent.

View File

@@ -0,0 +1,94 @@
---
title: Initial Support to Bring Your Own Prompts in CrewAI
description: Enhancing customization and internationalization by allowing users to bring their own prompts in CrewAI.
---
# Initial Support to Bring Your Own Prompts in CrewAI
CrewAI now supports the ability to bring your own prompts, enabling extensive customization and internationalization. This feature allows users to tailor the inner workings of their agents to better suit specific needs, including support for multiple languages.
## Internationalization and Customization Support
### Custom Prompts with `prompt_file`
The `prompt_file` attribute facilitates full customization of the agent prompts, enhancing the global usability of CrewAI. Users can specify their prompt templates, ensuring that the agents communicate in a manner that aligns with specific project requirements or language preferences.
#### Example of a Custom Prompt File
The custom prompts can be defined in a JSON file, similar to the example provided [here](https://github.com/joaomdmoura/crewAI/blob/main/src/crewai/translations/en.json).
### Supported Languages
CrewAI's custom prompt support includes internationalization, allowing prompts to be written in different languages. This is particularly useful for global teams or projects that require multilingual support.
## How to Use the `prompt_file` Attribute
To utilize the `prompt_file` attribute, include it in your crew definition. Below is an example demonstrating how to set up agents and tasks with custom prompts.
### Example
```python
import os
from crewai import Agent, Task, Crew
# Define your agents
researcher = Agent(
    role="Researcher",
    goal="Make the best research and analysis on content about AI and AI agents",
    backstory="You're an expert researcher, specialized in technology, software engineering, AI and startups. You work as a freelancer and are now working on doing research and analysis for a new customer.",
    allow_delegation=False,
)

writer = Agent(
    role="Senior Writer",
    goal="Write the best content about AI and AI agents.",
    backstory="You're a senior writer, specialized in technology, software engineering, AI and startups. You work as a freelancer and are now working on writing content for a new customer.",
    allow_delegation=False,
)
# Define your tasks
tasks = [
Task(
description="Say Hi",
expected_output="The word: Hi",
agent=researcher,
)
]
# Instantiate your crew with custom prompts
crew = Crew(
agents=[researcher],
tasks=tasks,
prompt_file="prompt.json", # Path to your custom prompt file
)
# Get your crew to work!
crew.kickoff()
```
## Advanced Customization Features
### `language` Attribute
In addition to `prompt_file`, the `language` attribute can be used to specify the language for the agent's prompts. This ensures that the prompts are generated in the desired language, further enhancing the internationalization capabilities of CrewAI.
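For example (a minimal sketch; the Spanish prompt file name is hypothetical, and the set of accepted language values follows the built-in translations):

```python
crew = Crew(
    agents=[researcher],
    tasks=tasks,
    prompt_file="prompt_es.json",  # hypothetical Spanish prompt file
    language="es",                 # select the language for built-in prompts
)
```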
### Creating Custom Prompt Files
Custom prompt files should be structured in JSON format and include all necessary prompt templates. Below is a simplified example of a prompt JSON file:
```json
{
    "system": "You are a system template.",
    "prompt": "Here is your prompt template.",
    "response": "Here is your response template."
}
```
### Benefits of Custom Prompts
- **Enhanced Flexibility**: Tailor agent communication to specific project needs.
- **Improved Usability**: Supports multiple languages, making it suitable for global projects.
- **Consistency**: Ensures uniform prompt structures across different agents and tasks.
By incorporating these updates, CrewAI provides users with the ability to fully customize and internationalize their agent prompts, making the platform more versatile and user-friendly.

View File

@@ -10,7 +10,16 @@ Crafting an efficient CrewAI team hinges on the ability to dynamically tailor yo
- **Role**: Specifies the agent's job within the crew, such as 'Analyst' or 'Customer Service Rep'.
- **Goal**: Defines what the agent aims to achieve, in alignment with its role and the overarching objectives of the crew.
- **Backstory**: Provides depth to the agent's persona, enriching its motivations and engagements within the crew.
- **Tools**: Represents the capabilities or methods the agent uses to perform tasks, from simple functions to intricate integrations.
- **Tools** *(Optional)*: Represents the capabilities or methods the agent uses to perform tasks, from simple functions to intricate integrations.
- **Cache** *(Optional)*: Determines whether the agent should use a cache for tool usage.
- **Max RPM**: Sets the maximum number of requests per minute (`max_rpm`). This attribute is optional and can be set to `None` for no limit, allowing for unlimited queries to external services if needed.
- **Verbose** *(Optional)*: Enables detailed logging of an agent's actions, useful for debugging and optimization. Specifically, it provides insights into agent execution processes, aiding in the optimization of performance.
- **Allow Delegation** *(Optional)*: `allow_delegation` controls whether the agent is allowed to delegate tasks to other agents.
- **Max Iter** *(Optional)*: The `max_iter` attribute allows users to define the maximum number of iterations an agent can perform for a single task, preventing infinite loops or excessively long executions. The default value is set to 25, providing a balance between thoroughness and efficiency. Once the agent approaches this number, it will try its best to give a good answer.
- **Max Execution Time** *(Optional)*: `max_execution_time` Sets the maximum execution time for an agent to complete a task.
- **System Template** *(Optional)*: `system_template` defines the system format for the agent.
- **Prompt Template** *(Optional)*: `prompt_template` defines the prompt format for the agent.
- **Response Template** *(Optional)*: `response_template` defines the response format for the agent. A combined sketch of the three template attributes follows below.
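A hedged sketch of the three template attributes working together (the `{{ .System }}`-style placeholders are an assumption about the template syntax, not taken from this changeset):

```python
from crewai import Agent

agent = Agent(
    role="Researcher",
    goal="Summarize findings clearly",
    backstory="A meticulous analyst.",
    system_template="System: {{ .System }}",         # assumed placeholder syntax
    prompt_template="User: {{ .Prompt }}",           # assumed placeholder syntax
    response_template="Assistant: {{ .Response }}",  # assumed placeholder syntax
)
```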
## Advanced Customization Options
Beyond the basic attributes, CrewAI allows for deeper customization to enhance an agent's behavior and capabilities significantly.
@@ -26,7 +35,7 @@ Adjusting an agent's performance and monitoring its operations are crucial for e
- **RPM Limit**: Sets the maximum number of requests per minute (`max_rpm`). This attribute is optional and can be set to `None` for no limit, allowing for unlimited queries to external services if needed.
### Maximum Iterations for Task Execution
The `max_iter` attribute allows users to define the maximum number of iterations an agent can perform for a single task, preventing infinite loops or excessively long executions. The default value is set to 15, providing a balance between thoroughness and efficiency. Once the agent approaches this number, it will try its best to give a good answer.
The `max_iter` attribute allows users to define the maximum number of iterations an agent can perform for a single task, preventing infinite loops or excessively long executions. The default value is set to 25, providing a balance between thoroughness and efficiency. Once the agent approaches this number, it will try its best to give a good answer.
## Customizing Agents and Tools
Agents are customized by defining their attributes and tools during initialization. Tools are critical for an agent's functionality, enabling them to perform specialized tasks. The `tools` attribute should be an array of tools the agent can utilize, and it's initialized as an empty list by default. Tools can be added or modified post-agent initialization to adapt to new requirements.
@@ -57,7 +66,7 @@ agent = Agent(
memory=True, # Enable memory
verbose=True,
max_rpm=None, # No limit on requests per minute
max_iter=15, # Default value for maximum iterations
max_iter=25, # Default value for maximum iterations
allow_delegation=False
)
```

View File

@@ -10,7 +10,7 @@ The hierarchical process in CrewAI introduces a structured approach to task mana
The hierarchical process is designed to leverage advanced models like GPT-4, optimizing token usage while handling complex tasks with greater efficiency.
## Hierarchical Process Overview
By default, tasks in CrewAI are managed through a sequential process. However, adopting a hierarchical approach allows for a clear hierarchy in task management, where a 'manager' agent coordinates the workflow, delegates tasks, and validates outcomes for streamlined and effective execution. This manager agent is automatically created by crewAI so you don't need to worry about it.
By default, tasks in CrewAI are managed through a sequential process. However, adopting a hierarchical approach allows for a clear hierarchy in task management, where a 'manager' agent coordinates the workflow, delegates tasks, and validates outcomes for streamlined and effective execution. This manager agent can now be either automatically created by CrewAI or explicitly set by the user.
### Key Features
- **Task Delegation**: A manager agent allocates tasks among crew members based on their roles and capabilities.
@@ -52,9 +52,10 @@ writer = Agent(
project_crew = Crew(
tasks=[...], # Tasks to be delegated and executed under the manager's supervision
agents=[researcher, writer],
manager_llm=ChatOpenAI(temperature=0, model="gpt-4"), # Mandatory for hierarchical process
manager_llm=ChatOpenAI(temperature=0, model="gpt-4"), # Mandatory if manager_agent is not set
process=Process.hierarchical, # Specifies the hierarchical management approach
memory=True, # Enable memory usage for enhanced task execution
manager_agent=None, # Optional: explicitly set a specific agent as manager instead of the manager_llm
)
```
@@ -64,4 +65,4 @@ project_crew = Crew(
3. **Sequential Task Progression**: Despite being a hierarchical process, tasks follow a logical order for smooth progression, facilitated by the manager's oversight.
## Conclusion
Adopting the hierarchical process in crewAI, with the correct configurations and understanding of the system's capabilities, facilitates an organized and efficient approach to project management.
Adopting the hierarchical process in CrewAI, with the correct configurations and understanding of the system's capabilities, facilitates an organized and efficient approach to project management. Utilize the advanced features and customizations to tailor the workflow to your specific needs, ensuring optimal task execution and project success.

View File

@@ -22,7 +22,7 @@ import os
from crewai import Agent, Task, Crew
from crewai_tools import SerperDevTool
os.environ["SERPER_API_KEY"] = "Your Key" # serper.dev API key
os.environ["SERPER_API_KEY"] = "Your Key" # serper.dev API key
os.environ["OPENAI_API_KEY"] = "Your Key"
# Loading Tools
@@ -30,59 +30,59 @@ search_tool = SerperDevTool()
# Define your agents with roles, goals, tools, and additional attributes
researcher = Agent(
role='Senior Research Analyst',
goal='Uncover cutting-edge developments in AI and data science',
backstory=(
"You are a Senior Research Analyst at a leading tech think tank."
"Your expertise lies in identifying emerging trends and technologies in AI and data science."
"You have a knack for dissecting complex data and presenting actionable insights."
),
verbose=True,
allow_delegation=False,
tools=[search_tool],
max_rpm=100
role='Senior Research Analyst',
goal='Uncover cutting-edge developments in AI and data science',
backstory=(
"You are a Senior Research Analyst at a leading tech think tank. "
"Your expertise lies in identifying emerging trends and technologies in AI and data science. "
"You have a knack for dissecting complex data and presenting actionable insights."
),
verbose=True,
allow_delegation=False,
tools=[search_tool]
)
writer = Agent(
role='Tech Content Strategist',
goal='Craft compelling content on tech advancements',
backstory=(
"You are a renowned Tech Content Strategist, known for your insightful and engaging articles on technology and innovation."
"With a deep understanding of the tech industry, you transform complex concepts into compelling narratives."
),
verbose=True,
allow_delegation=True,
tools=[search_tool],
cache=False, # Disable cache for this agent
role='Tech Content Strategist',
goal='Craft compelling content on tech advancements',
backstory=(
"You are a renowned Tech Content Strategist, known for your insightful and engaging articles on technology and innovation. "
"With a deep understanding of the tech industry, you transform complex concepts into compelling narratives."
),
verbose=True,
allow_delegation=True,
tools=[search_tool],
cache=False, # Disable cache for this agent
)
# Create tasks for your agents
task1 = Task(
description=(
"Conduct a comprehensive analysis of the latest advancements in AI in 2024."
"Identify key trends, breakthrough technologies, and potential industry impacts."
"Compile your findings in a detailed report."
"Make sure to check with a human if the draft is good before finalizing your answer."
),
expected_output='A comprehensive full report on the latest AI advancements in 2024, leave nothing out',
agent=researcher,
human_input=True,
description=(
"Conduct a comprehensive analysis of the latest advancements in AI in 2024. "
"Identify key trends, breakthrough technologies, and potential industry impacts. "
"Compile your findings in a detailed report. "
"Make sure to check with a human if the draft is good before finalizing your answer."
),
expected_output='A comprehensive full report on the latest AI advancements in 2024, leave nothing out',
agent=researcher,
human_input=True
)
task2 = Task(
description=(
"Using the insights from the researcher's report, develop an engaging blog post that highlights the most significant AI advancements."
"Your post should be informative yet accessible, catering to a tech-savvy audience."
"Aim for a narrative that captures the essence of these breakthroughs and their implications for the future."
),
expected_output='A compelling 3 paragraphs blog post formatted as markdown about the latest AI advancements in 2024',
agent=writer
description=(
"Using the insights from the researcher\'s report, develop an engaging blog post that highlights the most significant AI advancements. "
"Your post should be informative yet accessible, catering to a tech-savvy audience. "
"Aim for a narrative that captures the essence of these breakthroughs and their implications for the future."
),
expected_output='A compelling 3 paragraphs blog post formatted as markdown about the latest AI advancements in 2024',
agent=writer
)
# Instantiate your crew with a sequential process
crew = Crew(
agents=[researcher, writer],
tasks=[task1, task2],
verbose=2
agents=[researcher, writer],
tasks=[task1, task2],
verbose=2,
memory=True,
)
# Get your crew to work!

View File

@@ -0,0 +1,21 @@
---
title: Installing crewAI
description: A comprehensive guide to installing crewAI and its dependencies, including the latest updates and installation methods.
---
# Installing crewAI
Welcome to crewAI! This guide will walk you through the installation process for crewAI and its dependencies. crewAI is a flexible and powerful AI framework that enables you to create and manage AI agents, tools, and tasks efficiently. Let's get started!
## Installation
To install crewAI, you need to have Python >=3.10 and <=3.13 installed on your system:
```shell
# Install the main crewAI package
pip install crewai
# Install the main crewAI package and the tools package
# that includes a series of helpful tools for your agents
pip install 'crewai[tools]'
```

View File

@@ -0,0 +1,40 @@
---
title: Kickoff Async
description: Kickoff a Crew Asynchronously
---
## Introduction
CrewAI provides the ability to kickoff a crew asynchronously, allowing you to start the crew execution in a non-blocking manner. This feature is particularly useful when you want to run multiple crews concurrently or when you need to perform other tasks while the crew is executing.
## Asynchronous Crew Execution
To kickoff a crew asynchronously, use the `kickoff_async()` method. This method is asynchronous (a coroutine): it runs the crew without blocking, so other work can proceed while you await the result.
Here's an example of how to kickoff a crew asynchronously:
```python
import asyncio

from crewai import Crew, Agent, Task

# Create an agent with code execution enabled
coding_agent = Agent(
    role="Python Data Analyst",
    goal="Analyze data and provide insights using Python",
    backstory="You are an experienced data analyst with strong Python skills.",
    allow_code_execution=True
)

# Create a task that requires code execution
data_analysis_task = Task(
    description="Analyze the given dataset and calculate the average age of participants. Ages: {ages}",
    expected_output="The average age of the participants.",
    agent=coding_agent
)

# Create a crew and add the task
analysis_crew = Crew(
    agents=[coding_agent],
    tasks=[data_analysis_task]
)

# kickoff_async is a coroutine, so await it inside an async entry point
async def main():
    result = await analysis_crew.kickoff_async(inputs={"ages": [25, 30, 35, 40, 45]})
    print(result)

asyncio.run(main())
```

View File

@@ -0,0 +1,45 @@
---
title: Kickoff For Each
description: Kickoff a Crew for a List
---
## Introduction
CrewAI provides the ability to kickoff a crew for each item in a list, executing the crew once per item. This feature is particularly useful when you need to perform the same set of tasks for multiple inputs.
## Kicking Off a Crew for Each Item
To kickoff a crew for each item in a list, use the `kickoff_for_each()` method. This method executes the crew for each item in the list, allowing you to process multiple items efficiently.
Here's an example of how to kickoff a crew for each item in a list:
```python
from crewai import Crew, Agent, Task

# Create an agent with code execution enabled
coding_agent = Agent(
    role="Python Data Analyst",
    goal="Analyze data and provide insights using Python",
    backstory="You are an experienced data analyst with strong Python skills.",
    allow_code_execution=True
)

# Create a task that requires code execution
data_analysis_task = Task(
    description="Analyze the given dataset and calculate the average age of participants. Ages: {ages}",
    expected_output="The average age of the participants.",
    agent=coding_agent
)

# Create a crew and add the task
analysis_crew = Crew(
    agents=[coding_agent],
    tasks=[data_analysis_task]
)

datasets = [
    {"ages": [25, 30, 35, 40, 45]},
    {"ages": [20, 25, 30, 35, 40]},
    {"ages": [30, 35, 40, 45, 50]}
]

# Execute the crew once per dataset; results come back as a list, in input order
results = analysis_crew.kickoff_for_each(inputs=datasets)
for result in results:
    print(result)
```

View File

@@ -1,31 +1,37 @@
---
title: Connect CrewAI to LLMs
description: Comprehensive guide on integrating CrewAI with various Large Language Models (LLMs), including detailed class attributes and methods.
description: Comprehensive guide on integrating CrewAI with various Large Language Models (LLMs), including detailed class attributes, methods, and configuration options.
---
## Connect CrewAI to LLMs
!!! note "Default LLM"
By default, CrewAI uses OpenAI's GPT-4 model for language processing. You can configure your agents to use a different model or API. This guide shows how to connect your agents to various LLMs through environment variables and direct instantiation.
By default, CrewAI uses OpenAI's GPT-4 model (specifically, the model specified by the OPENAI_MODEL_NAME environment variable, defaulting to "gpt-4o") for language processing. You can configure your agents to use a different model or API as described in this guide.
CrewAI offers flexibility in connecting to various LLMs, including local models via [Ollama](https://ollama.ai) and different APIs like Azure. It's compatible with all [LangChain LLM](https://python.langchain.com/docs/integrations/llms/) components, enabling diverse integrations for tailored AI solutions.
## CrewAI Agent Overview
The `Agent` class is the cornerstone for implementing AI solutions in CrewAI. Here's an updated overview reflecting the latest codebase changes:
The `Agent` class is the cornerstone for implementing AI solutions in CrewAI. Here's a comprehensive overview of the Agent class attributes and methods:
- **Attributes**:
- `role`: Defines the agent's role within the solution.
- `goal`: Specifies the agent's objective.
- `backstory`: Provides a background story to the agent.
- `llm`: The language model that will run the agent. By default, it uses the GPT-4 model defined in the environment variable "OPENAI_MODEL_NAME".
- `function_calling_llm`: The language model that will handle the tool calling for this agent, overriding the crew function_calling_llm. Optional.
- `max_iter`: Maximum number of iterations for an agent to execute a task, default is 15.
- `memory`: Enables the agent to retain information during and across executions. Default is `False`.
- `max_rpm`: Maximum number of requests per minute the agent's execution should respect. Optional.
- `verbose`: Enables detailed logging of the agent's execution. Default is `False`.
- `allow_delegation`: Allows the agent to delegate tasks to other agents, default is `True`.
- `cache` *Optional*: Determines whether the agent should use a cache for tool usage. Default is `True`.
- `max_rpm` *Optional*: Maximum number of requests per minute the agent's execution should respect. Optional.
- `verbose` *Optional*: Enables detailed logging of the agent's execution. Default is `False`.
- `allow_delegation` *Optional*: Allows the agent to delegate tasks to other agents, default is `True`.
- `tools`: Specifies the tools available to the agent for task execution. Optional.
- `step_callback`: Provides a callback function to be executed after each step. Optional.
- `cache`: Determines whether the agent should use a cache for tool usage. Default is `True`.
- `max_iter` *Optional*: Maximum number of iterations for an agent to execute a task, default is 25.
- `max_execution_time` *Optional*: Maximum execution time for an agent to execute a task. Optional.
- `step_callback` *Optional*: Provides a callback function to be executed after each step. Optional.
- `llm` *Optional*: Indicates the Large Language Model the agent uses. By default, it uses the GPT-4 model defined in the environment variable "OPENAI_MODEL_NAME".
- `function_calling_llm` *Optional*: Will turn the ReAct CrewAI agent into a function-calling agent.
- `callbacks` *Optional*: A list of callback functions from the LangChain library that are triggered during the agent's execution process.
- `system_template` *Optional*: Optional string to define the system format for the agent.
- `prompt_template` *Optional*: Optional string to define the prompt format for the agent.
- `response_template` *Optional*: Optional string to define the response format for the agent.
```python
# Required
@@ -36,22 +42,57 @@ example_agent = Agent(
role='Local Expert',
goal='Provide insights about the city',
backstory="A knowledgeable local guide.",
verbose=True,
memory=True
verbose=True
)
```
## Ollama Integration
Ollama is preferred for local LLM integration, offering customization and privacy benefits. To integrate Ollama with CrewAI, set the appropriate environment variables as shown below. Note: Detailed Ollama setup is beyond this document's scope, but general guidance is provided.
Ollama is preferred for local LLM integration, offering customization and privacy benefits. To integrate Ollama with CrewAI, set the appropriate environment variables as shown below.
### Setting Up Ollama
- **Environment Variables Configuration**: To integrate Ollama, set the following environment variables:
```sh
OPENAI_API_BASE='http://localhost:11434/v1'
OPENAI_MODEL_NAME='openhermes' # Adjust based on available model
OPENAI_API_BASE='http://localhost:11434'
OPENAI_MODEL_NAME='llama2' # Adjust based on available model
OPENAI_API_KEY=''
```
```
## Ollama Integration (ex. for using Llama 2 locally)
1. [Download Ollama](https://ollama.com/download).
2. After setting up Ollama, pull the Llama 2 model by running `ollama pull llama2` in your terminal.
3. Enjoy your free Llama 2 model, powered by the excellent agents from crewAI.
```python
from crewai import Agent, Task, Crew
from langchain.llms import Ollama
import os

os.environ["OPENAI_API_KEY"] = "NA"

llm = Ollama(
    model="llama2",
    base_url="http://localhost:11434"
)

general_agent = Agent(
    role="Math Professor",
    goal="Provide the solution to the students that are asking mathematical questions and give them the answer.",
    backstory="You are an excellent math professor that likes to solve math questions in a way that everyone can understand your solution",
    allow_delegation=False,
    verbose=True,
    llm=llm
)

task = Task(
    description="what is 3 + 5",
    agent=general_agent,
    expected_output="A numerical answer."
)

crew = Crew(
    agents=[general_agent],
    tasks=[task],
    verbose=2
)

result = crew.kickoff()
print(result)
```
## HuggingFace Integration
There are a couple of different ways you can use HuggingFace to host your LLM.
@@ -97,10 +138,10 @@ OPENAI_API_KEY=NA
```
#### LM Studio
Launch [LM Studio](https://lmstudio.ai) and go to the Server tab. Then select a model from the dropdown menu and wait for it to load. Once it's loaded, click the green Start Server button and use the URL, port, and API key that are shown (you can modify them). Below is an example of the default settings as of LM Studio 0.2.19:
```sh
OPENAI_API_BASE="http://localhost:8000/v1"
OPENAI_MODEL_NAME=NA
OPENAI_API_KEY=NA
OPENAI_API_BASE="http://localhost:1234/v1"
OPENAI_API_KEY="lm-studio"
```
```
#### Mistral API
@@ -110,6 +151,17 @@ OPENAI_API_BASE=https://api.mistral.ai/v1
OPENAI_MODEL_NAME="mistral-small"
```
### Solar
```python
import os

from langchain_community.chat_models.solar import SolarChat

# Free developer API key available here: https://console.upstage.ai/services/solar
# Langchain Example: https://github.com/langchain-ai/langchain/pull/18556
os.environ["SOLAR_API_KEY"] = "your-solar-api-key"

# Initialize language model
llm = SolarChat(max_tokens=1024)
```
### text-gen-web-ui
```sh
OPENAI_API_BASE=http://localhost:5000/v1
@@ -118,19 +170,18 @@ OPENAI_API_KEY=NA
```
### Cohere
```sh
from langchain_community.chat_models import ChatCohere
```python
from langchain_cohere import ChatCohere
# Initialize language model
os.environ["COHERE_API_KEY"] = "your-cohere-api-key"
llm = ChatCohere()
Free developer API key available here: https://cohere.com/
Langchain Documentation: https://python.langchain.com/docs/integrations/chat/cohere
# Free developer API key available here: https://cohere.com/
# Langchain Documentation: https://python.langchain.com/docs/integrations/chat/cohere
```
### Azure Open AI
Azure's OpenAI API needs a distinct setup, utilizing the `langchain_openai` component for Azure-specific configurations.
### Azure Open AI Configuration
For Azure OpenAI API integration, set the following environment variables:
```sh
AZURE_OPENAI_VERSION="2022-12-01"
AZURE_OPENAI_DEPLOYMENT=""
@@ -160,4 +211,4 @@ azure_agent = Agent(
```
## Conclusion
Integrating CrewAI with different LLMs expands the framework's versatility, allowing for customized, efficient AI solutions across various domains and platforms.

View File

@@ -0,0 +1,89 @@
---
title: CrewAI Agent Monitoring with Langtrace
description: How to monitor cost, latency, and performance of CrewAI Agents using Langtrace, an external observability tool.
---
# Langtrace Overview
Langtrace is an open-source, external tool that helps you set up observability and evaluations for Large Language Models (LLMs), LLM frameworks, and Vector Databases. While not built directly into CrewAI, Langtrace can be used alongside CrewAI to gain deep visibility into the cost, latency, and performance of your CrewAI Agents. This integration allows you to log hyperparameters, monitor performance regressions, and establish a process for continuous improvement of your Agents.
## Setup Instructions
1. Sign up for [Langtrace](https://langtrace.ai/) by visiting [https://langtrace.ai/signup](https://langtrace.ai/signup).
2. Create a project and generate an API key.
3. Install Langtrace in your CrewAI project using the following commands:
```bash
# Install the SDK
pip install langtrace-python-sdk
```
## Using Langtrace with CrewAI
To integrate Langtrace with your CrewAI project, follow these steps:
1. Import and initialize Langtrace at the beginning of your script, before any CrewAI imports:
```python
from langtrace_python_sdk import langtrace
langtrace.init(api_key='<LANGTRACE_API_KEY>')
# Now import CrewAI modules
from crewai import Agent, Task, Crew
```
2. Create your CrewAI agents and tasks as usual.
3. Use Langtrace's tracking functions to monitor your CrewAI operations. For example:
```python
with langtrace.trace("CrewAI Task Execution"):
result = crew.kickoff()
```
### Features and Their Application to CrewAI
1. **LLM Token and Cost Tracking**
- Monitor the token usage and associated costs for each CrewAI agent interaction.
- Example:
```python
with langtrace.trace("Agent Interaction"):
agent_response = agent.execute(task)
```
2. **Trace Graph for Execution Steps**
- Visualize the execution flow of your CrewAI tasks, including latency and logs.
- Useful for identifying bottlenecks in your agent workflows.
3. **Dataset Curation with Manual Annotation**
- Create datasets from your CrewAI task outputs for future training or evaluation.
- Example:
```python
langtrace.log_dataset_item(task_input, agent_output, {"task_type": "research"})
```
4. **Prompt Versioning and Management**
- Keep track of different versions of prompts used in your CrewAI agents.
- Useful for A/B testing and optimizing agent performance.
5. **Prompt Playground with Model Comparisons**
- Test and compare different prompts and models for your CrewAI agents before deployment.
6. **Testing and Evaluations**
- Set up automated tests for your CrewAI agents and tasks.
- Example:
```python
langtrace.evaluate(agent_output, expected_output, "accuracy")
```
## Monitoring New CrewAI Features
CrewAI has introduced several new features that can be monitored using Langtrace:
1. **Code Execution**: Monitor the performance and output of code executed by agents.
```python
with langtrace.trace("Agent Code Execution"):
code_output = agent.execute_code(code_snippet)
```
2. **Third-party Agent Integration**: Track interactions with LlamaIndex, LangChain, and Autogen agents.

View File

@@ -13,8 +13,9 @@ The sequential process ensures tasks are executed one after the other, following
- **Linear Task Flow**: Ensures orderly progression by handling tasks in a predetermined sequence.
- **Simplicity**: Best suited for projects with clear, step-by-step tasks.
- **Easy Monitoring**: Facilitates easy tracking of task completion and project progress.
## Implementing the Sequential Process
Assemble your crew and define tasks in the order they need to be executed.
To use the sequential process, assemble your crew and define tasks in the order they need to be executed.
```python
from crewai import Crew, Process, Agent, Task
@@ -36,10 +37,9 @@ writer = Agent(
backstory='A skilled writer with a talent for crafting compelling narratives'
)
# Define the tasks in sequence
research_task = Task(description='Gather relevant data...', agent=researcher)
analysis_task = Task(description='Analyze the data...', agent=analyst)
writing_task = Task(description='Compose the report...', agent=writer)
research_task = Task(description='Gather relevant data...', agent=researcher, expected_output='Raw Data')
analysis_task = Task(description='Analyze the data...', agent=analyst, expected_output='Data Insights')
writing_task = Task(description='Compose the report...', agent=writer, expected_output='Final Report')
# Form the crew with a sequential process
report_crew = Crew(
@@ -47,6 +47,9 @@ report_crew = Crew(
tasks=[research_task, analysis_task, writing_task],
process=Process.sequential
)
# Execute the crew
result = report_crew.kickoff()
```
### Workflow in Action
@@ -54,5 +57,29 @@ report_crew = Crew(
2. **Subsequent Tasks**: Agents pick up their tasks based on the process type, with outcomes of preceding tasks or manager directives guiding their execution.
3. **Completion**: The process concludes once the final task is executed, leading to project completion.
## Conclusion
The sequential and hierarchical processes in CrewAI offer clear, adaptable paths for task execution. They are well-suited for projects requiring logical progression and dynamic decision-making, ensuring each step is completed effectively, thereby facilitating a cohesive final product.
## Advanced Features
### Task Delegation
In sequential processes, if an agent has `allow_delegation` set to `True`, they can delegate tasks to other agents in the crew. This feature is automatically set up when there are multiple agents in the crew.
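For instance (a minimal sketch; the analyst agent mirrors the example above):

```python
analyst = Agent(
    role='Data Analyst',
    goal='Analyze the research data',
    backstory='An experienced analyst with an eye for patterns',
    allow_delegation=True,  # may hand sub-work to other agents in the crew
)
```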
### Asynchronous Execution
Tasks can be executed asynchronously, allowing for parallel processing when appropriate. To create an asynchronous task, set `async_execution=True` when defining the task.
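A minimal sketch (whether a given task can safely run asynchronously depends on your workflow):

```python
research_task = Task(
    description='Gather relevant data...',
    agent=researcher,
    expected_output='Raw Data',
    async_execution=True,  # runs without blocking the tasks defined after it
)
```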
### Memory and Caching
CrewAI supports both memory and caching features:
- **Memory**: Enable by setting `memory=True` when creating the Crew. This allows agents to retain information across tasks.
- **Caching**: By default, caching is enabled. Set `cache=False` to disable it. Both options appear in the sketch below.
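A sketch of both flags on the crew from the example above:

```python
report_crew = Crew(
    agents=[researcher, analyst, writer],
    tasks=[research_task, analysis_task, writing_task],
    process=Process.sequential,
    memory=True,   # agents retain information across tasks
    cache=True,    # the default; set False to disable tool-result caching
)
```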
### Callbacks
You can set callbacks at both the task and step level:
- `task_callback`: Executed after each task completion.
- `step_callback`: Executed after each step in an agent's execution. A `task_callback` sketch follows below.
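A rough sketch (the assumption here is that `task_callback` receives the completed task's output object):

```python
def on_task_done(task_output):
    # Hypothetical logging helper, invoked after each task completes
    print(f"Task finished: {task_output}")

report_crew = Crew(
    agents=[researcher, analyst, writer],
    tasks=[research_task, analysis_task, writing_task],
    process=Process.sequential,
    task_callback=on_task_done,
)
```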
### Usage Metrics
CrewAI tracks token usage across all tasks and agents. You can access these metrics after execution.
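For example, after a run the accumulated metrics can be inspected on the crew object (a minimal sketch):

```python
result = report_crew.kickoff()
print(report_crew.usage_metrics)  # token usage accumulated across agents and tasks
```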
## Best Practices for Sequential Processes
1. **Order Matters**: Arrange tasks in a logical sequence where each task builds upon the previous one.
2. **Clear Task Descriptions**: Provide detailed descriptions for each task to guide the agents effectively.
3. **Appropriate Agent Selection**: Match agents' skills and roles to the requirements of each task.
4. **Use Context**: Leverage the context from previous tasks to inform subsequent ones.

View File

@@ -0,0 +1,87 @@
---
title: Setting a Specific Agent as Manager in CrewAI
description: Learn how to set a custom agent as the manager in CrewAI, providing more control over task management and coordination.
---
# Setting a Specific Agent as Manager in CrewAI
CrewAI allows users to set a specific agent as the manager of the crew, providing more control over the management and coordination of tasks. This feature enables the customization of the managerial role to better fit your project's requirements.
## Using the `manager_agent` Attribute
### Custom Manager Agent
The `manager_agent` attribute allows you to define a custom agent to manage the crew. This agent will oversee the entire process, ensuring that tasks are completed efficiently and to the highest standard.
### Example
```python
import os
from crewai import Agent, Task, Crew, Process

# Define your agents
researcher = Agent(
    role="Researcher",
    goal="Conduct thorough research and analysis on AI and AI agents",
    backstory="You're an expert researcher, specialized in technology, software engineering, AI, and startups. You work as a freelancer and are currently researching for a new client.",
    allow_delegation=False,
)

writer = Agent(
    role="Senior Writer",
    goal="Create compelling content about AI and AI agents",
    backstory="You're a senior writer, specialized in technology, software engineering, AI, and startups. You work as a freelancer and are currently writing content for a new client.",
    allow_delegation=False,
)

# Define your task
task = Task(
    description="Generate a list of 5 interesting ideas for an article, then write one captivating paragraph for each idea that showcases the potential of a full article on this topic. Return the list of ideas with their paragraphs and your notes.",
    expected_output="5 bullet points, each with a paragraph and accompanying notes.",
)

# Define the manager agent
manager = Agent(
    role="Project Manager",
    goal="Efficiently manage the crew and ensure high-quality task completion",
    backstory="You're an experienced project manager, skilled in overseeing complex projects and guiding teams to success. Your role is to coordinate the efforts of the crew members, ensuring that each task is completed on time and to the highest standard.",
    allow_delegation=True,
)

# Instantiate your crew with a custom manager
crew = Crew(
    agents=[researcher, writer],
    tasks=[task],
    manager_agent=manager,
    process=Process.hierarchical,
)

# Start the crew's work
result = crew.kickoff()
```
## Benefits of a Custom Manager Agent
- **Enhanced Control**: Tailor the management approach to fit the specific needs of your project.
- **Improved Coordination**: Ensure efficient task coordination and management by an experienced agent.
- **Customizable Management**: Define managerial roles and responsibilities that align with your project's goals.
## Setting a Manager LLM
If you're using the hierarchical process and don't want to set a custom manager agent, you can specify the language model for the manager:
```python
from langchain_openai import ChatOpenAI
manager_llm = ChatOpenAI(model_name="gpt-4")
crew = Crew(
    agents=[researcher, writer],
    tasks=[task],
    process=Process.hierarchical,
    manager_llm=manager_llm
)
```
Note: Either `manager_agent` or `manager_llm` must be set when using the hierarchical process.

View File

@@ -33,11 +33,26 @@ Cutting-edge framework for orchestrating role-playing, autonomous AI agents. By
Crews
</a>
</li>
<li>
<a href="./core-concepts/Training-Crew">
Training
</a>
</li>
<li>
<a href="./core-concepts/Memory">
Memory
</a>
</li>
</ul>
</div>
<div style="width:30%">
<h2>How-To Guides</h2>
<ul>
<li>
<a href="./how-to/Installing-CrewAI">
Installing crewAI
</a>
</li>
<li>
<a href="./how-to/Creating-a-Crew-and-kick-it-off">
Getting Started
@@ -68,14 +83,34 @@ Cutting-edge framework for orchestrating role-playing, autonomous AI agents. By
Customizing Agents
</a>
</li>
<li>
<a href="./how-to/Coding-Agents">
Coding Agents
</a>
</li>
<li>
<a href="./how-to/Human-Input-on-Execution">
Human Input on Execution
</a>
</li>
<li>
<a href="./how-to/AgentOps-Observability.md">
Agent Observability using AgentOps
<a href="./how-to/Kickoff-async">
Kickoff a Crew Asynchronously
</a>
</li>
<li>
<a href="./how-to/Kickoff-for-each">
Kickoff a Crew for a List
</a>
</li>
<li>
<a href="./how-to/AgentOps-Observability">
Agent Monitoring with AgentOps
</a>
</li>
<li>
<a href="./how-to/Langtrace-Observability">
Agent Monitoring with LangTrace
</a>
</li>
</ul>

View File

@@ -2,6 +2,7 @@
title: Telemetry
description: Understanding the telemetry data collected by CrewAI and how it contributes to the enhancement of the library.
---
## Telemetry
CrewAI utilizes anonymous telemetry to gather usage statistics with the primary goal of enhancing the library. Our focus is on improving and developing the features, integrations, and tools most utilized by our users.

View File

@@ -0,0 +1,38 @@
# BrowserbaseLoadTool
## Description
[Browserbase](https://browserbase.com) is a developer platform to reliably run, manage, and monitor headless browsers.
Power your AI data retrievals with:
- [Serverless Infrastructure](https://docs.browserbase.com/under-the-hood) providing reliable browsers to extract data from complex UIs
- [Stealth Mode](https://docs.browserbase.com/features/stealth-mode) with included fingerprinting tactics and automatic captcha solving
- [Session Debugger](https://docs.browserbase.com/features/sessions) to inspect your Browser Session with networks timeline and logs
- [Live Debug](https://docs.browserbase.com/guides/session-debug-connection/browser-remote-control) to quickly debug your automation
## Installation
- Get an API key and Project ID from [browserbase.com](https://browserbase.com) and set them in the environment variables (`BROWSERBASE_API_KEY`, `BROWSERBASE_PROJECT_ID`).
- Install the [Browserbase SDK](http://github.com/browserbase/python-sdk) along with `crewai[tools]` package:
```shell
pip install browserbase 'crewai[tools]'
```
## Example
Utilize the BrowserbaseLoadTool as follows to allow your agent to load websites:
```python
from crewai_tools import BrowserbaseLoadTool
tool = BrowserbaseLoadTool()
```
## Arguments
- `api_key` Optional. Browserbase API key. Default is `BROWSERBASE_API_KEY` env variable.
- `project_id` Optional. Browserbase Project ID. Default is `BROWSERBASE_PROJECT_ID` env variable.
- `text_content` Retrieve only text content. Default is `False`.
- `session_id` Optional. Provide an existing Session ID.
- `proxy` Optional. Enable/Disable proxies. See the sketch below.
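A hedged sketch passing some of these arguments explicitly (values are placeholders; by default both keys are read from the environment variables above):

```python
from crewai_tools import BrowserbaseLoadTool

tool = BrowserbaseLoadTool(
    api_key="your-browserbase-api-key",  # placeholder; defaults to BROWSERBASE_API_KEY
    text_content=True,                   # retrieve only text content
)
```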

View File

@@ -50,7 +50,7 @@ tool = CSVSearchTool(
),
),
embedder=dict(
provider="google",
provider="google", # or openai, ollama, ...
config=dict(
model="models/embedding-001",
task_type="retrieval_document",

View File

@@ -41,7 +41,7 @@ Note: Substitute 'https://docs.example.com/reference' with your target documenta
By default, the tool uses OpenAI for both embeddings and summarization. To customize the model, you can use a config dictionary as follows:
```python
tool = YoutubeVideoSearchTool(
tool = CodeDocsSearchTool(
config=dict(
llm=dict(
provider="ollama", # or google, openai, anthropic, llama2, ...
@@ -53,7 +53,7 @@ tool = YoutubeVideoSearchTool(
),
),
embedder=dict(
provider="google",
provider="google", # or openai, ollama, ...
config=dict(
model="models/embedding-001",
task_type="retrieval_document",

View File

@@ -0,0 +1,41 @@
# CodeInterpreterTool
## Description
This tool is used to give the Agent the ability to run code (Python3) from the code generated by the Agent itself. The code is executed in a sandboxed environment, so it is safe to run any code.
It is incredibly useful, since it allows the Agent to generate code, run it in the same environment, get the result, and use it to make decisions.
## Requirements
- Docker
## Installation
Install the crewai_tools package
```shell
pip install 'crewai[tools]'
```
## Example
Remember that when using this tool, the code must be Python 3 code generated by the Agent itself. The first run will take some time, because the Docker image needs to be built.
```python
from crewai import Agent
from crewai_tools import CodeInterpreterTool
Agent(
    ...,
    tools=[CodeInterpreterTool()],
)
```
We also provide a simple way to use it directly from the Agent.
```python
from crewai import Agent
agent = Agent(
    ...,
    allow_code_execution=True,
)
```

View File

@@ -0,0 +1,72 @@
# ComposioTool Documentation
## Description
This tool is a wrapper around the composio toolset and gives your agent access to a wide variety of tools from the Composio SDK.
## Installation
To incorporate this tool into your project, follow the installation instructions below:
```shell
pip install composio-core
pip install 'crewai[tools]'
```
After the installation is complete, either run `composio login` or export your Composio API key as `COMPOSIO_API_KEY`.
## Example
The following example demonstrates how to initialize the tool and execute a github action:
1. Initialize toolset
```python
from composio import App, Action
from crewai_tools import ComposioTool
from crewai import Agent, Task
tools = [ComposioTool.from_action(action=Action.GITHUB_ACTIVITY_STAR_REPO_FOR_AUTHENTICATED_USER)]
```
If you don't know what action you want to use, use `from_app` and `tags` filter to get relevant actions
```python
tools = ComposioTool.from_app(App.GITHUB, tags=["important"])
```
or use `use_case` to search relevant actions
```python
tools = ComposioTool.from_app(App.GITHUB, use_case="Star a github repository")
```
2. Define agent
```python
crewai_agent = Agent(
    role="Github Agent",
    goal="You take action on Github using Github APIs",
    backstory=(
        "You are an AI agent responsible for taking actions on Github "
        "on the user's behalf. You need to take action on Github using Github APIs."
    ),
    verbose=True,
    tools=tools,
)
```
3. Execute task
```python
task = Task(
    description="Star a repo ComposioHQ/composio on GitHub",
    agent=crewai_agent,
    expected_output="if the star happened",
)
task.execute()
```
* A more detailed list of tools can be found [here](https://app.composio.dev).

View File

@@ -48,7 +48,7 @@ tool = DOCXSearchTool(
),
),
embedder=dict(
provider="google",
provider="google", # or openai, ollama, ...
config=dict(
model="models/embedding-001",
task_type="retrieval_document",

View File

@@ -43,7 +43,7 @@ tool = DirectorySearchTool(
),
),
embedder=dict(
provider="google",
provider="google", # or openai, ollama, ...
config=dict(
model="models/embedding-001",
task_type="retrieval_document",

View File

@@ -0,0 +1,36 @@
# EXASearchTool Documentation
## Description
The EXASearchTool is designed to perform a semantic search for a specified query from a text's content across the internet. It utilizes the [exa.ai](https://exa.ai/) API to fetch and display the most relevant search results based on the query provided by the user.
## Installation
To incorporate this tool into your project, follow the installation instructions below:
```shell
pip install 'crewai[tools]'
```
## Example
The following example demonstrates how to initialize the tool and execute a search with a given query:
```python
from crewai_tools import EXASearchTool
# Initialize the tool for internet searching capabilities
tool = EXASearchTool()
```
## Steps to Get Started
To effectively use the EXASearchTool, follow these steps:
1. **Package Installation**: Confirm that the `crewai[tools]` package is installed in your Python environment.
2. **API Key Acquisition**: Acquire an [exa.ai](https://exa.ai/) API key by registering for a free account at [exa.ai](https://exa.ai/).
3. **Environment Configuration**: Store your obtained API key in an environment variable named `EXA_API_KEY` to facilitate its use by the tool, as sketched below.
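Putting the steps together (a minimal sketch; the key value is a placeholder):

```python
import os
from crewai_tools import EXASearchTool

os.environ["EXA_API_KEY"] = "your-exa-api-key"  # placeholder; see step 3 above

tool = EXASearchTool()
```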
## Conclusion
By integrating the EXASearchTool into Python projects, users gain the ability to conduct real-time, relevant searches across the internet directly from their applications. By adhering to the setup and usage guidelines provided, incorporating this tool into projects is streamlined and straightforward.

View File

@@ -55,7 +55,7 @@ tool = GithubSearchTool(
),
),
embedder=dict(
provider="google",
provider="google", # or openai, ollama, ...
config=dict(
model="models/embedding-001",
task_type="retrieval_document",

View File

@@ -48,7 +48,7 @@ tool = JSONSearchTool(
},
},
"embedder": {
"provider": "google",
"provider": "google", # or openai, ollama, ...
"config": {
"model": "models/embedding-001",
"task_type": "retrieval_document",

View File

@@ -49,7 +49,7 @@ tool = MDXSearchTool(
),
),
embedder=dict(
provider="google",
provider="google", # or openai, ollama, ...
config=dict(
model="models/embedding-001",
task_type="retrieval_document",

View File

@@ -29,7 +29,7 @@ tool = PDFSearchTool(pdf='path/to/your/document.pdf')
```
## Arguments
- `pdf`: **Optinal** The PDF path for the search. Can be provided at initialization or within the `run` method's arguments. If provided at initialization, the tool confines its search to the specified document.
- `pdf`: **Optional** The PDF path for the search. Can be provided at initialization or within the `run` method's arguments. If provided at initialization, the tool confines its search to the specified document.
## Custom model and embeddings
@@ -48,7 +48,7 @@ tool = PDFSearchTool(
),
),
embedder=dict(
provider="google",
provider="google", # or openai, ollama, ...
config=dict(
model="models/embedding-001",
task_type="retrieval_document",

View File

@@ -19,7 +19,7 @@ pip install 'crewai[tools]'
Below is a proposed example showcasing how to use the PGSearchTool for conducting a semantic search on a table within a PostgreSQL database:
```python
rom crewai_tools import PGSearchTool
from crewai_tools import PGSearchTool
# Initialize the tool with the database URI and the target table name
tool = PGSearchTool(db_uri='postgresql://user:password@localhost:5432/mydatabase', table_name='employees')
@@ -48,7 +48,7 @@ tool = PGSearchTool(
),
),
embedder=dict(
provider="google",
provider="google", # or openai, ollama, ...
config=dict(
model="models/embedding-001",
task_type="retrieval_document",
@@ -57,4 +57,4 @@ tool = PGSearchTool(
),
)
)
```

View File

@@ -50,7 +50,7 @@ tool = TXTSearchTool(
),
),
embedder=dict(
provider="google",
provider="google", # or openai, ollama, ...
config=dict(
model="models/embedding-001",
task_type="retrieval_document",

View File

@@ -48,7 +48,7 @@ tool = WebsiteSearchTool(
),
),
embedder=dict(
provider="google",
provider="google", # or openai, ollama, ...
config=dict(
model="models/embedding-001",
task_type="retrieval_document",

View File

@@ -17,7 +17,7 @@ pip install 'crewai[tools]'
Here are two examples demonstrating how to use the XMLSearchTool. The first example shows searching within a specific XML file, while the second example illustrates initiating a search without predefining an XML path, providing flexibility in search scope.
```python
from crewai_tools.tools.xml_search_tool import XMLSearchTool
from crewai_tools import XMLSearchTool
# Allow agents to search within any XML file's content as it learns about their paths during execution
tool = XMLSearchTool()
@@ -48,7 +48,7 @@ tool = XMLSearchTool(
),
),
embedder=dict(
provider="google",
provider="google", # or openai, ollama, ...
config=dict(
model="models/embedding-001",
task_type="retrieval_document",

View File

@@ -48,7 +48,7 @@ tool = YoutubeChannelSearchTool(
),
),
embedder=dict(
provider="google",
provider="google", # or openai, ollama, ...
config=dict(
model="models/embedding-001",
task_type="retrieval_document",

View File

@@ -52,7 +52,7 @@ tool = YoutubeVideoSearchTool(
),
),
embedder=dict(
provider="google",
provider="google", # or openai, ollama, ...
config=dict(
model="models/embedding-001",
task_type="retrieval_document",

View File

@@ -126,19 +126,33 @@ nav:
- Processes: 'core-concepts/Processes.md'
- Crews: 'core-concepts/Crews.md'
- Collaboration: 'core-concepts/Collaboration.md'
- Training: 'core-concepts/Training-Crew.md'
- Memory: 'core-concepts/Memory.md'
- Using LangChain Tools: 'core-concepts/Using-LangChain-Tools.md'
- Using LlamaIndex Tools: 'core-concepts/Using-LlamaIndex-Tools.md'
- How to Guides:
- Installing CrewAI: 'how-to/Installing-CrewAI.md'
- Getting Started: 'how-to/Creating-a-Crew-and-kick-it-off.md'
- Create Custom Tools: 'how-to/Create-Custom-Tools.md'
- Using Sequential Process: 'how-to/Sequential.md'
- Using Hierarchical Process: 'how-to/Hierarchical.md'
- Create your own Manager Agent: 'how-to/Your-Own-Manager-Agent.md'
- Connecting to any LLM: 'how-to/LLM-Connections.md'
- Customizing Agents: 'how-to/Customizing-Agents.md'
- Coding Agents: 'how-to/Coding-Agents.md'
- Human Input on Execution: 'how-to/Human-Input-on-Execution.md'
- Agent Observability using AgentOps: 'how-to/AgentOps-Observability.md'
- Kickoff a Crew Asynchronously: 'how-to/Kickoff-async.md'
- Kickoff a Crew for a List: 'how-to/Kickoff-for-each.md'
- Agent Monitoring with AgentOps: 'how-to/AgentOps-Observability.md'
- Agent Monitoring with LangTrace: 'how-to/Langtrace-Observability.md'
- Tools Docs:
- Google Serper Search: 'tools/SerperDevTool.md'
- Browserbase Web Loader: 'tools/BrowserbaseLoadTool.md'
- Composio Tools: 'tools/ComposioTool.md'
- Code Interpreter: 'tools/CodeInterpreterTool.md'
- Scrape Website: 'tools/ScrapeWebsiteTool.md'
- Directory Read: 'tools/DirectoryReadTool.md'
- Exa Search Web Loader: 'tools/EXASearchTool.md'
- File Read: 'tools/FileReadTool.md'
- Selenium Scraper: 'tools/SeleniumScrapingTool.md'
- Directory RAG Search: 'tools/DirectorySearchTool.md'
@@ -171,6 +185,7 @@ extra_css:
plugins:
- social
- search
extra:
analytics:

poetry.lock generated

File diff suppressed because it is too large

View File

@@ -1,12 +1,10 @@
[tool.poetry]
name = "crewai"
version = "0.27.0"
version = "0.35.8"
description = "Cutting-edge framework for orchestrating role-playing, autonomous AI agents. By fostering collaborative intelligence, CrewAI empowers agents to work together seamlessly, tackling complex tasks."
authors = ["Joao Moura <joao@crewai.com>"]
readme = "README.md"
packages = [
{ include = "crewai", from = "src" },
]
packages = [{ include = "crewai", from = "src" }]
[tool.poetry.urls]
Homepage = "https://crewai.com"
@@ -16,40 +14,38 @@ Repository = "https://github.com/joaomdmoura/crewai"
[tool.poetry.dependencies]
python = ">=3.10,<=3.13"
pydantic = "^2.4.2"
langchain = "^0.1.10"
langchain = ">=0.1.4,<0.2.0"
openai = "^1.13.3"
opentelemetry-api = "^1.22.0"
opentelemetry-sdk = "^1.22.0"
opentelemetry-exporter-otlp-proto-http = "^1.22.0"
instructor = "^0.5.2"
instructor = "1.3.3"
regex = "^2023.12.25"
crewai-tools = { version = "^0.1.4", optional = true }
crewai-tools = { version = "^0.4.6", optional = true }
click = "^8.1.7"
python-dotenv = "1.0.0"
embedchain = "^0.1.98"
python-dotenv = "^1.0.0"
appdirs = "^1.4.4"
jsonref = "^1.1.0"
agentops = { version = "^0.1.9", optional = true }
embedchain = "^0.1.113"
[tool.poetry.extras]
tools = ["crewai-tools"]
agentops = ["agentops"]
[tool.poetry.group.dev.dependencies]
isort = "^5.13.2"
pyright = ">=1.1.350,<2.0.0"
mypy = "1.10.0"
autoflake = "^2.2.1"
pre-commit = "^3.6.0"
mkdocs = "^1.4.3"
mkdocstrings = "^0.22.0"
mkdocstrings-python = "^1.1.2"
mkdocs-material = {extras = ["imaging"], version = "^9.5.7"}
mkdocs-material = { extras = ["imaging"], version = "^9.5.7" }
mkdocs-material-extensions = "^1.3.1"
pillow = "^10.2.0"
cairosvg = "^2.7.1"
crewai_tools = "^0.1.4"
[tool.isort]
profile = "black"
known_first_party = ["crewai"]
crewai-tools = "^0.4.6"
[tool.poetry.group.test.dependencies]
pytest = "^8.0.0"
@@ -59,6 +55,11 @@ python-dotenv = "1.0.0"
[tool.poetry.scripts]
crewai = "crewai.cli.cli:crewai"
[tool.mypy]
ignore_missing_imports = true
disable_error_code = 'import-untyped'
exclude = ["cli/templates/main.py", "cli/templates/crew.py"]
[build-system]
requires = ["poetry-core"]
build-backend = "poetry.core.masonry.api"

View File

@@ -2,3 +2,5 @@ from crewai.agent import Agent
from crewai.crew import Crew
from crewai.process import Process
from crewai.task import Task
__all__ = ["Agent", "Crew", "Process", "Task"]

View File

@@ -1,6 +1,5 @@
import os
import uuid
from typing import Any, Dict, List, Optional, Tuple
from typing import Any, List, Optional, Tuple
from langchain.agents.agent import RunnableAgent
from langchain.agents.tools import tool as LangChainTool
@@ -8,25 +7,32 @@ from langchain.tools.render import render_text_description
from langchain_core.agents import AgentAction
from langchain_core.callbacks import BaseCallbackHandler
from langchain_openai import ChatOpenAI
from pydantic import (
UUID4,
BaseModel,
ConfigDict,
Field,
InstanceOf,
PrivateAttr,
field_validator,
model_validator,
)
from pydantic_core import PydanticCustomError
from pydantic import Field, InstanceOf, model_validator
from crewai.agents import CacheHandler, CrewAgentExecutor, CrewAgentParser, ToolsHandler
from crewai.agents import CacheHandler, CrewAgentExecutor, CrewAgentParser
from crewai.agents.agent_builder.base_agent import BaseAgent
from crewai.memory.contextual.contextual_memory import ContextualMemory
from crewai.utilities import I18N, Logger, Prompts, RPMController
from crewai.utilities.token_counter_callback import TokenCalcHandler, TokenProcess
from crewai.tools.agent_tools import AgentTools
from crewai.utilities import Converter, Prompts
from crewai.utilities.constants import TRAINED_AGENTS_DATA_FILE, TRAINING_DATA_FILE
from crewai.utilities.token_counter_callback import TokenCalcHandler
from crewai.utilities.training_handler import CrewTrainingHandler
agentops = None
try:
    import agentops
    from agentops import track_agent
except ImportError:

    def track_agent():
        def noop(f):
            return f

        return noop
class Agent(BaseModel):
@track_agent()
class Agent(BaseAgent):
"""Represents an agent in a system.
Each agent has a role, a goal, a backstory, and an optional language model (llm).
@@ -39,7 +45,7 @@ class Agent(BaseModel):
backstory: The backstory of the agent.
config: Dict representation of agent configuration.
llm: The language model that will run the agent.
function_calling_llm: The language model that will the tool calling for this agent, it overrides the crew function_calling_llm.
function_calling_llm: The language model that will handle the tool calling for this agent, it overrides the crew function_calling_llm.
max_iter: Maximum number of iterations for an agent to execute a task.
memory: Whether the agent should have memory or not.
max_rpm: Maximum number of requests per minute for the agent execution to be respected.
@@ -50,53 +56,12 @@ class Agent(BaseModel):
callbacks: A list of callback functions from the langchain library that are triggered during the agent's execution process
"""
__hash__ = object.__hash__ # type: ignore
_logger: Logger = PrivateAttr()
_rpm_controller: RPMController = PrivateAttr(default=None)
_request_within_rpm_limit: Any = PrivateAttr(default=None)
_token_process: TokenProcess = TokenProcess()
formatting_errors: int = 0
model_config = ConfigDict(arbitrary_types_allowed=True)
id: UUID4 = Field(
default_factory=uuid.uuid4,
frozen=True,
description="Unique identifier for the object, not set by user.",
)
role: str = Field(description="Role of the agent")
goal: str = Field(description="Objective of the agent")
backstory: str = Field(description="Backstory of the agent")
cache: bool = Field(
default=True,
description="Whether the agent should use a cache for tool usage.",
)
config: Optional[Dict[str, Any]] = Field(
description="Configuration for the agent",
max_execution_time: Optional[int] = Field(
default=None,
description="Maximum execution time for an agent to execute a task",
)
max_rpm: Optional[int] = Field(
default=None,
description="Maximum number of requests per minute for the agent execution to be respected.",
)
verbose: bool = Field(
default=False, description="Verbose mode for the Agent Execution"
)
allow_delegation: bool = Field(
default=True, description="Allow delegation of tasks to agents"
)
tools: Optional[List[Any]] = Field(
default_factory=list, description="Tools at agents disposal"
)
max_iter: Optional[int] = Field(
default=15, description="Maximum iterations for an agent to execute a task"
)
agent_executor: InstanceOf[CrewAgentExecutor] = Field(
default=None, description="An instance of the CrewAgentExecutor class."
)
crew: Any = Field(default=None, description="Crew to which the agent belongs.")
tools_handler: InstanceOf[ToolsHandler] = Field(
default=None, description="An instance of the ToolsHandler class."
)
agent_ops_agent_name: str = None
agent_ops_agent_id: str = None
cache_handler: InstanceOf[CacheHandler] = Field(
default=None, description="An instance of the CacheHandler class."
)
@@ -104,10 +69,9 @@ class Agent(BaseModel):
default=None,
description="Callback to be executed after each step of the agent execution.",
)
i18n: I18N = Field(default=I18N(), description="Internationalization settings.")
llm: Any = Field(
default_factory=lambda: ChatOpenAI(
model=os.environ.get("OPENAI_MODEL_NAME", "gpt-4")
model=os.environ.get("OPENAI_MODEL_NAME", "gpt-4o")
),
description="Language model that will run the agent.",
)
@@ -117,48 +81,46 @@ class Agent(BaseModel):
callbacks: Optional[List[InstanceOf[BaseCallbackHandler]]] = Field(
default=None, description="Callback to be executed"
)
_original_role: str | None = None
_original_goal: str | None = None
_original_backstory: str | None = None
system_template: Optional[str] = Field(
default=None, description="System format for the agent."
)
prompt_template: Optional[str] = Field(
default=None, description="Prompt format for the agent."
)
response_template: Optional[str] = Field(
default=None, description="Response format for the agent."
)
allow_code_execution: Optional[bool] = Field(
default=False, description="Enable code execution for the agent."
)
def __init__(__pydantic_self__, **data):
config = data.pop("config", {})
super().__init__(**config, **data)
@field_validator("id", mode="before")
@classmethod
def _deny_user_set_id(cls, v: Optional[UUID4]) -> None:
if v:
raise PydanticCustomError(
"may_not_set_field", "This field is not to be set by the user.", {}
)
@model_validator(mode="after")
def set_attributes_based_on_config(self) -> "Agent":
"""Set attributes based on the agent configuration."""
if self.config:
for key, value in self.config.items():
setattr(self, key, value)
return self
@model_validator(mode="after")
def set_private_attrs(self):
"""Set private attributes."""
self._logger = Logger(self.verbose)
if self.max_rpm and not self._rpm_controller:
self._rpm_controller = RPMController(
max_rpm=self.max_rpm, logger=self._logger
)
return self
__pydantic_self__.agent_ops_agent_name = __pydantic_self__.role
@model_validator(mode="after")
def set_agent_executor(self) -> "Agent":
"""set agent executor is set."""
"""Ensure agent executor and token process are set."""
if hasattr(self.llm, "model_name"):
self.llm.callbacks = [
TokenCalcHandler(self.llm.model_name, self._token_process)
]
token_handler = TokenCalcHandler(self.llm.model_name, self._token_process)
# Ensure self.llm.callbacks is a list
if not isinstance(self.llm.callbacks, list):
self.llm.callbacks = []
# Check if an instance of TokenCalcHandler already exists in the list
if not any(
isinstance(handler, TokenCalcHandler) for handler in self.llm.callbacks
):
self.llm.callbacks.append(token_handler)
if agentops and not any(
isinstance(handler, agentops.LangchainCallbackHandler) for handler in self.llm.callbacks
):
agentops.stop_instrumenting()
self.llm.callbacks.append(agentops.LangchainCallbackHandler())
if not self.agent_executor:
if not self.cache_handler:
self.cache_handler = CacheHandler()
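The callback wiring above is deliberately idempotent: a second validation pass must not attach a second TokenCalcHandler. A standalone sketch of that guard, with stand-in classes rather than the real langchain types:

```python
class TokenCalcHandler:  # stand-in for the real handler
    pass

class StubLLM:  # stand-in for a langchain chat model
    callbacks = None

llm = StubLLM()
for _ in range(2):  # simulate the validator running twice
    if not isinstance(llm.callbacks, list):
        llm.callbacks = []
    if not any(isinstance(h, TokenCalcHandler) for h in llm.callbacks):
        llm.callbacks.append(TokenCalcHandler())
assert len(llm.callbacks) == 1  # still only one handler registered
```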
@@ -182,6 +144,7 @@ class Agent(BaseModel):
Output of the agent
"""
if self.tools_handler:
# type: ignore # Incompatible types in assignment (expression has type "dict[Never, Never]", variable has type "ToolCalling")
self.tools_handler.last_used_tool = {}
task_prompt = task.prompt()
@@ -202,8 +165,8 @@ class Agent(BaseModel):
task_prompt += self.i18n.slice("memory").format(memory=memory)
tools = tools or self.tools
parsed_tools = self._parse_tools(tools)
# type: ignore # Argument 1 to "_parse_tools" of "Agent" has incompatible type "list[Any] | None"; expected "list[Any]"
parsed_tools = self._parse_tools(tools or [])
self.create_agent_executor(tools=tools)
self.agent_executor.tools = parsed_tools
self.agent_executor.task = task
@@ -211,6 +174,11 @@ class Agent(BaseModel):
self.agent_executor.tools_description = render_text_description(parsed_tools)
self.agent_executor.tools_names = self.__tools_names(parsed_tools)
if self.crew and self.crew._train:
task_prompt = self._training_handler(task_prompt=task_prompt)
else:
task_prompt = self._use_trained_data(task_prompt=task_prompt)
result = self.agent_executor.invoke(
{
"input": task_prompt,
@@ -218,33 +186,22 @@ class Agent(BaseModel):
"tools": self.agent_executor.tools_description,
}
)["output"]
if self.max_rpm:
self._rpm_controller.stop_rpm_counter()
return result
def set_cache_handler(self, cache_handler: CacheHandler) -> None:
"""Set the cache handler for the agent.
Args:
cache_handler: An instance of the CacheHandler class.
"""
self.tools_handler = ToolsHandler()
if self.cache:
self.cache_handler = cache_handler
self.tools_handler.cache = cache_handler
self.create_agent_executor()
def set_rpm_controller(self, rpm_controller: RPMController) -> None:
"""Set the rpm controller for the agent.
Args:
rpm_controller: An instance of the RPMController class.
"""
if not self._rpm_controller:
self._rpm_controller = rpm_controller
self.create_agent_executor()
def format_log_to_str(
self,
intermediate_steps: List[Tuple[AgentAction, str]],
observation_prefix: str = "Observation: ",
llm_prefix: str = "",
) -> str:
"""Construct the scratchpad that lets the agent continue its thought process."""
thoughts = ""
for action, observation in intermediate_steps:
thoughts += action.log
thoughts += f"\n{observation_prefix}{observation}\n{llm_prefix}"
return thoughts
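To make the scratchpad format concrete, here is a small dry run of the same loop with a namedtuple standing in for langchain's AgentAction (only the `.log` attribute is used):

```python
from collections import namedtuple

Action = namedtuple("Action", ["log"])  # stand-in for AgentAction
steps = [
    (Action(log="Thought: I need the date\nAction: clock\nAction Input: {}"), "2024-07-03"),
]

thoughts = ""
for action, observation in steps:
    thoughts += action.log
    thoughts += f"\nObservation: {observation}\n"
print(thoughts)  # the agent resumes from this accumulated scratchpad
```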
def create_agent_executor(self, tools=None) -> None:
"""Create an agent executor for the agent.
@@ -273,6 +230,7 @@ class Agent(BaseModel):
"original_tools": tools,
"handle_parsing_errors": True,
"max_iterations": self.max_iter,
"max_execution_time": self.max_execution_time,
"step_callback": self.step_callback,
"tools_handler": self.tools_handler,
"function_calling_llm": self.function_calling_llm,
@@ -280,11 +238,17 @@ class Agent(BaseModel):
}
if self._rpm_controller:
executor_args[
"request_within_rpm_limit"
] = self._rpm_controller.check_or_wait
executor_args["request_within_rpm_limit"] = (
self._rpm_controller.check_or_wait
)
prompt = Prompts(i18n=self.i18n, tools=tools).task_execution()
prompt = Prompts(
i18n=self.i18n,
tools=tools,
system_template=self.system_template,
prompt_template=self.prompt_template,
response_template=self.response_template,
).task_execution()
execution_prompt = prompt.partial(
goal=self.goal,
@@ -292,48 +256,43 @@ class Agent(BaseModel):
backstory=self.backstory,
)
bind = self.llm.bind(stop=[self.i18n.slice("observation")])
stop_words = [self.i18n.slice("observation")]
if self.response_template:
stop_words.append(
self.response_template.split("{{ .Response }}")[1].strip()
)
bind = self.llm.bind(stop=stop_words)
inner_agent = agent_args | execution_prompt | bind | CrewAgentParser(agent=self)
self.agent_executor = CrewAgentExecutor(
agent=RunnableAgent(runnable=inner_agent), **executor_args
)
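One detail worth calling out: when a response_template is set, everything after the `{{ .Response }}` marker becomes an additional stop sequence. A standalone sketch of that split (the template text is illustrative):

```python
response_template = "### Response\n{{ .Response }}\n### End"  # illustrative template
stop_words = ["\nObservation:"]
# Everything after the marker is stripped and used as an extra stop word.
stop_words.append(response_template.split("{{ .Response }}")[1].strip())
assert stop_words == ["\nObservation:", "### End"]
```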
def interpolate_inputs(self, inputs: Dict[str, Any]) -> None:
"""Interpolate inputs into the agent description and backstory."""
if self._original_role is None:
self._original_role = self.role
if self._original_goal is None:
self._original_goal = self.goal
if self._original_backstory is None:
self._original_backstory = self.backstory
def get_delegation_tools(self, agents: List[BaseAgent]):
agent_tools = AgentTools(agents=agents)
tools = agent_tools.tools()
return tools
if inputs:
self.role = self._original_role.format(**inputs)
self.goal = self._original_goal.format(**inputs)
self.backstory = self._original_backstory.format(**inputs)
def get_code_execution_tools(self):
try:
from crewai_tools import CodeInterpreterTool
def increment_formatting_errors(self) -> None:
"""Count the formatting errors of the agent."""
self.formatting_errors += 1
return [CodeInterpreterTool()]
except ModuleNotFoundError:
self._logger.log(
"info", "Coding tools not available. Install crewai_tools. "
)
def format_log_to_str(
self,
intermediate_steps: List[Tuple[AgentAction, str]],
observation_prefix: str = "Observation: ",
llm_prefix: str = "",
) -> str:
"""Construct the scratchpad that lets the agent continue its thought process."""
thoughts = ""
for action, observation in intermediate_steps:
thoughts += action.log
thoughts += f"\n{observation_prefix}{observation}\n{llm_prefix}"
return thoughts
def get_output_converter(self, llm, text, model, instructions):
return Converter(llm=llm, text=text, model=model, instructions=instructions)
def _parse_tools(self, tools: List[Any]) -> List[LangChainTool]:
"""Parse tools to be used for the task."""
# tentatively try to import from crewai_tools import BaseTool as CrewAITool
tools_list = []
try:
# tentatively try to import from crewai_tools import BaseTool as CrewAITool
from crewai_tools import BaseTool as CrewAITool
for tool in tools:
@@ -342,10 +301,35 @@ class Agent(BaseModel):
else:
tools_list.append(tool)
except ModuleNotFoundError:
tools_list = []
for tool in tools:
tools_list.append(tool)
return tools_list
def _training_handler(self, task_prompt: str) -> str:
"""Handle training data for the agent task prompt to improve output on Training."""
if data := CrewTrainingHandler(TRAINING_DATA_FILE).load():
agent_id = str(self.id)
if data.get(agent_id):
human_feedbacks = [
i["human_feedback"] for i in data.get(agent_id, {}).values()
]
task_prompt += "You MUST follow these feedbacks: \n " + "\n - ".join(
human_feedbacks
)
return task_prompt
def _use_trained_data(self, task_prompt: str) -> str:
"""Use trained data for the agent task prompt to improve output."""
if data := CrewTrainingHandler(TRAINED_AGENTS_DATA_FILE).load():
if trained_data_output := data.get(self.role):
task_prompt += "You MUST follow these feedbacks: \n " + "\n - ".join(
trained_data_output["suggestions"]
)
return task_prompt
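Both helpers append stored feedback to the task prompt. A hedged sketch of the data shape `_training_handler` expects (the dict literal is an illustrative assumption, keyed by agent id and iteration, consistent with the executor changes further down):

```python
# Assumed shape of TRAINING_DATA_FILE contents (illustrative only).
training_data = {
    "agent-uuid": {
        0: {"human_feedback": "Cite your sources."},
        1: {"human_feedback": "Be more concise."},
    }
}
task_prompt = "Write a report."
feedbacks = [i["human_feedback"] for i in training_data["agent-uuid"].values()]
task_prompt += "You MUST follow these feedbacks: \n " + "\n - ".join(feedbacks)
print(task_prompt)
```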
@staticmethod
def __tools_names(tools) -> str:
return ", ".join([t.name for t in tools])

View File

@@ -0,0 +1,256 @@
import uuid
from abc import ABC, abstractmethod
from copy import copy as shallow_copy
from typing import Any, Dict, List, Optional, TypeVar
from pydantic import (
UUID4,
BaseModel,
ConfigDict,
Field,
InstanceOf,
PrivateAttr,
field_validator,
model_validator,
)
from pydantic_core import PydanticCustomError
from crewai.agents.agent_builder.utilities.base_token_process import TokenProcess
from crewai.agents.cache.cache_handler import CacheHandler
from crewai.agents.tools_handler import ToolsHandler
from crewai.utilities import I18N, Logger, RPMController
T = TypeVar("T", bound="BaseAgent")
class BaseAgent(ABC, BaseModel):
"""Abstract Base Class for all third party agents compatible with CrewAI.
Attributes:
id (UUID4): Unique identifier for the agent.
role (str): Role of the agent.
goal (str): Objective of the agent.
backstory (str): Backstory of the agent.
cache (bool): Whether the agent should use a cache for tool usage.
config (Optional[Dict[str, Any]]): Configuration for the agent.
verbose (bool): Verbose mode for the Agent Execution.
max_rpm (Optional[int]): Maximum number of requests per minute for the agent execution.
allow_delegation (bool): Allow delegation of tasks to agents.
tools (Optional[List[Any]]): Tools at the agent's disposal.
max_iter (Optional[int]): Maximum iterations for an agent to execute a task.
agent_executor (InstanceOf): An instance of the CrewAgentExecutor class.
llm (Any): Language model that will run the agent.
crew (Any): Crew to which the agent belongs.
i18n (I18N): Internationalization settings.
cache_handler (InstanceOf[CacheHandler]): An instance of the CacheHandler class.
tools_handler (InstanceOf[ToolsHandler]): An instance of the ToolsHandler class.
Methods:
execute_task(task: Any, context: Optional[str] = None, tools: Optional[List[Any]] = None) -> str:
Abstract method to execute a task.
create_agent_executor(tools=None) -> None:
Abstract method to create an agent executor.
_parse_tools(tools: List[Any]) -> List[Any]:
Abstract method to parse tools.
get_delegation_tools(agents: List["BaseAgent"]):
Abstract method to set the agent's task tools for handling delegation and asking questions of other agents in the crew.
get_output_converter(llm, text, model, instructions):
Abstract method to get the converter class for the agent to create json/pydantic outputs.
interpolate_inputs(inputs: Dict[str, Any]) -> None:
Interpolate inputs into the agent description and backstory.
set_cache_handler(cache_handler: CacheHandler) -> None:
Set the cache handler for the agent.
increment_formatting_errors() -> None:
Increment formatting errors.
copy() -> "BaseAgent":
Create a copy of the agent.
set_rpm_controller(rpm_controller: RPMController) -> None:
Set the rpm controller for the agent.
set_private_attrs() -> "BaseAgent":
Set private attributes.
"""
__hash__ = object.__hash__ # type: ignore
_logger: Logger = PrivateAttr()
_rpm_controller: RPMController = PrivateAttr(default=None)
_request_within_rpm_limit: Any = PrivateAttr(default=None)
formatting_errors: int = 0
model_config = ConfigDict(arbitrary_types_allowed=True)
id: UUID4 = Field(default_factory=uuid.uuid4, frozen=True)
role: str = Field(description="Role of the agent")
goal: str = Field(description="Objective of the agent")
backstory: str = Field(description="Backstory of the agent")
cache: bool = Field(
default=True, description="Whether the agent should use a cache for tool usage."
)
config: Optional[Dict[str, Any]] = Field(
description="Configuration for the agent", default=None
)
verbose: bool = Field(
default=False, description="Verbose mode for the Agent Execution"
)
max_rpm: Optional[int] = Field(
default=None,
description="Maximum number of requests per minute for the agent execution to be respected.",
)
allow_delegation: bool = Field(
default=True, description="Allow delegation of tasks to agents"
)
tools: Optional[List[Any]] = Field(
default_factory=list, description="Tools at agents' disposal"
)
max_iter: Optional[int] = Field(
default=25, description="Maximum iterations for an agent to execute a task"
)
agent_executor: InstanceOf = Field(
default=None, description="An instance of the CrewAgentExecutor class."
)
llm: Any = Field(
default=None, description="Language model that will run the agent."
)
crew: Any = Field(default=None, description="Crew to which the agent belongs.")
i18n: I18N = Field(default=I18N(), description="Internationalization settings.")
cache_handler: InstanceOf[CacheHandler] = Field(
default=None, description="An instance of the CacheHandler class."
)
tools_handler: InstanceOf[ToolsHandler] = Field(
default=None, description="An instance of the ToolsHandler class."
)
_original_role: str | None = None
_original_goal: str | None = None
_original_backstory: str | None = None
_token_process: TokenProcess = TokenProcess()
def __init__(__pydantic_self__, **data):
config = data.pop("config", {})
super().__init__(**config, **data)
@model_validator(mode="after")
def set_config_attributes(self):
if self.config:
for key, value in self.config.items():
setattr(self, key, value)
return self
@field_validator("id", mode="before")
@classmethod
def _deny_user_set_id(cls, v: Optional[UUID4]) -> None:
if v:
raise PydanticCustomError(
"may_not_set_field", "This field is not to be set by the user.", {}
)
@model_validator(mode="after")
def set_attributes_based_on_config(self) -> "BaseAgent":
"""Set attributes based on the agent configuration."""
if self.config:
for key, value in self.config.items():
setattr(self, key, value)
return self
@model_validator(mode="after")
def set_private_attrs(self):
"""Set private attributes."""
self._logger = Logger(self.verbose)
if self.max_rpm and not self._rpm_controller:
self._rpm_controller = RPMController(
max_rpm=self.max_rpm, logger=self._logger
)
if not self._token_process:
self._token_process = TokenProcess()
return self
@abstractmethod
def execute_task(
self,
task: Any,
context: Optional[str] = None,
tools: Optional[List[Any]] = None,
) -> str:
pass
@abstractmethod
def create_agent_executor(self, tools=None) -> None:
pass
@abstractmethod
def _parse_tools(self, tools: List[Any]) -> List[Any]:
pass
@abstractmethod
def get_delegation_tools(self, agents: List["BaseAgent"]):
"""Set the task tools that init BaseAgenTools class."""
pass
@abstractmethod
def get_output_converter(
self, llm: Any, text: str, model: type[BaseModel] | None, instructions: str
):
"""Get the converter class for the agent to create json/pydantic outputs."""
pass
def copy(self: T) -> T:
"""Create a deep copy of the Agent."""
exclude = {
"id",
"_logger",
"_rpm_controller",
"_request_within_rpm_limit",
"_token_process",
"agent_executor",
"tools",
"tools_handler",
"cache_handler",
"llm",
}
# Copy llm and clear callbacks
existing_llm = shallow_copy(self.llm)
existing_llm.callbacks = []
copied_data = self.model_dump(exclude=exclude)
copied_data = {k: v for k, v in copied_data.items() if v is not None}
copied_agent = type(self)(**copied_data, llm=existing_llm, tools=self.tools)
return copied_agent
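The copy deliberately hands the clone a shallow-copied llm with its callbacks cleared, so token handlers are not shared across crews. A toy illustration of that one step:

```python
from copy import copy as shallow_copy

class ToyLLM:  # stand-in for a chat model carrying callbacks
    def __init__(self):
        self.callbacks = ["original-token-handler"]

original = ToyLLM()
clone = shallow_copy(original)
clone.callbacks = []  # the clone starts with no handlers of its own

assert original.callbacks == ["original-token-handler"]
assert clone.callbacks == []
```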
def interpolate_inputs(self, inputs: Dict[str, Any]) -> None:
"""Interpolate inputs into the agent description and backstory."""
if self._original_role is None:
self._original_role = self.role
if self._original_goal is None:
self._original_goal = self.goal
if self._original_backstory is None:
self._original_backstory = self.backstory
if inputs:
self.role = self._original_role.format(**inputs)
self.goal = self._original_goal.format(**inputs)
self.backstory = self._original_backstory.format(**inputs)
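Because the originals are captured once, repeated interpolation always re-renders from the pristine templates rather than from already-rendered strings. For example:

```python
original_role = "Senior {topic} researcher"  # captured on first interpolation

role = original_role.format(topic="AI LLMs")
assert role == "Senior AI LLMs researcher"

# A later kickoff with new inputs renders from the stored template again.
role = original_role.format(topic="robotics")
assert role == "Senior robotics researcher"
```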
def set_cache_handler(self, cache_handler: CacheHandler) -> None:
"""Set the cache handler for the agent.
Args:
cache_handler: An instance of the CacheHandler class.
"""
self.tools_handler = ToolsHandler()
if self.cache:
self.cache_handler = cache_handler
self.tools_handler.cache = cache_handler
self.create_agent_executor()
def increment_formatting_errors(self) -> None:
self.formatting_errors += 1
def set_rpm_controller(self, rpm_controller: RPMController) -> None:
"""Set the rpm controller for the agent.
Args:
rpm_controller: An instance of the RPMController class.
"""
if not self._rpm_controller:
self._rpm_controller = rpm_controller
self.create_agent_executor()

View File

@@ -0,0 +1,109 @@
import time
from typing import TYPE_CHECKING, Optional
from crewai.memory.entity.entity_memory_item import EntityMemoryItem
from crewai.memory.long_term.long_term_memory_item import LongTermMemoryItem
from crewai.memory.short_term.short_term_memory_item import ShortTermMemoryItem
from crewai.utilities.converter import ConverterError
from crewai.utilities.evaluators.task_evaluator import TaskEvaluator
from crewai.utilities import I18N
if TYPE_CHECKING:
from crewai.crew import Crew
from crewai.task import Task
from crewai.agents.agent_builder.base_agent import BaseAgent
class CrewAgentExecutorMixin:
crew: Optional["Crew"]
crew_agent: Optional["BaseAgent"]
task: Optional["Task"]
iterations: int
force_answer_max_iterations: int
have_forced_answer: bool
_i18n: I18N
def _should_force_answer(self) -> bool:
"""Determine if a forced answer is required based on iteration count."""
return (
self.iterations == self.force_answer_max_iterations
) and not self.have_forced_answer
def _create_short_term_memory(self, output) -> None:
"""Create and save a short-term memory item if conditions are met."""
if (
self.crew
and self.crew_agent
and self.task
and "Action: Delegate work to coworker" not in output.log
):
try:
memory = ShortTermMemoryItem(
data=output.log,
agent=self.crew_agent.role,
metadata={
"observation": self.task.description,
},
)
if (
hasattr(self.crew, "_short_term_memory")
and self.crew._short_term_memory
):
self.crew._short_term_memory.save(memory)
except Exception as e:
print(f"Failed to add to short term memory: {e}")
pass
def _create_long_term_memory(self, output) -> None:
"""Create and save long-term and entity memory items based on evaluation."""
if (
self.crew
and self.crew.memory
and self.crew._long_term_memory
and self.crew._entity_memory
and self.task
and self.crew_agent
):
try:
ltm_agent = TaskEvaluator(self.crew_agent)
evaluation = ltm_agent.evaluate(self.task, output.log)
if isinstance(evaluation, ConverterError):
return
long_term_memory = LongTermMemoryItem(
task=self.task.description,
agent=self.crew_agent.role,
quality=evaluation.quality,
datetime=str(time.time()),
expected_output=self.task.expected_output,
metadata={
"suggestions": evaluation.suggestions,
"quality": evaluation.quality,
},
)
self.crew._long_term_memory.save(long_term_memory)
for entity in evaluation.entities:
entity_memory = EntityMemoryItem(
name=entity.name,
type=entity.type,
description=entity.description,
relationships="\n".join(
[f"- {r}" for r in entity.relationships]
),
)
self.crew._entity_memory.save(entity_memory)
except AttributeError as e:
print(f"Missing attributes for long term memory: {e}")
pass
except Exception as e:
print(f"Failed to add to long term memory: {e}")
pass
def _ask_human_input(self, final_answer: dict) -> str:
"""Prompt human input for final decision making."""
return input(
self._i18n.slice("getting_input").format(final_answer=final_answer)
)

View File

@@ -0,0 +1,81 @@
from abc import ABC, abstractmethod
from typing import List, Optional, Union
from pydantic import BaseModel, Field
from crewai.agents.agent_builder.base_agent import BaseAgent
from crewai.task import Task
from crewai.utilities import I18N
class BaseAgentTools(BaseModel, ABC):
"""Default tools around agent delegation"""
agents: List[BaseAgent] = Field(description="List of agents in this crew.")
i18n: I18N = Field(default=I18N(), description="Internationalization settings.")
@abstractmethod
def tools(self):
pass
def _get_coworker(self, coworker: Optional[str], **kwargs) -> Optional[str]:
coworker = coworker or kwargs.get("co_worker") or kwargs.get("coworker")
if coworker:
is_list = coworker.startswith("[") and coworker.endswith("]")
if is_list:
coworker = coworker[1:-1].split(",")[0]
return coworker
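The helper accepts either the `coworker` or `co_worker` spelling and unwraps a stringified list by taking its first element. A standalone dry run of the same normalization:

```python
def get_coworker(coworker=None, **kwargs):
    # Same logic as _get_coworker above, pulled out for illustration.
    coworker = coworker or kwargs.get("co_worker") or kwargs.get("coworker")
    if coworker and coworker.startswith("[") and coworker.endswith("]"):
        coworker = coworker[1:-1].split(",")[0]
    return coworker

assert get_coworker(co_worker="Researcher") == "Researcher"
assert get_coworker(coworker="[Researcher, Writer]") == "Researcher"
```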
def delegate_work(
self, task: str, context: str, coworker: Optional[str] = None, **kwargs
):
"""Useful to delegate a specific task to a coworker passing all necessary context and names."""
coworker = self._get_coworker(coworker, **kwargs)
return self._execute(coworker, task, context)
def ask_question(
self, question: str, context: str, coworker: Optional[str] = None, **kwargs
):
"""Useful to ask a question, opinion or take from a coworker passing all necessary context and names."""
coworker = self._get_coworker(coworker, **kwargs)
return self._execute(coworker, question, context)
def _execute(self, agent: Union[str, None], task: str, context: Union[str, None]):
"""Execute the command."""
try:
if agent is None:
agent = ""
# It is important to remove the quotes from the agent name.
# The reason we have to do this is because less-powerful LLM's
# have difficulty producing valid JSON.
# As a result, we end up with invalid JSON that is truncated like this:
# {"task": "....", "coworker": "....
# when it should look like this:
# {"task": "....", "coworker": "...."}
agent_name = agent.casefold().replace('"', "").replace("\n", "")
agent = [
available_agent
for available_agent in self.agents
if available_agent.role.casefold().replace("\n", "") == agent_name
]
except Exception as _:
return self.i18n.errors("agent_tool_unexsiting_coworker").format(
coworkers="\n".join(
[f"- {agent.role.casefold()}" for agent in self.agents]
)
)
if not agent:
return self.i18n.errors("agent_tool_unexsiting_coworker").format(
coworkers="\n".join(
[f"- {agent.role.casefold()}" for agent in self.agents]
)
)
agent = agent[0]
task = Task(
description=task,
agent=agent,
expected_output="Your best answer to your coworker asking you this, accounting for the context shared.",
)
return agent.execute_task(task, context)
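The role matching above intentionally strips quotes and newlines before comparing, since, as the comment notes, weaker LLMs emit truncated JSON. A standalone sketch of that normalization:

```python
available_roles = ["Researcher", "Senior Writer"]

raw = '"Researcher'  # truncated, still-quoted value from a weak LLM
agent_name = raw.casefold().replace('"', "").replace("\n", "")
matches = [
    role for role in available_roles
    if role.casefold().replace("\n", "") == agent_name
]
assert matches == ["Researcher"]
```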

View File

@@ -0,0 +1,48 @@
from abc import ABC, abstractmethod
from typing import Any, Optional
from pydantic import BaseModel, Field, PrivateAttr
class OutputConverter(BaseModel, ABC):
"""
Abstract base class for converting task results into structured formats.
This class provides a framework for converting unstructured text into
either Pydantic models or JSON, tailored for specific agent requirements.
It uses a language model to interpret and structure the input text based
on given instructions.
Attributes:
text (str): The input text to be converted.
llm (Any): The language model used for conversion.
model (Any): The target model for structuring the output.
instructions (str): Specific instructions for the conversion process.
max_attempts (int): Maximum number of conversion attempts (default: 3).
"""
_is_gpt: bool = PrivateAttr(default=True)
text: str = Field(description="Text to be converted.")
llm: Any = Field(description="The language model to be used to convert the text.")
model: Any = Field(description="The model to be used to convert the text.")
instructions: str = Field(description="Conversion instructions to the LLM.")
max_attemps: Optional[int] = Field(
description="Max number of attempts to try to get the output formatted.",
default=3,
)
@abstractmethod
def to_pydantic(self, current_attempt=1):
"""Convert text to pydantic."""
pass
@abstractmethod
def to_json(self, current_attempt=1):
"""Convert text to json."""
pass
@abstractmethod
def _is_gpt(self, llm):
"""Return if llm provided is of gpt from openai."""
pass

View File

@@ -0,0 +1,27 @@
from typing import Any, Dict
class TokenProcess:
total_tokens: int = 0
prompt_tokens: int = 0
completion_tokens: int = 0
successful_requests: int = 0
def sum_prompt_tokens(self, tokens: int):
self.prompt_tokens = self.prompt_tokens + tokens
self.total_tokens = self.total_tokens + tokens
def sum_completion_tokens(self, tokens: int):
self.completion_tokens = self.completion_tokens + tokens
self.total_tokens = self.total_tokens + tokens
def sum_successful_requests(self, requests: int):
self.successful_requests = self.successful_requests + requests
def get_summary(self) -> Dict[str, Any]:
return {
"total_tokens": self.total_tokens,
"prompt_tokens": self.prompt_tokens,
"completion_tokens": self.completion_tokens,
"successful_requests": self.successful_requests,
}
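Assuming the TokenProcess class above is in scope, usage is purely additive: callbacks feed counts in as requests complete, and get_summary() rolls them up.

```python
tp = TokenProcess()
tp.sum_prompt_tokens(120)
tp.sum_completion_tokens(45)
tp.sum_successful_requests(1)
assert tp.get_summary() == {
    "total_tokens": 165,
    "prompt_tokens": 120,
    "completion_tokens": 45,
    "successful_requests": 1,
}
```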

View File

@@ -1,28 +1,36 @@
import threading
import time
from typing import Any, Dict, Iterator, List, Optional, Tuple, Union
from typing import (
Any,
Dict,
Iterator,
List,
Optional,
Tuple,
Union,
)
from langchain.agents import AgentExecutor
from langchain.agents.agent import ExceptionTool
from langchain.callbacks.manager import CallbackManagerForChainRun
from langchain_core.agents import AgentAction, AgentFinish, AgentStep
from langchain_core.exceptions import OutputParserException
from langchain_core.pydantic_v1 import root_validator
from langchain_core.tools import BaseTool
from langchain_core.utils.input import get_color_mapping
from pydantic import InstanceOf
from crewai.agents.agent_builder.base_agent_executor_mixin import (
CrewAgentExecutorMixin,
)
from crewai.agents.tools_handler import ToolsHandler
from crewai.memory.entity.entity_memory_item import EntityMemoryItem
from crewai.memory.long_term.long_term_memory_item import LongTermMemoryItem
from crewai.memory.short_term.short_term_memory_item import ShortTermMemoryItem
from crewai.tools.tool_usage import ToolUsage, ToolUsageErrorException
from crewai.utilities.constants import TRAINING_DATA_FILE
from crewai.utilities.training_handler import CrewTrainingHandler
from crewai.utilities import I18N
from crewai.utilities.converter import ConverterError
from crewai.utilities.evaluators.task_evaluator import TaskEvaluator
class CrewAgentExecutor(AgentExecutor):
class CrewAgentExecutor(AgentExecutor, CrewAgentExecutorMixin):
_i18n: I18N = I18N()
should_ask_for_human_input: bool = False
llm: Any = None
@@ -35,68 +43,14 @@ class CrewAgentExecutor(AgentExecutor):
crew: Any = None
function_calling_llm: Any = None
request_within_rpm_limit: Any = None
tools_handler: InstanceOf[ToolsHandler] = None
tools_handler: Optional[InstanceOf[ToolsHandler]] = None
max_iterations: Optional[int] = 15
have_forced_answer: bool = False
force_answer_max_iterations: Optional[int] = None
step_callback: Optional[Any] = None
@root_validator()
def set_force_answer_max_iterations(cls, values: Dict) -> Dict:
values["force_answer_max_iterations"] = values["max_iterations"] - 2
return values
def _should_force_answer(self) -> bool:
return (
self.iterations == self.force_answer_max_iterations
) and not self.have_forced_answer
def _create_short_term_memory(self, output) -> None:
if (
self.crew
and self.crew.memory
and "Action: Delegate work to co-worker" not in output.log
):
memory = ShortTermMemoryItem(
data=output.log,
agent=self.crew_agent.role,
metadata={
"observation": self.task.description,
},
)
self.crew._short_term_memory.save(memory)
def _create_long_term_memory(self, output) -> None:
if self.crew and self.crew.memory:
ltm_agent = TaskEvaluator(self.crew_agent)
evaluation = ltm_agent.evaluate(self.task, output.log)
if isinstance(evaluation, ConverterError):
return
long_term_memory = LongTermMemoryItem(
task=self.task.description,
agent=self.crew_agent.role,
quality=evaluation.quality,
datetime=str(time.time()),
expected_output=self.task.expected_output,
metadata={
"suggestions": "\n".join(
[f"- {s}" for s in evaluation.suggestions]
),
"quality": evaluation.quality,
},
)
self.crew._long_term_memory.save(long_term_memory)
for entity in evaluation.entities:
entity_memory = EntityMemoryItem(
name=entity.name,
type=entity.type,
description=entity.description,
relationships="\n".join([f"- {r}" for r in entity.relationships]),
)
self.crew._entity_memory.save(entity_memory)
system_template: Optional[str] = None
prompt_template: Optional[str] = None
response_template: Optional[str] = None
def _call(
self,
@@ -115,6 +69,7 @@ class CrewAgentExecutor(AgentExecutor):
# Allowing human input given task setting
if self.task.human_input:
self.should_ask_for_human_input = True
# Let's start tracking the number of iterations and time elapsed
self.iterations = 0
time_elapsed = 0.0
@@ -130,8 +85,10 @@ class CrewAgentExecutor(AgentExecutor):
intermediate_steps,
run_manager=run_manager,
)
if self.step_callback:
self.step_callback(next_step_output)
if isinstance(next_step_output, AgentFinish):
# Creating long term memory
create_long_term_memory = threading.Thread(
@@ -185,7 +142,7 @@ class CrewAgentExecutor(AgentExecutor):
intermediate_steps = self._prepare_intermediate_steps(intermediate_steps)
# Call the LLM to see what to do.
output = self.agent.plan(
output = self.agent.plan( # type: ignore # Incompatible types in assignment (expression has type "AgentAction | AgentFinish | list[AgentAction]", variable has type "AgentAction")
intermediate_steps,
callbacks=run_manager.get_child() if run_manager else None,
**inputs,
@@ -242,12 +199,17 @@ class CrewAgentExecutor(AgentExecutor):
# If the tool chosen is the finishing tool, then we end and return.
if isinstance(output, AgentFinish):
if self.should_ask_for_human_input:
human_feedback = self._ask_human_input(output.return_values["output"])
if self.crew and self.crew._train:
self._handle_crew_training_output(output, human_feedback)
# Making sure we only ask for it once, so disabling for the next thought loop
self.should_ask_for_human_input = False
human_feedback = self._ask_human_input(output.return_values["output"])
action = AgentAction(
tool="Human Input", tool_input=human_feedback, log=output.log
)
yield AgentStep(
action=action,
observation=self._i18n.slice("human_feedback").format(
@@ -257,6 +219,9 @@ class CrewAgentExecutor(AgentExecutor):
return
else:
if self.crew and self.crew._train:
self._handle_crew_training_output(output)
yield output
return
@@ -271,8 +236,8 @@ class CrewAgentExecutor(AgentExecutor):
run_manager.on_agent_action(agent_action, color="green")
tool_usage = ToolUsage(
tools_handler=self.tools_handler,
tools=self.tools,
tools_handler=self.tools_handler, # type: ignore # Argument "tools_handler" to "ToolUsage" has incompatible type "ToolsHandler | None"; expected "ToolsHandler"
tools=self.tools, # type: ignore # Argument "tools" to "ToolUsage" has incompatible type "Sequence[BaseTool]"; expected "list[BaseTool]"
original_tools=self.original_tools,
tools_description=self.tools_description,
tools_names=self.tools_names,
@@ -294,11 +259,32 @@ class CrewAgentExecutor(AgentExecutor):
tool=tool_calling.tool_name,
tools=", ".join([tool.name.casefold() for tool in self.tools]),
)
yield AgentStep(action=agent_action, observation=observation)
def _ask_human_input(self, final_answer: dict) -> str:
"""Get human input."""
return input(
self._i18n.slice("getting_input").format(final_answer=final_answer)
)
def _handle_crew_training_output(
self, output: AgentFinish, human_feedback: str | None = None
) -> None:
"""Function to handle the process of the training data."""
agent_id = str(self.crew_agent.id)
if (
CrewTrainingHandler(TRAINING_DATA_FILE).load()
and not self.should_ask_for_human_input
):
training_data = CrewTrainingHandler(TRAINING_DATA_FILE).load()
if training_data.get(agent_id):
training_data[agent_id][self.crew._train_iteration][
"improved_output"
] = output.return_values["output"]
CrewTrainingHandler(TRAINING_DATA_FILE).save(training_data)
if self.should_ask_for_human_input and human_feedback is not None:
training_data = {
"initial_output": output.return_values["output"],
"human_feedback": human_feedback,
"agent": agent_id,
"agent_role": self.crew_agent.role,
}
CrewTrainingHandler(TRAINING_DATA_FILE).append(
self.crew._train_iteration, agent_id, training_data
)
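Reading the two branches together, a hedged sketch of the record this produces on disk (keys from the diff, values illustrative): the feedback pass appends the dict below, and a later non-feedback pass fills in improved_output.

```python
# Illustrative record appended under (train_iteration, agent_id).
training_record = {
    "initial_output": "First draft of the report...",
    "human_feedback": "Add citations.",
    "agent": "agent-uuid",
    "agent_role": "Researcher",
    # "improved_output" is written by the follow-up pass shown above
}
print(training_record["human_feedback"])
```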

View File

@@ -52,7 +52,6 @@ class CrewAgentParser(ReActSingleInputOutputParser):
action_input = action_match.group(2)
tool_input = action_input.strip(" ")
tool_input = tool_input.strip('"')
return AgentAction(action, tool_input, text)
elif includes_answer:

View File

@@ -8,13 +8,13 @@ from .cache.cache_handler import CacheHandler
class ToolsHandler:
"""Callback handler for tool usage."""
last_used_tool: ToolCalling = {}
cache: CacheHandler
last_used_tool: ToolCalling = {} # type: ignore # BUG?: Incompatible types in assignment (expression has type "Dict[...]", variable has type "ToolCalling")
cache: Optional[CacheHandler]
def __init__(self, cache: Optional[CacheHandler] = None):
"""Initialize the callback handler."""
self.cache = cache
self.last_used_tool = {}
self.last_used_tool = {} # type: ignore # BUG?: same as above
def on_tool_use(
self,
@@ -23,7 +23,7 @@ class ToolsHandler:
should_cache: bool = True,
) -> Any:
"""Run when tool ends running."""
self.last_used_tool = calling
self.last_used_tool = calling # type: ignore # BUG?: Incompatible types in assignment (expression has type "Union[ToolCalling, InstructorToolCalling]", variable has type "ToolCalling")
if self.cache and should_cache and calling.tool_name != CacheTools().name:
self.cache.add(
tool=calling.tool_name,

View File

@@ -1,6 +1,8 @@
import click
import pkg_resources
from .create_crew import create_crew
from .train_crew import train_crew
@click.group()
@@ -15,5 +17,36 @@ def create(project_name):
create_crew(project_name)
@crewai.command()
@click.option(
"--tools", is_flag=True, help="Show the installed version of crewai tools"
)
def version(tools):
"""Show the installed version of crewai."""
crewai_version = pkg_resources.get_distribution("crewai").version
click.echo(f"crewai version: {crewai_version}")
if tools:
try:
tools_version = pkg_resources.get_distribution("crewai-tools").version
click.echo(f"crewai tools version: {tools_version}")
except pkg_resources.DistributionNotFound:
click.echo("crewai tools not installed")
@crewai.command()
@click.option(
"-n",
"--n_iterations",
type=int,
default=5,
help="Number of iterations to train the crew",
)
def train(n_iterations: int):
"""Train the crew."""
click.echo(f"Training the crew for {n_iterations} iterations")
train_crew(n_iterations)
if __name__ == "__main__":
crewai()
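For a quick feel of the option handling, the click pattern used here can be exercised in isolation with click's test runner (a toy command, not the real CLI entry point):

```python
import click
from click.testing import CliRunner

@click.command()
@click.option("-n", "--n_iterations", type=int, default=5)
def train(n_iterations: int):
    click.echo(f"Training the crew for {n_iterations} iterations")

result = CliRunner().invoke(train, ["-n", "3"])
assert result.output.strip() == "Training the crew for 3 iterations"
```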

View File

@@ -23,7 +23,7 @@ poetry install
```
### Customizing
**Add you `OPENAI_API_KEY` on the `.env` file**
**Add your `OPENAI_API_KEY` into the `.env` file**
- Modify `src/{{folder_name}}/config/agents.yaml` to define your agents
- Modify `src/{{folder_name}}/config/tasks.yaml` to define your tasks
@@ -40,7 +40,7 @@ poetry run {{folder_name}}
This command initializes the {{name}} Crew, assembling the agents and assigning them tasks as defined in your configuration.
This example, unmodified, will run the create a `report.md` file with the output of a research on LLMs in the root folser
This example, unmodified, will create a `report.md` file with the output of a research on LLMs in the root folder.
## Understanding Your Crew
@@ -51,7 +51,7 @@ The {{name}} Crew is composed of multiple AI agents, each with unique roles, goa
For support, questions, or feedback regarding the {{crew_name}} Crew or crewAI.
- Visit our [documentation](https://docs.crewai.com)
- Reach out to us through our [GitHub repository](https://github.com/joaomdmoura/crewai)
- [Joing our Discord](https://discord.com/invite/X4JWnZnxPb)
- [Chat wtih our docs](https://chatg.pt/DWjSBZn)
- [Join our Discord](https://discord.com/invite/X4JWnZnxPb)
- [Chat with our docs](https://chatg.pt/DWjSBZn)
Let's create wonders together with the power and simplicity of crewAI.
Let's create wonders together with the power and simplicity of crewAI.

View File

@@ -1,4 +1,5 @@
#!/usr/bin/env python
import sys
from {{folder_name}}.crew import {{crew_name}}Crew
@@ -7,4 +8,16 @@ def run():
inputs = {
'topic': 'AI LLMs'
}
{{crew_name}}Crew().crew().kickoff(inputs=inputs)
{{crew_name}}Crew().crew().kickoff(inputs=inputs)
def train():
"""
Train the crew for a given number of iterations.
"""
inputs = {"topic": "AI LLMs"}
try:
{{crew_name}}Crew().crew().train(n_iterations=int(sys.argv[1]), inputs=inputs)
except Exception as e:
raise Exception(f"An error occurred while training the crew: {e}")

View File

@@ -6,11 +6,12 @@ authors = ["Your Name <you@example.com>"]
[tool.poetry.dependencies]
python = ">=3.10,<=3.13"
crewai = {extras = ["tools"], version = "^0.27.0"}
crewai = { extras = ["tools"], version = "^0.35.8" }
[tool.poetry.scripts]
{{folder_name}} = "{{folder_name}}.main:run"
train = "{{folder_name}}.main:train"
[build-system]
requires = ["poetry-core"]
build-backend = "poetry.core.masonry.api"
build-backend = "poetry.core.masonry.api"

View File

@@ -3,7 +3,9 @@ from crewai_tools import BaseTool
class MyCustomTool(BaseTool):
name: str = "Name of my tool"
description: str = "Clear description for what this tool is useful for, you agent will need this information to use it."
description: str = (
"Clear description for what this tool is useful for, you agent will need this information to use it."
)
def _run(self, argument: str) -> str:
# Implementation goes here

View File

@@ -0,0 +1,29 @@
import subprocess
import click
def train_crew(n_iterations: int) -> None:
"""
Train the crew by running a command in the Poetry environment.
Args:
n_iterations (int): The number of iterations to train the crew.
"""
command = ["poetry", "run", "train", str(n_iterations)]
try:
if n_iterations <= 0:
raise ValueError("The number of iterations must be a positive integer.")
result = subprocess.run(command, capture_output=False, text=True, check=True)
if result.stderr:
click.echo(result.stderr, err=True)
except subprocess.CalledProcessError as e:
click.echo(f"An error occurred while training the crew: {e}", err=True)
click.echo(e.output, err=True)
except Exception as e:
click.echo(f"An unexpected error occurred: {e}", err=True)

View File

@@ -1,6 +1,7 @@
import asyncio
import json
import uuid
from typing import Any, Dict, List, Optional, Union
from typing import Any, Dict, List, Optional, Union, Tuple
from langchain_core.callbacks import BaseCallbackHandler
from pydantic import (
@@ -17,6 +18,7 @@ from pydantic import (
from pydantic_core import PydanticCustomError
from crewai.agent import Agent
from crewai.agents.agent_builder.base_agent import BaseAgent
from crewai.agents.cache import CacheHandler
from crewai.memory.entity.entity_memory import EntityMemory
from crewai.memory.long_term.long_term_memory import LongTermMemory
@@ -25,7 +27,14 @@ from crewai.process import Process
from crewai.task import Task
from crewai.telemetry import Telemetry
from crewai.tools.agent_tools import AgentTools
from crewai.utilities import I18N, Logger, RPMController
from crewai.utilities import I18N, FileHandler, Logger, RPMController
from crewai.utilities.evaluators.task_evaluator import TaskEvaluator
from crewai.utilities.training_handler import CrewTrainingHandler
try:
import agentops
except ImportError:
agentops = None
class Crew(BaseModel):
@@ -36,6 +45,7 @@ class Crew(BaseModel):
tasks: List of tasks assigned to the crew.
agents: List of agents part of this crew.
manager_llm: The language model that will run manager agent.
manager_agent: Custom agent that will be used as manager.
memory: Whether the crew should use memory to store memories of its execution.
manager_callbacks: The callback handlers to be executed by the manager agent when hierarchical process is used
cache: Whether the crew should use a cache to store the results of the tools execution.
@@ -44,26 +54,30 @@ class Crew(BaseModel):
verbose: Indicates the verbosity level for logging during execution.
config: Configuration settings for the crew.
max_rpm: Maximum number of requests per minute for the crew execution to be respected.
prompt_file: Path to the prompt json file to be used for the crew.
id: A unique identifier for the crew instance.
full_output: Whether the crew should return the full output with all tasks outputs or just the final output.
full_output: Whether the crew should return the full output with all tasks outputs and token usage metrics or just the final output.
task_callback: Callback to be executed after each task for every agent's execution.
step_callback: Callback to be executed after each step for every agent's execution.
share_crew: Whether you want to share the complete crew infromation and execution with crewAI to make the library better, and allow us to train models.
share_crew: Whether you want to share the complete crew information and execution with crewAI to make the library better, and allow us to train models.
"""
__hash__ = object.__hash__ # type: ignore
_execution_span: Any = PrivateAttr()
_rpm_controller: RPMController = PrivateAttr()
_logger: Logger = PrivateAttr()
_file_handler: FileHandler = PrivateAttr()
_cache_handler: InstanceOf[CacheHandler] = PrivateAttr(default=CacheHandler())
_short_term_memory: Optional[InstanceOf[ShortTermMemory]] = PrivateAttr()
_long_term_memory: Optional[InstanceOf[LongTermMemory]] = PrivateAttr()
_entity_memory: Optional[InstanceOf[EntityMemory]] = PrivateAttr()
_train: Optional[bool] = PrivateAttr(default=False)
_train_iteration: Optional[int] = PrivateAttr()
cache: bool = Field(default=True)
model_config = ConfigDict(arbitrary_types_allowed=True)
tasks: List[Task] = Field(default_factory=list)
agents: List[Agent] = Field(default_factory=list)
agents: List[BaseAgent] = Field(default_factory=list)
process: Process = Field(default=Process.sequential)
verbose: Union[int, bool] = Field(default=0)
memory: bool = Field(
@@ -80,11 +94,14 @@ class Crew(BaseModel):
)
full_output: Optional[bool] = Field(
default=False,
description="Whether the crew should return the full output with all tasks outputs or just the final output.",
description="Whether the crew should return the full output with all tasks outputs and token usage metrics or just the final output.",
)
manager_llm: Optional[Any] = Field(
description="Language model that will run the agent.", default=None
)
manager_agent: Optional[BaseAgent] = Field(
description="Custom agent that will be used as manager.", default=None
)
manager_callbacks: Optional[List[InstanceOf[BaseCallbackHandler]]] = Field(
default=None,
description="A list of callback handlers to be executed by the manager agent when hierarchical process is used",
@@ -107,13 +124,13 @@ class Crew(BaseModel):
default=None,
description="Maximum number of requests per minute for the crew execution to be respected.",
)
language: str = Field(
default="en",
description="Language used for the crew, defaults to English.",
)
language_file: str = Field(
prompt_file: str = Field(
default=None,
description="Path to the language file to be used for the crew.",
description="Path to the prompt json file to be used for the crew.",
)
output_log_file: Optional[Union[bool, str]] = Field(
default=False,
description="output_log_file",
)
@field_validator("id", mode="before")
@@ -145,6 +162,8 @@ class Crew(BaseModel):
"""Set private attributes."""
self._cache_handler = CacheHandler()
self._logger = Logger(self.verbose)
if self.output_log_file:
self._file_handler = FileHandler(self.output_log_file)
self._rpm_controller = RPMController(max_rpm=self.max_rpm, logger=self._logger)
self._telemetry = Telemetry()
self._telemetry.set_tracer()
@@ -156,19 +175,32 @@ class Crew(BaseModel):
"""Set private attributes."""
if self.memory:
self._long_term_memory = LongTermMemory()
self._short_term_memory = ShortTermMemory(embedder_config=self.embedder)
self._entity_memory = EntityMemory(embedder_config=self.embedder)
self._short_term_memory = ShortTermMemory(
crew=self, embedder_config=self.embedder
)
self._entity_memory = EntityMemory(crew=self, embedder_config=self.embedder)
return self
@model_validator(mode="after")
def check_manager_llm(self):
"""Validates that the language model is set when using hierarchical process."""
if self.process == Process.hierarchical and not self.manager_llm:
raise PydanticCustomError(
"missing_manager_llm",
"Attribute `manager_llm` is required when using hierarchical process.",
{},
)
if self.process == Process.hierarchical:
if not self.manager_llm and not self.manager_agent:
raise PydanticCustomError(
"missing_manager_llm_or_manager_agent",
"Attribute `manager_llm` or `manager_agent` is required when using hierarchical process.",
{},
)
if (self.manager_agent is not None) and (
self.agents.count(self.manager_agent) > 0
):
raise PydanticCustomError(
"manager_agent_in_agents",
"Manager agent should not be included in agents list.",
{},
)
return self
@model_validator(mode="after")
@@ -192,6 +224,33 @@ class Crew(BaseModel):
agent.set_rpm_controller(self._rpm_controller)
return self
@model_validator(mode="after")
def validate_tasks(self):
if self.process == Process.sequential:
for task in self.tasks:
if task.agent is None:
raise PydanticCustomError(
"missing_agent_in_task",
f"Sequential process error: Agent is missing in the task with the following description: {task.description}", # type: ignore Argument of type "str" cannot be assigned to parameter "message_template" of type "LiteralString"
{},
)
return self
@model_validator(mode="after")
def check_tasks_in_hierarchical_process_not_async(self):
"""Validates that the tasks in hierarchical process are not flagged with async_execution."""
if self.process == Process.hierarchical:
for task in self.tasks:
if task.async_execution:
raise PydanticCustomError(
"async_execution_in_hierarchical_process",
"Hierarchical process error: Tasks cannot be flagged with async_execution.",
{},
)
return self
def _setup_from_config(self):
assert self.config is not None, "Config should not be None."
@@ -220,20 +279,59 @@ class Crew(BaseModel):
del task_config["agent"]
return Task(**task_config, agent=task_agent)
def kickoff(self, inputs: Optional[Dict[str, Any]] = {}) -> str:
def _setup_for_training(self) -> None:
"""Sets up the crew for training."""
self._train = True
for task in self.tasks:
task.human_input = True
for agent in self.agents:
agent.allow_delegation = False
def train(self, n_iterations: int, inputs: Optional[Dict[str, Any]] = {}) -> None:
"""Trains the crew for a given number of iterations."""
self._setup_for_training()
for n_iteration in range(n_iterations):
self._train_iteration = n_iteration
self.kickoff(inputs=inputs)
training_data = CrewTrainingHandler("training_data.pkl").load()
for agent in self.agents:
result = TaskEvaluator(agent).evaluate_training_data(
training_data=training_data, agent_id=str(agent.id)
)
CrewTrainingHandler("trained_agents_data.pkl").save_trained_data(
agent_id=str(agent.role), trained_data=result.model_dump()
)
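This makes training drivable programmatically as well as from the CLI. A hedged usage sketch (crew construction and API keys assumed elsewhere):

```python
# Illustrative only: assumes `crew` is a Crew wired with agents and tasks.
def train_programmatically(crew) -> None:
    # Runs kickoff n times with human input enabled, then persists
    # per-role suggestions to trained_agents_data.pkl as shown above.
    crew.train(n_iterations=2, inputs={"topic": "AI LLMs"})
```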
def kickoff(
self,
inputs: Optional[Dict[str, Any]] = {},
) -> Union[str, Dict[str, Any]]:
"""Starts the crew to work on its assigned tasks."""
self._execution_span = self._telemetry.crew_execution_span(self)
self._execution_span = self._telemetry.crew_execution_span(self, inputs)
# type: ignore # Argument 1 to "_interpolate_inputs" of "Crew" has incompatible type "dict[str, Any] | None"; expected "dict[str, Any]"
self._interpolate_inputs(inputs)
self._set_tasks_callbacks()
i18n = I18N(language=self.language, language_file=self.language_file)
i18n = I18N(prompt_file=self.prompt_file)
for agent in self.agents:
# type: ignore # Argument 1 to "_interpolate_inputs" of "Crew" has incompatible type "dict[str, Any] | None"; expected "dict[str, Any]"
agent.i18n = i18n
agent.crew = self
# type: ignore[attr-defined] # Argument 1 to "_interpolate_inputs" of "Crew" has incompatible type "dict[str, Any] | None"; expected "dict[str, Any]"
agent.crew = self # type: ignore[attr-defined]
# TODO: Create an AgentFunctionCalling protocol for future refactoring
if not agent.function_calling_llm:
agent.function_calling_llm = self.function_calling_llm
if agent.allow_code_execution:
agent.tools += agent.get_code_execution_tools()
if not agent.step_callback:
agent.step_callback = self.step_callback
@@ -246,91 +344,238 @@ class Crew(BaseModel):
elif self.process == Process.hierarchical:
result, manager_metrics = self._run_hierarchical_process()
metrics.append(manager_metrics)
else:
raise NotImplementedError(
f"The process '{self.process}' is not implemented yet."
)
metrics += [agent._token_process.get_summary() for agent in self.agents]
metrics = metrics + [
agent._token_process.get_summary() for agent in self.agents
]
self.usage_metrics = {
key: sum([m[key] for m in metrics if m is not None]) for key in metrics[0]
}
return result
def kickoff_for_each(
self, inputs: List[Dict[str, Any]]
) -> List[Union[str, Dict[str, Any]]]:
"""Executes the Crew's workflow for each input in the list and aggregates results."""
results = []
# Initialize the parent crew's usage metrics
total_usage_metrics = {
"total_tokens": 0,
"prompt_tokens": 0,
"completion_tokens": 0,
"successful_requests": 0,
}
for input_data in inputs:
crew = self.copy()
output = crew.kickoff(inputs=input_data)
if crew.usage_metrics:
for key in total_usage_metrics:
total_usage_metrics[key] += crew.usage_metrics.get(key, 0)
results.append(output)
self.usage_metrics = total_usage_metrics
return results
async def kickoff_async(
self, inputs: Optional[Dict[str, Any]] = {}
) -> Union[str, Dict]:
"""Asynchronous kickoff method to start the crew execution."""
return await asyncio.to_thread(self.kickoff, inputs)
async def kickoff_for_each_async(self, inputs: List[Dict]) -> List[Any]:
crew_copies = [self.copy() for _ in inputs]
async def run_crew(crew, input_data):
return await crew.kickoff_async(inputs=input_data)
tasks = [
asyncio.create_task(run_crew(crew_copies[i], inputs[i]))
for i in range(len(inputs))
]
results = await asyncio.gather(*tasks)
total_usage_metrics = {
"total_tokens": 0,
"prompt_tokens": 0,
"completion_tokens": 0,
"successful_requests": 0,
}
for crew in crew_copies:
if crew.usage_metrics:
for key in total_usage_metrics:
total_usage_metrics[key] += crew.usage_metrics.get(key, 0)
self.usage_metrics = total_usage_metrics
return results
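Since kickoff_async just offloads the blocking kickoff to a thread, several crews can be fanned out with asyncio.gather. A minimal pattern sketch with a stand-in coroutine:

```python
import asyncio

async def kickoff_async(inputs):
    # Stand-in for Crew.kickoff_async: blocking work moved to a thread.
    return await asyncio.to_thread(lambda: f"result for {inputs['topic']}")

async def main():
    all_inputs = [{"topic": "AI"}, {"topic": "robotics"}]
    results = await asyncio.gather(*(kickoff_async(i) for i in all_inputs))
    print(results)

asyncio.run(main())
```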
def _run_sequential_process(self) -> str:
"""Executes tasks sequentially and returns the final output."""
task_output = ""
for task in self.tasks:
if task.agent.allow_delegation:
if task.agent.allow_delegation: # type: ignore # Item "None" of "Agent | None" has no attribute "allow_delegation"
agents_for_delegation = [
agent for agent in self.agents if agent != task.agent
]
if len(self.agents) > 1 and len(agents_for_delegation) > 0:
task.tools += AgentTools(agents=agents_for_delegation).tools()
task.tools += task.agent.get_delegation_tools(agents_for_delegation)
role = task.agent.role if task.agent is not None else "None"
self._logger.log("debug", f"== Working Agent: {role}", color="bold_yellow")
self._logger.log("debug", f"== Working Agent: {role}", color="bold_purple")
self._logger.log(
"info", f"== Starting Task: {task.description}", color="bold_yellow"
"info", f"== Starting Task: {task.description}", color="bold_purple"
)
if self.output_log_file:
self._file_handler.log(
agent=role, task=task.description, status="started"
)
output = task.execute(context=task_output)
if not task.async_execution:
task_output = output
role = task.agent.role if task.agent is not None else "None"
self._logger.log("debug", f"== [{role}] Task output: {task_output}\n\n")
self._finish_execution(task_output)
return self._format_output(task_output)
if self.output_log_file:
self._file_handler.log(agent=role, task=task_output, status="completed")
def _run_hierarchical_process(self) -> str:
self._finish_execution(task_output)
token_usage = self.calculate_usage_metrics()
# type: ignore # Incompatible return value type (got "tuple[str, Any]", expected "str")
return self._format_output(task_output, token_usage)
def _run_hierarchical_process(
self,
) -> Tuple[Union[str, Dict[str, Any]], Dict[str, Any]]:
"""Creates and assigns a manager agent to make sure the crew completes the tasks."""
i18n = I18N(language=self.language, language_file=self.language_file)
manager = Agent(
role=i18n.retrieve("hierarchical_manager_agent", "role"),
goal=i18n.retrieve("hierarchical_manager_agent", "goal"),
backstory=i18n.retrieve("hierarchical_manager_agent", "backstory"),
tools=AgentTools(agents=self.agents).tools(),
llm=self.manager_llm,
verbose=True,
)
i18n = I18N(prompt_file=self.prompt_file)
if self.manager_agent is not None:
self.manager_agent.allow_delegation = True
manager = self.manager_agent
if manager.tools is not None and len(manager.tools) > 0:
raise Exception("Manager agent should not have tools")
manager.tools = self.manager_agent.get_delegation_tools(self.agents)
else:
manager = Agent(
role=i18n.retrieve("hierarchical_manager_agent", "role"),
goal=i18n.retrieve("hierarchical_manager_agent", "goal"),
backstory=i18n.retrieve("hierarchical_manager_agent", "backstory"),
tools=AgentTools(agents=self.agents).tools(),
llm=self.manager_llm,
verbose=self.verbose,
)
self.manager_agent = manager
task_output = ""
for task in self.tasks:
self._logger.log("debug", f"Working Agent: {manager.role}")
self._logger.log("info", f"Starting Task: {task.description}")
if self.output_log_file:
self._file_handler.log(
agent=manager.role, task=task.description, status="started"
)
if task.agent:
manager.tools = task.agent.get_delegation_tools([task.agent])
else:
manager.tools = manager.get_delegation_tools(self.agents)
task_output = task.execute(
agent=manager, context=task_output, tools=manager.tools
)
self._logger.log("debug", f"[{manager.role}] Task output: {task_output}")
if self.output_log_file:
self._file_handler.log(
agent=manager.role, task=task_output, status="completed"
)
self._finish_execution(task_output)
return self._format_output(task_output), manager._token_process.get_summary()
def _set_tasks_callbacks(self) -> str:
# type: ignore # Incompatible return value type (got "tuple[str, Any]", expected "str")
token_usage = self.calculate_usage_metrics()
return self._format_output(task_output, token_usage), token_usage
def copy(self):
"""Create a deep copy of the Crew."""
exclude = {
"id",
"_rpm_controller",
"_logger",
"_execution_span",
"_file_handler",
"_cache_handler",
"_short_term_memory",
"_long_term_memory",
"_entity_memory",
"_telemetry",
"agents",
"tasks",
}
cloned_agents = [agent.copy() for agent in self.agents]
cloned_tasks = [task.copy(cloned_agents) for task in self.tasks]
copied_data = self.model_dump(exclude=exclude)
copied_data = {k: v for k, v in copied_data.items() if v is not None}
copied_data.pop("agents", None)
copied_data.pop("tasks", None)
copied_crew = Crew(**copied_data, agents=cloned_agents, tasks=cloned_tasks)
return copied_crew
def _set_tasks_callbacks(self) -> None:
"""Sets callback for every task suing task_callback"""
for task in self.tasks:
task.callback = self.task_callback
if not task.callback:
task.callback = self.task_callback
def _interpolate_inputs(self, inputs: Dict[str, Any]) -> str:
def _interpolate_inputs(self, inputs: Dict[str, Any]) -> None:
"""Interpolates the inputs in the tasks and agents."""
[task.interpolate_inputs(inputs) for task in self.tasks]
[agent.interpolate_inputs(inputs) for agent in self.agents]
[
task.interpolate_inputs(
# type: ignore # "interpolate_inputs" of "Task" does not return a value (it only ever returns None)
inputs
)
for task in self.tasks
]
# type: ignore # "interpolate_inputs" of "Agent" does not return a value (it only ever returns None)
for agent in self.agents:
agent.interpolate_inputs(inputs)
def _format_output(
self, output: str, token_usage: Optional[Dict[str, Any]] = None
) -> Union[str, Dict[str, Any]]:
"""
Formats the output of the crew execution.
If full_output is True, the returned value is a dictionary; otherwise it is a string.
"""
def _format_output(self, output: str) -> str:
"""Formats the output of the crew execution."""
if self.full_output:
return {
return { # type: ignore # Incompatible return value type (got "dict[str, Sequence[str | TaskOutput | None]]", expected "str")
"final_output": output,
"tasks_outputs": [task.output for task in self.tasks if task],
"usage_metrics": token_usage,
}
else:
return output
@@ -338,7 +583,35 @@ class Crew(BaseModel):
def _finish_execution(self, output) -> None:
if self.max_rpm:
self._rpm_controller.stop_rpm_counter()
if agentops:
agentops.end_session(
end_state="Success",
end_state_reason="Finished Execution",
is_auto_end=True,
)
self._telemetry.end_crew(self, output)
def calculate_usage_metrics(self) -> Dict[str, int]:
"""Calculates and returns the usage metrics."""
total_usage_metrics = {
"total_tokens": 0,
"prompt_tokens": 0,
"completion_tokens": 0,
"successful_requests": 0,
}
for agent in self.agents:
if hasattr(agent, "_token_process"):
token_sum = agent._token_process.get_summary()
for key in total_usage_metrics:
total_usage_metrics[key] += token_sum.get(key, 0)
if self.manager_agent and hasattr(self.manager_agent, "_token_process"):
token_sum = self.manager_agent._token_process.get_summary()
for key in total_usage_metrics:
total_usage_metrics[key] += token_sum.get(key, 0)
return total_usage_metrics
def __repr__(self):
return f"Crew(id={self.id}, process={self.process}, number_of_agents={len(self.agents)}, number_of_tasks={len(self.tasks)})"

View File

@@ -1,3 +1,5 @@
from typing import Optional
from crewai.memory import EntityMemory, LongTermMemory, ShortTermMemory
@@ -32,18 +34,23 @@ class ContextualMemory:
formatted_results = "\n".join([f"- {result}" for result in stm_results])
return f"Recent Insights:\n{formatted_results}" if stm_results else ""
def _fetch_ltm_context(self, task) -> str:
def _fetch_ltm_context(self, task) -> Optional[str]:
"""
Fetches historical data or insights from LTM that are relevant to the task's description and expected_output,
formatted as bullet points.
"""
ltm_results = self.ltm.search(task)
ltm_results = self.ltm.search(task, latest_n=2)
if not ltm_results:
return None
formatted_results = "\n".join(
[f"{result['metadata']['suggestions']}" for result in ltm_results]
)
formatted_results = list(set(formatted_results))
formatted_results = [
suggestion
for result in ltm_results
for suggestion in result["metadata"]["suggestions"] # type: ignore # Invalid index type "str" for "str"; expected type "SupportsIndex | slice"
]
formatted_results = list(dict.fromkeys(formatted_results))
formatted_results = "\n".join([f"- {result}" for result in formatted_results]) # type: ignore # Incompatible types in assignment (expression has type "str", variable has type "list[str]")
return f"Historical Data:\n{formatted_results}" if ltm_results else ""
def _fetch_entity_context(self, query) -> str:
@@ -53,6 +60,6 @@ class ContextualMemory:
"""
em_results = self.em.search(query)
formatted_results = "\n".join(
[f"- {result['context']}" for result in em_results]
[f"- {result['context']}" for result in em_results] # type: ignore # Invalid index type "str" for "str"; expected type "SupportsIndex | slice"
)
return f"Entities:\n{formatted_results}" if em_results else ""

View File

@@ -10,13 +10,16 @@ class EntityMemory(Memory):
Inherits from the Memory class.
"""
def __init__(self, embedder_config=None):
def __init__(self, crew=None, embedder_config=None):
storage = RAGStorage(
type="entities", allow_reset=False, embedder_config=embedder_config
type="entities",
allow_reset=False,
embedder_config=embedder_config,
crew=crew,
)
super().__init__(storage)
def save(self, item: EntityMemoryItem) -> None:
def save(self, item: EntityMemoryItem) -> None: # type: ignore # BUG?: Signature of "save" incompatible with supertype "Memory"
"""Saves an entity item into the SQLite storage."""
data = f"{item.name}({item.type}): {item.description}"
super().save(data, item.metadata)

View File

@@ -18,15 +18,15 @@ class LongTermMemory(Memory):
storage = LTMSQLiteStorage()
super().__init__(storage)
def save(self, item: LongTermMemoryItem) -> None:
def save(self, item: LongTermMemoryItem) -> None: # type: ignore # BUG?: Signature of "save" incompatible with supertype "Memory"
metadata = item.metadata
metadata.update({"agent": item.agent, "expected_output": item.expected_output})
self.storage.save(
self.storage.save( # type: ignore # BUG?: Unexpected keyword argument "task_description","score","datetime" for "save" of "Storage"
task_description=item.task,
score=metadata["quality"],
metadata=metadata,
datetime=item.datetime,
)
def search(self, task: str) -> Dict[str, Any]:
return self.storage.load(task)
def search(self, task: str, latest_n: int = 3) -> Dict[str, Any]:
return self.storage.load(task, latest_n) # type: ignore # BUG?: "Storage" has no attribute "load"

View File

@@ -1,4 +1,4 @@
from typing import Any, Dict, Union
from typing import Any, Dict, Optional, Union
class LongTermMemoryItem:
@@ -8,8 +8,8 @@ class LongTermMemoryItem:
task: str,
expected_output: str,
datetime: str,
quality: Union[int, float] = None,
metadata: Dict[str, Any] = None,
quality: Optional[Union[int, float]] = None,
metadata: Optional[Dict[str, Any]] = None,
):
self.task = task
self.agent = agent

View File

@@ -1,4 +1,4 @@
from typing import Any, Dict
from typing import Any, Dict, Optional
from crewai.memory.storage.interface import Storage
@@ -12,12 +12,16 @@ class Memory:
self.storage = storage
def save(
self, value: Any, metadata: Dict[str, Any] = None, agent: str = None
self,
value: Any,
metadata: Optional[Dict[str, Any]] = None,
agent: Optional[str] = None,
) -> None:
metadata = metadata or {}
if agent:
metadata["agent"] = agent
self.storage.save(value, metadata)
self.storage.save(value, metadata) # type: ignore # Maybe BUG? Should be self.storage.save(key, value, metadata)
def search(self, query: str) -> Dict[str, Any]:
return self.storage.search(query)

View File

@@ -12,12 +12,14 @@ class ShortTermMemory(Memory):
MemoryItem instances.
"""
def __init__(self, embedder_config=None):
storage = RAGStorage(type="short_term", embedder_config=embedder_config)
def __init__(self, crew=None, embedder_config=None):
storage = RAGStorage(
type="short_term", embedder_config=embedder_config, crew=crew
)
super().__init__(storage)
def save(self, item: ShortTermMemoryItem) -> None:
def save(self, item: ShortTermMemoryItem) -> None: # type: ignore # BUG?: Signature of "save" incompatible with supertype "Memory"
super().save(item.data, item.metadata, item.agent)
def search(self, query: str, score_threshold: float = 0.35):
return self.storage.search(query=query, score_threshold=score_threshold)
return self.storage.search(query=query, score_threshold=score_threshold) # type: ignore # BUG?: The call resolves to the parent class, whose search does not accept these parameters
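Passing crew through ShortTermMemory and EntityMemory feeds the RAGStorage change further down, which namespaces the Chroma directory by the crew's agent roles. A minimal sketch of just that path logic, using stand-in objects and a hypothetical /tmp/crewai base path (storage_dir is an illustrative helper, not part of this diff):

from types import SimpleNamespace

# Stand-ins: only .agents[].role matters for the directory computation.
crew = SimpleNamespace(agents=[SimpleNamespace(role="Researcher"), SimpleNamespace(role="Writer")])

def storage_dir(base_path, type, crew=None):
    # Mirrors RAGStorage: join the agent roles so each crew gets its own collection dir.
    agents = "_".join(agent.role for agent in (crew.agents if crew else []))
    return f"{base_path}/{type}/{agents}"

print(storage_dir("/tmp/crewai", "short_term", crew))  # /tmp/crewai/short_term/Researcher_Writer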

View File

@@ -1,8 +1,10 @@
from typing import Any, Dict
from typing import Any, Dict, Optional
class ShortTermMemoryItem:
def __init__(self, data: Any, agent: str, metadata: Dict[str, Any] = None):
def __init__(
self, data: Any, agent: str, metadata: Optional[Dict[str, Any]] = None
):
self.data = data
self.agent = agent
self.metadata = metadata if metadata is not None else {}

View File

@@ -7,5 +7,5 @@ class Storage:
def save(self, key: str, value: Any, metadata: Dict[str, Any]) -> None:
pass
def search(self, key: str) -> Dict[str, Any]:
def search(self, key: str) -> Dict[str, Any]: # type: ignore
pass

View File

@@ -1,6 +1,6 @@
import json
import sqlite3
from typing import Any, Dict, Union
from typing import Any, Dict, List, Optional, Union
from crewai.utilities import Printer
from crewai.utilities.paths import db_storage_path
@@ -11,7 +11,9 @@ class LTMSQLiteStorage:
An updated SQLite storage class for LTM data storage.
"""
def __init__(self, db_path=f"{db_storage_path()}/long_term_memory_storage.db"):
def __init__(
self, db_path: str = f"{db_storage_path()}/long_term_memory_storage.db"
) -> None:
self.db_path = db_path
self._printer: Printer = Printer()
self._initialize_db()
@@ -67,19 +69,21 @@ class LTMSQLiteStorage:
color="red",
)
def load(self, task_description: str) -> Dict[str, Any]:
def load(
self, task_description: str, latest_n: int
) -> Optional[List[Dict[str, Any]]]:
"""Queries the LTM table by task description with error handling."""
try:
with sqlite3.connect(self.db_path) as conn:
cursor = conn.cursor()
cursor.execute(
"""
SELECT metadata, datetime, score
FROM long_term_memories
WHERE task_description = ?
ORDER BY datetime DESC, score ASC
LIMIT 2
""",
f"""
SELECT metadata, datetime, score
FROM long_term_memories
WHERE task_description = ?
ORDER BY datetime DESC, score ASC
LIMIT {latest_n}
""",
(task_description,),
)
rows = cursor.fetchall()
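Note that the new load interpolates latest_n into the SQL with an f-string; since latest_n is typed as int this is not directly injectable, but SQLite also accepts a bound parameter for LIMIT. A sketch against an in-memory database with the same table shape (sample rows are made up):

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE long_term_memories (task_description TEXT, metadata TEXT, datetime TEXT, score REAL)"
)
conn.executemany(
    "INSERT INTO long_term_memories (task_description, metadata, datetime, score) VALUES (?, ?, ?, ?)",
    [("t", "{}", "2024-07-01", 5.0), ("t", "{}", "2024-07-02", 7.0), ("t", "{}", "2024-07-03", 6.0)],
)

# LIMIT can be bound like any other parameter, avoiding string interpolation entirely.
rows = conn.execute(
    "SELECT metadata, datetime, score FROM long_term_memories "
    "WHERE task_description = ? ORDER BY datetime DESC, score ASC LIMIT ?",
    ("t", 2),
).fetchall()
print(rows)  # the two most recent rows for task "t"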

View File

@@ -2,7 +2,7 @@ import contextlib
import io
import logging
import os
from typing import Any, Dict
from typing import Any, Dict, List, Optional
from embedchain import App
from embedchain.llm.base import BaseLlm
@@ -37,13 +37,18 @@ class RAGStorage(Storage):
search efficiency.
"""
def __init__(self, type, allow_reset=True, embedder_config=None):
def __init__(self, type, allow_reset=True, embedder_config=None, crew=None):
super().__init__()
if (
not os.getenv("OPENAI_API_KEY")
and not os.getenv("OPENAI_BASE_URL") == "https://api.openai.com/v1"
):
os.environ["OPENAI_API_KEY"] = "fake"
agents = crew.agents if crew else []
agents = [agent.role for agent in agents]
agents = "_".join(agents)
config = {
"app": {
"config": {"name": type, "collect_metrics": False, "log_level": "ERROR"}
@@ -58,7 +63,7 @@ class RAGStorage(Storage):
"provider": "chroma",
"config": {
"collection_name": type,
"dir": f"{db_storage_path()}/{type}",
"dir": f"{db_storage_path()}/{type}/{agents}",
"allow_reset": allow_reset,
},
},
@@ -72,16 +77,16 @@ class RAGStorage(Storage):
if allow_reset:
self.app.reset()
def save(self, value: Any, metadata: Dict[str, Any]) -> None:
def save(self, value: Any, metadata: Dict[str, Any]) -> None: # type: ignore # BUG?: Should be save(key, value, metadata) Signature of "save" incompatible with supertype "Storage"
self._generate_embedding(value, metadata)
def search(
def search( # type: ignore # BUG?: Signature of "search" incompatible with supertype "Storage"
self,
query: str,
limit: int = 3,
filter: dict = None,
filter: Optional[dict] = None,
score_threshold: float = 0.35,
) -> Dict[str, Any]:
) -> List[Any]:
with suppress_logging():
try:
results = (

View File

@@ -1,14 +1,32 @@
tasks_order = []
def memoize(func):
cache = {}
def memoized_func(*args, **kwargs):
key = (args, tuple(kwargs.items()))
if key not in cache:
cache[key] = func(*args, **kwargs)
return cache[key]
memoized_func.__dict__.update(func.__dict__)
return memoized_func
def task(func):
if not hasattr(task, "registration_order"):
task.registration_order = []
func.is_task = True
tasks_order.append(func.__name__)
return func
wrapped_func = memoize(func)
# Append the function name to the registration order list
task.registration_order.append(func.__name__)
return wrapped_func
def agent(func):
func.is_agent = True
func = memoize(func)
return func
@@ -18,26 +36,43 @@ def crew(func):
instantiated_agents = []
agent_roles = set()
# Iterate over tasks_order to maintain the defined order
for task_name in tasks_order:
possible_task = getattr(self, task_name)
if callable(possible_task):
task_instance = possible_task()
instantiated_tasks.append(task_instance)
if hasattr(task_instance, "agent"):
agent_instance = task_instance.agent
if agent_instance.role not in agent_roles:
instantiated_agents.append(agent_instance)
agent_roles.add(agent_instance.role)
all_functions = {
name: getattr(self, name)
for name in dir(self)
if callable(getattr(self, name))
}
tasks = {
name: func
for name, func in all_functions.items()
if hasattr(func, "is_task")
}
agents = {
name: func
for name, func in all_functions.items()
if hasattr(func, "is_agent")
}
# Sort tasks by their registration order
sorted_task_names = sorted(
tasks, key=lambda name: task.registration_order.index(name)
)
# Instantiate tasks in the order they were defined
for task_name in sorted_task_names:
task_instance = tasks[task_name]()
instantiated_tasks.append(task_instance)
if hasattr(task_instance, "agent"):
agent_instance = task_instance.agent
if agent_instance.role not in agent_roles:
instantiated_agents.append(agent_instance)
agent_roles.add(agent_instance.role)
# Instantiate any additional agents not already included by tasks
for attr_name in dir(self):
possible_agent = getattr(self, attr_name)
if callable(possible_agent) and hasattr(possible_agent, "is_agent"):
temp_agent_instance = possible_agent()
if temp_agent_instance.role not in agent_roles:
instantiated_agents.append(temp_agent_instance)
agent_roles.add(temp_agent_instance.role)
for agent_name in agents:
temp_agent_instance = agents[agent_name]()
if temp_agent_instance.role not in agent_roles:
instantiated_agents.append(temp_agent_instance)
agent_roles.add(temp_agent_instance.role)
self.agents = instantiated_agents
self.tasks = instantiated_tasks
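The decorator rewrite above has two effects: @task and @agent methods are memoized so repeated calls return the same instance, and task.registration_order records definition order so @crew can instantiate tasks deterministically. A condensed, self-contained rendering of that pattern, with object() standing in for a real Task:

def memoize(func):
    cache = {}
    def memoized(*args, **kwargs):
        key = (args, tuple(kwargs.items()))
        if key not in cache:
            cache[key] = func(*args, **kwargs)
        return cache[key]
    memoized.__dict__.update(func.__dict__)  # keep markers like is_task visible
    return memoized

def task(func):
    if not hasattr(task, "registration_order"):
        task.registration_order = []
    func.is_task = True
    task.registration_order.append(func.__name__)  # remember definition order
    return memoize(func)

class MyCrew:
    @task
    def research(self):
        return object()  # stand-in for a Task instance

m = MyCrew()
assert m.research() is m.research()  # memoized: one instance per method
print(task.registration_order)       # ['research']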

View File

@@ -4,13 +4,15 @@ from pathlib import Path
import yaml
from dotenv import load_dotenv
from pydantic import ConfigDict
load_dotenv()
def CrewBase(cls):
class WrappedClass(cls):
is_crew_class = True
model_config = ConfigDict(arbitrary_types_allowed=True)
is_crew_class: bool = True # type: ignore
base_directory = None
for frame_info in inspect.stack():
@@ -40,6 +42,7 @@ def CrewBase(cls):
@staticmethod
def load_yaml(config_path: str):
with open(config_path, "r") as file:
# parsedContent = YamlParser.parse(file) # type: ignore # Argument 1 to "parse" has incompatible type "TextIOWrapper"; expected "YamlParser"
return yaml.safe_load(file)
return WrappedClass

View File

@@ -1,19 +1,42 @@
import os
import re
import threading
import uuid
from typing import Any, Dict, List, Optional, Type
from copy import copy
from typing import Any, Dict, List, Optional, Type, Union
from langchain_openai import ChatOpenAI
from opentelemetry.trace import Span
from pydantic import UUID4, BaseModel, Field, field_validator, model_validator
from pydantic_core import PydanticCustomError
from crewai.agent import Agent
from crewai.agents.agent_builder.base_agent import BaseAgent
from crewai.tasks.task_output import TaskOutput
from crewai.utilities import I18N, Converter, ConverterError, Printer
from crewai.telemetry.telemetry import Telemetry
from crewai.utilities.converter import ConverterError
from crewai.utilities.i18n import I18N
from crewai.utilities.printer import Printer
from crewai.utilities.pydantic_schema_parser import PydanticSchemaParser
class Task(BaseModel):
"""Class that represent a task to be executed."""
"""Class that represents a task to be executed.
Each task must have a description, an expected output and an agent responsible for execution.
Attributes:
agent: Agent responsible for task execution. Represents entity performing task.
async_execution: Boolean flag indicating asynchronous task execution.
callback: Function/object executed post task completion for additional actions.
config: Dictionary containing task-specific configuration parameters.
context: List of Task instances providing task context or input data.
description: Descriptive text detailing task's purpose and execution.
expected_output: Clear definition of expected task outcome.
output_file: File path for storing task output.
output_json: Pydantic model for structuring JSON output.
output_pydantic: Pydantic model for task output.
tools: List of tools/resources limited for task execution.
"""
class Config:
arbitrary_types_allowed = True
@@ -23,7 +46,6 @@ class Task(BaseModel):
tools_errors: int = 0
delegations: int = 0
i18n: I18N = I18N()
thread: threading.Thread = None
prompt_context: Optional[str] = None
description: str = Field(description="Description of the actual task.")
expected_output: str = Field(
@@ -36,7 +58,7 @@ class Task(BaseModel):
callback: Optional[Any] = Field(
description="Callback to be executed after the task is completed.", default=None
)
agent: Optional[Agent] = Field(
agent: Optional[BaseAgent] = Field(
description="Agent responsible for execution the task.", default=None
)
context: Optional[List["Task"]] = Field(
@@ -76,8 +98,11 @@ class Task(BaseModel):
default=False,
)
_telemetry: Telemetry
_execution_span: Span | None = None
_original_description: str | None = None
_original_expected_output: str | None = None
_thread: threading.Thread | None = None
def __init__(__pydantic_self__, **data):
config = data.pop("config", {})
@@ -91,6 +116,20 @@ class Task(BaseModel):
"may_not_set_field", "This field is not to be set by the user.", {}
)
@field_validator("output_file")
@classmethod
def output_file_validattion(cls, value: str) -> str:
"""Validate the output file path by removing the / from the beginning of the path."""
if value.startswith("/"):
return value[1:]
return value
@model_validator(mode="after")
def set_private_attrs(self) -> "Task":
"""Set private attributes."""
self._telemetry = Telemetry()
return self
@model_validator(mode="after")
def set_attributes_based_on_config(self) -> "Task":
"""Set attributes based on the agent configuration."""
@@ -118,9 +157,21 @@ class Task(BaseModel):
)
return self
def execute(
def wait_for_completion(self) -> str | BaseModel:
"""Wait for asynchronous task completion and return the output."""
assert self.async_execution, "Task is not set to be executed asynchronously."
if self._thread:
self._thread.join()
self._thread = None
assert self.output, "Task output is not set."
return self.output.exported_output
def execute( # type: ignore # Missing return statement
self,
agent: Agent | None = None,
agent: BaseAgent | None = None,
context: Optional[str] = None,
tools: Optional[List[Any]] = None,
) -> str:
@@ -130,6 +181,8 @@ class Task(BaseModel):
Output of the task.
"""
self._execution_span = self._telemetry.task_started(self)
agent = agent or self.agent
if not agent:
raise Exception(
@@ -137,22 +190,25 @@ class Task(BaseModel):
)
if self.context:
# type: ignore # Incompatible types in assignment (expression has type "list[Never]", variable has type "str | None")
context = []
for task in self.context:
if task.async_execution:
task.thread.join()
if task and task.output:
task.wait_for_completion()
if task.output:
# type: ignore # Item "str" of "str | None" has no attribute "append"
context.append(task.output.raw_output)
# type: ignore # Argument 1 to "join" of "str" has incompatible type "str | None"; expected "Iterable[str]"
context = "\n".join(context)
self.prompt_context = context
tools = tools or self.tools
if self.async_execution:
self.thread = threading.Thread(
self._thread = threading.Thread(
target=self._execute, args=(agent, self, context, tools)
)
self.thread.start()
self._thread.start()
else:
result = self._execute(
task=self,
@@ -162,24 +218,29 @@ class Task(BaseModel):
)
return result
def _execute(self, agent, task, context, tools):
def _execute(self, agent: "BaseAgent", task, context, tools):
result = agent.execute_task(
task=task,
context=context,
tools=tools,
)
exported_output = self._export_output(result)
# type: ignore # the responses are usually str but need to figure out a more elegant solution here
self.output = TaskOutput(
description=self.description,
exported_output=exported_output,
raw_output=result,
agent=agent.role,
)
if self.callback:
self.callback(self.output)
if self._execution_span:
self._telemetry.task_ended(self._execution_span, self)
self._execution_span = None
return exported_output
def prompt(self) -> str:
@@ -215,6 +276,37 @@ class Task(BaseModel):
"""Increment the delegations counter."""
self.delegations += 1
def copy(self, agents: Optional[List["BaseAgent"]] = None) -> "Task":
"""Create a deep copy of the Task."""
exclude = {
"id",
"agent",
"context",
"tools",
}
copied_data = self.model_dump(exclude=exclude)
copied_data = {k: v for k, v in copied_data.items() if v is not None}
cloned_context = (
[task.copy() for task in self.context] if self.context else None
)
def get_agent_by_role(role: str) -> Union["BaseAgent", None]:
return next((agent for agent in agents if agent.role == role), None)
cloned_agent = get_agent_by_role(self.agent.role) if self.agent else None
cloned_tools = copy(self.tools) if self.tools else []
copied_task = Task(
**copied_data,
context=cloned_context,
agent=cloned_agent,
tools=cloned_tools,
)
return copied_task
def _export_output(self, result: str) -> Any:
exported_result = result
instructions = "I'm gonna convert this raw text into valid JSON."
@@ -224,20 +316,34 @@ class Task(BaseModel):
# try to convert task_output directly to pydantic/json
try:
# type: ignore # Item "None" of "type[BaseModel] | None" has no attribute "model_validate_json"
exported_result = model.model_validate_json(result)
if self.output_json:
# type: ignore # "str" has no attribute "model_dump"
return exported_result.model_dump()
return exported_result
except Exception:
pass
llm = self.agent.function_calling_llm or self.agent.llm
# sometimes the response contains valid JSON in the middle of text
match = re.search(r"({.*})", result, re.DOTALL)
if match:
try:
# type: ignore # Item "None" of "type[BaseModel] | None" has no attribute "model_validate_json"
exported_result = model.model_validate_json(match.group(0))
if self.output_json:
# type: ignore # "str" has no attribute "model_dump"
return exported_result.model_dump()
return exported_result
except Exception:
pass
# type: ignore # Item "None" of "BaseAgent | None" has no attribute "function_calling_llm"
llm = getattr(self.agent, "function_calling_llm", None) or self.agent.llm
if not self._is_gpt(llm):
# type: ignore # Argument "model" to "PydanticSchemaParser" has incompatible type "type[BaseModel] | None"; expected "type[BaseModel]"
model_schema = PydanticSchemaParser(model=model).get_schema()
instructions = f"{instructions}\n\nThe json should have the following structure, with the following keys:\n{model_schema}"
converter = Converter(
converter = self.agent.get_output_converter(
llm=llm, text=result, model=model, instructions=instructions
)
@@ -255,17 +361,27 @@ class Task(BaseModel):
if self.output_file:
content = (
exported_result if not self.output_pydantic else exported_result.json()
# type: ignore # "str" has no attribute "json"
exported_result
if not self.output_pydantic
else exported_result.model_dump_json()
)
self._save_file(content)
return exported_result
def _is_gpt(self, llm) -> bool:
return isinstance(llm, ChatOpenAI) and llm.openai_api_base == None
return isinstance(llm, ChatOpenAI) and llm.openai_api_base is None
def _save_file(self, result: Any) -> None:
with open(self.output_file, "w") as file:
# type: ignore # Value of type variable "AnyOrLiteralStr" of "dirname" cannot be "str | None"
directory = os.path.dirname(self.output_file)
if directory and not os.path.exists(directory):
os.makedirs(directory)
# type: ignore # Argument 1 to "open" has incompatible type "str | None"; expected "int | str | bytes | PathLike[str] | PathLike[bytes]"
with open(self.output_file, "w", encoding="utf-8") as file:
file.write(result)
return None
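With the thread handle now private (_thread), asynchronous tasks are joined through wait_for_completion instead of callers touching task.thread directly. A toy model of just that plumbing; MiniTask is illustrative, not the real Task class, and no agents are involved:

import threading
import time

class MiniTask:
    # Minimal stand-in for Task's async path: the worker thread is private
    # and the result is retrieved only through wait_for_completion().
    def __init__(self):
        self._thread = None
        self.output = None

    def execute_async(self):
        self._thread = threading.Thread(target=self._execute)
        self._thread.start()

    def _execute(self):
        time.sleep(0.1)  # pretend to do agent work
        self.output = "done"

    def wait_for_completion(self):
        assert self._thread is not None, "Task was not started asynchronously."
        self._thread.join()
        self._thread = None
        assert self.output is not None, "Task output is not set."
        return self.output

t = MiniTask()
t.execute_async()
print(t.wait_for_completion())  # "done"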

View File

@@ -11,6 +11,7 @@ class TaskOutput(BaseModel):
exported_output: Union[str, BaseModel] = Field(
description="Output of the task", default=None
)
agent: str = Field(description="Agent that executed the task")
raw_output: str = Field(description="Result of the task")
@model_validator(mode="after")

View File

@@ -1,46 +0,0 @@
-----BEGIN CERTIFICATE-----
MIIDqDCCAy6gAwIBAgIRAPNkTmtuAFAjfglGvXvh9R0wCgYIKoZIzj0EAwMwgYgx
CzAJBgNVBAYTAlVTMRMwEQYDVQQIEwpOZXcgSmVyc2V5MRQwEgYDVQQHEwtKZXJz
ZXkgQ2l0eTEeMBwGA1UEChMVVGhlIFVTRVJUUlVTVCBOZXR3b3JrMS4wLAYDVQQD
EyVVU0VSVHJ1c3QgRUNDIENlcnRpZmljYXRpb24gQXV0aG9yaXR5MB4XDTE4MTEw
MjAwMDAwMFoXDTMwMTIzMTIzNTk1OVowgY8xCzAJBgNVBAYTAkdCMRswGQYDVQQI
ExJHcmVhdGVyIE1hbmNoZXN0ZXIxEDAOBgNVBAcTB1NhbGZvcmQxGDAWBgNVBAoT
D1NlY3RpZ28gTGltaXRlZDE3MDUGA1UEAxMuU2VjdGlnbyBFQ0MgRG9tYWluIFZh
bGlkYXRpb24gU2VjdXJlIFNlcnZlciBDQTBZMBMGByqGSM49AgEGCCqGSM49AwEH
A0IABHkYk8qfbZ5sVwAjBTcLXw9YWsTef1Wj6R7W2SUKiKAgSh16TwUwimNJE4xk
IQeV/To14UrOkPAY9z2vaKb71EijggFuMIIBajAfBgNVHSMEGDAWgBQ64QmG1M8Z
wpZ2dEl23OA1xmNjmjAdBgNVHQ4EFgQU9oUKOxGG4QR9DqoLLNLuzGR7e64wDgYD
VR0PAQH/BAQDAgGGMBIGA1UdEwEB/wQIMAYBAf8CAQAwHQYDVR0lBBYwFAYIKwYB
BQUHAwEGCCsGAQUFBwMCMBsGA1UdIAQUMBIwBgYEVR0gADAIBgZngQwBAgEwUAYD
VR0fBEkwRzBFoEOgQYY/aHR0cDovL2NybC51c2VydHJ1c3QuY29tL1VTRVJUcnVz
dEVDQ0NlcnRpZmljYXRpb25BdXRob3JpdHkuY3JsMHYGCCsGAQUFBwEBBGowaDA/
BggrBgEFBQcwAoYzaHR0cDovL2NydC51c2VydHJ1c3QuY29tL1VTRVJUcnVzdEVD
Q0FkZFRydXN0Q0EuY3J0MCUGCCsGAQUFBzABhhlodHRwOi8vb2NzcC51c2VydHJ1
c3QuY29tMAoGCCqGSM49BAMDA2gAMGUCMEvnx3FcsVwJbZpCYF9z6fDWJtS1UVRs
cS0chWBNKPFNpvDKdrdKRe+oAkr2jU+ubgIxAODheSr2XhcA7oz9HmedGdMhlrd9
4ToKFbZl+/OnFFzqnvOhcjHvClECEQcKmc8fmA==
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
MIID0zCCArugAwIBAgIQVmcdBOpPmUxvEIFHWdJ1lDANBgkqhkiG9w0BAQwFADB7
MQswCQYDVQQGEwJHQjEbMBkGA1UECAwSR3JlYXRlciBNYW5jaGVzdGVyMRAwDgYD
VQQHDAdTYWxmb3JkMRowGAYDVQQKDBFDb21vZG8gQ0EgTGltaXRlZDEhMB8GA1UE
AwwYQUFBIENlcnRpZmljYXRlIFNlcnZpY2VzMB4XDTE5MDMxMjAwMDAwMFoXDTI4
MTIzMTIzNTk1OVowgYgxCzAJBgNVBAYTAlVTMRMwEQYDVQQIEwpOZXcgSmVyc2V5
MRQwEgYDVQQHEwtKZXJzZXkgQ2l0eTEeMBwGA1UEChMVVGhlIFVTRVJUUlVTVCBO
ZXR3b3JrMS4wLAYDVQQDEyVVU0VSVHJ1c3QgRUNDIENlcnRpZmljYXRpb24gQXV0
aG9yaXR5MHYwEAYHKoZIzj0CAQYFK4EEACIDYgAEGqxUWqn5aCPnetUkb1PGWthL
q8bVttHmc3Gu3ZzWDGH926CJA7gFFOxXzu5dP+Ihs8731Ip54KODfi2X0GHE8Znc
JZFjq38wo7Rw4sehM5zzvy5cU7Ffs30yf4o043l5o4HyMIHvMB8GA1UdIwQYMBaA
FKARCiM+lvEH7OKvKe+CpX/QMKS0MB0GA1UdDgQWBBQ64QmG1M8ZwpZ2dEl23OA1
xmNjmjAOBgNVHQ8BAf8EBAMCAYYwDwYDVR0TAQH/BAUwAwEB/zARBgNVHSAECjAI
MAYGBFUdIAAwQwYDVR0fBDwwOjA4oDagNIYyaHR0cDovL2NybC5jb21vZG9jYS5j
b20vQUFBQ2VydGlmaWNhdGVTZXJ2aWNlcy5jcmwwNAYIKwYBBQUHAQEEKDAmMCQG
CCsGAQUFBzABhhhodHRwOi8vb2NzcC5jb21vZG9jYS5jb20wDQYJKoZIhvcNAQEM
BQADggEBABns652JLCALBIAdGN5CmXKZFjK9Dpx1WywV4ilAbe7/ctvbq5AfjJXy
ij0IckKJUAfiORVsAYfZFhr1wHUrxeZWEQff2Ji8fJ8ZOd+LygBkc7xGEJuTI42+
FsMuCIKchjN0djsoTI0DQoWz4rIjQtUfenVqGtF8qmchxDM6OW1TyaLtYiKou+JV
bJlsQ2uRl9EMC5MCHdK8aXdJ5htN978UeAOwproLtOGFfy/cQjutdAFI3tZs4RmY
CV4Ks2dH/hzg1cEo70qLRDEmBDeNiXQ2Lu+lIg+DdEmSx/cQwgwp+7e9un/jX9Wf
8qn0dNW44bOwgeThpWOjzOoEeJBuv/c=
-----END CERTIFICATE-----

View File

@@ -1,9 +1,10 @@
from __future__ import annotations
import asyncio
import importlib.resources
import json
import os
import platform
from typing import Any
from typing import TYPE_CHECKING, Any
import pkg_resources
from opentelemetry import trace
@@ -11,7 +12,11 @@ from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExport
from opentelemetry.sdk.resources import SERVICE_NAME, Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.trace import Status, StatusCode
from opentelemetry.trace import Span, Status, StatusCode
if TYPE_CHECKING:
from crewai.crew import Crew
from crewai.task import Task
class Telemetry:
@@ -41,19 +46,17 @@ class Telemetry:
def __init__(self):
self.ready = False
self.trace_set = False
try:
telemetry_endpoint = "https://telemetry.crewai.com:4319"
self.resource = Resource(
attributes={SERVICE_NAME: "crewAI-telemetry"},
)
self.provider = TracerProvider(resource=self.resource)
cert_file = importlib.resources.files("crewai.telemetry").joinpath(
"STAR_crewai_com_bundle.pem"
)
processor = BatchSpanProcessor(
OTLPSpanExporter(
endpoint=f"{telemetry_endpoint}/v1/traces",
certificate_file=cert_file,
timeout=30,
)
)
@@ -69,13 +72,13 @@ class Telemetry:
self.ready = False
def set_tracer(self):
if self.ready:
provider = trace.get_tracer_provider()
if provider is None:
try:
trace.set_tracer_provider(self.provider)
except Exception:
self.ready = False
if self.ready and not self.trace_set:
try:
trace.set_tracer_provider(self.provider)
self.trace_set = True
except Exception:
self.ready = False
self.trace_set = False
def crew_creation(self, crew):
"""Records the creation of a crew."""
@@ -91,114 +94,9 @@ class Telemetry:
self._add_attribute(span, "python_version", platform.python_version())
self._add_attribute(span, "crew_id", str(crew.id))
self._add_attribute(span, "crew_process", crew.process)
self._add_attribute(span, "crew_language", crew.language)
self._add_attribute(span, "crew_memory", crew.memory)
self._add_attribute(span, "crew_number_of_tasks", len(crew.tasks))
self._add_attribute(span, "crew_number_of_agents", len(crew.agents))
self._add_attribute(
span,
"crew_agents",
json.dumps(
[
{
"id": str(agent.id),
"role": agent.role,
"memory_enabled?": agent.memory,
"verbose?": agent.verbose,
"max_iter": agent.max_iter,
"max_rpm": agent.max_rpm,
"i18n": agent.i18n.language,
"llm": json.dumps(self._safe_llm_attributes(agent.llm)),
"delegation_enabled?": agent.allow_delegation,
"tools_names": [
tool.name.casefold() for tool in agent.tools
],
}
for agent in crew.agents
]
),
)
self._add_attribute(
span,
"crew_tasks",
json.dumps(
[
{
"id": str(task.id),
"async_execution?": task.async_execution,
"agent_role": task.agent.role if task.agent else "None",
"tools_names": [
tool.name.casefold() for tool in task.tools
],
}
for task in crew.tasks
]
),
)
self._add_attribute(span, "platform", platform.platform())
self._add_attribute(span, "platform_release", platform.release())
self._add_attribute(span, "platform_system", platform.system())
self._add_attribute(span, "platform_version", platform.version())
self._add_attribute(span, "cpus", os.cpu_count())
span.set_status(Status(StatusCode.OK))
span.end()
except Exception:
pass
def tool_repeated_usage(self, llm: Any, tool_name: str, attempts: int):
"""Records the repeated usage 'error' of a tool by an agent."""
if self.ready:
try:
tracer = trace.get_tracer("crewai.telemetry")
span = tracer.start_span("Tool Repeated Usage")
self._add_attribute(span, "tool_name", tool_name)
self._add_attribute(span, "attempts", attempts)
self._add_attribute(
span, "llm", json.dumps(self._safe_llm_attributes(llm))
)
span.set_status(Status(StatusCode.OK))
span.end()
except Exception:
pass
def tool_usage(self, llm: Any, tool_name: str, attempts: int):
"""Records the usage of a tool by an agent."""
if self.ready:
try:
tracer = trace.get_tracer("crewai.telemetry")
span = tracer.start_span("Tool Usage")
self._add_attribute(span, "tool_name", tool_name)
self._add_attribute(span, "attempts", attempts)
self._add_attribute(
span, "llm", json.dumps(self._safe_llm_attributes(llm))
)
span.set_status(Status(StatusCode.OK))
span.end()
except Exception:
pass
def tool_usage_error(self, llm: Any):
"""Records the usage of a tool by an agent."""
if self.ready:
try:
tracer = trace.get_tracer("crewai.telemetry")
span = tracer.start_span("Tool Usage Error")
self._add_attribute(
span, "llm", json.dumps(self._safe_llm_attributes(llm))
)
span.set_status(Status(StatusCode.OK))
span.end()
except Exception:
pass
def crew_execution_span(self, crew):
"""Records the complete execution of a crew.
This is only collected if the user has opted-in to share the crew.
"""
if (self.ready) and (crew.share_crew):
try:
tracer = trace.get_tracer("crewai.telemetry")
span = tracer.start_span("Crew Execution")
self._add_attribute(span, "crew_id", str(crew.id))
self._add_attribute(
span,
"crew_agents",
@@ -209,11 +107,10 @@ class Telemetry:
"role": agent.role,
"goal": agent.goal,
"backstory": agent.backstory,
"memory_enabled?": agent.memory,
"verbose?": agent.verbose,
"max_iter": agent.max_iter,
"max_rpm": agent.max_rpm,
"i18n": agent.i18n.language,
"i18n": agent.i18n.prompt_file,
"llm": json.dumps(self._safe_llm_attributes(agent.llm)),
"delegation_enabled?": agent.allow_delegation,
"tools_names": [
@@ -232,12 +129,15 @@ class Telemetry:
{
"id": str(task.id),
"description": task.description,
"expected_output": task.expected_output,
"async_execution?": task.async_execution,
"output": task.expected_output,
"human_input?": task.human_input,
"agent_role": task.agent.role if task.agent else "None",
"context": [task.description for task in task.context]
if task.context
else "None",
"context": (
[task.description for task in task.context]
if task.context
else None
),
"tools_names": [
tool.name.casefold() for tool in task.tools
],
@@ -246,6 +146,176 @@ class Telemetry:
]
),
)
self._add_attribute(span, "platform", platform.platform())
self._add_attribute(span, "platform_release", platform.release())
self._add_attribute(span, "platform_system", platform.system())
self._add_attribute(span, "platform_version", platform.version())
self._add_attribute(span, "cpus", os.cpu_count())
span.set_status(Status(StatusCode.OK))
span.end()
except Exception:
pass
def task_started(self, task: Task) -> Span | None:
"""Records task started in a crew."""
if self.ready:
try:
tracer = trace.get_tracer("crewai.telemetry")
span = tracer.start_span("Task Execution")
self._add_attribute(span, "task_id", str(task.id))
self._add_attribute(span, "formatted_description", task.description)
self._add_attribute(
span, "formatted_expected_output", task.expected_output
)
return span
except Exception:
pass
return None
def task_ended(self, span: Span, task: Task):
"""Records task execution in a crew."""
if self.ready:
try:
self._add_attribute(
span, "output", task.output.raw_output if task.output else ""
)
span.set_status(Status(StatusCode.OK))
span.end()
except Exception:
pass
def tool_repeated_usage(self, llm: Any, tool_name: str, attempts: int):
"""Records the repeated usage 'error' of a tool by an agent."""
if self.ready:
try:
tracer = trace.get_tracer("crewai.telemetry")
span = tracer.start_span("Tool Repeated Usage")
self._add_attribute(
span,
"crewai_version",
pkg_resources.get_distribution("crewai").version,
)
self._add_attribute(span, "tool_name", tool_name)
self._add_attribute(span, "attempts", attempts)
if llm:
self._add_attribute(
span, "llm", json.dumps(self._safe_llm_attributes(llm))
)
span.set_status(Status(StatusCode.OK))
span.end()
except Exception:
pass
def tool_usage(self, llm: Any, tool_name: str, attempts: int):
"""Records the usage of a tool by an agent."""
if self.ready:
try:
tracer = trace.get_tracer("crewai.telemetry")
span = tracer.start_span("Tool Usage")
self._add_attribute(
span,
"crewai_version",
pkg_resources.get_distribution("crewai").version,
)
self._add_attribute(span, "tool_name", tool_name)
self._add_attribute(span, "attempts", attempts)
if llm:
self._add_attribute(
span, "llm", json.dumps(self._safe_llm_attributes(llm))
)
span.set_status(Status(StatusCode.OK))
span.end()
except Exception:
pass
def tool_usage_error(self, llm: Any):
"""Records the usage of a tool by an agent."""
if self.ready:
try:
tracer = trace.get_tracer("crewai.telemetry")
span = tracer.start_span("Tool Usage Error")
self._add_attribute(
span,
"crewai_version",
pkg_resources.get_distribution("crewai").version,
)
if llm:
self._add_attribute(
span, "llm", json.dumps(self._safe_llm_attributes(llm))
)
span.set_status(Status(StatusCode.OK))
span.end()
except Exception:
pass
def crew_execution_span(self, crew: Crew, inputs: dict[str, Any] | None):
"""Records the complete execution of a crew.
This is only collected if the user has opted-in to share the crew.
"""
if (self.ready) and (crew.share_crew):
try:
tracer = trace.get_tracer("crewai.telemetry")
span = tracer.start_span("Crew Execution")
self._add_attribute(
span,
"crewai_version",
pkg_resources.get_distribution("crewai").version,
)
self._add_attribute(span, "crew_id", str(crew.id))
self._add_attribute(span, "inputs", json.dumps(inputs))
self._add_attribute(
span,
"crew_agents",
json.dumps(
[
{
"id": str(agent.id),
"role": agent.role,
"goal": agent.goal,
"backstory": agent.backstory,
"verbose?": agent.verbose,
"max_iter": agent.max_iter,
"max_rpm": agent.max_rpm,
"i18n": agent.i18n.prompt_file,
"llm": json.dumps(self._safe_llm_attributes(agent.llm)),
"delegation_enabled?": agent.allow_delegation,
"tools_names": [
tool.name.casefold() for tool in agent.tools or []
],
}
for agent in crew.agents
]
),
)
self._add_attribute(
span,
"crew_tasks",
json.dumps(
[
{
"id": str(task.id),
"description": task.description,
"expected_output": task.expected_output,
"async_execution?": task.async_execution,
"human_input?": task.human_input,
"agent_role": task.agent.role if task.agent else "None",
"context": (
[task.description for task in task.context]
if task.context
else None
),
"tools_names": [
tool.name.casefold() for tool in task.tools or []
],
}
for task in crew.tasks
]
),
)
return span
except Exception:
pass
@@ -253,6 +323,11 @@ class Telemetry:
def end_crew(self, crew, output):
if (self.ready) and (crew.share_crew):
try:
self._add_attribute(
crew._execution_span,
"crewai_version",
pkg_resources.get_distribution("crewai").version,
)
self._add_attribute(crew._execution_span, "crew_output", output)
self._add_attribute(
crew._execution_span,
@@ -282,6 +357,8 @@ class Telemetry:
def _safe_llm_attributes(self, llm):
attributes = ["name", "model_name", "base_url", "model", "top_k", "temperature"]
safe_attributes = {k: v for k, v in vars(llm).items() if k in attributes}
safe_attributes["class"] = llm.__class__.__name__
return safe_attributes
if llm:
safe_attributes = {k: v for k, v in vars(llm).items() if k in attributes}
safe_attributes["class"] = llm.__class__.__name__
return safe_attributes
return {}
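_safe_llm_attributes whitelists a few non-sensitive fields and, after this change, tolerates llm=None. A standalone sketch with a fake LLM object; nothing here is the real ChatOpenAI:

class FakeLLM:
    def __init__(self):
        self.model_name = "gpt-4o"
        self.temperature = 0.2
        self.openai_api_key = "sk-secret"  # must never reach telemetry

def safe_llm_attributes(llm):
    # Same whitelist as above: only these keys survive, plus the class name.
    attributes = ["name", "model_name", "base_url", "model", "top_k", "temperature"]
    if llm:
        safe = {k: v for k, v in vars(llm).items() if k in attributes}
        safe["class"] = llm.__class__.__name__
        return safe
    return {}

print(safe_llm_attributes(FakeLLM()))  # {'model_name': 'gpt-4o', 'temperature': 0.2, 'class': 'FakeLLM'}
print(safe_llm_attributes(None))       # {}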

View File

@@ -1,72 +1,25 @@
from typing import List
from langchain.tools import StructuredTool
from pydantic import BaseModel, Field
from crewai.agent import Agent
from crewai.task import Task
from crewai.utilities import I18N
from crewai.agents.agent_builder.utilities.base_agent_tool import BaseAgentTools
class AgentTools(BaseModel):
class AgentTools(BaseAgentTools):
"""Default tools around agent delegation"""
agents: List[Agent] = Field(description="List of agents in this crew.")
i18n: I18N = Field(default=I18N(), description="Internationalization settings.")
def tools(self):
coworkers = f"[{', '.join([f'{agent.role}' for agent in self.agents])}]"
tools = [
StructuredTool.from_function(
func=self.delegate_work,
name="Delegate work to co-worker",
name="Delegate work to coworker",
description=self.i18n.tools("delegate_work").format(
coworkers=f"[{', '.join([f'{agent.role}' for agent in self.agents])}]"
coworkers=coworkers
),
),
StructuredTool.from_function(
func=self.ask_question,
name="Ask question to co-worker",
description=self.i18n.tools("ask_question").format(
coworkers=f"[{', '.join([f'{agent.role}' for agent in self.agents])}]"
),
name="Ask question to coworker",
description=self.i18n.tools("ask_question").format(coworkers=coworkers),
),
]
return tools
def delegate_work(self, coworker: str, task: str, context: str):
"""Useful to delegate a specific task to a coworker passing all necessary context and names."""
return self._execute(coworker, task, context)
def ask_question(self, coworker: str, question: str, context: str):
"""Useful to ask a question, opinion or take from a coworker passing all necessary context and names."""
return self._execute(coworker, question, context)
def _execute(self, agent, task, context):
"""Execute the command."""
try:
agent = [
available_agent
for available_agent in self.agents
if available_agent.role.casefold().strip() == agent.casefold().strip()
]
except:
return self.i18n.errors("agent_tool_unexsiting_coworker").format(
coworkers="\n".join(
[f"- {agent.role.casefold()}" for agent in self.agents]
)
)
if not agent:
return self.i18n.errors("agent_tool_unexsiting_coworker").format(
coworkers="\n".join(
[f"- {agent.role.casefold()}" for agent in self.agents]
)
)
agent = agent[0]
task = Task(
description=task,
agent=agent,
expected_output="Your best answer to your coworker asking you this, accounting for the context shared.",
)
return agent.execute_task(task, context)
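The delegation lookup in the removed _execute (which now lives in BaseAgentTools) matches coworkers by role, case-folded and stripped, so " senior researcher " still resolves. A sketch of just the matching rule with stand-in agents; find_agent is an illustrative helper:

from types import SimpleNamespace

agents = [SimpleNamespace(role="Senior Researcher"), SimpleNamespace(role="Writer")]

def find_agent(agents, name):
    # Case-fold and strip both sides before comparing roles, as in _execute above.
    matches = [a for a in agents if a.role.casefold().strip() == name.casefold().strip()]
    return matches[0] if matches else None

print(find_agent(agents, "  senior researcher ").role)  # Senior Researcher
print(find_agent(agents, "Editor"))  # None, which triggers the agent_tool_unexsiting_coworker error path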

View File

@@ -1,4 +1,5 @@
import ast
from difflib import SequenceMatcher
from textwrap import dedent
from typing import Any, List, Union
@@ -10,6 +11,12 @@ from crewai.telemetry import Telemetry
from crewai.tools.tool_calling import InstructorToolCalling, ToolCalling
from crewai.utilities import I18N, Converter, ConverterError, Printer
agentops = None
try:
import agentops
except ImportError:
pass
OPENAI_BIGGER_MODELS = ["gpt-4"]
@@ -26,13 +33,13 @@ class ToolUsage:
Class that represents the usage of a tool by an agent.
Attributes:
task: Task being executed.
tools_handler: Tools handler that will manage the tool usage.
tools: List of tools available for the agent.
original_tools: Original tools available for the agent before being converted to BaseTool.
tools_description: Description of the tools available for the agent.
tools_names: Names of the tools available for the agent.
function_calling_llm: Language model to be used for the tool usage.
task: Task being executed.
tools_handler: Tools handler that will manage the tool usage.
tools: List of tools available for the agent.
original_tools: Original tools available for the agent before being converted to BaseTool.
tools_description: Description of the tools available for the agent.
tools_names: Names of the tools available for the agent.
function_calling_llm: Language model to be used for the tool usage.
"""
def __init__(
@@ -63,7 +70,7 @@ class ToolUsage:
# Set the maximum parsing attempts for bigger models
if (isinstance(self.function_calling_llm, ChatOpenAI)) and (
self.function_calling_llm.openai_api_base == None
self.function_calling_llm.openai_api_base is None
):
if self.function_calling_llm.model_name in OPENAI_BIGGER_MODELS:
self._max_parsing_attempts = 2
@@ -81,6 +88,8 @@ class ToolUsage:
self._printer.print(content=f"\n\n{error}\n", color="red")
self.task.increment_tools_errors()
return error
# BUG? The code below seems to be unreachable
try:
tool = self._select_tool(calling.tool_name)
except Exception as e:
@@ -88,48 +97,50 @@ class ToolUsage:
self.task.increment_tools_errors()
self._printer.print(content=f"\n\n{error}\n", color="red")
return error
return f"{self._use(tool_string=tool_string, tool=tool, calling=calling)}"
return f"{self._use(tool_string=tool_string, tool=tool, calling=calling)}" # type: ignore # BUG?: "_use" of "ToolUsage" does not return a value (it only ever returns None)
def _use(
self,
tool_string: str,
tool: BaseTool,
calling: Union[ToolCalling, InstructorToolCalling],
) -> None:
if self._check_tool_repeated_usage(calling=calling):
) -> str: # TODO: Fix this return type
tool_event = agentops.ToolEvent(name=calling.tool_name) if agentops else None
if self._check_tool_repeated_usage(calling=calling): # type: ignore # _check_tool_repeated_usage of "ToolUsage" does not return a value (it only ever returns None)
try:
result = self._i18n.errors("task_repeated_usage").format(
tool_names=self.tools_names
)
self._printer.print(content=f"\n\n{result}\n", color="yellow")
self._printer.print(content=f"\n\n{result}\n", color="purple")
self._telemetry.tool_repeated_usage(
llm=self.function_calling_llm,
tool_name=tool.name,
attempts=self._run_attempts,
)
result = self._format_result(result=result)
return result
result = self._format_result(result=result) # type: ignore # "_format_result" of "ToolUsage" does not return a value (it only ever returns None)
return result # type: ignore # Fix the return type of this function
except Exception:
self.task.increment_tools_errors()
result = None
result = None # type: ignore # Incompatible types in assignment (expression has type "None", variable has type "str")
if self.tools_handler.cache:
result = self.tools_handler.cache.read(
result = self.tools_handler.cache.read( # type: ignore # Incompatible types in assignment (expression has type "str | None", variable has type "str")
tool=calling.tool_name, input=calling.arguments
)
if not result:
if result is None: #! finecwg: if not result --> if result is None
try:
if calling.tool_name in [
"Delegate work to co-worker",
"Ask question to co-worker",
"Delegate work to coworker",
"Ask question to coworker",
]:
self.task.increment_delegations()
if calling.arguments:
try:
acceptable_args = tool.args_schema.schema()["properties"].keys()
acceptable_args = tool.args_schema.schema()["properties"].keys() # type: ignore # Item "None" of "type[BaseModel] | None" has no attribute "schema"
arguments = {
k: v
for k, v in calling.arguments.items()
@@ -141,7 +152,7 @@ class ToolUsage:
arguments = calling.arguments
result = tool._run(**arguments)
else:
arguments = calling.arguments.values()
arguments = calling.arguments.values() # type: ignore # Incompatible types in assignment (expression has type "dict_values[str, Any]", variable has type "dict[str, Any]")
result = tool._run(*arguments)
else:
result = tool._run()
@@ -157,9 +168,14 @@ class ToolUsage:
).message
self.task.increment_tools_errors()
self._printer.print(content=f"\n\n{error_message}\n", color="red")
return error
return error # type: ignore # No return value expected
self.task.increment_tools_errors()
return self.use(calling=calling, tool_string=tool_string)
if agentops:
agentops.record(
agentops.ErrorEvent(exception=e, trigger_event=tool_event)
)
return self.use(calling=calling, tool_string=tool_string) # type: ignore # No return value expected
if self.tools_handler:
should_cache = True
@@ -168,9 +184,9 @@ class ToolUsage:
)
if (
hasattr(original_tool, "cache_function")
and original_tool.cache_function
and original_tool.cache_function # type: ignore # Item "None" of "Any | None" has no attribute "cache_function"
):
should_cache = original_tool.cache_function(
should_cache = original_tool.cache_function( # type: ignore # Item "None" of "Any | None" has no attribute "cache_function"
calling.arguments, result
)
@@ -178,19 +194,21 @@ class ToolUsage:
calling=calling, output=result, should_cache=should_cache
)
self._printer.print(content=f"\n\n{result}\n", color="yellow")
self._printer.print(content=f"\n\n{result}\n", color="purple")
if agentops:
agentops.record(tool_event)
self._telemetry.tool_usage(
llm=self.function_calling_llm,
tool_name=tool.name,
attempts=self._run_attempts,
)
result = self._format_result(result=result)
return result
)
result = self._format_result(result=result) # type: ignore # "_format_result" of "ToolUsage" does not return a value (it only ever returns None)
return result # type: ignore # No return value expected
def _format_result(self, result: Any) -> None:
self.task.used_tools += 1
if self._should_remember_format():
result = self._remember_format(result=result)
if self._should_remember_format(): # type: ignore # "_should_remember_format" of "ToolUsage" does not return a value (it only ever returns None)
result = self._remember_format(result=result) # type: ignore # "_remember_format" of "ToolUsage" does not return a value (it only ever returns None)
return result
def _should_remember_format(self) -> None:
@@ -201,26 +219,39 @@ class ToolUsage:
result += "\n\n" + self._i18n.slice("tools").format(
tools=self.tools_description, tool_names=self.tools_names
)
return result
return result # type: ignore # No return value expected
def _check_tool_repeated_usage(
self, calling: Union[ToolCalling, InstructorToolCalling]
) -> None:
if not self.tools_handler:
return False
return False # type: ignore # No return value expected
if last_tool_usage := self.tools_handler.last_used_tool:
return (calling.tool_name == last_tool_usage.tool_name) and (
return (calling.tool_name == last_tool_usage.tool_name) and ( # type: ignore # No return value expected
calling.arguments == last_tool_usage.arguments
)
def _select_tool(self, tool_name: str) -> BaseTool:
for tool in self.tools:
if tool.name.lower().strip() == tool_name.lower().strip():
order_tools = sorted(
self.tools,
key=lambda tool: SequenceMatcher(
None, tool.name.lower().strip(), tool_name.lower().strip()
).ratio(),
reverse=True,
)
for tool in order_tools:
if (
tool.name.lower().strip() == tool_name.lower().strip()
or SequenceMatcher(
None, tool.name.lower().strip(), tool_name.lower().strip()
).ratio()
> 0.85
):
return tool
self.task.increment_tools_errors()
if tool_name and tool_name != "":
raise Exception(
f"Action '{tool_name}' don't exist, these are the only available Actions: {self.tools_description}"
f"Action '{tool_name}' don't exist, these are the only available Actions:\n {self.tools_description}"
)
else:
raise Exception(
@@ -247,7 +278,7 @@ class ToolUsage:
return "\n--\n".join(descriptions)
def _is_gpt(self, llm) -> bool:
return isinstance(llm, ChatOpenAI) and llm.openai_api_base == None
return isinstance(llm, ChatOpenAI) and llm.openai_api_base is None
def _tool_calling(
self, tool_string: str
@@ -260,17 +291,17 @@ class ToolUsage:
else ToolCalling
)
converter = Converter(
text=f"Only tools available:\n###\n{self._render()}\n\nReturn a valid schema for the tool, the tool name must be exactly equal one of the options, use this text to inform the valid ouput schema:\n\n{tool_string}```",
text=f"Only tools available:\n###\n{self._render()}\n\nReturn a valid schema for the tool, the tool name must be exactly equal one of the options, use this text to inform the valid output schema:\n\n{tool_string}```",
llm=self.function_calling_llm,
model=model,
instructions=dedent(
"""\
The schema should have the following structure, only two keys:
- tool_name: str
- arguments: dict (with all arguments being passed)
The schema should have the following structure, only two keys:
- tool_name: str
- arguments: dict (with all arguments being passed)
Example:
{"tool_name": "tool name", "arguments": {"arg_name1": "value", "arg_name2": 2}}""",
Example:
{"tool_name": "tool name", "arguments": {"arg_name1": "value", "arg_name2": 2}}""",
),
max_attemps=1,
)
@@ -282,16 +313,17 @@ class ToolUsage:
tool_name = self.action.tool
tool = self._select_tool(tool_name)
try:
arguments = ast.literal_eval(self.action.tool_input)
tool_input = self._validate_tool_input(self.action.tool_input)
arguments = ast.literal_eval(tool_input)
except Exception:
return ToolUsageErrorException(
return ToolUsageErrorException( # type: ignore # Incompatible return value type (got "ToolUsageErrorException", expected "ToolCalling | InstructorToolCalling")
f'{self._i18n.errors("tool_arguments_error")}'
)
if not isinstance(arguments, dict):
return ToolUsageErrorException(
return ToolUsageErrorException( # type: ignore # Incompatible return value type (got "ToolUsageErrorException", expected "ToolCalling | InstructorToolCalling")
f'{self._i18n.errors("tool_arguments_error")}'
)
calling = ToolCalling(
calling = ToolCalling( # type: ignore # Unexpected keyword argument "log" for "ToolCalling"
tool_name=tool.name,
arguments=arguments,
log=tool_string,
@@ -302,9 +334,60 @@ class ToolUsage:
self._telemetry.tool_usage_error(llm=self.function_calling_llm)
self.task.increment_tools_errors()
self._printer.print(content=f"\n\n{e}\n", color="red")
return ToolUsageErrorException(
return ToolUsageErrorException( # type: ignore # Incompatible return value type (got "ToolUsageErrorException", expected "ToolCalling | InstructorToolCalling")
f'{self._i18n.errors("tool_usage_error").format(error=e)}\nMoving on then. {self._i18n.slice("format").format(tool_names=self.tools_names)}'
)
return self._tool_calling(tool_string)
return calling
def _validate_tool_input(self, tool_input: str) -> str:
try:
ast.literal_eval(tool_input)
return tool_input
except Exception:
# Clean and ensure the string is properly enclosed in braces
tool_input = tool_input.strip()
if not tool_input.startswith("{"):
tool_input = "{" + tool_input
if not tool_input.endswith("}"):
tool_input += "}"
# Manually split the input into key-value pairs
entries = tool_input.strip("{} ").split(",")
formatted_entries = []
for entry in entries:
if ":" not in entry:
continue # Skip malformed entries
key, value = entry.split(":", 1)
# Remove extraneous white spaces and quotes, replace single quotes
key = key.strip().strip('"').replace("'", '"')
value = value.strip()
# Handle replacement of single quotes at the start and end of the value string
if value.startswith("'") and value.endswith("'"):
value = value[1:-1] # Remove single quotes
formatted_value = (
'"' + value.replace('"', '\\"') + '"'
) # Re-encapsulate with double quotes so formatted_value is defined for this entry
elif value.isdigit(): # Check if value is a digit, hence integer
formatted_value = value
elif value.lower() in [
"true",
"false",
"null",
]: # Check for boolean and null values
formatted_value = value.lower()
else:
# Assume the value is a string and needs quotes
formatted_value = '"' + value.replace('"', '\\"') + '"'
# Rebuild the entry with proper quoting
formatted_entry = f'"{key}": {formatted_value}'
formatted_entries.append(formatted_entry)
# Reconstruct the JSON string
new_json_string = "{" + ", ".join(formatted_entries) + "}"
return new_json_string
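_validate_tool_input turns loosely quoted pseudo-dicts into parseable JSON. Since the method never touches self, it can be exercised with a placeholder instance for illustration; the import path is assumed from this diff's layout, and the sample input is made up:

import ast
from crewai.tools.tool_usage import ToolUsage  # assumed module path

raw = "query: 'latest AI news', limit: 3"       # unquoted key, single-quoted value
fixed = ToolUsage._validate_tool_input(None, raw)
print(fixed)                    # {"query": "latest AI news", "limit": 3}
print(ast.literal_eval(fixed))  # {'query': 'latest AI news', 'limit': 3}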

View File

@@ -6,23 +6,22 @@
},
"slices": {
"observation": "\nObservation",
"task": "\nCurrent Task: {input}\n\nBegin! This is VERY important to you, use the tools available and give your best Final Answer, your job depends on it!\n\nThought: ",
"task": "\nCurrent Task: {input}\n\nBegin! This is VERY important to you, use the tools available and give your best Final Answer, your job depends on it!\n\nThought:",
"memory": "\n\n# Useful context: \n{memory}",
"role_playing": "You are {role}. {backstory}\nYour personal goal is: {goal}",
"tools": "\nYou ONLY have access to the following tools, and should NEVER make up tools that are not listed here:\n\n{tools}\n\nUse the following format:\n\nThought: you should always think about what to do\nAction: the action to take, only one name of [{tool_names}], just the name, exactly as it's written.\nAction Input: the input to the action, just a simple a python dictionary using \" to wrap keys and values.\nObservation: the result of the action\n\nOnce all necessary information is gathered:\n\nThought: I now know the final answer\nFinal Answer: the final answer to the original input question\n",
"tools": "\nYou ONLY have access to the following tools, and should NEVER make up tools that are not listed here:\n\n{tools}\n\nUse the following format:\n\nThought: you should always think about what to do\nAction: the action to take, only one name of [{tool_names}], just the name, exactly as it's written.\nAction Input: the input to the action, just a simple python dictionary, enclosed in curly braces, using \" to wrap keys and values.\nObservation: the result of the action\n\nOnce all necessary information is gathered:\n\nThought: I now know the final answer\nFinal Answer: the final answer to the original input question\n",
"no_tools": "To give my best complete final answer to the task use the exact following format:\n\nThought: I now can give a great answer\nFinal Answer: my best complete final answer to the task.\nYour final answer must be the great and the most complete as possible, it must be outcome described.\n\nI MUST use these formats, my job depends on it!",
"format": "I MUST either use a tool (use one at time) OR give my best final answer. To Use the following format:\n\nThought: you should always think about what to do\nAction: the action to take, should be one of [{tool_names}]\nAction Input: the input to the action, dictionary\nObservation: the result of the action\n... (this Thought/Action/Action Input/Observation can repeat N times)\nThought: I now can give a great answer\nFinal Answer: my best complete final answer to the task.\nYour final answer must be the great and the most complete as possible, it must be outcome described\n\n ",
"format": "I MUST either use a tool (use one at time) OR give my best final answer. To Use the following format:\n\nThought: you should always think about what to do\nAction: the action to take, should be one of [{tool_names}]\nAction Input: the input to the action, dictionary enclosed in curly braces\nObservation: the result of the action\n... (this Thought/Action/Action Input/Observation can repeat N times)\nThought: I now can give a great answer\nFinal Answer: my best complete final answer to the task.\nYour final answer must be the great and the most complete as possible, it must be outcome described\n\n ",
"final_answer_format": "If you don't need to use any more tools, you must give your best complete final answer, make sure it satisfy the expect criteria, use the EXACT format below:\n\nThought: I now can give a great answer\nFinal Answer: my best complete final answer to the task.\n\n",
"format_without_tools": "\nSorry, I didn't use the right format. I MUST either use a tool (among the available ones), OR give my best final answer.\nI just remembered the expected format I must follow:\n\nQuestion: the input question you must answer\nThought: you should always think about what to do\nAction: the action to take, should be one of [{tool_names}]\nAction Input: the input to the action\nObservation: the result of the action\n... (this Thought/Action/Action Input/Observation can repeat N times)\nThought: I now can give a great answer\nFinal Answer: my best complete final answer to the task\nYour final answer must be the great and the most complete as possible, it must be outcome described\n\n",
"task_with_context": "{task}\n\nThis is the context you're working with:\n{context}",
"expected_output": "\nThis is the expect criteria for your final answer: {expected_output} \n you MUST return the actual complete content as the final answer, not a summary.",
"human_feedback": "You got human feedback on your work, re-avaluate it and give a new Final Answer when ready.\n {human_feedback}",
"getting_input": "This is the agent final answer: {final_answer}\nPlease provide a feedback: "
"getting_input": "This is the agent's final answer: {final_answer}\nPlease provide feedback: "
},
"errors": {
"unexpected_format": "\nSorry, I didn't use the expected format, I MUST either use a tool (use one at time) OR give my best final answer.\n",
"force_final_answer": "Tool won't be use because it's time to give your final answer. Don't use tools and just your absolute BEST Final answer.",
"agent_tool_unexsiting_coworker": "\nError executing tool. Co-worker mentioned not found, it must to be one of the following options:\n{coworkers}\n",
"agent_tool_unexsiting_coworker": "\nError executing tool. coworker mentioned not found, it must be one of the following options:\n{coworkers}\n",
"task_repeated_usage": "I tried reusing the same input, I must stop using this action input. I'll try something else instead.\n\n",
"tool_usage_error": "I encountered an error: {error}",
"tool_arguments_error": "Error: the Action Input is not a valid key, value dictionary.",
@@ -30,7 +29,7 @@
"tool_usage_exception": "I encountered an error while trying to use the tool. This was the error: {error}.\n Tool {tool} accepts these inputs: {tool_inputs}"
},
"tools": {
"delegate_work": "Delegate a specific task to one of the following co-workers: {coworkers}\nThe input to this tool should be the coworker, the task you want them to do, and ALL necessary context to exectue the task, they know nothing about the task, so share absolute everything you know, don't reference things but instead explain them.",
"ask_question": "Ask a specific question to one of the following co-workers: {coworkers}\nThe input to this tool should be the coworker, the question you have for them, and ALL necessary context to ask the question properly, they know nothing about the question, so share absolute everything you know, don't reference things but instead explain them."
"delegate_work": "Delegate a specific task to one of the following coworkers: {coworkers}\nThe input to this tool should be the coworker, the task you want them to do, and ALL necessary context to execute the task, they know nothing about the task, so share absolute everything you know, don't reference things but instead explain them.",
"ask_question": "Ask a specific question to one of the following coworkers: {coworkers}\nThe input to this tool should be the coworker, the question you have for them, and ALL necessary context to ask the question properly, they know nothing about the question, so share absolute everything you know, don't reference things but instead explain them."
}
}
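These prompt entries are plain str.format templates; the {coworkers} placeholder, for example, is filled by AgentTools with the bracketed role list. A small illustration using the delegate_work string, truncated here for brevity:

template = (
    "Delegate a specific task to one of the following coworkers: {coworkers}\n"
    "The input to this tool should be the coworker, the task you want them to do, "
    "and ALL necessary context to execute the task..."
)
print(template.format(coworkers="[Researcher, Writer]"))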

View File

@@ -1,7 +1,22 @@
from .converter import Converter, ConverterError
from .file_handler import FileHandler
from .i18n import I18N
from .instructor import Instructor
from .logger import Logger
from .parser import YamlParser
from .printer import Printer
from .prompts import Prompts
from .rpm_controller import RPMController
__all__ = [
"Converter",
"ConverterError",
"FileHandler",
"I18N",
"Instructor",
"Logger",
"Printer",
"Prompts",
"RPMController",
"YamlParser",
]

Some files were not shown because too many files have changed in this diff.