Compare commits

...

119 Commits

Author SHA1 Message Date
João Moura
f675208d72 cutting new version with updated cli template 2024-04-11 11:30:30 -03:00
João Moura
36aa69cf66 Preparing new version to use new version of crewai-tools 2024-04-10 11:52:12 -03:00
Cfomodz
66b77ffd08 Update README.md (#442) 2024-04-08 05:59:04 -03:00
João Moura
d2a3e4869a preparring new version 2024-04-08 02:08:57 -03:00
João Moura
a2dc7c7f31 adding missing import 2024-04-08 02:08:43 -03:00
João Moura
55ac69776a preparing new version 2024-04-08 01:39:22 -03:00
João Moura
7a7c9b0076 removing unnecessary certificate 2024-04-08 01:39:11 -03:00
João Moura
77d40230a8 preparing new version 2024-04-07 14:55:45 -03:00
João Moura
e4556040a8 fixing long temr memory interpolation 2024-04-07 14:55:35 -03:00
João Moura
755b3934a4 preping new verison with new tools package 2024-04-07 14:19:50 -03:00
João Moura
2d77fb72a5 preparing new version 2024-04-07 04:18:05 -03:00
rajib
106b0df42e The suggestions were getting split at character level and not at sentence level (#436)
* fix the issue where the suggestions were split at character level

* Update contextual_memory.py

---------

Co-authored-by: rajib76 <rajib76@yahoo.com>
Co-authored-by: João Moura <joaomdmoura@gmail.com>
2024-04-07 02:57:23 -03:00
João Moura
c31ac4cf7e Updating tool dependency 2024-04-05 22:46:32 -03:00
João Moura
7b309df0c5 preparing new version 2024-04-05 19:52:13 -03:00
shivam singh
326f524e7c doc: Add documentation to Task model. (#363) 2024-04-05 19:49:36 -03:00
高璟琦
315ad20111 add solar example (#373)
Co-authored-by: João Moura <joaomdmoura@gmail.com>
2024-04-05 19:48:27 -03:00
Rueben Ramirez
b1daf17a61 whitespace consistency across docs (#407)
I saw a rendedered whitespace inconsistency in the Tasks docs here:
ed31860071/docs/core-concepts/Tasks.md (L173)

So I set out to patch that up to make it easier to read.  I then noticed
there were a few whitespace inconsistencies:
- 2 spaces
- 4 whitespaces
- tabs

It appears that the 4 whitespaces is the prevalent whitesapce usage, so
I overwrote other whitespace usages with that in this commit.

Co-authored-by: Rueben Ramirez <rramirez@ruebens-mbp.tail7c016.ts.net>
Co-authored-by: João Moura <joaomdmoura@gmail.com>
2024-04-05 19:47:09 -03:00
GabeKoga
9db99befb6 Feature: Log files (#423)
* log_file

feature: added a new parameter for crew that creates a txt file to log agent execution

* unit tests and documentation

unit test if file is created but not what is inside the file
2024-04-05 19:44:50 -03:00
GabeKoga
aebc443b62 purple (#428)
changed from yellow to purple for visibility
2024-04-05 18:25:59 -03:00
João Moura
2c0e5586e8 TYPO 2024-04-05 09:37:51 -03:00
João Moura
25f7557751 fixing memory docs 2024-04-05 08:59:54 -03:00
João Moura
59ebf7b762 adding specific memmory docs 2024-04-05 08:59:20 -03:00
João Moura
1abe9db8e0 Increasing default max inter 2024-04-05 08:36:09 -03:00
João Moura
e4363f9ed8 updating tests 2024-04-05 08:33:31 -03:00
João Moura
e00b545548 adding max execution time 2024-04-05 08:31:25 -03:00
João Moura
1aa32c2036 preparing new version 2024-04-05 08:24:41 -03:00
João Moura
65824ef814 not overriding llm callbacks 2024-04-05 08:24:20 -03:00
João Moura
d17bc33bfb fix docs 2024-04-04 17:36:50 -03:00
João Moura
d874ac92b4 preparing new version 0.27.0 2024-04-04 15:29:45 -03:00
João Moura
0362449fe4 Adding new test for crew memory 2024-04-04 15:29:45 -03:00
João Moura
0d4c062487 Adding link to agentops docs 2024-04-04 15:29:45 -03:00
João Moura
ec622022f9 updating dependendies 2024-04-04 15:29:45 -03:00
João Moura
e9adc3fa4e Removing memory flag from agent in favor of crew memory 2024-04-04 15:29:45 -03:00
João Moura
5bc63a321c TYPO 2024-04-04 15:29:45 -03:00
João Moura
6317380c8d updating tools dependency 2024-04-04 15:29:45 -03:00
João Moura
a7f007f475 Updating docs 2024-04-04 15:29:45 -03:00
Braelyn Boynton
fcffc4a898 AgentOps Docs (#412)
Agentops documentation
2024-04-04 15:09:31 -03:00
ftoppi
8ed4c66117 tasks.py: don't call Converter when model response is valid (#406)
* tasks.py: don't call Converter when model response is valid

Try to convert the task output to the expected Pydantic model before sending it to Converter, maybe the model got it right.
2024-04-04 10:11:46 -03:00
ftoppi
38486223b2 Update Creating-a-Crew-and-kick-it-off.md: add compatible python versions (#420)
* Update Creating-a-Crew-and-kick-it-off.md: add compatible python versions

* Update Creating-a-Crew-and-kick-it-off.md

---------

Co-authored-by: João Moura <joaomdmoura@gmail.com>
2024-04-03 19:10:11 -03:00
João Moura
ac5e7d2b1e preparing new rc 2024-04-03 08:11:30 -03:00
João Moura
cf4138f385 setting fake openai key 2024-04-03 06:56:02 -03:00
João Moura
af7803e94b updating dependencies 2024-04-03 06:03:18 -03:00
João Moura
10b631bfb4 force reseting db in care of change in dimensions 2024-04-03 05:52:35 -03:00
João Moura
76f1c194dc Fixing db path 2024-04-03 05:45:59 -03:00
João Moura
0c9bc95dfc creating db file based on package name 2024-04-03 05:22:20 -03:00
João Moura
6f0d19d916 preparing new version 2024-04-03 05:04:26 -03:00
João Moura
427d3169b6 adding initial memory docs 2024-04-03 05:04:14 -03:00
João Moura
0fc828c816 updating gitignore 2024-04-03 05:04:00 -03:00
João Moura
2d97177eff checking crew before using memory 2024-04-03 05:03:43 -03:00
João Moura
33dfcc700b cutting new version, adding cache_function docs 2024-04-02 14:26:22 -03:00
João Moura
09c8193c8f updating specs 2024-04-02 13:51:16 -03:00
João Moura
4f4128075f updating db storage and dependencies 2024-04-02 13:51:05 -03:00
João Moura
9ab3e67ba2 preparing RC 2024-04-01 14:38:26 -03:00
João Moura
ed31860071 update docs 2024-04-01 11:14:06 -03:00
João Moura
ddb84cc16d Starting i18n language file support 2024-04-01 10:45:17 -03:00
João Moura
5b59e450f7 Adding long term, short term, entity and contextual memory 2024-04-01 10:45:17 -03:00
João Moura
a6c3b1f1d4 updating gitignore 2024-04-01 10:45:17 -03:00
João Moura
bf6b09b9f5 updating dependencies 2024-04-01 10:45:17 -03:00
João Moura
c95eed3fe0 adding editor config 2024-04-01 10:45:17 -03:00
João Moura
9d7cdd56b5 using .casefold() instead of lower 2024-04-01 10:45:17 -03:00
João Moura
0d70302963 updating git ignore 2024-04-01 10:45:17 -03:00
João Moura
32a09660b4 updating i18n to take on translation files 2024-04-01 10:45:17 -03:00
João Moura
0612097f81 improving agent tools descriptions 2024-04-01 10:45:17 -03:00
João Moura
b0c373b6af updating gitignore 2024-04-01 10:45:17 -03:00
João Moura
4839cdf261 improving original promtps 2024-04-01 10:45:14 -03:00
João Moura
5977c442b1 Adding custom caching 2024-04-01 10:43:05 -03:00
João Moura
d05dcac16f udpating dependencies 2024-04-01 10:43:05 -03:00
João Moura
2cdfe459be adding proper docs for crewAI 2024-04-01 10:43:05 -03:00
João Moura
721b27d222 Ability to disable cache at agent and crew level 2024-04-01 10:43:05 -03:00
João Moura
be2def3fc8 Adding HuggingFace docs 2024-04-01 10:43:05 -03:00
João Moura
7259dba90d fixing warnings 2024-04-01 10:43:05 -03:00
João Moura
ef5bfcb48b updating telemetry to use https 2024-04-01 10:43:05 -03:00
João Moura
446baff697 Updating crewai-tools dependency 2024-04-01 10:43:05 -03:00
GabeKoga
bcf701b287 feature: human input per task (#395)
* feature: human input per task

* Update executor.py

* Update executor.py

* Update executor.py

* Update executor.py

* Update executor.py

* feat: change human input for unit testing
added documentation and unit test

* Create test_agent_human_input.yaml

add yaml for test

---------

Co-authored-by: João Moura <joaomdmoura@gmail.com>
2024-04-01 10:04:56 -03:00
Elle Neal
22ab99cbd6 Update LLM-Connections.md (#252)
Adding Cohere LLM

Co-authored-by: João Moura <joaomdmoura@gmail.com>
2024-04-01 10:03:16 -03:00
sebestyenmiklos1
98ee60e06f Update Tasks.md (#240)
Fix example code of missing comma.

Co-authored-by: João Moura <joaomdmoura@gmail.com>
2024-03-31 20:40:51 -03:00
Ken Jenney
a3abdb5d19 Update ScrapeWebsiteTool.md (#385) 2024-03-30 11:57:08 -03:00
chowderhead
e3ebeb9dde Update GitHubSearchTool.md (#390)
Import statement has a lower case h
2024-03-30 11:56:34 -03:00
Ikko Eltociear Ashimine
646ed4f132 Update README.md (#391)
bellow -> below
2024-03-30 11:56:08 -03:00
Gui Vieira
128ce91951 Fix input interpolation bug (#369) 2024-03-22 03:08:54 -03:00
Gui Vieira
aa0eb02968 Custom model docs (#368) 2024-03-22 03:01:34 -03:00
João Moura
637bd885cf adding auto flake 2024-03-11 23:27:19 -03:00
João Moura
337afe228f cutting new version with proper imports 2024-03-11 23:27:04 -03:00
João Moura
4541835487 adding autoflake 2024-03-11 22:56:14 -03:00
João Moura
04d9603449 cutting new version 2024-03-11 22:55:56 -03:00
João Moura
671a8d0180 preparring new version that autoloads env 2024-03-11 22:19:47 -03:00
João Moura
3950878690 preparring to cut new version 2024-03-11 19:54:27 -03:00
João Moura
eaac627600 updating CLI template and guaranteeing tasks order 2024-03-11 19:53:34 -03:00
João Moura
35f8919e73 Preparing new version 2024-03-11 17:37:12 -03:00
João Moura
cb5a528550 Improving agent logging 2024-03-11 17:05:54 -03:00
João Moura
1f95d7b982 Improve tempalte readme 2024-03-11 17:05:20 -03:00
Abe Gong
46971ee985 Fix typo in Tools.md (#300) 2024-03-11 16:45:28 -03:00
Selim Erhan
e67009ee2e Update Create-Custom-Tools.md (#311)
Added the langchain "Tool" functionality by creating a python function and then adding the functionality of that function to the tool by 'func' variable in the 'Tool' function.
2024-03-11 16:44:04 -03:00
Johan
9d3da98251 Update Tools.md (#326)
* Update Tools.md

Fixing typo on the instantiation part

* Update Tools.md

Update tool naming
2024-03-11 16:41:14 -03:00
Bill Chambers
b94de6e947 Update Crews.md (#331) 2024-03-11 16:40:45 -03:00
Chris Pang
f8a1d4f414 added langchain callback to agents (#333)
Co-authored-by: Chris Pang <chris_pang@racv.com.au>
2024-03-11 16:40:10 -03:00
Merbin J Anselm
7deb268de8 docs: fix formatting in Human-Input-on-Execution.md (#335) 2024-03-11 16:38:59 -03:00
João Moura
47b5cbd211 adding initial CLI support 2024-03-11 16:37:32 -03:00
João Moura
a4e9b9ccfe removing double space on logs 2024-03-11 16:23:00 -03:00
João Moura
99be4f5a61 Overridding classes __repr__ 2024-03-05 10:12:49 -03:00
João Moura
ba28ab1680 adding support for agents and tasks to be defined of configs 2024-03-05 01:26:07 -03:00
João Moura
e51b8aadae fix readme 2024-03-05 00:31:52 -03:00
João Moura
33354aa07e udpatign readme example 2024-03-05 00:29:55 -03:00
João Moura
730b71fad8 update serper doc 2024-03-04 11:15:49 -03:00
João Moura
364cf216a0 updating docs disclaimer 2024-03-04 09:59:01 -03:00
João Moura
3cb48ac562 updating docs 2024-03-04 01:29:27 -03:00
João Moura
ea65283023 updating docs 2024-03-03 22:43:51 -03:00
João Moura
d2003cc32d fix docs path 2024-03-03 22:18:48 -03:00
João Moura
1766e27337 Adding tool specific docs 2024-03-03 22:14:53 -03:00
João Moura
442c324243 Updating dependencies, cutting new version 2024-03-03 21:23:42 -03:00
João Moura
3134711240 Updating Docs 2024-03-03 20:54:15 -03:00
João Moura
546fc965f8 updating README 2024-03-03 20:54:15 -03:00
João Moura
9ab45d9118 preparing new version 2024-03-03 20:54:15 -03:00
João Moura
b1ae86757b preparing 0.17.0rc0 2024-03-03 20:54:15 -03:00
João Moura
42eeec5897 Update inner tool usage logic to support both regular and function calling 2024-03-03 20:54:15 -03:00
João Moura
c12283bb16 Small docs update 2024-03-03 20:54:15 -03:00
João Moura
b856b21fc6 updating tests 2024-03-03 20:54:15 -03:00
Jay Mathis
72a0d1edef Update README.md (#301)
Fix a very minor typo
2024-03-03 12:41:35 -03:00
heyfixit
c0a0e01cf6 fix directory typo (#295) 2024-03-03 12:41:14 -03:00
153 changed files with 268941 additions and 46762 deletions

.editorconfig (new file)

@@ -0,0 +1,14 @@
# .editorconfig
root = true
# All files
[*]
charset = utf-8
end_of_line = lf
insert_final_newline = true
trim_trailing_whitespace = true
# Python files
[*.py]
indent_style = space
indent_size = 2

.gitignore

@@ -7,4 +7,5 @@ assets/*
.idea
test/
docs_crew/
chroma.sqlite3
old_en.json


@@ -1,11 +1,11 @@
repos:
- repo: https://github.com/psf/black-pre-commit-mirror
rev: 23.12.1
hooks:
- id: black
language_version: python3.11
files: \.(py)$
exclude: 'src/crewai/cli/templates/(crew|main)\.py'
- repo: https://github.com/pycqa/isort
rev: 5.13.2


@@ -49,36 +49,29 @@ To get started with CrewAI, follow these simple steps:
pip install crewai
```
If you want to also install crewai-tools, which is a package with tools that can be used by the agents, but more dependencies, you can install it with:
If you want to also install crewai-tools, which is a package with tools that can be used by the agents, but more dependencies, you can install it with, example below uses it:
```shell
pip install 'crewai[tools]'
```
The example below also uses DuckDuckGo's Search. You can install it with `pip` too:
```shell
pip install duckduckgo-search
```
### 2. Setting Up Your Crew
```python
import os
from crewai import Agent, Task, Crew, Process
from crewai_tools import SerperDevTool
os.environ["OPENAI_API_KEY"] = "YOUR_API_KEY"
os.environ["SERPER_API_KEY"] = "Your Key" # serper.dev API key
# You can choose to use a local model through Ollama for example. See https://docs.crewai.com/how-to/LLM-Connections/ for more information.
# osOPENAI_API_BASE='http://localhost:11434/v1'
# OPENAI_MODEL_NAME='openhermes' # Adjust based on available model
# OPENAI_API_KEY=''
# Install duckduckgo-search for this example:
# !pip install -U duckduckgo-search
# os.environ["OPENAI_API_BASE"] = 'http://localhost:11434/v1'
# os.environ["OPENAI_MODEL_NAME"] ='openhermes' # Adjust based on available model
# os.environ["OPENAI_API_KEY"] ='sk-111111111111111111111111111111111111111111111111'
from langchain_community.tools import DuckDuckGoSearchRun
search_tool = DuckDuckGoSearchRun()
search_tool = SerperDevTool()
# Define your agents with roles and goals
researcher = Agent(
@@ -152,7 +145,7 @@ In addition to the sequential process, you can use the hierarchical process, whi
- **Processes Driven**: Currently only supports `sequential` task execution and `hierarchical` processes, but more complex processes like consensual and autonomous are being worked on.
- **Save output as file**: Save the output of individual tasks as a file, so you can use it later.
- **Parse output as Pydantic or Json**: Parse the output of individual tasks as a Pydantic model or as a Json if you want to.
- **Works with Open Source Models**: Run your crew using Open AI or open source models refer to the [Connect crewAI to LLMs](https://docs.crewai.com/how-to/LLM-Connections/) page for details on configuring you agents' connections to models, even ones running locally!
- **Works with Open Source Models**: Run your crew using Open AI or open source models refer to the [Connect crewAI to LLMs](https://docs.crewai.com/how-to/LLM-Connections/) page for details on configuring your agents' connections to models, even ones running locally!
![CrewAI Mind Map](./docs/crewAI-mindmap.png "CrewAI Mind Map")
@@ -256,8 +249,8 @@ pip install dist/*.tar.gz
## Hire CrewAI
We're a company developing crewAI and crewAI Enterprise, we for a limited time are offer consulting with selected customers, to get them early access to our enterprise solution
If you are interested on having access to it and hiring weekly hours with our team, feel free to email us at [joao@crewai.com](mailto:joao@crewai.com).
We're a company developing crewAI and crewAI Enterprise. We, for a limited time, are offering consulting with selected customers; to get them early access to our enterprise solution.
If you are interested in having access to it, and hiring weekly hours with our team, feel free to email us at [joao@crewai.com](mailto:joao@crewai.com).
## Telemetry

docs/CNAME (new file)

@@ -0,0 +1 @@
docs.crewai.com

Three binary image files added (272 KiB, 190 KiB, and 176 KiB).


@@ -10,30 +10,32 @@ description: What are crewAI Agents and how to use them.
<li class='leading-3'>Perform tasks</li>
<li class='leading-3'>Make decisions</li>
<li class='leading-3'>Communicate with other agents</li>
</ul>
<br/>
Think of an agent as a member of a team, with specific skills and a particular job to do. Agents can have different roles like 'Researcher', 'Writer', or 'Customer Support', each contributing to the overall goal of the crew.
## Agent Attributes
| Attribute | Description |
| :------------------ | :----------------------------------- |
| **Role** | Defines the agent's function within the crew. It determines the kind of tasks the agent is best suited for. |
| **Goal** | The individual objective that the agent aims to achieve. It guides the agent's decision-making process. |
| **Backstory** | Provides context to the agent's role and goal, enriching the interaction and collaboration dynamics. |
| **LLM** | The language model used by the agent to process and generate text. Defaults to using OpenAI's GPT-4 (`ChatOpenAI`), unless another model is specified through the environment variable "OPENAI_MODEL_NAME". |
| **Tools** | Set of capabilities or functions that the agent can use to perform tasks. Tools can be shared or exclusive to specific agents. It's an attribute that can be set during the initialization of an agent. |
| **Function Calling LLM** | The language model used by this agent to call functions. It is an optional field and, if not provided, the behavior of defaulting to the main `llm` is implicit. |
| **Max Iter** | The maximum number of iterations the agent can perform before being forced to give its best answer. Default is `15`. |
| **Max RPM** | The maximum number of requests per minute the agent can perform to avoid rate limits. It's optional and can be left unspecified. |
| **Verbose** | Enables detailed logging of the agent's execution for debugging or monitoring purposes when set to True. Default is `False` |
| **Allow Delegation**| Agents can delegate tasks or questions to one another, ensuring that each task is handled by the most suitable agent. |
| **Step Callback** | A function that is called after each step of the agent. This can be used to log the agent's actions or to perform other operations. It will overwrite the crew `step_callback`. |
| **Memory** | Indicates whether the agent should have memory or not, with a default value of False. This impacts the agent's ability to remember past interactions. Default is `False` |
| Attribute | Description |
| :------------------------- | :--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| **Role** | Defines the agent's function within the crew. It determines the kind of tasks the agent is best suited for. |
| **Goal** | The individual objective that the agent aims to achieve. It guides the agent's decision-making process. |
| **Backstory** | Provides context to the agent's role and goal, enriching the interaction and collaboration dynamics. |
| **LLM** *(optional)* | Represents the language model that will run the agent. It dynamically fetches the model name from the `OPENAI_MODEL_NAME` environment variable, defaulting to "gpt-4" if not specified. |
| **Tools** *(optional)* | Set of capabilities or functions that the agent can use to perform tasks. Expected to be instances of custom classes compatible with the agent's execution environment. Tools are initialized with a default value of an empty list. |
| **Function Calling LLM** *(optional)* | Specifies the language model that will handle the tool calling for this agent, overriding the crew function calling LLM if passed. Default is `None`. |
| **Max Iter** *(optional)* | The maximum number of iterations the agent can perform before being forced to give its best answer. Default is `25`. |
| **Max RPM** *(optional)* | The maximum number of requests per minute the agent can perform to avoid rate limits. It's optional and can be left unspecified, with a default value of `None`. |
| **max_execution_time** *(optional)* | Maximum execution time for an agent to execute a task. It's optional and can be left unspecified, with a default value of `None`, meaning no max execution time. |
| **Verbose** *(optional)* | Setting this to `True` configures the internal logger to provide detailed execution logs, aiding in debugging and monitoring. Default is `False`. |
| **Allow Delegation** *(optional)* | Agents can delegate tasks or questions to one another, ensuring that each task is handled by the most suitable agent. Default is `True`. |
| **Step Callback** *(optional)* | A function that is called after each step of the agent. This can be used to log the agent's actions or to perform other operations. It will overwrite the crew `step_callback`. |
| **Cache** *(optional)* | Indicates if the agent should use a cache for tool usage. Default is `True`. |
## Creating an Agent
!!! note "Agent Interaction"
Agents can interact with each other using the CrewAI's built-in delegation and communication mechanisms.<br/>This allows for dynamic task management and problem-solving within the crew.
Agents can interact with each other using crewAI's built-in delegation and communication mechanisms. This allows for dynamic task management and problem-solving within the crew.
To create an agent, you would typically initialize an instance of the `Agent` class with the desired properties. Here's a conceptual example including all attributes:
@@ -49,7 +51,7 @@ agent = Agent(
to the business.
You're currently working on a project to analyze the
performance of our marketing campaigns.""",
tools=[my_tool1, my_tool2], # Optional
tools=[my_tool1, my_tool2], # Optional, defaults to an empty list
llm=my_llm, # Optional
function_calling_llm=my_llm, # Optional
max_iter=15, # Optional
@@ -57,7 +59,7 @@ agent = Agent(
verbose=True, # Optional
allow_delegation=True, # Optional
step_callback=my_intermediate_step_callback, # Optional
memory=True # Optional
cache=True # Optional
)
```


@@ -14,15 +14,17 @@ description: Exploring the dynamics of agent collaboration within the CrewAI fra
## Enhanced Attributes for Improved Collaboration
The `Crew` class has been enriched with several attributes to support advanced functionalities:
- **Language Model Management (`manager_llm`, `function_calling_llm`)**: Manages language models for executing tasks and tools, facilitating sophisticated agent-tool interactions.
- **Language Model Management (`manager_llm`, `function_calling_llm`)**: Manages language models for executing tasks and tools, facilitating sophisticated agent-tool interactions. Note that while `manager_llm` is mandatory for hierarchical processes to ensure proper execution flow, `function_calling_llm` is optional, with a default value provided for streamlined tool interaction.
- **Process Flow (`process`)**: Defines the execution logic (e.g., sequential, hierarchical) to streamline task distribution and execution.
- **Verbose Logging (`verbose`)**: Offers detailed logging capabilities for monitoring and debugging purposes.
- **Configuration (`config`)**: Allows extensive customization to tailor the crew's behavior according to specific requirements.
- **Rate Limiting (`max_rpm`)**: Ensures efficient utilization of resources by limiting requests per minute.
- **Internationalization Support (`language`)**: Facilitates operation in multiple languages, enhancing global usability.
- **Execution and Output Handling (`full_output`)**: Distinguishes between full and final outputs for nuanced control over task results.
- **Callback and Telemetry (`step_callback`)**: Integrates callbacks for step-wise execution monitoring and telemetry for performance analytics.
- **Crew Sharing (`share_crew`)**: Enables sharing of crew information with CrewAI for continuous improvement.
- **Verbose Logging (`verbose`)**: Offers detailed logging capabilities for monitoring and debugging purposes. It supports both integer and boolean types to indicate the verbosity level. For example, setting `verbose` to 1 might enable basic logging, whereas setting it to True enables more detailed logs.
- **Rate Limiting (`max_rpm`)**: Ensures efficient utilization of resources by limiting requests per minute. Guidelines for setting `max_rpm` should consider the complexity of tasks and the expected load on resources.
- **Internationalization Support (`language`, `language_file`)**: Facilitates operation in multiple languages, enhancing global usability. Supported languages and the process for utilizing the `language_file` attribute for customization should be clearly documented.
- **Execution and Output Handling (`full_output`)**: Distinguishes between full and final outputs for nuanced control over task results. Examples showcasing the difference in outputs can aid in understanding the practical implications of this attribute.
- **Callback and Telemetry (`step_callback`, `task_callback`)**: Integrates callbacks for step-wise and task-level execution monitoring, alongside telemetry for performance analytics. `task_callback` complements `step_callback` for more granular monitoring, as shown in the sketch after this list.
- **Crew Sharing (`share_crew`)**: Enables sharing of crew information with CrewAI for continuous improvement and training models. The privacy implications and benefits of this feature, including how it contributes to model improvement, should be outlined.
- **Usage Metrics (`usage_metrics`)**: Stores all metrics for the language model (LLM) usage during all tasks' execution, providing insights into operational efficiency and areas for improvement. Detailed information on accessing and interpreting these metrics for performance analysis should be provided.
- **Memory Usage (`memory`)**: Indicates whether the crew should use memory to store memories of its execution, enhancing task execution and agent learning.
- **Embedder Configuration (`embedder`)**: Specifies the configuration for the embedder to be used by the crew for understanding and generating language. This attribute supports customization of the language model provider.
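The sketch below illustrates the two callback attributes from the list above. It is only a minimal example, not the full `Crew` API, and it assumes `my_agents` and `my_tasks` are defined elsewhere.
```python
from crewai import Crew, Process

# Minimal sketch: `step_callback` fires after every agent step,
# while `task_callback` fires once per completed task.
def log_step(step_output):
    print(f"Agent step finished: {step_output}")

def log_task(task_output):
    print(f"Task finished: {task_output.description}")

monitored_crew = Crew(
    agents=my_agents,            # assumed to be defined elsewhere
    tasks=my_tasks,              # assumed to be defined elsewhere
    process=Process.sequential,
    step_callback=log_step,      # step-wise monitoring
    task_callback=log_task,      # task-level monitoring
)
```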
## Delegation: Dividing to Conquer
Delegation enhances functionality by allowing agents to intelligently assign tasks or seek help, thereby amplifying the crew's overall capability.


@@ -1,36 +1,41 @@
---
title: crewAI Crews
description: Understanding and utilizing crews in the crewAI framework.
description: Understanding and utilizing crews in the crewAI framework with comprehensive attributes and functionalities.
---
## What is a Crew?
!!! note "Definition of a Crew"
A crew in crewAI represents a collaborative group of agents working together to achieve a set of tasks. Each crew defines the strategy for task execution, agent collaboration, and the overall workflow.
## Crew Attributes
| Attribute | Description |
| :------------------- | :----------------------------------------------------------- |
| **Tasks** | A list of tasks assigned to the crew. |
| **Agents** | A list of agents that are part of the crew. |
| **Process** | The process flow (e.g., sequential, hierarchical) the crew follows. |
| **Verbose** | The verbosity level for logging during execution. |
| **Manager LLM** | The language model used by the manager agent in a hierarchical process. **Required when using a hierarchical process.** |
| **Function Calling LLM** | The language model used by all agents in the crew for calling functions. If none is passed, the main LLM for each agent will be used. |
| **Config** | Optional configuration settings for the crew, in `Json` or `Dict[str, Any]` format. |
| **Max RPM** | Maximum requests per minute the crew adheres to during execution. |
| **Language** | Language used for the crew, defaults to English. |
| **Full Output** | Whether the crew should return the full output with all tasks outputs or just the final output. |
| **Step Callback** | A function that is called after each step of every agent. This can be used to log the agent's actions or to perform other operations; it won't override the agent-specific `step_callback`. |
| **Share Crew** | Whether you want to share the complete crew information and execution with the crewAI team to make the library better, and allow us to train models. |
| Attribute | Description |
| :-------------------------- | :----------------------------------------------------------- |
| **Tasks** | A list of tasks assigned to the crew. |
| **Agents** | A list of agents that are part of the crew. |
| **Process** *(optional)* | The process flow (e.g., sequential, hierarchical) the crew follows. |
| **Verbose** *(optional)* | The verbosity level for logging during execution. |
| **Manager LLM** *(optional)*| The language model used by the manager agent in a hierarchical process. **Required when using a hierarchical process.** |
| **Function Calling LLM** *(optional)* | If passed, the crew will use this LLM to do function calling for tools for all agents in the crew. Each agent can have its own LLM, which overrides the crew's LLM for function calling. |
| **Config** *(optional)* | Optional configuration settings for the crew, in `Json` or `Dict[str, Any]` format. |
| **Max RPM** *(optional)* | Maximum requests per minute the crew adheres to during execution. |
| **Language** *(optional)* | Language used for the crew, defaults to English. |
| **Language File** *(optional)* | Path to the language file to be used for the crew. |
| **Memory** *(optional)* | Utilized for storing execution memories (short-term, long-term, entity memory). |
| **Cache** *(optional)* | Specifies whether to use a cache for storing the results of tools' execution. |
| **Embedder** *(optional)*   | Configuration for the embedder to be used by the crew. Mostly used by memory for now. |
| **Full Output** *(optional)*| Whether the crew should return the full output with all tasks outputs or just the final output. |
| **Step Callback** *(optional)* | A function that is called after each step of every agent. This can be used to log the agent's actions or to perform other operations; it won't override the agent-specific `step_callback`. |
| **Task Callback** *(optional)* | A function that is called after the completion of each task. Useful for monitoring or additional operations post-task execution. |
| **Share Crew** *(optional)* | Whether you want to share the complete crew information and execution with the crewAI team to make the library better, and allow us to train models. |
| **Output Log File** *(optional)* | Whether you want a file with the complete crew output and execution. Set it to `True` to write `logs.txt` in the current folder, or pass a string with the full path and name of the file. |
!!! note "Crew Max RPM"
The `max_rpm` attribute sets the maximum number of requests per minute the crew can perform to avoid rate limits and will override individual agents' `max_rpm` settings if you set it.
## Creating a Crew
!!! note "Crew Composition"
When assembling a crew, you combine agents with complementary roles and tools, assign tasks, and select a process that dictates their execution order and interaction.
### Example: Assembling a Crew
@@ -47,12 +52,19 @@ researcher = Agent(
writer = Agent(
role='Content Writer',
goal='Write engaging articles on AI discoveries'
goal='Write engaging articles on AI discoveries',
verbose=True
)
# Create tasks for the agents
research_task = Task(description='Identify breakthrough AI technologies', agent=researcher)
write_article_task = Task(description='Draft an article on the latest AI technologies', agent=writer)
research_task = Task(
description='Identify breakthrough AI technologies',
agent=researcher
)
write_article_task = Task(
description='Draft an article on the latest AI technologies',
agent=writer
)
# Assemble the crew with a sequential process
my_crew = Crew(
@@ -60,14 +72,33 @@ my_crew = Crew(
tasks=[research_task, write_article_task],
process=Process.sequential,
full_output=True,
verbose=True
verbose=True,
)
```
## Memory Utilization
Crews can utilize memory (short-term, long-term, and entity memory) to enhance their execution and learning over time. This feature allows crews to store and recall execution memories, aiding in decision-making and task execution strategies.
## Cache Utilization
Caches can be employed to store the results of tools' execution, making the process more efficient by reducing the need to re-execute identical tasks.
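A minimal configuration sketch tying these pieces together; the agents, tasks, and embedder values are illustrative placeholders, and the embedder options mirror the memory documentation below.
```python
from crewai import Crew, Process

# Sketch: enable memory, tool-result caching, and an execution log file.
# `agent1`, `agent2`, `task1`, and `task2` are assumed to be defined elsewhere.
my_crew = Crew(
    agents=[agent1, agent2],
    tasks=[task1, task2],
    process=Process.sequential,
    memory=True,            # short-term, long-term, and entity memory
    cache=True,             # reuse cached tool results
    embedder={
        "provider": "openai",
        "config": {"model": "text-embedding-3-small"},
    },
    output_log_file=True,   # writes logs.txt in the current folder
)
```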
## Crew Usage Metrics
After the crew execution, you can access the `usage_metrics` attribute to view the language model (LLM) usage metrics for all tasks executed by the crew. This provides insights into operational efficiency and areas for improvement.
```python
# Access the crew's usage metrics
crew = Crew(agents=[agent1, agent2], tasks=[task1, task2])
crew.kickoff()
print(crew.usage_metrics)
```
## Crew Execution Process
- **Sequential Process**: Tasks are executed one after another, allowing for a linear flow of work.
- **Hierarchical Process**: A manager agent coordinates the crew, delegating tasks and validating outcomes before proceeding. **Note**: A `manager_llm` is required for this process.
- **Hierarchical Process**: A manager agent coordinates the crew, delegating tasks and validating outcomes before proceeding. **Note**: A `manager_llm` is required for this process and it's essential for validating the process flow.
### Kicking Off a Crew
@@ -77,4 +108,4 @@ Once your crew is assembled, initiate the workflow with the `kickoff()` method.
# Start the crew's task execution
result = my_crew.kickoff()
print(result)
```


@@ -0,0 +1,171 @@
---
title: crewAI Memory Systems
description: Leveraging memory systems in the crewAI framework to enhance agent capabilities.
---
## Introduction to Memory Systems in crewAI
!!! note "Enhancing Agent Intelligence"
The crewAI framework introduces a sophisticated memory system designed to significantly enhance the capabilities of AI agents. This system comprises short-term memory, long-term memory, entity memory, and newly identified contextual memory, each serving a unique purpose in aiding agents to remember, reason, and learn from past interactions.
## Memory System Components
| Component | Description |
| :------------------- | :----------------------------------------------------------- |
| **Short-Term Memory**| Temporarily stores recent interactions and outcomes, enabling agents to recall and utilize information relevant to their current context. |
| **Long-Term Memory** | Preserves valuable insights and learnings from past executions, allowing agents to build and refine their knowledge over time. |
| **Entity Memory** | Captures and organizes information about entities (people, places, concepts) encountered during tasks, facilitating deeper understanding and relationship mapping. |
| **Contextual Memory**| Maintains the context of interactions, aiding in the coherence and relevance of agent responses over a sequence of tasks or a conversation. |
## How Memory Systems Empower Agents
1. **Contextual Awareness**: With short-term and contextual memory, agents gain the ability to maintain context over a conversation or task sequence, leading to more coherent and relevant responses.
2. **Experience Accumulation**: Long-term memory allows agents to accumulate experiences, learning from past actions to improve future decision-making and problem-solving.
3. **Entity Understanding**: By maintaining entity memory, agents can recognize and remember key entities, enhancing their ability to process and interact with complex information.
## Implementing Memory in Your Crew
When configuring a crew, you can enable and customize each memory component to suit the crew's objectives and the nature of tasks it will perform.
By default, the memory system is disabled, and you can ensure it is active by setting `memory=True` in the crew configuration.
The memory will use OpenAI Embeddings by default, but you can change it by setting `embedder` to a different model.
### Example: Configuring Memory for a Crew
```python
from crewai import Crew, Agent, Task, Process
# Assemble your crew with memory capabilities
my_crew = Crew(
agents=[...],
tasks=[...],
process=Process.sequential,
memory=True,
verbose=True
)
```
## Additional Embedding Providers
### Using OpenAI embeddings (already default)
```python
from crewai import Crew, Agent, Task, Process
my_crew = Crew(
agents=[...],
tasks=[...],
process=Process.sequential,
memory=True,
verbose=True,
embedder={
"provider": "openai",
"config":{
"model": 'text-embedding-3-small'
}
}
)
```
### Using Google AI embeddings
```python
from crewai import Crew, Agent, Task, Process
my_crew = Crew(
agents=[...],
tasks=[...],
process=Process.sequential,
memory=True,
verbose=True,
embedder={
"provider": "google",
"config":{
"model": 'models/embedding-001',
"task_type": "retrieval_document",
"title": "Embeddings for Embedchain"
}
}
)
```
### Using Azure OpenAI embeddings
```python
from crewai import Crew, Agent, Task, Process
my_crew = Crew(
agents=[...],
tasks=[...],
process=Process.sequential,
memory=True,
verbose=True,
embedder={
"provider": "azure_openai",
"config":{
"model": 'text-embedding-ada-002',
"deployment_name": "you_embedding_model_deployment_name"
}
}
)
```
### Using GPT4ALL embeddings
```python
from crewai import Crew, Agent, Task, Process
my_crew = Crew(
agents=[...],
tasks=[...],
process=Process.sequential,
memory=True,
verbose=True,
embedder={
"provider": "gpt4all"
}
)
```
### Using Vertex AI embeddings
```python
from crewai import Crew, Agent, Task, Process
my_crew = Crew(
agents=[...],
tasks=[...],
process=Process.sequential,
memory=True,
verbose=True,
embedder={
"provider": "vertexai",
"config":{
"model": 'textembedding-gecko'
}
}
)
```
### Using Cohere embeddings
```python
from crewai import Crew, Agent, Task, Process
my_crew = Crew(
agents=[...],
tasks=[...],
process=Process.sequential,
memory=True,
verbose=True,
embedder={
"provider": "cohere",
"config":{
"model": "embed-english-v3.0"
"vector_dimension": 1024
}
}
)
```
## Benefits of Using crewAI's Memory System
- **Adaptive Learning:** Crews become more efficient over time, adapting to new information and refining their approach to tasks.
- **Enhanced Personalization:** Memory enables agents to remember user preferences and historical interactions, leading to personalized experiences.
- **Improved Problem Solving:** Access to a rich memory store aids agents in making more informed decisions, drawing on past learnings and contextual insights.
## Getting Started
Integrating crewAI's memory system into your projects is straightforward. By leveraging the provided memory components and configurations, you can quickly empower your agents with the ability to remember, reason, and learn from their interactions, unlocking new levels of intelligence and capability.


@@ -10,26 +10,37 @@ description: Detailed guide on workflow management through processes in CrewAI,
## Process Implementations
- **Sequential**: Executes tasks sequentially, ensuring tasks are completed in an orderly progression.
- **Hierarchical**: Organizes tasks in a managerial hierarchy, where tasks are delegated and executed based on a structured chain of command, the manager for delegation is automatically created by crewAI.
- **Consensual (Planned)**: A future process type aiming for collaborative decision-making among agents on task execution, introducing a more democratic approach to task management within CrewAI.
- **Hierarchical**: Organizes tasks in a managerial hierarchy, where tasks are delegated and executed based on a structured chain of command. A manager language model (`manager_llm`) must be specified in the crew to enable the hierarchical process, facilitating the creation and management of tasks by the manager.
- **Consensual Process (Planned)**: Aiming for collaborative decision-making among agents on task execution, this process type introduces a democratic approach to task management within CrewAI. It is planned for future development and is not currently implemented in the codebase.
## The Role of Processes in Teamwork
Processes enable individual agents to operate as a cohesive unit, streamlining their efforts to achieve common objectives with efficiency and coherence.
## Assigning Processes to a Crew
Specify the process type upon crew creation to set the execution strategy:
To assign a process to a crew, specify the process type upon crew creation to set the execution strategy. For a hierarchical process, be sure to define `manager_llm` for the manager agent.
```python
from crewai import Crew
from crewai.process import Process
from langchain_openai import ChatOpenAI
# Example: Creating a crew with a sequential process
crew = Crew(agents=my_agents, tasks=my_tasks, process=Process.sequential)
crew = Crew(
agents=my_agents,
tasks=my_tasks,
process=Process.sequential
)
# Example: Creating a crew with a hierarchical process
crew = Crew(agents=my_agents, tasks=my_tasks, process=Process.hierarchical)
# Ensure to provide a manager_llm
crew = Crew(
agents=my_agents,
tasks=my_tasks,
process=Process.hierarchical,
manager_llm=ChatOpenAI(model="gpt-4")
)
```
**Note:** Ensure `my_agents` and `my_tasks` are defined prior to creating a `Crew` object.
**Note:** Ensure `my_agents` and `my_tasks` are defined prior to creating a `Crew` object, and for the hierarchical process, `manager_llm` is also required.
## Sequential Process
This method mirrors dynamic team workflows, progressing through tasks in a thoughtful and systematic manner. Task execution follows the predefined order in the task list, with the output of one task serving as context for the next.
@@ -37,13 +48,15 @@ This method mirrors dynamic team workflows, progressing through tasks in a thoug
To customize task context, utilize the `context` parameter in the `Task` class to specify outputs that should be used as context for subsequent tasks.
## Hierarchical Process
Emulates a corporate hierarchy. A "manager" agent is automatically created so it oversees task execution, including planning, delegation, and validation. Tasks are not pre-assigned; the manager allocates tasks to agents, reviews outputs, and assesses task completion.
Emulates a corporate hierarchy: CrewAI automatically creates a manager for you, requiring the specification of a manager language model (`manager_llm`) for the manager agent. This agent oversees task execution, including planning, delegation, and validation. Tasks are not pre-assigned; the manager allocates tasks to agents based on their capabilities, reviews outputs, and assesses task completion.
## Process Class: Detailed Overview
The `Process` class is implemented as an enumeration (`Enum`), ensuring type safety and restricting process values to the defined types (`sequential` and `hierarchical`). This design choice guarantees that only valid processes are utilized within the CrewAI framework.
The `Process` class is implemented as an enumeration (`Enum`), ensuring type safety and restricting process values to the defined types (`sequential`, `hierarchical`). The consensual process is planned for future inclusion, emphasizing our commitment to continuous development and innovation.
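Conceptually, that enumeration is presumably along these lines (a sketch based on the values named above, not the actual source):
```python
from enum import Enum

# Sketch of the Process enumeration described above.
class Process(str, Enum):
    sequential = "sequential"
    hierarchical = "hierarchical"
    # consensual: planned, not yet implemented
```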
## Planned Future Processes
- **Consensual Process**: A collaborative decision-making process among agents on task execution is planned but not currently implemented. This future enhancement will introduce a more democratic approach to task management within CrewAI.
## Additional Task Features
- **Asynchronous Execution**: Tasks can now be executed asynchronously, allowing for parallel processing and efficiency improvements. This feature is designed to enable tasks to be carried out concurrently, enhancing the overall productivity of the crew.
- **Human Input Review**: An optional feature that enables the review of task outputs by humans to ensure quality and accuracy before finalization. This additional step introduces a layer of oversight, providing an opportunity for human intervention and validation.
- **Output Customization**: Tasks support various output formats, including JSON (`output_json`), Pydantic models (`output_pydantic`), and file outputs (`output_file`), providing flexibility in how task results are captured and utilized; see the sketch after this list.
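The sketch below combines the features above on a single task; `research_agent` and the Pydantic model are illustrative placeholders, and the attribute names follow the Tasks documentation.
```python
from pydantic import BaseModel
from crewai import Task

# Illustrative output model for the structured result.
class NewsSummary(BaseModel):
    headline: str
    summary: str

summarize_task = Task(
    description="Summarize the most important AI news of the week",
    expected_output="A headline and a short summary",
    agent=research_agent,           # assumed to be defined elsewhere
    async_execution=True,           # run concurrently with other tasks
    human_input=True,               # ask for human review before finalizing
    output_pydantic=NewsSummary,    # parse the result into a Pydantic model
    output_file="news_summary.md",  # also save the output to a file
)
```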
## Conclusion
The structured collaboration facilitated by processes within CrewAI is crucial for enabling systematic teamwork among agents. Documentation will be updated to reflect new processes and enhancements, ensuring users have access to the most current and comprehensive information.
The structured collaboration facilitated by processes within CrewAI is crucial for enabling systematic teamwork among agents. This documentation has been updated to reflect the latest features, enhancements, and the planned integration of the Consensual Process, ensuring users have access to the most current and comprehensive information.


@@ -1,32 +1,34 @@
---
title: crewAI Tasks
description: Overview and management of tasks within the crewAI framework.
description: Detailed guide on managing and creating tasks within the crewAI framework, reflecting the latest codebase updates.
---
## Overview of a Task
!!! note "What is a Task?"
In the CrewAI framework, tasks are individual assignments that agents complete. They encapsulate necessary information for execution, including a description, assigned agent, required tools, offering flexibility for various action complexities.
In the crewAI framework, tasks are specific assignments completed by agents. They provide all necessary details for execution, such as a description, the agent responsible, required tools, and more, facilitating a wide range of action complexities.
Tasks in CrewAI can be designed to require collaboration between agents. For example, one agent might gather data while another analyzes it. This collaborative approach can be defined within the task properties and managed by the Crew's process.
Tasks within crewAI can be collaborative, requiring multiple agents to work together. This is managed through the task properties and orchestrated by the Crew's process, enhancing teamwork and efficiency.
## Task Attributes
| Attribute | Description |
| :------------- | :----------------------------------- |
| **Description** | A clear, concise statement of what the task entails. |
| **Agent** | Optionally, you can specify which agent is responsible for the task. If not, the crew's process will determine who takes it on. |
| **Expected Output** *(optional)* | Clear and detailed definition of expected output for the task. |
| **Tools** *(optional)* | These are the functions or capabilities the agent can utilize to perform the task. They can be anything from simple actions like 'search' to more complex interactions with other agents or APIs. |
| **Async Execution** *(optional)* | If the task should be executed asynchronously. This indicates that the crew will not wait for the task to be completed to continue with the next task. |
| **Context** *(optional)* | Other tasks that will have their output used as context for this task. If a task is asynchronous, the system will wait for that to finish before using its output as context. |
| **Output JSON** *(optional)* | Takes a pydantic model and returns the output as a JSON object. **Agent LLM needs to be using OpenAI client, could be Ollama for example but using the OpenAI wrapper** |
| **Output Pydantic** *(optional)* | Takes a pydantic model and returns the output as a pydantic object. **Agent LLM needs to be using OpenAI client, could be Ollama for example but using the OpenAI wrapper** |
| **Output File** *(optional)* | Takes a file path and saves the output of the task on it. |
| **Callback** *(optional)* | A function to be executed after the task is completed. |
| Attribute | Description |
| :----------------------| :-------------------------------------------------------------------------------------------- |
| **Description** | A clear, concise statement of what the task entails. |
| **Agent** | The agent responsible for the task, assigned either directly or by the crew's process. |
| **Expected Output** | A detailed description of what the task's completion looks like. |
| **Tools** *(optional)* | The functions or capabilities the agent can utilize to perform the task. |
| **Async Execution** *(optional)* | If set, the task executes asynchronously, allowing progression without waiting for completion.|
| **Context** *(optional)* | Specifies tasks whose outputs are used as context for this task. |
| **Config** *(optional)* | Additional configuration details for the agent executing the task, allowing further customization. |
| **Output JSON** *(optional)* | Outputs a JSON object, requiring an OpenAI client. Only one output format can be set. |
| **Output Pydantic** *(optional)* | Outputs a Pydantic model object, requiring an OpenAI client. Only one output format can be set. |
| **Output File** *(optional)* | Saves the task output to a file. If used with `Output JSON` or `Output Pydantic`, specifies how the output is saved. |
| **Callback** *(optional)* | A Python callable that is executed with the task's output upon completion. |
| **Human Input** *(optional)* | Indicates if the task requires human feedback at the end, useful for tasks needing human oversight. |
## Creating a Task
This is the simplest example for creating a task, it involves defining its scope and agent, but there are optional attributes that can provide a lot of flexibility:
Creating a task involves defining its scope, responsible agent, and any additional attributes for flexibility:
```python
from crewai import Task
@@ -36,35 +38,34 @@ task = Task(
agent=sales_agent
)
```
!!! note "Task Assignment"
Tasks can be assigned directly by specifying an `agent` to them, or they can be assigned in run time if you are using the `hierarchical` through CrewAI's process, considering roles, availability, or other criteria.
Directly specify an `agent` for assignment, or let CrewAI's `hierarchical` process decide based on roles, availability, etc.
## Integrating Tools with Tasks
Tools from the [crewAI Toolkit](https://github.com/joaomdmoura/crewai-tools) and [LangChain Tools](https://python.langchain.com/docs/integrations/tools) enhance task performance, allowing agents to interact more effectively with their environment. Assigning specific tools to tasks can tailor agent capabilities to particular needs.
Leverage tools from the [crewAI Toolkit](https://github.com/joaomdmoura/crewai-tools) and [LangChain Tools](https://python.langchain.com/docs/integrations/tools) for enhanced task performance and agent interaction.
## Creating a Task with Tools
```python
import os
os.environ["OPENAI_API_KEY"] = "Your Key"
os.environ["SERPER_API_KEY"] = "Your Key" # serper.dev API key
from crewai import Agent, Task, Crew
from langchain.agents import Tool
from langchain_community.tools import DuckDuckGoSearchRun
from crewai_tools import SerperDevTool
research_agent = Agent(
role='Researcher',
goal='Find and summarize the latest AI news',
backstory="""You're a researcher at a large company.
You're responsible for analyzing data and providing insights
to the business."""
verbose=True
role='Researcher',
goal='Find and summarize the latest AI news',
backstory="""You're a researcher at a large company.
You're responsible for analyzing data and providing insights
to the business.""",
verbose=True
)
# Install duckduckgo-search for this example:
# !pip install -U duckduckgo-search
search_tool = DuckDuckGoSearchRun()
search_tool = SerperDevTool()
task = Task(
description='Find and summarize the latest AI news',
@@ -87,25 +88,34 @@ This demonstrates how tasks with specific tools can override an agent's default
## Referring to Other Tasks
In crewAI, the output of one task is automatically relayed into the next one, but you can specifically define what tasks' output should be used as context for another task.
In crewAI, the output of one task is automatically relayed into the next one, but you can specifically define which tasks' outputs, including outputs from multiple tasks, should be used as context for another task.
This is useful when you have a task that depends on the output of another task that is not performed immediately after it. This is done through the `context` attribute of the task:
```python
# ...
research_task = Task(
description='Find and summarize the latest AI news',
expected_output='A bullet list summary of the top 5 most important AI news',
agent=research_agent,
tools=[search_tool]
research_ai_task = Task(
description='Find and summarize the latest AI news',
expected_output='A bullet list summary of the top 5 most important AI news',
async_execution=True,
agent=research_agent,
tools=[search_tool]
)
research_ops_task = Task(
description='Find and summarize the latest AI Ops news',
expected_output='A bullet list summary of the top 5 most important AI Ops news',
async_execution=True,
agent=research_agent,
tools=[search_tool]
)
write_blog_task = Task(
description="Write a full blog post about the importance of AI and its latest news",
expected_output='Full blog post that is 4 paragraphs long',
agent=writer_agent,
context=[research_task]
description="Write a full blog post about the importance of AI and its latest news",
expected_output='Full blog post that is 4 paragraphs long',
agent=writer_agent,
context=[research_ai_task, research_ops_task]
)
#...
@@ -121,24 +131,24 @@ You can then use the `context` attribute to define in a future task that it shou
#...
list_ideas = Task(
description="List of 5 interesting ideas to explore for an article about AI.",
expected_output="Bullet point list of 5 ideas for an article.",
agent=researcher,
async_execution=True # Will be executed asynchronously
description="List of 5 interesting ideas to explore for an article about AI.",
expected_output="Bullet point list of 5 ideas for an article.",
agent=researcher,
async_execution=True # Will be executed asynchronously
)
list_important_history = Task(
description="Research the history of AI and give me the 5 most important events.",
expected_output="Bullet point list of 5 important events.",
agent=researcher,
async_execution=True # Will be executed asynchronously
description="Research the history of AI and give me the 5 most important events.",
expected_output="Bullet point list of 5 important events.",
agent=researcher,
async_execution=True # Will be executed asynchronously
)
write_article = Task(
description="Write an article about AI, its history, and interesting ideas.",
expected_output="A 4 paragraph article about AI.",
agent=writer,
context=[list_ideas, list_important_history] # Will wait for the output of the two tasks to be completed
description="Write an article about AI, its history, and interesting ideas.",
expected_output="A 4 paragraph article about AI.",
agent=writer,
context=[list_ideas, list_important_history] # Will wait for the output of the two tasks to be completed
)
#...
@@ -146,26 +156,26 @@ write_article = Task(
## Callback Mechanism
You can define a callback function that will be executed after the task is completed. This is useful for tasks that need to trigger some side effect after they are completed, while the crew is still running.
The callback function is executed after the task is completed, allowing for actions or notifications to be triggered based on the task's outcome.
```python
# ...
def callback_function(output: TaskOutput):
# Do something after the task is completed
# Example: Send an email to the manager
print(f"""
Task completed!
Task: {output.description}
Output: {output.raw_output}
""")
# Do something after the task is completed
# Example: Send an email to the manager
print(f"""
Task completed!
Task: {output.description}
Output: {output.raw_output}
""")
research_task = Task(
description='Find and summarize the latest AI news',
expected_output='A bullet list summary of the top 5 most important AI news',
agent=research_agent,
tools=[search_tool],
callback=callback_function
description='Find and summarize the latest AI news',
expected_output='A bullet list summary of the top 5 most important AI news',
agent=research_agent,
tools=[search_tool],
callback=callback_function
)
#...
@@ -178,27 +188,27 @@ Once a crew finishes running, you can access the output of a specific task by us
```python
# ...
task1 = Task(
description='Find and summarize the latest AI news',
expected_output='A bullet list summary of the top 5 most important AI news',
agent=research_agent,
tools=[search_tool]
description='Find and summarize the latest AI news',
expected_output='A bullet list summary of the top 5 most important AI news',
agent=research_agent,
tools=[search_tool]
)
#...
crew = Crew(
agents=[research_agent],
tasks=[task1, task2, task3],
verbose=2
agents=[research_agent],
tasks=[task1, task2, task3],
verbose=2
)
result = crew.kickoff()
# Returns a TaskOutput object with the description and results of the task
print(f"""
Task completed!
Task: {task1.output.description}
Output: {task1.output.raw_output}
Task completed!
Task: {task1.output.description}
Output: {task1.output.raw_output}
""")
```
@@ -217,5 +227,4 @@ These validations help in maintaining the consistency and reliability of task ex
## Conclusion
Tasks are the driving force behind the actions of agents in crewAI. By properly defining tasks and their outcomes, you set the stage for your AI agents to work effectively, either independently or as a collaborative unit.
Equipping tasks with appropriate tools and following robust validation practices is crucial for maximizing CrewAI's potential, ensuring agents are effectively prepared for their assignments and that tasks are executed as intended.
Tasks are the driving force behind the actions of agents in crewAI. By properly defining tasks and their outcomes, you set the stage for your AI agents to work effectively, either independently or as a collaborative unit. Equipping tasks with appropriate tools, understanding the execution process, and following robust validation practices are crucial for maximizing CrewAI's potential, ensuring agents are effectively prepared for their assignments and that tasks are executed as intended.

View File

@@ -12,115 +12,103 @@ CrewAI tools empower agents with capabilities ranging from web searching and dat
## Key Characteristics of Tools
- **Utility**: Designed for various tasks such as web searching, data analysis, content generation, and agent collaboration.
- **Integration**: Enhances agent capabilities by integrating tools directly into their workflow.
- **Customizability**: Offers flexibility to develop custom tools or use existing ones, catering to specific agent needs.
- **Utility**: Crafted for tasks such as web searching, data analysis, content generation, and agent collaboration.
- **Integration**: Boosts agent capabilities by seamlessly integrating tools into their workflow.
- **Customizability**: Provides the flexibility to develop custom tools or utilize existing ones, catering to the specific needs of agents.
- **Error Handling**: Incorporates robust error handling mechanisms to ensure smooth operation.
- **Caching Mechanism**: Features intelligent caching to optimize performance and reduce redundant operations.
## Using crewAI Tools
crewAI comes with a series to built-in tools that can be used to extend the capabilities of your agents. Start by installing our extra tools package:
To enhance your agents' capabilities with crewAI tools, begin by installing our extra tools package:
```bash
pip install 'crewai[tools]'
```
Here is an example on how to use them:
Here's an example demonstrating their use:
```python
import os
from crewai import Agent, Task, Crew
# Importing some of the crewAI tools
# Importing crewAI tools
from crewai_tools import (
DirectoryReadTool,
FileReadTool,
SeperDevTool,
SerperDevTool,
WebsiteSearchTool
)
# get a free account in serper.dev
os.environ["SERPER_API_KEY"] = "Your Key"
# Set up API keys
os.environ["SERPER_API_KEY"] = "Your Key" # serper.dev API key
os.environ["OPENAI_API_KEY"] = "Your Key"
# Instantiate tools
# Assumes this ./blog-posts exists with existing blog posts on it
docs_tools = DirectoryReadTool(directory='./blog-posts')
file_read_tool = FileReadTool()
search_tool = SeperDevTool()
website_rag = WebsiteSearchTool()
docs_tool = DirectoryReadTool(directory='./blog-posts')
file_tool = FileReadTool()
search_tool = SerperDevTool()
web_rag_tool = WebsiteSearchTool()
# Create agents
researcher = Agent(
role='Market Research Analyst',
goal='Provide up-to-date market analysis of the AI industry',
backstory='An expert analyst with a keen eye for market trends.',
tools=[search_tool, website_rag],
tools=[search_tool, web_rag_tool],
verbose=True
)
writer = Agent(
role='Content Writer',
goal='Write amazing, super engaging blog post about the AI industry',
goal='Craft engaging blog posts about the AI industry',
backstory='A skilled writer with a passion for technology.',
tools=[docs_tools, file_read_tool],
tools=[docs_tool, file_tool],
verbose=True
)
# Create tasks
# Define tasks
research = Task(
description='Research the AI industry and provide a summary of the latest most trending matters and developments.',
expected_output='A summary of the top 3 latest most trending matters and developments in the AI industry with you unique take on why they matter.',
description='Research the latest trends in the AI industry and provide a summary.',
expected_output='A summary of the top 3 trending developments in the AI industry with a unique perspective on their significance.',
agent=researcher
)
write = Task(
description='Write an engaging blog post about the AI industry, using the summary provided by the research analyst. Read the latest blog posts in the directory to get inspiration.',
expected_output='A 4 paragraph blog post formatted as markdown with proper subtitles about the latest trends that is engaging and informative and funny, avoid complex words and make it easy to read.',
description='Write an engaging blog post about the AI industry, based on the summary provided by the research analyst. Draw inspiration from the latest blog posts in the directory.',
expected_output='A 4-paragraph blog post formatted in markdown with engaging, informative, and accessible content, avoiding complex jargon.',
agent=writer,
output_file='blog-posts/new_post.md' # The final blog post will be written here
output_file='blog-posts/new_post.md' # The final blog post will be saved here
)
# Create a crew
# Assemble a crew
crew = Crew(
agents=[researcher, writer],
tasks=[research, write],
verbose=2
)
# Execute the tasks
# Execute tasks
crew.kickoff()
```
## Available crewAI Tools
Most tools in the crewAI toolkit can either be constrained with specific arguments or left more open-ended, letting the agent decide the inputs during execution. They also share two behaviors:
- **Error Handling**: All tools are built with error handling capabilities, allowing agents to gracefully manage exceptions and continue their tasks.
- **Caching Mechanism**: All tools support caching, enabling agents to efficiently reuse previously obtained results, reducing the load on external resources and speeding up execution time. You can also define finer control over the caching mechanism using the `cache_function` attribute on the tool.
For example, the `DirectoryReadTool` can be instantiated with or without a fixed directory:
```python
from crewai_tools import DirectoryReadTool
# This will allow the agent with this tool to read any directory it wants during its execution
tool = DirectoryReadTool()
# OR
# This will allow the agent with this tool to read only the specified directory during its execution
tool = DirectoryReadTool(directory='./directory')
```
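As a quick, illustrative sketch of the `cache_function` attribute mentioned above (the tool name and caching rule here are hypothetical; the Custom Caching Mechanism section below covers this in more detail):
```python
from crewai_tools import tool

@tool("Stock price lookup")
def stock_price_tool(ticker: str) -> str:
    """Returns the latest price for a given ticker (placeholder logic)."""
    return f"Price for {ticker}: 100.00"

def only_cache_successful_lookups(arguments, result):
    # Only cache results that look like a successful lookup
    return result.startswith("Price for")

stock_price_tool.cache_function = only_cache_successful_lookups
```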
Dedicated documentation for each tool is coming soon.
Here is a list of the available tools and their descriptions:
| Tool | Description |
| :-------------------------- | :-------------------------------------------------------------------------------------------- |
| **CodeDocsSearchTool** | A RAG tool optimized for searching through code documentation and related technical documents.|
| **CodeDocsSearchTool** | A RAG tool optimized for searching through code documentation and related technical documents. |
| **CSVSearchTool** | A RAG tool designed for searching within CSV files, tailored to handle structured data. |
| **DirectorySearchTool** | A RAG tool for searching within directories, useful for navigating through file systems. |
| **DOCXSearchTool** | A RAG tool aimed at searching within DOCX documents, ideal for processing Word files. |
| **DOCXSearchTool** | A RAG tool aimed at searching within DOCX documents, ideal for processing Word files. |
| **DirectoryReadTool** | Facilitates reading and processing of directory structures and their contents. |
| **FileReadTool** | Enables reading and extracting data from files, supporting various file formats. |
| **GithubSearchTool** | A RAG tool for searching within GitHub repositories, useful for code and documentation search.|
| **SerperDevTool** | A search tool that queries the serper.dev Google Search API for up-to-date web results. |
| **TXTSearchTool** | A RAG tool focused on searching within text (.txt) files, suitable for unstructured data. |
| **TXTSearchTool** | A RAG tool focused on searching within text (.txt) files, suitable for unstructured data. |
| **JSONSearchTool** | A RAG tool designed for searching within JSON files, catering to structured data handling. |
| **MDXSearchTool** | A RAG tool tailored for searching within Markdown (MDX) files, useful for documentation. |
| **PDFSearchTool** | A RAG tool aimed at searching within PDF documents, ideal for processing scanned documents. |
@@ -134,9 +122,11 @@ Here is a list of the available tools and their descriptions:
| **YoutubeVideoSearchTool** | A RAG tool aimed at searching within YouTube videos, ideal for video data extraction. |
## Creating your own Tools
!!! example "Custom Tool Creation"
Developers can craft custom tools tailored to their agents' needs or utilize pre-built options:
To create your own crewAI tools you will need to install our extra tools package:
```bash
@@ -144,7 +134,6 @@ pip install 'crewai[tools]'
```
Once installed, there are two main ways to create a crewAI tool:
### Subclassing `BaseTool`
```python
@@ -156,45 +145,47 @@ class MyCustomTool(BaseTool):
def _run(self, argument: str) -> str:
# Implementation goes here
pass
return "Result from custom tool"
```
Define a new class inheriting from `BaseTool`, specifying `name`, `description`, and the `_run` method for operational logic.
### Utilizing the `tool` Decorator
For a simpler approach, use the `tool` decorator to create a tool directly from a function, specifying the required attributes and the functional logic inline.
```python
from crewai_tools import tool
@tool("Name of my tool")
def my_tool(question: str) -> str:
"""Clear description for what this tool is useful for, you agent will need this information to use it."""
# Function logic here
return "Result from your custom tool"
```
### Custom Caching Mechanism
!!! note "Caching"
Tools can optionally implement a `cache_function` to fine-tune caching behavior. This function determines when to cache results based on specific conditions, offering granular control over caching logic.
```python
import json
import requests
from crewai import Agent
from crewai.tools import tool
from unstructured.partition.html import partition_html
from crewai_tools import tool
# Annotate the function with the tool decorator from crewAI
@tool("Integration with a given API")
def integtation_tool(argument: str) -> str:
"""Integration with a given API"""
# Code here
return resutls # string to be sent back to the agent
@tool
def multiplication_tool(first_number: int, second_number: int) -> str:
"""Useful for when you need to multiply two numbers together."""
return first_number * second_number
# Assign the scraping tool to an agent
agent = Agent(
role='Research Analyst',
goal='Provide up-to-date market analysis',
backstory='An expert analyst with a keen eye for market trends.',
tools=[integtation_tool]
)
def cache_func(args, result):
# In this case, we only cache the result if it's a multiple of 2
cache = result % 2 == 0
return cache
multiplication_tool.cache_function = cache_func
writer1 = Agent(
role="Writer",
goal="You write lesssons of math for kids.",
backstory="You're an expert in writting and you love to teach kids but you know nothing of math.",
tools=[multiplcation_tool],
allow_delegation=False,
)
#...
```
## Using LangChain Tools
@@ -230,4 +221,4 @@ agent = Agent(
```
## Conclusion
Tools are crucial for extending the capabilities of CrewAI agents, allowing them to undertake a diverse array of tasks and collaborate efficiently. When building your AI solutions with CrewAI, consider both custom and existing tools to empower your agents and foster a dynamic AI ecosystem.
Tools are pivotal in extending the capabilities of CrewAI agents, enabling them to undertake a broad spectrum of tasks and collaborate effectively. When building solutions with CrewAI, leverage both custom and existing tools to empower your agents and enhance the AI ecosystem. Consider utilizing error handling, caching mechanisms, and the flexibility of tool arguments to optimize your agents' performance and capabilities.

View File

@@ -0,0 +1,72 @@
---
title: Agent Observability using AgentOps
description: Understanding and logging your agent performance with AgentOps.
---
# Intro
Observability is a key aspect of developing and deploying conversational AI agents. It allows developers to understand how the agent is performing, how users are interacting with the agent, and how the agent is responding to user inputs.
AgentOps is a product, independent of crewAI, that provides a comprehensive observability solution for agents.
This guide provides an overview of AgentOps and how to use it with crewAI.
## AgentOps
[AgentOps](https://agentops.ai) provides session replays, metrics, and monitoring for agents.
[AgentOps Repo](https://github.com/AgentOps-AI/agentops)
### Overview
AgentOps provides monitoring for agents in development and production. It provides a dashboard for monitoring agent performance, session replays, and custom reporting.
![agentops-overview.png](../assets/agentops-overview.png)
Additionally, AgentOps provides session drilldowns that allow users to view the agent's interactions with users in real time. This feature is useful for debugging and understanding how the agent interacts with users.
![agentops-session.png](../assets/agentops-session.png)
![agentops-replay.png](../assets/agentops-replay.png)
### Features
- LLM Cost management and tracking
- Replay Analytics
- Recursive thought detection
- Custom Reporting
- Analytics Dashboard
- Public Model Testing
- Custom Tests
- Time Travel Debugging
- Compliance and Security
### Using AgentOps
Create a user API key here: https://app.agentops.ai/account
Add your API key to your environment variables
```
AGENTOPS_API_KEY=<YOUR_AGENTOPS_API_KEY>
```
Install AgentOps with:
```
pip install crewai[agentops]
```
or
```
pip install agentops
```
Before using `Crew` in your script, include these lines:
```python
import agentops
agentops.init()
```
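A minimal sketch of how this fits into a crew run (the agent and task below are illustrative, and `AGENTOPS_API_KEY` is assumed to be set as shown above):
```python
import agentops
from crewai import Agent, Task, Crew

# Initialize AgentOps before creating or running any crew
agentops.init()

researcher = Agent(
    role='Researcher',
    goal='Summarize the latest AI news',
    backstory='A diligent analyst who tracks the AI space daily.'
)

summary_task = Task(
    description='Find and summarize the latest AI news',
    expected_output='A short bullet list of the top stories',
    agent=researcher
)

crew = Crew(agents=[researcher], tasks=[summary_task])
result = crew.kickoff()  # the session is recorded in the AgentOps dashboard
print(result)
```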
### Crew + AgentOps Examples
- [Job Posting](https://github.com/joaomdmoura/crewAI-examples/tree/main/job-posting)
- [Markdown Validator](https://github.com/joaomdmoura/crewAI-examples/tree/main/markdown_validator)
- [Instagram Post](https://github.com/joaomdmoura/crewAI-examples/tree/main/instagram_post)
### Further Information
To implement more features and better observability, please see the [AgentOps Repo](https://github.com/AgentOps-AI/agentops)

View File

@@ -0,0 +1,62 @@
---
title: Creating and Utilizing Tools in crewAI
description: Comprehensive guide on crafting, using, and managing custom tools within the crewAI framework, including new functionalities and error handling.
---
## Creating and Utilizing Tools in crewAI
This guide provides detailed instructions on creating custom tools for the crewAI framework and how to efficiently manage and utilize these tools, incorporating the latest functionalities such as tool delegation, error handling, and dynamic tool calling. It also highlights the importance of collaboration tools, enabling agents to perform a wide range of actions.
### Prerequisites
Before creating your own tools, ensure you have the crewAI extra tools package installed:
```bash
pip install 'crewai[tools]'
```
### Subclassing `BaseTool`
To create a personalized tool, inherit from `BaseTool` and define the necessary attributes and the `_run` method.
```python
from crewai_tools import BaseTool
class MyCustomTool(BaseTool):
name: str = "Name of my tool"
description: str = "What this tool does. It's vital for effective utilization."
def _run(self, argument: str) -> str:
# Your tool's logic here
return "Tool's result"
```
### Using the `tool` Decorator
Alternatively, use the `tool` decorator for a direct approach to create tools. This requires specifying attributes and the tool's logic within a function.
```python
from crewai_tools import tool
@tool("Tool Name")
def my_simple_tool(question: str) -> str:
"""Tool description for clarity."""
# Tool logic here
return "Tool output"
```
### Defining a Cache Function for the Tool
To optimize tool performance with caching, define custom caching strategies using the `cache_function` attribute.
```python
@tool("Tool with Caching")
def cached_tool(argument: str) -> str:
"""Tool functionality description."""
return "Cachable result"
def my_cache_strategy(arguments: dict, result: str) -> bool:
# Define custom caching logic
return True if some_condition else False
cached_tool.cache_function = my_cache_strategy
```
By adhering to these guidelines and incorporating new functionalities and collaboration tools into your tool creation and management processes, you can leverage the full capabilities of the crewAI framework, enhancing both the development experience and the efficiency of your AI agents.

View File

@@ -1,43 +1,43 @@
---
title: Assembling and Activating Your CrewAI Team
description: A comprehensive guide to creating a dynamic CrewAI team for your projects, with updated functionalities including verbose mode, memory capabilities, and more.
description: A comprehensive guide to creating a dynamic CrewAI team for your projects, with updated functionalities including verbose mode, memory capabilities, asynchronous execution, output customization, language model configuration, and more.
---
## Introduction
Embark on your CrewAI journey by setting up your environment and initiating your AI crew with enhanced features. This guide ensures a seamless start, incorporating the latest updates.
Embark on your CrewAI journey by setting up your environment and initiating your AI crew with the latest features. This guide ensures a smooth start, incorporating all recent updates for an enhanced experience.
## Step 0: Installation
Install CrewAI and any necessary packages for your project. The `duckduckgo-search` package is highlighted here for enhanced search capabilities.
Install CrewAI and any necessary packages for your project. CrewAI is compatible with Python >=3.10,<=3.13.
```shell
pip install crewai
pip install crewai[tools]
pip install duckduckgo-search
pip install 'crewai[tools]'
```
## Step 1: Assemble Your Agents
Define your agents with distinct roles, backstories, and now, enhanced capabilities such as verbose mode and memory usage. These elements add depth and guide their task execution and interaction within the crew.
Define your agents with distinct roles, backstories, and enhanced capabilities like verbose mode and memory usage. These elements add depth and guide their task execution and interaction within the crew.
```python
import os
os.environ["SERPER_API_KEY"] = "Your Key" # serper.dev API key
os.environ["OPENAI_API_KEY"] = "Your Key"
from crewai import Agent
from langchain_community.tools import DuckDuckGoSearchRun
search_tool = DuckDuckGoSearchRun()
# Topic for the crew run
topic = 'AI in healthcare'
from crewai_tools import SerperDevTool
search_tool = SerperDevTool()
# Creating a senior researcher agent with memory and verbose mode
researcher = Agent(
role='Senior Researcher',
goal=f'Uncover groundbreaking technologies in {topic}',
goal='Uncover groundbreaking technologies in {topic}',
verbose=True,
memory=True,
backstory="""Driven by curiosity, you're at the forefront of
innovation, eager to explore and share knowledge that could change
the world.""",
backstory=(
"Driven by curiosity, you're at the forefront of"
"innovation, eager to explore and share knowledge that could change"
"the world."
),
tools=[search_tool],
allow_delegation=True
)
@@ -45,12 +45,14 @@ researcher = Agent(
# Creating a writer agent with custom tools and delegation capability
writer = Agent(
role='Writer',
goal=f'Narrate compelling tech stories about {topic}',
goal='Narrate compelling tech stories about {topic}',
verbose=True,
memory=True,
backstory="""With a flair for simplifying complex topics, you craft
engaging narratives that captivate and educate, bringing new
discoveries to light in an accessible manner.""",
backstory=(
"With a flair for simplifying complex topics, you craft"
"engaging narratives that captivate and educate, bringing new"
"discoveries to light in an accessible manner."
),
tools=[search_tool],
allow_delegation=False
)
@@ -64,10 +66,12 @@ from crewai import Task
# Research task
research_task = Task(
description=f"""Identify the next big trend in {topic}.
Focus on identifying pros and cons and the overall narrative.
Your final report should clearly articulate the key points,
its market opportunities, and potential risks.""",
description=(
"Identify the next big trend in {topic}."
"Focus on identifying pros and cons and the overall narrative."
"Your final report should clearly articulate the key points,"
"its market opportunities, and potential risks."
),
expected_output='A comprehensive 3 paragraphs long report on the latest AI trends.',
tools=[search_tool],
agent=researcher,
@@ -75,10 +79,12 @@ research_task = Task(
# Writing task with language model configuration
write_task = Task(
description=f"""Compose an insightful article on {topic}.
Focus on the latest trends and how it's impacting the industry.
This article should be easy to understand, engaging, and positive.""",
expected_output=f'A 4 paragraph article on {topic} advancements fromated as markdown.',
description=(
"Compose an insightful article on {topic}."
"Focus on the latest trends and how it's impacting the industry."
"This article should be easy to understand, engaging, and positive."
),
expected_output='A 4 paragraph article on {topic} advancements formatted as markdown.',
tools=[search_tool],
agent=writer,
async_execution=False,
@@ -87,27 +93,31 @@ write_task = Task(
```
## Step 3: Form the Crew
Combine your agents into a crew, setting the workflow process they'll follow to accomplish the tasks, now with the option to configure language models for enhanced interaction.
Combine your agents into a crew, setting the workflow process they'll follow to accomplish the tasks. Now with options to configure language models for enhanced interaction and additional configurations for optimizing performance.
```python
from crewai import Crew, Process
# Forming the tech-focused crew with enhanced configurations
# Forming the tech-focused crew with some enhanced configurations
crew = Crew(
agents=[researcher, writer],
tasks=[research_task, write_task],
process=Process.sequential # Optional: Sequential task execution is default
process=Process.sequential, # Optional: Sequential task execution is default
memory=True,
cache=True,
max_rpm=100,
share_crew=True
)
```
## Step 4: Kick It Off
Initiate the process with your enhanced crew ready. Observe as your agents collaborate, leveraging their new capabilities for a successful project outcome.
Initiate the process with your enhanced crew ready. Observe as your agents collaborate, leveraging their new capabilities for a successful project outcome. Input variables will be interpolated into the agents and tasks for a personalized approach.
```python
# Starting the task execution process with enhanced feedback
result = crew.kickoff()
result = crew.kickoff(inputs={'topic': 'AI in healthcare'})
print(result)
```
## Conclusion
Building and activating a crew in CrewAI has evolved with new functionalities. By incorporating verbose mode, memory capabilities, asynchronous task execution, output customization, and language model configuration, your AI team is more equipped than ever to tackle challenges efficiently. The depth of agent backstories and the precision of their objectives enrich collaboration, leading to successful project outcomes.
Building and activating a crew in CrewAI has evolved with new functionalities. By incorporating verbose mode, memory capabilities, asynchronous task execution, output customization, language model configuration, and enhanced crew configurations, your AI team is more equipped than ever to tackle challenges efficiently. The depth of agent backstories and the precision of their objectives enrich collaboration, leading to successful project outcomes. This guide aims to provide you with a clear and detailed understanding of setting up and utilizing the CrewAI framework to its full potential.

View File

@@ -4,7 +4,7 @@ description: A comprehensive guide to tailoring agents for specific roles, tasks
---
## Customizable Attributes
Crafting an efficient CrewAI team hinges on the ability to tailor your AI agents dynamically to meet the unique requirements of any project. This section covers the foundational attributes you can customize.
Crafting an efficient CrewAI team hinges on the ability to dynamically tailor your AI agents to meet the unique requirements of any project. This section covers the foundational attributes you can customize.
### Key Attributes for Customization
- **Role**: Specifies the agent's job within the crew, such as 'Analyst' or 'Customer Service Rep'.
@@ -16,23 +16,20 @@ Crafting an efficient CrewAI team hinges on the ability to tailor your AI agents
Beyond the basic attributes, CrewAI allows for deeper customization to enhance an agent's behavior and capabilities significantly.
### Language Model Customization
Agents can be customized with specific language models (`llm`) and function-calling language models (`function_calling_llm`), offering advanced control over their processing and decision-making abilities.
### Enabling Memory for Agents
CrewAI supports memory for agents, enabling them to remember past interactions. This feature is critical for tasks requiring awareness of previous contexts or decisions.
Agents can be customized with specific language models (`llm`) and function-calling language models (`function_calling_llm`), offering advanced control over their processing and decision-making abilities. It's important to note that setting the `function_calling_llm` allows for overriding the default crew function-calling language model, providing a greater degree of customization.
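As a minimal sketch of these two attributes together (the model choices are illustrative, and this assumes LangChain chat models are used as elsewhere in these docs):
```python
from langchain_openai import ChatOpenAI
from crewai import Agent

analyst = Agent(
    role='Research Analyst',
    goal='Provide up-to-date market analysis',
    backstory='An expert analyst with a keen eye for market trends.',
    llm=ChatOpenAI(temperature=0, model="gpt-4"),  # main reasoning model
    function_calling_llm=ChatOpenAI(temperature=0, model="gpt-3.5-turbo")  # handles tool calling
)
```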
## Performance and Debugging Settings
Adjusting an agent's performance and monitoring its operations are crucial for efficient task execution.
### Verbose Mode and RPM Limit
- **Verbose Mode**: Enables detailed logging of an agent's actions, useful for debugging and optimization.
- **RPM Limit**: Sets the maximum number of requests per minute (`max_rpm`), controlling the agent's query frequency to external services.
- **Verbose Mode**: Enables detailed logging of an agent's actions, useful for debugging and optimization. Specifically, it provides insights into agent execution processes, aiding in the optimization of performance.
- **RPM Limit**: Sets the maximum number of requests per minute (`max_rpm`). This attribute is optional and can be set to `None` for no limit, allowing for unlimited queries to external services if needed.
### Maximum Iterations for Task Execution
The `max_iter` attribute allows users to define the maximum number of iterations an agent can perform for a single task, preventing infinite loops or excessively long executions.
The `max_iter` attribute allows users to define the maximum number of iterations an agent can perform for a single task, preventing infinite loops or excessively long executions. The default value is set to 15, providing a balance between thoroughness and efficiency. Once the agent approaches this number, it will try its best to give a good answer.
## Customizing Agents and Tools
Agents are customized by defining their attributes and tools during initialization. Tools are critical for an agent's functionality, enabling them to perform specialized tasks. In this example we will use the crewAI tools package to create a tool for a research analyst agent.
Agents are customized by defining their attributes and tools during initialization. Tools are critical for an agent's functionality, enabling them to perform specialized tasks. The `tools` attribute should be an array of tools the agent can utilize, and it's initialized as an empty list by default. Tools can be added or modified post-agent initialization to adapt to new requirements.
```shell
pip install 'crewai[tools]'
@@ -42,31 +39,31 @@ pip install 'crewai[tools]'
```python
import os
from crewai import Agent
from crewai_tools import SeperDevTool
from crewai_tools import SerperDevTool
# Set API keys for tool initialization
os.environ["OPENAI_API_KEY"] = "Your Key"
os.environ["SERPER_API_KEY"] = "Your Key"
# Initialize a search tool
search_tool = SeperDevTool()
search_tool = SerperDevTool()
# Initialize the agent with advanced options
agent = Agent(
role='Research Analyst',
goal='Provide up-to-date market analysis',
backstory='An expert analyst with a keen eye for market trends.',
tools=[serper_tool],
memory=True,
tools=[search_tool],
memory=True, # Enable memory
verbose=True,
max_rpm=10, # Optinal: Limit requests to 10 per minute, preventing API abuse
max_iter=5, # Optional: Limit task iterations to 5 before the agent tried to gives its best answer
max_rpm=None, # No limit on requests per minute
max_iter=15, # Default value for maximum iterations
allow_delegation=False
)
```
## Delegation and Autonomy
Controlling an agent's ability to delegate tasks or ask questions is vital for tailoring its autonomy and collaborative dynamics within the CrewAI framework.
Controlling an agent's ability to delegate tasks or ask questions is vital for tailoring its autonomy and collaborative dynamics within the CrewAI framework. By default, the `allow_delegation` attribute is set to `True`, enabling agents to seek assistance or delegate tasks as needed. This default behavior promotes collaborative problem-solving and efficiency within the CrewAI ecosystem. If needed, delegation can be disabled to suit specific operational requirements.
### Example: Disabling Delegation for an Agent
```python
@@ -74,9 +71,9 @@ agent = Agent(
role='Content Writer',
goal='Write engaging content on market trends',
backstory='A seasoned writer with expertise in market analysis.',
allow_delegation=False
allow_delegation=False # Disabling delegation
)
```
## Conclusion
Customizing agents in CrewAI by setting their roles, goals, backstories, and tools, alongside advanced options like language model customization, memory, and performance settings, equips a nuanced and capable AI team ready for complex challenges.
Customizing agents in CrewAI by setting their roles, goals, backstories, and tools, alongside advanced options like language model customization, memory, performance settings, and delegation preferences, equips a nuanced and capable AI team ready for complex challenges.

View File

@@ -1,60 +1,67 @@
---
title: Implementing the Hierarchical Process in CrewAI
description: Understanding and applying the hierarchical process within your CrewAI projects, with updates reflecting the latest coding practices.
description: A comprehensive guide to understanding and applying the hierarchical process within your CrewAI projects, updated to reflect the latest coding practices and functionalities.
---
## Introduction
The hierarchical process in CrewAI introduces a structured approach to managing tasks, mimicking traditional organizational hierarchies for efficient task delegation and execution. This ensures a systematic workflow that enhances project outcomes.
The hierarchical process in CrewAI introduces a structured approach to task management, simulating traditional organizational hierarchies for efficient task delegation and execution. This systematic workflow enhances project outcomes by ensuring tasks are handled with optimal efficiency and accuracy.
!!! note "Complexity and Efficiency"
The hierarchical process is designed to leverage advanced models like GPT-4, optimizing token usage while handling complex tasks with greater efficiency.
## Hierarchical Process Overview
Tasks within this process are managed through a clear hierarchy, where a 'manager' agent coordinates the workflow, delegates tasks, and validates outcomes, ensuring a streamlined and effective execution process.
By default, tasks in CrewAI are managed through a sequential process. However, adopting a hierarchical approach allows for a clear hierarchy in task management, where a 'manager' agent coordinates the workflow, delegates tasks, and validates outcomes for streamlined and effective execution. This manager agent is automatically created by crewAI so you don't need to worry about it.
### Key Features
- **Task Delegation**: A manager agent is responsible for allocating tasks among crew members based on their roles and capabilities.
- **Result Validation**: The manager evaluates the outcomes to ensure they meet the required standards before moving forward.
- **Efficient Workflow**: Emulates corporate structures, offering an organized and familiar approach to task management.
- **Task Delegation**: A manager agent allocates tasks among crew members based on their roles and capabilities.
- **Result Validation**: The manager evaluates outcomes to ensure they meet the required standards.
- **Efficient Workflow**: Emulates corporate structures, providing an organized approach to task management.
## Implementing the Hierarchical Process
To adopt the hierarchical process, define a crew with a designated manager and establish a clear chain of command for task execution. This structure is crucial for maintaining an orderly and efficient workflow.
To utilize the hierarchical process, it's essential to explicitly set the process attribute to `Process.hierarchical`, as the default behavior is `Process.sequential`. Define a crew with a designated manager and establish a clear chain of command.
!!! note "Tools and Agent Assignment"
Tools should be assigned at the agent level, not the task level, to facilitate task delegation and execution by the designated agents under the manager's guidance.
Assign tools at the agent level to facilitate task delegation and execution by the designated agents under the manager's guidance. Tools can also be specified at the task level for precise control over tool availability during task execution.
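For instance, a task can pin its own tools regardless of what its agent carries (a minimal, illustrative sketch; `researcher` refers to an agent like the one defined below):
```python
from crewai import Task
from crewai_tools import SerperDevTool

search_tool = SerperDevTool()

analysis_task = Task(
    description='Analyze the latest market movements in the AI sector.',
    expected_output='A short analysis report.',
    tools=[search_tool],  # tools made available only for this task
    agent=researcher
)
```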
!!! note "Manager LLM Configuration"
A manager LLM is automatically assigned to the crew, eliminating the need for manual definition. However, configuring the `manager_llm` parameter is necessary to tailor the manager's decision-making process.
!!! note "Manager LLM Requirement"
Configuring the `manager_llm` parameter is crucial for the hierarchical process. The system requires a manager LLM to be set up for proper function, ensuring tailored decision-making.
```python
from langchain_openai import ChatOpenAI
from crewai import Crew, Process, Agent
# Agents are defined without specifying a manager explicitly
# Agents are defined with attributes for backstory, cache, and verbose mode
researcher = Agent(
role='Researcher',
goal='Conduct in-depth analysis',
# tools = [...]
role='Researcher',
goal='Conduct in-depth analysis',
backstory='Experienced data analyst with a knack for uncovering hidden trends.',
cache=True,
verbose=False,
# tools=[] # This can be optionally specified; defaults to an empty list
)
writer = Agent(
role='Writer',
goal='Create engaging content',
# tools = [...]
role='Writer',
goal='Create engaging content',
backstory='Creative writer passionate about storytelling in technical domains.',
cache=True,
verbose=False,
# tools=[] # Optionally specify tools; defaults to an empty list
)
# Establishing the crew with a hierarchical process
# Establishing the crew with a hierarchical process and additional configurations
project_crew = Crew(
tasks=[...], # Tasks to be delegated and executed under the manager's supervision
agents=[researcher, writer],
manager_llm=ChatOpenAI(temperature=0, model="gpt-4"), # Defines the manager's decision-making engine
process=Process.hierarchical # Specifies the hierarchical management approach
tasks=[...], # Tasks to be delegated and executed under the manager's supervision
agents=[researcher, writer],
manager_llm=ChatOpenAI(temperature=0, model="gpt-4"), # Mandatory for hierarchical process
process=Process.hierarchical, # Specifies the hierarchical management approach
memory=True, # Enable memory usage for enhanced task execution
)
```
### Workflow in Action
1. **Task Assignment**: The manager strategically assigns tasks, considering each agent's role and skills.
2. **Execution and Review**: Agents complete their tasks, followed by a thorough review by the manager to ensure quality standards.
3. **Sequential Task Progression**: The manager ensures tasks are completed in a logical order, facilitating smooth project progression.
1. **Task Assignment**: The manager assigns tasks strategically, considering each agent's capabilities and available tools.
2. **Execution and Review**: Agents complete their tasks with the option for asynchronous execution and callback functions for streamlined workflows.
3. **Sequential Task Progression**: Despite being a hierarchical process, tasks follow a logical order for smooth progression, facilitated by the manager's oversight.
## Conclusion
Adopting the hierarchical process in CrewAI facilitates a well-organized and efficient approach to project management. By structuring tasks and delegations within a clear hierarchy, it enhances both productivity and quality control, making it an ideal strategy for managing complex projects.
Adopting the hierarchical process in crewAI, with the correct configurations and understanding of the system's capabilities, facilitates an organized and efficient approach to project management.

View File

@@ -1,64 +1,80 @@
---
title: Human Input on Execution
description: Integrating CrewAI with human input during execution in complex decision-making processes and leveraging the full capabilities of the agent's attributes and tools.
---
# Human Input in Agent Execution
Human input is crucial in numerous agent execution scenarios, enabling agents to request additional information or clarification when necessary. This feature is particularly useful in complex decision-making processes or when agents require further details to complete a task effectively.
Human input is critical in several agent execution scenarios, allowing agents to request additional information or clarification when necessary. This feature is especially useful in complex decision-making processes or when agents require more details to complete a task effectively.
## Using Human Input with CrewAI
Incorporating human input with CrewAI is straightforward, enhancing the agent's ability to make informed decisions. While the documentation previously mentioned using a "LangChain Tool" and a specific "DuckDuckGoSearchRun" tool from `langchain_community.tools`, it's important to clarify that the integration of such tools should align with the actual capabilities and configurations defined within your `Agent` class setup.
To integrate human input into agent execution, set the `human_input` flag in the task definition. When enabled, the agent prompts the user for input before delivering its final answer. This input can provide extra context, clarify ambiguities, or validate the agent's output.
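In isolation, the flag looks like this (a minimal sketch; `researcher` is assumed to be defined as in the full example below):
```python
from crewai import Task

review_task = Task(
    description="Draft a short summary of this week's AI news and check it with a human before finalizing.",
    expected_output='A human-approved summary of 3 bullet points.',
    agent=researcher,
    human_input=True  # the agent asks for human input before giving its final answer
)
```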
### Example:
```shell
pip install crewai
```
```python
import os
from crewai import Agent, Task, Crew, Process
from langchain_community.tools import DuckDuckGoSearchRun
from langchain.agents import load_tools
from crewai import Agent, Task, Crew
from crewai_tools import SerperDevTool
search_tool = DuckDuckGoSearchRun()
os.environ["SERPER_API_KEY"] = "Your Key" # serper.dev API key
os.environ["OPENAI_API_KEY"] = "Your Key"
# Loading Human Tools
human_tools = load_tools(["human"])
# Loading Tools
search_tool = SerperDevTool()
# Define your agents with roles and goals
# Define your agents with roles, goals, tools, and additional attributes
researcher = Agent(
role='Senior Research Analyst',
goal='Uncover cutting-edge developments in AI and data science',
backstory="""You are a Senior Research Analyst at a leading tech think tank.
Your expertise lies in identifying emerging trends and technologies in AI and
data science. You have a knack for dissecting complex data and presenting
actionable insights.""",
backstory=(
"You are a Senior Research Analyst at a leading tech think tank."
"Your expertise lies in identifying emerging trends and technologies in AI and data science."
"You have a knack for dissecting complex data and presenting actionable insights."
),
verbose=True,
allow_delegation=False,
tools=[search_tool]+human_tools # Passing human tools to the agent
tools=[search_tool],
max_rpm=100
)
writer = Agent(
role='Tech Content Strategist',
goal='Craft compelling content on tech advancements',
backstory="""You are a renowned Tech Content Strategist, known for your insightful
and engaging articles on technology and innovation. With a deep understanding of
the tech industry, you transform complex concepts into compelling narratives.""",
backstory=(
"You are a renowned Tech Content Strategist, known for your insightful and engaging articles on technology and innovation."
"With a deep understanding of the tech industry, you transform complex concepts into compelling narratives."
),
verbose=True,
allow_delegation=True
allow_delegation=True,
tools=[search_tool],
cache=False, # Disable cache for this agent
)
# Create tasks for your agents
task1 = Task(
description="""Conduct a comprehensive analysis of the latest advancements in AI in 2024.
Identify key trends, breakthrough technologies, and potential industry impacts.
Compile your findings in a detailed report.
Make sure to check with a human if the draft is good before finalizing your answer.""",
description=(
"Conduct a comprehensive analysis of the latest advancements in AI in 2024."
"Identify key trends, breakthrough technologies, and potential industry impacts."
"Compile your findings in a detailed report."
"Make sure to check with a human if the draft is good before finalizing your answer."
),
expected_output='A comprehensive full report on the latest AI advancements in 2024, leave nothing out',
agent=researcher
agent=researcher,
human_input=True,
)
task2 = Task(
description="""Using the insights from the researcher's report, develop an engaging blog
post that highlights the most significant AI advancements.
Your post should be informative yet accessible, catering to a tech-savvy audience.
Aim for a narrative that captures the essence of these breakthroughs and their
implications for the future.
Your final answer MUST be the full blog post of at least 3 paragraphs.""",
expected_output='A compelling 3 paragraphs blog post formated as markdown about the latest AI advancements in 2024',
description=(
"Using the insights from the researcher's report, develop an engaging blog post that highlights the most significant AI advancements."
"Your post should be informative yet accessible, catering to a tech-savvy audience."
"Aim for a narrative that captures the essence of these breakthroughs and their implications for the future."
),
expected_output='A compelling 3 paragraphs blog post formatted as markdown about the latest AI advancements in 2024',
agent=writer
)

View File

@@ -1,24 +1,32 @@
---
title: Connect CrewAI to LLMs
description: Guide on integrating CrewAI with various Large Language Models (LLMs).
description: Comprehensive guide on integrating CrewAI with various Large Language Models (LLMs), including detailed class attributes and methods.
---
## Connect CrewAI to LLMs
!!! note "Default LLM"
By default, CrewAI uses OpenAI's GPT-4 model for language processing. However, you can configure your agents to use a different model or API. This guide will show you how to connect your agents to different LLMs through environment variables and direct instantiation.
By default, CrewAI uses OpenAI's GPT-4 model for language processing. You can configure your agents to use a different model or API. This guide shows how to connect your agents to various LLMs through environment variables and direct instantiation.
CrewAI offers flexibility in connecting to various LLMs, including local models via [Ollama](https://ollama.ai) and different APIs like Azure. It's compatible with all [LangChain LLM](https://python.langchain.com/docs/integrations/llms/) components, enabling diverse integrations for tailored AI solutions.
## CrewAI Agent Overview
The `Agent` class in CrewAI is central to implementing AI solutions. Here's a brief overview:
The `Agent` class is the cornerstone for implementing AI solutions in CrewAI. Here's an updated overview reflecting the latest codebase changes:
- **Attributes**:
- `role`: Defines the agent's role within the solution.
- `goal`: Specifies the agent's objective.
- `backstory`: Provides a background story to the agent.
- `llm`: Indicates the Large Language Model the agent uses.
- `llm`: The language model that will run the agent. By default, it uses the GPT-4 model defined in the environment variable "OPENAI_MODEL_NAME".
- `function_calling_llm`: The language model that will handle the tool calling for this agent, overriding the crew function_calling_llm. Optional.
- `max_iter`: Maximum number of iterations for an agent to execute a task, default is 15.
- `memory`: Enables the agent to retain information during and across executions. Default is `False`.
- `max_rpm`: Maximum number of requests per minute the agent's execution should respect. Optional.
- `verbose`: Enables detailed logging of the agent's execution. Default is `False`.
- `allow_delegation`: Allows the agent to delegate tasks to other agents, default is `True`.
- `tools`: Specifies the tools available to the agent for task execution. Optional.
- `step_callback`: Provides a callback function to be executed after each step. Optional.
- `cache`: Determines whether the agent should use a cache for tool usage. Default is `True`.
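To make these attributes concrete, here is a minimal, illustrative sketch that simply spells out the defaults listed above (the role, goal, and callback are placeholders):
```python
from crewai import Agent

def log_step(step_output):
    # Hypothetical step callback: inspect or log each intermediate step
    print(step_output)

market_agent = Agent(
    role='Market Researcher',
    goal='Track developments in the AI market',
    backstory='A methodical researcher who documents everything.',
    verbose=False,          # default: no detailed execution logging
    memory=False,           # default: no memory across executions
    max_iter=15,            # default maximum iterations per task
    allow_delegation=True,  # default: may delegate to other agents
    cache=True,             # default: cache tool results
    step_callback=log_step  # optional: called after each step
)
```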
### Example Changing OpenAI's GPT model
```python
# Required
os.environ["OPENAI_MODEL_NAME"]="gpt-4-0125-preview"
@@ -28,7 +36,8 @@ example_agent = Agent(
role='Local Expert',
goal='Provide insights about the city',
backstory="A knowledgeable local guide.",
verbose=True
verbose=True,
memory=True
)
```
@@ -43,6 +52,39 @@ OPENAI_MODEL_NAME='openhermes' # Adjust based on available model
OPENAI_API_KEY=''
```
## HuggingFace Integration
There are a couple of different ways you can use HuggingFace to host your LLM.
### Your own HuggingFace endpoint
```python
from langchain_community.llms import HuggingFaceEndpoint
llm = HuggingFaceEndpoint(
endpoint_url="<YOUR_ENDPOINT_URL_HERE>",
huggingfacehub_api_token="<HF_TOKEN_HERE>",
task="text-generation",
max_new_tokens=512
)
agent = Agent(
role="HuggingFace Agent",
goal="Generate text using HuggingFace",
backstory="A diligent explorer of GitHub docs.",
llm=llm
)
```
### From HuggingFaceHub endpoint
```python
from langchain_community.llms import HuggingFaceHub
llm = HuggingFaceHub(
repo_id="HuggingFaceH4/zephyr-7b-beta",
huggingfacehub_api_token="<HF_TOKEN_HERE>",
task="text-generation",
)
```
## OpenAI Compatible API Endpoints
Switch between APIs and models seamlessly using environment variables, supporting platforms like FastChat, LM Studio, and Mistral AI.
@@ -68,6 +110,35 @@ OPENAI_API_BASE=https://api.mistral.ai/v1
OPENAI_MODEL_NAME="mistral-small"
```
### Solar
```python
import os
from langchain_community.chat_models.solar import SolarChat

# Initialize language model
os.environ["SOLAR_API_KEY"] = "your-solar-api-key"
llm = SolarChat(max_tokens=1024)
```
Free developer API key available here: https://console.upstage.ai/services/solar
Langchain Example: https://github.com/langchain-ai/langchain/pull/18556
### text-gen-web-ui
```sh
OPENAI_API_BASE=http://localhost:5000/v1
OPENAI_MODEL_NAME=NA
OPENAI_API_KEY=NA
```
### Cohere
```python
import os
from langchain_community.chat_models import ChatCohere

# Initialize language model
os.environ["COHERE_API_KEY"] = "your-cohere-api-key"
llm = ChatCohere()
```
Free developer API key available here: https://cohere.com/
Langchain Documentation: https://python.langchain.com/docs/integrations/chat/cohere
### Azure Open AI Configuration
For Azure OpenAI API integration, set the following environment variables:
```sh

View File

@@ -1,6 +1,6 @@
---
title: Using the Sequential Processes in crewAI
description: A comprehensive guide to utilizing the sequential processe for task execution in crewAI projects.
description: A comprehensive guide to utilizing the sequential processes for task execution in crewAI projects.
---
## Introduction
@@ -13,8 +13,6 @@ The sequential process ensures tasks are executed one after the other, following
- **Linear Task Flow**: Ensures orderly progression by handling tasks in a predetermined sequence.
- **Simplicity**: Best suited for projects with clear, step-by-step tasks.
- **Easy Monitoring**: Facilitates easy tracking of task completion and project progress.
## Implementing the Sequential Process
Assemble your crew and define tasks in the order they need to be executed.
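A minimal sketch of a sequential crew (the agents and tasks are illustrative):
```python
from crewai import Agent, Task, Crew, Process

researcher = Agent(
    role='Researcher',
    goal='Gather the key facts on a given topic',
    backstory='A meticulous analyst.'
)
writer = Agent(
    role='Writer',
    goal='Turn research notes into a short report',
    backstory='A clear and concise writer.'
)

research = Task(
    description='Collect the main facts about the topic.',
    expected_output='A bullet list of key facts.',
    agent=researcher
)
report = Task(
    description='Write a short report based on the research.',
    expected_output='A 2 paragraph report.',
    agent=writer
)

# Tasks run one after the other, in the order they appear in the list
report_crew = Crew(
    agents=[researcher, writer],
    tasks=[research, report],
    process=Process.sequential
)
result = report_crew.kickoff()
```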
@@ -57,4 +55,4 @@ report_crew = Crew(
3. **Completion**: The process concludes once the final task is executed, leading to project completion.
## Conclusion
The sequential process in CrewAI provides a clear, straightforward path for task execution. It's particularly suited for projects requiring a logical progression of tasks, ensuring each step is completed before the next begins, thereby facilitating a cohesive final product.
The sequential and hierarchical processes in CrewAI offer clear, adaptable paths for task execution. They are well-suited for projects requiring logical progression and dynamic decision-making, ensuring each step is completed effectively, thereby facilitating a cohesive final product.

View File

@@ -33,6 +33,11 @@ Cutting-edge framework for orchestrating role-playing, autonomous AI agents. By
Crews
</a>
</li>
<li>
<a href="./core-concepts/Memory">
Memory
</a>
</li>
</ul>
</div>
<div style="width:30%">
@@ -43,6 +48,11 @@ Cutting-edge framework for orchestrating role-playing, autonomous AI agents. By
Getting Started
</a>
</li>
<li>
<a href="./how-to/Create-Custom-Tools">
Create Custom Tools
</a>
</li>
<li>
<a href="./how-to/Sequential">
Using Sequential Process
@@ -68,6 +78,11 @@ Cutting-edge framework for orchestrating role-playing, autonomous AI agents. By
Human Input on Execution
</a>
</li>
<li>
<a href="./how-to/AgentOps-Observability.md">
Agent Observability using AgentOps
</a>
</li>
</ul>
</div>
<div style="width:30%">
@@ -110,4 +125,4 @@ Cutting-edge framework for orchestrating role-playing, autonomous AI agents. By
</li>
</ul>
</div>
</div>
</div>

View File

@@ -1,8 +1,12 @@
---
title: Telemetry
description: Understanding the telemetry data collected by CrewAI and how it contributes to the enhancement of the library.
---
## Telemetry
CrewAI utilizes anonymous telemetry to gather usage statistics with the primary goal of enhancing the library. Our focus is on improving and developing the features, integrations, and tools most utilized by our users.
It's pivotal to understand that **NO data is collected** concerning prompts, task descriptions, agents' backstories or goals, usage of tools, API calls, responses, any data processed by the agents, or secrets and environment variables.
It's pivotal to understand that **NO data is collected** concerning prompts, task descriptions, agents' backstories or goals, usage of tools, API calls, responses, any data processed by the agents, or secrets and environment variables, except in the one case described below. When the `share_crew` feature is enabled, detailed data including task descriptions, agents' backstories or goals, and other specific attributes are collected to provide deeper insights while respecting user privacy.
### Data Collected Includes:
- **Version of CrewAI**: Assessing the adoption rate of our latest version helps us understand user needs and guide our updates.
@@ -16,8 +20,8 @@ It's pivotal to understand that **NO data is collected** concerning prompts, tas
- **Roles of Agents within a Crew**: Understanding the various roles agents play aids in crafting better tools, integrations, and examples.
- **Tool Usage**: Identifying which tools are most frequently used allows us to prioritize improvements in those areas.
### Opt-In Futher Telemetry Sharing
Users can choose to share their complete telemetry data by enabling the `share_crew` attribute to `True` in their crew configurations. This opt-in approach respects user privacy and aligns with data protection standards by ensuring users have control over their data sharing preferences.
### Opt-In Further Telemetry Sharing
Users can choose to share their complete telemetry data by enabling the `share_crew` attribute to `True` in their crew configurations. This opt-in approach respects user privacy and aligns with data protection standards by ensuring users have control over their data sharing preferences. Enabling `share_crew` results in the collection of detailed crew and task execution data, including `goal`, `backstory`, `context`, and `output` of tasks. This enables a deeper insight into usage patterns while respecting the user's choice to share.
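Opting in is a single crew-level attribute; a minimal sketch (the agents and tasks are placeholders defined elsewhere):
```python
from crewai import Crew

crew = Crew(
    agents=[researcher, writer],
    tasks=[research_task, write_task],
    share_crew=True  # opt in to sharing complete telemetry data
)
```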
### Updates and Revisions
We are committed to maintaining the accuracy and transparency of our documentation. Regular reviews and updates are performed to ensure our documentation accurately reflects the latest developments of our codebase and telemetry practices. Users are encouraged to review this section for the most current information on our data collection practices and how they contribute to the improvement of CrewAI.
We are committed to maintaining the accuracy and transparency of our documentation. Regular reviews and updates are performed to ensure our documentation accurately reflects the latest developments of our codebase and telemetry practices. Users are encouraged to review this section for the most current information on our data collection practices and how they contribute to the improvement of CrewAI.

View File

@@ -0,0 +1,62 @@
# CSVSearchTool
!!! note "Experimental"
We are still working on improving tools, so there might be unexpected behavior or changes in the future.
## Description
This tool is used to perform a RAG (Retrieval-Augmented Generation) search within a CSV file's content. It allows users to semantically search for queries in the content of a specified CSV file. This feature is particularly useful for extracting information from large CSV datasets where traditional search methods might be inefficient. All tools with "Search" in their name, including CSVSearchTool, are RAG tools designed for searching different sources of data.
## Installation
Install the crewai_tools package
```shell
pip install 'crewai[tools]'
```
## Example
```python
from crewai_tools import CSVSearchTool
# Initialize the tool with a specific CSV file. This setup allows the agent to only search the given CSV file.
tool = CSVSearchTool(csv='path/to/your/csvfile.csv')
# OR
# Initialize the tool without a specific CSV file. Agent will need to provide the CSV path at runtime.
tool = CSVSearchTool()
```
## Arguments
- `csv` : The path to the CSV file you want to search. This is a mandatory argument if the tool was initialized without a specific CSV file; otherwise, it is optional.
## Custom model and embeddings
By default, the tool uses OpenAI for both embeddings and summarization. To customize the model, you can use a config dictionary as follows:
```python
tool = CSVSearchTool(
config=dict(
llm=dict(
provider="ollama", # or google, openai, anthropic, llama2, ...
config=dict(
model="llama2",
# temperature=0.5,
# top_p=1,
# stream=true,
),
),
embedder=dict(
provider="google",
config=dict(
model="models/embedding-001",
task_type="retrieval_document",
# title="Embeddings",
),
),
)
)
```

View File

@@ -0,0 +1,65 @@
# CodeDocsSearchTool
!!! note "Experimental"
We are still working on improving tools, so there might be unexpected behavior or changes in the future.
## Description
The CodeDocsSearchTool is a powerful RAG (Retrieval-Augmented Generation) tool designed for semantic searches within code documentation. It enables users to efficiently find specific information or topics within code documentation. By providing a `docs_url` during initialization, the tool narrows down the search to that particular documentation site. Alternatively, without a specific `docs_url`, it searches across a wide array of code documentation known or discovered throughout its execution, making it versatile for various documentation search needs.
## Installation
To start using the CodeDocsSearchTool, first, install the crewai_tools package via pip:
```
pip install 'crewai[tools]'
```
## Example
Utilize the CodeDocsSearchTool as follows to conduct searches within code documentation:
```python
from crewai_tools import CodeDocsSearchTool
# To search any code documentation content if the URL is known or discovered during its execution:
tool = CodeDocsSearchTool()
# OR
# To specifically focus your search on a given documentation site by providing its URL:
tool = CodeDocsSearchTool(docs_url='https://docs.example.com/reference')
```
Note: Substitute 'https://docs.example.com/reference' with the URL of your target documentation site.
## Arguments
- `docs_url`: Optional. Specifies the URL of the code documentation to be searched. Providing this during the tool's initialization focuses the search on the specified documentation content.
## Custom model and embeddings
By default, the tool uses OpenAI for both embeddings and summarization. To customize the model, you can use a config dictionary as follows:
```python
tool = CodeDocsSearchTool(
config=dict(
llm=dict(
provider="ollama", # or google, openai, anthropic, llama2, ...
config=dict(
model="llama2",
# temperature=0.5,
# top_p=1,
# stream=true,
),
),
embedder=dict(
provider="google",
config=dict(
model="models/embedding-001",
task_type="retrieval_document",
# title="Embeddings",
),
),
)
)
```

View File

@@ -0,0 +1,60 @@
# DOCXSearchTool
!!! note "Experimental"
We are still working on improving tools, so there might be unexpected behavior or changes in the future.
## Description
The DOCXSearchTool is a RAG tool designed for semantic searching within DOCX documents. It enables users to effectively search and extract relevant information from DOCX files using query-based searches. This tool is invaluable for data analysis, information management, and research tasks, streamlining the process of finding specific information within large document collections.
## Installation
Install the crewai_tools package by running the following command in your terminal:
```shell
pip install 'crewai[tools]'
```
## Example
The following example demonstrates initializing the DOCXSearchTool to search within any DOCX file's content or with a specific DOCX file path.
```python
from crewai_tools import DOCXSearchTool
# Initialize the tool to search within any DOCX file's content
tool = DOCXSearchTool()
# OR
# Initialize the tool with a specific DOCX file, so the agent can only search the content of the specified DOCX file
tool = DOCXSearchTool(docx='path/to/your/document.docx')
```
## Arguments
- `docx`: An optional file path to a specific DOCX document you wish to search. If not provided during initialization, the tool allows for later specification of any DOCX file's content path for searching.
## Custom model and embeddings
By default, the tool uses OpenAI for both embeddings and summarization. To customize the model, you can use a config dictionary as follows:
```python
tool = DOCXSearchTool(
config=dict(
llm=dict(
provider="ollama", # or google, openai, anthropic, llama2, ...
config=dict(
model="llama2",
# temperature=0.5,
# top_p=1,
# stream=true,
),
),
embedder=dict(
provider="google",
config=dict(
model="models/embedding-001",
task_type="retrieval_document",
# title="Embeddings",
),
),
)
)
```

View File

@@ -0,0 +1,37 @@
# DirectoryReadTool
!!! note "Experimental"
We are still working on improving tools, so there might be unexpected behavior or changes in the future.
## Description
The DirectoryReadTool is a powerful utility designed to provide a comprehensive listing of directory contents. It can recursively navigate through the specified directory, offering users a detailed enumeration of all files, including those within subdirectories. This tool is crucial for tasks that require a thorough inventory of directory structures or for validating the organization of files within directories.
## Installation
To utilize the DirectoryReadTool in your project, install the `crewai_tools` package. If this package is not yet part of your environment, you can install it using pip with the command below:
```shell
pip install 'crewai[tools]'
```
This command installs the latest version of the `crewai_tools` package, granting access to the DirectoryReadTool among other utilities.
## Example
Employing the DirectoryReadTool is straightforward. The following code snippet demonstrates how to set it up and use the tool to list the contents of a specified directory:
```python
from crewai_tools import DirectoryReadTool
# Initialize the tool so the agent can read any directory's content it learns about during execution
tool = DirectoryReadTool()
# OR
# Initialize the tool with a specific directory, so the agent can only read the content of the specified directory
tool = DirectoryReadTool(directory='/path/to/your/directory')
```
## Arguments
The DirectoryReadTool requires minimal configuration for use. The essential argument for this tool is as follows:
- `directory`: **Optional**. An argument that specifies the path to the directory whose contents you wish to list. It accepts both absolute and relative paths, guiding the tool to the desired directory for content listing.
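As a minimal sketch, assuming the tool exposes the same `run()` convention shown for ScrapeWebsiteTool later in these docs, a tool initialized with a fixed directory can be invoked directly; the path below is a placeholder:
```python
from crewai_tools import DirectoryReadTool

# Placeholder path; point this at a directory that exists on your machine.
tool = DirectoryReadTool(directory='/path/to/your/directory')

# Assumed to return the recursive file listing as plain text.
listing = tool.run()
print(listing)
```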

View File

@@ -0,0 +1,55 @@
# DirectorySearchTool
!!! note "Experimental"
The DirectorySearchTool is under continuous development. Features and functionalities might evolve, and unexpected behavior may occur as we refine the tool.
## Description
The DirectorySearchTool enables semantic search within the content of specified directories, leveraging the Retrieval-Augmented Generation (RAG) methodology for efficient navigation through files. Designed for flexibility, it allows users to dynamically specify search directories at runtime or set a fixed directory during initial setup.
## Installation
To use the DirectorySearchTool, begin by installing the crewai_tools package. Execute the following command in your terminal:
```shell
pip install 'crewai[tools]'
```
## Initialization and Usage
Import the DirectorySearchTool from the `crewai_tools` package to start. You can initialize the tool without specifying a directory, enabling the setting of the search directory at runtime. Alternatively, the tool can be initialized with a predefined directory.
```python
from crewai_tools import DirectorySearchTool
# For dynamic directory specification at runtime
tool = DirectorySearchTool()
# For fixed directory searches
tool = DirectorySearchTool(directory='/path/to/directory')
```
## Arguments
- `directory`: A string argument that specifies the search directory. This is optional during initialization but required for searches if not set initially.
## Custom Model and Embeddings
The DirectorySearchTool uses OpenAI for embeddings and summarization by default. Customization options for these settings include changing the model provider and configuration, enhancing flexibility for advanced users.
```python
tool = DirectorySearchTool(
config=dict(
llm=dict(
provider="ollama", # Options include ollama, google, anthropic, llama2, and more
config=dict(
model="llama2",
# Additional configurations here
),
),
embedder=dict(
provider="google",
config=dict(
model="models/embedding-001",
task_type="retrieval_document",
# title="Embeddings",
),
),
)
)
```

View File

@@ -0,0 +1,32 @@
# FileReadTool
!!! note "Experimental"
We are still working on improving tools, so there might be unexpected behavior or changes in the future.
## Description
The FileReadTool is part of the crewai_tools package and is designed to facilitate file reading and content retrieval. It is useful for processing batch text files, reading runtime configuration files, and importing data for analytics, and it supports a variety of text-based file formats such as `.txt`, `.csv`, `.json`, and more. Depending on the file type, the tool offers specialized handling, such as converting JSON content into a Python dictionary for ease of use.
## Installation
To use the FileReadTool, install the crewai_tools package:
```shell
pip install 'crewai[tools]'
```
## Usage Example
To get started with the FileReadTool:
```python
from crewai_tools import FileReadTool
# Initialize the tool so the agent can read any file it knows or learns the path for during execution
file_read_tool = FileReadTool()
# OR
# Initialize the tool with a specific file path, so the agent can only read the content of the specified file
file_read_tool = FileReadTool(file_path='path/to/your/file.txt')
```
## Arguments
- `file_path`: The path to the file you want to read. It accepts both absolute and relative paths. Ensure the file exists and you have the necessary permissions to access it.

View File

@@ -0,0 +1,67 @@
# GithubSearchTool
!!! note "Experimental"
We are still working on improving tools, so there might be unexpected behavior or changes in the future.
## Description
The GithubSearchTool is a Retrieval-Augmented Generation (RAG) tool specifically designed for conducting semantic searches within GitHub repositories. Utilizing advanced semantic search capabilities, it sifts through code, pull requests, issues, and repositories, making it an essential tool for developers, researchers, or anyone in need of precise information from GitHub.
## Installation
To use the GithubSearchTool, first ensure the crewai_tools package is installed in your Python environment:
```shell
pip install 'crewai[tools]'
```
This command installs the necessary package to run the GithubSearchTool along with any other tools included in the crewai_tools package.
## Example
Here's how you can use the GithubSearchTool to perform semantic searches within a GitHub repository:
```python
from crewai_tools import GithubSearchTool
# Initialize the tool for semantic searches within a specific GitHub repository
tool = GithubSearchTool(
github_repo='https://github.com/example/repo',
content_types=['code', 'issue'] # Options: code, repo, pr, issue
)
# OR
# Initialize the tool without a specific repository, so the agent can search any repository it learns about during its execution
tool = GithubSearchTool(
content_types=['code', 'issue'] # Options: code, repo, pr, issue
)
```
## Arguments
- `github_repo` : The URL of the GitHub repository where the search will be conducted. Required when initializing the tool for a specific repository; when omitted, the agent must provide the repository URL at runtime.
- `content_types` : Specifies the types of content to include in your search. You must provide a list of content types from the following options: `code` for searching within the code, `repo` for searching within the repository's general information, `pr` for searching within pull requests, and `issue` for searching within issues. This field is mandatory and allows tailoring the search to specific content types within the GitHub repository.
## Custom model and embeddings
By default, the tool uses OpenAI for both embeddings and summarization. To customize the model, you can use a config dictionary as follows:
```python
tool = GithubSearchTool(
config=dict(
llm=dict(
provider="ollama", # or google, openai, anthropic, llama2, ...
config=dict(
model="llama2",
# temperature=0.5,
# top_p=1,
# stream=true,
),
),
embedder=dict(
provider="google",
config=dict(
model="models/embedding-001",
task_type="retrieval_document",
# title="Embeddings",
),
),
)
)
```

View File

@@ -0,0 +1,60 @@
# JSONSearchTool
!!! note "Experimental Status"
The JSONSearchTool is currently in an experimental phase. This means the tool is under active development, and users might encounter unexpected behavior or changes. We highly encourage feedback on any issues or suggestions for improvements.
## Description
The JSONSearchTool is designed to facilitate efficient and precise searches within JSON file contents. It utilizes a RAG (Retrieval-Augmented Generation) search mechanism, allowing users to specify a JSON path for targeted searches within a particular JSON file. This capability significantly improves the accuracy and relevance of search results.
## Installation
To install the JSONSearchTool, use the following pip command:
```shell
pip install 'crewai[tools]'
```
## Usage Examples
Here are updated examples on how to utilize the JSONSearchTool effectively for searching within JSON files. These examples take into account the current implementation and usage patterns identified in the codebase.
```python
from crewai_tools import JSONSearchTool
# General JSON content search
# This approach is suitable when the JSON path is either known beforehand or can be dynamically identified.
tool = JSONSearchTool()
# Restricting search to a specific JSON file
# Use this initialization method when you want to limit the search scope to a specific JSON file.
tool = JSONSearchTool(json_path='./path/to/your/file.json')
```
## Arguments
- `json_path` (str, optional): Specifies the path to the JSON file to be searched. This argument is not required if the tool is initialized for a general search. When provided, it confines the search to the specified JSON file.
## Configuration Options
The JSONSearchTool supports extensive customization through a configuration dictionary. This allows users to select different models for embeddings and summarization based on their requirements.
```python
tool = JSONSearchTool(
config={
"llm": {
"provider": "ollama", # Other options include google, openai, anthropic, llama2, etc.
"config": {
"model": "llama2",
# Additional optional configurations can be specified here.
# temperature=0.5,
# top_p=1,
# stream=true,
},
},
"embedder": {
"provider": "google",
"config": {
"model": "models/embedding-001",
"task_type": "retrieval_document",
# Further customization options can be added here.
},
},
}
)
```

View File

@@ -0,0 +1,62 @@
# MDXSearchTool
!!! note "Experimental"
The MDXSearchTool is in continuous development. Features may be added or removed, and functionality could change unpredictably as we refine the tool.
## Description
The MDX Search Tool is a component of the `crewai_tools` package designed for semantic searches within MDX files using the RAG (Retrieval-Augmented Generation) methodology. It is useful for researchers and writers who need to quickly locate relevant passages in MDX documentation or content files, simplifying the task of finding specific information within large document collections.
## Installation
Before using the MDX Search Tool, ensure the `crewai_tools` package is installed. If it is not, you can install it with the following command:
```shell
pip install 'crewai[tools]'
```
## Usage Example
To use the MDX Search Tool, first set up any required environment variables (for example, the API key for the default OpenAI models). Then integrate the tool into your crewAI project. Below is a basic example of how to do this:
```python
from crewai_tools import MDXSearchTool
# Initialize the tool to search any MDX content it learns about during execution
tool = MDXSearchTool()
# OR
# Initialize the tool with a specific MDX file path for an exclusive search within that document
tool = MDXSearchTool(mdx='path/to/your/document.mdx')
```
## Parameters
- `mdx`: **Optional**. Specifies the MDX file path for the search. It can be provided during initialization.
## Customization of Model and Embeddings
The tool defaults to using OpenAI for embeddings and summarization. For customization, utilize a configuration dictionary as shown below:
```python
tool = MDXSearchTool(
config=dict(
llm=dict(
provider="ollama", # Options include google, openai, anthropic, llama2, etc.
config=dict(
model="llama2",
# Optional parameters can be included here.
# temperature=0.5,
# top_p=1,
# stream=true,
),
),
embedder=dict(
provider="google",
config=dict(
model="models/embedding-001",
task_type="retrieval_document",
# Optional title for the embeddings can be added here.
# title="Embeddings",
),
),
)
)
```

View File

@@ -0,0 +1,60 @@
# PDFSearchTool
!!! note "Experimental"
We are still working on improving tools, so there might be unexpected behavior or changes in the future.
## Description
The PDFSearchTool is a RAG tool designed for semantic searches within PDF content. It allows for inputting a search query and a PDF document, leveraging advanced search techniques to find relevant content efficiently. This capability makes it especially useful for extracting specific information from large PDF files quickly.
## Installation
To get started with the PDFSearchTool, first, ensure the crewai_tools package is installed with the following command:
```shell
pip install 'crewai[tools]'
```
## Example
Here's how to use the PDFSearchTool to search within a PDF document:
```python
from crewai_tools import PDFSearchTool
# Initialize the tool allowing for any PDF content search if the path is provided during execution
tool = PDFSearchTool()
# OR
# Initialize the tool with a specific PDF path for exclusive search within that document
tool = PDFSearchTool(pdf='path/to/your/document.pdf')
```
## Arguments
- `pdf`: **Optional**. The PDF path for the search. Can be provided at initialization or within the `run` method's arguments. If provided at initialization, the tool confines its search to the specified document.
## Custom model and embeddings
By default, the tool uses OpenAI for both embeddings and summarization. To customize the model, you can use a config dictionary as follows:
```python
tool = PDFSearchTool(
config=dict(
llm=dict(
provider="ollama", # or google, openai, anthropic, llama2, ...
config=dict(
model="llama2",
# temperature=0.5,
# top_p=1,
# stream=true,
),
),
embedder=dict(
provider="google",
config=dict(
model="models/embedding-001",
task_type="retrieval_document",
# title="Embeddings",
),
),
)
)
```

View File

@@ -0,0 +1,60 @@
# PGSearchTool
!!! note "Under Development"
The PGSearchTool is currently under development. This document outlines the intended functionality and interface. As development progresses, please be aware that some features may not be available or could change.
## Description
The PGSearchTool is envisioned as a powerful tool for facilitating semantic searches within PostgreSQL database tables. By leveraging advanced Retrieval-Augmented Generation (RAG) technology, it aims to provide an efficient means for querying database table content, specifically tailored for PostgreSQL databases. The tool's goal is to simplify the process of finding relevant data through semantic search queries, offering a valuable resource for users needing to conduct advanced queries on extensive datasets within a PostgreSQL environment.
## Installation
The `crewai_tools` package, which will include the PGSearchTool upon its release, can be installed using the following command:
```shell
pip install 'crewai[tools]'
```
(Note: The PGSearchTool is not yet available in the current version of the `crewai_tools` package. This installation command will be updated once the tool is released.)
## Example Usage
Below is a proposed example showcasing how to use the PGSearchTool for conducting a semantic search on a table within a PostgreSQL database:
```python
from crewai_tools import PGSearchTool
# Initialize the tool with the database URI and the target table name
tool = PGSearchTool(db_uri='postgresql://user:password@localhost:5432/mydatabase', table_name='employees')
```
## Arguments
The PGSearchTool is designed to require the following arguments for its operation:
- `db_uri`: A string representing the URI of the PostgreSQL database to be queried. This argument will be mandatory and must include the necessary authentication details and the location of the database.
- `table_name`: A string specifying the name of the table within the database on which the semantic search will be performed. This argument will also be mandatory.
## Custom Model and Embeddings
The tool intends to use OpenAI for both embeddings and summarization by default. Users will have the option to customize the model using a config dictionary as follows:
```python
tool = PGSearchTool(
config=dict(
llm=dict(
provider="ollama", # or google, openai, anthropic, llama2, ...
config=dict(
model="llama2",
# temperature=0.5,
# top_p=1,
# stream=true,
),
),
embedder=dict(
provider="google",
config=dict(
model="models/embedding-001",
task_type="retrieval_document",
# title="Embeddings",
),
),
)
)
```

View File

@@ -0,0 +1,31 @@
# ScrapeWebsiteTool
!!! note "Experimental"
We are still working on improving tools, so there might be unexpected behavior or changes in the future.
## Description
A tool designed to extract and read the content of a specified website. It is capable of handling various types of web pages by making HTTP requests and parsing the received HTML content. This tool can be particularly useful for web scraping tasks, data collection, or extracting specific information from websites.
## Installation
Install the crewai_tools package
```shell
pip install 'crewai[tools]'
```
## Example
```python
from crewai_tools import ScrapeWebsiteTool
# To enable scraping any website it finds during its execution
tool = ScrapeWebsiteTool()
# Initialize the tool with the website URL, so the agent can only scrape the content of the specified website
tool = ScrapeWebsiteTool(website_url='https://www.example.com')
# Extract the text from the site
text = tool.run()
print(text)
```
## Arguments
- `website_url` : Mandatory website URL to read and scrape. This is the primary input for the tool, specifying which website's content should be scraped and read.

View File

@@ -0,0 +1,44 @@
# SeleniumScrapingTool
!!! note "Experimental"
This tool is currently in development. As we refine its capabilities, users may encounter unexpected behavior. Your feedback is invaluable to us for making improvements.
## Description
The SeleniumScrapingTool is crafted for high-efficiency web scraping tasks. It allows for precise extraction of content from web pages by using CSS selectors to target specific elements. Its design caters to a wide range of scraping needs, offering flexibility to work with any provided website URL.
## Installation
To get started with the SeleniumScrapingTool, install the crewai_tools package using pip:
```
pip install 'crewai[tools]'
```
## Usage Examples
Below are some scenarios where the SeleniumScrapingTool can be utilized:
```python
from crewai_tools import SeleniumScrapingTool
# Example 1: Initialize the tool without any parameters to scrape the current page it navigates to
tool = SeleniumScrapingTool()
# Example 2: Scrape the entire webpage of a given URL
tool = SeleniumScrapingTool(website_url='https://example.com')
# Example 3: Target and scrape a specific CSS element from a webpage
tool = SeleniumScrapingTool(website_url='https://example.com', css_element='.main-content')
# Example 4: Perform scraping with additional parameters for a customized experience
tool = SeleniumScrapingTool(website_url='https://example.com', css_element='.main-content', cookie={'name': 'user', 'value': 'John Doe'}, wait_time=10)
```
## Arguments
The following parameters can be used to customize the SeleniumScrapingTool's scraping process:
- `website_url`: **Mandatory**. Specifies the URL of the website from which content is to be scraped. It can be set at initialization or provided by the agent at runtime, as in Example 1.
- `css_element`: **Optional**. The CSS selector for a specific element to target on the website. When provided, scraping focuses on that part of the page; when omitted, the entire page is scraped, as in Example 2.
- `cookie`: **Optional**. A dictionary that contains cookie information. Useful for simulating a logged-in session, thereby providing access to content that might be restricted to non-logged-in users.
- `wait_time`: **Optional**. Specifies the delay (in seconds) before the content is scraped. This delay allows for the website and any dynamic content to fully load, ensuring a successful scrape.
!!! attention
Since the SeleniumScrapingTool is under active development, the parameters and functionality may evolve over time. Users are encouraged to keep the tool updated and report any issues or suggestions for enhancements.

View File

@@ -0,0 +1,33 @@
# SerperDevTool Documentation
!!! note "Experimental"
We are still working on improving tools, so there might be unexpected behavior or changes in the future.
## Description
This tool is designed to perform a semantic search for a specified query from a text's content across the internet. It utilizes the [serper.dev](https://serper.dev) API to fetch and display the most relevant search results based on the query provided by the user.
## Installation
To incorporate this tool into your project, follow the installation instructions below:
```shell
pip install 'crewai[tools]'
```
## Example
The following example demonstrates how to initialize the tool and execute a search with a given query:
```python
from crewai_tools import SerperDevTool
# Initialize the tool for internet searching capabilities
tool = SerperDevTool()
```
## Steps to Get Started
To effectively use the `SerperDevTool`, follow these steps:
1. **Package Installation**: Confirm that the `crewai[tools]` package is installed in your Python environment.
2. **API Key Acquisition**: Acquire a `serper.dev` API key by registering for a free account at `serper.dev`.
3. **Environment Configuration**: Store your obtained API key in an environment variable named `SERPER_API_KEY` to facilitate its use by the tool.
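As a rough sketch of steps 2 and 3, assuming the agent conventions used elsewhere in these docs, the API key can be set in the environment before the tool is created and the tool passed to an agent via its `tools` list; the key and agent values below are placeholders:
```python
import os

from crewai import Agent
from crewai_tools import SerperDevTool

# Placeholder key; in practice, export SERPER_API_KEY in your shell instead of hardcoding it.
os.environ["SERPER_API_KEY"] = "your-serper-api-key"

# The tool reads the key from the environment when it performs a search.
search_tool = SerperDevTool()

# Placeholder agent definition; the tool becomes available to the agent through `tools`.
researcher = Agent(
    role="Researcher",
    goal="Find up-to-date information on a given topic",
    backstory="An analyst who relies on web search for fresh data.",
    tools=[search_tool],
)
```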
## Conclusion
By integrating the `SerperDevTool` into Python projects, users gain the ability to conduct real-time, relevant searches across the internet directly from their applications. By adhering to the setup and usage guidelines provided, incorporating this tool into projects is streamlined and straightforward.

View File

@@ -0,0 +1,62 @@
# TXTSearchTool
!!! note "Experimental"
We are still working on improving tools, so there might be unexpected behavior or changes in the future.
## Description
This tool is used to perform a RAG (Retrieval-Augmented Generation) search within the content of a text file. It allows for semantic searching of a query within a specified text file's content, making it an invaluable resource for quickly extracting information or finding specific sections of text based on the query provided.
## Installation
To use the TXTSearchTool, you first need to install the crewai_tools package. This can be done using pip, a package manager for Python. Open your terminal or command prompt and enter the following command:
```shell
pip install 'crewai[tools]'
```
This command will download and install the TXTSearchTool along with any necessary dependencies.
## Example
The following example demonstrates how to use the TXTSearchTool to search within a text file. This example shows both the initialization of the tool with a specific text file and the subsequent search within that file's content.
```python
from crewai_tools import TXTSearchTool
# Initialize the tool to search within any text file's content the agent learns about during its execution
tool = TXTSearchTool()
# OR
# Initialize the tool with a specific text file, so the agent can search within the given text file's content
tool = TXTSearchTool(txt='path/to/text/file.txt')
```
## Arguments
- `txt` (str): **Optional**. The path to the text file you want to search. This argument is only required if the tool was not initialized with a specific text file; otherwise, the search will be conducted within the initially provided text file.
## Custom model and embeddings
By default, the tool uses OpenAI for both embeddings and summarization. To customize the model, you can use a config dictionary as follows:
```python
tool = TXTSearchTool(
config=dict(
llm=dict(
provider="ollama", # or google, openai, anthropic, llama2, ...
config=dict(
model="llama2",
# temperature=0.5,
# top_p=1,
# stream=true,
),
),
embedder=dict(
provider="google",
config=dict(
model="models/embedding-001",
task_type="retrieval_document",
# title="Embeddings",
),
),
)
)
```

View File

@@ -0,0 +1,60 @@
# WebsiteSearchTool
!!! note "Experimental Status"
The WebsiteSearchTool is currently in an experimental phase. We are actively working on incorporating this tool into our suite of offerings and will update the documentation accordingly.
## Description
The WebsiteSearchTool is designed as a concept for conducting semantic searches within the content of websites. It aims to leverage advanced machine learning models like Retrieval-Augmented Generation (RAG) to navigate and extract information from specified URLs efficiently. This tool intends to offer flexibility, allowing users to perform searches across any website or focus on specific websites of interest. Please note, the current implementation details of the WebsiteSearchTool are under development, and its functionalities as described may not yet be accessible.
## Installation
To prepare your environment for when the WebsiteSearchTool becomes available, you can install the foundational package with:
```shell
pip install 'crewai[tools]'
```
This command installs the necessary dependencies to ensure that once the tool is fully integrated, users can start using it immediately.
## Example Usage
Below are examples of how the WebsiteSearchTool could be utilized in different scenarios. Please note, these examples are illustrative and represent planned functionality:
```python
from crewai_tools import WebsiteSearchTool
# Example of initiating tool that agents can use to search across any discovered websites
tool = WebsiteSearchTool()
# Example of limiting the search to the content of a specific website, so now agents can only search within that website
tool = WebsiteSearchTool(website='https://example.com')
```
## Arguments
- `website`: An optional argument intended to specify the website URL for focused searches. This argument is designed to enhance the tool's flexibility by allowing targeted searches when necessary.
## Customization Options
By default, the tool uses OpenAI for both embeddings and summarization. To customize the model, you can use a config dictionary as follows:
```python
tool = WebsiteSearchTool(
config=dict(
llm=dict(
provider="ollama", # or google, openai, anthropic, llama2, ...
config=dict(
model="llama2",
# temperature=0.5,
# top_p=1,
# stream=true,
),
),
embedder=dict(
provider="google",
config=dict(
model="models/embedding-001",
task_type="retrieval_document",
# title="Embeddings",
),
),
)
)
```

View File

@@ -0,0 +1,60 @@
# XMLSearchTool
!!! note "Experimental"
We are still working on improving tools, so there might be unexpected behavior or changes in the future.
## Description
The XMLSearchTool is a cutting-edge RAG tool engineered for conducting semantic searches within XML files. Ideal for users needing to parse and extract information from XML content efficiently, this tool supports inputting a search query and an optional XML file path. By specifying an XML path, users can target their search more precisely to the content of that file, thereby obtaining more relevant search outcomes.
## Installation
To start using the XMLSearchTool, you must first install the crewai_tools package. This can be easily done with the following command:
```shell
pip install 'crewai[tools]'
```
## Example
Here are two examples demonstrating how to use the XMLSearchTool. The first example shows searching within a specific XML file, while the second example illustrates initiating a search without predefining an XML path, providing flexibility in search scope.
```python
from crewai_tools import XMLSearchTool
# Allow agents to search within any XML file's content as it learns about their paths during execution
tool = XMLSearchTool()
# OR
# Initialize the tool with a specific XML file path for exclusive search within that document
tool = XMLSearchTool(xml='path/to/your/xmlfile.xml')
```
## Arguments
- `xml`: This is the path to the XML file you wish to search. It is an optional parameter during the tool's initialization but must be provided either at initialization or as part of the `run` method's arguments to execute a search.
## Custom model and embeddings
By default, the tool uses OpenAI for both embeddings and summarization. To customize the model, you can use a config dictionary as follows:
```python
tool = XMLSearchTool(
config=dict(
llm=dict(
provider="ollama", # or google, openai, anthropic, llama2, ...
config=dict(
model="llama2",
# temperature=0.5,
# top_p=1,
# stream=true,
),
),
embedder=dict(
provider="google",
config=dict(
model="models/embedding-001",
task_type="retrieval_document",
# title="Embeddings",
),
),
)
)
```

View File

@@ -0,0 +1,60 @@
# YoutubeChannelSearchTool
!!! note "Experimental"
We are still working on improving tools, so there might be unexpected behavior or changes in the future.
## Description
This tool is designed to perform semantic searches within a specific Youtube channel's content. Leveraging the RAG (Retrieval-Augmented Generation) methodology, it provides relevant search results, making it invaluable for extracting information or finding specific content without the need to manually sift through videos. It streamlines the search process within Youtube channels, catering to researchers, content creators, and viewers seeking specific information or topics.
## Installation
To utilize the YoutubeChannelSearchTool, the `crewai_tools` package must be installed. Execute the following command in your shell to install:
```shell
pip install 'crewai[tools]'
```
## Example
To begin using the YoutubeChannelSearchTool, follow the example below. This demonstrates initializing the tool with a specific Youtube channel handle and conducting a search within that channel's content.
```python
from crewai_tools import YoutubeChannelSearchTool
# Initialize the tool to search within any Youtube channel's content the agent learns about during its execution
tool = YoutubeChannelSearchTool()
# OR
# Initialize the tool with a specific Youtube channel handle to target your search
tool = YoutubeChannelSearchTool(youtube_channel_handle='@exampleChannel')
```
## Arguments
- `youtube_channel_handle` : A string representing the Youtube channel handle. Providing it when initializing the tool targets the search at that specific channel; when omitted, the agent must supply the channel handle at runtime.
## Custom model and embeddings
By default, the tool uses OpenAI for both embeddings and summarization. To customize the model, you can use a config dictionary as follows:
```python
tool = YoutubeChannelSearchTool(
config=dict(
llm=dict(
provider="ollama", # or google, openai, anthropic, llama2, ...
config=dict(
model="llama2",
# temperature=0.5,
# top_p=1,
# stream=true,
),
),
embedder=dict(
provider="google",
config=dict(
model="models/embedding-001",
task_type="retrieval_document",
# title="Embeddings",
),
),
)
)
```

View File

@@ -0,0 +1,64 @@
# YoutubeVideoSearchTool
!!! note "Experimental"
We are still working on improving tools, so there might be unexpected behavior or changes in the future.
## Description
This tool is part of the `crewai_tools` package and is designed to perform semantic searches within Youtube video content, utilizing Retrieval-Augmented Generation (RAG) techniques. It is one of several "Search" tools in the package that leverage RAG for different sources. The YoutubeVideoSearchTool allows for flexibility in searches; users can search across any Youtube video content without specifying a video URL, or they can target their search to a specific Youtube video by providing its URL.
## Installation
To utilize the YoutubeVideoSearchTool, you must first install the `crewai_tools` package. This package contains the YoutubeVideoSearchTool among other utilities designed to enhance your data analysis and processing tasks. Install the package by executing the following command in your terminal:
```
pip install 'crewai[tools]'
```
## Example
To integrate the YoutubeVideoSearchTool into your Python projects, follow the example below. This demonstrates how to use the tool both for general Youtube content searches and for targeted searches within a specific video's content.
```python
from crewai_tools import YoutubeVideoSearchTool
# General search across Youtube content without specifying a video URL, so the agent can search within any Youtube video content whose URL it learns about during its operation
tool = YoutubeVideoSearchTool()
# Targeted search within a specific Youtube video's content
tool = YoutubeVideoSearchTool(youtube_video_url='https://youtube.com/watch?v=example')
```
## Arguments
The YoutubeVideoSearchTool accepts the following initialization arguments:
- `youtube_video_url`: An optional argument at initialization but required if targeting a specific Youtube video. It specifies the Youtube video URL path you want to search within.
## Custom model and embeddings
By default, the tool uses OpenAI for both embeddings and summarization. To customize the model, you can use a config dictionary as follows:
```python
tool = YoutubeVideoSearchTool(
config=dict(
llm=dict(
provider="ollama", # or google, openai, anthropic, llama2, ...
config=dict(
model="llama2",
# temperature=0.5,
# top_p=1,
# stream=true,
),
),
embedder=dict(
provider="google",
config=dict(
model="models/embedding-001",
task_type="retrieval_document",
# title="Embeddings",
),
),
)
)
```

View File

@@ -126,13 +126,36 @@ nav:
- Processes: 'core-concepts/Processes.md'
- Crews: 'core-concepts/Crews.md'
- Collaboration: 'core-concepts/Collaboration.md'
- Memory: 'core-concepts/Memory.md'
- How to Guides:
- Getting Started: 'how-to/Creating-a-Crew-and-kick-it-off.md'
- Create Custom Tools: 'how-to/Create-Custom-Tools.md'
- Using Sequential Process: 'how-to/Sequential.md'
- Using Hierarchical Process: 'how-to/Hierarchical.md'
- Connecting to any LLM: 'how-to/LLM-Connections.md'
- Customizing Agents: 'how-to/Customizing-Agents.md'
- Human Input on Execution: 'how-to/Human-Input-on-Execution.md'
- Agent Observability using AgentOps: 'how-to/AgentOps-Observability.md'
- Tools Docs:
- Google Serper Search: 'tools/SerperDevTool.md'
- Scrape Website: 'tools/ScrapeWebsiteTool.md'
- Directory Read: 'tools/DirectoryReadTool.md'
- File Read: 'tools/FileReadTool.md'
- Selenium Scraper: 'tools/SeleniumScrapingTool.md'
- Directory RAG Search: 'tools/DirectorySearchTool.md'
- PDF RAG Search: 'tools/PDFSearchTool.md'
- TXT RAG Search: 'tools/TXTSearchTool.md'
- CSV RAG Search: 'tools/CSVSearchTool.md'
- XML RAG Search: 'tools/XMLSearchTool.md'
- JSON RAG Search: 'tools/JSONSearchTool.md'
- Docx Rag Search: 'tools/DOCXSearchTool.md'
- MDX RAG Search: 'tools/MDXSearchTool.md'
- PG RAG Search: 'tools/PGSearchTool.md'
- Website RAG Search: 'tools/WebsiteSearchTool.md'
- Github RAG Search: 'tools/GitHubSearchTool.md'
- Code Docs RAG Search: 'tools/CodeDocsSearchTool.md'
- Youtube Video RAG Search: 'tools/YoutubeVideoSearchTool.md'
- Youtube Channel RAG Search: 'tools/YoutubeChannelSearchTool.md'
- Examples:
- Trip Planner Crew: https://github.com/joaomdmoura/crewAI-examples/tree/main/trip_planner"
- Create Instagram Post: https://github.com/joaomdmoura/crewAI-examples/tree/main/instagram_post"
@@ -158,4 +181,4 @@ extra:
- icon: fontawesome/brands/twitter
link: https://twitter.com/joaomdmoura
- icon: fontawesome/brands/github
link: https://github.com/joaomdmoura/crewAI
link: https://github.com/joaomdmoura/crewAI

1225
poetry.lock generated

File diff suppressed because it is too large Load Diff

View File

@@ -1,7 +1,6 @@
[tool.poetry]
name = "crewai"
version = "0.16.3"
version = "0.28.8"
description = "Cutting-edge framework for orchestrating role-playing, autonomous AI agents. By fostering collaborative intelligence, CrewAI empowers agents to work together seamlessly, tackling complex tasks."
authors = ["Joao Moura <joao@crewai.com>"]
readme = "README.md"
@@ -9,25 +8,26 @@ packages = [
{ include = "crewai", from = "src" },
]
[tool.poetry.urls]
Homepage = "https://crewai.io"
Homepage = "https://crewai.com"
Documentation = "https://github.com/joaomdmoura/CrewAI/wiki/Index"
Repository = "https://github.com/joaomdmoura/crewai"
[tool.poetry.dependencies]
python = ">=3.10,<=3.13"
pydantic = "^2.4.2"
langchain = "^0.1.0"
openai = "^1.7.1"
langchain-openai = "^0.0.5"
langchain = "^0.1.10"
openai = "^1.13.3"
opentelemetry-api = "^1.22.0"
opentelemetry-sdk = "^1.22.0"
opentelemetry-exporter-otlp-proto-http = "^1.22.0"
instructor = "^0.5.2"
regex = "^2023.12.25"
crewai-tools = { version = "^0.0.12", optional = true }
crewai-tools = { version = "^0.1.7", optional = true }
click = "^8.1.7"
python-dotenv = "1.0.0"
embedchain = "^0.1.98"
appdirs = "^1.4.4"
[tool.poetry.extras]
tools = ["crewai-tools"]
@@ -35,7 +35,6 @@ tools = ["crewai-tools"]
[tool.poetry.group.dev.dependencies]
isort = "^5.13.2"
pyright = ">=1.1.350,<2.0.0"
black = {git = "https://github.com/psf/black.git", rev = "stable"}
autoflake = "^2.2.1"
pre-commit = "^3.6.0"
mkdocs = "^1.4.3"
@@ -45,17 +44,21 @@ mkdocs-material = {extras = ["imaging"], version = "^9.5.7"}
mkdocs-material-extensions = "^1.3.1"
pillow = "^10.2.0"
cairosvg = "^2.7.1"
crewai_tools = "^0.0.12"
crewai-tools = "^0.1.7"
[tool.isort]
profile = "black"
known_first_party = ["crewai"]
[tool.poetry.group.test.dependencies]
pytest = "^8.0.0"
pytest-vcr = "^1.0.2"
python-dotenv = "1.0.0"
[tool.poetry.scripts]
crewai = "crewai.cli.cli:crewai"
[build-system]
requires = ["poetry-core"]
build-backend = "poetry.core.masonry.api"

View File

@@ -4,9 +4,9 @@ from typing import Any, Dict, List, Optional, Tuple
from langchain.agents.agent import RunnableAgent
from langchain.agents.tools import tool as LangChainTool
from langchain.memory import ConversationSummaryMemory
from langchain.tools.render import render_text_description
from langchain_core.agents import AgentAction
from langchain_core.callbacks import BaseCallbackHandler
from langchain_openai import ChatOpenAI
from pydantic import (
UUID4,
@@ -21,6 +21,7 @@ from pydantic import (
from pydantic_core import PydanticCustomError
from crewai.agents import CacheHandler, CrewAgentExecutor, CrewAgentParser, ToolsHandler
from crewai.memory.contextual.contextual_memory import ContextualMemory
from crewai.utilities import I18N, Logger, Prompts, RPMController
from crewai.utilities.token_counter_callback import TokenCalcHandler, TokenProcess
@@ -36,6 +37,7 @@ class Agent(BaseModel):
role: The role of the agent.
goal: The objective of the agent.
backstory: The backstory of the agent.
config: Dict representation of agent configuration.
llm: The language model that will run the agent.
function_calling_llm: The language model that will the tool calling for this agent, it overrides the crew function_calling_llm.
max_iter: Maximum number of iterations for an agent to execute a task.
@@ -45,6 +47,7 @@ class Agent(BaseModel):
allow_delegation: Whether the agent is allowed to delegate tasks to other agents.
tools: Tools at agents disposal
step_callback: Callback to be executed after each step of the agent execution.
callbacks: A list of callback functions from the langchain library that are triggered during the agent's execution process
"""
__hash__ = object.__hash__ # type: ignore
@@ -63,13 +66,18 @@ class Agent(BaseModel):
role: str = Field(description="Role of the agent")
goal: str = Field(description="Objective of the agent")
backstory: str = Field(description="Backstory of the agent")
cache: bool = Field(
default=True,
description="Whether the agent should use a cache for tool usage.",
)
config: Optional[Dict[str, Any]] = Field(
description="Configuration for the agent",
default=None,
)
max_rpm: Optional[int] = Field(
default=None,
description="Maximum number of requests per minute for the agent execution to be respected.",
)
memory: bool = Field(
default=False, description="Whether the agent should have memory or not"
)
verbose: bool = Field(
default=False, description="Verbose mode for the Agent Execution"
)
@@ -80,16 +88,21 @@ class Agent(BaseModel):
default_factory=list, description="Tools at agents disposal"
)
max_iter: Optional[int] = Field(
default=15, description="Maximum iterations for an agent to execute a task"
default=25, description="Maximum iterations for an agent to execute a task"
)
max_execution_time: Optional[int] = Field(
default=None,
description="Maximum execution time for an agent to execute a task",
)
agent_executor: InstanceOf[CrewAgentExecutor] = Field(
default=None, description="An instance of the CrewAgentExecutor class."
)
crew: Any = Field(default=None, description="Crew to which the agent belongs.")
tools_handler: InstanceOf[ToolsHandler] = Field(
default=None, description="An instance of the ToolsHandler class."
)
cache_handler: InstanceOf[CacheHandler] = Field(
default=CacheHandler(), description="An instance of the CacheHandler class."
default=None, description="An instance of the CacheHandler class."
)
step_callback: Optional[Any] = Field(
default=None,
@@ -105,6 +118,17 @@ class Agent(BaseModel):
function_calling_llm: Optional[Any] = Field(
description="Language model that will run the agent.", default=None
)
callbacks: Optional[List[InstanceOf[BaseCallbackHandler]]] = Field(
default=None, description="Callback to be executed"
)
_original_role: str | None = None
_original_goal: str | None = None
_original_backstory: str | None = None
def __init__(__pydantic_self__, **data):
config = data.pop("config", {})
super().__init__(**config, **data)
@field_validator("id", mode="before")
@classmethod
@@ -114,6 +138,14 @@ class Agent(BaseModel):
"may_not_set_field", "This field is not to be set by the user.", {}
)
@model_validator(mode="after")
def set_attributes_based_on_config(self) -> "Agent":
"""Set attributes based on the agent configuration."""
if self.config:
for key, value in self.config.items():
setattr(self, key, value)
return self
@model_validator(mode="after")
def set_private_attrs(self):
"""Set private attributes."""
@@ -128,10 +160,15 @@ class Agent(BaseModel):
def set_agent_executor(self) -> "Agent":
"""set agent executor is set."""
if hasattr(self.llm, "model_name"):
self.llm.callbacks = [
TokenCalcHandler(self.llm.model_name, self._token_process)
]
token_handler = TokenCalcHandler(self.llm.model_name, self._token_process)
if isinstance(self.llm.callbacks, list):
self.llm.callbacks.append(token_handler)
else:
self.llm.callbacks = [token_handler]
if not self.agent_executor:
if not self.cache_handler:
self.cache_handler = CacheHandler()
self.set_cache_handler(self.cache_handler)
return self
@@ -151,6 +188,9 @@ class Agent(BaseModel):
Returns:
Output of the agent
"""
if self.tools_handler:
self.tools_handler.last_used_tool = {}
task_prompt = task.prompt()
if context:
@@ -158,12 +198,25 @@ class Agent(BaseModel):
task=task_prompt, context=context
)
tools = self._parse_tools(tools or self.tools)
if self.crew and self.crew.memory:
contextual_memory = ContextualMemory(
self.crew._short_term_memory,
self.crew._long_term_memory,
self.crew._entity_memory,
)
memory = contextual_memory.build_context_for_task(task, context)
if memory.strip() != "":
task_prompt += self.i18n.slice("memory").format(memory=memory)
tools = tools or self.tools
parsed_tools = self._parse_tools(tools)
self.create_agent_executor(tools=tools)
self.agent_executor.tools = tools
self.agent_executor.tools = parsed_tools
self.agent_executor.task = task
self.agent_executor.tools_description = render_text_description(tools)
self.agent_executor.tools_names = self.__tools_names(tools)
self.agent_executor.tools_description = render_text_description(parsed_tools)
self.agent_executor.tools_names = self.__tools_names(parsed_tools)
result = self.agent_executor.invoke(
{
@@ -184,8 +237,10 @@ class Agent(BaseModel):
Args:
cache_handler: An instance of the CacheHandler class.
"""
self.cache_handler = cache_handler
self.tools_handler = ToolsHandler(cache=self.cache_handler)
self.tools_handler = ToolsHandler()
if self.cache:
self.cache_handler = cache_handler
self.tools_handler.cache = cache_handler
self.create_agent_executor()
def set_rpm_controller(self, rpm_controller: RPMController) -> None:
@@ -218,13 +273,18 @@ class Agent(BaseModel):
executor_args = {
"llm": self.llm,
"i18n": self.i18n,
"crew": self.crew,
"crew_agent": self,
"tools": self._parse_tools(tools),
"verbose": self.verbose,
"original_tools": tools,
"handle_parsing_errors": True,
"max_iterations": self.max_iter,
"max_execution_time": self.max_execution_time,
"step_callback": self.step_callback,
"tools_handler": self.tools_handler,
"function_calling_llm": self.function_calling_llm,
"callbacks": self.callbacks,
}
if self._rpm_controller:
@@ -232,15 +292,7 @@ class Agent(BaseModel):
"request_within_rpm_limit"
] = self._rpm_controller.check_or_wait
if self.memory:
summary_memory = ConversationSummaryMemory(
llm=self.llm, input_key="input", memory_key="chat_history"
)
executor_args["memory"] = summary_memory
agent_args["chat_history"] = lambda x: x["chat_history"]
prompt = Prompts(i18n=self.i18n, tools=tools).task_execution_with_memory()
else:
prompt = Prompts(i18n=self.i18n, tools=tools).task_execution()
prompt = Prompts(i18n=self.i18n, tools=tools).task_execution()
execution_prompt = prompt.partial(
goal=self.goal,
@@ -256,10 +308,17 @@ class Agent(BaseModel):
def interpolate_inputs(self, inputs: Dict[str, Any]) -> None:
"""Interpolate inputs into the agent description and backstory."""
if self._original_role is None:
self._original_role = self.role
if self._original_goal is None:
self._original_goal = self.goal
if self._original_backstory is None:
self._original_backstory = self.backstory
if inputs:
self.role = self.role.format(**inputs)
self.goal = self.goal.format(**inputs)
self.backstory = self.backstory.format(**inputs)
self.role = self._original_role.format(**inputs)
self.goal = self._original_goal.format(**inputs)
self.backstory = self._original_backstory.format(**inputs)
def increment_formatting_errors(self) -> None:
"""Count the formatting errors of the agent."""
@@ -268,7 +327,7 @@ class Agent(BaseModel):
def format_log_to_str(
self,
intermediate_steps: List[Tuple[AgentAction, str]],
observation_prefix: str = "Result: ",
observation_prefix: str = "Observation: ",
llm_prefix: str = "",
) -> str:
"""Construct the scratchpad that lets the agent continue its thought process."""
@@ -298,3 +357,6 @@ class Agent(BaseModel):
@staticmethod
def __tools_names(tools) -> str:
return ", ".join([t.name for t in tools])
def __repr__(self):
return f"Agent(role={self.role}, goal={self.goal}, backstory={self.backstory})"

View File

@@ -1,3 +1,4 @@
import threading
import time
from typing import Any, Dict, Iterator, List, Optional, Tuple, Union
@@ -12,21 +13,31 @@ from langchain_core.utils.input import get_color_mapping
from pydantic import InstanceOf
from crewai.agents.tools_handler import ToolsHandler
from crewai.memory.entity.entity_memory_item import EntityMemoryItem
from crewai.memory.long_term.long_term_memory_item import LongTermMemoryItem
from crewai.memory.short_term.short_term_memory_item import ShortTermMemoryItem
from crewai.tools.tool_usage import ToolUsage, ToolUsageErrorException
from crewai.utilities import I18N
from crewai.utilities.converter import ConverterError
from crewai.utilities.evaluators.task_evaluator import TaskEvaluator
class CrewAgentExecutor(AgentExecutor):
_i18n: I18N = I18N()
should_ask_for_human_input: bool = False
llm: Any = None
iterations: int = 0
task: Any = None
tools_description: str = ""
tools_names: str = ""
original_tools: List[Any] = []
crew_agent: Any = None
crew: Any = None
function_calling_llm: Any = None
request_within_rpm_limit: Any = None
tools_handler: InstanceOf[ToolsHandler] = None
max_iterations: Optional[int] = 15
have_forced_answer: bool = False
force_answer_max_iterations: Optional[int] = None
step_callback: Optional[Any] = None
@@ -36,7 +47,54 @@ class CrewAgentExecutor(AgentExecutor):
return values
def _should_force_answer(self) -> bool:
return True if self.iterations == self.force_answer_max_iterations else False
return (
self.iterations == self.force_answer_max_iterations
) and not self.have_forced_answer
def _create_short_term_memory(self, output) -> None:
if (
self.crew
and self.crew.memory
and "Action: Delegate work to co-worker" not in output.log
):
memory = ShortTermMemoryItem(
data=output.log,
agent=self.crew_agent.role,
metadata={
"observation": self.task.description,
},
)
self.crew._short_term_memory.save(memory)
def _create_long_term_memory(self, output) -> None:
if self.crew and self.crew.memory:
ltm_agent = TaskEvaluator(self.crew_agent)
evaluation = ltm_agent.evaluate(self.task, output.log)
if isinstance(evaluation, ConverterError):
return
long_term_memory = LongTermMemoryItem(
task=self.task.description,
agent=self.crew_agent.role,
quality=evaluation.quality,
datetime=str(time.time()),
expected_output=self.task.expected_output,
metadata={
"suggestions": evaluation.suggestions,
"quality": evaluation.quality,
},
)
self.crew._long_term_memory.save(long_term_memory)
for entity in evaluation.entities:
entity_memory = EntityMemoryItem(
name=entity.name,
type=entity.type,
description=entity.description,
relationships="\n".join([f"- {r}" for r in entity.relationships]),
)
self.crew._entity_memory.save(entity_memory)
def _call(
self,
@@ -48,13 +106,18 @@ class CrewAgentExecutor(AgentExecutor):
name_to_tool_map = {tool.name: tool for tool in self.tools}
# We construct a mapping from each tool to a color, used for logging.
color_mapping = get_color_mapping(
[tool.name for tool in self.tools], excluded_colors=["green", "red"]
[tool.name.casefold() for tool in self.tools],
excluded_colors=["green", "red"],
)
intermediate_steps: List[Tuple[AgentAction, str]] = []
# Allowing human input given task setting
if self.task.human_input:
self.should_ask_for_human_input = True
# Let's start tracking the number of iterations and time elapsed
self.iterations = 0
time_elapsed = 0.0
start_time = time.time()
# We now enter the agent loop (until it returns something).
while self._should_continue(self.iterations, time_elapsed):
if not self.request_within_rpm_limit or self.request_within_rpm_limit():
@@ -65,16 +128,21 @@ class CrewAgentExecutor(AgentExecutor):
intermediate_steps,
run_manager=run_manager,
)
if self.step_callback:
self.step_callback(next_step_output)
if isinstance(next_step_output, AgentFinish):
# Creating long term memory
create_long_term_memory = threading.Thread(
target=self._create_long_term_memory, args=(next_step_output,)
)
create_long_term_memory.start()
return self._return(
next_step_output, intermediate_steps, run_manager=run_manager
)
intermediate_steps.extend(next_step_output)
if len(next_step_output) == 1:
next_step_action = next_step_output[0]
# See if tool should return directly
@@ -83,11 +151,13 @@ class CrewAgentExecutor(AgentExecutor):
return self._return(
tool_return, intermediate_steps, run_manager=run_manager
)
self.iterations += 1
time_elapsed = time.time() - start_time
output = self.agent.return_stopped_response(
self.early_stopping_method, intermediate_steps, **inputs
)
return self._return(output, intermediate_steps, run_manager=run_manager)
def _iter_next_step(
@@ -103,7 +173,15 @@ class CrewAgentExecutor(AgentExecutor):
Override this to take control of how the agent makes and acts on choices.
"""
try:
if self._should_force_answer():
error = self._i18n.errors("force_final_answer")
output = AgentAction("_Exception", error, error)
self.have_forced_answer = True
yield AgentStep(action=output, observation=error)
return
intermediate_steps = self._prepare_intermediate_steps(intermediate_steps)
# Call the LLM to see what to do.
output = self.agent.plan(
intermediate_steps,
@@ -111,23 +189,6 @@ class CrewAgentExecutor(AgentExecutor):
**inputs,
)
if self._should_force_answer():
if isinstance(output, AgentFinish):
yield output
return
if isinstance(output, AgentAction):
output = output
else:
raise ValueError(
f"Unexpected output type from agent: {type(output)}"
)
yield AgentStep(
action=output, observation=self._i18n.errors("force_final_answer")
)
return
except OutputParserException as e:
if isinstance(self.handle_parsing_errors, bool):
raise_error = not self.handle_parsing_errors
@@ -140,11 +201,11 @@ class CrewAgentExecutor(AgentExecutor):
"again, pass `handle_parsing_errors=True` to the AgentExecutor. "
f"This is the error: {str(e)}"
)
text = str(e)
str(e)
if isinstance(self.handle_parsing_errors, bool):
if e.send_to_llm:
observation = f"\n{str(e.observation)}"
text = str(e.llm_output)
str(e.llm_output)
else:
observation = ""
elif isinstance(self.handle_parsing_errors, str):
@@ -153,22 +214,24 @@ class CrewAgentExecutor(AgentExecutor):
observation = f"\n{self.handle_parsing_errors(e)}"
else:
raise ValueError("Got unexpected type of `handle_parsing_errors`")
output = AgentAction("_Exception", observation, text)
output = AgentAction("_Exception", observation, "")
if run_manager:
run_manager.on_agent_action(output, color="green")
tool_run_kwargs = self.agent.tool_run_logging_kwargs()
observation = ExceptionTool().run(
output.tool_input,
verbose=self.verbose,
verbose=False,
color=None,
callbacks=run_manager.get_child() if run_manager else None,
**tool_run_kwargs,
)
if self._should_force_answer():
yield AgentStep(
action=output, observation=self._i18n.errors("force_final_answer")
)
error = self._i18n.errors("force_final_answer")
output = AgentAction("_Exception", error, error)
yield AgentStep(action=output, observation=error)
return
yield AgentStep(action=output, observation=observation)
@@ -176,37 +239,64 @@ class CrewAgentExecutor(AgentExecutor):
# If the tool chosen is the finishing tool, then we end and return.
if isinstance(output, AgentFinish):
yield output
return
if self.should_ask_for_human_input:
# Making sure we only ask for it once, so disabling for the next thought loop
self.should_ask_for_human_input = False
human_feedback = self._ask_human_input(output.return_values["output"])
action = AgentAction(
tool="Human Input", tool_input=human_feedback, log=output.log
)
yield AgentStep(
action=action,
observation=self._i18n.slice("human_feedback").format(
human_feedback=human_feedback
),
)
return
else:
yield output
return
self._create_short_term_memory(output)
actions: List[AgentAction]
actions = [output] if isinstance(output, AgentAction) else output
yield from actions
for agent_action in actions:
if run_manager:
run_manager.on_agent_action(agent_action, color="green")
# Otherwise we lookup the tool
tool_usage = ToolUsage(
tools_handler=self.tools_handler,
tools=self.tools,
original_tools=self.original_tools,
tools_description=self.tools_description,
tools_names=self.tools_names,
function_calling_llm=self.function_calling_llm,
llm=self.llm,
task=self.task,
action=agent_action,
)
tool_calling = tool_usage.parse(agent_action.log)
if isinstance(tool_calling, ToolUsageErrorException):
observation = tool_calling.message
else:
if tool_calling.tool_name.lower().strip() in [
name.lower().strip() for name in name_to_tool_map
if tool_calling.tool_name.casefold().strip() in [
name.casefold().strip() for name in name_to_tool_map
]:
observation = tool_usage.use(tool_calling, agent_action.log)
else:
observation = self._i18n.errors("wrong_tool_name").format(
tool=tool_calling.tool_name,
tools=", ".join([tool.name for tool in self.tools]),
tools=", ".join([tool.name.casefold() for tool in self.tools]),
)
yield AgentStep(action=agent_action, observation=observation)
def _ask_human_input(self, final_answer: dict) -> str:
"""Get human input."""
return input(
self._i18n.slice("getting_input").format(final_answer=final_answer)
)

View File

@@ -1,3 +1,4 @@
import re
from typing import Any, Union
from langchain.agents.output_parsers import ReActSingleInputOutputParser
@@ -6,13 +7,14 @@ from langchain_core.exceptions import OutputParserException
from crewai.utilities import I18N
TOOL_USAGE_SECTION = "Use Tool:"
FINAL_ANSWER_ACTION = "Final Answer:"
FINAL_ANSWER_AND_TOOL_ERROR_MESSAGE = "I tried to use a tool and give a final answer at the same time, I must choose only one."
MISSING_ACTION_AFTER_THOUGHT_ERROR_MESSAGE = "I did it wrong. Invalid Format: I missed the 'Action:' after 'Thought:'. I will do right next, and don't use a tool I have already used.\n"
MISSING_ACTION_INPUT_AFTER_ACTION_ERROR_MESSAGE = "I did it wrong. Invalid Format: I missed the 'Action Input:' after 'Action:'. I will do right next, and don't use a tool I have already used.\n"
FINAL_ANSWER_AND_PARSABLE_ACTION_ERROR_MESSAGE = "I did it wrong. Tried to both perform Action and give a Final Answer at the same time, I must do one or the other"
class CrewAgentParser(ReActSingleInputOutputParser):
"""Parses Crew-style LLM calls that have a single tool input.
"""Parses ReAct-style LLM calls that have a single tool input.
Expects output to be in one of two formats.
@@ -20,17 +22,16 @@ class CrewAgentParser(ReActSingleInputOutputParser):
should be in the below format. This will result in an AgentAction
being returned.
```
Use Tool: All context for using the tool here
```
Thought: agent thought here
Action: search
Action Input: what is the temperature in SF?
If the output signals that a final answer should be given,
should be in the below format. This will result in an AgentFinish
being returned.
```
Thought: agent thought here
Final Answer: The temperature is 100 degrees
```
"""
_i18n: I18N = I18N()
@@ -38,26 +39,52 @@ class CrewAgentParser(ReActSingleInputOutputParser):
def parse(self, text: str) -> Union[AgentAction, AgentFinish]:
includes_answer = FINAL_ANSWER_ACTION in text
includes_tool = TOOL_USAGE_SECTION in text
if includes_tool:
regex = (
r"Action\s*\d*\s*:[\s]*(.*?)[\s]*Action\s*\d*\s*Input\s*\d*\s*:[\s]*(.*)"
)
action_match = re.search(regex, text, re.DOTALL)
if action_match:
if includes_answer:
self.agent.increment_formatting_errors()
raise OutputParserException(f"{FINAL_ANSWER_AND_TOOL_ERROR_MESSAGE}")
raise OutputParserException(
f"{FINAL_ANSWER_AND_PARSABLE_ACTION_ERROR_MESSAGE}: {text}"
)
action = action_match.group(1).strip()
action_input = action_match.group(2)
tool_input = action_input.strip(" ")
tool_input = tool_input.strip('"')
return AgentAction("", "", text)
return AgentAction(action, tool_input, text)
elif includes_answer:
return AgentFinish(
{"output": text.split(FINAL_ANSWER_ACTION)[-1].strip()}, text
)
format = self._i18n.slice("format_without_tools")
error = f"{format}"
self.agent.increment_formatting_errors()
raise OutputParserException(
error,
observation=error,
llm_output=text,
send_to_llm=True,
)
if not re.search(r"Action\s*\d*\s*:[\s]*(.*?)", text, re.DOTALL):
self.agent.increment_formatting_errors()
raise OutputParserException(
f"Could not parse LLM output: `{text}`",
observation=f"{MISSING_ACTION_AFTER_THOUGHT_ERROR_MESSAGE}\n{self._i18n.slice('final_answer_format')}",
llm_output=text,
send_to_llm=True,
)
elif not re.search(
r"[\s]*Action\s*\d*\s*Input\s*\d*\s*:[\s]*(.*)", text, re.DOTALL
):
self.agent.increment_formatting_errors()
raise OutputParserException(
f"Could not parse LLM output: `{text}`",
observation=MISSING_ACTION_INPUT_AFTER_ACTION_ERROR_MESSAGE,
llm_output=text,
send_to_llm=True,
)
else:
format = self._i18n.slice("format_without_tools")
error = f"{format}"
self.agent.increment_formatting_errors()
raise OutputParserException(
error,
observation=error,
llm_output=text,
send_to_llm=True,
)

View File

@@ -1,7 +1,7 @@
from typing import Any
from typing import Any, Optional, Union
from ..tools.cache_tools import CacheTools
from ..tools.tool_calling import ToolCalling
from ..tools.tool_calling import InstructorToolCalling, ToolCalling
from .cache.cache_handler import CacheHandler
@@ -11,15 +11,20 @@ class ToolsHandler:
last_used_tool: ToolCalling = {}
cache: CacheHandler
def __init__(self, cache: CacheHandler):
def __init__(self, cache: Optional[CacheHandler] = None):
"""Initialize the callback handler."""
self.cache = cache
self.last_used_tool = {}
def on_tool_use(self, calling: ToolCalling, output: str) -> Any:
def on_tool_use(
self,
calling: Union[ToolCalling, InstructorToolCalling],
output: str,
should_cache: bool = True,
) -> Any:
"""Run when tool ends running."""
self.last_used_tool = calling
if calling.tool_name != CacheTools().name:
if self.cache and should_cache and calling.tool_name != CacheTools().name:
self.cache.add(
tool=calling.tool_name,
input=calling.arguments,
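A minimal sketch (not part of this changeset) of how the updated handler might be exercised; it assumes the import paths shown in this diff and that `ToolCalling` is a simple model exposing `tool_name` and `arguments`, as it is read above:

```python
# Hypothetical usage of the updated ToolsHandler; ToolCalling's constructor
# arguments (tool_name, arguments) are assumed from how they are read above.
from crewai.agents.cache import CacheHandler
from crewai.agents.tools_handler import ToolsHandler
from crewai.tools.tool_calling import ToolCalling

handler = ToolsHandler(cache=CacheHandler())  # the cache argument is now optional
calling = ToolCalling(tool_name="search", arguments={"query": "weather in SF"})

# With should_cache=False the call is still recorded as the last used tool,
# but the result is deliberately kept out of the cache.
handler.on_tool_use(calling=calling, output="72F and sunny", should_cache=False)
print(handler.last_used_tool.tool_name)  # -> "search"
```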

View File

19
src/crewai/cli/cli.py Normal file
View File

@@ -0,0 +1,19 @@
import click
from .create_crew import create_crew
@click.group()
def crewai():
"""Top-level command group for crewai."""
@crewai.command()
@click.argument("project_name")
def create(project_name):
"""Create a new crew."""
create_crew(project_name)
if __name__ == "__main__":
crewai()
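The new command group can be exercised in-process with click's test runner; this is only a sketch, assuming the module is importable as `crewai.cli.cli`:

```python
# Run the new `create` command without installing a console script.
from click.testing import CliRunner

from crewai.cli.cli import crewai

runner = CliRunner()
result = runner.invoke(crewai, ["create", "my_project"])
print(result.exit_code)  # 0 on success
print(result.output)     # progress messages printed by create_crew
```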

View File

@@ -0,0 +1,80 @@
import os
from pathlib import Path
import click
def create_crew(name):
"""Create a new crew."""
folder_name = name.replace(" ", "_").replace("-", "_").lower()
class_name = name.replace("_", " ").replace("-", " ").title().replace(" ", "")
click.secho(f"Creating folder {folder_name}...", fg="green", bold=True)
if not os.path.exists(folder_name):
os.mkdir(folder_name)
os.mkdir(folder_name + "/tests")
os.mkdir(folder_name + "/src")
os.mkdir(folder_name + f"/src/{folder_name}")
os.mkdir(folder_name + f"/src/{folder_name}/tools")
os.mkdir(folder_name + f"/src/{folder_name}/config")
with open(folder_name + "/.env", "w") as file:
file.write("OPENAI_API_KEY=YOUR_API_KEY")
else:
click.secho(
f"\tFolder {folder_name} already exists. Please choose a different name.",
fg="red",
)
return
package_dir = Path(__file__).parent
templates_dir = package_dir / "templates"
# List of template files to copy
root_template_files = [
".gitignore",
"pyproject.toml",
"README.md",
]
tools_template_files = ["tools/custom_tool.py", "tools/__init__.py"]
config_template_files = ["config/agents.yaml", "config/tasks.yaml"]
src_template_files = ["__init__.py", "main.py", "crew.py"]
for file_name in root_template_files:
src_file = templates_dir / file_name
dst_file = Path(folder_name) / file_name
copy_template(src_file, dst_file, name, class_name, folder_name)
for file_name in src_template_files:
src_file = templates_dir / file_name
dst_file = Path(folder_name) / "src" / folder_name / file_name
copy_template(src_file, dst_file, name, class_name, folder_name)
for file_name in tools_template_files:
src_file = templates_dir / file_name
dst_file = Path(folder_name) / "src" / folder_name / file_name
copy_template(src_file, dst_file, name, class_name, folder_name)
for file_name in config_template_files:
src_file = templates_dir / file_name
dst_file = Path(folder_name) / "src" / folder_name / file_name
copy_template(src_file, dst_file, name, class_name, folder_name)
click.secho(f"Crew {name} created successfully!", fg="green", bold=True)
def copy_template(src, dst, name, class_name, folder_name):
"""Copy a file from src to dst."""
with open(src, "r") as file:
content = file.read()
# Interpolate the content
content = content.replace("{{name}}", name)
content = content.replace("{{crew_name}}", class_name)
content = content.replace("{{folder_name}}", folder_name)
# Write the interpolated content to the new file
with open(dst, "w") as file:
file.write(content)
click.secho(f" - Created {dst}", fg="green")

2
src/crewai/cli/templates/.gitignore vendored Normal file
View File

@@ -0,0 +1,2 @@
.env
__pycache__/

View File

@@ -0,0 +1,57 @@
# {{crew_name}} Crew
Welcome to the {{crew_name}} Crew project, powered by [crewAI](https://crewai.com). This template is designed to help you set up a multi-agent AI system with ease, leveraging the powerful and flexible framework provided by crewAI. Our goal is to enable your agents to collaborate effectively on complex tasks, maximizing their collective intelligence and capabilities.
## Installation
Ensure you have Python >=3.10 <=3.13 installed on your system. This project uses [Poetry](https://python-poetry.org/) for dependency management and package handling, offering a seamless setup and execution experience.
First, if you haven't already, install Poetry:
```bash
pip install poetry
```
Next, navigate to your project directory and install the dependencies:
1. First lock the dependencies and then install them:
```bash
poetry lock
```
```bash
poetry install
```
### Customizing
**Add your `OPENAI_API_KEY` into the `.env` file**
- Modify `src/{{folder_name}}/config/agents.yaml` to define your agents
- Modify `src/{{folder_name}}/config/tasks.yaml` to define your tasks
- Modify `src/{{folder_name}}/crew.py` to add your own logic, tools and specific args
- Modify `src/{{folder_name}}/main.py` to add custom inputs for your agents and tasks
## Running the Project
To kickstart your crew of AI agents and begin task execution, run this from the root folder of your project:
```bash
poetry run {{folder_name}}
```
This command initializes the {{name}} Crew, assembling the agents and assigning them tasks as defined in your configuration.
This example, unmodified, will create a `report.md` file in the root folder with the output of a research on LLMs.
## Understanding Your Crew
The {{name}} Crew is composed of multiple AI agents, each with unique roles, goals, and tools. These agents collaborate on a series of tasks, defined in `config/tasks.yaml`, leveraging their collective skills to achieve complex objectives. The `config/agents.yaml` file outlines the capabilities and configurations of each agent in your crew.
## Support
For support, questions, or feedback regarding the {{crew_name}} Crew or crewAI:
- Visit our [documentation](https://docs.crewai.com)
- Reach out to us through our [GitHub repository](https://github.com/joaomdmoura/crewai)
- [Join our Discord](https://discord.com/invite/X4JWnZnxPb)
- [Chat with our docs](https://chatg.pt/DWjSBZn)
Let's create wonders together with the power and simplicity of crewAI.

View File

View File

@@ -0,0 +1,19 @@
researcher:
role: >
{topic} Senior Data Researcher
goal: >
Uncover cutting-edge developments in {topic}
backstory: >
You're a seasoned researcher with a knack for uncovering the latest
developments in {topic}. Known for your ability to find the most relevant
information and present it in a clear and concise manner.
reporting_analyst:
role: >
{topic} Reporting Analyst
goal: >
Create detailed reports based on {topic} data analysis and research findings
backstory: >
You're a meticulous analyst with a keen eye for detail. You're known for
your ability to turn complex data into clear and concise reports, making
it easy for others to understand and act on the information you provide.

View File

@@ -0,0 +1,15 @@
research_task:
description: >
Conduct a thorough research about {topic}
Make sure you find any interesting and relevant information given
the current year is 2024.
expected_output: >
A list with 10 bullet points of the most relevant information about {topic}
reporting_task:
description: >
Review the context you got and expand each topic into a full section for a report.
Make sure the report is detailed and contains any and all relevant information.
expected_output: >
A fully fledged report with the main topics, each with a full section of information.
Formatted as markdown without '```'

View File

@@ -0,0 +1,55 @@
from crewai import Agent, Crew, Process, Task
from crewai.project import CrewBase, agent, crew, task
# Uncomment the following line to use an example of a custom tool
# from {{folder_name}}.tools.custom_tool import MyCustomTool
# Check our tools documentations for more information on how to use them
# from crewai_tools import SerperDevTool
@CrewBase
class {{crew_name}}Crew():
"""{{crew_name}} crew"""
agents_config = 'config/agents.yaml'
tasks_config = 'config/tasks.yaml'
@agent
def researcher(self) -> Agent:
return Agent(
config=self.agents_config['researcher'],
# tools=[MyCustomTool()], # Example of custom tool, loaded on the beginning of file
verbose=True
)
@agent
def reporting_analyst(self) -> Agent:
return Agent(
config=self.agents_config['reporting_analyst'],
verbose=True
)
@task
def research_task(self) -> Task:
return Task(
config=self.tasks_config['research_task'],
agent=self.researcher()
)
@task
def reporting_task(self) -> Task:
return Task(
config=self.tasks_config['reporting_task'],
agent=self.reporting_analyst(),
output_file='report.md'
)
@crew
def crew(self) -> Crew:
"""Creates the {{crew_name}} crew"""
return Crew(
agents=self.agents, # Automatically created by the @agent decorator
tasks=self.tasks, # Automatically created by the @task decorator
process=Process.sequential,
verbose=2,
# process=Process.hierarchical, # In case you wanna use that instead https://docs.crewai.com/how-to/Hierarchical/
)

View File

@@ -0,0 +1,10 @@
#!/usr/bin/env python
from {{folder_name}}.crew import {{crew_name}}Crew
def run():
# Replace with your inputs; they will automatically be interpolated into any tasks and agents information
inputs = {
'topic': 'AI LLMs'
}
{{crew_name}}Crew().crew().kickoff(inputs=inputs)

View File

@@ -0,0 +1,16 @@
[tool.poetry]
name = "{{folder_name}}"
version = "0.1.0"
description = "{{name}} using crewAI"
authors = ["Your Name <you@example.com>"]
[tool.poetry.dependencies]
python = ">=3.10,<=3.13"
crewai = {extras = ["tools"], version = "^0.28.8"}
[tool.poetry.scripts]
{{folder_name}} = "{{folder_name}}.main:run"
[build-system]
requires = ["poetry-core"]
build-backend = "poetry.core.masonry.api"

View File

@@ -0,0 +1,10 @@
from crewai_tools import BaseTool
class MyCustomTool(BaseTool):
name: str = "Name of my tool"
description: str = "Clear description for what this tool is useful for; your agent will need this information to use it."
def _run(self, argument: str) -> str:
# Implementation goes here
return "this is an example of a tool output, ignore it and move along."

View File

@@ -2,6 +2,7 @@ import json
import uuid
from typing import Any, Dict, List, Optional, Union
from langchain_core.callbacks import BaseCallbackHandler
from pydantic import (
UUID4,
BaseModel,
@@ -17,11 +18,14 @@ from pydantic_core import PydanticCustomError
from crewai.agent import Agent
from crewai.agents.cache import CacheHandler
from crewai.memory.entity.entity_memory import EntityMemory
from crewai.memory.long_term.long_term_memory import LongTermMemory
from crewai.memory.short_term.short_term_memory import ShortTermMemory
from crewai.process import Process
from crewai.task import Task
from crewai.telemtry import Telemetry
from crewai.telemetry import Telemetry
from crewai.tools.agent_tools import AgentTools
from crewai.utilities import I18N, Logger, RPMController
from crewai.utilities import I18N, Logger, RPMController, FileHandler
class Crew(BaseModel):
@@ -32,28 +36,45 @@ class Crew(BaseModel):
tasks: List of tasks assigned to the crew.
agents: List of agents part of this crew.
manager_llm: The language model that will run manager agent.
memory: Whether the crew should use memory to store memories of its execution.
manager_callbacks: The callback handlers to be executed by the manager agent when hierarchical process is used
cache: Whether the crew should use a cache to store the results of the tools execution.
function_calling_llm: The language model that will run the tool calling for all the agents.
process: The process flow that the crew will follow (e.g., sequential).
process: The process flow that the crew will follow (e.g., sequential, hierarchical).
verbose: Indicates the verbosity level for logging during execution.
config: Configuration settings for the crew.
max_rpm: Maximum number of requests per minute for the crew execution to be respected.
id: A unique identifier for the crew instance.
full_output: Whether the crew should return the full output with all tasks outputs or just the final output.
task_callback: Callback to be executed after each task for every agent's execution.
step_callback: Callback to be executed after each step for every agent's execution.
share_crew: Whether you want to share the complete crew information and execution with crewAI to make the library better, and allow us to train models.
inputs: Any inputs that the crew will use in tasks or agents; they will be interpolated into prompts.
"""
__hash__ = object.__hash__ # type: ignore
_execution_span: Any = PrivateAttr()
_rpm_controller: RPMController = PrivateAttr()
_logger: Logger = PrivateAttr()
_file_handler: FileHandler = PrivateAttr()
_cache_handler: InstanceOf[CacheHandler] = PrivateAttr(default=CacheHandler())
_short_term_memory: Optional[InstanceOf[ShortTermMemory]] = PrivateAttr()
_long_term_memory: Optional[InstanceOf[LongTermMemory]] = PrivateAttr()
_entity_memory: Optional[InstanceOf[EntityMemory]] = PrivateAttr()
cache: bool = Field(default=True)
model_config = ConfigDict(arbitrary_types_allowed=True)
tasks: List[Task] = Field(default_factory=list)
agents: List[Agent] = Field(default_factory=list)
process: Process = Field(default=Process.sequential)
verbose: Union[int, bool] = Field(default=0)
memory: bool = Field(
default=False,
description="Whether the crew should use memory to store memories of it's execution",
)
embedder: Optional[dict] = Field(
default={"provider": "openai"},
description="Configuration for the embedder to be used for the crew.",
)
usage_metrics: Optional[dict] = Field(
default=None,
description="Metrics for the LLM usage during all tasks execution.",
@@ -65,13 +86,13 @@ class Crew(BaseModel):
manager_llm: Optional[Any] = Field(
description="Language model that will run the agent.", default=None
)
manager_callbacks: Optional[List[InstanceOf[BaseCallbackHandler]]] = Field(
default=None,
description="A list of callback handlers to be executed by the manager agent when hierarchical process is used",
)
function_calling_llm: Optional[Any] = Field(
description="Language model that will run the agent.", default=None
)
inputs: Optional[Dict[str, Any]] = Field(
description="Any inputs that the crew will use in tasks or agents, it will be interpolated in promtps.",
default=None,
)
config: Optional[Union[Json, Dict[str, Any]]] = Field(default=None)
id: UUID4 = Field(default_factory=uuid.uuid4, frozen=True)
share_crew: Optional[bool] = Field(default=False)
@@ -79,6 +100,10 @@ class Crew(BaseModel):
default=None,
description="Callback to be executed after each step for all agents execution.",
)
task_callback: Optional[Any] = Field(
default=None,
description="Callback to be executed after each task for all agents execution.",
)
max_rpm: Optional[int] = Field(
default=None,
description="Maximum number of requests per minute for the crew execution to be respected.",
@@ -87,6 +112,14 @@ class Crew(BaseModel):
default="en",
description="Language used for the crew, defaults to English.",
)
language_file: str = Field(
default=None,
description="Path to the language file to be used for the crew.",
)
output_log_file: Optional[Union[bool, str]] = Field(
default=False,
description="output_log_file",
)
@field_validator("id", mode="before")
@classmethod
@@ -117,12 +150,23 @@ class Crew(BaseModel):
"""Set private attributes."""
self._cache_handler = CacheHandler()
self._logger = Logger(self.verbose)
if self.output_log_file:
self._file_handler = FileHandler(self.output_log_file)
self._rpm_controller = RPMController(max_rpm=self.max_rpm, logger=self._logger)
self._telemetry = Telemetry()
self._telemetry.set_tracer()
self._telemetry.crew_creation(self)
return self
@model_validator(mode="after")
def create_crew_memory(self) -> "Crew":
"""Set private attributes."""
if self.memory:
self._long_term_memory = LongTermMemory()
self._short_term_memory = ShortTermMemory(embedder_config=self.embedder)
self._entity_memory = EntityMemory(embedder_config=self.embedder)
return self
@model_validator(mode="after")
def check_manager_llm(self):
"""Validates that the language model is set when using hierarchical process."""
@@ -134,15 +178,6 @@ class Crew(BaseModel):
)
return self
@model_validator(mode="after")
def interpolate_inputs(self):
"""Interpolates the inputs in the tasks and agents."""
for task in self.tasks:
task.interpolate_inputs(self.inputs)
for agent in self.agents:
agent.interpolate_inputs(self.inputs)
return self
@model_validator(mode="after")
def check_config(self):
"""Validates that the crew is properly configured with agents and tasks."""
@@ -158,7 +193,8 @@ class Crew(BaseModel):
if self.agents:
for agent in self.agents:
agent.set_cache_handler(self._cache_handler)
if self.cache:
agent.set_cache_handler(self._cache_handler)
if self.max_rpm:
agent.set_rpm_controller(self._rpm_controller)
return self
@@ -191,19 +227,24 @@ class Crew(BaseModel):
del task_config["agent"]
return Task(**task_config, agent=task_agent)
def kickoff(self) -> str:
def kickoff(self, inputs: Optional[Dict[str, Any]] = {}) -> str:
"""Starts the crew to work on its assigned tasks."""
self._execution_span = self._telemetry.crew_execution_span(self)
self._interpolate_inputs(inputs)
self._set_tasks_callbacks()
i18n = I18N(language=self.language, language_file=self.language_file)
for agent in self.agents:
agent.i18n = I18N(language=self.language)
agent.i18n = i18n
agent.crew = self
if not agent.function_calling_llm:
agent.function_calling_llm = self.function_calling_llm
agent.create_agent_executor()
if not agent.step_callback:
agent.step_callback = self.step_callback
agent.create_agent_executor()
agent.create_agent_executor()
metrics = []
@@ -231,7 +272,7 @@ class Crew(BaseModel):
"""Executes tasks sequentially and returns the final output."""
task_output = ""
for task in self.tasks:
if task.agent is not None and task.agent.allow_delegation:
if task.agent.allow_delegation:
agents_for_delegation = [
agent for agent in self.agents if agent != task.agent
]
@@ -239,15 +280,25 @@ class Crew(BaseModel):
task.tools += AgentTools(agents=agents_for_delegation).tools()
role = task.agent.role if task.agent is not None else "None"
self._logger.log("debug", f"Working Agent: {role}")
self._logger.log("info", f"Starting Task: {task.description}")
self._logger.log("debug", f"== Working Agent: {role}", color="bold_purple")
self._logger.log(
"info", f"== Starting Task: {task.description}", color="bold_purple"
)
if self.output_log_file:
self._file_handler.log(
agent=role, task=task.description, status="started"
)
output = task.execute(context=task_output)
if not task.async_execution:
task_output = output
role = task.agent.role if task.agent is not None else "None"
self._logger.log("debug", f"[{role}] Task output: {task_output}\n\n")
self._logger.log("debug", f"== [{role}] Task output: {task_output}\n\n")
if self.output_log_file:
self._file_handler.log(agent=role, task=task_output, status="completed")
self._finish_execution(task_output)
return self._format_output(task_output)
@@ -255,7 +306,7 @@ class Crew(BaseModel):
def _run_hierarchical_process(self) -> str:
"""Creates and assigns a manager agent to make sure the crew completes the tasks."""
i18n = I18N(language=self.language)
i18n = I18N(language=self.language, language_file=self.language_file)
manager = Agent(
role=i18n.retrieve("hierarchical_manager_agent", "role"),
goal=i18n.retrieve("hierarchical_manager_agent", "goal"),
@@ -270,17 +321,35 @@ class Crew(BaseModel):
self._logger.log("debug", f"Working Agent: {manager.role}")
self._logger.log("info", f"Starting Task: {task.description}")
if self.output_log_file:
self._file_handler.log(
agent=manager.role, task=task.description, status="started"
)
task_output = task.execute(
agent=manager, context=task_output, tools=manager.tools
)
self._logger.log(
"debug", f"[{manager.role}] Task output: {task_output}\n\n"
)
self._logger.log("debug", f"[{manager.role}] Task output: {task_output}")
if self.output_log_file:
self._file_handler.log(
agent=manager.role, task=task_output, status="completed"
)
self._finish_execution(task_output)
return self._format_output(task_output), manager._token_process.get_summary()
def _set_tasks_callbacks(self) -> str:
"""Sets callback for every task suing task_callback"""
for task in self.tasks:
task.callback = self.task_callback
def _interpolate_inputs(self, inputs: Dict[str, Any]) -> str:
"""Interpolates the inputs in the tasks and agents."""
[task.interpolate_inputs(inputs) for task in self.tasks]
[agent.interpolate_inputs(inputs) for agent in self.agents]
def _format_output(self, output: str) -> str:
"""Formats the output of the crew execution."""
if self.full_output:
@@ -295,3 +364,6 @@ class Crew(BaseModel):
if self.max_rpm:
self._rpm_controller.stop_rpm_counter()
self._telemetry.end_crew(self, output)
def __repr__(self):
return f"Crew(id={self.id}, process={self.process}, number_of_agents={len(self.agents)}, number_of_tasks={len(self.tasks)})"

View File

@@ -0,0 +1,3 @@
from .entity.entity_memory import EntityMemory
from .long_term.long_term_memory import LongTermMemory
from .short_term.short_term_memory import ShortTermMemory

View File

View File

@@ -0,0 +1,63 @@
from crewai.memory import EntityMemory, LongTermMemory, ShortTermMemory
class ContextualMemory:
def __init__(self, stm: ShortTermMemory, ltm: LongTermMemory, em: EntityMemory):
self.stm = stm
self.ltm = ltm
self.em = em
def build_context_for_task(self, task, context) -> str:
"""
Automatically builds a minimal, highly relevant set of contextual information
for a given task.
"""
query = f"{task.description} {context}".strip()
if query == "":
return ""
context = []
context.append(self._fetch_ltm_context(task.description))
context.append(self._fetch_stm_context(query))
context.append(self._fetch_entity_context(query))
return "\n".join(filter(None, context))
def _fetch_stm_context(self, query) -> str:
"""
Fetches recent relevant insights from STM related to the task's description and expected_output,
formatted as bullet points.
"""
stm_results = self.stm.search(query)
formatted_results = "\n".join([f"- {result}" for result in stm_results])
return f"Recent Insights:\n{formatted_results}" if stm_results else ""
def _fetch_ltm_context(self, task) -> str:
"""
Fetches historical data or insights from LTM that are relevant to the task's description and expected_output,
formatted as bullet points.
"""
ltm_results = self.ltm.search(task, latest_n=2)
if not ltm_results:
return None
formatted_results = [
suggestion
for result in ltm_results
for suggestion in result["metadata"]["suggestions"]
]
formatted_results = list(dict.fromkeys(formatted_results))
formatted_results = "\n".join([f"- {result}" for result in formatted_results])
return f"Historical Data:\n{formatted_results}" if ltm_results else ""
def _fetch_entity_context(self, query) -> str:
"""
Fetches relevant entity information from Entity Memory related to the task's description and expected_output,
formatted as bullet points.
"""
em_results = self.em.search(query)
formatted_results = "\n".join(
[f"- {result['context']}" for result in em_results]
)
return f"Entities:\n{formatted_results}" if em_results else ""

View File

View File

@@ -0,0 +1,22 @@
from crewai.memory.entity.entity_memory_item import EntityMemoryItem
from crewai.memory.memory import Memory
from crewai.memory.storage.rag_storage import RAGStorage
class EntityMemory(Memory):
"""
EntityMemory class for managing structured information about entities
and their relationships using SQLite storage.
Inherits from the Memory class.
"""
def __init__(self, embedder_config=None):
storage = RAGStorage(
type="entities", allow_reset=False, embedder_config=embedder_config
)
super().__init__(storage)
def save(self, item: EntityMemoryItem) -> None:
"""Saves an entity item into the SQLite storage."""
data = f"{item.name}({item.type}): {item.description}"
super().save(data, item.metadata)

View File

@@ -0,0 +1,12 @@
class EntityMemoryItem:
def __init__(
self,
name: str,
type: str,
description: str,
relationships: str,
):
self.name = name
self.type = type
self.description = description
self.metadata = {"relationships": relationships}

View File

View File

@@ -0,0 +1,32 @@
from typing import Any, Dict
from crewai.memory.long_term.long_term_memory_item import LongTermMemoryItem
from crewai.memory.memory import Memory
from crewai.memory.storage.ltm_sqlite_storage import LTMSQLiteStorage
class LongTermMemory(Memory):
"""
LongTermMemory class for managing cross-run data related to the overall crew's
execution and performance.
Inherits from the Memory class and utilizes an instance of a class that
adheres to the Storage for data storage, specifically working with
LongTermMemoryItem instances.
"""
def __init__(self):
storage = LTMSQLiteStorage()
super().__init__(storage)
def save(self, item: LongTermMemoryItem) -> None:
metadata = item.metadata
metadata.update({"agent": item.agent, "expected_output": item.expected_output})
self.storage.save(
task_description=item.task,
score=metadata["quality"],
metadata=metadata,
datetime=item.datetime,
)
def search(self, task: str, latest_n: int) -> Dict[str, Any]:
return self.storage.load(task, latest_n)

View File

@@ -0,0 +1,19 @@
from typing import Any, Dict, Union
class LongTermMemoryItem:
def __init__(
self,
agent: str,
task: str,
expected_output: str,
datetime: str,
quality: Union[int, float] = None,
metadata: Dict[str, Any] = None,
):
self.task = task
self.agent = agent
self.quality = quality
self.datetime = datetime
self.expected_output = expected_output
self.metadata = metadata if metadata is not None else {}
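A sketch of the long-term memory round trip performed by the executor earlier in this diff:

```python
# Save an evaluated task result and fetch the latest entries for the same task.
import time

from crewai.memory import LongTermMemory
from crewai.memory.long_term.long_term_memory_item import LongTermMemoryItem

ltm = LongTermMemory()  # backed by LTMSQLiteStorage
item = LongTermMemoryItem(
    agent="Senior Data Researcher",
    task="Conduct a thorough research about AI LLMs.",
    expected_output="A list with 10 bullet points.",
    datetime=str(time.time()),
    quality=8,
    metadata={"suggestions": ["Cite primary sources."], "quality": 8},
)
ltm.save(item)  # save() reads the score from metadata["quality"]
print(ltm.search("Conduct a thorough research about AI LLMs.", latest_n=2))
```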

View File

@@ -0,0 +1,23 @@
from typing import Any, Dict
from crewai.memory.storage.interface import Storage
class Memory:
"""
Base class for memory, now supporting agent tags and generic metadata.
"""
def __init__(self, storage: Storage):
self.storage = storage
def save(
self, value: Any, metadata: Dict[str, Any] = None, agent: str = None
) -> None:
metadata = metadata or {}
if agent:
metadata["agent"] = agent
self.storage.save(value, metadata)
def search(self, query: str) -> Dict[str, Any]:
return self.storage.search(query)

View File

View File

@@ -0,0 +1,23 @@
from crewai.memory.memory import Memory
from crewai.memory.short_term.short_term_memory_item import ShortTermMemoryItem
from crewai.memory.storage.rag_storage import RAGStorage
class ShortTermMemory(Memory):
"""
ShortTermMemory class for managing transient data related to immediate tasks
and interactions.
Inherits from the Memory class and utilizes an instance of a class that
adheres to the Storage for data storage, specifically working with
MemoryItem instances.
"""
def __init__(self, embedder_config=None):
storage = RAGStorage(type="short_term", embedder_config=embedder_config)
super().__init__(storage)
def save(self, item: ShortTermMemoryItem) -> None:
super().save(item.data, item.metadata, item.agent)
def search(self, query: str, score_threshold: float = 0.35):
return self.storage.search(query=query, score_threshold=score_threshold)

View File

@@ -0,0 +1,8 @@
from typing import Any, Dict
class ShortTermMemoryItem:
def __init__(self, data: Any, agent: str, metadata: Dict[str, Any] = None):
self.data = data
self.agent = agent
self.metadata = metadata if metadata is not None else {}

View File

@@ -0,0 +1,11 @@
from typing import Any, Dict
class Storage:
"""Abstract base class defining the storage interface"""
def save(self, key: str, value: Any, metadata: Dict[str, Any]) -> None:
pass
def search(self, key: str) -> Dict[str, Any]:
pass
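A tiny in-memory implementation (purely illustrative) showing how the `Memory` base class tags entries with agent metadata through this interface. Note that `Memory.save` calls the storage with `(value, metadata)`, which is what the sketch implements:

```python
# Minimal list-backed Storage to exercise the Memory base class.
from typing import Any, Dict, List

from crewai.memory.memory import Memory
from crewai.memory.storage.interface import Storage


class ListStorage(Storage):
    def __init__(self) -> None:
        self.items: List[Dict[str, Any]] = []

    def save(self, value: Any, metadata: Dict[str, Any]) -> None:
        self.items.append({"value": value, "metadata": metadata})

    def search(self, query: str) -> List[Dict[str, Any]]:
        return [item for item in self.items if query in str(item["value"])]


memory = Memory(storage=ListStorage())
memory.save("Paris is the capital of France.", agent="Researcher")
print(memory.search("Paris"))  # metadata includes {"agent": "Researcher"}
```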

View File

@@ -0,0 +1,101 @@
import json
import sqlite3
from typing import Any, Dict, Union
from crewai.utilities import Printer
from crewai.utilities.paths import db_storage_path
class LTMSQLiteStorage:
"""
An updated SQLite storage class for LTM data storage.
"""
def __init__(self, db_path=f"{db_storage_path()}/long_term_memory_storage.db"):
self.db_path = db_path
self._printer: Printer = Printer()
self._initialize_db()
def _initialize_db(self):
"""
Initializes the SQLite database and creates LTM table
"""
try:
with sqlite3.connect(self.db_path) as conn:
cursor = conn.cursor()
cursor.execute(
"""
CREATE TABLE IF NOT EXISTS long_term_memories (
id INTEGER PRIMARY KEY AUTOINCREMENT,
task_description TEXT,
metadata TEXT,
datetime TEXT,
score REAL
)
"""
)
conn.commit()
except sqlite3.Error as e:
self._printer.print(
content=f"MEMORY ERROR: An error occurred during database initialization: {e}",
color="red",
)
def save(
self,
task_description: str,
metadata: Dict[str, Any],
datetime: str,
score: Union[int, float],
) -> None:
"""Saves data to the LTM table with error handling."""
try:
with sqlite3.connect(self.db_path) as conn:
cursor = conn.cursor()
cursor.execute(
"""
INSERT INTO long_term_memories (task_description, metadata, datetime, score)
VALUES (?, ?, ?, ?)
""",
(task_description, json.dumps(metadata), datetime, score),
)
conn.commit()
except sqlite3.Error as e:
self._printer.print(
content=f"MEMORY ERROR: An error occurred while saving to LTM: {e}",
color="red",
)
def load(self, task_description: str, latest_n: int) -> Dict[str, Any]:
"""Queries the LTM table by task description with error handling."""
try:
with sqlite3.connect(self.db_path) as conn:
cursor = conn.cursor()
cursor.execute(
f"""
SELECT metadata, datetime, score
FROM long_term_memories
WHERE task_description = ?
ORDER BY datetime DESC, score ASC
LIMIT {latest_n}
""",
(task_description,),
)
rows = cursor.fetchall()
if rows:
return [
{
"metadata": json.loads(row[0]),
"datetime": row[1],
"score": row[2],
}
for row in rows
]
except sqlite3.Error as e:
self._printer.print(
content=f"MEMORY ERROR: An error occurred while querying LTM: {e}",
color="red",
)
return None
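A sketch exercising the SQLite storage directly with a throwaway database file:

```python
# Round trip through the LTM SQLite table using a temporary database path.
import json
import os
import tempfile
import time

from crewai.memory.storage.ltm_sqlite_storage import LTMSQLiteStorage

db_path = os.path.join(tempfile.mkdtemp(), "long_term_memory_storage.db")
storage = LTMSQLiteStorage(db_path=db_path)
storage.save(
    task_description="Research AI LLMs",
    metadata={"suggestions": ["Add benchmarks"], "quality": 9},
    datetime=str(time.time()),
    score=9,
)
print(json.dumps(storage.load("Research AI LLMs", latest_n=1), indent=2))
```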

View File

@@ -0,0 +1,99 @@
import contextlib
import io
import logging
import os
from typing import Any, Dict
from embedchain import App
from embedchain.llm.base import BaseLlm
from embedchain.vectordb.chroma import InvalidDimensionException
from crewai.memory.storage.interface import Storage
from crewai.utilities.paths import db_storage_path
@contextlib.contextmanager
def suppress_logging(
logger_name="chromadb.segment.impl.vector.local_persistent_hnsw",
level=logging.ERROR,
):
logger = logging.getLogger(logger_name)
original_level = logger.getEffectiveLevel()
logger.setLevel(level)
with contextlib.redirect_stdout(io.StringIO()), contextlib.redirect_stderr(
io.StringIO()
), contextlib.suppress(UserWarning):
yield
logger.setLevel(original_level)
class FakeLLM(BaseLlm):
pass
class RAGStorage(Storage):
"""
Extends Storage to handle embeddings for memory entries, improving
search efficiency.
"""
def __init__(self, type, allow_reset=True, embedder_config=None):
super().__init__()
if (
not os.getenv("OPENAI_API_KEY")
and not os.getenv("OPENAI_BASE_URL") == "https://api.openai.com/v1"
):
os.environ["OPENAI_API_KEY"] = "fake"
config = {
"app": {
"config": {"name": type, "collect_metrics": False, "log_level": "ERROR"}
},
"chunker": {
"chunk_size": 5000,
"chunk_overlap": 100,
"length_function": "len",
"min_chunk_size": 150,
},
"vectordb": {
"provider": "chroma",
"config": {
"collection_name": type,
"dir": f"{db_storage_path()}/{type}",
"allow_reset": allow_reset,
},
},
}
if embedder_config:
config["embedder"] = embedder_config
self.app = App.from_config(config=config)
self.app.llm = FakeLLM()
if allow_reset:
self.app.reset()
def save(self, value: Any, metadata: Dict[str, Any]) -> None:
self._generate_embedding(value, metadata)
def search(
self,
query: str,
limit: int = 3,
filter: dict = None,
score_threshold: float = 0.35,
) -> Dict[str, Any]:
with suppress_logging():
try:
results = (
self.app.search(query, limit, where=filter)
if filter
else self.app.search(query, limit)
)
except InvalidDimensionException:
self.app.reset()
return []
return [r for r in results if r["metadata"]["score"] >= score_threshold]
def _generate_embedding(self, text: str, metadata: Dict[str, Any]) -> Any:
with suppress_logging():
self.app.add(text, data_type="text", metadata=metadata)
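A sketch of the embedchain-backed store used by short-term and entity memory; embedding requires a real `OPENAI_API_KEY` (or an `embedder_config` for another provider), and the hit fields shown are those this diff itself reads:

```python
# Save a snippet and run a filtered semantic search over it.
from crewai.memory.storage.rag_storage import RAGStorage

storage = RAGStorage(type="short_term")
storage.save(
    "The reporting analyst prefers markdown without code fences.",
    {"agent": "Reporting Analyst"},
)
for hit in storage.search("markdown preferences", limit=3, score_threshold=0.2):
    print(hit["context"], hit["metadata"]["score"])
```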

View File

@@ -0,0 +1,2 @@
from .annotations import agent, crew, task
from .crew_base import CrewBase

View File

@@ -0,0 +1,47 @@
tasks_order = []
def task(func):
func.is_task = True
tasks_order.append(func.__name__)
return func
def agent(func):
func.is_agent = True
return func
def crew(func):
def wrapper(self, *args, **kwargs):
instantiated_tasks = []
instantiated_agents = []
agent_roles = set()
# Iterate over tasks_order to maintain the defined order
for task_name in tasks_order:
possible_task = getattr(self, task_name)
if callable(possible_task):
task_instance = possible_task()
instantiated_tasks.append(task_instance)
if hasattr(task_instance, "agent"):
agent_instance = task_instance.agent
if agent_instance.role not in agent_roles:
instantiated_agents.append(agent_instance)
agent_roles.add(agent_instance.role)
# Instantiate any additional agents not already included by tasks
for attr_name in dir(self):
possible_agent = getattr(self, attr_name)
if callable(possible_agent) and hasattr(possible_agent, "is_agent"):
temp_agent_instance = possible_agent()
if temp_agent_instance.role not in agent_roles:
instantiated_agents.append(temp_agent_instance)
agent_roles.add(temp_agent_instance.role)
self.agents = instantiated_agents
self.tasks = instantiated_tasks
return func(self, *args, **kwargs)
return wrapper

View File

@@ -0,0 +1,45 @@
import inspect
import os
from pathlib import Path
import yaml
from dotenv import load_dotenv
load_dotenv()
def CrewBase(cls):
class WrappedClass(cls):
is_crew_class = True
base_directory = None
for frame_info in inspect.stack():
if "site-packages" not in frame_info.filename:
base_directory = Path(frame_info.filename).parent.resolve()
break
if base_directory is None:
raise Exception(
"Unable to dynamically determine the project's base directory, you must run it from the project's root directory."
)
original_agents_config_path = getattr(
cls, "agents_config", "config/agents.yaml"
)
original_tasks_config_path = getattr(cls, "tasks_config", "config/tasks.yaml")
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
self.agents_config = self.load_yaml(
os.path.join(self.base_directory, self.original_agents_config_path)
)
self.tasks_config = self.load_yaml(
os.path.join(self.base_directory, self.original_tasks_config_path)
)
@staticmethod
def load_yaml(config_path: str):
with open(config_path, "r") as file:
return yaml.safe_load(file)
return WrappedClass

View File

@@ -13,7 +13,23 @@ from crewai.utilities.pydantic_schema_parser import PydanticSchemaParser
class Task(BaseModel):
"""Class that represent a task to be executed."""
"""Class that represents a task to be executed.
Each task must have a description, an expected output and an agent responsible for execution.
Attributes:
agent: Agent responsible for task execution. Represents entity performing task.
async_execution: Boolean flag indicating asynchronous task execution.
callback: Function/object executed post task completion for additional actions.
config: Dictionary containing task-specific configuration parameters.
context: List of Task instances providing task context or input data.
description: Descriptive text detailing task's purpose and execution.
expected_output: Clear definition of expected task outcome.
output_file: File path for storing task output.
output_json: Pydantic model for structuring JSON output.
output_pydantic: Pydantic model for task output.
tools: List of tools/resources limited for task execution.
"""
class Config:
arbitrary_types_allowed = True
@@ -24,17 +40,21 @@ class Task(BaseModel):
delegations: int = 0
i18n: I18N = I18N()
thread: threading.Thread = None
prompt_context: Optional[str] = None
description: str = Field(description="Description of the actual task.")
expected_output: str = Field(
description="Clear definition of expected output for the task."
)
config: Optional[Dict[str, Any]] = Field(
description="Configuration for the agent",
default=None,
)
callback: Optional[Any] = Field(
description="Callback to be executed after the task is completed.", default=None
)
agent: Optional[Agent] = Field(
description="Agent responsible for execution the task.", default=None
)
expected_output: Optional[str] = Field(
description="Clear definition of expected output for the task.",
default=None,
)
context: Optional[List["Task"]] = Field(
description="Other tasks that will have their output used as context for this task.",
default=None,
@@ -67,6 +87,17 @@ class Task(BaseModel):
frozen=True,
description="Unique identifier for the object, not set by user.",
)
human_input: Optional[bool] = Field(
description="Whether the task should have a human review the final answer of the agent",
default=False,
)
_original_description: str | None = None
_original_expected_output: str | None = None
def __init__(__pydantic_self__, **data):
config = data.pop("config", {})
super().__init__(**config, **data)
@field_validator("id", mode="before")
@classmethod
@@ -76,6 +107,14 @@ class Task(BaseModel):
"may_not_set_field", "This field is not to be set by the user.", {}
)
@model_validator(mode="after")
def set_attributes_based_on_config(self) -> "Task":
"""Set attributes based on the agent configuration."""
if self.config:
for key, value in self.config.items():
setattr(self, key, value)
return self
@model_validator(mode="after")
def check_tools(self):
"""Check if the tools are set."""
@@ -122,6 +161,7 @@ class Task(BaseModel):
context.append(task.output.raw_output)
context = "\n".join(context)
self.prompt_context = context
tools = tools or self.tools
if self.async_execution:
@@ -166,19 +206,22 @@ class Task(BaseModel):
"""
tasks_slices = [self.description]
if self.expected_output:
output = self.i18n.slice("expected_output").format(
expected_output=self.expected_output
)
tasks_slices = [self.description, output]
output = self.i18n.slice("expected_output").format(
expected_output=self.expected_output
)
tasks_slices = [self.description, output]
return "\n".join(tasks_slices)
def interpolate_inputs(self, inputs: Dict[str, Any]) -> None:
"""Interpolate inputs into the task description and expected output."""
if self._original_description is None:
self._original_description = self.description
if self._original_expected_output is None:
self._original_expected_output = self.expected_output
if inputs:
self.description = self.description.format(**inputs)
if self.expected_output:
self.expected_output = self.expected_output.format(**inputs)
self.description = self._original_description.format(**inputs)
self.expected_output = self._original_expected_output.format(**inputs)
def increment_tools_errors(self) -> None:
"""Increment the tools errors counter."""
@@ -194,6 +237,16 @@ class Task(BaseModel):
if self.output_pydantic or self.output_json:
model = self.output_pydantic or self.output_json
# try to convert task_output directly to pydantic/json
try:
exported_result = model.model_validate_json(result)
if self.output_json:
return exported_result.model_dump()
return exported_result
except Exception:
pass
llm = self.agent.function_calling_llm or self.agent.llm
if not self._is_gpt(llm):
@@ -231,3 +284,6 @@ class Task(BaseModel):
with open(self.output_file, "w") as file:
file.write(result)
return None
def __repr__(self):
return f"Task(description={self.description}, expected_output={self.expected_output})"

View File

@@ -1,3 +1,4 @@
import asyncio
import json
import os
import platform
@@ -39,26 +40,39 @@ class Telemetry:
def __init__(self):
self.ready = False
self.trace_set = False
try:
telemetry_endpoint = "http://telemetry.crewai.com:4318"
telemetry_endpoint = "https://telemetry.crewai.com:4319"
self.resource = Resource(
attributes={SERVICE_NAME: "crewAI-telemetry"},
)
self.provider = TracerProvider(resource=self.resource)
processor = BatchSpanProcessor(
OTLPSpanExporter(endpoint=f"{telemetry_endpoint}/v1/traces", timeout=15)
OTLPSpanExporter(
endpoint=f"{telemetry_endpoint}/v1/traces",
timeout=30,
)
)
self.provider.add_span_processor(processor)
self.ready = True
except Exception:
pass
except BaseException as e:
if isinstance(
e,
(SystemExit, KeyboardInterrupt, GeneratorExit, asyncio.CancelledError),
):
raise # Re-raise the exception to not interfere with system signals
self.ready = False
def set_tracer(self):
if self.ready:
if self.ready and not self.trace_set:
try:
trace.set_tracer_provider(self.provider)
self.trace_set = True
except Exception:
pass
self.ready = False
self.trace_set = False
def crew_creation(self, crew):
"""Records the creation of a crew."""
@@ -75,6 +89,7 @@ class Telemetry:
self._add_attribute(span, "crew_id", str(crew.id))
self._add_attribute(span, "crew_process", crew.process)
self._add_attribute(span, "crew_language", crew.language)
self._add_attribute(span, "crew_memory", crew.memory)
self._add_attribute(span, "crew_number_of_tasks", len(crew.tasks))
self._add_attribute(span, "crew_number_of_agents", len(crew.agents))
self._add_attribute(
@@ -85,14 +100,15 @@ class Telemetry:
{
"id": str(agent.id),
"role": agent.role,
"memory_enabled?": agent.memory,
"verbose?": agent.verbose,
"max_iter": agent.max_iter,
"max_rpm": agent.max_rpm,
"i18n": agent.i18n.language,
"llm": json.dumps(self._safe_llm_attributes(agent.llm)),
"delegation_enabled?": agent.allow_delegation,
"tools_names": [tool.name for tool in agent.tools],
"tools_names": [
tool.name.casefold() for tool in agent.tools
],
}
for agent in crew.agents
]
@@ -107,7 +123,9 @@ class Telemetry:
"id": str(task.id),
"async_execution?": task.async_execution,
"agent_role": task.agent.role if task.agent else "None",
"tools_names": [tool.name for tool in task.tools],
"tools_names": [
tool.name.casefold() for tool in task.tools
],
}
for task in crew.tasks
]
@@ -129,11 +147,17 @@ class Telemetry:
try:
tracer = trace.get_tracer("crewai.telemetry")
span = tracer.start_span("Tool Repeated Usage")
self._add_attribute(
span,
"crewai_version",
pkg_resources.get_distribution("crewai").version,
)
self._add_attribute(span, "tool_name", tool_name)
self._add_attribute(span, "attempts", attempts)
self._add_attribute(
span, "llm", json.dumps(self._safe_llm_attributes(llm))
)
if llm:
self._add_attribute(
span, "llm", json.dumps(self._safe_llm_attributes(llm))
)
span.set_status(Status(StatusCode.OK))
span.end()
except Exception:
@@ -145,11 +169,17 @@ class Telemetry:
try:
tracer = trace.get_tracer("crewai.telemetry")
span = tracer.start_span("Tool Usage")
self._add_attribute(
span,
"crewai_version",
pkg_resources.get_distribution("crewai").version,
)
self._add_attribute(span, "tool_name", tool_name)
self._add_attribute(span, "attempts", attempts)
self._add_attribute(
span, "llm", json.dumps(self._safe_llm_attributes(llm))
)
if llm:
self._add_attribute(
span, "llm", json.dumps(self._safe_llm_attributes(llm))
)
span.set_status(Status(StatusCode.OK))
span.end()
except Exception:
@@ -162,8 +192,14 @@ class Telemetry:
tracer = trace.get_tracer("crewai.telemetry")
span = tracer.start_span("Tool Usage Error")
self._add_attribute(
span, "llm", json.dumps(self._safe_llm_attributes(llm))
span,
"crewai_version",
pkg_resources.get_distribution("crewai").version,
)
if llm:
self._add_attribute(
span, "llm", json.dumps(self._safe_llm_attributes(llm))
)
span.set_status(Status(StatusCode.OK))
span.end()
except Exception:
@@ -177,6 +213,11 @@ class Telemetry:
try:
tracer = trace.get_tracer("crewai.telemetry")
span = tracer.start_span("Crew Execution")
self._add_attribute(
span,
"crewai_version",
pkg_resources.get_distribution("crewai").version,
)
self._add_attribute(span, "crew_id", str(crew.id))
self._add_attribute(
span,
@@ -188,14 +229,15 @@ class Telemetry:
"role": agent.role,
"goal": agent.goal,
"backstory": agent.backstory,
"memory_enabled?": agent.memory,
"verbose?": agent.verbose,
"max_iter": agent.max_iter,
"max_rpm": agent.max_rpm,
"i18n": agent.i18n.language,
"llm": json.dumps(self._safe_llm_attributes(agent.llm)),
"delegation_enabled?": agent.allow_delegation,
"tools_names": [tool.name for tool in agent.tools],
"tools_names": [
tool.name.casefold() for tool in agent.tools
],
}
for agent in crew.agents
]
@@ -215,7 +257,9 @@ class Telemetry:
"context": [task.description for task in task.context]
if task.context
else "None",
"tools_names": [tool.name for tool in task.tools],
"tools_names": [
tool.name.casefold() for tool in task.tools
],
}
for task in crew.tasks
]
@@ -228,6 +272,11 @@ class Telemetry:
def end_crew(self, crew, output):
if (self.ready) and (crew.share_crew):
try:
self._add_attribute(
crew._execution_span,
"crewai_version",
pkg_resources.get_distribution("crewai").version,
)
self._add_attribute(crew._execution_span, "crew_output", output)
self._add_attribute(
crew._execution_span,
@@ -257,6 +306,8 @@ class Telemetry:
def _safe_llm_attributes(self, llm):
attributes = ["name", "model_name", "base_url", "model", "top_k", "temperature"]
safe_attributes = {k: v for k, v in vars(llm).items() if k in attributes}
safe_attributes["class"] = llm.__class__.__name__
return safe_attributes
if llm:
safe_attributes = {k: v for k, v in vars(llm).items() if k in attributes}
safe_attributes["class"] = llm.__class__.__name__
return safe_attributes
return {}

View File

@@ -15,29 +15,30 @@ class AgentTools(BaseModel):
i18n: I18N = Field(default=I18N(), description="Internationalization settings.")
def tools(self):
return [
tools = [
StructuredTool.from_function(
func=self.delegate_work,
name="Delegate work to co-worker",
description=self.i18n.tools("delegate_work").format(
coworkers="\n".join([f"- {agent.role}" for agent in self.agents])
coworkers=f"[{', '.join([f'{agent.role}' for agent in self.agents])}]"
),
),
StructuredTool.from_function(
func=self.ask_question,
name="Ask question to co-worker",
description=self.i18n.tools("ask_question").format(
coworkers="\n".join([f"- {agent.role}" for agent in self.agents])
coworkers=f"[{', '.join([f'{agent.role}' for agent in self.agents])}]"
),
),
]
return tools
def delegate_work(self, coworker: str, task: str, context: str):
"""Useful to delegate a specific task to a coworker."""
"""Useful to delegate a specific task to a coworker passing all necessary context and names."""
return self._execute(coworker, task, context)
def ask_question(self, coworker: str, question: str, context: str):
"""Useful to ask a question, opinion or take from a coworker."""
"""Useful to ask a question, opinion or take from a coworker passing all necessary context and names."""
return self._execute(coworker, question, context)
def _execute(self, agent, task, context):
@@ -46,18 +47,26 @@ class AgentTools(BaseModel):
agent = [
available_agent
for available_agent in self.agents
if available_agent.role.strip().lower() == agent.strip().lower()
if available_agent.role.casefold().strip() == agent.casefold().strip()
]
except:
return self.i18n.errors("agent_tool_unexsiting_coworker").format(
coworkers="\n".join([f"- {agent.role}" for agent in self.agents])
coworkers="\n".join(
[f"- {agent.role.casefold()}" for agent in self.agents]
)
)
if not agent:
return self.i18n.errors("agent_tool_unexsiting_coworker").format(
coworkers="\n".join([f"- {agent.role}" for agent in self.agents])
coworkers="\n".join(
[f"- {agent.role.casefold()}" for agent in self.agents]
)
)
agent = agent[0]
task = Task(description=task, agent=agent)
task = Task(
description=task,
agent=agent,
expected_output="Your best answer to your coworker asking you this, accounting for the context shared.",
)
return agent.execute_task(task, context)
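A sketch inspecting the delegation tools built here; actually invoking them would run the co-worker's LLM, so this only lists them. It assumes `AgentTools` accepts the `agents` list it iterates over above:

```python
# Build the delegation tools for two agents and inspect their descriptions.
from crewai import Agent
from crewai.tools.agent_tools import AgentTools

researcher = Agent(role="Researcher", goal="Find facts", backstory="Curious and thorough.")
analyst = Agent(role="Analyst", goal="Write reports", backstory="Meticulous writer.")

for tool in AgentTools(agents=[researcher, analyst]).tools():
    print(tool.name)
    print(tool.description)  # co-workers now rendered as "[Researcher, Analyst]"
```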

View File

@@ -1,3 +1,4 @@
import ast
from textwrap import dedent
from typing import Any, List, Union
@@ -5,7 +6,7 @@ from langchain_core.tools import BaseTool
from langchain_openai import ChatOpenAI
from crewai.agents.tools_handler import ToolsHandler
from crewai.telemtry import Telemetry
from crewai.telemetry import Telemetry
from crewai.tools.tool_calling import InstructorToolCalling, ToolCalling
from crewai.utilities import I18N, Converter, ConverterError, Printer
@@ -28,9 +29,9 @@ class ToolUsage:
task: Task being executed.
tools_handler: Tools handler that will manage the tool usage.
tools: List of tools available for the agent.
original_tools: Original tools available for the agent before being converted to BaseTool.
tools_description: Description of the tools available for the agent.
tools_names: Names of the tools available for the agent.
llm: Language model to be used for the tool usage.
function_calling_llm: Language model to be used for the tool usage.
"""
@@ -38,11 +39,12 @@ class ToolUsage:
self,
tools_handler: ToolsHandler,
tools: List[BaseTool],
original_tools: List[Any],
tools_description: str,
tools_names: str,
task: Any,
llm: Any,
function_calling_llm: Any,
action: Any,
) -> None:
self._i18n: I18N = I18N()
self._printer: Printer = Printer()
@@ -53,13 +55,17 @@ class ToolUsage:
self.tools_description = tools_description
self.tools_names = tools_names
self.tools_handler = tools_handler
self.original_tools = original_tools
self.tools = tools
self.task = task
self.llm = function_calling_llm or llm
self.action = action
self.function_calling_llm = function_calling_llm
# Set the maximum parsing attempts for bigger models
if (isinstance(self.llm, ChatOpenAI)) and (self.llm.openai_api_base == None):
if self.llm.model_name in OPENAI_BIGGER_MODELS:
if (isinstance(self.function_calling_llm, ChatOpenAI)) and (
self.function_calling_llm.openai_api_base == None
):
if self.function_calling_llm.model_name in OPENAI_BIGGER_MODELS:
self._max_parsing_attempts = 2
self._remember_format_after_usages = 4
@@ -82,7 +88,7 @@ class ToolUsage:
self.task.increment_tools_errors()
self._printer.print(content=f"\n\n{error}\n", color="red")
return error
return f"{self._use(tool_string=tool_string, tool=tool, calling=calling)}\n\n{self._i18n.slice('final_answer_format')}"
return f"{self._use(tool_string=tool_string, tool=tool, calling=calling)}"
def _use(
self,
@@ -93,23 +99,25 @@ class ToolUsage:
if self._check_tool_repeated_usage(calling=calling):
try:
result = self._i18n.errors("task_repeated_usage").format(
tool=calling.tool_name,
tool_input=", ".join(
[str(arg) for arg in calling.arguments.values()]
),
tool_names=self.tools_names
)
self._printer.print(content=f"\n\n{result}\n", color="yellow")
self._printer.print(content=f"\n\n{result}\n", color="purple")
self._telemetry.tool_repeated_usage(
llm=self.llm, tool_name=tool.name, attempts=self._run_attempts
llm=self.function_calling_llm,
tool_name=tool.name,
attempts=self._run_attempts,
)
result = self._format_result(result=result)
return result
except Exception:
self.task.increment_tools_errors()
result = self.tools_handler.cache.read(
tool=calling.tool_name, input=calling.arguments
)
result = None
if self.tools_handler.cache:
result = self.tools_handler.cache.read(
tool=calling.tool_name, input=calling.arguments
)
if not result:
try:
@@ -120,18 +128,32 @@ class ToolUsage:
self.task.increment_delegations()
if calling.arguments:
result = tool._run(**calling.arguments)
try:
acceptable_args = tool.args_schema.schema()["properties"].keys()
arguments = {
k: v
for k, v in calling.arguments.items()
if k in acceptable_args
}
result = tool._run(**arguments)
except Exception:
if tool.args_schema:
arguments = calling.arguments
result = tool._run(**arguments)
else:
arguments = calling.arguments.values()
result = tool._run(*arguments)
else:
result = tool._run()
except Exception as e:
self._run_attempts += 1
if self._run_attempts > self._max_parsing_attempts:
self._telemetry.tool_usage_error(llm=self.llm)
self._telemetry.tool_usage_error(llm=self.function_calling_llm)
error_message = self._i18n.errors("tool_usage_exception").format(
error=e
error=e, tool=tool.name, tool_inputs=tool.description
)
error = ToolUsageErrorException(
f'\n{error_message}.\nMoving one then. {self._i18n.slice("format").format(tool_names=self.tools_names)}'
f'\n{error_message}.\nMoving on then. {self._i18n.slice("format").format(tool_names=self.tools_names)}'
).message
self.task.increment_tools_errors()
self._printer.print(content=f"\n\n{error_message}\n", color="red")
@@ -139,11 +161,28 @@ class ToolUsage:
self.task.increment_tools_errors()
return self.use(calling=calling, tool_string=tool_string)
self.tools_handler.on_tool_use(calling=calling, output=result)
if self.tools_handler:
should_cache = True
original_tool = next(
(ot for ot in self.original_tools if ot.name == tool.name), None
)
if (
hasattr(original_tool, "cache_function")
and original_tool.cache_function
):
should_cache = original_tool.cache_function(
calling.arguments, result
)
self._printer.print(content=f"\n\n{result}\n", color="yellow")
self.tools_handler.on_tool_use(
calling=calling, output=result, should_cache=should_cache
)
self._printer.print(content=f"\n\n{result}\n", color="purple")
self._telemetry.tool_usage(
llm=self.llm, tool_name=tool.name, attempts=self._run_attempts
llm=self.function_calling_llm,
tool_name=tool.name,
attempts=self._run_attempts,
)
result = self._format_result(result=result)
return result
@@ -167,6 +206,8 @@ class ToolUsage:
def _check_tool_repeated_usage(
self, calling: Union[ToolCalling, InstructorToolCalling]
) -> None:
if not self.tools_handler:
return False
if last_tool_usage := self.tools_handler.last_used_tool:
return (calling.tool_name == last_tool_usage.tool_name) and (
calling.arguments == last_tool_usage.arguments
@@ -177,7 +218,14 @@ class ToolUsage:
if tool.name.lower().strip() == tool_name.lower().strip():
return tool
self.task.increment_tools_errors()
raise Exception(f"Tool '{tool_name}' not found.")
if tool_name and tool_name != "":
raise Exception(
f"Action '{tool_name}' don't exist, these are the only available Actions: {self.tools_description}"
)
else:
raise Exception(
f"I forgot the Action name, these are the only available Actions: {self.tools_description}"
)
def _render(self) -> str:
"""Render the tool name and description in plain text."""
@@ -205,34 +253,57 @@ class ToolUsage:
self, tool_string: str
) -> Union[ToolCalling, InstructorToolCalling]:
try:
model = InstructorToolCalling if self._is_gpt(self.llm) else ToolCalling
converter = Converter(
text=f"Only tools available:\n###\n{self._render()}\n\nReturn a valid schema for the tool, the tool name must be exactly equal one of the options, use this text to inform the valid ouput schema:\n\n{tool_string}```",
llm=self.llm,
model=model,
instructions=dedent(
"""\
The schema should have the following structure, only two keys:
- tool_name: str
- arguments: dict (with all arguments being passed)
if self.function_calling_llm:
model = (
InstructorToolCalling
if self._is_gpt(self.function_calling_llm)
else ToolCalling
)
converter = Converter(
text=f"Only tools available:\n###\n{self._render()}\n\nReturn a valid schema for the tool, the tool name must be exactly equal one of the options, use this text to inform the valid ouput schema:\n\n{tool_string}```",
llm=self.function_calling_llm,
model=model,
instructions=dedent(
"""\
The schema should have the following structure, only two keys:
- tool_name: str
- arguments: dict (with all arguments being passed)
Example:
{"tool_name": "tool name", "arguments": {"arg_name1": "value", "arg_name2": 2}}""",
),
max_attemps=1,
)
calling = converter.to_pydantic()
Example:
{"tool_name": "tool name", "arguments": {"arg_name1": "value", "arg_name2": 2}}""",
),
max_attemps=1,
)
calling = converter.to_pydantic()
if isinstance(calling, ConverterError):
raise calling
if isinstance(calling, ConverterError):
raise calling
else:
tool_name = self.action.tool
tool = self._select_tool(tool_name)
try:
arguments = ast.literal_eval(self.action.tool_input)
except Exception:
return ToolUsageErrorException(
f'{self._i18n.errors("tool_arguments_error")}'
)
if not isinstance(arguments, dict):
return ToolUsageErrorException(
f'{self._i18n.errors("tool_arguments_error")}'
)
calling = ToolCalling(
tool_name=tool.name,
arguments=arguments,
log=tool_string,
)
except Exception as e:
self._run_attempts += 1
if self._run_attempts > self._max_parsing_attempts:
self._telemetry.tool_usage_error(llm=self.llm)
self._telemetry.tool_usage_error(llm=self.function_calling_llm)
self.task.increment_tools_errors()
self._printer.print(content=f"\n\n{e}\n", color="red")
return ToolUsageErrorException(
f'{self._i18n.errors("tool_usage_error")}\n{self._i18n.slice("format").format(tool_names=self.tools_names)}'
f'{self._i18n.errors("tool_usage_error").format(error=e)}\nMoving on then. {self._i18n.slice("format").format(tool_names=self.tools_names)}'
)
return self._tool_calling(tool_string)
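A rough sketch of how the new per-tool cache_function hook is consulted (MyTool and the policy function are hypothetical; the hasattr check and the (arguments, result) call signature come from the hunk above):

def cache_only_successes(arguments: dict, result: str) -> bool:
    # Hypothetical policy: skip the cache when the tool output looks like an error.
    return "error" not in result.lower()

class MyTool:
    # Hypothetical tool; the code above only requires a name and, optionally,
    # a cache_function attribute on the original (unconverted) tool.
    name = "my_tool"
    cache_function = staticmethod(cache_only_successes)

# ToolUsage then decides whether to cache roughly like this:
original_tool = MyTool()
should_cache = True
if hasattr(original_tool, "cache_function") and original_tool.cache_function:
    should_cache = original_tool.cache_function({"query": "x"}, "ok: 42 results")
print(should_cache)  # -> True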

View File

@@ -1,27 +0,0 @@
{
"hierarchical_manager_agent": {
"role": "Διευθυντής Ομάδας",
"goal": "Διαχειρίσου την ομάδα σου για να ολοκληρώσει την εργασία με τον καλύτερο δυνατό τρόπο.",
"backstory": "Είσαι ένας έμπειρος διευθυντής με την ικανότητα να βγάζεις το καλύτερο από την ομάδα σου.\nΕίσαι επίσης γνωστός για την ικανότητά σου να αναθέτεις εργασίες στους σωστούς ανθρώπους και να κάνεις τις σωστές ερωτήσεις για να πάρεις το καλύτερο από την ομάδα σου.\nΑκόμα κι αν δεν εκτελείς εργασίες μόνος σου, έχεις πολλή εμπειρία στον τομέα, που σου επιτρέπει να αξιολογείς σωστά τη δουλειά των μελών της ομάδας σου."
},
"slices": {
"observation": "\nΠαρατήρηση",
"task": "Αρχή! Αυτό είναι ΠΟΛΥ σημαντικό για εσάς, η δουλειά σας εξαρτάται από αυτό!\n\nΤρέχουσα εργασία: {input}",
"memory": "Αυτή είναι η περίληψη της μέχρι τώρα δουλειάς σας:\n{chat_history}",
"role_playing": "Είσαι {role}.\n{backstory}\n\nΟ προσωπικός σας στόχος είναι: {goal}",
"tools": "ΕΡΓΑΛΕΙΑ:\n------\nΈχετε πρόσβαση μόνο στα ακόλουθα εργαλεία:\n\n{tools}\n\nΓια να χρησιμοποιήσετε ένα εργαλείο, χρησιμοποιήστε την ακόλουθη ακριβώς μορφή:\n\n```\nThought: Χρειάζεται να χρησιμοποιήσω κάποιο εργαλείο; Ναι\nΕνέργεια: το εργαλείο που θέλετε να χρησιμοποιήσετε, θα πρέπει να είναι ένα από τα [{tool_names}], μόνο το όνομα.\nΕισαγωγή ενέργειας: Οποιαδήποτε και όλες οι σχετικές πληροφορίες και το πλαίσιο χρήσης του εργαλείου\nΠαρατήρηση: το αποτέλεσμα της χρήσης του εργαλείου\n```\n\nΌταν έχετε μια απάντηση για την εργασία σας ή εάν δεν χρειάζεται να χρησιμοποιήσετε ένα εργαλείο, ΠΡΕΠΕΙ να χρησιμοποιήσετε τη μορφή:\n\n```\nΣκέψη: Πρέπει να χρησιμοποιήσω ένα εργαλείο ? Όχι\nΤελική απάντηση: [η απάντησή σας εδώ]```",
"task_with_context": "{task}\nΑυτό είναι το πλαίσιο με το οποίο εργάζεστε:\n{context}",
"expected_output": "Η τελική σας απάντηση πρέπει να είναι: {expected_output}"
},
"errors": {
"force_final_answer": "Στην πραγματικότητα, χρησιμοποίησα πάρα πολλά εργαλεία, οπότε θα σταματήσω τώρα και θα σας δώσω την απόλυτη ΚΑΛΥΤΕΡΗ τελική μου απάντηση ΤΩΡΑ, χρησιμοποιώντας την αναμενόμενη μορφή: ```\nΣκέφτηκα: Χρειάζεται να χρησιμοποιήσω ένα εργαλείο; Όχι\nΤελική απάντηση: [η απάντησή σας εδώ]```",
"agent_tool_unexsiting_coworker": "\nΣφάλμα κατά την εκτέλεση του εργαλείου. Ο συνάδελφος που αναφέρεται στο Action Input δεν βρέθηκε, πρέπει να είναι μία από τις ακόλουθες επιλογές:\n{coworkers}..\n",
"task_repeated_usage": "Μόλις χρησιμοποίησα το εργαλείο {tool} με είσοδο {tool_input}. Άρα το ξέρω ήδη και πρέπει να σταματήσω να το χρησιμοποιώ στη σειρά με την ίδια είσοδο. \nΘα μπορούσα να δώσω την τελική μου απάντηση εάν είμαι έτοιμος, χρησιμοποιώντας ακριβώς την αναμενόμενη μορφή παρακάτω: \n\nΣκέφτηκα: Χρειάζεται να χρησιμοποιήσω κάποιο εργαλείο; Όχι\nΤελική απάντηση: [η απάντησή σας εδώ]\n",
"tool_usage_error": "Φαίνεται ότι αντιμετωπίσαμε ένα απροσδόκητο σφάλμα κατά την προσπάθεια χρήσης του εργαλείου.",
"tool_usage_exception": "Φαίνεται ότι αντιμετωπίσαμε ένα απροσδόκητο σφάλμα κατά την προσπάθεια χρήσης του εργαλείου. Αυτό ήταν το σφάλμα: {error}"
},
"tools": {
"delegate_work": "Αναθέστε μια συγκεκριμένη εργασία σε έναν από τους παρακάτω συναδέλφους:\n{coworkers}.\nΗ εισαγωγή σε αυτό το εργαλείο θα πρέπει να είναι ο ρόλος του συναδέλφου, η εργασία που θέλετε να κάνει και ΟΛΟ το απαραίτητο πλαίσιο για την εκτέλεση της εργασίας, δεν γνωρίζουν τίποτα για την εργασία, γι' αυτό μοιραστείτε απολύτως όλα όσα γνωρίζετε, μην αναφέρετε πράγματα, αλλά εξηγήστε τα.",
"ask_question": "Κάντε μια συγκεκριμένη ερώτηση σε έναν από τους παρακάτω συναδέλφους:\n{coworkers}.\nΗ είσοδος σε αυτό το εργαλείο θα πρέπει να είναι ο ρόλος του συναδέλφου, η ερώτηση που έχετε για αυτόν και ΟΛΟ το απαραίτητο πλαίσιο για να κάνετε σωστά την ερώτηση, δεν γνωρίζουν τίποτα για την ερώτηση, γι' αυτό μοιραστείτε απολύτως όλα όσα γνωρίζετε, μην αναφέρετε πράγματα, αλλά εξηγήστε τα."
}
}

View File

@@ -5,26 +5,29 @@
"backstory": "You are a seasoned manager with a knack for getting the best out of your team.\nYou are also known for your ability to delegate work to the right people, and to ask the right questions to get the best out of your team.\nEven though you don't perform tasks by yourself, you have a lot of experience in the field, which allows you to properly evaluate the work of your team members."
},
"slices": {
"observation": "\nResult",
"task": "\n\nCurrent Task: {input}\n\n Begin! This is VERY important to you, your job depends on it!\n\n",
"memory": "This is the summary of your work so far:\n{chat_history}",
"role_playing": "You are {role}.\n{backstory}\n\nYour personal goal is: {goal}",
"tools": "I have access to ONLY the following tools, I can use only these, use one at time:\n\n{tools}\n\nTo use a tool I MUST use the exact following format:\n\n```\nUse Tool: the tool I wanna use, should be one of [{tool_names}] and absolute all relevant input and context for using the tool, I must use only one tool at once.\nResult: [result of the tool]\n```\n\nTo give my final answer I'll use the exact following format:\n\n```\nFinal Answer: [my expected final answer, entire content of my most complete final answer goes here]\n```\nI MUST use these formats, my job depends on it!",
"no_tools": "To give my final answer use the exact following format:\n\n```\nFinal Answer: [my expected final answer, entire content of my most complete final answer goes here]\n```\nI MUST use these formats, my job depends on it!",
"format": "I MUST either use a tool (use one at time) OR give my best final answer. To use a single tool I MUST use the exact following format:\n\n```\nUse Tool: the tool I wanna use, should be one of [{tool_names}] and absolute all relevant input and context for using the tool, I must use only one tool at once.\nResult: [result of the tool]\n```\n\nTo give my final answer use the exact following format:\n\n```\nFinal Answer: [my expected final answer, entire content of my most complete final answer goes here]\n```\nI MUST use these formats, my job depends on it!",
"final_answer_format": "If I don't need to use any more tools, I must make sure use the correct format to give my final answer:\n\n```Final Answer: [my expected final answer, entire content of my most complete final answer goes here]```\n I MUST use these formats, my job depends on it!",
"format_without_tools": "\nSorry, I didn't use the right format. I MUST either use a tool (among the available ones), OR give my best final answer.\nI just remembered the expected formats I must follow:\n\n```\nUse Tool: the tool I wanna use, and absolute all relevant input and context for using the tool, I must use only one tool at once.\nResult: [result of the tool]\n```\nOR\n```\nFinal Answer: [my expected final answer, entire content of my most complete final answer goes here]\n```\n",
"task_with_context": "{task}\nThis is the context you're working with:\n{context}",
"expected_output": "Your final answer must be: {expected_output}"
"observation": "\nObservation",
"task": "\nCurrent Task: {input}\n\nBegin! This is VERY important to you, use the tools available and give your best Final Answer, your job depends on it!\n\nThought: ",
"memory": "\n\n# Useful context: \n{memory}",
"role_playing": "You are {role}. {backstory}\nYour personal goal is: {goal}",
"tools": "\nYou ONLY have access to the following tools, and should NEVER make up tools that are not listed here:\n\n{tools}\n\nUse the following format:\n\nThought: you should always think about what to do\nAction: the action to take, only one name of [{tool_names}], just the name, exactly as it's written.\nAction Input: the input to the action, just a simple a python dictionary using \" to wrap keys and values.\nObservation: the result of the action\n\nOnce all necessary information is gathered:\n\nThought: I now know the final answer\nFinal Answer: the final answer to the original input question\n",
"no_tools": "To give my best complete final answer to the task use the exact following format:\n\nThought: I now can give a great answer\nFinal Answer: my best complete final answer to the task.\nYour final answer must be the great and the most complete as possible, it must be outcome described.\n\nI MUST use these formats, my job depends on it!",
"format": "I MUST either use a tool (use one at time) OR give my best final answer. To Use the following format:\n\nThought: you should always think about what to do\nAction: the action to take, should be one of [{tool_names}]\nAction Input: the input to the action, dictionary\nObservation: the result of the action\n... (this Thought/Action/Action Input/Observation can repeat N times)\nThought: I now can give a great answer\nFinal Answer: my best complete final answer to the task.\nYour final answer must be the great and the most complete as possible, it must be outcome described\n\n ",
"final_answer_format": "If you don't need to use any more tools, you must give your best complete final answer, make sure it satisfy the expect criteria, use the EXACT format below:\n\nThought: I now can give a great answer\nFinal Answer: my best complete final answer to the task.\n\n",
"format_without_tools": "\nSorry, I didn't use the right format. I MUST either use a tool (among the available ones), OR give my best final answer.\nI just remembered the expected format I must follow:\n\nQuestion: the input question you must answer\nThought: you should always think about what to do\nAction: the action to take, should be one of [{tool_names}]\nAction Input: the input to the action\nObservation: the result of the action\n... (this Thought/Action/Action Input/Observation can repeat N times)\nThought: I now can give a great answer\nFinal Answer: my best complete final answer to the task\nYour final answer must be the great and the most complete as possible, it must be outcome described\n\n",
"task_with_context": "{task}\n\nThis is the context you're working with:\n{context}",
"expected_output": "\nThis is the expect criteria for your final answer: {expected_output} \n you MUST return the actual complete content as the final answer, not a summary.",
"human_feedback": "You got human feedback on your work, re-avaluate it and give a new Final Answer when ready.\n {human_feedback}",
"getting_input": "This is the agent final answer: {final_answer}\nPlease provide a feedback: "
},
"errors": {
"unexpected_format": "\nSorry, I didn't use the expected format, I MUST either use a tool (use one at time) OR give my best final answer.\n",
"force_final_answer": "Actually, I used too many tools, so I'll stop now and give you my absolute BEST Final answer NOW, using exactly the expected format below:\n\n```\nFinal Answer: [my expected final answer, entire content of my most complete final answer goes here]\n```\nI MUST use these formats, my job depends on it!",
"force_final_answer": "Tool won't be use because it's time to give your final answer. Don't use tools and just your absolute BEST Final answer.",
"agent_tool_unexsiting_coworker": "\nError executing tool. Co-worker mentioned not found, it must to be one of the following options:\n{coworkers}\n",
"task_repeated_usage": "I already used the {tool} tool with input {tool_input}. So I already know that and must stop using it with same input. \nI could give my best complete final answer if I'm ready, using exactly the expected format below:\n\n```\nFinal Answer: [my expected final answer, entire content of my most complete final answer goes here]\n```\nI MUST use these formats, my job depends on it!",
"tool_usage_error": "It seems we encountered an unexpected error while trying to use the tool.",
"task_repeated_usage": "I tried reusing the same input, I must stop using this action input. I'll try something else instead.\n\n",
"tool_usage_error": "I encountered an error: {error}",
"tool_arguments_error": "Error: the Action Input is not a valid key, value dictionary.",
"wrong_tool_name": "You tried to use the tool {tool}, but it doesn't exist. You must use one of the following tools, use one at time: {tools}.",
"tool_usage_exception": "It seems we encountered an unexpected error while trying to use the tool. This was the error: {error}"
"tool_usage_exception": "I encountered an error while trying to use the tool. This was the error: {error}.\n Tool {tool} accepts these inputs: {tool_inputs}"
},
"tools": {
"delegate_work": "Delegate a specific task to one of the following co-workers: {coworkers}\nThe input to this tool should be the coworker, the task you want them to do, and ALL necessary context to exectue the task, they know nothing about the task, so share absolute everything you know, don't reference things but instead explain them.",

View File

@@ -5,3 +5,4 @@ from .logger import Logger
from .printer import Printer
from .prompts import Prompts
from .rpm_controller import RPMController
from .fileHandler import FileHandler

View File

@@ -78,8 +78,8 @@ class Converter(BaseModel):
)
parser = CrewPydanticOutputParser(pydantic_object=self.model)
new_prompt = HumanMessage(content=self.text) + SystemMessage(
content=self.instructions
new_prompt = SystemMessage(content=self.instructions) + HumanMessage(
content=self.text
)
return new_prompt | self.llm | parser

View File

@@ -0,0 +1,61 @@
from typing import List
from langchain_openai import ChatOpenAI
from pydantic import BaseModel, Field
from crewai.utilities import Converter
from crewai.utilities.pydantic_schema_parser import PydanticSchemaParser
class Entity(BaseModel):
name: str = Field(description="The name of the entity.")
type: str = Field(description="The type of the entity.")
description: str = Field(description="Description of the entity.")
relationships: List[str] = Field(description="Relationships of the entity.")
class TaskEvaluation(BaseModel):
suggestions: List[str] = Field(
description="Suggestions to improve future similar tasks."
)
quality: float = Field(
description="A score from 0 to 10 evaluating on completion, quality, and overall performance, all taking into account the task description, expected output, and the result of the task."
)
entities: List[Entity] = Field(
description="Entities extracted from the task output."
)
class TaskEvaluator:
def __init__(self, original_agent):
self.llm = original_agent.llm
def evaluate(self, task, ouput) -> TaskEvaluation:
evaluation_query = (
f"Assess the quality of the task completed based on the description, expected output, and actual results.\n\n"
f"Task Description:\n{task.description}\n\n"
f"Expected Output:\n{task.expected_output}\n\n"
f"Actual Output:\n{ouput}\n\n"
"Please provide:\n"
"- Bullet points suggestions to improve future similar tasks\n"
"- A score from 0 to 10 evaluating on completion, quality, and overall performance"
"- Entities extracted from the task output, if any, their type, description, and relationships"
)
instructions = "I'm gonna convert this raw text into valid JSON."
if not self._is_gpt(self.llm):
model_schema = PydanticSchemaParser(model=TaskEvaluation).get_schema()
instructions = f"{instructions}\n\nThe json should have the following structure, with the following keys:\n{model_schema}"
converter = Converter(
llm=self.llm,
text=evaluation_query,
model=TaskEvaluation,
instructions=instructions,
)
return converter.to_pydantic()
def _is_gpt(self, llm) -> bool:
return isinstance(llm, ChatOpenAI) and llm.openai_api_base == None
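A sketch of how this evaluator might be called (the SimpleNamespace stand-ins and the placeholder API key are hypothetical; in crewAI the real Agent and Task objects carry llm, description, and expected_output):

from types import SimpleNamespace
from langchain_openai import ChatOpenAI

fake_agent = SimpleNamespace(llm=ChatOpenAI(model_name="gpt-4", openai_api_key="sk-placeholder"))
fake_task = SimpleNamespace(
    description="Summarize the quarterly report",
    expected_output="Three bullet points with the key figures",
)

evaluator = TaskEvaluator(original_agent=fake_agent)
# Requires real OpenAI credentials at call time:
# evaluation = evaluator.evaluate(fake_task, "The report covers revenue, churn and costs...")
# evaluation.quality      -> float score from 0 to 10
# evaluation.suggestions  -> list of improvement suggestions
# evaluation.entities     -> list of extracted Entity models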

View File

@@ -0,0 +1,20 @@
import os
from datetime import datetime
class FileHandler:
"""take care of file operations, currently it only logs messages to a file"""
def __init__(self, file_path):
if isinstance(file_path, bool):
self._path = os.path.join(os.curdir, "logs.txt")
elif isinstance(file_path, str):
self._path = file_path
else:
raise ValueError("file_path must be either a boolean or a string.")
def log(self, **kwargs):
now = datetime.now().strftime("%Y-%m-%d %H:%M:%S")
message = f"{now}: ".join([f"{key}={value}" for key, value in kwargs.items()])
with open(self._path, "a") as file:
file.write(message + "\n")
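A small usage sketch of the new log-file support (the import path follows the utilities __init__ change shown further below; the keyword arguments are made-up examples):

from crewai.utilities import FileHandler

handler = FileHandler(True)              # bool True -> ./logs.txt in the current directory
# handler = FileHandler("crew_run.txt")  # a string is used as the path directly
handler.log(agent="Researcher", task="gather sources", status="started")
# Each call appends one line of key=value pairs to the file.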

View File

@@ -7,6 +7,10 @@ from pydantic import BaseModel, Field, PrivateAttr, ValidationError, model_valid
class I18N(BaseModel):
_translations: Dict[str, Dict[str, str]] = PrivateAttr()
language_file: Optional[str] = Field(
default=None,
description="Path to the translation file to load",
)
language: Optional[str] = Field(
default="en",
description="Language used to load translations",
@@ -16,13 +20,17 @@ class I18N(BaseModel):
def load_translation(self) -> "I18N":
"""Load translations from a JSON file based on the specified language."""
try:
dir_path = os.path.dirname(os.path.realpath(__file__))
prompts_path = os.path.join(
dir_path, f"../translations/{self.language}.json"
)
if self.language_file:
with open(self.language_file, "r") as f:
self._translations = json.load(f)
else:
dir_path = os.path.dirname(os.path.realpath(__file__))
prompts_path = os.path.join(
dir_path, f"../translations/{self.language}.json"
)
with open(prompts_path, "r") as f:
self._translations = json.load(f)
with open(prompts_path, "r") as f:
self._translations = json.load(f)
except FileNotFoundError:
raise ValidationError(
f"Translation file for language '{self.language}' not found."

View File

@@ -1,11 +1,16 @@
from crewai.utilities.printer import Printer
class Logger:
_printer = Printer()
def __init__(self, verbose_level=0):
verbose_level = (
2 if isinstance(verbose_level, bool) and verbose_level else verbose_level
)
self.verbose_level = verbose_level
def log(self, level, message):
def log(self, level, message, color="bold_green"):
level_map = {"debug": 1, "info": 2}
if self.verbose_level and level_map.get(level, 0) <= self.verbose_level:
print(f"[{level.upper()}]: {message}")
self._printer.print(f"[{level.upper()}]: {message}", color=color)

View File

@@ -0,0 +1,18 @@
from pathlib import Path
import appdirs
def db_storage_path():
app_name = get_project_directory_name()
app_author = "CrewAI"
data_dir = Path(appdirs.user_data_dir(app_name, app_author))
data_dir.mkdir(parents=True, exist_ok=True)
return data_dir
def get_project_directory_name():
cwd = Path.cwd()
project_directory_name = cwd.name
return project_directory_name
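This new helper derives a per-project data directory from the current working directory's name. A usage sketch, assuming it is called alongside the definitions above (the module's file name is not shown in this view):

# Assumed import path, for illustration only:
# from crewai.utilities.paths import db_storage_path

storage_dir = db_storage_path()
print(storage_dir)  # e.g. ~/.local/share/<project-dir-name> on Linux, created if missing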

View File

@@ -1,14 +1,24 @@
class Printer:
def print(self, content: str, color: str):
if color == "yellow":
self._print_yellow(content)
if color == "purple":
self._print_purple(content)
elif color == "red":
self._print_red(content)
elif color == "bold_green":
self._print_bold_green(content)
elif color == "bold_purple":
self._print_bold_purple(content)
else:
print(content)
def _print_yellow(self, content):
print("\033[93m {}\033[00m".format(content))
def _print_bold_purple(self, content):
print("\033[1m\033[95m {}\033[00m".format(content))
def _print_bold_green(self, content):
print("\033[1m\033[92m {}\033[00m".format(content))
def _print_purple(self, content):
print("\033[95m {}\033[00m".format(content))
def _print_red(self, content):
print("\033[91m {}\033[00m".format(content))

View File

@@ -13,16 +13,6 @@ class Prompts(BaseModel):
tools: list[Any] = Field(default=[])
SCRATCHPAD_SLICE: ClassVar[str] = "\n{agent_scratchpad}"
def task_execution_with_memory(self) -> BasePromptTemplate:
"""Generate a prompt for task execution with memory components."""
slices = ["role_playing"]
if len(self.tools) > 0:
slices.append("tools")
else:
slices.append("no_tools")
slices.extend(["memory", "task"])
return self._build_prompt(slices)
def task_execution_without_tools(self) -> BasePromptTemplate:
"""Generate a prompt for task execution without tools components."""
return self._build_prompt(["role_playing", "task"])

Some files were not shown because too many files have changed in this diff.