Compare commits


77 Commits

Author SHA1 Message Date
theCyberTech
58af5c08f9 docs: add gh_token documentation to GithubSearchTool 2024-11-20 19:23:09 +08:00
Lorenze Jay
3dc02310b6 upgrade chroma and adjust embedder function generator (#1607)
* upgrade chroma and adjust embedder function generator

* >= version

* linted
2024-11-14 14:13:12 -08:00
Dev Khant
e70bc94ab6 Add support for retrieving user preferences and memories using Mem0 (#1209)
* Integrate Mem0

* Update src/crewai/memory/contextual/contextual_memory.py

Co-authored-by: Deshraj Yadav <deshraj@gatech.edu>

* pending commit for _fetch_user_memories

* update poetry.lock

* fixes mypy issues

* fix mypy checks

* New fixes for user_id

* remove memory_provider

* handle memory_provider

* checks for memory_config

* add mem0 to dependency

* Update pyproject.toml

Co-authored-by: Deshraj Yadav <deshraj@gatech.edu>

* update docs

* update doc

* bump mem0 version

* fix api error msg and mypy issue

* mypy fix

* resolve comments

* fix memory usage without mem0

* mem0 version bump

* lazy import mem0

---------

Co-authored-by: Deshraj Yadav <deshraj@gatech.edu>
Co-authored-by: João Moura <joaomdmoura@gmail.com>
Co-authored-by: Brandon Hancock (bhancock_ai) <109994880+bhancockio@users.noreply.github.com>
2024-11-14 10:59:24 -08:00
Eduardo Chiarotti
9285ebf8a2 feat: Reduce level for Bandit and fix code to adapt (#1604) 2024-11-14 13:12:35 -03:00
Thiago Moretto
4ca785eb15 Merge pull request #1597 from crewAIInc/tm-fix-crew-train-test
Fix crew_train_success test
2024-11-13 10:52:49 -03:00
Thiago Moretto
c57cbd8591 Fix crew_train_success test 2024-11-13 10:47:49 -03:00
Thiago Moretto
7fb1289205 Merge pull request #1596 from crewAIInc/tm-recording-cached-prompt-tokens
Add cached prompt tokens info on usage metrics
2024-11-13 10:37:29 -03:00
Thiago Moretto
f02681ae01 Merge branch 'main' into tm-recording-cached-prompt-tokens 2024-11-13 10:19:02 -03:00
Thiago Moretto
c725105b1f do not include cached on total 2024-11-13 10:18:30 -03:00
Thiago Moretto
36aa4bcb46 Cached prompt tokens on usage metrics 2024-11-13 10:16:30 -03:00
Eduardo Chiarotti
b98f8f9fe1 fix: Step callback issue (#1595)
* fix: Step callback issue

* fix: Add empty thought since its required
2024-11-13 10:07:28 -03:00
João Moura
bcfcf88e78 removing prints 2024-11-12 18:37:57 -03:00
Thiago Moretto
fd0de3a47e Merge pull request #1588 from crewAIInc/tm-workaround-litellm-bug
fixing LiteLLM callback replacement bug
2024-11-12 17:19:01 -03:00
Thiago Moretto
c7b9ae02fd fix test_agent_usage_metrics_are_captured_for_hierarchical_process 2024-11-12 16:43:43 -03:00
Thiago Moretto
4afb022572 fix LiteLLM callback replacement 2024-11-12 15:04:57 -03:00
João Moura
8610faef22 add missing init 2024-11-11 02:29:40 -03:00
João Moura
6d677541c7 preparing new version 2024-11-11 00:03:52 -03:00
João Moura
49220ec163 preparing new version 2024-11-10 23:46:38 -03:00
João Moura
40a676b7ac curring new version 2024-11-10 21:16:36 -03:00
João Moura
50bf146d1e preparing new version 2024-11-10 20:47:56 -03:00
João Moura
40d378abfb updating LLM docs 2024-11-10 11:36:03 -03:00
João Moura
1b09b085a7 preparing new version 2024-11-10 11:00:16 -03:00
João Moura
9f2acfe91f making sure we don't check for agents that were not used in the crew 2024-11-06 23:07:23 -03:00
Brandon Hancock (bhancock_ai)
e856359e23 fix missing config (#1557) 2024-11-05 12:07:29 -05:00
Brandon Hancock (bhancock_ai)
faa231e278 Fix flows to support cycles and added in test (#1556) 2024-11-05 12:02:54 -05:00
Brandon Hancock (bhancock_ai)
3d44795476 Feat/watson in cli (#1535)
* getting cli and .env to work together for different models

* support new models

* clean up prints

* Add support for cerebras

* Fix watson keys
2024-11-05 12:01:57 -05:00
Tony Kipkemboi
f50e709985 docs update (#1558)
* add llm providers accordion group

* fix numbering

* Fix directory tree & add llms to accordion

* update crewai enterprise link in docs
2024-11-05 11:26:19 -05:00
Brandon Hancock (bhancock_ai)
d70c542547 Raise an error if an LLM doesnt return a response (#1548) 2024-11-04 11:42:38 -05:00
Gui Vieira
57201fb856 Increase providers fetching timeout 2024-11-01 18:54:40 -03:00
Brandon Hancock (bhancock_ai)
9b142e580b add inputs to flows (#1553)
* add inputs to flows

* fix flows lint
2024-11-01 14:37:02 -07:00
Brandon Hancock (bhancock_ai)
3878daffd6 Feat/ibm memory (#1549)
* Everything looks like its working. Waiting for lorenze review.

* Update docs as well.

* clean up for PR
2024-11-01 16:42:46 -04:00
Tony Kipkemboi
34954e6f74 Update docs (#1550)
* add llm providers accordion group

* fix numbering

* Fix directory tree & add llms to accordion
2024-11-01 15:58:36 -04:00
C0deZ
e66a135d5d refactor: Move BaseTool to main package and centralize tool description generation (#1514)
* move base_tool to main package and consolidate tool desscription generation

* update import path

* update tests

* update doc

* add base_tool test

* migrate agent delegation tools to use BaseTool

* update tests

* update import path for tool

* fix lint

* update param signature

* add from_langchain to BaseTool for backwards support of langchain tools

* fix the case where StructuredTool doesn't have func

---------

Co-authored-by: c0dez <li@vitablehealth.com>
Co-authored-by: Brandon Hancock (bhancock_ai) <109994880+bhancockio@users.noreply.github.com>
2024-11-01 12:30:48 -04:00
Vini Brasil
66698503b8 Replace .netrc with uv environment variables (#1541)
This commit replaces .netrc with uv environment variables for installing
tools from private repositories. To store credentials, I created a new
and reusable settings file for the CLI in
`$HOME/.config/crewai/settings.json`.

The issue with .netrc files is that they are applied system-wide and are
scoped by hostname, meaning we can't differentiate tool repositories
requests from regular requests to CrewAI's API.
2024-10-31 15:00:58 -03:00
Tony Kipkemboi
ec2967c362 Add llm providers accordion group (#1534)
* add llm providers accordion group

* fix numbering
2024-10-30 21:56:13 -04:00
Robin Wang
4ae07468f3 Enhance log storage to support more data types (#1530) 2024-10-30 16:45:19 -04:00
Brandon Hancock (bhancock_ai)
6193eb13fa Disable telemetry explicitly (#1536)
* Disable telemetry explicitly

* fix linting

* revert parts to og
2024-10-30 16:37:21 -04:00
Rip&Tear
55cd15bfc6 Added security.md file (#1533) 2024-10-30 12:07:38 -04:00
João Moura
5f46ff8836 prepare new version 2024-10-30 00:07:46 -03:00
Brandon Hancock (bhancock_ai)
cdfbd5f62b Bugfix/flows with multiple starts plus ands breaking (#1531)
* bugfix/flows-with-multiple-starts-plus-ands-breaking

* fix user found issue

* remove prints
2024-10-29 19:36:53 -03:00
Brandon Hancock (bhancock_ai)
b43f3987ec Update flows cli to allow you to easily add additional crews to a flow (#1525)
* Update flows cli to allow you to easily add additional crews to a flow

* fix failing test

* adding more error logs to test thats failing

* try again
2024-10-29 11:53:48 -04:00
Tony Kipkemboi
240527d06c Merge pull request #1519 from crewAIInc/feat/improve-tooling-docs
Improve tooling and flow docs
2024-10-29 11:05:17 -04:00
Brandon Hancock (bhancock_ai)
276cb7b7e8 Merge branch 'main' into feat/improve-tooling-docs 2024-10-29 10:41:04 -04:00
Brandon Hancock (bhancock_ai)
048aa6cbcc Update flows.mdx - Fix link 2024-10-29 10:40:49 -04:00
Brandon Hancock
fa9949b9d0 Update flow docs to talk about self evaluation example 2024-10-28 12:18:03 -05:00
Brandon Hancock
500072d855 Update flow docs to talk about self evaluation example 2024-10-28 12:17:44 -05:00
Brandon Hancock
04bcfa6e2d Improve tooling docs 2024-10-28 09:40:56 -05:00
Brandon Hancock (bhancock_ai)
26afee9bed improve tool text description and args (#1512)
* improve tool text descriptoin and args

* fix lint

* Drop print

* add back in docstring
2024-10-25 18:42:55 -04:00
Vini Brasil
f29f4abdd7 Forward install command options to uv sync (#1510)
Allow passing additional options from `crewai install` directly to
`uv sync`. This enables commands like `crewai install --locked` to work
as expected by forwarding all flags and options to the underlying uv
command.
2024-10-25 11:20:41 -03:00
Eduardo Chiarotti
4589d6fe9d feat: add tomli so we can support 3.10 (#1506)
* feat: add tomli so we can support 3.10

* feat: add validation for poetry data
2024-10-25 10:33:21 -03:00
Brandon Hancock (bhancock_ai)
201e652fa2 update plot command (#1504) 2024-10-24 14:44:30 -04:00
João Moura
8bc07e6071 new version 2024-10-23 18:10:37 -03:00
João Moura
6baaad045a new version 2024-10-23 18:08:49 -03:00
João Moura
74c1703310 updating crewai version 2024-10-23 17:58:58 -03:00
Brandon Hancock (bhancock_ai)
a921828e51 Fix memory imports for embedding functions (#1497) 2024-10-23 11:21:27 -04:00
Brandon Hancock (bhancock_ai)
e1fd83e6a7 support unsafe code execution. add in docker install and running checks. (#1496)
* support unsafe code execution. add in docker install and running checks.

* Update return type
2024-10-23 11:01:00 -04:00
Maicon Peixinho
7d68e287cc chore(readme-fix): fixing step for 'running tests' in the contribution section (#1490)
Co-authored-by: Eduardo Chiarotti <dudumelgaco@hotmail.com>
2024-10-23 11:38:41 -03:00
Rip&Tear
f39a975e20 fix/fixed missing API prompt + CLI docs update (#1464)
* updated CLI to allow for submitting API keys

* updated click prompt to remove default number

* removed all unnecessary comments

* feat: implement crew creation CLI command

- refactor code to multiple functions
- Added ability for users to select provider and model when uing crewai create command and ave API key to .env

* refactered select_choice function for early return

* refactored  select_provider to have an ealry return

* cleanup of comments

* refactor/Move functions into utils file, added new provider file and migrated fucntions thre, new constants file + general function refactor

* small comment cleanup

* fix unnecessary deps

* Added docs for new CLI provider + fixed missing API prompt

* Minor doc updates

* allow user to bypass api key entry + incorect number selected logic + ruff formatting

* ruff updates

* Fix spelling mistake

---------

Co-authored-by: Brandon Hancock (bhancock_ai) <109994880+bhancockio@users.noreply.github.com>
Co-authored-by: Brandon Hancock <brandon@brandonhancock.io>
2024-10-23 09:41:14 -04:00
João Moura
b8a3c29745 preparing new verison 2024-10-23 05:34:34 -03:00
Brandon Hancock (bhancock_ai)
9cd4ff05c9 use copy to split testing and training on crews (#1491)
* use copy to split testing and training on crews

* make tests handle new copy functionality on train and test

* fix last test

* fix test
2024-10-22 21:31:44 -04:00
Lorenze Jay
4687779702 ensure original embedding config works (#1476)
* ensure original embedding config works

* some fixes

* raise error on unsupported provider

* WIP: brandons notes

* fixes

* rm prints

* fixed docs

* fixed run types

* updates to add more docs and correct imports with huggingface embedding server enabled

---------

Co-authored-by: Brandon Hancock <brandon@brandonhancock.io>
2024-10-22 12:30:30 -07:00
Tony Kipkemboi
8731915330 Add Cerebras LLM example configuration to LLM docs (#1488) 2024-10-22 13:41:29 -04:00
Brandon Hancock (bhancock_ai)
093259389e simplify flow (#1482)
* simplify flow

* propogate changes

* Update docs and scripts

* Template fix

* make flow kickoff sync

* Clean up docs
2024-10-21 19:32:55 -04:00
Brandon Hancock (bhancock_ai)
6bcb3d1080 drop unneccesary tests (#1484)
* drop uneccesary tests

* fix linting
2024-10-21 15:26:30 -04:00
Sam
71a217b210 fix(docs): typo (#1470) 2024-10-21 11:49:33 -04:00
Vini Brasil
b98256e434 Adapt crewai tool install <tool> to uv (#1481)
This commit updates the tool install comamnd to uv's new custom index
feature.

Related: https://github.com/astral-sh/uv/pull/7746/
2024-10-21 09:24:03 -03:00
João Moura
40f81aecf5 new verison 2024-10-18 17:57:37 -03:00
João Moura
d1737a96fb cutting new version 2024-10-18 17:57:02 -03:00
Brandon Hancock (bhancock_ai)
84f48c465d fix tool calling issue (#1467)
* fix tool calling issue

* Update tool type check

* Drop print
2024-10-18 15:56:56 -03:00
Eduardo Chiarotti
60efcad481 feat: add poetry.lock to uv migration (#1468) 2024-10-18 15:45:01 -03:00
João Moura
53a9f107ca Avoiding exceptions 2024-10-18 08:32:06 -03:00
João Moura
6fa2b89831 fix tasks and agents ordering 2024-10-18 08:06:38 -03:00
João Moura
d72ebb9bb8 fixing annotations 2024-10-18 07:46:30 -03:00
João Moura
81ae07abdb preparing new version 2024-10-18 07:13:17 -03:00
Lorenze Jay
6d20ba70a1 Feat/memory base (#1444)
* byom - short/entity memory

* better

* rm uneeded

* fix text

* use context

* rm dep and sync

* type check fix

* fixed test using new cassete

* fixing types

* fixed types

* fix types

* fixed types

* fixing types

* fix type

* cassette update

* just mock the return of short term mem

* remove print

* try catch block

* added docs

* dding error handling here
2024-10-17 13:19:33 -03:00
Rok Benko
67f55bae2c Fix incorrect parameter name in Vision tool docs page (#1461)
Co-authored-by: João Moura <joaomdmoura@gmail.com>
2024-10-17 13:18:31 -03:00
Rip&Tear
9b59de1720 feat/updated CLI to allow for model selection & submitting API keys (#1430)
* updated CLI to allow for submitting API keys

* updated click prompt to remove default number

* removed all unnecessary comments

* feat: implement crew creation CLI command

- refactor code to multiple functions
- Added ability for users to select provider and model when uing crewai create command and ave API key to .env

* refactered select_choice function for early return

* refactored  select_provider to have an ealry return

* cleanup of comments

* refactor/Move functions into utils file, added new provider file and migrated fucntions thre, new constants file + general function refactor

* small comment cleanup

* fix unnecessary deps

---------

Co-authored-by: Brandon Hancock (bhancock_ai) <109994880+bhancockio@users.noreply.github.com>
Co-authored-by: Brandon Hancock <brandon@brandonhancock.io>
2024-10-17 10:05:07 -04:00
111 changed files with 5588 additions and 2833 deletions

.github/security.md (new file, 19 lines)

@@ -0,0 +1,19 @@
CrewAI takes the security of our software products and services seriously, which includes all source code repositories managed through our GitHub organization.
If you believe you have found a security vulnerability in any CrewAI product or service, please report it to us as described below.
## Reporting a Vulnerability
Please do not report security vulnerabilities through public GitHub issues.
To report a vulnerability, please email us at security@crewai.com.
Please include the requested information listed below so that we can triage your report more quickly
- Type of issue (e.g. SQL injection, cross-site scripting, etc.)
- Full paths of source file(s) related to the manifestation of the issue
- The location of the affected source code (tag/branch/commit or direct URL)
- Any special configuration required to reproduce the issue
- Step-by-step instructions to reproduce the issue (please include screenshots if needed)
- Proof-of-concept or exploit code (if possible)
- Impact of the issue, including how an attacker might exploit the issue
Once we have received your report, we will respond to you at the email address you provide. If the issue is confirmed, we will release a patch as soon as possible depending on the complexity of the issue.
At this time, we are not offering a bug bounty program. Any rewards will be at our discretion.

@@ -19,5 +19,5 @@ jobs:
         run: pip install bandit
       - name: Run Bandit
-        run: bandit -c pyproject.toml -r src/ -lll
+        run: bandit -c pyproject.toml -r src/ -ll

@@ -351,7 +351,7 @@ pre-commit install
 ### Running Tests
 ```bash
-uvx pytest
+uv run pytest .
 ```
 ### Running static type checks

@@ -31,16 +31,17 @@ Think of an agent as a member of a team, with specific skills and a particular j
 | **Max RPM** *(optional)* | `max_rpm` | Max RPM is the maximum number of requests per minute the agent can perform to avoid rate limits. It's optional and can be left unspecified, with a default value of `None`. |
 | **Max Execution Time** *(optional)* | `max_execution_time` | Max Execution Time is the maximum execution time for an agent to execute a task. It's optional and can be left unspecified, with a default value of `None`, meaning no max execution time. |
 | **Verbose** *(optional)* | `verbose` | Setting this to `True` configures the internal logger to provide detailed execution logs, aiding in debugging and monitoring. Default is `False`. |
 | **Allow Delegation** *(optional)* | `allow_delegation` | Agents can delegate tasks or questions to one another, ensuring that each task is handled by the most suitable agent. Default is `False`. |
 | **Step Callback** *(optional)* | `step_callback` | A function that is called after each step of the agent. This can be used to log the agent's actions or to perform other operations. It will overwrite the crew `step_callback`. |
 | **Cache** *(optional)* | `cache` | Indicates if the agent should use a cache for tool usage. Default is `True`. |
 | **System Template** *(optional)* | `system_template` | Specifies the system format for the agent. Default is `None`. |
 | **Prompt Template** *(optional)* | `prompt_template` | Specifies the prompt format for the agent. Default is `None`. |
 | **Response Template** *(optional)* | `response_template` | Specifies the response format for the agent. Default is `None`. |
 | **Allow Code Execution** *(optional)* | `allow_code_execution` | Enable code execution for the agent. Default is `False`. |
 | **Max Retry Limit** *(optional)* | `max_retry_limit` | Maximum number of retries for an agent to execute a task when an error occurs. Default is `2`. |
 | **Use System Prompt** *(optional)* | `use_system_prompt` | Adds the ability to not use system prompt (to support o1 models). Default is `True`. |
 | **Respect Context Window** *(optional)* | `respect_context_window` | Summary strategy to avoid overflowing the context window. Default is `True`. |
+| **Code Execution Mode** *(optional)* | `code_execution_mode` | Determines the mode for code execution: 'safe' (using Docker) or 'unsafe' (direct execution on the host machine). Default is `safe`. |
 ## Creating an agent
@@ -83,6 +84,7 @@ agent = Agent(
     max_retry_limit=2, # Optional
     use_system_prompt=True, # Optional
     respect_context_window=True, # Optional
+    code_execution_mode='safe', # Optional, defaults to 'safe'
 )
 ```
@@ -156,4 +158,4 @@ crew = my_crew.kickoff(inputs={"input": "Mark Twain"})
 ## Conclusion
 Agents are the building blocks of the CrewAI framework. By understanding how to define and interact with agents,
-you can create sophisticated AI systems that leverage the power of collaborative intelligence.
+you can create sophisticated AI systems that leverage the power of collaborative intelligence. The `code_execution_mode` attribute provides flexibility in how agents execute code, allowing for both secure and direct execution options.
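The `code_execution_mode` attribute introduced in this diff pairs with `allow_code_execution` when defining an agent. Here is a minimal sketch based on the documented attribute table above; the role, goal, and backstory values are illustrative only:

```python
from crewai import Agent

# Hypothetical agent definition; only allow_code_execution and
# code_execution_mode come from the attribute table documented above.
coder = Agent(
    role="Python Developer",
    goal="Write and run small scripts to verify ideas",
    backstory="An engineer who checks assumptions by executing code.",
    allow_code_execution=True,   # default is False
    code_execution_mode="safe",  # 'safe' runs in Docker; 'unsafe' runs on the host
)
```

Since `'safe'` mode relies on Docker, the runtime checks added in commit e1fd83e6a7 ("add in docker install and running checks") verify Docker is available before code runs.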

@@ -6,7 +6,7 @@ icon: terminal
 # CrewAI CLI Documentation
-The CrewAI CLI provides a set of commands to interact with CrewAI, allowing you to create, train, run, and manage crews and pipelines.
+The CrewAI CLI provides a set of commands to interact with CrewAI, allowing you to create, train, run, and manage crews & flows.
 ## Installation
@@ -146,3 +146,34 @@ crewai run
 Make sure to run these commands from the directory where your CrewAI project is set up.
 Some commands may require additional configuration or setup within your project structure.
 </Note>
+### 9. API Keys
+When running ```crewai create crew``` command, the CLI will first show you the top 5 most common LLM providers and ask you to select one.
+Once you've selected an LLM provider, you will be prompted for API keys.
+#### Initial API key providers
+The CLI will initially prompt for API keys for the following services:
+* OpenAI
+* Groq
+* Anthropic
+* Google Gemini
+When you select a provider, the CLI will prompt you to enter your API key.
+#### Other Options
+If you select option 6, you will be able to select from a list of LiteLLM supported providers.
+When you select a provider, the CLI will prompt you to enter the Key name and the API key.
+See the following link for each provider's key name:
+* [LiteLLM Providers](https://docs.litellm.ai/docs/providers)
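After the CLI saves a key, you can confirm it is visible to your project's process — a minimal sketch, assuming the variable has been exported from `.env` into the environment (the export step is up to your shell or a dotenv loader, not the CLI):

```python
import os

# Hypothetical sanity check; "OPENAI_API_KEY" is one of the variable
# names the CLI prompts for, per the provider list above.
key = os.environ.get("OPENAI_API_KEY")
print("OPENAI_API_KEY is set:", key is not None)
```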

@@ -22,7 +22,8 @@ A crew in crewAI represents a collaborative group of agents working together to
 | **Max RPM** _(optional)_ | `max_rpm` | Maximum requests per minute the crew adheres to during execution. Defaults to `None`. |
 | **Language** _(optional)_ | `language` | Language used for the crew, defaults to English. |
 | **Language File** _(optional)_ | `language_file` | Path to the language file to be used for the crew. |
-| **Memory** _(optional)_ | `memory` | Utilized for storing execution memories (short-term, long-term, entity memory). Defaults to `False`. |
+| **Memory** _(optional)_ | `memory` | Utilized for storing execution memories (short-term, long-term, entity memory). |
+| **Memory Config** _(optional)_ | `memory_config` | Configuration for the memory provider to be used by the crew. |
 | **Cache** _(optional)_ | `cache` | Specifies whether to use a cache for storing the results of tools' execution. Defaults to `True`. |
 | **Embedder** _(optional)_ | `embedder` | Configuration for the embedder to be used by the crew. Mostly used by memory for now. Default is `{"provider": "openai"}`. |
 | **Full Output** _(optional)_ | `full_output` | Whether the crew should return the full output with all tasks outputs or just the final output. Defaults to `False`. |
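The new `memory_config` field pairs with the Mem0 support added in commit e70bc94ab6 ("Add support for retrieving user preferences and memories using Mem0"). A minimal sketch of a crew wiring both fields together; the agent, the task, and the exact `memory_config` shape are assumptions for illustration:

```python
from crewai import Agent, Crew, Process, Task

# Illustrative agent and task; names and texts are hypothetical.
researcher = Agent(
    role="Researcher",
    goal="Summarize a topic in three sentences",
    backstory="A concise analyst.",
)
summarize = Task(
    description="Summarize the history of the transistor.",
    expected_output="A three-sentence summary.",
    agent=researcher,
)

crew = Crew(
    agents=[researcher],
    tasks=[summarize],
    process=Process.sequential,
    memory=True,  # short-term, long-term, and entity memory
    # Assumed config shape for the Mem0 provider; user_id scopes which
    # user's stored preferences and memories are retrieved.
    memory_config={"provider": "mem0", "config": {"user_id": "john"}},
    embedder={"provider": "openai"},  # documented default
)
```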


@@ -18,68 +18,71 @@ Flows allow you to create structured, event-driven workflows. They provide a sea
 4. **Flexible Control Flow**: Implement conditional logic, loops, and branching within your workflows.
+5. **Input Flexibility**: Flows can accept inputs to initialize or update their state, with different handling for structured and unstructured state management.
 ## Getting Started
 Let's create a simple Flow where you will use OpenAI to generate a random city in one task and then use that city to generate a fun fact in another task.
-```python Code
-import asyncio
-from crewai.flow.flow import Flow, listen, start
-from litellm import completion
-
-class ExampleFlow(Flow):
-    model = "gpt-4o-mini"
-
-    @start()
-    def generate_city(self):
-        print("Starting flow")
-        response = completion(
-            model=self.model,
-            messages=[
-                {
-                    "role": "user",
-                    "content": "Return the name of a random city in the world.",
-                },
-            ],
-        )
-        random_city = response["choices"][0]["message"]["content"]
-        print(f"Random City: {random_city}")
-        return random_city
-
-    @listen(generate_city)
-    def generate_fun_fact(self, random_city):
-        response = completion(
-            model=self.model,
-            messages=[
-                {
-                    "role": "user",
-                    "content": f"Tell me a fun fact about {random_city}",
-                },
-            ],
-        )
-        fun_fact = response["choices"][0]["message"]["content"]
-        return fun_fact
-
-async def main():
-    flow = ExampleFlow()
-    result = await flow.kickoff()
-    print(f"Generated fun fact: {result}")
-
-asyncio.run(main())
+### Passing Inputs to Flows
+Flows can accept inputs to initialize or update their state before execution. The way inputs are handled depends on whether the flow uses structured or unstructured state management.
+#### Structured State Management
+In structured state management, the flow's state is defined using a Pydantic `BaseModel`. Inputs must match the model's schema, and any updates will overwrite the default values.
+```python
+from crewai.flow.flow import Flow, listen, start
+from pydantic import BaseModel
+
+class ExampleState(BaseModel):
+    counter: int = 0
+    message: str = ""
+
+class StructuredExampleFlow(Flow[ExampleState]):
+    @start()
+    def first_method(self):
+        # Implementation
+
+flow = StructuredExampleFlow()
+flow.kickoff(inputs={"counter": 10})
+```
+In this example, the `counter` is initialized to `10`, while `message` retains its default value.
+#### Unstructured State Management
+In unstructured state management, the flow's state is a dictionary. You can pass any dictionary to update the state.
+```python
+from crewai.flow.flow import Flow, listen, start
+
+class UnstructuredExampleFlow(Flow):
+    @start()
+    def first_method(self):
+        # Implementation
+
+flow = UnstructuredExampleFlow()
+flow.kickoff(inputs={"counter": 5, "message": "Initial message"})
+```
+Here, both `counter` and `message` are updated based on the provided inputs.
+**Note:** Ensure that inputs for structured state management adhere to the defined schema to avoid validation errors.
+### Example Flow
+```python
+# Existing example code
+```
 In the above example, we have created a simple Flow that generates a random city using OpenAI and then generates a fun fact about that city. The Flow consists of two tasks: `generate_city` and `generate_fun_fact`. The `generate_city` task is the starting point of the Flow, and the `generate_fun_fact` task listens for the output of the `generate_city` task.
 When you run the Flow, it will generate a random city and then generate a fun fact about that city. The output will be printed to the console.
+**Note:** Ensure you have set up your `.env` file to store your `OPENAI_API_KEY`. This key is necessary for authenticating requests to the OpenAI API.
 ### @start()
 The `@start()` decorator is used to mark a method as the starting point of a Flow. When a Flow is started, all the methods decorated with `@start()` are executed in parallel. You can have multiple start methods in a Flow, and they will all be executed when the Flow is started.
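Stepping back from the diff for a moment: the input-passing skeletons in the hunk above elide the method bodies. A runnable version of the structured-inputs case might look like this — a minimal sketch, where the body of `first_method` is an assumption:

```python
from crewai.flow.flow import Flow, listen, start
from pydantic import BaseModel

class ExampleState(BaseModel):
    counter: int = 0
    message: str = ""

class StructuredExampleFlow(Flow[ExampleState]):
    @start()
    def first_method(self):
        # kickoff(inputs=...) has already been applied to self.state here.
        return f"counter={self.state.counter}, message={self.state.message!r}"

flow = StructuredExampleFlow()
result = flow.kickoff(inputs={"counter": 10})
print(result)  # counter=10, message=''
```

Passing a key that is not on the model (for example `{"tally": 10}`) would fail Pydantic validation, which is the safety the added note refers to.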
@@ -94,14 +97,14 @@ The `@listen()` decorator can be used in several ways:
 1. **Listening to a Method by Name**: You can pass the name of the method you want to listen to as a string. When that method completes, the listener method will be triggered.
-```python Code
+```python
 @listen("generate_city")
 def generate_fun_fact(self, random_city):
     # Implementation
 ```
 2. **Listening to a Method Directly**: You can pass the method itself. When that method completes, the listener method will be triggered.
-```python Code
+```python
 @listen(generate_city)
 def generate_fun_fact(self, random_city):
     # Implementation
@@ -118,8 +121,7 @@ When you run a Flow, the final output is determined by the last method that comp
 Here's how you can access the final output:
 <CodeGroup>
-```python Code
-import asyncio
+```python
 from crewai.flow.flow import Flow, listen, start

 class OutputExampleFlow(Flow):
@@ -131,26 +133,23 @@ class OutputExampleFlow(Flow):
     def second_method(self, first_output):
         return f"Second method received: {first_output}"

-async def main():
-    flow = OutputExampleFlow()
-    final_output = await flow.kickoff()
-    print("---- Final Output ----")
-    print(final_output)
-
-asyncio.run(main())
+flow = OutputExampleFlow()
+final_output = flow.kickoff()
+print("---- Final Output ----")
+print(final_output)
 ```
-``` text Output
+```text
 ---- Final Output ----
 Second method received: Output from first_method
 ```
 </CodeGroup>
 In this example, the `second_method` is the last method to complete, so its output will be the final output of the Flow.
 The `kickoff()` method will return the final output, which is then printed to the console.
 #### Accessing and Updating State
 In addition to retrieving the final output, you can also access and update the state within your Flow. The state can be used to store and share data between different methods in the Flow. After the Flow has run, you can access the state to retrieve any information that was added or updated during the execution.
@@ -159,8 +158,7 @@ Here's an example of how to update and access the state:
 <CodeGroup>
-```python Code
-import asyncio
+```python
 from crewai.flow.flow import Flow, listen, start
 from pydantic import BaseModel
@@ -181,45 +179,41 @@ class StateExampleFlow(Flow[ExampleState]):
         self.state.counter += 1
         return self.state.message

-async def main():
-    flow = StateExampleFlow()
-    final_output = await flow.kickoff()
-    print(f"Final Output: {final_output}")
-    print("Final State:")
-    print(flow.state)
-
-asyncio.run(main())
+flow = StateExampleFlow()
+final_output = flow.kickoff()
+print(f"Final Output: {final_output}")
+print("Final State:")
+print(flow.state)
 ```
-``` text Output
+```text
 Final Output: Hello from first_method - updated by second_method
 Final State:
 counter=2 message='Hello from first_method - updated by second_method'
 ```
 </CodeGroup>
 In this example, the state is updated by both `first_method` and `second_method`.
 After the Flow has run, you can access the final state to see the updates made by these methods.
 By ensuring that the final method's output is returned and providing access to the state, CrewAI Flows make it easy to integrate the results of your AI workflows into larger applications or systems,
 while also maintaining and accessing the state throughout the Flow's execution.
 ## Flow State Management
 Managing state effectively is crucial for building reliable and maintainable AI workflows. CrewAI Flows provides robust mechanisms for both unstructured and structured state management,
 allowing developers to choose the approach that best fits their application's needs.
 ### Unstructured State Management
 In unstructured state management, all state is stored in the `state` attribute of the `Flow` class.
 This approach offers flexibility, enabling developers to add or modify state attributes on the fly without defining a strict schema.
-```python Code
-import asyncio
+```python
 from crewai.flow.flow import Flow, listen, start

-class UntructuredExampleFlow(Flow):
+class UnstructuredExampleFlow(Flow):
     @start()
     def first_method(self):
@@ -238,13 +232,8 @@ class UntructuredExampleFlow(Flow):
         print(f"State after third_method: {self.state}")

-async def main():
-    flow = UntructuredExampleFlow()
-    await flow.kickoff()
-
-asyncio.run(main())
+flow = UnstructuredExampleFlow()
+flow.kickoff()
 ```
 **Key Points:**
@@ -254,21 +243,17 @@ asyncio.run(main())
 ### Structured State Management
 Structured state management leverages predefined schemas to ensure consistency and type safety across the workflow.
 By using models like Pydantic's `BaseModel`, developers can define the exact shape of the state, enabling better validation and auto-completion in development environments.
-```python Code
-import asyncio
+```python
 from crewai.flow.flow import Flow, listen, start
 from pydantic import BaseModel

 class ExampleState(BaseModel):
     counter: int = 0
     message: str = ""

 class StructuredExampleFlow(Flow[ExampleState]):
     @start()
@@ -287,13 +272,8 @@ class StructuredExampleFlow(Flow[ExampleState]):
         print(f"State after third_method: {self.state}")

-async def main():
-    flow = StructuredExampleFlow()
-    await flow.kickoff()
-
-asyncio.run(main())
+flow = StructuredExampleFlow()
+flow.kickoff()
 ```
 **Key Points:**
@@ -325,8 +305,7 @@ The `or_` function in Flows allows you to listen to multiple methods and trigger
 <CodeGroup>
-```python Code
-import asyncio
+```python
 from crewai.flow.flow import Flow, listen, or_, start

 class OrExampleFlow(Flow):
@@ -343,23 +322,18 @@ class OrExampleFlow(Flow):
     def logger(self, result):
         print(f"Logger: {result}")

-async def main():
-    flow = OrExampleFlow()
-    await flow.kickoff()
-
-asyncio.run(main())
+flow = OrExampleFlow()
+flow.kickoff()
 ```
-``` text Output
+```text
 Logger: Hello from the start method
 Logger: Hello from the second method
 ```
 </CodeGroup>
 When you run this Flow, the `logger` method will be triggered by the output of either the `start_method` or the `second_method`.
 The `or_` function is used to listen to multiple methods and trigger the listener method when any of the specified methods emit an output.
 ### Conditional Logic: `and`
@@ -368,8 +342,7 @@ The `and_` function in Flows allows you to listen to multiple methods and trigge
 <CodeGroup>
-```python Code
-import asyncio
+```python
 from crewai.flow.flow import Flow, and_, listen, start

 class AndExampleFlow(Flow):
@@ -387,34 +360,28 @@ class AndExampleFlow(Flow):
         print("---- Logger ----")
         print(self.state)

-async def main():
-    flow = AndExampleFlow()
-    await flow.kickoff()
-
-asyncio.run(main())
+flow = AndExampleFlow()
+flow.kickoff()
 ```
-``` text Output
+```text
 ---- Logger ----
 {'greeting': 'Hello from the start method', 'joke': 'What do computers eat? Microchips.'}
 ```
 </CodeGroup>
 When you run this Flow, the `logger` method will be triggered only when both the `start_method` and the `second_method` emit an output.
 The `and_` function is used to listen to multiple methods and trigger the listener method only when all the specified methods emit an output.
 ### Router
 The `@router()` decorator in Flows allows you to define conditional routing logic based on the output of a method.
 You can specify different routes based on the output of the method, allowing you to control the flow of execution dynamically.
 <CodeGroup>
-```python Code
-import asyncio
+```python
 import random
 from crewai.flow.flow import Flow, listen, router, start
 from pydantic import BaseModel
@@ -445,16 +412,11 @@ class RouterFlow(Flow[ExampleState]):
     def fourth_method(self):
         print("Fourth method running")

-async def main():
-    flow = RouterFlow()
-    await flow.kickoff()
-
-asyncio.run(main())
+flow = RouterFlow()
+flow.kickoff()
 ```
-``` text Output
+```text
 Starting the structured flow
 Third method running
 Fourth method running
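For orientation, the hunk above shows only the tail of the router example; reassembled from the explanation that follows, the full flow looks roughly like this. The `success_flag` field name is an assumption — the diff does not show the state model's fields:

```python
import random

from crewai.flow.flow import Flow, listen, router, start
from pydantic import BaseModel

class ExampleState(BaseModel):
    success_flag: bool = False  # assumed field name

class RouterFlow(Flow[ExampleState]):
    @start()
    def start_method(self):
        print("Starting the structured flow")
        # Generate a random boolean and store it in the state.
        self.state.success_flag = random.choice([True, False])

    @router(start_method)
    def second_method(self):
        # Route on the boolean: "success" or "failed".
        return "success" if self.state.success_flag else "failed"

    @listen("success")
    def third_method(self):
        print("Third method running")

    @listen("failed")
    def fourth_method(self):
        print("Fourth method running")

flow = RouterFlow()
flow.kickoff()
```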
@@ -462,16 +424,16 @@ Fourth method running
 </CodeGroup>
 In the above example, the `start_method` generates a random boolean value and sets it in the state.
 The `second_method` uses the `@router()` decorator to define conditional routing logic based on the value of the boolean.
 If the boolean is `True`, the method returns `"success"`, and if it is `False`, the method returns `"failed"`.
 The `third_method` and `fourth_method` listen to the output of the `second_method` and execute based on the returned value.
 When you run this Flow, the output will change based on the random boolean value generated by the `start_method`.
 ## Adding Crews to Flows
 Creating a flow with multiple crews in CrewAI is straightforward.
 You can generate a new CrewAI project that includes all the scaffolding needed to create a flow with multiple crews by running the following command:
@@ -485,22 +447,21 @@ This command will generate a new CrewAI project with the necessary folder struct
 After running the `crewai create flow name_of_flow` command, you will see a folder structure similar to the following:
 | Directory/File | Description |
-|:---------------------------------|:------------------------------------------------------------------|
+| :--------------------- | :----------------------------------------------------------------- |
 | `name_of_flow/` | Root directory for the flow. |
 | ├── `crews/` | Contains directories for specific crews. |
 | │ └── `poem_crew/` | Directory for the "poem_crew" with its configurations and scripts. |
 | │ ├── `config/` | Configuration files directory for the "poem_crew". |
 | │ ├── `agents.yaml` | YAML file defining the agents for "poem_crew". |
 | │ └── `tasks.yaml` | YAML file defining the tasks for "poem_crew". |
 | │ ├── `poem_crew.py` | Script for "poem_crew" functionality. |
 | ├── `tools/` | Directory for additional tools used in the flow. |
 | │ └── `custom_tool.py` | Custom tool implementation. |
 | ├── `main.py` | Main script for running the flow. |
 | ├── `README.md` | Project description and instructions. |
 | ├── `pyproject.toml` | Configuration file for project dependencies and settings. |
 | └── `.gitignore` | Specifies files and directories to ignore in version control. |
 ### Building Your Crews
@@ -518,9 +479,8 @@ The `main.py` file is where you create your flow and connect the crews together.
 Here's an example of how you can connect the `poem_crew` in the `main.py` file:
-```python Code
+```python
 #!/usr/bin/env python
-import asyncio
 from random import randint
 from pydantic import BaseModel
@@ -536,14 +496,12 @@ class PoemFlow(Flow[PoemState]):
     @start()
     def generate_sentence_count(self):
         print("Generating sentence count")
-        # Generate a number between 1 and 5
         self.state.sentence_count = randint(1, 5)

     @listen(generate_sentence_count)
     def generate_poem(self):
         print("Generating poem")
-        poem_crew = PoemCrew().crew()
-        result = poem_crew.kickoff(inputs={"sentence_count": self.state.sentence_count})
+        result = PoemCrew().crew().kickoff(inputs={"sentence_count": self.state.sentence_count})
         print("Poem generated", result.raw)
         self.state.poem = result.raw
@@ -554,18 +512,17 @@ class PoemFlow(Flow[PoemState]):
         with open("poem.txt", "w") as f:
             f.write(self.state.poem)

-async def run():
-    """
-    Run the flow.
-    """
+def kickoff():
     poem_flow = PoemFlow()
-    await poem_flow.kickoff()
+    poem_flow.kickoff()

-def main():
-    asyncio.run(run())
+def plot():
+    poem_flow = PoemFlow()
+    poem_flow.plot()

 if __name__ == "__main__":
-    main()
+    kickoff()
 ```
 In this example, the `PoemFlow` class defines a flow that generates a sentence count, uses the `PoemCrew` to generate a poem, and then saves the poem to a file. The flow is kicked off by calling the `kickoff()` method.
@@ -587,17 +544,53 @@ source .venv/bin/activate
 After activating the virtual environment, you can run the flow by executing one of the following commands:
 ```bash
-crewai flow run
+crewai flow kickoff
 ```
 or
 ```bash
-uv run run_flow
+uv run kickoff
 ```
 The flow will execute, and you should see the output in the console.
+### Adding Additional Crews Using the CLI
+Once you have created your initial flow, you can easily add additional crews to your project using the CLI. This allows you to expand your flow's capabilities by integrating new crews without starting from scratch.
+To add a new crew to your existing flow, use the following command:
+```bash
+crewai flow add-crew <crew_name>
+```
+This command will create a new directory for your crew within the `crews` folder of your flow project. It will include the necessary configuration files and a crew definition file, similar to the initial setup.
+#### Folder Structure
+After adding a new crew, your folder structure will look like this:
+| Directory/File | Description |
+| :--------------------- | :----------------------------------------------------------------- |
+| `name_of_flow/` | Root directory for the flow. |
+| ├── `crews/` | Contains directories for specific crews. |
+| │ ├── `poem_crew/` | Directory for the "poem_crew" with its configurations and scripts. |
+| │ │ ├── `config/` | Configuration files directory for the "poem_crew". |
+| │ │ │ ├── `agents.yaml` | YAML file defining the agents for "poem_crew". |
+| │ │ │ └── `tasks.yaml` | YAML file defining the tasks for "poem_crew". |
+| │ │ └── `poem_crew.py` | Script for "poem_crew" functionality. |
+| └── `name_of_crew/` | Directory for the new crew. |
+| ├── `config/` | Configuration files directory for the new crew. |
+| │ ├── `agents.yaml` | YAML file defining the agents for the new crew. |
+| │ └── `tasks.yaml` | YAML file defining the tasks for the new crew. |
+| └── `name_of_crew.py` | Script for the new crew functionality. |
+You can then customize the `agents.yaml` and `tasks.yaml` files to define the agents and tasks for your new crew. The `name_of_crew.py` file will contain the crew's logic, which you can modify to suit your needs.
+By using the CLI to add additional crews, you can efficiently build complex AI workflows that leverage multiple crews working together.
 ## Plot Flows
 Visualizing your AI workflows can provide valuable insights into the structure and execution paths of your flows. CrewAI offers a powerful visualization tool that allows you to generate interactive plots of your flows, making it easier to understand and optimize your AI workflows.
@@ -614,7 +607,7 @@ CrewAI provides two convenient methods to generate plots of your flows:
 If you are working directly with a flow instance, you can generate a plot by calling the `plot()` method on your flow object. This method will create an HTML file containing the interactive plot of your flow.
-```python Code
+```python
 # Assuming you have a flow instance
 flow.plot("my_flow_plot")
 ```
@@ -637,13 +630,114 @@ The generated plot will display nodes representing the tasks in your flow, with
By visualizing your flows, you can gain a clearer understanding of the workflow's structure, making it easier to debug, optimize, and communicate your AI processes to others. By visualizing your flows, you can gain a clearer understanding of the workflow's structure, making it easier to debug, optimize, and communicate your AI processes to others.
### Conclusion
Plotting your flows is a powerful feature of CrewAI that enhances your ability to design and manage complex AI workflows. Whether you choose to use the `plot()` method or the command line, generating plots will provide you with a visual representation of your workflows, aiding in both development and presentation. ## Advanced
In this section, we explore more complex use cases of CrewAI Flows, starting with a self-evaluation loop. This pattern is crucial for developing AI systems that can iteratively improve their outputs through feedback.
### 1) Self-Evaluation Loop
The self-evaluation loop is a powerful pattern that allows AI workflows to automatically assess and refine their outputs. This example demonstrates how to set up a flow that generates content, evaluates it, and iterates based on feedback until the desired quality is achieved.
#### Overview
The self-evaluation loop involves two main Crews:
1. **ShakespeareanXPostCrew**: Generates a Shakespearean-style post on a given topic.
2. **XPostReviewCrew**: Evaluates the generated post, providing feedback on its validity and quality.
The process iterates until the post meets the criteria or a maximum retry limit is reached. This approach ensures high-quality outputs through iterative refinement.
#### Importance
This pattern is essential for building robust AI systems that can adapt and improve over time. By automating the evaluation and feedback loop, developers can ensure that their AI workflows produce reliable and high-quality results.
#### Main Code Highlights
Below is the `main.py` file for the self-evaluation loop flow:
```python
from typing import Optional

from crewai.flow.flow import Flow, listen, router, start
from pydantic import BaseModel

from self_evaluation_loop_flow.crews.shakespeare_crew.shakespeare_crew import (
    ShakespeareanXPostCrew,
)
from self_evaluation_loop_flow.crews.x_post_review_crew.x_post_review_crew import (
    XPostReviewCrew,
)


class ShakespeareXPostFlowState(BaseModel):
    x_post: str = ""
    feedback: Optional[str] = None
    valid: bool = False
    retry_count: int = 0


class ShakespeareXPostFlow(Flow[ShakespeareXPostFlowState]):
    @start("retry")
    def generate_shakespeare_x_post(self):
        print("Generating Shakespearean X post")
        topic = "Flying cars"
        result = (
            ShakespeareanXPostCrew()
            .crew()
            .kickoff(inputs={"topic": topic, "feedback": self.state.feedback})
        )
        print("X post generated", result.raw)
        self.state.x_post = result.raw

    @router(generate_shakespeare_x_post)
    def evaluate_x_post(self):
        if self.state.retry_count > 3:
            return "max_retry_exceeded"
        result = XPostReviewCrew().crew().kickoff(inputs={"x_post": self.state.x_post})
        self.state.valid = result["valid"]
        self.state.feedback = result["feedback"]
        print("valid", self.state.valid)
        print("feedback", self.state.feedback)
        self.state.retry_count += 1
        if self.state.valid:
            return "complete"
        return "retry"

    @listen("complete")
    def save_result(self):
        print("X post is valid")
        print("X post:", self.state.x_post)
        with open("x_post.txt", "w") as file:
            file.write(self.state.x_post)

    @listen("max_retry_exceeded")
    def max_retry_exceeded_exit(self):
        print("Max retry count exceeded")
        print("X post:", self.state.x_post)
        print("Feedback:", self.state.feedback)


def kickoff():
    shakespeare_flow = ShakespeareXPostFlow()
    shakespeare_flow.kickoff()


def plot():
    shakespeare_flow = ShakespeareXPostFlow()
    shakespeare_flow.plot()


if __name__ == "__main__":
    kickoff()
```
#### Code Highlights
- **Retry Mechanism**: The flow uses a retry mechanism to regenerate the post if it doesn't meet the criteria, up to a maximum of three retries.
- **Feedback Loop**: Feedback from the `XPostReviewCrew` is used to refine the post iteratively.
- **State Management**: The flow maintains state using a Pydantic model, ensuring type safety and clarity.
For a complete example and further details, please refer to the [Self Evaluation Loop Flow repository](https://github.com/crewAIInc/crewAI-examples/tree/main/self_evaluation_loop_flow).
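If you want to experiment with the routing pattern in isolation, here is a minimal, self-contained sketch of the same retry wiring with the crews stubbed out (the success condition is a placeholder, not part of the original example):
```python
from pydantic import BaseModel
from crewai.flow.flow import Flow, listen, router, start


class LoopState(BaseModel):
    attempts: int = 0


class RetryLoopFlow(Flow[LoopState]):
    @start("retry")  # runs at kickoff and again whenever the router returns "retry"
    def produce(self):
        self.state.attempts += 1

    @router(produce)
    def check(self):
        # Placeholder evaluation; a real flow would kick off a review crew here.
        if self.state.attempts > 3:
            return "max_retry_exceeded"
        return "complete" if self.state.attempts >= 2 else "retry"

    @listen("complete")
    def done(self):
        print(f"Succeeded after {self.state.attempts} attempt(s)")

    @listen("max_retry_exceeded")
    def give_up(self):
        print("Giving up after too many attempts")


RetryLoopFlow().kickoff()
```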
## Next Steps
If you're interested in exploring additional examples of flows, we have a variety of recommendations in our examples repository. Here are five specific flow examples, each showcasing unique use cases to help you match your current problem type to a specific example:
1. **Email Auto Responder Flow**: This example demonstrates an infinite loop where a background job continually runs to automate email responses. It's a great use case for tasks that need to be performed repeatedly without manual intervention. [View Example](https://github.com/crewAIInc/crewAI-examples/tree/main/email_auto_responder_flow)
@@ -653,17 +747,19 @@ If you're interested in exploring additional examples of flows, we have a variet
4. **Meeting Assistant Flow**: This flow demonstrates how to broadcast one event to trigger multiple follow-up actions. For instance, after a meeting is completed, the flow can update a Trello board, send a Slack message, and save the results. It's a great example of handling multiple outcomes from a single event, making it ideal for comprehensive task management and notification systems. [View Example](https://github.com/crewAIInc/crewAI-examples/tree/main/meeting_assistant_flow)
5. **Self Evaluation Loop Flow**: This flow demonstrates a self-evaluation loop where AI workflows automatically assess and refine their outputs through feedback. It involves generating content, evaluating it, and iterating until the desired quality is achieved. This pattern is crucial for developing robust AI systems that can adapt and improve over time. [View Example](https://github.com/crewAIInc/crewAI-examples/tree/main/self_evaluation_loop_flow)
By exploring these examples, you can gain insights into how to leverage CrewAI Flows for various use cases, from automating repetitive tasks to managing complex, multi-step processes with dynamic decision-making and human feedback.
Also, check out our YouTube video on how to use flows in CrewAI below!
<iframe
width="560"
height="315"
src="https://www.youtube.com/embed/MTb5my6VOT8"
title="YouTube video player"
frameborder="0"
allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share"
referrerpolicy="strict-origin-when-cross-origin"
allowfullscreen
></iframe>
@@ -25,50 +25,148 @@ By default, CrewAI uses the `gpt-4o-mini` model. It uses environment variables i
- `OPENAI_API_BASE`
- `OPENAI_API_KEY`
### 2. Updating YAML files
You can update the `agents.yml` file to refer to the LLM you want to use:
```yaml Code
researcher:
role: Research Specialist
goal: Conduct comprehensive research and analysis to gather relevant information,
synthesize findings, and produce well-documented insights.
backstory: A dedicated research professional with years of experience in academic
investigation, literature review, and data analysis, known for thorough and
methodical approaches to complex research questions.
verbose: true
llm: openai/gpt-4o
# llm: azure/gpt-4o-mini
# llm: gemini/gemini-pro
# llm: anthropic/claude-3-5-sonnet-20240620
# llm: bedrock/anthropic.claude-3-sonnet-20240229-v1:0
# llm: mistral/mistral-large-latest
# llm: ollama/llama3:70b
# llm: groq/llama-3.2-90b-vision-preview
# llm: watsonx/meta-llama/llama-3-1-70b-instruct
# ...
```
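In a standard crewAI project scaffold, an entry like this is typically picked up through the `agents_config` mapping of a `@CrewBase` class; a minimal sketch, assuming the default `config/agents.yaml` location and the `researcher` key from the YAML above:
```python Code
from crewai import Agent
from crewai.project import CrewBase, agent

@CrewBase
class ResearchCrew:
    """The scaffold loads config/agents.yaml into self.agents_config."""
    agents_config = "config/agents.yaml"

    @agent
    def researcher(self) -> Agent:
        # The llm string from the YAML (e.g. openai/gpt-4o) is applied here.
        return Agent(config=self.agents_config["researcher"], verbose=True)
```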
Keep in mind that you will need to set certain ENV vars depending on the model you are
using to account for the credentials, or set a custom LLM object as described below.
Here are some of the required ENV vars for some of the LLM integrations:
<AccordionGroup>
<Accordion title="OpenAI">
```python Code
OPENAI_API_KEY=<your-api-key>
OPENAI_API_BASE=<optional-custom-base-url> # OPTIONAL
OPENAI_MODEL_NAME=<openai-model-name>
OPENAI_ORGANIZATION=<your-org-id> # OPTIONAL
```
</Accordion>
<Accordion title="Anthropic">
```python Code
ANTHROPIC_API_KEY=<your-api-key>
```
</Accordion>
<Accordion title="Google">
```python Code
GEMINI_API_KEY=<your-api-key>
```
</Accordion>
<Accordion title="Azure">
```python Code
AZURE_API_KEY=<your-api-key> # "my-azure-api-key"
AZURE_API_BASE=<your-resource-url> # "https://example-endpoint.openai.azure.com"
AZURE_API_VERSION=<api-version> # "2023-05-15"
AZURE_AD_TOKEN=<your-azure-ad-token> # Optional
AZURE_API_TYPE=<your-azure-api-type> # Optional
```
</Accordion>
<Accordion title="AWS Bedrock">
```python Code
AWS_ACCESS_KEY_ID=<your-access-key>
AWS_SECRET_ACCESS_KEY=<your-secret-key>
AWS_DEFAULT_REGION=<your-region>
```
</Accordion>
<Accordion title="Mistral">
```python Code
MISTRAL_API_KEY=<your-api-key>
```
</Accordion>
<Accordion title="Groq">
```python Code
GROQ_API_KEY=<your-api-key>
```
</Accordion>
<Accordion title="IBM watsonx.ai">
```python Code
WATSONX_URL=<your-url> # (required) Base URL of your WatsonX instance
WATSONX_APIKEY=<your-apikey> # (required unless a token is used) IBM Cloud API key
WATSONX_TOKEN=<your-token> # (required unless an API key is used) IAM auth token
WATSONX_PROJECT_ID=<your-project-id> # (optional) Project ID of your WatsonX instance
WATSONX_DEPLOYMENT_SPACE_ID=<your-space-id> # (optional) ID of deployment space for deployed models
```
</Accordion>
</AccordionGroup>
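If you keep these credentials in a `.env` file at the project root, you can load them before kicking off a crew; a minimal sketch using `python-dotenv`, which crewAI already depends on:
```python Code
import os

from dotenv import load_dotenv

# Reads .env from the current directory and exports entries such as
# OPENAI_API_KEY into os.environ, where LiteLLM picks them up.
load_dotenv()

assert os.environ.get("OPENAI_API_KEY"), "Set OPENAI_API_KEY in your .env file"
```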
### 3. Custom LLM Objects
Pass a custom LLM implementation or object from another library.
See below for examples.
<Tabs>
<Tab title="String Identifier">
```python Code
agent = Agent(llm="gpt-4o", ...)
```
</Tab>
<Tab title="LLM Instance">
```python Code
from crewai import LLM
llm = LLM(model="gpt-4", temperature=0.7)
agent = Agent(llm=llm, ...)
```
</Tab>
</Tabs>
## Connecting to OpenAI-Compatible LLMs
You can connect to OpenAI-compatible LLMs using either environment variables or by setting specific attributes on the LLM class:
<Tabs>
<Tab title="Using Environment Variables">
```python Code
import os
os.environ["OPENAI_API_KEY"] = "your-api-key"
os.environ["OPENAI_API_BASE"] = "https://api.your-provider.com/v1"
```
</Tab>
<Tab title="Using LLM Class Attributes">
```python Code
from crewai import LLM
os.environ["OPENAI_API_KEY"] = "your-api-key" llm = LLM(
os.environ["OPENAI_API_BASE"] = "https://api.your-provider.com/v1" model="custom-model-name",
``` api_key="your-api-key",
base_url="https://api.your-provider.com/v1"
2. Using LLM class attributes: )
agent = Agent(llm=llm, ...)
```python Code ```
llm = LLM( </Tab>
model="custom-model-name", </Tabs>
api_key="your-api-key",
base_url="https://api.your-provider.com/v1"
)
agent = Agent(llm=llm, ...)
```
## LLM Configuration Options
@@ -95,43 +193,188 @@ When configuring an LLM for your agent, you have access to a wide range of param
| **api_key** | `str` | Your API key for authentication. |

These are examples of how to configure LLMs for your agent.
<AccordionGroup>
<Accordion title="OpenAI">
model="gpt-4",
temperature=0.8,
max_tokens=150,
top_p=0.9,
frequency_penalty=0.1,
presence_penalty=0.1,
stop=["END"],
seed=42,
base_url="https://api.openai.com/v1",
api_key="your-api-key-here"
)
agent = Agent(llm=llm, ...)
```
## Using Ollama (Local LLMs)
crewAI supports using Ollama for running open-source models locally: ```python Code
from crewai import LLM
1. Install Ollama: [ollama.ai](https://ollama.ai/) llm = LLM(
2. Run a model: `ollama run llama2` model="gpt-4",
3. Configure agent: temperature=0.8,
max_tokens=150,
top_p=0.9,
frequency_penalty=0.1,
presence_penalty=0.1,
stop=["END"],
seed=42,
base_url="https://api.openai.com/v1",
api_key="your-api-key-here"
)
agent = Agent(llm=llm, ...)
```
</Accordion>
<Accordion title="Cerebras">
```python Code
from crewai import LLM

llm = LLM(
    model="cerebras/llama-3.1-70b",
    api_key="your-api-key-here"
)
agent = Agent(llm=llm, ...)
```
</Accordion>
<Accordion title="Ollama (Local LLMs)">
CrewAI supports using Ollama for running open-source models locally:
1. Install Ollama: [ollama.ai](https://ollama.ai/)
2. Run a model: `ollama run llama2`
3. Configure agent:
```python Code
from crewai import LLM
agent = Agent(
    llm=LLM(
        model="ollama/llama3.1",
        base_url="http://localhost:11434"
    ),
    ...
)
```
</Accordion>
<Accordion title="Groq">
```python Code
from crewai import LLM
llm = LLM(
    model="groq/llama3-8b-8192",
    api_key="your-api-key-here"
)
agent = Agent(llm=llm, ...)
```
</Accordion>
<Accordion title="Anthropic">
```python Code
from crewai import LLM
llm = LLM(
    model="anthropic/claude-3-5-sonnet-20241022",
    api_key="your-api-key-here"
)
agent = Agent(llm=llm, ...)
```
</Accordion>
<Accordion title="Fireworks AI">
```python Code
from crewai import LLM
llm = LLM(
    model="fireworks_ai/accounts/fireworks/models/llama-v3-70b-instruct",
    api_key="your-api-key-here"
)
agent = Agent(llm=llm, ...)
```
</Accordion>
<Accordion title="Gemini">
```python Code
from crewai import LLM
llm = LLM(
    model="gemini/gemini-1.5-pro-002",
    api_key="your-api-key-here"
)
agent = Agent(llm=llm, ...)
```
</Accordion>
<Accordion title="Perplexity AI (pplx-api)">
```python Code
from crewai import LLM
llm = LLM(
    model="perplexity/mistral-7b-instruct",
    base_url="https://api.perplexity.ai/v1",
    api_key="your-api-key-here"
)
agent = Agent(llm=llm, ...)
```
</Accordion>
<Accordion title="IBM watsonx.ai">
You can use IBM watsonx.ai by setting the following ENV vars:
```python Code
WATSONX_URL=<your-url>
WATSONX_APIKEY=<your-apikey>
WATSONX_PROJECT_ID=<your-project-id>
```
You can then define your agents' LLMs by updating the `agents.yml` file:
```yaml Code
researcher:
role: Research Specialist
goal: Conduct comprehensive research and analysis to gather relevant information,
synthesize findings, and produce well-documented insights.
backstory: A dedicated research professional with years of experience in academic
investigation, literature review, and data analysis, known for thorough and
methodical approaches to complex research questions.
verbose: true
llm: watsonx/meta-llama/llama-3-1-70b-instruct
```
You can also set up agents more dynamically with a base-level LLM instance, as shown below:
```python Code
from crewai import LLM
llm = LLM(
    model="watsonx/ibm/granite-13b-chat-v2",
    base_url="https://api.watsonx.ai/v1",
    api_key="your-api-key-here"
)
agent = Agent(llm=llm, ...)
```
</Accordion>
<Accordion title="Hugging Face">
```python Code
from crewai import LLM
llm = LLM(
    model="huggingface/meta-llama/Meta-Llama-3.1-8B-Instruct",
    api_key="your-api-key-here",
    base_url="your_api_endpoint"
)
agent = Agent(llm=llm, ...)
```
</Accordion>
</AccordionGroup>
## Changing the Base API URL
You can change the base API URL for any LLM provider by setting the `base_url` parameter:
```python Code
from crewai import LLM

llm = LLM(
    model="custom-model-name",
    base_url="https://api.your-provider.com/v1",
    api_key="your-api-key"
)
agent = Agent(llm=llm, ...)
```
@@ -18,6 +18,7 @@ reason, and learn from past interactions.
| **Long-Term Memory** | Preserves valuable insights and learnings from past executions, allowing agents to build and refine their knowledge over time. |
| **Entity Memory** | Captures and organizes information about entities (people, places, concepts) encountered during tasks, facilitating deeper understanding and relationship mapping. Uses `RAG` for storing entity information. |
| **Contextual Memory**| Maintains the context of interactions by combining `ShortTermMemory`, `LongTermMemory`, and `EntityMemory`, aiding in the coherence and relevance of agent responses over a sequence of tasks or a conversation. |
| **User Memory** | Stores user-specific information and preferences, enhancing personalization and user experience. |
## How Memory Systems Empower Agents
@@ -34,7 +35,7 @@ By default, the memory system is disabled, and you can ensure it is active by se
The memory will use OpenAI embeddings by default, but you can change it by setting `embedder` to a different model.
It's also possible to initialize the memory with your own instance.
The 'embedder' only applies to **Short-Term Memory**, which uses Chroma for RAG.
The **Long-Term Memory** uses SQLite3 to store task results. Currently, there is no way to override these storage implementations.
The data storage files are saved into a platform-specific location found using the appdirs package,
and the name of the project can be overridden using the **CREWAI_STORAGE_DIR** environment variable.
@@ -92,6 +93,47 @@ my_crew = Crew(
)
```
## Integrating Mem0 for Enhanced User Memory
[Mem0](https://mem0.ai/) is a self-improving memory layer for LLM applications, enabling personalized AI experiences.
To include user-specific memory, you can get your API key [here](https://app.mem0.ai/dashboard/api-keys) and refer to the [docs](https://docs.mem0.ai/platform/quickstart#4-1-create-memories) for adding user preferences.
```python Code
import os
from crewai import Crew, Process
from mem0 import MemoryClient

# Set environment variables for Mem0
os.environ["MEM0_API_KEY"] = "m0-xx"

# Step 1: Record preferences based on past conversation or user input
client = MemoryClient()
messages = [
    {"role": "user", "content": "Hi there! I'm planning a vacation and could use some advice."},
    {"role": "assistant", "content": "Hello! I'd be happy to help with your vacation planning. What kind of destination do you prefer?"},
    {"role": "user", "content": "I am more of a beach person than a mountain person."},
    {"role": "assistant", "content": "That's interesting. Do you like hotels or Airbnb?"},
    {"role": "user", "content": "I like Airbnb more."},
]
client.add(messages, user_id="john")

# Step 2: Create a Crew with User Memory
crew = Crew(
    agents=[...],
    tasks=[...],
    verbose=True,
    process=Process.sequential,
    memory=True,
    memory_config={
        "provider": "mem0",
        "config": {"user_id": "john"},
    },
)
```
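To sanity-check what Mem0 has stored for a user before wiring it into a crew, you can query the client directly; a hedged sketch against the Mem0 platform client (the `search` call follows Mem0's docs, not crewAI's API):
```python Code
from mem0 import MemoryClient

client = MemoryClient()  # reads MEM0_API_KEY from the environment

# Retrieve memories relevant to a query for the same user_id used above.
for item in client.search("vacation preferences", user_id="john"):
    print(item)
```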
## Additional Embedding Providers
@@ -113,6 +155,42 @@ my_crew = Crew(
}
)
```
Alternatively, you can directly pass the `OpenAIEmbeddingFunction` to the `embedder` parameter.
Example:
```python Code
import os

from chromadb.utils.embedding_functions import OpenAIEmbeddingFunction
from crewai import Crew, Agent, Task, Process

my_crew = Crew(
    agents=[...],
    tasks=[...],
    process=Process.sequential,
    memory=True,
    verbose=True,
    embedder=OpenAIEmbeddingFunction(
        api_key=os.getenv("OPENAI_API_KEY"), model_name="text-embedding-3-small"
    ),
)
```
### Using Ollama embeddings
```python Code
from crewai import Crew, Agent, Task, Process

my_crew = Crew(
    agents=[...],
    tasks=[...],
    process=Process.sequential,
    memory=True,
    verbose=True,
    embedder={
        "provider": "ollama",
        "config": {
            "model": "mxbai-embed-large"
        }
    }
)
```
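This assumes a local Ollama server is running and that the embedding model has already been pulled:
```shell
# Download the embedding model once; the Ollama server must be running.
ollama pull mxbai-embed-large
```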
### Using Google AI embeddings
@@ -128,9 +206,8 @@ my_crew = Crew(
    embedder={
        "provider": "google",
        "config": {
            "api_key": "<YOUR_API_KEY>",
            "model_name": "<model_name>"
        }
    }
)
@@ -139,6 +216,7 @@ my_crew = Crew(
### Using Azure OpenAI embeddings
```python Code
from chromadb.utils.embedding_functions import OpenAIEmbeddingFunction
from crewai import Crew, Agent, Task, Process

my_crew = Crew(
@@ -147,36 +225,20 @@ my_crew = Crew(
    process=Process.sequential,
    memory=True,
    verbose=True,
    embedder=OpenAIEmbeddingFunction(
        api_key="YOUR_API_KEY",
        api_base="YOUR_API_BASE_PATH",
        api_type="azure",
        api_version="YOUR_API_VERSION",
        model_name="text-embedding-3-small"
    )
)
```
### Using Vertex AI embeddings
```python Code
from chromadb.utils.embedding_functions import GoogleVertexEmbeddingFunction
from crewai import Crew, Agent, Task, Process

my_crew = Crew(
@@ -185,12 +247,12 @@ my_crew = Crew(
    process=Process.sequential,
    memory=True,
    verbose=True,
    embedder=GoogleVertexEmbeddingFunction(
        project_id="YOUR_PROJECT_ID",
        region="YOUR_REGION",
        api_key="YOUR_API_KEY",
        model_name="textembedding-gecko"
    )
)
```
@@ -208,8 +270,52 @@ my_crew = Crew(
    embedder={
        "provider": "cohere",
        "config": {
            "api_key": "YOUR_API_KEY",
            "model_name": "<model_name>"
        }
    }
)
```
### Using HuggingFace embeddings
```python Code
from crewai import Crew, Agent, Task, Process

my_crew = Crew(
    agents=[...],
    tasks=[...],
    process=Process.sequential,
    memory=True,
    verbose=True,
    embedder={
        "provider": "huggingface",
        "config": {
            "api_url": "<api_url>",
        }
    }
)
```
### Using Watson embeddings
```python Code
from crewai import Crew, Agent, Task, Process

# Note: Ensure you have installed and imported `ibm_watsonx_ai` for Watson embeddings to work.
my_crew = Crew(
    agents=[...],
    tasks=[...],
    process=Process.sequential,
    memory=True,
    verbose=True,
    embedder={
        "provider": "watson",
        "config": {
            "model": "<model_name>",
            "api_url": "<api_url>",
            "api_key": "<YOUR_API_KEY>",
            "project_id": "<YOUR_PROJECT_ID>",
        }
    }
)
```
@@ -5,13 +5,14 @@ icon: screwdriver-wrench
---
## Introduction
CrewAI tools empower agents with capabilities ranging from web searching and data analysis to collaboration and delegating tasks among coworkers.
This documentation outlines how to create, integrate, and leverage these tools within the CrewAI framework, including a new focus on collaboration tools.
## What is a Tool?
A tool in CrewAI is a skill or function that agents can utilize to perform various actions.
This includes tools from the [CrewAI Toolkit](https://github.com/joaomdmoura/crewai-tools) and [LangChain Tools](https://python.langchain.com/docs/integrations/tools),
enabling everything from simple searches to complex interactions and effective teamwork among agents.
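For instance, equipping an agent with a prebuilt search tool takes a single `tools` argument; a minimal sketch, assuming `SerperDevTool` from `crewai_tools` and a `SERPER_API_KEY` in the environment:
```python Code
from crewai import Agent
from crewai_tools import SerperDevTool

researcher = Agent(
    role="Researcher",
    goal="Find up-to-date information on a topic",
    backstory="A meticulous analyst who verifies sources.",
    tools=[SerperDevTool()],  # requires SERPER_API_KEY to be set
    verbose=True,
)
```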
## Key Characteristics of Tools
@@ -103,57 +104,53 @@ crew.kickoff()
Here is a list of the available tools and their descriptions:

| Tool | Description |
| :------------------------------- | :--------------------------------------------------------------------------------------------- |
| **BrowserbaseLoadTool** | A tool for interacting with and extracting data from web browsers. |
| **CodeDocsSearchTool** | A RAG tool optimized for searching through code documentation and related technical documents. |
| **CodeInterpreterTool** | A tool for interpreting python code. |
| **ComposioTool** | Enables use of Composio tools. |
| **CSVSearchTool** | A RAG tool designed for searching within CSV files, tailored to handle structured data. |
| **DALL-E Tool** | A tool for generating images using the DALL-E API. |
| **DirectorySearchTool** | A RAG tool for searching within directories, useful for navigating through file systems. |
| **DOCXSearchTool** | A RAG tool aimed at searching within DOCX documents, ideal for processing Word files. |
| **DirectoryReadTool** | Facilitates reading and processing of directory structures and their contents. |
| **EXASearchTool** | A tool designed for performing exhaustive searches across various data sources. |
| **FileReadTool** | Enables reading and extracting data from files, supporting various file formats. |
| **FirecrawlSearchTool** | A tool to search webpages using Firecrawl and return the results. |
| **FirecrawlCrawlWebsiteTool** | A tool for crawling webpages using Firecrawl. |
| **FirecrawlScrapeWebsiteTool** | A tool for scraping webpage URLs using Firecrawl and returning their contents. |
| **GithubSearchTool** | A RAG tool for searching within GitHub repositories, useful for code and documentation search. |
| **SerperDevTool** | A specialized tool for development purposes, with specific functionalities under development. |
| **TXTSearchTool** | A RAG tool focused on searching within text (.txt) files, suitable for unstructured data. |
| **JSONSearchTool** | A RAG tool designed for searching within JSON files, catering to structured data handling. |
| **LlamaIndexTool** | Enables the use of LlamaIndex tools. |
| **MDXSearchTool** | A RAG tool tailored for searching within Markdown (MDX) files, useful for documentation. |
| **PDFSearchTool** | A RAG tool aimed at searching within PDF documents, ideal for processing scanned documents. |
| **PGSearchTool** | A RAG tool optimized for searching within PostgreSQL databases, suitable for database queries. |
| **Vision Tool** | A tool for extracting text from images. |
| **RagTool** | A general-purpose RAG tool capable of handling various data sources and types. |
| **ScrapeElementFromWebsiteTool** | Enables scraping specific elements from websites, useful for targeted data extraction. |
| **ScrapeWebsiteTool** | Facilitates scraping entire websites, ideal for comprehensive data collection. |
| **WebsiteSearchTool** | A RAG tool for searching website content, optimized for web data extraction. |
| **XMLSearchTool** | A RAG tool designed for searching within XML files, suitable for structured data formats. |
| **YoutubeChannelSearchTool** | A RAG tool for searching within YouTube channels, useful for video content analysis. |
| **YoutubeVideoSearchTool** | A RAG tool aimed at searching within YouTube videos, ideal for video data extraction. |
## Creating your own Tools
<Tip>
Developers can craft `custom tools` tailored for their agents' needs or utilize pre-built options.
</Tip>
There are two main ways to create a CrewAI tool:
### Subclassing `BaseTool`
```python Code
from crewai.tools import BaseTool

class MyCustomTool(BaseTool):
    name: str = "Name of my tool"
@@ -167,7 +164,7 @@ class MyCustomTool(BaseTool):
### Utilizing the `tool` Decorator
```python Code
from crewai.tools import tool

@tool("Name of my tool")
def my_tool(question: str) -> str:
    """Clear description for what this tool is useful for, your agent will need this information to use it."""
@@ -178,11 +175,13 @@ def my_tool(question: str) -> str:
### Custom Caching Mechanism
<Tip>
Tools can optionally implement a `cache_function` to fine-tune caching behavior. This function determines when to cache results based on specific conditions, offering granular control over caching logic.
</Tip>
```python Code
from crewai.tools import tool

@tool
def multiplication_tool(first_number: int, second_number: int) -> str:
@@ -208,6 +207,6 @@ writer1 = Agent(
## Conclusion
Tools are pivotal in extending the capabilities of CrewAI agents, enabling them to undertake a broad spectrum of tasks and collaborate effectively.
When building solutions with CrewAI, leverage both custom and existing tools to empower your agents and enhance the AI ecosystem. Consider utilizing error handling,
caching mechanisms, and the flexibility of tool arguments to optimize your agents' performance and capabilities.
@@ -6,28 +6,27 @@ icon: hammer
## Creating and Utilizing Tools in CrewAI
This guide provides detailed instructions on creating custom tools for the CrewAI framework and how to efficiently manage and utilize these tools,
incorporating the latest functionalities such as tool delegation, error handling, and dynamic tool calling. It also highlights the importance of collaboration tools,
enabling agents to perform a wide range of actions.
### Subclassing `BaseTool`
To create a personalized tool, inherit from `BaseTool` and define the necessary attributes, including the `args_schema` for input validation, and the `_run` method.
```python Code
from typing import Type

from crewai.tools import BaseTool
from pydantic import BaseModel, Field

class MyToolInput(BaseModel):
    """Input schema for MyCustomTool."""
    argument: str = Field(..., description="Description of the argument.")

class MyCustomTool(BaseTool):
    name: str = "Name of my tool"
    description: str = "What this tool does. It's vital for effective utilization."
    args_schema: Type[BaseModel] = MyToolInput

    def _run(self, argument: str) -> str:
        # Your tool's logic here
@@ -40,7 +39,7 @@ Alternatively, you can use the tool decorator `@tool`. This approach allows you
offering a concise and efficient way to create specialized tools tailored to your needs.
```python Code
from crewai.tools import tool

@tool("Tool Name")
def my_simple_tool(question: str) -> str:
@@ -66,5 +65,5 @@ def my_cache_strategy(arguments: dict, result: str) -> bool:
cached_tool.cache_function = my_cache_strategy
```
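The excerpt above ends by attaching `my_cache_strategy` to the tool; a minimal sketch of what such a strategy could look like (the even-result condition is illustrative, an assumption rather than the original implementation):
```python Code
def my_cache_strategy(arguments: dict, result: str) -> bool:
    # Cache only results we consider worth reusing; caching even
    # products here is purely illustrative.
    return int(result) % 2 == 0

cached_tool.cache_function = my_cache_strategy
```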
By adhering to these guidelines and incorporating new functionalities and collaboration tools into your tool creation and management processes,
you can leverage the full capabilities of the CrewAI framework, enhancing both the development experience and the efficiency of your AI agents.
@@ -330,4 +330,4 @@ This will clear the crew's memory, allowing for a fresh start.
## Deploying Your Project
The easiest way to deploy your crew is through [CrewAI Enterprise](http://app.crewai.com/), where you can deploy your crew in a few clicks.
@@ -34,6 +34,7 @@ from crewai_tools import GithubSearchTool
# Initialize the tool for semantic searches within a specific GitHub repository
tool = GithubSearchTool(
    github_repo='https://github.com/example/repo',
    gh_token='your_github_personal_access_token',
    content_types=['code', 'issue'] # Options: code, repo, pr, issue
)
@@ -41,6 +42,7 @@ tool = GithubSearchTool(
# Initialize the tool without a specific repository, so the agent can search any repository it learns about during its execution
tool = GithubSearchTool(
    gh_token='your_github_personal_access_token',
    content_types=['code', 'issue'] # Options: code, repo, pr, issue
)
```
@@ -48,6 +50,7 @@ tool = GithubSearchTool(
## Arguments
- `github_repo` : The URL of the GitHub repository where the search will be conducted. This is a mandatory field and specifies the target repository for your search.
- `gh_token` : Your GitHub Personal Access Token (PAT) required for authentication. You can create one in your GitHub account settings under Developer Settings > Personal Access Tokens.
- `content_types` : Specifies the types of content to include in your search. You must provide a list of content types from the following options: `code` for searching within the code,
`repo` for searching within the repository's general information, `pr` for searching within pull requests, and `issue` for searching within issues.
This field is mandatory and allows tailoring the search to specific content types within the GitHub repository.
@@ -77,5 +80,4 @@ tool = GithubSearchTool(
),
),
)
)
```
@@ -8,13 +8,13 @@ icon: eye
## Description
This tool is used to extract text from images. When passed to the agent it will extract the text from the image and then use it to generate a response, report or any other output.
The URL or the PATH of the image should be passed to the Agent.
## Installation
Install the crewai_tools package
```shell
pip install 'crewai[tools]'
```
@@ -44,7 +44,6 @@ def researcher(self) -> Agent:
The VisionTool requires the following arguments:

| Argument | Type | Description |
| :----------------- | :------- | :------------------------------------------------------------------------------- |
| **image_path_url** | `string` | **Mandatory**. The path to the image file from which text needs to be extracted. |
poetry.lock
@@ -1597,12 +1597,12 @@ files = [
google-auth = ">=2.14.1,<3.0.dev0"
googleapis-common-protos = ">=1.56.2,<2.0.dev0"
grpcio = [
    {version = ">=1.33.2,<2.0dev", optional = true, markers = "python_version < \"3.11\" and extra == \"grpc\""},
    {version = ">=1.49.1,<2.0dev", optional = true, markers = "python_version >= \"3.11\" and extra == \"grpc\""},
]
grpcio-status = [
    {version = ">=1.33.2,<2.0.dev0", optional = true, markers = "python_version < \"3.11\" and extra == \"grpc\""},
    {version = ">=1.49.1,<2.0.dev0", optional = true, markers = "python_version >= \"3.11\" and extra == \"grpc\""},
]
proto-plus = ">=1.22.3,<2.0.0dev"
protobuf = ">=3.19.5,<3.20.0 || >3.20.0,<3.20.1 || >3.20.1,<4.21.0 || >4.21.0,<4.21.1 || >4.21.1,<4.21.2 || >4.21.2,<4.21.3 || >4.21.3,<4.21.4 || >4.21.4,<4.21.5 || >4.21.5,<6.0.0.dev0"
@@ -4286,8 +4286,8 @@ files = [
[package.dependencies]
numpy = [
    {version = ">=1.22.4", markers = "python_version < \"3.11\""},
    {version = ">=1.23.2", markers = "python_version == \"3.11\""},
    {version = ">=1.26.0", markers = "python_version >= \"3.12\""},
]
python-dateutil = ">=2.8.2"
@@ -1,6 +1,6 @@
[project]
name = "crewai"
version = "0.79.4"
description = "Cutting-edge framework for orchestrating role-playing, autonomous AI agents. By fostering collaborative intelligence, CrewAI empowers agents to work together seamlessly, tackling complex tasks."
readme = "README.md"
requires-python = ">=3.10,<=3.13"
@@ -16,19 +16,19 @@ dependencies = [
"opentelemetry-exporter-otlp-proto-http>=1.22.0", "opentelemetry-exporter-otlp-proto-http>=1.22.0",
"instructor>=1.3.3", "instructor>=1.3.3",
"regex>=2024.9.11", "regex>=2024.9.11",
"crewai-tools>=0.12.1", "crewai-tools>=0.14.0",
"click>=8.1.7", "click>=8.1.7",
"python-dotenv>=1.0.0", "python-dotenv>=1.0.0",
"appdirs>=1.4.4", "appdirs>=1.4.4",
"jsonref>=1.1.0", "jsonref>=1.1.0",
"agentops>=0.3.0",
"embedchain>=0.1.114",
"json-repair>=0.25.2", "json-repair>=0.25.2",
"auth0-python>=4.7.1", "auth0-python>=4.7.1",
"litellm>=1.44.22", "litellm>=1.44.22",
"pyvis>=0.3.2", "pyvis>=0.3.2",
"uv>=0.4.18", "uv>=0.4.25",
"tomli-w>=1.1.0", "tomli-w>=1.1.0",
"tomli>=2.0.2",
"chromadb>=0.5.18",
] ]
[project.urls] [project.urls]
@@ -37,8 +37,9 @@ Documentation = "https://docs.crewai.com"
Repository = "https://github.com/crewAIInc/crewAI"

[project.optional-dependencies]
tools = ["crewai-tools>=0.14.0"]
agentops = ["agentops>=0.3.0"]
mem0 = ["mem0ai>=0.1.29"]

[tool.uv]
dev-dependencies = [
@@ -52,7 +53,7 @@ dev-dependencies = [
"mkdocs-material-extensions>=1.3.1", "mkdocs-material-extensions>=1.3.1",
"pillow>=10.2.0", "pillow>=10.2.0",
"cairosvg>=2.7.1", "cairosvg>=2.7.1",
"crewai-tools>=0.12.1", "crewai-tools>=0.14.0",
"pytest>=8.0.0", "pytest>=8.0.0",
"pytest-vcr>=1.0.2", "pytest-vcr>=1.0.2",
"python-dotenv>=1.0.0", "python-dotenv>=1.0.0",
@@ -14,5 +14,5 @@ warnings.filterwarnings(
    category=UserWarning,
    module="pydantic.main",
)
__version__ = "0.79.4"
__all__ = ["Agent", "Crew", "Process", "Task", "Pipeline", "Router", "LLM", "Flow"]
@@ -1,15 +1,18 @@
import os
import shutil
import subprocess
from typing import Any, List, Literal, Optional, Union

from pydantic import Field, InstanceOf, PrivateAttr, model_validator

from crewai.agents import CacheHandler
from crewai.agents.agent_builder.base_agent import BaseAgent
from crewai.agents.crew_agent_executor import CrewAgentExecutor
from crewai.cli.constants import ENV_VARS
from crewai.llm import LLM
from crewai.memory.contextual.contextual_memory import ContextualMemory
from crewai.tools.agent_tools.agent_tools import AgentTools
from crewai.tools import BaseTool
from crewai.utilities import Converter, Prompts
from crewai.utilities.constants import TRAINED_AGENTS_DATA_FILE, TRAINING_DATA_FILE
from crewai.utilities.token_counter_callback import TokenCalcHandler
@@ -112,10 +115,19 @@ class Agent(BaseAgent):
        default=2,
        description="Maximum number of retries for an agent to execute a task when an error occurs.",
    )
    code_execution_mode: Literal["safe", "unsafe"] = Field(
        default="safe",
        description="Mode for code execution: 'safe' (using Docker) or 'unsafe' (direct execution).",
    )

    @model_validator(mode="after")
    def post_init_setup(self):
        self.agent_ops_agent_name = self.role

        unnacepted_attributes = [
            "AWS_ACCESS_KEY_ID",
            "AWS_SECRET_ACCESS_KEY",
            "AWS_REGION_NAME",
        ]

        # Handle different cases for self.llm
        if isinstance(self.llm, str):
@@ -125,8 +137,12 @@ class Agent(BaseAgent):
            # If it's already an LLM instance, keep it as is
            pass
        elif self.llm is None:
            # Determine the model name from environment variables or use default
            model_name = (
                os.environ.get("OPENAI_MODEL_NAME")
                or os.environ.get("MODEL")
                or "gpt-4o-mini"
            )
            llm_params = {"model": model_name}

            api_base = os.environ.get("OPENAI_API_BASE") or os.environ.get(
@@ -135,9 +151,44 @@ class Agent(BaseAgent):
            if api_base:
                llm_params["base_url"] = api_base

            set_provider = model_name.split("/")[0] if "/" in model_name else "openai"

            # Iterate over all environment variables to find matching API keys or use defaults
            for provider, env_vars in ENV_VARS.items():
                if provider == set_provider:
                    for env_var in env_vars:
                        if env_var["key_name"] in unnacepted_attributes:
                            continue
                        # Check if the environment variable is set
                        if "key_name" in env_var:
                            env_value = os.environ.get(env_var["key_name"])
                            if env_value:
                                # Map key names containing "API_KEY" to "api_key"
                                key_name = (
                                    "api_key"
                                    if "API_KEY" in env_var["key_name"]
                                    else env_var["key_name"]
                                )
                                # Map key names containing "API_BASE" to "api_base"
                                key_name = (
                                    "api_base"
                                    if "API_BASE" in env_var["key_name"]
                                    else key_name
                                )
                                # Map key names containing "API_VERSION" to "api_version"
                                key_name = (
                                    "api_version"
                                    if "API_VERSION" in env_var["key_name"]
                                    else key_name
                                )
                                llm_params[key_name] = env_value
                        # Check for default values if the environment variable is not set
                        elif env_var.get("default", False):
                            for key, value in env_var.items():
                                if key not in ["prompt", "key_name", "default"]:
                                    # Only add default if the key is already set in os.environ
                                    if key in os.environ:
                                        llm_params[key] = value
            self.llm = LLM(**llm_params)
        else:
@@ -173,6 +224,9 @@ class Agent(BaseAgent):
        if not self.agent_executor:
            self._setup_agent_executor()

        if self.allow_code_execution:
            self._validate_docker_installation()

        return self
    def _setup_agent_executor(self):
@@ -184,7 +238,7 @@ class Agent(BaseAgent):
        self,
        task: Any,
        context: Optional[str] = None,
        tools: Optional[List[BaseTool]] = None,
    ) -> str:
        """Execute a task with the agent.
@@ -208,9 +262,11 @@ class Agent(BaseAgent):
        if self.crew and self.crew.memory:
            contextual_memory = ContextualMemory(
                self.crew.memory_config,
                self.crew._short_term_memory,
                self.crew._long_term_memory,
                self.crew._entity_memory,
                self.crew._user_memory,
            )
            memory = contextual_memory.build_context_for_task(task, context)
            if memory.strip() != "":
@@ -251,7 +307,9 @@ class Agent(BaseAgent):
        return result

    def create_agent_executor(
        self, tools: Optional[List[BaseTool]] = None, task=None
    ) -> None:
        """Create an agent executor for the agent.

        Returns:
@@ -308,7 +366,9 @@ class Agent(BaseAgent):
        try:
            from crewai_tools import CodeInterpreterTool

            # Set the unsafe_mode based on the code_execution_mode attribute
            unsafe_mode = self.code_execution_mode == "unsafe"
            return [CodeInterpreterTool(unsafe_mode=unsafe_mode)]
        except ModuleNotFoundError:
            self._logger.log(
                "info", "Coding tools not available. Install crewai_tools. "
@@ -322,7 +382,7 @@ class Agent(BaseAgent):
        tools_list = []
        try:
            # tentatively try to import from crewai_tools import BaseTool as CrewAITool
            from crewai.tools import BaseTool as CrewAITool

            for tool in tools:
                if isinstance(tool, CrewAITool):
@@ -381,33 +441,42 @@ class Agent(BaseAgent):
        return description

    def _render_text_description_and_args(self, tools: List[BaseTool]) -> str:
        """Render the tool name, description, and args in plain text.

        Output will be in the format of:

        .. code-block:: markdown

            search: This tool is used for search, args: {"query": {"type": "string"}}
            calculator: This tool is used for math, \
            args: {"expression": {"type": "string"}}
        """
        tool_strings = []
        for tool in tools:
            tool_strings.append(tool.description)

        return "\n".join(tool_strings)

    def _validate_docker_installation(self) -> None:
        """Check if Docker is installed and running."""
        if not shutil.which("docker"):
            raise RuntimeError(
                f"Docker is not installed. Please install Docker to use code execution with agent: {self.role}"
            )

        try:
            subprocess.run(
                ["docker", "info"],
                check=True,
                stdout=subprocess.PIPE,
                stderr=subprocess.PIPE,
            )
        except subprocess.CalledProcessError:
            raise RuntimeError(
                f"Docker is not running. Please start Docker to use code execution with agent: {self.role}"
            )

    @staticmethod
    def __tools_names(tools) -> str:
        return ", ".join([t.name for t in tools])
@@ -18,6 +18,7 @@ from pydantic_core import PydanticCustomError
from crewai.agents.agent_builder.utilities.base_token_process import TokenProcess
from crewai.agents.cache.cache_handler import CacheHandler
from crewai.agents.tools_handler import ToolsHandler
from crewai.tools import BaseTool
from crewai.utilities import I18N, Logger, RPMController
from crewai.utilities.config import process_config
@@ -49,11 +50,11 @@ class BaseAgent(ABC, BaseModel):
    Methods:
        execute_task(task: Any, context: Optional[str] = None, tools: Optional[List[BaseTool]] = None) -> str:
            Abstract method to execute a task.
        create_agent_executor(tools=None) -> None:
            Abstract method to create an agent executor.
        _parse_tools(tools: List[BaseTool]) -> List[Any]:
            Abstract method to parse tools.
        get_delegation_tools(agents: List["BaseAgent"]):
            Abstract method to set the agents task tools for handling delegation and question asking to other agents in crew.
@@ -105,7 +106,7 @@ class BaseAgent(ABC, BaseModel):
        default=False,
        description="Enable agent to delegate and ask questions among each other.",
    )
    tools: Optional[List[BaseTool]] = Field(
        default_factory=list, description="Tools at agents' disposal"
    )
    max_iter: Optional[int] = Field(
@@ -188,7 +189,7 @@ class BaseAgent(ABC, BaseModel):
self, self,
task: Any, task: Any,
context: Optional[str] = None, context: Optional[str] = None,
tools: Optional[List[Any]] = None, tools: Optional[List[BaseTool]] = None,
) -> str: ) -> str:
pass pass
@@ -197,11 +198,11 @@ class BaseAgent(ABC, BaseModel):
pass pass
@abstractmethod @abstractmethod
def _parse_tools(self, tools: List[Any]) -> List[Any]: def _parse_tools(self, tools: List[BaseTool]) -> List[BaseTool]:
pass pass
@abstractmethod @abstractmethod
def get_delegation_tools(self, agents: List["BaseAgent"]) -> List[Any]: def get_delegation_tools(self, agents: List["BaseAgent"]) -> List[BaseTool]:
"""Set the task tools that init BaseAgenTools class.""" """Set the task tools that init BaseAgenTools class."""
pass pass

View File

@@ -17,7 +17,7 @@ if TYPE_CHECKING:
 class CrewAgentExecutorMixin:
     crew: Optional["Crew"]
-    crew_agent: Optional["BaseAgent"]
+    agent: Optional["BaseAgent"]
     task: Optional["Task"]
     iterations: int
     have_forced_answer: bool
@@ -33,9 +33,9 @@ class CrewAgentExecutorMixin:
         """Create and save a short-term memory item if conditions are met."""
         if (
             self.crew
-            and self.crew_agent
+            and self.agent
             and self.task
-            and "Action: Delegate work to coworker" not in output.log
+            and "Action: Delegate work to coworker" not in output.text
         ):
             try:
                 if (
@@ -43,11 +43,11 @@ class CrewAgentExecutorMixin:
                     and self.crew._short_term_memory
                 ):
                     self.crew._short_term_memory.save(
-                        value=output.log,
+                        value=output.text,
                         metadata={
                             "observation": self.task.description,
                         },
-                        agent=self.crew_agent.role,
+                        agent=self.agent.role,
                     )
             except Exception as e:
                 print(f"Failed to add to short term memory: {e}")
@@ -61,18 +61,18 @@ class CrewAgentExecutorMixin:
             and self.crew._long_term_memory
             and self.crew._entity_memory
             and self.task
-            and self.crew_agent
+            and self.agent
         ):
             try:
-                ltm_agent = TaskEvaluator(self.crew_agent)
-                evaluation = ltm_agent.evaluate(self.task, output.log)
+                ltm_agent = TaskEvaluator(self.agent)
+                evaluation = ltm_agent.evaluate(self.task, output.text)
                 if isinstance(evaluation, ConverterError):
                     return
                 long_term_memory = LongTermMemoryItem(
                     task=self.task.description,
-                    agent=self.crew_agent.role,
+                    agent=self.agent.role,
                     quality=evaluation.quality,
                     datetime=str(time.time()),
                     expected_output=self.task.expected_output,

View File

@@ -4,6 +4,7 @@ from crewai.types.usage_metrics import UsageMetrics
 class TokenProcess:
     total_tokens: int = 0
     prompt_tokens: int = 0
+    cached_prompt_tokens: int = 0
     completion_tokens: int = 0
     successful_requests: int = 0
@@ -15,6 +16,9 @@ class TokenProcess:
         self.completion_tokens = self.completion_tokens + tokens
         self.total_tokens = self.total_tokens + tokens

+    def sum_cached_prompt_tokens(self, tokens: int):
+        self.cached_prompt_tokens = self.cached_prompt_tokens + tokens
+
     def sum_successful_requests(self, requests: int):
         self.successful_requests = self.successful_requests + requests
@@ -22,6 +26,7 @@ class TokenProcess:
         return UsageMetrics(
             total_tokens=self.total_tokens,
             prompt_tokens=self.prompt_tokens,
+            cached_prompt_tokens=self.cached_prompt_tokens,
             completion_tokens=self.completion_tokens,
             successful_requests=self.successful_requests,
         )
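
The effect of the new counter, as a standalone sketch (a simplified stand-in, not the library class): cached prompt tokens accumulate in their own field and are deliberately left out of total_tokens.

    class TokenProcessSketch:
        def __init__(self):
            self.total_tokens = 0
            self.prompt_tokens = 0
            self.cached_prompt_tokens = 0

        def sum_prompt_tokens(self, tokens: int):
            self.prompt_tokens += tokens
            self.total_tokens += tokens

        def sum_cached_prompt_tokens(self, tokens: int):
            # Recorded separately; never added to total_tokens.
            self.cached_prompt_tokens += tokens

    tp = TokenProcessSketch()
    tp.sum_prompt_tokens(1000)
    tp.sum_cached_prompt_tokens(400)
    assert tp.total_tokens == 1000 and tp.cached_prompt_tokens == 400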

View File

@@ -2,6 +2,7 @@ import json
 import re
 from typing import Any, Dict, List, Union

+from crewai.agents.agent_builder.base_agent import BaseAgent
 from crewai.agents.agent_builder.base_agent_executor_mixin import CrewAgentExecutorMixin
 from crewai.agents.parser import (
     FINAL_ANSWER_AND_PARSABLE_ACTION_ERROR_MESSAGE,
@@ -29,7 +30,7 @@ class CrewAgentExecutor(CrewAgentExecutorMixin):
         llm: Any,
         task: Any,
         crew: Any,
-        agent: Any,
+        agent: BaseAgent,
         prompt: dict[str, str],
         max_iter: int,
         tools: List[Any],
@@ -103,7 +104,8 @@ class CrewAgentExecutor(CrewAgentExecutorMixin):
         if self.crew and self.crew._train:
             self._handle_crew_training_output(formatted_answer)

+        self._create_short_term_memory(formatted_answer)
+        self._create_long_term_memory(formatted_answer)
         return {"output": formatted_answer.output}

     def _invoke_loop(self, formatted_answer=None):
@@ -115,6 +117,15 @@ class CrewAgentExecutor(CrewAgentExecutorMixin):
                     callbacks=self.callbacks,
                 )

+                if answer is None or answer == "":
+                    self._printer.print(
+                        content="Received None or empty response from LLM call.",
+                        color="red",
+                    )
+                    raise ValueError(
+                        "Invalid response from LLM call - None or empty."
+                    )
+
                 if not self.use_stop_words:
                     try:
                         self._format_answer(answer)
@@ -134,25 +145,26 @@ class CrewAgentExecutor(CrewAgentExecutorMixin):
                     formatted_answer.result = action_result
                     self._show_logs(formatted_answer)

                     if self.step_callback:
                         self.step_callback(formatted_answer)

                     if self._should_force_answer():
                         if self.have_forced_answer:
                             return AgentFinish(
+                                thought="",
                                 output=self._i18n.errors(
                                     "force_final_answer_error"
                                 ).format(formatted_answer.text),
                                 text=formatted_answer.text,
                             )
                         else:
                             formatted_answer.text += (
                                 f'\n{self._i18n.errors("force_final_answer")}'
                             )
                             self.have_forced_answer = True
                         self.messages.append(
                             self._format_msg(formatted_answer.text, role="assistant")
                         )

         except OutputParserException as e:
             self.messages.append({"role": "user", "content": e.error})
@@ -176,6 +188,8 @@ class CrewAgentExecutor(CrewAgentExecutorMixin):
         return formatted_answer

     def _show_start_logs(self):
+        if self.agent is None:
+            raise ValueError("Agent cannot be None")
         if self.agent.verbose or (
             hasattr(self, "crew") and getattr(self.crew, "verbose", False)
         ):
@@ -188,6 +202,8 @@ class CrewAgentExecutor(CrewAgentExecutorMixin):
         )

     def _show_logs(self, formatted_answer: Union[AgentAction, AgentFinish]):
+        if self.agent is None:
+            raise ValueError("Agent cannot be None")
         if self.agent.verbose or (
             hasattr(self, "crew") and getattr(self.crew, "verbose", False)
         ):
@@ -306,7 +322,7 @@ class CrewAgentExecutor(CrewAgentExecutorMixin):
         self, result: AgentFinish, human_feedback: str | None = None
     ) -> None:
         """Function to handle the process of the training data."""
-        agent_id = str(self.agent.id)
+        agent_id = str(self.agent.id)  # type: ignore
         # Load training data
         training_handler = CrewTrainingHandler(TRAINING_DATA_FILE)
@@ -339,7 +355,7 @@ class CrewAgentExecutor(CrewAgentExecutorMixin):
             "initial_output": result.output,
             "human_feedback": human_feedback,
             "agent": agent_id,
-            "agent_role": self.agent.role,
+            "agent_role": self.agent.role,  # type: ignore
         }
         if self.crew is not None and hasattr(self.crew, "_train_iteration"):
             train_iteration = self.crew._train_iteration
@@ -370,4 +386,5 @@ class CrewAgentExecutor(CrewAgentExecutorMixin):
         return CrewAgentParser(agent=self.agent).parse(answer)

     def _format_msg(self, prompt: str, role: str = "user") -> Dict[str, str]:
+        prompt = prompt.rstrip()
         return {"role": role, "content": prompt}

View File

@@ -1,6 +1,6 @@
 from typing import Any, Optional, Union

-from ..tools.cache_tools import CacheTools
+from ..tools.cache_tools.cache_tools import CacheTools
 from ..tools.tool_calling import InstructorToolCalling, ToolCalling
 from .cache.cache_handler import CacheHandler

View File

@@ -0,0 +1,70 @@
from pathlib import Path

import click

from crewai.cli.utils import copy_template


def add_crew_to_flow(crew_name: str) -> None:
    """Add a new crew to the current flow."""
    # Check if pyproject.toml exists in the current directory
    if not Path("pyproject.toml").exists():
        print("This command must be run from the root of a flow project.")
        raise click.ClickException(
            "This command must be run from the root of a flow project."
        )

    # Determine the flow folder based on the current directory
    flow_folder = Path.cwd()
    crews_folder = flow_folder / "src" / flow_folder.name / "crews"

    if not crews_folder.exists():
        print("Crews folder does not exist in the current flow.")
        raise click.ClickException("Crews folder does not exist in the current flow.")

    # Create the crew within the flow's crews directory
    create_embedded_crew(crew_name, parent_folder=crews_folder)

    click.echo(
        f"Crew {crew_name} added to the current flow successfully!",
    )


def create_embedded_crew(crew_name: str, parent_folder: Path) -> None:
    """Create a new crew within an existing flow project."""
    folder_name = crew_name.replace(" ", "_").replace("-", "_").lower()
    class_name = crew_name.replace("_", " ").replace("-", " ").title().replace(" ", "")

    crew_folder = parent_folder / folder_name

    if crew_folder.exists():
        if not click.confirm(
            f"Crew {folder_name} already exists. Do you want to override it?"
        ):
            click.secho("Operation cancelled.", fg="yellow")
            return
        click.secho(f"Overriding crew {folder_name}...", fg="green", bold=True)
    else:
        click.secho(f"Creating crew {folder_name}...", fg="green", bold=True)
        crew_folder.mkdir(parents=True)

    # Create config and crew.py files
    config_folder = crew_folder / "config"
    config_folder.mkdir(exist_ok=True)

    templates_dir = Path(__file__).parent / "templates" / "crew"
    config_template_files = ["agents.yaml", "tasks.yaml"]
    crew_template_file = f"{folder_name}.py"  # Updated file name

    for file_name in config_template_files:
        src_file = templates_dir / "config" / file_name
        dst_file = config_folder / file_name
        copy_template(src_file, dst_file, crew_name, class_name, folder_name)

    src_file = templates_dir / "crew.py"
    dst_file = crew_folder / crew_template_file
    copy_template(src_file, dst_file, crew_name, class_name, folder_name)

    click.secho(
        f"Crew {crew_name} added to the flow successfully!", fg="green", bold=True
    )
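
For orientation, a hypothetical invocation and the layout it would produce (names illustrative only):

    # create_embedded_crew("poem crew", parent_folder=Path("src/my_flow/crews"))
    # would copy templates into:
    #
    #   src/my_flow/crews/poem_crew/config/agents.yaml
    #   src/my_flow/crews/poem_crew/config/tasks.yaml
    #   src/my_flow/crews/poem_crew/poem_crew.py   # crew module named after the folder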

View File

@@ -34,7 +34,9 @@ class AuthenticationCommand:
             "scope": "openid",
             "audience": AUTH0_AUDIENCE,
         }
-        response = requests.post(url=self.DEVICE_CODE_URL, data=device_code_payload)
+        response = requests.post(
+            url=self.DEVICE_CODE_URL, data=device_code_payload, timeout=20
+        )
         response.raise_for_status()
         return response.json()
@@ -54,7 +56,7 @@ class AuthenticationCommand:
         attempts = 0
         while True and attempts < 5:
-            response = requests.post(self.TOKEN_URL, data=token_payload)
+            response = requests.post(self.TOKEN_URL, data=token_payload, timeout=30)
             token_data = response.json()

             if response.status_code == 200:
View File

@@ -3,6 +3,7 @@ from typing import Optional
 import click
 import pkg_resources

+from crewai.cli.add_crew_to_flow import add_crew_to_flow
 from crewai.cli.create_crew import create_crew
 from crewai.cli.create_flow import create_flow
 from crewai.cli.create_pipeline import create_pipeline
@@ -14,11 +15,11 @@ from .authentication.main import AuthenticationCommand
 from .deploy.main import DeployCommand
 from .evaluate_crew import evaluate_crew
 from .install_crew import install_crew
+from .kickoff_flow import kickoff_flow
 from .plot_flow import plot_flow
 from .replay_from_task import replay_task_command
 from .reset_memories_command import reset_memories_command
 from .run_crew import run_crew
-from .run_flow import run_flow
 from .tools.main import ToolCommand
 from .train_crew import train_crew
 from .update_crew import update_crew
@@ -32,10 +33,12 @@ def crewai():
 @crewai.command()
 @click.argument("type", type=click.Choice(["crew", "pipeline", "flow"]))
 @click.argument("name")
-def create(type, name):
+@click.option("--provider", type=str, help="The provider to use for the crew")
+@click.option("--skip_provider", is_flag=True, help="Skip provider validation")
+def create(type, name, provider, skip_provider=False):
     """Create a new crew, pipeline, or flow."""
     if type == "crew":
-        create_crew(name)
+        create_crew(name, provider, skip_provider)
     elif type == "pipeline":
         create_pipeline(name)
     elif type == "flow":
@@ -176,10 +179,16 @@ def test(n_iterations: int, model: str):
     evaluate_crew(n_iterations, model)


-@crewai.command()
-def install():
+@crewai.command(
+    context_settings=dict(
+        ignore_unknown_options=True,
+        allow_extra_args=True,
+    )
+)
+@click.pass_context
+def install(context):
     """Install the Crew."""
-    install_crew()
+    install_crew(context.args)


 @crewai.command()
@@ -304,11 +313,11 @@ def flow():
     pass


-@flow.command(name="run")
+@flow.command(name="kickoff")
 def flow_run():
-    """Run the Flow."""
+    """Kickoff the Flow."""
     click.echo("Running the Flow")
-    run_flow()
+    kickoff_flow()


 @flow.command(name="plot")
@@ -318,5 +327,13 @@ def flow_plot():
     plot_flow()


+@flow.command(name="add-crew")
+@click.argument("crew_name")
+def flow_add_crew(crew_name):
+    """Add a crew to an existing flow."""
+    click.echo(f"Adding crew {crew_name} to the flow")
+    add_crew_to_flow(crew_name)
+
+
 if __name__ == "__main__":
     crewai()

src/crewai/cli/config.py (new file, 38 lines)
View File

@@ -0,0 +1,38 @@
import json
from pathlib import Path

from pydantic import BaseModel, Field
from typing import Optional

DEFAULT_CONFIG_PATH = Path.home() / ".config" / "crewai" / "settings.json"


class Settings(BaseModel):
    tool_repository_username: Optional[str] = Field(None, description="Username for interacting with the Tool Repository")
    tool_repository_password: Optional[str] = Field(None, description="Password for interacting with the Tool Repository")
    config_path: Path = Field(default=DEFAULT_CONFIG_PATH, exclude=True)

    def __init__(self, config_path: Path = DEFAULT_CONFIG_PATH, **data):
        """Load Settings from config path"""
        config_path.parent.mkdir(parents=True, exist_ok=True)

        file_data = {}
        if config_path.is_file():
            try:
                with config_path.open("r") as f:
                    file_data = json.load(f)
            except json.JSONDecodeError:
                file_data = {}

        merged_data = {**file_data, **data}
        super().__init__(config_path=config_path, **merged_data)

    def dump(self) -> None:
        """Save current settings to settings.json"""
        if self.config_path.is_file():
            with self.config_path.open("r") as f:
                existing_data = json.load(f)
        else:
            existing_data = {}

        updated_data = {**existing_data, **self.model_dump(exclude_unset=True)}
        with self.config_path.open("w") as f:
            json.dump(updated_data, f, indent=4)
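
A hypothetical usage sketch of the Settings model above; the path is illustrative (the real default is ~/.config/crewai/settings.json), and values persist across instantiations because __init__ re-reads the file:

    from pathlib import Path

    settings = Settings(config_path=Path("/tmp/crewai-settings.json"))
    settings.tool_repository_username = "alice"  # hypothetical value
    settings.dump()                              # merged into the JSON file

    reloaded = Settings(config_path=Path("/tmp/crewai-settings.json"))
    assert reloaded.tool_repository_username == "alice"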

View File

@@ -1,19 +1,168 @@
 ENV_VARS = {
-    'openai': ['OPENAI_API_KEY'],
-    'anthropic': ['ANTHROPIC_API_KEY'],
-    'gemini': ['GEMINI_API_KEY'],
-    'groq': ['GROQ_API_KEY'],
-    'ollama': ['FAKE_KEY'],
+    "openai": [
+        {
+            "prompt": "Enter your OPENAI API key (press Enter to skip)",
+            "key_name": "OPENAI_API_KEY",
+        }
+    ],
+    "anthropic": [
+        {
+            "prompt": "Enter your ANTHROPIC API key (press Enter to skip)",
+            "key_name": "ANTHROPIC_API_KEY",
+        }
+    ],
+    "gemini": [
+        {
+            "prompt": "Enter your GEMINI API key (press Enter to skip)",
+            "key_name": "GEMINI_API_KEY",
+        }
+    ],
+    "groq": [
+        {
+            "prompt": "Enter your GROQ API key (press Enter to skip)",
+            "key_name": "GROQ_API_KEY",
+        }
+    ],
+    "watson": [
+        {
+            "prompt": "Enter your WATSONX URL (press Enter to skip)",
+            "key_name": "WATSONX_URL",
+        },
+        {
+            "prompt": "Enter your WATSONX API Key (press Enter to skip)",
+            "key_name": "WATSONX_APIKEY",
+        },
+        {
+            "prompt": "Enter your WATSONX Project Id (press Enter to skip)",
+            "key_name": "WATSONX_PROJECT_ID",
+        },
+    ],
+    "ollama": [
+        {
+            "default": True,
+            "API_BASE": "http://localhost:11434",
+        }
+    ],
+    "bedrock": [
+        {
+            "prompt": "Enter your AWS Access Key ID (press Enter to skip)",
+            "key_name": "AWS_ACCESS_KEY_ID",
+        },
+        {
+            "prompt": "Enter your AWS Secret Access Key (press Enter to skip)",
+            "key_name": "AWS_SECRET_ACCESS_KEY",
+        },
+        {
+            "prompt": "Enter your AWS Region Name (press Enter to skip)",
+            "key_name": "AWS_REGION_NAME",
+        },
+    ],
+    "azure": [
+        {
+            "prompt": "Enter your Azure deployment name (must start with 'azure/')",
+            "key_name": "model",
+        },
+        {
+            "prompt": "Enter your AZURE API key (press Enter to skip)",
+            "key_name": "AZURE_API_KEY",
+        },
+        {
+            "prompt": "Enter your AZURE API base URL (press Enter to skip)",
+            "key_name": "AZURE_API_BASE",
+        },
+        {
+            "prompt": "Enter your AZURE API version (press Enter to skip)",
+            "key_name": "AZURE_API_VERSION",
+        },
+    ],
+    "cerebras": [
+        {
+            "prompt": "Enter your Cerebras model name (must start with 'cerebras/')",
+            "key_name": "model",
+        },
+        {
+            "prompt": "Enter your Cerebras API version (press Enter to skip)",
+            "key_name": "CEREBRAS_API_KEY",
+        },
+    ],
 }

-PROVIDERS = ['openai', 'anthropic', 'gemini', 'groq', 'ollama']
+PROVIDERS = [
+    "openai",
+    "anthropic",
+    "gemini",
+    "groq",
+    "ollama",
+    "watson",
+    "bedrock",
+    "azure",
+    "cerebras",
+]

 MODELS = {
-    'openai': ['gpt-4', 'gpt-4o', 'gpt-4o-mini', 'o1-mini', 'o1-preview'],
-    'anthropic': ['claude-3-5-sonnet-20240620', 'claude-3-sonnet-20240229', 'claude-3-opus-20240229', 'claude-3-haiku-20240307'],
-    'gemini': ['gemini-1.5-flash', 'gemini-1.5-pro', 'gemini-gemma-2-9b-it', 'gemini-gemma-2-27b-it'],
-    'groq': ['llama-3.1-8b-instant', 'llama-3.1-70b-versatile', 'llama-3.1-405b-reasoning', 'gemma2-9b-it', 'gemma-7b-it'],
-    'ollama': ['llama3.1', 'mixtral'],
+    "openai": ["gpt-4", "gpt-4o", "gpt-4o-mini", "o1-mini", "o1-preview"],
+    "anthropic": [
+        "claude-3-5-sonnet-20240620",
+        "claude-3-sonnet-20240229",
+        "claude-3-opus-20240229",
+        "claude-3-haiku-20240307",
+    ],
+    "gemini": [
+        "gemini/gemini-1.5-flash",
+        "gemini/gemini-1.5-pro",
+        "gemini/gemini-gemma-2-9b-it",
+        "gemini/gemini-gemma-2-27b-it",
+    ],
+    "groq": [
+        "groq/llama-3.1-8b-instant",
+        "groq/llama-3.1-70b-versatile",
+        "groq/llama-3.1-405b-reasoning",
+        "groq/gemma2-9b-it",
+        "groq/gemma-7b-it",
+    ],
+    "ollama": ["ollama/llama3.1", "ollama/mixtral"],
+    "watson": [
+        "watsonx/google/flan-t5-xxl",
+        "watsonx/google/flan-ul2",
+        "watsonx/bigscience/mt0-xxl",
+        "watsonx/eleutherai/gpt-neox-20b",
+        "watsonx/ibm/mpt-7b-instruct2",
+        "watsonx/bigcode/starcoder",
+        "watsonx/meta-llama/llama-2-70b-chat",
+        "watsonx/meta-llama/llama-2-13b-chat",
+        "watsonx/ibm/granite-13b-instruct-v1",
+        "watsonx/ibm/granite-13b-chat-v1",
+        "watsonx/google/flan-t5-xl",
+        "watsonx/ibm/granite-13b-chat-v2",
+        "watsonx/ibm/granite-13b-instruct-v2",
+        "watsonx/elyza/elyza-japanese-llama-2-7b-instruct",
+        "watsonx/ibm-mistralai/mixtral-8x7b-instruct-v01-q",
+    ],
+    "bedrock": [
+        "bedrock/anthropic.claude-3-5-sonnet-20240620-v1:0",
+        "bedrock/anthropic.claude-3-sonnet-20240229-v1:0",
+        "bedrock/anthropic.claude-3-haiku-20240307-v1:0",
+        "bedrock/anthropic.claude-3-opus-20240229-v1:0",
+        "bedrock/anthropic.claude-v2:1",
+        "bedrock/anthropic.claude-v2",
+        "bedrock/anthropic.claude-instant-v1",
+        "bedrock/meta.llama3-1-405b-instruct-v1:0",
+        "bedrock/meta.llama3-1-70b-instruct-v1:0",
+        "bedrock/meta.llama3-1-8b-instruct-v1:0",
+        "bedrock/meta.llama3-70b-instruct-v1:0",
+        "bedrock/meta.llama3-8b-instruct-v1:0",
+        "bedrock/amazon.titan-text-lite-v1",
+        "bedrock/amazon.titan-text-express-v1",
+        "bedrock/cohere.command-text-v14",
+        "bedrock/ai21.j2-mid-v1",
+        "bedrock/ai21.j2-ultra-v1",
+        "bedrock/ai21.jamba-instruct-v1:0",
+        "bedrock/meta.llama2-13b-chat-v1",
+        "bedrock/meta.llama2-70b-chat-v1",
+        "bedrock/mistral.mistral-7b-instruct-v0:2",
+        "bedrock/mistral.mixtral-8x7b-instruct-v0:1",
+    ],
 }

 JSON_URL = "https://raw.githubusercontent.com/BerriAI/litellm/main/model_prices_and_context_window.json"

View File

@@ -1,8 +1,17 @@
+import shutil
+import sys
 from pathlib import Path

 import click

-from crewai.cli.utils import copy_template,load_env_vars, write_env_file
-from crewai.cli.provider import get_provider_data, select_provider, select_model, PROVIDERS
-from crewai.cli.constants import ENV_VARS
+from crewai.cli.constants import ENV_VARS, MODELS
+from crewai.cli.provider import (
+    get_provider_data,
+    select_model,
+    select_provider,
+)
+from crewai.cli.utils import copy_template, load_env_vars, write_env_file


 def create_folder_structure(name, parent_folder=None):
     folder_name = name.replace(" ", "_").replace("-", "_").lower()
@@ -13,29 +22,31 @@ def create_folder_structure(name, parent_folder=None):
     else:
         folder_path = Path(folder_name)

+    if folder_path.exists():
+        if not click.confirm(
+            f"Folder {folder_name} already exists. Do you want to override it?"
+        ):
+            click.secho("Operation cancelled.", fg="yellow")
+            sys.exit(0)
+        click.secho(f"Overriding folder {folder_name}...", fg="green", bold=True)
+        shutil.rmtree(folder_path)  # Delete the existing folder and its contents
+
     click.secho(
         f"Creating {'crew' if parent_folder else 'folder'} {folder_name}...",
         fg="green",
         bold=True,
     )

-    if not folder_path.exists():
-        folder_path.mkdir(parents=True)
-        (folder_path / "tests").mkdir(exist_ok=True)
-        if not parent_folder:
-            (folder_path / "src" / folder_name).mkdir(parents=True)
-            (folder_path / "src" / folder_name / "tools").mkdir(parents=True)
-            (folder_path / "src" / folder_name / "config").mkdir(parents=True)
-    else:
-        click.secho(
-            f"\tFolder {folder_name} already exists.",
-            fg="yellow",
-        )
+    folder_path.mkdir(parents=True)
+    (folder_path / "tests").mkdir(exist_ok=True)
+    if not parent_folder:
+        (folder_path / "src" / folder_name).mkdir(parents=True)
+        (folder_path / "src" / folder_name / "tools").mkdir(parents=True)
+        (folder_path / "src" / folder_name / "config").mkdir(parents=True)

     return folder_path, folder_name, class_name


 def copy_template_files(folder_path, name, class_name, parent_folder):
     package_dir = Path(__file__).parent
     templates_dir = package_dir / "templates" / "crew"
@@ -54,7 +65,9 @@ def copy_template_files(folder_path, name, class_name, parent_folder):
         dst_file = folder_path / file_name
         copy_template(src_file, dst_file, name, class_name, folder_path.name)

-    src_folder = folder_path / "src" / folder_path.name if not parent_folder else folder_path
+    src_folder = (
+        folder_path / "src" / folder_path.name if not parent_folder else folder_path
+    )

     for file_name in src_template_files:
         src_file = templates_dir / file_name
@@ -68,37 +81,88 @@ def copy_template_files(folder_path, name, class_name, parent_folder):
     copy_template(src_file, dst_file, name, class_name, folder_path.name)


-def create_crew(name, parent_folder=None):
+def create_crew(name, provider=None, skip_provider=False, parent_folder=None):
     folder_path, folder_name, class_name = create_folder_structure(name, parent_folder)
     env_vars = load_env_vars(folder_path)
+    if not skip_provider:
+        if not provider:
+            provider_models = get_provider_data()
+            if not provider_models:
+                return

-    provider_models = get_provider_data()
-    if not provider_models:
-        return
+        existing_provider = None
+        for provider, env_keys in ENV_VARS.items():
+            if any(
+                "key_name" in details and details["key_name"] in env_vars
+                for details in env_keys
+            ):
+                existing_provider = provider
+                break

-    selected_provider = select_provider(provider_models)
-    if not selected_provider:
-        return
-    provider = selected_provider
+        if existing_provider:
+            if not click.confirm(
+                f"Found existing environment variable configuration for {existing_provider.capitalize()}. Do you want to override it?"
+            ):
+                click.secho("Keeping existing provider configuration.", fg="yellow")
+                return

-    selected_model = select_model(provider, provider_models)
-    if not selected_model:
-        return
-    model = selected_model
+        provider_models = get_provider_data()
+        if not provider_models:
+            return

-    if provider in PROVIDERS:
-        api_key_var = ENV_VARS[provider][0]
-    else:
-        api_key_var = click.prompt(
-            f"Enter the environment variable name for your {provider.capitalize()} API key",
-            type=str
-        )
+        while True:
+            selected_provider = select_provider(provider_models)
+            if selected_provider is None:  # User typed 'q'
+                click.secho("Exiting...", fg="yellow")
+                sys.exit(0)
+            if selected_provider:  # Valid selection
+                break
+            click.secho(
+                "No provider selected. Please try again or press 'q' to exit.", fg="red"
+            )

-    env_vars = {api_key_var: "YOUR_API_KEY_HERE"}
-    write_env_file(folder_path, env_vars)
+        # Check if the selected provider has predefined models
+        if selected_provider in MODELS and MODELS[selected_provider]:
+            while True:
+                selected_model = select_model(selected_provider, provider_models)
+                if selected_model is None:  # User typed 'q'
+                    click.secho("Exiting...", fg="yellow")
+                    sys.exit(0)
+                if selected_model:  # Valid selection
+                    break
+                click.secho(
+                    "No model selected. Please try again or press 'q' to exit.",
+                    fg="red",
+                )
+            env_vars["MODEL"] = selected_model

-    env_vars['MODEL'] = model
-    click.secho(f"Selected model: {model}", fg="green")
+        # Check if the selected provider requires API keys
+        if selected_provider in ENV_VARS:
+            provider_env_vars = ENV_VARS[selected_provider]
+            for details in provider_env_vars:
+                if details.get("default", False):
+                    # Automatically add default key-value pairs
+                    for key, value in details.items():
+                        if key not in ["prompt", "key_name", "default"]:
+                            env_vars[key] = value
+                elif "key_name" in details:
+                    # Prompt for non-default key-value pairs
+                    prompt = details["prompt"]
+                    key_name = details["key_name"]
+                    api_key_value = click.prompt(prompt, default="", show_default=False)
+                    if api_key_value.strip():
+                        env_vars[key_name] = api_key_value
+
+        if env_vars:
+            write_env_file(folder_path, env_vars)
+            click.secho("API keys and model saved to .env file", fg="green")
+        else:
+            click.secho(
+                "No API keys provided. Skipping .env file creation.", fg="yellow"
+            )
+
+        click.secho(f"Selected model: {env_vars.get('MODEL', 'N/A')}", fg="green")

     package_dir = Path(__file__).parent
     templates_dir = package_dir / "templates" / "crew"

View File

@@ -3,12 +3,13 @@ import subprocess

 import click


-def install_crew() -> None:
+def install_crew(proxy_options: list[str]) -> None:
     """
     Install the crew by running the UV command to lock and install.
     """
     try:
-        subprocess.run(["uv", "sync"], check=True, capture_output=False, text=True)
+        command = ["uv", "sync"] + proxy_options
+        subprocess.run(command, check=True, capture_output=False, text=True)

     except subprocess.CalledProcessError as e:
         click.echo(f"An error occurred while running the crew: {e}", err=True)

View File

@@ -3,11 +3,11 @@ import subprocess

 import click


-def run_flow() -> None:
+def kickoff_flow() -> None:
     """
-    Run the flow by running a command in the UV environment.
+    Kickoff the flow by running a command in the UV environment.
     """
-    command = ["uv", "run", "run_flow"]
+    command = ["uv", "run", "kickoff"]
     try:
         result = subprocess.run(command, capture_output=False, text=True, check=True)

View File

@@ -7,7 +7,7 @@ def plot_flow() -> None:
     """
     Plot the flow by running a command in the UV environment.
     """
-    command = ["uv", "run", "plot_flow"]
+    command = ["uv", "run", "plot"]
     try:
         result = subprocess.run(command, capture_output=False, text=True, check=True)

View File

@@ -1,67 +1,91 @@
 import json
 import time
-import requests
 from collections import defaultdict
+from pathlib import Path

 import click
-from pathlib import Path
+import requests

-from crewai.cli.constants import PROVIDERS, MODELS, JSON_URL
+from crewai.cli.constants import JSON_URL, MODELS, PROVIDERS


 def select_choice(prompt_message, choices):
     """
     Presents a list of choices to the user and prompts them to select one.

     Args:
     - prompt_message (str): The message to display to the user before presenting the choices.
     - choices (list): A list of options to present to the user.

     Returns:
-    - str: The selected choice from the list, or None if the operation is aborted or an invalid selection is made.
+    - str: The selected choice from the list, or None if the user chooses to quit.
     """
-    provider_models = get_provider_data()
-    if not provider_models:
-        return
     click.secho(prompt_message, fg="cyan")
     for idx, choice in enumerate(choices, start=1):
         click.secho(f"{idx}. {choice}", fg="cyan")
+    click.secho("q. Quit", fg="cyan")

-    try:
-        selected_index = click.prompt("Enter the number of your choice", type=int) - 1
-    except click.exceptions.Abort:
-        click.secho("Operation aborted by the user.", fg="red")
-        return None
-    if not (0 <= selected_index < len(choices)):
-        click.secho("Invalid selection.", fg="red")
-        return None
-    return choices[selected_index]
+    while True:
+        choice = click.prompt(
+            "Enter the number of your choice or 'q' to quit", type=str
+        )
+        if choice.lower() == "q":
+            return None
+        try:
+            selected_index = int(choice) - 1
+            if 0 <= selected_index < len(choices):
+                return choices[selected_index]
+        except ValueError:
+            pass
+        click.secho(
+            "Invalid selection. Please select a number between 1 and 6 or 'q' to quit.",
+            fg="red",
+        )


 def select_provider(provider_models):
     """
     Presents a list of providers to the user and prompts them to select one.

     Args:
     - provider_models (dict): A dictionary of provider models.

     Returns:
-    - str: The selected provider, or None if the operation is aborted or an invalid selection is made.
+    - str: The selected provider
+    - None: If user explicitly quits
     """
     predefined_providers = [p.lower() for p in PROVIDERS]
     all_providers = sorted(set(predefined_providers + list(provider_models.keys())))

-    provider = select_choice("Select a provider to set up:", predefined_providers + ['other'])
-    if not provider:
+    provider = select_choice(
+        "Select a provider to set up:", predefined_providers + ["other"]
+    )
+    if provider is None:  # User typed 'q'
         return None
-    provider = provider.lower()

-    if provider == 'other':
+    if provider == "other":
         provider = select_choice("Select a provider from the full list:", all_providers)
-        if not provider:
+        if provider is None:  # User typed 'q'
             return None
-    return provider
+
+    return provider.lower() if provider else False


 def select_model(provider, provider_models):
     """
     Presents a list of models for a given provider to the user and prompts them to select one.

     Args:
     - provider (str): The provider for which to select a model.
     - provider_models (dict): A dictionary of provider models.

     Returns:
     - str: The selected model, or None if the operation is aborted or an invalid selection is made.
     """
@@ -76,37 +100,49 @@ def select_model(provider, provider_models):
         click.secho(f"No models available for provider '{provider}'.", fg="red")
         return None

-    selected_model = select_choice(f"Select a model to use for {provider.capitalize()}:", available_models)
+    selected_model = select_choice(
+        f"Select a model to use for {provider.capitalize()}:", available_models
+    )
     return selected_model


 def load_provider_data(cache_file, cache_expiry):
     """
     Loads provider data from a cache file if it exists and is not expired. If the cache is expired or corrupted, it fetches the data from the web.

     Args:
     - cache_file (Path): The path to the cache file.
     - cache_expiry (int): The cache expiry time in seconds.

     Returns:
     - dict or None: The loaded provider data or None if the operation fails.
     """
     current_time = time.time()
-    if cache_file.exists() and (current_time - cache_file.stat().st_mtime) < cache_expiry:
+    if (
+        cache_file.exists()
+        and (current_time - cache_file.stat().st_mtime) < cache_expiry
+    ):
         data = read_cache_file(cache_file)
         if data:
             return data
-        click.secho("Cache is corrupted. Fetching provider data from the web...", fg="yellow")
+        click.secho(
+            "Cache is corrupted. Fetching provider data from the web...", fg="yellow"
+        )
     else:
-        click.secho("Cache expired or not found. Fetching provider data from the web...", fg="cyan")
+        click.secho(
+            "Cache expired or not found. Fetching provider data from the web...",
+            fg="cyan",
+        )
     return fetch_provider_data(cache_file)


 def read_cache_file(cache_file):
     """
     Reads and returns the JSON content from a cache file. Returns None if the file contains invalid JSON.

     Args:
     - cache_file (Path): The path to the cache file.

     Returns:
     - dict or None: The JSON content of the cache file or None if the JSON is invalid.
     """
@@ -116,18 +152,19 @@ def read_cache_file(cache_file):
     except json.JSONDecodeError:
         return None


 def fetch_provider_data(cache_file):
     """
     Fetches provider data from a specified URL and caches it to a file.

     Args:
     - cache_file (Path): The path to the cache file.

     Returns:
     - dict or None: The fetched provider data or None if the operation fails.
     """
     try:
-        response = requests.get(JSON_URL, stream=True, timeout=10)
+        response = requests.get(JSON_URL, stream=True, timeout=60)
         response.raise_for_status()
         data = download_data(response)
         with open(cache_file, "w") as f:
@@ -139,38 +176,42 @@ def fetch_provider_data(cache_file):
     click.secho("Error parsing provider data. Invalid JSON format.", fg="red")
     return None


 def download_data(response):
     """
     Downloads data from a given HTTP response and returns the JSON content.

     Args:
     - response (requests.Response): The HTTP response object.

     Returns:
     - dict: The JSON content of the response.
     """
-    total_size = int(response.headers.get('content-length', 0))
+    total_size = int(response.headers.get("content-length", 0))
     block_size = 8192
     data_chunks = []
-    with click.progressbar(length=total_size, label='Downloading', show_pos=True) as progress_bar:
+    with click.progressbar(
+        length=total_size, label="Downloading", show_pos=True
+    ) as progress_bar:
         for chunk in response.iter_content(block_size):
             if chunk:
                 data_chunks.append(chunk)
                 progress_bar.update(len(chunk))
-    data_content = b''.join(data_chunks)
-    return json.loads(data_content.decode('utf-8'))
+    data_content = b"".join(data_chunks)
+    return json.loads(data_content.decode("utf-8"))


 def get_provider_data():
     """
     Retrieves provider data from a cache file, filters out models based on provider criteria, and returns a dictionary of providers mapped to their models.

     Returns:
     - dict or None: A dictionary of providers mapped to their models or None if the operation fails.
     """
-    cache_dir = Path.home() / '.crewai'
+    cache_dir = Path.home() / ".crewai"
     cache_dir.mkdir(exist_ok=True)
-    cache_file = cache_dir / 'provider_cache.json'
+    cache_file = cache_dir / "provider_cache.json"
     cache_expiry = 24 * 3600

     data = load_provider_data(cache_file, cache_expiry)
     if not data:
@@ -179,8 +220,8 @@ def get_provider_data():
     provider_models = defaultdict(list)
     for model_name, properties in data.items():
         provider = properties.get("litellm_provider", "").strip().lower()
-        if 'http' in provider or provider == 'other':
+        if "http" in provider or provider == "other":
             continue
         if provider:
             provider_models[provider].append(model_name)
     return provider_models

View File

@@ -1,10 +1,9 @@
 import subprocess

 import click
-import tomllib
 from packaging import version

-from crewai.cli.utils import get_crewai_version
+from crewai.cli.utils import get_crewai_version, read_toml


 def run_crew() -> None:
@@ -15,10 +14,9 @@ def run_crew() -> None:
     crewai_version = get_crewai_version()
     min_required_version = "0.71.0"

-    with open("pyproject.toml", "rb") as f:
-        data = tomllib.load(f)
+    pyproject_data = read_toml()

-    if data.get("tool", {}).get("poetry") and (
+    if pyproject_data.get("tool", {}).get("poetry") and (
         version.parse(crewai_version) < version.parse(min_required_version)
     ):
         click.secho(
@@ -26,7 +24,6 @@ def run_crew() -> None:
             f"Please run `crewai update` to update your pyproject.toml to use uv.",
             fg="red",
         )
-        print()

     try:
         subprocess.run(command, capture_output=False, text=True, check=True)
@@ -35,10 +32,7 @@ def run_crew() -> None:
         click.echo(f"An error occurred while running the crew: {e}", err=True)
         click.echo(e.output, err=True, nl=True)

-        with open("pyproject.toml", "rb") as f:
-            data = tomllib.load(f)
-
-        if data.get("tool", {}).get("poetry"):
+        if pyproject_data.get("tool", {}).get("poetry"):
             click.secho(
                 "It's possible that you are using an old version of crewAI that uses poetry, please run `crewai update` to update your pyproject.toml to use uv.",
                 fg="yellow",

View File

@@ -8,9 +8,12 @@ from crewai.project import CrewBase, agent, crew, task
 # from crewai_tools import SerperDevTool


 @CrewBase
-class {{crew_name}}Crew():
+class {{crew_name}}():
     """{{crew_name}} crew"""

+    agents_config = 'config/agents.yaml'
+    tasks_config = 'config/tasks.yaml'
+
     @agent
     def researcher(self) -> Agent:
         return Agent(
@@ -48,4 +51,4 @@ class {{crew_name}}Crew():
     process=Process.sequential,
     verbose=True,
     # process=Process.hierarchical, # In case you wanna use that instead https://docs.crewai.com/how-to/Hierarchical/
 )

View File

@@ -1,9 +1,13 @@
 #!/usr/bin/env python
 import sys
+import warnings

-from {{folder_name}}.crew import {{crew_name}}Crew
+from {{folder_name}}.crew import {{crew_name}}
+
+warnings.filterwarnings("ignore", category=SyntaxWarning, module="pysbd")

 # This main file is intended to be a way for you to run your
-# crew locally, so refrain from adding necessary logic into this file.
+# crew locally, so refrain from adding unnecessary logic into this file.
 # Replace with inputs you want to test with, it will automatically
 # interpolate any tasks and agents information
@@ -14,7 +18,7 @@ def run():
     inputs = {
         'topic': 'AI LLMs'
     }
-    {{crew_name}}Crew().crew().kickoff(inputs=inputs)
+    {{crew_name}}().crew().kickoff(inputs=inputs)


 def train():
@@ -25,7 +29,7 @@ def train():
         "topic": "AI LLMs"
     }
     try:
-        {{crew_name}}Crew().crew().train(n_iterations=int(sys.argv[1]), filename=sys.argv[2], inputs=inputs)
+        {{crew_name}}().crew().train(n_iterations=int(sys.argv[1]), filename=sys.argv[2], inputs=inputs)
     except Exception as e:
         raise Exception(f"An error occurred while training the crew: {e}")
@@ -35,7 +39,7 @@ def replay():
     Replay the crew execution from a specific task.
     """
     try:
-        {{crew_name}}Crew().crew().replay(task_id=sys.argv[1])
+        {{crew_name}}().crew().replay(task_id=sys.argv[1])
     except Exception as e:
         raise Exception(f"An error occurred while replaying the crew: {e}")
@@ -48,7 +52,7 @@ def test():
         "topic": "AI LLMs"
     }
     try:
-        {{crew_name}}Crew().crew().test(n_iterations=int(sys.argv[1]), openai_model_name=sys.argv[2], inputs=inputs)
+        {{crew_name}}().crew().test(n_iterations=int(sys.argv[1]), openai_model_name=sys.argv[2], inputs=inputs)
     except Exception as e:
         raise Exception(f"An error occurred while replaying the crew: {e}")

View File

@@ -5,7 +5,7 @@ description = "{{name}} using crewAI"
 authors = [{ name = "Your Name", email = "you@example.com" }]
 requires-python = ">=3.10,<=3.13"
 dependencies = [
-    "crewai[tools]>=0.67.1,<1.0.0"
+    "crewai[tools]>=0.79.4,<1.0.0"
 ]

 [project.scripts]

View File

@@ -1,11 +1,18 @@
-from crewai_tools import BaseTool
+from crewai.tools import BaseTool
+from typing import Type
+from pydantic import BaseModel, Field
+
+
+class MyCustomToolInput(BaseModel):
+    """Input schema for MyCustomTool."""
+    argument: str = Field(..., description="Description of the argument.")


 class MyCustomTool(BaseTool):
     name: str = "Name of my tool"
     description: str = (
         "Clear description for what this tool is useful for, you agent will need this information to use it."
     )
+    args_schema: Type[BaseModel] = MyCustomToolInput

     def _run(self, argument: str) -> str:
         # Implementation goes here
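
A sketch of what the new args_schema enables, assuming pydantic v2 (the schema describes and validates the tool's arguments for the agent):

    tool = MyCustomTool()
    print(tool.args_schema.model_json_schema()["properties"])
    # roughly: {'argument': {'description': 'Description of the argument.', ...}}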

View File

@@ -1,65 +1,53 @@
 #!/usr/bin/env python
-import asyncio
 from random import randint

 from pydantic import BaseModel

 from crewai.flow.flow import Flow, listen, start

 from .crews.poem_crew.poem_crew import PoemCrew


 class PoemState(BaseModel):
     sentence_count: int = 1
     poem: str = ""


 class PoemFlow(Flow[PoemState]):

     @start()
     def generate_sentence_count(self):
         print("Generating sentence count")
-        # Generate a number between 1 and 5
         self.state.sentence_count = randint(1, 5)

     @listen(generate_sentence_count)
     def generate_poem(self):
         print("Generating poem")
-        print(f"State before poem: {self.state}")
-        result = PoemCrew().crew().kickoff(inputs={"sentence_count": self.state.sentence_count})
+        result = (
+            PoemCrew()
+            .crew()
+            .kickoff(inputs={"sentence_count": self.state.sentence_count})
+        )

         print("Poem generated", result.raw)
         self.state.poem = result.raw
-        print(f"State after generate_poem: {self.state}")

     @listen(generate_poem)
     def save_poem(self):
         print("Saving poem")
-        print(f"State before save_poem: {self.state}")
         with open("poem.txt", "w") as f:
             f.write(self.state.poem)
-        print(f"State after save_poem: {self.state}")


-async def run_flow():
-    """
-    Run the flow.
-    """
+def kickoff():
     poem_flow = PoemFlow()
-    await poem_flow.kickoff()
+    poem_flow.kickoff()


-async def plot_flow():
-    """
-    Plot the flow.
-    """
+def plot():
     poem_flow = PoemFlow()
     poem_flow.plot()


-def main():
-    asyncio.run(run_flow())
-
-
-def plot():
-    asyncio.run(plot_flow())
-
-
 if __name__ == "__main__":
-    main()
+    kickoff()
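
The template is now fully synchronous: Flow.kickoff() is called directly, with no asyncio plumbing. A minimal flow of the same shape (a sketch assuming this post-0.79 flow API):

    from crewai.flow.flow import Flow, listen, start

    class TinyFlow(Flow):
        @start()
        def begin(self):
            return "hello"

        @listen(begin)
        def finish(self, result):
            print(result)  # -> hello

    TinyFlow().kickoff()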

View File

@@ -5,14 +5,12 @@ description = "{{name}} using crewAI"
 authors = [{ name = "Your Name", email = "you@example.com" }]
 requires-python = ">=3.10,<=3.13"
 dependencies = [
-    "crewai[tools]>=0.67.1,<1.0.0",
-    "asyncio"
+    "crewai[tools]>=0.79.4,<1.0.0",
 ]

 [project.scripts]
-{{folder_name}} = "{{folder_name}}.main:main"
-run_flow = "{{folder_name}}.main:main"
-plot_flow = "{{folder_name}}.main:plot"
+kickoff = "{{folder_name}}.main:kickoff"
+plot = "{{folder_name}}.main:plot"

 [build-system]
 requires = ["hatchling"]

View File

@@ -1,4 +1,13 @@
-from crewai_tools import BaseTool
+from typing import Type
+
+from crewai.tools import BaseTool
+from pydantic import BaseModel, Field
+
+
+class MyCustomToolInput(BaseModel):
+    """Input schema for MyCustomTool."""
+    argument: str = Field(..., description="Description of the argument.")


 class MyCustomTool(BaseTool):
@@ -6,6 +15,7 @@ class MyCustomTool(BaseTool):
     description: str = (
         "Clear description for what this tool is useful for, you agent will need this information to use it."
     )
+    args_schema: Type[BaseModel] = MyCustomToolInput

     def _run(self, argument: str) -> str:
         # Implementation goes here

View File

@@ -6,7 +6,7 @@ authors = ["Your Name <you@example.com>"]

 [tool.poetry.dependencies]
 python = ">=3.10,<=3.13"
-crewai = { extras = ["tools"], version = ">=0.70.1,<1.0.0" }
+crewai = { extras = ["tools"], version = ">=0.79.4,<1.0.0" }
 asyncio = "*"

 [tool.poetry.scripts]

View File

@@ -1,11 +1,18 @@
-from crewai_tools import BaseTool
+from typing import Type
+
+from crewai.tools import BaseTool
+from pydantic import BaseModel, Field
+
+
+class MyCustomToolInput(BaseModel):
+    """Input schema for MyCustomTool."""
+    argument: str = Field(..., description="Description of the argument.")


 class MyCustomTool(BaseTool):
     name: str = "Name of my tool"
     description: str = (
         "Clear description for what this tool is useful for, you agent will need this information to use it."
     )
+    args_schema: Type[BaseModel] = MyCustomToolInput

     def _run(self, argument: str) -> str:
         # Implementation goes here

View File

@@ -5,7 +5,7 @@ description = "{{name}} using crewAI"
 authors = ["Your Name <you@example.com>"]
 requires-python = ">=3.10,<=3.13"
 dependencies = [
-    "crewai[tools]>=0.67.1,<1.0.0"
+    "crewai[tools]>=0.79.4,<1.0.0"
 ]

 [project.scripts]

View File

@@ -1,11 +1,18 @@
-from crewai_tools import BaseTool
+from typing import Type
+
+from crewai.tools import BaseTool
+from pydantic import BaseModel, Field
+
+
+class MyCustomToolInput(BaseModel):
+    """Input schema for MyCustomTool."""
+    argument: str = Field(..., description="Description of the argument.")


 class MyCustomTool(BaseTool):
     name: str = "Name of my tool"
     description: str = (
         "Clear description for what this tool is useful for, you agent will need this information to use it."
     )
+    args_schema: Type[BaseModel] = MyCustomToolInput

     def _run(self, argument: str) -> str:
         # Implementation goes here

View File

@@ -5,6 +5,6 @@ description = "Power up your crews with {{folder_name}}"
 readme = "README.md"
 requires-python = ">=3.10,<=3.13"
 dependencies = [
-    "crewai[tools]>=0.70.1"
+    "crewai[tools]>=0.79.4"
 ]

View File

@@ -1,4 +1,5 @@
-from crewai_tools import BaseTool
+from crewai.tools import BaseTool
+

 class {{class_name}}(BaseTool):
     name: str = "Name of my tool"

View File

@@ -1,17 +1,15 @@
 import base64
 import os
-import platform
 import subprocess
 import tempfile
 from pathlib import Path
-from netrc import netrc
-import stat

 import click
 from rich.console import Console

 from crewai.cli import git
 from crewai.cli.command import BaseCommand, PlusAPIMixin
+from crewai.cli.config import Settings
 from crewai.cli.utils import (
     get_project_description,
     get_project_name,
@@ -28,8 +26,6 @@ class ToolCommand(BaseCommand, PlusAPIMixin):
     A class to handle tool repository related operations for CrewAI projects.
     """

-    BASE_URL = "https://app.crewai.com/pypi/"
-
     def __init__(self):
         BaseCommand.__init__(self)
         PlusAPIMixin.__init__(self, telemetry=self._telemetry)
@@ -155,39 +151,35 @@ class ToolCommand(BaseCommand, PlusAPIMixin):
             raise SystemExit

         login_response_json = login_response.json()
-        self._set_netrc_credentials(login_response_json["credential"])
+
+        settings = Settings()
+        settings.tool_repository_username = login_response_json["credential"]["username"]
+        settings.tool_repository_password = login_response_json["credential"]["password"]
+        settings.dump()

         console.print(
             "Successfully authenticated to the tool repository.", style="bold green"
         )

-    def _set_netrc_credentials(self, credentials, netrc_path=None):
-        if not netrc_path:
-            netrc_filename = "_netrc" if platform.system() == "Windows" else ".netrc"
-            netrc_path = Path.home() / netrc_filename
-
-        netrc_path.touch(mode=stat.S_IRUSR | stat.S_IWUSR, exist_ok=True)
-
-        netrc_instance = netrc(file=netrc_path)
-        netrc_instance.hosts["app.crewai.com"] = (credentials["username"], "", credentials["password"])
-
-        with open(netrc_path, 'w') as file:
-            file.write(str(netrc_instance))
-
-        console.print(f"Added credentials to {netrc_path}", style="bold green")
-
     def _add_package(self, tool_details):
         tool_handle = tool_details["handle"]
         repository_handle = tool_details["repository"]["handle"]
+        repository_url = tool_details["repository"]["url"]
+        index = f"{repository_handle}={repository_url}"

         add_package_command = [
             "uv",
             "add",
-            "--extra-index-url",
-            self.BASE_URL + repository_handle,
+            "--index",
+            index,
             tool_handle,
         ]
         add_package_result = subprocess.run(
-            add_package_command, capture_output=False, text=True, check=True
+            add_package_command,
+            capture_output=False,
+            env=self._build_env_with_credentials(repository_handle),
+            text=True,
+            check=True
         )

         if add_package_result.stderr:
@@ -206,3 +198,13 @@ class ToolCommand(BaseCommand, PlusAPIMixin):
             "[bold yellow]Tip:[/bold yellow] Navigate to a different directory and try again."
         )
         raise SystemExit
+
+    def _build_env_with_credentials(self, repository_handle: str):
+        repository_handle = repository_handle.upper().replace("-", "_")
+        settings = Settings()
+
+        env = os.environ.copy()
+        env[f"UV_INDEX_{repository_handle}_USERNAME"] = str(settings.tool_repository_username or "")
+        env[f"UV_INDEX_{repository_handle}_PASSWORD"] = str(settings.tool_repository_password or "")
+
+        return env
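
With credentials stored via Settings instead of a ~/.netrc file, uv picks them up through its per-index environment variables. A minimal sketch of what the new code assembles; the handle, URL, and credential values are hypothetical, not taken from the diff:

    import os

    # Mirrors _build_env_with_credentials for a repository handle "acme-tools".
    handle = "acme-tools".upper().replace("-", "_")  # -> "ACME_TOOLS"
    env = os.environ.copy()
    env[f"UV_INDEX_{handle}_USERNAME"] = "jane"    # from Settings in the real code
    env[f"UV_INDEX_{handle}_PASSWORD"] = "s3cret"  # from Settings in the real code
    # The resulting invocation then looks like:
    #   uv add --index acme-tools=https://example.com/simple some-tool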

View File

@@ -1,7 +1,9 @@
+import os
 import shutil

 import tomli_w
-import tomllib
+
+from crewai.cli.utils import read_toml


 def update_crew() -> None:
@@ -17,10 +19,9 @@ def migrate_pyproject(input_file, output_file):
     And it will be used to migrate the pyproject.toml to the new format when uv is used.
     When the time comes that uv supports the new format, this function will be deprecated.
     """
+    poetry_data = {}
     # Read the input pyproject.toml
-    with open(input_file, "rb") as f:
-        pyproject = tomllib.load(f)
+    pyproject_data = read_toml()

     # Initialize the new project structure
     new_pyproject = {
@@ -29,30 +30,30 @@ def migrate_pyproject(input_file, output_file):
     }

     # Migrate project metadata
-    if "tool" in pyproject and "poetry" in pyproject["tool"]:
-        poetry = pyproject["tool"]["poetry"]
-        new_pyproject["project"]["name"] = poetry.get("name")
-        new_pyproject["project"]["version"] = poetry.get("version")
-        new_pyproject["project"]["description"] = poetry.get("description")
+    if "tool" in pyproject_data and "poetry" in pyproject_data["tool"]:
+        poetry_data = pyproject_data["tool"]["poetry"]
+        new_pyproject["project"]["name"] = poetry_data.get("name")
+        new_pyproject["project"]["version"] = poetry_data.get("version")
+        new_pyproject["project"]["description"] = poetry_data.get("description")
         new_pyproject["project"]["authors"] = [
             {
                 "name": author.split("<")[0].strip(),
                 "email": author.split("<")[1].strip(">").strip(),
             }
-            for author in poetry.get("authors", [])
+            for author in poetry_data.get("authors", [])
         ]
-        new_pyproject["project"]["requires-python"] = poetry.get("python")
+        new_pyproject["project"]["requires-python"] = poetry_data.get("python")
     else:
         # If it's already in the new format, just copy the project section
-        new_pyproject["project"] = pyproject.get("project", {})
+        new_pyproject["project"] = pyproject_data.get("project", {})

     # Migrate or copy dependencies
     if "dependencies" in new_pyproject["project"]:
         # If dependencies are already in the new format, keep them as is
         pass
-    elif "dependencies" in poetry:
+    elif poetry_data and "dependencies" in poetry_data:
         new_pyproject["project"]["dependencies"] = []
-        for dep, version in poetry["dependencies"].items():
+        for dep, version in poetry_data["dependencies"].items():
             if isinstance(version, dict):  # Handle extras
                 extras = ",".join(version.get("extras", []))
                 new_dep = f"{dep}[{extras}]"
@@ -66,10 +67,10 @@ def migrate_pyproject(input_file, output_file):
             new_pyproject["project"]["dependencies"].append(new_dep)

     # Migrate or copy scripts
-    if "scripts" in poetry:
-        new_pyproject["project"]["scripts"] = poetry["scripts"]
-    elif "scripts" in pyproject.get("project", {}):
-        new_pyproject["project"]["scripts"] = pyproject["project"]["scripts"]
+    if poetry_data and "scripts" in poetry_data:
+        new_pyproject["project"]["scripts"] = poetry_data["scripts"]
+    elif pyproject_data.get("project", {}) and "scripts" in pyproject_data["project"]:
+        new_pyproject["project"]["scripts"] = pyproject_data["project"]["scripts"]
     else:
         new_pyproject["project"]["scripts"] = {}
@@ -86,14 +87,23 @@ def migrate_pyproject(input_file, output_file):
         new_pyproject["project"]["scripts"]["run_crew"] = f"{module_name}.main:run"

     # Migrate optional dependencies
-    if "extras" in poetry:
-        new_pyproject["project"]["optional-dependencies"] = poetry["extras"]
+    if poetry_data and "extras" in poetry_data:
+        new_pyproject["project"]["optional-dependencies"] = poetry_data["extras"]

     # Backup the old pyproject.toml
     backup_file = "pyproject-old.toml"
     shutil.copy2(input_file, backup_file)
     print(f"Original pyproject.toml backed up as {backup_file}")

+    # Rename the poetry.lock file
+    lock_file = "poetry.lock"
+    lock_backup = "poetry-old.lock"
+    if os.path.exists(lock_file):
+        os.rename(lock_file, lock_backup)
+        print(f"Original poetry.lock renamed to {lock_backup}")
+    else:
+        print("No poetry.lock file found to rename.")
+
     # Write the new pyproject.toml
     with open(output_file, "wb") as f:
         tomli_w.dump(new_pyproject, f)
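
For orientation, the extras branch above maps a Poetry dependency table to a PEP 621 requirement string. A standalone illustration with a made-up entry (the version handling between the elided hunk lines is not shown in the diff):

    dep = "crewai"
    version = {"version": ">=0.79.4", "extras": ["tools"]}
    if isinstance(version, dict):  # Handle extras
        extras = ",".join(version.get("extras", []))
        new_dep = f"{dep}[{extras}]"  # -> "crewai[tools]"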

View File

@@ -6,6 +6,7 @@ from functools import reduce
 from typing import Any, Dict, List

 import click
+import tomli
 from rich.console import Console

 from crewai.cli.authentication.utils import TokenManager
@@ -54,6 +55,13 @@ def simple_toml_parser(content):
     return result


+def read_toml(file_path: str = "pyproject.toml"):
+    """Read the content of a TOML file and return it as a dictionary."""
+    with open(file_path, "rb") as f:
+        toml_dict = tomli.load(f)
+    return toml_dict
+
+
 def parse_toml(content):
     if sys.version_info >= (3, 11):
         return tomllib.loads(content)

View File

@@ -27,12 +27,13 @@ from crewai.llm import LLM
 from crewai.memory.entity.entity_memory import EntityMemory
 from crewai.memory.long_term.long_term_memory import LongTermMemory
 from crewai.memory.short_term.short_term_memory import ShortTermMemory
+from crewai.memory.user.user_memory import UserMemory
 from crewai.process import Process
 from crewai.task import Task
 from crewai.tasks.conditional_task import ConditionalTask
 from crewai.tasks.task_output import TaskOutput
 from crewai.telemetry import Telemetry
-from crewai.tools.agent_tools import AgentTools
+from crewai.tools.agent_tools.agent_tools import AgentTools
 from crewai.types.usage_metrics import UsageMetrics
 from crewai.utilities import I18N, FileHandler, Logger, RPMController
 from crewai.utilities.constants import (
@@ -71,6 +72,7 @@ class Crew(BaseModel):
         manager_llm: The language model that will run manager agent.
         manager_agent: Custom agent that will be used as manager.
         memory: Whether the crew should use memory to store memories of it's execution.
+        memory_config: Configuration for the memory to be used for the crew.
         cache: Whether the crew should use a cache to store the results of the tools execution.
         function_calling_llm: The language model that will run the tool calling for all the agents.
         process: The process flow that the crew will follow (e.g., sequential, hierarchical).
@@ -94,6 +96,7 @@ class Crew(BaseModel):
     _short_term_memory: Optional[InstanceOf[ShortTermMemory]] = PrivateAttr()
     _long_term_memory: Optional[InstanceOf[LongTermMemory]] = PrivateAttr()
     _entity_memory: Optional[InstanceOf[EntityMemory]] = PrivateAttr()
+    _user_memory: Optional[InstanceOf[UserMemory]] = PrivateAttr()
     _train: Optional[bool] = PrivateAttr(default=False)
     _train_iteration: Optional[int] = PrivateAttr()
     _inputs: Optional[Dict[str, Any]] = PrivateAttr(default=None)
@@ -114,6 +117,10 @@ class Crew(BaseModel):
         default=False,
         description="Whether the crew should use memory to store memories of it's execution",
     )
+    memory_config: Optional[Dict[str, Any]] = Field(
+        default=None,
+        description="Configuration for the memory to be used for the crew.",
+    )
     short_term_memory: Optional[InstanceOf[ShortTermMemory]] = Field(
         default=None,
         description="An Instance of the ShortTermMemory to be used by the Crew",
@@ -126,8 +133,12 @@ class Crew(BaseModel):
         default=None,
         description="An Instance of the EntityMemory to be used by the Crew",
     )
+    user_memory: Optional[InstanceOf[UserMemory]] = Field(
+        default=None,
+        description="An instance of the UserMemory to be used by the Crew to store/fetch memories of a specific user.",
+    )
     embedder: Optional[dict] = Field(
-        default={"provider": "openai"},
+        default=None,
         description="Configuration for the embedder to be used for the crew.",
     )
     usage_metrics: Optional[UsageMetrics] = Field(
@@ -238,13 +249,22 @@ class Crew(BaseModel):
         self._short_term_memory = (
             self.short_term_memory
             if self.short_term_memory
-            else ShortTermMemory(crew=self, embedder_config=self.embedder)
+            else ShortTermMemory(
+                crew=self,
+                embedder_config=self.embedder,
+            )
         )
         self._entity_memory = (
             self.entity_memory
             if self.entity_memory
             else EntityMemory(crew=self, embedder_config=self.embedder)
         )
+        if hasattr(self, "memory_config") and self.memory_config is not None:
+            self._user_memory = (
+                self.user_memory if self.user_memory else UserMemory(crew=self)
+            )
+        else:
+            self._user_memory = None
         return self

     @model_validator(mode="after")
@@ -435,22 +455,24 @@ class Crew(BaseModel):
         self, n_iterations: int, filename: str, inputs: Optional[Dict[str, Any]] = {}
     ) -> None:
         """Trains the crew for a given number of iterations."""
-        self._setup_for_training(filename)
+        train_crew = self.copy()
+        train_crew._setup_for_training(filename)

         for n_iteration in range(n_iterations):
-            self._train_iteration = n_iteration
-            self.kickoff(inputs=inputs)
+            train_crew._train_iteration = n_iteration
+            train_crew.kickoff(inputs=inputs)

         training_data = CrewTrainingHandler(TRAINING_DATA_FILE).load()

-        for agent in self.agents:
-            result = TaskEvaluator(agent).evaluate_training_data(
-                training_data=training_data, agent_id=str(agent.id)
-            )
-
-            CrewTrainingHandler(filename).save_trained_data(
-                agent_id=str(agent.role), trained_data=result.model_dump()
-            )
+        for agent in train_crew.agents:
+            if training_data.get(str(agent.id)):
+                result = TaskEvaluator(agent).evaluate_training_data(
+                    training_data=training_data, agent_id=str(agent.id)
+                )
+                CrewTrainingHandler(filename).save_trained_data(
+                    agent_id=str(agent.role), trained_data=result.model_dump()
+                )

     def kickoff(
         self,
@@ -774,7 +796,9 @@ class Crew(BaseModel):
     def _log_task_start(self, task: Task, role: str = "None"):
         if self.output_log_file:
-            self._file_handler.log(task_name=task.name, task=task.description, agent=role, status="started")
+            self._file_handler.log(
+                task_name=task.name, task=task.description, agent=role, status="started"
+            )

     def _update_manager_tools(self, task: Task):
         if self.manager_agent:
@@ -796,7 +820,13 @@ class Crew(BaseModel):
     def _process_task_result(self, task: Task, output: TaskOutput) -> None:
         role = task.agent.role if task.agent is not None else "None"
         if self.output_log_file:
-            self._file_handler.log(task_name=task.name, task=task.description, agent=role, status="completed", output=output.raw)
+            self._file_handler.log(
+                task_name=task.name,
+                task=task.description,
+                agent=role,
+                status="completed",
+                output=output.raw,
+            )

     def _create_crew_output(self, task_outputs: List[TaskOutput]) -> CrewOutput:
         if len(task_outputs) != 1:
@@ -979,17 +1009,19 @@ class Crew(BaseModel):
         inputs: Optional[Dict[str, Any]] = None,
     ) -> None:
         """Test and evaluate the Crew with the given inputs for n iterations concurrently using concurrent.futures."""
-        self._test_execution_span = self._telemetry.test_execution_span(
-            self,
+        test_crew = self.copy()
+
+        self._test_execution_span = test_crew._telemetry.test_execution_span(
+            test_crew,
             n_iterations,
             inputs,
             openai_model_name,  # type: ignore[arg-type]
         )  # type: ignore[arg-type]
-        evaluator = CrewEvaluator(self, openai_model_name)  # type: ignore[arg-type]
+        evaluator = CrewEvaluator(test_crew, openai_model_name)  # type: ignore[arg-type]

         for i in range(1, n_iterations + 1):
             evaluator.set_iteration(i)
-            self.kickoff(inputs=inputs)
+            test_crew.kickoff(inputs=inputs)

         evaluator.print_crew_evaluation_result()
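
Taken together with the Mem0 storage further down, the new memory_config field is what enables per-user memory on a Crew. A minimal sketch, assuming mem0ai is installed and a MEM0_API_KEY is set (or an api_key passed in the config); the agent, task, and user_id values are illustrative:

    from crewai import Agent, Crew, Process, Task

    researcher = Agent(
        role="Researcher",
        goal="Find concise answers",
        backstory="A careful analyst.",
    )
    task = Task(
        description="Answer the user's question briefly.",
        expected_output="A short answer.",
        agent=researcher,
    )
    crew = Crew(
        agents=[researcher],
        tasks=[task],
        process=Process.sequential,
        memory=True,
        # "mem0" routes short-term/entity/user memory through Mem0Storage;
        # user_id identifies whose memories to store and fetch.
        memory_config={"provider": "mem0", "config": {"user_id": "john"}},
    )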

View File

@@ -1,10 +1,20 @@
-# flow.py
 import asyncio
 import inspect
-from typing import Any, Callable, Dict, Generic, List, Set, Type, TypeVar, Union
+from typing import (
+    Any,
+    Callable,
+    Dict,
+    Generic,
+    List,
+    Optional,
+    Set,
+    Type,
+    TypeVar,
+    Union,
+    cast,
+)

-from pydantic import BaseModel
+from pydantic import BaseModel, ValidationError

 from crewai.flow.flow_visualizer import plot_flow
 from crewai.flow.utils import get_possible_return_constants
@@ -120,6 +130,7 @@ class FlowMeta(type):
                     methods = attr_value.__trigger_methods__
                     condition_type = getattr(attr_value, "__condition_type__", "OR")
                     listeners[attr_name] = (condition_type, methods)
+
                 elif hasattr(attr_value, "__is_router__"):
                     routers[attr_value.__router_for__] = attr_name
                     possible_returns = get_possible_return_constants(attr_value)
@@ -159,7 +170,7 @@ class Flow(Generic[T], metaclass=FlowMeta):
     def __init__(self) -> None:
         self._methods: Dict[str, Callable] = {}
         self._state: T = self._create_initial_state()
-        self._completed_methods: Set[str] = set()
+        self._method_execution_counts: Dict[str, int] = {}
         self._pending_and_listeners: Dict[str, Set[str]] = {}
         self._method_outputs: List[Any] = []  # List to store all method outputs
@@ -190,7 +201,74 @@ class Flow(Generic[T], metaclass=FlowMeta):
         """Returns the list of all outputs from executed methods."""
         return self._method_outputs

-    async def kickoff(self) -> Any:
+    def _initialize_state(self, inputs: Dict[str, Any]) -> None:
+        """
+        Initializes or updates the state with the provided inputs.
+
+        Args:
+            inputs: Dictionary of inputs to initialize or update the state.
+
+        Raises:
+            ValueError: If inputs do not match the structured state model.
+            TypeError: If state is neither a BaseModel instance nor a dictionary.
+        """
+        if isinstance(self._state, BaseModel):
+            # Structured state management
+            try:
+                # Define a function to create the dynamic class
+                def create_model_with_extra_forbid(
+                    base_model: Type[BaseModel],
+                ) -> Type[BaseModel]:
+                    class ModelWithExtraForbid(base_model):  # type: ignore
+                        model_config = base_model.model_config.copy()
+                        model_config["extra"] = "forbid"
+
+                    return ModelWithExtraForbid
+
+                # Create the dynamic class
+                ModelWithExtraForbid = create_model_with_extra_forbid(
+                    self._state.__class__
+                )
+
+                # Create a new instance using the combined state and inputs
+                self._state = cast(
+                    T, ModelWithExtraForbid(**{**self._state.model_dump(), **inputs})
+                )
+
+            except ValidationError as e:
+                raise ValueError(f"Invalid inputs for structured state: {e}") from e
+        elif isinstance(self._state, dict):
+            # Unstructured state management
+            self._state.update(inputs)
+        else:
+            raise TypeError("State must be a BaseModel instance or a dictionary.")
+
+    def kickoff(self, inputs: Optional[Dict[str, Any]] = None) -> Any:
+        """
+        Starts the execution of the flow synchronously.
+
+        Args:
+            inputs: Optional dictionary of inputs to initialize or update the state.
+
+        Returns:
+            The final output from the flow execution.
+        """
+        if inputs is not None:
+            self._initialize_state(inputs)
+        return asyncio.run(self.kickoff_async())
+
+    async def kickoff_async(self, inputs: Optional[Dict[str, Any]] = None) -> Any:
+        """
+        Starts the execution of the flow asynchronously.
+
+        Args:
+            inputs: Optional dictionary of inputs to initialize or update the state.
+
+        Returns:
+            The final output from the flow execution.
+        """
+        if inputs is not None:
+            self._initialize_state(inputs)
+
         if not self._start_methods:
             raise ValueError("No start method defined")
@@ -213,17 +291,27 @@ class Flow(Generic[T], metaclass=FlowMeta):
         else:
             return None  # Or raise an exception if no methods were executed

-    async def _execute_start_method(self, start_method: str) -> None:
-        result = await self._execute_method(self._methods[start_method])
-        await self._execute_listeners(start_method, result)
+    async def _execute_start_method(self, start_method_name: str) -> None:
+        result = await self._execute_method(
+            start_method_name, self._methods[start_method_name]
+        )
+        await self._execute_listeners(start_method_name, result)

-    async def _execute_method(self, method: Callable, *args: Any, **kwargs: Any) -> Any:
+    async def _execute_method(
+        self, method_name: str, method: Callable, *args: Any, **kwargs: Any
+    ) -> Any:
         result = (
             await method(*args, **kwargs)
             if asyncio.iscoroutinefunction(method)
             else method(*args, **kwargs)
         )
         self._method_outputs.append(result)  # Store the output
+
+        # Track method execution counts
+        self._method_execution_counts[method_name] = (
+            self._method_execution_counts.get(method_name, 0) + 1
+        )
+
         return result

     async def _execute_listeners(self, trigger_method: str, result: Any) -> None:
@@ -231,32 +319,39 @@ class Flow(Generic[T], metaclass=FlowMeta):
         if trigger_method in self._routers:
             router_method = self._methods[self._routers[trigger_method]]
-            path = await self._execute_method(router_method)
-            # Use the path as the new trigger method
+            path = await self._execute_method(
+                self._routers[trigger_method], router_method
+            )
             trigger_method = path

-        for listener, (condition_type, methods) in self._listeners.items():
+        for listener_name, (condition_type, methods) in self._listeners.items():
             if condition_type == "OR":
                 if trigger_method in methods:
+                    # Schedule the listener without preventing re-execution
                     listener_tasks.append(
-                        self._execute_single_listener(listener, result)
+                        self._execute_single_listener(listener_name, result)
                     )
             elif condition_type == "AND":
-                if listener not in self._pending_and_listeners:
-                    self._pending_and_listeners[listener] = set()
-                self._pending_and_listeners[listener].add(trigger_method)
-                if set(methods) == self._pending_and_listeners[listener]:
+                # Initialize pending methods for this listener if not already done
+                if listener_name not in self._pending_and_listeners:
+                    self._pending_and_listeners[listener_name] = set(methods)
+                # Remove the trigger method from pending methods
+                self._pending_and_listeners[listener_name].discard(trigger_method)
+
+                if not self._pending_and_listeners[listener_name]:
+                    # All required methods have been executed
                     listener_tasks.append(
-                        self._execute_single_listener(listener, result)
+                        self._execute_single_listener(listener_name, result)
                     )
-                    del self._pending_and_listeners[listener]
+                    # Reset pending methods for this listener
+                    self._pending_and_listeners.pop(listener_name, None)

         # Run all listener tasks concurrently and wait for them to complete
-        await asyncio.gather(*listener_tasks)
+        if listener_tasks:
+            await asyncio.gather(*listener_tasks)

-    async def _execute_single_listener(self, listener: str, result: Any) -> None:
+    async def _execute_single_listener(self, listener_name: str, result: Any) -> None:
         try:
-            method = self._methods[listener]
+            method = self._methods[listener_name]
             sig = inspect.signature(method)
             params = list(sig.parameters.values())
@@ -265,15 +360,19 @@ class Flow(Generic[T], metaclass=FlowMeta):
             if method_params:
                 # If listener expects parameters, pass the result
-                listener_result = await self._execute_method(method, result)
+                listener_result = await self._execute_method(
+                    listener_name, method, result
+                )
             else:
                 # If listener does not expect parameters, call without arguments
-                listener_result = await self._execute_method(method)
+                listener_result = await self._execute_method(listener_name, method)

             # Execute listeners of this listener
-            await self._execute_listeners(listener, listener_result)
+            await self._execute_listeners(listener_name, listener_result)
         except Exception as e:
-            print(f"[Flow._execute_single_listener] Error in method {listener}: {e}")
+            print(
+                f"[Flow._execute_single_listener] Error in method {listener_name}: {e}"
+            )
             import traceback

             traceback.print_exc()
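
The new kickoff signature makes the entry point synchronous and lets callers seed state before any method runs. A minimal sketch with unstructured (dict) state; the flow and field names are illustrative:

    from crewai.flow.flow import Flow, listen, start

    class GreetFlow(Flow):
        @start()
        def begin(self):
            # inputs passed to kickoff() land in self.state before this runs
            return f"Hello, {self.state['name']}!"

        @listen(begin)
        def shout(self, greeting):
            return greeting.upper()

    flow = GreetFlow()
    print(flow.kickoff(inputs={"name": "Ada"}))  # -> "HELLO, ADA!"

With a structured (BaseModel) state, _initialize_state rebuilds the model with extra="forbid", so unknown input keys raise a ValueError instead of being silently absorbed.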

View File

@@ -1,7 +1,10 @@
+import io
+import logging
+import sys
+import warnings
 from contextlib import contextmanager
 from typing import Any, Dict, List, Optional, Union

-import logging
-import warnings
 import litellm
 from litellm import get_supported_openai_params
@@ -9,9 +12,6 @@ from crewai.utilities.exceptions.context_window_exceeding_exception import (
     LLMContextLengthExceededException,
 )

-import sys
-import io
-

 class FilteredStream(io.StringIO):
     def write(self, s):
@@ -118,12 +118,12 @@ class LLM:
         litellm.drop_params = True
         litellm.set_verbose = False
-        litellm.callbacks = callbacks
+        self.set_callbacks(callbacks)

     def call(self, messages: List[Dict[str, str]], callbacks: List[Any] = []) -> str:
         with suppress_warnings():
             if callbacks and len(callbacks) > 0:
-                litellm.callbacks = callbacks
+                self.set_callbacks(callbacks)

             try:
                 params = {
@@ -181,3 +181,15 @@ class LLM:
     def get_context_window_size(self) -> int:
         # Only using 75% of the context window size to avoid cutting the message in the middle
         return int(LLM_CONTEXT_WINDOW_SIZES.get(self.model, 8192) * 0.75)
+
+    def set_callbacks(self, callbacks: List[Any]):
+        callback_types = [type(callback) for callback in callbacks]
+        for callback in litellm.success_callback[:]:
+            if type(callback) in callback_types:
+                litellm.success_callback.remove(callback)
+
+        for callback in litellm._async_success_callback[:]:
+            if type(callback) in callback_types:
+                litellm._async_success_callback.remove(callback)
+
+        litellm.callbacks = callbacks
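
The set_callbacks helper works around LiteLLM accumulating duplicate callbacks across calls: any already-registered callback of the same type is evicted before the new list is assigned. The eviction logic in isolation, with a stand-in callback class (not a real litellm type):

    class UsageLogger:  # stand-in for a litellm callback type
        pass

    registered = [UsageLogger()]     # e.g. litellm.success_callback
    new_callbacks = [UsageLogger()]  # same type, fresh instance
    new_types = [type(cb) for cb in new_callbacks]
    registered = [cb for cb in registered if type(cb) not in new_types]
    # registered is now empty, so registering the new list cannot duplicate it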

View File

@@ -1,5 +1,6 @@
 from .entity.entity_memory import EntityMemory
 from .long_term.long_term_memory import LongTermMemory
 from .short_term.short_term_memory import ShortTermMemory
+from .user.user_memory import UserMemory

-__all__ = ["EntityMemory", "LongTermMemory", "ShortTermMemory"]
+__all__ = ["UserMemory", "EntityMemory", "LongTermMemory", "ShortTermMemory"]

View File

@@ -1,13 +1,25 @@
-from typing import Optional
+from typing import Optional, Dict, Any

-from crewai.memory import EntityMemory, LongTermMemory, ShortTermMemory
+from crewai.memory import EntityMemory, LongTermMemory, ShortTermMemory, UserMemory


 class ContextualMemory:
-    def __init__(self, stm: ShortTermMemory, ltm: LongTermMemory, em: EntityMemory):
+    def __init__(
+        self,
+        memory_config: Optional[Dict[str, Any]],
+        stm: ShortTermMemory,
+        ltm: LongTermMemory,
+        em: EntityMemory,
+        um: UserMemory,
+    ):
+        if memory_config is not None:
+            self.memory_provider = memory_config.get("provider")
+        else:
+            self.memory_provider = None
         self.stm = stm
         self.ltm = ltm
         self.em = em
+        self.um = um

     def build_context_for_task(self, task, context) -> str:
         """
@@ -23,6 +35,8 @@ class ContextualMemory:
         context.append(self._fetch_ltm_context(task.description))
         context.append(self._fetch_stm_context(query))
         context.append(self._fetch_entity_context(query))
+        if self.memory_provider == "mem0":
+            context.append(self._fetch_user_context(query))
         return "\n".join(filter(None, context))

     def _fetch_stm_context(self, query) -> str:
@@ -31,7 +45,12 @@ class ContextualMemory:
         formatted as bullet points.
         """
         stm_results = self.stm.search(query)
-        formatted_results = "\n".join([f"- {result}" for result in stm_results])
+        formatted_results = "\n".join(
+            [
+                f"- {result['memory'] if self.memory_provider == 'mem0' else result['context']}"
+                for result in stm_results
+            ]
+        )
         return f"Recent Insights:\n{formatted_results}" if stm_results else ""

     def _fetch_ltm_context(self, task) -> Optional[str]:
@@ -60,6 +79,26 @@ class ContextualMemory:
         """
         em_results = self.em.search(query)
         formatted_results = "\n".join(
-            [f"- {result['context']}" for result in em_results]  # type: ignore # Invalid index type "str" for "str"; expected type "SupportsIndex | slice"
+            [
+                f"- {result['memory'] if self.memory_provider == 'mem0' else result['context']}"
+                for result in em_results
+            ]  # type: ignore # Invalid index type "str" for "str"; expected type "SupportsIndex | slice"
         )
         return f"Entities:\n{formatted_results}" if em_results else ""

+    def _fetch_user_context(self, query: str) -> str:
+        """
+        Fetches and formats relevant user information from User Memory.
+
+        Args:
+            query (str): The search query to find relevant user memories.
+
+        Returns:
+            str: Formatted user memories as bullet points, or an empty string if none found.
+        """
+        user_memories = self.um.search(query)
+        if not user_memories:
+            return ""
+
+        formatted_memories = "\n".join(
+            f"- {result['memory']}" for result in user_memories
+        )
+        return f"User memories/preferences:\n{formatted_memories}"

View File

@@ -11,21 +11,43 @@ class EntityMemory(Memory):
     """

     def __init__(self, crew=None, embedder_config=None, storage=None):
-        storage = (
-            storage
-            if storage
-            else RAGStorage(
-                type="entities",
-                allow_reset=False,
-                embedder_config=embedder_config,
-                crew=crew,
-            )
-        )
+        if hasattr(crew, "memory_config") and crew.memory_config is not None:
+            self.memory_provider = crew.memory_config.get("provider")
+        else:
+            self.memory_provider = None
+
+        if self.memory_provider == "mem0":
+            try:
+                from crewai.memory.storage.mem0_storage import Mem0Storage
+            except ImportError:
+                raise ImportError(
+                    "Mem0 is not installed. Please install it with `pip install mem0ai`."
+                )
+            storage = Mem0Storage(type="entities", crew=crew)
+        else:
+            storage = (
+                storage
+                if storage
+                else RAGStorage(
+                    type="entities",
+                    allow_reset=True,
+                    embedder_config=embedder_config,
+                    crew=crew,
+                )
+            )
         super().__init__(storage)

     def save(self, item: EntityMemoryItem) -> None:  # type: ignore # BUG?: Signature of "save" incompatible with supertype "Memory"
         """Saves an entity item into the SQLite storage."""
-        data = f"{item.name}({item.type}): {item.description}"
+        if self.memory_provider == "mem0":
+            data = f"""
+            Remember details about the following entity:
+            Name: {item.name}
+            Type: {item.type}
+            Entity Description: {item.description}
+            """
+        else:
+            data = f"{item.name}({item.type}): {item.description}"
         super().save(data, item.metadata)

     def reset(self) -> None:

View File

@@ -1,4 +1,4 @@
-from typing import Any, Dict
+from typing import Any, Dict, List

 from crewai.memory.long_term.long_term_memory_item import LongTermMemoryItem
 from crewai.memory.memory import Memory
@@ -28,7 +28,7 @@ class LongTermMemory(Memory):
             datetime=item.datetime,
         )

-    def search(self, task: str, latest_n: int = 3) -> Dict[str, Any]:
+    def search(self, task: str, latest_n: int = 3) -> List[Dict[str, Any]]:  # type: ignore # signature of "search" incompatible with supertype "Memory"
         return self.storage.load(task, latest_n)  # type: ignore # BUG?: "Storage" has no attribute "load"

     def reset(self) -> None:

View File

@@ -1,6 +1,6 @@
-from typing import Any, Dict, Optional
+from typing import Any, Dict, Optional, List

-from crewai.memory.storage.interface import Storage
+from crewai.memory.storage.rag_storage import RAGStorage


 class Memory:
@@ -8,7 +8,7 @@ class Memory:
     Base class for memory, now supporting agent tags and generic metadata.
     """

-    def __init__(self, storage: Storage):
+    def __init__(self, storage: RAGStorage):
         self.storage = storage

     def save(
@@ -23,5 +23,12 @@ class Memory:
         self.storage.save(value, metadata)

-    def search(self, query: str) -> Dict[str, Any]:
-        return self.storage.search(query)
+    def search(
+        self,
+        query: str,
+        limit: int = 3,
+        score_threshold: float = 0.35,
+    ) -> List[Any]:
+        return self.storage.search(
+            query=query, limit=limit, score_threshold=score_threshold
+        )

View File

@@ -14,13 +14,27 @@ class ShortTermMemory(Memory):
     """

     def __init__(self, crew=None, embedder_config=None, storage=None):
-        storage = (
-            storage
-            if storage
-            else RAGStorage(
-                type="short_term", embedder_config=embedder_config, crew=crew
-            )
-        )
+        if hasattr(crew, "memory_config") and crew.memory_config is not None:
+            self.memory_provider = crew.memory_config.get("provider")
+        else:
+            self.memory_provider = None
+
+        if self.memory_provider == "mem0":
+            try:
+                from crewai.memory.storage.mem0_storage import Mem0Storage
+            except ImportError:
+                raise ImportError(
+                    "Mem0 is not installed. Please install it with `pip install mem0ai`."
+                )
+            storage = Mem0Storage(type="short_term", crew=crew)
+        else:
+            storage = (
+                storage
+                if storage
+                else RAGStorage(
+                    type="short_term", embedder_config=embedder_config, crew=crew
+                )
+            )
         super().__init__(storage)

     def save(
@@ -30,11 +44,20 @@ class ShortTermMemory(Memory):
         agent: Optional[str] = None,
     ) -> None:
         item = ShortTermMemoryItem(data=value, metadata=metadata, agent=agent)
+        if self.memory_provider == "mem0":
+            item.data = f"Remember the following insights from Agent run: {item.data}"
+
         super().save(value=item.data, metadata=item.metadata, agent=item.agent)

-    def search(self, query: str, score_threshold: float = 0.35):
-        return self.storage.search(query=query, score_threshold=score_threshold)  # type: ignore # BUG? The reference is to the parent class, but the parent class does not have this parameters
+    def search(
+        self,
+        query: str,
+        limit: int = 3,
+        score_threshold: float = 0.35,
+    ):
+        return self.storage.search(
+            query=query, limit=limit, score_threshold=score_threshold
+        )  # type: ignore # BUG? The reference is to the parent class, but the parent class does not have this parameters

     def reset(self) -> None:
         try:

View File

@@ -0,0 +1,76 @@
+from abc import ABC, abstractmethod
+from typing import Any, Dict, List, Optional
+
+
+class BaseRAGStorage(ABC):
+    """
+    Base class for RAG-based Storage implementations.
+    """
+
+    app: Any | None = None
+
+    def __init__(
+        self,
+        type: str,
+        allow_reset: bool = True,
+        embedder_config: Optional[Any] = None,
+        crew: Any = None,
+    ):
+        self.type = type
+        self.allow_reset = allow_reset
+        self.embedder_config = embedder_config
+        self.crew = crew
+        self.agents = self._initialize_agents()
+
+    def _initialize_agents(self) -> str:
+        if self.crew:
+            return "_".join(
+                [self._sanitize_role(agent.role) for agent in self.crew.agents]
+            )
+        return ""
+
+    @abstractmethod
+    def _sanitize_role(self, role: str) -> str:
+        """Sanitizes agent roles to ensure valid directory names."""
+        pass
+
+    @abstractmethod
+    def save(self, value: Any, metadata: Dict[str, Any]) -> None:
+        """Save a value with metadata to the storage."""
+        pass
+
+    @abstractmethod
+    def search(
+        self,
+        query: str,
+        limit: int = 3,
+        filter: Optional[dict] = None,
+        score_threshold: float = 0.35,
+    ) -> List[Any]:
+        """Search for entries in the storage."""
+        pass
+
+    @abstractmethod
+    def reset(self) -> None:
+        """Reset the storage."""
+        pass
+
+    @abstractmethod
+    def _generate_embedding(
+        self, text: str, metadata: Optional[Dict[str, Any]] = None
+    ) -> Any:
+        """Generate an embedding for the given text and metadata."""
+        pass
+
+    @abstractmethod
+    def _initialize_app(self):
+        """Initialize the vector db."""
+        pass
+
+    def setup_config(self, config: Dict[str, Any]):
+        """Setup the config of the storage."""
+        pass
+
+    def initialize_client(self):
+        """Initialize the client of the storage. This should setup the app and the db collection"""
+        pass
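
To make the contract concrete, a toy in-memory subclass that satisfies the abstract methods; it is a sketch for illustration only, using naive substring matching in place of a real vector store:

    from typing import Any, Dict, List, Optional

    class InMemoryRAGStorage(BaseRAGStorage):
        def __init__(self, type: str):
            super().__init__(type)
            self._items: List[Any] = []

        def _sanitize_role(self, role: str) -> str:
            return role.replace("\n", "").replace(" ", "_").replace("/", "_")

        def save(self, value: Any, metadata: Dict[str, Any]) -> None:
            self._items.append(value)

        def search(
            self,
            query: str,
            limit: int = 3,
            filter: Optional[dict] = None,
            score_threshold: float = 0.35,
        ) -> List[Any]:
            # Substring match stands in for embedding similarity here
            return [v for v in self._items if query.lower() in str(v).lower()][:limit]

        def reset(self) -> None:
            self._items.clear()

        def _generate_embedding(self, text: str, metadata=None) -> Any:
            return None  # no real embeddings in this sketch

        def _initialize_app(self):
            pass  # no vector db to initialize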

View File

@@ -1,4 +1,4 @@
-from typing import Any, Dict
+from typing import Any, Dict, List


 class Storage:
@@ -7,8 +7,10 @@ class Storage:
     def save(self, value: Any, metadata: Dict[str, Any]) -> None:
         pass

-    def search(self, key: str) -> Dict[str, Any]:  # type: ignore
-        pass
+    def search(
+        self, query: str, limit: int, score_threshold: float
+    ) -> Dict[str, Any] | List[Any]:
+        return {}

     def reset(self) -> None:
         pass

View File

@@ -70,7 +70,7 @@ class KickoffTaskOutputsSQLiteStorage:
                     task.expected_output,
                     json.dumps(output, cls=CrewJSONEncoder),
                     task_index,
-                    json.dumps(inputs),
+                    json.dumps(inputs, cls=CrewJSONEncoder),
                     was_replayed,
                 ),
             )
@@ -103,7 +103,7 @@ class KickoffTaskOutputsSQLiteStorage:
                 else value
             )

-            query = f"UPDATE latest_kickoff_task_outputs SET {', '.join(fields)} WHERE task_index = ?"
+            query = f"UPDATE latest_kickoff_task_outputs SET {', '.join(fields)} WHERE task_index = ?"  # nosec
             values.append(task_index)
             cursor.execute(query, tuple(values))

View File

@@ -83,7 +83,7 @@ class LTMSQLiteStorage:
                 WHERE task_description = ?
                 ORDER BY datetime DESC, score ASC
                 LIMIT {latest_n}
-            """,
+            """,  # nosec
             (task_description,),
         )
         rows = cursor.fetchall()

View File

@@ -0,0 +1,104 @@
+import os
+from typing import Any, Dict, List
+
+from mem0 import MemoryClient
+
+from crewai.memory.storage.interface import Storage
+
+
+class Mem0Storage(Storage):
+    """
+    Extends Storage to handle embedding and searching across entities using Mem0.
+    """
+
+    def __init__(self, type, crew=None):
+        super().__init__()
+
+        if type not in ["user", "short_term", "long_term", "entities"]:
+            raise ValueError("Invalid type for Mem0Storage. Must be 'user' or 'agent'.")
+
+        self.memory_type = type
+        self.crew = crew
+        self.memory_config = crew.memory_config
+
+        # User ID is required for user memory type "user" since it's used as a unique identifier for the user.
+        user_id = self._get_user_id()
+        if type == "user" and not user_id:
+            raise ValueError("User ID is required for user memory type")
+
+        # API key in memory config overrides the environment variable
+        mem0_api_key = self.memory_config.get("config", {}).get("api_key") or os.getenv(
+            "MEM0_API_KEY"
+        )
+        self.memory = MemoryClient(api_key=mem0_api_key)
+
+    def _sanitize_role(self, role: str) -> str:
+        """
+        Sanitizes agent roles to ensure valid directory names.
+        """
+        return role.replace("\n", "").replace(" ", "_").replace("/", "_")
+
+    def save(self, value: Any, metadata: Dict[str, Any]) -> None:
+        user_id = self._get_user_id()
+        agent_name = self._get_agent_name()
+        if self.memory_type == "user":
+            self.memory.add(value, user_id=user_id, metadata={**metadata})
+        elif self.memory_type == "short_term":
+            agent_name = self._get_agent_name()
+            self.memory.add(
+                value, agent_id=agent_name, metadata={"type": "short_term", **metadata}
+            )
+        elif self.memory_type == "long_term":
+            agent_name = self._get_agent_name()
+            self.memory.add(
+                value,
+                agent_id=agent_name,
+                infer=False,
+                metadata={"type": "long_term", **metadata},
+            )
+        elif self.memory_type == "entities":
+            entity_name = None
+            self.memory.add(
+                value, user_id=entity_name, metadata={"type": "entity", **metadata}
+            )
+
+    def search(
+        self,
+        query: str,
+        limit: int = 3,
+        score_threshold: float = 0.35,
+    ) -> List[Any]:
+        params = {"query": query, "limit": limit}
+        if self.memory_type == "user":
+            user_id = self._get_user_id()
+            params["user_id"] = user_id
+        elif self.memory_type == "short_term":
+            agent_name = self._get_agent_name()
+            params["agent_id"] = agent_name
+            params["metadata"] = {"type": "short_term"}
+        elif self.memory_type == "long_term":
+            agent_name = self._get_agent_name()
+            params["agent_id"] = agent_name
+            params["metadata"] = {"type": "long_term"}
+        elif self.memory_type == "entities":
+            agent_name = self._get_agent_name()
+            params["agent_id"] = agent_name
+            params["metadata"] = {"type": "entity"}
+
+        # Discard the filters for now since we create the filters
+        # automatically when the crew is created.
+        results = self.memory.search(**params)
+        return [r for r in results if r["score"] >= score_threshold]
+
+    def _get_user_id(self):
+        if self.memory_type == "user":
+            if hasattr(self, "memory_config") and self.memory_config is not None:
+                return self.memory_config.get("config", {}).get("user_id")
+            else:
+                return None
+        return None
+
+    def _get_agent_name(self):
+        agents = self.crew.agents if self.crew else []
+        agents = [self._sanitize_role(agent.role) for agent in agents]
+        agents = "_".join(agents)
+        return agents

View File

@@ -3,9 +3,13 @@ import io
 import logging
 import os
 import shutil
-from typing import Any, Dict, List, Optional
+import uuid
+from typing import Any, Dict, List, Optional, cast

-from crewai.memory.storage.interface import Storage
+from chromadb import Documents, EmbeddingFunction, Embeddings
+from chromadb.api import ClientAPI
+from chromadb.api.types import validate_embedding_function
+
+from crewai.memory.storage.base_rag_storage import BaseRAGStorage
 from crewai.utilities.paths import db_storage_path
@@ -17,68 +21,184 @@ def suppress_logging(
     logger = logging.getLogger(logger_name)
     original_level = logger.getEffectiveLevel()
     logger.setLevel(level)
-    with contextlib.redirect_stdout(io.StringIO()), contextlib.redirect_stderr(
-        io.StringIO()
-    ), contextlib.suppress(UserWarning):
+    with (
+        contextlib.redirect_stdout(io.StringIO()),
+        contextlib.redirect_stderr(io.StringIO()),
+        contextlib.suppress(UserWarning),
+    ):
         yield
     logger.setLevel(original_level)


-class RAGStorage(Storage):
+class RAGStorage(BaseRAGStorage):
     """
     Extends Storage to handle embeddings for memory entries, improving
     search efficiency.
     """

-    def __init__(self, type, allow_reset=True, embedder_config=None, crew=None):
-        super().__init__()
-        if (
-            not os.getenv("OPENAI_API_KEY")
-            and not os.getenv("OPENAI_BASE_URL") == "https://api.openai.com/v1"
-        ):
-            os.environ["OPENAI_API_KEY"] = "fake"
+    app: ClientAPI | None = None

+    def __init__(self, type, allow_reset=True, embedder_config=None, crew=None):
+        super().__init__(type, allow_reset, embedder_config, crew)
         agents = crew.agents if crew else []
         agents = [self._sanitize_role(agent.role) for agent in agents]
         agents = "_".join(agents)
+        self.agents = agents

-        config = {
-            "app": {
-                "config": {"name": type, "collect_metrics": False, "log_level": "ERROR"}
-            },
-            "chunker": {
-                "chunk_size": 5000,
-                "chunk_overlap": 100,
-                "length_function": "len",
-                "min_chunk_size": 150,
-            },
-            "vectordb": {
-                "provider": "chroma",
-                "config": {
-                    "collection_name": type,
-                    "dir": f"{db_storage_path()}/{type}/{agents}",
-                    "allow_reset": allow_reset,
-                },
-            },
-        }
-
-        if embedder_config:
-            config["embedder"] = embedder_config
         self.type = type
-        self.config = config
         self.allow_reset = allow_reset
+        self._initialize_app()
+
+    def _set_embedder_config(self):
+        if self.embedder_config is None:
+            self.embedder_config = self._create_default_embedding_function()
+
+        if isinstance(self.embedder_config, dict):
+            provider = self.embedder_config.get("provider")
+            config = self.embedder_config.get("config", {})
+            model_name = config.get("model")
+            if provider == "openai":
+                from chromadb.utils.embedding_functions.openai_embedding_function import (
+                    OpenAIEmbeddingFunction,
+                )
+
+                self.embedder_config = OpenAIEmbeddingFunction(
+                    api_key=config.get("api_key") or os.getenv("OPENAI_API_KEY"),
+                    model_name=model_name,
+                )
+            elif provider == "azure":
+                from chromadb.utils.embedding_functions.openai_embedding_function import (
+                    OpenAIEmbeddingFunction,
+                )
+
+                self.embedder_config = OpenAIEmbeddingFunction(
+                    api_key=config.get("api_key"),
+                    api_base=config.get("api_base"),
+                    api_type=config.get("api_type", "azure"),
+                    api_version=config.get("api_version"),
+                    model_name=model_name,
+                )
+            elif provider == "ollama":
+                from chromadb.utils.embedding_functions.ollama_embedding_function import (
+                    OllamaEmbeddingFunction,
+                )
+
+                self.embedder_config = OllamaEmbeddingFunction(
+                    url=config.get("url", "http://localhost:11434/api/embeddings"),
+                    model_name=model_name,
+                )
+            elif provider == "vertexai":
+                from chromadb.utils.embedding_functions.google_embedding_function import (
+                    GoogleVertexEmbeddingFunction,
+                )
+
+                self.embedder_config = GoogleVertexEmbeddingFunction(
+                    model_name=model_name,
+                    api_key=config.get("api_key"),
+                )
+            elif provider == "google":
+                from chromadb.utils.embedding_functions.google_embedding_function import (
+                    GoogleGenerativeAiEmbeddingFunction,
+                )
+
+                self.embedder_config = GoogleGenerativeAiEmbeddingFunction(
+                    model_name=model_name,
+                    api_key=config.get("api_key"),
+                )
+            elif provider == "cohere":
+                from chromadb.utils.embedding_functions.cohere_embedding_function import (
+                    CohereEmbeddingFunction,
+                )
+
+                self.embedder_config = CohereEmbeddingFunction(
+                    model_name=model_name,
+                    api_key=config.get("api_key"),
+                )
+            elif provider == "bedrock":
+                from chromadb.utils.embedding_functions.amazon_bedrock_embedding_function import (
+                    AmazonBedrockEmbeddingFunction,
+                )
+
+                self.embedder_config = AmazonBedrockEmbeddingFunction(
+                    session=config.get("session"),
+                )
+            elif provider == "huggingface":
+                from chromadb.utils.embedding_functions.huggingface_embedding_function import (
+                    HuggingFaceEmbeddingServer,
+                )
+
+                self.embedder_config = HuggingFaceEmbeddingServer(
+                    url=config.get("api_url"),
+                )
+            elif provider == "watson":
+                try:
+                    import ibm_watsonx_ai.foundation_models as watson_models
+                    from ibm_watsonx_ai import Credentials
+                    from ibm_watsonx_ai.metanames import (
+                        EmbedTextParamsMetaNames as EmbedParams,
+                    )
+                except ImportError as e:
+                    raise ImportError(
+                        "IBM Watson dependencies are not installed. Please install them to use Watson embedding."
+                    ) from e
+
+                class WatsonEmbeddingFunction(EmbeddingFunction):
+                    def __call__(self, input: Documents) -> Embeddings:
+                        if isinstance(input, str):
+                            input = [input]
+
+                        embed_params = {
+                            EmbedParams.TRUNCATE_INPUT_TOKENS: 3,
+                            EmbedParams.RETURN_OPTIONS: {"input_text": True},
+                        }
+
+                        embedding = watson_models.Embeddings(
+                            model_id=config.get("model"),
+                            params=embed_params,
+                            credentials=Credentials(
+                                api_key=config.get("api_key"), url=config.get("api_url")
+                            ),
+                            project_id=config.get("project_id"),
+                        )
+
+                        try:
+                            embeddings = embedding.embed_documents(input)
+                            return cast(Embeddings, embeddings)
+                        except Exception as e:
+                            print("Error during Watson embedding:", e)
+                            raise e
+
+                self.embedder_config = WatsonEmbeddingFunction()
+            else:
+                raise Exception(
+                    f"Unsupported embedding provider: {provider}, supported providers: [openai, azure, ollama, vertexai, google, cohere, huggingface, watson]"
+                )
+        else:
+            validate_embedding_function(self.embedder_config)
+            self.embedder_config = self.embedder_config

     def _initialize_app(self):
-        from embedchain import App
-        from embedchain.llm.base import BaseLlm
+        import chromadb
+        from chromadb.config import Settings

-        class FakeLLM(BaseLlm):
-            pass
+        self._set_embedder_config()
+        chroma_client = chromadb.PersistentClient(
+            path=f"{db_storage_path()}/{self.type}/{self.agents}",
+            settings=Settings(allow_reset=self.allow_reset),
+        )

-        self.app = App.from_config(config=self.config)
-        self.app.llm = FakeLLM()
-        if self.allow_reset:
-            self.app.reset()
+        self.app = chroma_client
+
+        try:
+            self.collection = self.app.get_collection(
+                name=self.type, embedding_function=self.embedder_config
+            )
+        except Exception:
+            self.collection = self.app.create_collection(
+                name=self.type, embedding_function=self.embedder_config
+            )

     def _sanitize_role(self, role: str) -> str:
         """
@@ -87,11 +207,14 @@ class RAGStorage(BaseRAGStorage):
         return role.replace("\n", "").replace(" ", "_").replace("/", "_")

     def save(self, value: Any, metadata: Dict[str, Any]) -> None:
-        if not hasattr(self, "app"):
+        if not hasattr(self, "app") or not hasattr(self, "collection"):
             self._initialize_app()
-        self._generate_embedding(value, metadata)
+        try:
+            self._generate_embedding(value, metadata)
+        except Exception as e:
+            logging.error(f"Error during {self.type} save: {str(e)}")

-    def search(  # type: ignore # BUG?: Signature of "search" incompatible with supertype "Storage"
+    def search(
         self,
         query: str,
         limit: int = 3,
@@ -100,31 +223,56 @@ class RAGStorage(BaseRAGStorage):
     ) -> List[Any]:
         if not hasattr(self, "app"):
             self._initialize_app()
-        from embedchain.vectordb.chroma import InvalidDimensionException

-        with suppress_logging():
-            try:
-                results = (
-                    self.app.search(query, limit, where=filter)
-                    if filter
-                    else self.app.search(query, limit)
-                )
-            except InvalidDimensionException:
-                self.app.reset()
-                return []
-        return [r for r in results if r["metadata"]["score"] >= score_threshold]
+        try:
+            with suppress_logging():
+                response = self.collection.query(query_texts=query, n_results=limit)
+
+            results = []
+            for i in range(len(response["ids"][0])):
+                result = {
+                    "id": response["ids"][0][i],
+                    "metadata": response["metadatas"][0][i],
+                    "context": response["documents"][0][i],
+                    "score": response["distances"][0][i],
+                }
+                if result["score"] >= score_threshold:
+                    results.append(result)
+
+            return results
+        except Exception as e:
+            logging.error(f"Error during {self.type} search: {str(e)}")
+            return []

-    def _generate_embedding(self, text: str, metadata: Dict[str, Any]) -> Any:
-        if not hasattr(self, "app"):
+    def _generate_embedding(self, text: str, metadata: Dict[str, Any]) -> None:  # type: ignore
+        if not hasattr(self, "app") or not hasattr(self, "collection"):
             self._initialize_app()
-        from embedchain.models.data_type import DataType

-        self.app.add(text, data_type=DataType.TEXT, metadata=metadata)
+        self.collection.add(
+            documents=[text],
+            metadatas=[metadata or {}],
+            ids=[str(uuid.uuid4())],
+        )

     def reset(self) -> None:
         try:
             shutil.rmtree(f"{db_storage_path()}/{self.type}")
+            if self.app:
+                self.app.reset()
         except Exception as e:
-            raise Exception(
-                f"An error occurred while resetting the {self.type} memory: {e}"
-            )
+            if "attempt to write a readonly database" in str(e):
+                # Ignore this specific error
+                pass
+            else:
+                raise Exception(
+                    f"An error occurred while resetting the {self.type} memory: {e}"
+                )
+
+    def _create_default_embedding_function(self):
+        from chromadb.utils.embedding_functions.openai_embedding_function import (
+            OpenAIEmbeddingFunction,
+        )
+
+        return OpenAIEmbeddingFunction(
+            api_key=os.getenv("OPENAI_API_KEY"), model_name="text-embedding-3-small"
+        )
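
The rewritten storage accepts the same dict-style embedder configuration that the Crew's embedder field carries, dispatching on "provider". A sketch using the ollama branch above; the model name is illustrative and a local Ollama server is assumed to be running:

    embedder_config = {
        "provider": "ollama",
        "config": {
            "model": "nomic-embed-text",
            "url": "http://localhost:11434/api/embeddings",
        },
    }
    storage = RAGStorage(type="short_term", embedder_config=embedder_config)
    hits = storage.search("recent insights", limit=3)  # each hit has id/metadata/context/score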

View File

@@ -0,0 +1,45 @@
+from typing import Any, Dict, Optional
+
+from crewai.memory.memory import Memory
+
+
+class UserMemory(Memory):
+    """
+    UserMemory class for handling user memory storage and retrieval.
+    Inherits from the Memory class and utilizes an instance of a class that
+    adheres to the Storage for data storage, specifically working with
+    MemoryItem instances.
+    """
+
+    def __init__(self, crew=None):
+        try:
+            from crewai.memory.storage.mem0_storage import Mem0Storage
+        except ImportError:
+            raise ImportError(
+                "Mem0 is not installed. Please install it with `pip install mem0ai`."
+            )
+        storage = Mem0Storage(type="user", crew=crew)
+        super().__init__(storage)
+
+    def save(
+        self,
+        value,
+        metadata: Optional[Dict[str, Any]] = None,
+        agent: Optional[str] = None,
+    ) -> None:
+        # TODO: Change this function since we want to take care of the case where we save memories for the usr
+        data = f"Remember the details about the user: {value}"
+        super().save(data, metadata)
+
+    def search(
+        self,
+        query: str,
+        limit: int = 3,
+        score_threshold: float = 0.35,
+    ):
+        results = super().search(
+            query=query,
+            limit=limit,
+            score_threshold=score_threshold,
+        )
+        return results

View File

@@ -0,0 +1,8 @@
+from typing import Any, Dict, Optional
+
+
+class UserMemoryItem:
+    def __init__(self, data: Any, user: str, metadata: Optional[Dict[str, Any]] = None):
+        self.data = data
+        self.user = user
+        self.metadata = metadata if metadata is not None else {}

View File

@@ -76,27 +76,13 @@ def crew(func) -> Callable[..., Crew]:
         instantiated_agents = []
         agent_roles = set()

-        # Collect methods from crew in order
-        all_functions = [
-            (name, getattr(self, name))
-            for name, attr in self.__class__.__dict__.items()
-            if callable(attr)
-        ]
-        tasks = [
-            (name, method)
-            for name, method in all_functions
-            if hasattr(method, "is_task")
-        ]
-        agents = [
-            (name, method)
-            for name, method in all_functions
-            if hasattr(method, "is_agent")
-        ]
+        # Use the preserved task and agent information
+        tasks = self._original_tasks.items()
+        agents = self._original_agents.items()

         # Instantiate tasks in order
         for task_name, task_method in tasks:
-            task_instance = task_method()
+            task_instance = task_method(self)
             instantiated_tasks.append(task_instance)
             agent_instance = getattr(task_instance, "agent", None)
             if agent_instance and agent_instance.role not in agent_roles:
@@ -105,7 +91,7 @@ def crew(func) -> Callable[..., Crew]:
         # Instantiate agents not included by tasks
         for agent_name, agent_method in agents:
-            agent_instance = agent_method()
+            agent_instance = agent_method(self)
             if agent_instance.role not in agent_roles:
                 instantiated_agents.append(agent_instance)
                 agent_roles.add(agent_instance.role)
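
The switch from task_method() to task_method(self) follows from where the methods now come from: cls.__dict__ yields plain (unbound) functions, so the instance must be passed explicitly. A toy illustration with hypothetical names:

class Demo:
    def greet(self) -> str:
        return "hi"

preserved = {name: fn for name, fn in Demo.__dict__.items() if callable(fn)}

d = Demo()
print(preserved["greet"](d))  # functions taken from __dict__ are unbound
print(d.greet())              # equivalent bound call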

View File

@@ -34,6 +34,18 @@ def CrewBase(cls: T) -> T:
             self.map_all_agent_variables()
             self.map_all_task_variables()

+            # Preserve task and agent information
+            self._original_tasks = {
+                name: method
+                for name, method in cls.__dict__.items()
+                if hasattr(method, "is_task") and method.is_task
+            }
+            self._original_agents = {
+                name: method
+                for name, method in cls.__dict__.items()
+                if hasattr(method, "is_agent") and method.is_agent
+            }
+
         @staticmethod
         def load_yaml(config_path: Path):
             try:

View File

@@ -20,6 +20,7 @@ from pydantic import (
 from pydantic_core import PydanticCustomError

 from crewai.agents.agent_builder.base_agent import BaseAgent
+from crewai.tools.base_tool import BaseTool
 from crewai.tasks.output_format import OutputFormat
 from crewai.tasks.task_output import TaskOutput
 from crewai.telemetry.telemetry import Telemetry
@@ -91,7 +92,7 @@ class Task(BaseModel):
     output: Optional[TaskOutput] = Field(
         description="Task output, it's final result after being executed", default=None
     )
-    tools: Optional[List[Any]] = Field(
+    tools: Optional[List[BaseTool]] = Field(
         default_factory=list,
         description="Tools the agent is limited to use for this task.",
     )
@@ -185,7 +186,7 @@ class Task(BaseModel):
         self,
         agent: Optional[BaseAgent] = None,
         context: Optional[str] = None,
-        tools: Optional[List[Any]] = None,
+        tools: Optional[List[BaseTool]] = None,
     ) -> TaskOutput:
         """Execute the task synchronously."""
         return self._execute_core(agent, context, tools)
@@ -202,7 +203,7 @@ class Task(BaseModel):
         self,
         agent: BaseAgent | None = None,
         context: Optional[str] = None,
-        tools: Optional[List[Any]] = None,
+        tools: Optional[List[BaseTool]] = None,
     ) -> Future[TaskOutput]:
         """Execute the task asynchronously."""
         future: Future[TaskOutput] = Future()

View File

@@ -21,7 +21,7 @@ with suppress_warnings():
     from opentelemetry import trace  # noqa: E402
     from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter  # noqa: E402
     from opentelemetry.sdk.resources import SERVICE_NAME, Resource  # noqa: E402
     from opentelemetry.sdk.trace import TracerProvider  # noqa: E402
     from opentelemetry.sdk.trace.export import BatchSpanProcessor  # noqa: E402
@@ -48,6 +48,10 @@ class Telemetry:
     def __init__(self):
         self.ready = False
         self.trace_set = False
+
+        if os.getenv("OTEL_SDK_DISABLED", "false").lower() == "true":
+            return
+
         try:
             telemetry_endpoint = "https://telemetry.crewai.com:4319"
             self.resource = Resource(
@@ -65,7 +69,7 @@ class Telemetry:
             self.provider.add_span_processor(processor)
             self.ready = True
-        except BaseException as e:
+        except Exception as e:
             if isinstance(
                 e,
                 (SystemExit, KeyboardInterrupt, GeneratorExit, asyncio.CancelledError),
@@ -83,404 +87,33 @@ class Telemetry:
            self.ready = False
            self.trace_set = False
    def _safe_telemetry_operation(self, operation):
        if not self.ready:
            return
        try:
            operation()
        except Exception:
            pass
    def crew_creation(self, crew: Crew, inputs: dict[str, Any] | None):
        """Records the creation of a crew."""
if self.ready:
try:
tracer = trace.get_tracer("crewai.telemetry")
span = tracer.start_span("Crew Created")
self._add_attribute(
span,
"crewai_version",
pkg_resources.get_distribution("crewai").version,
)
self._add_attribute(span, "python_version", platform.python_version())
self._add_attribute(span, "crew_key", crew.key)
self._add_attribute(span, "crew_id", str(crew.id))
self._add_attribute(span, "crew_process", crew.process)
self._add_attribute(span, "crew_memory", crew.memory)
self._add_attribute(span, "crew_number_of_tasks", len(crew.tasks))
self._add_attribute(span, "crew_number_of_agents", len(crew.agents))
if crew.share_crew:
self._add_attribute(
span,
"crew_agents",
json.dumps(
[
{
"key": agent.key,
"id": str(agent.id),
"role": agent.role,
"goal": agent.goal,
"backstory": agent.backstory,
"verbose?": agent.verbose,
"max_iter": agent.max_iter,
"max_rpm": agent.max_rpm,
"i18n": agent.i18n.prompt_file,
"function_calling_llm": (
agent.function_calling_llm.model
if agent.function_calling_llm
else ""
),
"llm": agent.llm.model,
"delegation_enabled?": agent.allow_delegation,
"allow_code_execution?": agent.allow_code_execution,
"max_retry_limit": agent.max_retry_limit,
"tools_names": [
tool.name.casefold()
for tool in agent.tools or []
],
}
for agent in crew.agents
]
),
)
self._add_attribute(
span,
"crew_tasks",
json.dumps(
[
{
"key": task.key,
"id": str(task.id),
"description": task.description,
"expected_output": task.expected_output,
"async_execution?": task.async_execution,
"human_input?": task.human_input,
"agent_role": (
task.agent.role if task.agent else "None"
),
"agent_key": task.agent.key if task.agent else None,
"context": (
[task.description for task in task.context]
if task.context
else None
),
"tools_names": [
tool.name.casefold()
for tool in task.tools or []
],
}
for task in crew.tasks
]
),
)
self._add_attribute(span, "platform", platform.platform())
self._add_attribute(span, "platform_release", platform.release())
self._add_attribute(span, "platform_system", platform.system())
self._add_attribute(span, "platform_version", platform.version())
self._add_attribute(span, "cpus", os.cpu_count())
self._add_attribute(
span, "crew_inputs", json.dumps(inputs) if inputs else None
)
else:
self._add_attribute(
span,
"crew_agents",
json.dumps(
[
{
"key": agent.key,
"id": str(agent.id),
"role": agent.role,
"verbose?": agent.verbose,
"max_iter": agent.max_iter,
"max_rpm": agent.max_rpm,
"function_calling_llm": (
agent.function_calling_llm.model
if agent.function_calling_llm
else ""
),
"llm": agent.llm.model,
"delegation_enabled?": agent.allow_delegation,
"allow_code_execution?": agent.allow_code_execution,
"max_retry_limit": agent.max_retry_limit,
"tools_names": [
tool.name.casefold()
for tool in agent.tools or []
],
}
for agent in crew.agents
]
),
)
self._add_attribute(
span,
"crew_tasks",
json.dumps(
[
{
"key": task.key,
"id": str(task.id),
"async_execution?": task.async_execution,
"human_input?": task.human_input,
"agent_role": (
task.agent.role if task.agent else "None"
),
"agent_key": task.agent.key if task.agent else None,
"tools_names": [
tool.name.casefold()
for tool in task.tools or []
],
}
for task in crew.tasks
]
),
)
span.set_status(Status(StatusCode.OK))
span.end()
except Exception:
pass
        def operation():
            tracer = trace.get_tracer("crewai.telemetry")
            span = tracer.start_span("Crew Created")
            self._add_attribute(
                span,
                "crewai_version",
                pkg_resources.get_distribution("crewai").version,
            )
            self._add_attribute(span, "python_version", platform.python_version())
            self._add_attribute(span, "crew_key", crew.key)
            self._add_attribute(span, "crew_id", str(crew.id))
            self._add_attribute(span, "crew_process", crew.process)
            self._add_attribute(span, "crew_memory", crew.memory)
            self._add_attribute(span, "crew_number_of_tasks", len(crew.tasks))
            self._add_attribute(span, "crew_number_of_agents", len(crew.agents))
            if crew.share_crew:

    def task_started(self, crew: Crew, task: Task) -> Span | None:
        """Records task started in a crew."""
        if self.ready:
            try:
                tracer = trace.get_tracer("crewai.telemetry")

                created_span = tracer.start_span("Task Created")

                self._add_attribute(created_span, "crew_key", crew.key)
                self._add_attribute(created_span, "crew_id", str(crew.id))
                self._add_attribute(created_span, "task_key", task.key)
                self._add_attribute(created_span, "task_id", str(task.id))

                if crew.share_crew:
                    self._add_attribute(
                        created_span, "formatted_description", task.description
                    )
                    self._add_attribute(
                        created_span, "formatted_expected_output", task.expected_output
                    )

                created_span.set_status(Status(StatusCode.OK))
                created_span.end()

                span = tracer.start_span("Task Execution")

                self._add_attribute(span, "crew_key", crew.key)
                self._add_attribute(span, "crew_id", str(crew.id))
                self._add_attribute(span, "task_key", task.key)
                self._add_attribute(span, "task_id", str(task.id))

                if crew.share_crew:
                    self._add_attribute(span, "formatted_description", task.description)
                    self._add_attribute(
                        span, "formatted_expected_output", task.expected_output
                    )

                return span
            except Exception:
                pass

        return None
def task_ended(self, span: Span, task: Task, crew: Crew):
"""Records task execution in a crew."""
if self.ready:
try:
if crew.share_crew:
self._add_attribute(
span,
"task_output",
task.output.raw if task.output else "",
)
span.set_status(Status(StatusCode.OK))
span.end()
except Exception:
pass
def tool_repeated_usage(self, llm: Any, tool_name: str, attempts: int):
"""Records the repeated usage 'error' of a tool by an agent."""
if self.ready:
try:
tracer = trace.get_tracer("crewai.telemetry")
span = tracer.start_span("Tool Repeated Usage")
self._add_attribute(
span,
"crewai_version",
pkg_resources.get_distribution("crewai").version,
)
self._add_attribute(span, "tool_name", tool_name)
self._add_attribute(span, "attempts", attempts)
if llm:
self._add_attribute(span, "llm", llm.model)
span.set_status(Status(StatusCode.OK))
span.end()
except Exception:
pass
def tool_usage(self, llm: Any, tool_name: str, attempts: int):
"""Records the usage of a tool by an agent."""
if self.ready:
try:
tracer = trace.get_tracer("crewai.telemetry")
span = tracer.start_span("Tool Usage")
self._add_attribute(
span,
"crewai_version",
pkg_resources.get_distribution("crewai").version,
)
self._add_attribute(span, "tool_name", tool_name)
self._add_attribute(span, "attempts", attempts)
if llm:
self._add_attribute(span, "llm", llm.model)
span.set_status(Status(StatusCode.OK))
span.end()
except Exception:
pass
def tool_usage_error(self, llm: Any):
"""Records the usage of a tool by an agent."""
if self.ready:
try:
tracer = trace.get_tracer("crewai.telemetry")
span = tracer.start_span("Tool Usage Error")
self._add_attribute(
span,
"crewai_version",
pkg_resources.get_distribution("crewai").version,
)
if llm:
self._add_attribute(span, "llm", llm.model)
span.set_status(Status(StatusCode.OK))
span.end()
except Exception:
pass
def individual_test_result_span(
self, crew: Crew, quality: float, exec_time: int, model_name: str
):
if self.ready:
try:
tracer = trace.get_tracer("crewai.telemetry")
span = tracer.start_span("Crew Individual Test Result")
self._add_attribute(
span,
"crewai_version",
pkg_resources.get_distribution("crewai").version,
)
self._add_attribute(span, "crew_key", crew.key)
self._add_attribute(span, "crew_id", str(crew.id))
self._add_attribute(span, "quality", str(quality))
self._add_attribute(span, "exec_time", str(exec_time))
self._add_attribute(span, "model_name", model_name)
span.set_status(Status(StatusCode.OK))
span.end()
except Exception:
pass
def test_execution_span(
self,
crew: Crew,
iterations: int,
inputs: dict[str, Any] | None,
model_name: str,
):
if self.ready:
try:
tracer = trace.get_tracer("crewai.telemetry")
span = tracer.start_span("Crew Test Execution")
self._add_attribute(
span,
"crewai_version",
pkg_resources.get_distribution("crewai").version,
)
self._add_attribute(span, "crew_key", crew.key)
self._add_attribute(span, "crew_id", str(crew.id))
self._add_attribute(span, "iterations", str(iterations))
self._add_attribute(span, "model_name", model_name)
if crew.share_crew:
self._add_attribute(
span, "inputs", json.dumps(inputs) if inputs else None
)
span.set_status(Status(StatusCode.OK))
span.end()
except Exception:
pass
def deploy_signup_error_span(self):
if self.ready:
try:
tracer = trace.get_tracer("crewai.telemetry")
span = tracer.start_span("Deploy Signup Error")
span.set_status(Status(StatusCode.OK))
span.end()
except Exception:
pass
def start_deployment_span(self, uuid: Optional[str] = None):
if self.ready:
try:
tracer = trace.get_tracer("crewai.telemetry")
span = tracer.start_span("Start Deployment")
if uuid:
self._add_attribute(span, "uuid", uuid)
span.set_status(Status(StatusCode.OK))
span.end()
except Exception:
pass
def create_crew_deployment_span(self):
if self.ready:
try:
tracer = trace.get_tracer("crewai.telemetry")
span = tracer.start_span("Create Crew Deployment")
span.set_status(Status(StatusCode.OK))
span.end()
except Exception:
pass
def get_crew_logs_span(self, uuid: Optional[str], log_type: str = "deployment"):
if self.ready:
try:
tracer = trace.get_tracer("crewai.telemetry")
span = tracer.start_span("Get Crew Logs")
self._add_attribute(span, "log_type", log_type)
if uuid:
self._add_attribute(span, "uuid", uuid)
span.set_status(Status(StatusCode.OK))
span.end()
except Exception:
pass
def remove_crew_span(self, uuid: Optional[str] = None):
if self.ready:
try:
tracer = trace.get_tracer("crewai.telemetry")
span = tracer.start_span("Remove Crew")
if uuid:
self._add_attribute(span, "uuid", uuid)
span.set_status(Status(StatusCode.OK))
span.end()
except Exception:
pass
def crew_execution_span(self, crew: Crew, inputs: dict[str, Any] | None):
"""Records the complete execution of a crew.
This is only collected if the user has opted-in to share the crew.
"""
self.crew_creation(crew, inputs)
if (self.ready) and (crew.share_crew):
try:
tracer = trace.get_tracer("crewai.telemetry")
span = tracer.start_span("Crew Execution")
self._add_attribute(
span,
"crewai_version",
pkg_resources.get_distribution("crewai").version,
)
self._add_attribute(span, "crew_key", crew.key)
self._add_attribute(span, "crew_id", str(crew.id))
self._add_attribute(
span, "crew_inputs", json.dumps(inputs) if inputs else None
)
                self._add_attribute(
                    span,
                    "crew_agents",
@@ -496,8 +129,15 @@ class Telemetry:
                            "max_iter": agent.max_iter,
                            "max_rpm": agent.max_rpm,
                            "i18n": agent.i18n.prompt_file,
                            "function_calling_llm": (
                                agent.function_calling_llm.model
                                if agent.function_calling_llm
                                else ""
                            ),
                            "llm": agent.llm.model,
                            "delegation_enabled?": agent.allow_delegation,
                            "allow_code_execution?": agent.allow_code_execution,
                            "max_retry_limit": agent.max_retry_limit,
                            "tools_names": [
                                tool.name.casefold() for tool in agent.tools or []
                            ],
@@ -512,12 +152,15 @@ class Telemetry:
                    json.dumps(
                        [
                            {
                                "key": task.key,
                                "id": str(task.id),
                                "description": task.description,
                                "expected_output": task.expected_output,
                                "async_execution?": task.async_execution,
                                "human_input?": task.human_input,
                                "agent_role": (
                                    task.agent.role if task.agent else "None"
                                ),
                                "agent_key": task.agent.key if task.agent else None,
                                "context": (
                                    [task.description for task in task.context]
@@ -532,78 +175,433 @@ class Telemetry:
                        ]
                    ),
                )

                return span
            except Exception:
                pass

    def end_crew(self, crew, final_string_output):
        if (self.ready) and (crew.share_crew):
            try:
                self._add_attribute(
                    crew._execution_span,
                    "crewai_version",
                    pkg_resources.get_distribution("crewai").version,
                )
                self._add_attribute(
                    crew._execution_span, "crew_output", final_string_output
                )
                self._add_attribute(
                    crew._execution_span,
                    "crew_tasks_output",
                    json.dumps(
                        [
                            {
                                "id": str(task.id),
                                "description": task.description,
                                "output": task.output.raw_output,
                            }
                            for task in crew.tasks
                        ]
                    ),
                )
                crew._execution_span.set_status(Status(StatusCode.OK))
                crew._execution_span.end()
            except Exception:
                pass

                self._add_attribute(span, "platform", platform.platform())
                self._add_attribute(span, "platform_release", platform.release())
                self._add_attribute(span, "platform_system", platform.system())
                self._add_attribute(span, "platform_version", platform.version())
                self._add_attribute(span, "cpus", os.cpu_count())
                self._add_attribute(
                    span, "crew_inputs", json.dumps(inputs) if inputs else None
                )
            else:
                self._add_attribute(
                    span,
                    "crew_agents",
                    json.dumps(
                        [
                            {
                                "key": agent.key,
                                "id": str(agent.id),
                                "role": agent.role,
                                "verbose?": agent.verbose,
                                "max_iter": agent.max_iter,
                                "max_rpm": agent.max_rpm,
                                "function_calling_llm": (
                                    agent.function_calling_llm.model
                                    if agent.function_calling_llm
                                    else ""
                                ),
                                "llm": agent.llm.model,
                                "delegation_enabled?": agent.allow_delegation,
                                "allow_code_execution?": agent.allow_code_execution,
                                "max_retry_limit": agent.max_retry_limit,
                                "tools_names": [
                                    tool.name.casefold() for tool in agent.tools or []
                                ],
                            }
                            for agent in crew.agents
                        ]
                    ),
                )
                self._add_attribute(
                    span,
                    "crew_tasks",
                    json.dumps(
                        [
                            {
                                "key": task.key,
                                "id": str(task.id),
                                "async_execution?": task.async_execution,
                                "human_input?": task.human_input,
                                "agent_role": (
                                    task.agent.role if task.agent else "None"
                                ),
                                "agent_key": task.agent.key if task.agent else None,
                                "tools_names": [
                                    tool.name.casefold() for tool in task.tools or []
                                ],
                            }
                            for task in crew.tasks
                        ]
                    ),
                )
            span.set_status(Status(StatusCode.OK))
            span.end()

        self._safe_telemetry_operation(operation)
def task_started(self, crew: Crew, task: Task) -> Span | None:
"""Records task started in a crew."""
def operation():
tracer = trace.get_tracer("crewai.telemetry")
created_span = tracer.start_span("Task Created")
self._add_attribute(created_span, "crew_key", crew.key)
self._add_attribute(created_span, "crew_id", str(crew.id))
self._add_attribute(created_span, "task_key", task.key)
self._add_attribute(created_span, "task_id", str(task.id))
if crew.share_crew:
self._add_attribute(
created_span, "formatted_description", task.description
)
self._add_attribute(
created_span, "formatted_expected_output", task.expected_output
)
created_span.set_status(Status(StatusCode.OK))
created_span.end()
span = tracer.start_span("Task Execution")
self._add_attribute(span, "crew_key", crew.key)
self._add_attribute(span, "crew_id", str(crew.id))
self._add_attribute(span, "task_key", task.key)
self._add_attribute(span, "task_id", str(task.id))
if crew.share_crew:
self._add_attribute(span, "formatted_description", task.description)
self._add_attribute(
span, "formatted_expected_output", task.expected_output
)
return span
return self._safe_telemetry_operation(operation)
def task_ended(self, span: Span, task: Task, crew: Crew):
"""Records task execution in a crew."""
def operation():
if crew.share_crew:
self._add_attribute(
span,
"task_output",
task.output.raw if task.output else "",
)
span.set_status(Status(StatusCode.OK))
span.end()
self._safe_telemetry_operation(operation)
def tool_repeated_usage(self, llm: Any, tool_name: str, attempts: int):
"""Records the repeated usage 'error' of a tool by an agent."""
def operation():
tracer = trace.get_tracer("crewai.telemetry")
span = tracer.start_span("Tool Repeated Usage")
self._add_attribute(
span,
"crewai_version",
pkg_resources.get_distribution("crewai").version,
)
self._add_attribute(span, "tool_name", tool_name)
self._add_attribute(span, "attempts", attempts)
if llm:
self._add_attribute(span, "llm", llm.model)
span.set_status(Status(StatusCode.OK))
span.end()
self._safe_telemetry_operation(operation)
def tool_usage(self, llm: Any, tool_name: str, attempts: int):
"""Records the usage of a tool by an agent."""
def operation():
tracer = trace.get_tracer("crewai.telemetry")
span = tracer.start_span("Tool Usage")
self._add_attribute(
span,
"crewai_version",
pkg_resources.get_distribution("crewai").version,
)
self._add_attribute(span, "tool_name", tool_name)
self._add_attribute(span, "attempts", attempts)
if llm:
self._add_attribute(span, "llm", llm.model)
span.set_status(Status(StatusCode.OK))
span.end()
self._safe_telemetry_operation(operation)
def tool_usage_error(self, llm: Any):
"""Records the usage of a tool by an agent."""
def operation():
tracer = trace.get_tracer("crewai.telemetry")
span = tracer.start_span("Tool Usage Error")
self._add_attribute(
span,
"crewai_version",
pkg_resources.get_distribution("crewai").version,
)
if llm:
self._add_attribute(span, "llm", llm.model)
span.set_status(Status(StatusCode.OK))
span.end()
self._safe_telemetry_operation(operation)
def individual_test_result_span(
self, crew: Crew, quality: float, exec_time: int, model_name: str
):
def operation():
tracer = trace.get_tracer("crewai.telemetry")
span = tracer.start_span("Crew Individual Test Result")
self._add_attribute(
span,
"crewai_version",
pkg_resources.get_distribution("crewai").version,
)
self._add_attribute(span, "crew_key", crew.key)
self._add_attribute(span, "crew_id", str(crew.id))
self._add_attribute(span, "quality", str(quality))
self._add_attribute(span, "exec_time", str(exec_time))
self._add_attribute(span, "model_name", model_name)
span.set_status(Status(StatusCode.OK))
span.end()
self._safe_telemetry_operation(operation)
def test_execution_span(
self,
crew: Crew,
iterations: int,
inputs: dict[str, Any] | None,
model_name: str,
):
def operation():
tracer = trace.get_tracer("crewai.telemetry")
span = tracer.start_span("Crew Test Execution")
self._add_attribute(
span,
"crewai_version",
pkg_resources.get_distribution("crewai").version,
)
self._add_attribute(span, "crew_key", crew.key)
self._add_attribute(span, "crew_id", str(crew.id))
self._add_attribute(span, "iterations", str(iterations))
self._add_attribute(span, "model_name", model_name)
if crew.share_crew:
self._add_attribute(
span, "inputs", json.dumps(inputs) if inputs else None
)
span.set_status(Status(StatusCode.OK))
span.end()
self._safe_telemetry_operation(operation)
def deploy_signup_error_span(self):
def operation():
tracer = trace.get_tracer("crewai.telemetry")
span = tracer.start_span("Deploy Signup Error")
span.set_status(Status(StatusCode.OK))
span.end()
self._safe_telemetry_operation(operation)
def start_deployment_span(self, uuid: Optional[str] = None):
def operation():
tracer = trace.get_tracer("crewai.telemetry")
span = tracer.start_span("Start Deployment")
if uuid:
self._add_attribute(span, "uuid", uuid)
span.set_status(Status(StatusCode.OK))
span.end()
self._safe_telemetry_operation(operation)
def create_crew_deployment_span(self):
def operation():
tracer = trace.get_tracer("crewai.telemetry")
span = tracer.start_span("Create Crew Deployment")
span.set_status(Status(StatusCode.OK))
span.end()
self._safe_telemetry_operation(operation)
def get_crew_logs_span(self, uuid: Optional[str], log_type: str = "deployment"):
def operation():
tracer = trace.get_tracer("crewai.telemetry")
span = tracer.start_span("Get Crew Logs")
self._add_attribute(span, "log_type", log_type)
if uuid:
self._add_attribute(span, "uuid", uuid)
span.set_status(Status(StatusCode.OK))
span.end()
self._safe_telemetry_operation(operation)
def remove_crew_span(self, uuid: Optional[str] = None):
def operation():
tracer = trace.get_tracer("crewai.telemetry")
span = tracer.start_span("Remove Crew")
if uuid:
self._add_attribute(span, "uuid", uuid)
span.set_status(Status(StatusCode.OK))
span.end()
self._safe_telemetry_operation(operation)
def crew_execution_span(self, crew: Crew, inputs: dict[str, Any] | None):
"""Records the complete execution of a crew.
This is only collected if the user has opted-in to share the crew.
"""
self.crew_creation(crew, inputs)
def operation():
tracer = trace.get_tracer("crewai.telemetry")
span = tracer.start_span("Crew Execution")
self._add_attribute(
span,
"crewai_version",
pkg_resources.get_distribution("crewai").version,
)
self._add_attribute(span, "crew_key", crew.key)
self._add_attribute(span, "crew_id", str(crew.id))
self._add_attribute(
span, "crew_inputs", json.dumps(inputs) if inputs else None
)
self._add_attribute(
span,
"crew_agents",
json.dumps(
[
{
"key": agent.key,
"id": str(agent.id),
"role": agent.role,
"goal": agent.goal,
"backstory": agent.backstory,
"verbose?": agent.verbose,
"max_iter": agent.max_iter,
"max_rpm": agent.max_rpm,
"i18n": agent.i18n.prompt_file,
"llm": agent.llm.model,
"delegation_enabled?": agent.allow_delegation,
"tools_names": [
tool.name.casefold() for tool in agent.tools or []
],
}
for agent in crew.agents
]
),
)
self._add_attribute(
span,
"crew_tasks",
json.dumps(
[
{
"id": str(task.id),
"description": task.description,
"expected_output": task.expected_output,
"async_execution?": task.async_execution,
"human_input?": task.human_input,
"agent_role": task.agent.role if task.agent else "None",
"agent_key": task.agent.key if task.agent else None,
"context": (
[task.description for task in task.context]
if task.context
else None
),
"tools_names": [
tool.name.casefold() for tool in task.tools or []
],
}
for task in crew.tasks
]
),
)
return span
if crew.share_crew:
return self._safe_telemetry_operation(operation)
return None
def end_crew(self, crew, final_string_output):
def operation():
self._add_attribute(
crew._execution_span,
"crewai_version",
pkg_resources.get_distribution("crewai").version,
)
self._add_attribute(
crew._execution_span, "crew_output", final_string_output
)
self._add_attribute(
crew._execution_span,
"crew_tasks_output",
json.dumps(
[
{
"id": str(task.id),
"description": task.description,
"output": task.output.raw_output,
}
for task in crew.tasks
]
),
)
crew._execution_span.set_status(Status(StatusCode.OK))
crew._execution_span.end()
if crew.share_crew:
self._safe_telemetry_operation(operation)
    def _add_attribute(self, span, key, value):
        """Add an attribute to a span."""
        try:
            return span.set_attribute(key, value)
        except Exception:
            pass

    def flow_creation_span(self, flow_name: str):
        if self.ready:
            try:
                tracer = trace.get_tracer("crewai.telemetry")
                span = tracer.start_span("Flow Creation")
                self._add_attribute(span, "flow_name", flow_name)
                span.set_status(Status(StatusCode.OK))
                span.end()
            except Exception:
                pass

    def flow_plotting_span(self, flow_name: str, node_names: list[str]):
        if self.ready:
            try:
                tracer = trace.get_tracer("crewai.telemetry")
                span = tracer.start_span("Flow Plotting")
                self._add_attribute(span, "flow_name", flow_name)
                self._add_attribute(span, "node_names", json.dumps(node_names))
                span.set_status(Status(StatusCode.OK))
                span.end()
            except Exception:
                pass

    def flow_execution_span(self, flow_name: str, node_names: list[str]):
        if self.ready:
            try:
                tracer = trace.get_tracer("crewai.telemetry")
                span = tracer.start_span("Flow Execution")
                self._add_attribute(span, "flow_name", flow_name)
                self._add_attribute(span, "node_names", json.dumps(node_names))
                span.set_status(Status(StatusCode.OK))
                span.end()
            except Exception:
                pass

    def _add_attribute(self, span, key, value):
        """Add an attribute to a span."""

        def operation():
            return span.set_attribute(key, value)

        self._safe_telemetry_operation(operation)

    def flow_creation_span(self, flow_name: str):
        def operation():
            tracer = trace.get_tracer("crewai.telemetry")
            span = tracer.start_span("Flow Creation")
            self._add_attribute(span, "flow_name", flow_name)
            span.set_status(Status(StatusCode.OK))
            span.end()

        self._safe_telemetry_operation(operation)

    def flow_plotting_span(self, flow_name: str, node_names: list[str]):
        def operation():
            tracer = trace.get_tracer("crewai.telemetry")
            span = tracer.start_span("Flow Plotting")
            self._add_attribute(span, "flow_name", flow_name)
            self._add_attribute(span, "node_names", json.dumps(node_names))
            span.set_status(Status(StatusCode.OK))
            span.end()

        self._safe_telemetry_operation(operation)

    def flow_execution_span(self, flow_name: str, node_names: list[str]):
        def operation():
            tracer = trace.get_tracer("crewai.telemetry")
            span = tracer.start_span("Flow Execution")
            self._add_attribute(span, "flow_name", flow_name)
            self._add_attribute(span, "node_names", json.dumps(node_names))
            span.set_status(Status(StatusCode.OK))
            span.end()

        self._safe_telemetry_operation(operation)
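
The shape of the refactor above, condensed into a standalone sketch (a simplified stand-in, not the real Telemetry class): every recording method builds a closure and hands it to one guard that owns the readiness check and exception swallowing.

from typing import Callable


class SafeTelemetry:
    def __init__(self, ready: bool = True) -> None:
        self.ready = ready

    def _safe_telemetry_operation(self, operation: Callable[[], None]) -> None:
        if not self.ready:
            return
        try:
            operation()
        except Exception:
            pass  # telemetry must never break the caller

    def record_event(self, name: str) -> None:
        def operation() -> None:
            print(f"span: {name}")

        self._safe_telemetry_operation(operation)


SafeTelemetry(ready=True).record_event("Crew Created")   # prints
SafeTelemetry(ready=False).record_event("Crew Created")  # silently skipped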

View File

@@ -0,0 +1 @@
from .base_tool import BaseTool, tool

View File

@@ -1,25 +0,0 @@
from crewai.agents.agent_builder.utilities.base_agent_tool import BaseAgentTools
class AgentTools(BaseAgentTools):
"""Default tools around agent delegation"""
def tools(self):
from langchain.tools import StructuredTool
coworkers = ", ".join([f"{agent.role}" for agent in self.agents])
tools = [
StructuredTool.from_function(
func=self.delegate_work,
name="Delegate work to coworker",
description=self.i18n.tools("delegate_work").format(
coworkers=coworkers
),
),
StructuredTool.from_function(
func=self.ask_question,
name="Ask question to coworker",
description=self.i18n.tools("ask_question").format(coworkers=coworkers),
),
]
return tools

View File

@@ -0,0 +1,32 @@
from crewai.tools.base_tool import BaseTool
from crewai.agents.agent_builder.base_agent import BaseAgent
from crewai.utilities import I18N
from .delegate_work_tool import DelegateWorkTool
from .ask_question_tool import AskQuestionTool
class AgentTools:
"""Manager class for agent-related tools"""
def __init__(self, agents: list[BaseAgent], i18n: I18N = I18N()):
self.agents = agents
self.i18n = i18n
def tools(self) -> list[BaseTool]:
"""Get all available agent tools"""
coworkers = ", ".join([f"{agent.role}" for agent in self.agents])
delegate_tool = DelegateWorkTool(
agents=self.agents,
i18n=self.i18n,
description=self.i18n.tools("delegate_work").format(coworkers=coworkers),
)
ask_tool = AskQuestionTool(
agents=self.agents,
i18n=self.i18n,
description=self.i18n.tools("ask_question").format(coworkers=coworkers),
)
return [delegate_tool, ask_tool]
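
A hypothetical wiring of the new manager class (the import path is assumed from the file layout), showing that tools() now returns typed crewAI BaseTool instances rather than LangChain StructuredTool objects:

from crewai import Agent
from crewai.tools.agent_tools.agent_tools import AgentTools  # path assumed

researcher = Agent(role="Researcher", goal="Find facts", backstory="...")
writer = Agent(role="Writer", goal="Summarize findings", backstory="...")

for t in AgentTools(agents=[researcher, writer]).tools():
    print(t.name)  # "Delegate work to coworker", "Ask question to coworker"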

View File

@@ -0,0 +1,26 @@
from crewai.tools.agent_tools.base_agent_tools import BaseAgentTool
from typing import Optional
from pydantic import BaseModel, Field
class AskQuestionToolSchema(BaseModel):
question: str = Field(..., description="The question to ask")
context: str = Field(..., description="The context for the question")
coworker: str = Field(..., description="The role/name of the coworker to ask")
class AskQuestionTool(BaseAgentTool):
"""Tool for asking questions to coworkers"""
name: str = "Ask question to coworker"
args_schema: type[BaseModel] = AskQuestionToolSchema
def _run(
self,
question: str,
context: str,
coworker: Optional[str] = None,
**kwargs,
) -> str:
coworker = self._get_coworker(coworker, **kwargs)
return self._execute(coworker, question, context)

View File

@@ -1,22 +1,19 @@
-from abc import ABC, abstractmethod
-from typing import List, Optional, Union
+from typing import Optional, Union

-from pydantic import BaseModel, Field
+from pydantic import Field

+from crewai.tools.base_tool import BaseTool
 from crewai.agents.agent_builder.base_agent import BaseAgent
 from crewai.task import Task
 from crewai.utilities import I18N


-class BaseAgentTools(BaseModel, ABC):
-    """Default tools around agent delegation"""
+class BaseAgentTool(BaseTool):
+    """Base class for agent-related tools"""

-    agents: List[BaseAgent] = Field(description="List of agents in this crew.")
-    i18n: I18N = Field(default=I18N(), description="Internationalization settings.")
-
-    @abstractmethod
-    def tools(self):
-        pass
+    agents: list[BaseAgent] = Field(description="List of available agents")
+    i18n: I18N = Field(
+        default_factory=I18N, description="Internationalization settings"
+    )

     def _get_coworker(self, coworker: Optional[str], **kwargs) -> Optional[str]:
         coworker = coworker or kwargs.get("co_worker") or kwargs.get("coworker")
@@ -24,27 +21,11 @@ class BaseAgentTools(BaseModel, ABC):
         is_list = coworker.startswith("[") and coworker.endswith("]")
         if is_list:
             coworker = coworker[1:-1].split(",")[0]
         return coworker

-    def delegate_work(
-        self, task: str, context: str, coworker: Optional[str] = None, **kwargs
-    ):
-        """Useful to delegate a specific task to a coworker passing all necessary context and names."""
-        coworker = self._get_coworker(coworker, **kwargs)
-        return self._execute(coworker, task, context)
-
-    def ask_question(
-        self, question: str, context: str, coworker: Optional[str] = None, **kwargs
-    ):
-        """Useful to ask a question, opinion or take from a coworker passing all necessary context and names."""
-        coworker = self._get_coworker(coworker, **kwargs)
-        return self._execute(coworker, question, context)
-
     def _execute(
         self, agent_name: Union[str, None], task: str, context: Union[str, None]
-    ):
-        """Execute the command."""
+    ) -> str:
         try:
             if agent_name is None:
                 agent_name = ""
@@ -57,7 +38,6 @@ class BaseAgentTools(BaseModel, ABC):
             # when it should look like this:
             # {"task": "....", "coworker": "...."}
             agent_name = agent_name.casefold().replace('"', "").replace("\n", "")
             agent = [  # type: ignore # Incompatible types in assignment (expression has type "list[BaseAgent]", variable has type "str | None")
                 available_agent
                 for available_agent in self.agents
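
The coworker normalization kept by the new base class is easy to see in isolation. A dependency-free sketch of the same logic, needed because LLMs sometimes emit a list-like string instead of a single role name:

from typing import Optional


def get_coworker(coworker: Optional[str], **kwargs) -> Optional[str]:
    coworker = coworker or kwargs.get("co_worker") or kwargs.get("coworker")
    if coworker and coworker.startswith("[") and coworker.endswith("]"):
        coworker = coworker[1:-1].split(",")[0]  # keep the first listed role
    return coworker


print(get_coworker("[Researcher, Writer]"))    # -> "Researcher"
print(get_coworker(None, co_worker="Writer"))  # -> "Writer"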

View File

@@ -0,0 +1,29 @@
from crewai.tools.agent_tools.base_agent_tools import BaseAgentTool
from typing import Optional
from pydantic import BaseModel, Field
class DelegateWorkToolSchema(BaseModel):
task: str = Field(..., description="The task to delegate")
context: str = Field(..., description="The context for the task")
coworker: str = Field(
..., description="The role/name of the coworker to delegate to"
)
class DelegateWorkTool(BaseAgentTool):
"""Tool for delegating work to coworkers"""
name: str = "Delegate work to coworker"
args_schema: type[BaseModel] = DelegateWorkToolSchema
def _run(
self,
task: str,
context: str,
coworker: Optional[str] = None,
**kwargs,
) -> str:
coworker = self._get_coworker(coworker, **kwargs)
return self._execute(coworker, task, context)

View File

@@ -0,0 +1,186 @@
from abc import ABC, abstractmethod
from typing import Any, Callable, Type, get_args, get_origin
from langchain_core.tools import StructuredTool
from pydantic import BaseModel, ConfigDict, Field, validator
from pydantic import BaseModel as PydanticBaseModel
class BaseTool(BaseModel, ABC):
class _ArgsSchemaPlaceholder(PydanticBaseModel):
pass
model_config = ConfigDict()
name: str
"""The unique name of the tool that clearly communicates its purpose."""
description: str
"""Used to tell the model how/when/why to use the tool."""
args_schema: Type[PydanticBaseModel] = Field(default_factory=_ArgsSchemaPlaceholder)
"""The schema for the arguments that the tool accepts."""
description_updated: bool = False
"""Flag to check if the description has been updated."""
cache_function: Callable = lambda _args=None, _result=None: True
"""Function that will be used to determine if the tool should be cached, should return a boolean. If None, the tool will be cached."""
result_as_answer: bool = False
"""Flag to check if the tool should be the final agent answer."""
@validator("args_schema", always=True, pre=True)
def _default_args_schema(
cls, v: Type[PydanticBaseModel]
) -> Type[PydanticBaseModel]:
if not isinstance(v, cls._ArgsSchemaPlaceholder):
return v
return type(
f"{cls.__name__}Schema",
(PydanticBaseModel,),
{
"__annotations__": {
k: v for k, v in cls._run.__annotations__.items() if k != "return"
},
},
)
def model_post_init(self, __context: Any) -> None:
self._generate_description()
super().model_post_init(__context)
def run(
self,
*args: Any,
**kwargs: Any,
) -> Any:
print(f"Using Tool: {self.name}")
return self._run(*args, **kwargs)
@abstractmethod
def _run(
self,
*args: Any,
**kwargs: Any,
) -> Any:
"""Here goes the actual implementation of the tool."""
def to_langchain(self) -> StructuredTool:
self._set_args_schema()
return StructuredTool(
name=self.name,
description=self.description,
args_schema=self.args_schema,
func=self._run,
)
@classmethod
def from_langchain(cls, tool: StructuredTool) -> "BaseTool":
if cls == Tool:
if tool.func is None:
raise ValueError("StructuredTool must have a callable 'func'")
return Tool(
name=tool.name,
description=tool.description,
args_schema=tool.args_schema,
func=tool.func,
)
raise NotImplementedError(f"from_langchain not implemented for {cls.__name__}")
def _set_args_schema(self):
if self.args_schema is None:
class_name = f"{self.__class__.__name__}Schema"
self.args_schema = type(
class_name,
(PydanticBaseModel,),
{
"__annotations__": {
k: v
for k, v in self._run.__annotations__.items()
if k != "return"
},
},
)
def _generate_description(self):
args_schema = {
name: {
"description": field.description,
"type": BaseTool._get_arg_annotations(field.annotation),
}
for name, field in self.args_schema.model_fields.items()
}
self.description = f"Tool Name: {self.name}\nTool Arguments: {args_schema}\nTool Description: {self.description}"
@staticmethod
def _get_arg_annotations(annotation: type[Any] | None) -> str:
if annotation is None:
return "None"
origin = get_origin(annotation)
args = get_args(annotation)
if origin is None:
return (
annotation.__name__
if hasattr(annotation, "__name__")
else str(annotation)
)
if args:
args_str = ", ".join(BaseTool._get_arg_annotations(arg) for arg in args)
return f"{origin.__name__}[{args_str}]"
return origin.__name__
class Tool(BaseTool):
func: Callable
"""The function that will be executed when the tool is called."""
def _run(self, *args: Any, **kwargs: Any) -> Any:
return self.func(*args, **kwargs)
def to_langchain(
tools: list[BaseTool | StructuredTool],
) -> list[StructuredTool]:
return [t.to_langchain() if isinstance(t, BaseTool) else t for t in tools]
def tool(*args):
"""
Decorator to create a tool from a function.
"""
def _make_with_name(tool_name: str) -> Callable:
def _make_tool(f: Callable) -> BaseTool:
if f.__doc__ is None:
raise ValueError("Function must have a docstring")
if f.__annotations__ is None:
raise ValueError("Function must have type annotations")
class_name = "".join(tool_name.split()).title()
args_schema = type(
class_name,
(PydanticBaseModel,),
{
"__annotations__": {
k: v for k, v in f.__annotations__.items() if k != "return"
},
},
)
return Tool(
name=tool_name,
description=f.__doc__,
func=f,
args_schema=args_schema,
)
return _make_tool
if len(args) == 1 and callable(args[0]):
return _make_with_name(args[0].__name__)(args[0])
if len(args) == 1 and isinstance(args[0], str):
return _make_with_name(args[0])
raise ValueError("Invalid arguments")
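
A short usage sketch of the decorator defined above; the docstring and type annotations are mandatory because they become the tool description and args schema:

from crewai.tools import tool


@tool("Multiplier")
def multiply(first_number: int, second_number: int) -> int:
    """Multiplies two numbers and returns the result."""
    return first_number * second_number


print(multiply.name)                                  # "Multiplier"
print(multiply.run(first_number=2, second_number=6))  # prints the tool banner, then 12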

View File

@@ -6,14 +6,14 @@ from difflib import SequenceMatcher
 from textwrap import dedent
 from typing import Any, List, Union

-import crewai.utilities.events as events
 from crewai.agents.tools_handler import ToolsHandler
 from crewai.task import Task
 from crewai.telemetry import Telemetry
+from crewai.tools import BaseTool
 from crewai.tools.tool_calling import InstructorToolCalling, ToolCalling
 from crewai.tools.tool_usage_events import ToolUsageError, ToolUsageFinished
 from crewai.utilities import I18N, Converter, ConverterError, Printer
+import crewai.utilities.events as events

 agentops = None
 if os.environ.get("AGENTOPS_API_KEY"):
@@ -50,7 +50,7 @@ class ToolUsage:
     def __init__(
         self,
         tools_handler: ToolsHandler,
-        tools: List[Any],
+        tools: List[BaseTool],
         original_tools: List[Any],
         tools_description: str,
         tools_names: str,
@@ -299,19 +299,7 @@ class ToolUsage:
         """Render the tool name and description in plain text."""
         descriptions = []
         for tool in self.tools:
-            args = {
-                k: {k2: v2 for k2, v2 in v.items() if k2 in ["description", "type"]}
-                for k, v in tool.args.items()
-            }
-            descriptions.append(
-                "\n".join(
-                    [
-                        f"Tool Name: {tool.name.lower()}",
-                        f"Tool Description: {tool.description}",
-                        f"Tool Arguments: {args}",
-                    ]
-                )
-            )
+            descriptions.append(tool.description)
         return "\n--\n".join(descriptions)

     def _function_calling(self, tool_string: str):
def _function_calling(self, tool_string: str): def _function_calling(self, tool_string: str):

View File

@@ -8,6 +8,7 @@ class UsageMetrics(BaseModel):
     Attributes:
         total_tokens: Total number of tokens used.
         prompt_tokens: Number of tokens used in prompts.
+        cached_prompt_tokens: Number of cached prompt tokens used.
         completion_tokens: Number of tokens used in completions.
         successful_requests: Number of successful requests made.
     """
@@ -16,6 +17,9 @@ class UsageMetrics(BaseModel):
     prompt_tokens: int = Field(
         default=0, description="Number of tokens used in prompts."
     )
+    cached_prompt_tokens: int = Field(
+        default=0, description="Number of cached prompt tokens used."
+    )
     completion_tokens: int = Field(
         default=0, description="Number of tokens used in completions."
     )
@@ -32,5 +36,6 @@ class UsageMetrics(BaseModel):
         """
         self.total_tokens += usage_metrics.total_tokens
         self.prompt_tokens += usage_metrics.prompt_tokens
+        self.cached_prompt_tokens += usage_metrics.cached_prompt_tokens
         self.completion_tokens += usage_metrics.completion_tokens
         self.successful_requests += usage_metrics.successful_requests
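
In use (the add_usage_metrics method name and the crewai.types.usage_metrics path are assumptions from the surrounding code), and note that, per the "do not include cached on total" commit, cached tokens accumulate separately from total_tokens:

from crewai.types.usage_metrics import UsageMetrics  # path assumed

a = UsageMetrics(total_tokens=100, prompt_tokens=80,
                 cached_prompt_tokens=30, completion_tokens=20,
                 successful_requests=1)
b = UsageMetrics(total_tokens=50, prompt_tokens=40,
                 cached_prompt_tokens=10, completion_tokens=10,
                 successful_requests=1)

a.add_usage_metrics(b)  # assumed method name for the summing logic above
print(a.cached_prompt_tokens)  # 40
print(a.total_tokens)          # 150, cached tokens are not folded in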

View File

@@ -2,13 +2,14 @@ from datetime import datetime, date
 import json
 from uuid import UUID

 from pydantic import BaseModel
+from decimal import Decimal


 class CrewJSONEncoder(json.JSONEncoder):
     def default(self, obj):
         if isinstance(obj, BaseModel):
             return self._handle_pydantic_model(obj)
-        elif isinstance(obj, UUID):
+        elif isinstance(obj, UUID) or isinstance(obj, Decimal):
             return str(obj)

         elif isinstance(obj, datetime) or isinstance(obj, date):
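
A minimal check of the new Decimal branch (module path assumed); such values now serialize as strings instead of raising TypeError:

import json
from decimal import Decimal

from crewai.utilities.crew_json_encoder import CrewJSONEncoder  # path assumed

print(json.dumps({"cost": Decimal("0.25")}, cls=CrewJSONEncoder))
# {"cost": "0.25"}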

View File

@@ -16,7 +16,11 @@ class FileHandler:
     def log(self, **kwargs):
         now = datetime.now().strftime("%Y-%m-%d %H:%M:%S")
-        message = f"{now}: " + ", ".join([f"{key}=\"{value}\"" for key, value in kwargs.items()]) + "\n"
+        message = (
+            f"{now}: "
+            + ", ".join([f'{key}="{value}"' for key, value in kwargs.items()])
+            + "\n"
+        )
         with open(self._path, "a", encoding="utf-8") as file:
             file.write(message + "\n")
@@ -63,7 +67,7 @@ class PickleHandler:
         with open(self.file_path, "rb") as file:
             try:
-                return pickle.load(file)
+                return pickle.load(file)  # nosec
             except EOFError:
                 return {}  # Return an empty dictionary if the file is empty or corrupted
             except Exception:

View File

@@ -1,5 +1,5 @@
 from litellm.integrations.custom_logger import CustomLogger
+from litellm.types.utils import Usage

 from crewai.agents.agent_builder.utilities.base_token_process import TokenProcess
@@ -11,8 +11,11 @@ class TokenCalcHandler(CustomLogger):
         if self.token_cost_process is None:
             return

+        usage: Usage = response_obj["usage"]
         self.token_cost_process.sum_successful_requests(1)
-        self.token_cost_process.sum_prompt_tokens(response_obj["usage"].prompt_tokens)
-        self.token_cost_process.sum_completion_tokens(
-            response_obj["usage"].completion_tokens
-        )
+        self.token_cost_process.sum_prompt_tokens(usage.prompt_tokens)
+        self.token_cost_process.sum_completion_tokens(usage.completion_tokens)
+        if usage.prompt_tokens_details:
+            self.token_cost_process.sum_cached_prompt_tokens(
+                usage.prompt_tokens_details.cached_tokens
+            )
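
A dependency-free sketch of the response shape the handler now consumes; the real objects come from litellm, and these stand-ins only mirror the fields the handler touches:

from dataclasses import dataclass
from typing import Optional


@dataclass
class PromptTokensDetails:
    cached_tokens: int = 0


@dataclass
class Usage:
    prompt_tokens: int = 0
    completion_tokens: int = 0
    prompt_tokens_details: Optional[PromptTokensDetails] = None


usage = Usage(prompt_tokens=175, completion_tokens=21,
              prompt_tokens_details=PromptTokensDetails(cached_tokens=128))

if usage.prompt_tokens_details:
    print(usage.prompt_tokens_details.cached_tokens)  # 128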

View File

@@ -5,7 +5,6 @@ from unittest import mock
 from unittest.mock import patch

 import pytest
-from crewai_tools import tool

 from crewai import Agent, Crew, Task
 from crewai.agents.cache import CacheHandler
@@ -14,6 +13,7 @@ from crewai.agents.parser import AgentAction, CrewAgentParser, OutputParserExcep
 from crewai.llm import LLM
 from crewai.tools.tool_calling import InstructorToolCalling
 from crewai.tools.tool_usage import ToolUsage
+from crewai.tools import tool
 from crewai.tools.tool_usage_events import ToolUsageFinished
 from crewai.utilities import RPMController
 from crewai.utilities.events import Emitter
@@ -277,9 +277,10 @@ def test_cache_hitting():
         "multiplier-{'first_number': 12, 'second_number': 3}": 36,
     }

-    with patch.object(CacheHandler, "read") as read, patch.object(
-        Emitter, "emit"
-    ) as emit:
+    with (
+        patch.object(CacheHandler, "read") as read,
+        patch.object(Emitter, "emit") as emit,
+    ):
         read.return_value = "0"
         task = Task(
             description="What is 2 times 6? Ignore correctness and just return the result of the multiplication tool, you must use the tool.",
@@ -604,7 +605,7 @@ def test_agent_respect_the_max_rpm_set(capsys):
 def test_agent_respect_the_max_rpm_set_over_crew_rpm(capsys):
     from unittest.mock import patch

-    from crewai_tools import tool
+    from crewai.tools import tool

     @tool
     def get_final_answer() -> float:
@@ -642,7 +643,7 @@ def test_agent_respect_the_max_rpm_set_over_crew_rpm(capsys):
 def test_agent_without_max_rpm_respet_crew_rpm(capsys):
     from unittest.mock import patch

-    from crewai_tools import tool
+    from crewai.tools import tool

     @tool
     def get_final_answer() -> float:
@@ -696,7 +697,7 @@ def test_agent_without_max_rpm_respet_crew_rpm(capsys):
 def test_agent_error_on_parsing_tool(capsys):
     from unittest.mock import patch

-    from crewai_tools import tool
+    from crewai.tools import tool

     @tool
     def get_final_answer() -> float:
@@ -739,7 +740,7 @@ def test_agent_error_on_parsing_tool(capsys):
 def test_agent_remembers_output_format_after_using_tools_too_many_times():
     from unittest.mock import patch

-    from crewai_tools import tool
+    from crewai.tools import tool

     @tool
     def get_final_answer() -> float:
@@ -863,11 +864,16 @@ def test_agent_function_calling_llm():
     from crewai.tools.tool_usage import ToolUsage

-    with patch.object(
-        instructor, "from_litellm", wraps=instructor.from_litellm
-    ) as mock_from_litellm, patch.object(
-        ToolUsage, "_original_tool_calling", side_effect=Exception("Forced exception")
-    ) as mock_original_tool_calling:
+    with (
+        patch.object(
+            instructor, "from_litellm", wraps=instructor.from_litellm
+        ) as mock_from_litellm,
+        patch.object(
+            ToolUsage,
+            "_original_tool_calling",
+            side_effect=Exception("Forced exception"),
+        ) as mock_original_tool_calling,
+    ):
         crew.kickoff()
         mock_from_litellm.assert_called()
         mock_original_tool_calling.assert_called()
@@ -894,7 +900,7 @@ def test_agent_count_formatting_error():

 @pytest.mark.vcr(filter_headers=["authorization"])
 def test_tool_result_as_answer_is_the_final_answer_for_the_agent():
-    from crewai_tools import BaseTool
+    from crewai.tools import BaseTool

     class MyCustomTool(BaseTool):
         name: str = "Get Greetings"
@@ -924,7 +930,7 @@ def test_tool_result_as_answer_is_the_final_answer_for_the_agent():

 @pytest.mark.vcr(filter_headers=["authorization"])
 def test_tool_usage_information_is_appended_to_agent():
-    from crewai_tools import BaseTool
+    from crewai.tools import BaseTool

     class MyCustomTool(BaseTool):
         name: str = "Decide Greetings"

View File

@@ -2,6 +2,7 @@ import hashlib
 from typing import Any, List, Optional

 from crewai.agents.agent_builder.base_agent import BaseAgent
+from crewai.tools.base_tool import BaseTool
 from pydantic import BaseModel
@@ -10,13 +11,13 @@ class TestAgent(BaseAgent):
         self,
         task: Any,
         context: Optional[str] = None,
-        tools: Optional[List[Any]] = None,
+        tools: Optional[List[BaseTool]] = None,
     ) -> str:
         return ""

     def create_agent_executor(self, tools=None) -> None: ...

-    def _parse_tools(self, tools: List[Any]) -> List[Any]:
+    def _parse_tools(self, tools: List[BaseTool]) -> List[BaseTool]:
         return []

     def get_delegation_tools(self, agents: List["BaseAgent"]): ...

View File

@@ -10,7 +10,8 @@ interactions:
       criteria for your final answer: 1 bullet point about dog that''s under 15 words.\nyou
       MUST return the actual complete content as the final answer, not a summary.\n\nBegin!
       This is VERY important to you, use the tools available and give your best Final
-      Answer, your job depends on it!\n\nThought:"}], "model": "gpt-4o"}'
+      Answer, your job depends on it!\n\nThought:"}], "model": "gpt-4o-mini", "stop":
+      ["\nObservation:"], "stream": false}'
     headers:
       accept:
       - application/json
@@ -19,49 +20,50 @@ interactions:
       connection:
       - keep-alive
       content-length:
-      - '869'
+      - '919'
       content-type:
       - application/json
-      cookie:
-      - __cf_bm=9.8sBYBkvBR8R1K_bVF7xgU..80XKlEIg3N2OBbTSCU-1727214102-1.0.1.1-.qiTLXbPamYUMSuyNsOEB9jhGu.jOifujOrx9E2JZvStbIZ9RTIiE44xKKNfLPxQkOi6qAT3h6htK8lPDGV_5g;
-        _cfuvid=lbRdAddVWV6W3f5Dm9SaOPWDUOxqtZBSPr_fTW26nEA-1727213194587-0.0.1.1-604800000
       host:
       - api.openai.com
       user-agent:
-      - OpenAI/Python 1.47.0
+      - OpenAI/Python 1.52.1
       x-stainless-arch:
-      - arm64
+      - x64
       x-stainless-async:
       - 'false'
       x-stainless-lang:
       - python
       x-stainless-os:
-      - MacOS
+      - Linux
       x-stainless-package-version:
-      - 1.47.0
+      - 1.52.1
       x-stainless-raw-response:
       - 'true'
+      x-stainless-retry-count:
+      - '0'
       x-stainless-runtime:
       - CPython
       x-stainless-runtime-version:
-      - 3.11.7
+      - 3.11.9
     method: POST
     uri: https://api.openai.com/v1/chat/completions
   response:
-    content: "{\n \"id\": \"chatcmpl-AB7auGDrAVE0iXSBBhySZp3xE8gvP\",\n \"object\":
-      \"chat.completion\",\n \"created\": 1727214164,\n \"model\": \"gpt-4o-2024-05-13\",\n
-      \ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
-      \"assistant\",\n \"content\": \"I now can give a great answer\\nFinal
-      Answer: Dogs are unparalleled in loyalty and companionship to humans.\",\n \"refusal\":
-      null\n },\n \"logprobs\": null,\n \"finish_reason\": \"stop\"\n
-      \ }\n ],\n \"usage\": {\n \"prompt_tokens\": 175,\n \"completion_tokens\":
-      21,\n \"total_tokens\": 196,\n \"completion_tokens_details\": {\n \"reasoning_tokens\":
-      0\n }\n },\n \"system_fingerprint\": \"fp_e375328146\"\n}\n"
+    body:
+      string: !!binary |
+        H4sIAAAAAAAAA4xSy27bMBC86ysWPEuB7ciV7VuAIkUObQ+59QFhTa0kttQuS9Jx08D/XkhyLAVJ
+        gV4EaGdnMLPDpwRAmUrtQOkWo+6czW7u41q2t3+cvCvuPvxafSG+58XHTzXlxWeV9gzZ/yAdn1lX
+        WjpnKRrhEdaeMFKvuiyul3m+uV7lA9BJRbanNS5muWSdYZOtFqs8WxTZcnNmt2I0BbWDrwkAwNPw
+        7X1yRb/VDhbp86SjELAhtbssASgvtp8oDMGEiBxVOoFaOBIP1u+A5QgaGRrzQIDQ9LYBORzJA3zj
+        W8No4Wb438F7aQKgJ7DyiBb6zMhGOKRA3CJrww10xBEttIQ2toBcgTyQR2vhSNZmezLcXM39eKoP
+        Afub8MHa8/x0CWilcV724Yxf5rVhE9rSEwbhPkyI4tSAnhKA78MhDy9uo5yXzsUyyk/iMHSzHvXU
+        1N+Ejo0BqCgR7Yy13aZv6JUVRTQ2zKpQGnVL1USdesNDZWQGJLPUr928pT0mN9z8j/wEaE0uUlU6
+        T5XRLxNPa5765/2vtcuVB8MqPIZIXVkbbsg7b8bHVbuyXm9xs8xXRa2SU/IXAAD//wMAq2ZCBWoD
+        AAA=
     headers:
       CF-Cache-Status:
       - DYNAMIC
       CF-RAY:
-      - 8c85f22ddda01cf3-GRU
+      - 8e19bf36db158761-GRU
       Connection:
       - keep-alive
       Content-Encoding:
@@ -69,19 +71,27 @@ interactions:
       Content-Type:
       - application/json
       Date:
-      - Tue, 24 Sep 2024 21:42:44 GMT
+      - Tue, 12 Nov 2024 21:52:04 GMT
       Server:
       - cloudflare
+      Set-Cookie:
+      - __cf_bm=MkvcnvacGpTyn.y0OkFRoFXuAwg4oxjMhViZJTt9mw0-1731448324-1.0.1.1-oekkH_B0xOoPnIFw15LpqFCkZ2cu7VBTJVLDGylan4I67NjX.tlPvOiX9kvtP5Acewi28IE2IwlwtrZWzCH3vw;
+        path=/; expires=Tue, 12-Nov-24 22:22:04 GMT; domain=.api.openai.com; HttpOnly;
+        Secure; SameSite=None
+      - _cfuvid=4.17346mfw5npZfYNbCx3Vj1VAVPy.tH0Jm2gkTteJ8-1731448324998-0.0.1.1-604800000;
+        path=/; domain=.api.openai.com; HttpOnly; Secure; SameSite=None
       Transfer-Encoding:
       - chunked
       X-Content-Type-Options:
       - nosniff
       access-control-expose-headers:
       - X-Request-ID
+      alt-svc:
+      - h3=":443"; ma=86400
       openai-organization:
-      - crewai-iuxna1
+      - user-tqfegqsiobpvvjmn0giaipdq
       openai-processing-ms:
-      - '349'
+      - '601'
       openai-version:
       - '2020-10-01'
       strict-transport-security:
@@ -89,19 +99,20 @@ interactions:
       x-ratelimit-limit-requests:
       - '10000'
       x-ratelimit-limit-tokens:
-      - '30000000'
+      - '200000'
       x-ratelimit-remaining-requests:
       - '9999'
       x-ratelimit-remaining-tokens:
-      - '29999792'
+      - '199793'
       x-ratelimit-reset-requests:
-      - 6ms
+      - 8.64s
       x-ratelimit-reset-tokens:
-      - 0s
+      - 62ms
       x-request-id:
-      - req_4c8cd76fdfba7b65e5ce85397b33c22b
+      - req_77fb166b4e272bfd45c37c08d2b93b0c
-    http_version: HTTP/1.1
-    status_code: 200
+    status:
+      code: 200
+      message: OK
 - request:
     body: '{"messages": [{"role": "system", "content": "You are cat Researcher. You
       have a lot of experience with cat.\nYour personal goal is: Express hot takes
@@ -113,7 +124,8 @@ interactions:
       criteria for your final answer: 1 bullet point about cat that''s under 15 words.\nyou
       MUST return the actual complete content as the final answer, not a summary.\n\nBegin!
       This is VERY important to you, use the tools available and give your best Final
-      Answer, your job depends on it!\n\nThought:"}], "model": "gpt-4o"}'
+      Answer, your job depends on it!\n\nThought:"}], "model": "gpt-4o-mini", "stop":
+      ["\nObservation:"], "stream": false}'
     headers:
       accept:
       - application/json
@@ -122,49 +134,53 @@ interactions:
       connection:
       - keep-alive
       content-length:
-      - '869'
+      - '919'
       content-type:
       - application/json
       cookie:
-      - __cf_bm=9.8sBYBkvBR8R1K_bVF7xgU..80XKlEIg3N2OBbTSCU-1727214102-1.0.1.1-.qiTLXbPamYUMSuyNsOEB9jhGu.jOifujOrx9E2JZvStbIZ9RTIiE44xKKNfLPxQkOi6qAT3h6htK8lPDGV_5g;
-        _cfuvid=lbRdAddVWV6W3f5Dm9SaOPWDUOxqtZBSPr_fTW26nEA-1727213194587-0.0.1.1-604800000
+      - __cf_bm=MkvcnvacGpTyn.y0OkFRoFXuAwg4oxjMhViZJTt9mw0-1731448324-1.0.1.1-oekkH_B0xOoPnIFw15LpqFCkZ2cu7VBTJVLDGylan4I67NjX.tlPvOiX9kvtP5Acewi28IE2IwlwtrZWzCH3vw;
+        _cfuvid=4.17346mfw5npZfYNbCx3Vj1VAVPy.tH0Jm2gkTteJ8-1731448324998-0.0.1.1-604800000
       host:
       - api.openai.com
       user-agent:
-      - OpenAI/Python 1.47.0
+      - OpenAI/Python 1.52.1
       x-stainless-arch:
-      - arm64
+      - x64
       x-stainless-async:
       - 'false'
       x-stainless-lang:
       - python
       x-stainless-os:
-      - MacOS
+      - Linux
       x-stainless-package-version:
-      - 1.47.0
+      - 1.52.1
       x-stainless-raw-response:
       - 'true'
+      x-stainless-retry-count:
+      - '0'
       x-stainless-runtime:
       - CPython
       x-stainless-runtime-version:
-      - 3.11.7
+      - 3.11.9
     method: POST
     uri: https://api.openai.com/v1/chat/completions
   response:
-    content: "{\n \"id\": \"chatcmpl-AB7auNbAqjT3rgBX92rhxBLuhaLBj\",\n \"object\":
-      \"chat.completion\",\n \"created\": 1727214164,\n \"model\": \"gpt-4o-2024-05-13\",\n
-      \ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
-      \"assistant\",\n \"content\": \"Thought: I now can give a great answer\\nFinal
-      Answer: Cats are highly independent, agile, and intuitive creatures beloved
-      by millions worldwide.\",\n \"refusal\": null\n },\n \"logprobs\":
-      null,\n \"finish_reason\": \"stop\"\n }\n ],\n \"usage\": {\n \"prompt_tokens\":
-      175,\n \"completion_tokens\": 28,\n \"total_tokens\": 203,\n \"completion_tokens_details\":
-      {\n \"reasoning_tokens\": 0\n }\n },\n \"system_fingerprint\": \"fp_e375328146\"\n}\n"
+    body:
+      string: !!binary |
+        H4sIAAAAAAAAA4xSy27bMBC86ysWPFuB5MhN6ltQIGmBnlL00BcEmlxJ21JLhlzFLQL/eyH5IRlt
+        gV4EaGZnMLPLlwxAkVUbUKbVYrrg8rsPsg4P+Orxs9XvPz0U8eP966dS6sdo3wa1GBR++x2NnFRX
+        XnfBoZDnA20iasHBdXlzvSzL2+vVeiQ6b9ENsiZIXvq8I6Z8VazKvLjJl7dHdevJYFIb+JIBALyM
+        3yEnW/ypNlAsTkiHKekG1eY8BKCidwOidEqURLOoxUQaz4I8Rn8H7HdgNENDzwgamiE2aE47jABf
+        +Z5YO7gb/zfwRksCHRGGGAHZIg/D1GmXFiBtpGfiBjyDtEgR/I5BMHYJNFvomZ56hIAxedaOhDBd
+        zYNFrPukh+Vw79wR35+bOt+E6LfpyJ/xmphSW0XUyfPQKokPamT3GcC3caP9xZJUiL4LUon/gZzG
+        I60Pfmo65MSuTqR40W6GF8c7XPpVFkWTS7ObKKNNi3aSTgfUvSU/I7JZ6z/T/M370Jy4+R/7iTAG
+        g6CtQkRL5rLxNGZxeOf/GjtveQys0q8k2FU1cYMxRDq8sjpUxVYXdrkq66XK9tlvAAAA//8DAIjK
+        KzJzAwAA
     headers:
       CF-Cache-Status:
       - DYNAMIC
       CF-RAY:
-      - 8c85f2321c1c1cf3-GRU
+      - 8e19bf3fae118761-GRU
       Connection:
       - keep-alive
       Content-Encoding:
@@ -172,7 +188,7 @@ interactions:
       Content-Type:
       - application/json
       Date:
-      - Tue, 24 Sep 2024 21:42:45 GMT
+      - Tue, 12 Nov 2024 21:52:05 GMT
       Server:
       - cloudflare
       Transfer-Encoding:
@@ -181,10 +197,12 @@ interactions:
       - nosniff
       access-control-expose-headers:
       - X-Request-ID
+      alt-svc:
+      - h3=":443"; ma=86400
       openai-organization:
-      - crewai-iuxna1
+      - user-tqfegqsiobpvvjmn0giaipdq
       openai-processing-ms:
-      - '430'
+      - '464'
       openai-version:
       - '2020-10-01'
       strict-transport-security:
@@ -192,19 +210,20 @@ interactions:
       x-ratelimit-limit-requests:
       - '10000'
       x-ratelimit-limit-tokens:
-      - '30000000'
+      - '200000'
       x-ratelimit-remaining-requests:
-      - '9999'
+      - '9998'
       x-ratelimit-remaining-tokens:
-      - '29999792'
+      - '199792'
       x-ratelimit-reset-requests:
-      - 6ms
+      - 16.369s
       x-ratelimit-reset-tokens:
-      - 0s
+      - 62ms
       x-request-id:
-      - req_ace859b7d9e83d9fa7753ce23bb03716
+      - req_91706b23d0ef23458ba63ec18304cd28
-    http_version: HTTP/1.1
-    status_code: 200
+    status:
+      code: 200
+      message: OK
 - request:
     body: '{"messages": [{"role": "system", "content": "You are apple Researcher.
       You have a lot of experience with apple.\nYour personal goal is: Express hot
@@ -217,7 +236,7 @@ interactions:
       under 15 words.\nyou MUST return the actual complete content as the final answer,
       not a summary.\n\nBegin! This is VERY important to you, use the tools available
       and give your best Final Answer, your job depends on it!\n\nThought:"}], "model":
-      "gpt-4o"}'
+      "gpt-4o-mini", "stop": ["\nObservation:"], "stream": false}'
     headers:
       accept:
       - application/json
@@ -226,49 +245,53 @@ interactions:
       connection:
       - keep-alive
       content-length:
-      - '879'
+      - '929'
       content-type:
       - application/json
       cookie:
-      - __cf_bm=9.8sBYBkvBR8R1K_bVF7xgU..80XKlEIg3N2OBbTSCU-1727214102-1.0.1.1-.qiTLXbPamYUMSuyNsOEB9jhGu.jOifujOrx9E2JZvStbIZ9RTIiE44xKKNfLPxQkOi6qAT3h6htK8lPDGV_5g;
-        _cfuvid=lbRdAddVWV6W3f5Dm9SaOPWDUOxqtZBSPr_fTW26nEA-1727213194587-0.0.1.1-604800000
+      - __cf_bm=MkvcnvacGpTyn.y0OkFRoFXuAwg4oxjMhViZJTt9mw0-1731448324-1.0.1.1-oekkH_B0xOoPnIFw15LpqFCkZ2cu7VBTJVLDGylan4I67NjX.tlPvOiX9kvtP5Acewi28IE2IwlwtrZWzCH3vw;
+        _cfuvid=4.17346mfw5npZfYNbCx3Vj1VAVPy.tH0Jm2gkTteJ8-1731448324998-0.0.1.1-604800000
       host:
       - api.openai.com
       user-agent:
-      - OpenAI/Python 1.47.0
+      - OpenAI/Python 1.52.1
       x-stainless-arch:
-      - arm64
+      - x64
       x-stainless-async:
       - 'false'
       x-stainless-lang:
       - python
       x-stainless-os:
-      - MacOS
+      - Linux
       x-stainless-package-version:
-      - 1.47.0
+      - 1.52.1
       x-stainless-raw-response:
       - 'true'
+      x-stainless-retry-count:
+      - '0'
       x-stainless-runtime:
       - CPython
       x-stainless-runtime-version:
-      - 3.11.7
+      - 3.11.9
     method: POST
     uri: https://api.openai.com/v1/chat/completions
   response:
-    content: "{\n \"id\": \"chatcmpl-AB7avZ0yqY18ukQS7SnLkZydsx72b\",\n \"object\":
-      \"chat.completion\",\n \"created\": 1727214165,\n \"model\": \"gpt-4o-2024-05-13\",\n
-      \ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
-      \"assistant\",\n \"content\": \"I now can give a great answer.\\n\\nFinal
-      Answer: Apples are incredibly versatile, nutritious, and a staple in diets globally.\",\n
-      \ \"refusal\": null\n },\n \"logprobs\": null,\n \"finish_reason\":
-      \"stop\"\n }\n ],\n \"usage\": {\n \"prompt_tokens\": 175,\n \"completion_tokens\":
-      25,\n \"total_tokens\": 200,\n \"completion_tokens_details\": {\n \"reasoning_tokens\":
-      0\n }\n },\n \"system_fingerprint\": \"fp_a5d11b2ef2\"\n}\n"
+    body:
+      string: !!binary |
+        H4sIAAAAAAAAA4xSPW/bMBDd9SsOXLpIgeTITarNS4t26JJubSHQ5IliSh1ZHv0RBP7vhSTHctAU
+        6CJQ7909vHd3zxmAsFo0IFQvkxqCKzYPaf1b2/hhW+8PR9N9Kh9W5Zdhjebr4zeRjx1++4gqvXTd
+        KD8Eh8l6mmkVUSYcVau726qu729X7ydi8Brd2GZCKmpfDJZssSpXdVHeFdX9ubv3ViGLBr5nAADP
+        03f0SRqPosEyf0EGZJYGRXMpAhDRuxERktlykpREvpDKU0KarH8G8gdQksDYPYIEM9oGSXzACPCD
+        PlqSDjbTfwObEBy+Y0Dl+YkTDmApoYkyIUMvoz7IiDmw79L8kqSBMe7HMMAoB4fM7ikHpF6SsmRg
+        xxgBjwGjRVJ4c+00YrdjOU6Lds6d8dMluvMmRL/lM3/BO0uW+zaiZE9jTE4+iIk9ZQA/pxHvXk1N
+        hOiHkNrkfyHxtLX1rCeWzS7svEsAkXyS7govq/wNvVZjktbx1ZKEkqpHvbQuG5U7bf0VkV2l/tvN
+        W9pzckvmf+QXQikMCXUbImqrXideyiKOh/+vssuUJ8NiPpO2s2Qwhmjns+tCW25lqatV3VUiO2V/
+        AAAA//8DAPtpFJCEAwAA
     headers:
       CF-Cache-Status:
       - DYNAMIC
       CF-RAY:
-      - 8c85f2369a761cf3-GRU
+      - 8e19bf447ba48761-GRU
       Connection:
       - keep-alive
       Content-Encoding:
@@ -276,7 +299,7 @@ interactions:
       Content-Type:
       - application/json
       Date:
-      - Tue, 24 Sep 2024 21:42:46 GMT
+      - Tue, 12 Nov 2024 21:52:06 GMT
       Server:
       - cloudflare
       Transfer-Encoding:
@@ -285,10 +308,12 @@ interactions:
       - nosniff
       access-control-expose-headers:
       - X-Request-ID
+      alt-svc:
+      - h3=":443"; ma=86400
       openai-organization:
-      - crewai-iuxna1
+      - user-tqfegqsiobpvvjmn0giaipdq
      openai-processing-ms:
-      - '389'
+      - '655'
       openai-version:
       - '2020-10-01'
       strict-transport-security:
@@ -296,17 +321,18 @@ interactions:
       x-ratelimit-limit-requests:
       - '10000'
       x-ratelimit-limit-tokens:
-      - '30000000'
+      - '200000'
       x-ratelimit-remaining-requests:
-      - '9999'
+      - '9997'
       x-ratelimit-remaining-tokens:
-      - '29999791'
+      - '199791'
       x-ratelimit-reset-requests:
-      - 6ms
+      - 24.239s
       x-ratelimit-reset-tokens:
-      - 0s
+      - 62ms
       x-request-id:
-      - req_0167388f0a7a7f1a1026409834ceb914
+      - req_a228208b0e965ecee334a6947d6c9e7c
-    http_version: HTTP/1.1
-    status_code: 200
+    status:
+      code: 200
+      message: OK
 version: 1
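Note on the rewritten cassettes above: the old plain-text content: field is replaced by a gzip-compressed response body stored as a !!binary YAML scalar (hence the Content-Encoding: gzip header and the H4sI... prefix, which is the base64 form of the gzip magic bytes). A minimal sketch for inspecting such a body offline, assuming PyYAML is available; the cassette path is hypothetical:

import gzip
import json

import yaml  # PyYAML decodes !!binary scalars straight to bytes

with open("tests/cassettes/some_cassette.yaml") as f:  # hypothetical path
    cassette = yaml.safe_load(f)

for interaction in cassette["interactions"]:
    raw = interaction["response"]["body"]["string"]  # bytes, gzip-compressed
    payload = json.loads(gzip.decompress(raw))       # gunzip, then parse the JSON payload
    print(payload["model"], payload["usage"])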

View File

@@ -0,0 +1,205 @@
interactions:
- request:
body: '{"messages": [{"role": "user", "content": "Hello, world!"}], "model": "gpt-4o-mini",
"stream": false}'
headers:
accept:
- application/json
accept-encoding:
- gzip, deflate
connection:
- keep-alive
content-length:
- '101'
content-type:
- application/json
host:
- api.openai.com
user-agent:
- OpenAI/Python 1.52.1
x-stainless-arch:
- x64
x-stainless-async:
- 'false'
x-stainless-lang:
- python
x-stainless-os:
- Linux
x-stainless-package-version:
- 1.52.1
x-stainless-raw-response:
- 'true'
x-stainless-retry-count:
- '0'
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.11.9
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
body:
string: !!binary |
H4sIAAAAAAAAA4xSwWrcMBS8+ytedY6LvWvYZi8lpZSkBJLSQiChGK307FUi66nSc9Ml7L8H2e56
l7bQiw8zb8Yzg14yAGG0WINQW8mq8za/+Oqv5MUmXv+8+/Hl3uO3j59u1efreHO+/PAszpKCNo+o
+LfqraLOW2RDbqRVQMmYXMvVsqyWy1VVDERHGm2StZ7zivLOOJMvikWVF6u8fDept2QURrGGhwwA
4GX4ppxO4y+xhsFrQDqMUbYo1ocjABHIJkTIGE1k6ViczaQix+iG6JdoLb2BS3oGJR1cwSiAHfXA
pOXu/bEwYNNHmcK73toJ3x+SWGp9oE2c+APeGGfitg4oI7n018jkxcDuM4DvQ+P+pITwgTrPNdMT
umRYlqOdmHeeyfOJY2JpZ3gxjXRqVmtkaWw8GkwoqbaoZ+W8ruy1oSMiO6r8Z5a/eY+1jWv/x34m
lELPqGsfUBt12nc+C5ge4b/ODhMPgUXcRcauboxrMfhgxifQ+LrYyEKXi6opRbbPXgEAAP//AwAM
DMWoEAMAAA==
headers:
CF-Cache-Status:
- DYNAMIC
CF-RAY:
- 8e185b2c1b790303-GRU
Connection:
- keep-alive
Content-Encoding:
- gzip
Content-Type:
- application/json
Date:
- Tue, 12 Nov 2024 17:49:00 GMT
Server:
- cloudflare
Set-Cookie:
- __cf_bm=l.QrRLcNZkML_KSfxjir6YCV35B8GNTitBTNh7cPGc4-1731433740-1.0.1.1-j1ejlmykyoI8yk6i6pQjtPoovGzfxI2f5vG6u0EqodQMjCvhbHfNyN_wmYkeT._BMvFi.zDQ8m_PqEHr8tSdEQ;
path=/; expires=Tue, 12-Nov-24 18:19:00 GMT; domain=.api.openai.com; HttpOnly;
Secure; SameSite=None
- _cfuvid=jcCDyMK__Fd0V5DMeqt9yXdlKc7Hsw87a1K01pZu9l0-1731433740848-0.0.1.1-604800000;
path=/; domain=.api.openai.com; HttpOnly; Secure; SameSite=None
Transfer-Encoding:
- chunked
X-Content-Type-Options:
- nosniff
access-control-expose-headers:
- X-Request-ID
alt-svc:
- h3=":443"; ma=86400
openai-organization:
- user-tqfegqsiobpvvjmn0giaipdq
openai-processing-ms:
- '322'
openai-version:
- '2020-10-01'
strict-transport-security:
- max-age=31536000; includeSubDomains; preload
x-ratelimit-limit-requests:
- '10000'
x-ratelimit-limit-tokens:
- '200000'
x-ratelimit-remaining-requests:
- '9999'
x-ratelimit-remaining-tokens:
- '199978'
x-ratelimit-reset-requests:
- 8.64s
x-ratelimit-reset-tokens:
- 6ms
x-request-id:
- req_037288753767e763a51a04eae757ca84
status:
code: 200
message: OK
- request:
body: '{"messages": [{"role": "user", "content": "Hello, world from another agent!"}],
"model": "gpt-4o-mini", "stream": false}'
headers:
accept:
- application/json
accept-encoding:
- gzip, deflate
connection:
- keep-alive
content-length:
- '120'
content-type:
- application/json
cookie:
- __cf_bm=l.QrRLcNZkML_KSfxjir6YCV35B8GNTitBTNh7cPGc4-1731433740-1.0.1.1-j1ejlmykyoI8yk6i6pQjtPoovGzfxI2f5vG6u0EqodQMjCvhbHfNyN_wmYkeT._BMvFi.zDQ8m_PqEHr8tSdEQ;
_cfuvid=jcCDyMK__Fd0V5DMeqt9yXdlKc7Hsw87a1K01pZu9l0-1731433740848-0.0.1.1-604800000
host:
- api.openai.com
user-agent:
- OpenAI/Python 1.52.1
x-stainless-arch:
- x64
x-stainless-async:
- 'false'
x-stainless-lang:
- python
x-stainless-os:
- Linux
x-stainless-package-version:
- 1.52.1
x-stainless-raw-response:
- 'true'
x-stainless-retry-count:
- '0'
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.11.9
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
body:
string: !!binary |
H4sIAAAAAAAAA4xSy27bMBC86yu2PFuBZAt14UvRU5MA7aVAEKAIBJpcSUwoLkuu6jiB/z3QI5aM
tkAvPMzsDGZ2+ZoACKPFDoRqJKvW2/TLD3+z//oiD8dfL7d339zvW125x9zX90/3mVj1Cto/ouJ3
1ZWi1ltkQ26kVUDJ2Lvm201ebDbbIh+IljTaXlZ7TgtKW+NMus7WRZpt0/zTpG7IKIxiBz8TAIDX
4e1zOo3PYgfZ6h1pMUZZo9idhwBEINsjQsZoIkvHYjWTihyjG6Jfo7X0Ab4bhcAEipxDxXAw3IB0
xA0GkDU6voJrOoCSDm5gNIUjdcCk5fHz0jxg1UXZF3SdtRN+Oqe1VPtA+zjxZ7wyzsSmDCgjuT5Z
ZPJiYE8JwMOwle6iqPCBWs8l0xO63jAvRjsx32JBfpxIJpZ2xjfTJi/dSo0sjY2LrQolVYN6Vs4n
kJ02tCCSRec/w/zNe+xtXP0/9jOhFHpGXfqA2qjLwvNYwP6n/mvsvOMhsIjHyNiWlXE1Bh/M+E8q
X2Z7mel8XVS5SE7JGwAAAP//AwA/cK4yNQMAAA==
headers:
CF-Cache-Status:
- DYNAMIC
CF-RAY:
- 8e185b31398a0303-GRU
Connection:
- keep-alive
Content-Encoding:
- gzip
Content-Type:
- application/json
Date:
- Tue, 12 Nov 2024 17:49:02 GMT
Server:
- cloudflare
Transfer-Encoding:
- chunked
X-Content-Type-Options:
- nosniff
access-control-expose-headers:
- X-Request-ID
alt-svc:
- h3=":443"; ma=86400
openai-organization:
- user-tqfegqsiobpvvjmn0giaipdq
openai-processing-ms:
- '889'
openai-version:
- '2020-10-01'
strict-transport-security:
- max-age=31536000; includeSubDomains; preload
x-ratelimit-limit-requests:
- '10000'
x-ratelimit-limit-tokens:
- '200000'
x-ratelimit-remaining-requests:
- '9998'
x-ratelimit-remaining-tokens:
- '199975'
x-ratelimit-reset-requests:
- 16.489s
x-ratelimit-reset-tokens:
- 7ms
x-request-id:
- req_bde3810b36a4859688e53d1df64bdd20
status:
code: 200
message: OK
version: 1

View File

@@ -1,7 +1,9 @@
+from pathlib import Path
 from unittest import mock
 import pytest
 from click.testing import CliRunner
 from crewai.cli.cli import (
     deploy_create,
     deploy_list,
@@ -9,6 +11,7 @@ from crewai.cli.cli import (
     deploy_push,
     deploy_remove,
     deply_status,
+    flow_add_crew,
     reset_memories,
     signup,
     test,
@@ -277,3 +280,42 @@ def test_deploy_remove_no_uuid(command, runner):
     assert result.exit_code == 0
     mock_deploy.remove_crew.assert_called_once_with(uuid=None)
+@mock.patch("crewai.cli.add_crew_to_flow.create_embedded_crew")
+@mock.patch("pathlib.Path.exists", return_value=True)  # Mock the existence check
+def test_flow_add_crew(mock_path_exists, mock_create_embedded_crew, runner):
+    crew_name = "new_crew"
+    result = runner.invoke(flow_add_crew, [crew_name])
+    # Log the output for debugging
+    print(result.output)
+    assert result.exit_code == 0, f"Command failed with output: {result.output}"
+    assert f"Adding crew {crew_name} to the flow" in result.output
+    # Verify that create_embedded_crew was called with the correct arguments
+    mock_create_embedded_crew.assert_called_once()
+    call_args, call_kwargs = mock_create_embedded_crew.call_args
+    assert call_args[0] == crew_name
+    assert "parent_folder" in call_kwargs
+    assert isinstance(call_kwargs["parent_folder"], Path)
+def test_add_crew_to_flow_not_in_root(runner):
+    # Simulate not being in the root of a flow project
+    with mock.patch("pathlib.Path.exists", autospec=True) as mock_exists:
+        # Mock Path.exists to return False when checking for pyproject.toml
+        def exists_side_effect(self):
+            if self.name == "pyproject.toml":
+                return False  # Simulate that pyproject.toml does not exist
+            return True  # All other paths exist
+        mock_exists.side_effect = exists_side_effect
+        result = runner.invoke(flow_add_crew, ["new_crew"])
+        assert result.exit_code != 0
+        assert "This command must be run from the root of a flow project." in str(
+            result.output
+        )

109 tests/cli/config_test.py Normal file
View File

@@ -0,0 +1,109 @@
import unittest
import json
import tempfile
import shutil
from pathlib import Path
from crewai.cli.config import Settings
class TestSettings(unittest.TestCase):
def setUp(self):
self.test_dir = Path(tempfile.mkdtemp())
self.config_path = self.test_dir / "settings.json"
def tearDown(self):
shutil.rmtree(self.test_dir)
def test_empty_initialization(self):
settings = Settings(config_path=self.config_path)
self.assertIsNone(settings.tool_repository_username)
self.assertIsNone(settings.tool_repository_password)
def test_initialization_with_data(self):
settings = Settings(
config_path=self.config_path,
tool_repository_username="user1"
)
self.assertEqual(settings.tool_repository_username, "user1")
self.assertIsNone(settings.tool_repository_password)
def test_initialization_with_existing_file(self):
self.config_path.parent.mkdir(parents=True, exist_ok=True)
with self.config_path.open("w") as f:
json.dump({"tool_repository_username": "file_user"}, f)
settings = Settings(config_path=self.config_path)
self.assertEqual(settings.tool_repository_username, "file_user")
def test_merge_file_and_input_data(self):
self.config_path.parent.mkdir(parents=True, exist_ok=True)
with self.config_path.open("w") as f:
json.dump({
"tool_repository_username": "file_user",
"tool_repository_password": "file_pass"
}, f)
settings = Settings(
config_path=self.config_path,
tool_repository_username="new_user"
)
self.assertEqual(settings.tool_repository_username, "new_user")
self.assertEqual(settings.tool_repository_password, "file_pass")
def test_dump_new_settings(self):
settings = Settings(
config_path=self.config_path,
tool_repository_username="user1"
)
settings.dump()
with self.config_path.open("r") as f:
saved_data = json.load(f)
self.assertEqual(saved_data["tool_repository_username"], "user1")
def test_update_existing_settings(self):
self.config_path.parent.mkdir(parents=True, exist_ok=True)
with self.config_path.open("w") as f:
json.dump({"existing_setting": "value"}, f)
settings = Settings(
config_path=self.config_path,
tool_repository_username="user1"
)
settings.dump()
with self.config_path.open("r") as f:
saved_data = json.load(f)
self.assertEqual(saved_data["existing_setting"], "value")
self.assertEqual(saved_data["tool_repository_username"], "user1")
def test_none_values(self):
settings = Settings(
config_path=self.config_path,
tool_repository_username=None
)
settings.dump()
with self.config_path.open("r") as f:
saved_data = json.load(f)
self.assertIsNone(saved_data.get("tool_repository_username"))
def test_invalid_json_in_config(self):
self.config_path.parent.mkdir(parents=True, exist_ok=True)
with self.config_path.open("w") as f:
f.write("invalid json")
try:
settings = Settings(config_path=self.config_path)
self.assertIsNone(settings.tool_repository_username)
except json.JSONDecodeError:
self.fail("Settings initialization should handle invalid JSON")
def test_empty_config_file(self):
self.config_path.parent.mkdir(parents=True, exist_ok=True)
self.config_path.touch()
settings = Settings(config_path=self.config_path)
self.assertIsNone(settings.tool_repository_username)
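For context, here is a minimal sketch of the contract these tests pin down; the real crewai.cli.config.Settings may be implemented differently (field names are taken from the tests, everything else is an assumption):

import json
from pathlib import Path
from typing import Optional

class Settings:
    def __init__(self, config_path: Path, **kwargs):
        self.config_path = config_path
        # Values already on disk act as defaults; a missing, empty, or
        # corrupt settings file is treated as empty rather than raised.
        data = {}
        if config_path.is_file():
            try:
                data = json.loads(config_path.read_text() or "{}")
            except json.JSONDecodeError:
                data = {}
        # Explicit keyword arguments override the file contents.
        data.update({k: v for k, v in kwargs.items() if v is not None})
        self.tool_repository_username: Optional[str] = data.get("tool_repository_username")
        self.tool_repository_password: Optional[str] = data.get("tool_repository_password")

    def dump(self) -> None:
        # Merge with whatever is on disk so unrelated keys survive an update.
        existing = {}
        if self.config_path.is_file():
            try:
                existing = json.loads(self.config_path.read_text() or "{}")
            except json.JSONDecodeError:
                existing = {}
        existing.update(
            {
                "tool_repository_username": self.tool_repository_username,
                "tool_repository_password": self.tool_repository_password,
            }
        )
        self.config_path.parent.mkdir(parents=True, exist_ok=True)
        self.config_path.write_text(json.dumps(existing))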

View File

@@ -75,13 +75,14 @@ def test_install_success(mock_get, mock_subprocess_run):
         [
             "uv",
             "add",
-            "--extra-index-url",
-            "https://app.crewai.com/pypi/sample-repo",
+            "--index",
+            "sample-repo=https://example.com/repo",
             "sample-tool",
         ],
         capture_output=False,
         text=True,
         check=True,
+        env=unittest.mock.ANY
     )
     assert "Succesfully installed sample-tool" in output

View File

@@ -9,6 +9,7 @@ from unittest.mock import MagicMock, patch
 import instructor
 import pydantic_core
 import pytest
 from crewai.agent import Agent
 from crewai.agents.cache import CacheHandler
 from crewai.crew import Crew
@@ -455,7 +456,7 @@ def test_crew_verbose_output(capsys):
 def test_cache_hitting_between_agents():
     from unittest.mock import call, patch
-    from crewai_tools import tool
+    from crewai.tools import tool
     @tool
     def multiplier(first_number: int, second_number: int) -> float:
@@ -497,7 +498,8 @@ def test_cache_hitting_between_agents():
 @pytest.mark.vcr(filter_headers=["authorization"])
 def test_api_calls_throttling(capsys):
     from unittest.mock import patch
-    from crewai_tools import tool
+    from crewai.tools import tool
     @tool
     def get_final_answer() -> float:
@@ -562,6 +564,7 @@ def test_crew_kickoff_usage_metrics():
     assert result.token_usage.prompt_tokens > 0
     assert result.token_usage.completion_tokens > 0
     assert result.token_usage.successful_requests > 0
+    assert result.token_usage.cached_prompt_tokens == 0
 def test_agents_rpm_is_never_set_if_crew_max_RPM_is_not_set():
@@ -779,11 +782,14 @@ def test_async_task_execution_call_count():
     list_important_history.output = mock_task_output
     write_article.output = mock_task_output
-    with patch.object(
-        Task, "execute_sync", return_value=mock_task_output
-    ) as mock_execute_sync, patch.object(
-        Task, "execute_async", return_value=mock_future
-    ) as mock_execute_async:
+    with (
+        patch.object(
+            Task, "execute_sync", return_value=mock_task_output
+        ) as mock_execute_sync,
+        patch.object(
+            Task, "execute_async", return_value=mock_future
+        ) as mock_execute_async,
+    ):
         crew.kickoff()
         assert mock_execute_async.call_count == 2
@@ -1105,7 +1111,8 @@ def test_dont_set_agents_step_callback_if_already_set():
 @pytest.mark.vcr(filter_headers=["authorization"])
 def test_crew_function_calling_llm():
     from unittest.mock import patch
-    from crewai_tools import tool
+    from crewai.tools import tool
     llm = "gpt-4o"
@@ -1140,7 +1147,7 @@ def test_crew_function_calling_llm():
 @pytest.mark.vcr(filter_headers=["authorization"])
 def test_task_with_no_arguments():
-    from crewai_tools import tool
+    from crewai.tools import tool
     @tool
     def return_data() -> str:
@@ -1274,10 +1281,11 @@ def test_agent_usage_metrics_are_captured_for_hierarchical_process():
     assert result.raw == "Howdy!"
     assert result.token_usage == UsageMetrics(
-        total_tokens=2626,
-        prompt_tokens=2482,
-        completion_tokens=144,
-        successful_requests=5,
+        total_tokens=1673,
+        prompt_tokens=1562,
+        completion_tokens=111,
+        successful_requests=3,
+        cached_prompt_tokens=0
     )
@@ -1303,8 +1311,9 @@ def test_hierarchical_crew_creation_tasks_with_agents():
     assert crew.manager_agent is not None
     assert crew.manager_agent.tools is not None
-    assert crew.manager_agent.tools[0].description.startswith(
-        "Delegate a specific task to one of the following coworkers: Senior Writer"
+    assert (
+        "Delegate a specific task to one of the following coworkers: Senior Writer\n"
+        in crew.manager_agent.tools[0].description
     )
@@ -1331,8 +1340,9 @@ def test_hierarchical_crew_creation_tasks_with_async_execution():
     crew.kickoff()
     assert crew.manager_agent is not None
     assert crew.manager_agent.tools is not None
-    assert crew.manager_agent.tools[0].description.startswith(
-        "Delegate a specific task to one of the following coworkers: Senior Writer\n"
+    assert (
+        "Delegate a specific task to one of the following coworkers: Senior Writer\n"
+        in crew.manager_agent.tools[0].description
     )
@@ -1364,8 +1374,9 @@ def test_hierarchical_crew_creation_tasks_with_sync_last():
     crew.kickoff()
     assert crew.manager_agent is not None
     assert crew.manager_agent.tools is not None
-    assert crew.manager_agent.tools[0].description.startswith(
-        "Delegate a specific task to one of the following coworkers: Senior Writer, Researcher, CEO\n"
+    assert (
+        "Delegate a specific task to one of the following coworkers: Senior Writer, Researcher, CEO\n"
+        in crew.manager_agent.tools[0].description
     )
@@ -1448,52 +1459,6 @@ def test_crew_does_not_interpolate_without_inputs():
     interpolate_task_inputs.assert_not_called()
-# def test_crew_partial_inputs():
-#     agent = Agent(
-#         role="{topic} Researcher",
-#         goal="Express hot takes on {topic}.",
-#         backstory="You have a lot of experience with {topic}.",
-#     )
-#     task = Task(
-#         description="Give me an analysis around {topic}.",
-#         expected_output="{points} bullet points about {topic}.",
-#     )
-#     crew = Crew(agents=[agent], tasks=[task], inputs={"topic": "AI"})
-#     inputs = {"topic": "AI"}
-#     crew._interpolate_inputs(inputs=inputs)  # Manual call for now
-#     assert crew.tasks[0].description == "Give me an analysis around AI."
-#     assert crew.tasks[0].expected_output == "{points} bullet points about AI."
-#     assert crew.agents[0].role == "AI Researcher"
-#     assert crew.agents[0].goal == "Express hot takes on AI."
-#     assert crew.agents[0].backstory == "You have a lot of experience with AI."
-# def test_crew_invalid_inputs():
-#     agent = Agent(
-#         role="{topic} Researcher",
-#         goal="Express hot takes on {topic}.",
-#         backstory="You have a lot of experience with {topic}.",
-#     )
-#     task = Task(
-#         description="Give me an analysis around {topic}.",
-#         expected_output="{points} bullet points about {topic}.",
-#     )
-#     crew = Crew(agents=[agent], tasks=[task], inputs={"subject": "AI"})
-#     inputs = {"subject": "AI"}
-#     crew._interpolate_inputs(inputs=inputs)  # Manual call for now
-#     assert crew.tasks[0].description == "Give me an analysis around {topic}."
-#     assert crew.tasks[0].expected_output == "{points} bullet points about {topic}."
-#     assert crew.agents[0].role == "{topic} Researcher"
-#     assert crew.agents[0].goal == "Express hot takes on {topic}."
-#     assert crew.agents[0].backstory == "You have a lot of experience with {topic}."
 def test_task_callback_on_crew():
     from unittest.mock import MagicMock, patch
@@ -1534,7 +1499,7 @@ def test_task_callback_on_crew():
 def test_tools_with_custom_caching():
     from unittest.mock import patch
-    from crewai_tools import tool
+    from crewai.tools import tool
     @tool
     def multiplcation_tool(first_number: int, second_number: int) -> int:
@@ -1736,7 +1701,7 @@ def test_manager_agent_in_agents_raises_exception():
 def test_manager_agent_with_tools_raises_exception():
-    from crewai_tools import tool
+    from crewai.tools import tool
     @tool
     def testing_tool(first_number: int, second_number: int) -> int:
@@ -1770,7 +1735,10 @@ def test_manager_agent_with_tools_raises_exception():
 @patch("crewai.crew.Crew.kickoff")
 @patch("crewai.crew.CrewTrainingHandler")
 @patch("crewai.crew.TaskEvaluator")
-def test_crew_train_success(task_evaluator, crew_training_handler, kickoff):
+@patch("crewai.crew.Crew.copy")
+def test_crew_train_success(
+    copy_mock, task_evaluator, crew_training_handler, kickoff_mock
+):
     task = Task(
         description="Come up with a list of 5 interesting ideas to explore for an article, then write one amazing paragraph highlight for each idea that showcases how good an article about this topic could be. Return the list of ideas with their paragraph and your notes.",
         expected_output="5 bullet points with a paragraph for each idea.",
@@ -1781,9 +1749,19 @@
         agents=[researcher, writer],
         tasks=[task],
     )
+    # Create a mock for the copied crew
+    copy_mock.return_value = crew
     crew.train(
         n_iterations=2, inputs={"topic": "AI"}, filename="trained_agents_data.pkl"
     )
+    # Ensure kickoff is called on the copied crew
+    kickoff_mock.assert_has_calls(
+        [mock.call(inputs={"topic": "AI"}), mock.call(inputs={"topic": "AI"})]
+    )
     task_evaluator.assert_has_calls(
         [
             mock.call(researcher),
@@ -1801,30 +1779,22 @@ def test_crew_train_success(task_evaluator, crew_training_handler, kickoff):
         ]
     )
-    crew_training_handler.assert_has_calls(
-        [
-            mock.call("training_data.pkl"),
-            mock.call().load(),
-            mock.call("trained_agents_data.pkl"),
-            mock.call().save_trained_data(
-                agent_id="Researcher",
-                trained_data=task_evaluator().evaluate_training_data().model_dump(),
-            ),
-            mock.call("trained_agents_data.pkl"),
-            mock.call().save_trained_data(
-                agent_id="Senior Writer",
-                trained_data=task_evaluator().evaluate_training_data().model_dump(),
-            ),
-            mock.call(),
-            mock.call().load(),
-            mock.call(),
-            mock.call().load(),
-        ]
-    )
-    kickoff.assert_has_calls(
-        [mock.call(inputs={"topic": "AI"}), mock.call(inputs={"topic": "AI"})]
-    )
+    crew_training_handler.assert_any_call("training_data.pkl")
+    crew_training_handler().load.assert_called()
+    crew_training_handler.assert_any_call("trained_agents_data.pkl")
+    crew_training_handler().load.assert_called()
+    crew_training_handler().save_trained_data.assert_has_calls([
+        mock.call(
+            agent_id="Researcher",
+            trained_data=task_evaluator().evaluate_training_data().model_dump(),
+        ),
+        mock.call(
+            agent_id="Senior Writer",
+            trained_data=task_evaluator().evaluate_training_data().model_dump(),
+        )
+    ])
 def test_crew_train_error():
@@ -1840,7 +1810,7 @@ def test_crew_train_error():
     )
     with pytest.raises(TypeError) as e:
-        crew.train()
+        crew.train()  # type: ignore purposefully throwing err
     assert "train() missing 1 required positional argument: 'n_iterations'" in str(
         e
     )
@@ -2536,8 +2506,9 @@ def test_conditional_should_execute():
 @mock.patch("crewai.crew.CrewEvaluator")
+@mock.patch("crewai.crew.Crew.copy")
 @mock.patch("crewai.crew.Crew.kickoff")
-def test_crew_testing_function(mock_kickoff, crew_evaluator):
+def test_crew_testing_function(kickoff_mock, copy_mock, crew_evaluator):
     task = Task(
         description="Come up with a list of 5 interesting ideas to explore for an article, then write one amazing paragraph highlight for each idea that showcases how good an article about this topic could be. Return the list of ideas with their paragraph and your notes.",
         expected_output="5 bullet points with a paragraph for each idea.",
@@ -2548,11 +2519,15 @@ def test_crew_testing_function(mock_kickoff, crew_evaluator):
         agents=[researcher],
         tasks=[task],
     )
+    # Create a mock for the copied crew
+    copy_mock.return_value = crew
     n_iterations = 2
     crew.test(n_iterations, openai_model_name="gpt-4o-mini", inputs={"topic": "AI"})
-    assert len(mock_kickoff.mock_calls) == n_iterations
-    mock_kickoff.assert_has_calls(
+    # Ensure kickoff is called on the copied crew
+    kickoff_mock.assert_has_calls(
         [mock.call(inputs={"topic": "AI"}), mock.call(inputs={"topic": "AI"})]
     )

264 tests/flow_test.py Normal file
View File

@@ -0,0 +1,264 @@
"""Test Flow creation and execution basic functionality."""
import asyncio
import pytest
from crewai.flow.flow import Flow, and_, listen, or_, router, start
def test_simple_sequential_flow():
"""Test a simple flow with two steps called sequentially."""
execution_order = []
class SimpleFlow(Flow):
@start()
def step_1(self):
execution_order.append("step_1")
@listen(step_1)
def step_2(self):
execution_order.append("step_2")
flow = SimpleFlow()
flow.kickoff()
assert execution_order == ["step_1", "step_2"]
def test_flow_with_multiple_starts():
"""Test a flow with multiple start methods."""
execution_order = []
class MultiStartFlow(Flow):
@start()
def step_a(self):
execution_order.append("step_a")
@start()
def step_b(self):
execution_order.append("step_b")
@listen(step_a)
def step_c(self):
execution_order.append("step_c")
@listen(step_b)
def step_d(self):
execution_order.append("step_d")
flow = MultiStartFlow()
flow.kickoff()
assert "step_a" in execution_order
assert "step_b" in execution_order
assert "step_c" in execution_order
assert "step_d" in execution_order
assert execution_order.index("step_c") > execution_order.index("step_a")
assert execution_order.index("step_d") > execution_order.index("step_b")
def test_cyclic_flow():
"""Test a cyclic flow that runs a finite number of iterations."""
execution_order = []
class CyclicFlow(Flow):
iteration = 0
max_iterations = 3
@start("loop")
def step_1(self):
if self.iteration >= self.max_iterations:
return # Do not proceed further
execution_order.append(f"step_1_{self.iteration}")
@listen(step_1)
def step_2(self):
execution_order.append(f"step_2_{self.iteration}")
@router(step_2)
def step_3(self):
execution_order.append(f"step_3_{self.iteration}")
self.iteration += 1
if self.iteration < self.max_iterations:
return "loop"
return "exit"
flow = CyclicFlow()
flow.kickoff()
expected_order = []
for i in range(flow.max_iterations):
expected_order.extend([f"step_1_{i}", f"step_2_{i}", f"step_3_{i}"])
assert execution_order == expected_order
def test_flow_with_and_condition():
"""Test a flow where a step waits for multiple other steps to complete."""
execution_order = []
class AndConditionFlow(Flow):
@start()
def step_1(self):
execution_order.append("step_1")
@start()
def step_2(self):
execution_order.append("step_2")
@listen(and_(step_1, step_2))
def step_3(self):
execution_order.append("step_3")
flow = AndConditionFlow()
flow.kickoff()
assert "step_1" in execution_order
assert "step_2" in execution_order
assert execution_order[-1] == "step_3"
assert execution_order.index("step_3") > execution_order.index("step_1")
assert execution_order.index("step_3") > execution_order.index("step_2")
def test_flow_with_or_condition():
"""Test a flow where a step is triggered when any of multiple steps complete."""
execution_order = []
class OrConditionFlow(Flow):
@start()
def step_a(self):
execution_order.append("step_a")
@start()
def step_b(self):
execution_order.append("step_b")
@listen(or_(step_a, step_b))
def step_c(self):
execution_order.append("step_c")
flow = OrConditionFlow()
flow.kickoff()
assert "step_a" in execution_order or "step_b" in execution_order
assert "step_c" in execution_order
assert execution_order.index("step_c") > min(
execution_order.index("step_a"), execution_order.index("step_b")
)
def test_flow_with_router():
"""Test a flow that uses a router method to determine the next step."""
execution_order = []
class RouterFlow(Flow):
@start()
def start_method(self):
execution_order.append("start_method")
@router(start_method)
def router(self):
execution_order.append("router")
# Ensure the condition is set to True to follow the "step_if_true" path
condition = True
return "step_if_true" if condition else "step_if_false"
@listen("step_if_true")
def truthy(self):
execution_order.append("step_if_true")
@listen("step_if_false")
def falsy(self):
execution_order.append("step_if_false")
flow = RouterFlow()
flow.kickoff()
assert execution_order == ["start_method", "router", "step_if_true"]
def test_async_flow():
"""Test an asynchronous flow."""
execution_order = []
class AsyncFlow(Flow):
@start()
async def step_1(self):
execution_order.append("step_1")
await asyncio.sleep(0.1)
@listen(step_1)
async def step_2(self):
execution_order.append("step_2")
await asyncio.sleep(0.1)
flow = AsyncFlow()
asyncio.run(flow.kickoff_async())
assert execution_order == ["step_1", "step_2"]
def test_flow_with_exceptions():
"""Test flow behavior when exceptions occur in steps."""
execution_order = []
class ExceptionFlow(Flow):
@start()
def step_1(self):
execution_order.append("step_1")
raise ValueError("An error occurred in step_1")
@listen(step_1)
def step_2(self):
execution_order.append("step_2")
flow = ExceptionFlow()
with pytest.raises(ValueError):
flow.kickoff()
# Ensure step_2 did not execute
assert execution_order == ["step_1"]
def test_flow_restart():
"""Test restarting a flow after it has completed."""
execution_order = []
class RestartableFlow(Flow):
@start()
def step_1(self):
execution_order.append("step_1")
@listen(step_1)
def step_2(self):
execution_order.append("step_2")
flow = RestartableFlow()
flow.kickoff()
flow.kickoff() # Restart the flow
assert execution_order == ["step_1", "step_2", "step_1", "step_2"]
def test_flow_with_custom_state():
"""Test a flow that maintains and modifies internal state."""
class StateFlow(Flow):
def __init__(self):
super().__init__()
self.counter = 0
@start()
def step_1(self):
self.counter += 1
@listen(step_1)
def step_2(self):
self.counter *= 2
assert self.counter == 2
flow = StateFlow()
flow.kickoff()
assert flow.counter == 2

30 tests/llm_test.py Normal file
View File

@@ -0,0 +1,30 @@
import pytest
from crewai.agents.agent_builder.utilities.base_token_process import TokenProcess
from crewai.llm import LLM
from crewai.utilities.token_counter_callback import TokenCalcHandler
@pytest.mark.vcr(filter_headers=["authorization"])
def test_llm_callback_replacement():
llm = LLM(model="gpt-4o-mini")
calc_handler_1 = TokenCalcHandler(token_cost_process=TokenProcess())
calc_handler_2 = TokenCalcHandler(token_cost_process=TokenProcess())
llm.call(
messages=[{"role": "user", "content": "Hello, world!"}],
callbacks=[calc_handler_1],
)
usage_metrics_1 = calc_handler_1.token_cost_process.get_summary()
llm.call(
messages=[{"role": "user", "content": "Hello, world from another agent!"}],
callbacks=[calc_handler_2],
)
usage_metrics_2 = calc_handler_2.token_cost_process.get_summary()
# The first handler should not have been updated
assert usage_metrics_1.successful_requests == 1
assert usage_metrics_2.successful_requests == 1
assert usage_metrics_1 == calc_handler_1.token_cost_process.get_summary()
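A rough sketch of the accumulator pattern this test relies on; the real TokenProcess and TokenCalcHandler in crewai differ in detail (they hook into LiteLLM's callback machinery), so this only illustrates why a fresh handler per call keeps the first handler's counts frozen:

from dataclasses import dataclass

@dataclass
class UsageSummary:
    successful_requests: int = 0
    prompt_tokens: int = 0
    completion_tokens: int = 0

class TokenProcess:
    """Accumulates usage for exactly one handler."""
    def __init__(self) -> None:
        self._summary = UsageSummary()

    def record(self, prompt_tokens: int = 0, completion_tokens: int = 0) -> None:
        self._summary.successful_requests += 1
        self._summary.prompt_tokens += prompt_tokens
        self._summary.completion_tokens += completion_tokens

    def get_summary(self) -> UsageSummary:
        return self._summary

class TokenCalcHandler:
    """One handler per LLM call: registering the handler for that call only,
    instead of leaving it in a shared global callback list, is what keeps
    usage from the second call out of the first handler's totals."""
    def __init__(self, token_cost_process: TokenProcess) -> None:
        self.token_cost_process = token_cost_process

    def on_success(self, usage: dict) -> None:
        self.token_cost_process.record(
            usage.get("prompt_tokens", 0), usage.get("completion_tokens", 0)
        )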

File diff suppressed because one or more lines are too long

View File

@@ -0,0 +1,270 @@
interactions:
- request:
body: ''
headers:
accept:
- '*/*'
accept-encoding:
- gzip, deflate
connection:
- keep-alive
host:
- api.mem0.ai
user-agent:
- python-httpx/0.27.0
method: GET
uri: https://api.mem0.ai/v1/memories/?user_id=test
response:
body:
string: '[]'
headers:
CF-Cache-Status:
- DYNAMIC
CF-RAY:
- 8b477138bad847b9-BOM
Connection:
- keep-alive
Content-Length:
- '2'
Content-Type:
- application/json
Date:
- Sat, 17 Aug 2024 06:00:11 GMT
NEL:
- '{"success_fraction":0,"report_to":"cf-nel","max_age":604800}'
Report-To:
- '{"endpoints":[{"url":"https:\/\/a.nel.cloudflare.com\/report\/v4?s=uuyH2foMJVDpV%2FH52g1q%2FnvXKe3dBKVzvsK0mqmSNezkiszNR9OgrEJfVqmkX%2FlPFRP2sH4zrOuzGo6k%2FjzsjYJczqSWJUZHN2pPujiwnr1E9W%2BdLGKmG6%2FqPrGYAy2SBRWkkJVWsTO3OQ%3D%3D"}],"group":"cf-nel","max_age":604800}'
Server:
- cloudflare
allow:
- GET, POST, DELETE, OPTIONS
alt-svc:
- h3=":443"; ma=86400
cross-origin-opener-policy:
- same-origin
referrer-policy:
- same-origin
vary:
- Accept, origin, Cookie
x-content-type-options:
- nosniff
x-frame-options:
- DENY
status:
code: 200
message: OK
- request:
body: '{"batch": [{"properties": {"python_version": "3.12.4 (v3.12.4:8e8a4baf65,
Jun 6 2024, 17:33:18) [Clang 13.0.0 (clang-1300.0.29.30)]", "os": "darwin",
"os_version": "Darwin Kernel Version 23.4.0: Wed Feb 21 21:44:54 PST 2024; root:xnu-10063.101.15~2/RELEASE_ARM64_T6030",
"os_release": "23.4.0", "processor": "arm", "machine": "arm64", "function":
"mem0.client.main.MemoryClient", "$lib": "posthog-python", "$lib_version": "3.5.0",
"$geoip_disable": true}, "timestamp": "2024-08-17T06:00:11.526640+00:00", "context":
{}, "distinct_id": "fd411bd3-99a2-42d6-acd7-9fca8ad09580", "event": "client.init"}],
"historical_migration": false, "sentAt": "2024-08-17T06:00:11.701621+00:00",
"api_key": "phc_hgJkUVJFYtmaJqrvf6CYN67TIQ8yhXAkWzUn9AMU4yX"}'
headers:
Accept:
- '*/*'
Accept-Encoding:
- gzip, deflate
Connection:
- keep-alive
Content-Length:
- '740'
Content-Type:
- application/json
User-Agent:
- posthog-python/3.5.0
method: POST
uri: https://us.i.posthog.com/batch/
response:
body:
string: '{"status":"Ok"}'
headers:
Connection:
- keep-alive
Content-Length:
- '15'
Content-Type:
- application/json
Date:
- Sat, 17 Aug 2024 06:00:12 GMT
access-control-allow-credentials:
- 'true'
server:
- envoy
vary:
- origin, access-control-request-method, access-control-request-headers
x-envoy-upstream-service-time:
- '69'
status:
code: 200
message: OK
- request:
body: '{"messages": [{"role": "user", "content": "Remember the following insights
from Agent run: test value with provider"}], "metadata": {"task": "test_task_provider",
"agent": "test_agent_provider"}, "app_id": "Researcher"}'
headers:
accept:
- '*/*'
accept-encoding:
- gzip, deflate
connection:
- keep-alive
content-length:
- '219'
content-type:
- application/json
host:
- api.mem0.ai
user-agent:
- python-httpx/0.27.0
method: POST
uri: https://api.mem0.ai/v1/memories/
response:
body:
string: '{"message":"ok"}'
headers:
CF-Cache-Status:
- DYNAMIC
CF-RAY:
- 8b477140282547b9-BOM
Connection:
- keep-alive
Content-Length:
- '16'
Content-Type:
- application/json
Date:
- Sat, 17 Aug 2024 06:00:13 GMT
NEL:
- '{"success_fraction":0,"report_to":"cf-nel","max_age":604800}'
Report-To:
- '{"endpoints":[{"url":"https:\/\/a.nel.cloudflare.com\/report\/v4?s=FRjJKSk3YxVj03wA7S05H8ts35KnWfqS3wb6Rfy4kVZ4BgXfw7nJbm92wI6vEv5fWcAcHVnOlkJDggs11B01BMuB2k3a9RqlBi0dJNiMuk%2Bgm5xE%2BODMPWJctYNRwQMjNVbteUpS%2Fad8YA%3D%3D"}],"group":"cf-nel","max_age":604800}'
Server:
- cloudflare
allow:
- GET, POST, DELETE, OPTIONS
alt-svc:
- h3=":443"; ma=86400
cross-origin-opener-policy:
- same-origin
referrer-policy:
- same-origin
vary:
- Accept, origin, Cookie
x-content-type-options:
- nosniff
x-frame-options:
- DENY
status:
code: 200
message: OK
- request:
body: '{"query": "test value with provider", "limit": 3, "app_id": "Researcher"}'
headers:
accept:
- '*/*'
accept-encoding:
- gzip, deflate
connection:
- keep-alive
content-length:
- '73'
content-type:
- application/json
host:
- api.mem0.ai
user-agent:
- python-httpx/0.27.0
method: POST
uri: https://api.mem0.ai/v1/memories/search/
response:
body:
string: '[]'
headers:
CF-Cache-Status:
- DYNAMIC
CF-RAY:
- 8b47714d083b47b9-BOM
Connection:
- keep-alive
Content-Length:
- '2'
Content-Type:
- application/json
Date:
- Sat, 17 Aug 2024 06:00:14 GMT
NEL:
- '{"success_fraction":0,"report_to":"cf-nel","max_age":604800}'
Report-To:
- '{"endpoints":[{"url":"https:\/\/a.nel.cloudflare.com\/report\/v4?s=2DRWL1cdKdMvnE8vx1fPUGeTITOgSGl3N5g84PS6w30GRqpfz79BtSx6REhpnOiFV8kM6KGqln0iCZ5yoHc2jBVVJXhPJhQ5t0uerD9JFnkphjISrJOU1MJjZWneT9PlNABddxvVNCmluA%3D%3D"}],"group":"cf-nel","max_age":604800}'
Server:
- cloudflare
allow:
- POST, OPTIONS
alt-svc:
- h3=":443"; ma=86400
cross-origin-opener-policy:
- same-origin
referrer-policy:
- same-origin
vary:
- Accept, origin, Cookie
x-content-type-options:
- nosniff
x-frame-options:
- DENY
status:
code: 200
message: OK
- request:
body: '{"batch": [{"properties": {"python_version": "3.12.4 (v3.12.4:8e8a4baf65,
Jun 6 2024, 17:33:18) [Clang 13.0.0 (clang-1300.0.29.30)]", "os": "darwin",
"os_version": "Darwin Kernel Version 23.4.0: Wed Feb 21 21:44:54 PST 2024; root:xnu-10063.101.15~2/RELEASE_ARM64_T6030",
"os_release": "23.4.0", "processor": "arm", "machine": "arm64", "function":
"mem0.client.main.MemoryClient", "$lib": "posthog-python", "$lib_version": "3.5.0",
"$geoip_disable": true}, "timestamp": "2024-08-17T06:00:13.593952+00:00", "context":
{}, "distinct_id": "fd411bd3-99a2-42d6-acd7-9fca8ad09580", "event": "client.add"}],
"historical_migration": false, "sentAt": "2024-08-17T06:00:13.858277+00:00",
"api_key": "phc_hgJkUVJFYtmaJqrvf6CYN67TIQ8yhXAkWzUn9AMU4yX"}'
headers:
Accept:
- '*/*'
Accept-Encoding:
- gzip, deflate
Connection:
- keep-alive
Content-Length:
- '739'
Content-Type:
- application/json
User-Agent:
- posthog-python/3.5.0
method: POST
uri: https://us.i.posthog.com/batch/
response:
body:
string: '{"status":"Ok"}'
headers:
Connection:
- keep-alive
Content-Length:
- '15'
Content-Type:
- application/json
Date:
- Sat, 17 Aug 2024 06:00:13 GMT
access-control-allow-credentials:
- 'true'
server:
- envoy
vary:
- origin, access-control-request-method, access-control-request-headers
x-envoy-upstream-service-time:
- '33'
status:
code: 200
message: OK
version: 1
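The interactions recorded above correspond roughly to the following sequence of client calls, sketched with mem0's hosted MemoryClient; only the payload fields visible in the request bodies are certain, so treat method and argument names as assumptions:

from mem0 import MemoryClient

client = MemoryClient(api_key="...")  # talks to https://api.mem0.ai

# GET /v1/memories/?user_id=test -> '[]' (nothing stored yet)
client.get_all(user_id="test")

# POST /v1/memories/ -> {"message": "ok"} (store an agent insight)
client.add(
    messages=[
        {
            "role": "user",
            "content": "Remember the following insights from Agent run: test value with provider",
        }
    ],
    metadata={"task": "test_task_provider", "agent": "test_agent_provider"},
    app_id="Researcher",
)

# POST /v1/memories/search/ -> '[]' (the recording returns no matches)
client.search("test value with provider", limit=3, app_id="Researcher")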

View File

@@ -1,5 +1,5 @@
 import pytest
+from unittest.mock import patch
 from crewai.agent import Agent
 from crewai.crew import Crew
 from crewai.memory.short_term.short_term_memory import ShortTermMemory
@@ -26,7 +26,6 @@ def short_term_memory():
     return ShortTermMemory(crew=Crew(agents=[agent], tasks=[task]))
-@pytest.mark.vcr(filter_headers=["authorization"])
 def test_save_and_search(short_term_memory):
     memory = ShortTermMemoryItem(
         data="""test value test value test value test value test value test value
@@ -35,12 +34,28 @@ def test_save_and_search(short_term_memory):
         agent="test_agent",
         metadata={"task": "test_task"},
     )
-    short_term_memory.save(
-        value=memory.data,
-        metadata=memory.metadata,
-        agent=memory.agent,
-    )
-    find = short_term_memory.search("test value", score_threshold=0.01)[0]
-    assert find["context"] == memory.data, "Data value mismatch."
-    assert find["metadata"]["agent"] == "test_agent", "Agent value mismatch."
+    with patch.object(ShortTermMemory, "save") as mock_save:
+        short_term_memory.save(
+            value=memory.data,
+            metadata=memory.metadata,
+            agent=memory.agent,
+        )
+        mock_save.assert_called_once_with(
+            value=memory.data,
+            metadata=memory.metadata,
+            agent=memory.agent,
+        )
+    expected_result = [
+        {
+            "context": memory.data,
+            "metadata": {"agent": "test_agent"},
+            "score": 0.95,
+        }
+    ]
+    with patch.object(ShortTermMemory, "search", return_value=expected_result):
+        find = short_term_memory.search("test value", score_threshold=0.01)[0]
+        assert find["context"] == memory.data, "Data value mismatch."
+        assert find["metadata"]["agent"] == "test_agent", "Agent value mismatch."

View File

@@ -15,7 +15,7 @@ from pydantic_core import ValidationError
 def test_task_tool_reflect_agent_tools():
-    from crewai_tools import tool
+    from crewai.tools import tool
     @tool
     def fake_tool() -> None:
@@ -39,7 +39,7 @@ def test_task_tool_reflect_agent_tools():
 def test_task_tool_takes_precedence_over_agent_tools():
-    from crewai_tools import tool
+    from crewai.tools import tool
     @tool
     def fake_tool() -> None:
@@ -656,7 +656,7 @@ def test_increment_delegations_for_sequential_process():
 @pytest.mark.vcr(filter_headers=["authorization"])
 def test_increment_tool_errors():
-    from crewai_tools import tool
+    from crewai.tools import tool
     @tool
     def scoring_examples() -> None:

Some files were not shown because too many files have changed in this diff.