Compare commits


77 Commits

Author SHA1 Message Date
Brandon Hancock
48e1505a0a Merge branch 'main' into undo-agentops-api-check 2024-10-16 11:18:19 -04:00
Vini Brasil
a6b7295092 Adapt Tools CLI to uv (#1455)
* Adapt Tools CLI to UV

* Fix failing test
2024-10-16 10:55:04 -03:00
dbubel
725d159e44 fix typo in template file (#1432) 2024-10-14 16:51:04 -04:00
Stephen Hankinson
ef21da15e6 Correct the role for the message being added to the messages list (#1438)
Co-authored-by: Brandon Hancock (bhancock_ai) <109994880+bhancockio@users.noreply.github.com>
2024-10-14 16:49:16 -04:00
Muhammad Noman Fareed
de5d2eaa9b Fix Cache Typo in Documentation (#1441) 2024-10-14 16:30:31 -04:00
Stephen Hankinson
e2badaa4c6 Use a slice for the manager request. Make the task use the agent i18n settings (#1446) 2024-10-14 16:30:05 -04:00
Eduardo Chiarotti
916dec2418 fix: training issue (#1433)
* fix: training issue

* fix: output from crew

* fix: message
2024-10-11 22:35:17 -03:00
Eduardo Chiarotti
7f387dd7c3 Feat/poetry to uv migration (#1406)
* feat: Start migrating to UV

* feat: add uv to flows

* feat: update docs on Poetry -> uv

* feat: update docs and uv.lock

* feat: update tests and github CI

* feat: run ruff format

* feat: update typechecking

* feat: fix type checking

* feat: update python version

* feat: type checking gic

* feat: adapt uv command to run the tool repo

* Adapt tool build command to uv

* feat: update logic to let only projects with crew to be deployed

* feat: add uv to tools

* fix: tests

* fix: remove breakpoint

* fix: test

* feat: add crewai update to migrate from poetry to uv

* fix: tests

* feat: add validation for ˆ character on pyproject

* feat: add run_crew to pyproject if doesn't exist

* feat: add validation for poetry migration

* fix: warning

---------

Co-authored-by: Vinicius Brasil <vini@hey.com>
2024-10-11 19:11:27 -03:00
Braelyn Boynton
161e2e20a5 remove extra code 2024-09-05 14:50:01 +09:00
Braelyn Boynton
a68f2cec41 remove extra code 2024-09-05 14:48:54 +09:00
Braelyn Boynton
9db3a4ab23 remove extra code 2024-09-05 14:48:28 +09:00
Braelyn Boynton
7d4cf9a7bc undo agentops api key check 2024-09-05 14:45:01 +09:00
Braelyn Boynton
7af89abe53 Merge remote-tracking branch 'refs/remotes/upstream/main' into undo-agentops-api-check 2024-09-05 14:41:51 +09:00
Braelyn Boynton
b3ae127d2c Merge remote-tracking branch 'refs/remotes/upstream/main' 2024-08-08 16:56:49 -07:00
Braelyn Boynton
0543059dbe Merge remote-tracking branch 'upstream/main'
# Conflicts:
#	pyproject.toml
#	src/crewai/agent.py
#	src/crewai/crew.py
#	src/crewai/task.py
#	src/crewai/tools/tool_usage.py
#	src/crewai/utilities/evaluators/task_evaluator.py
2024-07-23 17:55:15 -04:00
Braelyn Boynton
c3b8ea21d3 deprecation messages 2024-07-08 13:56:17 -07:00
Braelyn Boynton
fa9a42cd89 fix crew logger bug 2024-06-06 18:28:11 -07:00
Braelyn Boynton
9b965d9e33 fix crew logger bug 2024-06-06 18:26:09 -07:00
Braelyn Boynton
45655a956a conditional protect agentops use 2024-06-06 17:58:34 -07:00
Braelyn Boynton
f2d2804854 Merge remote-tracking branch 'origin/main' 2024-06-06 17:09:05 -07:00
Braelyn Boynton
ae65622bd0 Merge remote-tracking branch 'upstream/main'
# Conflicts:
#	src/crewai/task.py
2024-06-06 17:08:39 -07:00
Braelyn Boynton
f516fba9b6 Merge branch 'main' into main 2024-06-06 17:07:28 -07:00
Braelyn Boynton
a4622bfce8 support skip auto end session 2024-05-29 14:28:24 -07:00
theCyberTech - Rip&Tear
0dd4f444ea Added timestamp to logger (#646)
* Added timestamp to logger

Updated the logger.py file to include timestamps when logging output. For example:

 [2024-05-20 15:32:48][DEBUG]: == Working Agent: Researcher
 [2024-05-20 15:32:48][INFO]: == Starting Task: Research the topic
 [2024-05-20 15:33:22][DEBUG]: == [Researcher] Task output:

* Update tool_usage.py

* Revert "Update tool_usage.py"

This reverts commit 95d18d5b6f.

incorrect branch for this commit
2024-05-28 16:45:50 -07:00
Saif Mahmud
e2dfba63cd fixes #665 (#666) 2024-05-28 16:45:50 -07:00
theCyberTech - Rip&Tear
3bba04ac71 Update crew.py (#644)
Fixed typo on line 53
2024-05-28 16:45:50 -07:00
Mish Ushakov
b153bc1a80 Update BrowserbaseLoadTool.md (#647) 2024-05-28 16:45:50 -07:00
Mike Heavers
8e5bface29 Update README.md (#652)
Rework example so that uncommenting the custom LLM section doesn't produce code errors.
2024-05-28 16:45:50 -07:00
Anudeep Kolluri
9ac6752cbf Update agent.py (#655)
Changed default model value from gpt-4 to gpt-4o.
Reasoning: gpt-4 costs $30 per million tokens while gpt-4o costs $5, making it the more cost-friendly default option.
2024-05-28 16:45:50 -07:00
Paul Sanders
a08d0dfe12 Clarify text in docstring (#662) 2024-05-28 16:45:50 -07:00
Paul Sanders
96e0dacfc1 Enable search in docs (#663) 2024-05-28 16:45:50 -07:00
Olivier Roberdet
f4ce482eb7 Fix typo in instruction en.json (#676) 2024-05-28 16:45:50 -07:00
Braelyn Boynton
c6471814b3 merge upstream 2024-05-28 16:45:20 -07:00
Howard Gil
2d88109cc3 Merge branch 'main' of https://github.com/joaomdmoura/crewAI 2024-05-21 12:18:03 -07:00
Braelyn Boynton
54237c9974 track task evaluator 2024-05-09 13:15:12 -07:00
Braelyn Boynton
b4241a892e agentops version bump 2024-05-06 21:28:47 -07:00
Braelyn Boynton
a6de5253d5 Merge remote-tracking branch 'upstream/main' 2024-05-06 11:50:31 -07:00
Braelyn Boynton
b9d6ec5721 use langchain callback handler to support all LLMs 2024-05-03 15:07:17 -07:00
Braelyn Boynton
498bf77f08 black formatting 2024-05-02 13:06:34 -07:00
Braelyn Boynton
be91c32488 Merge remote-tracking branch 'upstream/main'
# Conflicts:
#	pyproject.toml
#	src/crewai/agent.py
#	src/crewai/crew.py
#	src/crewai/tools/tool_usage.py
2024-05-02 12:52:31 -07:00
Braelyn Boynton
f2c2a625b0 add crew tag 2024-05-02 12:28:06 -07:00
Braelyn Boynton
b160a52139 Merge remote-tracking branch 'upstream/main'
# Conflicts:
#	pyproject.toml
#	src/crewai/agent.py
#	src/crewai/crew.py
#	src/crewai/tools/tool_usage.py
2024-04-30 01:09:16 -07:00
Braelyn Boynton
a19a37bd9a noop 2024-04-29 23:31:48 -07:00
Braelyn Boynton
2f789800b7 Revert "Revert "Revert "true dependency"""
This reverts commit e9335e89
2024-04-29 23:30:02 -07:00
Braelyn Boynton
8be18c8e11 agentops update 2024-04-19 20:05:47 -07:00
João Moura
e366f006ac Update pyproject.toml 2024-04-19 23:38:20 -03:00
João Moura
d678190850 Forcing version 0.1.5 2024-04-19 23:18:43 -03:00
Braelyn Boynton
9005dc7c59 cleanup 2024-04-19 19:10:26 -07:00
Braelyn Boynton
e9335e89a6 Revert "Revert "true dependency""
This reverts commit 4d1b460b
2024-04-19 19:09:20 -07:00
Braelyn Boynton
fd7de7f2eb Revert "Revert "cleanup""
This reverts commit cea33d9a5d.
2024-04-19 19:08:22 -07:00
Braelyn Boynton
c52b5e9690 agentops 0.1.5 2024-04-19 19:07:53 -07:00
Braelyn Boynton
7725e7c52e optional parent key 2024-04-19 19:04:21 -07:00
Braelyn Boynton
7f8573e6cb Merge remote-tracking branch 'origin/main' 2024-04-19 19:02:39 -07:00
Braelyn Boynton
cea33d9a5d Revert "cleanup"
This reverts commit 7f5635fb9e.
2024-04-19 19:02:20 -07:00
Braelyn Boynton
4d1b460b80 Revert "true dependency"
This reverts commit e52e8e9568.
2024-04-19 19:01:52 -07:00
João Moura
906a5bd8ec Update pyproject.toml 2024-04-19 22:54:57 -03:00
Braelyn Boynton
216cc832dc Merge remote-tracking branch 'upstream/main'
# Conflicts:
#	poetry.lock
2024-04-18 16:21:19 -07:00
Braelyn Boynton
7f5635fb9e cleanup 2024-04-17 17:19:38 -07:00
Braelyn Boynton
0ce8d14742 add crew org key to agentops 2024-04-17 14:48:58 -07:00
Braelyn Boynton
e52e8e9568 true dependency 2024-04-17 14:39:23 -07:00
Braelyn Boynton
4f7a9a5b4b Merge remote-tracking branch 'upstream/main'
# Conflicts:
#	src/crewai/crew.py
2024-04-17 14:27:31 -07:00
Braelyn Boynton
2af85c35b4 remove org key 2024-04-15 15:39:24 -04:00
Braelyn Boynton
e82149aaf9 Merge remote-tracking branch 'upstream/main' 2024-04-11 12:32:17 -07:00
Braelyn Boynton
de0ee8ce41 Merge remote-tracking branch 'upstream/main'
# Conflicts:
#	src/crewai/crew.py
2024-04-05 15:48:35 -07:00
Braelyn Boynton
b20ae847c4 agentops version bump 2024-04-05 15:47:01 -07:00
Braelyn Boynton
59f56324ea Merge remote-tracking branch 'upstream/main'
# Conflicts:
#	poetry.lock
#	src/crewai/tools/tool_usage.py
2024-04-05 15:18:40 -07:00
Braelyn Boynton
79a0d8b94d optional agentops 2024-04-04 14:34:20 -07:00
Braelyn Boynton
750085498f remove telemetry code 2024-04-04 13:23:20 -07:00
Braelyn Boynton
215e39833a optional dependency usage 2024-04-03 23:14:37 -07:00
Braelyn Boynton
67bc1de4d6 make agentops optional 2024-04-03 15:36:47 -07:00
Braelyn Boynton
45e307b98a code cleanup 2024-04-02 12:25:52 -07:00
Braelyn Boynton
4402c9be74 merge upstream 2024-04-02 12:22:49 -07:00
Braelyn Boynton
5e46514398 better tool and llm tracking 2024-03-29 17:45:58 -07:00
Braelyn Boynton
c44c2b6808 track tool usage time 2024-03-29 14:28:33 -07:00
Braelyn Boynton
a9339fcef6 end session after completion 2024-03-26 14:09:58 -07:00
Braelyn Boynton
f67d0a26f1 track tool usage 2024-03-20 18:25:41 -07:00
Braelyn Boynton
f6ee12dbc5 implements agentops with a langchain handler, agent tracking and tool call recording 2024-03-19 18:47:22 -07:00
66 changed files with 5926 additions and 1411 deletions

View File

@@ -9,24 +9,24 @@ env:
OPENAI_API_KEY: fake-api-key
jobs:
deploy:
tests:
runs-on: ubuntu-latest
timeout-minutes: 15
steps:
- name: Checkout code
uses: actions/checkout@v4
- name: Setup Python
uses: actions/setup-python@v4
- name: Install uv
uses: astral-sh/setup-uv@v3
with:
python-version: "3.11.9"
enable-cache: true
- name: Install Requirements
run: |
set -e
pip install poetry
poetry install
- name: Set up Python
run: uv python install 3.11.9
- name: Install the project
run: uv sync --dev
- name: Run tests
run: poetry run pytest
run: uv run pytest tests

View File

@@ -16,7 +16,7 @@ jobs:
- name: Setup Python
uses: actions/setup-python@v4
with:
python-version: "3.10"
python-version: "3.11.9"
- name: Install Requirements
run: |

View File

@@ -44,15 +44,9 @@ To get started with CrewAI, follow these simple steps:
### 1. Installation
Ensure you have Python >=3.10 <=3.13 installed on your system. CrewAI uses [Poetry](https://python-poetry.org/) for dependency management and package handling, offering a seamless setup and execution experience.
Ensure you have Python >=3.10 <=3.13 installed on your system. CrewAI uses [UV](https://docs.astral.sh/uv/) for dependency management and package handling, offering a seamless setup and execution experience.
First, if you haven't already, install Poetry:
```bash
pip install poetry
```
Then, install CrewAI:
First, install CrewAI:
```shell
pip install crewai
@@ -243,7 +237,7 @@ Lock the dependencies and install them by using the CLI command but first, navig
```shell
cd my_project
crewai install
crewai install (Optional)
```
To run your crew, execute the following command in the root of your project:
@@ -332,14 +326,14 @@ CrewAI is open-source and we welcome contributions. If you're looking to contrib
### Installing Dependencies
```bash
poetry lock
poetry install
uv lock
uv sync
```
### Virtual Env
```bash
poetry shell
uv venv
```
### Pre-commit hooks
@@ -351,19 +345,19 @@ pre-commit install
### Running Tests
```bash
poetry run pytest
uvx pytest
```
### Running static type checks
```bash
poetry run mypy
uvx mypy
```
### Packaging
```bash
poetry build
uv build
```
### Installing Locally

View File

@@ -33,7 +33,7 @@ Think of an agent as a member of a team, with specific skills and a particular j
| **Verbose** *(optional)* | `verbose` | Setting this to `True` configures the internal logger to provide detailed execution logs, aiding in debugging and monitoring. Default is `False`. |
| **Allow Delegation** *(optional)* | `allow_delegation` | Agents can delegate tasks or questions to one another, ensuring that each task is handled by the most suitable agent. Default is `False`.
| **Step Callback** *(optional)* | `step_callback` | A function that is called after each step of the agent. This can be used to log the agent's actions or to perform other operations. It will overwrite the crew `step_callback`. |
| gbv vbn zzdsxcdsdfc**Cache** *(optional)* | `cache` | Indicates if the agent should use a cache for tool usage. Default is `True`. |
| **Cache** *(optional)* | `cache` | Indicates if the agent should use a cache for tool usage. Default is `True`. |
| **System Template** *(optional)* | `system_template` | Specifies the system format for the agent. Default is `None`. |
| **Prompt Template** *(optional)* | `prompt_template` | Specifies the prompt format for the agent. Default is `None`. |
| **Response Template** *(optional)* | `response_template` | Specifies the response format for the agent. Default is `None`. |
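For orientation, a minimal sketch of how the attributes in this table might be combined when instantiating an agent. The field values and the callback signature are illustrative assumptions, not taken from this diff:

```python
from crewai import Agent

# Hypothetical step callback; the exact signature is an assumption.
def log_step(step_output):
    print("agent step:", step_output)

researcher = Agent(
    role="Researcher",
    goal="Find relevant sources on a topic",
    backstory="An experienced analyst.",
    verbose=True,            # detailed execution logs (default False)
    allow_delegation=False,  # default False
    cache=True,              # cache tool results (default True)
    step_callback=log_step,  # overrides the crew-level step_callback
)
```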

View File

@@ -10,10 +10,10 @@ The CrewAI CLI provides a set of commands to interact with CrewAI, allowing you
## Installation
To use the CrewAI CLI, make sure you have CrewAI & Poetry installed:
To use the CrewAI CLI, make sure you have CrewAI installed:
```shell
pip install crewai poetry
pip install crewai
```
## Basic Usage
@@ -145,4 +145,4 @@ crewai run
<Note>
Make sure to run these commands from the directory where your CrewAI project is set up.
Some commands may require additional configuration or setup within your project structure.
</Note>
</Note>

View File

@@ -4,7 +4,7 @@ description: Exploring the dynamics of agent collaboration within the CrewAI fra
icon: screen-users
---
## Collaboration Fundamentals
## Collaboration Fundamentals
Collaboration in CrewAI is fundamental, enabling agents to combine their skills, share information, and assist each other in task execution, embodying a truly cooperative ecosystem.
@@ -16,25 +16,24 @@ Collaboration in CrewAI is fundamental, enabling agents to combine their skills,
The `Crew` class has been enriched with several attributes to support advanced functionalities:
| Feature | Description |
| :-------------------------------------------------------------------- | :------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| **Language Model Management** (`manager_llm`, `function_calling_llm`) | Manages language models for executing tasks and tools. `manager_llm` is required for hierarchical processes, while `function_calling_llm` is optional with a default value for streamlined interactions. |
| **Custom Manager Agent** (`manager_agent`) | Specifies a custom agent as the manager, replacing the default CrewAI manager. |
| **Process Flow** (`process`) | Defines execution logic (e.g., sequential, hierarchical) for task distribution. |
| **Verbose Logging** (`verbose`) | Provides detailed logging for monitoring and debugging. Accepts integer and boolean values to control verbosity level. |
| **Rate Limiting** (`max_rpm`) | Limits requests per minute to optimize resource usage. Setting guidelines depend on task complexity and load. |
| **Internationalization / Customization** (`language`, `prompt_file`) | Supports prompt customization for global usability. [Example of file](https://github.com/joaomdmoura/crewAI/blob/main/src/crewai/translations/en.json) |
| **Execution and Output Handling** (`full_output`) | Controls output granularity, distinguishing between full and final outputs. |
| **Callback and Telemetry** (`step_callback`, `task_callback`) | Enables step-wise and task-level execution monitoring and telemetry for performance analytics. |
| **Crew Sharing** (`share_crew`) | Allows sharing crew data with CrewAI for model improvement. Privacy implications and benefits should be considered. |
| **Usage Metrics** (`usage_metrics`) | Logs all LLM usage metrics during task execution for performance insights. |
| **Memory Usage** (`memory`) | Enables memory for storing execution history, aiding in agent learning and task efficiency. |
| **Memory Provider** (`memory_provider`) | Specifies the memory provider to be used by the crew for storing memories. |
| **Embedder Configuration** (`embedder`) | Configures the embedder for language understanding and generation, with support for provider customization. |
| **Cache Management** (`cache`) | Specifies whether to cache tool execution results, enhancing performance. |
| **Output Logging** (`output_log_file`) | Defines the file path for logging crew execution output. |
| **Planning Mode** (`planning`) | Enables action planning before task execution. Set `planning=True` to activate. |
| **Replay Feature** (`replay`) | Provides CLI for listing tasks from the last run and replaying from specific tasks, aiding in task management and troubleshooting. |
| Feature | Description |
|:-------------------------------------------|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| **Language Model Management** (`manager_llm`, `function_calling_llm`) | Manages language models for executing tasks and tools. `manager_llm` is required for hierarchical processes, while `function_calling_llm` is optional with a default value for streamlined interactions. |
| **Custom Manager Agent** (`manager_agent`) | Specifies a custom agent as the manager, replacing the default CrewAI manager. |
| **Process Flow** (`process`) | Defines execution logic (e.g., sequential, hierarchical) for task distribution. |
| **Verbose Logging** (`verbose`) | Provides detailed logging for monitoring and debugging. Accepts integer and boolean values to control verbosity level. |
| **Rate Limiting** (`max_rpm`) | Limits requests per minute to optimize resource usage. Setting guidelines depend on task complexity and load. |
| **Internationalization / Customization** (`language`, `prompt_file`) | Supports prompt customization for global usability. [Example of file](https://github.com/joaomdmoura/crewAI/blob/main/src/crewai/translations/en.json) |
| **Execution and Output Handling** (`full_output`) | Controls output granularity, distinguishing between full and final outputs. |
| **Callback and Telemetry** (`step_callback`, `task_callback`) | Enables step-wise and task-level execution monitoring and telemetry for performance analytics. |
| **Crew Sharing** (`share_crew`) | Allows sharing crew data with CrewAI for model improvement. Privacy implications and benefits should be considered. |
| **Usage Metrics** (`usage_metrics`) | Logs all LLM usage metrics during task execution for performance insights. |
| **Memory Usage** (`memory`) | Enables memory for storing execution history, aiding in agent learning and task efficiency. |
| **Embedder Configuration** (`embedder`) | Configures the embedder for language understanding and generation, with support for provider customization. |
| **Cache Management** (`cache`) | Specifies whether to cache tool execution results, enhancing performance. |
| **Output Logging** (`output_log_file`) | Defines the file path for logging crew execution output. |
| **Planning Mode** (`planning`) | Enables action planning before task execution. Set `planning=True` to activate. |
| **Replay Feature** (`replay`) | Provides CLI for listing tasks from the last run and replaying from specific tasks, aiding in task management and troubleshooting. |
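As a companion to the table, a hedged sketch of a crew exercising several of these attributes, notably `manager_llm`, which the table marks as required for hierarchical processes. Agent and task contents are placeholders:

```python
from crewai import Agent, Crew, Process, Task

researcher = Agent(role="Researcher", goal="Gather data", backstory="An analyst.")
writer = Agent(role="Writer", goal="Compile a report", backstory="An editor.")

research_task = Task(description="Research the topic.", expected_output="Notes.", agent=researcher)
write_task = Task(description="Write the report.", expected_output="A report.", agent=writer)

crew = Crew(
    agents=[researcher, writer],
    tasks=[research_task, write_task],
    process=Process.hierarchical,  # per the table, manager_llm is required here
    manager_llm="gpt-4o",          # illustrative model identifier
    planning=True,                 # plan actions before task execution
)
```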
## Delegation (Dividing to Conquer)
@@ -50,4 +49,4 @@ Consider a crew with a researcher agent tasked with data gathering and a writer
## Conclusion
The integration of advanced attributes and functionalities into the CrewAI framework significantly enriches the agent collaboration ecosystem. These enhancements not only simplify interactions but also offer unprecedented flexibility and control, paving the way for sophisticated AI-driven solutions capable of tackling complex tasks through intelligent collaboration and delegation.
The integration of advanced attributes and functionalities into the CrewAI framework significantly enriches the agent collaboration ecosystem. These enhancements not only simplify interactions but also offer unprecedented flexibility and control, paving the way for sophisticated AI-driven solutions capable of tackling complex tasks through intelligent collaboration and delegation.

View File

@@ -22,8 +22,7 @@ A crew in crewAI represents a collaborative group of agents working together to
| **Max RPM** _(optional)_ | `max_rpm` | Maximum requests per minute the crew adheres to during execution. Defaults to `None`. |
| **Language** _(optional)_ | `language` | Language used for the crew, defaults to English. |
| **Language File** _(optional)_ | `language_file` | Path to the language file to be used for the crew. |
| **Memory** _(optional)_ | `memory` | Utilized for storing execution memories (short-term, long-term, entity memory). |
| **Memory Provider** _(optional)_ | `memory_provider` | Specifies the memory provider to be used by the crew for storing memories. |
| **Memory** _(optional)_ | `memory` | Utilized for storing execution memories (short-term, long-term, entity memory). Defaults to `False`. |
| **Cache** _(optional)_ | `cache` | Specifies whether to use a cache for storing the results of tools' execution. Defaults to `True`. |
| **Embedder** _(optional)_ | `embedder` | Configuration for the embedder to be used by the crew. Mostly used by memory for now. Default is `{"provider": "openai"}`. |
| **Full Output** _(optional)_ | `full_output` | Whether the crew should return the full output with all tasks outputs or just the final output. Defaults to `False`. |

View File

@@ -572,16 +572,16 @@ In this example, the `PoemFlow` class defines a flow that generates a sentence c
### Running the Flow
Before running the flow, make sure to install the dependencies by running:
(Optional) Before running the flow, you can install the dependencies by running:
```bash
poetry install
crewai install
```
Once all of the dependencies are installed, you need to activate the virtual environment by running:
```bash
poetry shell
source .venv/bin/activate
```
After activating the virtual environment, you can run the flow by executing one of the following commands:
@@ -593,7 +593,7 @@ crewai flow run
or
```bash
poetry run run_flow
uv run run_flow
```
The flow will execute, and you should see the output in the console.

View File

@@ -6,18 +6,18 @@ icon: database
## Introduction to Memory Systems in CrewAI
The crewAI framework introduces a sophisticated memory system designed to significantly enhance the capabilities of AI agents.
This system comprises `short-term memory`, `long-term memory`, `entity memory`, and `contextual memory`, each serving a unique purpose in aiding agents to remember,
The crewAI framework introduces a sophisticated memory system designed to significantly enhance the capabilities of AI agents.
This system comprises `short-term memory`, `long-term memory`, `entity memory`, and `contextual memory`, each serving a unique purpose in aiding agents to remember,
reason, and learn from past interactions.
## Memory System Components
| Component | Description |
| :-------------------- | :--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| **Short-Term Memory** | Temporarily stores recent interactions and outcomes using `RAG`, enabling agents to recall and utilize information relevant to their current context during the current executions. |
| **Long-Term Memory** | Preserves valuable insights and learnings from past executions, allowing agents to build and refine their knowledge over time. |
| **Entity Memory** | Captures and organizes information about entities (people, places, concepts) encountered during tasks, facilitating deeper understanding and relationship mapping. Uses `RAG` for storing entity information. |
| **Contextual Memory** | Maintains the context of interactions by combining `ShortTermMemory`, `LongTermMemory`, and `EntityMemory`, aiding in the coherence and relevance of agent responses over a sequence of tasks or a conversation. |
| Component | Description |
| :------------------- | :---------------------------------------------------------------------------------------------------------------------- |
| **Short-Term Memory**| Temporarily stores recent interactions and outcomes using `RAG`, enabling agents to recall and utilize information relevant to their current context during the current executions.|
| **Long-Term Memory** | Preserves valuable insights and learnings from past executions, allowing agents to build and refine their knowledge over time. |
| **Entity Memory** | Captures and organizes information about entities (people, places, concepts) encountered during tasks, facilitating deeper understanding and relationship mapping. Uses `RAG` for storing entity information. |
| **Contextual Memory**| Maintains the context of interactions by combining `ShortTermMemory`, `LongTermMemory`, and `EntityMemory`, aiding in the coherence and relevance of agent responses over a sequence of tasks or a conversation. |
## How Memory Systems Empower Agents
@@ -30,8 +30,8 @@ reason, and learn from past interactions.
## Implementing Memory in Your Crew
When configuring a crew, you can enable and customize each memory component to suit the crew's objectives and the nature of tasks it will perform.
By default, the memory system is disabled, and you can ensure it is active by setting `memory=True` in the crew configuration.
The memory will use OpenAI embeddings by default, but you can change it by setting `embedder` to a different model.
By default, the memory system is disabled, and you can ensure it is active by setting `memory=True` in the crew configuration.
The memory will use OpenAI embeddings by default, but you can change it by setting `embedder` to a different model.
It's also possible to initialize the memory instance with your own instance.
The 'embedder' only applies to **Short-Term Memory** which uses Chroma for RAG using the EmbedChain package.
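A minimal sketch of the configuration this paragraph describes, assuming the default embedder shown in the crew attribute table:

```python
from crewai import Agent, Crew, Task

agent = Agent(role="Researcher", goal="Answer questions", backstory="An analyst.")
task = Task(description="Answer the question.", expected_output="An answer.", agent=agent)

my_crew = Crew(
    agents=[agent],
    tasks=[task],
    memory=True,                      # enables short-term, long-term and entity memory
    embedder={"provider": "openai"},  # the default; shown here only for clarity
)
```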
@@ -92,10 +92,10 @@ my_crew = Crew(
)
```
## Additional Embedding Providers
### Using OpenAI embeddings (already default)
```python Code
from crewai import Crew, Agent, Task, Process
@@ -223,13 +223,14 @@ crewai reset-memories [OPTIONS]
#### Resetting Memory Options
| Option | Description | Type | Default |
| :------------------------ | :--------------------------------- | :------------- | :------ |
| `-l`, `--long` | Reset LONG TERM memory. | Flag (boolean) | False |
| `-s`, `--short` | Reset SHORT TERM memory. | Flag (boolean) | False |
| `-e`, `--entities` | Reset ENTITIES memory. | Flag (boolean) | False |
| Option | Description | Type | Default |
| :----------------- | :------------------------------- | :------------- | :------ |
| `-l`, `--long` | Reset LONG TERM memory. | Flag (boolean) | False |
| `-s`, `--short` | Reset SHORT TERM memory. | Flag (boolean) | False |
| `-e`, `--entities` | Reset ENTITIES memory. | Flag (boolean) | False |
| `-k`, `--kickoff-outputs` | Reset LATEST KICKOFF TASK OUTPUTS. | Flag (boolean) | False |
| `-a`, `--all` | Reset ALL memories. | Flag (boolean) | False |
| `-a`, `--all` | Reset ALL memories. | Flag (boolean) | False |
## Benefits of Using CrewAI's Memory System
@@ -239,5 +240,5 @@ crewai reset-memories [OPTIONS]
## Conclusion
Integrating CrewAI's memory system into your projects is straightforward. By leveraging the provided memory components and configurations,
Integrating CrewAI's memory system into your projects is straightforward. By leveraging the provided memory components and configurations,
you can quickly empower your agents with the ability to remember, reason, and learn from their interactions, unlocking new levels of intelligence and capability.

View File

@@ -0,0 +1,163 @@
# Creating a CrewAI Pipeline Project
Welcome to the comprehensive guide for creating a new CrewAI pipeline project. This document will walk you through the steps to create, customize, and run your CrewAI pipeline project, ensuring you have everything you need to get started.
To learn more about CrewAI pipelines, visit the [CrewAI documentation](https://docs.crewai.com/core-concepts/Pipeline/).
## Prerequisites
Before getting started with CrewAI pipelines, make sure that you have installed CrewAI via pip:
```shell
$ pip install crewai crewai-tools
```
The same prerequisites for virtual environments and Code IDEs apply as in regular CrewAI projects.
## Creating a New Pipeline Project
To create a new CrewAI pipeline project, you have two options:
1. For a basic pipeline template:
```shell
$ crewai create pipeline <project_name>
```
2. For a pipeline example that includes a router:
```shell
$ crewai create pipeline --router <project_name>
```
These commands will create a new project folder with the following structure:
```
<project_name>/
├── README.md
├── uv.lock
├── pyproject.toml
├── src/
│ └── <project_name>/
│ ├── __init__.py
│ ├── main.py
│ ├── crews/
│ │ ├── crew1/
│ │ │ ├── crew1.py
│ │ │ └── config/
│ │ │ ├── agents.yaml
│ │ │ └── tasks.yaml
│ │ ├── crew2/
│ │ │ ├── crew2.py
│ │ │ └── config/
│ │ │ ├── agents.yaml
│ │ │ └── tasks.yaml
│ ├── pipelines/
│ │ ├── __init__.py
│ │ ├── pipeline1.py
│ │ └── pipeline2.py
│ └── tools/
│ ├── __init__.py
│ └── custom_tool.py
└── tests/
```
## Customizing Your Pipeline Project
To customize your pipeline project, you can:
1. Modify the crew files in `src/<project_name>/crews/` to define your agents and tasks for each crew.
2. Modify the pipeline files in `src/<project_name>/pipelines/` to define your pipeline structure.
3. Modify `src/<project_name>/main.py` to set up and run your pipelines.
4. Add your environment variables into the `.env` file.
## Example 1: Defining a Two-Stage Sequential Pipeline
Here's an example of how to define a pipeline with sequential stages in `src/<project_name>/pipelines/pipeline.py`:
```python
from crewai import Pipeline
from crewai.project import PipelineBase
from ..crews.research_crew.research_crew import ResearchCrew
from ..crews.write_x_crew.write_x_crew import WriteXCrew
@PipelineBase
class SequentialPipeline:
def __init__(self):
# Initialize crews
self.research_crew = ResearchCrew().crew()
self.write_x_crew = WriteXCrew().crew()
def create_pipeline(self):
return Pipeline(
stages=[
self.research_crew,
self.write_x_crew
]
)
async def kickoff(self, inputs):
pipeline = self.create_pipeline()
results = await pipeline.kickoff(inputs)
return results
```
## Example 2: Defining a Two-Stage Pipeline with Parallel Execution
```python
from crewai import Pipeline
from crewai.project import PipelineBase
from ..crews.research_crew.research_crew import ResearchCrew
from ..crews.write_x_crew.write_x_crew import WriteXCrew
from ..crews.write_linkedin_crew.write_linkedin_crew import WriteLinkedInCrew
@PipelineBase
class ParallelExecutionPipeline:
def __init__(self):
# Initialize crews
self.research_crew = ResearchCrew().crew()
self.write_x_crew = WriteXCrew().crew()
self.write_linkedin_crew = WriteLinkedInCrew().crew()
def create_pipeline(self):
return Pipeline(
stages=[
self.research_crew,
[self.write_x_crew, self.write_linkedin_crew] # Parallel execution
]
)
async def kickoff(self, inputs):
pipeline = self.create_pipeline()
results = await pipeline.kickoff(inputs)
return results
```
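Since `kickoff` is a coroutine in both examples, the caller needs an event loop. A sketch of how `main.py` might drive the pipeline; the import path and inputs shape are assumptions:

```python
import asyncio

# Hypothetical module path; adjust to your project layout.
from my_project.pipelines.pipeline import ParallelExecutionPipeline

async def main():
    pipeline = ParallelExecutionPipeline()
    results = await pipeline.kickoff([{"topic": "AI agents"}])
    print(results)

if __name__ == "__main__":
    asyncio.run(main())
```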
### Annotations
The main annotation you'll use for pipelines is `@PipelineBase`. This annotation is used to decorate your pipeline classes, similar to how `@CrewBase` is used for crews.
## Installing Dependencies
To install the dependencies for your project, use `uv`. Running the install command is optional, because `crewai run` will automatically install the dependencies for you:
```shell
$ cd <project_name>
$ crewai install (optional)
```
## Running Your Pipeline Project
To run your pipeline project, use the following command:
```shell
$ crewai run
```
This will initialize your pipeline and begin task execution as defined in your `main.py` file.
## Deploying Your Pipeline Project
Pipelines can be deployed in the same way as regular CrewAI projects. The easiest way is through [CrewAI+](https://www.crewai.com/crewaiplus), where you can deploy your pipeline in a few clicks.
Remember, when working with pipelines, you're orchestrating multiple crews to work together in a sequence or parallel fashion. This allows for more complex workflows and information processing tasks.

View File

@@ -0,0 +1,236 @@
---
title: Starting a New CrewAI Project - Using Template
description: A comprehensive guide to starting a new CrewAI project, including the latest updates and project setup methods.
---
# Starting Your CrewAI Project
Welcome to the ultimate guide for starting a new CrewAI project. This document will walk you through the steps to create, customize, and run your CrewAI project, ensuring you have everything you need to get started.
Before we start, there are a couple of things to note:
1. CrewAI is a Python package and requires Python >=3.10 and <=3.13 to run.
2. The preferred way of setting up CrewAI is using the `crewai create crew` command. This will create a new project folder and install a skeleton template for you to work on.
## Prerequisites
Before getting started with CrewAI, make sure that you have installed it via pip:
```shell
$ pip install 'crewai[tools]'
```
## Creating a New Project
In this example, we will be using `uv` as our virtual environment manager.
To create a new CrewAI project, run the following CLI command:
```shell
$ crewai create crew <project_name>
```
This command will create a new project folder with the following structure:
```shell
my_project/
├── .gitignore
├── pyproject.toml
├── README.md
└── src/
└── my_project/
├── __init__.py
├── main.py
├── crew.py
├── tools/
│ ├── custom_tool.py
│ └── __init__.py
└── config/
├── agents.yaml
└── tasks.yaml
```
You can now start developing your project by editing the files in the `src/my_project` folder. The `main.py` file is the entry point of your project, and the `crew.py` file is where you define your agents and tasks.
## Customizing Your Project
To customize your project, you can:
- Modify `src/my_project/config/agents.yaml` to define your agents.
- Modify `src/my_project/config/tasks.yaml` to define your tasks.
- Modify `src/my_project/crew.py` to add your own logic, tools, and specific arguments.
- Modify `src/my_project/main.py` to add custom inputs for your agents and tasks.
- Add your environment variables into the `.env` file.
### Example: Defining Agents and Tasks
#### agents.yaml
```yaml
researcher:
role: >
Job Candidate Researcher
goal: >
Find potential candidates for the job
backstory: >
You are adept at finding the right candidates by exploring various online
resources. Your skill in identifying suitable candidates ensures the best
match for job positions.
```
#### tasks.yaml
```yaml
research_candidates_task:
description: >
Conduct thorough research to find potential candidates for the specified job.
Utilize various online resources and databases to gather a comprehensive list of potential candidates.
Ensure that the candidates meet the job requirements provided.
Job Requirements:
{job_requirements}
expected_output: >
A list of 10 potential candidates with their contact information and brief profiles highlighting their suitability.
agent: researcher # THIS NEEDS TO MATCH THE AGENT NAME IN THE AGENTS.YAML FILE AND THE AGENT DEFINED IN THE crew.py FILE
context: # THESE NEED TO MATCH THE TASK NAMES DEFINED ABOVE AND THE TASKS.YAML FILE AND THE TASK DEFINED IN THE crew.py FILE
- researcher
```
### Referencing Variables:
Functions you define with the same name will be used. For example, you can reference the agent for a specific task from the `tasks.yaml` file. Ensure your annotated agent and function name are the same; otherwise, your task won't recognize the reference properly.
#### Example References
`agents.yaml`
```yaml
email_summarizer:
role: >
Email Summarizer
goal: >
Summarize emails into a concise and clear summary
backstory: >
You will create a 5 bullet point summary of the report
llm: mixtal_llm
```
`tasks.yaml`
```yaml
email_summarizer_task:
description: >
Summarize the email into a 5 bullet point summary
expected_output: >
A 5 bullet point summary of the email
agent: email_summarizer
context:
- reporting_task
- research_task
```
Use the annotations to properly reference the agent and task in the `crew.py` file.
### Annotations include:
* `@agent`
* `@task`
* `@crew`
* `@tool`
* `@callback`
* `@output_json`
* `@output_pydantic`
* `@cache_handler`
`crew.py`
```python
# ...
@agent
def email_summarizer(self) -> Agent:
return Agent(
config=self.agents_config["email_summarizer"],
)
@task
def email_summarizer_task(self) -> Task:
return Task(
config=self.tasks_config["email_summarizer_task"],
)
# ...
```
## Installing Dependencies
To install the dependencies for your project, you can use `uv`. Running the following command is optional, since `crewai run` will automatically install the dependencies for you.
```shell
$ cd my_project
$ crewai install (optional)
```
This will install the dependencies specified in the `pyproject.toml` file.
## Interpolating Variables
Any variable interpolated in your `agents.yaml` and `tasks.yaml` files like `{variable}` will be replaced by the value of the variable in the `main.py` file.
#### tasks.yaml
```yaml
research_task:
description: >
Conduct a thorough research about the customer and competitors in the context
of {customer_domain}.
Make sure you find any interesting and relevant information given the
current year is 2024.
expected_output: >
A complete report on the customer and their customers and competitors,
including their demographics, preferences, market positioning and audience engagement.
```
#### main.py
```python
# main.py
def run():
inputs = {
"customer_domain": "crewai.com"
}
MyProjectCrew(inputs).crew().kickoff(inputs=inputs)
```
## Running Your Project
To run your project, use the following command:
```shell
$ crewai run
```
This will initialize your crew of AI agents and begin task execution as defined in your configuration in the `main.py` file.
### Replay Tasks from Latest Crew Kickoff
CrewAI now includes a replay feature that allows you to list the tasks from the last run and replay from a specific one. To use this feature, run:
```shell
$ crewai replay <task_id>
```
Replace `<task_id>` with the ID of the task you want to replay.
### Reset Crew Memory
If you need to reset the memory of your crew before running it again, you can do so by calling the reset memory feature:
```shell
$ crewai reset-memories
```
This will clear the crew's memory, allowing for a fresh start.
## Deploying Your Project
The easiest way to deploy your crew is through [CrewAI+](https://www.crewai.com/crewaiplus), where you can deploy your crew in a few clicks.

poetry.lock generated
View File

@@ -1597,12 +1597,12 @@ files = [
google-auth = ">=2.14.1,<3.0.dev0"
googleapis-common-protos = ">=1.56.2,<2.0.dev0"
grpcio = [
{version = ">=1.33.2,<2.0dev", optional = true, markers = "python_version < \"3.11\" and extra == \"grpc\""},
{version = ">=1.49.1,<2.0dev", optional = true, markers = "python_version >= \"3.11\" and extra == \"grpc\""},
{version = ">=1.33.2,<2.0dev", optional = true, markers = "python_version < \"3.11\" and extra == \"grpc\""},
]
grpcio-status = [
{version = ">=1.33.2,<2.0.dev0", optional = true, markers = "python_version < \"3.11\" and extra == \"grpc\""},
{version = ">=1.49.1,<2.0.dev0", optional = true, markers = "python_version >= \"3.11\" and extra == \"grpc\""},
{version = ">=1.33.2,<2.0.dev0", optional = true, markers = "python_version < \"3.11\" and extra == \"grpc\""},
]
proto-plus = ">=1.22.3,<2.0.0dev"
protobuf = ">=3.19.5,<3.20.0 || >3.20.0,<3.20.1 || >3.20.1,<4.21.0 || >4.21.0,<4.21.1 || >4.21.1,<4.21.2 || >4.21.2,<4.21.3 || >4.21.3,<4.21.4 || >4.21.4,<4.21.5 || >4.21.5,<6.0.0.dev0"
@@ -4286,8 +4286,8 @@ files = [
[package.dependencies]
numpy = [
{version = ">=1.22.4", markers = "python_version < \"3.11\""},
{version = ">=1.23.2", markers = "python_version == \"3.11\""},
{version = ">=1.22.4", markers = "python_version < \"3.11\""},
{version = ">=1.26.0", markers = "python_version >= \"3.12\""},
]
python-dateutil = ">=2.8.2"

View File

@@ -1,65 +1,66 @@
[tool.poetry]
[project]
name = "crewai"
version = "0.70.1"
description = "Cutting-edge framework for orchestrating role-playing, autonomous AI agents. By fostering collaborative intelligence, CrewAI empowers agents to work together seamlessly, tackling complex tasks."
authors = ["Joao Moura <joao@crewai.com>"]
readme = "README.md"
packages = [{ include = "crewai", from = "src" }]
requires-python = ">=3.10,<=3.13"
authors = [
{ name = "Joao Moura", email = "joao@crewai.com" }
]
dependencies = [
"pydantic>=2.4.2",
"langchain>=0.2.16",
"openai>=1.13.3",
"opentelemetry-api>=1.22.0",
"opentelemetry-sdk>=1.22.0",
"opentelemetry-exporter-otlp-proto-http>=1.22.0",
"instructor>=1.3.3",
"regex>=2024.9.11",
"crewai-tools>=0.12.1",
"click>=8.1.7",
"python-dotenv>=1.0.0",
"appdirs>=1.4.4",
"jsonref>=1.1.0",
"agentops>=0.3.0",
"embedchain>=0.1.114",
"json-repair>=0.25.2",
"auth0-python>=4.7.1",
"litellm>=1.44.22",
"pyvis>=0.3.2",
"uv>=0.4.18",
"tomli-w>=1.1.0",
]
[tool.poetry.urls]
[project.urls]
Homepage = "https://crewai.com"
Documentation = "https://docs.crewai.com"
Repository = "https://github.com/crewAIInc/crewAI"
[tool.poetry.dependencies]
python = ">=3.10,<=3.13"
pydantic = "^2.4.2"
langchain = "^0.2.16"
openai = "^1.13.3"
opentelemetry-api = "^1.22.0"
opentelemetry-sdk = "^1.22.0"
opentelemetry-exporter-otlp-proto-http = "^1.22.0"
instructor = "1.3.3"
regex = "^2024.9.11"
crewai-tools = { version = "^0.12.1", optional = true }
click = "^8.1.7"
python-dotenv = "^1.0.0"
appdirs = "^1.4.4"
jsonref = "^1.1.0"
agentops = { version = "^0.3.0", optional = true }
embedchain = "0.1.122"
json-repair = "^0.25.2"
auth0-python = "^4.7.1"
poetry = "^1.8.3"
litellm = "^1.44.22"
pyvis = "^0.3.2"
[project.optional-dependencies]
tools = ["crewai-tools>=0.12.1"]
agentops = ["agentops>=0.3.0"]
[tool.poetry.extras]
tools = ["crewai-tools"]
agentops = ["agentops"]
[tool.uv]
dev-dependencies = [
"ruff>=0.4.10",
"mypy>=1.10.0",
"pre-commit>=3.6.0",
"mkdocs>=1.4.3",
"mkdocstrings>=0.22.0",
"mkdocstrings-python>=1.1.2",
"mkdocs-material>=9.5.7",
"mkdocs-material-extensions>=1.3.1",
"pillow>=10.2.0",
"cairosvg>=2.7.1",
"crewai-tools>=0.12.1",
"pytest>=8.0.0",
"pytest-vcr>=1.0.2",
"python-dotenv>=1.0.0",
"pytest-asyncio>=0.23.7",
"pytest-subprocess>=1.5.2",
]
[tool.poetry.group.dev.dependencies]
isort = "^5.13.2"
mypy = "1.10.0"
autoflake = "^2.2.1"
pre-commit = "^3.6.0"
mkdocs = "^1.4.3"
mkdocstrings = "^0.22.0"
mkdocstrings-python = "^1.1.2"
mkdocs-material = { extras = ["imaging"], version = "^9.5.7" }
mkdocs-material-extensions = "^1.3.1"
pillow = "^10.2.0"
cairosvg = "^2.7.1"
crewai-tools = "^0.12.1"
[tool.poetry.group.test.dependencies]
pytest = "^8.0.0"
pytest-vcr = "^1.0.2"
python-dotenv = "1.0.0"
pytest-asyncio = "^0.23.7"
pytest-subprocess = "^1.5.2"
[tool.poetry.scripts]
[project.scripts]
crewai = "crewai.cli.cli:crewai"
[tool.mypy]
@@ -71,5 +72,5 @@ exclude = ["cli/templates"]
exclude_dirs = ["src/crewai/cli/templates"]
[build-system]
requires = ["poetry-core"]
build-backend = "poetry.core.masonry.api"
requires = ["hatchling"]
build-backend = "hatchling.build"

View File

@@ -15,27 +15,18 @@ from crewai.utilities.constants import TRAINED_AGENTS_DATA_FILE, TRAINING_DATA_F
from crewai.utilities.token_counter_callback import TokenCalcHandler
from crewai.utilities.training_handler import CrewTrainingHandler
agentops = None
try:
import agentops # type: ignore # Name "agentops" already defined on line 21
from agentops import track_agent # type: ignore
except ImportError:
def mock_agent_ops_provider():
def track_agent(*args, **kwargs):
def track_agent():
def noop(f):
return f
return noop
return track_agent
agentops = None
if os.environ.get("AGENTOPS_API_KEY"):
try:
from agentops import track_agent
except ImportError:
track_agent = mock_agent_ops_provider()
else:
track_agent = mock_agent_ops_provider()
@track_agent()
class Agent(BaseAgent):
@@ -201,8 +192,6 @@ class Agent(BaseAgent):
task_prompt = task.prompt()
print("context for task", context)
if context:
task_prompt = self.i18n.slice("task_with_context").format(
task=task_prompt, context=context
@@ -213,8 +202,6 @@ class Agent(BaseAgent):
self.crew._short_term_memory,
self.crew._long_term_memory,
self.crew._entity_memory,
self.crew._user_memory,
self.crew.memory_provider,
)
memory = contextual_memory.build_context_for_task(task, context)
if memory.strip() != "":
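The first hunk in this file interleaves the removed and added lines, which makes the control flow hard to follow. A sketch of the pattern the new code appears to converge on, with the `AGENTOPS_API_KEY` environment check removed as the branch name suggests; this reading of the hunk is an assumption:

```python
agentops = None
try:
    import agentops  # optional dependency
    from agentops import track_agent
except ImportError:
    # Fallback: without agentops installed, @track_agent() is a no-op decorator.
    def track_agent(*args, **kwargs):
        def noop(f):
            return f
        return noop

@track_agent()
class Agent:
    """Stand-in for crewai's Agent(BaseAgent)."""
```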

View File

@@ -81,6 +81,7 @@ class BaseAgentTools(BaseModel, ABC):
task_with_assigned_agent = Task( # type: ignore # Incompatible types in assignment (expression has type "Task", variable has type "str")
description=task,
agent=agent,
expected_output="Your best answer to your coworker asking you this, accounting for the context shared.",
expected_output=agent.i18n.slice("manager_request"),
i18n=agent.i18n,
)
return agent.execute_task(task_with_assigned_agent, context)
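This hunk swaps a hard-coded English string for an i18n slice, so the manager request now follows the agent's language settings. A sketch of what the slice lookup does, assuming this import path and that the `manager_request` slice added alongside this change is present in the loaded translations:

```python
from crewai.utilities import I18N  # import path is an assumption

i18n = I18N()  # loads the default en.json translations
template = i18n.slice("manager_request")
print(template)  # the localized manager-request prompt text
```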

View File

@@ -151,7 +151,7 @@ class CrewAgentExecutor(CrewAgentExecutorMixin):
)
self.have_forced_answer = True
self.messages.append(
self._format_msg(formatted_answer.text, role="user")
self._format_msg(formatted_answer.text, role="assistant")
)
except OutputParserException as e:
@@ -317,9 +317,9 @@ class CrewAgentExecutor(CrewAgentExecutorMixin):
if self.crew is not None and hasattr(self.crew, "_train_iteration"):
train_iteration = self.crew._train_iteration
if agent_id in training_data and isinstance(train_iteration, int):
training_data[agent_id][train_iteration][
"improved_output"
] = result.output
training_data[agent_id][train_iteration]["improved_output"] = (
result.output
)
training_handler.save(training_data)
else:
self._logger.log(
@@ -334,6 +334,32 @@ class CrewAgentExecutor(CrewAgentExecutorMixin):
color="red",
)
if self.ask_for_human_input and human_feedback is not None:
training_data = {
"initial_output": result.output,
"human_feedback": human_feedback,
"agent": agent_id,
"agent_role": self.agent.role,
}
if self.crew is not None and hasattr(self.crew, "_train_iteration"):
train_iteration = self.crew._train_iteration
if isinstance(train_iteration, int):
CrewTrainingHandler(TRAINING_DATA_FILE).append(
train_iteration, agent_id, training_data
)
else:
self._logger.log(
"error",
"Invalid train iteration type. Expected int.",
color="red",
)
else:
self._logger.log(
"error",
"Crew is None or does not have _train_iteration attribute.",
color="red",
)
def _format_prompt(self, prompt: str, inputs: Dict[str, str]) -> str:
prompt = prompt.replace("{input}", inputs["input"])
prompt = prompt.replace("{tool_names}", inputs["tool_names"])

View File

@@ -21,6 +21,7 @@ from .run_crew import run_crew
from .run_flow import run_flow
from .tools.main import ToolCommand
from .train_crew import train_crew
from .update_crew import update_crew
@click.group()
@@ -188,6 +189,12 @@ def run():
run_crew()
@crewai.command()
def update():
"""Update the pyproject.toml of the Crew project to use uv."""
update_crew()
@crewai.command()
def signup():
"""Sign Up/Login to CrewAI+."""
@@ -276,7 +283,13 @@ def tool_install(handle: str):
@tool.command(name="publish")
@click.option("--force", is_flag=True, show_default=True, default=False, help="Bypasses Git remote validations")
@click.option(
"--force",
is_flag=True,
show_default=True,
default=False,
help="Bypasses Git remote validations",
)
@click.option("--public", "is_public", flag_value=True, default=False)
@click.option("--private", "is_public", flag_value=False)
def tool_publish(is_public: bool, force: bool):

View File

@@ -5,13 +5,13 @@ import click
def evaluate_crew(n_iterations: int, model: str) -> None:
"""
Test and Evaluate the crew by running a command in the Poetry environment.
Test and Evaluate the crew by running a command in the UV environment.
Args:
n_iterations (int): The number of iterations to test the crew.
model (str): The model to test the crew with.
"""
command = ["poetry", "run", "test", str(n_iterations), model]
command = ["uv", "run", "test", str(n_iterations), model]
try:
if n_iterations <= 0:

View File

@@ -5,13 +5,10 @@ import click
def install_crew() -> None:
"""
Install the crew by running the Poetry command to lock and install.
Install the crew by running the UV command to lock and install.
"""
try:
subprocess.run(["poetry", "lock"], check=True, capture_output=False, text=True)
subprocess.run(
["poetry", "install"], check=True, capture_output=False, text=True
)
subprocess.run(["uv", "sync"], check=True, capture_output=False, text=True)
except subprocess.CalledProcessError as e:
click.echo(f"An error occurred while running the crew: {e}", err=True)

View File

@@ -5,9 +5,9 @@ import click
def plot_flow() -> None:
"""
Plot the flow by running a command in the Poetry environment.
Plot the flow by running a command in the UV environment.
"""
command = ["poetry", "run", "plot_flow"]
command = ["uv", "run", "plot_flow"]
try:
result = subprocess.run(command, capture_output=False, text=True, check=True)

View File

@@ -25,7 +25,9 @@ class PlusAPI:
def _make_request(self, method: str, endpoint: str, **kwargs) -> requests.Response:
url = urljoin(self.base_url, endpoint)
return requests.request(method, url, headers=self.headers, **kwargs)
session = requests.Session()
session.trust_env = False
return session.request(method, url, headers=self.headers, **kwargs)
def login_to_tool_repository(self):
return self._make_request("POST", f"{self.TOOLS_RESOURCE}/login")
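Switching from module-level `requests.request` to a `Session` with `trust_env = False` makes the client ignore environment-derived settings such as `HTTP(S)_PROXY` variables and `~/.netrc` credentials. A standalone sketch of the behavior, with a placeholder URL:

```python
import requests

session = requests.Session()
session.trust_env = False  # ignore proxy env vars and ~/.netrc credentials

# Behaves like requests.request(...), minus the environment overrides.
response = session.request("GET", "https://app.crewai.com/", headers={})
print(response.status_code)
```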

View File

@@ -1,4 +1,5 @@
import subprocess
import click
@@ -9,7 +10,7 @@ def replay_task_command(task_id: str) -> None:
Args:
task_id (str): The ID of the task to replay from.
"""
command = ["poetry", "run", "replay", task_id]
command = ["uv", "run", "replay", task_id]
try:
result = subprocess.run(command, capture_output=False, text=True, check=True)

View File

@@ -1,23 +1,29 @@
import subprocess
import click
import tomllib
def run_crew() -> None:
"""
Run the crew by running a command in the Poetry environment.
Run the crew by running a command in the UV environment.
"""
command = ["poetry", "run", "run_crew"]
command = ["uv", "run", "run_crew"]
try:
result = subprocess.run(command, capture_output=False, text=True, check=True)
if result.stderr:
click.echo(result.stderr, err=True)
subprocess.run(command, capture_output=False, text=True, check=True)
except subprocess.CalledProcessError as e:
click.echo(f"An error occurred while running the crew: {e}", err=True)
click.echo(e.output, err=True)
click.echo(e.output, err=True, nl=True)
with open("pyproject.toml", "rb") as f:
data = tomllib.load(f)
if data.get("tool", {}).get("poetry"):
click.secho(
"It's possible that you are using an old version of crewAI that uses poetry, please run `crewai update` to update your pyproject.toml to use uv.",
fg="yellow",
)
except Exception as e:
click.echo(f"An unexpected error occurred: {e}", err=True)

View File

@@ -5,9 +5,9 @@ import click
def run_flow() -> None:
"""
Run the flow by running a command in the Poetry environment.
Run the flow by running a command in the UV environment.
"""
command = ["poetry", "run", "run_flow"]
command = ["uv", "run", "run_flow"]
try:
result = subprocess.run(command, capture_output=False, text=True, check=True)

View File

@@ -4,17 +4,17 @@ Welcome to the {{crew_name}} Crew project, powered by [crewAI](https://crewai.co
## Installation
Ensure you have Python >=3.10 <=3.13 installed on your system. This project uses [Poetry](https://python-poetry.org/) for dependency management and package handling, offering a seamless setup and execution experience.
Ensure you have Python >=3.10 <=3.13 installed on your system. This project uses [UV](https://docs.astral.sh/uv/) for dependency management and package handling, offering a seamless setup and execution experience.
First, if you haven't already, install Poetry:
First, if you haven't already, install uv:
```bash
pip install poetry
pip install uv
```
Next, navigate to your project directory and install the dependencies:
1. First lock the dependencies and install them by using the CLI command:
(Optional) Lock the dependencies and install them by using the CLI command:
```bash
crewai install
```

View File

@@ -2,7 +2,7 @@
import sys
from {{folder_name}}.crew import {{crew_name}}Crew
# This main file is intended to be a way for your to run your
# This main file is intended to be a way for you to run your
# crew locally, so refrain from adding necessary logic into this file.
# Replace with inputs you want to test with, it will automatically
# interpolate any tasks and agents information

View File

@@ -1,15 +1,14 @@
[tool.poetry]
[project]
name = "{{folder_name}}"
version = "0.1.0"
description = "{{name}} using crewAI"
authors = ["Your Name <you@example.com>"]
authors = [{ name = "Your Name", email = "you@example.com" }]
requires-python = ">=3.10,<=3.13"
dependencies = [
"crewai[tools]>=0.67.1,<1.0.0"
]
[tool.poetry.dependencies]
python = ">=3.10,<=3.13"
crewai = { extras = ["tools"], version = ">=0.70.1,<1.0.0" }
[tool.poetry.scripts]
[project.scripts]
{{folder_name}} = "{{folder_name}}.main:run"
run_crew = "{{folder_name}}.main:run"
train = "{{folder_name}}.main:train"
@@ -17,5 +16,5 @@ replay = "{{folder_name}}.main:replay"
test = "{{folder_name}}.main:test"
[build-system]
requires = ["poetry-core"]
build-backend = "poetry.core.masonry.api"
requires = ["hatchling"]
build-backend = "hatchling.build"

View File

@@ -4,18 +4,17 @@ Welcome to the {{crew_name}} Crew project, powered by [crewAI](https://crewai.co
## Installation
Ensure you have Python >=3.10 <=3.13 installed on your system. This project uses [Poetry](https://python-poetry.org/) for dependency management and package handling, offering a seamless setup and execution experience.
Ensure you have Python >=3.10 <=3.13 installed on your system. This project uses [UV](https://docs.astral.sh/uv/) for dependency management and package handling, offering a seamless setup and execution experience.
First, if you haven't already, install Poetry:
First, if you haven't already, install uv:
```bash
pip install poetry
pip install uv
```
Next, navigate to your project directory and install the dependencies:
1. First lock the dependencies and then install them:
(Optional) Lock the dependencies and install them by using the CLI command:
```bash
crewai install
```

View File

@@ -1,19 +1,19 @@
[tool.poetry]
[project]
name = "{{folder_name}}"
version = "0.1.0"
description = "{{name}} using crewAI"
authors = ["Your Name <you@example.com>"]
authors = [{ name = "Your Name", email = "you@example.com" }]
requires-python = ">=3.10,<=3.13"
dependencies = [
"crewai[tools]>=0.67.1,<1.0.0",
"asyncio"
]
[tool.poetry.dependencies]
python = ">=3.10,<=3.13"
crewai = { extras = ["tools"], version = ">=0.70.1,<1.0.0" }
asyncio = "*"
[tool.poetry.scripts]
[project.scripts]
{{folder_name}} = "{{folder_name}}.main:main"
run_flow = "{{folder_name}}.main:main"
plot_flow = "{{folder_name}}.main:plot"
[build-system]
requires = ["poetry-core"]
build-backend = "poetry.core.masonry.api"
requires = ["hatchling"]
build-backend = "hatchling.build"

View File

@@ -1,20 +1,21 @@
[tool.poetry]
[project]
name = "{{folder_name}}"
version = "0.1.0"
description = "{{name}} using crewAI"
authors = ["Your Name <you@example.com>"]
requires-python = ">=3.10,<=3.13"
dependencies = [
"crewai[tools]>=0.67.1,<1.0.0"
]
[tool.poetry.dependencies]
python = ">=3.10,<=3.13"
crewai = { extras = ["tools"], version = ">=0.70.1,<1.0.0" }
[tool.poetry.scripts]
[project.scripts]
{{folder_name}} = "{{folder_name}}.main:main"
run_crew = "{{folder_name}}.main:main"
train = "{{folder_name}}.main:train"
replay = "{{folder_name}}.main:replay"
test = "{{folder_name}}.main:test"
[build-system]
requires = ["poetry-core"]
build-backend = "poetry.core.masonry.api"
requires = ["hatchling"]
build-backend = "hatchling.build"

View File

@@ -0,0 +1,10 @@
# Python-generated files
__pycache__/
*.py[oc]
build/
dist/
wheels/
*.egg-info
# Virtual environments
.venv

View File

@@ -6,13 +6,13 @@ custom tools to power up your crews.
## Installing
Ensure you have Python >=3.10 <=3.13 installed on your system. This project
uses [Poetry](https://python-poetry.org/) for dependency management and package
uses [UV](https://docs.astral.sh/uv/) for dependency management and package
handling, offering a seamless setup and execution experience.
First, if you haven't already, install Poetry:
First, if you haven't already, install `uv`:
```bash
pip install poetry
pip install uv
```
Next, navigate to your project directory and install the dependencies with:

View File

@@ -1,14 +1,10 @@
[tool.poetry]
[project]
name = "{{folder_name}}"
version = "0.1.0"
description = "Power up your crews with {{folder_name}}"
authors = ["Your Name <you@example.com>"]
readme = "README.md"
requires-python = ">=3.10,<=3.13"
dependencies = [
"crewai[tools]>=0.70.1"
]
[tool.poetry.dependencies]
python = ">=3.10,<=3.13"
crewai = { extras = ["tools"], version = ">=0.70.1,<1.0.0" }
[build-system]
requires = ["poetry-core"]
build-backend = "poetry.core.masonry.api"

View File

@@ -1,20 +1,24 @@
import base64
from pathlib import Path
import click
import os
import platform
import subprocess
import tempfile
from pathlib import Path
from netrc import netrc
import stat
import click
from rich.console import Console
from crewai.cli.command import BaseCommand, PlusAPIMixin
from crewai.cli import git
from crewai.cli.command import BaseCommand, PlusAPIMixin
from crewai.cli.utils import (
get_project_name,
get_project_description,
get_project_name,
get_project_version,
tree_copy,
tree_find_and_replace,
)
from rich.console import Console
console = Console()
@@ -24,6 +28,8 @@ class ToolCommand(BaseCommand, PlusAPIMixin):
A class to handle tool repository related operations for CrewAI projects.
"""
BASE_URL = "https://app.crewai.com/pypi/"
def __init__(self):
BaseCommand.__init__(self)
PlusAPIMixin.__init__(self, telemetry=self._telemetry)
@@ -82,7 +88,7 @@ class ToolCommand(BaseCommand, PlusAPIMixin):
with tempfile.TemporaryDirectory() as temp_build_dir:
subprocess.run(
["poetry", "build", "-f", "sdist", "--output", temp_build_dir],
["uv", "build", "--sdist", "--out-dir", temp_build_dir],
check=True,
capture_output=False,
)
@@ -92,7 +98,7 @@ class ToolCommand(BaseCommand, PlusAPIMixin):
)
if not tarball_filename:
console.print(
"Project build failed. Please ensure that the command `poetry build -f sdist` completes successfully.",
"Project build failed. Please ensure that the command `uv build --sdist` completes successfully.",
style="bold red",
)
raise SystemExit
@@ -143,68 +149,41 @@ class ToolCommand(BaseCommand, PlusAPIMixin):
if login_response.status_code != 200:
console.print(
"Failed to authenticate to the tool repository. Make sure you have the access to tools.",
"Authentication failed. Verify access to the tool repository, or try `crewai login`. ",
style="bold red",
)
raise SystemExit
login_response_json = login_response.json()
for repository in login_response_json["repositories"]:
self._add_repository_to_poetry(
repository, login_response_json["credential"]
)
self._set_netrc_credentials(login_response_json["credential"])
console.print(
"Succesfully authenticated to the tool repository.", style="bold green"
"Successfully authenticated to the tool repository.", style="bold green"
)
def _add_repository_to_poetry(self, repository, credentials):
repository_handle = f"crewai-{repository['handle']}"
def _set_netrc_credentials(self, credentials, netrc_path=None):
if not netrc_path:
netrc_filename = "_netrc" if platform.system() == "Windows" else ".netrc"
netrc_path = Path.home() / netrc_filename
netrc_path.touch(mode=stat.S_IRUSR | stat.S_IWUSR, exist_ok=True)
add_repository_command = [
"poetry",
"source",
"add",
"--priority=explicit",
repository_handle,
repository["url"],
]
add_repository_result = subprocess.run(
add_repository_command, text=True, check=True
)
netrc_instance = netrc(file=netrc_path)
netrc_instance.hosts["app.crewai.com"] = (credentials["username"], "", credentials["password"])
if add_repository_result.stderr:
click.echo(add_repository_result.stderr, err=True)
raise SystemExit
with open(netrc_path, 'w') as file:
file.write(str(netrc_instance))
add_repository_credentials_command = [
"poetry",
"config",
f"http-basic.{repository_handle}",
credentials["username"],
credentials["password"],
]
add_repository_credentials_result = subprocess.run(
add_repository_credentials_command,
capture_output=False,
text=True,
check=True,
)
if add_repository_credentials_result.stderr:
click.echo(add_repository_credentials_result.stderr, err=True)
raise SystemExit
console.print(f"Added credentials to {netrc_path}", style="bold green")
def _add_package(self, tool_details):
tool_handle = tool_details["handle"]
repository_handle = tool_details["repository"]["handle"]
pypi_index_handle = f"crewai-{repository_handle}"
add_package_command = [
"poetry",
"uv",
"add",
"--source",
pypi_index_handle,
"--extra-index-url",
self.BASE_URL + repository_handle,
tool_handle,
]
add_package_result = subprocess.run(

View File
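The login flow above replaces two `poetry source`/`poetry config` subprocess calls with a single stdlib `netrc` write, and `_add_package` now hands the repository to `uv add` via `--extra-index-url`. A minimal sketch, using a throwaway file instead of the real `~/.netrc`, of the round-trip the new `_set_netrc_credentials` relies on:

```python
from netrc import netrc
from pathlib import Path

# Stand-in for ~/.netrc (POSIX) or _netrc (Windows); netrc() needs an existing file.
path = Path("example_netrc")
path.write_text("")

entries = netrc(file=path)
# hosts maps machine -> (login, account, password); uv and pip honour these
# entries when authenticating against an extra package index.
entries.hosts["app.crewai.com"] = ("user", "", "pass")
path.write_text(str(entries))  # netrc's repr emits valid netrc syntax

# Reading the credentials back the way an HTTP client would:
login, _, password = netrc(file=path).authenticators("app.crewai.com")
assert (login, password) == ("user", "pass")
```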

@@ -5,12 +5,12 @@ import click
def train_crew(n_iterations: int, filename: str) -> None:
"""
Train the crew by running a command in the Poetry environment.
Train the crew by running a command in the UV environment.
Args:
n_iterations (int): The number of iterations to train the crew.
"""
command = ["poetry", "run", "train", str(n_iterations), filename]
command = ["uv", "run", "train", str(n_iterations), filename]
try:
if n_iterations <= 0:

View File

@@ -0,0 +1,115 @@
import shutil
import tomli_w
import tomllib
def update_crew() -> None:
"""Update the pyproject.toml of the Crew project to use uv."""
migrate_pyproject("pyproject.toml", "pyproject.toml")
def migrate_pyproject(input_file, output_file):
"""
Migrate a Poetry-style pyproject.toml to the PEP 621 format used by uv.
Once uv can read the legacy Poetry format directly, this function will be
deprecated.
"""
# Read the input pyproject.toml
with open(input_file, "rb") as f:
pyproject = tomllib.load(f)
# Initialize the new project structure
new_pyproject = {
"project": {},
"build-system": {"requires": ["hatchling"], "build-backend": "hatchling.build"},
}
# Migrate project metadata
if "tool" in pyproject and "poetry" in pyproject["tool"]:
poetry = pyproject["tool"]["poetry"]
new_pyproject["project"]["name"] = poetry.get("name")
new_pyproject["project"]["version"] = poetry.get("version")
new_pyproject["project"]["description"] = poetry.get("description")
new_pyproject["project"]["authors"] = [
{
"name": author.split("<")[0].strip(),
"email": author.split("<")[1].strip(">").strip(),
}
for author in poetry.get("authors", [])
]
new_pyproject["project"]["requires-python"] = poetry.get("python")
else:
# If it's already in the new format, just copy the project section
new_pyproject["project"] = pyproject.get("project", {})
# Migrate or copy dependencies
if "dependencies" in new_pyproject["project"]:
# If dependencies are already in the new format, keep them as is
pass
elif "dependencies" in poetry:
new_pyproject["project"]["dependencies"] = []
for dep, version in poetry["dependencies"].items():
if isinstance(version, dict): # Handle extras
extras = ",".join(version.get("extras", []))
new_dep = f"{dep}[{extras}]"
if "version" in version:
new_dep += parse_version(version["version"])
elif dep == "python":
new_pyproject["project"]["requires-python"] = version
continue
else:
new_dep = f"{dep}{parse_version(version)}"
new_pyproject["project"]["dependencies"].append(new_dep)
# Migrate or copy scripts
if "scripts" in poetry:
new_pyproject["project"]["scripts"] = poetry["scripts"]
elif "scripts" in pyproject.get("project", {}):
new_pyproject["project"]["scripts"] = pyproject["project"]["scripts"]
else:
new_pyproject["project"]["scripts"] = {}
if (
"run_crew" not in new_pyproject["project"]["scripts"]
and len(new_pyproject["project"]["scripts"]) > 0
):
# Extract the module name from any existing script
existing_scripts = new_pyproject["project"]["scripts"]
module_name = next(
(value.split(".")[0] for value in existing_scripts.values() if "." in value)
)
new_pyproject["project"]["scripts"]["run_crew"] = f"{module_name}.main:run"
# Migrate optional dependencies
if "extras" in poetry:
new_pyproject["project"]["optional-dependencies"] = poetry["extras"]
# Backup the old pyproject.toml
backup_file = "pyproject-old.toml"
shutil.copy2(input_file, backup_file)
print(f"Original pyproject.toml backed up as {backup_file}")
# Write the new pyproject.toml
with open(output_file, "wb") as f:
tomli_w.dump(new_pyproject, f)
print(f"Migration complete. New pyproject.toml written to {output_file}")
def parse_version(version: str) -> str:
"""Parse and convert version specifiers."""
if version.startswith("^"):
main_lib_version = version[1:].split(",")[0]
additional_lib_version = None
if len(version[1:].split(",")) > 1:
additional_lib_version = version[1:].split(",")[1]
return f">={main_lib_version}" + (
f",{additional_lib_version}" if additional_lib_version else ""
)
return version

View File
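A quick check of the caret-conversion rule `parse_version` implements, with the function restated compactly so the snippet is self-contained. Note that dropping Poetry's `^` this way keeps only the lower bound, so the migrated specifier is slightly looser than Poetry's implicit next-major cap:

```python
def parse_version(version: str) -> str:
    """Convert a Poetry caret specifier to a PEP 440 lower bound."""
    if version.startswith("^"):
        parts = version[1:].split(",")
        lower = f">={parts[0]}"
        # Carry along an explicit upper bound if one was given after a comma.
        return lower + (f",{parts[1]}" if len(parts) > 1 else "")
    return version

assert parse_version("^0.70.1") == ">=0.70.1"
assert parse_version("^3.10,<=3.13") == ">=3.10,<=3.13"
assert parse_version(">=0.51.0,<1.0.0") == ">=0.51.0,<1.0.0"
```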

@@ -1,13 +1,14 @@
import importlib.metadata
import os
import shutil
import click
import sys
import importlib.metadata
from functools import reduce
from typing import Any, Dict, List
import click
from rich.console import Console
from crewai.cli.authentication.utils import TokenManager
from functools import reduce
from rich.console import Console
from typing import Any, Dict, List
if sys.version_info >= (3, 11):
import tomllib
@@ -55,17 +56,14 @@ def simple_toml_parser(content):
def parse_toml(content):
if sys.version_info >= (3, 11):
return tomllib.loads(content)
else:
return simple_toml_parser(content)
return simple_toml_parser(content)
def get_project_name(
pyproject_path: str = "pyproject.toml", require: bool = False
) -> str | None:
"""Get the project name from the pyproject.toml file."""
return _get_project_attribute(
pyproject_path, ["tool", "poetry", "name"], require=require
)
return _get_project_attribute(pyproject_path, ["project", "name"], require=require)
def get_project_version(
@@ -73,7 +71,7 @@ def get_project_version(
) -> str | None:
"""Get the project version from the pyproject.toml file."""
return _get_project_attribute(
pyproject_path, ["tool", "poetry", "version"], require=require
pyproject_path, ["project", "version"], require=require
)
@@ -82,7 +80,7 @@ def get_project_description(
) -> str | None:
"""Get the project description from the pyproject.toml file."""
return _get_project_attribute(
pyproject_path, ["tool", "poetry", "description"], require=require
pyproject_path, ["project", "description"], require=require
)
@@ -97,10 +95,9 @@ def _get_project_attribute(
pyproject_content = parse_toml(f.read())
dependencies = (
_get_nested_value(pyproject_content, ["tool", "poetry", "dependencies"])
or {}
_get_nested_value(pyproject_content, ["project", "dependencies"]) or []
)
if "crewai" not in dependencies:
if not any(True for dep in dependencies if "crewai" in dep):
raise Exception("crewai is not in the dependencies.")
attribute = _get_nested_value(pyproject_content, keys)

View File
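The helpers above now read `["project", ...]` paths, and the crewai-dependency guard changes shape accordingly: under PEP 621 `project.dependencies` is a list of PEP 508 requirement strings, not a Poetry table, so the check becomes a per-string scan. A minimal sketch:

```python
# PEP 621 style: dependencies are requirement strings, as in the migrated
# template above.
dependencies = ["crewai[tools]>=0.67.1,<1.0.0", "asyncio"]

# Equivalent to the diff's `any(True for dep in dependencies if "crewai" in dep)`.
if not any("crewai" in dep for dep in dependencies):
    raise Exception("crewai is not in the dependencies.")
```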

@@ -1,6 +1,5 @@
import asyncio
import json
import os
import uuid
import warnings
from concurrent.futures import Future
@@ -27,7 +26,6 @@ from crewai.llm import LLM
from crewai.memory.entity.entity_memory import EntityMemory
from crewai.memory.long_term.long_term_memory import LongTermMemory
from crewai.memory.short_term.short_term_memory import ShortTermMemory
from crewai.memory.user.user_memory import UserMemory
from crewai.process import Process
from crewai.task import Task
from crewai.tasks.conditional_task import ConditionalTask
@@ -49,12 +47,10 @@ from crewai.utilities.planning_handler import CrewPlanner
from crewai.utilities.task_output_storage_handler import TaskOutputStorageHandler
from crewai.utilities.training_handler import CrewTrainingHandler
agentops = None
if os.environ.get("AGENTOPS_API_KEY"):
try:
import agentops # type: ignore
except ImportError:
pass
try:
import agentops
except ImportError:
agentops = None
if TYPE_CHECKING:
from crewai.pipeline.pipeline import Pipeline
@@ -95,7 +91,6 @@ class Crew(BaseModel):
_short_term_memory: Optional[InstanceOf[ShortTermMemory]] = PrivateAttr()
_long_term_memory: Optional[InstanceOf[LongTermMemory]] = PrivateAttr()
_entity_memory: Optional[InstanceOf[EntityMemory]] = PrivateAttr()
_user_memory: Optional[InstanceOf[UserMemory]] = PrivateAttr()
_train: Optional[bool] = PrivateAttr(default=False)
_train_iteration: Optional[int] = PrivateAttr()
_inputs: Optional[Dict[str, Any]] = PrivateAttr(default=None)
@@ -116,10 +111,6 @@ class Crew(BaseModel):
default=False,
description="Whether the crew should use memory to store memories of it's execution",
)
memory_provider: Optional[str] = Field(
default=None,
description="The memory provider to be used for the crew.",
)
short_term_memory: Optional[InstanceOf[ShortTermMemory]] = Field(
default=None,
description="An Instance of the ShortTermMemory to be used by the Crew",
@@ -213,14 +204,6 @@ class Crew(BaseModel):
# TODO: Improve typing
return json.loads(v) if isinstance(v, Json) else v # type: ignore
@field_validator("memory_provider", mode="before")
@classmethod
def validate_memory_provider(cls, v: Optional[str]) -> Optional[str]:
"""Ensure memory provider is either None or 'mem0'."""
if v not in (None, "mem0"):
raise ValueError("Memory provider must be either None or 'mem0'.")
return v
@model_validator(mode="after")
def set_private_attrs(self) -> "Crew":
"""Set private attributes."""
@@ -252,23 +235,12 @@ class Crew(BaseModel):
self._short_term_memory = (
self.short_term_memory
if self.short_term_memory
else ShortTermMemory(
memory_provider=self.memory_provider,
crew=self,
embedder_config=self.embedder,
)
else ShortTermMemory(crew=self, embedder_config=self.embedder)
)
self._entity_memory = (
self.entity_memory
if self.entity_memory
else EntityMemory(
memory_provider=self.memory_provider,
crew=self,
embedder_config=self.embedder,
)
)
self._user_memory = (
UserMemory(crew=self) if self.memory_provider == "mem0" else None
else EntityMemory(crew=self, embedder_config=self.embedder)
)
return self
@@ -922,7 +894,6 @@ class Crew(BaseModel):
"_short_term_memory",
"_long_term_memory",
"_entity_memory",
"_user_memory",
"_telemetry",
"agents",
"tasks",

View File
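The crew.py hunk above also undoes the `AGENTOPS_API_KEY` environment-variable gate on the import: the module is now imported whenever it is installed, and every call site stays guarded. The pattern in isolation, with a hypothetical call site for illustration:

```python
# Optional-dependency import: attempt it unconditionally, fall back to None.
try:
    import agentops  # type: ignore
except ImportError:
    agentops = None

def maybe_track() -> None:
    # Hypothetical call site; every use must be guarded because the
    # dependency may be absent.
    if agentops is not None:
        agentops.init()
```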

@@ -1,6 +1,5 @@
from .entity.entity_memory import EntityMemory
from .long_term.long_term_memory import LongTermMemory
from .short_term.short_term_memory import ShortTermMemory
from .user.user_memory import UserMemory
__all__ = ["UserMemory", "EntityMemory", "LongTermMemory", "ShortTermMemory"]
__all__ = ["EntityMemory", "LongTermMemory", "ShortTermMemory"]

View File

@@ -1,22 +1,13 @@
from typing import Optional
from crewai.memory import EntityMemory, LongTermMemory, ShortTermMemory, UserMemory
from crewai.memory import EntityMemory, LongTermMemory, ShortTermMemory
class ContextualMemory:
def __init__(
self,
stm: ShortTermMemory,
ltm: LongTermMemory,
em: EntityMemory,
um: UserMemory,
memory_provider: Optional[str] = None, # Default value added
):
def __init__(self, stm: ShortTermMemory, ltm: LongTermMemory, em: EntityMemory):
self.stm = stm
self.ltm = ltm
self.em = em
self.um = um
self.memory_provider = memory_provider
def build_context_for_task(self, task, context) -> str:
"""
@@ -32,8 +23,6 @@ class ContextualMemory:
context.append(self._fetch_ltm_context(task.description))
context.append(self._fetch_stm_context(query))
context.append(self._fetch_entity_context(query))
if self.memory_provider == "mem0":
context.append(self._fetch_user_memories(query))
return "\n".join(filter(None, context))
def _fetch_stm_context(self, query) -> str:
@@ -71,22 +60,6 @@ class ContextualMemory:
"""
em_results = self.em.search(query)
formatted_results = "\n".join(
[
f"- {result['memory'] if self.memory_provider == 'mem0' else result['context']}"
for result in em_results
] # type: ignore # Invalid index type "str" for "str"; expected type "SupportsIndex | slice"
[f"- {result['context']}" for result in em_results] # type: ignore # Invalid index type "str" for "str"; expected type "SupportsIndex | slice"
)
return f"Entities:\n{formatted_results}" if em_results else ""
def _fetch_user_memories(self, query) -> str:
"""
Fetches relevant user memory information from User Memory related to the task's description and expected_output,
"""
print("query", query)
um_results = self.um.search(query)
print("um_results", um_results)
formatted_results = "\n".join(
[f"- {result['memory']}" for result in um_results]
)
print(f"User memories/preferences:\n{formatted_results}")
return f"User memories/preferences:\n{formatted_results}" if um_results else ""

View File
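With `UserMemory` and the `memory_provider` plumbing removed, `ContextualMemory` is built from exactly three stores. A minimal construction sketch, using the import paths shown in the diffs:

```python
from crewai.memory import EntityMemory, LongTermMemory, ShortTermMemory
from crewai.memory.contextual.contextual_memory import ContextualMemory

# The slimmed-down constructor: short-term, long-term, and entity memory only.
memory = ContextualMemory(
    stm=ShortTermMemory(),
    ltm=LongTermMemory(),
    em=EntityMemory(),
)
# Building task context then concatenates the non-empty sections, e.g.:
# memory.build_context_for_task(task, context="")  # task: a crewai Task
```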

@@ -1,7 +1,6 @@
from crewai.memory.entity.entity_memory_item import EntityMemoryItem
from crewai.memory.memory import Memory
from crewai.memory.storage.rag_storage import RAGStorage
from crewai.memory.storage.mem0_storage import Mem0Storage
class EntityMemory(Memory):
@@ -11,39 +10,22 @@ class EntityMemory(Memory):
Inherits from the Memory class.
"""
def __init__(
self, memory_provider=None, crew=None, embedder_config=None, storage=None
):
self.memory_provider = memory_provider
if self.memory_provider == "mem0":
storage = Mem0Storage(
def __init__(self, crew=None, embedder_config=None, storage=None):
storage = (
storage
if storage
else RAGStorage(
type="entities",
allow_reset=False,
embedder_config=embedder_config,
crew=crew,
)
else:
storage = (
storage
if storage
else RAGStorage(
type="entities",
allow_reset=False,
embedder_config=embedder_config,
crew=crew,
)
)
)
super().__init__(storage)
def save(self, item: EntityMemoryItem) -> None: # type: ignore # BUG?: Signature of "save" incompatible with supertype "Memory"
"""Saves an entity item into the SQLite storage."""
if self.memory_provider == "mem0":
data = f"""
Remember details about the following entity:
Name: {item.name}
Type: {item.type}
Entity Description: {item.description}
"""
else:
data = f"{item.name}({item.type}): {item.description}"
data = f"{item.name}({item.type}): {item.description}"
super().save(data, item.metadata)
def reset(self) -> None:

View File

@@ -1,4 +1,4 @@
from typing import Any, Dict, List
from typing import Any, Dict
from crewai.memory.long_term.long_term_memory_item import LongTermMemoryItem
from crewai.memory.memory import Memory
@@ -18,25 +18,18 @@ class LongTermMemory(Memory):
storage = storage if storage else LTMSQLiteStorage()
super().__init__(storage)
def save(self, item: LongTermMemoryItem) -> None:
metadata = item.metadata.copy() # Create a copy to avoid modifying the original
metadata.update(
{
"agent": item.agent,
"expected_output": item.expected_output,
"quality": item.quality, # Add quality to metadata
}
)
def save(self, item: LongTermMemoryItem) -> None: # type: ignore # BUG?: Signature of "save" incompatible with supertype "Memory"
metadata = item.metadata
metadata.update({"agent": item.agent, "expected_output": item.expected_output})
self.storage.save( # type: ignore # BUG?: Unexpected keyword argument "task_description","score","datetime" for "save" of "Storage"
task_description=item.task,
score=item.quality,
score=metadata["quality"],
metadata=metadata,
datetime=item.datetime,
)
def search(self, task: str, latest_n: int = 3) -> List[Dict[str, Any]]:
results = self.storage.load(task, latest_n)
return results
def search(self, task: str, latest_n: int = 3) -> Dict[str, Any]:
return self.storage.load(task, latest_n) # type: ignore # BUG?: "Storage" has no attribute "load"
def reset(self) -> None:
self.storage.reset()

View File
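After this change `save()` reads the score out of the item's metadata (`score=metadata["quality"]`) and no longer copies the dict, so callers must supply `"quality"` in `metadata` themselves — exactly what the rewritten test further down does. A minimal sketch with hypothetical values:

```python
from crewai.memory.long_term.long_term_memory import LongTermMemory
from crewai.memory.long_term.long_term_memory_item import LongTermMemoryItem

ltm = LongTermMemory()  # defaults to LTMSQLiteStorage
ltm.save(
    LongTermMemoryItem(
        agent="researcher",
        task="summarize findings",
        expected_output="a short report",
        datetime="2024-10-16 12:00:00",
        quality=0.8,
        metadata={"quality": 0.8},  # save() now reads the score from here
    )
)
print(ltm.search("summarize findings", latest_n=3))
```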

@@ -23,13 +23,5 @@ class Memory:
self.storage.save(value, metadata)
def search(
self,
query: str,
limit: int = 3,
filters: dict = {},
score_threshold: float = 0.35,
) -> Dict[str, Any]:
return self.storage.search(
query=query, limit=limit, filters=filters, score_threshold=score_threshold
)
def search(self, query: str) -> Dict[str, Any]:
return self.storage.search(query)

View File

@@ -2,7 +2,6 @@ from typing import Any, Dict, Optional
from crewai.memory.memory import Memory
from crewai.memory.short_term.short_term_memory_item import ShortTermMemoryItem
from crewai.memory.storage.rag_storage import RAGStorage
from crewai.memory.storage.mem0_storage import Mem0Storage
class ShortTermMemory(Memory):
@@ -14,20 +13,14 @@ class ShortTermMemory(Memory):
MemoryItem instances.
"""
def __init__(
self, memory_provider=None, crew=None, embedder_config=None, storage=None
):
self.memory_provider = memory_provider
if self.memory_provider == "mem0":
storage = Mem0Storage(type="short_term", crew=crew)
else:
storage = (
storage
if storage
else RAGStorage(
type="short_term", embedder_config=embedder_config, crew=crew
)
def __init__(self, crew=None, embedder_config=None, storage=None):
storage = (
storage
if storage
else RAGStorage(
type="short_term", embedder_config=embedder_config, crew=crew
)
)
super().__init__(storage)
def save(
@@ -37,21 +30,11 @@ class ShortTermMemory(Memory):
agent: Optional[str] = None,
) -> None:
item = ShortTermMemoryItem(data=value, metadata=metadata, agent=agent)
if self.memory_provider == "mem0":
item.data = f"Remember the following insights from Agent run: {item.data}"
super().save(value=item.data, metadata=item.metadata, agent=item.agent)
def search(
self,
query: str,
limit: int = 3,
filters: dict = {},
score_threshold: float = 0.35,
):
return self.storage.search(
query=query, limit=limit, filters=filters, score_threshold=score_threshold
) # type: ignore # BUG? The reference is to the parent class, but the parent class does not have this parameters
def search(self, query: str, score_threshold: float = 0.35):
return self.storage.search(query=query, score_threshold=score_threshold) # type: ignore # BUG? The reference is to the parent class, but the parent class does not have this parameters
def reset(self) -> None:
try:

View File

@@ -7,10 +7,8 @@ class Storage:
def save(self, value: Any, metadata: Dict[str, Any]) -> None:
pass
def search(
self, query: str, limit: int, filters: Dict, score_threshold: float
) -> Dict[str, Any]: # type: ignore
return {}
def search(self, key: str) -> Dict[str, Any]: # type: ignore
pass
def reset(self) -> None:
pass

View File

@@ -1,45 +0,0 @@
import os
from typing import Any, Dict, List, Optional
from crewai.memory.storage.interface import Storage
from mem0 import MemoryClient
class Mem0Storage(Storage):
"""
Extends Storage to handle embedding and searching across entities using Mem0.
"""
def __init__(self, type, crew=None):
super().__init__()
if (
not os.getenv("OPENAI_API_KEY")
and not os.getenv("OPENAI_BASE_URL") == "https://api.openai.com/v1"
):
os.environ["OPENAI_API_KEY"] = "fake"
if not os.getenv("MEM0_API_KEY"):
raise EnvironmentError("MEM0_API_KEY is not set.")
agents = crew.agents if crew else []
agents = [agent.role for agent in agents]
agents = "_".join(agents)
self.app_id = agents
self.memory = MemoryClient(api_key=os.getenv("MEM0_API_KEY"))
def save(self, value: Any, metadata: Dict[str, Any]) -> None:
self.memory.add(value, metadata=metadata, app_id=self.app_id)
def search(
self,
query: str,
limit: int = 3,
filters: Optional[dict] = None,
score_threshold: float = 0.35,
) -> List[Any]:
params = {"query": query, "limit": limit, "app_id": self.app_id}
if filters:
params["filters"] = filters
results = self.memory.search(**params)
return [r for r in results if float(r["score"]) >= score_threshold]

View File

@@ -95,7 +95,7 @@ class RAGStorage(Storage):
self,
query: str,
limit: int = 3,
filters: Optional[dict] = None,
filter: Optional[dict] = None,
score_threshold: float = 0.35,
) -> List[Any]:
if not hasattr(self, "app"):
@@ -105,8 +105,8 @@ class RAGStorage(Storage):
with suppress_logging():
try:
results = (
self.app.search(query, limit, where=filters)
if filters
self.app.search(query, limit, where=filter)
if filter
else self.app.search(query, limit)
)
except InvalidDimensionException:

View File

@@ -1,43 +0,0 @@
from typing import Any, Dict, Optional
from crewai.memory.memory import Memory
from crewai.memory.storage.mem0_storage import Mem0Storage
class UserMemory(Memory):
"""
UserMemory class for handling user memory storage and retrieval.
Inherits from the Memory class and utilizes an instance of a class that
adheres to the Storage for data storage, specifically working with
MemoryItem instances.
"""
def __init__(self, crew=None):
storage = Mem0Storage(type="user", crew=crew)
super().__init__(storage)
def save(
self,
value,
metadata: Optional[Dict[str, Any]] = None,
agent: Optional[str] = None,
) -> None:
data = f"Remember the details about the user: {value}"
super().save(data, metadata)
def search(
self,
query: str,
limit: int = 3,
filters: dict = {},
score_threshold: float = 0.35,
):
print("SEARCHING USER MEMORY", query, limit, filters, score_threshold)
result = super().search(
query=query,
limit=limit,
filters=filters,
score_threshold=score_threshold,
)
print("USER MEMORY SEARCH RESULT:", result)
return result

View File

@@ -1,8 +0,0 @@
from typing import Any, Dict, Optional
class UserMemoryItem:
def __init__(self, data: Any, user: str, metadata: Optional[Dict[str, Any]] = None):
self.data = data
self.user = user
self.metadata = metadata if metadata is not None else {}

View File

@@ -1,26 +1,22 @@
import ast
import datetime
import os
import time
from difflib import SequenceMatcher
from textwrap import dedent
from typing import Any, List, Union
import crewai.utilities.events as events
from crewai.agents.tools_handler import ToolsHandler
from crewai.task import Task
from crewai.telemetry import Telemetry
from crewai.tools.tool_calling import InstructorToolCalling, ToolCalling
from crewai.tools.tool_usage_events import ToolUsageError, ToolUsageFinished
from crewai.utilities import I18N, Converter, ConverterError, Printer
import crewai.utilities.events as events
agentops = None
if os.environ.get("AGENTOPS_API_KEY"):
try:
import agentops # type: ignore
except ImportError:
pass
try:
import agentops
except ImportError:
agentops = None
OPENAI_BIGGER_MODELS = ["gpt-4", "gpt-4o", "o1-preview", "o1-mini"]

View File

@@ -20,7 +20,8 @@
"getting_input": "This is the agent's final answer: {final_answer}\n\n",
"summarizer_system_message": "You are a helpful assistant that summarizes text.",
"sumamrize_instruction": "Summarize the following text, make sure to include all the important information: {group}",
"summary": "This is a summary of our conversation so far:\n{merged_summary}"
"summary": "This is a summary of our conversation so far:\n{merged_summary}",
"manager_request": "Your best answer to your coworker asking you this, accounting for the context shared."
},
"errors": {
"force_final_answer_error": "You can't keep going, this was the best you could do.\n {formatted_answer.text}",

View File

@@ -1,4 +1,3 @@
import os
from typing import List
from pydantic import BaseModel, Field
@@ -7,26 +6,16 @@ from crewai.utilities import Converter
from crewai.utilities.pydantic_schema_parser import PydanticSchemaParser
def mock_agent_ops_provider():
def track_agent(*args, **kwargs):
agentops = None
try:
from agentops import track_agent
except ImportError:
def track_agent(name):
def noop(f):
return f
return noop
return track_agent
agentops = None
if os.environ.get("AGENTOPS_API_KEY"):
try:
from agentops import track_agent
except ImportError:
track_agent = mock_agent_ops_provider()
else:
track_agent = mock_agent_ops_provider()
class Entity(BaseModel):
name: str = Field(description="The name of the entity.")

View File

@@ -1,21 +1,21 @@
"""Test Agent creation and execution basic functionality."""
import os
from unittest import mock
from unittest.mock import patch
import os
import pytest
from crewai_tools import tool
from crewai import Agent, Crew, Task
from crewai.agents.cache import CacheHandler
from crewai.agents.crew_agent_executor import CrewAgentExecutor
from crewai.agents.parser import AgentAction, CrewAgentParser, OutputParserException
from crewai.llm import LLM
from crewai.agents.parser import CrewAgentParser, OutputParserException
from crewai.tools.tool_calling import InstructorToolCalling
from crewai.tools.tool_usage import ToolUsage
from crewai.tools.tool_usage_events import ToolUsageFinished
from crewai.utilities import RPMController
from crewai_tools import tool
from crewai.agents.parser import AgentAction
from crewai.utilities.events import Emitter
@@ -73,7 +73,7 @@ def test_agent_creation():
def test_agent_default_values():
agent = Agent(role="test role", goal="test goal", backstory="test backstory")
assert agent.llm.model == "gpt-4o"
assert agent.llm.model == "gpt-4o-mini"
assert agent.allow_delegation is False
@@ -116,6 +116,7 @@ def test_custom_llm_temperature_preservation():
@pytest.mark.vcr(filter_headers=["authorization"])
def test_agent_execute_task():
from langchain_openai import ChatOpenAI
from crewai import Task
agent = Agent(
@@ -206,7 +207,7 @@ def test_logging_tool_usage():
verbose=True,
)
assert agent.llm.model == "gpt-4o"
assert agent.llm.model == "gpt-4o-mini"
assert agent.tools_handler.last_used_tool == {}
task = Task(
description="What is 3 times 4?",
@@ -602,6 +603,7 @@ def test_agent_respect_the_max_rpm_set(capsys):
@pytest.mark.vcr(filter_headers=["authorization"])
def test_agent_respect_the_max_rpm_set_over_crew_rpm(capsys):
from unittest.mock import patch
from crewai_tools import tool
@tool
@@ -693,6 +695,7 @@ def test_agent_without_max_rpm_respet_crew_rpm(capsys):
@pytest.mark.vcr(filter_headers=["authorization"])
def test_agent_error_on_parsing_tool(capsys):
from unittest.mock import patch
from crewai_tools import tool
@tool
@@ -855,7 +858,9 @@ def test_agent_function_calling_llm():
tasks = [essay]
crew = Crew(agents=[agent1], tasks=tasks)
from unittest.mock import patch
import instructor
from crewai.tools.tool_usage import ToolUsage
with patch.object(

View File

@@ -1,12 +1,12 @@
import pytest
import requests
import sys
import unittest
from io import StringIO
from requests.exceptions import JSONDecodeError
from unittest.mock import MagicMock, Mock, patch
import pytest
import requests
from requests.exceptions import JSONDecodeError
from crewai.cli.deploy.main import DeployCommand
from crewai.cli.utils import parse_toml
@@ -228,13 +228,11 @@ class TestDeployCommand(unittest.TestCase):
"builtins.open",
new_callable=unittest.mock.mock_open,
read_data="""
[tool.poetry]
[project]
name = "test_project"
version = "0.1.0"
[tool.poetry.dependencies]
python = "^3.10"
crewai = { extras = ["tools"], version = ">=0.51.0,<1.0.0" }
requires-python = ">=3.10,<=3.13"
dependencies = ["crewai"]
""",
)
def test_get_project_name_python_310(self, mock_open):
@@ -248,13 +246,11 @@ class TestDeployCommand(unittest.TestCase):
"builtins.open",
new_callable=unittest.mock.mock_open,
read_data="""
[tool.poetry]
[project]
name = "test_project"
version = "0.1.0"
[tool.poetry.dependencies]
python = "^3.11"
crewai = { extras = ["tools"], version = ">=0.51.0,<1.0.0" }
requires-python = ">=3.10,<=3.13"
dependencies = ["crewai"]
""",
)
def test_get_project_name_python_311_plus(self, mock_open):

View File

@@ -18,12 +18,12 @@ from crewai.cli import evaluate_crew
def test_crew_success(mock_subprocess_run, n_iterations, model):
"""Test the crew function for successful execution."""
mock_subprocess_run.return_value = subprocess.CompletedProcess(
args=f"poetry run test {n_iterations} {model}", returncode=0
args=f"uv run test {n_iterations} {model}", returncode=0
)
result = evaluate_crew.evaluate_crew(n_iterations, model)
mock_subprocess_run.assert_called_once_with(
["poetry", "run", "test", str(n_iterations), model],
["uv", "run", "test", str(n_iterations), model],
capture_output=False,
text=True,
check=True,
@@ -55,14 +55,14 @@ def test_test_crew_called_process_error(mock_subprocess_run, click):
n_iterations = 5
mock_subprocess_run.side_effect = subprocess.CalledProcessError(
returncode=1,
cmd=["poetry", "run", "test", str(n_iterations), "gpt-4o"],
cmd=["uv", "run", "test", str(n_iterations), "gpt-4o"],
output="Error",
stderr="Some error occurred",
)
evaluate_crew.evaluate_crew(n_iterations, "gpt-4o")
mock_subprocess_run.assert_called_once_with(
["poetry", "run", "test", "5", "gpt-4o"],
["uv", "run", "test", "5", "gpt-4o"],
capture_output=False,
text=True,
check=True,
@@ -70,7 +70,7 @@ def test_test_crew_called_process_error(mock_subprocess_run, click):
click.echo.assert_has_calls(
[
mock.call.echo(
"An error occurred while testing the crew: Command '['poetry', 'run', 'test', '5', 'gpt-4o']' returned non-zero exit status 1.",
"An error occurred while testing the crew: Command '['uv', 'run', 'test', '5', 'gpt-4o']' returned non-zero exit status 1.",
err=True,
),
mock.call.echo("Error", err=True),
@@ -87,7 +87,7 @@ def test_test_crew_unexpected_exception(mock_subprocess_run, click):
evaluate_crew.evaluate_crew(n_iterations, "gpt-4o")
mock_subprocess_run.assert_called_once_with(
["poetry", "run", "test", "5", "gpt-4o"],
["uv", "run", "test", "5", "gpt-4o"],
capture_output=False,
text=True,
check=True,

View File

@@ -92,16 +92,20 @@ class TestPlusAPI(unittest.TestCase):
)
self.assertEqual(response, mock_response)
@patch("crewai.cli.plus_api.requests.request")
def test_make_request(self, mock_request):
@patch("crewai.cli.plus_api.requests.Session")
def test_make_request(self, mock_session):
mock_response = MagicMock()
mock_request.return_value = mock_response
mock_session_instance = mock_session.return_value
mock_session_instance.request.return_value = mock_response
response = self.api._make_request("GET", "test_endpoint")
mock_request.assert_called_once_with(
mock_session.assert_called_once()
mock_session_instance.request.assert_called_once_with(
"GET", f"{self.api.base_url}/test_endpoint", headers=self.api.headers
)
mock_session_instance.trust_env = False
self.assertEqual(response, mock_response)
@patch("crewai.cli.plus_api.PlusAPI._make_request")

View File
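The updated test pins down a behavioural change in the Plus API client: requests now go through a `requests.Session` with `trust_env = False`, so proxy and `.netrc` settings from the environment are ignored. The shape of the change, as a sketch with a hypothetical endpoint:

```python
import requests

session = requests.Session()
session.trust_env = False  # ignore HTTP(S)_PROXY, REQUESTS_CA_BUNDLE, ~/.netrc

# Hypothetical call mirroring _make_request's shape:
# response = session.request("GET", "https://app.crewai.com/test_endpoint", headers={})
```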

@@ -1,13 +1,16 @@
import os
import tempfile
import unittest
import unittest.mock
import os
from contextlib import contextmanager
from io import StringIO
from unittest import mock
from unittest.mock import MagicMock, patch
from pytest import raises
from crewai.cli.tools.main import ToolCommand
from io import StringIO
from unittest.mock import patch, MagicMock
@contextmanager
def in_temp_dir():
@@ -19,6 +22,7 @@ def in_temp_dir():
finally:
os.chdir(original_dir)
@patch("crewai.cli.tools.main.subprocess.run")
def test_create_success(mock_subprocess):
with in_temp_dir():
@@ -38,9 +42,7 @@ def test_create_success(mock_subprocess):
)
assert os.path.isfile(os.path.join("test_tool", "src", "test_tool", "tool.py"))
with open(
os.path.join("test_tool", "src", "test_tool", "tool.py"), "r"
) as f:
with open(os.path.join("test_tool", "src", "test_tool", "tool.py"), "r") as f:
content = f.read()
assert "class TestTool" in content
@@ -49,6 +51,7 @@ def test_create_success(mock_subprocess):
assert "Creating custom tool test_tool..." in output
@patch("crewai.cli.tools.main.subprocess.run")
@patch("crewai.cli.plus_api.PlusAPI.get_tool")
def test_install_success(mock_get, mock_subprocess_run):
@@ -67,9 +70,15 @@ def test_install_success(mock_get, mock_subprocess_run):
tool_command.install("sample-tool")
output = fake_out.getvalue()
mock_get.assert_called_once_with("sample-tool")
mock_get.assert_has_calls([mock.call("sample-tool"), mock.call().json()])
mock_subprocess_run.assert_any_call(
["poetry", "add", "--source", "crewai-sample-repo", "sample-tool"],
[
"uv",
"add",
"--extra-index-url",
"https://app.crewai.com/pypi/sample-repo",
"sample-tool",
],
capture_output=False,
text=True,
check=True,
@@ -77,6 +86,7 @@ def test_install_success(mock_get, mock_subprocess_run):
assert "Succesfully installed sample-tool" in output
@patch("crewai.cli.plus_api.PlusAPI.get_tool")
def test_install_tool_not_found(mock_get):
mock_get_response = MagicMock()
@@ -95,6 +105,7 @@ def test_install_tool_not_found(mock_get):
mock_get.assert_called_once_with("non-existent-tool")
assert "No tool found with this name" in output
@patch("crewai.cli.plus_api.PlusAPI.get_tool")
def test_install_api_error(mock_get):
mock_get_response = MagicMock()
@@ -113,15 +124,16 @@ def test_install_api_error(mock_get):
mock_get.assert_called_once_with("error-tool")
assert "Failed to get tool details" in output
@patch("crewai.cli.tools.main.git.Repository.is_synced", return_value=False)
def test_publish_when_not_in_sync(mock_is_synced):
with patch("sys.stdout", new=StringIO()) as fake_out, \
raises(SystemExit):
with patch("sys.stdout", new=StringIO()) as fake_out, raises(SystemExit):
tool_command = ToolCommand()
tool_command.publish(is_public=True)
assert "Local changes need to be resolved before publishing" in fake_out.getvalue()
@patch("crewai.cli.tools.main.get_project_name", return_value="sample-tool")
@patch("crewai.cli.tools.main.get_project_version", return_value="1.0.0")
@patch("crewai.cli.tools.main.get_project_description", return_value="A sample tool")
@@ -156,7 +168,7 @@ def test_publish_when_not_in_sync_and_force(
mock_get_project_version.assert_called_with(require=True)
mock_get_project_description.assert_called_with(require=False)
mock_subprocess_run.assert_called_with(
["poetry", "build", "-f", "sdist", "--output", unittest.mock.ANY],
["uv", "build", "--sdist", "--out-dir", unittest.mock.ANY],
check=True,
capture_output=False,
)
@@ -169,6 +181,7 @@ def test_publish_when_not_in_sync_and_force(
encoded_file=unittest.mock.ANY,
)
@patch("crewai.cli.tools.main.get_project_name", return_value="sample-tool")
@patch("crewai.cli.tools.main.get_project_version", return_value="1.0.0")
@patch("crewai.cli.tools.main.get_project_description", return_value="A sample tool")
@@ -203,7 +216,7 @@ def test_publish_success(
mock_get_project_version.assert_called_with(require=True)
mock_get_project_description.assert_called_with(require=False)
mock_subprocess_run.assert_called_with(
["poetry", "build", "-f", "sdist", "--output", unittest.mock.ANY],
["uv", "build", "--sdist", "--out-dir", unittest.mock.ANY],
check=True,
capture_output=False,
)
@@ -216,6 +229,7 @@ def test_publish_success(
encoded_file=unittest.mock.ANY,
)
@patch("crewai.cli.tools.main.get_project_name", return_value="sample-tool")
@patch("crewai.cli.tools.main.get_project_version", return_value="1.0.0")
@patch("crewai.cli.tools.main.get_project_description", return_value="A sample tool")
@@ -254,6 +268,7 @@ def test_publish_failure(
assert "Failed to complete operation" in output
assert "Name is already taken" in output
@patch("crewai.cli.tools.main.get_project_name", return_value="sample-tool")
@patch("crewai.cli.tools.main.get_project_version", return_value="1.0.0")
@patch("crewai.cli.tools.main.get_project_description", return_value="A sample tool")
@@ -291,54 +306,3 @@ def test_publish_api_error(
mock_publish.assert_called_once()
assert "Request to Enterprise API failed" in output
@patch("crewai.cli.plus_api.PlusAPI.login_to_tool_repository")
@patch("crewai.cli.tools.main.subprocess.run")
def test_login_success(mock_subprocess_run, mock_login):
mock_login_response = MagicMock()
mock_login_response.status_code = 200
mock_login_response.json.return_value = {
"repositories": [
{
"handle": "tools",
"url": "https://example.com/repo",
}
],
"credential": {"username": "user", "password": "pass"},
}
mock_login.return_value = mock_login_response
mock_subprocess_run.return_value = MagicMock(stderr=None)
tool_command = ToolCommand()
with patch("sys.stdout", new=StringIO()) as fake_out:
tool_command.login()
output = fake_out.getvalue()
mock_login.assert_called_once()
mock_subprocess_run.assert_any_call(
[
"poetry",
"source",
"add",
"--priority=explicit",
"crewai-tools",
"https://example.com/repo",
],
text=True,
check=True,
)
mock_subprocess_run.assert_any_call(
[
"poetry",
"config",
"http-basic.crewai-tools",
"user",
"pass",
],
capture_output=False,
text=True,
check=True,
)
assert "Succesfully authenticated to the tool repository" in output

View File

@@ -8,7 +8,7 @@ from crewai.cli.train_crew import train_crew
def test_train_crew_positive_iterations(mock_subprocess_run):
n_iterations = 5
mock_subprocess_run.return_value = subprocess.CompletedProcess(
args=["poetry", "run", "train", str(n_iterations)],
args=["uv", "run", "train", str(n_iterations)],
returncode=0,
stdout="Success",
stderr="",
@@ -17,7 +17,7 @@ def test_train_crew_positive_iterations(mock_subprocess_run):
train_crew(n_iterations, "trained_agents_data.pkl")
mock_subprocess_run.assert_called_once_with(
["poetry", "run", "train", str(n_iterations), "trained_agents_data.pkl"],
["uv", "run", "train", str(n_iterations), "trained_agents_data.pkl"],
capture_output=False,
text=True,
check=True,
@@ -48,14 +48,14 @@ def test_train_crew_called_process_error(mock_subprocess_run, click):
n_iterations = 5
mock_subprocess_run.side_effect = subprocess.CalledProcessError(
returncode=1,
cmd=["poetry", "run", "train", str(n_iterations)],
cmd=["uv", "run", "train", str(n_iterations)],
output="Error",
stderr="Some error occurred",
)
train_crew(n_iterations, "trained_agents_data.pkl")
mock_subprocess_run.assert_called_once_with(
["poetry", "run", "train", str(n_iterations), "trained_agents_data.pkl"],
["uv", "run", "train", str(n_iterations), "trained_agents_data.pkl"],
capture_output=False,
text=True,
check=True,
@@ -63,7 +63,7 @@ def test_train_crew_called_process_error(mock_subprocess_run, click):
click.echo.assert_has_calls(
[
mock.call.echo(
"An error occurred while training the crew: Command '['poetry', 'run', 'train', '5']' returned non-zero exit status 1.",
"An error occurred while training the crew: Command '['uv', 'run', 'train', '5']' returned non-zero exit status 1.",
err=True,
),
mock.call.echo("Error", err=True),
@@ -79,7 +79,7 @@ def test_train_crew_unexpected_exception(mock_subprocess_run, click):
train_crew(n_iterations, "trained_agents_data.pkl")
mock_subprocess_run.assert_called_once_with(
["poetry", "run", "train", str(n_iterations), "trained_agents_data.pkl"],
["uv", "run", "train", str(n_iterations), "trained_agents_data.pkl"],
capture_output=False,
text=True,
check=True,

View File

@@ -23,7 +23,6 @@ from crewai.types.usage_metrics import UsageMetrics
from crewai.utilities import Logger
from crewai.utilities.rpm_controller import RPMController
from crewai.utilities.task_output_storage_handler import TaskOutputStorageHandler
from pydantic_core import ValidationError
ceo = Agent(
role="CEO",
@@ -174,57 +173,6 @@ def test_context_no_future_tasks():
Crew(tasks=[task1, task2, task3, task4], agents=[researcher, writer])
def test_memory_provider_validation():
# Create mock agents
agent1 = Agent(
role="Researcher",
goal="Conduct research on AI",
backstory="An experienced AI researcher",
allow_delegation=False,
)
agent2 = Agent(
role="Writer",
goal="Write articles on AI",
backstory="A seasoned writer with a focus on technology",
allow_delegation=False,
)
# Create mock tasks
task1 = Task(
description="Research the latest trends in AI",
expected_output="A report on AI trends",
agent=agent1,
)
task2 = Task(
description="Write an article based on the research",
expected_output="An article on AI trends",
agent=agent2,
)
# Test with valid memory provider values
try:
crew_with_none = Crew(
agents=[agent1, agent2], tasks=[task1, task2], memory_provider=None
)
crew_with_mem0 = Crew(
agents=[agent1, agent2], tasks=[task1, task2], memory_provider="mem0"
)
except ValidationError:
pytest.fail(
"Unexpected ValidationError raised for valid memory provider values"
)
# Test with an invalid memory provider value
with pytest.raises(ValidationError) as excinfo:
Crew(
agents=[agent1, agent2],
tasks=[task1, task2],
memory_provider="invalid_provider",
)
assert "Memory provider must be either None or 'mem0'." in str(excinfo.value)
def test_crew_config_with_wrong_keys():
no_tasks_config = json.dumps(
{
@@ -549,7 +497,6 @@ def test_cache_hitting_between_agents():
@pytest.mark.vcr(filter_headers=["authorization"])
def test_api_calls_throttling(capsys):
from unittest.mock import patch
from crewai_tools import tool
@tool
@@ -1158,7 +1105,6 @@ def test_dont_set_agents_step_callback_if_already_set():
@pytest.mark.vcr(filter_headers=["authorization"])
def test_crew_function_calling_llm():
from unittest.mock import patch
from crewai_tools import tool
llm = "gpt-4o"

View File

@@ -1,270 +0,0 @@
interactions:
- request:
body: ''
headers:
accept:
- '*/*'
accept-encoding:
- gzip, deflate
connection:
- keep-alive
host:
- api.mem0.ai
user-agent:
- python-httpx/0.27.0
method: GET
uri: https://api.mem0.ai/v1/memories/?user_id=test
response:
body:
string: '[]'
headers:
CF-Cache-Status:
- DYNAMIC
CF-RAY:
- 8b477138bad847b9-BOM
Connection:
- keep-alive
Content-Length:
- '2'
Content-Type:
- application/json
Date:
- Sat, 17 Aug 2024 06:00:11 GMT
NEL:
- '{"success_fraction":0,"report_to":"cf-nel","max_age":604800}'
Report-To:
- '{"endpoints":[{"url":"https:\/\/a.nel.cloudflare.com\/report\/v4?s=uuyH2foMJVDpV%2FH52g1q%2FnvXKe3dBKVzvsK0mqmSNezkiszNR9OgrEJfVqmkX%2FlPFRP2sH4zrOuzGo6k%2FjzsjYJczqSWJUZHN2pPujiwnr1E9W%2BdLGKmG6%2FqPrGYAy2SBRWkkJVWsTO3OQ%3D%3D"}],"group":"cf-nel","max_age":604800}'
Server:
- cloudflare
allow:
- GET, POST, DELETE, OPTIONS
alt-svc:
- h3=":443"; ma=86400
cross-origin-opener-policy:
- same-origin
referrer-policy:
- same-origin
vary:
- Accept, origin, Cookie
x-content-type-options:
- nosniff
x-frame-options:
- DENY
status:
code: 200
message: OK
- request:
body: '{"batch": [{"properties": {"python_version": "3.12.4 (v3.12.4:8e8a4baf65,
Jun 6 2024, 17:33:18) [Clang 13.0.0 (clang-1300.0.29.30)]", "os": "darwin",
"os_version": "Darwin Kernel Version 23.4.0: Wed Feb 21 21:44:54 PST 2024; root:xnu-10063.101.15~2/RELEASE_ARM64_T6030",
"os_release": "23.4.0", "processor": "arm", "machine": "arm64", "function":
"mem0.client.main.MemoryClient", "$lib": "posthog-python", "$lib_version": "3.5.0",
"$geoip_disable": true}, "timestamp": "2024-08-17T06:00:11.526640+00:00", "context":
{}, "distinct_id": "fd411bd3-99a2-42d6-acd7-9fca8ad09580", "event": "client.init"}],
"historical_migration": false, "sentAt": "2024-08-17T06:00:11.701621+00:00",
"api_key": "phc_hgJkUVJFYtmaJqrvf6CYN67TIQ8yhXAkWzUn9AMU4yX"}'
headers:
Accept:
- '*/*'
Accept-Encoding:
- gzip, deflate
Connection:
- keep-alive
Content-Length:
- '740'
Content-Type:
- application/json
User-Agent:
- posthog-python/3.5.0
method: POST
uri: https://us.i.posthog.com/batch/
response:
body:
string: '{"status":"Ok"}'
headers:
Connection:
- keep-alive
Content-Length:
- '15'
Content-Type:
- application/json
Date:
- Sat, 17 Aug 2024 06:00:12 GMT
access-control-allow-credentials:
- 'true'
server:
- envoy
vary:
- origin, access-control-request-method, access-control-request-headers
x-envoy-upstream-service-time:
- '69'
status:
code: 200
message: OK
- request:
body: '{"messages": [{"role": "user", "content": "Remember the following insights
from Agent run: test value with provider"}], "metadata": {"task": "test_task_provider",
"agent": "test_agent_provider"}, "app_id": "Researcher"}'
headers:
accept:
- '*/*'
accept-encoding:
- gzip, deflate
connection:
- keep-alive
content-length:
- '219'
content-type:
- application/json
host:
- api.mem0.ai
user-agent:
- python-httpx/0.27.0
method: POST
uri: https://api.mem0.ai/v1/memories/
response:
body:
string: '{"message":"ok"}'
headers:
CF-Cache-Status:
- DYNAMIC
CF-RAY:
- 8b477140282547b9-BOM
Connection:
- keep-alive
Content-Length:
- '16'
Content-Type:
- application/json
Date:
- Sat, 17 Aug 2024 06:00:13 GMT
NEL:
- '{"success_fraction":0,"report_to":"cf-nel","max_age":604800}'
Report-To:
- '{"endpoints":[{"url":"https:\/\/a.nel.cloudflare.com\/report\/v4?s=FRjJKSk3YxVj03wA7S05H8ts35KnWfqS3wb6Rfy4kVZ4BgXfw7nJbm92wI6vEv5fWcAcHVnOlkJDggs11B01BMuB2k3a9RqlBi0dJNiMuk%2Bgm5xE%2BODMPWJctYNRwQMjNVbteUpS%2Fad8YA%3D%3D"}],"group":"cf-nel","max_age":604800}'
Server:
- cloudflare
allow:
- GET, POST, DELETE, OPTIONS
alt-svc:
- h3=":443"; ma=86400
cross-origin-opener-policy:
- same-origin
referrer-policy:
- same-origin
vary:
- Accept, origin, Cookie
x-content-type-options:
- nosniff
x-frame-options:
- DENY
status:
code: 200
message: OK
- request:
body: '{"query": "test value with provider", "limit": 3, "app_id": "Researcher"}'
headers:
accept:
- '*/*'
accept-encoding:
- gzip, deflate
connection:
- keep-alive
content-length:
- '73'
content-type:
- application/json
host:
- api.mem0.ai
user-agent:
- python-httpx/0.27.0
method: POST
uri: https://api.mem0.ai/v1/memories/search/
response:
body:
string: '[]'
headers:
CF-Cache-Status:
- DYNAMIC
CF-RAY:
- 8b47714d083b47b9-BOM
Connection:
- keep-alive
Content-Length:
- '2'
Content-Type:
- application/json
Date:
- Sat, 17 Aug 2024 06:00:14 GMT
NEL:
- '{"success_fraction":0,"report_to":"cf-nel","max_age":604800}'
Report-To:
- '{"endpoints":[{"url":"https:\/\/a.nel.cloudflare.com\/report\/v4?s=2DRWL1cdKdMvnE8vx1fPUGeTITOgSGl3N5g84PS6w30GRqpfz79BtSx6REhpnOiFV8kM6KGqln0iCZ5yoHc2jBVVJXhPJhQ5t0uerD9JFnkphjISrJOU1MJjZWneT9PlNABddxvVNCmluA%3D%3D"}],"group":"cf-nel","max_age":604800}'
Server:
- cloudflare
allow:
- POST, OPTIONS
alt-svc:
- h3=":443"; ma=86400
cross-origin-opener-policy:
- same-origin
referrer-policy:
- same-origin
vary:
- Accept, origin, Cookie
x-content-type-options:
- nosniff
x-frame-options:
- DENY
status:
code: 200
message: OK
- request:
body: '{"batch": [{"properties": {"python_version": "3.12.4 (v3.12.4:8e8a4baf65,
Jun 6 2024, 17:33:18) [Clang 13.0.0 (clang-1300.0.29.30)]", "os": "darwin",
"os_version": "Darwin Kernel Version 23.4.0: Wed Feb 21 21:44:54 PST 2024; root:xnu-10063.101.15~2/RELEASE_ARM64_T6030",
"os_release": "23.4.0", "processor": "arm", "machine": "arm64", "function":
"mem0.client.main.MemoryClient", "$lib": "posthog-python", "$lib_version": "3.5.0",
"$geoip_disable": true}, "timestamp": "2024-08-17T06:00:13.593952+00:00", "context":
{}, "distinct_id": "fd411bd3-99a2-42d6-acd7-9fca8ad09580", "event": "client.add"}],
"historical_migration": false, "sentAt": "2024-08-17T06:00:13.858277+00:00",
"api_key": "phc_hgJkUVJFYtmaJqrvf6CYN67TIQ8yhXAkWzUn9AMU4yX"}'
headers:
Accept:
- '*/*'
Accept-Encoding:
- gzip, deflate
Connection:
- keep-alive
Content-Length:
- '739'
Content-Type:
- application/json
User-Agent:
- posthog-python/3.5.0
method: POST
uri: https://us.i.posthog.com/batch/
response:
body:
string: '{"status":"Ok"}'
headers:
Connection:
- keep-alive
Content-Length:
- '15'
Content-Type:
- application/json
Date:
- Sat, 17 Aug 2024 06:00:13 GMT
access-control-allow-credentials:
- 'true'
server:
- envoy
vary:
- origin, access-control-request-method, access-control-request-headers
x-envoy-upstream-service-time:
- '33'
status:
code: 200
message: OK
version: 1

View File

@@ -1,147 +0,0 @@
from unittest.mock import MagicMock, patch
import pytest
from crewai.memory import EntityMemory, LongTermMemory, ShortTermMemory, UserMemory
from crewai.memory.contextual.contextual_memory import ContextualMemory
@pytest.fixture
def mock_memories():
return {
"stm": MagicMock(spec=ShortTermMemory),
"ltm": MagicMock(spec=LongTermMemory),
"em": MagicMock(spec=EntityMemory),
"um": MagicMock(spec=UserMemory),
}
@pytest.fixture
def contextual_memory_mem0(mock_memories):
return ContextualMemory(
memory_provider="mem0",
stm=mock_memories["stm"],
ltm=mock_memories["ltm"],
em=mock_memories["em"],
um=mock_memories["um"],
)
@pytest.fixture
def contextual_memory_other(mock_memories):
return ContextualMemory(
memory_provider="other",
stm=mock_memories["stm"],
ltm=mock_memories["ltm"],
em=mock_memories["em"],
um=mock_memories["um"],
)
@pytest.fixture
def contextual_memory_none(mock_memories):
return ContextualMemory(
memory_provider=None,
stm=mock_memories["stm"],
ltm=mock_memories["ltm"],
em=mock_memories["em"],
um=mock_memories["um"],
)
def test_build_context_for_task_mem0(contextual_memory_mem0, mock_memories):
task = MagicMock(description="Test task")
context = "Additional context"
mock_memories["stm"].search.return_value = ["Recent insight"]
mock_memories["ltm"].search.return_value = [
{"metadata": {"suggestions": ["Historical data"]}}
]
mock_memories["em"].search.return_value = [{"memory": "Entity memory"}]
mock_memories["um"].search.return_value = [{"memory": "User memory"}]
result = contextual_memory_mem0.build_context_for_task(task, context)
assert "Recent Insights:" in result
assert "Historical Data:" in result
assert "Entities:" in result
assert "User memories/preferences:" in result
def test_build_context_for_task_other_provider(contextual_memory_other, mock_memories):
task = MagicMock(description="Test task")
context = "Additional context"
mock_memories["stm"].search.return_value = ["Recent insight"]
mock_memories["ltm"].search.return_value = [
{"metadata": {"suggestions": ["Historical data"]}}
]
mock_memories["em"].search.return_value = [{"context": "Entity context"}]
mock_memories["um"].search.return_value = [{"memory": "User memory"}]
result = contextual_memory_other.build_context_for_task(task, context)
assert "Recent Insights:" in result
assert "Historical Data:" in result
assert "Entities:" in result
assert "User memories/preferences:" not in result
def test_build_context_for_task_none_provider(contextual_memory_none, mock_memories):
task = MagicMock(description="Test task")
context = "Additional context"
mock_memories["stm"].search.return_value = ["Recent insight"]
mock_memories["ltm"].search.return_value = [
{"metadata": {"suggestions": ["Historical data"]}}
]
mock_memories["em"].search.return_value = [{"context": "Entity context"}]
mock_memories["um"].search.return_value = [{"memory": "User memory"}]
result = contextual_memory_none.build_context_for_task(task, context)
assert "Recent Insights:" in result
assert "Historical Data:" in result
assert "Entities:" in result
assert "User memories/preferences:" not in result
def test_fetch_entity_context_mem0(contextual_memory_mem0, mock_memories):
mock_memories["em"].search.return_value = [
{"memory": "Entity 1"},
{"memory": "Entity 2"},
]
result = contextual_memory_mem0._fetch_entity_context("query")
expected_result = "Entities:\n- Entity 1\n- Entity 2"
assert result == expected_result
def test_fetch_entity_context_other_provider(contextual_memory_other, mock_memories):
mock_memories["em"].search.return_value = [
{"context": "Entity 1"},
{"context": "Entity 2"},
]
result = contextual_memory_other._fetch_entity_context("query")
expected_result = "Entities:\n- Entity 1\n- Entity 2"
assert result == expected_result
def test_user_memories_only_for_mem0(contextual_memory_mem0, mock_memories):
mock_memories["um"].search.return_value = [{"memory": "User memory"}]
# Test for mem0 provider
result_mem0 = contextual_memory_mem0._fetch_user_memories("query")
assert "User memories/preferences:" in result_mem0
assert "User memory" in result_mem0
# Additional test to ensure user memories are included/excluded in the full context
task = MagicMock(description="Test task")
context = "Additional context"
mock_memories["stm"].search.return_value = ["Recent insight"]
mock_memories["ltm"].search.return_value = [
{"metadata": {"suggestions": ["Historical data"]}}
]
mock_memories["em"].search.return_value = [{"memory": "Entity memory"}]
full_context_mem0 = contextual_memory_mem0.build_context_for_task(task, context)
assert "User memories/preferences:" in full_context_mem0
assert "User memory" in full_context_mem0

View File

@@ -1,119 +0,0 @@
# tests/memory/test_entity_memory.py
from unittest.mock import MagicMock, patch
import pytest
from crewai.memory.entity.entity_memory import EntityMemory
from crewai.memory.entity.entity_memory_item import EntityMemoryItem
from crewai.memory.storage.mem0_storage import Mem0Storage
from crewai.memory.storage.rag_storage import RAGStorage
@pytest.fixture
def mock_rag_storage():
"""Fixture to create a mock RAGStorage instance"""
return MagicMock(spec=RAGStorage)
@pytest.fixture
def mock_mem0_storage():
"""Fixture to create a mock Mem0Storage instance"""
return MagicMock(spec=Mem0Storage)
@pytest.fixture
def entity_memory_rag(mock_rag_storage):
"""Fixture to create an EntityMemory instance with RAGStorage"""
with patch(
"crewai.memory.entity.entity_memory.RAGStorage", return_value=mock_rag_storage
):
return EntityMemory()
@pytest.fixture
def entity_memory_mem0(mock_mem0_storage):
"""Fixture to create an EntityMemory instance with Mem0Storage"""
with patch(
"crewai.memory.entity.entity_memory.Mem0Storage", return_value=mock_mem0_storage
):
return EntityMemory(memory_provider="mem0")
def test_save_rag_storage(entity_memory_rag, mock_rag_storage):
item = EntityMemoryItem(
name="John Doe",
type="Person",
description="A software engineer",
relationships="Works at TechCorp",
)
entity_memory_rag.save(item)
expected_data = "John Doe(Person): A software engineer"
mock_rag_storage.save.assert_called_once_with(expected_data, item.metadata)
def test_save_mem0_storage(entity_memory_mem0, mock_mem0_storage):
item = EntityMemoryItem(
name="John Doe",
type="Person",
description="A software engineer",
relationships="Works at TechCorp",
)
entity_memory_mem0.save(item)
expected_data = """
Remember details about the following entity:
Name: John Doe
Type: Person
Entity Description: A software engineer
"""
mock_mem0_storage.save.assert_called_once_with(expected_data, item.metadata)
def test_search(entity_memory_rag, mock_rag_storage):
query = "software engineer"
limit = 5
filters = {"type": "Person"}
score_threshold = 0.7
entity_memory_rag.search(query, limit, filters, score_threshold)
mock_rag_storage.search.assert_called_once_with(
query=query, limit=limit, filters=filters, score_threshold=score_threshold
)
def test_reset(entity_memory_rag, mock_rag_storage):
entity_memory_rag.reset()
mock_rag_storage.reset.assert_called_once()
def test_reset_error(entity_memory_rag, mock_rag_storage):
mock_rag_storage.reset.side_effect = Exception("Reset error")
with pytest.raises(Exception) as exc_info:
entity_memory_rag.reset()
assert (
str(exc_info.value)
== "An error occurred while resetting the entity memory: Reset error"
)
@pytest.mark.parametrize("memory_provider", [None, "other"])
def test_init_with_rag_storage(memory_provider):
with patch("crewai.memory.entity.entity_memory.RAGStorage") as mock_rag_storage:
EntityMemory(memory_provider=memory_provider)
mock_rag_storage.assert_called_once()
def test_init_with_mem0_storage():
with patch("crewai.memory.entity.entity_memory.Mem0Storage") as mock_mem0_storage:
EntityMemory(memory_provider="mem0")
mock_mem0_storage.assert_called_once()
def test_init_with_custom_storage():
custom_storage = MagicMock()
entity_memory = EntityMemory(storage=custom_storage)
assert entity_memory.storage == custom_storage
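The two save tests above pin down a different serialization per storage backend. A hedged sketch of the formatting they assert follows; `format_entity_data` is a name invented here for illustration, not a crewai API:

# Illustrative only: reconstructs the strings the two save tests expect.
# format_entity_data is a hypothetical helper, not part of crewai.
def format_entity_data(item, memory_provider=None):
    if memory_provider == "mem0":
        # Multi-line prompt-style format asserted in test_save_mem0_storage
        return (
            "\n"
            "Remember details about the following entity:\n"
            f"Name: {item.name}\n"
            f"Type: {item.type}\n"
            f"Entity Description: {item.description}\n"
        )
    # Compact "Name(Type): description" format asserted in test_save_rag_storage
    return f"{item.name}({item.type}): {item.description}"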


@@ -1,125 +1,29 @@
 # tests/memory/long_term_memory_test.py
-from datetime import datetime
-from unittest.mock import MagicMock, patch
 import pytest
 from crewai.memory.long_term.long_term_memory import LongTermMemory
 from crewai.memory.long_term.long_term_memory_item import LongTermMemoryItem
-from crewai.memory.storage.ltm_sqlite_storage import LTMSQLiteStorage
 @pytest.fixture
-def mock_storage():
-    """Fixture to create a mock LTMSQLiteStorage instance"""
-    return MagicMock(spec=LTMSQLiteStorage)
+def long_term_memory():
+    """Fixture to create a LongTermMemory instance"""
+    return LongTermMemory()
-@pytest.fixture
-def long_term_memory(mock_storage):
-    """Fixture to create a LongTermMemory instance with mock storage"""
-    return LongTermMemory(storage=mock_storage)
-def test_save(long_term_memory, mock_storage):
+def test_save_and_search(long_term_memory):
     memory = LongTermMemoryItem(
         agent="test_agent",
         task="test_task",
         expected_output="test_output",
-        datetime="2023-01-01 12:00:00",
+        datetime="test_datetime",
         quality=0.5,
-        metadata={"additional_info": "test_info"},
+        metadata={"task": "test_task", "quality": 0.5},
     )
     long_term_memory.save(memory)
-    expected_metadata = {
-        "additional_info": "test_info",
-        "agent": "test_agent",
-        "expected_output": "test_output",
-        "quality": 0.5,  # Include quality in expected metadata
-    }
-    mock_storage.save.assert_called_once_with(
-        task_description="test_task",
-        score=0.5,
-        metadata=expected_metadata,
-        datetime="2023-01-01 12:00:00",
-    )
-def test_search(long_term_memory, mock_storage):
-    mock_storage.load.return_value = [
-        {
-            "metadata": {
-                "agent": "test_agent",
-                "expected_output": "test_output",
-                "task": "test_task",
-            },
-            "datetime": "2023-01-01 12:00:00",
-            "score": 0.5,
-        }
-    ]
-    result = long_term_memory.search("test_task", latest_n=5)
-    mock_storage.load.assert_called_once_with("test_task", 5)
-    assert len(result) == 1
-    assert result[0]["metadata"]["agent"] == "test_agent"
-    assert result[0]["metadata"]["expected_output"] == "test_output"
-    assert result[0]["metadata"]["task"] == "test_task"
-    assert result[0]["datetime"] == "2023-01-01 12:00:00"
-    assert result[0]["score"] == 0.5
-def test_save_with_minimal_metadata(long_term_memory, mock_storage):
-    memory = LongTermMemoryItem(
-        agent="minimal_agent",
-        task="minimal_task",
-        expected_output="minimal_output",
-        datetime="2023-01-01 12:00:00",
-        quality=0.3,
-        metadata={},
-    )
-    long_term_memory.save(memory)
-    expected_metadata = {
-        "agent": "minimal_agent",
-        "expected_output": "minimal_output",
-        "quality": 0.3,  # Include quality in expected metadata
-    }
-    mock_storage.save.assert_called_once_with(
-        task_description="minimal_task",
-        score=0.3,
-        metadata=expected_metadata,
-        datetime="2023-01-01 12:00:00",
-    )
-def test_reset(long_term_memory, mock_storage):
-    long_term_memory.reset()
-    mock_storage.reset.assert_called_once()
-def test_search_with_no_results(long_term_memory, mock_storage):
-    mock_storage.load.return_value = []
-    result = long_term_memory.search("nonexistent_task")
-    assert result == []
-def test_init_with_default_storage():
-    with patch(
-        "crewai.memory.long_term.long_term_memory.LTMSQLiteStorage"
-    ) as mock_storage_class:
-        LongTermMemory()
-        mock_storage_class.assert_called_once()
-def test_init_with_custom_storage():
-    custom_storage = MagicMock()
-    memory = LongTermMemory(storage=custom_storage)
-    assert memory.storage == custom_storage
-@pytest.mark.parametrize("latest_n", [1, 3, 5, 10])
-def test_search_with_different_latest_n(long_term_memory, mock_storage, latest_n):
-    long_term_memory.search("test_task", latest_n=latest_n)
-    mock_storage.load.assert_called_once_with("test_task", latest_n)
+    find = long_term_memory.search("test_task", latest_n=5)[0]
+    assert find["score"] == 0.5
+    assert find["datetime"] == "test_datetime"
+    assert find["metadata"]["agent"] == "test_agent"
+    assert find["metadata"]["quality"] == 0.5
+    assert find["metadata"]["task"] == "test_task"
+    assert find["metadata"]["expected_output"] == "test_output"


@@ -44,46 +44,3 @@ def test_save_and_search(short_term_memory):
find = short_term_memory.search("test value", score_threshold=0.01)[0]
assert find["context"] == memory.data, "Data value mismatch."
assert find["metadata"]["agent"] == "test_agent", "Agent value mismatch."
@pytest.fixture
def short_term_memory_with_provider():
"""Fixture to create a ShortTermMemory instance with a specific memory provider"""
agent = Agent(
role="Researcher",
goal="Search relevant data and provide results",
backstory="You are a researcher at a leading tech think tank.",
tools=[],
verbose=True,
)
task = Task(
description="Perform a search on specific topics.",
expected_output="A list of relevant URLs based on the search query.",
agent=agent,
)
return ShortTermMemory(
crew=Crew(agents=[agent], tasks=[task]), memory_provider="mem0"
)
def test_save_and_search_with_provider(short_term_memory_with_provider):
memory = ShortTermMemoryItem(
data="Loves to do research on the latest technologies.",
agent="test_agent_provider",
metadata={"task": "test_task_provider"},
)
short_term_memory_with_provider.save(
value=memory.data,
metadata=memory.metadata,
agent=memory.agent,
)
find = short_term_memory_with_provider.search(
"Loves to do research on the latest technologies.", score_threshold=0.01
)[0]
assert find["memory"] in memory.data, "Data value mismatch."
assert find["metadata"]["agent"] == "test_agent_provider", "Agent value mismatch."
assert (
short_term_memory_with_provider.memory_provider == "mem0"
), "Memory provider mismatch."

uv.lock (generated, new file, 4972 lines)

File diff suppressed because it is too large