Compare commits


15 Commits

Author SHA1 Message Date
Brandon Hancock (bhancock_ai)
f35eda3e95 Merge branch 'main' into pr-1209 2024-10-11 10:55:31 -04:00
Brandon Hancock
774bc9ea75 logs and fix merge request 2024-10-11 10:36:39 -04:00
Brandon Hancock
8c83379cb9 Merge branch 'main' into pr-1209 2024-10-11 09:41:44 -04:00
Brandon Hancock
0354ad378b Add addtiional validation and tests. Also, if memory was set to True on a Crew, you were required to have a mem0 key. Fixed this direct dependency. 2024-10-09 09:52:56 -04:00
Dev Khant
5d6eb6e9c1 Merge branch 'main' into intergrate-mem0 2024-10-02 00:10:12 +05:30
Dev-Khant
bb90718bb8 Merge branch 'main' into intergrate-mem0 2024-10-02 00:09:47 +05:30
João Moura
6e13c0b8ff Merge branch 'main' into intergrate-mem0 2024-09-23 18:09:39 -03:00
Dev-Khant
3517d539ae fix mypy checks 2024-09-23 16:05:32 +05:30
Dev-Khant
66a1ecca00 fixes mypy issues 2024-09-23 16:02:04 +05:30
Dev-Khant
7371a454ad update poetry.lock 2024-09-23 15:29:27 +05:30
Dev-Khant
1f66c6dad2 pending commit for _fetch_user_memories 2024-09-23 15:06:28 +05:30
Dev-Khant
c7d326a8a0 Merge branch 'main' into intergrate-mem0 2024-09-23 15:00:41 +05:30
João Moura
602497a6bb Merge branch 'main' into intergrate-mem0 2024-09-22 16:18:00 -03:00
Dev Khant
67991a31f2 Update src/crewai/memory/contextual/contextual_memory.py
Co-authored-by: Deshraj Yadav <deshraj@gatech.edu>
2024-08-19 09:39:46 +05:30
Dev-Khant
2607ac3a1f Integrate Mem0 2024-08-17 12:21:16 +05:30
56 changed files with 1345 additions and 5875 deletions

View File

@@ -9,24 +9,24 @@ env:
OPENAI_API_KEY: fake-api-key
jobs:
tests:
deploy:
runs-on: ubuntu-latest
timeout-minutes: 15
steps:
- name: Checkout code
uses: actions/checkout@v4
- name: Install uv
uses: astral-sh/setup-uv@v3
- name: Setup Python
uses: actions/setup-python@v4
with:
enable-cache: true
python-version: "3.11.9"
- name: Set up Python
run: uv python install 3.11.9
- name: Install the project
run: uv sync --dev
- name: Install Requirements
run: |
set -e
pip install poetry
poetry install
- name: Run tests
run: uv run pytest tests
run: poetry run pytest

View File

@@ -16,7 +16,7 @@ jobs:
- name: Setup Python
uses: actions/setup-python@v4
with:
python-version: "3.11.9"
python-version: "3.10"
- name: Install Requirements
run: |

View File

@@ -44,9 +44,15 @@ To get started with CrewAI, follow these simple steps:
### 1. Installation
Ensure you have Python >=3.10 <=3.13 installed on your system. CrewAI uses [UV](https://docs.astral.sh/uv/) for dependency management and package handling, offering a seamless setup and execution experience.
Ensure you have Python >=3.10 <=3.13 installed on your system. CrewAI uses [Poetry](https://python-poetry.org/) for dependency management and package handling, offering a seamless setup and execution experience.
First, install CrewAI:
First, if you haven't already, install Poetry:
```bash
pip install poetry
```
Then, install CrewAI:
```shell
pip install crewai
@@ -237,7 +243,7 @@ Lock the dependencies and install them by using the CLI command but first, navig
```shell
cd my_project
crewai install (Optional)
crewai install
```
To run your crew, execute the following command in the root of your project:
@@ -326,14 +332,14 @@ CrewAI is open-source and we welcome contributions. If you're looking to contrib
### Installing Dependencies
```bash
uv lock
uv sync
poetry lock
poetry install
```
### Virtual Env
```bash
uv venv
poetry shell
```
### Pre-commit hooks
@@ -345,19 +351,19 @@ pre-commit install
### Running Tests
```bash
uvx pytest
poetry run pytest
```
### Running static type checks
```bash
uvx mypy
poetry run mypy
```
### Packaging
```bash
uv build
poetry build
```
### Installing Locally

View File

@@ -10,10 +10,10 @@ The CrewAI CLI provides a set of commands to interact with CrewAI, allowing you
## Installation
To use the CrewAI CLI, make sure you have CrewAI installed:
To use the CrewAI CLI, make sure you have CrewAI & Poetry installed:
```shell
pip install crewai
pip install crewai poetry
```
## Basic Usage
@@ -145,4 +145,4 @@ crewai run
<Note>
Make sure to run these commands from the directory where your CrewAI project is set up.
Some commands may require additional configuration or setup within your project structure.
</Note>

View File

@@ -4,7 +4,7 @@ description: Exploring the dynamics of agent collaboration within the CrewAI fra
icon: screen-users
---
## Collaboration Fundamentals
Collaboration in CrewAI is fundamental, enabling agents to combine their skills, share information, and assist each other in task execution, embodying a truly cooperative ecosystem.
@@ -16,24 +16,25 @@ Collaboration in CrewAI is fundamental, enabling agents to combine their skills,
The `Crew` class has been enriched with several attributes to support advanced functionalities:
| Feature | Description |
|:-------------------------------------------|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| **Language Model Management** (`manager_llm`, `function_calling_llm`) | Manages language models for executing tasks and tools. `manager_llm` is required for hierarchical processes, while `function_calling_llm` is optional with a default value for streamlined interactions. |
| **Custom Manager Agent** (`manager_agent`) | Specifies a custom agent as the manager, replacing the default CrewAI manager. |
| **Process Flow** (`process`) | Defines execution logic (e.g., sequential, hierarchical) for task distribution. |
| **Verbose Logging** (`verbose`) | Provides detailed logging for monitoring and debugging. Accepts integer and boolean values to control verbosity level. |
| **Rate Limiting** (`max_rpm`) | Limits requests per minute to optimize resource usage. Setting guidelines depend on task complexity and load. |
| **Internationalization / Customization** (`language`, `prompt_file`) | Supports prompt customization for global usability. [Example of file](https://github.com/joaomdmoura/crewAI/blob/main/src/crewai/translations/en.json) |
| **Execution and Output Handling** (`full_output`) | Controls output granularity, distinguishing between full and final outputs. |
| **Callback and Telemetry** (`step_callback`, `task_callback`) | Enables step-wise and task-level execution monitoring and telemetry for performance analytics. |
| **Crew Sharing** (`share_crew`) | Allows sharing crew data with CrewAI for model improvement. Privacy implications and benefits should be considered. |
| **Usage Metrics** (`usage_metrics`) | Logs all LLM usage metrics during task execution for performance insights. |
| **Memory Usage** (`memory`) | Enables memory for storing execution history, aiding in agent learning and task efficiency. |
| **Embedder Configuration** (`embedder`) | Configures the embedder for language understanding and generation, with support for provider customization. |
| **Cache Management** (`cache`) | Specifies whether to cache tool execution results, enhancing performance. |
| **Output Logging** (`output_log_file`) | Defines the file path for logging crew execution output. |
| **Planning Mode** (`planning`) | Enables action planning before task execution. Set `planning=True` to activate. |
| **Replay Feature** (`replay`) | Provides CLI for listing tasks from the last run and replaying from specific tasks, aiding in task management and troubleshooting. |
| Feature | Description |
| :-------------------------------------------------------------------- | :------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| **Language Model Management** (`manager_llm`, `function_calling_llm`) | Manages language models for executing tasks and tools. `manager_llm` is required for hierarchical processes, while `function_calling_llm` is optional with a default value for streamlined interactions. |
| **Custom Manager Agent** (`manager_agent`) | Specifies a custom agent as the manager, replacing the default CrewAI manager. |
| **Process Flow** (`process`) | Defines execution logic (e.g., sequential, hierarchical) for task distribution. |
| **Verbose Logging** (`verbose`) | Provides detailed logging for monitoring and debugging. Accepts integer and boolean values to control verbosity level. |
| **Rate Limiting** (`max_rpm`) | Limits requests per minute to optimize resource usage. Setting guidelines depend on task complexity and load. |
| **Internationalization / Customization** (`language`, `prompt_file`) | Supports prompt customization for global usability. [Example of file](https://github.com/joaomdmoura/crewAI/blob/main/src/crewai/translations/en.json) |
| **Execution and Output Handling** (`full_output`) | Controls output granularity, distinguishing between full and final outputs. |
| **Callback and Telemetry** (`step_callback`, `task_callback`) | Enables step-wise and task-level execution monitoring and telemetry for performance analytics. |
| **Crew Sharing** (`share_crew`) | Allows sharing crew data with CrewAI for model improvement. Privacy implications and benefits should be considered. |
| **Usage Metrics** (`usage_metrics`) | Logs all LLM usage metrics during task execution for performance insights. |
| **Memory Usage** (`memory`) | Enables memory for storing execution history, aiding in agent learning and task efficiency. |
| **Memory Provider** (`memory_provider`) | Specifies the memory provider to be used by the crew for storing memories. |
| **Embedder Configuration** (`embedder`) | Configures the embedder for language understanding and generation, with support for provider customization. |
| **Cache Management** (`cache`) | Specifies whether to cache tool execution results, enhancing performance. |
| **Output Logging** (`output_log_file`) | Defines the file path for logging crew execution output. |
| **Planning Mode** (`planning`) | Enables action planning before task execution. Set `planning=True` to activate. |
| **Replay Feature** (`replay`) | Provides CLI for listing tasks from the last run and replaying from specific tasks, aiding in task management and troubleshooting. |
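For orientation, a minimal sketch of a crew combining several of the attributes above (the model name and task details are illustrative assumptions, not values from this PR):

```python
from crewai import Agent, Crew, Process, Task

researcher = Agent(
    role="Researcher",
    goal="Gather background information",
    backstory="An analyst adept at finding reliable sources.",
)
research_task = Task(
    description="Collect three recent sources on the topic.",
    expected_output="A bulleted list of sources.",
    agent=researcher,
)

crew = Crew(
    agents=[researcher],
    tasks=[research_task],
    process=Process.hierarchical,  # hierarchical runs require a manager
    manager_llm="gpt-4o",          # assumed model identifier
    verbose=True,
    max_rpm=10,                    # throttle requests per minute
    memory=True,                   # enable the memory components below
)
```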
## Delegation (Dividing to Conquer)
@@ -49,4 +50,4 @@ Consider a crew with a researcher agent tasked with data gathering and a writer
## Conclusion
The integration of advanced attributes and functionalities into the CrewAI framework significantly enriches the agent collaboration ecosystem. These enhancements not only simplify interactions but also offer unprecedented flexibility and control, paving the way for sophisticated AI-driven solutions capable of tackling complex tasks through intelligent collaboration and delegation.

View File

@@ -22,7 +22,8 @@ A crew in crewAI represents a collaborative group of agents working together to
| **Max RPM** _(optional)_ | `max_rpm` | Maximum requests per minute the crew adheres to during execution. Defaults to `None`. |
| **Language** _(optional)_ | `language` | Language used for the crew, defaults to English. |
| **Language File** _(optional)_ | `language_file` | Path to the language file to be used for the crew. |
| **Memory** _(optional)_ | `memory` | Utilized for storing execution memories (short-term, long-term, entity memory). Defaults to `False`. |
| **Memory** _(optional)_ | `memory` | Utilized for storing execution memories (short-term, long-term, entity memory). |
| **Memory Provider** _(optional)_ | `memory_provider` | Specifies the memory provider to be used by the crew for storing memories. |
| **Cache** _(optional)_ | `cache` | Specifies whether to use a cache for storing the results of tools' execution. Defaults to `True`. |
| **Embedder** _(optional)_ | `embedder` | Configuration for the embedder to be used by the crew. Mostly used by memory for now. Default is `{"provider": "openai"}`. |
| **Full Output** _(optional)_ | `full_output` | Whether the crew should return the full output with all tasks outputs or just the final output. Defaults to `False`. |
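Given the new `memory_provider` attribute above, a minimal sketch of enabling it (this assumes Mem0 reads its API key from the environment, per the commit notes in this PR; the key value is a placeholder):

```python
import os
from crewai import Agent, Crew, Task

os.environ["MEM0_API_KEY"] = "your-mem0-key"  # placeholder; needed only when using mem0

agent = Agent(role="Analyst", goal="Summarize findings", backstory="A careful summarizer.")
task = Task(description="Summarize the findings.", expected_output="A short summary.", agent=agent)

crew = Crew(
    agents=[agent],
    tasks=[task],
    memory=True,             # short-term, long-term, and entity memory
    memory_provider="mem0",  # the validator added in this PR accepts only None or "mem0"
)
```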

View File

@@ -572,16 +572,16 @@ In this example, the `PoemFlow` class defines a flow that generates a sentence c
### Running the Flow
(Optional) Before running the flow, you can install the dependencies by running:
Before running the flow, make sure to install the dependencies by running:
```bash
crewai install
poetry install
```
Once all of the dependencies are installed, you need to activate the virtual environment by running:
```bash
source .venv/bin/activate
poetry shell
```
After activating the virtual environment, you can run the flow by executing one of the following commands:
@@ -593,7 +593,7 @@ crewai flow run
or
```bash
uv run run_flow
poetry run run_flow
```
The flow will execute, and you should see the output in the console.

View File

@@ -6,18 +6,18 @@ icon: database
## Introduction to Memory Systems in CrewAI
The crewAI framework introduces a sophisticated memory system designed to significantly enhance the capabilities of AI agents.
This system comprises `short-term memory`, `long-term memory`, `entity memory`, and `contextual memory`, each serving a unique purpose in aiding agents to remember,
reason, and learn from past interactions.
## Memory System Components
| Component | Description |
| :-------------------- | :--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| **Short-Term Memory** | Temporarily stores recent interactions and outcomes using `RAG`, enabling agents to recall and utilize information relevant to their current context during the current executions. |
| **Long-Term Memory** | Preserves valuable insights and learnings from past executions, allowing agents to build and refine their knowledge over time. |
| **Entity Memory** | Captures and organizes information about entities (people, places, concepts) encountered during tasks, facilitating deeper understanding and relationship mapping. Uses `RAG` for storing entity information. |
| **Contextual Memory** | Maintains the context of interactions by combining `ShortTermMemory`, `LongTermMemory`, and `EntityMemory`, aiding in the coherence and relevance of agent responses over a sequence of tasks or a conversation. |
## How Memory Systems Empower Agents
@@ -30,8 +30,8 @@ reason, and learn from past interactions.
## Implementing Memory in Your Crew
When configuring a crew, you can enable and customize each memory component to suit the crew's objectives and the nature of tasks it will perform.
By default, the memory system is disabled, and you can ensure it is active by setting `memory=True` in the crew configuration.
The memory will use OpenAI embeddings by default, but you can change it by setting `embedder` to a different model.
It's also possible to initialize the memory instance with your own instance.
The 'embedder' only applies to **Short-Term Memory** which uses Chroma for RAG using the EmbedChain package.
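As a concrete illustration of the paragraph above, a sketch of overriding the default embedder (the provider/config dict follows the EmbedChain-style schema used in these docs; the model name is an assumption):

```python
from crewai import Agent, Crew, Task

agent = Agent(role="Writer", goal="Draft copy", backstory="A concise technical writer.")
task = Task(description="Draft one paragraph.", expected_output="One paragraph.", agent=agent)

crew = Crew(
    agents=[agent],
    tasks=[task],
    memory=True,
    # Only short-term memory's RAG store (Chroma via EmbedChain) uses this:
    embedder={
        "provider": "openai",
        "config": {"model": "text-embedding-3-small"},
    },
)
```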
@@ -92,10 +92,10 @@ my_crew = Crew(
)
```
## Additional Embedding Providers
### Using OpenAI embeddings (already default)
```python Code
from crewai import Crew, Agent, Task, Process
@@ -223,14 +223,13 @@ crewai reset-memories [OPTIONS]
#### Resetting Memory Options
| Option | Description | Type | Default |
| :----------------- | :------------------------------- | :------------- | :------ |
| `-l`, `--long` | Reset LONG TERM memory. | Flag (boolean) | False |
| `-s`, `--short` | Reset SHORT TERM memory. | Flag (boolean) | False |
| `-e`, `--entities` | Reset ENTITIES memory. | Flag (boolean) | False |
| Option | Description | Type | Default |
| :------------------------ | :--------------------------------- | :------------- | :------ |
| `-l`, `--long` | Reset LONG TERM memory. | Flag (boolean) | False |
| `-s`, `--short` | Reset SHORT TERM memory. | Flag (boolean) | False |
| `-e`, `--entities` | Reset ENTITIES memory. | Flag (boolean) | False |
| `-k`, `--kickoff-outputs` | Reset LATEST KICKOFF TASK OUTPUTS. | Flag (boolean) | False |
| `-a`, `--all` | Reset ALL memories. | Flag (boolean) | False |
## Benefits of Using CrewAI's Memory System
@@ -240,5 +239,5 @@ crewai reset-memories [OPTIONS]
## Conclusion
Integrating CrewAI's memory system into your projects is straightforward. By leveraging the provided memory components and configurations,
you can quickly empower your agents with the ability to remember, reason, and learn from their interactions, unlocking new levels of intelligence and capability.

View File

@@ -1,163 +0,0 @@
# Creating a CrewAI Pipeline Project
Welcome to the comprehensive guide for creating a new CrewAI pipeline project. This document will walk you through the steps to create, customize, and run your CrewAI pipeline project, ensuring you have everything you need to get started.
To learn more about CrewAI pipelines, visit the [CrewAI documentation](https://docs.crewai.com/core-concepts/Pipeline/).
## Prerequisites
Before getting started with CrewAI pipelines, make sure that you have installed CrewAI via pip:
```shell
$ pip install crewai crewai-tools
```
The same prerequisites for virtual environments and Code IDEs apply as in regular CrewAI projects.
## Creating a New Pipeline Project
To create a new CrewAI pipeline project, you have two options:
1. For a basic pipeline template:
```shell
$ crewai create pipeline <project_name>
```
2. For a pipeline example that includes a router:
```shell
$ crewai create pipeline --router <project_name>
```
These commands will create a new project folder with the following structure:
```
<project_name>/
├── README.md
├── uv.lock
├── pyproject.toml
├── src/
│ └── <project_name>/
│ ├── __init__.py
│ ├── main.py
│ ├── crews/
│ │ ├── crew1/
│ │ │ ├── crew1.py
│ │ │ └── config/
│ │ │ ├── agents.yaml
│ │ │ └── tasks.yaml
│ │ ├── crew2/
│ │ │ ├── crew2.py
│ │ │ └── config/
│ │ │ ├── agents.yaml
│ │ │ └── tasks.yaml
│ ├── pipelines/
│ │ ├── __init__.py
│ │ ├── pipeline1.py
│ │ └── pipeline2.py
│ └── tools/
│ ├── __init__.py
│ └── custom_tool.py
└── tests/
```
## Customizing Your Pipeline Project
To customize your pipeline project, you can:
1. Modify the crew files in `src/<project_name>/crews/` to define your agents and tasks for each crew.
2. Modify the pipeline files in `src/<project_name>/pipelines/` to define your pipeline structure.
3. Modify `src/<project_name>/main.py` to set up and run your pipelines.
4. Add your environment variables into the `.env` file.
## Example 1: Defining a Two-Stage Sequential Pipeline
Here's an example of how to define a pipeline with sequential stages in `src/<project_name>/pipelines/pipeline.py`:
```python
from crewai import Pipeline
from crewai.project import PipelineBase
from ..crews.research_crew.research_crew import ResearchCrew
from ..crews.write_x_crew.write_x_crew import WriteXCrew
@PipelineBase
class SequentialPipeline:
def __init__(self):
# Initialize crews
self.research_crew = ResearchCrew().crew()
self.write_x_crew = WriteXCrew().crew()
def create_pipeline(self):
return Pipeline(
stages=[
self.research_crew,
self.write_x_crew
]
)
async def kickoff(self, inputs):
pipeline = self.create_pipeline()
results = await pipeline.kickoff(inputs)
return results
```
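Because `kickoff` is a coroutine, running the pipeline needs an event loop; a minimal usage sketch (the input payload shape is an assumption for illustration):

```python
import asyncio

async def main():
    pipeline = SequentialPipeline()
    results = await pipeline.kickoff([{"topic": "AI agent frameworks"}])
    print(results)

asyncio.run(main())
```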
## Example 2: Defining a Two-Stage Pipeline with Parallel Execution
```python
from crewai import Pipeline
from crewai.project import PipelineBase
from ..crews.research_crew.research_crew import ResearchCrew
from ..crews.write_x_crew.write_x_crew import WriteXCrew
from ..crews.write_linkedin_crew.write_linkedin_crew import WriteLinkedInCrew
@PipelineBase
class ParallelExecutionPipeline:
def __init__(self):
# Initialize crews
self.research_crew = ResearchCrew().crew()
self.write_x_crew = WriteXCrew().crew()
self.write_linkedin_crew = WriteLinkedInCrew().crew()
def create_pipeline(self):
return Pipeline(
stages=[
self.research_crew,
[self.write_x_crew, self.write_linkedin_crew] # Parallel execution
]
)
async def kickoff(self, inputs):
pipeline = self.create_pipeline()
results = await pipeline.kickoff(inputs)
return results
```
### Annotations
The main annotation you'll use for pipelines is `@PipelineBase`. This annotation is used to decorate your pipeline classes, similar to how `@CrewBase` is used for crews.
## Installing Dependencies
To install the dependencies for your project, use `uv`. Running the install command is optional, because `crewai run` will automatically install the dependencies for you:
```shell
$ cd <project_name>
$ crewai install (optional)
```
## Running Your Pipeline Project
To run your pipeline project, use the following command:
```shell
$ crewai run
```
This will initialize your pipeline and begin task execution as defined in your `main.py` file.
## Deploying Your Pipeline Project
Pipelines can be deployed in the same way as regular CrewAI projects. The easiest way is through [CrewAI+](https://www.crewai.com/crewaiplus), where you can deploy your pipeline in a few clicks.
Remember, when working with pipelines, you're orchestrating multiple crews to work together in a sequence or parallel fashion. This allows for more complex workflows and information processing tasks.

View File

@@ -1,236 +0,0 @@
---
title: Starting a New CrewAI Project - Using Template
description: A comprehensive guide to starting a new CrewAI project, including the latest updates and project setup methods.
---
# Starting Your CrewAI Project
Welcome to the ultimate guide for starting a new CrewAI project. This document will walk you through the steps to create, customize, and run your CrewAI project, ensuring you have everything you need to get started.
Before we start, there are a couple of things to note:
1. CrewAI is a Python package and requires Python >=3.10 and <=3.13 to run.
2. The preferred way of setting up CrewAI is using the `crewai create crew` command. This will create a new project folder and install a skeleton template for you to work on.
## Prerequisites
Before getting started with CrewAI, make sure that you have installed it via pip:
```shell
$ pip install 'crewai[tools]'
```
## Creating a New Project
In this example, we will be using `uv` as our virtual environment manager.
To create a new CrewAI project, run the following CLI command:
```shell
$ crewai create crew <project_name>
```
This command will create a new project folder with the following structure:
```shell
my_project/
├── .gitignore
├── pyproject.toml
├── README.md
└── src/
└── my_project/
├── __init__.py
├── main.py
├── crew.py
├── tools/
│ ├── custom_tool.py
│ └── __init__.py
└── config/
├── agents.yaml
└── tasks.yaml
```
You can now start developing your project by editing the files in the `src/my_project` folder. The `main.py` file is the entry point of your project, and the `crew.py` file is where you define your agents and tasks.
## Customizing Your Project
To customize your project, you can:
- Modify `src/my_project/config/agents.yaml` to define your agents.
- Modify `src/my_project/config/tasks.yaml` to define your tasks.
- Modify `src/my_project/crew.py` to add your own logic, tools, and specific arguments.
- Modify `src/my_project/main.py` to add custom inputs for your agents and tasks.
- Add your environment variables into the `.env` file.
### Example: Defining Agents and Tasks
#### agents.yaml
```yaml
researcher:
role: >
Job Candidate Researcher
goal: >
Find potential candidates for the job
backstory: >
You are adept at finding the right candidates by exploring various online
resources. Your skill in identifying suitable candidates ensures the best
match for job positions.
```
#### tasks.yaml
```yaml
research_candidates_task:
description: >
Conduct thorough research to find potential candidates for the specified job.
Utilize various online resources and databases to gather a comprehensive list of potential candidates.
Ensure that the candidates meet the job requirements provided.
Job Requirements:
{job_requirements}
expected_output: >
A list of 10 potential candidates with their contact information and brief profiles highlighting their suitability.
agent: researcher # THIS NEEDS TO MATCH THE AGENT NAME IN THE AGENTS.YAML FILE AND THE AGENT DEFINED IN THE crew.py FILE
context: # THESE NEED TO MATCH THE TASK NAMES DEFINED ABOVE AND THE TASKS.YAML FILE AND THE TASK DEFINED IN THE crew.py FILE
- researcher
```
### Referencing Variables:
Your defined functions with the same name will be used. For example, you can reference the agent for specific tasks from the `tasks.yaml` file. Ensure your annotated agent and function name are the same; otherwise, your task won't recognize the reference properly.
#### Example References
`agents.yaml`
```yaml
email_summarizer:
role: >
Email Summarizer
goal: >
Summarize emails into a concise and clear summary
backstory: >
You will create a 5 bullet point summary of the report
llm: mixtal_llm
```
`tasks.yaml`
```yaml
email_summarizer_task:
description: >
Summarize the email into a 5 bullet point summary
expected_output: >
A 5 bullet point summary of the email
agent: email_summarizer
context:
- reporting_task
- research_task
```
Use the annotations to properly reference the agent and task in the `crew.py` file.
### Annotations include:
* `@agent`
* `@task`
* `@crew`
* `@tool`
* `@callback`
* `@output_json`
* `@output_pydantic`
* `@cache_handler`
`crew.py`
```python
# ...
@agent
def email_summarizer(self) -> Agent:
return Agent(
config=self.agents_config["email_summarizer"],
)
@task
def email_summarizer_task(self) -> Task:
return Task(
config=self.tasks_config["email_summarizer_task"],
)
# ...
```
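For completeness, these annotated methods typically live in a `@CrewBase`-decorated class whose `@crew` method assembles everything; a sketch under that assumption (the class name is hypothetical):

```python
from crewai import Agent, Crew, Process, Task
from crewai.project import CrewBase, agent, crew, task

@CrewBase
class EmailSummaryCrew:
    """Hypothetical crew wiring together the agent and task above."""

    @agent
    def email_summarizer(self) -> Agent:
        return Agent(config=self.agents_config["email_summarizer"])

    @task
    def email_summarizer_task(self) -> Task:
        return Task(config=self.tasks_config["email_summarizer_task"])

    @crew
    def crew(self) -> Crew:
        # self.agents / self.tasks are collected from the annotated methods above
        return Crew(agents=self.agents, tasks=self.tasks, process=Process.sequential)
```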
## Installing Dependencies
To install the dependencies for your project, you can use `uv`. Running the following command is optional, since `crewai run` will automatically install the dependencies for you.
```shell
$ cd my_project
$ crewai install (optional)
```
This will install the dependencies specified in the `pyproject.toml` file.
## Interpolating Variables
Any variable interpolated in your `agents.yaml` and `tasks.yaml` files like `{variable}` will be replaced by the value of the variable in the `main.py` file.
#### tasks.yaml
```yaml
research_task:
description: >
Conduct a thorough research about the customer and competitors in the context
of {customer_domain}.
Make sure you find any interesting and relevant information given the
current year is 2024.
expected_output: >
A complete report on the customer and their customers and competitors,
including their demographics, preferences, market positioning and audience engagement.
```
#### main.py
```python
# main.py
def run():
inputs = {
"customer_domain": "crewai.com"
}
MyProjectCrew(inputs).crew().kickoff(inputs=inputs)
```
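Conceptually, the `{variable}` placeholders behave like Python format fields; a simplified sketch of what the interpolation does:

```python
description = (
    "Conduct a thorough research about the customer and competitors "
    "in the context of {customer_domain}."
)
inputs = {"customer_domain": "crewai.com"}

# The framework substitutes inputs into the YAML strings, roughly like:
print(description.format(**inputs))
```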
## Running Your Project
To run your project, use the following command:
```shell
$ crewai run
```
This will initialize your crew of AI agents and begin task execution as defined in your configuration in the `main.py` file.
### Replay Tasks from Latest Crew Kickoff
CrewAI now includes a replay feature that allows you to list the tasks from the last run and replay from a specific one. To use this feature, run:
```shell
$ crewai replay <task_id>
```
Replace `<task_id>` with the ID of the task you want to replay.
### Reset Crew Memory
If you need to reset the memory of your crew before running it again, you can do so by calling the reset memory feature:
```shell
$ crewai reset-memory
```
This will clear the crew's memory, allowing for a fresh start.
## Deploying Your Project
The easiest way to deploy your crew is through [CrewAI+](https://www.crewai.com/crewaiplus), where you can deploy your crew in a few clicks.

poetry.lock generated
View File

@@ -1597,12 +1597,12 @@ files = [
google-auth = ">=2.14.1,<3.0.dev0"
googleapis-common-protos = ">=1.56.2,<2.0.dev0"
grpcio = [
{version = ">=1.49.1,<2.0dev", optional = true, markers = "python_version >= \"3.11\" and extra == \"grpc\""},
{version = ">=1.33.2,<2.0dev", optional = true, markers = "python_version < \"3.11\" and extra == \"grpc\""},
{version = ">=1.49.1,<2.0dev", optional = true, markers = "python_version >= \"3.11\" and extra == \"grpc\""},
]
grpcio-status = [
{version = ">=1.49.1,<2.0.dev0", optional = true, markers = "python_version >= \"3.11\" and extra == \"grpc\""},
{version = ">=1.33.2,<2.0.dev0", optional = true, markers = "python_version < \"3.11\" and extra == \"grpc\""},
{version = ">=1.49.1,<2.0.dev0", optional = true, markers = "python_version >= \"3.11\" and extra == \"grpc\""},
]
proto-plus = ">=1.22.3,<2.0.0dev"
protobuf = ">=3.19.5,<3.20.0 || >3.20.0,<3.20.1 || >3.20.1,<4.21.0 || >4.21.0,<4.21.1 || >4.21.1,<4.21.2 || >4.21.2,<4.21.3 || >4.21.3,<4.21.4 || >4.21.4,<4.21.5 || >4.21.5,<6.0.0.dev0"
@@ -4286,8 +4286,8 @@ files = [
[package.dependencies]
numpy = [
{version = ">=1.23.2", markers = "python_version == \"3.11\""},
{version = ">=1.22.4", markers = "python_version < \"3.11\""},
{version = ">=1.23.2", markers = "python_version == \"3.11\""},
{version = ">=1.26.0", markers = "python_version >= \"3.12\""},
]
python-dateutil = ">=2.8.2"

View File

@@ -1,66 +1,65 @@
[project]
[tool.poetry]
name = "crewai"
version = "0.70.1"
description = "Cutting-edge framework for orchestrating role-playing, autonomous AI agents. By fostering collaborative intelligence, CrewAI empowers agents to work together seamlessly, tackling complex tasks."
authors = ["Joao Moura <joao@crewai.com>"]
readme = "README.md"
requires-python = ">=3.10,<=3.13"
authors = [
{ name = "Joao Moura", email = "joao@crewai.com" }
]
dependencies = [
"pydantic>=2.4.2",
"langchain>=0.2.16",
"openai>=1.13.3",
"opentelemetry-api>=1.22.0",
"opentelemetry-sdk>=1.22.0",
"opentelemetry-exporter-otlp-proto-http>=1.22.0",
"instructor>=1.3.3",
"regex>=2024.9.11",
"crewai-tools>=0.12.1",
"click>=8.1.7",
"python-dotenv>=1.0.0",
"appdirs>=1.4.4",
"jsonref>=1.1.0",
"agentops>=0.3.0",
"embedchain>=0.1.114",
"json-repair>=0.25.2",
"auth0-python>=4.7.1",
"litellm>=1.44.22",
"pyvis>=0.3.2",
"uv>=0.4.18",
"tomli-w>=1.1.0",
]
packages = [{ include = "crewai", from = "src" }]
[project.urls]
[tool.poetry.urls]
Homepage = "https://crewai.com"
Documentation = "https://docs.crewai.com"
Repository = "https://github.com/crewAIInc/crewAI"
[project.optional-dependencies]
tools = ["crewai-tools>=0.12.1"]
agentops = ["agentops>=0.3.0"]
[tool.poetry.dependencies]
python = ">=3.10,<=3.13"
pydantic = "^2.4.2"
langchain = "^0.2.16"
openai = "^1.13.3"
opentelemetry-api = "^1.22.0"
opentelemetry-sdk = "^1.22.0"
opentelemetry-exporter-otlp-proto-http = "^1.22.0"
instructor = "1.3.3"
regex = "^2024.9.11"
crewai-tools = { version = "^0.12.1", optional = true }
click = "^8.1.7"
python-dotenv = "^1.0.0"
appdirs = "^1.4.4"
jsonref = "^1.1.0"
agentops = { version = "^0.3.0", optional = true }
embedchain = "0.1.122"
json-repair = "^0.25.2"
auth0-python = "^4.7.1"
poetry = "^1.8.3"
litellm = "^1.44.22"
pyvis = "^0.3.2"
[tool.uv]
dev-dependencies = [
"ruff>=0.4.10",
"mypy>=1.10.0",
"pre-commit>=3.6.0",
"mkdocs>=1.4.3",
"mkdocstrings>=0.22.0",
"mkdocstrings-python>=1.1.2",
"mkdocs-material>=9.5.7",
"mkdocs-material-extensions>=1.3.1",
"pillow>=10.2.0",
"cairosvg>=2.7.1",
"crewai-tools>=0.12.1",
"pytest>=8.0.0",
"pytest-vcr>=1.0.2",
"python-dotenv>=1.0.0",
"pytest-asyncio>=0.23.7",
"pytest-subprocess>=1.5.2",
]
[tool.poetry.extras]
tools = ["crewai-tools"]
agentops = ["agentops"]
[project.scripts]
[tool.poetry.group.dev.dependencies]
isort = "^5.13.2"
mypy = "1.10.0"
autoflake = "^2.2.1"
pre-commit = "^3.6.0"
mkdocs = "^1.4.3"
mkdocstrings = "^0.22.0"
mkdocstrings-python = "^1.1.2"
mkdocs-material = { extras = ["imaging"], version = "^9.5.7" }
mkdocs-material-extensions = "^1.3.1"
pillow = "^10.2.0"
cairosvg = "^2.7.1"
crewai-tools = "^0.12.1"
[tool.poetry.group.test.dependencies]
pytest = "^8.0.0"
pytest-vcr = "^1.0.2"
python-dotenv = "1.0.0"
pytest-asyncio = "^0.23.7"
pytest-subprocess = "^1.5.2"
[tool.poetry.scripts]
crewai = "crewai.cli.cli:crewai"
[tool.mypy]
@@ -72,5 +71,5 @@ exclude = ["cli/templates"]
exclude_dirs = ["src/crewai/cli/templates"]
[build-system]
requires = ["hatchling"]
build-backend = "hatchling.build"
requires = ["poetry-core"]
build-backend = "poetry.core.masonry.api"

View File

@@ -201,6 +201,8 @@ class Agent(BaseAgent):
task_prompt = task.prompt()
print("context for task", context)
if context:
task_prompt = self.i18n.slice("task_with_context").format(
task=task_prompt, context=context
@@ -211,6 +213,8 @@ class Agent(BaseAgent):
self.crew._short_term_memory,
self.crew._long_term_memory,
self.crew._entity_memory,
self.crew._user_memory,
self.crew.memory_provider,
)
memory = contextual_memory.build_context_for_task(task, context)
if memory.strip() != "":

View File

@@ -317,9 +317,9 @@ class CrewAgentExecutor(CrewAgentExecutorMixin):
if self.crew is not None and hasattr(self.crew, "_train_iteration"):
train_iteration = self.crew._train_iteration
if agent_id in training_data and isinstance(train_iteration, int):
training_data[agent_id][train_iteration]["improved_output"] = (
result.output
)
training_data[agent_id][train_iteration][
"improved_output"
] = result.output
training_handler.save(training_data)
else:
self._logger.log(
@@ -334,32 +334,6 @@ class CrewAgentExecutor(CrewAgentExecutorMixin):
color="red",
)
if self.ask_for_human_input and human_feedback is not None:
training_data = {
"initial_output": result.output,
"human_feedback": human_feedback,
"agent": agent_id,
"agent_role": self.agent.role,
}
if self.crew is not None and hasattr(self.crew, "_train_iteration"):
train_iteration = self.crew._train_iteration
if isinstance(train_iteration, int):
CrewTrainingHandler(TRAINING_DATA_FILE).append(
train_iteration, agent_id, training_data
)
else:
self._logger.log(
"error",
"Invalid train iteration type. Expected int.",
color="red",
)
else:
self._logger.log(
"error",
"Crew is None or does not have _train_iteration attribute.",
color="red",
)
def _format_prompt(self, prompt: str, inputs: Dict[str, str]) -> str:
prompt = prompt.replace("{input}", inputs["input"])
prompt = prompt.replace("{tool_names}", inputs["tool_names"])

View File

@@ -21,7 +21,6 @@ from .run_crew import run_crew
from .run_flow import run_flow
from .tools.main import ToolCommand
from .train_crew import train_crew
from .update_crew import update_crew
@click.group()
@@ -189,12 +188,6 @@ def run():
run_crew()
@crewai.command()
def update():
"""Update the pyproject.toml of the Crew project to use uv."""
update_crew()
@crewai.command()
def signup():
"""Sign Up/Login to CrewAI+."""
@@ -283,13 +276,7 @@ def tool_install(handle: str):
@tool.command(name="publish")
@click.option(
"--force",
is_flag=True,
show_default=True,
default=False,
help="Bypasses Git remote validations",
)
@click.option("--force", is_flag=True, show_default=True, default=False, help="Bypasses Git remote validations")
@click.option("--public", "is_public", flag_value=True, default=False)
@click.option("--private", "is_public", flag_value=False)
def tool_publish(is_public: bool, force: bool):

View File

@@ -5,13 +5,13 @@ import click
def evaluate_crew(n_iterations: int, model: str) -> None:
"""
Test and Evaluate the crew by running a command in the UV environment.
Test and Evaluate the crew by running a command in the Poetry environment.
Args:
n_iterations (int): The number of iterations to test the crew.
model (str): The model to test the crew with.
"""
command = ["uv", "run", "test", str(n_iterations), model]
command = ["poetry", "run", "test", str(n_iterations), model]
try:
if n_iterations <= 0:

View File

@@ -5,10 +5,13 @@ import click
def install_crew() -> None:
"""
Install the crew by running the UV command to lock and install.
Install the crew by running the Poetry command to lock and install.
"""
try:
subprocess.run(["uv", "sync"], check=True, capture_output=False, text=True)
subprocess.run(["poetry", "lock"], check=True, capture_output=False, text=True)
subprocess.run(
["poetry", "install"], check=True, capture_output=False, text=True
)
except subprocess.CalledProcessError as e:
click.echo(f"An error occurred while running the crew: {e}", err=True)

View File

@@ -5,9 +5,9 @@ import click
def plot_flow() -> None:
"""
Plot the flow by running a command in the UV environment.
Plot the flow by running a command in the Poetry environment.
"""
command = ["uv", "run", "plot_flow"]
command = ["poetry", "run", "plot_flow"]
try:
result = subprocess.run(command, capture_output=False, text=True, check=True)

View File

@@ -1,5 +1,4 @@
import subprocess
import click
@@ -10,7 +9,7 @@ def replay_task_command(task_id: str) -> None:
Args:
task_id (str): The ID of the task to replay from.
"""
command = ["uv", "run", "replay", task_id]
command = ["poetry", "run", "replay", task_id]
try:
result = subprocess.run(command, capture_output=False, text=True, check=True)

View File

@@ -1,29 +1,23 @@
import subprocess
import click
import tomllib
def run_crew() -> None:
"""
Run the crew by running a command in the UV environment.
Run the crew by running a command in the Poetry environment.
"""
command = ["uv", "run", "run_crew"]
command = ["poetry", "run", "run_crew"]
try:
subprocess.run(command, capture_output=False, text=True, check=True)
result = subprocess.run(command, capture_output=False, text=True, check=True)
if result.stderr:
click.echo(result.stderr, err=True)
except subprocess.CalledProcessError as e:
click.echo(f"An error occurred while running the crew: {e}", err=True)
click.echo(e.output, err=True, nl=True)
with open("pyproject.toml", "rb") as f:
data = tomllib.load(f)
if data.get("tool", {}).get("poetry"):
click.secho(
"It's possible that you are using an old version of crewAI that uses poetry, please run `crewai update` to update your pyproject.toml to use uv.",
fg="yellow",
)
click.echo(e.output, err=True)
except Exception as e:
click.echo(f"An unexpected error occurred: {e}", err=True)

View File

@@ -5,9 +5,9 @@ import click
def run_flow() -> None:
"""
Run the flow by running a command in the UV environment.
Run the flow by running a command in the Poetry environment.
"""
command = ["uv", "run", "run_flow"]
command = ["poetry", "run", "run_flow"]
try:
result = subprocess.run(command, capture_output=False, text=True, check=True)

View File

@@ -4,17 +4,17 @@ Welcome to the {{crew_name}} Crew project, powered by [crewAI](https://crewai.co
## Installation
Ensure you have Python >=3.10 <=3.13 installed on your system. This project uses [UV](https://docs.astral.sh/uv/) for dependency management and package handling, offering a seamless setup and execution experience.
Ensure you have Python >=3.10 <=3.13 installed on your system. This project uses [Poetry](https://python-poetry.org/) for dependency management and package handling, offering a seamless setup and execution experience.
First, if you haven't already, install uv:
First, if you haven't already, install Poetry:
```bash
pip install uv
pip install poetry
```
Next, navigate to your project directory and install the dependencies:
(Optional) Lock the dependencies and install them by using the CLI command:
1. First lock the dependencies and install them by using the CLI command:
```bash
crewai install
```

View File

@@ -1,14 +1,15 @@
[project]
[tool.poetry]
name = "{{folder_name}}"
version = "0.1.0"
description = "{{name}} using crewAI"
authors = [{ name = "Your Name", email = "you@example.com" }]
requires-python = ">=3.10,<=3.13"
dependencies = [
"crewai[tools]>=0.67.1,<1.0.0"
]
authors = ["Your Name <you@example.com>"]
[project.scripts]
[tool.poetry.dependencies]
python = ">=3.10,<=3.13"
crewai = { extras = ["tools"], version = ">=0.70.1,<1.0.0" }
[tool.poetry.scripts]
{{folder_name}} = "{{folder_name}}.main:run"
run_crew = "{{folder_name}}.main:run"
train = "{{folder_name}}.main:train"
@@ -16,5 +17,5 @@ replay = "{{folder_name}}.main:replay"
test = "{{folder_name}}.main:test"
[build-system]
requires = ["hatchling"]
build-backend = "hatchling.build"
requires = ["poetry-core"]
build-backend = "poetry.core.masonry.api"

View File

@@ -4,17 +4,18 @@ Welcome to the {{crew_name}} Crew project, powered by [crewAI](https://crewai.co
## Installation
Ensure you have Python >=3.10 <=3.13 installed on your system. This project uses [UV](https://docs.astral.sh/uv/) for dependency management and package handling, offering a seamless setup and execution experience.
Ensure you have Python >=3.10 <=3.13 installed on your system. This project uses [Poetry](https://python-poetry.org/) for dependency management and package handling, offering a seamless setup and execution experience.
First, if you haven't already, install uv:
First, if you haven't already, install Poetry:
```bash
pip install uv
pip install poetry
```
Next, navigate to your project directory and install the dependencies:
(Optional) Lock the dependencies and install them by using the CLI command:
1. First lock the dependencies and then install them:
```bash
crewai install
```

View File

@@ -1,19 +1,19 @@
[project]
[tool.poetry]
name = "{{folder_name}}"
version = "0.1.0"
description = "{{name}} using crewAI"
authors = [{ name = "Your Name", email = "you@example.com" }]
requires-python = ">=3.10,<=3.13"
dependencies = [
"crewai[tools]>=0.67.1,<1.0.0",
"asyncio"
]
authors = ["Your Name <you@example.com>"]
[project.scripts]
[tool.poetry.dependencies]
python = ">=3.10,<=3.13"
crewai = { extras = ["tools"], version = ">=0.70.1,<1.0.0" }
asyncio = "*"
[tool.poetry.scripts]
{{folder_name}} = "{{folder_name}}.main:main"
run_flow = "{{folder_name}}.main:main"
plot_flow = "{{folder_name}}.main:plot"
[build-system]
requires = ["hatchling"]
build-backend = "hatchling.build"
requires = ["poetry-core"]
build-backend = "poetry.core.masonry.api"

View File

@@ -1,21 +1,20 @@
[project]
[tool.poetry]
name = "{{folder_name}}"
version = "0.1.0"
description = "{{name}} using crewAI"
authors = ["Your Name <you@example.com>"]
requires-python = ">=3.10,<=3.13"
dependencies = [
"crewai[tools]>=0.67.1,<1.0.0"
]
[project.scripts]
[tool.poetry.dependencies]
python = ">=3.10,<=3.13"
crewai = { extras = ["tools"], version = ">=0.70.1,<1.0.0" }
[tool.poetry.scripts]
{{folder_name}} = "{{folder_name}}.main:main"
run_crew = "{{folder_name}}.main:main"
train = "{{folder_name}}.main:train"
replay = "{{folder_name}}.main:replay"
test = "{{folder_name}}.main:test"
[build-system]
requires = ["hatchling"]
build-backend = "hatchling.build"
requires = ["poetry-core"]
build-backend = "poetry.core.masonry.api"

View File

@@ -6,13 +6,13 @@ custom tools to power up your crews.
## Installing
Ensure you have Python >=3.10 <=3.13 installed on your system. This project
uses [UV](https://docs.astral.sh/uv/) for dependency management and package
uses [Poetry](https://python-poetry.org/) for dependency management and package
handling, offering a seamless setup and execution experience.
First, if you haven't already, install `uv`:
First, if you haven't already, install Poetry:
```bash
pip install uv
pip install poetry
```
Next, navigate to your project directory and install the dependencies with:

View File

@@ -1,22 +1,20 @@
import base64
from pathlib import Path
import click
import os
import platform
import subprocess
import tempfile
from pathlib import Path
import click
from rich.console import Console
from crewai.cli import git
from crewai.cli.command import BaseCommand, PlusAPIMixin
from crewai.cli import git
from crewai.cli.utils import (
get_project_description,
get_project_name,
get_project_description,
get_project_version,
tree_copy,
tree_find_and_replace,
)
from rich.console import Console
console = Console()
@@ -26,8 +24,6 @@ class ToolCommand(BaseCommand, PlusAPIMixin):
A class to handle tool repository related operations for CrewAI projects.
"""
BASE_URL = "https://app.crewai.com/pypi/"
def __init__(self):
BaseCommand.__init__(self)
PlusAPIMixin.__init__(self, telemetry=self._telemetry)
@@ -86,7 +82,7 @@ class ToolCommand(BaseCommand, PlusAPIMixin):
with tempfile.TemporaryDirectory() as temp_build_dir:
subprocess.run(
["uv", "build", "--sdist", "--out-dir", temp_build_dir],
["poetry", "build", "-f", "sdist", "--output", temp_build_dir],
check=True,
capture_output=False,
)
@@ -96,7 +92,7 @@ class ToolCommand(BaseCommand, PlusAPIMixin):
)
if not tarball_filename:
console.print(
"Project build failed. Please ensure that the command `uv build --sdist` completes successfully.",
"Project build failed. Please ensure that the command `poetry build -f sdist` completes successfully.",
style="bold red",
)
raise SystemExit
@@ -153,39 +149,62 @@ class ToolCommand(BaseCommand, PlusAPIMixin):
raise SystemExit
login_response_json = login_response.json()
self._set_netrc_credentials(login_response_json["credential"])
for repository in login_response_json["repositories"]:
self._add_repository_to_poetry(
repository, login_response_json["credential"]
)
console.print(
"Successfully authenticated to the tool repository.", style="bold green"
"Succesfully authenticated to the tool repository.", style="bold green"
)
def _set_netrc_credentials(self, credentials):
# Create .netrc or _netrc file
netrc_filename = "_netrc" if platform.system() == "Windows" else ".netrc"
netrc_path = Path.home() / netrc_filename
def _add_repository_to_poetry(self, repository, credentials):
repository_handle = f"crewai-{repository['handle']}"
netrc_content = f"""machine app.crewai.com
login {credentials['username']}
password {credentials['password']}
"""
add_repository_command = [
"poetry",
"source",
"add",
"--priority=explicit",
repository_handle,
repository["url"],
]
add_repository_result = subprocess.run(
add_repository_command, text=True, check=True
)
with open(netrc_path, "a") as netrc_file:
netrc_file.write(netrc_content)
if add_repository_result.stderr:
click.echo(add_repository_result.stderr, err=True)
raise SystemExit
# Set appropriate permissions for Unix-like systems
if platform.system() != "Windows":
os.chmod(netrc_path, 0o600)
console.print(f"Added credentials to {netrc_filename}", style="bold green")
add_repository_credentials_command = [
"poetry",
"config",
f"http-basic.{repository_handle}",
credentials["username"],
credentials["password"],
]
add_repository_credentials_result = subprocess.run(
add_repository_credentials_command,
capture_output=False,
text=True,
check=True,
)
if add_repository_credentials_result.stderr:
click.echo(add_repository_credentials_result.stderr, err=True)
raise SystemExit
def _add_package(self, tool_details):
tool_handle = tool_details["handle"]
repository_handle = tool_details["repository"]["handle"]
pypi_index_handle = f"crewai-{repository_handle}"
add_package_command = [
"uv",
"poetry",
"add",
"--extra-index-url",
self.BASE_URL + repository_handle,
"--source",
pypi_index_handle,
tool_handle,
]
add_package_result = subprocess.run(

View File

@@ -5,12 +5,12 @@ import click
def train_crew(n_iterations: int, filename: str) -> None:
"""
Train the crew by running a command in the UV environment.
Train the crew by running a command in the Poetry environment.
Args:
n_iterations (int): The number of iterations to train the crew.
"""
command = ["uv", "run", "train", str(n_iterations), filename]
command = ["poetry", "run", "train", str(n_iterations), filename]
try:
if n_iterations <= 0:

View File

@@ -1,115 +0,0 @@
import shutil
import tomli_w
import tomllib
def update_crew() -> None:
"""Update the pyproject.toml of the Crew project to use uv."""
migrate_pyproject("pyproject.toml", "pyproject.toml")
def migrate_pyproject(input_file, output_file):
"""
Migrate the pyproject.toml from the Poetry format to the new PEP 621 format used by uv.
Once uv can perform this migration itself, this function will be deprecated.
"""
# Read the input pyproject.toml
with open(input_file, "rb") as f:
pyproject = tomllib.load(f)
# Initialize the new project structure
new_pyproject = {
"project": {},
"build-system": {"requires": ["hatchling"], "build-backend": "hatchling.build"},
}
# Migrate project metadata
if "tool" in pyproject and "poetry" in pyproject["tool"]:
poetry = pyproject["tool"]["poetry"]
new_pyproject["project"]["name"] = poetry.get("name")
new_pyproject["project"]["version"] = poetry.get("version")
new_pyproject["project"]["description"] = poetry.get("description")
new_pyproject["project"]["authors"] = [
{
"name": author.split("<")[0].strip(),
"email": author.split("<")[1].strip(">").strip(),
}
for author in poetry.get("authors", [])
]
new_pyproject["project"]["requires-python"] = poetry.get("python")
else:
# If it's already in the new format, just copy the project section
new_pyproject["project"] = pyproject.get("project", {})
# Migrate or copy dependencies
if "dependencies" in new_pyproject["project"]:
# If dependencies are already in the new format, keep them as is
pass
elif "dependencies" in poetry:
new_pyproject["project"]["dependencies"] = []
for dep, version in poetry["dependencies"].items():
if isinstance(version, dict): # Handle extras
extras = ",".join(version.get("extras", []))
new_dep = f"{dep}[{extras}]"
if "version" in version:
new_dep += parse_version(version["version"])
elif dep == "python":
new_pyproject["project"]["requires-python"] = version
continue
else:
new_dep = f"{dep}{parse_version(version)}"
new_pyproject["project"]["dependencies"].append(new_dep)
# Migrate or copy scripts
if "scripts" in poetry:
new_pyproject["project"]["scripts"] = poetry["scripts"]
elif "scripts" in pyproject.get("project", {}):
new_pyproject["project"]["scripts"] = pyproject["project"]["scripts"]
else:
new_pyproject["project"]["scripts"] = {}
if (
"run_crew" not in new_pyproject["project"]["scripts"]
and len(new_pyproject["project"]["scripts"]) > 0
):
# Extract the module name from any existing script
existing_scripts = new_pyproject["project"]["scripts"]
module_name = next(
(value.split(".")[0] for value in existing_scripts.values() if "." in value)
)
new_pyproject["project"]["scripts"]["run_crew"] = f"{module_name}.main:run"
# Migrate optional dependencies
if "extras" in poetry:
new_pyproject["project"]["optional-dependencies"] = poetry["extras"]
# Backup the old pyproject.toml
backup_file = "pyproject-old.toml"
shutil.copy2(input_file, backup_file)
print(f"Original pyproject.toml backed up as {backup_file}")
# Write the new pyproject.toml
with open(output_file, "wb") as f:
tomli_w.dump(new_pyproject, f)
print(f"Migration complete. New pyproject.toml written to {output_file}")
def parse_version(version: str) -> str:
"""Parse and convert version specifiers."""
if version.startswith("^"):
main_lib_version = version[1:].split(",")[0]
addtional_lib_version = None
if len(version[1:].split(",")) > 1:
addtional_lib_version = version[1:].split(",")[1]
return f">={main_lib_version}" + (
f",{addtional_lib_version}" if addtional_lib_version else ""
)
return version
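From the implementation above, a few illustrative conversions (note the helper maps `^` to a bare `>=` lower bound rather than Poetry's full caret range, which would also add an upper bound):

```python
assert parse_version("^1.8.3") == ">=1.8.3"
assert parse_version("^2.4.2,<3.0") == ">=2.4.2,<3.0"  # extra constraint preserved
assert parse_version(">=1.0.0") == ">=1.0.0"           # non-caret specifiers pass through
```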

View File

@@ -1,14 +1,13 @@
import importlib.metadata
import os
import shutil
import sys
from functools import reduce
from typing import Any, Dict, List
import click
from rich.console import Console
import sys
import importlib.metadata
from crewai.cli.authentication.utils import TokenManager
from functools import reduce
from rich.console import Console
from typing import Any, Dict, List
if sys.version_info >= (3, 11):
import tomllib
@@ -56,14 +55,17 @@ def simple_toml_parser(content):
def parse_toml(content):
if sys.version_info >= (3, 11):
return tomllib.loads(content)
return simple_toml_parser(content)
else:
return simple_toml_parser(content)
def get_project_name(
pyproject_path: str = "pyproject.toml", require: bool = False
) -> str | None:
"""Get the project name from the pyproject.toml file."""
return _get_project_attribute(pyproject_path, ["project", "name"], require=require)
return _get_project_attribute(
pyproject_path, ["tool", "poetry", "name"], require=require
)
def get_project_version(
@@ -71,7 +73,7 @@ def get_project_version(
) -> str | None:
"""Get the project version from the pyproject.toml file."""
return _get_project_attribute(
pyproject_path, ["project", "version"], require=require
pyproject_path, ["tool", "poetry", "version"], require=require
)
@@ -80,7 +82,7 @@ def get_project_description(
) -> str | None:
"""Get the project description from the pyproject.toml file."""
return _get_project_attribute(
pyproject_path, ["project", "description"], require=require
pyproject_path, ["tool", "poetry", "description"], require=require
)
@@ -95,9 +97,10 @@ def _get_project_attribute(
pyproject_content = parse_toml(f.read())
dependencies = (
_get_nested_value(pyproject_content, ["project", "dependencies"]) or []
_get_nested_value(pyproject_content, ["tool", "poetry", "dependencies"])
or {}
)
if not any(True for dep in dependencies if "crewai" in dep):
if "crewai" not in dependencies:
raise Exception("crewai is not in the dependencies.")
attribute = _get_nested_value(pyproject_content, keys)
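`_get_nested_value` itself is not shown in this hunk; given the `functools.reduce` import above, it plausibly looks like the following sketch (an assumption, not the repository's actual code):

```python
from functools import reduce
from typing import Any, Dict, List

def _get_nested_value(data: Dict[str, Any], keys: List[str]) -> Any:
    # Walks nested dicts, e.g. keys=["tool", "poetry", "name"].
    return reduce(lambda d, key: d.get(key, {}) if isinstance(d, dict) else {}, keys, data)
```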

View File

@@ -27,6 +27,7 @@ from crewai.llm import LLM
from crewai.memory.entity.entity_memory import EntityMemory
from crewai.memory.long_term.long_term_memory import LongTermMemory
from crewai.memory.short_term.short_term_memory import ShortTermMemory
from crewai.memory.user.user_memory import UserMemory
from crewai.process import Process
from crewai.task import Task
from crewai.tasks.conditional_task import ConditionalTask
@@ -94,6 +95,7 @@ class Crew(BaseModel):
_short_term_memory: Optional[InstanceOf[ShortTermMemory]] = PrivateAttr()
_long_term_memory: Optional[InstanceOf[LongTermMemory]] = PrivateAttr()
_entity_memory: Optional[InstanceOf[EntityMemory]] = PrivateAttr()
_user_memory: Optional[InstanceOf[UserMemory]] = PrivateAttr()
_train: Optional[bool] = PrivateAttr(default=False)
_train_iteration: Optional[int] = PrivateAttr()
_inputs: Optional[Dict[str, Any]] = PrivateAttr(default=None)
@@ -114,6 +116,10 @@ class Crew(BaseModel):
default=False,
description="Whether the crew should use memory to store memories of it's execution",
)
memory_provider: Optional[str] = Field(
default=None,
description="The memory provider to be used for the crew.",
)
short_term_memory: Optional[InstanceOf[ShortTermMemory]] = Field(
default=None,
description="An Instance of the ShortTermMemory to be used by the Crew",
@@ -207,6 +213,14 @@ class Crew(BaseModel):
# TODO: Improve typing
return json.loads(v) if isinstance(v, Json) else v # type: ignore
@field_validator("memory_provider", mode="before")
@classmethod
def validate_memory_provider(cls, v: Optional[str]) -> Optional[str]:
"""Ensure memory provider is either None or 'mem0'."""
if v not in (None, "mem0"):
raise ValueError("Memory provider must be either None or 'mem0'.")
return v
@model_validator(mode="after")
def set_private_attrs(self) -> "Crew":
"""Set private attributes."""
@@ -238,12 +252,23 @@ class Crew(BaseModel):
self._short_term_memory = (
self.short_term_memory
if self.short_term_memory
else ShortTermMemory(crew=self, embedder_config=self.embedder)
else ShortTermMemory(
memory_provider=self.memory_provider,
crew=self,
embedder_config=self.embedder,
)
)
self._entity_memory = (
self.entity_memory
if self.entity_memory
else EntityMemory(crew=self, embedder_config=self.embedder)
else EntityMemory(
memory_provider=self.memory_provider,
crew=self,
embedder_config=self.embedder,
)
)
self._user_memory = (
UserMemory(crew=self) if self.memory_provider == "mem0" else None
)
return self
@@ -897,6 +922,7 @@ class Crew(BaseModel):
"_short_term_memory",
"_long_term_memory",
"_entity_memory",
"_user_memory",
"_telemetry",
"agents",
"tasks",

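A minimal sketch of the new field in use, mirroring the constructor calls in the validation test further down; the mem0 path assumes MEM0_API_KEY is exported.

```python
from crewai import Agent, Crew, Task

researcher = Agent(
    role="Researcher",
    goal="Conduct research on AI",
    backstory="An experienced AI researcher",
)
task = Task(
    description="Research the latest trends in AI",
    expected_output="A report on AI trends",
    agent=researcher,
)

# memory_provider accepts only None or "mem0"; any other value fails the
# field validator above with a ValidationError.
crew = Crew(
    agents=[researcher],
    tasks=[task],
    memory=True,
    memory_provider="mem0",  # wires UserMemory/Mem0Storage into the crew
)
```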
View File

@@ -1,5 +1,6 @@
from .entity.entity_memory import EntityMemory
from .long_term.long_term_memory import LongTermMemory
from .short_term.short_term_memory import ShortTermMemory
from .user.user_memory import UserMemory
__all__ = ["EntityMemory", "LongTermMemory", "ShortTermMemory"]
__all__ = ["UserMemory", "EntityMemory", "LongTermMemory", "ShortTermMemory"]

View File

@@ -1,13 +1,22 @@
from typing import Optional
from crewai.memory import EntityMemory, LongTermMemory, ShortTermMemory
from crewai.memory import EntityMemory, LongTermMemory, ShortTermMemory, UserMemory
class ContextualMemory:
def __init__(self, stm: ShortTermMemory, ltm: LongTermMemory, em: EntityMemory):
def __init__(
self,
stm: ShortTermMemory,
ltm: LongTermMemory,
em: EntityMemory,
um: UserMemory,
memory_provider: Optional[str] = None,
):
self.stm = stm
self.ltm = ltm
self.em = em
self.um = um
self.memory_provider = memory_provider
def build_context_for_task(self, task, context) -> str:
"""
@@ -23,6 +32,8 @@ class ContextualMemory:
context.append(self._fetch_ltm_context(task.description))
context.append(self._fetch_stm_context(query))
context.append(self._fetch_entity_context(query))
if self.memory_provider == "mem0":
context.append(self._fetch_user_memories(query))
return "\n".join(filter(None, context))
def _fetch_stm_context(self, query) -> str:
@@ -60,6 +71,22 @@ class ContextualMemory:
"""
em_results = self.em.search(query)
formatted_results = "\n".join(
[f"- {result['context']}" for result in em_results] # type: ignore # Invalid index type "str" for "str"; expected type "SupportsIndex | slice"
[
f"- {result['memory'] if self.memory_provider == 'mem0' else result['context']}"
for result in em_results
] # type: ignore # Invalid index type "str" for "str"; expected type "SupportsIndex | slice"
)
return f"Entities:\n{formatted_results}" if em_results else ""
def _fetch_user_memories(self, query) -> str:
"""
Fetches relevant user memories and preferences from User Memory related
to the task's description and expected_output.
"""
um_results = self.um.search(query)
formatted_results = "\n".join(
[f"- {result['memory']}" for result in um_results]
)
return f"User memories/preferences:\n{formatted_results}" if um_results else ""

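For illustration, a sketch of wiring ContextualMemory by hand with the new um slot; inside Crew this happens automatically, and the mem0-backed instances assume MEM0_API_KEY is set.

```python
from crewai.memory import EntityMemory, LongTermMemory, ShortTermMemory, UserMemory
from crewai.memory.contextual.contextual_memory import ContextualMemory

provider = "mem0"  # or None, in which case user memories are skipped

contextual = ContextualMemory(
    stm=ShortTermMemory(memory_provider=provider),
    ltm=LongTermMemory(),
    em=EntityMemory(memory_provider=provider),
    um=UserMemory(),  # backed by Mem0Storage; requires MEM0_API_KEY
    memory_provider=provider,
)

# _fetch_user_memories only contributes when memory_provider == "mem0":
# context = contextual.build_context_for_task(task, "additional context")
```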
View File

@@ -1,6 +1,7 @@
from crewai.memory.entity.entity_memory_item import EntityMemoryItem
from crewai.memory.memory import Memory
from crewai.memory.storage.rag_storage import RAGStorage
from crewai.memory.storage.mem0_storage import Mem0Storage
class EntityMemory(Memory):
@@ -10,22 +11,39 @@ class EntityMemory(Memory):
Inherits from the Memory class.
"""
def __init__(self, crew=None, embedder_config=None, storage=None):
storage = (
storage
if storage
else RAGStorage(
def __init__(
self, memory_provider=None, crew=None, embedder_config=None, storage=None
):
self.memory_provider = memory_provider
if self.memory_provider == "mem0":
storage = Mem0Storage(
type="entities",
allow_reset=False,
embedder_config=embedder_config,
crew=crew,
)
)
else:
storage = (
storage
if storage
else RAGStorage(
type="entities",
allow_reset=False,
embedder_config=embedder_config,
crew=crew,
)
)
super().__init__(storage)
def save(self, item: EntityMemoryItem) -> None: # type: ignore # BUG?: Signature of "save" incompatible with supertype "Memory"
"""Saves an entity item into the SQLite storage."""
data = f"{item.name}({item.type}): {item.description}"
if self.memory_provider == "mem0":
data = f"""
Remember details about the following entity:
Name: {item.name}
Type: {item.type}
Entity Description: {item.description}
"""
else:
data = f"{item.name}({item.type}): {item.description}"
super().save(data, item.metadata)
def reset(self) -> None:

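A sketch of the two save paths, reusing the EntityMemoryItem fields from the entity-memory tests below; the default path assumes a usable embedder configuration and the mem0 path a valid MEM0_API_KEY.

```python
from crewai.memory.entity.entity_memory import EntityMemory
from crewai.memory.entity.entity_memory_item import EntityMemoryItem

item = EntityMemoryItem(
    name="John Doe",
    type="Person",
    description="A software engineer",
    relationships="Works at TechCorp",
)

# Default path: RAGStorage, stored as "John Doe(Person): A software engineer".
EntityMemory().save(item)

# Mem0 path: Mem0Storage, stored as the multi-line
# "Remember details about the following entity: ..." block.
EntityMemory(memory_provider="mem0").save(item)
```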
View File

@@ -1,4 +1,4 @@
from typing import Any, Dict
from typing import Any, Dict, List
from crewai.memory.long_term.long_term_memory_item import LongTermMemoryItem
from crewai.memory.memory import Memory
@@ -18,18 +18,25 @@ class LongTermMemory(Memory):
storage = storage if storage else LTMSQLiteStorage()
super().__init__(storage)
def save(self, item: LongTermMemoryItem) -> None: # type: ignore # BUG?: Signature of "save" incompatible with supertype "Memory"
metadata = item.metadata
metadata.update({"agent": item.agent, "expected_output": item.expected_output})
def save(self, item: LongTermMemoryItem) -> None:
metadata = item.metadata.copy() # Create a copy to avoid modifying the original
metadata.update(
{
"agent": item.agent,
"expected_output": item.expected_output,
"quality": item.quality, # Add quality to metadata
}
)
self.storage.save( # type: ignore # BUG?: Unexpected keyword argument "task_description","score","datetime" for "save" of "Storage"
task_description=item.task,
score=metadata["quality"],
score=item.quality,
metadata=metadata,
datetime=item.datetime,
)
def search(self, task: str, latest_n: int = 3) -> Dict[str, Any]:
return self.storage.load(task, latest_n) # type: ignore # BUG?: "Storage" has no attribute "load"
def search(self, task: str, latest_n: int = 3) -> List[Dict[str, Any]]:
results = self.storage.load(task, latest_n)
return results
def reset(self) -> None:
self.storage.reset()

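A short sketch of the reworked flow: metadata is copied rather than mutated, quality lands in both the score and the metadata, and search now returns a list of row dicts.

```python
from crewai.memory.long_term.long_term_memory import LongTermMemory
from crewai.memory.long_term.long_term_memory_item import LongTermMemoryItem

ltm = LongTermMemory()  # defaults to LTMSQLiteStorage

ltm.save(
    LongTermMemoryItem(
        agent="researcher",
        task="summarize AI trends",
        expected_output="a short report",
        datetime="2023-01-01 12:00:00",
        quality=0.8,
        metadata={"notes": "first run"},  # left untouched by save()
    )
)

for row in ltm.search("summarize AI trends", latest_n=3):
    print(row["score"], row["metadata"]["agent"])
```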
View File

@@ -23,5 +23,13 @@ class Memory:
self.storage.save(value, metadata)
def search(self, query: str) -> Dict[str, Any]:
return self.storage.search(query)
def search(
self,
query: str,
limit: int = 3,
filters: dict | None = None,
score_threshold: float = 0.35,
) -> Dict[str, Any]:
# Avoid a mutable default argument; treat None as "no filters".
return self.storage.search(
query=query, limit=limit, filters=filters or {}, score_threshold=score_threshold
)

View File

@@ -2,6 +2,7 @@ from typing import Any, Dict, Optional
from crewai.memory.memory import Memory
from crewai.memory.short_term.short_term_memory_item import ShortTermMemoryItem
from crewai.memory.storage.rag_storage import RAGStorage
from crewai.memory.storage.mem0_storage import Mem0Storage
class ShortTermMemory(Memory):
@@ -13,14 +14,20 @@ class ShortTermMemory(Memory):
MemoryItem instances.
"""
def __init__(self, crew=None, embedder_config=None, storage=None):
storage = (
storage
if storage
else RAGStorage(
type="short_term", embedder_config=embedder_config, crew=crew
def __init__(
self, memory_provider=None, crew=None, embedder_config=None, storage=None
):
self.memory_provider = memory_provider
if self.memory_provider == "mem0":
storage = Mem0Storage(type="short_term", crew=crew)
else:
storage = (
storage
if storage
else RAGStorage(
type="short_term", embedder_config=embedder_config, crew=crew
)
)
)
super().__init__(storage)
def save(
@@ -30,11 +37,21 @@ class ShortTermMemory(Memory):
agent: Optional[str] = None,
) -> None:
item = ShortTermMemoryItem(data=value, metadata=metadata, agent=agent)
if self.memory_provider == "mem0":
item.data = f"Remember the following insights from Agent run: {item.data}"
super().save(value=item.data, metadata=item.metadata, agent=item.agent)
def search(self, query: str, score_threshold: float = 0.35):
return self.storage.search(query=query, score_threshold=score_threshold) # type: ignore # BUG? The reference is to the parent class, but the parent class does not have this parameters
def search(
self,
query: str,
limit: int = 3,
filters: dict | None = None,
score_threshold: float = 0.35,
):
return self.storage.search(
query=query, limit=limit, filters=filters or {}, score_threshold=score_threshold
)
def reset(self) -> None:
try:

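A usage sketch of both short-term paths; under mem0 the saved value is prefixed with "Remember the following insights from Agent run:" before storage, and MEM0_API_KEY is assumed for that branch.

```python
from crewai.memory.short_term.short_term_memory import ShortTermMemory

# Default path: RAGStorage built from the embedder config.
stm = ShortTermMemory()

stm.save(
    value="User asked about AI trends in healthcare.",
    metadata={"task": "research"},
    agent="researcher",
)

hits = stm.search("AI trends", limit=3, score_threshold=0.35)

# Mem0 path: identical API, Mem0Storage underneath.
stm_mem0 = ShortTermMemory(memory_provider="mem0")
```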
View File

@@ -7,8 +7,10 @@ class Storage:
def save(self, value: Any, metadata: Dict[str, Any]) -> None:
pass
def search(self, key: str) -> Dict[str, Any]: # type: ignore
pass
def search(
self, query: str, limit: int, filters: Dict, score_threshold: float
) -> Dict[str, Any]: # type: ignore
return {}
def reset(self) -> None:
pass

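To make the widened contract concrete, a hypothetical in-memory backend; the class name and the naive substring matching are invented for illustration only.

```python
from typing import Any, Dict, List

from crewai.memory.storage.interface import Storage


class InMemoryStorage(Storage):
    """Hypothetical Storage backend illustrating the new search contract."""

    def __init__(self) -> None:
        self._items: List[Dict[str, Any]] = []

    def save(self, value: Any, metadata: Dict[str, Any]) -> None:
        self._items.append({"value": value, "metadata": metadata or {}})

    def search(
        self, query: str, limit: int, filters: Dict, score_threshold: float
    ) -> Dict[str, Any]:
        # Naive substring match; a real backend would embed, score, and
        # apply score_threshold and filters.
        hits = [i for i in self._items if query in str(i["value"])]
        return {"results": hits[:limit]}

    def reset(self) -> None:
        self._items.clear()
```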
View File

@@ -0,0 +1,45 @@
import os
from typing import Any, Dict, List, Optional
from crewai.memory.storage.interface import Storage
from mem0 import MemoryClient
class Mem0Storage(Storage):
"""
Extends Storage to handle embedding and searching across entities using Mem0.
"""
def __init__(self, type, crew=None):
super().__init__()
# When no OpenAI key is set and OPENAI_BASE_URL is anything other than
# the default OpenAI endpoint (including unset), fall back to a
# placeholder key so clients that only require the variable do not fail.
if (
not os.getenv("OPENAI_API_KEY")
and os.getenv("OPENAI_BASE_URL") != "https://api.openai.com/v1"
):
os.environ["OPENAI_API_KEY"] = "fake"
if not os.getenv("MEM0_API_KEY"):
raise EnvironmentError("MEM0_API_KEY is not set.")
agent_roles = [agent.role for agent in crew.agents] if crew else []
self.app_id = "_".join(agent_roles)
self.memory = MemoryClient(api_key=os.getenv("MEM0_API_KEY"))
def save(self, value: Any, metadata: Dict[str, Any]) -> None:
self.memory.add(value, metadata=metadata, app_id=self.app_id)
def search(
self,
query: str,
limit: int = 3,
filters: Optional[dict] = None,
score_threshold: float = 0.35,
) -> List[Any]:
params = {"query": query, "limit": limit, "app_id": self.app_id}
if filters:
params["filters"] = filters
results = self.memory.search(**params)
return [r for r in results if float(r["score"]) >= score_threshold]

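A direct-usage sketch; the key value is a placeholder, and with no crew passed the derived app_id is an empty string.

```python
import os

from crewai.memory.storage.mem0_storage import Mem0Storage

os.environ.setdefault("MEM0_API_KEY", "<your-mem0-key>")  # placeholder

storage = Mem0Storage(type="user")  # no crew -> app_id == ""

storage.save(
    "Prefers concise, bullet-point answers.",
    metadata={"source": "onboarding"},
)

# Only results scoring at or above score_threshold are returned.
hits = storage.search("answer style", limit=3, score_threshold=0.35)
```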
View File

@@ -95,7 +95,7 @@ class RAGStorage(Storage):
self,
query: str,
limit: int = 3,
filter: Optional[dict] = None,
filters: Optional[dict] = None,
score_threshold: float = 0.35,
) -> List[Any]:
if not hasattr(self, "app"):
@@ -105,8 +105,8 @@ class RAGStorage(Storage):
with suppress_logging():
try:
results = (
self.app.search(query, limit, where=filter)
if filter
self.app.search(query, limit, where=filters)
if filters
else self.app.search(query, limit)
)
except InvalidDimensionException:

View File

@@ -0,0 +1,43 @@
from typing import Any, Dict, Optional
from crewai.memory.memory import Memory
from crewai.memory.storage.mem0_storage import Mem0Storage
class UserMemory(Memory):
"""
UserMemory class for handling user memory storage and retrieval.
Inherits from the Memory class and uses an instance of a class that
adheres to the Storage interface for data storage, specifically
working with MemoryItem instances.
def __init__(self, crew=None):
storage = Mem0Storage(type="user", crew=crew)
super().__init__(storage)
def save(
self,
value,
metadata: Optional[Dict[str, Any]] = None,
agent: Optional[str] = None,
) -> None:
data = f"Remember the details about the user: {value}"
super().save(data, metadata)
def search(
self,
query: str,
limit: int = 3,
filters: dict | None = None,
score_threshold: float = 0.35,
):
return super().search(
query=query,
limit=limit,
filters=filters or {},
score_threshold=score_threshold,
)

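And the matching user-memory sketch; results carry a "memory" key, which the contextual-memory code above relies on.

```python
from crewai.memory import UserMemory

um = UserMemory()  # wraps Mem0Storage(type="user"); requires MEM0_API_KEY

um.save("Prefers metric units and short summaries.")

for hit in um.search("formatting preferences", limit=3):
    print(hit["memory"])
```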
View File

@@ -0,0 +1,8 @@
from typing import Any, Dict, Optional
class UserMemoryItem:
def __init__(self, data: Any, user: str, metadata: Optional[Dict[str, Any]] = None):
self.data = data
self.user = user
self.metadata = metadata if metadata is not None else {}

View File

@@ -1,21 +1,21 @@
"""Test Agent creation and execution basic functionality."""
import os
from unittest import mock
from unittest.mock import patch
import os
import pytest
from crewai_tools import tool
from crewai import Agent, Crew, Task
from crewai.agents.cache import CacheHandler
from crewai.agents.crew_agent_executor import CrewAgentExecutor
from crewai.agents.parser import AgentAction, CrewAgentParser, OutputParserException
from crewai.llm import LLM
from crewai.agents.parser import CrewAgentParser, OutputParserException
from crewai.tools.tool_calling import InstructorToolCalling
from crewai.tools.tool_usage import ToolUsage
from crewai.tools.tool_usage_events import ToolUsageFinished
from crewai.utilities import RPMController
from crewai_tools import tool
from crewai.agents.parser import AgentAction
from crewai.utilities.events import Emitter
@@ -73,7 +73,7 @@ def test_agent_creation():
def test_agent_default_values():
agent = Agent(role="test role", goal="test goal", backstory="test backstory")
assert agent.llm.model == "gpt-4o-mini"
assert agent.llm.model == "gpt-4o"
assert agent.allow_delegation is False
@@ -116,7 +116,6 @@ def test_custom_llm_temperature_preservation():
@pytest.mark.vcr(filter_headers=["authorization"])
def test_agent_execute_task():
from langchain_openai import ChatOpenAI
from crewai import Task
agent = Agent(
@@ -207,7 +206,7 @@ def test_logging_tool_usage():
verbose=True,
)
assert agent.llm.model == "gpt-4o-mini"
assert agent.llm.model == "gpt-4o"
assert agent.tools_handler.last_used_tool == {}
task = Task(
description="What is 3 times 4?",
@@ -603,7 +602,6 @@ def test_agent_respect_the_max_rpm_set(capsys):
@pytest.mark.vcr(filter_headers=["authorization"])
def test_agent_respect_the_max_rpm_set_over_crew_rpm(capsys):
from unittest.mock import patch
from crewai_tools import tool
@tool
@@ -695,7 +693,6 @@ def test_agent_without_max_rpm_respet_crew_rpm(capsys):
@pytest.mark.vcr(filter_headers=["authorization"])
def test_agent_error_on_parsing_tool(capsys):
from unittest.mock import patch
from crewai_tools import tool
@tool
@@ -858,9 +855,7 @@ def test_agent_function_calling_llm():
tasks = [essay]
crew = Crew(agents=[agent1], tasks=tasks)
from unittest.mock import patch
import instructor
from crewai.tools.tool_usage import ToolUsage
with patch.object(

View File

@@ -1,11 +1,11 @@
import sys
import unittest
from io import StringIO
from unittest.mock import MagicMock, Mock, patch
import pytest
import requests
import sys
import unittest
from io import StringIO
from requests.exceptions import JSONDecodeError
from unittest.mock import MagicMock, Mock, patch
from crewai.cli.deploy.main import DeployCommand
from crewai.cli.utils import parse_toml
@@ -228,11 +228,13 @@ class TestDeployCommand(unittest.TestCase):
"builtins.open",
new_callable=unittest.mock.mock_open,
read_data="""
[project]
[tool.poetry]
name = "test_project"
version = "0.1.0"
requires-python = ">=3.10,<=3.13"
dependencies = ["crewai"]
[tool.poetry.dependencies]
python = "^3.10"
crewai = { extras = ["tools"], version = ">=0.51.0,<1.0.0" }
""",
)
def test_get_project_name_python_310(self, mock_open):
@@ -246,11 +248,13 @@ class TestDeployCommand(unittest.TestCase):
"builtins.open",
new_callable=unittest.mock.mock_open,
read_data="""
[project]
[tool.poetry]
name = "test_project"
version = "0.1.0"
requires-python = ">=3.10,<=3.13"
dependencies = ["crewai"]
[tool.poetry.dependencies]
python = "^3.11"
crewai = { extras = ["tools"], version = ">=0.51.0,<1.0.0" }
""",
)
def test_get_project_name_python_311_plus(self, mock_open):

View File

@@ -18,12 +18,12 @@ from crewai.cli import evaluate_crew
def test_crew_success(mock_subprocess_run, n_iterations, model):
"""Test the crew function for successful execution."""
mock_subprocess_run.return_value = subprocess.CompletedProcess(
args=f"uv run test {n_iterations} {model}", returncode=0
args=f"poetry run test {n_iterations} {model}", returncode=0
)
result = evaluate_crew.evaluate_crew(n_iterations, model)
mock_subprocess_run.assert_called_once_with(
["uv", "run", "test", str(n_iterations), model],
["poetry", "run", "test", str(n_iterations), model],
capture_output=False,
text=True,
check=True,
@@ -55,14 +55,14 @@ def test_test_crew_called_process_error(mock_subprocess_run, click):
n_iterations = 5
mock_subprocess_run.side_effect = subprocess.CalledProcessError(
returncode=1,
cmd=["uv", "run", "test", str(n_iterations), "gpt-4o"],
cmd=["poetry", "run", "test", str(n_iterations), "gpt-4o"],
output="Error",
stderr="Some error occurred",
)
evaluate_crew.evaluate_crew(n_iterations, "gpt-4o")
mock_subprocess_run.assert_called_once_with(
["uv", "run", "test", "5", "gpt-4o"],
["poetry", "run", "test", "5", "gpt-4o"],
capture_output=False,
text=True,
check=True,
@@ -70,7 +70,7 @@ def test_test_crew_called_process_error(mock_subprocess_run, click):
click.echo.assert_has_calls(
[
mock.call.echo(
"An error occurred while testing the crew: Command '['uv', 'run', 'test', '5', 'gpt-4o']' returned non-zero exit status 1.",
"An error occurred while testing the crew: Command '['poetry', 'run', 'test', '5', 'gpt-4o']' returned non-zero exit status 1.",
err=True,
),
mock.call.echo("Error", err=True),
@@ -87,7 +87,7 @@ def test_test_crew_unexpected_exception(mock_subprocess_run, click):
evaluate_crew.evaluate_crew(n_iterations, "gpt-4o")
mock_subprocess_run.assert_called_once_with(
["uv", "run", "test", "5", "gpt-4o"],
["poetry", "run", "test", "5", "gpt-4o"],
capture_output=False,
text=True,
check=True,

View File

@@ -1,16 +1,13 @@
import os
import tempfile
import unittest
import unittest.mock
import os
from contextlib import contextmanager
from io import StringIO
from unittest import mock
from unittest.mock import MagicMock, patch
from pytest import raises
from crewai.cli.tools.main import ToolCommand
from io import StringIO
from unittest.mock import patch, MagicMock
@contextmanager
def in_temp_dir():
@@ -22,7 +19,6 @@ def in_temp_dir():
finally:
os.chdir(original_dir)
@patch("crewai.cli.tools.main.subprocess.run")
def test_create_success(mock_subprocess):
with in_temp_dir():
@@ -42,7 +38,9 @@ def test_create_success(mock_subprocess):
)
assert os.path.isfile(os.path.join("test_tool", "src", "test_tool", "tool.py"))
with open(os.path.join("test_tool", "src", "test_tool", "tool.py"), "r") as f:
with open(
os.path.join("test_tool", "src", "test_tool", "tool.py"), "r"
) as f:
content = f.read()
assert "class TestTool" in content
@@ -51,7 +49,6 @@ def test_create_success(mock_subprocess):
assert "Creating custom tool test_tool..." in output
@patch("crewai.cli.tools.main.subprocess.run")
@patch("crewai.cli.plus_api.PlusAPI.get_tool")
def test_install_success(mock_get, mock_subprocess_run):
@@ -70,15 +67,9 @@ def test_install_success(mock_get, mock_subprocess_run):
tool_command.install("sample-tool")
output = fake_out.getvalue()
mock_get.assert_has_calls([mock.call("sample-tool"), mock.call().json()])
mock_get.assert_called_once_with("sample-tool")
mock_subprocess_run.assert_any_call(
[
"uv",
"add",
"--extra-index-url",
"https://app.crewai.com/pypi/sample-repo",
"sample-tool",
],
["poetry", "add", "--source", "crewai-sample-repo", "sample-tool"],
capture_output=False,
text=True,
check=True,
@@ -86,7 +77,6 @@ def test_install_success(mock_get, mock_subprocess_run):
assert "Succesfully installed sample-tool" in output
@patch("crewai.cli.plus_api.PlusAPI.get_tool")
def test_install_tool_not_found(mock_get):
mock_get_response = MagicMock()
@@ -105,7 +95,6 @@ def test_install_tool_not_found(mock_get):
mock_get.assert_called_once_with("non-existent-tool")
assert "No tool found with this name" in output
@patch("crewai.cli.plus_api.PlusAPI.get_tool")
def test_install_api_error(mock_get):
mock_get_response = MagicMock()
@@ -124,16 +113,15 @@ def test_install_api_error(mock_get):
mock_get.assert_called_once_with("error-tool")
assert "Failed to get tool details" in output
@patch("crewai.cli.tools.main.git.Repository.is_synced", return_value=False)
def test_publish_when_not_in_sync(mock_is_synced):
with patch("sys.stdout", new=StringIO()) as fake_out, raises(SystemExit):
with patch("sys.stdout", new=StringIO()) as fake_out, \
raises(SystemExit):
tool_command = ToolCommand()
tool_command.publish(is_public=True)
assert "Local changes need to be resolved before publishing" in fake_out.getvalue()
@patch("crewai.cli.tools.main.get_project_name", return_value="sample-tool")
@patch("crewai.cli.tools.main.get_project_version", return_value="1.0.0")
@patch("crewai.cli.tools.main.get_project_description", return_value="A sample tool")
@@ -168,7 +156,7 @@ def test_publish_when_not_in_sync_and_force(
mock_get_project_version.assert_called_with(require=True)
mock_get_project_description.assert_called_with(require=False)
mock_subprocess_run.assert_called_with(
["uv", "build", "--sdist", "--out-dir", unittest.mock.ANY],
["poetry", "build", "-f", "sdist", "--output", unittest.mock.ANY],
check=True,
capture_output=False,
)
@@ -181,7 +169,6 @@ def test_publish_when_not_in_sync_and_force(
encoded_file=unittest.mock.ANY,
)
@patch("crewai.cli.tools.main.get_project_name", return_value="sample-tool")
@patch("crewai.cli.tools.main.get_project_version", return_value="1.0.0")
@patch("crewai.cli.tools.main.get_project_description", return_value="A sample tool")
@@ -216,7 +203,7 @@ def test_publish_success(
mock_get_project_version.assert_called_with(require=True)
mock_get_project_description.assert_called_with(require=False)
mock_subprocess_run.assert_called_with(
["uv", "build", "--sdist", "--out-dir", unittest.mock.ANY],
["poetry", "build", "-f", "sdist", "--output", unittest.mock.ANY],
check=True,
capture_output=False,
)
@@ -229,7 +216,6 @@ def test_publish_success(
encoded_file=unittest.mock.ANY,
)
@patch("crewai.cli.tools.main.get_project_name", return_value="sample-tool")
@patch("crewai.cli.tools.main.get_project_version", return_value="1.0.0")
@patch("crewai.cli.tools.main.get_project_description", return_value="A sample tool")
@@ -268,7 +254,6 @@ def test_publish_failure(
assert "Failed to complete operation" in output
assert "Name is already taken" in output
@patch("crewai.cli.tools.main.get_project_name", return_value="sample-tool")
@patch("crewai.cli.tools.main.get_project_version", return_value="1.0.0")
@patch("crewai.cli.tools.main.get_project_description", return_value="A sample tool")
@@ -306,3 +291,54 @@ def test_publish_api_error(
mock_publish.assert_called_once()
assert "Request to Enterprise API failed" in output
@patch("crewai.cli.plus_api.PlusAPI.login_to_tool_repository")
@patch("crewai.cli.tools.main.subprocess.run")
def test_login_success(mock_subprocess_run, mock_login):
mock_login_response = MagicMock()
mock_login_response.status_code = 200
mock_login_response.json.return_value = {
"repositories": [
{
"handle": "tools",
"url": "https://example.com/repo",
}
],
"credential": {"username": "user", "password": "pass"},
}
mock_login.return_value = mock_login_response
mock_subprocess_run.return_value = MagicMock(stderr=None)
tool_command = ToolCommand()
with patch("sys.stdout", new=StringIO()) as fake_out:
tool_command.login()
output = fake_out.getvalue()
mock_login.assert_called_once()
mock_subprocess_run.assert_any_call(
[
"poetry",
"source",
"add",
"--priority=explicit",
"crewai-tools",
"https://example.com/repo",
],
text=True,
check=True,
)
mock_subprocess_run.assert_any_call(
[
"poetry",
"config",
"http-basic.crewai-tools",
"user",
"pass",
],
capture_output=False,
text=True,
check=True,
)
assert "Succesfully authenticated to the tool repository" in output

View File

@@ -8,7 +8,7 @@ from crewai.cli.train_crew import train_crew
def test_train_crew_positive_iterations(mock_subprocess_run):
n_iterations = 5
mock_subprocess_run.return_value = subprocess.CompletedProcess(
args=["uv", "run", "train", str(n_iterations)],
args=["poetry", "run", "train", str(n_iterations)],
returncode=0,
stdout="Success",
stderr="",
@@ -17,7 +17,7 @@ def test_train_crew_positive_iterations(mock_subprocess_run):
train_crew(n_iterations, "trained_agents_data.pkl")
mock_subprocess_run.assert_called_once_with(
["uv", "run", "train", str(n_iterations), "trained_agents_data.pkl"],
["poetry", "run", "train", str(n_iterations), "trained_agents_data.pkl"],
capture_output=False,
text=True,
check=True,
@@ -48,14 +48,14 @@ def test_train_crew_called_process_error(mock_subprocess_run, click):
n_iterations = 5
mock_subprocess_run.side_effect = subprocess.CalledProcessError(
returncode=1,
cmd=["uv", "run", "train", str(n_iterations)],
cmd=["poetry", "run", "train", str(n_iterations)],
output="Error",
stderr="Some error occurred",
)
train_crew(n_iterations, "trained_agents_data.pkl")
mock_subprocess_run.assert_called_once_with(
["uv", "run", "train", str(n_iterations), "trained_agents_data.pkl"],
["poetry", "run", "train", str(n_iterations), "trained_agents_data.pkl"],
capture_output=False,
text=True,
check=True,
@@ -63,7 +63,7 @@ def test_train_crew_called_process_error(mock_subprocess_run, click):
click.echo.assert_has_calls(
[
mock.call.echo(
"An error occurred while training the crew: Command '['uv', 'run', 'train', '5']' returned non-zero exit status 1.",
"An error occurred while training the crew: Command '['poetry', 'run', 'train', '5']' returned non-zero exit status 1.",
err=True,
),
mock.call.echo("Error", err=True),
@@ -79,7 +79,7 @@ def test_train_crew_unexpected_exception(mock_subprocess_run, click):
train_crew(n_iterations, "trained_agents_data.pkl")
mock_subprocess_run.assert_called_once_with(
["uv", "run", "train", str(n_iterations), "trained_agents_data.pkl"],
["poetry", "run", "train", str(n_iterations), "trained_agents_data.pkl"],
capture_output=False,
text=True,
check=True,

View File

@@ -23,6 +23,7 @@ from crewai.types.usage_metrics import UsageMetrics
from crewai.utilities import Logger
from crewai.utilities.rpm_controller import RPMController
from crewai.utilities.task_output_storage_handler import TaskOutputStorageHandler
from pydantic_core import ValidationError
ceo = Agent(
role="CEO",
@@ -173,6 +174,57 @@ def test_context_no_future_tasks():
Crew(tasks=[task1, task2, task3, task4], agents=[researcher, writer])
def test_memory_provider_validation():
# Create mock agents
agent1 = Agent(
role="Researcher",
goal="Conduct research on AI",
backstory="An experienced AI researcher",
allow_delegation=False,
)
agent2 = Agent(
role="Writer",
goal="Write articles on AI",
backstory="A seasoned writer with a focus on technology",
allow_delegation=False,
)
# Create mock tasks
task1 = Task(
description="Research the latest trends in AI",
expected_output="A report on AI trends",
agent=agent1,
)
task2 = Task(
description="Write an article based on the research",
expected_output="An article on AI trends",
agent=agent2,
)
# Test with valid memory provider values
try:
crew_with_none = Crew(
agents=[agent1, agent2], tasks=[task1, task2], memory_provider=None
)
crew_with_mem0 = Crew(
agents=[agent1, agent2], tasks=[task1, task2], memory_provider="mem0"
)
except ValidationError:
pytest.fail(
"Unexpected ValidationError raised for valid memory provider values"
)
# Test with an invalid memory provider value
with pytest.raises(ValidationError) as excinfo:
Crew(
agents=[agent1, agent2],
tasks=[task1, task2],
memory_provider="invalid_provider",
)
assert "Memory provider must be either None or 'mem0'." in str(excinfo.value)
def test_crew_config_with_wrong_keys():
no_tasks_config = json.dumps(
{
@@ -497,6 +549,7 @@ def test_cache_hitting_between_agents():
@pytest.mark.vcr(filter_headers=["authorization"])
def test_api_calls_throttling(capsys):
from unittest.mock import patch
from crewai_tools import tool
@tool
@@ -1105,6 +1158,7 @@ def test_dont_set_agents_step_callback_if_already_set():
@pytest.mark.vcr(filter_headers=["authorization"])
def test_crew_function_calling_llm():
from unittest.mock import patch
from crewai_tools import tool
llm = "gpt-4o"

View File

@@ -0,0 +1,270 @@
interactions:
- request:
body: ''
headers:
accept:
- '*/*'
accept-encoding:
- gzip, deflate
connection:
- keep-alive
host:
- api.mem0.ai
user-agent:
- python-httpx/0.27.0
method: GET
uri: https://api.mem0.ai/v1/memories/?user_id=test
response:
body:
string: '[]'
headers:
CF-Cache-Status:
- DYNAMIC
CF-RAY:
- 8b477138bad847b9-BOM
Connection:
- keep-alive
Content-Length:
- '2'
Content-Type:
- application/json
Date:
- Sat, 17 Aug 2024 06:00:11 GMT
NEL:
- '{"success_fraction":0,"report_to":"cf-nel","max_age":604800}'
Report-To:
- '{"endpoints":[{"url":"https:\/\/a.nel.cloudflare.com\/report\/v4?s=uuyH2foMJVDpV%2FH52g1q%2FnvXKe3dBKVzvsK0mqmSNezkiszNR9OgrEJfVqmkX%2FlPFRP2sH4zrOuzGo6k%2FjzsjYJczqSWJUZHN2pPujiwnr1E9W%2BdLGKmG6%2FqPrGYAy2SBRWkkJVWsTO3OQ%3D%3D"}],"group":"cf-nel","max_age":604800}'
Server:
- cloudflare
allow:
- GET, POST, DELETE, OPTIONS
alt-svc:
- h3=":443"; ma=86400
cross-origin-opener-policy:
- same-origin
referrer-policy:
- same-origin
vary:
- Accept, origin, Cookie
x-content-type-options:
- nosniff
x-frame-options:
- DENY
status:
code: 200
message: OK
- request:
body: '{"batch": [{"properties": {"python_version": "3.12.4 (v3.12.4:8e8a4baf65,
Jun 6 2024, 17:33:18) [Clang 13.0.0 (clang-1300.0.29.30)]", "os": "darwin",
"os_version": "Darwin Kernel Version 23.4.0: Wed Feb 21 21:44:54 PST 2024; root:xnu-10063.101.15~2/RELEASE_ARM64_T6030",
"os_release": "23.4.0", "processor": "arm", "machine": "arm64", "function":
"mem0.client.main.MemoryClient", "$lib": "posthog-python", "$lib_version": "3.5.0",
"$geoip_disable": true}, "timestamp": "2024-08-17T06:00:11.526640+00:00", "context":
{}, "distinct_id": "fd411bd3-99a2-42d6-acd7-9fca8ad09580", "event": "client.init"}],
"historical_migration": false, "sentAt": "2024-08-17T06:00:11.701621+00:00",
"api_key": "phc_hgJkUVJFYtmaJqrvf6CYN67TIQ8yhXAkWzUn9AMU4yX"}'
headers:
Accept:
- '*/*'
Accept-Encoding:
- gzip, deflate
Connection:
- keep-alive
Content-Length:
- '740'
Content-Type:
- application/json
User-Agent:
- posthog-python/3.5.0
method: POST
uri: https://us.i.posthog.com/batch/
response:
body:
string: '{"status":"Ok"}'
headers:
Connection:
- keep-alive
Content-Length:
- '15'
Content-Type:
- application/json
Date:
- Sat, 17 Aug 2024 06:00:12 GMT
access-control-allow-credentials:
- 'true'
server:
- envoy
vary:
- origin, access-control-request-method, access-control-request-headers
x-envoy-upstream-service-time:
- '69'
status:
code: 200
message: OK
- request:
body: '{"messages": [{"role": "user", "content": "Remember the following insights
from Agent run: test value with provider"}], "metadata": {"task": "test_task_provider",
"agent": "test_agent_provider"}, "app_id": "Researcher"}'
headers:
accept:
- '*/*'
accept-encoding:
- gzip, deflate
connection:
- keep-alive
content-length:
- '219'
content-type:
- application/json
host:
- api.mem0.ai
user-agent:
- python-httpx/0.27.0
method: POST
uri: https://api.mem0.ai/v1/memories/
response:
body:
string: '{"message":"ok"}'
headers:
CF-Cache-Status:
- DYNAMIC
CF-RAY:
- 8b477140282547b9-BOM
Connection:
- keep-alive
Content-Length:
- '16'
Content-Type:
- application/json
Date:
- Sat, 17 Aug 2024 06:00:13 GMT
NEL:
- '{"success_fraction":0,"report_to":"cf-nel","max_age":604800}'
Report-To:
- '{"endpoints":[{"url":"https:\/\/a.nel.cloudflare.com\/report\/v4?s=FRjJKSk3YxVj03wA7S05H8ts35KnWfqS3wb6Rfy4kVZ4BgXfw7nJbm92wI6vEv5fWcAcHVnOlkJDggs11B01BMuB2k3a9RqlBi0dJNiMuk%2Bgm5xE%2BODMPWJctYNRwQMjNVbteUpS%2Fad8YA%3D%3D"}],"group":"cf-nel","max_age":604800}'
Server:
- cloudflare
allow:
- GET, POST, DELETE, OPTIONS
alt-svc:
- h3=":443"; ma=86400
cross-origin-opener-policy:
- same-origin
referrer-policy:
- same-origin
vary:
- Accept, origin, Cookie
x-content-type-options:
- nosniff
x-frame-options:
- DENY
status:
code: 200
message: OK
- request:
body: '{"query": "test value with provider", "limit": 3, "app_id": "Researcher"}'
headers:
accept:
- '*/*'
accept-encoding:
- gzip, deflate
connection:
- keep-alive
content-length:
- '73'
content-type:
- application/json
host:
- api.mem0.ai
user-agent:
- python-httpx/0.27.0
method: POST
uri: https://api.mem0.ai/v1/memories/search/
response:
body:
string: '[]'
headers:
CF-Cache-Status:
- DYNAMIC
CF-RAY:
- 8b47714d083b47b9-BOM
Connection:
- keep-alive
Content-Length:
- '2'
Content-Type:
- application/json
Date:
- Sat, 17 Aug 2024 06:00:14 GMT
NEL:
- '{"success_fraction":0,"report_to":"cf-nel","max_age":604800}'
Report-To:
- '{"endpoints":[{"url":"https:\/\/a.nel.cloudflare.com\/report\/v4?s=2DRWL1cdKdMvnE8vx1fPUGeTITOgSGl3N5g84PS6w30GRqpfz79BtSx6REhpnOiFV8kM6KGqln0iCZ5yoHc2jBVVJXhPJhQ5t0uerD9JFnkphjISrJOU1MJjZWneT9PlNABddxvVNCmluA%3D%3D"}],"group":"cf-nel","max_age":604800}'
Server:
- cloudflare
allow:
- POST, OPTIONS
alt-svc:
- h3=":443"; ma=86400
cross-origin-opener-policy:
- same-origin
referrer-policy:
- same-origin
vary:
- Accept, origin, Cookie
x-content-type-options:
- nosniff
x-frame-options:
- DENY
status:
code: 200
message: OK
- request:
body: '{"batch": [{"properties": {"python_version": "3.12.4 (v3.12.4:8e8a4baf65,
Jun 6 2024, 17:33:18) [Clang 13.0.0 (clang-1300.0.29.30)]", "os": "darwin",
"os_version": "Darwin Kernel Version 23.4.0: Wed Feb 21 21:44:54 PST 2024; root:xnu-10063.101.15~2/RELEASE_ARM64_T6030",
"os_release": "23.4.0", "processor": "arm", "machine": "arm64", "function":
"mem0.client.main.MemoryClient", "$lib": "posthog-python", "$lib_version": "3.5.0",
"$geoip_disable": true}, "timestamp": "2024-08-17T06:00:13.593952+00:00", "context":
{}, "distinct_id": "fd411bd3-99a2-42d6-acd7-9fca8ad09580", "event": "client.add"}],
"historical_migration": false, "sentAt": "2024-08-17T06:00:13.858277+00:00",
"api_key": "phc_hgJkUVJFYtmaJqrvf6CYN67TIQ8yhXAkWzUn9AMU4yX"}'
headers:
Accept:
- '*/*'
Accept-Encoding:
- gzip, deflate
Connection:
- keep-alive
Content-Length:
- '739'
Content-Type:
- application/json
User-Agent:
- posthog-python/3.5.0
method: POST
uri: https://us.i.posthog.com/batch/
response:
body:
string: '{"status":"Ok"}'
headers:
Connection:
- keep-alive
Content-Length:
- '15'
Content-Type:
- application/json
Date:
- Sat, 17 Aug 2024 06:00:13 GMT
access-control-allow-credentials:
- 'true'
server:
- envoy
vary:
- origin, access-control-request-method, access-control-request-headers
x-envoy-upstream-service-time:
- '33'
status:
code: 200
message: OK
version: 1

View File

@@ -0,0 +1,147 @@
from unittest.mock import MagicMock, patch
import pytest
from crewai.memory import EntityMemory, LongTermMemory, ShortTermMemory, UserMemory
from crewai.memory.contextual.contextual_memory import ContextualMemory
@pytest.fixture
def mock_memories():
return {
"stm": MagicMock(spec=ShortTermMemory),
"ltm": MagicMock(spec=LongTermMemory),
"em": MagicMock(spec=EntityMemory),
"um": MagicMock(spec=UserMemory),
}
@pytest.fixture
def contextual_memory_mem0(mock_memories):
return ContextualMemory(
memory_provider="mem0",
stm=mock_memories["stm"],
ltm=mock_memories["ltm"],
em=mock_memories["em"],
um=mock_memories["um"],
)
@pytest.fixture
def contextual_memory_other(mock_memories):
return ContextualMemory(
memory_provider="other",
stm=mock_memories["stm"],
ltm=mock_memories["ltm"],
em=mock_memories["em"],
um=mock_memories["um"],
)
@pytest.fixture
def contextual_memory_none(mock_memories):
return ContextualMemory(
memory_provider=None,
stm=mock_memories["stm"],
ltm=mock_memories["ltm"],
em=mock_memories["em"],
um=mock_memories["um"],
)
def test_build_context_for_task_mem0(contextual_memory_mem0, mock_memories):
task = MagicMock(description="Test task")
context = "Additional context"
mock_memories["stm"].search.return_value = ["Recent insight"]
mock_memories["ltm"].search.return_value = [
{"metadata": {"suggestions": ["Historical data"]}}
]
mock_memories["em"].search.return_value = [{"memory": "Entity memory"}]
mock_memories["um"].search.return_value = [{"memory": "User memory"}]
result = contextual_memory_mem0.build_context_for_task(task, context)
assert "Recent Insights:" in result
assert "Historical Data:" in result
assert "Entities:" in result
assert "User memories/preferences:" in result
def test_build_context_for_task_other_provider(contextual_memory_other, mock_memories):
task = MagicMock(description="Test task")
context = "Additional context"
mock_memories["stm"].search.return_value = ["Recent insight"]
mock_memories["ltm"].search.return_value = [
{"metadata": {"suggestions": ["Historical data"]}}
]
mock_memories["em"].search.return_value = [{"context": "Entity context"}]
mock_memories["um"].search.return_value = [{"memory": "User memory"}]
result = contextual_memory_other.build_context_for_task(task, context)
assert "Recent Insights:" in result
assert "Historical Data:" in result
assert "Entities:" in result
assert "User memories/preferences:" not in result
def test_build_context_for_task_none_provider(contextual_memory_none, mock_memories):
task = MagicMock(description="Test task")
context = "Additional context"
mock_memories["stm"].search.return_value = ["Recent insight"]
mock_memories["ltm"].search.return_value = [
{"metadata": {"suggestions": ["Historical data"]}}
]
mock_memories["em"].search.return_value = [{"context": "Entity context"}]
mock_memories["um"].search.return_value = [{"memory": "User memory"}]
result = contextual_memory_none.build_context_for_task(task, context)
assert "Recent Insights:" in result
assert "Historical Data:" in result
assert "Entities:" in result
assert "User memories/preferences:" not in result
def test_fetch_entity_context_mem0(contextual_memory_mem0, mock_memories):
mock_memories["em"].search.return_value = [
{"memory": "Entity 1"},
{"memory": "Entity 2"},
]
result = contextual_memory_mem0._fetch_entity_context("query")
expected_result = "Entities:\n- Entity 1\n- Entity 2"
assert result == expected_result
def test_fetch_entity_context_other_provider(contextual_memory_other, mock_memories):
mock_memories["em"].search.return_value = [
{"context": "Entity 1"},
{"context": "Entity 2"},
]
result = contextual_memory_other._fetch_entity_context("query")
expected_result = "Entities:\n- Entity 1\n- Entity 2"
assert result == expected_result
def test_user_memories_only_for_mem0(contextual_memory_mem0, mock_memories):
mock_memories["um"].search.return_value = [{"memory": "User memory"}]
# Test for mem0 provider
result_mem0 = contextual_memory_mem0._fetch_user_memories("query")
assert "User memories/preferences:" in result_mem0
assert "User memory" in result_mem0
# Additional test to ensure user memories are included/excluded in the full context
task = MagicMock(description="Test task")
context = "Additional context"
mock_memories["stm"].search.return_value = ["Recent insight"]
mock_memories["ltm"].search.return_value = [
{"metadata": {"suggestions": ["Historical data"]}}
]
mock_memories["em"].search.return_value = [{"memory": "Entity memory"}]
full_context_mem0 = contextual_memory_mem0.build_context_for_task(task, context)
assert "User memories/preferences:" in full_context_mem0
assert "User memory" in full_context_mem0

View File

@@ -0,0 +1,119 @@
# tests/memory/test_entity_memory.py
from unittest.mock import MagicMock, patch
import pytest
from crewai.memory.entity.entity_memory import EntityMemory
from crewai.memory.entity.entity_memory_item import EntityMemoryItem
from crewai.memory.storage.mem0_storage import Mem0Storage
from crewai.memory.storage.rag_storage import RAGStorage
@pytest.fixture
def mock_rag_storage():
"""Fixture to create a mock RAGStorage instance"""
return MagicMock(spec=RAGStorage)
@pytest.fixture
def mock_mem0_storage():
"""Fixture to create a mock Mem0Storage instance"""
return MagicMock(spec=Mem0Storage)
@pytest.fixture
def entity_memory_rag(mock_rag_storage):
"""Fixture to create an EntityMemory instance with RAGStorage"""
with patch(
"crewai.memory.entity.entity_memory.RAGStorage", return_value=mock_rag_storage
):
return EntityMemory()
@pytest.fixture
def entity_memory_mem0(mock_mem0_storage):
"""Fixture to create an EntityMemory instance with Mem0Storage"""
with patch(
"crewai.memory.entity.entity_memory.Mem0Storage", return_value=mock_mem0_storage
):
return EntityMemory(memory_provider="mem0")
def test_save_rag_storage(entity_memory_rag, mock_rag_storage):
item = EntityMemoryItem(
name="John Doe",
type="Person",
description="A software engineer",
relationships="Works at TechCorp",
)
entity_memory_rag.save(item)
expected_data = "John Doe(Person): A software engineer"
mock_rag_storage.save.assert_called_once_with(expected_data, item.metadata)
def test_save_mem0_storage(entity_memory_mem0, mock_mem0_storage):
item = EntityMemoryItem(
name="John Doe",
type="Person",
description="A software engineer",
relationships="Works at TechCorp",
)
entity_memory_mem0.save(item)
expected_data = """
Remember details about the following entity:
Name: John Doe
Type: Person
Entity Description: A software engineer
"""
mock_mem0_storage.save.assert_called_once_with(expected_data, item.metadata)
def test_search(entity_memory_rag, mock_rag_storage):
query = "software engineer"
limit = 5
filters = {"type": "Person"}
score_threshold = 0.7
entity_memory_rag.search(query, limit, filters, score_threshold)
mock_rag_storage.search.assert_called_once_with(
query=query, limit=limit, filters=filters, score_threshold=score_threshold
)
def test_reset(entity_memory_rag, mock_rag_storage):
entity_memory_rag.reset()
mock_rag_storage.reset.assert_called_once()
def test_reset_error(entity_memory_rag, mock_rag_storage):
mock_rag_storage.reset.side_effect = Exception("Reset error")
with pytest.raises(Exception) as exc_info:
entity_memory_rag.reset()
assert (
str(exc_info.value)
== "An error occurred while resetting the entity memory: Reset error"
)
@pytest.mark.parametrize("memory_provider", [None, "other"])
def test_init_with_rag_storage(memory_provider):
with patch("crewai.memory.entity.entity_memory.RAGStorage") as mock_rag_storage:
EntityMemory(memory_provider=memory_provider)
mock_rag_storage.assert_called_once()
def test_init_with_mem0_storage():
with patch("crewai.memory.entity.entity_memory.Mem0Storage") as mock_mem0_storage:
EntityMemory(memory_provider="mem0")
mock_mem0_storage.assert_called_once()
def test_init_with_custom_storage():
custom_storage = MagicMock()
entity_memory = EntityMemory(storage=custom_storage)
assert entity_memory.storage == custom_storage

View File

@@ -1,29 +1,125 @@
import pytest
# tests/memory/long_term_memory_test.py
from datetime import datetime
from unittest.mock import MagicMock, patch
import pytest
from crewai.memory.long_term.long_term_memory import LongTermMemory
from crewai.memory.long_term.long_term_memory_item import LongTermMemoryItem
from crewai.memory.storage.ltm_sqlite_storage import LTMSQLiteStorage
@pytest.fixture
def long_term_memory():
"""Fixture to create a LongTermMemory instance"""
return LongTermMemory()
def mock_storage():
"""Fixture to create a mock LTMSQLiteStorage instance"""
return MagicMock(spec=LTMSQLiteStorage)
def test_save_and_search(long_term_memory):
@pytest.fixture
def long_term_memory(mock_storage):
"""Fixture to create a LongTermMemory instance with mock storage"""
return LongTermMemory(storage=mock_storage)
def test_save(long_term_memory, mock_storage):
memory = LongTermMemoryItem(
agent="test_agent",
task="test_task",
expected_output="test_output",
datetime="test_datetime",
datetime="2023-01-01 12:00:00",
quality=0.5,
metadata={"task": "test_task", "quality": 0.5},
metadata={"additional_info": "test_info"},
)
long_term_memory.save(memory)
find = long_term_memory.search("test_task", latest_n=5)[0]
assert find["score"] == 0.5
assert find["datetime"] == "test_datetime"
assert find["metadata"]["agent"] == "test_agent"
assert find["metadata"]["quality"] == 0.5
assert find["metadata"]["task"] == "test_task"
assert find["metadata"]["expected_output"] == "test_output"
expected_metadata = {
"additional_info": "test_info",
"agent": "test_agent",
"expected_output": "test_output",
"quality": 0.5, # Include quality in expected metadata
}
mock_storage.save.assert_called_once_with(
task_description="test_task",
score=0.5,
metadata=expected_metadata,
datetime="2023-01-01 12:00:00",
)
def test_search(long_term_memory, mock_storage):
mock_storage.load.return_value = [
{
"metadata": {
"agent": "test_agent",
"expected_output": "test_output",
"task": "test_task",
},
"datetime": "2023-01-01 12:00:00",
"score": 0.5,
}
]
result = long_term_memory.search("test_task", latest_n=5)
mock_storage.load.assert_called_once_with("test_task", 5)
assert len(result) == 1
assert result[0]["metadata"]["agent"] == "test_agent"
assert result[0]["metadata"]["expected_output"] == "test_output"
assert result[0]["metadata"]["task"] == "test_task"
assert result[0]["datetime"] == "2023-01-01 12:00:00"
assert result[0]["score"] == 0.5
def test_save_with_minimal_metadata(long_term_memory, mock_storage):
memory = LongTermMemoryItem(
agent="minimal_agent",
task="minimal_task",
expected_output="minimal_output",
datetime="2023-01-01 12:00:00",
quality=0.3,
metadata={},
)
long_term_memory.save(memory)
expected_metadata = {
"agent": "minimal_agent",
"expected_output": "minimal_output",
"quality": 0.3, # Include quality in expected metadata
}
mock_storage.save.assert_called_once_with(
task_description="minimal_task",
score=0.3,
metadata=expected_metadata,
datetime="2023-01-01 12:00:00",
)
def test_reset(long_term_memory, mock_storage):
long_term_memory.reset()
mock_storage.reset.assert_called_once()
def test_search_with_no_results(long_term_memory, mock_storage):
mock_storage.load.return_value = []
result = long_term_memory.search("nonexistent_task")
assert result == []
def test_init_with_default_storage():
with patch(
"crewai.memory.long_term.long_term_memory.LTMSQLiteStorage"
) as mock_storage_class:
LongTermMemory()
mock_storage_class.assert_called_once()
def test_init_with_custom_storage():
custom_storage = MagicMock()
memory = LongTermMemory(storage=custom_storage)
assert memory.storage == custom_storage
@pytest.mark.parametrize("latest_n", [1, 3, 5, 10])
def test_search_with_different_latest_n(long_term_memory, mock_storage, latest_n):
long_term_memory.search("test_task", latest_n=latest_n)
mock_storage.load.assert_called_once_with("test_task", latest_n)

View File

@@ -44,3 +44,46 @@ def test_save_and_search(short_term_memory):
find = short_term_memory.search("test value", score_threshold=0.01)[0]
assert find["context"] == memory.data, "Data value mismatch."
assert find["metadata"]["agent"] == "test_agent", "Agent value mismatch."
@pytest.fixture
def short_term_memory_with_provider():
"""Fixture to create a ShortTermMemory instance with a specific memory provider"""
agent = Agent(
role="Researcher",
goal="Search relevant data and provide results",
backstory="You are a researcher at a leading tech think tank.",
tools=[],
verbose=True,
)
task = Task(
description="Perform a search on specific topics.",
expected_output="A list of relevant URLs based on the search query.",
agent=agent,
)
return ShortTermMemory(
crew=Crew(agents=[agent], tasks=[task]), memory_provider="mem0"
)
def test_save_and_search_with_provider(short_term_memory_with_provider):
memory = ShortTermMemoryItem(
data="Loves to do research on the latest technologies.",
agent="test_agent_provider",
metadata={"task": "test_task_provider"},
)
short_term_memory_with_provider.save(
value=memory.data,
metadata=memory.metadata,
agent=memory.agent,
)
find = short_term_memory_with_provider.search(
"Loves to do research on the latest technologies.", score_threshold=0.01
)[0]
assert find["memory"] in memory.data, "Data value mismatch."
assert find["metadata"]["agent"] == "test_agent_provider", "Agent value mismatch."
assert (
short_term_memory_with_provider.memory_provider == "mem0"
), "Memory provider mismatch."

uv.lock generated (4972 changed lines)

File diff suppressed because it is too large