Mirror of https://github.com/crewAIInc/crewAI.git, synced 2025-12-25 16:58:29 +00:00

Compare commits: pr-1209 ... bugfix/plo (38 commits)
| SHA1 |
| :--- |
| 1d8584fe33 |
| 5af7df8031 |
| 8bc07e6071 |
| 6baaad045a |
| 74c1703310 |
| a921828e51 |
| e1fd83e6a7 |
| 7d68e287cc |
| f39a975e20 |
| b8a3c29745 |
| 9cd4ff05c9 |
| 4687779702 |
| 8731915330 |
| 093259389e |
| 6bcb3d1080 |
| 71a217b210 |
| b98256e434 |
| 40f81aecf5 |
| d1737a96fb |
| 84f48c465d |
| 60efcad481 |
| 53a9f107ca |
| 6fa2b89831 |
| d72ebb9bb8 |
| 81ae07abdb |
| 6d20ba70a1 |
| 67f55bae2c |
| 9b59de1720 |
| 798d16a6c6 |
| c9152f2af8 |
| 24b09e97cd |
| a6b7295092 |
| 725d159e44 |
| ef21da15e6 |
| de5d2eaa9b |
| e2badaa4c6 |
| 916dec2418 |
| 7f387dd7c3 |
**.github/workflows/tests.yml** (vendored, 22 lines changed)
````diff
@@ -9,24 +9,24 @@ env:
   OPENAI_API_KEY: fake-api-key
 
 jobs:
-  deploy:
+  tests:
     runs-on: ubuntu-latest
     timeout-minutes: 15
 
     steps:
       - name: Checkout code
         uses: actions/checkout@v4
 
-      - name: Setup Python
-        uses: actions/setup-python@v4
+      - name: Install uv
+        uses: astral-sh/setup-uv@v3
         with:
-          python-version: "3.11.9"
+          enable-cache: true
 
-      - name: Install Requirements
-        run: |
-          set -e
-          pip install poetry
-          poetry install
+      - name: Set up Python
+        run: uv python install 3.11.9
+
+      - name: Install the project
+        run: uv sync --dev
 
       - name: Run tests
-        run: poetry run pytest
+        run: uv run pytest tests
````
**.github/workflows/type-checker.yml** (vendored, 2 lines changed)
````diff
@@ -16,7 +16,7 @@ jobs:
       - name: Setup Python
         uses: actions/setup-python@v4
         with:
-          python-version: "3.10"
+          python-version: "3.11.9"
 
       - name: Install Requirements
         run: |
````
**README.md** (30 lines changed)
````diff
@@ -44,15 +44,9 @@ To get started with CrewAI, follow these simple steps:
 
 ### 1. Installation
 
-Ensure you have Python >=3.10 <=3.13 installed on your system. CrewAI uses [Poetry](https://python-poetry.org/) for dependency management and package handling, offering a seamless setup and execution experience.
+Ensure you have Python >=3.10 <=3.13 installed on your system. CrewAI uses [UV](https://docs.astral.sh/uv/) for dependency management and package handling, offering a seamless setup and execution experience.
 
-First, if you haven't already, install Poetry:
-
-```bash
-pip install poetry
-```
-
-Then, install CrewAI:
+First, install CrewAI:
 
 ```shell
 pip install crewai
@@ -243,7 +237,7 @@ Lock the dependencies and install them by using the CLI command but first, navig
 
 ```shell
 cd my_project
-crewai install
+crewai install (Optional)
 ```
 
 To run your crew, execute the following command in the root of your project:
@@ -258,6 +252,12 @@ or
 python src/my_project/main.py
 ```
 
+If an error happens due to the usage of poetry, please run the following command to update your crewai package:
+
+```bash
+crewai update
+```
+
 You should see the output in the console and the `report.md` file should be created in the root of your project with the full final report.
 
 In addition to the sequential process, you can use the hierarchical process, which automatically assigns a manager to the defined crew to properly coordinate the planning and execution of tasks through delegation and validation of results. [See more about the processes here](https://docs.crewai.com/core-concepts/Processes/).
@@ -332,14 +332,14 @@ CrewAI is open-source and we welcome contributions. If you're looking to contrib
 ### Installing Dependencies
 
 ```bash
-poetry lock
-poetry install
+uv lock
+uv sync
 ```
 
 ### Virtual Env
 
 ```bash
-poetry shell
+uv venv
 ```
 
 ### Pre-commit hooks
@@ -351,19 +351,19 @@ pre-commit install
 ### Running Tests
 
 ```bash
-poetry run pytest
+uv run pytest .
 ```
 
 ### Running static type checks
 
 ```bash
-poetry run mypy
+uvx mypy
 ```
 
 ### Packaging
 
 ```bash
-poetry build
+uv build
 ```
 
 ### Installing Locally
````
**docs: Agents**

````diff
@@ -31,16 +31,17 @@ Think of an agent as a member of a team, with specific skills and a particular j
 | **Max RPM** *(optional)* | `max_rpm` | Max RPM is the maximum number of requests per minute the agent can perform to avoid rate limits. It's optional and can be left unspecified, with a default value of `None`. |
 | **Max Execution Time** *(optional)* | `max_execution_time` | Max Execution Time is the maximum execution time for an agent to execute a task. It's optional and can be left unspecified, with a default value of `None`, meaning no max execution time. |
 | **Verbose** *(optional)* | `verbose` | Setting this to `True` configures the internal logger to provide detailed execution logs, aiding in debugging and monitoring. Default is `False`. |
-| **Allow Delegation** *(optional)* | `allow_delegation` | Agents can delegate tasks or questions to one another, ensuring that each task is handled by the most suitable agent. Default is `False`.
+| **Allow Delegation** *(optional)* | `allow_delegation` | Agents can delegate tasks or questions to one another, ensuring that each task is handled by the most suitable agent. Default is `False`. |
 | **Step Callback** *(optional)* | `step_callback` | A function that is called after each step of the agent. This can be used to log the agent's actions or to perform other operations. It will overwrite the crew `step_callback`. |
-| gbv vbn zzdsxcdsdfc**Cache** *(optional)* | `cache` | Indicates if the agent should use a cache for tool usage. Default is `True`. |
+| **Cache** *(optional)* | `cache` | Indicates if the agent should use a cache for tool usage. Default is `True`. |
 | **System Template** *(optional)* | `system_template` | Specifies the system format for the agent. Default is `None`. |
 | **Prompt Template** *(optional)* | `prompt_template` | Specifies the prompt format for the agent. Default is `None`. |
 | **Response Template** *(optional)* | `response_template` | Specifies the response format for the agent. Default is `None`. |
 | **Allow Code Execution** *(optional)* | `allow_code_execution` | Enable code execution for the agent. Default is `False`. |
-| **Max Retry Limit** *(optional)* | `max_retry_limit` | Maximum number of retries for an agent to execute a task when an error occurs. Default is `2`.
+| **Max Retry Limit** *(optional)* | `max_retry_limit` | Maximum number of retries for an agent to execute a task when an error occurs. Default is `2`. |
 | **Use System Prompt** *(optional)* | `use_system_prompt` | Adds the ability to not use system prompt (to support o1 models). Default is `True`. |
 | **Respect Context Window** *(optional)* | `respect_context_window` | Summary strategy to avoid overflowing the context window. Default is `True`. |
+| **Code Execution Mode** *(optional)* | `code_execution_mode` | Determines the mode for code execution: 'safe' (using Docker) or 'unsafe' (direct execution on the host machine). Default is `safe`. |
 
 ## Creating an agent
 
@@ -83,6 +84,7 @@ agent = Agent(
     max_retry_limit=2, # Optional
     use_system_prompt=True, # Optional
     respect_context_window=True, # Optional
+    code_execution_mode='safe', # Optional, defaults to 'safe'
 )
 ```
 
@@ -156,4 +158,4 @@ crew = my_crew.kickoff(inputs={"input": "Mark Twain"})
 ## Conclusion
 
 Agents are the building blocks of the CrewAI framework. By understanding how to define and interact with agents,
-you can create sophisticated AI systems that leverage the power of collaborative intelligence.
+you can create sophisticated AI systems that leverage the power of collaborative intelligence. The `code_execution_mode` attribute provides flexibility in how agents execute code, allowing for both secure and direct execution options.
````
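The new `code_execution_mode` attribute added above is a one-line choice when constructing an agent. A minimal sketch for illustration (the role, goal, and backstory values are placeholders, not taken from the docs):

```python
from crewai import Agent

# Hypothetical agent that executes the code it writes inside Docker.
coder = Agent(
    role="Python Developer",
    goal="Solve small computational tasks by writing and running code",
    backstory="A careful engineer who verifies answers by executing them.",
    allow_code_execution=True,   # off by default
    code_execution_mode="safe",  # 'safe' = Docker sandbox; 'unsafe' = host machine
)
```

Switching to `code_execution_mode="unsafe"` trades the Docker sandbox for direct execution on the host, so it should only be used in trusted environments.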
**docs: CLI**

````diff
@@ -6,14 +6,14 @@ icon: terminal
 
 # CrewAI CLI Documentation
 
-The CrewAI CLI provides a set of commands to interact with CrewAI, allowing you to create, train, run, and manage crews and pipelines.
+The CrewAI CLI provides a set of commands to interact with CrewAI, allowing you to create, train, run, and manage crews & flows.
 
 ## Installation
 
-To use the CrewAI CLI, make sure you have CrewAI & Poetry installed:
+To use the CrewAI CLI, make sure you have CrewAI installed:
 
 ```shell
-pip install crewai poetry
+pip install crewai
 ```
 
 ## Basic Usage
@@ -145,4 +145,35 @@ crewai run
 <Note>
 Make sure to run these commands from the directory where your CrewAI project is set up.
 Some commands may require additional configuration or setup within your project structure.
 </Note>
+
+### 9. API Keys
+
+When running ```crewai create crew``` command, the CLI will first show you the top 5 most common LLM providers and ask you to select one.
+
+Once you've selected an LLM provider, you will be prompted for API keys.
+
+#### Initial API key providers
+
+The CLI will initially prompt for API keys for the following services:
+
+* OpenAI
+* Groq
+* Anthropic
+* Google Gemini
+
+When you select a provider, the CLI will prompt you to enter your API key.
+
+#### Other Options
+
+If you select option 6, you will be able to select from a list of LiteLLM supported providers.
+
+When you select a provider, the CLI will prompt you to enter the Key name and the API key.
+
+See the following link for each provider's key name:
+
+* [LiteLLM Providers](https://docs.litellm.ai/docs/providers)
````
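Whichever provider you pick, the key the CLI captures has to be visible to your crew at runtime. A minimal sanity check, assuming the common setup from these docs where keys live in a `.env` file (the file layout and variable name are illustrative assumptions, not CLI behavior confirmed here):

```python
import os

from dotenv import load_dotenv  # python-dotenv

# Load variables from .env so the selected provider's key is available.
load_dotenv()

if not os.getenv("OPENAI_API_KEY"):
    raise RuntimeError("OPENAI_API_KEY is not set; add it to your environment or .env")
```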
**docs: Collaboration**

````diff
@@ -4,7 +4,7 @@ description: Exploring the dynamics of agent collaboration within the CrewAI fra
 icon: screen-users
 ---
 
-## Collaboration Fundamentals
+## Collaboration Fundamentals
 
 Collaboration in CrewAI is fundamental, enabling agents to combine their skills, share information, and assist each other in task execution, embodying a truly cooperative ecosystem.
 
@@ -16,25 +16,24 @@ Collaboration in CrewAI is fundamental, enabling agents to combine their skills,
 
 The `Crew` class has been enriched with several attributes to support advanced functionalities:
 
-| Feature | Description |
-| :-------------------------------------------------------------------- | :---------- |
-| **Language Model Management** (`manager_llm`, `function_calling_llm`) | Manages language models for executing tasks and tools. `manager_llm` is required for hierarchical processes, while `function_calling_llm` is optional with a default value for streamlined interactions. |
-| **Custom Manager Agent** (`manager_agent`) | Specifies a custom agent as the manager, replacing the default CrewAI manager. |
-| **Process Flow** (`process`) | Defines execution logic (e.g., sequential, hierarchical) for task distribution. |
-| **Verbose Logging** (`verbose`) | Provides detailed logging for monitoring and debugging. Accepts integer and boolean values to control verbosity level. |
-| **Rate Limiting** (`max_rpm`) | Limits requests per minute to optimize resource usage. Setting guidelines depend on task complexity and load. |
-| **Internationalization / Customization** (`language`, `prompt_file`) | Supports prompt customization for global usability. [Example of file](https://github.com/joaomdmoura/crewAI/blob/main/src/crewai/translations/en.json) |
-| **Execution and Output Handling** (`full_output`) | Controls output granularity, distinguishing between full and final outputs. |
-| **Callback and Telemetry** (`step_callback`, `task_callback`) | Enables step-wise and task-level execution monitoring and telemetry for performance analytics. |
-| **Crew Sharing** (`share_crew`) | Allows sharing crew data with CrewAI for model improvement. Privacy implications and benefits should be considered. |
-| **Usage Metrics** (`usage_metrics`) | Logs all LLM usage metrics during task execution for performance insights. |
-| **Memory Usage** (`memory`) | Enables memory for storing execution history, aiding in agent learning and task efficiency. |
-| **Memory Provider** (`memory_provider`) | Specifies the memory provider to be used by the crew for storing memories. |
-| **Embedder Configuration** (`embedder`) | Configures the embedder for language understanding and generation, with support for provider customization. |
-| **Cache Management** (`cache`) | Specifies whether to cache tool execution results, enhancing performance. |
-| **Output Logging** (`output_log_file`) | Defines the file path for logging crew execution output. |
-| **Planning Mode** (`planning`) | Enables action planning before task execution. Set `planning=True` to activate. |
-| **Replay Feature** (`replay`) | Provides CLI for listing tasks from the last run and replaying from specific tasks, aiding in task management and troubleshooting. |
+| Feature | Description |
+|:--------|:------------|
+| **Language Model Management** (`manager_llm`, `function_calling_llm`) | Manages language models for executing tasks and tools. `manager_llm` is required for hierarchical processes, while `function_calling_llm` is optional with a default value for streamlined interactions. |
+| **Custom Manager Agent** (`manager_agent`) | Specifies a custom agent as the manager, replacing the default CrewAI manager. |
+| **Process Flow** (`process`) | Defines execution logic (e.g., sequential, hierarchical) for task distribution. |
+| **Verbose Logging** (`verbose`) | Provides detailed logging for monitoring and debugging. Accepts integer and boolean values to control verbosity level. |
+| **Rate Limiting** (`max_rpm`) | Limits requests per minute to optimize resource usage. Setting guidelines depend on task complexity and load. |
+| **Internationalization / Customization** (`language`, `prompt_file`) | Supports prompt customization for global usability. [Example of file](https://github.com/joaomdmoura/crewAI/blob/main/src/crewai/translations/en.json) |
+| **Execution and Output Handling** (`full_output`) | Controls output granularity, distinguishing between full and final outputs. |
+| **Callback and Telemetry** (`step_callback`, `task_callback`) | Enables step-wise and task-level execution monitoring and telemetry for performance analytics. |
+| **Crew Sharing** (`share_crew`) | Allows sharing crew data with CrewAI for model improvement. Privacy implications and benefits should be considered. |
+| **Usage Metrics** (`usage_metrics`) | Logs all LLM usage metrics during task execution for performance insights. |
+| **Memory Usage** (`memory`) | Enables memory for storing execution history, aiding in agent learning and task efficiency. |
+| **Embedder Configuration** (`embedder`) | Configures the embedder for language understanding and generation, with support for provider customization. |
+| **Cache Management** (`cache`) | Specifies whether to cache tool execution results, enhancing performance. |
+| **Output Logging** (`output_log_file`) | Defines the file path for logging crew execution output. |
+| **Planning Mode** (`planning`) | Enables action planning before task execution. Set `planning=True` to activate. |
+| **Replay Feature** (`replay`) | Provides CLI for listing tasks from the last run and replaying from specific tasks, aiding in task management and troubleshooting. |
 
 ## Delegation (Dividing to Conquer)
 
@@ -50,4 +49,4 @@ Consider a crew with a researcher agent tasked with data gathering and a writer
 
 ## Conclusion
 
-The integration of advanced attributes and functionalities into the CrewAI framework significantly enriches the agent collaboration ecosystem. These enhancements not only simplify interactions but also offer unprecedented flexibility and control, paving the way for sophisticated AI-driven solutions capable of tackling complex tasks through intelligent collaboration and delegation.
+The integration of advanced attributes and functionalities into the CrewAI framework significantly enriches the agent collaboration ecosystem. These enhancements not only simplify interactions but also offer unprecedented flexibility and control, paving the way for sophisticated AI-driven solutions capable of tackling complex tasks through intelligent collaboration and delegation.
````
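For concreteness, a minimal delegation sketch (agents, task, and wording are illustrative placeholders, not an example from this page):

```python
from crewai import Agent, Crew, Task

researcher = Agent(
    role="Researcher",
    goal="Gather accurate background material",
    backstory="Specialist in finding and summarizing sources.",
)
writer = Agent(
    role="Writer",
    goal="Turn research into a clear article",
    backstory="Editor who asks the researcher when facts are missing.",
    allow_delegation=True,  # lets the writer hand questions to the researcher
)

crew = Crew(
    agents=[researcher, writer],
    tasks=[
        Task(
            description="Write a short article on AI agents",
            expected_output="A three-paragraph article",
            agent=writer,
        )
    ],
)
```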
**docs: Crews**

````diff
@@ -22,8 +22,7 @@ A crew in crewAI represents a collaborative group of agents working together to
 | **Max RPM** _(optional)_ | `max_rpm` | Maximum requests per minute the crew adheres to during execution. Defaults to `None`. |
 | **Language** _(optional)_ | `language` | Language used for the crew, defaults to English. |
 | **Language File** _(optional)_ | `language_file` | Path to the language file to be used for the crew. |
-| **Memory** _(optional)_ | `memory` | Utilized for storing execution memories (short-term, long-term, entity memory). |
-| **Memory Provider** _(optional)_ | `memory_provider` | Specifies the memory provider to be used by the crew for storing memories. |
+| **Memory** _(optional)_ | `memory` | Utilized for storing execution memories (short-term, long-term, entity memory). Defaults to `False`. |
 | **Cache** _(optional)_ | `cache` | Specifies whether to use a cache for storing the results of tools' execution. Defaults to `True`. |
 | **Embedder** _(optional)_ | `embedder` | Configuration for the embedder to be used by the crew. Mostly used by memory for now. Default is `{"provider": "openai"}`. |
 | **Full Output** _(optional)_ | `full_output` | Whether the crew should return the full output with all tasks outputs or just the final output. Defaults to `False`. |
````
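Read together, these changes mean `memory` is now a plain boolean with no separate provider attribute. A sketch of a crew spelling out the documented defaults explicitly (agents and tasks elided, as in the docs' own examples):

```python
from crewai import Crew, Process

# Illustrative only: the values shown are the defaults from the table above.
crew = Crew(
    agents=[...],                     # your agents
    tasks=[...],                      # your tasks
    process=Process.sequential,
    memory=False,                     # defaults to False
    cache=True,                       # defaults to True
    embedder={"provider": "openai"},  # documented default
    full_output=False,                # final output only
    max_rpm=None,                     # no rate limit
)
```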
**docs: Flows**

````diff
@@ -23,9 +23,9 @@ Flows allow you to create structured, event-driven workflows. They provide a sea
 Let's create a simple Flow where you will use OpenAI to generate a random city in one task and then use that city to generate a fun fact in another task.
 
 ```python Code
-import asyncio
-
 from crewai.flow.flow import Flow, listen, start
 from dotenv import load_dotenv
 from litellm import completion
@@ -67,19 +67,19 @@ class ExampleFlow(Flow):
         return fun_fact
 
 
-async def main():
-    flow = ExampleFlow()
-    result = await flow.kickoff()
-
-    print(f"Generated fun fact: {result}")
-
-asyncio.run(main())
+flow = ExampleFlow()
+result = flow.kickoff()
+
+print(f"Generated fun fact: {result}")
 ```
 
 In the above example, we have created a simple Flow that generates a random city using OpenAI and then generates a fun fact about that city. The Flow consists of two tasks: `generate_city` and `generate_fun_fact`. The `generate_city` task is the starting point of the Flow, and the `generate_fun_fact` task listens for the output of the `generate_city` task.
 
 When you run the Flow, it will generate a random city and then generate a fun fact about that city. The output will be printed to the console.
 
 **Note:** Ensure you have set up your `.env` file to store your `OPENAI_API_KEY`. This key is necessary for authenticating requests to the OpenAI API.
 
 ### @start()
 
 The `@start()` decorator is used to mark a method as the starting point of a Flow. When a Flow is started, all the methods decorated with `@start()` are executed in parallel. You can have multiple start methods in a Flow, and they will all be executed when the Flow is started.
````
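Since multiple `@start()` methods run in parallel, a tiny sketch may help (illustrative names, and the synchronous `kickoff()` style from the updated examples above):

```python
from crewai.flow.flow import Flow, listen, start


class MultiStartFlow(Flow):
    @start()
    def fetch_users(self):
        return "users"

    @start()  # both start methods fire when the flow kicks off
    def fetch_orders(self):
        return "orders"

    @listen(fetch_users)
    def report_users(self, data):
        print(f"Received: {data}")


flow = MultiStartFlow()
flow.kickoff()
```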
````diff
@@ -119,7 +119,6 @@ Here's how you can access the final output:
 
 <CodeGroup>
 ```python Code
-import asyncio
 from crewai.flow.flow import Flow, listen, start
 
 class OutputExampleFlow(Flow):
@@ -131,26 +130,24 @@ class OutputExampleFlow(Flow):
     def second_method(self, first_output):
         return f"Second method received: {first_output}"
 
-async def main():
-    flow = OutputExampleFlow()
-    final_output = await flow.kickoff()
-    print("---- Final Output ----")
-    print(final_output)
-
-asyncio.run(main())
+flow = OutputExampleFlow()
+final_output = flow.kickoff()
+
+print("---- Final Output ----")
+print(final_output)
 ```
 
-``` text Output
+```text Output
 ---- Final Output ----
 Second method received: Output from first_method
 ```
 
 </CodeGroup>
 
-In this example, the `second_method` is the last method to complete, so its output will be the final output of the Flow.
+In this example, the `second_method` is the last method to complete, so its output will be the final output of the Flow.
 The `kickoff()` method will return the final output, which is then printed to the console.
 
 #### Accessing and Updating State
 
 In addition to retrieving the final output, you can also access and update the state within your Flow. The state can be used to store and share data between different methods in the Flow. After the Flow has run, you can access the state to retrieve any information that was added or updated during the execution.
@@ -160,7 +157,6 @@ Here's an example of how to update and access the state:
 
 <CodeGroup>
 
 ```python Code
-import asyncio
 from crewai.flow.flow import Flow, listen, start
 from pydantic import BaseModel
@@ -181,42 +177,38 @@ class StateExampleFlow(Flow[ExampleState]):
         self.state.counter += 1
         return self.state.message
 
-async def main():
-    flow = StateExampleFlow()
-    final_output = await flow.kickoff()
-    print(f"Final Output: {final_output}")
-    print("Final State:")
-    print(flow.state)
-
-asyncio.run(main())
+flow = StateExampleFlow()
+final_output = flow.kickoff()
+print(f"Final Output: {final_output}")
+print("Final State:")
+print(flow.state)
 ```
 
-``` text Output
+```text Output
 Final Output: Hello from first_method - updated by second_method
 Final State:
 counter=2 message='Hello from first_method - updated by second_method'
 ```
 
 </CodeGroup>
 
-In this example, the state is updated by both `first_method` and `second_method`.
+In this example, the state is updated by both `first_method` and `second_method`.
 After the Flow has run, you can access the final state to see the updates made by these methods.
 
-By ensuring that the final method's output is returned and providing access to the state, CrewAI Flows make it easy to integrate the results of your AI workflows into larger applications or systems,
+By ensuring that the final method's output is returned and providing access to the state, CrewAI Flows make it easy to integrate the results of your AI workflows into larger applications or systems,
 while also maintaining and accessing the state throughout the Flow's execution.
 
 ## Flow State Management
 
-Managing state effectively is crucial for building reliable and maintainable AI workflows. CrewAI Flows provides robust mechanisms for both unstructured and structured state management,
+Managing state effectively is crucial for building reliable and maintainable AI workflows. CrewAI Flows provides robust mechanisms for both unstructured and structured state management,
 allowing developers to choose the approach that best fits their application's needs.
 
 ### Unstructured State Management
 
-In unstructured state management, all state is stored in the `state` attribute of the `Flow` class.
+In unstructured state management, all state is stored in the `state` attribute of the `Flow` class.
 This approach offers flexibility, enabling developers to add or modify state attributes on the fly without defining a strict schema.
 
 ```python Code
-import asyncio
-
 from crewai.flow.flow import Flow, listen, start
 
 class UntructuredExampleFlow(Flow):
@@ -239,12 +231,8 @@ class UntructuredExampleFlow(Flow):
         print(f"State after third_method: {self.state}")
 
 
-async def main():
-    flow = UntructuredExampleFlow()
-    await flow.kickoff()
-
-
-asyncio.run(main())
+flow = UntructuredExampleFlow()
+flow.kickoff()
 ```
 
 **Key Points:**
@@ -254,12 +242,10 @@ asyncio.run(main())
 
 ### Structured State Management
 
-Structured state management leverages predefined schemas to ensure consistency and type safety across the workflow.
+Structured state management leverages predefined schemas to ensure consistency and type safety across the workflow.
 By using models like Pydantic's `BaseModel`, developers can define the exact shape of the state, enabling better validation and auto-completion in development environments.
 
 ```python Code
-import asyncio
-
 from crewai.flow.flow import Flow, listen, start
 from pydantic import BaseModel
@@ -288,12 +274,8 @@ class StructuredExampleFlow(Flow[ExampleState]):
         print(f"State after third_method: {self.state}")
 
 
-async def main():
-    flow = StructuredExampleFlow()
-    await flow.kickoff()
-
-
-asyncio.run(main())
+flow = StructuredExampleFlow()
+flow.kickoff()
 ```
 
 **Key Points:**
@@ -326,7 +308,6 @@ The `or_` function in Flows allows you to listen to multiple methods and trigger
 
 <CodeGroup>
 
 ```python Code
-import asyncio
 from crewai.flow.flow import Flow, listen, or_, start
 
 class OrExampleFlow(Flow):
@@ -344,22 +325,19 @@ class OrExampleFlow(Flow):
         print(f"Logger: {result}")
 
 
-async def main():
-    flow = OrExampleFlow()
-    await flow.kickoff()
-
-
-asyncio.run(main())
+flow = OrExampleFlow()
+flow.kickoff()
 ```
 
-``` text Output
+```text Output
 Logger: Hello from the start method
 Logger: Hello from the second method
 ```
 
 </CodeGroup>
 
-When you run this Flow, the `logger` method will be triggered by the output of either the `start_method` or the `second_method`.
+When you run this Flow, the `logger` method will be triggered by the output of either the `start_method` or the `second_method`.
 The `or_` function is used to listen to multiple methods and trigger the listener method when any of the specified methods emit an output.
 
 ### Conditional Logic: `and`
@@ -369,7 +347,6 @@ The `and_` function in Flows allows you to listen to multiple methods and trigge
 
 <CodeGroup>
 
 ```python Code
-import asyncio
 from crewai.flow.flow import Flow, and_, listen, start
 
 class AndExampleFlow(Flow):
@@ -387,34 +364,28 @@ class AndExampleFlow(Flow):
         print("---- Logger ----")
         print(self.state)
 
 
-async def main():
-    flow = AndExampleFlow()
-    await flow.kickoff()
-
-
-asyncio.run(main())
+flow = AndExampleFlow()
+flow.kickoff()
 ```
 
-``` text Output
+```text Output
 ---- Logger ----
 {'greeting': 'Hello from the start method', 'joke': 'What do computers eat? Microchips.'}
 ```
 
 </CodeGroup>
 
-When you run this Flow, the `logger` method will be triggered only when both the `start_method` and the `second_method` emit an output.
+When you run this Flow, the `logger` method will be triggered only when both the `start_method` and the `second_method` emit an output.
 The `and_` function is used to listen to multiple methods and trigger the listener method only when all the specified methods emit an output.
 
 ### Router
 
-The `@router()` decorator in Flows allows you to define conditional routing logic based on the output of a method.
+The `@router()` decorator in Flows allows you to define conditional routing logic based on the output of a method.
 You can specify different routes based on the output of the method, allowing you to control the flow of execution dynamically.
 
 <CodeGroup>
 
 ```python Code
-import asyncio
 import random
 from crewai.flow.flow import Flow, listen, router, start
 from pydantic import BaseModel
@@ -446,15 +417,11 @@ class RouterFlow(Flow[ExampleState]):
         print("Fourth method running")
 
 
-async def main():
-    flow = RouterFlow()
-    await flow.kickoff()
-
-
-asyncio.run(main())
+flow = RouterFlow()
+flow.kickoff()
 ```
 
-``` text Output
+```text Output
 Starting the structured flow
 Third method running
 Fourth method running
@@ -462,16 +429,16 @@ Fourth method running
 
 </CodeGroup>
 
-In the above example, the `start_method` generates a random boolean value and sets it in the state.
-The `second_method` uses the `@router()` decorator to define conditional routing logic based on the value of the boolean.
-If the boolean is `True`, the method returns `"success"`, and if it is `False`, the method returns `"failed"`.
+In the above example, the `start_method` generates a random boolean value and sets it in the state.
+The `second_method` uses the `@router()` decorator to define conditional routing logic based on the value of the boolean.
+If the boolean is `True`, the method returns `"success"`, and if it is `False`, the method returns `"failed"`.
 The `third_method` and `fourth_method` listen to the output of the `second_method` and execute based on the returned value.
 
 When you run this Flow, the output will change based on the random boolean value generated by the `start_method`.
 
 ## Adding Crews to Flows
 
-Creating a flow with multiple crews in CrewAI is straightforward.
+Creating a flow with multiple crews in CrewAI is straightforward.
 
 You can generate a new CrewAI project that includes all the scaffolding needed to create a flow with multiple crews by running the following command:
@@ -485,22 +452,21 @@ This command will generate a new CrewAI project with the necessary folder struct
 
 After running the `crewai create flow name_of_flow` command, you will see a folder structure similar to the following:
 
-| Directory/File | Description |
-|:---------------------------------|:------------------------------------------------------------------|
-| `name_of_flow/` | Root directory for the flow. |
-| ├── `crews/` | Contains directories for specific crews. |
-| │ └── `poem_crew/` | Directory for the "poem_crew" with its configurations and scripts.|
-| │ ├── `config/` | Configuration files directory for the "poem_crew". |
-| │ │ ├── `agents.yaml` | YAML file defining the agents for "poem_crew". |
-| │ │ └── `tasks.yaml` | YAML file defining the tasks for "poem_crew". |
-| │ ├── `poem_crew.py` | Script for "poem_crew" functionality. |
-| ├── `tools/` | Directory for additional tools used in the flow. |
-| │ └── `custom_tool.py` | Custom tool implementation. |
-| ├── `main.py` | Main script for running the flow. |
-| ├── `README.md` | Project description and instructions. |
-| ├── `pyproject.toml` | Configuration file for project dependencies and settings. |
-| └── `.gitignore` | Specifies files and directories to ignore in version control. |
+| Directory/File          | Description                                                         |
+| :---------------------- | :------------------------------------------------------------------ |
+| `name_of_flow/`         | Root directory for the flow.                                        |
+| ├── `crews/`            | Contains directories for specific crews.                            |
+| │ └── `poem_crew/`      | Directory for the "poem_crew" with its configurations and scripts.  |
+| │ ├── `config/`         | Configuration files directory for the "poem_crew".                  |
+| │ │ ├── `agents.yaml`   | YAML file defining the agents for "poem_crew".                      |
+| │ │ └── `tasks.yaml`    | YAML file defining the tasks for "poem_crew".                       |
+| │ ├── `poem_crew.py`    | Script for "poem_crew" functionality.                               |
+| ├── `tools/`            | Directory for additional tools used in the flow.                    |
+| │ └── `custom_tool.py`  | Custom tool implementation.                                         |
+| ├── `main.py`           | Main script for running the flow.                                   |
+| ├── `README.md`         | Project description and instructions.                               |
+| ├── `pyproject.toml`    | Configuration file for project dependencies and settings.           |
+| └── `.gitignore`        | Specifies files and directories to ignore in version control.       |
 
 ### Building Your Crews
@@ -520,7 +486,6 @@ Here's an example of how you can connect the `poem_crew` in the `main.py` file:
 
 ```python Code
 #!/usr/bin/env python
-import asyncio
 from random import randint
 
 from pydantic import BaseModel
@@ -536,14 +501,12 @@ class PoemFlow(Flow[PoemState]):
     @start()
     def generate_sentence_count(self):
         print("Generating sentence count")
-        # Generate a number between 1 and 5
         self.state.sentence_count = randint(1, 5)
 
     @listen(generate_sentence_count)
     def generate_poem(self):
         print("Generating poem")
-        poem_crew = PoemCrew().crew()
-        result = poem_crew.kickoff(inputs={"sentence_count": self.state.sentence_count})
+        result = PoemCrew().crew().kickoff(inputs={"sentence_count": self.state.sentence_count})
 
         print("Poem generated", result.raw)
         self.state.poem = result.raw
@@ -554,46 +517,45 @@ class PoemFlow(Flow[PoemState]):
         with open("poem.txt", "w") as f:
             f.write(self.state.poem)
 
-async def run():
-    """
-    Run the flow.
-    """
+def kickoff():
     poem_flow = PoemFlow()
-    await poem_flow.kickoff()
+    poem_flow.kickoff()
 
-def main():
-    asyncio.run(run())
 
 def plot():
    poem_flow = PoemFlow()
    poem_flow.plot()
 
 if __name__ == "__main__":
-    main()
+    kickoff()
 ```
 
 In this example, the `PoemFlow` class defines a flow that generates a sentence count, uses the `PoemCrew` to generate a poem, and then saves the poem to a file. The flow is kicked off by calling the `kickoff()` method.
 
 ### Running the Flow
 
-Before running the flow, make sure to install the dependencies by running:
+(Optional) Before running the flow, you can install the dependencies by running:
 
 ```bash
-poetry install
+crewai install
 ```
 
 Once all of the dependencies are installed, you need to activate the virtual environment by running:
 
 ```bash
-poetry shell
+source .venv/bin/activate
 ```
 
 After activating the virtual environment, you can run the flow by executing one of the following commands:
 
 ```bash
-crewai flow run
+crewai flow kickoff
 ```
 
 or
 
 ```bash
-poetry run run_flow
+uv run kickoff
 ```
 
 The flow will execute, and you should see the output in the console.
@@ -653,4 +615,17 @@ If you're interested in exploring additional examples of flows, we have a variet
 
 4. **Meeting Assistant Flow**: This flow demonstrates how to broadcast one event to trigger multiple follow-up actions. For instance, after a meeting is completed, the flow can update a Trello board, send a Slack message, and save the results. It's a great example of handling multiple outcomes from a single event, making it ideal for comprehensive task management and notification systems. [View Example](https://github.com/crewAIInc/crewAI-examples/tree/main/meeting_assistant_flow)
 
-By exploring these examples, you can gain insights into how to leverage CrewAI Flows for various use cases, from automating repetitive tasks to managing complex, multi-step processes with dynamic decision-making and human feedback.
+By exploring these examples, you can gain insights into how to leverage CrewAI Flows for various use cases, from automating repetitive tasks to managing complex, multi-step processes with dynamic decision-making and human feedback.
+
+Also, check out our YouTube video on how to use flows in CrewAI below!
+
+<iframe
+  width="560"
+  height="315"
+  src="https://www.youtube.com/embed/MTb5my6VOT8"
+  title="YouTube video player"
+  frameborder="0"
+  allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share"
+  referrerpolicy="strict-origin-when-cross-origin"
+  allowfullscreen
+></iframe>
````
**docs: LLMs**

````diff
@@ -62,6 +62,8 @@ os.environ["OPENAI_API_BASE"] = "https://api.your-provider.com/v1"
 2. Using LLM class attributes:
 
 ```python Code
+from crewai import LLM
+
 llm = LLM(
     model="custom-model-name",
     api_key="your-api-key",
@@ -95,9 +97,11 @@ When configuring an LLM for your agent, you have access to a wide range of param
 | **api_key** | `str` | Your API key for authentication. |
 
-Example:
+## OpenAI Example Configuration
 
 ```python Code
+from crewai import LLM
+
 llm = LLM(
     model="gpt-4",
     temperature=0.8,
@@ -112,15 +116,31 @@
 )
 agent = Agent(llm=llm, ...)
 ```
 
+## Cerebras Example Configuration
+
+```python Code
+from crewai import LLM
+
+llm = LLM(
+    model="cerebras/llama-3.1-70b",
+    base_url="https://api.cerebras.ai/v1",
+    api_key="your-api-key-here"
+)
+agent = Agent(llm=llm, ...)
+```
+
 ## Using Ollama (Local LLMs)
 
-crewAI supports using Ollama for running open-source models locally:
+CrewAI supports using Ollama for running open-source models locally:
 
 1. Install Ollama: [ollama.ai](https://ollama.ai/)
 2. Run a model: `ollama run llama2`
 3. Configure agent:
 
 ```python Code
+from crewai import LLM
+
 agent = Agent(
     llm=LLM(model="ollama/llama3.1", base_url="http://localhost:11434"),
     ...
@@ -132,6 +152,8 @@ agent = Agent(
 
 You can change the base API URL for any LLM provider by setting the `base_url` parameter:
 
 ```python Code
+from crewai import LLM
+
 llm = LLM(
     model="custom-model-name",
     base_url="https://api.your-provider.com/v1",
````
**docs: Memory**

````diff
@@ -6,18 +6,18 @@ icon: database
 
 ## Introduction to Memory Systems in CrewAI
 
-The crewAI framework introduces a sophisticated memory system designed to significantly enhance the capabilities of AI agents.
-This system comprises `short-term memory`, `long-term memory`, `entity memory`, and `contextual memory`, each serving a unique purpose in aiding agents to remember,
+The crewAI framework introduces a sophisticated memory system designed to significantly enhance the capabilities of AI agents.
+This system comprises `short-term memory`, `long-term memory`, `entity memory`, and `contextual memory`, each serving a unique purpose in aiding agents to remember,
 reason, and learn from past interactions.
 
 ## Memory System Components
 
-| Component | Description |
-| :-------------------- | :---------- |
-| **Short-Term Memory** | Temporarily stores recent interactions and outcomes using `RAG`, enabling agents to recall and utilize information relevant to their current context during the current executions. |
-| **Long-Term Memory** | Preserves valuable insights and learnings from past executions, allowing agents to build and refine their knowledge over time. |
-| **Entity Memory** | Captures and organizes information about entities (people, places, concepts) encountered during tasks, facilitating deeper understanding and relationship mapping. Uses `RAG` for storing entity information. |
-| **Contextual Memory** | Maintains the context of interactions by combining `ShortTermMemory`, `LongTermMemory`, and `EntityMemory`, aiding in the coherence and relevance of agent responses over a sequence of tasks or a conversation. |
+| Component | Description |
+| :------------------- | :------------- |
+| **Short-Term Memory**| Temporarily stores recent interactions and outcomes using `RAG`, enabling agents to recall and utilize information relevant to their current context during the current executions.|
+| **Long-Term Memory** | Preserves valuable insights and learnings from past executions, allowing agents to build and refine their knowledge over time. |
+| **Entity Memory** | Captures and organizes information about entities (people, places, concepts) encountered during tasks, facilitating deeper understanding and relationship mapping. Uses `RAG` for storing entity information. |
+| **Contextual Memory**| Maintains the context of interactions by combining `ShortTermMemory`, `LongTermMemory`, and `EntityMemory`, aiding in the coherence and relevance of agent responses over a sequence of tasks or a conversation. |
 
 ## How Memory Systems Empower Agents
 
@@ -30,11 +30,11 @@ reason, and learn from past interactions.
 ## Implementing Memory in Your Crew
 
 When configuring a crew, you can enable and customize each memory component to suit the crew's objectives and the nature of tasks it will perform.
-By default, the memory system is disabled, and you can ensure it is active by setting `memory=True` in the crew configuration.
-The memory will use OpenAI embeddings by default, but you can change it by setting `embedder` to a different model.
+By default, the memory system is disabled, and you can ensure it is active by setting `memory=True` in the crew configuration.
+The memory will use OpenAI embeddings by default, but you can change it by setting `embedder` to a different model.
 It's also possible to initialize the memory instance with your own instance.
 
-The 'embedder' only applies to **Short-Term Memory** which uses Chroma for RAG using the EmbedChain package.
+The 'embedder' only applies to **Short-Term Memory** which uses Chroma for RAG.
 The **Long-Term Memory** uses SQLite3 to store task results. Currently, there is no way to override these storage implementations.
 The data storage files are saved into a platform-specific location found using the appdirs package,
 and the name of the project can be overridden using the **CREWAI_STORAGE_DIR** environment variable.
````
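The storage-location override mentioned above is just an environment variable; a minimal sketch (the directory path is a placeholder):

```python
import os

# Must be set before the crew (and its memory storage) is created,
# otherwise the default appdirs-derived location is used.
os.environ["CREWAI_STORAGE_DIR"] = "./my_project_storage"
```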
````diff
@@ -92,10 +92,10 @@ my_crew = Crew(
 )
 ```
 
 ## Additional Embedding Providers
 
 ### Using OpenAI embeddings (already default)
 
 ```python Code
 from crewai import Crew, Agent, Task, Process
@@ -113,6 +113,42 @@ my_crew = Crew(
     }
 )
 ```
+Alternatively, you can directly pass the OpenAIEmbeddingFunction to the embedder parameter.
+
+Example:
+```python Code
+from crewai import Crew, Agent, Task, Process
+from chromadb.utils.embedding_functions import OpenAIEmbeddingFunction
+
+my_crew = Crew(
+    agents=[...],
+    tasks=[...],
+    process=Process.sequential,
+    memory=True,
+    verbose=True,
+    embedder=OpenAIEmbeddingFunction(api_key=os.getenv("OPENAI_API_KEY"), model_name="text-embedding-3-small"),
+)
+```
+
+### Using Ollama embeddings
+
+```python Code
+from crewai import Crew, Agent, Task, Process
+
+my_crew = Crew(
+    agents=[...],
+    tasks=[...],
+    process=Process.sequential,
+    memory=True,
+    verbose=True,
+    embedder={
+        "provider": "ollama",
+        "config": {
+            "model": "mxbai-embed-large"
+        }
+    }
+)
+```
 
 ### Using Google AI embeddings
 
@@ -128,9 +164,8 @@ my_crew = Crew(
     embedder={
         "provider": "google",
         "config": {
-            "model": 'models/embedding-001',
-            "task_type": "retrieval_document",
-            "title": "Embeddings for Embedchain"
+            "api_key": "<YOUR_API_KEY>",
+            "model_name": "<model_name>"
         }
     }
 )
@@ -139,6 +174,7 @@
 ### Using Azure OpenAI embeddings
 
 ```python Code
+from chromadb.utils.embedding_functions import OpenAIEmbeddingFunction
 from crewai import Crew, Agent, Task, Process
 
 my_crew = Crew(
@@ -147,36 +183,20 @@
     process=Process.sequential,
     memory=True,
     verbose=True,
-    embedder={
-        "provider": "azure_openai",
-        "config": {
-            "model": 'text-embedding-ada-002',
-            "deployment_name": "your_embedding_model_deployment_name"
-        }
-    }
-)
-```
-
-### Using GPT4ALL embeddings
-
-```python Code
-from crewai import Crew, Agent, Task, Process
-
-my_crew = Crew(
-    agents=[...],
-    tasks=[...],
-    process=Process.sequential,
-    memory=True,
-    verbose=True,
-    embedder={
-        "provider": "gpt4all"
-    }
+    embedder=OpenAIEmbeddingFunction(
+        api_key="YOUR_API_KEY",
+        api_base="YOUR_API_BASE_PATH",
+        api_type="azure",
+        api_version="YOUR_API_VERSION",
+        model_name="text-embedding-3-small"
+    )
 )
 ```
 
 ### Using Vertex AI embeddings
 
 ```python Code
+from chromadb.utils.embedding_functions import GoogleVertexEmbeddingFunction
 from crewai import Crew, Agent, Task, Process
 
 my_crew = Crew(
@@ -185,12 +205,12 @@
     process=Process.sequential,
     memory=True,
     verbose=True,
-    embedder={
-        "provider": "vertexai",
-        "config": {
-            "model": 'textembedding-gecko'
-        }
-    }
+    embedder=GoogleVertexEmbeddingFunction(
+        project_id="YOUR_PROJECT_ID",
+        region="YOUR_REGION",
+        api_key="YOUR_API_KEY",
+        model_name="textembedding-gecko"
+    )
 )
 ```
@@ -208,8 +228,27 @@ my_crew = Crew(
     embedder={
         "provider": "cohere",
         "config": {
-            "model": "embed-english-v3.0",
-            "vector_dimension": 1024
+            "api_key": "YOUR_API_KEY",
+            "model_name": "<model_name>"
         }
     }
 )
 ```
+### Using HuggingFace embeddings
+
+```python Code
+from crewai import Crew, Agent, Task, Process
+
+my_crew = Crew(
+    agents=[...],
+    tasks=[...],
+    process=Process.sequential,
+    memory=True,
+    verbose=True,
+    embedder={
+        "provider": "huggingface",
+        "config": {
+            "api_url": "<api_url>",
+        }
+    }
+)
+```
@@ -223,13 +262,14 @@ crewai reset-memories [OPTIONS]
 
 #### Resetting Memory Options
 
-| Option | Description | Type | Default |
-| :------------------------ | :--------------------------------- | :------------- | :------ |
-| `-l`, `--long` | Reset LONG TERM memory. | Flag (boolean) | False |
-| `-s`, `--short` | Reset SHORT TERM memory. | Flag (boolean) | False |
-| `-e`, `--entities` | Reset ENTITIES memory. | Flag (boolean) | False |
-| `-a`, `--all` | Reset ALL memories. | Flag (boolean) | False |
+| Option | Description | Type | Default |
+| :----------------- | :------------------------------- | :------------- | :------ |
+| `-l`, `--long` | Reset LONG TERM memory. | Flag (boolean) | False |
+| `-s`, `--short` | Reset SHORT TERM memory. | Flag (boolean) | False |
+| `-e`, `--entities` | Reset ENTITIES memory. | Flag (boolean) | False |
+| `-k`, `--kickoff-outputs` | Reset LATEST KICKOFF TASK OUTPUTS. | Flag (boolean) | False |
+| `-a`, `--all` | Reset ALL memories. | Flag (boolean) | False |
 
 ## Benefits of Using CrewAI's Memory System
 
@@ -239,5 +279,5 @@ crewai reset-memories [OPTIONS]
 
 ## Conclusion
 
-Integrating CrewAI's memory system into your projects is straightforward. By leveraging the provided memory components and configurations,
+Integrating CrewAI's memory system into your projects is straightforward. By leveraging the provided memory components and configurations,
 you can quickly empower your agents with the ability to remember, reason, and learn from their interactions, unlocking new levels of intelligence and capability.
````
@@ -1,277 +0,0 @@
|
||||
---
|
||||
title: Pipelines
|
||||
description: Understanding and utilizing pipelines in the crewAI framework for efficient multi-stage task processing.
|
||||
icon: timeline-arrow
|
||||
---
|
||||
|
||||
## What is a Pipeline?
|
||||
|
||||
A pipeline in CrewAI represents a structured workflow that allows for the sequential or parallel execution of multiple crews. It provides a way to organize complex processes involving multiple stages, where the output of one stage can serve as input for subsequent stages.
|
||||
|
||||
## Key Terminology
|
||||
|
||||
Understanding the following terms is crucial for working effectively with pipelines:
|
||||
|
||||
- **Stage**: A distinct part of the pipeline, which can be either sequential (a single crew) or parallel (multiple crews executing concurrently).
|
||||
- **Kickoff**: A specific execution of the pipeline for a given set of inputs, representing a single instance of processing through the pipeline.
|
||||
- **Branch**: Parallel executions within a stage (e.g., concurrent crew operations).
|
||||
- **Trace**: The journey of an individual input through the entire pipeline, capturing the path and transformations it undergoes.
|
||||
|
||||
Example pipeline structure:
|
||||
|
||||
```bash Pipeline
|
||||
crew1 >> [crew2, crew3] >> crew4
|
||||
```
|
||||
|
||||
This represents a pipeline with three stages:
|
||||
|
||||
1. A sequential stage (crew1)
|
||||
2. A parallel stage with two branches (crew2 and crew3 executing concurrently)
|
||||
3. Another sequential stage (crew4)
|
||||
|
||||
Each input creates its own kickoff, flowing through all stages of the pipeline. Multiple kickoffs can be processed concurrently, each following the defined pipeline structure.
|
||||
|
||||
## Pipeline Attributes
|
||||
|
||||
| Attribute | Parameters | Description |
|
||||
| :--------- | :---------- | :----------------------------------------------------------------------------------------------------------------- |
|
||||
| **Stages** | `stages` | A list of `PipelineStage` (crews, lists of crews, or routers) representing the stages to be executed in sequence. |
|
||||
|
||||
## Creating a Pipeline
|
||||
|
||||
When creating a pipeline, you define a series of stages, each consisting of either a single crew or a list of crews for parallel execution.
|
||||
The pipeline ensures that each stage is executed in order, with the output of one stage feeding into the next.
|
||||
|
||||
### Example: Assembling a Pipeline
|
||||
|
||||
```python
|
||||
from crewai import Crew, Process, Pipeline
|
||||
|
||||
# Define your crews
|
||||
research_crew = Crew(
|
||||
agents=[researcher],
|
||||
tasks=[research_task],
|
||||
process=Process.sequential
|
||||
)
|
||||
|
||||
analysis_crew = Crew(
|
||||
agents=[analyst],
|
||||
tasks=[analysis_task],
|
||||
process=Process.sequential
|
||||
)
|
||||
|
||||
writing_crew = Crew(
|
||||
agents=[writer],
|
||||
tasks=[writing_task],
|
||||
process=Process.sequential
|
||||
)
|
||||
|
||||
# Assemble the pipeline
|
||||
my_pipeline = Pipeline(
|
||||
stages=[research_crew, analysis_crew, writing_crew]
|
||||
)
|
||||
```
|
||||
|
||||
## Pipeline Methods
|
||||
|
||||
| Method | Description |
|
||||
| :--------------- | :----------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
|
||||
| **kickoff** | Executes the pipeline, processing all stages and returning the results. This method initiates one or more kickoffs through the pipeline, handling the flow of data between stages. |
|
||||
| **process_runs** | Runs the pipeline for each input provided, handling the flow and transformation of data between stages. |

## Pipeline Output

The output of a pipeline in the CrewAI framework is encapsulated within the `PipelineKickoffResult` class.
This class provides a structured way to access the results of the pipeline's execution, including various formats such as raw strings, JSON, and Pydantic models.

### Pipeline Output Attributes

| Attribute       | Parameters    | Type                      | Description                                                                                               |
| :-------------- | :------------ | :------------------------ | :--------------------------------------------------------------------------------------------------------- |
| **ID**          | `id`          | `UUID4`                   | A unique identifier for the pipeline output.                                                              |
| **Run Results** | `run_results` | `List[PipelineRunResult]` | A list of `PipelineRunResult` objects, each representing the output of a single run through the pipeline. |

### Pipeline Output Methods

| Method/Property    | Description                                            |
| :----------------- | :----------------------------------------------------- |
| **add_run_result** | Adds a `PipelineRunResult` to the list of run results. |

### Pipeline Run Result Attributes

| Attribute         | Parameters      | Type                          | Description                                                                                      |
| :---------------- | :-------------- | :---------------------------- | :----------------------------------------------------------------------------------------------- |
| **ID**            | `id`            | `UUID4`                       | A unique identifier for the run result.                                                          |
| **Raw**           | `raw`           | `str`                         | The raw output of the final stage in the pipeline kickoff.                                       |
| **Pydantic**      | `pydantic`      | `Any`                         | A Pydantic model object representing the structured output of the final stage, if applicable.   |
| **JSON Dict**     | `json_dict`     | `Union[Dict[str, Any], None]` | A dictionary representing the JSON output of the final stage, if applicable.                    |
| **Token Usage**   | `token_usage`   | `Dict[str, UsageMetrics]`     | A summary of token usage across all stages of the pipeline kickoff.                             |
| **Trace**         | `trace`         | `List[Any]`                   | A trace of the journey of inputs through the pipeline kickoff.                                  |
| **Crews Outputs** | `crews_outputs` | `List[CrewOutput]`            | A list of `CrewOutput` objects, representing the outputs from each crew in the pipeline kickoff. |

### Pipeline Run Result Methods and Properties

| Method/Property | Description                                                                                               |
| :-------------- | :-------------------------------------------------------------------------------------------------------- |
| **json**        | Returns the JSON string representation of the run result if the output format of the final task is JSON. |
| **to_dict**     | Converts the JSON and Pydantic outputs to a dictionary.                                                   |
| **str**         | Returns the string representation of the run result, prioritizing Pydantic, then JSON, then raw.         |
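
A minimal sketch of these accessors on a single run result (assuming the final task's output format is JSON; `pipeline_output` is produced in the example below):

```python
result = pipeline_output.run_results[0]

print(result.json)       # JSON string, only if the final task outputs JSON
print(result.to_dict())  # dict built from the JSON or Pydantic output
print(str(result))       # falls back from Pydantic to JSON to raw
```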

### Accessing Pipeline Outputs

Once a pipeline has been executed, its output can be accessed through the `PipelineKickoffResult` object returned by the `process_runs` method.
The `PipelineKickoffResult` class provides access to individual `PipelineRunResult` objects, each representing a single run through the pipeline.

#### Example

```python
import json

# Define input data for the pipeline
input_data = [
    {"initial_query": "Latest advancements in AI"},
    {"initial_query": "Future of robotics"}
]

# Execute the pipeline (inside an async context)
pipeline_output = await my_pipeline.process_runs(input_data)

# Access the results
for run_result in pipeline_output.run_results:
    print(f"Run ID: {run_result.id}")
    print(f"Final Raw Output: {run_result.raw}")
    if run_result.json_dict:
        print(f"JSON Output: {json.dumps(run_result.json_dict, indent=2)}")
    if run_result.pydantic:
        print(f"Pydantic Output: {run_result.pydantic}")
    print(f"Token Usage: {run_result.token_usage}")
    print(f"Trace: {run_result.trace}")
    print("Crew Outputs:")
    for crew_output in run_result.crews_outputs:
        print(f"  Crew: {crew_output.raw}")
    print("\n")
```

This example demonstrates how to access and work with the pipeline output, including individual run results and their associated data.

## Using Pipelines

Pipelines are particularly useful for complex workflows that involve multiple stages of processing, analysis, or content generation. They allow you to:

1. **Sequence Operations**: Execute crews in a specific order, ensuring that the output of one crew is available as input to the next.
2. **Parallel Processing**: Run multiple crews concurrently within a stage for increased efficiency.
3. **Manage Complex Workflows**: Break down large tasks into smaller, manageable steps executed by specialized crews.

### Example: Running a Pipeline

```python
# Define input data for the pipeline
input_data = [{"initial_query": "Latest advancements in AI"}]

# Execute the pipeline, initiating a run for each input (inside an async context)
results = await my_pipeline.process_runs(input_data)

# Access the results
for result in results.run_results:
    print(f"Final Output: {result.raw}")
    print(f"Token Usage: {result.token_usage}")
    print(f"Trace: {result.trace}")  # Shows the path of the input through all stages
```

## Advanced Features

### Parallel Execution within Stages

You can define parallel execution within a stage by providing a list of crews, creating multiple branches:

```python
parallel_analysis_crew = Crew(agents=[financial_analyst], tasks=[financial_analysis_task])
market_analysis_crew = Crew(agents=[market_analyst], tasks=[market_analysis_task])

my_pipeline = Pipeline(
    stages=[
        research_crew,
        [parallel_analysis_crew, market_analysis_crew],  # Parallel execution (branching)
        writing_crew
    ]
)
```

### Routers in Pipelines

Routers are a powerful feature in crewAI pipelines that allow for dynamic decision-making and branching within your workflow.
They enable you to direct the flow of execution based on specific conditions or criteria, making your pipelines more flexible and adaptive.

#### What is a Router?

A router in crewAI is a special component that can be included as a stage in your pipeline. It evaluates the input data and determines which path the execution should take next.
This allows for conditional branching in your pipeline, where different crews or sub-pipelines can be executed based on the router's decision.

#### Key Components of a Router

1. **Routes**: A dictionary of named routes, each associated with a condition and a pipeline to execute if the condition is met.
2. **Default Route**: A fallback pipeline that is executed if none of the defined route conditions are met.

#### Creating a Router

Here's an example of how to create a router:

```python
from crewai import Router, Route, Pipeline, Crew, Agent, Task

# Define your agents
classifier = Agent(name="Classifier", role="Email Classifier")
urgent_handler = Agent(name="Urgent Handler", role="Urgent Email Processor")
normal_handler = Agent(name="Normal Handler", role="Normal Email Processor")

# Define your tasks
classify_task = Task(description="Classify the email based on its content and metadata.")
urgent_task = Task(description="Process and respond to urgent email quickly.")
normal_task = Task(description="Process and respond to normal email thoroughly.")

# Define your crews
classification_crew = Crew(agents=[classifier], tasks=[classify_task])  # Classifies email urgency on a 1-10 scale
urgent_crew = Crew(agents=[urgent_handler], tasks=[urgent_task])
normal_crew = Crew(agents=[normal_handler], tasks=[normal_task])

# Create pipelines for different urgency levels
urgent_pipeline = Pipeline(stages=[urgent_crew])
normal_pipeline = Pipeline(stages=[normal_crew])

# Create a router
email_router = Router(
    routes={
        "high_urgency": Route(
            condition=lambda x: x.get("urgency_score", 0) > 7,
            pipeline=urgent_pipeline
        ),
        "low_urgency": Route(
            condition=lambda x: x.get("urgency_score", 0) <= 7,
            pipeline=normal_pipeline
        )
    },
    default=normal_pipeline  # Fall back to the normal pipeline if no route matches
)

# Use the router in a main pipeline
main_pipeline = Pipeline(stages=[classification_crew, email_router])

inputs = [{"email": "..."}, {"email": "..."}]  # List of email data

main_pipeline.kickoff(inputs=inputs)
```

In this example, the router decides between an urgent pipeline and a normal pipeline based on the urgency score of the email. If the urgency score is greater than 7,
it routes to the urgent pipeline; otherwise, it uses the normal pipeline. If the input doesn't include an urgency score, the `low_urgency` condition still matches (the score defaults to 0), so the normal pipeline handles it.

#### Benefits of Using Routers

1. **Dynamic Workflow**: Adapt your pipeline's behavior based on input characteristics or intermediate results.
2. **Efficiency**: Route urgent tasks to quicker processes, reserving more thorough pipelines for less time-sensitive inputs.
3. **Flexibility**: Easily modify or extend your pipeline's logic without changing the core structure.
4. **Scalability**: Handle a wide range of email types and urgency levels with a single pipeline structure.

### Error Handling and Validation

The `Pipeline` class includes validation mechanisms to ensure the robustness of the pipeline structure:

- Validates that stages contain only Crew instances or lists of Crew instances.
- Prevents double nesting of stages to maintain a clear structure.
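
A minimal sketch of what that validation looks like in practice (an assumption: crewAI models are Pydantic-based, so invalid stage structures are expected to surface as a `ValidationError` at construction time):

```python
from pydantic import ValidationError

try:
    # A double-nested list of crews is not a valid stage structure
    Pipeline(stages=[research_crew, [[analysis_crew]]])
except ValidationError as e:
    print(f"Invalid pipeline structure: {e}")
```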

@@ -25,9 +25,9 @@ It provides a dashboard for tracking agent performance, session replays, and cus
Additionally, AgentOps provides session drilldowns for viewing Crew agent interactions, LLM calls, and tool usage in real-time.
This feature is useful for debugging and understanding how agents interact with users as well as other agents.

### Features

@@ -123,4 +123,4 @@ For feature requests or bug reports, please reach out to the AgentOps team on th
<span> • </span>
<a href="https://app.agentops.ai/?=crew">🖇️ AgentOps Dashboard</a>
<span> • </span>
<a href="https://docs.agentops.ai/introduction">📙 Documentation</a>

@@ -10,9 +10,9 @@ Langtrace is an open-source, external tool that helps you set up observability a
While not built directly into CrewAI, Langtrace can be used alongside CrewAI to gain deep visibility into the cost, latency, and performance of your CrewAI Agents.
This integration allows you to log hyperparameters, monitor performance regressions, and establish a process for continuous improvement of your Agents.

## Setup Instructions

@@ -69,4 +69,4 @@ This integration allows you to log hyperparameters, monitor performance regressi

6. **Testing and Evaluations**

   - Set up automated tests for your CrewAI agents and tasks.

BIN docs/images/crewai-run-poetry-error.png (new file, 104 KiB, binary file not shown)
BIN docs/images/crewai-update.png (new file, 50 KiB, binary file not shown)
@@ -1,11 +1,9 @@
---
title: Installation & Setup
title: Installation
description:
icon: wrench
---

## Install CrewAI

This guide will walk you through the installation process for CrewAI and its dependencies.
CrewAI is a flexible and powerful AI framework that enables you to create and manage AI agents, tools, and tasks efficiently.
Let's get started! 🚀
@@ -15,17 +13,8 @@ Let's get started! 🚀
</Tip>

<Steps>
    <Step title="Install Poetry">
        First, if you haven't already, install [Poetry](https://python-poetry.org/).
        CrewAI uses Poetry for dependency management and package handling, offering a seamless setup and execution experience.
        <CodeGroup>
            ```shell Terminal
            pip install poetry
            ```
        </CodeGroup>
    </Step>
    <Step title="Install CrewAI">
        Then, install the main CrewAI package:
        Install the main CrewAI package with the following command:
        <CodeGroup>
            ```shell Terminal
            pip install crewai
@@ -45,15 +34,29 @@ Let's get started! 🚀
        </CodeGroup>
    </Step>
    <Step title="Upgrade CrewAI">
        To upgrade CrewAI and CrewAI Tools to the latest version, run the following command:
        To upgrade CrewAI and CrewAI Tools to the latest version, run the following command
        <CodeGroup>
            ```shell Terminal
            pip install --upgrade crewai crewai-tools
            ```
        </CodeGroup>
        <Note>
            1. If you're using an older version of CrewAI, you may receive a warning about using `Poetry` for dependency management.

            2. In this case, you'll need to run the command below to update your project.
            This command will migrate your project to use [UV](https://github.com/astral-sh/uv) and update the necessary files.
            ```shell Terminal
            crewai update
            ```
            3. After running the command above, you should see the following output:

            4. You're all set! You can now proceed to the next step! 🎉
        </Note>
    </Step>
    <Step title="Verify the installation">
        To verify that `crewai` and `crewai-tools` are installed correctly, run the following command:
        To verify that `crewai` and `crewai-tools` are installed correctly, run the following command
        <CodeGroup>
            ```shell Terminal
            pip freeze | grep crewai

@@ -45,5 +45,5 @@ By fostering collaborative intelligence, CrewAI empowers agents to work together

## Next Step

- [Install CrewAI](/installation)
- [Install CrewAI](/installation) to get started with your first agent.

@@ -66,18 +66,17 @@
"pages": [
    "concepts/agents",
    "concepts/tasks",
    "concepts/tools",
    "concepts/processes",
    "concepts/crews",
    "concepts/flows",
    "concepts/llms",
    "concepts/processes",
    "concepts/collaboration",
    "concepts/pipeline",
    "concepts/training",
    "concepts/memory",
    "concepts/planning",
    "concepts/testing",
    "concepts/flows",
    "concepts/cli",
    "concepts/llms",
    "concepts/tools",
    "concepts/langchain-tools",
    "concepts/llamaindex-tools"
]

@@ -26,6 +26,7 @@ Follow the steps below to get crewing! 🚣‍♂️
  <Step title="Modify your `agents.yaml` file">
    <Tip>
      You can also modify the agents as needed to fit your use case or copy and paste as is to your project.
      Any variable interpolated in your `agents.yaml` and `tasks.yaml` files like `{topic}` will be replaced by the value of the variable in the `main.py` file.
    </Tip>
    ```yaml agents.yaml
    # src/latest_ai_development/config/agents.yaml
@@ -124,7 +125,7 @@ Follow the steps below to get crewing! 🚣‍♂️
    ```
  </Step>
  <Step title="Feel free to pass custom inputs to your crew">
    For example, you can pass the `topic` input to your crew to customize the research and reporting to medical llms or any other topic.
    For example, you can pass the `topic` input to your crew to customize the research and reporting.
    ```python main.py
    #!/usr/bin/env python
    # src/latest_ai_development/main.py
@@ -233,6 +234,74 @@ Follow the steps below to get crewing! 🚣‍♂️
  </Step>
</Steps>

### Note on Consistency in Naming

The names you use in your YAML files (`agents.yaml` and `tasks.yaml`) should match the method names in your Python code.
For example, you can reference the agent for specific tasks from the `tasks.yaml` file.
This naming consistency allows CrewAI to automatically link your configurations with your code; otherwise, your task won't recognize the reference properly.

#### Example References

<Tip>
  Note how we use the same name for the agent in the `agents.yaml` (`email_summarizer`) file as the method name in the `crew.py` (`email_summarizer`) file.
</Tip>

```yaml agents.yaml
email_summarizer:
  role: >
    Email Summarizer
  goal: >
    Summarize emails into a concise and clear summary
  backstory: >
    You will create a 5 bullet point summary of the report
  llm: mixtral_llm
```

<Tip>
  Note how we use the same name for the task in the `tasks.yaml` (`email_summarizer_task`) file as the method name in the `crew.py` (`email_summarizer_task`) file.
</Tip>

```yaml tasks.yaml
email_summarizer_task:
  description: >
    Summarize the email into a 5 bullet point summary
  expected_output: >
    A 5 bullet point summary of the email
  agent: email_summarizer
  context:
    - reporting_task
    - research_task
```

Use the annotations to properly reference the agent and task in the `crew.py` file.

### Annotations include:

* `@agent`
* `@task`
* `@crew`
* `@tool`
* `@callback`
* `@output_json`
* `@output_pydantic`
* `@cache_handler`

```python crew.py
# ...
@agent
def email_summarizer(self) -> Agent:
    return Agent(
        config=self.agents_config["email_summarizer"],
    )

@task
def email_summarizer_task(self) -> Task:
    return Task(
        config=self.tasks_config["email_summarizer_task"],
    )
# ...
```

<Tip>
  In addition to the [sequential process](../how-to/sequential-process), you can use the [hierarchical process](../how-to/hierarchical-process),
  which automatically assigns a manager to the defined crew to properly coordinate the planning and execution of tasks through delegation and validation of results.
</Tip>

@@ -241,7 +310,7 @@ You can learn more about the core concepts [here](/concepts).

### Replay Tasks from Latest Crew Kickoff

CrewAI now includes a replay feature that allows you to list the tasks from the last run and replay from a specific one. To use this feature, run:
CrewAI now includes a replay feature that allows you to list the tasks from the last run and replay from a specific one. To use this feature, run.

```shell
crewai replay <task_id>

@@ -8,13 +8,13 @@ icon: eye

## Description

This tool is used to extract text from images. When passed to the agent it will extract the text from the image and then use it to generate a response, report or any other output.
The URL or the PATH of the image should be passed to the Agent.

## Installation

Install the crewai_tools package

```shell
pip install 'crewai[tools]'
```

@@ -44,7 +44,6 @@ def researcher(self) -> Agent:

The VisionTool requires the following arguments:

| Argument       | Type     | Description                                                                       |
| :------------- | :------- | :-------------------------------------------------------------------------------- |
| **image_path** | `string` | **Mandatory**. The path to the image file from which text needs to be extracted. |

| Argument           | Type     | Description                                                                       |
| :----------------- | :------- | :-------------------------------------------------------------------------------- |
| **image_path_url** | `string` | **Mandatory**. The path to the image file from which text needs to be extracted. |
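
For context, a minimal usage sketch based on the documented interface (the agent fields below are illustrative assumptions, not part of this diff):

```python
from crewai import Agent
from crewai_tools import VisionTool

vision_tool = VisionTool()

researcher = Agent(
    role="Researcher",
    goal="Extract and analyze text from image-based sources",
    backstory="An analyst who works with scanned documents.",
    tools=[vision_tool],
)
```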

6 poetry.lock (generated)

@@ -1597,12 +1597,12 @@ files = [
google-auth = ">=2.14.1,<3.0.dev0"
googleapis-common-protos = ">=1.56.2,<2.0.dev0"
grpcio = [
    {version = ">=1.33.2,<2.0dev", optional = true, markers = "python_version < \"3.11\" and extra == \"grpc\""},
    {version = ">=1.49.1,<2.0dev", optional = true, markers = "python_version >= \"3.11\" and extra == \"grpc\""},
    {version = ">=1.33.2,<2.0dev", optional = true, markers = "python_version < \"3.11\" and extra == \"grpc\""},
]
grpcio-status = [
    {version = ">=1.33.2,<2.0.dev0", optional = true, markers = "python_version < \"3.11\" and extra == \"grpc\""},
    {version = ">=1.49.1,<2.0.dev0", optional = true, markers = "python_version >= \"3.11\" and extra == \"grpc\""},
    {version = ">=1.33.2,<2.0.dev0", optional = true, markers = "python_version < \"3.11\" and extra == \"grpc\""},
]
proto-plus = ">=1.22.3,<2.0.0dev"
protobuf = ">=3.19.5,<3.20.0 || >3.20.0,<3.20.1 || >3.20.1,<4.21.0 || >4.21.0,<4.21.1 || >4.21.1,<4.21.2 || >4.21.2,<4.21.3 || >4.21.3,<4.21.4 || >4.21.4,<4.21.5 || >4.21.5,<6.0.0.dev0"
@@ -4286,8 +4286,8 @@ files = [

[package.dependencies]
numpy = [
    {version = ">=1.22.4", markers = "python_version < \"3.11\""},
    {version = ">=1.23.2", markers = "python_version == \"3.11\""},
    {version = ">=1.22.4", markers = "python_version < \"3.11\""},
    {version = ">=1.26.0", markers = "python_version >= \"3.12\""},
]
python-dateutil = ">=2.8.2"

108 pyproject.toml

@@ -1,65 +1,65 @@
[tool.poetry]
[project]
name = "crewai"
version = "0.70.1"
version = "0.76.2"
description = "Cutting-edge framework for orchestrating role-playing, autonomous AI agents. By fostering collaborative intelligence, CrewAI empowers agents to work together seamlessly, tackling complex tasks."
authors = ["Joao Moura <joao@crewai.com>"]
readme = "README.md"
packages = [{ include = "crewai", from = "src" }]
requires-python = ">=3.10,<=3.13"
authors = [
    { name = "Joao Moura", email = "joao@crewai.com" }
]
dependencies = [
    "pydantic>=2.4.2",
    "langchain>=0.2.16",
    "openai>=1.13.3",
    "opentelemetry-api>=1.22.0",
    "opentelemetry-sdk>=1.22.0",
    "opentelemetry-exporter-otlp-proto-http>=1.22.0",
    "instructor>=1.3.3",
    "regex>=2024.9.11",
    "crewai-tools>=0.13.2",
    "click>=8.1.7",
    "python-dotenv>=1.0.0",
    "appdirs>=1.4.4",
    "jsonref>=1.1.0",
    "json-repair>=0.25.2",
    "auth0-python>=4.7.1",
    "litellm>=1.44.22",
    "pyvis>=0.3.2",
    "uv>=0.4.25",
    "tomli-w>=1.1.0",
    "chromadb>=0.4.24",
]

[tool.poetry.urls]
[project.urls]
Homepage = "https://crewai.com"
Documentation = "https://docs.crewai.com"
Repository = "https://github.com/crewAIInc/crewAI"

[tool.poetry.dependencies]
python = ">=3.10,<=3.13"
pydantic = "^2.4.2"
langchain = "^0.2.16"
openai = "^1.13.3"
opentelemetry-api = "^1.22.0"
opentelemetry-sdk = "^1.22.0"
opentelemetry-exporter-otlp-proto-http = "^1.22.0"
instructor = "1.3.3"
regex = "^2024.9.11"
crewai-tools = { version = "^0.12.1", optional = true }
click = "^8.1.7"
python-dotenv = "^1.0.0"
appdirs = "^1.4.4"
jsonref = "^1.1.0"
agentops = { version = "^0.3.0", optional = true }
embedchain = "0.1.122"
json-repair = "^0.25.2"
auth0-python = "^4.7.1"
poetry = "^1.8.3"
litellm = "^1.44.22"
pyvis = "^0.3.2"
[project.optional-dependencies]
tools = ["crewai-tools>=0.13.2"]
agentops = ["agentops>=0.3.0"]

[tool.poetry.extras]
tools = ["crewai-tools"]
agentops = ["agentops"]
[tool.uv]
dev-dependencies = [
    "ruff>=0.4.10",
    "mypy>=1.10.0",
    "pre-commit>=3.6.0",
    "mkdocs>=1.4.3",
    "mkdocstrings>=0.22.0",
    "mkdocstrings-python>=1.1.2",
    "mkdocs-material>=9.5.7",
    "mkdocs-material-extensions>=1.3.1",
    "pillow>=10.2.0",
    "cairosvg>=2.7.1",
    "crewai-tools>=0.13.2",
    "pytest>=8.0.0",
    "pytest-vcr>=1.0.2",
    "python-dotenv>=1.0.0",
    "pytest-asyncio>=0.23.7",
    "pytest-subprocess>=1.5.2",
]

[tool.poetry.group.dev.dependencies]
isort = "^5.13.2"
mypy = "1.10.0"
autoflake = "^2.2.1"
pre-commit = "^3.6.0"
mkdocs = "^1.4.3"
mkdocstrings = "^0.22.0"
mkdocstrings-python = "^1.1.2"
mkdocs-material = { extras = ["imaging"], version = "^9.5.7" }
mkdocs-material-extensions = "^1.3.1"
pillow = "^10.2.0"
cairosvg = "^2.7.1"
crewai-tools = "^0.12.1"

[tool.poetry.group.test.dependencies]
pytest = "^8.0.0"
pytest-vcr = "^1.0.2"
python-dotenv = "1.0.0"
pytest-asyncio = "^0.23.7"
pytest-subprocess = "^1.5.2"

[tool.poetry.scripts]
[project.scripts]
crewai = "crewai.cli.cli:crewai"

[tool.mypy]
@@ -71,5 +71,5 @@ exclude = ["cli/templates"]
exclude_dirs = ["src/crewai/cli/templates"]

[build-system]
requires = ["poetry-core"]
build-backend = "poetry.core.masonry.api"
requires = ["hatchling"]
build-backend = "hatchling.build"

@@ -14,5 +14,5 @@ warnings.filterwarnings(
    category=UserWarning,
    module="pydantic.main",
)
__version__ = "0.70.1"
__version__ = "0.76.2"
__all__ = ["Agent", "Crew", "Process", "Task", "Pipeline", "Router", "LLM", "Flow"]

@@ -1,6 +1,8 @@
import os
import shutil
import subprocess
from inspect import signature
from typing import Any, List, Optional, Union
from typing import Any, List, Literal, Optional, Union

from pydantic import Field, InstanceOf, PrivateAttr, model_validator

@@ -112,6 +114,10 @@ class Agent(BaseAgent):
        default=2,
        description="Maximum number of retries for an agent to execute a task when an error occurs.",
    )
    code_execution_mode: Literal["safe", "unsafe"] = Field(
        default="safe",
        description="Mode for code execution: 'safe' (using Docker) or 'unsafe' (direct execution).",
    )
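    # Illustrative usage of the new field (a sketch, not part of this diff;
    # the other arguments are standard Agent fields):
    #
    #     coder = Agent(
    #         role="Python Developer",
    #         goal="Write and run small scripts",
    #         backstory="Validates ideas with quick experiments.",
    #         allow_code_execution=True,
    #         code_execution_mode="safe",  # or "unsafe" for direct host execution
    #     )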

    @model_validator(mode="after")
    def post_init_setup(self):
@@ -173,6 +179,9 @@ class Agent(BaseAgent):
        if not self.agent_executor:
            self._setup_agent_executor()

        if self.allow_code_execution:
            self._validate_docker_installation()

        return self

    def _setup_agent_executor(self):
@@ -201,8 +210,6 @@ class Agent(BaseAgent):

        task_prompt = task.prompt()

        print("context for task", context)

        if context:
            task_prompt = self.i18n.slice("task_with_context").format(
                task=task_prompt, context=context
@@ -213,8 +220,6 @@ class Agent(BaseAgent):
                self.crew._short_term_memory,
                self.crew._long_term_memory,
                self.crew._entity_memory,
                self.crew._user_memory,
                self.crew.memory_provider,
            )
            memory = contextual_memory.build_context_for_task(task, context)
            if memory.strip() != "":
@@ -312,7 +317,9 @@ class Agent(BaseAgent):
            try:
                from crewai_tools import CodeInterpreterTool

                return [CodeInterpreterTool()]
                # Set the unsafe_mode based on the code_execution_mode attribute
                unsafe_mode = self.code_execution_mode == "unsafe"
                return [CodeInterpreterTool(unsafe_mode=unsafe_mode)]
            except ModuleNotFoundError:
                self._logger.log(
                    "info", "Coding tools not available. Install crewai_tools. "
@@ -398,7 +405,7 @@ class Agent(BaseAgent):
        """
        tool_strings = []
        for tool in tools:
            args_schema = str(tool.args)
            args_schema = str(tool.model_fields)
            if hasattr(tool, "func") and tool.func:
                sig = signature(tool.func)
                description = (
@@ -412,6 +419,25 @@ class Agent(BaseAgent):

        return "\n".join(tool_strings)

    def _validate_docker_installation(self) -> None:
        """Check if Docker is installed and running."""
        if not shutil.which("docker"):
            raise RuntimeError(
                f"Docker is not installed. Please install Docker to use code execution with agent: {self.role}"
            )

        try:
            subprocess.run(
                ["docker", "info"],
                check=True,
                stdout=subprocess.PIPE,
                stderr=subprocess.PIPE,
            )
        except subprocess.CalledProcessError:
            raise RuntimeError(
                f"Docker is not running. Please start Docker to use code execution with agent: {self.role}"
            )

    @staticmethod
    def __tools_names(tools) -> str:
        return ", ".join([t.name for t in tools])

@@ -17,7 +17,7 @@ if TYPE_CHECKING:

class CrewAgentExecutorMixin:
    crew: Optional["Crew"]
    crew_agent: Optional["BaseAgent"]
    agent: Optional["BaseAgent"]
    task: Optional["Task"]
    iterations: int
    have_forced_answer: bool
@@ -33,9 +33,9 @@ class CrewAgentExecutorMixin:
        """Create and save a short-term memory item if conditions are met."""
        if (
            self.crew
            and self.crew_agent
            and self.agent
            and self.task
            and "Action: Delegate work to coworker" not in output.log
            and "Action: Delegate work to coworker" not in output.text
        ):
            try:
                if (
@@ -43,11 +43,11 @@ class CrewAgentExecutorMixin:
                    and self.crew._short_term_memory
                ):
                    self.crew._short_term_memory.save(
                        value=output.log,
                        value=output.text,
                        metadata={
                            "observation": self.task.description,
                        },
                        agent=self.crew_agent.role,
                        agent=self.agent.role,
                    )
            except Exception as e:
                print(f"Failed to add to short term memory: {e}")
@@ -61,18 +61,18 @@ class CrewAgentExecutorMixin:
            and self.crew._long_term_memory
            and self.crew._entity_memory
            and self.task
            and self.crew_agent
            and self.agent
        ):
            try:
                ltm_agent = TaskEvaluator(self.crew_agent)
                evaluation = ltm_agent.evaluate(self.task, output.log)
                ltm_agent = TaskEvaluator(self.agent)
                evaluation = ltm_agent.evaluate(self.task, output.text)

                if isinstance(evaluation, ConverterError):
                    return

                long_term_memory = LongTermMemoryItem(
                    task=self.task.description,
                    agent=self.crew_agent.role,
                    agent=self.agent.role,
                    quality=evaluation.quality,
                    datetime=str(time.time()),
                    expected_output=self.task.expected_output,

@@ -81,6 +81,7 @@ class BaseAgentTools(BaseModel, ABC):
        task_with_assigned_agent = Task(  # type: ignore # Incompatible types in assignment (expression has type "Task", variable has type "str")
            description=task,
            agent=agent,
            expected_output="Your best answer to your coworker asking you this, accounting for the context shared.",
            expected_output=agent.i18n.slice("manager_request"),
            i18n=agent.i18n,
        )
        return agent.execute_task(task_with_assigned_agent, context)

@@ -2,6 +2,7 @@ import json
import re
from typing import Any, Dict, List, Union

from crewai.agents.agent_builder.base_agent import BaseAgent
from crewai.agents.agent_builder.base_agent_executor_mixin import CrewAgentExecutorMixin
from crewai.agents.parser import (
    FINAL_ANSWER_AND_PARSABLE_ACTION_ERROR_MESSAGE,
@@ -29,7 +30,7 @@ class CrewAgentExecutor(CrewAgentExecutorMixin):
        llm: Any,
        task: Any,
        crew: Any,
        agent: Any,
        agent: BaseAgent,
        prompt: dict[str, str],
        max_iter: int,
        tools: List[Any],
@@ -103,7 +104,8 @@ class CrewAgentExecutor(CrewAgentExecutorMixin):

        if self.crew and self.crew._train:
            self._handle_crew_training_output(formatted_answer)

        self._create_short_term_memory(formatted_answer)
        self._create_long_term_memory(formatted_answer)
        return {"output": formatted_answer.output}

    def _invoke_loop(self, formatted_answer=None):
@@ -151,7 +153,7 @@ class CrewAgentExecutor(CrewAgentExecutorMixin):
                        )
                        self.have_forced_answer = True
                    self.messages.append(
                        self._format_msg(formatted_answer.text, role="user")
                        self._format_msg(formatted_answer.text, role="assistant")
                    )

            except OutputParserException as e:
@@ -176,6 +178,8 @@ class CrewAgentExecutor(CrewAgentExecutorMixin):
        return formatted_answer

    def _show_start_logs(self):
        if self.agent is None:
            raise ValueError("Agent cannot be None")
        if self.agent.verbose or (
            hasattr(self, "crew") and getattr(self.crew, "verbose", False)
        ):
@@ -188,6 +192,8 @@ class CrewAgentExecutor(CrewAgentExecutorMixin):
            )

    def _show_logs(self, formatted_answer: Union[AgentAction, AgentFinish]):
        if self.agent is None:
            raise ValueError("Agent cannot be None")
        if self.agent.verbose or (
            hasattr(self, "crew") and getattr(self.crew, "verbose", False)
        ):
@@ -306,7 +312,7 @@ class CrewAgentExecutor(CrewAgentExecutorMixin):
        self, result: AgentFinish, human_feedback: str | None = None
    ) -> None:
        """Function to handle the process of the training data."""
        agent_id = str(self.agent.id)
        agent_id = str(self.agent.id)  # type: ignore

        # Load training data
        training_handler = CrewTrainingHandler(TRAINING_DATA_FILE)
@@ -334,6 +340,32 @@ class CrewAgentExecutor(CrewAgentExecutorMixin):
                color="red",
            )

        if self.ask_for_human_input and human_feedback is not None:
            training_data = {
                "initial_output": result.output,
                "human_feedback": human_feedback,
                "agent": agent_id,
                "agent_role": self.agent.role,  # type: ignore
            }
            if self.crew is not None and hasattr(self.crew, "_train_iteration"):
                train_iteration = self.crew._train_iteration
                if isinstance(train_iteration, int):
                    CrewTrainingHandler(TRAINING_DATA_FILE).append(
                        train_iteration, agent_id, training_data
                    )
                else:
                    self._logger.log(
                        "error",
                        "Invalid train iteration type. Expected int.",
                        color="red",
                    )
            else:
                self._logger.log(
                    "error",
                    "Crew is None or does not have _train_iteration attribute.",
                    color="red",
                )

    def _format_prompt(self, prompt: str, inputs: Dict[str, str]) -> str:
        prompt = prompt.replace("{input}", inputs["input"])
        prompt = prompt.replace("{tool_names}", inputs["tool_names"])

@@ -14,13 +14,14 @@ from .authentication.main import AuthenticationCommand
from .deploy.main import DeployCommand
from .evaluate_crew import evaluate_crew
from .install_crew import install_crew
from .kickoff_flow import kickoff_flow
from .plot_flow import plot_flow
from .replay_from_task import replay_task_command
from .reset_memories_command import reset_memories_command
from .run_crew import run_crew
from .run_flow import run_flow
from .tools.main import ToolCommand
from .train_crew import train_crew
from .update_crew import update_crew


@click.group()
@@ -31,10 +32,12 @@ def crewai():
@crewai.command()
@click.argument("type", type=click.Choice(["crew", "pipeline", "flow"]))
@click.argument("name")
def create(type, name):
@click.option("--provider", type=str, help="The provider to use for the crew")
@click.option("--skip_provider", is_flag=True, help="Skip provider validation")
def create(type, name, provider, skip_provider=False):
    """Create a new crew, pipeline, or flow."""
    if type == "crew":
        create_crew(name)
        create_crew(name, provider, skip_provider)
    elif type == "pipeline":
        create_pipeline(name)
    elif type == "flow":
@@ -188,6 +191,12 @@ def run():
    run_crew()


@crewai.command()
def update():
    """Update the pyproject.toml of the Crew project to use uv."""
    update_crew()


@crewai.command()
def signup():
    """Sign Up/Login to CrewAI+."""
@@ -276,7 +285,13 @@ def tool_install(handle: str):


@tool.command(name="publish")
@click.option("--force", is_flag=True, show_default=True, default=False, help="Bypasses Git remote validations")
@click.option(
    "--force",
    is_flag=True,
    show_default=True,
    default=False,
    help="Bypasses Git remote validations",
)
@click.option("--public", "is_public", flag_value=True, default=False)
@click.option("--private", "is_public", flag_value=False)
def tool_publish(is_public: bool, force: bool):
@@ -291,11 +306,11 @@ def flow():
    pass


@flow.command(name="run")
@flow.command(name="kickoff")
def flow_run():
    """Run the Flow."""
    """Kickoff the Flow."""
    click.echo("Running the Flow")
    run_flow()
    kickoff_flow()


@flow.command(name="plot")

19 src/crewai/cli/constants.py (new file)

@@ -0,0 +1,19 @@
ENV_VARS = {
    'openai': ['OPENAI_API_KEY'],
    'anthropic': ['ANTHROPIC_API_KEY'],
    'gemini': ['GEMINI_API_KEY'],
    'groq': ['GROQ_API_KEY'],
    'ollama': ['FAKE_KEY'],
}

PROVIDERS = ['openai', 'anthropic', 'gemini', 'groq', 'ollama']

MODELS = {
    'openai': ['gpt-4', 'gpt-4o', 'gpt-4o-mini', 'o1-mini', 'o1-preview'],
    'anthropic': ['claude-3-5-sonnet-20240620', 'claude-3-sonnet-20240229', 'claude-3-opus-20240229', 'claude-3-haiku-20240307'],
    'gemini': ['gemini-1.5-flash', 'gemini-1.5-pro', 'gemini-gemma-2-9b-it', 'gemini-gemma-2-27b-it'],
    'groq': ['llama-3.1-8b-instant', 'llama-3.1-70b-versatile', 'llama-3.1-405b-reasoning', 'gemma2-9b-it', 'gemma-7b-it'],
    'ollama': ['llama3.1', 'mixtral'],
}

JSON_URL = "https://raw.githubusercontent.com/BerriAI/litellm/main/model_prices_and_context_window.json"

@@ -1,12 +1,19 @@
import sys
from pathlib import Path

import click

from crewai.cli.utils import copy_template
from crewai.cli.constants import ENV_VARS
from crewai.cli.provider import (
    PROVIDERS,
    get_provider_data,
    select_model,
    select_provider,
)
from crewai.cli.utils import copy_template, load_env_vars, write_env_file


def create_crew(name, parent_folder=None):
    """Create a new crew."""
def create_folder_structure(name, parent_folder=None):
    folder_name = name.replace(" ", "_").replace("-", "_").lower()
    class_name = name.replace("_", " ").replace("-", " ").title().replace(" ", "")

@@ -15,11 +22,19 @@ def create_crew(name, parent_folder=None):
    else:
        folder_path = Path(folder_name)

    click.secho(
        f"Creating {'crew' if parent_folder else 'folder'} {folder_name}...",
        fg="green",
        bold=True,
    )
    if folder_path.exists():
        if not click.confirm(
            f"Folder {folder_name} already exists. Do you want to override it?"
        ):
            click.secho("Operation cancelled.", fg="yellow")
            sys.exit(0)
        click.secho(f"Overriding folder {folder_name}...", fg="green", bold=True)
    else:
        click.secho(
            f"Creating {'crew' if parent_folder else 'folder'} {folder_name}...",
            fg="green",
            bold=True,
        )

    if not folder_path.exists():
        folder_path.mkdir(parents=True)
@@ -28,19 +43,126 @@ def create_crew(name, parent_folder=None):
        (folder_path / "src" / folder_name).mkdir(parents=True)
        (folder_path / "src" / folder_name / "tools").mkdir(parents=True)
        (folder_path / "src" / folder_name / "config").mkdir(parents=True)
        with open(folder_path / ".env", "w") as file:
            file.write("OPENAI_API_KEY=YOUR_API_KEY")
    else:
        click.secho(
            f"\tFolder {folder_name} already exists. Please choose a different name.",
            fg="red",

    return folder_path, folder_name, class_name


def copy_template_files(folder_path, name, class_name, parent_folder):
    package_dir = Path(__file__).parent
    templates_dir = package_dir / "templates" / "crew"

    root_template_files = (
        [".gitignore", "pyproject.toml", "README.md"] if not parent_folder else []
    )
    tools_template_files = ["tools/custom_tool.py", "tools/__init__.py"]
    config_template_files = ["config/agents.yaml", "config/tasks.yaml"]
    src_template_files = (
        ["__init__.py", "main.py", "crew.py"] if not parent_folder else ["crew.py"]
    )

    for file_name in root_template_files:
        src_file = templates_dir / file_name
        dst_file = folder_path / file_name
        copy_template(src_file, dst_file, name, class_name, folder_path.name)

    src_folder = (
        folder_path / "src" / folder_path.name if not parent_folder else folder_path
    )

    for file_name in src_template_files:
        src_file = templates_dir / file_name
        dst_file = src_folder / file_name
        copy_template(src_file, dst_file, name, class_name, folder_path.name)

    if not parent_folder:
        for file_name in tools_template_files + config_template_files:
            src_file = templates_dir / file_name
            dst_file = src_folder / file_name
            copy_template(src_file, dst_file, name, class_name, folder_path.name)


def create_crew(name, provider=None, skip_provider=False, parent_folder=None):
    folder_path, folder_name, class_name = create_folder_structure(name, parent_folder)
    env_vars = load_env_vars(folder_path)
    if not skip_provider:
        if not provider:
            provider_models = get_provider_data()
            if not provider_models:
                return

        existing_provider = None
        for provider, env_keys in ENV_VARS.items():
            if any(key in env_vars for key in env_keys):
                existing_provider = provider
                break

        if existing_provider:
            if not click.confirm(
                f"Found existing environment variable configuration for {existing_provider.capitalize()}. Do you want to override it?"
            ):
                click.secho("Keeping existing provider configuration.", fg="yellow")
                return

        provider_models = get_provider_data()
        if not provider_models:
            return

        while True:
            selected_provider = select_provider(provider_models)
            if selected_provider is None:  # User typed 'q'
                click.secho("Exiting...", fg="yellow")
                sys.exit(0)
            if selected_provider:  # Valid selection
                break
            click.secho(
                "No provider selected. Please try again or press 'q' to exit.", fg="red"
            )

        while True:
            selected_model = select_model(selected_provider, provider_models)
            if selected_model is None:  # User typed 'q'
                click.secho("Exiting...", fg="yellow")
                sys.exit(0)
            if selected_model:  # Valid selection
                break
            click.secho(
                "No model selected. Please try again or press 'q' to exit.", fg="red"
            )

        if selected_provider in PROVIDERS:
            api_key_var = ENV_VARS[selected_provider][0]
        else:
            api_key_var = click.prompt(
                f"Enter the environment variable name for your {selected_provider.capitalize()} API key",
                type=str,
                default="",
            )

        api_key_value = ""
        click.echo(
            f"Enter your {selected_provider.capitalize()} API key (press Enter to skip): ",
            nl=False,
        )
        try:
            api_key_value = input()
        except (KeyboardInterrupt, EOFError):
            api_key_value = ""

        if api_key_value.strip():
            env_vars = {api_key_var: api_key_value}
            write_env_file(folder_path, env_vars)
            click.secho("API key saved to .env file", fg="green")
        else:
            click.secho(
                "No API key provided. Skipping .env file creation.", fg="yellow"
            )

        env_vars["MODEL"] = selected_model
        click.secho(f"Selected model: {selected_model}", fg="green")

    package_dir = Path(__file__).parent
    templates_dir = package_dir / "templates" / "crew"

    # List of template files to copy
    root_template_files = (
        [".gitignore", "pyproject.toml", "README.md"] if not parent_folder else []
    )

@@ -5,13 +5,13 @@ import click

def evaluate_crew(n_iterations: int, model: str) -> None:
    """
    Test and Evaluate the crew by running a command in the Poetry environment.
    Test and Evaluate the crew by running a command in the UV environment.

    Args:
        n_iterations (int): The number of iterations to test the crew.
        model (str): The model to test the crew with.
    """
    command = ["poetry", "run", "test", str(n_iterations), model]
    command = ["uv", "run", "test", str(n_iterations), model]

    try:
        if n_iterations <= 0:

@@ -5,13 +5,10 @@ import click

def install_crew() -> None:
    """
    Install the crew by running the Poetry command to lock and install.
    Install the crew by running the UV command to lock and install.
    """
    try:
        subprocess.run(["poetry", "lock"], check=True, capture_output=False, text=True)
        subprocess.run(
            ["poetry", "install"], check=True, capture_output=False, text=True
        )
        subprocess.run(["uv", "sync"], check=True, capture_output=False, text=True)

    except subprocess.CalledProcessError as e:
        click.echo(f"An error occurred while running the crew: {e}", err=True)

@@ -3,11 +3,11 @@ import subprocess
import click


def run_flow() -> None:
def kickoff_flow() -> None:
    """
    Run the flow by running a command in the Poetry environment.
    Kickoff the flow by running a command in the UV environment.
    """
    command = ["poetry", "run", "run_flow"]
    command = ["uv", "run", "kickoff"]

    try:
        result = subprocess.run(command, capture_output=False, text=True, check=True)

@@ -5,9 +5,9 @@ import click

def plot_flow() -> None:
    """
    Plot the flow by running a command in the Poetry environment.
    Plot the flow by running a command in the UV environment.
    """
    command = ["poetry", "run", "plot_flow"]
    command = ["uv", "run", "plot"]

    try:
        result = subprocess.run(command, capture_output=False, text=True, check=True)

@@ -25,7 +25,9 @@ class PlusAPI:

    def _make_request(self, method: str, endpoint: str, **kwargs) -> requests.Response:
        url = urljoin(self.base_url, endpoint)
        return requests.request(method, url, headers=self.headers, **kwargs)
        session = requests.Session()
        session.trust_env = False
        return session.request(method, url, headers=self.headers, **kwargs)

    def login_to_tool_repository(self):
        return self._make_request("POST", f"{self.TOOLS_RESOURCE}/login")

227 src/crewai/cli/provider.py (new file)

@@ -0,0 +1,227 @@
import json
import time
from collections import defaultdict
from pathlib import Path

import click
import requests

from crewai.cli.constants import JSON_URL, MODELS, PROVIDERS


def select_choice(prompt_message, choices):
    """
    Presents a list of choices to the user and prompts them to select one.

    Args:
    - prompt_message (str): The message to display to the user before presenting the choices.
    - choices (list): A list of options to present to the user.

    Returns:
    - str: The selected choice from the list, or None if the user chooses to quit.
    """

    provider_models = get_provider_data()
    if not provider_models:
        return
    click.secho(prompt_message, fg="cyan")
    for idx, choice in enumerate(choices, start=1):
        click.secho(f"{idx}. {choice}", fg="cyan")
    click.secho("q. Quit", fg="cyan")

    while True:
        choice = click.prompt(
            "Enter the number of your choice or 'q' to quit", type=str
        )

        if choice.lower() == "q":
            return None

        try:
            selected_index = int(choice) - 1
            if 0 <= selected_index < len(choices):
                return choices[selected_index]
        except ValueError:
            pass

        click.secho(
            "Invalid selection. Please select a number between 1 and 6 or 'q' to quit.",
            fg="red",
        )


def select_provider(provider_models):
    """
    Presents a list of providers to the user and prompts them to select one.

    Args:
    - provider_models (dict): A dictionary of provider models.

    Returns:
    - str: The selected provider
    - None: If user explicitly quits
    """
    predefined_providers = [p.lower() for p in PROVIDERS]
    all_providers = sorted(set(predefined_providers + list(provider_models.keys())))

    provider = select_choice(
        "Select a provider to set up:", predefined_providers + ["other"]
    )
    if provider is None:  # User typed 'q'
        return None

    if provider == "other":
        provider = select_choice("Select a provider from the full list:", all_providers)
        if provider is None:  # User typed 'q'
            return None

    return provider.lower() if provider else False


def select_model(provider, provider_models):
    """
    Presents a list of models for a given provider to the user and prompts them to select one.

    Args:
    - provider (str): The provider for which to select a model.
    - provider_models (dict): A dictionary of provider models.

    Returns:
    - str: The selected model, or None if the operation is aborted or an invalid selection is made.
    """
    predefined_providers = [p.lower() for p in PROVIDERS]

    if provider in predefined_providers:
        available_models = MODELS.get(provider, [])
    else:
        available_models = provider_models.get(provider, [])

    if not available_models:
        click.secho(f"No models available for provider '{provider}'.", fg="red")
        return None

    selected_model = select_choice(
        f"Select a model to use for {provider.capitalize()}:", available_models
    )
    return selected_model


def load_provider_data(cache_file, cache_expiry):
    """
    Loads provider data from a cache file if it exists and is not expired. If the cache is expired or corrupted, it fetches the data from the web.

    Args:
    - cache_file (Path): The path to the cache file.
    - cache_expiry (int): The cache expiry time in seconds.

    Returns:
    - dict or None: The loaded provider data or None if the operation fails.
    """
    current_time = time.time()
    if (
        cache_file.exists()
        and (current_time - cache_file.stat().st_mtime) < cache_expiry
    ):
        data = read_cache_file(cache_file)
        if data:
            return data
        click.secho(
            "Cache is corrupted. Fetching provider data from the web...", fg="yellow"
        )
    else:
        click.secho(
            "Cache expired or not found. Fetching provider data from the web...",
            fg="cyan",
        )
    return fetch_provider_data(cache_file)


def read_cache_file(cache_file):
    """
    Reads and returns the JSON content from a cache file. Returns None if the file contains invalid JSON.

    Args:
    - cache_file (Path): The path to the cache file.

    Returns:
    - dict or None: The JSON content of the cache file or None if the JSON is invalid.
    """
    try:
        with open(cache_file, "r") as f:
            return json.load(f)
    except json.JSONDecodeError:
        return None


def fetch_provider_data(cache_file):
    """
    Fetches provider data from a specified URL and caches it to a file.

    Args:
    - cache_file (Path): The path to the cache file.

    Returns:
    - dict or None: The fetched provider data or None if the operation fails.
    """
    try:
        response = requests.get(JSON_URL, stream=True, timeout=10)
        response.raise_for_status()
        data = download_data(response)
        with open(cache_file, "w") as f:
            json.dump(data, f)
        return data
    except requests.RequestException as e:
        click.secho(f"Error fetching provider data: {e}", fg="red")
    except json.JSONDecodeError:
        click.secho("Error parsing provider data. Invalid JSON format.", fg="red")
    return None


def download_data(response):
    """
    Downloads data from a given HTTP response and returns the JSON content.

    Args:
    - response (requests.Response): The HTTP response object.

    Returns:
    - dict: The JSON content of the response.
    """
    total_size = int(response.headers.get("content-length", 0))
    block_size = 8192
    data_chunks = []
    with click.progressbar(
        length=total_size, label="Downloading", show_pos=True
    ) as progress_bar:
        for chunk in response.iter_content(block_size):
            if chunk:
                data_chunks.append(chunk)
                progress_bar.update(len(chunk))
    data_content = b"".join(data_chunks)
    return json.loads(data_content.decode("utf-8"))


def get_provider_data():
    """
    Retrieves provider data from a cache file, filters out models based on provider criteria, and returns a dictionary of providers mapped to their models.

    Returns:
    - dict or None: A dictionary of providers mapped to their models or None if the operation fails.
    """
    cache_dir = Path.home() / ".crewai"
    cache_dir.mkdir(exist_ok=True)
    cache_file = cache_dir / "provider_cache.json"
    cache_expiry = 24 * 3600

    data = load_provider_data(cache_file, cache_expiry)
    if not data:
        return None

    provider_models = defaultdict(list)
    for model_name, properties in data.items():
        provider = properties.get("litellm_provider", "").strip().lower()
        if "http" in provider or provider == "other":
            continue
        if provider:
            provider_models[provider].append(model_name)
    return provider_models

@@ -1,4 +1,5 @@
import subprocess

import click


@@ -9,7 +10,7 @@ def replay_task_command(task_id: str) -> None:
    Args:
        task_id (str): The ID of the task to replay from.
    """
    command = ["poetry", "run", "replay", task_id]
    command = ["uv", "run", "replay", task_id]

    try:
        result = subprocess.run(command, capture_output=False, text=True, check=True)

@@ -1,23 +1,48 @@
import subprocess

import click
import tomllib
from packaging import version

from crewai.cli.utils import get_crewai_version


def run_crew() -> None:
    """
    Run the crew by running a command in the Poetry environment.
    Run the crew by running a command in the UV environment.
    """
    command = ["poetry", "run", "run_crew"]
    command = ["uv", "run", "run_crew"]
    crewai_version = get_crewai_version()
    min_required_version = "0.71.0"

    with open("pyproject.toml", "rb") as f:
        data = tomllib.load(f)

    if data.get("tool", {}).get("poetry") and (
        version.parse(crewai_version) < version.parse(min_required_version)
    ):
        click.secho(
            f"You are running an older version of crewAI ({crewai_version}) that uses poetry pyproject.toml. "
            f"Please run `crewai update` to update your pyproject.toml to use uv.",
            fg="red",
        )
        print()

    try:
        result = subprocess.run(command, capture_output=False, text=True, check=True)

        if result.stderr:
            click.echo(result.stderr, err=True)
        subprocess.run(command, capture_output=False, text=True, check=True)

    except subprocess.CalledProcessError as e:
        click.echo(f"An error occurred while running the crew: {e}", err=True)
        click.echo(e.output, err=True)
        click.echo(e.output, err=True, nl=True)

        with open("pyproject.toml", "rb") as f:
            data = tomllib.load(f)

        if data.get("tool", {}).get("poetry"):
            click.secho(
                "It's possible that you are using an old version of crewAI that uses poetry, please run `crewai update` to update your pyproject.toml to use uv.",
                fg="yellow",
            )

    except Exception as e:
        click.echo(f"An unexpected error occurred: {e}", err=True)
|
||||
@@ -4,17 +4,17 @@ Welcome to the {{crew_name}} Crew project, powered by [crewAI](https://crewai.co
|
||||
|
||||
## Installation
|
||||
|
||||
Ensure you have Python >=3.10 <=3.13 installed on your system. This project uses [Poetry](https://python-poetry.org/) for dependency management and package handling, offering a seamless setup and execution experience.
|
||||
Ensure you have Python >=3.10 <=3.13 installed on your system. This project uses [UV](https://docs.astral.sh/uv/) for dependency management and package handling, offering a seamless setup and execution experience.
|
||||
|
||||
First, if you haven't already, install Poetry:
|
||||
First, if you haven't already, install uv:
|
||||
|
||||
```bash
|
||||
pip install poetry
|
||||
pip install uv
|
||||
```
|
||||
|
||||
Next, navigate to your project directory and install the dependencies:
|
||||
|
||||
1. First lock the dependencies and install them by using the CLI command:
|
||||
(Optional) Lock the dependencies and install them by using the CLI command:
|
||||
```bash
|
||||
crewai install
|
||||
```
|
||||
|
||||
@@ -2,8 +2,8 @@
import sys
from {{folder_name}}.crew import {{crew_name}}Crew

# This main file is intended to be a way for your to run your
# crew locally, so refrain from adding necessary logic into this file.
# This main file is intended to be a way for you to run your
# crew locally, so refrain from adding unnecessary logic into this file.
# Replace with inputs you want to test with, it will automatically
# interpolate any tasks and agents information


@@ -1,15 +1,14 @@
[tool.poetry]
[project]
name = "{{folder_name}}"
version = "0.1.0"
description = "{{name}} using crewAI"
authors = ["Your Name <you@example.com>"]
authors = [{ name = "Your Name", email = "you@example.com" }]
requires-python = ">=3.10,<=3.13"
dependencies = [
    "crewai[tools]>=0.76.2,<1.0.0"
]

[tool.poetry.dependencies]
python = ">=3.10,<=3.13"
crewai = { extras = ["tools"], version = ">=0.70.1,<1.0.0" }


[tool.poetry.scripts]
[project.scripts]
{{folder_name}} = "{{folder_name}}.main:run"
run_crew = "{{folder_name}}.main:run"
train = "{{folder_name}}.main:train"
@@ -17,5 +16,5 @@ replay = "{{folder_name}}.main:replay"
test = "{{folder_name}}.main:test"

[build-system]
requires = ["poetry-core"]
build-backend = "poetry.core.masonry.api"
requires = ["hatchling"]
build-backend = "hatchling.build"

@@ -4,18 +4,17 @@ Welcome to the {{crew_name}} Crew project, powered by [crewAI](https://crewai.co

## Installation

Ensure you have Python >=3.10 <=3.13 installed on your system. This project uses [Poetry](https://python-poetry.org/) for dependency management and package handling, offering a seamless setup and execution experience.
Ensure you have Python >=3.10 <=3.13 installed on your system. This project uses [UV](https://docs.astral.sh/uv/) for dependency management and package handling, offering a seamless setup and execution experience.

First, if you haven't already, install Poetry:
First, if you haven't already, install uv:

```bash
pip install poetry
pip install uv
```

Next, navigate to your project directory and install the dependencies:

1. First lock the dependencies and then install them:

(Optional) Lock the dependencies and install them by using the CLI command:
```bash
crewai install
```

@@ -1,65 +1,53 @@
#!/usr/bin/env python
import asyncio
from random import randint

from pydantic import BaseModel

from crewai.flow.flow import Flow, listen, start

from .crews.poem_crew.poem_crew import PoemCrew


class PoemState(BaseModel):
    sentence_count: int = 1
    poem: str = ""


class PoemFlow(Flow[PoemState]):

    @start()
    def generate_sentence_count(self):
        print("Generating sentence count")
        # Generate a number between 1 and 5
        self.state.sentence_count = randint(1, 5)
        self.state.sentence_count = randint(1, 5)

    @listen(generate_sentence_count)
    def generate_poem(self):
        print("Generating poem")
        print(f"State before poem: {self.state}")
        result = PoemCrew().crew().kickoff(inputs={"sentence_count": self.state.sentence_count})

        result = (
            PoemCrew()
            .crew()
            .kickoff(inputs={"sentence_count": self.state.sentence_count})
        )

        print("Poem generated", result.raw)
        self.state.poem = result.raw

        print(f"State after generate_poem: {self.state}")

    @listen(generate_poem)
    def save_poem(self):
        print("Saving poem")
        print(f"State before save_poem: {self.state}")
        with open("poem.txt", "w") as f:
            f.write(self.state.poem)
        print(f"State after save_poem: {self.state}")

async def run_flow():
    """
    Run the flow.
    """

def kickoff():
    poem_flow = PoemFlow()
    await poem_flow.kickoff()
    poem_flow.kickoff()

async def plot_flow():
    """
    Plot the flow.
    """

def plot():
    poem_flow = PoemFlow()
    poem_flow.plot()


def main():
    asyncio.run(run_flow())


def plot():
    asyncio.run(plot_flow())


if __name__ == "__main__":
    main()
    kickoff()

@@ -1,19 +1,17 @@
[tool.poetry]
[project]
name = "{{folder_name}}"
version = "0.1.0"
description = "{{name}} using crewAI"
authors = ["Your Name <you@example.com>"]
authors = [{ name = "Your Name", email = "you@example.com" }]
requires-python = ">=3.10,<=3.13"
dependencies = [
    "crewai[tools]>=0.76.2,<1.0.0",
]

[tool.poetry.dependencies]
python = ">=3.10,<=3.13"
crewai = { extras = ["tools"], version = ">=0.70.1,<1.0.0" }
asyncio = "*"

[tool.poetry.scripts]
{{folder_name}} = "{{folder_name}}.main:main"
run_flow = "{{folder_name}}.main:main"
plot_flow = "{{folder_name}}.main:plot"
[project.scripts]
kickoff = "{{folder_name}}.main:kickoff"
plot = "{{folder_name}}.main:plot"

[build-system]
requires = ["poetry-core"]
build-backend = "poetry.core.masonry.api"
requires = ["hatchling"]
build-backend = "hatchling.build"

@@ -6,7 +6,7 @@ authors = ["Your Name <you@example.com>"]

[tool.poetry.dependencies]
python = ">=3.10,<=3.13"
crewai = { extras = ["tools"], version = ">=0.70.1,<1.0.0" }
crewai = { extras = ["tools"], version = ">=0.76.2,<1.0.0" }
asyncio = "*"

[tool.poetry.scripts]

@@ -1,20 +1,21 @@
[tool.poetry]
[project]
name = "{{folder_name}}"
version = "0.1.0"
description = "{{name}} using crewAI"
authors = ["Your Name <you@example.com>"]
requires-python = ">=3.10,<=3.13"
dependencies = [
    "crewai[tools]>=0.76.2,<1.0.0"
]

[tool.poetry.dependencies]
python = ">=3.10,<=3.13"
crewai = { extras = ["tools"], version = ">=0.70.1,<1.0.0" }


[tool.poetry.scripts]
[project.scripts]
{{folder_name}} = "{{folder_name}}.main:main"
run_crew = "{{folder_name}}.main:main"
train = "{{folder_name}}.main:train"
replay = "{{folder_name}}.main:replay"
test = "{{folder_name}}.main:test"

[build-system]
requires = ["poetry-core"]
build-backend = "poetry.core.masonry.api"
requires = ["hatchling"]
build-backend = "hatchling.build"

10
src/crewai/cli/templates/tool/.gitignore
vendored
Normal file
@@ -0,0 +1,10 @@
# Python-generated files
__pycache__/
*.py[oc]
build/
dist/
wheels/
*.egg-info

# Virtual environments
.venv

@@ -6,13 +6,13 @@ custom tools to power up your crews.
## Installing

Ensure you have Python >=3.10 <=3.13 installed on your system. This project
uses [Poetry](https://python-poetry.org/) for dependency management and package
uses [UV](https://docs.astral.sh/uv/) for dependency management and package
handling, offering a seamless setup and execution experience.

First, if you haven't already, install Poetry:
First, if you haven't already, install `uv`:

```bash
pip install poetry
pip install uv
```

Next, navigate to your project directory and install the dependencies with:

@@ -1,14 +1,10 @@
[tool.poetry]
[project]
name = "{{folder_name}}"
version = "0.1.0"
description = "Power up your crews with {{folder_name}}"
authors = ["Your Name <you@example.com>"]
readme = "README.md"
requires-python = ">=3.10,<=3.13"
dependencies = [
    "crewai[tools]>=0.76.2"
]

[tool.poetry.dependencies]
python = ">=3.10,<=3.13"
crewai = { extras = ["tools"], version = ">=0.70.1,<1.0.0" }

[build-system]
requires = ["poetry-core"]
build-backend = "poetry.core.masonry.api"

@@ -1,20 +1,24 @@
import base64
from pathlib import Path
import click
import os
import platform
import subprocess
import tempfile
from pathlib import Path
from netrc import netrc
import stat

import click
from rich.console import Console

from crewai.cli.command import BaseCommand, PlusAPIMixin
from crewai.cli import git
from crewai.cli.command import BaseCommand, PlusAPIMixin
from crewai.cli.utils import (
    get_project_name,
    get_project_description,
    get_project_name,
    get_project_version,
    tree_copy,
    tree_find_and_replace,
)
from rich.console import Console

console = Console()

@@ -82,7 +86,7 @@ class ToolCommand(BaseCommand, PlusAPIMixin):

        with tempfile.TemporaryDirectory() as temp_build_dir:
            subprocess.run(
                ["poetry", "build", "-f", "sdist", "--output", temp_build_dir],
                ["uv", "build", "--sdist", "--out-dir", temp_build_dir],
                check=True,
                capture_output=False,
            )
@@ -92,7 +96,7 @@ class ToolCommand(BaseCommand, PlusAPIMixin):
            )
            if not tarball_filename:
                console.print(
                    "Project build failed. Please ensure that the command `poetry build -f sdist` completes successfully.",
                    "Project build failed. Please ensure that the command `uv build --sdist` completes successfully.",
                    style="bold red",
                )
                raise SystemExit
@@ -143,68 +147,43 @@ class ToolCommand(BaseCommand, PlusAPIMixin):

        if login_response.status_code != 200:
            console.print(
                "Failed to authenticate to the tool repository. Make sure you have the access to tools.",
                "Authentication failed. Verify access to the tool repository, or try `crewai login`.",
                style="bold red",
            )
            raise SystemExit

        login_response_json = login_response.json()
        for repository in login_response_json["repositories"]:
            self._add_repository_to_poetry(
                repository, login_response_json["credential"]
            )
        self._set_netrc_credentials(login_response_json["credential"])

        console.print(
            "Succesfully authenticated to the tool repository.", style="bold green"
            "Successfully authenticated to the tool repository.", style="bold green"
        )

    def _add_repository_to_poetry(self, repository, credentials):
        repository_handle = f"crewai-{repository['handle']}"
    def _set_netrc_credentials(self, credentials, netrc_path=None):
        if not netrc_path:
            netrc_filename = "_netrc" if platform.system() == "Windows" else ".netrc"
            netrc_path = Path.home() / netrc_filename
            netrc_path.touch(mode=stat.S_IRUSR | stat.S_IWUSR, exist_ok=True)

        add_repository_command = [
            "poetry",
            "source",
            "add",
            "--priority=explicit",
            repository_handle,
            repository["url"],
        ]
        add_repository_result = subprocess.run(
            add_repository_command, text=True, check=True
        )
        netrc_instance = netrc(file=netrc_path)
        netrc_instance.hosts["app.crewai.com"] = (credentials["username"], "", credentials["password"])

        if add_repository_result.stderr:
            click.echo(add_repository_result.stderr, err=True)
            raise SystemExit
        with open(netrc_path, 'w') as file:
            file.write(str(netrc_instance))

        add_repository_credentials_command = [
            "poetry",
            "config",
            f"http-basic.{repository_handle}",
            credentials["username"],
            credentials["password"],
        ]
        add_repository_credentials_result = subprocess.run(
            add_repository_credentials_command,
            capture_output=False,
            text=True,
            check=True,
        )

        if add_repository_credentials_result.stderr:
            click.echo(add_repository_credentials_result.stderr, err=True)
            raise SystemExit
        console.print(f"Added credentials to {netrc_path}", style="bold green")

    def _add_package(self, tool_details):
        tool_handle = tool_details["handle"]
        repository_handle = tool_details["repository"]["handle"]
        pypi_index_handle = f"crewai-{repository_handle}"
        repository_url = tool_details["repository"]["url"]
        index = f"{repository_handle}={repository_url}"

        add_package_command = [
            "poetry",
            "uv",
            "add",
            "--source",
            pypi_index_handle,
            "--index",
            index,
            tool_handle,
        ]
        add_package_result = subprocess.run(
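As a point of reference (not part of the diff), here is a minimal sketch of what `_set_netrc_credentials` leaves behind, assuming a hypothetical username/token pair; the stdlib `netrc` writer emits one `machine` block per host, and the empty account field is omitted:

```python
from netrc import netrc
from pathlib import Path

# Hypothetical credentials, for illustration only.
creds = {"username": "tool-user", "password": "tool-token"}

netrc_path = Path.home() / ".netrc"  # "_netrc" on Windows
netrc_path.touch(exist_ok=True)

entry = netrc(file=netrc_path)
# hosts maps machine -> (login, account, password); account is left empty here.
entry.hosts["app.crewai.com"] = (creds["username"], "", creds["password"])
netrc_path.write_text(str(entry))

# The file should now contain a block like:
# machine app.crewai.com
#     login tool-user
#     password tool-token
```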
@@ -5,12 +5,12 @@ import click

def train_crew(n_iterations: int, filename: str) -> None:
    """
    Train the crew by running a command in the Poetry environment.
    Train the crew by running a command in the UV environment.

    Args:
        n_iterations (int): The number of iterations to train the crew.
    """
    command = ["poetry", "run", "train", str(n_iterations), filename]
    command = ["uv", "run", "train", str(n_iterations), filename]

    try:
        if n_iterations <= 0:

125
src/crewai/cli/update_crew.py
Normal file
@@ -0,0 +1,125 @@
import os
import shutil

import tomli_w
import tomllib


def update_crew() -> None:
    """Update the pyproject.toml of the Crew project to use uv."""
    migrate_pyproject("pyproject.toml", "pyproject.toml")


def migrate_pyproject(input_file, output_file):
    """
    Migrate a Poetry-style pyproject.toml to the PEP 621 [project] format used by uv.

    Once uv can consume the old format directly, this function can be deprecated.
    """

    # Read the input pyproject.toml
    with open(input_file, "rb") as f:
        pyproject = tomllib.load(f)

    # Initialize the new project structure
    new_pyproject = {
        "project": {},
        "build-system": {"requires": ["hatchling"], "build-backend": "hatchling.build"},
    }

    # Migrate project metadata
    if "tool" in pyproject and "poetry" in pyproject["tool"]:
        poetry = pyproject["tool"]["poetry"]
        new_pyproject["project"]["name"] = poetry.get("name")
        new_pyproject["project"]["version"] = poetry.get("version")
        new_pyproject["project"]["description"] = poetry.get("description")
        new_pyproject["project"]["authors"] = [
            {
                "name": author.split("<")[0].strip(),
                "email": author.split("<")[1].strip(">").strip(),
            }
            for author in poetry.get("authors", [])
        ]
        new_pyproject["project"]["requires-python"] = poetry.get("python")
    else:
        # If it's already in the new format, just copy the project section
        new_pyproject["project"] = pyproject.get("project", {})

    # Migrate or copy dependencies
    if "dependencies" in new_pyproject["project"]:
        # If dependencies are already in the new format, keep them as is
        pass
    elif "dependencies" in poetry:
        new_pyproject["project"]["dependencies"] = []
        for dep, version in poetry["dependencies"].items():
            if isinstance(version, dict):  # Handle extras
                extras = ",".join(version.get("extras", []))
                new_dep = f"{dep}[{extras}]"
                if "version" in version:
                    new_dep += parse_version(version["version"])
            elif dep == "python":
                new_pyproject["project"]["requires-python"] = version
                continue
            else:
                new_dep = f"{dep}{parse_version(version)}"
            new_pyproject["project"]["dependencies"].append(new_dep)

    # Migrate or copy scripts
    if "scripts" in poetry:
        new_pyproject["project"]["scripts"] = poetry["scripts"]
    elif "scripts" in pyproject.get("project", {}):
        new_pyproject["project"]["scripts"] = pyproject["project"]["scripts"]
    else:
        new_pyproject["project"]["scripts"] = {}

    if (
        "run_crew" not in new_pyproject["project"]["scripts"]
        and len(new_pyproject["project"]["scripts"]) > 0
    ):
        # Extract the module name from any existing script
        existing_scripts = new_pyproject["project"]["scripts"]
        module_name = next(
            (value.split(".")[0] for value in existing_scripts.values() if "." in value)
        )

        new_pyproject["project"]["scripts"]["run_crew"] = f"{module_name}.main:run"

    # Migrate optional dependencies
    if "extras" in poetry:
        new_pyproject["project"]["optional-dependencies"] = poetry["extras"]

    # Backup the old pyproject.toml
    backup_file = "pyproject-old.toml"
    shutil.copy2(input_file, backup_file)
    print(f"Original pyproject.toml backed up as {backup_file}")

    # Rename the poetry.lock file
    lock_file = "poetry.lock"
    lock_backup = "poetry-old.lock"
    if os.path.exists(lock_file):
        os.rename(lock_file, lock_backup)
        print(f"Original poetry.lock renamed to {lock_backup}")
    else:
        print("No poetry.lock file found to rename.")

    # Write the new pyproject.toml
    with open(output_file, "wb") as f:
        tomli_w.dump(new_pyproject, f)

    print(f"Migration complete. New pyproject.toml written to {output_file}")


def parse_version(version: str) -> str:
    """Convert a Poetry caret specifier like "^1.2" into a ">=1.2" constraint; other specifiers pass through unchanged."""
    if version.startswith("^"):
        main_lib_version = version[1:].split(",")[0]
        additional_lib_version = None
        if len(version[1:].split(",")) > 1:
            additional_lib_version = version[1:].split(",")[1]

        return f">={main_lib_version}" + (
            f",{additional_lib_version}" if additional_lib_version else ""
        )
    return version
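A quick sanity check of the caret conversion above, as a hedged sketch; the import path assumes the module location shown in this file header:

```python
# Sketch of the caret conversion implemented by parse_version above.
from crewai.cli.update_crew import parse_version

assert parse_version("^0.70.1") == ">=0.70.1"
assert parse_version("^1.2,<2.0") == ">=1.2,<2.0"          # extra constraint is preserved
assert parse_version(">=3.10,<=3.13") == ">=3.10,<=3.13"   # non-caret passes through unchanged
```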
@@ -1,13 +1,15 @@
import importlib.metadata
import os
import shutil
import click
import sys
import importlib.metadata
from functools import reduce
from typing import Any, Dict, List

import click
from rich.console import Console

from crewai.cli.authentication.utils import TokenManager
from functools import reduce
from rich.console import Console
from typing import Any, Dict, List
from crewai.cli.constants import ENV_VARS

if sys.version_info >= (3, 11):
    import tomllib
@@ -55,17 +57,14 @@ def simple_toml_parser(content):
def parse_toml(content):
    if sys.version_info >= (3, 11):
        return tomllib.loads(content)
    else:
        return simple_toml_parser(content)
    return simple_toml_parser(content)


def get_project_name(
    pyproject_path: str = "pyproject.toml", require: bool = False
) -> str | None:
    """Get the project name from the pyproject.toml file."""
    return _get_project_attribute(
        pyproject_path, ["tool", "poetry", "name"], require=require
    )
    return _get_project_attribute(pyproject_path, ["project", "name"], require=require)


def get_project_version(
@@ -73,7 +72,7 @@ def get_project_version(
) -> str | None:
    """Get the project version from the pyproject.toml file."""
    return _get_project_attribute(
        pyproject_path, ["tool", "poetry", "version"], require=require
        pyproject_path, ["project", "version"], require=require
    )


@@ -82,7 +81,7 @@ def get_project_description(
) -> str | None:
    """Get the project description from the pyproject.toml file."""
    return _get_project_attribute(
        pyproject_path, ["tool", "poetry", "description"], require=require
        pyproject_path, ["project", "description"], require=require
    )


@@ -97,10 +96,9 @@ def _get_project_attribute(
        pyproject_content = parse_toml(f.read())

    dependencies = (
        _get_nested_value(pyproject_content, ["tool", "poetry", "dependencies"])
        or {}
        _get_nested_value(pyproject_content, ["project", "dependencies"]) or []
    )
    if "crewai" not in dependencies:
    if not any(True for dep in dependencies if "crewai" in dep):
        raise Exception("crewai is not in the dependencies.")

    attribute = _get_nested_value(pyproject_content, keys)
@@ -203,3 +201,76 @@ def tree_find_and_replace(directory, find, replace):
                new_dirpath = os.path.join(path, new_dirname)
                old_dirpath = os.path.join(path, dirname)
                os.rename(old_dirpath, new_dirpath)


def load_env_vars(folder_path):
    """
    Loads environment variables from a .env file in the specified folder path.

    Args:
    - folder_path (Path): The path to the folder containing the .env file.

    Returns:
    - dict: A dictionary of environment variables.
    """
    env_file_path = folder_path / ".env"
    env_vars = {}
    if env_file_path.exists():
        with open(env_file_path, "r") as file:
            for line in file:
                key, _, value = line.strip().partition("=")
                if key and value:
                    env_vars[key] = value
    return env_vars


def update_env_vars(env_vars, provider, model):
    """
    Updates environment variables with the API key for the selected provider and model.

    Args:
    - env_vars (dict): Environment variables dictionary.
    - provider (str): Selected provider.
    - model (str): Selected model.

    Returns:
    - dict or None: The updated environment variables, or None if the user aborts.
    """
    api_key_var = ENV_VARS.get(
        provider,
        [
            click.prompt(
                f"Enter the environment variable name for your {provider.capitalize()} API key",
                type=str,
            )
        ],
    )[0]

    if api_key_var not in env_vars:
        try:
            env_vars[api_key_var] = click.prompt(
                f"Enter your {provider.capitalize()} API key", type=str, hide_input=True
            )
        except click.exceptions.Abort:
            click.secho("Operation aborted by the user.", fg="red")
            return None
    else:
        click.secho(f"API key already exists for {provider.capitalize()}.", fg="yellow")

    env_vars["MODEL"] = model
    click.secho(f"Selected model: {model}", fg="green")
    return env_vars


def write_env_file(folder_path, env_vars):
    """
    Writes environment variables to a .env file in the specified folder.

    Args:
    - folder_path (Path): The path to the folder where the .env file will be written.
    - env_vars (dict): A dictionary of environment variables to write.
    """
    env_file_path = folder_path / ".env"
    with open(env_file_path, "w") as file:
        for key, value in env_vars.items():
            file.write(f"{key}={value}\n")

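A hedged sketch of how the three .env helpers above compose; the folder path, provider key, and model name are illustrative only, and `update_env_vars` may prompt interactively:

```python
# Round-trip sketch for the .env helpers (assumes they are importable
# from crewai.cli.utils; "my_project" and "gpt-4o" are placeholders).
from pathlib import Path

from crewai.cli.utils import load_env_vars, update_env_vars, write_env_file

folder = Path("my_project")
env_vars = load_env_vars(folder)                           # e.g. {"OPENAI_API_KEY": "..."}
env_vars = update_env_vars(env_vars, "openai", "gpt-4o")   # may prompt for a key
if env_vars is not None:
    write_env_file(folder, env_vars)                       # rewrites my_project/.env as KEY=value lines
```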
@@ -27,7 +27,6 @@ from crewai.llm import LLM
from crewai.memory.entity.entity_memory import EntityMemory
from crewai.memory.long_term.long_term_memory import LongTermMemory
from crewai.memory.short_term.short_term_memory import ShortTermMemory
from crewai.memory.user.user_memory import UserMemory
from crewai.process import Process
from crewai.task import Task
from crewai.tasks.conditional_task import ConditionalTask
@@ -95,7 +94,6 @@ class Crew(BaseModel):
    _short_term_memory: Optional[InstanceOf[ShortTermMemory]] = PrivateAttr()
    _long_term_memory: Optional[InstanceOf[LongTermMemory]] = PrivateAttr()
    _entity_memory: Optional[InstanceOf[EntityMemory]] = PrivateAttr()
    _user_memory: Optional[InstanceOf[UserMemory]] = PrivateAttr()
    _train: Optional[bool] = PrivateAttr(default=False)
    _train_iteration: Optional[int] = PrivateAttr()
    _inputs: Optional[Dict[str, Any]] = PrivateAttr(default=None)
@@ -116,10 +114,6 @@ class Crew(BaseModel):
        default=False,
        description="Whether the crew should use memory to store memories of it's execution",
    )
    memory_provider: Optional[str] = Field(
        default=None,
        description="The memory provider to be used for the crew.",
    )
    short_term_memory: Optional[InstanceOf[ShortTermMemory]] = Field(
        default=None,
        description="An Instance of the ShortTermMemory to be used by the Crew",
@@ -132,8 +126,8 @@ class Crew(BaseModel):
        default=None,
        description="An Instance of the EntityMemory to be used by the Crew",
    )
    embedder: Optional[dict] = Field(
        default={"provider": "openai"},
    embedder: Optional[Any] = Field(
        default=None,
        description="Configuration for the embedder to be used for the crew.",
    )
    usage_metrics: Optional[UsageMetrics] = Field(
@@ -213,14 +207,6 @@ class Crew(BaseModel):
        # TODO: Improve typing
        return json.loads(v) if isinstance(v, Json) else v  # type: ignore

    @field_validator("memory_provider", mode="before")
    @classmethod
    def validate_memory_provider(cls, v: Optional[str]) -> Optional[str]:
        """Ensure memory provider is either None or 'mem0'."""
        if v not in (None, "mem0"):
            raise ValueError("Memory provider must be either None or 'mem0'.")
        return v

    @model_validator(mode="after")
    def set_private_attrs(self) -> "Crew":
        """Set private attributes."""
@@ -252,23 +238,12 @@ class Crew(BaseModel):
        self._short_term_memory = (
            self.short_term_memory
            if self.short_term_memory
            else ShortTermMemory(
                memory_provider=self.memory_provider,
                crew=self,
                embedder_config=self.embedder,
            )
            else ShortTermMemory(crew=self, embedder_config=self.embedder)
        )
        self._entity_memory = (
            self.entity_memory
            if self.entity_memory
            else EntityMemory(
                memory_provider=self.memory_provider,
                crew=self,
                embedder_config=self.embedder,
            )
        )
        self._user_memory = (
            UserMemory(crew=self) if self.memory_provider == "mem0" else None
            else EntityMemory(crew=self, embedder_config=self.embedder)
        )
        return self

@@ -460,15 +435,16 @@ class Crew(BaseModel):
        self, n_iterations: int, filename: str, inputs: Optional[Dict[str, Any]] = {}
    ) -> None:
        """Trains the crew for a given number of iterations."""
        self._setup_for_training(filename)
        train_crew = self.copy()
        train_crew._setup_for_training(filename)

        for n_iteration in range(n_iterations):
            self._train_iteration = n_iteration
            self.kickoff(inputs=inputs)
            train_crew._train_iteration = n_iteration
            train_crew.kickoff(inputs=inputs)

        training_data = CrewTrainingHandler(TRAINING_DATA_FILE).load()

        for agent in self.agents:
        for agent in train_crew.agents:
            result = TaskEvaluator(agent).evaluate_training_data(
                training_data=training_data, agent_id=str(agent.id)
            )
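Since `train()` now runs on `self.copy()`, the caller's crew is no longer mutated during training. A hedged usage sketch, assuming the usual `Agent`/`Task` constructor arguments; the filename and inputs are placeholders:

```python
# Sketch: training runs on a copy, so `crew` itself is left untouched.
from crewai import Agent, Crew, Task

researcher = Agent(role="Researcher", goal="Research topics", backstory="...")
task = Task(
    description="Summarize {topic}",
    expected_output="A short summary",
    agent=researcher,
)
crew = Crew(agents=[researcher], tasks=[task])

crew.train(n_iterations=2, filename="training_data.pkl", inputs={"topic": "AI"})
# The iterations kicked off on crew.copy(); no training state leaks into `crew`.
```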
@@ -799,7 +775,9 @@ class Crew(BaseModel):

    def _log_task_start(self, task: Task, role: str = "None"):
        if self.output_log_file:
            self._file_handler.log(task_name=task.name, task=task.description, agent=role, status="started")
            self._file_handler.log(
                task_name=task.name, task=task.description, agent=role, status="started"
            )

    def _update_manager_tools(self, task: Task):
        if self.manager_agent:
@@ -821,7 +799,13 @@ class Crew(BaseModel):
    def _process_task_result(self, task: Task, output: TaskOutput) -> None:
        role = task.agent.role if task.agent is not None else "None"
        if self.output_log_file:
            self._file_handler.log(task_name=task.name, task=task.description, agent=role, status="completed", output=output.raw)
            self._file_handler.log(
                task_name=task.name,
                task=task.description,
                agent=role,
                status="completed",
                output=output.raw,
            )

    def _create_crew_output(self, task_outputs: List[TaskOutput]) -> CrewOutput:
        if len(task_outputs) != 1:
@@ -922,7 +906,6 @@ class Crew(BaseModel):
            "_short_term_memory",
            "_long_term_memory",
            "_entity_memory",
            "_user_memory",
            "_telemetry",
            "agents",
            "tasks",
@@ -1005,17 +988,19 @@ class Crew(BaseModel):
        inputs: Optional[Dict[str, Any]] = None,
    ) -> None:
        """Test and evaluate the Crew with the given inputs for n iterations concurrently using concurrent.futures."""
        self._test_execution_span = self._telemetry.test_execution_span(
            self,
        test_crew = self.copy()

        self._test_execution_span = test_crew._telemetry.test_execution_span(
            test_crew,
            n_iterations,
            inputs,
            openai_model_name,  # type: ignore[arg-type]
        )  # type: ignore[arg-type]
        evaluator = CrewEvaluator(self, openai_model_name)  # type: ignore[arg-type]
        evaluator = CrewEvaluator(test_crew, openai_model_name)  # type: ignore[arg-type]

        for i in range(1, n_iterations + 1):
            evaluator.set_iteration(i)
            self.kickoff(inputs=inputs)
            test_crew.kickoff(inputs=inputs)

        evaluator.print_crew_evaluation_result()


@@ -190,7 +190,10 @@ class Flow(Generic[T], metaclass=FlowMeta):
        """Returns the list of all outputs from executed methods."""
        return self._method_outputs

    async def kickoff(self) -> Any:
    def kickoff(self) -> Any:
        return asyncio.run(self.kickoff_async())

    async def kickoff_async(self) -> Any:
        if not self._start_methods:
            raise ValueError("No start method defined")


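With this change, `Flow.kickoff()` becomes a blocking entry point that wraps `kickoff_async()` in `asyncio.run`. A hedged sketch of both call styles, reusing the `PoemFlow` template shown earlier (imports and crew setup elided):

```python
# Sketch: synchronous vs. asynchronous flow entry points.
import asyncio

flow = PoemFlow()          # assumes the PoemFlow template above
result = flow.kickoff()    # blocking; no running event loop required

async def run_in_async_context():
    # inside an existing event loop, await the async variant instead
    return await PoemFlow().kickoff_async()

asyncio.run(run_in_async_context())
```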
@@ -1,6 +1,5 @@
from .entity.entity_memory import EntityMemory
from .long_term.long_term_memory import LongTermMemory
from .short_term.short_term_memory import ShortTermMemory
from .user.user_memory import UserMemory

__all__ = ["UserMemory", "EntityMemory", "LongTermMemory", "ShortTermMemory"]
__all__ = ["EntityMemory", "LongTermMemory", "ShortTermMemory"]

@@ -1,22 +1,13 @@
from typing import Optional

from crewai.memory import EntityMemory, LongTermMemory, ShortTermMemory, UserMemory
from crewai.memory import EntityMemory, LongTermMemory, ShortTermMemory


class ContextualMemory:
    def __init__(
        self,
        stm: ShortTermMemory,
        ltm: LongTermMemory,
        em: EntityMemory,
        um: UserMemory,
        memory_provider: Optional[str] = None,  # Default value added
    ):
    def __init__(self, stm: ShortTermMemory, ltm: LongTermMemory, em: EntityMemory):
        self.stm = stm
        self.ltm = ltm
        self.em = em
        self.um = um
        self.memory_provider = memory_provider

    def build_context_for_task(self, task, context) -> str:
        """
@@ -32,8 +23,6 @@ class ContextualMemory:
        context.append(self._fetch_ltm_context(task.description))
        context.append(self._fetch_stm_context(query))
        context.append(self._fetch_entity_context(query))
        if self.memory_provider == "mem0":
            context.append(self._fetch_user_memories(query))
        return "\n".join(filter(None, context))

    def _fetch_stm_context(self, query) -> str:
@@ -42,7 +31,9 @@ class ContextualMemory:
        formatted as bullet points.
        """
        stm_results = self.stm.search(query)
        formatted_results = "\n".join([f"- {result}" for result in stm_results])
        formatted_results = "\n".join(
            [f"- {result['context']}" for result in stm_results]
        )
        return f"Recent Insights:\n{formatted_results}" if stm_results else ""

    def _fetch_ltm_context(self, task) -> Optional[str]:
@@ -71,22 +62,6 @@ class ContextualMemory:
        """
        em_results = self.em.search(query)
        formatted_results = "\n".join(
            [
                f"- {result['memory'] if self.memory_provider == 'mem0' else result['context']}"
                for result in em_results
            ]  # type: ignore # Invalid index type "str" for "str"; expected type "SupportsIndex | slice"
            [f"- {result['context']}" for result in em_results]  # type: ignore # Invalid index type "str" for "str"; expected type "SupportsIndex | slice"
        )
        return f"Entities:\n{formatted_results}" if em_results else ""

    def _fetch_user_memories(self, query) -> str:
        """
        Fetches relevant user memory information from User Memory related to the task's description and expected_output,
        """
        print("query", query)
        um_results = self.um.search(query)
        print("um_results", um_results)
        formatted_results = "\n".join(
            [f"- {result['memory']}" for result in um_results]
        )
        print(f"User memories/preferences:\n{formatted_results}")
        return f"User memories/preferences:\n{formatted_results}" if um_results else ""

@@ -1,7 +1,6 @@
from crewai.memory.entity.entity_memory_item import EntityMemoryItem
from crewai.memory.memory import Memory
from crewai.memory.storage.rag_storage import RAGStorage
from crewai.memory.storage.mem0_storage import Mem0Storage


class EntityMemory(Memory):
@@ -11,39 +10,22 @@ class EntityMemory(Memory):
    Inherits from the Memory class.
    """

    def __init__(
        self, memory_provider=None, crew=None, embedder_config=None, storage=None
    ):
        self.memory_provider = memory_provider
        if self.memory_provider == "mem0":
            storage = Mem0Storage(
    def __init__(self, crew=None, embedder_config=None, storage=None):
        storage = (
            storage
            if storage
            else RAGStorage(
                type="entities",
                allow_reset=True,
                embedder_config=embedder_config,
                crew=crew,
            )
        else:
            storage = (
                storage
                if storage
                else RAGStorage(
                    type="entities",
                    allow_reset=False,
                    embedder_config=embedder_config,
                    crew=crew,
                )
            )
        )
        super().__init__(storage)

    def save(self, item: EntityMemoryItem) -> None:  # type: ignore # BUG?: Signature of "save" incompatible with supertype "Memory"
        """Saves an entity item into the SQLite storage."""
        if self.memory_provider == "mem0":
            data = f"""
            Remember details about the following entity:
            Name: {item.name}
            Type: {item.type}
            Entity Description: {item.description}
            """
        else:
            data = f"{item.name}({item.type}): {item.description}"
        data = f"{item.name}({item.type}): {item.description}"
        super().save(data, item.metadata)

    def reset(self) -> None:

@@ -18,25 +18,18 @@ class LongTermMemory(Memory):
        storage = storage if storage else LTMSQLiteStorage()
        super().__init__(storage)

    def save(self, item: LongTermMemoryItem) -> None:
        metadata = item.metadata.copy()  # Create a copy to avoid modifying the original
        metadata.update(
            {
                "agent": item.agent,
                "expected_output": item.expected_output,
                "quality": item.quality,  # Add quality to metadata
            }
        )
    def save(self, item: LongTermMemoryItem) -> None:  # type: ignore # BUG?: Signature of "save" incompatible with supertype "Memory"
        metadata = item.metadata
        metadata.update({"agent": item.agent, "expected_output": item.expected_output})
        self.storage.save(  # type: ignore # BUG?: Unexpected keyword argument "task_description","score","datetime" for "save" of "Storage"
            task_description=item.task,
            score=item.quality,
            score=metadata["quality"],
            metadata=metadata,
            datetime=item.datetime,
        )

    def search(self, task: str, latest_n: int = 3) -> List[Dict[str, Any]]:
        results = self.storage.load(task, latest_n)
        return results
    def search(self, task: str, latest_n: int = 3) -> List[Dict[str, Any]]:  # type: ignore # signature of "search" incompatible with supertype "Memory"
        return self.storage.load(task, latest_n)  # type: ignore # BUG?: "Storage" has no attribute "load"

    def reset(self) -> None:
        self.storage.reset()

@@ -1,6 +1,6 @@
from typing import Any, Dict, Optional
from typing import Any, Dict, Optional, List

from crewai.memory.storage.interface import Storage
from crewai.memory.storage.rag_storage import RAGStorage


class Memory:
@@ -8,7 +8,7 @@ class Memory:
    Base class for memory, now supporting agent tags and generic metadata.
    """

    def __init__(self, storage: Storage):
    def __init__(self, storage: RAGStorage):
        self.storage = storage

    def save(
@@ -23,13 +23,5 @@ class Memory:

        self.storage.save(value, metadata)

    def search(
        self,
        query: str,
        limit: int = 3,
        filters: dict = {},
        score_threshold: float = 0.35,
    ) -> Dict[str, Any]:
        return self.storage.search(
            query=query, limit=limit, filters=filters, score_threshold=score_threshold
        )
    def search(self, query: str) -> List[Dict[str, Any]]:
        return self.storage.search(query)

@@ -2,7 +2,6 @@ from typing import Any, Dict, Optional
from crewai.memory.memory import Memory
from crewai.memory.short_term.short_term_memory_item import ShortTermMemoryItem
from crewai.memory.storage.rag_storage import RAGStorage
from crewai.memory.storage.mem0_storage import Mem0Storage


class ShortTermMemory(Memory):
@@ -14,20 +13,14 @@ class ShortTermMemory(Memory):
    MemoryItem instances.
    """

    def __init__(
        self, memory_provider=None, crew=None, embedder_config=None, storage=None
    ):
        self.memory_provider = memory_provider
        if self.memory_provider == "mem0":
            storage = Mem0Storage(type="short_term", crew=crew)
        else:
            storage = (
                storage
                if storage
                else RAGStorage(
                    type="short_term", embedder_config=embedder_config, crew=crew
                )
    def __init__(self, crew=None, embedder_config=None, storage=None):
        storage = (
            storage
            if storage
            else RAGStorage(
                type="short_term", embedder_config=embedder_config, crew=crew
            )
        )
        super().__init__(storage)

    def save(
@@ -37,21 +30,11 @@ class ShortTermMemory(Memory):
        agent: Optional[str] = None,
    ) -> None:
        item = ShortTermMemoryItem(data=value, metadata=metadata, agent=agent)
        if self.memory_provider == "mem0":
            item.data = f"Remember the following insights from Agent run: {item.data}"

        super().save(value=item.data, metadata=item.metadata, agent=item.agent)

    def search(
        self,
        query: str,
        limit: int = 3,
        filters: dict = {},
        score_threshold: float = 0.35,
    ):
        return self.storage.search(
            query=query, limit=limit, filters=filters, score_threshold=score_threshold
        )  # type: ignore # BUG? The reference is to the parent class, but the parent class does not have this parameters
    def search(self, query: str, score_threshold: float = 0.35):
        return self.storage.search(query=query, score_threshold=score_threshold)  # type: ignore # BUG? The reference is to the parent class, but the parent class does not have this parameters

    def reset(self) -> None:
        try:

76
src/crewai/memory/storage/base_rag_storage.py
Normal file
@@ -0,0 +1,76 @@
from abc import ABC, abstractmethod
from typing import Any, Dict, List, Optional


class BaseRAGStorage(ABC):
    """
    Base class for RAG-based Storage implementations.
    """

    app: Any | None = None

    def __init__(
        self,
        type: str,
        allow_reset: bool = True,
        embedder_config: Optional[Any] = None,
        crew: Any = None,
    ):
        self.type = type
        self.allow_reset = allow_reset
        self.embedder_config = embedder_config
        self.crew = crew
        self.agents = self._initialize_agents()

    def _initialize_agents(self) -> str:
        if self.crew:
            return "_".join(
                [self._sanitize_role(agent.role) for agent in self.crew.agents]
            )
        return ""

    @abstractmethod
    def _sanitize_role(self, role: str) -> str:
        """Sanitizes agent roles to ensure valid directory names."""
        pass

    @abstractmethod
    def save(self, value: Any, metadata: Dict[str, Any]) -> None:
        """Save a value with metadata to the storage."""
        pass

    @abstractmethod
    def search(
        self,
        query: str,
        limit: int = 3,
        filter: Optional[dict] = None,
        score_threshold: float = 0.35,
    ) -> List[Any]:
        """Search for entries in the storage."""
        pass

    @abstractmethod
    def reset(self) -> None:
        """Reset the storage."""
        pass

    @abstractmethod
    def _generate_embedding(
        self, text: str, metadata: Optional[Dict[str, Any]] = None
    ) -> Any:
        """Generate an embedding for the given text and metadata."""
        pass

    @abstractmethod
    def _initialize_app(self):
        """Initialize the vector db."""
        pass

    def setup_config(self, config: Dict[str, Any]):
        """Setup the config of the storage."""
        pass

    def initialize_client(self):
        """Initialize the client of the storage. This should setup the app and the db collection"""
        pass
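A hedged sketch (not part of the diff) of the smallest concrete `BaseRAGStorage` subclass, backed by a plain list instead of a real vector store, to show which hooks a backend must fill in:

```python
# Toy backend for illustration only; real backends would use an actual
# vector database, as RAGStorage below does with Chroma.
from typing import Any, Dict, List, Optional

from crewai.memory.storage.base_rag_storage import BaseRAGStorage


class InMemoryRAGStorage(BaseRAGStorage):
    def __init__(self, type: str, **kwargs):
        super().__init__(type, **kwargs)
        self._entries: List[Dict[str, Any]] = []

    def _sanitize_role(self, role: str) -> str:
        return role.replace("\n", "").replace(" ", "_").replace("/", "_")

    def save(self, value: Any, metadata: Dict[str, Any]) -> None:
        self._entries.append({"context": value, "metadata": metadata})

    def search(
        self,
        query: str,
        limit: int = 3,
        filter: Optional[dict] = None,
        score_threshold: float = 0.35,
    ) -> List[Any]:
        # naive substring match stands in for embedding similarity
        return [e for e in self._entries if query in str(e["context"])][:limit]

    def reset(self) -> None:
        self._entries.clear()

    def _generate_embedding(
        self, text: str, metadata: Optional[Dict[str, Any]] = None
    ) -> Any:
        return None  # no real embeddings in this toy backend

    def _initialize_app(self):
        pass  # nothing to initialize
```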
@@ -1,4 +1,4 @@
from typing import Any, Dict
from typing import Any, Dict, List


class Storage:
@@ -7,10 +7,8 @@ class Storage:
    def save(self, value: Any, metadata: Dict[str, Any]) -> None:
        pass

    def search(
        self, query: str, limit: int, filters: Dict, score_threshold: float
    ) -> Dict[str, Any]:  # type: ignore
        return {}
    def search(self, key: str) -> List[Dict[str, Any]]:  # type: ignore
        pass

    def reset(self) -> None:
        pass

@@ -1,45 +0,0 @@
import os
from typing import Any, Dict, List, Optional

from crewai.memory.storage.interface import Storage
from mem0 import MemoryClient


class Mem0Storage(Storage):
    """
    Extends Storage to handle embedding and searching across entities using Mem0.
    """

    def __init__(self, type, crew=None):
        super().__init__()
        if (
            not os.getenv("OPENAI_API_KEY")
            and not os.getenv("OPENAI_BASE_URL") == "https://api.openai.com/v1"
        ):
            os.environ["OPENAI_API_KEY"] = "fake"

        if not os.getenv("MEM0_API_KEY"):
            raise EnvironmentError("MEM0_API_KEY is not set.")

        agents = crew.agents if crew else []
        agents = [agent.role for agent in agents]
        agents = "_".join(agents)

        self.app_id = agents
        self.memory = MemoryClient(api_key=os.getenv("MEM0_API_KEY"))

    def save(self, value: Any, metadata: Dict[str, Any]) -> None:
        self.memory.add(value, metadata=metadata, app_id=self.app_id)

    def search(
        self,
        query: str,
        limit: int = 3,
        filters: Optional[dict] = None,
        score_threshold: float = 0.35,
    ) -> List[Any]:
        params = {"query": query, "limit": limit, "app_id": self.app_id}
        if filters:
            params["filters"] = filters
        results = self.memory.search(**params)
        return [r for r in results if float(r["score"]) >= score_threshold]
@@ -3,10 +3,14 @@ import io
import logging
import os
import shutil
import uuid
from typing import Any, Dict, List, Optional

from crewai.memory.storage.interface import Storage
from crewai.memory.storage.base_rag_storage import BaseRAGStorage
from crewai.utilities.paths import db_storage_path
from chromadb.api import ClientAPI
from chromadb.api.types import validate_embedding_function
from chromadb import Documents, EmbeddingFunction, Embeddings
from typing import cast


@contextlib.contextmanager
@@ -24,61 +28,119 @@ def suppress_logging(
    logger.setLevel(original_level)


class RAGStorage(Storage):
class RAGStorage(BaseRAGStorage):
    """
    Extends Storage to handle embeddings for memory entries, improving
    search efficiency.
    """

    def __init__(self, type, allow_reset=True, embedder_config=None, crew=None):
        super().__init__()
        if (
            not os.getenv("OPENAI_API_KEY")
            and not os.getenv("OPENAI_BASE_URL") == "https://api.openai.com/v1"
        ):
            os.environ["OPENAI_API_KEY"] = "fake"
    app: ClientAPI | None = None

    def __init__(self, type, allow_reset=True, embedder_config=None, crew=None):
        super().__init__(type, allow_reset, embedder_config, crew)
        agents = crew.agents if crew else []
        agents = [self._sanitize_role(agent.role) for agent in agents]
        agents = "_".join(agents)
        self.agents = agents

        config = {
            "app": {
                "config": {"name": type, "collect_metrics": False, "log_level": "ERROR"}
            },
            "chunker": {
                "chunk_size": 5000,
                "chunk_overlap": 100,
                "length_function": "len",
                "min_chunk_size": 150,
            },
            "vectordb": {
                "provider": "chroma",
                "config": {
                    "collection_name": type,
                    "dir": f"{db_storage_path()}/{type}/{agents}",
                    "allow_reset": allow_reset,
                },
            },
        }

        if embedder_config:
            config["embedder"] = embedder_config
        self.type = type
        self.config = config

        self.allow_reset = allow_reset
        self._initialize_app()

    def _set_embedder_config(self):
        import chromadb.utils.embedding_functions as embedding_functions

        if self.embedder_config is None:
            self.embedder_config = self._create_default_embedding_function()

        if isinstance(self.embedder_config, dict):
            provider = self.embedder_config.get("provider")
            config = self.embedder_config.get("config", {})
            model_name = config.get("model")
            if provider == "openai":
                self.embedder_config = embedding_functions.OpenAIEmbeddingFunction(
                    api_key=config.get("api_key") or os.getenv("OPENAI_API_KEY"),
                    model_name=model_name,
                )
            elif provider == "azure":
                self.embedder_config = embedding_functions.OpenAIEmbeddingFunction(
                    api_key=config.get("api_key"),
                    api_base=config.get("api_base"),
                    api_type=config.get("api_type", "azure"),
                    api_version=config.get("api_version"),
                    model_name=model_name,
                )
            elif provider == "ollama":
                from openai import OpenAI

                class OllamaEmbeddingFunction(EmbeddingFunction):
                    def __call__(self, input: Documents) -> Embeddings:
                        client = OpenAI(
                            base_url="http://localhost:11434/v1",
                            api_key=config.get("api_key", "ollama"),
                        )
                        try:
                            response = client.embeddings.create(
                                input=input, model=model_name
                            )
                            embeddings = [item.embedding for item in response.data]
                            return cast(Embeddings, embeddings)
                        except Exception as e:
                            raise e

                self.embedder_config = OllamaEmbeddingFunction()
            elif provider == "vertexai":
                self.embedder_config = (
                    embedding_functions.GoogleVertexEmbeddingFunction(
                        model_name=model_name,
                        api_key=config.get("api_key"),
                    )
                )
            elif provider == "google":
                self.embedder_config = (
                    embedding_functions.GoogleGenerativeAiEmbeddingFunction(
                        model_name=model_name,
                        api_key=config.get("api_key"),
                    )
                )
            elif provider == "cohere":
                self.embedder_config = embedding_functions.CohereEmbeddingFunction(
                    model_name=model_name,
                    api_key=config.get("api_key"),
                )
            elif provider == "huggingface":
                self.embedder_config = embedding_functions.HuggingFaceEmbeddingServer(
                    url=config.get("api_url"),
                )
            else:
                raise Exception(
                    f"Unsupported embedding provider: {provider}, supported providers: [openai, azure, ollama, vertexai, google, cohere, huggingface]"
                )
        else:
            validate_embedding_function(self.embedder_config)  # type: ignore # used for validating embedder_config if defined a embedding function/class
            self.embedder_config = self.embedder_config

    def _initialize_app(self):
        from embedchain import App
        from embedchain.llm.base import BaseLlm
        import chromadb
        from chromadb.config import Settings

        class FakeLLM(BaseLlm):
            pass
        self._set_embedder_config()
        chroma_client = chromadb.PersistentClient(
            path=f"{db_storage_path()}/{self.type}/{self.agents}",
            settings=Settings(allow_reset=self.allow_reset),
        )

        self.app = App.from_config(config=self.config)
        self.app.llm = FakeLLM()
        if self.allow_reset:
            self.app.reset()
        self.app = chroma_client

        try:
            self.collection = self.app.get_collection(
                name=self.type, embedding_function=self.embedder_config
            )
        except Exception:
            self.collection = self.app.create_collection(
                name=self.type, embedding_function=self.embedder_config
            )

    def _sanitize_role(self, role: str) -> str:
        """
@@ -87,44 +149,70 @@ class RAGStorage(Storage):
        return role.replace("\n", "").replace(" ", "_").replace("/", "_")

    def save(self, value: Any, metadata: Dict[str, Any]) -> None:
        if not hasattr(self, "app"):
        if not hasattr(self, "app") or not hasattr(self, "collection"):
            self._initialize_app()
        self._generate_embedding(value, metadata)
        try:
            self._generate_embedding(value, metadata)
        except Exception as e:
            logging.error(f"Error during {self.type} save: {str(e)}")

    def search(  # type: ignore # BUG?: Signature of "search" incompatible with supertype "Storage"
    def search(
        self,
        query: str,
        limit: int = 3,
        filters: Optional[dict] = None,
        filter: Optional[dict] = None,
        score_threshold: float = 0.35,
    ) -> List[Any]:
        if not hasattr(self, "app"):
            self._initialize_app()
        from embedchain.vectordb.chroma import InvalidDimensionException

        with suppress_logging():
            try:
                results = (
                    self.app.search(query, limit, where=filters)
                    if filters
                    else self.app.search(query, limit)
                )
            except InvalidDimensionException:
                self.app.reset()
                return []
        return [r for r in results if r["metadata"]["score"] >= score_threshold]
        try:
            with suppress_logging():
                response = self.collection.query(query_texts=query, n_results=limit)

    def _generate_embedding(self, text: str, metadata: Dict[str, Any]) -> Any:
        if not hasattr(self, "app"):
            results = []
            for i in range(len(response["ids"][0])):
                result = {
                    "id": response["ids"][0][i],
                    "metadata": response["metadatas"][0][i],
                    "context": response["documents"][0][i],
                    "score": response["distances"][0][i],
                }
                if result["score"] >= score_threshold:
                    results.append(result)

            return results
        except Exception as e:
            logging.error(f"Error during {self.type} search: {str(e)}")
            return []

    def _generate_embedding(self, text: str, metadata: Dict[str, Any]) -> None:  # type: ignore
        if not hasattr(self, "app") or not hasattr(self, "collection"):
            self._initialize_app()
        from embedchain.models.data_type import DataType

        self.app.add(text, data_type=DataType.TEXT, metadata=metadata)
        self.collection.add(
            documents=[text],
            metadatas=[metadata or {}],
            ids=[str(uuid.uuid4())],
        )

    def reset(self) -> None:
        try:
            shutil.rmtree(f"{db_storage_path()}/{self.type}")
            if self.app:
                self.app.reset()
        except Exception as e:
            raise Exception(
                f"An error occurred while resetting the {self.type} memory: {e}"
            )
            if "attempt to write a readonly database" in str(e):
                # Ignore this specific error
                pass
            else:
                raise Exception(
                    f"An error occurred while resetting the {self.type} memory: {e}"
                )

    def _create_default_embedding_function(self):
        import chromadb.utils.embedding_functions as embedding_functions

        return embedding_functions.OpenAIEmbeddingFunction(
            api_key=os.getenv("OPENAI_API_KEY"), model_name="text-embedding-3-small"
        )

@@ -1,43 +0,0 @@
from typing import Any, Dict, Optional

from crewai.memory.memory import Memory
from crewai.memory.storage.mem0_storage import Mem0Storage


class UserMemory(Memory):
"""
UserMemory class for handling user memory storage and retrieval.
Inherits from the Memory class and utilizes an instance of a class that
adheres to the Storage for data storage, specifically working with
MemoryItem instances.
"""

def __init__(self, crew=None):
storage = Mem0Storage(type="user", crew=crew)
super().__init__(storage)

def save(
self,
value,
metadata: Optional[Dict[str, Any]] = None,
agent: Optional[str] = None,
) -> None:
data = f"Remember the details about the user: {value}"
super().save(data, metadata)

def search(
self,
query: str,
limit: int = 3,
filters: dict = {},
score_threshold: float = 0.35,
):
print("SEARCHING USER MEMORY", query, limit, filters, score_threshold)
result = super().search(
query=query,
limit=limit,
filters=filters,
score_threshold=score_threshold,
)
print("USER MEMORY SEARCH RESULT:", result)
return result
@@ -1,8 +0,0 @@
from typing import Any, Dict, Optional


class UserMemoryItem:
def __init__(self, data: Any, user: str, metadata: Optional[Dict[str, Any]] = None):
self.data = data
self.user = user
self.metadata = metadata if metadata is not None else {}
@@ -76,27 +76,13 @@ def crew(func) -> Callable[..., Crew]:
instantiated_agents = []
agent_roles = set()

# Collect methods from crew in order
all_functions = [
(name, getattr(self, name))
for name, attr in self.__class__.__dict__.items()
if callable(attr)
]
tasks = [
(name, method)
for name, method in all_functions
if hasattr(method, "is_task")
]

agents = [
(name, method)
for name, method in all_functions
if hasattr(method, "is_agent")
]
# Use the preserved task and agent information
tasks = self._original_tasks.items()
agents = self._original_agents.items()

# Instantiate tasks in order
for task_name, task_method in tasks:
task_instance = task_method()
task_instance = task_method(self)
instantiated_tasks.append(task_instance)
agent_instance = getattr(task_instance, "agent", None)
if agent_instance and agent_instance.role not in agent_roles:
@@ -105,7 +91,7 @@ def crew(func) -> Callable[..., Crew]:

# Instantiate agents not included by tasks
for agent_name, agent_method in agents:
agent_instance = agent_method()
agent_instance = agent_method(self)
if agent_instance.role not in agent_roles:
instantiated_agents.append(agent_instance)
agent_roles.add(agent_instance.role)
@@ -34,6 +34,18 @@ def CrewBase(cls: T) -> T:
self.map_all_agent_variables()
self.map_all_task_variables()

# Preserve task and agent information
self._original_tasks = {
name: method
for name, method in cls.__dict__.items()
if hasattr(method, "is_task") and method.is_task
}
self._original_agents = {
name: method
for name, method in cls.__dict__.items()
if hasattr(method, "is_agent") and method.is_agent
}

@staticmethod
def load_yaml(config_path: Path):
try:
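Together with the `@crew` hunk above, this change snapshots which methods were marked `is_task`/`is_agent` at init time, so later wrapping cannot hide them from the crew builder. A simplified sketch of the marking-and-collection pattern (`task`, `agent`, and `crew_base` here are stand-ins for the real decorators, not the actual implementations):

```python
def task(fn):
    # Mark the method so the class wrapper can find it later.
    fn.is_task = True
    return fn

def agent(fn):
    fn.is_agent = True
    return fn

def crew_base(cls):
    original_init = cls.__init__

    def __init__(self, *args, **kwargs):
        original_init(self, *args, **kwargs)
        # Preserve the marked methods in declaration order.
        self._original_tasks = {
            name: method
            for name, method in cls.__dict__.items()
            if hasattr(method, "is_task") and method.is_task
        }
        self._original_agents = {
            name: method
            for name, method in cls.__dict__.items()
            if hasattr(method, "is_agent") and method.is_agent
        }

    cls.__init__ = __init__
    return cls

@crew_base
class MyCrew:
    @task
    def research(self):
        return "research task"

    @agent
    def researcher(self):
        return "researcher agent"

crew = MyCrew()
print(list(crew._original_tasks))   # ['research']
print(list(crew._original_agents))  # ['researcher']
```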
@@ -65,7 +65,7 @@ class Telemetry:

self.provider.add_span_processor(processor)
self.ready = True
except BaseException as e:
except Exception as e:
if isinstance(
e,
(SystemExit, KeyboardInterrupt, GeneratorExit, asyncio.CancelledError),
@@ -83,404 +83,33 @@ class Telemetry:
self.ready = False
self.trace_set = False

def _safe_telemetry_operation(self, operation):
if not self.ready:
return
try:
operation()
except Exception:
pass

def crew_creation(self, crew: Crew, inputs: dict[str, Any] | None):
"""Records the creation of a crew."""
if self.ready:
try:
tracer = trace.get_tracer("crewai.telemetry")
span = tracer.start_span("Crew Created")
self._add_attribute(
span,
"crewai_version",
pkg_resources.get_distribution("crewai").version,
)
self._add_attribute(span, "python_version", platform.python_version())
self._add_attribute(span, "crew_key", crew.key)
self._add_attribute(span, "crew_id", str(crew.id))
self._add_attribute(span, "crew_process", crew.process)
self._add_attribute(span, "crew_memory", crew.memory)
self._add_attribute(span, "crew_number_of_tasks", len(crew.tasks))
self._add_attribute(span, "crew_number_of_agents", len(crew.agents))
if crew.share_crew:
self._add_attribute(
span,
"crew_agents",
json.dumps(
[
{
"key": agent.key,
"id": str(agent.id),
"role": agent.role,
"goal": agent.goal,
"backstory": agent.backstory,
"verbose?": agent.verbose,
"max_iter": agent.max_iter,
"max_rpm": agent.max_rpm,
"i18n": agent.i18n.prompt_file,
"function_calling_llm": (
agent.function_calling_llm.model
if agent.function_calling_llm
else ""
),
"llm": agent.llm.model,
"delegation_enabled?": agent.allow_delegation,
"allow_code_execution?": agent.allow_code_execution,
"max_retry_limit": agent.max_retry_limit,
"tools_names": [
tool.name.casefold()
for tool in agent.tools or []
],
}
for agent in crew.agents
]
),
)
self._add_attribute(
span,
"crew_tasks",
json.dumps(
[
{
"key": task.key,
"id": str(task.id),
"description": task.description,
"expected_output": task.expected_output,
"async_execution?": task.async_execution,
"human_input?": task.human_input,
"agent_role": (
task.agent.role if task.agent else "None"
),
"agent_key": task.agent.key if task.agent else None,
"context": (
[task.description for task in task.context]
if task.context
else None
),
"tools_names": [
tool.name.casefold()
for tool in task.tools or []
],
}
for task in crew.tasks
]
),
)
self._add_attribute(span, "platform", platform.platform())
self._add_attribute(span, "platform_release", platform.release())
self._add_attribute(span, "platform_system", platform.system())
self._add_attribute(span, "platform_version", platform.version())
self._add_attribute(span, "cpus", os.cpu_count())
self._add_attribute(
span, "crew_inputs", json.dumps(inputs) if inputs else None
)
else:
self._add_attribute(
span,
"crew_agents",
json.dumps(
[
{
"key": agent.key,
"id": str(agent.id),
"role": agent.role,
"verbose?": agent.verbose,
"max_iter": agent.max_iter,
"max_rpm": agent.max_rpm,
"function_calling_llm": (
agent.function_calling_llm.model
if agent.function_calling_llm
else ""
),
"llm": agent.llm.model,
"delegation_enabled?": agent.allow_delegation,
"allow_code_execution?": agent.allow_code_execution,
"max_retry_limit": agent.max_retry_limit,
"tools_names": [
tool.name.casefold()
for tool in agent.tools or []
],
}
for agent in crew.agents
]
),
)
self._add_attribute(
span,
"crew_tasks",
json.dumps(
[
{
"key": task.key,
"id": str(task.id),
"async_execution?": task.async_execution,
"human_input?": task.human_input,
"agent_role": (
task.agent.role if task.agent else "None"
),
"agent_key": task.agent.key if task.agent else None,
"tools_names": [
tool.name.casefold()
for tool in task.tools or []
],
}
for task in crew.tasks
]
),
)
span.set_status(Status(StatusCode.OK))
span.end()
except Exception:
pass

def task_started(self, crew: Crew, task: Task) -> Span | None:
"""Records task started in a crew."""
if self.ready:
try:
tracer = trace.get_tracer("crewai.telemetry")

created_span = tracer.start_span("Task Created")

self._add_attribute(created_span, "crew_key", crew.key)
self._add_attribute(created_span, "crew_id", str(crew.id))
self._add_attribute(created_span, "task_key", task.key)
self._add_attribute(created_span, "task_id", str(task.id))

if crew.share_crew:
self._add_attribute(
created_span, "formatted_description", task.description
)
self._add_attribute(
created_span, "formatted_expected_output", task.expected_output
)

created_span.set_status(Status(StatusCode.OK))
created_span.end()

span = tracer.start_span("Task Execution")

self._add_attribute(span, "crew_key", crew.key)
self._add_attribute(span, "crew_id", str(crew.id))
self._add_attribute(span, "task_key", task.key)
self._add_attribute(span, "task_id", str(task.id))

if crew.share_crew:
self._add_attribute(span, "formatted_description", task.description)
self._add_attribute(
span, "formatted_expected_output", task.expected_output
)

return span
except Exception:
pass

return None

def task_ended(self, span: Span, task: Task, crew: Crew):
"""Records task execution in a crew."""
if self.ready:
try:
if crew.share_crew:
self._add_attribute(
span,
"task_output",
task.output.raw if task.output else "",
)

span.set_status(Status(StatusCode.OK))
span.end()
except Exception:
pass

def tool_repeated_usage(self, llm: Any, tool_name: str, attempts: int):
"""Records the repeated usage 'error' of a tool by an agent."""
if self.ready:
try:
tracer = trace.get_tracer("crewai.telemetry")
span = tracer.start_span("Tool Repeated Usage")
self._add_attribute(
span,
"crewai_version",
pkg_resources.get_distribution("crewai").version,
)
self._add_attribute(span, "tool_name", tool_name)
self._add_attribute(span, "attempts", attempts)
if llm:
self._add_attribute(span, "llm", llm.model)
span.set_status(Status(StatusCode.OK))
span.end()
except Exception:
pass

def tool_usage(self, llm: Any, tool_name: str, attempts: int):
"""Records the usage of a tool by an agent."""
if self.ready:
try:
tracer = trace.get_tracer("crewai.telemetry")
span = tracer.start_span("Tool Usage")
self._add_attribute(
span,
"crewai_version",
pkg_resources.get_distribution("crewai").version,
)
self._add_attribute(span, "tool_name", tool_name)
self._add_attribute(span, "attempts", attempts)
if llm:
self._add_attribute(span, "llm", llm.model)
span.set_status(Status(StatusCode.OK))
span.end()
except Exception:
pass

def tool_usage_error(self, llm: Any):
"""Records the usage of a tool by an agent."""
if self.ready:
try:
tracer = trace.get_tracer("crewai.telemetry")
span = tracer.start_span("Tool Usage Error")
self._add_attribute(
span,
"crewai_version",
pkg_resources.get_distribution("crewai").version,
)
if llm:
self._add_attribute(span, "llm", llm.model)
span.set_status(Status(StatusCode.OK))
span.end()
except Exception:
pass

def individual_test_result_span(
self, crew: Crew, quality: float, exec_time: int, model_name: str
):
if self.ready:
try:
tracer = trace.get_tracer("crewai.telemetry")
span = tracer.start_span("Crew Individual Test Result")

self._add_attribute(
span,
"crewai_version",
pkg_resources.get_distribution("crewai").version,
)
self._add_attribute(span, "crew_key", crew.key)
self._add_attribute(span, "crew_id", str(crew.id))
self._add_attribute(span, "quality", str(quality))
self._add_attribute(span, "exec_time", str(exec_time))
self._add_attribute(span, "model_name", model_name)
span.set_status(Status(StatusCode.OK))
span.end()
except Exception:
pass

def test_execution_span(
self,
crew: Crew,
iterations: int,
inputs: dict[str, Any] | None,
model_name: str,
):
if self.ready:
try:
tracer = trace.get_tracer("crewai.telemetry")
span = tracer.start_span("Crew Test Execution")

self._add_attribute(
span,
"crewai_version",
pkg_resources.get_distribution("crewai").version,
)
self._add_attribute(span, "crew_key", crew.key)
self._add_attribute(span, "crew_id", str(crew.id))
self._add_attribute(span, "iterations", str(iterations))
self._add_attribute(span, "model_name", model_name)

if crew.share_crew:
self._add_attribute(
span, "inputs", json.dumps(inputs) if inputs else None
)

span.set_status(Status(StatusCode.OK))
span.end()
except Exception:
pass

def deploy_signup_error_span(self):
if self.ready:
try:
tracer = trace.get_tracer("crewai.telemetry")
span = tracer.start_span("Deploy Signup Error")
span.set_status(Status(StatusCode.OK))
span.end()
except Exception:
pass

def start_deployment_span(self, uuid: Optional[str] = None):
if self.ready:
try:
tracer = trace.get_tracer("crewai.telemetry")
span = tracer.start_span("Start Deployment")
if uuid:
self._add_attribute(span, "uuid", uuid)
span.set_status(Status(StatusCode.OK))
span.end()
except Exception:
pass

def create_crew_deployment_span(self):
if self.ready:
try:
tracer = trace.get_tracer("crewai.telemetry")
span = tracer.start_span("Create Crew Deployment")
span.set_status(Status(StatusCode.OK))
span.end()
except Exception:
pass

def get_crew_logs_span(self, uuid: Optional[str], log_type: str = "deployment"):
if self.ready:
try:
tracer = trace.get_tracer("crewai.telemetry")
span = tracer.start_span("Get Crew Logs")
self._add_attribute(span, "log_type", log_type)
if uuid:
self._add_attribute(span, "uuid", uuid)
span.set_status(Status(StatusCode.OK))
span.end()
except Exception:
pass

def remove_crew_span(self, uuid: Optional[str] = None):
if self.ready:
try:
tracer = trace.get_tracer("crewai.telemetry")
span = tracer.start_span("Remove Crew")
if uuid:
self._add_attribute(span, "uuid", uuid)
span.set_status(Status(StatusCode.OK))
span.end()
except Exception:
pass

def crew_execution_span(self, crew: Crew, inputs: dict[str, Any] | None):
"""Records the complete execution of a crew.
This is only collected if the user has opted-in to share the crew.
"""
self.crew_creation(crew, inputs)

if (self.ready) and (crew.share_crew):
try:
tracer = trace.get_tracer("crewai.telemetry")
span = tracer.start_span("Crew Execution")
self._add_attribute(
span,
"crewai_version",
pkg_resources.get_distribution("crewai").version,
)
self._add_attribute(span, "crew_key", crew.key)
self._add_attribute(span, "crew_id", str(crew.id))
self._add_attribute(
span, "crew_inputs", json.dumps(inputs) if inputs else None
)
def operation():
tracer = trace.get_tracer("crewai.telemetry")
span = tracer.start_span("Crew Created")
self._add_attribute(
span,
"crewai_version",
pkg_resources.get_distribution("crewai").version,
)
self._add_attribute(span, "python_version", platform.python_version())
self._add_attribute(span, "crew_key", crew.key)
self._add_attribute(span, "crew_id", str(crew.id))
self._add_attribute(span, "crew_process", crew.process)
self._add_attribute(span, "crew_memory", crew.memory)
self._add_attribute(span, "crew_number_of_tasks", len(crew.tasks))
self._add_attribute(span, "crew_number_of_agents", len(crew.agents))
if crew.share_crew:
self._add_attribute(
span,
"crew_agents",
@@ -496,8 +125,15 @@ class Telemetry:
"max_iter": agent.max_iter,
"max_rpm": agent.max_rpm,
"i18n": agent.i18n.prompt_file,
"function_calling_llm": (
agent.function_calling_llm.model
if agent.function_calling_llm
else ""
),
"llm": agent.llm.model,
"delegation_enabled?": agent.allow_delegation,
"allow_code_execution?": agent.allow_code_execution,
"max_retry_limit": agent.max_retry_limit,
"tools_names": [
tool.name.casefold() for tool in agent.tools or []
],
@@ -512,12 +148,15 @@ class Telemetry:
json.dumps(
[
{
"key": task.key,
"id": str(task.id),
"description": task.description,
"expected_output": task.expected_output,
"async_execution?": task.async_execution,
"human_input?": task.human_input,
"agent_role": task.agent.role if task.agent else "None",
"agent_role": (
task.agent.role if task.agent else "None"
),
"agent_key": task.agent.key if task.agent else None,
"context": (
[task.description for task in task.context]
@@ -532,78 +171,433 @@ class Telemetry:
]
),
)
return span
except Exception:
pass

def end_crew(self, crew, final_string_output):
if (self.ready) and (crew.share_crew):
try:
self._add_attribute(span, "platform", platform.platform())
self._add_attribute(span, "platform_release", platform.release())
self._add_attribute(span, "platform_system", platform.system())
self._add_attribute(span, "platform_version", platform.version())
self._add_attribute(span, "cpus", os.cpu_count())
self._add_attribute(
crew._execution_span,
"crewai_version",
pkg_resources.get_distribution("crewai").version,
span, "crew_inputs", json.dumps(inputs) if inputs else None
)
else:
self._add_attribute(
crew._execution_span, "crew_output", final_string_output
)
self._add_attribute(
crew._execution_span,
"crew_tasks_output",
span,
"crew_agents",
json.dumps(
[
{
"key": agent.key,
"id": str(agent.id),
"role": agent.role,
"verbose?": agent.verbose,
"max_iter": agent.max_iter,
"max_rpm": agent.max_rpm,
"function_calling_llm": (
agent.function_calling_llm.model
if agent.function_calling_llm
else ""
),
"llm": agent.llm.model,
"delegation_enabled?": agent.allow_delegation,
"allow_code_execution?": agent.allow_code_execution,
"max_retry_limit": agent.max_retry_limit,
"tools_names": [
tool.name.casefold() for tool in agent.tools or []
],
}
for agent in crew.agents
]
),
)
self._add_attribute(
span,
"crew_tasks",
json.dumps(
[
{
"key": task.key,
"id": str(task.id),
"description": task.description,
"output": task.output.raw_output,
"async_execution?": task.async_execution,
"human_input?": task.human_input,
"agent_role": (
task.agent.role if task.agent else "None"
),
"agent_key": task.agent.key if task.agent else None,
"tools_names": [
tool.name.casefold() for tool in task.tools or []
],
}
for task in crew.tasks
]
),
)
crew._execution_span.set_status(Status(StatusCode.OK))
crew._execution_span.end()
except Exception:
pass
span.set_status(Status(StatusCode.OK))
span.end()

self._safe_telemetry_operation(operation)

def task_started(self, crew: Crew, task: Task) -> Span | None:
"""Records task started in a crew."""

def operation():
tracer = trace.get_tracer("crewai.telemetry")

created_span = tracer.start_span("Task Created")

self._add_attribute(created_span, "crew_key", crew.key)
self._add_attribute(created_span, "crew_id", str(crew.id))
self._add_attribute(created_span, "task_key", task.key)
self._add_attribute(created_span, "task_id", str(task.id))

if crew.share_crew:
self._add_attribute(
created_span, "formatted_description", task.description
)
self._add_attribute(
created_span, "formatted_expected_output", task.expected_output
)

created_span.set_status(Status(StatusCode.OK))
created_span.end()

span = tracer.start_span("Task Execution")

self._add_attribute(span, "crew_key", crew.key)
self._add_attribute(span, "crew_id", str(crew.id))
self._add_attribute(span, "task_key", task.key)
self._add_attribute(span, "task_id", str(task.id))

if crew.share_crew:
self._add_attribute(span, "formatted_description", task.description)
self._add_attribute(
span, "formatted_expected_output", task.expected_output
)

return span

return self._safe_telemetry_operation(operation)

def task_ended(self, span: Span, task: Task, crew: Crew):
"""Records task execution in a crew."""

def operation():
if crew.share_crew:
self._add_attribute(
span,
"task_output",
task.output.raw if task.output else "",
)

span.set_status(Status(StatusCode.OK))
span.end()

self._safe_telemetry_operation(operation)

def tool_repeated_usage(self, llm: Any, tool_name: str, attempts: int):
"""Records the repeated usage 'error' of a tool by an agent."""

def operation():
tracer = trace.get_tracer("crewai.telemetry")
span = tracer.start_span("Tool Repeated Usage")
self._add_attribute(
span,
"crewai_version",
pkg_resources.get_distribution("crewai").version,
)
self._add_attribute(span, "tool_name", tool_name)
self._add_attribute(span, "attempts", attempts)
if llm:
self._add_attribute(span, "llm", llm.model)
span.set_status(Status(StatusCode.OK))
span.end()

self._safe_telemetry_operation(operation)

def tool_usage(self, llm: Any, tool_name: str, attempts: int):
"""Records the usage of a tool by an agent."""

def operation():
tracer = trace.get_tracer("crewai.telemetry")
span = tracer.start_span("Tool Usage")
self._add_attribute(
span,
"crewai_version",
pkg_resources.get_distribution("crewai").version,
)
self._add_attribute(span, "tool_name", tool_name)
self._add_attribute(span, "attempts", attempts)
if llm:
self._add_attribute(span, "llm", llm.model)
span.set_status(Status(StatusCode.OK))
span.end()

self._safe_telemetry_operation(operation)

def tool_usage_error(self, llm: Any):
"""Records the usage of a tool by an agent."""

def operation():
tracer = trace.get_tracer("crewai.telemetry")
span = tracer.start_span("Tool Usage Error")
self._add_attribute(
span,
"crewai_version",
pkg_resources.get_distribution("crewai").version,
)
if llm:
self._add_attribute(span, "llm", llm.model)
span.set_status(Status(StatusCode.OK))
span.end()

self._safe_telemetry_operation(operation)

def individual_test_result_span(
self, crew: Crew, quality: float, exec_time: int, model_name: str
):
def operation():
tracer = trace.get_tracer("crewai.telemetry")
span = tracer.start_span("Crew Individual Test Result")

self._add_attribute(
span,
"crewai_version",
pkg_resources.get_distribution("crewai").version,
)
self._add_attribute(span, "crew_key", crew.key)
self._add_attribute(span, "crew_id", str(crew.id))
self._add_attribute(span, "quality", str(quality))
self._add_attribute(span, "exec_time", str(exec_time))
self._add_attribute(span, "model_name", model_name)
span.set_status(Status(StatusCode.OK))
span.end()

self._safe_telemetry_operation(operation)

def test_execution_span(
self,
crew: Crew,
iterations: int,
inputs: dict[str, Any] | None,
model_name: str,
):
def operation():
tracer = trace.get_tracer("crewai.telemetry")
span = tracer.start_span("Crew Test Execution")

self._add_attribute(
span,
"crewai_version",
pkg_resources.get_distribution("crewai").version,
)
self._add_attribute(span, "crew_key", crew.key)
self._add_attribute(span, "crew_id", str(crew.id))
self._add_attribute(span, "iterations", str(iterations))
self._add_attribute(span, "model_name", model_name)

if crew.share_crew:
self._add_attribute(
span, "inputs", json.dumps(inputs) if inputs else None
)

span.set_status(Status(StatusCode.OK))
span.end()

self._safe_telemetry_operation(operation)

def deploy_signup_error_span(self):
def operation():
tracer = trace.get_tracer("crewai.telemetry")
span = tracer.start_span("Deploy Signup Error")
span.set_status(Status(StatusCode.OK))
span.end()

self._safe_telemetry_operation(operation)

def start_deployment_span(self, uuid: Optional[str] = None):
def operation():
tracer = trace.get_tracer("crewai.telemetry")
span = tracer.start_span("Start Deployment")
if uuid:
self._add_attribute(span, "uuid", uuid)
span.set_status(Status(StatusCode.OK))
span.end()

self._safe_telemetry_operation(operation)

def create_crew_deployment_span(self):
def operation():
tracer = trace.get_tracer("crewai.telemetry")
span = tracer.start_span("Create Crew Deployment")
span.set_status(Status(StatusCode.OK))
span.end()

self._safe_telemetry_operation(operation)

def get_crew_logs_span(self, uuid: Optional[str], log_type: str = "deployment"):
def operation():
tracer = trace.get_tracer("crewai.telemetry")
span = tracer.start_span("Get Crew Logs")
self._add_attribute(span, "log_type", log_type)
if uuid:
self._add_attribute(span, "uuid", uuid)
span.set_status(Status(StatusCode.OK))
span.end()

self._safe_telemetry_operation(operation)

def remove_crew_span(self, uuid: Optional[str] = None):
def operation():
tracer = trace.get_tracer("crewai.telemetry")
span = tracer.start_span("Remove Crew")
if uuid:
self._add_attribute(span, "uuid", uuid)
span.set_status(Status(StatusCode.OK))
span.end()

self._safe_telemetry_operation(operation)

def crew_execution_span(self, crew: Crew, inputs: dict[str, Any] | None):
"""Records the complete execution of a crew.
This is only collected if the user has opted-in to share the crew.
"""
self.crew_creation(crew, inputs)

def operation():
tracer = trace.get_tracer("crewai.telemetry")
span = tracer.start_span("Crew Execution")
self._add_attribute(
span,
"crewai_version",
pkg_resources.get_distribution("crewai").version,
)
self._add_attribute(span, "crew_key", crew.key)
self._add_attribute(span, "crew_id", str(crew.id))
self._add_attribute(
span, "crew_inputs", json.dumps(inputs) if inputs else None
)
self._add_attribute(
span,
"crew_agents",
json.dumps(
[
{
"key": agent.key,
"id": str(agent.id),
"role": agent.role,
"goal": agent.goal,
"backstory": agent.backstory,
"verbose?": agent.verbose,
"max_iter": agent.max_iter,
"max_rpm": agent.max_rpm,
"i18n": agent.i18n.prompt_file,
"llm": agent.llm.model,
"delegation_enabled?": agent.allow_delegation,
"tools_names": [
tool.name.casefold() for tool in agent.tools or []
],
}
for agent in crew.agents
]
),
)
self._add_attribute(
span,
"crew_tasks",
json.dumps(
[
{
"id": str(task.id),
"description": task.description,
"expected_output": task.expected_output,
"async_execution?": task.async_execution,
"human_input?": task.human_input,
"agent_role": task.agent.role if task.agent else "None",
"agent_key": task.agent.key if task.agent else None,
"context": (
[task.description for task in task.context]
if task.context
else None
),
"tools_names": [
tool.name.casefold() for tool in task.tools or []
],
}
for task in crew.tasks
]
),
)
return span

if crew.share_crew:
return self._safe_telemetry_operation(operation)
return None

def end_crew(self, crew, final_string_output):
def operation():
self._add_attribute(
crew._execution_span,
"crewai_version",
pkg_resources.get_distribution("crewai").version,
)
self._add_attribute(
crew._execution_span, "crew_output", final_string_output
)
self._add_attribute(
crew._execution_span,
"crew_tasks_output",
json.dumps(
[
{
"id": str(task.id),
"description": task.description,
"output": task.output.raw_output,
}
for task in crew.tasks
]
),
)
crew._execution_span.set_status(Status(StatusCode.OK))
crew._execution_span.end()

if crew.share_crew:
self._safe_telemetry_operation(operation)

def _add_attribute(self, span, key, value):
"""Add an attribute to a span."""
try:

def operation():
return span.set_attribute(key, value)
except Exception:
pass

self._safe_telemetry_operation(operation)

def flow_creation_span(self, flow_name: str):
if self.ready:
try:
tracer = trace.get_tracer("crewai.telemetry")
span = tracer.start_span("Flow Creation")
self._add_attribute(span, "flow_name", flow_name)
span.set_status(Status(StatusCode.OK))
span.end()
except Exception:
pass
def operation():
tracer = trace.get_tracer("crewai.telemetry")
span = tracer.start_span("Flow Creation")
self._add_attribute(span, "flow_name", flow_name)
span.set_status(Status(StatusCode.OK))
span.end()

self._safe_telemetry_operation(operation)

def flow_plotting_span(self, flow_name: str, node_names: list[str]):
if self.ready:
try:
tracer = trace.get_tracer("crewai.telemetry")
span = tracer.start_span("Flow Plotting")
self._add_attribute(span, "flow_name", flow_name)
self._add_attribute(span, "node_names", json.dumps(node_names))
span.set_status(Status(StatusCode.OK))
span.end()
except Exception:
pass
def operation():
tracer = trace.get_tracer("crewai.telemetry")
span = tracer.start_span("Flow Plotting")
self._add_attribute(span, "flow_name", flow_name)
self._add_attribute(span, "node_names", json.dumps(node_names))
span.set_status(Status(StatusCode.OK))
span.end()

self._safe_telemetry_operation(operation)

def flow_execution_span(self, flow_name: str, node_names: list[str]):
if self.ready:
try:
tracer = trace.get_tracer("crewai.telemetry")
span = tracer.start_span("Flow Execution")
self._add_attribute(span, "flow_name", flow_name)
self._add_attribute(span, "node_names", json.dumps(node_names))
span.set_status(Status(StatusCode.OK))
span.end()
except Exception:
pass
def operation():
tracer = trace.get_tracer("crewai.telemetry")
span = tracer.start_span("Flow Execution")
self._add_attribute(span, "flow_name", flow_name)
self._add_attribute(span, "node_names", json.dumps(node_names))
span.set_status(Status(StatusCode.OK))
span.end()

self._safe_telemetry_operation(operation)

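The dominant pattern in this Telemetry diff is replacing per-method `if self.ready: try/except` blocks with a single guard. A minimal sketch of that wrapper, using the OpenTelemetry API the diff already relies on (`trace`, `Status`, `StatusCode`); the class and its attributes are simplified stand-ins:

```python
from opentelemetry import trace
from opentelemetry.trace import Status, StatusCode

class SafeTelemetry:
    def __init__(self, ready: bool = True):
        # Mirrors Telemetry.ready: set once during setup, checked on every call.
        self.ready = ready

    def _safe_telemetry_operation(self, operation):
        # Single guard: skip when telemetry is off, swallow any failure.
        if not self.ready:
            return
        try:
            return operation()
        except Exception:
            pass

    def flow_creation_span(self, flow_name: str):
        def operation():
            tracer = trace.get_tracer("crewai.telemetry")
            span = tracer.start_span("Flow Creation")
            span.set_attribute("flow_name", flow_name)
            span.set_status(Status(StatusCode.OK))
            span.end()

        self._safe_telemetry_operation(operation)

SafeTelemetry().flow_creation_span("demo_flow")  # never raises, even unconfigured
```

Every span-producing method then shrinks to a local `operation()` closure plus one `self._safe_telemetry_operation(operation)` call, as the hunks above show.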
@@ -6,14 +6,13 @@ from difflib import SequenceMatcher
from textwrap import dedent
from typing import Any, List, Union

import crewai.utilities.events as events
from crewai.agents.tools_handler import ToolsHandler
from crewai.task import Task
from crewai.telemetry import Telemetry
from crewai.tools.tool_calling import InstructorToolCalling, ToolCalling
from crewai.tools.tool_usage_events import ToolUsageError, ToolUsageFinished
from crewai.utilities import I18N, Converter, ConverterError, Printer
import crewai.utilities.events as events


agentops = None
if os.environ.get("AGENTOPS_API_KEY"):
@@ -59,7 +58,7 @@ class ToolUsage:
agent: Any,
action: Any,
) -> None:
self._i18n: I18N = I18N()
self._i18n: I18N = agent.i18n
self._printer: Printer = Printer()
self._telemetry: Telemetry = Telemetry()
self._run_attempts: int = 1
@@ -300,8 +299,11 @@ class ToolUsage:
descriptions = []
for tool in self.tools:
args = {
k: {k2: v2 for k2, v2 in v.items() if k2 in ["description", "type"]}
for k, v in tool.args.items()
name: {
"description": field.description,
"type": field.annotation.__name__,
}
for name, field in tool.args_schema.model_fields.items()
}
descriptions.append(
"\n".join(
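The last hunk rebuilds each tool's argument summary from its Pydantic v2 schema instead of the generic `tool.args` mapping. A short sketch of that extraction, assuming a Pydantic v2 `BaseModel` as the `args_schema` (the `SearchArgs` model is illustrative):

```python
from pydantic import BaseModel, Field

class SearchArgs(BaseModel):
    query: str = Field(description="Text to search for")
    limit: int = Field(default=3, description="Maximum number of results")

# Mirror of the new hunk: one {description, type} entry per declared field.
args = {
    name: {
        "description": field.description,
        "type": field.annotation.__name__,
    }
    for name, field in SearchArgs.model_fields.items()
}
print(args)
# {'query': {'description': 'Text to search for', 'type': 'str'},
#  'limit': {'description': 'Maximum number of results', 'type': 'int'}}
```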
@@ -20,7 +20,8 @@
"getting_input": "This is the agent's final answer: {final_answer}\n\n",
"summarizer_system_message": "You are a helpful assistant that summarizes text.",
"sumamrize_instruction": "Summarize the following text, make sure to include all the important information: {group}",
"summary": "This is a summary of our conversation so far:\n{merged_summary}"
"summary": "This is a summary of our conversation so far:\n{merged_summary}",
"manager_request": "Your best answer to your coworker asking you this, accounting for the context shared."
},
"errors": {
"force_final_answer_error": "You can't keep going, this was the best you could do.\n {formatted_answer.text}",
@@ -1,21 +1,21 @@
"""Test Agent creation and execution basic functionality."""

import os
from unittest import mock
from unittest.mock import patch

import os
import pytest
from crewai_tools import tool

from crewai import Agent, Crew, Task
from crewai.agents.cache import CacheHandler
from crewai.agents.crew_agent_executor import CrewAgentExecutor
from crewai.agents.parser import AgentAction, CrewAgentParser, OutputParserException
from crewai.llm import LLM
from crewai.agents.parser import CrewAgentParser, OutputParserException
from crewai.tools.tool_calling import InstructorToolCalling
from crewai.tools.tool_usage import ToolUsage
from crewai.tools.tool_usage_events import ToolUsageFinished
from crewai.utilities import RPMController
from crewai_tools import tool
from crewai.agents.parser import AgentAction
from crewai.utilities.events import Emitter


@@ -73,7 +73,7 @@ def test_agent_creation():

def test_agent_default_values():
agent = Agent(role="test role", goal="test goal", backstory="test backstory")
assert agent.llm.model == "gpt-4o"
assert agent.llm.model == "gpt-4o-mini"
assert agent.allow_delegation is False


@@ -116,6 +116,7 @@ def test_custom_llm_temperature_preservation():
@pytest.mark.vcr(filter_headers=["authorization"])
def test_agent_execute_task():
from langchain_openai import ChatOpenAI

from crewai import Task

agent = Agent(
@@ -206,7 +207,7 @@ def test_logging_tool_usage():
verbose=True,
)

assert agent.llm.model == "gpt-4o"
assert agent.llm.model == "gpt-4o-mini"
assert agent.tools_handler.last_used_tool == {}
task = Task(
description="What is 3 times 4?",
@@ -602,6 +603,7 @@ def test_agent_respect_the_max_rpm_set(capsys):
@pytest.mark.vcr(filter_headers=["authorization"])
def test_agent_respect_the_max_rpm_set_over_crew_rpm(capsys):
from unittest.mock import patch

from crewai_tools import tool

@tool
@@ -693,6 +695,7 @@ def test_agent_without_max_rpm_respet_crew_rpm(capsys):
@pytest.mark.vcr(filter_headers=["authorization"])
def test_agent_error_on_parsing_tool(capsys):
from unittest.mock import patch

from crewai_tools import tool

@tool
@@ -855,7 +858,9 @@ def test_agent_function_calling_llm():
tasks = [essay]
crew = Crew(agents=[agent1], tasks=tasks)
from unittest.mock import patch

import instructor

from crewai.tools.tool_usage import ToolUsage

with patch.object(
@@ -1,12 +1,12 @@
import pytest
import requests
import sys
import unittest

from io import StringIO
from requests.exceptions import JSONDecodeError
from unittest.mock import MagicMock, Mock, patch

import pytest
import requests
from requests.exceptions import JSONDecodeError

from crewai.cli.deploy.main import DeployCommand
from crewai.cli.utils import parse_toml

@@ -228,13 +228,11 @@ class TestDeployCommand(unittest.TestCase):
"builtins.open",
new_callable=unittest.mock.mock_open,
read_data="""
[tool.poetry]
[project]
name = "test_project"
version = "0.1.0"

[tool.poetry.dependencies]
python = "^3.10"
crewai = { extras = ["tools"], version = ">=0.51.0,<1.0.0" }
requires-python = ">=3.10,<=3.13"
dependencies = ["crewai"]
""",
)
def test_get_project_name_python_310(self, mock_open):
@@ -248,13 +246,11 @@ class TestDeployCommand(unittest.TestCase):
"builtins.open",
new_callable=unittest.mock.mock_open,
read_data="""
[tool.poetry]
[project]
name = "test_project"
version = "0.1.0"

[tool.poetry.dependencies]
python = "^3.11"
crewai = { extras = ["tools"], version = ">=0.51.0,<1.0.0" }
requires-python = ">=3.10,<=3.13"
dependencies = ["crewai"]
""",
)
def test_get_project_name_python_311_plus(self, mock_open):
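These fixtures move the test pyproject from Poetry-specific tables to the standard `[project]` table that uv reads. A sketch of pulling the same metadata with the stdlib parser (Python 3.11+); `parse_toml` in the imports above is the project's own helper, while this snippet uses only the standard library:

```python
import tomllib  # stdlib TOML parser, available from Python 3.11

PYPROJECT = """
[project]
name = "test_project"
version = "0.1.0"
requires-python = ">=3.10,<=3.13"
dependencies = ["crewai"]
"""

data = tomllib.loads(PYPROJECT)
print(data["project"]["name"])             # test_project
print(data["project"]["requires-python"])  # >=3.10,<=3.13
```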
@@ -18,12 +18,12 @@ from crewai.cli import evaluate_crew
def test_crew_success(mock_subprocess_run, n_iterations, model):
"""Test the crew function for successful execution."""
mock_subprocess_run.return_value = subprocess.CompletedProcess(
args=f"poetry run test {n_iterations} {model}", returncode=0
args=f"uv run test {n_iterations} {model}", returncode=0
)
result = evaluate_crew.evaluate_crew(n_iterations, model)

mock_subprocess_run.assert_called_once_with(
["poetry", "run", "test", str(n_iterations), model],
["uv", "run", "test", str(n_iterations), model],
capture_output=False,
text=True,
check=True,
@@ -55,14 +55,14 @@ def test_test_crew_called_process_error(mock_subprocess_run, click):
n_iterations = 5
mock_subprocess_run.side_effect = subprocess.CalledProcessError(
returncode=1,
cmd=["poetry", "run", "test", str(n_iterations), "gpt-4o"],
cmd=["uv", "run", "test", str(n_iterations), "gpt-4o"],
output="Error",
stderr="Some error occurred",
)
evaluate_crew.evaluate_crew(n_iterations, "gpt-4o")

mock_subprocess_run.assert_called_once_with(
["poetry", "run", "test", "5", "gpt-4o"],
["uv", "run", "test", "5", "gpt-4o"],
capture_output=False,
text=True,
check=True,
@@ -70,7 +70,7 @@ def test_test_crew_called_process_error(mock_subprocess_run, click):
click.echo.assert_has_calls(
[
mock.call.echo(
"An error occurred while testing the crew: Command '['poetry', 'run', 'test', '5', 'gpt-4o']' returned non-zero exit status 1.",
"An error occurred while testing the crew: Command '['uv', 'run', 'test', '5', 'gpt-4o']' returned non-zero exit status 1.",
err=True,
),
mock.call.echo("Error", err=True),
@@ -87,7 +87,7 @@ def test_test_crew_unexpected_exception(mock_subprocess_run, click):
evaluate_crew.evaluate_crew(n_iterations, "gpt-4o")

mock_subprocess_run.assert_called_once_with(
["poetry", "run", "test", "5", "gpt-4o"],
["uv", "run", "test", "5", "gpt-4o"],
capture_output=False,
text=True,
check=True,
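The pattern across these CLI tests is uniform: patch `subprocess.run`, call the command, and assert the exact `uv run ...` argv. A self-contained sketch of that shape (the `run_test` helper is illustrative, not the real CLI function):

```python
import subprocess
from unittest.mock import patch

def run_test(n_iterations: int, model: str) -> None:
    # Stand-in for the CLI helper: delegates to `uv run test ...`.
    subprocess.run(
        ["uv", "run", "test", str(n_iterations), model],
        capture_output=False,
        text=True,
        check=True,
    )

with patch("subprocess.run") as mock_run:
    mock_run.return_value = subprocess.CompletedProcess(
        args="uv run test 5 gpt-4o", returncode=0
    )
    run_test(5, "gpt-4o")
    mock_run.assert_called_once_with(
        ["uv", "run", "test", "5", "gpt-4o"],
        capture_output=False,
        text=True,
        check=True,
    )
```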
@@ -92,16 +92,20 @@ class TestPlusAPI(unittest.TestCase):
)
self.assertEqual(response, mock_response)

@patch("crewai.cli.plus_api.requests.request")
def test_make_request(self, mock_request):
@patch("crewai.cli.plus_api.requests.Session")
def test_make_request(self, mock_session):
mock_response = MagicMock()
mock_request.return_value = mock_response

mock_session_instance = mock_session.return_value
mock_session_instance.request.return_value = mock_response

response = self.api._make_request("GET", "test_endpoint")

mock_request.assert_called_once_with(
mock_session.assert_called_once()
mock_session_instance.request.assert_called_once_with(
"GET", f"{self.api.base_url}/test_endpoint", headers=self.api.headers
)
mock_session_instance.trust_env = False
self.assertEqual(response, mock_response)

@patch("crewai.cli.plus_api.PlusAPI._make_request")
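The updated test implies `_make_request` now goes through a `requests.Session` with environment-derived settings disabled. A minimal sketch of that client shape, assuming the real `PlusAPI` wires it similarly (class and attribute names simplified):

```python
import requests

class PlusAPIClient:
    def __init__(self, base_url: str, api_key: str):
        self.base_url = base_url
        self.headers = {"Authorization": f"Bearer {api_key}"}

    def _make_request(self, method: str, endpoint: str) -> requests.Response:
        session = requests.Session()
        # Ignore proxy/CA settings inherited from the environment for this call.
        session.trust_env = False
        return session.request(
            method, f"{self.base_url}/{endpoint}", headers=self.headers
        )
```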
@@ -1,13 +1,16 @@
import os
import tempfile
import unittest
import unittest.mock
import os
from contextlib import contextmanager
from io import StringIO
from unittest import mock
from unittest.mock import MagicMock, patch

from pytest import raises

from crewai.cli.tools.main import ToolCommand
from io import StringIO
from unittest.mock import patch, MagicMock


@contextmanager
def in_temp_dir():
@@ -19,6 +22,7 @@ def in_temp_dir():
finally:
os.chdir(original_dir)


@patch("crewai.cli.tools.main.subprocess.run")
def test_create_success(mock_subprocess):
with in_temp_dir():
@@ -38,9 +42,7 @@ def test_create_success(mock_subprocess):
)
assert os.path.isfile(os.path.join("test_tool", "src", "test_tool", "tool.py"))

with open(
os.path.join("test_tool", "src", "test_tool", "tool.py"), "r"
) as f:
with open(os.path.join("test_tool", "src", "test_tool", "tool.py"), "r") as f:
content = f.read()
assert "class TestTool" in content

@@ -49,6 +51,7 @@ def test_create_success(mock_subprocess):

assert "Creating custom tool test_tool..." in output


@patch("crewai.cli.tools.main.subprocess.run")
@patch("crewai.cli.plus_api.PlusAPI.get_tool")
def test_install_success(mock_get, mock_subprocess_run):
@@ -67,9 +70,15 @@ def test_install_success(mock_get, mock_subprocess_run):
tool_command.install("sample-tool")
output = fake_out.getvalue()

mock_get.assert_called_once_with("sample-tool")
mock_get.assert_has_calls([mock.call("sample-tool"), mock.call().json()])
mock_subprocess_run.assert_any_call(
["poetry", "add", "--source", "crewai-sample-repo", "sample-tool"],
[
"uv",
"add",
"--index",
"sample-repo=https://example.com/repo",
"sample-tool",
],
capture_output=False,
text=True,
check=True,
@@ -77,6 +86,7 @@ def test_install_success(mock_get, mock_subprocess_run):

assert "Succesfully installed sample-tool" in output


@patch("crewai.cli.plus_api.PlusAPI.get_tool")
def test_install_tool_not_found(mock_get):
mock_get_response = MagicMock()
@@ -95,6 +105,7 @@ def test_install_tool_not_found(mock_get):
mock_get.assert_called_once_with("non-existent-tool")
assert "No tool found with this name" in output


@patch("crewai.cli.plus_api.PlusAPI.get_tool")
def test_install_api_error(mock_get):
mock_get_response = MagicMock()
@@ -113,15 +124,16 @@ def test_install_api_error(mock_get):
mock_get.assert_called_once_with("error-tool")
assert "Failed to get tool details" in output


@patch("crewai.cli.tools.main.git.Repository.is_synced", return_value=False)
def test_publish_when_not_in_sync(mock_is_synced):
with patch("sys.stdout", new=StringIO()) as fake_out, \
raises(SystemExit):
with patch("sys.stdout", new=StringIO()) as fake_out, raises(SystemExit):
tool_command = ToolCommand()
tool_command.publish(is_public=True)

assert "Local changes need to be resolved before publishing" in fake_out.getvalue()


@patch("crewai.cli.tools.main.get_project_name", return_value="sample-tool")
@patch("crewai.cli.tools.main.get_project_version", return_value="1.0.0")
@patch("crewai.cli.tools.main.get_project_description", return_value="A sample tool")
@@ -156,7 +168,7 @@ def test_publish_when_not_in_sync_and_force(
mock_get_project_version.assert_called_with(require=True)
mock_get_project_description.assert_called_with(require=False)
mock_subprocess_run.assert_called_with(
["poetry", "build", "-f", "sdist", "--output", unittest.mock.ANY],
["uv", "build", "--sdist", "--out-dir", unittest.mock.ANY],
check=True,
capture_output=False,
)
@@ -169,6 +181,7 @@ def test_publish_when_not_in_sync_and_force(
encoded_file=unittest.mock.ANY,
)


@patch("crewai.cli.tools.main.get_project_name", return_value="sample-tool")
@patch("crewai.cli.tools.main.get_project_version", return_value="1.0.0")
@patch("crewai.cli.tools.main.get_project_description", return_value="A sample tool")
@@ -203,7 +216,7 @@ def test_publish_success(
mock_get_project_version.assert_called_with(require=True)
mock_get_project_description.assert_called_with(require=False)
mock_subprocess_run.assert_called_with(
["poetry", "build", "-f", "sdist", "--output", unittest.mock.ANY],
["uv", "build", "--sdist", "--out-dir", unittest.mock.ANY],
check=True,
capture_output=False,
)
@@ -216,6 +229,7 @@ def test_publish_success(
encoded_file=unittest.mock.ANY,
)


@patch("crewai.cli.tools.main.get_project_name", return_value="sample-tool")
@patch("crewai.cli.tools.main.get_project_version", return_value="1.0.0")
@patch("crewai.cli.tools.main.get_project_description", return_value="A sample tool")
@@ -254,6 +268,7 @@ def test_publish_failure(
assert "Failed to complete operation" in output
assert "Name is already taken" in output


@patch("crewai.cli.tools.main.get_project_name", return_value="sample-tool")
@patch("crewai.cli.tools.main.get_project_version", return_value="1.0.0")
@patch("crewai.cli.tools.main.get_project_description", return_value="A sample tool")
@@ -291,54 +306,3 @@ def test_publish_api_error(

mock_publish.assert_called_once()
assert "Request to Enterprise API failed" in output

@patch("crewai.cli.plus_api.PlusAPI.login_to_tool_repository")
@patch("crewai.cli.tools.main.subprocess.run")
def test_login_success(mock_subprocess_run, mock_login):
mock_login_response = MagicMock()
mock_login_response.status_code = 200
mock_login_response.json.return_value = {
"repositories": [
{
"handle": "tools",
"url": "https://example.com/repo",
}
],
"credential": {"username": "user", "password": "pass"},
}
mock_login.return_value = mock_login_response

mock_subprocess_run.return_value = MagicMock(stderr=None)

tool_command = ToolCommand()

with patch("sys.stdout", new=StringIO()) as fake_out:
tool_command.login()
output = fake_out.getvalue()

mock_login.assert_called_once()
mock_subprocess_run.assert_any_call(
[
"poetry",
"source",
"add",
"--priority=explicit",
"crewai-tools",
"https://example.com/repo",
],
text=True,
check=True,
)
mock_subprocess_run.assert_any_call(
[
"poetry",
"config",
"http-basic.crewai-tools",
"user",
"pass",
],
capture_output=False,
text=True,
check=True,
)
assert "Succesfully authenticated to the tool repository" in output
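The install test now expects the CLI to shell out to a single `uv add` with an inline index, instead of poetry's two-step `source add` plus `config http-basic` flow that the removed login test exercised. A sketch of that invocation, with a placeholder repository URL (in the real flow the handle/URL pair comes from the Plus API response):

```python
import subprocess

def install_tool(handle: str, repo_handle: str, repo_url: str) -> None:
    # `uv add --index name=url pkg` registers the index and adds the
    # package in one call, matching the argv asserted in the test above.
    subprocess.run(
        ["uv", "add", "--index", f"{repo_handle}={repo_url}", handle],
        capture_output=False,
        text=True,
        check=True,
    )

install_tool("sample-tool", "sample-repo", "https://example.com/repo")
```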
@@ -8,7 +8,7 @@ from crewai.cli.train_crew import train_crew
def test_train_crew_positive_iterations(mock_subprocess_run):
n_iterations = 5
mock_subprocess_run.return_value = subprocess.CompletedProcess(
args=["poetry", "run", "train", str(n_iterations)],
args=["uv", "run", "train", str(n_iterations)],
returncode=0,
stdout="Success",
stderr="",
@@ -17,7 +17,7 @@ def test_train_crew_positive_iterations(mock_subprocess_run):
train_crew(n_iterations, "trained_agents_data.pkl")

mock_subprocess_run.assert_called_once_with(
["poetry", "run", "train", str(n_iterations), "trained_agents_data.pkl"],
["uv", "run", "train", str(n_iterations), "trained_agents_data.pkl"],
capture_output=False,
text=True,
check=True,
@@ -48,14 +48,14 @@ def test_train_crew_called_process_error(mock_subprocess_run, click):
n_iterations = 5
mock_subprocess_run.side_effect = subprocess.CalledProcessError(
returncode=1,
cmd=["poetry", "run", "train", str(n_iterations)],
cmd=["uv", "run", "train", str(n_iterations)],
output="Error",
stderr="Some error occurred",
)
train_crew(n_iterations, "trained_agents_data.pkl")

mock_subprocess_run.assert_called_once_with(
["poetry", "run", "train", str(n_iterations), "trained_agents_data.pkl"],
["uv", "run", "train", str(n_iterations), "trained_agents_data.pkl"],
capture_output=False,
text=True,
check=True,
@@ -63,7 +63,7 @@ def test_train_crew_called_process_error(mock_subprocess_run, click):
click.echo.assert_has_calls(
[
mock.call.echo(
"An error occurred while training the crew: Command '['poetry', 'run', 'train', '5']' returned non-zero exit status 1.",
"An error occurred while training the crew: Command '['uv', 'run', 'train', '5']' returned non-zero exit status 1.",
err=True,
),
mock.call.echo("Error", err=True),
@@ -79,7 +79,7 @@ def test_train_crew_unexpected_exception(mock_subprocess_run, click):
train_crew(n_iterations, "trained_agents_data.pkl")

mock_subprocess_run.assert_called_once_with(
["poetry", "run", "train", str(n_iterations), "trained_agents_data.pkl"],
["uv", "run", "train", str(n_iterations), "trained_agents_data.pkl"],
capture_output=False,
text=True,
check=True,

@@ -9,6 +9,7 @@ from unittest.mock import MagicMock, patch
|
||||
import instructor
|
||||
import pydantic_core
|
||||
import pytest
|
||||
|
||||
from crewai.agent import Agent
|
||||
from crewai.agents.cache import CacheHandler
|
||||
from crewai.crew import Crew
|
||||
@@ -23,7 +24,6 @@ from crewai.types.usage_metrics import UsageMetrics
|
||||
from crewai.utilities import Logger
|
||||
from crewai.utilities.rpm_controller import RPMController
|
||||
from crewai.utilities.task_output_storage_handler import TaskOutputStorageHandler
|
||||
from pydantic_core import ValidationError
|
||||
|
||||
ceo = Agent(
|
||||
role="CEO",
|
||||
@@ -174,57 +174,6 @@ def test_context_no_future_tasks():
|
||||
Crew(tasks=[task1, task2, task3, task4], agents=[researcher, writer])
|
||||
|
||||
|
||||
def test_memory_provider_validation():
|
||||
# Create mock agents
|
||||
agent1 = Agent(
|
||||
role="Researcher",
|
||||
goal="Conduct research on AI",
|
||||
backstory="An experienced AI researcher",
|
||||
allow_delegation=False,
|
||||
)
|
||||
agent2 = Agent(
|
||||
role="Writer",
|
||||
goal="Write articles on AI",
|
||||
backstory="A seasoned writer with a focus on technology",
|
||||
allow_delegation=False,
|
||||
)
|
||||
|
||||
# Create mock tasks
|
||||
task1 = Task(
|
||||
description="Research the latest trends in AI",
|
||||
expected_output="A report on AI trends",
|
||||
agent=agent1,
|
||||
)
|
||||
task2 = Task(
|
||||
description="Write an article based on the research",
|
||||
expected_output="An article on AI trends",
|
||||
agent=agent2,
|
||||
)
|
||||
|
||||
# Test with valid memory provider values
|
||||
try:
|
||||
crew_with_none = Crew(
|
||||
agents=[agent1, agent2], tasks=[task1, task2], memory_provider=None
|
||||
)
|
||||
crew_with_mem0 = Crew(
|
||||
agents=[agent1, agent2], tasks=[task1, task2], memory_provider="mem0"
|
||||
)
|
||||
except ValidationError:
|
||||
pytest.fail(
|
||||
"Unexpected ValidationError raised for valid memory provider values"
|
||||
)
|
||||
|
||||
# Test with an invalid memory provider value
|
||||
with pytest.raises(ValidationError) as excinfo:
|
||||
Crew(
|
||||
agents=[agent1, agent2],
|
||||
tasks=[task1, task2],
|
||||
memory_provider="invalid_provider",
|
||||
)
|
||||
|
||||
assert "Memory provider must be either None or 'mem0'." in str(excinfo.value)
|
||||
|
||||
|
||||
def test_crew_config_with_wrong_keys():
no_tasks_config = json.dumps(
{
@@ -832,11 +781,14 @@ def test_async_task_execution_call_count():
list_important_history.output = mock_task_output
write_article.output = mock_task_output

with patch.object(
Task, "execute_sync", return_value=mock_task_output
) as mock_execute_sync, patch.object(
Task, "execute_async", return_value=mock_future
) as mock_execute_async:
with (
patch.object(
Task, "execute_sync", return_value=mock_task_output
) as mock_execute_sync,
patch.object(
Task, "execute_async", return_value=mock_future
) as mock_execute_async,
):
crew.kickoff()

assert mock_execute_async.call_count == 2
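The hunk above only reformats the double patch: the comma-chained `with a, b:` becomes the parenthesized multi-context form, which requires Python 3.10+ but behaves identically. A self-contained illustration (the `Service` class is invented for the example):

```python
from unittest.mock import patch


class Service:  # invented for illustration
    def fetch(self):
        return "real"

    def store(self, value):
        return value


# Parenthesized context managers (Python 3.10+) let each patch sit on its
# own line without backslash continuations.
with (
    patch.object(Service, "fetch", return_value="stub") as fetch_mock,
    patch.object(Service, "store", return_value=None) as store_mock,
):
    svc = Service()
    assert svc.fetch() == "stub"
    svc.store("x")
    store_mock.assert_called_once_with("x")
```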
@@ -1502,52 +1454,6 @@ def test_crew_does_not_interpolate_without_inputs():
interpolate_task_inputs.assert_not_called()


# def test_crew_partial_inputs():
# agent = Agent(
# role="{topic} Researcher",
# goal="Express hot takes on {topic}.",
# backstory="You have a lot of experience with {topic}.",
# )

# task = Task(
# description="Give me an analysis around {topic}.",
# expected_output="{points} bullet points about {topic}.",
# )

# crew = Crew(agents=[agent], tasks=[task], inputs={"topic": "AI"})
# inputs = {"topic": "AI"}
# crew._interpolate_inputs(inputs=inputs) # Manual call for now

# assert crew.tasks[0].description == "Give me an analysis around AI."
# assert crew.tasks[0].expected_output == "{points} bullet points about AI."
# assert crew.agents[0].role == "AI Researcher"
# assert crew.agents[0].goal == "Express hot takes on AI."
# assert crew.agents[0].backstory == "You have a lot of experience with AI."


# def test_crew_invalid_inputs():
# agent = Agent(
# role="{topic} Researcher",
# goal="Express hot takes on {topic}.",
# backstory="You have a lot of experience with {topic}.",
# )

# task = Task(
# description="Give me an analysis around {topic}.",
# expected_output="{points} bullet points about {topic}.",
# )

# crew = Crew(agents=[agent], tasks=[task], inputs={"subject": "AI"})
# inputs = {"subject": "AI"}
# crew._interpolate_inputs(inputs=inputs) # Manual call for now

# assert crew.tasks[0].description == "Give me an analysis around {topic}."
# assert crew.tasks[0].expected_output == "{points} bullet points about {topic}."
# assert crew.agents[0].role == "{topic} Researcher"
# assert crew.agents[0].goal == "Express hot takes on {topic}."
# assert crew.agents[0].backstory == "You have a lot of experience with {topic}."


def test_task_callback_on_crew():
from unittest.mock import MagicMock, patch

@@ -1824,7 +1730,10 @@ def test_manager_agent_with_tools_raises_exception():
@patch("crewai.crew.Crew.kickoff")
@patch("crewai.crew.CrewTrainingHandler")
@patch("crewai.crew.TaskEvaluator")
def test_crew_train_success(task_evaluator, crew_training_handler, kickoff):
@patch("crewai.crew.Crew.copy")
def test_crew_train_success(
copy_mock, task_evaluator, crew_training_handler, kickoff_mock
):
task = Task(
description="Come up with a list of 5 interesting ideas to explore for an article, then write one amazing paragraph highlight for each idea that showcases how good an article about this topic could be. Return the list of ideas with their paragraph and your notes.",
expected_output="5 bullet points with a paragraph for each idea.",
@@ -1835,9 +1744,19 @@ def test_crew_train_success(task_evaluator, crew_training_handler, kickoff):
agents=[researcher, writer],
tasks=[task],
)

# Create a mock for the copied crew
copy_mock.return_value = crew

crew.train(
n_iterations=2, inputs={"topic": "AI"}, filename="trained_agents_data.pkl"
)

# Ensure kickoff is called on the copied crew
kickoff_mock.assert_has_calls(
[mock.call(inputs={"topic": "AI"}), mock.call(inputs={"topic": "AI"})]
)

task_evaluator.assert_has_calls(
[
mock.call(researcher),
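The updated test patches `crewai.crew.Crew.copy` and asserts that `kickoff` runs on the copy, which suggests `train()` now clones the crew per run instead of mutating the original. A hedged sketch of that shape only (the real method also wires up training handlers and evaluators):

```python
def train(self, n_iterations: int, filename: str, inputs=None) -> None:
    """Sketch of the copy-then-kickoff pattern the test pins down."""
    inputs = inputs or {}
    train_crew = self.copy()  # leave the original crew untouched
    for _ in range(n_iterations):
        train_crew.kickoff(inputs=inputs)
```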
@@ -1876,10 +1795,6 @@ def test_crew_train_success(task_evaluator, crew_training_handler, kickoff):
]
)

kickoff.assert_has_calls(
[mock.call(inputs={"topic": "AI"}), mock.call(inputs={"topic": "AI"})]
)


def test_crew_train_error():
task = Task(
@@ -1894,7 +1809,7 @@ def test_crew_train_error():
)

with pytest.raises(TypeError) as e:
crew.train()
crew.train() # type: ignore purposefully throwing err
assert "train() missing 1 required positional argument: 'n_iterations'" in str(
e
)
@@ -2590,8 +2505,9 @@ def test_conditional_should_execute():


@mock.patch("crewai.crew.CrewEvaluator")
@mock.patch("crewai.crew.Crew.copy")
@mock.patch("crewai.crew.Crew.kickoff")
def test_crew_testing_function(mock_kickoff, crew_evaluator):
def test_crew_testing_function(kickoff_mock, copy_mock, crew_evaluator):
task = Task(
description="Come up with a list of 5 interesting ideas to explore for an article, then write one amazing paragraph highlight for each idea that showcases how good an article about this topic could be. Return the list of ideas with their paragraph and your notes.",
expected_output="5 bullet points with a paragraph for each idea.",
@@ -2602,11 +2518,15 @@ def test_crew_testing_function(mock_kickoff, crew_evaluator):
agents=[researcher],
tasks=[task],
)

# Create a mock for the copied crew
copy_mock.return_value = crew

n_iterations = 2
crew.test(n_iterations, openai_model_name="gpt-4o-mini", inputs={"topic": "AI"})

assert len(mock_kickoff.mock_calls) == n_iterations
mock_kickoff.assert_has_calls(
# Ensure kickoff is called on the copied crew
kickoff_mock.assert_has_calls(
[mock.call(inputs={"topic": "AI"}), mock.call(inputs={"topic": "AI"})]
)

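Both the train and test hunks lean on `assert_has_calls`, whose semantics are worth keeping in mind: it checks that the listed calls appear in order within `mock_calls`, so surrounding calls would not fail it. A minimal standalone illustration:

```python
from unittest import mock

m = mock.MagicMock()
m(inputs={"topic": "AI"})
m(inputs={"topic": "AI"})

# Passes if the listed calls occur consecutively in order; extra calls
# before or after would still pass, unlike assert_called_once_with.
m.assert_has_calls(
    [mock.call(inputs={"topic": "AI"}), mock.call(inputs={"topic": "AI"})]
)
assert len(m.mock_calls) == 2
```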
File diff suppressed because one or more lines are too long
@@ -1,270 +0,0 @@
interactions:
- request:
body: ''
headers:
accept:
- '*/*'
accept-encoding:
- gzip, deflate
connection:
- keep-alive
host:
- api.mem0.ai
user-agent:
- python-httpx/0.27.0
method: GET
uri: https://api.mem0.ai/v1/memories/?user_id=test
response:
body:
string: '[]'
headers:
CF-Cache-Status:
- DYNAMIC
CF-RAY:
- 8b477138bad847b9-BOM
Connection:
- keep-alive
Content-Length:
- '2'
Content-Type:
- application/json
Date:
- Sat, 17 Aug 2024 06:00:11 GMT
NEL:
- '{"success_fraction":0,"report_to":"cf-nel","max_age":604800}'
Report-To:
- '{"endpoints":[{"url":"https:\/\/a.nel.cloudflare.com\/report\/v4?s=uuyH2foMJVDpV%2FH52g1q%2FnvXKe3dBKVzvsK0mqmSNezkiszNR9OgrEJfVqmkX%2FlPFRP2sH4zrOuzGo6k%2FjzsjYJczqSWJUZHN2pPujiwnr1E9W%2BdLGKmG6%2FqPrGYAy2SBRWkkJVWsTO3OQ%3D%3D"}],"group":"cf-nel","max_age":604800}'
Server:
- cloudflare
allow:
- GET, POST, DELETE, OPTIONS
alt-svc:
- h3=":443"; ma=86400
cross-origin-opener-policy:
- same-origin
referrer-policy:
- same-origin
vary:
- Accept, origin, Cookie
x-content-type-options:
- nosniff
x-frame-options:
- DENY
status:
code: 200
message: OK
- request:
body: '{"batch": [{"properties": {"python_version": "3.12.4 (v3.12.4:8e8a4baf65,
Jun 6 2024, 17:33:18) [Clang 13.0.0 (clang-1300.0.29.30)]", "os": "darwin",
"os_version": "Darwin Kernel Version 23.4.0: Wed Feb 21 21:44:54 PST 2024; root:xnu-10063.101.15~2/RELEASE_ARM64_T6030",
"os_release": "23.4.0", "processor": "arm", "machine": "arm64", "function":
"mem0.client.main.MemoryClient", "$lib": "posthog-python", "$lib_version": "3.5.0",
"$geoip_disable": true}, "timestamp": "2024-08-17T06:00:11.526640+00:00", "context":
{}, "distinct_id": "fd411bd3-99a2-42d6-acd7-9fca8ad09580", "event": "client.init"}],
"historical_migration": false, "sentAt": "2024-08-17T06:00:11.701621+00:00",
"api_key": "phc_hgJkUVJFYtmaJqrvf6CYN67TIQ8yhXAkWzUn9AMU4yX"}'
headers:
Accept:
- '*/*'
Accept-Encoding:
- gzip, deflate
Connection:
- keep-alive
Content-Length:
- '740'
Content-Type:
- application/json
User-Agent:
- posthog-python/3.5.0
method: POST
uri: https://us.i.posthog.com/batch/
response:
body:
string: '{"status":"Ok"}'
headers:
Connection:
- keep-alive
Content-Length:
- '15'
Content-Type:
- application/json
Date:
- Sat, 17 Aug 2024 06:00:12 GMT
access-control-allow-credentials:
- 'true'
server:
- envoy
vary:
- origin, access-control-request-method, access-control-request-headers
x-envoy-upstream-service-time:
- '69'
status:
code: 200
message: OK
- request:
body: '{"messages": [{"role": "user", "content": "Remember the following insights
from Agent run: test value with provider"}], "metadata": {"task": "test_task_provider",
"agent": "test_agent_provider"}, "app_id": "Researcher"}'
headers:
accept:
- '*/*'
accept-encoding:
- gzip, deflate
connection:
- keep-alive
content-length:
- '219'
content-type:
- application/json
host:
- api.mem0.ai
user-agent:
- python-httpx/0.27.0
method: POST
uri: https://api.mem0.ai/v1/memories/
response:
body:
string: '{"message":"ok"}'
headers:
CF-Cache-Status:
- DYNAMIC
CF-RAY:
- 8b477140282547b9-BOM
Connection:
- keep-alive
Content-Length:
- '16'
Content-Type:
- application/json
Date:
- Sat, 17 Aug 2024 06:00:13 GMT
NEL:
- '{"success_fraction":0,"report_to":"cf-nel","max_age":604800}'
Report-To:
- '{"endpoints":[{"url":"https:\/\/a.nel.cloudflare.com\/report\/v4?s=FRjJKSk3YxVj03wA7S05H8ts35KnWfqS3wb6Rfy4kVZ4BgXfw7nJbm92wI6vEv5fWcAcHVnOlkJDggs11B01BMuB2k3a9RqlBi0dJNiMuk%2Bgm5xE%2BODMPWJctYNRwQMjNVbteUpS%2Fad8YA%3D%3D"}],"group":"cf-nel","max_age":604800}'
Server:
- cloudflare
allow:
- GET, POST, DELETE, OPTIONS
alt-svc:
- h3=":443"; ma=86400
cross-origin-opener-policy:
- same-origin
referrer-policy:
- same-origin
vary:
- Accept, origin, Cookie
x-content-type-options:
- nosniff
x-frame-options:
- DENY
status:
code: 200
message: OK
- request:
body: '{"query": "test value with provider", "limit": 3, "app_id": "Researcher"}'
headers:
accept:
- '*/*'
accept-encoding:
- gzip, deflate
connection:
- keep-alive
content-length:
- '73'
content-type:
- application/json
host:
- api.mem0.ai
user-agent:
- python-httpx/0.27.0
method: POST
uri: https://api.mem0.ai/v1/memories/search/
response:
body:
string: '[]'
headers:
CF-Cache-Status:
- DYNAMIC
CF-RAY:
- 8b47714d083b47b9-BOM
Connection:
- keep-alive
Content-Length:
- '2'
Content-Type:
- application/json
Date:
- Sat, 17 Aug 2024 06:00:14 GMT
NEL:
- '{"success_fraction":0,"report_to":"cf-nel","max_age":604800}'
Report-To:
- '{"endpoints":[{"url":"https:\/\/a.nel.cloudflare.com\/report\/v4?s=2DRWL1cdKdMvnE8vx1fPUGeTITOgSGl3N5g84PS6w30GRqpfz79BtSx6REhpnOiFV8kM6KGqln0iCZ5yoHc2jBVVJXhPJhQ5t0uerD9JFnkphjISrJOU1MJjZWneT9PlNABddxvVNCmluA%3D%3D"}],"group":"cf-nel","max_age":604800}'
Server:
- cloudflare
allow:
- POST, OPTIONS
alt-svc:
- h3=":443"; ma=86400
cross-origin-opener-policy:
- same-origin
referrer-policy:
- same-origin
vary:
- Accept, origin, Cookie
x-content-type-options:
- nosniff
x-frame-options:
- DENY
status:
code: 200
message: OK
- request:
body: '{"batch": [{"properties": {"python_version": "3.12.4 (v3.12.4:8e8a4baf65,
Jun 6 2024, 17:33:18) [Clang 13.0.0 (clang-1300.0.29.30)]", "os": "darwin",
"os_version": "Darwin Kernel Version 23.4.0: Wed Feb 21 21:44:54 PST 2024; root:xnu-10063.101.15~2/RELEASE_ARM64_T6030",
"os_release": "23.4.0", "processor": "arm", "machine": "arm64", "function":
"mem0.client.main.MemoryClient", "$lib": "posthog-python", "$lib_version": "3.5.0",
"$geoip_disable": true}, "timestamp": "2024-08-17T06:00:13.593952+00:00", "context":
{}, "distinct_id": "fd411bd3-99a2-42d6-acd7-9fca8ad09580", "event": "client.add"}],
"historical_migration": false, "sentAt": "2024-08-17T06:00:13.858277+00:00",
"api_key": "phc_hgJkUVJFYtmaJqrvf6CYN67TIQ8yhXAkWzUn9AMU4yX"}'
headers:
Accept:
- '*/*'
Accept-Encoding:
- gzip, deflate
Connection:
- keep-alive
Content-Length:
- '739'
Content-Type:
- application/json
User-Agent:
- posthog-python/3.5.0
method: POST
uri: https://us.i.posthog.com/batch/
response:
body:
string: '{"status":"Ok"}'
headers:
Connection:
- keep-alive
Content-Length:
- '15'
Content-Type:
- application/json
Date:
- Sat, 17 Aug 2024 06:00:13 GMT
access-control-allow-credentials:
- 'true'
server:
- envoy
vary:
- origin, access-control-request-method, access-control-request-headers
x-envoy-upstream-service-time:
- '33'
status:
code: 200
message: OK
version: 1
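The deleted YAML above is a VCR-style cassette that captured live traffic to api.mem0.ai and PostHog for the removed provider tests. Assuming the same `pytest.mark.vcr` setup that appears in the short-term memory tests below, cassettes are bound to tests roughly like this (sketch):

```python
import pytest


@pytest.mark.vcr(filter_headers=["authorization"])
def test_calls_mem0_api(short_term_memory):
    # On first run the recorder writes HTTP interactions to a YAML cassette;
    # later runs replay it, so no live request reaches api.mem0.ai.
    short_term_memory.save(value="test value", metadata={}, agent="test_agent")
```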
@@ -1,147 +0,0 @@
from unittest.mock import MagicMock, patch

import pytest
from crewai.memory import EntityMemory, LongTermMemory, ShortTermMemory, UserMemory
from crewai.memory.contextual.contextual_memory import ContextualMemory


@pytest.fixture
def mock_memories():
return {
"stm": MagicMock(spec=ShortTermMemory),
"ltm": MagicMock(spec=LongTermMemory),
"em": MagicMock(spec=EntityMemory),
"um": MagicMock(spec=UserMemory),
}


@pytest.fixture
def contextual_memory_mem0(mock_memories):
return ContextualMemory(
memory_provider="mem0",
stm=mock_memories["stm"],
ltm=mock_memories["ltm"],
em=mock_memories["em"],
um=mock_memories["um"],
)


@pytest.fixture
def contextual_memory_other(mock_memories):
return ContextualMemory(
memory_provider="other",
stm=mock_memories["stm"],
ltm=mock_memories["ltm"],
em=mock_memories["em"],
um=mock_memories["um"],
)


@pytest.fixture
def contextual_memory_none(mock_memories):
return ContextualMemory(
memory_provider=None,
stm=mock_memories["stm"],
ltm=mock_memories["ltm"],
em=mock_memories["em"],
um=mock_memories["um"],
)


def test_build_context_for_task_mem0(contextual_memory_mem0, mock_memories):
task = MagicMock(description="Test task")
context = "Additional context"

mock_memories["stm"].search.return_value = ["Recent insight"]
mock_memories["ltm"].search.return_value = [
{"metadata": {"suggestions": ["Historical data"]}}
]
mock_memories["em"].search.return_value = [{"memory": "Entity memory"}]
mock_memories["um"].search.return_value = [{"memory": "User memory"}]

result = contextual_memory_mem0.build_context_for_task(task, context)

assert "Recent Insights:" in result
assert "Historical Data:" in result
assert "Entities:" in result
assert "User memories/preferences:" in result


def test_build_context_for_task_other_provider(contextual_memory_other, mock_memories):
task = MagicMock(description="Test task")
context = "Additional context"

mock_memories["stm"].search.return_value = ["Recent insight"]
mock_memories["ltm"].search.return_value = [
{"metadata": {"suggestions": ["Historical data"]}}
]
mock_memories["em"].search.return_value = [{"context": "Entity context"}]
mock_memories["um"].search.return_value = [{"memory": "User memory"}]

result = contextual_memory_other.build_context_for_task(task, context)

assert "Recent Insights:" in result
assert "Historical Data:" in result
assert "Entities:" in result
assert "User memories/preferences:" not in result


def test_build_context_for_task_none_provider(contextual_memory_none, mock_memories):
task = MagicMock(description="Test task")
context = "Additional context"

mock_memories["stm"].search.return_value = ["Recent insight"]
mock_memories["ltm"].search.return_value = [
{"metadata": {"suggestions": ["Historical data"]}}
]
mock_memories["em"].search.return_value = [{"context": "Entity context"}]
mock_memories["um"].search.return_value = [{"memory": "User memory"}]

result = contextual_memory_none.build_context_for_task(task, context)

assert "Recent Insights:" in result
assert "Historical Data:" in result
assert "Entities:" in result
assert "User memories/preferences:" not in result


def test_fetch_entity_context_mem0(contextual_memory_mem0, mock_memories):
mock_memories["em"].search.return_value = [
{"memory": "Entity 1"},
{"memory": "Entity 2"},
]
result = contextual_memory_mem0._fetch_entity_context("query")
expected_result = "Entities:\n- Entity 1\n- Entity 2"
assert result == expected_result


def test_fetch_entity_context_other_provider(contextual_memory_other, mock_memories):
mock_memories["em"].search.return_value = [
{"context": "Entity 1"},
{"context": "Entity 2"},
]
result = contextual_memory_other._fetch_entity_context("query")
expected_result = "Entities:\n- Entity 1\n- Entity 2"
assert result == expected_result


def test_user_memories_only_for_mem0(contextual_memory_mem0, mock_memories):
mock_memories["um"].search.return_value = [{"memory": "User memory"}]

# Test for mem0 provider
result_mem0 = contextual_memory_mem0._fetch_user_memories("query")
assert "User memories/preferences:" in result_mem0
assert "User memory" in result_mem0

# Additional test to ensure user memories are included/excluded in the full context
task = MagicMock(description="Test task")
context = "Additional context"
mock_memories["stm"].search.return_value = ["Recent insight"]
mock_memories["ltm"].search.return_value = [
{"metadata": {"suggestions": ["Historical data"]}}
]
mock_memories["em"].search.return_value = [{"memory": "Entity memory"}]

full_context_mem0 = contextual_memory_mem0.build_context_for_task(task, context)
assert "User memories/preferences:" in full_context_mem0
assert "User memory" in full_context_mem0
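The removed contextual-memory tests encode one behavior worth noting: user memories are included in the task context only when the provider is `"mem0"`. A hedged sketch of that gating (class name and structure invented; the real ContextualMemory assembles more sections):

```python
class ContextualMemorySketch:  # invented stand-in for ContextualMemory
    def __init__(self, memory_provider, um):
        self.memory_provider = memory_provider
        self.um = um  # user-memory backend

    def _fetch_user_memories(self, query: str) -> str:
        results = self.um.search(query)
        formatted = "\n".join(f"- {r['memory']}" for r in results)
        return f"User memories/preferences:\n{formatted}"

    def build_context_for_task(self, task, context: str) -> str:
        query = f"{task.description} {context}".strip()
        sections = []  # recent insights, historical data, entities would go here
        if self.memory_provider == "mem0":
            sections.append(self._fetch_user_memories(query))
        return "\n".join(sections)
```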
@@ -1,119 +0,0 @@
# tests/memory/test_entity_memory.py

from unittest.mock import MagicMock, patch

import pytest
from crewai.memory.entity.entity_memory import EntityMemory
from crewai.memory.entity.entity_memory_item import EntityMemoryItem
from crewai.memory.storage.mem0_storage import Mem0Storage
from crewai.memory.storage.rag_storage import RAGStorage


@pytest.fixture
def mock_rag_storage():
"""Fixture to create a mock RAGStorage instance"""
return MagicMock(spec=RAGStorage)


@pytest.fixture
def mock_mem0_storage():
"""Fixture to create a mock Mem0Storage instance"""
return MagicMock(spec=Mem0Storage)


@pytest.fixture
def entity_memory_rag(mock_rag_storage):
"""Fixture to create an EntityMemory instance with RAGStorage"""
with patch(
"crewai.memory.entity.entity_memory.RAGStorage", return_value=mock_rag_storage
):
return EntityMemory()


@pytest.fixture
def entity_memory_mem0(mock_mem0_storage):
"""Fixture to create an EntityMemory instance with Mem0Storage"""
with patch(
"crewai.memory.entity.entity_memory.Mem0Storage", return_value=mock_mem0_storage
):
return EntityMemory(memory_provider="mem0")


def test_save_rag_storage(entity_memory_rag, mock_rag_storage):
item = EntityMemoryItem(
name="John Doe",
type="Person",
description="A software engineer",
relationships="Works at TechCorp",
)
entity_memory_rag.save(item)

expected_data = "John Doe(Person): A software engineer"
mock_rag_storage.save.assert_called_once_with(expected_data, item.metadata)


def test_save_mem0_storage(entity_memory_mem0, mock_mem0_storage):
item = EntityMemoryItem(
name="John Doe",
type="Person",
description="A software engineer",
relationships="Works at TechCorp",
)
entity_memory_mem0.save(item)

expected_data = """
Remember details about the following entity:
Name: John Doe
Type: Person
Entity Description: A software engineer
"""
mock_mem0_storage.save.assert_called_once_with(expected_data, item.metadata)


def test_search(entity_memory_rag, mock_rag_storage):
query = "software engineer"
limit = 5
filters = {"type": "Person"}
score_threshold = 0.7

entity_memory_rag.search(query, limit, filters, score_threshold)

mock_rag_storage.search.assert_called_once_with(
query=query, limit=limit, filters=filters, score_threshold=score_threshold
)


def test_reset(entity_memory_rag, mock_rag_storage):
entity_memory_rag.reset()
mock_rag_storage.reset.assert_called_once()


def test_reset_error(entity_memory_rag, mock_rag_storage):
mock_rag_storage.reset.side_effect = Exception("Reset error")

with pytest.raises(Exception) as exc_info:
entity_memory_rag.reset()

assert (
str(exc_info.value)
== "An error occurred while resetting the entity memory: Reset error"
)


@pytest.mark.parametrize("memory_provider", [None, "other"])
def test_init_with_rag_storage(memory_provider):
with patch("crewai.memory.entity.entity_memory.RAGStorage") as mock_rag_storage:
EntityMemory(memory_provider=memory_provider)
mock_rag_storage.assert_called_once()


def test_init_with_mem0_storage():
with patch("crewai.memory.entity.entity_memory.Mem0Storage") as mock_mem0_storage:
EntityMemory(memory_provider="mem0")
mock_mem0_storage.assert_called_once()


def test_init_with_custom_storage():
custom_storage = MagicMock()
entity_memory = EntityMemory(storage=custom_storage)
assert entity_memory.storage == custom_storage
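The deleted entity-memory tests spell out a provider-dependent save format: a compact one-liner for RAG storage versus a prompt-style block for mem0. A sketch of a `save` consistent with those expected strings (storage wiring elided):

```python
def save(self, item) -> None:
    """Format an EntityMemoryItem per provider before persisting (sketch)."""
    if self.memory_provider == "mem0":
        data = f"""
Remember details about the following entity:
Name: {item.name}
Type: {item.type}
Entity Description: {item.description}
"""
    else:
        data = f"{item.name}({item.type}): {item.description}"
    self.storage.save(data, item.metadata)
```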
@@ -1,125 +1,29 @@
# tests/memory/long_term_memory_test.py

from datetime import datetime
from unittest.mock import MagicMock, patch

import pytest

from crewai.memory.long_term.long_term_memory import LongTermMemory
from crewai.memory.long_term.long_term_memory_item import LongTermMemoryItem
from crewai.memory.storage.ltm_sqlite_storage import LTMSQLiteStorage


@pytest.fixture
def mock_storage():
"""Fixture to create a mock LTMSQLiteStorage instance"""
return MagicMock(spec=LTMSQLiteStorage)
def long_term_memory():
"""Fixture to create a LongTermMemory instance"""
return LongTermMemory()


@pytest.fixture
def long_term_memory(mock_storage):
"""Fixture to create a LongTermMemory instance with mock storage"""
return LongTermMemory(storage=mock_storage)


def test_save(long_term_memory, mock_storage):
def test_save_and_search(long_term_memory):
memory = LongTermMemoryItem(
agent="test_agent",
task="test_task",
expected_output="test_output",
datetime="2023-01-01 12:00:00",
datetime="test_datetime",
quality=0.5,
metadata={"additional_info": "test_info"},
metadata={"task": "test_task", "quality": 0.5},
)
long_term_memory.save(memory)

expected_metadata = {
"additional_info": "test_info",
"agent": "test_agent",
"expected_output": "test_output",
"quality": 0.5, # Include quality in expected metadata
}
mock_storage.save.assert_called_once_with(
task_description="test_task",
score=0.5,
metadata=expected_metadata,
datetime="2023-01-01 12:00:00",
)


def test_search(long_term_memory, mock_storage):
mock_storage.load.return_value = [
{
"metadata": {
"agent": "test_agent",
"expected_output": "test_output",
"task": "test_task",
},
"datetime": "2023-01-01 12:00:00",
"score": 0.5,
}
]

result = long_term_memory.search("test_task", latest_n=5)

mock_storage.load.assert_called_once_with("test_task", 5)
assert len(result) == 1
assert result[0]["metadata"]["agent"] == "test_agent"
assert result[0]["metadata"]["expected_output"] == "test_output"
assert result[0]["metadata"]["task"] == "test_task"
assert result[0]["datetime"] == "2023-01-01 12:00:00"
assert result[0]["score"] == 0.5


def test_save_with_minimal_metadata(long_term_memory, mock_storage):
memory = LongTermMemoryItem(
agent="minimal_agent",
task="minimal_task",
expected_output="minimal_output",
datetime="2023-01-01 12:00:00",
quality=0.3,
metadata={},
)
long_term_memory.save(memory)

expected_metadata = {
"agent": "minimal_agent",
"expected_output": "minimal_output",
"quality": 0.3, # Include quality in expected metadata
}
mock_storage.save.assert_called_once_with(
task_description="minimal_task",
score=0.3,
metadata=expected_metadata,
datetime="2023-01-01 12:00:00",
)


def test_reset(long_term_memory, mock_storage):
long_term_memory.reset()
mock_storage.reset.assert_called_once()


def test_search_with_no_results(long_term_memory, mock_storage):
mock_storage.load.return_value = []
result = long_term_memory.search("nonexistent_task")
assert result == []


def test_init_with_default_storage():
with patch(
"crewai.memory.long_term.long_term_memory.LTMSQLiteStorage"
) as mock_storage_class:
LongTermMemory()
mock_storage_class.assert_called_once()


def test_init_with_custom_storage():
custom_storage = MagicMock()
memory = LongTermMemory(storage=custom_storage)
assert memory.storage == custom_storage


@pytest.mark.parametrize("latest_n", [1, 3, 5, 10])
def test_search_with_different_latest_n(long_term_memory, mock_storage, latest_n):
long_term_memory.search("test_task", latest_n=latest_n)
mock_storage.load.assert_called_once_with("test_task", latest_n)
find = long_term_memory.search("test_task", latest_n=5)[0]
assert find["score"] == 0.5
assert find["datetime"] == "test_datetime"
assert find["metadata"]["agent"] == "test_agent"
assert find["metadata"]["quality"] == 0.5
assert find["metadata"]["task"] == "test_task"
assert find["metadata"]["expected_output"] == "test_output"
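The rewrite trades the mocked `LTMSQLiteStorage` for a real round trip: `save()` is expected to fold agent, expected_output, and quality into the stored metadata, and `search()` to return the latest rows with `score` and `datetime`. Reduced to its essentials (imports as in the diff; assertions taken from the new test):

```python
from crewai.memory.long_term.long_term_memory import LongTermMemory
from crewai.memory.long_term.long_term_memory_item import LongTermMemoryItem

memory = LongTermMemory()  # uses the default SQLite-backed storage
memory.save(
    LongTermMemoryItem(
        agent="test_agent",
        task="test_task",
        expected_output="test_output",
        datetime="test_datetime",
        quality=0.5,
        metadata={"task": "test_task", "quality": 0.5},
    )
)
find = memory.search("test_task", latest_n=5)[0]
assert find["score"] == 0.5
assert find["metadata"]["agent"] == "test_agent"
```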
@@ -1,5 +1,5 @@
import pytest

from unittest.mock import patch
from crewai.agent import Agent
from crewai.crew import Crew
from crewai.memory.short_term.short_term_memory import ShortTermMemory
@@ -26,7 +26,6 @@ def short_term_memory():
return ShortTermMemory(crew=Crew(agents=[agent], tasks=[task]))


@pytest.mark.vcr(filter_headers=["authorization"])
def test_save_and_search(short_term_memory):
memory = ShortTermMemoryItem(
data="""test value test value test value test value test value test value
@@ -35,55 +34,28 @@ def test_save_and_search(short_term_memory):
agent="test_agent",
metadata={"task": "test_task"},
)
short_term_memory.save(
value=memory.data,
metadata=memory.metadata,
agent=memory.agent,
)

find = short_term_memory.search("test value", score_threshold=0.01)[0]
assert find["context"] == memory.data, "Data value mismatch."
assert find["metadata"]["agent"] == "test_agent", "Agent value mismatch."
with patch.object(ShortTermMemory, "save") as mock_save:
short_term_memory.save(
value=memory.data,
metadata=memory.metadata,
agent=memory.agent,
)

mock_save.assert_called_once_with(
value=memory.data,
metadata=memory.metadata,
agent=memory.agent,
)

@pytest.fixture
def short_term_memory_with_provider():
"""Fixture to create a ShortTermMemory instance with a specific memory provider"""
agent = Agent(
role="Researcher",
goal="Search relevant data and provide results",
backstory="You are a researcher at a leading tech think tank.",
tools=[],
verbose=True,
)

task = Task(
description="Perform a search on specific topics.",
expected_output="A list of relevant URLs based on the search query.",
agent=agent,
)
return ShortTermMemory(
crew=Crew(agents=[agent], tasks=[task]), memory_provider="mem0"
)


def test_save_and_search_with_provider(short_term_memory_with_provider):
memory = ShortTermMemoryItem(
data="Loves to do research on the latest technologies.",
agent="test_agent_provider",
metadata={"task": "test_task_provider"},
)
short_term_memory_with_provider.save(
value=memory.data,
metadata=memory.metadata,
agent=memory.agent,
)

find = short_term_memory_with_provider.search(
"Loves to do research on the latest technologies.", score_threshold=0.01
)[0]
assert find["memory"] in memory.data, "Data value mismatch."
assert find["metadata"]["agent"] == "test_agent_provider", "Agent value mismatch."
assert (
short_term_memory_with_provider.memory_provider == "mem0"
), "Memory provider mismatch."
expected_result = [
{
"context": memory.data,
"metadata": {"agent": "test_agent"},
"score": 0.95,
}
]
with patch.object(ShortTermMemory, "search", return_value=expected_result):
find = short_term_memory.search("test value", score_threshold=0.01)[0]
assert find["context"] == memory.data, "Data value mismatch."
assert find["metadata"]["agent"] == "test_agent", "Agent value mismatch."
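The short-term memory tests move in the opposite direction from the long-term ones: instead of recording network traffic, they stub the memory boundary with `patch.object`, so nothing ever reaches mem0. The pattern in isolation (the `Memory` class here is invented for the example):

```python
from unittest.mock import patch


class Memory:  # invented stand-in; mirrors the save() signature under test
    def save(self, value, metadata, agent):
        raise RuntimeError("would hit the real storage backend")


with patch.object(Memory, "save") as mock_save:
    Memory().save(value="v", metadata={"task": "t"}, agent="a")
    mock_save.assert_called_once_with(value="v", metadata={"task": "t"}, agent="a")
```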
119
tests/tools/test_tool_usage.py
Normal file
119
tests/tools/test_tool_usage.py
Normal file
@@ -0,0 +1,119 @@
import json
import random
from unittest.mock import MagicMock

import pytest
from crewai_tools import BaseTool
from pydantic import BaseModel, Field

from crewai import Agent, Task
from crewai.tools.tool_usage import ToolUsage


class RandomNumberToolInput(BaseModel):
min_value: int = Field(
..., description="The minimum value of the range (inclusive)"
)
max_value: int = Field(
..., description="The maximum value of the range (inclusive)"
)


class RandomNumberTool(BaseTool):
name: str = "Random Number Generator"
description: str = "Generates a random number within a specified range"
args_schema: type[BaseModel] = RandomNumberToolInput

def _run(self, min_value: int, max_value: int) -> int:
return random.randint(min_value, max_value)


# Example agent and task
example_agent = Agent(
role="Number Generator",
goal="Generate random numbers for various purposes",
backstory="You are an AI agent specialized in generating random numbers within specified ranges.",
tools=[RandomNumberTool()],
verbose=True,
)

example_task = Task(
description="Generate a random number between 1 and 100",
expected_output="A random number between 1 and 100",
agent=example_agent,
)


def test_random_number_tool_range():
tool = RandomNumberTool()
result = tool._run(1, 10)
assert 1 <= result <= 10


def test_random_number_tool_invalid_range():
tool = RandomNumberTool()
with pytest.raises(ValueError):
tool._run(10, 1) # min_value > max_value

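The invalid-range test leans on standard-library behavior: `random.randint(a, b)` requires `a <= b` and raises `ValueError` for an empty range, so the tool needs no extra validation of its own. For instance:

```python
import random

try:
    random.randint(10, 1)  # empty range: min_value > max_value
except ValueError as err:
    print(err)  # message varies by Python version, e.g. "empty range ..."
```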
def test_random_number_tool_schema():
tool = RandomNumberTool()

# Get the schema using model_json_schema()
schema = tool.args_schema.model_json_schema()

# Convert the schema to a string
schema_str = json.dumps(schema)

# Check if the schema string contains the expected fields
assert "min_value" in schema_str
assert "max_value" in schema_str

# Parse the schema string back to a dictionary
schema_dict = json.loads(schema_str)

# Check if the schema contains the correct field types
assert schema_dict["properties"]["min_value"]["type"] == "integer"
assert schema_dict["properties"]["max_value"]["type"] == "integer"

# Check if the schema contains the field descriptions
assert (
"minimum value" in schema_dict["properties"]["min_value"]["description"].lower()
)
assert (
"maximum value" in schema_dict["properties"]["max_value"]["description"].lower()
)


def test_tool_usage_render():
tool = RandomNumberTool()

tool_usage = ToolUsage(
tools_handler=MagicMock(),
tools=[tool],
original_tools=[tool],
tools_description="Sample tool for testing",
tools_names="random_number_generator",
task=MagicMock(),
function_calling_llm=MagicMock(),
agent=MagicMock(),
action=MagicMock(),
)

rendered = tool_usage._render()

# Updated checks to match the actual output
assert "Tool Name: random number generator" in rendered
assert (
"Random Number Generator(min_value: 'integer', max_value: 'integer') - Generates a random number within a specified range min_value: 'The minimum value of the range (inclusive)', max_value: 'The maximum value of the range (inclusive)'"
in rendered
)
assert "Tool Arguments:" in rendered
assert (
"'min_value': {'description': 'The minimum value of the range (inclusive)', 'type': 'int'}"
in rendered
)
assert (
"'max_value': {'description': 'The maximum value of the range (inclusive)', 'type': 'int'}"
in rendered
)