Mirror of https://github.com/crewAIInc/crewAI.git (synced 2025-12-16 12:28:30 +00:00)

Compare commits: 48 commits, `feat/cli-d...` → `docs_updat...`
| SHA1 |
|---|
| a54d34ea5b |
| bc793749a5 |
| a9916940ef |
| b7f4931de5 |
| 327b728bef |
| a9510eec88 |
| d6db557f50 |
| 5ae56e3f72 |
| 1c9ebb59b1 |
| f520ceeb0d |
| 0df4d2fd4b |
| 596491d932 |
| 72fb109147 |
| 40b336d2a5 |
| 5958df71a2 |
| 26d9af8367 |
| cdaf2d41c7 |
| d9ee104167 |
| 0b9eeb7cdb |
| 9b558ddc51 |
| b857afe45b |
| 1d77c8de10 |
| 503f3a6372 |
| d2fab55561 |
| b955416458 |
| 18a2722e4d |
| c7e8d55926 |
| 48698bf0b7 |
| f79b3fc322 |
| 0b9e753c2f |
| 5b3f7be1c4 |
| f2208f5f8e |
| 79b5248b83 |
| d4791bef28 |
| d861cb0d74 |
| 67f19f79c2 |
| 5f359b14f7 |
| cda1900b14 |
| c8c0a89dc6 |
| 9a10cc15f4 |
| 345f1eacde |
| fa937bf3a7 |
| 172758020c |
| 5ff178084e |
| c012e0ff8d |
| f777c1c2e0 |
| 782ce22d99 |
| f5246039e5 |
.github/workflows/tests.yml (vendored, 1 change)
@@ -11,6 +11,7 @@ env:
jobs:
  deploy:
    runs-on: ubuntu-latest
    timeout-minutes: 15

    steps:
      - name: Checkout code
README.md (60 changes)
@@ -8,11 +8,11 @@

<h3>

[Homepage](https://www.crewai.io/) | [Documentation](https://docs.crewai.com/) | [Chat with Docs](https://chatg.pt/DWjSBZn) | [Examples](https://github.com/joaomdmoura/crewai-examples) | [Discord](https://discord.com/invite/X4JWnZnxPb)
[Homepage](https://www.crewai.com/) | [Documentation](https://docs.crewai.com/) | [Chat with Docs](https://chatg.pt/DWjSBZn) | [Examples](https://github.com/crewAIInc/crewAI-examples) | [Discourse](https://community.crewai.com)

</h3>

[](https://github.com/joaomdmoura/crewAI)
[](https://github.com/crewAIInc/crewAI)
[](https://opensource.org/licenses/MIT)

</div>

@@ -73,6 +73,7 @@ os.environ["SERPER_API_KEY"] = "Your Key" # serper.dev API key

# You can pass an optional llm attribute specifying what model you want to use.
# It can be a local model through Ollama / LM Studio or a remote
# model like OpenAI, Mistral, Anthropic or others (https://docs.crewai.com/how-to/LLM-Connections/)
# If you don't specify a model, the default is OpenAI gpt-4o
#
# import os
# os.environ['OPENAI_MODEL_NAME'] = 'gpt-3.5-turbo'

@@ -153,12 +154,12 @@ In addition to the sequential process, you can use the hierarchical process, whi

## Examples

You can test different real-life examples of AI crews in the [crewAI-examples repo](https://github.com/joaomdmoura/crewAI-examples?tab=readme-ov-file):
You can test different real-life examples of AI crews in the [crewAI-examples repo](https://github.com/crewAIInc/crewAI-examples?tab=readme-ov-file):

- [Landing Page Generator](https://github.com/joaomdmoura/crewAI-examples/tree/main/landing_page_generator)
- [Landing Page Generator](https://github.com/crewAIInc/crewAI-examples/tree/main/landing_page_generator)
- [Having Human input on the execution](https://docs.crewai.com/how-to/Human-Input-on-Execution)
- [Trip Planner](https://github.com/joaomdmoura/crewAI-examples/tree/main/trip_planner)
- [Stock Analysis](https://github.com/joaomdmoura/crewAI-examples/tree/main/stock_analysis)
- [Trip Planner](https://github.com/crewAIInc/crewAI-examples/tree/main/trip_planner)
- [Stock Analysis](https://github.com/crewAIInc/crewAI-examples/tree/main/stock_analysis)

### Quick Tutorial

@@ -166,19 +167,19 @@ You can test different real life examples of AI crews in the [crewAI-examples re

### Write Job Descriptions

[Check out code for this example](https://github.com/joaomdmoura/crewAI-examples/tree/main/job-posting) or watch a video below:
[Check out code for this example](https://github.com/crewAIInc/crewAI-examples/tree/main/job-posting) or watch a video below:

[](https://www.youtube.com/watch?v=u98wEMz-9to "Jobs postings")

### Trip Planner

[Check out code for this example](https://github.com/joaomdmoura/crewAI-examples/tree/main/trip_planner) or watch a video below:
[Check out code for this example](https://github.com/crewAIInc/crewAI-examples/tree/main/trip_planner) or watch a video below:

[](https://www.youtube.com/watch?v=xis7rWp-hjs "Trip Planner")

### Stock Analysis

[Check out code for this example](https://github.com/joaomdmoura/crewAI-examples/tree/main/stock_analysis) or watch a video below:
[Check out code for this example](https://github.com/crewAIInc/crewAI-examples/tree/main/stock_analysis) or watch a video below:

[](https://www.youtube.com/watch?v=e0Uj4yWdaAg "Stock Analysis")

@@ -190,13 +191,12 @@ Please refer to the [Connect crewAI to LLMs](https://docs.crewai.com/how-to/LLM-

## How CrewAI Compares

**CrewAI's Advantage**: CrewAI is built with production in mind. It offers the flexibility of Autogen's conversational agents and the structured process approach of ChatDev, but without the rigidity. CrewAI's processes are designed to be dynamic and adaptable, fitting seamlessly into both development and production workflows.

- **Autogen**: While Autogen does well at creating conversational agents capable of working together, it lacks an inherent concept of process. In Autogen, orchestrating agents' interactions requires additional programming, which can become complex and cumbersome as the scale of tasks grows.

- **ChatDev**: ChatDev introduced the idea of processes into the realm of AI agents, but its implementation is quite rigid. Customizations in ChatDev are limited and not geared towards production environments, which can hinder scalability and flexibility in real-world applications.

## Contribution

CrewAI is open-source and we welcome contributions. If you're looking to contribute, please:

@@ -284,3 +284,39 @@ Users can opt-in to Further Telemetry, sharing the complete telemetry data by se

## License

CrewAI is released under the MIT License.

## Frequently Asked Questions (FAQ)

### Q: What is CrewAI?
A: CrewAI is a cutting-edge framework for orchestrating role-playing, autonomous AI agents. It enables agents to work together seamlessly, tackling complex tasks through collaborative intelligence.

### Q: How do I install CrewAI?
A: You can install CrewAI using pip:
```shell
pip install crewai
```
For additional tools, use:
```shell
pip install 'crewai[tools]'
```

### Q: Can I use CrewAI with local models?
A: Yes, CrewAI supports various LLMs, including local models. You can configure your agents to use local models via tools like Ollama & LM Studio. Check the [LLM Connections documentation](https://docs.crewai.com/how-to/LLM-Connections/) for more details.

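As a rough sketch of what that configuration can look like (the endpoint and model name below are illustrative, not prescriptive; Ollama exposes an OpenAI-compatible endpoint under `/v1`):

```python
import os

# Illustrative values: point the OpenAI-compatible client at a local
# Ollama server and name the local model to use.
os.environ["OPENAI_API_BASE"] = "http://localhost:11434/v1"
os.environ["OPENAI_MODEL_NAME"] = "llama3"
os.environ["OPENAI_API_KEY"] = "NA"  # placeholder; local servers typically ignore it
```
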
### Q: What are the key features of CrewAI?
A: Key features include role-based agent design, autonomous inter-agent delegation, flexible task management, process-driven execution, output saving as files, and compatibility with both open-source and proprietary models.

### Q: How does CrewAI compare to other AI orchestration tools?
A: CrewAI is designed with production in mind, offering flexibility similar to Autogen's conversational agents and structured processes like ChatDev, but with more adaptability for real-world applications.

### Q: Is CrewAI open-source?
A: Yes, CrewAI is open-source and welcomes contributions from the community.

### Q: Does CrewAI collect any data?
A: CrewAI uses anonymous telemetry to collect usage data for improvement purposes. No sensitive data (like prompts, task descriptions, or API calls) is collected. Users can opt-in to share more detailed data by setting `share_crew=True` on their Crews.

### Q: Where can I find examples of CrewAI in action?
A: You can find various real-life examples in the [crewAI-examples repository](https://github.com/crewAIInc/crewAI-examples), including trip planners, stock analysis tools, and more.

### Q: How can I contribute to CrewAI?
A: Contributions are welcome! You can fork the repository, create a new branch for your feature, add your improvement, and send a pull request. Check the Contribution section in the README for more details.
Binary file not shown (removed; before: 1.0 MiB).
Binary file not shown (removed; before: 810 KiB).
docs/assets/langtrace1.png: new binary file (223 KiB), not shown.
docs/assets/langtrace2.png: new binary file (204 KiB), not shown.
docs/assets/langtrace3.png: new binary file (295 KiB), not shown.
@@ -32,8 +32,8 @@ Each input creates its own run, flowing through all stages of the pipeline. Mult

## Pipeline Attributes

| Attribute  | Parameters | Description                                                                                      |
| :--------- | :--------- | :----------------------------------------------------------------------------------------------- |
| **Stages** | `stages`   | A list of crews, lists of crews, or routers representing the stages to be executed in sequence.   |

## Creating a Pipeline
@@ -239,7 +239,7 @@ email_router = Router(
            pipeline=normal_pipeline
        )
    },
    default=Pipeline(stages=[normal_pipeline])  # Default to just classification if no urgency score
    default=Pipeline(stages=[normal_pipeline])  # Default to just normal if no urgency score
)

# Use the router in a main pipeline

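To ground the `stages` attribute, here is a minimal construction sketch. The crew below is a placeholder, and the exact kickoff semantics of `Pipeline` may differ by version; treat this as illustrative only:

```python
from crewai import Agent, Crew, Pipeline, Task

# A placeholder single-crew stage; real pipelines would chain several stages.
researcher = Agent(
    role="Researcher",
    goal="Summarize a topic",
    backstory="A concise analyst.",
)
research_task = Task(
    description="Summarize the latest findings on {topic}",
    expected_output="A short bullet-point summary",
    agent=researcher,
)
research_crew = Crew(agents=[researcher], tasks=[research_task])

# Stages execute in sequence; a list of crews within one stage runs in parallel.
pipeline = Pipeline(stages=[research_crew])
```
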
@@ -131,6 +131,7 @@ research_agent = Agent(
    verbose=True
)

# To perform a semantic search for a specified query from a text's content across the internet
search_tool = SerperDevTool()

task = Task(
@@ -312,4 +313,4 @@ save_output_task = Task(

## Conclusion

Tasks are the driving force behind the actions of agents in crewAI. By properly defining tasks and their outcomes, you set the stage for your AI agents to work effectively, either independently or as a collaborative unit. Equipping tasks with appropriate tools, understanding the execution process, and following robust validation practices are crucial for maximizing CrewAI's potential, ensuring agents are effectively prepared for their assignments and that tasks are executed as intended.

@@ -17,40 +17,12 @@ Before we start, there are a couple of things to note:
Before getting started with CrewAI, make sure that you have installed it via pip:

```shell
$ pip install crewai crewai-tools
$ pip install 'crewai[tools]'
```

### Virtual Environments
It is highly recommended that you use virtual environments to ensure that your CrewAI project is isolated from other projects and dependencies. Virtual environments provide a clean, separate workspace for each project, preventing conflicts between different versions of packages and libraries. This isolation is crucial for maintaining consistency and reproducibility in your development process. You have multiple options for setting up virtual environments depending on your operating system and Python version:

1. Use venv (Python's built-in virtual environment tool):
   venv is included with Python 3.3 and later, making it a convenient choice for many developers. It's lightweight and easy to use, perfect for simple project setups (see the minimal example after this list).

   To set up virtual environments with venv, refer to the official [Python documentation](https://docs.python.org/3/tutorial/venv.html).

2. Use Conda (A Python virtual environment manager):
   Conda is an open-source package manager and environment management system for Python. It's widely used by data scientists, developers, and researchers to manage dependencies and environments in a reproducible way.

   To set up virtual environments with Conda, refer to the official [Conda documentation](https://docs.conda.io/projects/conda/en/stable/user-guide/getting-started.html).

3. Use Poetry (A Python package manager and dependency management tool):
   Poetry is an open-source Python package manager that simplifies the installation of packages and their dependencies. Poetry offers a convenient way to manage virtual environments and dependencies.
   Poetry is CrewAI's preferred tool for package and dependency management.

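As a minimal example for option 1 (venv), the usual flow looks like this; the activation command differs on Windows (`.venv\Scripts\activate`):

```shell
python -m venv .venv          # create the environment
source .venv/bin/activate     # activate it (macOS/Linux)
pip install 'crewai[tools]'   # install CrewAI inside the isolated environment
```
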
### Code IDEs

Most users of CrewAI build their Crews in a code editor or Integrated Development Environment (IDE). You can use any IDE of your choice; some popular options are:

- [Visual Studio Code](https://code.visualstudio.com/) - Most popular
- [PyCharm](https://www.jetbrains.com/pycharm/)
- [Cursor AI](https://cursor.com)

Pick one that suits your style and needs.

## Creating a New Project
In this example, we will be using Venv as our virtual environment manager.
In this example, we will be using poetry as our virtual environment manager.

To set up a virtual environment, run the following CLI command:
To create a new CrewAI project, run the following CLI command:

```shell
@@ -4,9 +4,11 @@ description: Kickoff a Crew Asynchronously
---

## Introduction

CrewAI provides the ability to kick off a crew asynchronously, allowing you to start the crew execution in a non-blocking manner. This feature is particularly useful when you want to run multiple crews concurrently or when you need to perform other tasks while the crew is executing.

## Asynchronous Crew Execution

To kick off a crew asynchronously, use the `kickoff_async()` method. This method initiates the crew execution in a separate thread, allowing the main thread to continue executing other tasks.

### Method Signature
@@ -23,10 +25,20 @@ def kickoff_async(self, inputs: dict) -> CrewOutput:

- `CrewOutput`: An object representing the result of the crew execution.

## Potential Use Cases

- **Parallel Content Generation**: Kick off multiple independent crews asynchronously, each responsible for generating content on different topics. For example, one crew might research and draft an article on AI trends, while another crew generates social media posts about a new product launch. Each crew operates independently, allowing content production to scale efficiently.

- **Concurrent Market Research Tasks**: Launch multiple crews asynchronously to conduct market research in parallel. One crew might analyze industry trends, while another examines competitor strategies, and yet another evaluates consumer sentiment. Each crew independently completes its task, enabling faster and more comprehensive insights.

- **Independent Travel Planning Modules**: Execute separate crews to independently plan different aspects of a trip. One crew might handle flight options, another handles accommodation, and a third plans activities. Each crew works asynchronously, allowing various components of the trip to be planned simultaneously and independently for faster results.

## Example: Single Asynchronous Crew Execution

Here's an example of how to kick off a crew asynchronously using asyncio and awaiting the result:

```python
import asyncio
from crewai import Crew, Agent, Task

# Create an agent with code execution enabled
@@ -49,6 +61,57 @@ analysis_crew = Crew(
    tasks=[data_analysis_task]
)

# Async function to kickoff the crew asynchronously
async def async_crew_execution():
    result = await analysis_crew.kickoff_async(inputs={"ages": [25, 30, 35, 40, 45]})
    print("Crew Result:", result)

# Run the async function
asyncio.run(async_crew_execution())
```

## Example: Multiple Asynchronous Crew Executions

In this example, we'll show how to kick off multiple crews asynchronously and wait for all of them to complete using `asyncio.gather()`:

```python
import asyncio
from crewai import Crew, Agent, Task

# Create an agent with code execution enabled
coding_agent = Agent(
    role="Python Data Analyst",
    goal="Analyze data and provide insights using Python",
    backstory="You are an experienced data analyst with strong Python skills.",
    allow_code_execution=True
)

# Create tasks that require code execution
task_1 = Task(
    description="Analyze the first dataset and calculate the average age of participants. Ages: {ages}",
    agent=coding_agent
)

task_2 = Task(
    description="Analyze the second dataset and calculate the average age of participants. Ages: {ages}",
    agent=coding_agent
)

# Create two crews and add tasks
crew_1 = Crew(agents=[coding_agent], tasks=[task_1])
crew_2 = Crew(agents=[coding_agent], tasks=[task_2])

# Async function to kickoff multiple crews asynchronously and wait for all to finish
async def async_multiple_crews():
    result_1 = crew_1.kickoff_async(inputs={"ages": [25, 30, 35, 40, 45]})
    result_2 = crew_2.kickoff_async(inputs={"ages": [20, 22, 24, 28, 30]})

    # Wait for both crews to finish
    results = await asyncio.gather(result_1, result_2)

    for i, result in enumerate(results, 1):
        print(f"Crew {i} Result:", result)

# Run the async function
asyncio.run(async_multiple_crews())
```

@@ -112,30 +112,30 @@ Switch between APIs and models seamlessly using environment variables, supportin
### Configuration Examples
#### FastChat
```sh
os.environ[OPENAI_API_BASE]="http://localhost:8001/v1"
os.environ[OPENAI_MODEL_NAME]='oh-2.5m7b-q51'
os.environ[OPENAI_API_KEY]=NA
os.environ["OPENAI_API_BASE"]='http://localhost:8001/v1'
os.environ["OPENAI_MODEL_NAME"]='oh-2.5m7b-q51'
os.environ["OPENAI_API_KEY"]='NA'
```

#### LM Studio
Launch [LM Studio](https://lmstudio.ai) and go to the Server tab. Then select a model from the dropdown menu and wait for it to load. Once it's loaded, click the green Start Server button and use the URL, port, and API key that's shown (you can modify them). Below is an example of the default settings as of LM Studio 0.2.19:
```sh
os.environ[OPENAI_API_BASE]="http://localhost:1234/v1"
os.environ[OPENAI_API_KEY]="lm-studio"
os.environ["OPENAI_API_BASE"]='http://localhost:1234/v1'
os.environ["OPENAI_API_KEY"]='lm-studio'
```

#### Groq API
```sh
os.environ[OPENAI_API_KEY]=your-groq-api-key
os.environ[OPENAI_MODEL_NAME]='llama3-8b-8192'
os.environ[OPENAI_API_BASE]=https://api.groq.com/openai/v1
os.environ["OPENAI_API_KEY"]='your-groq-api-key'
os.environ["OPENAI_MODEL_NAME"]='llama3-8b-8192'
os.environ["OPENAI_API_BASE"]='https://api.groq.com/openai/v1'
```

#### Mistral API
```sh
os.environ[OPENAI_API_KEY]=your-mistral-api-key
os.environ[OPENAI_API_BASE]=https://api.mistral.ai/v1
os.environ[OPENAI_MODEL_NAME]="mistral-small"
os.environ["OPENAI_API_KEY"]='your-mistral-api-key'
os.environ["OPENAI_API_BASE"]='https://api.mistral.ai/v1'
os.environ["OPENAI_MODEL_NAME"]='mistral-small'
```

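Note that these blocks are Python statements despite the `sh` fences. Once the variables are set, agents created without an explicit `llm` should fall back to the OpenAI-compatible endpoint and model named in the environment; a hedged sketch (values illustrative):

```python
import os

# Illustrative local endpoint; substitute any of the configurations above.
os.environ["OPENAI_API_BASE"] = "http://localhost:1234/v1"
os.environ["OPENAI_MODEL_NAME"] = "local-model"
os.environ["OPENAI_API_KEY"] = "NA"

from crewai import Agent

# No llm argument: the agent uses the environment-configured default model.
agent = Agent(
    role="Researcher",
    goal="Summarize a topic",
    backstory="A concise analyst.",
)
```
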
### Solar
@@ -145,17 +145,16 @@ from langchain_community.chat_models.solar import SolarChat
```sh
os.environ[SOLAR_API_BASE]="https://api.upstage.ai/v1/solar"
os.environ[SOLAR_API_KEY]="your-solar-api-key"

# Free developer API key available here: https://console.upstage.ai/services/solar
# Langchain Example: https://github.com/langchain-ai/langchain/pull/18556
```

### Cohere
```python
from langchain_cohere import ChatCohere
# Initialize language model
os.environ["COHERE_API_KEY"] = "your-cohere-api-key"
os.environ["COHERE_API_KEY"]='your-cohere-api-key'
llm = ChatCohere()

# Free developer API key available here: https://cohere.com/
```
@@ -166,10 +165,10 @@ llm = ChatCohere()

For Azure OpenAI API integration, set the following environment variables:
```sh
os.environ[AZURE_OPENAI_DEPLOYMENT] = "Your deployment"
os.environ["OPENAI_API_VERSION"] = "2023-12-01-preview"
os.environ["AZURE_OPENAI_ENDPOINT"] = "Your Endpoint"
os.environ["AZURE_OPENAI_API_KEY"] = "<Your API Key>"
os.environ["AZURE_OPENAI_DEPLOYMENT"]='Your deployment'
os.environ["OPENAI_API_VERSION"]='2023-12-01-preview'
os.environ["AZURE_OPENAI_ENDPOINT"]='Your Endpoint'
os.environ["AZURE_OPENAI_API_KEY"]='Your API Key'
```

### Example Agent with Azure LLM

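The example under this heading is cut off in the diff. A rough sketch of what such an agent can look like, assuming the `langchain_openai` package and the environment variables set above:

```python
import os

from crewai import Agent
from langchain_openai import AzureChatOpenAI

# Bind an LLM to the Azure deployment configured via the variables above.
azure_llm = AzureChatOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
)

azure_agent = Agent(
    role="Example Agent",
    goal="Demonstrate an Azure-hosted LLM",
    backstory="An illustrative agent configuration.",
    llm=azure_llm,
)
```
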
@@ -7,10 +7,14 @@ description: How to monitor cost, latency, and performance of CrewAI Agents usin

Langtrace is an open-source, external tool that helps you set up observability and evaluations for Large Language Models (LLMs), LLM frameworks, and Vector Databases. While not built directly into CrewAI, Langtrace can be used alongside CrewAI to gain deep visibility into the cost, latency, and performance of your CrewAI Agents. This integration allows you to log hyperparameters, monitor performance regressions, and establish a process for continuous improvement of your Agents.

![langtrace](../assets/langtrace1.png)
![langtrace](../assets/langtrace2.png)
![langtrace](../assets/langtrace3.png)

## Setup Instructions

1. Sign up for [Langtrace](https://langtrace.ai/) by visiting [https://langtrace.ai/signup](https://langtrace.ai/signup).
2. Create a project and generate an API key.
2. Create a project, set the project type to crewAI & generate an API key.
3. Install Langtrace in your CrewAI project using the following commands:

```bash
@@ -32,58 +36,29 @@ langtrace.init(api_key='<LANGTRACE_API_KEY>')
from crewai import Agent, Task, Crew
```

2. Create your CrewAI agents and tasks as usual.

3. Use Langtrace's tracking functions to monitor your CrewAI operations. For example:

```python
with langtrace.trace("CrewAI Task Execution"):
    result = crew.kickoff()
```

### Features and Their Application to CrewAI

1. **LLM Token and Cost Tracking**

   - Monitor the token usage and associated costs for each CrewAI agent interaction.
   - Example:
     ```python
     with langtrace.trace("Agent Interaction"):
         agent_response = agent.execute(task)
     ```

2. **Trace Graph for Execution Steps**

   - Visualize the execution flow of your CrewAI tasks, including latency and logs.
   - Useful for identifying bottlenecks in your agent workflows.

3. **Dataset Curation with Manual Annotation**

   - Create datasets from your CrewAI task outputs for future training or evaluation.
   - Example:
     ```python
     langtrace.log_dataset_item(task_input, agent_output, {"task_type": "research"})
     ```

4. **Prompt Versioning and Management**

   - Keep track of different versions of prompts used in your CrewAI agents.
   - Useful for A/B testing and optimizing agent performance.

5. **Prompt Playground with Model Comparisons**

   - Test and compare different prompts and models for your CrewAI agents before deployment.

6. **Testing and Evaluations**
   - Set up automated tests for your CrewAI agents and tasks.
   - Example:
     ```python
     langtrace.evaluate(agent_output, expected_output, "accuracy")
     ```

## Monitoring New CrewAI Features

CrewAI has introduced several new features that can be monitored using Langtrace:

1. **Code Execution**: Monitor the performance and output of code executed by agents.
   ```python
   with langtrace.trace("Agent Code Execution"):
       code_output = agent.execute_code(code_snippet)
   ```

2. **Third-party Agent Integration**: Track interactions with LlamaIndex, LangChain, and Autogen agents.
@@ -5,24 +5,39 @@ description: Understanding the telemetry data collected by CrewAI and how it con

## Telemetry

CrewAI utilizes anonymous telemetry to gather usage statistics with the primary goal of enhancing the library. Our focus is on improving and developing the features, integrations, and tools most utilized by our users. We don't offer a way to disable it now, but we will in the future.
!!! note "Personal Information"
    By default, we collect no data that would be considered personal information under GDPR and other privacy regulations.
    We do collect Tool's names and Agent's roles, so be advised not to include any personal information in the tool's names or the Agent's roles.
    Because no personal information is collected, it's not necessary to worry about data residency.
    When `share_crew` is enabled, additional data is collected which may contain personal information if included by the user. Users should exercise caution when enabling this feature to ensure compliance with privacy regulations.

It's pivotal to understand that **NO data is collected** concerning prompts, task descriptions, agents' backstories or goals, usage of tools, API calls, responses, any data processed by the agents, or secrets and environment variables, with the exception of the conditions mentioned. When the `share_crew` feature is enabled, detailed data including task descriptions, agents' backstories or goals, and other specific attributes are collected to provide deeper insights while respecting user privacy.
CrewAI utilizes anonymous telemetry to gather usage statistics with the primary goal of enhancing the library. Our focus is on improving and developing the features, integrations, and tools most utilized by our users.

### Data Collected Includes:
- **Version of CrewAI**: Assessing the adoption rate of our latest version helps us understand user needs and guide our updates.
- **Python Version**: Identifying the Python versions our users operate with assists in prioritizing our support efforts for these versions.
- **General OS Information**: Details like the number of CPUs and the operating system type (macOS, Windows, Linux) enable us to focus our development on the most used operating systems and explore the potential for OS-specific features.
- **Number of Agents and Tasks in a Crew**: Ensures our internal testing mirrors real-world scenarios, helping us guide users towards best practices.
- **Crew Process Utilization**: Understanding how crews are utilized aids in directing our development focus.
- **Memory and Delegation Use by Agents**: Insights into how these features are used help evaluate their effectiveness and future.
- **Task Execution Mode**: Knowing whether tasks are executed in parallel or sequentially influences our emphasis on enhancing parallel execution capabilities.
- **Language Model Utilization**: Supports our goal to improve support for the most popular language models among our users.
- **Roles of Agents within a Crew**: Understanding the various roles agents play aids in crafting better tools, integrations, and examples.
- **Tool Usage**: Identifying which tools are most frequently used allows us to prioritize improvements in those areas.

It's pivotal to understand that by default, **NO personal data is collected** concerning prompts, task descriptions, agents' backstories or goals, usage of tools, API calls, responses, any data processed by the agents, or secrets and environment variables.
When the `share_crew` feature is enabled, detailed data including task descriptions, agents' backstories or goals, and other specific attributes are collected to provide deeper insights. This expanded data collection may include personal information if users have incorporated it into their crews or tasks. Users should carefully consider the content of their crews and tasks before enabling `share_crew`. Users can disable telemetry by setting the environment variable `OTEL_SDK_DISABLED` to `true`.

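For example, to opt out before starting your application:

```sh
export OTEL_SDK_DISABLED=true
```
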
### Data Explanation:
| Defaulted | Data | Reason and Specifics |
|-----------|-------------------------------------------|-----------------------------------------------------------------------------------------------------------------------------|
| Yes | CrewAI and Python Version | Tracks software versions. Example: CrewAI v1.2.3, Python 3.8.10. No personal data. |
| Yes | Crew Metadata | Includes: randomly generated key and ID, process type (e.g., 'sequential', 'parallel'), boolean flag for memory usage (true/false), count of tasks, count of agents. All non-personal. |
| Yes | Agent Data | Includes: randomly generated key and ID, role name (should not include personal info), boolean settings (verbose, delegation enabled, code execution allowed), max iterations, max RPM, max retry limit, LLM info (see LLM Attributes), list of tool names (should not include personal info). No personal data. |
| Yes | Task Metadata | Includes: randomly generated key and ID, boolean execution settings (async_execution, human_input), associated agent's role and key, list of tool names. All non-personal. |
| Yes | Tool Usage Statistics | Includes: tool name (should not include personal info), number of usage attempts (integer), LLM attributes used. No personal data. |
| Yes | Test Execution Data | Includes: crew's randomly generated key and ID, number of iterations, model name used, quality score (float), execution time (in seconds). All non-personal. |
| Yes | Task Lifecycle Data | Includes: creation and execution start/end times, crew and task identifiers. Stored as spans with timestamps. No personal data. |
| Yes | LLM Attributes | Includes: name, model_name, model, top_k, temperature, and class name of the LLM. All technical, non-personal data. |
| Yes | Crew Deployment attempt using crewAI CLI | Includes: the fact that a deploy is being made, the crew ID, and whether logs are being pulled; no other data. |
| No | Agent's Expanded Data | Includes: goal description, backstory text, i18n prompt file identifier. Users should ensure no personal info is included in text fields. |
| No | Detailed Task Information | Includes: task description, expected output description, context references. Users should ensure no personal info is included in these fields. |
| No | Environment Information | Includes: platform, release, system, version, and CPU count. Example: 'Windows 10', 'x86_64'. No personal data. |
| No | Crew and Task Inputs and Outputs | Includes: input parameters and output results as non-identifiable data. Users should ensure no personal info is included. |
| No | Comprehensive Crew Execution Data | Includes: detailed logs of crew operations, all agents and tasks data, final output. All non-personal and technical in nature. |

Note: "No" in the "Defaulted" column indicates that this data is only collected when `share_crew` is set to `true`.

### Opt-In Further Telemetry Sharing
Users can choose to share their complete telemetry data by enabling the `share_crew` attribute to `True` in their crew configurations. Enabling `share_crew` results in the collection of detailed crew and task execution data, including `goal`, `backstory`, `context`, and `output` of tasks. This enables a deeper insight into usage patterns while respecting the user's choice to share.
Users can choose to share their complete telemetry data by enabling the `share_crew` attribute to `True` in their crew configurations. Enabling `share_crew` results in the collection of detailed crew and task execution data, including `goal`, `backstory`, `context`, and `output` of tasks. This enables a deeper insight into usage patterns.

### Updates and Revisions
We are committed to maintaining the accuracy and transparency of our documentation. Regular reviews and updates are performed to ensure our documentation accurately reflects the latest developments of our codebase and telemetry practices. Users are encouraged to review this section for the most current information on our data collection practices and how they contribute to the improvement of CrewAI.

!!! warning "Potential Personal Information"
    If you enable `share_crew`, the collected data may include personal information if it has been incorporated into crew configurations, task descriptions, or outputs. Users should carefully review their data and ensure compliance with GDPR and other applicable privacy regulations before enabling this feature.
||||
@@ -2,8 +2,8 @@ site_name: crewAI
|
||||
site_author: crewAI, Inc
|
||||
site_description: Cutting-edge framework for orchestrating role-playing, autonomous AI agents. By fostering collaborative intelligence, CrewAI empowers agents to work together seamlessly, tackling complex tasks.
|
||||
repo_name: crewAI
|
||||
repo_url: https://github.com/joaomdmoura/crewai/
|
||||
site_url: https://crewai.com
|
||||
repo_url: https://github.com/crewAIInc/crewAI
|
||||
site_url: https://docs.crewai.com
|
||||
edit_uri: edit/main/docs/
|
||||
copyright: Copyright © 2024 crewAI, Inc
|
||||
|
||||
|
||||
poetry.lock (generated, 2569 changes): diff suppressed because it is too large.
@@ -1,6 +1,6 @@
[tool.poetry]
name = "crewai"
version = "0.51.1"
version = "0.55.2"
description = "Cutting-edge framework for orchestrating role-playing, autonomous AI agents. By fostering collaborative intelligence, CrewAI empowers agents to work together seamlessly, tackling complex tasks."
authors = ["Joao Moura <joao@crewai.com>"]
readme = "README.md"
@@ -8,8 +8,8 @@ packages = [{ include = "crewai", from = "src" }]

[tool.poetry.urls]
Homepage = "https://crewai.com"
Documentation = "https://github.com/joaomdmoura/CrewAI/wiki/Index"
Repository = "https://github.com/joaomdmoura/crewai"
Documentation = "https://docs.crewai.com"
Repository = "https://github.com/crewAIInc/crewAI"

[tool.poetry.dependencies]
python = ">=3.10,<=3.13"
@@ -20,8 +20,8 @@ opentelemetry-api = "^1.22.0"
opentelemetry-sdk = "^1.22.0"
opentelemetry-exporter-otlp-proto-http = "^1.22.0"
instructor = "1.3.3"
regex = "^2023.12.25"
crewai-tools = { version = "^0.8.3", optional = true }
regex = "^2024.7.24"
crewai-tools = { version = "^0.12.0", optional = true }
click = "^8.1.7"
python-dotenv = "^1.0.0"
appdirs = "^1.4.4"
@@ -29,6 +29,7 @@ jsonref = "^1.1.0"
agentops = { version = "^0.3.0", optional = true }
embedchain = "^0.1.114"
json-repair = "^0.25.2"
auth0-python = "^4.7.1"

[tool.poetry.extras]
tools = ["crewai-tools"]
@@ -46,7 +47,7 @@ mkdocs-material = { extras = ["imaging"], version = "^9.5.7" }
mkdocs-material-extensions = "^1.3.1"
pillow = "^10.2.0"
cairosvg = "^2.7.1"
crewai-tools = "^0.8.3"
crewai-tools = "^0.12.0"

[tool.poetry.group.test.dependencies]
pytest = "^8.0.0"

@@ -2,6 +2,7 @@ from crewai.agent import Agent
from crewai.crew import Crew
from crewai.pipeline import Pipeline
from crewai.process import Process
from crewai.routers import Router
from crewai.task import Task

__all__ = ["Agent", "Crew", "Process", "Task", "Pipeline"]
__all__ = ["Agent", "Crew", "Process", "Task", "Pipeline", "Router"]

@@ -117,16 +117,21 @@ class Agent(BaseAgent):
    def post_init_setup(self):
        self.agent_ops_agent_name = self.role

        if hasattr(self.llm, "model_name"):
            self._setup_llm_callbacks()
        # Different llms store the model name in different attributes
        model_name = getattr(self.llm, "model_name", None) or getattr(
            self.llm, "deployment_name", None
        )

        if model_name:
            self._setup_llm_callbacks(model_name)

        if not self.agent_executor:
            self._setup_agent_executor()

        return self

    def _setup_llm_callbacks(self):
        token_handler = TokenCalcHandler(self.llm.model_name, self._token_process)
    def _setup_llm_callbacks(self, model_name: str):
        token_handler = TokenCalcHandler(model_name, self._token_process)

        if not isinstance(self.llm.callbacks, list):
            self.llm.callbacks = []

@@ -2,3 +2,5 @@ from .cache.cache_handler import CacheHandler
from .executor import CrewAgentExecutor
from .parser import CrewAgentParser
from .tools_handler import ToolsHandler

__all__ = ["CacheHandler", "CrewAgentExecutor", "CrewAgentParser", "ToolsHandler"]

src/crewai/agents/cache/__init__.py (vendored, 2 changes)
@@ -1 +1,3 @@
from .cache_handler import CacheHandler

__all__ = ["CacheHandler"]

src/crewai/cli/authentication/__init__.py (new file, 3 lines)
@@ -0,0 +1,3 @@
from .main import AuthenticationCommand

__all__ = ["AuthenticationCommand"]

src/crewai/cli/authentication/constants.py (new file, 4 lines)
@@ -0,0 +1,4 @@
ALGORITHMS = ["RS256"]
AUTH0_DOMAIN = "crewai.us.auth0.com"
AUTH0_CLIENT_ID = "DEVC5Fw6NlRoSzmDCcOhVq85EfLBjKa8"
AUTH0_AUDIENCE = "https://crewai.us.auth0.com/api/v2/"

src/crewai/cli/authentication/main.py (new file, 75 lines)
@@ -0,0 +1,75 @@
import time
import webbrowser
from typing import Any, Dict

import requests
from rich.console import Console

from .constants import AUTH0_AUDIENCE, AUTH0_CLIENT_ID, AUTH0_DOMAIN
from .utils import TokenManager, validate_token

console = Console()


class AuthenticationCommand:
    DEVICE_CODE_URL = f"https://{AUTH0_DOMAIN}/oauth/device/code"
    TOKEN_URL = f"https://{AUTH0_DOMAIN}/oauth/token"

    def __init__(self):
        self.token_manager = TokenManager()

    def signup(self) -> None:
        """Sign up to CrewAI+"""
        console.print("Signing Up to CrewAI+ \n", style="bold blue")
        device_code_data = self._get_device_code()
        self._display_auth_instructions(device_code_data)

        return self._poll_for_token(device_code_data)

    def _get_device_code(self) -> Dict[str, Any]:
        """Get the device code to authenticate the user."""

        device_code_payload = {
            "client_id": AUTH0_CLIENT_ID,
            "scope": "openid",
            "audience": AUTH0_AUDIENCE,
        }
        response = requests.post(url=self.DEVICE_CODE_URL, data=device_code_payload)
        response.raise_for_status()
        return response.json()

    def _display_auth_instructions(self, device_code_data: Dict[str, str]) -> None:
        """Display the authentication instructions to the user."""
        console.print("1. Navigate to: ", device_code_data["verification_uri_complete"])
        console.print("2. Enter the following code: ", device_code_data["user_code"])
        webbrowser.open(device_code_data["verification_uri_complete"])

    def _poll_for_token(self, device_code_data: Dict[str, Any]) -> None:
        """Poll the server for the token."""
        token_payload = {
            "grant_type": "urn:ietf:params:oauth:grant-type:device_code",
            "device_code": device_code_data["device_code"],
            "client_id": AUTH0_CLIENT_ID,
        }

        attempts = 0
        while attempts < 5:
            response = requests.post(self.TOKEN_URL, data=token_payload)
            token_data = response.json()

            if response.status_code == 200:
                validate_token(token_data["id_token"])
                expires_in = 360000  # Token expiration time in seconds
                self.token_manager.save_tokens(token_data["access_token"], expires_in)
                console.print("\nWelcome to CrewAI+ !!", style="green")
                return

            if token_data["error"] not in ("authorization_pending", "slow_down"):
                raise requests.HTTPError(token_data["error_description"])

            time.sleep(device_code_data["interval"])
            attempts += 1

        console.print(
            "Timeout: Failed to get the token. Please try again.", style="bold red"
        )
src/crewai/cli/authentication/utils.py (new file, 144 lines)
@@ -0,0 +1,144 @@
import json
import os
import sys
from datetime import datetime, timedelta
from pathlib import Path
from typing import Optional

from auth0.authentication.token_verifier import (
    AsymmetricSignatureVerifier,
    TokenVerifier,
)
from cryptography.fernet import Fernet

from .constants import AUTH0_CLIENT_ID, AUTH0_DOMAIN


def validate_token(id_token: str) -> None:
    """
    Verify the token and its precedence

    :param id_token:
    """
    jwks_url = f"https://{AUTH0_DOMAIN}/.well-known/jwks.json"
    issuer = f"https://{AUTH0_DOMAIN}/"
    signature_verifier = AsymmetricSignatureVerifier(jwks_url)
    token_verifier = TokenVerifier(
        signature_verifier=signature_verifier, issuer=issuer, audience=AUTH0_CLIENT_ID
    )
    token_verifier.verify(id_token)


class TokenManager:
    def __init__(self, file_path: str = "tokens.enc") -> None:
        """
        Initialize the TokenManager class.

        :param file_path: The file path to store the encrypted tokens. Default is "tokens.enc".
        """
        self.file_path = file_path
        self.key = self._get_or_create_key()
        self.fernet = Fernet(self.key)

    def _get_or_create_key(self) -> bytes:
        """
        Get or create the encryption key.

        :return: The encryption key.
        """
        key_filename = "secret.key"
        key = self.read_secure_file(key_filename)

        if key is not None:
            return key

        new_key = Fernet.generate_key()
        self.save_secure_file(key_filename, new_key)
        return new_key

    def save_tokens(self, access_token: str, expires_in: int) -> None:
        """
        Save the access token and its expiration time.

        :param access_token: The access token to save.
        :param expires_in: The expiration time of the access token in seconds.
        """
        expiration_time = datetime.now() + timedelta(seconds=expires_in)
        data = {
            "access_token": access_token,
            "expiration": expiration_time.isoformat(),
        }
        encrypted_data = self.fernet.encrypt(json.dumps(data).encode())
        self.save_secure_file(self.file_path, encrypted_data)

    def get_token(self) -> Optional[str]:
        """
        Get the access token if it is valid and not expired.

        :return: The access token if valid and not expired, otherwise None.
        """
        encrypted_data = self.read_secure_file(self.file_path)
        if encrypted_data is None:
            # No token has been saved yet, so there is nothing to decrypt.
            return None

        decrypted_data = self.fernet.decrypt(encrypted_data)
        data = json.loads(decrypted_data)

        expiration = datetime.fromisoformat(data["expiration"])
        if expiration <= datetime.now():
            return None

        return data["access_token"]

    def get_secure_storage_path(self) -> Path:
        """
        Get the secure storage path based on the operating system.

        :return: The secure storage path.
        """
        if sys.platform == "win32":
            # Windows: Use %LOCALAPPDATA%
            base_path = os.environ.get("LOCALAPPDATA")
        elif sys.platform == "darwin":
            # macOS: Use ~/Library/Application Support
            base_path = os.path.expanduser("~/Library/Application Support")
        else:
            # Linux and other Unix-like: Use ~/.local/share
            base_path = os.path.expanduser("~/.local/share")

        app_name = "crewai/credentials"
        storage_path = Path(base_path) / app_name

        storage_path.mkdir(parents=True, exist_ok=True)

        return storage_path

    def save_secure_file(self, filename: str, content: bytes) -> None:
        """
        Save the content to a secure file.

        :param filename: The name of the file.
        :param content: The content to save.
        """
        storage_path = self.get_secure_storage_path()
        file_path = storage_path / filename

        with open(file_path, "wb") as f:
            f.write(content)

        # Set appropriate permissions (read/write for owner only)
        os.chmod(file_path, 0o600)

    def read_secure_file(self, filename: str) -> Optional[bytes]:
        """
        Read the content of a secure file.

        :param filename: The name of the file.
        :return: The content of the file if it exists, otherwise None.
        """
        storage_path = self.get_secure_storage_path()
        file_path = storage_path / filename

        if not file_path.exists():
            return None

        with open(file_path, "rb") as f:
            return f.read()
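A small usage sketch of the class above (the token value is illustrative; files land under the per-OS path returned by `get_secure_storage_path`):

```python
from crewai.cli.authentication.utils import TokenManager

manager = TokenManager()

# Persist an illustrative access token for one hour.
manager.save_tokens("example-access-token", expires_in=3600)

# Returns the token while it is unexpired, otherwise None.
print(manager.get_token())
```
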
@@ -1,3 +1,5 @@
from typing import Optional

import click
import pkg_resources

@@ -7,6 +9,8 @@ from crewai.memory.storage.kickoff_task_outputs_storage import (
    KickoffTaskOutputsSQLiteStorage,
)

from .authentication.main import AuthenticationCommand
from .deploy.main import DeployCommand
from .evaluate_crew import evaluate_crew
from .install_crew import install_crew
from .replay_from_task import replay_task_command
@@ -179,5 +183,71 @@ def run():
    run_crew()


@crewai.command()
def signup():
    """Sign Up/Login to CrewAI+."""
    AuthenticationCommand().signup()


@crewai.command()
def login():
    """Sign Up/Login to CrewAI+."""
    AuthenticationCommand().signup()


# DEPLOY CREWAI+ COMMANDS
@crewai.group()
def deploy():
    """Deploy the Crew CLI group."""
    pass


@deploy.command(name="create")
@click.option("-y", "--yes", is_flag=True, help="Skip the confirmation prompt")
def deploy_create(yes: bool):
    """Create a Crew deployment."""
    deploy_cmd = DeployCommand()
    deploy_cmd.create_crew(yes)


@deploy.command(name="list")
def deploy_list():
    """List all deployments."""
    deploy_cmd = DeployCommand()
    deploy_cmd.list_crews()


@deploy.command(name="push")
@click.option("-u", "--uuid", type=str, help="Crew UUID parameter")
def deploy_push(uuid: Optional[str]):
    """Deploy the Crew."""
    deploy_cmd = DeployCommand()
    deploy_cmd.deploy(uuid=uuid)


@deploy.command(name="status")
@click.option("-u", "--uuid", type=str, help="Crew UUID parameter")
def deploy_status(uuid: Optional[str]):
    """Get the status of a deployment."""
    deploy_cmd = DeployCommand()
    deploy_cmd.get_crew_status(uuid=uuid)


@deploy.command(name="logs")
@click.option("-u", "--uuid", type=str, help="Crew UUID parameter")
def deploy_logs(uuid: Optional[str]):
    """Get the logs of a deployment."""
    deploy_cmd = DeployCommand()
    deploy_cmd.get_crew_logs(uuid=uuid)


@deploy.command(name="remove")
@click.option("-u", "--uuid", type=str, help="Crew UUID parameter")
def deploy_remove(uuid: Optional[str]):
    """Remove a deployment."""
    deploy_cmd = DeployCommand()
    deploy_cmd.remove_crew(uuid=uuid)


if __name__ == "__main__":
    crewai()

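Taken together, the new commands support a flow roughly like the following (a sketch; flags and behavior follow the code above, and the UUID is a placeholder):

```shell
crewai signup                              # device-code login to CrewAI+
crewai deploy create                       # register the current project
crewai deploy push --uuid "<crew-uuid>"    # deploy it
crewai deploy status --uuid "<crew-uuid>"  # check progress
crewai deploy logs --uuid "<crew-uuid>"    # fetch deployment logs
crewai deploy remove --uuid "<crew-uuid>"  # tear it down
```
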
src/crewai/cli/deploy/__init__.py (new file, empty)

src/crewai/cli/deploy/api.py (new file, 66 lines)
@@ -0,0 +1,66 @@
from os import getenv

import requests

from crewai.cli.deploy.utils import get_crewai_version


class CrewAPI:
    """
    CrewAPI class to interact with the crewAI+ API.
    """

    def __init__(self, api_key: str) -> None:
        self.api_key = api_key
        self.headers = {
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
            "User-Agent": f"CrewAI-CLI/{get_crewai_version()}",
        }
        self.base_url = getenv(
            "CREWAI_BASE_URL", "https://crewai.com/crewai_plus/api/v1/crews"
        )

    def _make_request(self, method: str, endpoint: str, **kwargs) -> requests.Response:
        url = f"{self.base_url}/{endpoint}"
        return requests.request(method, url, headers=self.headers, **kwargs)

    # Deploy
    def deploy_by_name(self, project_name: str) -> requests.Response:
        return self._make_request("POST", f"by-name/{project_name}/deploy")

    def deploy_by_uuid(self, uuid: str) -> requests.Response:
        return self._make_request("POST", f"{uuid}/deploy")

    # Status
    def status_by_name(self, project_name: str) -> requests.Response:
        return self._make_request("GET", f"by-name/{project_name}/status")

    def status_by_uuid(self, uuid: str) -> requests.Response:
        return self._make_request("GET", f"{uuid}/status")

    # Logs
    def logs_by_name(
        self, project_name: str, log_type: str = "deployment"
    ) -> requests.Response:
        return self._make_request("GET", f"by-name/{project_name}/logs/{log_type}")

    def logs_by_uuid(
        self, uuid: str, log_type: str = "deployment"
    ) -> requests.Response:
        return self._make_request("GET", f"{uuid}/logs/{log_type}")

    # Delete
    def delete_by_name(self, project_name: str) -> requests.Response:
        return self._make_request("DELETE", f"by-name/{project_name}")

    def delete_by_uuid(self, uuid: str) -> requests.Response:
        return self._make_request("DELETE", f"{uuid}")

    # List
    def list_crews(self) -> requests.Response:
        return self._make_request("GET", "")

    # Create
    def create_crew(self, payload) -> requests.Response:
        return self._make_request("POST", "", json=payload)
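Calling the client directly looks roughly like this (the token would normally come from `get_auth_token()`; values illustrative):

```python
from crewai.cli.deploy.api import CrewAPI

client = CrewAPI(api_key="example-access-token")
response = client.status_by_name("my_crew_project")
print(response.status_code, response.json())
```
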
318
src/crewai/cli/deploy/main.py
Normal file
318
src/crewai/cli/deploy/main.py
Normal file
@@ -0,0 +1,318 @@
|
||||
from typing import Any, Dict, List, Optional

from rich.console import Console

from crewai.telemetry import Telemetry

from .api import CrewAPI
from .utils import (
    fetch_and_json_env_file,
    get_auth_token,
    get_git_remote_url,
    get_project_name,
)

console = Console()


class DeployCommand:
    """
    A class to handle deployment-related operations for CrewAI projects.
    """

    def __init__(self):
        """
        Initialize the DeployCommand with project name and API client.
        """
        try:
            self._telemetry = Telemetry()
            self._telemetry.set_tracer()
            access_token = get_auth_token()
        except Exception:
            self._deploy_signup_error_span = self._telemetry.deploy_signup_error_span()
            console.print(
                "Please sign up/login to CrewAI+ before using the CLI.",
                style="bold red",
            )
            console.print("Run 'crewai signup' to sign up/login.", style="bold green")
            raise SystemExit

        self.project_name = get_project_name()
        if self.project_name is None:
            console.print(
                "No project name found. Please ensure your project has a valid pyproject.toml file.",
                style="bold red",
            )
            raise SystemExit

        self.client = CrewAPI(api_key=access_token)

    def _handle_error(self, json_response: Dict[str, Any]) -> None:
        """
        Handle and display error messages from API responses.

        Args:
            json_response (Dict[str, Any]): The JSON response containing error information.
        """
        error = json_response.get("error", "Unknown error")
        message = json_response.get("message", "No message provided")
        console.print(f"Error: {error}", style="bold red")
        console.print(f"Message: {message}", style="bold red")

    def _standard_no_param_error_message(self) -> None:
        """
        Display a standard error message when no UUID or project name is available.
        """
        console.print(
            "No UUID provided, project pyproject.toml not found or with error.",
            style="bold red",
        )

    def _display_deployment_info(self, json_response: Dict[str, Any]) -> None:
        """
        Display deployment information.

        Args:
            json_response (Dict[str, Any]): The deployment information to display.
        """
        console.print("Deploying the crew...\n", style="bold blue")
        for key, value in json_response.items():
            console.print(f"{key.title()}: [green]{value}[/green]")
        console.print("\nTo check the status of the deployment, run:")
        console.print("crewai deploy status")
        console.print(" or")
        console.print(f"crewai deploy status --uuid \"{json_response['uuid']}\"")

    def _display_logs(self, log_messages: List[Dict[str, Any]]) -> None:
        """
        Display log messages.

        Args:
            log_messages (List[Dict[str, Any]]): The log messages to display.
        """
        for log_message in log_messages:
            console.print(
                f"{log_message['timestamp']} - {log_message['level']}: {log_message['message']}"
            )

    def deploy(self, uuid: Optional[str] = None) -> None:
        """
        Deploy a crew using either its UUID or the project name.

        Args:
            uuid (Optional[str]): The UUID of the crew to deploy.
        """
        self._start_deployment_span = self._telemetry.start_deployment_span(uuid)
        console.print("Starting deployment...", style="bold blue")
        if uuid:
            response = self.client.deploy_by_uuid(uuid)
        elif self.project_name:
            response = self.client.deploy_by_name(self.project_name)
        else:
            self._standard_no_param_error_message()
            return

        json_response = response.json()
        if response.status_code == 200:
            self._display_deployment_info(json_response)
        else:
            self._handle_error(json_response)

    def create_crew(self, confirm: bool) -> None:
        """
        Create a new crew deployment.

        Args:
            confirm (bool): When True, skip the interactive confirmation prompts.
        """
        self._create_crew_deployment_span = (
            self._telemetry.create_crew_deployment_span()
        )
        console.print("Creating deployment...", style="bold blue")
        env_vars = fetch_and_json_env_file()
        remote_repo_url = get_git_remote_url()

        if remote_repo_url is None:
            console.print("No remote repository URL found.", style="bold red")
            console.print(
                "Please ensure your project has a valid remote repository.",
                style="yellow",
            )
            return

        self._confirm_input(env_vars, remote_repo_url, confirm)
        payload = self._create_payload(env_vars, remote_repo_url)

        response = self.client.create_crew(payload)
        if response.status_code == 201:
            self._display_creation_success(response.json())
        else:
            self._handle_error(response.json())

    def _confirm_input(
        self, env_vars: Dict[str, str], remote_repo_url: str, confirm: bool
    ) -> None:
        """
        Confirm input parameters with the user.

        Args:
            env_vars (Dict[str, str]): Environment variables.
            remote_repo_url (str): Remote repository URL.
            confirm (bool): When True, skip the confirmation prompts.
        """
        if not confirm:
            input(f"Press Enter to continue with the following Env vars: {env_vars}")
            input(
                f"Press Enter to continue with the following remote repository: {remote_repo_url}\n"
            )

    def _create_payload(
        self,
        env_vars: Dict[str, str],
        remote_repo_url: str,
    ) -> Dict[str, Any]:
        """
        Create the payload for crew creation.

        Args:
            env_vars (Dict[str, str]): Environment variables.
            remote_repo_url (str): Remote repository URL.

        Returns:
            Dict[str, Any]: The payload for crew creation.
        """
        return {
            "deploy": {
                "name": self.project_name,
                "repo_clone_url": remote_repo_url,
                "env": env_vars,
            }
        }

    def _display_creation_success(self, json_response: Dict[str, Any]) -> None:
        """
        Display success message after crew creation.

        Args:
            json_response (Dict[str, Any]): The response containing crew information.
        """
        console.print("Deployment created successfully!\n", style="bold green")
        console.print(
            f"Name: {self.project_name} ({json_response['uuid']})", style="bold green"
        )
        console.print(f"Status: {json_response['status']}", style="bold green")
        console.print("\nTo (re)deploy the crew, run:")
        console.print("crewai deploy push")
        console.print(" or")
        console.print(f"crewai deploy push --uuid {json_response['uuid']}")

    def list_crews(self) -> None:
        """
        List all available crews.
        """
        console.print("Listing all Crews\n", style="bold blue")

        response = self.client.list_crews()
        json_response = response.json()
        if response.status_code == 200:
            self._display_crews(json_response)
        else:
            self._display_no_crews_message()

    def _display_crews(self, crews_data: List[Dict[str, Any]]) -> None:
        """
        Display the list of crews.

        Args:
            crews_data (List[Dict[str, Any]]): List of crew data to display.
        """
        for crew_data in crews_data:
            console.print(
                f"- {crew_data['name']} ({crew_data['uuid']}) [blue]{crew_data['status']}[/blue]"
            )

    def _display_no_crews_message(self) -> None:
        """
        Display a message when no crews are available.
        """
        console.print("You don't have any Crews yet. Let's create one!", style="yellow")
        console.print(" crewai create crew <crew_name>", style="green")

    def get_crew_status(self, uuid: Optional[str] = None) -> None:
        """
        Get the status of a crew.

        Args:
            uuid (Optional[str]): The UUID of the crew to check.
        """
        console.print("Fetching deployment status...", style="bold blue")
        if uuid:
            response = self.client.status_by_uuid(uuid)
        elif self.project_name:
            response = self.client.status_by_name(self.project_name)
        else:
            self._standard_no_param_error_message()
            return

        json_response = response.json()
        if response.status_code == 200:
            self._display_crew_status(json_response)
        else:
            self._handle_error(json_response)

    def _display_crew_status(self, status_data: Dict[str, str]) -> None:
        """
        Display the status of a crew.

        Args:
            status_data (Dict[str, str]): The status data to display.
        """
        console.print(f"Name:\t {status_data['name']}")
        console.print(f"Status:\t {status_data['status']}")

    def get_crew_logs(self, uuid: Optional[str], log_type: str = "deployment") -> None:
        """
        Get logs for a crew.

        Args:
            uuid (Optional[str]): The UUID of the crew to get logs for.
            log_type (str): The type of logs to retrieve (default: "deployment").
        """
        self._get_crew_logs_span = self._telemetry.get_crew_logs_span(uuid, log_type)
        console.print(f"Fetching {log_type} logs...", style="bold blue")

        if uuid:
            response = self.client.logs_by_uuid(uuid, log_type)
        elif self.project_name:
            response = self.client.logs_by_name(self.project_name, log_type)
        else:
            self._standard_no_param_error_message()
            return

        if response.status_code == 200:
            self._display_logs(response.json())
        else:
            self._handle_error(response.json())

    def remove_crew(self, uuid: Optional[str]) -> None:
        """
        Remove a crew deployment.

        Args:
            uuid (Optional[str]): The UUID of the crew to remove.
        """
        self._remove_crew_span = self._telemetry.remove_crew_span(uuid)
        console.print("Removing deployment...", style="bold blue")

        if uuid:
            response = self.client.delete_by_uuid(uuid)
        elif self.project_name:
            response = self.client.delete_by_name(self.project_name)
        else:
            self._standard_no_param_error_message()
            return

        if response.status_code == 204:
            console.print(
                f"Crew '{self.project_name}' removed successfully.", style="green"
            )
        else:
            console.print(
                f"Failed to remove crew '{self.project_name}'", style="bold red"
            )
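Every public method follows the same resolution order: an explicit `--uuid` wins, otherwise the project name from `pyproject.toml` is used, otherwise a standard error is printed. A minimal sketch of the flow the CLI wires these methods into (it assumes a valid token from `crewai signup` and a pyproject.toml that lists crewai as a dependency):

```python
from crewai.cli.deploy.main import DeployCommand

command = DeployCommand()          # resolves the auth token and project name
command.create_crew(confirm=True)  # register the deployment, skipping the prompts
command.deploy()                   # no UUID given, so it falls back to the project name
command.get_crew_status()
```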
src/crewai/cli/deploy/utils.py (new file, 155 lines)
@@ -0,0 +1,155 @@
import sys
import re
import subprocess

from rich.console import Console

from ..authentication.utils import TokenManager

console = Console()


if sys.version_info >= (3, 11):
    import tomllib


# Drop the simple_toml_parser when we move to python3.11
def simple_toml_parser(content):
    result = {}
    current_section = result
    for line in content.split('\n'):
        line = line.strip()
        if line.startswith('[') and line.endswith(']'):
            # New section
            section = line[1:-1].split('.')
            current_section = result
            for key in section:
                current_section = current_section.setdefault(key, {})
        elif '=' in line:
            key, value = line.split('=', 1)
            key = key.strip()
            value = value.strip().strip('"')
            current_section[key] = value
    return result


def parse_toml(content):
    if sys.version_info >= (3, 11):
        return tomllib.loads(content)
    else:
        return simple_toml_parser(content)


def get_git_remote_url() -> str | None:
    """Get the Git repository's remote URL."""
    try:
        # Run the git remote -v command
        result = subprocess.run(
            ["git", "remote", "-v"], capture_output=True, text=True, check=True
        )

        # Get the output
        output = result.stdout

        # Parse the output to find the origin URL
        matches = re.findall(r"origin\s+(.*?)\s+\(fetch\)", output)

        if matches:
            return matches[0]  # Return the first match (origin URL)
        else:
            console.print("No origin remote found.", style="bold red")

    except subprocess.CalledProcessError as e:
        console.print(f"Error while trying to fetch the Git repository: {e}", style="bold red")
    except FileNotFoundError:
        console.print("Git command not found. Make sure Git is installed and in your PATH.", style="bold red")

    return None


def get_project_name(pyproject_path: str = "pyproject.toml") -> str | None:
    """Get the project name from the pyproject.toml file."""
    try:
        # Read the pyproject.toml file
        with open(pyproject_path, "r") as f:
            pyproject_content = parse_toml(f.read())

        # Extract the project name
        project_name = pyproject_content["tool"]["poetry"]["name"]

        if "crewai" not in pyproject_content["tool"]["poetry"]["dependencies"]:
            raise Exception("crewai is not in the dependencies.")

        return project_name

    except FileNotFoundError:
        print(f"Error: {pyproject_path} not found.")
    except KeyError:
        print(f"Error: {pyproject_path} is not a valid pyproject.toml file.")
    except tomllib.TOMLDecodeError if sys.version_info >= (3, 11) else Exception as e:  # type: ignore
        print(
            f"Error: {pyproject_path} is not a valid TOML file."
            if sys.version_info >= (3, 11)
            else f"Error reading the pyproject.toml file: {e}"
        )
    except Exception as e:
        print(f"Error reading the pyproject.toml file: {e}")

    return None


def get_crewai_version(poetry_lock_path: str = "poetry.lock") -> str:
    """Get the version number of crewai from the poetry.lock file."""
    try:
        with open(poetry_lock_path, "r") as f:
            lock_content = f.read()

        match = re.search(
            r'\[\[package\]\]\s*name\s*=\s*"crewai"\s*version\s*=\s*"([^"]+)"',
            lock_content,
            re.DOTALL,
        )
        if match:
            return match.group(1)
        else:
            print("crewai package not found in poetry.lock")
            return "no-version-found"

    except FileNotFoundError:
        print(f"Error: {poetry_lock_path} not found.")
    except Exception as e:
        print(f"Error reading the poetry.lock file: {e}")

    return "no-version-found"


def fetch_and_json_env_file(env_file_path: str = ".env") -> dict:
    """Fetch the environment variables from a .env file and return them as a dictionary."""
    try:
        # Read the .env file
        with open(env_file_path, "r") as f:
            env_content = f.read()

        # Parse the .env file content to a dictionary
        env_dict = {}
        for line in env_content.splitlines():
            if line.strip() and not line.strip().startswith("#"):
                key, value = line.split("=", 1)
                env_dict[key.strip()] = value.strip()

        return env_dict

    except FileNotFoundError:
        print(f"Error: {env_file_path} not found.")
    except Exception as e:
        print(f"Error reading the .env file: {e}")

    return {}


def get_auth_token() -> str:
    """Get the authentication token."""
    access_token = TokenManager().get_token()
    if not access_token:
        raise Exception()
    return access_token
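The fallback parser only handles the flat `key = "value"` subset of TOML that these project files use, which is enough for `get_project_name`. A quick sketch of what `parse_toml` produces on either Python version (the sample project name is made up):

```python
sample = """
[tool.poetry]
name = "my_crew"

[tool.poetry.dependencies]
crewai = "0.55.2"
"""

config = parse_toml(sample)
print(config["tool"]["poetry"]["name"])                        # my_crew
print("crewai" in config["tool"]["poetry"]["dependencies"])    # True
```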
@@ -6,7 +6,7 @@ authors = ["Your Name <you@example.com>"]

[tool.poetry.dependencies]
python = ">=3.10,<=3.13"
crewai = { extras = ["tools"], version = ">=0.51.0,<1.0.0" }
crewai = { extras = ["tools"], version = ">=0.55.2,<1.0.0" }


[tool.poetry.scripts]

@@ -6,7 +6,7 @@ authors = ["Your Name <you@example.com>"]

[tool.poetry.dependencies]
python = ">=3.10,<=3.13"
crewai = { extras = ["tools"], version = ">=0.51.0,<1.0.0" }
crewai = { extras = ["tools"], version = ">=0.55.2,<1.0.0" }
asyncio = "*"

[tool.poetry.scripts]

@@ -6,7 +6,7 @@ authors = ["Your Name <you@example.com>"]

[tool.poetry.dependencies]
python = ">=3.10,<=3.13"
crewai = { extras = ["tools"], version = ">=0.51.0,<1.0.0" }
crewai = { extras = ["tools"], version = ">=0.55.2,<1.0.0" }


[tool.poetry.scripts]
@@ -584,7 +584,10 @@ class Crew(BaseModel):
            self.manager_agent.allow_delegation = True
            manager = self.manager_agent
            if manager.tools is not None and len(manager.tools) > 0:
                raise Exception("Manager agent should not have tools")
                self._logger.log(
                    "warning", "Manager agent should not have tools", color="orange"
                )
                manager.tools = []
            manager.tools = self.manager_agent.get_delegation_tools(self.agents)
        else:
            manager = Agent(
@@ -1 +1,3 @@
from .crew_output import CrewOutput

__all__ = ["CrewOutput"]

@@ -1,3 +1,5 @@
from .entity.entity_memory import EntityMemory
from .long_term.long_term_memory import LongTermMemory
from .short_term.short_term_memory import ShortTermMemory

__all__ = ["EntityMemory", "LongTermMemory", "ShortTermMemory"]

@@ -21,7 +21,7 @@ class Memory:
        if agent:
            metadata["agent"] = agent

        self.storage.save(value, metadata)  # type: ignore # Maybe BUG? Should be self.storage.save(key, value, metadata)
        self.storage.save(value, metadata)

    def search(self, query: str) -> Dict[str, Any]:
        return self.storage.search(query)
@@ -5,13 +5,14 @@ import os
import shutil
from typing import Any, Dict, List, Optional

from crewai.memory.storage.interface import Storage
from crewai.utilities.paths import db_storage_path
from embedchain import App
from embedchain.llm.base import BaseLlm
from embedchain.models.data_type import DataType
from embedchain.vectordb.chroma import InvalidDimensionException

from crewai.memory.storage.interface import Storage
from crewai.utilities.paths import db_storage_path


@contextlib.contextmanager
def suppress_logging(

@@ -77,12 +78,12 @@ class RAGStorage(Storage):
        self.app.llm = FakeLLM()
        if allow_reset:
            self.app.reset()

    def _sanitize_role(self, role: str) -> str:
        """
        Sanitizes agent roles to ensure valid directory names.
        """
        return role.replace('\n', '').replace(' ', '_').replace('/', '_')
        return role.replace("\n", "").replace(" ", "_").replace("/", "_")

    def save(self, value: Any, metadata: Dict[str, Any]) -> None:
        self._generate_embedding(value, metadata)
@@ -1,3 +1,5 @@
from crewai.pipeline.pipeline import Pipeline
from crewai.pipeline.pipeline_kickoff_result import PipelineKickoffResult
from crewai.pipeline.pipeline_output import PipelineOutput

__all__ = ["Pipeline", "PipelineKickoffResult", "PipelineOutput"]

@@ -103,7 +103,8 @@ def crew(func):
        for task_name in sorted_task_names:
            task_instance = tasks[task_name]()
            instantiated_tasks.append(task_instance)
            if hasattr(task_instance, "agent"):
            agent_instance = getattr(task_instance, "agent", None)
            if agent_instance is not None:
                agent_instance = task_instance.agent
                if agent_instance.role not in agent_roles:
                    instantiated_agents.append(agent_instance)

@@ -1 +1,3 @@
from crewai.routers.router import Router

__all__ = ["Router"]
@@ -6,7 +6,7 @@ import uuid
from concurrent.futures import Future
from copy import copy
from hashlib import md5
from typing import Any, Dict, List, Optional, Tuple, Type, Union
from typing import Any, Dict, List, Optional, Set, Tuple, Type, Union

from opentelemetry.trace import Span
from pydantic import (

@@ -108,6 +108,7 @@ class Task(BaseModel):
        description="A converter class used to export structured output",
        default=None,
    )
    processed_by_agents: Set[str] = Field(default_factory=set)

    _telemetry: Telemetry = PrivateAttr(default_factory=Telemetry)
    _execution_span: Optional[Span] = PrivateAttr(default=None)

@@ -241,6 +242,8 @@ class Task(BaseModel):
        self.prompt_context = context
        tools = tools or self.tools or []

        self.processed_by_agents.add(agent.role)

        result = agent.execute_task(
            task=self,
            context=context,

@@ -308,8 +311,10 @@ class Task(BaseModel):
        """Increment the tools errors counter."""
        self.tools_errors += 1

    def increment_delegations(self) -> None:
    def increment_delegations(self, agent_name: Optional[str]) -> None:
        """Increment the delegations counter."""
        if agent_name:
            self.processed_by_agents.add(agent_name)
        self.delegations += 1

    def copy(self, agents: List["BaseAgent"]) -> "Task":

@@ -1 +1,3 @@
from .telemetry import Telemetry

__all__ = ["Telemetry"]
@@ -4,7 +4,7 @@ import asyncio
import json
import os
import platform
from typing import TYPE_CHECKING, Any
from typing import TYPE_CHECKING, Any, Optional

import pkg_resources
from opentelemetry import trace

@@ -28,18 +28,6 @@ class Telemetry:
    agents backstories or goals nor responses or any data that is being
    processed by the agents, nor any secrets and env vars.

    Data collected includes:
    - Version of crewAI
    - Version of Python
    - General OS (e.g. number of CPUs, macOS/Windows/Linux)
    - Number of agents and tasks in a crew
    - Crew Process being used
    - If Agents are using memory or allowing delegation
    - If Tasks are being executed in parallel or sequentially
    - Language model being used
    - Roles of agents in a crew
    - Tools names available

    Users can opt-in to sharing more complete data using the `share_crew`
    attribute in the Crew class.
    """

@@ -114,10 +102,17 @@ class Telemetry:
                            "max_iter": agent.max_iter,
                            "max_rpm": agent.max_rpm,
                            "i18n": agent.i18n.prompt_file,
                            "function_calling_llm": json.dumps(
                                self._safe_llm_attributes(
                                    agent.function_calling_llm
                                )
                            ),
                            "llm": json.dumps(
                                self._safe_llm_attributes(agent.llm)
                            ),
                            "delegation_enabled?": agent.allow_delegation,
                            "allow_code_execution?": agent.allow_code_execution,
                            "max_retry_limit": agent.max_retry_limit,
                            "tools_names": [
                                tool.name.casefold()
                                for tool in agent.tools or []

@@ -165,7 +160,62 @@ class Telemetry:
            self._add_attribute(
                span, "crew_inputs", json.dumps(inputs) if inputs else None
            )

            else:
                self._add_attribute(
                    span,
                    "crew_agents",
                    json.dumps(
                        [
                            {
                                "key": agent.key,
                                "id": str(agent.id),
                                "role": agent.role,
                                "verbose?": agent.verbose,
                                "max_iter": agent.max_iter,
                                "max_rpm": agent.max_rpm,
                                "function_calling_llm": json.dumps(
                                    self._safe_llm_attributes(
                                        agent.function_calling_llm
                                    )
                                ),
                                "llm": json.dumps(
                                    self._safe_llm_attributes(agent.llm)
                                ),
                                "delegation_enabled?": agent.allow_delegation,
                                "allow_code_execution?": agent.allow_code_execution,
                                "max_retry_limit": agent.max_retry_limit,
                                "tools_names": [
                                    tool.name.casefold()
                                    for tool in agent.tools or []
                                ],
                            }
                            for agent in crew.agents
                        ]
                    ),
                )
                self._add_attribute(
                    span,
                    "crew_tasks",
                    json.dumps(
                        [
                            {
                                "key": task.key,
                                "id": str(task.id),
                                "async_execution?": task.async_execution,
                                "human_input?": task.human_input,
                                "agent_role": task.agent.role
                                if task.agent
                                else "None",
                                "agent_key": task.agent.key if task.agent else None,
                                "tools_names": [
                                    tool.name.casefold()
                                    for tool in task.tools or []
                                ],
                            }
                            for task in crew.tasks
                        ]
                    ),
                )
            span.set_status(Status(StatusCode.OK))
            span.end()
        except Exception:
@@ -349,6 +399,63 @@ class Telemetry:
        except Exception:
            pass

    def deploy_signup_error_span(self):
        if self.ready:
            try:
                tracer = trace.get_tracer("crewai.telemetry")
                span = tracer.start_span("Deploy Signup Error")
                span.set_status(Status(StatusCode.OK))
                span.end()
            except Exception:
                pass

    def start_deployment_span(self, uuid: Optional[str] = None):
        if self.ready:
            try:
                tracer = trace.get_tracer("crewai.telemetry")
                span = tracer.start_span("Start Deployment")
                if uuid:
                    self._add_attribute(span, "uuid", uuid)
                span.set_status(Status(StatusCode.OK))
                span.end()
            except Exception:
                pass

    def create_crew_deployment_span(self):
        if self.ready:
            try:
                tracer = trace.get_tracer("crewai.telemetry")
                span = tracer.start_span("Create Crew Deployment")
                span.set_status(Status(StatusCode.OK))
                span.end()
            except Exception:
                pass

    def get_crew_logs_span(self, uuid: Optional[str], log_type: str = "deployment"):
        if self.ready:
            try:
                tracer = trace.get_tracer("crewai.telemetry")
                span = tracer.start_span("Get Crew Logs")
                self._add_attribute(span, "log_type", log_type)
                if uuid:
                    self._add_attribute(span, "uuid", uuid)
                span.set_status(Status(StatusCode.OK))
                span.end()
            except Exception:
                pass

    def remove_crew_span(self, uuid: Optional[str] = None):
        if self.ready:
            try:
                tracer = trace.get_tracer("crewai.telemetry")
                span = tracer.start_span("Remove Crew")
                if uuid:
                    self._add_attribute(span, "uuid", uuid)
                span.set_status(Status(StatusCode.OK))
                span.end()
            except Exception:
                pass

    def crew_execution_span(self, crew: Crew, inputs: dict[str, Any] | None):
        """Records the complete execution of a crew.
        This is only collected if the user has opted-in to share the crew.

@@ -462,7 +569,7 @@ class Telemetry:
            pass

    def _safe_llm_attributes(self, llm):
        attributes = ["name", "model_name", "base_url", "model", "top_k", "temperature"]
        attributes = ["name", "model_name", "model", "top_k", "temperature"]
        if llm:
            safe_attributes = {k: v for k, v in vars(llm).items() if k in attributes}
            safe_attributes["class"] = llm.__class__.__name__
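The allow-list change means `base_url` is no longer collected. A stand-alone sketch of the filtering rule these lines implement (the `FakeLLM` class is an illustrative stand-in, not a crewAI type):

```python
class FakeLLM:
    def __init__(self):
        self.model_name = "gpt-4o"
        self.temperature = 0.2
        self.api_key = "secret"  # not in the allow-list, so never collected

attributes = ["name", "model_name", "model", "top_k", "temperature"]
llm = FakeLLM()
safe = {k: v for k, v in vars(llm).items() if k in attributes}
safe["class"] = llm.__class__.__name__
print(safe)  # {'model_name': 'gpt-4o', 'temperature': 0.2, 'class': 'FakeLLM'}
```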
@@ -1,8 +1,8 @@
from typing import Any, Dict, Optional

from pydantic import BaseModel, Field
from pydantic import BaseModel as PydanticBaseModel
from pydantic import Field as PydanticField
from pydantic.v1 import BaseModel, Field


class ToolCalling(BaseModel):
@@ -5,7 +5,7 @@ import regex
from langchain.output_parsers import PydanticOutputParser
from langchain_core.exceptions import OutputParserException
from langchain_core.outputs import Generation
from langchain_core.pydantic_v1 import ValidationError
from pydantic import ValidationError


class ToolOutputParser(PydanticOutputParser):
@@ -8,6 +8,7 @@ from langchain_core.tools import BaseTool
from langchain_openai import ChatOpenAI

from crewai.agents.tools_handler import ToolsHandler
from crewai.task import Task
from crewai.telemetry import Telemetry
from crewai.tools.tool_calling import InstructorToolCalling, ToolCalling
from crewai.utilities import I18N, Converter, ConverterError, Printer

@@ -51,7 +52,7 @@ class ToolUsage:
        original_tools: List[Any],
        tools_description: str,
        tools_names: str,
        task: Any,
        task: Task,
        function_calling_llm: Any,
        agent: Any,
        action: Any,

@@ -154,7 +155,10 @@ class ToolUsage:
                "Delegate work to coworker",
                "Ask question to coworker",
            ]:
                self.task.increment_delegations()
                coworker = (
                    calling.arguments.get("coworker") if calling.arguments else None
                )
                self.task.increment_delegations(coworker)

            if calling.arguments:
                try:

@@ -241,7 +245,7 @@ class ToolUsage:
            result = self._remember_format(result=result)  # type: ignore # "_remember_format" of "ToolUsage" does not return a value (it only ever returns None)
        return result

    def _should_remember_format(self) -> None:
    def _should_remember_format(self) -> bool:
        return self.task.used_tools % self._remember_format_after_usages == 0

    def _remember_format(self, result: str) -> None:

@@ -353,10 +357,10 @@ class ToolUsage:
            return ToolUsageErrorException(  # type: ignore # Incompatible return value type (got "ToolUsageErrorException", expected "ToolCalling | InstructorToolCalling")
                f'{self._i18n.errors("tool_arguments_error")}'
            )
            calling = ToolCalling(  # type: ignore # Unexpected keyword argument "log" for "ToolCalling"
            calling = ToolCalling(
                tool_name=tool.name,
                arguments=arguments,
                log=tool_string,
                log=tool_string,  # type: ignore
            )
        except Exception as e:
            self._run_attempts += 1

@@ -404,19 +408,19 @@ class ToolUsage:
                    '"' + value.replace('"', '\\"') + '"'
                )  # Re-encapsulate with double quotes
            elif value.isdigit():  # Check if value is a digit, hence integer
                formatted_value = value
                value = value
            elif value.lower() in [
                "true",
                "false",
                "null",
            ]:  # Check for boolean and null values
                formatted_value = value.lower()
                value = value.lower()
            else:
                # Assume the value is a string and needs quotes
                formatted_value = '"' + value.replace('"', '\\"') + '"'
                value = '"' + value.replace('"', '\\"') + '"'

            # Rebuild the entry with proper quoting
            formatted_entry = f'"{key}": {formatted_value}'
            formatted_entry = f'"{key}": {value}'
            formatted_entries.append(formatted_entry)

        # Reconstruct the JSON string
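The branches above implement a small repair rule for malformed JSON-ish tool arguments: bare digits stay as numbers, `true`/`false`/`null` become JSON literals, and everything else is re-quoted as a string. A self-contained sketch of that rule (the helper name and sample values are illustrative, not part of crewAI):

```python
import json

def quote_value(value: str) -> str:
    """Keep digits and JSON literals as-is; re-quote everything else."""
    value = value.strip().strip('"')
    if value.isdigit():
        return value
    if value.lower() in ("true", "false", "null"):
        return value.lower()
    return '"' + value.replace('"', '\\"') + '"'

entries = {"city": 'New "York"', "population": "8468000", "capital": "FALSE"}
formatted = ", ".join(f'"{k}": {quote_value(v)}' for k, v in entries.items())
print(json.loads("{" + formatted + "}"))
# {'city': 'New "York"', 'population': 8468000, 'capital': False}
```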
@@ -23,17 +23,16 @@ def process_config(
    # Copy values from config (originally from YAML) to the model's attributes.
    # Only copy if the attribute isn't already set, preserving any explicitly defined values.
    for key, value in config.items():
        if key not in model_class.model_fields:
        if key not in model_class.model_fields or values.get(key) is not None:
            continue
        if values.get(key) is not None:
            continue
        if isinstance(value, (str, int, float, bool, list)):
            values[key] = value
        elif isinstance(value, dict):

        if isinstance(value, dict):
            if isinstance(values.get(key), dict):
                values[key].update(value)
            else:
                values[key] = value
        else:
            values[key] = value

    # Remove the config from values to avoid duplicate processing
    values.pop("config", None)
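The rewritten loop collapses the two skip conditions and copies any remaining value type, so the rule is: unknown fields and explicitly set values are skipped, everything else is copied over. A small sketch of that rule in isolation (`merge_config` and the sample fields are illustrative stand-ins, not the real model plumbing):

```python
def merge_config(values: dict, config: dict, model_fields: set) -> dict:
    for key, value in config.items():
        # Skip unknown fields and anything the caller set explicitly.
        if key not in model_fields or values.get(key) is not None:
            continue
        if isinstance(value, dict) and isinstance(values.get(key), dict):
            values[key].update(value)
        else:
            values[key] = value
    values.pop("config", None)
    return values

print(merge_config(
    {"verbose": True, "cache": None},
    {"verbose": False, "cache": {"ttl": 60}},
    {"verbose", "cache"},
))
# {'verbose': True, 'cache': {'ttl': 60}}  -- the explicit verbose=True wins
```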
@@ -5,8 +5,7 @@ import regex
from langchain.output_parsers import PydanticOutputParser
from langchain_core.exceptions import OutputParserException
from langchain_core.outputs import Generation
from langchain_core.pydantic_v1 import ValidationError
from pydantic import BaseModel
from pydantic import BaseModel, ValidationError


class CrewPydanticOutputParser(PydanticOutputParser):
@@ -1,14 +1,14 @@
from collections import defaultdict

from langchain_openai import ChatOpenAI
from pydantic import BaseModel, Field
from rich.console import Console
from rich.table import Table

from crewai.agent import Agent
from crewai.task import Task
from crewai.tasks.task_output import TaskOutput
from crewai.telemetry import Telemetry
from langchain_openai import ChatOpenAI
from pydantic import BaseModel, Field
from rich.box import HEAVY_EDGE
from rich.console import Console
from rich.table import Table


class TaskEvaluationPydanticOutput(BaseModel):

@@ -77,50 +77,72 @@ class CrewEvaluator:
    def print_crew_evaluation_result(self) -> None:
        """
        Prints the evaluation result of the crew in a table.
        A Crew with 2 tasks using the command crewai test -n 2
        A Crew with 2 tasks using the command crewai test -n 3
        will output the following table:

        Task Scores
        Tasks Scores
        (1-10 Higher is better)
        ┏━━━━━━━━━━━━┳━━━━━━━┳━━━━━━━┳━━━━━━━━━━━━┓
        ┃ Tasks/Crew ┃ Run 1 ┃ Run 2 ┃ Avg. Total ┃
        ┡━━━━━━━━━━━━╇━━━━━━━╇━━━━━━━╇━━━━━━━━━━━━┩
        │ Task 1     │ 10.0  │ 9.0   │ 9.5        │
        │ Task 2     │ 9.0   │ 9.0   │ 9.0        │
        │ Crew       │ 9.5   │ 9.0   │ 9.2        │
        └────────────┴───────┴───────┴────────────┘
        ┏━━━━━━━━━━━━━━━━━━━━┳━━━━━━━┳━━━━━━━┳━━━━━━━┳━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┓
        ┃ Tasks/Crew/Agents  ┃ Run 1 ┃ Run 2 ┃ Run 3 ┃ Avg. Total ┃ Agents                       ┃
        ┡━━━━━━━━━━━━━━━━━━━━╇━━━━━━━╇━━━━━━━╇━━━━━━━╇━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┩
        │ Task 1             │ 9.0   │ 10.0  │ 9.0   │ 9.3        │ - AI LLMs Senior Researcher  │
        │                    │       │       │       │            │ - AI LLMs Reporting Analyst  │
        │                    │       │       │       │            │                              │
        │ Task 2             │ 9.0   │ 9.0   │ 9.0   │ 9.0        │ - AI LLMs Senior Researcher  │
        │                    │       │       │       │            │ - AI LLMs Reporting Analyst  │
        │                    │       │       │       │            │                              │
        │ Crew               │ 9.0   │ 9.5   │ 9.0   │ 9.2        │                              │
        │ Execution Time (s) │ 42    │ 79    │ 52    │ 57         │                              │
        └────────────────────┴───────┴───────┴───────┴────────────┴──────────────────────────────┘
        """
        task_averages = [
            sum(scores) / len(scores) for scores in zip(*self.tasks_scores.values())
        ]
        crew_average = sum(task_averages) / len(task_averages)

        # Create a table
        table = Table(title="Tasks Scores \n (1-10 Higher is better)")
        table = Table(title="Tasks Scores \n (1-10 Higher is better)", box=HEAVY_EDGE)

        # Add columns for the table
        table.add_column("Tasks/Crew")
        table.add_column("Tasks/Crew/Agents", style="cyan")
        for run in range(1, len(self.tasks_scores) + 1):
            table.add_column(f"Run {run}")
        table.add_column("Avg. Total")
            table.add_column(f"Run {run}", justify="center")
        table.add_column("Avg. Total", justify="center")
        table.add_column("Agents", style="green")

        # Add rows for each task
        for task_index in range(len(task_averages)):
        for task_index, task in enumerate(self.crew.tasks):
            task_scores = [
                self.tasks_scores[run][task_index]
                for run in range(1, len(self.tasks_scores) + 1)
            ]
            avg_score = task_averages[task_index]
            agents = list(task.processed_by_agents)

            # Add the task row with the first agent
            table.add_row(
                f"Task {task_index + 1}", *map(str, task_scores), f"{avg_score:.1f}"
                f"Task {task_index + 1}",
                *[f"{score:.1f}" for score in task_scores],
                f"{avg_score:.1f}",
                f"- {agents[0]}" if agents else "",
            )

            # Add a row for the crew average
            # Add rows for additional agents
            for agent in agents[1:]:
                table.add_row("", "", "", "", "", f"- {agent}")

            # Add a blank separator row if it's not the last task
            if task_index < len(self.crew.tasks) - 1:
                table.add_row("", "", "", "", "", "")

        # Add Crew and Execution Time rows
        crew_scores = [
            sum(self.tasks_scores[run]) / len(self.tasks_scores[run])
            for run in range(1, len(self.tasks_scores) + 1)
        ]
        table.add_row("Crew", *map(str, crew_scores), f"{crew_average:.1f}")
        table.add_row(
            "Crew",
            *[f"{score:.2f}" for score in crew_scores],
            f"{crew_average:.1f}",
            "",
        )

        run_exec_times = [
            int(sum(tasks_exec_times))

@@ -128,11 +150,9 @@ class CrewEvaluator:
        ]
        execution_time_avg = int(sum(run_exec_times) / len(run_exec_times))
        table.add_row(
            "Execution Time (s)",
            *map(str, run_exec_times),
            f"{execution_time_avg}",
            "Execution Time (s)", *map(str, run_exec_times), f"{execution_time_avg}", ""
        )
        # Display the table in the terminal

        console = Console()
        console.print(table)
@@ -1,5 +1,6 @@
import re


class YamlParser:
    @staticmethod
    def parse(file):

@@ -16,7 +17,9 @@ class YamlParser:

        # Replace single { and } with doubled ones, while leaving already doubled ones intact and the other special characters {# and {%
        modified_content = re.sub(r"(?<!\{){(?!\{)(?!\#)(?!\%)", "{{", content)
        modified_content = re.sub(r"(?<!\})(?<!\%)(?<!\#)\}(?!})", "}}", modified_content)
        modified_content = re.sub(
            r"(?<!\})(?<!\%)(?<!\#)\}(?!})", "}}", modified_content
        )

        # Check for 'context:' not followed by '[' and raise an error
        if re.search(r"context:(?!\s*\[)", modified_content):
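The lookarounds in those two substitutions escape lone braces so later `str.format`-style interpolation leaves them alone, while already-doubled braces pass through unchanged. A quick sketch of the same two `re.sub` calls on a made-up YAML value:

```python
import re

content = 'description: "Summarize {topic} in {{n}} bullet points"'

# Double lone braces; leave already-doubled ones (and {#/{%-style markers) intact.
modified = re.sub(r"(?<!\{){(?!\{)(?!\#)(?!\%)", "{{", content)
modified = re.sub(r"(?<!\})(?<!\%)(?<!\#)\}(?!})", "}}", modified)
print(modified)
# description: "Summarize {{topic}} in {{n}} bullet points"
```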
tests/cli/authentication/test_auth_main.py (new file, 94 lines)
@@ -0,0 +1,94 @@
import unittest
from unittest.mock import MagicMock, patch

import requests
from crewai.cli.authentication.main import AuthenticationCommand


class TestAuthenticationCommand(unittest.TestCase):
    def setUp(self):
        self.auth_command = AuthenticationCommand()

    @patch("crewai.cli.authentication.main.requests.post")
    def test_get_device_code(self, mock_post):
        mock_response = MagicMock()
        mock_response.json.return_value = {
            "device_code": "123456",
            "user_code": "ABCDEF",
            "verification_uri_complete": "https://example.com",
            "interval": 5,
        }
        mock_post.return_value = mock_response

        device_code_data = self.auth_command._get_device_code()

        self.assertEqual(device_code_data["device_code"], "123456")
        self.assertEqual(device_code_data["user_code"], "ABCDEF")
        self.assertEqual(
            device_code_data["verification_uri_complete"], "https://example.com"
        )
        self.assertEqual(device_code_data["interval"], 5)

    @patch("crewai.cli.authentication.main.console.print")
    @patch("crewai.cli.authentication.main.webbrowser.open")
    def test_display_auth_instructions(self, mock_open, mock_print):
        device_code_data = {
            "verification_uri_complete": "https://example.com",
            "user_code": "ABCDEF",
        }

        self.auth_command._display_auth_instructions(device_code_data)

        mock_print.assert_any_call("1. Navigate to: ", "https://example.com")
        mock_print.assert_any_call("2. Enter the following code: ", "ABCDEF")
        mock_open.assert_called_once_with("https://example.com")

    @patch("crewai.cli.authentication.main.requests.post")
    @patch("crewai.cli.authentication.main.validate_token")
    @patch("crewai.cli.authentication.main.console.print")
    def test_poll_for_token_success(self, mock_print, mock_validate_token, mock_post):
        mock_response = MagicMock()
        mock_response.status_code = 200
        mock_response.json.return_value = {
            "id_token": "TOKEN",
            "access_token": "ACCESS_TOKEN",
        }
        mock_post.return_value = mock_response

        self.auth_command._poll_for_token({"device_code": "123456"})

        mock_validate_token.assert_called_once_with("TOKEN")
        mock_print.assert_called_once_with("\nWelcome to CrewAI+ !!", style="green")

    @patch("crewai.cli.authentication.main.requests.post")
    @patch("crewai.cli.authentication.main.console.print")
    def test_poll_for_token_error(self, mock_print, mock_post):
        mock_response = MagicMock()
        mock_response.status_code = 400
        mock_response.json.return_value = {
            "error": "invalid_request",
            "error_description": "Invalid request",
        }
        mock_post.return_value = mock_response

        with self.assertRaises(requests.HTTPError):
            self.auth_command._poll_for_token({"device_code": "123456"})

        mock_print.assert_not_called()

    @patch("crewai.cli.authentication.main.requests.post")
    @patch("crewai.cli.authentication.main.console.print")
    def test_poll_for_token_timeout(self, mock_print, mock_post):
        mock_response = MagicMock()
        mock_response.status_code = 400
        mock_response.json.return_value = {
            "error": "authorization_pending",
            "error_description": "Authorization pending",
        }
        mock_post.return_value = mock_response

        self.auth_command._poll_for_token({"device_code": "123456", "interval": 0.01})

        mock_print.assert_called_once_with(
            "Timeout: Failed to get the token. Please try again.", style="bold red"
        )
tests/cli/authentication/test_utils.py (new file, 147 lines)
@@ -0,0 +1,147 @@
import json
import unittest
from datetime import datetime, timedelta
from unittest.mock import MagicMock, patch

from crewai.cli.authentication.utils import TokenManager, validate_token
from cryptography.fernet import Fernet


class TestValidateToken(unittest.TestCase):
    @patch("crewai.cli.authentication.utils.AsymmetricSignatureVerifier")
    @patch("crewai.cli.authentication.utils.TokenVerifier")
    def test_validate_token(self, mock_token_verifier, mock_asymmetric_verifier):
        mock_verifier_instance = mock_token_verifier.return_value
        mock_id_token = "mock_id_token"

        validate_token(mock_id_token)

        mock_asymmetric_verifier.assert_called_once_with(
            "https://dev-jzsr0j8zs0atl5ha.us.auth0.com/.well-known/jwks.json"
        )
        mock_token_verifier.assert_called_once_with(
            signature_verifier=mock_asymmetric_verifier.return_value,
            issuer="https://dev-jzsr0j8zs0atl5ha.us.auth0.com/",
            audience="CZtyRHuVW80HbLSjk4ggXNzjg4KAt7Oe",
        )
        mock_verifier_instance.verify.assert_called_once_with(mock_id_token)


class TestTokenManager(unittest.TestCase):
    def setUp(self):
        self.token_manager = TokenManager()

    @patch("crewai.cli.authentication.utils.TokenManager.read_secure_file")
    @patch("crewai.cli.authentication.utils.TokenManager.save_secure_file")
    @patch("crewai.cli.authentication.utils.TokenManager._get_or_create_key")
    def test_get_or_create_key_existing(self, mock_get_or_create, mock_save, mock_read):
        mock_key = Fernet.generate_key()
        mock_get_or_create.return_value = mock_key

        token_manager = TokenManager()
        result = token_manager.key

        self.assertEqual(result, mock_key)

    @patch("crewai.cli.authentication.utils.Fernet.generate_key")
    @patch("crewai.cli.authentication.utils.TokenManager.read_secure_file")
    @patch("crewai.cli.authentication.utils.TokenManager.save_secure_file")
    def test_get_or_create_key_new(self, mock_save, mock_read, mock_generate):
        mock_key = b"new_key"
        mock_read.return_value = None
        mock_generate.return_value = mock_key

        result = self.token_manager._get_or_create_key()

        self.assertEqual(result, mock_key)
        mock_read.assert_called_once_with("secret.key")
        mock_generate.assert_called_once()
        mock_save.assert_called_once_with("secret.key", mock_key)

    @patch("crewai.cli.authentication.utils.TokenManager.save_secure_file")
    def test_save_tokens(self, mock_save):
        access_token = "test_token"
        expires_in = 3600

        self.token_manager.save_tokens(access_token, expires_in)

        mock_save.assert_called_once()
        args = mock_save.call_args[0]
        self.assertEqual(args[0], "tokens.enc")
        decrypted_data = self.token_manager.fernet.decrypt(args[1])
        data = json.loads(decrypted_data)
        self.assertEqual(data["access_token"], access_token)
        expiration = datetime.fromisoformat(data["expiration"])
        self.assertAlmostEqual(
            expiration,
            datetime.now() + timedelta(seconds=expires_in),
            delta=timedelta(seconds=1),
        )

    @patch("crewai.cli.authentication.utils.TokenManager.read_secure_file")
    def test_get_token_valid(self, mock_read):
        access_token = "test_token"
        expiration = (datetime.now() + timedelta(hours=1)).isoformat()
        data = {"access_token": access_token, "expiration": expiration}
        encrypted_data = self.token_manager.fernet.encrypt(json.dumps(data).encode())
        mock_read.return_value = encrypted_data

        result = self.token_manager.get_token()

        self.assertEqual(result, access_token)

    @patch("crewai.cli.authentication.utils.TokenManager.read_secure_file")
    def test_get_token_expired(self, mock_read):
        access_token = "test_token"
        expiration = (datetime.now() - timedelta(hours=1)).isoformat()
        data = {"access_token": access_token, "expiration": expiration}
        encrypted_data = self.token_manager.fernet.encrypt(json.dumps(data).encode())
        mock_read.return_value = encrypted_data

        result = self.token_manager.get_token()

        self.assertIsNone(result)

    @patch("crewai.cli.authentication.utils.TokenManager.get_secure_storage_path")
    @patch("builtins.open", new_callable=unittest.mock.mock_open)
    @patch("crewai.cli.authentication.utils.os.chmod")
    def test_save_secure_file(self, mock_chmod, mock_open, mock_get_path):
        mock_path = MagicMock()
        mock_get_path.return_value = mock_path
        filename = "test_file.txt"
        content = b"test_content"

        self.token_manager.save_secure_file(filename, content)

        mock_path.__truediv__.assert_called_once_with(filename)
        mock_open.assert_called_once_with(mock_path.__truediv__.return_value, "wb")
        mock_open().write.assert_called_once_with(content)
        mock_chmod.assert_called_once_with(mock_path.__truediv__.return_value, 0o600)

    @patch("crewai.cli.authentication.utils.TokenManager.get_secure_storage_path")
    @patch(
        "builtins.open", new_callable=unittest.mock.mock_open, read_data=b"test_content"
    )
    def test_read_secure_file_exists(self, mock_open, mock_get_path):
        mock_path = MagicMock()
        mock_get_path.return_value = mock_path
        mock_path.__truediv__.return_value.exists.return_value = True
        filename = "test_file.txt"

        result = self.token_manager.read_secure_file(filename)

        self.assertEqual(result, b"test_content")
        mock_path.__truediv__.assert_called_once_with(filename)
        mock_open.assert_called_once_with(mock_path.__truediv__.return_value, "rb")

    @patch("crewai.cli.authentication.utils.TokenManager.get_secure_storage_path")
    def test_read_secure_file_not_exists(self, mock_get_path):
        mock_path = MagicMock()
        mock_get_path.return_value = mock_path
        mock_path.__truediv__.return_value.exists.return_value = False
        filename = "test_file.txt"

        result = self.token_manager.read_secure_file(filename)

        self.assertIsNone(result)
        mock_path.__truediv__.assert_called_once_with(filename)
@@ -2,8 +2,19 @@ from unittest import mock

import pytest
from click.testing import CliRunner

from crewai.cli.cli import reset_memories, test, train, version
from crewai.cli.cli import (
    deploy_create,
    deploy_list,
    deploy_logs,
    deploy_push,
    deploy_remove,
    deply_status,
    reset_memories,
    signup,
    test,
    train,
    version,
)


@pytest.fixture

@@ -163,3 +174,106 @@ def test_test_invalid_string_iterations(evaluate_crew, runner):
        "Usage: test [OPTIONS]\nTry 'test --help' for help.\n\nError: Invalid value for '-n' / '--n_iterations': 'invalid' is not a valid integer.\n"
        in result.output
    )


@mock.patch("crewai.cli.cli.AuthenticationCommand")
def test_signup(command, runner):
    mock_auth = command.return_value
    result = runner.invoke(signup)

    assert result.exit_code == 0
    mock_auth.signup.assert_called_once()


@mock.patch("crewai.cli.cli.DeployCommand")
def test_deploy_create(command, runner):
    mock_deploy = command.return_value
    result = runner.invoke(deploy_create)

    assert result.exit_code == 0
    mock_deploy.create_crew.assert_called_once()


@mock.patch("crewai.cli.cli.DeployCommand")
def test_deploy_list(command, runner):
    mock_deploy = command.return_value
    result = runner.invoke(deploy_list)

    assert result.exit_code == 0
    mock_deploy.list_crews.assert_called_once()


@mock.patch("crewai.cli.cli.DeployCommand")
def test_deploy_push(command, runner):
    mock_deploy = command.return_value
    uuid = "test-uuid"
    result = runner.invoke(deploy_push, ["-u", uuid])

    assert result.exit_code == 0
    mock_deploy.deploy.assert_called_once_with(uuid=uuid)


@mock.patch("crewai.cli.cli.DeployCommand")
def test_deploy_push_no_uuid(command, runner):
    mock_deploy = command.return_value
    result = runner.invoke(deploy_push)

    assert result.exit_code == 0
    mock_deploy.deploy.assert_called_once_with(uuid=None)


@mock.patch("crewai.cli.cli.DeployCommand")
def test_deploy_status(command, runner):
    mock_deploy = command.return_value
    uuid = "test-uuid"
    result = runner.invoke(deply_status, ["-u", uuid])

    assert result.exit_code == 0
    mock_deploy.get_crew_status.assert_called_once_with(uuid=uuid)


@mock.patch("crewai.cli.cli.DeployCommand")
def test_deploy_status_no_uuid(command, runner):
    mock_deploy = command.return_value
    result = runner.invoke(deply_status)

    assert result.exit_code == 0
    mock_deploy.get_crew_status.assert_called_once_with(uuid=None)


@mock.patch("crewai.cli.cli.DeployCommand")
def test_deploy_logs(command, runner):
    mock_deploy = command.return_value
    uuid = "test-uuid"
    result = runner.invoke(deploy_logs, ["-u", uuid])

    assert result.exit_code == 0
    mock_deploy.get_crew_logs.assert_called_once_with(uuid=uuid)


@mock.patch("crewai.cli.cli.DeployCommand")
def test_deploy_logs_no_uuid(command, runner):
    mock_deploy = command.return_value
    result = runner.invoke(deploy_logs)

    assert result.exit_code == 0
    mock_deploy.get_crew_logs.assert_called_once_with(uuid=None)


@mock.patch("crewai.cli.cli.DeployCommand")
def test_deploy_remove(command, runner):
    mock_deploy = command.return_value
    uuid = "test-uuid"
    result = runner.invoke(deploy_remove, ["-u", uuid])

    assert result.exit_code == 0
    mock_deploy.remove_crew.assert_called_once_with(uuid=uuid)


@mock.patch("crewai.cli.cli.DeployCommand")
def test_deploy_remove_no_uuid(command, runner):
    mock_deploy = command.return_value
    result = runner.invoke(deploy_remove)

    assert result.exit_code == 0
    mock_deploy.remove_crew.assert_called_once_with(uuid=None)
tests/cli/deploy/test_api.py (new file, 103 lines)
@@ -0,0 +1,103 @@
import unittest
from os import environ
from unittest.mock import MagicMock, patch

from crewai.cli.deploy.api import CrewAPI


class TestCrewAPI(unittest.TestCase):
    def setUp(self):
        self.api_key = "test_api_key"
        self.api = CrewAPI(self.api_key)

    def test_init(self):
        self.assertEqual(self.api.api_key, self.api_key)
        self.assertEqual(
            self.api.headers,
            {
                "Authorization": f"Bearer {self.api_key}",
                "Content-Type": "application/json",
                "User-Agent": "CrewAI-CLI/no-version-found"
            },
        )

    @patch("crewai.cli.deploy.api.requests.request")
    def test_make_request(self, mock_request):
        mock_response = MagicMock()
        mock_request.return_value = mock_response

        response = self.api._make_request("GET", "test_endpoint")

        mock_request.assert_called_once_with(
            "GET", f"{self.api.base_url}/test_endpoint", headers=self.api.headers
        )
        self.assertEqual(response, mock_response)

    @patch("crewai.cli.deploy.api.CrewAPI._make_request")
    def test_deploy_by_name(self, mock_make_request):
        self.api.deploy_by_name("test_project")
        mock_make_request.assert_called_once_with("POST", "by-name/test_project/deploy")

    @patch("crewai.cli.deploy.api.CrewAPI._make_request")
    def test_deploy_by_uuid(self, mock_make_request):
        self.api.deploy_by_uuid("test_uuid")
        mock_make_request.assert_called_once_with("POST", "test_uuid/deploy")

    @patch("crewai.cli.deploy.api.CrewAPI._make_request")
    def test_status_by_name(self, mock_make_request):
        self.api.status_by_name("test_project")
        mock_make_request.assert_called_once_with("GET", "by-name/test_project/status")

    @patch("crewai.cli.deploy.api.CrewAPI._make_request")
    def test_status_by_uuid(self, mock_make_request):
        self.api.status_by_uuid("test_uuid")
        mock_make_request.assert_called_once_with("GET", "test_uuid/status")

    @patch("crewai.cli.deploy.api.CrewAPI._make_request")
    def test_logs_by_name(self, mock_make_request):
        self.api.logs_by_name("test_project")
        mock_make_request.assert_called_once_with(
            "GET", "by-name/test_project/logs/deployment"
        )

        self.api.logs_by_name("test_project", "custom_log")
        mock_make_request.assert_called_with(
            "GET", "by-name/test_project/logs/custom_log"
        )

    @patch("crewai.cli.deploy.api.CrewAPI._make_request")
    def test_logs_by_uuid(self, mock_make_request):
        self.api.logs_by_uuid("test_uuid")
        mock_make_request.assert_called_once_with("GET", "test_uuid/logs/deployment")

        self.api.logs_by_uuid("test_uuid", "custom_log")
        mock_make_request.assert_called_with("GET", "test_uuid/logs/custom_log")

    @patch("crewai.cli.deploy.api.CrewAPI._make_request")
    def test_delete_by_name(self, mock_make_request):
        self.api.delete_by_name("test_project")
        mock_make_request.assert_called_once_with("DELETE", "by-name/test_project")

    @patch("crewai.cli.deploy.api.CrewAPI._make_request")
    def test_delete_by_uuid(self, mock_make_request):
        self.api.delete_by_uuid("test_uuid")
        mock_make_request.assert_called_once_with("DELETE", "test_uuid")

    @patch("crewai.cli.deploy.api.CrewAPI._make_request")
    def test_list_crews(self, mock_make_request):
        self.api.list_crews()
        mock_make_request.assert_called_once_with("GET", "")

    @patch("crewai.cli.deploy.api.CrewAPI._make_request")
    def test_create_crew(self, mock_make_request):
        payload = {"name": "test_crew"}
        self.api.create_crew(payload)
        mock_make_request.assert_called_once_with("POST", "", json=payload)

    @patch.dict(environ, {"CREWAI_BASE_URL": "https://custom-url.com/api"})
    def test_custom_base_url(self):
        custom_api = CrewAPI("test_key")
        self.assertEqual(
            custom_api.base_url,
            "https://custom-url.com/api",
        )
tests/cli/deploy/test_deploy_main.py (new file, 219 lines)
@@ -0,0 +1,219 @@
import sys
import unittest
from io import StringIO
from unittest.mock import MagicMock, patch

from crewai.cli.deploy.main import DeployCommand
from crewai.cli.deploy.utils import parse_toml


class TestDeployCommand(unittest.TestCase):
    @patch("crewai.cli.deploy.main.get_auth_token")
    @patch("crewai.cli.deploy.main.get_project_name")
    @patch("crewai.cli.deploy.main.CrewAPI")
    def setUp(self, mock_crew_api, mock_get_project_name, mock_get_auth_token):
        self.mock_get_auth_token = mock_get_auth_token
        self.mock_get_project_name = mock_get_project_name
        self.mock_crew_api = mock_crew_api

        self.mock_get_auth_token.return_value = "test_token"
        self.mock_get_project_name.return_value = "test_project"

        self.deploy_command = DeployCommand()
        self.mock_client = self.deploy_command.client

    def test_init_success(self):
        self.assertEqual(self.deploy_command.project_name, "test_project")
        self.mock_crew_api.assert_called_once_with(api_key="test_token")

    @patch("crewai.cli.deploy.main.get_auth_token")
    def test_init_failure(self, mock_get_auth_token):
        mock_get_auth_token.side_effect = Exception("Auth failed")

        with self.assertRaises(SystemExit):
            DeployCommand()

    def test_handle_error(self):
        with patch("sys.stdout", new=StringIO()) as fake_out:
            self.deploy_command._handle_error(
                {"error": "Test error", "message": "Test message"}
            )
            self.assertIn("Error: Test error", fake_out.getvalue())
            self.assertIn("Message: Test message", fake_out.getvalue())

    def test_standard_no_param_error_message(self):
        with patch("sys.stdout", new=StringIO()) as fake_out:
            self.deploy_command._standard_no_param_error_message()
            self.assertIn("No UUID provided", fake_out.getvalue())

    def test_display_deployment_info(self):
        with patch("sys.stdout", new=StringIO()) as fake_out:
            self.deploy_command._display_deployment_info(
                {"uuid": "test-uuid", "status": "deployed"}
            )
            self.assertIn("Deploying the crew...", fake_out.getvalue())
            self.assertIn("test-uuid", fake_out.getvalue())
            self.assertIn("deployed", fake_out.getvalue())

    def test_display_logs(self):
        with patch("sys.stdout", new=StringIO()) as fake_out:
            self.deploy_command._display_logs(
                [{"timestamp": "2023-01-01", "level": "INFO", "message": "Test log"}]
            )
            self.assertIn("2023-01-01 - INFO: Test log", fake_out.getvalue())

    @patch("crewai.cli.deploy.main.DeployCommand._display_deployment_info")
    def test_deploy_with_uuid(self, mock_display):
        mock_response = MagicMock()
        mock_response.status_code = 200
        mock_response.json.return_value = {"uuid": "test-uuid"}
        self.mock_client.deploy_by_uuid.return_value = mock_response

        self.deploy_command.deploy(uuid="test-uuid")

        self.mock_client.deploy_by_uuid.assert_called_once_with("test-uuid")
        mock_display.assert_called_once_with({"uuid": "test-uuid"})

    @patch("crewai.cli.deploy.main.DeployCommand._display_deployment_info")
    def test_deploy_with_project_name(self, mock_display):
        mock_response = MagicMock()
        mock_response.status_code = 200
        mock_response.json.return_value = {"uuid": "test-uuid"}
        self.mock_client.deploy_by_name.return_value = mock_response

        self.deploy_command.deploy()

        self.mock_client.deploy_by_name.assert_called_once_with("test_project")
        mock_display.assert_called_once_with({"uuid": "test-uuid"})

    @patch("crewai.cli.deploy.main.fetch_and_json_env_file")
    @patch("crewai.cli.deploy.main.get_git_remote_url")
    @patch("builtins.input")
    def test_create_crew(self, mock_input, mock_get_git_remote_url, mock_fetch_env):
        mock_fetch_env.return_value = {"ENV_VAR": "value"}
        mock_get_git_remote_url.return_value = "https://github.com/test/repo.git"
        mock_input.return_value = ""

        mock_response = MagicMock()
        mock_response.status_code = 201
        mock_response.json.return_value = {"uuid": "new-uuid", "status": "created"}
        self.mock_client.create_crew.return_value = mock_response

        with patch("sys.stdout", new=StringIO()) as fake_out:
            self.deploy_command.create_crew()
            self.assertIn("Deployment created successfully!", fake_out.getvalue())
            self.assertIn("new-uuid", fake_out.getvalue())

    def test_list_crews(self):
        mock_response = MagicMock()
        mock_response.status_code = 200
        mock_response.json.return_value = [
            {"name": "Crew1", "uuid": "uuid1", "status": "active"},
            {"name": "Crew2", "uuid": "uuid2", "status": "inactive"},
        ]
        self.mock_client.list_crews.return_value = mock_response

        with patch("sys.stdout", new=StringIO()) as fake_out:
            self.deploy_command.list_crews()
            self.assertIn("Crew1 (uuid1) active", fake_out.getvalue())
            self.assertIn("Crew2 (uuid2) inactive", fake_out.getvalue())

    def test_get_crew_status(self):
        mock_response = MagicMock()
        mock_response.status_code = 200
        mock_response.json.return_value = {"name": "TestCrew", "status": "active"}
        self.mock_client.status_by_name.return_value = mock_response

        with patch("sys.stdout", new=StringIO()) as fake_out:
            self.deploy_command.get_crew_status()
            self.assertIn("TestCrew", fake_out.getvalue())
            self.assertIn("active", fake_out.getvalue())

    def test_get_crew_logs(self):
        mock_response = MagicMock()
        mock_response.status_code = 200
|
||||
mock_response.json.return_value = [
|
||||
{"timestamp": "2023-01-01", "level": "INFO", "message": "Log1"},
|
||||
{"timestamp": "2023-01-02", "level": "ERROR", "message": "Log2"},
|
||||
]
|
||||
self.mock_client.logs_by_name.return_value = mock_response
|
||||
|
||||
with patch("sys.stdout", new=StringIO()) as fake_out:
|
||||
self.deploy_command.get_crew_logs(None)
|
||||
self.assertIn("2023-01-01 - INFO: Log1", fake_out.getvalue())
|
||||
self.assertIn("2023-01-02 - ERROR: Log2", fake_out.getvalue())
|
||||
|
||||
def test_remove_crew(self):
|
||||
mock_response = MagicMock()
|
||||
mock_response.status_code = 204
|
||||
self.mock_client.delete_by_name.return_value = mock_response
|
||||
|
||||
with patch("sys.stdout", new=StringIO()) as fake_out:
|
||||
self.deploy_command.remove_crew(None)
|
||||
self.assertIn(
|
||||
"Crew 'test_project' removed successfully", fake_out.getvalue()
|
||||
)
|
||||
|
||||
@unittest.skipIf(sys.version_info < (3, 11), "Requires Python 3.11+")
|
||||
def test_parse_toml_python_311_plus(self):
|
||||
toml_content = """
|
||||
[tool.poetry]
|
||||
name = "test_project"
|
||||
version = "0.1.0"
|
||||
|
||||
[tool.poetry.dependencies]
|
||||
python = "^3.11"
|
||||
crewai = { extras = ["tools"], version = ">=0.51.0,<1.0.0" }
|
||||
"""
|
||||
parsed = parse_toml(toml_content)
|
||||
self.assertEqual(parsed['tool']['poetry']['name'], 'test_project')
|
||||
|
||||
@patch('builtins.open', new_callable=unittest.mock.mock_open, read_data="""
|
||||
[tool.poetry]
|
||||
name = "test_project"
|
||||
version = "0.1.0"
|
||||
|
||||
[tool.poetry.dependencies]
|
||||
python = "^3.10"
|
||||
crewai = { extras = ["tools"], version = ">=0.51.0,<1.0.0" }
|
||||
""")
|
||||
def test_get_project_name_python_310(self, mock_open):
|
||||
from crewai.cli.deploy.utils import get_project_name
|
||||
project_name = get_project_name()
|
||||
self.assertEqual(project_name, 'test_project')
|
||||
|
||||
@unittest.skipIf(sys.version_info < (3, 11), "Requires Python 3.11+")
|
||||
@patch('builtins.open', new_callable=unittest.mock.mock_open, read_data="""
|
||||
[tool.poetry]
|
||||
name = "test_project"
|
||||
version = "0.1.0"
|
||||
|
||||
[tool.poetry.dependencies]
|
||||
python = "^3.11"
|
||||
crewai = { extras = ["tools"], version = ">=0.51.0,<1.0.0" }
|
||||
""")
|
||||
def test_get_project_name_python_311_plus(self, mock_open):
|
||||
from crewai.cli.deploy.utils import get_project_name
|
||||
project_name = get_project_name()
|
||||
self.assertEqual(project_name, 'test_project')
|
||||
|
||||
@patch('builtins.open', new_callable=unittest.mock.mock_open, read_data="""
|
||||
[[package]]
|
||||
name = "crewai"
|
||||
version = "0.51.1"
|
||||
description = "Some description"
|
||||
category = "main"
|
||||
optional = false
|
||||
python-versions = ">=3.10,<4.0"
|
||||
""")
|
||||
def test_get_crewai_version(self, mock_open):
|
||||
from crewai.cli.deploy.utils import get_crewai_version
|
||||
version = get_crewai_version()
|
||||
self.assertEqual(version, '0.51.1')
|
||||
|
||||
@patch('builtins.open', side_effect=FileNotFoundError)
|
||||
def test_get_crewai_version_file_not_found(self, mock_open):
|
||||
from crewai.cli.deploy.utils import get_crewai_version
|
||||
with patch('sys.stdout', new=StringIO()) as fake_out:
|
||||
version = get_crewai_version()
|
||||
self.assertEqual(version, 'no-version-found')
|
||||
self.assertIn("Error: poetry.lock not found.", fake_out.getvalue())
|
||||
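The two skipIf-gated tests above exist because TOML parsing moved into the standard library in Python 3.11 (tomllib), so parse_toml needs a fallback on 3.10. A sketch of that dispatch follows — the tomllib branch is standard library, but whether crewai's parse_toml uses tomli as the 3.10 fallback is an assumption:

import sys


def parse_toml_sketch(content: str) -> dict:
    # Python 3.11+ ships tomllib; its loads() takes a str and returns a dict.
    if sys.version_info >= (3, 11):
        import tomllib

        return tomllib.loads(content)
    # On 3.10, the third-party tomli package mirrors tomllib's API.
    import tomli

    return tomli.loads(content)


# get_project_name-style usage: parse pyproject.toml, then read the poetry name:
#   data = parse_toml_sketch(Path("pyproject.toml").read_text())
#   assert data["tool"]["poetry"]["name"] == "test_project"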
@@ -113,7 +113,9 @@ def mock_router_factory(mock_crew_factory):
            (
                "route1"
                if x.get("score", 0) > 80
                else "route2" if x.get("score", 0) > 50 else "default"
                else "route2"
                if x.get("score", 0) > 50
                else "default"
            ),
        )
    )
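The hunk above only reflows a nested conditional expression; behavior is unchanged. Spelled out as a standalone function (the name route_for is illustrative; the thresholds come straight from the fixture):

def route_for(x: dict) -> str:
    # Score above 80 -> route1; above 50 -> route2; otherwise the default.
    return (
        "route1"
        if x.get("score", 0) > 80
        else "route2"
        if x.get("score", 0) > 50
        else "default"
    )


assert route_for({"score": 90}) == "route1"
assert route_for({"score": 60}) == "route2"
assert route_for({}) == "default"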
@@ -1,7 +1,6 @@
from unittest import mock

import pytest

from crewai.agent import Agent
from crewai.crew import Crew
from crewai.task import Task
@@ -80,6 +79,7 @@ class TestCrewEvaluator:
    @mock.patch("crewai.utilities.evaluators.crew_evaluator_handler.Console")
    @mock.patch("crewai.utilities.evaluators.crew_evaluator_handler.Table")
    def test_print_crew_evaluation_result(self, table, console, crew_planner):
        # Set up task scores and execution times
        crew_planner.tasks_scores = {
            1: [10, 9, 8],
            2: [9, 8, 7],
@@ -89,22 +89,45 @@ class TestCrewEvaluator:
            2: [55, 33, 67],
        }

        # Mock agents and assign them to tasks
        crew_planner.crew.agents = [
            mock.Mock(role="Agent 1"),
            mock.Mock(role="Agent 2"),
        ]
        crew_planner.crew.tasks = [
            mock.Mock(
                agent=crew_planner.crew.agents[0], processed_by_agents=["Agent 1"]
            ),
            mock.Mock(
                agent=crew_planner.crew.agents[1], processed_by_agents=["Agent 2"]
            ),
        ]

        # Run the method
        crew_planner.print_crew_evaluation_result()

        # Verify that the table is created with the appropriate structure and rows
        table.assert_has_calls(
            [
                mock.call(title="Tasks Scores \n (1-10 Higher is better)"),
                mock.call().add_column("Tasks/Crew"),
                mock.call().add_column("Run 1"),
                mock.call().add_column("Run 2"),
                mock.call().add_column("Avg. Total"),
                mock.call().add_row("Task 1", "10", "9", "9.5"),
                mock.call().add_row("Task 2", "9", "8", "8.5"),
                mock.call().add_row("Task 3", "8", "7", "7.5"),
                mock.call().add_row("Crew", "9.0", "8.0", "8.5"),
                mock.call().add_row("Execution Time (s)", "135", "155", "145"),
                mock.call(
                    title="Tasks Scores \n (1-10 Higher is better)", box=mock.ANY
                ),  # Title and styling
                mock.call().add_column("Tasks/Crew/Agents", style="cyan"),  # Columns
                mock.call().add_column("Run 1", justify="center"),
                mock.call().add_column("Run 2", justify="center"),
                mock.call().add_column("Avg. Total", justify="center"),
                mock.call().add_column("Agents", style="green"),
                # Verify rows for tasks with agents
                mock.call().add_row("Task 1", "10.0", "9.0", "9.5", "- Agent 1"),
                mock.call().add_row("", "", "", "", "", ""),  # Blank row between tasks
                mock.call().add_row("Task 2", "9.0", "8.0", "8.5", "- Agent 2"),
                # Add crew averages and execution times
                mock.call().add_row("Crew", "9.00", "8.00", "8.5", ""),
                mock.call().add_row("Execution Time (s)", "135", "155", "145", ""),
            ]
        )

        # Ensure the console prints the table
        console.assert_has_calls([mock.call(), mock.call().print(table())])

    def test_evaluate(self, crew_planner):
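Rendered with rich (the library the Console/Table patches above target), the updated assertions correspond to a table like the sketch below. The box style is an assumption, since the test matches it with mock.ANY, and the construction inside crewai is inferred from the mocked calls rather than quoted from it:

from rich import box
from rich.console import Console
from rich.table import Table

# Rebuild the table shape the new assertions describe: one label column,
# two run columns, an average column, and the newly added Agents column.
table = Table(title="Tasks Scores \n (1-10 Higher is better)", box=box.ROUNDED)
table.add_column("Tasks/Crew/Agents", style="cyan")
table.add_column("Run 1", justify="center")
table.add_column("Run 2", justify="center")
table.add_column("Avg. Total", justify="center")
table.add_column("Agents", style="green")
table.add_row("Task 1", "10.0", "9.0", "9.5", "- Agent 1")
table.add_row("Task 2", "9.0", "8.0", "8.5", "- Agent 2")
table.add_row("Crew", "9.00", "8.00", "8.5", "")
table.add_row("Execution Time (s)", "135", "155", "145", "")
Console().print(table)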