Mirror of https://github.com/crewAIInc/crewAI.git, synced 2025-12-16 20:38:29 +00:00
Compare commits: 23 commits, `pr-1209`...`feat/cli-m`

| SHA1 |
| --- |
| a5f70d2307 |
| b55fc40c83 |
| d0ed4f5274 |
| ee34399b71 |
| 798d16a6c6 |
| c9152f2af8 |
| 24b09e97cd |
| a6b7295092 |
| 725d159e44 |
| ef21da15e6 |
| de5d2eaa9b |
| e2badaa4c6 |
| 39903f0c50 |
| c4bf713113 |
| 5d18c6312d |
| 1f9baf9b2c |
| 6fbc97b298 |
| 08bacfa892 |
| 1ea8115d56 |
| 916dec2418 |
| 7f387dd7c3 |
| 6b906f09cf |
| 6c29ebafea |
.github/workflows/tests.yml (vendored, 22 changed lines)
@@ -9,24 +9,24 @@ env:
|
||||
OPENAI_API_KEY: fake-api-key
|
||||
|
||||
jobs:
|
||||
deploy:
|
||||
tests:
|
||||
runs-on: ubuntu-latest
|
||||
timeout-minutes: 15
|
||||
|
||||
steps:
|
||||
- name: Checkout code
|
||||
uses: actions/checkout@v4
|
||||
|
||||
- name: Setup Python
|
||||
uses: actions/setup-python@v4
|
||||
- name: Install uv
|
||||
uses: astral-sh/setup-uv@v3
|
||||
with:
|
||||
python-version: "3.11.9"
|
||||
enable-cache: true
|
||||
|
||||
- name: Install Requirements
|
||||
run: |
|
||||
set -e
|
||||
pip install poetry
|
||||
poetry install
|
||||
|
||||
- name: Set up Python
|
||||
run: uv python install 3.11.9
|
||||
|
||||
- name: Install the project
|
||||
run: uv sync --dev
|
||||
|
||||
- name: Run tests
|
||||
run: poetry run pytest
|
||||
run: uv run pytest tests
|
||||
|
||||
.github/workflows/type-checker.yml (vendored, 2 changed lines)
@@ -16,7 +16,7 @@ jobs:
|
||||
- name: Setup Python
|
||||
uses: actions/setup-python@v4
|
||||
with:
|
||||
python-version: "3.10"
|
||||
python-version: "3.11.9"
|
||||
|
||||
- name: Install Requirements
|
||||
run: |
|
||||
|
||||
README.md (30 changed lines)
@@ -44,15 +44,9 @@ To get started with CrewAI, follow these simple steps:
|
||||
|
||||
### 1. Installation
|
||||
|
||||
Ensure you have Python >=3.10 <=3.13 installed on your system. CrewAI uses [Poetry](https://python-poetry.org/) for dependency management and package handling, offering a seamless setup and execution experience.
|
||||
Ensure you have Python >=3.10 <=3.13 installed on your system. CrewAI uses [UV](https://docs.astral.sh/uv/) for dependency management and package handling, offering a seamless setup and execution experience.
|
||||
|
||||
First, if you haven't already, install Poetry:
|
||||
|
||||
```bash
|
||||
pip install poetry
|
||||
```
|
||||
|
||||
Then, install CrewAI:
|
||||
First, install CrewAI:
|
||||
|
||||
```shell
|
||||
pip install crewai
|
||||
@@ -243,7 +237,7 @@ Lock the dependencies and install them by using the CLI command but first, navig
|
||||
|
||||
```shell
|
||||
cd my_project
|
||||
crewai install
|
||||
crewai install (Optional)
|
||||
```
|
||||
|
||||
To run your crew, execute the following command in the root of your project:
|
||||
@@ -258,6 +252,12 @@ or
|
||||
python src/my_project/main.py
|
||||
```
|
||||
|
||||
If an error happens due to the usage of poetry, please run the following command to update your crewai package:
|
||||
|
||||
```bash
|
||||
crewai update
|
||||
```
|
||||
|
||||
You should see the output in the console and the `report.md` file should be created in the root of your project with the full final report.
|
||||
|
||||
In addition to the sequential process, you can use the hierarchical process, which automatically assigns a manager to the defined crew to properly coordinate the planning and execution of tasks through delegation and validation of results. [See more about the processes here](https://docs.crewai.com/core-concepts/Processes/).
|
||||
@@ -332,14 +332,14 @@ CrewAI is open-source and we welcome contributions. If you're looking to contrib
|
||||
### Installing Dependencies
|
||||
|
||||
```bash
|
||||
poetry lock
|
||||
poetry install
|
||||
uv lock
|
||||
uv sync
|
||||
```
|
||||
|
||||
### Virtual Env
|
||||
|
||||
```bash
|
||||
poetry shell
|
||||
uv venv
|
||||
```
|
||||
|
||||
### Pre-commit hooks
|
||||
@@ -351,19 +351,19 @@ pre-commit install
|
||||
### Running Tests
|
||||
|
||||
```bash
|
||||
poetry run pytest
|
||||
uvx pytest
|
||||
```
|
||||
|
||||
### Running static type checks
|
||||
|
||||
```bash
|
||||
poetry run mypy
|
||||
uvx mypy
|
||||
```
|
||||
|
||||
### Packaging
|
||||
|
||||
```bash
|
||||
poetry build
|
||||
uv build
|
||||
```
|
||||
|
||||
### Installing Locally
|
||||
|
||||
@@ -33,7 +33,7 @@ Think of an agent as a member of a team, with specific skills and a particular j
|
||||
| **Verbose** *(optional)* | `verbose` | Setting this to `True` configures the internal logger to provide detailed execution logs, aiding in debugging and monitoring. Default is `False`. |
|
||||
| **Allow Delegation** *(optional)* | `allow_delegation` | Agents can delegate tasks or questions to one another, ensuring that each task is handled by the most suitable agent. Default is `False`.
|
||||
| **Step Callback** *(optional)* | `step_callback` | A function that is called after each step of the agent. This can be used to log the agent's actions or to perform other operations. It will overwrite the crew `step_callback`. |
|
||||
| gbv vbn zzdsxcdsdfc**Cache** *(optional)* | `cache` | Indicates if the agent should use a cache for tool usage. Default is `True`. |
|
||||
| **Cache** *(optional)* | `cache` | Indicates if the agent should use a cache for tool usage. Default is `True`. |
|
||||
| **System Template** *(optional)* | `system_template` | Specifies the system format for the agent. Default is `None`. |
|
||||
| **Prompt Template** *(optional)* | `prompt_template` | Specifies the prompt format for the agent. Default is `None`. |
|
||||
| **Response Template** *(optional)* | `response_template` | Specifies the response format for the agent. Default is `None`. |
|
||||
|
||||
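The `step_callback` attribute described in the table above is simply a callable that receives the agent's intermediate step output; a minimal sketch follows (the shape of the callback argument is an assumption here, not a documented contract):

```python
from crewai import Agent

def log_step(step_output):
    # Illustrative: record whatever the agent produced at this step.
    print(f"[researcher] intermediate step: {step_output}")

researcher = Agent(
    role="Researcher",
    goal="Track the latest AI developments",
    backstory="An analyst who follows AI research closely",
    step_callback=log_step,  # overrides the crew-level step_callback for this agent
)
```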
@@ -10,10 +10,10 @@ The CrewAI CLI provides a set of commands to interact with CrewAI, allowing you
|
||||
|
||||
## Installation
|
||||
|
||||
To use the CrewAI CLI, make sure you have CrewAI & Poetry installed:
|
||||
To use the CrewAI CLI, make sure you have CrewAI installed:
|
||||
|
||||
```shell
|
||||
pip install crewai poetry
|
||||
pip install crewai
|
||||
```
|
||||
|
||||
## Basic Usage
|
||||
@@ -145,4 +145,4 @@ crewai run
|
||||
<Note>
|
||||
Make sure to run these commands from the directory where your CrewAI project is set up.
|
||||
Some commands may require additional configuration or setup within your project structure.
|
||||
</Note>
|
||||
</Note>
|
||||
|
||||
@@ -572,16 +572,16 @@ In this example, the `PoemFlow` class defines a flow that generates a sentence c
|
||||
|
||||
### Running the Flow
|
||||
|
||||
Before running the flow, make sure to install the dependencies by running:
|
||||
(Optional) Before running the flow, you can install the dependencies by running:
|
||||
|
||||
```bash
|
||||
poetry install
|
||||
crewai install
|
||||
```
|
||||
|
||||
Once all of the dependencies are installed, you need to activate the virtual environment by running:
|
||||
|
||||
```bash
|
||||
poetry shell
|
||||
source .venv/bin/activate
|
||||
```
|
||||
|
||||
After activating the virtual environment, you can run the flow by executing one of the following commands:
|
||||
@@ -593,7 +593,7 @@ crewai flow run
|
||||
or
|
||||
|
||||
```bash
|
||||
poetry run run_flow
|
||||
uv run run_flow
|
||||
```
|
||||
|
||||
The flow will execute, and you should see the output in the console.
|
||||
@@ -653,4 +653,17 @@ If you're interested in exploring additional examples of flows, we have a variet
|
||||
|
||||
4. **Meeting Assistant Flow**: This flow demonstrates how to broadcast one event to trigger multiple follow-up actions. For instance, after a meeting is completed, the flow can update a Trello board, send a Slack message, and save the results. It's a great example of handling multiple outcomes from a single event, making it ideal for comprehensive task management and notification systems. [View Example](https://github.com/crewAIInc/crewAI-examples/tree/main/meeting_assistant_flow)
|
||||
|
||||
By exploring these examples, you can gain insights into how to leverage CrewAI Flows for various use cases, from automating repetitive tasks to managing complex, multi-step processes with dynamic decision-making and human feedback.
|
||||
By exploring these examples, you can gain insights into how to leverage CrewAI Flows for various use cases, from automating repetitive tasks to managing complex, multi-step processes with dynamic decision-making and human feedback.
|
||||
|
||||
Also, check out our YouTube video on how to use flows in CrewAI below!
|
||||
|
||||
<iframe
|
||||
width="560"
|
||||
height="315"
|
||||
src="https://www.youtube.com/embed/MTb5my6VOT8"
|
||||
title="YouTube video player"
|
||||
frameborder="0"
|
||||
allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share"
|
||||
referrerpolicy="strict-origin-when-cross-origin"
|
||||
allowfullscreen
|
||||
></iframe>
|
||||
@@ -1,277 +0,0 @@
|
||||
---
|
||||
title: Pipelines
|
||||
description: Understanding and utilizing pipelines in the crewAI framework for efficient multi-stage task processing.
|
||||
icon: timeline-arrow
|
||||
---
|
||||
|
||||
## What is a Pipeline?
|
||||
|
||||
A pipeline in CrewAI represents a structured workflow that allows for the sequential or parallel execution of multiple crews. It provides a way to organize complex processes involving multiple stages, where the output of one stage can serve as input for subsequent stages.
|
||||
|
||||
## Key Terminology
|
||||
|
||||
Understanding the following terms is crucial for working effectively with pipelines:
|
||||
|
||||
- **Stage**: A distinct part of the pipeline, which can be either sequential (a single crew) or parallel (multiple crews executing concurrently).
|
||||
- **Kickoff**: A specific execution of the pipeline for a given set of inputs, representing a single instance of processing through the pipeline.
|
||||
- **Branch**: Parallel executions within a stage (e.g., concurrent crew operations).
|
||||
- **Trace**: The journey of an individual input through the entire pipeline, capturing the path and transformations it undergoes.
|
||||
|
||||
Example pipeline structure:
|
||||
|
||||
```bash Pipeline
|
||||
crew1 >> [crew2, crew3] >> crew4
|
||||
```
|
||||
|
||||
This represents a pipeline with three stages:
|
||||
|
||||
1. A sequential stage (crew1)
|
||||
2. A parallel stage with two branches (crew2 and crew3 executing concurrently)
|
||||
3. Another sequential stage (crew4)
|
||||
|
||||
Each input creates its own kickoff, flowing through all stages of the pipeline. Multiple kickoffs can be processed concurrently, each following the defined pipeline structure.
|
||||
|
||||
## Pipeline Attributes
|
||||
|
||||
| Attribute | Parameters | Description |
|
||||
| :--------- | :---------- | :----------------------------------------------------------------------------------------------------------------- |
|
||||
| **Stages** | `stages` | A list of `PipelineStage` (crews, lists of crews, or routers) representing the stages to be executed in sequence. |
|
||||
|
||||
## Creating a Pipeline
|
||||
|
||||
When creating a pipeline, you define a series of stages, each consisting of either a single crew or a list of crews for parallel execution.
|
||||
The pipeline ensures that each stage is executed in order, with the output of one stage feeding into the next.
|
||||
|
||||
### Example: Assembling a Pipeline
|
||||
|
||||
```python
|
||||
from crewai import Crew, Process, Pipeline
|
||||
|
||||
# Define your crews
|
||||
research_crew = Crew(
|
||||
agents=[researcher],
|
||||
tasks=[research_task],
|
||||
process=Process.sequential
|
||||
)
|
||||
|
||||
analysis_crew = Crew(
|
||||
agents=[analyst],
|
||||
tasks=[analysis_task],
|
||||
process=Process.sequential
|
||||
)
|
||||
|
||||
writing_crew = Crew(
|
||||
agents=[writer],
|
||||
tasks=[writing_task],
|
||||
process=Process.sequential
|
||||
)
|
||||
|
||||
# Assemble the pipeline
|
||||
my_pipeline = Pipeline(
|
||||
stages=[research_crew, analysis_crew, writing_crew]
|
||||
)
|
||||
```
|
||||
|
||||
## Pipeline Methods
|
||||
|
||||
| Method | Description |
|
||||
| :--------------- | :----------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
|
||||
| **kickoff** | Executes the pipeline, processing all stages and returning the results. This method initiates one or more kickoffs through the pipeline, handling the flow of data between stages. |
|
||||
| **process_runs** | Runs the pipeline for each input provided, handling the flow and transformation of data between stages. |
|
||||
|
||||
## Pipeline Output
|
||||
|
||||
The output of a pipeline in the CrewAI framework is encapsulated within the `PipelineKickoffResult` class.
|
||||
This class provides a structured way to access the results of the pipeline's execution, including various formats such as raw strings, JSON, and Pydantic models.
|
||||
|
||||
### Pipeline Output Attributes
|
||||
|
||||
| Attribute | Parameters | Type | Description |
|
||||
| :-------------- | :------------ | :------------------------ | :-------------------------------------------------------------------------------------------------------- |
|
||||
| **ID** | `id` | `UUID4` | A unique identifier for the pipeline output. |
|
||||
| **Run Results** | `run_results` | `List[PipelineRunResult]` | A list of `PipelineRunResult` objects, each representing the output of a single run through the pipeline. |
|
||||
|
||||
### Pipeline Output Methods
|
||||
|
||||
| Method/Property | Description |
|
||||
| :----------------- | :----------------------------------------------------- |
|
||||
| **add_run_result** | Adds a `PipelineRunResult` to the list of run results. |
|
||||
|
||||
### Pipeline Run Result Attributes
|
||||
|
||||
| Attribute | Parameters | Type | Description |
|
||||
| :---------------- | :-------------- | :------------------------- | :-------------------------------------------------------------------------------------------- |
|
||||
| **ID** | `id` | `UUID4` | A unique identifier for the run result. |
|
||||
| **Raw** | `raw` | `str` | The raw output of the final stage in the pipeline kickoff. |
|
||||
| **Pydantic** | `pydantic` | `Any` | A Pydantic model object representing the structured output of the final stage, if applicable. |
|
||||
| **JSON Dict** | `json_dict` | `Union[Dict[str, Any], None]` | A dictionary representing the JSON output of the final stage, if applicable. |
|
||||
| **Token Usage** | `token_usage` | `Dict[str, UsageMetrics]` | A summary of token usage across all stages of the pipeline kickoff. |
|
||||
| **Trace** | `trace` | `List[Any]` | A trace of the journey of inputs through the pipeline kickoff. |
|
||||
| **Crews Outputs** | `crews_outputs` | `List[CrewOutput]` | A list of `CrewOutput` objects, representing the outputs from each crew in the pipeline kickoff. |
|
||||
|
||||
### Pipeline Run Result Methods and Properties
|
||||
|
||||
| Method/Property | Description |
|
||||
| :-------------- | :------------------------------------------------------------------------------------------------------- |
|
||||
| **json** | Returns the JSON string representation of the run result if the output format of the final task is JSON. |
|
||||
| **to_dict** | Converts the JSON and Pydantic outputs to a dictionary. |
|
||||
| **str** | Returns the string representation of the run result, prioritizing Pydantic, then JSON, then raw. |
|
||||
|
||||
### Accessing Pipeline Outputs
|
||||
|
||||
Once a pipeline has been executed, its output can be accessed through the `PipelineOutput` object returned by the `process_runs` method.
|
||||
The `PipelineOutput` class provides access to individual `PipelineRunResult` objects, each representing a single run through the pipeline.
|
||||
|
||||
#### Example
|
||||
|
||||
```python
|
||||
# Define input data for the pipeline
|
||||
input_data = [
|
||||
{"initial_query": "Latest advancements in AI"},
|
||||
{"initial_query": "Future of robotics"}
|
||||
]
|
||||
|
||||
# Execute the pipeline
|
||||
pipeline_output = await my_pipeline.process_runs(input_data)
|
||||
|
||||
# Access the results
|
||||
for run_result in pipeline_output.run_results:
|
||||
print(f"Run ID: {run_result.id}")
|
||||
print(f"Final Raw Output: {run_result.raw}")
|
||||
if run_result.json_dict:
|
||||
print(f"JSON Output: {json.dumps(run_result.json_dict, indent=2)}")
|
||||
if run_result.pydantic:
|
||||
print(f"Pydantic Output: {run_result.pydantic}")
|
||||
print(f"Token Usage: {run_result.token_usage}")
|
||||
print(f"Trace: {run_result.trace}")
|
||||
print("Crew Outputs:")
|
||||
for crew_output in run_result.crews_outputs:
|
||||
print(f" Crew: {crew_output.raw}")
|
||||
print("\n")
|
||||
```
|
||||
|
||||
This example demonstrates how to access and work with the pipeline output, including individual run results and their associated data.
|
||||
|
||||
## Using Pipelines
|
||||
|
||||
Pipelines are particularly useful for complex workflows that involve multiple stages of processing, analysis, or content generation. They allow you to:
|
||||
|
||||
1. **Sequence Operations**: Execute crews in a specific order, ensuring that the output of one crew is available as input to the next.
|
||||
2. **Parallel Processing**: Run multiple crews concurrently within a stage for increased efficiency.
|
||||
3. **Manage Complex Workflows**: Break down large tasks into smaller, manageable steps executed by specialized crews.
|
||||
|
||||
### Example: Running a Pipeline
|
||||
|
||||
```python
|
||||
# Define input data for the pipeline
|
||||
input_data = [{"initial_query": "Latest advancements in AI"}]
|
||||
|
||||
# Execute the pipeline, initiating a run for each input
|
||||
results = await my_pipeline.process_runs(input_data)
|
||||
|
||||
# Access the results
|
||||
for result in results:
|
||||
print(f"Final Output: {result.raw}")
|
||||
print(f"Token Usage: {result.token_usage}")
|
||||
print(f"Trace: {result.trace}") # Shows the path of the input through all stages
|
||||
```
|
||||
|
||||
## Advanced Features
|
||||
|
||||
### Parallel Execution within Stages
|
||||
|
||||
You can define parallel execution within a stage by providing a list of crews, creating multiple branches:
|
||||
|
||||
```python
|
||||
parallel_analysis_crew = Crew(agents=[financial_analyst], tasks=[financial_analysis_task])
|
||||
market_analysis_crew = Crew(agents=[market_analyst], tasks=[market_analysis_task])
|
||||
|
||||
my_pipeline = Pipeline(
|
||||
stages=[
|
||||
research_crew,
|
||||
[parallel_analysis_crew, market_analysis_crew], # Parallel execution (branching)
|
||||
writing_crew
|
||||
]
|
||||
)
|
||||
```
|
||||
|
||||
### Routers in Pipelines
|
||||
|
||||
Routers are a powerful feature in crewAI pipelines that allow for dynamic decision-making and branching within your workflow.
|
||||
They enable you to direct the flow of execution based on specific conditions or criteria, making your pipelines more flexible and adaptive.
|
||||
|
||||
#### What is a Router?
|
||||
|
||||
A router in crewAI is a special component that can be included as a stage in your pipeline. It evaluates the input data and determines which path the execution should take next.
|
||||
This allows for conditional branching in your pipeline, where different crews or sub-pipelines can be executed based on the router's decision.
|
||||
|
||||
#### Key Components of a Router
|
||||
|
||||
1. **Routes**: A dictionary of named routes, each associated with a condition and a pipeline to execute if the condition is met.
|
||||
2. **Default Route**: A fallback pipeline that is executed if none of the defined route conditions are met.
|
||||
|
||||
#### Creating a Router
|
||||
|
||||
Here's an example of how to create a router:
|
||||
|
||||
```python
|
||||
from crewai import Router, Route, Pipeline, Crew, Agent, Task
|
||||
|
||||
# Define your agents
|
||||
classifier = Agent(name="Classifier", role="Email Classifier")
|
||||
urgent_handler = Agent(name="Urgent Handler", role="Urgent Email Processor")
|
||||
normal_handler = Agent(name="Normal Handler", role="Normal Email Processor")
|
||||
|
||||
# Define your tasks
|
||||
classify_task = Task(description="Classify the email based on its content and metadata.")
|
||||
urgent_task = Task(description="Process and respond to urgent email quickly.")
|
||||
normal_task = Task(description="Process and respond to normal email thoroughly.")
|
||||
|
||||
# Define your crews
|
||||
classification_crew = Crew(agents=[classifier], tasks=[classify_task]) # classify email between high and low urgency 1-10
|
||||
urgent_crew = Crew(agents=[urgent_handler], tasks=[urgent_task])
|
||||
normal_crew = Crew(agents=[normal_handler], tasks=[normal_task])
|
||||
|
||||
# Create pipelines for different urgency levels
|
||||
urgent_pipeline = Pipeline(stages=[urgent_crew])
|
||||
normal_pipeline = Pipeline(stages=[normal_crew])
|
||||
|
||||
# Create a router
|
||||
email_router = Router(
|
||||
routes={
|
||||
"high_urgency": Route(
|
||||
condition=lambda x: x.get("urgency_score", 0) > 7,
|
||||
pipeline=urgent_pipeline
|
||||
),
|
||||
"low_urgency": Route(
|
||||
condition=lambda x: x.get("urgency_score", 0) <= 7,
|
||||
pipeline=normal_pipeline
|
||||
)
|
||||
},
|
||||
default=Pipeline(stages=[normal_pipeline]) # Default to just normal if no urgency score
|
||||
)
|
||||
|
||||
# Use the router in a main pipeline
|
||||
main_pipeline = Pipeline(stages=[classification_crew, email_router])
|
||||
|
||||
inputs = [{"email": "..."}, {"email": "..."}] # List of email data
|
||||
|
||||
main_pipeline.kickoff(inputs=inputs)
|
||||
```
|
||||
|
||||
In this example, the router decides between an urgent pipeline and a normal pipeline based on the urgency score of the email. If the urgency score is greater than 7,
|
||||
it routes to the urgent pipeline; otherwise, it uses the normal pipeline. If the input doesn't include an urgency score, it defaults to just the classification crew.
|
||||
|
||||
#### Benefits of Using Routers
|
||||
|
||||
1. **Dynamic Workflow**: Adapt your pipeline's behavior based on input characteristics or intermediate results.
|
||||
2. **Efficiency**: Route urgent tasks to quicker processes, reserving more thorough pipelines for less time-sensitive inputs.
|
||||
3. **Flexibility**: Easily modify or extend your pipeline's logic without changing the core structure.
|
||||
4. **Scalability**: Handle a wide range of email types and urgency levels with a single pipeline structure.
|
||||
|
||||
### Error Handling and Validation
|
||||
|
||||
The `Pipeline` class includes validation mechanisms to ensure the robustness of the pipeline structure:
|
||||
|
||||
- Validates that stages contain only Crew instances or lists of Crew instances.
|
||||
- Prevents double nesting of stages to maintain a clear structure.
|
||||
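A minimal sketch of what such a stage check might look like, assuming a Pydantic v2 validator; this is illustrative only, not the actual `Pipeline` implementation:

```python
from typing import Any, List, Union

from pydantic import BaseModel, field_validator

from crewai import Crew


class PipelineSketch(BaseModel):
    stages: List[Union[Crew, List[Crew]]]

    @field_validator("stages")
    @classmethod
    def check_stages(cls, stages: List[Any]) -> List[Any]:
        for stage in stages:
            if isinstance(stage, list):
                # Parallel stage: every branch must be a Crew, so nested lists are rejected.
                if not all(isinstance(branch, Crew) for branch in stage):
                    raise ValueError("Parallel stages may only contain Crew instances")
            elif not isinstance(stage, Crew):
                raise ValueError("Stages must be Crew instances or lists of Crew instances")
        return stages
```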
@@ -25,9 +25,9 @@ It provides a dashboard for tracking agent performance, session replays, and cus
|
||||
Additionally, AgentOps provides session drilldowns for viewing Crew agent interactions, LLM calls, and tool usage in real-time.
|
||||
This feature is useful for debugging and understanding how agents interact with users as well as other agents.
|
||||
|
||||

|
||||

|
||||

|
||||

|
||||

|
||||

|
||||
|
||||
### Features
|
||||
|
||||
@@ -123,4 +123,4 @@ For feature requests or bug reports, please reach out to the AgentOps team on th
|
||||
<span> • </span>
|
||||
<a href="https://app.agentops.ai/?=crew">🖇️ AgentOps Dashboard</a>
|
||||
<span> • </span>
|
||||
<a href="https://docs.agentops.ai/introduction">📙 Documentation</a>
|
||||
<a href="https://docs.agentops.ai/introduction">📙 Documentation</a>
|
||||
|
||||
@@ -10,9 +10,9 @@ Langtrace is an open-source, external tool that helps you set up observability a
|
||||
While not built directly into CrewAI, Langtrace can be used alongside CrewAI to gain deep visibility into the cost, latency, and performance of your CrewAI Agents.
|
||||
This integration allows you to log hyperparameters, monitor performance regressions, and establish a process for continuous improvement of your Agents.
|
||||
|
||||

|
||||

|
||||

|
||||

|
||||

|
||||

|
||||
|
||||
## Setup Instructions
|
||||
|
||||
@@ -69,4 +69,4 @@ This integration allows you to log hyperparameters, monitor performance regressi
|
||||
|
||||
6. **Testing and Evaluations**
|
||||
|
||||
- Set up automated tests for your CrewAI agents and tasks.
|
||||
- Set up automated tests for your CrewAI agents and tasks.
|
||||
|
||||
docs/images/crewai-run-poetry-error.png (new binary file, 104 KiB, not shown)
docs/images/crewai-update.png (new binary file, 50 KiB, not shown)
@@ -1,11 +1,9 @@
|
||||
---
|
||||
title: Installation & Setup
|
||||
title: Installation
|
||||
description:
|
||||
icon: wrench
|
||||
---
|
||||
|
||||
## Install CrewAI
|
||||
|
||||
This guide will walk you through the installation process for CrewAI and its dependencies.
|
||||
CrewAI is a flexible and powerful AI framework that enables you to create and manage AI agents, tools, and tasks efficiently.
|
||||
Let's get started! 🚀
|
||||
@@ -15,17 +13,8 @@ Let's get started! 🚀
|
||||
</Tip>
|
||||
|
||||
<Steps>
|
||||
<Step title="Install Poetry">
|
||||
First, if you haven't already, install [Poetry](https://python-poetry.org/).
|
||||
CrewAI uses Poetry for dependency management and package handling, offering a seamless setup and execution experience.
|
||||
<CodeGroup>
|
||||
```shell Terminal
|
||||
pip install poetry
|
||||
```
|
||||
</CodeGroup>
|
||||
</Step>
|
||||
<Step title="Install CrewAI">
|
||||
Then, install the main CrewAI package:
|
||||
Install the main CrewAI package with the following command:
|
||||
<CodeGroup>
|
||||
```shell Terminal
|
||||
pip install crewai
|
||||
@@ -45,15 +34,29 @@ Let's get started! 🚀
|
||||
</CodeGroup>
|
||||
</Step>
|
||||
<Step title="Upgrade CrewAI">
|
||||
To upgrade CrewAI and CrewAI Tools to the latest version, run the following command:
|
||||
To upgrade CrewAI and CrewAI Tools to the latest version, run the following command
|
||||
<CodeGroup>
|
||||
```shell Terminal
|
||||
pip install --upgrade crewai crewai-tools
|
||||
```
|
||||
</CodeGroup>
|
||||
<Note>
|
||||
1. If you're using an older version of CrewAI, you may receive a warning about using `Poetry` for dependency management.
|
||||

|
||||
|
||||
2. In this case, you'll need to run the command below to update your project.
|
||||
This command will migrate your project to use [UV](https://github.com/astral-sh/uv) and update the necessary files.
|
||||
```shell Terminal
|
||||
crewai update
|
||||
```
|
||||
3. After running the command above, you should see the following output:
|
||||

|
||||
|
||||
4. You're all set! You can now proceed to the next step! 🎉
|
||||
</Note>
|
||||
</Step>
|
||||
<Step title="Verify the installation">
|
||||
To verify that `crewai` and `crewai-tools` are installed correctly, run the following command:
|
||||
To verify that `crewai` and `crewai-tools` are installed correctly, run the following command
|
||||
<CodeGroup>
|
||||
```shell Terminal
|
||||
pip freeze | grep crewai
|
||||
|
||||
@@ -45,5 +45,5 @@ By fostering collaborative intelligence, CrewAI empowers agents to work together
|
||||
|
||||
## Next Step
|
||||
|
||||
- [Install CrewAI](/installation)
|
||||
- [Install CrewAI](/installation) to get started with your first agent.
|
||||
|
||||
|
||||
@@ -66,18 +66,17 @@
|
||||
"pages": [
|
||||
"concepts/agents",
|
||||
"concepts/tasks",
|
||||
"concepts/tools",
|
||||
"concepts/processes",
|
||||
"concepts/crews",
|
||||
"concepts/flows",
|
||||
"concepts/llms",
|
||||
"concepts/processes",
|
||||
"concepts/collaboration",
|
||||
"concepts/pipeline",
|
||||
"concepts/training",
|
||||
"concepts/memory",
|
||||
"concepts/planning",
|
||||
"concepts/testing",
|
||||
"concepts/flows",
|
||||
"concepts/cli",
|
||||
"concepts/llms",
|
||||
"concepts/tools",
|
||||
"concepts/langchain-tools",
|
||||
"concepts/llamaindex-tools"
|
||||
]
|
||||
|
||||
@@ -26,6 +26,7 @@ Follow the steps below to get crewing! 🚣♂️
|
||||
<Step title="Modify your `agents.yaml` file">
|
||||
<Tip>
|
||||
You can also modify the agents as needed to fit your use case or copy and paste as is to your project.
|
||||
Any variable interpolated in your `agents.yaml` and `tasks.yaml` files like `{topic}` will be replaced by the value of the variable in the `main.py` file.
|
||||
</Tip>
|
||||
```yaml agents.yaml
|
||||
# src/latest_ai_development/config/agents.yaml
|
||||
@@ -124,7 +125,7 @@ Follow the steps below to get crewing! 🚣♂️
|
||||
```
|
||||
</Step>
|
||||
<Step title="Feel free to pass custom inputs to your crew">
|
||||
For example, you can pass the `topic` input to your crew to customize the research and reporting to medical llms or any other topic.
|
||||
For example, you can pass the `topic` input to your crew to customize the research and reporting.
|
||||
```python main.py
|
||||
#!/usr/bin/env python
|
||||
# src/latest_ai_development/main.py
|
||||
@@ -233,6 +234,74 @@ Follow the steps below to get crewing! 🚣♂️
|
||||
</Step>
|
||||
</Steps>
|
||||
|
||||
### Note on Consistency in Naming
|
||||
|
||||
The names you use in your YAML files (`agents.yaml` and `tasks.yaml`) should match the method names in your Python code.
|
||||
For example, you can reference the agent for specific tasks from `tasks.yaml` file.
|
||||
This naming consistency allows CrewAI to automatically link your configurations with your code; otherwise, your task won't recognize the reference properly.
|
||||
|
||||
#### Example References
|
||||
|
||||
<Tip>
|
||||
Note how we use the same name for the agent in the `agents.yaml` (`email_summarizer`) file as the method name in the `crew.py` (`email_summarizer`) file.
|
||||
</Tip>
|
||||
|
||||
```yaml agents.yaml
|
||||
email_summarizer:
|
||||
role: >
|
||||
Email Summarizer
|
||||
goal: >
|
||||
Summarize emails into a concise and clear summary
|
||||
backstory: >
|
||||
You will create a 5 bullet point summary of the report
|
||||
llm: mixtal_llm
|
||||
```
|
||||
|
||||
<Tip>
|
||||
Note how we use the same name for the agent in the `tasks.yaml` (`email_summarizer_task`) file as the method name in the `crew.py` (`email_summarizer_task`) file.
|
||||
</Tip>
|
||||
|
||||
```yaml tasks.yaml
|
||||
email_summarizer_task:
|
||||
description: >
|
||||
Summarize the email into a 5 bullet point summary
|
||||
expected_output: >
|
||||
A 5 bullet point summary of the email
|
||||
agent: email_summarizer
|
||||
context:
|
||||
- reporting_task
|
||||
- research_task
|
||||
```
|
||||
|
||||
Use the annotations to properly reference the agent and task in the `crew.py` file.
|
||||
|
||||
### Annotations include:
|
||||
|
||||
* `@agent`
|
||||
* `@task`
|
||||
* `@crew`
|
||||
* `@tool`
|
||||
* `@callback`
|
||||
* `@output_json`
|
||||
* `@output_pydantic`
|
||||
* `@cache_handler`
|
||||
|
||||
```python crew.py
|
||||
# ...
|
||||
@agent
|
||||
def email_summarizer(self) -> Agent:
|
||||
return Agent(
|
||||
config=self.agents_config["email_summarizer"],
|
||||
)
|
||||
|
||||
@task
|
||||
def email_summarizer_task(self) -> Task:
|
||||
return Task(
|
||||
config=self.tasks_config["email_summarizer_task"],
|
||||
)
|
||||
# ...
|
||||
```
|
||||
|
||||
<Tip>
|
||||
In addition to the [sequential process](../how-to/sequential-process), you can use the [hierarchical process](../how-to/hierarchical-process),
|
||||
which automatically assigns a manager to the defined crew to properly coordinate the planning and execution of tasks through delegation and validation of results.
|
||||
@@ -241,7 +310,7 @@ You can learn more about the core concepts [here](/concepts).
|
||||
|
||||
### Replay Tasks from Latest Crew Kickoff
|
||||
|
||||
CrewAI now includes a replay feature that allows you to list the tasks from the last run and replay from a specific one. To use this feature, run:
|
||||
CrewAI now includes a replay feature that allows you to list the tasks from the last run and replay from a specific one. To use this feature, run.
|
||||
|
||||
```shell
|
||||
crewai replay <task_id>
|
||||
|
||||
pyproject.toml (107 changed lines)
@@ -1,65 +1,66 @@
|
||||
[tool.poetry]
|
||||
[project]
|
||||
name = "crewai"
|
||||
version = "0.70.1"
|
||||
description = "Cutting-edge framework for orchestrating role-playing, autonomous AI agents. By fostering collaborative intelligence, CrewAI empowers agents to work together seamlessly, tackling complex tasks."
|
||||
authors = ["Joao Moura <joao@crewai.com>"]
|
||||
readme = "README.md"
|
||||
packages = [{ include = "crewai", from = "src" }]
|
||||
requires-python = ">=3.10,<=3.13"
|
||||
authors = [
|
||||
{ name = "Joao Moura", email = "joao@crewai.com" }
|
||||
]
|
||||
dependencies = [
|
||||
"pydantic>=2.4.2",
|
||||
"langchain>=0.2.16",
|
||||
"openai>=1.13.3",
|
||||
"opentelemetry-api>=1.22.0",
|
||||
"opentelemetry-sdk>=1.22.0",
|
||||
"opentelemetry-exporter-otlp-proto-http>=1.22.0",
|
||||
"instructor>=1.3.3",
|
||||
"regex>=2024.9.11",
|
||||
"crewai-tools>=0.12.1",
|
||||
"click>=8.1.7",
|
||||
"python-dotenv>=1.0.0",
|
||||
"appdirs>=1.4.4",
|
||||
"jsonref>=1.1.0",
|
||||
"agentops>=0.3.0",
|
||||
"embedchain>=0.1.114",
|
||||
"json-repair>=0.25.2",
|
||||
"auth0-python>=4.7.1",
|
||||
"litellm>=1.44.22",
|
||||
"pyvis>=0.3.2",
|
||||
"uv>=0.4.18",
|
||||
"tomli-w>=1.1.0",
|
||||
]
|
||||
|
||||
[tool.poetry.urls]
|
||||
[project.urls]
|
||||
Homepage = "https://crewai.com"
|
||||
Documentation = "https://docs.crewai.com"
|
||||
Repository = "https://github.com/crewAIInc/crewAI"
|
||||
|
||||
[tool.poetry.dependencies]
|
||||
python = ">=3.10,<=3.13"
|
||||
pydantic = "^2.4.2"
|
||||
langchain = "^0.2.16"
|
||||
openai = "^1.13.3"
|
||||
opentelemetry-api = "^1.22.0"
|
||||
opentelemetry-sdk = "^1.22.0"
|
||||
opentelemetry-exporter-otlp-proto-http = "^1.22.0"
|
||||
instructor = "1.3.3"
|
||||
regex = "^2024.9.11"
|
||||
crewai-tools = { version = "^0.12.1", optional = true }
|
||||
click = "^8.1.7"
|
||||
python-dotenv = "^1.0.0"
|
||||
appdirs = "^1.4.4"
|
||||
jsonref = "^1.1.0"
|
||||
agentops = { version = "^0.3.0", optional = true }
|
||||
embedchain = "^0.1.114"
|
||||
json-repair = "^0.25.2"
|
||||
auth0-python = "^4.7.1"
|
||||
poetry = "^1.8.3"
|
||||
litellm = "^1.44.22"
|
||||
pyvis = "^0.3.2"
|
||||
[project.optional-dependencies]
|
||||
tools = ["crewai-tools>=0.12.1"]
|
||||
agentops = ["agentops>=0.3.0"]
|
||||
|
||||
[tool.poetry.extras]
|
||||
tools = ["crewai-tools"]
|
||||
agentops = ["agentops"]
|
||||
[tool.uv]
|
||||
dev-dependencies = [
|
||||
"ruff>=0.4.10",
|
||||
"mypy>=1.10.0",
|
||||
"pre-commit>=3.6.0",
|
||||
"mkdocs>=1.4.3",
|
||||
"mkdocstrings>=0.22.0",
|
||||
"mkdocstrings-python>=1.1.2",
|
||||
"mkdocs-material>=9.5.7",
|
||||
"mkdocs-material-extensions>=1.3.1",
|
||||
"pillow>=10.2.0",
|
||||
"cairosvg>=2.7.1",
|
||||
"crewai-tools>=0.12.1",
|
||||
"pytest>=8.0.0",
|
||||
"pytest-vcr>=1.0.2",
|
||||
"python-dotenv>=1.0.0",
|
||||
"pytest-asyncio>=0.23.7",
|
||||
"pytest-subprocess>=1.5.2",
|
||||
]
|
||||
|
||||
[tool.poetry.group.dev.dependencies]
|
||||
isort = "^5.13.2"
|
||||
mypy = "1.10.0"
|
||||
autoflake = "^2.2.1"
|
||||
pre-commit = "^3.6.0"
|
||||
mkdocs = "^1.4.3"
|
||||
mkdocstrings = "^0.22.0"
|
||||
mkdocstrings-python = "^1.1.2"
|
||||
mkdocs-material = { extras = ["imaging"], version = "^9.5.7" }
|
||||
mkdocs-material-extensions = "^1.3.1"
|
||||
pillow = "^10.2.0"
|
||||
cairosvg = "^2.7.1"
|
||||
crewai-tools = "^0.12.1"
|
||||
|
||||
[tool.poetry.group.test.dependencies]
|
||||
pytest = "^8.0.0"
|
||||
pytest-vcr = "^1.0.2"
|
||||
python-dotenv = "1.0.0"
|
||||
pytest-asyncio = "^0.23.7"
|
||||
pytest-subprocess = "^1.5.2"
|
||||
|
||||
[tool.poetry.scripts]
|
||||
[project.scripts]
|
||||
crewai = "crewai.cli.cli:crewai"
|
||||
|
||||
[tool.mypy]
|
||||
@@ -71,5 +72,5 @@ exclude = ["cli/templates"]
|
||||
exclude_dirs = ["src/crewai/cli/templates"]
|
||||
|
||||
[build-system]
|
||||
requires = ["poetry-core"]
|
||||
build-backend = "poetry.core.masonry.api"
|
||||
requires = ["hatchling"]
|
||||
build-backend = "hatchling.build"
|
||||
|
||||
@@ -81,6 +81,7 @@ class BaseAgentTools(BaseModel, ABC):
|
||||
task_with_assigned_agent = Task( # type: ignore # Incompatible types in assignment (expression has type "Task", variable has type "str")
|
||||
description=task,
|
||||
agent=agent,
|
||||
expected_output="Your best answer to your coworker asking you this, accounting for the context shared.",
|
||||
expected_output=agent.i18n.slice("manager_request"),
|
||||
i18n=agent.i18n,
|
||||
)
|
||||
return agent.execute_task(task_with_assigned_agent, context)
|
||||
|
||||
@@ -151,7 +151,7 @@ class CrewAgentExecutor(CrewAgentExecutorMixin):
|
||||
)
|
||||
self.have_forced_answer = True
|
||||
self.messages.append(
|
||||
self._format_msg(formatted_answer.text, role="user")
|
||||
self._format_msg(formatted_answer.text, role="assistant")
|
||||
)
|
||||
|
||||
except OutputParserException as e:
|
||||
@@ -317,9 +317,9 @@ class CrewAgentExecutor(CrewAgentExecutorMixin):
|
||||
if self.crew is not None and hasattr(self.crew, "_train_iteration"):
|
||||
train_iteration = self.crew._train_iteration
|
||||
if agent_id in training_data and isinstance(train_iteration, int):
|
||||
training_data[agent_id][train_iteration][
|
||||
"improved_output"
|
||||
] = result.output
|
||||
training_data[agent_id][train_iteration]["improved_output"] = (
|
||||
result.output
|
||||
)
|
||||
training_handler.save(training_data)
|
||||
else:
|
||||
self._logger.log(
|
||||
@@ -334,6 +334,32 @@ class CrewAgentExecutor(CrewAgentExecutorMixin):
|
||||
color="red",
|
||||
)
|
||||
|
||||
if self.ask_for_human_input and human_feedback is not None:
|
||||
training_data = {
|
||||
"initial_output": result.output,
|
||||
"human_feedback": human_feedback,
|
||||
"agent": agent_id,
|
||||
"agent_role": self.agent.role,
|
||||
}
|
||||
if self.crew is not None and hasattr(self.crew, "_train_iteration"):
|
||||
train_iteration = self.crew._train_iteration
|
||||
if isinstance(train_iteration, int):
|
||||
CrewTrainingHandler(TRAINING_DATA_FILE).append(
|
||||
train_iteration, agent_id, training_data
|
||||
)
|
||||
else:
|
||||
self._logger.log(
|
||||
"error",
|
||||
"Invalid train iteration type. Expected int.",
|
||||
color="red",
|
||||
)
|
||||
else:
|
||||
self._logger.log(
|
||||
"error",
|
||||
"Crew is None or does not have _train_iteration attribute.",
|
||||
color="red",
|
||||
)
|
||||
|
||||
def _format_prompt(self, prompt: str, inputs: Dict[str, str]) -> str:
|
||||
prompt = prompt.replace("{input}", inputs["input"])
|
||||
prompt = prompt.replace("{tool_names}", inputs["tool_names"])
|
||||
|
||||
@@ -21,6 +21,7 @@ from .run_crew import run_crew
|
||||
from .run_flow import run_flow
|
||||
from .tools.main import ToolCommand
|
||||
from .train_crew import train_crew
|
||||
from .update_crew import update_crew
|
||||
|
||||
|
||||
@click.group()
|
||||
@@ -188,6 +189,12 @@ def run():
|
||||
run_crew()
|
||||
|
||||
|
||||
@crewai.command()
|
||||
def update():
|
||||
"""Update the pyproject.toml of the Crew project to use uv."""
|
||||
update_crew()
|
||||
|
||||
|
||||
@crewai.command()
|
||||
def signup():
|
||||
"""Sign Up/Login to CrewAI+."""
|
||||
@@ -276,7 +283,13 @@ def tool_install(handle: str):
|
||||
|
||||
|
||||
@tool.command(name="publish")
|
||||
@click.option("--force", is_flag=True, show_default=True, default=False, help="Bypasses Git remote validations")
|
||||
@click.option(
|
||||
"--force",
|
||||
is_flag=True,
|
||||
show_default=True,
|
||||
default=False,
|
||||
help="Bypasses Git remote validations",
|
||||
)
|
||||
@click.option("--public", "is_public", flag_value=True, default=False)
|
||||
@click.option("--private", "is_public", flag_value=False)
|
||||
def tool_publish(is_public: bool, force: bool):
|
||||
|
||||
src/crewai/cli/constants.py (new file, 19 lines)
@@ -0,0 +1,19 @@
|
||||
ENV_VARS = {
|
||||
'openai': ['OPENAI_API_KEY'],
|
||||
'anthropic': ['ANTHROPIC_API_KEY'],
|
||||
'gemini': ['GEMINI_API_KEY'],
|
||||
'groq': ['GROQ_API_KEY'],
|
||||
'ollama': ['FAKE_KEY'],
|
||||
}
|
||||
|
||||
PROVIDERS = ['openai', 'anthropic', 'gemini', 'groq', 'ollama']
|
||||
|
||||
MODELS = {
|
||||
'openai': ['gpt-4', 'gpt-4o', 'gpt-4o-mini', 'o1-mini', 'o1-preview'],
|
||||
'anthropic': ['claude-3-5-sonnet-20240620', 'claude-3-sonnet-20240229', 'claude-3-opus-20240229', 'claude-3-haiku-20240307'],
|
||||
'gemini': ['gemini-1.5-flash', 'gemini-1.5-pro', 'gemini-gemma-2-9b-it', 'gemini-gemma-2-27b-it'],
|
||||
'groq': ['llama-3.1-8b-instant', 'llama-3.1-70b-versatile', 'llama-3.1-405b-reasoning', 'gemma2-9b-it', 'gemma-7b-it'],
|
||||
'ollama': ['llama3.1', 'mixtral'],
|
||||
}
|
||||
|
||||
JSON_URL = "https://raw.githubusercontent.com/BerriAI/litellm/main/model_prices_and_context_window.json"
|
||||
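These constants are what the new provider-selection flow consumes (see `provider.py` and the updated `create_crew.py` below); a minimal sketch of the intended lookups, with the provider hard-coded purely for illustration:

```python
from crewai.cli.constants import ENV_VARS, MODELS, PROVIDERS

provider = "openai"  # normally chosen interactively via select_provider()

if provider in PROVIDERS:
    api_key_var = ENV_VARS[provider][0]           # e.g. "OPENAI_API_KEY"
    available_models = MODELS.get(provider, [])   # e.g. ["gpt-4", "gpt-4o", ...]
```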
@@ -1,12 +1,10 @@
|
||||
from pathlib import Path
|
||||
|
||||
import click
|
||||
from crewai.cli.utils import copy_template,load_env_vars, write_env_file
|
||||
from crewai.cli.provider import get_provider_data, select_provider, select_model, PROVIDERS
|
||||
from crewai.cli.constants import ENV_VARS
|
||||
|
||||
from crewai.cli.utils import copy_template
|
||||
|
||||
|
||||
def create_crew(name, parent_folder=None):
|
||||
"""Create a new crew."""
|
||||
def create_folder_structure(name, parent_folder=None):
|
||||
folder_name = name.replace(" ", "_").replace("-", "_").lower()
|
||||
class_name = name.replace("_", " ").replace("-", " ").title().replace(" ", "")
|
||||
|
||||
@@ -28,19 +26,83 @@ def create_crew(name, parent_folder=None):
|
||||
(folder_path / "src" / folder_name).mkdir(parents=True)
|
||||
(folder_path / "src" / folder_name / "tools").mkdir(parents=True)
|
||||
(folder_path / "src" / folder_name / "config").mkdir(parents=True)
|
||||
with open(folder_path / ".env", "w") as file:
|
||||
file.write("OPENAI_API_KEY=YOUR_API_KEY")
|
||||
else:
|
||||
click.secho(
|
||||
f"\tFolder {folder_name} already exists. Please choose a different name.",
|
||||
fg="red",
|
||||
f"\tFolder {folder_name} already exists.",
|
||||
fg="yellow",
|
||||
)
|
||||
|
||||
return folder_path, folder_name, class_name
|
||||
|
||||
|
||||
|
||||
def copy_template_files(folder_path, name, class_name, parent_folder):
|
||||
package_dir = Path(__file__).parent
|
||||
templates_dir = package_dir / "templates" / "crew"
|
||||
|
||||
root_template_files = (
|
||||
[".gitignore", "pyproject.toml", "README.md"] if not parent_folder else []
|
||||
)
|
||||
tools_template_files = ["tools/custom_tool.py", "tools/__init__.py"]
|
||||
config_template_files = ["config/agents.yaml", "config/tasks.yaml"]
|
||||
src_template_files = (
|
||||
["__init__.py", "main.py", "crew.py"] if not parent_folder else ["crew.py"]
|
||||
)
|
||||
|
||||
for file_name in root_template_files:
|
||||
src_file = templates_dir / file_name
|
||||
dst_file = folder_path / file_name
|
||||
copy_template(src_file, dst_file, name, class_name, folder_path.name)
|
||||
|
||||
src_folder = folder_path / "src" / folder_path.name if not parent_folder else folder_path
|
||||
|
||||
for file_name in src_template_files:
|
||||
src_file = templates_dir / file_name
|
||||
dst_file = src_folder / file_name
|
||||
copy_template(src_file, dst_file, name, class_name, folder_path.name)
|
||||
|
||||
if not parent_folder:
|
||||
for file_name in tools_template_files + config_template_files:
|
||||
src_file = templates_dir / file_name
|
||||
dst_file = src_folder / file_name
|
||||
copy_template(src_file, dst_file, name, class_name, folder_path.name)
|
||||
|
||||
|
||||
def create_crew(name, parent_folder=None):
|
||||
folder_path, folder_name, class_name = create_folder_structure(name, parent_folder)
|
||||
env_vars = load_env_vars(folder_path)
|
||||
|
||||
provider_models = get_provider_data()
|
||||
if not provider_models:
|
||||
return
|
||||
|
||||
selected_provider = select_provider(provider_models)
|
||||
if not selected_provider:
|
||||
return
|
||||
provider = selected_provider
|
||||
|
||||
selected_model = select_model(provider, provider_models)
|
||||
if not selected_model:
|
||||
return
|
||||
model = selected_model
|
||||
|
||||
if provider in PROVIDERS:
|
||||
api_key_var = ENV_VARS[provider][0]
|
||||
else:
|
||||
api_key_var = click.prompt(
|
||||
f"Enter the environment variable name for your {provider.capitalize()} API key",
|
||||
type=str
|
||||
)
|
||||
|
||||
env_vars = {api_key_var: "YOUR_API_KEY_HERE"}
|
||||
write_env_file(folder_path, env_vars)
|
||||
|
||||
env_vars['MODEL'] = model
|
||||
click.secho(f"Selected model: {model}", fg="green")
|
||||
|
||||
package_dir = Path(__file__).parent
|
||||
templates_dir = package_dir / "templates" / "crew"
|
||||
|
||||
# List of template files to copy
|
||||
root_template_files = (
|
||||
[".gitignore", "pyproject.toml", "README.md"] if not parent_folder else []
|
||||
)
|
||||
|
||||
@@ -5,13 +5,13 @@ import click
|
||||
|
||||
def evaluate_crew(n_iterations: int, model: str) -> None:
|
||||
"""
|
||||
Test and Evaluate the crew by running a command in the Poetry environment.
|
||||
Test and Evaluate the crew by running a command in the UV environment.
|
||||
|
||||
Args:
|
||||
n_iterations (int): The number of iterations to test the crew.
|
||||
model (str): The model to test the crew with.
|
||||
"""
|
||||
command = ["poetry", "run", "test", str(n_iterations), model]
|
||||
command = ["uv", "run", "test", str(n_iterations), model]
|
||||
|
||||
try:
|
||||
if n_iterations <= 0:
|
||||
|
||||
@@ -5,13 +5,10 @@ import click
|
||||
|
||||
def install_crew() -> None:
|
||||
"""
|
||||
Install the crew by running the Poetry command to lock and install.
|
||||
Install the crew by running the UV command to lock and install.
|
||||
"""
|
||||
try:
|
||||
subprocess.run(["poetry", "lock"], check=True, capture_output=False, text=True)
|
||||
subprocess.run(
|
||||
["poetry", "install"], check=True, capture_output=False, text=True
|
||||
)
|
||||
subprocess.run(["uv", "sync"], check=True, capture_output=False, text=True)
|
||||
|
||||
except subprocess.CalledProcessError as e:
|
||||
click.echo(f"An error occurred while running the crew: {e}", err=True)
|
||||
|
||||
@@ -5,9 +5,9 @@ import click
|
||||
|
||||
def plot_flow() -> None:
|
||||
"""
|
||||
Plot the flow by running a command in the Poetry environment.
|
||||
Plot the flow by running a command in the UV environment.
|
||||
"""
|
||||
command = ["poetry", "run", "plot_flow"]
|
||||
command = ["uv", "run", "plot_flow"]
|
||||
|
||||
try:
|
||||
result = subprocess.run(command, capture_output=False, text=True, check=True)
|
||||
|
||||
@@ -25,7 +25,9 @@ class PlusAPI:
|
||||
|
||||
def _make_request(self, method: str, endpoint: str, **kwargs) -> requests.Response:
|
||||
url = urljoin(self.base_url, endpoint)
|
||||
return requests.request(method, url, headers=self.headers, **kwargs)
|
||||
session = requests.Session()
|
||||
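# trust_env=False keeps this session from picking up proxy and .netrc settings from the environment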
session.trust_env = False
|
||||
return session.request(method, url, headers=self.headers, **kwargs)
|
||||
|
||||
def login_to_tool_repository(self):
|
||||
return self._make_request("POST", f"{self.TOOLS_RESOURCE}/login")
|
||||
|
||||
src/crewai/cli/provider.py (new file, 186 lines)
@@ -0,0 +1,186 @@
|
||||
import json
|
||||
import time
|
||||
import requests
|
||||
from collections import defaultdict
|
||||
import click
|
||||
from pathlib import Path
|
||||
from crewai.cli.constants import PROVIDERS, MODELS, JSON_URL
|
||||
|
||||
def select_choice(prompt_message, choices):
|
||||
"""
|
||||
Presents a list of choices to the user and prompts them to select one.
|
||||
|
||||
Args:
|
||||
- prompt_message (str): The message to display to the user before presenting the choices.
|
||||
- choices (list): A list of options to present to the user.
|
||||
|
||||
Returns:
|
||||
- str: The selected choice from the list, or None if the operation is aborted or an invalid selection is made.
|
||||
"""
|
||||
click.secho(prompt_message, fg="cyan")
|
||||
for idx, choice in enumerate(choices, start=1):
|
||||
click.secho(f"{idx}. {choice}", fg="cyan")
|
||||
try:
|
||||
selected_index = click.prompt("Enter the number of your choice", type=int) - 1
|
||||
except click.exceptions.Abort:
|
||||
click.secho("Operation aborted by the user.", fg="red")
|
||||
return None
|
||||
if not (0 <= selected_index < len(choices)):
|
||||
click.secho("Invalid selection.", fg="red")
|
||||
return None
|
||||
return choices[selected_index]
|
||||
|
||||
def select_provider(provider_models):
|
||||
"""
|
||||
Presents a list of providers to the user and prompts them to select one.
|
||||
|
||||
Args:
|
||||
- provider_models (dict): A dictionary of provider models.
|
||||
|
||||
Returns:
|
||||
- str: The selected provider, or None if the operation is aborted or an invalid selection is made.
|
||||
"""
|
||||
predefined_providers = [p.lower() for p in PROVIDERS]
|
||||
all_providers = sorted(set(predefined_providers + list(provider_models.keys())))
|
||||
|
||||
provider = select_choice("Select a provider to set up:", predefined_providers + ['other'])
|
||||
if not provider:
|
||||
return None
|
||||
provider = provider.lower()
|
||||
|
||||
if provider == 'other':
|
||||
provider = select_choice("Select a provider from the full list:", all_providers)
|
||||
if not provider:
|
||||
return None
|
||||
return provider
|
||||
|
||||
def select_model(provider, provider_models):
|
||||
"""
|
||||
Presents a list of models for a given provider to the user and prompts them to select one.
|
||||
|
||||
Args:
|
||||
- provider (str): The provider for which to select a model.
|
||||
- provider_models (dict): A dictionary of provider models.
|
||||
|
||||
Returns:
|
||||
- str: The selected model, or None if the operation is aborted or an invalid selection is made.
|
||||
"""
|
||||
predefined_providers = [p.lower() for p in PROVIDERS]
|
||||
|
||||
if provider in predefined_providers:
|
||||
available_models = MODELS.get(provider, [])
|
||||
else:
|
||||
available_models = provider_models.get(provider, [])
|
||||
|
||||
if not available_models:
|
||||
click.secho(f"No models available for provider '{provider}'.", fg="red")
|
||||
return None
|
||||
|
||||
selected_model = select_choice(f"Select a model to use for {provider.capitalize()}:", available_models)
|
||||
return selected_model
|
||||
|
||||
def load_provider_data(cache_file, cache_expiry):
|
||||
"""
|
||||
Loads provider data from a cache file if it exists and is not expired. If the cache is expired or corrupted, it fetches the data from the web.
|
||||
|
||||
Args:
|
||||
- cache_file (Path): The path to the cache file.
|
||||
- cache_expiry (int): The cache expiry time in seconds.
|
||||
|
||||
Returns:
|
||||
- dict or None: The loaded provider data or None if the operation fails.
|
||||
"""
|
||||
current_time = time.time()
|
||||
if cache_file.exists() and (current_time - cache_file.stat().st_mtime) < cache_expiry:
|
||||
data = read_cache_file(cache_file)
|
||||
if data:
|
||||
return data
|
||||
click.secho("Cache is corrupted. Fetching provider data from the web...", fg="yellow")
|
||||
else:
|
||||
click.secho("Cache expired or not found. Fetching provider data from the web...", fg="cyan")
|
||||
return fetch_provider_data(cache_file)
|
||||
|
||||
def read_cache_file(cache_file):
|
||||
"""
|
||||
Reads and returns the JSON content from a cache file. Returns None if the file contains invalid JSON.
|
||||
|
||||
Args:
|
||||
- cache_file (Path): The path to the cache file.
|
||||
|
||||
Returns:
|
||||
- dict or None: The JSON content of the cache file or None if the JSON is invalid.
|
||||
"""
|
||||
try:
|
||||
with open(cache_file, "r") as f:
|
||||
return json.load(f)
|
||||
except json.JSONDecodeError:
|
||||
return None
|
||||
|
||||
def fetch_provider_data(cache_file):
|
||||
"""
|
||||
Fetches provider data from a specified URL and caches it to a file.
|
||||
|
||||
Args:
|
||||
- cache_file (Path): The path to the cache file.
|
||||
|
||||
Returns:
|
||||
- dict or None: The fetched provider data or None if the operation fails.
|
||||
"""
|
||||
try:
|
||||
response = requests.get(JSON_URL, stream=True, timeout=10)
|
||||
response.raise_for_status()
|
||||
data = download_data(response)
|
||||
with open(cache_file, "w") as f:
|
||||
json.dump(data, f)
|
||||
return data
|
||||
except requests.RequestException as e:
|
||||
click.secho(f"Error fetching provider data: {e}", fg="red")
|
||||
except json.JSONDecodeError:
|
||||
click.secho("Error parsing provider data. Invalid JSON format.", fg="red")
|
||||
return None
|
||||
|
||||
def download_data(response):
|
||||
"""
|
||||
Downloads data from a given HTTP response and returns the JSON content.
|
||||
|
||||
Args:
|
||||
- response (requests.Response): The HTTP response object.
|
||||
|
||||
Returns:
|
||||
- dict: The JSON content of the response.
|
||||
"""
|
||||
total_size = int(response.headers.get('content-length', 0))
|
||||
block_size = 8192
|
||||
data_chunks = []
|
||||
with click.progressbar(length=total_size, label='Downloading', show_pos=True) as progress_bar:
|
||||
for chunk in response.iter_content(block_size):
|
||||
if chunk:
|
||||
data_chunks.append(chunk)
|
||||
progress_bar.update(len(chunk))
|
||||
data_content = b''.join(data_chunks)
|
||||
return json.loads(data_content.decode('utf-8'))
|
||||
|
||||
def get_provider_data():
|
||||
"""
|
||||
Retrieves provider data from a cache file, filters out models based on provider criteria, and returns a dictionary of providers mapped to their models.
|
||||
|
||||
Returns:
|
||||
- dict or None: A dictionary of providers mapped to their models or None if the operation fails.
|
||||
"""
|
||||
cache_dir = Path.home() / '.crewai'
|
||||
cache_dir.mkdir(exist_ok=True)
|
||||
cache_file = cache_dir / 'provider_cache.json'
|
||||
cache_expiry = 24 * 3600
|
||||
|
||||
data = load_provider_data(cache_file, cache_expiry)
|
||||
if not data:
|
||||
return None
|
||||
|
||||
provider_models = defaultdict(list)
|
||||
for model_name, properties in data.items():
|
||||
provider = properties.get("litellm_provider", "").strip().lower()
|
||||
if 'http' in provider or provider == 'other':
|
||||
continue
|
||||
if provider:
|
||||
provider_models[provider].append(model_name)
|
||||
return provider_models
|
||||
@@ -1,4 +1,5 @@
|
||||
import subprocess
|
||||
|
||||
import click
|
||||
|
||||
|
||||
@@ -9,7 +10,7 @@ def replay_task_command(task_id: str) -> None:
|
||||
Args:
|
||||
task_id (str): The ID of the task to replay from.
|
||||
"""
|
||||
command = ["poetry", "run", "replay", task_id]
|
||||
command = ["uv", "run", "replay", task_id]
|
||||
|
||||
try:
|
||||
result = subprocess.run(command, capture_output=False, text=True, check=True)
|
||||
|
||||
@@ -1,23 +1,48 @@
import subprocess

import click
import tomllib
from packaging import version

from crewai.cli.utils import get_crewai_version


def run_crew() -> None:
    """
    Run the crew by running a command in the Poetry environment.
    Run the crew by running a command in the UV environment.
    """
    command = ["poetry", "run", "run_crew"]
    command = ["uv", "run", "run_crew"]
    crewai_version = get_crewai_version()
    min_required_version = "0.71.0"

    with open("pyproject.toml", "rb") as f:
        data = tomllib.load(f)

    if data.get("tool", {}).get("poetry") and (
        version.parse(crewai_version) < version.parse(min_required_version)
    ):
        click.secho(
            f"You are running an older version of crewAI ({crewai_version}) that uses poetry pyproject.toml. "
            f"Please run `crewai update` to update your pyproject.toml to use uv.",
            fg="red",
        )
        print()

    try:
        result = subprocess.run(command, capture_output=False, text=True, check=True)

        if result.stderr:
            click.echo(result.stderr, err=True)
        subprocess.run(command, capture_output=False, text=True, check=True)

    except subprocess.CalledProcessError as e:
        click.echo(f"An error occurred while running the crew: {e}", err=True)
        click.echo(e.output, err=True)
        click.echo(e.output, err=True, nl=True)

        with open("pyproject.toml", "rb") as f:
            data = tomllib.load(f)

        if data.get("tool", {}).get("poetry"):
            click.secho(
                "It's possible that you are using an old version of crewAI that uses poetry, please run `crewai update` to update your pyproject.toml to use uv.",
                fg="yellow",
            )

    except Exception as e:
        click.echo(f"An unexpected error occurred: {e}", err=True)
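The legacy-project detection above boils down to reading `pyproject.toml` with `tomllib` and checking for a `[tool.poetry]` table. A minimal standalone sketch of that same check (Python 3.11+, pointed at whatever project directory you are in):

```python
import tomllib
from pathlib import Path


def uses_poetry_layout(pyproject_path: str = "pyproject.toml") -> bool:
    """Return True if the pyproject.toml still declares a [tool.poetry] table."""
    data = tomllib.loads(Path(pyproject_path).read_text())
    return bool(data.get("tool", {}).get("poetry"))


if __name__ == "__main__":
    print(uses_poetry_layout())
```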
@@ -5,9 +5,9 @@ import click

def run_flow() -> None:
    """
    Run the flow by running a command in the Poetry environment.
    Run the flow by running a command in the UV environment.
    """
    command = ["poetry", "run", "run_flow"]
    command = ["uv", "run", "run_flow"]

    try:
        result = subprocess.run(command, capture_output=False, text=True, check=True)
@@ -4,17 +4,17 @@ Welcome to the {{crew_name}} Crew project, powered by [crewAI](https://crewai.co

## Installation

Ensure you have Python >=3.10 <=3.13 installed on your system. This project uses [Poetry](https://python-poetry.org/) for dependency management and package handling, offering a seamless setup and execution experience.
Ensure you have Python >=3.10 <=3.13 installed on your system. This project uses [UV](https://docs.astral.sh/uv/) for dependency management and package handling, offering a seamless setup and execution experience.

First, if you haven't already, install Poetry:
First, if you haven't already, install uv:

```bash
pip install poetry
pip install uv
```

Next, navigate to your project directory and install the dependencies:

1. First lock the dependencies and install them by using the CLI command:
(Optional) Lock the dependencies and install them by using the CLI command:
```bash
crewai install
```
@@ -2,7 +2,7 @@
import sys
from {{folder_name}}.crew import {{crew_name}}Crew

# This main file is intended to be a way for your to run your
# This main file is intended to be a way for you to run your
# crew locally, so refrain from adding necessary logic into this file.
# Replace with inputs you want to test with, it will automatically
# interpolate any tasks and agents information
@@ -1,15 +1,14 @@
[tool.poetry]
[project]
name = "{{folder_name}}"
version = "0.1.0"
description = "{{name}} using crewAI"
authors = ["Your Name <you@example.com>"]
authors = [{ name = "Your Name", email = "you@example.com" }]
requires-python = ">=3.10,<=3.13"
dependencies = [
    "crewai[tools]>=0.67.1,<1.0.0"
]

[tool.poetry.dependencies]
python = ">=3.10,<=3.13"
crewai = { extras = ["tools"], version = ">=0.70.1,<1.0.0" }


[tool.poetry.scripts]
[project.scripts]
{{folder_name}} = "{{folder_name}}.main:run"
run_crew = "{{folder_name}}.main:run"
train = "{{folder_name}}.main:train"
@@ -17,5 +16,5 @@ replay = "{{folder_name}}.main:replay"
test = "{{folder_name}}.main:test"

[build-system]
requires = ["poetry-core"]
build-backend = "poetry.core.masonry.api"
requires = ["hatchling"]
build-backend = "hatchling.build"
@@ -4,18 +4,17 @@ Welcome to the {{crew_name}} Crew project, powered by [crewAI](https://crewai.co

## Installation

Ensure you have Python >=3.10 <=3.13 installed on your system. This project uses [Poetry](https://python-poetry.org/) for dependency management and package handling, offering a seamless setup and execution experience.
Ensure you have Python >=3.10 <=3.13 installed on your system. This project uses [UV](https://docs.astral.sh/uv/) for dependency management and package handling, offering a seamless setup and execution experience.

First, if you haven't already, install Poetry:
First, if you haven't already, install uv:

```bash
pip install poetry
pip install uv
```

Next, navigate to your project directory and install the dependencies:

1. First lock the dependencies and then install them:

(Optional) Lock the dependencies and install them by using the CLI command:
```bash
crewai install
```
@@ -1,19 +1,19 @@
[tool.poetry]
[project]
name = "{{folder_name}}"
version = "0.1.0"
description = "{{name}} using crewAI"
authors = ["Your Name <you@example.com>"]
authors = [{ name = "Your Name", email = "you@example.com" }]
requires-python = ">=3.10,<=3.13"
dependencies = [
    "crewai[tools]>=0.67.1,<1.0.0",
    "asyncio"
]

[tool.poetry.dependencies]
python = ">=3.10,<=3.13"
crewai = { extras = ["tools"], version = ">=0.70.1,<1.0.0" }
asyncio = "*"

[tool.poetry.scripts]
[project.scripts]
{{folder_name}} = "{{folder_name}}.main:main"
run_flow = "{{folder_name}}.main:main"
plot_flow = "{{folder_name}}.main:plot"

[build-system]
requires = ["poetry-core"]
build-backend = "poetry.core.masonry.api"
requires = ["hatchling"]
build-backend = "hatchling.build"
@@ -1,20 +1,21 @@
[tool.poetry]
[project]
name = "{{folder_name}}"
version = "0.1.0"
description = "{{name}} using crewAI"
authors = ["Your Name <you@example.com>"]
requires-python = ">=3.10,<=3.13"
dependencies = [
    "crewai[tools]>=0.67.1,<1.0.0"
]

[tool.poetry.dependencies]
python = ">=3.10,<=3.13"
crewai = { extras = ["tools"], version = ">=0.70.1,<1.0.0" }


[tool.poetry.scripts]
[project.scripts]
{{folder_name}} = "{{folder_name}}.main:main"
run_crew = "{{folder_name}}.main:main"
train = "{{folder_name}}.main:train"
replay = "{{folder_name}}.main:replay"
test = "{{folder_name}}.main:test"

[build-system]
requires = ["poetry-core"]
build-backend = "poetry.core.masonry.api"
requires = ["hatchling"]
build-backend = "hatchling.build"
10 src/crewai/cli/templates/tool/.gitignore vendored Normal file
@@ -0,0 +1,10 @@
# Python-generated files
__pycache__/
*.py[oc]
build/
dist/
wheels/
*.egg-info

# Virtual environments
.venv
@@ -6,13 +6,13 @@ custom tools to power up your crews.

## Installing

Ensure you have Python >=3.10 <=3.13 installed on your system. This project
uses [Poetry](https://python-poetry.org/) for dependency management and package
uses [UV](https://docs.astral.sh/uv/) for dependency management and package
handling, offering a seamless setup and execution experience.

First, if you haven't already, install Poetry:
First, if you haven't already, install `uv`:

```bash
pip install poetry
pip install uv
```

Next, navigate to your project directory and install the dependencies with:
@@ -1,14 +1,10 @@
[tool.poetry]
[project]
name = "{{folder_name}}"
version = "0.1.0"
description = "Power up your crews with {{folder_name}}"
authors = ["Your Name <you@example.com>"]
readme = "README.md"
requires-python = ">=3.10,<=3.13"
dependencies = [
    "crewai[tools]>=0.70.1"
]

[tool.poetry.dependencies]
python = ">=3.10,<=3.13"
crewai = { extras = ["tools"], version = ">=0.70.1,<1.0.0" }

[build-system]
requires = ["poetry-core"]
build-backend = "poetry.core.masonry.api"
@@ -1,20 +1,24 @@
import base64
from pathlib import Path
import click
import os
import platform
import subprocess
import tempfile
from pathlib import Path
from netrc import netrc
import stat

import click
from rich.console import Console

from crewai.cli.command import BaseCommand, PlusAPIMixin
from crewai.cli import git
from crewai.cli.command import BaseCommand, PlusAPIMixin
from crewai.cli.utils import (
    get_project_name,
    get_project_description,
    get_project_name,
    get_project_version,
    tree_copy,
    tree_find_and_replace,
)
from rich.console import Console

console = Console()

@@ -24,6 +28,8 @@ class ToolCommand(BaseCommand, PlusAPIMixin):
    A class to handle tool repository related operations for CrewAI projects.
    """

    BASE_URL = "https://app.crewai.com/pypi/"

    def __init__(self):
        BaseCommand.__init__(self)
        PlusAPIMixin.__init__(self, telemetry=self._telemetry)
@@ -82,7 +88,7 @@ class ToolCommand(BaseCommand, PlusAPIMixin):

        with tempfile.TemporaryDirectory() as temp_build_dir:
            subprocess.run(
                ["poetry", "build", "-f", "sdist", "--output", temp_build_dir],
                ["uv", "build", "--sdist", "--out-dir", temp_build_dir],
                check=True,
                capture_output=False,
            )
@@ -92,7 +98,7 @@ class ToolCommand(BaseCommand, PlusAPIMixin):
        )
        if not tarball_filename:
            console.print(
                "Project build failed. Please ensure that the command `poetry build -f sdist` completes successfully.",
                "Project build failed. Please ensure that the command `uv build --sdist` completes successfully.",
                style="bold red",
            )
            raise SystemExit
@@ -143,68 +149,41 @@ class ToolCommand(BaseCommand, PlusAPIMixin):

        if login_response.status_code != 200:
            console.print(
                "Failed to authenticate to the tool repository. Make sure you have the access to tools.",
                "Authentication failed. Verify access to the tool repository, or try `crewai login`. ",
                style="bold red",
            )
            raise SystemExit

        login_response_json = login_response.json()
        for repository in login_response_json["repositories"]:
            self._add_repository_to_poetry(
                repository, login_response_json["credential"]
            )
        self._set_netrc_credentials(login_response_json["credential"])

        console.print(
            "Succesfully authenticated to the tool repository.", style="bold green"
            "Successfully authenticated to the tool repository.", style="bold green"
        )

    def _add_repository_to_poetry(self, repository, credentials):
        repository_handle = f"crewai-{repository['handle']}"
    def _set_netrc_credentials(self, credentials, netrc_path=None):
        if not netrc_path:
            netrc_filename = "_netrc" if platform.system() == "Windows" else ".netrc"
            netrc_path = Path.home() / netrc_filename
            netrc_path.touch(mode=stat.S_IRUSR | stat.S_IWUSR, exist_ok=True)

        add_repository_command = [
            "poetry",
            "source",
            "add",
            "--priority=explicit",
            repository_handle,
            repository["url"],
        ]
        add_repository_result = subprocess.run(
            add_repository_command, text=True, check=True
        )
        netrc_instance = netrc(file=netrc_path)
        netrc_instance.hosts["app.crewai.com"] = (credentials["username"], "", credentials["password"])

        if add_repository_result.stderr:
            click.echo(add_repository_result.stderr, err=True)
            raise SystemExit
        with open(netrc_path, 'w') as file:
            file.write(str(netrc_instance))

        add_repository_credentials_command = [
            "poetry",
            "config",
            f"http-basic.{repository_handle}",
            credentials["username"],
            credentials["password"],
        ]
        add_repository_credentials_result = subprocess.run(
            add_repository_credentials_command,
            capture_output=False,
            text=True,
            check=True,
        )

        if add_repository_credentials_result.stderr:
            click.echo(add_repository_credentials_result.stderr, err=True)
            raise SystemExit
        console.print(f"Added credentials to {netrc_path}", style="bold green")

    def _add_package(self, tool_details):
        tool_handle = tool_details["handle"]
        repository_handle = tool_details["repository"]["handle"]
        pypi_index_handle = f"crewai-{repository_handle}"

        add_package_command = [
            "poetry",
            "uv",
            "add",
            "--source",
            pypi_index_handle,
            "--extra-index-url",
            self.BASE_URL + repository_handle,
            tool_handle,
        ]
        add_package_result = subprocess.run(
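The login flow above now writes credentials into a `~/.netrc` file (`_netrc` on Windows) instead of configuring Poetry sources. A minimal, self-contained sketch of that write-back, using a temporary file and made-up credentials rather than the real CrewAI host response:

```python
import stat
import tempfile
from netrc import netrc
from pathlib import Path

with tempfile.TemporaryDirectory() as tmp:
    netrc_path = Path(tmp) / ".netrc"
    netrc_path.touch(mode=stat.S_IRUSR | stat.S_IWUSR)  # owner read/write only

    entry = netrc(file=netrc_path)
    # netrc.hosts maps machine -> (login, account, password)
    entry.hosts["app.crewai.com"] = ("example-user", "", "example-pass")

    netrc_path.write_text(str(entry))
    print(netrc_path.read_text())
    # Prints a standard netrc entry:
    # machine app.crewai.com / login example-user / password example-pass
```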
@@ -5,12 +5,12 @@ import click

def train_crew(n_iterations: int, filename: str) -> None:
    """
    Train the crew by running a command in the Poetry environment.
    Train the crew by running a command in the UV environment.

    Args:
        n_iterations (int): The number of iterations to train the crew.
    """
    command = ["poetry", "run", "train", str(n_iterations), filename]
    command = ["uv", "run", "train", str(n_iterations), filename]

    try:
        if n_iterations <= 0:
115 src/crewai/cli/update_crew.py Normal file
@@ -0,0 +1,115 @@
import shutil

import tomli_w
import tomllib


def update_crew() -> None:
    """Update the pyproject.toml of the Crew project to use uv."""
    migrate_pyproject("pyproject.toml", "pyproject.toml")


def migrate_pyproject(input_file, output_file):
    """
    Migrate the pyproject.toml to the new format.

    This function is used to migrate the pyproject.toml to the new format.
    And it will be used to migrate the pyproject.toml to the new format when uv is used.
    When the time comes that uv supports the new format, this function will be deprecated.
    """

    # Read the input pyproject.toml
    with open(input_file, "rb") as f:
        pyproject = tomllib.load(f)

    # Initialize the new project structure
    new_pyproject = {
        "project": {},
        "build-system": {"requires": ["hatchling"], "build-backend": "hatchling.build"},
    }

    # Migrate project metadata
    if "tool" in pyproject and "poetry" in pyproject["tool"]:
        poetry = pyproject["tool"]["poetry"]
        new_pyproject["project"]["name"] = poetry.get("name")
        new_pyproject["project"]["version"] = poetry.get("version")
        new_pyproject["project"]["description"] = poetry.get("description")
        new_pyproject["project"]["authors"] = [
            {
                "name": author.split("<")[0].strip(),
                "email": author.split("<")[1].strip(">").strip(),
            }
            for author in poetry.get("authors", [])
        ]
        new_pyproject["project"]["requires-python"] = poetry.get("python")
    else:
        # If it's already in the new format, just copy the project section
        new_pyproject["project"] = pyproject.get("project", {})

    # Migrate or copy dependencies
    if "dependencies" in new_pyproject["project"]:
        # If dependencies are already in the new format, keep them as is
        pass
    elif "dependencies" in poetry:
        new_pyproject["project"]["dependencies"] = []
        for dep, version in poetry["dependencies"].items():
            if isinstance(version, dict):  # Handle extras
                extras = ",".join(version.get("extras", []))
                new_dep = f"{dep}[{extras}]"
                if "version" in version:
                    new_dep += parse_version(version["version"])
            elif dep == "python":
                new_pyproject["project"]["requires-python"] = version
                continue
            else:
                new_dep = f"{dep}{parse_version(version)}"
            new_pyproject["project"]["dependencies"].append(new_dep)

    # Migrate or copy scripts
    if "scripts" in poetry:
        new_pyproject["project"]["scripts"] = poetry["scripts"]
    elif "scripts" in pyproject.get("project", {}):
        new_pyproject["project"]["scripts"] = pyproject["project"]["scripts"]
    else:
        new_pyproject["project"]["scripts"] = {}

    if (
        "run_crew" not in new_pyproject["project"]["scripts"]
        and len(new_pyproject["project"]["scripts"]) > 0
    ):
        # Extract the module name from any existing script
        existing_scripts = new_pyproject["project"]["scripts"]
        module_name = next(
            (value.split(".")[0] for value in existing_scripts.values() if "." in value)
        )

        new_pyproject["project"]["scripts"]["run_crew"] = f"{module_name}.main:run"

    # Migrate optional dependencies
    if "extras" in poetry:
        new_pyproject["project"]["optional-dependencies"] = poetry["extras"]

    # Backup the old pyproject.toml
    backup_file = "pyproject-old.toml"
    shutil.copy2(input_file, backup_file)
    print(f"Original pyproject.toml backed up as {backup_file}")

    # Write the new pyproject.toml
    with open(output_file, "wb") as f:
        tomli_w.dump(new_pyproject, f)

    print(f"Migration complete. New pyproject.toml written to {output_file}")


def parse_version(version: str) -> str:
    """Parse and convert version specifiers."""
    if version.startswith("^"):
        main_lib_version = version[1:].split(",")[0]
        addtional_lib_version = None
        if len(version[1:].split(",")) > 1:
            addtional_lib_version = version[1:].split(",")[1]

        return f">={main_lib_version}" + (
            f",{addtional_lib_version}" if addtional_lib_version else ""
        )
    return version
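To make the caret-range conversion above concrete, here is a small standalone mirror of `parse_version` (copied for illustration only, so the snippet runs on its own) together with the conversions it is expected to produce:

```python
def parse_version(version: str) -> str:
    # Mirror of the helper above, repeated here only so the example is runnable.
    if version.startswith("^"):
        parts = version[1:].split(",")
        main_lib_version = parts[0]
        additional = parts[1] if len(parts) > 1 else None
        return f">={main_lib_version}" + (f",{additional}" if additional else "")
    return version


assert parse_version("^0.70.1") == ">=0.70.1"      # caret becomes a lower bound
assert parse_version("^1.2,<2.0") == ">=1.2,<2.0"  # extra constraints are kept
assert parse_version(">=0.5.0") == ">=0.5.0"       # PEP 440 specifiers pass through
```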
@@ -1,13 +1,15 @@
import importlib.metadata
import os
import shutil
import click
import sys
import importlib.metadata
from functools import reduce
from typing import Any, Dict, List

import click
from rich.console import Console

from crewai.cli.authentication.utils import TokenManager
from functools import reduce
from rich.console import Console
from typing import Any, Dict, List
from crewai.cli.constants import ENV_VARS

if sys.version_info >= (3, 11):
    import tomllib
@@ -55,17 +57,14 @@ def simple_toml_parser(content):
def parse_toml(content):
    if sys.version_info >= (3, 11):
        return tomllib.loads(content)
    else:
        return simple_toml_parser(content)
    return simple_toml_parser(content)


def get_project_name(
    pyproject_path: str = "pyproject.toml", require: bool = False
) -> str | None:
    """Get the project name from the pyproject.toml file."""
    return _get_project_attribute(
        pyproject_path, ["tool", "poetry", "name"], require=require
    )
    return _get_project_attribute(pyproject_path, ["project", "name"], require=require)


def get_project_version(
@@ -73,7 +72,7 @@ def get_project_version(
) -> str | None:
    """Get the project version from the pyproject.toml file."""
    return _get_project_attribute(
        pyproject_path, ["tool", "poetry", "version"], require=require
        pyproject_path, ["project", "version"], require=require
    )


@@ -82,7 +81,7 @@ def get_project_description(
) -> str | None:
    """Get the project description from the pyproject.toml file."""
    return _get_project_attribute(
        pyproject_path, ["tool", "poetry", "description"], require=require
        pyproject_path, ["project", "description"], require=require
    )


@@ -97,10 +96,9 @@ def _get_project_attribute(
        pyproject_content = parse_toml(f.read())

    dependencies = (
        _get_nested_value(pyproject_content, ["tool", "poetry", "dependencies"])
        or {}
        _get_nested_value(pyproject_content, ["project", "dependencies"]) or []
    )
    if "crewai" not in dependencies:
    if not any(True for dep in dependencies if "crewai" in dep):
        raise Exception("crewai is not in the dependencies.")

    attribute = _get_nested_value(pyproject_content, keys)
@@ -203,3 +201,76 @@ def tree_find_and_replace(directory, find, replace):
            new_dirpath = os.path.join(path, new_dirname)
            old_dirpath = os.path.join(path, dirname)
            os.rename(old_dirpath, new_dirpath)


def load_env_vars(folder_path):
    """
    Loads environment variables from a .env file in the specified folder path.

    Args:
    - folder_path (Path): The path to the folder containing the .env file.

    Returns:
    - dict: A dictionary of environment variables.
    """
    env_file_path = folder_path / ".env"
    env_vars = {}
    if env_file_path.exists():
        with open(env_file_path, "r") as file:
            for line in file:
                key, _, value = line.strip().partition("=")
                if key and value:
                    env_vars[key] = value
    return env_vars


def update_env_vars(env_vars, provider, model):
    """
    Updates environment variables with the API key for the selected provider and model.

    Args:
    - env_vars (dict): Environment variables dictionary.
    - provider (str): Selected provider.
    - model (str): Selected model.

    Returns:
    - None
    """
    api_key_var = ENV_VARS.get(
        provider,
        [
            click.prompt(
                f"Enter the environment variable name for your {provider.capitalize()} API key",
                type=str,
            )
        ],
    )[0]

    if api_key_var not in env_vars:
        try:
            env_vars[api_key_var] = click.prompt(
                f"Enter your {provider.capitalize()} API key", type=str, hide_input=True
            )
        except click.exceptions.Abort:
            click.secho("Operation aborted by the user.", fg="red")
            return None
    else:
        click.secho(f"API key already exists for {provider.capitalize()}.", fg="yellow")

    env_vars["MODEL"] = model
    click.secho(f"Selected model: {model}", fg="green")
    return env_vars


def write_env_file(folder_path, env_vars):
    """
    Writes environment variables to a .env file in the specified folder.

    Args:
    - folder_path (Path): The path to the folder where the .env file will be written.
    - env_vars (dict): A dictionary of environment variables to write.
    """
    env_file_path = folder_path / ".env"
    with open(env_file_path, "w") as file:
        for key, value in env_vars.items():
            file.write(f"{key}={value}\n")
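The three `.env` helpers above round-trip a simple `KEY=value` file. A minimal, self-contained sketch of that round trip (the folder and values are invented for the example, and the interactive `click.prompt` step is skipped):

```python
import tempfile
from pathlib import Path

with tempfile.TemporaryDirectory() as tmp:
    folder = Path(tmp)

    # write_env_file-style dump
    env_vars = {"OPENAI_API_KEY": "sk-example", "MODEL": "gpt-4o-mini"}
    (folder / ".env").write_text("".join(f"{k}={v}\n" for k, v in env_vars.items()))

    # load_env_vars-style parse
    loaded = {}
    for line in (folder / ".env").read_text().splitlines():
        key, _, value = line.strip().partition("=")
        if key and value:
            loaded[key] = value

    assert loaded == env_vars
```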
@@ -15,7 +15,10 @@ class EntityMemory(Memory):
            storage
            if storage
            else RAGStorage(
                type="entities", allow_reset=False, embedder_config=embedder_config, crew=crew
                type="entities",
                allow_reset=False,
                embedder_config=embedder_config,
                crew=crew,
            )
        )
        super().__init__(storage)
@@ -59,7 +59,7 @@ class ToolUsage:
        agent: Any,
        action: Any,
    ) -> None:
        self._i18n: I18N = I18N()
        self._i18n: I18N = agent.i18n
        self._printer: Printer = Printer()
        self._telemetry: Telemetry = Telemetry()
        self._run_attempts: int = 1
@@ -20,7 +20,8 @@
    "getting_input": "This is the agent's final answer: {final_answer}\n\n",
    "summarizer_system_message": "You are a helpful assistant that summarizes text.",
    "sumamrize_instruction": "Summarize the following text, make sure to include all the important information: {group}",
    "summary": "This is a summary of our conversation so far:\n{merged_summary}"
    "summary": "This is a summary of our conversation so far:\n{merged_summary}",
    "manager_request": "Your best answer to your coworker asking you this, accounting for the context shared."
  },
  "errors": {
    "force_final_answer_error": "You can't keep going, this was the best you could do.\n {formatted_answer.text}",
@@ -1,21 +1,21 @@
"""Test Agent creation and execution basic functionality."""

import os
from unittest import mock
from unittest.mock import patch

import os
import pytest
from crewai_tools import tool

from crewai import Agent, Crew, Task
from crewai.agents.cache import CacheHandler
from crewai.agents.crew_agent_executor import CrewAgentExecutor
from crewai.agents.parser import AgentAction, CrewAgentParser, OutputParserException
from crewai.llm import LLM
from crewai.agents.parser import CrewAgentParser, OutputParserException
from crewai.tools.tool_calling import InstructorToolCalling
from crewai.tools.tool_usage import ToolUsage
from crewai.tools.tool_usage_events import ToolUsageFinished
from crewai.utilities import RPMController
from crewai_tools import tool
from crewai.agents.parser import AgentAction
from crewai.utilities.events import Emitter


@@ -73,7 +73,7 @@ def test_agent_creation():

def test_agent_default_values():
    agent = Agent(role="test role", goal="test goal", backstory="test backstory")
    assert agent.llm.model == "gpt-4o"
    assert agent.llm.model == "gpt-4o-mini"
    assert agent.allow_delegation is False


@@ -116,6 +116,7 @@ def test_custom_llm_temperature_preservation():
@pytest.mark.vcr(filter_headers=["authorization"])
def test_agent_execute_task():
    from langchain_openai import ChatOpenAI

    from crewai import Task

    agent = Agent(
@@ -206,7 +207,7 @@ def test_logging_tool_usage():
        verbose=True,
    )

    assert agent.llm.model == "gpt-4o"
    assert agent.llm.model == "gpt-4o-mini"
    assert agent.tools_handler.last_used_tool == {}
    task = Task(
        description="What is 3 times 4?",
@@ -602,6 +603,7 @@ def test_agent_respect_the_max_rpm_set(capsys):
@pytest.mark.vcr(filter_headers=["authorization"])
def test_agent_respect_the_max_rpm_set_over_crew_rpm(capsys):
    from unittest.mock import patch

    from crewai_tools import tool

    @tool
@@ -693,6 +695,7 @@ def test_agent_without_max_rpm_respet_crew_rpm(capsys):
@pytest.mark.vcr(filter_headers=["authorization"])
def test_agent_error_on_parsing_tool(capsys):
    from unittest.mock import patch

    from crewai_tools import tool

    @tool
@@ -855,7 +858,9 @@ def test_agent_function_calling_llm():
    tasks = [essay]
    crew = Crew(agents=[agent1], tasks=tasks)
    from unittest.mock import patch

    import instructor

    from crewai.tools.tool_usage import ToolUsage

    with patch.object(
|
||||
import pytest
|
||||
import requests
|
||||
import sys
|
||||
import unittest
|
||||
|
||||
from io import StringIO
|
||||
from requests.exceptions import JSONDecodeError
|
||||
from unittest.mock import MagicMock, Mock, patch
|
||||
|
||||
import pytest
|
||||
import requests
|
||||
from requests.exceptions import JSONDecodeError
|
||||
|
||||
from crewai.cli.deploy.main import DeployCommand
|
||||
from crewai.cli.utils import parse_toml
|
||||
|
||||
@@ -228,13 +228,11 @@ class TestDeployCommand(unittest.TestCase):
|
||||
"builtins.open",
|
||||
new_callable=unittest.mock.mock_open,
|
||||
read_data="""
|
||||
[tool.poetry]
|
||||
[project]
|
||||
name = "test_project"
|
||||
version = "0.1.0"
|
||||
|
||||
[tool.poetry.dependencies]
|
||||
python = "^3.10"
|
||||
crewai = { extras = ["tools"], version = ">=0.51.0,<1.0.0" }
|
||||
requires-python = ">=3.10,<=3.13"
|
||||
dependencies = ["crewai"]
|
||||
""",
|
||||
)
|
||||
def test_get_project_name_python_310(self, mock_open):
|
||||
@@ -248,13 +246,11 @@ class TestDeployCommand(unittest.TestCase):
|
||||
"builtins.open",
|
||||
new_callable=unittest.mock.mock_open,
|
||||
read_data="""
|
||||
[tool.poetry]
|
||||
[project]
|
||||
name = "test_project"
|
||||
version = "0.1.0"
|
||||
|
||||
[tool.poetry.dependencies]
|
||||
python = "^3.11"
|
||||
crewai = { extras = ["tools"], version = ">=0.51.0,<1.0.0" }
|
||||
requires-python = ">=3.10,<=3.13"
|
||||
dependencies = ["crewai"]
|
||||
""",
|
||||
)
|
||||
def test_get_project_name_python_311_plus(self, mock_open):
|
||||
|
||||
@@ -18,12 +18,12 @@ from crewai.cli import evaluate_crew
|
||||
def test_crew_success(mock_subprocess_run, n_iterations, model):
|
||||
"""Test the crew function for successful execution."""
|
||||
mock_subprocess_run.return_value = subprocess.CompletedProcess(
|
||||
args=f"poetry run test {n_iterations} {model}", returncode=0
|
||||
args=f"uv run test {n_iterations} {model}", returncode=0
|
||||
)
|
||||
result = evaluate_crew.evaluate_crew(n_iterations, model)
|
||||
|
||||
mock_subprocess_run.assert_called_once_with(
|
||||
["poetry", "run", "test", str(n_iterations), model],
|
||||
["uv", "run", "test", str(n_iterations), model],
|
||||
capture_output=False,
|
||||
text=True,
|
||||
check=True,
|
||||
@@ -55,14 +55,14 @@ def test_test_crew_called_process_error(mock_subprocess_run, click):
|
||||
n_iterations = 5
|
||||
mock_subprocess_run.side_effect = subprocess.CalledProcessError(
|
||||
returncode=1,
|
||||
cmd=["poetry", "run", "test", str(n_iterations), "gpt-4o"],
|
||||
cmd=["uv", "run", "test", str(n_iterations), "gpt-4o"],
|
||||
output="Error",
|
||||
stderr="Some error occurred",
|
||||
)
|
||||
evaluate_crew.evaluate_crew(n_iterations, "gpt-4o")
|
||||
|
||||
mock_subprocess_run.assert_called_once_with(
|
||||
["poetry", "run", "test", "5", "gpt-4o"],
|
||||
["uv", "run", "test", "5", "gpt-4o"],
|
||||
capture_output=False,
|
||||
text=True,
|
||||
check=True,
|
||||
@@ -70,7 +70,7 @@ def test_test_crew_called_process_error(mock_subprocess_run, click):
|
||||
click.echo.assert_has_calls(
|
||||
[
|
||||
mock.call.echo(
|
||||
"An error occurred while testing the crew: Command '['poetry', 'run', 'test', '5', 'gpt-4o']' returned non-zero exit status 1.",
|
||||
"An error occurred while testing the crew: Command '['uv', 'run', 'test', '5', 'gpt-4o']' returned non-zero exit status 1.",
|
||||
err=True,
|
||||
),
|
||||
mock.call.echo("Error", err=True),
|
||||
@@ -87,7 +87,7 @@ def test_test_crew_unexpected_exception(mock_subprocess_run, click):
|
||||
evaluate_crew.evaluate_crew(n_iterations, "gpt-4o")
|
||||
|
||||
mock_subprocess_run.assert_called_once_with(
|
||||
["poetry", "run", "test", "5", "gpt-4o"],
|
||||
["uv", "run", "test", "5", "gpt-4o"],
|
||||
capture_output=False,
|
||||
text=True,
|
||||
check=True,
|
||||
|
||||
@@ -92,16 +92,20 @@ class TestPlusAPI(unittest.TestCase):
|
||||
)
|
||||
self.assertEqual(response, mock_response)
|
||||
|
||||
@patch("crewai.cli.plus_api.requests.request")
|
||||
def test_make_request(self, mock_request):
|
||||
@patch("crewai.cli.plus_api.requests.Session")
|
||||
def test_make_request(self, mock_session):
|
||||
mock_response = MagicMock()
|
||||
mock_request.return_value = mock_response
|
||||
|
||||
mock_session_instance = mock_session.return_value
|
||||
mock_session_instance.request.return_value = mock_response
|
||||
|
||||
response = self.api._make_request("GET", "test_endpoint")
|
||||
|
||||
mock_request.assert_called_once_with(
|
||||
mock_session.assert_called_once()
|
||||
mock_session_instance.request.assert_called_once_with(
|
||||
"GET", f"{self.api.base_url}/test_endpoint", headers=self.api.headers
|
||||
)
|
||||
mock_session_instance.trust_env = False
|
||||
self.assertEqual(response, mock_response)
|
||||
|
||||
@patch("crewai.cli.plus_api.PlusAPI._make_request")
|
||||
|
||||
@@ -1,13 +1,16 @@
|
||||
import os
|
||||
import tempfile
|
||||
import unittest
|
||||
import unittest.mock
|
||||
import os
|
||||
from contextlib import contextmanager
|
||||
from io import StringIO
|
||||
from unittest import mock
|
||||
from unittest.mock import MagicMock, patch
|
||||
|
||||
from pytest import raises
|
||||
|
||||
from crewai.cli.tools.main import ToolCommand
|
||||
from io import StringIO
|
||||
from unittest.mock import patch, MagicMock
|
||||
|
||||
|
||||
@contextmanager
|
||||
def in_temp_dir():
|
||||
@@ -19,6 +22,7 @@ def in_temp_dir():
|
||||
finally:
|
||||
os.chdir(original_dir)
|
||||
|
||||
|
||||
@patch("crewai.cli.tools.main.subprocess.run")
|
||||
def test_create_success(mock_subprocess):
|
||||
with in_temp_dir():
|
||||
@@ -38,9 +42,7 @@ def test_create_success(mock_subprocess):
|
||||
)
|
||||
assert os.path.isfile(os.path.join("test_tool", "src", "test_tool", "tool.py"))
|
||||
|
||||
with open(
|
||||
os.path.join("test_tool", "src", "test_tool", "tool.py"), "r"
|
||||
) as f:
|
||||
with open(os.path.join("test_tool", "src", "test_tool", "tool.py"), "r") as f:
|
||||
content = f.read()
|
||||
assert "class TestTool" in content
|
||||
|
||||
@@ -49,6 +51,7 @@ def test_create_success(mock_subprocess):
|
||||
|
||||
assert "Creating custom tool test_tool..." in output
|
||||
|
||||
|
||||
@patch("crewai.cli.tools.main.subprocess.run")
|
||||
@patch("crewai.cli.plus_api.PlusAPI.get_tool")
|
||||
def test_install_success(mock_get, mock_subprocess_run):
|
||||
@@ -67,9 +70,15 @@ def test_install_success(mock_get, mock_subprocess_run):
|
||||
tool_command.install("sample-tool")
|
||||
output = fake_out.getvalue()
|
||||
|
||||
mock_get.assert_called_once_with("sample-tool")
|
||||
mock_get.assert_has_calls([mock.call("sample-tool"), mock.call().json()])
|
||||
mock_subprocess_run.assert_any_call(
|
||||
["poetry", "add", "--source", "crewai-sample-repo", "sample-tool"],
|
||||
[
|
||||
"uv",
|
||||
"add",
|
||||
"--extra-index-url",
|
||||
"https://app.crewai.com/pypi/sample-repo",
|
||||
"sample-tool",
|
||||
],
|
||||
capture_output=False,
|
||||
text=True,
|
||||
check=True,
|
||||
@@ -77,6 +86,7 @@ def test_install_success(mock_get, mock_subprocess_run):
|
||||
|
||||
assert "Succesfully installed sample-tool" in output
|
||||
|
||||
|
||||
@patch("crewai.cli.plus_api.PlusAPI.get_tool")
|
||||
def test_install_tool_not_found(mock_get):
|
||||
mock_get_response = MagicMock()
|
||||
@@ -95,6 +105,7 @@ def test_install_tool_not_found(mock_get):
|
||||
mock_get.assert_called_once_with("non-existent-tool")
|
||||
assert "No tool found with this name" in output
|
||||
|
||||
|
||||
@patch("crewai.cli.plus_api.PlusAPI.get_tool")
|
||||
def test_install_api_error(mock_get):
|
||||
mock_get_response = MagicMock()
|
||||
@@ -113,15 +124,16 @@ def test_install_api_error(mock_get):
|
||||
mock_get.assert_called_once_with("error-tool")
|
||||
assert "Failed to get tool details" in output
|
||||
|
||||
|
||||
@patch("crewai.cli.tools.main.git.Repository.is_synced", return_value=False)
|
||||
def test_publish_when_not_in_sync(mock_is_synced):
|
||||
with patch("sys.stdout", new=StringIO()) as fake_out, \
|
||||
raises(SystemExit):
|
||||
with patch("sys.stdout", new=StringIO()) as fake_out, raises(SystemExit):
|
||||
tool_command = ToolCommand()
|
||||
tool_command.publish(is_public=True)
|
||||
|
||||
assert "Local changes need to be resolved before publishing" in fake_out.getvalue()
|
||||
|
||||
|
||||
@patch("crewai.cli.tools.main.get_project_name", return_value="sample-tool")
|
||||
@patch("crewai.cli.tools.main.get_project_version", return_value="1.0.0")
|
||||
@patch("crewai.cli.tools.main.get_project_description", return_value="A sample tool")
|
||||
@@ -156,7 +168,7 @@ def test_publish_when_not_in_sync_and_force(
|
||||
mock_get_project_version.assert_called_with(require=True)
|
||||
mock_get_project_description.assert_called_with(require=False)
|
||||
mock_subprocess_run.assert_called_with(
|
||||
["poetry", "build", "-f", "sdist", "--output", unittest.mock.ANY],
|
||||
["uv", "build", "--sdist", "--out-dir", unittest.mock.ANY],
|
||||
check=True,
|
||||
capture_output=False,
|
||||
)
|
||||
@@ -169,6 +181,7 @@ def test_publish_when_not_in_sync_and_force(
|
||||
encoded_file=unittest.mock.ANY,
|
||||
)
|
||||
|
||||
|
||||
@patch("crewai.cli.tools.main.get_project_name", return_value="sample-tool")
|
||||
@patch("crewai.cli.tools.main.get_project_version", return_value="1.0.0")
|
||||
@patch("crewai.cli.tools.main.get_project_description", return_value="A sample tool")
|
||||
@@ -203,7 +216,7 @@ def test_publish_success(
|
||||
mock_get_project_version.assert_called_with(require=True)
|
||||
mock_get_project_description.assert_called_with(require=False)
|
||||
mock_subprocess_run.assert_called_with(
|
||||
["poetry", "build", "-f", "sdist", "--output", unittest.mock.ANY],
|
||||
["uv", "build", "--sdist", "--out-dir", unittest.mock.ANY],
|
||||
check=True,
|
||||
capture_output=False,
|
||||
)
|
||||
@@ -216,6 +229,7 @@ def test_publish_success(
|
||||
encoded_file=unittest.mock.ANY,
|
||||
)
|
||||
|
||||
|
||||
@patch("crewai.cli.tools.main.get_project_name", return_value="sample-tool")
|
||||
@patch("crewai.cli.tools.main.get_project_version", return_value="1.0.0")
|
||||
@patch("crewai.cli.tools.main.get_project_description", return_value="A sample tool")
|
||||
@@ -254,6 +268,7 @@ def test_publish_failure(
|
||||
assert "Failed to complete operation" in output
|
||||
assert "Name is already taken" in output
|
||||
|
||||
|
||||
@patch("crewai.cli.tools.main.get_project_name", return_value="sample-tool")
|
||||
@patch("crewai.cli.tools.main.get_project_version", return_value="1.0.0")
|
||||
@patch("crewai.cli.tools.main.get_project_description", return_value="A sample tool")
|
||||
@@ -291,54 +306,3 @@ def test_publish_api_error(
|
||||
|
||||
mock_publish.assert_called_once()
|
||||
assert "Request to Enterprise API failed" in output
|
||||
|
||||
@patch("crewai.cli.plus_api.PlusAPI.login_to_tool_repository")
|
||||
@patch("crewai.cli.tools.main.subprocess.run")
|
||||
def test_login_success(mock_subprocess_run, mock_login):
|
||||
mock_login_response = MagicMock()
|
||||
mock_login_response.status_code = 200
|
||||
mock_login_response.json.return_value = {
|
||||
"repositories": [
|
||||
{
|
||||
"handle": "tools",
|
||||
"url": "https://example.com/repo",
|
||||
}
|
||||
],
|
||||
"credential": {"username": "user", "password": "pass"},
|
||||
}
|
||||
mock_login.return_value = mock_login_response
|
||||
|
||||
mock_subprocess_run.return_value = MagicMock(stderr=None)
|
||||
|
||||
tool_command = ToolCommand()
|
||||
|
||||
with patch("sys.stdout", new=StringIO()) as fake_out:
|
||||
tool_command.login()
|
||||
output = fake_out.getvalue()
|
||||
|
||||
mock_login.assert_called_once()
|
||||
mock_subprocess_run.assert_any_call(
|
||||
[
|
||||
"poetry",
|
||||
"source",
|
||||
"add",
|
||||
"--priority=explicit",
|
||||
"crewai-tools",
|
||||
"https://example.com/repo",
|
||||
],
|
||||
text=True,
|
||||
check=True,
|
||||
)
|
||||
mock_subprocess_run.assert_any_call(
|
||||
[
|
||||
"poetry",
|
||||
"config",
|
||||
"http-basic.crewai-tools",
|
||||
"user",
|
||||
"pass",
|
||||
],
|
||||
capture_output=False,
|
||||
text=True,
|
||||
check=True,
|
||||
)
|
||||
assert "Succesfully authenticated to the tool repository" in output
|
||||
|
||||
@@ -8,7 +8,7 @@ from crewai.cli.train_crew import train_crew
|
||||
def test_train_crew_positive_iterations(mock_subprocess_run):
|
||||
n_iterations = 5
|
||||
mock_subprocess_run.return_value = subprocess.CompletedProcess(
|
||||
args=["poetry", "run", "train", str(n_iterations)],
|
||||
args=["uv", "run", "train", str(n_iterations)],
|
||||
returncode=0,
|
||||
stdout="Success",
|
||||
stderr="",
|
||||
@@ -17,7 +17,7 @@ def test_train_crew_positive_iterations(mock_subprocess_run):
|
||||
train_crew(n_iterations, "trained_agents_data.pkl")
|
||||
|
||||
mock_subprocess_run.assert_called_once_with(
|
||||
["poetry", "run", "train", str(n_iterations), "trained_agents_data.pkl"],
|
||||
["uv", "run", "train", str(n_iterations), "trained_agents_data.pkl"],
|
||||
capture_output=False,
|
||||
text=True,
|
||||
check=True,
|
||||
@@ -48,14 +48,14 @@ def test_train_crew_called_process_error(mock_subprocess_run, click):
|
||||
n_iterations = 5
|
||||
mock_subprocess_run.side_effect = subprocess.CalledProcessError(
|
||||
returncode=1,
|
||||
cmd=["poetry", "run", "train", str(n_iterations)],
|
||||
cmd=["uv", "run", "train", str(n_iterations)],
|
||||
output="Error",
|
||||
stderr="Some error occurred",
|
||||
)
|
||||
train_crew(n_iterations, "trained_agents_data.pkl")
|
||||
|
||||
mock_subprocess_run.assert_called_once_with(
|
||||
["poetry", "run", "train", str(n_iterations), "trained_agents_data.pkl"],
|
||||
["uv", "run", "train", str(n_iterations), "trained_agents_data.pkl"],
|
||||
capture_output=False,
|
||||
text=True,
|
||||
check=True,
|
||||
@@ -63,7 +63,7 @@ def test_train_crew_called_process_error(mock_subprocess_run, click):
|
||||
click.echo.assert_has_calls(
|
||||
[
|
||||
mock.call.echo(
|
||||
"An error occurred while training the crew: Command '['poetry', 'run', 'train', '5']' returned non-zero exit status 1.",
|
||||
"An error occurred while training the crew: Command '['uv', 'run', 'train', '5']' returned non-zero exit status 1.",
|
||||
err=True,
|
||||
),
|
||||
mock.call.echo("Error", err=True),
|
||||
@@ -79,7 +79,7 @@ def test_train_crew_unexpected_exception(mock_subprocess_run, click):
|
||||
train_crew(n_iterations, "trained_agents_data.pkl")
|
||||
|
||||
mock_subprocess_run.assert_called_once_with(
|
||||
["poetry", "run", "train", str(n_iterations), "trained_agents_data.pkl"],
|
||||
["uv", "run", "train", str(n_iterations), "trained_agents_data.pkl"],
|
||||
capture_output=False,
|
||||
text=True,
|
||||
check=True,
|
||||
|
||||