Mirror of https://github.com/crewAIInc/crewAI.git
Synced 2025-12-18 13:28:31 +00:00

Compare commits: 21 commits (`undo-agent` … `security`)
Commits:

- 16524ccfa8
- 263544524d
- 098a4312ab
- c724c0af70
- f6f430b26a
- a5f70d2307
- b55fc40c83
- d0ed4f5274
- ee34399b71
- 798d16a6c6
- c9152f2af8
- 24b09e97cd
- 39903f0c50
- c4bf713113
- 5d18c6312d
- 1f9baf9b2c
- 6fbc97b298
- 08bacfa892
- 1ea8115d56
- 6b906f09cf
- 6c29ebafea
.github/SECURITY.md (vendored, new file, 23 lines)

@@ -0,0 +1,23 @@
CrewAI takes the security of our software products and services seriously, which includes all source code repositories managed through our GitHub organization.

If you believe you have found a security vulnerability in any CrewAI product or service, please report it to us as described below.

## Reporting a Vulnerability

Please do not report security vulnerabilities through public GitHub issues.

To report a vulnerability, please email us at security@crewai.com.

Please include the requested information listed below so that we can triage your report more quickly:

- Type of issue (e.g. SQL injection, cross-site scripting, etc.)
- Full paths of the source file(s) related to the manifestation of the issue
- The location of the affected source code (tag/branch/commit or direct URL)
- Any special configuration required to reproduce the issue
- Step-by-step instructions to reproduce the issue (please include screenshots if needed)
- Proof-of-concept or exploit code (if possible)
- Impact of the issue, including how an attacker might exploit it

Once we have received your report, we will respond to you at the email address you provide. If the issue is confirmed, we will release a patch as soon as possible, depending on the complexity of the issue.

At this time, we are not offering a bug bounty program. Any rewards will be at our discretion.
.gitignore (vendored, +1 line)

```diff
@@ -17,3 +17,4 @@ rc-tests/*
 temp/*
 .vscode/*
 crew_tasks_output.json
+.dccache
```
@@ -252,6 +252,12 @@ or

python src/my_project/main.py
```

If an error occurs due to the use of poetry, please run the following command to update your crewai package:

```bash
crewai update
```

You should see the output in the console, and the `report.md` file should be created in the root of your project with the full final report.

In addition to the sequential process, you can use the hierarchical process, which automatically assigns a manager to the defined crew to properly coordinate the planning and execution of tasks through delegation and validation of results. [See more about the processes here](https://docs.crewai.com/core-concepts/Processes/).
@@ -6,7 +6,7 @@ icon: terminal

# CrewAI CLI Documentation

The CrewAI CLI provides a set of commands to interact with CrewAI, allowing you to create, train, run, and manage crews & flows.

## Installation

@@ -146,3 +146,34 @@ crewai run

Make sure to run these commands from the directory where your CrewAI project is set up.
Some commands may require additional configuration or setup within your project structure.
</Note>

### 9. API Keys

When running the `crewai create crew` command, the CLI will first show you the top 5 most common LLM providers and ask you to select one.

Once you've selected an LLM provider, you will be prompted for API keys.

#### Initial API key providers

The CLI will initially prompt for API keys for the following services:

* OpenAI
* Groq
* Anthropic
* Google Gemini

When you select a provider, the CLI will prompt you to enter your API key.

#### Other Options

If you select option 6, you will be able to select from a list of LiteLLM-supported providers.

When you select a provider, the CLI will prompt you to enter the key name and the API key.

See the following link for each provider's key name:

* [LiteLLM Providers](https://docs.litellm.ai/docs/providers)
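The keys you enter are typically written to your project's `.env` file as plain `KEY=value` lines (e.g. `OPENAI_API_KEY=...`). As an illustration of that file format — a hypothetical parser, not the CLI's actual implementation:

```python
# Minimal sketch of parsing a .env-style key file; illustrative only,
# not the CrewAI CLI's actual implementation.
def parse_env(text: str) -> dict[str, str]:
    env = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blanks and comments
        key, _, value = line.partition("=")
        env[key.strip()] = value.strip().strip('"')
    return env

example = """
# API keys collected by `crewai create crew`
OPENAI_API_KEY=sk-...
MODEL=gpt-4o-mini
"""
print(parse_env(example))
```

Each provider expects a specific key name (e.g. `OPENAI_API_KEY`); see the LiteLLM providers page linked above for the exact names.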
@@ -653,4 +653,17 @@ If you're interested in exploring additional examples of flows, we have a variet

4. **Meeting Assistant Flow**: This flow demonstrates how to broadcast one event to trigger multiple follow-up actions. For instance, after a meeting is completed, the flow can update a Trello board, send a Slack message, and save the results. It's a great example of handling multiple outcomes from a single event, making it ideal for comprehensive task management and notification systems. [View Example](https://github.com/crewAIInc/crewAI-examples/tree/main/meeting_assistant_flow)

By exploring these examples, you can gain insights into how to leverage CrewAI Flows for various use cases, from automating repetitive tasks to managing complex, multi-step processes with dynamic decision-making and human feedback.

Also, check out our YouTube video on how to use flows in CrewAI below!

<iframe
  width="560"
  height="315"
  src="https://www.youtube.com/embed/MTb5my6VOT8"
  title="YouTube video player"
  frameborder="0"
  allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share"
  referrerpolicy="strict-origin-when-cross-origin"
  allowfullscreen
></iframe>
@@ -1,277 +0,0 @@

---
title: Pipelines
description: Understanding and utilizing pipelines in the crewAI framework for efficient multi-stage task processing.
icon: timeline-arrow
---

## What is a Pipeline?

A pipeline in CrewAI represents a structured workflow that allows for the sequential or parallel execution of multiple crews. It provides a way to organize complex processes involving multiple stages, where the output of one stage can serve as input for subsequent stages.

## Key Terminology

Understanding the following terms is crucial for working effectively with pipelines:

- **Stage**: A distinct part of the pipeline, which can be either sequential (a single crew) or parallel (multiple crews executing concurrently).
- **Kickoff**: A specific execution of the pipeline for a given set of inputs, representing a single instance of processing through the pipeline.
- **Branch**: Parallel executions within a stage (e.g., concurrent crew operations).
- **Trace**: The journey of an individual input through the entire pipeline, capturing the path and transformations it undergoes.

Example pipeline structure:

```bash Pipeline
crew1 >> [crew2, crew3] >> crew4
```

This represents a pipeline with three stages:

1. A sequential stage (crew1)
2. A parallel stage with two branches (crew2 and crew3 executing concurrently)
3. Another sequential stage (crew4)

Each input creates its own kickoff, flowing through all stages of the pipeline. Multiple kickoffs can be processed concurrently, each following the defined pipeline structure.
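The stage semantics of `crew1 >> [crew2, crew3] >> crew4` can be modeled with plain `asyncio` — a toy sketch (not CrewAI's actual Pipeline implementation) where each "crew" is an async function, parallel branches run concurrently, and each stage's output feeds the next:

```python
import asyncio

# Toy model of sequential and parallel pipeline stages.
# Illustrative only -- not CrewAI's actual Pipeline implementation.
def make_crew(name):
    async def crew(data):
        await asyncio.sleep(0)          # stand-in for real agent work
        return f"{name}({data})"
    return crew

async def run_pipeline(stages, data):
    for stage in stages:
        if isinstance(stage, list):     # parallel stage: branches run concurrently
            data = list(await asyncio.gather(*(branch(data) for branch in stage)))
        else:                           # sequential stage: a single crew
            data = await stage(data)
    return data                         # output of the final stage

crew1, crew2, crew3, crew4 = (make_crew(n) for n in ["crew1", "crew2", "crew3", "crew4"])
result = asyncio.run(run_pipeline([crew1, [crew2, crew3], crew4], "input"))
print(result)
```

Note how the parallel stage hands the *list* of both branch outputs to the next stage — mirroring how downstream crews can see every branch's result.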
## Pipeline Attributes

| Attribute  | Parameters | Description                                                                                                       |
| :--------- | :--------- | :---------------------------------------------------------------------------------------------------------------- |
| **Stages** | `stages`   | A list of `PipelineStage` (crews, lists of crews, or routers) representing the stages to be executed in sequence. |

## Creating a Pipeline

When creating a pipeline, you define a series of stages, each consisting of either a single crew or a list of crews for parallel execution. The pipeline ensures that each stage is executed in order, with the output of one stage feeding into the next.

### Example: Assembling a Pipeline

```python
from crewai import Crew, Process, Pipeline

# Define your crews
research_crew = Crew(
    agents=[researcher],
    tasks=[research_task],
    process=Process.sequential
)

analysis_crew = Crew(
    agents=[analyst],
    tasks=[analysis_task],
    process=Process.sequential
)

writing_crew = Crew(
    agents=[writer],
    tasks=[writing_task],
    process=Process.sequential
)

# Assemble the pipeline
my_pipeline = Pipeline(
    stages=[research_crew, analysis_crew, writing_crew]
)
```

## Pipeline Methods

| Method           | Description                                                                                                                                                                        |
| :--------------- | :--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| **kickoff**      | Executes the pipeline, processing all stages and returning the results. This method initiates one or more kickoffs through the pipeline, handling the flow of data between stages. |
| **process_runs** | Runs the pipeline for each input provided, handling the flow and transformation of data between stages.                                                                            |

## Pipeline Output

The output of a pipeline in the CrewAI framework is encapsulated within the `PipelineKickoffResult` class. This class provides a structured way to access the results of the pipeline's execution, including various formats such as raw strings, JSON, and Pydantic models.

### Pipeline Output Attributes

| Attribute       | Parameters    | Type                      | Description                                                                                               |
| :-------------- | :------------ | :------------------------ | :-------------------------------------------------------------------------------------------------------- |
| **ID**          | `id`          | `UUID4`                   | A unique identifier for the pipeline output.                                                              |
| **Run Results** | `run_results` | `List[PipelineRunResult]` | A list of `PipelineRunResult` objects, each representing the output of a single run through the pipeline. |

### Pipeline Output Methods

| Method/Property    | Description                                            |
| :----------------- | :----------------------------------------------------- |
| **add_run_result** | Adds a `PipelineRunResult` to the list of run results. |

### Pipeline Run Result Attributes

| Attribute         | Parameters      | Type                          | Description                                                                                      |
| :---------------- | :-------------- | :---------------------------- | :------------------------------------------------------------------------------------------------ |
| **ID**            | `id`            | `UUID4`                       | A unique identifier for the run result.                                                          |
| **Raw**           | `raw`           | `str`                         | The raw output of the final stage in the pipeline kickoff.                                       |
| **Pydantic**      | `pydantic`      | `Any`                         | A Pydantic model object representing the structured output of the final stage, if applicable.    |
| **JSON Dict**     | `json_dict`     | `Union[Dict[str, Any], None]` | A dictionary representing the JSON output of the final stage, if applicable.                     |
| **Token Usage**   | `token_usage`   | `Dict[str, UsageMetrics]`     | A summary of token usage across all stages of the pipeline kickoff.                              |
| **Trace**         | `trace`         | `List[Any]`                   | A trace of the journey of inputs through the pipeline kickoff.                                   |
| **Crews Outputs** | `crews_outputs` | `List[CrewOutput]`            | A list of `CrewOutput` objects, representing the outputs from each crew in the pipeline kickoff. |

### Pipeline Run Result Methods and Properties

| Method/Property | Description                                                                                              |
| :-------------- | :-------------------------------------------------------------------------------------------------------- |
| **json**        | Returns the JSON string representation of the run result if the output format of the final task is JSON. |
| **to_dict**     | Converts the JSON and Pydantic outputs to a dictionary.                                                  |
| **str**         | Returns the string representation of the run result, prioritizing Pydantic, then JSON, then raw.         |

### Accessing Pipeline Outputs

Once a pipeline has been executed, its output can be accessed through the `PipelineOutput` object returned by the `process_runs` method. The `PipelineOutput` class provides access to individual `PipelineRunResult` objects, each representing a single run through the pipeline.

#### Example
```python
import json

# Define input data for the pipeline
input_data = [
    {"initial_query": "Latest advancements in AI"},
    {"initial_query": "Future of robotics"}
]

# Execute the pipeline
pipeline_output = await my_pipeline.process_runs(input_data)

# Access the results
for run_result in pipeline_output.run_results:
    print(f"Run ID: {run_result.id}")
    print(f"Final Raw Output: {run_result.raw}")
    if run_result.json_dict:
        print(f"JSON Output: {json.dumps(run_result.json_dict, indent=2)}")
    if run_result.pydantic:
        print(f"Pydantic Output: {run_result.pydantic}")
    print(f"Token Usage: {run_result.token_usage}")
    print(f"Trace: {run_result.trace}")
    print("Crew Outputs:")
    for crew_output in run_result.crews_outputs:
        print(f"  Crew: {crew_output.raw}")
    print("\n")
```

This example demonstrates how to access and work with the pipeline output, including individual run results and their associated data.
## Using Pipelines

Pipelines are particularly useful for complex workflows that involve multiple stages of processing, analysis, or content generation. They allow you to:

1. **Sequence Operations**: Execute crews in a specific order, ensuring that the output of one crew is available as input to the next.
2. **Parallel Processing**: Run multiple crews concurrently within a stage for increased efficiency.
3. **Manage Complex Workflows**: Break down large tasks into smaller, manageable steps executed by specialized crews.

### Example: Running a Pipeline

```python
# Define input data for the pipeline
input_data = [{"initial_query": "Latest advancements in AI"}]

# Execute the pipeline, initiating a run for each input
results = await my_pipeline.process_runs(input_data)

# Access the results
for result in results:
    print(f"Final Output: {result.raw}")
    print(f"Token Usage: {result.token_usage}")
    print(f"Trace: {result.trace}")  # Shows the path of the input through all stages
```
## Advanced Features

### Parallel Execution within Stages

You can define parallel execution within a stage by providing a list of crews, creating multiple branches:

```python
parallel_analysis_crew = Crew(agents=[financial_analyst], tasks=[financial_analysis_task])
market_analysis_crew = Crew(agents=[market_analyst], tasks=[market_analysis_task])

my_pipeline = Pipeline(
    stages=[
        research_crew,
        [parallel_analysis_crew, market_analysis_crew],  # Parallel execution (branching)
        writing_crew
    ]
)
```

### Routers in Pipelines

Routers are a powerful feature in crewAI pipelines that allow for dynamic decision-making and branching within your workflow. They enable you to direct the flow of execution based on specific conditions or criteria, making your pipelines more flexible and adaptive.

#### What is a Router?

A router in crewAI is a special component that can be included as a stage in your pipeline. It evaluates the input data and determines which path the execution should take next. This allows for conditional branching in your pipeline, where different crews or sub-pipelines can be executed based on the router's decision.

#### Key Components of a Router

1. **Routes**: A dictionary of named routes, each associated with a condition and a pipeline to execute if the condition is met.
2. **Default Route**: A fallback pipeline that is executed if none of the defined route conditions are met.

#### Creating a Router

Here's an example of how to create a router:
```python
from crewai import Router, Route, Pipeline, Crew, Agent, Task

# Define your agents
classifier = Agent(name="Classifier", role="Email Classifier")
urgent_handler = Agent(name="Urgent Handler", role="Urgent Email Processor")
normal_handler = Agent(name="Normal Handler", role="Normal Email Processor")

# Define your tasks
classify_task = Task(description="Classify the email based on its content and metadata.")
urgent_task = Task(description="Process and respond to urgent email quickly.")
normal_task = Task(description="Process and respond to normal email thoroughly.")

# Define your crews
classification_crew = Crew(agents=[classifier], tasks=[classify_task])  # scores email urgency from 1-10
urgent_crew = Crew(agents=[urgent_handler], tasks=[urgent_task])
normal_crew = Crew(agents=[normal_handler], tasks=[normal_task])

# Create pipelines for different urgency levels
urgent_pipeline = Pipeline(stages=[urgent_crew])
normal_pipeline = Pipeline(stages=[normal_crew])

# Create a router
email_router = Router(
    routes={
        "high_urgency": Route(
            condition=lambda x: x.get("urgency_score", 0) > 7,
            pipeline=urgent_pipeline
        ),
        "low_urgency": Route(
            condition=lambda x: x.get("urgency_score", 0) <= 7,
            pipeline=normal_pipeline
        )
    },
    default=normal_pipeline  # Fall back to the normal pipeline if no route matches
)

# Use the router in a main pipeline
main_pipeline = Pipeline(stages=[classification_crew, email_router])

inputs = [{"email": "..."}, {"email": "..."}]  # List of email data

main_pipeline.kickoff(inputs=inputs)
```

In this example, the router decides between an urgent pipeline and a normal pipeline based on the urgency score of the email. If the urgency score is greater than 7, it routes to the urgent pipeline; otherwise, it uses the normal pipeline. If no route condition matches, it falls back to the default normal pipeline.
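The route-selection logic can be modeled in plain Python — a toy sketch of condition evaluation with a default fallback (names are illustrative, not CrewAI's internals; the low-urgency condition is tightened here so the default is actually reachable):

```python
# Toy model of router route selection: the first route whose condition
# matches wins; otherwise the default is used. Illustrative only.
def select_route(routes, default, data):
    for name, (condition, pipeline) in routes.items():
        if condition(data):
            return name, pipeline
    return "default", default

routes = {
    "high_urgency": (lambda x: x.get("urgency_score", 0) > 7, "urgent_pipeline"),
    "low_urgency": (lambda x: 0 < x.get("urgency_score", 0) <= 7, "normal_pipeline"),
}

print(select_route(routes, "normal_pipeline", {"urgency_score": 9}))
print(select_route(routes, "normal_pipeline", {}))  # no score -> default
```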
#### Benefits of Using Routers

1. **Dynamic Workflow**: Adapt your pipeline's behavior based on input characteristics or intermediate results.
2. **Efficiency**: Route urgent tasks to quicker processes, reserving more thorough pipelines for less time-sensitive inputs.
3. **Flexibility**: Easily modify or extend your pipeline's logic without changing the core structure.
4. **Scalability**: Handle a wide range of email types and urgency levels with a single pipeline structure.

### Error Handling and Validation

The `Pipeline` class includes validation mechanisms to ensure the robustness of the pipeline structure:

- Validates that stages contain only Crew instances or lists of Crew instances.
- Prevents double nesting of stages to maintain a clear structure.
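These two rules can be illustrated with a standalone validator — a sketch of the idea with a stand-in `Crew` class, not CrewAI's actual validation code:

```python
# Sketch of stage validation mirroring the two rules above.
# `Crew` here is a stand-in class, not CrewAI's actual implementation.
class Crew:
    pass

def validate_stages(stages):
    for stage in stages:
        if isinstance(stage, Crew):
            continue  # a single crew is a valid sequential stage
        if isinstance(stage, list):
            # a parallel stage must contain only Crew instances (no double nesting)
            if not all(isinstance(branch, Crew) for branch in stage):
                raise ValueError("Parallel stages may only contain Crew instances")
            continue
        raise ValueError("Stages must be Crew instances or lists of Crew instances")

validate_stages([Crew(), [Crew(), Crew()]])  # valid structure passes silently
try:
    validate_stages([[[Crew()]]])            # double nesting is rejected
except ValueError as e:
    print(e)  # prints the parallel-stage error
```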
@@ -1,163 +0,0 @@

# Creating a CrewAI Pipeline Project
Welcome to the comprehensive guide for creating a new CrewAI pipeline project. This document will walk you through the steps to create, customize, and run your CrewAI pipeline project, ensuring you have everything you need to get started.

To learn more about CrewAI pipelines, visit the [CrewAI documentation](https://docs.crewai.com/core-concepts/Pipeline/).

## Prerequisites

Before getting started with CrewAI pipelines, make sure that you have installed CrewAI via pip:

```shell
$ pip install crewai crewai-tools
```

The same prerequisites for virtual environments and code IDEs apply as in regular CrewAI projects.

## Creating a New Pipeline Project

To create a new CrewAI pipeline project, you have two options:

1. For a basic pipeline template:

```shell
$ crewai create pipeline <project_name>
```

2. For a pipeline example that includes a router:

```shell
$ crewai create pipeline --router <project_name>
```

These commands will create a new project folder with the following structure:
```
<project_name>/
├── README.md
├── uv.lock
├── pyproject.toml
├── src/
│   └── <project_name>/
│       ├── __init__.py
│       ├── main.py
│       ├── crews/
│       │   ├── crew1/
│       │   │   ├── crew1.py
│       │   │   └── config/
│       │   │       ├── agents.yaml
│       │   │       └── tasks.yaml
│       │   └── crew2/
│       │       ├── crew2.py
│       │       └── config/
│       │           ├── agents.yaml
│       │           └── tasks.yaml
│       ├── pipelines/
│       │   ├── __init__.py
│       │   ├── pipeline1.py
│       │   └── pipeline2.py
│       └── tools/
│           ├── __init__.py
│           └── custom_tool.py
└── tests/
```
## Customizing Your Pipeline Project

To customize your pipeline project, you can:

1. Modify the crew files in `src/<project_name>/crews/` to define your agents and tasks for each crew.
2. Modify the pipeline files in `src/<project_name>/pipelines/` to define your pipeline structure.
3. Modify `src/<project_name>/main.py` to set up and run your pipelines.
4. Add your environment variables into the `.env` file.

## Example 1: Defining a Two-Stage Sequential Pipeline

Here's an example of how to define a pipeline with sequential stages in `src/<project_name>/pipelines/pipeline.py`:
```python
from crewai import Pipeline
from crewai.project import PipelineBase
from ..crews.research_crew.research_crew import ResearchCrew
from ..crews.write_x_crew.write_x_crew import WriteXCrew

@PipelineBase
class SequentialPipeline:
    def __init__(self):
        # Initialize crews
        self.research_crew = ResearchCrew().crew()
        self.write_x_crew = WriteXCrew().crew()

    def create_pipeline(self):
        return Pipeline(
            stages=[
                self.research_crew,
                self.write_x_crew
            ]
        )

    async def kickoff(self, inputs):
        pipeline = self.create_pipeline()
        results = await pipeline.kickoff(inputs)
        return results
```
## Example 2: Defining a Two-Stage Pipeline with Parallel Execution

```python
from crewai import Pipeline
from crewai.project import PipelineBase
from ..crews.research_crew.research_crew import ResearchCrew
from ..crews.write_x_crew.write_x_crew import WriteXCrew
from ..crews.write_linkedin_crew.write_linkedin_crew import WriteLinkedInCrew

@PipelineBase
class ParallelExecutionPipeline:
    def __init__(self):
        # Initialize crews
        self.research_crew = ResearchCrew().crew()
        self.write_x_crew = WriteXCrew().crew()
        self.write_linkedin_crew = WriteLinkedInCrew().crew()

    def create_pipeline(self):
        return Pipeline(
            stages=[
                self.research_crew,
                [self.write_x_crew, self.write_linkedin_crew]  # Parallel execution
            ]
        )

    async def kickoff(self, inputs):
        pipeline = self.create_pipeline()
        results = await pipeline.kickoff(inputs)
        return results
```

### Annotations

The main annotation you'll use for pipelines is `@PipelineBase`. This annotation is used to decorate your pipeline classes, similar to how `@CrewBase` is used for crews.

## Installing Dependencies

Dependencies for your project are managed with `uv`. Running the install command is optional, because `crewai run` will automatically install the dependencies for you:

```shell
$ cd <project_name>
$ crewai install  # optional
```

## Running Your Pipeline Project

To run your pipeline project, use the following command:

```shell
$ crewai run
```

This will initialize your pipeline and begin task execution as defined in your `main.py` file.

## Deploying Your Pipeline Project

Pipelines can be deployed in the same way as regular CrewAI projects. The easiest way is through [CrewAI+](https://www.crewai.com/crewaiplus), where you can deploy your pipeline in a few clicks.

Remember, when working with pipelines, you're orchestrating multiple crews to work together in a sequence or parallel fashion. This allows for more complex workflows and information processing tasks.
@@ -1,236 +0,0 @@

---
title: Starting a New CrewAI Project - Using Template
description: A comprehensive guide to starting a new CrewAI project, including the latest updates and project setup methods.
---

# Starting Your CrewAI Project

Welcome to the ultimate guide for starting a new CrewAI project. This document will walk you through the steps to create, customize, and run your CrewAI project, ensuring you have everything you need to get started.

Before we start, there are a couple of things to note:

1. CrewAI is a Python package and requires Python >=3.10 and <=3.13 to run.
2. The preferred way of setting up CrewAI is using the `crewai create crew` command. This will create a new project folder and install a skeleton template for you to work on.

## Prerequisites

Before getting started with CrewAI, make sure that you have installed it via pip:

```shell
$ pip install 'crewai[tools]'
```

## Creating a New Project

In this example, we will be using `uv` as our virtual environment manager.

To create a new CrewAI project, run the following CLI command:

```shell
$ crewai create crew <project_name>
```

This command will create a new project folder with the following structure:
```shell
my_project/
├── .gitignore
├── pyproject.toml
├── README.md
└── src/
    └── my_project/
        ├── __init__.py
        ├── main.py
        ├── crew.py
        ├── tools/
        │   ├── custom_tool.py
        │   └── __init__.py
        └── config/
            ├── agents.yaml
            └── tasks.yaml
```
You can now start developing your project by editing the files in the `src/my_project` folder. The `main.py` file is the entry point of your project, and the `crew.py` file is where you define your agents and tasks.

## Customizing Your Project

To customize your project, you can:

- Modify `src/my_project/config/agents.yaml` to define your agents.
- Modify `src/my_project/config/tasks.yaml` to define your tasks.
- Modify `src/my_project/crew.py` to add your own logic, tools, and specific arguments.
- Modify `src/my_project/main.py` to add custom inputs for your agents and tasks.
- Add your environment variables into the `.env` file.

### Example: Defining Agents and Tasks

#### agents.yaml

```yaml
researcher:
  role: >
    Job Candidate Researcher
  goal: >
    Find potential candidates for the job
  backstory: >
    You are adept at finding the right candidates by exploring various online
    resources. Your skill in identifying suitable candidates ensures the best
    match for job positions.
```

#### tasks.yaml

```yaml
research_candidates_task:
  description: >
    Conduct thorough research to find potential candidates for the specified job.
    Utilize various online resources and databases to gather a comprehensive list of potential candidates.
    Ensure that the candidates meet the job requirements provided.

    Job Requirements:
    {job_requirements}
  expected_output: >
    A list of 10 potential candidates with their contact information and brief profiles highlighting their suitability.
  agent: researcher # THIS NEEDS TO MATCH THE AGENT NAME IN THE AGENTS.YAML FILE AND THE AGENT DEFINED IN THE crew.py FILE
  context: # THESE NEED TO MATCH THE TASK NAMES DEFINED ABOVE AND THE TASKS.YAML FILE AND THE TASK DEFINED IN THE crew.py FILE
    - researcher
```

### Referencing Variables

Your annotated functions are matched by name. For example, you can reference an agent for a specific task from the `tasks.yaml` file. Ensure the names in your YAML files match the corresponding annotated function names in `crew.py`; otherwise, your task won't recognize the reference properly.

#### Example References

`agents.yaml`

```yaml
email_summarizer:
  role: >
    Email Summarizer
  goal: >
    Summarize emails into a concise and clear summary
  backstory: >
    You will create a 5 bullet point summary of the report
  llm: mixtal_llm
```

`tasks.yaml`

```yaml
email_summarizer_task:
  description: >
    Summarize the email into a 5 bullet point summary
  expected_output: >
    A 5 bullet point summary of the email
  agent: email_summarizer
  context:
    - reporting_task
    - research_task
```

Use the annotations to properly reference the agent and task in the `crew.py` file.

### Annotations include:

* `@agent`
* `@task`
* `@crew`
* `@tool`
* `@callback`
* `@output_json`
* `@output_pydantic`
* `@cache_handler`

`crew.py`
```python
# ...
@agent
def email_summarizer(self) -> Agent:
    return Agent(
        config=self.agents_config["email_summarizer"],
    )

@task
def email_summarizer_task(self) -> Task:
    return Task(
        config=self.tasks_config["email_summarizer_task"],
    )
# ...
```
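
The key point is that the decorated method's name is used to look up the matching entry in the YAML config. As a toy sketch of why the names must match (this is not CrewAI's actual implementation, just an illustration of decorator-based registration keyed by function name):

```python
# Toy sketch: register a factory function under its own name, which must
# exist as a key in the YAML-derived config. NOT CrewAI's real code.
agents_config = {"email_summarizer": {"role": "Email Summarizer"}}
registered_agents = {}

def agent(func):
    name = func.__name__
    if name not in agents_config:
        # A mismatched method name would fail here, which is why the
        # YAML key and the method name must be identical.
        raise KeyError(f"No YAML entry named {name!r}")
    registered_agents[name] = func
    return func

@agent
def email_summarizer():
    return agents_config["email_summarizer"]

print(registered_agents["email_summarizer"]())
```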

## Installing Dependencies

To install the dependencies for your project, you can use `uv`. Running this command is optional, since `crewai run` will automatically install the dependencies for you:

```shell
$ cd my_project
$ crewai install
```

This will install the dependencies specified in the `pyproject.toml` file.

## Interpolating Variables

Any variable interpolated in your `agents.yaml` and `tasks.yaml` files like `{variable}` will be replaced by the value of the variable in the `main.py` file.
#### tasks.yaml

```yaml
research_task:
  description: >
    Conduct a thorough research about the customer and competitors in the context
    of {customer_domain}.
    Make sure you find any interesting and relevant information given the
    current year is 2024.
  expected_output: >
    A complete report on the customer and their customers and competitors,
    including their demographics, preferences, market positioning and audience engagement.
```

#### main.py

```python
# main.py
def run():
    inputs = {
        "customer_domain": "crewai.com"
    }
    MyProjectCrew().crew().kickoff(inputs=inputs)
```
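
Conceptually, the substitution behaves like Python's `str.format` applied to every string in the config. A simplified illustration (not the exact internal code):

```python
# Simplified illustration of how {placeholders} in YAML strings are filled
# from the inputs dict. CrewAI performs this substitution internally.
description = (
    "Conduct a thorough research about the customer and competitors "
    "in the context of {customer_domain}."
)
inputs = {"customer_domain": "crewai.com"}

interpolated = description.format(**inputs)
print(interpolated)
```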

## Running Your Project

To run your project, use the following command:

```shell
$ crewai run
```

This will initialize your crew of AI agents and begin task execution as defined in your configuration in the `main.py` file.

### Replay Tasks from Latest Crew Kickoff

CrewAI now includes a replay feature that allows you to list the tasks from the last run and replay from a specific one. To use this feature, run:

```shell
$ crewai replay <task_id>
```

Replace `<task_id>` with the ID of the task you want to replay.

### Reset Crew Memory

If you need to reset the memory of your crew before running it again, you can do so by calling the reset memory feature:

```shell
$ crewai reset-memory
```

This will clear the crew's memory, allowing for a fresh start.

## Deploying Your Project

The easiest way to deploy your crew is through [CrewAI+](https://www.crewai.com/crewaiplus), where you can deploy your crew in a few clicks.

@@ -25,9 +25,9 @@ It provides a dashboard for tracking agent performance, session replays, and cus
 Additionally, AgentOps provides session drilldowns for viewing Crew agent interactions, LLM calls, and tool usage in real-time.
 This feature is useful for debugging and understanding how agents interact with users as well as other agents.

 
 
 

 ### Features

@@ -123,4 +123,4 @@ For feature requests or bug reports, please reach out to the AgentOps team on th
 <span> • </span>
 <a href="https://app.agentops.ai/?=crew">🖇️ AgentOps Dashboard</a>
 <span> • </span>
 <a href="https://docs.agentops.ai/introduction">📙 Documentation</a>
@@ -10,9 +10,9 @@ Langtrace is an open-source, external tool that helps you set up observability a
 While not built directly into CrewAI, Langtrace can be used alongside CrewAI to gain deep visibility into the cost, latency, and performance of your CrewAI Agents.
 This integration allows you to log hyperparameters, monitor performance regressions, and establish a process for continuous improvement of your Agents.

 
 
 

 ## Setup Instructions

@@ -69,4 +69,4 @@ This integration allows you to log hyperparameters, monitor performance regressi

 6. **Testing and Evaluations**

    - Set up automated tests for your CrewAI agents and tasks.
BIN  docs/images/crewai-run-poetry-error.png  (new file, 104 KiB)
BIN  docs/images/crewai-update.png  (new file, 50 KiB)
@@ -1,11 +1,9 @@
 ---
-title: Installation & Setup
+title: Installation
 description:
 icon: wrench
 ---

 ## Install CrewAI

 This guide will walk you through the installation process for CrewAI and its dependencies.
 CrewAI is a flexible and powerful AI framework that enables you to create and manage AI agents, tools, and tasks efficiently.
 Let's get started! 🚀
@@ -15,17 +13,8 @@ Let's get started! 🚀
 </Tip>

 <Steps>
-    <Step title="Install Poetry">
-        First, if you haven't already, install [Poetry](https://python-poetry.org/).
-        CrewAI uses Poetry for dependency management and package handling, offering a seamless setup and execution experience.
-        <CodeGroup>
-        ```shell Terminal
-        pip install poetry
-        ```
-        </CodeGroup>
-    </Step>
     <Step title="Install CrewAI">
-        Then, install the main CrewAI package:
+        Install the main CrewAI package with the following command:
         <CodeGroup>
         ```shell Terminal
         pip install crewai
@@ -45,15 +34,29 @@ Let's get started! 🚀
         </CodeGroup>
     </Step>
     <Step title="Upgrade CrewAI">
-        To upgrade CrewAI and CrewAI Tools to the latest version, run the following command:
+        To upgrade CrewAI and CrewAI Tools to the latest version, run the following command
         <CodeGroup>
         ```shell Terminal
         pip install --upgrade crewai crewai-tools
         ```
         </CodeGroup>
+        <Note>
+        1. If you're using an older version of CrewAI, you may receive a warning about using `Poetry` for dependency management.
+        
+
+        2. In this case, you'll need to run the command below to update your project.
+        This command will migrate your project to use [UV](https://github.com/astral-sh/uv) and update the necessary files.
+        ```shell Terminal
+        crewai update
+        ```
+        3. After running the command above, you should see the following output:
+        
+
+        4. You're all set! You can now proceed to the next step! 🎉
+        </Note>
     </Step>
     <Step title="Verify the installation">
-        To verify that `crewai` and `crewai-tools` are installed correctly, run the following command:
+        To verify that `crewai` and `crewai-tools` are installed correctly, run the following command
         <CodeGroup>
         ```shell Terminal
         pip freeze | grep crewai
@@ -45,5 +45,5 @@ By fostering collaborative intelligence, CrewAI empowers agents to work together

 ## Next Step

-- [Install CrewAI](/installation)
+- [Install CrewAI](/installation) to get started with your first agent.
@@ -66,18 +66,17 @@
         "pages": [
           "concepts/agents",
           "concepts/tasks",
-          "concepts/tools",
-          "concepts/processes",
           "concepts/crews",
+          "concepts/flows",
+          "concepts/llms",
+          "concepts/processes",
           "concepts/collaboration",
-          "concepts/pipeline",
           "concepts/training",
           "concepts/memory",
           "concepts/planning",
           "concepts/testing",
-          "concepts/flows",
           "concepts/cli",
-          "concepts/llms",
+          "concepts/tools",
           "concepts/langchain-tools",
           "concepts/llamaindex-tools"
         ]
@@ -26,6 +26,7 @@ Follow the steps below to get crewing! 🚣‍♂️
 <Step title="Modify your `agents.yaml` file">
 <Tip>
 You can also modify the agents as needed to fit your use case or copy and paste as is to your project.
+Any variable interpolated in your `agents.yaml` and `tasks.yaml` files like `{topic}` will be replaced by the value of the variable in the `main.py` file.
 </Tip>
 ```yaml agents.yaml
 # src/latest_ai_development/config/agents.yaml
@@ -124,7 +125,7 @@ Follow the steps below to get crewing! 🚣‍♂️
 ```
 </Step>
 <Step title="Feel free to pass custom inputs to your crew">
-For example, you can pass the `topic` input to your crew to customize the research and reporting to medical llms or any other topic.
+For example, you can pass the `topic` input to your crew to customize the research and reporting.
 ```python main.py
 #!/usr/bin/env python
 # src/latest_ai_development/main.py
@@ -233,6 +234,74 @@ Follow the steps below to get crewing! 🚣‍♂️
 </Step>
 </Steps>

+### Note on Consistency in Naming
+
+The names you use in your YAML files (`agents.yaml` and `tasks.yaml`) should match the method names in your Python code.
+For example, you can reference the agent for specific tasks from `tasks.yaml` file.
+This naming consistency allows CrewAI to automatically link your configurations with your code; otherwise, your task won't recognize the reference properly.
+
+#### Example References
+
+<Tip>
+Note how we use the same name for the agent in the `agents.yaml` (`email_summarizer`) file as the method name in the `crew.py` (`email_summarizer`) file.
+</Tip>
+
+```yaml agents.yaml
+email_summarizer:
+  role: >
+    Email Summarizer
+  goal: >
+    Summarize emails into a concise and clear summary
+  backstory: >
+    You will create a 5 bullet point summary of the report
+  llm: mixtal_llm
+```
+
+<Tip>
+Note how we use the same name for the agent in the `tasks.yaml` (`email_summarizer_task`) file as the method name in the `crew.py` (`email_summarizer_task`) file.
+</Tip>
+
+```yaml tasks.yaml
+email_summarizer_task:
+  description: >
+    Summarize the email into a 5 bullet point summary
+  expected_output: >
+    A 5 bullet point summary of the email
+  agent: email_summarizer
+  context:
+    - reporting_task
+    - research_task
+```
+
+Use the annotations to properly reference the agent and task in the `crew.py` file.
+
+### Annotations include:
+
+* `@agent`
+* `@task`
+* `@crew`
+* `@tool`
+* `@callback`
+* `@output_json`
+* `@output_pydantic`
+* `@cache_handler`
+
+```python crew.py
+# ...
+@agent
+def email_summarizer(self) -> Agent:
+    return Agent(
+        config=self.agents_config["email_summarizer"],
+    )
+
+@task
+def email_summarizer_task(self) -> Task:
+    return Task(
+        config=self.tasks_config["email_summarizer_task"],
+    )
+# ...
+```
+
 <Tip>
 In addition to the [sequential process](../how-to/sequential-process), you can use the [hierarchical process](../how-to/hierarchical-process),
 which automatically assigns a manager to the defined crew to properly coordinate the planning and execution of tasks through delegation and validation of results.
@@ -241,7 +310,7 @@ You can learn more about the core concepts [here](/concepts).

 ### Replay Tasks from Latest Crew Kickoff

-CrewAI now includes a replay feature that allows you to list the tasks from the last run and replay from a specific one. To use this feature, run:
+CrewAI now includes a replay feature that allows you to list the tasks from the last run and replay from a specific one. To use this feature, run.

 ```shell
 crewai replay <task_id>
@@ -1,7 +1,7 @@
 import uuid
 from abc import ABC, abstractmethod
 from copy import copy as shallow_copy
-from hashlib import md5
+from hashlib import sha256
 from typing import Any, Dict, List, Optional, TypeVar

 from pydantic import (
@@ -181,7 +181,7 @@ class BaseAgent(ABC, BaseModel):
             self._original_goal or self.goal,
             self._original_backstory or self.backstory,
         ]
-        return md5("|".join(source).encode(), usedforsecurity=False).hexdigest()
+        return sha256("|".join(source).encode()).hexdigest()

     @abstractmethod
     def execute_task(
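
The hunk above swaps the agent key derivation from MD5 to SHA-256 while keeping the same join-and-hash pattern. A standalone sketch of that pattern (the field values here are hypothetical stand-ins for role/goal/backstory):

```python
from hashlib import sha256

# Hypothetical agent fields standing in for role, goal, and backstory.
source = ["Researcher", "Find candidates", "You are adept at sourcing."]

# Join the fields with '|' and hash them, as in BaseAgent's key derivation.
key = sha256("|".join(source).encode()).hexdigest()
print(len(key))  # a SHA-256 hex digest is 64 characters
```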
19  src/crewai/cli/constants.py  (new file)
@@ -0,0 +1,19 @@
+ENV_VARS = {
+    'openai': ['OPENAI_API_KEY'],
+    'anthropic': ['ANTHROPIC_API_KEY'],
+    'gemini': ['GEMINI_API_KEY'],
+    'groq': ['GROQ_API_KEY'],
+    'ollama': ['FAKE_KEY'],
+}
+
+PROVIDERS = ['openai', 'anthropic', 'gemini', 'groq', 'ollama']
+
+MODELS = {
+    'openai': ['gpt-4', 'gpt-4o', 'gpt-4o-mini', 'o1-mini', 'o1-preview'],
+    'anthropic': ['claude-3-5-sonnet-20240620', 'claude-3-sonnet-20240229', 'claude-3-opus-20240229', 'claude-3-haiku-20240307'],
+    'gemini': ['gemini-1.5-flash', 'gemini-1.5-pro', 'gemini-gemma-2-9b-it', 'gemini-gemma-2-27b-it'],
+    'groq': ['llama-3.1-8b-instant', 'llama-3.1-70b-versatile', 'llama-3.1-405b-reasoning', 'gemma2-9b-it', 'gemma-7b-it'],
+    'ollama': ['llama3.1', 'mixtral'],
+}
+
+JSON_URL = "https://raw.githubusercontent.com/BerriAI/litellm/main/model_prices_and_context_window.json"
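
The create-crew flow below consults `ENV_VARS` to detect which provider a project is already configured for. In isolation that lookup is just a dictionary scan; a self-contained sketch (with the table copied from `constants.py` above):

```python
# Self-contained sketch of the provider-detection lookup that create_crew
# performs against ENV_VARS; the table is copied from constants.py.
ENV_VARS = {
    'openai': ['OPENAI_API_KEY'],
    'anthropic': ['ANTHROPIC_API_KEY'],
    'gemini': ['GEMINI_API_KEY'],
    'groq': ['GROQ_API_KEY'],
    'ollama': ['FAKE_KEY'],
}

def detect_provider(env):
    # Return the first provider whose expected env var is present.
    for provider, keys in ENV_VARS.items():
        if any(k in env for k in keys):
            return provider
    return None

print(detect_provider({'ANTHROPIC_API_KEY': 'sk-...'}))  # anthropic
print(detect_provider({}))  # None
```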
@@ -1,12 +1,17 @@
 from pathlib import Path

 import click

-from crewai.cli.utils import copy_template
+from crewai.cli.utils import copy_template, load_env_vars, write_env_file
+from crewai.cli.provider import (
+    get_provider_data,
+    select_provider,
+    select_model,
+    PROVIDERS,
+)
+from crewai.cli.constants import ENV_VARS
+import sys


-def create_crew(name, parent_folder=None):
-    """Create a new crew."""
+def create_folder_structure(name, parent_folder=None):
     folder_name = name.replace(" ", "_").replace("-", "_").lower()
     class_name = name.replace("_", " ").replace("-", " ").title().replace(" ", "")

@@ -15,11 +20,19 @@ def create_crew(name, parent_folder=None):
     else:
         folder_path = Path(folder_name)

-    click.secho(
-        f"Creating {'crew' if parent_folder else 'folder'} {folder_name}...",
-        fg="green",
-        bold=True,
-    )
+    if folder_path.exists():
+        if not click.confirm(
+            f"Folder {folder_name} already exists. Do you want to override it?"
+        ):
+            click.secho("Operation cancelled.", fg="yellow")
+            sys.exit(0)
+        click.secho(f"Overriding folder {folder_name}...", fg="green", bold=True)
+    else:
+        click.secho(
+            f"Creating {'crew' if parent_folder else 'folder'} {folder_name}...",
+            fg="green",
+            bold=True,
+        )

     if not folder_path.exists():
         folder_path.mkdir(parents=True)
@@ -28,19 +41,119 @@ def create_crew(name, parent_folder=None):
         (folder_path / "src" / folder_name).mkdir(parents=True)
         (folder_path / "src" / folder_name / "tools").mkdir(parents=True)
         (folder_path / "src" / folder_name / "config").mkdir(parents=True)
-        with open(folder_path / ".env", "w") as file:
-            file.write("OPENAI_API_KEY=YOUR_API_KEY")
-    else:
-        click.secho(
-            f"\tFolder {folder_name} already exists. Please choose a different name.",
-            fg="red",
-        )
+
+    return folder_path, folder_name, class_name
+
+
+def copy_template_files(folder_path, name, class_name, parent_folder):
+    package_dir = Path(__file__).parent
+    templates_dir = package_dir / "templates" / "crew"
+
+    root_template_files = (
+        [".gitignore", "pyproject.toml", "README.md"] if not parent_folder else []
+    )
+    tools_template_files = ["tools/custom_tool.py", "tools/__init__.py"]
+    config_template_files = ["config/agents.yaml", "config/tasks.yaml"]
+    src_template_files = (
+        ["__init__.py", "main.py", "crew.py"] if not parent_folder else ["crew.py"]
+    )
+
+    for file_name in root_template_files:
+        src_file = templates_dir / file_name
+        dst_file = folder_path / file_name
+        copy_template(src_file, dst_file, name, class_name, folder_path.name)
+
+    src_folder = (
+        folder_path / "src" / folder_path.name if not parent_folder else folder_path
+    )
+
+    for file_name in src_template_files:
+        src_file = templates_dir / file_name
+        dst_file = src_folder / file_name
+        copy_template(src_file, dst_file, name, class_name, folder_path.name)
+
+    if not parent_folder:
+        for file_name in tools_template_files + config_template_files:
+            src_file = templates_dir / file_name
+            dst_file = src_folder / file_name
+            copy_template(src_file, dst_file, name, class_name, folder_path.name)
+
+
+def create_crew(name, parent_folder=None):
+    folder_path, folder_name, class_name = create_folder_structure(name, parent_folder)
+    env_vars = load_env_vars(folder_path)
+
+    existing_provider = None
+    for provider, env_keys in ENV_VARS.items():
+        if any(key in env_vars for key in env_keys):
+            existing_provider = provider
+            break
+
+    if existing_provider:
+        if not click.confirm(
+            f"Found existing environment variable configuration for {existing_provider.capitalize()}. Do you want to override it?"
+        ):
+            click.secho("Keeping existing provider configuration.", fg="yellow")
+            return
+
+    provider_models = get_provider_data()
+    if not provider_models:
         return
+
+    while True:
+        selected_provider = select_provider(provider_models)
+        if selected_provider is None:  # User typed 'q'
+            click.secho("Exiting...", fg="yellow")
+            sys.exit(0)
+        if selected_provider:  # Valid selection
+            break
+        click.secho(
+            "No provider selected. Please try again or press 'q' to exit.", fg="red"
+        )
+
+    while True:
+        selected_model = select_model(selected_provider, provider_models)
+        if selected_model is None:  # User typed 'q'
+            click.secho("Exiting...", fg="yellow")
+            sys.exit(0)
+        if selected_model:  # Valid selection
+            break
+        click.secho(
+            "No model selected. Please try again or press 'q' to exit.", fg="red"
+        )
+
+    if selected_provider in PROVIDERS:
+        api_key_var = ENV_VARS[selected_provider][0]
+    else:
+        api_key_var = click.prompt(
+            f"Enter the environment variable name for your {selected_provider.capitalize()} API key",
+            type=str,
+            default="",
+        )
+
+    api_key_value = ""
+    click.echo(
+        f"Enter your {selected_provider.capitalize()} API key (press Enter to skip): ",
+        nl=False,
+    )
+    try:
+        api_key_value = input()
+    except (KeyboardInterrupt, EOFError):
+        api_key_value = ""
+
+    if api_key_value.strip():
+        env_vars = {api_key_var: api_key_value}
+        write_env_file(folder_path, env_vars)
+        click.secho("API key saved to .env file", fg="green")
+    else:
+        click.secho("No API key provided. Skipping .env file creation.", fg="yellow")
+
+    env_vars["MODEL"] = selected_model
+    click.secho(f"Selected model: {selected_model}", fg="green")

     package_dir = Path(__file__).parent
     templates_dir = package_dir / "templates" / "crew"

-    # List of template files to copy
     root_template_files = (
         [".gitignore", "pyproject.toml", "README.md"] if not parent_folder else []
     )
200
src/crewai/cli/provider.py
Normal file
200
src/crewai/cli/provider.py
Normal file
@@ -0,0 +1,200 @@
|
|||||||
|
import json
|
||||||
|
import time
|
||||||
|
import requests
|
||||||
|
from collections import defaultdict
|
||||||
|
import click
|
||||||
|
from pathlib import Path
|
||||||
|
from crewai.cli.constants import PROVIDERS, MODELS, JSON_URL
|
||||||
|
|
||||||
|
|
||||||
|
|
||||||
|
def select_choice(prompt_message, choices):
|
||||||
|
"""
|
||||||
|
Presents a list of choices to the user and prompts them to select one.
|
||||||
|
|
||||||
|
Args:
|
||||||
|
- prompt_message (str): The message to display to the user before presenting the choices.
|
||||||
|
- choices (list): A list of options to present to the user.
|
||||||
|
|
||||||
|
Returns:
|
||||||
|
- str: The selected choice from the list, or None if the user chooses to quit.
|
||||||
|
"""
|
||||||
|
|
||||||
|
provider_models = get_provider_data()
|
||||||
|
if not provider_models:
|
||||||
|
return
|
||||||
|
click.secho(prompt_message, fg="cyan")
|
||||||
|
for idx, choice in enumerate(choices, start=1):
|
||||||
|
click.secho(f"{idx}. {choice}", fg="cyan")
|
||||||
|
click.secho("q. Quit", fg="cyan")
|
||||||
|
|
||||||
|
while True:
|
||||||
|
choice = click.prompt("Enter the number of your choice or 'q' to quit", type=str)
|
||||||
|
|
||||||
|
if choice.lower() == 'q':
|
||||||
|
return None
|
||||||
|
|
||||||
|
try:
|
||||||
|
selected_index = int(choice) - 1
|
||||||
|
if 0 <= selected_index < len(choices):
|
||||||
|
return choices[selected_index]
|
||||||
|
except ValueError:
|
||||||
|
pass
|
||||||
|
|
||||||
|
click.secho("Invalid selection. Please select a number between 1 and 6 or 'q' to quit.", fg="red")
|
||||||
|
|
||||||
|
def select_provider(provider_models):
|
||||||
|
"""
|
||||||
|
Presents a list of providers to the user and prompts them to select one.
|
||||||
|
|
||||||
|
Args:
|
||||||
|
- provider_models (dict): A dictionary of provider models.
|
||||||
|
|
||||||
|
Returns:
|
||||||
|
- str: The selected provider
|
||||||
|
- None: If user explicitly quits
|
||||||
|
"""
|
||||||
|
predefined_providers = [p.lower() for p in PROVIDERS]
|
||||||
|
all_providers = sorted(set(predefined_providers + list(provider_models.keys())))
|
||||||
|
|
||||||
|
provider = select_choice("Select a provider to set up:", predefined_providers + ['other'])
|
||||||
|
if provider is None: # User typed 'q'
|
||||||
|
return None
|
||||||
|
|
||||||
|
if provider == 'other':
|
||||||
|
provider = select_choice("Select a provider from the full list:", all_providers)
|
||||||
|
if provider is None: # User typed 'q'
|
||||||
|
return None
|
||||||
|
|
||||||
|
return provider.lower() if provider else False
|
||||||
|
|
||||||
|
def select_model(provider, provider_models):
    """
    Presents a list of models for a given provider to the user and prompts them to select one.

    Args:
    - provider (str): The provider for which to select a model.
    - provider_models (dict): A dictionary of provider models.

    Returns:
    - str: The selected model, or None if the operation is aborted or an invalid selection is made.
    """
    predefined_providers = [p.lower() for p in PROVIDERS]

    if provider in predefined_providers:
        available_models = MODELS.get(provider, [])
    else:
        available_models = provider_models.get(provider, [])

    if not available_models:
        click.secho(f"No models available for provider '{provider}'.", fg="red")
        return None

    selected_model = select_choice(f"Select a model to use for {provider.capitalize()}:", available_models)
    return selected_model

def load_provider_data(cache_file, cache_expiry):
    """
    Loads provider data from a cache file if it exists and is not expired.
    If the cache is expired or corrupted, it fetches the data from the web.

    Args:
    - cache_file (Path): The path to the cache file.
    - cache_expiry (int): The cache expiry time in seconds.

    Returns:
    - dict or None: The loaded provider data or None if the operation fails.
    """
    current_time = time.time()
    if cache_file.exists() and (current_time - cache_file.stat().st_mtime) < cache_expiry:
        data = read_cache_file(cache_file)
        if data:
            return data
        click.secho("Cache is corrupted. Fetching provider data from the web...", fg="yellow")
    else:
        click.secho("Cache expired or not found. Fetching provider data from the web...", fg="cyan")
    return fetch_provider_data(cache_file)

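The freshness check in `load_provider_data` rests entirely on the cache file's mtime. A minimal sketch (the `cache_is_fresh` helper name is an assumption, not part of the CLI) exercises both sides of the 24-hour cutoff:

```python
import os
import tempfile
import time
from pathlib import Path


def cache_is_fresh(cache_file: Path, cache_expiry: int) -> bool:
    # Same mtime-based freshness test used by load_provider_data above.
    return cache_file.exists() and (time.time() - cache_file.stat().st_mtime) < cache_expiry


with tempfile.TemporaryDirectory() as tmp:
    cache = Path(tmp) / "provider_cache.json"
    cache.write_text("{}")
    fresh = cache_is_fresh(cache, 24 * 3600)  # just written -> fresh
    os.utime(cache, (0, 0))                   # backdate mtime to the epoch
    stale = cache_is_fresh(cache, 24 * 3600)  # far older than a day -> stale
```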
def read_cache_file(cache_file):
    """
    Reads and returns the JSON content from a cache file.
    Returns None if the file contains invalid JSON.

    Args:
    - cache_file (Path): The path to the cache file.

    Returns:
    - dict or None: The JSON content of the cache file or None if the JSON is invalid.
    """
    try:
        with open(cache_file, "r") as f:
            return json.load(f)
    except json.JSONDecodeError:
        return None

def fetch_provider_data(cache_file):
    """
    Fetches provider data from a specified URL and caches it to a file.

    Args:
    - cache_file (Path): The path to the cache file.

    Returns:
    - dict or None: The fetched provider data or None if the operation fails.
    """
    try:
        response = requests.get(JSON_URL, stream=True, timeout=10)
        response.raise_for_status()
        data = download_data(response)
        with open(cache_file, "w") as f:
            json.dump(data, f)
        return data
    except requests.RequestException as e:
        click.secho(f"Error fetching provider data: {e}", fg="red")
    except json.JSONDecodeError:
        click.secho("Error parsing provider data. Invalid JSON format.", fg="red")
    return None

def download_data(response):
    """
    Downloads data from a given HTTP response and returns the JSON content.

    Args:
    - response (requests.Response): The HTTP response object.

    Returns:
    - dict: The JSON content of the response.
    """
    total_size = int(response.headers.get('content-length', 0))
    block_size = 8192
    data_chunks = []
    with click.progressbar(length=total_size, label='Downloading', show_pos=True) as progress_bar:
        for chunk in response.iter_content(block_size):
            if chunk:
                data_chunks.append(chunk)
                progress_bar.update(len(chunk))
    data_content = b''.join(data_chunks)
    return json.loads(data_content.decode('utf-8'))

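The accumulate-then-decode pattern in `download_data` can be checked without a network call by standing in for the response object. `FakeResponse` and `join_chunks` below are assumptions for illustration; `FakeResponse` exposes only the two members `download_data` actually uses (`headers` and `iter_content`):

```python
import json


class FakeResponse:
    # Minimal stand-in (assumption) for requests.Response.
    def __init__(self, payload: bytes):
        self._payload = payload
        self.headers = {"content-length": str(len(payload))}

    def iter_content(self, block_size):
        for i in range(0, len(self._payload), block_size):
            yield self._payload[i : i + block_size]


def join_chunks(response, block_size=3):
    # Same accumulate-then-decode logic as download_data above,
    # minus the click progress bar.
    chunks = [c for c in response.iter_content(block_size) if c]
    return json.loads(b"".join(chunks).decode("utf-8"))


data = join_chunks(FakeResponse(b'{"openai": ["gpt-4o"]}'))
```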
def get_provider_data():
    """
    Retrieves provider data from a cache file, filters out models based on provider criteria,
    and returns a dictionary of providers mapped to their models.

    Returns:
    - dict or None: A dictionary of providers mapped to their models or None if the operation fails.
    """
    cache_dir = Path.home() / '.crewai'
    cache_dir.mkdir(exist_ok=True)
    cache_file = cache_dir / 'provider_cache.json'
    cache_expiry = 24 * 3600

    data = load_provider_data(cache_file, cache_expiry)
    if not data:
        return None

    provider_models = defaultdict(list)
    for model_name, properties in data.items():
        provider = properties.get("litellm_provider", "").strip().lower()
        if 'http' in provider or provider == 'other':
            continue
        if provider:
            provider_models[provider].append(model_name)
    return provider_models
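The filtering step of `get_provider_data` can be exercised in isolation. This sketch mirrors its `defaultdict` grouping; the helper name and the catalog entries are made up for illustration:

```python
from collections import defaultdict


def group_models_by_provider(model_catalog: dict) -> dict:
    # Mirrors the filtering in get_provider_data above: skip URL-style and
    # "other" providers, then group remaining model names by provider.
    provider_models = defaultdict(list)
    for model_name, properties in model_catalog.items():
        provider = properties.get("litellm_provider", "").strip().lower()
        if "http" in provider or provider == "other":
            continue
        if provider:
            provider_models[provider].append(model_name)
    return provider_models


catalog = {
    "gpt-4o": {"litellm_provider": "openai"},
    "claude-3-opus": {"litellm_provider": "anthropic"},
    "my-model": {"litellm_provider": "http://localhost:8000"},  # filtered out
    "mystery": {"litellm_provider": "other"},                   # filtered out
}
grouped = group_models_by_provider(catalog)
```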
@@ -2,6 +2,9 @@ import subprocess
 
 import click
 import tomllib
+from packaging import version
+
+from crewai.cli.utils import get_crewai_version
 
 
 def run_crew() -> None:
@@ -9,6 +12,22 @@ def run_crew() -> None:
     Run the crew by running a command in the UV environment.
     """
     command = ["uv", "run", "run_crew"]
+    crewai_version = get_crewai_version()
+    min_required_version = "0.71.0"
+
+    with open("pyproject.toml", "rb") as f:
+        data = tomllib.load(f)
+
+    if data.get("tool", {}).get("poetry") and (
+        version.parse(crewai_version) < version.parse(min_required_version)
+    ):
+        click.secho(
+            f"You are running an older version of crewAI ({crewai_version}) that uses poetry pyproject.toml. "
+            f"Please run `crewai update` to update your pyproject.toml to use uv.",
+            fg="red",
+        )
+        print()
+
     try:
         subprocess.run(command, capture_output=False, text=True, check=True)
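The version gate in the hunk above relies on `packaging.version` ordering release segments numerically, which a plain string comparison would get wrong for versions like these:

```python
from packaging import version

# packaging compares release segments numerically, so "0.9.0" < "0.71.0"
# (9 < 71), whereas lexicographic string comparison says the opposite.
old, minimum = version.parse("0.9.0"), version.parse("0.71.0")
numeric_ok = old < minimum
string_wrong = "0.9.0" < "0.71.0"
```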
@@ -9,6 +9,7 @@ import click
 from rich.console import Console
 
 from crewai.cli.authentication.utils import TokenManager
+from crewai.cli.constants import ENV_VARS
 
 if sys.version_info >= (3, 11):
     import tomllib
@@ -200,3 +201,76 @@ def tree_find_and_replace(directory, find, replace):
             new_dirpath = os.path.join(path, new_dirname)
             old_dirpath = os.path.join(path, dirname)
             os.rename(old_dirpath, new_dirpath)
+
+
+def load_env_vars(folder_path):
+    """
+    Loads environment variables from a .env file in the specified folder path.
+
+    Args:
+    - folder_path (Path): The path to the folder containing the .env file.
+
+    Returns:
+    - dict: A dictionary of environment variables.
+    """
+    env_file_path = folder_path / ".env"
+    env_vars = {}
+    if env_file_path.exists():
+        with open(env_file_path, "r") as file:
+            for line in file:
+                key, _, value = line.strip().partition("=")
+                if key and value:
+                    env_vars[key] = value
+    return env_vars
+
+
+def update_env_vars(env_vars, provider, model):
+    """
+    Updates environment variables with the API key for the selected provider and model.
+
+    Args:
+    - env_vars (dict): Environment variables dictionary.
+    - provider (str): Selected provider.
+    - model (str): Selected model.
+
+    Returns:
+    - dict or None: The updated environment variables, or None if the operation is aborted.
+    """
+    api_key_var = ENV_VARS.get(
+        provider,
+        [
+            click.prompt(
+                f"Enter the environment variable name for your {provider.capitalize()} API key",
+                type=str,
+            )
+        ],
+    )[0]
+
+    if api_key_var not in env_vars:
+        try:
+            env_vars[api_key_var] = click.prompt(
+                f"Enter your {provider.capitalize()} API key", type=str, hide_input=True
+            )
+        except click.exceptions.Abort:
+            click.secho("Operation aborted by the user.", fg="red")
+            return None
+    else:
+        click.secho(f"API key already exists for {provider.capitalize()}.", fg="yellow")
+
+    env_vars["MODEL"] = model
+    click.secho(f"Selected model: {model}", fg="green")
+    return env_vars
+
+
+def write_env_file(folder_path, env_vars):
+    """
+    Writes environment variables to a .env file in the specified folder.
+
+    Args:
+    - folder_path (Path): The path to the folder where the .env file will be written.
+    - env_vars (dict): A dictionary of environment variables to write.
+    """
+    env_file_path = folder_path / ".env"
+    with open(env_file_path, "w") as file:
+        for key, value in env_vars.items():
+            file.write(f"{key}={value}\n")
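The `.env` handling added above parses each line with `str.partition("=")`, which splits only at the first `=`, so values may themselves contain `=` (common in API keys). A round-trip sketch of the same read/write logic, using standalone helper names that are assumptions for illustration:

```python
import tempfile
from pathlib import Path


def write_env(folder: Path, env_vars: dict) -> None:
    # Same format as write_env_file above: one KEY=value per line.
    (folder / ".env").write_text("".join(f"{k}={v}\n" for k, v in env_vars.items()))


def read_env(folder: Path) -> dict:
    # Same parsing as load_env_vars above: partition at the first '='
    # only, keeping any further '=' characters in the value.
    env_vars = {}
    env_file = folder / ".env"
    if env_file.exists():
        for line in env_file.read_text().splitlines():
            key, _, value = line.strip().partition("=")
            if key and value:
                env_vars[key] = value
    return env_vars


with tempfile.TemporaryDirectory() as tmp:
    folder = Path(tmp)
    write_env(folder, {"MODEL": "gpt-4o", "OPENAI_API_KEY": "sk-test=abc"})
    restored = read_env(folder)
```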
@@ -4,7 +4,7 @@ import os
 import uuid
 import warnings
 from concurrent.futures import Future
-from hashlib import md5
+from hashlib import sha256
 from typing import TYPE_CHECKING, Any, Dict, List, Optional, Tuple, Union
 
 from pydantic import (
@@ -388,7 +388,7 @@ class Crew(BaseModel):
         source = [agent.key for agent in self.agents] + [
             task.key for task in self.tasks
         ]
-        return md5("|".join(source).encode(), usedforsecurity=False).hexdigest()
+        return sha256("|".join(source).encode()).hexdigest()
 
     def _setup_from_config(self):
         assert self.config is not None, "Config should not be None."
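The change above swaps MD5 for SHA-256 in `Crew.key`, but the fingerprinting scheme itself is unchanged: join the parts with `|`, hash, hex-encode. A standalone sketch (the helper name is an assumption):

```python
from hashlib import sha256


def key_fingerprint(parts):
    # Same scheme as the updated Crew.key and Task.key:
    # join with "|" and hash with SHA-256.
    return sha256("|".join(parts).encode()).hexdigest()


digest = key_fingerprint(["agent-key", "task-key"])
```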
@@ -5,7 +5,7 @@ import threading
 import uuid
 from concurrent.futures import Future
 from copy import copy
-from hashlib import md5
+from hashlib import sha256
 from typing import Any, Dict, List, Optional, Set, Tuple, Type, Union
 
 from opentelemetry.trace import Span
@@ -196,7 +196,7 @@ class Task(BaseModel):
         expected_output = self._original_expected_output or self.expected_output
         source = [description, expected_output]
 
-        return md5("|".join(source).encode(), usedforsecurity=False).hexdigest()
+        return sha256("|".join(source).encode()).hexdigest()
 
     def execute_async(
         self,
@@ -59,7 +59,7 @@ class ToolUsage:
         agent: Any,
         action: Any,
     ) -> None:
-        self._i18n: I18N = I18N()
+        self._i18n: I18N = agent.i18n
         self._printer: Printer = Printer()
         self._telemetry: Telemetry = Telemetry()
         self._run_attempts: int = 1
@@ -1,4 +1,4 @@
-import hashlib
+from hashlib import sha256
 from typing import Any, List, Optional
 
 from crewai.agents.agent_builder.base_agent import BaseAgent
@@ -32,5 +32,5 @@ def test_key():
         goal="test goal",
         backstory="test backstory",
     )
-    hash = hashlib.md5("test role|test goal|test backstory".encode()).hexdigest()
+    hash = sha256("test role|test goal|test backstory".encode()).hexdigest()
     assert agent.key == hash
@@ -1,6 +1,6 @@
 """Test Agent creation and execution basic functionality."""
 
-import hashlib
+from hashlib import sha256
 import json
 from concurrent.futures import Future
 from unittest import mock
@@ -2328,7 +2328,7 @@ def test_key():
         process=Process.sequential,
         tasks=tasks,
     )
-    hash = hashlib.md5(
+    hash = sha256(
         f"{researcher.key}|{writer.key}|{tasks[0].key}|{tasks[1].key}".encode()
     ).hexdigest()
 
@@ -2368,7 +2368,7 @@ def test_key_with_interpolated_inputs():
         process=Process.sequential,
         tasks=tasks,
     )
-    hash = hashlib.md5(
+    hash = sha256(
         f"{researcher.key}|{writer.key}|{tasks[0].key}|{tasks[1].key}".encode()
     ).hexdigest()
@@ -1,6 +1,6 @@
 """Test Agent creation and execution basic functionality."""
 
-import hashlib
+from hashlib import sha256
 import json
 import os
 from unittest.mock import MagicMock, patch
@@ -819,7 +819,7 @@ def test_key():
         description=original_description,
         expected_output=original_expected_output,
     )
-    hash = hashlib.md5(
+    hash = sha256(
         f"{original_description}|{original_expected_output}".encode()
     ).hexdigest()