Compare commits


17 Commits

Author SHA1 Message Date
Brandon Hancock (bhancock_ai)
276cb7b7e8 Merge branch 'main' into feat/improve-tooling-docs 2024-10-29 10:41:04 -04:00
Brandon Hancock (bhancock_ai)
048aa6cbcc Update flows.mdx - Fix link 2024-10-29 10:40:49 -04:00
Brandon Hancock
fa9949b9d0 Update flow docs to talk about self evaluation example 2024-10-28 12:18:03 -05:00
Brandon Hancock
500072d855 Update flow docs to talk about self evaluation example 2024-10-28 12:17:44 -05:00
Brandon Hancock
04bcfa6e2d Improve tooling docs 2024-10-28 09:40:56 -05:00
Brandon Hancock (bhancock_ai)
26afee9bed improve tool text description and args (#1512)
* improve tool text description and args

* fix lint

* Drop print

* add back in docstring
2024-10-25 18:42:55 -04:00
Vini Brasil
f29f4abdd7 Forward install command options to uv sync (#1510)
Allow passing additional options from `crewai install` directly to
`uv sync`. This enables commands like `crewai install --locked` to work
as expected by forwarding all flags and options to the underlying uv
command.
2024-10-25 11:20:41 -03:00
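For example (a hypothetical invocation illustrating the pass-through described above; `--locked` is taken from the commit message, and any other flags would be forwarded the same way):

```bash
# Flags after `crewai install` are handed to `uv sync` unchanged,
# so this effectively runs `uv sync --locked`.
crewai install --locked
```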
Eduardo Chiarotti
4589d6fe9d feat: add tomli so we can support 3.10 (#1506)
* feat: add tomli so we can support 3.10

* feat: add validation for poetry data
2024-10-25 10:33:21 -03:00
Brandon Hancock (bhancock_ai)
201e652fa2 update plot command (#1504) 2024-10-24 14:44:30 -04:00
João Moura
8bc07e6071 new version 2024-10-23 18:10:37 -03:00
João Moura
6baaad045a new version 2024-10-23 18:08:49 -03:00
João Moura
74c1703310 updating crewai version 2024-10-23 17:58:58 -03:00
Brandon Hancock (bhancock_ai)
a921828e51 Fix memory imports for embedding functions (#1497) 2024-10-23 11:21:27 -04:00
Brandon Hancock (bhancock_ai)
e1fd83e6a7 support unsafe code execution. add in docker install and running checks. (#1496)
* support unsafe code execution. add in docker install and running checks.

* Update return type
2024-10-23 11:01:00 -04:00
Maicon Peixinho
7d68e287cc chore(readme-fix): fixing step for 'running tests' in the contribution section (#1490)
Co-authored-by: Eduardo Chiarotti <dudumelgaco@hotmail.com>
2024-10-23 11:38:41 -03:00
Rip&Tear
f39a975e20 fix/fixed missing API prompt + CLI docs update (#1464)
* updated CLI to allow for submitting API keys

* updated click prompt to remove default number

* removed all unnecessary comments

* feat: implement crew creation CLI command

- refactor code to multiple functions
- Added ability for users to select provider and model when using crewai create command and save API key to .env

* refactored select_choice function for early return

* refactored select_provider to have an early return

* cleanup of comments

* refactor/Move functions into utils file, added new provider file and migrated functions there, new constants file + general function refactor

* small comment cleanup

* fix unnecessary deps

* Added docs for new CLI provider + fixed missing API prompt

* Minor doc updates

* allow user to bypass api key entry + incorrect number selected logic + ruff formatting

* ruff updates

* Fix spelling mistake

---------

Co-authored-by: Brandon Hancock (bhancock_ai) <109994880+bhancockio@users.noreply.github.com>
Co-authored-by: Brandon Hancock <brandon@brandonhancock.io>
2024-10-23 09:41:14 -04:00
João Moura
b8a3c29745 preparing new version 2024-10-23 05:34:34 -03:00
27 changed files with 1377 additions and 1149 deletions

View File

@@ -351,7 +351,7 @@ pre-commit install
### Running Tests
```bash
uvx pytest
uv run pytest .
```
### Running static type checks

View File

@@ -31,16 +31,17 @@ Think of an agent as a member of a team, with specific skills and a particular j
| **Max RPM** *(optional)* | `max_rpm` | Max RPM is the maximum number of requests per minute the agent can perform to avoid rate limits. It's optional and can be left unspecified, with a default value of `None`. |
| **Max Execution Time** *(optional)* | `max_execution_time` | Max Execution Time is the maximum execution time for an agent to execute a task. It's optional and can be left unspecified, with a default value of `None`, meaning no max execution time. |
| **Verbose** *(optional)* | `verbose` | Setting this to `True` configures the internal logger to provide detailed execution logs, aiding in debugging and monitoring. Default is `False`. |
| **Allow Delegation** *(optional)* | `allow_delegation` | Agents can delegate tasks or questions to one another, ensuring that each task is handled by the most suitable agent. Default is `False`.
| **Allow Delegation** *(optional)* | `allow_delegation` | Agents can delegate tasks or questions to one another, ensuring that each task is handled by the most suitable agent. Default is `False`. |
| **Step Callback** *(optional)* | `step_callback` | A function that is called after each step of the agent. This can be used to log the agent's actions or to perform other operations. It will overwrite the crew `step_callback`. |
| **Cache** *(optional)* | `cache` | Indicates if the agent should use a cache for tool usage. Default is `True`. |
| **System Template** *(optional)* | `system_template` | Specifies the system format for the agent. Default is `None`. |
| **Prompt Template** *(optional)* | `prompt_template` | Specifies the prompt format for the agent. Default is `None`. |
| **Response Template** *(optional)* | `response_template` | Specifies the response format for the agent. Default is `None`. |
| **Allow Code Execution** *(optional)* | `allow_code_execution` | Enable code execution for the agent. Default is `False`. |
| **Max Retry Limit** *(optional)* | `max_retry_limit` | Maximum number of retries for an agent to execute a task when an error occurs. Default is `2`.
| **Max Retry Limit** *(optional)* | `max_retry_limit` | Maximum number of retries for an agent to execute a task when an error occurs. Default is `2`. |
| **Use System Prompt** *(optional)* | `use_system_prompt` | Adds the ability to not use system prompt (to support o1 models). Default is `True`. |
| **Respect Context Window** *(optional)* | `respect_context_window` | Summary strategy to avoid overflowing the context window. Default is `True`. |
| **Code Execution Mode** *(optional)* | `code_execution_mode` | Determines the mode for code execution: 'safe' (using Docker) or 'unsafe' (direct execution on the host machine). Default is `safe`. |
## Creating an agent
@@ -83,6 +84,7 @@ agent = Agent(
max_retry_limit=2, # Optional
use_system_prompt=True, # Optional
respect_context_window=True, # Optional
code_execution_mode='safe', # Optional, defaults to 'safe'
)
```
@@ -156,4 +158,4 @@ crew = my_crew.kickoff(inputs={"input": "Mark Twain"})
## Conclusion
Agents are the building blocks of the CrewAI framework. By understanding how to define and interact with agents,
you can create sophisticated AI systems that leverage the power of collaborative intelligence.
you can create sophisticated AI systems that leverage the power of collaborative intelligence. The `code_execution_mode` attribute provides flexibility in how agents execute code, allowing for both secure and direct execution options.
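To make the new attribute concrete, here is a minimal sketch (the role, goal, and backstory values are illustrative placeholders, not from the docs):

```python Code
from crewai import Agent

coder = Agent(
    role="Senior Python Developer",  # illustrative
    goal="Validate ideas by writing and running small scripts",  # illustrative
    backstory="A developer who prefers executable proof over speculation.",  # illustrative
    allow_code_execution=True,   # must be enabled for the agent to run code
    code_execution_mode="safe",  # "safe" runs in Docker; "unsafe" runs directly on the host
)
```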

View File

@@ -6,7 +6,7 @@ icon: terminal
# CrewAI CLI Documentation
The CrewAI CLI provides a set of commands to interact with CrewAI, allowing you to create, train, run, and manage crews and pipelines.
The CrewAI CLI provides a set of commands to interact with CrewAI, allowing you to create, train, run, and manage crews & flows.
## Installation
@@ -146,3 +146,34 @@ crewai run
Make sure to run these commands from the directory where your CrewAI project is set up.
Some commands may require additional configuration or setup within your project structure.
</Note>
### 9. API Keys
When running the `crewai create crew` command, the CLI will first show you the top 5 most common LLM providers and ask you to select one.
Once you've selected an LLM provider, you will be prompted for API keys.
#### Initial API key providers
The CLI will initially prompt for API keys for the following services:
* OpenAI
* Groq
* Anthropic
* Google Gemini
When you select a provider, the CLI will prompt you to enter your API key.
#### Other Options
If you select option 6, you will be able to select from a list of LiteLLM-supported providers.
When you select a provider, the CLI will prompt you to enter the key name and the API key.
See the following link for each provider's key name:
* [LiteLLM Providers](https://docs.litellm.ai/docs/providers)
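Putting it together, a session might look roughly like this (a sketch only: the prompt strings come from the CLI source in this diff, but the exact provider listing and numbering may differ between versions):

```bash
$ crewai create crew my_crew
# Select a provider to set up:
#   1. openai
#   2. anthropic
#   ...
#   6. other
#   q. Quit
# Enter the number of your choice or 'q' to quit: 1
# Enter your Openai API key (press Enter to skip): sk-...
# API key saved to .env file
```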

View File

@@ -599,13 +599,114 @@ The generated plot will display nodes representing the tasks in your flow, with
By visualizing your flows, you can gain a clearer understanding of the workflow's structure, making it easier to debug, optimize, and communicate your AI processes to others.
### Conclusion
Plotting your flows is a powerful feature of CrewAI that enhances your ability to design and manage complex AI workflows. Whether you choose to use the `plot()` method or the command line, generating plots will provide you with a visual representation of your workflows, aiding in both development and presentation.
## Advanced
In this section, we explore more complex use cases of CrewAI Flows, starting with a self-evaluation loop. This pattern is crucial for developing AI systems that can iteratively improve their outputs through feedback.
### 1) Self-Evaluation Loop
The self-evaluation loop is a powerful pattern that allows AI workflows to automatically assess and refine their outputs. This example demonstrates how to set up a flow that generates content, evaluates it, and iterates based on feedback until the desired quality is achieved.
#### Overview
The self-evaluation loop involves two main Crews:
1. **ShakespeareanXPostCrew**: Generates a Shakespearean-style post on a given topic.
2. **XPostReviewCrew**: Evaluates the generated post, providing feedback on its validity and quality.
The process iterates until the post meets the criteria or a maximum retry limit is reached. This approach ensures high-quality outputs through iterative refinement.
#### Importance
This pattern is essential for building robust AI systems that can adapt and improve over time. By automating the evaluation and feedback loop, developers can ensure that their AI workflows produce reliable and high-quality results.
#### Main Code Highlights
Below is the `main.py` file for the self-evaluation loop flow:
```python
from typing import Optional

from crewai.flow.flow import Flow, listen, router, start
from pydantic import BaseModel

from self_evaluation_loop_flow.crews.shakespeare_crew.shakespeare_crew import (
    ShakespeareanXPostCrew,
)
from self_evaluation_loop_flow.crews.x_post_review_crew.x_post_review_crew import (
    XPostReviewCrew,
)


class ShakespeareXPostFlowState(BaseModel):
    x_post: str = ""
    feedback: Optional[str] = None
    valid: bool = False
    retry_count: int = 0


class ShakespeareXPostFlow(Flow[ShakespeareXPostFlowState]):
    @start("retry")
    def generate_shakespeare_x_post(self):
        print("Generating Shakespearean X post")
        topic = "Flying cars"
        result = (
            ShakespeareanXPostCrew()
            .crew()
            .kickoff(inputs={"topic": topic, "feedback": self.state.feedback})
        )
        print("X post generated", result.raw)
        self.state.x_post = result.raw

    @router(generate_shakespeare_x_post)
    def evaluate_x_post(self):
        if self.state.retry_count > 3:
            return "max_retry_exceeded"

        result = XPostReviewCrew().crew().kickoff(inputs={"x_post": self.state.x_post})
        self.state.valid = result["valid"]
        self.state.feedback = result["feedback"]

        print("valid", self.state.valid)
        print("feedback", self.state.feedback)
        self.state.retry_count += 1

        if self.state.valid:
            return "complete"

        return "retry"

    @listen("complete")
    def save_result(self):
        print("X post is valid")
        print("X post:", self.state.x_post)

        with open("x_post.txt", "w") as file:
            file.write(self.state.x_post)

    @listen("max_retry_exceeded")
    def max_retry_exceeded_exit(self):
        print("Max retry count exceeded")
        print("X post:", self.state.x_post)
        print("Feedback:", self.state.feedback)


def kickoff():
    shakespeare_flow = ShakespeareXPostFlow()
    shakespeare_flow.kickoff()


def plot():
    shakespeare_flow = ShakespeareXPostFlow()
    shakespeare_flow.plot()


if __name__ == "__main__":
    kickoff()
```
#### Code Highlights
- **Retry Mechanism**: The flow uses a retry mechanism to regenerate the post if it doesn't meet the criteria, up to a maximum of three retries.
- **Feedback Loop**: Feedback from the `XPostReviewCrew` is used to refine the post iteratively.
- **State Management**: The flow maintains state using a Pydantic model, ensuring type safety and clarity.
For a complete example and further details, please refer to the [Self Evaluation Loop Flow repository](https://github.com/crewAIInc/crewAI-examples/tree/main/self_evaluation_loop_flow).
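Distilled to its skeleton, the loop is just a `@start` label, a `@router` that returns labels, and `@listen` handlers for the terminal states. A minimal sketch (the crew calls are replaced with stubs so it runs standalone):

```python
from crewai.flow.flow import Flow, listen, router, start
from pydantic import BaseModel


class LoopState(BaseModel):
    draft: str = ""
    valid: bool = False
    retry_count: int = 0


class SelfEvalLoop(Flow[LoopState]):
    @start("retry")  # re-entered whenever the router returns "retry"
    def generate(self):
        # Stub standing in for the generation crew
        self.state.draft = f"candidate output (attempt {self.state.retry_count + 1})"

    @router(generate)
    def evaluate(self):
        if self.state.retry_count > 3:
            return "max_retry_exceeded"
        self.state.retry_count += 1
        # Stub standing in for the review crew's verdict
        self.state.valid = self.state.retry_count >= 2
        return "complete" if self.state.valid else "retry"

    @listen("complete")
    def done(self):
        print("Accepted:", self.state.draft)

    @listen("max_retry_exceeded")
    def give_up(self):
        print("Gave up after", self.state.retry_count, "attempts")


if __name__ == "__main__":
    SelfEvalLoop().kickoff()
```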
## Next Steps
If you're interested in exploring additional examples of flows, we have a variety of recommendations in our examples repository. Here are four specific flow examples, each showcasing unique use cases to help you match your current problem type to a specific example:
If you're interested in exploring additional examples of flows, we have a variety of recommendations in our examples repository. Here are five specific flow examples, each showcasing unique use cases to help you match your current problem type to a specific example:
1. **Email Auto Responder Flow**: This example demonstrates an infinite loop where a background job continually runs to automate email responses. It's a great use case for tasks that need to be performed repeatedly without manual intervention. [View Example](https://github.com/crewAIInc/crewAI-examples/tree/main/email_auto_responder_flow)
@@ -615,6 +716,8 @@ If you're interested in exploring additional examples of flows, we have a variet
4. **Meeting Assistant Flow**: This flow demonstrates how to broadcast one event to trigger multiple follow-up actions. For instance, after a meeting is completed, the flow can update a Trello board, send a Slack message, and save the results. It's a great example of handling multiple outcomes from a single event, making it ideal for comprehensive task management and notification systems. [View Example](https://github.com/crewAIInc/crewAI-examples/tree/main/meeting_assistant_flow)
5. **Self Evaluation Loop Flow**: This flow demonstrates a self-evaluation loop where AI workflows automatically assess and refine their outputs through feedback. It involves generating content, evaluating it, and iterating until the desired quality is achieved. This pattern is crucial for developing robust AI systems that can adapt and improve over time. [View Example](https://github.com/crewAIInc/crewAI-examples/tree/main/self_evaluation_loop_flow)
By exploring these examples, you can gain insights into how to leverage CrewAI Flows for various use cases, from automating repetitive tasks to managing complex, multi-step processes with dynamic decision-making and human feedback.
Also, check out our YouTube video on how to use flows in CrewAI below!

View File

@@ -118,7 +118,7 @@ Alternatively, you can directly pass the OpenAIEmbeddingFunction to the embedder
Example:
```python Code
from crewai import Crew, Agent, Task, Process
from chromadb.utils.embedding_functions.openai_embedding_function import OpenAIEmbeddingFunction
from chromadb.utils.embedding_functions import OpenAIEmbeddingFunction
my_crew = Crew(
agents=[...],
@@ -174,6 +174,7 @@ my_crew = Crew(
### Using Azure OpenAI embeddings
```python Code
from chromadb.utils.embedding_functions import OpenAIEmbeddingFunction
from crewai import Crew, Agent, Task, Process
my_crew = Crew(
@@ -182,7 +183,7 @@ my_crew = Crew(
process=Process.sequential,
memory=True,
verbose=True,
embedder=embedding_functions.OpenAIEmbeddingFunction(
embedder=OpenAIEmbeddingFunction(
api_key="YOUR_API_KEY",
api_base="YOUR_API_BASE_PATH",
api_type="azure",
@@ -195,6 +196,7 @@ my_crew = Crew(
### Using Vertex AI embeddings
```python Code
from chromadb.utils.embedding_functions import GoogleVertexEmbeddingFunction
from crewai import Crew, Agent, Task, Process
my_crew = Crew(
@@ -203,7 +205,7 @@ my_crew = Crew(
process=Process.sequential,
memory=True,
verbose=True,
embedder=embedding_functions.GoogleVertexEmbeddingFunction(
embedder=GoogleVertexEmbeddingFunction(
project_id="YOUR_PROJECT_ID",
region="YOUR_REGION",
api_key="YOUR_API_KEY",

View File

@@ -20,14 +20,21 @@ pip install 'crewai[tools]'
### Subclassing `BaseTool`
To create a personalized tool, inherit from `BaseTool` and define the necessary attributes and the `_run` method.
To create a personalized tool, inherit from `BaseTool` and define the necessary attributes, including the `args_schema` for input validation, and the `_run` method.
```python Code
from typing import Type

from crewai_tools import BaseTool
from pydantic import BaseModel, Field


class MyToolInput(BaseModel):
    """Input schema for MyCustomTool."""

    argument: str = Field(..., description="Description of the argument.")


class MyCustomTool(BaseTool):
    name: str = "Name of my tool"
    description: str = "What this tool does. It's vital for effective utilization."
    args_schema: Type[BaseModel] = MyToolInput

    def _run(self, argument: str) -> str:
        # Your tool's logic here
        ...
```
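For a self-contained reference, here is a hedged end-to-end sketch following the same pattern (the word-count tool, its names, and its logic are illustrative, not part of crewai_tools):

```python Code
from typing import Type

from crewai_tools import BaseTool
from pydantic import BaseModel, Field


class WordCountInput(BaseModel):
    """Input schema for WordCountTool."""

    text: str = Field(..., description="The text whose words should be counted.")


class WordCountTool(BaseTool):
    name: str = "Word counter"
    description: str = "Counts the number of words in a piece of text."
    args_schema: Type[BaseModel] = WordCountInput

    def _run(self, text: str) -> str:
        # Deterministic local logic; real tools typically call APIs or databases.
        return f"The text contains {len(text.split())} words."
```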

View File

@@ -1,6 +1,6 @@
[project]
name = "crewai"
version = "0.74.2"
version = "0.76.2"
description = "Cutting-edge framework for orchestrating role-playing, autonomous AI agents. By fostering collaborative intelligence, CrewAI empowers agents to work together seamlessly, tackling complex tasks."
readme = "README.md"
requires-python = ">=3.10,<=3.13"
@@ -28,6 +28,7 @@ dependencies = [
"uv>=0.4.25",
"tomli-w>=1.1.0",
"chromadb>=0.4.24",
"tomli>=2.0.2",
]
[project.urls]

View File

@@ -14,5 +14,5 @@ warnings.filterwarnings(
category=UserWarning,
module="pydantic.main",
)
__version__ = "0.74.2"
__version__ = "0.76.2"
__all__ = ["Agent", "Crew", "Process", "Task", "Pipeline", "Router", "LLM", "Flow"]

View File

@@ -1,6 +1,7 @@
import os
from inspect import signature
from typing import Any, List, Optional, Union
import shutil
import subprocess
from typing import Any, List, Literal, Optional, Union
from pydantic import Field, InstanceOf, PrivateAttr, model_validator
@@ -112,6 +113,10 @@ class Agent(BaseAgent):
default=2,
description="Maximum number of retries for an agent to execute a task when an error occurs.",
)
code_execution_mode: Literal["safe", "unsafe"] = Field(
default="safe",
description="Mode for code execution: 'safe' (using Docker) or 'unsafe' (direct execution).",
)
@model_validator(mode="after")
def post_init_setup(self):
@@ -173,6 +178,9 @@ class Agent(BaseAgent):
if not self.agent_executor:
self._setup_agent_executor()
if self.allow_code_execution:
self._validate_docker_installation()
return self
def _setup_agent_executor(self):
@@ -308,7 +316,9 @@ class Agent(BaseAgent):
try:
from crewai_tools import CodeInterpreterTool
return [CodeInterpreterTool()]
# Set the unsafe_mode based on the code_execution_mode attribute
unsafe_mode = self.code_execution_mode == "unsafe"
return [CodeInterpreterTool(unsafe_mode=unsafe_mode)]
except ModuleNotFoundError:
self._logger.log(
"info", "Coding tools not available. Install crewai_tools. "
@@ -384,30 +394,49 @@ class Agent(BaseAgent):
def _render_text_description_and_args(self, tools: List[Any]) -> str:
"""Render the tool name, description, and args in plain text.
Output will be in the format of:
.. code-block:: markdown
search: This tool is used for search, args: {"query": {"type": "string"}}
calculator: This tool is used for math, \
args: {"expression": {"type": "string"}}
"""
tool_strings = []
for tool in tools:
args_schema = str(tool.model_fields)
if hasattr(tool, "func") and tool.func:
sig = signature(tool.func)
description = (
f"Tool Name: {tool.name}{sig}\nTool Description: {tool.description}"
)
else:
description = (
f"Tool Name: {tool.name}\nTool Description: {tool.description}"
)
args_schema = {
name: {
"description": field.description,
"type": field.annotation.__name__,
}
for name, field in tool.args_schema.model_fields.items()
}
description = (
f"Tool Name: {tool.name}\nTool Description: {tool.description}"
)
tool_strings.append(f"{description}\nTool Arguments: {args_schema}")
return "\n".join(tool_strings)
def _validate_docker_installation(self) -> None:
"""Check if Docker is installed and running."""
if not shutil.which("docker"):
raise RuntimeError(
f"Docker is not installed. Please install Docker to use code execution with agent: {self.role}"
)
try:
subprocess.run(
["docker", "info"],
check=True,
stdout=subprocess.PIPE,
stderr=subprocess.PIPE,
)
except subprocess.CalledProcessError:
raise RuntimeError(
f"Docker is not running. Please start Docker to use code execution with agent: {self.role}"
)
@staticmethod
def __tools_names(tools) -> str:
return ", ".join([t.name for t in tools])

View File

@@ -32,10 +32,12 @@ def crewai():
@crewai.command()
@click.argument("type", type=click.Choice(["crew", "pipeline", "flow"]))
@click.argument("name")
def create(type, name):
@click.option("--provider", type=str, help="The provider to use for the crew")
@click.option("--skip_provider", is_flag=True, help="Skip provider validation")
def create(type, name, provider, skip_provider=False):
"""Create a new crew, pipeline, or flow."""
if type == "crew":
create_crew(name)
create_crew(name, provider, skip_provider)
elif type == "pipeline":
create_pipeline(name)
elif type == "flow":
@@ -176,10 +178,14 @@ def test(n_iterations: int, model: str):
evaluate_crew(n_iterations, model)
@crewai.command()
def install():
@crewai.command(context_settings=dict(
ignore_unknown_options=True,
allow_extra_args=True,
))
@click.pass_context
def install(context):
"""Install the Crew."""
install_crew()
install_crew(context.args)
@crewai.command()

View File

@@ -1,8 +1,16 @@
import sys
from pathlib import Path
import click
from crewai.cli.utils import copy_template, load_env_vars, write_env_file
from crewai.cli.provider import get_provider_data, select_provider, PROVIDERS
from crewai.cli.constants import ENV_VARS
from crewai.cli.provider import (
PROVIDERS,
get_provider_data,
select_model,
select_provider,
)
from crewai.cli.utils import copy_template, load_env_vars, write_env_file
def create_folder_structure(name, parent_folder=None):
@@ -14,11 +22,19 @@ def create_folder_structure(name, parent_folder=None):
else:
folder_path = Path(folder_name)
click.secho(
f"Creating {'crew' if parent_folder else 'folder'} {folder_name}...",
fg="green",
bold=True,
)
if folder_path.exists():
if not click.confirm(
f"Folder {folder_name} already exists. Do you want to override it?"
):
click.secho("Operation cancelled.", fg="yellow")
sys.exit(0)
click.secho(f"Overriding folder {folder_name}...", fg="green", bold=True)
else:
click.secho(
f"Creating {'crew' if parent_folder else 'folder'} {folder_name}...",
fg="green",
bold=True,
)
if not folder_path.exists():
folder_path.mkdir(parents=True)
@@ -27,11 +43,6 @@ def create_folder_structure(name, parent_folder=None):
(folder_path / "src" / folder_name).mkdir(parents=True)
(folder_path / "src" / folder_name / "tools").mkdir(parents=True)
(folder_path / "src" / folder_name / "config").mkdir(parents=True)
else:
click.secho(
f"\tFolder {folder_name} already exists.",
fg="yellow",
)
return folder_path, folder_name, class_name
@@ -70,37 +81,84 @@ def copy_template_files(folder_path, name, class_name, parent_folder):
copy_template(src_file, dst_file, name, class_name, folder_path.name)
def create_crew(name, parent_folder=None):
def create_crew(name, provider=None, skip_provider=False, parent_folder=None):
folder_path, folder_name, class_name = create_folder_structure(name, parent_folder)
env_vars = load_env_vars(folder_path)
if not skip_provider:
if not provider:
provider_models = get_provider_data()
if not provider_models:
return
provider_models = get_provider_data()
if not provider_models:
return
existing_provider = None
for provider, env_keys in ENV_VARS.items():
if any(key in env_vars for key in env_keys):
existing_provider = provider
break
selected_provider = select_provider(provider_models)
if not selected_provider:
return
provider = selected_provider
if existing_provider:
if not click.confirm(
f"Found existing environment variable configuration for {existing_provider.capitalize()}. Do you want to override it?"
):
click.secho("Keeping existing provider configuration.", fg="yellow")
return
# selected_model = select_model(provider, provider_models)
# if not selected_model:
# return
# model = selected_model
provider_models = get_provider_data()
if not provider_models:
return
if provider in PROVIDERS:
api_key_var = ENV_VARS[provider][0]
else:
api_key_var = click.prompt(
f"Enter the environment variable name for your {provider.capitalize()} API key",
type=str,
while True:
selected_provider = select_provider(provider_models)
if selected_provider is None: # User typed 'q'
click.secho("Exiting...", fg="yellow")
sys.exit(0)
if selected_provider: # Valid selection
break
click.secho(
"No provider selected. Please try again or press 'q' to exit.", fg="red"
)
while True:
selected_model = select_model(selected_provider, provider_models)
if selected_model is None: # User typed 'q'
click.secho("Exiting...", fg="yellow")
sys.exit(0)
if selected_model: # Valid selection
break
click.secho(
"No model selected. Please try again or press 'q' to exit.", fg="red"
)
if selected_provider in PROVIDERS:
api_key_var = ENV_VARS[selected_provider][0]
else:
api_key_var = click.prompt(
f"Enter the environment variable name for your {selected_provider.capitalize()} API key",
type=str,
default="",
)
api_key_value = ""
click.echo(
f"Enter your {selected_provider.capitalize()} API key (press Enter to skip): ",
nl=False,
)
try:
api_key_value = input()
except (KeyboardInterrupt, EOFError):
api_key_value = ""
env_vars = {api_key_var: "YOUR_API_KEY_HERE"}
write_env_file(folder_path, env_vars)
if api_key_value.strip():
env_vars = {api_key_var: api_key_value}
write_env_file(folder_path, env_vars)
click.secho("API key saved to .env file", fg="green")
else:
click.secho(
"No API key provided. Skipping .env file creation.", fg="yellow"
)
# env_vars['MODEL'] = model
# click.secho(f"Selected model: {model}", fg="green")
env_vars["MODEL"] = selected_model
click.secho(f"Selected model: {selected_model}", fg="green")
package_dir = Path(__file__).parent
templates_dir = package_dir / "templates" / "crew"

View File

@@ -3,12 +3,13 @@ import subprocess
import click
def install_crew() -> None:
def install_crew(proxy_options: list[str]) -> None:
"""
Install the crew by running the UV command to lock and install.
"""
try:
subprocess.run(["uv", "sync"], check=True, capture_output=False, text=True)
command = ["uv", "sync"] + proxy_options
subprocess.run(command, check=True, capture_output=False, text=True)
except subprocess.CalledProcessError as e:
click.echo(f"An error occurred while running the crew: {e}", err=True)

View File

@@ -7,7 +7,7 @@ def plot_flow() -> None:
"""
Plot the flow by running a command in the UV environment.
"""
command = ["uv", "run", "plot_flow"]
command = ["uv", "run", "plot"]
try:
result = subprocess.run(command, capture_output=False, text=True, check=True)

View File

@@ -1,67 +1,91 @@
import json
import time
import requests
from collections import defaultdict
from pathlib import Path
import click
from pathlib import Path
from crewai.cli.constants import PROVIDERS, MODELS, JSON_URL
import requests
from crewai.cli.constants import JSON_URL, MODELS, PROVIDERS
def select_choice(prompt_message, choices):
"""
Presents a list of choices to the user and prompts them to select one.
Args:
- prompt_message (str): The message to display to the user before presenting the choices.
- choices (list): A list of options to present to the user.
Returns:
- str: The selected choice from the list, or None if the operation is aborted or an invalid selection is made.
- str: The selected choice from the list, or None if the user chooses to quit.
"""
provider_models = get_provider_data()
if not provider_models:
return
click.secho(prompt_message, fg="cyan")
for idx, choice in enumerate(choices, start=1):
click.secho(f"{idx}. {choice}", fg="cyan")
try:
selected_index = click.prompt("Enter the number of your choice", type=int) - 1
except click.exceptions.Abort:
click.secho("Operation aborted by the user.", fg="red")
return None
if not (0 <= selected_index < len(choices)):
click.secho("Invalid selection.", fg="red")
return None
return choices[selected_index]
click.secho("q. Quit", fg="cyan")
while True:
choice = click.prompt(
"Enter the number of your choice or 'q' to quit", type=str
)
if choice.lower() == "q":
return None
try:
selected_index = int(choice) - 1
if 0 <= selected_index < len(choices):
return choices[selected_index]
except ValueError:
pass
click.secho(
"Invalid selection. Please select a number between 1 and 6 or 'q' to quit.",
fg="red",
)
def select_provider(provider_models):
"""
Presents a list of providers to the user and prompts them to select one.
Args:
- provider_models (dict): A dictionary of provider models.
Returns:
- str: The selected provider, or None if the operation is aborted or an invalid selection is made.
- str: The selected provider
- None: If user explicitly quits
"""
predefined_providers = [p.lower() for p in PROVIDERS]
all_providers = sorted(set(predefined_providers + list(provider_models.keys())))
provider = select_choice("Select a provider to set up:", predefined_providers + ['other'])
if not provider:
provider = select_choice(
"Select a provider to set up:", predefined_providers + ["other"]
)
if provider is None: # User typed 'q'
return None
provider = provider.lower()
if provider == 'other':
if provider == "other":
provider = select_choice("Select a provider from the full list:", all_providers)
if not provider:
if provider is None: # User typed 'q'
return None
return provider
return provider.lower() if provider else False
def select_model(provider, provider_models):
"""
Presents a list of models for a given provider to the user and prompts them to select one.
Args:
- provider (str): The provider for which to select a model.
- provider_models (dict): A dictionary of provider models.
Returns:
- str: The selected model, or None if the operation is aborted or an invalid selection is made.
"""
@@ -76,37 +100,49 @@ def select_model(provider, provider_models):
click.secho(f"No models available for provider '{provider}'.", fg="red")
return None
selected_model = select_choice(f"Select a model to use for {provider.capitalize()}:", available_models)
selected_model = select_choice(
f"Select a model to use for {provider.capitalize()}:", available_models
)
return selected_model
def load_provider_data(cache_file, cache_expiry):
"""
Loads provider data from a cache file if it exists and is not expired. If the cache is expired or corrupted, it fetches the data from the web.
Args:
- cache_file (Path): The path to the cache file.
- cache_expiry (int): The cache expiry time in seconds.
Returns:
- dict or None: The loaded provider data or None if the operation fails.
"""
current_time = time.time()
if cache_file.exists() and (current_time - cache_file.stat().st_mtime) < cache_expiry:
if (
cache_file.exists()
and (current_time - cache_file.stat().st_mtime) < cache_expiry
):
data = read_cache_file(cache_file)
if data:
return data
click.secho("Cache is corrupted. Fetching provider data from the web...", fg="yellow")
click.secho(
"Cache is corrupted. Fetching provider data from the web...", fg="yellow"
)
else:
click.secho("Cache expired or not found. Fetching provider data from the web...", fg="cyan")
click.secho(
"Cache expired or not found. Fetching provider data from the web...",
fg="cyan",
)
return fetch_provider_data(cache_file)
def read_cache_file(cache_file):
"""
Reads and returns the JSON content from a cache file. Returns None if the file contains invalid JSON.
Args:
- cache_file (Path): The path to the cache file.
Returns:
- dict or None: The JSON content of the cache file or None if the JSON is invalid.
"""
@@ -116,13 +152,14 @@ def read_cache_file(cache_file):
except json.JSONDecodeError:
return None
def fetch_provider_data(cache_file):
"""
Fetches provider data from a specified URL and caches it to a file.
Args:
- cache_file (Path): The path to the cache file.
Returns:
- dict or None: The fetched provider data or None if the operation fails.
"""
@@ -139,38 +176,42 @@ def fetch_provider_data(cache_file):
click.secho("Error parsing provider data. Invalid JSON format.", fg="red")
return None
def download_data(response):
"""
Downloads data from a given HTTP response and returns the JSON content.
Args:
- response (requests.Response): The HTTP response object.
Returns:
- dict: The JSON content of the response.
"""
total_size = int(response.headers.get('content-length', 0))
total_size = int(response.headers.get("content-length", 0))
block_size = 8192
data_chunks = []
with click.progressbar(length=total_size, label='Downloading', show_pos=True) as progress_bar:
with click.progressbar(
length=total_size, label="Downloading", show_pos=True
) as progress_bar:
for chunk in response.iter_content(block_size):
if chunk:
data_chunks.append(chunk)
progress_bar.update(len(chunk))
data_content = b''.join(data_chunks)
return json.loads(data_content.decode('utf-8'))
data_content = b"".join(data_chunks)
return json.loads(data_content.decode("utf-8"))
def get_provider_data():
"""
Retrieves provider data from a cache file, filters out models based on provider criteria, and returns a dictionary of providers mapped to their models.
Returns:
- dict or None: A dictionary of providers mapped to their models or None if the operation fails.
"""
cache_dir = Path.home() / '.crewai'
cache_dir = Path.home() / ".crewai"
cache_dir.mkdir(exist_ok=True)
cache_file = cache_dir / 'provider_cache.json'
cache_expiry = 24 * 3600
cache_file = cache_dir / "provider_cache.json"
data = load_provider_data(cache_file, cache_expiry)
if not data:
@@ -179,8 +220,8 @@ def get_provider_data():
provider_models = defaultdict(list)
for model_name, properties in data.items():
provider = properties.get("litellm_provider", "").strip().lower()
if 'http' in provider or provider == 'other':
if "http" in provider or provider == "other":
continue
if provider:
provider_models[provider].append(model_name)
return provider_models

View File

@@ -1,10 +1,9 @@
import subprocess
import click
import tomllib
from packaging import version
from crewai.cli.utils import get_crewai_version
from crewai.cli.utils import get_crewai_version, read_toml
def run_crew() -> None:
@@ -15,10 +14,9 @@ def run_crew() -> None:
crewai_version = get_crewai_version()
min_required_version = "0.71.0"
with open("pyproject.toml", "rb") as f:
data = tomllib.load(f)
pyproject_data = read_toml()
if data.get("tool", {}).get("poetry") and (
if pyproject_data.get("tool", {}).get("poetry") and (
version.parse(crewai_version) < version.parse(min_required_version)
):
click.secho(
@@ -35,10 +33,7 @@ def run_crew() -> None:
click.echo(f"An error occurred while running the crew: {e}", err=True)
click.echo(e.output, err=True, nl=True)
with open("pyproject.toml", "rb") as f:
data = tomllib.load(f)
if data.get("tool", {}).get("poetry"):
if pyproject_data.get("tool", {}).get("poetry"):
click.secho(
"It's possible that you are using an old version of crewAI that uses poetry, please run `crewai update` to update your pyproject.toml to use uv.",
fg="yellow",

View File

@@ -5,7 +5,7 @@ description = "{{name}} using crewAI"
authors = [{ name = "Your Name", email = "you@example.com" }]
requires-python = ">=3.10,<=3.13"
dependencies = [
"crewai[tools]>=0.74.2,<1.0.0"
"crewai[tools]>=0.76.2,<1.0.0"
]
[project.scripts]

View File

@@ -1,11 +1,17 @@
from typing import Type
from crewai_tools import BaseTool
from pydantic import BaseModel, Field
class MyCustomToolInput(BaseModel):
"""Input schema for MyCustomTool."""
argument: str = Field(..., description="Description of the argument.")
class MyCustomTool(BaseTool):
name: str = "Name of my tool"
description: str = (
"Clear description for what this tool is useful for, you agent will need this information to use it."
)
args_schema: Type[BaseModel] = MyCustomToolInput
def _run(self, argument: str) -> str:
# Implementation goes here

View File

@@ -5,7 +5,7 @@ description = "{{name}} using crewAI"
authors = [{ name = "Your Name", email = "you@example.com" }]
requires-python = ">=3.10,<=3.13"
dependencies = [
"crewai[tools]>=0.74.2,<1.0.0",
"crewai[tools]>=0.76.2,<1.0.0",
]
[project.scripts]

View File

@@ -1,4 +1,13 @@
from typing import Type
from crewai_tools import BaseTool
from pydantic import BaseModel, Field
class MyCustomToolInput(BaseModel):
"""Input schema for MyCustomTool."""
argument: str = Field(..., description="Description of the argument.")
class MyCustomTool(BaseTool):
@@ -6,6 +15,7 @@ class MyCustomTool(BaseTool):
description: str = (
"Clear description for what this tool is useful for, you agent will need this information to use it."
)
args_schema: Type[BaseModel] = MyCustomToolInput
def _run(self, argument: str) -> str:
# Implementation goes here

View File

@@ -6,7 +6,7 @@ authors = ["Your Name <you@example.com>"]
[tool.poetry.dependencies]
python = ">=3.10,<=3.13"
crewai = { extras = ["tools"], version = ">=0.74.2,<1.0.0" }
crewai = { extras = ["tools"], version = ">=0.76.2,<1.0.0" }
asyncio = "*"
[tool.poetry.scripts]

View File

@@ -1,11 +1,17 @@
from typing import Type
from crewai_tools import BaseTool
from pydantic import BaseModel, Field
class MyCustomToolInput(BaseModel):
"""Input schema for MyCustomTool."""
argument: str = Field(..., description="Description of the argument.")
class MyCustomTool(BaseTool):
name: str = "Name of my tool"
description: str = (
"Clear description for what this tool is useful for, you agent will need this information to use it."
)
args_schema: Type[BaseModel] = MyCustomToolInput
def _run(self, argument: str) -> str:
# Implementation goes here

View File

@@ -5,7 +5,7 @@ description = "{{name}} using crewAI"
authors = ["Your Name <you@example.com>"]
requires-python = ">=3.10,<=3.13"
dependencies = [
"crewai[tools]>=0.74.2,<1.0.0"
"crewai[tools]>=0.76.2,<1.0.0"
]
[project.scripts]

View File

@@ -1,11 +1,17 @@
from typing import Type
from crewai_tools import BaseTool
from pydantic import BaseModel, Field
class MyCustomToolInput(BaseModel):
"""Input schema for MyCustomTool."""
argument: str = Field(..., description="Description of the argument.")
class MyCustomTool(BaseTool):
name: str = "Name of my tool"
description: str = (
"Clear description for what this tool is useful for, you agent will need this information to use it."
)
args_schema: Type[BaseModel] = MyCustomToolInput
def _run(self, argument: str) -> str:
# Implementation goes here

View File

@@ -5,6 +5,6 @@ description = "Power up your crews with {{folder_name}}"
readme = "README.md"
requires-python = ">=3.10,<=3.13"
dependencies = [
"crewai[tools]>=0.74.2"
"crewai[tools]>=0.76.2"
]

View File

@@ -2,7 +2,8 @@ import os
import shutil
import tomli_w
import tomllib
from crewai.cli.utils import read_toml
def update_crew() -> None:
@@ -18,10 +19,9 @@ def migrate_pyproject(input_file, output_file):
And it will be used to migrate the pyproject.toml to the new format when uv is used.
When the time comes that uv supports the new format, this function will be deprecated.
"""
poetry_data = {}
# Read the input pyproject.toml
with open(input_file, "rb") as f:
pyproject = tomllib.load(f)
pyproject_data = read_toml()
# Initialize the new project structure
new_pyproject = {
@@ -30,30 +30,30 @@ def migrate_pyproject(input_file, output_file):
}
# Migrate project metadata
if "tool" in pyproject and "poetry" in pyproject["tool"]:
poetry = pyproject["tool"]["poetry"]
new_pyproject["project"]["name"] = poetry.get("name")
new_pyproject["project"]["version"] = poetry.get("version")
new_pyproject["project"]["description"] = poetry.get("description")
if "tool" in pyproject_data and "poetry" in pyproject_data["tool"]:
poetry_data = pyproject_data["tool"]["poetry"]
new_pyproject["project"]["name"] = poetry_data.get("name")
new_pyproject["project"]["version"] = poetry_data.get("version")
new_pyproject["project"]["description"] = poetry_data.get("description")
new_pyproject["project"]["authors"] = [
{
"name": author.split("<")[0].strip(),
"email": author.split("<")[1].strip(">").strip(),
}
for author in poetry.get("authors", [])
for author in poetry_data.get("authors", [])
]
new_pyproject["project"]["requires-python"] = poetry.get("python")
new_pyproject["project"]["requires-python"] = poetry_data.get("python")
else:
# If it's already in the new format, just copy the project section
new_pyproject["project"] = pyproject.get("project", {})
new_pyproject["project"] = pyproject_data.get("project", {})
# Migrate or copy dependencies
if "dependencies" in new_pyproject["project"]:
# If dependencies are already in the new format, keep them as is
pass
elif "dependencies" in poetry:
elif poetry_data and "dependencies" in poetry_data:
new_pyproject["project"]["dependencies"] = []
for dep, version in poetry["dependencies"].items():
for dep, version in poetry_data["dependencies"].items():
if isinstance(version, dict): # Handle extras
extras = ",".join(version.get("extras", []))
new_dep = f"{dep}[{extras}]"
@@ -67,10 +67,10 @@ def migrate_pyproject(input_file, output_file):
new_pyproject["project"]["dependencies"].append(new_dep)
# Migrate or copy scripts
if "scripts" in poetry:
new_pyproject["project"]["scripts"] = poetry["scripts"]
elif "scripts" in pyproject.get("project", {}):
new_pyproject["project"]["scripts"] = pyproject["project"]["scripts"]
if poetry_data and "scripts" in poetry_data:
new_pyproject["project"]["scripts"] = poetry_data["scripts"]
elif pyproject_data.get("project", {}) and "scripts" in pyproject_data["project"]:
new_pyproject["project"]["scripts"] = pyproject_data["project"]["scripts"]
else:
new_pyproject["project"]["scripts"] = {}
@@ -87,8 +87,8 @@ def migrate_pyproject(input_file, output_file):
new_pyproject["project"]["scripts"]["run_crew"] = f"{module_name}.main:run"
# Migrate optional dependencies
if "extras" in poetry:
new_pyproject["project"]["optional-dependencies"] = poetry["extras"]
if poetry_data and "extras" in poetry_data:
new_pyproject["project"]["optional-dependencies"] = poetry_data["extras"]
# Backup the old pyproject.toml
backup_file = "pyproject-old.toml"

View File

@@ -6,6 +6,7 @@ from functools import reduce
from typing import Any, Dict, List
import click
import tomli
from rich.console import Console
from crewai.cli.authentication.utils import TokenManager
@@ -54,6 +55,13 @@ def simple_toml_parser(content):
return result
def read_toml(file_path: str = "pyproject.toml"):
"""Read the content of a TOML file and return it as a dictionary."""
with open(file_path, "rb") as f:
toml_dict = tomli.load(f)
return toml_dict
def parse_toml(content):
if sys.version_info >= (3, 11):
return tomllib.loads(content)

uv.lock (generated, 1906 lines): diff suppressed because it is too large.