Compare commits


11 Commits

Author SHA1 Message Date
Eduardo Chiarotti
60b541ba73 feat: add validation for poetry data 2024-10-25 08:04:19 -03:00
Eduardo Chiarotti
dea7e3a4c4 Merge branch 'main' into fix/update-crew-toml 2024-10-24 21:25:40 -03:00
Eduardo Chiarotti
23ab7e5596 feat: add tomli so we can support 3.10 2024-10-24 21:23:09 -03:00
Brandon Hancock (bhancock_ai)
201e652fa2 update plot command (#1504) 2024-10-24 14:44:30 -04:00
João Moura
8bc07e6071 new version 2024-10-23 18:10:37 -03:00
João Moura
6baaad045a new version 2024-10-23 18:08:49 -03:00
João Moura
74c1703310 updating crewai version 2024-10-23 17:58:58 -03:00
Brandon Hancock (bhancock_ai)
a921828e51 Fix memory imports for embedding functions (#1497) 2024-10-23 11:21:27 -04:00
Brandon Hancock (bhancock_ai)
e1fd83e6a7 support unsafe code execution. add in docker install and running checks. (#1496)
* support unsafe code execution. add in docker install and running checks.

* Update return type
2024-10-23 11:01:00 -04:00
Maicon Peixinho
7d68e287cc chore(readme-fix): fixing step for 'running tests' in the contribution section (#1490)
Co-authored-by: Eduardo Chiarotti <dudumelgaco@hotmail.com>
2024-10-23 11:38:41 -03:00
Rip&Tear
f39a975e20 fix/fixed missing API prompt + CLI docs update (#1464)
* updated CLI to allow for submitting API keys

* updated click prompt to remove default number

* removed all unnecessary comments

* feat: implement crew creation CLI command

- refactor code to multiple functions
- Added ability for users to select provider and model when using the crewai create command and save the API key to .env

* refactored select_choice function for early return

* refactored select_provider to have an early return

* cleanup of comments

* refactor/Move functions into utils file, added new provider file and migrated functions there, new constants file + general function refactor

* small comment cleanup

* fix unnecessary deps

* Added docs for new CLI provider + fixed missing API prompt

* Minor doc updates

* allow user to bypass api key entry + incorrect number selected logic + ruff formatting

* ruff updates

* Fix spelling mistake

---------

Co-authored-by: Brandon Hancock (bhancock_ai) <109994880+bhancockio@users.noreply.github.com>
Co-authored-by: Brandon Hancock <brandon@brandonhancock.io>
2024-10-23 09:41:14 -04:00
18 changed files with 1064 additions and 1102 deletions

View File

@@ -351,7 +351,7 @@ pre-commit install
 ### Running Tests
 ```bash
-uvx pytest
+uv run pytest .
 ```
 ### Running static type checks

View File

@@ -31,16 +31,17 @@ Think of an agent as a member of a team, with specific skills and a particular j
 | **Max RPM** *(optional)* | `max_rpm` | Max RPM is the maximum number of requests per minute the agent can perform to avoid rate limits. It's optional and can be left unspecified, with a default value of `None`. |
 | **Max Execution Time** *(optional)* | `max_execution_time` | Max Execution Time is the maximum execution time for an agent to execute a task. It's optional and can be left unspecified, with a default value of `None`, meaning no max execution time. |
 | **Verbose** *(optional)* | `verbose` | Setting this to `True` configures the internal logger to provide detailed execution logs, aiding in debugging and monitoring. Default is `False`. |
 | **Allow Delegation** *(optional)* | `allow_delegation` | Agents can delegate tasks or questions to one another, ensuring that each task is handled by the most suitable agent. Default is `False`. |
 | **Step Callback** *(optional)* | `step_callback` | A function that is called after each step of the agent. This can be used to log the agent's actions or to perform other operations. It will overwrite the crew `step_callback`. |
 | **Cache** *(optional)* | `cache` | Indicates if the agent should use a cache for tool usage. Default is `True`. |
 | **System Template** *(optional)* | `system_template` | Specifies the system format for the agent. Default is `None`. |
 | **Prompt Template** *(optional)* | `prompt_template` | Specifies the prompt format for the agent. Default is `None`. |
 | **Response Template** *(optional)* | `response_template` | Specifies the response format for the agent. Default is `None`. |
 | **Allow Code Execution** *(optional)* | `allow_code_execution` | Enable code execution for the agent. Default is `False`. |
 | **Max Retry Limit** *(optional)* | `max_retry_limit` | Maximum number of retries for an agent to execute a task when an error occurs. Default is `2`. |
 | **Use System Prompt** *(optional)* | `use_system_prompt` | Adds the ability to not use system prompt (to support o1 models). Default is `True`. |
 | **Respect Context Window** *(optional)* | `respect_context_window` | Summary strategy to avoid overflowing the context window. Default is `True`. |
+| **Code Execution Mode** *(optional)* | `code_execution_mode` | Determines the mode for code execution: 'safe' (using Docker) or 'unsafe' (direct execution on the host machine). Default is `safe`. |
 ## Creating an agent
@@ -83,6 +84,7 @@ agent = Agent(
     max_retry_limit=2, # Optional
     use_system_prompt=True, # Optional
     respect_context_window=True, # Optional
+    code_execution_mode='safe', # Optional, defaults to 'safe'
 )
 ```
@@ -156,4 +158,4 @@ crew = my_crew.kickoff(inputs={"input": "Mark Twain"})
 ## Conclusion
 Agents are the building blocks of the CrewAI framework. By understanding how to define and interact with agents,
-you can create sophisticated AI systems that leverage the power of collaborative intelligence.
+you can create sophisticated AI systems that leverage the power of collaborative intelligence. The `code_execution_mode` attribute provides flexibility in how agents execute code, allowing for both secure and direct execution options.
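Taken together, the new table row and snippet mean code execution stays opt-in and sandboxed by default. A minimal doc-style sketch of an agent using the new attribute (the role, goal, and backstory values are placeholders, not from the diff):

```python
from crewai import Agent

# Hypothetical agent; only allow_code_execution and code_execution_mode
# relate to this change, the remaining values are illustrative.
coder = Agent(
    role="Python Developer",
    goal="Write and execute small analysis scripts",
    backstory="An engineer comfortable running generated code.",
    allow_code_execution=True,   # triggers the new Docker availability check
    code_execution_mode="safe",  # 'safe' = Docker sandbox, 'unsafe' = host machine
)
```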

View File

@@ -118,7 +118,7 @@ Alternatively, you can directly pass the OpenAIEmbeddingFunction to the embedder
 Example:
 ```python Code
 from crewai import Crew, Agent, Task, Process
-from chromadb.utils.embedding_functions.openai_embedding_function import OpenAIEmbeddingFunction
+from chromadb.utils.embedding_functions import OpenAIEmbeddingFunction
 my_crew = Crew(
     agents=[...],
@@ -174,6 +174,7 @@ my_crew = Crew(
 ### Using Azure OpenAI embeddings
 ```python Code
+from chromadb.utils.embedding_functions import OpenAIEmbeddingFunction
 from crewai import Crew, Agent, Task, Process
 my_crew = Crew(
@@ -182,7 +183,7 @@ my_crew = Crew(
     process=Process.sequential,
     memory=True,
     verbose=True,
-    embedder=embedding_functions.OpenAIEmbeddingFunction(
+    embedder=OpenAIEmbeddingFunction(
         api_key="YOUR_API_KEY",
         api_base="YOUR_API_BASE_PATH",
         api_type="azure",
@@ -195,6 +196,7 @@ my_crew = Crew(
 ### Using Vertex AI embeddings
 ```python Code
+from chromadb.utils.embedding_functions import GoogleVertexEmbeddingFunction
 from crewai import Crew, Agent, Task, Process
 my_crew = Crew(
@@ -203,7 +205,7 @@ my_crew = Crew(
     process=Process.sequential,
     memory=True,
     verbose=True,
-    embedder=embedding_functions.GoogleVertexEmbeddingFunction(
+    embedder=GoogleVertexEmbeddingFunction(
         project_id="YOUR_PROJECT_ID",
         region="YOUR_REGION",
         api_key="YOUR_API_KEY",

View File

@@ -1,6 +1,6 @@
 [project]
 name = "crewai"
-version = "0.75.1"
+version = "0.76.2"
 description = "Cutting-edge framework for orchestrating role-playing, autonomous AI agents. By fostering collaborative intelligence, CrewAI empowers agents to work together seamlessly, tackling complex tasks."
 readme = "README.md"
 requires-python = ">=3.10,<=3.13"
@@ -28,6 +28,7 @@ dependencies = [
     "uv>=0.4.25",
     "tomli-w>=1.1.0",
     "chromadb>=0.4.24",
+    "tomli>=2.0.2",
 ]
 [project.urls]

View File

@@ -14,5 +14,5 @@ warnings.filterwarnings(
     category=UserWarning,
     module="pydantic.main",
 )
-__version__ = "0.75.1"
+__version__ = "0.76.2"
 __all__ = ["Agent", "Crew", "Process", "Task", "Pipeline", "Router", "LLM", "Flow"]

View File

@@ -1,6 +1,8 @@
 import os
+import shutil
+import subprocess
 from inspect import signature
-from typing import Any, List, Optional, Union
+from typing import Any, List, Literal, Optional, Union
 from pydantic import Field, InstanceOf, PrivateAttr, model_validator
@@ -112,6 +114,10 @@ class Agent(BaseAgent):
         default=2,
         description="Maximum number of retries for an agent to execute a task when an error occurs.",
     )
+    code_execution_mode: Literal["safe", "unsafe"] = Field(
+        default="safe",
+        description="Mode for code execution: 'safe' (using Docker) or 'unsafe' (direct execution).",
+    )
     @model_validator(mode="after")
     def post_init_setup(self):
@@ -173,6 +179,9 @@ class Agent(BaseAgent):
         if not self.agent_executor:
             self._setup_agent_executor()
+        if self.allow_code_execution:
+            self._validate_docker_installation()
         return self
     def _setup_agent_executor(self):
@@ -308,7 +317,9 @@ class Agent(BaseAgent):
         try:
             from crewai_tools import CodeInterpreterTool
-            return [CodeInterpreterTool()]
+            # Set the unsafe_mode based on the code_execution_mode attribute
+            unsafe_mode = self.code_execution_mode == "unsafe"
+            return [CodeInterpreterTool(unsafe_mode=unsafe_mode)]
         except ModuleNotFoundError:
             self._logger.log(
                 "info", "Coding tools not available. Install crewai_tools. "
@@ -408,6 +419,25 @@ class Agent(BaseAgent):
         return "\n".join(tool_strings)
+    def _validate_docker_installation(self) -> None:
+        """Check if Docker is installed and running."""
+        if not shutil.which("docker"):
+            raise RuntimeError(
+                f"Docker is not installed. Please install Docker to use code execution with agent: {self.role}"
+            )
+        try:
+            subprocess.run(
+                ["docker", "info"],
+                check=True,
+                stdout=subprocess.PIPE,
+                stderr=subprocess.PIPE,
+            )
+        except subprocess.CalledProcessError:
+            raise RuntimeError(
+                f"Docker is not running. Please start Docker to use code execution with agent: {self.role}"
+            )
     @staticmethod
     def __tools_names(tools) -> str:
         return ", ".join([t.name for t in tools])
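The `_validate_docker_installation` check added here is self-contained and easy to reproduce outside the class; a stand-alone sketch of the same logic (the module-level function name is hypothetical):

```python
import shutil
import subprocess

def validate_docker_installation(role: str) -> None:
    """Fail fast if Docker is missing or not running, mirroring the
    pre-flight check added for agents with allow_code_execution."""
    if not shutil.which("docker"):  # is the docker binary on PATH?
        raise RuntimeError(
            f"Docker is not installed. Please install Docker to use code execution with agent: {role}"
        )
    try:
        # 'docker info' exits non-zero when the daemon is not running
        subprocess.run(
            ["docker", "info"],
            check=True,
            stdout=subprocess.PIPE,
            stderr=subprocess.PIPE,
        )
    except subprocess.CalledProcessError:
        raise RuntimeError(
            f"Docker is not running. Please start Docker to use code execution with agent: {role}"
        )
```

Running the check once at validation time, rather than at first tool use, surfaces a missing Docker install before any task starts executing.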

View File

@@ -33,10 +33,11 @@ def crewai():
 @click.argument("type", type=click.Choice(["crew", "pipeline", "flow"]))
 @click.argument("name")
 @click.option("--provider", type=str, help="The provider to use for the crew")
-def create(type, name, provider):
+@click.option("--skip_provider", is_flag=True, help="Skip provider validation")
+def create(type, name, provider, skip_provider=False):
     """Create a new crew, pipeline, or flow."""
     if type == "crew":
-        create_crew(name, provider)
+        create_crew(name, provider, skip_provider)
     elif type == "pipeline":
         create_pipeline(name)
     elif type == "flow":

View File

@@ -81,77 +81,84 @@ def copy_template_files(folder_path, name, class_name, parent_folder):
     copy_template(src_file, dst_file, name, class_name, folder_path.name)
-def create_crew(name, parent_folder=None):
+def create_crew(name, provider=None, skip_provider=False, parent_folder=None):
     folder_path, folder_name, class_name = create_folder_structure(name, parent_folder)
     env_vars = load_env_vars(folder_path)
+    if not skip_provider:
+        if not provider:
+            provider_models = get_provider_data()
+            if not provider_models:
+                return
         existing_provider = None
         for provider, env_keys in ENV_VARS.items():
             if any(key in env_vars for key in env_keys):
                 existing_provider = provider
                 break
         if existing_provider:
             if not click.confirm(
                 f"Found existing environment variable configuration for {existing_provider.capitalize()}. Do you want to override it?"
             ):
                 click.secho("Keeping existing provider configuration.", fg="yellow")
                 return
         provider_models = get_provider_data()
         if not provider_models:
             return
         while True:
             selected_provider = select_provider(provider_models)
             if selected_provider is None:  # User typed 'q'
                 click.secho("Exiting...", fg="yellow")
                 sys.exit(0)
             if selected_provider:  # Valid selection
                 break
             click.secho(
                 "No provider selected. Please try again or press 'q' to exit.", fg="red"
             )
         while True:
             selected_model = select_model(selected_provider, provider_models)
             if selected_model is None:  # User typed 'q'
                 click.secho("Exiting...", fg="yellow")
                 sys.exit(0)
             if selected_model:  # Valid selection
                 break
             click.secho(
                 "No model selected. Please try again or press 'q' to exit.", fg="red"
             )
         if selected_provider in PROVIDERS:
             api_key_var = ENV_VARS[selected_provider][0]
         else:
             api_key_var = click.prompt(
                 f"Enter the environment variable name for your {selected_provider.capitalize()} API key",
                 type=str,
                 default="",
             )
         api_key_value = ""
         click.echo(
             f"Enter your {selected_provider.capitalize()} API key (press Enter to skip): ",
             nl=False,
         )
         try:
             api_key_value = input()
         except (KeyboardInterrupt, EOFError):
             api_key_value = ""
         if api_key_value.strip():
             env_vars = {api_key_var: api_key_value}
             write_env_file(folder_path, env_vars)
             click.secho("API key saved to .env file", fg="green")
         else:
             click.secho(
                 "No API key provided. Skipping .env file creation.", fg="yellow"
             )
         env_vars["MODEL"] = selected_model
         click.secho(f"Selected model: {selected_model}", fg="green")
     package_dir = Path(__file__).parent
     templates_dir = package_dir / "templates" / "crew"
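The provider and model loops in the new `create_crew` share one pattern: prompt, exit when the selection is `None` (the user typed 'q'), retry on an empty selection. A hypothetical helper factoring that pattern out (`select_until_valid` is not part of the diff, just an illustration of the shared shape):

```python
import sys
from typing import Callable, Optional

def select_until_valid(select_fn: Callable[[], Optional[str]], what: str) -> str:
    """Loop until select_fn yields a non-empty choice; None means quit."""
    while True:
        choice = select_fn()
        if choice is None:  # user typed 'q'
            print("Exiting...")
            sys.exit(0)
        if choice:  # valid selection
            return choice
        print(f"No {what} selected. Please try again or press 'q' to exit.")
```

With a helper like this, each `while True:` block would collapse to a single call such as `select_until_valid(lambda: select_provider(provider_models), "provider")`.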

View File

@@ -7,7 +7,7 @@ def plot_flow() -> None:
     """
     Plot the flow by running a command in the UV environment.
     """
-    command = ["uv", "run", "plot_flow"]
+    command = ["uv", "run", "plot"]
     try:
         result = subprocess.run(command, capture_output=False, text=True, check=True)

View File

@@ -1,10 +1,9 @@
 import subprocess
 import click
-import tomllib
 from packaging import version
-from crewai.cli.utils import get_crewai_version
+from crewai.cli.utils import get_crewai_version, read_toml
 def run_crew() -> None:
@@ -15,10 +14,9 @@ def run_crew() -> None:
     crewai_version = get_crewai_version()
     min_required_version = "0.71.0"
-    with open("pyproject.toml", "rb") as f:
-        data = tomllib.load(f)
-    if data.get("tool", {}).get("poetry") and (
+    pyproject_data = read_toml()
+    if pyproject_data.get("tool", {}).get("poetry") and (
         version.parse(crewai_version) < version.parse(min_required_version)
     ):
         click.secho(
@@ -35,10 +33,7 @@ def run_crew() -> None:
         click.echo(f"An error occurred while running the crew: {e}", err=True)
         click.echo(e.output, err=True, nl=True)
-        with open("pyproject.toml", "rb") as f:
-            data = tomllib.load(f)
-        if data.get("tool", {}).get("poetry"):
+        if pyproject_data.get("tool", {}).get("poetry"):
             click.secho(
                 "It's possible that you are using an old version of crewAI that uses poetry, please run `crewai update` to update your pyproject.toml to use uv.",
                 fg="yellow",
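The refactored check in `run_crew` reduces to two questions about the parsed `pyproject.toml`: does it still use a `[tool.poetry]` table, and is the installed crewai older than the minimum. A stdlib sketch of that logic (helper names are illustrative; the real code compares versions with `packaging.version`, not tuples):

```python
def uses_poetry(pyproject_data: dict) -> bool:
    """True when the project still declares a [tool.poetry] table."""
    return bool(pyproject_data.get("tool", {}).get("poetry"))

def needs_uv_migration(pyproject_data: dict, crewai_version: str,
                       min_required: str = "0.71.0") -> bool:
    """Warn condition: poetry-style layout combined with an old crewai."""
    parse = lambda v: tuple(int(p) for p in v.split("."))
    return uses_poetry(pyproject_data) and parse(crewai_version) < parse(min_required)
```

Reading the file once up front also lets the `except` branch reuse `pyproject_data` instead of re-opening `pyproject.toml`, which is exactly what the diff does.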

View File

@@ -5,7 +5,7 @@ description = "{{name}} using crewAI"
 authors = [{ name = "Your Name", email = "you@example.com" }]
 requires-python = ">=3.10,<=3.13"
 dependencies = [
-    "crewai[tools]>=0.75.1,<1.0.0"
+    "crewai[tools]>=0.76.2,<1.0.0"
 ]
 [project.scripts]

View File

@@ -5,7 +5,7 @@ description = "{{name}} using crewAI"
 authors = [{ name = "Your Name", email = "you@example.com" }]
 requires-python = ">=3.10,<=3.13"
 dependencies = [
-    "crewai[tools]>=0.75.1,<1.0.0",
+    "crewai[tools]>=0.76.2,<1.0.0",
 ]
 [project.scripts]

View File

@@ -6,7 +6,7 @@ authors = ["Your Name <you@example.com>"]
 [tool.poetry.dependencies]
 python = ">=3.10,<=3.13"
-crewai = { extras = ["tools"], version = ">=0.75.1,<1.0.0" }
+crewai = { extras = ["tools"], version = ">=0.76.2,<1.0.0" }
 asyncio = "*"
 [tool.poetry.scripts]

View File

@@ -5,7 +5,7 @@ description = "{{name}} using crewAI"
 authors = ["Your Name <you@example.com>"]
 requires-python = ">=3.10,<=3.13"
 dependencies = [
-    "crewai[tools]>=0.75.1,<1.0.0"
+    "crewai[tools]>=0.76.2,<1.0.0"
 ]
 [project.scripts]

View File

@@ -5,6 +5,6 @@ description = "Power up your crews with {{folder_name}}"
 readme = "README.md"
 requires-python = ">=3.10,<=3.13"
 dependencies = [
-    "crewai[tools]>=0.75.1"
+    "crewai[tools]>=0.76.2"
 ]

View File

@@ -2,7 +2,8 @@ import os
 import shutil
 import tomli_w
-import tomllib
+
+from crewai.cli.utils import read_toml
 def update_crew() -> None:
@@ -18,10 +19,9 @@ def migrate_pyproject(input_file, output_file):
     And it will be used to migrate the pyproject.toml to the new format when uv is used.
     When the time comes that uv supports the new format, this function will be deprecated.
     """
+    poetry_data = {}
     # Read the input pyproject.toml
-    with open(input_file, "rb") as f:
-        pyproject = tomllib.load(f)
+    pyproject_data = read_toml()
     # Initialize the new project structure
     new_pyproject = {
@@ -30,30 +30,30 @@ def migrate_pyproject(input_file, output_file):
     }
     # Migrate project metadata
-    if "tool" in pyproject and "poetry" in pyproject["tool"]:
-        poetry = pyproject["tool"]["poetry"]
-        new_pyproject["project"]["name"] = poetry.get("name")
-        new_pyproject["project"]["version"] = poetry.get("version")
-        new_pyproject["project"]["description"] = poetry.get("description")
+    if "tool" in pyproject_data and "poetry" in pyproject_data["tool"]:
+        poetry_data = pyproject_data["tool"]["poetry"]
+        new_pyproject["project"]["name"] = poetry_data.get("name")
+        new_pyproject["project"]["version"] = poetry_data.get("version")
+        new_pyproject["project"]["description"] = poetry_data.get("description")
         new_pyproject["project"]["authors"] = [
             {
                 "name": author.split("<")[0].strip(),
                 "email": author.split("<")[1].strip(">").strip(),
             }
-            for author in poetry.get("authors", [])
+            for author in poetry_data.get("authors", [])
         ]
-        new_pyproject["project"]["requires-python"] = poetry.get("python")
+        new_pyproject["project"]["requires-python"] = poetry_data.get("python")
     else:
         # If it's already in the new format, just copy the project section
-        new_pyproject["project"] = pyproject.get("project", {})
+        new_pyproject["project"] = pyproject_data.get("project", {})
     # Migrate or copy dependencies
     if "dependencies" in new_pyproject["project"]:
         # If dependencies are already in the new format, keep them as is
         pass
-    elif "dependencies" in poetry:
+    elif poetry_data and "dependencies" in poetry_data:
         new_pyproject["project"]["dependencies"] = []
-        for dep, version in poetry["dependencies"].items():
+        for dep, version in poetry_data["dependencies"].items():
             if isinstance(version, dict):  # Handle extras
                 extras = ",".join(version.get("extras", []))
                 new_dep = f"{dep}[{extras}]"
@@ -67,10 +67,10 @@ def migrate_pyproject(input_file, output_file):
             new_pyproject["project"]["dependencies"].append(new_dep)
     # Migrate or copy scripts
-    if "scripts" in poetry:
-        new_pyproject["project"]["scripts"] = poetry["scripts"]
-    elif "scripts" in pyproject.get("project", {}):
-        new_pyproject["project"]["scripts"] = pyproject["project"]["scripts"]
+    if poetry_data and "scripts" in poetry_data:
+        new_pyproject["project"]["scripts"] = poetry_data["scripts"]
+    elif pyproject_data.get("project", {}) and "scripts" in pyproject_data["project"]:
+        new_pyproject["project"]["scripts"] = pyproject_data["project"]["scripts"]
     else:
         new_pyproject["project"]["scripts"] = {}
@@ -87,8 +87,8 @@ def migrate_pyproject(input_file, output_file):
     new_pyproject["project"]["scripts"]["run_crew"] = f"{module_name}.main:run"
     # Migrate optional dependencies
-    if "extras" in poetry:
-        new_pyproject["project"]["optional-dependencies"] = poetry["extras"]
+    if poetry_data and "extras" in poetry_data:
+        new_pyproject["project"]["optional-dependencies"] = poetry_data["extras"]
     # Backup the old pyproject.toml
     backup_file = "pyproject-old.toml"

View File

@@ -6,6 +6,7 @@ from functools import reduce
 from typing import Any, Dict, List
 import click
+import tomli
 from rich.console import Console
 from crewai.cli.authentication.utils import TokenManager
@@ -54,6 +55,13 @@ def simple_toml_parser(content):
     return result
+def read_toml(file_path: str = "pyproject.toml"):
+    """Read the content of a TOML file and return it as a dictionary."""
+    with open(file_path, "rb") as f:
+        toml_dict = tomli.load(f)
+    return toml_dict
 def parse_toml(content):
     if sys.version_info >= (3, 11):
         return tomllib.loads(content)
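The new `read_toml` helper centralizes binary-mode TOML reading, and the tomli dependency exists because `tomllib` only became stdlib in Python 3.11 (the "add tomli so we can support 3.10" commit). A self-contained sketch of the same idea; note the version fallback here is a hedged variant, since the helper in `utils.py` imports `tomli` unconditionally:

```python
import sys

# tomllib is stdlib only on Python 3.11+; tomli is the API-compatible
# backport this changeset adds as a dependency for Python 3.10.
if sys.version_info >= (3, 11):
    import tomllib as _toml
else:
    import tomli as _toml

def read_toml(file_path: str = "pyproject.toml") -> dict:
    """Read a TOML file and return its contents as a dictionary."""
    with open(file_path, "rb") as f:  # both parsers require binary mode
        return _toml.load(f)
```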

uv.lock generated (1906 lines)

File diff suppressed because it is too large.