Compare commits


14 Commits

| Author | SHA1 | Message | Date |
|---|---|---|---|
| Lorenze Jay | 1589230833 | Merge branch 'main' of github.com:crewAIInc/crewAI into fix/memory-embedder-config | 2024-10-22 12:28:07 -07:00 |
| Lorenze Jay | 6d0251224e | Merge branch 'fix/memory-embedder-config' of github.com:crewAIInc/crewAI into fix/memory-embedder-config | 2024-10-22 12:27:50 -07:00 |
| Lorenze Jay | e2f70cb53f | updates to add more docs and correct imports with huggingface embedding server enabled | 2024-10-22 12:27:47 -07:00 |
| Brandon Hancock | 0dd522ddff | Merge branch 'main' into fix/memory-embedder-config | 2024-10-21 19:41:45 -04:00 |
| Lorenze Jay | 5803b3fb69 | fixed run types | 2024-10-21 16:04:42 -07:00 |
| Lorenze Jay | 31c3082740 | fixed docs | 2024-10-21 14:29:11 -07:00 |
| Lorenze Jay | 21afc46c0d | Merge branch 'main' of github.com:crewAIInc/crewAI into fix/memory-embedder-config | 2024-10-21 14:24:35 -07:00 |
| Lorenze Jay | 78882c6de2 | rm prints | 2024-10-21 14:24:26 -07:00 |
| Lorenze Jay | 2786086974 | fixes | 2024-10-21 14:24:07 -07:00 |
| Lorenze Jay | 6b12ac9c0b | Merge branch 'main' of github.com:crewAIInc/crewAI into fix/memory-embedder-config | 2024-10-21 09:31:56 -07:00 |
| Lorenze Jay | 266ecff395 | WIP: brandons notes | 2024-10-21 09:31:39 -07:00 |
| Lorenze Jay | 34d748d18e | raise error on unsupported provider | 2024-10-21 08:37:42 -07:00 |
| Lorenze Jay | 79f527576b | some fixes | 2024-10-20 18:26:24 -07:00 |
| Lorenze Jay | 3fc83c624b | ensure original embedding config works | 2024-10-20 18:12:57 -07:00 |
19 changed files with 1170 additions and 1227 deletions

View File

@@ -351,7 +351,7 @@ pre-commit install
 ### Running Tests
 ```bash
-uv run pytest .
+uvx pytest
 ```
 ### Running static type checks

View File

@@ -31,17 +31,16 @@ Think of an agent as a member of a team, with specific skills and a particular j
 | **Max RPM** *(optional)* | `max_rpm` | Max RPM is the maximum number of requests per minute the agent can perform to avoid rate limits. It's optional and can be left unspecified, with a default value of `None`. |
 | **Max Execution Time** *(optional)* | `max_execution_time` | Max Execution Time is the maximum execution time for an agent to execute a task. It's optional and can be left unspecified, with a default value of `None`, meaning no max execution time. |
 | **Verbose** *(optional)* | `verbose` | Setting this to `True` configures the internal logger to provide detailed execution logs, aiding in debugging and monitoring. Default is `False`. |
 | **Allow Delegation** *(optional)* | `allow_delegation` | Agents can delegate tasks or questions to one another, ensuring that each task is handled by the most suitable agent. Default is `False`. |
 | **Step Callback** *(optional)* | `step_callback` | A function that is called after each step of the agent. This can be used to log the agent's actions or to perform other operations. It will overwrite the crew `step_callback`. |
 | **Cache** *(optional)* | `cache` | Indicates if the agent should use a cache for tool usage. Default is `True`. |
 | **System Template** *(optional)* | `system_template` | Specifies the system format for the agent. Default is `None`. |
 | **Prompt Template** *(optional)* | `prompt_template` | Specifies the prompt format for the agent. Default is `None`. |
 | **Response Template** *(optional)* | `response_template` | Specifies the response format for the agent. Default is `None`. |
 | **Allow Code Execution** *(optional)* | `allow_code_execution` | Enable code execution for the agent. Default is `False`. |
 | **Max Retry Limit** *(optional)* | `max_retry_limit` | Maximum number of retries for an agent to execute a task when an error occurs. Default is `2`. |
 | **Use System Prompt** *(optional)* | `use_system_prompt` | Adds the ability to not use system prompt (to support o1 models). Default is `True`. |
 | **Respect Context Window** *(optional)* | `respect_context_window` | Summary strategy to avoid overflowing the context window. Default is `True`. |
-| **Code Execution Mode** *(optional)* | `code_execution_mode` | Determines the mode for code execution: 'safe' (using Docker) or 'unsafe' (direct execution on the host machine). Default is `safe`. |
 ## Creating an agent
@@ -84,7 +83,6 @@ agent = Agent(
     max_retry_limit=2, # Optional
     use_system_prompt=True, # Optional
     respect_context_window=True, # Optional
-    code_execution_mode='safe', # Optional, defaults to 'safe'
 )
 ```
@@ -158,4 +156,4 @@ crew = my_crew.kickoff(inputs={"input": "Mark Twain"})
 ## Conclusion
 Agents are the building blocks of the CrewAI framework. By understanding how to define and interact with agents,
-you can create sophisticated AI systems that leverage the power of collaborative intelligence. The `code_execution_mode` attribute provides flexibility in how agents execute code, allowing for both secure and direct execution options.
+you can create sophisticated AI systems that leverage the power of collaborative intelligence.

View File

@@ -6,7 +6,7 @@ icon: terminal
 # CrewAI CLI Documentation
-The CrewAI CLI provides a set of commands to interact with CrewAI, allowing you to create, train, run, and manage crews & flows.
+The CrewAI CLI provides a set of commands to interact with CrewAI, allowing you to create, train, run, and manage crews and pipelines.
 ## Installation
@@ -146,34 +146,3 @@ crewai run
 Make sure to run these commands from the directory where your CrewAI project is set up.
 Some commands may require additional configuration or setup within your project structure.
 </Note>
-### 9. API Keys
-When running ```crewai create crew``` command, the CLI will first show you the top 5 most common LLM providers and ask you to select one.
-Once you've selected an LLM provider, you will be prompted for API keys.
-#### Initial API key providers
-The CLI will initially prompt for API keys for the following services:
-* OpenAI
-* Groq
-* Anthropic
-* Google Gemini
-When you select a provider, the CLI will prompt you to enter your API key.
-#### Other Options
-If you select option 6, you will be able to select from a list of LiteLLM supported providers.
-When you select a provider, the CLI will prompt you to enter the Key name and the API key.
-See the following link for each provider's key name:
-* [LiteLLM Providers](https://docs.litellm.ai/docs/providers)
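The prompted key name and value end up in the project's `.env` file via `write_env_file` from `crewai.cli.utils` (imported in the `create_crew.py` diff further down). As a hedged illustration only, a minimal sketch of what such a helper might do; the real implementation may differ:

```python
from pathlib import Path


def write_env_file(folder_path: Path, env_vars: dict) -> None:
    # Hypothetical sketch, not the real crewai.cli.utils.write_env_file:
    # serialize each KEY=value pair on its own line into <project>/.env.
    lines = [f"{key}={value}" for key, value in env_vars.items()]
    (folder_path / ".env").write_text("\n".join(lines) + "\n")


# Example: store an OpenAI key the way the CLI prompt would.
import tempfile

with tempfile.TemporaryDirectory() as tmp:
    write_env_file(Path(tmp), {"OPENAI_API_KEY": "sk-placeholder"})
    content = (Path(tmp) / ".env").read_text()
```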

View File

@@ -118,7 +118,7 @@ Alternatively, you can directly pass the OpenAIEmbeddingFunction to the embedder
 Example:
 ```python Code
 from crewai import Crew, Agent, Task, Process
-from chromadb.utils.embedding_functions import OpenAIEmbeddingFunction
+from chromadb.utils.embedding_functions.openai_embedding_function import OpenAIEmbeddingFunction
 my_crew = Crew(
@@ -174,7 +174,6 @@ my_crew = Crew(
 ### Using Azure OpenAI embeddings
 ```python Code
-from chromadb.utils.embedding_functions import OpenAIEmbeddingFunction
 from crewai import Crew, Agent, Task, Process
 my_crew = Crew(
@@ -183,7 +182,7 @@ my_crew = Crew(
     process=Process.sequential,
     memory=True,
     verbose=True,
-    embedder=OpenAIEmbeddingFunction(
+    embedder=embedding_functions.OpenAIEmbeddingFunction(
         api_key="YOUR_API_KEY",
         api_base="YOUR_API_BASE_PATH",
         api_type="azure",
@@ -196,7 +195,6 @@ my_crew = Crew(
 ### Using Vertex AI embeddings
 ```python Code
-from chromadb.utils.embedding_functions import GoogleVertexEmbeddingFunction
 from crewai import Crew, Agent, Task, Process
 my_crew = Crew(
@@ -205,7 +203,7 @@ my_crew = Crew(
     process=Process.sequential,
     memory=True,
     verbose=True,
-    embedder=GoogleVertexEmbeddingFunction(
+    embedder=embedding_functions.GoogleVertexEmbeddingFunction(
         project_id="YOUR_PROJECT_ID",
         region="YOUR_REGION",
         api_key="YOUR_API_KEY",
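A small sketch of the Azure settings visible in the hunk above. The `validate_embedder_kwargs` helper is hypothetical (not part of crewAI or Chroma); assembling the settings as a plain dict first makes leftover placeholders easy to catch before the embedding function is constructed:

```python
# Hypothetical helper: sanity-check the Azure OpenAI embedder settings shown
# in the docs above before passing them to OpenAIEmbeddingFunction(**kwargs).
def validate_embedder_kwargs(api_key: str, api_base: str) -> dict:
    kwargs = {
        "api_key": api_key,
        "api_base": api_base,
        "api_type": "azure",  # fixed value in the Azure example above
    }
    missing = [k for k, v in kwargs.items() if not v]
    if missing:
        raise ValueError(f"missing embedder settings: {missing}")
    return kwargs


ok = validate_embedder_kwargs("YOUR_API_KEY", "YOUR_API_BASE_PATH")
```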

View File

@@ -1,6 +1,6 @@
 [project]
 name = "crewai"
-version = "0.76.2"
+version = "0.74.2"
 description = "Cutting-edge framework for orchestrating role-playing, autonomous AI agents. By fostering collaborative intelligence, CrewAI empowers agents to work together seamlessly, tackling complex tasks."
 readme = "README.md"
 requires-python = ">=3.10,<=3.13"

View File

@@ -14,5 +14,5 @@ warnings.filterwarnings(
     category=UserWarning,
     module="pydantic.main",
 )
-__version__ = "0.76.2"
+__version__ = "0.74.2"
 __all__ = ["Agent", "Crew", "Process", "Task", "Pipeline", "Router", "LLM", "Flow"]

View File

@@ -1,8 +1,6 @@
 import os
-import shutil
-import subprocess
 from inspect import signature
-from typing import Any, List, Literal, Optional, Union
+from typing import Any, List, Optional, Union
 from pydantic import Field, InstanceOf, PrivateAttr, model_validator
@@ -114,10 +112,6 @@ class Agent(BaseAgent):
         default=2,
         description="Maximum number of retries for an agent to execute a task when an error occurs.",
     )
-    code_execution_mode: Literal["safe", "unsafe"] = Field(
-        default="safe",
-        description="Mode for code execution: 'safe' (using Docker) or 'unsafe' (direct execution).",
-    )
     @model_validator(mode="after")
     def post_init_setup(self):
@@ -179,9 +173,6 @@ class Agent(BaseAgent):
         if not self.agent_executor:
             self._setup_agent_executor()
-        if self.allow_code_execution:
-            self._validate_docker_installation()
         return self
     def _setup_agent_executor(self):
@@ -317,9 +308,7 @@ class Agent(BaseAgent):
         try:
             from crewai_tools import CodeInterpreterTool
-            # Set the unsafe_mode based on the code_execution_mode attribute
-            unsafe_mode = self.code_execution_mode == "unsafe"
-            return [CodeInterpreterTool(unsafe_mode=unsafe_mode)]
+            return [CodeInterpreterTool()]
         except ModuleNotFoundError:
             self._logger.log(
                 "info", "Coding tools not available. Install crewai_tools. "
@@ -419,25 +408,6 @@ class Agent(BaseAgent):
         return "\n".join(tool_strings)
-    def _validate_docker_installation(self) -> None:
-        """Check if Docker is installed and running."""
-        if not shutil.which("docker"):
-            raise RuntimeError(
-                f"Docker is not installed. Please install Docker to use code execution with agent: {self.role}"
-            )
-        try:
-            subprocess.run(
-                ["docker", "info"],
-                check=True,
-                stdout=subprocess.PIPE,
-                stderr=subprocess.PIPE,
-            )
-        except subprocess.CalledProcessError:
-            raise RuntimeError(
-                f"Docker is not running. Please start Docker to use code execution with agent: {self.role}"
-            )
     @staticmethod
     def __tools_names(tools) -> str:
         return ", ".join([t.name for t in tools])
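The deleted `_validate_docker_installation` reads well as a standalone preflight check. A sketch extracted from the removed method above, with the agent role passed in instead of read from `self`, and a `docker_binary` parameter added here purely so the failure path can be demonstrated without Docker:

```python
import shutil
import subprocess


def validate_docker_installation(role: str, docker_binary: str = "docker") -> None:
    """Check that Docker is installed and the daemon is reachable
    (logic taken from the method removed in the diff above)."""
    if not shutil.which(docker_binary):
        raise RuntimeError(
            f"Docker is not installed. Please install Docker to use code execution with agent: {role}"
        )
    try:
        subprocess.run(
            [docker_binary, "info"],
            check=True,
            stdout=subprocess.PIPE,
            stderr=subprocess.PIPE,
        )
    except subprocess.CalledProcessError:
        raise RuntimeError(
            f"Docker is not running. Please start Docker to use code execution with agent: {role}"
        )


# With a binary name that certainly doesn't exist, the first check fires:
try:
    validate_docker_installation("researcher", docker_binary="no-such-docker-binary")
    error_message = None
except RuntimeError as exc:
    error_message = str(exc)
```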

View File

@@ -32,12 +32,10 @@ def crewai():
 @crewai.command()
 @click.argument("type", type=click.Choice(["crew", "pipeline", "flow"]))
 @click.argument("name")
-@click.option("--provider", type=str, help="The provider to use for the crew")
-@click.option("--skip_provider", is_flag=True, help="Skip provider validation")
-def create(type, name, provider, skip_provider=False):
+def create(type, name):
     """Create a new crew, pipeline, or flow."""
     if type == "crew":
-        create_crew(name, provider, skip_provider)
+        create_crew(name)
     elif type == "pipeline":
         create_pipeline(name)
     elif type == "flow":

View File

@@ -1,16 +1,8 @@
-import sys
 from pathlib import Path
 import click
-from crewai.cli.constants import ENV_VARS
-from crewai.cli.provider import (
-    PROVIDERS,
-    get_provider_data,
-    select_model,
-    select_provider,
-)
 from crewai.cli.utils import copy_template, load_env_vars, write_env_file
+from crewai.cli.provider import get_provider_data, select_provider, PROVIDERS
+from crewai.cli.constants import ENV_VARS
 def create_folder_structure(name, parent_folder=None):
@@ -22,19 +14,11 @@ def create_folder_structure(name, parent_folder=None):
     else:
         folder_path = Path(folder_name)
-    if folder_path.exists():
-        if not click.confirm(
-            f"Folder {folder_name} already exists. Do you want to override it?"
-        ):
-            click.secho("Operation cancelled.", fg="yellow")
-            sys.exit(0)
-        click.secho(f"Overriding folder {folder_name}...", fg="green", bold=True)
-    else:
-        click.secho(
-            f"Creating {'crew' if parent_folder else 'folder'} {folder_name}...",
-            fg="green",
-            bold=True,
-        )
+    click.secho(
+        f"Creating {'crew' if parent_folder else 'folder'} {folder_name}...",
+        fg="green",
+        bold=True,
+    )
     if not folder_path.exists():
         folder_path.mkdir(parents=True)
@@ -43,6 +27,11 @@ def create_folder_structure(name, parent_folder=None):
         (folder_path / "src" / folder_name).mkdir(parents=True)
         (folder_path / "src" / folder_name / "tools").mkdir(parents=True)
         (folder_path / "src" / folder_name / "config").mkdir(parents=True)
+    else:
+        click.secho(
+            f"\tFolder {folder_name} already exists.",
+            fg="yellow",
+        )
     return folder_path, folder_name, class_name
@@ -81,84 +70,37 @@ def copy_template_files(folder_path, name, class_name, parent_folder):
     copy_template(src_file, dst_file, name, class_name, folder_path.name)
-def create_crew(name, provider=None, skip_provider=False, parent_folder=None):
+def create_crew(name, parent_folder=None):
     folder_path, folder_name, class_name = create_folder_structure(name, parent_folder)
     env_vars = load_env_vars(folder_path)
-    if not skip_provider:
-        if not provider:
-            provider_models = get_provider_data()
-            if not provider_models:
-                return
-        existing_provider = None
-        for provider, env_keys in ENV_VARS.items():
-            if any(key in env_vars for key in env_keys):
-                existing_provider = provider
-                break
-        if existing_provider:
-            if not click.confirm(
-                f"Found existing environment variable configuration for {existing_provider.capitalize()}. Do you want to override it?"
-            ):
-                click.secho("Keeping existing provider configuration.", fg="yellow")
-                return
-        provider_models = get_provider_data()
-        if not provider_models:
-            return
-        while True:
-            selected_provider = select_provider(provider_models)
-            if selected_provider is None:  # User typed 'q'
-                click.secho("Exiting...", fg="yellow")
-                sys.exit(0)
-            if selected_provider:  # Valid selection
-                break
-            click.secho(
-                "No provider selected. Please try again or press 'q' to exit.", fg="red"
-            )
-        while True:
-            selected_model = select_model(selected_provider, provider_models)
-            if selected_model is None:  # User typed 'q'
-                click.secho("Exiting...", fg="yellow")
-                sys.exit(0)
-            if selected_model:  # Valid selection
-                break
-            click.secho(
-                "No model selected. Please try again or press 'q' to exit.", fg="red"
-            )
-        if selected_provider in PROVIDERS:
-            api_key_var = ENV_VARS[selected_provider][0]
-        else:
-            api_key_var = click.prompt(
-                f"Enter the environment variable name for your {selected_provider.capitalize()} API key",
-                type=str,
-                default="",
-            )
-        api_key_value = ""
-        click.echo(
-            f"Enter your {selected_provider.capitalize()} API key (press Enter to skip): ",
-            nl=False,
-        )
-        try:
-            api_key_value = input()
-        except (KeyboardInterrupt, EOFError):
-            api_key_value = ""
-        if api_key_value.strip():
-            env_vars = {api_key_var: api_key_value}
-            write_env_file(folder_path, env_vars)
-            click.secho("API key saved to .env file", fg="green")
-        else:
-            click.secho(
-                "No API key provided. Skipping .env file creation.", fg="yellow"
-            )
-        env_vars["MODEL"] = selected_model
-        click.secho(f"Selected model: {selected_model}", fg="green")
+    provider_models = get_provider_data()
+    if not provider_models:
+        return
+    selected_provider = select_provider(provider_models)
+    if not selected_provider:
+        return
+    provider = selected_provider
+    # selected_model = select_model(provider, provider_models)
+    # if not selected_model:
+    #     return
+    # model = selected_model
+    if provider in PROVIDERS:
+        api_key_var = ENV_VARS[provider][0]
+    else:
+        api_key_var = click.prompt(
+            f"Enter the environment variable name for your {provider.capitalize()} API key",
+            type=str,
+        )
+    env_vars = {api_key_var: "YOUR_API_KEY_HERE"}
+    write_env_file(folder_path, env_vars)
+    # env_vars['MODEL'] = model
+    # click.secho(f"Selected model: {model}", fg="green")
     package_dir = Path(__file__).parent
     templates_dir = package_dir / "templates" / "crew"
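Both versions of `create_crew` above resolve the environment-variable name for the API key the same way: known providers map to a fixed name in `ENV_VARS`, anything else is prompted for. A hypothetical, simplified sketch of that lookup (the `ENV_VARS` subset below is illustrative only; the real table lives in `crewai.cli.constants`, and `click.prompt` is stubbed out with a callable):

```python
# Illustrative subset of the provider-to-key-name table; not the real constants.
ENV_VARS = {
    "openai": ["OPENAI_API_KEY"],
    "groq": ["GROQ_API_KEY"],
    "anthropic": ["ANTHROPIC_API_KEY"],
}
PROVIDERS = list(ENV_VARS)


def api_key_var_for(provider, prompt=lambda p: f"{p.upper()}_API_KEY"):
    """Return the env-var name for a provider's API key.

    `prompt` stands in for the interactive click.prompt(...) fallback used
    for providers outside the predefined list.
    """
    if provider in PROVIDERS:
        return ENV_VARS[provider][0]
    return prompt(provider)
```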

View File

@@ -7,7 +7,7 @@ def plot_flow() -> None:
     """
     Plot the flow by running a command in the UV environment.
     """
-    command = ["uv", "run", "plot"]
+    command = ["uv", "run", "plot_flow"]
     try:
         result = subprocess.run(command, capture_output=False, text=True, check=True)

View File

@@ -1,91 +1,67 @@
 import json
 import time
-from collections import defaultdict
-from pathlib import Path
-import click
 import requests
-from crewai.cli.constants import JSON_URL, MODELS, PROVIDERS
+from collections import defaultdict
+import click
+from pathlib import Path
+from crewai.cli.constants import PROVIDERS, MODELS, JSON_URL
 def select_choice(prompt_message, choices):
     """
     Presents a list of choices to the user and prompts them to select one.
     Args:
     - prompt_message (str): The message to display to the user before presenting the choices.
     - choices (list): A list of options to present to the user.
     Returns:
-    - str: The selected choice from the list, or None if the user chooses to quit.
+    - str: The selected choice from the list, or None if the operation is aborted or an invalid selection is made.
     """
-    provider_models = get_provider_data()
-    if not provider_models:
-        return
     click.secho(prompt_message, fg="cyan")
     for idx, choice in enumerate(choices, start=1):
         click.secho(f"{idx}. {choice}", fg="cyan")
-    click.secho("q. Quit", fg="cyan")
-    while True:
-        choice = click.prompt(
-            "Enter the number of your choice or 'q' to quit", type=str
-        )
-        if choice.lower() == "q":
-            return None
-        try:
-            selected_index = int(choice) - 1
-            if 0 <= selected_index < len(choices):
-                return choices[selected_index]
-        except ValueError:
-            pass
-        click.secho(
-            "Invalid selection. Please select a number between 1 and 6 or 'q' to quit.",
-            fg="red",
-        )
+    try:
+        selected_index = click.prompt("Enter the number of your choice", type=int) - 1
+    except click.exceptions.Abort:
+        click.secho("Operation aborted by the user.", fg="red")
+        return None
+    if not (0 <= selected_index < len(choices)):
+        click.secho("Invalid selection.", fg="red")
+        return None
+    return choices[selected_index]
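The selection rule shared by both `select_choice` variants above can be distilled into a pure helper (hypothetical, not part of the diff): the user's raw input is either `q` to quit, or a 1-based index that must fall inside the choice list. Separating it from the `click` I/O makes the validation easy to exercise:

```python
def choice_from_input(raw, choices):
    """Pure-logic core of select_choice: 'q' quits, otherwise a 1-based
    index selects an entry; anything else is an invalid selection."""
    if raw.strip().lower() == "q":
        return None
    try:
        idx = int(raw) - 1
    except ValueError:
        return None  # non-numeric input
    if 0 <= idx < len(choices):
        return choices[idx]
    return None  # out-of-range index
```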
 def select_provider(provider_models):
     """
     Presents a list of providers to the user and prompts them to select one.
     Args:
     - provider_models (dict): A dictionary of provider models.
     Returns:
-    - str: The selected provider
-    - None: If user explicitly quits
+    - str: The selected provider, or None if the operation is aborted or an invalid selection is made.
     """
     predefined_providers = [p.lower() for p in PROVIDERS]
     all_providers = sorted(set(predefined_providers + list(provider_models.keys())))
-    provider = select_choice(
-        "Select a provider to set up:", predefined_providers + ["other"]
-    )
-    if provider is None:  # User typed 'q'
-        return None
-    provider = provider.lower()
-    if provider == "other":
-        provider = select_choice("Select a provider from the full list:", all_providers)
-        if provider is None:  # User typed 'q'
-            return None
-    return provider
+    provider = select_choice("Select a provider to set up:", predefined_providers + ['other'])
+    if not provider:
+        return None
+    if provider == 'other':
+        provider = select_choice("Select a provider from the full list:", all_providers)
+        if not provider:
+            return None
+    return provider.lower() if provider else False
 def select_model(provider, provider_models):
     """
     Presents a list of models for a given provider to the user and prompts them to select one.
     Args:
     - provider (str): The provider for which to select a model.
     - provider_models (dict): A dictionary of provider models.
     Returns:
     - str: The selected model, or None if the operation is aborted or an invalid selection is made.
     """
@@ -100,49 +76,37 @@ def select_model(provider, provider_models):
         click.secho(f"No models available for provider '{provider}'.", fg="red")
         return None
-    selected_model = select_choice(
-        f"Select a model to use for {provider.capitalize()}:", available_models
-    )
+    selected_model = select_choice(f"Select a model to use for {provider.capitalize()}:", available_models)
     return selected_model
 def load_provider_data(cache_file, cache_expiry):
     """
     Loads provider data from a cache file if it exists and is not expired. If the cache is expired or corrupted, it fetches the data from the web.
     Args:
     - cache_file (Path): The path to the cache file.
     - cache_expiry (int): The cache expiry time in seconds.
     Returns:
     - dict or None: The loaded provider data or None if the operation fails.
     """
     current_time = time.time()
-    if (
-        cache_file.exists()
-        and (current_time - cache_file.stat().st_mtime) < cache_expiry
-    ):
+    if cache_file.exists() and (current_time - cache_file.stat().st_mtime) < cache_expiry:
         data = read_cache_file(cache_file)
         if data:
             return data
-        click.secho(
-            "Cache is corrupted. Fetching provider data from the web...", fg="yellow"
-        )
+        click.secho("Cache is corrupted. Fetching provider data from the web...", fg="yellow")
     else:
-        click.secho(
-            "Cache expired or not found. Fetching provider data from the web...",
-            fg="cyan",
-        )
+        click.secho("Cache expired or not found. Fetching provider data from the web...", fg="cyan")
     return fetch_provider_data(cache_file)
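The freshness test used by `load_provider_data` above compares the cache file's mtime against an expiry window. Pulled out as a small predicate (a sketch for clarity, not part of the diff), with a quick demonstration against a throwaway file:

```python
import tempfile
import time
from pathlib import Path


def cache_is_fresh(cache_file: Path, cache_expiry: int) -> bool:
    """Fresh while the file exists and its mtime is within the expiry window
    (the same condition load_provider_data checks above)."""
    return (
        cache_file.exists()
        and (time.time() - cache_file.stat().st_mtime) < cache_expiry
    )


# A just-created file is fresh; once it's gone, the check fails closed.
with tempfile.NamedTemporaryFile() as tmp:
    fresh = cache_is_fresh(Path(tmp.name), 24 * 3600)
missing = cache_is_fresh(Path(tmp.name), 24 * 3600)  # deleted on context exit
```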
 def read_cache_file(cache_file):
     """
     Reads and returns the JSON content from a cache file. Returns None if the file contains invalid JSON.
     Args:
     - cache_file (Path): The path to the cache file.
     Returns:
     - dict or None: The JSON content of the cache file or None if the JSON is invalid.
     """
@@ -152,14 +116,13 @@ def read_cache_file(cache_file):
     except json.JSONDecodeError:
         return None
 def fetch_provider_data(cache_file):
     """
     Fetches provider data from a specified URL and caches it to a file.
     Args:
     - cache_file (Path): The path to the cache file.
     Returns:
     - dict or None: The fetched provider data or None if the operation fails.
     """
@@ -176,42 +139,38 @@ def fetch_provider_data(cache_file):
         click.secho("Error parsing provider data. Invalid JSON format.", fg="red")
         return None
 def download_data(response):
     """
     Downloads data from a given HTTP response and returns the JSON content.
     Args:
     - response (requests.Response): The HTTP response object.
     Returns:
     - dict: The JSON content of the response.
     """
-    total_size = int(response.headers.get("content-length", 0))
+    total_size = int(response.headers.get('content-length', 0))
     block_size = 8192
     data_chunks = []
-    with click.progressbar(
-        length=total_size, label="Downloading", show_pos=True
-    ) as progress_bar:
+    with click.progressbar(length=total_size, label='Downloading', show_pos=True) as progress_bar:
         for chunk in response.iter_content(block_size):
             if chunk:
                 data_chunks.append(chunk)
                 progress_bar.update(len(chunk))
-    data_content = b"".join(data_chunks)
-    return json.loads(data_content.decode("utf-8"))
+    data_content = b''.join(data_chunks)
+    return json.loads(data_content.decode('utf-8'))
 def get_provider_data():
     """
     Retrieves provider data from a cache file, filters out models based on provider criteria, and returns a dictionary of providers mapped to their models.
     Returns:
     - dict or None: A dictionary of providers mapped to their models or None if the operation fails.
     """
-    cache_dir = Path.home() / ".crewai"
+    cache_dir = Path.home() / '.crewai'
     cache_dir.mkdir(exist_ok=True)
-    cache_file = cache_dir / "provider_cache.json"
+    cache_file = cache_dir / 'provider_cache.json'
     cache_expiry = 24 * 3600
     data = load_provider_data(cache_file, cache_expiry)
     if not data:
@@ -220,8 +179,8 @@ def get_provider_data():
     provider_models = defaultdict(list)
     for model_name, properties in data.items():
         provider = properties.get("litellm_provider", "").strip().lower()
-        if "http" in provider or provider == "other":
+        if 'http' in provider or provider == 'other':
             continue
         if provider:
             provider_models[provider].append(model_name)
     return provider_models
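The grouping loop at the end of `get_provider_data` is easy to exercise in isolation. A self-contained version of the same filter (the sample model data below is made up for illustration): URL-style and `other` providers are skipped, and remaining model names are grouped under their `litellm_provider`:

```python
from collections import defaultdict


def group_models_by_provider(data):
    """Mirror of the filtering loop in get_provider_data above."""
    provider_models = defaultdict(list)
    for model_name, properties in data.items():
        provider = properties.get("litellm_provider", "").strip().lower()
        if "http" in provider or provider == "other":
            continue  # drop URL-style and catch-all providers
        if provider:
            provider_models[provider].append(model_name)
    return provider_models


# Illustrative sample data, not real LiteLLM output:
sample = {
    "gpt-4o": {"litellm_provider": "openai"},
    "claude-3-opus": {"litellm_provider": "anthropic"},
    "weird-model": {"litellm_provider": "https://example.com"},
}
grouped = group_models_by_provider(sample)
```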

View File

@@ -5,7 +5,7 @@ description = "{{name}} using crewAI"
 authors = [{ name = "Your Name", email = "you@example.com" }]
 requires-python = ">=3.10,<=3.13"
 dependencies = [
-    "crewai[tools]>=0.76.2,<1.0.0"
+    "crewai[tools]>=0.74.2,<1.0.0"
 ]
 [project.scripts]

View File

@@ -5,7 +5,7 @@ description = "{{name}} using crewAI"
 authors = [{ name = "Your Name", email = "you@example.com" }]
 requires-python = ">=3.10,<=3.13"
 dependencies = [
-    "crewai[tools]>=0.76.2,<1.0.0",
+    "crewai[tools]>=0.74.2,<1.0.0",
 ]
 
 [project.scripts]

View File

@@ -6,7 +6,7 @@ authors = ["Your Name <you@example.com>"]
 [tool.poetry.dependencies]
 python = ">=3.10,<=3.13"
-crewai = { extras = ["tools"], version = ">=0.76.2,<1.0.0" }
+crewai = { extras = ["tools"], version = ">=0.74.2,<1.0.0" }
 asyncio = "*"
 
 [tool.poetry.scripts]

View File

@@ -5,7 +5,7 @@ description = "{{name}} using crewAI"
 authors = ["Your Name <you@example.com>"]
 requires-python = ">=3.10,<=3.13"
 dependencies = [
-    "crewai[tools]>=0.76.2,<1.0.0"
+    "crewai[tools]>=0.74.2,<1.0.0"
 ]
 
 [project.scripts]

View File

@@ -5,6 +5,6 @@ description = "Power up your crews with {{folder_name}}"
 readme = "README.md"
 requires-python = ">=3.10,<=3.13"
 dependencies = [
-    "crewai[tools]>=0.76.2"
+    "crewai[tools]>=0.74.2"
 ]

View File

@@ -435,16 +435,15 @@ class Crew(BaseModel):
         self, n_iterations: int, filename: str, inputs: Optional[Dict[str, Any]] = {}
     ) -> None:
         """Trains the crew for a given number of iterations."""
-        train_crew = self.copy()
-        train_crew._setup_for_training(filename)
+        self._setup_for_training(filename)
 
         for n_iteration in range(n_iterations):
-            train_crew._train_iteration = n_iteration
-            train_crew.kickoff(inputs=inputs)
+            self._train_iteration = n_iteration
+            self.kickoff(inputs=inputs)
 
         training_data = CrewTrainingHandler(TRAINING_DATA_FILE).load()
 
-        for agent in train_crew.agents:
+        for agent in self.agents:
             result = TaskEvaluator(agent).evaluate_training_data(
                 training_data=training_data, agent_id=str(agent.id)
             )
@@ -988,19 +987,17 @@ class Crew(BaseModel):
         inputs: Optional[Dict[str, Any]] = None,
    ) -> None:
         """Test and evaluate the Crew with the given inputs for n iterations concurrently using concurrent.futures."""
-        test_crew = self.copy()
-
-        self._test_execution_span = test_crew._telemetry.test_execution_span(
-            test_crew,
+        self._test_execution_span = self._telemetry.test_execution_span(
+            self,
             n_iterations,
             inputs,
             openai_model_name,  # type: ignore[arg-type]
         )  # type: ignore[arg-type]
-        evaluator = CrewEvaluator(test_crew, openai_model_name)  # type: ignore[arg-type]
+        evaluator = CrewEvaluator(self, openai_model_name)  # type: ignore[arg-type]
 
         for i in range(1, n_iterations + 1):
             evaluator.set_iteration(i)
-            test_crew.kickoff(inputs=inputs)
+            self.kickoff(inputs=inputs)
 
         evaluator.print_crew_evaluation_result()
View File

@@ -9,7 +9,6 @@ from unittest.mock import MagicMock, patch
 import instructor
 import pydantic_core
 import pytest
-
 from crewai.agent import Agent
 from crewai.agents.cache import CacheHandler
 from crewai.crew import Crew
@@ -498,7 +497,6 @@ def test_cache_hitting_between_agents():
 @pytest.mark.vcr(filter_headers=["authorization"])
 def test_api_calls_throttling(capsys):
     from unittest.mock import patch
-
     from crewai_tools import tool
 
     @tool
@@ -781,14 +779,11 @@ def test_async_task_execution_call_count():
     list_important_history.output = mock_task_output
     write_article.output = mock_task_output
 
-    with (
-        patch.object(
-            Task, "execute_sync", return_value=mock_task_output
-        ) as mock_execute_sync,
-        patch.object(
-            Task, "execute_async", return_value=mock_future
-        ) as mock_execute_async,
-    ):
+    with patch.object(
+        Task, "execute_sync", return_value=mock_task_output
+    ) as mock_execute_sync, patch.object(
+        Task, "execute_async", return_value=mock_future
+    ) as mock_execute_async:
         crew.kickoff()
 
         assert mock_execute_async.call_count == 2
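The hunk above only changes how the two `patch.object` context managers are written: one side uses the parenthesized multi-context `with` form, which CPython accepts from Python 3.10 onward (trailing comma allowed), while the other chains them on a single `with` line. Both behave identically. A small self-contained illustration of the parenthesized form and its LIFO exit order, using a toy context manager rather than the test's fixtures:

```python
from contextlib import contextmanager

@contextmanager
def tag(name, log):
    # Tiny stand-in for patch.object(): records enter/exit order.
    log.append(f"enter {name}")
    try:
        yield name
    finally:
        log.append(f"exit {name}")

log = []
with (
    tag("sync", log) as a,
    tag("async", log) as b,  # trailing comma is legal in this form
):
    pass

# Contexts exit in reverse (LIFO) order, same as the chained one-line form.
assert log == ["enter sync", "enter async", "exit async", "exit sync"]
```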
@@ -1110,7 +1105,6 @@ def test_dont_set_agents_step_callback_if_already_set():
 @pytest.mark.vcr(filter_headers=["authorization"])
 def test_crew_function_calling_llm():
     from unittest.mock import patch
-
     from crewai_tools import tool
 
     llm = "gpt-4o"
@@ -1454,6 +1448,52 @@ def test_crew_does_not_interpolate_without_inputs():
         interpolate_task_inputs.assert_not_called()
 
 
+# def test_crew_partial_inputs():
+#     agent = Agent(
+#         role="{topic} Researcher",
+#         goal="Express hot takes on {topic}.",
+#         backstory="You have a lot of experience with {topic}.",
+#     )
+#
+#     task = Task(
+#         description="Give me an analysis around {topic}.",
+#         expected_output="{points} bullet points about {topic}.",
+#     )
+#
+#     crew = Crew(agents=[agent], tasks=[task], inputs={"topic": "AI"})
+#     inputs = {"topic": "AI"}
+#     crew._interpolate_inputs(inputs=inputs)  # Manual call for now
+#
+#     assert crew.tasks[0].description == "Give me an analysis around AI."
+#     assert crew.tasks[0].expected_output == "{points} bullet points about AI."
+#     assert crew.agents[0].role == "AI Researcher"
+#     assert crew.agents[0].goal == "Express hot takes on AI."
+#     assert crew.agents[0].backstory == "You have a lot of experience with AI."
+
+
+# def test_crew_invalid_inputs():
+#     agent = Agent(
+#         role="{topic} Researcher",
+#         goal="Express hot takes on {topic}.",
+#         backstory="You have a lot of experience with {topic}.",
+#     )
+#
+#     task = Task(
+#         description="Give me an analysis around {topic}.",
+#         expected_output="{points} bullet points about {topic}.",
+#     )
+#
+#     crew = Crew(agents=[agent], tasks=[task], inputs={"subject": "AI"})
+#     inputs = {"subject": "AI"}
+#     crew._interpolate_inputs(inputs=inputs)  # Manual call for now
+#
+#     assert crew.tasks[0].description == "Give me an analysis around {topic}."
+#     assert crew.tasks[0].expected_output == "{points} bullet points about {topic}."
+#     assert crew.agents[0].role == "{topic} Researcher"
+#     assert crew.agents[0].goal == "Express hot takes on {topic}."
+#     assert crew.agents[0].backstory == "You have a lot of experience with {topic}."
 
 
 def test_task_callback_on_crew():
     from unittest.mock import MagicMock, patch
@@ -1730,10 +1770,7 @@ def test_manager_agent_with_tools_raises_exception():
 @patch("crewai.crew.Crew.kickoff")
 @patch("crewai.crew.CrewTrainingHandler")
 @patch("crewai.crew.TaskEvaluator")
-@patch("crewai.crew.Crew.copy")
-def test_crew_train_success(
-    copy_mock, task_evaluator, crew_training_handler, kickoff_mock
-):
+def test_crew_train_success(task_evaluator, crew_training_handler, kickoff):
     task = Task(
         description="Come up with a list of 5 interesting ideas to explore for an article, then write one amazing paragraph highlight for each idea that showcases how good an article about this topic could be. Return the list of ideas with their paragraph and your notes.",
         expected_output="5 bullet points with a paragraph for each idea.",
@@ -1744,19 +1781,9 @@ def test_crew_train_success(
         agents=[researcher, writer],
         tasks=[task],
     )
 
-    # Create a mock for the copied crew
-    copy_mock.return_value = crew
-
     crew.train(
         n_iterations=2, inputs={"topic": "AI"}, filename="trained_agents_data.pkl"
     )
 
-    # Ensure kickoff is called on the copied crew
-    kickoff_mock.assert_has_calls(
-        [mock.call(inputs={"topic": "AI"}), mock.call(inputs={"topic": "AI"})]
-    )
-
     task_evaluator.assert_has_calls(
         [
             mock.call(researcher),
@@ -1795,6 +1822,10 @@ def test_crew_train_success(
         ]
     )
 
+    kickoff.assert_has_calls(
+        [mock.call(inputs={"topic": "AI"}), mock.call(inputs={"topic": "AI"})]
+    )
+
 
 def test_crew_train_error():
     task = Task(
@@ -1809,7 +1840,7 @@ def test_crew_train_error():
     )
 
     with pytest.raises(TypeError) as e:
-        crew.train()  # type: ignore purposefully throwing err
+        crew.train()
 
     assert "train() missing 1 required positional argument: 'n_iterations'" in str(
         e
     )
@@ -2505,9 +2536,8 @@ def test_conditional_should_execute():
 @mock.patch("crewai.crew.CrewEvaluator")
-@mock.patch("crewai.crew.Crew.copy")
 @mock.patch("crewai.crew.Crew.kickoff")
-def test_crew_testing_function(kickoff_mock, copy_mock, crew_evaluator):
+def test_crew_testing_function(mock_kickoff, crew_evaluator):
     task = Task(
         description="Come up with a list of 5 interesting ideas to explore for an article, then write one amazing paragraph highlight for each idea that showcases how good an article about this topic could be. Return the list of ideas with their paragraph and your notes.",
         expected_output="5 bullet points with a paragraph for each idea.",
@@ -2518,15 +2548,11 @@ def test_crew_testing_function(kickoff_mock, copy_mock, crew_evaluator):
         agents=[researcher],
         tasks=[task],
     )
 
-    # Create a mock for the copied crew
-    copy_mock.return_value = crew
-
     n_iterations = 2
     crew.test(n_iterations, openai_model_name="gpt-4o-mini", inputs={"topic": "AI"})
 
-    # Ensure kickoff is called on the copied crew
-    kickoff_mock.assert_has_calls(
+    assert len(mock_kickoff.mock_calls) == n_iterations
+    mock_kickoff.assert_has_calls(
         [mock.call(inputs={"topic": "AI"}), mock.call(inputs={"topic": "AI"})]
     )
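The reworked assertions above pair `assert_has_calls` with an explicit length check, and that pairing is deliberate: `assert_has_calls` only verifies that the given sequence appears somewhere in the mock's recorded calls, in order; it does not fail if extra calls happened around it. A minimal sketch with a plain `MagicMock` (names here are illustrative, not the test's fixtures):

```python
from unittest import mock

m = mock.MagicMock()
m(inputs={"topic": "AI"})
m(inputs={"topic": "AI"})

# Passes: the two expected calls appear, in order, among m's calls.
m.assert_has_calls(
    [mock.call(inputs={"topic": "AI"}), mock.call(inputs={"topic": "AI"})]
)

# assert_has_calls alone would still pass if a third call were made,
# so pinning the total call count catches unexpected extra kickoffs.
assert len(m.mock_calls) == 2
```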

uv.lock (generated, 1912 changes)

File diff suppressed because it is too large