Compare commits


15 Commits

Author SHA1 Message Date
Devin AI
22b81c4dc8 Fix tests: Import SSLError from requests.exceptions
Co-Authored-By: Joe Moura <joao@crewai.com>
2025-05-13 12:07:29 +00:00
Devin AI
71eebe6b1f Fix lint issue: remove unused json import
Co-Authored-By: Joe Moura <joao@crewai.com>
2025-05-13 12:02:32 +00:00
Devin AI
130a7fc1e0 Add additional tests for edge cases in provider data fetching
Co-Authored-By: Joe Moura <joao@crewai.com>
2025-05-13 12:00:32 +00:00
Devin AI
44540ddbd1 Add CLI documentation with --skip_ssl_verify flag details
Co-Authored-By: Joe Moura <joao@crewai.com>
2025-05-13 11:58:54 +00:00
Devin AI
97f8a44605 Enhance error handling for provider data fetching and cache file management
Co-Authored-By: Joe Moura <joao@crewai.com>
2025-05-13 11:57:21 +00:00
Devin AI
cfde51ce79 Fix lint issues: remove unused imports
Co-Authored-By: Joe Moura <joao@crewai.com>
2025-05-13 11:55:13 +00:00
Devin AI
4d3e094a40 Fix SSL certificate verification issue in provider data fetching
Co-Authored-By: Joe Moura <joao@crewai.com>
2025-05-13 11:52:55 +00:00
Lucas Gomide
d55e596800 feat: support to load an Agent from a repository (#2816)
* feat: support to load an Agent from a repository

* test: fix get_auth_token test
2025-05-12 16:08:57 -04:00
Lucas Gomide
f700e014c9 fix: address race condition in FilteredStream by using context managers (#2818)
During the `sys.stdout = FilteredStream(old_stdout)` assignment, if any code (including logging, print, or internal library output) writes to `sys.stdout` before `__init__` completes, the `write()` method is called on a not-fully-initialized object, so `_lock` does not exist yet.
2025-05-12 15:05:14 -04:00
Vidit Ostwal
4e496d7a20 Added link to github issue (#2810)
Co-authored-by: Lucas Gomide <lucaslg200@gmail.com>
2025-05-12 08:27:18 -04:00
Lucas Gomide
8663c7e1c2 Enable ALL Ruff rules set by default (#2775)
* style: use Ruff default linter rules

* ci: check linter files over changed ones
2025-05-12 08:10:31 -04:00
Orce MARINKOVSKI
cb1a98cabf Update arize-phoenix-observability.mdx (#2595)
Adds the missing code to kick off monitoring for the crew.

Co-authored-by: Lorenze Jay <63378463+lorenzejay@users.noreply.github.com>
Co-authored-by: Tony Kipkemboi <iamtonykipkemboi@gmail.com>
2025-05-08 13:25:10 -04:00
Mark McDonald
369e6d109c Adds link to AI Studio when entering Gemini key (#2780)
I used ai.dev as the alternate URL since it takes up less space, but if this
is likely to confuse users, we can use the long form.

Co-authored-by: Tony Kipkemboi <iamtonykipkemboi@gmail.com>
2025-05-08 13:00:03 -04:00
Mark McDonald
2c011631f9 Clean up the Google setup section (#2785)
The Gemini & Vertex sections were conflated and a little hard to
distinguish, so I have put them in separate sections.

Also added the latest 2.5 and 2.0 flash-lite models, and added a note
that Gemma models work too.

Co-authored-by: Tony Kipkemboi <iamtonykipkemboi@gmail.com>
2025-05-08 12:24:38 -04:00
Rip&Tear
d3fc2b4477 Update security.md (#2779)
update policy for better readability
2025-05-08 09:00:41 -04:00
20 changed files with 1150 additions and 76 deletions

.github/security.md

@@ -1,19 +1,27 @@
CrewAI takes the security of our software products and services seriously, which includes all source code repositories managed through our GitHub organization.
If you believe you have found a security vulnerability in any CrewAI product or service, please report it to us as described below.
## CrewAI Security Vulnerability Reporting Policy
## Reporting a Vulnerability
Please do not report security vulnerabilities through public GitHub issues.
To report a vulnerability, please email us at security@crewai.com.
Please include the information requested below so that we can triage your report more quickly.
CrewAI prioritizes the security of our software products, services, and GitHub repositories. To promptly address vulnerabilities, follow these steps for reporting security issues:
- Type of issue (e.g. SQL injection, cross-site scripting, etc.)
- Full paths of source file(s) related to the manifestation of the issue
- The location of the affected source code (tag/branch/commit or direct URL)
- Any special configuration required to reproduce the issue
- Step-by-step instructions to reproduce the issue (please include screenshots if needed)
- Proof-of-concept or exploit code (if possible)
- Impact of the issue, including how an attacker might exploit the issue
### Reporting Process
Do **not** report vulnerabilities via public GitHub issues.
Once we have received your report, we will respond to you at the email address you provide. If the issue is confirmed, we will release a patch as soon as possible depending on the complexity of the issue.
Email all vulnerability reports directly to:
**security@crewai.com**
At this time, we are not offering a bug bounty program. Any rewards will be at our discretion.
### Required Information
To help us quickly validate and remediate the issue, your report must include:
- **Vulnerability Type:** Clearly state the vulnerability type (e.g., SQL injection, XSS, privilege escalation).
- **Affected Source Code:** Provide full file paths and direct URLs (branch, tag, or commit).
- **Reproduction Steps:** Include detailed, step-by-step instructions. Screenshots are recommended.
- **Special Configuration:** Document any special settings or configurations required to reproduce.
- **Proof-of-Concept (PoC):** Provide exploit or PoC code (if available).
- **Impact Assessment:** Clearly explain the severity and potential exploitation scenarios.
### Our Response
- We will acknowledge receipt of your report promptly via your provided email.
- Confirmed vulnerabilities will receive priority remediation based on severity.
- Patches will be released as swiftly as possible following verification.
### Reward Notice
Currently, we do not offer a bug bounty program. Rewards, if issued, are discretionary.


@@ -5,12 +5,29 @@ on: [pull_request]
jobs:
lint:
runs-on: ubuntu-latest
env:
TARGET_BRANCH: ${{ github.event.pull_request.base.ref }}
steps:
- uses: actions/checkout@v4
with:
fetch-depth: 0
- name: Install Requirements
- name: Fetch Target Branch
run: git fetch origin $TARGET_BRANCH --depth=1
- name: Install Ruff
run: pip install ruff
- name: Get Changed Python Files
id: changed-files
run: |
pip install ruff
merge_base=$(git merge-base origin/"$TARGET_BRANCH" HEAD)
changed_files=$(git diff --name-only --diff-filter=ACMRTUB "$merge_base" | grep '\.py$' || true)
echo "files<<EOF" >> $GITHUB_OUTPUT
echo "$changed_files" >> $GITHUB_OUTPUT
echo "EOF" >> $GITHUB_OUTPUT
- name: Run Ruff Linter
run: ruff check
- name: Run Ruff on Changed Files
if: ${{ steps.changed-files.outputs.files != '' }}
run: |
echo "${{ steps.changed-files.outputs.files }}" | tr " " "\n" | xargs -I{} ruff check "{}"


@@ -2,8 +2,3 @@ exclude = [
"templates",
"__init__.py",
]
[lint]
select = [
"I", # isort rules
]

docs/cli.md

@@ -0,0 +1,120 @@
# CrewAI CLI Documentation
The CrewAI Command Line Interface (CLI) provides tools for creating, managing, and running CrewAI projects.
## Installation
The CLI is automatically installed when you install the CrewAI package:
```bash
pip install crewai
```
## Available Commands
### Create Command
The `create` command allows you to create new crews or flows.
```bash
crewai create [TYPE] [NAME] [OPTIONS]
```
#### Arguments
- `TYPE`: Type of project to create. Must be either `crew` or `flow`.
- `NAME`: Name of the project to create.
#### Options
- `--provider`: The provider to use for the crew.
- `--skip_provider`: Skip provider validation.
- `--skip_ssl_verify`: Skip SSL certificate verification when fetching provider data (not secure).
#### Examples
Create a new crew:
```bash
crewai create crew my_crew
```
Create a new crew with a specific provider:
```bash
crewai create crew my_crew --provider openai
```
Create a new crew and skip SSL certificate verification (useful in environments with self-signed certificates):
```bash
crewai create crew my_crew --skip_ssl_verify
```
> **Warning**: Using the `--skip_ssl_verify` flag is not recommended in production environments as it bypasses SSL certificate verification, which can expose your system to security risks. Only use this flag in development environments or when you understand the security implications.
Create a new flow:
```bash
crewai create flow my_flow
```
### Run Command
The `run` command executes your crew.
```bash
crewai run
```
### Train Command
The `train` command trains your crew.
```bash
crewai train [OPTIONS]
```
#### Options
- `-n, --n_iterations`: Number of iterations to train the crew (default: 5).
- `-f, --filename`: Path to a custom file for training (default: "trained_agents_data.pkl").
### Reset Memories Command
The `reset_memories` command resets the crew memories.
```bash
crewai reset_memories [OPTIONS]
```
#### Options
- `-l, --long`: Reset LONG TERM memory.
- `-s, --short`: Reset SHORT TERM memory.
- `-e, --entities`: Reset ENTITIES memory.
- `-kn, --knowledge`: Reset KNOWLEDGE storage.
- `-k, --kickoff-outputs`: Reset LATEST KICKOFF TASK OUTPUTS.
- `-a, --all`: Reset ALL memories.
### Other Commands
- `version`: Show the installed version of crewai.
- `replay`: Replay the crew execution from a specific task.
- `log_tasks_outputs`: Retrieve your latest crew.kickoff() task outputs.
- `test`: Test the crew and evaluate the results.
- `install`: Install the Crew.
- `update`: Update the pyproject.toml of the Crew project to use uv.
- `signup`: Sign Up/Login to CrewAI+.
- `login`: Sign Up/Login to CrewAI+.
- `chat`: Start a conversation with the Crew.
## Security Considerations
When using the CrewAI CLI, be aware of the following security considerations:
1. **API Keys**: Store your API keys securely in environment variables or a `.env` file. Never hardcode them in your scripts (a loading sketch follows this list).
2. **SSL Verification**: The `--skip_ssl_verify` flag bypasses SSL certificate verification, which can expose your system to security risks. Only use this flag in development environments or when you understand the security implications.
3. **Provider Data**: When fetching provider data, ensure that you're using a secure connection. The CLI will display a warning when SSL verification is disabled.
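For point 1, a minimal sketch of loading keys from a `.env` file (crewai itself calls `load_dotenv()`, as the llm.py diff further down shows):
```python
# Hedged sketch: keep API keys in a git-ignored .env file and load them at runtime
import os

from dotenv import load_dotenv

load_dotenv()  # reads key=value pairs from .env into the process environment
gemini_key = os.environ.get("GEMINI_API_KEY")  # never hardcode keys in scripts
```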


@@ -700,4 +700,11 @@ recent_news = SpaceNewsKnowledgeSource(
- Configure appropriate embedding models
- Consider using local embedding providers for faster processing
</Accordion>
<Accordion title="One Time Knowledge">
- With the typical file structure provided by CrewAI, knowledge sources are embedded every time kickoff is triggered.
- If the knowledge sources are large, this is inefficient and adds latency, since the same data is re-embedded on each run.
- To avoid this, initialize the `knowledge` parameter directly instead of the `knowledge_sources` parameter (see the sketch after this accordion group).
- For the full context, see the linked [GitHub Issue](https://github.com/crewAIInc/crewAI/issues/2755)
</Accordion>
</AccordionGroup>
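A minimal sketch of that workaround, assuming `Knowledge` and `StringKnowledgeSource` keep the signatures from the crewai knowledge docs and that `Crew` accepts a prebuilt `Knowledge` via its `knowledge` parameter, as the accordion above describes:
```python
# Hedged sketch: build the Knowledge object once and pass it via `knowledge`,
# so the sources are not re-embedded on every kickoff.
from crewai import Agent, Crew, Task
from crewai.knowledge.knowledge import Knowledge
from crewai.knowledge.source.string_knowledge_source import StringKnowledgeSource

source = StringKnowledgeSource(content="John is 30 years old and lives in Berlin.")
knowledge = Knowledge(collection_name="user_facts", sources=[source])  # embedded once

agent = Agent(role="Assistant", goal="Answer questions about the user", backstory="...")
task = Task(description="How old is John?", expected_output="An age", agent=agent)
crew = Crew(agents=[agent], tasks=[task], knowledge=knowledge)
```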


@@ -169,19 +169,55 @@ In this section, you'll find detailed examples that help you select, configure,
```
</Accordion>
<Accordion title="Google">
Set the following environment variables in your `.env` file:
<Accordion title="Google (Gemini API)">
Set your API key in your `.env` file. If you need a key, or need to find an
existing key, check [AI Studio](https://aistudio.google.com/apikey).
```toml Code
# Option 1: Gemini accessed with an API key.
```toml .env
# https://ai.google.dev/gemini-api/docs/api-key
GEMINI_API_KEY=<your-api-key>
# Option 2: Vertex AI IAM credentials for Gemini, Anthropic, and Model Garden.
# https://cloud.google.com/vertex-ai/generative-ai/docs/overview
```
Get credentials from your Google Cloud Console and save it to a JSON file with the following code:
Example usage in your CrewAI project:
```python Code
from crewai import LLM
llm = LLM(
model="gemini/gemini-2.0-flash",
temperature=0.7,
)
```
### Gemini models
Google offers a range of powerful models optimized for different use cases.
| Model | Context Window | Best For |
|--------------------------------|----------------|-------------------------------------------------------------------|
| gemini-2.5-flash-preview-04-17 | 1M tokens | Adaptive thinking, cost efficiency |
| gemini-2.5-pro-preview-05-06 | 1M tokens | Enhanced thinking and reasoning, multimodal understanding, advanced coding, and more |
| gemini-2.0-flash | 1M tokens | Next generation features, speed, thinking, and realtime streaming |
| gemini-2.0-flash-lite | 1M tokens | Cost efficiency and low latency |
| gemini-1.5-flash | 1M tokens | Balanced multimodal model, good for most tasks |
| gemini-1.5-flash-8B | 1M tokens | Fastest, most cost-efficient, good for high-frequency tasks |
| gemini-1.5-pro | 2M tokens | Best performing, wide variety of reasoning tasks including logical reasoning, coding, and creative collaboration |
The full list of models is available in the [Gemini model docs](https://ai.google.dev/gemini-api/docs/models).
### Gemma
The Gemini API also allows you to use your API key to access [Gemma models](https://ai.google.dev/gemma/docs) hosted on Google infrastructure.
| Model | Context Window |
|----------------|----------------|
| gemma-3-1b-it | 32k tokens |
| gemma-3-4b-it | 32k tokens |
| gemma-3-12b-it | 32k tokens |
| gemma-3-27b-it | 128k tokens |
</Accordion>
<Accordion title="Google (Vertex AI)">
Get credentials from your Google Cloud Console and save them to a JSON file, then load them with the following code:
```python Code
import json
@@ -205,14 +241,18 @@ In this section, you'll find detailed examples that help you select, configure,
vertex_credentials=vertex_credentials_json
)
```
Google offers a range of powerful models optimized for different use cases:
| Model | Context Window | Best For |
|-----------------------|----------------|------------------------------------------------------------------|
| gemini-2.0-flash-exp | 1M tokens | Higher quality at faster speed, multimodal model, good for most tasks |
| gemini-1.5-flash | 1M tokens | Balanced multimodal model, good for most tasks |
| gemini-1.5-flash-8B | 1M tokens | Fastest, most cost-efficient, good for high-frequency tasks |
| gemini-1.5-pro | 2M tokens | Best performing, wide variety of reasoning tasks including logical reasoning, coding, and creative collaboration |
| Model | Context Window | Best For |
|--------------------------------|----------------|-------------------------------------------------------------------|
| gemini-2.5-flash-preview-04-17 | 1M tokens | Adaptive thinking, cost efficiency |
| gemini-2.5-pro-preview-05-06 | 1M tokens | Enhanced thinking and reasoning, multimodal understanding, advanced coding, and more |
| gemini-2.0-flash | 1M tokens | Next generation features, speed, thinking, and realtime streaming |
| gemini-2.0-flash-lite | 1M tokens | Cost efficiency and low latency |
| gemini-1.5-flash | 1M tokens | Balanced multimodal model, good for most tasks |
| gemini-1.5-flash-8B | 1M tokens | Fastest, most cost-efficient, good for high-frequency tasks |
| gemini-1.5-pro | 2M tokens | Best performing, wide variety of reasoning tasks including logical reasoning, coding, and creative collaboration |
</Accordion>
<Accordion title="Azure">


@@ -68,7 +68,13 @@ We'll create a CrewAI application where two agents collaborate to research and w
```python
from crewai import Agent, Crew, Process, Task
from crewai_tools import SerperDevTool
from openinference.instrumentation.crewai import CrewAIInstrumentor
from phoenix.otel import register
# setup monitoring for your crew
tracer_provider = register(
endpoint="http://localhost:6006/v1/traces")
CrewAIInstrumentor().instrument(skip_dep_check=True, tracer_provider=tracer_provider)
search_tool = SerperDevTool()
# Define your agents with roles and goals


@@ -25,6 +25,7 @@ from crewai.utilities.agent_utils import (
)
from crewai.utilities.constants import TRAINED_AGENTS_DATA_FILE, TRAINING_DATA_FILE
from crewai.utilities.converter import generate_model_description
from crewai.utilities.errors import AgentRepositoryError
from crewai.utilities.events.agent_events import (
AgentExecutionCompletedEvent,
AgentExecutionErrorEvent,
@@ -134,6 +135,49 @@ class Agent(BaseAgent):
default=None,
description="Knowledge search query for the agent dynamically generated by the agent.",
)
from_repository: Optional[str] = Field(
default=None,
description="The Agent's role to be used from your repository.",
)
@model_validator(mode="before")
def validate_from_repository(cls, v):
if v is not None and (from_repository := v.get("from_repository")):
return cls._load_agent_from_repository(from_repository) | v
return v
@classmethod
def _load_agent_from_repository(cls, from_repository: str) -> Dict[str, Any]:
attributes: Dict[str, Any] = {}
if from_repository:
import importlib
from crewai.cli.authentication.token import get_auth_token
from crewai.cli.plus_api import PlusAPI
client = PlusAPI(api_key=get_auth_token())
response = client.get_agent(from_repository)
if response.status_code != 200:
raise AgentRepositoryError(
f"Agent {from_repository} could not be loaded: {response.text}"
)
agent = response.json()
for key, value in agent.items():
if key == "tools":
attributes[key] = []
for tool_name in value:
try:
module = importlib.import_module("crewai_tools")
tool_class = getattr(module, tool_name)
attributes[key].append(tool_class())
except Exception as e:
raise AgentRepositoryError(
f"Tool {tool_name} could not be loaded: {e}"
) from e
else:
attributes[key] = value
return attributes
@model_validator(mode="after")
def post_init_setup(self):
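A usage sketch of the new field, mirroring the tests added further down (the `test_agent` handle and `Custom Role` override are taken from those tests):
```python
# Hedged sketch: the before-validator fetches the agent definition via
# PlusAPI.get_agent and merges it under any locally supplied attributes.
from crewai import Agent

agent = Agent(from_repository="test_agent")  # role/goal/backstory/tools from the repo
custom = Agent(from_repository="test_agent", role="Custom Role")  # local value wins
```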


@@ -5,5 +5,5 @@ def get_auth_token() -> str:
"""Get the authentication token."""
access_token = TokenManager().get_token()
if not access_token:
raise Exception()
raise Exception("No token found, make sure you are logged in")
return access_token


@@ -1,6 +1,5 @@
import os
from importlib.metadata import version as get_version
from typing import Optional, Tuple
from typing import Optional
import click
@@ -37,10 +36,11 @@ def crewai():
@click.argument("name")
@click.option("--provider", type=str, help="The provider to use for the crew")
@click.option("--skip_provider", is_flag=True, help="Skip provider validation")
def create(type, name, provider, skip_provider=False):
@click.option("--skip_ssl_verify", is_flag=True, help="Skip SSL certificate verification (not secure)")
def create(type, name, provider, skip_provider=False, skip_ssl_verify=False):
"""Create a new crew, or flow."""
if type == "crew":
create_crew(name, provider, skip_provider)
create_crew(name, provider, skip_provider, skip_ssl_verify=skip_ssl_verify)
elif type == "flow":
create_flow(name)
else:


@@ -13,7 +13,7 @@ ENV_VARS = {
],
"gemini": [
{
"prompt": "Enter your GEMINI API key (press Enter to skip)",
"prompt": "Enter your GEMINI API key from https://ai.dev/apikey (press Enter to skip)",
"key_name": "GEMINI_API_KEY",
}
],


@@ -89,12 +89,12 @@ def copy_template_files(folder_path, name, class_name, parent_folder):
copy_template(src_file, dst_file, name, class_name, folder_path.name)
def create_crew(name, provider=None, skip_provider=False, parent_folder=None):
def create_crew(name, provider=None, skip_provider=False, parent_folder=None, skip_ssl_verify=False):
folder_path, folder_name, class_name = create_folder_structure(name, parent_folder)
env_vars = load_env_vars(folder_path)
if not skip_provider:
if not provider:
provider_models = get_provider_data()
provider_models = get_provider_data(skip_ssl_verify)
if not provider_models:
return
@@ -114,7 +114,7 @@ def create_crew(name, provider=None, skip_provider=False, parent_folder=None):
click.secho("Keeping existing provider configuration.", fg="yellow")
return
provider_models = get_provider_data()
provider_models = get_provider_data(skip_ssl_verify)
if not provider_models:
return


@@ -14,6 +14,7 @@ class PlusAPI:
TOOLS_RESOURCE = "/crewai_plus/api/v1/tools"
CREWS_RESOURCE = "/crewai_plus/api/v1/crews"
AGENTS_RESOURCE = "/crewai_plus/api/v1/agents"
def __init__(self, api_key: str) -> None:
self.api_key = api_key
@@ -37,6 +38,9 @@ class PlusAPI:
def get_tool(self, handle: str):
return self._make_request("GET", f"{self.TOOLS_RESOURCE}/{handle}")
def get_agent(self, handle: str):
return self._make_request("GET", f"{self.AGENTS_RESOURCE}/{handle}")
def publish_tool(
self,
handle: str,


@@ -106,13 +106,14 @@ def select_model(provider, provider_models):
return selected_model
def load_provider_data(cache_file, cache_expiry):
def load_provider_data(cache_file, cache_expiry, skip_ssl_verify=False):
"""
Loads provider data from a cache file if it exists and is not expired. If the cache is expired or corrupted, it fetches the data from the web.
Args:
- cache_file (Path): The path to the cache file.
- cache_expiry (int): The cache expiry time in seconds.
- skip_ssl_verify (bool): Whether to skip SSL certificate verification.
Returns:
- dict or None: The loaded provider data or None if the operation fails.
@@ -133,7 +134,7 @@ def load_provider_data(cache_file, cache_expiry):
"Cache expired or not found. Fetching provider data from the web...",
fg="cyan",
)
return fetch_provider_data(cache_file)
return fetch_provider_data(cache_file, skip_ssl_verify)
def read_cache_file(cache_file):
@@ -147,34 +148,65 @@ def read_cache_file(cache_file):
- dict or None: The JSON content of the cache file or None if the JSON is invalid.
"""
try:
if not cache_file.exists():
return None
with open(cache_file, "r") as f:
return json.load(f)
except json.JSONDecodeError:
data = json.load(f)
if not isinstance(data, dict):
click.secho("Invalid cache file format", fg="yellow")
return None
return data
except json.JSONDecodeError as e:
click.secho(f"Error parsing cache file: {str(e)}", fg="yellow")
return None
except OSError as e:
click.secho(f"Error reading cache file: {str(e)}", fg="yellow")
return None
def fetch_provider_data(cache_file):
def fetch_provider_data(cache_file, skip_ssl_verify=False):
"""
Fetches provider data from a specified URL and caches it to a file.
Args:
- cache_file (Path): The path to the cache file.
- skip_ssl_verify (bool): Whether to skip SSL certificate verification.
Returns:
- dict or None: The fetched provider data or None if the operation fails.
"""
try:
response = requests.get(JSON_URL, stream=True, timeout=60)
if skip_ssl_verify:
click.secho(
"Warning: SSL certificate verification is disabled. This is not secure!",
fg="yellow",
)
response = requests.get(JSON_URL, stream=True, timeout=60, verify=not skip_ssl_verify)
response.raise_for_status()
data = download_data(response)
with open(cache_file, "w") as f:
json.dump(data, f)
return data
except requests.Timeout:
click.secho("Timeout while fetching provider data", fg="red")
return None
except requests.SSLError as e:
click.secho(f"SSL verification failed: {str(e)}", fg="red")
if not skip_ssl_verify:
click.secho(
"You can bypass SSL verification with --skip_ssl_verify flag (not secure)",
fg="yellow",
)
return None
except requests.RequestException as e:
click.secho(f"Error fetching provider data: {e}", fg="red")
click.secho(f"Error fetching provider data: {str(e)}", fg="red")
return None
except json.JSONDecodeError:
click.secho("Error parsing provider data. Invalid JSON format.", fg="red")
return None
return None
except OSError as e:
click.secho(f"Error writing to cache file: {str(e)}", fg="red")
return None
def download_data(response):
@@ -201,10 +233,13 @@ def download_data(response):
return json.loads(data_content.decode("utf-8"))
def get_provider_data():
def get_provider_data(skip_ssl_verify=False):
"""
Retrieves provider data from a cache file, filters out models based on provider criteria, and returns a dictionary of providers mapped to their models.
Args:
- skip_ssl_verify (bool): Whether to skip SSL certificate verification.
Returns:
- dict or None: A dictionary of providers mapped to their models or None if the operation fails.
"""
@@ -213,7 +248,7 @@ def get_provider_data():
cache_file = cache_dir / "provider_cache.json"
cache_expiry = 24 * 3600
data = load_provider_data(cache_file, cache_expiry)
data = load_provider_data(cache_file, cache_expiry, skip_ssl_verify)
if not data:
return None
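A usage sketch of the threaded-through flag, assuming only the signature shown in this diff (the new tests below import `get_provider_data` the same way):
```python
# Hedged sketch: skip_ssl_verify flows from the CLI flag down to
# requests.get(..., verify=not skip_ssl_verify) in fetch_provider_data.
from crewai.cli.provider import get_provider_data

providers = get_provider_data(skip_ssl_verify=True)  # warns, then fetches unverified
if providers is None:
    print("Provider data could not be fetched; see the warnings above.")
```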


@@ -5,8 +5,7 @@ import sys
import threading
import warnings
from collections import defaultdict
from contextlib import contextmanager
from types import SimpleNamespace
from contextlib import contextmanager, redirect_stderr, redirect_stdout
from typing import (
Any,
DefaultDict,
@@ -31,7 +30,6 @@ from crewai.utilities.events.llm_events import (
LLMCallType,
LLMStreamChunkEvent,
)
from crewai.utilities.events.tool_usage_events import ToolExecutionErrorEvent
with warnings.catch_warnings():
warnings.simplefilter("ignore", UserWarning)
@@ -45,6 +43,9 @@ with warnings.catch_warnings():
from litellm.utils import supports_response_schema
import io
from typing import TextIO
from crewai.llms.base_llm import BaseLLM
from crewai.utilities.events import crewai_event_bus
from crewai.utilities.exceptions.context_window_exceeding_exception import (
@@ -54,12 +55,17 @@ from crewai.utilities.exceptions.context_window_exceeding_exception import (
load_dotenv()
class FilteredStream:
def __init__(self, original_stream):
class FilteredStream(io.TextIOBase):
_lock = None
def __init__(self, original_stream: TextIO):
self._original_stream = original_stream
self._lock = threading.Lock()
def write(self, s) -> int:
def write(self, s: str) -> int:
if not self._lock:
self._lock = threading.Lock()
with self._lock:
# Filter out extraneous messages from LiteLLM
if (
@@ -214,15 +220,11 @@ def suppress_warnings():
)
# Redirect stdout and stderr
old_stdout = sys.stdout
old_stderr = sys.stderr
sys.stdout = FilteredStream(old_stdout)
sys.stderr = FilteredStream(old_stderr)
try:
with (
redirect_stdout(FilteredStream(sys.stdout)),
redirect_stderr(FilteredStream(sys.stderr)),
):
yield
finally:
sys.stdout = old_stdout
sys.stderr = old_stderr
class Delta(TypedDict):
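The hunk above closes the race described in commit f700e014c9 by fully constructing `FilteredStream` before installing it through context managers. A minimal sketch of the pattern, with the LiteLLM filtering logic omitted:
```python
import sys
import threading
from contextlib import redirect_stdout


class FilteredStream:
    """Simplified stand-in; the real class also filters LiteLLM noise."""

    def __init__(self, original_stream):
        self._original_stream = original_stream
        self._lock = threading.Lock()  # exists before any write() can run

    def write(self, s: str) -> int:
        with self._lock:
            return self._original_stream.write(s)

    def flush(self) -> None:
        self._original_stream.flush()


# redirect_stdout installs the fully constructed wrapper and restores the
# original stream on exit, even if the body raises.
with redirect_stdout(FilteredStream(sys.stdout)):
    print("this write goes through the lock-protected wrapper")
```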


@@ -1,4 +1,5 @@
"""Error message definitions for CrewAI database operations."""
from typing import Optional
@@ -37,3 +38,9 @@ class DatabaseError:
The formatted error message
"""
return template.format(str(error))
class AgentRepositoryError(Exception):
"""Exception raised when an agent repository is not found."""
...
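A sketch of handling the new exception, grounded in the tests below, which import it from `crewai.utilities.errors`:
```python
# Hedged sketch: repository loading failures surface as AgentRepositoryError
from crewai import Agent
from crewai.utilities.errors import AgentRepositoryError

try:
    agent = Agent(from_repository="NOT_FOUND")
except AgentRepositoryError as exc:
    print(f"Could not load agent: {exc}")
```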


@@ -2,7 +2,7 @@
import os
from unittest import mock
from unittest.mock import patch
from unittest.mock import MagicMock, patch
import pytest
@@ -18,6 +18,7 @@ from crewai.tools import tool
from crewai.tools.tool_calling import InstructorToolCalling
from crewai.tools.tool_usage import ToolUsage
from crewai.utilities import RPMController
from crewai.utilities.errors import AgentRepositoryError
from crewai.utilities.events import crewai_event_bus
from crewai.utilities.events.tool_usage_events import ToolUsageFinishedEvent
@@ -308,9 +309,7 @@ def test_cache_hitting():
def handle_tool_end(source, event):
received_events.append(event)
with (
patch.object(CacheHandler, "read") as read,
):
with (patch.object(CacheHandler, "read") as read,):
read.return_value = "0"
task = Task(
description="What is 2 times 6? Ignore correctness and just return the result of the multiplication tool, you must use the tool.",
@@ -1040,7 +1039,7 @@ def test_agent_human_input():
CrewAgentExecutor,
"_invoke_loop",
return_value=AgentFinish(output="Hello", thought="", text=""),
) as mock_invoke_loop,
),
):
# Execute the task
output = agent.execute_task(task)
@@ -2025,3 +2024,86 @@ def test_get_knowledge_search_query():
},
]
)
@pytest.fixture
def mock_get_auth_token():
with patch(
"crewai.cli.authentication.token.get_auth_token", return_value="test_token"
):
yield
@patch("crewai.cli.plus_api.PlusAPI.get_agent")
def test_agent_from_repository(mock_get_agent, mock_get_auth_token):
from crewai_tools import SerperDevTool
mock_get_response = MagicMock()
mock_get_response.status_code = 200
mock_get_response.json.return_value = {
"role": "test role",
"goal": "test goal",
"backstory": "test backstory",
"tools": ["SerperDevTool"],
}
mock_get_agent.return_value = mock_get_response
agent = Agent(from_repository="test_agent")
assert agent.role == "test role"
assert agent.goal == "test goal"
assert agent.backstory == "test backstory"
assert len(agent.tools) == 1
assert isinstance(agent.tools[0], SerperDevTool)
@patch("crewai.cli.plus_api.PlusAPI.get_agent")
def test_agent_from_repository_override_attributes(mock_get_agent, mock_get_auth_token):
from crewai_tools import SerperDevTool
mock_get_response = MagicMock()
mock_get_response.status_code = 200
mock_get_response.json.return_value = {
"role": "test role",
"goal": "test goal",
"backstory": "test backstory",
"tools": ["SerperDevTool"],
}
mock_get_agent.return_value = mock_get_response
agent = Agent(from_repository="test_agent", role="Custom Role")
assert agent.role == "Custom Role"
assert agent.goal == "test goal"
assert agent.backstory == "test backstory"
assert len(agent.tools) == 1
assert isinstance(agent.tools[0], SerperDevTool)
@patch("crewai.cli.plus_api.PlusAPI.get_agent")
def test_agent_from_repository_with_invalid_tools(mock_get_agent, mock_get_auth_token):
mock_get_response = MagicMock()
mock_get_response.status_code = 200
mock_get_response.json.return_value = {
"role": "test role",
"goal": "test goal",
"backstory": "test backstory",
"tools": ["DoesNotExist"],
}
mock_get_agent.return_value = mock_get_response
with pytest.raises(
AgentRepositoryError,
match="Tool DoesNotExist could not be loaded: module 'crewai_tools' has no attribute 'DoesNotExist'",
):
Agent(from_repository="test_agent")
@patch("crewai.cli.plus_api.PlusAPI.get_agent")
def test_agent_from_repository_agent_not_found(mock_get_agent, mock_get_auth_token):
mock_get_response = MagicMock()
mock_get_response.status_code = 404
mock_get_response.text = "Agent not found"
mock_get_agent.return_value = mock_get_response
with pytest.raises(
AgentRepositoryError,
match="Agent NOT_FOUND could not be loaded: Agent not found",
):
Agent(from_repository="NOT_FOUND")


@@ -0,0 +1,520 @@
interactions:
- request:
body: !!binary |
CqcXCiQKIgoMc2VydmljZS5uYW1lEhIKEGNyZXdBSS10ZWxlbWV0cnkS/hYKEgoQY3Jld2FpLnRl
bGVtZXRyeRJ5ChBuJJtOdNaB05mOW/p3915eEgj2tkAd3rZcASoQVG9vbCBVc2FnZSBFcnJvcjAB
OYa7/URvKBUYQUpcFEVvKBUYShoKDmNyZXdhaV92ZXJzaW9uEggKBjAuODYuMEoPCgNsbG0SCAoG
Z3B0LTRvegIYAYUBAAEAABLJBwoQifhX01E5i+5laGdALAlZBBIIBuGM1aN+OPgqDENyZXcgQ3Jl
YXRlZDABORVGruBvKBUYQaipwOBvKBUYShoKDmNyZXdhaV92ZXJzaW9uEggKBjAuODYuMEoaCg5w
eXRob25fdmVyc2lvbhIICgYzLjEyLjdKLgoIY3Jld19rZXkSIgogN2U2NjA4OTg5ODU5YTY3ZWVj
ODhlZWY3ZmNlODUyMjVKMQoHY3Jld19pZBImCiRiOThiNWEwMC01YTI1LTQxMDctYjQwNS1hYmYz
MjBhOGYzYThKHAoMY3Jld19wcm9jZXNzEgwKCnNlcXVlbnRpYWxKEQoLY3Jld19tZW1vcnkSAhAA
ShoKFGNyZXdfbnVtYmVyX29mX3Rhc2tzEgIYAUobChVjcmV3X251bWJlcl9vZl9hZ2VudHMSAhgB
SuQCCgtjcmV3X2FnZW50cxLUAgrRAlt7ImtleSI6ICIyMmFjZDYxMWU0NGVmNWZhYzA1YjUzM2Q3
NWU4ODkzYiIsICJpZCI6ICJkNWIyMzM1YS0yMmIyLTQyZWEtYmYwNS03OTc3NmU3MmYzOTIiLCAi
cm9sZSI6ICJEYXRhIFNjaWVudGlzdCIsICJ2ZXJib3NlPyI6IGZhbHNlLCAibWF4X2l0ZXIiOiAy
MCwgIm1heF9ycG0iOiBudWxsLCAiZnVuY3Rpb25fY2FsbGluZ19sbG0iOiAiIiwgImxsbSI6ICJn
cHQtNG8tbWluaSIsICJkZWxlZ2F0aW9uX2VuYWJsZWQ/IjogZmFsc2UsICJhbGxvd19jb2RlX2V4
ZWN1dGlvbj8iOiBmYWxzZSwgIm1heF9yZXRyeV9saW1pdCI6IDIsICJ0b29sc19uYW1lcyI6IFsi
Z2V0IGdyZWV0aW5ncyJdfV1KkgIKCmNyZXdfdGFza3MSgwIKgAJbeyJrZXkiOiAiYTI3N2IzNGIy
YzE0NmYwYzU2YzVlMTM1NmU4ZjhhNTciLCAiaWQiOiAiMjJiZWMyMzEtY2QyMS00YzU4LTgyN2Ut
MDU4MWE4ZjBjMTExIiwgImFzeW5jX2V4ZWN1dGlvbj8iOiBmYWxzZSwgImh1bWFuX2lucHV0PyI6
IGZhbHNlLCAiYWdlbnRfcm9sZSI6ICJEYXRhIFNjaWVudGlzdCIsICJhZ2VudF9rZXkiOiAiMjJh
Y2Q2MTFlNDRlZjVmYWMwNWI1MzNkNzVlODg5M2IiLCAidG9vbHNfbmFtZXMiOiBbImdldCBncmVl
dGluZ3MiXX1degIYAYUBAAEAABKOAgoQ5WYoxRtTyPjge4BduhL0rRIIv2U6rvWALfwqDFRhc2sg
Q3JlYXRlZDABOX068uBvKBUYQZkv8+BvKBUYSi4KCGNyZXdfa2V5EiIKIDdlNjYwODk4OTg1OWE2
N2VlYzg4ZWVmN2ZjZTg1MjI1SjEKB2NyZXdfaWQSJgokYjk4YjVhMDAtNWEyNS00MTA3LWI0MDUt
YWJmMzIwYThmM2E4Si4KCHRhc2tfa2V5EiIKIGEyNzdiMzRiMmMxNDZmMGM1NmM1ZTEzNTZlOGY4
YTU3SjEKB3Rhc2tfaWQSJgokMjJiZWMyMzEtY2QyMS00YzU4LTgyN2UtMDU4MWE4ZjBjMTExegIY
AYUBAAEAABKQAQoQXyeDtJDFnyp2Fjk9YEGTpxIIaNE7gbhPNYcqClRvb2wgVXNhZ2UwATkaXTvj
bygVGEGvx0rjbygVGEoaCg5jcmV3YWlfdmVyc2lvbhIICgYwLjg2LjBKHAoJdG9vbF9uYW1lEg8K
DUdldCBHcmVldGluZ3NKDgoIYXR0ZW1wdHMSAhgBegIYAYUBAAEAABLVBwoQMWfznt0qwauEzl7T
UOQxRBII9q+pUS5EdLAqDENyZXcgQ3JlYXRlZDABORONPORvKBUYQSAoS+RvKBUYShoKDmNyZXdh
aV92ZXJzaW9uEggKBjAuODYuMEoaCg5weXRob25fdmVyc2lvbhIICgYzLjEyLjdKLgoIY3Jld19r
ZXkSIgogYzMwNzYwMDkzMjY3NjE0NDRkNTdjNzFkMWRhM2YyN2NKMQoHY3Jld19pZBImCiQ3OTQw
MTkyNS1iOGU5LTQ3MDgtODUzMC00NDhhZmEzYmY4YjBKHAoMY3Jld19wcm9jZXNzEgwKCnNlcXVl
bnRpYWxKEQoLY3Jld19tZW1vcnkSAhAAShoKFGNyZXdfbnVtYmVyX29mX3Rhc2tzEgIYAUobChVj
cmV3X251bWJlcl9vZl9hZ2VudHMSAhgBSuoCCgtjcmV3X2FnZW50cxLaAgrXAlt7ImtleSI6ICI5
OGYzYjFkNDdjZTk2OWNmMDU3NzI3Yjc4NDE0MjVjZCIsICJpZCI6ICI5OTJkZjYyZi1kY2FiLTQy
OTUtOTIwNi05MDBkNDExNGIxZTkiLCAicm9sZSI6ICJGcmllbmRseSBOZWlnaGJvciIsICJ2ZXJi
b3NlPyI6IGZhbHNlLCAibWF4X2l0ZXIiOiAyMCwgIm1heF9ycG0iOiBudWxsLCAiZnVuY3Rpb25f
Y2FsbGluZ19sbG0iOiAiIiwgImxsbSI6ICJncHQtNG8tbWluaSIsICJkZWxlZ2F0aW9uX2VuYWJs
ZWQ/IjogZmFsc2UsICJhbGxvd19jb2RlX2V4ZWN1dGlvbj8iOiBmYWxzZSwgIm1heF9yZXRyeV9s
aW1pdCI6IDIsICJ0b29sc19uYW1lcyI6IFsiZGVjaWRlIGdyZWV0aW5ncyJdfV1KmAIKCmNyZXdf
dGFza3MSiQIKhgJbeyJrZXkiOiAiODBkN2JjZDQ5MDk5MjkwMDgzODMyZjBlOTgzMzgwZGYiLCAi
aWQiOiAiMmZmNjE5N2UtYmEyNy00YjczLWI0YTctNGZhMDQ4ZTYyYjQ3IiwgImFzeW5jX2V4ZWN1
dGlvbj8iOiBmYWxzZSwgImh1bWFuX2lucHV0PyI6IGZhbHNlLCAiYWdlbnRfcm9sZSI6ICJGcmll
bmRseSBOZWlnaGJvciIsICJhZ2VudF9rZXkiOiAiOThmM2IxZDQ3Y2U5NjljZjA1NzcyN2I3ODQx
NDI1Y2QiLCAidG9vbHNfbmFtZXMiOiBbImRlY2lkZSBncmVldGluZ3MiXX1degIYAYUBAAEAABKO
AgoQnjTp5boK7/+DQxztYIpqihIIgGnMUkBtzHEqDFRhc2sgQ3JlYXRlZDABOcpYcuRvKBUYQalE
c+RvKBUYSi4KCGNyZXdfa2V5EiIKIGMzMDc2MDA5MzI2NzYxNDQ0ZDU3YzcxZDFkYTNmMjdjSjEK
B2NyZXdfaWQSJgokNzk0MDE5MjUtYjhlOS00NzA4LTg1MzAtNDQ4YWZhM2JmOGIwSi4KCHRhc2tf
a2V5EiIKIDgwZDdiY2Q0OTA5OTI5MDA4MzgzMmYwZTk4MzM4MGRmSjEKB3Rhc2tfaWQSJgokMmZm
NjE5N2UtYmEyNy00YjczLWI0YTctNGZhMDQ4ZTYyYjQ3egIYAYUBAAEAABKTAQoQ26H9pLUgswDN
p9XhJwwL6BIIx3bw7mAvPYwqClRvb2wgVXNhZ2UwATmy7NPlbygVGEEvb+HlbygVGEoaCg5jcmV3
YWlfdmVyc2lvbhIICgYwLjg2LjBKHwoJdG9vbF9uYW1lEhIKEERlY2lkZSBHcmVldGluZ3NKDgoI
YXR0ZW1wdHMSAhgBegIYAYUBAAEAAA==
headers:
Accept:
- '*/*'
Accept-Encoding:
- gzip, deflate
Connection:
- keep-alive
Content-Length:
- '2986'
Content-Type:
- application/x-protobuf
User-Agent:
- OTel-OTLP-Exporter-Python/1.27.0
method: POST
uri: https://telemetry.crewai.com:4319/v1/traces
response:
body:
string: "\n\0"
headers:
Content-Length:
- '2'
Content-Type:
- application/x-protobuf
Date:
- Fri, 27 Dec 2024 22:14:53 GMT
status:
code: 200
message: OK
- request:
body: '{"messages": [{"role": "system", "content": "You are test role. test backstory\nYour
personal goal is: test goal\nTo give my best complete final answer to the task
use the exact following format:\n\nThought: I now can give a great answer\nFinal
Answer: Your final answer must be the great and the most complete as possible,
it must be outcome described.\n\nI MUST use these formats, my job depends on
it!"}, {"role": "user", "content": "\nCurrent Task: Say the word: Hi\n\nThis
is the expect criteria for your final answer: The word: Hi\nyou MUST return
the actual complete content as the final answer, not a summary.\n\nBegin! This
is VERY important to you, use the tools available and give your best Final Answer,
your job depends on it!\n\nThought:"}], "model": "gpt-4o-mini", "stop": ["\nObservation:"],
"stream": false}'
headers:
accept:
- application/json
accept-encoding:
- gzip, deflate
connection:
- keep-alive
content-length:
- '824'
content-type:
- application/json
cookie:
- _cfuvid=ePJSDFdHag2D8lj21_ijAMWjoA6xfnPNxN4uekvC728-1727226247743-0.0.1.1-604800000
host:
- api.openai.com
user-agent:
- OpenAI/Python 1.52.1
x-stainless-arch:
- x64
x-stainless-async:
- 'false'
x-stainless-lang:
- python
x-stainless-os:
- Linux
x-stainless-package-version:
- 1.52.1
x-stainless-raw-response:
- 'true'
x-stainless-retry-count:
- '0'
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.12.7
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
content: "{\n \"id\": \"chatcmpl-AjCtZLLrWi8ZASpP9bz6HaCV7xBIn\",\n \"object\":
\"chat.completion\",\n \"created\": 1735337693,\n \"model\": \"gpt-4o-mini-2024-07-18\",\n
\ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
\"assistant\",\n \"content\": \"I now can give a great answer \\nFinal
Answer: Hi\",\n \"refusal\": null\n },\n \"logprobs\": null,\n
\ \"finish_reason\": \"stop\"\n }\n ],\n \"usage\": {\n \"prompt_tokens\":
158,\n \"completion_tokens\": 12,\n \"total_tokens\": 170,\n \"prompt_tokens_details\":
{\n \"cached_tokens\": 0,\n \"audio_tokens\": 0\n },\n \"completion_tokens_details\":
{\n \"reasoning_tokens\": 0,\n \"audio_tokens\": 0,\n \"accepted_prediction_tokens\":
0,\n \"rejected_prediction_tokens\": 0\n }\n },\n \"system_fingerprint\":
\"fp_0aa8d3e20b\"\n}\n"
headers:
CF-Cache-Status:
- DYNAMIC
CF-RAY:
- 8f8caa83deca756b-SEA
Connection:
- keep-alive
Content-Encoding:
- gzip
Content-Type:
- application/json
Date:
- Fri, 27 Dec 2024 22:14:53 GMT
Server:
- cloudflare
Set-Cookie:
- __cf_bm=wJkq_yLkzE3OdxE0aMJz.G0kce969.9JxRmZ0ratl4c-1735337693-1.0.1.1-OKpUoRrSPFGvWv5Hp5ET1PNZ7iZNHPKEAuakpcQUxxPSeisUIIR3qIOZ31MGmYugqB5.wkvidgbxOAagqJvmnw;
path=/; expires=Fri, 27-Dec-24 22:44:53 GMT; domain=.api.openai.com; HttpOnly;
Secure; SameSite=None
- _cfuvid=A_ASCLNAVfQoyucWOAIhecWtEpNotYoZr0bAFihgNxs-1735337693273-0.0.1.1-604800000;
path=/; domain=.api.openai.com; HttpOnly; Secure; SameSite=None
Transfer-Encoding:
- chunked
X-Content-Type-Options:
- nosniff
access-control-expose-headers:
- X-Request-ID
alt-svc:
- h3=":443"; ma=86400
openai-organization:
- crewai-iuxna1
openai-processing-ms:
- '404'
openai-version:
- '2020-10-01'
strict-transport-security:
- max-age=31536000; includeSubDomains; preload
x-ratelimit-limit-requests:
- '30000'
x-ratelimit-limit-tokens:
- '150000000'
x-ratelimit-remaining-requests:
- '29999'
x-ratelimit-remaining-tokens:
- '149999816'
x-ratelimit-reset-requests:
- 2ms
x-ratelimit-reset-tokens:
- 0s
x-request-id:
- req_6ac84634bff9193743c4b0911c09b4a6
http_version: HTTP/1.1
status_code: 200
- request:
body: '{"messages": [{"role": "system", "content": "Determine if the following
feedback indicates that the user is satisfied or if further changes are needed.
Respond with ''True'' if further changes are needed, or ''False'' if the user
is satisfied. **Important** Do not include any additional commentary outside
of your ''True'' or ''False'' response.\n\nFeedback: \"Don''t say hi, say Hello
instead!\""}], "model": "gpt-4o-mini", "stop": ["\nObservation:"], "stream":
false}'
headers:
accept:
- application/json
accept-encoding:
- gzip, deflate
connection:
- keep-alive
content-length:
- '461'
content-type:
- application/json
cookie:
- _cfuvid=A_ASCLNAVfQoyucWOAIhecWtEpNotYoZr0bAFihgNxs-1735337693273-0.0.1.1-604800000;
__cf_bm=wJkq_yLkzE3OdxE0aMJz.G0kce969.9JxRmZ0ratl4c-1735337693-1.0.1.1-OKpUoRrSPFGvWv5Hp5ET1PNZ7iZNHPKEAuakpcQUxxPSeisUIIR3qIOZ31MGmYugqB5.wkvidgbxOAagqJvmnw
host:
- api.openai.com
user-agent:
- OpenAI/Python 1.52.1
x-stainless-arch:
- x64
x-stainless-async:
- 'false'
x-stainless-lang:
- python
x-stainless-os:
- Linux
x-stainless-package-version:
- 1.52.1
x-stainless-raw-response:
- 'true'
x-stainless-retry-count:
- '0'
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.12.7
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
content: "{\n \"id\": \"chatcmpl-AjCtZNlWdrrPZhq0MJDqd16sMuQEJ\",\n \"object\":
\"chat.completion\",\n \"created\": 1735337693,\n \"model\": \"gpt-4o-mini-2024-07-18\",\n
\ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
\"assistant\",\n \"content\": \"True\",\n \"refusal\": null\n
\ },\n \"logprobs\": null,\n \"finish_reason\": \"stop\"\n }\n
\ ],\n \"usage\": {\n \"prompt_tokens\": 78,\n \"completion_tokens\":
1,\n \"total_tokens\": 79,\n \"prompt_tokens_details\": {\n \"cached_tokens\":
0,\n \"audio_tokens\": 0\n },\n \"completion_tokens_details\": {\n
\ \"reasoning_tokens\": 0,\n \"audio_tokens\": 0,\n \"accepted_prediction_tokens\":
0,\n \"rejected_prediction_tokens\": 0\n }\n },\n \"system_fingerprint\":
\"fp_0aa8d3e20b\"\n}\n"
headers:
CF-Cache-Status:
- DYNAMIC
CF-RAY:
- 8f8caa87094f756b-SEA
Connection:
- keep-alive
Content-Encoding:
- gzip
Content-Type:
- application/json
Date:
- Fri, 27 Dec 2024 22:14:53 GMT
Server:
- cloudflare
Transfer-Encoding:
- chunked
X-Content-Type-Options:
- nosniff
access-control-expose-headers:
- X-Request-ID
alt-svc:
- h3=":443"; ma=86400
openai-organization:
- crewai-iuxna1
openai-processing-ms:
- '156'
openai-version:
- '2020-10-01'
strict-transport-security:
- max-age=31536000; includeSubDomains; preload
x-ratelimit-limit-requests:
- '30000'
x-ratelimit-limit-tokens:
- '150000000'
x-ratelimit-remaining-requests:
- '29999'
x-ratelimit-remaining-tokens:
- '149999898'
x-ratelimit-reset-requests:
- 2ms
x-ratelimit-reset-tokens:
- 0s
x-request-id:
- req_ec74bef2a9ef7b2144c03fd7f7bbeab0
http_version: HTTP/1.1
status_code: 200
- request:
body: '{"messages": [{"role": "system", "content": "You are test role. test backstory\nYour
personal goal is: test goal\nTo give my best complete final answer to the task
use the exact following format:\n\nThought: I now can give a great answer\nFinal
Answer: Your final answer must be the great and the most complete as possible,
it must be outcome described.\n\nI MUST use these formats, my job depends on
it!"}, {"role": "user", "content": "\nCurrent Task: Say the word: Hi\n\nThis
is the expect criteria for your final answer: The word: Hi\nyou MUST return
the actual complete content as the final answer, not a summary.\n\nBegin! This
is VERY important to you, use the tools available and give your best Final Answer,
your job depends on it!\n\nThought:"}, {"role": "assistant", "content": "I now
can give a great answer \nFinal Answer: Hi"}, {"role": "user", "content": "Feedback:
Don''t say hi, say Hello instead!"}], "model": "gpt-4o-mini", "stop": ["\nObservation:"],
"stream": false}'
headers:
accept:
- application/json
accept-encoding:
- gzip, deflate
connection:
- keep-alive
content-length:
- '986'
content-type:
- application/json
cookie:
- _cfuvid=A_ASCLNAVfQoyucWOAIhecWtEpNotYoZr0bAFihgNxs-1735337693273-0.0.1.1-604800000;
__cf_bm=wJkq_yLkzE3OdxE0aMJz.G0kce969.9JxRmZ0ratl4c-1735337693-1.0.1.1-OKpUoRrSPFGvWv5Hp5ET1PNZ7iZNHPKEAuakpcQUxxPSeisUIIR3qIOZ31MGmYugqB5.wkvidgbxOAagqJvmnw
host:
- api.openai.com
user-agent:
- OpenAI/Python 1.52.1
x-stainless-arch:
- x64
x-stainless-async:
- 'false'
x-stainless-lang:
- python
x-stainless-os:
- Linux
x-stainless-package-version:
- 1.52.1
x-stainless-raw-response:
- 'true'
x-stainless-retry-count:
- '0'
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.12.7
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
content: "{\n \"id\": \"chatcmpl-AjCtZGv4f3h7GDdhyOy9G0sB1lRgC\",\n \"object\":
\"chat.completion\",\n \"created\": 1735337693,\n \"model\": \"gpt-4o-mini-2024-07-18\",\n
\ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
\"assistant\",\n \"content\": \"Thought: I understand the feedback and
will adjust my response accordingly. \\nFinal Answer: Hello\",\n \"refusal\":
null\n },\n \"logprobs\": null,\n \"finish_reason\": \"stop\"\n
\ }\n ],\n \"usage\": {\n \"prompt_tokens\": 188,\n \"completion_tokens\":
18,\n \"total_tokens\": 206,\n \"prompt_tokens_details\": {\n \"cached_tokens\":
0,\n \"audio_tokens\": 0\n },\n \"completion_tokens_details\": {\n
\ \"reasoning_tokens\": 0,\n \"audio_tokens\": 0,\n \"accepted_prediction_tokens\":
0,\n \"rejected_prediction_tokens\": 0\n }\n },\n \"system_fingerprint\":
\"fp_0aa8d3e20b\"\n}\n"
headers:
CF-Cache-Status:
- DYNAMIC
CF-RAY:
- 8f8caa88cac4756b-SEA
Connection:
- keep-alive
Content-Encoding:
- gzip
Content-Type:
- application/json
Date:
- Fri, 27 Dec 2024 22:14:54 GMT
Server:
- cloudflare
Transfer-Encoding:
- chunked
X-Content-Type-Options:
- nosniff
access-control-expose-headers:
- X-Request-ID
alt-svc:
- h3=":443"; ma=86400
openai-organization:
- crewai-iuxna1
openai-processing-ms:
- '358'
openai-version:
- '2020-10-01'
strict-transport-security:
- max-age=31536000; includeSubDomains; preload
x-ratelimit-limit-requests:
- '30000'
x-ratelimit-limit-tokens:
- '150000000'
x-ratelimit-remaining-requests:
- '29999'
x-ratelimit-remaining-tokens:
- '149999793'
x-ratelimit-reset-requests:
- 2ms
x-ratelimit-reset-tokens:
- 0s
x-request-id:
- req_ae1ab6b206d28ded6fee3c83ed0c2ab7
http_version: HTTP/1.1
status_code: 200
- request:
body: '{"messages": [{"role": "system", "content": "Determine if the following
feedback indicates that the user is satisfied or if further changes are needed.
Respond with ''True'' if further changes are needed, or ''False'' if the user
is satisfied. **Important** Do not include any additional commentary outside
of your ''True'' or ''False'' response.\n\nFeedback: \"looks good\""}], "model":
"gpt-4o-mini", "stop": ["\nObservation:"], "stream": false}'
headers:
accept:
- application/json
accept-encoding:
- gzip, deflate
connection:
- keep-alive
content-length:
- '439'
content-type:
- application/json
cookie:
- _cfuvid=A_ASCLNAVfQoyucWOAIhecWtEpNotYoZr0bAFihgNxs-1735337693273-0.0.1.1-604800000;
__cf_bm=wJkq_yLkzE3OdxE0aMJz.G0kce969.9JxRmZ0ratl4c-1735337693-1.0.1.1-OKpUoRrSPFGvWv5Hp5ET1PNZ7iZNHPKEAuakpcQUxxPSeisUIIR3qIOZ31MGmYugqB5.wkvidgbxOAagqJvmnw
host:
- api.openai.com
user-agent:
- OpenAI/Python 1.52.1
x-stainless-arch:
- x64
x-stainless-async:
- 'false'
x-stainless-lang:
- python
x-stainless-os:
- Linux
x-stainless-package-version:
- 1.52.1
x-stainless-raw-response:
- 'true'
x-stainless-retry-count:
- '0'
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.12.7
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
content: "{\n \"id\": \"chatcmpl-AjCtaiHL4TY8Dssk0j2miqmjrzquy\",\n \"object\":
\"chat.completion\",\n \"created\": 1735337694,\n \"model\": \"gpt-4o-mini-2024-07-18\",\n
\ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
\"assistant\",\n \"content\": \"False\",\n \"refusal\": null\n
\ },\n \"logprobs\": null,\n \"finish_reason\": \"stop\"\n }\n
\ ],\n \"usage\": {\n \"prompt_tokens\": 73,\n \"completion_tokens\":
1,\n \"total_tokens\": 74,\n \"prompt_tokens_details\": {\n \"cached_tokens\":
0,\n \"audio_tokens\": 0\n },\n \"completion_tokens_details\": {\n
\ \"reasoning_tokens\": 0,\n \"audio_tokens\": 0,\n \"accepted_prediction_tokens\":
0,\n \"rejected_prediction_tokens\": 0\n }\n },\n \"system_fingerprint\":
\"fp_0aa8d3e20b\"\n}\n"
headers:
CF-Cache-Status:
- DYNAMIC
CF-RAY:
- 8f8caa8bdd26756b-SEA
Connection:
- keep-alive
Content-Encoding:
- gzip
Content-Type:
- application/json
Date:
- Fri, 27 Dec 2024 22:14:54 GMT
Server:
- cloudflare
Transfer-Encoding:
- chunked
X-Content-Type-Options:
- nosniff
access-control-expose-headers:
- X-Request-ID
alt-svc:
- h3=":443"; ma=86400
openai-organization:
- crewai-iuxna1
openai-processing-ms:
- '184'
openai-version:
- '2020-10-01'
strict-transport-security:
- max-age=31536000; includeSubDomains; preload
x-ratelimit-limit-requests:
- '30000'
x-ratelimit-limit-tokens:
- '150000000'
x-ratelimit-remaining-requests:
- '29999'
x-ratelimit-remaining-tokens:
- '149999902'
x-ratelimit-reset-requests:
- 2ms
x-ratelimit-reset-tokens:
- 0s
x-request-id:
- req_652891f79c1104a7a8436275d78a69f1
http_version: HTTP/1.1
status_code: 200
version: 1

tests/cli/create_test.py

@@ -0,0 +1,40 @@
from unittest import mock
import pytest
from click.testing import CliRunner
from crewai.cli.cli import create
@pytest.fixture
def runner():
return CliRunner()
class TestCreateCommand:
@mock.patch("crewai.cli.cli.create_crew")
def test_create_crew_with_ssl_verify_default(self, mock_create_crew, runner):
"""Test that create crew command passes skip_ssl_verify=False by default."""
result = runner.invoke(create, ["crew", "test_crew"])
assert result.exit_code == 0
mock_create_crew.assert_called_once()
assert mock_create_crew.call_args[1]["skip_ssl_verify"] is False
@mock.patch("crewai.cli.cli.create_crew")
def test_create_crew_with_skip_ssl_verify(self, mock_create_crew, runner):
"""Test that create crew command passes skip_ssl_verify=True when flag is used."""
result = runner.invoke(create, ["crew", "test_crew", "--skip_ssl_verify"])
assert result.exit_code == 0
mock_create_crew.assert_called_once()
assert mock_create_crew.call_args[1]["skip_ssl_verify"] is True
@mock.patch("crewai.cli.cli.create_flow")
def test_create_flow_ignores_skip_ssl_verify(self, mock_create_flow, runner):
"""Test that create flow command ignores the skip_ssl_verify flag."""
result = runner.invoke(create, ["flow", "test_flow", "--skip_ssl_verify"])
assert result.exit_code == 0
mock_create_flow.assert_called_once()
assert mock_create_flow.call_args == mock.call("test_flow")

tests/cli/provider_test.py

@@ -0,0 +1,147 @@
import tempfile
from pathlib import Path
from unittest import mock
import pytest
import requests
from requests.exceptions import RequestException, SSLError, Timeout
from crewai.cli.provider import (
fetch_provider_data,
get_provider_data,
load_provider_data,
read_cache_file,
)
@pytest.fixture
def mock_response():
"""Mock a successful response from requests.get."""
mock_resp = mock.Mock()
mock_resp.headers = {"content-length": "100"}
mock_resp.iter_content.return_value = [b'{"model1": {"litellm_provider": "openai"}}']
return mock_resp
@pytest.fixture
def mock_cache_file():
"""Create a temporary file to use as a cache file."""
with tempfile.NamedTemporaryFile() as tmp:
yield Path(tmp.name)
class TestProviderFunctions:
@mock.patch("crewai.cli.provider.requests.get")
def test_fetch_provider_data_with_ssl_verify(self, mock_get, mock_response, mock_cache_file):
"""Test that fetch_provider_data calls requests.get with verify=True by default."""
mock_get.return_value = mock_response
fetch_provider_data(mock_cache_file)
mock_get.assert_called_once()
assert mock_get.call_args[1]["verify"] is True
@mock.patch("crewai.cli.provider.requests.get")
def test_fetch_provider_data_without_ssl_verify(self, mock_get, mock_response, mock_cache_file):
"""Test that fetch_provider_data calls requests.get with verify=False when skip_ssl_verify=True."""
mock_get.return_value = mock_response
fetch_provider_data(mock_cache_file, skip_ssl_verify=True)
mock_get.assert_called_once()
assert mock_get.call_args[1]["verify"] is False
@mock.patch("crewai.cli.provider.requests.get")
def test_fetch_provider_data_handles_request_exception(self, mock_get, mock_cache_file):
"""Test that fetch_provider_data handles RequestException properly."""
mock_get.side_effect = RequestException("Test error")
result = fetch_provider_data(mock_cache_file)
assert result is None
mock_get.assert_called_once()
@mock.patch("crewai.cli.provider.requests.get")
def test_fetch_provider_data_handles_timeout(self, mock_get, mock_cache_file):
"""Test that fetch_provider_data handles Timeout exception properly."""
mock_get.side_effect = Timeout("Connection timed out")
result = fetch_provider_data(mock_cache_file)
assert result is None
mock_get.assert_called_once()
@mock.patch("crewai.cli.provider.requests.get")
def test_fetch_provider_data_handles_ssl_error(self, mock_get, mock_cache_file):
"""Test that fetch_provider_data handles SSLError exception properly."""
mock_get.side_effect = SSLError("SSL Certificate verification failed")
result = fetch_provider_data(mock_cache_file)
assert result is None
mock_get.assert_called_once()
@mock.patch("crewai.cli.provider.requests.get")
def test_fetch_provider_data_handles_json_decode_error(
self, mock_get, mock_response, mock_cache_file
):
"""Test that fetch_provider_data handles JSONDecodeError properly."""
mock_get.return_value = mock_response
mock_response.iter_content.return_value = [b"invalid json"]
result = fetch_provider_data(mock_cache_file)
assert result is None
mock_get.assert_called_once()
@mock.patch("builtins.open", new_callable=mock.mock_open, read_data="invalid json")
def test_read_cache_file_handles_json_decode_error(self, mock_file, mock_cache_file):
"""Test that read_cache_file handles JSONDecodeError properly."""
with mock.patch.object(Path, "exists", return_value=True):
result = read_cache_file(mock_cache_file)
assert result is None
mock_file.assert_called_once_with(mock_cache_file, "r")
@mock.patch("builtins.open")
def test_read_cache_file_handles_os_error(self, mock_file, mock_cache_file):
"""Test that read_cache_file handles OSError properly."""
mock_file.side_effect = OSError("File I/O error")
with mock.patch.object(Path, "exists", return_value=True):
result = read_cache_file(mock_cache_file)
assert result is None
mock_file.assert_called_once_with(mock_cache_file, "r")
@mock.patch("builtins.open", new_callable=mock.mock_open, read_data='{"key": [1, 2, 3]}')
def test_read_cache_file_handles_invalid_format(self, mock_file, mock_cache_file):
"""Test that read_cache_file handles invalid data format properly."""
with mock.patch.object(Path, "exists", return_value=True):
with mock.patch("json.load", return_value=["not", "a", "dict"]):
result = read_cache_file(mock_cache_file)
assert result is None
mock_file.assert_called_once_with(mock_cache_file, "r")
@mock.patch("crewai.cli.provider.fetch_provider_data")
@mock.patch("crewai.cli.provider.read_cache_file")
def test_load_provider_data_with_ssl_verify(
self, mock_read_cache, mock_fetch, mock_cache_file
):
"""Test that load_provider_data passes skip_ssl_verify to fetch_provider_data."""
mock_read_cache.return_value = None
mock_fetch.return_value = {"model1": {"litellm_provider": "openai"}}
load_provider_data(mock_cache_file, 3600, skip_ssl_verify=True)
mock_fetch.assert_called_once_with(mock_cache_file, True)
@mock.patch("crewai.cli.provider.load_provider_data")
def test_get_provider_data_with_ssl_verify(self, mock_load, tmp_path):
"""Test that get_provider_data passes skip_ssl_verify to load_provider_data."""
mock_load.return_value = {"model1": {"litellm_provider": "openai"}}
get_provider_data(skip_ssl_verify=True)
mock_load.assert_called_once()
assert mock_load.call_args[0][2] is True # skip_ssl_verify parameter