Compare commits

...

7 Commits

Author SHA1 Message Date
Devin AI
22b81c4dc8 Fix tests: Import SSLError from requests.exceptions
Co-Authored-By: Joe Moura <joao@crewai.com>
2025-05-13 12:07:29 +00:00
Devin AI
71eebe6b1f Fix lint issue: remove unused json import
Co-Authored-By: Joe Moura <joao@crewai.com>
2025-05-13 12:02:32 +00:00
Devin AI
130a7fc1e0 Add additional tests for edge cases in provider data fetching
Co-Authored-By: Joe Moura <joao@crewai.com>
2025-05-13 12:00:32 +00:00
Devin AI
44540ddbd1 Add CLI documentation with --skip_ssl_verify flag details
Co-Authored-By: Joe Moura <joao@crewai.com>
2025-05-13 11:58:54 +00:00
Devin AI
97f8a44605 Enhance error handling for provider data fetching and cache file management
Co-Authored-By: Joe Moura <joao@crewai.com>
2025-05-13 11:57:21 +00:00
Devin AI
cfde51ce79 Fix lint issues: remove unused imports
Co-Authored-By: Joe Moura <joao@crewai.com>
2025-05-13 11:55:13 +00:00
Devin AI
4d3e094a40 Fix SSL certificate verification issue in provider data fetching
Co-Authored-By: Joe Moura <joao@crewai.com>
2025-05-13 11:52:55 +00:00
7 changed files with 879 additions and 17 deletions

120
docs/cli.md Normal file

@@ -0,0 +1,120 @@
# CrewAI CLI Documentation
The CrewAI Command Line Interface (CLI) provides tools for creating, managing, and running CrewAI projects.
## Installation
The CLI is automatically installed when you install the CrewAI package:
```bash
pip install crewai
```
## Available Commands
### Create Command
The `create` command allows you to create new crews or flows.
```bash
crewai create [TYPE] [NAME] [OPTIONS]
```
#### Arguments
- `TYPE`: Type of project to create. Must be either `crew` or `flow`.
- `NAME`: Name of the project to create.
#### Options
- `--provider`: The provider to use for the crew.
- `--skip_provider`: Skip provider validation.
- `--skip_ssl_verify`: Skip SSL certificate verification when fetching provider data (not secure).
#### Examples
Create a new crew:
```bash
crewai create crew my_crew
```
Create a new crew with a specific provider:
```bash
crewai create crew my_crew --provider openai
```
Create a new crew and skip SSL certificate verification (useful in environments with self-signed certificates):
```bash
crewai create crew my_crew --skip_ssl_verify
```
> **Warning**: Using the `--skip_ssl_verify` flag is not recommended in production environments as it bypasses SSL certificate verification, which can expose your system to security risks. Only use this flag in development environments or when you understand the security implications.
Create a new flow:
```bash
crewai create flow my_flow
```
### Run Command
The `run` command executes your crew.
```bash
crewai run
```
### Train Command
The `train` command trains your crew.
```bash
crewai train [OPTIONS]
```
#### Options
- `-n, --n_iterations`: Number of iterations to train the crew (default: 5).
- `-f, --filename`: Path to a custom file for training (default: "trained_agents_data.pkl").
### Reset Memories Command
The `reset_memories` command resets the crew memories.
```bash
crewai reset_memories [OPTIONS]
```
#### Options
- `-l, --long`: Reset LONG TERM memory.
- `-s, --short`: Reset SHORT TERM memory.
- `-e, --entities`: Reset ENTITIES memory.
- `-kn, --knowledge`: Reset KNOWLEDGE storage.
- `-k, --kickoff-outputs`: Reset LATEST KICKOFF TASK OUTPUTS.
- `-a, --all`: Reset ALL memories.
### Other Commands
- `version`: Show the installed version of crewai.
- `replay`: Replay the crew execution from a specific task.
- `log_tasks_outputs`: Retrieve your latest crew.kickoff() task outputs.
- `test`: Test the crew and evaluate the results.
- `install`: Install the Crew.
- `update`: Update the pyproject.toml of the Crew project to use uv.
- `signup`: Sign Up/Login to CrewAI+.
- `login`: Sign Up/Login to CrewAI+.
- `chat`: Start a conversation with the Crew.
## Security Considerations
When using the CrewAI CLI, be aware of the following security considerations:
1. **API Keys**: Store your API keys securely in environment variables or a `.env` file. Never hardcode them in your scripts.
2. **SSL Verification**: The `--skip_ssl_verify` flag bypasses SSL certificate verification, which can expose your system to security risks. Only use this flag in development environments or when you understand the security implications.
3. **Provider Data**: When fetching provider data, ensure that you're using a secure connection. The CLI will display a warning when SSL verification is disabled.
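The `--skip_ssl_verify` behavior described above comes down to toggling the `verify` argument of `requests.get`. A minimal sketch of that pattern (the function name and warning handling here are illustrative, not CrewAI's actual API):

```python
import requests
import urllib3


def fetch_json(url, skip_ssl_verify=False, timeout=60):
    """Fetch JSON over HTTPS, optionally bypassing certificate verification."""
    if skip_ssl_verify:
        # Silence the InsecureRequestWarning that requests emits
        # for every unverified HTTPS request.
        urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)
    response = requests.get(url, timeout=timeout, verify=not skip_ssl_verify)
    response.raise_for_status()
    return response.json()
```

With `verify=False`, `requests` skips both certificate-chain and hostname checks entirely, which is why the CLI prints a warning whenever the flag is set.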

View File

@@ -1,6 +1,5 @@
 import os
 from importlib.metadata import version as get_version
-from typing import Optional, Tuple
+from typing import Optional
 
 import click
@@ -37,10 +36,11 @@ def crewai():
 @click.argument("name")
 @click.option("--provider", type=str, help="The provider to use for the crew")
 @click.option("--skip_provider", is_flag=True, help="Skip provider validation")
-def create(type, name, provider, skip_provider=False):
+@click.option("--skip_ssl_verify", is_flag=True, help="Skip SSL certificate verification (not secure)")
+def create(type, name, provider, skip_provider=False, skip_ssl_verify=False):
     """Create a new crew, or flow."""
     if type == "crew":
-        create_crew(name, provider, skip_provider)
+        create_crew(name, provider, skip_provider, skip_ssl_verify=skip_ssl_verify)
     elif type == "flow":
         create_flow(name)
     else:
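The flag wiring in this hunk can be sketched as a standalone click command (a stub stands in for the real `create_crew`, which scaffolds the project):

```python
import click


def create_crew(name, provider=None, skip_provider=False, skip_ssl_verify=False):
    # Stub for illustration; echoes how it was called.
    click.echo(f"create_crew({name!r}, skip_ssl_verify={skip_ssl_verify})")


@click.command()
@click.argument("type")
@click.argument("name")
@click.option("--provider", type=str, help="The provider to use for the crew")
@click.option("--skip_provider", is_flag=True, help="Skip provider validation")
@click.option("--skip_ssl_verify", is_flag=True, help="Skip SSL certificate verification (not secure)")
def create(type, name, provider, skip_provider, skip_ssl_verify):
    """Create a new crew, or flow."""
    if type == "crew":
        create_crew(name, provider, skip_provider, skip_ssl_verify=skip_ssl_verify)
```

Because the option uses `is_flag=True`, the parameter arrives as a boolean: passing `--skip_ssl_verify` flips it to `True` without needing a value.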

View File

@@ -89,12 +89,12 @@ def copy_template_files(folder_path, name, class_name, parent_folder):
         copy_template(src_file, dst_file, name, class_name, folder_path.name)
 
-def create_crew(name, provider=None, skip_provider=False, parent_folder=None):
+def create_crew(name, provider=None, skip_provider=False, parent_folder=None, skip_ssl_verify=False):
     folder_path, folder_name, class_name = create_folder_structure(name, parent_folder)
     env_vars = load_env_vars(folder_path)
     if not skip_provider:
         if not provider:
-            provider_models = get_provider_data()
+            provider_models = get_provider_data(skip_ssl_verify)
             if not provider_models:
                 return
@@ -114,7 +114,7 @@ def create_crew(name, provider=None, skip_provider=False, parent_folder=None):
             click.secho("Keeping existing provider configuration.", fg="yellow")
             return
 
-        provider_models = get_provider_data()
+        provider_models = get_provider_data(skip_ssl_verify)
         if not provider_models:
             return

View File

@@ -106,13 +106,14 @@ def select_model(provider, provider_models):
     return selected_model
 
-def load_provider_data(cache_file, cache_expiry):
+def load_provider_data(cache_file, cache_expiry, skip_ssl_verify=False):
     """
     Loads provider data from a cache file if it exists and is not expired. If the cache is expired or corrupted, it fetches the data from the web.
 
     Args:
     - cache_file (Path): The path to the cache file.
     - cache_expiry (int): The cache expiry time in seconds.
+    - skip_ssl_verify (bool): Whether to skip SSL certificate verification.
 
     Returns:
     - dict or None: The loaded provider data or None if the operation fails.
@@ -133,7 +134,7 @@ def load_provider_data(cache_file, cache_expiry):
             "Cache expired or not found. Fetching provider data from the web...",
             fg="cyan",
         )
-        return fetch_provider_data(cache_file)
+        return fetch_provider_data(cache_file, skip_ssl_verify)
def read_cache_file(cache_file):
@@ -147,34 +148,65 @@ def read_cache_file(cache_file):
     - dict or None: The JSON content of the cache file or None if the JSON is invalid.
     """
     try:
+        if not cache_file.exists():
+            return None
         with open(cache_file, "r") as f:
-            return json.load(f)
-    except json.JSONDecodeError:
+            data = json.load(f)
+            if not isinstance(data, dict):
+                click.secho("Invalid cache file format", fg="yellow")
+                return None
+            return data
+    except json.JSONDecodeError as e:
+        click.secho(f"Error parsing cache file: {str(e)}", fg="yellow")
+        return None
+    except OSError as e:
+        click.secho(f"Error reading cache file: {str(e)}", fg="yellow")
         return None
-def fetch_provider_data(cache_file):
+def fetch_provider_data(cache_file, skip_ssl_verify=False):
     """
     Fetches provider data from a specified URL and caches it to a file.
 
     Args:
     - cache_file (Path): The path to the cache file.
+    - skip_ssl_verify (bool): Whether to skip SSL certificate verification.
 
     Returns:
     - dict or None: The fetched provider data or None if the operation fails.
     """
     try:
-        response = requests.get(JSON_URL, stream=True, timeout=60)
+        if skip_ssl_verify:
+            click.secho(
+                "Warning: SSL certificate verification is disabled. This is not secure!",
+                fg="yellow",
+            )
+        response = requests.get(JSON_URL, stream=True, timeout=60, verify=not skip_ssl_verify)
         response.raise_for_status()
         data = download_data(response)
         with open(cache_file, "w") as f:
             json.dump(data, f)
         return data
+    except requests.Timeout:
+        click.secho("Timeout while fetching provider data", fg="red")
+        return None
+    except requests.SSLError as e:
+        click.secho(f"SSL verification failed: {str(e)}", fg="red")
+        if not skip_ssl_verify:
+            click.secho(
+                "You can bypass SSL verification with --skip_ssl_verify flag (not secure)",
+                fg="yellow",
+            )
+        return None
     except requests.RequestException as e:
-        click.secho(f"Error fetching provider data: {e}", fg="red")
+        click.secho(f"Error fetching provider data: {str(e)}", fg="red")
         return None
     except json.JSONDecodeError:
         click.secho("Error parsing provider data. Invalid JSON format.", fg="red")
         return None
+    except OSError as e:
+        click.secho(f"Error writing to cache file: {str(e)}", fg="red")
+        return None
 
 def download_data(response):
@@ -201,10 +233,13 @@ def download_data(response):
     return json.loads(data_content.decode("utf-8"))
 
-def get_provider_data():
+def get_provider_data(skip_ssl_verify=False):
     """
     Retrieves provider data from a cache file, filters out models based on provider criteria, and returns a dictionary of providers mapped to their models.
 
+    Args:
+    - skip_ssl_verify (bool): Whether to skip SSL certificate verification.
+
     Returns:
     - dict or None: A dictionary of providers mapped to their models or None if the operation fails.
     """
@@ -213,7 +248,7 @@ def get_provider_data():
     cache_file = cache_dir / "provider_cache.json"
     cache_expiry = 24 * 3600
 
-    data = load_provider_data(cache_file, cache_expiry)
+    data = load_provider_data(cache_file, cache_expiry, skip_ssl_verify)
     if not data:
         return None
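The cache-or-fetch flow that `load_provider_data` implements can be sketched with the network call injected as a callable, so the sketch stays offline (names simplified; freshness is judged from the file's mtime, an assumption for illustration):

```python
import json
import time
from pathlib import Path


def load_cached(cache_file: Path, cache_expiry: int, fetch, skip_ssl_verify=False):
    """Return fresh cached data if possible, otherwise fetch and re-cache."""
    if cache_file.exists() and time.time() - cache_file.stat().st_mtime < cache_expiry:
        try:
            data = json.loads(cache_file.read_text())
            if isinstance(data, dict):
                return data
        except json.JSONDecodeError:
            pass  # corrupted cache: fall through to a fresh fetch
    # Cache missing, stale, or corrupted: fetch and rewrite it.
    data = fetch(cache_file, skip_ssl_verify)
    if data is not None:
        cache_file.write_text(json.dumps(data))
    return data
```

The key point mirrored from the diff: `skip_ssl_verify` is only threaded through to the fetch path; cache reads never touch the network.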

View File

@@ -0,0 +1,520 @@
interactions:
- request:
body: !!binary |
CqcXCiQKIgoMc2VydmljZS5uYW1lEhIKEGNyZXdBSS10ZWxlbWV0cnkS/hYKEgoQY3Jld2FpLnRl
bGVtZXRyeRJ5ChBuJJtOdNaB05mOW/p3915eEgj2tkAd3rZcASoQVG9vbCBVc2FnZSBFcnJvcjAB
OYa7/URvKBUYQUpcFEVvKBUYShoKDmNyZXdhaV92ZXJzaW9uEggKBjAuODYuMEoPCgNsbG0SCAoG
Z3B0LTRvegIYAYUBAAEAABLJBwoQifhX01E5i+5laGdALAlZBBIIBuGM1aN+OPgqDENyZXcgQ3Jl
YXRlZDABORVGruBvKBUYQaipwOBvKBUYShoKDmNyZXdhaV92ZXJzaW9uEggKBjAuODYuMEoaCg5w
eXRob25fdmVyc2lvbhIICgYzLjEyLjdKLgoIY3Jld19rZXkSIgogN2U2NjA4OTg5ODU5YTY3ZWVj
ODhlZWY3ZmNlODUyMjVKMQoHY3Jld19pZBImCiRiOThiNWEwMC01YTI1LTQxMDctYjQwNS1hYmYz
MjBhOGYzYThKHAoMY3Jld19wcm9jZXNzEgwKCnNlcXVlbnRpYWxKEQoLY3Jld19tZW1vcnkSAhAA
ShoKFGNyZXdfbnVtYmVyX29mX3Rhc2tzEgIYAUobChVjcmV3X251bWJlcl9vZl9hZ2VudHMSAhgB
SuQCCgtjcmV3X2FnZW50cxLUAgrRAlt7ImtleSI6ICIyMmFjZDYxMWU0NGVmNWZhYzA1YjUzM2Q3
NWU4ODkzYiIsICJpZCI6ICJkNWIyMzM1YS0yMmIyLTQyZWEtYmYwNS03OTc3NmU3MmYzOTIiLCAi
cm9sZSI6ICJEYXRhIFNjaWVudGlzdCIsICJ2ZXJib3NlPyI6IGZhbHNlLCAibWF4X2l0ZXIiOiAy
MCwgIm1heF9ycG0iOiBudWxsLCAiZnVuY3Rpb25fY2FsbGluZ19sbG0iOiAiIiwgImxsbSI6ICJn
cHQtNG8tbWluaSIsICJkZWxlZ2F0aW9uX2VuYWJsZWQ/IjogZmFsc2UsICJhbGxvd19jb2RlX2V4
ZWN1dGlvbj8iOiBmYWxzZSwgIm1heF9yZXRyeV9saW1pdCI6IDIsICJ0b29sc19uYW1lcyI6IFsi
Z2V0IGdyZWV0aW5ncyJdfV1KkgIKCmNyZXdfdGFza3MSgwIKgAJbeyJrZXkiOiAiYTI3N2IzNGIy
YzE0NmYwYzU2YzVlMTM1NmU4ZjhhNTciLCAiaWQiOiAiMjJiZWMyMzEtY2QyMS00YzU4LTgyN2Ut
MDU4MWE4ZjBjMTExIiwgImFzeW5jX2V4ZWN1dGlvbj8iOiBmYWxzZSwgImh1bWFuX2lucHV0PyI6
IGZhbHNlLCAiYWdlbnRfcm9sZSI6ICJEYXRhIFNjaWVudGlzdCIsICJhZ2VudF9rZXkiOiAiMjJh
Y2Q2MTFlNDRlZjVmYWMwNWI1MzNkNzVlODg5M2IiLCAidG9vbHNfbmFtZXMiOiBbImdldCBncmVl
dGluZ3MiXX1degIYAYUBAAEAABKOAgoQ5WYoxRtTyPjge4BduhL0rRIIv2U6rvWALfwqDFRhc2sg
Q3JlYXRlZDABOX068uBvKBUYQZkv8+BvKBUYSi4KCGNyZXdfa2V5EiIKIDdlNjYwODk4OTg1OWE2
N2VlYzg4ZWVmN2ZjZTg1MjI1SjEKB2NyZXdfaWQSJgokYjk4YjVhMDAtNWEyNS00MTA3LWI0MDUt
YWJmMzIwYThmM2E4Si4KCHRhc2tfa2V5EiIKIGEyNzdiMzRiMmMxNDZmMGM1NmM1ZTEzNTZlOGY4
YTU3SjEKB3Rhc2tfaWQSJgokMjJiZWMyMzEtY2QyMS00YzU4LTgyN2UtMDU4MWE4ZjBjMTExegIY
AYUBAAEAABKQAQoQXyeDtJDFnyp2Fjk9YEGTpxIIaNE7gbhPNYcqClRvb2wgVXNhZ2UwATkaXTvj
bygVGEGvx0rjbygVGEoaCg5jcmV3YWlfdmVyc2lvbhIICgYwLjg2LjBKHAoJdG9vbF9uYW1lEg8K
DUdldCBHcmVldGluZ3NKDgoIYXR0ZW1wdHMSAhgBegIYAYUBAAEAABLVBwoQMWfznt0qwauEzl7T
UOQxRBII9q+pUS5EdLAqDENyZXcgQ3JlYXRlZDABORONPORvKBUYQSAoS+RvKBUYShoKDmNyZXdh
aV92ZXJzaW9uEggKBjAuODYuMEoaCg5weXRob25fdmVyc2lvbhIICgYzLjEyLjdKLgoIY3Jld19r
ZXkSIgogYzMwNzYwMDkzMjY3NjE0NDRkNTdjNzFkMWRhM2YyN2NKMQoHY3Jld19pZBImCiQ3OTQw
MTkyNS1iOGU5LTQ3MDgtODUzMC00NDhhZmEzYmY4YjBKHAoMY3Jld19wcm9jZXNzEgwKCnNlcXVl
bnRpYWxKEQoLY3Jld19tZW1vcnkSAhAAShoKFGNyZXdfbnVtYmVyX29mX3Rhc2tzEgIYAUobChVj
cmV3X251bWJlcl9vZl9hZ2VudHMSAhgBSuoCCgtjcmV3X2FnZW50cxLaAgrXAlt7ImtleSI6ICI5
OGYzYjFkNDdjZTk2OWNmMDU3NzI3Yjc4NDE0MjVjZCIsICJpZCI6ICI5OTJkZjYyZi1kY2FiLTQy
OTUtOTIwNi05MDBkNDExNGIxZTkiLCAicm9sZSI6ICJGcmllbmRseSBOZWlnaGJvciIsICJ2ZXJi
b3NlPyI6IGZhbHNlLCAibWF4X2l0ZXIiOiAyMCwgIm1heF9ycG0iOiBudWxsLCAiZnVuY3Rpb25f
Y2FsbGluZ19sbG0iOiAiIiwgImxsbSI6ICJncHQtNG8tbWluaSIsICJkZWxlZ2F0aW9uX2VuYWJs
ZWQ/IjogZmFsc2UsICJhbGxvd19jb2RlX2V4ZWN1dGlvbj8iOiBmYWxzZSwgIm1heF9yZXRyeV9s
aW1pdCI6IDIsICJ0b29sc19uYW1lcyI6IFsiZGVjaWRlIGdyZWV0aW5ncyJdfV1KmAIKCmNyZXdf
dGFza3MSiQIKhgJbeyJrZXkiOiAiODBkN2JjZDQ5MDk5MjkwMDgzODMyZjBlOTgzMzgwZGYiLCAi
aWQiOiAiMmZmNjE5N2UtYmEyNy00YjczLWI0YTctNGZhMDQ4ZTYyYjQ3IiwgImFzeW5jX2V4ZWN1
dGlvbj8iOiBmYWxzZSwgImh1bWFuX2lucHV0PyI6IGZhbHNlLCAiYWdlbnRfcm9sZSI6ICJGcmll
bmRseSBOZWlnaGJvciIsICJhZ2VudF9rZXkiOiAiOThmM2IxZDQ3Y2U5NjljZjA1NzcyN2I3ODQx
NDI1Y2QiLCAidG9vbHNfbmFtZXMiOiBbImRlY2lkZSBncmVldGluZ3MiXX1degIYAYUBAAEAABKO
AgoQnjTp5boK7/+DQxztYIpqihIIgGnMUkBtzHEqDFRhc2sgQ3JlYXRlZDABOcpYcuRvKBUYQalE
c+RvKBUYSi4KCGNyZXdfa2V5EiIKIGMzMDc2MDA5MzI2NzYxNDQ0ZDU3YzcxZDFkYTNmMjdjSjEK
B2NyZXdfaWQSJgokNzk0MDE5MjUtYjhlOS00NzA4LTg1MzAtNDQ4YWZhM2JmOGIwSi4KCHRhc2tf
a2V5EiIKIDgwZDdiY2Q0OTA5OTI5MDA4MzgzMmYwZTk4MzM4MGRmSjEKB3Rhc2tfaWQSJgokMmZm
NjE5N2UtYmEyNy00YjczLWI0YTctNGZhMDQ4ZTYyYjQ3egIYAYUBAAEAABKTAQoQ26H9pLUgswDN
p9XhJwwL6BIIx3bw7mAvPYwqClRvb2wgVXNhZ2UwATmy7NPlbygVGEEvb+HlbygVGEoaCg5jcmV3
YWlfdmVyc2lvbhIICgYwLjg2LjBKHwoJdG9vbF9uYW1lEhIKEERlY2lkZSBHcmVldGluZ3NKDgoI
YXR0ZW1wdHMSAhgBegIYAYUBAAEAAA==
headers:
Accept:
- '*/*'
Accept-Encoding:
- gzip, deflate
Connection:
- keep-alive
Content-Length:
- '2986'
Content-Type:
- application/x-protobuf
User-Agent:
- OTel-OTLP-Exporter-Python/1.27.0
method: POST
uri: https://telemetry.crewai.com:4319/v1/traces
response:
body:
string: "\n\0"
headers:
Content-Length:
- '2'
Content-Type:
- application/x-protobuf
Date:
- Fri, 27 Dec 2024 22:14:53 GMT
status:
code: 200
message: OK
- request:
body: '{"messages": [{"role": "system", "content": "You are test role. test backstory\nYour
personal goal is: test goal\nTo give my best complete final answer to the task
use the exact following format:\n\nThought: I now can give a great answer\nFinal
Answer: Your final answer must be the great and the most complete as possible,
it must be outcome described.\n\nI MUST use these formats, my job depends on
it!"}, {"role": "user", "content": "\nCurrent Task: Say the word: Hi\n\nThis
is the expect criteria for your final answer: The word: Hi\nyou MUST return
the actual complete content as the final answer, not a summary.\n\nBegin! This
is VERY important to you, use the tools available and give your best Final Answer,
your job depends on it!\n\nThought:"}], "model": "gpt-4o-mini", "stop": ["\nObservation:"],
"stream": false}'
headers:
accept:
- application/json
accept-encoding:
- gzip, deflate
connection:
- keep-alive
content-length:
- '824'
content-type:
- application/json
cookie:
- _cfuvid=ePJSDFdHag2D8lj21_ijAMWjoA6xfnPNxN4uekvC728-1727226247743-0.0.1.1-604800000
host:
- api.openai.com
user-agent:
- OpenAI/Python 1.52.1
x-stainless-arch:
- x64
x-stainless-async:
- 'false'
x-stainless-lang:
- python
x-stainless-os:
- Linux
x-stainless-package-version:
- 1.52.1
x-stainless-raw-response:
- 'true'
x-stainless-retry-count:
- '0'
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.12.7
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
content: "{\n \"id\": \"chatcmpl-AjCtZLLrWi8ZASpP9bz6HaCV7xBIn\",\n \"object\":
\"chat.completion\",\n \"created\": 1735337693,\n \"model\": \"gpt-4o-mini-2024-07-18\",\n
\ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
\"assistant\",\n \"content\": \"I now can give a great answer \\nFinal
Answer: Hi\",\n \"refusal\": null\n },\n \"logprobs\": null,\n
\ \"finish_reason\": \"stop\"\n }\n ],\n \"usage\": {\n \"prompt_tokens\":
158,\n \"completion_tokens\": 12,\n \"total_tokens\": 170,\n \"prompt_tokens_details\":
{\n \"cached_tokens\": 0,\n \"audio_tokens\": 0\n },\n \"completion_tokens_details\":
{\n \"reasoning_tokens\": 0,\n \"audio_tokens\": 0,\n \"accepted_prediction_tokens\":
0,\n \"rejected_prediction_tokens\": 0\n }\n },\n \"system_fingerprint\":
\"fp_0aa8d3e20b\"\n}\n"
headers:
CF-Cache-Status:
- DYNAMIC
CF-RAY:
- 8f8caa83deca756b-SEA
Connection:
- keep-alive
Content-Encoding:
- gzip
Content-Type:
- application/json
Date:
- Fri, 27 Dec 2024 22:14:53 GMT
Server:
- cloudflare
Set-Cookie:
- __cf_bm=wJkq_yLkzE3OdxE0aMJz.G0kce969.9JxRmZ0ratl4c-1735337693-1.0.1.1-OKpUoRrSPFGvWv5Hp5ET1PNZ7iZNHPKEAuakpcQUxxPSeisUIIR3qIOZ31MGmYugqB5.wkvidgbxOAagqJvmnw;
path=/; expires=Fri, 27-Dec-24 22:44:53 GMT; domain=.api.openai.com; HttpOnly;
Secure; SameSite=None
- _cfuvid=A_ASCLNAVfQoyucWOAIhecWtEpNotYoZr0bAFihgNxs-1735337693273-0.0.1.1-604800000;
path=/; domain=.api.openai.com; HttpOnly; Secure; SameSite=None
Transfer-Encoding:
- chunked
X-Content-Type-Options:
- nosniff
access-control-expose-headers:
- X-Request-ID
alt-svc:
- h3=":443"; ma=86400
openai-organization:
- crewai-iuxna1
openai-processing-ms:
- '404'
openai-version:
- '2020-10-01'
strict-transport-security:
- max-age=31536000; includeSubDomains; preload
x-ratelimit-limit-requests:
- '30000'
x-ratelimit-limit-tokens:
- '150000000'
x-ratelimit-remaining-requests:
- '29999'
x-ratelimit-remaining-tokens:
- '149999816'
x-ratelimit-reset-requests:
- 2ms
x-ratelimit-reset-tokens:
- 0s
x-request-id:
- req_6ac84634bff9193743c4b0911c09b4a6
http_version: HTTP/1.1
status_code: 200
- request:
body: '{"messages": [{"role": "system", "content": "Determine if the following
feedback indicates that the user is satisfied or if further changes are needed.
Respond with ''True'' if further changes are needed, or ''False'' if the user
is satisfied. **Important** Do not include any additional commentary outside
of your ''True'' or ''False'' response.\n\nFeedback: \"Don''t say hi, say Hello
instead!\""}], "model": "gpt-4o-mini", "stop": ["\nObservation:"], "stream":
false}'
headers:
accept:
- application/json
accept-encoding:
- gzip, deflate
connection:
- keep-alive
content-length:
- '461'
content-type:
- application/json
cookie:
- _cfuvid=A_ASCLNAVfQoyucWOAIhecWtEpNotYoZr0bAFihgNxs-1735337693273-0.0.1.1-604800000;
__cf_bm=wJkq_yLkzE3OdxE0aMJz.G0kce969.9JxRmZ0ratl4c-1735337693-1.0.1.1-OKpUoRrSPFGvWv5Hp5ET1PNZ7iZNHPKEAuakpcQUxxPSeisUIIR3qIOZ31MGmYugqB5.wkvidgbxOAagqJvmnw
host:
- api.openai.com
user-agent:
- OpenAI/Python 1.52.1
x-stainless-arch:
- x64
x-stainless-async:
- 'false'
x-stainless-lang:
- python
x-stainless-os:
- Linux
x-stainless-package-version:
- 1.52.1
x-stainless-raw-response:
- 'true'
x-stainless-retry-count:
- '0'
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.12.7
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
content: "{\n \"id\": \"chatcmpl-AjCtZNlWdrrPZhq0MJDqd16sMuQEJ\",\n \"object\":
\"chat.completion\",\n \"created\": 1735337693,\n \"model\": \"gpt-4o-mini-2024-07-18\",\n
\ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
\"assistant\",\n \"content\": \"True\",\n \"refusal\": null\n
\ },\n \"logprobs\": null,\n \"finish_reason\": \"stop\"\n }\n
\ ],\n \"usage\": {\n \"prompt_tokens\": 78,\n \"completion_tokens\":
1,\n \"total_tokens\": 79,\n \"prompt_tokens_details\": {\n \"cached_tokens\":
0,\n \"audio_tokens\": 0\n },\n \"completion_tokens_details\": {\n
\ \"reasoning_tokens\": 0,\n \"audio_tokens\": 0,\n \"accepted_prediction_tokens\":
0,\n \"rejected_prediction_tokens\": 0\n }\n },\n \"system_fingerprint\":
\"fp_0aa8d3e20b\"\n}\n"
headers:
CF-Cache-Status:
- DYNAMIC
CF-RAY:
- 8f8caa87094f756b-SEA
Connection:
- keep-alive
Content-Encoding:
- gzip
Content-Type:
- application/json
Date:
- Fri, 27 Dec 2024 22:14:53 GMT
Server:
- cloudflare
Transfer-Encoding:
- chunked
X-Content-Type-Options:
- nosniff
access-control-expose-headers:
- X-Request-ID
alt-svc:
- h3=":443"; ma=86400
openai-organization:
- crewai-iuxna1
openai-processing-ms:
- '156'
openai-version:
- '2020-10-01'
strict-transport-security:
- max-age=31536000; includeSubDomains; preload
x-ratelimit-limit-requests:
- '30000'
x-ratelimit-limit-tokens:
- '150000000'
x-ratelimit-remaining-requests:
- '29999'
x-ratelimit-remaining-tokens:
- '149999898'
x-ratelimit-reset-requests:
- 2ms
x-ratelimit-reset-tokens:
- 0s
x-request-id:
- req_ec74bef2a9ef7b2144c03fd7f7bbeab0
http_version: HTTP/1.1
status_code: 200
- request:
body: '{"messages": [{"role": "system", "content": "You are test role. test backstory\nYour
personal goal is: test goal\nTo give my best complete final answer to the task
use the exact following format:\n\nThought: I now can give a great answer\nFinal
Answer: Your final answer must be the great and the most complete as possible,
it must be outcome described.\n\nI MUST use these formats, my job depends on
it!"}, {"role": "user", "content": "\nCurrent Task: Say the word: Hi\n\nThis
is the expect criteria for your final answer: The word: Hi\nyou MUST return
the actual complete content as the final answer, not a summary.\n\nBegin! This
is VERY important to you, use the tools available and give your best Final Answer,
your job depends on it!\n\nThought:"}, {"role": "assistant", "content": "I now
can give a great answer \nFinal Answer: Hi"}, {"role": "user", "content": "Feedback:
Don''t say hi, say Hello instead!"}], "model": "gpt-4o-mini", "stop": ["\nObservation:"],
"stream": false}'
headers:
accept:
- application/json
accept-encoding:
- gzip, deflate
connection:
- keep-alive
content-length:
- '986'
content-type:
- application/json
cookie:
- _cfuvid=A_ASCLNAVfQoyucWOAIhecWtEpNotYoZr0bAFihgNxs-1735337693273-0.0.1.1-604800000;
__cf_bm=wJkq_yLkzE3OdxE0aMJz.G0kce969.9JxRmZ0ratl4c-1735337693-1.0.1.1-OKpUoRrSPFGvWv5Hp5ET1PNZ7iZNHPKEAuakpcQUxxPSeisUIIR3qIOZ31MGmYugqB5.wkvidgbxOAagqJvmnw
host:
- api.openai.com
user-agent:
- OpenAI/Python 1.52.1
x-stainless-arch:
- x64
x-stainless-async:
- 'false'
x-stainless-lang:
- python
x-stainless-os:
- Linux
x-stainless-package-version:
- 1.52.1
x-stainless-raw-response:
- 'true'
x-stainless-retry-count:
- '0'
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.12.7
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
content: "{\n \"id\": \"chatcmpl-AjCtZGv4f3h7GDdhyOy9G0sB1lRgC\",\n \"object\":
\"chat.completion\",\n \"created\": 1735337693,\n \"model\": \"gpt-4o-mini-2024-07-18\",\n
\ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
\"assistant\",\n \"content\": \"Thought: I understand the feedback and
will adjust my response accordingly. \\nFinal Answer: Hello\",\n \"refusal\":
null\n },\n \"logprobs\": null,\n \"finish_reason\": \"stop\"\n
\ }\n ],\n \"usage\": {\n \"prompt_tokens\": 188,\n \"completion_tokens\":
18,\n \"total_tokens\": 206,\n \"prompt_tokens_details\": {\n \"cached_tokens\":
0,\n \"audio_tokens\": 0\n },\n \"completion_tokens_details\": {\n
\ \"reasoning_tokens\": 0,\n \"audio_tokens\": 0,\n \"accepted_prediction_tokens\":
0,\n \"rejected_prediction_tokens\": 0\n }\n },\n \"system_fingerprint\":
\"fp_0aa8d3e20b\"\n}\n"
headers:
CF-Cache-Status:
- DYNAMIC
CF-RAY:
- 8f8caa88cac4756b-SEA
Connection:
- keep-alive
Content-Encoding:
- gzip
Content-Type:
- application/json
Date:
- Fri, 27 Dec 2024 22:14:54 GMT
Server:
- cloudflare
Transfer-Encoding:
- chunked
X-Content-Type-Options:
- nosniff
access-control-expose-headers:
- X-Request-ID
alt-svc:
- h3=":443"; ma=86400
openai-organization:
- crewai-iuxna1
openai-processing-ms:
- '358'
openai-version:
- '2020-10-01'
strict-transport-security:
- max-age=31536000; includeSubDomains; preload
x-ratelimit-limit-requests:
- '30000'
x-ratelimit-limit-tokens:
- '150000000'
x-ratelimit-remaining-requests:
- '29999'
x-ratelimit-remaining-tokens:
- '149999793'
x-ratelimit-reset-requests:
- 2ms
x-ratelimit-reset-tokens:
- 0s
x-request-id:
- req_ae1ab6b206d28ded6fee3c83ed0c2ab7
http_version: HTTP/1.1
status_code: 200
- request:
body: '{"messages": [{"role": "system", "content": "Determine if the following
feedback indicates that the user is satisfied or if further changes are needed.
Respond with ''True'' if further changes are needed, or ''False'' if the user
is satisfied. **Important** Do not include any additional commentary outside
of your ''True'' or ''False'' response.\n\nFeedback: \"looks good\""}], "model":
"gpt-4o-mini", "stop": ["\nObservation:"], "stream": false}'
headers:
accept:
- application/json
accept-encoding:
- gzip, deflate
connection:
- keep-alive
content-length:
- '439'
content-type:
- application/json
cookie:
- _cfuvid=A_ASCLNAVfQoyucWOAIhecWtEpNotYoZr0bAFihgNxs-1735337693273-0.0.1.1-604800000;
__cf_bm=wJkq_yLkzE3OdxE0aMJz.G0kce969.9JxRmZ0ratl4c-1735337693-1.0.1.1-OKpUoRrSPFGvWv5Hp5ET1PNZ7iZNHPKEAuakpcQUxxPSeisUIIR3qIOZ31MGmYugqB5.wkvidgbxOAagqJvmnw
host:
- api.openai.com
user-agent:
- OpenAI/Python 1.52.1
x-stainless-arch:
- x64
x-stainless-async:
- 'false'
x-stainless-lang:
- python
x-stainless-os:
- Linux
x-stainless-package-version:
- 1.52.1
x-stainless-raw-response:
- 'true'
x-stainless-retry-count:
- '0'
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.12.7
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
content: "{\n \"id\": \"chatcmpl-AjCtaiHL4TY8Dssk0j2miqmjrzquy\",\n \"object\":
\"chat.completion\",\n \"created\": 1735337694,\n \"model\": \"gpt-4o-mini-2024-07-18\",\n
\ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
\"assistant\",\n \"content\": \"False\",\n \"refusal\": null\n
\ },\n \"logprobs\": null,\n \"finish_reason\": \"stop\"\n }\n
\ ],\n \"usage\": {\n \"prompt_tokens\": 73,\n \"completion_tokens\":
1,\n \"total_tokens\": 74,\n \"prompt_tokens_details\": {\n \"cached_tokens\":
0,\n \"audio_tokens\": 0\n },\n \"completion_tokens_details\": {\n
\ \"reasoning_tokens\": 0,\n \"audio_tokens\": 0,\n \"accepted_prediction_tokens\":
0,\n \"rejected_prediction_tokens\": 0\n }\n },\n \"system_fingerprint\":
\"fp_0aa8d3e20b\"\n}\n"
headers:
CF-Cache-Status:
- DYNAMIC
CF-RAY:
- 8f8caa8bdd26756b-SEA
Connection:
- keep-alive
Content-Encoding:
- gzip
Content-Type:
- application/json
Date:
- Fri, 27 Dec 2024 22:14:54 GMT
Server:
- cloudflare
Transfer-Encoding:
- chunked
X-Content-Type-Options:
- nosniff
access-control-expose-headers:
- X-Request-ID
alt-svc:
- h3=":443"; ma=86400
openai-organization:
- crewai-iuxna1
openai-processing-ms:
- '184'
openai-version:
- '2020-10-01'
strict-transport-security:
- max-age=31536000; includeSubDomains; preload
x-ratelimit-limit-requests:
- '30000'
x-ratelimit-limit-tokens:
- '150000000'
x-ratelimit-remaining-requests:
- '29999'
x-ratelimit-remaining-tokens:
- '149999902'
x-ratelimit-reset-requests:
- 2ms
x-ratelimit-reset-tokens:
- 0s
x-request-id:
- req_652891f79c1104a7a8436275d78a69f1
http_version: HTTP/1.1
status_code: 200
version: 1

40
tests/cli/create_test.py Normal file

@@ -0,0 +1,40 @@
from unittest import mock

import pytest
from click.testing import CliRunner

from crewai.cli.cli import create


@pytest.fixture
def runner():
    return CliRunner()


class TestCreateCommand:
    @mock.patch("crewai.cli.cli.create_crew")
    def test_create_crew_with_ssl_verify_default(self, mock_create_crew, runner):
        """Test that create crew command passes skip_ssl_verify=False by default."""
        result = runner.invoke(create, ["crew", "test_crew"])

        assert result.exit_code == 0
        mock_create_crew.assert_called_once()
        assert mock_create_crew.call_args[1]["skip_ssl_verify"] is False

    @mock.patch("crewai.cli.cli.create_crew")
    def test_create_crew_with_skip_ssl_verify(self, mock_create_crew, runner):
        """Test that create crew command passes skip_ssl_verify=True when flag is used."""
        result = runner.invoke(create, ["crew", "test_crew", "--skip_ssl_verify"])

        assert result.exit_code == 0
        mock_create_crew.assert_called_once()
        assert mock_create_crew.call_args[1]["skip_ssl_verify"] is True

    @mock.patch("crewai.cli.cli.create_flow")
    def test_create_flow_ignores_skip_ssl_verify(self, mock_create_flow, runner):
        """Test that create flow command ignores the skip_ssl_verify flag."""
        result = runner.invoke(create, ["flow", "test_flow", "--skip_ssl_verify"])

        assert result.exit_code == 0
        mock_create_flow.assert_called_once()
        assert mock_create_flow.call_args == mock.call("test_flow")

147
tests/cli/provider_test.py Normal file

@@ -0,0 +1,147 @@
import tempfile
from pathlib import Path
from unittest import mock

import pytest
import requests
from requests.exceptions import RequestException, SSLError, Timeout

from crewai.cli.provider import (
    fetch_provider_data,
    get_provider_data,
    load_provider_data,
    read_cache_file,
)


@pytest.fixture
def mock_response():
    """Mock a successful response from requests.get."""
    mock_resp = mock.Mock()
    mock_resp.headers = {"content-length": "100"}
    mock_resp.iter_content.return_value = [b'{"model1": {"litellm_provider": "openai"}}']
    return mock_resp


@pytest.fixture
def mock_cache_file():
    """Create a temporary file to use as a cache file."""
    with tempfile.NamedTemporaryFile() as tmp:
        yield Path(tmp.name)


class TestProviderFunctions:
    @mock.patch("crewai.cli.provider.requests.get")
    def test_fetch_provider_data_with_ssl_verify(self, mock_get, mock_response, mock_cache_file):
        """Test that fetch_provider_data calls requests.get with verify=True by default."""
        mock_get.return_value = mock_response

        fetch_provider_data(mock_cache_file)

        mock_get.assert_called_once()
        assert mock_get.call_args[1]["verify"] is True

    @mock.patch("crewai.cli.provider.requests.get")
    def test_fetch_provider_data_without_ssl_verify(self, mock_get, mock_response, mock_cache_file):
        """Test that fetch_provider_data calls requests.get with verify=False when skip_ssl_verify=True."""
        mock_get.return_value = mock_response

        fetch_provider_data(mock_cache_file, skip_ssl_verify=True)

        mock_get.assert_called_once()
        assert mock_get.call_args[1]["verify"] is False

    @mock.patch("crewai.cli.provider.requests.get")
    def test_fetch_provider_data_handles_request_exception(self, mock_get, mock_cache_file):
        """Test that fetch_provider_data handles RequestException properly."""
        mock_get.side_effect = RequestException("Test error")

        result = fetch_provider_data(mock_cache_file)

        assert result is None
        mock_get.assert_called_once()

    @mock.patch("crewai.cli.provider.requests.get")
    def test_fetch_provider_data_handles_timeout(self, mock_get, mock_cache_file):
        """Test that fetch_provider_data handles Timeout exception properly."""
        mock_get.side_effect = Timeout("Connection timed out")

        result = fetch_provider_data(mock_cache_file)

        assert result is None
        mock_get.assert_called_once()

    @mock.patch("crewai.cli.provider.requests.get")
    def test_fetch_provider_data_handles_ssl_error(self, mock_get, mock_cache_file):
        """Test that fetch_provider_data handles SSLError exception properly."""
        mock_get.side_effect = SSLError("SSL Certificate verification failed")

        result = fetch_provider_data(mock_cache_file)

        assert result is None
        mock_get.assert_called_once()

    @mock.patch("crewai.cli.provider.requests.get")
    def test_fetch_provider_data_handles_json_decode_error(
        self, mock_get, mock_response, mock_cache_file
    ):
        """Test that fetch_provider_data handles JSONDecodeError properly."""
        mock_get.return_value = mock_response
        mock_response.iter_content.return_value = [b"invalid json"]

        result = fetch_provider_data(mock_cache_file)

        assert result is None
        mock_get.assert_called_once()

    @mock.patch("builtins.open", new_callable=mock.mock_open, read_data="invalid json")
    def test_read_cache_file_handles_json_decode_error(self, mock_file, mock_cache_file):
        """Test that read_cache_file handles JSONDecodeError properly."""
        with mock.patch.object(Path, "exists", return_value=True):
            result = read_cache_file(mock_cache_file)

        assert result is None
        mock_file.assert_called_once_with(mock_cache_file, "r")

    @mock.patch("builtins.open")
    def test_read_cache_file_handles_os_error(self, mock_file, mock_cache_file):
        """Test that read_cache_file handles OSError properly."""
        mock_file.side_effect = OSError("File I/O error")

        with mock.patch.object(Path, "exists", return_value=True):
            result = read_cache_file(mock_cache_file)

        assert result is None
        mock_file.assert_called_once_with(mock_cache_file, "r")

    @mock.patch("builtins.open", new_callable=mock.mock_open, read_data='{"key": [1, 2, 3]}')
    def test_read_cache_file_handles_invalid_format(self, mock_file, mock_cache_file):
        """Test that read_cache_file handles invalid data format properly."""
        with mock.patch.object(Path, "exists", return_value=True):
            with mock.patch("json.load", return_value=["not", "a", "dict"]):
                result = read_cache_file(mock_cache_file)

        assert result is None
        mock_file.assert_called_once_with(mock_cache_file, "r")

    @mock.patch("crewai.cli.provider.fetch_provider_data")
    @mock.patch("crewai.cli.provider.read_cache_file")
    def test_load_provider_data_with_ssl_verify(
        self, mock_read_cache, mock_fetch, mock_cache_file
    ):
        """Test that load_provider_data passes skip_ssl_verify to fetch_provider_data."""
        mock_read_cache.return_value = None
        mock_fetch.return_value = {"model1": {"litellm_provider": "openai"}}

        load_provider_data(mock_cache_file, 3600, skip_ssl_verify=True)

        mock_fetch.assert_called_once_with(mock_cache_file, True)

    @mock.patch("crewai.cli.provider.load_provider_data")
    def test_get_provider_data_with_ssl_verify(self, mock_load, tmp_path):
        """Test that get_provider_data passes skip_ssl_verify to load_provider_data."""
        mock_load.return_value = {"model1": {"litellm_provider": "openai"}}

        get_provider_data(skip_ssl_verify=True)

        mock_load.assert_called_once()
        assert mock_load.call_args[0][2] is True  # skip_ssl_verify parameter
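The assertions above lean on `mock.call_args` indexing: `[0]` is the tuple of positional arguments and `[1]` is the dict of keyword arguments. A minimal, self-contained illustration:

```python
from unittest import mock

m = mock.Mock()
m("provider_cache.json", 3600, True)  # positional, like load_provider_data(...)
m(verify=False, timeout=60)           # keyword, like requests.get(...)

first_call, second_call = m.call_args_list
assert first_call[0][2] is True            # third positional argument
assert second_call[1]["verify"] is False   # keyword argument by name
```

On Python 3.8+ the same values are also available as `call_args.args` and `call_args.kwargs`, which reads more clearly than numeric indexing.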