Mirror of https://github.com/crewAIInc/crewAI.git (synced 2026-01-30 18:48:14 +00:00)

Compare commits: 0.203.0 ... devin/1760 (10 commits)
| Author | SHA1 | Date |
|---|---|---|
| | 1ae3a003b6 | |
| | fc4b0dd923 | |
| | f0fb349ddf | |
| | bf2e2a42da | |
| | 814c962196 | |
| | 2ebb2e845f | |
| | 7b550ebfe8 | |
| | 29919c2d81 | |
| | b71c88814f | |
| | cb8bcfe214 | |
.github/security.md (vendored, 63 changed lines)
@@ -1,27 +1,50 @@
-## CrewAI Security Vulnerability Reporting Policy
+## CrewAI Security Policy

-CrewAI prioritizes the security of our software products, services, and GitHub repositories. To promptly address vulnerabilities, follow these steps for reporting security issues:
+We are committed to protecting the confidentiality, integrity, and availability of the CrewAI ecosystem. This policy explains how to report potential vulnerabilities and what you can expect from us when you do.

-### Reporting Process
-Do **not** report vulnerabilities via public GitHub issues.
-
-Email all vulnerability reports directly to:
-**security@crewai.com**
+### Scope
+
+We welcome reports for vulnerabilities that could impact:
+
+- CrewAI-maintained source code and repositories
+- CrewAI-operated infrastructure and services
+- Official CrewAI releases, packages, and distributions

-### Required Information
-To help us quickly validate and remediate the issue, your report must include:
-
-- **Vulnerability Type:** Clearly state the vulnerability type (e.g., SQL injection, XSS, privilege escalation).
-- **Affected Source Code:** Provide full file paths and direct URLs (branch, tag, or commit).
-- **Reproduction Steps:** Include detailed, step-by-step instructions. Screenshots are recommended.
-- **Special Configuration:** Document any special settings or configurations required to reproduce.
-- **Proof-of-Concept (PoC):** Provide exploit or PoC code (if available).
-- **Impact Assessment:** Clearly explain the severity and potential exploitation scenarios.
+Issues affecting clearly unaffiliated third-party services or user-generated content are out of scope, unless you can demonstrate a direct impact on CrewAI systems or customers.

-### Our Response
-- We will acknowledge receipt of your report promptly via your provided email.
-- Confirmed vulnerabilities will receive priority remediation based on severity.
-- Patches will be released as swiftly as possible following verification.
+### How to Report
+
+- **Please do not** disclose vulnerabilities via public GitHub issues, pull requests, or social media.
+- Email detailed reports to **security@crewai.com** with the subject line `Security Report`.
+- If you need to share large files or sensitive artifacts, mention it in your email and we will coordinate a secure transfer method.

-### Reward Notice
-Currently, we do not offer a bug bounty program. Rewards, if issued, are discretionary.
+### What to Include
+
+Providing comprehensive information enables us to validate the issue quickly:
+
+- **Vulnerability overview** — a concise description and classification (e.g., RCE, privilege escalation)
+- **Affected components** — repository, branch, tag, or deployed service along with relevant file paths or endpoints
+- **Reproduction steps** — detailed, step-by-step instructions; include logs, screenshots, or screen recordings when helpful
+- **Proof-of-concept** — exploit details or code that demonstrates the impact (if available)
+- **Impact analysis** — severity assessment, potential exploitation scenarios, and any prerequisites or special configurations
+
+### Our Commitment
+
+- **Acknowledgement:** We aim to acknowledge your report within two business days.
+- **Communication:** We will keep you informed about triage results, remediation progress, and planned release timelines.
+- **Resolution:** Confirmed vulnerabilities will be prioritized based on severity and fixed as quickly as possible.
+- **Recognition:** We currently do not run a bug bounty program; any rewards or recognition are issued at CrewAI's discretion.
+
+### Coordinated Disclosure
+
+We ask that you allow us a reasonable window to investigate and remediate confirmed issues before any public disclosure. We will coordinate publication timelines with you whenever possible.
+
+### Safe Harbor
+
+We will not pursue or support legal action against individuals who, in good faith:
+
+- Follow this policy and refrain from violating any applicable laws
+- Avoid privacy violations, data destruction, or service disruption
+- Limit testing to systems in scope and respect rate limits and terms of service
+
+If you are unsure whether your testing is covered, please contact us at **security@crewai.com** before proceeding.
@@ -40,7 +40,7 @@ def _suppress_pydantic_deprecation_warnings() -> None:
 
 _suppress_pydantic_deprecation_warnings()
 
-__version__ = "0.203.0"
+__version__ = "0.203.1"
 _telemetry_submitted = False
@@ -30,6 +30,7 @@ def validate_jwt_token(
         algorithms=["RS256"],
         audience=audience,
         issuer=issuer,
+        leeway=10.0,
         options={
             "verify_signature": True,
             "verify_exp": True,
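The added `leeway` is the substantive change in this hunk: it lets the expiry checks tolerate small clock skew between the token issuer and the validating host. Below is a standalone sketch (not code from this repo) of how PyJWT's `leeway` behaves; the secret and claims are illustrative only.

```python
import datetime

import jwt  # PyJWT

secret = "demo-secret"  # hypothetical key, for illustration only
token = jwt.encode(
    {
        # Token expired 5 seconds ago.
        "exp": datetime.datetime.now(tz=datetime.timezone.utc)
        - datetime.timedelta(seconds=5)
    },
    secret,
    algorithm="HS256",
)

try:
    # Without leeway, the expired token is rejected.
    jwt.decode(token, secret, algorithms=["HS256"])
except jwt.ExpiredSignatureError:
    pass

# With a 10-second leeway, mirroring the change above, the same token passes.
jwt.decode(token, secret, algorithms=["HS256"], leeway=10.0)
```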
@@ -1,10 +1,11 @@
+import os
 import subprocess
 from enum import Enum
 
 import click
 from packaging import version
 
-from crewai.cli.utils import read_toml
+from crewai.cli.utils import build_env_with_tool_repository_credentials, read_toml
 from crewai.cli.version import get_crewai_version
 
@@ -55,8 +56,22 @@ def execute_command(crew_type: CrewType) -> None:
     """
     command = ["uv", "run", "kickoff" if crew_type == CrewType.FLOW else "run_crew"]
 
+    env = os.environ.copy()
     try:
-        subprocess.run(command, capture_output=False, text=True, check=True)  # noqa: S603
+        pyproject_data = read_toml()
+        sources = pyproject_data.get("tool", {}).get("uv", {}).get("sources", {})
+
+        for source_config in sources.values():
+            if isinstance(source_config, dict):
+                index = source_config.get("index")
+                if index:
+                    index_env = build_env_with_tool_repository_credentials(index)
+                    env.update(index_env)
+    except Exception:  # noqa: S110
+        pass
+
+    try:
+        subprocess.run(command, capture_output=False, text=True, check=True, env=env)  # noqa: S603
 
     except subprocess.CalledProcessError as e:
         handle_error(e, crew_type)
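To make the new lookup concrete, here is a minimal standalone sketch of the structure the code above walks: a parsed pyproject.toml with a `[tool.uv.sources]` table whose entries may name a package index. The entry names and index value are hypothetical; in the real code, `read_toml()` produces the dict and each index name is handed to `build_env_with_tool_repository_credentials`.

```python
# Hypothetical parsed pyproject.toml (read_toml() returns a dict shaped like this).
pyproject_data = {
    "tool": {
        "uv": {
            "sources": {
                "my-private-tool": {"index": "my-index"},  # dict entry with an index
                "local-tool": {"path": "../local-tool"},   # dict entry without one
            }
        }
    }
}

sources = pyproject_data.get("tool", {}).get("uv", {}).get("sources", {})
for source_config in sources.values():
    if isinstance(source_config, dict):
        index = source_config.get("index")
        if index:
            # The real code resolves credentials for this index into env vars
            # that are then passed to the `uv run` subprocess.
            print(index)  # -> "my-index"
```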
@@ -5,7 +5,7 @@ description = "{{name}} using crewAI"
 authors = [{ name = "Your Name", email = "you@example.com" }]
 requires-python = ">=3.10,<3.14"
 dependencies = [
-    "crewai[tools]>=0.203.0,<1.0.0"
+    "crewai[tools]>=0.203.1,<1.0.0"
 ]
 
 [project.scripts]
@@ -5,7 +5,7 @@ description = "{{name}} using crewAI"
 authors = [{ name = "Your Name", email = "you@example.com" }]
 requires-python = ">=3.10,<3.14"
 dependencies = [
-    "crewai[tools]>=0.203.0,<1.0.0",
+    "crewai[tools]>=0.203.1,<1.0.0",
 ]
 
 [project.scripts]
@@ -5,7 +5,7 @@ description = "Power up your crews with {{folder_name}}"
 readme = "README.md"
 requires-python = ">=3.10,<3.14"
 dependencies = [
-    "crewai[tools]>=0.203.0"
+    "crewai[tools]>=0.203.1"
 ]
 
 [tool.crewai]
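A quick sanity check of what the bumped range admits, using the `packaging` library (which the CLI already imports, as the hunk further up shows). The versions here are just the boundary cases:

```python
from packaging.specifiers import SpecifierSet
from packaging.version import Version

spec = SpecifierSet(">=0.203.1,<1.0.0")

assert Version("0.203.1") in spec      # new floor is included
assert Version("0.203.0") not in spec  # the old release is now excluded
assert Version("1.0.0") not in spec    # upper bound stays exclusive
```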
@@ -358,7 +358,8 @@ def prompt_user_for_trace_viewing(timeout_seconds: int = 20) -> bool:
         try:
             response = input().strip().lower()
             result[0] = response in ["y", "yes"]
-        except (EOFError, KeyboardInterrupt):
+        except (EOFError, KeyboardInterrupt, OSError, LookupError):
+            # Handle all input-related errors silently
             result[0] = False
 
     input_thread = threading.Thread(target=get_input, daemon=True)
@@ -371,6 +372,7 @@ def prompt_user_for_trace_viewing(timeout_seconds: int = 20) -> bool:
         return result[0]
 
     except Exception:
+        # Suppress any warnings or errors and assume "no"
         return False
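The function these two hunks patch follows a common pattern: read stdin on a daemon thread, wait up to a timeout, and default to "no". A minimal self-contained sketch of that pattern, assembled from the fragments visible above (the function name here is illustrative):

```python
import threading


def prompt_yes_no_with_timeout(timeout_seconds: float = 20.0) -> bool:
    result = [False]  # mutable cell shared with the worker thread

    def get_input() -> None:
        try:
            response = input().strip().lower()
            result[0] = response in ["y", "yes"]
        except (EOFError, KeyboardInterrupt, OSError, LookupError):
            # Mirrors the broadened handler in the diff: any input failure means "no".
            result[0] = False

    input_thread = threading.Thread(target=get_input, daemon=True)
    input_thread.start()
    input_thread.join(timeout=timeout_seconds)  # returns after the timeout even with no input
    return result[0]
```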
@@ -299,6 +299,7 @@ class LLM(BaseLLM):
         callbacks: list[Any] | None = None,
         reasoning_effort: Literal["none", "low", "medium", "high"] | None = None,
         stream: bool = False,
+        supports_function_calling: bool | None = None,
         **kwargs,
     ):
         self.model = model
@@ -325,6 +326,7 @@ class LLM(BaseLLM):
         self.additional_params = kwargs
         self.is_anthropic = self._is_anthropic_model(model)
         self.stream = stream
+        self._supports_function_calling_override = supports_function_calling
 
         litellm.drop_params = True
@@ -1197,6 +1199,9 @@ class LLM(BaseLLM):
         )
 
     def supports_function_calling(self) -> bool:
+        if self._supports_function_calling_override is not None:
+            return self._supports_function_calling_override
+
         try:
             provider = self._get_custom_llm_provider()
             return litellm.utils.supports_function_calling(
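The override is exercised end to end by the tests near the bottom of this diff; a usage sketch (model names are examples only):

```python
from crewai import LLM

# Force function calling on for a model litellm's metadata doesn't recognize...
llm = LLM(model="custom-model/my-model", supports_function_calling=True)
assert llm.supports_function_calling() is True

# ...or force it off for a model that would otherwise report support.
llm = LLM(model="gpt-4o-mini", supports_function_calling=False)
assert llm.supports_function_calling() is False

# Without the override, litellm's metadata decides, as before.
llm = LLM(model="gpt-4o-mini")
assert llm.supports_function_calling() is True
```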
@@ -9,6 +9,7 @@ from typing import Any, Final
 
 DEFAULT_CONTEXT_WINDOW_SIZE: Final[int] = 4096
 DEFAULT_SUPPORTS_STOP_WORDS: Final[bool] = True
+DEFAULT_SUPPORTS_FUNCTION_CALLING: Final[bool] = True
 
 
 class BaseLLM(ABC):
@@ -82,6 +83,14 @@ class BaseLLM(ABC):
             RuntimeError: If the LLM request fails for other reasons.
         """
 
+    def supports_function_calling(self) -> bool:
+        """Check if the LLM supports function calling.
+
+        Returns:
+            True if the LLM supports function calling, False otherwise.
+        """
+        return DEFAULT_SUPPORTS_FUNCTION_CALLING
+
     def supports_stop_words(self) -> bool:
         """Check if the LLM supports stop words.
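Since the method is now a concrete default rather than abstract, custom BaseLLM subclasses inherit "supports function calling" unless they say otherwise. A sketch of opting out, assuming `BaseLLM` is importable as `from crewai import BaseLLM` (the class and model names here are illustrative, not from this diff):

```python
from typing import Any

from crewai import BaseLLM


class NoToolsLLM(BaseLLM):
    """A text-only LLM that declines native function calling."""

    def __init__(self) -> None:
        super().__init__(model="text-only-model")

    def call(
        self,
        messages: str | list[dict[str, str]],
        tools: list[dict] | None = None,
        callbacks: list[Any] | None = None,
        available_functions: dict[str, Any] | None = None,
    ) -> str | Any:
        return "plain text response"

    def supports_function_calling(self) -> bool:
        # Override the new default to opt out of native tool calling.
        return False
```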
@@ -7,7 +7,7 @@ import uuid
 import warnings
 from collections.abc import Callable
 from concurrent.futures import Future
-from copy import copy
+from copy import copy as shallow_copy
 from hashlib import md5
 from pathlib import Path
 from typing import (
@@ -672,7 +672,9 @@ Follow these guidelines:
         copied_data = {k: v for k, v in copied_data.items() if v is not None}
 
         cloned_context = (
-            [task_mapping[context_task.key] for context_task in self.context]
+            self.context
+            if self.context is NOT_SPECIFIED
+            else [task_mapping[context_task.key] for context_task in self.context]
             if isinstance(self.context, list)
             else None
         )
@@ -681,7 +683,7 @@ Follow these guidelines:
             return next((agent for agent in agents if agent.role == role), None)
 
         cloned_agent = get_agent_by_role(self.agent.role) if self.agent else None
-        cloned_tools = copy(self.tools) if self.tools else []
+        cloned_tools = shallow_copy(self.tools) if self.tools else []
 
         return self.__class__(
             **copied_data,
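The three-way branch added above exists because `context` uses a sentinel to distinguish "never set" from an explicit `None`. A standalone sketch of that pattern; `NOT_SPECIFIED` is real, but `_NotSpecified` and `clone_context` are illustrative names, not from the codebase:

```python
class _NotSpecified:
    """Sentinel type meaning 'the caller never passed this argument'."""

    def __repr__(self) -> str:
        return "NOT_SPECIFIED"


NOT_SPECIFIED = _NotSpecified()


def clone_context(context, task_mapping):
    # Mirrors the cloned_context expression in the hunk above:
    # keep the sentinel as-is, remap a real list, collapse anything else to None.
    return (
        context
        if context is NOT_SPECIFIED
        else [task_mapping[t.key] for t in context]
        if isinstance(context, list)
        else None
    )
```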
@@ -1,7 +1,7 @@
-import jwt
 import unittest
 from unittest.mock import MagicMock, patch
 
+import jwt
+
 from crewai.cli.authentication.utils import validate_jwt_token
@@ -17,19 +17,22 @@ class TestUtils(unittest.TestCase):
             key="mock_signing_key"
         )
 
+        jwt_token = "aaaaa.bbbbbb.cccccc"  # noqa: S105
+
         decoded_token = validate_jwt_token(
-            jwt_token="aaaaa.bbbbbb.cccccc",
+            jwt_token=jwt_token,
             jwks_url="https://mock_jwks_url",
             issuer="https://mock_issuer",
             audience="app_id_xxxx",
         )
 
         mock_jwt.decode.assert_called_with(
-            "aaaaa.bbbbbb.cccccc",
+            jwt_token,
             "mock_signing_key",
             algorithms=["RS256"],
             audience="app_id_xxxx",
             issuer="https://mock_issuer",
+            leeway=10.0,
             options={
                 "verify_signature": True,
                 "verify_exp": True,
@@ -43,9 +46,9 @@ class TestUtils(unittest.TestCase):
 
     def test_validate_jwt_token_expired(self, mock_jwt, mock_pyjwkclient):
         mock_jwt.decode.side_effect = jwt.ExpiredSignatureError
-        with self.assertRaises(Exception):
+        with self.assertRaises(Exception):  # noqa: B017
             validate_jwt_token(
-                jwt_token="aaaaa.bbbbbb.cccccc",
+                jwt_token="aaaaa.bbbbbb.cccccc",  # noqa: S106
                 jwks_url="https://mock_jwks_url",
                 issuer="https://mock_issuer",
                 audience="app_id_xxxx",
@@ -53,9 +56,9 @@ class TestUtils(unittest.TestCase):
 
     def test_validate_jwt_token_invalid_audience(self, mock_jwt, mock_pyjwkclient):
         mock_jwt.decode.side_effect = jwt.InvalidAudienceError
-        with self.assertRaises(Exception):
+        with self.assertRaises(Exception):  # noqa: B017
             validate_jwt_token(
-                jwt_token="aaaaa.bbbbbb.cccccc",
+                jwt_token="aaaaa.bbbbbb.cccccc",  # noqa: S106
                 jwks_url="https://mock_jwks_url",
                 issuer="https://mock_issuer",
                 audience="app_id_xxxx",
@@ -63,9 +66,9 @@ class TestUtils(unittest.TestCase):
 
     def test_validate_jwt_token_invalid_issuer(self, mock_jwt, mock_pyjwkclient):
         mock_jwt.decode.side_effect = jwt.InvalidIssuerError
-        with self.assertRaises(Exception):
+        with self.assertRaises(Exception):  # noqa: B017
             validate_jwt_token(
-                jwt_token="aaaaa.bbbbbb.cccccc",
+                jwt_token="aaaaa.bbbbbb.cccccc",  # noqa: S106
                 jwks_url="https://mock_jwks_url",
                 issuer="https://mock_issuer",
                 audience="app_id_xxxx",
@@ -75,9 +78,9 @@ class TestUtils(unittest.TestCase):
         self, mock_jwt, mock_pyjwkclient
     ):
         mock_jwt.decode.side_effect = jwt.MissingRequiredClaimError
-        with self.assertRaises(Exception):
+        with self.assertRaises(Exception):  # noqa: B017
             validate_jwt_token(
-                jwt_token="aaaaa.bbbbbb.cccccc",
+                jwt_token="aaaaa.bbbbbb.cccccc",  # noqa: S106
                 jwks_url="https://mock_jwks_url",
                 issuer="https://mock_issuer",
                 audience="app_id_xxxx",
@@ -85,9 +88,9 @@ class TestUtils(unittest.TestCase):
 
     def test_validate_jwt_token_jwks_error(self, mock_jwt, mock_pyjwkclient):
         mock_jwt.decode.side_effect = jwt.exceptions.PyJWKClientError
-        with self.assertRaises(Exception):
+        with self.assertRaises(Exception):  # noqa: B017
             validate_jwt_token(
-                jwt_token="aaaaa.bbbbbb.cccccc",
+                jwt_token="aaaaa.bbbbbb.cccccc",  # noqa: S106
                 jwks_url="https://mock_jwks_url",
                 issuer="https://mock_issuer",
                 audience="app_id_xxxx",
@@ -95,9 +98,9 @@ class TestUtils(unittest.TestCase):
 
     def test_validate_jwt_token_invalid_token(self, mock_jwt, mock_pyjwkclient):
         mock_jwt.decode.side_effect = jwt.InvalidTokenError
-        with self.assertRaises(Exception):
+        with self.assertRaises(Exception):  # noqa: B017
             validate_jwt_token(
-                jwt_token="aaaaa.bbbbbb.cccccc",
+                jwt_token="aaaaa.bbbbbb.cccccc",  # noqa: S106
                 jwks_url="https://mock_jwks_url",
                 issuer="https://mock_issuer",
                 audience="app_id_xxxx",
@@ -1,4 +1,4 @@
-from typing import Any, Dict, List, Optional, Union
+from typing import Any
 
 import pytest
@@ -159,11 +159,11 @@ class JWTAuthLLM(BaseLLM):
 
     def call(
         self,
-        messages: Union[str, List[Dict[str, str]]],
-        tools: Optional[List[dict]] = None,
-        callbacks: Optional[List[Any]] = None,
-        available_functions: Optional[Dict[str, Any]] = None,
-    ) -> Union[str, Any]:
+        messages: str | list[dict[str, str]],
+        tools: list[dict] | None = None,
+        callbacks: list[Any] | None = None,
+        available_functions: dict[str, Any] | None = None,
+    ) -> str | Any:
         """Record the call and return a predefined response."""
         self.calls.append(
             {
@@ -192,7 +192,7 @@ class JWTAuthLLM(BaseLLM):
 
 def test_custom_llm_with_jwt_auth():
     """Test a custom LLM implementation with JWT authentication."""
-    jwt_llm = JWTAuthLLM(jwt_token="example.jwt.token")
+    jwt_llm = JWTAuthLLM(jwt_token="example.jwt.token")  # noqa: S106
 
     # Test that create_llm returns the JWT-authenticated LLM instance directly
     result_llm = create_llm(jwt_llm)
@@ -238,11 +238,11 @@ class TimeoutHandlingLLM(BaseLLM):
 
     def call(
         self,
-        messages: Union[str, List[Dict[str, str]]],
-        tools: Optional[List[dict]] = None,
-        callbacks: Optional[List[Any]] = None,
-        available_functions: Optional[Dict[str, Any]] = None,
-    ) -> Union[str, Any]:
+        messages: str | list[dict[str, str]],
+        tools: list[dict] | None = None,
+        callbacks: list[Any] | None = None,
+        available_functions: dict[str, Any] | None = None,
+    ) -> str | Any:
         """Simulate API calls with timeout handling and retry logic.
 
         Args:
@@ -282,35 +282,34 @@ class TimeoutHandlingLLM(BaseLLM):
                     )
                     # Otherwise, continue to the next attempt (simulating backoff)
                     continue
-                else:
-                    # Success on first attempt
-                    return "First attempt response"
-            else:
-                # This is a retry attempt (attempt > 0)
-                # Always record retry attempts
-                self.calls.append(
-                    {
-                        "retry_attempt": attempt,
-                        "messages": messages,
-                        "tools": tools,
-                        "callbacks": callbacks,
-                        "available_functions": available_functions,
-                    }
-                )
+                # Success on first attempt
+                return "First attempt response"
+            # This is a retry attempt (attempt > 0)
+            # Always record retry attempts
+            self.calls.append(
+                {
+                    "retry_attempt": attempt,
+                    "messages": messages,
+                    "tools": tools,
+                    "callbacks": callbacks,
+                    "available_functions": available_functions,
+                }
+            )
 
-                # Simulate a failure if fail_count > 0
-                if self.fail_count > 0:
-                    self.fail_count -= 1
-                    # If we've used all retries, raise an error
-                    if attempt == self.max_retries - 1:
-                        raise TimeoutError(
-                            f"LLM request failed after {self.max_retries} attempts"
-                        )
-                    # Otherwise, continue to the next attempt (simulating backoff)
-                    continue
-                else:
-                    # Success on retry
-                    return "Response after retry"
+            # Simulate a failure if fail_count > 0
+            if self.fail_count > 0:
+                self.fail_count -= 1
+                # If we've used all retries, raise an error
+                if attempt == self.max_retries - 1:
+                    raise TimeoutError(
+                        f"LLM request failed after {self.max_retries} attempts"
+                    )
+                # Otherwise, continue to the next attempt (simulating backoff)
+                continue
+            # Success on retry
+            return "Response after retry"
+
+        return "Response after retry"
 
     def supports_function_calling(self) -> bool:
         """Return True to indicate that function calling is supported.
@@ -358,3 +357,25 @@ def test_timeout_handling_llm():
     with pytest.raises(TimeoutError, match="LLM request failed after 2 attempts"):
         llm.call("Test message")
     assert len(llm.calls) == 2  # Initial call + failed retry attempt
+
+
+class MinimalCustomLLM(BaseLLM):
+    """Minimal custom LLM implementation that doesn't override supports_function_calling."""
+
+    def __init__(self):
+        super().__init__(model="minimal-model")
+
+    def call(
+        self,
+        messages: str | list[dict[str, str]],
+        tools: list[dict] | None = None,
+        callbacks: list[Any] | None = None,
+        available_functions: dict[str, Any] | None = None,
+    ) -> str | Any:
+        return "Minimal response"
+
+
+def test_base_llm_supports_function_calling_default():
+    """Test that BaseLLM supports function calling by default."""
+    llm = MinimalCustomLLM()
+    assert llm.supports_function_calling() is True
@@ -711,3 +711,18 @@ def test_ollama_does_not_modify_when_last_is_user(ollama_llm):
     formatted = ollama_llm._format_messages_for_provider(original_messages)
 
     assert formatted == original_messages
+
+
+def test_supports_function_calling_with_override_true():
+    llm = LLM(model="custom-model/my-model", supports_function_calling=True)
+    assert llm.supports_function_calling() is True
+
+
+def test_supports_function_calling_with_override_false():
+    llm = LLM(model="gpt-4o-mini", supports_function_calling=False)
+    assert llm.supports_function_calling() is False
+
+
+def test_supports_function_calling_without_override():
+    llm = LLM(model="gpt-4o-mini")
+    assert llm.supports_function_calling() is True
@@ -1218,7 +1218,7 @@ def test_create_directory_false():
     assert not resolved_dir.exists()
 
     with pytest.raises(
-        RuntimeError, match="Directory .* does not exist and create_directory is False"
+        RuntimeError, match=r"Directory .* does not exist and create_directory is False"
     ):
         task._save_file("test content")
@@ -1635,3 +1635,48 @@ def test_task_interpolation_with_hyphens():
     assert "say hello world" in task.prompt()
 
     assert result.raw == "Hello, World!"
+
+
+def test_task_copy_with_none_context():
+    original_task = Task(
+        description="Test task",
+        expected_output="Test output",
+        context=None
+    )
+
+    new_task = original_task.copy(agents=[], task_mapping={})
+    assert original_task.context is None
+    assert new_task.context is None
+
+
+def test_task_copy_with_not_specified_context():
+    from crewai.utilities.constants import NOT_SPECIFIED
+    original_task = Task(
+        description="Test task",
+        expected_output="Test output",
+    )
+
+    new_task = original_task.copy(agents=[], task_mapping={})
+    assert original_task.context is NOT_SPECIFIED
+    assert new_task.context is NOT_SPECIFIED
+
+
+def test_task_copy_with_list_context():
+    """Test that copying a task with list context works correctly."""
+    task1 = Task(
+        description="Task 1",
+        expected_output="Output 1"
+    )
+    task2 = Task(
+        description="Task 2",
+        expected_output="Output 2",
+        context=[task1]
+    )
+
+    task_mapping = {task1.key: task1}
+
+    copied_task2 = task2.copy(agents=[], task_mapping=task_mapping)
+
+    assert isinstance(copied_task2.context, list)
+    assert len(copied_task2.context) == 1
+    assert copied_task2.context[0] is task1