Compare commits

...

14 Commits

Author SHA1 Message Date
Devin AI
207079e562 Fix CSVKnowledgeSource token limit issue with batching
- Add batch_size parameter to BaseFileKnowledgeSource (default: 50)
- Modify _save_documents to process chunks in batches
- Add comprehensive tests for large file handling and batching
- Ensure backward compatibility with existing code

Fixes #3574

Co-Authored-By: João <joao@crewai.com>
2025-09-22 10:06:40 +00:00
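The batching described above amounts to slicing the chunk list before writing it to storage. A minimal sketch under that reading, with hypothetical attribute names (chunks, storage) standing in for whatever BaseFileKnowledgeSource actually uses:

# Hedged sketch of the batching fix; attribute names are illustrative,
# not the exact CrewAI internals.
class BaseFileKnowledgeSource:
    def __init__(self, batch_size: int = 50) -> None:
        self.batch_size = batch_size  # default of 50, per the commit message
        self.chunks: list[str] = []
        self.storage = None  # assigned by the knowledge setup elsewhere

    def _save_documents(self) -> None:
        # Write chunks in fixed-size batches so a large CSV never sends
        # all of its chunks to the embedder in a single call.
        for i in range(0, len(self.chunks), self.batch_size):
            self.storage.save(self.chunks[i : i + self.batch_size])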
Vini Brasil
aa8dc9d77f Add source to LLM Guardrail events (#3572)
This commit adds the source attribute to LLM Guardrail event calls to
identify the Lite Agent or Task that executed the guardrail.
2025-09-22 11:58:00 +09:00
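Roughly, the change threads a source reference into the event payload so listeners can tell which Lite Agent or Task triggered the guardrail. A hedged sketch, with an illustrative event class rather than the real definitions:

# Illustrative only; the actual event classes and fields live in
# crewai.events and may differ.
from dataclasses import dataclass
from typing import Any

@dataclass
class LLMGuardrailStartedEvent:
    guardrail: Any
    source: Any = None  # the Lite Agent or Task running the guardrail

def start_guardrail(bus: Any, guardrail: Any, source: Any) -> None:
    # The caller now passes itself along instead of emitting an
    # anonymous event.
    bus.emit(source, LLMGuardrailStartedEvent(guardrail=guardrail, source=source))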
Jonathan Hill
9c1096dbdc fix: Make 'ready' parameter optional in _create_reasoning_plan function (#3561)
* fix: Make 'ready' parameter optional in _create_reasoning_plan function

This PR fixes Issue #3466, where the _create_reasoning_plan function was
called by the LLM without the 'ready' parameter. The fix makes 'ready'
optional with a default value of False, so the function can be called
with only the 'plan' argument.

Fixes #3466

* Change default value of 'ready' parameter to True

---------

Co-authored-by: João Moura <joaomdmoura@gmail.com>
2025-09-20 22:57:18 -03:00
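The shape of the fix is small: give 'ready' a default so a call with only 'plan' no longer fails. A sketch assuming the signature implied by the commit (the real function sits in crewai's reasoning handler and may return its own internal type):

def _create_reasoning_plan(plan: str, ready: bool = True) -> dict:
    # 'ready' defaults to True (per the follow-up commit above), so the
    # LLM may omit it when it only supplies a plan.
    return {"plan": plan, "ready": ready}

# Both call shapes are now valid:
_create_reasoning_plan("Outline the research steps")
_create_reasoning_plan("Outline the research steps", ready=False)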
João Moura
47044450c0 Adding fallback to crew settings (#3562)
* Adding fallback to crew settings

* fix: resolve ruff and mypy issues in cli/config.py

---------

Co-authored-by: Greyson Lalonde <greyson.r.lalonde@gmail.com>
2025-09-20 22:54:36 -03:00
João Moura
0ee438c39d fix version (#3557) 2025-09-20 17:14:28 -03:00
Joao Moura
cbb9965bf7 preparing new version 2025-09-20 12:27:25 -07:00
João Moura
4951d30dd9 Fix issues with getting id (#3556)
* fix issues with getting id

* ignore linter

* fix: resolve ruff linting issues in tracing utils

---------

Co-authored-by: Greyson LaLonde <greyson.r.lalonde@gmail.com>
2025-09-20 15:29:25 -03:00
Greyson LaLonde
7426969736 chore: apply ruff linting fixes and type annotations to memory module
Co-authored-by: Lorenze Jay <63378463+lorenzejay@users.noreply.github.com>
2025-09-19 22:20:13 -04:00
Greyson LaLonde
d879be8b66 chore: fix ruff linting issues in agents module
fix(agents): linting, import paths, cache key alignment, and static method
2025-09-19 22:11:21 -04:00
Greyson LaLonde
24b84a4b68 chore: apply ruff linting fixes to crews module 2025-09-19 22:02:22 -04:00
Greyson LaLonde
8e571ea8a7 chore: fix ruff linting and mypy issues in knowledge module
2025-09-19 21:39:15 -04:00
Greyson LaLonde
2cfc4d37b8 chore: apply ruff linting fixes to events module
fix: apply ruff linting to events
2025-09-19 20:10:55 -04:00
Greyson LaLonde
f4abc41235 chore: apply ruff linting fixes to CLI module
fix: apply ruff fixes to CLI and update Okta provider test
2025-09-19 19:55:55 -04:00
Greyson LaLonde
de5d3c3ad1 chore: add pydantic.mypy plugin for better type checking 2025-09-19 19:23:33 -04:00
86 changed files with 1544 additions and 1141 deletions

View File

@@ -138,6 +138,7 @@ ignore = ["E501"] # ignore line too long globally
[tool.mypy]
exclude = ["src/crewai/cli/templates", "tests/"]
plugins = ["pydantic.mypy"]
[tool.bandit]

View File

@@ -40,7 +40,7 @@ def _suppress_pydantic_deprecation_warnings() -> None:
_suppress_pydantic_deprecation_warnings()
__version__ = "0.193.0"
__version__ = "0.193.2"
_telemetry_submitted = False

View File

@@ -1,5 +1,12 @@
from crewai.agents.cache.cache_handler import CacheHandler
from crewai.agents.parser import parse, AgentAction, AgentFinish, OutputParserException
from crewai.agents.parser import AgentAction, AgentFinish, OutputParserError, parse
from crewai.agents.tools_handler import ToolsHandler
__all__ = ["CacheHandler", "parse", "AgentAction", "AgentFinish", "OutputParserException", "ToolsHandler"]
__all__ = [
"AgentAction",
"AgentFinish",
"CacheHandler",
"OutputParserError",
"ToolsHandler",
"parse",
]

View File

@@ -1,7 +1,7 @@
from abc import ABC, abstractmethod
from typing import Any, Dict, List, Optional
from typing import Any
from pydantic import PrivateAttr
from pydantic import ConfigDict, PrivateAttr
from crewai.agent import BaseAgent
from crewai.tools import BaseTool
@@ -16,22 +16,21 @@ class BaseAgentAdapter(BaseAgent, ABC):
"""
adapted_structured_output: bool = False
_agent_config: Optional[Dict[str, Any]] = PrivateAttr(default=None)
_agent_config: dict[str, Any] | None = PrivateAttr(default=None)
model_config = {"arbitrary_types_allowed": True}
model_config = ConfigDict(arbitrary_types_allowed=True)
def __init__(self, agent_config: Optional[Dict[str, Any]] = None, **kwargs: Any):
def __init__(self, agent_config: dict[str, Any] | None = None, **kwargs: Any):
super().__init__(adapted_agent=True, **kwargs)
self._agent_config = agent_config
@abstractmethod
def configure_tools(self, tools: Optional[List[BaseTool]] = None) -> None:
def configure_tools(self, tools: list[BaseTool] | None = None) -> None:
"""Configure and adapt tools for the specific agent implementation.
Args:
tools: Optional list of BaseTool instances to be configured
"""
pass
def configure_structured_output(self, structured_output: Any) -> None:
"""Configure the structured output for the specific agent implementation.
@@ -39,4 +38,3 @@ class BaseAgentAdapter(BaseAgent, ABC):
Args:
structured_output: The structured output to be configured
"""
pass

View File

@@ -1,5 +1,5 @@
from abc import ABC, abstractmethod
from typing import Any, List, Optional
from typing import Any
from crewai.tools.base_tool import BaseTool
@@ -12,23 +12,22 @@ class BaseToolAdapter(ABC):
different frameworks and platforms.
"""
original_tools: List[BaseTool]
converted_tools: List[Any]
original_tools: list[BaseTool]
converted_tools: list[Any]
def __init__(self, tools: Optional[List[BaseTool]] = None):
def __init__(self, tools: list[BaseTool] | None = None):
self.original_tools = tools or []
self.converted_tools = []
@abstractmethod
def configure_tools(self, tools: List[BaseTool]) -> None:
def configure_tools(self, tools: list[BaseTool]) -> None:
"""Configure and convert tools for the specific implementation.
Args:
tools: List of BaseTool instances to be configured and converted
"""
pass
def tools(self) -> List[Any]:
def tools(self) -> list[Any]:
"""Return all converted tools."""
return self.converted_tools

View File

@@ -1,8 +1,9 @@
import uuid
from abc import ABC, abstractmethod
from collections.abc import Callable
from copy import copy as shallow_copy
from hashlib import md5
from typing import Any, Callable, Dict, List, Optional, TypeVar
from typing import Any, TypeVar
from pydantic import (
UUID4,
@@ -25,7 +26,6 @@ from crewai.security.security_config import SecurityConfig
from crewai.tools.base_tool import BaseTool, Tool
from crewai.utilities import I18N, Logger, RPMController
from crewai.utilities.config import process_config
from crewai.utilities.converter import Converter
from crewai.utilities.string_utils import interpolate_only
T = TypeVar("T", bound="BaseAgent")
@@ -81,17 +81,17 @@ class BaseAgent(ABC, BaseModel):
__hash__ = object.__hash__ # type: ignore
_logger: Logger = PrivateAttr(default_factory=lambda: Logger(verbose=False))
_rpm_controller: Optional[RPMController] = PrivateAttr(default=None)
_rpm_controller: RPMController | None = PrivateAttr(default=None)
_request_within_rpm_limit: Any = PrivateAttr(default=None)
_original_role: Optional[str] = PrivateAttr(default=None)
_original_goal: Optional[str] = PrivateAttr(default=None)
_original_backstory: Optional[str] = PrivateAttr(default=None)
_original_role: str | None = PrivateAttr(default=None)
_original_goal: str | None = PrivateAttr(default=None)
_original_backstory: str | None = PrivateAttr(default=None)
_token_process: TokenProcess = PrivateAttr(default_factory=TokenProcess)
id: UUID4 = Field(default_factory=uuid.uuid4, frozen=True)
role: str = Field(description="Role of the agent")
goal: str = Field(description="Objective of the agent")
backstory: str = Field(description="Backstory of the agent")
config: Optional[Dict[str, Any]] = Field(
config: dict[str, Any] | None = Field(
description="Configuration for the agent", default=None, exclude=True
)
cache: bool = Field(
@@ -100,7 +100,7 @@ class BaseAgent(ABC, BaseModel):
verbose: bool = Field(
default=False, description="Verbose mode for the Agent Execution"
)
max_rpm: Optional[int] = Field(
max_rpm: int | None = Field(
default=None,
description="Maximum number of requests per minute for the agent execution to be respected.",
)
@@ -108,7 +108,7 @@ class BaseAgent(ABC, BaseModel):
default=False,
description="Enable agent to delegate and ask questions among each other.",
)
tools: Optional[List[BaseTool]] = Field(
tools: list[BaseTool] | None = Field(
default_factory=list, description="Tools at agents' disposal"
)
max_iter: int = Field(
@@ -122,27 +122,27 @@ class BaseAgent(ABC, BaseModel):
)
crew: Any = Field(default=None, description="Crew to which the agent belongs.")
i18n: I18N = Field(default=I18N(), description="Internationalization settings.")
cache_handler: Optional[InstanceOf[CacheHandler]] = Field(
cache_handler: InstanceOf[CacheHandler] | None = Field(
default=None, description="An instance of the CacheHandler class."
)
tools_handler: InstanceOf[ToolsHandler] = Field(
default_factory=ToolsHandler,
description="An instance of the ToolsHandler class.",
)
tools_results: List[Dict[str, Any]] = Field(
tools_results: list[dict[str, Any]] = Field(
default=[], description="Results of the tools used by the agent."
)
max_tokens: Optional[int] = Field(
max_tokens: int | None = Field(
default=None, description="Maximum number of tokens for the agent's execution."
)
knowledge: Optional[Knowledge] = Field(
knowledge: Knowledge | None = Field(
default=None, description="Knowledge for the agent."
)
knowledge_sources: Optional[List[BaseKnowledgeSource]] = Field(
knowledge_sources: list[BaseKnowledgeSource] | None = Field(
default=None,
description="Knowledge sources for the agent.",
)
knowledge_storage: Optional[Any] = Field(
knowledge_storage: Any | None = Field(
default=None,
description="Custom knowledge storage for the agent.",
)
@@ -150,13 +150,13 @@ class BaseAgent(ABC, BaseModel):
default_factory=SecurityConfig,
description="Security configuration for the agent, including fingerprinting.",
)
callbacks: List[Callable] = Field(
callbacks: list[Callable] = Field(
default=[], description="Callbacks to be used for the agent"
)
adapted_agent: bool = Field(
default=False, description="Whether the agent is adapted"
)
knowledge_config: Optional[KnowledgeConfig] = Field(
knowledge_config: KnowledgeConfig | None = Field(
default=None,
description="Knowledge configuration for the agent such as limits and threshold",
)
@@ -168,7 +168,7 @@ class BaseAgent(ABC, BaseModel):
@field_validator("tools")
@classmethod
def validate_tools(cls, tools: List[Any]) -> List[BaseTool]:
def validate_tools(cls, tools: list[Any]) -> list[BaseTool]:
"""Validate and process the tools provided to the agent.
This method ensures that each tool is either an instance of BaseTool
@@ -221,7 +221,7 @@ class BaseAgent(ABC, BaseModel):
@field_validator("id", mode="before")
@classmethod
def _deny_user_set_id(cls, v: Optional[UUID4]) -> None:
def _deny_user_set_id(cls, v: UUID4 | None) -> None:
if v:
raise PydanticCustomError(
"may_not_set_field", "This field is not to be set by the user.", {}
@@ -252,8 +252,8 @@ class BaseAgent(ABC, BaseModel):
def execute_task(
self,
task: Any,
context: Optional[str] = None,
tools: Optional[List[BaseTool]] = None,
context: str | None = None,
tools: list[BaseTool] | None = None,
) -> str:
pass
@@ -262,9 +262,8 @@ class BaseAgent(ABC, BaseModel):
pass
@abstractmethod
def get_delegation_tools(self, agents: List["BaseAgent"]) -> List[BaseTool]:
def get_delegation_tools(self, agents: list["BaseAgent"]) -> list[BaseTool]:
"""Set the task tools that init BaseAgenTools class."""
pass
def copy(self: T) -> T: # type: ignore # Signature of "copy" incompatible with supertype "BaseModel"
"""Create a deep copy of the Agent."""
@@ -309,7 +308,7 @@ class BaseAgent(ABC, BaseModel):
copied_data = self.model_dump(exclude=exclude)
copied_data = {k: v for k, v in copied_data.items() if v is not None}
copied_agent = type(self)(
return type(self)(
**copied_data,
llm=existing_llm,
tools=self.tools,
@@ -318,9 +317,7 @@ class BaseAgent(ABC, BaseModel):
knowledge_storage=copied_knowledge_storage,
)
return copied_agent
def interpolate_inputs(self, inputs: Dict[str, Any]) -> None:
def interpolate_inputs(self, inputs: dict[str, Any]) -> None:
"""Interpolate inputs into the agent description and backstory."""
if self._original_role is None:
self._original_role = self.role
@@ -362,5 +359,5 @@ class BaseAgent(ABC, BaseModel):
self._rpm_controller = rpm_controller
self.create_agent_executor()
def set_knowledge(self, crew_embedder: Optional[Dict[str, Any]] = None):
def set_knowledge(self, crew_embedder: dict[str, Any] | None = None):
pass

View File

@@ -1,13 +1,13 @@
import time
from typing import TYPE_CHECKING, Dict, List
from typing import TYPE_CHECKING
from crewai.events.event_listener import event_listener
from crewai.memory.entity.entity_memory_item import EntityMemoryItem
from crewai.memory.long_term.long_term_memory_item import LongTermMemoryItem
from crewai.utilities import I18N
from crewai.utilities.converter import ConverterError
from crewai.utilities.evaluators.task_evaluator import TaskEvaluator
from crewai.utilities.printer import Printer
from crewai.events.event_listener import event_listener
if TYPE_CHECKING:
from crewai.agents.agent_builder.base_agent import BaseAgent
@@ -21,7 +21,7 @@ class CrewAgentExecutorMixin:
task: "Task"
iterations: int
max_iter: int
messages: List[Dict[str, str]]
messages: list[dict[str, str]]
_i18n: I18N
_printer: Printer = Printer()
@@ -46,7 +46,6 @@ class CrewAgentExecutorMixin:
)
except Exception as e:
print(f"Failed to add to short term memory: {e}")
pass
def _create_external_memory(self, output) -> None:
"""Create and save a external-term memory item if conditions are met."""
@@ -67,7 +66,6 @@ class CrewAgentExecutorMixin:
)
except Exception as e:
print(f"Failed to add to external memory: {e}")
pass
def _create_long_term_memory(self, output) -> None:
"""Create and save long-term and entity memory items based on evaluation."""
@@ -113,10 +111,8 @@ class CrewAgentExecutorMixin:
self.crew._entity_memory.save(entity_memories)
except AttributeError as e:
print(f"Missing attributes for long term memory: {e}")
pass
except Exception as e:
print(f"Failed to add to long term memory: {e}")
pass
elif (
self.crew
and self.crew._long_term_memory

View File

@@ -12,7 +12,7 @@ from crewai.agents.agent_builder.base_agent_executor_mixin import CrewAgentExecu
from crewai.agents.parser import (
AgentAction,
AgentFinish,
OutputParserException,
OutputParserError,
)
from crewai.agents.tools_handler import ToolsHandler
from crewai.events.event_bus import crewai_event_bus
@@ -228,7 +228,7 @@ class CrewAgentExecutor(CrewAgentExecutorMixin):
self._invoke_step_callback(formatted_answer)
self._append_message(formatted_answer.text)
except OutputParserException as e:
except OutputParserError as e: # noqa: PERF203
formatted_answer = handle_output_parser_exception(
e=e,
messages=self.messages,
@@ -251,17 +251,20 @@ class CrewAgentExecutor(CrewAgentExecutorMixin):
i18n=self._i18n,
)
continue
else:
handle_unknown_error(self._printer, e)
raise e
handle_unknown_error(self._printer, e)
raise e
finally:
self.iterations += 1
# During the invoke loop, formatted_answer alternates between AgentAction
# (when the agent is using tools) and eventually becomes AgentFinish
# (when the agent reaches a final answer). This assertion confirms we've
# (when the agent reaches a final answer). This check confirms we've
# reached a final answer and helps type checking understand this transition.
assert isinstance(formatted_answer, AgentFinish)
if not isinstance(formatted_answer, AgentFinish):
raise RuntimeError(
"Agent execution ended without reaching a final answer. "
f"Got {type(formatted_answer).__name__} instead of AgentFinish."
)
self._show_logs(formatted_answer)
return formatted_answer
@@ -324,9 +327,7 @@ class CrewAgentExecutor(CrewAgentExecutorMixin):
self.agent,
AgentLogsStartedEvent(
agent_role=self.agent.role,
task_description=(
getattr(self.task, "description") if self.task else "Not Found"
),
task_description=(self.task.description if self.task else "Not Found"),
verbose=self.agent.verbose
or (hasattr(self, "crew") and getattr(self.crew, "verbose", False)),
),
@@ -415,8 +416,7 @@ class CrewAgentExecutor(CrewAgentExecutorMixin):
"""
prompt = prompt.replace("{input}", inputs["input"])
prompt = prompt.replace("{tool_names}", inputs["tool_names"])
prompt = prompt.replace("{tools}", inputs["tools"])
return prompt
return prompt.replace("{tools}", inputs["tools"])
def _handle_human_feedback(self, formatted_answer: AgentFinish) -> AgentFinish:
"""Process human feedback.

View File

@@ -7,12 +7,12 @@ AgentAction or AgentFinish objects.
from dataclasses import dataclass
from json_repair import repair_json
from json_repair import repair_json # type: ignore[import-untyped]
from crewai.agents.constants import (
ACTION_INPUT_ONLY_REGEX,
ACTION_INPUT_REGEX,
ACTION_REGEX,
ACTION_INPUT_ONLY_REGEX,
FINAL_ANSWER_ACTION,
MISSING_ACTION_AFTER_THOUGHT_ERROR_MESSAGE,
MISSING_ACTION_INPUT_AFTER_ACTION_ERROR_MESSAGE,
@@ -43,7 +43,7 @@ class AgentFinish:
text: str
class OutputParserException(Exception):
class OutputParserError(Exception):
"""Exception raised when output parsing fails.
Attributes:
@@ -51,7 +51,7 @@ class OutputParserException(Exception):
"""
def __init__(self, error: str) -> None:
"""Initialize OutputParserException.
"""Initialize OutputParserError.
Args:
error: The error message.
@@ -87,7 +87,7 @@ def parse(text: str) -> AgentAction | AgentFinish:
AgentAction or AgentFinish based on the content.
Raises:
OutputParserException: If the text format is invalid.
OutputParserError: If the text format is invalid.
"""
thought = _extract_thought(text)
includes_answer = FINAL_ANSWER_ACTION in text
@@ -104,7 +104,7 @@ def parse(text: str) -> AgentAction | AgentFinish:
final_answer = final_answer[:-3].rstrip()
return AgentFinish(thought=thought, output=final_answer, text=text)
elif action_match:
if action_match:
action = action_match.group(1)
clean_action = _clean_action(action)
@@ -118,19 +118,18 @@ def parse(text: str) -> AgentAction | AgentFinish:
)
if not ACTION_REGEX.search(text):
raise OutputParserException(
raise OutputParserError(
f"{MISSING_ACTION_AFTER_THOUGHT_ERROR_MESSAGE}\n{_I18N.slice('final_answer_format')}",
)
elif not ACTION_INPUT_ONLY_REGEX.search(text):
raise OutputParserException(
if not ACTION_INPUT_ONLY_REGEX.search(text):
raise OutputParserError(
MISSING_ACTION_INPUT_AFTER_ACTION_ERROR_MESSAGE,
)
else:
err_format = _I18N.slice("format_without_tools")
error = f"{err_format}"
raise OutputParserException(
error,
)
err_format = _I18N.slice("format_without_tools")
error = f"{err_format}"
raise OutputParserError(
error,
)
def _extract_thought(text: str) -> str:
@@ -149,8 +148,7 @@ def _extract_thought(text: str) -> str:
return ""
thought = text[:thought_index].strip()
# Remove any triple backticks from the thought string
thought = thought.replace("```", "").strip()
return thought
return thought.replace("```", "").strip()
def _clean_action(text: str) -> str:

View File

@@ -1,8 +1,10 @@
"""Tools handler for managing tool execution and caching."""
import json
from crewai.agents.cache.cache_handler import CacheHandler
from crewai.tools.cache_tools.cache_tools import CacheTools
from crewai.tools.tool_calling import InstructorToolCalling, ToolCalling
from crewai.agents.cache.cache_handler import CacheHandler
class ToolsHandler:
@@ -37,8 +39,16 @@ class ToolsHandler:
"""
self.last_used_tool = calling
if self.cache and should_cache and calling.tool_name != CacheTools().name:
# Convert arguments to string for cache
input_str = ""
if calling.arguments:
if isinstance(calling.arguments, dict):
input_str = json.dumps(calling.arguments)
else:
input_str = str(calling.arguments)
self.cache.add(
tool=calling.tool_name,
input=calling.arguments,
input=input_str,
output=output,
)

View File

@@ -1,5 +1,6 @@
from crewai.cli.authentication.providers.base_provider import BaseProvider
class Auth0Provider(BaseProvider):
def get_authorize_url(self) -> str:
return f"https://{self._get_domain()}/oauth/device/code"
@@ -14,13 +15,20 @@ class Auth0Provider(BaseProvider):
return f"https://{self._get_domain()}/"
def get_audience(self) -> str:
assert self.settings.audience is not None, "Audience is required"
if self.settings.audience is None:
raise ValueError(
"Audience is required. Please set it in the configuration."
)
return self.settings.audience
def get_client_id(self) -> str:
assert self.settings.client_id is not None, "Client ID is required"
if self.settings.client_id is None:
raise ValueError(
"Client ID is required. Please set it in the configuration."
)
return self.settings.client_id
def _get_domain(self) -> str:
assert self.settings.domain is not None, "Domain is required"
if self.settings.domain is None:
raise ValueError("Domain is required. Please set it in the configuration.")
return self.settings.domain

View File

@@ -1,30 +1,26 @@
from abc import ABC, abstractmethod
from crewai.cli.authentication.main import Oauth2Settings
class BaseProvider(ABC):
def __init__(self, settings: Oauth2Settings):
self.settings = settings
@abstractmethod
def get_authorize_url(self) -> str:
...
def get_authorize_url(self) -> str: ...
@abstractmethod
def get_token_url(self) -> str:
...
def get_token_url(self) -> str: ...
@abstractmethod
def get_jwks_url(self) -> str:
...
def get_jwks_url(self) -> str: ...
@abstractmethod
def get_issuer(self) -> str:
...
def get_issuer(self) -> str: ...
@abstractmethod
def get_audience(self) -> str:
...
def get_audience(self) -> str: ...
@abstractmethod
def get_client_id(self) -> str:
...
def get_client_id(self) -> str: ...

View File

@@ -1,5 +1,6 @@
from crewai.cli.authentication.providers.base_provider import BaseProvider
class OktaProvider(BaseProvider):
def get_authorize_url(self) -> str:
return f"https://{self.settings.domain}/oauth2/default/v1/device/authorize"
@@ -14,9 +15,15 @@ class OktaProvider(BaseProvider):
return f"https://{self.settings.domain}/oauth2/default"
def get_audience(self) -> str:
assert self.settings.audience is not None
if self.settings.audience is None:
raise ValueError(
"Audience is required. Please set it in the configuration."
)
return self.settings.audience
def get_client_id(self) -> str:
assert self.settings.client_id is not None
if self.settings.client_id is None:
raise ValueError(
"Client ID is required. Please set it in the configuration."
)
return self.settings.client_id

View File

@@ -1,5 +1,6 @@
from crewai.cli.authentication.providers.base_provider import BaseProvider
class WorkosProvider(BaseProvider):
def get_authorize_url(self) -> str:
return f"https://{self._get_domain()}/oauth2/device_authorization"
@@ -17,9 +18,13 @@ class WorkosProvider(BaseProvider):
return self.settings.audience or ""
def get_client_id(self) -> str:
assert self.settings.client_id is not None, "Client ID is required"
if self.settings.client_id is None:
raise ValueError(
"Client ID is required. Please set it in the configuration."
)
return self.settings.client_id
def _get_domain(self) -> str:
assert self.settings.domain is not None, "Domain is required"
if self.settings.domain is None:
raise ValueError("Domain is required. Please set it in the configuration.")
return self.settings.domain

View File

@@ -17,8 +17,6 @@ def validate_jwt_token(
missing required claims).
"""
decoded_token = None
try:
jwk_client = PyJWKClient(jwks_url)
signing_key = jwk_client.get_signing_key_from_jwt(jwt_token)
@@ -26,7 +24,7 @@ def validate_jwt_token(
_unverified_decoded_token = jwt.decode(
jwt_token, options={"verify_signature": False}
)
decoded_token = jwt.decode(
return jwt.decode(
jwt_token,
signing_key.key,
algorithms=["RS256"],
@@ -40,23 +38,22 @@ def validate_jwt_token(
"require": ["exp", "iat", "iss", "aud", "sub"],
},
)
return decoded_token
except jwt.ExpiredSignatureError:
raise Exception("Token has expired.")
except jwt.InvalidAudienceError:
except jwt.ExpiredSignatureError as e:
raise Exception("Token has expired.") from e
except jwt.InvalidAudienceError as e:
actual_audience = _unverified_decoded_token.get("aud", "[no audience found]")
raise Exception(
f"Invalid token audience. Got: '{actual_audience}'. Expected: '{audience}'"
)
except jwt.InvalidIssuerError:
) from e
except jwt.InvalidIssuerError as e:
actual_issuer = _unverified_decoded_token.get("iss", "[no issuer found]")
raise Exception(
f"Invalid token issuer. Got: '{actual_issuer}'. Expected: '{issuer}'"
)
) from e
except jwt.MissingRequiredClaimError as e:
raise Exception(f"Token is missing required claims: {str(e)}")
raise Exception(f"Token is missing required claims: {e!s}") from e
except jwt.exceptions.PyJWKClientError as e:
raise Exception(f"JWKS or key processing error: {str(e)}")
raise Exception(f"JWKS or key processing error: {e!s}") from e
except jwt.InvalidTokenError as e:
raise Exception(f"Invalid token: {str(e)}")
raise Exception(f"Invalid token: {e!s}") from e

View File

@@ -1,13 +1,13 @@
from importlib.metadata import version as get_version
from typing import Optional
import click
from crewai.cli.config import Settings
from crewai.cli.settings.main import SettingsCommand
from crewai.cli.add_crew_to_flow import add_crew_to_flow
from crewai.cli.config import Settings
from crewai.cli.create_crew import create_crew
from crewai.cli.create_flow import create_flow
from crewai.cli.crew_chat import run_chat
from crewai.cli.settings.main import SettingsCommand
from crewai.memory.storage.kickoff_task_outputs_storage import (
KickoffTaskOutputsSQLiteStorage,
)
@@ -237,13 +237,11 @@ def login():
@crewai.group()
def deploy():
"""Deploy the Crew CLI group."""
pass
@crewai.group()
def tool():
"""Tool Repository related commands."""
pass
@deploy.command(name="create")
@@ -263,7 +261,7 @@ def deploy_list():
@deploy.command(name="push")
@click.option("-u", "--uuid", type=str, help="Crew UUID parameter")
def deploy_push(uuid: Optional[str]):
def deploy_push(uuid: str | None):
"""Deploy the Crew."""
deploy_cmd = DeployCommand()
deploy_cmd.deploy(uuid=uuid)
@@ -271,7 +269,7 @@ def deploy_push(uuid: Optional[str]):
@deploy.command(name="status")
@click.option("-u", "--uuid", type=str, help="Crew UUID parameter")
def deply_status(uuid: Optional[str]):
def deply_status(uuid: str | None):
"""Get the status of a deployment."""
deploy_cmd = DeployCommand()
deploy_cmd.get_crew_status(uuid=uuid)
@@ -279,7 +277,7 @@ def deply_status(uuid: Optional[str]):
@deploy.command(name="logs")
@click.option("-u", "--uuid", type=str, help="Crew UUID parameter")
def deploy_logs(uuid: Optional[str]):
def deploy_logs(uuid: str | None):
"""Get the logs of a deployment."""
deploy_cmd = DeployCommand()
deploy_cmd.get_crew_logs(uuid=uuid)
@@ -287,7 +285,7 @@ def deploy_logs(uuid: Optional[str]):
@deploy.command(name="remove")
@click.option("-u", "--uuid", type=str, help="Crew UUID parameter")
def deploy_remove(uuid: Optional[str]):
def deploy_remove(uuid: str | None):
"""Remove a deployment."""
deploy_cmd = DeployCommand()
deploy_cmd.remove_crew(uuid=uuid)
@@ -327,7 +325,6 @@ def tool_publish(is_public: bool, force: bool):
@crewai.group()
def flow():
"""Flow related commands."""
pass
@flow.command(name="kickoff")
@@ -359,7 +356,7 @@ def chat():
and using the Chat LLM to generate responses.
"""
click.secho(
"\nStarting a conversation with the Crew\n" "Type 'exit' or Ctrl+C to quit.\n",
"\nStarting a conversation with the Crew\nType 'exit' or Ctrl+C to quit.\n",
)
run_chat()
@@ -368,7 +365,6 @@ def chat():
@crewai.group(invoke_without_command=True)
def org():
"""Organization management commands."""
pass
@org.command("list")
@@ -396,7 +392,6 @@ def current():
@crewai.group()
def enterprise():
"""Enterprise Configuration commands."""
pass
@enterprise.command("configure")
@@ -410,7 +405,6 @@ def enterprise_configure(enterprise_url: str):
@crewai.group()
def config():
"""CLI Configuration commands."""
pass
@config.command("list")

View File

@@ -1,20 +1,61 @@
import json
import tempfile
from logging import getLogger
from pathlib import Path
from typing import Optional
from pydantic import BaseModel, Field
from crewai.cli.constants import (
DEFAULT_CREWAI_ENTERPRISE_URL,
CREWAI_ENTERPRISE_DEFAULT_OAUTH2_PROVIDER,
CREWAI_ENTERPRISE_DEFAULT_OAUTH2_AUDIENCE,
CREWAI_ENTERPRISE_DEFAULT_OAUTH2_CLIENT_ID,
CREWAI_ENTERPRISE_DEFAULT_OAUTH2_DOMAIN,
CREWAI_ENTERPRISE_DEFAULT_OAUTH2_PROVIDER,
DEFAULT_CREWAI_ENTERPRISE_URL,
)
from crewai.cli.shared.token_manager import TokenManager
logger = getLogger(__name__)
DEFAULT_CONFIG_PATH = Path.home() / ".config" / "crewai" / "settings.json"
def get_writable_config_path() -> Path | None:
"""
Find a writable location for the config file with fallback options.
Tries in order:
1. Default: ~/.config/crewai/settings.json
2. Temp directory: /tmp/crewai_settings.json (or OS equivalent)
3. Current directory: ./crewai_settings.json
4. In-memory only (returns None)
Returns:
Path object for writable config location, or None if no writable location found
"""
fallback_paths = [
DEFAULT_CONFIG_PATH, # Default location
Path(tempfile.gettempdir()) / "crewai_settings.json", # Temporary directory
Path.cwd() / "crewai_settings.json", # Current working directory
]
for config_path in fallback_paths:
try:
config_path.parent.mkdir(parents=True, exist_ok=True)
test_file = config_path.parent / ".crewai_write_test"
try:
test_file.write_text("test")
test_file.unlink() # Clean up test file
logger.info(f"Using config path: {config_path}")
return config_path
except Exception: # noqa: S112
continue
except Exception: # noqa: S112
continue
return None
# Settings that are related to the user's account
USER_SETTINGS_KEYS = [
"tool_repository_username",
@@ -56,20 +97,20 @@ HIDDEN_SETTINGS_KEYS = [
class Settings(BaseModel):
enterprise_base_url: Optional[str] = Field(
enterprise_base_url: str | None = Field(
default=DEFAULT_CLI_SETTINGS["enterprise_base_url"],
description="Base URL of the CrewAI Enterprise instance",
)
tool_repository_username: Optional[str] = Field(
tool_repository_username: str | None = Field(
None, description="Username for interacting with the Tool Repository"
)
tool_repository_password: Optional[str] = Field(
tool_repository_password: str | None = Field(
None, description="Password for interacting with the Tool Repository"
)
org_name: Optional[str] = Field(
org_name: str | None = Field(
None, description="Name of the currently active organization"
)
org_uuid: Optional[str] = Field(
org_uuid: str | None = Field(
None, description="UUID of the currently active organization"
)
config_path: Path = Field(default=DEFAULT_CONFIG_PATH, frozen=True, exclude=True)
@@ -79,7 +120,7 @@ class Settings(BaseModel):
default=DEFAULT_CLI_SETTINGS["oauth2_provider"],
)
oauth2_audience: Optional[str] = Field(
oauth2_audience: str | None = Field(
description="OAuth2 audience value, typically used to identify the target API or resource.",
default=DEFAULT_CLI_SETTINGS["oauth2_audience"],
)
@@ -94,16 +135,32 @@ class Settings(BaseModel):
default=DEFAULT_CLI_SETTINGS["oauth2_domain"],
)
def __init__(self, config_path: Path = DEFAULT_CONFIG_PATH, **data):
"""Load Settings from config path"""
config_path.parent.mkdir(parents=True, exist_ok=True)
def __init__(self, config_path: Path | None = None, **data):
"""Load Settings from config path with fallback support"""
if config_path is None:
config_path = get_writable_config_path()
# If config_path is None, we're in memory-only mode
if config_path is None:
merged_data = {**data}
# Dummy path for memory-only mode
super().__init__(config_path=Path("/dev/null"), **merged_data)
return
try:
config_path.parent.mkdir(parents=True, exist_ok=True)
except Exception:
merged_data = {**data}
# Dummy path for memory-only mode
super().__init__(config_path=Path("/dev/null"), **merged_data)
return
file_data = {}
if config_path.is_file():
try:
with config_path.open("r") as f:
file_data = json.load(f)
except json.JSONDecodeError:
except Exception:
file_data = {}
merged_data = {**file_data, **data}
@@ -123,15 +180,22 @@ class Settings(BaseModel):
def dump(self) -> None:
"""Save current settings to settings.json"""
if self.config_path.is_file():
with self.config_path.open("r") as f:
existing_data = json.load(f)
else:
existing_data = {}
if str(self.config_path) == "/dev/null":
return
updated_data = {**existing_data, **self.model_dump(exclude_unset=True)}
with self.config_path.open("w") as f:
json.dump(updated_data, f, indent=4)
try:
if self.config_path.is_file():
with self.config_path.open("r") as f:
existing_data = json.load(f)
else:
existing_data = {}
updated_data = {**existing_data, **self.model_dump(exclude_unset=True)}
with self.config_path.open("w") as f:
json.dump(updated_data, f, indent=4)
except Exception: # noqa: S110
pass
def _reset_user_settings(self) -> None:
"""Reset all user settings to default values"""

View File

@@ -16,48 +16,72 @@ from crewai.cli.utils import copy_template, load_env_vars, write_env_file
def create_folder_structure(name, parent_folder=None):
import keyword
import re
name = name.rstrip('/')
name = name.rstrip("/")
if not name.strip():
raise ValueError("Project name cannot be empty or contain only whitespace")
folder_name = name.replace(" ", "_").replace("-", "_").lower()
folder_name = re.sub(r'[^a-zA-Z0-9_]', '', folder_name)
folder_name = re.sub(r"[^a-zA-Z0-9_]", "", folder_name)
# Check if the name starts with invalid characters or is primarily invalid
if re.match(r'^[^a-zA-Z0-9_-]+', name):
raise ValueError(f"Project name '{name}' contains no valid characters for a Python module name")
if re.match(r"^[^a-zA-Z0-9_-]+", name):
raise ValueError(
f"Project name '{name}' contains no valid characters for a Python module name"
)
if not folder_name:
raise ValueError(f"Project name '{name}' contains no valid characters for a Python module name")
raise ValueError(
f"Project name '{name}' contains no valid characters for a Python module name"
)
if folder_name[0].isdigit():
raise ValueError(f"Project name '{name}' would generate folder name '{folder_name}' which cannot start with a digit (invalid Python module name)")
raise ValueError(
f"Project name '{name}' would generate folder name '{folder_name}' which cannot start with a digit (invalid Python module name)"
)
if keyword.iskeyword(folder_name):
raise ValueError(f"Project name '{name}' would generate folder name '{folder_name}' which is a reserved Python keyword")
raise ValueError(
f"Project name '{name}' would generate folder name '{folder_name}' which is a reserved Python keyword"
)
if not folder_name.isidentifier():
raise ValueError(f"Project name '{name}' would generate invalid Python module name '{folder_name}'")
raise ValueError(
f"Project name '{name}' would generate invalid Python module name '{folder_name}'"
)
class_name = name.replace("_", " ").replace("-", " ").title().replace(" ", "")
class_name = re.sub(r'[^a-zA-Z0-9_]', '', class_name)
class_name = re.sub(r"[^a-zA-Z0-9_]", "", class_name)
if not class_name:
raise ValueError(f"Project name '{name}' contains no valid characters for a Python class name")
raise ValueError(
f"Project name '{name}' contains no valid characters for a Python class name"
)
if class_name[0].isdigit():
raise ValueError(f"Project name '{name}' would generate class name '{class_name}' which cannot start with a digit")
raise ValueError(
f"Project name '{name}' would generate class name '{class_name}' which cannot start with a digit"
)
# Check if the original name (before title casing) is a keyword
original_name_clean = re.sub(r'[^a-zA-Z0-9_]', '', name.replace("_", "").replace("-", "").lower())
if keyword.iskeyword(original_name_clean) or keyword.iskeyword(class_name) or class_name in ('True', 'False', 'None'):
raise ValueError(f"Project name '{name}' would generate class name '{class_name}' which is a reserved Python keyword")
original_name_clean = re.sub(
r"[^a-zA-Z0-9_]", "", name.replace("_", "").replace("-", "").lower()
)
if (
keyword.iskeyword(original_name_clean)
or keyword.iskeyword(class_name)
or class_name in ("True", "False", "None")
):
raise ValueError(
f"Project name '{name}' would generate class name '{class_name}' which is a reserved Python keyword"
)
if not class_name.isidentifier():
raise ValueError(f"Project name '{name}' would generate invalid Python class name '{class_name}'")
raise ValueError(
f"Project name '{name}' would generate invalid Python class name '{class_name}'"
)
if parent_folder:
folder_path = Path(parent_folder) / folder_name
@@ -172,7 +196,7 @@ def create_crew(name, provider=None, skip_provider=False, parent_folder=None):
)
# Check if the selected provider has predefined models
if selected_provider in MODELS and MODELS[selected_provider]:
if MODELS.get(selected_provider):
while True:
selected_model = select_model(selected_provider, provider_models)
if selected_model is None: # User typed 'q'

View File

@@ -5,7 +5,7 @@ import sys
import threading
import time
from pathlib import Path
from typing import Any, Dict, List, Optional, Set, Tuple
from typing import Any
import click
import tomli
@@ -116,7 +116,7 @@ def show_loading(event: threading.Event):
print()
def initialize_chat_llm(crew: Crew) -> Optional[LLM | BaseLLM]:
def initialize_chat_llm(crew: Crew) -> LLM | BaseLLM | None:
"""Initializes the chat LLM and handles exceptions."""
try:
return create_llm(crew.chat_llm)
@@ -157,7 +157,7 @@ def build_system_message(crew_chat_inputs: ChatInputs) -> str:
)
def create_tool_function(crew: Crew, messages: List[Dict[str, str]]) -> Any:
def create_tool_function(crew: Crew, messages: list[dict[str, str]]) -> Any:
"""Creates a wrapper function for running the crew tool with messages."""
def run_crew_tool_with_messages(**kwargs):
@@ -193,7 +193,7 @@ def chat_loop(chat_llm, messages, crew_tool_schema, available_functions):
user_input, chat_llm, messages, crew_tool_schema, available_functions
)
except KeyboardInterrupt:
except KeyboardInterrupt: # noqa: PERF203
click.echo("\nExiting chat. Goodbye!")
break
except Exception as e:
@@ -221,9 +221,9 @@ def get_user_input() -> str:
def handle_user_input(
user_input: str,
chat_llm: LLM,
messages: List[Dict[str, str]],
crew_tool_schema: Dict[str, Any],
available_functions: Dict[str, Any],
messages: list[dict[str, str]],
crew_tool_schema: dict[str, Any],
available_functions: dict[str, Any],
) -> None:
if user_input.strip().lower() == "exit":
click.echo("Exiting chat. Goodbye!")
@@ -281,7 +281,7 @@ def generate_crew_tool_schema(crew_inputs: ChatInputs) -> dict:
}
def run_crew_tool(crew: Crew, messages: List[Dict[str, str]], **kwargs):
def run_crew_tool(crew: Crew, messages: list[dict[str, str]], **kwargs):
"""
Runs the crew using crew.kickoff(inputs=kwargs) and returns the output.
@@ -304,9 +304,8 @@ def run_crew_tool(crew: Crew, messages: List[Dict[str, str]], **kwargs):
crew_output = crew.kickoff(inputs=kwargs)
# Convert CrewOutput to a string to send back to the user
result = str(crew_output)
return str(crew_output)
return result
except Exception as e:
# Exit the chat and show the error message
click.secho("An error occurred while running the crew:", fg="red")
@@ -314,7 +313,7 @@ def run_crew_tool(crew: Crew, messages: List[Dict[str, str]], **kwargs):
sys.exit(1)
def load_crew_and_name() -> Tuple[Crew, str]:
def load_crew_and_name() -> tuple[Crew, str]:
"""
Loads the crew by importing the crew class from the user's project.
@@ -351,15 +350,17 @@ def load_crew_and_name() -> Tuple[Crew, str]:
try:
crew_module = __import__(crew_module_name, fromlist=[crew_class_name])
except ImportError as e:
raise ImportError(f"Failed to import crew module {crew_module_name}: {e}")
raise ImportError(
f"Failed to import crew module {crew_module_name}: {e}"
) from e
# Get the crew class from the module
try:
crew_class = getattr(crew_module, crew_class_name)
except AttributeError:
except AttributeError as e:
raise AttributeError(
f"Crew class {crew_class_name} not found in module {crew_module_name}"
)
) from e
# Instantiate the crew
crew_instance = crew_class().crew()
@@ -395,7 +396,7 @@ def generate_crew_chat_inputs(crew: Crew, crew_name: str, chat_llm) -> ChatInput
)
def fetch_required_inputs(crew: Crew) -> Set[str]:
def fetch_required_inputs(crew: Crew) -> set[str]:
"""
Extracts placeholders from the crew's tasks and agents.
@@ -405,8 +406,8 @@ def fetch_required_inputs(crew: Crew) -> Set[str]:
Returns:
Set[str]: A set of placeholder names.
"""
placeholder_pattern = re.compile(r"\{(.+?)\}")
required_inputs: Set[str] = set()
placeholder_pattern = re.compile(r"\{(.+?)}")
required_inputs: set[str] = set()
# Scan tasks
for task in crew.tasks:
@@ -435,7 +436,7 @@ def generate_input_description_with_ai(input_name: str, crew: Crew, chat_llm) ->
"""
# Gather context from tasks and agents where the input is used
context_texts = []
placeholder_pattern = re.compile(r"\{(.+?)\}")
placeholder_pattern = re.compile(r"\{(.+?)}")
for task in crew.tasks:
if (
@@ -479,9 +480,7 @@ def generate_input_description_with_ai(input_name: str, crew: Crew, chat_llm) ->
f"{context}"
)
response = chat_llm.call(messages=[{"role": "user", "content": prompt}])
description = response.strip()
return description
return response.strip()
def generate_crew_description_with_ai(crew: Crew, chat_llm) -> str:
@@ -497,7 +496,7 @@ def generate_crew_description_with_ai(crew: Crew, chat_llm) -> str:
"""
# Gather context from tasks and agents
context_texts = []
placeholder_pattern = re.compile(r"\{(.+?)\}")
placeholder_pattern = re.compile(r"\{(.+?)}")
for task in crew.tasks:
# Replace placeholders with input names
@@ -531,6 +530,4 @@ def generate_crew_description_with_ai(crew: Crew, chat_llm) -> str:
f"{context}"
)
response = chat_llm.call(messages=[{"role": "user", "content": prompt}])
crew_description = response.strip()
return crew_description
return response.strip()

View File

@@ -14,11 +14,15 @@ class Repository:
self.fetch()
def is_git_installed(self) -> bool:
@staticmethod
def is_git_installed() -> bool:
"""Check if Git is installed and available in the system."""
try:
subprocess.run(
["git", "--version"], capture_output=True, check=True, text=True
["git", "--version"], # noqa: S607
capture_output=True,
check=True,
text=True,
)
return True
except (subprocess.CalledProcessError, FileNotFoundError):
@@ -26,22 +30,26 @@ class Repository:
def fetch(self) -> None:
"""Fetch latest updates from the remote."""
subprocess.run(["git", "fetch"], cwd=self.path, check=True)
subprocess.run(["git", "fetch"], cwd=self.path, check=True) # noqa: S607
def status(self) -> str:
"""Get the git status in porcelain format."""
return subprocess.check_output(
["git", "status", "--branch", "--porcelain"],
["git", "status", "--branch", "--porcelain"], # noqa: S607
cwd=self.path,
encoding="utf-8",
).strip()
@lru_cache(maxsize=None)
@lru_cache(maxsize=None) # noqa: B019
def is_git_repo(self) -> bool:
"""Check if the current directory is a git repository."""
"""Check if the current directory is a git repository.
Notes:
- TODO: This method is cached to avoid redundant checks, but using lru_cache on methods can lead to memory leaks
"""
try:
subprocess.check_output(
["git", "rev-parse", "--is-inside-work-tree"],
["git", "rev-parse", "--is-inside-work-tree"], # noqa: S607
cwd=self.path,
encoding="utf-8",
)
@@ -64,14 +72,13 @@ class Repository:
"""Return True if the Git repository is fully synced with the remote, False otherwise."""
if self.has_uncommitted_changes() or self.is_ahead_or_behind():
return False
else:
return True
return True
def origin_url(self) -> str | None:
"""Get the Git repository's remote URL."""
try:
result = subprocess.run(
["git", "remote", "get-url", "origin"],
["git", "remote", "get-url", "origin"], # noqa: S607
cwd=self.path,
capture_output=True,
text=True,

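The TODO in the diff above flags a known pitfall: functools.lru_cache on an instance method stores self in a cache that outlives the instance, so objects are never collected. A common workaround, sketched here as an assumption rather than what the codebase will adopt, is to cache on the instance itself:

import subprocess

class Repository:
    def __init__(self, path: str) -> None:
        self.path = path
        self._is_git_repo: bool | None = None  # cache dies with the instance

    def is_git_repo(self) -> bool:
        # Per-instance memoization instead of a module-level lru_cache.
        if self._is_git_repo is None:
            try:
                subprocess.check_output(
                    ["git", "rev-parse", "--is-inside-work-tree"],
                    cwd=self.path,
                    encoding="utf-8",
                )
                self._is_git_repo = True
            except (subprocess.CalledProcessError, FileNotFoundError):
                self._is_git_repo = False
        return self._is_git_repo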
View File

@@ -12,8 +12,8 @@ def install_crew(proxy_options: list[str]) -> None:
Install the crew by running the UV command to lock and install.
"""
try:
command = ["uv", "sync"] + proxy_options
subprocess.run(command, check=True, capture_output=False, text=True)
command = ["uv", "sync", *proxy_options]
subprocess.run(command, check=True, capture_output=False, text=True) # noqa: S603
except subprocess.CalledProcessError as e:
click.echo(f"An error occurred while running the crew: {e}", err=True)

View File

@@ -1,11 +1,10 @@
from typing import List, Optional
from urllib.parse import urljoin
import requests
from crewai.cli.config import Settings
from crewai.cli.version import get_crewai_version
from crewai.cli.constants import DEFAULT_CREWAI_ENTERPRISE_URL
from crewai.cli.version import get_crewai_version
class PlusAPI:
@@ -56,9 +55,9 @@ class PlusAPI:
handle: str,
is_public: bool,
version: str,
description: Optional[str],
description: str | None,
encoded_file: str,
available_exports: Optional[List[str]] = None,
available_exports: list[str] | None = None,
):
params = {
"handle": handle,

View File

@@ -1,10 +1,10 @@
import os
import certifi
import json
import os
import time
from collections import defaultdict
from pathlib import Path
import certifi
import click
import requests
@@ -25,7 +25,7 @@ def select_choice(prompt_message, choices):
provider_models = get_provider_data()
if not provider_models:
return
return None
click.secho(prompt_message, fg="cyan")
for idx, choice in enumerate(choices, start=1):
click.secho(f"{idx}. {choice}", fg="cyan")
@@ -67,7 +67,7 @@ def select_provider(provider_models):
all_providers = sorted(set(predefined_providers + list(provider_models.keys())))
provider = select_choice(
"Select a provider to set up:", predefined_providers + ["other"]
"Select a provider to set up:", [*predefined_providers, "other"]
)
if provider is None: # User typed 'q'
return None
@@ -102,10 +102,9 @@ def select_model(provider, provider_models):
click.secho(f"No models available for provider '{provider}'.", fg="red")
return None
selected_model = select_choice(
return select_choice(
f"Select a model to use for {provider.capitalize()}:", available_models
)
return selected_model
def load_provider_data(cache_file, cache_expiry):
@@ -165,7 +164,7 @@ def fetch_provider_data(cache_file):
Returns:
- dict or None: The fetched provider data or None if the operation fails.
"""
ssl_config = os.environ['SSL_CERT_FILE'] = certifi.where()
ssl_config = os.environ["SSL_CERT_FILE"] = certifi.where()
try:
response = requests.get(JSON_URL, stream=True, timeout=60, verify=ssl_config)

View File

@@ -1,6 +1,5 @@
import subprocess
from enum import Enum
from typing import List, Optional
import click
from packaging import version
@@ -57,7 +56,7 @@ def execute_command(crew_type: CrewType) -> None:
command = ["uv", "run", "kickoff" if crew_type == CrewType.FLOW else "run_crew"]
try:
subprocess.run(command, capture_output=False, text=True, check=True)
subprocess.run(command, capture_output=False, text=True, check=True) # noqa: S603
except subprocess.CalledProcessError as e:
handle_error(e, crew_type)

View File

@@ -3,7 +3,7 @@ import os
import sys
from datetime import datetime
from pathlib import Path
from typing import Optional
from cryptography.fernet import Fernet
@@ -49,7 +49,7 @@ class TokenManager:
encrypted_data = self.fernet.encrypt(json.dumps(data).encode())
self.save_secure_file(self.file_path, encrypted_data)
def get_token(self) -> Optional[str]:
def get_token(self) -> str | None:
"""
Get the access token if it is valid and not expired.
@@ -113,7 +113,7 @@ class TokenManager:
# Set appropriate permissions (read/write for owner only)
os.chmod(file_path, 0o600)
def read_secure_file(self, filename: str) -> Optional[bytes]:
def read_secure_file(self, filename: str) -> bytes | None:
"""
Read the content of a secure file.

View File

@@ -5,7 +5,7 @@ description = "{{name}} using crewAI"
authors = [{ name = "Your Name", email = "you@example.com" }]
requires-python = ">=3.10,<3.14"
dependencies = [
"crewai[tools]>=0.193.0,<1.0.0"
"crewai[tools]>=0.193.2,<1.0.0"
]
[project.scripts]

View File

@@ -5,7 +5,7 @@ description = "{{name}} using crewAI"
authors = [{ name = "Your Name", email = "you@example.com" }]
requires-python = ">=3.10,<3.14"
dependencies = [
"crewai[tools]>=0.193.0,<1.0.0",
"crewai[tools]>=0.193.2,<1.0.0",
]
[project.scripts]

View File

@@ -5,7 +5,7 @@ description = "Power up your crews with {{folder_name}}"
readme = "README.md"
requires-python = ">=3.10,<3.14"
dependencies = [
"crewai[tools]>=0.193.0"
"crewai[tools]>=0.193.2"
]
[tool.crewai]

View File

@@ -5,7 +5,7 @@ import sys
from functools import reduce
from inspect import getmro, isclass, isfunction, ismethod
from pathlib import Path
from typing import Any, Dict, List, get_type_hints
from typing import Any, get_type_hints
import click
import tomli
@@ -41,8 +41,7 @@ def copy_template(src, dst, name, class_name, folder_name):
def read_toml(file_path: str = "pyproject.toml"):
"""Read the content of a TOML file and return it as a dictionary."""
with open(file_path, "rb") as f:
toml_dict = tomli.load(f)
return toml_dict
return tomli.load(f)
def parse_toml(content):
@@ -77,7 +76,7 @@ def get_project_description(
def _get_project_attribute(
pyproject_path: str, keys: List[str], require: bool
pyproject_path: str, keys: list[str], require: bool
) -> Any | None:
"""Get an attribute from the pyproject.toml file."""
attribute = None
@@ -96,16 +95,20 @@ def _get_project_attribute(
except FileNotFoundError:
console.print(f"Error: {pyproject_path} not found.", style="bold red")
except KeyError:
console.print(f"Error: {pyproject_path} is not a valid pyproject.toml file.", style="bold red")
except tomllib.TOMLDecodeError if sys.version_info >= (3, 11) else Exception as e: # type: ignore
console.print(
f"Error: {pyproject_path} is not a valid TOML file."
if sys.version_info >= (3, 11)
else f"Error reading the pyproject.toml file: {e}",
f"Error: {pyproject_path} is not a valid pyproject.toml file.",
style="bold red",
)
except Exception as e:
console.print(f"Error reading the pyproject.toml file: {e}", style="bold red")
# Handle TOML decode errors for Python 3.11+
if sys.version_info >= (3, 11) and isinstance(e, tomllib.TOMLDecodeError): # type: ignore
console.print(
f"Error: {pyproject_path} is not a valid TOML file.", style="bold red"
)
else:
console.print(
f"Error reading the pyproject.toml file: {e}", style="bold red"
)
if require and not attribute:
console.print(
@@ -117,7 +120,7 @@ def _get_project_attribute(
return attribute
def _get_nested_value(data: Dict[str, Any], keys: List[str]) -> Any:
def _get_nested_value(data: dict[str, Any], keys: list[str]) -> Any:
return reduce(dict.__getitem__, keys, data)
@@ -296,7 +299,10 @@ def get_crews(crew_path: str = "crew.py", require: bool = False) -> list[Crew]:
try:
crew_instances.extend(fetch_crews(module_attr))
except Exception as e:
console.print(f"Error processing attribute {attr_name}: {e}", style="bold red")
console.print(
f"Error processing attribute {attr_name}: {e}",
style="bold red",
)
continue
# If we found crew instances, break out of the loop
@@ -304,12 +310,15 @@ def get_crews(crew_path: str = "crew.py", require: bool = False) -> list[Crew]:
break
except Exception as exec_error:
console.print(f"Error executing module: {exec_error}", style="bold red")
console.print(
f"Error executing module: {exec_error}",
style="bold red",
)
except (ImportError, AttributeError) as e:
if require:
console.print(
f"Error importing crew from {crew_path}: {str(e)}",
f"Error importing crew from {crew_path}: {e!s}",
style="bold red",
)
continue
@@ -325,9 +334,9 @@ def get_crews(crew_path: str = "crew.py", require: bool = False) -> list[Crew]:
except Exception as e:
if require:
console.print(
f"Unexpected error while loading crew: {str(e)}", style="bold red"
f"Unexpected error while loading crew: {e!s}", style="bold red"
)
raise SystemExit
raise SystemExit from e
return crew_instances
@@ -348,8 +357,7 @@ def get_crew_instance(module_attr) -> Crew | None:
if isinstance(module_attr, Crew):
return module_attr
else:
return None
return None
def fetch_crews(module_attr) -> list[Crew]:
@@ -402,11 +410,11 @@ def extract_available_exports(dir_path: str = "src"):
return available_exports
except Exception as e:
console.print(f"[red]Error: Could not extract tool classes: {str(e)}[/red]")
console.print(f"[red]Error: Could not extract tool classes: {e!s}[/red]")
console.print(
"Please ensure your project contains valid tools (classes inheriting from BaseTool or functions with @tool decorator)."
)
raise SystemExit(1)
raise SystemExit(1) from e
def _load_tools_from_init(init_file: Path) -> list[dict[str, Any]]:
@@ -440,8 +448,8 @@ def _load_tools_from_init(init_file: Path) -> list[dict[str, Any]]:
]
except Exception as e:
console.print(f"[red]Warning: Could not load {init_file}: {str(e)}[/red]")
raise SystemExit(1)
console.print(f"[red]Warning: Could not load {init_file}: {e!s}[/red]")
raise SystemExit(1) from e
finally:
sys.modules.pop("temp_module", None)

View File

@@ -1,5 +1,5 @@
import json
from typing import Any, Dict, Optional
from typing import Any
from pydantic import BaseModel, Field
@@ -12,19 +12,21 @@ class CrewOutput(BaseModel):
"""Class that represents the result of a crew."""
raw: str = Field(description="Raw output of crew", default="")
pydantic: Optional[BaseModel] = Field(
pydantic: BaseModel | None = Field(
description="Pydantic output of Crew", default=None
)
json_dict: Optional[Dict[str, Any]] = Field(
json_dict: dict[str, Any] | None = Field(
description="JSON dict output of Crew", default=None
)
tasks_output: list[TaskOutput] = Field(
description="Output of each task", default=[]
)
token_usage: UsageMetrics = Field(description="Processed token summary", default={})
token_usage: UsageMetrics = Field(
description="Processed token summary", default_factory=UsageMetrics
)
@property
def json(self) -> Optional[str]:
def json(self) -> str | None: # type: ignore[override]
if self.tasks_output[-1].output_format != OutputFormat.JSON:
raise ValueError(
"No JSON output found in the final task. Please make sure to set the output_json property in the final task in your crew."
@@ -32,7 +34,7 @@ class CrewOutput(BaseModel):
return json.dumps(self.json_dict)
def to_dict(self) -> Dict[str, Any]:
def to_dict(self) -> dict[str, Any]:
"""Convert json_output and pydantic_output to a dictionary."""
output_dict = {}
if self.json_dict:
@@ -44,10 +46,9 @@ class CrewOutput(BaseModel):
def __getitem__(self, key):
if self.pydantic and hasattr(self.pydantic, key):
return getattr(self.pydantic, key)
elif self.json_dict and key in self.json_dict:
if self.json_dict and key in self.json_dict:
return self.json_dict[key]
else:
raise KeyError(f"Key '{key}' not found in CrewOutput.")
raise KeyError(f"Key '{key}' not found in CrewOutput.")
def __str__(self):
if self.pydantic:
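
The `token_usage` change above fixes a real typing problem: pydantic does not validate field defaults unless `validate_default=True`, so `default={}` would leave a plain dict where a `UsageMetrics` instance is expected, while `default_factory=UsageMetrics` builds a fresh, correctly typed object per output. A sketch with a stand-in `UsageMetrics` whose field is assumed:

from pydantic import BaseModel, Field

class UsageMetrics(BaseModel):  # stand-in; the real model's fields are assumed
    total_tokens: int = 0

class Output(BaseModel):
    # with default={} this attribute would be a raw dict at runtime;
    # default_factory constructs a real UsageMetrics for each instance
    token_usage: UsageMetrics = Field(default_factory=UsageMetrics)

print(Output().token_usage.total_tokens)  # 0
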

View File

@@ -1,5 +1,6 @@
from datetime import datetime, timezone
from typing import Any, Dict, Optional
from typing import Any
from pydantic import BaseModel, Field
from crewai.utilities.serialization import to_serializable
@@ -10,11 +11,11 @@ class BaseEvent(BaseModel):
timestamp: datetime = Field(default_factory=lambda: datetime.now(timezone.utc))
type: str
source_fingerprint: Optional[str] = None # UUID string of the source entity
source_type: Optional[str] = (
source_fingerprint: str | None = None # UUID string of the source entity
source_type: str | None = (
None # "agent", "task", "crew", "memory", "entity_memory", "short_term_memory", "long_term_memory", "external_memory"
)
fingerprint_metadata: Optional[Dict[str, Any]] = None # Any relevant metadata
fingerprint_metadata: dict[str, Any] | None = None # Any relevant metadata
def to_json(self, exclude: set[str] | None = None):
"""
@@ -28,13 +29,13 @@ class BaseEvent(BaseModel):
"""
return to_serializable(self, exclude=exclude)
def _set_task_params(self, data: Dict[str, Any]):
def _set_task_params(self, data: dict[str, Any]):
if "from_task" in data and (task := data["from_task"]):
self.task_id = task.id
self.task_name = task.name or task.description
self.from_task = None
def _set_agent_params(self, data: Dict[str, Any]):
def _set_agent_params(self, data: dict[str, Any]):
task = data.get("from_task", None)
agent = task.agent if task else data.get("from_agent", None)

View File

@@ -1,8 +1,9 @@
from __future__ import annotations
import threading
from collections.abc import Callable
from contextlib import contextmanager
from typing import Any, Callable, Dict, List, Type, TypeVar, cast
from typing import Any, TypeVar, cast
from blinker import Signal
@@ -25,17 +26,17 @@ class CrewAIEventsBus:
if cls._instance is None:
with cls._lock:
if cls._instance is None: # prevent race condition
cls._instance = super(CrewAIEventsBus, cls).__new__(cls)
cls._instance = super().__new__(cls)
cls._instance._initialize()
return cls._instance
def _initialize(self) -> None:
"""Initialize the event bus internal state"""
self._signal = Signal("crewai_event_bus")
self._handlers: Dict[Type[BaseEvent], List[Callable]] = {}
self._handlers: dict[type[BaseEvent], list[Callable]] = {}
def on(
self, event_type: Type[EventT]
self, event_type: type[EventT]
) -> Callable[[Callable[[Any, EventT], None]], Callable[[Any, EventT], None]]:
"""
Decorator to register an event handler for a specific event type.
@@ -61,6 +62,18 @@ class CrewAIEventsBus:
return decorator
@staticmethod
def _call_handler(
handler: Callable, source: Any, event: BaseEvent, event_type: type
) -> None:
"""Call a single handler with error handling."""
try:
handler(source, event)
except Exception as e:
print(
f"[EventBus Error] Handler '{handler.__name__}' failed for event '{event_type.__name__}': {e}"
)
def emit(self, source: Any, event: BaseEvent) -> None:
"""
Emit an event to all registered handlers
@@ -72,17 +85,12 @@ class CrewAIEventsBus:
for event_type, handlers in self._handlers.items():
if isinstance(event, event_type):
for handler in handlers:
try:
handler(source, event)
except Exception as e:
print(
f"[EventBus Error] Handler '{handler.__name__}' failed for event '{event_type.__name__}': {e}"
)
self._call_handler(handler, source, event, event_type)
self._signal.send(source, event=event)
def register_handler(
self, event_type: Type[EventTypes], handler: Callable[[Any, EventTypes], None]
self, event_type: type[EventTypes], handler: Callable[[Any, EventTypes], None]
) -> None:
"""Register an event handler for a specific event type"""
if event_type not in self._handlers:
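
Extracting `_call_handler` flattens the `emit` loop without changing semantics: each handler still gets its own try/except, so one failing handler cannot starve the rest. A stripped-down sketch of the pattern; the real bus also fans out through a blinker `Signal`:

from collections.abc import Callable
from typing import Any

class MiniBus:
    def __init__(self) -> None:
        self._handlers: dict[type, list[Callable]] = {}

    def register(self, event_type: type, handler: Callable) -> None:
        self._handlers.setdefault(event_type, []).append(handler)

    @staticmethod
    def _call_handler(handler: Callable, source: Any, event: Any) -> None:
        try:
            handler(source, event)
        except Exception as e:  # isolate failures per handler
            print(f"[EventBus Error] Handler '{handler.__name__}' failed: {e}")

    def emit(self, source: Any, event: Any) -> None:
        for event_type, handlers in self._handlers.items():
            if isinstance(event, event_type):
                for handler in handlers:
                    self._call_handler(handler, source, event)

bus = MiniBus()
bus.register(str, lambda src, ev: print(f"{src}: {ev}"))
bus.emit("demo", "hello")  # demo: hello
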

View File

@@ -1,15 +1,30 @@
from __future__ import annotations
from io import StringIO
from typing import Any, Dict
from typing import Any
from pydantic import Field, PrivateAttr
from crewai.llm import LLM
from crewai.task import Task
from crewai.telemetry.telemetry import Telemetry
from crewai.utilities import Logger
from crewai.utilities.constants import EMITTER_COLOR
from crewai.events.base_event_listener import BaseEventListener
from crewai.events.types.agent_events import (
AgentExecutionCompletedEvent,
AgentExecutionStartedEvent,
LiteAgentExecutionCompletedEvent,
LiteAgentExecutionErrorEvent,
LiteAgentExecutionStartedEvent,
)
from crewai.events.types.crew_events import (
CrewKickoffCompletedEvent,
CrewKickoffFailedEvent,
CrewKickoffStartedEvent,
CrewTestCompletedEvent,
CrewTestFailedEvent,
CrewTestResultEvent,
CrewTestStartedEvent,
CrewTrainCompletedEvent,
CrewTrainFailedEvent,
CrewTrainStartedEvent,
)
from crewai.events.types.knowledge_events import (
KnowledgeQueryCompletedEvent,
KnowledgeQueryFailedEvent,
@@ -25,34 +40,21 @@ from crewai.events.types.llm_events import (
LLMStreamChunkEvent,
)
from crewai.events.types.llm_guardrail_events import (
LLMGuardrailStartedEvent,
LLMGuardrailCompletedEvent,
)
from crewai.events.utils.console_formatter import ConsoleFormatter
from crewai.events.types.agent_events import (
AgentExecutionCompletedEvent,
AgentExecutionStartedEvent,
LiteAgentExecutionCompletedEvent,
LiteAgentExecutionErrorEvent,
LiteAgentExecutionStartedEvent,
LLMGuardrailStartedEvent,
)
from crewai.events.types.logging_events import (
AgentLogsStartedEvent,
AgentLogsExecutionEvent,
AgentLogsStartedEvent,
)
from crewai.events.types.crew_events import (
CrewKickoffCompletedEvent,
CrewKickoffFailedEvent,
CrewKickoffStartedEvent,
CrewTestCompletedEvent,
CrewTestFailedEvent,
CrewTestResultEvent,
CrewTestStartedEvent,
CrewTrainCompletedEvent,
CrewTrainFailedEvent,
CrewTrainStartedEvent,
)
from crewai.events.utils.console_formatter import ConsoleFormatter
from crewai.llm import LLM
from crewai.task import Task
from crewai.telemetry.telemetry import Telemetry
from crewai.utilities import Logger
from crewai.utilities.constants import EMITTER_COLOR
from .listeners.memory_listener import MemoryListener
from .types.flow_events import (
FlowCreatedEvent,
FlowFinishedEvent,
@@ -61,26 +63,24 @@ from .types.flow_events import (
MethodExecutionFinishedEvent,
MethodExecutionStartedEvent,
)
from .types.reasoning_events import (
AgentReasoningCompletedEvent,
AgentReasoningFailedEvent,
AgentReasoningStartedEvent,
)
from .types.task_events import TaskCompletedEvent, TaskFailedEvent, TaskStartedEvent
from .types.tool_usage_events import (
ToolUsageErrorEvent,
ToolUsageFinishedEvent,
ToolUsageStartedEvent,
)
from .types.reasoning_events import (
AgentReasoningStartedEvent,
AgentReasoningCompletedEvent,
AgentReasoningFailedEvent,
)
from .listeners.memory_listener import MemoryListener
class EventListener(BaseEventListener):
_instance = None
_telemetry: Telemetry = PrivateAttr(default_factory=lambda: Telemetry())
logger = Logger(verbose=True, default_color=EMITTER_COLOR)
execution_spans: Dict[Task, Any] = Field(default_factory=dict)
execution_spans: dict[Task, Any] = Field(default_factory=dict)
next_chunk = 0
text_stream = StringIO()
knowledge_retrieval_in_progress = False

View File

@@ -1,11 +1,10 @@
from typing import Union
from crewai.events.types.agent_events import (
AgentExecutionCompletedEvent,
AgentExecutionErrorEvent,
AgentExecutionStartedEvent,
LiteAgentExecutionCompletedEvent,
)
from .types.crew_events import (
CrewKickoffCompletedEvent,
CrewKickoffFailedEvent,
@@ -24,6 +23,14 @@ from .types.flow_events import (
MethodExecutionFinishedEvent,
MethodExecutionStartedEvent,
)
from .types.knowledge_events import (
KnowledgeQueryCompletedEvent,
KnowledgeQueryFailedEvent,
KnowledgeQueryStartedEvent,
KnowledgeRetrievalCompletedEvent,
KnowledgeRetrievalStartedEvent,
KnowledgeSearchQueryFailedEvent,
)
from .types.llm_events import (
LLMCallCompletedEvent,
LLMCallFailedEvent,
@@ -34,6 +41,21 @@ from .types.llm_guardrail_events import (
LLMGuardrailCompletedEvent,
LLMGuardrailStartedEvent,
)
from .types.memory_events import (
MemoryQueryCompletedEvent,
MemoryQueryFailedEvent,
MemoryQueryStartedEvent,
MemoryRetrievalCompletedEvent,
MemoryRetrievalStartedEvent,
MemorySaveCompletedEvent,
MemorySaveFailedEvent,
MemorySaveStartedEvent,
)
from .types.reasoning_events import (
AgentReasoningCompletedEvent,
AgentReasoningFailedEvent,
AgentReasoningStartedEvent,
)
from .types.task_events import (
TaskCompletedEvent,
TaskFailedEvent,
@@ -44,77 +66,53 @@ from .types.tool_usage_events import (
ToolUsageFinishedEvent,
ToolUsageStartedEvent,
)
from .types.reasoning_events import (
AgentReasoningStartedEvent,
AgentReasoningCompletedEvent,
AgentReasoningFailedEvent,
)
from .types.knowledge_events import (
KnowledgeRetrievalStartedEvent,
KnowledgeRetrievalCompletedEvent,
KnowledgeQueryStartedEvent,
KnowledgeQueryCompletedEvent,
KnowledgeQueryFailedEvent,
KnowledgeSearchQueryFailedEvent,
)
from .types.memory_events import (
MemorySaveStartedEvent,
MemorySaveCompletedEvent,
MemorySaveFailedEvent,
MemoryQueryStartedEvent,
MemoryQueryCompletedEvent,
MemoryQueryFailedEvent,
MemoryRetrievalStartedEvent,
MemoryRetrievalCompletedEvent,
EventTypes = (
CrewKickoffStartedEvent
| CrewKickoffCompletedEvent
| CrewKickoffFailedEvent
| CrewTestStartedEvent
| CrewTestCompletedEvent
| CrewTestFailedEvent
| CrewTrainStartedEvent
| CrewTrainCompletedEvent
| CrewTrainFailedEvent
| AgentExecutionStartedEvent
| AgentExecutionCompletedEvent
| LiteAgentExecutionCompletedEvent
| TaskStartedEvent
| TaskCompletedEvent
| TaskFailedEvent
| FlowStartedEvent
| FlowFinishedEvent
| MethodExecutionStartedEvent
| MethodExecutionFinishedEvent
| MethodExecutionFailedEvent
| AgentExecutionErrorEvent
| ToolUsageFinishedEvent
| ToolUsageErrorEvent
| ToolUsageStartedEvent
| LLMCallStartedEvent
| LLMCallCompletedEvent
| LLMCallFailedEvent
| LLMStreamChunkEvent
| LLMGuardrailStartedEvent
| LLMGuardrailCompletedEvent
| AgentReasoningStartedEvent
| AgentReasoningCompletedEvent
| AgentReasoningFailedEvent
| KnowledgeRetrievalStartedEvent
| KnowledgeRetrievalCompletedEvent
| KnowledgeQueryStartedEvent
| KnowledgeQueryCompletedEvent
| KnowledgeQueryFailedEvent
| KnowledgeSearchQueryFailedEvent
| MemorySaveStartedEvent
| MemorySaveCompletedEvent
| MemorySaveFailedEvent
| MemoryQueryStartedEvent
| MemoryQueryCompletedEvent
| MemoryQueryFailedEvent
| MemoryRetrievalStartedEvent
| MemoryRetrievalCompletedEvent
)
EventTypes = Union[
CrewKickoffStartedEvent,
CrewKickoffCompletedEvent,
CrewKickoffFailedEvent,
CrewTestStartedEvent,
CrewTestCompletedEvent,
CrewTestFailedEvent,
CrewTrainStartedEvent,
CrewTrainCompletedEvent,
CrewTrainFailedEvent,
AgentExecutionStartedEvent,
AgentExecutionCompletedEvent,
LiteAgentExecutionCompletedEvent,
TaskStartedEvent,
TaskCompletedEvent,
TaskFailedEvent,
FlowStartedEvent,
FlowFinishedEvent,
MethodExecutionStartedEvent,
MethodExecutionFinishedEvent,
MethodExecutionFailedEvent,
AgentExecutionErrorEvent,
ToolUsageFinishedEvent,
ToolUsageErrorEvent,
ToolUsageStartedEvent,
LLMCallStartedEvent,
LLMCallCompletedEvent,
LLMCallFailedEvent,
LLMStreamChunkEvent,
LLMGuardrailStartedEvent,
LLMGuardrailCompletedEvent,
AgentReasoningStartedEvent,
AgentReasoningCompletedEvent,
AgentReasoningFailedEvent,
KnowledgeRetrievalStartedEvent,
KnowledgeRetrievalCompletedEvent,
KnowledgeQueryStartedEvent,
KnowledgeQueryCompletedEvent,
KnowledgeQueryFailedEvent,
KnowledgeSearchQueryFailedEvent,
MemorySaveStartedEvent,
MemorySaveCompletedEvent,
MemorySaveFailedEvent,
MemoryQueryStartedEvent,
MemoryQueryCompletedEvent,
MemoryQueryFailedEvent,
MemoryRetrievalStartedEvent,
MemoryRetrievalCompletedEvent,
]
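
Both spellings of `EventTypes` produce a runtime union object; the PEP 604 `|` form additionally works directly in `isinstance` checks on Python 3.10+, which is how the bus matches events to handlers. A tiny illustration with placeholder member types:

IntOrStr = int | str                   # types.UnionType, like EventTypes above
assert isinstance(3, IntOrStr)         # runtime checks work on the | form (3.10+)
assert isinstance("x", IntOrStr)

def describe(value: IntOrStr) -> str:  # and it still works as an annotation
    return f"{type(value).__name__}: {value}"
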

View File

@@ -2,4 +2,4 @@
This module contains various event listener implementations
for handling memory, tracing, and other event-driven functionality.
"""
"""

View File

@@ -1,12 +1,12 @@
from crewai.events.base_event_listener import BaseEventListener
from crewai.events.types.memory_events import (
MemoryQueryCompletedEvent,
MemoryQueryFailedEvent,
MemoryRetrievalCompletedEvent,
MemoryRetrievalStartedEvent,
MemoryQueryFailedEvent,
MemoryQueryCompletedEvent,
MemorySaveStartedEvent,
MemorySaveCompletedEvent,
MemorySaveFailedEvent,
MemorySaveStartedEvent,
)

View File

@@ -1,7 +1,7 @@
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
from typing import Dict, Any
import uuid
from dataclasses import asdict, dataclass, field
from datetime import datetime, timezone
from typing import Any
@dataclass
@@ -13,7 +13,7 @@ class TraceEvent:
default_factory=lambda: datetime.now(timezone.utc).isoformat()
)
type: str = ""
event_data: Dict[str, Any] = field(default_factory=dict)
event_data: dict[str, Any] = field(default_factory=dict)
def to_dict(self) -> Dict[str, Any]:
def to_dict(self) -> dict[str, Any]:
return asdict(self)
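
Since `TraceEvent` is a dataclass, `to_dict` is just `asdict`, which recurses into nested dataclass fields, and the `default_factory` lambdas give each event its own timestamp. A trimmed, runnable version limited to the fields visible in this diff:

from dataclasses import asdict, dataclass, field
from datetime import datetime, timezone
from typing import Any

@dataclass
class TraceEvent:  # trimmed to the fields shown above
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )
    type: str = ""
    event_data: dict[str, Any] = field(default_factory=dict)

print(asdict(TraceEvent(type="llm_call")))  # plain dict, ready for json.dumps
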

View File

@@ -54,44 +54,164 @@ def _get_machine_id() -> str:
[f"{(uuid.getnode() >> b) & 0xFF:02x}" for b in range(0, 12, 2)][::-1]
)
parts.append(mac)
except Exception:
logger.warning("Error getting machine id for fingerprinting")
except Exception: # noqa: S110
pass
sysname = platform.system()
parts.append(sysname)
try:
sysname = platform.system()
parts.append(sysname)
except Exception:
sysname = "unknown"
parts.append(sysname)
try:
if sysname == "Darwin":
res = subprocess.run(
["/usr/sbin/system_profiler", "SPHardwareDataType"],
capture_output=True,
text=True,
timeout=2,
)
m = re.search(r"Hardware UUID:\s*([A-Fa-f0-9\-]+)", res.stdout)
if m:
parts.append(m.group(1))
elif sysname == "Linux":
try:
parts.append(Path("/etc/machine-id").read_text().strip())
except Exception:
parts.append(Path("/sys/class/dmi/id/product_uuid").read_text().strip())
res = subprocess.run(
["/usr/sbin/system_profiler", "SPHardwareDataType"],
capture_output=True,
text=True,
timeout=2,
)
m = re.search(r"Hardware UUID:\s*([A-Fa-f0-9\-]+)", res.stdout)
if m:
parts.append(m.group(1))
except Exception: # noqa: S110
pass
elif sysname == "Linux":
linux_id = _get_linux_machine_id()
if linux_id:
parts.append(linux_id)
elif sysname == "Windows":
res = subprocess.run(
["C:\\Windows\\System32\\wbem\\wmic.exe", "csproduct", "get", "UUID"],
capture_output=True,
text=True,
timeout=2,
)
lines = [line.strip() for line in res.stdout.splitlines() if line.strip()]
if len(lines) >= 2:
parts.append(lines[1])
except Exception:
logger.exception("Error getting machine ID")
try:
res = subprocess.run(
[
"C:\\Windows\\System32\\wbem\\wmic.exe",
"csproduct",
"get",
"UUID",
],
capture_output=True,
text=True,
timeout=2,
)
lines = [
line.strip() for line in res.stdout.splitlines() if line.strip()
]
if len(lines) >= 2:
parts.append(lines[1])
except Exception: # noqa: S110
pass
else:
generic_id = _get_generic_system_id()
if generic_id:
parts.append(generic_id)
except Exception: # noqa: S110
pass
if len(parts) <= 1:
try:
import socket
parts.append(socket.gethostname())
except Exception: # noqa: S110
pass
try:
parts.append(getpass.getuser())
except Exception: # noqa: S110
pass
try:
parts.append(platform.machine())
parts.append(platform.processor())
except Exception: # noqa: S110
pass
if not parts:
parts.append("unknown-system")
parts.append(str(uuid.uuid4()))
return hashlib.sha256("".join(parts).encode()).hexdigest()
def _get_linux_machine_id() -> str | None:
linux_id_sources = [
"/etc/machine-id",
"/sys/class/dmi/id/product_uuid",
"/proc/sys/kernel/random/boot_id",
"/sys/class/dmi/id/board_serial",
"/sys/class/dmi/id/chassis_serial",
]
for source in linux_id_sources:
try:
path = Path(source)
if path.exists() and path.is_file():
content = path.read_text().strip()
if content and content.lower() not in [
"unknown",
"to be filled by o.e.m.",
"",
]:
return content
except Exception: # noqa: S112, PERF203
continue
try:
import socket
hostname = socket.gethostname()
arch = platform.machine()
if hostname and arch:
return f"{hostname}-{arch}"
except Exception: # noqa: S110
pass
return None
def _get_generic_system_id() -> str | None:
try:
parts = []
try:
import socket
hostname = socket.gethostname()
if hostname:
parts.append(hostname)
except Exception: # noqa: S110
pass
try:
parts.append(platform.machine())
parts.append(platform.processor())
parts.append(platform.architecture()[0])
except Exception: # noqa: S110
pass
try:
container_id = os.environ.get(
"HOSTNAME", os.environ.get("CONTAINER_ID", "")
)
if container_id:
parts.append(container_id)
except Exception: # noqa: S110
pass
if parts:
return "-".join(filter(None, parts))
except Exception: # noqa: S110
pass
return None
def _user_data_file() -> Path:
base = Path(db_storage_path())
base.mkdir(parents=True, exist_ok=True)
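
The rewritten `_get_machine_id` wraps every probe (MAC, hardware UUID, hostname, user, architecture) in its own try/except and hashes whatever survives, so fingerprinting degrades gracefully instead of raising. A compressed, stdlib-only sketch of that shape:

import hashlib
import platform
import socket

def fingerprint() -> str:
    parts: list[str] = []
    for probe in (socket.gethostname, platform.system, platform.machine):
        try:
            parts.append(probe())
        except Exception:  # every probe is best-effort
            pass
    if not parts:  # mirrors the diff's final "unknown-system" fallback
        parts.append("unknown-system")
    return hashlib.sha256("".join(parts).encode()).hexdigest()

print(fingerprint())  # stable per machine, opaque to the receiver
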

View File

@@ -2,4 +2,4 @@
This module contains all event types used throughout the CrewAI system
for monitoring and extending agent, crew, task, and tool execution.
"""
"""

View File

@@ -2,14 +2,15 @@
from __future__ import annotations
from typing import Any, Dict, List, Optional, Sequence, Union
from collections.abc import Sequence
from typing import Any
from pydantic import model_validator
from pydantic import ConfigDict, model_validator
from crewai.agents.agent_builder.base_agent import BaseAgent
from crewai.events.base_events import BaseEvent
from crewai.tools.base_tool import BaseTool
from crewai.tools.structured_tool import CrewStructuredTool
from crewai.events.base_events import BaseEvent
class AgentExecutionStartedEvent(BaseEvent):
@@ -17,11 +18,11 @@ class AgentExecutionStartedEvent(BaseEvent):
agent: BaseAgent
task: Any
tools: Optional[Sequence[Union[BaseTool, CrewStructuredTool]]]
tools: Sequence[BaseTool | CrewStructuredTool] | None
task_prompt: str
type: str = "agent_execution_started"
model_config = {"arbitrary_types_allowed": True}
model_config = ConfigDict(arbitrary_types_allowed=True)
@model_validator(mode="after")
def set_fingerprint_data(self):
@@ -45,7 +46,7 @@ class AgentExecutionCompletedEvent(BaseEvent):
output: str
type: str = "agent_execution_completed"
model_config = {"arbitrary_types_allowed": True}
model_config = ConfigDict(arbitrary_types_allowed=True)
@model_validator(mode="after")
def set_fingerprint_data(self):
@@ -69,7 +70,7 @@ class AgentExecutionErrorEvent(BaseEvent):
error: str
type: str = "agent_execution_error"
model_config = {"arbitrary_types_allowed": True}
model_config = ConfigDict(arbitrary_types_allowed=True)
@model_validator(mode="after")
def set_fingerprint_data(self):
@@ -89,18 +90,18 @@ class AgentExecutionErrorEvent(BaseEvent):
class LiteAgentExecutionStartedEvent(BaseEvent):
"""Event emitted when a LiteAgent starts executing"""
agent_info: Dict[str, Any]
tools: Optional[Sequence[Union[BaseTool, CrewStructuredTool]]]
messages: Union[str, List[Dict[str, str]]]
agent_info: dict[str, Any]
tools: Sequence[BaseTool | CrewStructuredTool] | None
messages: str | list[dict[str, str]]
type: str = "lite_agent_execution_started"
model_config = {"arbitrary_types_allowed": True}
model_config = ConfigDict(arbitrary_types_allowed=True)
class LiteAgentExecutionCompletedEvent(BaseEvent):
"""Event emitted when a LiteAgent completes execution"""
agent_info: Dict[str, Any]
agent_info: dict[str, Any]
output: str
type: str = "lite_agent_execution_completed"
@@ -108,7 +109,7 @@ class LiteAgentExecutionCompletedEvent(BaseEvent):
class LiteAgentExecutionErrorEvent(BaseEvent):
"""Event emitted when a LiteAgent encounters an error during execution"""
agent_info: Dict[str, Any]
agent_info: dict[str, Any]
error: str
type: str = "lite_agent_execution_error"

View File

@@ -1,4 +1,4 @@
from typing import TYPE_CHECKING, Any, Dict, Optional, Union
from typing import TYPE_CHECKING, Any
from crewai.events.base_events import BaseEvent
@@ -11,8 +11,8 @@ else:
class CrewBaseEvent(BaseEvent):
"""Base class for crew events with fingerprint handling"""
crew_name: Optional[str]
crew: Optional[Crew] = None
crew_name: str | None
crew: Crew | None = None
def __init__(self, **data):
super().__init__(**data)
@@ -38,7 +38,7 @@ class CrewBaseEvent(BaseEvent):
class CrewKickoffStartedEvent(CrewBaseEvent):
"""Event emitted when a crew starts execution"""
inputs: Optional[Dict[str, Any]]
inputs: dict[str, Any] | None
type: str = "crew_kickoff_started"
@@ -62,7 +62,7 @@ class CrewTrainStartedEvent(CrewBaseEvent):
n_iterations: int
filename: str
inputs: Optional[Dict[str, Any]]
inputs: dict[str, Any] | None
type: str = "crew_train_started"
@@ -85,8 +85,8 @@ class CrewTestStartedEvent(CrewBaseEvent):
"""Event emitted when a crew starts testing"""
n_iterations: int
eval_llm: Optional[Union[str, Any]]
inputs: Optional[Dict[str, Any]]
eval_llm: str | Any | None
inputs: dict[str, Any] | None
type: str = "crew_test_started"

View File

@@ -1,4 +1,4 @@
from typing import Any, Dict, Optional, Union
from typing import Any
from pydantic import BaseModel, ConfigDict
@@ -16,7 +16,7 @@ class FlowStartedEvent(FlowEvent):
"""Event emitted when a flow starts execution"""
flow_name: str
inputs: Optional[Dict[str, Any]] = None
inputs: dict[str, Any] | None = None
type: str = "flow_started"
@@ -32,8 +32,8 @@ class MethodExecutionStartedEvent(FlowEvent):
flow_name: str
method_name: str
state: Union[Dict[str, Any], BaseModel]
params: Optional[Dict[str, Any]] = None
state: dict[str, Any] | BaseModel
params: dict[str, Any] | None = None
type: str = "method_execution_started"
@@ -43,7 +43,7 @@ class MethodExecutionFinishedEvent(FlowEvent):
flow_name: str
method_name: str
result: Any = None
state: Union[Dict[str, Any], BaseModel]
state: dict[str, Any] | BaseModel
type: str = "method_execution_finished"
@@ -62,7 +62,7 @@ class FlowFinishedEvent(FlowEvent):
"""Event emitted when a flow completes execution"""
flow_name: str
result: Optional[Any] = None
result: Any | None = None
type: str = "flow_finished"

View File

@@ -1,6 +1,5 @@
from crewai.events.base_events import BaseEvent
from crewai.agents.agent_builder.base_agent import BaseAgent
from crewai.events.base_events import BaseEvent
class KnowledgeRetrievalStartedEvent(BaseEvent):

View File

@@ -1,5 +1,5 @@
from enum import Enum
from typing import Any, Dict, List, Optional, Union
from typing import Any
from pydantic import BaseModel
@@ -7,14 +7,14 @@ from crewai.events.base_events import BaseEvent
class LLMEventBase(BaseEvent):
task_name: Optional[str] = None
task_id: Optional[str] = None
task_name: str | None = None
task_id: str | None = None
agent_id: Optional[str] = None
agent_role: Optional[str] = None
agent_id: str | None = None
agent_role: str | None = None
from_task: Optional[Any] = None
from_agent: Optional[Any] = None
from_task: Any | None = None
from_agent: Any | None = None
def __init__(self, **data):
super().__init__(**data)
@@ -38,11 +38,11 @@ class LLMCallStartedEvent(LLMEventBase):
"""
type: str = "llm_call_started"
model: Optional[str] = None
messages: Optional[Union[str, List[Dict[str, Any]]]] = None
tools: Optional[List[dict[str, Any]]] = None
callbacks: Optional[List[Any]] = None
available_functions: Optional[Dict[str, Any]] = None
model: str | None = None
messages: str | list[dict[str, Any]] | None = None
tools: list[dict[str, Any]] | None = None
callbacks: list[Any] | None = None
available_functions: dict[str, Any] | None = None
class LLMCallCompletedEvent(LLMEventBase):
@@ -52,7 +52,7 @@ class LLMCallCompletedEvent(LLMEventBase):
messages: str | list[dict[str, Any]] | None = None
response: Any
call_type: LLMCallType
model: Optional[str] = None
model: str | None = None
class LLMCallFailedEvent(LLMEventBase):
@@ -64,13 +64,13 @@ class LLMCallFailedEvent(LLMEventBase):
class FunctionCall(BaseModel):
arguments: str
name: Optional[str] = None
name: str | None = None
class ToolCall(BaseModel):
id: Optional[str] = None
id: str | None = None
function: FunctionCall
type: Optional[str] = None
type: str | None = None
index: int
@@ -79,4 +79,4 @@ class LLMStreamChunkEvent(LLMEventBase):
type: str = "llm_stream_chunk"
chunk: str
tool_call: Optional[ToolCall] = None
tool_call: ToolCall | None = None

View File

@@ -1,5 +1,6 @@
from collections.abc import Callable
from inspect import getsource
from typing import Any, Callable, Optional, Union
from typing import Any
from crewai.events.base_events import BaseEvent
@@ -13,12 +14,12 @@ class LLMGuardrailStartedEvent(BaseEvent):
"""
type: str = "llm_guardrail_started"
guardrail: Union[str, Callable]
guardrail: str | Callable
retry_count: int
def __init__(self, **data):
from crewai.tasks.llm_guardrail import LLMGuardrail
from crewai.tasks.hallucination_guardrail import HallucinationGuardrail
from crewai.tasks.llm_guardrail import LLMGuardrail
super().__init__(**data)
@@ -41,5 +42,5 @@ class LLMGuardrailCompletedEvent(BaseEvent):
type: str = "llm_guardrail_completed"
success: bool
result: Any
error: Optional[str] = None
error: str | None = None
retry_count: int

View File

@@ -1,6 +1,8 @@
"""Agent logging events that don't reference BaseAgent to avoid circular imports."""
from typing import Any, Optional
from typing import Any
from pydantic import ConfigDict
from crewai.events.base_events import BaseEvent
@@ -9,7 +11,7 @@ class AgentLogsStartedEvent(BaseEvent):
"""Event emitted when agent logs should be shown at start"""
agent_role: str
task_description: Optional[str] = None
task_description: str | None = None
verbose: bool = False
type: str = "agent_logs_started"
@@ -22,4 +24,4 @@ class AgentLogsExecutionEvent(BaseEvent):
verbose: bool = False
type: str = "agent_logs_execution"
model_config = {"arbitrary_types_allowed": True}
model_config = ConfigDict(arbitrary_types_allowed=True)

View File

@@ -1,4 +1,4 @@
from typing import Any, Dict, Optional
from typing import Any
from crewai.events.base_events import BaseEvent
@@ -7,12 +7,12 @@ class MemoryBaseEvent(BaseEvent):
"""Base event for memory operations"""
type: str
task_id: Optional[str] = None
task_name: Optional[str] = None
from_task: Optional[Any] = None
from_agent: Optional[Any] = None
agent_role: Optional[str] = None
agent_id: Optional[str] = None
task_id: str | None = None
task_name: str | None = None
from_task: Any | None = None
from_agent: Any | None = None
agent_role: str | None = None
agent_id: str | None = None
def __init__(self, **data):
super().__init__(**data)
@@ -26,7 +26,7 @@ class MemoryQueryStartedEvent(MemoryBaseEvent):
type: str = "memory_query_started"
query: str
limit: int
score_threshold: Optional[float] = None
score_threshold: float | None = None
class MemoryQueryCompletedEvent(MemoryBaseEvent):
@@ -36,7 +36,7 @@ class MemoryQueryCompletedEvent(MemoryBaseEvent):
query: str
results: Any
limit: int
score_threshold: Optional[float] = None
score_threshold: float | None = None
query_time_ms: float
@@ -46,7 +46,7 @@ class MemoryQueryFailedEvent(MemoryBaseEvent):
type: str = "memory_query_failed"
query: str
limit: int
score_threshold: Optional[float] = None
score_threshold: float | None = None
error: str
@@ -54,9 +54,9 @@ class MemorySaveStartedEvent(MemoryBaseEvent):
"""Event emitted when a memory save operation is started"""
type: str = "memory_save_started"
value: Optional[str] = None
metadata: Optional[Dict[str, Any]] = None
agent_role: Optional[str] = None
value: str | None = None
metadata: dict[str, Any] | None = None
agent_role: str | None = None
class MemorySaveCompletedEvent(MemoryBaseEvent):
@@ -64,8 +64,8 @@ class MemorySaveCompletedEvent(MemoryBaseEvent):
type: str = "memory_save_completed"
value: str
metadata: Optional[Dict[str, Any]] = None
agent_role: Optional[str] = None
metadata: dict[str, Any] | None = None
agent_role: str | None = None
save_time_ms: float
@@ -73,9 +73,9 @@ class MemorySaveFailedEvent(MemoryBaseEvent):
"""Event emitted when a memory save operation fails"""
type: str = "memory_save_failed"
value: Optional[str] = None
metadata: Optional[Dict[str, Any]] = None
agent_role: Optional[str] = None
value: str | None = None
metadata: dict[str, Any] | None = None
agent_role: str | None = None
error: str
@@ -83,13 +83,13 @@ class MemoryRetrievalStartedEvent(MemoryBaseEvent):
"""Event emitted when memory retrieval for a task prompt starts"""
type: str = "memory_retrieval_started"
task_id: Optional[str] = None
task_id: str | None = None
class MemoryRetrievalCompletedEvent(MemoryBaseEvent):
"""Event emitted when memory retrieval for a task prompt completes successfully"""
type: str = "memory_retrieval_completed"
task_id: Optional[str] = None
task_id: str | None = None
memory_content: str
retrieval_time_ms: float

View File

@@ -1,5 +1,6 @@
from typing import Any
from crewai.events.base_events import BaseEvent
from typing import Any, Optional
class ReasoningEvent(BaseEvent):
@@ -9,10 +10,10 @@ class ReasoningEvent(BaseEvent):
attempt: int = 1
agent_role: str
task_id: str
task_name: Optional[str] = None
from_task: Optional[Any] = None
agent_id: Optional[str] = None
from_agent: Optional[Any] = None
task_name: str | None = None
from_task: Any | None = None
agent_id: str | None = None
from_agent: Any | None = None
def __init__(self, **data):
super().__init__(**data)

View File

@@ -1,15 +1,15 @@
from typing import Any, Optional
from typing import Any
from crewai.tasks.task_output import TaskOutput
from crewai.events.base_events import BaseEvent
from crewai.tasks.task_output import TaskOutput
class TaskStartedEvent(BaseEvent):
"""Event emitted when a task starts"""
type: str = "task_started"
context: Optional[str]
task: Optional[Any] = None
context: str | None
task: Any | None = None
def __init__(self, **data):
super().__init__(**data)
@@ -29,7 +29,7 @@ class TaskCompletedEvent(BaseEvent):
output: TaskOutput
type: str = "task_completed"
task: Optional[Any] = None
task: Any | None = None
def __init__(self, **data):
super().__init__(**data)
@@ -49,7 +49,7 @@ class TaskFailedEvent(BaseEvent):
error: str
type: str = "task_failed"
task: Optional[Any] = None
task: Any | None = None
def __init__(self, **data):
super().__init__(**data)
@@ -69,7 +69,7 @@ class TaskEvaluationEvent(BaseEvent):
type: str = "task_evaluation"
evaluation_type: str
task: Optional[Any] = None
task: Any | None = None
def __init__(self, **data):
super().__init__(**data)

View File

@@ -1,5 +1,8 @@
from collections.abc import Callable
from datetime import datetime
from typing import Any, Callable, Dict, Optional
from typing import Any
from pydantic import ConfigDict
from crewai.events.base_events import BaseEvent
@@ -7,21 +10,21 @@ from crewai.events.base_events import BaseEvent
class ToolUsageEvent(BaseEvent):
"""Base event for tool usage tracking"""
agent_key: Optional[str] = None
agent_role: Optional[str] = None
agent_id: Optional[str] = None
agent_key: str | None = None
agent_role: str | None = None
agent_id: str | None = None
tool_name: str
tool_args: Dict[str, Any] | str
tool_class: Optional[str] = None
tool_args: dict[str, Any] | str
tool_class: str | None = None
run_attempts: int | None = None
delegations: int | None = None
agent: Optional[Any] = None
task_name: Optional[str] = None
task_id: Optional[str] = None
from_task: Optional[Any] = None
from_agent: Optional[Any] = None
agent: Any | None = None
task_name: str | None = None
task_id: str | None = None
from_task: Any | None = None
from_agent: Any | None = None
model_config = {"arbitrary_types_allowed": True}
model_config = ConfigDict(arbitrary_types_allowed=True)
def __init__(self, **data):
super().__init__(**data)
@@ -81,9 +84,9 @@ class ToolExecutionErrorEvent(BaseEvent):
error: Any
type: str = "tool_execution_error"
tool_name: str
tool_args: Dict[str, Any]
tool_args: dict[str, Any]
tool_class: Callable
agent: Optional[Any] = None
agent: Any | None = None
def __init__(self, **data):
super().__init__(**data)

View File

@@ -1,25 +1,25 @@
from typing import Any, Dict, Optional
from typing import Any, ClassVar
from rich.console import Console
from rich.live import Live
from rich.panel import Panel
from rich.syntax import Syntax
from rich.text import Text
from rich.tree import Tree
from rich.live import Live
from rich.syntax import Syntax
class ConsoleFormatter:
current_crew_tree: Optional[Tree] = None
current_task_branch: Optional[Tree] = None
current_agent_branch: Optional[Tree] = None
current_tool_branch: Optional[Tree] = None
current_flow_tree: Optional[Tree] = None
current_method_branch: Optional[Tree] = None
current_lite_agent_branch: Optional[Tree] = None
tool_usage_counts: Dict[str, int] = {}
current_reasoning_branch: Optional[Tree] = None # Track reasoning status
current_crew_tree: Tree | None = None
current_task_branch: Tree | None = None
current_agent_branch: Tree | None = None
current_tool_branch: Tree | None = None
current_flow_tree: Tree | None = None
current_method_branch: Tree | None = None
current_lite_agent_branch: Tree | None = None
tool_usage_counts: ClassVar[dict[str, int]] = {}
current_reasoning_branch: Tree | None = None # Track reasoning status
_live_paused: bool = False
current_llm_tool_tree: Optional[Tree] = None
current_llm_tool_tree: Tree | None = None
def __init__(self, verbose: bool = False):
self.console = Console(width=None)
@@ -29,7 +29,7 @@ class ConsoleFormatter:
# instance so the previous render is replaced instead of writing a new one.
# Once any non-Tree renderable is printed we stop the Live session so the
# final Tree persists on the terminal.
self._live: Optional[Live] = None
self._live: Live | None = None
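
Annotating `tool_usage_counts` with `ClassVar` (likely ruff's mutable-class-default rule, RUF012) documents that the dict is intentionally shared class state, unlike the `Tree | None` attributes, which are per-instance annotations with class-level defaults. A small demonstration of what that sharing means:

from typing import ClassVar

class Formatter:
    # one dict shared by every instance; ClassVar makes the intent explicit
    tool_usage_counts: ClassVar[dict[str, int]] = {}

    def record(self, tool: str) -> None:
        self.tool_usage_counts[tool] = self.tool_usage_counts.get(tool, 0) + 1

a, b = Formatter(), Formatter()
a.record("search")
b.record("search")
print(Formatter.tool_usage_counts)  # {'search': 2}
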
def create_panel(self, content: Text, title: str, style: str = "blue") -> Panel:
"""Create a standardized panel with consistent styling."""
@@ -45,7 +45,7 @@ class ConsoleFormatter:
title: str,
name: str,
status_style: str = "blue",
tool_args: Dict[str, Any] | str = "",
tool_args: dict[str, Any] | str = "",
**fields,
) -> Text:
"""Create standardized status content with consistent formatting."""
@@ -70,7 +70,7 @@ class ConsoleFormatter:
prefix: str,
name: str,
style: str = "blue",
status: Optional[str] = None,
status: str | None = None,
) -> None:
"""Update tree label with consistent formatting."""
label = Text()
@@ -115,7 +115,7 @@ class ConsoleFormatter:
self._live.update(tree, refresh=True)
return # Nothing else to do
# Case 2: blank line while a live session is running – ignore so we
# Case 2: blank line while a live session is running - ignore so we
# don't break the in-place rendering behaviour
if len(args) == 0 and self._live:
return
@@ -156,7 +156,7 @@ class ConsoleFormatter:
def update_crew_tree(
self,
tree: Optional[Tree],
tree: Tree | None,
crew_name: str,
source_id: str,
status: str = "completed",
@@ -196,7 +196,7 @@ class ConsoleFormatter:
self.print_panel(content, title, style)
def create_crew_tree(self, crew_name: str, source_id: str) -> Optional[Tree]:
def create_crew_tree(self, crew_name: str, source_id: str) -> Tree | None:
"""Create and initialize a new crew tree with initial status."""
if not self.verbose:
return None
@@ -220,8 +220,8 @@ class ConsoleFormatter:
return tree
def create_task_branch(
self, crew_tree: Optional[Tree], task_id: str, task_name: Optional[str] = None
) -> Optional[Tree]:
self, crew_tree: Tree | None, task_id: str, task_name: str | None = None
) -> Tree | None:
"""Create and initialize a task branch."""
if not self.verbose:
return None
@@ -255,11 +255,11 @@ class ConsoleFormatter:
def update_task_status(
self,
crew_tree: Optional[Tree],
crew_tree: Tree | None,
task_id: str,
agent_role: str,
status: str = "completed",
task_name: Optional[str] = None,
task_name: str | None = None,
) -> None:
"""Update task status in the tree."""
if not self.verbose or crew_tree is None:
@@ -306,8 +306,8 @@ class ConsoleFormatter:
self.print_panel(content, panel_title, style)
def create_agent_branch(
self, task_branch: Optional[Tree], agent_role: str, crew_tree: Optional[Tree]
) -> Optional[Tree]:
self, task_branch: Tree | None, agent_role: str, crew_tree: Tree | None
) -> Tree | None:
"""Create and initialize an agent branch."""
if not self.verbose or not task_branch or not crew_tree:
return None
@@ -325,9 +325,9 @@ class ConsoleFormatter:
def update_agent_status(
self,
agent_branch: Optional[Tree],
agent_branch: Tree | None,
agent_role: str,
crew_tree: Optional[Tree],
crew_tree: Tree | None,
status: str = "completed",
) -> None:
"""Update agent status in the tree."""
@@ -336,7 +336,7 @@ class ConsoleFormatter:
# altering the tree. Keeping it a no-op avoids duplicate status lines.
return
def create_flow_tree(self, flow_name: str, flow_id: str) -> Optional[Tree]:
def create_flow_tree(self, flow_name: str, flow_id: str) -> Tree | None:
"""Create and initialize a flow tree."""
content = self.create_status_content(
"Starting Flow Execution", flow_name, "blue", ID=flow_id
@@ -356,7 +356,7 @@ class ConsoleFormatter:
return flow_tree
def start_flow(self, flow_name: str, flow_id: str) -> Optional[Tree]:
def start_flow(self, flow_name: str, flow_id: str) -> Tree | None:
"""Initialize a flow execution tree."""
flow_tree = Tree("")
flow_label = Text()
@@ -376,7 +376,7 @@ class ConsoleFormatter:
def update_flow_status(
self,
flow_tree: Optional[Tree],
flow_tree: Tree | None,
flow_name: str,
flow_id: str,
status: str = "completed",
@@ -423,11 +423,11 @@ class ConsoleFormatter:
def update_method_status(
self,
method_branch: Optional[Tree],
flow_tree: Optional[Tree],
method_branch: Tree | None,
flow_tree: Tree | None,
method_name: str,
status: str = "running",
) -> Optional[Tree]:
) -> Tree | None:
"""Update method status in the flow tree."""
if not flow_tree:
return None
@@ -480,7 +480,7 @@ class ConsoleFormatter:
def handle_llm_tool_usage_started(
self,
tool_name: str,
tool_args: Dict[str, Any] | str,
tool_args: dict[str, Any] | str,
):
# Create status content for the tool usage
content = self.create_status_content(
@@ -520,11 +520,11 @@ class ConsoleFormatter:
def handle_tool_usage_started(
self,
agent_branch: Optional[Tree],
agent_branch: Tree | None,
tool_name: str,
crew_tree: Optional[Tree],
tool_args: Dict[str, Any] | str = "",
) -> Optional[Tree]:
crew_tree: Tree | None,
tool_args: dict[str, Any] | str = "",
) -> Tree | None:
"""Handle tool usage started event."""
if not self.verbose:
return None
@@ -569,9 +569,9 @@ class ConsoleFormatter:
def handle_tool_usage_finished(
self,
tool_branch: Optional[Tree],
tool_branch: Tree | None,
tool_name: str,
crew_tree: Optional[Tree],
crew_tree: Tree | None,
) -> None:
"""Handle tool usage finished event."""
if not self.verbose or tool_branch is None:
@@ -600,10 +600,10 @@ class ConsoleFormatter:
def handle_tool_usage_error(
self,
tool_branch: Optional[Tree],
tool_branch: Tree | None,
tool_name: str,
error: str,
crew_tree: Optional[Tree],
crew_tree: Tree | None,
) -> None:
"""Handle tool usage error event."""
if not self.verbose:
@@ -631,9 +631,9 @@ class ConsoleFormatter:
def handle_llm_call_started(
self,
agent_branch: Optional[Tree],
crew_tree: Optional[Tree],
) -> Optional[Tree]:
agent_branch: Tree | None,
crew_tree: Tree | None,
) -> Tree | None:
"""Handle LLM call started event."""
if not self.verbose:
return None
@@ -672,9 +672,9 @@ class ConsoleFormatter:
def handle_llm_call_completed(
self,
tool_branch: Optional[Tree],
agent_branch: Optional[Tree],
crew_tree: Optional[Tree],
tool_branch: Tree | None,
agent_branch: Tree | None,
crew_tree: Tree | None,
) -> None:
"""Handle LLM call completed event."""
if not self.verbose:
@@ -736,7 +736,7 @@ class ConsoleFormatter:
self.print()
def handle_llm_call_failed(
self, tool_branch: Optional[Tree], error: str, crew_tree: Optional[Tree]
self, tool_branch: Tree | None, error: str, crew_tree: Tree | None
) -> None:
"""Handle LLM call failed event."""
if not self.verbose:
@@ -789,7 +789,7 @@ class ConsoleFormatter:
def handle_crew_test_started(
self, crew_name: str, source_id: str, n_iterations: int
) -> Optional[Tree]:
) -> Tree | None:
"""Handle crew test started event."""
if not self.verbose:
return None
@@ -823,7 +823,7 @@ class ConsoleFormatter:
return test_tree
def handle_crew_test_completed(
self, flow_tree: Optional[Tree], crew_name: str
self, flow_tree: Tree | None, crew_name: str
) -> None:
"""Handle crew test completed event."""
if not self.verbose:
@@ -913,7 +913,7 @@ class ConsoleFormatter:
self.print_panel(failure_content, "Test Failure", "red")
self.print()
def create_lite_agent_branch(self, lite_agent_role: str) -> Optional[Tree]:
def create_lite_agent_branch(self, lite_agent_role: str) -> Tree | None:
"""Create and initialize a lite agent branch."""
if not self.verbose:
return None
@@ -935,10 +935,10 @@ class ConsoleFormatter:
def update_lite_agent_status(
self,
lite_agent_branch: Optional[Tree],
lite_agent_branch: Tree | None,
lite_agent_role: str,
status: str = "completed",
**fields: Dict[str, Any],
**fields: dict[str, Any],
) -> None:
"""Update lite agent status in the tree."""
if not self.verbose or lite_agent_branch is None:
@@ -981,7 +981,7 @@ class ConsoleFormatter:
lite_agent_role: str,
status: str = "started",
error: Any = None,
**fields: Dict[str, Any],
**fields: dict[str, Any],
) -> None:
"""Handle lite agent execution events with consistent formatting."""
if not self.verbose:
@@ -1006,9 +1006,9 @@ class ConsoleFormatter:
def handle_knowledge_retrieval_started(
self,
agent_branch: Optional[Tree],
crew_tree: Optional[Tree],
) -> Optional[Tree]:
agent_branch: Tree | None,
crew_tree: Tree | None,
) -> Tree | None:
"""Handle knowledge retrieval started event."""
if not self.verbose:
return None
@@ -1034,13 +1034,13 @@ class ConsoleFormatter:
def handle_knowledge_retrieval_completed(
self,
agent_branch: Optional[Tree],
crew_tree: Optional[Tree],
agent_branch: Tree | None,
crew_tree: Tree | None,
retrieved_knowledge: Any,
) -> None:
"""Handle knowledge retrieval completed event."""
if not self.verbose:
return None
return
branch_to_use = self.current_lite_agent_branch or agent_branch
tree_to_use = branch_to_use or crew_tree
@@ -1062,7 +1062,7 @@ class ConsoleFormatter:
)
self.print(knowledge_panel)
self.print()
return None
return
knowledge_branch_found = False
for child in branch_to_use.children:
@@ -1111,18 +1111,18 @@ class ConsoleFormatter:
def handle_knowledge_query_started(
self,
agent_branch: Optional[Tree],
agent_branch: Tree | None,
task_prompt: str,
crew_tree: Optional[Tree],
crew_tree: Tree | None,
) -> None:
"""Handle knowledge query generated event."""
if not self.verbose:
return None
return
branch_to_use = self.current_lite_agent_branch or agent_branch
tree_to_use = branch_to_use or crew_tree
if branch_to_use is None or tree_to_use is None:
return None
return
query_branch = branch_to_use.add("")
self.update_tree_label(
@@ -1134,9 +1134,9 @@ class ConsoleFormatter:
def handle_knowledge_query_failed(
self,
agent_branch: Optional[Tree],
agent_branch: Tree | None,
error: str,
crew_tree: Optional[Tree],
crew_tree: Tree | None,
) -> None:
"""Handle knowledge query failed event."""
if not self.verbose:
@@ -1159,18 +1159,18 @@ class ConsoleFormatter:
def handle_knowledge_query_completed(
self,
agent_branch: Optional[Tree],
crew_tree: Optional[Tree],
agent_branch: Tree | None,
crew_tree: Tree | None,
) -> None:
"""Handle knowledge query completed event."""
if not self.verbose:
return None
return
branch_to_use = self.current_lite_agent_branch or agent_branch
tree_to_use = branch_to_use or crew_tree
if branch_to_use is None or tree_to_use is None:
return None
return
query_branch = branch_to_use.add("")
self.update_tree_label(query_branch, "", "Knowledge Query Completed", "green")
@@ -1180,9 +1180,9 @@ class ConsoleFormatter:
def handle_knowledge_search_query_failed(
self,
agent_branch: Optional[Tree],
agent_branch: Tree | None,
error: str,
crew_tree: Optional[Tree],
crew_tree: Tree | None,
) -> None:
"""Handle knowledge search query failed event."""
if not self.verbose:
@@ -1207,10 +1207,10 @@ class ConsoleFormatter:
def handle_reasoning_started(
self,
agent_branch: Optional[Tree],
agent_branch: Tree | None,
attempt: int,
crew_tree: Optional[Tree],
) -> Optional[Tree]:
crew_tree: Tree | None,
) -> Tree | None:
"""Handle agent reasoning started (or refinement) event."""
if not self.verbose:
return None
@@ -1249,7 +1249,7 @@ class ConsoleFormatter:
self,
plan: str,
ready: bool,
crew_tree: Optional[Tree],
crew_tree: Tree | None,
) -> None:
"""Handle agent reasoning completed event."""
if not self.verbose:
@@ -1292,7 +1292,7 @@ class ConsoleFormatter:
def handle_reasoning_failed(
self,
error: str,
crew_tree: Optional[Tree],
crew_tree: Tree | None,
) -> None:
"""Handle agent reasoning failure event."""
if not self.verbose:
@@ -1329,7 +1329,7 @@ class ConsoleFormatter:
def handle_agent_logs_started(
self,
agent_role: str,
task_description: Optional[str] = None,
task_description: str | None = None,
verbose: bool = False,
) -> None:
"""Handle agent logs started event."""
@@ -1367,10 +1367,11 @@ class ConsoleFormatter:
if not verbose:
return
from crewai.agents.parser import AgentAction, AgentFinish
import json
import re
from crewai.agents.parser import AgentAction, AgentFinish
agent_role = agent_role.partition("\n")[0]
if isinstance(formatted_answer, AgentAction):
@@ -1473,9 +1474,9 @@ class ConsoleFormatter:
def handle_memory_retrieval_started(
self,
agent_branch: Optional[Tree],
crew_tree: Optional[Tree],
) -> Optional[Tree]:
agent_branch: Tree | None,
crew_tree: Tree | None,
) -> Tree | None:
if not self.verbose:
return None
@@ -1497,13 +1498,13 @@ class ConsoleFormatter:
def handle_memory_retrieval_completed(
self,
agent_branch: Optional[Tree],
crew_tree: Optional[Tree],
agent_branch: Tree | None,
crew_tree: Tree | None,
memory_content: str,
retrieval_time_ms: float,
) -> None:
if not self.verbose:
return None
return
branch_to_use = self.current_lite_agent_branch or agent_branch
tree_to_use = branch_to_use or crew_tree
@@ -1528,7 +1529,7 @@ class ConsoleFormatter:
if branch_to_use is None or tree_to_use is None:
add_panel()
return None
return
memory_branch_found = False
for child in branch_to_use.children:
@@ -1565,13 +1566,13 @@ class ConsoleFormatter:
def handle_memory_query_completed(
self,
agent_branch: Optional[Tree],
agent_branch: Tree | None,
source_type: str,
query_time_ms: float,
crew_tree: Optional[Tree],
crew_tree: Tree | None,
) -> None:
if not self.verbose:
return None
return
branch_to_use = self.current_lite_agent_branch or agent_branch
tree_to_use = branch_to_use or crew_tree
@@ -1580,15 +1581,15 @@ class ConsoleFormatter:
branch_to_use = tree_to_use
if branch_to_use is None:
return None
return
memory_type = source_type.replace("_", " ").title()
for child in branch_to_use.children:
if "Memory Retrieval" in str(child.label):
for child in child.children:
sources_branch = child
if "Sources Used" in str(child.label):
for inner_child in child.children:
sources_branch = inner_child
if "Sources Used" in str(inner_child.label):
sources_branch.add(f"{memory_type} ({query_time_ms:.2f}ms)")
break
else:
@@ -1598,13 +1599,13 @@ class ConsoleFormatter:
def handle_memory_query_failed(
self,
agent_branch: Optional[Tree],
crew_tree: Optional[Tree],
agent_branch: Tree | None,
crew_tree: Tree | None,
error: str,
source_type: str,
) -> None:
if not self.verbose:
return None
return
branch_to_use = self.current_lite_agent_branch or agent_branch
tree_to_use = branch_to_use or crew_tree
@@ -1613,15 +1614,15 @@ class ConsoleFormatter:
branch_to_use = tree_to_use
if branch_to_use is None:
return None
return
memory_type = source_type.replace("_", " ").title()
for child in branch_to_use.children:
if "Memory Retrieval" in str(child.label):
for child in child.children:
sources_branch = child
if "Sources Used" in str(child.label):
for inner_child in child.children:
sources_branch = inner_child
if "Sources Used" in str(inner_child.label):
sources_branch.add(f"{memory_type} - Error: {error}")
break
else:
@@ -1630,16 +1631,16 @@ class ConsoleFormatter:
break
def handle_memory_save_started(
self, agent_branch: Optional[Tree], crew_tree: Optional[Tree]
self, agent_branch: Tree | None, crew_tree: Tree | None
) -> None:
if not self.verbose:
return None
return
branch_to_use = agent_branch or self.current_lite_agent_branch
tree_to_use = branch_to_use or crew_tree
if tree_to_use is None:
return None
return
for child in tree_to_use.children:
if "Memory Update" in str(child.label):
@@ -1655,19 +1656,19 @@ class ConsoleFormatter:
def handle_memory_save_completed(
self,
agent_branch: Optional[Tree],
crew_tree: Optional[Tree],
agent_branch: Tree | None,
crew_tree: Tree | None,
save_time_ms: float,
source_type: str,
) -> None:
if not self.verbose:
return None
return
branch_to_use = agent_branch or self.current_lite_agent_branch
tree_to_use = branch_to_use or crew_tree
if tree_to_use is None:
return None
return
memory_type = source_type.replace("_", " ").title()
content = f"{memory_type} Memory Saved ({save_time_ms:.2f}ms)"
@@ -1685,19 +1686,19 @@ class ConsoleFormatter:
def handle_memory_save_failed(
self,
agent_branch: Optional[Tree],
agent_branch: Tree | None,
error: str,
source_type: str,
crew_tree: Optional[Tree],
crew_tree: Tree | None,
) -> None:
if not self.verbose:
return None
return
branch_to_use = agent_branch or self.current_lite_agent_branch
tree_to_use = branch_to_use or crew_tree
if branch_to_use is None or tree_to_use is None:
return None
return
memory_type = source_type.replace("_", " ").title()
content = f"{memory_type} Memory Save Failed"
@@ -1738,7 +1739,7 @@ class ConsoleFormatter:
def handle_guardrail_completed(
self,
success: bool,
error: Optional[str],
error: str | None,
retry_count: int,
) -> None:
"""Display guardrail evaluation result.

View File

@@ -1,6 +1,5 @@
from abc import ABC, abstractmethod
from pathlib import Path
from typing import Dict, List, Optional, Union
from pydantic import Field, field_validator
@@ -14,19 +13,23 @@ class BaseFileKnowledgeSource(BaseKnowledgeSource, ABC):
"""Base class for knowledge sources that load content from files."""
_logger: Logger = Logger(verbose=True)
file_path: Optional[Union[Path, List[Path], str, List[str]]] = Field(
file_path: Path | list[Path] | str | list[str] | None = Field(
default=None,
description="[Deprecated] The path to the file. Use file_paths instead.",
)
file_paths: Optional[Union[Path, List[Path], str, List[str]]] = Field(
file_paths: Path | list[Path] | str | list[str] | None = Field(
default_factory=list, description="The path to the file"
)
content: Dict[Path, str] = Field(init=False, default_factory=dict)
storage: Optional[KnowledgeStorage] = Field(default=None)
safe_file_paths: List[Path] = Field(default_factory=list)
content: dict[Path, str] = Field(init=False, default_factory=dict)
storage: KnowledgeStorage | None = Field(default=None)
safe_file_paths: list[Path] = Field(default_factory=list)
batch_size: int = Field(
default=50,
description="Number of chunks to process in each batch to avoid token limits",
)
@field_validator("file_path", "file_paths", mode="before")
def validate_file_path(cls, v, info):
def validate_file_path(cls, v, info): # noqa: N805
"""Validate that at least one of file_path or file_paths is provided."""
# Single check if both are None, O(1) instead of nested conditions
if (
@@ -46,9 +49,8 @@ class BaseFileKnowledgeSource(BaseKnowledgeSource, ABC):
self.content = self.load_content()
@abstractmethod
def load_content(self) -> Dict[Path, str]:
def load_content(self) -> dict[Path, str]:
"""Load and preprocess file content. Should be overridden by subclasses. Assume that the file path is relative to the project root in the knowledge directory."""
pass
def validate_content(self):
"""Validate the paths."""
@@ -68,17 +70,19 @@ class BaseFileKnowledgeSource(BaseKnowledgeSource, ABC):
)
def _save_documents(self):
"""Save the documents to the storage."""
"""Save the documents to the storage in batches to avoid token limits."""
if self.storage:
self.storage.save(self.chunks)
for i in range(0, len(self.chunks), self.batch_size):
batch = self.chunks[i : i + self.batch_size]
self.storage.save(batch)
else:
raise ValueError("No storage found to save documents.")
def convert_to_path(self, path: Union[Path, str]) -> Path:
def convert_to_path(self, path: Path | str) -> Path:
"""Convert a path to a Path object."""
return Path(KNOWLEDGE_DIRECTORY + "/" + path) if isinstance(path, str) else path
def _process_file_paths(self) -> List[Path]:
def _process_file_paths(self) -> list[Path]:
"""Convert file_path to a list of Path objects."""
if hasattr(self, "file_path") and self.file_path is not None:
@@ -93,7 +97,7 @@ class BaseFileKnowledgeSource(BaseKnowledgeSource, ABC):
raise ValueError("Your source must be provided with a file_paths: []")
# Convert single path to list
path_list: List[Union[Path, str]] = (
path_list: list[Path | str] = (
[self.file_paths]
if isinstance(self.file_paths, (str, Path))
else list(self.file_paths)

View File

@@ -1,5 +1,5 @@
from abc import ABC, abstractmethod
from typing import Any, Dict, List, Optional
from typing import Any
import numpy as np
from pydantic import BaseModel, ConfigDict, Field
@@ -12,29 +12,27 @@ class BaseKnowledgeSource(BaseModel, ABC):
chunk_size: int = 4000
chunk_overlap: int = 200
chunks: List[str] = Field(default_factory=list)
chunk_embeddings: List[np.ndarray] = Field(default_factory=list)
chunks: list[str] = Field(default_factory=list)
chunk_embeddings: list[np.ndarray] = Field(default_factory=list)
model_config = ConfigDict(arbitrary_types_allowed=True)
storage: Optional[KnowledgeStorage] = Field(default=None)
metadata: Dict[str, Any] = Field(default_factory=dict) # Currently unused
collection_name: Optional[str] = Field(default=None)
storage: KnowledgeStorage | None = Field(default=None)
metadata: dict[str, Any] = Field(default_factory=dict) # Currently unused
collection_name: str | None = Field(default=None)
@abstractmethod
def validate_content(self) -> Any:
"""Load and preprocess content from the source."""
pass
@abstractmethod
def add(self) -> None:
"""Process content, chunk it, compute embeddings, and save them."""
pass
def get_embeddings(self) -> List[np.ndarray]:
def get_embeddings(self) -> list[np.ndarray]:
"""Return the list of embeddings for the chunks."""
return self.chunk_embeddings
def _chunk_text(self, text: str) -> List[str]:
def _chunk_text(self, text: str) -> list[str]:
"""Utility method to split text into chunks."""
return [
text[i : i + self.chunk_size]
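
The comprehension is cut off above; a plausible completion, assuming the conventional fixed-window split where the step is `chunk_size - chunk_overlap` so neighbouring chunks share an overlap region:

def chunk_text(text: str, chunk_size: int = 4000, chunk_overlap: int = 200) -> list[str]:
    step = chunk_size - chunk_overlap  # assumed step; the diff truncates here
    return [text[i : i + chunk_size] for i in range(0, len(text), step)]

print([len(c) for c in chunk_text("x" * 9000)])  # [4000, 4000, 1400]
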

View File

@@ -1,13 +1,21 @@
from collections.abc import Iterator
from pathlib import Path
from typing import Iterator, List, Optional, Union
from urllib.parse import urlparse
try:
from docling.datamodel.base_models import InputFormat
from docling.document_converter import DocumentConverter
from docling.exceptions import ConversionError
from docling_core.transforms.chunker.hierarchical_chunker import HierarchicalChunker
from docling_core.types.doc.document import DoclingDocument
from docling.datamodel.base_models import ( # type: ignore[import-not-found]
InputFormat,
)
from docling.document_converter import ( # type: ignore[import-not-found]
DocumentConverter,
)
from docling.exceptions import ConversionError # type: ignore[import-not-found]
from docling_core.transforms.chunker.hierarchical_chunker import ( # type: ignore[import-not-found]
HierarchicalChunker,
)
from docling_core.types.doc.document import ( # type: ignore[import-not-found]
DoclingDocument,
)
DOCLING_AVAILABLE = True
except ImportError:
@@ -35,11 +43,11 @@ class CrewDoclingSource(BaseKnowledgeSource):
_logger: Logger = Logger(verbose=True)
file_path: Optional[List[Union[Path, str]]] = Field(default=None)
file_paths: List[Union[Path, str]] = Field(default_factory=list)
chunks: List[str] = Field(default_factory=list)
safe_file_paths: List[Union[Path, str]] = Field(default_factory=list)
content: List["DoclingDocument"] = Field(default_factory=list)
file_path: list[Path | str] | None = Field(default=None)
file_paths: list[Path | str] = Field(default_factory=list)
chunks: list[str] = Field(default_factory=list)
safe_file_paths: list[Path | str] = Field(default_factory=list)
content: list["DoclingDocument"] = Field(default_factory=list)
document_converter: "DocumentConverter" = Field(
default_factory=lambda: DocumentConverter(
allowed_formats=[
@@ -66,7 +74,7 @@ class CrewDoclingSource(BaseKnowledgeSource):
self.safe_file_paths = self.validate_content()
self.content = self._load_content()
def _load_content(self) -> List["DoclingDocument"]:
def _load_content(self) -> list["DoclingDocument"]:
try:
return self._convert_source_to_docling_documents()
except ConversionError as e:
@@ -88,7 +96,7 @@ class CrewDoclingSource(BaseKnowledgeSource):
self.chunks.extend(list(new_chunks_iterable))
self._save_documents()
def _convert_source_to_docling_documents(self) -> List["DoclingDocument"]:
def _convert_source_to_docling_documents(self) -> list["DoclingDocument"]:
conv_results_iter = self.document_converter.convert_all(self.safe_file_paths)
return [result.document for result in conv_results_iter]
@@ -97,8 +105,8 @@ class CrewDoclingSource(BaseKnowledgeSource):
for chunk in chunker.chunk(doc):
yield chunk.text
def validate_content(self) -> List[Union[Path, str]]:
processed_paths: List[Union[Path, str]] = []
def validate_content(self) -> list[Path | str]:
processed_paths: list[Path | str] = []
for path in self.file_paths:
if isinstance(path, str):
if path.startswith(("http://", "https://")):
@@ -108,7 +116,7 @@ class CrewDoclingSource(BaseKnowledgeSource):
else:
raise ValueError(f"Invalid URL format: {path}")
except Exception as e:
raise ValueError(f"Invalid URL: {path}. Error: {str(e)}")
raise ValueError(f"Invalid URL: {path}. Error: {e!s}") from e
else:
local_path = Path(KNOWLEDGE_DIRECTORY + "/" + path)
if local_path.exists():

View File
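
The docling imports above are wrapped in try/except so the knowledge source degrades gracefully when the optional package is missing, with DOCLING_AVAILABLE recording the outcome. A sketch of the pattern, where somepkg is a hypothetical stand-in rather than a real dependency:

try:
    import somepkg  # optional dependency; may be absent
    SOMEPKG_AVAILABLE = True
except ImportError:
    somepkg = None
    SOMEPKG_AVAILABLE = False

def use_feature() -> str:
    if not SOMEPKG_AVAILABLE:
        raise ImportError(
            "somepkg is not installed. Please install it with: pip install somepkg"
        )
    return somepkg.process()  # hypothetical call on the optional package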

@@ -1,6 +1,5 @@
import csv
from pathlib import Path
from typing import Dict, List
from crewai.knowledge.source.base_file_knowledge_source import BaseFileKnowledgeSource
@@ -8,7 +7,7 @@ from crewai.knowledge.source.base_file_knowledge_source import BaseFileKnowledgeSource
class CSVKnowledgeSource(BaseFileKnowledgeSource):
"""A knowledge source that stores and queries CSV file content using embeddings."""
def load_content(self) -> Dict[Path, str]:
def load_content(self) -> dict[Path, str]:
"""Load and preprocess CSV file content."""
content_dict = {}
for file_path in self.safe_file_paths:
@@ -32,7 +31,7 @@ class CSVKnowledgeSource(BaseFileKnowledgeSource):
self.chunks.extend(new_chunks)
self._save_documents()
def _chunk_text(self, text: str) -> List[str]:
def _chunk_text(self, text: str) -> list[str]:
"""Utility method to split text into chunks."""
return [
text[i : i + self.chunk_size]

View File
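
The _chunk_text bodies are truncated in this view, but the visible slice expression implies a fixed-width split. An illustrative standalone version, assuming no overlap between chunks (the repo's method may differ):

def chunk_text(text: str, chunk_size: int) -> list[str]:
    # Non-overlapping fixed-width slices over the input string.
    return [text[i : i + chunk_size] for i in range(0, len(text), chunk_size)]

assert chunk_text("abcdefgh", 3) == ["abc", "def", "gh"]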

@@ -1,6 +1,4 @@
from pathlib import Path
from typing import Dict, Iterator, List, Optional, Union
from urllib.parse import urlparse
from pydantic import Field, field_validator
@@ -16,19 +14,19 @@ class ExcelKnowledgeSource(BaseKnowledgeSource):
_logger: Logger = Logger(verbose=True)
file_path: Optional[Union[Path, List[Path], str, List[str]]] = Field(
file_path: Path | list[Path] | str | list[str] | None = Field(
default=None,
description="[Deprecated] The path to the file. Use file_paths instead.",
)
file_paths: Optional[Union[Path, List[Path], str, List[str]]] = Field(
file_paths: Path | list[Path] | str | list[str] | None = Field(
default_factory=list, description="The path to the file"
)
chunks: List[str] = Field(default_factory=list)
content: Dict[Path, Dict[str, str]] = Field(default_factory=dict)
safe_file_paths: List[Path] = Field(default_factory=list)
chunks: list[str] = Field(default_factory=list)
content: dict[Path, dict[str, str]] = Field(default_factory=dict)
safe_file_paths: list[Path] = Field(default_factory=list)
@field_validator("file_path", "file_paths", mode="before")
def validate_file_path(cls, v, info):
def validate_file_path(cls, v, info): # noqa: N805
"""Validate that at least one of file_path or file_paths is provided."""
# Single check if both are None, O(1) instead of nested conditions
if (
@@ -41,7 +39,7 @@ class ExcelKnowledgeSource(BaseKnowledgeSource):
raise ValueError("Either file_path or file_paths must be provided")
return v
def _process_file_paths(self) -> List[Path]:
def _process_file_paths(self) -> list[Path]:
"""Convert file_path to a list of Path objects."""
if hasattr(self, "file_path") and self.file_path is not None:
@@ -56,7 +54,7 @@ class ExcelKnowledgeSource(BaseKnowledgeSource):
raise ValueError("Your source must be provided with a file_paths: []")
# Convert single path to list
path_list: List[Union[Path, str]] = (
path_list: list[Path | str] = (
[self.file_paths]
if isinstance(self.file_paths, (str, Path))
else list(self.file_paths)
@@ -100,7 +98,7 @@ class ExcelKnowledgeSource(BaseKnowledgeSource):
self.validate_content()
self.content = self._load_content()
def _load_content(self) -> Dict[Path, Dict[str, str]]:
def _load_content(self) -> dict[Path, dict[str, str]]:
"""Load and preprocess Excel file content from multiple sheets.
Each sheet's content is converted to CSV format and stored.
@@ -126,21 +124,21 @@ class ExcelKnowledgeSource(BaseKnowledgeSource):
content_dict[file_path] = sheet_dict
return content_dict
def convert_to_path(self, path: Union[Path, str]) -> Path:
def convert_to_path(self, path: Path | str) -> Path:
"""Convert a path to a Path object."""
return Path(KNOWLEDGE_DIRECTORY + "/" + path) if isinstance(path, str) else path
def _import_dependencies(self):
"""Dynamically import dependencies."""
try:
import pandas as pd
import pandas as pd # type: ignore[import-untyped,import-not-found]
return pd
except ImportError as e:
missing_package = str(e).split()[-1]
raise ImportError(
f"{missing_package} is not installed. Please install it with: pip install {missing_package}"
)
) from e
def add(self) -> None:
"""
@@ -161,7 +159,7 @@ class ExcelKnowledgeSource(BaseKnowledgeSource):
self.chunks.extend(new_chunks)
self._save_documents()
def _chunk_text(self, text: str) -> List[str]:
def _chunk_text(self, text: str) -> list[str]:
"""Utility method to split text into chunks."""
return [
text[i : i + self.chunk_size]

View File
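
ExcelKnowledgeSource accepts a single path or a list of paths, each as a str or Path; _process_file_paths collapses that union into list[Path]. A standalone sketch of the normalization, with the KNOWLEDGE_DIRECTORY value assumed for illustration:

from pathlib import Path

KNOWLEDGE_DIRECTORY = "knowledge"  # assumed value for illustration

def normalize_paths(
    paths: Path | list[Path] | str | list[str] | None,
) -> list[Path]:
    if paths is None:
        raise ValueError("Either file_path or file_paths must be provided")
    # Promote a single path to a list, then root bare strings in the knowledge dir.
    path_list: list[Path | str] = (
        [paths] if isinstance(paths, (str, Path)) else list(paths)
    )
    return [Path(KNOWLEDGE_DIRECTORY) / p if isinstance(p, str) else p for p in path_list]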

@@ -1,6 +1,6 @@
import json
from pathlib import Path
from typing import Any, Dict, List
from typing import Any
from crewai.knowledge.source.base_file_knowledge_source import BaseFileKnowledgeSource
@@ -8,9 +8,9 @@ from crewai.knowledge.source.base_file_knowledge_source import BaseFileKnowledgeSource
class JSONKnowledgeSource(BaseFileKnowledgeSource):
"""A knowledge source that stores and queries JSON file content using embeddings."""
def load_content(self) -> Dict[Path, str]:
def load_content(self) -> dict[Path, str]:
"""Load and preprocess JSON file content."""
content: Dict[Path, str] = {}
content: dict[Path, str] = {}
for path in self.safe_file_paths:
path = self.convert_to_path(path)
with open(path, "r", encoding="utf-8") as json_file:
@@ -29,7 +29,7 @@ class JSONKnowledgeSource(BaseFileKnowledgeSource):
for item in data:
text += f"{indent}- {self._json_to_text(item, level + 1)}\n"
else:
text += f"{str(data)}"
text += f"{data!s}"
return text
def add(self) -> None:
@@ -44,7 +44,7 @@ class JSONKnowledgeSource(BaseFileKnowledgeSource):
self.chunks.extend(new_chunks)
self._save_documents()
def _chunk_text(self, text: str) -> List[str]:
def _chunk_text(self, text: str) -> list[str]:
"""Utility method to split text into chunks."""
return [
text[i : i + self.chunk_size]

View File
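
The str(data) to data!s changes here and elsewhere are behavior-preserving: !s applies str() during f-string formatting, which ruff's RUF010 rule prefers. For example:

e = ValueError("boom")
assert f"Error: {str(e)}" == f"Error: {e!s}" == "Error: boom"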

@@ -1,5 +1,4 @@
from pathlib import Path
from typing import Dict, List
from crewai.knowledge.source.base_file_knowledge_source import BaseFileKnowledgeSource
@@ -7,7 +6,7 @@ from crewai.knowledge.source.base_file_knowledge_source import BaseFileKnowledgeSource
class PDFKnowledgeSource(BaseFileKnowledgeSource):
"""A knowledge source that stores and queries PDF file content using embeddings."""
def load_content(self) -> Dict[Path, str]:
def load_content(self) -> dict[Path, str]:
"""Load and preprocess PDF file content."""
pdfplumber = self._import_pdfplumber()
@@ -30,22 +29,22 @@ class PDFKnowledgeSource(BaseFileKnowledgeSource):
import pdfplumber
return pdfplumber
except ImportError:
except ImportError as e:
raise ImportError(
"pdfplumber is not installed. Please install it with: pip install pdfplumber"
)
) from e
def add(self) -> None:
"""
Add PDF file content to the knowledge source, chunk it, compute embeddings,
and save the embeddings.
"""
for _, text in self.content.items():
for text in self.content.values():
new_chunks = self._chunk_text(text)
self.chunks.extend(new_chunks)
self._save_documents()
def _chunk_text(self, text: str) -> List[str]:
def _chunk_text(self, text: str) -> list[str]:
"""Utility method to split text into chunks."""
return [
text[i : i + self.chunk_size]

View File
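
Re-raising with from e (ruff B904), as in the pdfplumber hunk above, chains the original ImportError as __cause__ so the traceback shows both the root failure and the friendlier message:

def import_pdf_backend():
    try:
        import pdfplumber  # optional dependency
        return pdfplumber
    except ImportError as e:
        # from e preserves the original ImportError instead of discarding it
        raise ImportError(
            "pdfplumber is not installed. Please install it with: pip install pdfplumber"
        ) from e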

@@ -1,5 +1,3 @@
from typing import List, Optional
from pydantic import Field
from crewai.knowledge.source.base_knowledge_source import BaseKnowledgeSource
@@ -9,7 +7,7 @@ class StringKnowledgeSource(BaseKnowledgeSource):
"""A knowledge source that stores and queries plain text content using embeddings."""
content: str = Field(...)
collection_name: Optional[str] = Field(default=None)
collection_name: str | None = Field(default=None)
def model_post_init(self, _):
"""Post-initialization method to validate content."""
@@ -26,7 +24,7 @@ class StringKnowledgeSource(BaseKnowledgeSource):
self.chunks.extend(new_chunks)
self._save_documents()
def _chunk_text(self, text: str) -> List[str]:
def _chunk_text(self, text: str) -> list[str]:
"""Utility method to split text into chunks."""
return [
text[i : i + self.chunk_size]

View File

@@ -1,5 +1,4 @@
from pathlib import Path
from typing import Dict, List
from crewai.knowledge.source.base_file_knowledge_source import BaseFileKnowledgeSource
@@ -7,7 +6,7 @@ from crewai.knowledge.source.base_file_knowledge_source import BaseFileKnowledgeSource
class TextFileKnowledgeSource(BaseFileKnowledgeSource):
"""A knowledge source that stores and queries text file content using embeddings."""
def load_content(self) -> Dict[Path, str]:
def load_content(self) -> dict[Path, str]:
"""Load and preprocess text file content."""
content = {}
for path in self.safe_file_paths:
@@ -21,12 +20,12 @@ class TextFileKnowledgeSource(BaseFileKnowledgeSource):
Add text file content to the knowledge source, chunk it, compute embeddings,
and save the embeddings.
"""
for _, text in self.content.items():
for text in self.content.values():
new_chunks = self._chunk_text(text)
self.chunks.extend(new_chunks)
self._save_documents()
def _chunk_text(self, text: str) -> List[str]:
def _chunk_text(self, text: str) -> list[str]:
"""Utility method to split text into chunks."""
return [
text[i : i + self.chunk_size]

View File
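
The items() to values() change (ruff PERF102) drops the tuple unpacking when the key is never used:

content = {"a.txt": "alpha", "b.txt": "beta"}

for _, text in content.items():  # before: unpacks a key only to discard it
    pass

for text in content.values():  # after: iterate the values directly
    print(text)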

@@ -1,35 +1,24 @@
import asyncio
import inspect
import uuid
from collections.abc import Callable
from typing import (
Any,
Callable,
Dict,
List,
Optional,
Tuple,
Type,
Union,
cast,
get_args,
get_origin,
)
try:
from typing import Self
except ImportError:
from typing_extensions import Self
from pydantic import (
UUID4,
BaseModel,
Field,
InstanceOf,
PrivateAttr,
model_validator,
field_validator,
model_validator,
)
from typing_extensions import Self
from crewai.agents.agent_builder.base_agent import BaseAgent
from crewai.agents.agent_builder.utilities.base_token_process import TokenProcess
@@ -37,14 +26,20 @@ from crewai.agents.cache import CacheHandler
from crewai.agents.parser import (
AgentAction,
AgentFinish,
OutputParserException,
OutputParserError,
)
from crewai.events.event_bus import crewai_event_bus
from crewai.events.types.agent_events import (
LiteAgentExecutionCompletedEvent,
LiteAgentExecutionErrorEvent,
LiteAgentExecutionStartedEvent,
)
from crewai.events.types.logging_events import AgentLogsExecutionEvent
from crewai.flow.flow_trackable import FlowTrackable
from crewai.llm import LLM, BaseLLM
from crewai.tools.base_tool import BaseTool
from crewai.tools.structured_tool import CrewStructuredTool
from crewai.utilities import I18N
from crewai.utilities.guardrail import process_guardrail
from crewai.utilities.agent_utils import (
enforce_rpm_limit,
format_message_for_llm,
@@ -62,14 +57,7 @@ from crewai.utilities.agent_utils import (
render_text_description_and_args,
)
from crewai.utilities.converter import generate_model_description
from crewai.events.types.logging_events import AgentLogsExecutionEvent
from crewai.events.types.agent_events import (
LiteAgentExecutionCompletedEvent,
LiteAgentExecutionErrorEvent,
LiteAgentExecutionStartedEvent,
)
from crewai.events.event_bus import crewai_event_bus
from crewai.utilities.guardrail import process_guardrail
from crewai.utilities.llm_utils import create_llm
from crewai.utilities.printer import Printer
from crewai.utilities.token_counter_callback import TokenCalcHandler
@@ -82,15 +70,15 @@ class LiteAgentOutput(BaseModel):
model_config = {"arbitrary_types_allowed": True}
raw: str = Field(description="Raw output of the agent", default="")
pydantic: Optional[BaseModel] = Field(
pydantic: BaseModel | None = Field(
description="Pydantic output of the agent", default=None
)
agent_role: str = Field(description="Role of the agent that produced this output")
usage_metrics: Optional[Dict[str, Any]] = Field(
usage_metrics: dict[str, Any] | None = Field(
description="Token usage metrics for this execution", default=None
)
def to_dict(self) -> Dict[str, Any]:
def to_dict(self) -> dict[str, Any]:
"""Convert pydantic_output to a dictionary."""
if self.pydantic:
return self.pydantic.model_dump()
@@ -130,10 +118,10 @@ class LiteAgent(FlowTrackable, BaseModel):
role: str = Field(description="Role of the agent")
goal: str = Field(description="Goal of the agent")
backstory: str = Field(description="Backstory of the agent")
llm: Optional[Union[str, InstanceOf[BaseLLM], Any]] = Field(
llm: str | InstanceOf[BaseLLM] | Any | None = Field(
default=None, description="Language model that will run the agent"
)
tools: List[BaseTool] = Field(
tools: list[BaseTool] = Field(
default_factory=list, description="Tools at agent's disposal"
)
@@ -141,7 +129,7 @@ class LiteAgent(FlowTrackable, BaseModel):
max_iterations: int = Field(
default=15, description="Maximum number of iterations for tool usage"
)
max_execution_time: Optional[int] = Field(
max_execution_time: int | None = Field(
default=None, description="Maximum execution time in seconds"
)
respect_context_window: bool = Field(
@@ -152,52 +140,50 @@ class LiteAgent(FlowTrackable, BaseModel):
default=True,
description="Whether to use stop words to prevent the LLM from using tools",
)
request_within_rpm_limit: Optional[Callable[[], bool]] = Field(
request_within_rpm_limit: Callable[[], bool] | None = Field(
default=None,
description="Callback to check if the request is within the RPM limit",
)
i18n: I18N = Field(default=I18N(), description="Internationalization settings.")
# Output and Formatting Properties
response_format: Optional[Type[BaseModel]] = Field(
response_format: type[BaseModel] | None = Field(
default=None, description="Pydantic model for structured output"
)
verbose: bool = Field(
default=False, description="Whether to print execution details"
)
callbacks: List[Callable] = Field(
callbacks: list[Callable] = Field(
default=[], description="Callbacks to be used for the agent"
)
# Guardrail Properties
guardrail: Optional[Union[Callable[[LiteAgentOutput], Tuple[bool, Any]], str]] = (
Field(
default=None,
description="Function or string description of a guardrail to validate agent output",
)
guardrail: Callable[[LiteAgentOutput], tuple[bool, Any]] | str | None = Field(
default=None,
description="Function or string description of a guardrail to validate agent output",
)
guardrail_max_retries: int = Field(
default=3, description="Maximum number of retries when guardrail fails"
)
# State and Results
tools_results: List[Dict[str, Any]] = Field(
tools_results: list[dict[str, Any]] = Field(
default=[], description="Results of the tools used by the agent."
)
# Reference of Agent
original_agent: Optional[BaseAgent] = Field(
original_agent: BaseAgent | None = Field(
default=None, description="Reference to the agent that created this LiteAgent"
)
# Private Attributes
_parsed_tools: List[CrewStructuredTool] = PrivateAttr(default_factory=list)
_parsed_tools: list[CrewStructuredTool] = PrivateAttr(default_factory=list)
_token_process: TokenProcess = PrivateAttr(default_factory=TokenProcess)
_cache_handler: CacheHandler = PrivateAttr(default_factory=CacheHandler)
_key: str = PrivateAttr(default_factory=lambda: str(uuid.uuid4()))
_messages: List[Dict[str, str]] = PrivateAttr(default_factory=list)
_messages: list[dict[str, str]] = PrivateAttr(default_factory=list)
_iterations: int = PrivateAttr(default=0)
_printer: Printer = PrivateAttr(default_factory=Printer)
_guardrail: Optional[Callable] = PrivateAttr(default=None)
_guardrail: Callable | None = PrivateAttr(default=None)
_guardrail_retry_count: int = PrivateAttr(default=0)
@model_validator(mode="after")
@@ -241,8 +227,8 @@ class LiteAgent(FlowTrackable, BaseModel):
@field_validator("guardrail", mode="before")
@classmethod
def validate_guardrail_function(
cls, v: Optional[Union[Callable, str]]
) -> Optional[Union[Callable, str]]:
cls, v: Callable | str | None
) -> Callable | str | None:
"""Validate that the guardrail function has the correct signature.
If v is a callable, validate that it has the correct signature.
@@ -267,7 +253,7 @@ class LiteAgent(FlowTrackable, BaseModel):
# Check return annotation if present
if sig.return_annotation is not sig.empty:
if sig.return_annotation == Tuple[bool, Any]:
if sig.return_annotation == tuple[bool, Any]:
return v
origin = get_origin(sig.return_annotation)
@@ -290,7 +276,7 @@ class LiteAgent(FlowTrackable, BaseModel):
"""Return the original role for compatibility with tool interfaces."""
return self.role
def kickoff(self, messages: Union[str, List[Dict[str, str]]]) -> LiteAgentOutput:
def kickoff(self, messages: str | list[dict[str, str]]) -> LiteAgentOutput:
"""
Execute the agent with the given messages.
@@ -338,7 +324,7 @@ class LiteAgent(FlowTrackable, BaseModel):
)
raise e
def _execute_core(self, agent_info: Dict[str, Any]) -> LiteAgentOutput:
def _execute_core(self, agent_info: dict[str, Any]) -> LiteAgentOutput:
# Emit event for agent execution start
crewai_event_bus.emit(
self,
@@ -351,7 +337,7 @@ class LiteAgent(FlowTrackable, BaseModel):
# Execute the agent using invoke loop
agent_finish = self._invoke_loop()
formatted_result: Optional[BaseModel] = None
formatted_result: BaseModel | None = None
if self.response_format:
try:
# Cast to BaseModel to ensure type safety
@@ -360,7 +346,7 @@ class LiteAgent(FlowTrackable, BaseModel):
formatted_result = result
except Exception as e:
self._printer.print(
content=f"Failed to parse output into response format: {str(e)}",
content=f"Failed to parse output into response format: {e!s}",
color="yellow",
)
@@ -381,6 +367,7 @@ class LiteAgent(FlowTrackable, BaseModel):
output=output,
guardrail=self._guardrail,
retry_count=self._guardrail_retry_count,
event_source=self,
)
if not guardrail_result.success:
@@ -428,7 +415,7 @@ class LiteAgent(FlowTrackable, BaseModel):
return output
async def kickoff_async(
self, messages: Union[str, List[Dict[str, str]]]
self, messages: str | list[dict[str, str]]
) -> LiteAgentOutput:
"""
Execute the agent asynchronously with the given messages.
@@ -475,8 +462,8 @@ class LiteAgent(FlowTrackable, BaseModel):
return base_prompt
def _format_messages(
self, messages: Union[str, List[Dict[str, str]]]
) -> List[Dict[str, str]]:
self, messages: str | list[dict[str, str]]
) -> list[dict[str, str]]:
"""Format messages for the LLM."""
if isinstance(messages, str):
messages = [{"role": "user", "content": messages}]
@@ -548,7 +535,7 @@ class LiteAgent(FlowTrackable, BaseModel):
)
self._append_message(formatted_answer.text, role="assistant")
except OutputParserException as e:
except OutputParserError as e: # noqa: PERF203
formatted_answer = handle_output_parser_exception(
e=e,
messages=self._messages,
@@ -571,18 +558,21 @@ class LiteAgent(FlowTrackable, BaseModel):
i18n=self.i18n,
)
continue
else:
handle_unknown_error(self._printer, e)
raise e
handle_unknown_error(self._printer, e)
raise e
finally:
self._iterations += 1
assert isinstance(formatted_answer, AgentFinish)
if not isinstance(formatted_answer, AgentFinish):
raise RuntimeError(
"Agent execution ended without reaching a final answer. "
f"Got {type(formatted_answer).__name__} instead of AgentFinish."
)
self._show_logs(formatted_answer)
return formatted_answer
def _show_logs(self, formatted_answer: Union[AgentAction, AgentFinish]):
def _show_logs(self, formatted_answer: AgentAction | AgentFinish):
"""Show logs for the agent's execution."""
crewai_event_bus.emit(
self,

View File
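
The guardrail validator accepts callables returning tuple[bool, Any]; a direct annotation comparison only matches that exact alias, so the code falls back to get_origin/get_args introspection. A standalone sketch of that check, an assumption about what follows the lines shown rather than a verbatim excerpt:

import inspect
from typing import Any, get_args, get_origin

def returns_bool_any_tuple(fn) -> bool:
    sig = inspect.signature(fn)
    ann = sig.return_annotation
    if ann is sig.empty:
        return True  # unannotated: accept and rely on runtime validation
    if ann == tuple[bool, Any]:
        return True
    # Fall back to structural inspection: a tuple whose first element is bool.
    return get_origin(ann) is tuple and get_args(ann)[:1] == (bool,)

def guardrail(output) -> tuple[bool, Any]:
    return True, output

assert returns_bool_any_tuple(guardrail)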

@@ -1,11 +1,11 @@
from .entity.entity_memory import EntityMemory
from .external.external_memory import ExternalMemory
from .long_term.long_term_memory import LongTermMemory
from .short_term.short_term_memory import ShortTermMemory
from .external.external_memory import ExternalMemory
__all__ = [
"EntityMemory",
"ExternalMemory",
"LongTermMemory",
"ShortTermMemory",
"ExternalMemory",
]

View File

@@ -1,12 +1,12 @@
from typing import Any, Dict, Optional
from typing import Any
class ExternalMemoryItem:
def __init__(
self,
value: Any,
metadata: Optional[Dict[str, Any]] = None,
agent: Optional[str] = None,
metadata: dict[str, Any] | None = None,
agent: str | None = None,
):
self.value = value
self.metadata = metadata

View File

@@ -1,17 +1,17 @@
from typing import Any, Dict, List
import time
from typing import Any
from crewai.memory.long_term.long_term_memory_item import LongTermMemoryItem
from crewai.memory.memory import Memory
from crewai.events.event_bus import crewai_event_bus
from crewai.events.types.memory_events import (
MemoryQueryStartedEvent,
MemoryQueryCompletedEvent,
MemoryQueryFailedEvent,
MemorySaveStartedEvent,
MemoryQueryStartedEvent,
MemorySaveCompletedEvent,
MemorySaveFailedEvent,
MemorySaveStartedEvent,
)
from crewai.memory.long_term.long_term_memory_item import LongTermMemoryItem
from crewai.memory.memory import Memory
from crewai.memory.storage.ltm_sqlite_storage import LTMSQLiteStorage
@@ -84,7 +84,7 @@ class LongTermMemory(Memory):
self,
task: str,
latest_n: int = 3,
) -> List[Dict[str, Any]]: # type: ignore # signature of "search" incompatible with supertype "Memory"
) -> list[dict[str, Any]]: # type: ignore # signature of "search" incompatible with supertype "Memory"
crewai_event_bus.emit(
self,
event=MemoryQueryStartedEvent(

View File

@@ -1,4 +1,4 @@
from typing import Any, Dict, Optional, Union
from typing import Any
class LongTermMemoryItem:
@@ -8,8 +8,8 @@ class LongTermMemoryItem:
task: str,
expected_output: str,
datetime: str,
quality: Optional[Union[int, float]] = None,
metadata: Optional[Dict[str, Any]] = None,
quality: int | float | None = None,
metadata: dict[str, Any] | None = None,
):
self.task = task
self.agent = agent

View File

@@ -1,12 +1,12 @@
from typing import Any, Dict, Optional
from typing import Any
class ShortTermMemoryItem:
def __init__(
self,
data: Any,
agent: Optional[str] = None,
metadata: Optional[Dict[str, Any]] = None,
agent: str | None = None,
metadata: dict[str, Any] | None = None,
):
self.data = data
self.agent = agent

View File

@@ -1,15 +1,15 @@
from typing import Any, Dict, List
from typing import Any
class Storage:
"""Abstract base class defining the storage interface"""
def save(self, value: Any, metadata: Dict[str, Any]) -> None:
def save(self, value: Any, metadata: dict[str, Any]) -> None:
pass
def search(
self, query: str, limit: int, score_threshold: float
) -> Dict[str, Any] | List[Any]:
) -> dict[str, Any] | list[Any]:
return {}
def reset(self) -> None:

View File
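
Storage above is a plain interface with no-op defaults; concrete backends override save/search/reset. A toy in-memory implementation, purely illustrative:

from typing import Any

class InMemoryStorage:
    """Duck-typed to the Storage interface shown above."""

    def __init__(self) -> None:
        self._items: list[tuple[Any, dict[str, Any]]] = []

    def save(self, value: Any, metadata: dict[str, Any]) -> None:
        self._items.append((value, metadata))

    def search(
        self, query: str, limit: int, score_threshold: float
    ) -> dict[str, Any] | list[Any]:
        # Naive substring match; real backends score embeddings against the query.
        return [v for v, _ in self._items if query in str(v)][:limit]

    def reset(self) -> None:
        self._items.clear()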

@@ -2,7 +2,7 @@ import json
import logging
import sqlite3
from pathlib import Path
from typing import Any, Dict, List, Optional
from typing import Any
from crewai.task import Task
from crewai.utilities import Printer
@@ -18,7 +18,7 @@ class KickoffTaskOutputsSQLiteStorage:
An updated SQLite storage class for kickoff task outputs storage.
"""
def __init__(self, db_path: Optional[str] = None) -> None:
def __init__(self, db_path: str | None = None) -> None:
if db_path is None:
# Get the parent directory of the default db path and create our db file there
db_path = str(Path(db_storage_path()) / "latest_kickoff_task_outputs.db")
@@ -57,15 +57,15 @@ class KickoffTaskOutputsSQLiteStorage:
except sqlite3.Error as e:
error_msg = DatabaseError.format_error(DatabaseError.INIT_ERROR, e)
logger.error(error_msg)
raise DatabaseOperationError(error_msg, e)
raise DatabaseOperationError(error_msg, e) from e
def add(
self,
task: Task,
output: Dict[str, Any],
output: dict[str, Any],
task_index: int,
was_replayed: bool = False,
inputs: Dict[str, Any] | None = None,
inputs: dict[str, Any] | None = None,
) -> None:
"""Add a new task output record to the database.
@@ -103,7 +103,7 @@ class KickoffTaskOutputsSQLiteStorage:
except sqlite3.Error as e:
error_msg = DatabaseError.format_error(DatabaseError.SAVE_ERROR, e)
logger.error(error_msg)
raise DatabaseOperationError(error_msg, e)
raise DatabaseOperationError(error_msg, e) from e
def update(
self,
@@ -138,7 +138,7 @@ class KickoffTaskOutputsSQLiteStorage:
else value
)
query = f"UPDATE latest_kickoff_task_outputs SET {', '.join(fields)} WHERE task_index = ?" # nosec
query = f"UPDATE latest_kickoff_task_outputs SET {', '.join(fields)} WHERE task_index = ?" # nosec # noqa: S608
values.append(task_index)
cursor.execute(query, tuple(values))
@@ -151,9 +151,9 @@ class KickoffTaskOutputsSQLiteStorage:
except sqlite3.Error as e:
error_msg = DatabaseError.format_error(DatabaseError.UPDATE_ERROR, e)
logger.error(error_msg)
raise DatabaseOperationError(error_msg, e)
raise DatabaseOperationError(error_msg, e) from e
def load(self) -> List[Dict[str, Any]]:
def load(self) -> list[dict[str, Any]]:
"""Load all task output records from the database.
Returns:
@@ -192,7 +192,7 @@ class KickoffTaskOutputsSQLiteStorage:
except sqlite3.Error as e:
error_msg = DatabaseError.format_error(DatabaseError.LOAD_ERROR, e)
logger.error(error_msg)
raise DatabaseOperationError(error_msg, e)
raise DatabaseOperationError(error_msg, e) from e
def delete_all(self) -> None:
"""Delete all task output records from the database.
@@ -212,4 +212,4 @@ class KickoffTaskOutputsSQLiteStorage:
except sqlite3.Error as e:
error_msg = DatabaseError.format_error(DatabaseError.DELETE_ERROR, e)
logger.error(error_msg)
raise DatabaseOperationError(error_msg, e)
raise DatabaseOperationError(error_msg, e) from e

View File
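
The nosec / noqa: S608 suppressions above mark queries whose column names are interpolated into the SQL string while every value still flows through ? placeholders. That split is safe as long as the identifiers come from code, not users; a sketch with an assumed whitelist:

import sqlite3

ALLOWED_FIELDS = {"output", "was_replayed"}  # assumed whitelist for illustration

def update_row(conn: sqlite3.Connection, task_index: int, **updates) -> None:
    fields: list[str] = []
    values: list[object] = []
    for name, value in updates.items():
        if name not in ALLOWED_FIELDS:
            raise ValueError(f"unexpected field: {name}")
        fields.append(f"{name} = ?")  # identifier comes from the whitelist only
        values.append(value)
    query = f"UPDATE latest_kickoff_task_outputs SET {', '.join(fields)} WHERE task_index = ?"
    values.append(task_index)
    conn.execute(query, tuple(values))  # data is bound through placeholders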

@@ -1,7 +1,7 @@
import json
import sqlite3
from pathlib import Path
from typing import Any, Dict, List, Optional, Union
from typing import Any
from crewai.utilities import Printer
from crewai.utilities.paths import db_storage_path
@@ -12,9 +12,7 @@ class LTMSQLiteStorage:
An updated SQLite storage class for LTM data storage.
"""
def __init__(
self, db_path: Optional[str] = None
) -> None:
def __init__(self, db_path: str | None = None) -> None:
if db_path is None:
# Get the parent directory of the default db path and create our db file there
db_path = str(Path(db_storage_path()) / "long_term_memory_storage.db")
@@ -53,9 +51,9 @@ class LTMSQLiteStorage:
def save(
self,
task_description: str,
metadata: Dict[str, Any],
metadata: dict[str, Any],
datetime: str,
score: Union[int, float],
score: int | float,
) -> None:
"""Saves data to the LTM table with error handling."""
try:
@@ -75,9 +73,7 @@ class LTMSQLiteStorage:
color="red",
)
def load(
self, task_description: str, latest_n: int
) -> Optional[List[Dict[str, Any]]]:
def load(self, task_description: str, latest_n: int) -> list[dict[str, Any]] | None:
"""Queries the LTM table by task description with error handling."""
try:
with sqlite3.connect(self.db_path) as conn:
@@ -89,7 +85,7 @@ class LTMSQLiteStorage:
WHERE task_description = ?
ORDER BY datetime DESC, score ASC
LIMIT {latest_n}
""", # nosec
""", # nosec # noqa: S608
(task_description,),
)
rows = cursor.fetchall()
@@ -125,4 +121,4 @@ class LTMSQLiteStorage:
content=f"MEMORY ERROR: An error occurred while deleting all rows in LTM: {e}",
color="red",
)
return None
return

View File

@@ -1,9 +1,10 @@
import logging
import traceback
import warnings
from typing import Any
from typing import Any, cast
from crewai.rag.chromadb.config import ChromaDBConfig
from crewai.rag.chromadb.types import ChromaEmbeddingFunctionWrapper
from crewai.rag.config.utils import get_rag_client
from crewai.rag.core.base_client import BaseClient
from crewai.rag.embeddings.factory import get_embedding_function
@@ -21,8 +22,13 @@ class RAGStorage(BaseRAGStorage):
"""
def __init__(
self, type, allow_reset=True, embedder_config=None, crew=None, path=None
):
self,
type: str,
allow_reset: bool = True,
embedder_config: dict[str, Any] | None = None,
crew: Any = None,
path: str | None = None,
) -> None:
super().__init__(type, allow_reset, embedder_config, crew)
agents = crew.agents if crew else []
agents = [self._sanitize_role(agent.role) for agent in agents]
@@ -44,7 +50,11 @@ class RAGStorage(BaseRAGStorage):
if self.embedder_config:
embedding_function = get_embedding_function(self.embedder_config)
config = ChromaDBConfig(embedding_function=embedding_function)
config = ChromaDBConfig(
embedding_function=cast(
ChromaEmbeddingFunctionWrapper, embedding_function
)
)
self._client = create_client(config)
def _get_client(self) -> BaseClient:

View File
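
cast, as used above for the embedding function, performs no runtime conversion or check; it only tells the type checker to treat the value as the narrower type ChromaDBConfig expects:

from typing import cast

def get_value() -> object:
    return "hello"

v = cast(str, get_value())  # a no-op at runtime; the checker now sees a str
print(v.upper())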

@@ -5,20 +5,14 @@ import logging
import threading
import uuid
import warnings
from collections.abc import Callable
from concurrent.futures import Future
from copy import copy
from hashlib import md5
from pathlib import Path
from typing import (
Any,
Callable,
ClassVar,
Dict,
List,
Optional,
Set,
Tuple,
Type,
Union,
get_args,
get_origin,
@@ -35,20 +29,20 @@ from pydantic import (
from pydantic_core import PydanticCustomError
from crewai.agents.agent_builder.base_agent import BaseAgent
from crewai.events.event_bus import crewai_event_bus
from crewai.events.event_types import (
TaskCompletedEvent,
TaskFailedEvent,
TaskStartedEvent,
)
from crewai.security import Fingerprint, SecurityConfig
from crewai.tasks.output_format import OutputFormat
from crewai.tasks.task_output import TaskOutput
from crewai.tools.base_tool import BaseTool
from crewai.utilities.config import process_config
from crewai.utilities.constants import NOT_SPECIFIED, _NotSpecified
from crewai.utilities.guardrail import process_guardrail, GuardrailResult
from crewai.utilities.converter import Converter, convert_to_model
from crewai.events.event_types import (
TaskCompletedEvent,
TaskFailedEvent,
TaskStartedEvent,
)
from crewai.events.event_bus import crewai_event_bus
from crewai.utilities.guardrail import process_guardrail
from crewai.utilities.i18n import I18N
from crewai.utilities.printer import Printer
from crewai.utilities.string_utils import interpolate_only
@@ -85,50 +79,50 @@ class Task(BaseModel):
tools_errors: int = 0
delegations: int = 0
i18n: I18N = I18N()
name: Optional[str] = Field(default=None)
prompt_context: Optional[str] = None
name: str | None = Field(default=None)
prompt_context: str | None = None
description: str = Field(description="Description of the actual task.")
expected_output: str = Field(
description="Clear definition of expected output for the task."
)
config: Optional[Dict[str, Any]] = Field(
config: dict[str, Any] | None = Field(
description="Configuration for the agent",
default=None,
)
callback: Optional[Any] = Field(
callback: Any | None = Field(
description="Callback to be executed after the task is completed.", default=None
)
agent: Optional[BaseAgent] = Field(
agent: BaseAgent | None = Field(
description="Agent responsible for execution the task.", default=None
)
context: Union[List["Task"], None, _NotSpecified] = Field(
context: list["Task"] | None | _NotSpecified = Field(
description="Other tasks that will have their output used as context for this task.",
default=NOT_SPECIFIED,
)
async_execution: Optional[bool] = Field(
async_execution: bool | None = Field(
description="Whether the task should be executed asynchronously or not.",
default=False,
)
output_json: Optional[Type[BaseModel]] = Field(
output_json: type[BaseModel] | None = Field(
description="A Pydantic model to be used to create a JSON output.",
default=None,
)
output_pydantic: Optional[Type[BaseModel]] = Field(
output_pydantic: type[BaseModel] | None = Field(
description="A Pydantic model to be used to create a Pydantic output.",
default=None,
)
output_file: Optional[str] = Field(
output_file: str | None = Field(
description="A file path to be used to create a file output.",
default=None,
)
create_directory: Optional[bool] = Field(
create_directory: bool | None = Field(
description="Whether to create the directory for output_file if it doesn't exist.",
default=True,
)
output: Optional[TaskOutput] = Field(
output: TaskOutput | None = Field(
description="Task output, it's final result after being executed", default=None
)
tools: Optional[List[BaseTool]] = Field(
tools: list[BaseTool] | None = Field(
default_factory=list,
description="Tools the agent is limited to use for this task.",
)
@@ -141,24 +135,24 @@ class Task(BaseModel):
frozen=True,
description="Unique identifier for the object, not set by user.",
)
human_input: Optional[bool] = Field(
human_input: bool | None = Field(
description="Whether the task should have a human review the final answer of the agent",
default=False,
)
markdown: Optional[bool] = Field(
markdown: bool | None = Field(
description="Whether the task should instruct the agent to return the final answer formatted in Markdown",
default=False,
)
converter_cls: Optional[Type[Converter]] = Field(
converter_cls: type[Converter] | None = Field(
description="A converter class used to export structured output",
default=None,
)
processed_by_agents: Set[str] = Field(default_factory=set)
guardrail: Optional[Union[Callable[[TaskOutput], Tuple[bool, Any]], str]] = Field(
processed_by_agents: set[str] = Field(default_factory=set)
guardrail: Callable[[TaskOutput], tuple[bool, Any]] | str | None = Field(
default=None,
description="Function or string description of a guardrail to validate task output before proceeding to next task",
)
max_retries: Optional[int] = Field(
max_retries: int | None = Field(
default=None,
description="[DEPRECATED] Maximum number of retries when guardrail fails. Use guardrail_max_retries instead. Will be removed in v1.0.0",
)
@@ -166,13 +160,13 @@ class Task(BaseModel):
default=3, description="Maximum number of retries when guardrail fails"
)
retry_count: int = Field(default=0, description="Current number of retries")
start_time: Optional[datetime.datetime] = Field(
start_time: datetime.datetime | None = Field(
default=None, description="Start time of the task execution"
)
end_time: Optional[datetime.datetime] = Field(
end_time: datetime.datetime | None = Field(
default=None, description="End time of the task execution"
)
allow_crewai_trigger_context: Optional[bool] = Field(
allow_crewai_trigger_context: bool | None = Field(
default=None,
description="Whether this task should append 'Trigger Payload: {crewai_trigger_payload}' to the task description when crewai_trigger_payload exists in crew inputs.",
)
@@ -181,8 +175,8 @@ class Task(BaseModel):
@field_validator("guardrail")
@classmethod
def validate_guardrail_function(
cls, v: Optional[str | Callable]
) -> Optional[str | Callable]:
cls, v: str | Callable | None
) -> str | Callable | None:
"""
If v is a callable, validate that the guardrail function has the correct signature and behavior.
If v is a string, return it as is.
@@ -229,7 +223,7 @@ class Task(BaseModel):
return_annotation_args[1] is Any
or return_annotation_args[1] is str
or return_annotation_args[1] is TaskOutput
or return_annotation_args[1] == Union[str, TaskOutput]
or return_annotation_args[1] == str | TaskOutput
)
):
raise ValueError(
@@ -237,11 +231,11 @@ class Task(BaseModel):
)
return v
_guardrail: Optional[Callable] = PrivateAttr(default=None)
_original_description: Optional[str] = PrivateAttr(default=None)
_original_expected_output: Optional[str] = PrivateAttr(default=None)
_original_output_file: Optional[str] = PrivateAttr(default=None)
_thread: Optional[threading.Thread] = PrivateAttr(default=None)
_guardrail: Callable | None = PrivateAttr(default=None)
_original_description: str | None = PrivateAttr(default=None)
_original_expected_output: str | None = PrivateAttr(default=None)
_original_output_file: str | None = PrivateAttr(default=None)
_thread: threading.Thread | None = PrivateAttr(default=None)
@model_validator(mode="before")
@classmethod
@@ -265,7 +259,9 @@ class Task(BaseModel):
elif isinstance(self.guardrail, str):
from crewai.tasks.llm_guardrail import LLMGuardrail
assert self.agent is not None
if self.agent is None:
raise ValueError("Agent is required to use LLMGuardrail")
self._guardrail = LLMGuardrail(
description=self.guardrail, llm=self.agent.llm
)
@@ -274,7 +270,7 @@ class Task(BaseModel):
@field_validator("id", mode="before")
@classmethod
def _deny_user_set_id(cls, v: Optional[UUID4]) -> None:
def _deny_user_set_id(cls, v: UUID4 | None) -> None:
if v:
raise PydanticCustomError(
"may_not_set_field", "This field is not to be set by the user.", {}
@@ -282,7 +278,7 @@ class Task(BaseModel):
@field_validator("output_file")
@classmethod
def output_file_validation(cls, value: Optional[str]) -> Optional[str]:
def output_file_validation(cls, value: str | None) -> str | None:
"""Validate the output file path.
Args:
@@ -307,7 +303,7 @@ class Task(BaseModel):
)
# Check for shell expansion first
if value.startswith("~") or value.startswith("$"):
if value.startswith(("~", "$")):
raise ValueError(
"Shell expansion characters are not allowed in output_file paths"
)
@@ -373,9 +369,9 @@ class Task(BaseModel):
def execute_sync(
self,
agent: Optional[BaseAgent] = None,
context: Optional[str] = None,
tools: Optional[List[BaseTool]] = None,
agent: BaseAgent | None = None,
context: str | None = None,
tools: list[BaseTool] | None = None,
) -> TaskOutput:
"""Execute the task synchronously."""
return self._execute_core(agent, context, tools)
@@ -397,8 +393,8 @@ class Task(BaseModel):
def execute_async(
self,
agent: BaseAgent | None = None,
context: Optional[str] = None,
tools: Optional[List[BaseTool]] = None,
context: str | None = None,
tools: list[BaseTool] | None = None,
) -> Future[TaskOutput]:
"""Execute the task asynchronously."""
future: Future[TaskOutput] = Future()
@@ -411,9 +407,9 @@ class Task(BaseModel):
def _execute_task_async(
self,
agent: Optional[BaseAgent],
context: Optional[str],
tools: Optional[List[Any]],
agent: BaseAgent | None,
context: str | None,
tools: list[Any] | None,
future: Future[TaskOutput],
) -> None:
"""Execute the task asynchronously with context handling."""
@@ -422,9 +418,9 @@ class Task(BaseModel):
def _execute_core(
self,
agent: Optional[BaseAgent],
context: Optional[str],
tools: Optional[List[Any]],
agent: BaseAgent | None,
context: str | None,
tools: list[Any] | None,
) -> TaskOutput:
"""Run the core execution logic of the task."""
try:
@@ -465,6 +461,7 @@ class Task(BaseModel):
output=task_output,
guardrail=self._guardrail,
retry_count=self.retry_count,
event_source=self,
)
if not guardrail_result.success:
if self.retry_count >= self.guardrail_max_retries:
@@ -528,41 +525,6 @@ class Task(BaseModel):
crewai_event_bus.emit(self, TaskFailedEvent(error=str(e), task=self))
raise e # Re-raise the exception after emitting the event
def _process_guardrail(self, task_output: TaskOutput) -> GuardrailResult:
assert self._guardrail is not None
from crewai.events.event_types import (
LLMGuardrailCompletedEvent,
LLMGuardrailStartedEvent,
)
from crewai.events.event_bus import crewai_event_bus
crewai_event_bus.emit(
self,
LLMGuardrailStartedEvent(
guardrail=self._guardrail, retry_count=self.retry_count
),
)
try:
result = self._guardrail(task_output)
guardrail_result = GuardrailResult.from_tuple(result)
except Exception as e:
guardrail_result = GuardrailResult(
success=False, result=None, error=f"Guardrail execution error: {str(e)}"
)
crewai_event_bus.emit(
self,
LLMGuardrailCompletedEvent(
success=guardrail_result.success,
result=guardrail_result.result,
error=guardrail_result.error,
retry_count=self.retry_count,
),
)
return guardrail_result
def prompt(self) -> str:
"""Generates the task prompt with optional markdown formatting.
@@ -604,7 +566,7 @@ Follow these guidelines:
return "\n".join(tasks_slices)
def interpolate_inputs_and_add_conversation_history(
self, inputs: Dict[str, Union[str, int, float, Dict[str, Any], List[Any]]]
self, inputs: dict[str, str | int | float | dict[str, Any] | list[Any]]
) -> None:
"""Interpolate inputs into the task description, expected output, and output file path.
Add conversation history if present.
@@ -635,14 +597,14 @@ Follow these guidelines:
f"Missing required template variable '{e.args[0]}' in description"
) from e
except ValueError as e:
raise ValueError(f"Error interpolating description: {str(e)}") from e
raise ValueError(f"Error interpolating description: {e!s}") from e
try:
self.expected_output = interpolate_only(
input_string=self._original_expected_output, inputs=inputs
)
except (KeyError, ValueError) as e:
raise ValueError(f"Error interpolating expected_output: {str(e)}") from e
raise ValueError(f"Error interpolating expected_output: {e!s}") from e
if self.output_file is not None:
try:
@@ -650,11 +612,9 @@ Follow these guidelines:
input_string=self._original_output_file, inputs=inputs
)
except (KeyError, ValueError) as e:
raise ValueError(
f"Error interpolating output_file path: {str(e)}"
) from e
raise ValueError(f"Error interpolating output_file path: {e!s}") from e
if "crew_chat_messages" in inputs and inputs["crew_chat_messages"]:
if inputs.get("crew_chat_messages"):
conversation_instruction = self.i18n.slice(
"conversation_history_instruction"
)
@@ -681,14 +641,14 @@ Follow these guidelines:
"""Increment the tools errors counter."""
self.tools_errors += 1
def increment_delegations(self, agent_name: Optional[str]) -> None:
def increment_delegations(self, agent_name: str | None) -> None:
"""Increment the delegations counter."""
if agent_name:
self.processed_by_agents.add(agent_name)
self.delegations += 1
def copy(
self, agents: List["BaseAgent"], task_mapping: Dict[str, "Task"]
def copy( # type: ignore
self, agents: list["BaseAgent"], task_mapping: dict[str, "Task"]
) -> "Task":
"""Creates a deep copy of the Task while preserving its original class type.
@@ -721,20 +681,18 @@ Follow these guidelines:
cloned_agent = get_agent_by_role(self.agent.role) if self.agent else None
cloned_tools = copy(self.tools) if self.tools else []
copied_task = self.__class__(
return self.__class__(
**copied_data,
context=cloned_context,
agent=cloned_agent,
tools=cloned_tools,
)
return copied_task
def _export_output(
self, result: str
) -> Tuple[Optional[BaseModel], Optional[Dict[str, Any]]]:
pydantic_output: Optional[BaseModel] = None
json_output: Optional[Dict[str, Any]] = None
) -> tuple[BaseModel | None, dict[str, Any] | None]:
pydantic_output: BaseModel | None = None
json_output: dict[str, Any] | None = None
if self.output_pydantic or self.output_json:
model_output = convert_to_model(
@@ -764,7 +722,7 @@ Follow these guidelines:
return OutputFormat.PYDANTIC
return OutputFormat.RAW
def _save_file(self, result: Union[Dict, str, Any]) -> None:
def _save_file(self, result: dict | str | Any) -> None:
"""Save task output to a file.
Note:
@@ -785,7 +743,7 @@ Follow these guidelines:
if self.output_file is None:
raise ValueError("output_file is not set.")
FILEWRITER_RECOMMENDATION = (
filewriter_recommendation = (
"For cross-platform file writing, especially on Windows, "
"use FileWriterTool from crewai_tools package."
)
@@ -811,10 +769,10 @@ Follow these guidelines:
except (OSError, IOError) as e:
raise RuntimeError(
"\n".join(
[f"Failed to save output file: {e}", FILEWRITER_RECOMMENDATION]
[f"Failed to save output file: {e}", filewriter_recommendation]
)
)
return None
) from e
return
def __repr__(self):
return f"Task(description={self.description}, expected_output={self.expected_output})"

View File
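
The hunk above replaces an assert with an explicit raise (as the lite agent diff did for its AgentFinish check). Asserts disappear when Python runs with -O, so invariants that must hold in production should not rely on them:

def require_agent(agent):
    # Before: assert agent is not None  -- silently skipped under python -O
    # After: an explicit check that always runs
    if agent is None:
        raise ValueError("Agent is required to use LLMGuardrail")
    return agent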

@@ -5,12 +5,20 @@ import time
from difflib import SequenceMatcher
from json import JSONDecodeError
from textwrap import dedent
from typing import TYPE_CHECKING, Any, Dict, List, Optional, Union
from typing import TYPE_CHECKING, Any, Union
import json5
from json_repair import repair_json
from json_repair import repair_json # type: ignore[import-untyped,import-error]
from crewai.agents.tools_handler import ToolsHandler
from crewai.events.event_bus import crewai_event_bus
from crewai.events.types.tool_usage_events import (
ToolSelectionErrorEvent,
ToolUsageErrorEvent,
ToolUsageFinishedEvent,
ToolUsageStartedEvent,
ToolValidateInputErrorEvent,
)
from crewai.task import Task
from crewai.telemetry import Telemetry
from crewai.tools.structured_tool import CrewStructuredTool
@@ -20,14 +28,6 @@ from crewai.utilities.agent_utils import (
get_tool_names,
render_text_description_and_args,
)
from crewai.events.event_bus import crewai_event_bus
from crewai.events.types.tool_usage_events import (
ToolSelectionErrorEvent,
ToolUsageErrorEvent,
ToolUsageFinishedEvent,
ToolUsageStartedEvent,
ToolValidateInputErrorEvent,
)
if TYPE_CHECKING:
from crewai.agents.agent_builder.base_agent import BaseAgent
@@ -44,7 +44,7 @@ OPENAI_BIGGER_MODELS = [
]
class ToolUsageErrorException(Exception):
class ToolUsageError(Exception):
"""Exception raised for errors in the tool usage."""
def __init__(self, message: str) -> None:
@@ -60,7 +60,6 @@ class ToolUsage:
task: Task being executed.
tools_handler: Tools handler that will manage the tool usage.
tools: List of tools available for the agent.
original_tools: Original tools available for the agent before being converted to BaseTool.
tools_description: Description of the tools available for the agent.
tools_names: Names of the tools available for the agent.
function_calling_llm: Language model to be used for the tool usage.
@@ -68,13 +67,13 @@ class ToolUsage:
def __init__(
self,
tools_handler: Optional[ToolsHandler],
tools: List[CrewStructuredTool],
task: Optional[Task],
tools_handler: ToolsHandler | None,
tools: list[CrewStructuredTool],
task: Task | None,
function_calling_llm: Any,
agent: Optional[Union["BaseAgent", "LiteAgent"]] = None,
agent: Union["BaseAgent", "LiteAgent"] | None = None,
action: Any = None,
fingerprint_context: Optional[Dict[str, str]] = None,
fingerprint_context: dict[str, str] | None = None,
) -> None:
self._i18n: I18N = agent.i18n if agent else I18N()
self._printer: Printer = Printer()
@@ -105,9 +104,9 @@ class ToolUsage:
return self._tool_calling(tool_string)
def use(
self, calling: Union[ToolCalling, InstructorToolCalling], tool_string: str
self, calling: ToolCalling | InstructorToolCalling, tool_string: str
) -> str:
if isinstance(calling, ToolUsageErrorException):
if isinstance(calling, ToolUsageError):
error = calling.message
if self.agent and self.agent.verbose:
self._printer.print(content=f"\n\n{error}\n", color="red")
@@ -130,8 +129,7 @@ class ToolUsage:
and tool.name == self._i18n.tools("add_image")["name"] # type: ignore
):
try:
result = self._use(tool_string=tool_string, tool=tool, calling=calling)
return result
return self._use(tool_string=tool_string, tool=tool, calling=calling)
except Exception as e:
error = getattr(e, "message", str(e))
@@ -147,7 +145,7 @@ class ToolUsage:
self,
tool_string: str,
tool: CrewStructuredTool,
calling: Union[ToolCalling, InstructorToolCalling],
calling: ToolCalling | InstructorToolCalling,
) -> str:
if self._check_tool_repeated_usage(calling=calling): # type: ignore # _check_tool_repeated_usage of "ToolUsage" does not return a value (it only ever returns None)
try:
@@ -159,8 +157,7 @@ class ToolUsage:
tool_name=tool.name,
attempts=self._run_attempts,
)
result = self._format_result(result=result) # type: ignore # "_format_result" of "ToolUsage" does not return a value (it only ever returns None)
return result # type: ignore # Fix the return type of this function
return self._format_result(result=result) # type: ignore # "_format_result" of "ToolUsage" does not return a value (it only ever returns None)
except Exception:
if self.task:
@@ -176,8 +173,9 @@ class ToolUsage:
"agent": self.agent,
}
if self.agent.fingerprint:
event_data.update(self.agent.fingerprint)
# TODO: Investigate fingerprint attribute availability on BaseAgent/LiteAgent
if self.agent.fingerprint: # type: ignore
event_data.update(self.agent.fingerprint) # type: ignore
if self.task:
event_data["task_name"] = self.task.name or self.task.description
event_data["task_id"] = str(self.task.id)
@@ -188,8 +186,17 @@ class ToolUsage:
result = None # type: ignore
if self.tools_handler and self.tools_handler.cache:
input_str = ""
if calling.arguments:
if isinstance(calling.arguments, dict):
import json
input_str = json.dumps(calling.arguments)
else:
input_str = str(calling.arguments)
result = self.tools_handler.cache.read(
tool=calling.tool_name, input=calling.arguments
tool=calling.tool_name, input=input_str
) # type: ignore
from_cache = result is not None
@@ -207,8 +214,7 @@ class ToolUsage:
try:
result = usage_limit_error
self._telemetry.tool_usage_error(llm=self.function_calling_llm)
result = self._format_result(result=result)
return result
return self._format_result(result=result)
except Exception:
if self.task:
self.task.increment_tools_errors()
@@ -255,7 +261,7 @@ class ToolUsage:
error_message = self._i18n.errors("tool_usage_exception").format(
error=e, tool=tool.name, tool_inputs=tool.description
)
error = ToolUsageErrorException(
error = ToolUsageError(
f"\n{error_message}.\nMoving on then. {self._i18n.slice('format').format(tool_names=self.tools_names)}"
).message
if self.task:
@@ -346,7 +352,7 @@ class ToolUsage:
return result
def _check_tool_repeated_usage(
self, calling: Union[ToolCalling, InstructorToolCalling]
self, calling: ToolCalling | InstructorToolCalling
) -> bool:
if not self.tools_handler:
return False
@@ -356,7 +362,8 @@ class ToolUsage:
)
return False
def _check_usage_limit(self, tool: Any, tool_name: str) -> str | None:
@staticmethod
def _check_usage_limit(tool: Any, tool_name: str) -> str | None:
"""Check if tool has reached its usage limit.
Args:
@@ -393,7 +400,7 @@ class ToolUsage:
return tool
if self.task:
self.task.increment_tools_errors()
tool_selection_data: Dict[str, Any] = {
tool_selection_data: dict[str, Any] = {
"agent_key": getattr(self.agent, "key", None) if self.agent else None,
"agent_role": getattr(self.agent, "role", None) if self.agent else None,
"tool_name": tool_name,
@@ -410,27 +417,24 @@ class ToolUsage:
),
)
raise Exception(error)
else:
error = f"I forgot the Action name, these are the only available Actions: {self.tools_description}"
crewai_event_bus.emit(
self,
ToolSelectionErrorEvent(
**tool_selection_data,
error=error,
),
)
raise Exception(error)
error = f"I forgot the Action name, these are the only available Actions: {self.tools_description}"
crewai_event_bus.emit(
self,
ToolSelectionErrorEvent(
**tool_selection_data,
error=error,
),
)
raise Exception(error)
def _render(self) -> str:
"""Render the tool name and description in plain text."""
descriptions = []
for tool in self.tools:
descriptions.append(tool.description)
descriptions = [tool.description for tool in self.tools]
return "\n--\n".join(descriptions)
def _function_calling(
self, tool_string: str
) -> Union[ToolCalling, InstructorToolCalling]:
) -> ToolCalling | InstructorToolCalling:
model = (
InstructorToolCalling
if self.function_calling_llm.supports_function_calling()
@@ -453,13 +457,13 @@ class ToolUsage:
)
tool_object = converter.to_pydantic()
if not isinstance(tool_object, (ToolCalling, InstructorToolCalling)):
raise ToolUsageErrorException("Failed to parse tool calling")
raise ToolUsageError("Failed to parse tool calling")
return tool_object
def _original_tool_calling(
self, tool_string: str, raise_error: bool = False
) -> Union[ToolCalling, InstructorToolCalling, ToolUsageErrorException]:
) -> ToolCalling | InstructorToolCalling | ToolUsageError:
tool_name = self.action.tool
tool = self._select_tool(tool_name)
try:
@@ -468,18 +472,12 @@ class ToolUsage:
except Exception:
if raise_error:
raise
else:
return ToolUsageErrorException(
f"{self._i18n.errors('tool_arguments_error')}"
)
return ToolUsageError(f"{self._i18n.errors('tool_arguments_error')}")
if not isinstance(arguments, dict):
if raise_error:
raise
else:
return ToolUsageErrorException(
f"{self._i18n.errors('tool_arguments_error')}"
)
return ToolUsageError(f"{self._i18n.errors('tool_arguments_error')}")
return ToolCalling(
tool_name=tool.name,
@@ -488,15 +486,14 @@ class ToolUsage:
def _tool_calling(
self, tool_string: str
) -> Union[ToolCalling, InstructorToolCalling, ToolUsageErrorException]:
) -> ToolCalling | InstructorToolCalling | ToolUsageError:
try:
try:
return self._original_tool_calling(tool_string, raise_error=True)
except Exception:
if self.function_calling_llm:
return self._function_calling(tool_string)
else:
return self._original_tool_calling(tool_string)
return self._original_tool_calling(tool_string)
except Exception as e:
self._run_attempts += 1
if self._run_attempts > self._max_parsing_attempts:
@@ -505,12 +502,12 @@ class ToolUsage:
self.task.increment_tools_errors()
if self.agent and self.agent.verbose:
self._printer.print(content=f"\n\n{e}\n", color="red")
return ToolUsageErrorException( # type: ignore # Incompatible return value type (got "ToolUsageErrorException", expected "ToolCalling | InstructorToolCalling")
return ToolUsageError( # type: ignore # Incompatible return value type (got "ToolUsageError", expected "ToolCalling | InstructorToolCalling")
f"{self._i18n.errors('tool_usage_error').format(error=e)}\nMoving on then. {self._i18n.slice('format').format(tool_names=self.tools_names)}"
)
return self._tool_calling(tool_string)
def _validate_tool_input(self, tool_input: Optional[str]) -> Dict[str, Any]:
def _validate_tool_input(self, tool_input: str | None) -> dict[str, Any]:
if tool_input is None:
return {}
@@ -534,7 +531,7 @@ class ToolUsage:
return arguments
except (ValueError, SyntaxError):
repaired_input = repair_json(tool_input)
pass # Continue to the next parsing attempt
# Continue to the next parsing attempt
# Attempt 3: Parse as JSON5
try:
@@ -586,7 +583,7 @@ class ToolUsage:
def on_tool_error(
self,
tool: Any,
tool_calling: Union[ToolCalling, InstructorToolCalling],
tool_calling: ToolCalling | InstructorToolCalling,
e: Exception,
) -> None:
event_data = self._prepare_event_data(tool, tool_calling)
@@ -595,7 +592,7 @@ class ToolUsage:
def on_tool_use_finished(
self,
tool: Any,
tool_calling: Union[ToolCalling, InstructorToolCalling],
tool_calling: ToolCalling | InstructorToolCalling,
from_cache: bool,
started_at: float,
result: Any,
@@ -616,7 +613,7 @@ class ToolUsage:
crewai_event_bus.emit(self, ToolUsageFinishedEvent(**event_data))
def _prepare_event_data(
self, tool: Any, tool_calling: Union[ToolCalling, InstructorToolCalling]
self, tool: Any, tool_calling: ToolCalling | InstructorToolCalling
) -> dict:
event_data = {
"run_attempts": self._run_attempts,

View File
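
_validate_tool_input in the diff above tries progressively more forgiving parsers before giving up. A condensed sketch of that ladder, assuming the same json5 and json-repair helpers the file imports:

import ast
import json

import json5
from json_repair import repair_json

def parse_tool_input(tool_input: str | None) -> dict:
    if tool_input is None:
        return {}
    # Attempt 1: strict JSON
    try:
        value = json.loads(tool_input)
        if isinstance(value, dict):
            return value
    except json.JSONDecodeError:
        pass
    # Attempt 2: Python literal syntax (single quotes, tuples, ...)
    try:
        value = ast.literal_eval(tool_input)
        if isinstance(value, dict):
            return value
    except (ValueError, SyntaxError):
        tool_input = repair_json(tool_input)  # best-effort repair before JSON5
    # Attempt 3: JSON5 (trailing commas, comments, unquoted keys)
    try:
        value = json5.loads(tool_input)
        if isinstance(value, dict):
            return value
    except Exception:
        pass
    raise ValueError(f"could not parse tool input: {tool_input!r}")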

@@ -1,14 +1,18 @@
import json
import re
from typing import Any, Callable, Dict, List, Optional, Sequence, Union
from collections.abc import Callable, Sequence
from typing import Any
from rich.console import Console
from crewai.agents.constants import FINAL_ANSWER_AND_PARSABLE_ACTION_ERROR_MESSAGE
from crewai.agents.parser import (
AgentAction,
AgentFinish,
OutputParserException,
OutputParserError,
parse,
)
from crewai.cli.config import Settings
from crewai.llm import LLM
from crewai.llms.base_llm import BaseLLM
from crewai.tools import BaseTool as CrewAITool
@@ -20,13 +24,11 @@ from crewai.utilities.errors import AgentRepositoryError
from crewai.utilities.exceptions.context_window_exceeding_exception import (
LLMContextLengthExceededException,
)
from rich.console import Console
from crewai.cli.config import Settings
console = Console()
def parse_tools(tools: List[BaseTool]) -> List[CrewStructuredTool]:
def parse_tools(tools: list[BaseTool]) -> list[CrewStructuredTool]:
"""Parse tools to be used for the task."""
tools_list = []
@@ -39,13 +41,13 @@ def parse_tools(tools: List[BaseTool]) -> List[CrewStructuredTool]:
return tools_list
def get_tool_names(tools: Sequence[Union[CrewStructuredTool, BaseTool]]) -> str:
def get_tool_names(tools: Sequence[CrewStructuredTool | BaseTool]) -> str:
"""Get the names of the tools."""
return ", ".join([t.name for t in tools])
def render_text_description_and_args(
tools: Sequence[Union[CrewStructuredTool, BaseTool]],
tools: Sequence[CrewStructuredTool | BaseTool],
) -> str:
"""Render the tool name, description, and args in plain text.
@@ -53,10 +55,7 @@ def render_text_description_and_args(
calculator: This tool is used for math, \
args: {"expression": {"type": "string"}}
"""
tool_strings = []
for tool in tools:
tool_strings.append(tool.description)
tool_strings = [tool.description for tool in tools]
return "\n".join(tool_strings)
@@ -66,13 +65,13 @@ def has_reached_max_iterations(iterations: int, max_iterations: int) -> bool:
def handle_max_iterations_exceeded(
formatted_answer: Union[AgentAction, AgentFinish, None],
formatted_answer: AgentAction | AgentFinish | None,
printer: Printer,
i18n: I18N,
messages: List[Dict[str, str]],
llm: Union[LLM, BaseLLM],
callbacks: List[Any],
) -> Union[AgentAction, AgentFinish]:
messages: list[dict[str, str]],
llm: LLM | BaseLLM,
callbacks: list[Any],
) -> AgentAction | AgentFinish:
"""
Handles the case when the maximum number of iterations is exceeded.
Performs one more LLM call to get the final answer.
@@ -90,7 +89,7 @@ def handle_max_iterations_exceeded(
if formatted_answer and hasattr(formatted_answer, "text"):
assistant_message = (
formatted_answer.text + f'\n{i18n.errors("force_final_answer")}'
formatted_answer.text + f"\n{i18n.errors('force_final_answer')}"
)
else:
assistant_message = i18n.errors("force_final_answer")
@@ -110,17 +109,16 @@ def handle_max_iterations_exceeded(
)
raise ValueError("Invalid response from LLM call - None or empty.")
formatted_answer = format_answer(answer)
# Return the formatted answer, regardless of its type
return formatted_answer
return format_answer(answer)
def format_message_for_llm(prompt: str, role: str = "user") -> Dict[str, str]:
def format_message_for_llm(prompt: str, role: str = "user") -> dict[str, str]:
prompt = prompt.rstrip()
return {"role": role, "content": prompt}
def format_answer(answer: str) -> Union[AgentAction, AgentFinish]:
def format_answer(answer: str) -> AgentAction | AgentFinish:
"""Format a response from the LLM into an AgentAction or AgentFinish."""
try:
return parse(answer)
@@ -134,7 +132,7 @@ def format_answer(answer: str) -> Union[AgentAction, AgentFinish]:
def enforce_rpm_limit(
request_within_rpm_limit: Optional[Callable[[], bool]] = None,
request_within_rpm_limit: Callable[[], bool] | None = None,
) -> None:
"""Enforce the requests per minute (RPM) limit if applicable."""
if request_within_rpm_limit:
@@ -142,12 +140,12 @@ def enforce_rpm_limit(
def get_llm_response(
llm: Union[LLM, BaseLLM],
messages: List[Dict[str, str]],
callbacks: List[Any],
llm: LLM | BaseLLM,
messages: list[dict[str, str]],
callbacks: list[Any],
printer: Printer,
from_task: Optional[Any] = None,
from_agent: Optional[Any] = None,
from_task: Any | None = None,
from_agent: Any | None = None,
) -> str:
"""Call the LLM and return the response, handling any invalid responses."""
try:
@@ -171,13 +169,13 @@ def get_llm_response(
def process_llm_response(
answer: str, use_stop_words: bool
) -> Union[AgentAction, AgentFinish]:
) -> AgentAction | AgentFinish:
"""Process the LLM response and format it into an AgentAction or AgentFinish."""
if not use_stop_words:
try:
# Preliminary parsing to check for errors.
format_answer(answer)
except OutputParserException as e:
except OutputParserError as e:
if FINAL_ANSWER_AND_PARSABLE_ACTION_ERROR_MESSAGE in e.error:
answer = answer.split("Observation:")[0].strip()
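
Note on the unchanged logic around this hunk: when the model emits both a final answer and a parsable action, everything after "Observation:" is discarded before re-parsing. A toy illustration (the answer string is invented for the example):

answer = "Thought: done\nFinal Answer: 42\nObservation: (model kept going)"
print(answer.split("Observation:")[0].strip())
# Thought: done
# Final Answer: 42
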
@@ -187,10 +185,10 @@ def process_llm_response(
def handle_agent_action_core(
formatted_answer: AgentAction,
tool_result: ToolResult,
messages: Optional[List[Dict[str, str]]] = None,
step_callback: Optional[Callable] = None,
show_logs: Optional[Callable] = None,
) -> Union[AgentAction, AgentFinish]:
messages: list[dict[str, str]] | None = None,
step_callback: Callable | None = None,
show_logs: Callable | None = None,
) -> AgentAction | AgentFinish:
"""Core logic for handling agent actions and tool results.
Args:
@@ -245,16 +243,16 @@ def handle_unknown_error(printer: Any, exception: Exception) -> None:
def handle_output_parser_exception(
e: OutputParserException,
messages: List[Dict[str, str]],
e: OutputParserError,
messages: list[dict[str, str]],
iterations: int,
log_error_after: int = 3,
printer: Optional[Any] = None,
printer: Any | None = None,
) -> AgentAction:
"""Handle OutputParserException by updating messages and formatted_answer.
"""Handle OutputParserError by updating messages and formatted_answer.
Args:
e: The OutputParserException that occurred
e: The OutputParserError that occurred
messages: List of messages to append to
iterations: Current iteration count
log_error_after: Number of iterations after which to log errors
@@ -298,9 +296,9 @@ def is_context_length_exceeded(exception: Exception) -> bool:
def handle_context_length(
respect_context_window: bool,
printer: Any,
messages: List[Dict[str, str]],
messages: list[dict[str, str]],
llm: Any,
callbacks: List[Any],
callbacks: list[Any],
i18n: Any,
) -> None:
"""Handle context length exceeded by either summarizing or raising an error.
@@ -330,9 +328,9 @@ def handle_context_length(
def summarize_messages(
messages: List[Dict[str, str]],
messages: list[dict[str, str]],
llm: Any,
callbacks: List[Any],
callbacks: list[Any],
i18n: Any,
) -> None:
"""Summarize messages to fit within context window.
@@ -344,12 +342,12 @@ def summarize_messages(
i18n: I18N instance for messages
"""
messages_string = " ".join([message["content"] for message in messages])
messages_groups = []
cut_size = llm.get_context_window_size()
for i in range(0, len(messages_string), cut_size):
messages_groups.append({"content": messages_string[i : i + cut_size]})
messages_groups = [
{"content": messages_string[i : i + cut_size]}
for i in range(0, len(messages_string), cut_size)
]
summarized_contents = []
@@ -385,8 +383,8 @@ def summarize_messages(
def show_agent_logs(
printer: Printer,
agent_role: str,
formatted_answer: Optional[Union[AgentAction, AgentFinish]] = None,
task_description: Optional[str] = None,
formatted_answer: AgentAction | AgentFinish | None = None,
task_description: str | None = None,
verbose: bool = False,
) -> None:
"""Show agent logs for both start and execution states.
@@ -458,8 +456,8 @@ def _print_current_organization():
)
def load_agent_from_repository(from_repository: str) -> Dict[str, Any]:
attributes: Dict[str, Any] = {}
def load_agent_from_repository(from_repository: str) -> dict[str, Any]:
attributes: dict[str, Any] = {}
if from_repository:
import importlib
@@ -497,7 +495,7 @@ def load_agent_from_repository(from_repository: str) -> Dict[str, Any]:
else:
attributes[key].append(tool_value)
except Exception as e:
except Exception as e: # noqa: PERF203
raise AgentRepositoryError(
f"Tool {tool['name']} could not be loaded: {e}"
) from e
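
Most of the churn in this file is mechanical typing modernization: typing.List/Dict/Optional/Union become builtin generics (PEP 585) and X | Y unions (PEP 604), and Callable/Sequence now come from collections.abc. A minimal before/after sketch with invented names (the | syntax requires Python 3.10+ at runtime):

from collections.abc import Callable

# Before: messages: List[Dict[str, str]], callback: Optional[Callable[[], bool]]
def demo(
    messages: list[dict[str, str]],
    callback: Callable[[], bool] | None = None,
) -> str | None:
    # behavior is identical; only the annotations changed
    if callback is None or callback():
        return messages[0]["content"] if messages else None
    return None
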

View File

@@ -1,17 +1,18 @@
import json
from typing import Any, Type
from typing import Any
import regex
from pydantic import BaseModel, ValidationError
from crewai.agents.parser import OutputParserException
from crewai.agents.parser import OutputParserError
"""Parser for converting text outputs into Pydantic models."""
class CrewPydanticOutputParser:
"""Parses text outputs into specified Pydantic models."""
pydantic_object: Type[BaseModel]
pydantic_object: type[BaseModel]
def parse_result(self, result: str) -> Any:
result = self._transform_in_valid_json(result)
@@ -27,7 +28,7 @@ class CrewPydanticOutputParser:
except ValidationError as e:
name = self.pydantic_object.__name__
msg = f"Failed to parse {name} from completion {json_object}. Got: {e}"
raise OutputParserException(error=msg)
raise OutputParserError(error=msg) from e
def _transform_in_valid_json(self, text) -> str:
text = text.replace("```", "").replace("json", "")
@@ -41,7 +42,7 @@ class CrewPydanticOutputParser:
# Return the first successfully parsed JSON object
json_obj = json.dumps(json_obj)
return str(json_obj)
except json.JSONDecodeError:
except json.JSONDecodeError: # noqa: PERF203
# If parsing fails, skip to the next match
continue
return text
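
Two recurring lint fixes land in this file: re-raising with "from e", which chains the original exception as __cause__ so the underlying JSON error stays in the traceback, and "# noqa: PERF203", which acknowledges ruff's warning about try/except inside a loop. A reduced sketch of the chaining pattern, with a simplified stand-in for OutputParserError:

import json

class ParserError(Exception):  # simplified stand-in for OutputParserError
    def __init__(self, error: str) -> None:
        super().__init__(error)
        self.error = error

def parse_strict(raw: str) -> dict:
    try:
        return json.loads(raw)
    except json.JSONDecodeError as e:
        # "from e" keeps the JSONDecodeError attached as __cause__
        raise ParserError(error=f"Failed to parse: {e}") from e
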

View File

@@ -1,4 +1,5 @@
from typing import Any, Callable, Optional, Tuple, Union
from collections.abc import Callable
from typing import Any
from pydantic import BaseModel, field_validator
@@ -17,8 +18,8 @@ class GuardrailResult(BaseModel):
"""
success: bool
result: Optional[Any] = None
error: Optional[str] = None
result: Any | None = None
error: str | None = None
@field_validator("result", "error")
@classmethod
@@ -36,7 +37,7 @@ class GuardrailResult(BaseModel):
return v
@classmethod
def from_tuple(cls, result: Tuple[bool, Union[Any, str]]) -> "GuardrailResult":
def from_tuple(cls, result: tuple[bool, Any | str]) -> "GuardrailResult":
"""Create a GuardrailResult from a validation tuple.
Args:
@@ -55,33 +56,27 @@ class GuardrailResult(BaseModel):
def process_guardrail(
output: Any, guardrail: Callable, retry_count: int
output: Any, guardrail: Callable, retry_count: int, event_source: Any | None = None
) -> GuardrailResult:
"""Process the guardrail for the agent output.
Args:
output: The output to validate with the guardrail
guardrail: The guardrail to validate the output with
retry_count: The number of times the guardrail has been retried
event_source: The source of the guardrail to be sent in events
Returns:
GuardrailResult: The result of the guardrail validation
"""
from crewai.task import TaskOutput
from crewai.lite_agent import LiteAgentOutput
assert isinstance(output, TaskOutput) or isinstance(
output, LiteAgentOutput
), "Output must be a TaskOutput or LiteAgentOutput"
assert guardrail is not None
from crewai.events.event_bus import crewai_event_bus
from crewai.events.types.llm_guardrail_events import (
LLMGuardrailCompletedEvent,
LLMGuardrailStartedEvent,
)
from crewai.events.event_bus import crewai_event_bus
crewai_event_bus.emit(
None,
event_source,
LLMGuardrailStartedEvent(guardrail=guardrail, retry_count=retry_count),
)
@@ -89,7 +84,7 @@ def process_guardrail(
guardrail_result = GuardrailResult.from_tuple(result)
crewai_event_bus.emit(
None,
event_source,
LLMGuardrailCompletedEvent(
success=guardrail_result.success,
result=guardrail_result.result,
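
The net effect of the new event_source parameter is that guardrail events are emitted with the object that ran the guardrail as their source rather than None. A hypothetical call site, not shown in this diff (task, task_output, and check_output are invented names):

# Hypothetical call site: a Task passing itself as the event source so
# listeners can attribute the guardrail run.
guardrail_result = process_guardrail(
    output=task_output,       # TaskOutput or LiteAgentOutput from a prior step
    guardrail=check_output,   # any callable returning (bool, Any)
    retry_count=0,
    event_source=task,        # previously events were emitted with source=None
)
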

View File

@@ -259,7 +259,7 @@ class AgentReasoning:
)
# Prepare a simple callable that just returns the tool arguments as JSON
def _create_reasoning_plan(plan: str, ready: bool): # noqa: N802
def _create_reasoning_plan(plan: str, ready: bool = True): # noqa: N802
"""Return the reasoning plan result in JSON string form."""
return json.dumps({"plan": plan, "ready": ready})
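
With the default flipped to True, a tool call that supplies only plan — the failure mode in #3466 — now succeeds instead of raising a TypeError for the missing argument. Illustration of the contract (the helper is a local function inside AgentReasoning, so this is only showing its behavior):

print(_create_reasoning_plan(plan="1. Research topic\n2. Draft answer"))
# {"plan": "1. Research topic\n2. Draft answer", "ready": true}
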

View File

@@ -1,24 +1,24 @@
from typing import Any, Dict, List, Optional
from typing import Any
from crewai.agents.parser import AgentAction
from crewai.security import Fingerprint
from crewai.tools.structured_tool import CrewStructuredTool
from crewai.tools.tool_types import ToolResult
from crewai.tools.tool_usage import ToolUsage, ToolUsageErrorException
from crewai.tools.tool_usage import ToolUsage, ToolUsageError
from crewai.utilities.i18n import I18N
def execute_tool_and_check_finality(
agent_action: AgentAction,
tools: List[CrewStructuredTool],
tools: list[CrewStructuredTool],
i18n: I18N,
agent_key: Optional[str] = None,
agent_role: Optional[str] = None,
tools_handler: Optional[Any] = None,
task: Optional[Any] = None,
agent: Optional[Any] = None,
function_calling_llm: Optional[Any] = None,
fingerprint_context: Optional[Dict[str, str]] = None,
agent_key: str | None = None,
agent_role: str | None = None,
tools_handler: Any | None = None,
task: Any | None = None,
agent: Any | None = None,
function_calling_llm: Any | None = None,
fingerprint_context: dict[str, str] | None = None,
) -> ToolResult:
"""Execute a tool and check if the result should be treated as a final answer.
@@ -50,7 +50,7 @@ def execute_tool_and_check_finality(
fingerprint_obj = Fingerprint.from_dict(fingerprint_context)
agent.set_fingerprint(fingerprint_obj)
except Exception as e:
raise ValueError(f"Failed to set fingerprint: {e}")
raise ValueError(f"Failed to set fingerprint: {e}") from e
# Create tool usage instance
tool_usage = ToolUsage(
@@ -65,7 +65,7 @@ def execute_tool_and_check_finality(
# Parse tool calling
tool_calling = tool_usage.parse_tool_calling(agent_action.text)
if isinstance(tool_calling, ToolUsageErrorException):
if isinstance(tool_calling, ToolUsageError):
return ToolResult(tool_calling.message, False)
# Check if tool name matches
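
Both ToolUsageErrorException and OutputParserException are renamed in this changeset (to ToolUsageError and OutputParserError). The parser tests later in this diff keep the old spelling working via import aliasing; downstream code can do the same (a suggestion, not something this diff ships):

# Works after the rename without touching the rest of the module:
from crewai.tools.tool_usage import ToolUsageError as ToolUsageErrorException
from crewai.agents.parser import OutputParserError as OutputParserException
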

View File

@@ -258,8 +258,8 @@ def test_cache_hitting():
output = agent.execute_task(task1)
output = agent.execute_task(task2)
assert cache_handler._cache == {
"multiplier-{'first_number': 2, 'second_number': 6}": 12,
"multiplier-{'first_number': 3, 'second_number': 3}": 9,
'multiplier-{"first_number": 2, "second_number": 6}': 12,
'multiplier-{"first_number": 3, "second_number": 3}': 9,
}
task = Task(
@@ -271,9 +271,9 @@ def test_cache_hitting():
assert output == "36"
assert cache_handler._cache == {
"multiplier-{'first_number': 2, 'second_number': 6}": 12,
"multiplier-{'first_number': 3, 'second_number': 3}": 9,
"multiplier-{'first_number': 12, 'second_number': 3}": 36,
'multiplier-{"first_number": 2, "second_number": 6}': 12,
'multiplier-{"first_number": 3, "second_number": 3}': 9,
'multiplier-{"first_number": 12, "second_number": 3}': 36,
}
received_events = []
@@ -293,7 +293,7 @@ def test_cache_hitting():
output = agent.execute_task(task)
assert output == "0"
read.assert_called_with(
tool="multiplier", input={"first_number": 2, "second_number": 6}
tool="multiplier", input='{"first_number": 2, "second_number": 6}'
)
assert len(received_events) == 1
assert isinstance(received_events[0], ToolUsageFinishedEvent)
@@ -334,8 +334,8 @@ def test_disabling_cache_for_agent():
output = agent.execute_task(task1)
output = agent.execute_task(task2)
assert cache_handler._cache != {
"multiplier-{'first_number': 2, 'second_number': 6}": 12,
"multiplier-{'first_number': 3, 'second_number': 3}": 9,
'multiplier-{"first_number": 2, "second_number": 6}': 12,
'multiplier-{"first_number": 3, "second_number": 3}': 9,
}
task = Task(
@@ -347,9 +347,9 @@ def test_disabling_cache_for_agent():
assert output == "36"
assert cache_handler._cache != {
"multiplier-{'first_number': 2, 'second_number': 6}": 12,
"multiplier-{'first_number': 3, 'second_number': 3}": 9,
"multiplier-{'first_number': 12, 'second_number': 3}": 36,
'multiplier-{"first_number": 2, "second_number": 6}': 12,
'multiplier-{"first_number": 3, "second_number": 3}': 9,
'multiplier-{"first_number": 12, "second_number": 3}': 36,
}
with patch.object(CacheHandler, "read") as read:
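
The flipped quoting in these expected keys and inputs indicates the cache now receives the tool input as a JSON string rather than a Python dict, so the embedded payload renders with double quotes. A sketch of the difference (the exact key construction inside CacheHandler is an assumption inferred from the test expectations):

import json

tool_input = {"first_number": 2, "second_number": 6}
old_key = f"multiplier-{tool_input}"              # multiplier-{'first_number': 2, 'second_number': 6}
new_key = f"multiplier-{json.dumps(tool_input)}"  # multiplier-{"first_number": 2, "second_number": 6}
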

View File

@@ -1,11 +1,13 @@
import pytest
from crewai.agents.crew_agent_executor import (
from crewai.agents import parser
from crewai.agents.parser import (
AgentAction,
AgentFinish,
OutputParserException,
)
from crewai.agents import parser
from crewai.agents.parser import (
OutputParserError as OutputParserException,
)
def test_valid_action_parsing_special_characters():
@@ -348,9 +350,9 @@ def test_integration_valid_and_invalid():
for part in parts:
try:
result = parser.parse(part.strip())
results.append(result)
except OutputParserException as e:
results.append(e)
result = e
results.append(result)
assert isinstance(results[0], AgentAction)
assert isinstance(results[1], AgentFinish)
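
A minimal sketch of the contract this test exercises, using the imports shown above (assuming the final-answer marker behaves as the surrounding tests indicate): parse() returns an AgentAction or AgentFinish, and raises OutputParserError — imported here under its old alias — on malformed text.

step = parser.parse("Thought: done\nFinal Answer: 42")
assert isinstance(step, AgentFinish)
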

View File

@@ -1,4 +1,3 @@
# ruff: noqa: S101
# mypy: ignore-errors
from collections import defaultdict
from typing import cast
@@ -329,23 +328,27 @@ def test_guardrail_is_called_using_string():
LLMGuardrailStartedEvent,
)
agent = Agent(
role="Sports Analyst",
goal="Gather information about the best soccer players",
backstory="""You are an expert at gathering and organizing information. You carefully collect details and present them in a structured way.""",
guardrail="""Only include Brazilian players, both women and men""",
)
with crewai_event_bus.scoped_handlers():
@crewai_event_bus.on(LLMGuardrailStartedEvent)
def capture_guardrail_started(source, event):
assert isinstance(source, LiteAgent)
assert source.original_agent == agent
guardrail_events["started"].append(event)
@crewai_event_bus.on(LLMGuardrailCompletedEvent)
def capture_guardrail_completed(source, event):
assert isinstance(source, LiteAgent)
assert source.original_agent == agent
guardrail_events["completed"].append(event)
agent = Agent(
role="Sports Analyst",
goal="Gather information about the best soccer players",
backstory="""You are an expert at gathering and organizing information. You carefully collect details and present them in a structured way.""",
guardrail="""Only include Brazilian players, both women and men""",
)
result = agent.kickoff(messages="Top 10 best players in the world?")
assert len(guardrail_events["started"]) == 2

View File

@@ -1,17 +1,17 @@
import pytest
from crewai.cli.authentication.main import Oauth2Settings
from crewai.cli.authentication.providers.okta import OktaProvider
class TestOktaProvider:
@pytest.fixture(autouse=True)
def setup_method(self):
self.valid_settings = Oauth2Settings(
provider="okta",
domain="test-domain.okta.com",
client_id="test-client-id",
audience="test-audience"
audience="test-audience",
)
self.provider = OktaProvider(self.valid_settings)
@@ -32,7 +32,7 @@ class TestOktaProvider:
provider="okta",
domain="my-company.okta.com",
client_id="test-client",
audience="test-audience"
audience="test-audience",
)
provider = OktaProvider(settings)
expected_url = "https://my-company.okta.com/oauth2/default/v1/device/authorize"
@@ -47,7 +47,7 @@ class TestOktaProvider:
provider="okta",
domain="another-domain.okta.com",
client_id="test-client",
audience="test-audience"
audience="test-audience",
)
provider = OktaProvider(settings)
expected_url = "https://another-domain.okta.com/oauth2/default/v1/token"
@@ -62,7 +62,7 @@ class TestOktaProvider:
provider="okta",
domain="dev.okta.com",
client_id="test-client",
audience="test-audience"
audience="test-audience",
)
provider = OktaProvider(settings)
expected_url = "https://dev.okta.com/oauth2/default/v1/keys"
@@ -77,7 +77,7 @@ class TestOktaProvider:
provider="okta",
domain="prod.okta.com",
client_id="test-client",
audience="test-audience"
audience="test-audience",
)
provider = OktaProvider(settings)
expected_issuer = "https://prod.okta.com/oauth2/default"
@@ -91,11 +91,11 @@ class TestOktaProvider:
provider="okta",
domain="test-domain.okta.com",
client_id="test-client-id",
audience=None
audience=None,
)
provider = OktaProvider(settings)
with pytest.raises(AssertionError):
with pytest.raises(ValueError, match="Audience is required"):
provider.get_audience()
def test_get_client_id(self):
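
Assumed shape of the provider-side change these tests now pin down (the real method lives in OktaProvider, not shown in this diff): an explicit ValueError replaces a bare assert, so the check still fires under python -O and carries a readable message.

class ProviderSketch:
    def __init__(self, audience: str | None) -> None:
        self.audience = audience

    def get_audience(self) -> str:
        if self.audience is None:
            raise ValueError("Audience is required")
        return self.audience
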

View File

@@ -0,0 +1,124 @@
"""Tests for the machine ID generation functionality in tracing utils."""
from pathlib import Path
from unittest.mock import patch
from crewai.events.listeners.tracing.utils import (
_get_generic_system_id,
_get_linux_machine_id,
_get_machine_id,
)
def test_get_machine_id_basic():
"""Test that _get_machine_id always returns a valid SHA256 hash."""
machine_id = _get_machine_id()
# Should return a 64-character hex string (SHA256)
assert isinstance(machine_id, str)
assert len(machine_id) == 64
assert all(c in "0123456789abcdef" for c in machine_id)
def test_get_machine_id_handles_missing_files():
"""Test that _get_machine_id handles FileNotFoundError gracefully."""
with patch.object(Path, "read_text", side_effect=FileNotFoundError):
machine_id = _get_machine_id()
# Should still return a valid hash even when files are missing
assert isinstance(machine_id, str)
assert len(machine_id) == 64
assert all(c in "0123456789abcdef" for c in machine_id)
def test_get_machine_id_handles_permission_errors():
"""Test that _get_machine_id handles PermissionError gracefully."""
with patch.object(Path, "read_text", side_effect=PermissionError):
machine_id = _get_machine_id()
# Should still return a valid hash even with permission errors
assert isinstance(machine_id, str)
assert len(machine_id) == 64
assert all(c in "0123456789abcdef" for c in machine_id)
def test_get_machine_id_handles_mac_address_failure():
"""Test that _get_machine_id works even if MAC address retrieval fails."""
with patch("uuid.getnode", side_effect=Exception("MAC address error")):
machine_id = _get_machine_id()
# Should still return a valid hash even without MAC address
assert isinstance(machine_id, str)
assert len(machine_id) == 64
assert all(c in "0123456789abcdef" for c in machine_id)
def test_get_linux_machine_id_handles_missing_files():
"""Test that _get_linux_machine_id handles missing files gracefully."""
with patch.object(Path, "exists", return_value=False):
result = _get_linux_machine_id()
# Should return something (hostname-arch fallback) or None
assert result is None or isinstance(result, str)
def test_get_linux_machine_id_handles_file_read_errors():
"""Test that _get_linux_machine_id handles file read errors."""
with (
patch.object(Path, "exists", return_value=True),
patch.object(Path, "is_file", return_value=True),
patch.object(Path, "read_text", side_effect=FileNotFoundError),
):
result = _get_linux_machine_id()
# Should fallback to hostname-based ID or None
assert result is None or isinstance(result, str)
def test_get_generic_system_id_basic():
"""Test that _get_generic_system_id returns reasonable values."""
result = _get_generic_system_id()
# Should return a string or None
assert result is None or isinstance(result, str)
# If it returns a string, it should be non-empty
if result:
assert len(result) > 0
def test_get_generic_system_id_handles_socket_errors():
"""Test that _get_generic_system_id handles socket errors gracefully."""
with patch("socket.gethostname", side_effect=Exception("Socket error")):
result = _get_generic_system_id()
# Should still work or return None
assert result is None or isinstance(result, str)
def test_machine_id_consistency():
"""Test that machine ID is consistent across multiple calls."""
machine_id1 = _get_machine_id()
machine_id2 = _get_machine_id()
# Should be the same across calls (stable fingerprint)
assert machine_id1 == machine_id2
def test_machine_id_always_has_fallback():
"""Test that machine ID always generates something even in worst case."""
with (
patch("uuid.getnode", side_effect=Exception),
patch("platform.system", side_effect=Exception),
patch("socket.gethostname", side_effect=Exception),
patch("getpass.getuser", side_effect=Exception),
patch("platform.machine", side_effect=Exception),
patch("platform.processor", side_effect=Exception),
patch.object(Path, "read_text", side_effect=FileNotFoundError),
):
machine_id = _get_machine_id()
# Even in worst case, should return a valid hash
assert isinstance(machine_id, str)
assert len(machine_id) == 64
assert all(c in "0123456789abcdef" for c in machine_id)
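
A hedged sketch — an assumption, not the actual implementation — of the behavior these tests pin down: gather whatever identifiers are available, never raise, and always hash the result to a 64-character SHA256 hex digest.

import getpass
import hashlib
import platform
import socket
import uuid

def sketch_machine_id() -> str:
    parts: list[str] = []
    for getter in (
        lambda: f"{uuid.getnode():012x}",  # MAC address
        platform.system,
        socket.gethostname,
        getpass.getuser,
    ):
        try:
            value = getter()
            if value:
                parts.append(str(value))
        except Exception:
            continue  # every source is optional; the loop just moves on
    return hashlib.sha256("|".join(parts or ["fallback"]).encode()).hexdigest()

assert len(sketch_machine_id()) == 64
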

View File

@@ -602,3 +602,81 @@ def test_file_path_validation():
match="file_path/file_paths must be a Path, str, or a list of these types",
):
PDFKnowledgeSource()
@pytest.mark.vcr(filter_headers=["authorization"])
def test_csv_knowledge_source_large_file_batching(mock_vector_db, tmpdir):
"""Test CSVKnowledgeSource with a large CSV file that would exceed token limits."""
from unittest.mock import Mock
# Create a large CSV file that would exceed token limits
large_csv_content = [["Name", "Description", "Details", "Notes", "Extra"]]
for i in range(200): # This should generate enough content to test batching
row = [
f"Item_{i}",
f"This is a detailed description for item {i} with lots of text content that will contribute to token count",
f"Extended details about item {i} including technical specifications, usage instructions, and comprehensive information that adds to the overall token count when processed by the embedder",
f"Additional notes and commentary for item {i} with even more text to ensure we have substantial content",
f"Extra field with supplementary information for item {i} to maximize content size",
]
large_csv_content.append(row)
csv_path = Path(tmpdir.join("large_data.csv"))
with open(csv_path, "w", encoding="utf-8") as f:
for row in large_csv_content:
f.write(",".join(row) + "\n")
# Create a CSVKnowledgeSource with custom batch size
csv_source = CSVKnowledgeSource(
file_paths=[csv_path],
batch_size=25, # Smaller batch size for testing
metadata={"test": "large_file"},
)
# Mock the storage to track batch calls
mock_storage = Mock()
csv_source.storage = mock_storage
csv_source.add()
# Verify that storage.save was called multiple times (indicating batching)
assert mock_storage.save.call_count > 1, (
"Storage.save should be called multiple times for batching"
)
# Verify that each batch has the expected size or less
for call in mock_storage.save.call_args_list:
batch_chunks = call[0][0] # First argument to save()
assert len(batch_chunks) <= 25, (
f"Batch size should not exceed 25, got {len(batch_chunks)}"
)
@pytest.mark.vcr(filter_headers=["authorization"])
def test_csv_knowledge_source_default_batch_size(mock_vector_db, tmpdir):
"""Test CSVKnowledgeSource uses default batch size when not specified."""
from unittest.mock import Mock
# Create a small CSV file
csv_content = [
["Name", "Age", "City"],
["Alice", "25", "Boston"],
["Bob", "30", "Seattle"],
]
csv_path = Path(tmpdir.join("small_data.csv"))
with open(csv_path, "w", encoding="utf-8") as f:
for row in csv_content:
f.write(",".join(row) + "\n")
csv_source = CSVKnowledgeSource(file_paths=[csv_path])
assert csv_source.batch_size == 50, (
f"Default batch_size should be 50, got {csv_source.batch_size}"
)
mock_storage = Mock()
csv_source.storage = mock_storage
csv_source.add()
assert mock_storage.save.called, "Storage.save should be called"
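
A hedged sketch of the batching behavior these tests exercise — the real logic lives in BaseFileKnowledgeSource._save_documents; this stand-in only shows the slicing that keeps each storage.save() call at or under batch_size chunks:

def save_in_batches(chunks: list[str], storage, batch_size: int = 50) -> None:
    # each storage.save() call receives at most batch_size chunks
    for start in range(0, len(chunks), batch_size):
        storage.save(chunks[start : start + batch_size])
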

View File

@@ -918,7 +918,7 @@ def test_cache_hitting_between_agents(researcher, writer, ceo):
# Check if both calls were made with the expected arguments
expected_call = call(
tool="multiplier", input={"first_number": 2, "second_number": 6}
tool="multiplier", input='{"first_number": 2, "second_number": 6}'
)
assert cache_calls[0] == expected_call, f"First call mismatch: {cache_calls[0]}"
assert cache_calls[1] == expected_call, (
@@ -2229,7 +2229,7 @@ def test_tools_with_custom_caching():
# Verify that one of those calls was with the even number that should be cached
add_to_cache.assert_any_call(
tool="multiplcation_tool",
input={"first_number": 2, "second_number": 6},
input='{"first_number": 2, "second_number": 6}',
output=12,
)

View File

@@ -3,15 +3,15 @@ from unittest.mock import Mock, patch
import pytest
from crewai import Agent, Task
from crewai.llm import LLM
from crewai.tasks.hallucination_guardrail import HallucinationGuardrail
from crewai.tasks.llm_guardrail import LLMGuardrail
from crewai.tasks.task_output import TaskOutput
from crewai.events.event_bus import crewai_event_bus
from crewai.events.event_types import (
LLMGuardrailCompletedEvent,
LLMGuardrailStartedEvent,
)
from crewai.events.event_bus import crewai_event_bus
from crewai.llm import LLM
from crewai.tasks.hallucination_guardrail import HallucinationGuardrail
from crewai.tasks.llm_guardrail import LLMGuardrail
from crewai.tasks.task_output import TaskOutput
def test_task_without_guardrail():
@@ -177,16 +177,25 @@ def test_guardrail_emits_events(sample_agent):
started_guardrail = []
completed_guardrail = []
task = Task(
description="Gather information about available books on the First World War",
agent=sample_agent,
expected_output="A list of available books on the First World War",
guardrail="Ensure the authors are from Italy",
)
with crewai_event_bus.scoped_handlers():
@crewai_event_bus.on(LLMGuardrailStartedEvent)
def handle_guardrail_started(source, event):
assert source == task
started_guardrail.append(
{"guardrail": event.guardrail, "retry_count": event.retry_count}
)
@crewai_event_bus.on(LLMGuardrailCompletedEvent)
def handle_guardrail_completed(source, event):
assert source == task
completed_guardrail.append(
{
"success": event.success,
@@ -196,13 +205,6 @@ def test_guardrail_emits_events(sample_agent):
}
)
task = Task(
description="Gather information about available books on the First World War",
agent=sample_agent,
expected_output="A list of available books on the First World War",
guardrail="Ensure the authors are from Italy",
)
result = task.execute_sync(agent=sample_agent)
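
Because the Task is now constructed before the handlers are registered, each handler can assert that source is that exact instance. A sketch of a listener that exploits the new attribution, using the imports shown at the top of this file (event fields are taken from the assertions above):

from crewai.events.event_bus import crewai_event_bus
from crewai.events.event_types import LLMGuardrailCompletedEvent

@crewai_event_bus.on(LLMGuardrailCompletedEvent)
def audit_guardrail(source, event):
    # source is the Task or LiteAgent that ran the guardrail, or None
    # for legacy callers that don't pass event_source
    origin = type(source).__name__ if source is not None else "unknown"
    print(f"guardrail success={event.success} from {origin}")
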
def custom_guardrail(result: TaskOutput):