Compare commits

..

7 Commits

Author SHA1 Message Date
Devin AI
0a22307962 fix: reuse parent run_id in nested streaming and fix test context isolation
Co-Authored-By: João <joao@crewai.com>
2026-04-09 06:06:12 +00:00
Devin AI
5511f0a089 fix: add run_id scoping to streaming handlers to prevent cross-run chunk contamination (#5376)
The singleton event bus fans out LLMStreamChunkEvent to all registered
handlers. When multiple streaming runs execute concurrently, each run's
handler receives chunks from all other runs, causing cross-run chunk
contamination.

Fix:
- Add run_id field to LLMStreamChunkEvent
- Use contextvars.ContextVar to track the current streaming run_id
- create_streaming_state() generates a unique run_id per run and sets it
  in the context var
- LLM emit paths (base_llm.py, llm.py) stamp run_id on emitted events
- Stream handler filters events by matching run_id

Tests:
- Handler rejects events from different run_id
- Two concurrent streaming states receive only their own events
- Two concurrent threads with isolated contextvars receive only their
  own chunks
- run_id field defaults to None for backward compatibility

Co-Authored-By: João <joao@crewai.com>
2026-04-09 05:56:47 +00:00
Greyson LaLonde
06fe163611 docs: update changelog and version for v1.14.2a1
2026-04-09 07:26:22 +08:00
Greyson LaLonde
3b52b1a800 feat: bump versions to 1.14.2a1 2026-04-09 07:21:39 +08:00
Greyson LaLonde
9ab67552a7 fix: emit flow_finished event after HITL resume
resume_async() was missing trace infrastructure that kickoff_async()
sets up, causing flow_finished to never reach the platform after HITL
feedback. Add FlowStartedEvent emission to initialize the trace batch,
await event futures, finalize the trace batch, and guard with
suppress_flow_events.
2026-04-09 05:31:31 +08:00
Greyson LaLonde
8cdde16ac8 fix: bump cryptography to 46.0.7 for CVE-2026-39892 2026-04-09 05:17:31 +08:00
Greyson LaLonde
0e590ff669 refactor: use shared I18N_DEFAULT singleton 2026-04-09 04:29:53 +08:00
56 changed files with 675 additions and 337 deletions
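The refactor commit (0e590ff669) threads through most of the diffs below: per-instance `i18n` fields, constructor parameters, and `get_i18n()` calls are replaced by direct use of a shared module-level `I18N_DEFAULT` singleton. A minimal sketch of the pattern — the class shape and prompt strings here are placeholders, not CrewAI's real translations:

```python
class I18N:
    """Serves prompt fragments ("slices") loaded once at import time."""

    def __init__(self, translations: dict) -> None:
        self._translations = translations

    def slice(self, key: str) -> str:
        return self._translations["slices"][key]

    def retrieve(self, section: str, key: str) -> str:
        return self._translations[section][key]

# Module-level singleton: callers import I18N_DEFAULT directly instead of
# accepting an `i18n` constructor parameter and threading it through
# every executor, adapter, and helper.
I18N_DEFAULT = I18N({
    "slices": {"memory": "\n\n# Useful context:\n{memory}"},
    "planning": {"post_tool_reasoning": "Reflect on the tool result before answering."},
})
```

The trade-off is visible in the diffs: dozens of `i18n=...` keyword arguments and `I18N | None` fields disappear, at the cost of making the translation source a process-wide constant.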

View File

@@ -4,6 +4,29 @@ description: "تحديثات المنتج والتحسينات وإصلاحات
icon: "clock"
mode: "wide"
---
<Update label="9 أبريل 2026">
## v1.14.2a1
[عرض الإصدار على GitHub](https://github.com/crewAIInc/crewAI/releases/tag/1.14.2a1)
## ما الذي تغير
### إصلاحات الأخطاء
- إصلاح إصدار حدث flow_finished بعد استئناف HITL
- إصلاح إصدار التشفير إلى 46.0.7 لمعالجة CVE-2026-39892
### إعادة هيكلة
- إعادة هيكلة لاستخدام I18N_DEFAULT المشترك
### الوثائق
- تحديث سجل التغييرات والإصدار لـ v1.14.1
## المساهمون
@greysonlalonde
</Update>
<Update label="9 أبريل 2026">
## v1.14.1

View File

@@ -4,6 +4,29 @@ description: "Product updates, improvements, and bug fixes for CrewAI"
icon: "clock"
mode: "wide"
---
<Update label="Apr 09, 2026">
## v1.14.2a1
[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/1.14.2a1)
## What's Changed
### Bug Fixes
- Fix emission of flow_finished event after HITL resume
- Fix cryptography version to 46.0.7 to address CVE-2026-39892
### Refactoring
- Refactor to use shared I18N_DEFAULT singleton
### Documentation
- Update changelog and version for v1.14.1
## Contributors
@greysonlalonde
</Update>
<Update label="Apr 09, 2026">
## v1.14.1

View File

@@ -4,6 +4,29 @@ description: "CrewAI의 제품 업데이트, 개선 사항 및 버그 수정"
icon: "clock"
mode: "wide"
---
<Update label="2026년 4월 9일">
## v1.14.2a1
[GitHub 릴리스 보기](https://github.com/crewAIInc/crewAI/releases/tag/1.14.2a1)
## 변경 사항
### 버그 수정
- HITL 재개 후 flow_finished 이벤트 방출 수정
- CVE-2026-39892 문제를 해결하기 위해 암호화 버전을 46.0.7로 수정
### 리팩토링
- 공유 I18N_DEFAULT 싱글톤을 사용하도록 리팩토링
### 문서
- v1.14.1에 대한 변경 로그 및 버전 업데이트
## 기여자
@greysonlalonde
</Update>
<Update label="2026년 4월 9일">
## v1.14.1

View File

@@ -4,6 +4,29 @@ description: "Atualizações de produto, melhorias e correções do CrewAI"
icon: "clock"
mode: "wide"
---
<Update label="09 abr 2026">
## v1.14.2a1
[Ver release no GitHub](https://github.com/crewAIInc/crewAI/releases/tag/1.14.2a1)
## O que Mudou
### Correções de Bugs
- Corrigir a emissão do evento flow_finished após a retomada do HITL
- Corrigir a versão da criptografia para 46.0.7 para resolver o CVE-2026-39892
### Refatoração
- Refatorar para usar o singleton I18N_DEFAULT compartilhado
### Documentação
- Atualizar o changelog e a versão para v1.14.1
## Contribuidores
@greysonlalonde
</Update>
<Update label="09 abr 2026">
## v1.14.1

View File

@@ -152,4 +152,4 @@ __all__ = [
"wrap_file_source",
]
-__version__ = "1.14.1"
+__version__ = "1.14.2a1"

View File

@@ -10,7 +10,7 @@ requires-python = ">=3.10, <3.14"
dependencies = [
"pytube~=15.0.0",
"requests~=2.32.5",
-"crewai==1.14.1",
+"crewai==1.14.2a1",
"tiktoken~=0.8.0",
"beautifulsoup4~=4.13.4",
"python-docx~=1.2.0",

View File

@@ -305,4 +305,4 @@ __all__ = [
"ZapierActionTools",
]
-__version__ = "1.14.1"
+__version__ = "1.14.2a1"

View File

@@ -55,7 +55,7 @@ Repository = "https://github.com/crewAIInc/crewAI"
[project.optional-dependencies]
tools = [
-"crewai-tools==1.14.1",
+"crewai-tools==1.14.2a1",
]
embeddings = [
"tiktoken~=0.8.0"

View File

@@ -46,7 +46,7 @@ def _suppress_pydantic_deprecation_warnings() -> None:
_suppress_pydantic_deprecation_warnings()
-__version__ = "1.14.1"
+__version__ = "1.14.2a1"
_telemetry_submitted = False

View File

@@ -98,6 +98,7 @@ from crewai.utilities.converter import Converter, ConverterError
from crewai.utilities.env import get_env_context
from crewai.utilities.guardrail import process_guardrail
from crewai.utilities.guardrail_types import GuardrailCallable, GuardrailType
+from crewai.utilities.i18n import I18N_DEFAULT
from crewai.utilities.llm_utils import create_llm
from crewai.utilities.prompts import Prompts, StandardPromptResult, SystemPromptResult
from crewai.utilities.pydantic_schema_utils import generate_model_description
@@ -499,8 +500,8 @@ class Agent(BaseAgent):
self.tools_handler.last_used_tool = None
task_prompt = task.prompt()
-task_prompt = build_task_prompt_with_schema(task, task_prompt, self.i18n)
-task_prompt = format_task_with_context(task_prompt, context, self.i18n)
+task_prompt = build_task_prompt_with_schema(task, task_prompt)
+task_prompt = format_task_with_context(task_prompt, context)
return self._retrieve_memory_context(task, task_prompt)
def _finalize_task_prompt(
@@ -562,7 +563,7 @@ class Agent(BaseAgent):
m.format() for m in matches
)
if memory.strip() != "":
-task_prompt += self.i18n.slice("memory").format(memory=memory)
+task_prompt += I18N_DEFAULT.slice("memory").format(memory=memory)
crewai_event_bus.emit(
self,
@@ -968,14 +969,13 @@ class Agent(BaseAgent):
agent=self,
has_tools=len(raw_tools) > 0,
use_native_tool_calling=use_native_tool_calling,
-i18n=self.i18n,
use_system_prompt=self.use_system_prompt,
system_template=self.system_template,
prompt_template=self.prompt_template,
response_template=self.response_template,
).task_execution()
-stop_words = [self.i18n.slice("observation")]
+stop_words = [I18N_DEFAULT.slice("observation")]
if self.response_template:
stop_words.append(
self.response_template.split("{{ .Response }}")[1].strip()
@@ -1017,7 +1017,6 @@ class Agent(BaseAgent):
self.agent_executor = self.executor_class(
llm=self.llm,
task=task,
-i18n=self.i18n,
agent=self,
crew=self.crew,
tools=parsed_tools,
@@ -1262,10 +1261,10 @@ class Agent(BaseAgent):
from_agent=self,
),
)
-query = self.i18n.slice("knowledge_search_query").format(
+query = I18N_DEFAULT.slice("knowledge_search_query").format(
task_prompt=task_prompt
)
-rewriter_prompt = self.i18n.slice("knowledge_search_query_system_prompt")
+rewriter_prompt = I18N_DEFAULT.slice("knowledge_search_query_system_prompt")
if not isinstance(self.llm, BaseLLM):
self._logger.log(
"warning",
@@ -1384,7 +1383,6 @@ class Agent(BaseAgent):
request_within_rpm_limit=rpm_limit_fn,
callbacks=[TokenCalcHandler(self._token_process)],
response_model=response_format,
-i18n=self.i18n,
)
all_files: dict[str, Any] = {}
@@ -1420,7 +1418,7 @@ class Agent(BaseAgent):
m.format() for m in matches
)
if memory_block:
formatted_messages += "\n\n" + self.i18n.slice("memory").format(
formatted_messages += "\n\n" + I18N_DEFAULT.slice("memory").format(
memory=memory_block
)
crewai_event_bus.emit(
@@ -1624,7 +1622,7 @@ class Agent(BaseAgent):
try:
model_schema = generate_model_description(response_format)
schema = json.dumps(model_schema, indent=2)
-instructions = self.i18n.slice("formatted_task_instructions").format(
+instructions = I18N_DEFAULT.slice("formatted_task_instructions").format(
output_format=schema
)

View File

@@ -24,7 +24,6 @@ if TYPE_CHECKING:
from crewai.agent.core import Agent
from crewai.task import Task
from crewai.tools.base_tool import BaseTool
-from crewai.utilities.i18n import I18N
def handle_reasoning(agent: Agent, task: Task) -> None:
@@ -59,46 +58,50 @@ def handle_reasoning(agent: Agent, task: Task) -> None:
agent._logger.log("error", f"Error during planning: {e!s}")
-def build_task_prompt_with_schema(task: Task, task_prompt: str, i18n: I18N) -> str:
+def build_task_prompt_with_schema(task: Task, task_prompt: str) -> str:
"""Build task prompt with JSON/Pydantic schema instructions if applicable.
Args:
task: The task being executed.
task_prompt: The initial task prompt.
-i18n: Internationalization instance.
Returns:
The task prompt potentially augmented with schema instructions.
"""
+from crewai.utilities.i18n import I18N_DEFAULT
if (task.output_json or task.output_pydantic) and not task.response_model:
if task.output_json:
schema_dict = generate_model_description(task.output_json)
schema = json.dumps(schema_dict["json_schema"]["schema"], indent=2)
-task_prompt += "\n" + i18n.slice("formatted_task_instructions").format(
-output_format=schema
-)
+task_prompt += "\n" + I18N_DEFAULT.slice(
+"formatted_task_instructions"
+).format(output_format=schema)
elif task.output_pydantic:
schema_dict = generate_model_description(task.output_pydantic)
schema = json.dumps(schema_dict["json_schema"]["schema"], indent=2)
-task_prompt += "\n" + i18n.slice("formatted_task_instructions").format(
-output_format=schema
-)
+task_prompt += "\n" + I18N_DEFAULT.slice(
+"formatted_task_instructions"
+).format(output_format=schema)
return task_prompt
-def format_task_with_context(task_prompt: str, context: str | None, i18n: I18N) -> str:
+def format_task_with_context(task_prompt: str, context: str | None) -> str:
"""Format task prompt with context if provided.
Args:
task_prompt: The task prompt.
context: Optional context string.
-i18n: Internationalization instance.
Returns:
The task prompt formatted with context if provided.
"""
+from crewai.utilities.i18n import I18N_DEFAULT
if context:
-return i18n.slice("task_with_context").format(task=task_prompt, context=context)
+return I18N_DEFAULT.slice("task_with_context").format(
+task=task_prompt, context=context
+)
return task_prompt

View File

@@ -33,6 +33,7 @@ from crewai.tools.base_tool import BaseTool
from crewai.types.callback import SerializableCallable
from crewai.utilities import Logger
from crewai.utilities.converter import Converter
+from crewai.utilities.i18n import I18N_DEFAULT
from crewai.utilities.import_utils import require
@@ -186,7 +187,7 @@ class LangGraphAgentAdapter(BaseAgentAdapter):
task_prompt = task.prompt() if hasattr(task, "prompt") else str(task)
if context:
-task_prompt = self.i18n.slice("task_with_context").format(
+task_prompt = I18N_DEFAULT.slice("task_with_context").format(
task=task_prompt, context=context
)

View File

@@ -32,6 +32,7 @@ from crewai.events.types.agent_events import (
from crewai.tools import BaseTool
from crewai.tools.agent_tools.agent_tools import AgentTools
from crewai.utilities import Logger
+from crewai.utilities.i18n import I18N_DEFAULT
from crewai.utilities.import_utils import require
@@ -133,7 +134,7 @@ class OpenAIAgentAdapter(BaseAgentAdapter):
try:
task_prompt: str = task.prompt()
if context:
-task_prompt = self.i18n.slice("task_with_context").format(
+task_prompt = I18N_DEFAULT.slice("task_with_context").format(
task=task_prompt, context=context
)
crewai_event_bus.emit(

View File

@@ -8,7 +8,7 @@ import json
from typing import Any
from crewai.agents.agent_adapters.base_converter_adapter import BaseConverterAdapter
-from crewai.utilities.i18n import get_i18n
+from crewai.utilities.i18n import I18N_DEFAULT
class OpenAIConverterAdapter(BaseConverterAdapter):
@@ -59,10 +59,8 @@ class OpenAIConverterAdapter(BaseConverterAdapter):
if not self._output_format:
return base_prompt
-output_schema: str = (
-get_i18n()
-.slice("formatted_task_instructions")
-.format(output_format=json.dumps(self._schema, indent=2))
+output_schema: str = I18N_DEFAULT.slice("formatted_task_instructions").format(
+output_format=json.dumps(self._schema, indent=2)
)
return f"{base_prompt}\n\n{output_schema}"

View File

@@ -43,7 +43,6 @@ from crewai.state.checkpoint_config import CheckpointConfig, _coerce_checkpoint
from crewai.tools.base_tool import BaseTool, Tool
from crewai.types.callback import SerializableCallable
from crewai.utilities.config import process_config
-from crewai.utilities.i18n import I18N, get_i18n
from crewai.utilities.logger import Logger
from crewai.utilities.rpm_controller import RPMController
from crewai.utilities.string_utils import interpolate_only
@@ -179,7 +178,7 @@ class BaseAgent(BaseModel, ABC, metaclass=AgentMeta):
agent_executor: An instance of the CrewAgentExecutor class.
llm (Any): Language model that will run the agent.
crew (Any): Crew to which the agent belongs.
-i18n (I18N): Internationalization settings.
cache_handler ([CacheHandler]): An instance of the CacheHandler class.
tools_handler ([ToolsHandler]): An instance of the ToolsHandler class.
max_tokens: Maximum number of tokens for the agent to generate in a response.
@@ -269,9 +268,6 @@ class BaseAgent(BaseModel, ABC, metaclass=AgentMeta):
_serialize_crew_ref, return_type=str | None, when_used="always"
),
] = Field(default=None, description="Crew to which the agent belongs.")
-i18n: I18N = Field(
-default_factory=get_i18n, description="Internationalization settings."
-)
cache_handler: CacheHandler | None = Field(
default=None, description="An instance of the CacheHandler class."
)

View File

@@ -14,7 +14,6 @@ if TYPE_CHECKING:
from crewai.agents.agent_builder.base_agent import BaseAgent
from crewai.crew import Crew
from crewai.task import Task
-from crewai.utilities.i18n import I18N
class BaseAgentExecutor(BaseModel):
@@ -28,7 +27,6 @@ class BaseAgentExecutor(BaseModel):
max_iter: int = Field(default=25)
messages: list[LLMMessage] = Field(default_factory=list)
_resuming: bool = PrivateAttr(default=False)
-_i18n: I18N | None = PrivateAttr(default=None)
def _save_to_memory(self, output: AgentFinish) -> None:
"""Save task result to unified memory (memory or crew._memory)."""

View File

@@ -67,7 +67,7 @@ from crewai.utilities.agent_utils import (
)
from crewai.utilities.constants import TRAINING_DATA_FILE
from crewai.utilities.file_store import aget_all_files, get_all_files
-from crewai.utilities.i18n import I18N, get_i18n
+from crewai.utilities.i18n import I18N_DEFAULT
from crewai.utilities.printer import PRINTER
from crewai.utilities.string_utils import sanitize_tool_name
from crewai.utilities.token_counter_callback import TokenCalcHandler
@@ -135,9 +135,8 @@ class CrewAgentExecutor(BaseAgentExecutor):
model_config = ConfigDict(arbitrary_types_allowed=True, populate_by_name=True)
-def __init__(self, i18n: I18N | None = None, **kwargs: Any) -> None:
+def __init__(self, **kwargs: Any) -> None:
super().__init__(**kwargs)
-self._i18n = i18n or get_i18n()
if not self.before_llm_call_hooks:
self.before_llm_call_hooks.extend(get_before_llm_call_hooks())
if not self.after_llm_call_hooks:
@@ -328,7 +327,6 @@ class CrewAgentExecutor(BaseAgentExecutor):
formatted_answer = handle_max_iterations_exceeded(
formatted_answer,
printer=PRINTER,
-i18n=self._i18n,
messages=self.messages,
llm=cast("BaseLLM", self.llm),
callbacks=self.callbacks,
@@ -401,7 +399,6 @@ class CrewAgentExecutor(BaseAgentExecutor):
agent_action=formatted_answer,
fingerprint_context=fingerprint_context,
tools=self.tools,
-i18n=self._i18n,
agent_key=self.agent.key if self.agent else None,
agent_role=self.agent.role if self.agent else None,
tools_handler=self.tools_handler,
@@ -438,7 +435,6 @@ class CrewAgentExecutor(BaseAgentExecutor):
messages=self.messages,
llm=cast("BaseLLM", self.llm),
callbacks=self.callbacks,
-i18n=self._i18n,
verbose=self.agent.verbose,
)
continue
@@ -484,7 +480,6 @@ class CrewAgentExecutor(BaseAgentExecutor):
formatted_answer = handle_max_iterations_exceeded(
None,
printer=PRINTER,
-i18n=self._i18n,
messages=self.messages,
llm=cast("BaseLLM", self.llm),
callbacks=self.callbacks,
@@ -575,7 +570,6 @@ class CrewAgentExecutor(BaseAgentExecutor):
messages=self.messages,
llm=cast("BaseLLM", self.llm),
callbacks=self.callbacks,
-i18n=self._i18n,
verbose=self.agent.verbose,
)
continue
@@ -771,7 +765,7 @@ class CrewAgentExecutor(BaseAgentExecutor):
if tool_finish:
return tool_finish
-reasoning_prompt = self._i18n.slice("post_tool_reasoning")
+reasoning_prompt = I18N_DEFAULT.slice("post_tool_reasoning")
reasoning_message: LLMMessage = {
"role": "user",
"content": reasoning_prompt,
@@ -795,7 +789,7 @@ class CrewAgentExecutor(BaseAgentExecutor):
if tool_finish:
return tool_finish
-reasoning_prompt = self._i18n.slice("post_tool_reasoning")
+reasoning_prompt = I18N_DEFAULT.slice("post_tool_reasoning")
reasoning_message = {
"role": "user",
"content": reasoning_prompt,
@@ -1170,7 +1164,6 @@ class CrewAgentExecutor(BaseAgentExecutor):
formatted_answer = handle_max_iterations_exceeded(
formatted_answer,
printer=PRINTER,
-i18n=self._i18n,
messages=self.messages,
llm=cast("BaseLLM", self.llm),
callbacks=self.callbacks,
@@ -1242,7 +1235,6 @@ class CrewAgentExecutor(BaseAgentExecutor):
agent_action=formatted_answer,
fingerprint_context=fingerprint_context,
tools=self.tools,
-i18n=self._i18n,
agent_key=self.agent.key if self.agent else None,
agent_role=self.agent.role if self.agent else None,
tools_handler=self.tools_handler,
@@ -1278,7 +1270,6 @@ class CrewAgentExecutor(BaseAgentExecutor):
messages=self.messages,
llm=cast("BaseLLM", self.llm),
callbacks=self.callbacks,
-i18n=self._i18n,
verbose=self.agent.verbose,
)
continue
@@ -1318,7 +1309,6 @@ class CrewAgentExecutor(BaseAgentExecutor):
formatted_answer = handle_max_iterations_exceeded(
None,
printer=PRINTER,
-i18n=self._i18n,
messages=self.messages,
llm=cast("BaseLLM", self.llm),
callbacks=self.callbacks,
@@ -1408,7 +1398,6 @@ class CrewAgentExecutor(BaseAgentExecutor):
messages=self.messages,
llm=cast("BaseLLM", self.llm),
callbacks=self.callbacks,
-i18n=self._i18n,
verbose=self.agent.verbose,
)
continue
@@ -1467,7 +1456,7 @@ class CrewAgentExecutor(BaseAgentExecutor):
Updated action or final answer.
"""
# Special case for add_image_tool
-add_image_tool = self._i18n.tools("add_image")
+add_image_tool = I18N_DEFAULT.tools("add_image")
if (
isinstance(add_image_tool, dict)
and formatted_answer.tool.casefold().strip()
@@ -1673,5 +1662,5 @@ class CrewAgentExecutor(BaseAgentExecutor):
Formatted message dict.
"""
return format_message_for_llm(
-self._i18n.slice("feedback_instructions").format(feedback=feedback)
+I18N_DEFAULT.slice("feedback_instructions").format(feedback=feedback)
)

View File

@@ -19,10 +19,7 @@ from crewai.agents.constants import (
MISSING_ACTION_INPUT_AFTER_ACTION_ERROR_MESSAGE,
UNABLE_TO_REPAIR_JSON_RESULTS,
)
-from crewai.utilities.i18n import get_i18n
-_I18N = get_i18n()
+from crewai.utilities.i18n import I18N_DEFAULT as _I18N
@dataclass

View File

@@ -23,7 +23,7 @@ from crewai.events.types.observation_events import (
StepObservationStartedEvent,
)
from crewai.utilities.agent_utils import extract_task_section
-from crewai.utilities.i18n import I18N, get_i18n
+from crewai.utilities.i18n import I18N_DEFAULT
from crewai.utilities.llm_utils import create_llm
from crewai.utilities.planning_types import StepObservation, TodoItem
from crewai.utilities.types import LLMMessage
@@ -64,7 +64,6 @@ class PlannerObserver:
self.task = task
self.kickoff_input = kickoff_input
self.llm = self._resolve_llm()
-self._i18n: I18N = get_i18n()
def _resolve_llm(self) -> Any:
"""Resolve which LLM to use for observation/planning.
@@ -246,7 +245,7 @@ class PlannerObserver:
task_desc = extract_task_section(self.kickoff_input)
task_goal = "Complete the task successfully"
-system_prompt = self._i18n.retrieve("planning", "observation_system_prompt")
+system_prompt = I18N_DEFAULT.retrieve("planning", "observation_system_prompt")
# Build context of what's been done
completed_summary = ""
@@ -273,7 +272,9 @@ class PlannerObserver:
remaining_lines
)
-user_prompt = self._i18n.retrieve("planning", "observation_user_prompt").format(
+user_prompt = I18N_DEFAULT.retrieve(
+"planning", "observation_user_prompt"
+).format(
task_description=task_desc,
task_goal=task_goal,
completed_summary=completed_summary,

View File

@@ -38,7 +38,7 @@ from crewai.utilities.agent_utils import (
process_llm_response,
setup_native_tools,
)
-from crewai.utilities.i18n import I18N, get_i18n
+from crewai.utilities.i18n import I18N_DEFAULT
from crewai.utilities.planning_types import TodoItem
from crewai.utilities.printer import PRINTER
from crewai.utilities.step_execution_context import StepExecutionContext, StepResult
@@ -81,7 +81,7 @@ class StepExecutor:
function_calling_llm: Optional separate LLM for function calling.
request_within_rpm_limit: Optional RPM limit function.
callbacks: Optional list of callbacks.
-i18n: Optional i18n instance.
"""
def __init__(
@@ -96,7 +96,6 @@ class StepExecutor:
function_calling_llm: BaseLLM | None = None,
request_within_rpm_limit: Callable[[], bool] | None = None,
callbacks: list[Any] | None = None,
-i18n: I18N | None = None,
) -> None:
self.llm = llm
self.tools = tools
@@ -108,7 +107,6 @@ class StepExecutor:
self.function_calling_llm = function_calling_llm
self.request_within_rpm_limit = request_within_rpm_limit
self.callbacks = callbacks or []
-self._i18n: I18N = i18n or get_i18n()
# Native tool support — set up once
self._use_native_tools = check_native_tool_support(
@@ -221,14 +219,14 @@ class StepExecutor:
tools_section = ""
if self.tools and not self._use_native_tools:
tool_names = ", ".join(sanitize_tool_name(t.name) for t in self.tools)
-tools_section = self._i18n.retrieve(
+tools_section = I18N_DEFAULT.retrieve(
"planning", "step_executor_tools_section"
).format(tool_names=tool_names)
elif self.tools:
tool_names = ", ".join(sanitize_tool_name(t.name) for t in self.tools)
tools_section = f"\n\nAvailable tools: {tool_names}"
-return self._i18n.retrieve("planning", "step_executor_system_prompt").format(
+return I18N_DEFAULT.retrieve("planning", "step_executor_system_prompt").format(
role=role,
backstory=backstory,
goal=goal,
@@ -247,7 +245,7 @@ class StepExecutor:
task_section = extract_task_section(context.task_description)
if task_section:
parts.append(
-self._i18n.retrieve(
+I18N_DEFAULT.retrieve(
"planning", "step_executor_task_context"
).format(
task_context=task_section,
@@ -255,14 +253,16 @@ class StepExecutor:
)
parts.append(
-self._i18n.retrieve("planning", "step_executor_user_prompt").format(
+I18N_DEFAULT.retrieve("planning", "step_executor_user_prompt").format(
step_description=todo.description,
)
)
if todo.tool_to_use:
parts.append(
-self._i18n.retrieve("planning", "step_executor_suggested_tool").format(
+I18N_DEFAULT.retrieve(
+"planning", "step_executor_suggested_tool"
+).format(
tool_to_use=todo.tool_to_use,
)
)
@@ -270,16 +270,16 @@ class StepExecutor:
# Include dependency results (final results only, no traces)
if context.dependency_results:
parts.append(
-self._i18n.retrieve("planning", "step_executor_context_header")
+I18N_DEFAULT.retrieve("planning", "step_executor_context_header")
)
for step_num, result in sorted(context.dependency_results.items()):
parts.append(
-self._i18n.retrieve(
+I18N_DEFAULT.retrieve(
"planning", "step_executor_context_entry"
).format(step_number=step_num, result=result)
)
-parts.append(self._i18n.retrieve("planning", "step_executor_complete_step"))
+parts.append(I18N_DEFAULT.retrieve("planning", "step_executor_complete_step"))
return "\n".join(parts)
@@ -375,7 +375,6 @@ class StepExecutor:
agent_action=formatted,
fingerprint_context=fingerprint_context,
tools=self.tools,
-i18n=self._i18n,
agent_key=self.agent.key if self.agent else None,
agent_role=self.agent.role if self.agent else None,
tools_handler=self.tools_handler,

View File

@@ -5,7 +5,7 @@ description = "{{name}} using crewAI"
authors = [{ name = "Your Name", email = "you@example.com" }]
requires-python = ">=3.10,<3.14"
dependencies = [
-"crewai[tools]==1.14.1"
+"crewai[tools]==1.14.2a1"
]
[project.scripts]

View File

@@ -5,7 +5,7 @@ description = "{{name}} using crewAI"
authors = [{ name = "Your Name", email = "you@example.com" }]
requires-python = ">=3.10,<3.14"
dependencies = [
-"crewai[tools]==1.14.1"
+"crewai[tools]==1.14.2a1"
]
[project.scripts]

View File

@@ -5,7 +5,7 @@ description = "Power up your crews with {{folder_name}}"
readme = "README.md"
requires-python = ">=3.10,<3.14"
dependencies = [
-"crewai[tools]==1.14.1"
+"crewai[tools]==1.14.2a1"
]
[tool.crewai]

View File

@@ -87,6 +87,7 @@ class LLMStreamChunkEvent(LLMEventBase):
tool_call: ToolCall | None = None
call_type: LLMCallType | None = None
response_id: str | None = None
+run_id: str | None = None
class LLMThinkingChunkEvent(LLMEventBase):

View File

@@ -91,7 +91,7 @@ from crewai.utilities.agent_utils import (
track_delegation_if_needed,
)
from crewai.utilities.constants import TRAINING_DATA_FILE
-from crewai.utilities.i18n import I18N, get_i18n
+from crewai.utilities.i18n import I18N_DEFAULT
from crewai.utilities.planning_types import (
PlanStep,
StepObservation,
@@ -189,7 +189,6 @@ class AgentExecutor(Flow[AgentExecutorState], BaseAgentExecutor): # type: ignor
)
callbacks: list[Any] = Field(default_factory=list, exclude=True)
response_model: type[BaseModel] | None = Field(default=None, exclude=True)
-i18n: I18N | None = Field(default=None, exclude=True)
log_error_after: int = Field(default=3, exclude=True)
before_llm_call_hooks: list[BeforeLLMCallHookType | BeforeLLMCallHookCallable] = (
Field(default_factory=list, exclude=True)
@@ -198,7 +197,6 @@ class AgentExecutor(Flow[AgentExecutorState], BaseAgentExecutor): # type: ignor
default_factory=list, exclude=True
)
-_i18n: I18N = PrivateAttr(default_factory=get_i18n)
_console: Console = PrivateAttr(default_factory=Console)
_last_parser_error: OutputParserError | None = PrivateAttr(default=None)
_last_context_error: Exception | None = PrivateAttr(default=None)
@@ -214,7 +212,6 @@ class AgentExecutor(Flow[AgentExecutorState], BaseAgentExecutor): # type: ignor
@model_validator(mode="after")
def _setup_executor(self) -> Self:
"""Configure executor after Pydantic field initialization."""
-self._i18n = self.i18n or get_i18n()
self.before_llm_call_hooks.extend(get_before_llm_call_hooks())
self.after_llm_call_hooks.extend(get_after_llm_call_hooks())
@@ -363,7 +360,6 @@ class AgentExecutor(Flow[AgentExecutorState], BaseAgentExecutor): # type: ignor
function_calling_llm=self.function_calling_llm,
request_within_rpm_limit=self.request_within_rpm_limit,
callbacks=self.callbacks,
-i18n=self._i18n,
)
return self._step_executor
@@ -1203,7 +1199,6 @@ class AgentExecutor(Flow[AgentExecutorState], BaseAgentExecutor): # type: ignor
formatted_answer = handle_max_iterations_exceeded(
formatted_answer=None,
printer=PRINTER,
-i18n=self._i18n,
messages=list(self.state.messages),
llm=self.llm,
callbacks=self.callbacks,
@@ -1430,7 +1425,6 @@ class AgentExecutor(Flow[AgentExecutorState], BaseAgentExecutor): # type: ignor
agent_action=action,
fingerprint_context=fingerprint_context,
tools=self.tools,
-i18n=self._i18n,
agent_key=self.agent.key if self.agent else None,
agent_role=self.agent.role if self.agent else None,
tools_handler=self.tools_handler,
@@ -1450,7 +1444,7 @@ class AgentExecutor(Flow[AgentExecutorState], BaseAgentExecutor): # type: ignor
action.result = str(e)
self._append_message_to_state(action.text)
-reasoning_prompt = self._i18n.slice("post_tool_reasoning")
+reasoning_prompt = I18N_DEFAULT.slice("post_tool_reasoning")
reasoning_message: LLMMessage = {
"role": "user",
"content": reasoning_prompt,
@@ -1471,7 +1465,7 @@ class AgentExecutor(Flow[AgentExecutorState], BaseAgentExecutor): # type: ignor
self.state.is_finished = True
return "tool_result_is_final"
-reasoning_prompt = self._i18n.slice("post_tool_reasoning")
+reasoning_prompt = I18N_DEFAULT.slice("post_tool_reasoning")
reasoning_message_post: LLMMessage = {
"role": "user",
"content": reasoning_prompt,
@@ -2222,10 +2216,10 @@ class AgentExecutor(Flow[AgentExecutorState], BaseAgentExecutor): # type: ignor
# Build synthesis prompt
role = self.agent.role if self.agent else "Assistant"
-system_prompt = self._i18n.retrieve(
+system_prompt = I18N_DEFAULT.retrieve(
"planning", "synthesis_system_prompt"
).format(role=role)
-user_prompt = self._i18n.retrieve("planning", "synthesis_user_prompt").format(
+user_prompt = I18N_DEFAULT.retrieve("planning", "synthesis_user_prompt").format(
task_description=task_description,
combined_steps=combined_steps,
)
@@ -2472,7 +2466,7 @@ class AgentExecutor(Flow[AgentExecutorState], BaseAgentExecutor): # type: ignor
self.task.description if self.task else getattr(self, "_kickoff_input", "")
)
-enhancement = self._i18n.retrieve(
+enhancement = I18N_DEFAULT.retrieve(
"planning", "replan_enhancement_prompt"
).format(previous_context=previous_context)
@@ -2535,7 +2529,6 @@ class AgentExecutor(Flow[AgentExecutorState], BaseAgentExecutor): # type: ignor
messages=self.state.messages,
llm=self.llm,
callbacks=self.callbacks,
-i18n=self._i18n,
verbose=self.agent.verbose,
)
@@ -2746,7 +2739,7 @@ class AgentExecutor(Flow[AgentExecutorState], BaseAgentExecutor): # type: ignor
Returns:
Updated action or final answer.
"""
-add_image_tool = self._i18n.tools("add_image")
+add_image_tool = I18N_DEFAULT.tools("add_image")
if (
isinstance(add_image_tool, dict)
and formatted_answer.tool.casefold().strip()

View File

@@ -1455,6 +1455,25 @@ class Flow(BaseModel, Generic[T], metaclass=FlowMeta):
"No pending feedback context. Use from_pending() to restore a paused flow."
)
+if get_current_parent_id() is None:
+reset_emission_counter()
+reset_last_event_id()
+if not self.suppress_flow_events:
+future = crewai_event_bus.emit(
+self,
+FlowStartedEvent(
+type="flow_started",
+flow_name=self.name or self.__class__.__name__,
+inputs=None,
+),
+)
+if future and isinstance(future, Future):
+try:
+await asyncio.wrap_future(future)
+except Exception:
+logger.warning("FlowStartedEvent handler failed", exc_info=True)
context = self._pending_feedback_context
emit = context.emit
default_outcome = context.default_outcome
@@ -1594,16 +1613,39 @@ class Flow(BaseModel, Generic[T], metaclass=FlowMeta):
final_result = self._method_outputs[-1] if self._method_outputs else result
# Emit flow finished
crewai_event_bus.emit(
self,
FlowFinishedEvent(
type="flow_finished",
flow_name=self.name or self.__class__.__name__,
result=final_result,
state=self._state,
),
)
if self._event_futures:
await asyncio.gather(
*[
asyncio.wrap_future(f)
for f in self._event_futures
if isinstance(f, Future)
]
)
self._event_futures.clear()
+ if not self.suppress_flow_events:
+ future = crewai_event_bus.emit(
+ self,
+ FlowFinishedEvent(
+ type="flow_finished",
+ flow_name=self.name or self.__class__.__name__,
+ result=final_result,
+ state=self._copy_and_serialize_state(),
+ ),
+ )
+ if future and isinstance(future, Future):
+ try:
+ await asyncio.wrap_future(future)
+ except Exception:
+ logger.warning("FlowFinishedEvent handler failed", exc_info=True)
+ trace_listener = TraceCollectionListener()
+ if trace_listener.batch_manager.batch_owner_type == "flow":
+ if trace_listener.first_time_handler.is_first_time:
+ trace_listener.first_time_handler.mark_events_collected()
+ trace_listener.first_time_handler.handle_execution_completion()
+ else:
+ trace_listener.batch_manager.finalize_batch()
return final_result
@@ -3194,7 +3236,7 @@ class Flow(BaseModel, Generic[T], metaclass=FlowMeta):
from crewai.llm import LLM
from crewai.llms.base_llm import BaseLLM as BaseLLMClass
- from crewai.utilities.i18n import get_i18n
+ from crewai.utilities.i18n import I18N_DEFAULT
llm_instance: BaseLLMClass
if isinstance(llm, str):
@@ -3214,9 +3256,7 @@ class Flow(BaseModel, Generic[T], metaclass=FlowMeta):
description=f"The outcome that best matches the feedback. Must be one of: {', '.join(outcomes)}"
)
# Load prompt from translations (using cached instance)
- i18n = get_i18n()
- prompt_template = i18n.slice("human_feedback_collapse")
+ prompt_template = I18N_DEFAULT.slice("human_feedback_collapse")
prompt = prompt_template.format(
feedback=feedback,

View File

@@ -350,9 +350,9 @@ def human_feedback(
def _get_hitl_prompt(key: str) -> str:
"""Read a HITL prompt from the i18n translations."""
- from crewai.utilities.i18n import get_i18n
+ from crewai.utilities.i18n import I18N_DEFAULT
- return get_i18n().slice(key)
+ return I18N_DEFAULT.slice(key)
def _resolve_llm_instance() -> Any:
"""Resolve the ``llm`` parameter to a BaseLLM instance.

View File

@@ -89,7 +89,7 @@ from crewai.utilities.converter import (
)
from crewai.utilities.guardrail import process_guardrail
from crewai.utilities.guardrail_types import GuardrailCallable, GuardrailType
- from crewai.utilities.i18n import I18N, get_i18n
+ from crewai.utilities.i18n import I18N_DEFAULT
from crewai.utilities.llm_utils import create_llm
from crewai.utilities.printer import PRINTER
from crewai.utilities.pydantic_schema_utils import generate_model_description
@@ -227,9 +227,6 @@ class LiteAgent(FlowTrackable, BaseModel):
default=None,
description="Callback to check if the request is within the RPM limit",
)
- i18n: I18N = Field(
- default_factory=get_i18n, description="Internationalization settings."
- )
response_format: type[BaseModel] | None = Field(
default=None, description="Pydantic model for structured output"
)
@@ -571,7 +568,7 @@ class LiteAgent(FlowTrackable, BaseModel):
f"- {m.record.content}" for m in matches
)
if memory_block:
- formatted = self.i18n.slice("memory").format(memory=memory_block)
+ formatted = I18N_DEFAULT.slice("memory").format(memory=memory_block)
if self._messages and self._messages[0].get("role") == "system":
existing_content = self._messages[0].get("content", "")
if not isinstance(existing_content, str):
@@ -644,7 +641,7 @@ class LiteAgent(FlowTrackable, BaseModel):
try:
model_schema = generate_model_description(active_response_format)
schema = json.dumps(model_schema, indent=2)
- instructions = self.i18n.slice("formatted_task_instructions").format(
+ instructions = I18N_DEFAULT.slice("formatted_task_instructions").format(
output_format=schema
)
@@ -793,7 +790,9 @@ class LiteAgent(FlowTrackable, BaseModel):
base_prompt = ""
if self._parsed_tools:
# Use the prompt template for agents with tools
- base_prompt = self.i18n.slice("lite_agent_system_prompt_with_tools").format(
+ base_prompt = I18N_DEFAULT.slice(
+ "lite_agent_system_prompt_with_tools"
+ ).format(
role=self.role,
backstory=self.backstory,
goal=self.goal,
@@ -802,7 +801,7 @@ class LiteAgent(FlowTrackable, BaseModel):
)
else:
# Use the prompt template for agents without tools
- base_prompt = self.i18n.slice(
+ base_prompt = I18N_DEFAULT.slice(
"lite_agent_system_prompt_without_tools"
).format(
role=self.role,
@@ -814,7 +813,7 @@ class LiteAgent(FlowTrackable, BaseModel):
if active_response_format:
model_description = generate_model_description(active_response_format)
schema_json = json.dumps(model_description, indent=2)
- base_prompt += self.i18n.slice("lite_agent_response_format").format(
+ base_prompt += I18N_DEFAULT.slice("lite_agent_response_format").format(
response_format=schema_json
)
@@ -875,7 +874,6 @@ class LiteAgent(FlowTrackable, BaseModel):
formatted_answer = handle_max_iterations_exceeded(
formatted_answer,
printer=PRINTER,
- i18n=self.i18n,
messages=self._messages,
llm=cast(LLM, self.llm),
callbacks=self._callbacks,
@@ -914,7 +912,6 @@ class LiteAgent(FlowTrackable, BaseModel):
tool_result = execute_tool_and_check_finality(
agent_action=formatted_answer,
tools=self._parsed_tools,
- i18n=self.i18n,
agent_key=self.key,
agent_role=self.role,
agent=self.original_agent,
@@ -956,7 +953,6 @@ class LiteAgent(FlowTrackable, BaseModel):
messages=self._messages,
llm=cast(LLM, self.llm),
callbacks=self._callbacks,
- i18n=self.i18n,
verbose=self.verbose,
)
continue

View File

@@ -38,6 +38,7 @@ from crewai.llms.base_llm import (
get_current_call_id,
llm_call_context,
)
+ from crewai.utilities.streaming import get_current_stream_run_id
from crewai.llms.constants import (
ANTHROPIC_MODELS,
AZURE_MODELS,
@@ -790,6 +791,7 @@ class LLM(BaseLLM):
call_type=LLMCallType.LLM_CALL,
response_id=response_id,
call_id=get_current_call_id(),
+ run_id=get_current_stream_run_id(),
),
)
# --- 4) Fallback to non-streaming if no content received
@@ -1003,6 +1005,7 @@ class LLM(BaseLLM):
call_type=LLMCallType.TOOL_CALL,
response_id=response_id,
call_id=get_current_call_id(),
+ run_id=get_current_stream_run_id(),
),
)
@@ -1456,6 +1459,7 @@ class LLM(BaseLLM):
from_agent=from_agent,
response_id=response_id,
call_id=get_current_call_id(),
+ run_id=get_current_stream_run_id(),
),
)

View File

@@ -36,6 +36,7 @@ from crewai.events.types.llm_events import (
LLMStreamChunkEvent,
LLMThinkingChunkEvent,
)
+ from crewai.utilities.streaming import get_current_stream_run_id
from crewai.events.types.tool_usage_events import (
ToolUsageErrorEvent,
ToolUsageFinishedEvent,
@@ -527,6 +528,7 @@ class BaseLLM(BaseModel, ABC):
call_type=call_type,
response_id=response_id,
call_id=get_current_call_id(),
+ run_id=get_current_stream_run_id(),
),
)

View File

@@ -9,7 +9,7 @@ from typing import Any
from pydantic import BaseModel, ConfigDict, Field
from crewai.memory.types import MemoryRecord, ScopeInfo
- from crewai.utilities.i18n import get_i18n
+ from crewai.utilities.i18n import I18N_DEFAULT
_logger = logging.getLogger(__name__)
@@ -149,7 +149,7 @@ def _get_prompt(key: str) -> str:
Returns:
The prompt string.
"""
- return get_i18n().memory(key)
+ return I18N_DEFAULT.memory(key)
def extract_memories_from_content(content: str, llm: Any) -> list[str]:

View File

@@ -80,7 +80,7 @@ from crewai.utilities.guardrail_types import (
GuardrailType,
GuardrailsType,
)
- from crewai.utilities.i18n import I18N, get_i18n
+ from crewai.utilities.i18n import I18N_DEFAULT
from crewai.utilities.printer import PRINTER
from crewai.utilities.string_utils import interpolate_only
@@ -115,7 +115,6 @@ class Task(BaseModel):
used_tools: int = 0
tools_errors: int = 0
delegations: int = 0
- i18n: I18N = Field(default_factory=get_i18n)
name: str | None = Field(default=None)
prompt_context: str | None = None
description: str = Field(description="Description of the actual task.")
@@ -896,7 +895,7 @@ class Task(BaseModel):
tasks_slices = [description]
- output = self.i18n.slice("expected_output").format(
+ output = I18N_DEFAULT.slice("expected_output").format(
expected_output=self.expected_output
)
tasks_slices = [description, output]
@@ -968,7 +967,7 @@ Follow these guidelines:
raise ValueError(f"Error interpolating output_file path: {e!s}") from e
if inputs.get("crew_chat_messages"):
- conversation_instruction = self.i18n.slice(
+ conversation_instruction = I18N_DEFAULT.slice(
"conversation_history_instruction"
)
@@ -1219,7 +1218,7 @@ Follow these guidelines:
self.retry_count += 1
current_retry_count = self.retry_count
- context = self.i18n.errors("validation_error").format(
+ context = I18N_DEFAULT.errors("validation_error").format(
guardrail_result_error=guardrail_result.error,
task_output=task_output.raw,
)
@@ -1316,7 +1315,7 @@ Follow these guidelines:
self.retry_count += 1
current_retry_count = self.retry_count
- context = self.i18n.errors("validation_error").format(
+ context = I18N_DEFAULT.errors("validation_error").format(
guardrail_result_error=guardrail_result.error,
task_output=task_output.raw,
)

View File

@@ -52,6 +52,7 @@ from crewai.telemetry.utils import (
add_crew_attributes,
close_span,
)
+ from crewai.utilities.i18n import I18N_DEFAULT
from crewai.utilities.logger_utils import suppress_warnings
from crewai.utilities.string_utils import sanitize_tool_name
@@ -314,7 +315,7 @@ class Telemetry:
"verbose?": agent.verbose,
"max_iter": agent.max_iter,
"max_rpm": agent.max_rpm,
- "i18n": agent.i18n.prompt_file,
+ "i18n": I18N_DEFAULT.prompt_file,
"function_calling_llm": (
getattr(
getattr(agent, "function_calling_llm", None),
@@ -844,7 +845,7 @@ class Telemetry:
"verbose?": agent.verbose,
"max_iter": agent.max_iter,
"max_rpm": agent.max_rpm,
- "i18n": agent.i18n.prompt_file,
+ "i18n": I18N_DEFAULT.prompt_file,
"llm": agent.llm.model
if isinstance(agent.llm, BaseLLM)
else str(agent.llm),

View File

@@ -3,10 +3,7 @@ from typing import Any
from pydantic import BaseModel, Field
from crewai.tools.base_tool import BaseTool
- from crewai.utilities import I18N
- i18n = I18N()
+ from crewai.utilities.i18n import I18N_DEFAULT
class AddImageToolSchema(BaseModel):
@@ -19,9 +16,9 @@ class AddImageToolSchema(BaseModel):
class AddImageTool(BaseTool):
"""Tool for adding images to the content"""
- name: str = Field(default_factory=lambda: i18n.tools("add_image")["name"]) # type: ignore[index]
+ name: str = Field(default_factory=lambda: I18N_DEFAULT.tools("add_image")["name"]) # type: ignore[index]
description: str = Field(
- default_factory=lambda: i18n.tools("add_image")["description"] # type: ignore[index]
+ default_factory=lambda: I18N_DEFAULT.tools("add_image")["description"] # type: ignore[index]
)
args_schema: type[BaseModel] = AddImageToolSchema
@@ -31,7 +28,7 @@ class AddImageTool(BaseTool):
action: str | None = None,
**kwargs: Any,
) -> dict[str, Any]:
- action = action or i18n.tools("add_image")["default_action"] # type: ignore
+ action = action or I18N_DEFAULT.tools("add_image")["default_action"] # type: ignore
content = [
{"type": "text", "text": action},
{

View File
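The AddImageTool hunk above drops the module-level `i18n = I18N()` in favor of the shared `I18N_DEFAULT`, read lazily through `default_factory`. A rough illustration of why the lookup lives in a factory rather than a value baked in at import time (a `dataclass` and a dict-backed catalog stand in for the real pydantic model and I18N class; all names here are simplified):

```python
from dataclasses import dataclass, field

# Hypothetical catalog; the real strings live in crewai's translation JSON.
_TOOL_STRINGS = {
    "add_image": {
        "name": "Add image to content",
        "description": "Attach an image for the model to inspect",
    }
}


class _CatalogStandIn:
    """Stand-in for the shared I18N_DEFAULT object."""

    def tools(self, key: str) -> dict[str, str]:
        return _TOOL_STRINGS[key]


I18N_DEFAULT = _CatalogStandIn()


@dataclass
class AddImageTool:
    # default_factory defers the catalog lookup until instantiation, so every
    # instance reads the one shared catalog object rather than a copy frozen
    # when the module was first imported.
    name: str = field(default_factory=lambda: I18N_DEFAULT.tools("add_image")["name"])
    description: str = field(
        default_factory=lambda: I18N_DEFAULT.tools("add_image")["description"]
    )


tool = AddImageTool()
```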

@@ -5,21 +5,19 @@ from typing import TYPE_CHECKING
from crewai.tools.agent_tools.ask_question_tool import AskQuestionTool
from crewai.tools.agent_tools.delegate_work_tool import DelegateWorkTool
- from crewai.utilities.i18n import get_i18n
+ from crewai.utilities.i18n import I18N_DEFAULT
if TYPE_CHECKING:
from crewai.agents.agent_builder.base_agent import BaseAgent
from crewai.tools.base_tool import BaseTool
- from crewai.utilities.i18n import I18N
class AgentTools:
"""Manager class for agent-related tools"""
- def __init__(self, agents: Sequence[BaseAgent], i18n: I18N | None = None) -> None:
+ def __init__(self, agents: Sequence[BaseAgent]) -> None:
self.agents = agents
- self.i18n = i18n if i18n is not None else get_i18n()
def tools(self) -> list[BaseTool]:
"""Get all available agent tools"""
@@ -27,14 +25,12 @@ class AgentTools:
delegate_tool = DelegateWorkTool(
agents=self.agents,
- i18n=self.i18n,
- description=self.i18n.tools("delegate_work").format(coworkers=coworkers), # type: ignore
+ description=I18N_DEFAULT.tools("delegate_work").format(coworkers=coworkers), # type: ignore
)
ask_tool = AskQuestionTool(
agents=self.agents,
- i18n=self.i18n,
- description=self.i18n.tools("ask_question").format(coworkers=coworkers), # type: ignore
+ description=I18N_DEFAULT.tools("ask_question").format(coworkers=coworkers), # type: ignore
)
return [delegate_tool, ask_tool]

View File

@@ -6,7 +6,7 @@ from pydantic import Field
from crewai.agents.agent_builder.base_agent import BaseAgent
from crewai.task import Task
from crewai.tools.base_tool import BaseTool
- from crewai.utilities.i18n import I18N, get_i18n
+ from crewai.utilities.i18n import I18N_DEFAULT
logger = logging.getLogger(__name__)
@@ -16,9 +16,6 @@ class BaseAgentTool(BaseTool):
"""Base class for agent-related tools"""
agents: list[BaseAgent] = Field(description="List of available agents")
- i18n: I18N = Field(
- default_factory=get_i18n, description="Internationalization settings"
- )
def sanitize_agent_name(self, name: str) -> str:
"""
@@ -93,7 +90,7 @@ class BaseAgentTool(BaseTool):
)
except (AttributeError, ValueError) as e:
# Handle specific exceptions that might occur during role name processing
- return self.i18n.errors("agent_tool_unexisting_coworker").format(
+ return I18N_DEFAULT.errors("agent_tool_unexisting_coworker").format(
coworkers="\n".join(
[
f"- {self.sanitize_agent_name(agent.role)}"
@@ -105,7 +102,7 @@ class BaseAgentTool(BaseTool):
if not agent:
# No matching agent found after sanitization
- return self.i18n.errors("agent_tool_unexisting_coworker").format(
+ return I18N_DEFAULT.errors("agent_tool_unexisting_coworker").format(
coworkers="\n".join(
[
f"- {self.sanitize_agent_name(agent.role)}"
@@ -120,8 +117,7 @@ class BaseAgentTool(BaseTool):
task_with_assigned_agent = Task(
description=task,
agent=selected_agent,
- expected_output=selected_agent.i18n.slice("manager_request"),
- i18n=selected_agent.i18n,
+ expected_output=I18N_DEFAULT.slice("manager_request"),
)
logger.debug(
f"Created task for agent '{self.sanitize_agent_name(selected_agent.role)}': {task}"
@@ -129,6 +125,6 @@ class BaseAgentTool(BaseTool):
return selected_agent.execute_task(task_with_assigned_agent, context)
except Exception as e:
# Handle task creation or execution errors
- return self.i18n.errors("agent_tool_execution_error").format(
+ return I18N_DEFAULT.errors("agent_tool_execution_error").format(
agent_role=self.sanitize_agent_name(selected_agent.role), error=str(e)
)

View File

@@ -7,7 +7,7 @@ from typing import Any
from pydantic import BaseModel, Field
from crewai.tools.base_tool import BaseTool
- from crewai.utilities.i18n import get_i18n
+ from crewai.utilities.i18n import I18N_DEFAULT
class RecallMemorySchema(BaseModel):
@@ -114,18 +114,17 @@ def create_memory_tools(memory: Any) -> list[BaseTool]:
Returns:
List containing a RecallMemoryTool and, if not read-only, a RememberTool.
"""
- i18n = get_i18n()
tools: list[BaseTool] = [
RecallMemoryTool(
memory=memory,
- description=i18n.tools("recall_memory"),
+ description=I18N_DEFAULT.tools("recall_memory"),
),
]
if not memory.read_only:
tools.append(
RememberTool(
memory=memory,
- description=i18n.tools("save_to_memory"),
+ description=I18N_DEFAULT.tools("save_to_memory"),
)
)
return tools

View File

@@ -28,7 +28,7 @@ from crewai.utilities.agent_utils import (
render_text_description_and_args,
)
from crewai.utilities.converter import Converter
- from crewai.utilities.i18n import I18N, get_i18n
+ from crewai.utilities.i18n import I18N_DEFAULT
from crewai.utilities.printer import PRINTER
from crewai.utilities.string_utils import sanitize_tool_name
@@ -93,7 +93,6 @@ class ToolUsage:
action: Any = None,
fingerprint_context: dict[str, str] | None = None,
) -> None:
- self._i18n: I18N = agent.i18n if agent else get_i18n()
self._telemetry: Telemetry = Telemetry()
self._run_attempts: int = 1
self._max_parsing_attempts: int = 3
@@ -146,7 +145,7 @@ class ToolUsage:
if (
isinstance(tool, CrewStructuredTool)
and sanitize_tool_name(tool.name)
- == sanitize_tool_name(self._i18n.tools("add_image")["name"]) # type: ignore
+ == sanitize_tool_name(I18N_DEFAULT.tools("add_image")["name"]) # type: ignore
):
try:
return self._use(tool_string=tool_string, tool=tool, calling=calling)
@@ -194,7 +193,7 @@ class ToolUsage:
if (
isinstance(tool, CrewStructuredTool)
and sanitize_tool_name(tool.name)
- == sanitize_tool_name(self._i18n.tools("add_image")["name"]) # type: ignore
+ == sanitize_tool_name(I18N_DEFAULT.tools("add_image")["name"]) # type: ignore
):
try:
return await self._ause(
@@ -230,7 +229,7 @@ class ToolUsage:
"""
if self._check_tool_repeated_usage(calling=calling):
try:
- result = self._i18n.errors("task_repeated_usage").format(
+ result = I18N_DEFAULT.errors("task_repeated_usage").format(
tool_names=self.tools_names
)
self._telemetry.tool_repeated_usage(
@@ -415,7 +414,7 @@ class ToolUsage:
self._run_attempts += 1
if self._run_attempts > self._max_parsing_attempts:
self._telemetry.tool_usage_error(llm=self.function_calling_llm)
- error_message = self._i18n.errors(
+ error_message = I18N_DEFAULT.errors(
"tool_usage_exception"
).format(
error=e,
@@ -423,7 +422,7 @@ class ToolUsage:
tool_inputs=tool.description,
)
result = ToolUsageError(
- f"\n{error_message}.\nMoving on then. {self._i18n.slice('format').format(tool_names=self.tools_names)}"
+ f"\n{error_message}.\nMoving on then. {I18N_DEFAULT.slice('format').format(tool_names=self.tools_names)}"
).message
if self.task:
self.task.increment_tools_errors()
@@ -461,7 +460,7 @@ class ToolUsage:
# Repeated usage check happens before event emission - safe to return early
if self._check_tool_repeated_usage(calling=calling):
try:
- result = self._i18n.errors("task_repeated_usage").format(
+ result = I18N_DEFAULT.errors("task_repeated_usage").format(
tool_names=self.tools_names
)
self._telemetry.tool_repeated_usage(
@@ -648,7 +647,7 @@ class ToolUsage:
self._run_attempts += 1
if self._run_attempts > self._max_parsing_attempts:
self._telemetry.tool_usage_error(llm=self.function_calling_llm)
- error_message = self._i18n.errors(
+ error_message = I18N_DEFAULT.errors(
"tool_usage_exception"
).format(
error=e,
@@ -656,7 +655,7 @@ class ToolUsage:
tool_inputs=tool.description,
)
result = ToolUsageError(
- f"\n{error_message}.\nMoving on then. {self._i18n.slice('format').format(tool_names=self.tools_names)}"
+ f"\n{error_message}.\nMoving on then. {I18N_DEFAULT.slice('format').format(tool_names=self.tools_names)}"
).message
if self.task:
self.task.increment_tools_errors()
@@ -699,7 +698,7 @@ class ToolUsage:
def _remember_format(self, result: str) -> str:
result = str(result)
- result += "\n\n" + self._i18n.slice("tools").format(
+ result += "\n\n" + I18N_DEFAULT.slice("tools").format(
tools=self.tools_description, tool_names=self.tools_names
)
return result
@@ -825,12 +824,12 @@ class ToolUsage:
except Exception:
if raise_error:
raise
- return ToolUsageError(f"{self._i18n.errors('tool_arguments_error')}")
+ return ToolUsageError(f"{I18N_DEFAULT.errors('tool_arguments_error')}")
if not isinstance(arguments, dict):
if raise_error:
raise
- return ToolUsageError(f"{self._i18n.errors('tool_arguments_error')}")
+ return ToolUsageError(f"{I18N_DEFAULT.errors('tool_arguments_error')}")
return ToolCalling(
tool_name=sanitize_tool_name(tool.name),
@@ -856,7 +855,7 @@ class ToolUsage:
if self.agent and self.agent.verbose:
PRINTER.print(content=f"\n\n{e}\n", color="red")
return ToolUsageError(
- f"{self._i18n.errors('tool_usage_error').format(error=e)}\nMoving on then. {self._i18n.slice('format').format(tool_names=self.tools_names)}"
+ f"{I18N_DEFAULT.errors('tool_usage_error').format(error=e)}\nMoving on then. {I18N_DEFAULT.slice('format').format(tool_names=self.tools_names)}"
)
return self._tool_calling(tool_string)

View File

@@ -31,7 +31,7 @@ from crewai.utilities.errors import AgentRepositoryError
from crewai.utilities.exceptions.context_window_exceeding_exception import (
LLMContextLengthExceededError,
)
- from crewai.utilities.i18n import I18N
+ from crewai.utilities.i18n import I18N_DEFAULT
from crewai.utilities.printer import PRINTER, ColoredText, Printer
from crewai.utilities.pydantic_schema_utils import generate_model_description
from crewai.utilities.string_utils import sanitize_tool_name
@@ -254,7 +254,6 @@ def has_reached_max_iterations(iterations: int, max_iterations: int) -> bool:
def handle_max_iterations_exceeded(
formatted_answer: AgentAction | AgentFinish | None,
printer: Printer,
- i18n: I18N,
messages: list[LLMMessage],
llm: LLM | BaseLLM,
callbacks: list[TokenCalcHandler],
@@ -265,7 +264,6 @@ def handle_max_iterations_exceeded(
Args:
formatted_answer: The last formatted answer from the agent.
printer: Printer instance for output.
- i18n: I18N instance for internationalization.
messages: List of messages to send to the LLM.
llm: The LLM instance to call.
callbacks: List of callbacks for the LLM call.
@@ -282,10 +280,10 @@ def handle_max_iterations_exceeded(
if formatted_answer and hasattr(formatted_answer, "text"):
assistant_message = (
- formatted_answer.text + f"\n{i18n.errors('force_final_answer')}"
+ formatted_answer.text + f"\n{I18N_DEFAULT.errors('force_final_answer')}"
)
else:
- assistant_message = i18n.errors("force_final_answer")
+ assistant_message = I18N_DEFAULT.errors("force_final_answer")
messages.append(format_message_for_llm(assistant_message, role="assistant"))
@@ -687,7 +685,6 @@ def handle_context_length(
messages: list[LLMMessage],
llm: LLM | BaseLLM,
callbacks: list[TokenCalcHandler],
- i18n: I18N,
verbose: bool = True,
) -> None:
"""Handle context length exceeded by either summarizing or raising an error.
@@ -698,7 +695,6 @@ def handle_context_length(
messages: List of messages to summarize
llm: LLM instance for summarization
callbacks: List of callbacks for LLM
- i18n: I18N instance for messages
Raises:
SystemExit: If context length is exceeded and user opts not to summarize
@@ -710,7 +706,7 @@ def handle_context_length(
color="yellow",
)
summarize_messages(
- messages=messages, llm=llm, callbacks=callbacks, i18n=i18n, verbose=verbose
+ messages=messages, llm=llm, callbacks=callbacks, verbose=verbose
)
else:
if verbose:
@@ -863,7 +859,6 @@ async def _asummarize_chunks(
chunks: list[list[LLMMessage]],
llm: LLM | BaseLLM,
callbacks: list[TokenCalcHandler],
- i18n: I18N,
) -> list[SummaryContent]:
"""Summarize multiple message chunks concurrently using asyncio.
@@ -871,7 +866,6 @@ async def _asummarize_chunks(
chunks: List of message chunks to summarize.
llm: LLM instance (must support ``acall``).
callbacks: List of callbacks for the LLM.
- i18n: I18N instance for prompt templates.
Returns:
Ordered list of summary contents, one per chunk.
@@ -881,10 +875,10 @@ async def _asummarize_chunks(
conversation_text = _format_messages_for_summary(chunk)
summarization_messages = [
format_message_for_llm(
- i18n.slice("summarizer_system_message"), role="system"
+ I18N_DEFAULT.slice("summarizer_system_message"), role="system"
),
format_message_for_llm(
- i18n.slice("summarize_instruction").format(
+ I18N_DEFAULT.slice("summarize_instruction").format(
conversation=conversation_text
),
),
@@ -901,7 +895,6 @@ def summarize_messages(
messages: list[LLMMessage],
llm: LLM | BaseLLM,
callbacks: list[TokenCalcHandler],
- i18n: I18N,
verbose: bool = True,
) -> None:
"""Summarize messages to fit within context window.
@@ -917,7 +910,6 @@ def summarize_messages(
messages: List of messages to summarize (modified in-place)
llm: LLM instance for summarization
callbacks: List of callbacks for LLM
- i18n: I18N instance for messages
verbose: Whether to print progress.
"""
# 1. Extract & preserve file attachments from user messages
@@ -953,10 +945,10 @@ def summarize_messages(
conversation_text = _format_messages_for_summary(chunk)
summarization_messages = [
format_message_for_llm(
- i18n.slice("summarizer_system_message"), role="system"
+ I18N_DEFAULT.slice("summarizer_system_message"), role="system"
),
format_message_for_llm(
- i18n.slice("summarize_instruction").format(
+ I18N_DEFAULT.slice("summarize_instruction").format(
conversation=conversation_text
),
),
@@ -971,9 +963,7 @@ def summarize_messages(
content=f"Summarizing {total_chunks} chunks in parallel...",
color="yellow",
)
- coro = _asummarize_chunks(
- chunks=chunks, llm=llm, callbacks=callbacks, i18n=i18n
- )
+ coro = _asummarize_chunks(chunks=chunks, llm=llm, callbacks=callbacks)
if is_inside_event_loop():
ctx = contextvars.copy_context()
with concurrent.futures.ThreadPoolExecutor(max_workers=1) as pool:
@@ -988,7 +978,7 @@ def summarize_messages(
messages.extend(system_messages)
summary_message = format_message_for_llm(
- i18n.slice("summary").format(merged_summary=merged_summary)
+ I18N_DEFAULT.slice("summary").format(merged_summary=merged_summary)
)
if preserved_files:
summary_message["files"] = preserved_files

View File

@@ -8,7 +8,7 @@ from pydantic import BaseModel, ValidationError
from typing_extensions import Unpack
from crewai.agents.agent_builder.utilities.base_output_converter import OutputConverter
- from crewai.utilities.i18n import get_i18n
+ from crewai.utilities.i18n import I18N_DEFAULT
from crewai.utilities.internal_instructor import InternalInstructor
from crewai.utilities.printer import PRINTER
from crewai.utilities.pydantic_schema_utils import generate_model_description
@@ -21,7 +21,7 @@ if TYPE_CHECKING:
from crewai.llms.base_llm import BaseLLM
_JSON_PATTERN: Final[re.Pattern[str]] = re.compile(r"({.*})", re.DOTALL)
- _I18N = get_i18n()
+ _I18N = I18N_DEFAULT
class ConverterError(Exception):

View File

@@ -8,7 +8,7 @@ from pydantic import BaseModel, Field
from crewai.events.event_bus import crewai_event_bus
from crewai.events.types.task_events import TaskEvaluationEvent
from crewai.utilities.converter import Converter
- from crewai.utilities.i18n import get_i18n
+ from crewai.utilities.i18n import I18N_DEFAULT
from crewai.utilities.pydantic_schema_utils import generate_model_description
from crewai.utilities.training_converter import TrainingConverter
@@ -98,11 +98,9 @@ class TaskEvaluator:
if not self.llm.supports_function_calling(): # type: ignore[union-attr]
schema_dict = generate_model_description(TaskEvaluation)
- output_schema: str = (
- get_i18n()
- .slice("formatted_task_instructions")
- .format(output_format=json.dumps(schema_dict, indent=2))
- )
+ output_schema: str = I18N_DEFAULT.slice(
+ "formatted_task_instructions"
+ ).format(output_format=json.dumps(schema_dict, indent=2))
instructions = f"{instructions}\n\n{output_schema}"
converter = Converter(
@@ -174,11 +172,9 @@ class TaskEvaluator:
if not self.llm.supports_function_calling(): # type: ignore[union-attr]
schema_dict = generate_model_description(TrainingTaskEvaluation)
- output_schema: str = (
- get_i18n()
- .slice("formatted_task_instructions")
- .format(output_format=json.dumps(schema_dict, indent=2))
- )
+ output_schema: str = I18N_DEFAULT.slice(
+ "formatted_task_instructions"
+ ).format(output_format=json.dumps(schema_dict, indent=2))
instructions = f"{instructions}\n\n{output_schema}"
converter = TrainingConverter(

View File

@@ -142,3 +142,6 @@ def get_i18n(prompt_file: str | None = None) -> I18N:
Cached I18N instance.
"""
return I18N(prompt_file=prompt_file)
+ I18N_DEFAULT: I18N = get_i18n()

View File

@@ -6,7 +6,7 @@ from typing import Any, Literal
from pydantic import BaseModel, Field
- from crewai.utilities.i18n import I18N, get_i18n
+ from crewai.utilities.i18n import I18N_DEFAULT
class StandardPromptResult(BaseModel):
@@ -49,7 +49,6 @@ class Prompts(BaseModel):
- Need to refactor so that prompt is not tightly coupled to agent.
"""
- i18n: I18N = Field(default_factory=get_i18n)
has_tools: bool = Field(
default=False, description="Indicates if the agent has access to tools"
)
@@ -140,13 +139,13 @@ class Prompts(BaseModel):
if not system_template or not prompt_template:
# If any of the required templates are missing, fall back to the default format
prompt_parts: list[str] = [
- self.i18n.slice(component) for component in components
+ I18N_DEFAULT.slice(component) for component in components
]
prompt = "".join(prompt_parts)
else:
# All templates are provided, use them
template_parts: list[str] = [
- self.i18n.slice(component)
+ I18N_DEFAULT.slice(component)
for component in components
if component != "task"
]
@@ -154,7 +153,7 @@ class Prompts(BaseModel):
"{{ .System }}", "".join(template_parts)
)
prompt = prompt_template.replace(
- "{{ .Prompt }}", "".join(self.i18n.slice("task"))
+ "{{ .Prompt }}", "".join(I18N_DEFAULT.slice("task"))
)
# Handle missing response_template
if response_template:

View File

@@ -15,6 +15,7 @@ from crewai.events.types.reasoning_events import (
AgentReasoningStartedEvent,
)
from crewai.llm import LLM
+ from crewai.utilities.i18n import I18N_DEFAULT
from crewai.utilities.llm_utils import create_llm
from crewai.utilities.planning_types import PlanStep
from crewai.utilities.string_utils import sanitize_tool_name
@@ -481,17 +482,17 @@ class AgentReasoning:
"""Get the system prompt for planning.
Returns:
- The system prompt, either custom or from i18n.
+ The system prompt, either custom or from I18N_DEFAULT.
"""
if self.config.system_prompt is not None:
return self.config.system_prompt
# Try new "planning" section first, fall back to "reasoning" for compatibility
try:
- return self.agent.i18n.retrieve("planning", "system_prompt")
+ return I18N_DEFAULT.retrieve("planning", "system_prompt")
except (KeyError, AttributeError):
# Fallback to reasoning section for backward compatibility
- return self.agent.i18n.retrieve("reasoning", "initial_plan").format(
+ return I18N_DEFAULT.retrieve("reasoning", "initial_plan").format(
role=self.agent.role,
goal=self.agent.goal,
backstory=self._get_agent_backstory(),
@@ -527,7 +528,7 @@ class AgentReasoning:
# Try new "planning" section first
try:
- return self.agent.i18n.retrieve("planning", "create_plan_prompt").format(
+ return I18N_DEFAULT.retrieve("planning", "create_plan_prompt").format(
description=self.description,
expected_output=self.expected_output,
tools=available_tools,
@@ -535,7 +536,7 @@ class AgentReasoning:
)
except (KeyError, AttributeError):
# Fallback to reasoning section for backward compatibility
- return self.agent.i18n.retrieve("reasoning", "create_plan_prompt").format(
+ return I18N_DEFAULT.retrieve("reasoning", "create_plan_prompt").format(
role=self.agent.role,
goal=self.agent.goal,
backstory=self._get_agent_backstory(),
@@ -584,12 +585,12 @@ class AgentReasoning:
# Try new "planning" section first
try:
- return self.agent.i18n.retrieve("planning", "refine_plan_prompt").format(
+ return I18N_DEFAULT.retrieve("planning", "refine_plan_prompt").format(
current_plan=current_plan,
)
except (KeyError, AttributeError):
# Fallback to reasoning section for backward compatibility
- return self.agent.i18n.retrieve("reasoning", "refine_plan_prompt").format(
+ return I18N_DEFAULT.retrieve("reasoning", "refine_plan_prompt").format(
role=self.agent.role,
goal=self.agent.goal,
backstory=self._get_agent_backstory(),
@@ -642,7 +643,7 @@ def _call_llm_with_reasoning_prompt(
Returns:
The LLM response.
"""
- system_prompt = reasoning_agent.i18n.retrieve("reasoning", plan_type).format(
+ system_prompt = I18N_DEFAULT.retrieve("reasoning", plan_type).format(
role=reasoning_agent.role,
goal=reasoning_agent.goal,
backstory=backstory,

View File

@@ -7,6 +7,7 @@ import logging
import queue
import threading
from typing import Any, NamedTuple
+ import uuid
from typing_extensions import TypedDict
@@ -25,6 +26,17 @@ from crewai.utilities.string_utils import sanitize_tool_name
logger = logging.getLogger(__name__)
+ # ContextVar that tracks the current streaming run_id.
+ # Set by create_streaming_state() so that LLM emit paths can stamp events.
+ _current_stream_run_id: contextvars.ContextVar[str | None] = contextvars.ContextVar(
+ "_current_stream_run_id", default=None
+ )
+ def get_current_stream_run_id() -> str | None:
+ """Return the active streaming run_id for the current context, if any."""
+ return _current_stream_run_id.get()
class TaskInfo(TypedDict):
"""Task context information for streaming."""
@@ -106,6 +118,7 @@ def _create_stream_handler(
sync_queue: queue.Queue[StreamChunk | None | Exception],
async_queue: asyncio.Queue[StreamChunk | None | Exception] | None = None,
loop: asyncio.AbstractEventLoop | None = None,
+ run_id: str | None = None,
) -> Callable[[Any, BaseEvent], None]:
"""Create a stream handler function.
@@ -114,6 +127,9 @@ def _create_stream_handler(
sync_queue: Synchronous queue for chunks.
async_queue: Optional async queue for chunks.
loop: Optional event loop for async operations.
+ run_id: Unique identifier for this streaming run. When set, the handler
+ only accepts events whose ``run_id`` matches, preventing cross-run
+ chunk contamination in concurrent streaming scenarios.
Returns:
Handler function that can be registered with the event bus.
@@ -129,6 +145,10 @@ def _create_stream_handler(
if not isinstance(event, LLMStreamChunkEvent):
return
# Filter: only accept events belonging to this streaming run.
if run_id is not None and event.run_id is not None and event.run_id != run_id:
return
chunk = _create_stream_chunk(event, current_task_info)
if async_queue is not None and loop is not None:
@@ -187,6 +207,16 @@ def create_streaming_state(
) -> StreamingState:
"""Create and register streaming state.
Each call assigns a ``run_id`` that is:
* stored in a ``contextvars.ContextVar`` so that downstream LLM emit
paths can stamp ``LLMStreamChunkEvent.run_id`` automatically, and
* passed to the stream handler so it only accepts events with a
matching ``run_id``, preventing cross-run chunk contamination.
If the current context already carries a ``run_id`` (e.g. a parent
flow already created a streaming state), the existing value is reused
so that nested streaming (flow → crew) shares the same scope.
Args:
current_task_info: Task context info.
result_holder: List to hold the final result.
@@ -195,6 +225,9 @@ def create_streaming_state(
Returns:
Initialized StreamingState with registered handler.
"""
run_id = _current_stream_run_id.get() or str(uuid.uuid4())
_current_stream_run_id.set(run_id)
sync_queue: queue.Queue[StreamChunk | None | Exception] = queue.Queue()
async_queue: asyncio.Queue[StreamChunk | None | Exception] | None = None
loop: asyncio.AbstractEventLoop | None = None
@@ -203,7 +236,9 @@ def create_streaming_state(
async_queue = asyncio.Queue()
loop = asyncio.get_event_loop()
-handler = _create_stream_handler(current_task_info, sync_queue, async_queue, loop)
+handler = _create_stream_handler(
+    current_task_info, sync_queue, async_queue, loop, run_id=run_id
+)
crewai_event_bus.register_handler(LLMStreamChunkEvent, handler)
return StreamingState(

View File

@@ -13,7 +13,7 @@ from crewai.security.fingerprint import Fingerprint
from crewai.tools.structured_tool import CrewStructuredTool
from crewai.tools.tool_types import ToolResult
from crewai.tools.tool_usage import ToolUsage, ToolUsageError
-from crewai.utilities.i18n import I18N
+from crewai.utilities.i18n import I18N_DEFAULT
from crewai.utilities.logger import Logger
from crewai.utilities.string_utils import sanitize_tool_name
@@ -30,7 +30,6 @@ if TYPE_CHECKING:
async def aexecute_tool_and_check_finality(
agent_action: AgentAction,
tools: list[CrewStructuredTool],
i18n: I18N,
agent_key: str | None = None,
agent_role: str | None = None,
tools_handler: ToolsHandler | None = None,
@@ -49,7 +48,6 @@ async def aexecute_tool_and_check_finality(
Args:
agent_action: The action containing the tool to execute.
tools: List of available tools.
i18n: Internationalization settings.
agent_key: Optional key for event emission.
agent_role: Optional role for event emission.
tools_handler: Optional tools handler for tool execution.
@@ -142,7 +140,7 @@ async def aexecute_tool_and_check_finality(
return ToolResult(modified_result, tool.result_as_answer)
-tool_result = i18n.errors("wrong_tool_name").format(
+tool_result = I18N_DEFAULT.errors("wrong_tool_name").format(
tool=sanitized_tool_name,
tools=", ".join(tool_name_to_tool_map.keys()),
)
@@ -152,7 +150,6 @@ async def aexecute_tool_and_check_finality(
def execute_tool_and_check_finality(
agent_action: AgentAction,
tools: list[CrewStructuredTool],
i18n: I18N,
agent_key: str | None = None,
agent_role: str | None = None,
tools_handler: ToolsHandler | None = None,
@@ -170,7 +167,6 @@ def execute_tool_and_check_finality(
Args:
agent_action: The action containing the tool to execute
tools: List of available tools
i18n: Internationalization settings
agent_key: Optional key for event emission
agent_role: Optional role for event emission
tools_handler: Optional tools handler for tool execution
@@ -263,7 +259,7 @@ def execute_tool_and_check_finality(
return ToolResult(modified_result, tool.result_as_answer)
-tool_result = i18n.errors("wrong_tool_name").format(
+tool_result = I18N_DEFAULT.errors("wrong_tool_name").format(
tool=sanitized_tool_name,
tools=", ".join(tool_name_to_tool_map.keys()),
)

View File

@@ -1208,12 +1208,10 @@ def test_llm_call_with_error():
def test_handle_context_length_exceeds_limit():
# Import necessary modules
from crewai.utilities.agent_utils import handle_context_length
from crewai.utilities.i18n import I18N
from crewai.utilities.printer import Printer
# Create mocks for dependencies
printer = Printer()
i18n = I18N()
# Create an agent just for its LLM
agent = Agent(
@@ -1249,7 +1247,6 @@ def test_handle_context_length_exceeds_limit():
messages=messages,
llm=llm,
callbacks=callbacks,
i18n=i18n,
)
# Verify our patch was called and raised the correct error
@@ -1994,7 +1991,7 @@ def test_litellm_anthropic_error_handling():
@pytest.mark.vcr()
def test_get_knowledge_search_query():
"""Test that _get_knowledge_search_query calls the LLM with the correct prompts."""
-from crewai.utilities.i18n import I18N
+from crewai.utilities.i18n import I18N_DEFAULT
content = "The capital of France is Paris."
string_source = StringKnowledgeSource(content=content)
@@ -2013,7 +2010,6 @@ def test_get_knowledge_search_query():
agent=agent,
)
i18n = I18N()
task_prompt = task.prompt()
with (
@@ -2050,13 +2046,13 @@ def test_get_knowledge_search_query():
[
{
"role": "system",
-"content": i18n.slice(
+"content": I18N_DEFAULT.slice(
"knowledge_search_query_system_prompt"
).format(task_prompt=task.description),
},
{
"role": "user",
-"content": i18n.slice("knowledge_search_query").format(
+"content": I18N_DEFAULT.slice("knowledge_search_query").format(
task_prompt=task_prompt
),
},

View File

@@ -48,8 +48,6 @@ def _build_executor(**kwargs: Any) -> AgentExecutor:
executor._last_context_error = None
executor._step_executor = None
executor._planner_observer = None
from crewai.utilities.i18n import get_i18n
executor._i18n = kwargs.get("i18n") or get_i18n()
return executor
from crewai.agents.planner_observer import PlannerObserver
from crewai.experimental.agent_executor import (

View File

@@ -861,6 +861,265 @@ class TestStreamingCancellation:
assert not streaming.is_cancelled
class TestStreamingRunIsolation:
"""Tests for concurrent streaming run isolation (issue #5376).
The singleton event bus fans out events to all registered handlers.
Without run_id scoping, concurrent streaming runs receive each other's
chunks. These tests verify that the run_id filtering prevents
cross-run chunk contamination.
"""
def test_handler_ignores_events_from_different_run(self) -> None:
"""A handler with run_id must reject events carrying a different run_id."""
import queue as _queue
from crewai.utilities.streaming import _create_stream_handler, TaskInfo
task_info: TaskInfo = {
"index": 0,
"name": "task-a",
"id": "tid-a",
"agent_role": "Agent",
"agent_id": "aid-a",
}
q: _queue.Queue[StreamChunk | None | Exception] = _queue.Queue()
handler = _create_stream_handler(task_info, q, run_id="run-A")
# Event from a *different* run must be silently dropped.
foreign_event = LLMStreamChunkEvent(
chunk="foreign-chunk",
call_id="cid",
run_id="run-B",
)
handler(None, foreign_event)
assert q.empty(), "Handler must not enqueue events from a different run_id"
# Event from the *same* run must be enqueued.
own_event = LLMStreamChunkEvent(
chunk="own-chunk",
call_id="cid",
run_id="run-A",
)
handler(None, own_event)
assert not q.empty(), "Handler must enqueue events with matching run_id"
item = q.get_nowait()
assert item.content == "own-chunk"
def test_concurrent_streaming_states_do_not_cross_contaminate(self) -> None:
"""Two streaming states created in separate contexts (simulating
concurrent runs) must each receive only their own events, even
though both handlers are registered on the same global event bus.
"""
import contextvars
from crewai.utilities.streaming import (
create_streaming_state,
_current_stream_run_id,
TaskInfo,
_unregister_handler,
)
task_a: TaskInfo = {
"index": 0,
"name": "task-a",
"id": "tid-a",
"agent_role": "Agent-A",
"agent_id": "aid-a",
}
task_b: TaskInfo = {
"index": 1,
"name": "task-b",
"id": "tid-b",
"agent_role": "Agent-B",
"agent_id": "aid-b",
}
def _create_in_fresh_context(
task_info: TaskInfo,
) -> "StreamingState":
"""Reset the run_id contextvar and create streaming state."""
_current_stream_run_id.set(None)
return create_streaming_state(task_info, [])
# Create each streaming state in a *separate* context so they get
# distinct run_ids (simulates truly concurrent runs).
state_a = contextvars.copy_context().run(_create_in_fresh_context, task_a)
state_b = contextvars.copy_context().run(_create_in_fresh_context, task_b)
# Extract run_ids from handler closures.
def _get_run_id_from_handler(handler: Any) -> str | None:
"""Extract the run_id captured in the handler closure."""
fn = handler
if hasattr(fn, "__wrapped__"):
fn = fn.__wrapped__
for cell in (fn.__closure__ or []):
try:
val = cell.cell_contents
if isinstance(val, str) and len(val) == 36 and val.count("-") == 4:
return val
except ValueError:
continue
return None
rid_a = _get_run_id_from_handler(state_a.handler)
rid_b = _get_run_id_from_handler(state_b.handler)
assert rid_a is not None and rid_b is not None
assert rid_a != rid_b, "Each streaming state must have a unique run_id"
# Emit events for run A.
for i in range(3):
crewai_event_bus.emit(
self,
LLMStreamChunkEvent(
chunk=f"A-{i}",
call_id="cid-a",
run_id=rid_a,
),
)
# Emit events for run B.
for i in range(3):
crewai_event_bus.emit(
self,
LLMStreamChunkEvent(
chunk=f"B-{i}",
call_id="cid-b",
run_id=rid_b,
),
)
# Drain queues.
chunks_a = []
while not state_a.sync_queue.empty():
chunks_a.append(state_a.sync_queue.get_nowait())
chunks_b = []
while not state_b.sync_queue.empty():
chunks_b.append(state_b.sync_queue.get_nowait())
# Verify isolation.
contents_a = [c.content for c in chunks_a]
contents_b = [c.content for c in chunks_b]
assert contents_a == ["A-0", "A-1", "A-2"], (
f"State A must only contain its own chunks, got {contents_a}"
)
assert contents_b == ["B-0", "B-1", "B-2"], (
f"State B must only contain its own chunks, got {contents_b}"
)
# No cross-contamination.
for c in contents_a:
assert not c.startswith("B-"), f"Run A received run B chunk: {c}"
for c in contents_b:
assert not c.startswith("A-"), f"Run B received run A chunk: {c}"
# Cleanup.
_unregister_handler(state_a.handler)
_unregister_handler(state_b.handler)
def test_concurrent_threads_isolated(self) -> None:
"""Simulate two concurrent streaming runs in separate threads and
verify that each collects only its own chunks.
"""
import contextvars
import threading
import time
from crewai.utilities.streaming import (
create_streaming_state,
get_current_stream_run_id,
TaskInfo,
_unregister_handler,
)
results: dict[str, list[str]] = {"A": [], "B": []}
errors: list[Exception] = []
def run_streaming(label: str, task_info: TaskInfo) -> None:
try:
state = create_streaming_state(task_info, [])
run_id = get_current_stream_run_id()
assert run_id is not None
# Simulate LLM emitting chunks stamped with this run's id.
for i in range(5):
crewai_event_bus.emit(
None,
LLMStreamChunkEvent(
chunk=f"{label}-{i}",
call_id=f"cid-{label}",
run_id=run_id,
),
)
time.sleep(0.005)
# Drain the queue.
while not state.sync_queue.empty():
item = state.sync_queue.get_nowait()
results[label].append(item.content)
_unregister_handler(state.handler)
except Exception as exc:
errors.append(exc)
task_a: TaskInfo = {
"index": 0,
"name": "task-a",
"id": "tid-a",
"agent_role": "Agent-A",
"agent_id": "aid-a",
}
task_b: TaskInfo = {
"index": 1,
"name": "task-b",
"id": "tid-b",
"agent_role": "Agent-B",
"agent_id": "aid-b",
}
t_a = threading.Thread(target=run_streaming, args=("A", task_a))
t_b = threading.Thread(target=run_streaming, args=("B", task_b))
t_a.start()
t_b.start()
t_a.join(timeout=10)
t_b.join(timeout=10)
assert not errors, f"Threads raised errors: {errors}"
# Each thread must see only its own chunks.
for c in results["A"]:
assert c.startswith("A-"), f"Run A received foreign chunk: {c}"
for c in results["B"]:
assert c.startswith("B-"), f"Run B received foreign chunk: {c}"
assert len(results["A"]) == 5, (
f"Run A expected 5 chunks, got {len(results['A'])}: {results['A']}"
)
assert len(results["B"]) == 5, (
f"Run B expected 5 chunks, got {len(results['B'])}: {results['B']}"
)
def test_run_id_stamped_on_llm_stream_chunk_event(self) -> None:
"""Verify that LLMStreamChunkEvent accepts and stores run_id."""
event = LLMStreamChunkEvent(
chunk="test",
call_id="cid",
run_id="my-run-id",
)
assert event.run_id == "my-run-id"
def test_run_id_defaults_to_none(self) -> None:
"""Verify that run_id defaults to None when not provided."""
event = LLMStreamChunkEvent(
chunk="test",
call_id="cid",
)
assert event.run_id is None
class TestStreamingImports:
"""Tests for correct imports of streaming types."""

View File

@@ -308,7 +308,6 @@ def test_validate_tool_input_invalid_input():
mock_agent.key = "test_agent_key" # Must be a string
mock_agent.role = "test_agent_role" # Must be a string
mock_agent._original_role = "test_agent_role" # Must be a string
mock_agent.i18n = MagicMock()
mock_agent.verbose = False
# Create mock action with proper string value
@@ -443,7 +442,6 @@ def test_tool_selection_error_event_direct():
mock_agent = MagicMock()
mock_agent.key = "test_key"
mock_agent.role = "test_role"
mock_agent.i18n = MagicMock()
mock_agent.verbose = False
mock_task = MagicMock()
@@ -518,13 +516,6 @@ def test_tool_validate_input_error_event():
mock_agent.verbose = False
mock_agent._original_role = "test_role"
# Mock i18n with error message
mock_i18n = MagicMock()
mock_i18n.errors.return_value = (
"Tool input must be a valid dictionary in JSON or Python literal format"
)
mock_agent.i18n = mock_i18n
# Mock task and tools handler
mock_task = MagicMock()
mock_tools_handler = MagicMock()
@@ -590,7 +581,6 @@ def test_tool_usage_finished_event_with_result():
mock_agent.key = "test_agent_key"
mock_agent.role = "test_agent_role"
mock_agent._original_role = "test_agent_role"
mock_agent.i18n = MagicMock()
mock_agent.verbose = False
# Create mock task
@@ -670,7 +660,6 @@ def test_tool_usage_finished_event_with_cached_result():
mock_agent.key = "test_agent_key"
mock_agent.role = "test_agent_role"
mock_agent._original_role = "test_agent_role"
mock_agent.i18n = MagicMock()
mock_agent.verbose = False
# Create mock task
@@ -761,9 +750,6 @@ def test_tool_error_does_not_emit_finished_event():
mock_agent._original_role = "test_agent_role"
mock_agent.verbose = False
mock_agent.fingerprint = None
mock_agent.i18n.tools.return_value = {"name": "Add Image"}
mock_agent.i18n.errors.return_value = "Error: {error}"
mock_agent.i18n.slice.return_value = "Available tools: {tool_names}"
mock_task = MagicMock()
mock_task.delegations = 0

View File

@@ -225,16 +225,6 @@ class TestConvertToolsToOpenaiSchema:
assert max_results_prop["default"] == 10
def _make_mock_i18n() -> MagicMock:
"""Create a mock i18n with the new structured prompt keys."""
mock_i18n = MagicMock()
mock_i18n.slice.side_effect = lambda key: {
"summarizer_system_message": "You are a precise assistant that creates structured summaries.",
"summarize_instruction": "Summarize the conversation:\n{conversation}",
"summary": "<summary>\n{merged_summary}\n</summary>\nContinue the task.",
}.get(key, "")
return mock_i18n
class MCPStyleInput(BaseModel):
"""Input schema mimicking an MCP tool with optional fields."""
@@ -330,7 +320,7 @@ class TestSummarizeMessages:
messages=messages,
llm=mock_llm,
callbacks=[],
i18n=_make_mock_i18n(),
)
# System message preserved + summary message = 2
@@ -361,7 +351,7 @@ class TestSummarizeMessages:
messages=messages,
llm=mock_llm,
callbacks=[],
i18n=_make_mock_i18n(),
)
assert len(messages) == 1
@@ -387,7 +377,7 @@ class TestSummarizeMessages:
messages=messages,
llm=mock_llm,
callbacks=[],
i18n=_make_mock_i18n(),
)
assert len(messages) == 1
@@ -410,7 +400,7 @@ class TestSummarizeMessages:
messages=messages,
llm=mock_llm,
callbacks=[],
i18n=_make_mock_i18n(),
)
assert id(messages) == original_list_id
@@ -432,7 +422,7 @@ class TestSummarizeMessages:
messages=messages,
llm=mock_llm,
callbacks=[],
i18n=_make_mock_i18n(),
)
assert len(messages) == 2
@@ -456,7 +446,7 @@ class TestSummarizeMessages:
messages=messages,
llm=mock_llm,
callbacks=[],
i18n=_make_mock_i18n(),
)
# Check what was passed to llm.call
@@ -482,7 +472,7 @@ class TestSummarizeMessages:
messages=messages,
llm=mock_llm,
callbacks=[],
i18n=_make_mock_i18n(),
)
assert "The extracted summary content." in messages[0]["content"]
@@ -506,7 +496,7 @@ class TestSummarizeMessages:
messages=messages,
llm=mock_llm,
callbacks=[],
i18n=_make_mock_i18n(),
)
# Verify the conversation text sent to LLM contains tool labels
@@ -528,7 +518,7 @@ class TestSummarizeMessages:
messages=messages,
llm=mock_llm,
callbacks=[],
i18n=_make_mock_i18n(),
)
# No LLM call should have been made
@@ -733,7 +723,7 @@ class TestParallelSummarization:
messages=messages,
llm=mock_llm,
callbacks=[],
i18n=_make_mock_i18n(),
)
# acall should have been awaited once per chunk
@@ -757,7 +747,7 @@ class TestParallelSummarization:
messages=messages,
llm=mock_llm,
callbacks=[],
i18n=_make_mock_i18n(),
)
mock_llm.call.assert_called_once()
@@ -788,7 +778,7 @@ class TestParallelSummarization:
messages=messages,
llm=mock_llm,
callbacks=[],
i18n=_make_mock_i18n(),
)
# The final summary message should have A, B, C in order
@@ -816,7 +806,7 @@ class TestParallelSummarization:
chunks=[chunk_a, chunk_b],
llm=mock_llm,
callbacks=[],
i18n=_make_mock_i18n(),
)
)
@@ -843,7 +833,7 @@ class TestParallelSummarization:
messages=messages,
llm=mock_llm,
callbacks=[],
i18n=_make_mock_i18n(),
)
assert mock_llm.acall.await_count == 2
@@ -940,10 +930,8 @@ class TestParallelSummarizationVCR:
def test_parallel_summarize_openai(self) -> None:
"""Test that parallel summarization with gpt-4o-mini produces a valid summary."""
from crewai.llm import LLM
from crewai.utilities.i18n import I18N
llm = LLM(model="gpt-4o-mini", temperature=0)
i18n = I18N()
messages = _build_long_conversation()
original_system = messages[0]["content"]
@@ -959,7 +947,6 @@ class TestParallelSummarizationVCR:
messages=messages,
llm=llm,
callbacks=[],
i18n=i18n,
)
# System message preserved
@@ -975,10 +962,8 @@ class TestParallelSummarizationVCR:
def test_parallel_summarize_preserves_files(self) -> None:
"""Test that file references survive parallel summarization."""
from crewai.llm import LLM
from crewai.utilities.i18n import I18N
llm = LLM(model="gpt-4o-mini", temperature=0)
i18n = I18N()
messages = _build_long_conversation()
mock_file = MagicMock()
@@ -989,7 +974,6 @@ class TestParallelSummarizationVCR:
messages=messages,
llm=llm,
callbacks=[],
i18n=i18n,
)
summary_msg = messages[-1]

View File

@@ -147,8 +147,6 @@ class TestAgentReasoningWithMockedLLM:
agent.backstory = "Test backstory"
agent.verbose = False
agent.planning_config = PlanningConfig()
agent.i18n = MagicMock()
agent.i18n.retrieve.return_value = "Test prompt: {description}"
# Mock the llm attribute
agent.llm = MagicMock()
agent.llm.supports_function_calling.return_value = True

View File

@@ -14,7 +14,6 @@ from crewai.crew import Crew
from crewai.llm import LLM
from crewai.task import Task
from crewai.utilities.agent_utils import summarize_messages
from crewai.utilities.i18n import I18N
def _build_conversation_messages(
@@ -90,7 +89,7 @@ class TestSummarizeDirectOpenAI:
def test_summarize_direct_openai(self) -> None:
"""Test summarize_messages with gpt-4o-mini preserves system messages."""
llm = LLM(model="gpt-4o-mini", temperature=0)
i18n = I18N()
messages = _build_conversation_messages(include_system=True)
original_system_content = messages[0]["content"]
@@ -99,7 +98,7 @@ class TestSummarizeDirectOpenAI:
messages=messages,
llm=llm,
callbacks=[],
i18n=i18n,
)
# System message should be preserved
@@ -122,14 +121,14 @@ class TestSummarizeDirectAnthropic:
def test_summarize_direct_anthropic(self) -> None:
"""Test summarize_messages with claude-3-5-haiku."""
llm = LLM(model="anthropic/claude-3-5-haiku-latest", temperature=0)
i18n = I18N()
messages = _build_conversation_messages(include_system=True)
summarize_messages(
messages=messages,
llm=llm,
callbacks=[],
i18n=i18n,
)
assert len(messages) >= 2
@@ -148,14 +147,14 @@ class TestSummarizeDirectGemini:
def test_summarize_direct_gemini(self) -> None:
"""Test summarize_messages with gemini-2.0-flash."""
llm = LLM(model="gemini/gemini-2.0-flash", temperature=0)
i18n = I18N()
messages = _build_conversation_messages(include_system=True)
summarize_messages(
messages=messages,
llm=llm,
callbacks=[],
i18n=i18n,
)
assert len(messages) >= 2
@@ -174,14 +173,14 @@ class TestSummarizeDirectAzure:
def test_summarize_direct_azure(self) -> None:
"""Test summarize_messages with azure/gpt-4o-mini."""
llm = LLM(model="azure/gpt-4o-mini", temperature=0)
i18n = I18N()
messages = _build_conversation_messages(include_system=True)
summarize_messages(
messages=messages,
llm=llm,
callbacks=[],
i18n=i18n,
)
assert len(messages) >= 2
@@ -261,7 +260,7 @@ class TestSummarizePreservesFiles:
def test_summarize_preserves_files_integration(self) -> None:
"""Test that file references survive a real summarization call."""
llm = LLM(model="gpt-4o-mini", temperature=0)
i18n = I18N()
messages = _build_conversation_messages(
include_system=True, include_files=True
)
@@ -270,7 +269,7 @@ class TestSummarizePreservesFiles:
messages=messages,
llm=llm,
callbacks=[],
i18n=i18n,
)
# System message preserved

View File

@@ -1,3 +1,3 @@
"""CrewAI development tools."""
-__version__ = "1.14.1"
+__version__ = "1.14.2a1"

View File

@@ -161,13 +161,14 @@ info = "Commits must follow Conventional Commits 1.0.0."
[tool.uv]
-exclude-newer = "3 days"
+exclude-newer = "2026-04-10" # pinned for CVE-2026-39892; restore to "3 days" after 2026-04-11
# composio-core pins rich<14 but textual requires rich>=14.
# onnxruntime 1.24+ dropped Python 3.10 wheels; cap it so qdrant[fastembed] resolves on 3.10.
# fastembed 0.7.x and docling 2.63 cap pillow<12; the removed APIs don't affect them.
# langchain-core <1.2.11 has SSRF via image_url token counting (CVE-2026-26013).
# transformers 4.57.6 has CVE-2026-1839; force 5.4+ (docling 2.84 allows huggingface-hub>=1).
# cryptography 46.0.6 has CVE-2026-39892; force 46.0.7+.
override-dependencies = [
"rich>=13.7.1",
"onnxruntime<1.24; python_version < '3.11'",
@@ -175,6 +176,7 @@ override-dependencies = [
"langchain-core>=1.2.11,<2",
"urllib3>=2.6.3",
"transformers>=5.4.0; python_version >= '3.10'",
"cryptography>=46.0.7",
]
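Each entry in `override-dependencies` forces a floor version to exclude a CVE-affected release. A tiny illustration of the floor check such a `>=` specifier enforces, using hypothetical helper names and handling only plain `x.y.z` release versions (real resolvers use full PEP 440 comparison):

```python
def version_tuple(v: str) -> tuple[int, ...]:
    # "46.0.7" -> (46, 0, 7); sufficient for simple release versions only.
    return tuple(int(part) for part in v.split("."))

def satisfies_floor(installed: str, floor: str) -> bool:
    # Mirrors a ">=floor" specifier: lexicographic tuple comparison.
    return version_tuple(installed) >= version_tuple(floor)
```

Under the `cryptography>=46.0.7` override, the CVE-affected 46.0.6 release fails this check while 46.0.7 and later pass, which is exactly the resolution flip the lockfile diff below records.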
[tool.uv.workspace]

uv.lock generated
View File

@@ -13,8 +13,7 @@ resolution-markers = [
]
[options]
-exclude-newer = "2026-04-05T11:09:48.9111Z"
-exclude-newer-span = "P3D"
+exclude-newer = "2026-04-10T16:00:00Z"
[manifest]
members = [
@@ -24,6 +23,7 @@ members = [
"crewai-tools",
]
overrides = [
{ name = "cryptography", specifier = ">=46.0.7" },
{ name = "langchain-core", specifier = ">=1.2.11,<2" },
{ name = "onnxruntime", marker = "python_full_version < '3.11'", specifier = "<1.24" },
{ name = "pillow", specifier = ">=12.1.1" },
@@ -1583,48 +1583,48 @@ provides-extras = ["apify", "beautifulsoup4", "bedrock", "browserbase", "composi
[[package]]
name = "cryptography"
-version = "46.0.6"
+version = "46.0.7"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "cffi", marker = "platform_python_implementation != 'PyPy'" },
{ name = "typing-extensions", marker = "python_full_version < '3.11'" },
]
-sdist = { url = "https://files.pythonhosted.org/packages/a4/ba/04b1bd4218cbc58dc90ce967106d51582371b898690f3ae0402876cc4f34/cryptography-46.0.6.tar.gz", hash = "sha256:27550628a518c5c6c903d84f637fbecf287f6cb9ced3804838a1295dc1fd0759", size = 750542, upload-time = "2026-03-25T23:34:53.396Z" }
+sdist = { url = "https://files.pythonhosted.org/packages/47/93/ac8f3d5ff04d54bc814e961a43ae5b0b146154c89c61b47bb07557679b18/cryptography-46.0.7.tar.gz", hash = "sha256:e4cfd68c5f3e0bfdad0d38e023239b96a2fe84146481852dffbcca442c245aa5", size = 750652, upload-time = "2026-04-08T01:57:54.692Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/47/23/9285e15e3bc57325b0a72e592921983a701efc1ee8f91c06c5f0235d86d9/cryptography-46.0.6-cp311-abi3-macosx_10_9_universal2.whl", hash = "sha256:64235194bad039a10bb6d2d930ab3323baaec67e2ce36215fd0952fad0930ca8", size = 7176401, upload-time = "2026-03-25T23:33:22.096Z" },
{ url = "https://files.pythonhosted.org/packages/60/f8/e61f8f13950ab6195b31913b42d39f0f9afc7d93f76710f299b5ec286ae6/cryptography-46.0.6-cp311-abi3-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:26031f1e5ca62fcb9d1fcb34b2b60b390d1aacaa15dc8b895a9ed00968b97b30", size = 4275275, upload-time = "2026-03-25T23:33:23.844Z" },
{ url = "https://files.pythonhosted.org/packages/19/69/732a736d12c2631e140be2348b4ad3d226302df63ef64d30dfdb8db7ad1c/cryptography-46.0.6-cp311-abi3-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:9a693028b9cbe51b5a1136232ee8f2bc242e4e19d456ded3fa7c86e43c713b4a", size = 4425320, upload-time = "2026-03-25T23:33:25.703Z" },
{ url = "https://files.pythonhosted.org/packages/d4/12/123be7292674abf76b21ac1fc0e1af50661f0e5b8f0ec8285faac18eb99e/cryptography-46.0.6-cp311-abi3-manylinux_2_28_aarch64.whl", hash = "sha256:67177e8a9f421aa2d3a170c3e56eca4e0128883cf52a071a7cbf53297f18b175", size = 4278082, upload-time = "2026-03-25T23:33:27.423Z" },
{ url = "https://files.pythonhosted.org/packages/5b/ba/d5e27f8d68c24951b0a484924a84c7cdaed7502bac9f18601cd357f8b1d2/cryptography-46.0.6-cp311-abi3-manylinux_2_28_ppc64le.whl", hash = "sha256:d9528b535a6c4f8ff37847144b8986a9a143585f0540fbcb1a98115b543aa463", size = 4926514, upload-time = "2026-03-25T23:33:29.206Z" },
{ url = "https://files.pythonhosted.org/packages/34/71/1ea5a7352ae516d5512d17babe7e1b87d9db5150b21f794b1377eac1edc0/cryptography-46.0.6-cp311-abi3-manylinux_2_28_x86_64.whl", hash = "sha256:22259338084d6ae497a19bae5d4c66b7ca1387d3264d1c2c0e72d9e9b6a77b97", size = 4457766, upload-time = "2026-03-25T23:33:30.834Z" },
{ url = "https://files.pythonhosted.org/packages/01/59/562be1e653accee4fdad92c7a2e88fced26b3fdfce144047519bbebc299e/cryptography-46.0.6-cp311-abi3-manylinux_2_31_armv7l.whl", hash = "sha256:760997a4b950ff00d418398ad73fbc91aa2894b5c1db7ccb45b4f68b42a63b3c", size = 3986535, upload-time = "2026-03-25T23:33:33.02Z" },
{ url = "https://files.pythonhosted.org/packages/d6/8b/b1ebfeb788bf4624d36e45ed2662b8bd43a05ff62157093c1539c1288a18/cryptography-46.0.6-cp311-abi3-manylinux_2_34_aarch64.whl", hash = "sha256:3dfa6567f2e9e4c5dceb8ccb5a708158a2a871052fa75c8b78cb0977063f1507", size = 4277618, upload-time = "2026-03-25T23:33:34.567Z" },
{ url = "https://files.pythonhosted.org/packages/dd/52/a005f8eabdb28df57c20f84c44d397a755782d6ff6d455f05baa2785bd91/cryptography-46.0.6-cp311-abi3-manylinux_2_34_ppc64le.whl", hash = "sha256:cdcd3edcbc5d55757e5f5f3d330dd00007ae463a7e7aa5bf132d1f22a4b62b19", size = 4890802, upload-time = "2026-03-25T23:33:37.034Z" },
{ url = "https://files.pythonhosted.org/packages/ec/4d/8e7d7245c79c617d08724e2efa397737715ca0ec830ecb3c91e547302555/cryptography-46.0.6-cp311-abi3-manylinux_2_34_x86_64.whl", hash = "sha256:d4e4aadb7fc1f88687f47ca20bb7227981b03afaae69287029da08096853b738", size = 4457425, upload-time = "2026-03-25T23:33:38.904Z" },
{ url = "https://files.pythonhosted.org/packages/1d/5c/f6c3596a1430cec6f949085f0e1a970638d76f81c3ea56d93d564d04c340/cryptography-46.0.6-cp311-abi3-musllinux_1_2_aarch64.whl", hash = "sha256:2b417edbe8877cda9022dde3a008e2deb50be9c407eef034aeeb3a8b11d9db3c", size = 4405530, upload-time = "2026-03-25T23:33:40.842Z" },
{ url = "https://files.pythonhosted.org/packages/7e/c9/9f9cea13ee2dbde070424e0c4f621c091a91ffcc504ffea5e74f0e1daeff/cryptography-46.0.6-cp311-abi3-musllinux_1_2_x86_64.whl", hash = "sha256:380343e0653b1c9d7e1f55b52aaa2dbb2fdf2730088d48c43ca1c7c0abb7cc2f", size = 4667896, upload-time = "2026-03-25T23:33:42.781Z" },
{ url = "https://files.pythonhosted.org/packages/ad/b5/1895bc0821226f129bc74d00eccfc6a5969e2028f8617c09790bf89c185e/cryptography-46.0.6-cp311-abi3-win32.whl", hash = "sha256:bcb87663e1f7b075e48c3be3ecb5f0b46c8fc50b50a97cf264e7f60242dca3f2", size = 3026348, upload-time = "2026-03-25T23:33:45.021Z" },
{ url = "https://files.pythonhosted.org/packages/c3/f8/c9bcbf0d3e6ad288b9d9aa0b1dee04b063d19e8c4f871855a03ab3a297ab/cryptography-46.0.6-cp311-abi3-win_amd64.whl", hash = "sha256:6739d56300662c468fddb0e5e291f9b4d084bead381667b9e654c7dd81705124", size = 3483896, upload-time = "2026-03-25T23:33:46.649Z" },
{ url = "https://files.pythonhosted.org/packages/c4/cc/f330e982852403da79008552de9906804568ae9230da8432f7496ce02b71/cryptography-46.0.6-cp38-abi3-macosx_10_9_universal2.whl", hash = "sha256:12cae594e9473bca1a7aceb90536060643128bb274fcea0fc459ab90f7d1ae7a", size = 7162776, upload-time = "2026-03-25T23:34:13.308Z" },
{ url = "https://files.pythonhosted.org/packages/49/b3/dc27efd8dcc4bff583b3f01d4a3943cd8b5821777a58b3a6a5f054d61b79/cryptography-46.0.6-cp38-abi3-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:639301950939d844a9e1c4464d7e07f902fe9a7f6b215bb0d4f28584729935d8", size = 4270529, upload-time = "2026-03-25T23:34:15.019Z" },
{ url = "https://files.pythonhosted.org/packages/e6/05/e8d0e6eb4f0d83365b3cb0e00eb3c484f7348db0266652ccd84632a3d58d/cryptography-46.0.6-cp38-abi3-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:ed3775295fb91f70b4027aeba878d79b3e55c0b3e97eaa4de71f8f23a9f2eb77", size = 4414827, upload-time = "2026-03-25T23:34:16.604Z" },
{ url = "https://files.pythonhosted.org/packages/2f/97/daba0f5d2dc6d855e2dcb70733c812558a7977a55dd4a6722756628c44d1/cryptography-46.0.6-cp38-abi3-manylinux_2_28_aarch64.whl", hash = "sha256:8927ccfbe967c7df312ade694f987e7e9e22b2425976ddbf28271d7e58845290", size = 4271265, upload-time = "2026-03-25T23:34:18.586Z" },
{ url = "https://files.pythonhosted.org/packages/89/06/fe1fce39a37ac452e58d04b43b0855261dac320a2ebf8f5260dd55b201a9/cryptography-46.0.6-cp38-abi3-manylinux_2_28_ppc64le.whl", hash = "sha256:b12c6b1e1651e42ab5de8b1e00dc3b6354fdfd778e7fa60541ddacc27cd21410", size = 4916800, upload-time = "2026-03-25T23:34:20.561Z" },
{ url = "https://files.pythonhosted.org/packages/ff/8a/b14f3101fe9c3592603339eb5d94046c3ce5f7fc76d6512a2d40efd9724e/cryptography-46.0.6-cp38-abi3-manylinux_2_28_x86_64.whl", hash = "sha256:063b67749f338ca9c5a0b7fe438a52c25f9526b851e24e6c9310e7195aad3b4d", size = 4448771, upload-time = "2026-03-25T23:34:22.406Z" },
{ url = "https://files.pythonhosted.org/packages/01/b3/0796998056a66d1973fd52ee89dc1bb3b6581960a91ad4ac705f182d398f/cryptography-46.0.6-cp38-abi3-manylinux_2_31_armv7l.whl", hash = "sha256:02fad249cb0e090b574e30b276a3da6a149e04ee2f049725b1f69e7b8351ec70", size = 3978333, upload-time = "2026-03-25T23:34:24.281Z" },
{ url = "https://files.pythonhosted.org/packages/c5/3d/db200af5a4ffd08918cd55c08399dc6c9c50b0bc72c00a3246e099d3a849/cryptography-46.0.6-cp38-abi3-manylinux_2_34_aarch64.whl", hash = "sha256:7e6142674f2a9291463e5e150090b95a8519b2fb6e6aaec8917dd8d094ce750d", size = 4271069, upload-time = "2026-03-25T23:34:25.895Z" },
{ url = "https://files.pythonhosted.org/packages/d7/18/61acfd5b414309d74ee838be321c636fe71815436f53c9f0334bf19064fa/cryptography-46.0.6-cp38-abi3-manylinux_2_34_ppc64le.whl", hash = "sha256:456b3215172aeefb9284550b162801d62f5f264a081049a3e94307fe20792cfa", size = 4878358, upload-time = "2026-03-25T23:34:27.67Z" },
{ url = "https://files.pythonhosted.org/packages/8b/65/5bf43286d566f8171917cae23ac6add941654ccf085d739195a4eacf1674/cryptography-46.0.6-cp38-abi3-manylinux_2_34_x86_64.whl", hash = "sha256:341359d6c9e68834e204ceaf25936dffeafea3829ab80e9503860dcc4f4dac58", size = 4448061, upload-time = "2026-03-25T23:34:29.375Z" },
{ url = "https://files.pythonhosted.org/packages/e0/25/7e49c0fa7205cf3597e525d156a6bce5b5c9de1fd7e8cb01120e459f205a/cryptography-46.0.6-cp38-abi3-musllinux_1_2_aarch64.whl", hash = "sha256:9a9c42a2723999a710445bc0d974e345c32adfd8d2fac6d8a251fa829ad31cfb", size = 4399103, upload-time = "2026-03-25T23:34:32.036Z" },
{ url = "https://files.pythonhosted.org/packages/44/46/466269e833f1c4718d6cd496ffe20c56c9c8d013486ff66b4f69c302a68d/cryptography-46.0.6-cp38-abi3-musllinux_1_2_x86_64.whl", hash = "sha256:6617f67b1606dfd9fe4dbfa354a9508d4a6d37afe30306fe6c101b7ce3274b72", size = 4659255, upload-time = "2026-03-25T23:34:33.679Z" },
{ url = "https://files.pythonhosted.org/packages/0a/09/ddc5f630cc32287d2c953fc5d32705e63ec73e37308e5120955316f53827/cryptography-46.0.6-cp38-abi3-win32.whl", hash = "sha256:7f6690b6c55e9c5332c0b59b9c8a3fb232ebf059094c17f9019a51e9827df91c", size = 3010660, upload-time = "2026-03-25T23:34:35.418Z" },
{ url = "https://files.pythonhosted.org/packages/1b/82/ca4893968aeb2709aacfb57a30dec6fa2ab25b10fa9f064b8882ce33f599/cryptography-46.0.6-cp38-abi3-win_amd64.whl", hash = "sha256:79e865c642cfc5c0b3eb12af83c35c5aeff4fa5c672dc28c43721c2c9fdd2f0f", size = 3471160, upload-time = "2026-03-25T23:34:37.191Z" },
{ url = "https://files.pythonhosted.org/packages/2e/84/7ccff00ced5bac74b775ce0beb7d1be4e8637536b522b5df9b73ada42da2/cryptography-46.0.6-pp311-pypy311_pp73-macosx_11_0_arm64.whl", hash = "sha256:2ea0f37e9a9cf0df2952893ad145fd9627d326a59daec9b0802480fa3bcd2ead", size = 3475444, upload-time = "2026-03-25T23:34:38.944Z" },
{ url = "https://files.pythonhosted.org/packages/bc/1f/4c926f50df7749f000f20eede0c896769509895e2648db5da0ed55db711d/cryptography-46.0.6-pp311-pypy311_pp73-manylinux_2_28_aarch64.whl", hash = "sha256:a3e84d5ec9ba01f8fd03802b2147ba77f0c8f2617b2aff254cedd551844209c8", size = 4218227, upload-time = "2026-03-25T23:34:40.871Z" },
{ url = "https://files.pythonhosted.org/packages/c6/65/707be3ffbd5f786028665c3223e86e11c4cda86023adbc56bd72b1b6bab5/cryptography-46.0.6-pp311-pypy311_pp73-manylinux_2_28_x86_64.whl", hash = "sha256:12f0fa16cc247b13c43d56d7b35287ff1569b5b1f4c5e87e92cc4fcc00cd10c0", size = 4381399, upload-time = "2026-03-25T23:34:42.609Z" },
{ url = "https://files.pythonhosted.org/packages/f3/6d/73557ed0ef7d73d04d9aba745d2c8e95218213687ee5e76b7d236a5030fc/cryptography-46.0.6-pp311-pypy311_pp73-manylinux_2_34_aarch64.whl", hash = "sha256:50575a76e2951fe7dbd1f56d181f8c5ceeeb075e9ff88e7ad997d2f42af06e7b", size = 4217595, upload-time = "2026-03-25T23:34:44.205Z" },
{ url = "https://files.pythonhosted.org/packages/9e/c5/e1594c4eec66a567c3ac4400008108a415808be2ce13dcb9a9045c92f1a0/cryptography-46.0.6-pp311-pypy311_pp73-manylinux_2_34_x86_64.whl", hash = "sha256:90e5f0a7b3be5f40c3a0a0eafb32c681d8d2c181fc2a1bdabe9b3f611d9f6b1a", size = 4380912, upload-time = "2026-03-25T23:34:46.328Z" },
{ url = "https://files.pythonhosted.org/packages/1a/89/843b53614b47f97fe1abc13f9a86efa5ec9e275292c457af1d4a60dc80e0/cryptography-46.0.6-pp311-pypy311_pp73-win_amd64.whl", hash = "sha256:6728c49e3b2c180ef26f8e9f0a883a2c585638db64cf265b49c9ba10652d430e", size = 3409955, upload-time = "2026-03-25T23:34:48.465Z" },
{ url = "https://files.pythonhosted.org/packages/0b/5d/4a8f770695d73be252331e60e526291e3df0c9b27556a90a6b47bccca4c2/cryptography-46.0.7-cp311-abi3-macosx_10_9_universal2.whl", hash = "sha256:ea42cbe97209df307fdc3b155f1b6fa2577c0defa8f1f7d3be7d31d189108ad4", size = 7179869, upload-time = "2026-04-08T01:56:17.157Z" },
{ url = "https://files.pythonhosted.org/packages/5f/45/6d80dc379b0bbc1f9d1e429f42e4cb9e1d319c7a8201beffd967c516ea01/cryptography-46.0.7-cp311-abi3-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:b36a4695e29fe69215d75960b22577197aca3f7a25b9cf9d165dcfe9d80bc325", size = 4275492, upload-time = "2026-04-08T01:56:19.36Z" },
{ url = "https://files.pythonhosted.org/packages/4a/9a/1765afe9f572e239c3469f2cb429f3ba7b31878c893b246b4b2994ffe2fe/cryptography-46.0.7-cp311-abi3-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:5ad9ef796328c5e3c4ceed237a183f5d41d21150f972455a9d926593a1dcb308", size = 4426670, upload-time = "2026-04-08T01:56:21.415Z" },
{ url = "https://files.pythonhosted.org/packages/8f/3e/af9246aaf23cd4ee060699adab1e47ced3f5f7e7a8ffdd339f817b446462/cryptography-46.0.7-cp311-abi3-manylinux_2_28_aarch64.whl", hash = "sha256:73510b83623e080a2c35c62c15298096e2a5dc8d51c3b4e1740211839d0dea77", size = 4280275, upload-time = "2026-04-08T01:56:23.539Z" },
{ url = "https://files.pythonhosted.org/packages/0f/54/6bbbfc5efe86f9d71041827b793c24811a017c6ac0fd12883e4caa86b8ed/cryptography-46.0.7-cp311-abi3-manylinux_2_28_ppc64le.whl", hash = "sha256:cbd5fb06b62bd0721e1170273d3f4d5a277044c47ca27ee257025146c34cbdd1", size = 4928402, upload-time = "2026-04-08T01:56:25.624Z" },
{ url = "https://files.pythonhosted.org/packages/2d/cf/054b9d8220f81509939599c8bdbc0c408dbd2bdd41688616a20731371fe0/cryptography-46.0.7-cp311-abi3-manylinux_2_28_x86_64.whl", hash = "sha256:420b1e4109cc95f0e5700eed79908cef9268265c773d3a66f7af1eef53d409ef", size = 4459985, upload-time = "2026-04-08T01:56:27.309Z" },
{ url = "https://files.pythonhosted.org/packages/f9/46/4e4e9c6040fb01c7467d47217d2f882daddeb8828f7df800cb806d8a2288/cryptography-46.0.7-cp311-abi3-manylinux_2_31_armv7l.whl", hash = "sha256:24402210aa54baae71d99441d15bb5a1919c195398a87b563df84468160a65de", size = 3990652, upload-time = "2026-04-08T01:56:29.095Z" },
{ url = "https://files.pythonhosted.org/packages/36/5f/313586c3be5a2fbe87e4c9a254207b860155a8e1f3cca99f9910008e7d08/cryptography-46.0.7-cp311-abi3-manylinux_2_34_aarch64.whl", hash = "sha256:8a469028a86f12eb7d2fe97162d0634026d92a21f3ae0ac87ed1c4a447886c83", size = 4279805, upload-time = "2026-04-08T01:56:30.928Z" },
{ url = "https://files.pythonhosted.org/packages/69/33/60dfc4595f334a2082749673386a4d05e4f0cf4df8248e63b2c3437585f2/cryptography-46.0.7-cp311-abi3-manylinux_2_34_ppc64le.whl", hash = "sha256:9694078c5d44c157ef3162e3bf3946510b857df5a3955458381d1c7cfc143ddb", size = 4892883, upload-time = "2026-04-08T01:56:32.614Z" },
{ url = "https://files.pythonhosted.org/packages/c7/0b/333ddab4270c4f5b972f980adef4faa66951a4aaf646ca067af597f15563/cryptography-46.0.7-cp311-abi3-manylinux_2_34_x86_64.whl", hash = "sha256:42a1e5f98abb6391717978baf9f90dc28a743b7d9be7f0751a6f56a75d14065b", size = 4459756, upload-time = "2026-04-08T01:56:34.306Z" },
{ url = "https://files.pythonhosted.org/packages/d2/14/633913398b43b75f1234834170947957c6b623d1701ffc7a9600da907e89/cryptography-46.0.7-cp311-abi3-musllinux_1_2_aarch64.whl", hash = "sha256:91bbcb08347344f810cbe49065914fe048949648f6bd5c2519f34619142bbe85", size = 4410244, upload-time = "2026-04-08T01:56:35.977Z" },
{ url = "https://files.pythonhosted.org/packages/10/f2/19ceb3b3dc14009373432af0c13f46aa08e3ce334ec6eff13492e1812ccd/cryptography-46.0.7-cp311-abi3-musllinux_1_2_x86_64.whl", hash = "sha256:5d1c02a14ceb9148cc7816249f64f623fbfee39e8c03b3650d842ad3f34d637e", size = 4674868, upload-time = "2026-04-08T01:56:38.034Z" },
{ url = "https://files.pythonhosted.org/packages/1a/bb/a5c213c19ee94b15dfccc48f363738633a493812687f5567addbcbba9f6f/cryptography-46.0.7-cp311-abi3-win32.whl", hash = "sha256:d23c8ca48e44ee015cd0a54aeccdf9f09004eba9fc96f38c911011d9ff1bd457", size = 3026504, upload-time = "2026-04-08T01:56:39.666Z" },
{ url = "https://files.pythonhosted.org/packages/2b/02/7788f9fefa1d060ca68717c3901ae7fffa21ee087a90b7f23c7a603c32ae/cryptography-46.0.7-cp311-abi3-win_amd64.whl", hash = "sha256:397655da831414d165029da9bc483bed2fe0e75dde6a1523ec2fe63f3c46046b", size = 3488363, upload-time = "2026-04-08T01:56:41.893Z" },
{ url = "https://files.pythonhosted.org/packages/a7/7f/cd42fc3614386bc0c12f0cb3c4ae1fc2bbca5c9662dfed031514911d513d/cryptography-46.0.7-cp38-abi3-macosx_10_9_universal2.whl", hash = "sha256:462ad5cb1c148a22b2e3bcc5ad52504dff325d17daf5df8d88c17dda1f75f2a4", size = 7165618, upload-time = "2026-04-08T01:57:10.645Z" },
{ url = "https://files.pythonhosted.org/packages/a5/d0/36a49f0262d2319139d2829f773f1b97ef8aef7f97e6e5bd21455e5a8fb5/cryptography-46.0.7-cp38-abi3-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:84d4cced91f0f159a7ddacad249cc077e63195c36aac40b4150e7a57e84fffe7", size = 4270628, upload-time = "2026-04-08T01:57:12.885Z" },
{ url = "https://files.pythonhosted.org/packages/8a/6c/1a42450f464dda6ffbe578a911f773e54dd48c10f9895a23a7e88b3e7db5/cryptography-46.0.7-cp38-abi3-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:128c5edfe5e5938b86b03941e94fac9ee793a94452ad1365c9fc3f4f62216832", size = 4415405, upload-time = "2026-04-08T01:57:14.923Z" },
{ url = "https://files.pythonhosted.org/packages/9a/92/4ed714dbe93a066dc1f4b4581a464d2d7dbec9046f7c8b7016f5286329e2/cryptography-46.0.7-cp38-abi3-manylinux_2_28_aarch64.whl", hash = "sha256:5e51be372b26ef4ba3de3c167cd3d1022934bc838ae9eaad7e644986d2a3d163", size = 4272715, upload-time = "2026-04-08T01:57:16.638Z" },
{ url = "https://files.pythonhosted.org/packages/b7/e6/a26b84096eddd51494bba19111f8fffe976f6a09f132706f8f1bf03f51f7/cryptography-46.0.7-cp38-abi3-manylinux_2_28_ppc64le.whl", hash = "sha256:cdf1a610ef82abb396451862739e3fc93b071c844399e15b90726ef7470eeaf2", size = 4918400, upload-time = "2026-04-08T01:57:19.021Z" },
{ url = "https://files.pythonhosted.org/packages/c7/08/ffd537b605568a148543ac3c2b239708ae0bd635064bab41359252ef88ed/cryptography-46.0.7-cp38-abi3-manylinux_2_28_x86_64.whl", hash = "sha256:1d25aee46d0c6f1a501adcddb2d2fee4b979381346a78558ed13e50aa8a59067", size = 4450634, upload-time = "2026-04-08T01:57:21.185Z" },
{ url = "https://files.pythonhosted.org/packages/16/01/0cd51dd86ab5b9befe0d031e276510491976c3a80e9f6e31810cce46c4ad/cryptography-46.0.7-cp38-abi3-manylinux_2_31_armv7l.whl", hash = "sha256:cdfbe22376065ffcf8be74dc9a909f032df19bc58a699456a21712d6e5eabfd0", size = 3985233, upload-time = "2026-04-08T01:57:22.862Z" },
{ url = "https://files.pythonhosted.org/packages/92/49/819d6ed3a7d9349c2939f81b500a738cb733ab62fbecdbc1e38e83d45e12/cryptography-46.0.7-cp38-abi3-manylinux_2_34_aarch64.whl", hash = "sha256:abad9dac36cbf55de6eb49badd4016806b3165d396f64925bf2999bcb67837ba", size = 4271955, upload-time = "2026-04-08T01:57:24.814Z" },
{ url = "https://files.pythonhosted.org/packages/80/07/ad9b3c56ebb95ed2473d46df0847357e01583f4c52a85754d1a55e29e4d0/cryptography-46.0.7-cp38-abi3-manylinux_2_34_ppc64le.whl", hash = "sha256:935ce7e3cfdb53e3536119a542b839bb94ec1ad081013e9ab9b7cfd478b05006", size = 4879888, upload-time = "2026-04-08T01:57:26.88Z" },
{ url = "https://files.pythonhosted.org/packages/b8/c7/201d3d58f30c4c2bdbe9b03844c291feb77c20511cc3586daf7edc12a47b/cryptography-46.0.7-cp38-abi3-manylinux_2_34_x86_64.whl", hash = "sha256:35719dc79d4730d30f1c2b6474bd6acda36ae2dfae1e3c16f2051f215df33ce0", size = 4449961, upload-time = "2026-04-08T01:57:29.068Z" },
{ url = "https://files.pythonhosted.org/packages/a5/ef/649750cbf96f3033c3c976e112265c33906f8e462291a33d77f90356548c/cryptography-46.0.7-cp38-abi3-musllinux_1_2_aarch64.whl", hash = "sha256:7bbc6ccf49d05ac8f7d7b5e2e2c33830d4fe2061def88210a126d130d7f71a85", size = 4401696, upload-time = "2026-04-08T01:57:31.029Z" },
{ url = "https://files.pythonhosted.org/packages/41/52/a8908dcb1a389a459a29008c29966c1d552588d4ae6d43f3a1a4512e0ebe/cryptography-46.0.7-cp38-abi3-musllinux_1_2_x86_64.whl", hash = "sha256:a1529d614f44b863a7b480c6d000fe93b59acee9c82ffa027cfadc77521a9f5e", size = 4664256, upload-time = "2026-04-08T01:57:33.144Z" },
{ url = "https://files.pythonhosted.org/packages/4b/fa/f0ab06238e899cc3fb332623f337a7364f36f4bb3f2534c2bb95a35b132c/cryptography-46.0.7-cp38-abi3-win32.whl", hash = "sha256:f247c8c1a1fb45e12586afbb436ef21ff1e80670b2861a90353d9b025583d246", size = 3013001, upload-time = "2026-04-08T01:57:34.933Z" },
{ url = "https://files.pythonhosted.org/packages/d2/f1/00ce3bde3ca542d1acd8f8cfa38e446840945aa6363f9b74746394b14127/cryptography-46.0.7-cp38-abi3-win_amd64.whl", hash = "sha256:506c4ff91eff4f82bdac7633318a526b1d1309fc07ca76a3ad182cb5b686d6d3", size = 3472985, upload-time = "2026-04-08T01:57:36.714Z" },
{ url = "https://files.pythonhosted.org/packages/63/0c/dca8abb64e7ca4f6b2978769f6fea5ad06686a190cec381f0a796fdcaaba/cryptography-46.0.7-pp311-pypy311_pp73-macosx_11_0_arm64.whl", hash = "sha256:fc9ab8856ae6cf7c9358430e49b368f3108f050031442eaeb6b9d87e4dcf4e4f", size = 3476879, upload-time = "2026-04-08T01:57:38.664Z" },
{ url = "https://files.pythonhosted.org/packages/3a/ea/075aac6a84b7c271578d81a2f9968acb6e273002408729f2ddff517fed4a/cryptography-46.0.7-pp311-pypy311_pp73-manylinux_2_28_aarch64.whl", hash = "sha256:d3b99c535a9de0adced13d159c5a9cf65c325601aa30f4be08afd680643e9c15", size = 4219700, upload-time = "2026-04-08T01:57:40.625Z" },
{ url = "https://files.pythonhosted.org/packages/6c/7b/1c55db7242b5e5612b29fc7a630e91ee7a6e3c8e7bf5406d22e206875fbd/cryptography-46.0.7-pp311-pypy311_pp73-manylinux_2_28_x86_64.whl", hash = "sha256:d02c738dacda7dc2a74d1b2b3177042009d5cab7c7079db74afc19e56ca1b455", size = 4385982, upload-time = "2026-04-08T01:57:42.725Z" },
{ url = "https://files.pythonhosted.org/packages/cb/da/9870eec4b69c63ef5925bf7d8342b7e13bc2ee3d47791461c4e49ca212f4/cryptography-46.0.7-pp311-pypy311_pp73-manylinux_2_34_aarch64.whl", hash = "sha256:04959522f938493042d595a736e7dbdff6eb6cc2339c11465b3ff89343b65f65", size = 4219115, upload-time = "2026-04-08T01:57:44.939Z" },
{ url = "https://files.pythonhosted.org/packages/f4/72/05aa5832b82dd341969e9a734d1812a6aadb088d9eb6f0430fc337cc5a8f/cryptography-46.0.7-pp311-pypy311_pp73-manylinux_2_34_x86_64.whl", hash = "sha256:3986ac1dee6def53797289999eabe84798ad7817f3e97779b5061a95b0ee4968", size = 4385479, upload-time = "2026-04-08T01:57:46.86Z" },
{ url = "https://files.pythonhosted.org/packages/20/2a/1b016902351a523aa2bd446b50a5bc1175d7a7d1cf90fe2ef904f9b84ebc/cryptography-46.0.7-pp311-pypy311_pp73-win_amd64.whl", hash = "sha256:258514877e15963bd43b558917bc9f54cf7cf866c38aa576ebf47a77ddbc43a4", size = 3412829, upload-time = "2026-04-08T01:57:48.874Z" },
]

[[package]]