Compare commits


5 Commits

Author SHA1 Message Date
Greyson LaLonde
b266cf7a3e ci: add PR size and title checks, configure commitizen
2026-03-24 19:45:07 +08:00
Greyson LaLonde
c542cc9f70 fix: raise value error on no file support 2026-03-24 19:21:19 +08:00
Greyson LaLonde
aced3e5c29 feat(cli): add logout command and fix all mypy errors in CLI
Add `crewai logout` command that clears auth tokens and user settings.
Supports `--reset` flag to also restore all CLI settings to defaults.

Add missing type annotations to all CLI command functions, DeployCommand
and TriggersCommand __init__ methods, and create_flow to resolve all
mypy errors. Remove unused assignments of void telemetry return values.
2026-03-24 19:14:24 +08:00
Greyson LaLonde
555ee462a3 feat: agent skills
Introduce the agent skills standard for packaging reusable instructions that agents can discover and activate at runtime.

- skills defined via SKILL.md with yaml frontmatter and markdown body
- three-level progressive disclosure: metadata, instructions, resources
- filesystem discovery with directory name validation                                                         
- skill lifecycle events (discovery, loaded, activated, failed)
- crew-level skills resolved once and shared across agents                                                    
- skill context injected into both task execution and standalone kickoff
2026-03-24 19:03:35 +08:00
alex-clawd
dd9ae02159 feat: automatic root_scope for hierarchical memory isolation (#5035)
* feat: automatic root_scope for hierarchical memory isolation

Crews and flows now automatically scope their memories hierarchically.
The encoding flow's LLM-inferred scope becomes a sub-scope under the
structural root, preventing memory pollution across crews/agents.

Scope hierarchy:
  /crew/{crew_name}/agent/{agent_role}/{llm-inferred}
  /flow/{flow_name}/{llm-inferred}

Changes:
- Memory class: new root_scope field, passed through remember/remember_many
- EncodingFlow: prepends root_scope to resolved scope in both fast path
  (Group A) and LLM path (Group C/D)
- Crew: auto-sets root_scope=/crew/{sanitized_name} on memory creation
- Agent executor: extends crew root with /agent/{sanitized_role} per save
- Flow: auto-sets root_scope=/flow/{sanitized_name} on memory creation
- New utils: sanitize_scope_name, normalize_scope_path, join_scope_paths

Backward compatible — no root_scope means no prefix (existing behavior).
Old memories at '/' remain accessible.
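The new scope utilities named in this commit (`sanitize_scope_name`, `join_scope_paths`) are not shown in the diff; a minimal pure-Python sketch of what they might look like, with the exact sanitization rules being an assumption:

```python
import re


def sanitize_scope_name(name: str) -> str:
    """Lowercase a display name and collapse non-alphanumeric runs to single
    hyphens, so it is safe as one scope path segment (assumed behavior)."""
    slug = re.sub(r"[^a-z0-9]+", "-", name.lower()).strip("-")
    return slug or "unknown"


def join_scope_paths(*parts: str) -> str:
    """Join scope fragments into one '/'-separated path with a single
    leading slash and no empty segments (assumed behavior)."""
    segments = [seg for part in parts for seg in part.split("/") if seg]
    return "/" + "/".join(segments)
```

With these, a crew root like `/crew/my-crew` composed with an agent role would yield `/crew/my-crew/agent/researcher`, matching the hierarchy above.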

51 new tests, all existing tests pass.

* ci: retrigger tests

* fix: don't auto-set root_scope on user-provided Memory instances

When users pass their own Memory instance to a Crew (memory=mem),
respect their configuration — don't auto-set root_scope.
Auto-scoping only applies when memory=True (Crew creates Memory).

Fixes: test_crew_memory_with_google_vertex_embedder which passes
Memory(embedder=...) to Crew and expects remember(scope='/test')
to produce scope '/test', not '/crew/crew/test'.

* fix: address 6 review comments — true scope isolation for reads, writes, and consolidation

1. Constrain similarity search to root_scope boundary (no cross-crew consolidation)
2. Remove unused self._root_scope from EncodingFlow
3. Apply root_scope to recall/list/info/reset (true read isolation)
4. Only extend agent root_scope when crew has one (backward compat)
5. Fix docstring example for sanitize_scope_name
6. Verify code comments match behavior

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>

---------

Co-authored-by: Joao Moura <joao@crewai.com>
Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
2026-03-24 02:56:10 -03:00
46 changed files with 2634 additions and 110 deletions

.github/workflows/pr-size.yml (new file, +32 lines)

@@ -0,0 +1,32 @@
name: PR Size Check

on:
  pull_request:
    types: [opened, synchronize, reopened]

jobs:
  pr-size:
    runs-on: ubuntu-latest
    permissions:
      pull-requests: write
    steps:
      - uses: codelytv/pr-size-labeler@v1
        with:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
          xs_label: "size/XS"
          xs_max_size: 25
          s_label: "size/S"
          s_max_size: 100
          m_label: "size/M"
          m_max_size: 250
          l_label: "size/L"
          l_max_size: 500
          xl_label: "size/XL"
          fail_if_xl: false
          files_to_ignore: |
            uv.lock
            *.lock
            lib/crewai/src/crewai/cli/templates/**
            **/*.json
            **/test_durations/**
            **/cassettes/**

.github/workflows/pr-title.yml (new file, +41 lines)

@@ -0,0 +1,41 @@
name: PR Title Check

on:
  pull_request:
    types: [opened, edited, synchronize, reopened]

permissions:
  contents: read
  pull-requests: read

jobs:
  pr-title:
    runs-on: ubuntu-latest
    steps:
      - uses: amannn/action-semantic-pull-request@v5
        continue-on-error: true
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
        with:
          types: |
            feat
            fix
            refactor
            perf
            test
            docs
            chore
            ci
            style
            revert
          requireScope: false
          subjectPattern: ^[a-z].+[^.]$
          subjectPatternError: >
            The PR title "{title}" does not follow conventional commit format.
            Expected: <type>(<scope>): <lowercase description without trailing period>
            Examples:
              feat(memory): add lancedb storage backend
              fix(agents): resolve deadlock in concurrent execution
              chore(deps): bump pydantic to 2.11.9


@@ -155,6 +155,7 @@
"en/concepts/flows",
"en/concepts/production-architecture",
"en/concepts/knowledge",
"en/concepts/skills",
"en/concepts/llms",
"en/concepts/files",
"en/concepts/processes",
@@ -1556,6 +1557,7 @@
"en/concepts/flows",
"en/concepts/production-architecture",
"en/concepts/knowledge",
"en/concepts/skills",
"en/concepts/llms",
"en/concepts/files",
"en/concepts/processes",
@@ -2053,6 +2055,7 @@
"pt-BR/concepts/flows",
"pt-BR/concepts/production-architecture",
"pt-BR/concepts/knowledge",
"pt-BR/concepts/skills",
"pt-BR/concepts/llms",
"pt-BR/concepts/files",
"pt-BR/concepts/processes",
@@ -3412,6 +3415,7 @@
"pt-BR/concepts/flows",
"pt-BR/concepts/production-architecture",
"pt-BR/concepts/knowledge",
"pt-BR/concepts/skills",
"pt-BR/concepts/llms",
"pt-BR/concepts/files",
"pt-BR/concepts/processes",
@@ -3895,6 +3899,7 @@
"ko/concepts/flows",
"ko/concepts/production-architecture",
"ko/concepts/knowledge",
"ko/concepts/skills",
"ko/concepts/llms",
"ko/concepts/files",
"ko/concepts/processes",
@@ -5290,6 +5295,7 @@
"ko/concepts/flows",
"ko/concepts/production-architecture",
"ko/concepts/knowledge",
"ko/concepts/skills",
"ko/concepts/llms",
"ko/concepts/files",
"ko/concepts/processes",

docs/en/concepts/skills.mdx (new file, +115 lines)

@@ -0,0 +1,115 @@
---
title: Skills
description: Filesystem-based skill packages that inject context into agent prompts.
icon: bolt
mode: "wide"
---
## Overview
Skills are self-contained directories that provide agents with domain-specific instructions, references, and assets. Each skill is defined by a `SKILL.md` file with YAML frontmatter and a markdown body.
Skills use **progressive disclosure** — metadata is loaded first, full instructions only when activated, and resource catalogs only when needed.
## Directory Structure
```
my-skill/
├── SKILL.md # Required — frontmatter + instructions
├── scripts/ # Optional — executable scripts
├── references/ # Optional — reference documents
└── assets/ # Optional — static files (configs, data)
```
The directory name must match the `name` field in `SKILL.md`.
## SKILL.md Format
```markdown
---
name: my-skill
description: Short description of what this skill does and when to use it.
license: Apache-2.0 # optional
compatibility: crewai>=0.1.0 # optional
metadata:                     # optional
  author: your-name
  version: "1.0"
allowed-tools: web-search file-read # optional, space-delimited
---
Instructions for the agent go here. This markdown body is injected
into the agent's prompt when the skill is activated.
```
### Frontmatter Fields
| Field | Required | Constraints |
| :-------------- | :------- | :----------------------------------------------------------------------- |
| `name` | Yes | 1–64 chars. Lowercase alphanumeric and hyphens. No leading/trailing/consecutive hyphens. Must match directory name. |
| `description` | Yes | 1–1024 chars. Describes what the skill does and when to use it. |
| `license` | No | License name or reference to a bundled license file. |
| `compatibility` | No | Max 500 chars. Environment requirements (products, packages, network). |
| `metadata` | No | Arbitrary string key-value mapping. |
| `allowed-tools` | No | Space-delimited list of pre-approved tools. Experimental. |
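As a hedged illustration of the `name` constraints in the table above (the exact pattern used internally is an assumption), a validator could look like:

```python
import re

# One or more lowercase alphanumeric runs separated by single hyphens:
# this rejects leading, trailing, and consecutive hyphens by construction.
_NAME_RE = re.compile(r"^[a-z0-9]+(?:-[a-z0-9]+)*$")


def is_valid_skill_name(name: str) -> bool:
    """Check a skill name against the constraints described above."""
    return 1 <= len(name) <= 64 and bool(_NAME_RE.fullmatch(name))
```

The same check would apply to the directory name, since the two must match.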
## Usage
### Agent-level Skills
Pass skill directory paths to an agent:
```python
from crewai import Agent

agent = Agent(
    role="Researcher",
    goal="Find relevant information",
    backstory="An expert researcher.",
    skills=["./skills"],  # discovers all skills in this directory
)
```
### Crew-level Skills
Skill paths on a crew are merged into every agent:
```python
from crewai import Crew

crew = Crew(
    agents=[agent],
    tasks=[task],
    skills=["./skills"],
)
```
### Pre-loaded Skills
You can also pass `Skill` objects directly:
```python
from pathlib import Path

from crewai.skills import discover_skills, activate_skill

skills = discover_skills(Path("./skills"))
activated = [activate_skill(s) for s in skills]

agent = Agent(
    role="Researcher",
    goal="Find relevant information",
    backstory="An expert researcher.",
    skills=activated,
)
```
## How Skills Are Loaded
Skills load progressively — only the data needed at each stage is read:
| Stage | What's loaded | When |
| :--------------- | :------------------------------------------------ | :----------------- |
| Discovery | Name, description, frontmatter fields | `discover_skills()` |
| Activation | Full SKILL.md body text | `activate_skill()` |
During normal agent execution, skills are automatically discovered and activated. The `scripts/`, `references/`, and `assets/` directories are available on the skill's `path` for agents that need to reference files directly.

docs/ko/concepts/skills.mdx (new file, +114 lines)

@@ -0,0 +1,114 @@
---
title: 스킬
description: 에이전트 프롬프트에 컨텍스트를 주입하는 파일 시스템 기반 스킬 패키지.
icon: bolt
mode: "wide"
---
## 개요
스킬은 에이전트에게 도메인별 지침, 참조 자료, 에셋을 제공하는 자체 포함 디렉터리입니다. 각 스킬은 YAML 프론트매터와 마크다운 본문이 포함된 `SKILL.md` 파일로 정의됩니다.
스킬은 **점진적 공개**를 사용합니다 — 메타데이터가 먼저 로드되고, 활성화 시에만 전체 지침이 로드되며, 필요할 때만 리소스 카탈로그가 로드됩니다.
## 디렉터리 구조
```
my-skill/
├── SKILL.md # 필수 — 프론트매터 + 지침
├── scripts/ # 선택 — 실행 가능한 스크립트
├── references/ # 선택 — 참조 문서
└── assets/ # 선택 — 정적 파일 (설정, 데이터)
```
디렉터리 이름은 `SKILL.md`의 `name` 필드와 일치해야 합니다.
## SKILL.md 형식
```markdown
---
name: my-skill
description: 이 스킬이 무엇을 하고 언제 사용하는지에 대한 간단한 설명.
license: Apache-2.0 # 선택
compatibility: crewai>=0.1.0 # 선택
metadata:                     # 선택
  author: your-name
  version: "1.0"
allowed-tools: web-search file-read # 선택, 공백으로 구분
---
에이전트를 위한 지침이 여기에 들어갑니다. 이 마크다운 본문은
스킬이 활성화되면 에이전트의 프롬프트에 주입됩니다.
```
### 프론트매터 필드
| 필드 | 필수 | 제약 조건 |
| :-------------- | :----- | :----------------------------------------------------------------------- |
| `name` | 예 | 1–64자. 소문자 영숫자와 하이픈. 선행/후행/연속 하이픈 불가. 디렉터리 이름과 일치 필수. |
| `description` | 예 | 1–1024자. 스킬이 무엇을 하고 언제 사용하는지 설명. |
| `license` | 아니오 | 라이선스 이름 또는 번들된 라이선스 파일 참조. |
| `compatibility` | 아니오 | 최대 500자. 환경 요구 사항 (제품, 패키지, 네트워크). |
| `metadata` | 아니오 | 임의의 문자열 키-값 매핑. |
| `allowed-tools` | 아니오 | 공백으로 구분된 사전 승인 도구 목록. 실험적. |
## 사용법
### 에이전트 레벨 스킬
에이전트에 스킬 디렉터리 경로를 전달합니다:
```python
from crewai import Agent

agent = Agent(
    role="Researcher",
    goal="Find relevant information",
    backstory="An expert researcher.",
    skills=["./skills"],  # 이 디렉터리의 모든 스킬을 검색
)
```
### 크루 레벨 스킬
크루의 스킬 경로는 모든 에이전트에 병합됩니다:
```python
from crewai import Crew

crew = Crew(
    agents=[agent],
    tasks=[task],
    skills=["./skills"],
)
```
### 사전 로드된 스킬
`Skill` 객체를 직접 전달할 수도 있습니다:
```python
from pathlib import Path

from crewai.skills import discover_skills, activate_skill

skills = discover_skills(Path("./skills"))
activated = [activate_skill(s) for s in skills]

agent = Agent(
    role="Researcher",
    goal="Find relevant information",
    backstory="An expert researcher.",
    skills=activated,
)
```
## 스킬 로드 방식
스킬은 점진적으로 로드됩니다 — 각 단계에서 필요한 데이터만 읽습니다:
| 단계 | 로드되는 내용 | 시점 |
| :--------------- | :------------------------------------------------ | :----------------- |
| 검색 | 이름, 설명, 프론트매터 필드 | `discover_skills()` |
| 활성화 | 전체 SKILL.md 본문 텍스트 | `activate_skill()` |
일반적인 에이전트 실행 중에 스킬은 자동으로 검색되고 활성화됩니다. `scripts/`, `references/`, `assets/` 디렉터리는 파일을 직접 참조해야 하는 에이전트를 위해 스킬의 `path`에서 사용할 수 있습니다.


@@ -0,0 +1,114 @@
---
title: Skills
description: Pacotes de skills baseados em sistema de arquivos que injetam contexto nos prompts dos agentes.
icon: bolt
mode: "wide"
---
## Visão Geral
Skills são diretórios autocontidos que fornecem aos agentes instruções, referências e assets específicos de domínio. Cada skill é definida por um arquivo `SKILL.md` com frontmatter YAML e um corpo em markdown.
Skills usam **divulgação progressiva** — metadados são carregados primeiro, instruções completas apenas quando ativadas, e catálogos de recursos apenas quando necessário.
## Estrutura de Diretório
```
my-skill/
├── SKILL.md # Obrigatório — frontmatter + instruções
├── scripts/ # Opcional — scripts executáveis
├── references/ # Opcional — documentos de referência
└── assets/ # Opcional — arquivos estáticos (configs, dados)
```
O nome do diretório deve corresponder ao campo `name` no `SKILL.md`.
## Formato do SKILL.md
```markdown
---
name: my-skill
description: Descrição curta do que esta skill faz e quando usá-la.
license: Apache-2.0 # opcional
compatibility: crewai>=0.1.0 # opcional
metadata:                     # opcional
  author: your-name
  version: "1.0"
allowed-tools: web-search file-read # opcional, delimitado por espaços
---
Instruções para o agente vão aqui. Este corpo em markdown é injetado
no prompt do agente quando a skill é ativada.
```
### Campos do Frontmatter
| Campo | Obrigatório | Restrições |
| :-------------- | :---------- | :----------------------------------------------------------------------- |
| `name` | Sim | 1–64 chars. Alfanumérico minúsculo e hifens. Sem hifens iniciais/finais/consecutivos. Deve corresponder ao nome do diretório. |
| `description` | Sim | 1–1024 chars. Descreve o que a skill faz e quando usá-la. |
| `license` | Não | Nome da licença ou referência a um arquivo de licença incluído. |
| `compatibility` | Não | Máx 500 chars. Requisitos de ambiente (produtos, pacotes, rede). |
| `metadata` | Não | Mapeamento arbitrário de chave-valor string. |
| `allowed-tools` | Não | Lista de ferramentas pré-aprovadas delimitada por espaços. Experimental. |
## Uso
### Skills no Nível do Agente
Passe caminhos de diretório de skills para um agente:
```python
from crewai import Agent

agent = Agent(
    role="Researcher",
    goal="Find relevant information",
    backstory="An expert researcher.",
    skills=["./skills"],  # descobre todas as skills neste diretório
)
```
### Skills no Nível do Crew
Caminhos de skills no crew são mesclados em todos os agentes:
```python
from crewai import Crew

crew = Crew(
    agents=[agent],
    tasks=[task],
    skills=["./skills"],
)
```
### Skills Pré-carregadas
Você também pode passar objetos `Skill` diretamente:
```python
from pathlib import Path

from crewai.skills import discover_skills, activate_skill

skills = discover_skills(Path("./skills"))
activated = [activate_skill(s) for s in skills]

agent = Agent(
    role="Researcher",
    goal="Find relevant information",
    backstory="An expert researcher.",
    skills=activated,
)
```
## Como as Skills São Carregadas
Skills carregam progressivamente — apenas os dados necessários em cada etapa são lidos:
| Etapa | O que é carregado | Quando |
| :--------------- | :------------------------------------------------ | :------------------ |
| Descoberta | Nome, descrição, campos do frontmatter | `discover_skills()` |
| Ativação | Texto completo do corpo do SKILL.md | `activate_skill()` |
Durante a execução normal do agente, skills são automaticamente descobertas e ativadas. Os diretórios `scripts/`, `references/` e `assets/` estão disponíveis no `path` da skill para agentes que precisam referenciar arquivos diretamente.


@@ -42,6 +42,7 @@ dependencies = [
"mcp~=1.26.0",
"uv~=0.9.13",
"aiosqlite~=0.21.0",
"pyyaml~=6.0",
"lancedb>=0.29.2",
]


@@ -3,6 +3,7 @@ from __future__ import annotations
import asyncio
from collections.abc import Callable, Coroutine, Sequence
import contextvars
from pathlib import Path
import shutil
import subprocess
import time
@@ -26,6 +27,7 @@ from typing_extensions import Self
from crewai.agent.planning_config import PlanningConfig
from crewai.agent.utils import (
ahandle_knowledge_retrieval,
append_skill_context,
apply_training_data,
build_task_prompt_with_schema,
format_task_with_context,
@@ -65,6 +67,8 @@ from crewai.mcp import MCPServerConfig
from crewai.mcp.tool_resolver import MCPToolResolver
from crewai.rag.embeddings.types import EmbedderConfig
from crewai.security.fingerprint import Fingerprint
from crewai.skills.loader import activate_skill, discover_skills
from crewai.skills.models import INSTRUCTIONS, Skill as SkillModel
from crewai.tools.agent_tools.agent_tools import AgentTools
from crewai.types.callback import SerializableCallable
from crewai.utilities.agent_utils import (
@@ -278,6 +282,8 @@ class Agent(BaseAgent):
if self.allow_code_execution:
self._validate_docker_installation()
self.set_skills()
# Handle backward compatibility: convert reasoning=True to planning_config
if self.reasoning and self.planning_config is None:
import warnings
@@ -321,6 +327,76 @@ class Agent(BaseAgent):
except (TypeError, ValueError) as e:
raise ValueError(f"Invalid Knowledge Configuration: {e!s}") from e
    def set_skills(
        self,
        resolved_crew_skills: list[SkillModel] | None = None,
    ) -> None:
        """Resolve skill paths and activate skills to INSTRUCTIONS level.

        Path entries trigger discovery and activation. Pre-loaded Skill objects
        below INSTRUCTIONS level are activated. Crew-level skills are merged in
        with event emission so observability is consistent regardless of origin.

        Args:
            resolved_crew_skills: Pre-resolved crew skills (already discovered
                and activated). When provided, avoids redundant discovery per agent.
        """
        from crewai.crew import Crew
        from crewai.events.event_bus import crewai_event_bus
        from crewai.events.types.skill_events import SkillActivatedEvent

        if resolved_crew_skills is None:
            crew_skills: list[Path | SkillModel] | None = (
                self.crew.skills
                if isinstance(self.crew, Crew) and isinstance(self.crew.skills, list)
                else None
            )
        else:
            crew_skills = list(resolved_crew_skills)
        if not self.skills and not crew_skills:
            return
        needs_work = self.skills and any(
            isinstance(s, Path)
            or (isinstance(s, SkillModel) and s.disclosure_level < INSTRUCTIONS)
            for s in self.skills
        )
        if not needs_work and not crew_skills:
            return
        seen: set[str] = set()
        resolved: list[Path | SkillModel] = []
        items: list[Path | SkillModel] = list(self.skills) if self.skills else []
        if crew_skills:
            items.extend(crew_skills)
        for item in items:
            if isinstance(item, Path):
                discovered = discover_skills(item, source=self)
                for skill in discovered:
                    if skill.name not in seen:
                        seen.add(skill.name)
                        resolved.append(activate_skill(skill, source=self))
            elif isinstance(item, SkillModel):
                if item.name not in seen:
                    seen.add(item.name)
                    activated = activate_skill(item, source=self)
                    if activated is item and item.disclosure_level >= INSTRUCTIONS:
                        crewai_event_bus.emit(
                            self,
                            event=SkillActivatedEvent(
                                from_agent=self,
                                skill_name=item.name,
                                skill_path=item.path,
                                disclosure_level=item.disclosure_level,
                            ),
                        )
                    resolved.append(activated)
        self.skills = resolved if resolved else None
def _is_any_available_memory(self) -> bool:
"""Check if unified memory is available (agent or crew)."""
if getattr(self, "memory", None):
@@ -442,6 +518,8 @@ class Agent(BaseAgent):
self.crew.query_knowledge if self.crew else lambda *a, **k: None,
)
task_prompt = append_skill_context(self, task_prompt)
prepare_tools(self, tools, task)
task_prompt = apply_training_data(self, task_prompt)
@@ -682,6 +760,8 @@ class Agent(BaseAgent):
self, task, task_prompt, knowledge_config
)
task_prompt = append_skill_context(self, task_prompt)
prepare_tools(self, tools, task)
task_prompt = apply_training_data(self, task_prompt)
@@ -1343,6 +1423,8 @@ class Agent(BaseAgent):
),
)
formatted_messages = append_skill_context(self, formatted_messages)
# Build the input dict for the executor
inputs: dict[str, Any] = {
"input": formatted_messages,


@@ -210,6 +210,30 @@ def _combine_knowledge_context(agent: Agent) -> str:
return agent_ctx + separator + crew_ctx
def append_skill_context(agent: Agent, task_prompt: str) -> str:
    """Append activated skill context sections to the task prompt.

    Args:
        agent: The agent with optional skills.
        task_prompt: The current task prompt.

    Returns:
        The task prompt with skill context appended.
    """
    if not agent.skills:
        return task_prompt
    from crewai.skills.loader import format_skill_context
    from crewai.skills.models import Skill

    skill_sections = [
        format_skill_context(s) for s in agent.skills if isinstance(s, Skill)
    ]
    if skill_sections:
        task_prompt += "\n\n" + "\n\n".join(skill_sections)
    return task_prompt
def apply_training_data(agent: Agent, task_prompt: str) -> str:
"""Apply training data to the task prompt.


@@ -3,6 +3,7 @@ from __future__ import annotations
from abc import ABC, abstractmethod
from copy import copy as shallow_copy
from hashlib import md5
from pathlib import Path
import re
from typing import Any, Final, Literal
import uuid
@@ -32,6 +33,7 @@ from crewai.memory.memory_scope import MemoryScope, MemorySlice
from crewai.memory.unified_memory import Memory
from crewai.rag.embeddings.types import EmbedderConfig
from crewai.security.security_config import SecurityConfig
from crewai.skills.models import Skill
from crewai.tools.base_tool import BaseTool, Tool
from crewai.types.callback import SerializableCallable
from crewai.utilities.config import process_config
@@ -217,6 +219,11 @@ class BaseAgent(BaseModel, ABC, metaclass=AgentMeta):
"If not set, falls back to crew memory."
),
)
    skills: list[Path | Skill] | None = Field(
        default=None,
        description="Agent Skills. Accepts paths for discovery or pre-loaded Skill objects.",
        min_length=1,
    )
@model_validator(mode="before")
@classmethod
@@ -500,3 +507,6 @@ class BaseAgent(BaseModel, ABC, metaclass=AgentMeta):
def set_knowledge(self, crew_embedder: EmbedderConfig | None = None) -> None:
pass
    def set_skills(self, resolved_crew_skills: list[Any] | None = None) -> None:
        pass


@@ -49,20 +49,21 @@ class CrewAgentExecutorMixin:
)
extracted = memory.extract_memories(raw)
if extracted:
# Build agent-specific root_scope that extends the crew's root
agent_role = self.agent.role or "unknown"
sanitized_role = sanitize_scope_name(agent_role)
# Get the memory's existing root_scope
base_root = getattr(memory, "root_scope", None)
# Get the memory's existing root_scope and extend with agent info
base_root = getattr(memory, "root_scope", None) or ""
# Construct agent root: base_root + /agent/<role>
agent_root = f"{base_root.rstrip('/')}/agent/{sanitized_role}"
# Ensure leading slash
if not agent_root.startswith("/"):
agent_root = "/" + agent_root
memory.remember_many(
extracted, agent_role=self.agent.role, root_scope=agent_root
)
if isinstance(base_root, str) and base_root:
# Memory has a root_scope — extend it with agent info
agent_role = self.agent.role or "unknown"
sanitized_role = sanitize_scope_name(agent_role)
agent_root = f"{base_root.rstrip('/')}/agent/{sanitized_role}"
if not agent_root.startswith("/"):
agent_root = "/" + agent_root
memory.remember_many(
extracted, agent_role=self.agent.role, root_scope=agent_root
)
else:
# No base root_scope — don't inject one, preserve backward compat
memory.remember_many(extracted, agent_role=self.agent.role)
except Exception as e:
self.agent._logger.log("error", f"Failed to save to memory: {e}")


@@ -22,6 +22,7 @@ from crewai.cli.replay_from_task import replay_task_command
from crewai.cli.reset_memories_command import reset_memories_command
from crewai.cli.run_crew import run_crew
from crewai.cli.settings.main import SettingsCommand
from crewai.cli.shared.token_manager import TokenManager
from crewai.cli.tools.main import ToolCommand
from crewai.cli.train_crew import train_crew
from crewai.cli.triggers.main import TriggersCommand
@@ -34,7 +35,7 @@ from crewai.memory.storage.kickoff_task_outputs_storage import (
@click.group()
@click.version_option(get_version("crewai"))
def crewai():
def crewai() -> None:
"""Top-level command group for crewai."""
@@ -45,7 +46,7 @@ def crewai():
),
)
@click.argument("uv_args", nargs=-1, type=click.UNPROCESSED)
def uv(uv_args):
def uv(uv_args: tuple[str, ...]) -> None:
"""A wrapper around uv commands that adds custom tool authentication through env vars."""
env = os.environ.copy()
try:
@@ -83,7 +84,9 @@ def uv(uv_args):
@click.argument("name")
@click.option("--provider", type=str, help="The provider to use for the crew")
@click.option("--skip_provider", is_flag=True, help="Skip provider validation")
def create(type, name, provider, skip_provider=False):
def create(
type: str, name: str, provider: str | None, skip_provider: bool = False
) -> None:
"""Create a new crew, or flow."""
if type == "crew":
create_crew(name, provider, skip_provider)
@@ -97,7 +100,7 @@ def create(type, name, provider, skip_provider=False):
@click.option(
"--tools", is_flag=True, help="Show the installed version of crewai tools"
)
def version(tools):
def version(tools: bool) -> None:
"""Show the installed version of crewai."""
try:
crewai_version = get_version("crewai")
@@ -128,7 +131,7 @@ def version(tools):
default="trained_agents_data.pkl",
help="Path to a custom file for training",
)
def train(n_iterations: int, filename: str):
def train(n_iterations: int, filename: str) -> None:
"""Train the crew."""
click.echo(f"Training the Crew for {n_iterations} iterations")
train_crew(n_iterations, filename)
@@ -334,7 +337,7 @@ def memory(
default="gpt-4o-mini",
help="LLM Model to run the tests on the Crew. For now only accepting only OpenAI models.",
)
def test(n_iterations: int, model: str):
def test(n_iterations: int, model: str) -> None:
"""Test the crew and evaluate the results."""
click.echo(f"Testing the crew for {n_iterations} iterations with model {model}")
evaluate_crew(n_iterations, model)
@@ -347,46 +350,62 @@ def test(n_iterations: int, model: str):
)
)
@click.pass_context
def install(context):
def install(context: click.Context) -> None:
"""Install the Crew."""
install_crew(context.args)
@crewai.command()
def run():
def run() -> None:
"""Run the Crew."""
run_crew()
@crewai.command()
def update():
def update() -> None:
"""Update the pyproject.toml of the Crew project to use uv."""
update_crew()
@crewai.command()
def login():
def login() -> None:
"""Sign Up/Login to CrewAI AMP."""
Settings().clear_user_settings()
AuthenticationCommand().login()
@crewai.command()
@click.option(
    "--reset", is_flag=True, help="Also reset all CLI configuration to defaults"
)
def logout(reset: bool) -> None:
    """Logout from CrewAI AMP."""
    settings = Settings()
    if reset:
        settings.reset()
        click.echo("Successfully logged out and reset all CLI configuration.")
    else:
        TokenManager().clear_tokens()
        settings.clear_user_settings()
        click.echo("Successfully logged out from CrewAI AMP.")
# DEPLOY CREWAI+ COMMANDS
@crewai.group()
def deploy():
def deploy() -> None:
"""Deploy the Crew CLI group."""
@deploy.command(name="create")
@click.option("-y", "--yes", is_flag=True, help="Skip the confirmation prompt")
def deploy_create(yes: bool):
def deploy_create(yes: bool) -> None:
"""Create a Crew deployment."""
deploy_cmd = DeployCommand()
deploy_cmd.create_crew(yes)
@deploy.command(name="list")
def deploy_list():
def deploy_list() -> None:
"""List all deployments."""
deploy_cmd = DeployCommand()
deploy_cmd.list_crews()
@@ -394,7 +413,7 @@ def deploy_list():
@deploy.command(name="push")
@click.option("-u", "--uuid", type=str, help="Crew UUID parameter")
def deploy_push(uuid: str | None):
def deploy_push(uuid: str | None) -> None:
"""Deploy the Crew."""
deploy_cmd = DeployCommand()
deploy_cmd.deploy(uuid=uuid)
@@ -402,7 +421,7 @@ def deploy_push(uuid: str | None):
@deploy.command(name="status")
@click.option("-u", "--uuid", type=str, help="Crew UUID parameter")
def deply_status(uuid: str | None):
def deply_status(uuid: str | None) -> None:
"""Get the status of a deployment."""
deploy_cmd = DeployCommand()
deploy_cmd.get_crew_status(uuid=uuid)
@@ -410,7 +429,7 @@ def deply_status(uuid: str | None):
@deploy.command(name="logs")
@click.option("-u", "--uuid", type=str, help="Crew UUID parameter")
def deploy_logs(uuid: str | None):
def deploy_logs(uuid: str | None) -> None:
"""Get the logs of a deployment."""
deploy_cmd = DeployCommand()
deploy_cmd.get_crew_logs(uuid=uuid)
@@ -418,27 +437,27 @@ def deploy_logs(uuid: str | None):
@deploy.command(name="remove")
@click.option("-u", "--uuid", type=str, help="Crew UUID parameter")
def deploy_remove(uuid: str | None):
def deploy_remove(uuid: str | None) -> None:
"""Remove a deployment."""
deploy_cmd = DeployCommand()
deploy_cmd.remove_crew(uuid=uuid)
@crewai.group()
def tool():
def tool() -> None:
"""Tool Repository related commands."""
@tool.command(name="create")
@click.argument("handle")
def tool_create(handle: str):
def tool_create(handle: str) -> None:
tool_cmd = ToolCommand()
tool_cmd.create(handle)
@tool.command(name="install")
@click.argument("handle")
def tool_install(handle: str):
def tool_install(handle: str) -> None:
tool_cmd = ToolCommand()
tool_cmd.login()
tool_cmd.install(handle)
@@ -454,26 +473,26 @@ def tool_install(handle: str):
)
@click.option("--public", "is_public", flag_value=True, default=False)
@click.option("--private", "is_public", flag_value=False)
def tool_publish(is_public: bool, force: bool):
def tool_publish(is_public: bool, force: bool) -> None:
tool_cmd = ToolCommand()
tool_cmd.login()
tool_cmd.publish(is_public, force)
@crewai.group()
def flow():
def flow() -> None:
"""Flow related commands."""
@flow.command(name="kickoff")
def flow_run():
def flow_run() -> None:
"""Kickoff the Flow."""
click.echo("Running the Flow")
kickoff_flow()
@flow.command(name="plot")
def flow_plot():
def flow_plot() -> None:
"""Plot the Flow."""
click.echo("Plotting the Flow")
plot_flow()
@@ -481,19 +500,19 @@ def flow_plot():
@flow.command(name="add-crew")
@click.argument("crew_name")
def flow_add_crew(crew_name):
def flow_add_crew(crew_name: str) -> None:
"""Add a crew to an existing flow."""
click.echo(f"Adding crew {crew_name} to the flow")
add_crew_to_flow(crew_name)
@crewai.group()
def triggers():
def triggers() -> None:
"""Trigger related commands. Use 'crewai triggers list' to see available triggers, or 'crewai triggers run app_slug/trigger_slug' to execute."""
@triggers.command(name="list")
def triggers_list():
def triggers_list() -> None:
"""List all available triggers from integrations."""
triggers_cmd = TriggersCommand()
triggers_cmd.list_triggers()
@@ -501,14 +520,14 @@ def triggers_list():
@triggers.command(name="run")
@click.argument("trigger_path")
def triggers_run(trigger_path: str):
def triggers_run(trigger_path: str) -> None:
"""Execute crew with trigger payload. Format: app_slug/trigger_slug"""
triggers_cmd = TriggersCommand()
triggers_cmd.execute_with_trigger(trigger_path)
@crewai.command()
def chat():
def chat() -> None:
"""
Start a conversation with the Crew, collecting user-supplied inputs,
and using the Chat LLM to generate responses.
@@ -521,12 +540,12 @@ def chat():
@crewai.group(invoke_without_command=True)
def org():
def org() -> None:
"""Organization management commands."""
@org.command("list")
def org_list():
def org_list() -> None:
"""List available organizations."""
org_command = OrganizationCommand()
org_command.list()
@@ -534,39 +553,39 @@ def org_list():
@org.command()
@click.argument("id")
def switch(id):
def switch(id: str) -> None:
"""Switch to a specific organization."""
org_command = OrganizationCommand()
org_command.switch(id)
@org.command()
def current():
def current() -> None:
"""Show current organization when 'crewai org' is called without subcommands."""
org_command = OrganizationCommand()
org_command.current()
@crewai.group()
def enterprise():
def enterprise() -> None:
"""Enterprise Configuration commands."""
@enterprise.command("configure")
@click.argument("enterprise_url")
def enterprise_configure(enterprise_url: str):
def enterprise_configure(enterprise_url: str) -> None:
"""Configure CrewAI AMP OAuth2 settings from the provided Enterprise URL."""
enterprise_command = EnterpriseConfigureCommand()
enterprise_command.configure(enterprise_url)
@crewai.group()
def config():
def config() -> None:
"""CLI Configuration commands."""
@config.command("list")
def config_list():
def config_list() -> None:
"""List all CLI configuration parameters."""
config_command = SettingsCommand()
config_command.list()
@@ -575,26 +594,26 @@ def config_list():
@config.command("set")
@click.argument("key")
@click.argument("value")
def config_set(key: str, value: str):
def config_set(key: str, value: str) -> None:
"""Set a CLI configuration parameter."""
config_command = SettingsCommand()
config_command.set(key, value)
@config.command("reset")
def config_reset():
def config_reset() -> None:
"""Reset all CLI configuration parameters to default values."""
config_command = SettingsCommand()
config_command.reset_all_settings()
@crewai.group()
def env():
def env() -> None:
"""Environment variable commands."""
@env.command("view")
def env_view():
def env_view() -> None:
"""View tracing-related environment variables."""
import os
from pathlib import Path
@@ -672,12 +691,12 @@ def env_view():
@crewai.group()
def traces():
def traces() -> None:
"""Trace collection management commands."""
@traces.command("enable")
def traces_enable():
def traces_enable() -> None:
"""Enable trace collection for crew/flow executions."""
from rich.console import Console
from rich.panel import Panel
@@ -700,7 +719,7 @@ def traces_enable():
@traces.command("disable")
def traces_disable():
def traces_disable() -> None:
"""Disable trace collection for crew/flow executions."""
from rich.console import Console
from rich.panel import Panel
@@ -723,7 +742,7 @@ def traces_disable():
@traces.command("status")
def traces_status():
def traces_status() -> None:
"""Show current trace collection status."""
import os


@@ -6,7 +6,7 @@ import click
from crewai.telemetry import Telemetry
def create_flow(name):
def create_flow(name: str) -> None:
"""Create a new flow."""
folder_name = name.replace(" ", "_").replace("-", "_").lower()
class_name = name.replace("_", " ").replace("-", " ").title().replace(" ", "")
@@ -49,7 +49,7 @@ def create_flow(name):
"poem_crew",
]
def process_file(src_file, dst_file):
def process_file(src_file: Path, dst_file: Path) -> None:
if src_file.suffix in [".pyc", ".pyo", ".pyd"]:
return


@@ -15,7 +15,7 @@ class DeployCommand(BaseCommand, PlusAPIMixin):
A class to handle deployment-related operations for CrewAI projects.
"""
def __init__(self):
def __init__(self) -> None:
"""
Initialize the DeployCommand with project name and API client.
"""
@@ -67,7 +67,7 @@ class DeployCommand(BaseCommand, PlusAPIMixin):
Args:
uuid (Optional[str]): The UUID of the crew to deploy.
"""
self._start_deployment_span = self._telemetry.start_deployment_span(uuid)
self._telemetry.start_deployment_span(uuid)
console.print("Starting deployment...", style="bold blue")
if uuid:
response = self.plus_api_client.deploy_by_uuid(uuid)
@@ -84,9 +84,7 @@ class DeployCommand(BaseCommand, PlusAPIMixin):
"""
Create a new crew deployment.
"""
self._create_crew_deployment_span = (
self._telemetry.create_crew_deployment_span()
)
self._telemetry.create_crew_deployment_span()
console.print("Creating deployment...", style="bold blue")
env_vars = fetch_and_json_env_file()
@@ -236,7 +234,7 @@ class DeployCommand(BaseCommand, PlusAPIMixin):
uuid (Optional[str]): The UUID of the crew to get logs for.
log_type (str): The type of logs to retrieve (default: "deployment").
"""
self._get_crew_logs_span = self._telemetry.get_crew_logs_span(uuid, log_type)
self._telemetry.get_crew_logs_span(uuid, log_type)
console.print(f"Fetching {log_type} logs...", style="bold blue")
if uuid:
@@ -257,7 +255,7 @@ class DeployCommand(BaseCommand, PlusAPIMixin):
Args:
uuid (Optional[str]): The UUID of the crew to remove.
"""
self._remove_crew_span = self._telemetry.remove_crew_span(uuid)
self._telemetry.remove_crew_span(uuid)
console.print("Removing deployment...", style="bold blue")
if uuid:


@@ -16,7 +16,7 @@ class TriggersCommand(BaseCommand, PlusAPIMixin):
A class to handle trigger-related operations for CrewAI projects.
"""
def __init__(self):
def __init__(self) -> None:
BaseCommand.__init__(self)
PlusAPIMixin.__init__(self, telemetry=self._telemetry)


@@ -6,6 +6,7 @@ from concurrent.futures import Future
from copy import copy as shallow_copy
from hashlib import md5
import json
from pathlib import Path
import re
from typing import (
TYPE_CHECKING,
@@ -91,6 +92,7 @@ from crewai.rag.embeddings.types import EmbedderConfig
from crewai.rag.types import SearchResult
from crewai.security.fingerprint import Fingerprint
from crewai.security.security_config import SecurityConfig
from crewai.skills.models import Skill
from crewai.task import Task
from crewai.tasks.conditional_task import ConditionalTask
from crewai.tasks.task_output import TaskOutput
@@ -294,6 +296,11 @@ class Crew(FlowTrackable, BaseModel):
default=None,
description="Knowledge for the crew.",
)
skills: list[Path | Skill] | None = Field(
default=None,
description="Skill search paths or pre-loaded Skill objects applied to all agents in the crew.",
)
security_config: SecurityConfig = Field(
default_factory=SecurityConfig,
description="Security configuration for the crew, including fingerprinting.",
@@ -376,14 +383,12 @@ class Crew(FlowTrackable, BaseModel):
if self.embedder is not None:
from crewai.rag.embeddings.factory import build_embedder
embedder = build_embedder(self.embedder) # type: ignore[arg-type]
embedder = build_embedder(cast(dict[str, Any], self.embedder))
self._memory = Memory(embedder=embedder, root_scope=crew_root_scope)
elif self.memory:
# User passed a Memory / MemoryScope / MemorySlice instance
# Respect user's configuration — don't auto-set root_scope
self._memory = self.memory
# Set root_scope only if not already set (don't override user config)
if hasattr(self._memory, "root_scope") and self._memory.root_scope is None:
self._memory.root_scope = crew_root_scope
else:
self._memory = None


@@ -4,6 +4,7 @@ from __future__ import annotations
import asyncio
from collections.abc import Callable, Coroutine, Iterable, Mapping
from pathlib import Path
from typing import TYPE_CHECKING, Any
from opentelemetry import baggage
@@ -11,6 +12,8 @@ from opentelemetry import baggage
from crewai.agents.agent_builder.base_agent import BaseAgent
from crewai.crews.crew_output import CrewOutput
from crewai.rag.embeddings.types import EmbedderConfig
from crewai.skills.loader import activate_skill, discover_skills
from crewai.skills.models import INSTRUCTIONS, Skill as SkillModel
from crewai.types.streaming import CrewStreamingOutput, FlowStreamingOutput
from crewai.utilities.file_store import store_files
from crewai.utilities.streaming import (
@@ -51,6 +54,30 @@ def enable_agent_streaming(agents: Iterable[BaseAgent]) -> None:
agent.llm.stream = True
def _resolve_crew_skills(crew: Crew) -> list[SkillModel] | None:
"""Resolve crew-level skill paths once so agents don't repeat the work."""
if not isinstance(crew.skills, list) or not crew.skills:
return None
resolved: list[SkillModel] = []
seen: set[str] = set()
for item in crew.skills:
if isinstance(item, Path):
for skill in discover_skills(item):
if skill.name not in seen:
seen.add(skill.name)
resolved.append(activate_skill(skill))
elif isinstance(item, SkillModel):
if item.name not in seen:
seen.add(item.name)
resolved.append(
activate_skill(item)
if item.disclosure_level < INSTRUCTIONS
else item
)
return resolved
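The dedup-and-activate loop above can be sketched in isolation. `MiniSkill`, `activate`, and the level constants below are illustrative stand-ins for the real `Skill` model and `activate_skill`, not the crewai API:

```python
from dataclasses import dataclass, replace

# Illustrative stand-ins for the real disclosure-level constants.
METADATA, INSTRUCTIONS = 1, 2

@dataclass(frozen=True)
class MiniSkill:
    name: str
    disclosure_level: int = METADATA

def activate(skill: MiniSkill) -> MiniSkill:
    # Promote to INSTRUCTIONS level; idempotent, like activate_skill.
    if skill.disclosure_level >= INSTRUCTIONS:
        return skill
    return replace(skill, disclosure_level=INSTRUCTIONS)

def resolve(skills: list[MiniSkill]) -> list[MiniSkill]:
    # First occurrence of each name wins; every kept skill ends up activated.
    resolved: list[MiniSkill] = []
    seen: set[str] = set()
    for item in skills:
        if item.name not in seen:
            seen.add(item.name)
            resolved.append(activate(item))
    return resolved

out = resolve([MiniSkill("pdf"), MiniSkill("pdf"), MiniSkill("web", INSTRUCTIONS)])
print([(s.name, s.disclosure_level) for s in out])  # [('pdf', 2), ('web', 2)]
```

Resolving once at the crew level means each agent receives the same already-activated list instead of re-scanning the skill directories.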
def setup_agents(
crew: Crew,
agents: Iterable[BaseAgent],
@@ -67,9 +94,12 @@ def setup_agents(
function_calling_llm: Default function calling LLM for agents.
step_callback: Default step callback for agents.
"""
resolved_crew_skills = _resolve_crew_skills(crew)
for agent in agents:
agent.crew = crew
agent.set_knowledge(crew_embedder=embedder)
agent.set_skills(resolved_crew_skills=resolved_crew_skills)
if not agent.function_calling_llm: # type: ignore[attr-defined]
agent.function_calling_llm = function_calling_llm # type: ignore[attr-defined]
if not agent.step_callback: # type: ignore[attr-defined]


@@ -88,6 +88,14 @@ from crewai.events.types.reasoning_events import (
AgentReasoningStartedEvent,
ReasoningEvent,
)
from crewai.events.types.skill_events import (
SkillActivatedEvent,
SkillDiscoveryCompletedEvent,
SkillDiscoveryStartedEvent,
SkillEvent,
SkillLoadFailedEvent,
SkillLoadedEvent,
)
from crewai.events.types.task_events import (
TaskCompletedEvent,
TaskEvaluationEvent,
@@ -186,6 +194,12 @@ __all__ = [
"MethodExecutionFinishedEvent",
"MethodExecutionStartedEvent",
"ReasoningEvent",
"SkillActivatedEvent",
"SkillDiscoveryCompletedEvent",
"SkillDiscoveryStartedEvent",
"SkillEvent",
"SkillLoadFailedEvent",
"SkillLoadedEvent",
"TaskCompletedEvent",
"TaskEvaluationEvent",
"TaskFailedEvent",


@@ -0,0 +1,62 @@
"""Skill lifecycle events for the Agent Skills standard.
Events emitted during skill discovery, loading, and activation.
"""
from __future__ import annotations
from pathlib import Path
from typing import Any
from crewai.events.base_events import BaseEvent
class SkillEvent(BaseEvent):
"""Base event for skill operations."""
skill_name: str = ""
skill_path: Path | None = None
from_agent: Any | None = None
from_task: Any | None = None
def __init__(self, **data: Any) -> None:
super().__init__(**data)
self._set_agent_params(data)
self._set_task_params(data)
class SkillDiscoveryStartedEvent(SkillEvent):
"""Event emitted when skill discovery begins."""
type: str = "skill_discovery_started"
search_path: Path
class SkillDiscoveryCompletedEvent(SkillEvent):
"""Event emitted when skill discovery completes."""
type: str = "skill_discovery_completed"
search_path: Path
skills_found: int
skill_names: list[str]
class SkillLoadedEvent(SkillEvent):
"""Event emitted when a skill is loaded at metadata level."""
type: str = "skill_loaded"
disclosure_level: int = 1
class SkillActivatedEvent(SkillEvent):
"""Event emitted when a skill is activated (promoted to instructions level)."""
type: str = "skill_activated"
disclosure_level: int = 2
class SkillLoadFailedEvent(SkillEvent):
"""Event emitted when skill loading fails."""
type: str = "skill_load_failed"
error: str


@@ -1984,7 +1984,16 @@ class LLM(BaseLLM):
Returns:
Messages with files formatted into content blocks.
"""
if not HAS_CREWAI_FILES or not self.supports_multimodal():
if not HAS_CREWAI_FILES:
return messages
if not self.supports_multimodal():
if any(msg.get("files") for msg in messages):
raise ValueError(
f"Model '{self.model}' does not support multimodal input, "
"but files were provided via 'input_files'. "
"Use a vision-capable model or remove the file inputs."
)
return messages
provider = getattr(self, "provider", None) or self.model
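The behavioral change in this hunk is that file inputs on a non-multimodal model now fail loudly instead of being silently dropped. A minimal standalone sketch of the guard (function name and parameters are illustrative, not the crewai signature):

```python
def format_messages(messages: list[dict], model: str, supports_multimodal: bool) -> list[dict]:
    # Sketch of the new guard: previously, messages carrying "files" were
    # passed through untouched on a non-multimodal model; now they raise.
    if not supports_multimodal:
        if any(msg.get("files") for msg in messages):
            raise ValueError(
                f"Model '{model}' does not support multimodal input, "
                "but files were provided via 'input_files'."
            )
        return messages
    return messages  # the real code formats files into content blocks here

msgs = [{"role": "user", "content": "summarize", "files": ["report.pdf"]}]
try:
    format_messages(msgs, "text-only-model", supports_multimodal=False)
except ValueError as e:
    print("raised:", e)
```

Messages without files still pass through unchanged, so text-only usage is unaffected.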
@@ -2026,7 +2035,16 @@ class LLM(BaseLLM):
Returns:
Messages with files formatted into content blocks.
"""
if not HAS_CREWAI_FILES or not self.supports_multimodal():
if not HAS_CREWAI_FILES:
return messages
if not self.supports_multimodal():
if any(msg.get("files") for msg in messages):
raise ValueError(
f"Model '{self.model}' does not support multimodal input, "
"but files were provided via 'input_files'. "
"Use a vision-capable model or remove the file inputs."
)
return messages
provider = getattr(self, "provider", None) or self.model
@@ -2398,6 +2416,9 @@ class LLM(BaseLLM):
"gpt-4.1",
"claude-3",
"claude-4",
"claude-sonnet-4",
"claude-opus-4",
"claude-haiku-4",
"gemini",
)
model_lower = self.model.lower()


@@ -641,7 +641,16 @@ class BaseLLM(ABC):
Returns:
Messages with files formatted into content blocks.
"""
if not HAS_CREWAI_FILES or not self.supports_multimodal():
if not HAS_CREWAI_FILES:
return messages
if not self.supports_multimodal():
if any(msg.get("files") for msg in messages):
raise ValueError(
f"Model '{self.model}' does not support multimodal input, "
"but files were provided via 'input_files'. "
"Use a vision-capable model or remove the file inputs."
)
return messages
provider = getattr(self, "provider", None) or getattr(self, "model", "openai")


@@ -1766,7 +1766,14 @@ class AnthropicCompletion(BaseLLM):
Returns:
True if the model supports images and PDFs.
"""
return "claude-3" in self.model.lower() or "claude-4" in self.model.lower()
model_lower = self.model.lower()
return (
"claude-3" in model_lower
or "claude-4" in model_lower
or "claude-sonnet-4" in model_lower
or "claude-opus-4" in model_lower
or "claude-haiku-4" in model_lower
)
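The motivation for the extra substrings: Claude 4 model IDs put the tier before the version ("claude-sonnet-4-...", "claude-opus-4-..."), so the old `"claude-4"` substring test missed them. A sketch of the widened check (the model IDs in the example are illustrative):

```python
# Family substrings matched case-insensitively against the model ID.
FAMILIES = ("claude-3", "claude-4", "claude-sonnet-4", "claude-opus-4", "claude-haiku-4")

def supports_multimodal(model: str) -> bool:
    model_lower = model.lower()
    return any(family in model_lower for family in FAMILIES)

print(supports_multimodal("claude-sonnet-4-20250514"))  # True
print(supports_multimodal("claude-3-5-sonnet"))         # True
print(supports_multimodal("claude-2.1"))                # False
```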
def get_file_uploader(self) -> Any:
"""Get an Anthropic file uploader using this LLM's clients.


@@ -2119,12 +2119,18 @@ class BedrockCompletion(BaseLLM):
model_lower = self.model.lower()
vision_models = (
"anthropic.claude-3",
"anthropic.claude-sonnet-4",
"anthropic.claude-opus-4",
"anthropic.claude-haiku-4",
"amazon.nova-lite",
"amazon.nova-pro",
"amazon.nova-premier",
"us.amazon.nova-lite",
"us.amazon.nova-pro",
"us.amazon.nova-premier",
"us.anthropic.claude-sonnet-4",
"us.anthropic.claude-opus-4",
"us.anthropic.claude-haiku-4",
)
return any(model_lower.startswith(m) for m in vision_models)


@@ -106,7 +106,6 @@ class EncodingFlow(Flow[EncodingState]):
llm: Any,
embedder: Any,
config: MemoryConfig | None = None,
root_scope: str | None = None,
) -> None:
"""Initialize the encoding flow.
@@ -115,15 +114,12 @@ class EncodingFlow(Flow[EncodingState]):
llm: LLM instance for analysis.
embedder: Embedder for generating vectors.
config: Optional memory configuration.
root_scope: Structural root scope prefix. LLM-inferred or explicit
scopes are nested under this root.
"""
super().__init__(suppress_flow_events=True)
self._storage = storage
self._llm = llm
self._embedder = embedder
self._config = config or MemoryConfig()
self._root_scope = root_scope
# ------------------------------------------------------------------
# Step 1: Batch embed (ONE embedder call)
@@ -195,10 +191,18 @@ class EncodingFlow(Flow[EncodingState]):
def _search_one(
item: ItemState,
) -> list[tuple[MemoryRecord, float]]:
scope_prefix = item.scope if item.scope and item.scope.strip("/") else None
# Use root_scope as the search boundary, then narrow by explicit scope if provided
effective_prefix = None
if item.root_scope:
effective_prefix = item.root_scope.rstrip("/")
if item.scope and item.scope.strip("/"):
effective_prefix = effective_prefix + "/" + item.scope.strip("/")
elif item.scope and item.scope.strip("/"):
effective_prefix = item.scope
return self._storage.search( # type: ignore[no-any-return]
item.embedding,
scope_prefix=scope_prefix,
scope_prefix=effective_prefix,
categories=None,
limit=self._config.consolidation_limit,
min_score=0.0,
@@ -268,9 +272,16 @@ class EncodingFlow(Flow[EncodingState]):
existing_scopes: list[str] = []
existing_categories: list[str] = []
if any_needs_fields:
existing_scopes = self._storage.list_scopes("/") or ["/"]
# Constrain scope/category suggestions to root_scope boundary
# Check if any active item has root_scope
active_root = next(
(it.root_scope for it in items if not it.dropped and it.root_scope),
None,
)
scope_search_root = active_root if active_root else "/"
existing_scopes = self._storage.list_scopes(scope_search_root) or ["/"]
existing_categories = list(
self._storage.list_categories(scope_prefix=None).keys()
self._storage.list_categories(scope_prefix=active_root).keys()
)
# Classify items and submit LLM calls


@@ -31,6 +31,7 @@ from crewai.memory.types import (
compute_composite_score,
embed_text,
)
from crewai.memory.utils import join_scope_paths
from crewai.rag.embeddings.factory import build_embedder
from crewai.rag.embeddings.providers.openai.types import OpenAIProviderSpec
@@ -333,7 +334,6 @@ class Memory(BaseModel):
llm=self._llm,
embedder=self._embedder,
config=self._config,
root_scope=root_scope,
)
items_input = [
{
@@ -637,6 +637,14 @@ class Memory(BaseModel):
# so that the search sees all persisted records.
self.drain_writes()
# Apply root_scope as default scope_prefix for read isolation
effective_scope = scope
if effective_scope is None and self.root_scope:
effective_scope = self.root_scope
elif effective_scope is not None and self.root_scope:
# Nest provided scope under root
effective_scope = join_scope_paths(self.root_scope, effective_scope)
_source = "unified_memory"
try:
crewai_event_bus.emit(
@@ -657,7 +665,7 @@ class Memory(BaseModel):
else:
raw = self._storage.search(
embedding,
scope_prefix=scope,
scope_prefix=effective_scope,
categories=categories,
limit=limit,
min_score=0.0,
@@ -692,7 +700,7 @@ class Memory(BaseModel):
flow.kickoff(
inputs={
"query": query,
"scope": scope,
"scope": effective_scope,
"categories": categories or [],
"limit": limit,
"source": source,
@@ -746,11 +754,24 @@ class Memory(BaseModel):
) -> int:
"""Delete memories matching criteria.
Args:
scope: Scope to delete from. If None and root_scope is set, deletes
only within root_scope.
categories: Filter by categories.
older_than: Delete records older than this datetime.
metadata_filter: Filter by metadata fields.
record_ids: Specific record IDs to delete.
Returns:
Number of records deleted.
"""
effective_scope = scope
if effective_scope is None and self.root_scope:
effective_scope = self.root_scope
elif effective_scope is not None and self.root_scope:
effective_scope = join_scope_paths(self.root_scope, effective_scope)
return self._storage.delete(
scope_prefix=scope,
scope_prefix=effective_scope,
categories=categories,
record_ids=record_ids,
older_than=older_than,
@@ -825,9 +846,21 @@ class Memory(BaseModel):
read_only=read_only,
)
def list_scopes(self, path: str = "/") -> list[str]:
"""List immediate child scopes under path."""
return self._storage.list_scopes(path)
def list_scopes(self, path: str | None = None) -> list[str]:
"""List immediate child scopes under path.
Args:
path: Scope path to list children of. If None and root_scope is set,
defaults to root_scope. Otherwise defaults to '/'.
"""
effective_path = path
if effective_path is None and self.root_scope:
effective_path = self.root_scope
elif effective_path is not None and self.root_scope:
effective_path = join_scope_paths(self.root_scope, effective_path)
elif effective_path is None:
effective_path = "/"
return self._storage.list_scopes(effective_path)
def list_records(
self, scope: str | None = None, limit: int = 200, offset: int = 0
@@ -835,20 +868,52 @@ class Memory(BaseModel):
"""List records in a scope, newest first.
Args:
scope: Optional scope path prefix to filter by.
scope: Optional scope path prefix to filter by. If None and root_scope
is set, defaults to root_scope.
limit: Maximum number of records to return.
offset: Number of records to skip (for pagination).
"""
effective_scope = scope
if effective_scope is None and self.root_scope:
effective_scope = self.root_scope
elif effective_scope is not None and self.root_scope:
effective_scope = join_scope_paths(self.root_scope, effective_scope)
return self._storage.list_records(
scope_prefix=scope, limit=limit, offset=offset
scope_prefix=effective_scope, limit=limit, offset=offset
)
def info(self, path: str = "/") -> ScopeInfo:
"""Return scope info for path."""
return self._storage.get_scope_info(path)
def info(self, path: str | None = None) -> ScopeInfo:
"""Return scope info for path.
Args:
path: Scope path to get info for. If None and root_scope is set,
defaults to root_scope. Otherwise defaults to '/'.
"""
effective_path = path
if effective_path is None and self.root_scope:
effective_path = self.root_scope
elif effective_path is not None and self.root_scope:
effective_path = join_scope_paths(self.root_scope, effective_path)
elif effective_path is None:
effective_path = "/"
return self._storage.get_scope_info(effective_path)
def tree(self, path: str | None = None, max_depth: int = 3) -> str:
"""Return a formatted tree of scopes (string).
Args:
path: Root path for the tree. If None and root_scope is set,
defaults to root_scope. Otherwise defaults to '/'.
max_depth: Maximum depth to traverse.
"""
effective_path = path
if effective_path is None and self.root_scope:
effective_path = self.root_scope
elif effective_path is not None and self.root_scope:
effective_path = join_scope_paths(self.root_scope, effective_path)
elif effective_path is None:
effective_path = "/"
def tree(self, path: str = "/", max_depth: int = 3) -> str:
"""Return a formatted tree of scopes (string)."""
lines: list[str] = []
def _walk(p: str, depth: int, prefix: str) -> None:
@@ -859,16 +924,36 @@ class Memory(BaseModel):
for child in info.child_scopes[:20]:
_walk(child, depth + 1, prefix + " ")
_walk(path.rstrip("/") or "/", 0, "")
return "\n".join(lines) if lines else f"{path or '/'} (0 records)"
_walk(effective_path.rstrip("/") or "/", 0, "")
return "\n".join(lines) if lines else f"{effective_path or '/'} (0 records)"
def list_categories(self, path: str | None = None) -> dict[str, int]:
"""List categories and counts; path=None means global."""
return self._storage.list_categories(scope_prefix=path)
"""List categories and counts.
Args:
path: Scope path to filter categories by. If None and root_scope is set,
defaults to root_scope.
"""
effective_path = path
if effective_path is None and self.root_scope:
effective_path = self.root_scope
elif effective_path is not None and self.root_scope:
effective_path = join_scope_paths(self.root_scope, effective_path)
return self._storage.list_categories(scope_prefix=effective_path)
def reset(self, scope: str | None = None) -> None:
"""Reset (delete all) memories in scope. None = all."""
self._storage.reset(scope_prefix=scope)
"""Reset (delete all) memories in scope.
Args:
scope: Scope to reset. If None and root_scope is set, resets only
within root_scope. If None and no root_scope, resets all.
"""
effective_scope = scope
if effective_scope is None and self.root_scope:
effective_scope = self.root_scope
elif effective_scope is not None and self.root_scope:
effective_scope = join_scope_paths(self.root_scope, effective_scope)
self._storage.reset(scope_prefix=effective_scope)
async def aextract_memories(self, content: str) -> list[str]:
"""Async variant of extract_memories."""


@@ -25,7 +25,7 @@ def sanitize_scope_name(name: str) -> str:
>>> sanitize_scope_name("Agent #1 (Main)")
'agent-1-main'
>>> sanitize_scope_name("café_worker")
'caf-worker'
'caf-_worker'
"""
if not name:
return "unknown"


@@ -0,0 +1,17 @@
"""Agent Skills standard implementation for crewAI.
Provides filesystem-based skill packaging with progressive disclosure.
"""
from crewai.skills.loader import activate_skill, discover_skills
from crewai.skills.models import Skill, SkillFrontmatter
from crewai.skills.parser import SkillParseError
__all__ = [
"Skill",
"SkillFrontmatter",
"SkillParseError",
"activate_skill",
"discover_skills",
]


@@ -0,0 +1,184 @@
"""Filesystem discovery and progressive loading for Agent Skills.
Provides functions to discover skills in directories, activate them
for agent use, and format skill context for prompt injection.
"""
from __future__ import annotations
import logging
from pathlib import Path
from typing import TYPE_CHECKING
from crewai.events.event_bus import crewai_event_bus
from crewai.events.types.skill_events import (
SkillActivatedEvent,
SkillDiscoveryCompletedEvent,
SkillDiscoveryStartedEvent,
SkillLoadFailedEvent,
SkillLoadedEvent,
)
from crewai.skills.models import INSTRUCTIONS, RESOURCES, Skill
from crewai.skills.parser import (
SKILL_FILENAME,
load_skill_instructions,
load_skill_metadata,
load_skill_resources,
)
if TYPE_CHECKING:
from crewai.agents.agent_builder.base_agent import BaseAgent
_logger = logging.getLogger(__name__)
def discover_skills(
search_path: Path,
source: BaseAgent | None = None,
) -> list[Skill]:
"""Scan a directory for skill directories containing SKILL.md.
Loads each discovered skill at METADATA disclosure level.
Args:
search_path: Directory to scan for skill subdirectories.
source: Optional event source (agent or crew) for event emission.
Returns:
List of Skill instances at METADATA level.
"""
if not search_path.is_dir():
msg = f"Skill search path does not exist or is not a directory: {search_path}"
raise FileNotFoundError(msg)
skills: list[Skill] = []
if source is not None:
crewai_event_bus.emit(
source,
event=SkillDiscoveryStartedEvent(
from_agent=source,
search_path=search_path,
),
)
for child in sorted(search_path.iterdir()):
if not child.is_dir():
continue
skill_md = child / SKILL_FILENAME
if not skill_md.is_file():
continue
try:
skill = load_skill_metadata(child)
skills.append(skill)
if source is not None:
crewai_event_bus.emit(
source,
event=SkillLoadedEvent(
from_agent=source,
skill_name=skill.name,
skill_path=skill.path,
disclosure_level=skill.disclosure_level,
),
)
except Exception as e:
_logger.warning("Failed to load skill from %s: %s", child, e)
if source is not None:
crewai_event_bus.emit(
source,
event=SkillLoadFailedEvent(
from_agent=source,
skill_name=child.name,
skill_path=child,
error=str(e),
),
)
if source is not None:
crewai_event_bus.emit(
source,
event=SkillDiscoveryCompletedEvent(
from_agent=source,
search_path=search_path,
skills_found=len(skills),
skill_names=[s.name for s in skills],
),
)
return skills
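Stripped of event emission and metadata parsing, the discovery contract is: a skill is any immediate subdirectory containing a `SKILL.md`; everything else is skipped. A self-contained sketch of that filesystem walk:

```python
import tempfile
from pathlib import Path

def discover(search_path: Path) -> list[str]:
    # Sketch of the scan: immediate subdirectories with a SKILL.md
    # qualify; plain files and bare directories are ignored.
    if not search_path.is_dir():
        raise FileNotFoundError(f"Skill search path does not exist: {search_path}")
    return sorted(
        child.name
        for child in search_path.iterdir()
        if child.is_dir() and (child / "SKILL.md").is_file()
    )

with tempfile.TemporaryDirectory() as tmp:
    root = Path(tmp)
    (root / "pdf-tools").mkdir()
    (root / "pdf-tools" / "SKILL.md").write_text("---\nname: pdf-tools\n---\nbody")
    (root / "not-a-skill").mkdir()  # no SKILL.md -> ignored
    print(discover(root))  # ['pdf-tools']
```

The real `discover_skills` additionally loads each match at METADATA level and keeps going when a single skill fails to parse, logging a warning and emitting `SkillLoadFailedEvent` instead of aborting the scan.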
def activate_skill(
skill: Skill,
source: BaseAgent | None = None,
) -> Skill:
"""Promote a skill to INSTRUCTIONS disclosure level.
Idempotent: returns the skill unchanged if already at or above INSTRUCTIONS.
Args:
skill: Skill to activate.
source: Optional event source for event emission.
Returns:
Skill at INSTRUCTIONS level or higher.
"""
if skill.disclosure_level >= INSTRUCTIONS:
return skill
activated = load_skill_instructions(skill)
if source is not None:
crewai_event_bus.emit(
source,
event=SkillActivatedEvent(
from_agent=source,
skill_name=activated.name,
skill_path=activated.path,
disclosure_level=activated.disclosure_level,
),
)
return activated
def load_resources(skill: Skill) -> Skill:
"""Promote a skill to RESOURCES disclosure level.
Args:
skill: Skill to promote.
Returns:
Skill at RESOURCES level.
"""
return load_skill_resources(skill)
def format_skill_context(skill: Skill) -> str:
"""Format skill information for agent prompt injection.
At METADATA level: returns name and description only.
At INSTRUCTIONS level or above: returns full SKILL.md body.
Args:
skill: The skill to format.
Returns:
Formatted skill context string.
"""
if skill.disclosure_level >= INSTRUCTIONS and skill.instructions:
parts = [
f"## Skill: {skill.name}",
skill.description,
"",
skill.instructions,
]
if skill.disclosure_level >= RESOURCES and skill.resource_files:
parts.append("")
parts.append("### Available Resources")
for dir_name, files in sorted(skill.resource_files.items()):
if files:
parts.append(f"- **{dir_name}/**: {', '.join(files)}")
return "\n".join(parts)
return f"## Skill: {skill.name}\n{skill.description}"


@@ -0,0 +1,175 @@
"""Pydantic data models for the Agent Skills standard.
Defines DisclosureLevel, SkillFrontmatter, and Skill models for
progressive disclosure of skill information.
"""
from __future__ import annotations
from pathlib import Path
from typing import Annotated, Any, Final, Literal
from pydantic import BaseModel, ConfigDict, Field, model_validator
from crewai.skills.validation import (
MAX_SKILL_NAME_LENGTH,
MIN_SKILL_NAME_LENGTH,
SKILL_NAME_PATTERN,
)
MAX_DESCRIPTION_LENGTH: Final[int] = 1024
ResourceDirName = Literal["scripts", "references", "assets"]
DisclosureLevel = Annotated[
Literal[1, 2, 3], "Progressive disclosure levels for skill loading."
]
METADATA: Final[
Annotated[
DisclosureLevel, "Only frontmatter metadata is loaded (name, description)."
]
] = 1
INSTRUCTIONS: Final[Annotated[DisclosureLevel, "Full SKILL.md body is loaded."]] = 2
RESOURCES: Final[
Annotated[
DisclosureLevel,
"Resource directories (scripts, references, assets) are cataloged.",
]
] = 3
class SkillFrontmatter(BaseModel):
"""YAML frontmatter from a SKILL.md file.
Attributes:
name: Unique skill identifier (1-64 chars, lowercase alphanumeric + hyphens).
description: Human-readable description (1-1024 chars).
license: Optional license name or reference.
compatibility: Optional compatibility information (max 500 chars).
metadata: Optional additional metadata as string key-value pairs.
allowed_tools: Optional space-delimited list of pre-approved tools.
"""
model_config = ConfigDict(frozen=True, populate_by_name=True)
name: str = Field(
min_length=MIN_SKILL_NAME_LENGTH,
max_length=MAX_SKILL_NAME_LENGTH,
pattern=SKILL_NAME_PATTERN,
)
description: str = Field(min_length=1, max_length=MAX_DESCRIPTION_LENGTH)
license: str | None = Field(
default=None,
description="SPDX license identifier or free-text license reference, e.g. 'MIT', 'Apache-2.0'.",
)
compatibility: str | None = Field(
default=None,
max_length=500,
description="Version or platform constraints for the skill, e.g. 'crewai >= 0.80'.",
)
metadata: dict[str, str] | None = Field(
default=None,
description="Arbitrary string key-value pairs for custom skill metadata.",
)
allowed_tools: list[str] | None = Field(
default=None,
alias="allowed-tools",
description="Pre-approved tool names the skill may use, parsed from a space-delimited string in frontmatter.",
)
@model_validator(mode="before")
@classmethod
def parse_allowed_tools(cls, values: dict[str, Any]) -> dict[str, Any]:
"""Parse space-delimited allowed-tools string into a list."""
key = "allowed-tools"
alt_key = "allowed_tools"
raw = values.get(key) or values.get(alt_key)
if isinstance(raw, str):
values[key] = raw.split()
return values
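The `allowed-tools` value arrives from YAML as a single space-delimited string; the `mode="before"` validator splits it into a list before Pydantic field validation runs. The parsing step in isolation:

```python
# Mirror of the pre-validator: accept either key spelling, split a
# space-delimited string into a list, and leave non-strings untouched.
def parse_allowed_tools(values: dict) -> dict:
    raw = values.get("allowed-tools") or values.get("allowed_tools")
    if isinstance(raw, str):
        values["allowed-tools"] = raw.split()
    return values

print(parse_allowed_tools({"allowed-tools": "FileReadTool PDFSearchTool"}))
# {'allowed-tools': ['FileReadTool', 'PDFSearchTool']}
```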
class Skill(BaseModel):
"""A loaded Agent Skill with progressive disclosure support.
Attributes:
frontmatter: Parsed YAML frontmatter.
instructions: Full SKILL.md body text (populated at INSTRUCTIONS level).
path: Filesystem path to the skill directory.
disclosure_level: Current disclosure level of the skill.
resource_files: Cataloged resource files (populated at RESOURCES level).
"""
frontmatter: SkillFrontmatter = Field(
description="Parsed YAML frontmatter from SKILL.md.",
)
instructions: str | None = Field(
default=None,
description="Full SKILL.md body text, populated at INSTRUCTIONS level.",
)
path: Path = Field(
description="Filesystem path to the skill directory.",
)
disclosure_level: DisclosureLevel = Field(
default=METADATA,
description="Current progressive disclosure level of the skill.",
)
resource_files: dict[ResourceDirName, list[str]] | None = Field(
default=None,
description="Cataloged resource files by directory, populated at RESOURCES level.",
)
@property
def name(self) -> str:
"""Skill name from frontmatter."""
return self.frontmatter.name
@property
def description(self) -> str:
"""Skill description from frontmatter."""
return self.frontmatter.description
@property
def scripts_dir(self) -> Path:
"""Path to the scripts directory."""
return self.path / "scripts"
@property
def references_dir(self) -> Path:
"""Path to the references directory."""
return self.path / "references"
@property
def assets_dir(self) -> Path:
"""Path to the assets directory."""
return self.path / "assets"
def with_disclosure_level(
self,
level: DisclosureLevel,
instructions: str | None = None,
resource_files: dict[ResourceDirName, list[str]] | None = None,
) -> Skill:
"""Create a new Skill at a different disclosure level.
Args:
level: The new disclosure level.
instructions: Optional instructions body text.
resource_files: Optional cataloged resource files.
Returns:
A new Skill instance at the specified disclosure level.
"""
return Skill(
frontmatter=self.frontmatter,
instructions=instructions
if instructions is not None
else self.instructions,
path=self.path,
disclosure_level=level,
resource_files=(
resource_files if resource_files is not None else self.resource_files
),
)


@@ -0,0 +1,194 @@
"""SKILL.md file parsing for the Agent Skills standard.
Parses YAML frontmatter and markdown body from SKILL.md files,
and provides progressive loading functions for skill data.
"""
from __future__ import annotations
import logging
from pathlib import Path
import re
from typing import Any, Final
import yaml
from crewai.skills.models import (
INSTRUCTIONS,
METADATA,
RESOURCES,
ResourceDirName,
Skill,
SkillFrontmatter,
)
from crewai.skills.validation import validate_directory_name
_logger = logging.getLogger(__name__)
SKILL_FILENAME: Final[str] = "SKILL.md"
_CLOSING_DELIMITER: Final[re.Pattern[str]] = re.compile(r"\n---[ \t]*(?:\n|$)")
_MAX_BODY_CHARS: Final[int] = 50_000
class SkillParseError(ValueError):
"""Error raised when SKILL.md parsing fails."""
def parse_frontmatter(content: str) -> tuple[dict[str, Any], str]:
"""Split SKILL.md content into frontmatter dict and body text.
Args:
content: Raw SKILL.md file content.
Returns:
Tuple of (frontmatter dict, body text).
Raises:
SkillParseError: If frontmatter delimiters are missing or YAML is invalid.
"""
if not content.startswith("---"):
msg = "SKILL.md must start with '---' frontmatter delimiter"
raise SkillParseError(msg)
match = _CLOSING_DELIMITER.search(content, pos=3)
if match is None:
msg = "SKILL.md missing closing '---' frontmatter delimiter"
raise SkillParseError(msg)
yaml_content = content[3 : match.start()].strip()
body = content[match.end() :].strip()
try:
frontmatter = yaml.safe_load(yaml_content)
except yaml.YAMLError as e:
msg = f"Invalid YAML in frontmatter: {e}"
raise SkillParseError(msg) from e
if not isinstance(frontmatter, dict):
msg = "Frontmatter must be a YAML mapping"
raise SkillParseError(msg)
return frontmatter, body
def parse_skill_md(path: Path) -> tuple[SkillFrontmatter, str]:
"""Read and parse a SKILL.md file.
Args:
path: Path to the SKILL.md file.
Returns:
Tuple of (SkillFrontmatter, body text).
Raises:
FileNotFoundError: If the file does not exist.
SkillParseError: If parsing fails.
"""
content = path.read_text(encoding="utf-8")
frontmatter_dict, body = parse_frontmatter(content)
frontmatter = SkillFrontmatter(**frontmatter_dict)
return frontmatter, body
def load_skill_metadata(skill_dir: Path) -> Skill:
"""Load a skill at METADATA disclosure level.
Parses SKILL.md frontmatter only and validates directory name.
Args:
skill_dir: Path to the skill directory.
Returns:
Skill instance at METADATA level.
Raises:
FileNotFoundError: If SKILL.md is missing.
SkillParseError: If parsing fails.
ValueError: If directory name doesn't match skill name.
"""
skill_md_path = skill_dir / SKILL_FILENAME
frontmatter, body = parse_skill_md(skill_md_path)
validate_directory_name(skill_dir, frontmatter.name)
if len(body) > _MAX_BODY_CHARS:
_logger.warning(
"SKILL.md body for '%s' is %d chars (threshold: %d). "
"Large bodies may consume significant context window when injected into prompts.",
frontmatter.name,
len(body),
_MAX_BODY_CHARS,
)
return Skill(
frontmatter=frontmatter,
path=skill_dir,
disclosure_level=METADATA,
)
def load_skill_instructions(skill: Skill) -> Skill:
"""Promote a skill to INSTRUCTIONS disclosure level.
Reads the full SKILL.md body text.
Args:
skill: Skill at METADATA level.
Returns:
New Skill instance at INSTRUCTIONS level.
"""
if skill.disclosure_level >= INSTRUCTIONS:
return skill
skill_md_path = skill.path / SKILL_FILENAME
_, body = parse_skill_md(skill_md_path)
if len(body) > _MAX_BODY_CHARS:
_logger.warning(
"SKILL.md body for '%s' is %d chars (threshold: %d). "
"Large bodies may consume significant context window when injected into prompts.",
skill.name,
len(body),
_MAX_BODY_CHARS,
)
return skill.with_disclosure_level(
level=INSTRUCTIONS,
instructions=body,
)
def load_skill_resources(skill: Skill) -> Skill:
"""Promote a skill to RESOURCES disclosure level.
Catalogs available resource directories (scripts, references, assets).
Args:
skill: Skill at any level.
Returns:
New Skill instance at RESOURCES level.
"""
if skill.disclosure_level >= RESOURCES:
return skill
if skill.disclosure_level < INSTRUCTIONS:
skill = load_skill_instructions(skill)
resource_dirs: list[tuple[ResourceDirName, Path]] = [
("scripts", skill.scripts_dir),
("references", skill.references_dir),
("assets", skill.assets_dir),
]
resource_files: dict[ResourceDirName, list[str]] = {}
for dir_name, resource_dir in resource_dirs:
if resource_dir.is_dir():
resource_files[dir_name] = sorted(
str(f.relative_to(resource_dir))
for f in resource_dir.rglob("*")
if f.is_file()
)
return skill.with_disclosure_level(
level=RESOURCES,
instructions=skill.instructions,
resource_files=resource_files,
)
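The closing-delimiter rule above is worth seeing in isolation: `---` terminates the frontmatter only at the start of a line, optionally followed by spaces or tabs, then a newline or end-of-file, so a `---` embedded mid-line (for example inside a YAML value) is never mistaken for the terminator. A minimal sketch using the same regex:

```python
import re

# Same pattern as _CLOSING_DELIMITER in the parser above.
CLOSING = re.compile(r"\n---[ \t]*(?:\n|$)")

content = '---\nname: test\ndescription: "Use---carefully"\n---\nBody line one.'
match = CLOSING.search(content, pos=3)  # pos=3 skips the opening '---'
yaml_part = content[3 : match.start()].strip()
body = content[match.end() :].strip()
```

The inline `Use---carefully` survives into the YAML portion untouched, and the body starts only after the standalone `---` line.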


@@ -0,0 +1,31 @@
"""Validation functions for Agent Skills specification constraints.
Validates skill names and directory structures per the Agent Skills standard.
"""
from __future__ import annotations
from pathlib import Path
import re
from typing import Final
MAX_SKILL_NAME_LENGTH: Final[int] = 64
MIN_SKILL_NAME_LENGTH: Final[int] = 1
SKILL_NAME_PATTERN: Final[re.Pattern[str]] = re.compile(r"^[a-z0-9]+(?:-[a-z0-9]+)*$")
def validate_directory_name(skill_dir: Path, skill_name: str) -> None:
"""Validate that a directory name matches the skill name.
Args:
skill_dir: Path to the skill directory.
skill_name: The declared skill name from frontmatter.
Raises:
ValueError: If the directory name does not match the skill name.
"""
dir_name = skill_dir.name
if dir_name != skill_name:
msg = f"Directory name '{dir_name}' does not match skill name '{skill_name}'"
raise ValueError(msg)
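The name pattern above accepts lowercase alphanumeric words joined by single hyphens; the length bounds are enforced separately via `MIN_SKILL_NAME_LENGTH` and `MAX_SKILL_NAME_LENGTH`. A quick demonstration of what the regex alone accepts and rejects:

```python
import re

# Same pattern as SKILL_NAME_PATTERN in the validation module above.
SKILL_NAME = re.compile(r"^[a-z0-9]+(?:-[a-z0-9]+)*$")

accepted = ["web-search", "search", "skill2", "a-b-c"]
rejected = ["Invalid--Name", "-leading", "trailing-", "has spaces", ""]
```

Double hyphens, leading or trailing hyphens, uppercase, whitespace, and the empty string all fail the match.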


@@ -295,7 +295,13 @@ class TestMemoryRootScope:
)
mem.drain_writes()
records = mem.list_records()
# Use a global memory view to see all records (not scoped to /default)
mem_global = Memory(
storage=str(tmp_path / "db"),
llm=MagicMock(),
embedder=mock_embedder,
)
records = mem_global.list_records()
assert len(records) == 1
assert records[0].scope == "/agent/researcher/inner"
@@ -406,10 +412,10 @@ class TestCrewAutoScoping:
assert hasattr(crew._memory, "root_scope")
assert crew._memory.root_scope == "/crew/research-crew"
def test_crew_memory_instance_gets_root_scope_if_not_set(
def test_crew_memory_instance_preserves_no_root_scope(
self, tmp_path: Path, mock_embedder: MagicMock
) -> None:
"""User-provided Memory instance gets root_scope if not already set."""
"""User-provided Memory instance is not modified — root_scope stays None."""
from crewai.agent import Agent
from crewai.crew import Crew
from crewai.memory.unified_memory import Memory
@@ -443,7 +449,8 @@ class TestCrewAutoScoping:
)
assert crew._memory is mem
assert crew._memory.root_scope == "/crew/test-crew"
# User-provided Memory is not auto-scoped — respect their config
assert crew._memory.root_scope is None
def test_crew_respects_existing_root_scope(
self, tmp_path: Path, mock_embedder: MagicMock
@@ -821,3 +828,382 @@ class TestMemoryScopeWithRootScope:
# Note: MemoryScope builds the scope before calling memory.remember
# So the scope it passes is /agent/1/task, which then gets root_scope prepended
assert record.scope.startswith("/crew/test/agent/1")
class TestReadIsolation:
"""Tests for root_scope read isolation (recall, list, info, reset)."""
def test_recall_with_root_scope_only_returns_scoped_records(
self, tmp_path: Path, mock_embedder: MagicMock
) -> None:
"""recall() with root_scope returns only records within that scope."""
from crewai.memory.unified_memory import Memory
# Create memory without root_scope and store some records
mem_global = Memory(
storage=str(tmp_path / "db"),
llm=MagicMock(),
embedder=mock_embedder,
)
# Store records at different scopes
mem_global.remember(
"Global record",
scope="/other/scope",
categories=["global"],
importance=0.5,
)
mem_global.remember(
"Crew A record",
scope="/crew/crew-a/inner",
categories=["crew-a"],
importance=0.5,
)
mem_global.remember(
"Crew B record",
scope="/crew/crew-b/inner",
categories=["crew-b"],
importance=0.5,
)
# Create a scoped view for crew-a
mem_scoped = Memory(
storage=str(tmp_path / "db"),
llm=MagicMock(),
embedder=mock_embedder,
root_scope="/crew/crew-a",
)
# recall() should only find crew-a records
results = mem_scoped.recall("record", depth="shallow")
assert len(results) == 1
assert results[0].record.scope == "/crew/crew-a/inner"
def test_recall_with_root_scope_and_explicit_scope_nests(
self, tmp_path: Path, mock_embedder: MagicMock
) -> None:
"""recall() with root_scope + explicit scope combines them."""
from crewai.memory.unified_memory import Memory
mem = Memory(
storage=str(tmp_path / "db"),
llm=MagicMock(),
embedder=mock_embedder,
root_scope="/crew/test",
)
mem.remember(
"Nested record",
scope="/inner/deep",
categories=["test"],
importance=0.5,
)
# recall with explicit scope should nest under root_scope
results = mem.recall("record", scope="/inner", depth="shallow")
assert len(results) == 1
assert results[0].record.scope == "/crew/test/inner/deep"
def test_recall_without_root_scope_works_globally(
self, tmp_path: Path, mock_embedder: MagicMock
) -> None:
"""recall() without root_scope searches globally (backward compat)."""
from crewai.memory.unified_memory import Memory
mem = Memory(
storage=str(tmp_path / "db"),
llm=MagicMock(),
embedder=mock_embedder,
)
mem.remember(
"Record A",
scope="/scope-a",
categories=["test"],
importance=0.5,
)
mem.remember(
"Record B",
scope="/scope-b",
categories=["test"],
importance=0.5,
)
# recall without scope should find all records
results = mem.recall("record", depth="shallow")
assert len(results) == 2
def test_list_records_defaults_to_root_scope(
self, tmp_path: Path, mock_embedder: MagicMock
) -> None:
"""list_records() with root_scope defaults to that scope."""
from crewai.memory.unified_memory import Memory
# Store records at different scopes
mem_global = Memory(
storage=str(tmp_path / "db"),
llm=MagicMock(),
embedder=mock_embedder,
)
mem_global.remember("Global", scope="/other", categories=["x"], importance=0.5)
mem_global.remember("Scoped", scope="/crew/a/inner", categories=["x"], importance=0.5)
# Create scoped memory
mem_scoped = Memory(
storage=str(tmp_path / "db"),
llm=MagicMock(),
embedder=mock_embedder,
root_scope="/crew/a",
)
# list_records() without scope should only show /crew/a records
records = mem_scoped.list_records()
assert len(records) == 1
assert records[0].scope == "/crew/a/inner"
def test_list_scopes_defaults_to_root_scope(
self, tmp_path: Path, mock_embedder: MagicMock
) -> None:
"""list_scopes() with root_scope defaults to that scope."""
from crewai.memory.unified_memory import Memory
mem = Memory(
storage=str(tmp_path / "db"),
llm=MagicMock(),
embedder=mock_embedder,
)
mem.remember("A", scope="/crew/a/child1", categories=["x"], importance=0.5)
mem.remember("B", scope="/crew/a/child2", categories=["x"], importance=0.5)
mem.remember("C", scope="/crew/b/other", categories=["x"], importance=0.5)
mem_scoped = Memory(
storage=str(tmp_path / "db"),
llm=MagicMock(),
embedder=mock_embedder,
root_scope="/crew/a",
)
# list_scopes() should only show children under /crew/a
scopes = mem_scoped.list_scopes()
assert "/crew/a/child1" in scopes or "child1" in str(scopes)
assert "/crew/b" not in scopes
def test_info_defaults_to_root_scope(
self, tmp_path: Path, mock_embedder: MagicMock
) -> None:
"""info() with root_scope defaults to that scope."""
from crewai.memory.unified_memory import Memory
mem = Memory(
storage=str(tmp_path / "db"),
llm=MagicMock(),
embedder=mock_embedder,
)
mem.remember("A", scope="/crew/a/inner", categories=["x"], importance=0.5)
mem.remember("B", scope="/other/inner", categories=["x"], importance=0.5)
mem_scoped = Memory(
storage=str(tmp_path / "db"),
llm=MagicMock(),
embedder=mock_embedder,
root_scope="/crew/a",
)
# info() should only count records under /crew/a
scope_info = mem_scoped.info()
assert scope_info.record_count == 1
def test_reset_with_root_scope_only_deletes_scoped_records(
self, tmp_path: Path, mock_embedder: MagicMock
) -> None:
"""reset() with root_scope only deletes within that scope."""
from crewai.memory.unified_memory import Memory
mem = Memory(
storage=str(tmp_path / "db"),
llm=MagicMock(),
embedder=mock_embedder,
)
mem.remember("A", scope="/crew/a/inner", categories=["x"], importance=0.5)
mem.remember("B", scope="/other/inner", categories=["x"], importance=0.5)
mem_scoped = Memory(
storage=str(tmp_path / "db"),
llm=MagicMock(),
embedder=mock_embedder,
root_scope="/crew/a",
)
# reset() should only delete /crew/a records
mem_scoped.reset()
# Check with a fresh global memory instance to avoid stale table references
mem_fresh = Memory(
storage=str(tmp_path / "db"),
llm=MagicMock(),
embedder=mock_embedder,
)
records = mem_fresh.list_records()
assert len(records) == 1
assert records[0].scope == "/other/inner"
class TestAgentExecutorBackwardCompat:
"""Tests for agent executor backward compatibility."""
def test_agent_executor_no_root_scope_when_memory_has_none(self) -> None:
"""Agent executor doesn't inject root_scope when memory has none."""
from crewai.agents.agent_builder.base_agent_executor_mixin import (
CrewAgentExecutorMixin,
)
from crewai.agents.parser import AgentFinish
from crewai.utilities.printer import Printer
mock_memory = MagicMock()
mock_memory.read_only = False
mock_memory.root_scope = None # No root_scope set
mock_memory.extract_memories.return_value = ["Fact A"]
mock_agent = MagicMock()
mock_agent.memory = mock_memory
mock_agent._logger = MagicMock()
mock_agent.role = "Researcher"
mock_task = MagicMock()
mock_task.description = "Task"
mock_task.expected_output = "Output"
class MinimalExecutor(CrewAgentExecutorMixin):
crew = None
agent = mock_agent
task = mock_task
iterations = 0
max_iter = 1
messages = []
_i18n = MagicMock()
_printer = Printer()
executor = MinimalExecutor()
executor._save_to_memory(AgentFinish(thought="", output="R", text="R"))
# Should NOT pass root_scope when memory has none
mock_memory.remember_many.assert_called_once()
call_kwargs = mock_memory.remember_many.call_args.kwargs
assert "root_scope" not in call_kwargs
def test_agent_executor_extends_root_scope_when_memory_has_one(self) -> None:
"""Agent executor extends root_scope when memory has one."""
from crewai.agents.agent_builder.base_agent_executor_mixin import (
CrewAgentExecutorMixin,
)
from crewai.agents.parser import AgentFinish
from crewai.utilities.printer import Printer
mock_memory = MagicMock()
mock_memory.read_only = False
mock_memory.root_scope = "/crew/test" # Has root_scope
mock_memory.extract_memories.return_value = ["Fact A"]
mock_agent = MagicMock()
mock_agent.memory = mock_memory
mock_agent._logger = MagicMock()
mock_agent.role = "Researcher"
mock_task = MagicMock()
mock_task.description = "Task"
mock_task.expected_output = "Output"
class MinimalExecutor(CrewAgentExecutorMixin):
crew = None
agent = mock_agent
task = mock_task
iterations = 0
max_iter = 1
messages = []
_i18n = MagicMock()
_printer = Printer()
executor = MinimalExecutor()
executor._save_to_memory(AgentFinish(thought="", output="R", text="R"))
# Should pass extended root_scope
mock_memory.remember_many.assert_called_once()
call_kwargs = mock_memory.remember_many.call_args.kwargs
assert call_kwargs["root_scope"] == "/crew/test/agent/researcher"
class TestConsolidationIsolation:
"""Tests for consolidation staying within root_scope boundary."""
def test_consolidation_search_constrained_to_root_scope(
self, tmp_path: Path, mock_embedder: MagicMock
) -> None:
"""Consolidation similarity search is constrained to root_scope."""
from crewai.memory.encoding_flow import EncodingFlow, ItemState
from crewai.memory.types import MemoryConfig
mock_storage = MagicMock()
mock_storage.search.return_value = []
flow = EncodingFlow(
storage=mock_storage,
llm=MagicMock(),
embedder=mock_embedder,
config=MemoryConfig(),
)
# Create item with root_scope
item = ItemState(
content="Test",
scope="/inner",
root_scope="/crew/a",
embedding=[0.1] * 1536,
)
flow.state.items = [item]
# Run parallel_find_similar
flow.parallel_find_similar()
# Check that search was called with correct scope_prefix
mock_storage.search.assert_called_once()
call_kwargs = mock_storage.search.call_args.kwargs
# Should be /crew/a/inner (root + inner combined)
assert call_kwargs["scope_prefix"] == "/crew/a/inner"
def test_consolidation_search_without_root_scope(
self, tmp_path: Path, mock_embedder: MagicMock
) -> None:
"""Consolidation without root_scope searches by explicit scope only."""
from crewai.memory.encoding_flow import EncodingFlow, ItemState
from crewai.memory.types import MemoryConfig
mock_storage = MagicMock()
mock_storage.search.return_value = []
flow = EncodingFlow(
storage=mock_storage,
llm=MagicMock(),
embedder=mock_embedder,
config=MemoryConfig(),
)
# Create item without root_scope
item = ItemState(
content="Test",
scope="/inner",
root_scope=None,
embedding=[0.1] * 1536,
)
flow.state.items = [item]
# Run parallel_find_similar
flow.parallel_find_similar()
# Check that search was called with explicit scope only
mock_storage.search.assert_called_once()
call_kwargs = mock_storage.search.call_args.kwargs
assert call_kwargs["scope_prefix"] == "/inner"


@@ -0,0 +1,4 @@
---
name: Invalid--Name
description: This skill has an invalid name.
---


@@ -0,0 +1,4 @@
---
name: minimal-skill
description: A minimal skill with only required fields.
---


@@ -0,0 +1,22 @@
---
name: valid-skill
description: A complete test skill with all optional directories.
license: Apache-2.0
compatibility: crewai>=0.1.0
metadata:
author: test
version: "1.0"
allowed-tools: web-search file-read
---
## Instructions
This skill provides comprehensive instructions for the agent.
### Usage
Follow these steps to use the skill effectively.
### Notes
Additional context for the agent.
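The `allowed-tools: web-search file-read` line in the fixture above exercises the frontmatter validator shown earlier: a space-delimited string is split into a list before field validation runs. The normalization step in isolation:

```python
# Mirrors the parse_allowed_tools model validator: accept either key
# spelling, and split a space-delimited string into a list.
values = {"allowed-tools": "web-search file-read"}
raw = values.get("allowed-tools") or values.get("allowed_tools")
if isinstance(raw, str):
    values["allowed-tools"] = raw.split()
```

After normalization the model sees `["web-search", "file-read"]`, which is what the `test_full_fixture_skill` integration test asserts.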


@@ -0,0 +1 @@
{"key": "value"}


@@ -0,0 +1,3 @@
# Reference Guide
This is a reference document for the skill.


@@ -0,0 +1,2 @@
#!/bin/bash
echo "setup"


@@ -0,0 +1,78 @@
"""Integration tests for the skills system."""
from pathlib import Path
import pytest
from crewai.skills.loader import activate_skill, discover_skills, format_skill_context
from crewai.skills.models import INSTRUCTIONS, METADATA
def _create_skill_dir(parent: Path, name: str, body: str = "Body.") -> Path:
"""Helper to create a skill directory with SKILL.md."""
skill_dir = parent / name
skill_dir.mkdir()
(skill_dir / "SKILL.md").write_text(
f"---\nname: {name}\ndescription: Skill {name}\n---\n{body}"
)
return skill_dir
class TestSkillDiscoveryAndActivation:
"""End-to-end tests for discover + activate workflow."""
def test_discover_and_activate(self, tmp_path: Path) -> None:
_create_skill_dir(tmp_path, "my-skill", body="Use this skill.")
skills = discover_skills(tmp_path)
assert len(skills) == 1
assert skills[0].disclosure_level == METADATA
activated = activate_skill(skills[0])
assert activated.disclosure_level == INSTRUCTIONS
assert activated.instructions == "Use this skill."
context = format_skill_context(activated)
assert "## Skill: my-skill" in context
assert "Use this skill." in context
def test_filter_by_skill_names(self, tmp_path: Path) -> None:
_create_skill_dir(tmp_path, "alpha")
_create_skill_dir(tmp_path, "beta")
_create_skill_dir(tmp_path, "gamma")
all_skills = discover_skills(tmp_path)
wanted = {"alpha", "gamma"}
filtered = [s for s in all_skills if s.name in wanted]
assert {s.name for s in filtered} == {"alpha", "gamma"}
def test_full_fixture_skill(self) -> None:
fixtures = Path(__file__).parent / "fixtures"
valid_dir = fixtures / "valid-skill"
if not valid_dir.exists():
pytest.skip("Fixture not found")
skills = discover_skills(fixtures)
valid_skills = [s for s in skills if s.name == "valid-skill"]
assert len(valid_skills) == 1
skill = valid_skills[0]
assert skill.frontmatter.license == "Apache-2.0"
assert skill.frontmatter.allowed_tools == ["web-search", "file-read"]
activated = activate_skill(skill)
assert "Instructions" in (activated.instructions or "")
def test_multiple_search_paths(self, tmp_path: Path) -> None:
path_a = tmp_path / "a"
path_a.mkdir()
_create_skill_dir(path_a, "skill-a")
path_b = tmp_path / "b"
path_b.mkdir()
_create_skill_dir(path_b, "skill-b")
all_skills = []
for search_path in [path_a, path_b]:
all_skills.extend(discover_skills(search_path))
names = {s.name for s in all_skills}
assert names == {"skill-a", "skill-b"}


@@ -0,0 +1,161 @@
"""Tests for skills/loader.py."""
from pathlib import Path
import pytest
from crewai.skills.loader import (
activate_skill,
discover_skills,
format_skill_context,
load_resources,
)
from crewai.skills.models import INSTRUCTIONS, METADATA, RESOURCES, Skill, SkillFrontmatter
from crewai.skills.parser import load_skill_metadata
def _create_skill_dir(parent: Path, name: str, body: str = "Body.") -> Path:
"""Helper to create a skill directory with SKILL.md."""
skill_dir = parent / name
skill_dir.mkdir()
(skill_dir / "SKILL.md").write_text(
f"---\nname: {name}\ndescription: Skill {name}\n---\n{body}"
)
return skill_dir
class TestDiscoverSkills:
"""Tests for discover_skills."""
def test_finds_valid_skills(self, tmp_path: Path) -> None:
_create_skill_dir(tmp_path, "alpha")
_create_skill_dir(tmp_path, "beta")
skills = discover_skills(tmp_path)
names = {s.name for s in skills}
assert names == {"alpha", "beta"}
def test_skips_dirs_without_skill_md(self, tmp_path: Path) -> None:
_create_skill_dir(tmp_path, "valid")
(tmp_path / "no-skill").mkdir()
skills = discover_skills(tmp_path)
assert len(skills) == 1
assert skills[0].name == "valid"
def test_skips_invalid_skills(self, tmp_path: Path) -> None:
_create_skill_dir(tmp_path, "good-skill")
bad_dir = tmp_path / "bad-skill"
bad_dir.mkdir()
(bad_dir / "SKILL.md").write_text(
"---\nname: Wrong-Name\ndescription: bad\n---\n"
)
skills = discover_skills(tmp_path)
assert len(skills) == 1
def test_empty_directory(self, tmp_path: Path) -> None:
skills = discover_skills(tmp_path)
assert skills == []
def test_nonexistent_path(self, tmp_path: Path) -> None:
with pytest.raises(FileNotFoundError):
discover_skills(tmp_path / "nonexistent")
def test_sorted_by_name(self, tmp_path: Path) -> None:
_create_skill_dir(tmp_path, "zebra")
_create_skill_dir(tmp_path, "alpha")
skills = discover_skills(tmp_path)
assert [s.name for s in skills] == ["alpha", "zebra"]
class TestActivateSkill:
"""Tests for activate_skill."""
def test_promotes_to_instructions(self, tmp_path: Path) -> None:
_create_skill_dir(tmp_path, "my-skill", body="Instructions.")
skill = load_skill_metadata(tmp_path / "my-skill")
activated = activate_skill(skill)
assert activated.disclosure_level == INSTRUCTIONS
assert activated.instructions == "Instructions."
def test_idempotent(self, tmp_path: Path) -> None:
_create_skill_dir(tmp_path, "my-skill")
skill = load_skill_metadata(tmp_path / "my-skill")
activated = activate_skill(skill)
again = activate_skill(activated)
assert again is activated
class TestLoadResources:
"""Tests for load_resources."""
def test_promotes_to_resources(self, tmp_path: Path) -> None:
skill_dir = _create_skill_dir(tmp_path, "my-skill")
(skill_dir / "scripts").mkdir()
(skill_dir / "scripts" / "run.sh").write_text("#!/bin/bash")
skill = load_skill_metadata(skill_dir)
full = load_resources(skill)
assert full.disclosure_level == RESOURCES
class TestFormatSkillContext:
"""Tests for format_skill_context."""
def test_metadata_level(self, tmp_path: Path) -> None:
fm = SkillFrontmatter(name="test-skill", description="A skill")
skill = Skill(
frontmatter=fm, path=tmp_path, disclosure_level=METADATA
)
ctx = format_skill_context(skill)
assert "## Skill: test-skill" in ctx
assert "A skill" in ctx
def test_instructions_level(self, tmp_path: Path) -> None:
fm = SkillFrontmatter(name="test-skill", description="A skill")
skill = Skill(
frontmatter=fm,
path=tmp_path,
disclosure_level=INSTRUCTIONS,
instructions="Do these things.",
)
ctx = format_skill_context(skill)
assert "## Skill: test-skill" in ctx
assert "Do these things." in ctx
def test_no_instructions_at_instructions_level(self, tmp_path: Path) -> None:
fm = SkillFrontmatter(name="test-skill", description="A skill")
skill = Skill(
frontmatter=fm,
path=tmp_path,
disclosure_level=INSTRUCTIONS,
instructions=None,
)
ctx = format_skill_context(skill)
assert ctx == "## Skill: test-skill\nA skill"
def test_resources_level(self, tmp_path: Path) -> None:
fm = SkillFrontmatter(name="test-skill", description="A skill")
skill = Skill(
frontmatter=fm,
path=tmp_path,
disclosure_level=RESOURCES,
instructions="Do things.",
resource_files={
"scripts": ["run.sh"],
"assets": ["data.json", "config.yaml"],
},
)
ctx = format_skill_context(skill)
assert "### Available Resources" in ctx
assert "**assets/**: data.json, config.yaml" in ctx
assert "**scripts/**: run.sh" in ctx
def test_resources_level_empty_files(self, tmp_path: Path) -> None:
fm = SkillFrontmatter(name="test-skill", description="A skill")
skill = Skill(
frontmatter=fm,
path=tmp_path,
disclosure_level=RESOURCES,
instructions="Do things.",
resource_files={},
)
ctx = format_skill_context(skill)
assert "### Available Resources" not in ctx


@@ -0,0 +1,91 @@
"""Tests for skills/models.py."""
from pathlib import Path
import pytest
from crewai.skills.models import (
INSTRUCTIONS,
METADATA,
RESOURCES,
Skill,
SkillFrontmatter,
)
class TestDisclosureLevel:
"""Tests for DisclosureLevel constants."""
def test_ordering(self) -> None:
assert METADATA < INSTRUCTIONS
assert INSTRUCTIONS < RESOURCES
def test_values(self) -> None:
assert METADATA == 1
assert INSTRUCTIONS == 2
assert RESOURCES == 3
class TestSkillFrontmatter:
"""Tests for SkillFrontmatter model."""
def test_required_fields(self) -> None:
fm = SkillFrontmatter(name="my-skill", description="A test skill")
assert fm.name == "my-skill"
assert fm.description == "A test skill"
assert fm.license is None
assert fm.metadata is None
assert fm.allowed_tools is None
def test_all_fields(self) -> None:
fm = SkillFrontmatter(
name="web-search",
description="Search the web",
license="Apache-2.0",
compatibility="crewai>=0.1.0",
metadata={"author": "test"},
allowed_tools=["browser"],
)
assert fm.license == "Apache-2.0"
assert fm.metadata == {"author": "test"}
assert fm.allowed_tools == ["browser"]
def test_frozen(self) -> None:
fm = SkillFrontmatter(name="my-skill", description="desc")
with pytest.raises(Exception):
fm.name = "other" # type: ignore[misc]
def test_invalid_name_rejected(self) -> None:
with pytest.raises(ValueError):
SkillFrontmatter(name="Invalid--Name", description="bad")
class TestSkill:
"""Tests for Skill model."""
def test_properties(self, tmp_path: Path) -> None:
fm = SkillFrontmatter(name="test-skill", description="desc")
skill = Skill(frontmatter=fm, path=tmp_path / "test-skill")
assert skill.name == "test-skill"
assert skill.description == "desc"
assert skill.disclosure_level == METADATA
def test_resource_dirs(self, tmp_path: Path) -> None:
skill_dir = tmp_path / "test-skill"
skill_dir.mkdir()
fm = SkillFrontmatter(name="test-skill", description="desc")
skill = Skill(frontmatter=fm, path=skill_dir)
assert skill.scripts_dir == skill_dir / "scripts"
assert skill.references_dir == skill_dir / "references"
assert skill.assets_dir == skill_dir / "assets"
def test_with_disclosure_level(self, tmp_path: Path) -> None:
fm = SkillFrontmatter(name="test-skill", description="desc")
skill = Skill(frontmatter=fm, path=tmp_path)
promoted = skill.with_disclosure_level(
INSTRUCTIONS,
instructions="Do this.",
)
assert promoted.disclosure_level == INSTRUCTIONS
assert promoted.instructions == "Do this."
assert skill.disclosure_level == METADATA


@@ -0,0 +1,167 @@
"""Tests for skills/parser.py."""
from pathlib import Path
import pytest
from crewai.skills.models import INSTRUCTIONS, METADATA, RESOURCES
from crewai.skills.parser import (
SkillParseError,
load_skill_instructions,
load_skill_metadata,
load_skill_resources,
parse_frontmatter,
parse_skill_md,
)
class TestParseFrontmatter:
"""Tests for parse_frontmatter."""
def test_valid_frontmatter_and_body(self) -> None:
content = "---\nname: test\ndescription: A test\n---\n\nBody text here."
fm, body = parse_frontmatter(content)
assert fm["name"] == "test"
assert fm["description"] == "A test"
assert body == "Body text here."
def test_empty_body(self) -> None:
content = "---\nname: test\ndescription: A test\n---"
fm, body = parse_frontmatter(content)
assert fm["name"] == "test"
assert body == ""
def test_missing_opening_delimiter(self) -> None:
with pytest.raises(SkillParseError, match="must start with"):
parse_frontmatter("name: test\n---\nBody")
def test_missing_closing_delimiter(self) -> None:
with pytest.raises(SkillParseError, match="missing closing"):
parse_frontmatter("---\nname: test\n")
def test_invalid_yaml(self) -> None:
with pytest.raises(SkillParseError, match="Invalid YAML"):
parse_frontmatter("---\n: :\n bad: [yaml\n---\nBody")
def test_triple_dash_in_body(self) -> None:
content = "---\nname: test\ndescription: desc\n---\n\nBody with --- inside."
fm, body = parse_frontmatter(content)
assert "---" in body
def test_inline_triple_dash_in_yaml_value(self) -> None:
content = '---\nname: test\ndescription: "Use---carefully"\n---\n\nBody.'
fm, body = parse_frontmatter(content)
assert fm["description"] == "Use---carefully"
assert body == "Body."
def test_unicode_content(self) -> None:
content = "---\nname: test\ndescription: Beschreibung\n---\n\nUnicode: \u00e4\u00f6\u00fc\u00df"
fm, body = parse_frontmatter(content)
assert fm["description"] == "Beschreibung"
assert "\u00e4\u00f6\u00fc\u00df" in body
def test_non_mapping_frontmatter(self) -> None:
with pytest.raises(SkillParseError, match="must be a YAML mapping"):
parse_frontmatter("---\n- item1\n- item2\n---\nBody")
class TestParseSkillMd:
"""Tests for parse_skill_md."""
def test_valid_file(self, tmp_path: Path) -> None:
skill_md = tmp_path / "SKILL.md"
skill_md.write_text(
"---\nname: my-skill\ndescription: desc\n---\nInstructions here."
)
fm, body = parse_skill_md(skill_md)
assert fm.name == "my-skill"
assert body == "Instructions here."
def test_file_not_found(self, tmp_path: Path) -> None:
with pytest.raises(FileNotFoundError):
parse_skill_md(tmp_path / "nonexistent" / "SKILL.md")
class TestLoadSkillMetadata:
"""Tests for load_skill_metadata."""
def test_valid_skill(self, tmp_path: Path) -> None:
skill_dir = tmp_path / "my-skill"
skill_dir.mkdir()
(skill_dir / "SKILL.md").write_text(
"---\nname: my-skill\ndescription: Test skill\n---\nBody"
)
skill = load_skill_metadata(skill_dir)
assert skill.name == "my-skill"
assert skill.disclosure_level == METADATA
assert skill.instructions is None
def test_directory_name_mismatch(self, tmp_path: Path) -> None:
skill_dir = tmp_path / "wrong-name"
skill_dir.mkdir()
(skill_dir / "SKILL.md").write_text(
"---\nname: my-skill\ndescription: Test skill\n---\n"
)
with pytest.raises(ValueError, match="does not match"):
load_skill_metadata(skill_dir)
class TestLoadSkillInstructions:
"""Tests for load_skill_instructions."""
def test_promotes_to_instructions(self, tmp_path: Path) -> None:
skill_dir = tmp_path / "my-skill"
skill_dir.mkdir()
(skill_dir / "SKILL.md").write_text(
"---\nname: my-skill\ndescription: Test\n---\nFull body."
)
skill = load_skill_metadata(skill_dir)
promoted = load_skill_instructions(skill)
assert promoted.disclosure_level == INSTRUCTIONS
assert promoted.instructions == "Full body."
def test_idempotent(self, tmp_path: Path) -> None:
skill_dir = tmp_path / "my-skill"
skill_dir.mkdir()
(skill_dir / "SKILL.md").write_text(
"---\nname: my-skill\ndescription: Test\n---\nBody."
)
skill = load_skill_metadata(skill_dir)
promoted = load_skill_instructions(skill)
again = load_skill_instructions(promoted)
assert again is promoted
class TestLoadSkillResources:
"""Tests for load_skill_resources."""
def test_catalogs_resources(self, tmp_path: Path) -> None:
skill_dir = tmp_path / "my-skill"
skill_dir.mkdir()
(skill_dir / "SKILL.md").write_text(
"---\nname: my-skill\ndescription: Test\n---\nBody."
)
(skill_dir / "scripts").mkdir()
(skill_dir / "scripts" / "run.sh").write_text("#!/bin/bash")
(skill_dir / "assets").mkdir()
(skill_dir / "assets" / "data.json").write_text("{}")
skill = load_skill_metadata(skill_dir)
full = load_skill_resources(skill)
assert full.disclosure_level == RESOURCES
assert full.instructions == "Body."
assert full.resource_files is not None
assert "scripts" in full.resource_files
assert "run.sh" in full.resource_files["scripts"]
assert "assets" in full.resource_files
assert "data.json" in full.resource_files["assets"]
def test_no_resource_dirs(self, tmp_path: Path) -> None:
skill_dir = tmp_path / "my-skill"
skill_dir.mkdir()
(skill_dir / "SKILL.md").write_text(
"---\nname: my-skill\ndescription: Test\n---\nBody."
)
skill = load_skill_metadata(skill_dir)
full = load_skill_resources(skill)
assert full.resource_files == {}
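The parsing behavior exercised above — a SKILL.md document split into YAML frontmatter and a markdown body — can be sketched in a few lines. This is a simplified stand-in, not the real `parse_frontmatter`: it handles only flat `key: value` pairs rather than full YAML:

```python
import re


def split_frontmatter(text: str) -> tuple[dict[str, str], str]:
    """Split a SKILL.md document into (frontmatter fields, body).

    Minimal sketch: flat `key: value` pairs only, no nested YAML.
    """
    match = re.match(r"^---\n(.*?)\n---\n?(.*)$", text, re.DOTALL)
    if match is None:
        raise ValueError("missing YAML frontmatter delimiters")
    raw, body = match.group(1), match.group(2)
    fields: dict[str, str] = {}
    for line in raw.splitlines():
        key, _, value = line.partition(":")
        fields[key.strip()] = value.strip()
    return fields, body.strip()


fm, body = split_frontmatter(
    "---\nname: my-skill\ndescription: desc\n---\nInstructions here."
)
```

The real loader then promotes the result through the three disclosure levels (metadata, instructions, resources) as the tests above demonstrate.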



@@ -0,0 +1,93 @@
"""Tests for skills validation."""
from pathlib import Path
import pytest
from crewai.skills.models import SkillFrontmatter
from crewai.skills.validation import (
MAX_SKILL_NAME_LENGTH,
validate_directory_name,
)
def _make(name: str) -> SkillFrontmatter:
"""Create a SkillFrontmatter with the given name."""
return SkillFrontmatter(name=name, description="desc")
class TestSkillNameValidation:
"""Tests for skill name constraints via SkillFrontmatter."""
def test_simple_name(self) -> None:
assert _make("web-search").name == "web-search"
def test_single_word(self) -> None:
assert _make("search").name == "search"
def test_numeric(self) -> None:
assert _make("tool3").name == "tool3"
def test_all_digits(self) -> None:
assert _make("123").name == "123"
def test_single_char(self) -> None:
assert _make("a").name == "a"
def test_max_length(self) -> None:
name = "a" * MAX_SKILL_NAME_LENGTH
assert _make(name).name == name
def test_multi_hyphen_segments(self) -> None:
assert _make("my-cool-skill").name == "my-cool-skill"
def test_empty_raises(self) -> None:
with pytest.raises(ValueError):
_make("")
def test_too_long_raises(self) -> None:
with pytest.raises(ValueError):
_make("a" * (MAX_SKILL_NAME_LENGTH + 1))
def test_uppercase_raises(self) -> None:
with pytest.raises(ValueError):
_make("MySkill")
def test_leading_hyphen_raises(self) -> None:
with pytest.raises(ValueError):
_make("-skill")
def test_trailing_hyphen_raises(self) -> None:
with pytest.raises(ValueError):
_make("skill-")
def test_consecutive_hyphens_raises(self) -> None:
with pytest.raises(ValueError):
_make("my--skill")
def test_underscore_raises(self) -> None:
with pytest.raises(ValueError):
_make("my_skill")
def test_space_raises(self) -> None:
with pytest.raises(ValueError):
_make("my skill")
def test_special_chars_raises(self) -> None:
with pytest.raises(ValueError):
_make("skill@v1")
class TestValidateDirectoryName:
"""Tests for validate_directory_name."""
def test_matching_names(self, tmp_path: Path) -> None:
skill_dir = tmp_path / "my-skill"
skill_dir.mkdir()
validate_directory_name(skill_dir, "my-skill")
def test_mismatched_names(self, tmp_path: Path) -> None:
skill_dir = tmp_path / "other-name"
skill_dir.mkdir()
with pytest.raises(ValueError, match="does not match"):
validate_directory_name(skill_dir, "my-skill")
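The name constraints exercised above — lowercase alphanumerics, single-hyphen separators, no leading, trailing, or consecutive hyphens, bounded length — reduce to one regular expression. A minimal sketch; the `64` limit here is illustrative, the real value is the `MAX_SKILL_NAME_LENGTH` constant in `crewai.skills.validation`:

```python
import re

MAX_SKILL_NAME_LENGTH = 64  # illustrative; see crewai.skills.validation for the real limit

# One or more lowercase alphanumeric segments joined by single hyphens.
_NAME_PATTERN = re.compile(r"^[a-z0-9]+(-[a-z0-9]+)*$")


def is_valid_skill_name(name: str) -> bool:
    """Check a skill name against the rules the tests above encode."""
    return bool(_NAME_PATTERN.fullmatch(name)) and len(name) <= MAX_SKILL_NAME_LENGTH
```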


@@ -29,6 +29,7 @@ dev = [
"types-psycopg2==2.9.21.20251012",
"types-pymysql==1.1.0.20250916",
"types-aiofiles~=25.1.0",
"commitizen>=4.13.9",
]
@@ -142,6 +143,22 @@ python_files = "test_*.py"
python_classes = "Test*"
python_functions = "test_*"
[tool.commitizen]
name = "cz_customize"
version_provider = "scm"
tag_format = "$version"
allowed_prefixes = ["Merge", "Revert"]
changelog_incremental = true
update_changelog_on_bump = false
[tool.commitizen.customize]
schema = "<type>(<scope>): <description>"
schema_pattern = "^(feat|fix|refactor|perf|test|docs|chore|ci|style|revert)(\\(.+\\))?!?: .{1,72}"
bump_pattern = "^(feat|fix|perf|refactor|revert)"
bump_map = { feat = "MINOR", fix = "PATCH", perf = "PATCH", refactor = "PATCH", revert = "PATCH" }
info = "Commits must follow Conventional Commits 1.0.0."
[tool.uv]
# composio-core pins rich<14 but textual requires rich>=14.

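The `schema_pattern` added in the `[tool.commitizen.customize]` section above can be checked directly against sample commit subjects. A quick sketch, using the pattern verbatim:

```python
import re

# Pattern copied from [tool.commitizen.customize] schema_pattern.
SCHEMA_PATTERN = re.compile(
    r"^(feat|fix|refactor|perf|test|docs|chore|ci|style|revert)(\(.+\))?!?: .{1,72}"
)

# Conventional subjects match, with or without a scope or breaking-change "!".
assert SCHEMA_PATTERN.match("feat(cli): add logout command")
assert SCHEMA_PATTERN.match("fix: raise value error on no file support")
assert SCHEMA_PATTERN.match("feat!: drop legacy config format")

# Subjects without a recognized type prefix are rejected.
assert SCHEMA_PATTERN.match("Add logout command") is None
```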
uv.lock generated

@@ -31,6 +31,7 @@ overrides = [
dev = [
{ name = "bandit", specifier = "==1.9.2" },
{ name = "boto3-stubs", extras = ["bedrock-runtime"], specifier = "==1.42.40" },
{ name = "commitizen", specifier = ">=4.13.9" },
{ name = "mypy", specifier = "==1.19.1" },
{ name = "pre-commit", specifier = "==4.5.1" },
{ name = "pytest", specifier = "==8.4.2" },
@@ -370,6 +371,15 @@ wheels = [
{ url = "https://files.pythonhosted.org/packages/3b/00/2344469e2084fb287c2e0b57b72910309874c3245463acd6cf5e3db69324/appdirs-1.4.4-py2.py3-none-any.whl", hash = "sha256:a841dacd6b99318a741b166adb07e19ee71a274450e68237b4650ca1055ab128", size = 9566, upload-time = "2020-05-11T07:59:49.499Z" },
]
[[package]]
name = "argcomplete"
version = "3.6.3"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/38/61/0b9ae6399dd4a58d8c1b1dc5a27d6f2808023d0b5dd3104bb99f45a33ff6/argcomplete-3.6.3.tar.gz", hash = "sha256:62e8ed4fd6a45864acc8235409461b72c9a28ee785a2011cc5eb78318786c89c", size = 73754, upload-time = "2025-10-20T03:33:34.741Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/74/f5/9373290775639cb67a2fce7f629a1c240dce9f12fe927bc32b2736e16dfc/argcomplete-3.6.3-py3-none-any.whl", hash = "sha256:f5007b3a600ccac5d25bbce33089211dfd49eab4a7718da3f10e3082525a92ce", size = 43846, upload-time = "2025-10-20T03:33:33.021Z" },
]
[[package]]
name = "asn1crypto"
version = "1.5.1"
@@ -944,6 +954,30 @@ wheels = [
{ url = "https://files.pythonhosted.org/packages/6d/c1/e419ef3723a074172b68aaa89c9f3de486ed4c2399e2dbd8113a4fdcaf9e/colorlog-6.10.1-py3-none-any.whl", hash = "sha256:2d7e8348291948af66122cff006c9f8da6255d224e7cf8e37d8de2df3bad8c9c", size = 11743, upload-time = "2025-10-16T16:14:10.512Z" },
]
[[package]]
name = "commitizen"
version = "4.13.9"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "argcomplete" },
{ name = "charset-normalizer" },
{ name = "colorama" },
{ name = "decli" },
{ name = "deprecated" },
{ name = "jinja2" },
{ name = "packaging" },
{ name = "prompt-toolkit" },
{ name = "pyyaml" },
{ name = "questionary" },
{ name = "termcolor" },
{ name = "tomlkit" },
{ name = "typing-extensions", marker = "python_full_version < '3.11'" },
]
sdist = { url = "https://files.pythonhosted.org/packages/a6/44/10f95e8178ab5a584298726a4a94ceb83a7f77e00741fec4680df05fedd5/commitizen-4.13.9.tar.gz", hash = "sha256:2b4567ed50555e10920e5bd804a6a4e2c42ec70bb74f14a83f2680fe9eaf9727", size = 64145, upload-time = "2026-02-25T02:40:05.326Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/28/22/9b14ee0f17f0aad219a2fb37a293a57b8324d9d195c6ef6807bcd0bf2055/commitizen-4.13.9-py3-none-any.whl", hash = "sha256:d2af3d6a83cacec9d5200e17768942c5de6266f93d932c955986c60c4285f2db", size = 85373, upload-time = "2026-02-25T02:40:03.83Z" },
]
[[package]]
name = "composio-core"
version = "0.7.21"
@@ -1115,6 +1149,7 @@ dependencies = [
{ name = "pydantic-settings" },
{ name = "pyjwt" },
{ name = "python-dotenv" },
{ name = "pyyaml" },
{ name = "regex" },
{ name = "textual" },
{ name = "tokenizers" },
@@ -1222,6 +1257,7 @@ requires-dist = [
{ name = "pydantic-settings", specifier = "~=2.10.1" },
{ name = "pyjwt", specifier = ">=2.9.0,<3" },
{ name = "python-dotenv", specifier = "~=1.1.1" },
{ name = "pyyaml", specifier = "~=6.0" },
{ name = "qdrant-client", extras = ["fastembed"], marker = "extra == 'qdrant'", specifier = "~=1.14.3" },
{ name = "regex", specifier = "~=2026.1.15" },
{ name = "textual", specifier = ">=7.5.0" },
@@ -1573,6 +1609,15 @@ wheels = [
{ url = "https://files.pythonhosted.org/packages/c3/be/d0d44e092656fe7a06b55e6103cbce807cdbdee17884a5367c68c9860853/dataclasses_json-0.6.7-py3-none-any.whl", hash = "sha256:0dbf33f26c8d5305befd61b39d2b3414e8a407bedc2834dea9b8d642666fb40a", size = 28686, upload-time = "2024-06-09T16:20:16.715Z" },
]
[[package]]
name = "decli"
version = "0.6.3"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/0c/59/d4ffff1dee2c8f6f2dd8f87010962e60f7b7847504d765c91ede5a466730/decli-0.6.3.tar.gz", hash = "sha256:87f9d39361adf7f16b9ca6e3b614badf7519da13092f2db3c80ca223c53c7656", size = 7564, upload-time = "2025-06-01T15:23:41.25Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/d8/fa/ec878c28bc7f65b77e7e17af3522c9948a9711b9fa7fc4c5e3140a7e3578/decli-0.6.3-py3-none-any.whl", hash = "sha256:5152347c7bb8e3114ad65db719e5709b28d7f7f45bdb709f70167925e55640f3", size = 7989, upload-time = "2025-06-01T15:23:40.228Z" },
]
[[package]]
name = "decorator"
version = "5.2.1"
@@ -5275,6 +5320,18 @@ wheels = [
{ url = "https://files.pythonhosted.org/packages/5d/19/fd3ef348460c80af7bb4669ea7926651d1f95c23ff2df18b9d24bab4f3fa/pre_commit-4.5.1-py2.py3-none-any.whl", hash = "sha256:3b3afd891e97337708c1674210f8eba659b52a38ea5f822ff142d10786221f77", size = 226437, upload-time = "2025-12-16T21:14:32.409Z" },
]
[[package]]
name = "prompt-toolkit"
version = "3.0.51"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "wcwidth" },
]
sdist = { url = "https://files.pythonhosted.org/packages/bb/6e/9d084c929dfe9e3bfe0c6a47e31f78a25c54627d64a66e884a8bf5474f1c/prompt_toolkit-3.0.51.tar.gz", hash = "sha256:931a162e3b27fc90c86f1b48bb1fb2c528c2761475e57c9c06de13311c7b54ed", size = 428940, upload-time = "2025-04-15T09:18:47.731Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/ce/4f/5249960887b1fbe561d9ff265496d170b55a735b76724f10ef19f9e40716/prompt_toolkit-3.0.51-py3-none-any.whl", hash = "sha256:52742911fde84e2d423e2f9a4cf1de7d7ac4e51958f648d9540e0fb8db077b07", size = 387810, upload-time = "2025-04-15T09:18:44.753Z" },
]
[[package]]
name = "propcache"
version = "0.4.1"
@@ -6556,6 +6613,18 @@ fastembed = [
{ name = "fastembed", version = "0.7.4", source = { registry = "https://pypi.org/simple" }, marker = "python_full_version < '3.13'" },
]
[[package]]
name = "questionary"
version = "2.1.1"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "prompt-toolkit" },
]
sdist = { url = "https://files.pythonhosted.org/packages/f6/45/eafb0bba0f9988f6a2520f9ca2df2c82ddfa8d67c95d6625452e97b204a5/questionary-2.1.1.tar.gz", hash = "sha256:3d7e980292bb0107abaa79c68dd3eee3c561b83a0f89ae482860b181c8bd412d", size = 25845, upload-time = "2025-08-28T19:00:20.851Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/3c/26/1062c7ec1b053db9e499b4d2d5bc231743201b74051c973dadeac80a8f43/questionary-2.1.1-py3-none-any.whl", hash = "sha256:a51af13f345f1cdea62347589fbb6df3b290306ab8930713bfae4d475a7d4a59", size = 36753, upload-time = "2025-08-28T19:00:19.56Z" },
]
[[package]]
name = "rapidfuzz"
version = "3.14.3"
@@ -7555,6 +7624,15 @@ wheels = [
{ url = "https://files.pythonhosted.org/packages/d7/c1/eb8f9debc45d3b7918a32ab756658a0904732f75e555402972246b0b8e71/tenacity-9.1.4-py3-none-any.whl", hash = "sha256:6095a360c919085f28c6527de529e76a06ad89b23659fa881ae0649b867a9d55", size = 28926, upload-time = "2026-02-07T10:45:32.24Z" },
]
[[package]]
name = "termcolor"
version = "3.3.0"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/46/79/cf31d7a93a8fdc6aa0fbb665be84426a8c5a557d9240b6239e9e11e35fc5/termcolor-3.3.0.tar.gz", hash = "sha256:348871ca648ec6a9a983a13ab626c0acce02f515b9e1983332b17af7979521c5", size = 14434, upload-time = "2025-12-29T12:55:21.882Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/33/d1/8bb87d21e9aeb323cc03034f5eaf2c8f69841e40e4853c2627edf8111ed3/termcolor-3.3.0-py3-none-any.whl", hash = "sha256:cf642efadaf0a8ebbbf4bc7a31cec2f9b5f21a9f726f4ccbb08192c9c26f43a5", size = 7734, upload-time = "2025-12-29T12:55:20.718Z" },
]
[[package]]
name = "textual"
version = "7.5.0"
@@ -8565,6 +8643,15 @@ wheels = [
{ url = "https://files.pythonhosted.org/packages/6e/d4/ed38dd3b1767193de971e694aa544356e63353c33a85d948166b5ff58b9e/watchfiles-1.1.1-pp311-pypy311_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:3e6f39af2eab0118338902798b5aa6664f46ff66bc0280de76fca67a7f262a49", size = 457546, upload-time = "2025-10-14T15:06:13.372Z" },
]
[[package]]
name = "wcwidth"
version = "0.6.0"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/35/a2/8e3becb46433538a38726c948d3399905a4c7cabd0df578ede5dc51f0ec2/wcwidth-0.6.0.tar.gz", hash = "sha256:cdc4e4262d6ef9a1a57e018384cbeb1208d8abbc64176027e2c2455c81313159", size = 159684, upload-time = "2026-02-06T19:19:40.919Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/68/5a/199c59e0a824a3db2b89c5d2dade7ab5f9624dbf6448dc291b46d5ec94d3/wcwidth-0.6.0-py3-none-any.whl", hash = "sha256:1a3a1e510b553315f8e146c54764f4fb6264ffad731b3d78088cdb1478ffbdad", size = 94189, upload-time = "2026-02-06T19:19:39.646Z" },
]
[[package]]
name = "weaviate-client"
version = "4.18.3"