mirror of https://github.com/crewAIInc/crewAI.git
synced 2026-03-28 06:38:19 +00:00

Compare commits: `devin/1774...feature/re` (13 commits)

| SHA1 |
| :--- |
| 42421740cf |
| b266cf7a3e |
| c542cc9f70 |
| aced3e5c29 |
| 555ee462a3 |
| dd9ae02159 |
| 949d7f1091 |
| 3b569b8da9 |
| e88a8f2785 |
| 85199e9ffc |
| c92de53da7 |
| 1704ccdfa8 |
| 09b84dd2b0 |
.github/workflows/pr-size.yml (vendored, new file, 32 lines)
@@ -0,0 +1,32 @@
```yaml
name: PR Size Check

on:
  pull_request:
    types: [opened, synchronize, reopened]

jobs:
  pr-size:
    runs-on: ubuntu-latest
    permissions:
      pull-requests: write
    steps:
      - uses: codelytv/pr-size-labeler@v1
        with:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
          xs_label: "size/XS"
          xs_max_size: 25
          s_label: "size/S"
          s_max_size: 100
          m_label: "size/M"
          m_max_size: 250
          l_label: "size/L"
          l_max_size: 500
          xl_label: "size/XL"
          fail_if_xl: false
          files_to_ignore: |
            uv.lock
            *.lock
            lib/crewai/src/crewai/cli/templates/**
            **/*.json
            **/test_durations/**
            **/cassettes/**
```
||||
41
.github/workflows/pr-title.yml
vendored
Normal file
41
.github/workflows/pr-title.yml
vendored
Normal file
@@ -0,0 +1,41 @@
```yaml
name: PR Title Check

on:
  pull_request:
    types: [opened, edited, synchronize, reopened]

permissions:
  contents: read
  pull-requests: read

jobs:
  pr-title:
    runs-on: ubuntu-latest
    steps:
      - uses: amannn/action-semantic-pull-request@v5
        continue-on-error: true
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
        with:
          types: |
            feat
            fix
            refactor
            perf
            test
            docs
            chore
            ci
            style
            revert
          requireScope: false
          subjectPattern: ^[a-z].+[^.]$
          subjectPatternError: >
            The PR title "{title}" does not follow conventional commit format.

            Expected: <type>(<scope>): <lowercase description without trailing period>

            Examples:
            feat(memory): add lancedb storage backend
            fix(agents): resolve deadlock in concurrent execution
            chore(deps): bump pydantic to 2.11.9
```
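The `subjectPattern` above requires the subject to start with a lowercase letter and not end with a period. Its behavior can be checked directly with Python's `re` (a sketch; the action itself applies this pattern to the subject part of the title):

```python
import re

# Same regex as the workflow's subjectPattern.
SUBJECT_PATTERN = re.compile(r"^[a-z].+[^.]$")


def subject_ok(subject: str) -> bool:
    """Return True if a PR title subject satisfies the configured pattern."""
    return SUBJECT_PATTERN.match(subject) is not None
```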
docs/docs.json (1397 lines changed; file diff suppressed because it is too large)
@@ -4,6 +4,38 @@ description: "Product updates, improvements, and bug fixes for CrewAI"
icon: "clock"
mode: "wide"
---

<Update label="Mar 23, 2026">
## v1.11.1

[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/1.11.1)

## What's Changed

### Features
- Add flow_structure() serializer for Flow class introspection.

### Bug Fixes
- Fix security vulnerabilities by bumping pypdf, tinytag, and langchain-core.
- Preserve full LLM config across HITL resume for non-OpenAI providers.
- Prevent path traversal in FileWriterTool.
- Fix lock_store crash when redis package is not installed.
- Pass cache_function from BaseTool to CrewStructuredTool.

### Documentation
- Add publish custom tools guide with translations.
- Update changelog and version for v1.11.0.
- Add missing event listeners documentation.

### Refactoring
- Replace urllib with requests in pdf loader.
- Replace Any-typed callback and model fields with serializable types.

## Contributors

@alex-clawd, @danielfsbarreto, @dependabot[bot], @greysonlalonde, @lorenzejay, @lucasgomide, @mattatcha, @theCyberTech, @vinibrsl

</Update>

<Update label="Mar 18, 2026">
## v1.11.0
docs/en/concepts/skills.mdx (new file, 115 lines)
@@ -0,0 +1,115 @@
---
title: Skills
description: Filesystem-based skill packages that inject context into agent prompts.
icon: bolt
mode: "wide"
---

## Overview

Skills are self-contained directories that provide agents with domain-specific instructions, references, and assets. Each skill is defined by a `SKILL.md` file with YAML frontmatter and a markdown body.

Skills use **progressive disclosure** — metadata is loaded first, full instructions only when activated, and resource catalogs only when needed.

## Directory Structure

```
my-skill/
├── SKILL.md       # Required — frontmatter + instructions
├── scripts/       # Optional — executable scripts
├── references/    # Optional — reference documents
└── assets/        # Optional — static files (configs, data)
```

The directory name must match the `name` field in `SKILL.md`.

## SKILL.md Format

```markdown
---
name: my-skill
description: Short description of what this skill does and when to use it.
license: Apache-2.0                   # optional
compatibility: crewai>=0.1.0          # optional
metadata:                             # optional
  author: your-name
  version: "1.0"
allowed-tools: web-search file-read   # optional, space-delimited
---

Instructions for the agent go here. This markdown body is injected
into the agent's prompt when the skill is activated.
```

### Frontmatter Fields

| Field | Required | Constraints |
| :--- | :--- | :--- |
| `name` | Yes | 1–64 chars. Lowercase alphanumeric and hyphens. No leading/trailing/consecutive hyphens. Must match directory name. |
| `description` | Yes | 1–1024 chars. Describes what the skill does and when to use it. |
| `license` | No | License name or reference to a bundled license file. |
| `compatibility` | No | Max 500 chars. Environment requirements (products, packages, network). |
| `metadata` | No | Arbitrary string key-value mapping. |
| `allowed-tools` | No | Space-delimited list of pre-approved tools. Experimental. |
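The `name` constraints (lowercase alphanumerics and hyphens, no leading/trailing/consecutive hyphens, 1–64 chars) can be expressed as a short validation sketch. `is_valid_skill_name` is a hypothetical helper for illustration; the real validation lives inside crewai's skill loader:

```python
import re

# Runs of [a-z0-9] separated by single hyphens: this shape rules out
# leading, trailing, and consecutive hyphens in one pattern.
_NAME_RE = re.compile(r"^[a-z0-9]+(-[a-z0-9]+)*$")


def is_valid_skill_name(name: str) -> bool:
    """Check a skill name against the documented constraints (sketch)."""
    return 1 <= len(name) <= 64 and _NAME_RE.match(name) is not None
```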
## Usage

### Agent-level Skills

Pass skill directory paths to an agent:

```python
from crewai import Agent

agent = Agent(
    role="Researcher",
    goal="Find relevant information",
    backstory="An expert researcher.",
    skills=["./skills"],  # discovers all skills in this directory
)
```

### Crew-level Skills

Skill paths on a crew are merged into every agent:

```python
from crewai import Crew

crew = Crew(
    agents=[agent],
    tasks=[task],
    skills=["./skills"],
)
```

### Pre-loaded Skills

You can also pass `Skill` objects directly:

```python
from pathlib import Path
from crewai.skills import discover_skills, activate_skill

skills = discover_skills(Path("./skills"))
activated = [activate_skill(s) for s in skills]

agent = Agent(
    role="Researcher",
    goal="Find relevant information",
    backstory="An expert researcher.",
    skills=activated,
)
```

## How Skills Are Loaded

Skills load progressively — only the data needed at each stage is read:

| Stage | What's loaded | When |
| :--- | :--- | :--- |
| Discovery | Name, description, frontmatter fields | `discover_skills()` |
| Activation | Full SKILL.md body text | `activate_skill()` |

During normal agent execution, skills are automatically discovered and activated. The `scripts/`, `references/`, and `assets/` directories are available on the skill's `path` for agents that need to reference files directly.
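Progressive disclosure amounts to reading only the frontmatter at discovery time and the body only at activation. A minimal sketch of that split (`parse_skill_md` is a hypothetical helper with naive YAML handling; the real loader in `crewai.skills` validates far more):

```python
def parse_skill_md(text: str) -> tuple[dict, str]:
    """Split a SKILL.md string into (frontmatter dict, body).

    Naive parsing: only flat `key: value` lines are read, which is enough
    for the required name/description fields; inline comments are dropped.
    """
    _, _, rest = text.partition("---\n")
    front_text, _, body = rest.partition("---\n")
    frontmatter = {}
    for line in front_text.splitlines():
        if ":" in line and not line.startswith(" "):
            key, _, value = line.partition(":")
            frontmatter[key.strip()] = value.split("#")[0].strip()
    return frontmatter, body.strip()
```

At discovery, only the frontmatter dict would be kept; the body string corresponds to what activation injects into the prompt.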
@@ -4,6 +4,38 @@ description: "Product updates, improvements, and bug fixes for CrewAI" (Korean)
icon: "clock"
mode: "wide"
---

<Update label="Mar 23, 2026">
## v1.11.1

[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/1.11.1)

## What's Changed

### Features
- Add flow_structure() serializer for Flow class introspection.

### Bug Fixes
- Fix security vulnerabilities by bumping pypdf, tinytag, and langchain-core.
- Preserve full LLM config across HITL resume for non-OpenAI providers.
- Prevent path traversal in FileWriterTool.
- Fix lock_store crash when the redis package is not installed.
- Pass cache_function from BaseTool to CrewStructuredTool.

### Documentation
- Add publish custom tools guide with translations.
- Update changelog and version for v1.11.0.
- Add missing event listeners documentation.

### Refactoring
- Replace urllib with requests in the pdf loader.
- Replace Any-typed callback and model fields with serializable types.

## Contributors

@alex-clawd, @danielfsbarreto, @dependabot[bot], @greysonlalonde, @lorenzejay, @lucasgomide, @mattatcha, @theCyberTech, @vinibrsl

</Update>

<Update label="Mar 18, 2026">
## v1.11.0
docs/ko/concepts/skills.mdx (new file, 114 lines)
@@ -0,0 +1,114 @@
---
title: Skills
description: Filesystem-based skill packages that inject context into agent prompts.
icon: bolt
mode: "wide"
---

## Overview

Skills are self-contained directories that provide agents with domain-specific instructions, reference materials, and assets. Each skill is defined by a `SKILL.md` file with YAML frontmatter and a markdown body.

Skills use **progressive disclosure** — metadata is loaded first, full instructions only on activation, and resource catalogs only when needed.

## Directory Structure

```
my-skill/
├── SKILL.md       # Required — frontmatter + instructions
├── scripts/       # Optional — executable scripts
├── references/    # Optional — reference documents
└── assets/        # Optional — static files (configs, data)
```

The directory name must match the `name` field in `SKILL.md`.

## SKILL.md Format

```markdown
---
name: my-skill
description: Short description of what this skill does and when to use it.
license: Apache-2.0                   # optional
compatibility: crewai>=0.1.0          # optional
metadata:                             # optional
  author: your-name
  version: "1.0"
allowed-tools: web-search file-read   # optional, space-delimited
---

Instructions for the agent go here. This markdown body is injected
into the agent's prompt when the skill is activated.
```

### Frontmatter Fields

| Field | Required | Constraints |
| :--- | :--- | :--- |
| `name` | Yes | 1–64 chars. Lowercase alphanumeric and hyphens. No leading/trailing/consecutive hyphens. Must match directory name. |
| `description` | Yes | 1–1024 chars. Describes what the skill does and when to use it. |
| `license` | No | License name or reference to a bundled license file. |
| `compatibility` | No | Max 500 chars. Environment requirements (products, packages, network). |
| `metadata` | No | Arbitrary string key-value mapping. |
| `allowed-tools` | No | Space-delimited list of pre-approved tools. Experimental. |

## Usage

### Agent-level Skills

Pass skill directory paths to an agent:

```python
from crewai import Agent

agent = Agent(
    role="Researcher",
    goal="Find relevant information",
    backstory="An expert researcher.",
    skills=["./skills"],  # discovers all skills in this directory
)
```

### Crew-level Skills

Skill paths on a crew are merged into every agent:

```python
from crewai import Crew

crew = Crew(
    agents=[agent],
    tasks=[task],
    skills=["./skills"],
)
```

### Pre-loaded Skills

You can also pass `Skill` objects directly:

```python
from pathlib import Path
from crewai.skills import discover_skills, activate_skill

skills = discover_skills(Path("./skills"))
activated = [activate_skill(s) for s in skills]

agent = Agent(
    role="Researcher",
    goal="Find relevant information",
    backstory="An expert researcher.",
    skills=activated,
)
```

## How Skills Are Loaded

Skills load progressively — only the data needed at each stage is read:

| Stage | What's loaded | When |
| :--- | :--- | :--- |
| Discovery | Name, description, frontmatter fields | `discover_skills()` |
| Activation | Full SKILL.md body text | `activate_skill()` |

During normal agent execution, skills are automatically discovered and activated. The `scripts/`, `references/`, and `assets/` directories are available on the skill's `path` for agents that need to reference files directly.
@@ -4,6 +4,38 @@ description: "Product updates, improvements, and bug fixes for CrewAI" (Portuguese)
icon: "clock"
mode: "wide"
---

<Update label="Mar 23, 2026">
## v1.11.1

[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/1.11.1)

## What's Changed

### Features
- Add the flow_structure() serializer for Flow class introspection.

### Bug Fixes
- Fix security vulnerabilities by updating pypdf, tinytag, and langchain-core.
- Preserve the full LLM config across HITL resume for non-OpenAI providers.
- Prevent path traversal in FileWriterTool.
- Fix the lock_store crash when the redis package is not installed.
- Pass cache_function from BaseTool to CrewStructuredTool.

### Documentation
- Add a publish custom tools guide with translations.
- Update changelog and version for v1.11.0.
- Add missing event listeners documentation.

### Refactoring
- Replace urllib with requests in the pdf loader.
- Replace Any-typed callback and model fields with serializable types.

## Contributors

@alex-clawd, @danielfsbarreto, @dependabot[bot], @greysonlalonde, @lorenzejay, @lucasgomide, @mattatcha, @theCyberTech, @vinibrsl

</Update>

<Update label="Mar 18, 2026">
## v1.11.0
docs/pt-BR/concepts/skills.mdx (new file, 114 lines)
@@ -0,0 +1,114 @@
---
title: Skills
description: Filesystem-based skill packages that inject context into agent prompts.
icon: bolt
mode: "wide"
---

## Overview

Skills are self-contained directories that provide agents with domain-specific instructions, references, and assets. Each skill is defined by a `SKILL.md` file with YAML frontmatter and a markdown body.

Skills use **progressive disclosure** — metadata is loaded first, full instructions only when activated, and resource catalogs only when needed.

## Directory Structure

```
my-skill/
├── SKILL.md       # Required — frontmatter + instructions
├── scripts/       # Optional — executable scripts
├── references/    # Optional — reference documents
└── assets/        # Optional — static files (configs, data)
```

The directory name must match the `name` field in `SKILL.md`.

## SKILL.md Format

```markdown
---
name: my-skill
description: Short description of what this skill does and when to use it.
license: Apache-2.0                   # optional
compatibility: crewai>=0.1.0          # optional
metadata:                             # optional
  author: your-name
  version: "1.0"
allowed-tools: web-search file-read   # optional, space-delimited
---

Instructions for the agent go here. This markdown body is injected
into the agent's prompt when the skill is activated.
```

### Frontmatter Fields

| Field | Required | Constraints |
| :--- | :--- | :--- |
| `name` | Yes | 1–64 chars. Lowercase alphanumeric and hyphens. No leading/trailing/consecutive hyphens. Must match the directory name. |
| `description` | Yes | 1–1024 chars. Describes what the skill does and when to use it. |
| `license` | No | License name or reference to a bundled license file. |
| `compatibility` | No | Max 500 chars. Environment requirements (products, packages, network). |
| `metadata` | No | Arbitrary string key-value mapping. |
| `allowed-tools` | No | Space-delimited list of pre-approved tools. Experimental. |

## Usage

### Agent-level Skills

Pass skill directory paths to an agent:

```python
from crewai import Agent

agent = Agent(
    role="Researcher",
    goal="Find relevant information",
    backstory="An expert researcher.",
    skills=["./skills"],  # discovers all skills in this directory
)
```

### Crew-level Skills

Skill paths on a crew are merged into every agent:

```python
from crewai import Crew

crew = Crew(
    agents=[agent],
    tasks=[task],
    skills=["./skills"],
)
```

### Pre-loaded Skills

You can also pass `Skill` objects directly:

```python
from pathlib import Path
from crewai.skills import discover_skills, activate_skill

skills = discover_skills(Path("./skills"))
activated = [activate_skill(s) for s in skills]

agent = Agent(
    role="Researcher",
    goal="Find relevant information",
    backstory="An expert researcher.",
    skills=activated,
)
```

## How Skills Are Loaded

Skills load progressively — only the data needed at each stage is read:

| Stage | What's loaded | When |
| :--- | :--- | :--- |
| Discovery | Name, description, frontmatter fields | `discover_skills()` |
| Activation | Full SKILL.md body text | `activate_skill()` |

During normal agent execution, skills are automatically discovered and activated. The `scripts/`, `references/`, and `assets/` directories are available on the skill's `path` for agents that need to reference files directly.
```diff
@@ -9,11 +9,11 @@ authors = [
 requires-python = ">=3.10, <3.14"
 dependencies = [
     "Pillow~=12.1.1",
-    "pypdf~=6.7.5",
+    "pypdf~=6.9.1",
     "python-magic>=0.4.27",
     "aiocache~=0.12.3",
     "aiofiles~=24.1.0",
-    "tinytag~=1.10.0",
+    "tinytag~=2.2.1",
     "av~=13.0.0",
 ]
```
```diff
@@ -152,4 +152,4 @@ __all__ = [
     "wrap_file_source",
 ]

-__version__ = "1.11.0"
+__version__ = "1.11.1"
```
```diff
@@ -11,7 +11,7 @@ dependencies = [
     "pytube~=15.0.0",
     "requests~=2.32.5",
     "docker~=7.1.0",
-    "crewai==1.11.0",
+    "crewai==1.11.1",
     "tiktoken~=0.8.0",
     "beautifulsoup4~=4.13.4",
     "python-docx~=1.2.0",
```
```diff
@@ -309,4 +309,4 @@ __all__ = [
     "ZapierActionTools",
 ]

-__version__ = "1.11.0"
+__version__ = "1.11.1"
```
```diff
@@ -1,10 +1,12 @@
 """PDF loader for extracting text from PDF files."""

 import os
+import tempfile
 from pathlib import Path
-from typing import Any, cast
+from typing import Any
 from urllib.parse import urlparse
-import urllib.request
+
+import requests

 from crewai_tools.rag.base_loader import BaseLoader, LoaderResult
 from crewai_tools.rag.source_content import SourceContent
@@ -23,22 +25,34 @@ class PDFLoader(BaseLoader):
         return False

     @staticmethod
-    def _download_pdf(url: str) -> bytes:
-        """Download PDF content from a URL.
+    def _download_from_url(url: str, kwargs: dict) -> str:
+        """Download PDF from a URL to a temporary file and return its path.

         Args:
             url: The URL to download from.
+            kwargs: Optional dict that may contain custom headers.

         Returns:
-            The PDF content as bytes.
+            Path to the temporary file containing the PDF.

         Raises:
             ValueError: If the download fails.
         """
+        headers = kwargs.get(
+            "headers",
+            {
+                "Accept": "application/pdf",
+                "User-Agent": "Mozilla/5.0 (compatible; crewai-tools PDFLoader)",
+            },
+        )
+
         try:
-            with urllib.request.urlopen(url, timeout=30) as response:  # noqa: S310
-                return cast(bytes, response.read())
+            response = requests.get(url, headers=headers, timeout=30)
+            response.raise_for_status()
+
+            with tempfile.NamedTemporaryFile(suffix=".pdf", delete=False) as temp_file:
+                temp_file.write(response.content)
+                return temp_file.name
         except Exception as e:
             raise ValueError(f"Failed to download PDF from {url}: {e!s}") from e
@@ -80,8 +94,8 @@ class PDFLoader(BaseLoader):

         try:
             if is_url:
-                pdf_bytes = self._download_pdf(file_path)
-                doc = pymupdf.open(stream=pdf_bytes, filetype="pdf")
+                local_path = self._download_from_url(file_path, kwargs)
+                doc = pymupdf.open(local_path)
             else:
                 if not os.path.isfile(file_path):
                     raise FileNotFoundError(f"PDF file not found: {file_path}")
```
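The header-defaulting behavior in `_download_from_url` can be exercised in isolation without any network access (a sketch mirroring the kwargs handling above; `resolve_headers` is a hypothetical name, not part of the loader):

```python
# Defaults taken from the loader diff above.
DEFAULT_HEADERS = {
    "Accept": "application/pdf",
    "User-Agent": "Mozilla/5.0 (compatible; crewai-tools PDFLoader)",
}


def resolve_headers(kwargs: dict) -> dict:
    """Return caller-supplied headers if present, else the loader's defaults."""
    return kwargs.get("headers", DEFAULT_HEADERS)
```

Note that a caller-supplied `headers` dict replaces the defaults entirely rather than merging with them, so callers who override should include `Accept`/`User-Agent` themselves if they still want them.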
```diff
@@ -42,6 +42,7 @@ dependencies = [
     "mcp~=1.26.0",
     "uv~=0.9.13",
     "aiosqlite~=0.21.0",
     "pyyaml~=6.0",
+    "lancedb>=0.29.2",
 ]

@@ -53,7 +54,7 @@ Repository = "https://github.com/crewAIInc/crewAI"

 [project.optional-dependencies]
 tools = [
-    "crewai-tools==1.11.0",
+    "crewai-tools==1.11.1",
 ]
 embeddings = [
     "tiktoken~=0.8.0"
```
```diff
@@ -42,7 +42,7 @@ def _suppress_pydantic_deprecation_warnings() -> None:

 _suppress_pydantic_deprecation_warnings()

-__version__ = "1.11.0"
+__version__ = "1.11.1"
 _telemetry_submitted = False
```
```diff
@@ -3,6 +3,7 @@ from __future__ import annotations
 import asyncio
 from collections.abc import Callable, Coroutine, Sequence
+import contextvars
 from pathlib import Path
 import shutil
 import subprocess
 import time
@@ -26,6 +27,7 @@ from typing_extensions import Self
 from crewai.agent.planning_config import PlanningConfig
 from crewai.agent.utils import (
     ahandle_knowledge_retrieval,
+    append_skill_context,
     apply_training_data,
     build_task_prompt_with_schema,
     format_task_with_context,
@@ -65,6 +67,8 @@ from crewai.mcp import MCPServerConfig
 from crewai.mcp.tool_resolver import MCPToolResolver
 from crewai.rag.embeddings.types import EmbedderConfig
 from crewai.security.fingerprint import Fingerprint
+from crewai.skills.loader import activate_skill, discover_skills
+from crewai.skills.models import INSTRUCTIONS, Skill as SkillModel
 from crewai.tools.agent_tools.agent_tools import AgentTools
 from crewai.types.callback import SerializableCallable
 from crewai.utilities.agent_utils import (
@@ -278,6 +282,8 @@ class Agent(BaseAgent):
         if self.allow_code_execution:
             self._validate_docker_installation()

+        self.set_skills()
+
         # Handle backward compatibility: convert reasoning=True to planning_config
         if self.reasoning and self.planning_config is None:
             import warnings
@@ -321,6 +327,76 @@ class Agent(BaseAgent):
         except (TypeError, ValueError) as e:
             raise ValueError(f"Invalid Knowledge Configuration: {e!s}") from e

+    def set_skills(
+        self,
+        resolved_crew_skills: list[SkillModel] | None = None,
+    ) -> None:
+        """Resolve skill paths and activate skills to INSTRUCTIONS level.
+
+        Path entries trigger discovery and activation. Pre-loaded Skill objects
+        below INSTRUCTIONS level are activated. Crew-level skills are merged in
+        with event emission so observability is consistent regardless of origin.
+
+        Args:
+            resolved_crew_skills: Pre-resolved crew skills (already discovered
+                and activated). When provided, avoids redundant discovery per agent.
+        """
+        from crewai.crew import Crew
+        from crewai.events.event_bus import crewai_event_bus
+        from crewai.events.types.skill_events import SkillActivatedEvent
+
+        if resolved_crew_skills is None:
+            crew_skills: list[Path | SkillModel] | None = (
+                self.crew.skills
+                if isinstance(self.crew, Crew) and isinstance(self.crew.skills, list)
+                else None
+            )
+        else:
+            crew_skills = list(resolved_crew_skills)
+
+        if not self.skills and not crew_skills:
+            return
+
+        needs_work = self.skills and any(
+            isinstance(s, Path)
+            or (isinstance(s, SkillModel) and s.disclosure_level < INSTRUCTIONS)
+            for s in self.skills
+        )
+        if not needs_work and not crew_skills:
+            return
+
+        seen: set[str] = set()
+        resolved: list[Path | SkillModel] = []
+        items: list[Path | SkillModel] = list(self.skills) if self.skills else []
+
+        if crew_skills:
+            items.extend(crew_skills)
+
+        for item in items:
+            if isinstance(item, Path):
+                discovered = discover_skills(item, source=self)
+                for skill in discovered:
+                    if skill.name not in seen:
+                        seen.add(skill.name)
+                        resolved.append(activate_skill(skill, source=self))
+            elif isinstance(item, SkillModel):
+                if item.name not in seen:
+                    seen.add(item.name)
+                    activated = activate_skill(item, source=self)
+                    if activated is item and item.disclosure_level >= INSTRUCTIONS:
+                        crewai_event_bus.emit(
+                            self,
+                            event=SkillActivatedEvent(
+                                from_agent=self,
+                                skill_name=item.name,
+                                skill_path=item.path,
+                                disclosure_level=item.disclosure_level,
+                            ),
+                        )
+                    resolved.append(activated)
+
+        self.skills = resolved if resolved else None
+
     def _is_any_available_memory(self) -> bool:
         """Check if unified memory is available (agent or crew)."""
         if getattr(self, "memory", None):
```
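The first-wins deduplication inside `set_skills` (agent-level entries are processed before crew-level ones, so they win on name collisions) can be sketched independently of the Skill model; plain dicts stand in for skills here:

```python
def dedupe_by_name(agent_skills: list, crew_skills: list) -> list:
    """Keep the first skill seen for each name. Because agent skills are
    iterated before crew skills, an agent skill shadows a crew skill with
    the same name (mirrors the `seen` set in set_skills above)."""
    seen: set[str] = set()
    resolved = []
    for skill in [*agent_skills, *crew_skills]:
        if skill["name"] not in seen:
            seen.add(skill["name"])
            resolved.append(skill)
    return resolved
```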
```diff
@@ -442,6 +518,8 @@ class Agent(BaseAgent):
             self.crew.query_knowledge if self.crew else lambda *a, **k: None,
         )

+        task_prompt = append_skill_context(self, task_prompt)
+
         prepare_tools(self, tools, task)
         task_prompt = apply_training_data(self, task_prompt)

@@ -682,6 +760,8 @@ class Agent(BaseAgent):
             self, task, task_prompt, knowledge_config
         )

+        task_prompt = append_skill_context(self, task_prompt)
+
         prepare_tools(self, tools, task)
         task_prompt = apply_training_data(self, task_prompt)

@@ -1343,6 +1423,8 @@ class Agent(BaseAgent):
             ),
         )

+        formatted_messages = append_skill_context(self, formatted_messages)
+
         # Build the input dict for the executor
         inputs: dict[str, Any] = {
             "input": formatted_messages,
```
```diff
@@ -210,6 +210,30 @@ def _combine_knowledge_context(agent: Agent) -> str:
     return agent_ctx + separator + crew_ctx


+def append_skill_context(agent: Agent, task_prompt: str) -> str:
+    """Append activated skill context sections to the task prompt.
+
+    Args:
+        agent: The agent with optional skills.
+        task_prompt: The current task prompt.
+
+    Returns:
+        The task prompt with skill context appended.
+    """
+    if not agent.skills:
+        return task_prompt
+
+    from crewai.skills.loader import format_skill_context
+    from crewai.skills.models import Skill
+
+    skill_sections = [
+        format_skill_context(s) for s in agent.skills if isinstance(s, Skill)
+    ]
+    if skill_sections:
+        task_prompt += "\n\n" + "\n\n".join(skill_sections)
+    return task_prompt
+
+
 def apply_training_data(agent: Agent, task_prompt: str) -> str:
     """Apply training data to the task prompt.
```
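The joining behavior of `append_skill_context` (sections separated by blank lines and appended after the prompt, with the prompt returned unchanged when there is nothing to add) reduces to plain string manipulation:

```python
def append_sections(task_prompt: str, sections: list[str]) -> str:
    """Mirror append_skill_context's concatenation: each section is separated
    by a blank line and the whole block is appended after the prompt."""
    if not sections:
        return task_prompt
    return task_prompt + "\n\n" + "\n\n".join(sections)
```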
```diff
@@ -3,6 +3,7 @@ from __future__ import annotations
 from abc import ABC, abstractmethod
 from copy import copy as shallow_copy
 from hashlib import md5
+from pathlib import Path
 import re
 from typing import Any, Final, Literal
 import uuid
@@ -32,6 +33,7 @@ from crewai.memory.memory_scope import MemoryScope, MemorySlice
 from crewai.memory.unified_memory import Memory
 from crewai.rag.embeddings.types import EmbedderConfig
 from crewai.security.security_config import SecurityConfig
+from crewai.skills.models import Skill
 from crewai.tools.base_tool import BaseTool, Tool
 from crewai.types.callback import SerializableCallable
 from crewai.utilities.config import process_config
@@ -217,6 +219,11 @@ class BaseAgent(BaseModel, ABC, metaclass=AgentMeta):
             "If not set, falls back to crew memory."
         ),
     )
+    skills: list[Path | Skill] | None = Field(
+        default=None,
+        description="Agent Skills. Accepts paths for discovery or pre-loaded Skill objects.",
+        min_length=1,
+    )

     @model_validator(mode="before")
     @classmethod
@@ -500,3 +507,6 @@ class BaseAgent(BaseModel, ABC, metaclass=AgentMeta):

     def set_knowledge(self, crew_embedder: EmbedderConfig | None = None) -> None:
         pass
+
+    def set_skills(self, resolved_crew_skills: list[Any] | None = None) -> None:
+        pass
```
@@ -3,6 +3,7 @@ from __future__ import annotations
from typing import TYPE_CHECKING

from crewai.agents.parser import AgentFinish
from crewai.memory.utils import sanitize_scope_name
from crewai.utilities.printer import Printer
from crewai.utilities.string_utils import sanitize_tool_name

@@ -26,7 +27,12 @@ class CrewAgentExecutorMixin:
    _printer: Printer = Printer()

    def _save_to_memory(self, output: AgentFinish) -> None:
        """Save task result to unified memory (memory or crew._memory)."""
        """Save task result to unified memory (memory or crew._memory).

        Extends the memory's root_scope with agent-specific path segment
        (e.g., '/crew/research-crew/agent/researcher') so that agent memories
        are scoped hierarchically under their crew.
        """
        memory = getattr(self.agent, "memory", None) or (
            getattr(self.crew, "_memory", None) if self.crew else None
        )
@@ -43,6 +49,21 @@ class CrewAgentExecutorMixin:
            )
            extracted = memory.extract_memories(raw)
            if extracted:
                memory.remember_many(extracted, agent_role=self.agent.role)
                # Get the memory's existing root_scope
                base_root = getattr(memory, "root_scope", None)

                if isinstance(base_root, str) and base_root:
                    # Memory has a root_scope — extend it with agent info
                    agent_role = self.agent.role or "unknown"
                    sanitized_role = sanitize_scope_name(agent_role)
                    agent_root = f"{base_root.rstrip('/')}/agent/{sanitized_role}"
                    if not agent_root.startswith("/"):
                        agent_root = "/" + agent_root
                    memory.remember_many(
                        extracted, agent_role=self.agent.role, root_scope=agent_root
                    )
                else:
                    # No base root_scope — don't inject one, preserve backward compat
                    memory.remember_many(extracted, agent_role=self.agent.role)
        except Exception as e:
            self.agent._logger.log("error", f"Failed to save to memory: {e}")
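The hunk above derives an agent-level scope by extending the crew's root scope. A minimal standalone sketch of that path logic, with `sanitize_scope_name` re-implemented as a simple slug helper (the real one lives in `crewai.memory.utils`, so this is an assumption about its behavior):

```python
from __future__ import annotations

import re


def sanitize_scope_name(name: str) -> str:
    # Hypothetical stand-in: lowercase, runs of non-alphanumerics become hyphens.
    return re.sub(r"[^a-z0-9]+", "-", name.lower()).strip("-") or "unknown"


def agent_scope(base_root: str | None, agent_role: str | None) -> str | None:
    """Mirror the diff's branching: extend root_scope only when one exists."""
    if not (isinstance(base_root, str) and base_root):
        return None  # no base root_scope: don't inject one (backward compat)
    role = sanitize_scope_name(agent_role or "unknown")
    scope = f"{base_root.rstrip('/')}/agent/{role}"
    return scope if scope.startswith("/") else "/" + scope


print(agent_scope("/crew/research-crew", "Senior Researcher"))
# /crew/research-crew/agent/senior-researcher
```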
@@ -22,6 +22,7 @@ from crewai.cli.replay_from_task import replay_task_command
from crewai.cli.reset_memories_command import reset_memories_command
from crewai.cli.run_crew import run_crew
from crewai.cli.settings.main import SettingsCommand
from crewai.cli.shared.token_manager import TokenManager
from crewai.cli.tools.main import ToolCommand
from crewai.cli.train_crew import train_crew
from crewai.cli.triggers.main import TriggersCommand
@@ -34,7 +35,7 @@ from crewai.memory.storage.kickoff_task_outputs_storage import (

@click.group()
@click.version_option(get_version("crewai"))
def crewai():
def crewai() -> None:
    """Top-level command group for crewai."""


@@ -45,7 +46,7 @@ def crewai():
    ),
)
@click.argument("uv_args", nargs=-1, type=click.UNPROCESSED)
def uv(uv_args):
def uv(uv_args: tuple[str, ...]) -> None:
    """A wrapper around uv commands that adds custom tool authentication through env vars."""
    env = os.environ.copy()
    try:
@@ -83,7 +84,9 @@ def uv(uv_args):
@click.argument("name")
@click.option("--provider", type=str, help="The provider to use for the crew")
@click.option("--skip_provider", is_flag=True, help="Skip provider validation")
def create(type, name, provider, skip_provider=False):
def create(
    type: str, name: str, provider: str | None, skip_provider: bool = False
) -> None:
    """Create a new crew, or flow."""
    if type == "crew":
        create_crew(name, provider, skip_provider)
@@ -97,7 +100,7 @@ def create(type, name, provider, skip_provider=False):
@click.option(
    "--tools", is_flag=True, help="Show the installed version of crewai tools"
)
def version(tools):
def version(tools: bool) -> None:
    """Show the installed version of crewai."""
    try:
        crewai_version = get_version("crewai")
@@ -128,7 +131,7 @@ def version(tools):
    default="trained_agents_data.pkl",
    help="Path to a custom file for training",
)
def train(n_iterations: int, filename: str):
def train(n_iterations: int, filename: str) -> None:
    """Train the crew."""
    click.echo(f"Training the Crew for {n_iterations} iterations")
    train_crew(n_iterations, filename)
@@ -334,7 +337,7 @@ def memory(
    default="gpt-4o-mini",
    help="LLM Model to run the tests on the Crew. For now only accepting only OpenAI models.",
)
def test(n_iterations: int, model: str):
def test(n_iterations: int, model: str) -> None:
    """Test the crew and evaluate the results."""
    click.echo(f"Testing the crew for {n_iterations} iterations with model {model}")
    evaluate_crew(n_iterations, model)
@@ -347,46 +350,62 @@ def test(n_iterations: int, model: str):
    )
)
@click.pass_context
def install(context):
def install(context: click.Context) -> None:
    """Install the Crew."""
    install_crew(context.args)


@crewai.command()
def run():
def run() -> None:
    """Run the Crew."""
    run_crew()


@crewai.command()
def update():
def update() -> None:
    """Update the pyproject.toml of the Crew project to use uv."""
    update_crew()


@crewai.command()
def login():
def login() -> None:
    """Sign Up/Login to CrewAI AMP."""
    Settings().clear_user_settings()
    AuthenticationCommand().login()


@crewai.command()
@click.option(
    "--reset", is_flag=True, help="Also reset all CLI configuration to defaults"
)
def logout(reset: bool) -> None:
    """Logout from CrewAI AMP."""
    settings = Settings()
    if reset:
        settings.reset()
        click.echo("Successfully logged out and reset all CLI configuration.")
    else:
        TokenManager().clear_tokens()
        settings.clear_user_settings()
        click.echo("Successfully logged out from CrewAI AMP.")


# DEPLOY CREWAI+ COMMANDS
@crewai.group()
def deploy():
def deploy() -> None:
    """Deploy the Crew CLI group."""


@deploy.command(name="create")
@click.option("-y", "--yes", is_flag=True, help="Skip the confirmation prompt")
def deploy_create(yes: bool):
def deploy_create(yes: bool) -> None:
    """Create a Crew deployment."""
    deploy_cmd = DeployCommand()
    deploy_cmd.create_crew(yes)


@deploy.command(name="list")
def deploy_list():
def deploy_list() -> None:
    """List all deployments."""
    deploy_cmd = DeployCommand()
    deploy_cmd.list_crews()
@@ -394,7 +413,7 @@ def deploy_list():

@deploy.command(name="push")
@click.option("-u", "--uuid", type=str, help="Crew UUID parameter")
def deploy_push(uuid: str | None):
def deploy_push(uuid: str | None) -> None:
    """Deploy the Crew."""
    deploy_cmd = DeployCommand()
    deploy_cmd.deploy(uuid=uuid)
@@ -402,7 +421,7 @@ def deploy_push(uuid: str | None):

@deploy.command(name="status")
@click.option("-u", "--uuid", type=str, help="Crew UUID parameter")
def deply_status(uuid: str | None):
def deply_status(uuid: str | None) -> None:
    """Get the status of a deployment."""
    deploy_cmd = DeployCommand()
    deploy_cmd.get_crew_status(uuid=uuid)
@@ -410,7 +429,7 @@ def deply_status(uuid: str | None):

@deploy.command(name="logs")
@click.option("-u", "--uuid", type=str, help="Crew UUID parameter")
def deploy_logs(uuid: str | None):
def deploy_logs(uuid: str | None) -> None:
    """Get the logs of a deployment."""
    deploy_cmd = DeployCommand()
    deploy_cmd.get_crew_logs(uuid=uuid)
@@ -418,27 +437,27 @@ def deploy_logs(uuid: str | None):

@deploy.command(name="remove")
@click.option("-u", "--uuid", type=str, help="Crew UUID parameter")
def deploy_remove(uuid: str | None):
def deploy_remove(uuid: str | None) -> None:
    """Remove a deployment."""
    deploy_cmd = DeployCommand()
    deploy_cmd.remove_crew(uuid=uuid)


@crewai.group()
def tool():
def tool() -> None:
    """Tool Repository related commands."""


@tool.command(name="create")
@click.argument("handle")
def tool_create(handle: str):
def tool_create(handle: str) -> None:
    tool_cmd = ToolCommand()
    tool_cmd.create(handle)


@tool.command(name="install")
@click.argument("handle")
def tool_install(handle: str):
def tool_install(handle: str) -> None:
    tool_cmd = ToolCommand()
    tool_cmd.login()
    tool_cmd.install(handle)
@@ -454,26 +473,26 @@ def tool_install(handle: str):
    )
@click.option("--public", "is_public", flag_value=True, default=False)
@click.option("--private", "is_public", flag_value=False)
def tool_publish(is_public: bool, force: bool):
def tool_publish(is_public: bool, force: bool) -> None:
    tool_cmd = ToolCommand()
    tool_cmd.login()
    tool_cmd.publish(is_public, force)


@crewai.group()
def flow():
def flow() -> None:
    """Flow related commands."""


@flow.command(name="kickoff")
def flow_run():
def flow_run() -> None:
    """Kickoff the Flow."""
    click.echo("Running the Flow")
    kickoff_flow()


@flow.command(name="plot")
def flow_plot():
def flow_plot() -> None:
    """Plot the Flow."""
    click.echo("Plotting the Flow")
    plot_flow()
@@ -481,19 +500,19 @@ def flow_plot():

@flow.command(name="add-crew")
@click.argument("crew_name")
def flow_add_crew(crew_name):
def flow_add_crew(crew_name: str) -> None:
    """Add a crew to an existing flow."""
    click.echo(f"Adding crew {crew_name} to the flow")
    add_crew_to_flow(crew_name)


@crewai.group()
def triggers():
def triggers() -> None:
    """Trigger related commands. Use 'crewai triggers list' to see available triggers, or 'crewai triggers run app_slug/trigger_slug' to execute."""


@triggers.command(name="list")
def triggers_list():
def triggers_list() -> None:
    """List all available triggers from integrations."""
    triggers_cmd = TriggersCommand()
    triggers_cmd.list_triggers()
@@ -501,14 +520,14 @@ def triggers_list():

@triggers.command(name="run")
@click.argument("trigger_path")
def triggers_run(trigger_path: str):
def triggers_run(trigger_path: str) -> None:
    """Execute crew with trigger payload. Format: app_slug/trigger_slug"""
    triggers_cmd = TriggersCommand()
    triggers_cmd.execute_with_trigger(trigger_path)


@crewai.command()
def chat():
def chat() -> None:
    """
    Start a conversation with the Crew, collecting user-supplied inputs,
    and using the Chat LLM to generate responses.
@@ -521,12 +540,12 @@ def chat():


@crewai.group(invoke_without_command=True)
def org():
def org() -> None:
    """Organization management commands."""


@org.command("list")
def org_list():
def org_list() -> None:
    """List available organizations."""
    org_command = OrganizationCommand()
    org_command.list()
@@ -534,39 +553,39 @@ def org_list():

@org.command()
@click.argument("id")
def switch(id):
def switch(id: str) -> None:
    """Switch to a specific organization."""
    org_command = OrganizationCommand()
    org_command.switch(id)


@org.command()
def current():
def current() -> None:
    """Show current organization when 'crewai org' is called without subcommands."""
    org_command = OrganizationCommand()
    org_command.current()


@crewai.group()
def enterprise():
def enterprise() -> None:
    """Enterprise Configuration commands."""


@enterprise.command("configure")
@click.argument("enterprise_url")
def enterprise_configure(enterprise_url: str):
def enterprise_configure(enterprise_url: str) -> None:
    """Configure CrewAI AMP OAuth2 settings from the provided Enterprise URL."""
    enterprise_command = EnterpriseConfigureCommand()
    enterprise_command.configure(enterprise_url)


@crewai.group()
def config():
def config() -> None:
    """CLI Configuration commands."""


@config.command("list")
def config_list():
def config_list() -> None:
    """List all CLI configuration parameters."""
    config_command = SettingsCommand()
    config_command.list()
@@ -575,26 +594,26 @@ def config_list():

@config.command("set")
@click.argument("key")
@click.argument("value")
def config_set(key: str, value: str):
def config_set(key: str, value: str) -> None:
    """Set a CLI configuration parameter."""
    config_command = SettingsCommand()
    config_command.set(key, value)


@config.command("reset")
def config_reset():
def config_reset() -> None:
    """Reset all CLI configuration parameters to default values."""
    config_command = SettingsCommand()
    config_command.reset_all_settings()


@crewai.group()
def env():
def env() -> None:
    """Environment variable commands."""


@env.command("view")
def env_view():
def env_view() -> None:
    """View tracing-related environment variables."""
    import os
    from pathlib import Path
@@ -672,12 +691,12 @@ def env_view():


@crewai.group()
def traces():
def traces() -> None:
    """Trace collection management commands."""


@traces.command("enable")
def traces_enable():
def traces_enable() -> None:
    """Enable trace collection for crew/flow executions."""
    from rich.console import Console
    from rich.panel import Panel
@@ -700,7 +719,7 @@ def traces_enable():


@traces.command("disable")
def traces_disable():
def traces_disable() -> None:
    """Disable trace collection for crew/flow executions."""
    from rich.console import Console
    from rich.panel import Panel
@@ -723,7 +742,7 @@ def traces_disable():


@traces.command("status")
def traces_status():
def traces_status() -> None:
    """Show current trace collection status."""
    import os
@@ -6,7 +6,7 @@ import click
from crewai.telemetry import Telemetry


def create_flow(name):
def create_flow(name: str) -> None:
    """Create a new flow."""
    folder_name = name.replace(" ", "_").replace("-", "_").lower()
    class_name = name.replace("_", " ").replace("-", " ").title().replace(" ", "")
@@ -49,7 +49,7 @@ def create_flow(name):
    "poem_crew",
]

def process_file(src_file, dst_file):
def process_file(src_file: Path, dst_file: Path) -> None:
    if src_file.suffix in [".pyc", ".pyo", ".pyd"]:
        return
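The naming transforms in `create_flow` are pure string manipulation and can be checked in isolation:

```python
def flow_names(name: str) -> tuple[str, str]:
    """Folder and class names as derived in create_flow above."""
    folder_name = name.replace(" ", "_").replace("-", "_").lower()
    class_name = name.replace("_", " ").replace("-", " ").title().replace(" ", "")
    return folder_name, class_name


print(flow_names("my-demo flow"))  # ('my_demo_flow', 'MyDemoFlow')
```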
@@ -15,7 +15,7 @@ class DeployCommand(BaseCommand, PlusAPIMixin):
    A class to handle deployment-related operations for CrewAI projects.
    """

    def __init__(self):
    def __init__(self) -> None:
        """
        Initialize the DeployCommand with project name and API client.
        """
@@ -67,7 +67,7 @@ class DeployCommand(BaseCommand, PlusAPIMixin):
        Args:
            uuid (Optional[str]): The UUID of the crew to deploy.
        """
        self._start_deployment_span = self._telemetry.start_deployment_span(uuid)
        self._telemetry.start_deployment_span(uuid)
        console.print("Starting deployment...", style="bold blue")
        if uuid:
            response = self.plus_api_client.deploy_by_uuid(uuid)
@@ -84,9 +84,7 @@ class DeployCommand(BaseCommand, PlusAPIMixin):
        """
        Create a new crew deployment.
        """
        self._create_crew_deployment_span = (
            self._telemetry.create_crew_deployment_span()
        )
        self._telemetry.create_crew_deployment_span()
        console.print("Creating deployment...", style="bold blue")
        env_vars = fetch_and_json_env_file()

@@ -236,7 +234,7 @@ class DeployCommand(BaseCommand, PlusAPIMixin):
            uuid (Optional[str]): The UUID of the crew to get logs for.
            log_type (str): The type of logs to retrieve (default: "deployment").
        """
        self._get_crew_logs_span = self._telemetry.get_crew_logs_span(uuid, log_type)
        self._telemetry.get_crew_logs_span(uuid, log_type)
        console.print(f"Fetching {log_type} logs...", style="bold blue")

        if uuid:
@@ -257,7 +255,7 @@ class DeployCommand(BaseCommand, PlusAPIMixin):
        Args:
            uuid (Optional[str]): The UUID of the crew to remove.
        """
        self._remove_crew_span = self._telemetry.remove_crew_span(uuid)
        self._telemetry.remove_crew_span(uuid)
        console.print("Removing deployment...", style="bold blue")

        if uuid:
@@ -5,7 +5,7 @@ description = "{{name}} using crewAI"
authors = [{ name = "Your Name", email = "you@example.com" }]
requires-python = ">=3.10,<3.14"
dependencies = [
    "crewai[tools]==1.11.0"
    "crewai[tools]==1.11.1"
]

[project.scripts]

@@ -5,7 +5,7 @@ description = "{{name}} using crewAI"
authors = [{ name = "Your Name", email = "you@example.com" }]
requires-python = ">=3.10,<3.14"
dependencies = [
    "crewai[tools]==1.11.0"
    "crewai[tools]==1.11.1"
]

[project.scripts]

@@ -5,7 +5,7 @@ description = "Power up your crews with {{folder_name}}"
readme = "README.md"
requires-python = ">=3.10,<3.14"
dependencies = [
    "crewai[tools]==1.11.0"
    "crewai[tools]==1.11.1"
]

[tool.crewai]
@@ -16,7 +16,7 @@ class TriggersCommand(BaseCommand, PlusAPIMixin):
    A class to handle trigger-related operations for CrewAI projects.
    """

    def __init__(self):
    def __init__(self) -> None:
        BaseCommand.__init__(self)
        PlusAPIMixin.__init__(self, telemetry=self._telemetry)
@@ -6,6 +6,7 @@ from concurrent.futures import Future
from copy import copy as shallow_copy
from hashlib import md5
import json
from pathlib import Path
import re
from typing import (
    TYPE_CHECKING,
@@ -91,6 +92,7 @@ from crewai.rag.embeddings.types import EmbedderConfig
from crewai.rag.types import SearchResult
from crewai.security.fingerprint import Fingerprint
from crewai.security.security_config import SecurityConfig
from crewai.skills.models import Skill
from crewai.task import Task
from crewai.tasks.conditional_task import ConditionalTask
from crewai.tasks.task_output import TaskOutput
@@ -294,6 +296,11 @@ class Crew(FlowTrackable, BaseModel):
        default=None,
        description="Knowledge for the crew.",
    )
    skills: list[Path | Skill] | None = Field(
        default=None,
        description="Skill search paths or pre-loaded Skill objects applied to all agents in the crew.",
    )

    security_config: SecurityConfig = Field(
        default_factory=SecurityConfig,
        description="Security configuration for the crew, including fingerprinting.",
@@ -357,7 +364,18 @@ class Crew(FlowTrackable, BaseModel):

    @model_validator(mode="after")
    def create_crew_memory(self) -> Crew:
        """Initialize unified memory, respecting crew embedder config."""
        """Initialize unified memory, respecting crew embedder config.

        When memory is enabled, sets a hierarchical root_scope based on the
        crew name (e.g. '/crew/research-crew') so that all memories saved by
        this crew and its agents are organized under a consistent namespace.
        """
        from crewai.memory.utils import sanitize_scope_name

        # Compute sanitized crew name for root_scope
        crew_name = sanitize_scope_name(self.name or "crew")
        crew_root_scope = f"/crew/{crew_name}"

        if self.memory is True:
            from crewai.memory.unified_memory import Memory

@@ -365,10 +383,11 @@ class Crew(FlowTrackable, BaseModel):
            if self.embedder is not None:
                from crewai.rag.embeddings.factory import build_embedder

                embedder = build_embedder(self.embedder)  # type: ignore[arg-type]
                self._memory = Memory(embedder=embedder)
                embedder = build_embedder(cast(dict[str, Any], self.embedder))
                self._memory = Memory(embedder=embedder, root_scope=crew_root_scope)
        elif self.memory:
            # User passed a Memory / MemoryScope / MemorySlice instance
            # Respect user's configuration — don't auto-set root_scope
            self._memory = self.memory
        else:
            self._memory = None
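The validator above branches three ways on the `memory` field. A standalone sketch of that branching, with a stub `Memory` class standing in for `crewai.memory.unified_memory.Memory` (an assumption for illustration only):

```python
from __future__ import annotations

from dataclasses import dataclass


@dataclass
class Memory:
    # Stub standing in for crewai.memory.unified_memory.Memory.
    root_scope: str | None = None


def init_crew_memory(memory: bool | Memory, crew_name: str) -> Memory | None:
    """memory=True builds a scoped Memory; an instance is kept as-is; falsy disables."""
    if memory is True:
        return Memory(root_scope=f"/crew/{crew_name}")
    if memory:
        return memory  # user-provided instance: respect its configuration
    return None


print(init_crew_memory(True, "research-crew"))
```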
@@ -4,6 +4,7 @@ from __future__ import annotations

import asyncio
from collections.abc import Callable, Coroutine, Iterable, Mapping
from pathlib import Path
from typing import TYPE_CHECKING, Any

from opentelemetry import baggage
@@ -11,6 +12,8 @@ from opentelemetry import baggage
from crewai.agents.agent_builder.base_agent import BaseAgent
from crewai.crews.crew_output import CrewOutput
from crewai.rag.embeddings.types import EmbedderConfig
from crewai.skills.loader import activate_skill, discover_skills
from crewai.skills.models import INSTRUCTIONS, Skill as SkillModel
from crewai.types.streaming import CrewStreamingOutput, FlowStreamingOutput
from crewai.utilities.file_store import store_files
from crewai.utilities.streaming import (
@@ -51,6 +54,30 @@ def enable_agent_streaming(agents: Iterable[BaseAgent]) -> None:
        agent.llm.stream = True


def _resolve_crew_skills(crew: Crew) -> list[SkillModel] | None:
    """Resolve crew-level skill paths once so agents don't repeat the work."""
    if not isinstance(crew.skills, list) or not crew.skills:
        return None

    resolved: list[SkillModel] = []
    seen: set[str] = set()
    for item in crew.skills:
        if isinstance(item, Path):
            for skill in discover_skills(item):
                if skill.name not in seen:
                    seen.add(skill.name)
                    resolved.append(activate_skill(skill))
        elif isinstance(item, SkillModel):
            if item.name not in seen:
                seen.add(item.name)
                resolved.append(
                    activate_skill(item)
                    if item.disclosure_level < INSTRUCTIONS
                    else item
                )
    return resolved


def setup_agents(
    crew: Crew,
    agents: Iterable[BaseAgent],
@@ -67,9 +94,12 @@ def setup_agents(
        function_calling_llm: Default function calling LLM for agents.
        step_callback: Default step callback for agents.
    """
    resolved_crew_skills = _resolve_crew_skills(crew)

    for agent in agents:
        agent.crew = crew
        agent.set_knowledge(crew_embedder=embedder)
        agent.set_skills(resolved_crew_skills=resolved_crew_skills)
        if not agent.function_calling_llm:  # type: ignore[attr-defined]
            agent.function_calling_llm = function_calling_llm  # type: ignore[attr-defined]
        if not agent.step_callback:  # type: ignore[attr-defined]
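The dedupe-by-name pass in `_resolve_crew_skills` reduces to a first-wins filter over skill names. A minimal sketch with a stand-in `Skill` record (the real model lives in `crewai.skills.models`):

```python
from dataclasses import dataclass


@dataclass
class Skill:
    # Stand-in for crewai.skills.models.Skill; only the name matters here.
    name: str


def dedupe_skills(items: list[Skill]) -> list[Skill]:
    """Keep the first occurrence of each skill name, preserving order."""
    seen: set[str] = set()
    resolved: list[Skill] = []
    for skill in items:
        if skill.name not in seen:
            seen.add(skill.name)
            resolved.append(skill)
    return resolved


names = [s.name for s in dedupe_skills([Skill("web"), Skill("pdf"), Skill("web")])]
print(names)  # ['web', 'pdf']
```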
@@ -88,6 +88,14 @@ from crewai.events.types.reasoning_events import (
    AgentReasoningStartedEvent,
    ReasoningEvent,
)
from crewai.events.types.skill_events import (
    SkillActivatedEvent,
    SkillDiscoveryCompletedEvent,
    SkillDiscoveryStartedEvent,
    SkillEvent,
    SkillLoadFailedEvent,
    SkillLoadedEvent,
)
from crewai.events.types.task_events import (
    TaskCompletedEvent,
    TaskEvaluationEvent,
@@ -186,6 +194,12 @@ __all__ = [
    "MethodExecutionFinishedEvent",
    "MethodExecutionStartedEvent",
    "ReasoningEvent",
    "SkillActivatedEvent",
    "SkillDiscoveryCompletedEvent",
    "SkillDiscoveryStartedEvent",
    "SkillEvent",
    "SkillLoadFailedEvent",
    "SkillLoadedEvent",
    "TaskCompletedEvent",
    "TaskEvaluationEvent",
    "TaskFailedEvent",
62 lib/crewai/src/crewai/events/types/skill_events.py Normal file
@@ -0,0 +1,62 @@
"""Skill lifecycle events for the Agent Skills standard.

Events emitted during skill discovery, loading, and activation.
"""

from __future__ import annotations

from pathlib import Path
from typing import Any

from crewai.events.base_events import BaseEvent


class SkillEvent(BaseEvent):
    """Base event for skill operations."""

    skill_name: str = ""
    skill_path: Path | None = None
    from_agent: Any | None = None
    from_task: Any | None = None

    def __init__(self, **data: Any) -> None:
        super().__init__(**data)
        self._set_agent_params(data)
        self._set_task_params(data)


class SkillDiscoveryStartedEvent(SkillEvent):
    """Event emitted when skill discovery begins."""

    type: str = "skill_discovery_started"
    search_path: Path


class SkillDiscoveryCompletedEvent(SkillEvent):
    """Event emitted when skill discovery completes."""

    type: str = "skill_discovery_completed"
    search_path: Path
    skills_found: int
    skill_names: list[str]


class SkillLoadedEvent(SkillEvent):
    """Event emitted when a skill is loaded at metadata level."""

    type: str = "skill_loaded"
    disclosure_level: int = 1


class SkillActivatedEvent(SkillEvent):
    """Event emitted when a skill is activated (promoted to instructions level)."""

    type: str = "skill_activated"
    disclosure_level: int = 2


class SkillLoadFailedEvent(SkillEvent):
    """Event emitted when skill loading fails."""

    type: str = "skill_load_failed"
    error: str
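The new event hierarchy can be mirrored with plain dataclasses to see how the `type` discriminator and per-event fields combine (the real events are pydantic models built on `crewai.events.base_events.BaseEvent`, which this stub does not reproduce):

```python
from __future__ import annotations

from dataclasses import dataclass, field
from pathlib import Path


@dataclass
class SkillEvent:
    # Stub mirror of the pydantic SkillEvent above.
    skill_name: str = ""
    skill_path: Path | None = None
    type: str = "skill_event"


@dataclass
class SkillDiscoveryCompletedEvent(SkillEvent):
    type: str = "skill_discovery_completed"
    skills_found: int = 0
    skill_names: list[str] = field(default_factory=list)


event = SkillDiscoveryCompletedEvent(skills_found=2, skill_names=["web", "pdf"])
print(event.type, event.skills_found)  # skill_discovery_completed 2
```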
@@ -6,6 +6,7 @@ from crewai.flow.async_feedback import (
)
from crewai.flow.flow import Flow, and_, listen, or_, router, start
from crewai.flow.flow_config import flow_config
from crewai.flow.flow_serializer import flow_structure
from crewai.flow.human_feedback import HumanFeedbackResult, human_feedback
from crewai.flow.input_provider import InputProvider, InputResponse
from crewai.flow.persistence import persist
@@ -29,6 +30,7 @@ __all__ = [
    "and_",
    "build_flow_structure",
    "flow_config",
    "flow_structure",
    "human_feedback",
    "listen",
    "or_",
@@ -60,7 +60,7 @@ class PendingFeedbackContext:
    emit: list[str] | None = None
    default_outcome: str | None = None
    metadata: dict[str, Any] = field(default_factory=dict)
    llm: str | None = None
    llm: dict[str, Any] | str | None = None
    requested_at: datetime = field(default_factory=datetime.now)

    def to_dict(self) -> dict[str, Any]:
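The widened `llm` field now stores either a legacy model-name string or a full config dict. A hedged sketch of a normalizer that accepts both shapes (the actual deserialization is done by `_deserialize_llm_from_context` in `crewai.flow.human_feedback`; this is a simplified assumption, not that function):

```python
from __future__ import annotations

from typing import Any


def deserialize_llm(value: dict[str, Any] | str | None) -> dict[str, Any] | None:
    """Normalize the serialized llm field to a config dict, or None."""
    if value is None:
        return None
    if isinstance(value, str):
        # Legacy format: bare model name.
        return {"model": value}
    return dict(value)  # already a full config dict


print(deserialize_llm("gpt-4o-mini"))  # {'model': 'gpt-4o-mini'}
```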
@@ -905,7 +905,10 @@ class Flow(Generic[T], metaclass=FlowMeta):
        # Internal flows (RecallFlow, EncodingFlow) set _skip_auto_memory
        # to avoid creating a wasteful standalone Memory instance.
        if self.memory is None and not getattr(self, "_skip_auto_memory", False):
            self.memory = Memory()
            from crewai.memory.utils import sanitize_scope_name

            flow_name = sanitize_scope_name(self.name or self.__class__.__name__)
            self.memory = Memory(root_scope=f"/flow/{flow_name}")

        # Register all flow-related methods
        for method_name in dir(self):
@@ -1315,7 +1318,25 @@ class Flow(Generic[T], metaclass=FlowMeta):
        context = self._pending_feedback_context
        emit = context.emit
        default_outcome = context.default_outcome
        llm = context.llm

        # Try to get the live LLM from the re-imported decorator first.
        # This preserves the fully-configured object (credentials, safety_settings, etc.)
        # for same-process resume. For cross-process resume, fall back to the
        # serialized context.llm which is now a dict with full config (or a legacy string).
        from crewai.flow.human_feedback import _deserialize_llm_from_context

        llm = None
        method = self._methods.get(FlowMethodName(context.method_name))
        if method is not None:
            live_llm = getattr(method, "_hf_llm", None)
            if live_llm is not None:
                from crewai.llms.base_llm import BaseLLM as BaseLLMClass

                if isinstance(live_llm, BaseLLMClass):
                    llm = live_llm

        if llm is None:
            llm = _deserialize_llm_from_context(context.llm)

        # Determine outcome
        collapsed_outcome: str | None = None
619
lib/crewai/src/crewai/flow/flow_serializer.py
Normal file
619
lib/crewai/src/crewai/flow/flow_serializer.py
Normal file
@@ -0,0 +1,619 @@
|
||||
"""Flow structure serializer for introspecting Flow classes.
|
||||
|
||||
This module provides the flow_structure() function that analyzes a Flow class
|
||||
and returns a JSON-serializable dictionary describing its graph structure.
|
||||
This is used by Studio UI to render a visual flow graph.
|
||||
|
||||
Example:
|
||||
>>> from crewai.flow import Flow, start, listen
|
||||
>>> from crewai.flow.flow_serializer import flow_structure
|
||||
>>>
|
||||
>>> class MyFlow(Flow):
|
||||
... @start()
|
||||
... def begin(self):
|
||||
... return "started"
|
||||
...
|
||||
... @listen(begin)
|
||||
... def process(self):
|
||||
... return "done"
|
||||
>>>
|
||||
>>> structure = flow_structure(MyFlow)
|
||||
>>> print(structure["name"])
|
||||
'MyFlow'
|
||||
"""
|
||||
|
||||
from __future__ import annotations
|
||||
|
||||
import inspect
|
||||
import logging
|
||||
import re
|
||||
import textwrap
|
||||
from typing import Any, TypedDict, get_args, get_origin
|
||||
|
||||
from pydantic import BaseModel
|
||||
from pydantic_core import PydanticUndefined
|
||||
|
||||
from crewai.flow.flow_wrappers import (
|
||||
FlowCondition,
|
||||
FlowMethod,
|
||||
ListenMethod,
|
||||
RouterMethod,
|
||||
StartMethod,
|
||||
)
|
||||
|
||||
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
|
||||
class MethodInfo(TypedDict, total=False):
|
||||
"""Information about a single flow method.
|
||||
|
||||
Attributes:
|
||||
name: The method name.
|
||||
type: Method type - start, listen, router, or start_router.
|
||||
trigger_methods: List of method names that trigger this method.
|
||||
condition_type: 'AND' or 'OR' for composite conditions, null otherwise.
|
||||
router_paths: For routers, the possible route names returned.
|
||||
has_human_feedback: Whether the method has @human_feedback decorator.
|
||||
has_crew: Whether the method body references a Crew.
|
||||
"""
|
||||
|
||||
name: str
|
||||
type: str
|
||||
trigger_methods: list[str]
|
||||
condition_type: str | None
|
||||
router_paths: list[str]
|
||||
has_human_feedback: bool
|
||||
has_crew: bool
|
||||
|
||||
|
||||
class EdgeInfo(TypedDict, total=False):
|
||||
"""Information about an edge between flow methods.
|
||||
|
||||
Attributes:
|
||||
from_method: Source method name.
|
||||
to_method: Target method name.
|
||||
edge_type: Type of edge - 'listen' or 'route'.
|
||||
condition: Route name for router edges, null for listen edges.
|
||||
"""
|
||||
|
||||
from_method: str
|
||||
to_method: str
|
||||
edge_type: str
|
||||
condition: str | None
|
||||
|
||||
|
||||
class StateFieldInfo(TypedDict, total=False):
|
||||
"""Information about a state field.
|
||||
|
||||
Attributes:
|
||||
name: Field name.
|
||||
type: Field type as string.
|
||||
default: Default value if any.
|
||||
"""
|
||||
|
||||
name: str
|
||||
type: str
|
||||
default: Any
|
||||
|
||||
|
||||
class StateSchemaInfo(TypedDict, total=False):
|
||||
"""Information about the flow's state schema.
|
||||
|
||||
Attributes:
|
||||
fields: List of field information.
|
||||
"""
|
||||
|
||||
fields: list[StateFieldInfo]
|
||||
|
||||
|
||||
class FlowStructureInfo(TypedDict, total=False):
    """Complete flow structure information.

    Attributes:
        name: Flow class name.
        description: Flow docstring if available.
        methods: List of method information.
        edges: List of edge information.
        state_schema: State schema if typed, None otherwise.
        inputs: Detected flow inputs if available.
    """

    name: str
    description: str | None
    methods: list[MethodInfo]
    edges: list[EdgeInfo]
    state_schema: StateSchemaInfo | None
    inputs: list[str]


def _get_method_type(
    method_name: str,
    method: Any,
    start_methods: list[str],
    routers: set[str],
) -> str:
    """Determine the type of a flow method.

    Args:
        method_name: Name of the method.
        method: The method object.
        start_methods: List of start method names.
        routers: Set of router method names.

    Returns:
        One of: 'start', 'listen', 'router', or 'start_router'.
    """
    is_start = method_name in start_methods or getattr(
        method, "__is_start_method__", False
    )
    is_router = method_name in routers or getattr(method, "__is_router__", False)

    if is_start and is_router:
        return "start_router"
    if is_start:
        return "start"
    if is_router:
        return "router"
    return "listen"


def _has_human_feedback(method: Any) -> bool:
    """Check if a method has the @human_feedback decorator.

    Args:
        method: The method object to check.

    Returns:
        True if the method has a __human_feedback_config__ attribute.
    """
    return hasattr(method, "__human_feedback_config__")


def _detect_crew_reference(method: Any) -> bool:
    """Detect whether a method body references a Crew.

    Checks for patterns like:
    - .crew() method calls
    - Crew( instantiation
    - References to the Crew class in type hints

    Note:
        This is a **best-effort heuristic for UI hints**, not a guarantee.
        It uses inspect.getsource + regex, which can false-positive on comments
        or string literals, and may fail on dynamically generated methods
        or lambdas. Do not rely on it for correctness-critical logic.

    Args:
        method: The method object to inspect.

    Returns:
        True if a crew reference is detected, False otherwise.
    """
    try:
        # Get the underlying function from the wrapper
        func = method
        if hasattr(method, "_meth"):
            func = method._meth
        elif hasattr(method, "__wrapped__"):
            func = method.__wrapped__

        source = inspect.getsource(func)
        source = textwrap.dedent(source)

        # Patterns that indicate Crew usage
        crew_patterns = [
            r"\.crew\(\)",  # .crew() method call
            r"Crew\s*\(",  # Crew( instantiation
            r":\s*Crew\b",  # Type hint with Crew
            r"->.*Crew",  # Return type hint with Crew
        ]

        for pattern in crew_patterns:
            if re.search(pattern, source):
                return True

        return False
    except (OSError, TypeError):
        # Source code is unavailable - assume no crew reference
        return False


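The source-scanning heuristic above can be exercised in isolation. Below is a minimal sketch — the helper name `source_references_crew` and the sample snippets are illustrative, not part of crewAI — that also shows the false-positive case the docstring's Note warns about:

```python
import re
import textwrap

# Illustrative copy of the patterns the heuristic above looks for.
CREW_PATTERNS = [
    r"\.crew\(\)",  # .crew() method call
    r"Crew\s*\(",  # Crew( instantiation
    r":\s*Crew\b",  # type hint with Crew
    r"->.*Crew",  # return type hint with Crew
]


def source_references_crew(source: str) -> bool:
    """Regex scan over dedented source text, mirroring _detect_crew_reference."""
    src = textwrap.dedent(source)
    return any(re.search(pattern, src) for pattern in CREW_PATTERNS)
```

Note that a comment such as `# Crew (disabled)` still matches `Crew\s*\(`, which is exactly why the result should be treated as a UI hint rather than ground truth.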
def _extract_trigger_methods(method: Any) -> tuple[list[str], str | None]:
    """Extract trigger methods and condition type from a method.

    Args:
        method: The method object to inspect.

    Returns:
        Tuple of (trigger_methods list, condition_type or None).
    """
    trigger_methods: list[str] = []
    condition_type: str | None = None

    # First try __trigger_methods__ (populated for simple conditions)
    if hasattr(method, "__trigger_methods__") and method.__trigger_methods__:
        trigger_methods = [str(m) for m in method.__trigger_methods__]

    # For complex conditions (or_/and_ combinators), extract from __trigger_condition__
    if (
        not trigger_methods
        and hasattr(method, "__trigger_condition__")
        and method.__trigger_condition__
    ):
        trigger_condition = method.__trigger_condition__
        trigger_methods = _extract_all_methods_from_condition(trigger_condition)

    if hasattr(method, "__condition_type__") and method.__condition_type__:
        condition_type = str(method.__condition_type__)

    return trigger_methods, condition_type


def _extract_router_paths(
    method: Any, router_paths_registry: dict[str, list[str]]
) -> list[str]:
    """Extract router paths for a router method.

    Args:
        method: The method object.
        router_paths_registry: The class-level _router_paths dict.

    Returns:
        List of possible route names.
    """
    method_name = getattr(method, "__name__", "")

    # First check for __router_paths__ on the method itself
    if hasattr(method, "__router_paths__") and method.__router_paths__:
        return [str(p) for p in method.__router_paths__]

    # Then check the class-level registry
    if method_name in router_paths_registry:
        return [str(p) for p in router_paths_registry[method_name]]

    return []


def _extract_all_methods_from_condition(
    condition: str | FlowCondition | dict[str, Any] | list[Any],
) -> list[str]:
    """Recursively extract all method names from a condition tree.

    Args:
        condition: Can be a string, FlowCondition tuple, dict, or list.

    Returns:
        List of all method names found in the condition.
    """
    if isinstance(condition, str):
        return [condition]
    if isinstance(condition, tuple) and len(condition) == 2:
        # FlowCondition: (condition_type, methods_list)
        _, methods = condition
        if isinstance(methods, list):
            result: list[str] = []
            for m in methods:
                result.extend(_extract_all_methods_from_condition(m))
            return result
        return []
    if isinstance(condition, dict):
        conditions_list = condition.get("conditions", [])
        methods: list[str] = []
        for sub_cond in conditions_list:
            methods.extend(_extract_all_methods_from_condition(sub_cond))
        return methods
    if isinstance(condition, list):
        methods = []
        for item in condition:
            methods.extend(_extract_all_methods_from_condition(item))
        return methods
    return []


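Because the walker above depends only on built-in types, its behavior on a nested or_/and_ condition tree can be sketched standalone. This is a simplified copy (`extract_methods` is an illustrative name, using comprehensions instead of the explicit loops):

```python
def extract_methods(condition) -> list[str]:
    """Collect method names from a condition tree of str/tuple/dict/list nodes."""
    if isinstance(condition, str):
        return [condition]
    if isinstance(condition, tuple) and len(condition) == 2:
        _, methods = condition  # FlowCondition: (condition_type, methods_list)
        if not isinstance(methods, list):
            return []
        return [name for sub in methods for name in extract_methods(sub)]
    if isinstance(condition, dict):
        subs = condition.get("conditions", [])
        return [name for sub in subs for name in extract_methods(sub)]
    if isinstance(condition, list):
        return [name for sub in condition for name in extract_methods(sub)]
    return []
```

Unknown node types fall through to an empty list, so a malformed condition degrades to "no triggers" rather than raising.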
def _generate_edges(
    listeners: dict[str, tuple[str, list[str]] | FlowCondition],
    routers: set[str],
    router_paths: dict[str, list[str]],
    all_methods: set[str],
) -> list[EdgeInfo]:
    """Generate edges from listeners and routers.

    Args:
        listeners: Map of listener_name -> (condition_type, trigger_methods) or FlowCondition.
        routers: Set of router method names.
        router_paths: Map of router_name -> possible return values.
        all_methods: Set of all method names in the flow.

    Returns:
        List of EdgeInfo dictionaries.
    """
    edges: list[EdgeInfo] = []

    # Generate edges from listeners (listen edges)
    for listener_name, condition_data in listeners.items():
        trigger_methods: list[str] = []

        if isinstance(condition_data, tuple) and len(condition_data) == 2:
            _condition_type, methods = condition_data
            trigger_methods = [str(m) for m in methods]
        elif isinstance(condition_data, dict):
            trigger_methods = _extract_all_methods_from_condition(condition_data)

        # Create edges from each trigger to the listener
        edges.extend(
            EdgeInfo(
                from_method=trigger,
                to_method=listener_name,
                edge_type="listen",
                condition=None,
            )
            for trigger in trigger_methods
            if trigger in all_methods
        )

    # Generate edges from routers (route edges)
    for router_name, paths in router_paths.items():
        for path in paths:
            # Find listeners that listen to this path
            for listener_name, condition_data in listeners.items():
                path_triggers: list[str] = []

                if isinstance(condition_data, tuple) and len(condition_data) == 2:
                    _, methods = condition_data
                    path_triggers = [str(m) for m in methods]
                elif isinstance(condition_data, dict):
                    path_triggers = _extract_all_methods_from_condition(condition_data)

                if str(path) in path_triggers:
                    edges.append(
                        EdgeInfo(
                            from_method=router_name,
                            to_method=listener_name,
                            edge_type="route",
                            condition=str(path),
                        )
                    )

    return edges


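The listen-edge half of the generator above is a simple fan-in expansion: every trigger of every listener becomes one edge, and triggers that are not known methods (for example router path names) are skipped. A minimal standalone sketch (`listen_edges` is an illustrative name, and plain dicts stand in for EdgeInfo):

```python
def listen_edges(listeners: dict, all_methods: set) -> list[dict]:
    """Expand each listener's triggers into (from, to) listen edges,
    skipping triggers that are not known methods, as _generate_edges does."""
    edges = []
    for listener_name, (_cond_type, triggers) in listeners.items():
        for trigger in triggers:
            if trigger in all_methods:
                edges.append({
                    "from_method": trigger,
                    "to_method": listener_name,
                    "edge_type": "listen",
                    "condition": None,
                })
    return edges
```

Route edges follow the complementary rule: a router path name that appears in a listener's triggers becomes a "route" edge carrying that path as its condition.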
def _extract_state_schema(flow_class: type) -> StateSchemaInfo | None:
    """Extract the state schema from a Flow class.

    Checks for:
    - Generic type parameter (Flow[MyState])
    - initial_state class attribute

    Args:
        flow_class: The Flow class to inspect.

    Returns:
        StateSchemaInfo if a Pydantic model state is detected, None otherwise.
    """
    state_type: type | None = None

    # Check for _initial_state_t set by __class_getitem__
    if hasattr(flow_class, "_initial_state_t"):
        state_type = flow_class._initial_state_t

    # Check the initial_state class attribute
    if state_type is None and hasattr(flow_class, "initial_state"):
        initial_state = flow_class.initial_state
        if isinstance(initial_state, type) and issubclass(initial_state, BaseModel):
            state_type = initial_state
        elif isinstance(initial_state, BaseModel):
            state_type = type(initial_state)

    # Check __orig_bases__ for generic parameters
    if state_type is None and hasattr(flow_class, "__orig_bases__"):
        for base in flow_class.__orig_bases__:
            origin = get_origin(base)
            if origin is not None:
                args = get_args(base)
                if args:
                    candidate = args[0]
                    if isinstance(candidate, type) and issubclass(candidate, BaseModel):
                        state_type = candidate
                        break

    if state_type is None or not issubclass(state_type, BaseModel):
        return None

    # Extract fields from the Pydantic model
    fields: list[StateFieldInfo] = []
    try:
        model_fields = state_type.model_fields
        for field_name, field_info in model_fields.items():
            field_type_str = "Any"
            if field_info.annotation is not None:
                field_type_str = str(field_info.annotation)
                # Clean up the type string
                field_type_str = field_type_str.replace("typing.", "")
                field_type_str = field_type_str.replace("<class '", "").replace(
                    "'>", ""
                )

            default_value = None
            if (
                field_info.default is not PydanticUndefined
                and field_info.default is not None
                and not callable(field_info.default)
            ):
                default_value = field_info.default

            fields.append(
                StateFieldInfo(
                    name=field_name,
                    type=field_type_str,
                    default=default_value,
                )
            )
    except Exception:
        logger.debug(
            "Failed to extract state schema fields for %s", flow_class.__name__
        )

    return StateSchemaInfo(fields=fields) if fields else None


def _detect_flow_inputs(flow_class: type) -> list[str]:
    """Detect flow input parameters.

    Inspects the __init__ signature for custom parameters beyond the standard
    Flow parameters.

    Args:
        flow_class: The Flow class to inspect.

    Returns:
        List of detected input names.
    """
    inputs: list[str] = []

    # Check for inputs in the __init__ signature beyond standard Flow params
    try:
        init_sig = inspect.signature(flow_class.__init__)
        standard_params = {
            "self",
            "persistence",
            "tracing",
            "suppress_flow_events",
            "max_method_calls",
            "kwargs",
        }
        inputs.extend(
            param_name
            for param_name in init_sig.parameters
            if param_name not in standard_params and not param_name.startswith("_")
        )
    except Exception:
        logger.debug(
            "Failed to detect inputs from __init__ for %s", flow_class.__name__
        )

    return inputs


def flow_structure(flow_class: type) -> FlowStructureInfo:
    """Introspect a Flow class and return its structure as a JSON-serializable dict.

    This function analyzes a Flow CLASS (not an instance) and returns complete
    information about its graph structure, including methods, edges, and state.

    Args:
        flow_class: A Flow class (not an instance) to introspect.

    Returns:
        FlowStructureInfo dictionary containing:
        - name: Flow class name
        - description: Docstring if available
        - methods: List of method info dicts
        - edges: List of edge info dicts
        - state_schema: State schema if typed, None otherwise
        - inputs: Detected input names

    Raises:
        TypeError: If flow_class is not a class.

    Example:
        >>> structure = flow_structure(MyFlow)
        >>> structure["name"]
        'MyFlow'
        >>> for method in structure["methods"]:
        ...     print(method["name"], method["type"])
    """
    if not isinstance(flow_class, type):
        raise TypeError(
            f"flow_structure requires a Flow class, not an instance. "
            f"Got {type(flow_class).__name__}"
        )

    # Get class-level metadata set by FlowMeta
    start_methods: list[str] = getattr(flow_class, "_start_methods", [])
    listeners: dict[str, Any] = getattr(flow_class, "_listeners", {})
    routers: set[str] = getattr(flow_class, "_routers", set())
    router_paths_registry: dict[str, list[str]] = getattr(
        flow_class, "_router_paths", {}
    )

    # Collect all flow methods
    methods: list[MethodInfo] = []
    all_method_names: set[str] = set()

    for attr_name in dir(flow_class):
        if attr_name.startswith("_"):
            continue

        try:
            attr = getattr(flow_class, attr_name)
        except AttributeError:
            continue

        # Check whether it's a flow method
        is_flow_method = (
            isinstance(attr, (FlowMethod, StartMethod, ListenMethod, RouterMethod))
            or hasattr(attr, "__is_flow_method__")
            or hasattr(attr, "__is_start_method__")
            or hasattr(attr, "__trigger_methods__")
            or hasattr(attr, "__is_router__")
        )

        if not is_flow_method:
            continue

        all_method_names.add(attr_name)

        # Get the method type
        method_type = _get_method_type(attr_name, attr, start_methods, routers)

        # Get trigger methods and condition type
        trigger_methods, condition_type = _extract_trigger_methods(attr)

        # Get router paths, if applicable
        router_paths_list: list[str] = []
        if method_type in ("router", "start_router"):
            router_paths_list = _extract_router_paths(attr, router_paths_registry)

        # Check for human feedback
        has_hf = _has_human_feedback(attr)

        # Check for a crew reference
        has_crew = _detect_crew_reference(attr)

        method_info = MethodInfo(
            name=attr_name,
            type=method_type,
            trigger_methods=trigger_methods,
            condition_type=condition_type,
            router_paths=router_paths_list,
            has_human_feedback=has_hf,
            has_crew=has_crew,
        )
        methods.append(method_info)

    # Generate edges
    edges = _generate_edges(listeners, routers, router_paths_registry, all_method_names)

    # Extract the state schema
    state_schema = _extract_state_schema(flow_class)

    # Detect inputs
    inputs = _detect_flow_inputs(flow_class)

    # Get the flow description from the docstring
    description: str | None = None
    if flow_class.__doc__:
        description = flow_class.__doc__.strip()

    return FlowStructureInfo(
        name=flow_class.__name__,
        description=description,
        methods=methods,
        edges=edges,
        state_schema=state_schema,
        inputs=inputs,
    )

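Since `flow_structure()` returns plain dicts and lists, the result serializes directly with `json`. A hand-built value of the same shape (the flow name and method names below are made up for illustration, not produced by crewAI):

```python
import json

# Illustrative FlowStructureInfo-shaped payload.
structure = {
    "name": "ResearchFlow",
    "description": "Fetch a topic, then summarize it.",
    "methods": [
        {"name": "fetch", "type": "start", "trigger_methods": [],
         "condition_type": None, "router_paths": [],
         "has_human_feedback": False, "has_crew": False},
        {"name": "summarize", "type": "listen", "trigger_methods": ["fetch"],
         "condition_type": None, "router_paths": [],
         "has_human_feedback": False, "has_crew": True},
    ],
    "edges": [
        {"from_method": "fetch", "to_method": "summarize",
         "edge_type": "listen", "condition": None},
    ],
    "state_schema": None,
    "inputs": [],
}

payload = json.dumps(structure)
```

This shape is what a UI or CLI consumer would receive: methods and edges as flat lists, with `None` serialized as JSON `null`.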
@@ -75,6 +75,7 @@ class FlowMethod(Generic[P, R]):
             "__is_router__",
             "__router_paths__",
             "__human_feedback_config__",
+            "_hf_llm",  # Live LLM object for HITL resume
         ]:
             if hasattr(meth, attr):
                 setattr(self, attr, getattr(meth, attr))

@@ -76,22 +76,48 @@ if TYPE_CHECKING:
     F = TypeVar("F", bound=Callable[..., Any])
 
 
-def _serialize_llm_for_context(llm: Any) -> str | None:
-    """Serialize a BaseLLM object to a model string with provider prefix.
+def _serialize_llm_for_context(llm: Any) -> dict[str, Any] | str | None:
+    """Serialize a BaseLLM object to a dict preserving full config.
 
     When persisting the LLM for HITL resume, we need to store enough info
     to reconstruct a working LLM on the resume worker. Just storing the bare
     model name (e.g. "gemini-3-flash-preview") causes provider inference to
     fail — it defaults to OpenAI. Including the provider prefix (e.g.
     "gemini/gemini-3-flash-preview") allows LLM() to correctly route.
+
+    Delegates to ``llm.to_config_dict()`` when available (BaseLLM and
+    subclasses). Falls back to extracting the model string with provider
+    prefix for unknown LLM types.
     """
+    if hasattr(llm, "to_config_dict"):
+        return llm.to_config_dict()
+
+    # Fallback for non-BaseLLM objects: just extract model + provider prefix
     model = getattr(llm, "model", None)
     if not model:
         return None
     provider = getattr(llm, "provider", None)
-    if provider and "/" not in model:
-        return f"{provider}/{model}"
-    return model
+    return f"{provider}/{model}" if provider and "/" not in model else model
+
+
+def _deserialize_llm_from_context(
+    llm_data: dict[str, Any] | str | None,
+) -> BaseLLM | None:
+    """Reconstruct an LLM instance from serialized context data.
+
+    Handles both the new dict format (with full config) and the legacy
+    string format (model name only) for backward compatibility.
+
+    Returns a BaseLLM instance, or None if llm_data is None.
+    """
+    if llm_data is None:
+        return None
+
+    from crewai.llm import LLM
+
+    if isinstance(llm_data, str):
+        return LLM(model=llm_data)
+
+    if isinstance(llm_data, dict):
+        model = llm_data.pop("model", None)
+        if not model:
+            return None
+        return LLM(model=model, **llm_data)
+    return None
 
 
 @dataclass

@@ -572,6 +598,14 @@ def human_feedback(
             wrapper.__is_router__ = True
             wrapper.__router_paths__ = list(emit)
 
+        # Stash the live LLM object for HITL resume to retrieve.
+        # When a flow pauses for human feedback and later resumes (possibly in a
+        # different process), the serialized context only contains a model string.
+        # By storing the original LLM on the wrapper, resume_async can retrieve
+        # the fully-configured LLM (with credentials, project, safety_settings, etc.)
+        # instead of creating a bare LLM from just the model string.
+        wrapper._hf_llm = llm
+
         return wrapper  # type: ignore[no-any-return]
 
     return decorator

@@ -62,18 +62,6 @@ except ImportError:
 
 
 if TYPE_CHECKING:
-    from litellm.exceptions import ContextWindowExceededError
-    from litellm.litellm_core_utils.get_supported_openai_params import (
-        get_supported_openai_params,
-    )
-    from litellm.types.utils import (
-        ChatCompletionDeltaToolCall,
-        Choices,
-        Function,
-        ModelResponse,
-    )
-    from litellm.utils import supports_response_schema
-
     from crewai.agent.core import Agent
     from crewai.llms.hooks.base import BaseInterceptor
     from crewai.llms.providers.anthropic.completion import AnthropicThinkingConfig

@@ -83,8 +71,6 @@ if TYPE_CHECKING:
 
 try:
     import litellm
-    from litellm.exceptions import ContextWindowExceededError
-    from litellm.integrations.custom_logger import CustomLogger
     from litellm.litellm_core_utils.get_supported_openai_params import (
         get_supported_openai_params,
     )

@@ -99,15 +85,13 @@ try:
     LITELLM_AVAILABLE = True
 except ImportError:
     LITELLM_AVAILABLE = False
-    litellm = None  # type: ignore
-    Choices = None  # type: ignore
-    ContextWindowExceededError = Exception  # type: ignore
-    get_supported_openai_params = None  # type: ignore
-    ChatCompletionDeltaToolCall = None  # type: ignore
-    Function = None  # type: ignore
-    ModelResponse = None  # type: ignore
-    supports_response_schema = None  # type: ignore
-    CustomLogger = None  # type: ignore
+    litellm = None  # type: ignore[assignment]
+    Choices = None  # type: ignore[assignment, misc]
+    get_supported_openai_params = None  # type: ignore[assignment]
+    ChatCompletionDeltaToolCall = None  # type: ignore[assignment, misc]
+    Function = None  # type: ignore[assignment, misc]
+    ModelResponse = None  # type: ignore[assignment, misc]
+    supports_response_schema = None  # type: ignore[assignment]
 
 
 load_dotenv()

@@ -1009,12 +993,15 @@ class LLM(BaseLLM):
                     )
                 return full_response
 
-            except ContextWindowExceededError as e:
-                # Catch context window errors from litellm and convert them to our own exception type.
-                # This exception is handled by CrewAgentExecutor._invoke_loop() which can then
-                # decide whether to summarize the content or abort based on the respect_context_window flag.
-                raise LLMContextLengthExceededError(str(e)) from e
+            except LLMContextLengthExceededError:
+                # Re-raise our own context length error
+                raise
             except Exception as e:
+                # Check if this is a context window error and convert to our exception type
+                error_msg = str(e)
+                if LLMContextLengthExceededError._is_context_limit_error(error_msg):
+                    raise LLMContextLengthExceededError(error_msg) from e
+
                 logging.error(f"Error in streaming response: {e!s}")
                 if full_response.strip():
                     logging.warning(f"Returning partial response despite error: {e!s}")

@@ -1195,10 +1182,15 @@ class LLM(BaseLLM):
                 usage_info = response.usage
                 self._track_token_usage_internal(usage_info)
 
-        except ContextWindowExceededError as e:
-            # Convert litellm's context window error to our own exception type
-            # for consistent handling in the rest of the codebase
-            raise LLMContextLengthExceededError(str(e)) from e
+        except LLMContextLengthExceededError:
+            # Re-raise our own context length error
+            raise
+        except Exception as e:
+            # Check if this is a context window error and convert to our exception type
+            error_msg = str(e)
+            if LLMContextLengthExceededError._is_context_limit_error(error_msg):
+                raise LLMContextLengthExceededError(error_msg) from e
+            raise
 
         # --- 2) Handle structured output response (when response_model is provided)
         if response_model is not None:

@@ -1330,8 +1322,15 @@ class LLM(BaseLLM):
                 usage_info = response.usage
                 self._track_token_usage_internal(usage_info)
 
-        except ContextWindowExceededError as e:
-            raise LLMContextLengthExceededError(str(e)) from e
+        except LLMContextLengthExceededError:
+            # Re-raise our own context length error
+            raise
+        except Exception as e:
+            # Check if this is a context window error and convert to our exception type
+            error_msg = str(e)
+            if LLMContextLengthExceededError._is_context_limit_error(error_msg):
+                raise LLMContextLengthExceededError(error_msg) from e
+            raise
 
         if response_model is not None:
             if isinstance(response, BaseModel):

@@ -1548,9 +1547,15 @@ class LLM(BaseLLM):
                     )
                 return full_response
 
-            except ContextWindowExceededError as e:
-                raise LLMContextLengthExceededError(str(e)) from e
-            except Exception:
+            except LLMContextLengthExceededError:
+                # Re-raise our own context length error
+                raise
+            except Exception as e:
+                # Check if this is a context window error and convert to our exception type
+                error_msg = str(e)
+                if LLMContextLengthExceededError._is_context_limit_error(error_msg):
+                    raise LLMContextLengthExceededError(error_msg) from e
+
                 if chunk_count == 0:
                     raise
                 if full_response:

@@ -1984,7 +1989,16 @@ class LLM(BaseLLM):
         Returns:
             Messages with files formatted into content blocks.
         """
-        if not HAS_CREWAI_FILES or not self.supports_multimodal():
+        if not HAS_CREWAI_FILES:
             return messages
 
+        if not self.supports_multimodal():
+            if any(msg.get("files") for msg in messages):
+                raise ValueError(
+                    f"Model '{self.model}' does not support multimodal input, "
+                    "but files were provided via 'input_files'. "
+                    "Use a vision-capable model or remove the file inputs."
+                )
+            return messages
+
         provider = getattr(self, "provider", None) or self.model

@@ -2026,7 +2040,16 @@ class LLM(BaseLLM):
         Returns:
             Messages with files formatted into content blocks.
         """
-        if not HAS_CREWAI_FILES or not self.supports_multimodal():
+        if not HAS_CREWAI_FILES:
             return messages
 
+        if not self.supports_multimodal():
+            if any(msg.get("files") for msg in messages):
+                raise ValueError(
+                    f"Model '{self.model}' does not support multimodal input, "
+                    "but files were provided via 'input_files'. "
+                    "Use a vision-capable model or remove the file inputs."
+                )
+            return messages
+
         provider = getattr(self, "provider", None) or self.model

@@ -2139,7 +2162,15 @@ class LLM(BaseLLM):
         - E.g., "openrouter/deepseek/deepseek-chat" yields "openrouter"
         - "gemini/gemini-1.5-pro" yields "gemini"
         - If no slash is present, "openai" is assumed.
+
+        Note: This validation only applies to the litellm fallback path.
+        Native providers have their own validation.
         """
+        if not LITELLM_AVAILABLE or supports_response_schema is None:
+            # When litellm is not available, skip validation
+            # (this path should only be reached for litellm fallback models)
+            return
+
         provider = self._get_custom_llm_provider()
         if self.response_format is not None and not supports_response_schema(
             model=self.model,

@@ -2151,6 +2182,16 @@ class LLM(BaseLLM):
             )
 
     def supports_function_calling(self) -> bool:
+        """Check if the model supports function calling.
+
+        Note: This method is only used by the litellm fallback path.
+        Native providers override this method with their own implementation.
+        """
+        if not LITELLM_AVAILABLE:
+            # When litellm is not available, assume function calling is supported
+            # (all modern models support it)
+            return True
+
         try:
             provider = self._get_custom_llm_provider()
             return litellm.utils.supports_function_calling(

@@ -2158,15 +2199,24 @@ class LLM(BaseLLM):
             )
         except Exception as e:
             logging.error(f"Failed to check function calling support: {e!s}")
-            return False
+            return True  # Default to True for modern models
 
     def supports_stop_words(self) -> bool:
+        """Check if the model supports stop words.
+
+        Note: This method is only used by the litellm fallback path.
+        Native providers override this method with their own implementation.
+        """
+        if not LITELLM_AVAILABLE or get_supported_openai_params is None:
+            # When litellm is not available, assume stop words are supported
+            return True
+
         try:
             params = get_supported_openai_params(model=self.model)
             return params is not None and "stop" in params
         except Exception as e:
             logging.error(f"Failed to get supported params: {e!s}")
-            return False
+            return True  # Default to True
 
     def get_context_window_size(self) -> int:
         """

@@ -2202,7 +2252,15 @@ class LLM(BaseLLM):
         """
         Attempt to keep a single set of callbacks in litellm by removing old
         duplicates and adding new ones.
+
+        Note: This only affects the litellm fallback path. Native providers
+        don't use litellm callbacks - they emit events via base_llm.py.
         """
+        if not LITELLM_AVAILABLE:
+            # When litellm is not available, callbacks are still stored
+            # but not registered with litellm globals
+            return
+
         with suppress_warnings():
             callback_types = [type(callback) for callback in callbacks]
             for callback in litellm.success_callback[:]:

@@ -2227,6 +2285,9 @@ class LLM(BaseLLM):
         If the environment variables are not set or are empty, the corresponding callback lists
         will be set to empty lists.
 
+        Note: This only affects the litellm fallback path. Native providers
+        don't use litellm callbacks - they emit events via base_llm.py.
+
         Examples:
             LITELLM_SUCCESS_CALLBACKS="langfuse,langsmith"
             LITELLM_FAILURE_CALLBACKS="langfuse"

@@ -2234,9 +2295,13 @@ class LLM(BaseLLM):
         This will set `litellm.success_callback` to ["langfuse", "langsmith"] and
         `litellm.failure_callback` to ["langfuse"].
         """
+        if not LITELLM_AVAILABLE:
+            # When litellm is not available, env callbacks have no effect
+            return
+
         with suppress_warnings():
             success_callbacks_str = os.environ.get("LITELLM_SUCCESS_CALLBACKS", "")
-            success_callbacks: list[str | Callable[..., Any] | CustomLogger] = []
+            success_callbacks: list[str | Callable[..., Any]] = []
             if success_callbacks_str:
                 success_callbacks = [
                     cb.strip() for cb in success_callbacks_str.split(",") if cb.strip()

@@ -2244,7 +2309,7 @@ class LLM(BaseLLM):
 
             failure_callbacks_str = os.environ.get("LITELLM_FAILURE_CALLBACKS", "")
             if failure_callbacks_str:
-                failure_callbacks: list[str | Callable[..., Any] | CustomLogger] = [
+                failure_callbacks: list[str | Callable[..., Any]] = [
                     cb.strip() for cb in failure_callbacks_str.split(",") if cb.strip()
                 ]

@@ -2398,6 +2463,9 @@ class LLM(BaseLLM):
             "gpt-4.1",
             "claude-3",
             "claude-4",
+            "claude-sonnet-4",
+            "claude-opus-4",
+            "claude-haiku-4",
             "gemini",
         )
         model_lower = self.model.lower()

@@ -152,6 +152,28 @@ class BaseLLM(ABC):
             "cached_prompt_tokens": 0,
         }
 
+    def to_config_dict(self) -> dict[str, Any]:
+        """Serialize this LLM to a dict that can reconstruct it via ``LLM(**config)``.
+
+        Returns the core fields that BaseLLM owns. Provider subclasses should
+        override this (calling ``super().to_config_dict()``) to add their own
+        fields (e.g. ``project``, ``location``, ``safety_settings``).
+        """
+        model = self.model
+        provider = self.provider
+        model_str = f"{provider}/{model}" if provider and "/" not in model else model
+
+        config: dict[str, Any] = {"model": model_str}
+
+        if self.temperature is not None:
+            config["temperature"] = self.temperature
+        if self.base_url is not None:
+            config["base_url"] = self.base_url
+        if self.stop:
+            config["stop"] = self.stop
+
+        return config
+
     @property
     def provider(self) -> str:
         """Get the provider of the LLM."""

@@ -619,7 +641,16 @@ class BaseLLM(ABC):
         Returns:
             Messages with files formatted into content blocks.
         """
-        if not HAS_CREWAI_FILES or not self.supports_multimodal():
+        if not HAS_CREWAI_FILES:
             return messages

+        if not self.supports_multimodal():
+            if any(msg.get("files") for msg in messages):
+                raise ValueError(
+                    f"Model '{self.model}' does not support multimodal input, "
+                    "but files were provided via 'input_files'. "
+                    "Use a vision-capable model or remove the file inputs."
+                )
+            return messages
+
         provider = getattr(self, "provider", None) or getattr(self, "model", "openai")

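The new guard fails loudly only when files are actually present on a non-vision model. A minimal sketch of that control flow (standalone function, not the crewAI method; `supports_multimodal` is passed in as a flag for illustration):

```python
def format_files(messages: list[dict], model: str,
                 supports_multimodal: bool) -> list[dict]:
    # Mirror of the guard in the diff: pass through silently when no files
    # were supplied, raise only when files hit a non-multimodal model.
    if not supports_multimodal:
        if any(msg.get("files") for msg in messages):
            raise ValueError(
                f"Model '{model}' does not support multimodal input, "
                "but files were provided via 'input_files'."
            )
        return messages
    return messages  # real formatting into content blocks would happen here


print(format_files([{"role": "user", "content": "hi"}], "text-model", False))
```

Splitting the original combined `if` is what makes the error possible: before, a non-multimodal model silently dropped into the same early return as the no-files case.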
@@ -256,6 +256,19 @@ class AnthropicCompletion(BaseLLM):
         else:
             self.stop_sequences = []

+    def to_config_dict(self) -> dict[str, Any]:
+        """Extend base config with Anthropic-specific fields."""
+        config = super().to_config_dict()
+        if self.max_tokens != 4096:  # non-default
+            config["max_tokens"] = self.max_tokens
+        if self.max_retries != 2:  # non-default
+            config["max_retries"] = self.max_retries
+        if self.top_p is not None:
+            config["top_p"] = self.top_p
+        if self.timeout is not None:
+            config["timeout"] = self.timeout
+        return config
+
     def _get_client_params(self) -> dict[str, Any]:
         """Get client parameters."""

@@ -1753,7 +1766,14 @@ class AnthropicCompletion(BaseLLM):
         Returns:
             True if the model supports images and PDFs.
         """
-        return "claude-3" in self.model.lower() or "claude-4" in self.model.lower()
+        model_lower = self.model.lower()
+        return (
+            "claude-3" in model_lower
+            or "claude-4" in model_lower
+            or "claude-sonnet-4" in model_lower
+            or "claude-opus-4" in model_lower
+            or "claude-haiku-4" in model_lower
+        )

     def get_file_uploader(self) -> Any:
         """Get an Anthropic file uploader using this LLM's clients.

@@ -180,6 +180,27 @@ class AzureCompletion(BaseLLM):
             and "/openai/deployments/" in self.endpoint
         )

+    def to_config_dict(self) -> dict[str, Any]:
+        """Extend base config with Azure-specific fields."""
+        config = super().to_config_dict()
+        if self.endpoint:
+            config["endpoint"] = self.endpoint
+        if self.api_version and self.api_version != "2024-06-01":
+            config["api_version"] = self.api_version
+        if self.timeout is not None:
+            config["timeout"] = self.timeout
+        if self.max_retries != 2:
+            config["max_retries"] = self.max_retries
+        if self.top_p is not None:
+            config["top_p"] = self.top_p
+        if self.frequency_penalty is not None:
+            config["frequency_penalty"] = self.frequency_penalty
+        if self.presence_penalty is not None:
+            config["presence_penalty"] = self.presence_penalty
+        if self.max_tokens is not None:
+            config["max_tokens"] = self.max_tokens
+        return config
+
     @staticmethod
     def _validate_and_fix_endpoint(endpoint: str, model: str) -> str:
         """Validate and fix Azure endpoint URL format.

@@ -346,6 +346,23 @@ class BedrockCompletion(BaseLLM):
         # Handle inference profiles for newer models
         self.model_id = model

+    def to_config_dict(self) -> dict[str, Any]:
+        """Extend base config with Bedrock-specific fields."""
+        config = super().to_config_dict()
+        # NOTE: AWS credentials (access_key, secret_key, session_token) are
+        # intentionally excluded — they must come from env on resume.
+        if self.region_name and self.region_name != "us-east-1":
+            config["region_name"] = self.region_name
+        if self.max_tokens is not None:
+            config["max_tokens"] = self.max_tokens
+        if self.top_p is not None:
+            config["top_p"] = self.top_p
+        if self.top_k is not None:
+            config["top_k"] = self.top_k
+        if self.guardrail_config:
+            config["guardrail_config"] = self.guardrail_config
+        return config
+
     @property
     def stop(self) -> list[str]:
         """Get stop sequences sent to the API."""
@@ -1880,7 +1897,9 @@ class BedrockCompletion(BaseLLM):
             # Anthropic (Claude) models reject assistant-last messages when
             # tools are in the request. Append a user message so the
             # Converse API accepts the payload.
-            elif "anthropic" in self.model.lower() or "claude" in self.model.lower():
+            elif (
+                "anthropic" in self.model.lower() or "claude" in self.model.lower()
+            ):
                 converse_messages.append(
                     {
                         "role": "user",
@@ -2100,12 +2119,18 @@ class BedrockCompletion(BaseLLM):
         model_lower = self.model.lower()
         vision_models = (
             "anthropic.claude-3",
+            "anthropic.claude-sonnet-4",
+            "anthropic.claude-opus-4",
+            "anthropic.claude-haiku-4",
             "amazon.nova-lite",
             "amazon.nova-pro",
             "amazon.nova-premier",
             "us.amazon.nova-lite",
             "us.amazon.nova-pro",
             "us.amazon.nova-premier",
+            "us.anthropic.claude-sonnet-4",
+            "us.anthropic.claude-opus-4",
+            "us.anthropic.claude-haiku-4",
         )
         return any(model_lower.startswith(m) for m in vision_models)

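The Bedrock check above is a plain case-insensitive prefix match against known vision-capable model families. A minimal sketch with a trimmed prefix tuple (a subset of the diff's list, for illustration only):

```python
VISION_MODEL_PREFIXES = (
    "anthropic.claude-3",
    "anthropic.claude-sonnet-4",
    "amazon.nova-lite",
    "us.anthropic.claude-sonnet-4",
)


def supports_multimodal(model_id: str) -> bool:
    # Prefix match so full Bedrock model IDs (with version/date suffixes
    # like "-20240307-v1:0") still resolve to their family.
    return any(model_id.lower().startswith(p) for p in VISION_MODEL_PREFIXES)


print(supports_multimodal("anthropic.claude-3-haiku-20240307-v1:0"))  # True
```

Prefix matching is why the diff must list both the bare (`anthropic.…`) and the cross-region (`us.anthropic.…`) inference-profile forms: `startswith` will not skip the region prefix.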
@@ -176,6 +176,28 @@ class GeminiCompletion(BaseLLM):
         else:
             self.stop_sequences = []

+    def to_config_dict(self) -> dict[str, Any]:
+        """Extend base config with Gemini/Vertex-specific fields."""
+        config = super().to_config_dict()
+        if self.project:
+            config["project"] = self.project
+        if self.location and self.location != "us-central1":
+            config["location"] = self.location
+        if self.top_p is not None:
+            config["top_p"] = self.top_p
+        if self.top_k is not None:
+            config["top_k"] = self.top_k
+        if self.max_output_tokens is not None:
+            config["max_output_tokens"] = self.max_output_tokens
+        if self.safety_settings:
+            config["safety_settings"] = [
+                {"category": str(s.category), "threshold": str(s.threshold)}
+                if hasattr(s, "category") and hasattr(s, "threshold")
+                else s
+                for s in self.safety_settings
+            ]
+        return config
+
     def _initialize_client(self, use_vertexai: bool = False) -> genai.Client:
         """Initialize the Google Gen AI client with proper parameter handling.

@@ -329,6 +329,35 @@ class OpenAICompletion(BaseLLM):
         """
         self._last_reasoning_items = None

+    def to_config_dict(self) -> dict[str, Any]:
+        """Extend base config with OpenAI-specific fields."""
+        config = super().to_config_dict()
+        # Client-level params (from OpenAI SDK)
+        if self.organization:
+            config["organization"] = self.organization
+        if self.project:
+            config["project"] = self.project
+        if self.timeout is not None:
+            config["timeout"] = self.timeout
+        if self.max_retries != 2:
+            config["max_retries"] = self.max_retries
+        # Completion params
+        if self.top_p is not None:
+            config["top_p"] = self.top_p
+        if self.frequency_penalty is not None:
+            config["frequency_penalty"] = self.frequency_penalty
+        if self.presence_penalty is not None:
+            config["presence_penalty"] = self.presence_penalty
+        if self.max_tokens is not None:
+            config["max_tokens"] = self.max_tokens
+        if self.max_completion_tokens is not None:
+            config["max_completion_tokens"] = self.max_completion_tokens
+        if self.seed is not None:
+            config["seed"] = self.seed
+        if self.reasoning_effort is not None:
+            config["reasoning_effort"] = self.reasoning_effort
+        return config
+
     def _get_client_params(self) -> dict[str, Any]:
         """Get OpenAI client parameters."""

@@ -28,6 +28,7 @@ from crewai.memory.analyze import (
     analyze_for_save,
 )
 from crewai.memory.types import MemoryConfig, MemoryRecord, embed_texts
+from crewai.memory.utils import join_scope_paths


 logger = logging.getLogger(__name__)
@@ -48,6 +49,8 @@ class ItemState(BaseModel):
     importance: float | None = None
     source: str | None = None
     private: bool = False
+    # Structural root scope prefix for hierarchical scoping
+    root_scope: str | None = None
     # Resolved values
     resolved_scope: str = "/"
     resolved_categories: list[str] = Field(default_factory=list)
@@ -104,6 +107,14 @@ class EncodingFlow(Flow[EncodingState]):
         embedder: Any,
         config: MemoryConfig | None = None,
     ) -> None:
-        """Initialize the encoding flow."""
+        """Initialize the encoding flow.
+
+        Args:
+            storage: Storage backend for persisting memories.
+            llm: LLM instance for analysis.
+            embedder: Embedder for generating vectors.
+            config: Optional memory configuration.
+        """
         super().__init__(suppress_flow_events=True)
         self._storage = storage
         self._llm = llm
@@ -180,10 +191,18 @@ class EncodingFlow(Flow[EncodingState]):
         def _search_one(
             item: ItemState,
         ) -> list[tuple[MemoryRecord, float]]:
-            scope_prefix = item.scope if item.scope and item.scope.strip("/") else None
+            # Use root_scope as the search boundary, then narrow by explicit scope if provided
+            effective_prefix = None
+            if item.root_scope:
+                effective_prefix = item.root_scope.rstrip("/")
+                if item.scope and item.scope.strip("/"):
+                    effective_prefix = effective_prefix + "/" + item.scope.strip("/")
+            elif item.scope and item.scope.strip("/"):
+                effective_prefix = item.scope

             return self._storage.search(  # type: ignore[no-any-return]
                 item.embedding,
-                scope_prefix=scope_prefix,
+                scope_prefix=effective_prefix,
                 categories=None,
                 limit=self._config.consolidation_limit,
                 min_score=0.0,
@@ -253,9 +272,16 @@ class EncodingFlow(Flow[EncodingState]):
         existing_scopes: list[str] = []
         existing_categories: list[str] = []
         if any_needs_fields:
-            existing_scopes = self._storage.list_scopes("/") or ["/"]
+            # Constrain scope/category suggestions to root_scope boundary
+            # Check if any active item has root_scope
+            active_root = next(
+                (it.root_scope for it in items if not it.dropped and it.root_scope),
+                None,
+            )
+            scope_search_root = active_root if active_root else "/"
+            existing_scopes = self._storage.list_scopes(scope_search_root) or ["/"]
             existing_categories = list(
-                self._storage.list_categories(scope_prefix=None).keys()
+                self._storage.list_categories(scope_prefix=active_root).keys()
             )

         # Classify items and submit LLM calls
@@ -321,7 +347,13 @@ class EncodingFlow(Flow[EncodingState]):
         for i, future in save_futures.items():
             analysis = future.result()
             item = items[i]
-            item.resolved_scope = item.scope or analysis.suggested_scope or "/"
+            # Determine inner scope from explicit scope or LLM-inferred
+            inner_scope = item.scope or analysis.suggested_scope or "/"
+            # Join root_scope with inner scope if root_scope is set
+            if item.root_scope:
+                item.resolved_scope = join_scope_paths(item.root_scope, inner_scope)
+            else:
+                item.resolved_scope = inner_scope
             item.resolved_categories = (
                 item.categories
                 if item.categories is not None
@@ -353,8 +385,18 @@ class EncodingFlow(Flow[EncodingState]):
         pool.shutdown(wait=False)

     def _apply_defaults(self, item: ItemState) -> None:
-        """Apply caller values with config defaults (fast path)."""
-        item.resolved_scope = item.scope or "/"
+        """Apply caller values with config defaults (fast path).
+
+        If root_scope is set, prepends it to the inner scope to create the
+        final resolved_scope.
+        """
+        inner_scope = item.scope or "/"
+        # Join root_scope with inner scope if root_scope is set
+        if item.root_scope:
+            item.resolved_scope = join_scope_paths(item.root_scope, inner_scope)
+        else:
+            item.resolved_scope = inner_scope if inner_scope != "/" else "/"

         item.resolved_categories = item.categories or []
         item.resolved_metadata = item.metadata or {}
         item.resolved_importance = (

@@ -31,6 +31,7 @@ from crewai.memory.types import (
     compute_composite_score,
     embed_text,
 )
+from crewai.memory.utils import join_scope_paths
 from crewai.rag.embeddings.factory import build_embedder
 from crewai.rag.embeddings.providers.openai.types import OpenAIProviderSpec

@@ -126,6 +127,14 @@ class Memory(BaseModel):
         default=False,
         description="If True, remember() and remember_many() are silent no-ops.",
     )
+    root_scope: str | None = Field(
+        default=None,
+        description=(
+            "Structural root scope prefix. When set, LLM-inferred or explicit scopes "
+            "are nested under this root. For example, a crew with root_scope='/crew/research' "
+            "will store memories at '/crew/research/<inferred_scope>'."
+        ),
+    )

     _config: MemoryConfig = PrivateAttr()
     _llm_instance: BaseLLM | None = PrivateAttr(default=None)
@@ -297,11 +306,26 @@ class Memory(BaseModel):
         importance: float | None = None,
         source: str | None = None,
         private: bool = False,
+        root_scope: str | None = None,
     ) -> list[MemoryRecord]:
         """Run the batch EncodingFlow for one or more items. No event emission.

         This is the core encoding logic shared by ``remember()`` and
         ``remember_many()``. Events are managed by the calling method.

+        Args:
+            contents: List of text content to encode and store.
+            scope: Optional explicit scope (inner scope, nested under root_scope).
+            categories: Optional categories for all items.
+            metadata: Optional metadata for all items.
+            importance: Optional importance score for all items.
+            source: Optional source identifier for all items.
+            private: Whether items are private.
+            root_scope: Structural root scope prefix. LLM-inferred or explicit
+                scopes are nested under this root.
+
+        Returns:
+            List of created MemoryRecord instances.
         """
         from crewai.memory.encoding_flow import EncodingFlow

@@ -320,6 +344,7 @@ class Memory(BaseModel):
                 "importance": importance,
                 "source": source,
                 "private": private,
+                "root_scope": root_scope,
             }
             for c in contents
         ]
@@ -340,6 +365,7 @@ class Memory(BaseModel):
         source: str | None = None,
         private: bool = False,
         agent_role: str | None = None,
+        root_scope: str | None = None,
     ) -> MemoryRecord | None:
         """Store a single item in memory (synchronous).

@@ -349,13 +375,15 @@ class Memory(BaseModel):

         Args:
             content: Text to remember.
-            scope: Optional scope path; inferred if None.
+            scope: Optional scope path (inner scope); inferred if None.
             categories: Optional categories; inferred if None.
             metadata: Optional metadata; merged with LLM-extracted if inferred.
             importance: Optional importance 0-1; inferred if None.
             source: Optional provenance identifier (e.g. user ID, session ID).
             private: If True, only visible to recall from the same source.
             agent_role: Optional agent role for event metadata.
+            root_scope: Optional root scope override. If provided, this overrides
+                the instance-level root_scope for this call only.

         Returns:
             The created MemoryRecord, or None if this memory is read-only.
@@ -365,6 +393,10 @@ class Memory(BaseModel):
         """
         if self.read_only:
             return None
+
+        # Determine effective root_scope: per-call override takes precedence
+        effective_root = root_scope if root_scope is not None else self.root_scope
+
         _source_type = "unified_memory"
         try:
             crewai_event_bus.emit(
@@ -388,6 +420,7 @@ class Memory(BaseModel):
                 importance,
                 source,
                 private,
+                effective_root,
             )
             records = future.result()
             record = records[0] if records else None
@@ -426,6 +459,7 @@ class Memory(BaseModel):
         source: str | None = None,
         private: bool = False,
         agent_role: str | None = None,
+        root_scope: str | None = None,
     ) -> list[MemoryRecord]:
         """Store multiple items in memory (non-blocking).

@@ -440,13 +474,15 @@ class Memory(BaseModel):

         Args:
             contents: List of text items to remember.
-            scope: Optional scope applied to all items.
+            scope: Optional scope (inner scope) applied to all items.
             categories: Optional categories applied to all items.
             metadata: Optional metadata applied to all items.
             importance: Optional importance applied to all items.
             source: Optional provenance identifier applied to all items.
             private: Privacy flag applied to all items.
             agent_role: Optional agent role for event metadata.
+            root_scope: Optional root scope override. If provided, this overrides
+                the instance-level root_scope for this call only.

         Returns:
             Empty list (records are not available until the background save completes).
@@ -454,6 +490,9 @@ class Memory(BaseModel):
         if not contents or self.read_only:
             return []

+        # Determine effective root_scope: per-call override takes precedence
+        effective_root = root_scope if root_scope is not None else self.root_scope
+
         self._submit_save(
             self._background_encode_batch,
             contents,
@@ -464,6 +503,7 @@ class Memory(BaseModel):
             source,
             private,
             agent_role,
+            effective_root,
         )
         return []

@@ -477,6 +517,7 @@ class Memory(BaseModel):
         source: str | None,
         private: bool,
         agent_role: str | None,
+        root_scope: str | None = None,
     ) -> list[MemoryRecord]:
         """Run the encoding pipeline in a background thread with event emission.

@@ -486,6 +527,20 @@ class Memory(BaseModel):
         All ``emit`` calls are wrapped in try/except to handle the case where
         the event bus shuts down before the background save finishes (e.g.
         during process exit).

+        Args:
+            contents: List of text content to encode.
+            scope: Optional inner scope for all items.
+            categories: Optional categories for all items.
+            metadata: Optional metadata for all items.
+            importance: Optional importance for all items.
+            source: Optional source identifier for all items.
+            private: Whether items are private.
+            agent_role: Optional agent role for event metadata.
+            root_scope: Optional root scope prefix for hierarchical scoping.
+
+        Returns:
+            List of created MemoryRecord instances.
         """
         try:
             crewai_event_bus.emit(
@@ -502,7 +557,14 @@ class Memory(BaseModel):
         try:
             start = time.perf_counter()
             records = self._encode_batch(
-                contents, scope, categories, metadata, importance, source, private
+                contents,
+                scope,
+                categories,
+                metadata,
+                importance,
+                source,
+                private,
+                root_scope,
             )
             elapsed_ms = (time.perf_counter() - start) * 1000
         except RuntimeError:
@@ -575,6 +637,14 @@ class Memory(BaseModel):
         # so that the search sees all persisted records.
         self.drain_writes()

+        # Apply root_scope as default scope_prefix for read isolation
+        effective_scope = scope
+        if effective_scope is None and self.root_scope:
+            effective_scope = self.root_scope
+        elif effective_scope is not None and self.root_scope:
+            # Nest provided scope under root
+            effective_scope = join_scope_paths(self.root_scope, effective_scope)
+
         _source = "unified_memory"
         try:
             crewai_event_bus.emit(
@@ -595,7 +665,7 @@ class Memory(BaseModel):
         else:
             raw = self._storage.search(
                 embedding,
-                scope_prefix=scope,
+                scope_prefix=effective_scope,
                 categories=categories,
                 limit=limit,
                 min_score=0.0,
@@ -630,7 +700,7 @@ class Memory(BaseModel):
             flow.kickoff(
                 inputs={
                     "query": query,
-                    "scope": scope,
+                    "scope": effective_scope,
                     "categories": categories or [],
                     "limit": limit,
                     "source": source,
@@ -684,11 +754,24 @@ class Memory(BaseModel):
     ) -> int:
         """Delete memories matching criteria.

+        Args:
+            scope: Scope to delete from. If None and root_scope is set, deletes
+                only within root_scope.
+            categories: Filter by categories.
+            older_than: Delete records older than this datetime.
+            metadata_filter: Filter by metadata fields.
+            record_ids: Specific record IDs to delete.
+
+        Returns:
+            Number of records deleted.
         """
+        effective_scope = scope
+        if effective_scope is None and self.root_scope:
+            effective_scope = self.root_scope
+        elif effective_scope is not None and self.root_scope:
+            effective_scope = join_scope_paths(self.root_scope, effective_scope)
         return self._storage.delete(
-            scope_prefix=scope,
+            scope_prefix=effective_scope,
             categories=categories,
             record_ids=record_ids,
             older_than=older_than,
@@ -763,9 +846,21 @@ class Memory(BaseModel):
             read_only=read_only,
         )

-    def list_scopes(self, path: str = "/") -> list[str]:
-        """List immediate child scopes under path."""
-        return self._storage.list_scopes(path)
+    def list_scopes(self, path: str | None = None) -> list[str]:
+        """List immediate child scopes under path.
+
+        Args:
+            path: Scope path to list children of. If None and root_scope is set,
+                defaults to root_scope. Otherwise defaults to '/'.
+        """
+        effective_path = path
+        if effective_path is None and self.root_scope:
+            effective_path = self.root_scope
+        elif effective_path is not None and self.root_scope:
+            effective_path = join_scope_paths(self.root_scope, effective_path)
+        elif effective_path is None:
+            effective_path = "/"
+        return self._storage.list_scopes(effective_path)

     def list_records(
         self, scope: str | None = None, limit: int = 200, offset: int = 0
@@ -773,20 +868,52 @@ class Memory(BaseModel):
         """List records in a scope, newest first.

         Args:
-            scope: Optional scope path prefix to filter by.
+            scope: Optional scope path prefix to filter by. If None and root_scope
+                is set, defaults to root_scope.
             limit: Maximum number of records to return.
             offset: Number of records to skip (for pagination).
         """
+        effective_scope = scope
+        if effective_scope is None and self.root_scope:
+            effective_scope = self.root_scope
+        elif effective_scope is not None and self.root_scope:
+            effective_scope = join_scope_paths(self.root_scope, effective_scope)
         return self._storage.list_records(
-            scope_prefix=scope, limit=limit, offset=offset
+            scope_prefix=effective_scope, limit=limit, offset=offset
         )

-    def info(self, path: str = "/") -> ScopeInfo:
-        """Return scope info for path."""
-        return self._storage.get_scope_info(path)
+    def info(self, path: str | None = None) -> ScopeInfo:
+        """Return scope info for path.
+
+        Args:
+            path: Scope path to get info for. If None and root_scope is set,
+                defaults to root_scope. Otherwise defaults to '/'.
+        """
+        effective_path = path
+        if effective_path is None and self.root_scope:
+            effective_path = self.root_scope
+        elif effective_path is not None and self.root_scope:
+            effective_path = join_scope_paths(self.root_scope, effective_path)
+        elif effective_path is None:
+            effective_path = "/"
+        return self._storage.get_scope_info(effective_path)

-    def tree(self, path: str = "/", max_depth: int = 3) -> str:
-        """Return a formatted tree of scopes (string)."""
+    def tree(self, path: str | None = None, max_depth: int = 3) -> str:
+        """Return a formatted tree of scopes (string).
+
+        Args:
+            path: Root path for the tree. If None and root_scope is set,
+                defaults to root_scope. Otherwise defaults to '/'.
+            max_depth: Maximum depth to traverse.
+        """
+        effective_path = path
+        if effective_path is None and self.root_scope:
+            effective_path = self.root_scope
+        elif effective_path is not None and self.root_scope:
+            effective_path = join_scope_paths(self.root_scope, effective_path)
+        elif effective_path is None:
+            effective_path = "/"
+
         lines: list[str] = []

         def _walk(p: str, depth: int, prefix: str) -> None:
@@ -797,16 +924,36 @@ class Memory(BaseModel):
             for child in info.child_scopes[:20]:
                 _walk(child, depth + 1, prefix + "  ")

-        _walk(path.rstrip("/") or "/", 0, "")
-        return "\n".join(lines) if lines else f"{path or '/'} (0 records)"
+        _walk(effective_path.rstrip("/") or "/", 0, "")
+        return "\n".join(lines) if lines else f"{effective_path or '/'} (0 records)"

     def list_categories(self, path: str | None = None) -> dict[str, int]:
-        """List categories and counts; path=None means global."""
-        return self._storage.list_categories(scope_prefix=path)
+        """List categories and counts.
+
+        Args:
+            path: Scope path to filter categories by. If None and root_scope is set,
+                defaults to root_scope.
+        """
+        effective_path = path
+        if effective_path is None and self.root_scope:
+            effective_path = self.root_scope
+        elif effective_path is not None and self.root_scope:
+            effective_path = join_scope_paths(self.root_scope, effective_path)
+        return self._storage.list_categories(scope_prefix=effective_path)

     def reset(self, scope: str | None = None) -> None:
-        """Reset (delete all) memories in scope. None = all."""
-        self._storage.reset(scope_prefix=scope)
+        """Reset (delete all) memories in scope.
+
+        Args:
+            scope: Scope to reset. If None and root_scope is set, resets only
+                within root_scope. If None and no root_scope, resets all.
+        """
+        effective_scope = scope
+        if effective_scope is None and self.root_scope:
+            effective_scope = self.root_scope
+        elif effective_scope is not None and self.root_scope:
+            effective_scope = join_scope_paths(self.root_scope, effective_scope)
+        self._storage.reset(scope_prefix=effective_scope)

     async def aextract_memories(self, content: str) -> list[str]:
         """Async variant of extract_memories."""

lib/crewai/src/crewai/memory/utils.py (new file, 110 lines)
@@ -0,0 +1,110 @@
"""Utility functions for the unified memory system."""
|
||||
|
||||
from __future__ import annotations
|
||||
|
||||
import re
|
||||
|
||||
|
||||
def sanitize_scope_name(name: str) -> str:
|
||||
"""Sanitize a name for use in hierarchical scope paths.
|
||||
|
||||
Converts to lowercase, replaces non-alphanumeric chars (except underscore
|
||||
and hyphen) with hyphens, collapses multiple hyphens, strips leading/trailing
|
||||
hyphens.
|
||||
|
||||
Args:
|
||||
name: The raw name to sanitize (e.g. crew name, agent role, flow class name).
|
||||
|
||||
Returns:
|
||||
A sanitized string safe for use in scope paths. Returns 'unknown' if the
|
||||
result would be empty.
|
||||
|
||||
Examples:
|
||||
>>> sanitize_scope_name("Research Crew")
|
||||
'research-crew'
|
||||
>>> sanitize_scope_name("Agent #1 (Main)")
|
||||
'agent-1-main'
|
||||
>>> sanitize_scope_name("café_worker")
|
||||
'caf-_worker'
|
||||
"""
|
||||
if not name:
|
||||
return "unknown"
|
||||
name = name.lower().strip()
|
||||
# Replace any character that's not alphanumeric, underscore, or hyphen with hyphen
|
||||
name = re.sub(r"[^a-z0-9_-]", "-", name)
|
||||
# Collapse multiple hyphens into one
|
||||
name = re.sub(r"-+", "-", name)
|
||||
# Strip leading/trailing hyphens
|
||||
name = name.strip("-")
|
||||
return name or "unknown"
|
||||
|
||||
|
||||
def normalize_scope_path(path: str) -> str:
|
||||
"""Normalize a scope path by removing double slashes and ensuring proper format.
|
||||
|
||||
Args:
|
||||
path: The raw scope path (e.g. '/crew/MyCrewName//agent//role').
|
||||
|
||||
Returns:
|
||||
A normalized path with leading slash, no trailing slash, no double slashes.
|
||||
Returns '/' for empty or root-only paths.
|
||||
|
||||
Examples:
|
||||
>>> normalize_scope_path("/crew/test//agent//")
|
||||
'/crew/test/agent'
|
||||
>>> normalize_scope_path("")
|
||||
'/'
|
||||
>>> normalize_scope_path("crew/test")
|
||||
'/crew/test'
|
||||
"""
|
||||
if not path or path == "/":
|
||||
return "/"
|
||||
# Collapse multiple slashes
|
||||
path = re.sub(r"/+", "/", path)
|
||||
# Ensure leading slash
|
||||
if not path.startswith("/"):
|
||||
path = "/" + path
|
||||
# Remove trailing slash (unless it's just '/')
|
||||
if len(path) > 1:
|
||||
path = path.rstrip("/")
|
||||
return path
|
||||
|
||||
|
||||
def join_scope_paths(root: str | None, inner: str | None) -> str:
|
||||
"""Join a root scope with an inner scope, handling edge cases properly.
|
||||
|
||||
Args:
|
||||
root: The root scope prefix (e.g. '/crew/research-crew').
|
||||
inner: The inner scope (e.g. '/market-trends' or 'market-trends').
|
||||
|
||||
Returns:
|
||||
The combined, normalized scope path.
|
||||
|
||||
Examples:
|
||||
>>> join_scope_paths("/crew/test", "/market-trends")
|
||||
'/crew/test/market-trends'
|
||||
>>> join_scope_paths("/crew/test", "market-trends")
|
||||
'/crew/test/market-trends'
|
||||
>>> join_scope_paths("/crew/test", "/")
|
||||
'/crew/test'
|
||||
>>> join_scope_paths("/crew/test", None)
|
||||
'/crew/test'
|
||||
>>> join_scope_paths(None, "/market-trends")
|
||||
'/market-trends'
|
||||
>>> join_scope_paths(None, None)
|
||||
'/'
|
||||
"""
|
||||
# Normalize both parts
|
||||
root = root.rstrip("/") if root else ""
|
||||
inner = inner.strip("/") if inner else ""
|
||||
|
||||
if root and inner:
|
||||
result = f"{root}/{inner}"
|
||||
elif root:
|
||||
result = root
|
||||
elif inner:
|
||||
result = f"/{inner}"
|
||||
else:
|
||||
result = "/"
|
||||
|
||||
return normalize_scope_path(result)
|
||||
lib/crewai/src/crewai/skills/__init__.py (new file, 17 lines)
@@ -0,0 +1,17 @@
"""Agent Skills standard implementation for crewAI.
|
||||
|
||||
Provides filesystem-based skill packaging with progressive disclosure.
|
||||
"""
|
||||
|
||||
from crewai.skills.loader import activate_skill, discover_skills
|
||||
from crewai.skills.models import Skill, SkillFrontmatter
|
||||
from crewai.skills.parser import SkillParseError
|
||||
|
||||
|
||||
__all__ = [
|
||||
"Skill",
|
||||
"SkillFrontmatter",
|
||||
"SkillParseError",
|
||||
"activate_skill",
|
||||
"discover_skills",
|
||||
]
|
||||
184  lib/crewai/src/crewai/skills/loader.py  Normal file
@@ -0,0 +1,184 @@
"""Filesystem discovery and progressive loading for Agent Skills.

Provides functions to discover skills in directories, activate them
for agent use, and format skill context for prompt injection.
"""

from __future__ import annotations

import logging
from pathlib import Path
from typing import TYPE_CHECKING

from crewai.events.event_bus import crewai_event_bus
from crewai.events.types.skill_events import (
    SkillActivatedEvent,
    SkillDiscoveryCompletedEvent,
    SkillDiscoveryStartedEvent,
    SkillLoadFailedEvent,
    SkillLoadedEvent,
)
from crewai.skills.models import INSTRUCTIONS, RESOURCES, Skill
from crewai.skills.parser import (
    SKILL_FILENAME,
    load_skill_instructions,
    load_skill_metadata,
    load_skill_resources,
)


if TYPE_CHECKING:
    from crewai.agents.agent_builder.base_agent import BaseAgent


_logger = logging.getLogger(__name__)


def discover_skills(
    search_path: Path,
    source: BaseAgent | None = None,
) -> list[Skill]:
    """Scan a directory for skill directories containing SKILL.md.

    Loads each discovered skill at METADATA disclosure level.

    Args:
        search_path: Directory to scan for skill subdirectories.
        source: Optional event source (agent or crew) for event emission.

    Returns:
        List of Skill instances at METADATA level.
    """
    if not search_path.is_dir():
        msg = f"Skill search path does not exist or is not a directory: {search_path}"
        raise FileNotFoundError(msg)

    skills: list[Skill] = []

    if source is not None:
        crewai_event_bus.emit(
            source,
            event=SkillDiscoveryStartedEvent(
                from_agent=source,
                search_path=search_path,
            ),
        )

    for child in sorted(search_path.iterdir()):
        if not child.is_dir():
            continue
        skill_md = child / SKILL_FILENAME
        if not skill_md.is_file():
            continue
        try:
            skill = load_skill_metadata(child)
            skills.append(skill)
            if source is not None:
                crewai_event_bus.emit(
                    source,
                    event=SkillLoadedEvent(
                        from_agent=source,
                        skill_name=skill.name,
                        skill_path=skill.path,
                        disclosure_level=skill.disclosure_level,
                    ),
                )
        except Exception as e:
            _logger.warning("Failed to load skill from %s: %s", child, e)
            if source is not None:
                crewai_event_bus.emit(
                    source,
                    event=SkillLoadFailedEvent(
                        from_agent=source,
                        skill_name=child.name,
                        skill_path=child,
                        error=str(e),
                    ),
                )

    if source is not None:
        crewai_event_bus.emit(
            source,
            event=SkillDiscoveryCompletedEvent(
                from_agent=source,
                search_path=search_path,
                skills_found=len(skills),
                skill_names=[s.name for s in skills],
            ),
        )

    return skills


def activate_skill(
    skill: Skill,
    source: BaseAgent | None = None,
) -> Skill:
    """Promote a skill to INSTRUCTIONS disclosure level.

    Idempotent: returns the skill unchanged if already at or above INSTRUCTIONS.

    Args:
        skill: Skill to activate.
        source: Optional event source for event emission.

    Returns:
        Skill at INSTRUCTIONS level or higher.
    """
    if skill.disclosure_level >= INSTRUCTIONS:
        return skill

    activated = load_skill_instructions(skill)

    if source is not None:
        crewai_event_bus.emit(
            source,
            event=SkillActivatedEvent(
                from_agent=source,
                skill_name=activated.name,
                skill_path=activated.path,
                disclosure_level=activated.disclosure_level,
            ),
        )

    return activated


def load_resources(skill: Skill) -> Skill:
    """Promote a skill to RESOURCES disclosure level.

    Args:
        skill: Skill to promote.

    Returns:
        Skill at RESOURCES level.
    """
    return load_skill_resources(skill)


def format_skill_context(skill: Skill) -> str:
    """Format skill information for agent prompt injection.

    At METADATA level: returns name and description only.
    At INSTRUCTIONS level or above: returns full SKILL.md body.

    Args:
        skill: The skill to format.

    Returns:
        Formatted skill context string.
    """
    if skill.disclosure_level >= INSTRUCTIONS and skill.instructions:
        parts = [
            f"## Skill: {skill.name}",
            skill.description,
            "",
            skill.instructions,
        ]
        if skill.disclosure_level >= RESOURCES and skill.resource_files:
            parts.append("")
            parts.append("### Available Resources")
            for dir_name, files in sorted(skill.resource_files.items()):
                if files:
                    parts.append(f"- **{dir_name}/**: {', '.join(files)}")
        return "\n".join(parts)
    return f"## Skill: {skill.name}\n{skill.description}"
175  lib/crewai/src/crewai/skills/models.py  Normal file
@@ -0,0 +1,175 @@
"""Pydantic data models for the Agent Skills standard.

Defines DisclosureLevel, SkillFrontmatter, and Skill models for
progressive disclosure of skill information.
"""

from __future__ import annotations

from pathlib import Path
from typing import Annotated, Any, Final, Literal

from pydantic import BaseModel, ConfigDict, Field, model_validator

from crewai.skills.validation import (
    MAX_SKILL_NAME_LENGTH,
    MIN_SKILL_NAME_LENGTH,
    SKILL_NAME_PATTERN,
)


MAX_DESCRIPTION_LENGTH: Final[int] = 1024
ResourceDirName = Literal["scripts", "references", "assets"]


DisclosureLevel = Annotated[
    Literal[1, 2, 3], "Progressive disclosure levels for skill loading."
]

METADATA: Final[
    Annotated[
        DisclosureLevel, "Only frontmatter metadata is loaded (name, description)."
    ]
] = 1
INSTRUCTIONS: Final[Annotated[DisclosureLevel, "Full SKILL.md body is loaded."]] = 2
RESOURCES: Final[
    Annotated[
        DisclosureLevel,
        "Resource directories (scripts, references, assets) are cataloged.",
    ]
] = 3


class SkillFrontmatter(BaseModel):
    """YAML frontmatter from a SKILL.md file.

    Attributes:
        name: Unique skill identifier (1-64 chars, lowercase alphanumeric + hyphens).
        description: Human-readable description (1-1024 chars).
        license: Optional license name or reference.
        compatibility: Optional compatibility information (max 500 chars).
        metadata: Optional additional metadata as string key-value pairs.
        allowed_tools: Optional space-delimited list of pre-approved tools.
    """

    model_config = ConfigDict(frozen=True, populate_by_name=True)

    name: str = Field(
        min_length=MIN_SKILL_NAME_LENGTH,
        max_length=MAX_SKILL_NAME_LENGTH,
        pattern=SKILL_NAME_PATTERN,
    )
    description: str = Field(min_length=1, max_length=MAX_DESCRIPTION_LENGTH)
    license: str | None = Field(
        default=None,
        description="SPDX license identifier or free-text license reference, e.g. 'MIT', 'Apache-2.0'.",
    )
    compatibility: str | None = Field(
        default=None,
        max_length=500,
        description="Version or platform constraints for the skill, e.g. 'crewai >= 0.80'.",
    )
    metadata: dict[str, str] | None = Field(
        default=None,
        description="Arbitrary string key-value pairs for custom skill metadata.",
    )
    allowed_tools: list[str] | None = Field(
        default=None,
        alias="allowed-tools",
        description="Pre-approved tool names the skill may use, parsed from a space-delimited string in frontmatter.",
    )

    @model_validator(mode="before")
    @classmethod
    def parse_allowed_tools(cls, values: dict[str, Any]) -> dict[str, Any]:
        """Parse space-delimited allowed-tools string into a list."""
        key = "allowed-tools"
        alt_key = "allowed_tools"
        raw = values.get(key) or values.get(alt_key)
        if isinstance(raw, str):
            values[key] = raw.split()
        return values


class Skill(BaseModel):
    """A loaded Agent Skill with progressive disclosure support.

    Attributes:
        frontmatter: Parsed YAML frontmatter.
        instructions: Full SKILL.md body text (populated at INSTRUCTIONS level).
        path: Filesystem path to the skill directory.
        disclosure_level: Current disclosure level of the skill.
        resource_files: Cataloged resource files (populated at RESOURCES level).
    """

    frontmatter: SkillFrontmatter = Field(
        description="Parsed YAML frontmatter from SKILL.md.",
    )
    instructions: str | None = Field(
        default=None,
        description="Full SKILL.md body text, populated at INSTRUCTIONS level.",
    )
    path: Path = Field(
        description="Filesystem path to the skill directory.",
    )
    disclosure_level: DisclosureLevel = Field(
        default=METADATA,
        description="Current progressive disclosure level of the skill.",
    )
    resource_files: dict[ResourceDirName, list[str]] | None = Field(
        default=None,
        description="Cataloged resource files by directory, populated at RESOURCES level.",
    )

    @property
    def name(self) -> str:
        """Skill name from frontmatter."""
        return self.frontmatter.name

    @property
    def description(self) -> str:
        """Skill description from frontmatter."""
        return self.frontmatter.description

    @property
    def scripts_dir(self) -> Path:
        """Path to the scripts directory."""
        return self.path / "scripts"

    @property
    def references_dir(self) -> Path:
        """Path to the references directory."""
        return self.path / "references"

    @property
    def assets_dir(self) -> Path:
        """Path to the assets directory."""
        return self.path / "assets"

    def with_disclosure_level(
        self,
        level: DisclosureLevel,
        instructions: str | None = None,
        resource_files: dict[ResourceDirName, list[str]] | None = None,
    ) -> Skill:
        """Create a new Skill at a different disclosure level.

        Args:
            level: The new disclosure level.
            instructions: Optional instructions body text.
            resource_files: Optional cataloged resource files.

        Returns:
            A new Skill instance at the specified disclosure level.
        """
        return Skill(
            frontmatter=self.frontmatter,
            instructions=instructions
            if instructions is not None
            else self.instructions,
            path=self.path,
            disclosure_level=level,
            resource_files=(
                resource_files if resource_files is not None else self.resource_files
            ),
        )
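The `allowed-tools` normalization above runs as a pydantic `model_validator(mode="before")`, but the transformation itself is plain dict manipulation. A hedged stand-in reproducing only the space-splitting behavior:

```python
from typing import Any


def parse_allowed_tools(values: dict[str, Any]) -> dict[str, Any]:
    # Mirrors SkillFrontmatter.parse_allowed_tools: a space-delimited
    # 'allowed-tools' string becomes a list; non-string values (e.g. an
    # already-parsed list, or a missing key) pass through untouched.
    key = "allowed-tools"
    alt_key = "allowed_tools"
    raw = values.get(key) or values.get(alt_key)
    if isinstance(raw, str):
        values[key] = raw.split()
    return values


parsed = parse_allowed_tools({"name": "demo", "allowed-tools": "web-search file-read"})
untouched = parse_allowed_tools({"name": "demo"})
```

In the real model this runs before field validation, so the `allowed-tools` alias then populates the `allowed_tools: list[str] | None` field.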
194  lib/crewai/src/crewai/skills/parser.py  Normal file
@@ -0,0 +1,194 @@
"""SKILL.md file parsing for the Agent Skills standard.

Parses YAML frontmatter and markdown body from SKILL.md files,
and provides progressive loading functions for skill data.
"""

from __future__ import annotations

import logging
import re
from pathlib import Path
from typing import Any, Final

import yaml

from crewai.skills.models import (
    INSTRUCTIONS,
    METADATA,
    RESOURCES,
    ResourceDirName,
    Skill,
    SkillFrontmatter,
)
from crewai.skills.validation import validate_directory_name


_logger = logging.getLogger(__name__)


SKILL_FILENAME: Final[str] = "SKILL.md"
_CLOSING_DELIMITER: Final[re.Pattern[str]] = re.compile(r"\n---[ \t]*(?:\n|$)")
_MAX_BODY_CHARS: Final[int] = 50_000


class SkillParseError(ValueError):
    """Error raised when SKILL.md parsing fails."""


def parse_frontmatter(content: str) -> tuple[dict[str, Any], str]:
    """Split SKILL.md content into frontmatter dict and body text.

    Args:
        content: Raw SKILL.md file content.

    Returns:
        Tuple of (frontmatter dict, body text).

    Raises:
        SkillParseError: If frontmatter delimiters are missing or YAML is invalid.
    """
    if not content.startswith("---"):
        msg = "SKILL.md must start with '---' frontmatter delimiter"
        raise SkillParseError(msg)

    match = _CLOSING_DELIMITER.search(content, pos=3)
    if match is None:
        msg = "SKILL.md missing closing '---' frontmatter delimiter"
        raise SkillParseError(msg)

    yaml_content = content[3 : match.start()].strip()
    body = content[match.end() :].strip()

    try:
        frontmatter = yaml.safe_load(yaml_content)
    except yaml.YAMLError as e:
        msg = f"Invalid YAML in frontmatter: {e}"
        raise SkillParseError(msg) from e

    if not isinstance(frontmatter, dict):
        msg = "Frontmatter must be a YAML mapping"
        raise SkillParseError(msg)

    return frontmatter, body


def parse_skill_md(path: Path) -> tuple[SkillFrontmatter, str]:
    """Read and parse a SKILL.md file.

    Args:
        path: Path to the SKILL.md file.

    Returns:
        Tuple of (SkillFrontmatter, body text).

    Raises:
        FileNotFoundError: If the file does not exist.
        SkillParseError: If parsing fails.
    """
    content = path.read_text(encoding="utf-8")
    frontmatter_dict, body = parse_frontmatter(content)
    frontmatter = SkillFrontmatter(**frontmatter_dict)
    return frontmatter, body


def load_skill_metadata(skill_dir: Path) -> Skill:
    """Load a skill at METADATA disclosure level.

    Parses SKILL.md frontmatter only and validates directory name.

    Args:
        skill_dir: Path to the skill directory.

    Returns:
        Skill instance at METADATA level.

    Raises:
        FileNotFoundError: If SKILL.md is missing.
        SkillParseError: If parsing fails.
        ValueError: If directory name doesn't match skill name.
    """
    skill_md_path = skill_dir / SKILL_FILENAME
    frontmatter, body = parse_skill_md(skill_md_path)
    validate_directory_name(skill_dir, frontmatter.name)
    if len(body) > _MAX_BODY_CHARS:
        _logger.warning(
            "SKILL.md body for '%s' is %d chars (threshold: %d). "
            "Large bodies may consume significant context window when injected into prompts.",
            frontmatter.name,
            len(body),
            _MAX_BODY_CHARS,
        )
    return Skill(
        frontmatter=frontmatter,
        path=skill_dir,
        disclosure_level=METADATA,
    )


def load_skill_instructions(skill: Skill) -> Skill:
    """Promote a skill to INSTRUCTIONS disclosure level.

    Reads the full SKILL.md body text.

    Args:
        skill: Skill at METADATA level.

    Returns:
        New Skill instance at INSTRUCTIONS level.
    """
    if skill.disclosure_level >= INSTRUCTIONS:
        return skill

    skill_md_path = skill.path / SKILL_FILENAME
    _, body = parse_skill_md(skill_md_path)
    if len(body) > _MAX_BODY_CHARS:
        _logger.warning(
            "SKILL.md body for '%s' is %d chars (threshold: %d). "
            "Large bodies may consume significant context window when injected into prompts.",
            skill.name,
            len(body),
            _MAX_BODY_CHARS,
        )
    return skill.with_disclosure_level(
        level=INSTRUCTIONS,
        instructions=body,
    )


def load_skill_resources(skill: Skill) -> Skill:
    """Promote a skill to RESOURCES disclosure level.

    Catalogs available resource directories (scripts, references, assets).

    Args:
        skill: Skill at any level.

    Returns:
        New Skill instance at RESOURCES level.
    """
    if skill.disclosure_level >= RESOURCES:
        return skill

    if skill.disclosure_level < INSTRUCTIONS:
        skill = load_skill_instructions(skill)

    resource_dirs: list[tuple[ResourceDirName, Path]] = [
        ("scripts", skill.scripts_dir),
        ("references", skill.references_dir),
        ("assets", skill.assets_dir),
    ]
    resource_files: dict[ResourceDirName, list[str]] = {}
    for dir_name, resource_dir in resource_dirs:
        if resource_dir.is_dir():
            resource_files[dir_name] = sorted(
                str(f.relative_to(resource_dir))
                for f in resource_dir.rglob("*")
                if f.is_file()
            )

    return skill.with_disclosure_level(
        level=RESOURCES,
        instructions=skill.instructions,
        resource_files=resource_files,
    )
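The delimiter-splitting half of `parse_frontmatter` above is pure string work and can be checked without PyYAML. This sketch stops at the raw YAML text rather than decoding it (the real function hands that text to `yaml.safe_load`):

```python
import re

# Same closing-delimiter pattern as parser._CLOSING_DELIMITER: a '---'
# on its own line (trailing spaces/tabs allowed), or at end of file.
_CLOSING = re.compile(r"\n---[ \t]*(?:\n|$)")


def split_frontmatter(content: str) -> tuple[str, str]:
    # First half of parse_frontmatter: locate the delimiters and return
    # (raw YAML text, body). YAML decoding is deliberately omitted here.
    if not content.startswith("---"):
        raise ValueError("SKILL.md must start with '---' frontmatter delimiter")
    match = _CLOSING.search(content, pos=3)
    if match is None:
        raise ValueError("SKILL.md missing closing '---' frontmatter delimiter")
    return content[3 : match.start()].strip(), content[match.end() :].strip()


raw_yaml, body = split_frontmatter("---\nname: demo\ndescription: d\n---\nBody text.")
```

Searching from `pos=3` skips the opening `---`, so a frontmatter block with no body (closing `---` at end of file) still parses.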
31  lib/crewai/src/crewai/skills/validation.py  Normal file
@@ -0,0 +1,31 @@
"""Validation functions for Agent Skills specification constraints.

Validates skill names and directory structures per the Agent Skills standard.
"""

from __future__ import annotations

import re
from pathlib import Path
from typing import Final


MAX_SKILL_NAME_LENGTH: Final[int] = 64
MIN_SKILL_NAME_LENGTH: Final[int] = 1
SKILL_NAME_PATTERN: Final[re.Pattern[str]] = re.compile(r"^[a-z0-9]+(?:-[a-z0-9]+)*$")


def validate_directory_name(skill_dir: Path, skill_name: str) -> None:
    """Validate that a directory name matches the skill name.

    Args:
        skill_dir: Path to the skill directory.
        skill_name: The declared skill name from frontmatter.

    Raises:
        ValueError: If the directory name does not match the skill name.
    """
    dir_name = skill_dir.name
    if dir_name != skill_name:
        msg = f"Directory name '{dir_name}' does not match skill name '{skill_name}'"
        raise ValueError(msg)
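The name pattern above admits lowercase alphanumeric segments separated by single hyphens, which is why the `Invalid--Name` fixture below is rejected. A standalone check (the `is_valid_skill_name` helper is illustrative, not part of the module):

```python
import re

# Same pattern as validation.SKILL_NAME_PATTERN.
SKILL_NAME_PATTERN = re.compile(r"^[a-z0-9]+(?:-[a-z0-9]+)*$")


def is_valid_skill_name(name: str) -> bool:
    # Lowercase alphanumeric segments joined by single hyphens; no
    # leading, trailing, or doubled hyphens; 1-64 chars.
    return 1 <= len(name) <= 64 and SKILL_NAME_PATTERN.fullmatch(name) is not None
```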
@@ -1,37 +1,40 @@
|
||||
"""Token counting callback handler for LLM interactions.
|
||||
|
||||
This module provides a callback handler that tracks token usage
|
||||
for LLM API calls through the litellm library.
|
||||
for LLM API calls. Works standalone and also integrates with litellm
|
||||
when available (for the litellm fallback path).
|
||||
"""
|
||||
|
||||
from typing import TYPE_CHECKING, Any
|
||||
|
||||
|
||||
if TYPE_CHECKING:
|
||||
from litellm.integrations.custom_logger import CustomLogger
|
||||
from litellm.types.utils import Usage
|
||||
else:
|
||||
try:
|
||||
from litellm.integrations.custom_logger import CustomLogger
|
||||
from litellm.types.utils import Usage
|
||||
except ImportError:
|
||||
|
||||
class CustomLogger:
|
||||
"""Fallback CustomLogger when litellm is not available."""
|
||||
|
||||
class Usage:
|
||||
"""Fallback Usage when litellm is not available."""
|
||||
|
||||
from typing import Any
|
||||
|
||||
from crewai.agents.agent_builder.utilities.base_token_process import TokenProcess
|
||||
from crewai.utilities.logger_utils import suppress_warnings
|
||||
|
||||
|
||||
class TokenCalcHandler(CustomLogger):
|
||||
# Check if litellm is available for callback integration
|
||||
try:
|
||||
from litellm.integrations.custom_logger import CustomLogger as LiteLLMCustomLogger
|
||||
|
||||
LITELLM_AVAILABLE = True
|
||||
except ImportError:
|
||||
LiteLLMCustomLogger = None # type: ignore[misc, assignment]
|
||||
LITELLM_AVAILABLE = False
|
||||
|
||||
|
||||
# Create a base class that conditionally inherits from litellm's CustomLogger
|
||||
# when available, or from object when not available
|
||||
if LITELLM_AVAILABLE and LiteLLMCustomLogger is not None:
|
||||
_BaseClass: type = LiteLLMCustomLogger
|
||||
else:
|
||||
_BaseClass = object
|
||||
|
||||
|
||||
class TokenCalcHandler(_BaseClass): # type: ignore[misc]
|
||||
"""Handler for calculating and tracking token usage in LLM calls.
|
||||
|
||||
This handler integrates with litellm's logging system to track
|
||||
prompt tokens, completion tokens, and cached tokens across requests.
|
||||
This handler tracks prompt tokens, completion tokens, and cached tokens
|
||||
across requests. It works standalone and also integrates with litellm's
|
||||
logging system when litellm is installed (for the fallback path).
|
||||
|
||||
Attributes:
|
||||
token_cost_process: The token process tracker to accumulate usage metrics.
|
||||
@@ -43,7 +46,9 @@ class TokenCalcHandler(CustomLogger):
|
||||
Args:
|
||||
token_cost_process: Optional token process tracker for accumulating metrics.
|
||||
"""
|
||||
super().__init__(**kwargs)
|
||||
# Only call super().__init__ if we have a real parent class with __init__
|
||||
if LITELLM_AVAILABLE and LiteLLMCustomLogger is not None:
|
||||
super().__init__(**kwargs)
|
||||
self.token_cost_process = token_cost_process
|
||||
|
||||
def log_success_event(
|
||||
@@ -55,6 +60,10 @@ class TokenCalcHandler(CustomLogger):
|
||||
) -> None:
|
||||
"""Log successful LLM API call and track token usage.
|
||||
|
||||
This method has the same interface as litellm's CustomLogger.log_success_event()
|
||||
so it can be used as a litellm callback when litellm is installed, or called
|
||||
directly when litellm is not installed.
|
||||
|
||||
Args:
|
||||
kwargs: The arguments passed to the LLM call.
|
||||
response_obj: The response object from the LLM API.
|
||||
@@ -66,7 +75,7 @@ class TokenCalcHandler(CustomLogger):
|
||||
|
||||
with suppress_warnings():
|
||||
if isinstance(response_obj, dict) and "usage" in response_obj:
|
||||
usage: Usage = response_obj["usage"]
|
||||
usage = response_obj["usage"]
|
||||
if usage:
|
||||
self.token_cost_process.sum_successful_requests(1)
|
||||
if hasattr(usage, "prompt_tokens"):
|
||||
|
||||
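The optional-dependency pattern in the diff above — pick the base class at import time, and guard the `super().__init__()` call — can be sketched self-contained. `TokenHandler` here is a hypothetical minimal handler, not the real `TokenCalcHandler`; it only demonstrates the conditional-base technique:

```python
# Inherit from litellm's CustomLogger when importable, otherwise from
# object, so the handler works whether or not litellm is installed.
try:
    from litellm.integrations.custom_logger import CustomLogger as _Base

    _LITELLM_AVAILABLE = True
except ImportError:
    _Base = object  # type: ignore[assignment]
    _LITELLM_AVAILABLE = False


class TokenHandler(_Base):  # type: ignore[misc]
    def __init__(self) -> None:
        # Only chain to a real parent __init__; object's takes no work.
        if _LITELLM_AVAILABLE:
            super().__init__()
        self.successful_requests = 0

    def log_success_event(self, kwargs, response_obj, start_time, end_time) -> None:
        # Same signature shape as litellm's callback hook, so the same
        # object can be registered as a callback or called directly.
        self.successful_requests += 1


handler = TokenHandler()
handler.log_success_event({}, {"usage": None}, 0.0, 0.0)
```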
1209  lib/crewai/tests/memory/test_memory_root_scope.py  Normal file
File diff suppressed because it is too large
0  lib/crewai/tests/skills/__init__.py  Normal file
4  lib/crewai/tests/skills/fixtures/invalid-name/SKILL.md  Normal file
@@ -0,0 +1,4 @@
---
name: Invalid--Name
description: This skill has an invalid name.
---
4  lib/crewai/tests/skills/fixtures/minimal-skill/SKILL.md  Normal file
@@ -0,0 +1,4 @@
---
name: minimal-skill
description: A minimal skill with only required fields.
---
22  lib/crewai/tests/skills/fixtures/valid-skill/SKILL.md  Normal file
@@ -0,0 +1,22 @@
---
name: valid-skill
description: A complete test skill with all optional directories.
license: Apache-2.0
compatibility: crewai>=0.1.0
metadata:
  author: test
  version: "1.0"
allowed-tools: web-search file-read
---

## Instructions

This skill provides comprehensive instructions for the agent.

### Usage

Follow these steps to use the skill effectively.

### Notes

Additional context for the agent.
@@ -0,0 +1 @@
{"key": "value"}
@@ -0,0 +1,3 @@
# Reference Guide

This is a reference document for the skill.
@@ -0,0 +1,2 @@
#!/bin/bash
echo "setup"
78  lib/crewai/tests/skills/test_integration.py  Normal file
@@ -0,0 +1,78 @@
"""Integration tests for the skills system."""

from pathlib import Path

import pytest

from crewai.skills.loader import activate_skill, discover_skills, format_skill_context
from crewai.skills.models import INSTRUCTIONS, METADATA


def _create_skill_dir(parent: Path, name: str, body: str = "Body.") -> Path:
    """Helper to create a skill directory with SKILL.md."""
    skill_dir = parent / name
    skill_dir.mkdir()
    (skill_dir / "SKILL.md").write_text(
        f"---\nname: {name}\ndescription: Skill {name}\n---\n{body}"
    )
    return skill_dir


class TestSkillDiscoveryAndActivation:
    """End-to-end tests for discover + activate workflow."""

    def test_discover_and_activate(self, tmp_path: Path) -> None:
        _create_skill_dir(tmp_path, "my-skill", body="Use this skill.")
        skills = discover_skills(tmp_path)
        assert len(skills) == 1
        assert skills[0].disclosure_level == METADATA

        activated = activate_skill(skills[0])
        assert activated.disclosure_level == INSTRUCTIONS
        assert activated.instructions == "Use this skill."

        context = format_skill_context(activated)
        assert "## Skill: my-skill" in context
        assert "Use this skill." in context

    def test_filter_by_skill_names(self, tmp_path: Path) -> None:
        _create_skill_dir(tmp_path, "alpha")
        _create_skill_dir(tmp_path, "beta")
        _create_skill_dir(tmp_path, "gamma")

        all_skills = discover_skills(tmp_path)
        wanted = {"alpha", "gamma"}
        filtered = [s for s in all_skills if s.name in wanted]
        assert {s.name for s in filtered} == {"alpha", "gamma"}

    def test_full_fixture_skill(self) -> None:
        fixtures = Path(__file__).parent / "fixtures"
        valid_dir = fixtures / "valid-skill"
        if not valid_dir.exists():
            pytest.skip("Fixture not found")

        skills = discover_skills(fixtures)
        valid_skills = [s for s in skills if s.name == "valid-skill"]
        assert len(valid_skills) == 1

        skill = valid_skills[0]
        assert skill.frontmatter.license == "Apache-2.0"
        assert skill.frontmatter.allowed_tools == ["web-search", "file-read"]

        activated = activate_skill(skill)
        assert "Instructions" in (activated.instructions or "")

    def test_multiple_search_paths(self, tmp_path: Path) -> None:
        path_a = tmp_path / "a"
        path_a.mkdir()
        _create_skill_dir(path_a, "skill-a")

        path_b = tmp_path / "b"
        path_b.mkdir()
        _create_skill_dir(path_b, "skill-b")

        all_skills = []
        for search_path in [path_a, path_b]:
            all_skills.extend(discover_skills(search_path))
        names = {s.name for s in all_skills}
        assert names == {"skill-a", "skill-b"}
161  lib/crewai/tests/skills/test_loader.py  Normal file
@@ -0,0 +1,161 @@
"""Tests for skills/loader.py."""

from pathlib import Path

import pytest

from crewai.skills.loader import (
    activate_skill,
    discover_skills,
    format_skill_context,
    load_resources,
)
from crewai.skills.models import INSTRUCTIONS, METADATA, RESOURCES, Skill, SkillFrontmatter
from crewai.skills.parser import load_skill_metadata


def _create_skill_dir(parent: Path, name: str, body: str = "Body.") -> Path:
    """Helper to create a skill directory with SKILL.md."""
    skill_dir = parent / name
    skill_dir.mkdir()
    (skill_dir / "SKILL.md").write_text(
        f"---\nname: {name}\ndescription: Skill {name}\n---\n{body}"
    )
    return skill_dir


class TestDiscoverSkills:
    """Tests for discover_skills."""

    def test_finds_valid_skills(self, tmp_path: Path) -> None:
        _create_skill_dir(tmp_path, "alpha")
        _create_skill_dir(tmp_path, "beta")
        skills = discover_skills(tmp_path)
        names = {s.name for s in skills}
        assert names == {"alpha", "beta"}

    def test_skips_dirs_without_skill_md(self, tmp_path: Path) -> None:
        _create_skill_dir(tmp_path, "valid")
        (tmp_path / "no-skill").mkdir()
        skills = discover_skills(tmp_path)
        assert len(skills) == 1
        assert skills[0].name == "valid"

    def test_skips_invalid_skills(self, tmp_path: Path) -> None:
        _create_skill_dir(tmp_path, "good-skill")
        bad_dir = tmp_path / "bad-skill"
        bad_dir.mkdir()
        (bad_dir / "SKILL.md").write_text(
            "---\nname: Wrong-Name\ndescription: bad\n---\n"
        )
        skills = discover_skills(tmp_path)
        assert len(skills) == 1

    def test_empty_directory(self, tmp_path: Path) -> None:
        skills = discover_skills(tmp_path)
        assert skills == []

    def test_nonexistent_path(self, tmp_path: Path) -> None:
        with pytest.raises(FileNotFoundError):
            discover_skills(tmp_path / "nonexistent")

    def test_sorted_by_name(self, tmp_path: Path) -> None:
        _create_skill_dir(tmp_path, "zebra")
        _create_skill_dir(tmp_path, "alpha")
        skills = discover_skills(tmp_path)
        assert [s.name for s in skills] == ["alpha", "zebra"]


class TestActivateSkill:
    """Tests for activate_skill."""

    def test_promotes_to_instructions(self, tmp_path: Path) -> None:
        _create_skill_dir(tmp_path, "my-skill", body="Instructions.")
        skill = load_skill_metadata(tmp_path / "my-skill")
        activated = activate_skill(skill)
        assert activated.disclosure_level == INSTRUCTIONS
        assert activated.instructions == "Instructions."

    def test_idempotent(self, tmp_path: Path) -> None:
        _create_skill_dir(tmp_path, "my-skill")
        skill = load_skill_metadata(tmp_path / "my-skill")
        activated = activate_skill(skill)
        again = activate_skill(activated)
        assert again is activated


class TestLoadResources:
    """Tests for load_resources."""

    def test_promotes_to_resources(self, tmp_path: Path) -> None:
        skill_dir = _create_skill_dir(tmp_path, "my-skill")
        (skill_dir / "scripts").mkdir()
        (skill_dir / "scripts" / "run.sh").write_text("#!/bin/bash")
        skill = load_skill_metadata(skill_dir)
        full = load_resources(skill)
        assert full.disclosure_level == RESOURCES


class TestFormatSkillContext:
    """Tests for format_skill_context."""

    def test_metadata_level(self, tmp_path: Path) -> None:
        fm = SkillFrontmatter(name="test-skill", description="A skill")
        skill = Skill(
            frontmatter=fm, path=tmp_path, disclosure_level=METADATA
        )
        ctx = format_skill_context(skill)
        assert "## Skill: test-skill" in ctx
        assert "A skill" in ctx

    def test_instructions_level(self, tmp_path: Path) -> None:
        fm = SkillFrontmatter(name="test-skill", description="A skill")
        skill = Skill(
            frontmatter=fm,
            path=tmp_path,
            disclosure_level=INSTRUCTIONS,
            instructions="Do these things.",
        )
        ctx = format_skill_context(skill)
        assert "## Skill: test-skill" in ctx
        assert "Do these things." in ctx

    def test_no_instructions_at_instructions_level(self, tmp_path: Path) -> None:
        fm = SkillFrontmatter(name="test-skill", description="A skill")
        skill = Skill(
            frontmatter=fm,
            path=tmp_path,
            disclosure_level=INSTRUCTIONS,
            instructions=None,
        )
        ctx = format_skill_context(skill)
        assert ctx == "## Skill: test-skill\nA skill"

    def test_resources_level(self, tmp_path: Path) -> None:
        fm = SkillFrontmatter(name="test-skill", description="A skill")
        skill = Skill(
            frontmatter=fm,
            path=tmp_path,
            disclosure_level=RESOURCES,
            instructions="Do things.",
            resource_files={
                "scripts": ["run.sh"],
                "assets": ["data.json", "config.yaml"],
            },
        )
        ctx = format_skill_context(skill)
        assert "### Available Resources" in ctx
        assert "**assets/**: data.json, config.yaml" in ctx
        assert "**scripts/**: run.sh" in ctx

    def test_resources_level_empty_files(self, tmp_path: Path) -> None:
        fm = SkillFrontmatter(name="test-skill", description="A skill")
        skill = Skill(
            frontmatter=fm,
            path=tmp_path,
            disclosure_level=RESOURCES,
            instructions="Do things.",
            resource_files={},
        )
        ctx = format_skill_context(skill)
        assert "### Available Resources" not in ctx
91 lib/crewai/tests/skills/test_models.py Normal file
@@ -0,0 +1,91 @@
"""Tests for skills/models.py."""

from pathlib import Path

import pytest

from crewai.skills.models import (
    INSTRUCTIONS,
    METADATA,
    RESOURCES,
    Skill,
    SkillFrontmatter,
)


class TestDisclosureLevel:
    """Tests for DisclosureLevel constants."""

    def test_ordering(self) -> None:
        assert METADATA < INSTRUCTIONS
        assert INSTRUCTIONS < RESOURCES

    def test_values(self) -> None:
        assert METADATA == 1
        assert INSTRUCTIONS == 2
        assert RESOURCES == 3


class TestSkillFrontmatter:
    """Tests for SkillFrontmatter model."""

    def test_required_fields(self) -> None:
        fm = SkillFrontmatter(name="my-skill", description="A test skill")
        assert fm.name == "my-skill"
        assert fm.description == "A test skill"
        assert fm.license is None
        assert fm.metadata is None
        assert fm.allowed_tools is None

    def test_all_fields(self) -> None:
        fm = SkillFrontmatter(
            name="web-search",
            description="Search the web",
            license="Apache-2.0",
            compatibility="crewai>=0.1.0",
            metadata={"author": "test"},
            allowed_tools=["browser"],
        )
        assert fm.license == "Apache-2.0"
        assert fm.metadata == {"author": "test"}
        assert fm.allowed_tools == ["browser"]

    def test_frozen(self) -> None:
        fm = SkillFrontmatter(name="my-skill", description="desc")
        with pytest.raises(Exception):
            fm.name = "other"  # type: ignore[misc]

    def test_invalid_name_rejected(self) -> None:
        with pytest.raises(ValueError):
            SkillFrontmatter(name="Invalid--Name", description="bad")


class TestSkill:
    """Tests for Skill model."""

    def test_properties(self, tmp_path: Path) -> None:
        fm = SkillFrontmatter(name="test-skill", description="desc")
        skill = Skill(frontmatter=fm, path=tmp_path / "test-skill")
        assert skill.name == "test-skill"
        assert skill.description == "desc"
        assert skill.disclosure_level == METADATA

    def test_resource_dirs(self, tmp_path: Path) -> None:
        skill_dir = tmp_path / "test-skill"
        skill_dir.mkdir()
        fm = SkillFrontmatter(name="test-skill", description="desc")
        skill = Skill(frontmatter=fm, path=skill_dir)
        assert skill.scripts_dir == skill_dir / "scripts"
        assert skill.references_dir == skill_dir / "references"
        assert skill.assets_dir == skill_dir / "assets"

    def test_with_disclosure_level(self, tmp_path: Path) -> None:
        fm = SkillFrontmatter(name="test-skill", description="desc")
        skill = Skill(frontmatter=fm, path=tmp_path)
        promoted = skill.with_disclosure_level(
            INSTRUCTIONS,
            instructions="Do this.",
        )
        assert promoted.disclosure_level == INSTRUCTIONS
        assert promoted.instructions == "Do this."
        assert skill.disclosure_level == METADATA
167 lib/crewai/tests/skills/test_parser.py Normal file
@@ -0,0 +1,167 @@
"""Tests for skills/parser.py."""

from pathlib import Path

import pytest

from crewai.skills.models import INSTRUCTIONS, METADATA, RESOURCES
from crewai.skills.parser import (
    SkillParseError,
    load_skill_instructions,
    load_skill_metadata,
    load_skill_resources,
    parse_frontmatter,
    parse_skill_md,
)


class TestParseFrontmatter:
    """Tests for parse_frontmatter."""

    def test_valid_frontmatter_and_body(self) -> None:
        content = "---\nname: test\ndescription: A test\n---\n\nBody text here."
        fm, body = parse_frontmatter(content)
        assert fm["name"] == "test"
        assert fm["description"] == "A test"
        assert body == "Body text here."

    def test_empty_body(self) -> None:
        content = "---\nname: test\ndescription: A test\n---"
        fm, body = parse_frontmatter(content)
        assert fm["name"] == "test"
        assert body == ""

    def test_missing_opening_delimiter(self) -> None:
        with pytest.raises(SkillParseError, match="must start with"):
            parse_frontmatter("name: test\n---\nBody")

    def test_missing_closing_delimiter(self) -> None:
        with pytest.raises(SkillParseError, match="missing closing"):
            parse_frontmatter("---\nname: test\n")

    def test_invalid_yaml(self) -> None:
        with pytest.raises(SkillParseError, match="Invalid YAML"):
            parse_frontmatter("---\n: :\n bad: [yaml\n---\nBody")

    def test_triple_dash_in_body(self) -> None:
        content = "---\nname: test\ndescription: desc\n---\n\nBody with --- inside."
        fm, body = parse_frontmatter(content)
        assert "---" in body

    def test_inline_triple_dash_in_yaml_value(self) -> None:
        content = '---\nname: test\ndescription: "Use---carefully"\n---\n\nBody.'
        fm, body = parse_frontmatter(content)
        assert fm["description"] == "Use---carefully"
        assert body == "Body."

    def test_unicode_content(self) -> None:
        content = "---\nname: test\ndescription: Beschreibung\n---\n\nUnicode: \u00e4\u00f6\u00fc\u00df"
        fm, body = parse_frontmatter(content)
        assert fm["description"] == "Beschreibung"
        assert "\u00e4\u00f6\u00fc\u00df" in body

    def test_non_mapping_frontmatter(self) -> None:
        with pytest.raises(SkillParseError, match="must be a YAML mapping"):
            parse_frontmatter("---\n- item1\n- item2\n---\nBody")


class TestParseSkillMd:
    """Tests for parse_skill_md."""

    def test_valid_file(self, tmp_path: Path) -> None:
        skill_md = tmp_path / "SKILL.md"
        skill_md.write_text(
            "---\nname: my-skill\ndescription: desc\n---\nInstructions here."
        )
        fm, body = parse_skill_md(skill_md)
        assert fm.name == "my-skill"
        assert body == "Instructions here."

    def test_file_not_found(self, tmp_path: Path) -> None:
        with pytest.raises(FileNotFoundError):
            parse_skill_md(tmp_path / "nonexistent" / "SKILL.md")


class TestLoadSkillMetadata:
    """Tests for load_skill_metadata."""

    def test_valid_skill(self, tmp_path: Path) -> None:
        skill_dir = tmp_path / "my-skill"
        skill_dir.mkdir()
        (skill_dir / "SKILL.md").write_text(
            "---\nname: my-skill\ndescription: Test skill\n---\nBody"
        )
        skill = load_skill_metadata(skill_dir)
        assert skill.name == "my-skill"
        assert skill.disclosure_level == METADATA
        assert skill.instructions is None

    def test_directory_name_mismatch(self, tmp_path: Path) -> None:
        skill_dir = tmp_path / "wrong-name"
        skill_dir.mkdir()
        (skill_dir / "SKILL.md").write_text(
            "---\nname: my-skill\ndescription: Test skill\n---\n"
        )
        with pytest.raises(ValueError, match="does not match"):
            load_skill_metadata(skill_dir)


class TestLoadSkillInstructions:
    """Tests for load_skill_instructions."""

    def test_promotes_to_instructions(self, tmp_path: Path) -> None:
        skill_dir = tmp_path / "my-skill"
        skill_dir.mkdir()
        (skill_dir / "SKILL.md").write_text(
            "---\nname: my-skill\ndescription: Test\n---\nFull body."
        )
        skill = load_skill_metadata(skill_dir)
        promoted = load_skill_instructions(skill)
        assert promoted.disclosure_level == INSTRUCTIONS
        assert promoted.instructions == "Full body."

    def test_idempotent(self, tmp_path: Path) -> None:
        skill_dir = tmp_path / "my-skill"
        skill_dir.mkdir()
        (skill_dir / "SKILL.md").write_text(
            "---\nname: my-skill\ndescription: Test\n---\nBody."
        )
        skill = load_skill_metadata(skill_dir)
        promoted = load_skill_instructions(skill)
        again = load_skill_instructions(promoted)
        assert again is promoted


class TestLoadSkillResources:
    """Tests for load_skill_resources."""

    def test_catalogs_resources(self, tmp_path: Path) -> None:
        skill_dir = tmp_path / "my-skill"
        skill_dir.mkdir()
        (skill_dir / "SKILL.md").write_text(
            "---\nname: my-skill\ndescription: Test\n---\nBody."
        )
        (skill_dir / "scripts").mkdir()
        (skill_dir / "scripts" / "run.sh").write_text("#!/bin/bash")
        (skill_dir / "assets").mkdir()
        (skill_dir / "assets" / "data.json").write_text("{}")

        skill = load_skill_metadata(skill_dir)
        full = load_skill_resources(skill)
        assert full.disclosure_level == RESOURCES
        assert full.instructions == "Body."
        assert full.resource_files is not None
        assert "scripts" in full.resource_files
        assert "run.sh" in full.resource_files["scripts"]
        assert "assets" in full.resource_files
        assert "data.json" in full.resource_files["assets"]

    def test_no_resource_dirs(self, tmp_path: Path) -> None:
        skill_dir = tmp_path / "my-skill"
        skill_dir.mkdir()
        (skill_dir / "SKILL.md").write_text(
            "---\nname: my-skill\ndescription: Test\n---\nBody."
        )
        skill = load_skill_metadata(skill_dir)
        full = load_skill_resources(skill)
        assert full.resource_files == {}
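The delimiter-splitting behavior these parser tests exercise can be sketched in a few lines. This is a simplified reconstruction inferred from the assertions above: the real `parse_frontmatter` raises `SkillParseError` and parses full YAML via `yaml.safe_load`, while this sketch raises `ValueError` and only handles flat `key: value` pairs so it stays stdlib-only.

```python
def parse_frontmatter_sketch(content: str) -> tuple[dict, str]:
    """Split '---'-delimited frontmatter from the markdown body (sketch)."""
    if not content.startswith("---\n"):
        raise ValueError("SKILL.md must start with '---'")
    # Partition on the first newline-prefixed '---'; an inline '---' inside
    # a value (e.g. "Use---carefully") is not preceded by a newline, so it
    # does not terminate the frontmatter.
    fm_text, sep, body = content[4:].partition("\n---")
    if not sep:
        raise ValueError("missing closing '---' delimiter")
    fm: dict = {}
    for line in fm_text.splitlines():
        key, _, value = line.partition(":")
        fm[key.strip()] = value.strip().strip('"')
    return fm, body.strip()
```

The same split handles the empty-body case (`"---\n...\n---"`) and bodies that contain `---` on their own lines, matching `test_empty_body` and `test_triple_dash_in_body`.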
93 lib/crewai/tests/skills/test_validation.py Normal file
@@ -0,0 +1,93 @@
"""Tests for skills validation."""

from pathlib import Path

import pytest

from crewai.skills.models import SkillFrontmatter
from crewai.skills.validation import (
    MAX_SKILL_NAME_LENGTH,
    validate_directory_name,
)


def _make(name: str) -> SkillFrontmatter:
    """Create a SkillFrontmatter with the given name."""
    return SkillFrontmatter(name=name, description="desc")


class TestSkillNameValidation:
    """Tests for skill name constraints via SkillFrontmatter."""

    def test_simple_name(self) -> None:
        assert _make("web-search").name == "web-search"

    def test_single_word(self) -> None:
        assert _make("search").name == "search"

    def test_numeric(self) -> None:
        assert _make("tool3").name == "tool3"

    def test_all_digits(self) -> None:
        assert _make("123").name == "123"

    def test_single_char(self) -> None:
        assert _make("a").name == "a"

    def test_max_length(self) -> None:
        name = "a" * MAX_SKILL_NAME_LENGTH
        assert _make(name).name == name

    def test_multi_hyphen_segments(self) -> None:
        assert _make("my-cool-skill").name == "my-cool-skill"

    def test_empty_raises(self) -> None:
        with pytest.raises(ValueError):
            _make("")

    def test_too_long_raises(self) -> None:
        with pytest.raises(ValueError):
            _make("a" * (MAX_SKILL_NAME_LENGTH + 1))

    def test_uppercase_raises(self) -> None:
        with pytest.raises(ValueError):
            _make("MySkill")

    def test_leading_hyphen_raises(self) -> None:
        with pytest.raises(ValueError):
            _make("-skill")

    def test_trailing_hyphen_raises(self) -> None:
        with pytest.raises(ValueError):
            _make("skill-")

    def test_consecutive_hyphens_raises(self) -> None:
        with pytest.raises(ValueError):
            _make("my--skill")

    def test_underscore_raises(self) -> None:
        with pytest.raises(ValueError):
            _make("my_skill")

    def test_space_raises(self) -> None:
        with pytest.raises(ValueError):
            _make("my skill")

    def test_special_chars_raises(self) -> None:
        with pytest.raises(ValueError):
            _make("skill@v1")


class TestValidateDirectoryName:
    """Tests for validate_directory_name."""

    def test_matching_names(self, tmp_path: Path) -> None:
        skill_dir = tmp_path / "my-skill"
        skill_dir.mkdir()
        validate_directory_name(skill_dir, "my-skill")

    def test_mismatched_names(self, tmp_path: Path) -> None:
        skill_dir = tmp_path / "other-name"
        skill_dir.mkdir()
        with pytest.raises(ValueError, match="does not match"):
            validate_directory_name(skill_dir, "my-skill")
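Taken together, the accept/reject cases above imply a naming rule: lowercase alphanumeric segments joined by single hyphens, with a separate length cap. A hypothetical reconstruction (the real rule lives in `crewai.skills.validation` and `SkillFrontmatter`, and the actual value of `MAX_SKILL_NAME_LENGTH` is not shown here, so 64 below is an assumption):

```python
import re

# One or more [a-z0-9] segments joined by single hyphens; rejects empty
# names, uppercase, leading/trailing/doubled hyphens, underscores, spaces,
# and other special characters.
SKILL_NAME_RE = re.compile(r"^[a-z0-9]+(?:-[a-z0-9]+)*$")


def is_valid_skill_name(name: str, max_length: int = 64) -> bool:
    """Return True when a name satisfies the sketched constraints."""
    return len(name) <= max_length and bool(SKILL_NAME_RE.fullmatch(name))
```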
@@ -988,11 +988,9 @@ class TestLLMObjectPreservedInContext:
             db_path = os.path.join(tmpdir, "test_flows.db")
             persistence = SQLiteFlowPersistence(db_path)
 
-            # Create a mock BaseLLM object (not a string)
-            # Simulates LLM(model="gemini-2.0-flash", provider="gemini")
-            mock_llm_obj = MagicMock()
-            mock_llm_obj.model = "gemini-2.0-flash"
-            mock_llm_obj.provider = "gemini"
+            # Create a real LLM object (not a string)
+            from crewai.llm import LLM
+            mock_llm_obj = LLM(model="gemini-2.0-flash", provider="gemini")
 
             class PausingProvider:
                 def __init__(self, persistence: SQLiteFlowPersistence):
@@ -1041,32 +1039,37 @@ class TestLLMObjectPreservedInContext:
             result = flow1.kickoff()
             assert isinstance(result, HumanFeedbackPending)
 
-            # Verify the context stored the model STRING, not None
+            # Verify the context stored the model config dict, not None
             assert provider.captured_context is not None
-            assert provider.captured_context.llm == "gemini/gemini-2.0-flash"
+            assert isinstance(provider.captured_context.llm, dict)
+            assert provider.captured_context.llm["model"] == "gemini/gemini-2.0-flash"
 
             # Verify it survives persistence roundtrip
             flow_id = result.context.flow_id
             loaded = persistence.load_pending_feedback(flow_id)
             assert loaded is not None
             _, loaded_context = loaded
-            assert loaded_context.llm == "gemini/gemini-2.0-flash"
+            assert isinstance(loaded_context.llm, dict)
+            assert loaded_context.llm["model"] == "gemini/gemini-2.0-flash"
 
             # Phase 2: Resume with positive feedback - should use LLM to classify
             flow2 = TestFlow.from_pending(flow_id, persistence)
             assert flow2._pending_feedback_context is not None
-            assert flow2._pending_feedback_context.llm == "gemini/gemini-2.0-flash"
+            assert isinstance(flow2._pending_feedback_context.llm, dict)
+            assert flow2._pending_feedback_context.llm["model"] == "gemini/gemini-2.0-flash"
 
             # Mock _collapse_to_outcome to verify it gets called (not skipped)
             with patch.object(flow2, "_collapse_to_outcome", return_value="approved") as mock_collapse:
                 flow2.resume("this looks good, proceed!")
 
             # The key assertion: _collapse_to_outcome was called (not skipped due to llm=None)
-            mock_collapse.assert_called_once_with(
-                feedback="this looks good, proceed!",
-                outcomes=["needs_changes", "approved"],
-                llm="gemini/gemini-2.0-flash",
-            )
+            mock_collapse.assert_called_once()
+            call_kwargs = mock_collapse.call_args
+            assert call_kwargs.kwargs["feedback"] == "this looks good, proceed!"
+            assert call_kwargs.kwargs["outcomes"] == ["needs_changes", "approved"]
+            # LLM should be a live object (from _hf_llm) or reconstructed, not None
+            assert call_kwargs.kwargs["llm"] is not None
+            assert getattr(call_kwargs.kwargs["llm"], "model", None) == "gemini-2.0-flash"
             assert flow2.last_human_feedback.outcome == "approved"
             assert flow2.result_path == "approved"
 
@@ -1096,23 +1099,25 @@ class TestLLMObjectPreservedInContext:
     def test_provider_prefix_added_to_bare_model(self) -> None:
         """Test that provider prefix is added when model has no slash."""
         from crewai.flow.human_feedback import _serialize_llm_for_context
+        from crewai.llm import LLM
 
-        mock_obj = MagicMock()
-        mock_obj.model = "gemini-3-flash-preview"
-        mock_obj.provider = "gemini"
-        assert _serialize_llm_for_context(mock_obj) == "gemini/gemini-3-flash-preview"
+        llm = LLM(model="gemini-2.0-flash", provider="gemini")
+        result = _serialize_llm_for_context(llm)
+        assert isinstance(result, dict)
+        assert result["model"] == "gemini/gemini-2.0-flash"
 
     def test_provider_prefix_not_doubled_when_already_present(self) -> None:
         """Test that provider prefix is not added when model already has a slash."""
         from crewai.flow.human_feedback import _serialize_llm_for_context
+        from crewai.llm import LLM
 
-        mock_obj = MagicMock()
-        mock_obj.model = "gemini/gemini-2.0-flash"
-        mock_obj.provider = "gemini"
-        assert _serialize_llm_for_context(mock_obj) == "gemini/gemini-2.0-flash"
+        llm = LLM(model="gemini/gemini-2.0-flash")
+        result = _serialize_llm_for_context(llm)
+        assert isinstance(result, dict)
+        assert result["model"] == "gemini/gemini-2.0-flash"
 
     def test_no_provider_attr_falls_back_to_bare_model(self) -> None:
-        """Test that bare model is used when no provider attribute exists."""
+        """Test that objects without to_config_dict fall back to model string."""
         from crewai.flow.human_feedback import _serialize_llm_for_context
 
         mock_obj = MagicMock(spec=[])
@@ -1216,3 +1221,279 @@ class TestAsyncHumanFeedbackEdgeCases:

        assert flow.last_human_feedback.outcome == "approved"
        assert flow.last_human_feedback.feedback == ""


# =============================================================================
# Tests for _hf_llm attribute and live LLM resolution on resume
# =============================================================================


class TestLiveLLMPreservationOnResume:
    """Tests for preserving the full LLM config across HITL resume."""

    def test_hf_llm_attribute_set_on_wrapper_with_basellm(self) -> None:
        """Test that _hf_llm is set on the wrapper when llm is a BaseLLM instance."""
        from crewai.llms.base_llm import BaseLLM

        # Create a mock BaseLLM object
        mock_llm = MagicMock(spec=BaseLLM)
        mock_llm.model = "gemini/gemini-3-flash"

        class TestFlow(Flow):
            @start()
            @human_feedback(
                message="Review:",
                emit=["approved", "rejected"],
                llm=mock_llm,
            )
            def review(self):
                return "content"

        flow = TestFlow()
        method = flow._methods.get("review")
        assert method is not None
        assert hasattr(method, "_hf_llm")
        assert method._hf_llm is mock_llm

    def test_hf_llm_attribute_set_on_wrapper_with_string(self) -> None:
        """Test that _hf_llm is set on the wrapper even when llm is a string."""

        class TestFlow(Flow):
            @start()
            @human_feedback(
                message="Review:",
                emit=["approved", "rejected"],
                llm="gpt-4o-mini",
            )
            def review(self):
                return "content"

        flow = TestFlow()
        method = flow._methods.get("review")
        assert method is not None
        assert hasattr(method, "_hf_llm")
        assert method._hf_llm == "gpt-4o-mini"

    @patch("crewai.flow.flow.crewai_event_bus.emit")
    def test_resume_async_uses_live_basellm_over_serialized_string(
        self, mock_emit: MagicMock
    ) -> None:
        """Test that resume_async uses the live BaseLLM from decorator instead of serialized string.

        This is the main bug fix: when a flow resumes, it should use the fully-configured
        LLM from the re-imported decorator (with credentials, project, etc.) instead of
        creating a new LLM from just the model string.
        """
        with tempfile.TemporaryDirectory() as tmpdir:
            db_path = os.path.join(tmpdir, "test_flows.db")
            persistence = SQLiteFlowPersistence(db_path)

            from crewai.llms.base_llm import BaseLLM

            # Create a mock BaseLLM with full config (simulating Gemini with service account)
            live_llm = MagicMock(spec=BaseLLM)
            live_llm.model = "gemini/gemini-3-flash"

            class TestFlow(Flow):
                result_path: str = ""

                @start()
                @human_feedback(
                    message="Approve?",
                    emit=["approved", "rejected"],
                    llm=live_llm,  # Full LLM object with credentials
                )
                def review(self):
                    return "content"

                @listen("approved")
                def handle_approved(self):
                    self.result_path = "approved"
                    return "Approved!"

            # Save pending feedback with just a model STRING (simulating serialization)
            context = PendingFeedbackContext(
                flow_id="live-llm-test",
                flow_class="TestFlow",
                method_name="review",
                method_output="content",
                message="Approve?",
                emit=["approved", "rejected"],
                llm="gemini/gemini-3-flash",  # Serialized string, NOT the live object
            )
            persistence.save_pending_feedback(
                flow_uuid="live-llm-test",
                context=context,
                state_data={"id": "live-llm-test"},
            )

            # Restore flow - this re-imports the class with the live LLM
            flow = TestFlow.from_pending("live-llm-test", persistence)

            # Mock _collapse_to_outcome to capture what LLM it receives
            captured_llm = []

            def capture_llm(feedback, outcomes, llm):
                captured_llm.append(llm)
                return "approved"

            with patch.object(flow, "_collapse_to_outcome", side_effect=capture_llm):
                flow.resume("looks good!")

            # The key assertion: _collapse_to_outcome received the LIVE BaseLLM object,
            # NOT the serialized string. The live_llm was captured at class definition
            # time and stored on the method wrapper as _hf_llm.
            assert len(captured_llm) == 1
            # Verify it's the same object that was passed to the decorator
            # (which is stored on the method's _hf_llm attribute)
            method = flow._methods.get("review")
            assert method is not None
            assert captured_llm[0] is method._hf_llm
            # And verify it's a BaseLLM instance, not a string
            assert isinstance(captured_llm[0], BaseLLM)

    @patch("crewai.flow.flow.crewai_event_bus.emit")
    def test_resume_async_falls_back_to_serialized_string_when_no_hf_llm(
        self, mock_emit: MagicMock
    ) -> None:
        """Test that resume_async falls back to context.llm when _hf_llm is not available.

        This ensures backward compatibility with flows that were paused before this fix.
        """
        with tempfile.TemporaryDirectory() as tmpdir:
            db_path = os.path.join(tmpdir, "test_flows.db")
            persistence = SQLiteFlowPersistence(db_path)

            class TestFlow(Flow):
                @start()
                @human_feedback(
                    message="Approve?",
                    emit=["approved", "rejected"],
                    llm="gpt-4o-mini",
                )
                def review(self):
                    return "content"

            # Save pending feedback
            context = PendingFeedbackContext(
                flow_id="fallback-test",
                flow_class="TestFlow",
                method_name="review",
                method_output="content",
                message="Approve?",
                emit=["approved", "rejected"],
                llm="gpt-4o-mini",
            )
            persistence.save_pending_feedback(
                flow_uuid="fallback-test",
                context=context,
                state_data={"id": "fallback-test"},
            )

            flow = TestFlow.from_pending("fallback-test", persistence)

            # Remove _hf_llm to simulate old decorator without this attribute
            method = flow._methods.get("review")
            if hasattr(method, "_hf_llm"):
                delattr(method, "_hf_llm")

            # Mock _collapse_to_outcome to capture what LLM it receives
            captured_llm = []

            def capture_llm(feedback, outcomes, llm):
                captured_llm.append(llm)
                return "approved"

            with patch.object(flow, "_collapse_to_outcome", side_effect=capture_llm):
                flow.resume("looks good!")

            # Should fall back to deserialized LLM from context string
            assert len(captured_llm) == 1
            from crewai.llms.base_llm import BaseLLM as BaseLLMClass
            assert isinstance(captured_llm[0], BaseLLMClass)
            assert captured_llm[0].model == "gpt-4o-mini"

    @patch("crewai.flow.flow.crewai_event_bus.emit")
    def test_resume_async_uses_string_from_context_when_hf_llm_is_string(
        self, mock_emit: MagicMock
    ) -> None:
        """Test that when _hf_llm is a string (not BaseLLM), we still use context.llm.

        String LLM values offer no benefit over the serialized context.llm,
        so we don't prefer them.
        """
        with tempfile.TemporaryDirectory() as tmpdir:
            db_path = os.path.join(tmpdir, "test_flows.db")
            persistence = SQLiteFlowPersistence(db_path)

            class TestFlow(Flow):
                @start()
                @human_feedback(
                    message="Approve?",
                    emit=["approved", "rejected"],
                    llm="gpt-4o-mini",  # String LLM
                )
                def review(self):
                    return "content"

            # Save pending feedback
            context = PendingFeedbackContext(
                flow_id="string-llm-test",
                flow_class="TestFlow",
                method_name="review",
                method_output="content",
                message="Approve?",
                emit=["approved", "rejected"],
                llm="gpt-4o-mini",
            )
            persistence.save_pending_feedback(
                flow_uuid="string-llm-test",
                context=context,
                state_data={"id": "string-llm-test"},
            )

            flow = TestFlow.from_pending("string-llm-test", persistence)

            # Verify _hf_llm is a string
            method = flow._methods.get("review")
            assert method._hf_llm == "gpt-4o-mini"

            # Mock _collapse_to_outcome to capture what LLM it receives
            captured_llm = []

            def capture_llm(feedback, outcomes, llm):
                captured_llm.append(llm)
                return "approved"

            with patch.object(flow, "_collapse_to_outcome", side_effect=capture_llm):
                flow.resume("looks good!")

            # _hf_llm is a string, so resume deserializes context.llm into an LLM instance
            assert len(captured_llm) == 1
            from crewai.llms.base_llm import BaseLLM as BaseLLMClass
            assert isinstance(captured_llm[0], BaseLLMClass)
            assert captured_llm[0].model == "gpt-4o-mini"

    def test_hf_llm_set_for_async_wrapper(self) -> None:
        """Test that _hf_llm is set on async wrapper functions."""
        import asyncio
        from crewai.llms.base_llm import BaseLLM

        mock_llm = MagicMock(spec=BaseLLM)
        mock_llm.model = "gemini/gemini-3-flash"

        class TestFlow(Flow):
            @start()
            @human_feedback(
                message="Review:",
                emit=["approved", "rejected"],
                llm=mock_llm,
            )
            async def async_review(self):
                return "content"

        flow = TestFlow()
        method = flow._methods.get("async_review")
        assert method is not None
        assert hasattr(method, "_hf_llm")
        assert method._hf_llm is mock_llm
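The resolution order these resume tests establish can be summarized as a small helper. This is a sketch inferred from the tests, not crewai's actual implementation; `resolve_resume_llm_sketch` is a hypothetical name, and `deserialize` stands in for whatever turns the persisted `context.llm` value back into an LLM instance.

```python
def resolve_resume_llm_sketch(method, context_llm, deserialize):
    """Pick the LLM to use when a paused flow resumes (sketch).

    Prefer a live, fully-configured LLM stored on the method wrapper as
    `_hf_llm` (it survives re-import with credentials intact). A plain
    string `_hf_llm` offers nothing over the persisted value, so in that
    case, and when the attribute is absent, fall back to deserializing
    the serialized `context.llm`.
    """
    hf_llm = getattr(method, "_hf_llm", None)
    if hf_llm is not None and not isinstance(hf_llm, str):
        return hf_llm
    return deserialize(context_llm)
```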
795 lib/crewai/tests/test_flow_serializer.py Normal file
@@ -0,0 +1,795 @@
|
||||
"""Tests for flow_serializer.py - Flow structure serialization for Studio UI."""
|
||||
|
||||
from typing import Literal
|
||||
|
||||
import pytest
|
||||
from pydantic import BaseModel, Field
|
||||
|
||||
from crewai.flow.flow import Flow, and_, listen, or_, router, start
|
||||
from crewai.flow.flow_serializer import flow_structure
|
||||
from crewai.flow.human_feedback import human_feedback
|
||||
|
||||
|
||||
class TestSimpleLinearFlow:
|
||||
"""Test simple linear flow (start → listen → listen)."""
|
||||
|
||||
def test_linear_flow_structure(self):
|
||||
"""Test a simple sequential flow structure."""
|
||||
|
||||
class LinearFlow(Flow):
|
||||
"""A simple linear flow for testing."""
|
||||
|
||||
@start()
|
||||
def begin(self):
|
||||
return "started"
|
||||
|
||||
@listen(begin)
|
||||
def process(self):
|
||||
return "processed"
|
||||
|
||||
@listen(process)
|
||||
def finalize(self):
|
||||
return "done"
|
||||
|
||||
structure = flow_structure(LinearFlow)
|
||||
|
||||
assert structure["name"] == "LinearFlow"
|
||||
assert structure["description"] == "A simple linear flow for testing."
|
||||
assert len(structure["methods"]) == 3
|
||||
|
||||
# Check method types
|
||||
method_map = {m["name"]: m for m in structure["methods"]}
|
||||
|
||||
assert method_map["begin"]["type"] == "start"
|
||||
assert method_map["process"]["type"] == "listen"
|
||||
assert method_map["finalize"]["type"] == "listen"
|
||||
|
||||
# Check edges
|
||||
assert len(structure["edges"]) == 2
|
||||
|
||||
edge_pairs = [(e["from_method"], e["to_method"]) for e in structure["edges"]]
|
||||
assert ("begin", "process") in edge_pairs
|
||||
assert ("process", "finalize") in edge_pairs
|
||||
|
||||
# All edges should be listen type
|
||||
for edge in structure["edges"]:
|
||||
assert edge["edge_type"] == "listen"
|
||||
assert edge["condition"] is None
|
||||
|
||||
|
||||
class TestRouterFlow:
    """Test flow with router branching."""

    def test_router_flow_structure(self):
        """Test a flow with a router that branches to different paths."""

        class BranchingFlow(Flow):
            @start()
            def init(self):
                return "initialized"

            @router(init)
            def decide(self) -> Literal["path_a", "path_b"]:
                return "path_a"

            @listen("path_a")
            def handle_a(self):
                return "handled_a"

            @listen("path_b")
            def handle_b(self):
                return "handled_b"

        structure = flow_structure(BranchingFlow)

        assert structure["name"] == "BranchingFlow"
        assert len(structure["methods"]) == 4

        method_map = {m["name"]: m for m in structure["methods"]}

        # Check method types
        assert method_map["init"]["type"] == "start"
        assert method_map["decide"]["type"] == "router"
        assert method_map["handle_a"]["type"] == "listen"
        assert method_map["handle_b"]["type"] == "listen"

        # Check router paths
        assert "path_a" in method_map["decide"]["router_paths"]
        assert "path_b" in method_map["decide"]["router_paths"]

        # Check edges.
        # Should have: init -> decide (listen), decide -> handle_a (route), decide -> handle_b (route)
        listen_edges = [e for e in structure["edges"] if e["edge_type"] == "listen"]
        route_edges = [e for e in structure["edges"] if e["edge_type"] == "route"]

        assert len(listen_edges) == 1
        assert listen_edges[0]["from_method"] == "init"
        assert listen_edges[0]["to_method"] == "decide"

        assert len(route_edges) == 2
        route_targets = {e["to_method"] for e in route_edges}
        assert "handle_a" in route_targets
        assert "handle_b" in route_targets

        # Check route conditions
        route_conditions = {e["to_method"]: e["condition"] for e in route_edges}
        assert route_conditions["handle_a"] == "path_a"
        assert route_conditions["handle_b"] == "path_b"

class TestAndOrConditions:
    """Test flow with AND/OR conditions."""

    def test_and_condition_flow(self):
        """Test a flow where a method waits for multiple methods (AND)."""

        class AndConditionFlow(Flow):
            @start()
            def step_a(self):
                return "a"

            @start()
            def step_b(self):
                return "b"

            @listen(and_(step_a, step_b))
            def converge(self):
                return "converged"

        structure = flow_structure(AndConditionFlow)

        assert len(structure["methods"]) == 3

        method_map = {m["name"]: m for m in structure["methods"]}

        assert method_map["step_a"]["type"] == "start"
        assert method_map["step_b"]["type"] == "start"
        assert method_map["converge"]["type"] == "listen"

        # Check condition type
        assert method_map["converge"]["condition_type"] == "AND"

        # Check trigger methods
        triggers = method_map["converge"]["trigger_methods"]
        assert "step_a" in triggers
        assert "step_b" in triggers

        # Check edges - should have 2 edges to converge
        converge_edges = [e for e in structure["edges"] if e["to_method"] == "converge"]
        assert len(converge_edges) == 2

    def test_or_condition_flow(self):
        """Test a flow where a method is triggered by any of multiple methods (OR)."""

        class OrConditionFlow(Flow):
            @start()
            def path_1(self):
                return "1"

            @start()
            def path_2(self):
                return "2"

            @listen(or_(path_1, path_2))
            def handle_any(self):
                return "handled"

        structure = flow_structure(OrConditionFlow)

        method_map = {m["name"]: m for m in structure["methods"]}

        assert method_map["handle_any"]["condition_type"] == "OR"

        triggers = method_map["handle_any"]["trigger_methods"]
        assert "path_1" in triggers
        assert "path_2" in triggers

class TestHumanFeedbackMethods:
    """Test flow with @human_feedback decorated methods."""

    def test_human_feedback_detection(self):
        """Test that human feedback methods are correctly identified."""

        class HumanFeedbackFlow(Flow):
            @start()
            @human_feedback(
                message="Please review:",
                emit=["approved", "rejected"],
                llm="gpt-4o-mini",
            )
            def review_step(self):
                return "content to review"

            @listen("approved")
            def handle_approved(self):
                return "approved"

            @listen("rejected")
            def handle_rejected(self):
                return "rejected"

        structure = flow_structure(HumanFeedbackFlow)

        method_map = {m["name"]: m for m in structure["methods"]}

        # review_step should have human feedback
        assert method_map["review_step"]["has_human_feedback"] is True
        # It's a start+router (due to emit)
        assert method_map["review_step"]["type"] == "start_router"
        assert "approved" in method_map["review_step"]["router_paths"]
        assert "rejected" in method_map["review_step"]["router_paths"]

        # Other methods should not have human feedback
        assert method_map["handle_approved"]["has_human_feedback"] is False
        assert method_map["handle_rejected"]["has_human_feedback"] is False

class TestCrewReferences:
    """Test detection of Crew references in method bodies."""

    def test_crew_detection_with_crew_call(self):
        """Test that .crew() calls are detected."""

        class FlowWithCrew(Flow):
            @start()
            def run_crew(self):
                # Simulating crew usage pattern
                # result = MyCrew().crew().kickoff()
                return "result"

            @listen(run_crew)
            def no_crew(self):
                return "done"

        structure = flow_structure(FlowWithCrew)

        method_map = {m["name"]: m for m in structure["methods"]}

        # Note: since the actual .crew() call is in a comment, detection
        # might not trigger here; in real code it would. We're only testing
        # that the mechanism exists.
        assert "has_crew" in method_map["run_crew"]
        assert "has_crew" in method_map["no_crew"]

    def test_no_crew_when_absent(self):
        """Test that methods without Crew refs return has_crew=False."""

        class SimpleNonCrewFlow(Flow):
            @start()
            def calculate(self):
                return 1 + 1

            @listen(calculate)
            def display(self):
                return "result"

        structure = flow_structure(SimpleNonCrewFlow)

        method_map = {m["name"]: m for m in structure["methods"]}

        assert method_map["calculate"]["has_crew"] is False
        assert method_map["display"]["has_crew"] is False

class TestTypedStateSchema:
    """Test flow with typed Pydantic state."""

    def test_pydantic_state_schema_extraction(self):
        """Test extracting state schema from a Flow with Pydantic state."""

        class MyState(BaseModel):
            counter: int = 0
            message: str = ""
            items: list[str] = Field(default_factory=list)

        class TypedStateFlow(Flow[MyState]):
            initial_state = MyState

            @start()
            def increment(self):
                self.state.counter += 1
                return self.state.counter

            @listen(increment)
            def display(self):
                return f"Count: {self.state.counter}"

        structure = flow_structure(TypedStateFlow)

        assert structure["state_schema"] is not None
        fields = structure["state_schema"]["fields"]

        field_names = {f["name"] for f in fields}
        assert "counter" in field_names
        assert "message" in field_names
        assert "items" in field_names

        # Check types
        field_map = {f["name"]: f for f in fields}
        assert "int" in field_map["counter"]["type"]
        assert "str" in field_map["message"]["type"]

        # Check defaults
        assert field_map["counter"]["default"] == 0
        assert field_map["message"]["default"] == ""

    def test_dict_state_returns_none(self):
        """Test that flows using dict state return None for state_schema."""

        class DictStateFlow(Flow):
            @start()
            def begin(self):
                self.state["count"] = 1
                return "started"

        structure = flow_structure(DictStateFlow)

        assert structure["state_schema"] is None

class TestEdgeCases:
    """Test edge cases and special scenarios."""

    def test_start_router_combo(self):
        """Test a method that is both @start and a router (via human_feedback emit)."""

        class StartRouterFlow(Flow):
            @start()
            @human_feedback(
                message="Review:",
                emit=["continue", "stop"],
                llm="gpt-4o-mini",
            )
            def entry_point(self):
                return "data"

            @listen("continue")
            def proceed(self):
                return "proceeding"

            @listen("stop")
            def halt(self):
                return "halted"

        structure = flow_structure(StartRouterFlow)

        method_map = {m["name"]: m for m in structure["methods"]}

        assert method_map["entry_point"]["type"] == "start_router"
        assert method_map["entry_point"]["has_human_feedback"] is True
        assert "continue" in method_map["entry_point"]["router_paths"]
        assert "stop" in method_map["entry_point"]["router_paths"]

    def test_multiple_start_methods(self):
        """Test a flow with multiple start methods."""

        class MultiStartFlow(Flow):
            @start()
            def start_a(self):
                return "a"

            @start()
            def start_b(self):
                return "b"

            @listen(and_(start_a, start_b))
            def combine(self):
                return "combined"

        structure = flow_structure(MultiStartFlow)

        start_methods = [m for m in structure["methods"] if m["type"] == "start"]
        assert len(start_methods) == 2

        start_names = {m["name"] for m in start_methods}
        assert "start_a" in start_names
        assert "start_b" in start_names

    def test_orphan_methods(self):
        """Test that orphan methods (not connected to the flow) are still captured."""

        class FlowWithOrphan(Flow):
            @start()
            def begin(self):
                return "started"

            @listen(begin)
            def connected(self):
                return "connected"

            @listen("never_triggered")
            def orphan(self):
                return "orphan"

        structure = flow_structure(FlowWithOrphan)

        method_names = {m["name"] for m in structure["methods"]}
        assert "orphan" in method_names

        method_map = {m["name"]: m for m in structure["methods"]}
        assert method_map["orphan"]["trigger_methods"] == ["never_triggered"]

    def test_empty_flow(self):
        """Test building structure for a flow with no methods."""

        class EmptyFlow(Flow):
            pass

        structure = flow_structure(EmptyFlow)

        assert structure["name"] == "EmptyFlow"
        assert structure["methods"] == []
        assert structure["edges"] == []
        assert structure["state_schema"] is None

    def test_flow_with_docstring(self):
        """Test that the flow docstring is captured."""

        class DocumentedFlow(Flow):
            """This is a well-documented flow.

            It has multiple lines of documentation.
            """

            @start()
            def begin(self):
                return "started"

        structure = flow_structure(DocumentedFlow)

        assert structure["description"] is not None
        assert "well-documented flow" in structure["description"]

    def test_flow_without_docstring(self):
        """Test that a missing docstring returns None."""

        class UndocumentedFlow(Flow):
            @start()
            def begin(self):
                return "started"

        structure = flow_structure(UndocumentedFlow)

        assert structure["description"] is None

    def test_nested_conditions(self):
        """Test flow with nested AND/OR conditions."""

        class NestedConditionFlow(Flow):
            @start()
            def a(self):
                return "a"

            @start()
            def b(self):
                return "b"

            @start()
            def c(self):
                return "c"

            @listen(or_(and_(a, b), c))
            def complex_trigger(self):
                return "triggered"

        structure = flow_structure(NestedConditionFlow)

        method_map = {m["name"]: m for m in structure["methods"]}

        # Should have triggers for a, b, and c
        triggers = method_map["complex_trigger"]["trigger_methods"]
        assert len(triggers) == 3
        assert "a" in triggers
        assert "b" in triggers
        assert "c" in triggers

class TestErrorHandling:
    """Test error handling and validation."""

    def test_instance_raises_type_error(self):
        """Test that passing an instance raises TypeError."""

        class TestFlow(Flow):
            @start()
            def begin(self):
                return "started"

        flow_instance = TestFlow()

        with pytest.raises(TypeError) as exc_info:
            flow_structure(flow_instance)

        assert "requires a Flow class, not an instance" in str(exc_info.value)

    def test_non_class_raises_type_error(self):
        """Test that passing a non-class raises TypeError."""

        with pytest.raises(TypeError):
            flow_structure("not a class")

        with pytest.raises(TypeError):
            flow_structure(123)

class TestEdgeGeneration:
    """Test edge generation in various scenarios."""

    def test_all_edges_generated_correctly(self):
        """Verify all edges are correctly generated for a complex flow."""

        class ComplexFlow(Flow):
            @start()
            def entry(self):
                return "started"

            @listen(entry)
            def step_1(self):
                return "step_1"

            @router(step_1)
            def branch(self) -> Literal["left", "right"]:
                return "left"

            @listen("left")
            def left_path(self):
                return "left_done"

            @listen("right")
            def right_path(self):
                return "right_done"

            @listen(or_(left_path, right_path))
            def converge(self):
                return "done"

        structure = flow_structure(ComplexFlow)

        # Build edge lists for easier checking
        edges = structure["edges"]

        # Check listen edges
        listen_edges = [(e["from_method"], e["to_method"]) for e in edges if e["edge_type"] == "listen"]

        assert ("entry", "step_1") in listen_edges
        assert ("step_1", "branch") in listen_edges
        assert ("left_path", "converge") in listen_edges
        assert ("right_path", "converge") in listen_edges

        # Check route edges
        route_edges = [(e["from_method"], e["to_method"], e["condition"]) for e in edges if e["edge_type"] == "route"]

        assert ("branch", "left_path", "left") in route_edges
        assert ("branch", "right_path", "right") in route_edges

    def test_router_edge_conditions(self):
        """Test that router edge conditions are properly set."""

        class RouterConditionFlow(Flow):
            @start()
            def begin(self):
                return "start"

            @router(begin)
            def route(self) -> Literal["option_1", "option_2", "option_3"]:
                return "option_1"

            @listen("option_1")
            def handle_1(self):
                return "1"

            @listen("option_2")
            def handle_2(self):
                return "2"

            @listen("option_3")
            def handle_3(self):
                return "3"

        structure = flow_structure(RouterConditionFlow)

        route_edges = [e for e in structure["edges"] if e["edge_type"] == "route"]

        # Should have 3 route edges
        assert len(route_edges) == 3

        conditions = {e["to_method"]: e["condition"] for e in route_edges}
        assert conditions["handle_1"] == "option_1"
        assert conditions["handle_2"] == "option_2"
        assert conditions["handle_3"] == "option_3"

class TestMethodTypeClassification:
    """Test method type classification."""

    def test_all_method_types(self):
        """Test classification of all method types."""

        class AllTypesFlow(Flow):
            @start()
            def start_only(self):
                return "start"

            @listen(start_only)
            def listen_only(self):
                return "listen"

            @router(listen_only)
            def router_only(self) -> Literal["path"]:
                return "path"

            @listen("path")
            def after_router(self):
                return "after"

            @start()
            @human_feedback(
                message="Review",
                emit=["yes", "no"],
                llm="gpt-4o-mini",
            )
            def start_and_router(self):
                return "data"

        structure = flow_structure(AllTypesFlow)

        method_map = {m["name"]: m for m in structure["methods"]}

        assert method_map["start_only"]["type"] == "start"
        assert method_map["listen_only"]["type"] == "listen"
        assert method_map["router_only"]["type"] == "router"
        assert method_map["after_router"]["type"] == "listen"
        assert method_map["start_and_router"]["type"] == "start_router"

class TestInputDetection:
    """Test flow input detection."""

    def test_inputs_list_exists(self):
        """Test that the inputs list is always present."""

        class SimpleFlow(Flow):
            @start()
            def begin(self):
                return "started"

        structure = flow_structure(SimpleFlow)

        assert "inputs" in structure
        assert isinstance(structure["inputs"], list)

class TestJsonSerializable:
    """Test that output is JSON serializable."""

    def test_structure_is_json_serializable(self):
        """Test that the entire structure can be JSON serialized."""
        import json

        class MyState(BaseModel):
            value: int = 0

        class SerializableFlow(Flow[MyState]):
            """Test flow for JSON serialization."""

            initial_state = MyState

            @start()
            @human_feedback(
                message="Review",
                emit=["ok", "not_ok"],
                llm="gpt-4o-mini",
            )
            def begin(self):
                return "data"

            @listen("ok")
            def proceed(self):
                return "done"

        structure = flow_structure(SerializableFlow)

        # Should not raise
        json_str = json.dumps(structure)
        assert json_str is not None

        # Should round-trip
        parsed = json.loads(json_str)
        assert parsed["name"] == "SerializableFlow"
        assert len(parsed["methods"]) > 0

class TestFlowInheritance:
    """Test flow inheritance scenarios."""

    def test_child_flow_inherits_parent_methods(self):
        """Test that FlowB inheriting from FlowA includes methods from both.

        Note: FlowMeta propagates methods but does NOT fully propagate the
        _listeners registry from parent classes. This means edges defined
        in the parent class (e.g., parent_start -> parent_process) may not
        appear in the child's structure. This is a known FlowMeta limitation.
        """

        class FlowA(Flow):
            """Parent flow with start method."""

            @start()
            def parent_start(self):
                return "parent started"

            @listen(parent_start)
            def parent_process(self):
                return "parent processed"

        class FlowB(FlowA):
            """Child flow with additional methods."""

            @listen(FlowA.parent_process)
            def child_continue(self):
                return "child continued"

            @listen(child_continue)
            def child_finalize(self):
                return "child finalized"

        structure = flow_structure(FlowB)

        assert structure["name"] == "FlowB"

        # Check all methods are present (from both parent and child)
        method_names = {m["name"] for m in structure["methods"]}
        assert "parent_start" in method_names
        assert "parent_process" in method_names
        assert "child_continue" in method_names
        assert "child_finalize" in method_names

        # Check method types
        method_map = {m["name"]: m for m in structure["methods"]}
        assert method_map["parent_start"]["type"] == "start"
        assert method_map["parent_process"]["type"] == "listen"
        assert method_map["child_continue"]["type"] == "listen"
        assert method_map["child_finalize"]["type"] == "listen"

        # Check edges defined in the child class exist
        edge_pairs = [(e["from_method"], e["to_method"]) for e in structure["edges"]]
        assert ("parent_process", "child_continue") in edge_pairs
        assert ("child_continue", "child_finalize") in edge_pairs

        # KNOWN LIMITATION: edges defined in the parent class
        # (parent_start -> parent_process) are NOT propagated to the child's
        # _listeners registry by FlowMeta, so that edge will NOT be in
        # edge_pairs. This is a FlowMeta limitation, not a serializer bug.

    def test_child_flow_can_override_parent_method(self):
        """Test that a child can override parent methods."""

        class BaseFlow(Flow):
            @start()
            def begin(self):
                return "base begin"

            @listen(begin)
            def process(self):
                return "base process"

        class ExtendedFlow(BaseFlow):
            @listen(BaseFlow.begin)
            def process(self):
                # Override the parent's process method
                return "extended process"

            @listen(process)
            def finalize(self):
                return "extended finalize"

        structure = flow_structure(ExtendedFlow)

        method_names = {m["name"] for m in structure["methods"]}
        assert "begin" in method_names
        assert "process" in method_names
        assert "finalize" in method_names

        # Should have 3 methods total (not 4, since process is overridden)
        assert len(structure["methods"]) == 3
@@ -772,3 +772,204 @@ class TestEdgeCases:
        assert result.output == "content"
        assert result.feedback == "feedback"
        assert result.outcome is None  # No routing, no outcome

class TestLLMConfigPreservation:
    """Tests that LLM config is preserved through @human_feedback serialization.

    PR #4970 introduced _hf_llm stashing so the live LLM object survives
    decorator wrapping for same-process resume. The serialization path
    (_serialize_llm_for_context / _deserialize_llm_from_context) preserves
    config for cross-process resume.
    """

    def test_hf_llm_stashed_on_wrapper_with_llm_instance(self):
        """Test that passing an LLM instance stashes it on the wrapper as _hf_llm."""
        from crewai.llm import LLM

        llm_instance = LLM(model="gpt-4o-mini", temperature=0.42)

        class ConfigFlow(Flow):
            @start()
            @human_feedback(
                message="Review:",
                emit=["approved", "rejected"],
                llm=llm_instance,
            )
            def review(self):
                return "content"

        method = ConfigFlow.review
        assert hasattr(method, "_hf_llm"), "_hf_llm not found on wrapper"
        assert method._hf_llm is llm_instance, "_hf_llm is not the same object"

    def test_hf_llm_preserved_on_listen_method(self):
        """Test that _hf_llm is preserved when @human_feedback is on a @listen method."""
        from crewai.llm import LLM

        llm_instance = LLM(model="gpt-4o-mini", temperature=0.7)

        class ListenConfigFlow(Flow):
            @start()
            def generate(self):
                return "draft"

            @listen("generate")
            @human_feedback(
                message="Review:",
                emit=["approved", "rejected"],
                llm=llm_instance,
            )
            def review(self):
                return "content"

        method = ListenConfigFlow.review
        assert hasattr(method, "_hf_llm")
        assert method._hf_llm is llm_instance

    def test_hf_llm_accessible_on_instance(self):
        """Test that _hf_llm survives Flow instantiation (bound method access)."""
        from crewai.llm import LLM

        llm_instance = LLM(model="gpt-4o-mini", temperature=0.42)

        class InstanceFlow(Flow):
            @start()
            @human_feedback(
                message="Review:",
                emit=["approved", "rejected"],
                llm=llm_instance,
            )
            def review(self):
                return "content"

        flow = InstanceFlow()
        instance_method = flow.review
        assert hasattr(instance_method, "_hf_llm")
        assert instance_method._hf_llm is llm_instance

    def test_serialize_llm_preserves_config_fields(self):
        """Test that _serialize_llm_for_context captures temperature, base_url, etc."""
        from crewai.flow.human_feedback import _serialize_llm_for_context
        from crewai.llm import LLM

        llm = LLM(
            model="gpt-4o-mini",
            temperature=0.42,
            base_url="https://custom.example.com/v1",
        )

        serialized = _serialize_llm_for_context(llm)

        assert isinstance(serialized, dict), f"Expected dict, got {type(serialized)}"
        assert serialized["model"] == "openai/gpt-4o-mini"
        assert serialized["temperature"] == 0.42
        assert serialized["base_url"] == "https://custom.example.com/v1"

    def test_serialize_llm_excludes_api_key(self):
        """Test that api_key is NOT included in the serialized output (security)."""
        from crewai.flow.human_feedback import _serialize_llm_for_context
        from crewai.llm import LLM

        llm = LLM(model="gpt-4o-mini")

        serialized = _serialize_llm_for_context(llm)
        assert isinstance(serialized, dict)
        assert "api_key" not in serialized

    def test_deserialize_round_trip_preserves_config(self):
        """Test that a serialize → deserialize round-trip preserves all config."""
        from crewai.flow.human_feedback import (
            _deserialize_llm_from_context,
            _serialize_llm_for_context,
        )
        from crewai.llm import LLM

        original = LLM(
            model="gpt-4o-mini",
            temperature=0.42,
            base_url="https://custom.example.com/v1",
        )

        serialized = _serialize_llm_for_context(original)
        reconstructed = _deserialize_llm_from_context(serialized)

        assert reconstructed is not None
        assert reconstructed.model == original.model
        assert reconstructed.temperature == original.temperature
        assert reconstructed.base_url == original.base_url

    def test_deserialize_handles_legacy_string_format(self):
        """Test backward compat: a plain string still reconstructs an LLM."""
        from crewai.flow.human_feedback import _deserialize_llm_from_context

        reconstructed = _deserialize_llm_from_context("openai/gpt-4o-mini")

        assert reconstructed is not None
        assert reconstructed.model == "gpt-4o-mini"

    def test_deserialize_returns_none_for_none(self):
        """Test that None input returns None."""
        from crewai.flow.human_feedback import _deserialize_llm_from_context

        assert _deserialize_llm_from_context(None) is None

    def test_serialize_llm_preserves_provider_specific_fields(self):
        """Test that provider-specific fields like project/location are serialized."""
        from crewai.flow.human_feedback import _serialize_llm_for_context
        from crewai.llm import LLM

        # Create a Gemini-style LLM with a project and a non-default location
        llm = LLM(
            model="gemini-2.0-flash",
            provider="gemini",
            project="my-project",
            location="europe-west1",
            temperature=0.3,
        )

        serialized = _serialize_llm_for_context(llm)

        assert isinstance(serialized, dict)
        assert serialized.get("project") == "my-project"
        assert serialized.get("location") == "europe-west1"
        assert serialized.get("temperature") == 0.3

    def test_config_preserved_through_full_flow_execution(self):
        """Test that the LLM with custom config is used during outcome collapsing."""
        from crewai.llm import LLM

        llm_instance = LLM(model="gpt-4o-mini", temperature=0.42)
        collapse_calls = []

        class FullFlow(Flow):
            @start()
            @human_feedback(
                message="Review:",
                emit=["approved", "rejected"],
                llm=llm_instance,
            )
            def review(self):
                return "content"

            @listen("approved")
            def on_approved(self):
                return "done"

        flow = FullFlow()

        def spy_collapse(feedback, outcomes, llm):
            collapse_calls.append(llm)
            return "approved"

        with (
            patch.object(flow, "_request_human_feedback", return_value="looks good"),
            patch.object(flow, "_collapse_to_outcome", side_effect=spy_collapse),
        ):
            flow.kickoff()

        assert len(collapse_calls) == 1
        # The LLM passed to _collapse_to_outcome should be the original instance
        assert collapse_calls[0] is llm_instance

@@ -1,3 +1,3 @@
 """CrewAI development tools."""

-__version__ = "1.11.0"
+__version__ = "1.11.1"

@@ -29,6 +29,7 @@ dev = [
    "types-psycopg2==2.9.21.20251012",
    "types-pymysql==1.1.0.20250916",
    "types-aiofiles~=25.1.0",
    "commitizen>=4.13.9",
]

@@ -142,17 +143,33 @@ python_files = "test_*.py"
python_classes = "Test*"
python_functions = "test_*"

[tool.commitizen]
name = "cz_customize"
version_provider = "scm"
tag_format = "$version"
allowed_prefixes = ["Merge", "Revert"]
changelog_incremental = true
update_changelog_on_bump = false

[tool.commitizen.customize]
schema = "<type>(<scope>): <description>"
schema_pattern = "^(feat|fix|refactor|perf|test|docs|chore|ci|style|revert)(\\(.+\\))?!?: .{1,72}"
bump_pattern = "^(feat|fix|perf|refactor|revert)"
bump_map = { feat = "MINOR", fix = "PATCH", perf = "PATCH", refactor = "PATCH", revert = "PATCH" }
info = "Commits must follow Conventional Commits 1.0.0."

[tool.uv]

# composio-core pins rich<14 but textual requires rich>=14.
# onnxruntime 1.24+ dropped Python 3.10 wheels; cap it so qdrant[fastembed] resolves on 3.10.
# fastembed 0.7.x and docling 2.63 cap pillow<12; the removed APIs don't affect them.
# langchain-core 0.3.76 has a template-injection vuln (GHSA); force >=0.3.80.
# langchain-core <1.2.11 has SSRF via image_url token counting (CVE-2026-26013).
override-dependencies = [
    "rich>=13.7.1",
    "onnxruntime<1.24; python_version < '3.11'",
    "pillow>=12.1.1",
    "langchain-core>=0.3.80,<1",
    "langchain-core>=1.2.11,<2",
    "urllib3>=2.6.3",
]

uv.lock (generated, 111 lines)
@@ -20,7 +20,7 @@ members = [
|
||||
"crewai-tools",
|
||||
]
|
||||
overrides = [
|
||||
{ name = "langchain-core", specifier = ">=0.3.80,<1" },
|
||||
{ name = "langchain-core", specifier = ">=1.2.11,<2" },
|
||||
{ name = "onnxruntime", marker = "python_full_version < '3.11'", specifier = "<1.24" },
|
||||
{ name = "pillow", specifier = ">=12.1.1" },
|
||||
{ name = "rich", specifier = ">=13.7.1" },
|
||||
@@ -31,6 +31,7 @@ overrides = [
 dev = [
     { name = "bandit", specifier = "==1.9.2" },
     { name = "boto3-stubs", extras = ["bedrock-runtime"], specifier = "==1.42.40" },
+    { name = "commitizen", specifier = ">=4.13.9" },
     { name = "mypy", specifier = "==1.19.1" },
     { name = "pre-commit", specifier = "==4.5.1" },
     { name = "pytest", specifier = "==8.4.2" },
@@ -370,6 +371,15 @@ wheels = [
     { url = "https://files.pythonhosted.org/packages/3b/00/2344469e2084fb287c2e0b57b72910309874c3245463acd6cf5e3db69324/appdirs-1.4.4-py2.py3-none-any.whl", hash = "sha256:a841dacd6b99318a741b166adb07e19ee71a274450e68237b4650ca1055ab128", size = 9566, upload-time = "2020-05-11T07:59:49.499Z" },
 ]
 
+[[package]]
+name = "argcomplete"
+version = "3.6.3"
+source = { registry = "https://pypi.org/simple" }
+sdist = { url = "https://files.pythonhosted.org/packages/38/61/0b9ae6399dd4a58d8c1b1dc5a27d6f2808023d0b5dd3104bb99f45a33ff6/argcomplete-3.6.3.tar.gz", hash = "sha256:62e8ed4fd6a45864acc8235409461b72c9a28ee785a2011cc5eb78318786c89c", size = 73754, upload-time = "2025-10-20T03:33:34.741Z" }
+wheels = [
+    { url = "https://files.pythonhosted.org/packages/74/f5/9373290775639cb67a2fce7f629a1c240dce9f12fe927bc32b2736e16dfc/argcomplete-3.6.3-py3-none-any.whl", hash = "sha256:f5007b3a600ccac5d25bbce33089211dfd49eab4a7718da3f10e3082525a92ce", size = 43846, upload-time = "2025-10-20T03:33:33.021Z" },
+]
+
 [[package]]
 name = "asn1crypto"
 version = "1.5.1"
@@ -944,6 +954,30 @@ wheels = [
     { url = "https://files.pythonhosted.org/packages/6d/c1/e419ef3723a074172b68aaa89c9f3de486ed4c2399e2dbd8113a4fdcaf9e/colorlog-6.10.1-py3-none-any.whl", hash = "sha256:2d7e8348291948af66122cff006c9f8da6255d224e7cf8e37d8de2df3bad8c9c", size = 11743, upload-time = "2025-10-16T16:14:10.512Z" },
 ]
 
+[[package]]
+name = "commitizen"
+version = "4.13.9"
+source = { registry = "https://pypi.org/simple" }
+dependencies = [
+    { name = "argcomplete" },
+    { name = "charset-normalizer" },
+    { name = "colorama" },
+    { name = "decli" },
+    { name = "deprecated" },
+    { name = "jinja2" },
+    { name = "packaging" },
+    { name = "prompt-toolkit" },
+    { name = "pyyaml" },
+    { name = "questionary" },
+    { name = "termcolor" },
+    { name = "tomlkit" },
+    { name = "typing-extensions", marker = "python_full_version < '3.11'" },
+]
+sdist = { url = "https://files.pythonhosted.org/packages/a6/44/10f95e8178ab5a584298726a4a94ceb83a7f77e00741fec4680df05fedd5/commitizen-4.13.9.tar.gz", hash = "sha256:2b4567ed50555e10920e5bd804a6a4e2c42ec70bb74f14a83f2680fe9eaf9727", size = 64145, upload-time = "2026-02-25T02:40:05.326Z" }
+wheels = [
+    { url = "https://files.pythonhosted.org/packages/28/22/9b14ee0f17f0aad219a2fb37a293a57b8324d9d195c6ef6807bcd0bf2055/commitizen-4.13.9-py3-none-any.whl", hash = "sha256:d2af3d6a83cacec9d5200e17768942c5de6266f93d932c955986c60c4285f2db", size = 85373, upload-time = "2026-02-25T02:40:03.83Z" },
+]
+
 [[package]]
 name = "composio-core"
 version = "0.7.21"
@@ -1115,6 +1149,7 @@ dependencies = [
     { name = "pydantic-settings" },
     { name = "pyjwt" },
     { name = "python-dotenv" },
     { name = "pyyaml" },
+    { name = "regex" },
     { name = "textual" },
     { name = "tokenizers" },
@@ -1222,6 +1257,7 @@ requires-dist = [
     { name = "pydantic-settings", specifier = "~=2.10.1" },
     { name = "pyjwt", specifier = ">=2.9.0,<3" },
     { name = "python-dotenv", specifier = "~=1.1.1" },
     { name = "pyyaml", specifier = "~=6.0" },
     { name = "qdrant-client", extras = ["fastembed"], marker = "extra == 'qdrant'", specifier = "~=1.14.3" },
+    { name = "regex", specifier = "~=2026.1.15" },
     { name = "textual", specifier = ">=7.5.0" },
@@ -1275,9 +1311,9 @@ requires-dist = [
     { name = "aiofiles", specifier = "~=24.1.0" },
     { name = "av", specifier = "~=13.0.0" },
     { name = "pillow", specifier = "~=12.1.1" },
-    { name = "pypdf", specifier = "~=6.7.5" },
+    { name = "pypdf", specifier = "~=6.9.1" },
     { name = "python-magic", specifier = ">=0.4.27" },
-    { name = "tinytag", specifier = "~=1.10.0" },
+    { name = "tinytag", specifier = "~=2.2.1" },
 ]
 
 [[package]]
@@ -1573,6 +1609,15 @@ wheels = [
     { url = "https://files.pythonhosted.org/packages/c3/be/d0d44e092656fe7a06b55e6103cbce807cdbdee17884a5367c68c9860853/dataclasses_json-0.6.7-py3-none-any.whl", hash = "sha256:0dbf33f26c8d5305befd61b39d2b3414e8a407bedc2834dea9b8d642666fb40a", size = 28686, upload-time = "2024-06-09T16:20:16.715Z" },
 ]
 
+[[package]]
+name = "decli"
+version = "0.6.3"
+source = { registry = "https://pypi.org/simple" }
+sdist = { url = "https://files.pythonhosted.org/packages/0c/59/d4ffff1dee2c8f6f2dd8f87010962e60f7b7847504d765c91ede5a466730/decli-0.6.3.tar.gz", hash = "sha256:87f9d39361adf7f16b9ca6e3b614badf7519da13092f2db3c80ca223c53c7656", size = 7564, upload-time = "2025-06-01T15:23:41.25Z" }
+wheels = [
+    { url = "https://files.pythonhosted.org/packages/d8/fa/ec878c28bc7f65b77e7e17af3522c9948a9711b9fa7fc4c5e3140a7e3578/decli-0.6.3-py3-none-any.whl", hash = "sha256:5152347c7bb8e3114ad65db719e5709b28d7f7f45bdb709f70167925e55640f3", size = 7989, upload-time = "2025-06-01T15:23:40.228Z" },
+]
+
 [[package]]
 name = "decorator"
 version = "5.2.1"
@@ -3295,7 +3340,7 @@ wheels = [
 
 [[package]]
 name = "langchain-core"
-version = "0.3.83"
+version = "1.2.20"
 source = { registry = "https://pypi.org/simple" }
 dependencies = [
     { name = "jsonpatch" },
@@ -3307,9 +3352,9 @@ dependencies = [
     { name = "typing-extensions" },
     { name = "uuid-utils" },
 ]
-sdist = { url = "https://files.pythonhosted.org/packages/21/a4/24f2d787bfcf56e5990924cacefe6f6e7971a3629f97c8162fc7a2a3d851/langchain_core-0.3.83.tar.gz", hash = "sha256:a0a4c7b6ea1c446d3b432116f405dc2afa1fe7891c44140d3d5acca221909415", size = 597965, upload-time = "2026-01-13T01:19:23.854Z" }
+sdist = { url = "https://files.pythonhosted.org/packages/db/41/6552a419fe549a79601e5a698d1d5ee2ca7fe93bb87fd624a16a8c1bdee3/langchain_core-1.2.20.tar.gz", hash = "sha256:c7ac8b976039b5832abb989fef058b88c270594ba331efc79e835df046e7dc44", size = 838330, upload-time = "2026-03-18T17:34:45.522Z" }
 wheels = [
-    { url = "https://files.pythonhosted.org/packages/5a/db/d71b80d3bd6193812485acea4001cdf86cf95a44bbf942f7a240120ff762/langchain_core-0.3.83-py3-none-any.whl", hash = "sha256:8c92506f8b53fc1958b1c07447f58c5783eb8833dd3cb6dc75607c80891ab1ae", size = 458890, upload-time = "2026-01-13T01:19:21.748Z" },
+    { url = "https://files.pythonhosted.org/packages/d9/06/08c88ddd4d6766de4e6c43111ae8f3025df383d2a4379cb938fc571b49d4/langchain_core-1.2.20-py3-none-any.whl", hash = "sha256:b65ff678f3c3dc1f1b4d03a3af5ee3b8d51f9be5181d74eb53c6c11cd9dd5e68", size = 504215, upload-time = "2026-03-18T17:34:44.087Z" },
 ]
 
 [[package]]
@@ -5275,6 +5320,18 @@ wheels = [
     { url = "https://files.pythonhosted.org/packages/5d/19/fd3ef348460c80af7bb4669ea7926651d1f95c23ff2df18b9d24bab4f3fa/pre_commit-4.5.1-py2.py3-none-any.whl", hash = "sha256:3b3afd891e97337708c1674210f8eba659b52a38ea5f822ff142d10786221f77", size = 226437, upload-time = "2025-12-16T21:14:32.409Z" },
 ]
 
+[[package]]
+name = "prompt-toolkit"
+version = "3.0.51"
+source = { registry = "https://pypi.org/simple" }
+dependencies = [
+    { name = "wcwidth" },
+]
+sdist = { url = "https://files.pythonhosted.org/packages/bb/6e/9d084c929dfe9e3bfe0c6a47e31f78a25c54627d64a66e884a8bf5474f1c/prompt_toolkit-3.0.51.tar.gz", hash = "sha256:931a162e3b27fc90c86f1b48bb1fb2c528c2761475e57c9c06de13311c7b54ed", size = 428940, upload-time = "2025-04-15T09:18:47.731Z" }
+wheels = [
+    { url = "https://files.pythonhosted.org/packages/ce/4f/5249960887b1fbe561d9ff265496d170b55a735b76724f10ef19f9e40716/prompt_toolkit-3.0.51-py3-none-any.whl", hash = "sha256:52742911fde84e2d423e2f9a4cf1de7d7ac4e51958f648d9540e0fb8db077b07", size = 387810, upload-time = "2025-04-15T09:18:44.753Z" },
+]
+
 [[package]]
 name = "propcache"
 version = "0.4.1"
@@ -6174,14 +6231,14 @@ wheels = [
 
 [[package]]
 name = "pypdf"
-version = "6.7.5"
+version = "6.9.1"
 source = { registry = "https://pypi.org/simple" }
 dependencies = [
     { name = "typing-extensions", marker = "python_full_version < '3.11'" },
 ]
-sdist = { url = "https://files.pythonhosted.org/packages/f6/52/37cc0aa9e9d1bf7729a737a0d83f8b3f851c8eb137373d9f71eafb0a3405/pypdf-6.7.5.tar.gz", hash = "sha256:40bb2e2e872078655f12b9b89e2f900888bb505e88a82150b64f9f34fa25651d", size = 5304278, upload-time = "2026-03-02T09:05:21.464Z" }
+sdist = { url = "https://files.pythonhosted.org/packages/f9/fb/dc2e8cb006e80b0020ed20d8649106fe4274e82d8e756ad3e24ade19c0df/pypdf-6.9.1.tar.gz", hash = "sha256:ae052407d33d34de0c86c5c729be6d51010bf36e03035a8f23ab449bca52377d", size = 5311551, upload-time = "2026-03-17T10:46:07.876Z" }
 wheels = [
-    { url = "https://files.pythonhosted.org/packages/05/89/336673efd0a88956562658aba4f0bbef7cb92a6fbcbcaf94926dbc82b408/pypdf-6.7.5-py3-none-any.whl", hash = "sha256:07ba7f1d6e6d9aa2a17f5452e320a84718d4ce863367f7ede2fd72280349ab13", size = 331421, upload-time = "2026-03-02T09:05:19.722Z" },
+    { url = "https://files.pythonhosted.org/packages/f9/f4/75543fa802b86e72f87e9395440fe1a89a6d149887e3e55745715c3352ac/pypdf-6.9.1-py3-none-any.whl", hash = "sha256:f35a6a022348fae47e092a908339a8f3dc993510c026bb39a96718fc7185e89f", size = 333661, upload-time = "2026-03-17T10:46:06.286Z" },
 ]
 
 [[package]]
@@ -6556,6 +6613,18 @@ fastembed = [
     { name = "fastembed", version = "0.7.4", source = { registry = "https://pypi.org/simple" }, marker = "python_full_version < '3.13'" },
 ]
 
+[[package]]
+name = "questionary"
+version = "2.1.1"
+source = { registry = "https://pypi.org/simple" }
+dependencies = [
+    { name = "prompt-toolkit" },
+]
+sdist = { url = "https://files.pythonhosted.org/packages/f6/45/eafb0bba0f9988f6a2520f9ca2df2c82ddfa8d67c95d6625452e97b204a5/questionary-2.1.1.tar.gz", hash = "sha256:3d7e980292bb0107abaa79c68dd3eee3c561b83a0f89ae482860b181c8bd412d", size = 25845, upload-time = "2025-08-28T19:00:20.851Z" }
+wheels = [
+    { url = "https://files.pythonhosted.org/packages/3c/26/1062c7ec1b053db9e499b4d2d5bc231743201b74051c973dadeac80a8f43/questionary-2.1.1-py3-none-any.whl", hash = "sha256:a51af13f345f1cdea62347589fbb6df3b290306ab8930713bfae4d475a7d4a59", size = 36753, upload-time = "2025-08-28T19:00:19.56Z" },
+]
+
 [[package]]
 name = "rapidfuzz"
 version = "3.14.3"
@@ -7555,6 +7624,15 @@ wheels = [
     { url = "https://files.pythonhosted.org/packages/d7/c1/eb8f9debc45d3b7918a32ab756658a0904732f75e555402972246b0b8e71/tenacity-9.1.4-py3-none-any.whl", hash = "sha256:6095a360c919085f28c6527de529e76a06ad89b23659fa881ae0649b867a9d55", size = 28926, upload-time = "2026-02-07T10:45:32.24Z" },
 ]
 
+[[package]]
+name = "termcolor"
+version = "3.3.0"
+source = { registry = "https://pypi.org/simple" }
+sdist = { url = "https://files.pythonhosted.org/packages/46/79/cf31d7a93a8fdc6aa0fbb665be84426a8c5a557d9240b6239e9e11e35fc5/termcolor-3.3.0.tar.gz", hash = "sha256:348871ca648ec6a9a983a13ab626c0acce02f515b9e1983332b17af7979521c5", size = 14434, upload-time = "2025-12-29T12:55:21.882Z" }
+wheels = [
+    { url = "https://files.pythonhosted.org/packages/33/d1/8bb87d21e9aeb323cc03034f5eaf2c8f69841e40e4853c2627edf8111ed3/termcolor-3.3.0-py3-none-any.whl", hash = "sha256:cf642efadaf0a8ebbbf4bc7a31cec2f9b5f21a9f726f4ccbb08192c9c26f43a5", size = 7734, upload-time = "2025-12-29T12:55:20.718Z" },
+]
+
 [[package]]
 name = "textual"
 version = "7.5.0"
@@ -7626,11 +7704,11 @@ wheels = [
 
 [[package]]
 name = "tinytag"
-version = "1.10.1"
+version = "2.2.1"
 source = { registry = "https://pypi.org/simple" }
-sdist = { url = "https://files.pythonhosted.org/packages/59/b5/ff5e5f9ca9677be7272260f67c87f7e8e885babc7ce94604e837dcfd8d76/tinytag-1.10.1.tar.gz", hash = "sha256:122a63b836f85094aacca43fc807aaee3290be3de17d134f5f4a08b509ae268f", size = 40906, upload-time = "2023-10-26T19:30:38.791Z" }
+sdist = { url = "https://files.pythonhosted.org/packages/96/59/8a8cb2331e2602b53e4dc06960f57d1387a2b18e7efd24e5f9cb60ea4925/tinytag-2.2.1.tar.gz", hash = "sha256:e6d06610ebe7cd66fd07be2d3b9495914ab32654a5e47657bb8cd44c2484523c", size = 38214, upload-time = "2026-03-15T18:48:01.11Z" }
 wheels = [
-    { url = "https://files.pythonhosted.org/packages/2f/04/ef783cbc4aa3a5ed75969e300b3e3929daf3d1b52fe80e950c63e0d66d95/tinytag-1.10.1-py3-none-any.whl", hash = "sha256:e437654d04c966fbbbdbf807af61eb9759f1d80e4173a7d26202506b37cfdaf0", size = 37900, upload-time = "2023-10-26T19:30:36.724Z" },
+    { url = "https://files.pythonhosted.org/packages/ce/34/d50e338631baaf65ec5396e70085e5de0b52b24b28db1ffbc1c6e82190dc/tinytag-2.2.1-py3-none-any.whl", hash = "sha256:ed8b1e6d25367937e3321e054f4974f9abfde1a3e0a538824c87da377130c2b6", size = 32927, upload-time = "2026-03-15T18:47:59.613Z" },
 ]
 
 [[package]]
@@ -8565,6 +8643,15 @@ wheels = [
     { url = "https://files.pythonhosted.org/packages/6e/d4/ed38dd3b1767193de971e694aa544356e63353c33a85d948166b5ff58b9e/watchfiles-1.1.1-pp311-pypy311_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:3e6f39af2eab0118338902798b5aa6664f46ff66bc0280de76fca67a7f262a49", size = 457546, upload-time = "2025-10-14T15:06:13.372Z" },
 ]
 
+[[package]]
+name = "wcwidth"
+version = "0.6.0"
+source = { registry = "https://pypi.org/simple" }
+sdist = { url = "https://files.pythonhosted.org/packages/35/a2/8e3becb46433538a38726c948d3399905a4c7cabd0df578ede5dc51f0ec2/wcwidth-0.6.0.tar.gz", hash = "sha256:cdc4e4262d6ef9a1a57e018384cbeb1208d8abbc64176027e2c2455c81313159", size = 159684, upload-time = "2026-02-06T19:19:40.919Z" }
+wheels = [
+    { url = "https://files.pythonhosted.org/packages/68/5a/199c59e0a824a3db2b89c5d2dade7ab5f9624dbf6448dc291b46d5ec94d3/wcwidth-0.6.0-py3-none-any.whl", hash = "sha256:1a3a1e510b553315f8e146c54764f4fb6264ffad731b3d78088cdb1478ffbdad", size = 94189, upload-time = "2026-02-06T19:19:39.646Z" },
+]
+
 [[package]]
 name = "weaviate-client"
 version = "4.18.3"