Compare commits


7 Commits

Author SHA1 Message Date
Greyson LaLonde
9ad2a9271f Merge branch 'main' into gl/fix/google-response-schema 2026-01-28 12:20:24 -05:00
Greyson LaLonde
1e27cf3f0f fix: ensure verbosity flag is applied
2026-01-28 11:52:47 -05:00
Greyson LaLonde
cba6d2eb9b fix: use response_json_schema 2026-01-28 06:28:49 -05:00
Lorenze Jay
381ad3a9a8 chore: update version to 1.9.1
2026-01-27 20:08:53 -05:00
Lorenze Jay
f53bdb28ac feat: implement before and after tool call hooks in CrewAgentExecutor… (#4287)
* feat: implement before and after tool call hooks in CrewAgentExecutor and AgentExecutor

- Added support for before and after tool call hooks in both CrewAgentExecutor and AgentExecutor classes.
- Introduced ToolCallHookContext to manage context for hooks, allowing for enhanced control over tool execution.
- Implemented logic to block tool execution based on before hooks and to modify results based on after hooks.
- Added integration tests to validate the functionality of the new hooks, ensuring they work as expected in various scenarios.
- Enhanced the overall flexibility and extensibility of tool interactions within the CrewAI framework.
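A minimal sketch of hook functions under these semantics, inferred from the executor changes in this diff (`ToolCallHookContext` and the `get_before_tool_call_hooks` / `get_after_tool_call_hooks` accessors from `crewai.hooks.tool_hooks`); the registration side is not shown in this compare, so how hooks get installed is assumed:

```python
# Sketch only: per the executor logic in this diff, a before-hook blocks a
# tool call by returning False, and an after-hook replaces the tool result
# by returning a non-None value. Tool names here are hypothetical.
from crewai.hooks.tool_hooks import ToolCallHookContext


def block_destructive_tools(ctx: ToolCallHookContext) -> bool | None:
    """Before-hook: return False to block execution of the tool."""
    if ctx.tool_name == "delete_records":  # hypothetical tool name
        return False
    return None  # any value other than False lets the call proceed


def redact_result(ctx: ToolCallHookContext) -> str | None:
    """After-hook: return a string to replace the tool result."""
    if isinstance(ctx.tool_result, str) and "secret" in ctx.tool_result:
        return ctx.tool_result.replace("secret", "[redacted]")
    return None  # None keeps the original result
```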

* Potential fix for pull request finding 'Unused local variable'

Co-authored-by: Copilot Autofix powered by AI <223894421+github-code-quality[bot]@users.noreply.github.com>

* Potential fix for pull request finding 'Unused local variable'

Co-authored-by: Copilot Autofix powered by AI <223894421+github-code-quality[bot]@users.noreply.github.com>

* test: add integration test for before hook blocking tool execution in Crew

- Implemented a new test to verify that the before hook can successfully block the execution of a tool within a crew.
- The test checks that the tool is not executed when the before hook returns False, ensuring proper control over tool interactions.
- Enhanced the validation of hook calls to confirm that both before and after hooks are triggered appropriately, even when execution is blocked.
- This addition strengthens the testing coverage for tool call hooks in the CrewAI framework.
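A rough sketch of the behavior this test pins down, using only semantics visible in the executor diffs below (the substituted "blocked" message, and after-hooks firing even when execution is blocked); the wiring that installs the hooks is assumed:

```python
# Sketch of the asserted behavior, not the actual test. Per the executor
# changes in this diff, a before-hook returning False blocks the tool, the
# result becomes a "blocked" message, and the after-hook still runs.
seen: dict[str, object] = {}


def before_hook(ctx) -> bool:
    seen["before"] = ctx.tool_name
    return False  # block the tool


def after_hook(ctx):
    seen["after"] = ctx.tool_result  # fires even though execution was blocked
    return None


# Expected once the agent attempts the tool call:
# seen["after"] == f"Tool execution blocked by hook. Tool: {seen['before']}"
```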

* drop unused

* refactor(tests): remove OPENAI_API_KEY check from tool hook tests

- Eliminated the check for the OPENAI_API_KEY environment variable in the test cases for tool hooks.
- This change simplifies the test setup and allows for running tests without requiring the API key to be set, improving test accessibility and flexibility.

---------

Co-authored-by: Copilot Autofix powered by AI <223894421+github-code-quality[bot]@users.noreply.github.com>
2026-01-27 14:56:50 -08:00
Greyson LaLonde
3b17026082 fix: correct tool-calling content handling and schema serialization
- fix(gemini): prevent tool calls from using stale text content; correct key refs
- fix(agent-executor): resolve type errors
- refactor(schema): extract Pydantic schema utilities from platform tools
- fix(schema): properly serialize schemas and ensure Responses API uses a separate structure
- fix: preserve list identity to avoid mutation/aliasing issues
- chore(tests): update assumptions to match new behavior
2026-01-27 15:47:29 -05:00
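As a rough illustration of the extracted schema utility: the diff below imports `create_model_from_schema` from `crewai.utilities.pydantic_schema_utils` and calls it on a JSON Schema `parameters` dict, defaulting `title` and `type` when absent. A hedged sketch with a made-up schema:

```python
# Sketch: build a Pydantic args model from a tool's JSON Schema parameters,
# mirroring the call pattern in this diff. The schema content is illustrative.
from crewai.utilities.pydantic_schema_utils import create_model_from_schema

parameters = {
    "title": "SendMessageSchema",  # the new code fills this in when missing
    "type": "object",
    "properties": {
        "channel": {"type": "string", "description": "Target channel"},
        "text": {"type": "string", "description": "Message body"},
    },
    "required": ["channel", "text"],
}

ArgsSchema = create_model_from_schema(parameters)
print(ArgsSchema(channel="#general", text="hi").model_dump())
```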
Greyson LaLonde
d52dbc1f4b chore: add missing change logs (#4285)
* chore: add missing change logs

* chore: add translations
2026-01-26 18:26:01 -08:00
42 changed files with 2389 additions and 1255 deletions

View File

@@ -4,6 +4,74 @@ description: "Product updates, improvements, and bug fixes for CrewAI"
icon: "clock"
mode: "wide"
---
<Update label="Jan 26, 2026">
## v1.9.0
[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/1.9.0)
## What's Changed
### Features
- Add structured outputs and response_format support across providers
- Add response ID to streaming responses
- Add event ordering with parent-child hierarchies
- Add Keycloak SSO authentication support
- Add multimodal file handling capabilities
- Add native OpenAI responses API support
- Add A2A task execution utilities
- Add A2A server configuration and agent card generation
- Enhance event system and expand transport options
- Improve tool calling mechanisms
### Bug Fixes
- Enhance file store with fallback memory cache when aiocache is not available
- Ensure document list is not empty
- Handle Bedrock stop sequences properly
- Add Google Vertex API key support
- Enhance Azure model stop word detection
- Improve error handling for HumanFeedbackPending in flow execution
- Fix execution span task unlinking
### Documentation
- Add native file handling documentation
- Add OpenAI responses API documentation
- Add agent card implementation guidance
- Refine A2A documentation
- Update changelog for v1.8.0
### Contributors
@Anaisdg, @GininDenis, @Vidit-Ostwal, @greysonlalonde, @heitorado, @joaomdmoura, @koushiv777, @lorenzejay, @nicoferdi96, @vinibrsl
</Update>
<Update label="Jan 15, 2026">
## v1.8.1
[View release on GitHub](https://github.com/crewAIInc/crewAI/releases/tag/1.8.1)
## What's Changed
### Features
- Add A2A task execution utilities
- Add A2A server configuration and agent card generation
- Add additional transport mechanisms
- Add Galileo integration support
### Bug Fixes
- Improve Azure model compatibility
- Expand frame inspection depth to detect parent_flow
- Resolve task execution span management issues
- Enhance error handling for human feedback scenarios during flow execution
### Documentation
- Add A2A agent card documentation
- Add PII redaction feature documentation
### Contributors
@Anaisdg, @GininDenis, @greysonlalonde, @joaomdmoura, @koushiv777, @lorenzejay, @vinibrsl
</Update>
<Update label="Jan 08, 2026">
## v1.8.0

View File

@@ -4,6 +4,74 @@ description: "CrewAI의 제품 업데이트, 개선 사항 및 버그 수정"
icon: "clock"
mode: "wide"
---
<Update label="2026년 1월 26일">
## v1.9.0
[GitHub 릴리스 보기](https://github.com/crewAIInc/crewAI/releases/tag/1.9.0)
## 변경 사항
### 기능
- 프로바이더 전반에 걸친 구조화된 출력 및 response_format 지원 추가
- 스트리밍 응답에 응답 ID 추가
- 부모-자식 계층 구조를 가진 이벤트 순서 추가
- Keycloak SSO 인증 지원 추가
- 멀티모달 파일 처리 기능 추가
- 네이티브 OpenAI responses API 지원 추가
- A2A 작업 실행 유틸리티 추가
- A2A 서버 구성 및 에이전트 카드 생성 추가
- 이벤트 시스템 향상 및 전송 옵션 확장
- 도구 호출 메커니즘 개선
### 버그 수정
- aiocache를 사용할 수 없을 때 폴백 메모리 캐시로 파일 저장소 향상
- 문서 목록이 비어 있지 않도록 보장
- Bedrock 중지 시퀀스 적절히 처리
- Google Vertex API 키 지원 추가
- Azure 모델 중지 단어 감지 향상
- 흐름 실행 시 HumanFeedbackPending 오류 처리 개선
- 실행 스팬 작업 연결 해제 수정
### 문서
- 네이티브 파일 처리 문서 추가
- OpenAI responses API 문서 추가
- 에이전트 카드 구현 가이드 추가
- A2A 문서 개선
- v1.8.0 변경 로그 업데이트
### 기여자
@Anaisdg, @GininDenis, @Vidit-Ostwal, @greysonlalonde, @heitorado, @joaomdmoura, @koushiv777, @lorenzejay, @nicoferdi96, @vinibrsl
</Update>
<Update label="2026년 1월 15일">
## v1.8.1
[GitHub 릴리스 보기](https://github.com/crewAIInc/crewAI/releases/tag/1.8.1)
## 변경 사항
### 기능
- A2A 작업 실행 유틸리티 추가
- A2A 서버 구성 및 에이전트 카드 생성 추가
- 추가 전송 메커니즘 추가
- Galileo 통합 지원 추가
### 버그 수정
- Azure 모델 호환성 개선
- parent_flow 감지를 위한 프레임 검사 깊이 확장
- 작업 실행 스팬 관리 문제 해결
- 흐름 실행 중 휴먼 피드백 시나리오에 대한 오류 처리 향상
### 문서
- A2A 에이전트 카드 문서 추가
- PII 삭제 기능 문서 추가
### 기여자
@Anaisdg, @GininDenis, @greysonlalonde, @joaomdmoura, @koushiv777, @lorenzejay, @vinibrsl
</Update>
<Update label="2026년 1월 8일">
## v1.8.0

View File

@@ -4,6 +4,74 @@ description: "Atualizações de produto, melhorias e correções do CrewAI"
icon: "clock"
mode: "wide"
---
<Update label="26 jan 2026">
## v1.9.0
[Ver release no GitHub](https://github.com/crewAIInc/crewAI/releases/tag/1.9.0)
## O que Mudou
### Funcionalidades
- Adicionar suporte a saídas estruturadas e response_format em vários provedores
- Adicionar ID de resposta às respostas de streaming
- Adicionar ordenação de eventos com hierarquias pai-filho
- Adicionar suporte à autenticação SSO Keycloak
- Adicionar capacidades de manipulação de arquivos multimodais
- Adicionar suporte nativo à API de respostas OpenAI
- Adicionar utilitários de execução de tarefas A2A
- Adicionar configuração de servidor A2A e geração de cartão de agente
- Aprimorar sistema de eventos e expandir opções de transporte
- Melhorar mecanismos de chamada de ferramentas
### Correções de Bugs
- Aprimorar armazenamento de arquivos com cache de memória de fallback quando aiocache não está disponível
- Garantir que lista de documentos não esteja vazia
- Tratar sequências de parada do Bedrock adequadamente
- Adicionar suporte à chave de API do Google Vertex
- Aprimorar detecção de palavras de parada do modelo Azure
- Melhorar tratamento de erros para HumanFeedbackPending na execução de fluxo
- Corrigir desvinculação de tarefa do span de execução
### Documentação
- Adicionar documentação de manipulação nativa de arquivos
- Adicionar documentação da API de respostas OpenAI
- Adicionar orientação de implementação de cartão de agente
- Refinar documentação A2A
- Atualizar changelog para v1.8.0
### Contribuidores
@Anaisdg, @GininDenis, @Vidit-Ostwal, @greysonlalonde, @heitorado, @joaomdmoura, @koushiv777, @lorenzejay, @nicoferdi96, @vinibrsl
</Update>
<Update label="15 jan 2026">
## v1.8.1
[Ver release no GitHub](https://github.com/crewAIInc/crewAI/releases/tag/1.8.1)
## O que Mudou
### Funcionalidades
- Adicionar utilitários de execução de tarefas A2A
- Adicionar configuração de servidor A2A e geração de cartão de agente
- Adicionar mecanismos de transporte adicionais
- Adicionar suporte à integração Galileo
### Correções de Bugs
- Melhorar compatibilidade do modelo Azure
- Expandir profundidade de inspeção de frame para detectar parent_flow
- Resolver problemas de gerenciamento de span de execução de tarefas
- Aprimorar tratamento de erros para cenários de feedback humano durante execução de fluxo
### Documentação
- Adicionar documentação de cartão de agente A2A
- Adicionar documentação de recurso de redação de PII
### Contribuidores
@Anaisdg, @GininDenis, @greysonlalonde, @joaomdmoura, @koushiv777, @lorenzejay, @vinibrsl
</Update>
<Update label="08 jan 2026">
## v1.8.0

View File

@@ -152,4 +152,4 @@ __all__ = [
"wrap_file_source",
]
__version__ = "1.9.0"
__version__ = "1.9.1"

View File

@@ -12,7 +12,7 @@ dependencies = [
"pytube~=15.0.0",
"requests~=2.32.5",
"docker~=7.1.0",
"crewai==1.9.0",
"crewai==1.9.1",
"lancedb~=0.5.4",
"tiktoken~=0.8.0",
"beautifulsoup4~=4.13.4",

View File

@@ -291,4 +291,4 @@ __all__ = [
"ZapierActionTools",
]
__version__ = "1.9.0"
__version__ = "1.9.1"

View File

@@ -1,10 +1,11 @@
"""Crewai Enterprise Tools."""
import os
import json
import re
from typing import Any, Optional, Union, cast, get_origin
import os
from typing import Any
from crewai.tools import BaseTool
from crewai.utilities.pydantic_schema_utils import create_model_from_schema
from pydantic import Field, create_model
import requests
@@ -14,77 +15,6 @@ from crewai_tools.tools.crewai_platform_tools.misc import (
)
class AllOfSchemaAnalyzer:
"""Helper class to analyze and merge allOf schemas."""
def __init__(self, schemas: list[dict[str, Any]]):
self.schemas = schemas
self._explicit_types: list[str] = []
self._merged_properties: dict[str, Any] = {}
self._merged_required: list[str] = []
self._analyze_schemas()
def _analyze_schemas(self) -> None:
"""Analyze all schemas and extract relevant information."""
for schema in self.schemas:
if "type" in schema:
self._explicit_types.append(schema["type"])
# Merge object properties
if schema.get("type") == "object" and "properties" in schema:
self._merged_properties.update(schema["properties"])
if "required" in schema:
self._merged_required.extend(schema["required"])
def has_consistent_type(self) -> bool:
"""Check if all schemas have the same explicit type."""
return len(set(self._explicit_types)) == 1 if self._explicit_types else False
def get_consistent_type(self) -> type[Any]:
"""Get the consistent type if all schemas agree."""
if not self.has_consistent_type():
raise ValueError("No consistent type found")
type_mapping = {
"string": str,
"integer": int,
"number": float,
"boolean": bool,
"array": list,
"object": dict,
"null": type(None),
}
return type_mapping.get(self._explicit_types[0], str)
def has_object_schemas(self) -> bool:
"""Check if any schemas are object types with properties."""
return bool(self._merged_properties)
def get_merged_properties(self) -> dict[str, Any]:
"""Get merged properties from all object schemas."""
return self._merged_properties
def get_merged_required_fields(self) -> list[str]:
"""Get merged required fields from all object schemas."""
return list(set(self._merged_required)) # Remove duplicates
def get_fallback_type(self) -> type[Any]:
"""Get a fallback type when merging fails."""
if self._explicit_types:
# Use the first explicit type
type_mapping = {
"string": str,
"integer": int,
"number": float,
"boolean": bool,
"array": list,
"object": dict,
"null": type(None),
}
return type_mapping.get(self._explicit_types[0], str)
return str
class CrewAIPlatformActionTool(BaseTool):
action_name: str = Field(default="", description="The name of the action")
action_schema: dict[str, Any] = Field(
@@ -97,42 +27,19 @@ class CrewAIPlatformActionTool(BaseTool):
action_name: str,
action_schema: dict[str, Any],
):
self._model_registry: dict[str, type[Any]] = {}
self._base_name = self._sanitize_name(action_name)
schema_props, required = self._extract_schema_info(action_schema)
field_definitions: dict[str, Any] = {}
for param_name, param_details in schema_props.items():
param_desc = param_details.get("description", "")
is_required = param_name in required
parameters = action_schema.get("function", {}).get("parameters", {})
if parameters and parameters.get("properties"):
try:
field_type = self._process_schema_type(
param_details, self._sanitize_name(param_name).title()
)
if "title" not in parameters:
parameters = {**parameters, "title": f"{action_name}Schema"}
if "type" not in parameters:
parameters = {**parameters, "type": "object"}
args_schema = create_model_from_schema(parameters)
except Exception:
field_type = str
field_definitions[param_name] = self._create_field_definition(
field_type, is_required, param_desc
)
if field_definitions:
try:
args_schema = create_model(
f"{self._base_name}Schema", **field_definitions
)
except Exception:
args_schema = create_model(
f"{self._base_name}Schema",
input_text=(str, Field(description="Input for the action")),
)
args_schema = create_model(f"{action_name}Schema")
else:
args_schema = create_model(
f"{self._base_name}Schema",
input_text=(str, Field(description="Input for the action")),
)
args_schema = create_model(f"{action_name}Schema")
super().__init__(
name=action_name.lower().replace(" ", "_"),
@@ -142,285 +49,12 @@ class CrewAIPlatformActionTool(BaseTool):
self.action_name = action_name
self.action_schema = action_schema
@staticmethod
def _sanitize_name(name: str) -> str:
name = name.lower().replace(" ", "_")
sanitized = re.sub(r"[^a-zA-Z0-9_]", "", name)
parts = sanitized.split("_")
return "".join(word.capitalize() for word in parts if word)
@staticmethod
def _extract_schema_info(
action_schema: dict[str, Any],
) -> tuple[dict[str, Any], list[str]]:
schema_props = (
action_schema.get("function", {})
.get("parameters", {})
.get("properties", {})
)
required = (
action_schema.get("function", {}).get("parameters", {}).get("required", [])
)
return schema_props, required
def _process_schema_type(self, schema: dict[str, Any], type_name: str) -> type[Any]:
"""
Process a JSON Schema type definition into a Python type.
Handles complex schema constructs like anyOf, oneOf, allOf, enums, arrays, and objects.
"""
# Handle composite schema types (anyOf, oneOf, allOf)
if composite_type := self._process_composite_schema(schema, type_name):
return composite_type
# Handle primitive types and simple constructs
return self._process_primitive_schema(schema, type_name)
def _process_composite_schema(
self, schema: dict[str, Any], type_name: str
) -> type[Any] | None:
"""Process composite schema types: anyOf, oneOf, allOf."""
if "anyOf" in schema:
return self._process_any_of_schema(schema["anyOf"], type_name)
if "oneOf" in schema:
return self._process_one_of_schema(schema["oneOf"], type_name)
if "allOf" in schema:
return self._process_all_of_schema(schema["allOf"], type_name)
return None
def _process_any_of_schema(
self, any_of_types: list[dict[str, Any]], type_name: str
) -> type[Any]:
"""Process anyOf schema - creates Union of possible types."""
is_nullable = any(t.get("type") == "null" for t in any_of_types)
non_null_types = [t for t in any_of_types if t.get("type") != "null"]
if not non_null_types:
return cast(
type[Any], cast(object, str | None)
) # fallback for only-null case
base_type = (
self._process_schema_type(non_null_types[0], type_name)
if len(non_null_types) == 1
else self._create_union_type(non_null_types, type_name, "AnyOf")
)
return base_type | None if is_nullable else base_type # type: ignore[return-value]
def _process_one_of_schema(
self, one_of_types: list[dict[str, Any]], type_name: str
) -> type[Any]:
"""Process oneOf schema - creates Union of mutually exclusive types."""
return (
self._process_schema_type(one_of_types[0], type_name)
if len(one_of_types) == 1
else self._create_union_type(one_of_types, type_name, "OneOf")
)
def _process_all_of_schema(
self, all_of_schemas: list[dict[str, Any]], type_name: str
) -> type[Any]:
"""Process allOf schema - merges schemas that must all be satisfied."""
if len(all_of_schemas) == 1:
return self._process_schema_type(all_of_schemas[0], type_name)
return self._merge_all_of_schemas(all_of_schemas, type_name)
def _create_union_type(
self, schemas: list[dict[str, Any]], type_name: str, prefix: str
) -> type[Any]:
"""Create a Union type from multiple schemas."""
return Union[ # type: ignore # noqa: UP007
tuple(
self._process_schema_type(schema, f"{type_name}{prefix}{i}")
for i, schema in enumerate(schemas)
)
]
def _process_primitive_schema(
self, schema: dict[str, Any], type_name: str
) -> type[Any]:
"""Process primitive schema types: string, number, array, object, etc."""
json_type = schema.get("type", "string")
if "enum" in schema:
return self._process_enum_schema(schema, json_type)
if json_type == "array":
return self._process_array_schema(schema, type_name)
if json_type == "object":
return self._create_nested_model(schema, type_name)
return self._map_json_type_to_python(json_type)
def _process_enum_schema(self, schema: dict[str, Any], json_type: str) -> type[Any]:
"""Process enum schema - currently falls back to base type."""
enum_values = schema["enum"]
if not enum_values:
return self._map_json_type_to_python(json_type)
# For Literal types, we need to pass the values directly, not as a tuple
# This is a workaround since we can't dynamically create Literal types easily
# Fall back to the base JSON type for now
return self._map_json_type_to_python(json_type)
def _process_array_schema(
self, schema: dict[str, Any], type_name: str
) -> type[Any]:
items_schema = schema.get("items", {"type": "string"})
item_type = self._process_schema_type(items_schema, f"{type_name}Item")
return list[item_type] # type: ignore
def _merge_all_of_schemas(
self, schemas: list[dict[str, Any]], type_name: str
) -> type[Any]:
schema_analyzer = AllOfSchemaAnalyzer(schemas)
if schema_analyzer.has_consistent_type():
return schema_analyzer.get_consistent_type()
if schema_analyzer.has_object_schemas():
return self._create_merged_object_model(
schema_analyzer.get_merged_properties(),
schema_analyzer.get_merged_required_fields(),
type_name,
)
return schema_analyzer.get_fallback_type()
def _create_merged_object_model(
self, properties: dict[str, Any], required: list[str], model_name: str
) -> type[Any]:
full_model_name = f"{self._base_name}{model_name}AllOf"
if full_model_name in self._model_registry:
return self._model_registry[full_model_name]
if not properties:
return dict
field_definitions = self._build_field_definitions(
properties, required, model_name
)
try:
merged_model = create_model(full_model_name, **field_definitions)
self._model_registry[full_model_name] = merged_model
return merged_model
except Exception:
return dict
def _build_field_definitions(
self, properties: dict[str, Any], required: list[str], model_name: str
) -> dict[str, Any]:
field_definitions = {}
for prop_name, prop_schema in properties.items():
prop_desc = prop_schema.get("description", "")
is_required = prop_name in required
try:
prop_type = self._process_schema_type(
prop_schema, f"{model_name}{self._sanitize_name(prop_name).title()}"
)
except Exception:
prop_type = str
field_definitions[prop_name] = self._create_field_definition(
prop_type, is_required, prop_desc
)
return field_definitions
def _create_nested_model(
self, schema: dict[str, Any], model_name: str
) -> type[Any]:
full_model_name = f"{self._base_name}{model_name}"
if full_model_name in self._model_registry:
return self._model_registry[full_model_name]
properties = schema.get("properties", {})
required_fields = schema.get("required", [])
if not properties:
return dict
field_definitions = {}
for prop_name, prop_schema in properties.items():
prop_desc = prop_schema.get("description", "")
is_required = prop_name in required_fields
try:
prop_type = self._process_schema_type(
prop_schema, f"{model_name}{self._sanitize_name(prop_name).title()}"
)
except Exception:
prop_type = str
field_definitions[prop_name] = self._create_field_definition(
prop_type, is_required, prop_desc
)
try:
nested_model = create_model(full_model_name, **field_definitions) # type: ignore
self._model_registry[full_model_name] = nested_model
return nested_model
except Exception:
return dict
def _create_field_definition(
self, field_type: type[Any], is_required: bool, description: str
) -> tuple:
if is_required:
return (field_type, Field(description=description))
if get_origin(field_type) is Union:
return (field_type, Field(default=None, description=description))
return (
Optional[field_type], # noqa: UP045
Field(default=None, description=description),
)
def _map_json_type_to_python(self, json_type: str) -> type[Any]:
type_mapping = {
"string": str,
"integer": int,
"number": float,
"boolean": bool,
"array": list,
"object": dict,
"null": type(None),
}
return type_mapping.get(json_type, str)
def _get_required_nullable_fields(self) -> list[str]:
schema_props, required = self._extract_schema_info(self.action_schema)
required_nullable_fields = []
for param_name in required:
param_details = schema_props.get(param_name, {})
if self._is_nullable_type(param_details):
required_nullable_fields.append(param_name)
return required_nullable_fields
def _is_nullable_type(self, schema: dict[str, Any]) -> bool:
if "anyOf" in schema:
return any(t.get("type") == "null" for t in schema["anyOf"])
return schema.get("type") == "null"
-def _run(self, **kwargs) -> str:
+def _run(self, **kwargs: Any) -> str:
try:
cleaned_kwargs = {
key: value for key, value in kwargs.items() if value is not None
}
required_nullable_fields = self._get_required_nullable_fields()
for field_name in required_nullable_fields:
if field_name not in cleaned_kwargs:
cleaned_kwargs[field_name] = None
api_url = (
f"{get_platform_api_base_url()}/actions/{self.action_name}/execute"
)
@@ -429,7 +63,9 @@ class CrewAIPlatformActionTool(BaseTool):
"Authorization": f"Bearer {token}",
"Content-Type": "application/json",
}
-payload = cleaned_kwargs
+payload = {
+"integration": cleaned_kwargs if cleaned_kwargs else {"_noop": True}
+}
response = requests.post(
url=api_url,
@@ -441,7 +77,14 @@ class CrewAIPlatformActionTool(BaseTool):
data = response.json()
if not response.ok:
-error_message = data.get("error", {}).get("message", json.dumps(data))
+if isinstance(data, dict):
+error_info = data.get("error", {})
+if isinstance(error_info, dict):
+error_message = error_info.get("message", json.dumps(data))
+else:
+error_message = str(error_info)
+else:
+error_message = str(data)
return f"API request failed: {error_message}"
return json.dumps(data, indent=2)

View File

@@ -1,5 +1,10 @@
from typing import Any
"""CrewAI platform tool builder for fetching and creating action tools."""
import logging
import os
from types import TracebackType
from typing import Any
from crewai.tools import BaseTool
import requests
@@ -12,22 +17,29 @@ from crewai_tools.tools.crewai_platform_tools.misc import (
)
logger = logging.getLogger(__name__)
class CrewaiPlatformToolBuilder:
"""Builds platform tools from remote action schemas."""
def __init__(
self,
apps: list[str],
-):
+) -> None:
self._apps = apps
-self._actions_schema = {} # type: ignore[var-annotated]
-self._tools = None
+self._actions_schema: dict[str, dict[str, Any]] = {}
+self._tools: list[BaseTool] | None = None
def tools(self) -> list[BaseTool]:
"""Fetch actions and return built tools."""
if self._tools is None:
self._fetch_actions()
self._create_tools()
return self._tools if self._tools is not None else []
-def _fetch_actions(self):
+def _fetch_actions(self) -> None:
+"""Fetch action schemas from the platform API."""
actions_url = f"{get_platform_api_base_url()}/actions"
headers = {"Authorization": f"Bearer {get_platform_integration_token()}"}
@@ -40,7 +52,8 @@ class CrewaiPlatformToolBuilder:
verify=os.environ.get("CREWAI_FACTORY", "false").lower() != "true",
)
response.raise_for_status()
-except Exception:
+except Exception as e:
+logger.error(f"Failed to fetch platform tools for apps {self._apps}: {e}")
return
raw_data = response.json()
@@ -51,6 +64,8 @@ class CrewaiPlatformToolBuilder:
for app, action_list in action_categories.items():
if isinstance(action_list, list):
for action in action_list:
if not isinstance(action, dict):
continue
if action_name := action.get("name"):
action_schema = {
"function": {
@@ -64,72 +79,16 @@ class CrewaiPlatformToolBuilder:
}
self._actions_schema[action_name] = action_schema
def _generate_detailed_description(
self, schema: dict[str, Any], indent: int = 0
) -> list[str]:
descriptions = []
indent_str = " " * indent
schema_type = schema.get("type", "string")
if schema_type == "object":
properties = schema.get("properties", {})
required_fields = schema.get("required", [])
if properties:
descriptions.append(f"{indent_str}Object with properties:")
for prop_name, prop_schema in properties.items():
prop_desc = prop_schema.get("description", "")
is_required = prop_name in required_fields
req_str = " (required)" if is_required else " (optional)"
descriptions.append(
f"{indent_str} - {prop_name}: {prop_desc}{req_str}"
)
if prop_schema.get("type") == "object":
descriptions.extend(
self._generate_detailed_description(prop_schema, indent + 2)
)
elif prop_schema.get("type") == "array":
items_schema = prop_schema.get("items", {})
if items_schema.get("type") == "object":
descriptions.append(f"{indent_str} Array of objects:")
descriptions.extend(
self._generate_detailed_description(
items_schema, indent + 3
)
)
elif "enum" in items_schema:
descriptions.append(
f"{indent_str} Array of enum values: {items_schema['enum']}"
)
elif "enum" in prop_schema:
descriptions.append(
f"{indent_str} Enum values: {prop_schema['enum']}"
)
return descriptions
-def _create_tools(self):
-tools = []
+def _create_tools(self) -> None:
+"""Create tool instances from fetched action schemas."""
+tools: list[BaseTool] = []
for action_name, action_schema in self._actions_schema.items():
function_details = action_schema.get("function", {})
description = function_details.get("description", f"Execute {action_name}")
parameters = function_details.get("parameters", {})
param_descriptions = []
if parameters.get("properties"):
param_descriptions.append("\nDetailed Parameter Structure:")
param_descriptions.extend(
self._generate_detailed_description(parameters)
)
full_description = description + "\n".join(param_descriptions)
tool = CrewAIPlatformActionTool(
-description=full_description,
+description=description,
action_name=action_name,
action_schema=action_schema,
)
@@ -138,8 +97,14 @@ class CrewaiPlatformToolBuilder:
self._tools = tools
-def __enter__(self):
+def __enter__(self) -> list[BaseTool]:
+"""Enter context manager and return tools."""
return self.tools()
-def __exit__(self, exc_type, exc_val, exc_tb):
-pass
+def __exit__(
+self,
+exc_type: type[BaseException] | None,
+exc_val: BaseException | None,
+exc_tb: TracebackType | None,
+) -> None:
+"""Exit context manager."""

View File

@@ -1,4 +1,3 @@
from typing import Union, get_args, get_origin
from unittest.mock import patch, Mock
import os
@@ -7,251 +6,6 @@ from crewai_tools.tools.crewai_platform_tools.crewai_platform_action_tool import
)
class TestSchemaProcessing:
def setup_method(self):
self.base_action_schema = {
"function": {
"parameters": {
"properties": {},
"required": []
}
}
}
def create_test_tool(self, action_name="test_action"):
return CrewAIPlatformActionTool(
description="Test tool",
action_name=action_name,
action_schema=self.base_action_schema
)
def test_anyof_multiple_types(self):
tool = self.create_test_tool()
test_schema = {
"anyOf": [
{"type": "string"},
{"type": "number"},
{"type": "integer"}
]
}
result_type = tool._process_schema_type(test_schema, "TestField")
assert get_origin(result_type) is Union
args = get_args(result_type)
expected_types = (str, float, int)
for expected_type in expected_types:
assert expected_type in args
def test_anyof_with_null(self):
tool = self.create_test_tool()
test_schema = {
"anyOf": [
{"type": "string"},
{"type": "number"},
{"type": "null"}
]
}
result_type = tool._process_schema_type(test_schema, "TestFieldNullable")
assert get_origin(result_type) is Union
args = get_args(result_type)
assert type(None) in args
assert str in args
assert float in args
def test_anyof_single_type(self):
tool = self.create_test_tool()
test_schema = {
"anyOf": [
{"type": "string"}
]
}
result_type = tool._process_schema_type(test_schema, "TestFieldSingle")
assert result_type is str
def test_oneof_multiple_types(self):
tool = self.create_test_tool()
test_schema = {
"oneOf": [
{"type": "string"},
{"type": "boolean"}
]
}
result_type = tool._process_schema_type(test_schema, "TestFieldOneOf")
assert get_origin(result_type) is Union
args = get_args(result_type)
expected_types = (str, bool)
for expected_type in expected_types:
assert expected_type in args
def test_oneof_single_type(self):
tool = self.create_test_tool()
test_schema = {
"oneOf": [
{"type": "integer"}
]
}
result_type = tool._process_schema_type(test_schema, "TestFieldOneOfSingle")
assert result_type is int
def test_basic_types(self):
tool = self.create_test_tool()
test_cases = [
({"type": "string"}, str),
({"type": "integer"}, int),
({"type": "number"}, float),
({"type": "boolean"}, bool),
({"type": "array", "items": {"type": "string"}}, list),
]
for schema, expected_type in test_cases:
result_type = tool._process_schema_type(schema, "TestField")
if schema["type"] == "array":
assert get_origin(result_type) is list
else:
assert result_type is expected_type
def test_enum_handling(self):
tool = self.create_test_tool()
test_schema = {
"type": "string",
"enum": ["option1", "option2", "option3"]
}
result_type = tool._process_schema_type(test_schema, "TestFieldEnum")
assert result_type is str
def test_nested_anyof(self):
tool = self.create_test_tool()
test_schema = {
"anyOf": [
{"type": "string"},
{
"anyOf": [
{"type": "integer"},
{"type": "boolean"}
]
}
]
}
result_type = tool._process_schema_type(test_schema, "TestFieldNested")
assert get_origin(result_type) is Union
args = get_args(result_type)
assert str in args
if len(args) == 3:
assert int in args
assert bool in args
else:
nested_union = next(arg for arg in args if get_origin(arg) is Union)
nested_args = get_args(nested_union)
assert int in nested_args
assert bool in nested_args
def test_allof_same_types(self):
tool = self.create_test_tool()
test_schema = {
"allOf": [
{"type": "string"},
{"type": "string", "maxLength": 100}
]
}
result_type = tool._process_schema_type(test_schema, "TestFieldAllOfSame")
assert result_type is str
def test_allof_object_merge(self):
tool = self.create_test_tool()
test_schema = {
"allOf": [
{
"type": "object",
"properties": {
"name": {"type": "string"},
"age": {"type": "integer"}
},
"required": ["name"]
},
{
"type": "object",
"properties": {
"email": {"type": "string"},
"age": {"type": "integer"}
},
"required": ["email"]
}
]
}
result_type = tool._process_schema_type(test_schema, "TestFieldAllOfMerged")
# Should create a merged model with all properties
# The implementation might fall back to dict if model creation fails
# Let's just verify it's not a basic scalar type
assert result_type is not str
assert result_type is not int
assert result_type is not bool
# It could be dict (fallback) or a proper model class
assert result_type in (dict, type) or hasattr(result_type, '__name__')
def test_allof_single_schema(self):
"""Test that allOf with single schema works correctly."""
tool = self.create_test_tool()
test_schema = {
"allOf": [
{"type": "boolean"}
]
}
result_type = tool._process_schema_type(test_schema, "TestFieldAllOfSingle")
# Should be just bool
assert result_type is bool
def test_allof_mixed_types(self):
tool = self.create_test_tool()
test_schema = {
"allOf": [
{"type": "string"},
{"type": "integer"}
]
}
result_type = tool._process_schema_type(test_schema, "TestFieldAllOfMixed")
assert result_type is str
class TestCrewAIPlatformActionToolVerify:
"""Test suite for SSL verification behavior based on CREWAI_FACTORY environment variable"""

View File

@@ -224,43 +224,6 @@ class TestCrewaiPlatformToolBuilder(unittest.TestCase):
_, kwargs = mock_get.call_args
assert kwargs["params"]["apps"] == ""
def test_detailed_description_generation(self):
builder = CrewaiPlatformToolBuilder(apps=["test"])
complex_schema = {
"type": "object",
"properties": {
"simple_string": {"type": "string", "description": "A simple string"},
"nested_object": {
"type": "object",
"properties": {
"inner_prop": {
"type": "integer",
"description": "Inner property",
}
},
"description": "Nested object",
},
"array_prop": {
"type": "array",
"items": {"type": "string"},
"description": "Array of strings",
},
},
}
descriptions = builder._generate_detailed_description(complex_schema)
assert isinstance(descriptions, list)
assert len(descriptions) > 0
description_text = "\n".join(descriptions)
assert "simple_string" in description_text
assert "nested_object" in description_text
assert "array_prop" in description_text
class TestCrewaiPlatformToolBuilderVerify(unittest.TestCase):
"""Test suite for SSL verification behavior in CrewaiPlatformToolBuilder"""

View File

@@ -49,7 +49,7 @@ Repository = "https://github.com/crewAIInc/crewAI"
[project.optional-dependencies]
tools = [
"crewai-tools==1.9.0",
"crewai-tools==1.9.1",
]
embeddings = [
"tiktoken~=0.8.0"

View File

@@ -40,7 +40,7 @@ def _suppress_pydantic_deprecation_warnings() -> None:
_suppress_pydantic_deprecation_warnings()
__version__ = "1.9.0"
__version__ = "1.9.1"
_telemetry_submitted = False

View File

@@ -37,7 +37,8 @@ class CrewAgentExecutorMixin:
self.crew
and self.agent
and self.task
and f"Action: {sanitize_tool_name('Delegate work to coworker')}" not in output.text
and f"Action: {sanitize_tool_name('Delegate work to coworker')}"
not in output.text
):
try:
if (
@@ -132,10 +133,11 @@ class CrewAgentExecutorMixin:
and self.crew._long_term_memory
and self.crew._entity_memory is None
):
-self._printer.print(
-content="Long term memory is enabled, but entity memory is not enabled. Please configure entity memory or set memory=True to automatically enable it.",
-color="bold_yellow",
-)
+if self.agent and self.agent.verbose:
+self._printer.print(
+content="Long term memory is enabled, but entity memory is not enabled. Please configure entity memory or set memory=True to automatically enable it.",
+color="bold_yellow",
+)
def _ask_human_input(self, final_answer: str) -> str:
"""Prompt human input with mode-appropriate messaging.

View File

@@ -28,6 +28,11 @@ from crewai.hooks.llm_hooks import (
get_after_llm_call_hooks,
get_before_llm_call_hooks,
)
from crewai.hooks.tool_hooks import (
ToolCallHookContext,
get_after_tool_call_hooks,
get_before_tool_call_hooks,
)
from crewai.utilities.agent_utils import (
aget_llm_response,
convert_tools_to_openai_schema,
@@ -201,13 +206,14 @@ class CrewAgentExecutor(CrewAgentExecutorMixin):
try:
formatted_answer = self._invoke_loop()
except AssertionError:
-self._printer.print(
-content="Agent failed to reach a final answer. This is likely a bug - please report it.",
-color="red",
-)
+if self.agent.verbose:
+self._printer.print(
+content="Agent failed to reach a final answer. This is likely a bug - please report it.",
+color="red",
+)
raise
except Exception as e:
-handle_unknown_error(self._printer, e)
+handle_unknown_error(self._printer, e, verbose=self.agent.verbose)
raise
if self.ask_for_human_input:
@@ -322,6 +328,7 @@ class CrewAgentExecutor(CrewAgentExecutorMixin):
messages=self.messages,
llm=self.llm,
callbacks=self.callbacks,
verbose=self.agent.verbose,
)
break
@@ -336,6 +343,7 @@ class CrewAgentExecutor(CrewAgentExecutorMixin):
from_agent=self.agent,
response_model=self.response_model,
executor_context=self,
verbose=self.agent.verbose,
)
# breakpoint()
if self.response_model is not None:
@@ -394,6 +402,7 @@ class CrewAgentExecutor(CrewAgentExecutorMixin):
iterations=self.iterations,
log_error_after=self.log_error_after,
printer=self._printer,
verbose=self.agent.verbose,
)
except Exception as e:
@@ -408,9 +417,10 @@ class CrewAgentExecutor(CrewAgentExecutorMixin):
llm=self.llm,
callbacks=self.callbacks,
i18n=self._i18n,
verbose=self.agent.verbose,
)
continue
-handle_unknown_error(self._printer, e)
+handle_unknown_error(self._printer, e, verbose=self.agent.verbose)
raise e
finally:
self.iterations += 1
@@ -456,6 +466,7 @@ class CrewAgentExecutor(CrewAgentExecutorMixin):
messages=self.messages,
llm=self.llm,
callbacks=self.callbacks,
verbose=self.agent.verbose,
)
self._show_logs(formatted_answer)
return formatted_answer
@@ -477,6 +488,7 @@ class CrewAgentExecutor(CrewAgentExecutorMixin):
from_agent=self.agent,
response_model=self.response_model,
executor_context=self,
verbose=self.agent.verbose,
)
# Check if the response is a list of tool calls
@@ -530,9 +542,10 @@ class CrewAgentExecutor(CrewAgentExecutorMixin):
llm=self.llm,
callbacks=self.callbacks,
i18n=self._i18n,
verbose=self.agent.verbose,
)
continue
-handle_unknown_error(self._printer, e)
+handle_unknown_error(self._printer, e, verbose=self.agent.verbose)
raise e
finally:
self.iterations += 1
@@ -554,6 +567,7 @@ class CrewAgentExecutor(CrewAgentExecutorMixin):
from_agent=self.agent,
response_model=self.response_model,
executor_context=self,
verbose=self.agent.verbose,
)
formatted_answer = AgentFinish(
@@ -749,8 +763,42 @@ class CrewAgentExecutor(CrewAgentExecutorMixin):
track_delegation_if_needed(func_name, args_dict, self.task)
-# Execute the tool (only if not cached and not at max usage)
-if not from_cache and not max_usage_reached:
# Find the structured tool for hook context
structured_tool: CrewStructuredTool | None = None
for structured in self.tools or []:
if sanitize_tool_name(structured.name) == func_name:
structured_tool = structured
break
# Execute before_tool_call hooks
hook_blocked = False
before_hook_context = ToolCallHookContext(
tool_name=func_name,
tool_input=args_dict,
tool=structured_tool, # type: ignore[arg-type]
agent=self.agent,
task=self.task,
crew=self.crew,
)
before_hooks = get_before_tool_call_hooks()
try:
for hook in before_hooks:
hook_result = hook(before_hook_context)
if hook_result is False:
hook_blocked = True
break
except Exception as hook_error:
if self.agent.verbose:
self._printer.print(
content=f"Error in before_tool_call hook: {hook_error}",
color="red",
)
# If hook blocked execution, set result and skip tool execution
if hook_blocked:
result = f"Tool execution blocked by hook. Tool: {func_name}"
# Execute the tool (only if not cached, not at max usage, and not blocked by hook)
elif not from_cache and not max_usage_reached:
result = "Tool not found"
if func_name in available_functions:
try:
@@ -798,6 +846,29 @@ class CrewAgentExecutor(CrewAgentExecutorMixin):
# Return error message when max usage limit is reached
result = f"Tool '{func_name}' has reached its usage limit of {original_tool.max_usage_count} times and cannot be used anymore."
after_hook_context = ToolCallHookContext(
tool_name=func_name,
tool_input=args_dict,
tool=structured_tool, # type: ignore[arg-type]
agent=self.agent,
task=self.task,
crew=self.crew,
tool_result=result,
)
after_hooks = get_after_tool_call_hooks()
try:
for after_hook in after_hooks:
after_hook_result = after_hook(after_hook_context)
if after_hook_result is not None:
result = after_hook_result
after_hook_context.tool_result = result
except Exception as hook_error:
if self.agent.verbose:
self._printer.print(
content=f"Error in after_tool_call hook: {hook_error}",
color="red",
)
# Emit tool usage finished event
crewai_event_bus.emit(
self,
@@ -882,13 +953,14 @@ class CrewAgentExecutor(CrewAgentExecutorMixin):
try:
formatted_answer = await self._ainvoke_loop()
except AssertionError:
-self._printer.print(
-content="Agent failed to reach a final answer. This is likely a bug - please report it.",
-color="red",
-)
+if self.agent.verbose:
+self._printer.print(
+content="Agent failed to reach a final answer. This is likely a bug - please report it.",
+color="red",
+)
raise
except Exception as e:
-handle_unknown_error(self._printer, e)
+handle_unknown_error(self._printer, e, verbose=self.agent.verbose)
raise
if self.ask_for_human_input:
@@ -939,6 +1011,7 @@ class CrewAgentExecutor(CrewAgentExecutorMixin):
messages=self.messages,
llm=self.llm,
callbacks=self.callbacks,
verbose=self.agent.verbose,
)
break
@@ -953,6 +1026,7 @@ class CrewAgentExecutor(CrewAgentExecutorMixin):
from_agent=self.agent,
response_model=self.response_model,
executor_context=self,
verbose=self.agent.verbose,
)
if self.response_model is not None:
@@ -1010,6 +1084,7 @@ class CrewAgentExecutor(CrewAgentExecutorMixin):
iterations=self.iterations,
log_error_after=self.log_error_after,
printer=self._printer,
verbose=self.agent.verbose,
)
except Exception as e:
@@ -1023,9 +1098,10 @@ class CrewAgentExecutor(CrewAgentExecutorMixin):
llm=self.llm,
callbacks=self.callbacks,
i18n=self._i18n,
verbose=self.agent.verbose,
)
continue
-handle_unknown_error(self._printer, e)
+handle_unknown_error(self._printer, e, verbose=self.agent.verbose)
raise e
finally:
self.iterations += 1
@@ -1065,6 +1141,7 @@ class CrewAgentExecutor(CrewAgentExecutorMixin):
messages=self.messages,
llm=self.llm,
callbacks=self.callbacks,
verbose=self.agent.verbose,
)
self._show_logs(formatted_answer)
return formatted_answer
@@ -1086,6 +1163,7 @@ class CrewAgentExecutor(CrewAgentExecutorMixin):
from_agent=self.agent,
response_model=self.response_model,
executor_context=self,
verbose=self.agent.verbose,
)
# Check if the response is a list of tool calls
if (
@@ -1138,9 +1216,10 @@ class CrewAgentExecutor(CrewAgentExecutorMixin):
llm=self.llm,
callbacks=self.callbacks,
i18n=self._i18n,
verbose=self.agent.verbose,
)
continue
-handle_unknown_error(self._printer, e)
+handle_unknown_error(self._printer, e, verbose=self.agent.verbose)
raise e
finally:
self.iterations += 1
@@ -1162,6 +1241,7 @@ class CrewAgentExecutor(CrewAgentExecutorMixin):
from_agent=self.agent,
response_model=self.response_model,
executor_context=self,
verbose=self.agent.verbose,
)
formatted_answer = AgentFinish(
@@ -1279,10 +1359,11 @@ class CrewAgentExecutor(CrewAgentExecutorMixin):
)
if train_iteration is None or not isinstance(train_iteration, int):
-self._printer.print(
-content="Invalid or missing train iteration. Cannot save training data.",
-color="red",
-)
+if self.agent.verbose:
+self._printer.print(
+content="Invalid or missing train iteration. Cannot save training data.",
+color="red",
+)
return
training_handler = CrewTrainingHandler(TRAINING_DATA_FILE)
@@ -1302,13 +1383,14 @@ class CrewAgentExecutor(CrewAgentExecutorMixin):
if train_iteration in agent_training_data:
agent_training_data[train_iteration]["improved_output"] = result.output
else:
-self._printer.print(
-content=(
-f"No existing training data for agent {agent_id} and iteration "
-f"{train_iteration}. Cannot save improved output."
-),
-color="red",
-)
+if self.agent.verbose:
+self._printer.print(
+content=(
+f"No existing training data for agent {agent_id} and iteration "
+f"{train_iteration}. Cannot save improved output."
+),
+color="red",
+)
return
# Update the training data and save

View File

@@ -5,7 +5,7 @@ description = "{{name}} using crewAI"
authors = [{ name = "Your Name", email = "you@example.com" }]
requires-python = ">=3.10,<3.14"
dependencies = [
"crewai[tools]==1.9.0"
"crewai[tools]==1.9.1"
]
[project.scripts]

View File

@@ -5,7 +5,7 @@ description = "{{name}} using crewAI"
authors = [{ name = "Your Name", email = "you@example.com" }]
requires-python = ">=3.10,<3.14"
dependencies = [
"crewai[tools]==1.9.0"
"crewai[tools]==1.9.1"
]
[project.scripts]

View File

@@ -36,6 +36,12 @@ from crewai.hooks.llm_hooks import (
get_after_llm_call_hooks,
get_before_llm_call_hooks,
)
from crewai.hooks.tool_hooks import (
ToolCallHookContext,
get_after_tool_call_hooks,
get_before_tool_call_hooks,
)
from crewai.hooks.types import AfterLLMCallHookType, BeforeLLMCallHookType
from crewai.utilities.agent_utils import (
convert_tools_to_openai_schema,
enforce_rpm_limit,
@@ -185,8 +191,8 @@ class AgentExecutor(Flow[AgentReActState], CrewAgentExecutorMixin):
self._instance_id = str(uuid4())[:8]
-self.before_llm_call_hooks: list[Callable] = []
-self.after_llm_call_hooks: list[Callable] = []
+self.before_llm_call_hooks: list[BeforeLLMCallHookType] = []
+self.after_llm_call_hooks: list[AfterLLMCallHookType] = []
self.before_llm_call_hooks.extend(get_before_llm_call_hooks())
self.after_llm_call_hooks.extend(get_after_llm_call_hooks())
@@ -299,11 +305,21 @@ class AgentExecutor(Flow[AgentReActState], CrewAgentExecutorMixin):
"""Compatibility property for mixin - returns state messages."""
return self._state.messages
@messages.setter
def messages(self, value: list[LLMMessage]) -> None:
"""Set state messages."""
self._state.messages = value
@property
def iterations(self) -> int:
"""Compatibility property for mixin - returns state iterations."""
return self._state.iterations
@iterations.setter
def iterations(self, value: int) -> None:
"""Set state iterations."""
self._state.iterations = value
@start()
def initialize_reasoning(self) -> Literal["initialized"]:
"""Initialize the reasoning flow and emit agent start logs."""
@@ -325,6 +341,7 @@ class AgentExecutor(Flow[AgentReActState], CrewAgentExecutorMixin):
messages=list(self.state.messages),
llm=self.llm,
callbacks=self.callbacks,
verbose=self.agent.verbose,
)
self.state.current_answer = formatted_answer
@@ -350,6 +367,7 @@ class AgentExecutor(Flow[AgentReActState], CrewAgentExecutorMixin):
from_agent=self.agent,
response_model=None,
executor_context=self,
verbose=self.agent.verbose,
)
# Parse the LLM response
@@ -385,7 +403,7 @@ class AgentExecutor(Flow[AgentReActState], CrewAgentExecutorMixin):
return "context_error"
if e.__class__.__module__.startswith("litellm"):
raise e
-handle_unknown_error(self._printer, e)
+handle_unknown_error(self._printer, e, verbose=self.agent.verbose)
raise
@listen("continue_reasoning_native")
@@ -420,6 +438,7 @@ class AgentExecutor(Flow[AgentReActState], CrewAgentExecutorMixin):
from_agent=self.agent,
response_model=None,
executor_context=self,
verbose=self.agent.verbose,
)
# Check if the response is a list of tool calls
@@ -458,7 +477,7 @@ class AgentExecutor(Flow[AgentReActState], CrewAgentExecutorMixin):
return "context_error"
if e.__class__.__module__.startswith("litellm"):
raise e
-handle_unknown_error(self._printer, e)
+handle_unknown_error(self._printer, e, verbose=self.agent.verbose)
raise
@router(call_llm_and_parse)
@@ -577,6 +596,12 @@ class AgentExecutor(Flow[AgentReActState], CrewAgentExecutorMixin):
"content": None,
"tool_calls": tool_calls_to_report,
}
if all(
type(tc).__qualname__ == "Part" for tc in self.state.pending_tool_calls
):
assistant_message["raw_tool_call_parts"] = list(
self.state.pending_tool_calls
)
self.state.messages.append(assistant_message)
# Now execute each tool
@@ -611,14 +636,12 @@ class AgentExecutor(Flow[AgentReActState], CrewAgentExecutorMixin):
# Check if tool has reached max usage count
max_usage_reached = False
-if original_tool:
-if (
-hasattr(original_tool, "max_usage_count")
-and original_tool.max_usage_count is not None
-and original_tool.current_usage_count
->= original_tool.max_usage_count
-):
-max_usage_reached = True
+if (
+original_tool
+and original_tool.max_usage_count is not None
+and original_tool.current_usage_count >= original_tool.max_usage_count
+):
+max_usage_reached = True
# Check cache before executing
from_cache = False
@@ -650,8 +673,38 @@ class AgentExecutor(Flow[AgentReActState], CrewAgentExecutorMixin):
track_delegation_if_needed(func_name, args_dict, self.task)
-# Execute the tool (only if not cached and not at max usage)
-if not from_cache and not max_usage_reached:
structured_tool: CrewStructuredTool | None = None
for structured in self.tools or []:
if sanitize_tool_name(structured.name) == func_name:
structured_tool = structured
break
hook_blocked = False
before_hook_context = ToolCallHookContext(
tool_name=func_name,
tool_input=args_dict,
tool=structured_tool, # type: ignore[arg-type]
agent=self.agent,
task=self.task,
crew=self.crew,
)
before_hooks = get_before_tool_call_hooks()
try:
for hook in before_hooks:
hook_result = hook(before_hook_context)
if hook_result is False:
hook_blocked = True
break
except Exception as hook_error:
if self.agent.verbose:
self._printer.print(
content=f"Error in before_tool_call hook: {hook_error}",
color="red",
)
if hook_blocked:
result = f"Tool execution blocked by hook. Tool: {func_name}"
elif not from_cache and not max_usage_reached:
result = "Tool not found"
if func_name in self._available_functions:
try:
@@ -661,11 +714,7 @@ class AgentExecutor(Flow[AgentReActState], CrewAgentExecutorMixin):
# Add to cache after successful execution (before string conversion)
if self.tools_handler and self.tools_handler.cache:
should_cache = True
-if (
-original_tool
-and hasattr(original_tool, "cache_function")
-and original_tool.cache_function
-):
+if original_tool:
should_cache = original_tool.cache_function(
args_dict, raw_result
)
@@ -696,10 +745,34 @@ class AgentExecutor(Flow[AgentReActState], CrewAgentExecutorMixin):
error=e,
),
)
-elif max_usage_reached:
+elif max_usage_reached and original_tool:
# Return error message when max usage limit is reached
result = f"Tool '{func_name}' has reached its usage limit of {original_tool.max_usage_count} times and cannot be used anymore."
# Execute after_tool_call hooks (even if blocked, to allow logging/monitoring)
after_hook_context = ToolCallHookContext(
tool_name=func_name,
tool_input=args_dict,
tool=structured_tool, # type: ignore[arg-type]
agent=self.agent,
task=self.task,
crew=self.crew,
tool_result=result,
)
after_hooks = get_after_tool_call_hooks()
try:
for after_hook in after_hooks:
after_hook_result = after_hook(after_hook_context)
if after_hook_result is not None:
result = after_hook_result
after_hook_context.tool_result = result
except Exception as hook_error:
if self.agent.verbose:
self._printer.print(
content=f"Error in after_tool_call hook: {hook_error}",
color="red",
)
# Emit tool usage finished event
crewai_event_bus.emit(
self,
@@ -833,12 +906,17 @@ class AgentExecutor(Flow[AgentReActState], CrewAgentExecutorMixin):
@listen("parser_error")
def recover_from_parser_error(self) -> Literal["initialized"]:
"""Recover from output parser errors and retry."""
if not self._last_parser_error:
self.state.iterations += 1
return "initialized"
formatted_answer = handle_output_parser_exception(
e=self._last_parser_error,
messages=list(self.state.messages),
iterations=self.state.iterations,
log_error_after=self.log_error_after,
printer=self._printer,
verbose=self.agent.verbose,
)
if formatted_answer:
@@ -858,6 +936,7 @@ class AgentExecutor(Flow[AgentReActState], CrewAgentExecutorMixin):
llm=self.llm,
callbacks=self.callbacks,
i18n=self._i18n,
verbose=self.agent.verbose,
)
self.state.iterations += 1
@@ -949,7 +1028,7 @@ class AgentExecutor(Flow[AgentReActState], CrewAgentExecutorMixin):
self._console.print(fail_text)
raise
except Exception as e:
-handle_unknown_error(self._printer, e)
+handle_unknown_error(self._printer, e, verbose=self.agent.verbose)
raise
finally:
self._is_executing = False
@@ -1034,7 +1113,7 @@ class AgentExecutor(Flow[AgentReActState], CrewAgentExecutorMixin):
self._console.print(fail_text)
raise
except Exception as e:
-handle_unknown_error(self._printer, e)
+handle_unknown_error(self._printer, e, verbose=self.agent.verbose)
raise
finally:
self._is_executing = False

View File

@@ -118,17 +118,20 @@ class PersistenceDecorator:
)
except Exception as e:
error_msg = LOG_MESSAGES["save_error"].format(method_name, str(e))
cls._printer.print(error_msg, color="red")
if verbose:
cls._printer.print(error_msg, color="red")
logger.error(error_msg)
raise RuntimeError(f"State persistence failed: {e!s}") from e
except AttributeError as e:
error_msg = LOG_MESSAGES["state_missing"]
cls._printer.print(error_msg, color="red")
if verbose:
cls._printer.print(error_msg, color="red")
logger.error(error_msg)
raise ValueError(error_msg) from e
except (TypeError, ValueError) as e:
error_msg = LOG_MESSAGES["id_missing"]
cls._printer.print(error_msg, color="red")
if verbose:
cls._printer.print(error_msg, color="red")
logger.error(error_msg)
raise ValueError(error_msg) from e

View File

@@ -151,7 +151,9 @@ def _unwrap_function(function: Any) -> Any:
return function
-def get_possible_return_constants(function: Any) -> list[str] | None:
+def get_possible_return_constants(
+function: Any, verbose: bool = True
+) -> list[str] | None:
"""Extract possible string return values from a function using AST parsing.
This function analyzes the source code of a router method to identify
@@ -178,10 +180,11 @@ def get_possible_return_constants(function: Any) -> list[str] | None:
# Can't get source code
return None
except Exception as e:
-_printer.print(
-f"Error retrieving source code for function {function.__name__}: {e}",
-color="red",
-)
+if verbose:
+_printer.print(
+f"Error retrieving source code for function {function.__name__}: {e}",
+color="red",
+)
return None
try:
@@ -190,25 +193,28 @@ def get_possible_return_constants(function: Any) -> list[str] | None:
# Parse the source code into an AST
code_ast = ast.parse(source)
except IndentationError as e:
-_printer.print(
-f"IndentationError while parsing source code of {function.__name__}: {e}",
-color="red",
-)
-_printer.print(f"Source code:\n{source}", color="yellow")
+if verbose:
+_printer.print(
+f"IndentationError while parsing source code of {function.__name__}: {e}",
+color="red",
+)
+_printer.print(f"Source code:\n{source}", color="yellow")
return None
except SyntaxError as e:
-_printer.print(
-f"SyntaxError while parsing source code of {function.__name__}: {e}",
-color="red",
-)
-_printer.print(f"Source code:\n{source}", color="yellow")
+if verbose:
+_printer.print(
+f"SyntaxError while parsing source code of {function.__name__}: {e}",
+color="red",
+)
+_printer.print(f"Source code:\n{source}", color="yellow")
return None
except Exception as e:
-_printer.print(
-f"Unexpected error while parsing source code of {function.__name__}: {e}",
-color="red",
-)
-_printer.print(f"Source code:\n{source}", color="yellow")
+if verbose:
+_printer.print(
+f"Unexpected error while parsing source code of {function.__name__}: {e}",
+color="red",
+)
+_printer.print(f"Source code:\n{source}", color="yellow")
return None
return_values: set[str] = set()
@@ -388,15 +394,17 @@ def get_possible_return_constants(function: Any) -> list[str] | None:
StateAttributeVisitor().visit(class_ast)
except Exception as e:
-_printer.print(
-f"Could not analyze class context for {function.__name__}: {e}",
-color="yellow",
-)
+if verbose:
+_printer.print(
+f"Could not analyze class context for {function.__name__}: {e}",
+color="yellow",
+)
except Exception as e:
_printer.print(
f"Could not introspect class for {function.__name__}: {e}",
color="yellow",
)
if verbose:
_printer.print(
f"Could not introspect class for {function.__name__}: {e}",
color="yellow",
)
VariableAssignmentVisitor().visit(code_ast)
ReturnVisitor().visit(code_ast)

View File

@@ -9,6 +9,7 @@ from crewai.utilities.printer import Printer
if TYPE_CHECKING:
from crewai.agents.crew_agent_executor import CrewAgentExecutor
from crewai.experimental.agent_executor import AgentExecutor
from crewai.lite_agent import LiteAgent
from crewai.llms.base_llm import BaseLLM
from crewai.utilities.types import LLMMessage
@@ -41,7 +42,7 @@ class LLMCallHookContext:
Can be modified by returning a new string from after_llm_call hook.
"""
executor: CrewAgentExecutor | LiteAgent | None
executor: CrewAgentExecutor | AgentExecutor | LiteAgent | None
messages: list[LLMMessage]
agent: Any
task: Any
@@ -52,7 +53,7 @@ class LLMCallHookContext:
def __init__(
self,
executor: CrewAgentExecutor | LiteAgent | None = None,
executor: CrewAgentExecutor | AgentExecutor | LiteAgent | None = None,
response: str | None = None,
messages: list[LLMMessage] | None = None,
llm: BaseLLM | str | Any | None = None, # TODO: look into
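
With AgentExecutor now an accepted executor type, a hook written against this context works the same for any executor. A sketch (hook body hypothetical; registration is via the executor's before_llm_call_hooks list shown later in this diff):

def redact_before_llm_call(context) -> bool | None:
    # context is an LLMCallHookContext; mutate context.messages in place
    # (replacing the list trips the restore warning elsewhere in this diff)
    # and return False to block the call.
    for message in context.messages:
        if "SECRET" in str(message.get("content", "")):
            return False  # block this LLM call outright
    context.messages.append(
        {"role": "system", "content": "Keep the answer under 100 words."}
    )
    return None  # anything but False lets the call proceed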

View File

@@ -72,13 +72,13 @@ from crewai.utilities.agent_utils import (
from crewai.utilities.converter import (
Converter,
ConverterError,
generate_model_description,
)
from crewai.utilities.guardrail import process_guardrail
from crewai.utilities.guardrail_types import GuardrailCallable, GuardrailType
from crewai.utilities.i18n import I18N, get_i18n
from crewai.utilities.llm_utils import create_llm
from crewai.utilities.printer import Printer
from crewai.utilities.pydantic_schema_utils import generate_model_description
from crewai.utilities.token_counter_callback import TokenCalcHandler
from crewai.utilities.tool_utils import execute_tool_and_check_finality
from crewai.utilities.types import LLMMessage
@@ -344,11 +344,12 @@ class LiteAgent(FlowTrackable, BaseModel):
)
except Exception as e:
self._printer.print(
content="Agent failed to reach a final answer. This is likely a bug - please report it.",
color="red",
)
handle_unknown_error(self._printer, e)
if self.verbose:
self._printer.print(
content="Agent failed to reach a final answer. This is likely a bug - please report it.",
color="red",
)
handle_unknown_error(self._printer, e, verbose=self.verbose)
# Emit error event
crewai_event_bus.emit(
self,
@@ -396,10 +397,11 @@ class LiteAgent(FlowTrackable, BaseModel):
if isinstance(result, BaseModel):
formatted_result = result
except ConverterError as e:
self._printer.print(
content=f"Failed to parse output into response format after retries: {e.message}",
color="yellow",
)
if self.verbose:
self._printer.print(
content=f"Failed to parse output into response format after retries: {e.message}",
color="yellow",
)
# Calculate token usage metrics
if isinstance(self.llm, BaseLLM):
@@ -605,6 +607,7 @@ class LiteAgent(FlowTrackable, BaseModel):
messages=self._messages,
llm=cast(LLM, self.llm),
callbacks=self._callbacks,
verbose=self.verbose,
)
enforce_rpm_limit(self.request_within_rpm_limit)
@@ -617,6 +620,7 @@ class LiteAgent(FlowTrackable, BaseModel):
printer=self._printer,
from_agent=self,
executor_context=self,
verbose=self.verbose,
)
except Exception as e:
@@ -646,16 +650,18 @@ class LiteAgent(FlowTrackable, BaseModel):
self._append_message(formatted_answer.text, role="assistant")
except OutputParserError as e: # noqa: PERF203
self._printer.print(
content="Failed to parse LLM output. Retrying...",
color="yellow",
)
if self.verbose:
self._printer.print(
content="Failed to parse LLM output. Retrying...",
color="yellow",
)
formatted_answer = handle_output_parser_exception(
e=e,
messages=self._messages,
iterations=self._iterations,
log_error_after=3,
printer=self._printer,
verbose=self.verbose,
)
except Exception as e:
@@ -670,9 +676,10 @@ class LiteAgent(FlowTrackable, BaseModel):
llm=cast(LLM, self.llm),
callbacks=self._callbacks,
i18n=self.i18n,
verbose=self.verbose,
)
continue
handle_unknown_error(self._printer, e)
handle_unknown_error(self._printer, e, verbose=self.verbose)
raise e
finally:

View File

@@ -404,7 +404,7 @@ class BaseLLM(ABC):
from_agent: Agent | None = None,
tool_call: dict[str, Any] | None = None,
call_type: LLMCallType | None = None,
response_id: str | None = None
response_id: str | None = None,
) -> None:
"""Emit stream chunk event.
@@ -427,7 +427,7 @@ class BaseLLM(ABC):
from_task=from_task,
from_agent=from_agent,
call_type=call_type,
response_id=response_id
response_id=response_id,
),
)
@@ -497,7 +497,7 @@ class BaseLLM(ABC):
from_agent=from_agent,
)
return result
return str(result) if not isinstance(result, str) else result
except Exception as e:
error_msg = f"Error executing function '{function_name}': {e!s}"
@@ -737,22 +737,25 @@ class BaseLLM(ABC):
task=None,
crew=None,
)
verbose = getattr(from_agent, "verbose", True) if from_agent else True
printer = Printer()
try:
for hook in before_hooks:
result = hook(hook_context)
if result is False:
printer.print(
content="LLM call blocked by before_llm_call hook",
color="yellow",
)
if verbose:
printer.print(
content="LLM call blocked by before_llm_call hook",
color="yellow",
)
return False
except Exception as e:
printer.print(
content=f"Error in before_llm_call hook: {e}",
color="yellow",
)
if verbose:
printer.print(
content=f"Error in before_llm_call hook: {e}",
color="yellow",
)
return True
@@ -805,6 +808,7 @@ class BaseLLM(ABC):
crew=None,
response=response,
)
verbose = getattr(from_agent, "verbose", True) if from_agent else True
printer = Printer()
modified_response = response
@@ -815,9 +819,10 @@ class BaseLLM(ABC):
modified_response = result
hook_context.response = modified_response
except Exception as e:
printer.print(
content=f"Error in after_llm_call hook: {e}",
color="yellow",
)
if verbose:
printer.print(
content=f"Error in after_llm_call hook: {e}",
color="yellow",
)
return modified_response
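
For review convenience, the before-hook invocation contract condensed into one runnable function (names mirror the code above; this is a paraphrase, not the shipped implementation):

from crewai.utilities.printer import Printer


def run_before_hooks(before_hooks, hook_context, verbose: bool = True) -> bool:
    # False from any hook blocks the call; an exception in a hook is
    # reported (only when verbose) and the call still proceeds; True
    # means continue.
    printer = Printer()
    try:
        for hook in before_hooks:
            if hook(hook_context) is False:
                if verbose:
                    printer.print(
                        content="LLM call blocked by before_llm_call hook",
                        color="yellow",
                    )
                return False
    except Exception as e:
        if verbose:
            printer.print(
                content=f"Error in before_llm_call hook: {e}", color="yellow"
            )
    return True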

View File

@@ -16,6 +16,7 @@ from crewai.utilities.agent_utils import is_context_length_exceeded
from crewai.utilities.exceptions.context_window_exceeding_exception import (
LLMContextLengthExceededError,
)
from crewai.utilities.pydantic_schema_utils import generate_model_description
from crewai.utilities.types import LLMMessage
@@ -548,7 +549,11 @@ class BedrockCompletion(BaseLLM):
"toolSpec": {
"name": "structured_output",
"description": "Returns structured data according to the schema",
"inputSchema": {"json": response_model.model_json_schema()},
"inputSchema": {
"json": generate_model_description(response_model)
.get("json_schema", {})
.get("schema", {})
},
}
}
body["toolConfig"] = cast(
@@ -779,7 +784,11 @@ class BedrockCompletion(BaseLLM):
"toolSpec": {
"name": "structured_output",
"description": "Returns structured data according to the schema",
"inputSchema": {"json": response_model.model_json_schema()},
"inputSchema": {
"json": generate_model_description(response_model)
.get("json_schema", {})
.get("schema", {})
},
}
}
body["toolConfig"] = cast(
@@ -1011,7 +1020,11 @@ class BedrockCompletion(BaseLLM):
"toolSpec": {
"name": "structured_output",
"description": "Returns structured data according to the schema",
"inputSchema": {"json": response_model.model_json_schema()},
"inputSchema": {
"json": generate_model_description(response_model)
.get("json_schema", {})
.get("schema", {})
},
}
}
body["toolConfig"] = cast(
@@ -1223,7 +1236,11 @@ class BedrockCompletion(BaseLLM):
"toolSpec": {
"name": "structured_output",
"description": "Returns structured data according to the schema",
"inputSchema": {"json": response_model.model_json_schema()},
"inputSchema": {
"json": generate_model_description(response_model)
.get("json_schema", {})
.get("schema", {})
},
}
}
body["toolConfig"] = cast(

View File

@@ -15,6 +15,7 @@ from crewai.utilities.agent_utils import is_context_length_exceeded
from crewai.utilities.exceptions.context_window_exceeding_exception import (
LLMContextLengthExceededError,
)
from crewai.utilities.pydantic_schema_utils import generate_model_description
from crewai.utilities.types import LLMMessage
@@ -464,7 +465,10 @@ class GeminiCompletion(BaseLLM):
if response_model:
config_params["response_mime_type"] = "application/json"
config_params["response_schema"] = response_model.model_json_schema()
schema_output = generate_model_description(response_model)
config_params["response_json_schema"] = schema_output.get(
"json_schema", {}
).get("schema", {})
# Handle tools for supported models
if tools and self.supports_tools:
@@ -489,7 +493,7 @@ class GeminiCompletion(BaseLLM):
function_declaration = types.FunctionDeclaration(
name=name,
description=description,
parameters=parameters if parameters else None,
parameters_json_schema=parameters if parameters else None,
)
gemini_tool = types.Tool(function_declarations=[function_declaration])
@@ -543,11 +547,10 @@ class GeminiCompletion(BaseLLM):
else:
parts.append(types.Part.from_text(text=str(content) if content else ""))
text_content: str = " ".join(p.text for p in parts if p.text is not None)
if role == "system":
# Extract system instruction - Gemini handles it separately
text_content = " ".join(
p.text for p in parts if hasattr(p, "text") and p.text
)
if system_instruction:
system_instruction += f"\n\n{text_content}"
else:
@@ -576,31 +579,40 @@ class GeminiCompletion(BaseLLM):
types.Content(role="user", parts=[function_response_part])
)
elif role == "assistant" and message.get("tool_calls"):
tool_parts: list[types.Part] = []
if text_content:
    tool_parts.append(types.Part.from_text(text=text_content))
tool_calls: list[dict[str, Any]] = message.get("tool_calls") or []
for tool_call in tool_calls:
    func: dict[str, Any] = tool_call.get("function") or {}
    func_name: str = str(func.get("name") or "")
    func_args_raw: str | dict[str, Any] = func.get("arguments") or {}
    func_args: dict[str, Any]
    if isinstance(func_args_raw, str):
        try:
            func_args = (
                json.loads(func_args_raw) if func_args_raw else {}
            )
        except (json.JSONDecodeError, TypeError):
            func_args = {}
    else:
        func_args = func_args_raw
    tool_parts.append(
        types.Part.from_function_call(name=func_name, args=func_args)
    )
raw_parts: list[Any] | None = message.get("raw_tool_call_parts")
if raw_parts and all(isinstance(p, types.Part) for p in raw_parts):
    tool_parts: list[types.Part] = list(raw_parts)
    if text_content:
        tool_parts.insert(0, types.Part.from_text(text=text_content))
else:
    tool_parts = []
    if text_content:
        tool_parts.append(types.Part.from_text(text=text_content))
    tool_calls: list[dict[str, Any]] = message.get("tool_calls") or []
    for tool_call in tool_calls:
        func: dict[str, Any] = tool_call.get("function") or {}
        func_name: str = str(func.get("name") or "")
        func_args_raw: str | dict[str, Any] = (
            func.get("arguments") or {}
        )
        func_args: dict[str, Any]
        if isinstance(func_args_raw, str):
            try:
                func_args = (
                    json.loads(func_args_raw) if func_args_raw else {}
                )
            except (json.JSONDecodeError, TypeError):
                func_args = {}
        else:
            func_args = func_args_raw
        tool_parts.append(
            types.Part.from_function_call(
                name=func_name, args=func_args
            )
        )
contents.append(types.Content(role="model", parts=tool_parts))
else:
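
The structured-output config change in isolation, assuming a hypothetical response model:

from pydantic import BaseModel

from crewai.utilities.pydantic_schema_utils import generate_model_description


class Answer(BaseModel):  # hypothetical response model
    value: int


config_params: dict = {"response_mime_type": "application/json"}
schema_output = generate_model_description(Answer)
# response_json_schema takes a plain JSON schema dict, replacing the old
# response_schema=<model_json_schema()> approach
config_params["response_json_schema"] = schema_output.get(
    "json_schema", {}
).get("schema", {})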

View File

@@ -693,14 +693,14 @@ class OpenAICompletion(BaseLLM):
if response_model or self.response_format:
format_model = response_model or self.response_format
if isinstance(format_model, type) and issubclass(format_model, BaseModel):
schema = format_model.model_json_schema()
schema["additionalProperties"] = False
schema_output = generate_model_description(format_model)
json_schema = schema_output.get("json_schema", {})
params["text"] = {
"format": {
"type": "json_schema",
"name": format_model.__name__,
"strict": True,
"schema": schema,
"name": json_schema.get("name", format_model.__name__),
"strict": json_schema.get("strict", True),
"schema": json_schema.get("schema", {}),
}
}
elif isinstance(format_model, dict):
@@ -1060,7 +1060,7 @@ class OpenAICompletion(BaseLLM):
chunk=delta_text,
from_task=from_task,
from_agent=from_agent,
response_id=response_id_stream
response_id=response_id_stream,
)
elif event.type == "response.function_call_arguments.delta":
@@ -1709,7 +1709,7 @@ class OpenAICompletion(BaseLLM):
**parse_params, response_format=response_model
) as stream:
for chunk in stream:
response_id_stream=chunk.id if hasattr(chunk,"id") else None
response_id_stream = chunk.id if hasattr(chunk, "id") else None
if chunk.type == "content.delta":
delta_content = chunk.delta
@@ -1718,7 +1718,7 @@ class OpenAICompletion(BaseLLM):
chunk=delta_content,
from_task=from_task,
from_agent=from_agent,
response_id=response_id_stream
response_id=response_id_stream,
)
final_completion = stream.get_final_completion()
@@ -1748,7 +1748,9 @@ class OpenAICompletion(BaseLLM):
usage_data = {"total_tokens": 0}
for completion_chunk in completion_stream:
response_id_stream=completion_chunk.id if hasattr(completion_chunk,"id") else None
response_id_stream = (
completion_chunk.id if hasattr(completion_chunk, "id") else None
)
if hasattr(completion_chunk, "usage") and completion_chunk.usage:
usage_data = self._extract_openai_token_usage(completion_chunk)
@@ -1766,7 +1768,7 @@ class OpenAICompletion(BaseLLM):
chunk=chunk_delta.content,
from_task=from_task,
from_agent=from_agent,
response_id=response_id_stream
response_id=response_id_stream,
)
if chunk_delta.tool_calls:
@@ -1805,7 +1807,7 @@ class OpenAICompletion(BaseLLM):
"index": tool_calls[tool_index]["index"],
},
call_type=LLMCallType.TOOL_CALL,
response_id=response_id_stream
response_id=response_id_stream,
)
self._track_token_usage_internal(usage_data)
@@ -2017,7 +2019,7 @@ class OpenAICompletion(BaseLLM):
accumulated_content = ""
usage_data = {"total_tokens": 0}
async for chunk in completion_stream:
response_id_stream=chunk.id if hasattr(chunk,"id") else None
response_id_stream = chunk.id if hasattr(chunk, "id") else None
if hasattr(chunk, "usage") and chunk.usage:
usage_data = self._extract_openai_token_usage(chunk)
@@ -2035,7 +2037,7 @@ class OpenAICompletion(BaseLLM):
chunk=delta.content,
from_task=from_task,
from_agent=from_agent,
response_id=response_id_stream
response_id=response_id_stream,
)
self._track_token_usage_internal(usage_data)
@@ -2071,7 +2073,7 @@ class OpenAICompletion(BaseLLM):
usage_data = {"total_tokens": 0}
async for chunk in stream:
response_id_stream=chunk.id if hasattr(chunk,"id") else None
response_id_stream = chunk.id if hasattr(chunk, "id") else None
if hasattr(chunk, "usage") and chunk.usage:
usage_data = self._extract_openai_token_usage(chunk)
@@ -2089,7 +2091,7 @@ class OpenAICompletion(BaseLLM):
chunk=chunk_delta.content,
from_task=from_task,
from_agent=from_agent,
response_id=response_id_stream
response_id=response_id_stream,
)
if chunk_delta.tool_calls:
@@ -2128,7 +2130,7 @@ class OpenAICompletion(BaseLLM):
"index": tool_calls[tool_index]["index"],
},
call_type=LLMCallType.TOOL_CALL,
response_id=response_id_stream
response_id=response_id_stream,
)
self._track_token_usage_internal(usage_data)
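
The Responses API structured-output block, extracted with a hypothetical model. Note that name and strict now come from generate_model_description, with the previously hardcoded values kept as fallbacks:

from pydantic import BaseModel

from crewai.utilities.pydantic_schema_utils import generate_model_description


class Answer(BaseModel):  # hypothetical response format model
    value: int


schema_output = generate_model_description(Answer)
json_schema = schema_output.get("json_schema", {})
params = {
    "text": {
        "format": {
            "type": "json_schema",
            "name": json_schema.get("name", Answer.__name__),
            "strict": json_schema.get("strict", True),
            "schema": json_schema.get("schema", {}),
        }
    }
}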

View File

@@ -2,6 +2,7 @@ import logging
import re
from typing import Any
from crewai.utilities.pydantic_schema_utils import generate_model_description
from crewai.utilities.string_utils import sanitize_tool_name
@@ -77,7 +78,8 @@ def extract_tool_info(tool: dict[str, Any]) -> tuple[str, str, dict[str, Any]]:
# Also check for args_schema (Pydantic format)
if not parameters and "args_schema" in tool:
if hasattr(tool["args_schema"], "model_json_schema"):
parameters = tool["args_schema"].model_json_schema()
schema_output = generate_model_description(tool["args_schema"])
parameters = schema_output.get("json_schema", {}).get("schema", {})
return name, description, parameters

View File

@@ -12,15 +12,17 @@ from crewai.utilities.paths import db_storage_path
class LTMSQLiteStorage:
"""SQLite storage class for long-term memory data."""
def __init__(self, db_path: str | None = None) -> None:
def __init__(self, db_path: str | None = None, verbose: bool = True) -> None:
"""Initialize the SQLite storage.
Args:
db_path: Optional path to the database file.
verbose: Whether to print error messages.
"""
if db_path is None:
db_path = str(Path(db_storage_path()) / "long_term_memory_storage.db")
self.db_path = db_path
self._verbose = verbose
self._printer: Printer = Printer()
Path(self.db_path).parent.mkdir(parents=True, exist_ok=True)
self._initialize_db()
@@ -44,10 +46,11 @@ class LTMSQLiteStorage:
conn.commit()
except sqlite3.Error as e:
self._printer.print(
content=f"MEMORY ERROR: An error occurred during database initialization: {e}",
color="red",
)
if self._verbose:
self._printer.print(
content=f"MEMORY ERROR: An error occurred during database initialization: {e}",
color="red",
)
def save(
self,
@@ -69,10 +72,11 @@ class LTMSQLiteStorage:
)
conn.commit()
except sqlite3.Error as e:
self._printer.print(
content=f"MEMORY ERROR: An error occurred while saving to LTM: {e}",
color="red",
)
if self._verbose:
self._printer.print(
content=f"MEMORY ERROR: An error occurred while saving to LTM: {e}",
color="red",
)
def load(self, task_description: str, latest_n: int) -> list[dict[str, Any]] | None:
"""Queries the LTM table by task description with error handling."""
@@ -101,10 +105,11 @@ class LTMSQLiteStorage:
]
except sqlite3.Error as e:
self._printer.print(
content=f"MEMORY ERROR: An error occurred while querying LTM: {e}",
color="red",
)
if self._verbose:
self._printer.print(
content=f"MEMORY ERROR: An error occurred while querying LTM: {e}",
color="red",
)
return None
def reset(self) -> None:
@@ -116,10 +121,11 @@ class LTMSQLiteStorage:
conn.commit()
except sqlite3.Error as e:
self._printer.print(
content=f"MEMORY ERROR: An error occurred while deleting all rows in LTM: {e}",
color="red",
)
if self._verbose:
self._printer.print(
content=f"MEMORY ERROR: An error occurred while deleting all rows in LTM: {e}",
color="red",
)
async def asave(
self,
@@ -147,10 +153,11 @@ class LTMSQLiteStorage:
)
await conn.commit()
except aiosqlite.Error as e:
self._printer.print(
content=f"MEMORY ERROR: An error occurred while saving to LTM: {e}",
color="red",
)
if self._verbose:
self._printer.print(
content=f"MEMORY ERROR: An error occurred while saving to LTM: {e}",
color="red",
)
async def aload(
self, task_description: str, latest_n: int
@@ -187,10 +194,11 @@ class LTMSQLiteStorage:
for row in rows
]
except aiosqlite.Error as e:
self._printer.print(
content=f"MEMORY ERROR: An error occurred while querying LTM: {e}",
color="red",
)
if self._verbose:
self._printer.print(
content=f"MEMORY ERROR: An error occurred while querying LTM: {e}",
color="red",
)
return None
async def areset(self) -> None:
@@ -200,7 +208,8 @@ class LTMSQLiteStorage:
await conn.execute("DELETE FROM long_term_memories")
await conn.commit()
except aiosqlite.Error as e:
self._printer.print(
content=f"MEMORY ERROR: An error occurred while deleting all rows in LTM: {e}",
color="red",
)
if self._verbose:
self._printer.print(
content=f"MEMORY ERROR: An error occurred while deleting all rows in LTM: {e}",
color="red",
)
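
Usage sketch (import path assumed to match current crewai layout): verbose=False silences the red MEMORY ERROR prints, while sqlite errors are still swallowed as before, so behavior is otherwise unchanged.

from crewai.memory.storage.ltm_sqlite_storage import LTMSQLiteStorage

storage = LTMSQLiteStorage(verbose=False)  # db_path defaults under db_storage_path()
rows = storage.load("Summarize the quarterly report", latest_n=3)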

View File

@@ -1,6 +1,6 @@
"""IBM WatsonX embedding function implementation."""
from typing import cast
from typing import Any, cast
from chromadb.api.types import Documents, EmbeddingFunction, Embeddings
from typing_extensions import Unpack
@@ -15,14 +15,18 @@ _printer = Printer()
class WatsonXEmbeddingFunction(EmbeddingFunction[Documents]):
"""Embedding function for IBM WatsonX models."""
def __init__(self, **kwargs: Unpack[WatsonXProviderConfig]) -> None:
def __init__(
self, *, verbose: bool = True, **kwargs: Unpack[WatsonXProviderConfig]
) -> None:
"""Initialize WatsonX embedding function.
Args:
verbose: Whether to print error messages.
**kwargs: Configuration parameters for WatsonX Embeddings and Credentials.
"""
super().__init__(**kwargs)
self._config = kwargs
self._verbose = verbose
@staticmethod
def name() -> str:
@@ -56,7 +60,7 @@ class WatsonXEmbeddingFunction(EmbeddingFunction[Documents]):
if isinstance(input, str):
input = [input]
embeddings_config: dict = {
embeddings_config: dict[str, Any] = {
"model_id": self._config["model_id"],
}
if "params" in self._config and self._config["params"] is not None:
@@ -90,7 +94,7 @@ class WatsonXEmbeddingFunction(EmbeddingFunction[Documents]):
if "credentials" in self._config and self._config["credentials"] is not None:
embeddings_config["credentials"] = self._config["credentials"]
else:
cred_config: dict = {}
cred_config: dict[str, Any] = {}
if "url" in self._config and self._config["url"] is not None:
cred_config["url"] = self._config["url"]
if "api_key" in self._config and self._config["api_key"] is not None:
@@ -159,5 +163,6 @@ class WatsonXEmbeddingFunction(EmbeddingFunction[Documents]):
embeddings = embedding.embed_documents(input)
return cast(Embeddings, embeddings)
except Exception as e:
_printer.print(f"Error during WatsonX embedding: {e}", color="red")
if self._verbose:
_printer.print(f"Error during WatsonX embedding: {e}", color="red")
raise
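
Construction sketch; the import path is not shown in this diff and the model id is only an example. verbose is keyword-only, and with verbose=False the error print is suppressed but the exception still raises:

embedder = WatsonXEmbeddingFunction(
    verbose=False,  # suppress the red error print; the raise is unchanged
    model_id="ibm/slate-125m-english-rtrvr",  # example model id
    url="https://us-south.ml.cloud.ibm.com",
    api_key="<IBM_CLOUD_API_KEY>",
)
embeddings = embedder(["hello world"])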

View File

@@ -767,10 +767,11 @@ class Task(BaseModel):
if files:
supported_types: list[str] = []
if self.agent.llm and self.agent.llm.supports_multimodal():
provider = getattr(self.agent.llm, "provider", None) or getattr(
self.agent.llm, "model", "openai"
provider: str = str(
getattr(self.agent.llm, "provider", None)
or getattr(self.agent.llm, "model", "openai")
)
api = getattr(self.agent.llm, "api", None)
api: str | None = getattr(self.agent.llm, "api", None)
supported_types = get_supported_content_types(provider, api)
def is_auto_injected(content_type: str) -> bool:
@@ -887,10 +888,11 @@ Follow these guidelines:
try:
crew_chat_messages = json.loads(crew_chat_messages_json)
except json.JSONDecodeError as e:
_printer.print(
f"An error occurred while parsing crew chat messages: {e}",
color="red",
)
if self.agent and self.agent.verbose:
_printer.print(
f"An error occurred while parsing crew chat messages: {e}",
color="red",
)
raise
conversation_history = "\n".join(
@@ -1132,11 +1134,12 @@ Follow these guidelines:
guardrail_result_error=guardrail_result.error,
task_output=task_output.raw,
)
printer = Printer()
printer.print(
content=f"Guardrail {guardrail_index if guardrail_index is not None else ''} blocked (attempt {attempt + 1}/{max_attempts}), retrying due to: {guardrail_result.error}\n",
color="yellow",
)
if agent and agent.verbose:
printer = Printer()
printer.print(
content=f"Guardrail {guardrail_index if guardrail_index is not None else ''} blocked (attempt {attempt + 1}/{max_attempts}), retrying due to: {guardrail_result.error}\n",
color="yellow",
)
# Regenerate output from agent
result = agent.execute_task(
@@ -1229,11 +1232,12 @@ Follow these guidelines:
guardrail_result_error=guardrail_result.error,
task_output=task_output.raw,
)
printer = Printer()
printer.print(
content=f"Guardrail {guardrail_index if guardrail_index is not None else ''} blocked (attempt {attempt + 1}/{max_attempts}), retrying due to: {guardrail_result.error}\n",
color="yellow",
)
if agent and agent.verbose:
printer = Printer()
printer.print(
content=f"Guardrail {guardrail_index if guardrail_index is not None else ''} blocked (attempt {attempt + 1}/{max_attempts}), retrying due to: {guardrail_result.error}\n",
color="yellow",
)
result = await agent.aexecute_task(
task=self,

View File

@@ -384,6 +384,8 @@ class ToolUsage:
if (
hasattr(available_tool, "max_usage_count")
and available_tool.max_usage_count is not None
and self.agent
and self.agent.verbose
):
self._printer.print(
content=f"Tool '{sanitize_tool_name(available_tool.name)}' usage: {available_tool.current_usage_count}/{available_tool.max_usage_count}",
@@ -396,6 +398,8 @@ class ToolUsage:
if (
hasattr(available_tool, "max_usage_count")
and available_tool.max_usage_count is not None
and self.agent
and self.agent.verbose
):
self._printer.print(
content=f"Tool '{sanitize_tool_name(available_tool.name)}' usage: {available_tool.current_usage_count}/{available_tool.max_usage_count}",
@@ -610,6 +614,8 @@ class ToolUsage:
if (
hasattr(available_tool, "max_usage_count")
and available_tool.max_usage_count is not None
and self.agent
and self.agent.verbose
):
self._printer.print(
content=f"Tool '{sanitize_tool_name(available_tool.name)}' usage: {available_tool.current_usage_count}/{available_tool.max_usage_count}",
@@ -622,6 +628,8 @@ class ToolUsage:
if (
hasattr(available_tool, "max_usage_count")
and available_tool.max_usage_count is not None
and self.agent
and self.agent.verbose
):
self._printer.print(
content=f"Tool '{sanitize_tool_name(available_tool.name)}' usage: {available_tool.current_usage_count}/{available_tool.max_usage_count}",
@@ -884,15 +892,17 @@ class ToolUsage:
# Attempt 4: Repair JSON
try:
repaired_input = str(repair_json(tool_input, skip_json_loads=True))
self._printer.print(
content=f"Repaired JSON: {repaired_input}", color="blue"
)
if self.agent and self.agent.verbose:
self._printer.print(
content=f"Repaired JSON: {repaired_input}", color="blue"
)
arguments = json.loads(repaired_input)
if isinstance(arguments, dict):
return arguments
except Exception as e:
error = f"Failed to repair JSON: {e}"
self._printer.print(content=error, color="red")
if self.agent and self.agent.verbose:
self._printer.print(content=error, color="red")
error_message = (
"Tool input must be a valid dictionary in JSON or Python literal format"

View File

@@ -28,6 +28,7 @@ from crewai.utilities.exceptions.context_window_exceeding_exception import (
)
from crewai.utilities.i18n import I18N
from crewai.utilities.printer import ColoredText, Printer
from crewai.utilities.pydantic_schema_utils import generate_model_description
from crewai.utilities.string_utils import sanitize_tool_name
from crewai.utilities.token_counter_callback import TokenCalcHandler
from crewai.utilities.types import LLMMessage
@@ -36,6 +37,7 @@ from crewai.utilities.types import LLMMessage
if TYPE_CHECKING:
from crewai.agent import Agent
from crewai.agents.crew_agent_executor import CrewAgentExecutor
from crewai.experimental.agent_executor import AgentExecutor
from crewai.lite_agent import LiteAgent
from crewai.llm import LLM
from crewai.task import Task
@@ -158,7 +160,8 @@ def convert_tools_to_openai_schema(
parameters: dict[str, Any] = {}
if hasattr(tool, "args_schema") and tool.args_schema is not None:
try:
parameters = tool.args_schema.model_json_schema()
schema_output = generate_model_description(tool.args_schema)
parameters = schema_output.get("json_schema", {}).get("schema", {})
# Remove title and description from schema root as they're redundant
parameters.pop("title", None)
parameters.pop("description", None)
@@ -207,6 +210,7 @@ def handle_max_iterations_exceeded(
messages: list[LLMMessage],
llm: LLM | BaseLLM,
callbacks: list[TokenCalcHandler],
verbose: bool = True,
) -> AgentFinish:
"""Handles the case when the maximum number of iterations is exceeded. Performs one more LLM call to get the final answer.
@@ -217,14 +221,16 @@ def handle_max_iterations_exceeded(
messages: List of messages to send to the LLM.
llm: The LLM instance to call.
callbacks: List of callbacks for the LLM call.
verbose: Whether to print output.
Returns:
AgentFinish with the final answer after exceeding max iterations.
"""
printer.print(
content="Maximum iterations reached. Requesting final answer.",
color="yellow",
)
if verbose:
printer.print(
content="Maximum iterations reached. Requesting final answer.",
color="yellow",
)
if formatted_answer and hasattr(formatted_answer, "text"):
assistant_message = (
@@ -242,10 +248,11 @@ def handle_max_iterations_exceeded(
)
if answer is None or answer == "":
printer.print(
content="Received None or empty response from LLM call.",
color="red",
)
if verbose:
printer.print(
content="Received None or empty response from LLM call.",
color="red",
)
raise ValueError("Invalid response from LLM call - None or empty.")
formatted = format_answer(answer=answer)
@@ -318,7 +325,8 @@ def get_llm_response(
from_task: Task | None = None,
from_agent: Agent | LiteAgent | None = None,
response_model: type[BaseModel] | None = None,
executor_context: CrewAgentExecutor | LiteAgent | None = None,
executor_context: CrewAgentExecutor | AgentExecutor | LiteAgent | None = None,
verbose: bool = True,
) -> str | Any:
"""Call the LLM and return the response, handling any invalid responses.
@@ -344,7 +352,7 @@ def get_llm_response(
"""
if executor_context is not None:
if not _setup_before_llm_call_hooks(executor_context, printer):
if not _setup_before_llm_call_hooks(executor_context, printer, verbose=verbose):
raise ValueError("LLM call blocked by before_llm_call hook")
messages = executor_context.messages
@@ -361,13 +369,16 @@ def get_llm_response(
except Exception as e:
raise e
if not answer:
printer.print(
content="Received None or empty response from LLM call.",
color="red",
)
if verbose:
printer.print(
content="Received None or empty response from LLM call.",
color="red",
)
raise ValueError("Invalid response from LLM call - None or empty.")
return _setup_after_llm_call_hooks(executor_context, answer, printer)
return _setup_after_llm_call_hooks(
executor_context, answer, printer, verbose=verbose
)
async def aget_llm_response(
@@ -380,7 +391,8 @@ async def aget_llm_response(
from_task: Task | None = None,
from_agent: Agent | LiteAgent | None = None,
response_model: type[BaseModel] | None = None,
executor_context: CrewAgentExecutor | None = None,
executor_context: CrewAgentExecutor | AgentExecutor | None = None,
verbose: bool = True,
) -> str | Any:
"""Call the LLM asynchronously and return the response.
@@ -405,7 +417,7 @@ async def aget_llm_response(
ValueError: If the response is None or empty.
"""
if executor_context is not None:
if not _setup_before_llm_call_hooks(executor_context, printer):
if not _setup_before_llm_call_hooks(executor_context, printer, verbose=verbose):
raise ValueError("LLM call blocked by before_llm_call hook")
messages = executor_context.messages
@@ -422,13 +434,16 @@ async def aget_llm_response(
except Exception as e:
raise e
if not answer:
printer.print(
content="Received None or empty response from LLM call.",
color="red",
)
if verbose:
printer.print(
content="Received None or empty response from LLM call.",
color="red",
)
raise ValueError("Invalid response from LLM call - None or empty.")
return _setup_after_llm_call_hooks(executor_context, answer, printer)
return _setup_after_llm_call_hooks(
executor_context, answer, printer, verbose=verbose
)
def process_llm_response(
@@ -495,13 +510,19 @@ def handle_agent_action_core(
return formatted_answer
def handle_unknown_error(printer: Printer, exception: Exception) -> None:
def handle_unknown_error(
printer: Printer, exception: Exception, verbose: bool = True
) -> None:
"""Handle unknown errors by informing the user.
Args:
printer: Printer instance for output
exception: The exception that occurred
verbose: Whether to print output.
"""
if not verbose:
return
error_message = str(exception)
if "litellm" in error_message:
@@ -523,6 +544,7 @@ def handle_output_parser_exception(
iterations: int,
log_error_after: int = 3,
printer: Printer | None = None,
verbose: bool = True,
) -> AgentAction:
"""Handle OutputParserError by updating messages and formatted_answer.
@@ -545,7 +567,7 @@ def handle_output_parser_exception(
thought="",
)
if iterations > log_error_after and printer:
if verbose and iterations > log_error_after and printer:
printer.print(
content=f"Error parsing LLM output, agent will retry: {e.error}",
color="red",
@@ -575,6 +597,7 @@ def handle_context_length(
llm: LLM | BaseLLM,
callbacks: list[TokenCalcHandler],
i18n: I18N,
verbose: bool = True,
) -> None:
"""Handle context length exceeded by either summarizing or raising an error.
@@ -590,16 +613,20 @@ def handle_context_length(
SystemExit: If context length is exceeded and user opts not to summarize
"""
if respect_context_window:
printer.print(
content="Context length exceeded. Summarizing content to fit the model context window. Might take a while...",
color="yellow",
)
summarize_messages(messages=messages, llm=llm, callbacks=callbacks, i18n=i18n)
if verbose:
printer.print(
content="Context length exceeded. Summarizing content to fit the model context window. Might take a while...",
color="yellow",
)
summarize_messages(
messages=messages, llm=llm, callbacks=callbacks, i18n=i18n, verbose=verbose
)
else:
printer.print(
content="Context length exceeded. Consider using smaller text or RAG tools from crewai_tools.",
color="red",
)
if verbose:
printer.print(
content="Context length exceeded. Consider using smaller text or RAG tools from crewai_tools.",
color="red",
)
raise SystemExit(
"Context length exceeded and user opted not to summarize. Consider using smaller text or RAG tools from crewai_tools."
)
@@ -610,6 +637,7 @@ def summarize_messages(
llm: LLM | BaseLLM,
callbacks: list[TokenCalcHandler],
i18n: I18N,
verbose: bool = True,
) -> None:
"""Summarize messages to fit within context window.
@@ -641,10 +669,11 @@ def summarize_messages(
total_groups = len(messages_groups)
for idx, group in enumerate(messages_groups, 1):
Printer().print(
content=f"Summarizing {idx}/{total_groups}...",
color="yellow",
)
if verbose:
Printer().print(
content=f"Summarizing {idx}/{total_groups}...",
color="yellow",
)
summarization_messages = [
format_message_for_llm(
@@ -900,13 +929,16 @@ def extract_tool_call_info(
def _setup_before_llm_call_hooks(
executor_context: CrewAgentExecutor | LiteAgent | None, printer: Printer
executor_context: CrewAgentExecutor | AgentExecutor | LiteAgent | None,
printer: Printer,
verbose: bool = True,
) -> bool:
"""Setup and invoke before_llm_call hooks for the executor context.
Args:
executor_context: The executor context to setup the hooks for.
printer: Printer instance for error logging.
verbose: Whether to print output.
Returns:
True if LLM execution should proceed, False if blocked by a hook.
@@ -921,26 +953,29 @@ def _setup_before_llm_call_hooks(
for hook in executor_context.before_llm_call_hooks:
result = hook(hook_context)
if result is False:
printer.print(
content="LLM call blocked by before_llm_call hook",
color="yellow",
)
if verbose:
printer.print(
content="LLM call blocked by before_llm_call hook",
color="yellow",
)
return False
except Exception as e:
printer.print(
content=f"Error in before_llm_call hook: {e}",
color="yellow",
)
if verbose:
printer.print(
content=f"Error in before_llm_call hook: {e}",
color="yellow",
)
if not isinstance(executor_context.messages, list):
printer.print(
content=(
"Warning: before_llm_call hook replaced messages with non-list. "
"Restoring original messages list. Hooks should modify messages in-place, "
"not replace the list (e.g., use context.messages.append() not context.messages = [])."
),
color="yellow",
)
if verbose:
printer.print(
content=(
"Warning: before_llm_call hook replaced messages with non-list. "
"Restoring original messages list. Hooks should modify messages in-place, "
"not replace the list (e.g., use context.messages.append() not context.messages = [])."
),
color="yellow",
)
if isinstance(original_messages, list):
executor_context.messages = original_messages
else:
@@ -950,9 +985,10 @@ def _setup_before_llm_call_hooks(
def _setup_after_llm_call_hooks(
executor_context: CrewAgentExecutor | LiteAgent | None,
executor_context: CrewAgentExecutor | AgentExecutor | LiteAgent | None,
answer: str,
printer: Printer,
verbose: bool = True,
) -> str:
"""Setup and invoke after_llm_call hooks for the executor context.
@@ -960,6 +996,7 @@ def _setup_after_llm_call_hooks(
executor_context: The executor context to setup the hooks for.
answer: The LLM response string.
printer: Printer instance for error logging.
verbose: Whether to print output.
Returns:
The potentially modified response string.
@@ -977,20 +1014,22 @@ def _setup_after_llm_call_hooks(
answer = modified_response
except Exception as e:
printer.print(
content=f"Error in after_llm_call hook: {e}",
color="yellow",
)
if verbose:
printer.print(
content=f"Error in after_llm_call hook: {e}",
color="yellow",
)
if not isinstance(executor_context.messages, list):
printer.print(
content=(
"Warning: after_llm_call hook replaced messages with non-list. "
"Restoring original messages list. Hooks should modify messages in-place, "
"not replace the list (e.g., use context.messages.append() not context.messages = [])."
),
color="yellow",
)
if verbose:
printer.print(
content=(
"Warning: after_llm_call hook replaced messages with non-list. "
"Restoring original messages list. Hooks should modify messages in-place, "
"not replace the list (e.g., use context.messages.append() not context.messages = [])."
),
color="yellow",
)
if isinstance(original_messages, list):
executor_context.messages = original_messages
else:
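
To complement the before-hook sketch earlier: an after_llm_call hook (body hypothetical) showing the response-rewrite contract enforced by _setup_after_llm_call_hooks:

def sign_after_llm_call(context) -> str | None:
    # context.response holds the LLM answer; returning a string replaces
    # it, returning None keeps it as-is. Mutate context.messages in place
    # here too; replacing the list trips the same restore warning as above.
    if context.response and "Final Answer" in context.response:
        return context.response + "\n[response reviewed by after_llm_call hook]"
    return None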

View File

@@ -205,10 +205,11 @@ def convert_to_model(
)
except Exception as e:
Printer().print(
content=f"Unexpected error during model conversion: {type(e).__name__}: {e}. Returning original result.",
color="red",
)
if agent and getattr(agent, "verbose", True):
Printer().print(
content=f"Unexpected error during model conversion: {type(e).__name__}: {e}. Returning original result.",
color="red",
)
return result
@@ -262,10 +263,11 @@ def handle_partial_json(
except ValidationError:
raise
except Exception as e:
Printer().print(
content=f"Unexpected error during partial JSON handling: {type(e).__name__}: {e}. Attempting alternative conversion method.",
color="red",
)
if agent and getattr(agent, "verbose", True):
Printer().print(
content=f"Unexpected error during partial JSON handling: {type(e).__name__}: {e}. Attempting alternative conversion method.",
color="red",
)
return convert_with_instructions(
result=result,
@@ -323,10 +325,11 @@ def convert_with_instructions(
)
if isinstance(exported_result, ConverterError):
Printer().print(
content=f"Failed to convert result to model: {exported_result}",
color="red",
)
if agent and getattr(agent, "verbose", True):
Printer().print(
content=f"Failed to convert result to model: {exported_result}",
color="red",
)
return result
return exported_result

View File

@@ -1,14 +1,72 @@
"""Utilities for generating JSON schemas from Pydantic models.
"""Dynamic Pydantic model creation from JSON schemas.
This module provides utilities for converting JSON schemas to Pydantic models at runtime.
The main function is `create_model_from_schema`, which takes a JSON schema and returns
a dynamically created Pydantic model class.
This is used by the A2A server to honor response schemas sent by clients, allowing
structured output from agent tasks.
Based on dydantic (https://github.com/zenbase-ai/dydantic).
This module provides functions for converting Pydantic models to JSON schemas
suitable for use with LLMs and tool definitions.
"""
from __future__ import annotations
from collections.abc import Callable
from copy import deepcopy
from typing import Any
import datetime
import logging
from typing import TYPE_CHECKING, Annotated, Any, Literal, Union
import uuid
from pydantic import BaseModel
from pydantic import (
UUID1,
UUID3,
UUID4,
UUID5,
AnyUrl,
BaseModel,
ConfigDict,
DirectoryPath,
Field,
FilePath,
FileUrl,
HttpUrl,
Json,
MongoDsn,
NewPath,
PostgresDsn,
SecretBytes,
SecretStr,
StrictBytes,
create_model as create_model_base,
)
from pydantic.networks import ( # type: ignore[attr-defined]
IPv4Address,
IPv6Address,
IPvAnyAddress,
IPvAnyInterface,
IPvAnyNetwork,
)
logger = logging.getLogger(__name__)
if TYPE_CHECKING:
from pydantic import EmailStr
from pydantic.main import AnyClassMethod
else:
try:
from pydantic import EmailStr
except ImportError:
logger.warning(
"EmailStr unavailable, using str fallback",
extra={"missing_package": "email_validator"},
)
EmailStr = str
def resolve_refs(schema: dict[str, Any]) -> dict[str, Any]:
@@ -243,3 +301,319 @@ def generate_model_description(model: type[BaseModel]) -> dict[str, Any]:
"schema": json_schema,
},
}
FORMAT_TYPE_MAP: dict[str, type[Any]] = {
"base64": Annotated[bytes, Field(json_schema_extra={"format": "base64"})], # type: ignore[dict-item]
"binary": StrictBytes,
"date": datetime.date,
"time": datetime.time,
"date-time": datetime.datetime,
"duration": datetime.timedelta,
"directory-path": DirectoryPath,
"email": EmailStr,
"file-path": FilePath,
"ipv4": IPv4Address,
"ipv6": IPv6Address,
"ipvanyaddress": IPvAnyAddress, # type: ignore[dict-item]
"ipvanyinterface": IPvAnyInterface, # type: ignore[dict-item]
"ipvanynetwork": IPvAnyNetwork, # type: ignore[dict-item]
"json-string": Json,
"multi-host-uri": PostgresDsn | MongoDsn, # type: ignore[dict-item]
"password": SecretStr,
"path": NewPath,
"uri": AnyUrl,
"uuid": uuid.UUID,
"uuid1": UUID1,
"uuid3": UUID3,
"uuid4": UUID4,
"uuid5": UUID5,
}
def create_model_from_schema( # type: ignore[no-any-unimported]
json_schema: dict[str, Any],
*,
root_schema: dict[str, Any] | None = None,
__config__: ConfigDict | None = None,
__base__: type[BaseModel] | None = None,
__module__: str = __name__,
__validators__: dict[str, AnyClassMethod] | None = None,
__cls_kwargs__: dict[str, Any] | None = None,
) -> type[BaseModel]:
"""Create a Pydantic model from a JSON schema.
This function takes a JSON schema as input and dynamically creates a Pydantic
model class based on the schema. It supports various JSON schema features such
as nested objects, referenced definitions ($ref), arrays with typed items,
union types (anyOf/oneOf), and string formats.
Args:
json_schema: A dictionary representing the JSON schema.
root_schema: The root schema containing $defs. If not provided, the
current schema is treated as the root schema.
__config__: Pydantic configuration for the generated model.
__base__: Base class for the generated model. Defaults to BaseModel.
__module__: Module name for the generated model class.
__validators__: A dictionary of custom validators for the generated model.
__cls_kwargs__: Additional keyword arguments for the generated model class.
Returns:
A dynamically created Pydantic model class based on the provided JSON schema.
Example:
>>> schema = {
... "title": "Person",
... "type": "object",
... "properties": {
... "name": {"type": "string"},
... "age": {"type": "integer"},
... },
... "required": ["name"],
... }
>>> Person = create_model_from_schema(schema)
>>> person = Person(name="John", age=30)
>>> person.name
'John'
"""
effective_root = root_schema or json_schema
if "allOf" in json_schema:
json_schema = _merge_all_of_schemas(json_schema["allOf"], effective_root)
if "title" not in json_schema and "title" in (root_schema or {}):
json_schema["title"] = (root_schema or {}).get("title")
model_name = json_schema.get("title", "DynamicModel")
field_definitions = {
name: _json_schema_to_pydantic_field(
name, prop, json_schema.get("required", []), effective_root
)
for name, prop in (json_schema.get("properties", {}) or {}).items()
}
return create_model_base(
model_name,
__config__=__config__,
__base__=__base__,
__module__=__module__,
__validators__=__validators__,
__cls_kwargs__=__cls_kwargs__,
**field_definitions,
)
def _json_schema_to_pydantic_field(
name: str,
json_schema: dict[str, Any],
required: list[str],
root_schema: dict[str, Any],
) -> Any:
"""Convert a JSON schema property to a Pydantic field definition.
Args:
name: The field name.
json_schema: The JSON schema for this field.
required: List of required field names.
root_schema: The root schema for resolving $ref.
Returns:
A tuple of (type, Field) for use with create_model.
"""
type_ = _json_schema_to_pydantic_type(json_schema, root_schema, name_=name.title())
description = json_schema.get("description")
examples = json_schema.get("examples")
is_required = name in required
field_params: dict[str, Any] = {}
schema_extra: dict[str, Any] = {}
if description:
field_params["description"] = description
if examples:
schema_extra["examples"] = examples
default = ... if is_required else None
if isinstance(type_, type) and issubclass(type_, (int, float)):
if "minimum" in json_schema:
field_params["ge"] = json_schema["minimum"]
if "exclusiveMinimum" in json_schema:
field_params["gt"] = json_schema["exclusiveMinimum"]
if "maximum" in json_schema:
field_params["le"] = json_schema["maximum"]
if "exclusiveMaximum" in json_schema:
field_params["lt"] = json_schema["exclusiveMaximum"]
if "multipleOf" in json_schema:
field_params["multiple_of"] = json_schema["multipleOf"]
format_ = json_schema.get("format")
if format_ in FORMAT_TYPE_MAP:
pydantic_type = FORMAT_TYPE_MAP[format_]
if format_ == "password":
if json_schema.get("writeOnly"):
pydantic_type = SecretBytes
elif format_ == "uri":
allowed_schemes = json_schema.get("scheme")
if allowed_schemes:
if len(allowed_schemes) == 1 and allowed_schemes[0] == "http":
pydantic_type = HttpUrl
elif len(allowed_schemes) == 1 and allowed_schemes[0] == "file":
pydantic_type = FileUrl
type_ = pydantic_type
if isinstance(type_, type) and issubclass(type_, str):
if "minLength" in json_schema:
field_params["min_length"] = json_schema["minLength"]
if "maxLength" in json_schema:
field_params["max_length"] = json_schema["maxLength"]
if "pattern" in json_schema:
field_params["pattern"] = json_schema["pattern"]
if not is_required:
type_ = type_ | None
if schema_extra:
field_params["json_schema_extra"] = schema_extra
return type_, Field(default, **field_params)
def _resolve_ref(ref: str, root_schema: dict[str, Any]) -> dict[str, Any]:
"""Resolve a $ref to its actual schema.
Args:
ref: The $ref string (e.g., "#/$defs/MyType").
root_schema: The root schema containing $defs.
Returns:
The resolved schema dict.
"""
from typing import cast
ref_path = ref.split("/")
if ref.startswith("#/$defs/"):
ref_schema: dict[str, Any] = root_schema["$defs"]
start_idx = 2
else:
ref_schema = root_schema
start_idx = 1
for path in ref_path[start_idx:]:
ref_schema = cast(dict[str, Any], ref_schema[path])
return ref_schema
def _merge_all_of_schemas(
schemas: list[dict[str, Any]],
root_schema: dict[str, Any],
) -> dict[str, Any]:
"""Merge multiple allOf schemas into a single schema.
Combines properties and required fields from all schemas.
Args:
schemas: List of schemas to merge.
root_schema: The root schema for resolving $ref.
Returns:
Merged schema with combined properties and required fields.
"""
merged: dict[str, Any] = {"type": "object", "properties": {}, "required": []}
for schema in schemas:
if "$ref" in schema:
schema = _resolve_ref(schema["$ref"], root_schema)
if "properties" in schema:
merged["properties"].update(schema["properties"])
if "required" in schema:
for field in schema["required"]:
if field not in merged["required"]:
merged["required"].append(field)
if "title" in schema and "title" not in merged:
merged["title"] = schema["title"]
return merged
def _json_schema_to_pydantic_type(
json_schema: dict[str, Any],
root_schema: dict[str, Any],
*,
name_: str | None = None,
) -> Any:
"""Convert a JSON schema to a Python/Pydantic type.
Args:
json_schema: The JSON schema to convert.
root_schema: The root schema for resolving $ref.
name_: Optional name for nested models.
Returns:
A Python type corresponding to the JSON schema.
"""
ref = json_schema.get("$ref")
if ref:
ref_schema = _resolve_ref(ref, root_schema)
return _json_schema_to_pydantic_type(ref_schema, root_schema, name_=name_)
enum_values = json_schema.get("enum")
if enum_values:
return Literal[tuple(enum_values)]
if "const" in json_schema:
return Literal[json_schema["const"]]
any_of_schemas = []
if "anyOf" in json_schema or "oneOf" in json_schema:
any_of_schemas = json_schema.get("anyOf", []) + json_schema.get("oneOf", [])
if any_of_schemas:
any_of_types = [
_json_schema_to_pydantic_type(schema, root_schema)
for schema in any_of_schemas
]
return Union[tuple(any_of_types)] # noqa: UP007
all_of_schemas = json_schema.get("allOf")
if all_of_schemas:
if len(all_of_schemas) == 1:
return _json_schema_to_pydantic_type(
all_of_schemas[0], root_schema, name_=name_
)
merged = _merge_all_of_schemas(all_of_schemas, root_schema)
return _json_schema_to_pydantic_type(merged, root_schema, name_=name_)
type_ = json_schema.get("type")
if type_ == "string":
return str
if type_ == "integer":
return int
if type_ == "number":
return float
if type_ == "boolean":
return bool
if type_ == "array":
items_schema = json_schema.get("items")
if items_schema:
item_type = _json_schema_to_pydantic_type(
items_schema, root_schema, name_=name_
)
return list[item_type] # type: ignore[valid-type]
return list
if type_ == "object":
properties = json_schema.get("properties")
if properties:
json_schema_ = json_schema.copy()
if json_schema_.get("title") is None:
json_schema_["title"] = name_
return create_model_from_schema(json_schema_, root_schema=root_schema)
return dict
if type_ == "null":
return None
if type_ is None:
return Any
raise ValueError(f"Unsupported JSON schema type: {type_} from {json_schema}")

View File

@@ -26,4 +26,5 @@ class LLMMessage(TypedDict):
tool_call_id: NotRequired[str]
name: NotRequired[str]
tool_calls: NotRequired[list[dict[str, Any]]]
raw_tool_call_parts: NotRequired[list[Any]]
files: NotRequired[dict[str, FileInput]]

View File

@@ -1004,3 +1004,53 @@ def test_prepare_kickoff_param_files_override_message_files():
assert "files" in inputs
assert inputs["files"]["same.png"] is param_file # param takes precedence
def test_lite_agent_verbose_false_suppresses_printer_output():
"""Test that setting verbose=False suppresses all printer output."""
from crewai.agents.parser import AgentFinish
from crewai.types.usage_metrics import UsageMetrics
mock_llm = Mock(spec=LLM)
mock_llm.call.return_value = "Final Answer: Hello!"
mock_llm.stop = []
mock_llm.supports_stop_words.return_value = False
mock_llm.get_token_usage_summary.return_value = UsageMetrics(
total_tokens=100,
prompt_tokens=50,
completion_tokens=50,
cached_prompt_tokens=0,
successful_requests=1,
)
with pytest.warns(DeprecationWarning):
agent = LiteAgent(
role="Test Agent",
goal="Test goal",
backstory="Test backstory",
llm=mock_llm,
verbose=False,
)
result = agent.kickoff("Say hello")
assert result is not None
assert isinstance(result, LiteAgentOutput)
# To verify the printer is never called, patch it before execution
with pytest.warns(DeprecationWarning):
agent2 = LiteAgent(
role="Test Agent",
goal="Test goal",
backstory="Test backstory",
llm=mock_llm,
verbose=False,
)
mock_printer = Mock()
agent2._printer = mock_printer
agent2.kickoff("Say hello")
mock_printer.print.assert_not_called()

View File

@@ -0,0 +1,224 @@
interactions:
- request:
body: '{"messages":[{"role":"system","content":"You are Calculator. You are a
calculator assistant\nYour personal goal is: Perform calculations"},{"role":"user","content":"\nCurrent
Task: What is 7 times 6? Use the multiply_numbers tool.\n\nThis is VERY important
to you, your job depends on it!"}],"model":"gpt-4.1-mini","tool_choice":"auto","tools":[{"type":"function","function":{"name":"multiply_numbers","description":"Multiply
two numbers together.","parameters":{"properties":{"a":{"title":"A","type":"integer"},"b":{"title":"B","type":"integer"}},"required":["a","b"],"type":"object"}}}]}'
headers:
User-Agent:
- X-USER-AGENT-XXX
accept:
- application/json
accept-encoding:
- ACCEPT-ENCODING-XXX
authorization:
- AUTHORIZATION-XXX
connection:
- keep-alive
content-length:
- '589'
content-type:
- application/json
host:
- api.openai.com
x-stainless-arch:
- X-STAINLESS-ARCH-XXX
x-stainless-async:
- 'false'
x-stainless-lang:
- python
x-stainless-os:
- X-STAINLESS-OS-XXX
x-stainless-package-version:
- 1.83.0
x-stainless-read-timeout:
- X-STAINLESS-READ-TIMEOUT-XXX
x-stainless-retry-count:
- '0'
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.13.3
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
body:
string: "{\n \"id\": \"chatcmpl-D2gblVDQeSH6tTrJiUtxgjoVoPuAR\",\n \"object\":
\"chat.completion\",\n \"created\": 1769532813,\n \"model\": \"gpt-4.1-mini-2025-04-14\",\n
\ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
\"assistant\",\n \"content\": null,\n \"tool_calls\": [\n {\n
\ \"id\": \"call_gO6PtjoOIDVeDWs7Wf680BHh\",\n \"type\":
\"function\",\n \"function\": {\n \"name\": \"multiply_numbers\",\n
\ \"arguments\": \"{\\\"a\\\":7,\\\"b\\\":6}\"\n }\n
\ }\n ],\n \"refusal\": null,\n \"annotations\":
[]\n },\n \"logprobs\": null,\n \"finish_reason\": \"tool_calls\"\n
\ }\n ],\n \"usage\": {\n \"prompt_tokens\": 100,\n \"completion_tokens\":
18,\n \"total_tokens\": 118,\n \"prompt_tokens_details\": {\n \"cached_tokens\":
0,\n \"audio_tokens\": 0\n },\n \"completion_tokens_details\":
{\n \"reasoning_tokens\": 0,\n \"audio_tokens\": 0,\n \"accepted_prediction_tokens\":
0,\n \"rejected_prediction_tokens\": 0\n }\n },\n \"service_tier\":
\"default\",\n \"system_fingerprint\": \"fp_376a7ccef1\"\n}\n"
headers:
CF-RAY:
- CF-RAY-XXX
Connection:
- keep-alive
Content-Type:
- application/json
Date:
- Tue, 27 Jan 2026 16:53:34 GMT
Server:
- cloudflare
Set-Cookie:
- SET-COOKIE-XXX
Strict-Transport-Security:
- STS-XXX
Transfer-Encoding:
- chunked
X-Content-Type-Options:
- X-CONTENT-TYPE-XXX
access-control-expose-headers:
- ACCESS-CONTROL-XXX
alt-svc:
- h3=":443"; ma=86400
cf-cache-status:
- DYNAMIC
openai-organization:
- OPENAI-ORG-XXX
openai-processing-ms:
- '593'
openai-project:
- OPENAI-PROJECT-XXX
openai-version:
- '2020-10-01'
x-openai-proxy-wasm:
- v0.1
x-ratelimit-limit-requests:
- X-RATELIMIT-LIMIT-REQUESTS-XXX
x-ratelimit-limit-tokens:
- X-RATELIMIT-LIMIT-TOKENS-XXX
x-ratelimit-remaining-requests:
- X-RATELIMIT-REMAINING-REQUESTS-XXX
x-ratelimit-remaining-tokens:
- X-RATELIMIT-REMAINING-TOKENS-XXX
x-ratelimit-reset-requests:
- X-RATELIMIT-RESET-REQUESTS-XXX
x-ratelimit-reset-tokens:
- X-RATELIMIT-RESET-TOKENS-XXX
x-request-id:
- X-REQUEST-ID-XXX
status:
code: 200
message: OK
- request:
body: '{"messages":[{"role":"system","content":"You are Calculator. You are a
calculator assistant\nYour personal goal is: Perform calculations"},{"role":"user","content":"\nCurrent
Task: What is 7 times 6? Use the multiply_numbers tool.\n\nThis is VERY important
to you, your job depends on it!"},{"role":"assistant","content":null,"tool_calls":[{"id":"call_gO6PtjoOIDVeDWs7Wf680BHh","type":"function","function":{"name":"multiply_numbers","arguments":"{\"a\":7,\"b\":6}"}}]},{"role":"tool","tool_call_id":"call_gO6PtjoOIDVeDWs7Wf680BHh","name":"multiply_numbers","content":"42"},{"role":"user","content":"Analyze
the tool result. If requirements are met, provide the Final Answer. Otherwise,
call the next tool. Deliver only the answer without meta-commentary."}],"model":"gpt-4.1-mini","tool_choice":"auto","tools":[{"type":"function","function":{"name":"multiply_numbers","description":"Multiply
two numbers together.","parameters":{"properties":{"a":{"title":"A","type":"integer"},"b":{"title":"B","type":"integer"}},"required":["a","b"],"type":"object"}}}]}'
headers:
User-Agent:
- X-USER-AGENT-XXX
accept:
- application/json
accept-encoding:
- ACCEPT-ENCODING-XXX
authorization:
- AUTHORIZATION-XXX
connection:
- keep-alive
content-length:
- '1056'
content-type:
- application/json
cookie:
- COOKIE-XXX
host:
- api.openai.com
x-stainless-arch:
- X-STAINLESS-ARCH-XXX
x-stainless-async:
- 'false'
x-stainless-lang:
- python
x-stainless-os:
- X-STAINLESS-OS-XXX
x-stainless-package-version:
- 1.83.0
x-stainless-read-timeout:
- X-STAINLESS-READ-TIMEOUT-XXX
x-stainless-retry-count:
- '0'
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.13.3
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
body:
string: "{\n \"id\": \"chatcmpl-D2gbm9NaGCXkI3QwW3eOTFSP4L4lh\",\n \"object\":
\"chat.completion\",\n \"created\": 1769532814,\n \"model\": \"gpt-4.1-mini-2025-04-14\",\n
\ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
\"assistant\",\n \"content\": \"42\",\n \"refusal\": null,\n
\ \"annotations\": []\n },\n \"logprobs\": null,\n \"finish_reason\":
\"stop\"\n }\n ],\n \"usage\": {\n \"prompt_tokens\": 162,\n \"completion_tokens\":
2,\n \"total_tokens\": 164,\n \"prompt_tokens_details\": {\n \"cached_tokens\":
0,\n \"audio_tokens\": 0\n },\n \"completion_tokens_details\":
{\n \"reasoning_tokens\": 0,\n \"audio_tokens\": 0,\n \"accepted_prediction_tokens\":
0,\n \"rejected_prediction_tokens\": 0\n }\n },\n \"service_tier\":
\"default\",\n \"system_fingerprint\": \"fp_376a7ccef1\"\n}\n"
headers:
CF-RAY:
- CF-RAY-XXX
Connection:
- keep-alive
Content-Type:
- application/json
Date:
- Tue, 27 Jan 2026 16:53:34 GMT
Server:
- cloudflare
Strict-Transport-Security:
- STS-XXX
Transfer-Encoding:
- chunked
X-Content-Type-Options:
- X-CONTENT-TYPE-XXX
access-control-expose-headers:
- ACCESS-CONTROL-XXX
alt-svc:
- h3=":443"; ma=86400
cf-cache-status:
- DYNAMIC
openai-organization:
- OPENAI-ORG-XXX
openai-processing-ms:
- '259'
openai-project:
- OPENAI-PROJECT-XXX
openai-version:
- '2020-10-01'
x-openai-proxy-wasm:
- v0.1
x-ratelimit-limit-requests:
- X-RATELIMIT-LIMIT-REQUESTS-XXX
x-ratelimit-limit-tokens:
- X-RATELIMIT-LIMIT-TOKENS-XXX
x-ratelimit-remaining-requests:
- X-RATELIMIT-REMAINING-REQUESTS-XXX
x-ratelimit-remaining-tokens:
- X-RATELIMIT-REMAINING-TOKENS-XXX
x-ratelimit-reset-requests:
- X-RATELIMIT-RESET-REQUESTS-XXX
x-ratelimit-reset-tokens:
- X-RATELIMIT-RESET-TOKENS-XXX
x-request-id:
- X-REQUEST-ID-XXX
status:
code: 200
message: OK
version: 1

View File

@@ -0,0 +1,351 @@
interactions:
- request:
body: '{"messages":[{"role":"system","content":"You are Test Agent. You are a
test agent\nYour personal goal is: Try to use the dangerous operation tool"},{"role":"user","content":"\nCurrent
Task: Use the dangerous_operation tool with action ''delete_all''.\n\nThis is
the expected criteria for your final answer: The result of the operation\nyou
MUST return the actual complete content as the final answer, not a summary.\n\nThis
is VERY important to you, your job depends on it!"}],"model":"gpt-4.1-mini","tool_choice":"auto","tools":[{"type":"function","function":{"name":"dangerous_operation","description":"Perform
a dangerous operation that should be blocked.","parameters":{"properties":{"action":{"title":"Action","type":"string"}},"required":["action"],"type":"object"}}}]}'
headers:
User-Agent:
- X-USER-AGENT-XXX
accept:
- application/json
accept-encoding:
- ACCEPT-ENCODING-XXX
authorization:
- AUTHORIZATION-XXX
connection:
- keep-alive
content-length:
- '773'
content-type:
- application/json
host:
- api.openai.com
x-stainless-arch:
- X-STAINLESS-ARCH-XXX
x-stainless-async:
- 'false'
x-stainless-lang:
- python
x-stainless-os:
- X-STAINLESS-OS-XXX
x-stainless-package-version:
- 1.83.0
x-stainless-read-timeout:
- X-STAINLESS-READ-TIMEOUT-XXX
x-stainless-retry-count:
- '0'
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.13.3
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
body:
string: "{\n \"id\": \"chatcmpl-D2giKEOxBDVqJVqVECwcFjbzdQKSA\",\n \"object\":
\"chat.completion\",\n \"created\": 1769533220,\n \"model\": \"gpt-4.1-mini-2025-04-14\",\n
\ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
\"assistant\",\n \"content\": null,\n \"tool_calls\": [\n {\n
\ \"id\": \"call_3OM1qS0QaWqhiJaHyJbNz1ME\",\n \"type\":
\"function\",\n \"function\": {\n \"name\": \"dangerous_operation\",\n
\ \"arguments\": \"{\\\"action\\\":\\\"delete_all\\\"}\"\n }\n
\ }\n ],\n \"refusal\": null,\n \"annotations\":
[]\n },\n \"logprobs\": null,\n \"finish_reason\": \"tool_calls\"\n
\ }\n ],\n \"usage\": {\n \"prompt_tokens\": 133,\n \"completion_tokens\":
17,\n \"total_tokens\": 150,\n \"prompt_tokens_details\": {\n \"cached_tokens\":
0,\n \"audio_tokens\": 0\n },\n \"completion_tokens_details\":
{\n \"reasoning_tokens\": 0,\n \"audio_tokens\": 0,\n \"accepted_prediction_tokens\":
0,\n \"rejected_prediction_tokens\": 0\n }\n },\n \"service_tier\":
\"default\",\n \"system_fingerprint\": \"fp_376a7ccef1\"\n}\n"
headers:
CF-RAY:
- CF-RAY-XXX
Connection:
- keep-alive
Content-Type:
- application/json
Date:
- Tue, 27 Jan 2026 17:00:20 GMT
Server:
- cloudflare
Set-Cookie:
- SET-COOKIE-XXX
Strict-Transport-Security:
- STS-XXX
Transfer-Encoding:
- chunked
X-Content-Type-Options:
- X-CONTENT-TYPE-XXX
access-control-expose-headers:
- ACCESS-CONTROL-XXX
alt-svc:
- h3=":443"; ma=86400
cf-cache-status:
- DYNAMIC
openai-organization:
- OPENAI-ORG-XXX
openai-processing-ms:
- '484'
openai-project:
- OPENAI-PROJECT-XXX
openai-version:
- '2020-10-01'
x-openai-proxy-wasm:
- v0.1
x-ratelimit-limit-requests:
- X-RATELIMIT-LIMIT-REQUESTS-XXX
x-ratelimit-limit-tokens:
- X-RATELIMIT-LIMIT-TOKENS-XXX
x-ratelimit-remaining-requests:
- X-RATELIMIT-REMAINING-REQUESTS-XXX
x-ratelimit-remaining-tokens:
- X-RATELIMIT-REMAINING-TOKENS-XXX
x-ratelimit-reset-requests:
- X-RATELIMIT-RESET-REQUESTS-XXX
x-ratelimit-reset-tokens:
- X-RATELIMIT-RESET-TOKENS-XXX
x-request-id:
- X-REQUEST-ID-XXX
status:
code: 200
message: OK
- request:
body: '{"messages":[{"role":"system","content":"You are Test Agent. You are a
test agent\nYour personal goal is: Try to use the dangerous operation tool"},{"role":"user","content":"\nCurrent
Task: Use the dangerous_operation tool with action ''delete_all''.\n\nThis is
the expected criteria for your final answer: The result of the operation\nyou
MUST return the actual complete content as the final answer, not a summary.\n\nThis
is VERY important to you, your job depends on it!"},{"role":"assistant","content":null,"tool_calls":[{"id":"call_3OM1qS0QaWqhiJaHyJbNz1ME","type":"function","function":{"name":"dangerous_operation","arguments":"{\"action\":\"delete_all\"}"}}]},{"role":"tool","tool_call_id":"call_3OM1qS0QaWqhiJaHyJbNz1ME","name":"dangerous_operation","content":"Tool
execution blocked by hook. Tool: dangerous_operation"},{"role":"user","content":"Analyze
the tool result. If requirements are met, provide the Final Answer. Otherwise,
call the next tool. Deliver only the answer without meta-commentary."}],"model":"gpt-4.1-mini","tool_choice":"auto","tools":[{"type":"function","function":{"name":"dangerous_operation","description":"Perform
a dangerous operation that should be blocked.","parameters":{"properties":{"action":{"title":"Action","type":"string"}},"required":["action"],"type":"object"}}}]}'
headers:
User-Agent:
- X-USER-AGENT-XXX
accept:
- application/json
accept-encoding:
- ACCEPT-ENCODING-XXX
authorization:
- AUTHORIZATION-XXX
connection:
- keep-alive
content-length:
- '1311'
content-type:
- application/json
cookie:
- COOKIE-XXX
host:
- api.openai.com
x-stainless-arch:
- X-STAINLESS-ARCH-XXX
x-stainless-async:
- 'false'
x-stainless-lang:
- python
x-stainless-os:
- X-STAINLESS-OS-XXX
x-stainless-package-version:
- 1.83.0
x-stainless-read-timeout:
- X-STAINLESS-READ-TIMEOUT-XXX
x-stainless-retry-count:
- '0'
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.13.3
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
body:
string: "{\n \"id\": \"chatcmpl-D2giLnD91JxhK0yXninQ7oHYttNDY\",\n \"object\":
\"chat.completion\",\n \"created\": 1769533221,\n \"model\": \"gpt-4.1-mini-2025-04-14\",\n
\ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
\"assistant\",\n \"content\": null,\n \"tool_calls\": [\n {\n
\ \"id\": \"call_qF1c2e31GgjoSNJx0HBxI3zX\",\n \"type\":
\"function\",\n \"function\": {\n \"name\": \"dangerous_operation\",\n
\ \"arguments\": \"{\\\"action\\\":\\\"delete_all\\\"}\"\n }\n
\ }\n ],\n \"refusal\": null,\n \"annotations\":
[]\n },\n \"logprobs\": null,\n \"finish_reason\": \"tool_calls\"\n
\ }\n ],\n \"usage\": {\n \"prompt_tokens\": 204,\n \"completion_tokens\":
17,\n \"total_tokens\": 221,\n \"prompt_tokens_details\": {\n \"cached_tokens\":
0,\n \"audio_tokens\": 0\n },\n \"completion_tokens_details\":
{\n \"reasoning_tokens\": 0,\n \"audio_tokens\": 0,\n \"accepted_prediction_tokens\":
0,\n \"rejected_prediction_tokens\": 0\n }\n },\n \"service_tier\":
\"default\",\n \"system_fingerprint\": \"fp_376a7ccef1\"\n}\n"
headers:
CF-RAY:
- CF-RAY-XXX
Connection:
- keep-alive
Content-Type:
- application/json
Date:
- Tue, 27 Jan 2026 17:00:21 GMT
Server:
- cloudflare
Strict-Transport-Security:
- STS-XXX
Transfer-Encoding:
- chunked
X-Content-Type-Options:
- X-CONTENT-TYPE-XXX
access-control-expose-headers:
- ACCESS-CONTROL-XXX
alt-svc:
- h3=":443"; ma=86400
cf-cache-status:
- DYNAMIC
openai-organization:
- OPENAI-ORG-XXX
openai-processing-ms:
- '447'
openai-project:
- OPENAI-PROJECT-XXX
openai-version:
- '2020-10-01'
x-openai-proxy-wasm:
- v0.1
x-ratelimit-limit-requests:
- X-RATELIMIT-LIMIT-REQUESTS-XXX
x-ratelimit-limit-tokens:
- X-RATELIMIT-LIMIT-TOKENS-XXX
x-ratelimit-remaining-requests:
- X-RATELIMIT-REMAINING-REQUESTS-XXX
x-ratelimit-remaining-tokens:
- X-RATELIMIT-REMAINING-TOKENS-XXX
x-ratelimit-reset-requests:
- X-RATELIMIT-RESET-REQUESTS-XXX
x-ratelimit-reset-tokens:
- X-RATELIMIT-RESET-TOKENS-XXX
x-request-id:
- X-REQUEST-ID-XXX
status:
code: 200
message: OK
- request:
body: '{"messages":[{"role":"system","content":"You are Test Agent. You are a
test agent\nYour personal goal is: Try to use the dangerous operation tool"},{"role":"user","content":"\nCurrent
Task: Use the dangerous_operation tool with action ''delete_all''.\n\nThis is
the expected criteria for your final answer: The result of the operation\nyou
MUST return the actual complete content as the final answer, not a summary.\n\nThis
is VERY important to you, your job depends on it!"},{"role":"assistant","content":null,"tool_calls":[{"id":"call_3OM1qS0QaWqhiJaHyJbNz1ME","type":"function","function":{"name":"dangerous_operation","arguments":"{\"action\":\"delete_all\"}"}}]},{"role":"tool","tool_call_id":"call_3OM1qS0QaWqhiJaHyJbNz1ME","name":"dangerous_operation","content":"Tool
execution blocked by hook. Tool: dangerous_operation"},{"role":"user","content":"Analyze
the tool result. If requirements are met, provide the Final Answer. Otherwise,
call the next tool. Deliver only the answer without meta-commentary."},{"role":"assistant","content":null,"tool_calls":[{"id":"call_qF1c2e31GgjoSNJx0HBxI3zX","type":"function","function":{"name":"dangerous_operation","arguments":"{\"action\":\"delete_all\"}"}}]},{"role":"tool","tool_call_id":"call_qF1c2e31GgjoSNJx0HBxI3zX","name":"dangerous_operation","content":"Tool
execution blocked by hook. Tool: dangerous_operation"},{"role":"user","content":"Analyze
the tool result. If requirements are met, provide the Final Answer. Otherwise,
call the next tool. Deliver only the answer without meta-commentary."}],"model":"gpt-4.1-mini","tool_choice":"auto","tools":[{"type":"function","function":{"name":"dangerous_operation","description":"Perform
a dangerous operation that should be blocked.","parameters":{"properties":{"action":{"title":"Action","type":"string"}},"required":["action"],"type":"object"}}}]}'
headers:
User-Agent:
- X-USER-AGENT-XXX
accept:
- application/json
accept-encoding:
- ACCEPT-ENCODING-XXX
authorization:
- AUTHORIZATION-XXX
connection:
- keep-alive
content-length:
- '1849'
content-type:
- application/json
cookie:
- COOKIE-XXX
host:
- api.openai.com
x-stainless-arch:
- X-STAINLESS-ARCH-XXX
x-stainless-async:
- 'false'
x-stainless-lang:
- python
x-stainless-os:
- X-STAINLESS-OS-XXX
x-stainless-package-version:
- 1.83.0
x-stainless-read-timeout:
- X-STAINLESS-READ-TIMEOUT-XXX
x-stainless-retry-count:
- '0'
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.13.3
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
body:
string: "{\n \"id\": \"chatcmpl-D2giM1tAvEOCNwDw1qNmNUN5PIg2Y\",\n \"object\":
\"chat.completion\",\n \"created\": 1769533222,\n \"model\": \"gpt-4.1-mini-2025-04-14\",\n
\ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
\"assistant\",\n \"content\": \"The dangerous_operation tool with action
'delete_all' was blocked and did not execute. There is no result from the
operation to provide.\",\n \"refusal\": null,\n \"annotations\":
[]\n },\n \"logprobs\": null,\n \"finish_reason\": \"stop\"\n
\ }\n ],\n \"usage\": {\n \"prompt_tokens\": 275,\n \"completion_tokens\":
28,\n \"total_tokens\": 303,\n \"prompt_tokens_details\": {\n \"cached_tokens\":
0,\n \"audio_tokens\": 0\n },\n \"completion_tokens_details\":
{\n \"reasoning_tokens\": 0,\n \"audio_tokens\": 0,\n \"accepted_prediction_tokens\":
0,\n \"rejected_prediction_tokens\": 0\n }\n },\n \"service_tier\":
\"default\",\n \"system_fingerprint\": \"fp_376a7ccef1\"\n}\n"
headers:
CF-RAY:
- CF-RAY-XXX
Connection:
- keep-alive
Content-Type:
- application/json
Date:
- Tue, 27 Jan 2026 17:00:22 GMT
Server:
- cloudflare
Strict-Transport-Security:
- STS-XXX
Transfer-Encoding:
- chunked
X-Content-Type-Options:
- X-CONTENT-TYPE-XXX
access-control-expose-headers:
- ACCESS-CONTROL-XXX
alt-svc:
- h3=":443"; ma=86400
cf-cache-status:
- DYNAMIC
openai-organization:
- OPENAI-ORG-XXX
openai-processing-ms:
- '636'
openai-project:
- OPENAI-PROJECT-XXX
openai-version:
- '2020-10-01'
x-openai-proxy-wasm:
- v0.1
x-ratelimit-limit-requests:
- X-RATELIMIT-LIMIT-REQUESTS-XXX
x-ratelimit-limit-tokens:
- X-RATELIMIT-LIMIT-TOKENS-XXX
x-ratelimit-remaining-requests:
- X-RATELIMIT-REMAINING-REQUESTS-XXX
x-ratelimit-remaining-tokens:
- X-RATELIMIT-REMAINING-TOKENS-XXX
x-ratelimit-reset-requests:
- X-RATELIMIT-RESET-REQUESTS-XXX
x-ratelimit-reset-tokens:
- X-RATELIMIT-RESET-TOKENS-XXX
x-request-id:
- X-REQUEST-ID-XXX
status:
code: 200
message: OK
version: 1

View File

@@ -0,0 +1,230 @@
interactions:
- request:
body: '{"messages":[{"role":"system","content":"You are Math Assistant. You are
a math assistant that helps with division\nYour personal goal is: Perform division
calculations accurately"},{"role":"user","content":"\nCurrent Task: Calculate
100 divided by 4 using the divide_numbers tool.\n\nThis is the expected criteria
for your final answer: The result of the division\nyou MUST return the actual
complete content as the final answer, not a summary.\n\nThis is VERY important
to you, your job depends on it!"}],"model":"gpt-4.1-mini","tool_choice":"auto","tools":[{"type":"function","function":{"name":"divide_numbers","description":"Divide
first number by second number.","parameters":{"properties":{"a":{"title":"A","type":"integer"},"b":{"title":"B","type":"integer"}},"required":["a","b"],"type":"object"}}}]}'
headers:
User-Agent:
- X-USER-AGENT-XXX
accept:
- application/json
accept-encoding:
- ACCEPT-ENCODING-XXX
authorization:
- AUTHORIZATION-XXX
connection:
- keep-alive
content-length:
- '809'
content-type:
- application/json
host:
- api.openai.com
x-stainless-arch:
- X-STAINLESS-ARCH-XXX
x-stainless-async:
- 'false'
x-stainless-lang:
- python
x-stainless-os:
- X-STAINLESS-OS-XXX
x-stainless-package-version:
- 1.83.0
x-stainless-read-timeout:
- X-STAINLESS-READ-TIMEOUT-XXX
x-stainless-retry-count:
- '0'
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.13.3
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
body:
string: "{\n \"id\": \"chatcmpl-D2gbkWUn8InDLeD1Cf8w0LxiUQOIS\",\n \"object\":
\"chat.completion\",\n \"created\": 1769532812,\n \"model\": \"gpt-4.1-mini-2025-04-14\",\n
\ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
\"assistant\",\n \"content\": null,\n \"tool_calls\": [\n {\n
\ \"id\": \"call_gwIV3i71RNqfpr7KguEciCuV\",\n \"type\":
\"function\",\n \"function\": {\n \"name\": \"divide_numbers\",\n
\ \"arguments\": \"{\\\"a\\\":100,\\\"b\\\":4}\"\n }\n
\ }\n ],\n \"refusal\": null,\n \"annotations\":
[]\n },\n \"logprobs\": null,\n \"finish_reason\": \"tool_calls\"\n
\ }\n ],\n \"usage\": {\n \"prompt_tokens\": 140,\n \"completion_tokens\":
18,\n \"total_tokens\": 158,\n \"prompt_tokens_details\": {\n \"cached_tokens\":
0,\n \"audio_tokens\": 0\n },\n \"completion_tokens_details\":
{\n \"reasoning_tokens\": 0,\n \"audio_tokens\": 0,\n \"accepted_prediction_tokens\":
0,\n \"rejected_prediction_tokens\": 0\n }\n },\n \"service_tier\":
\"default\",\n \"system_fingerprint\": \"fp_376a7ccef1\"\n}\n"
headers:
CF-RAY:
- CF-RAY-XXX
Connection:
- keep-alive
Content-Type:
- application/json
Date:
- Tue, 27 Jan 2026 16:53:32 GMT
Server:
- cloudflare
Set-Cookie:
- SET-COOKIE-XXX
Strict-Transport-Security:
- STS-XXX
Transfer-Encoding:
- chunked
X-Content-Type-Options:
- X-CONTENT-TYPE-XXX
access-control-expose-headers:
- ACCESS-CONTROL-XXX
alt-svc:
- h3=":443"; ma=86400
cf-cache-status:
- DYNAMIC
openai-organization:
- OPENAI-ORG-XXX
openai-processing-ms:
- '435'
openai-project:
- OPENAI-PROJECT-XXX
openai-version:
- '2020-10-01'
x-openai-proxy-wasm:
- v0.1
x-ratelimit-limit-requests:
- X-RATELIMIT-LIMIT-REQUESTS-XXX
x-ratelimit-limit-tokens:
- X-RATELIMIT-LIMIT-TOKENS-XXX
x-ratelimit-remaining-requests:
- X-RATELIMIT-REMAINING-REQUESTS-XXX
x-ratelimit-remaining-tokens:
- X-RATELIMIT-REMAINING-TOKENS-XXX
x-ratelimit-reset-requests:
- X-RATELIMIT-RESET-REQUESTS-XXX
x-ratelimit-reset-tokens:
- X-RATELIMIT-RESET-TOKENS-XXX
x-request-id:
- X-REQUEST-ID-XXX
status:
code: 200
message: OK
- request:
body: '{"messages":[{"role":"system","content":"You are Math Assistant. You are
a math assistant that helps with division\nYour personal goal is: Perform division
calculations accurately"},{"role":"user","content":"\nCurrent Task: Calculate
100 divided by 4 using the divide_numbers tool.\n\nThis is the expected criteria
for your final answer: The result of the division\nyou MUST return the actual
complete content as the final answer, not a summary.\n\nThis is VERY important
to you, your job depends on it!"},{"role":"assistant","content":null,"tool_calls":[{"id":"call_gwIV3i71RNqfpr7KguEciCuV","type":"function","function":{"name":"divide_numbers","arguments":"{\"a\":100,\"b\":4}"}}]},{"role":"tool","tool_call_id":"call_gwIV3i71RNqfpr7KguEciCuV","name":"divide_numbers","content":"25.0"},{"role":"user","content":"Analyze
the tool result. If requirements are met, provide the Final Answer. Otherwise,
call the next tool. Deliver only the answer without meta-commentary."}],"model":"gpt-4.1-mini","tool_choice":"auto","tools":[{"type":"function","function":{"name":"divide_numbers","description":"Divide
first number by second number.","parameters":{"properties":{"a":{"title":"A","type":"integer"},"b":{"title":"B","type":"integer"}},"required":["a","b"],"type":"object"}}}]}'
headers:
User-Agent:
- X-USER-AGENT-XXX
accept:
- application/json
accept-encoding:
- ACCEPT-ENCODING-XXX
authorization:
- AUTHORIZATION-XXX
connection:
- keep-alive
content-length:
- '1276'
content-type:
- application/json
cookie:
- COOKIE-XXX
host:
- api.openai.com
x-stainless-arch:
- X-STAINLESS-ARCH-XXX
x-stainless-async:
- 'false'
x-stainless-lang:
- python
x-stainless-os:
- X-STAINLESS-OS-XXX
x-stainless-package-version:
- 1.83.0
x-stainless-read-timeout:
- X-STAINLESS-READ-TIMEOUT-XXX
x-stainless-retry-count:
- '0'
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.13.3
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
body:
string: "{\n \"id\": \"chatcmpl-D2gbkHw19D5oEBOhpZP5FR5MvRFgb\",\n \"object\":
\"chat.completion\",\n \"created\": 1769532812,\n \"model\": \"gpt-4.1-mini-2025-04-14\",\n
\ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
\"assistant\",\n \"content\": \"25.0\",\n \"refusal\": null,\n
\ \"annotations\": []\n },\n \"logprobs\": null,\n \"finish_reason\":
\"stop\"\n }\n ],\n \"usage\": {\n \"prompt_tokens\": 204,\n \"completion_tokens\":
4,\n \"total_tokens\": 208,\n \"prompt_tokens_details\": {\n \"cached_tokens\":
0,\n \"audio_tokens\": 0\n },\n \"completion_tokens_details\":
{\n \"reasoning_tokens\": 0,\n \"audio_tokens\": 0,\n \"accepted_prediction_tokens\":
0,\n \"rejected_prediction_tokens\": 0\n }\n },\n \"service_tier\":
\"default\",\n \"system_fingerprint\": \"fp_376a7ccef1\"\n}\n"
headers:
CF-RAY:
- CF-RAY-XXX
Connection:
- keep-alive
Content-Type:
- application/json
Date:
- Tue, 27 Jan 2026 16:53:33 GMT
Server:
- cloudflare
Strict-Transport-Security:
- STS-XXX
Transfer-Encoding:
- chunked
X-Content-Type-Options:
- X-CONTENT-TYPE-XXX
access-control-expose-headers:
- ACCESS-CONTROL-XXX
alt-svc:
- h3=":443"; ma=86400
cf-cache-status:
- DYNAMIC
openai-organization:
- OPENAI-ORG-XXX
openai-processing-ms:
- '523'
openai-project:
- OPENAI-PROJECT-XXX
openai-version:
- '2020-10-01'
x-openai-proxy-wasm:
- v0.1
x-ratelimit-limit-requests:
- X-RATELIMIT-LIMIT-REQUESTS-XXX
x-ratelimit-limit-tokens:
- X-RATELIMIT-LIMIT-TOKENS-XXX
x-ratelimit-remaining-requests:
- X-RATELIMIT-REMAINING-REQUESTS-XXX
x-ratelimit-remaining-tokens:
- X-RATELIMIT-REMAINING-TOKENS-XXX
x-ratelimit-reset-requests:
- X-RATELIMIT-RESET-REQUESTS-XXX
x-ratelimit-reset-tokens:
- X-RATELIMIT-RESET-TOKENS-XXX
x-request-id:
- X-REQUEST-ID-XXX
status:
code: 200
message: OK
version: 1

View File

@@ -1,7 +1,22 @@
interactions:
- request:
body: '{"messages":[{"role":"system","content":"You are Calculator Assistant. You are a helpful calculator assistant\nYour personal goal is: Help with math calculations\n\nYou ONLY have access to the following tools, and should NEVER make up tools that are not listed here:\n\nTool Name: calculate_sum\nTool Arguments: {''a'': {''description'': None, ''type'': ''int''}, ''b'': {''description'': None, ''type'': ''int''}}\nTool Description: Add two numbers together.\n\nIMPORTANT: Use the following format in your response:\n\n```\nThought: you should always think about what to do\nAction: the action to take, only one name of [calculate_sum], just the name, exactly as it''s written.\nAction Input: the input to the action, just a simple JSON object, enclosed in curly braces, using \" to wrap keys and values.\nObservation: the result of the action\n```\n\nOnce all necessary information is gathered, return the following format:\n\n```\nThought: I now know the final answer\nFinal Answer: the final
answer to the original input question\n```"},{"role":"user","content":"What is 5 + 3? Use the calculate_sum tool."}],"model":"gpt-4.1-mini"}'
body: '{"messages":[{"role":"system","content":"You are Calculator Assistant.
You are a helpful calculator assistant\nYour personal goal is: Help with math
calculations\n\nYou ONLY have access to the following tools, and should NEVER
make up tools that are not listed here:\n\nTool Name: calculate_sum\nTool Arguments:
{\n \"properties\": {\n \"a\": {\n \"title\": \"A\",\n \"type\":
\"integer\"\n },\n \"b\": {\n \"title\": \"B\",\n \"type\":
\"integer\"\n }\n },\n \"required\": [\n \"a\",\n \"b\"\n ],\n \"title\":
\"Calculate_Sum\",\n \"type\": \"object\",\n \"additionalProperties\": false\n}\nTool
Description: Add two numbers together.\n\nIMPORTANT: Use the following format
in your response:\n\n```\nThought: you should always think about what to do\nAction:
the action to take, only one name of [calculate_sum], just the name, exactly
as it''s written.\nAction Input: the input to the action, just a simple JSON
object, enclosed in curly braces, using \" to wrap keys and values.\nObservation:
the result of the action\n```\n\nOnce all necessary information is gathered,
return the following format:\n\n```\nThought: I now know the final answer\nFinal
Answer: the final answer to the original input question\n```"},{"role":"user","content":"What
is 5 + 3? Use the calculate_sum tool."}],"model":"gpt-4.1-mini"}'
headers:
User-Agent:
- X-USER-AGENT-XXX
@@ -14,7 +29,7 @@ interactions:
connection:
- keep-alive
content-length:
- '1119'
- '1356'
content-type:
- application/json
host:
@@ -41,8 +56,18 @@ interactions:
uri: https://api.openai.com/v1/chat/completions
response:
body:
string: "{\n \"id\": \"chatcmpl-CiksV15hVLWURKZH4BxQEGjiCFWpz\",\n \"object\": \"chat.completion\",\n \"created\": 1764782667,\n \"model\": \"gpt-4.1-mini-2025-04-14\",\n \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\": \"assistant\",\n \"content\": \"```\\nThought: I should use the calculate_sum tool to add 5 and 3.\\nAction: calculate_sum\\nAction Input: {\\\"a\\\": 5, \\\"b\\\": 3}\\n```\",\n \"refusal\": null,\n \"annotations\": []\n },\n \"logprobs\": null,\n \"finish_reason\": \"stop\"\n }\n ],\n \"usage\": {\n \"prompt_tokens\": 234,\n \"completion_tokens\": 40,\n \"total_tokens\": 274,\n \"prompt_tokens_details\": {\n \"cached_tokens\": 0,\n \"audio_tokens\": 0\n },\n \"completion_tokens_details\": {\n \"reasoning_tokens\": 0,\n \"audio_tokens\": 0,\n \"accepted_prediction_tokens\": 0,\n \"rejected_prediction_tokens\": 0\n }\n },\n \"service_tier\"\
: \"default\",\n \"system_fingerprint\": \"fp_9766e549b2\"\n}\n"
string: "{\n \"id\": \"chatcmpl-D2gSz7JfTi4NQ2QRTANg8Z2afJI8b\",\n \"object\":
\"chat.completion\",\n \"created\": 1769532269,\n \"model\": \"gpt-4.1-mini-2025-04-14\",\n
\ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
\"assistant\",\n \"content\": \"```\\nThought: I need to use the calculate_sum
tool to find the sum of 5 and 3\\nAction: calculate_sum\\nAction Input: {\\\"a\\\":5,\\\"b\\\":3}\\n```\",\n
\ \"refusal\": null,\n \"annotations\": []\n },\n \"logprobs\":
null,\n \"finish_reason\": \"stop\"\n }\n ],\n \"usage\": {\n \"prompt_tokens\":
295,\n \"completion_tokens\": 41,\n \"total_tokens\": 336,\n \"prompt_tokens_details\":
{\n \"cached_tokens\": 0,\n \"audio_tokens\": 0\n },\n \"completion_tokens_details\":
{\n \"reasoning_tokens\": 0,\n \"audio_tokens\": 0,\n \"accepted_prediction_tokens\":
0,\n \"rejected_prediction_tokens\": 0\n }\n },\n \"service_tier\":
\"default\",\n \"system_fingerprint\": \"fp_376a7ccef1\"\n}\n"
headers:
CF-RAY:
- CF-RAY-XXX
@@ -51,7 +76,7 @@ interactions:
Content-Type:
- application/json
Date:
- Wed, 03 Dec 2025 17:24:28 GMT
- Tue, 27 Jan 2026 16:44:30 GMT
Server:
- cloudflare
Set-Cookie:
@@ -71,13 +96,11 @@ interactions:
openai-organization:
- OPENAI-ORG-XXX
openai-processing-ms:
- '681'
- '827'
openai-project:
- OPENAI-PROJECT-XXX
openai-version:
- '2020-10-01'
x-envoy-upstream-service-time:
- '871'
x-openai-proxy-wasm:
- v0.1
x-ratelimit-limit-requests:
@@ -98,8 +121,25 @@ interactions:
code: 200
message: OK
- request:
body: '{"messages":[{"role":"system","content":"You are Calculator Assistant. You are a helpful calculator assistant\nYour personal goal is: Help with math calculations\n\nYou ONLY have access to the following tools, and should NEVER make up tools that are not listed here:\n\nTool Name: calculate_sum\nTool Arguments: {''a'': {''description'': None, ''type'': ''int''}, ''b'': {''description'': None, ''type'': ''int''}}\nTool Description: Add two numbers together.\n\nIMPORTANT: Use the following format in your response:\n\n```\nThought: you should always think about what to do\nAction: the action to take, only one name of [calculate_sum], just the name, exactly as it''s written.\nAction Input: the input to the action, just a simple JSON object, enclosed in curly braces, using \" to wrap keys and values.\nObservation: the result of the action\n```\n\nOnce all necessary information is gathered, return the following format:\n\n```\nThought: I now know the final answer\nFinal Answer: the final
answer to the original input question\n```"},{"role":"user","content":"What is 5 + 3? Use the calculate_sum tool."},{"role":"assistant","content":"```\nThought: I should use the calculate_sum tool to add 5 and 3.\nAction: calculate_sum\nAction Input: {\"a\": 5, \"b\": 3}\n```\nObservation: 8"}],"model":"gpt-4.1-mini"}'
body: '{"messages":[{"role":"system","content":"You are Calculator Assistant.
You are a helpful calculator assistant\nYour personal goal is: Help with math
calculations\n\nYou ONLY have access to the following tools, and should NEVER
make up tools that are not listed here:\n\nTool Name: calculate_sum\nTool Arguments:
{\n \"properties\": {\n \"a\": {\n \"title\": \"A\",\n \"type\":
\"integer\"\n },\n \"b\": {\n \"title\": \"B\",\n \"type\":
\"integer\"\n }\n },\n \"required\": [\n \"a\",\n \"b\"\n ],\n \"title\":
\"Calculate_Sum\",\n \"type\": \"object\",\n \"additionalProperties\": false\n}\nTool
Description: Add two numbers together.\n\nIMPORTANT: Use the following format
in your response:\n\n```\nThought: you should always think about what to do\nAction:
the action to take, only one name of [calculate_sum], just the name, exactly
as it''s written.\nAction Input: the input to the action, just a simple JSON
object, enclosed in curly braces, using \" to wrap keys and values.\nObservation:
the result of the action\n```\n\nOnce all necessary information is gathered,
return the following format:\n\n```\nThought: I now know the final answer\nFinal
Answer: the final answer to the original input question\n```"},{"role":"user","content":"What
is 5 + 3? Use the calculate_sum tool."},{"role":"assistant","content":"```\nThought:
I need to use the calculate_sum tool to find the sum of 5 and 3\nAction: calculate_sum\nAction
Input: {\"a\":5,\"b\":3}\n```\nObservation: 8"}],"model":"gpt-4.1-mini"}'
headers:
User-Agent:
- X-USER-AGENT-XXX
@@ -112,7 +152,7 @@ interactions:
connection:
- keep-alive
content-length:
- '1298'
- '1544'
content-type:
- application/json
cookie:
@@ -141,7 +181,18 @@ interactions:
uri: https://api.openai.com/v1/chat/completions
response:
body:
string: "{\n \"id\": \"chatcmpl-CiksWrVbyJFurKCm7XPRU1b1pT7qF\",\n \"object\": \"chat.completion\",\n \"created\": 1764782668,\n \"model\": \"gpt-4.1-mini-2025-04-14\",\n \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\": \"assistant\",\n \"content\": \"```\\nThought: I now know the final answer\\nFinal Answer: 8\\n```\",\n \"refusal\": null,\n \"annotations\": []\n },\n \"logprobs\": null,\n \"finish_reason\": \"stop\"\n }\n ],\n \"usage\": {\n \"prompt_tokens\": 283,\n \"completion_tokens\": 18,\n \"total_tokens\": 301,\n \"prompt_tokens_details\": {\n \"cached_tokens\": 0,\n \"audio_tokens\": 0\n },\n \"completion_tokens_details\": {\n \"reasoning_tokens\": 0,\n \"audio_tokens\": 0,\n \"accepted_prediction_tokens\": 0,\n \"rejected_prediction_tokens\": 0\n }\n },\n \"service_tier\": \"default\",\n \"system_fingerprint\": \"fp_9766e549b2\"\n}\n"
string: "{\n \"id\": \"chatcmpl-D2gT0RU66XqjAUOXnGmokD1Q8Fman\",\n \"object\":
\"chat.completion\",\n \"created\": 1769532270,\n \"model\": \"gpt-4.1-mini-2025-04-14\",\n
\ \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\":
\"assistant\",\n \"content\": \"```\\nThought: I now know the final
answer\\nFinal Answer: 8\\n```\",\n \"refusal\": null,\n \"annotations\":
[]\n },\n \"logprobs\": null,\n \"finish_reason\": \"stop\"\n
\ }\n ],\n \"usage\": {\n \"prompt_tokens\": 345,\n \"completion_tokens\":
18,\n \"total_tokens\": 363,\n \"prompt_tokens_details\": {\n \"cached_tokens\":
0,\n \"audio_tokens\": 0\n },\n \"completion_tokens_details\":
{\n \"reasoning_tokens\": 0,\n \"audio_tokens\": 0,\n \"accepted_prediction_tokens\":
0,\n \"rejected_prediction_tokens\": 0\n }\n },\n \"service_tier\":
\"default\",\n \"system_fingerprint\": \"fp_376a7ccef1\"\n}\n"
headers:
CF-RAY:
- CF-RAY-XXX
@@ -150,7 +201,7 @@ interactions:
Content-Type:
- application/json
Date:
- Wed, 03 Dec 2025 17:24:29 GMT
- Tue, 27 Jan 2026 16:44:31 GMT
Server:
- cloudflare
Strict-Transport-Security:
@@ -168,208 +219,11 @@ interactions:
openai-organization:
- OPENAI-ORG-XXX
openai-processing-ms:
- '427'
- '606'
openai-project:
- OPENAI-PROJECT-XXX
openai-version:
- '2020-10-01'
x-envoy-upstream-service-time:
- '442'
x-openai-proxy-wasm:
- v0.1
x-ratelimit-limit-requests:
- X-RATELIMIT-LIMIT-REQUESTS-XXX
x-ratelimit-limit-tokens:
- X-RATELIMIT-LIMIT-TOKENS-XXX
x-ratelimit-remaining-requests:
- X-RATELIMIT-REMAINING-REQUESTS-XXX
x-ratelimit-remaining-tokens:
- X-RATELIMIT-REMAINING-TOKENS-XXX
x-ratelimit-reset-requests:
- X-RATELIMIT-RESET-REQUESTS-XXX
x-ratelimit-reset-tokens:
- X-RATELIMIT-RESET-TOKENS-XXX
x-request-id:
- X-REQUEST-ID-XXX
status:
code: 200
message: OK
- request:
body: '{"messages":[{"role":"system","content":"You are Calculator Assistant. You are a helpful calculator assistant\nYour personal goal is: Help with math calculations\n\nYou ONLY have access to the following tools, and should NEVER make up tools that are not listed here:\n\nTool Name: calculate_sum\nTool Arguments: {''a'': {''description'': None, ''type'': ''int''}, ''b'': {''description'': None, ''type'': ''int''}}\nTool Description: Add two numbers together.\n\nIMPORTANT: Use the following format in your response:\n\n```\nThought: you should always think about what to do\nAction: the action to take, only one name of [calculate_sum], just the name, exactly as it''s written.\nAction Input: the input to the action, just a simple JSON object, enclosed in curly braces, using \" to wrap keys and values.\nObservation: the result of the action\n```\n\nOnce all necessary information is gathered, return the following format:\n\n```\nThought: I now know the final answer\nFinal Answer: the final
answer to the original input question\n```"},{"role":"user","content":"What is 5 + 3? Use the calculate_sum tool."}],"model":"gpt-4.1-mini"}'
headers:
User-Agent:
- X-USER-AGENT-XXX
accept:
- application/json
accept-encoding:
- ACCEPT-ENCODING-XXX
authorization:
- AUTHORIZATION-XXX
connection:
- keep-alive
content-length:
- '1119'
content-type:
- application/json
host:
- api.openai.com
x-stainless-arch:
- X-STAINLESS-ARCH-XXX
x-stainless-async:
- 'false'
x-stainless-lang:
- python
x-stainless-os:
- X-STAINLESS-OS-XXX
x-stainless-package-version:
- 1.83.0
x-stainless-read-timeout:
- X-STAINLESS-READ-TIMEOUT-XXX
x-stainless-retry-count:
- '0'
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.13.3
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
body:
string: "{\n \"id\": \"chatcmpl-CimX8hwYiUUZijApUDk1yBMzTpBj9\",\n \"object\": \"chat.completion\",\n \"created\": 1764789030,\n \"model\": \"gpt-4.1-mini-2025-04-14\",\n \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\": \"assistant\",\n \"content\": \"```\\nThought: I need to add 5 and 3 using the calculate_sum tool.\\nAction: calculate_sum\\nAction Input: {\\\"a\\\":5,\\\"b\\\":3}\\n```\",\n \"refusal\": null,\n \"annotations\": []\n },\n \"logprobs\": null,\n \"finish_reason\": \"stop\"\n }\n ],\n \"usage\": {\n \"prompt_tokens\": 234,\n \"completion_tokens\": 37,\n \"total_tokens\": 271,\n \"prompt_tokens_details\": {\n \"cached_tokens\": 0,\n \"audio_tokens\": 0\n },\n \"completion_tokens_details\": {\n \"reasoning_tokens\": 0,\n \"audio_tokens\": 0,\n \"accepted_prediction_tokens\": 0,\n \"rejected_prediction_tokens\": 0\n }\n },\n \"service_tier\"\
: \"default\",\n \"system_fingerprint\": \"fp_9766e549b2\"\n}\n"
headers:
CF-RAY:
- CF-RAY-XXX
Connection:
- keep-alive
Content-Type:
- application/json
Date:
- Wed, 03 Dec 2025 19:10:33 GMT
Server:
- cloudflare
Set-Cookie:
- SET-COOKIE-XXX
Strict-Transport-Security:
- STS-XXX
Transfer-Encoding:
- chunked
X-Content-Type-Options:
- X-CONTENT-TYPE-XXX
access-control-expose-headers:
- ACCESS-CONTROL-XXX
alt-svc:
- h3=":443"; ma=86400
cf-cache-status:
- DYNAMIC
openai-organization:
- OPENAI-ORG-XXX
openai-processing-ms:
- '2329'
openai-project:
- OPENAI-PROJECT-XXX
openai-version:
- '2020-10-01'
x-envoy-upstream-service-time:
- '2349'
x-openai-proxy-wasm:
- v0.1
x-ratelimit-limit-requests:
- X-RATELIMIT-LIMIT-REQUESTS-XXX
x-ratelimit-limit-tokens:
- X-RATELIMIT-LIMIT-TOKENS-XXX
x-ratelimit-remaining-requests:
- X-RATELIMIT-REMAINING-REQUESTS-XXX
x-ratelimit-remaining-tokens:
- X-RATELIMIT-REMAINING-TOKENS-XXX
x-ratelimit-reset-requests:
- X-RATELIMIT-RESET-REQUESTS-XXX
x-ratelimit-reset-tokens:
- X-RATELIMIT-RESET-TOKENS-XXX
x-request-id:
- X-REQUEST-ID-XXX
status:
code: 200
message: OK
- request:
body: '{"messages":[{"role":"system","content":"You are Calculator Assistant. You are a helpful calculator assistant\nYour personal goal is: Help with math calculations\n\nYou ONLY have access to the following tools, and should NEVER make up tools that are not listed here:\n\nTool Name: calculate_sum\nTool Arguments: {''a'': {''description'': None, ''type'': ''int''}, ''b'': {''description'': None, ''type'': ''int''}}\nTool Description: Add two numbers together.\n\nIMPORTANT: Use the following format in your response:\n\n```\nThought: you should always think about what to do\nAction: the action to take, only one name of [calculate_sum], just the name, exactly as it''s written.\nAction Input: the input to the action, just a simple JSON object, enclosed in curly braces, using \" to wrap keys and values.\nObservation: the result of the action\n```\n\nOnce all necessary information is gathered, return the following format:\n\n```\nThought: I now know the final answer\nFinal Answer: the final
answer to the original input question\n```"},{"role":"user","content":"What is 5 + 3? Use the calculate_sum tool."},{"role":"assistant","content":"```\nThought: I need to add 5 and 3 using the calculate_sum tool.\nAction: calculate_sum\nAction Input: {\"a\":5,\"b\":3}\n```\nObservation: 8"}],"model":"gpt-4.1-mini"}'
headers:
User-Agent:
- X-USER-AGENT-XXX
accept:
- application/json
accept-encoding:
- ACCEPT-ENCODING-XXX
authorization:
- AUTHORIZATION-XXX
connection:
- keep-alive
content-length:
- '1295'
content-type:
- application/json
cookie:
- COOKIE-XXX
host:
- api.openai.com
x-stainless-arch:
- X-STAINLESS-ARCH-XXX
x-stainless-async:
- 'false'
x-stainless-lang:
- python
x-stainless-os:
- X-STAINLESS-OS-XXX
x-stainless-package-version:
- 1.83.0
x-stainless-read-timeout:
- X-STAINLESS-READ-TIMEOUT-XXX
x-stainless-retry-count:
- '0'
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.13.3
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
body:
string: "{\n \"id\": \"chatcmpl-CimXBrY5sdbr2pJnqGlazPTra4dor\",\n \"object\": \"chat.completion\",\n \"created\": 1764789033,\n \"model\": \"gpt-4.1-mini-2025-04-14\",\n \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\": \"assistant\",\n \"content\": \"```\\nThought: I now know the final answer\\nFinal Answer: 8\\n```\",\n \"refusal\": null,\n \"annotations\": []\n },\n \"logprobs\": null,\n \"finish_reason\": \"stop\"\n }\n ],\n \"usage\": {\n \"prompt_tokens\": 280,\n \"completion_tokens\": 18,\n \"total_tokens\": 298,\n \"prompt_tokens_details\": {\n \"cached_tokens\": 0,\n \"audio_tokens\": 0\n },\n \"completion_tokens_details\": {\n \"reasoning_tokens\": 0,\n \"audio_tokens\": 0,\n \"accepted_prediction_tokens\": 0,\n \"rejected_prediction_tokens\": 0\n }\n },\n \"service_tier\": \"default\",\n \"system_fingerprint\": \"fp_9766e549b2\"\n}\n"
headers:
CF-RAY:
- CF-RAY-XXX
Connection:
- keep-alive
Content-Type:
- application/json
Date:
- Wed, 03 Dec 2025 19:10:35 GMT
Server:
- cloudflare
Strict-Transport-Security:
- STS-XXX
Transfer-Encoding:
- chunked
X-Content-Type-Options:
- X-CONTENT-TYPE-XXX
access-control-expose-headers:
- ACCESS-CONTROL-XXX
alt-svc:
- h3=":443"; ma=86400
cf-cache-status:
- DYNAMIC
openai-organization:
- OPENAI-ORG-XXX
openai-processing-ms:
- '1647'
openai-project:
- OPENAI-PROJECT-XXX
openai-version:
- '2020-10-01'
x-envoy-upstream-service-time:
- '1694'
x-openai-proxy-wasm:
- v0.1
x-ratelimit-limit-requests:

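The updated request bodies in this cassette swap the old repr-style Tool Arguments dump for the tool's full JSON schema — note the new properties/required/additionalProperties block and the larger content-length. A sketch of producing that schema from a Pydantic args model; the class name and config are illustrative, since the framework's actual serialization helpers are not shown in this diff:

import json

from pydantic import BaseModel, ConfigDict


class Calculate_Sum(BaseModel):
    """Illustrative args model; named to match the title in the cassette."""

    model_config = ConfigDict(extra="forbid")  # emits "additionalProperties": false

    a: int
    b: int


# model_json_schema() yields the properties/required/title/type structure
# embedded verbatim in the updated "Tool Arguments" prompt text above.
print(json.dumps(Calculate_Sum.model_json_schema(), indent=2))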
View File

@@ -590,3 +590,233 @@ class TestToolHooksIntegration:
        # Clean up hooks
        unregister_before_tool_call_hook(before_tool_call_hook)
        unregister_after_tool_call_hook(after_tool_call_hook)


class TestNativeToolCallingHooksIntegration:
    """Integration tests for hooks with native function calling (Agent and Crew)."""

    @pytest.mark.vcr()
    def test_agent_native_tool_hooks_before_and_after(self):
        """Test that Agent with native tool calling executes before/after hooks."""
        import os

        from crewai import Agent
        from crewai.tools import tool

        hook_calls = {"before": [], "after": []}

        @tool("multiply_numbers")
        def multiply_numbers(a: int, b: int) -> int:
            """Multiply two numbers together."""
            return a * b

        def before_hook(context: ToolCallHookContext) -> bool | None:
            hook_calls["before"].append({
                "tool_name": context.tool_name,
                "tool_input": dict(context.tool_input),
                "has_agent": context.agent is not None,
            })
            return None

        def after_hook(context: ToolCallHookContext) -> str | None:
            hook_calls["after"].append({
                "tool_name": context.tool_name,
                "tool_result": context.tool_result,
                "has_agent": context.agent is not None,
            })
            return None

        register_before_tool_call_hook(before_hook)
        register_after_tool_call_hook(after_hook)

        try:
            agent = Agent(
                role="Calculator",
                goal="Perform calculations",
                backstory="You are a calculator assistant",
                tools=[multiply_numbers],
                verbose=True,
            )
            agent.kickoff(
                messages="What is 7 times 6? Use the multiply_numbers tool."
            )

            # Verify before hook was called
            assert len(hook_calls["before"]) > 0, "Before hook was never called"
            before_call = hook_calls["before"][0]
            assert before_call["tool_name"] == "multiply_numbers"
            assert "a" in before_call["tool_input"]
            assert "b" in before_call["tool_input"]
            assert before_call["has_agent"] is True

            # Verify after hook was called
            assert len(hook_calls["after"]) > 0, "After hook was never called"
            after_call = hook_calls["after"][0]
            assert after_call["tool_name"] == "multiply_numbers"
            assert "42" in str(after_call["tool_result"])
            assert after_call["has_agent"] is True
        finally:
            unregister_before_tool_call_hook(before_hook)
            unregister_after_tool_call_hook(after_hook)

    @pytest.mark.vcr()
    def test_crew_native_tool_hooks_before_and_after(self):
        """Test that Crew with Agent executes before/after hooks with full context."""
        import os

        from crewai import Agent, Crew, Task
        from crewai.tools import tool

        hook_calls = {"before": [], "after": []}

        @tool("divide_numbers")
        def divide_numbers(a: int, b: int) -> float:
            """Divide first number by second number."""
            return a / b

        def before_hook(context: ToolCallHookContext) -> bool | None:
            hook_calls["before"].append({
                "tool_name": context.tool_name,
                "tool_input": dict(context.tool_input),
                "has_agent": context.agent is not None,
                "has_task": context.task is not None,
                "has_crew": context.crew is not None,
                "agent_role": context.agent.role if context.agent else None,
            })
            return None

        def after_hook(context: ToolCallHookContext) -> str | None:
            hook_calls["after"].append({
                "tool_name": context.tool_name,
                "tool_result": context.tool_result,
                "has_agent": context.agent is not None,
                "has_task": context.task is not None,
                "has_crew": context.crew is not None,
            })
            return None

        register_before_tool_call_hook(before_hook)
        register_after_tool_call_hook(after_hook)

        try:
            agent = Agent(
                role="Math Assistant",
                goal="Perform division calculations accurately",
                backstory="You are a math assistant that helps with division",
                tools=[divide_numbers],
                verbose=True,
            )
            task = Task(
                description="Calculate 100 divided by 4 using the divide_numbers tool.",
                expected_output="The result of the division",
                agent=agent,
            )
            crew = Crew(
                agents=[agent],
                tasks=[task],
                verbose=True,
            )
            crew.kickoff()

            # Verify before hook was called with full context
            assert len(hook_calls["before"]) > 0, "Before hook was never called"
            before_call = hook_calls["before"][0]
            assert before_call["tool_name"] == "divide_numbers"
            assert "a" in before_call["tool_input"]
            assert "b" in before_call["tool_input"]
            assert before_call["has_agent"] is True
            assert before_call["has_task"] is True
            assert before_call["has_crew"] is True
            assert before_call["agent_role"] == "Math Assistant"

            # Verify after hook was called with full context
            assert len(hook_calls["after"]) > 0, "After hook was never called"
            after_call = hook_calls["after"][0]
            assert after_call["tool_name"] == "divide_numbers"
            assert "25" in str(after_call["tool_result"])
            assert after_call["has_agent"] is True
            assert after_call["has_task"] is True
            assert after_call["has_crew"] is True
        finally:
            unregister_before_tool_call_hook(before_hook)
            unregister_after_tool_call_hook(after_hook)

    @pytest.mark.vcr()
    def test_before_hook_blocks_tool_execution_in_crew(self):
        """Test that returning False from before hook blocks tool execution."""
        import os

        from crewai import Agent, Crew, Task
        from crewai.tools import tool

        hook_calls = {"before": [], "after": [], "tool_executed": False}

        @tool("dangerous_operation")
        def dangerous_operation(action: str) -> str:
            """Perform a dangerous operation that should be blocked."""
            hook_calls["tool_executed"] = True
            return f"Executed: {action}"

        def blocking_before_hook(context: ToolCallHookContext) -> bool | None:
            hook_calls["before"].append({
                "tool_name": context.tool_name,
                "tool_input": dict(context.tool_input),
            })
            # Block all calls to dangerous_operation
            if context.tool_name == "dangerous_operation":
                return False
            return None

        def after_hook(context: ToolCallHookContext) -> str | None:
            hook_calls["after"].append({
                "tool_name": context.tool_name,
                "tool_result": context.tool_result,
            })
            return None

        register_before_tool_call_hook(blocking_before_hook)
        register_after_tool_call_hook(after_hook)

        try:
            agent = Agent(
                role="Test Agent",
                goal="Try to use the dangerous operation tool",
                backstory="You are a test agent",
                tools=[dangerous_operation],
                verbose=True,
            )
            task = Task(
                description="Use the dangerous_operation tool with action 'delete_all'.",
                expected_output="The result of the operation",
                agent=agent,
            )
            crew = Crew(
                agents=[agent],
                tasks=[task],
                verbose=True,
            )
            crew.kickoff()

            # Verify before hook was called
            assert len(hook_calls["before"]) > 0, "Before hook was never called"
            before_call = hook_calls["before"][0]
            assert before_call["tool_name"] == "dangerous_operation"

            # Verify the actual tool function was NOT executed
            assert hook_calls["tool_executed"] is False, "Tool should have been blocked"

            # Verify after hook was still called (with blocked message)
            assert len(hook_calls["after"]) > 0, "After hook was never called"
            after_call = hook_calls["after"][0]
            assert "blocked" in after_call["tool_result"].lower()
        finally:
            unregister_before_tool_call_hook(blocking_before_hook)
            unregister_after_tool_call_hook(after_hook)
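
Read together with the cassettes above, these tests pin down the hook contract: returning False from a before-hook skips the tool and feeds the model a "Tool execution blocked by hook" observation, while a string returned from an after-hook replaces the tool result. A standalone sketch of the same registration pattern follows; the crewai.hooks import path is an assumption, since the diff uses these names unqualified, and the tool and agent are made up for illustration:

from crewai import Agent
from crewai.hooks import (  # assumed import path; not shown in this diff
    ToolCallHookContext,
    register_after_tool_call_hook,
    register_before_tool_call_hook,
    unregister_after_tool_call_hook,
    unregister_before_tool_call_hook,
)
from crewai.tools import tool


@tool("delete_files")
def delete_files(path: str) -> str:
    """Delete files under the given path."""
    return f"Deleted: {path}"


def block_deletes(context: ToolCallHookContext) -> bool | None:
    # False skips execution entirely; per the cassette above, the model then
    # sees "Tool execution blocked by hook. Tool: <name>" as the observation.
    return False if context.tool_name == "delete_files" else None


def log_result(context: ToolCallHookContext) -> str | None:
    print(f"{context.tool_name} -> {context.tool_result}")
    return None  # returning a string here would replace the tool result


register_before_tool_call_hook(block_deletes)
register_after_tool_call_hook(log_result)
try:
    agent = Agent(
        role="Janitor",
        goal="Clean up files",
        backstory="A test agent",
        tools=[delete_files],
    )
    agent.kickoff(messages="Delete everything under /tmp/scratch.")
finally:
    # Hooks are registered globally, so unregister in a finally block,
    # exactly as the tests above do.
    unregister_before_tool_call_hook(block_deletes)
    unregister_after_tool_call_hook(log_result)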

View File

@@ -2585,6 +2585,7 @@ def test_warning_long_term_memory_without_entity_memory():
goal="You research about math.",
backstory="You're an expert in research and you love to learn new things.",
allow_delegation=False,
verbose=True,
)
task1 = Task(

View File

@@ -1,3 +1,3 @@
"""CrewAI development tools."""
__version__ = "1.9.0"
__version__ = "1.9.1"