Mirror of https://github.com/crewAIInc/crewAI.git (synced 2026-01-08 23:58:34 +00:00)
* feat: add OpenAI agent adapter implementation - Introduced OpenAIAgentAdapter class to facilitate interaction with OpenAI Assistants. - Implemented methods for task execution, tool configuration, and response processing. - Added support for converting CrewAI tools to OpenAI format and handling delegation tools.
* created an adapter for the delegate and ask_question tools
* delegate and ask_questions work and it delegates to crewai agents
* refactor: introduce OpenAIAgentToolAdapter for tool management - Created OpenAIAgentToolAdapter class to encapsulate tool configuration and conversion for OpenAI Assistant. - Removed tool configuration logic from OpenAIAgentAdapter and integrated it into the new adapter. - Enhanced the tool conversion process to ensure compatibility with OpenAI's requirements.
* feat: implement BaseAgentAdapter for agent integration - Introduced BaseAgentAdapter as an abstract base class for agent adapters in CrewAI. - Defined common interface and methods for configuring tools and structured output. - Updated OpenAIAgentAdapter to inherit from BaseAgentAdapter, enhancing its structure and functionality.
* feat: add LangGraph agent and tool adapter for CrewAI integration - Introduced LangGraphAgentAdapter to facilitate interaction with LangGraph agents. - Implemented methods for task execution, context handling, and tool configuration. - Created LangGraphToolAdapter to convert CrewAI tools into LangGraph-compatible format. - Enhanced error handling and logging for task execution and streaming processes.
* feat: enhance LangGraphToolAdapter and improve conversion instructions - Added type hints for better clarity and type checking in LangGraphToolAdapter. - Updated conversion instructions to ensure compatibility with optional LLM checks.
* feat: integrate structured output handling in LangGraph and OpenAI agents - Added LangGraphConverterAdapter for managing structured output in LangGraph agents. - Enhanced LangGraphAgentAdapter to utilize the new converter for system prompt and task execution. - Updated LangGraphToolAdapter to use StructuredTool for better compatibility. - Introduced OpenAIConverterAdapter for structured output management in OpenAI agents. - Improved task execution flow in OpenAIAgentAdapter to incorporate structured output configuration and post-processing.
* feat: implement BaseToolAdapter for tool integration - Introduced BaseToolAdapter as an abstract base class for tool adapters in CrewAI. - Updated LangGraphToolAdapter and OpenAIAgentToolAdapter to inherit from BaseToolAdapter, enhancing their structure and functionality. - Improved tool configuration methods to support better integration with various frameworks. - Added type hints and documentation for clarity and maintainability.
* feat: enhance OpenAIAgentAdapter with configurable agent properties - Refactored OpenAIAgentAdapter to accept agent configuration as an argument. - Introduced a method to build a system prompt for the OpenAI agent, improving task execution context. - Updated initialization to utilize role, goal, and backstory from kwargs, enhancing flexibility in agent setup. - Improved tool handling and integration within the adapter.
* feat: enhance agent adapters with structured output support - Introduced BaseConverterAdapter as an abstract class for structured output handling. - Implemented LangGraphConverterAdapter and OpenAIConverterAdapter to manage structured output in their respective agents. - Updated BaseAgentAdapter to accept an agent configuration dictionary during initialization. - Enhanced LangGraphAgentAdapter to utilize the new converter and improved tool handling. - Added methods for configuring structured output and enhancing system prompts in converter adapters.
* refactor: remove _parse_tools method from OpenAIAgentAdapter and BaseAgent - Eliminated the _parse_tools method from OpenAIAgentAdapter and its abstract declaration in BaseAgent. - Cleaned up related test code in MockAgent to reflect the removal of the method.
* also removed _parse_tools here as not used
* feat: add dynamic import handling for LangGraph dependencies - Implemented conditional imports for LangGraph components to handle ImportError gracefully. - Updated LangGraphAgentAdapter initialization to check for LangGraph availability and raise an informative error if dependencies are missing. - Enhanced the agent adapter's robustness by ensuring it only initializes components when the required libraries are present.
* fix: improve error handling for agent adapters - Updated LangGraphAgentAdapter to raise an ImportError with a clear message if LangGraph dependencies are not installed. - Refactored OpenAIAgentAdapter to include a similar check for OpenAI dependencies, ensuring robust initialization and user guidance for missing libraries. - Enhanced overall error handling in agent adapters to prevent runtime issues when dependencies are unavailable.
* refactor: enhance tool handling in agent adapters - Updated BaseToolAdapter to initialize original and converted tools in the constructor. - Renamed method `all_tools` to `tools` for clarity in BaseToolAdapter. - Added `sanitize_tool_name` method to ensure tool names are API compatible. - Modified LangGraphAgentAdapter to utilize the updated tool handling and ensure proper tool configuration. - Refactored LangGraphToolAdapter to streamline tool conversion and ensure consistent naming conventions.
* feat: emit AgentExecutionCompletedEvent in agent adapters - Added emission of AgentExecutionCompletedEvent in both LangGraphAgentAdapter and OpenAIAgentAdapter to signal task completion. - Enhanced event handling to include agent, task, and output details for better tracking of execution results.
* docs: Enhance BaseConverterAdapter documentation - Added a detailed docstring to the BaseConverterAdapter class, outlining its purpose and the expected functionality for all converter adapters. - Updated the post_process_result method's docstring to specify the expected format of the result as a string.
* docs: Add comprehensive guide for bringing custom agents into CrewAI - Introduced a new documentation file detailing the process of integrating custom agents using the BaseAgentAdapter, BaseToolAdapter, and BaseConverter. - Included step-by-step instructions for creating custom adapters, configuring tools, and handling structured output. - Provided examples for implementing adapters for various frameworks, enhancing the usability of CrewAI for developers.
* feat: Introduce adapted_agent flag in BaseAgent and update BaseAgentAdapter initialization - Added an `adapted_agent` boolean field to the BaseAgent class to indicate if the agent is adapted. - Updated the BaseAgentAdapter's constructor to pass `adapted_agent=True` to the superclass, ensuring proper initialization of the new field.
* feat: Enhance LangGraphAgentAdapter to support optional agent configuration - Updated LangGraphAgentAdapter to conditionally apply agent configuration when creating the agent graph, allowing for more flexible initialization. - Modified LangGraphToolAdapter to ensure only instances of BaseTool are converted, improving tool compatibility and handling.
* feat: Introduce OpenAIConverterAdapter for structured output handling - Added OpenAIConverterAdapter to manage structured output conversion for OpenAI agents, enhancing their ability to process and format results. - Updated OpenAIAgentAdapter to utilize the new converter for configuring structured output and post-processing results. - Removed the deprecated get_output_converter method from OpenAIAgentAdapter. - Added unit tests for BaseAgentAdapter and BaseToolAdapter to ensure proper functionality and integration of new features.
* feat: Enhance tool adapters to support asynchronous execution - Updated LangGraphToolAdapter and OpenAIAgentToolAdapter to handle asynchronous tool execution by checking if the output is awaitable. - Introduced `inspect` import to facilitate the awaitability check. - Refactored tool wrapper functions to ensure proper handling of both synchronous and asynchronous tool results.
* fix: Correct method definition syntax and enhance tool adapter implementation - Updated the method definition for `configure_structured_output` to include the `def` keyword for clarity. - Added an asynchronous tool wrapper to ensure tools can operate in both synchronous and asynchronous contexts. - Modified the constructor of the custom converter adapter to directly assign the agent adapter, improving clarity and functionality.
* linted
* refactor: Improve tool processing logic in BaseAgent - Added a check to return an empty list if no tools are provided. - Simplified the tool attribute validation by using a list of required attributes. - Removed commented-out abstract method definition for clarity.
* refactor: Simplify tool handling in agent adapters - Changed default value of `tools` parameter in LangGraphAgentAdapter to None for better handling of empty tool lists. - Updated tool initialization in both LangGraphAgentAdapter and OpenAIAgentAdapter to directly pass the `tools` parameter, removing unnecessary list handling. - Cleaned up commented-out code in OpenAIConverterAdapter to improve readability.
* refactor: Remove unused stream_task method from LangGraphAgentAdapter - Deleted the `stream_task` method from LangGraphAgentAdapter to streamline the code and eliminate unnecessary complexity. - This change enhances maintainability by focusing on essential functionalities within the agent adapter.
296 lines
10 KiB
Python
import json
import re
from typing import Any, Optional, Type, Union, get_args, get_origin

from pydantic import BaseModel, ValidationError

from crewai.agents.agent_builder.utilities.base_output_converter import OutputConverter
from crewai.utilities.printer import Printer
from crewai.utilities.pydantic_schema_parser import PydanticSchemaParser

class ConverterError(Exception):
    """Error raised when Converter fails to parse the input."""

    def __init__(self, message: str, *args: object) -> None:
        super().__init__(message, *args)
        self.message = message

class Converter(OutputConverter):
    """Class that converts text into either pydantic or json."""

    def to_pydantic(self, current_attempt=1) -> BaseModel:
        """Convert text to pydantic."""
        try:
            if self.llm.supports_function_calling():
                result = self._create_instructor().to_pydantic()
            else:
                response = self.llm.call(
                    [
                        {"role": "system", "content": self.instructions},
                        {"role": "user", "content": self.text},
                    ]
                )
                try:
                    # Try to directly validate the response JSON
                    result = self.model.model_validate_json(response)
                except ValidationError:
                    # If direct validation fails, attempt to extract valid JSON
                    result = handle_partial_json(response, self.model, False, None)

                # Ensure result is a BaseModel instance
                if not isinstance(result, BaseModel):
                    if isinstance(result, dict):
                        result = self.model.parse_obj(result)
                    elif isinstance(result, str):
                        try:
                            parsed = json.loads(result)
                            result = self.model.parse_obj(parsed)
                        except Exception as parse_err:
                            raise ConverterError(
                                f"Failed to convert partial JSON result into Pydantic: {parse_err}"
                            )
                    else:
                        raise ConverterError(
                            "handle_partial_json returned an unexpected type."
                        )
            return result
        except ValidationError as e:
            if current_attempt < self.max_attempts:
                return self.to_pydantic(current_attempt + 1)
            raise ConverterError(
                f"Failed to convert text into a Pydantic model due to validation error: {e}"
            )
        except Exception as e:
            if current_attempt < self.max_attempts:
                return self.to_pydantic(current_attempt + 1)
            raise ConverterError(
                f"Failed to convert text into a Pydantic model due to error: {e}"
            )
    def to_json(self, current_attempt=1):
        """Convert text to json."""
        try:
            if self.llm.supports_function_calling():
                return self._create_instructor().to_json()
            else:
                return json.dumps(
                    self.llm.call(
                        [
                            {"role": "system", "content": self.instructions},
                            {"role": "user", "content": self.text},
                        ]
                    )
                )
        except Exception as e:
            if current_attempt < self.max_attempts:
                return self.to_json(current_attempt + 1)
            return ConverterError(f"Failed to convert text into JSON, error: {e}.")
    def _create_instructor(self):
        """Create an instructor."""
        from crewai.utilities import InternalInstructor

        inst = InternalInstructor(
            llm=self.llm,
            model=self.model,
            content=self.text,
        )
        return inst
    def _convert_with_instructions(self):
        """Convert the text using conversion instructions and a Pydantic output parser."""
        from crewai.utilities.crew_pydantic_output_parser import (
            CrewPydanticOutputParser,
        )

        parser = CrewPydanticOutputParser(pydantic_object=self.model)
        result = self.llm.call(
            [
                {"role": "system", "content": self.instructions},
                {"role": "user", "content": self.text},
            ]
        )
        return parser.parse_result(result)

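# Illustrative usage sketch (not part of this module). It assumes the fields the
# OutputConverter base class supplies (llm, text, model, instructions, max_attempts),
# plus a hypothetical `Person` model and an already-configured `llm` instance:
#
#     class Person(BaseModel):
#         name: str
#         age: int
#
#     converter = Converter(
#         llm=llm,
#         text="Ada Lovelace, 36 years old",
#         model=Person,
#         instructions=get_conversion_instructions(Person, llm),
#         max_attempts=3,
#     )
#     person = converter.to_pydantic()  # e.g. Person(name="Ada Lovelace", age=36)
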
def convert_to_model(
    result: str,
    output_pydantic: Optional[Type[BaseModel]],
    output_json: Optional[Type[BaseModel]],
    agent: Any,
    converter_cls: Optional[Type[Converter]] = None,
) -> Union[dict, BaseModel, str]:
    model = output_pydantic or output_json
    if model is None:
        return result
    try:
        escaped_result = json.dumps(json.loads(result, strict=False))
        return validate_model(escaped_result, model, bool(output_json))
    except json.JSONDecodeError:
        return handle_partial_json(
            result, model, bool(output_json), agent, converter_cls
        )

    except ValidationError:
        return handle_partial_json(
            result, model, bool(output_json), agent, converter_cls
        )

    except Exception as e:
        Printer().print(
            content=f"Unexpected error during model conversion: {type(e).__name__}: {e}. Returning original result.",
            color="red",
        )
        return result

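# Sketch of the fallback chain above (illustrative; `Person` and `agent` are assumed names):
#
#     convert_to_model('{"name": "Ada"}', Person, None, agent)  # valid JSON -> Person instance
#     convert_to_model('{"name": "Ada"}', None, Person, agent)  # output_json set -> plain dict
#     convert_to_model("name: Ada", Person, None, agent)        # not JSON -> handle_partial_json
#     convert_to_model("anything", None, None, agent)           # no model -> raw string unchanged
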
def validate_model(
    result: str, model: Type[BaseModel], is_json_output: bool
) -> Union[dict, BaseModel]:
    exported_result = model.model_validate_json(result)
    if is_json_output:
        return exported_result.model_dump()
    return exported_result

def handle_partial_json(
    result: str,
    model: Type[BaseModel],
    is_json_output: bool,
    agent: Any,
    converter_cls: Optional[Type[Converter]] = None,
) -> Union[dict, BaseModel, str]:
    match = re.search(r"({.*})", result, re.DOTALL)
    if match:
        try:
            exported_result = model.model_validate_json(match.group(0))
            if is_json_output:
                return exported_result.model_dump()
            return exported_result
        except json.JSONDecodeError:
            pass
        except ValidationError:
            pass
        except Exception as e:
            Printer().print(
                content=f"Unexpected error during partial JSON handling: {type(e).__name__}: {e}. Attempting alternative conversion method.",
                color="red",
            )

    return convert_with_instructions(
        result, model, is_json_output, agent, converter_cls
    )

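# Illustrative example of the extraction above: the DOTALL regex captures everything from
# the first "{" to the last "}", so a chatty LLM reply can still be validated (names assumed):
#
#     handle_partial_json('Sure! {"name": "Ada"} Hope that helps.', Person, False, agent)
#     # -> Person(name="Ada"); with is_json_output=True it returns {"name": "Ada"} instead.
#     # If nothing extractable validates, control falls through to convert_with_instructions().
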
def convert_with_instructions(
    result: str,
    model: Type[BaseModel],
    is_json_output: bool,
    agent: Any,
    converter_cls: Optional[Type[Converter]] = None,
) -> Union[dict, BaseModel, str]:
    llm = agent.function_calling_llm or agent.llm
    instructions = get_conversion_instructions(model, llm)
    converter = create_converter(
        agent=agent,
        converter_cls=converter_cls,
        llm=llm,
        text=result,
        model=model,
        instructions=instructions,
    )
    exported_result = (
        converter.to_pydantic() if not is_json_output else converter.to_json()
    )

    if isinstance(exported_result, ConverterError):
        Printer().print(
            content=f"{exported_result.message} Using raw output instead.",
            color="red",
        )
        return result

    return exported_result

def get_conversion_instructions(model: Type[BaseModel], llm: Any) -> str:
    instructions = "Please convert the following text into valid JSON."
    if llm and not isinstance(llm, str) and llm.supports_function_calling():
        model_schema = PydanticSchemaParser(model=model).get_schema()
        instructions += (
            f"\n\nOutput ONLY the valid JSON and nothing else.\n\n"
            f"The JSON must follow this schema exactly:\n```json\n{model_schema}\n```"
        )
    else:
        model_description = generate_model_description(model)
        instructions += (
            f"\n\nOutput ONLY the valid JSON and nothing else.\n\n"
            f"The JSON must follow this format exactly:\n{model_description}"
        )
    return instructions

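# Roughly what the generated instructions look like for an assumed two-field `Person`
# model when the LLM does not support function calling (illustrative):
#
#     Please convert the following text into valid JSON.
#
#     Output ONLY the valid JSON and nothing else.
#
#     The JSON must follow this format exactly:
#     {
#      "name": str,
#      "age": int
#     }
#
# With function-calling support, the compact description is replaced by the full schema
# produced by PydanticSchemaParser, wrapped in a fenced json block.
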
def create_converter(
    agent: Optional[Any] = None,
    converter_cls: Optional[Type[Converter]] = None,
    *args,
    **kwargs,
) -> Converter:
    if agent and not converter_cls:
        if hasattr(agent, "get_output_converter"):
            converter = agent.get_output_converter(*args, **kwargs)
        else:
            raise AttributeError("Agent does not have a 'get_output_converter' method")
    elif converter_cls:
        converter = converter_cls(*args, **kwargs)
    else:
        raise ValueError("Either agent or converter_cls must be provided")

    if not converter:
        raise Exception("No output converter found or set.")

    return converter

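# Illustrative call patterns (argument names assumed): with an agent, its own
# get_output_converter() picks the Converter implementation; otherwise pass converter_cls.
#
#     create_converter(agent=agent, llm=llm, text=raw, model=Person, instructions=instructions)
#     create_converter(converter_cls=Converter, llm=llm, text=raw, model=Person, instructions=instructions)
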
def generate_model_description(model: Type[BaseModel]) -> str:
    """
    Generate a string description of a Pydantic model's fields and their types.

    This function takes a Pydantic model class and returns a string that describes
    the model's fields and their respective types. The description includes handling
    of complex types such as `Optional`, `List`, and `Dict`, as well as nested Pydantic
    models.
    """

    def describe_field(field_type):
        origin = get_origin(field_type)
        args = get_args(field_type)

        if origin is Union or (origin is None and len(args) > 0):
            # Handle both Union and the new '|' syntax
            non_none_args = [arg for arg in args if arg is not type(None)]
            if len(non_none_args) == 1:
                return f"Optional[{describe_field(non_none_args[0])}]"
            else:
                return f"Optional[Union[{', '.join(describe_field(arg) for arg in non_none_args)}]]"
        elif origin is list:
            return f"List[{describe_field(args[0])}]"
        elif origin is dict:
            key_type = describe_field(args[0])
            value_type = describe_field(args[1])
            return f"Dict[{key_type}, {value_type}]"
        elif isinstance(field_type, type) and issubclass(field_type, BaseModel):
            return generate_model_description(field_type)
        elif hasattr(field_type, "__name__"):
            return field_type.__name__
        else:
            return str(field_type)

    fields = model.model_fields
    field_descriptions = [
        f'"{name}": {describe_field(field.annotation)}'
        for name, field in fields.items()
    ]
    return "{\n " + ",\n ".join(field_descriptions) + "\n}"

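# Minimal, self-contained sketch of generate_model_description (illustrative only; the
# sample model below is not part of crewAI). Running this module directly prints roughly:
#
#     {
#      "name": str,
#      "age": Optional[int],
#      "tags": List[str]
#     }
if __name__ == "__main__":

    class _SamplePerson(BaseModel):
        name: str
        age: Optional[int] = None
        tags: list[str] = []

    print(generate_model_description(_SamplePerson))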