Lorenzejay/byoa (#776)

* better spacing

* works with llama index

* works on langchain custom, just need delegation to work

* cleanup for custom_agent class

* works with different argument expectations for agent_executor

* cleanup for hierarchical process, better agent_executor args handling, and added to the crew agent doc page

* removed code examples for langchain + llama index, added to docs instead

* added key output if return is not a str and added some tests

* added hinting for CustomAgent class

* removed pass as it was not needed

* closer, just need to figure out agentTools

* running agents - llamaindex and langchain with base agent (see the sketch below)

* some cleanup on baseAgent

* minimum needed for an agent to run on the base class, and ensured it works with the hierarchical process

* cleanup for original agent to take on BaseAgent class

* Agent takes on the langchain agent, plus cleanup across the codebase

* token handling working so usage_metrics continues to work

* installed llama-index, updated docs and added better name

* fixed some type errors

* base agent holds token_process

* hierarchical process uses proper tools and no longer relies on hasattr for token_processes

* removal of test_custom_agent_executions

* this fixes copying agents

* leveraging an executor class to trigger the llamaindex agent

* llama index now has ask_human

* executor mixins added

* added output converter base class

* type listed

* cleanup for output conversions and token process, eliminating redundancy

* properly handling tokens

* simplified token calc handling

* original agent set up with the base agent builder structure

* better docs

* no more llama-index dep

* cleaner docs

* test fixes

* poetry reverts and better docs

* base_agent_tools set for third party agents

* updated task and test fix
Lorenze Jay
2024-06-27 10:56:08 -07:00
committed by GitHub
parent da9cc5f097
commit 10997dd175
22 changed files with 637 additions and 407 deletions
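
Before the diff, a rough idea of what the bring-your-own-agent (BYOA) interface looks like from the outside. This is a hedged sketch only: `BaseAgent` and `get_output_converter` appear in the diff below, while `execute_task`, the `agent_executor` field, and the `invoke` call on a wrapped LangChain executor are assumptions about the adapter shape, not the PR's actual implementation.

```python
# Hypothetical adapter: wrap a third-party (LangChain-style) executor behind the
# new BaseAgent contract so a crew can schedule it like the built-in Agent.
from typing import Any, List, Optional

from crewai.agents.agent_builder.base_agent import BaseAgent


class LangChainAdapterAgent(BaseAgent):
    """Assumed-shape adapter; the real BaseAgent likely requires more abstract methods."""

    agent_executor: Any = None  # the wrapped LangChain AgentExecutor (assumed field)

    def execute_task(
        self,
        task: Any,
        context: Optional[str] = None,
        tools: Optional[List[Any]] = None,
    ) -> str:
        # Hand the task prompt to the wrapped executor and return plain text,
        # since Task stores a str result (see the diff below).
        prompt = task.description if context is None else f"{task.description}\n{context}"
        result = self.agent_executor.invoke({"input": prompt})
        return result["output"] if isinstance(result, dict) else str(result)
```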


@@ -9,10 +9,10 @@ from langchain_openai import ChatOpenAI
from pydantic import UUID4, BaseModel, Field, field_validator, model_validator
from pydantic_core import PydanticCustomError
from crewai.agent import Agent
from crewai.tasks.task_output import TaskOutput
-from crewai.utilities import I18N, Converter, ConverterError, Printer
+from crewai.utilities import I18N, ConverterError, Printer
from crewai.utilities.pydantic_schema_parser import PydanticSchemaParser
+from crewai.agents.agent_builder.base_agent import BaseAgent
class Task(BaseModel):
@@ -55,7 +55,7 @@ class Task(BaseModel):
callback: Optional[Any] = Field(
description="Callback to be executed after the task is completed.", default=None
)
-agent: Optional[Agent] = Field(
+agent: Optional[BaseAgent] = Field(
description="Agent responsible for execution the task.", default=None
)
context: Optional[List["Task"]] = Field(
@@ -147,7 +147,7 @@ class Task(BaseModel):
def execute( # type: ignore # Missing return statement
self,
-agent: Agent | None = None,
+agent: BaseAgent | None = None,
context: Optional[str] = None,
tools: Optional[List[Any]] = None,
) -> str:
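
With both the `agent` field and `execute()` widened from `Agent` to `BaseAgent`, a task can be wired to any conforming agent exactly as before. A hedged usage sketch (`my_llamaindex_agent` stands in for an adapter such as the one sketched earlier; field names follow the crewAI API as I understand it at this version):

```python
from crewai import Crew, Task

# my_llamaindex_agent is any BaseAgent subclass, e.g. the hypothetical adapter above.
research = Task(
    description="Summarize the latest LlamaIndex release notes.",
    expected_output="A short bullet list of notable changes.",
    agent=my_llamaindex_agent,
)

crew = Crew(agents=[my_llamaindex_agent], tasks=[research])
print(crew.kickoff())
```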
@@ -198,9 +198,9 @@ class Task(BaseModel):
context=context,
tools=tools,
)
exported_output = self._export_output(result)
+# type: the responses are usually str but need to figuire out a more elegant solution here
self.output = TaskOutput(
description=self.description,
exported_output=exported_output,
@@ -262,7 +262,7 @@ class Task(BaseModel):
[task.copy() for task in self.context] if self.context else None
)
cloned_agent = self.agent.copy() if self.agent else None
-cloned_tools = deepcopy(self.tools) if self.tools else None
+cloned_tools = deepcopy(self.tools) if self.tools else []
copied_task = Task(
**copied_data,
@@ -302,14 +302,13 @@ class Task(BaseModel):
pass
# type: ignore # Item "None" of "Agent | None" has no attribute "function_calling_llm"
-llm = self.agent.function_calling_llm or self.agent.llm
+llm = getattr(self.agent, "function_calling_llm", None) or self.agent.llm
if not self._is_gpt(llm):
# type: ignore # Argument "model" to "PydanticSchemaParser" has incompatible type "type[BaseModel] | None"; expected "type[BaseModel]"
model_schema = PydanticSchemaParser(model=model).get_schema()
instructions = f"{instructions}\n\nThe json should have the following structure, with the following keys:\n{model_schema}"
-converter = Converter(
+converter = self.agent.get_output_converter(
llm=llm, text=result, model=model, instructions=instructions
)
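
The last hunk is the hook that makes output conversion pluggable: instead of constructing `Converter` itself, `Task._export_output` now asks the agent for one. A hedged sketch of how an agent might satisfy the hook, reusing the keyword names from the call site above (the output converter base class the PR mentions is not shown here, so the stock `Converter` is used instead):

```python
from crewai.agents.agent_builder.base_agent import BaseAgent
from crewai.utilities import Converter


class MyThirdPartyAgent(BaseAgent):
    # ... other BaseAgent methods omitted ...

    def get_output_converter(self, llm, text, model, instructions):
        # The built-in Agent presumably returns the stock Converter; a wrapped
        # agent could return its own converter subclass to post-process text
        # before it is parsed into the task's pydantic/json output.
        return Converter(llm=llm, text=text, model=model, instructions=instructions)
```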