Revised to use Ollama from langchain.llms instead, since the previous approach simply doesn't work when delegating.
Co-authored-by: João Moura <joaomdmoura@gmail.com>
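A minimal sketch of the revision, assuming a crewAI `Agent` that accepts a custom `llm` (the model name is illustrative):

```python
from langchain.llms import Ollama
from crewai import Agent

# The LLM wrapper from langchain.llms; the previously used approach
# broke when the agent delegated work.
llm = Ollama(model="openhermes")  # model name is illustrative

researcher = Agent(
    role="Researcher",
    goal="Research the topic",
    backstory="An experienced researcher.",
    llm=llm,  # delegation works with this wrapper
)
```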
Fixed an error in some cases with a Pandas DataFrame:
ValueError: The truth value of a DataFrame is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all().
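The error comes from truth-testing a DataFrame directly; a minimal sketch of the pattern behind the fix:

```python
import pandas as pd

df = pd.DataFrame({"a": [1, 2]})

# `if df:` raises the ValueError above, because a DataFrame has no
# single truth value. Explicit checks avoid the ambiguity:
if df is not None and not df.empty:
    print(df.shape)
```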
* better spacing
* works with llama index
* works on langchain custom agent; just needs delegation to work
* cleanup for custom_agent class
* works with different argument expectations for agent_executor
* cleanup for hierarchical process, better agent_executor args handler, and added to the crew agent doc page
* removed code examples for langchain + llama index, added to docs instead
* added key output if return is not a str, and added some tests
* added hinting for CustomAgent class
* removed pass as it was not needed
* closer; just need to figure out AgentTools
* running agents: LlamaIndex and LangChain with the base agent
* some cleanup on BaseAgent
* minimum for agent to run for base class and ensure it works with hierarchical process
* cleanup for original agent to take on BaseAgent class
* Agent takes on LangchainAgent; cleanup across the codebase
* token handling fixed so usage_metrics continues to work
* installed llama-index, updated docs and added better name
* fixed some type errors
* base agent holds token_process
* hierarchical process uses proper tools and no longer relies on hasattr for token_process
* removal of test_custom_agent_executions
* this fixes copying agents
* leveraging an executor class to trigger the LlamaIndex agent
* llama index now has ask_human
* executor mixins added
* added output converter base class
* type listed
* cleanup for output conversions and TokenProcess; eliminated redundancy
* properly handling tokens
* simplified token calc handling
* original agent set up with the base agent builder structure
* better docs
* no more llama-index dep
* cleaner docs
* test fixes
* poetry reverts and better docs
* base_agent_tools set for third-party agents (a hedged adapter sketch follows this group)
* updated task and test fix
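A rough sketch of the adapter shape these commits describe; the import path, fields, and method signature here are assumptions for illustration, not the exact crewAI API:

```python
from typing import Any, List, Optional

# Import path is an assumption for illustration.
from crewai.agents.agent_builder.base_agent import BaseAgent


class LlamaIndexAgentAdapter(BaseAgent):
    """Hypothetical adapter wrapping a LlamaIndex agent."""

    llama_agent: Any = None  # the wrapped third-party agent, assumed injected

    def execute_task(
        self,
        task: Any,
        context: Optional[str] = None,
        tools: Optional[List[Any]] = None,
    ) -> str:
        # Hand the task prompt to the wrapped agent and return its answer.
        prompt = task.description if context is None else f"{task.description}\n{context}"
        return str(self.llama_agent.chat(prompt))
```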
* feat: add CodeInterpreterTool to run when code execution is enabled on the agent
* feat: change to allow_code_execution
* feat: add readme for CodeInterpreterTool
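A minimal sketch of the flag these commits introduce, assuming it is exposed directly on `Agent`:

```python
from crewai import Agent

coder = Agent(
    role="Senior Python Developer",
    goal="Write and run code to solve the task",
    backstory="A developer comfortable executing code.",
    allow_code_execution=True,  # attaches the CodeInterpreterTool
)
```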
* feat: add training logic to agent and crew
* feat: add training logic to agent executor
* feat: add input parameter to cli command
* feat: add utilities for the training logic
* feat: polish code, logic and add private variables
* feat: add docstring and type hinting to executor
* feat: add constant file, add constant to code
* feat: fix name of training handler function
* feat: remove unused var
* feat: change file handler file name
* feat: Add training handler file and class, and change the code to use it
* feat: fix name error from file
* fix: change import to adapt to logic
* feat: add training handler test
* feat: add tests for file and training_handler
* feat: add test for task evaluator function
* feat: change text to fit on-screen
* feat: add test for train function
* feat: add test for agent training_handler function
* feat: add test for agent._use_trained_data
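A sketch of how the training flow above is driven; the `Crew.train` signature is an assumption based on these commit notes, with `n_iterations` mapping to the input parameter added to the CLI command:

```python
from crewai import Agent, Crew, Task

researcher = Agent(
    role="Researcher",
    goal="Research the topic",
    backstory="An experienced researcher.",
)
task = Task(
    description="Research the topic",
    expected_output="A short summary.",
    agent=researcher,
)

crew = Crew(agents=[researcher], tasks=[task])

# n_iterations maps to the CLI input parameter mentioned above;
# the signature is an assumption, not the exact API.
crew.train(n_iterations=3)
```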
* removed hyphen in co-workers
* Fix issue with AgentTool agent selection: the LLM included double quotes in the agent name, which broke the string comparison (a hedged sketch of the fix appears after this PR's notes). Added additional types and cleaned up error messaging.
* Remove duplicate import
* Improve explanation
* Revert poetry.lock changes
* Fix missing line in poetry.lock
---------
Co-authored-by: madmag77 <goncharov.artemv@gmail.com>
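A hedged sketch of the quote-stripping fix for AgentTool selection mentioned above (the helper name and stub class are hypothetical):

```python
class StubAgent:
    """Hypothetical stand-in for a crewAI agent."""

    def __init__(self, role: str) -> None:
        self.role = role


def sanitize_agent_name(name: str) -> str:
    # The LLM sometimes wraps the coworker name in double quotes,
    # which broke the exact string comparison; strip them first.
    return name.casefold().strip().strip('"').strip()


agents = [StubAgent("Researcher"), StubAgent("Writer")]
by_role = {sanitize_agent_name(a.role): a for a in agents}
assert by_role[sanitize_agent_name('"Researcher"')] is agents[0]
```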
* added extra parameter for kickoff to return token usage count after result
* added output_token_usage to class and in full_output
* logger duplicated
* added more types
* added usage_metrics to full output instead
* added more to the description on full_output
* possible misspacing
* updated kickoff return types to be either a string or, when full_output is set, a dict
* removed duplicates
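A sketch of the resulting kickoff behavior, assuming `full_output` is a `Crew` flag; the dict key names shown are assumptions:

```python
from crewai import Agent, Crew, Task

agent = Agent(
    role="Researcher",
    goal="Research the topic",
    backstory="An experienced researcher.",
)
task = Task(
    description="Research the topic",
    expected_output="A short summary.",
    agent=agent,
)

# full_output flips kickoff's return type from str to dict.
crew = Crew(agents=[agent], tasks=[task], full_output=True)
output = crew.kickoff()
print(output["final_output"])   # key names here are assumptions
print(output["usage_metrics"])  # token usage counts
```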
* updates instructor to the latest version. adds jsonref, which instructor seems to depend on.
* updates embedchain reference, necessary for python 3.12
* fix: add 'from datetime import datetime' to logging so it prints the timestamp
* fix: correct default model (gpt-4o), correct token counts, and correct TaskOutput attributes (added agent)
* test: verify Task callback data is an instance of TaskOutput
* Sync with deep copy working now
* async working!!
* Clean up code for review
* Fix naming
---------
Co-authored-by: João Moura <joaomdmoura@gmail.com>
* Added timestamp to logger
Updated the logger.py file to include timestamps when logging output. For example:
[2024-05-20 15:32:48][DEBUG]: == Working Agent: Researcher
[2024-05-20 15:32:48][INFO]: == Starting Task: Research the topic
[2024-05-20 15:33:22][DEBUG]: == [Researcher] Task output:
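A minimal sketch of the timestamped formatting, assuming the logger simply prefixes each message:

```python
from datetime import datetime


class Logger:
    def log(self, level: str, message: str) -> None:
        timestamp = datetime.now().strftime("%Y-%m-%d %H:%M:%S")
        print(f"[{timestamp}][{level.upper()}]: {message}")


Logger().log("debug", "== Working Agent: Researcher")
# [2024-05-20 15:32:48][DEBUG]: == Working Agent: Researcher
```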
* Update tool_usage.py
* Revert "Update tool_usage.py"
This reverts commit 95d18d5b6f.
Incorrect branch for this commit.
Changed default model value from gpt-4 to gpt-4o.
Reasoning: gpt-4 costs $30 per million tokens while gpt-4o costs $5, which makes gpt-4o the more cost-friendly default.