* Adding support to force a tool's return value to be the final answer.
At the end of execution the crew returns the tool output directly,
i.e. the output of the last tool run with this flag set (see the sketch below).
* Update src/crewai/agent.py
Co-authored-by: Gui Vieira <guilherme_vieira@me.com>
* Update tests/agent_test.py
Co-authored-by: Gui Vieira <guilherme_vieira@me.com>
---------
Co-authored-by: Gui Vieira <guilherme_vieira@me.com>
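A hedged sketch of the feature described in the first entry above: a tool whose output is forced to be the crew's final answer. The flag name `result_as_answer` and the `crewai_tools.BaseTool` import are assumptions about how the feature surfaced; check the version you run.

```python
# Hypothetical sketch only: the flag name `result_as_answer` and the crewai_tools
# BaseTool import are assumptions about how this feature is exposed.
from crewai import Agent, Crew, Task
from crewai_tools import BaseTool


class FetchRawReport(BaseTool):
    name: str = "Fetch raw report"
    description: str = "Returns the raw report text, unedited."
    result_as_answer: bool = True  # assumed flag: tool output becomes the final answer

    def _run(self) -> str:
        return "Q3 revenue: 1.2M (raw export)"


analyst = Agent(
    role="Analyst",
    goal="Hand back the raw report",
    backstory="Never paraphrases source data.",
    tools=[FetchRawReport()],
)
report_task = Task(
    description="Fetch the quarterly report.",
    expected_output="The raw report text.",
    agent=analyst,
)
crew = Crew(agents=[analyst], tasks=[report_task])
# result = crew.kickoff()  # with the flag set, the result is the tool output verbatim
```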
* fixed bug where the manager overrode the task agent, then added pydantic validators for the sequential process when no agent is assigned to a task (illustrated in the sketch below)
* better test and fixed task.agent logic
* fixed tests and better validator message
* added validator for async_execution=True on tasks when running the hierarchical process
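A simplified illustration of the pydantic validation pattern these entries describe, shown only for the sequential "every task needs an agent" rule. `MiniTask` and `MiniCrew` are hypothetical stand-ins, not crewAI's actual classes, and the real error messages differ.

```python
# Simplified, self-contained illustration of the validator pattern; MiniTask/MiniCrew
# are hypothetical stand-ins, not crewAI classes.
from typing import List, Optional

from pydantic import BaseModel, model_validator


class MiniTask(BaseModel):
    description: str
    agent: Optional[str] = None  # stand-in for an Agent reference
    async_execution: bool = False


class MiniCrew(BaseModel):
    process: str  # "sequential" or "hierarchical"
    tasks: List[MiniTask]

    @model_validator(mode="after")
    def check_sequential_tasks_have_agents(self) -> "MiniCrew":
        # Mirrors the described rule: in a sequential run, a task without an agent
        # has nothing to execute it, so validation fails early with a clear message.
        if self.process == "sequential":
            missing = [t.description for t in self.tasks if t.agent is None]
            if missing:
                raise ValueError(f"Sequential process: tasks missing an agent: {missing}")
        return self
```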
* WIP. Figuring out disconnect issue.
* Cleaned up logs now that I've isolated the issue to the LLM
* more wip.
* WIP. It looks like usage metrics have always been broken for async
* Update the parent crew that manages the for_each loop (usage sketch below)
* Merge in main to bugfix/kickoff-for-each-usage-metrics
* Clean up code for review
* Add new tests
* Final cleanup. Ready for review.
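A hedged usage sketch for the kickoff_for_each metrics fix above; `kickoff_for_each` and `usage_metrics` are the names crewAI used at the time, but treat the exact output shapes as assumptions.

```python
# Sketch of the behavior the fix above targets: after running the same crew over
# several inputs (including async variants), the parent crew's usage_metrics should
# aggregate token usage across all iterations instead of only the last one.
from crewai import Agent, Crew, Task

writer = Agent(role="Writer", goal="Summarize {topic}", backstory="Concise technical writer.")
summary = Task(
    description="Summarize {topic} in one paragraph.",
    expected_output="A one-paragraph summary.",
    agent=writer,
)
crew = Crew(agents=[writer], tasks=[summary])

results = crew.kickoff_for_each(inputs=[{"topic": "LLM agents"}, {"topic": "vector stores"}])
print(crew.usage_metrics)  # aggregated across both iterations after the fix
```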
* Moving copy functionality from Agent to BaseAgent
* Fix renaming issue
* Fix linting errors
* use BaseAgent instead of Agent where applicable
* better spacing
* works with llama index
* works with custom LangChain agents; just need delegation to work (adapter sketch after this group)
* cleanup for custom_agent class
* works with different argument expectations for agent_executor
* cleanup for the hierarchical process, better agent_executor args handling, and added to the crew agent docs page
* removed code examples for langchain + llama index, added to docs instead
* added an output key when the return is not a str, and added some tests
* added hinting for CustomAgent class
* removed pass as it was not needed
* closer; just need to figure out AgentTools
* running agents (LlamaIndex and LangChain) with the base agent
* some cleanup on BaseAgent
* minimum needed for an agent to run from the base class, ensuring it works with the hierarchical process
* cleanup for original agent to take on BaseAgent class
* Agent takes on the LangChain agent, plus cleanup across the codebase
* token handling fixed so usage_metrics keeps working
* installed llama-index, updated docs and added better name
* fixed some type errors
* base agent holds token_process
* hierarchical process uses proper tools and no longer relies on hasattr for token_processes
* removal of test_custom_agent_executions
* this fixes copying agents
* leveraging an executor class to trigger the LlamaIndex agent
* llama index now has ask_human
* executor mixins added
* added output converter base class
* type listed
* cleanup for output conversions and TokenProcess; eliminated redundancy
* properly handling tokens
* simplified token calc handling
* original Agent set up with the base agent builder structure
* better docs
* no more llama-index dep
* cleaner docs
* test fixes
* poetry reverts and better docs
* base_agent_tools set for third party agents
* updated task and test fix
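The long run of entries above refactors Agent onto a shared BaseAgent so LangChain and LlamaIndex agents can plug into a crew. The sketch below is a conceptual illustration of that adapter shape only; `MiniBaseAgent` and `ThirdPartyAdapter` are hypothetical names, not crewAI's actual classes or import paths.

```python
# Conceptual illustration only: hypothetical stand-ins for the shared-interface idea,
# not crewAI's real BaseAgent API.
from abc import ABC, abstractmethod
from typing import Any, List, Optional


class MiniBaseAgent(ABC):
    """Shared surface the crew talks to: role metadata plus one execution entrypoint."""

    def __init__(self, role: str) -> None:
        self.role = role

    @abstractmethod
    def execute_task(
        self, description: str, context: Optional[str] = None, tools: Optional[List[Any]] = None
    ) -> str:
        ...


class ThirdPartyAdapter(MiniBaseAgent):
    """Wraps any backend exposing a chat()-style call, e.g. a LlamaIndex or LangChain agent."""

    def __init__(self, role: str, backend: Any) -> None:
        super().__init__(role)
        self.backend = backend

    def execute_task(self, description, context=None, tools=None) -> str:
        prompt = description if context is None else f"{description}\n\n{context}"
        return str(self.backend.chat(prompt))
```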
* feat: add training logic to agent and crew
* feat: add training logic to agent executor
* feat: add input parameter to cli command
* feat: add utilities for the training logic
* feat: polish code, logic and add private variables
* feat: add docstring and type hinting to executor
* feat: add constant file, add constant to code
* feat: fix name of training handler function
* feat: remove unused var
* feat: change file handler file name
* feat: add training handler file and class, and wire it into the code
* feat: fix name error from file
* fix: change import to adapt to logic
* feat: add training handler test
* feat: add tests for file and training_handler
* feat: add test for task evaluator function
* feat: change text to fit on screen
* feat: add test for train function
* feat: add test for agent training_handler function
* feat: add test for agent._use_trained_data
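A hedged sketch of the training entry points this group adds; the `train()` signature and the CLI flag shown in the comment are assumptions and may differ between versions.

```python
# Hypothetical sketch: replay the crew a few times, collect feedback, and store the
# trained data for later kickoffs. Signature details are assumptions.
from crewai import Agent, Crew, Task

researcher = Agent(role="Researcher", goal="Research {topic}", backstory="Thorough and methodical.")
report = Task(
    description="Write a short report on {topic}.",
    expected_output="A short report.",
    agent=researcher,
)
crew = Crew(agents=[researcher], tasks=[report])

crew.train(n_iterations=2, inputs={"topic": "LLM agents"})
# Roughly equivalent CLI entry point added in this group (flag name assumed): crewai train -n 2
```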
* removed hyphen in co-workers
* Fix issue with AgentTool agent selection: the LLM sometimes included double quotes in the agent name, which broke the string comparison (see the helper sketch below). Added additional types and cleaned up error messaging.
* Remove duplicate import
* Improve explanation
* Revert poetry.lock changes
* Fix missing line in poetry.lock
---------
Co-authored-by: madmag77 <goncharov.artemv@gmail.com>
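An illustrative helper for the AgentTool fix above, hypothetical rather than the merged code: the quoted role name is normalized before comparing it against the agent's role.

```python
# Hypothetical helper, not the exact code from the fix: strip quotes/whitespace and
# compare case-insensitively so '"Senior Researcher"' still matches the agent role.
def matches_role(requested: str, role: str) -> bool:
    normalized = requested.strip().strip('"').strip("'").casefold()
    return normalized == role.strip().casefold()


assert matches_role('"Senior Researcher"', "Senior Researcher")
assert not matches_role("Senior Writer", "Senior Researcher")
```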
* updates instructor to the latest version. adds jsonref, which instructor seems to depend on.
* updates embedchain reference, necessary for python 3.12
* added an extra parameter so kickoff returns the token usage count along with the result (see the sketch below)
* added output_token_usage to class and in full_output
* logger duplicated
* added more types
* added usage_metrics to full output instead
* added more to the description on full_output
* possible misspacing
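A hedged sketch of the full_output behavior this group describes; the `full_output` flag and the `usage_metrics` key are assumptions drawn from the commit wording and changed in later releases.

```python
# Hypothetical sketch: with full_output enabled, kickoff returns the crew's token
# usage alongside the result. Flag and key names are assumptions.
from crewai import Agent, Crew, Task

poet = Agent(role="Poet", goal="Write a haiku", backstory="Minimalist.")
haiku = Task(description="Write a haiku about autumn.", expected_output="A haiku.", agent=poet)
crew = Crew(agents=[poet], tasks=[haiku], full_output=True)

output = crew.kickoff()
# e.g. output["usage_metrics"] would carry the token counts described above
```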
* fix: add 'from datetime import datetime' to logging so the timestamp prints
* fix: correct default model (gpt-4o), correct token counts, and correct TaskOutput attributes (added agent)
* test: verify Task callback data is an instance of TaskOutput
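A hedged sketch matching the callback test above: the Task callback receives a TaskOutput, which after this change also records the executing agent. The import path is an assumption about where TaskOutput lives.

```python
# Sketch: the Task callback gets a TaskOutput instance; the `agent` attribute is the
# one this change adds. Import path assumed to be crewai.tasks.task_output.
from crewai import Agent, Task
from crewai.tasks.task_output import TaskOutput


def on_task_done(output: TaskOutput) -> None:
    print(f"{output.agent} finished: {output.description}")


reviewer = Agent(role="Reviewer", goal="Review copy", backstory="Detail oriented.")
review = Task(
    description="Review the landing page copy.",
    expected_output="A list of edits.",
    agent=reviewer,
    callback=on_task_done,
)
```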
* mentioning ollama on the docs as embedder
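A hedged sketch of the Ollama embedder configuration the docs entry above refers to; the provider/config keys and the model name are assumptions following the embedchain-style shape crewAI used for memory embedders at the time.

```python
# Hypothetical embedder config; keys and model name are assumptions.
ollama_embedder = {
    "provider": "ollama",
    "config": {"model": "nomic-embed-text"},
}
# Passed to a crew roughly as: Crew(..., memory=True, embedder=ollama_embedder)
```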
* lowering the barrier to match a tool with a similar name
* Fixing agent tools to support co_worker (see the helper sketch below)
* Adding new tests
* Fixing type
* updating tests
* fixing conflict
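An illustrative helper for the co_worker fix above, hypothetical rather than the merged code: delegation arguments may arrive under either "coworker" or "co_worker", and the role name is matched loosely.

```python
# Hypothetical helper: accept either key spelling and match the role case-insensitively.
from typing import Dict, List, Optional


def resolve_coworker(args: Dict[str, str], roles: List[str]) -> Optional[str]:
    requested = (args.get("coworker") or args.get("co_worker") or "").strip().casefold()
    for role in roles:
        if role.strip().casefold() == requested:
            return role
    return None


assert resolve_coworker({"co_worker": "senior writer"}, ["Senior Writer"]) == "Senior Writer"
```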
* fix: fix test actually running
* fix: fix test to not send request to openai
* fix: fix linting to remove cli files
* fix: exclude only files that break black
* fix: fix all Ruff checks in the code and fix a test with a repeated name
* fix: Change linter name on yml file
* feat: update pre-commit
* feat: remove need for isort on the code
* feat: remove black linter
* feat: update tests yml to try to fix the tests gh action
* log_file
feature: added a new Crew parameter that writes agent execution to a txt log file (see the sketch below)
* unit tests and documentation
the unit test checks that the file is created, but not what is inside it
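A hedged sketch of the execution-log feature described above; the parameter name `output_log_file` is an assumption about how it was exposed and may differ from the one merged here.

```python
# Hypothetical sketch: a Crew parameter (name assumed) that appends agent execution
# steps to a plain-text log file.
from crewai import Agent, Crew, Task

writer = Agent(role="Writer", goal="Draft release notes", backstory="Keeps careful records.")
notes = Task(description="Draft the release notes.", expected_output="Release notes.", agent=writer)

crew = Crew(
    agents=[writer],
    tasks=[notes],
    output_log_file="crew_execution.txt",  # execution steps land here
)
```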