* patching for non-gpt model
* removal of json_object tool name assignment
* fixed issue for smaller models due to instructions prompt
* fixing for ollama llama3 models
* WIP: generate summary from split documents; could also take a MemGPT-style approach
* WIP: needs tests, but user-inputted summarization strategy implemented - handling context-window-exceeded errors (see the sketch after this list)
* rm extra line
* removed type ignores
* added tests
* handling 'n' answer to the summarize prompt
* code cleanup, using click for cli asker
* rm unused class
* better refactor
* reverted poetry lock
* reverted poetry.lock
* improved context window exceeding exception class
* feat: Add execution time to both task and testing feature
* feat: Remove unused functions
* feat: change test_crew to evaluate_crew to avoid issues with testing libs
* feat: fix tests
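For context, a rough sketch of what the summarization / context-window handling described in the WIP commits above could look like; the exception class name, function name, and click prompt wording are illustrative, not the actual implementation:

```python
import click


class LLMContextLengthExceededException(Exception):
    """Raised when a prompt no longer fits in the model's context window."""

    def __init__(self, original_error: str):
        super().__init__(f"Context window exceeded: {original_error}")


def maybe_summarize(prompt: str, summarize) -> str:
    # Ask the user (via click, per the cleanup commit) whether the oversized
    # prompt should be summarized and retried; answering 'n' raises instead.
    if click.confirm("The prompt exceeds the context window. Summarize and retry?"):
        return summarize(prompt)
    raise LLMContextLengthExceededException("user declined summarization")
```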
I think there is a mistake: there is no such parameter as force_output_result, and as the code shows, the correct parameter, result_as_answer, is set during agent creation, not on the task.
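For reference, a minimal sketch of the distinction being made, assuming result_as_answer is accepted by the tool constructor as the docs describe; the tool class and agent fields here are placeholders:

```python
from crewai import Agent
from crewai_tools import BaseTool


class LookupTool(BaseTool):  # hypothetical tool, for illustration only
    name: str = "Lookup"
    description: str = "Looks up a fact for a query."

    def _run(self, query: str) -> str:
        return f"result for {query}"


agent = Agent(
    role="Researcher",
    goal="Answer questions",
    backstory="A diligent researcher.",
    # result_as_answer travels with the tool passed at agent creation,
    # not with the Task:
    tools=[LookupTool(result_as_answer=True)],
)
```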
* Performed spell check across the entire documentation
Thank you once again!
* Performed spell check across most of the code base
Folders checked:
- agents
- cli
- memory
- project
- tasks
- telemetry
- tools
- translations
* Trying to add a max_token option for the agents, so they are limited by the number of tokens.
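If the option ships as a max_tokens field on Agent (an assumption; the commit only says "trying"), usage would look roughly like this:

```python
from crewai import Agent

writer = Agent(
    role="Writer",
    goal="Draft short summaries",
    backstory="A concise technical writer.",
    max_tokens=512,  # assumed parameter name: cap on tokens per completion
)
```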
* Performed spell check across the rest of the code base, and enhanced the YAML parser code a little
* Small change in the main agent doc
* Improve _save_file method to handle both dict and str inputs (see the sketch after this commit)
- Add check for dict type input
- Use json.dump for dict serialization
- Convert non-dict inputs to string
- Remove type ignore comments
---------
Co-authored-by: João Moura <joaomdmoura@gmail.com>
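A sketch of the _save_file behavior those bullets describe; the class and output_file attribute are illustrative scaffolding:

```python
import json


class TaskOutputMixin:
    output_file = "output.json"  # illustrative target path

    def _save_file(self, result) -> None:
        with open(self.output_file, "w", encoding="utf-8") as f:
            if isinstance(result, dict):
                # Serialize dicts as JSON instead of writing their repr.
                json.dump(result, f, indent=2)
            else:
                # Coerce everything else to a string before writing.
                f.write(str(result))
```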
* Fixed special character issue when converting json to models. Added numerous tests to ensure things work properly.
* Fix linting error and cleaned up tests
* Fix customer_converter_cls test failure
* Fixed tests. Thank you, lorenze, for pointing that out. Added a few more tests to ensure converter creation works properly
* Address lorenze feedback
* Fix linting issues
* patching for non-gpt model
* removal of json_object tool name assignment
* fixed issue for smaller models due to instructions prompt
* fixing for ollama llama3 models
* closing brackets
* removed unused code and applied fixes
* WIP: hierarchical unblock for async tasks
* added better test
* update name change
* added more test and crew manager cleanup
* remove prints
* code cleanup, no need to pass manager
* feat: add ability to set LLM for AgentPlanner on Crew (see the sketch after these commits)
* feat: fix issue when instantiating ChatOpenAI on the crew
* docs: add docs for the planning_llm new parameter
* docs: change message to ChatOpenAI llm
* feat: add tests
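The planning_llm parameter from these commits is set on the Crew itself; a minimal sketch, assuming the planning flag enables the AgentPlanner step:

```python
from crewai import Agent, Crew, Task
from langchain_openai import ChatOpenAI

analyst = Agent(
    role="Analyst",
    goal="Summarize findings",
    backstory="A careful analyst.",
)

report = Task(
    description="Summarize the latest findings in one paragraph.",
    expected_output="A one-paragraph summary.",
    agent=analyst,
)

crew = Crew(
    agents=[analyst],
    tasks=[report],
    planning=True,                            # enable the planning step
    planning_llm=ChatOpenAI(model="gpt-4o"),  # LLM used by the planner
)
```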
* feat: add crew Testing/evaluating feature (see the sketch after this list)
* feat: add docs and add unit test
* feat: improve testing output table
* feat: add tests
* feat: fix type checking issue
* feat: raise ValueError during testing when output is not as expected
* docs: add docs for Testing
* feat: improve tests and fix some issues
* feat: back to sync
* feat: change OpenAI model
* feat: fix test
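Given a configured Crew (e.g. the one from the planning sketch above), the testing feature is invoked roughly like this; treat the exact signature as approximate:

```python
# Run the crew n_iterations times and print the evaluation table with
# per-task scores and execution time; bad output raises ValueError.
crew.test(n_iterations=2, openai_model_name="gpt-4o")
```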
* WIP: proper yaml mapping for agents and tasks
* WIP: added output_json and output_pydantic setup
* WIP: core logic added, need cleanup
* code cleanup
* updated docs and example template to use yaml to reference agents within tasks (see the sketch below)
* cleanup type errors
* Update Start-a-New-CrewAI-Project.md
---------
Co-authored-by: João Moura <joaomdmoura@gmail.com>
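A sketch of the yaml-based mapping those commits describe, following the project template style; file names and yaml keys are the template defaults and may differ:

```python
from crewai import Agent, Crew, Task
from crewai.project import CrewBase, agent, crew, task


@CrewBase
class ResearchCrew:
    agents_config = "config/agents.yaml"  # defines e.g. a `researcher` agent
    tasks_config = "config/tasks.yaml"    # tasks can name their agent in yaml

    @agent
    def researcher(self) -> Agent:
        return Agent(config=self.agents_config["researcher"])

    @task
    def research_task(self) -> Task:
        # `agent: researcher` in tasks.yaml binds this task to the agent
        # above, alongside output_json / output_pydantic if configured there.
        return Task(config=self.tasks_config["research_task"])

    @crew
    def crew(self) -> Crew:
        return Crew(agents=self.agents, tasks=self.tasks)
```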
To solve:
I encountered an error while trying to use the tool. This was the error: DuckDuckGoSearchRun._run() got an unexpected keyword argument 'q'.
Tool duckduckgo_search accepts these inputs: A wrapper around DuckDuckGo Search. Useful for when you need to answer questions about current events. Input should be a search query.
Refer: https://github.com/joaomdmoura/crewAI/issues/316
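A commonly suggested workaround for that error (discussed in the linked issue) is to wrap DuckDuckGoSearchRun in a custom tool with a single explicit query parameter, so the agent can never call it with an unexpected keyword like 'q'; a sketch, assuming the crewai_tools tool decorator is available:

```python
from crewai_tools import tool
from langchain_community.tools import DuckDuckGoSearchRun


@tool("DuckDuckGo Search")
def duckduckgo_search(query: str) -> str:
    """Search the web with DuckDuckGo. Input should be a search query."""
    # Forward the query positionally so the wrapped tool never receives
    # an unexpected keyword argument.
    return DuckDuckGoSearchRun().run(query)
```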
The memory documentation left me with a lot of questions, so I went through the code to find answers. I added this paragraph to explain what I found. Hope this is helpful.