* test: Add test demonstrating knowledge not included in planning process
Issue #1703: Add test to verify that agent knowledge sources are not currently
included in the planning process. This test will help validate the fix once
implemented.
- Creates agent with knowledge sources
- Verifies knowledge context missing from planning
- Checks other expected components are present
Co-Authored-By: Joe Moura <joao@crewai.com>
* fix: Include agent knowledge in planning process
Issue #1703: Integrate agent knowledge sources into planning summaries
- Add agent_knowledge field to task summaries in planning_handler
- Update test to verify knowledge inclusion
- Ensure knowledge context is available during planning phase
The planning agent now has access to agent knowledge when creating
task execution plans, allowing for better informed planning decisions.
Co-Authored-By: Joe Moura <joao@crewai.com>
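A minimal sketch of the kind of task-summary construction described above, purely for illustration: the function and field names below are assumptions, not the actual planning_handler code.

```python
# Hypothetical sketch of a task summary that carries agent knowledge into
# planning; names are illustrative, not CrewAI's internal API.
def build_task_summary(description, agent_role, agent_tools, agent_knowledge):
    summary = {
        "task": description,
        "agent": agent_role,
        "agent_tools": agent_tools or "agent has no tools",
    }
    # Only include knowledge when the agent actually has sources, matching the
    # later commit that drops the field when it is empty.
    if agent_knowledge:
        summary["agent_knowledge"] = agent_knowledge
    return summary
```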
* style: Fix import sorting in test_knowledge_planning.py
- Reorganize imports according to ruff linting rules
- Fix I001 linting error
Co-Authored-By: Joe Moura <joao@crewai.com>
* test: Update task summary assertions to include knowledge field
Co-Authored-By: Joe Moura <joao@crewai.com>
* fix: Update ChromaDB mock path and fix knowledge string formatting
Co-Authored-By: Joe Moura <joao@crewai.com>
* fix: Improve knowledge integration in planning process with error handling
Co-Authored-By: Joe Moura <joao@crewai.com>
* fix: Update task summary format for empty tools and knowledge
- Change empty tools message to 'agent has no tools'
- Remove agent_knowledge field when empty
- Update test assertions to match new format
- Improve test messages for clarity
Co-Authored-By: Joe Moura <joao@crewai.com>
* fix: Update string formatting for agent tools in task summary
Co-Authored-By: Joe Moura <joao@crewai.com>
* fix: Update string formatting for agent tools in task summary
Co-Authored-By: Joe Moura <joao@crewai.com>
* fix: Update string formatting for agent tools and knowledge in task summary
Co-Authored-By: Joe Moura <joao@crewai.com>
* fix: Update knowledge field formatting in task summary
Co-Authored-By: Joe Moura <joao@crewai.com>
* style: Fix import sorting in test_planning_handler.py
Co-Authored-By: Joe Moura <joao@crewai.com>
* style: Fix import sorting order in test_planning_handler.py
Co-Authored-By: Joe Moura <joao@crewai.com>
* test: Add ChromaDB mocking to test_create_tasks_summary_with_knowledge_and_tools
Co-Authored-By: Joe Moura <joao@crewai.com>
---------
Co-authored-by: Devin AI <158243242+devin-ai-integration[bot]@users.noreply.github.com>
Co-authored-by: Joe Moura <joao@crewai.com>
Co-authored-by: João Moura <joaomdmoura@gmail.com>
* docs: add comprehensive docstrings to Flow class and methods
- Added NumPy-style docstrings to all decorator functions
- Added detailed documentation to Flow class methods
- Included parameter types, return types, and examples
- Enhanced documentation clarity and completeness
Co-Authored-By: Joe Moura <joao@crewai.com>
* feat: add secure path handling utilities
- Add path_utils.py with safe path handling functions
- Implement path validation and security checks
- Integrate secure path handling in flow_visualizer.py
- Add path validation in html_template_handler.py
- Add comprehensive error handling for path operations
Co-Authored-By: Joe Moura <joao@crewai.com>
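The entry above describes path validation; a minimal sketch of that kind of check, assuming a single allow-listed root directory (the function name and error type are illustrative, not the actual path_utils API):

```python
from pathlib import Path


def validate_path(candidate: str, allowed_root: str) -> Path:
    """Resolve a path and refuse anything that escapes the allowed directory."""
    root = Path(allowed_root).resolve()
    resolved = Path(candidate).resolve()
    if resolved != root and root not in resolved.parents:
        raise ValueError(f"Path {resolved} escapes allowed directory {root}")
    return resolved
```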
* docs: add comprehensive docstrings and type hints to flow utils (#1819)
Co-Authored-By: Joe Moura <joao@crewai.com>
* fix: add type annotations and fix import sorting
Co-Authored-By: Joe Moura <joao@crewai.com>
* fix: add type annotations to flow utils and visualization utils
Co-Authored-By: Joe Moura <joao@crewai.com>
* fix: resolve import sorting and type annotation issues
Co-Authored-By: Joe Moura <joao@crewai.com>
* fix: properly initialize and update edge_smooth variable
Co-Authored-By: Joe Moura <joao@crewai.com>
---------
Co-authored-by: Devin AI <158243242+devin-ai-integration[bot]@users.noreply.github.com>
Co-authored-by: Joe Moura <joao@crewai.com>
* feat: add tiktoken as explicit dependency and document Rust requirement
- Add tiktoken>=0.8.0 as explicit dependency to ensure pre-built wheels are used
- Document Rust compiler requirement as fallback in README.md
- Addresses issue #1824 tiktoken build failure
Co-Authored-By: Joe Moura <joao@crewai.com>
* fix: adjust tiktoken version to ~=0.7.0 for dependency compatibility
- Update tiktoken dependency to ~=0.7.0 to resolve conflict with embedchain
- Maintain compatibility with crewai-tools dependency chain
- Addresses CI build failures
Co-Authored-By: Joe Moura <joao@crewai.com>
* docs: add troubleshooting section and make tiktoken optional
Co-Authored-By: Joe Moura <joao@crewai.com>
* Update README.md
---------
Co-authored-by: Devin AI <158243242+devin-ai-integration[bot]@users.noreply.github.com>
Co-authored-by: Joe Moura <joao@crewai.com>
Co-authored-by: João Moura <joaomdmoura@gmail.com>
* fix(manager_llm): handle coworker role name case/whitespace properly
- Add .strip() to agent name and role comparisons in base_agent_tools.py
- Add test case for varied role name cases and whitespace
- Fix issue #1503 with manager LLM delegation
Co-Authored-By: Joe Moura <joao@crewai.com>
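A rough sketch of the normalization these commits describe; the `sanitize_agent_name` helper is named in the follow-up entry, but this exact implementation is an assumption.

```python
# Illustrative: strip quotes and surrounding whitespace, collapse internal
# whitespace, and compare case-insensitively when resolving a coworker role.
def sanitize_agent_name(name: str) -> str:
    """Normalize quotes, whitespace, and case before comparing role names."""
    return " ".join(name.strip().strip('"').split()).casefold()


def find_coworker(agents, coworker: str):
    target = sanitize_agent_name(coworker)
    return next((a for a in agents if sanitize_agent_name(a.role) == target), None)
```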
* fix(manager_llm): improve error handling and add debug logging
- Add debug logging for better observability
- Add sanitize_agent_name helper method
- Enhance error messages with more context
- Add parameterized tests for edge cases:
- Embedded quotes
- Trailing newlines
- Multiple whitespace
- Case variations
- None values
- Improve error handling with specific exceptions
Co-Authored-By: Joe Moura <joao@crewai.com>
* style: fix import sorting in base_agent_tools and test_manager_llm_delegation
Co-Authored-By: Joe Moura <joao@crewai.com>
* fix(manager_llm): improve whitespace normalization in role name matching
Co-Authored-By: Joe Moura <joao@crewai.com>
* style: fix import sorting in base_agent_tools and test_manager_llm_delegation
Co-Authored-By: Joe Moura <joao@crewai.com>
* fix(manager_llm): add error message template for agent tool execution errors
Co-Authored-By: Joe Moura <joao@crewai.com>
* style: fix import sorting in test_manager_llm_delegation.py
Co-Authored-By: Joe Moura <joao@crewai.com>
---------
Co-authored-by: Devin AI <158243242+devin-ai-integration[bot]@users.noreply.github.com>
Co-authored-by: Joe Moura <joao@crewai.com>
* fix: Change storage initialization to None for KnowledgeStorage
* refactor: Change storage field to optional and improve error handling when saving documents
---------
Co-authored-by: João Moura <joaomdmoura@gmail.com>
* Create Portkey-Observability-and-Guardrails.md
* crewAI update with new changes
* small change
---------
Co-authored-by: siddharthsambharia-portkey <siddhath.s@portkey.ai>
Co-authored-by: João Moura <joaomdmoura@gmail.com>
As mentioned in issue #1759, listing crewai-tools under dev-dependencies makes pip install it as a required dependency rather than an optional one.
Co-authored-by: João Moura <joaomdmoura@gmail.com>
* added tool for docling support
* docling support installation
* use file_paths instead of file_path
* fix import
* organized imports
* run_type docs
* needs to be list
* fixed logic
* logged but file_path is backwards compatible
* use file_paths instead of file_path 2
* added test for multiple sources for file_paths
* fix run-types
* enabling local files to work and type cleanup
* linted
* fix test and types
* fixed run types
* fix types
* renamed to CrewDoclingSource
* linted
* added docs
* resolve conflicts
---------
Co-authored-by: Brandon Hancock (bhancock_ai) <109994880+bhancockio@users.noreply.github.com>
Co-authored-by: Brandon Hancock <brandon@brandonhancock.io>
* Update llms.mdx (Gemini 2.0)
- Add Gemini 2.0 flash to Gemini table.
- Add link to 2 hosting paths for Gemini in Tip.
- Change to lower case model slugs vs names, user convenience.
- Add https://artificialanalysis.ai/ as alternate leaderboard.
- Move Gemma to "other" tab.
* Update llm.py (gemini 2.0)
Add setting for Gemini 2.0 context window to llm.py
---------
Co-authored-by: Brandon Hancock (bhancock_ai) <109994880+bhancockio@users.noreply.github.com>
* apply agent ops changes and resolve merge conflicts
* Trying to fix tests
* add back in vcr
* update tools
* remove pkg_resources which was causing issues
* Fix tests
* experimenting to see if unique content is an issue with knowledge
* experimenting to see if unique content is an issue with knowledge
* update chromadb which seems to have issues with upsert
* generate new yaml for failing test
* Investigating upsert
* Drop patch
* Update cassettes
* Fix duplicate document issue
* more fixes
* add back in vcr
* new cassette for test
---------
Co-authored-by: Lorenze Jay <lorenzejaytech@gmail.com>
When using agentops, we have the option to pass the `skip_auto_end_session` parameter, which is supposed to not end the session if the `end_session` function is called by Crew.
The way it currently works, `agentops.end_session` accepts an `is_auto_end` flag, and crewai should have passed it as `True` (it is `False` by default).
I have changed the code to pass `is_auto_end=True`, as sketched below.
Co-authored-by: Brandon Hancock (bhancock_ai) <109994880+bhancockio@users.noreply.github.com>
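A hedged sketch of the call described above; apart from `is_auto_end`, which this entry names, the parameter names may differ across agentops versions.

```python
import agentops

# Assumption: end_state/end_state_reason follow the agentops SDK of that era;
# passing is_auto_end=True is the change described in this entry.
agentops.end_session(
    end_state="Success",
    end_state_reason="Crew finished",
    is_auto_end=True,  # respects users who set skip_auto_end_session
)
```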
* drop 3.13
* revert
* Drop test cassette that was causing error
* trying to fix failing test
* adding thiago changes
* resolve final tests
* Drop skip
* drop pipeline
* drop 3.13
* revert
* Drop test cassette that was causing error
* trying to fix failing test
* adding thiago changes
* resolve final tests
* Drop skip
* Fix disk I/O error when resetting short-term memory.
Reset chromadb client and nullifies references before
removing directory.
* Nit for clarity
* did the same for knowledge_storage
* cleanup
* cleanup order
* Cleanup after the rm of the directories
---------
Co-authored-by: Lorenze Jay <lorenzejaytech@gmail.com>
Co-authored-by: Lorenze Jay <63378463+lorenzejay@users.noreply.github.com>
* drop metadata requirement
* fix linting
* Update docs for new knowledge
* more linting
* more linting
* make save_documents private
* update docs to the new way we use knowledge and include clearing memory
* incorporate #1683
* add in --version flag to cli. closes #1679.
* Fix env issue
* Add in suggestions from @caike to make sure RAGStorage doesn't exceed the OS file limit. Also included additional checks to support Windows.
* remove poetry.lock as pointed out by @sanders41 in #1574.
* Incorporate feedback from crewai reviewer
* Incorporate @lorenzejay feedback
* v1 of HITL working
* Drop print statements
* HITL code more robust. Still needs to be refactored.
* refactor and more clear messages
* Fix type issue
* fix tests
* Fix test again
* Drop extra print
Corrected the statement saying users cannot disable telemetry; users can now disable it by setting the environment variable OTEL_SDK_DISABLED to true (example below).
Co-authored-by: Brandon Hancock (bhancock_ai) <109994880+bhancockio@users.noreply.github.com>
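For reference, the opt-out is just an environment variable, for example:

```python
import os

# Disable telemetry before the crew starts
# (equivalently: export OTEL_SDK_DISABLED=true in your shell or .env).
os.environ["OTEL_SDK_DISABLED"] = "true"
```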
* Knowledge project directory standard
* fixed types
* comment fix
* made base file knowledge source an abstract class
* cleaner validator on model_post_init
* fix type checker
* cleaner refactor
* better template
* added path to RAGStorage
* added path to short term and entity memory
* add path for long_term_storage for completeness
---------
Co-authored-by: Brandon Hancock (bhancock_ai) <109994880+bhancockio@users.noreply.github.com>
* Update example of how to use LangChain tools with correct syntax
* Use .env
* Add Code back
---------
Co-authored-by: Brandon Hancock (bhancock_ai) <109994880+bhancockio@users.noreply.github.com>
* docs: improve tasks documentation clarity and structure
- Add Task Execution Flow section
- Add variable interpolation explanation
- Add Task Dependencies section with examples
- Improve overall document structure and readability
- Update code examples with proper syntax highlighting
* docs: update agent documentation with improved examples and formatting
- Replace DuckDuckGoSearchRun with SerperDevTool
- Update code block formatting to be consistent
- Improve template examples with actual syntax
- Update LLM examples to use current models
- Clean up formatting and remove redundant comments
* docs: enhance LLM documentation with Cerebras provider and formatting improvements
* docs: simplify LLMs documentation title
* docs: improve installation guide clarity and structure
- Add clear Python version requirements with check command
- Simplify installation options to recommended method
- Improve upgrade section clarity for existing users
- Add better visual structure with Notes and Tips
- Update description and formatting
* docs: improve introduction page organization and clarity
- Update organizational analogy in Note section
- Improve table formatting and alignment
- Remove emojis from component table for cleaner look
- Add 'helps you' to make the note more action-oriented
* docs: add enterprise and community cards
- Add Enterprise deployment card in quickstart
- Add community card focused on open source discussions
- Remove deployment reference from community description
- Clean up introduction page cards
- Remove link from Enterprise description text
* docs: add code snippet to Getting Started section in flows.mdx
---------
Co-authored-by: Brandon Hancock (bhancock_ai) <109994880+bhancockio@users.noreply.github.com>
* docs: improve tasks documentation clarity and structure
- Add Task Execution Flow section
- Add variable interpolation explanation
- Add Task Dependencies section with examples
- Improve overall document structure and readability
- Update code examples with proper syntax highlighting
* docs: update agent documentation with improved examples and formatting
- Replace DuckDuckGoSearchRun with SerperDevTool
- Update code block formatting to be consistent
- Improve template examples with actual syntax
- Update LLM examples to use current models
- Clean up formatting and remove redundant comments
* docs: enhance LLM documentation with Cerebras provider and formatting improvements
* docs: simplify LLMs documentation title
* docs: improve installation guide clarity and structure
- Add clear Python version requirements with check command
- Simplify installation options to recommended method
- Improve upgrade section clarity for existing users
- Add better visual structure with Notes and Tips
- Update description and formatting
* docs: improve introduction page organization and clarity
- Update organizational analogy in Note section
- Improve table formatting and alignment
- Remove emojis from component table for cleaner look
- Add 'helps you' to make the note more action-oriented
* docs: add enterprise and community cards
- Add Enterprise deployment card in quickstart
- Add community card focused on open source discussions
- Remove deployment reference from community description
- Clean up introduction page cards
- Remove link from Enterprise description text
* added knowledge to agent level
* linted
* added doc
* added from suggestions
* added test
* fixes from discussion
* fix docs
* fix test
* rm cassette for knowledge_sources test as its a mock and update agent doc string
* fix test
* rm unused
* linted
* V1 working
* clean up imports and prints
* more clean up and add tests
* fixing tests
* fix test
* fix linting
* Fix tests
* Fix linting
* add doc string as requested by eduardo
This commit adds an extra step to `crewai login` to ensure users also
log in to Tool Repository, that is, exchanging their Auth0 tokens for a
Tool Repository username and password to be used by UV downloads and API
tool uploads.
* initial knowledge
* WIP
* Adding core knowledge sources
* Improve types and better support for file paths
* added additional sources
* fix linting
* update yaml to include optional deps
* adding in lorenze feedback
* ensure embeddings are persisted
* improvements all around Knowledge class
* return this
* properly reset memory
* properly reset memory+knowledge
* consolidation and improvements
* linted
* cleanup rm unused embedder
* fix test
* fix duplicate
* generating cassettes for knowledge test
* updated default embedder
* None embedder to use default on pipeline cloning
* improvements
* fixed text_file_knowledge
* mypysrc fixes
* type check fixes
* added extra cassette
* just mocks
* linted
* mock knowledge query to not spin up db
* linted
* verbose run
* put a flag
* fix
* adding docs
* better docs
* improvements from review
* more docs
* linted
* rm print
* more fixes
* clearer docs
* added docstrings and type hints for cli
---------
Co-authored-by: João Moura <joaomdmoura@gmail.com>
Co-authored-by: Lorenze Jay <lorenzejaytech@gmail.com>
This commit replaces .netrc with uv environment variables for installing
tools from private repositories. To store credentials, I created a new
and reusable settings file for the CLI in
`$HOME/.config/crewai/settings.json`.
The issue with .netrc files is that they are applied system-wide and are
scoped by hostname, meaning we can't differentiate tool repositories
requests from regular requests to CrewAI's API.
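A small illustrative sketch (not the actual CLI code) of a reusable settings file at the path mentioned above:

```python
import json
from pathlib import Path

SETTINGS_PATH = Path.home() / ".config" / "crewai" / "settings.json"


def save_setting(key: str, value: str) -> None:
    """Persist a CLI setting, creating the settings file on first use."""
    SETTINGS_PATH.parent.mkdir(parents=True, exist_ok=True)
    data = json.loads(SETTINGS_PATH.read_text()) if SETTINGS_PATH.exists() else {}
    data[key] = value
    SETTINGS_PATH.write_text(json.dumps(data, indent=2))
```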
Allow passing additional options from `crewai install` directly to
`uv sync`. This enables commands like `crewai install --locked` to work
as expected by forwarding all flags and options to the underlying uv
command.
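A minimal sketch of how such flag forwarding can be wired with Click; the decorator pattern is standard, though the real command surely differs in detail.

```python
import subprocess

import click


@click.command(context_settings={"ignore_unknown_options": True})
@click.argument("options", nargs=-1, type=click.UNPROCESSED)
def install(options):
    """Install dependencies, forwarding any extra flags straight to uv."""
    subprocess.run(["uv", "sync", *options], check=True)
```

With this shape, `crewai install --locked` effectively becomes `uv sync --locked` under the hood.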
* updated CLI to allow for submitting API keys
* updated click prompt to remove default number
* removed all unnecessary comments
* feat: implement crew creation CLI command
- refactor code to multiple functions
- Added ability for users to select provider and model when using the crewai create command and save the API key to .env
* refactored select_choice function for early return
* refactored select_provider to have an early return
* cleanup of comments
* refactor/Move functions into utils file, added new provider file and migrated functions there, new constants file + general function refactor
* small comment cleanup
* fix unnecessary deps
* Added docs for new CLI provider + fixed missing API prompt
* Minor doc updates
* allow user to bypass api key entry + incorrect number selected logic + ruff formatting
* ruff updates
* Fix spelling mistake
---------
Co-authored-by: Brandon Hancock (bhancock_ai) <109994880+bhancockio@users.noreply.github.com>
Co-authored-by: Brandon Hancock <brandon@brandonhancock.io>
* ensure original embedding config works
* some fixes
* raise error on unsupported provider
* WIP: brandons notes
* fixes
* rm prints
* fixed docs
* fixed run types
* updates to add more docs and correct imports with huggingface embedding server enabled
---------
Co-authored-by: Brandon Hancock <brandon@brandonhancock.io>
* byom - short/entity memory
* better
* rm uneeded
* fix text
* use context
* rm dep and sync
* type check fix
* fixed test using new cassette
* fixing types
* fixed types
* fix types
* fixed types
* fixing types
* fix type
* cassette update
* just mock the return of short term mem
* remove print
* try catch block
* added docs
* dding error handling here
* updated CLI to allow for submitting API keys
* updated click prompt to remove default number
* removed all unnecessary comments
* feat: implement crew creation CLI command
- refactor code to multiple functions
- Added ability for users to select provider and model when using the crewai create command and save the API key to .env
* refactored select_choice function for early return
* refactored select_provider to have an early return
* cleanup of comments
* refactor/Move functions into utils file, added new provider file and migrated functions there, new constants file + general function refactor
* small comment cleanup
* fix unnecessary deps
---------
Co-authored-by: Brandon Hancock (bhancock_ai) <109994880+bhancockio@users.noreply.github.com>
Co-authored-by: Brandon Hancock <brandon@brandonhancock.io>
* feat: Start migrating to UV
* feat: add uv to flows
* feat: update docs on Poetry -> uv
* feat: update docs and uv.lock
* feat: update tests and github CI
* feat: run ruff format
* feat: update typechecking
* feat: fix type checking
* feat: update python version
* feat: type checking gic
* feat: adapt uv command to run the tool repo
* Adapt tool build command to uv
* feat: update logic to let only projects with a crew be deployed
* feat: add uv to tools
* fix: tests
* fix: remove breakpoint
* fix: test
* feat: add crewai update to migrate from poetry to uv
* fix: tests
* feat: add validation for ^ character on pyproject
* feat: add run_crew to pyproject if it doesn't exist
* feat: add validation for poetry migration
* fix: warning
---------
Co-authored-by: Vinicius Brasil <vini@hey.com>
* Change all instances of crewAI to CrewAI and fix installation step
* Update the example to use YAML format
* Update to come after setup and edits
* Remove double tool instance
This commit adds a new command for adding custom PyPI indexes
credentials to the project. This was changed because credentials are now
user-scoped instead of organization.
* Almost working!
* It fully works but not clean enough
* Working but not clean enough
* Everything is working
* WIP. Working on adding and & or to flows. In the middle of setting up template for flow as well
* template working
* Everything is working
* More changes and todos
* Add more support for @start
* Router working now
* minor tweak to
* minor tweak to conditions and event handling
* Update logs
* Too trigger happy with cleanup
* Added in Thiago fix
* Flow passing results again
* Working on docs.
* made more progress updates on docs
* Finished talking about controlling flows
* add flow output
* fixed flow output section
* add crews to flows section is looking good now
* more flow doc changes
* Update docs and add more examples
* drop visualizer
* save visualizer
* pyvis is beginning to work
* pyvis working
* it is working
* regular methods and triggers working. Need to work on router next.
* properly identifying router and router children nodes. Need to fix color
* children router working. Need to support loops
* curving cycles but need to add curve conditionals
* everything is showing up properly, need to fix curves
* all working. needs to be cleaned up
* adjust padding
* drop lib
* clean up prior to PR
* incorporate joao feedback
* final tweaks for joao
* Refactor to make crews easier to understand
* update CLI and templates
* Fix crewai version in flows
* Fix merge conflict
* Almost working!
* It fully works but not clean enough
* Working but not clean enough
* Everything is working
* WIP. Working on adding and & or to flows. In the middle of setting up template for flow as well
* template working
* Everything is working
* More changes and todos
* Add more support for @start
* Router working now
* minor tweak to
* minor tweak to conditions and event handling
* Update logs
* Too trigger happy with cleanup
* Added in Thiago fix
* Flow passing results again
* Working on docs.
* made more progress updates on docs
* Finished talking about controlling flows
* add flow output
* fixed flow output section
* add crews to flows section is looking good now
* more flow doc changes
* Update docs and add more examples
* drop visualizer
* save visualizer
* pyvis is beginning to work
* pyvis working
* it is working
* regular methods and triggers working. Need to work on router next.
* properly identifying router and router children nodes. Need to fix color
* children router working. Need to support loops
* curving cycles but need to add curve conditionals
* everything is showing up properly, need to fix curves
* all working. needs to be cleaned up
* adjust padding
* drop lib
* clean up prior to PR
* incorporate joao feedback
* final tweaks for joao
This commit adds two commands to the CLI:
- `crewai tool publish`
- Builds the project using Poetry
- Uploads the tarball to CrewAI's tool repository
- `crewai tool install my-tool`
- Adds my-tool's index to Poetry and its credentials
- Installs my-tool from the custom index
* Prevent double slashes when joining URLs
* Move crewai.cli.deploy.utils to crewai.cli.utils
This commit moves this package so it's reusable across commands.
* Update Tasks.md
The current formatting of the Tasks page is broken; fix the markdown formatting.
* Update LLM-Connections.md
LLM class has been moved to llm.py file
* rebuilding executor
* removing langchain
* Making all tests good
* fixing types and adding ability for not using system prompts
* improving types
* pleasing the types gods
* pleasing the types gods
* fixing parser, tools and executor
* making sure all tests pass
* final pass
* fixing type
* Updating Docs
* preparing to cut new version
* Updated CrewAI Documentation and Repository link in tools.poetry.urls
* Update pyproject.toml
---------
Co-authored-by: João Moura <joaomdmoura@gmail.com>
* feat: set basic structure deploy commands
* feat: add first iteration of CLI Deploy
* feat: some minor refactor
* feat: Add api, Deploy command and update cli
* feat: Remove test token
* feat: add auth0 lib, update cli and improve code
* feat: update code and decouple auth
* fix: parts of the code
* feat: Add token manager to encrypt access token and get and save tokens
* feat: add audience to costants
* feat: add subsystem saving credentials and remove comment of type hinting
* feat: add get crew version to send on header of request
* feat: add docstrings
* feat: add tests for authentication module
* feat: add tests for utils
* feat: add unit tests for cli
* feat: add tests
* feat: add deploy man tests
* feat: fix type checking issue
* feat: rename tests to pass ci
* feat: fix pr issues
* feat: fix get crewai version
* fix: add timeout for tests.yml
* Fixed agents. Now need to fix tasks.
* Add type fixes and fix task decorator
* Clean up logs
* fix more type errors
* Revert back to required
* Undo changes.
* Remove default none for properties that cannot be none
* Clean up comments
* Implement all of Guis feedback
* Clean up pipeline
* Make versioning dynamic in templates
* fix .env issues when openai is trying to use invalid keys
* Fix type checker issue in pipeline
* Fix tests.
* Add name and expected_output to TaskOutput
This commit adds task information to the TaskOutput class. This is
useful to provide extra context to callbacks.
* Populate task name from function names
This commit populates task name from function names when using
annotations.
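A hedged example of using that extra context in a task callback; `name` and `expected_output` are the fields these entries add, the rest is illustrative.

```python
from crewai import Task


def on_task_done(output) -> None:
    # `name` is populated from the annotated function name, per the entry above.
    print(f"Task {output.name!r} finished")
    print(f"Expected: {output.expected_output}")
    print(str(output))


task = Task(
    description="Summarize the latest AI news",
    expected_output="A three-bullet summary",
    callback=on_task_done,
)
```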
As per https://github.com/langchain-ai/langchain/pull/16395, OpenAI functions don't accept tool names with spaces. Therefore, I added an exception-handling snippet that raises an error if a custom tool name contains a space.
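The guard amounts to something like the following sketch (exact message and exception type are assumptions):

```python
def validate_tool_name(name: str) -> str:
    """Reject tool names that OpenAI function calling cannot accept."""
    if " " in name:
        raise ValueError(
            f"Tool name '{name}' contains spaces, which OpenAI functions do not allow."
        )
    return name
```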
* WIP. Procedure appears to be working well. Working on mocking properly for tests
* All tests are passing now
* rshift working
* Add back in Gui's tool_usage fix
* WIP
* Going to start refactoring for pipeline_output
* Update terminology
* new pipeline flow with traces and usage metrics working. need to add more tests and make sure PipelineOutput behaves like CrewOutput
* Fix pipelineoutput to look more like crewoutput and taskoutput
* Implemented additional tests for pipeline. One test is failing. Need team support
* Update docs for pipeline
* Update pipeline to properly process input and output dictionary
* Update Pipeline docs
* Add back in commentary at top of pipeline file
* Starting to work on router
* Drop router for now. will add in separately
* In the middle of fixing router. A ton of circular dependencies. Moving over to a new design.
* WIP.
* Fix circular dependencies and updated PipelineRouter
* Add in Eduardo feedback. Still need to add in more commentary describing the design decisions for pipeline
* Add developer notes to explain what is going on in pipelines.
* Add doc strings
* Fix missing rag datatype
* WIP. Converting usage metrics from a dict to an object
* Fix tests that were checking usage metrics
* Drop todo
* Fix 1 type error in pipeline
* Update pipeline to use UsageMetric
* Add missing doc string
* WIP.
* Change names
* Rename variables based on joaos feedback
* Fix critical circular dependency issues. Now needing to fix trace issue.
* Tests working now!
* Add more tests which showed underlying issue with traces
* Fix tests
* Remove overly complicated test
* Add router example to docs
* Clean up end of docs
* Clean up docs
* Working on creating Crew templates and pipeline templates
* WIP.
* WIP
* Fix poetry install from templates
* WIP
* Restructure
* changes for lorenze
* more todos
* WIP: create pipelines cli working
* wrapped up router
* ignore mypy src on templates
* ignored signature of copy
* fix all verbose
* rm print statements
* brought back correct folders
* fixes missing folders and then rm print statements
* fixed tests
* fixed broken test
* fixed type checker
* fixed type ignore
* ignore types for templates
* needed
* revert
* exclude only required
* rm type errors on templates
* rm excluding type checks for template files on github action
* fixed missing quotes
---------
Co-authored-by: Brandon Hancock <brandon@brandonhancock.io>
* patching for non-gpt model
* removal of json_object tool name assignment
* fixed issue for smaller models due to instructions prompt
* fixing for ollama llama3 models
* WIP: generated summary from documents split, could also create memgpt approach
* WIP: need tests but user inputted summarization strategy implemented - handling context window exceeding errors
* rm extra line
* removed type ignores
* added tests
* handling n to summarize prompt
* code cleanup, using click for cli asker
* rm not used class
* better refactor
* reverted poetry lock
* reverted poetry.lock
* improved context window exceeding exception class
* feat: Add execution time to both task and testing feature
* feat: Remove unused functions
* feat: change test_crew to evaluate_crew to avoid issues with testing libs
* feat: fix tests
I think there is some mistake, because there is no such parameter as force_output_result, and as the code shows, the correct parameter result_as_answer is set during agent creation, not task.
* Performed spell check across the entire documentation
Thank you once again!
* Performed spell check across most of the code base
Folders checked:
- agents
- cli
- memory
- project
- tasks
- telemetry
- tools
- translations
* Trying to add a max_token for the agents, so they are limited by the number of tokens.
* Performed spell check across the rest of the code base, and enhanced the YAML parser code a little
* Small change in the main agent doc
* Improve _save_file method to handle both dict and str inputs
- Add check for dict type input
- Use json.dump for dict serialization
- Convert non-dict inputs to string
- Remove type ignore comments
---------
Co-authored-by: João Moura <joaomdmoura@gmail.com>
* Fixed special character issue when converting json to models. Added numerous tests to ensure things work properly.
* Fix linting error and cleaned up tests
* Fix customer_converter_cls test failure
* Fixed tests. Thank you lorenze for pointing that out. added a few more to ensure converter creation works properly
* Address lorenze feedback
* Fix linting issues
* patching for non-gpt model
* removal of json_object tool name assignment
* fixed issue for smaller models due to instructions prompt
* fixing for ollama llama3 models
* closing brackets
* removed not used and fixes
* WIP: hierarchical unblock for async tasks
* added better test
* update name change
* added more test and crew manager cleanup
* remove prints
* code cleanup, no need to pass manager
* feat: add ability to set LLM for AgentPlanner on Crew
* feat: fixes issue on instantiating the ChatOpenAI on the crew
* docs: add docs for the planning_llm new parameter
* docs: change message to ChatOpenAI llm
* feat: add tests
* feat: add crew Testing/evaluating feature
* feat: add docs and add unit test
* feat: improve testing output table
* feat: add tests
* feat: fix type checking issue
* feat: add raise ValueError when testing if output is not the expected
* docs: add docs for Testing
* feat: improve tests and fix some issue
* feat: back to sync
* feat: change openai model
* feat: fix test
* WIP: yaml proper mapping for agents and agent
* WIP: added output_json and output_pydantic setup
* WIP: core logic added, need cleanup
* code cleanup
* updated docs and example template to use yaml to reference agents within tasks
* cleanup type errors
* Update Start-a-New-CrewAI-Project.md
---------
Co-authored-by: João Moura <joaomdmoura@gmail.com>
To solve:
I encountered an error while trying to use the tool. This was the error: DuckDuckGoSearchRun._run() got an unexpected keyword argument 'q'.
Tool duckduckgo_search accepts these inputs: A wrapper around DuckDuckGo Search. Useful for when you need to answer questions about current events. Input should be a search query.
refer : https://github.com/joaomdmoura/crewAI/issues/316
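One common workaround (not necessarily the exact fix referenced above) is to expose the search as a single-input LangChain Tool so the agent passes a plain query string instead of a `q` keyword; import paths vary by LangChain version.

```python
from langchain.tools import Tool
from langchain_community.tools import DuckDuckGoSearchRun

search = DuckDuckGoSearchRun()

search_tool = Tool(
    name="duckduckgo_search",
    func=search.run,  # takes a single positional query string
    description="Search the web for information on current events.",
)
```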
The memory documentation left me with a lot of questions. After I went through the code to find an answer. I added this paragraph to explain what I found. Hope this is helpful.
* feat: add planning feature to crew
* feat: add test to planning handler and change to execute_async method
* docs: add planning parameter to the Core documentation
* docs: add planning docs
* fix: fix type checking issue
* fix: test and logic
* Cleaned up task execution to now have separate paths for async and sync execution. Updating all kickoff functions to return CrewOutput. WIP. Waiting for Joao feedback on async task execution with task_output
* Consistently storing async and sync output for context
* outline tests I need to create going forward
* Major rehaul of TaskOutput and CrewOutput. Updated all tests to work with new change. Need to add in a few final tricky async tests and add a few more to verify output types on TaskOutput and CrewOutput.
* Encountering issues with callback. Need to test on main. WIP
* working on tests. WIP
* WIP. Figuring out disconnect issue.
* Cleaned up logs now that I've isolated the issue to the LLM
* more wip.
* WIP. It looks like usage metrics has always been broken for async
* Update parent crew who is managing for_each loop
* Merge in main to bugfix/kickoff-for-each-usage-metrics
* Clean up code for review
* Add new tests
* Final cleanup. Ready for review.
* Moving copy functionality from Agent to BaseAgent
* Fix renaming issue
* Fix linting errors
* use BaseAgent instead of Agent where applicable
* Fixing missing function. Working on tests.
* WIP. Needing team to review change
* Fixing issues brought about by merge
* WIP: need to fix json encoder
* WIP need to fix encoder
* WIP
* WIP: replay working with async. need to add tests
* Implement major fixes from yesterdays group conversation. Now working on tests.
* The majority of tasks are working now. Need to fix converter class
* Fix final failing test
* Fix linting and type-checker issues
* Add more tests to fully test CrewOutput and TaskOutput changes
* Add in validation for async cannot depend on other async tasks.
* WIP: working replay feat fixing inputs, need tests
* WIP: core logic of seq and heir for executing tasks added into one
* Update validators and tests
* better logic for seq and hier
* replay working for both seq and hier just need tests
* fixed context
* added cli command + code cleanup TODO: need better refactoring
* refactoring for cleaner code
* added better tests
* removed todo comments and fixed some tests
* fix logging now all tests should pass
* cleaner code
* ensure replay is declared when replaying specific tasks
* ensure hierarchical works
* better typing for stored_outputs and separated task_output_handler
* added better tests
* added replay feature to crew docs
* easier cli command name
* fixing changes
* using SQLite instead of a .json file for logging previous task_outputs
* tools fix
* added to docs and fixed tests
* fixed .db
* fixed docs and removed unneeded comments
* separating ltm and replay db
* fixed printing colors
* added how to doc
---------
Co-authored-by: Brandon Hancock <brandon@brandonhancock.io>
* feat: add max retry limit to agent execution
* feat: add test to max retry limit feature
* feat: add code execution docstring
---------
Co-authored-by: João Moura <joaomdmoura@gmail.com>
* Exploring output being passed to tool selector to see if we can better format data
* WIP. Adding JSON repair functionality
* Almost done implementing JSON repair. Testing fixes vs current base case.
* More action cleanup with additional tests
* WIP. Trying to figure out what is going on with tool descriptions
* Update tool description generation
* WIP. Trying to find out what is causing the tools to duplicate
* Replacing tools properly instead of duplicating them accidentally
* Fixing issues for MR
* Update dependencies for JSON_REPAIR
* More cleaning up pull request
* preparing for call
* Fix type-checking issues
---------
Co-authored-by: João Moura <joaomdmoura@gmail.com>
* Cleaned up task execution to now have separate paths for async and sync execution. Updating all kickoff functions to return CrewOutput. WIP. Waiting for Joao feedback on async task execution with task_output
* Consistently storing async and sync output for context
* outline tests I need to create going forward
* Major rehaul of TaskOutput and CrewOutput. Updated all tests to work with new change. Need to add in a few final tricky async tests and add a few more to verify output types on TaskOutput and CrewOutput.
* Encountering issues with callback. Need to test on main. WIP
* working on tests. WIP
* WIP. Figuring out disconnect issue.
* Cleaned up logs now that I've isolated the issue to the LLM
* more wip.
* WIP. It looks like usage metrics has always been broken for async
* Update parent crew who is managing for_each loop
* Merge in main to bugfix/kickoff-for-each-usage-metrics
* Clean up code for review
* Add new tests
* Final cleanup. Ready for review.
* Moving copy functionality from Agent to BaseAgent
* Fix renaming issue
* Fix linting errors
* use BaseAgent instead of Agent where applicable
* Fixing missing function. Working on tests.
* WIP. Needing team to review change
* Fixing issues brought about by merge
* WIP
* Implement major fixes from yesterdays group conversation. Now working on tests.
* The majority of tasks are working now. Need to fix converter class
* Fix final failing test
* Fix linting and type-checker issues
* Add more tests to fully test CrewOutput and TaskOutput changes
* Add in validation for async cannot depend on other async tasks.
* Update validators and tests
* Performed spell check across the entire documentation
Thank you once again!
* Performed spell check across most of the code base
Folders checked:
- agents
- cli
- memory
- project
- tasks
- telemetry
- tools
- translations
* fix: call asserts
* fix: test_increment_tool_errors
* fix: test_increment_delegations_for_sequential_process
* fix: test_increment_delegations_for_hierarchical_process
* fix: test_code_execution_flag_adds_code_tool_upon_kickoff
* fix: test_tool_usage_information_is_appended_to_agent
* fix: try to fix test_crew_full_output
* fix: try to fix test_crew_full_output
* fix: test remove vcr to test crew_test test
* fix: comment test to see if ci passes
* fix: comment test to see if ci passes
* fix: test changing prompt tokens to get error on CI
* fix: test changing prompt tokens to get error on CI
* fix: test changing prompt tokens to get error on CI
* fix: test changing prompt tokens to get error on CI
* fix: test new approach
* fix: comment out function not working in CI
* fix: github python version
* fix: remove need of vcr
* fix: fix and add comments for all type checking errors
* Adding support to force a tool return to be the final answer.
This will at the end of the execution return the tool output.
It will return the output of the latest tool with the flag
* Update src/crewai/agent.py
Co-authored-by: Gui Vieira <guilherme_vieira@me.com>
* Update tests/agent_test.py
Co-authored-by: Gui Vieira <guilherme_vieira@me.com>
---------
Co-authored-by: Gui Vieira <guilherme_vieira@me.com>
* fixed bug for manager overriding task agent and then added pydantic validators to sequential when no agent is added to task
* better test and fixed task.agent logic
* fixed tests and better validator message
* added validator for async_execution true in tasks whenever in hierarchical run
* WIP. Figuring out disconnect issue.
* Cleaned up logs now that I've isolated the issue to the LLM
* more wip.
* WIP. It looks like usage metrics has always been broken for async
* Update parent crew who is managing for_each loop
* Merge in main to bugfix/kickoff-for-each-usage-metrics
* Clean up code for review
* Add new tests
* Final cleanup. Ready for review.
* Moving copy functionality from Agent to BaseAgent
* Fix renaming issue
* Fix linting errors
* use BaseAgent instead of Agent where applicable
- Added detailed steps for training the crew programmatically.
- Clarified the distinction between using the CLI and programmatic approaches.
This update makes it easier for users to understand how to train their crew both through the CLI and programmatically, whether using a UI or API endpoints.
Again Thank you to the author for the great project and the excellent foundation provided!
* Fix agentops poetry install issue
* Updated install requirements tests to fail if .lock becomes out of sync with poetry install. Cleaned up old issues that were merged back in.
* implements agentops with a langchain handler, agent tracking and tool call recording
* track tool usage
* end session after completion
* track tool usage time
* better tool and llm tracking
* code cleanup
* make agentops optional
* optional dependency usage
* remove telemetry code
* optional agentops
* agentops version bump
* remove org key
* true dependency
* add crew org key to agentops
* cleanup
* Update pyproject.toml
* Revert "true dependency"
This reverts commit e52e8e9568.
* Revert "cleanup"
This reverts commit 7f5635fb9e.
* optional parent key
* agentops 0.1.5
* Revert "Revert "cleanup""
This reverts commit cea33d9a5d.
* Revert "Revert "true dependency""
This reverts commit 4d1b460b
* cleanup
* Forcing version 0.1.5
* Update pyproject.toml
* agentops update
* noop
* add crew tag
* black formatting
* use langchain callback handler to support all LLMs
* agentops version bump
* track task evaluator
* merge upstream
* Fix typo in instruction en.json (#676)
* Enable search in docs (#663)
* Clarify text in docstring (#662)
* Update agent.py (#655)
Changed default model value from gpt-4 to gpt-4o.
Reasoning:
gpt-4 costs $30 per million tokens while gpt-4o costs $5.
This is more cost-friendly as the default option.
* Update README.md (#652)
Rework example so that if you use a custom LLM it doesn't throw code errors by uncommenting.
* Update BrowserbaseLoadTool.md (#647)
* Update crew.py (#644)
Fixed Type on line 53
* fixes #665 (#666)
* Added timestamp to logger (#646)
* Added timestamp to logger
Updated the logger.py file to include timestamps when logging output. For example:
[2024-05-20 15:32:48][DEBUG]: == Working Agent: Researcher
[2024-05-20 15:32:48][INFO]: == Starting Task: Research the topic
[2024-05-20 15:33:22][DEBUG]: == [Researcher] Task output:
* Update tool_usage.py
* Revert "Update tool_usage.py"
This reverts commit 95d18d5b6f.
incorrect branch for this commit
* support skip auto end session
* conditional protect agentops use
* fix crew logger bug
* fix crew logger bug
* Update crew.py
* Update tool_usage.py
---------
Co-authored-by: João Moura <joaomdmoura@gmail.com>
Co-authored-by: Howard Gil <howardbgil@gmail.com>
Co-authored-by: Olivier Roberdet <niox5199@gmail.com>
Co-authored-by: Paul Sanders <psanders1@gmail.com>
Co-authored-by: Anudeep Kolluri <50168940+Anudeep-Kolluri@users.noreply.github.com>
Co-authored-by: Mike Heavers <heaversm@users.noreply.github.com>
Co-authored-by: Mish Ushakov <10400064+mishushakov@users.noreply.github.com>
Co-authored-by: theCyberTech - Rip&Tear <84775494+theCyberTech@users.noreply.github.com>
Co-authored-by: Saif Mahmud <60409889+vmsaif@users.noreply.github.com>
To resolve:
pydantic_core._pydantic_core.ValidationError: 1 validation error for Task
expected_output
Field required [type=missing, input_value=, input_type=dict]
For further information visit https://errors.pydantic.dev/2.6/v/missing
"Expected Output" is mandatory now as it forces people to be specific about the expected result and get better result
refer : https://github.com/joaomdmoura/crewAI/issues/308
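In practice this just means every Task must declare `expected_output`, e.g.:

```python
from crewai import Task

task = Task(
    description="Research the latest trends in AI",
    expected_output="A bullet-point list of the top 5 trends, one line each",
)
```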
- Added a "Parameters" column to attribute tables. Improved overall document formatting for enhanced readability and ease of use.
Thank you to the author for the great project and the excellent foundation provided!
Revised to utilize Ollama from langchain.llms instead as the functionality from the other method simply doesn't work when delegating.
Co-authored-by: João Moura <joaomdmoura@gmail.com>
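A hedged sketch of the revised approach: build the LLM with LangChain's Ollama wrapper (the import path shown matches the older `langchain.llms` module referenced above) and hand it to the agent.

```python
from langchain.llms import Ollama

from crewai import Agent

ollama_llm = Ollama(model="llama2")

researcher = Agent(
    role="Researcher",
    goal="Find relevant information",
    backstory="An analyst who digs up facts",
    llm=ollama_llm,
)
```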
fixed error for some cases with Pandas DataFrame:
ValueError: The truth value of a DataFrame is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all().
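The gist of the fix: never rely on a DataFrame's truth value, check explicitly instead.

```python
from typing import Optional

import pandas as pd


def has_rows(df: Optional[pd.DataFrame]) -> bool:
    # `if df:` raises "The truth value of a DataFrame is ambiguous"
    return df is not None and not df.empty
```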
* better spacing
* works with llama index
* works on langchain custom just need delegation to work
* cleanup for custom_agent class
* works with different argument expectations for agent_executor
* cleanup for hierarchical process, better agent_executor args handler and added to the crew agent doc page
* removed code examples for langchain + llama index, added to docs instead
* added key output if return is not a str for and added some tests
* added hinting for CustomAgent class
* removed pass as it was not needed
* closer, just need to figure out agentTools
* running agents - llamaindex and langchain with base agent
* some cleanup on baseAgent
* minimum for agent to run for base class and ensure it works with hierarchical process
* cleanup for original agent to take on BaseAgent class
* Agent takes on langchainagent and cleanup across
* token handling working for usage_metrics to continue working
* installed llama-index, updated docs and added better name
* fixed some type errors
* base agent holds token_process
* hierarchical process uses proper tools and no longer relies on hasattr for token_processes
* removal of test_custom_agent_executions
* this fixes copying agents
* leveraging an executor class for triggering the llamaindex agent
* llama index now has ask_human
* executor mixins added
* added output converter base class
* type listed
* cleanup for output conversions and tokenprocess eliminated redundancy
* properly handling tokens
* simplified token calc handling
* original agent with base agent builder structure setup
* better docs
* no more llama-index dep
* cleaner docs
* test fixes
* poetry reverts and better docs
* base_agent_tools set for third party agents
* updated task and test fix
* feat: add CodeInterpreterTool to run when enable code execution is allowed on agent
* feat: change to allow_code_execution
* feat: add readme for CodeInterpreterTool
* feat: add training logic to agent and crew
* feat: add training logic to agent executor
* feat: add input parameter to cli command
* feat: add utilities for the training logic
* feat: polish code, logic and add private variables
* feat: add docstring and type hinting to executor
* feat: add constant file, add constant to code
* feat: fix name of training handler function
* feat: remove unused var
* feat: change file handler file name
* feat: Add training handler file, class and change on the code
* feat: fix name error from file
* fix: change import to adapt to logic
* feat: add training handler test
* feat: add tests for file and training_handler
* feat: add test for task evaluator function
* feat: change text to fit in-screen
* feat: add test for train function
* feat: add test for agent training_handler function
* feat: add test for agent._use_trained_data
* removed hyphen in co-workers
* Fix issue with AgentTool agent selection. The LLM included double quotes in the agent name which messed up the string comparison. Added additional types. Cleaned up error messaging.
* Remove duplicate import
* Improve explanation
* Revert poetry.lock changes
* Fix missing line in poetry.lock
---------
Co-authored-by: madmag77 <goncharov.artemv@gmail.com>
* added extra parameter for kickoff to return token usage count after result
* added output_token_usage to class and in full_output
* logger duplicated
* added more types
* added usage_metrics to full output instead
* added more to the description on full_output
* possible mispacing
* updated kickoff return types to be either string or dict applicable when full_output is set
* removed duplicates
* updates instructor to the latest version. adds jsonref, which instructor seems to depend on.
* updates embedchain reference, necessary for python 3.12
* added extra parameter for kickoff to return token usage count after result
* added output_token_usage to class and in full_output
* logger duplicated
* added more types
* added usage_metrics to full output instead
* added more to the description on full_output
* possible mispacing
* fix: 'from datetime import datetime for logging' to print the timestamp
* fix: correct default model (gpt-4o), correct token counts, and correct TaskOutput attributes (added agent)
* test: verify Task callback data is an instance of TaskOutput
* Sync with deep copy working now
* async working!!
* Clean up code for review
* Fix naming
---------
Co-authored-by: João Moura <joaomdmoura@gmail.com>
* Added timestamp to logger
Updated the logger.py file to include timestamps when logging output. For example:
[2024-05-20 15:32:48][DEBUG]: == Working Agent: Researcher
[2024-05-20 15:32:48][INFO]: == Starting Task: Research the topic
[2024-05-20 15:33:22][DEBUG]: == [Researcher] Task output:
* Update tool_usage.py
* Revert "Update tool_usage.py"
This reverts commit 95d18d5b6f.
incorrect branch for this commit
Changed default model value from gpt-4 to gpt-4o.
Reasoning:
gpt-4 costs $30 per million tokens while gpt-4o costs $5.
This is more cost-friendly as the default option.
* mentioning ollama on the docs as embedder
* lowering barrier to match tool with similar name
* Fixing agent tools to support co_worker
* Adding new tests
* Fixing type"
* updating tests
* fixing conflict
* fix: fix test actually running
* fix: fix test to not send request to openai
* fix: fix linting to remove cli files
* fix: exclude only files that breaks black
* fix: Fix all Ruff checkings on the code and Fix Test with repeated name
* fix: Change linter name on yml file
* feat: update pre-commit
* feat: remove need for isort on the code
* feat: add mypy as type checker, update code and add comment to reference
* feat: remove black linter
* feat: remove poetry to run the command
* feat: change logic to test mypy
* feat: update tests yml to try to fix the tests gh action
* feat: try to add just mypy to run on gh action
* feat: fix yml file
* feat: add comment to avoid issue on gh action
* feat: decouple pytest from the necessity of poetry install
* feat: change tests.yml to test different approach
* feat: change to poetry run
* fix: parameter field on yml file
* fix: update parameters to be on the pyproject
* fix: update pyproject to remove import untyped errors
* fix: fix test actually running
* fix: fix test to not send request to openai
* fix: fix linting to remove cli files
* fix: exclude only files that breaks black
* fix: Fix all Ruff checkings on the code and Fix Test with repeated name
* fix: Change linter name on yml file
* feat: update pre-commit
* feat: remove need for isort on the code
* feat: remove black linter
* feat: update tests yml to try to fix the tests gh action
* Bug/curly_braces_yaml
Added a parser to help users with YAML syntax
* context error
Patch for now; will prioritize this again later to have context work with the YAML
* fix: fix test actually running
* fix: fix test to not send request to openai
* fix: fix linting to remove cli files
* fix: exclude only files that breaks black
* Update task.py: try to find json in task output using regex
Sometimes the model replies with valid JSON plus additional text; try to extract and validate that first. It's cheaper than calling the LLM for it.
* Update task.py
---------
Co-authored-by: João Moura <joaomdmoura@gmail.com>
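A hedged sketch of that "regex first, LLM later" idea; the real task.py implementation may differ.

```python
import json
import re


def try_extract_json(text: str):
    """Return parsed JSON embedded in the text, or None to fall back to the LLM."""
    match = re.search(r"\{.*\}", text, re.DOTALL)
    if not match:
        return None
    try:
        return json.loads(match.group(0))
    except json.JSONDecodeError:
        return None
```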
* adding variations of ask question and delegate work tools
* Revert "adding variations of ask question and delegate work tools"
This reverts commit 38d4589be8.
* adding distance calculation for tool names.
* proper formatting
* remove brackets
This commit adds a conditional check that creates the output file directory when it does not already exist, so the code no longer
fails in cases where the directory is missing. The check is added in the `_save_file` method of the `Task` class, ensuring
that the correct behavior is maintained for saving results to a file.
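A minimal sketch of that directory check (illustrative, not the exact Task code):

```python
import os


def _save_file(path: str, content: str) -> None:
    directory = os.path.dirname(path)
    if directory and not os.path.exists(directory):
        os.makedirs(directory, exist_ok=True)
    with open(path, "w", encoding="utf-8") as handle:
        handle.write(content)
```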
Reworded "If you want to also install crewai-tools, which is a package with tools that can be used by the agents, but more dependencies, you can install it with, example below uses it:" for clarity
Created short documentation on how to use Llama 2 locally with crewAI, with the help of Ollama.
Co-authored-by: João Moura <joaomdmoura@gmail.com>
* fix the issue where the suggestions were split at character level
* Update contextual_memory.py
---------
Co-authored-by: rajib76 <rajib76@yahoo.com>
Co-authored-by: João Moura <joaomdmoura@gmail.com>
I saw a rendered whitespace inconsistency in the Tasks docs here:
c87d887efc/docs/core-concepts/Tasks.md (L173)
So I set out to patch that up to make it easier to read. I then noticed
there were a few whitespace inconsistencies:
- 2 spaces
- 4 whitespaces
- tabs
It appears that 4 spaces is the prevalent whitespace usage, so
I overwrote the other whitespace usages with that in this commit.
Co-authored-by: Rueben Ramirez <rramirez@ruebens-mbp.tail7c016.ts.net>
Co-authored-by: João Moura <joaomdmoura@gmail.com>
* log_file
feature: added a new parameter for crew that creates a txt file to log agent execution
* unit tests and documentation
unit tests check whether the file is created, but not what is inside the file
* tasks.py: don't call Converter when model response is valid
Try to convert the task output to the expected Pydantic model before sending it to Converter, maybe the model got it right.
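A hedged sketch of that shortcut using Pydantic v2 (the actual code may use a different API depending on the Pydantic version in use at the time):

```python
from pydantic import BaseModel, ValidationError


def to_model_or_none(raw: str, model_cls: type[BaseModel]):
    """Validate raw output directly; return None so the caller falls back to the Converter."""
    try:
        return model_cls.model_validate_json(raw)
    except ValidationError:
        return None
```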
* feature: human input per task
* Update executor.py
* Update executor.py
* Update executor.py
* Update executor.py
* Update executor.py
* feat: change human input for unit testing
added documentation and unit test
* Create test_agent_human_input.yaml
add yaml for test
---------
Co-authored-by: João Moura <joaomdmoura@gmail.com>
Added the langchain "Tool" functionality by creating a Python function and then attaching it to the tool via the 'func' argument of the 'Tool' class.
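For example, wrapping a plain Python function as a LangChain Tool via `func`:

```python
from langchain.tools import Tool


def multiply(numbers: str) -> str:
    a, b = (float(x) for x in numbers.split(","))
    return str(a * b)


calculator = Tool(
    name="calculator",
    func=multiply,
    description="Multiply two comma-separated numbers, e.g. '3,4'.",
)
```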
This now allows adding a max_iter option to agents while also making sure to force the agent to give its best final answer before running out of its max_iter.
* fixing indentation for AgentTools
* updating gitignore to exclude quick test script
* starting prompt translation
* supporting individual task output
* adding agent to task output
* cutting new version
* Updating README example
* Refactoring task cache to be a tool
The previous implementation of the task caching system was early exiting
the agent executor due to the fact it was returning an AgentFinish object.
This now refactors it to use a cache specific tool that is dynamically
added and forced into the agent in case of a task execution that was
already executed with the same input.
New CrewAIBaseModel:
Base for Agent, Crew, Task.
Includes generated, frozen UUID.
Adds hashing capability
Migrate to ConfigDict:
Replaces class Config with model_config, see this deprecation note.
Benefits:
Adds auditing capability with frozen UUIDs.
* Adding tool caching and loop-execution prevention.
This adds some guardrails, to both prevent the same tool to be used
consecutively and also caching tool's results across the entire crew
so it cuts down execution time and eventual LLM calls.
This plays a huge role for smaller open-source models that usually fall
into those behavior patterns.
It also includes some smaller improvements around the tool prompt and
agent tools, all with the same intention of guiding models to
better conform with agent instructions.
Update to Pydantic v2:
Transitioned all references from pydantic.v1 to pydantic (v2), ensuring compatibility with the latest Pydantic features and improvements.
Affected components include agent tools, prompts, crew, and task modules.
Refactoring & Alignment with Pydantic Standards:
Refactored the agent module away from traditional __init__ to align more closely with Pydantic best practices.
Updated the crew module to Pydantic v2 and enhanced configurations, allowing JSON and dictionary inputs. Additionally, some (not all) exceptions have been migrated to leverage Pydantic's error-handling capabilities.
Enhancements to Validators and Typings:
Improved validators and type annotations across multiple modules, enhancing code readability and maintainability.
Streamlined the validation process in line with Pydantic v2's methodologies.
Import and Configuration Adjustments:
Switched test-related imports to absolute imports because Pytest had trouble finding packages through relative imports.
Increased the minimum Python version from 3.8.1 to 3.9 (most projects align with this); added pre-commit hooks with isort, black, and autoflake for code quality; updated the lock file.
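For reference, the kind of change involved looks roughly like the sketch below: v1-style @validator and class Config become @field_validator and model_config. The model shown is generic, not one from the codebase.

```python
# Generic example of the Pydantic v2 patterns adopted here; the model is
# illustrative, not taken from the codebase.
from pydantic import BaseModel, ConfigDict, field_validator


class TaskConfig(BaseModel):
    model_config = ConfigDict(extra="forbid")  # v2 replacement for `class Config`

    description: str

    @field_validator("description")
    @classmethod
    def description_not_blank(cls, value: str) -> str:
        if not value.strip():
            raise ValueError("description must not be blank")
        return value


print(TaskConfig(description="Write the release notes."))
```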
=="In the rapidly evolving landscape of technology, AI agents have emerged as formidable tools, revolutionizing how we interact with data and automate tasks. These sophisticated systems leverage machine learning and natural language processing to perform a myriad of functions, from virtual personal assistants to complex decision-making companions in industries such as finance, healthcare, and education. By mimicking human intelligence, AI agents can analyze massive data sets at unparalleled speeds, enabling businesses to uncover valuable insights, enhance productivity, and elevate user experiences to unprecedented levels.\n\nOne of the most striking aspects of AI agents is their adaptability; they learn from their interactions and continuously improve their performance over time. This feature is particularly valuable in customer service where AI agents can address inquiries, resolve issues, and provide personalized recommendations without the limitations of human fatigue. Moreover, with intuitive interfaces, AI agents enhance user interactions, making technology more accessible and user-friendly, thereby breaking down barriers that have historically hindered digital engagement.\n\nDespite their immense potential, the deployment of AI agents raises important ethical and practical considerations. Issues related to privacy, data security, and the potential for job displacement necessitate thoughtful dialogue and proactive measures. Striking a balance between technological innovation and societal impact will be crucial as organizations integrate these agents into their operations. Additionally, ensuring transparency in AI decision-making processes is vital to maintain public trust as AI agents become an integral part of daily life.\n\nLooking ahead, the future of AI agents appears bright, with ongoing advancements promising even greater capabilities. As we continue to harness the power of AI, we can expect these agents to play a transformative role in shaping various sectors—streamlining workflows, enabling smarter decision-making, and fostering more personalized experiences. Embracing this technology responsibly can lead to a future where AI agents not only augment human effort but also inspire creativity and efficiency across the board, ultimately redefining our interaction with the digital world."
"python_full_version >= '3.12' and python_full_version < '3.12.4'",
"python_full_version >= '3.12.4'",
"python_full_version < '3.11' and sys_platform == 'darwin'",
"python_full_version < '3.11' and platform_machine == 'aarch64' and sys_platform == 'linux'",
"(python_full_version < '3.11' and platform_machine != 'aarch64' and sys_platform == 'linux') or (python_full_version < '3.11' and sys_platform != 'darwin' and sys_platform != 'linux')",
"python_full_version == '3.11.*' and sys_platform == 'darwin'",
"python_full_version == '3.11.*' and platform_machine == 'aarch64' and sys_platform == 'linux'",
"(python_full_version == '3.11.*' and platform_machine != 'aarch64' and sys_platform == 'linux') or (python_full_version == '3.11.*' and sys_platform != 'darwin' and sys_platform != 'linux')",
"python_full_version >= '3.12' and python_full_version < '3.12.4' and sys_platform == 'darwin'",
"python_full_version >= '3.12' and python_full_version < '3.12.4' and platform_machine == 'aarch64' and sys_platform == 'linux'",
"(python_full_version >= '3.12' and python_full_version < '3.12.4' and platform_machine != 'aarch64' and sys_platform == 'linux') or (python_full_version >= '3.12' and python_full_version < '3.12.4' and sys_platform != 'darwin' and sys_platform != 'linux')",
"python_full_version >= '3.12.4' and sys_platform == 'darwin'",
"python_full_version >= '3.12.4' and platform_machine == 'aarch64' and sys_platform == 'linux'",
"(python_full_version >= '3.12.4' and platform_machine != 'aarch64' and sys_platform == 'linux') or (python_full_version >= '3.12.4' and sys_platform != 'darwin' and sys_platform != 'linux')",
{name="nvidia-cublas-cu12",marker="(platform_machine != 'aarch64' and platform_system != 'Darwin') or (platform_system != 'Darwin' and platform_system != 'Linux')"},
{name="nvidia-cublas-cu12",marker="(platform_machine != 'aarch64' and platform_system != 'Darwin') or (platform_system != 'Darwin' and platform_system != 'Linux' and sys_platform != 'linux')"},
@@ -2917,9 +2927,9 @@ name = "nvidia-cusolver-cu12"
version="11.4.5.107"
source={registry="https://pypi.org/simple"}
dependencies=[
{name="nvidia-cublas-cu12",marker="(platform_machine != 'aarch64' and platform_system != 'Darwin') or (platform_system != 'Darwin' and platform_system != 'Linux')"},
{name="nvidia-cusparse-cu12",marker="(platform_machine != 'aarch64' and platform_system != 'Darwin') or (platform_system != 'Darwin' and platform_system != 'Linux')"},
{name="nvidia-nvjitlink-cu12",marker="(platform_machine != 'aarch64' and platform_system != 'Darwin') or (platform_system != 'Darwin' and platform_system != 'Linux')"},
{name="nvidia-cublas-cu12",marker="(platform_machine != 'aarch64' and platform_system != 'Darwin') or (platform_system != 'Darwin' and platform_system != 'Linux' and sys_platform != 'linux')"},
{name="nvidia-cusparse-cu12",marker="(platform_machine != 'aarch64' and platform_system != 'Darwin') or (platform_system != 'Darwin' and platform_system != 'Linux' and sys_platform != 'linux')"},
{name="nvidia-nvjitlink-cu12",marker="(platform_machine != 'aarch64' and platform_system != 'Darwin') or (platform_system != 'Darwin' and platform_system != 'Linux' and sys_platform != 'linux')"},
@@ -2930,7 +2940,7 @@ name = "nvidia-cusparse-cu12"
version="12.1.0.106"
source={registry="https://pypi.org/simple"}
dependencies=[
{name="nvidia-nvjitlink-cu12",marker="(platform_machine != 'aarch64' and platform_system != 'Darwin') or (platform_system != 'Darwin' and platform_system != 'Linux')"},
{name="nvidia-nvjitlink-cu12",marker="(platform_machine != 'aarch64' and platform_system != 'Darwin') or (platform_system != 'Darwin' and platform_system != 'Linux' and sys_platform != 'linux')"},
{name="filelock",marker="(platform_machine != 'aarch64' and platform_system != 'Darwin') or (platform_system != 'Darwin' and platform_system != 'Linux')"},
{name="filelock",marker="(platform_machine != 'aarch64' and platform_system != 'Darwin') or (platform_system != 'Darwin' and platform_system != 'Linux' and sys_platform != 'linux')"},