* test: Add test demonstrating knowledge not included in planning process
Issue #1703: Add test to verify that agent knowledge sources are not currently
included in the planning process. This test will help validate the fix once
implemented.
- Creates agent with knowledge sources
- Verifies knowledge context missing from planning
- Checks other expected components are present
Co-Authored-By: Joe Moura <joao@crewai.com>
* fix: Include agent knowledge in planning process
Issue #1703: Integrate agent knowledge sources into planning summaries
- Add agent_knowledge field to task summaries in planning_handler
- Update test to verify knowledge inclusion
- Ensure knowledge context is available during planning phase
The planning agent now has access to agent knowledge when creating
task execution plans, allowing for better-informed planning decisions.
Co-Authored-By: Joe Moura <joao@crewai.com>
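A hedged sketch of the planning_handler change described above; the helper name and dict layout are illustrative, not the exact source:
```python
from typing import List

def create_tasks_summary(tasks: List["Task"]) -> str:  # "Task" is crewai's task type
    """Build the per-task summary the planning agent sees, now surfacing
    each agent's knowledge sources alongside its tools."""
    summaries = []
    for task in tasks:
        summary = {
            "task_description": task.description,
            "task_expected_output": task.expected_output,
            "agent": task.agent.role if task.agent else None,
            "agent_tools": [tool.name for tool in task.agent.tools]
            if task.agent and task.agent.tools
            else "agent has no tools",
        }
        knowledge = getattr(task.agent, "knowledge_sources", None)
        if knowledge:  # the field is omitted entirely when empty
            summary["agent_knowledge"] = [str(source) for source in knowledge]
        summaries.append(str(summary))
    return "\n".join(summaries)
```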
* style: Fix import sorting in test_knowledge_planning.py
- Reorganize imports according to ruff linting rules
- Fix I001 linting error
Co-Authored-By: Joe Moura <joao@crewai.com>
* test: Update task summary assertions to include knowledge field
Co-Authored-By: Joe Moura <joao@crewai.com>
* fix: Update ChromaDB mock path and fix knowledge string formatting
Co-Authored-By: Joe Moura <joao@crewai.com>
* fix: Improve knowledge integration in planning process with error handling
Co-Authored-By: Joe Moura <joao@crewai.com>
* fix: Update task summary format for empty tools and knowledge
- Change empty tools message to 'agent has no tools'
- Remove agent_knowledge field when empty
- Update test assertions to match new format
- Improve test messages for clarity
Co-Authored-By: Joe Moura <joao@crewai.com>
* fix: Update string formatting for agent tools in task summary
Co-Authored-By: Joe Moura <joao@crewai.com>
* fix: Update string formatting for agent tools in task summary
Co-Authored-By: Joe Moura <joao@crewai.com>
* fix: Update string formatting for agent tools and knowledge in task summary
Co-Authored-By: Joe Moura <joao@crewai.com>
* fix: Update knowledge field formatting in task summary
Co-Authored-By: Joe Moura <joao@crewai.com>
* style: Fix import sorting in test_planning_handler.py
Co-Authored-By: Joe Moura <joao@crewai.com>
* style: Fix import sorting order in test_planning_handler.py
Co-Authored-By: Joe Moura <joao@crewai.com>
* test: Add ChromaDB mocking to test_create_tasks_summary_with_knowledge_and_tools
Co-Authored-By: Joe Moura <joao@crewai.com>
---------
Co-authored-by: Devin AI <158243242+devin-ai-integration[bot]@users.noreply.github.com>
Co-authored-by: Joe Moura <joao@crewai.com>
Co-authored-by: João Moura <joaomdmoura@gmail.com>
* docs: add comprehensive docstrings to Flow class and methods
- Added NumPy-style docstrings to all decorator functions
- Added detailed documentation to Flow class methods
- Included parameter types, return types, and examples
- Enhanced documentation clarity and completeness
Co-Authored-By: Joe Moura <joao@crewai.com>
* feat: add secure path handling utilities
- Add path_utils.py with safe path handling functions
- Implement path validation and security checks
- Integrate secure path handling in flow_visualizer.py
- Add path validation in html_template_handler.py
- Add comprehensive error handling for path operations
Co-Authored-By: Joe Moura <joao@crewai.com>
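A minimal sketch of the kind of check a path_utils module like the one above might expose; the function name and error type are assumptions:
```python
from pathlib import Path

def safe_path(base_dir: str, user_path: str) -> Path:
    """Resolve user_path under base_dir and reject anything that escapes it,
    e.g. via '..' segments or symlinks."""
    base = Path(base_dir).resolve()
    candidate = (base / user_path).resolve()
    if not candidate.is_relative_to(base):  # Path.is_relative_to needs Python 3.9+
        raise ValueError(f"Unsafe path outside {base}: {user_path}")
    return candidate
```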
* docs: add comprehensive docstrings and type hints to flow utils (#1819)
Co-Authored-By: Joe Moura <joao@crewai.com>
* fix: add type annotations and fix import sorting
Co-Authored-By: Joe Moura <joao@crewai.com>
* fix: add type annotations to flow utils and visualization utils
Co-Authored-By: Joe Moura <joao@crewai.com>
* fix: resolve import sorting and type annotation issues
Co-Authored-By: Joe Moura <joao@crewai.com>
* fix: properly initialize and update edge_smooth variable
Co-Authored-By: Joe Moura <joao@crewai.com>
---------
Co-authored-by: Devin AI <158243242+devin-ai-integration[bot]@users.noreply.github.com>
Co-authored-by: Joe Moura <joao@crewai.com>
* feat: add tiktoken as explicit dependency and document Rust requirement
- Add tiktoken>=0.8.0 as explicit dependency to ensure pre-built wheels are used
- Document Rust compiler requirement as fallback in README.md
- Addresses issue #1824 tiktoken build failure
Co-Authored-By: Joe Moura <joao@crewai.com>
* fix: adjust tiktoken version to ~=0.7.0 for dependency compatibility
- Update tiktoken dependency to ~=0.7.0 to resolve conflict with embedchain
- Maintain compatibility with crewai-tools dependency chain
- Addresses CI build failures
Co-Authored-By: Joe Moura <joao@crewai.com>
* docs: add troubleshooting section and make tiktoken optional
Co-Authored-By: Joe Moura <joao@crewai.com>
* Update README.md
---------
Co-authored-by: Devin AI <158243242+devin-ai-integration[bot]@users.noreply.github.com>
Co-authored-by: Joe Moura <joao@crewai.com>
Co-authored-by: João Moura <joaomdmoura@gmail.com>
* fix(manager_llm): handle coworker role name case/whitespace properly
- Add .strip() to agent name and role comparisons in base_agent_tools.py
- Add test case for varied role name cases and whitespace
- Fix issue #1503 with manager LLM delegation
Co-Authored-By: Joe Moura <joao@crewai.com>
* fix(manager_llm): improve error handling and add debug logging
- Add debug logging for better observability
- Add sanitize_agent_name helper method
- Enhance error messages with more context
- Add parameterized tests for edge cases:
- Embedded quotes
- Trailing newlines
- Multiple whitespace
- Case variations
- None values
- Improve error handling with specific exceptions
Co-Authored-By: Joe Moura <joao@crewai.com>
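A hedged sketch of the normalization the sanitize_agent_name helper above performs; the exact rules in base_agent_tools.py may differ:
```python
import re
from typing import Optional

def sanitize_agent_name(name: Optional[str]) -> str:
    """Normalize a coworker role for comparison: handle None, collapse
    internal whitespace (including newlines), strip quotes, and casefold."""
    if name is None:
        return ""
    collapsed = re.sub(r"\s+", " ", name).strip()
    return collapsed.strip("\"'").casefold()

# Delegation can then match roles case- and whitespace-insensitively:
# [a for a in agents if sanitize_agent_name(a.role) == sanitize_agent_name(coworker)]
```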
* style: fix import sorting in base_agent_tools and test_manager_llm_delegation
Co-Authored-By: Joe Moura <joao@crewai.com>
* fix(manager_llm): improve whitespace normalization in role name matching
Co-Authored-By: Joe Moura <joao@crewai.com>
* style: fix import sorting in base_agent_tools and test_manager_llm_delegation
Co-Authored-By: Joe Moura <joao@crewai.com>
* fix(manager_llm): add error message template for agent tool execution errors
Co-Authored-By: Joe Moura <joao@crewai.com>
* style: fix import sorting in test_manager_llm_delegation.py
Co-Authored-By: Joe Moura <joao@crewai.com>
---------
Co-authored-by: Devin AI <158243242+devin-ai-integration[bot]@users.noreply.github.com>
Co-authored-by: Joe Moura <joao@crewai.com>
* fix: Change storage initialization to None for KnowledgeStorage
* refactor: Change storage field to optional and improve error handling when saving documents
---------
Co-authored-by: João Moura <joaomdmoura@gmail.com>
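A hedged sketch of the shape of that refactor; the real class is a pydantic model and the exact messages differ:
```python
from typing import List, Optional

class Knowledge:  # illustrative stand-in for the real model
    def __init__(self) -> None:
        self.storage: Optional["KnowledgeStorage"] = None  # no longer eagerly created
        self.documents: List[str] = []

    def _save_documents(self) -> None:
        if self.storage is None:
            raise ValueError("Storage is not initialized.")
        try:
            self.storage.save(self.documents)
        except Exception as exc:
            raise RuntimeError(f"Failed to save documents: {exc}") from exc
```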
* Create Portkey-Observability-and-Guardrails.md
* crewAI update with new changes
* small change
---------
Co-authored-by: siddharthsambharia-portkey <siddhath.s@portkey.ai>
Co-authored-by: João Moura <joaomdmoura@gmail.com>
As mentioned in issue #1759, listing crewai-tools under dev-dependencies makes pip install it as a required dependency rather than an optional one.
Co-authored-by: João Moura <joaomdmoura@gmail.com>
* added tool for docling support
* docling support installation
* use file_paths instead of file_path
* fix import
* organized imports
* run_type docs
* needs to be list
* fixed logic
* logged but file_path is backwards compatible
* use file_paths instead of file_path 2
* added test for multiple sources for file_paths
* fix run-types
* enabling local files to work and type cleanup
* linted
* fix test and types
* fixed run types
* fix types
* renamed to CrewDoclingSource
* linted
* added docs
* resolve conflicts
---------
Co-authored-by: Brandon Hancock (bhancock_ai) <109994880+bhancockio@users.noreply.github.com>
Co-authored-by: Brandon Hancock <brandon@brandonhancock.io>
* Update llms.mdx (Gemini 2.0)
- Add Gemini 2.0 flash to Gemini table.
- Add link to 2 hosting paths for Gemini in Tip.
- Change to lowercase model slugs instead of names, for user convenience.
- Add https://artificialanalysis.ai/ as alternate leaderboard.
- Move Gemma to "other" tab.
* Update llm.py (gemini 2.0)
Add setting for Gemini 2.0 context window to llm.py
---------
Co-authored-by: Brandon Hancock (bhancock_ai) <109994880+bhancockio@users.noreply.github.com>
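A hedged sketch of the llm.py addition; the keys and sizes shown are illustrative, not the full table:
```python
# Context-window lookup table, with the new Gemini 2.0 entry
LLM_CONTEXT_WINDOW_SIZES = {
    "gpt-4o": 128_000,
    "gemini-2.0-flash": 1_048_576,  # entry added for Gemini 2.0
}

def get_context_window_size(model: str, default: int = 8_192) -> int:
    """Look up a model's context window, falling back to a conservative default."""
    return LLM_CONTEXT_WINDOW_SIZES.get(model, default)
```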
* apply agent ops changes and resolve merge conflicts
* Trying to fix tests
* add back in vcr
* update tools
* remove pkg_resources which was causing issues
* Fix tests
* experimenting to see if unique content is an issue with knowledge
* experimenting to see if unique content is an issue with knowledge
* update chromadb which seems to have issues with upsert
* generate new yaml for failing test
* Investigating upsert
* Drop patch
* Update cassettes
* Fix duplicate document issue
* more fixes
* add back in vcr
* new cassette for test
---------
Co-authored-by: Lorenze Jay <lorenzejaytech@gmail.com>
When using agentops, we have the option to pass the `skip_auto_end_session` parameter, which is supposed to prevent the session from ending when the `end_session` function is called by Crew.
The way it works is that `agentops.end_session` accepts an `is_auto_end` flag, and crewai should have passed it as `True` (it's `False` by default).
I have changed the code to pass `is_auto_end=True`.
Co-authored-by: Brandon Hancock (bhancock_ai) <109994880+bhancockio@users.noreply.github.com>
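A usage sketch of the interaction described above, assuming the agentops API as stated in the message:
```python
import agentops

# The user opts out of automatic session ending:
agentops.init(api_key="<AGENTOPS_API_KEY>", skip_auto_end_session=True)

# ... crew runs ...

# Crew's cleanup now marks its call as automatic, so agentops can honor the opt-out:
agentops.end_session("Success", is_auto_end=True)
```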
* drop 3.13
* revert
* Drop test cassette that was causing error
* trying to fix failing test
* adding thiago changes
* resolve final tests
* Drop skip
* drop pipeline
* Fix disk I/O error when resetting short-term memory.
Resets the chromadb client and nullifies references before
removing the directory.
* Nit for clarity
* did the same for knowledge_storage
* cleanup
* cleanup order
* Cleanup after the rm of the directories
---------
Co-authored-by: Lorenze Jay <lorenzejaytech@gmail.com>
Co-authored-by: Lorenze Jay <63378463+lorenzejay@users.noreply.github.com>
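A hedged sketch of the reset order described above; the attribute names are illustrative:
```python
import shutil

def reset_storage(storage) -> None:
    """Release the chromadb client before deleting its files, so the
    directory removal doesn't race open handles (the disk I/O error)."""
    if getattr(storage, "app", None) is not None:
        storage.app.reset()      # clear chromadb's own state first
    storage.app = None           # nullify references so file handles are freed
    storage.collection = None
    shutil.rmtree(storage.db_storage_path, ignore_errors=True)
```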
* drop metadata requirement
* fix linting
* Update docs for new knowledge
* more linting
* more linting
* make save_documents private
* update docs to the new way we use knowledge and include clearing memory
* incorporate #1683
* add in --version flag to cli. closes #1679.
* Fix env issue
* Add in suggestions from @caike to make sure RAGStorage doesn't exceed the OS file limit. Also included additional checks to support Windows.
* remove poetry.lock as pointed out by @sanders41 in #1574.
* Incorporate feedback from crewai reviewer
* Incorporate @lorenzejay feedback
* v1 of HITL working
* Drop print statements
* HITL code more robust. Still needs to be refactored.
* refactor and more clear messages
* Fix type issue
* fix tests
* Fix test again
* Drop extra print
Corrected the statement which said users cannot disable telemetry; users can now disable it by setting the environment variable OTEL_SDK_DISABLED to true.
Co-authored-by: Brandon Hancock (bhancock_ai) <109994880+bhancockio@users.noreply.github.com>
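For reference, disabling telemetry is just the standard OpenTelemetry switch:
```python
import os

# Set before importing/running crewai; "true" disables the OpenTelemetry SDK.
os.environ["OTEL_SDK_DISABLED"] = "true"
```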
* Knowledge project directory standard
* fixed types
* comment fix
* made base file knowledge source an abstract class
* cleaner validator on model_post_init
* fix type checker
* cleaner refactor
* better template
* added path to RAGStorage
* added path to short term and entity memory
* add path for long_term_storage for completeness
---------
Co-authored-by: Brandon Hancock (bhancock_ai) <109994880+bhancockio@users.noreply.github.com>
* Update example of how to use LangChain tools with correct syntax
* Use .env
* Add Code back
---------
Co-authored-by: Brandon Hancock (bhancock_ai) <109994880+bhancockio@users.noreply.github.com>
* docs: improve tasks documentation clarity and structure
- Add Task Execution Flow section
- Add variable interpolation explanation
- Add Task Dependencies section with examples
- Improve overall document structure and readability
- Update code examples with proper syntax highlighting
* docs: update agent documentation with improved examples and formatting
- Replace DuckDuckGoSearchRun with SerperDevTool
- Update code block formatting to be consistent
- Improve template examples with actual syntax
- Update LLM examples to use current models
- Clean up formatting and remove redundant comments
* docs: enhance LLM documentation with Cerebras provider and formatting improvements
* docs: simplify LLMs documentation title
* docs: improve installation guide clarity and structure
- Add clear Python version requirements with check command
- Simplify installation options to recommended method
- Improve upgrade section clarity for existing users
- Add better visual structure with Notes and Tips
- Update description and formatting
* docs: improve introduction page organization and clarity
- Update organizational analogy in Note section
- Improve table formatting and alignment
- Remove emojis from component table for cleaner look
- Add 'helps you' to make the note more action-oriented
* docs: add enterprise and community cards
- Add Enterprise deployment card in quickstart
- Add community card focused on open source discussions
- Remove deployment reference from community description
- Clean up introduction page cards
- Remove link from Enterprise description text
* docs: add code snippet to Getting Started section in flows.mdx
---------
Co-authored-by: Brandon Hancock (bhancock_ai) <109994880+bhancockio@users.noreply.github.com>
* added knowledge to agent level
* linted
* added doc
* added from suggestions
* added test
* fixes from discussion
* fix docs
* fix test
* rm cassette for knowledge_sources test as it's a mock and update agent doc string
* fix test
* rm unused
* linted
* V1 working
* clean up imports and prints
* more clean up and add tests
* fixing tests
* fix test
* fix linting
* Fix tests
* Fix linting
* add doc string as requested by eduardo
This commit adds an extra step to `crewai login` to ensure users also
log in to Tool Repository, that is, exchanging their Auth0 tokens for a
Tool Repository username and password to be used by UV downloads and API
tool uploads.
* initial knowledge
* WIP
* Adding core knowledge sources
* Improve types and better support for file paths
* added additional sources
* fix linting
* update yaml to include optional deps
* adding in lorenze feedback
* ensure embeddings are persisted
* improvements all around Knowledge class
* return this
* properly reset memory
* properly reset memory+knowledge
* consolidation and improvements
* linted
* cleanup rm unused embedder
* fix test
* fix duplicate
* generating cassettes for knowledge test
* updated default embedder
* None embedder to use default on pipeline cloning
* improvements
* fixed text_file_knowledge
* mypy src fixes
* type check fixes
* added extra cassette
* just mocks
* linted
* mock knowledge query to not spin up db
* linted
* verbose run
* put a flag
* fix
* adding docs
* better docs
* improvements from review
* more docs
* linted
* rm print
* more fixes
* clearer docs
* added docstrings and type hints for cli
---------
Co-authored-by: João Moura <joaomdmoura@gmail.com>
Co-authored-by: Lorenze Jay <lorenzejaytech@gmail.com>
This commit replaces .netrc with uv environment variables for installing
tools from private repositories. To store credentials, I created a new
and reusable settings file for the CLI in
`$HOME/.config/crewai/settings.json`.
The issue with .netrc files is that they are applied system-wide and are
scoped by hostname, meaning we can't differentiate tool repository
requests from regular requests to CrewAI's API.
Allow passing additional options from `crewai install` directly to
`uv sync`. This enables commands like `crewai install --locked` to work
as expected by forwarding all flags and options to the underlying uv
command.
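A sketch of how such pass-through forwarding is typically done with Click; the command internals here are illustrative, not the exact CLI source:
```python
import subprocess
import click

@click.command(context_settings={"ignore_unknown_options": True})
@click.argument("uv_args", nargs=-1, type=click.UNPROCESSED)
def install(uv_args: tuple) -> None:
    """Forward any extra flags (e.g. --locked) straight to `uv sync`."""
    subprocess.run(["uv", "sync", *uv_args], check=True)
```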
* updated CLI to allow for submitting API keys
* updated click prompt to remove default number
* removed all unnecessary comments
* feat: implement crew creation CLI command
- refactor code to multiple functions
- Added ability for users to select provider and model when using the crewai create command and save the API key to .env
* refactored select_choice function for early return
* refactored select_provider to have an early return
* cleanup of comments
* refactor/Move functions into utils file, added new provider file and migrated functions there, new constants file + general function refactor
* small comment cleanup
* fix unnecessary deps
* Added docs for new CLI provider + fixed missing API prompt
* Minor doc updates
* allow user to bypass api key entry + incorrect number selected logic + ruff formatting
* ruff updates
* Fix spelling mistake
---------
Co-authored-by: Brandon Hancock (bhancock_ai) <109994880+bhancockio@users.noreply.github.com>
Co-authored-by: Brandon Hancock <brandon@brandonhancock.io>
* ensure original embedding config works
* some fixes
* raise error on unsupported provider
* WIP: brandons notes
* fixes
* rm prints
* fixed docs
* fixed run types
* updates to add more docs and correct imports with huggingface embedding server enabled
---------
Co-authored-by: Brandon Hancock <brandon@brandonhancock.io>
* byom - short/entity memory
* better
* rm unneeded
* fix text
* use context
* rm dep and sync
* type check fix
* fixed test using new cassette
* fixing types
* fixed types
* fix types
* fixed types
* fixing types
* fix type
* cassette update
* just mock the return of short term mem
* remove print
* try catch block
* added docs
* adding error handling here
* feat: Start migrating to UV
* feat: add uv to flows
* feat: update docs on Poetry -> uv
* feat: update docs and uv.lock
* feat: update tests and github CI
* feat: run ruff format
* feat: update typechecking
* feat: fix type checking
* feat: update python version
* feat: type checking logic
* feat: adapt uv command to run the tool repo
* Adapt tool build command to uv
* feat: update logic to let only projects with a crew be deployed
* feat: add uv to tools
* fix: tests
* fix: remove breakpoint
* fix: test
* feat: add crewai update to migrate from poetry to uv
* fix: tests
* feat: add validation for ˆ character on pyproject
* feat: add run_crew to pyproject if it doesn't exist
* feat: add validation for poetry migration
* fix: warning
---------
Co-authored-by: Vinicius Brasil <vini@hey.com>
* Change all instances of crewAI to CrewAI and fix installation step
* Update the example to use YAML format
* Update to come after setup and edits
* Remove double tool instance
This commit adds a new command for adding custom PyPI index
credentials to the project. This was changed because credentials are now
user-scoped instead of organization-scoped.
* Almost working!
* It fully works but not clean enough
* Working but not clean enough
* Everything is working
* WIP. Working on adding and & or to flows. In the middle of setting up template for flow as well
* template working
* Everything is working
* More changes and todos
* Add more support for @start
* Router working now
* minor tweak to
* minor tweak to conditions and event handling
* Update logs
* Too trigger happy with cleanup
* Added in Thiago fix
* Flow passing results again
* Working on docs.
* made more progress updates on docs
* Finished talking about controlling flows
* add flow output
* fixed flow output section
* add crews to flows section is looking good now
* more flow doc changes
* Update docs and add more examples
* drop visualizer
* save visualizer
* pyvis is beginning to work
* pyvis working
* it is working
* regular methods and triggers working. Need to work on router next.
* properly identifying router and router children nodes. Need to fix color
* children router working. Need to support loops
* curving cycles but need to add curve conditionals
* everything is showing up properly, need to fix curves
* all working. needs to be cleaned up
* adjust padding
* drop lib
* clean up prior to PR
* incorporate joao feedback
* final tweaks for joao
* Refactor to make crews easier to understand
* update CLI and templates
* Fix crewai version in flows
* Fix merge conflict
This commit adds two commands to the CLI:
- `crewai tool publish`
- Builds the project using Poetry
- Uploads the tarball to CrewAI's tool repository
- `crewai tool install my-tool`
- Adds my-tool's index to Poetry and its credentials
- Installs my-tool from the custom index
* Prevent double slashes when joining URLs
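A minimal sketch of the double-slash fix described above:
```python
def join_url(base: str, path: str) -> str:
    """Join two URL parts without producing a double slash at the seam."""
    return f"{base.rstrip('/')}/{path.lstrip('/')}"

assert join_url("https://example.com/api/", "/crews") == "https://example.com/api/crews"
```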
* Move crewai.cli.deploy.utils to crewai.cli.utils
This commit moves this package so it's reusable across commands.
* Update Tasks.md
The formatting of the Tasks page was broken; fix the markdown formatting.
* Update LLM-Connections.md
LLM class has been moved to llm.py file
* rebuilding executor
* removing langchain
* Making all tests good
* fixing types and adding ability for not using system prompts
* improving types
* pleasing the types gods
* pleasing the types gods
* fixing parser, tools and executor
* making sure all tests pass
* final pass
* fixing type
* Updating Docs
* preparing to cut new version
* Updated CrewAI Documentation and Repository link in tools.poetry.urls
* Update pyproject.toml
---------
Co-authored-by: João Moura <joaomdmoura@gmail.com>
* feat: set basic structure deploy commands
* feat: add first iteration of CLI Deploy
* feat: some minor refactor
* feat: Add api, Deploy command and update cli
* feat: Remove test token
* feat: add auth0 lib, update cli and improve code
* feat: update code and decouple auth
* fix: parts of the code
* feat: Add token manager to encrypt access token and get and save tokens
* feat: add audience to constants
* feat: add subsystem saving credentials and remove comment of type hinting
* feat: add get crew version to send on header of request
* feat: add docstrings
* feat: add tests for authentication module
* feat: add tests for utils
* feat: add unit tests for cli
* feat: add tests
* feat: add deploy man tests
* feat: fix type checking issue
* feat: rename tests to pass ci
* feat: fix pr issues
* feat: fix get crewai version
* fix: add timeout for tests.yml
* Fixed agents. Now need to fix tasks.
* Add type fixes and fix task decorator
* Clean up logs
* fix more type errors
* Revert back to required
* Undo changes.
* Remove default none for properties that cannot be none
* Clean up comments
* Implement all of Guis feedback
* Clean up pipeline
* Make versioning dynamic in templates
* fix .env issues when openai is trying to use invalid keys
* Fix type checker issue in pipeline
* Fix tests.
* Add name and expected_output to TaskOutput
This commit adds task information to the TaskOutput class. This is
useful to provide extra context to callbacks.
* Populate task name from function names
This commit populates task name from function names when using
annotations.
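A hedged sketch of how an annotation can populate the task name; the decorator internals are illustrative:
```python
import functools

def task(func):
    """Record the wrapped function's name on the Task it builds, so the
    resulting TaskOutput can report which task produced it."""
    @functools.wraps(func)
    def wrapper(self, *args, **kwargs):
        built_task = func(self, *args, **kwargs)
        if not getattr(built_task, "name", None):
            built_task.name = func.__name__  # e.g. "research_task"
        return built_task
    return wrapper
```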
As per (https://github.com/langchain-ai/langchain/pull/16395), OpenAI functions don't accept tool names with spaces. Therefore, I added an exception handling snippet to raise an error if a custom tool name has a space.
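A sketch of that validation; the exception message is an assumption:
```python
import re

def validate_tool_name(name: str) -> str:
    """OpenAI function names must match ^[a-zA-Z0-9_-]+$, so reject spaces."""
    if not re.fullmatch(r"[a-zA-Z0-9_-]+", name):
        raise ValueError(
            f"Tool name {name!r} is invalid: OpenAI functions do not accept "
            "names containing spaces or special characters."
        )
    return name
```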
* WIP. Procedure appears to be working well. Working on mocking properly for tests
* All tests are passing now
* rshift working
* Add back in Gui's tool_usage fix
* WIP
* Going to start refactoring for pipeline_output
* Update terminology
* new pipeline flow with traces and usage metrics working. need to add more tests and make sure PipelineOutput behaves like CrewOutput
* Fix pipelineoutput to look more like crewoutput and taskoutput
* Implemented additional tests for pipeline. One test is failing. Need team support
* Update docs for pipeline
* Update pipeline to properly process input and output dictionary
* Update Pipeline docs
* Add back in commentary at top of pipeline file
* Starting to work on router
* Drop router for now. will add in separately
* In the middle of fixing router. A ton of circular dependencies. Moving over to a new design.
* WIP.
* Fix circular dependencies and updated PipelineRouter
* Add in Eduardo feedback. Still need to add in more commentary describing the design decisions for pipeline
* Add developer notes to explain what is going on in pipelines.
* Add doc strings
* Fix missing rag datatype
* WIP. Converting usage metrics from a dict to an object
* Fix tests that were checking usage metrics
* Drop todo
* Fix 1 type error in pipeline
* Update pipeline to use UsageMetric
* Add missing doc string
* WIP.
* Change names
* Rename variables based on joaos feedback
* Fix critical circular dependency issues. Now needing to fix trace issue.
* Tests working now!
* Add more tests which showed underlying issue with traces
* Fix tests
* Remove overly complicated test
* Add router example to docs
* Clean up end of docs
* Clean up docs
* Working on creating Crew templates and pipeline templates
* WIP.
* WIP
* Fix poetry install from templates
* WIP
* Restructure
* changes for lorenze
* more todos
* WIP: create pipelines cli working
* wrapped up router
* ignore mypy src on templates
* ignored signature of copy
* fix all verbose
* rm print statements
* brought back correct folders
* fixes missing folders and then rm print statements
* fixed tests
* fixed broken test
* fixed type checker
* fixed type ignore
* ignore types for templates
* needed
* revert
* exclude only required
* rm type errors on templates
* rm excluding type checks for template files on github action
* fixed missing quotes
---------
Co-authored-by: Brandon Hancock <brandon@brandonhancock.io>
* patching for non-gpt model
* removal of json_object tool name assignment
* fixed issue for smaller models due to instructions prompt
* fixing for ollama llama3 models
* WIP: generated summary from documents split, could also create memgpt approach
* WIP: need tests but user inputted summarization strategy implemented - handling context window exceeding errors
* rm extra line
* removed type ignores
* added tests
* handling n to summarize prompt
* code cleanup, using click for cli asker
* rm not used class
* better refactor
* reverted poetry lock
* reverted poetry.lock
* improved context window exceeding exception class
* feat: Add execution time to both task and testing feature
* feat: Remove unused functions
* feat: change test_crew to evaluate_crew to avoid issues with testing libs
* feat: fix tests
I think there is some mistake, because there is no such parameter as force_output_result; as the code shows, the correct parameter result_as_answer is set during agent creation, not task creation.
* Performed spell check across the entire documentation
Thank you once again!
* Performed spell check across most of the code base
Folders checked:
- agents
- cli
- memory
- project
- tasks
- telemetry
- tools
- translations
* Trying to add a max_token for the agents, so they limited by number of tokens.
* Performed spell check across the rest of the code base, and enhanced the yaml parser code a little
* Small change in the main agent doc
* Improve _save_file method to handle both dict and str inputs
- Add check for dict type input
- Use json.dump for dict serialization
- Convert non-dict inputs to string
- Remove type ignore comments
---------
Co-authored-by: João Moura <joaomdmoura@gmail.com>
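A hedged sketch of the resulting _save_file behavior:
```python
import json

def _save_file(self, result) -> None:
    """Serialize dict results as JSON; write everything else as plain text."""
    with open(self.output_file, "w", encoding="utf-8") as file:
        if isinstance(result, dict):
            json.dump(result, file, indent=2)
        else:
            file.write(str(result))
```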
* Fixed special character issue when converting json to models. Added numerous tests to ensure things work properly.
* Fix linting error and cleaned up tests
* Fix customer_converter_cls test failure
* Fixed tests. Thank you lorenze for pointing that out. added a few more to ensure converter creation works properly
* Address lorenze feedback
* Fix linting issues
* patching for non-gpt model
* removal of json_object tool name assignment
* fixed issue for smaller models due to instructions prompt
* fixing for ollama llama3 models
* closing brackets
* removed not used and fixes
* WIP: hierarchical unblock for async tasks
* added better test
* update name change
* added more test and crew manager cleanup
* remove prints
* code cleanup, no need to pass manager
* feat: add ability to set LLM for AgentPlanner on Crew
* feat: fixes issue on instantiating the ChatOpenAI on the crew
* docs: add docs for the planning_llm new parameter
* docs: change message to ChatOpenAI llm
* feat: add tests
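A usage sketch of the new parameter; `researcher` and `research_task` are assumed to be defined elsewhere, and whether a model string or an LLM instance is passed depends on the version:
```python
from crewai import Crew

crew = Crew(
    agents=[researcher],
    tasks=[research_task],
    planning=True,          # enable the AgentPlanner step before kickoff
    planning_llm="gpt-4o",  # new parameter: the LLM the planner uses
)
```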
* feat: add crew Testing/evaluating feature
* feat: add docs and add unit test
* feat: improve testing output table
* feat: add tests
* feat: fix type checking issue
* feat: add raise ValueError when testing if output is not the expected
* docs: add docs for Testing
* feat: improve tests and fix some issue
* feat: back to sync
* feat: change openai model
* feat: fix test
* WIP: yaml proper mapping for agents and agent
* WIP: added output_json and output_pydantic setup
* WIP: core logic added, need cleanup
* code cleanup
* updated docs and example template to use yaml to reference agents within tasks
* cleanup type errors
* Update Start-a-New-CrewAI-Project.md
---------
Co-authored-by: João Moura <joaomdmoura@gmail.com>
To solve:
I encountered an error while trying to use the tool. This was the error: DuckDuckGoSearchRun._run() got an unexpected keyword argument 'q'.
Tool duckduckgo_search accepts these inputs: A wrapper around DuckDuckGo Search. Useful for when you need to answer questions about current events. Input should be a search query.
refer: https://github.com/joaomdmoura/crewAI/issues/316
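A hedged sketch of the wrapper-style fix; import paths vary across langchain versions:
```python
from langchain.tools import Tool
from langchain_community.tools import DuckDuckGoSearchRun

# Expose the tool with a single string input, so the agent can't call it
# with an unexpected keyword argument like 'q'.
duckduckgo = DuckDuckGoSearchRun()
search_tool = Tool(
    name="duckduckgo_search",
    func=duckduckgo.run,
    description="Search the web for current events. Input should be a search query.",
)
```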
The memory documentation left me with a lot of questions, so I went through the code to find answers. I added this paragraph to explain what I found. Hope this is helpful.
* feat: add planning feature to crew
* feat: add test to planning handler and change to execute_async method
* docs: add planning parameter to the Core documentation
* docs: add planning docs
* fix: fix type checking issue
* fix: test and logic
* Cleaned up task execution to now have separate paths for async and sync execution. Updating all kickoff functions to return CrewOutput. WIP. Waiting for Joao feedback on async task execution with task_output
* Consistently storing async and sync output for context
* outline tests I need to create going forward
* Major rehaul of TaskOutput and CrewOutput. Updated all tests to work with new change. Need to add in a few final tricky async tests and add a few more to verify output types on TaskOutput and CrewOutput.
* Encountering issues with callback. Need to test on main. WIP
* working on tests. WIP
* WIP. Figuring out disconnect issue.
* Cleaned up logs now that I've isolated the issue to the LLM
* more wip.
* WIP. It looks like usage metrics has always been broken for async
* Update parent crew who is managing for_each loop
* Merge in main to bugfix/kickoff-for-each-usage-metrics
* Clean up code for review
* Add new tests
* Final cleanup. Ready for review.
* Moving copy functionality from Agent to BaseAgent
* Fix renaming issue
* Fix linting errors
* use BaseAgent instead of Agent where applicable
* Fixing missing function. Working on tests.
* WIP. Needing team to review change
* Fixing issues brought about by merge
* WIP: need to fix json encoder
* WIP need to fix encoder
* WIP
* WIP: replay working with async. need to add tests
* Implement major fixes from yesterday's group conversation. Now working on tests.
* The majority of tasks are working now. Need to fix converter class
* Fix final failing test
* Fix linting and type-checker issues
* Add more tests to fully test CrewOutput and TaskOutput changes
* Add in validation for async cannot depend on other async tasks.
* WIP: working replay feat fixing inputs, need tests
* WIP: core logic of seq and hier for executing tasks added into one
* Update validators and tests
* better logic for seq and hier
* replay working for both seq and hier just need tests
* fixed context
* added cli command + code cleanup TODO: need better refactoring
* refactoring for cleaner code
* added better tests
* removed todo comments and fixed some tests
* fix logging now all tests should pass
* cleaner code
* ensure replay is declared when replaying specific tasks
* ensure hierarchical works
* better typing for stored_outputs and separated task_output_handler
* added better tests
* added replay feature to crew docs
* easier cli command name
* fixing changes
* using sqlite instead of .json file for logging previous task_outputs
* tools fix
* added to docs and fixed tests
* fixed .db
* fixed docs and removed unneeded comments
* separating ltm and replay db
* fixed printing colors
* added how to doc
---------
Co-authored-by: Brandon Hancock <brandon@brandonhancock.io>
* feat: add max retry limit to agent execution
* feat: add test to max retry limit feature
* feat: add code execution docstring
---------
Co-authored-by: João Moura <joaomdmoura@gmail.com>
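A usage sketch of the new agent option:
```python
from crewai import Agent

agent = Agent(
    role="Researcher",
    goal="Find accurate answers",
    backstory="A meticulous analyst.",
    max_retry_limit=3,  # retry the agent's execution up to 3 times on error
)
```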
* Exploring output being passed to tool selector to see if we can better format data
* WIP. Adding JSON repair functionality
* Almost done implementing JSON repair. Testing fixes vs current base case.
* More action cleanup with additional tests
* WIP. Trying to figure out what is going on with tool descriptions
* Update tool description generation
* WIP. Trying to find out what is causing the tools to duplicate
* Replacing tools properly instead of duplicating them accidentally
* Fixing issues for MR
* Update dependencies for JSON_REPAIR
* More cleaning up pull request
* prepping for call
* Fix type-checking issues
---------
Co-authored-by: João Moura <joaomdmoura@gmail.com>
* fix: call asserts
* fix: test_increment_tool_errors
* fix: test_increment_delegations_for_sequential_process
* fix: test_increment_delegations_for_hierarchical_process
* fix: test_code_execution_flag_adds_code_tool_upon_kickoff
* fix: test_tool_usage_information_is_appended_to_agent
* fix: try to fix test_crew_full_output
* fix: try to fix test_crew_full_output
* fix: test remove vcr to test crew_test test
* fix: comment test to see if ci passes
* fix: comment test to see if ci passes
* fix: test changing prompt tokens to get error on CI
* fix: test changing prompt tokens to get error on CI
* fix: test changing prompt tokens to get error on CI
* fix: test changing prompt tokens to get error on CI
* fix: test new approach
* fix: comment function not working in CI
* fix: github python version
* fix: remove need of vcr
* fix: fix and add comments for all type checking errors
* Adding support to force a tool's return value to be the final answer.
At the end of the execution this will return the tool output.
It will return the output of the latest tool that has the flag set.
* Update src/crewai/agent.py
Co-authored-by: Gui Vieira <guilherme_vieira@me.com>
* Update tests/agent_test.py
Co-authored-by: Gui Vieira <guilherme_vieira@me.com>
---------
Co-authored-by: Gui Vieira <guilherme_vieira@me.com>
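A usage sketch of the flag; `MyTool` is a placeholder for any crewai tool class:
```python
from crewai import Agent

scraper = MyTool(result_as_answer=True)  # this tool's raw output becomes the final answer

agent = Agent(
    role="Scraper",
    goal="Fetch page content verbatim",
    backstory="Returns raw tool output without rephrasing.",
    tools=[scraper],
)
```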
* fixed bug for manager overriding task agent and then added pydantic validators to sequential when no agent is added to task
* better test and fixed task.agent logic
* fixed tests and better validator message
* added validator for async_execution true in tasks whenever in hierarchical run
- Added detailed steps for training the crew programmatically.
- Clarified the distinction between using the CLI and programmatic approaches.
This update makes it easier for users to understand how to train their crew both through the CLI and programmatically, whether using a UI or API endpoints.
Again, thank you to the author for the great project and the excellent foundation provided!
* Fix agentops poetry install issue
* Updated install requirements tests to fail if .lock becomes out of sync with poetry install. Cleaned up old issues that were merged back in.
* implements agentops with a langchain handler, agent tracking and tool call recording
* track tool usage
* end session after completion
* track tool usage time
* better tool and llm tracking
* code cleanup
* make agentops optional
* optional dependency usage
* remove telemetry code
* optional agentops
* agentops version bump
* remove org key
* true dependency
* add crew org key to agentops
* cleanup
* Update pyproject.toml
* Revert "true dependency"
This reverts commit e52e8e9568.
* Revert "cleanup"
This reverts commit 7f5635fb9e.
* optional parent key
* agentops 0.1.5
* Revert "Revert "cleanup""
This reverts commit cea33d9a5d.
* Revert "Revert "true dependency""
This reverts commit 4d1b460b
* cleanup
* Forcing version 0.1.5
* Update pyproject.toml
* agentops update
* noop
* add crew tag
* black formatting
* use langchain callback handler to support all LLMs
* agentops version bump
* track task evaluator
* merge upstream
* Fix typo in instruction en.json (#676)
* Enable search in docs (#663)
* Clarify text in docstring (#662)
* Update agent.py (#655)
Changed default model value from gpt-4 to gpt-4o.
Reasoning: gpt-4 costs $30 per million tokens while gpt-4o costs $5.
This is more cost-friendly for the default option.
* Update README.md (#652)
Rework example so that if you use a custom LLM, uncommenting the code doesn't throw errors.
* Update BrowserbaseLoadTool.md (#647)
* Update crew.py (#644)
Fixed Type on line 53
* fixes #665 (#666)
* Added timestamp to logger (#646)
* Added timestamp to logger
Updated the logger.py file to include timestamps when logging output. For example:
[2024-05-20 15:32:48][DEBUG]: == Working Agent: Researcher
[2024-05-20 15:32:48][INFO]: == Starting Task: Research the topic
[2024-05-20 15:33:22][DEBUG]: == [Researcher] Task output:
* Update tool_usage.py
* Revert "Update tool_usage.py"
This reverts commit 95d18d5b6f.
incorrect branch for this commit
* support skip auto end session
* conditional protect agentops use
* fix crew logger bug
* fix crew logger bug
* Update crew.py
* Update tool_usage.py
---------
Co-authored-by: João Moura <joaomdmoura@gmail.com>
Co-authored-by: Howard Gil <howardbgil@gmail.com>
Co-authored-by: Olivier Roberdet <niox5199@gmail.com>
Co-authored-by: Paul Sanders <psanders1@gmail.com>
Co-authored-by: Anudeep Kolluri <50168940+Anudeep-Kolluri@users.noreply.github.com>
Co-authored-by: Mike Heavers <heaversm@users.noreply.github.com>
Co-authored-by: Mish Ushakov <10400064+mishushakov@users.noreply.github.com>
Co-authored-by: theCyberTech - Rip&Tear <84775494+theCyberTech@users.noreply.github.com>
Co-authored-by: Saif Mahmud <60409889+vmsaif@users.noreply.github.com>
To resolve:
pydantic_core._pydantic_core.ValidationError: 1 validation error for Task
expected_output
Field required [type=missing, input_value=, input_type=dict]
For further information visit https://errors.pydantic.dev/2.6/v/missing
"Expected Output" is mandatory now as it forces people to be specific about the expected result and get better result
refer : https://github.com/joaomdmoura/crewAI/issues/308
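A usage sketch showing the now-mandatory field; `researcher` is assumed to be defined elsewhere:
```python
from crewai import Task

task = Task(
    description="Summarize the latest AI research",
    expected_output="A five-bullet summary with sources",  # required; omitting it raises the error above
    agent=researcher,
)
```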
- Added a "Parameters" column to attribute tables. Improved overall document formatting for enhanced readability and ease of use.
Thank you to the author for the great project and the excellent foundation provided!
Revised to utilize Ollama from langchain.llms instead as the functionality from the other method simply doesn't work when delegating.
Co-authored-by: João Moura <joaomdmoura@gmail.com>
fixed error for some cases with Pandas DataFrame:
ValueError: The truth value of a DataFrame is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all().
* better spacing
* works with llama index
* works on langchain custom just need delegation to work
* cleanup for custom_agent class
* works with different argument expectations for agent_executor
* cleanup for hierarchical process, better agent_executor args handler and added to the crew agent doc page
* removed code examples for langchain + llama index, added to docs instead
* added key output if return is not a str, and added some tests
* added hinting for CustomAgent class
* removed pass as it was not needed
* closer; just need to figure out agentTools
* running agents - llamaindex and langchain with base agent
* some cleanup on baseAgent
* minimum for agent to run for base class and ensure it works with hierarchical process
* cleanup for original agent to take on BaseAgent class
* Agent takes on langchainagent and cleanup across
* token handling working for usage_metrics to continue working
* installed llama-index, updated docs and added better name
* fixed some type errors
* base agent holds token_process
* hierarchical process uses proper tools and no longer relies on hasattr for token_processes
* removal of test_custom_agent_executions
* this fixes copying agents
* leveraging an executor class for triggering the llamaindex agent
* llama index now has ask_human
* executor mixins added
* added output converter base class
* type listed
* cleanup for output conversions and tokenprocess eliminated redundancy
* properly handling tokens
* simplified token calc handling
* original agent with base agent builder structure setup
* better docs
* no more llama-index dep
* cleaner docs
* test fixes
* poetry reverts and better docs
* base_agent_tools set for third party agents
* updated task and test fix
* feat: add CodeInterpreterTool to run when code execution is enabled on the agent
* feat: change to allow_code_execution
* feat: add readme for CodeInterpreterTool
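A usage sketch of the flag:
```python
from crewai import Agent

agent = Agent(
    role="Data analyst",
    goal="Crunch the numbers",
    backstory="Comfortable writing and running Python.",
    allow_code_execution=True,  # attaches the CodeInterpreterTool at kickoff
)
```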
* feat: add training logic to agent and crew
* feat: add training logic to agent executor
* feat: add input parameter to cli command
* feat: add utilities for the training logic
* feat: polish code, logic and add private variables
* feat: add docstring and type hinting to executor
* feat: add constant file, add constant to code
* feat: fix name of training handler function
* feat: remove unused var
* feat: change file handler file name
* feat: Add training handler file, class and change on the code
* feat: fix name error from file
* fix: change import to adapt to logic
* feat: add training handler test
* feat: add tests for file and training_handler
* feat: add test for task evaluator function
* feat: change text to fit in-screen
* feat: add test for train function
* feat: add test for agent training_handler function
* feat: add test for agent._use_trained_data
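A rough usage sketch of the training entry point these commits build up (argument names are assumptions based on the commit messages; adjust to the actual signature):
```python
from crewai import Crew  # assumes agents/tasks are defined as usual

crew = Crew(agents=[researcher], tasks=[research_task])  # hypothetical names

# Run the crew repeatedly, collecting human feedback each iteration and
# persisting the learned adjustments for later runs.
crew.train(n_iterations=2, filename="trained_agents_data.pkl")
```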
* removed hyphen in co-workers
* Fix issue with AgentTool agent selection. The LLM included double quotes in the agent name which messed up the string comparison. Added additional types. Cleaned up error messaging.
* Remove duplicate import
* Improve explanation
* Revert poetry.lock changes
* Fix missing line in poetry.lock
---------
Co-authored-by: madmag77 <goncharov.artemv@gmail.com>
* added extra parameter for kickoff to return token usage count after result
* added output_token_usage to class and in full_output
* logger duplicated
* added more types
* added usage_metrics to full output instead
* added more to the description on full_output
* possible misspacing
* updated kickoff return types to be either string or dict applicable when full_output is set
* removed duplicates
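A sketch of how the `full_output` option described above might be used (the exact return keys are assumptions inferred from the commit messages):
```python
from crewai import Crew

crew = Crew(agents=[researcher], tasks=[research_task], full_output=True)  # hypothetical names
output = crew.kickoff()

# With full_output=True, kickoff returns a dict instead of a plain string.
print(output["final_output"])    # assumed key for the result text
print(output["usage_metrics"])   # assumed key for the token usage counts
```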
* updates instructor to the latest version. adds jsonref, which instructor seems to depend on.
* updates embedchain reference, necessary for python 3.12
* fix: 'from datetime import datetime for logging' to print the timestamp
* fix: correct default model (gpt-4o), correct token counts, and correct TaskOutput attributes (added agent)
* test: verify Task callback data is an instance of TaskOutput
* Sync with deep copy working now
* async working!!
* Clean up code for review
* Fix naming
---------
Co-authored-by: João Moura <joaomdmoura@gmail.com>
* Added timestamp to logger
Updated the logger.py file to include timestamps when logging output. For example:
[2024-05-20 15:32:48][DEBUG]: == Working Agent: Researcher
[2024-05-20 15:32:48][INFO]: == Starting Task: Research the topic
[2024-05-20 15:33:22][DEBUG]: == [Researcher] Task output:
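A minimal sketch of producing that prefix (the real logger.py differs in structure):
```python
from datetime import datetime

def log(level: str, message: str) -> None:
    # Prefix each line with [timestamp][LEVEL], matching the sample output above.
    timestamp = datetime.now().strftime("%Y-%m-%d %H:%M:%S")
    print(f"[{timestamp}][{level.upper()}]: {message}")

log("debug", "== Working Agent: Researcher")
```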
* Update tool_usage.py
* Revert "Update tool_usage.py"
This reverts commit 95d18d5b6f.
incorrect branch for this commit
Changed default model value from gpt-4 to gpt-4o.
Reasoning:
gpt-4 costs $30 per million tokens while gpt-4o costs $5.
This makes gpt-4o the more cost-friendly default option.
* mentioning ollama on the docs as embedder (see the sketch after this list)
* lowering barrier to match tool with similar name
* Fixing agent tools to support co_worker
* Adding new tests
* Fixing type"
* updating tests
* fixing conflict
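For the Ollama-as-embedder doc change above, a configuration sketch (the exact config keys vary between crewai versions; treat them as assumptions):
```python
from crewai import Crew

crew = Crew(
    agents=[researcher],          # hypothetical names
    tasks=[research_task],
    memory=True,
    embedder={
        "provider": "ollama",
        "config": {"model": "nomic-embed-text"},  # any Ollama embedding model
    },
)
```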
* fix: fix test actually running
* fix: fix test to not send request to openai
* fix: fix linting to remove cli files
* fix: exclude only files that break black
* fix: Fix all Ruff checks on the code and fix test with repeated name
* fix: Change linter name on yml file
* feat: update pre-commit
* feat: remove need for isort on the code
* feat: add mypy as type checker, update code and add comment to reference
* feat: remove black linter
* feat: remove poetry to run the command
* feat: change logic to test mypy
* feat: update tests yml to try to fix the tests gh action
* feat: try to add just mypy to run on gh action
* feat: fix yml file
* feat: add comment to avoid issue on gh action
* feat: decouple pytest from the necessity of poetry install
* feat: change tests.yml to test different approach
* feat: change to poetry run
* fix: parameter field on yml file
* fix: update parameters to be on the pyproject
* fix: update pyproject to remove import untyped errors
* Bug/curly_braces_yaml
Added parser to help users with YAML syntax
* context error
Patch for now; will prioritize this again later to have context work with the YAML
* Update task.py: try to find json in task output using regex
Sometimes the model replies with valid JSON plus additional text; let's try to extract and validate it first. It's cheaper than calling the LLM for that.
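A sketch of the regex-first extraction idea (illustrative, not the exact task.py code):
```python
import json
import re

def extract_json(text: str):
    # Grab the first {...} span and try to parse it before falling back
    # to an LLM-based converter.
    match = re.search(r"\{.*\}", text, re.DOTALL)
    if match:
        try:
            return json.loads(match.group())
        except json.JSONDecodeError:
            pass
    return None

print(extract_json('Sure! Here is the result: {"score": 0.9}'))
```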
* Update task.py
---------
Co-authored-by: João Moura <joaomdmoura@gmail.com>
* adding variations of ask question and delegate work tools
* Revert "adding variations of ask question and delegate work tools"
This reverts commit 38d4589be8.
* adding distance calculation for tool names (see the sketch below)
* proper formatting
* remove brackets
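For the tool-name distance calculation above, a minimal fuzzy-matching sketch using the standard library (the real implementation may use a different metric):
```python
import difflib

available_tools = ["Delegate work to co-worker", "Ask question to co-worker"]

def match_tool(requested: str):
    # Tolerate small misspellings in the tool name produced by the LLM.
    matches = difflib.get_close_matches(requested, available_tools, n=1, cutoff=0.85)
    return matches[0] if matches else None

print(match_tool("Delegate work to coworker"))  # -> "Delegate work to co-worker"
```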
This commit adds a conditional check that creates the output file directory if it does not already exist, so the code no longer
fails when the directory is missing. The check is added in the `_save_file` method of the `Task` class, ensuring
that the correct behavior is maintained when saving results to a file.
Reworded "If you want to also install crewai-tools, which is a package with tools that can be used by the agents, but more dependencies, you can install it with, example below uses it:" for clarity
Created a short documentation on how to use Llama2 locally with crewAI thanks to the help of Ollama.
Co-authored-by: João Moura <joaomdmoura@gmail.com>
* fix the issue where the suggestions were split at character level
* Update contextual_memory.py
---------
Co-authored-by: rajib76 <rajib76@yahoo.com>
Co-authored-by: João Moura <joaomdmoura@gmail.com>
I saw a rendered whitespace inconsistency in the Tasks docs here:
c87d887efc/docs/core-concepts/Tasks.md (L173)
So I set out to patch that up to make it easier to read. I then noticed
there were a few whitespace inconsistencies:
- 2 spaces
- 4 spaces
- tabs
It appears that 4 spaces is the prevalent whitespace usage, so
I overwrote other whitespace usages with that in this commit.
Co-authored-by: Rueben Ramirez <rramirez@ruebens-mbp.tail7c016.ts.net>
Co-authored-by: João Moura <joaomdmoura@gmail.com>
* log_file
feature: added a new parameter for crew that creates a txt file to log agent execution
* unit tests and documentation
unit tests check that the file is created, but not its contents
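A usage sketch of the new parameter (per the commit, True defaults to logs.txt in the current folder; a string sets a custom path):
```python
from crewai import Crew

crew = Crew(
    agents=[researcher],          # hypothetical names
    tasks=[research_task],
    output_log_file=True,         # or "logs/crew_run.txt"
)
```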
* tasks.py: don't call Converter when model response is valid
Try to convert the task output to the expected Pydantic model before sending it to Converter, maybe the model got it right.
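A sketch of the cheap-path-first idea with Pydantic v2 (illustrative, not the exact tasks.py code):
```python
from pydantic import BaseModel, ValidationError

class Report(BaseModel):
    title: str
    score: float

def to_model(raw_output: str):
    # Cheap path first: maybe the model already emitted valid JSON.
    try:
        return Report.model_validate_json(raw_output)
    except ValidationError:
        return None  # only now fall back to the LLM-based Converter

print(to_model('{"title": "Q3 review", "score": 0.87}'))
```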
* feature: human input per task
* Update executor.py
* Update executor.py
* Update executor.py
* Update executor.py
* Update executor.py
* feat: change human input for unit testing
added documentation and unit test
* Create test_agent_human_input.yaml
add yaml for test
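A usage sketch of per-task human input (assuming a `writer` agent defined elsewhere):
```python
from crewai import Task

review_task = Task(
    description="Draft the summary and ask the human reviewer to approve it",
    expected_output="An approved summary paragraph",
    agent=writer,        # hypothetical Agent defined elsewhere
    human_input=True,    # pause for human feedback before finalizing
)
```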
---------
Co-authored-by: João Moura <joaomdmoura@gmail.com>
Added the langchain "Tool" functionality by creating a Python function and then attaching it to the tool via the 'func' argument of the 'Tool' constructor.
This now allows adding a max_iter option to agents while also making sure to force the agent to give its best final answer before running out of iterations.
* fixing indentation for AgentTools
* updating gitignore to exclude quick test script
* starting prompt translation
* supporting individual task output
* adding agent to task output
* cutting new version
* Updating README example
* Refactoring task cache to be a tool
The previous implementation of the task caching system was exiting
the agent executor early because it returned an AgentFinish object.
This refactors it to use a cache-specific tool that is dynamically
added and forced onto the agent when a task has already been
executed with the same input.
New CrewAIBaseModel:
Base for Agent, Crew, Task.
Includes generated, frozen UUID.
Adds hashing capability.
Migrate to ConfigDict:
Replaces class Config with model_config; see the Pydantic deprecation note.
Benefits:
Adds auditing capability with frozen UUIDs.
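A minimal sketch of such a base model in Pydantic v2 (the class name and config options are assumptions):
```python
import uuid
from pydantic import BaseModel, ConfigDict, Field

class CrewAIBaseModelSketch(BaseModel):
    # model_config replaces the deprecated `class Config` of Pydantic v1.
    model_config = ConfigDict(arbitrary_types_allowed=True)

    # Generated once, then immutable: a stable identity for hashing and auditing.
    id: uuid.UUID = Field(default_factory=uuid.uuid4, frozen=True)

    def __hash__(self) -> int:
        return hash(self.id)
```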
* Adding tool caching and loop execution prevention.
This adds some guardrails, both to prevent the same tool from being
used consecutively and to cache tool results across the entire crew,
so it cuts down execution time and eventual LLM calls.
This plays a huge role for smaller open source models that usually fall
into those behavior patterns.
It also includes some smaller improvements around the tool prompt and
agent tools, all with the same intention of guiding models to
better conform to agent instructions.
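A toy sketch of the two guardrails described above (the real implementation is structured differently):
```python
class ToolGuard:
    """Cache tool results crew-wide and block immediate repeats."""

    def __init__(self):
        self.cache = {}        # (tool_name, tool_input) -> result
        self.last_call = None  # previous (tool_name, tool_input) pair

    def run(self, tool_name, tool_input, tool_fn):
        key = (tool_name, tool_input)
        if key == self.last_call:
            return "Tool already used consecutively with this exact input."
        self.last_call = key
        if key not in self.cache:  # reuse a cached result, skipping the call
            self.cache[key] = tool_fn(tool_input)
        return self.cache[key]
```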
Update to Pydantic v2:
Transitioned all references from pydantic.v1 to pydantic (v2), ensuring compatibility with the latest Pydantic features and improvements.
Affected components include agent tools, prompts, crew, and task modules.
Refactoring & Alignment with Pydantic Standards:
Refactored the agent module away from traditional __init__ to align more closely with Pydantic best practices.
Updated the crew module to Pydantic v2 and enhanced configurations, allowing JSON and dictionary inputs. Additionally, some (not all) exceptions have been migrated to leverage Pydantic's error-handling capabilities.
Enhancements to Validators and Typings:
Improved validators and type annotations across multiple modules, enhancing code readability and maintainability.
Streamlined the validation process in line with Pydantic v2's methodologies.
Import and Configuration Adjustments:
Updated tests to use absolute imports due to issues with Pytest finding packages through relative imports.
Increased minimum Python version from 3.8.1 to 3.9 - most projects align with this; added pre-commit hooks, isort, black, & autoflake for code quality; updated lock file.
🤖 **CrewAI**: Cutting-edge framework for orchestrating role-playing, autonomous AI agents. By fostering collaborative intelligence, CrewAI empowers agents to work together seamlessly, tackling complex tasks.
🤖 **CrewAI**: Production-grade framework for orchestrating sophisticated AI agent systems. From simple automations to complex real-world applications, CrewAI provides precise control and deep customization. By fostering collaborative intelligence through flexible, production-ready architecture, CrewAI empowers agents to work together seamlessly, tackling complex business challenges with predictable, consistent results.
<h3>
@@ -22,13 +22,17 @@
- [Why CrewAI?](#why-crewai)
- [Getting Started](#getting-started)
- [Key Features](#key-features)
- [Understanding Flows and Crews](#understanding-flows-and-crews)
The power of AI collaboration has too much to offer.
CrewAI is designed to enable AI agents to assume roles, share goals, and operate in a cohesive unit - much like a well-oiled crew. Whether you're building a smart assistant platform, an automated customer service ensemble, or a multi-agent research team, CrewAI provides the backbone for sophisticated multi-agent interactions.
CrewAI is a standalone framework, built from the ground up without dependencies on Langchain or other agent frameworks. It's designed to enable AI agents to assume roles, share goals, and operate in a cohesive unit - much like a well-oiled crew. Whether you're building a smart assistant platform, an automated customer service ensemble, or a multi-agent research team, CrewAI provides the backbone for sophisticated multi-agent interactions.
## Getting Started
### Learning Resources
Learn CrewAI through our comprehensive courses:
- [Multi AI Agent Systems with CrewAI](https://www.deeplearning.ai/short-courses/multi-ai-agent-systems-with-crewai/) - Master the fundamentals of multi-agent systems
- [Practical Multi AI Agents and Advanced Use Cases](https://www.deeplearning.ai/short-courses/practical-multi-ai-agents-and-advanced-use-cases-with-crewai/) - Deep dive into advanced implementations
### Understanding Flows and Crews
CrewAI offers two powerful, complementary approaches that work seamlessly together to build sophisticated AI applications:
1. **Crews**: Teams of AI agents with true autonomy and agency, working together to accomplish complex tasks through role-based collaboration. Crews enable:
- Natural, autonomous decision-making between agents
- Dynamic task delegation and collaboration
- Specialized roles with defined goals and expertise
- Flexible problem-solving approaches
2. **Flows**: Production-ready, event-driven workflows that deliver precise control over complex automations. Flows provide:
- Fine-grained control over execution paths for real-world scenarios
- Secure, consistent state management between tasks
- Clean integration of AI agents with production Python code
- Conditional branching for complex business logic
The true power of CrewAI emerges when combining Crews and Flows. This synergy allows you to:
- Build complex, production-grade applications
- Balance autonomy with precise control
- Handle sophisticated real-world scenarios
- Maintain clean, maintainable code structure
### Getting Started with Installation
To get started with CrewAI, follow these simple steps:
### 1. Installation
Ensure you have Python >=3.10 <=3.13 installed on your system. CrewAI uses [UV](https://docs.astral.sh/uv/) for dependency management and package handling, offering a seamless setup and execution experience.
Ensure you have Python >=3.10 <3.13 installed on your system. CrewAI uses [UV](https://docs.astral.sh/uv/) for dependency management and package handling, offering a seamless setup and execution experience.
First, install CrewAI:
```shell
pip install crewai
```
If you want to install the 'crewai' package along with its optional features that include additional tools for agents, you can do so by using the following command:
```shell
@@ -59,6 +92,22 @@ pip install 'crewai[tools]'
```
The command above installs the basic package and also adds extra components which require more dependencies to function.
### Troubleshooting Dependencies
If you encounter issues during installation or usage, here are some common solutions:
#### Common Issues
1. **ModuleNotFoundError: No module named 'tiktoken'**
- If using embedchain or other tools: `pip install 'crewai[tools]'`
2. **Failed building wheel for tiktoken**
- Ensure Rust compiler is installed (see installation steps above)
- For Windows: Verify Visual C++ Build Tools are installed
- Try upgrading pip: `pip install --upgrade pip`
- If issues persist, use a pre-built wheel: `pip install tiktoken --prefer-binary`
### 2. Setting Up Your Crew with the YAML Configuration
To create a new CrewAI project, run the following CLI (Command Line Interface) command:
@@ -264,13 +313,16 @@ In addition to the sequential process, you can use the hierarchical process, whi
## Key Features
- **Role-Based Agent Design**: Customize agents with specific roles, goals, and tools.
- **Autonomous Inter-Agent Delegation**: Agents can autonomously delegate tasks and inquire amongst themselves, enhancing problem-solving efficiency.
- **Flexible Task Management**: Define tasks with customizable tools and assign them to agents dynamically.
- **Processes Driven**: Currently only supports `sequential` task execution and `hierarchical` processes, but more complex processes like consensual and autonomous are being worked on.
- **Save output as file**: Save the output of individual tasks as a file, so you can use it later.
- **Parse output as Pydantic or Json**: Parse the output of individual tasks as a Pydantic model or as a Json if you want to.
- **Works with Open Source Models**: Run your crew using OpenAI or open source models; refer to the [Connect CrewAI to LLMs](https://docs.crewai.com/how-to/LLM-Connections/) page for details on configuring your agents' connections to models, even ones running locally!
**Note**: CrewAI is a standalone framework built from the ground up, without dependencies on Langchain or other agent frameworks.
- **Deep Customization**: Build sophisticated agents with full control over the system - from overriding inner prompts to accessing low-level APIs. Customize roles, goals, tools, and behaviors while maintaining clean abstractions.
- **Autonomous Inter-Agent Delegation**: Agents can autonomously delegate tasks and inquire amongst themselves, enabling complex problem-solving in real-world scenarios.
- **Flexible Task Management**: Define and customize tasks with granular control, from simple operations to complex multi-step processes.
- **Production-Grade Architecture**: Support for both high-level abstractions and low-level customization, with robust error handling and state management.
- **Predictable Results**: Ensure consistent, accurate outputs through programmatic guardrails, agent training capabilities, and flow-based execution control. See our [documentation on guardrails](https://docs.crewai.com/how-to/guardrails/) for implementation details.
- **Model Flexibility**: Run your crew using OpenAI or open source models with production-ready integrations. See [Connect CrewAI to LLMs](https://docs.crewai.com/how-to/LLM-Connections/) for detailed configuration options.
- **Event-Driven Flows**: Build complex, real-world workflows with precise control over execution paths, state management, and conditional logic.
- **Process Orchestration**: Achieve any workflow pattern through flows - from simple sequential and hierarchical processes to complex, custom orchestration patterns with conditional branching and parallel execution.

@@ -305,6 +357,98 @@ You can test different real life examples of AI crews in the [CrewAI-examples re
CrewAI's power truly shines when combining Crews with Flows to create sophisticated automation pipelines. Here's how you can orchestrate multiple Crews within a Flow:
```python
    # Demonstrate low-level control with structured state
    self.state.sentiment = "analyzing"
    return {"sector": "tech", "timeframe": "1W"}  # These parameters match the task description template

    @listen(fetch_market_data)
    def analyze_with_crew(self, market_data):
        # Show crew agency through specialized roles
        analyst = Agent(
            role="Senior Market Analyst",
            goal="Conduct deep market analysis with expert insight",
            backstory="You're a veteran analyst known for identifying subtle market patterns"
        )
        researcher = Agent(
            role="Data Researcher",
            goal="Gather and validate supporting market data",
            backstory="You excel at finding and correlating multiple data sources"
        )
        analysis_task = Task(
            description="Analyze {sector} sector data for the past {timeframe}",
            expected_output="Detailed market analysis with confidence score",
            agent=analyst
        )
        research_task = Task(
            description="Find supporting data to validate the analysis",
            expected_output="Corroborating evidence and potential contradictions",
            agent=researcher
        )
        # Demonstrate crew autonomy
        analysis_crew = Crew(
            agents=[analyst, researcher],
            tasks=[analysis_task, research_task],
            process=Process.sequential,
            verbose=True
        )
        return analysis_crew.kickoff(inputs=market_data)  # Pass market_data as named inputs

    @router(analyze_with_crew)
    def determine_next_steps(self):
        # Show flow control with conditional routing
        if self.state.confidence > 0.8:
            return "high_confidence"
        elif self.state.confidence > 0.5:
            return "medium_confidence"
        return "low_confidence"

    @listen("high_confidence")
    def execute_strategy(self):
        # Demonstrate complex decision making
        strategy_crew = Crew(
            agents=[
                Agent(role="Strategy Expert",
                      goal="Develop optimal market strategy")
            ],
            tasks=[
                Task(description="Create detailed strategy based on analysis",
                     expected_output="Step-by-step action plan")
            ]
        )
        return strategy_crew.kickoff()

    @listen("medium_confidence", "low_confidence")
    def request_additional_analysis(self):
        self.state.recommendations.append("Gather more data")
        return "Additional analysis required"
```
This example demonstrates how to:
1. Use Python code for basic data operations
2. Create and execute Crews as steps in your workflow
3. Use Flow decorators to manage the sequence of operations
4. Implement conditional branching based on Crew results
## Connecting Your Crew to a Model
CrewAI supports using various LLMs through a variety of connection options. By default your agents will use the OpenAI API when querying the model. However, there are several other ways to allow your agents to connect to models. For example, you can configure your agents to use a local model via the Ollama tool.
@@ -313,9 +457,13 @@ Please refer to the [Connect CrewAI to LLMs](https://docs.crewai.com/how-to/LLM-
## How CrewAI Compares
**CrewAI's Advantage**: CrewAI is built with production in mind. It offers the flexibility of Autogen's conversational agents and the structured process approach of ChatDev, but without the rigidity. CrewAI's processes are designed to be dynamic and adaptable, fitting seamlessly into both development and production workflows.
**CrewAI's Advantage**: CrewAI combines autonomous agent intelligence with precise workflow control through its unique Crews and Flows architecture. The framework excels at both high-level orchestration and low-level customization, enabling complex, production-grade systems with granular control.
- **Autogen**: While Autogen does good in creating conversational agents capable of working together, it lacks an inherent concept of process. In Autogen, orchestrating agents' interactions requires additional programming, which can become complex and cumbersome as the scale of tasks grows.
- **LangGraph**: While LangGraph provides a foundation for building agent workflows, its approach requires significant boilerplate code and complex state management patterns. The framework's tight coupling with LangChain can limit flexibility when implementing custom agent behaviors or integrating with external systems.
*P.S. CrewAI demonstrates significant performance advantages over LangGraph, executing 5.76x faster in certain cases like this QA task example ([see comparison](https://github.com/crewAIInc/crewAI-examples/tree/main/Notebooks/CrewAI%20Flows%20%26%20Langgraph/QA%20Agent)) while achieving higher evaluation scores with faster completion times in certain coding tasks, like in this example ([detailed analysis](https://github.com/crewAIInc/crewAI-examples/blob/main/Notebooks/CrewAI%20Flows%20%26%20Langgraph/Coding%20Assistant/coding_assistant_eval.ipynb)).*
- **Autogen**: While Autogen excels at creating conversational agents capable of working together, it lacks an inherent concept of process. In Autogen, orchestrating agents' interactions requires additional programming, which can become complex and cumbersome as the scale of tasks grows.
- **ChatDev**: ChatDev introduced the idea of processes into the realm of AI agents, but its implementation is quite rigid. Customizations in ChatDev are limited and not geared towards production environments, which can hinder scalability and flexibility in real-world applications.
@@ -440,5 +588,8 @@ A: CrewAI uses anonymous telemetry to collect usage data for improvement purpose
### Q: Where can I find examples of CrewAI in action?
A: You can find various real-life examples in the [CrewAI-examples repository](https://github.com/crewAIInc/crewAI-examples), including trip planners, stock analysis tools, and more.
### Q: What is the difference between Crews and Flows?
A: Crews and Flows serve different but complementary purposes in CrewAI. Crews are teams of AI agents working together to accomplish specific tasks through role-based collaboration, delivering accurate and predictable results. Flows, on the other hand, are event-driven workflows that can orchestrate both Crews and regular Python code, allowing you to build complex automation pipelines with secure state management and conditional execution paths.
### Q: How can I contribute to CrewAI?
A: Contributions are welcome! You can fork the repository, create a new branch for your feature, add your improvement, and send a pull request. Check the Contribution section in the README for more details.
@@ -32,7 +32,6 @@ A crew in crewAI represents a collaborative group of agents working together to
| **Share Crew** _(optional)_ | `share_crew` | Whether you want to share the complete crew information and execution with the crewAI team to make the library better, and allow us to train models. |
| **Output Log File** _(optional)_ | `output_log_file` | Whether you want a file with the complete crew output and execution. Set it to True to default to a file called logs.txt in the current folder, or pass a string with the full path and name of the file. |
| **Manager Agent** _(optional)_ | `manager_agent` | `manager` sets a custom agent that will be used as a manager. |
| **Manager Callbacks** _(optional)_ | `manager_callbacks` | `manager_callbacks` takes a list of callback handlers to be executed by the manager agent when a hierarchical process is used. |
| **Prompt File** _(optional)_ | `prompt_file` | Path to the prompt JSON file to be used for the crew. |
| **Planning** *(optional)* | `planning` | Adds planning ability to the Crew. When activated, before each Crew iteration all Crew data is sent to an AgentPlanner that will plan the tasks, and this plan will be added to each task description. |
| **Planning LLM** *(optional)* | `planning_llm` | The language model used by the AgentPlanner in a planning process. |
This is useful when you've updated your knowledge sources and want to ensure that the agents are using the most recent information.
## Agent-Specific Knowledge
While knowledge can be provided at the crew level using `crew.knowledge_sources`, individual agents can also have their own knowledge sources using the `knowledge_sources` parameter:
```python Code
from crewai import Agent, Task, Crew
from crewai.knowledge.source.string_knowledge_source import StringKnowledgeSource
# Create agent-specific knowledge about a product
product_specs = StringKnowledgeSource(
content="""The XPS 13 laptop features:
- 13.4-inch 4K display
- Intel Core i7 processor
- 16GB RAM
- 512GB SSD storage
- 12-hour battery life""",
metadata={"category": "product_specs"}
)
# Create a support agent with product knowledge
support_agent = Agent(
role="Technical Support Specialist",
goal="Provide accurate product information and support.",
backstory="You are an expert on our laptop products and specifications.",
goal="Answer questions about space news accurately and comprehensively",
backstory="""You are a space industry analyst with expertise in space exploration,
backstory="""You are a space industry analyst with expertise in space exploration,
satellite technology, and space industry trends. You excel at answering questions
about space news and providing detailed, accurate information.""",
knowledge_sources=[recent_news],
@@ -220,13 +320,14 @@ result = crew.kickoff(
inputs={"user_question": "What are the latest developments in space exploration?"}
)
```
```output Output
# Agent: Space News Analyst
## Task: Answer this question about space news: What are the latest developments in space exploration?
# Agent: Space News Analyst
## Final Answer:
The latest developments in space exploration, based on recent space news articles, include the following:
1. SpaceX has received the final regulatory approvals to proceed with the second integrated Starship/Super Heavy launch, scheduled for as soon as the morning of Nov. 17, 2023. This is a significant step in SpaceX's ambitious plans for space exploration and colonization. [Source: SpaceNews](https://spacenews.com/starship-cleared-for-nov-17-launch/)
@@ -242,11 +343,13 @@ The latest developments in space exploration, based on recent space news article
6. The National Natural Science Foundation of China has outlined a five-year project for researchers to study the assembly of ultra-large spacecraft. This could lead to significant advancements in spacecraft technology and space exploration capabilities. [Source: SpaceNews](https://spacenews.com/china-researching-challenges-of-kilometer-scale-ultra-large-spacecraft/)
7. The Center for AEroSpace Autonomy Research (CAESAR) at Stanford University is focusing on spacecraft autonomy. The center held a kickoff event on May 22, 2024, to highlight the industry, academia, and government collaboration it seeks to foster. This could lead to significant advancements in autonomous spacecraft technology. [Source: SpaceNews](https://spacenews.com/stanford-center-focuses-on-spacecraft-autonomy/)
@@ -29,7 +29,7 @@ Large Language Models (LLMs) are the core intelligence behind CrewAI agents. The
## Available Models and Their Capabilities
Here's a detailed breakdown of supported models and their capabilities:
Here's a detailed breakdown of supported models and their capabilities; you can compare performance at [lmarena.ai](https://lmarena.ai/?leaderboard) and [artificialanalysis.ai](https://artificialanalysis.ai/):
<Tabs>
<Tab title="OpenAI">
@@ -43,13 +43,104 @@ Here's a detailed breakdown of supported models and their capabilities:
1 token ≈ 4 characters in English. For example, 8,192 tokens ≈ 32,768 characters or about 6,000 words.
</Note>
</Tab>
<Tab title="Nvidia NIM">
| Model | Context Window | Best For |
|-------|---------------|-----------|
| nvidia/mistral-nemo-minitron-8b-8k-instruct | 8,192 tokens | State-of-the-art small language model delivering superior accuracy for chatbot, virtual assistants, and content generation. |
| nvidia/nemotron-4-mini-hindi-4b-instruct | 4,096 tokens | A bilingual Hindi-English SLM for on-device inference, tailored specifically for the Hindi language. |
| nvidia/llama-3.1-nemotron-70b-instruct | 128k tokens | Llama-3.1-Nemotron-70B-Instruct is a large language model customized by NVIDIA to improve the helpfulness of LLM-generated responses. |
| nvidia/llama3-chatqa-1.5-8b | 128k tokens | Advanced LLM to generate high-quality, context-aware responses for chatbots and search engines. |
| nvidia/llama3-chatqa-1.5-70b | 128k tokens | Advanced LLM to generate high-quality, context-aware responses for chatbots and search engines. |
| nvidia/vila | 128k tokens | Multi-modal vision-language model that understands text/img/video and creates informative responses |
| nvidia/neva-22 | 4,096 tokens | Multi-modal vision-language model that understands text/images and generates informative responses |
| nvidia/usdcode-llama3-70b-instruct | 128k tokens | State-of-the-art LLM that answers OpenUSD knowledge queries and generates USD-Python code. |
| nvidia/nemotron-4-340b-instruct | 4,096 tokens | Creates diverse synthetic data that mimics the characteristics of real-world data. |
| meta/codellama-70b | 100k tokens | LLM capable of generating code from natural language and vice versa. |
| meta/llama2-70b | 4,096 tokens | Cutting-edge large language AI model capable of generating text and code in response to prompts. |
| meta/llama3-8b-instruct | 8,192 tokens | Advanced state-of-the-art LLM with language understanding, superior reasoning, and text generation. |
| meta/llama3-70b-instruct | 8,192 tokens | Powers complex conversations with superior contextual understanding, reasoning and text generation. |
| meta/llama-3.1-8b-instruct | 128k tokens | Advanced state-of-the-art model with language understanding, superior reasoning, and text generation. |
| meta/llama-3.1-70b-instruct | 128k tokens | Powers complex conversations with superior contextual understanding, reasoning and text generation. |
| meta/llama-3.1-405b-instruct | 128k tokens | Advanced LLM for synthetic data generation, distillation, and inference for chatbots, coding, and domain-specific tasks. |
| meta/llama-3.2-1b-instruct | 128k tokens | Advanced state-of-the-art small language model with language understanding, superior reasoning, and text generation. |
| meta/llama-3.2-3b-instruct | 128k tokens | Advanced state-of-the-art small language model with language understanding, superior reasoning, and text generation. |
| meta/llama-3.2-11b-vision-instruct | 128k tokens | Advanced state-of-the-art small language model with language understanding, superior reasoning, and text generation. |
| meta/llama-3.2-90b-vision-instruct | 128k tokens | Advanced state-of-the-art small language model with language understanding, superior reasoning, and text generation. |
| google/gemma-7b | 8,192 tokens | Cutting-edge text generation model for text understanding, transformation, and code generation. |
| google/gemma-2b | 8,192 tokens | Cutting-edge text generation model for text understanding, transformation, and code generation. |
| google/codegemma-7b | 8,192 tokens | Cutting-edge model built on Google's Gemma-7B specialized for code generation and code completion. |
| google/codegemma-1.1-7b | 8,192 tokens | Advanced programming model for code generation, completion, reasoning, and instruction following. |
| google/recurrentgemma-2b | 8,192 tokens | Novel recurrent architecture based language model for faster inference when generating long sequences. |
| google/gemma-2-9b-it | 8,192 tokens | Cutting-edge text generation model for text understanding, transformation, and code generation. |
| google/gemma-2-27b-it | 8,192 tokens | Cutting-edge text generation model for text understanding, transformation, and code generation. |
| google/gemma-2-2b-it | 8,192 tokens | Cutting-edge text generation model for text understanding, transformation, and code generation. |
| google/deplot | 512 tokens | One-shot visual language understanding model that translates images of plots into tables. |
| google/paligemma | 8,192 tokens | Vision language model adept at comprehending text and visual inputs to produce informative responses. |
| mistralai/mistral-7b-instruct-v0.2 | 32k tokens | This LLM follows instructions, completes requests, and generates creative text. |
| mistralai/mixtral-8x7b-instruct-v0.1 | 8,192 tokens | An MOE LLM that follows instructions, completes requests, and generates creative text. |
| mistralai/mistral-large | 4,096 tokens | Creates diverse synthetic data that mimics the characteristics of real-world data. |
| mistralai/mixtral-8x22b-instruct-v0.1 | 8,192 tokens | Creates diverse synthetic data that mimics the characteristics of real-world data. |
| mistralai/mistral-7b-instruct-v0.3 | 32k tokens | This LLM follows instructions, completes requests, and generates creative text. |
| nv-mistralai/mistral-nemo-12b-instruct | 128k tokens | Most advanced language model for reasoning, code, multilingual tasks; runs on a single GPU. |
| mistralai/mamba-codestral-7b-v0.1 | 256k tokens | Model for writing and interacting with code across a wide range of programming languages and tasks. |
| microsoft/phi-3-mini-128k-instruct | 128K tokens | Lightweight, state-of-the-art open LLM with strong math and logical reasoning skills. |
| microsoft/phi-3-mini-4k-instruct | 4,096 tokens | Lightweight, state-of-the-art open LLM with strong math and logical reasoning skills. |
| microsoft/phi-3-small-8k-instruct | 8,192 tokens | Lightweight, state-of-the-art open LLM with strong math and logical reasoning skills. |
| microsoft/phi-3-small-128k-instruct | 128K tokens | Lightweight, state-of-the-art open LLM with strong math and logical reasoning skills. |
| microsoft/phi-3-medium-4k-instruct | 4,096 tokens | Lightweight, state-of-the-art open LLM with strong math and logical reasoning skills. |
| microsoft/phi-3-medium-128k-instruct | 128K tokens | Lightweight, state-of-the-art open LLM with strong math and logical reasoning skills. |
| microsoft/phi-3.5-mini-instruct | 128K tokens | Lightweight multilingual LLM powering AI applications in latency bound, memory/compute constrained environments |
| microsoft/phi-3.5-moe-instruct | 128K tokens | Advanced LLM based on Mixture of Experts architecture to deliver compute-efficient content generation |
| microsoft/kosmos-2 | 1,024 tokens | Groundbreaking multimodal model designed to understand and reason about visual elements in images. |
| microsoft/phi-3-vision-128k-instruct | 128k tokens | Cutting-edge open multimodal model excelling in high-quality reasoning from images. |
| microsoft/phi-3.5-vision-instruct | 128k tokens | Cutting-edge open multimodal model excelling in high-quality reasoning from images. |
| databricks/dbrx-instruct | 12k tokens | A general-purpose LLM with state-of-the-art performance in language understanding, coding, and RAG. |
| snowflake/arctic | 1,024 tokens | Delivers high efficiency inference for enterprise applications focused on SQL generation and coding. |
| aisingapore/sea-lion-7b-instruct | 4,096 tokens | LLM to represent and serve the linguistic and cultural diversity of Southeast Asia |
| ibm/granite-8b-code-instruct | 4,096 tokens | Software programming LLM for code generation, completion, explanation, and multi-turn conversion. |
| ibm/granite-34b-code-instruct | 8,192 tokens | Software programming LLM for code generation, completion, explanation, and multi-turn conversion. |
| ibm/granite-3.0-8b-instruct | 4,096 tokens | Advanced Small Language Model supporting RAG, summarization, classification, code, and agentic AI |
| ibm/granite-3.0-3b-a800m-instruct | 4,096 tokens | Highly efficient Mixture of Experts model for RAG, summarization, entity extraction, and classification |
| mediatek/breeze-7b-instruct | 4,096 tokens | Creates diverse synthetic data that mimics the characteristics of real-world data. |
| upstage/solar-10.7b-instruct | 4,096 tokens | Excels in NLP tasks, particularly in instruction-following, reasoning, and mathematics. |
| writer/palmyra-med-70b-32k | 32k tokens | Leading LLM for accurate, contextually relevant responses in the medical domain. |
| writer/palmyra-med-70b | 32k tokens | Leading LLM for accurate, contextually relevant responses in the medical domain. |
| writer/palmyra-fin-70b-32k | 32k tokens | Specialized LLM for financial analysis, reporting, and data processing |
| 01-ai/yi-large | 32k tokens | Powerful model trained on English and Chinese for diverse tasks including chatbot and creative writing. |
| deepseek-ai/deepseek-coder-6.7b-instruct | 2k tokens | Powerful coding model offering advanced capabilities in code generation, completion, and infilling |
| rakuten/rakutenai-7b-instruct | 1,024 tokens | Advanced state-of-the-art LLM with language understanding, superior reasoning, and text generation. |
| rakuten/rakutenai-7b-chat | 1,024 tokens | Advanced state-of-the-art LLM with language understanding, superior reasoning, and text generation. |
| baichuan-inc/baichuan2-13b-chat | 4,096 tokens | Support Chinese and English chat, coding, math, instruction following, solving quizzes |
<Note>
NVIDIA's NIM support for models is expanding continuously! For the most up-to-date list of available models, please visit build.nvidia.com.
</Note>
</Tab>
<Tab title="Gemini">
| Model | Context Window | Best For |
|-------|---------------|-----------|
| gemini-2.0-flash-exp | 1M tokens | Higher quality at faster speed, multimodal model, good for most tasks |
| gemini-1.5-flash | 1M tokens | Balanced multimodal model, good for most tasks |
| gemini-1.5-flash-8B | 1M tokens | Fastest, most cost-efficient, good for high-frequency tasks |
| gemini-1.5-pro | 2M tokens | Best performing, wide variety of reasoning tasks including logical reasoning, coding, and creative collaboration |
<Tip>
Google's Gemini models are all multimodal, supporting audio, images, video, and text, with support for context caching, JSON schema, function calling, etc.
These models are available via API key from
[The Gemini API](https://ai.google.dev/gemini-api/docs) and also from
[Google Cloud Vertex](https://cloud.google.com/vertex-ai/generative-ai/docs/migrate/migrate-google-ai) as part of the
In the CrewAI framework, a `Task` is a specific assignment completed by an `Agent`.
Tasks provide all necessary details for execution, such as a description, the agent responsible, required tools, and more, facilitating a wide range of action complexities.
@@ -263,8 +263,148 @@ analysis_task = Task(
)
```
## Task Guardrails
Task guardrails provide a way to validate and transform task outputs before they
are passed to the next task. This feature helps ensure data quality and provides
feedback to agents when their output doesn't meet specific criteria.
### Using Task Guardrails
To add a guardrail to a task, provide a validation function through the `guardrail` parameter:
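For instance, a minimal guardrail might look like the following sketch (assuming a `writer` agent defined elsewhere; the `(success, data)` tuple convention matches the chaining example later in this page):
```python Code
from crewai import Task, TaskOutput  # TaskOutput import path may vary by version

def validate_word_count(result: TaskOutput):
    # Return (success, data): data is the validated output or an error message.
    if len(result.raw.split()) > 200:
        return (False, "Summary must be 200 words or fewer")
    return (True, result.raw)

summary_task = Task(
    description="Summarize the report in under 200 words",
    expected_output="A concise summary",
    agent=writer,  # hypothetical Agent defined elsewhere
    guardrail=validate_word_count,
)
```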
## Getting Structured Consistent Outputs from Tasks
When you need to ensure that a task outputs a structured and consistent format, you can use the `output_pydantic` or `output_json` properties on a task. These properties allow you to define the expected output structure, making it easier to parse and utilize the results in your application.
<Note>
It's also important to note that the output of the final task of a crew becomes the final output of the actual crew itself.
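A short sketch of `output_pydantic` in use (assuming a `writer` agent defined elsewhere):
```python Code
from pydantic import BaseModel
from crewai import Task

class Blog(BaseModel):
    title: str
    content: str

blog_task = Task(
    description="Write a short blog post about CrewAI",
    expected_output="A title and body text",
    agent=writer,  # hypothetical Agent defined elsewhere
    output_pydantic=Blog,
)
# After kickoff, the task output exposes the parsed model,
# e.g. result.pydantic.title
```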
@@ -608,6 +748,114 @@ While creating and executing tasks, certain validation mechanisms are in place t
These validations help in maintaining the consistency and reliability of task executions within the crewAI framework.
## Task Guardrails
Task guardrails provide a powerful way to validate, transform, or filter task outputs before they are passed to the next task. Guardrails are optional functions that execute before the next task starts, allowing you to ensure that task outputs meet specific requirements or formats.
```python Code
return (False, "Output must be a 10-digit phone number")
```
### Advanced Features
#### Chaining Multiple Validations
```python Code
def chain_validations(*validators):
"""Chain multiple validators together."""
def combined_validator(result):
for validator in validators:
success, data = validator(result)
if not success:
return (False, data)
result = data
return (True, result)
return combined_validator
# Usage
task = Task(
description="Get user contact info",
expected_output="Email and phone",
guardrail=chain_validations(
validate_email_format,
filter_sensitive_info
)
)
```
#### Custom Retry Logic
```python Code
task = Task(
description="Generate data",
expected_output="Valid data",
guardrail=validate_data,
max_retries=5 # Override default retry limit
)
```
## Creating Directories when Saving Files
You can now specify if a task should create directories when saving its output to a file. This is particularly useful for organizing outputs and ensuring that file paths are correctly structured.
@@ -629,7 +877,7 @@ save_output_task = Task(
## Conclusion
Tasks are the driving force behind the actions of agents in CrewAI.
By properly defining tasks and their outcomes, you set the stage for your AI agents to work effectively, either independently or as a collaborative unit.
Equipping tasks with appropriate tools, understanding the execution process, and following robust validation practices are crucial for maximizing CrewAI's potential,
ensuring agents are effectively prepared for their assignments and that tasks are executed as intended.
The `StructuredTool` class wraps functions as tools, providing flexibility and validation while reducing boilerplate. It supports custom schemas and dynamic logic for seamless integration of complex functionalities.
#### Example:
Using `StructuredTool.from_function`, you can wrap a function that interacts with an external API or system, providing a structured interface. This enables robust validation and consistent execution, making it easier to integrate complex functionalities into your applications as demonstrated in the following example:
```python
from crewai.tools.structured_tool import CrewStructuredTool
from pydantic import BaseModel

# Define the schema for the tool's input using Pydantic
class APICallInput(BaseModel):
    endpoint: str
    parameters: dict

# Wrapper function to execute the API call
def tool_wrapper(*args, **kwargs):
    # Here, you would typically call the API using the parameters
    # For demonstration, we'll return a placeholder string
    return f"Call the API at {kwargs['endpoint']} with parameters {kwargs['parameters']}"

# Create and return the structured tool
def create_structured_tool():
    return CrewStructuredTool.from_function(
        name='Wrapper API',
        description="A tool to wrap API calls with structured input.",
        args_schema=APICallInput,
        func=tool_wrapper,
    )
```
[Portkey](https://portkey.ai/?utm_source=crewai&utm_medium=crewai&utm_campaign=crewai) is a 2-line upgrade to make your CrewAI agents reliable, cost-efficient, and fast.
Portkey adds 4 core production capabilities to any CrewAI agent:
To build CrewAI Agents with Portkey, you'll need two keys:
- **Portkey API Key**: Sign up on the [Portkey app](https://app.portkey.ai/?utm_source=crewai&utm_medium=crewai&utm_campaign=crewai) and copy your API key
- **Virtual Key**: Virtual Keys securely manage your LLM API keys in one place. Store your LLM provider API keys securely in Portkey's vault
virtual_key="YOUR_VIRTUAL_KEY",# Enter your Virtual key from Portkey
)
)
```
3. **Create and Run Your First Agent:**
```python
from crewai import Agent, Task, Crew

# Define your agents with roles and goals
coder = Agent(
    role='Software developer',
    goal='Write clear, concise code on demand',
    backstory='An expert coder with a keen eye for software trends.',
    llm=gpt_llm
)

# Create tasks for your agents
task1 = Task(
    description="Define the HTML for making a simple website with heading- Hello World! Portkey is working!",
    expected_output="A clear and concise HTML code",
    agent=coder
)

# Instantiate your crew
crew = Crew(
    agents=[coder],
    tasks=[task1],
)

result = crew.kickoff()
print(result)
```
## Key Features
| Feature | Description |
|---------|-------------|
| 🌐 Multi-LLM Support | Access OpenAI, Anthropic, Gemini, Azure, and 250+ providers through a unified interface |
| 🛡️ Production Reliability | Implement retries, timeouts, load balancing, and fallbacks |
| 📊 Advanced Observability | Track 40+ metrics including costs, tokens, latency, and custom metadata |
| 🔍 Comprehensive Logging | Debug with detailed execution traces and function call logs |
| 🚧 Security Controls | Set budget limits and implement role-based access control |
| 🔄 Performance Analytics | Capture and analyze feedback for continuous improvement |
| 💾 Intelligent Caching | Reduce costs and latency with semantic or simple caching |
## Production Features with Portkey Configs
All features mentioned below are available through Portkey's Config system. Portkey's Config system allows you to define routing strategies using simple JSON objects in your LLM API calls. You can create and manage Configs directly in your code or through the Portkey Dashboard. Each Config has a unique ID for easy reference.
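As a rough illustration, a Config for fallback routing across two providers might look like this (the exact schema is defined by Portkey's docs; treat the keys as assumptions):
```python
# Hypothetical fallback Config: try the first target, fall back to the second.
config = {
    "strategy": {"mode": "fallback"},
    "targets": [
        {"virtual_key": "YOUR_OPENAI_VIRTUAL_KEY"},
        {"virtual_key": "YOUR_ANTHROPIC_VIRTUAL_KEY"},
    ],
}
```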
Access various LLMs like Anthropic, Gemini, Mistral, Azure OpenAI, and more with minimal code changes. Switch between providers or use them together seamlessly. [Learn more about Universal API](https://portkey.ai/docs/product/ai-gateway/universal-api)
Easily switch between different LLM providers:
```python
# Anthropic Configuration
anthropic_llm = LLM(
    model="claude-3-5-sonnet-latest",
    base_url=PORTKEY_GATEWAY_URL,
    api_key="dummy",
    extra_headers=createHeaders(
        api_key="YOUR_PORTKEY_API_KEY",
        virtual_key="YOUR_ANTHROPIC_VIRTUAL_KEY",  # You don't need a provider when using Virtual Keys
        trace_id="anthropic_agent"
    )
)

# Azure OpenAI Configuration
azure_llm = LLM(
    model="gpt-4",
    base_url=PORTKEY_GATEWAY_URL,
    api_key="dummy",
    extra_headers=createHeaders(
        api_key="YOUR_PORTKEY_API_KEY",
        virtual_key="YOUR_AZURE_VIRTUAL_KEY",  # You don't need a provider when using Virtual Keys
        trace_id="azure_agent"
    )
)
```
### 2. Caching
Improve response times and reduce costs with two powerful caching modes:
- **Simple Cache**: Perfect for exact matches
- **Semantic Cache**: Matches responses for requests that are semantically similar
[Learn more about Caching](https://portkey.ai/docs/product/ai-gateway/cache-simple-and-semantic)
```py
config = {
    "cache": {
        "mode": "semantic"  # or "simple" for exact matching
    }
}
```
[Learn more about Reliability Features](https://portkey.ai/docs/product/ai-gateway/)
### 4. Metrics
Agent runs are complex. Portkey automatically logs **40+ comprehensive metrics** for your AI agents, including cost, tokens used, latency, etc. Whether you need a broad overview or granular insights into your agent runs, Portkey's customizable filters provide the metrics you need.
Logs are essential for understanding agent behavior, diagnosing issues, and improving performance. They provide a detailed record of agent activities and tool use, which is crucial for debugging and optimizing processes.
Access a dedicated section to view records of agent executions, including parameters, outcomes, function calls, and errors. Filter logs based on multiple parameters such as trace ID, model, tokens used, and metadata.
description: Learn how to enable and use multimodal capabilities in your agents for processing images and other non-text content within the CrewAI framework.
icon: image
---
# Using Multimodal Agents
CrewAI supports multimodal agents that can process both text and non-text content like images. This guide will show you how to enable and use multimodal capabilities in your agents.
## Enabling Multimodal Capabilities
To create a multimodal agent, simply set the `multimodal` parameter to `True` when initializing your agent:
```python
from crewai import Agent
agent = Agent(
role="Image Analyst",
goal="Analyze and extract insights from images",
backstory="An expert in visual content interpretation with years of experience in image analysis",
multimodal=True # This enables multimodal capabilities
)
```
When you set `multimodal=True`, the agent is automatically configured with the necessary tools for handling non-text content, including the `AddImageTool`.
## Working with Images
The multimodal agent comes pre-configured with the `AddImageTool`, which allows it to process images. You don't need to manually add this tool - it's automatically included when you enable multimodal capabilities.
Here's a complete example showing how to use a multimodal agent to analyze an image:
```python
from crewai import Agent, Task, Crew
# Create a multimodal agent
image_analyst = Agent(
role="Product Analyst",
goal="Analyze product images and provide detailed descriptions",
backstory="Expert in visual product analysis with deep knowledge of design and features",
multimodal=True
)
# Create a task for image analysis
task = Task(
description="Analyze the product image at https://example.com/product.jpg and provide a detailed description",
agent=image_analyst
)
# Create and run the crew
crew = Crew(
agents=[image_analyst],
tasks=[task]
)
result = crew.kickoff()
```
### Advanced Usage with Context
You can provide additional context or specific questions about the image when creating tasks for multimodal agents. The task description can include specific aspects you want the agent to focus on:
```python
from crewai import Agent, Task, Crew
# Create a multimodal agent for detailed analysis
expert_analyst = Agent(
role="Visual Quality Inspector",
goal="Perform detailed quality analysis of product images",
backstory="Senior quality control expert with expertise in visual inspection",
multimodal=True # AddImageTool is automatically included
)
# Create a task with specific analysis requirements
inspection_task = Task(
description="""
Analyze the product image at https://example.com/product.jpg with focus on:
1. Quality of materials
2. Manufacturing defects
3. Compliance with standards
Provide a detailed report highlighting any issues found.
""",
agent=expert_analyst
)
# Create and run the crew
crew = Crew(
agents=[expert_analyst],
tasks=[inspection_task]
)
result = crew.kickoff()
```
### Tool Details
When working with multimodal agents, the `AddImageTool` is automatically configured with the following schema:
```python
class AddImageToolSchema:
image_url: str # Required: The URL or path of the image to process
action: Optional[str] = None # Optional: Additional context or specific questions about the image
```
The multimodal agent will automatically handle the image processing through its built-in tools, allowing it to:
- Access images via URLs or local file paths
- Process image content with optional context or specific questions
- Provide analysis and insights based on the visual information and task requirements
## Best Practices
When working with multimodal agents, keep these best practices in mind:
1. **Image Access**
- Ensure your images are accessible via URLs that the agent can reach
- For local images, consider hosting them temporarily or using absolute file paths
- Verify that image URLs are valid and accessible before running tasks
2. **Task Description**
- Be specific about what aspects of the image you want the agent to analyze
- Include clear questions or requirements in the task description
- Consider using the optional `action` parameter for focused analysis
3. **Resource Management**
- Image processing may require more computational resources than text-only tasks
- Some language models may require base64 encoding for image data
- Consider batch processing for multiple images to optimize performance
4. **Environment Setup**
- Verify that your environment has the necessary dependencies for image processing
- Ensure your language model supports multimodal capabilities
- Test with small images first to validate your setup
5. **Error Handling**
- Implement proper error handling for image loading failures
- Have fallback strategies for when image processing fails
- Monitor and log image processing operations for debugging
description="Cutting-edge framework for orchestrating role-playing, autonomous AI agents. By fostering collaborative intelligence, CrewAI empowers agents to work together seamlessly, tackling complex tasks."
@@ -4,7 +4,7 @@ Welcome to the {{crew_name}} Crew project, powered by [crewAI](https://crewai.co
## Installation
Ensure you have Python >=3.10 <=3.13 installed on your system. This project uses [UV](https://docs.astral.sh/uv/) for dependency management and package handling, offering a seamless setup and execution experience.
Ensure you have Python >=3.10 <3.13 installed on your system. This project uses [UV](https://docs.astral.sh/uv/) for dependency management and package handling, offering a seamless setup and execution experience.
@@ -4,7 +4,7 @@ Welcome to the {{crew_name}} Crew project, powered by [crewAI](https://crewai.co
## Installation
Ensure you have Python >=3.10 <=3.13 installed on your system. This project uses [UV](https://docs.astral.sh/uv/) for dependency management and package handling, offering a seamless setup and execution experience.
Ensure you have Python >=3.10 <3.13 installed on your system. This project uses [UV](https://docs.astral.sh/uv/) for dependency management and package handling, offering a seamless setup and execution experience.
        raise ValueError("Either file_path or file_paths must be provided")
    return v

def model_post_init(self, _):
    """Post-initialization method to load content."""
    self.safe_file_paths = self._process_file_paths()
    self.validate_paths()
    self.validate_content()
    self.content = self.load_content()
@abstractmethod
@@ -32,13 +44,13 @@ class BaseFileKnowledgeSource(BaseKnowledgeSource, ABC):
"""Load and preprocess file content. Should be overridden by subclasses. Assume that the file path is relative to the project root in the knowledge directory."""
pass
def validate_paths(self):
def validate_content(self):
    """Validate the paths."""
    for path in self.safe_file_paths:
        if not path.exists():
            self._logger.log(
                "error",
                f"File not found: {path}. Try adding sources to the knowledge directory. If its inside the knowledge directory, use the relative path.",
                f"File not found: {path}. Try adding sources to the knowledge directory. If it's inside the knowledge directory, use the relative path.",
                color="red",
            )
            raise FileNotFoundError(f"File not found: {path}")
@@ -51,7 +63,10 @@ class BaseFileKnowledgeSource(BaseKnowledgeSource, ABC):
def _save_documents(self):
    """Save the documents to the storage."""
    self.storage.save(self.chunks)
    if self.storage:
        self.storage.save(self.chunks)
    else:
        raise ValueError("No storage found to save documents.")
"""Default Source class for converting documents to markdown or json
This will auto support PDF, DOCX, and TXT, XLSX, Images, and HTML files without any additional dependencies and follows the docling package as the source of truth.
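Under the hood this leans on docling for the actual conversion. A rough sketch of that flow, assuming docling's `DocumentConverter` API and a hypothetical file name:

```python
from docling.document_converter import DocumentConverter

converter = DocumentConverter()
result = converter.convert("reports/q3_summary.pdf")  # hypothetical source file
markdown = result.document.export_to_markdown()
print(markdown[:500])
```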
"Embedding dimension mismatch. This usually happens when mixing different embedding models. Try resetting the collection using `crewai reset-memories -a`",
"red",
)
raiseValueError(
"Embedding dimension mismatch. Make sure you're using the same embedding model "
"across all operations with this collection."
"Try resetting the collection using `crewai reset-memories -a`"
)frome
exceptExceptionase:
Logger(verbose=True).log(
"error",f"Failed to upsert documents: {e}","red"
)
raise
else:
ifnotself.collection:
raiseException("Collection not initialized")
try:
# Create a dictionary to store unique documents
unique_docs={}
# Generate IDs and create a mapping of id -> (document, metadata)
"Embedding dimension mismatch. This usually happens when mixing different embedding models. Try resetting the collection using `crewai reset-memories -a`",
"red",
)
raiseValueError(
"Embedding dimension mismatch. Make sure you're using the same embedding model "
"across all operations with this collection."
"Try resetting the collection using `crewai reset-memories -a`"
)frome
exceptExceptionase:
Logger(verbose=True).log("error",f"Failed to upsert documents: {e}","red")
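The `unique_docs` mapping above exists to make upserts idempotent: saving the same text twice should collapse to one entry. A standalone sketch of that idea, with an assumed hashing scheme (not necessarily CrewAI's exact one):

```python
import hashlib

def doc_id(text: str) -> str:
    """Stable ID derived from content, so re-upserting the same text reuses one entry."""
    return hashlib.sha256(text.encode("utf-8")).hexdigest()

def dedupe(documents: list[str], metadatas: list[dict]) -> dict[str, tuple[str, dict]]:
    unique_docs: dict[str, tuple[str, dict]] = {}
    for doc, meta in zip(documents, metadatas):
        unique_docs[doc_id(doc)] = (doc, meta)  # later duplicates overwrite earlier ones
    return unique_docs
```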
returnf"{self._use(tool_string=tool_string,tool=tool,calling=calling)}"# type: ignore # BUG?: "_use" of "ToolUsage" does not return a value (it only ever returns None)
"tools":"\nYou ONLY have access to the following tools, and should NEVER make up tools that are not listed here:\n\n{tools}\n\nUse the following format:\n\nThought: you should always think about what to do\nAction: the action to take, only one name of [{tool_names}], just the name, exactly as it's written.\nAction Input: the input to the action, just a simple python dictionary, enclosed in curly braces, using \" to wrap keys and values.\nObservation: the result of the action\n\nOnce all necessary information is gathered:\n\nThought: I now know the final answer\nFinal Answer: the final answer to the original input question\n",
"no_tools":"\nTo give my best complete final answer to the task use the exact following format:\n\nThought: I now can give a great answer\nFinal Answer: Your final answer must be the great and the most complete as possible, it must be outcome described.\n\nI MUST use these formats, my job depends on it!",
"format":"I MUST either use a tool (use one at time) OR give my best final answer not both at the same time. To Use the following format:\n\nThought: you should always think about what to do\nAction: the action to take, should be one of [{tool_names}]\nAction Input: the input to the action, dictionary enclosed in curly braces\nObservation: the result of the action\n... (this Thought/Action/Action Input/Result can repeat N times)\nThought: I now can give a great answer\nFinal Answer: Your final answer must be the great and the most complete as possible, it must be outcome described\n\n",
"final_answer_format":"If you don't need to use any more tools, you must give your best complete final answer, make sure it satisfy the expect criteria, use the EXACT format below:\n\nThought: I now can give a great answer\nFinal Answer: my best complete final answer to the task.\n\n",
"final_answer_format":"If you don't need to use any more tools, you must give your best complete final answer, make sure it satisfies the expected criteria, use the EXACT format below:\n\nThought: I now can give a great answer\nFinal Answer: my best complete final answer to the task.\n\n",
"format_without_tools":"\nSorry, I didn't use the right format. I MUST either use a tool (among the available ones), OR give my best final answer.\nI just remembered the expected format I must follow:\n\nQuestion: the input question you must answer\nThought: you should always think about what to do\nAction: the action to take, should be one of [{tool_names}]\nAction Input: the input to the action\nObservation: the result of the action\n... (this Thought/Action/Action Input/Result can repeat N times)\nThought: I now can give a great answer\nFinal Answer: Your final answer must be the great and the most complete as possible, it must be outcome described\n\n",
"task_with_context":"{task}\n\nThis is the context you're working with:\n{context}",
"expected_output":"\nThis is the expect criteria for your final answer: {expected_output}\nyou MUST return the actual complete content as the final answer, not a summary.",
"human_feedback":"You got human feedback on your work, re-evaluate it and give a new Final Answer when ready.\n {human_feedback}",
"getting_input":"This is the agent's final answer: {final_answer}\n\n",
"summarizer_system_message":"You are a helpful assistant that summarizes text.",
"sumamrize_instruction":"Summarize the following text, make sure to include all the important information: {group}",
"summarize_instruction":"Summarize the following text, make sure to include all the important information: {group}",
"summary":"This is a summary of our conversation so far:\n{merged_summary}",
"manager_request":"Your best answer to your coworker asking you this, accounting for the context shared.",
"formatted_task_instructions":"Ensure your final answer contains only the content in the following format: {output_format}\n\nEnsure the final output does not include any code block markers like ```json or ```python.",
@@ -28,15 +28,21 @@
"errors":{
"force_final_answer_error":"You can't keep going, this was the best you could do.\n {formatted_answer.text}",
"force_final_answer":"Now it's time you MUST give your absolute best final answer. You'll ignore all previous instructions, stop using any tools, and just return your absolute BEST Final answer.",
"agent_tool_unexsiting_coworker":"\nError executing tool. coworker mentioned not found, it must be one of the following options:\n{coworkers}\n",
"agent_tool_unexisting_coworker":"\nError executing tool. coworker mentioned not found, it must be one of the following options:\n{coworkers}\n",
"task_repeated_usage":"I tried reusing the same input, I must stop using this action input. I'll try something else instead.\n\n",
"tool_usage_error":"I encountered an error: {error}",
"tool_arguments_error":"Error: the Action Input is not a valid key, value dictionary.",
"wrong_tool_name":"You tried to use the tool {tool}, but it doesn't exist. You must use one of the following tools, use one at time: {tools}.",
"tool_usage_exception":"I encountered an error while trying to use the tool. This was the error: {error}.\n Tool {tool} accepts these inputs: {tool_inputs}"
"tool_usage_exception":"I encountered an error while trying to use the tool. This was the error: {error}.\n Tool {tool} accepts these inputs: {tool_inputs}",
"agent_tool_execution_error":"Error executing task with agent '{agent_role}'. Error: {error}"
},
"tools":{
"delegate_work":"Delegate a specific task to one of the following coworkers: {coworkers}\nThe input to this tool should be the coworker, the task you want them to do, and ALL necessary context to execute the task, they know nothing about the task, so share absolute everything you know, don't reference things but instead explain them.",
"ask_question":"Ask a specific question to one of the following coworkers: {coworkers}\nThe input to this tool should be the coworker, the question you have for them, and ALL necessary context to ask the question properly, they know nothing about the question, so share absolute everything you know, don't reference things but instead explain them."
"ask_question":"Ask a specific question to one of the following coworkers: {coworkers}\nThe input to this tool should be the coworker, the question you have for them, and ALL necessary context to ask the question properly, they know nothing about the question, so share absolute everything you know, don't reference things but instead explain them.",
"add_image":{
"name":"Add image to content",
"description":"See image to understand it's content, you can optionally ask a question about the image",
"default_action":"Please provide a detailed description of this image, including all visual elements, context, and any notable details you can observe."
Recorded model responses from the new test cassettes (token context arrays and timing fields elided):
- "I can explain AI in one sentence. \n\nFinal Answer: Artificial intelligence (AI) is the ability of computer systems to perform tasks that typically require human intelligence, such as learning, problem-solving, and decision-making. \n","done":true,"done_reason":"stop"}'
- "Answer: Artificial Intelligence (AI) refers to the development of computer systems capable of performing tasks that typically require human intelligence, such as learning, problem-solving, decision-making, and perception.","done":true,"done_reason":"stop"}'
- "I am Gemma, an open-weights AI assistant developed by Google DeepMind. \n","done":true,"done_reason":"stop"}'
- "an AI designed to assist and communicate with users, utilizing a combination of natural language processing models.","done":true,"done_reason":"stop"}'
headers:
  Content-Length:
    - '690'
  Content-Type:
    - application/json; charset=utf-8
  Date:
    - Thu, 02 Jan 2025 20:07:07 GMT
status:
  code: 200
  message: OK
version: 1