- Fix OSError when trying to get source code of lambda functions in guardrail events
- Gracefully handle lambdas and built-in functions by showing placeholder text
- Update VCR config to exclude body matching for more reliable cassette playback
- Add pytest marker for tests requiring local services (Ollama, etc)
- Configure CI to skip tests marked as requiring local services
- Re-record async tool test cassettes with telemetry calls only
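The lambda/built-in fix above amounts to wrapping the source lookup in a fallback. A minimal sketch (the helper name and placeholder text are hypothetical, not the actual guardrail code):

```python
import inspect


def safe_source(fn) -> str:
    """Return a callable's source, or a placeholder when it is unavailable.

    inspect.getsource raises OSError when no source file can be found (e.g.
    lambdas defined interactively) and TypeError for built-in functions;
    both are mapped to placeholder text instead of crashing the event.
    """
    try:
        return inspect.getsource(fn)
    except (OSError, TypeError):
        return f"<source unavailable for {getattr(fn, '__name__', repr(fn))}>"
```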
- Migrate crewai-tools as standalone package in lib/tools
- Configure UV workspace for monorepo structure
- Move assets to repository root
- Clean up duplicate README files
- Focus pre-commit hooks on lib/crewai/src only
- Remove embedchain and resolve circular deps with ChromaDB
- Adjust lockfile to match crewai requirements
- Mock embeddings and vector DB in RAG tool tests
* fix: attempt to make embedchain optional
* fix: drop pydantic_settings dependency
* fix: ensure the package is importable without any extra dependency
After making embedchain optional, many packages were uninstalled, which caused errors in some tools due to failing import statements
This commit updates tool prompts to explicitly highlight that some tools
can accept both local file paths and remote URLs.
The improved prompts ensure LLMs understand they may pass remote
resources.
* Create tool for generating automations in Studio
This commit creates a tool to use CrewAI Enterprise API to generate
crews using CrewAI Studio.
* Replace CREWAI_BASE_URL with CREWAI_PLUS_URL
* Add missing /crewai_plus in URL
* Add contextual AI tools with async support
* Fix package version issues and update README
* Rename contextual tools to contextualai and update contents
* Update tools init for contextualai tools
* feat: Resolved no module found error for nest_asyncio
* Updated nest_asyncio import
---------
Co-authored-by: QJ <qj@QJs-MacBook-Pro.local>
Co-authored-by: Qile-Jiang <qile.jiang@contextual.ai>
* feat: add InvokeCrewAIAutomationTool for external crew API integration
* feat: add InvokeCrewAIAutomationTool class for executing CrewAI tasks programmatically
* feat: initialize rag
* refactor: using cosine distance metric for chromadb
* feat: use RecursiveCharacterTextSplitter as chunker strategy
* feat: support chunker and loader per data_type
* feat: adding JSON loader
* feat: adding CSVLoader
* feat: adding loader for DOCX files
* feat: add loader for MDX files
* feat: add loader for XML files
* feat: add loader for parser Webpage
* feat: support loading files from an entire directory
* feat: support auto-loading the loaders for additional DataTypes
* feat: add chunkers for specific data types
- Each chunker uses separators specific to its content type
* feat: prevent document duplication and centralize content management
- Implement document deduplication logic in RAG
* Check for existing documents by source reference
* Compare doc IDs to detect content changes
* Automatically replace outdated content while preventing duplicates
- Centralize common functionality for better maintainability
* Create SourceContent class to handle URLs, files, and text uniformly
* Extract shared utilities (compute_sha256) to misc.py
* Standardize doc ID generation across all loaders
- Improve RAG system architecture
* All loaders now inherit consistent patterns from centralized BaseLoader
* Better separation of concerns with dedicated content management classes
* Standardized LoaderResult structure across all loader implementations
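The deduplication flow described above can be sketched roughly as follows; `compute_sha256` is named in the log itself, while `DocStore`, its `add` method, and the return values are illustrative assumptions only:

```python
import hashlib


def compute_sha256(text: str) -> str:
    """Shared utility for content hashing (as extracted to misc.py)."""
    return hashlib.sha256(text.encode("utf-8")).hexdigest()


class DocStore:
    """Toy store: one document per source, replaced when content changes."""

    def __init__(self):
        self._by_source = {}  # source reference -> (doc_id, content)

    def add(self, source: str, content: str) -> str:
        # Doc ID derives from source + content, so changed content yields
        # a new ID and the outdated document gets replaced.
        doc_id = compute_sha256(source + content)
        existing = self._by_source.get(source)
        if existing and existing[0] == doc_id:
            return "skipped"  # identical content already stored
        self._by_source[source] = (doc_id, content)
        return "replaced" if existing else "added"
```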
* chore: split text loaders file
* test: adding missing tests about RAG loaders
* refactor: QOL
* fix: add missing uv syntax on DOCXLoader
* Stagehand tool improvements
This commit significantly improves the StagehandTool reliability and usability when working with CrewAI agents by addressing several critical
issues:
## Key Improvements
### 1. Atomic Action Support
- Added _extract_steps() method to break complex instructions into individual steps
- Added _simplify_instruction() method for intelligent error recovery
- Sequential execution of micro-actions with proper DOM settling between steps
- Prevents token limit issues on complex pages by encouraging scoped actions
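A rough sketch of what an `_extract_steps`-style decomposition might look like; the split heuristics here (semicolons, "then"/"and then", numbered prefixes) are assumptions, not the actual implementation:

```python
import re


def extract_steps(instruction: str) -> list[str]:
    """Split a compound instruction into atomic steps (hypothetical sketch).

    Each resulting step can then be executed individually with DOM settling
    in between, instead of sending one oversized instruction.
    """
    parts = re.split(
        r"\s*(?:;|,?\s*(?:\band\s+)?\bthen\s+|\d+\.\s*)\s*",
        instruction,
        flags=re.IGNORECASE,
    )
    return [p.strip() for p in parts if p.strip()]
```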
### 2. Enhanced Schema Design
- Made instruction field optional to handle navigation-only commands
- Added smart defaults for missing instructions based on command_type
- Improved field descriptions to guide agents toward atomic actions with location context
- Prevents "instruction Field required" validation errors
### 3. Intelligent API Key Management
- Added _get_model_api_key() method with automatic detection based on model type
- Support for OpenAI (GPT), Anthropic (Claude), and Google (Gemini) API keys
- Removes need for manual model API key configuration
### 4. Robust Error Recovery
- Step-by-step execution with individual error handling per atomic action
- Automatic retry with simplified instructions when complex actions fail
- Comprehensive error logging and reporting for debugging
- Graceful degradation instead of complete failure
### 5. Token Management & Performance
- Tool descriptions encourage atomic, scoped actions (e.g., "click search box in header")
- Prevents "prompt too long" errors on complex pages like Wikipedia
- Location-aware instruction patterns for better DOM targeting
- Reduced observe-act cycles through better instruction decomposition
### 6. Enhanced Testing Support
- Comprehensive async mock objects for testing mode
- Proper async/sync compatibility for different execution contexts
- Enhanced resource cleanup and session management
* Update stagehand_tool.py
removing FixedStagehandTool in favour of StagehandTool
* removed comment
* Cleanup
Removed unused class
Improved tool description
* feat: add SerperScrapeWebsiteTool for extracting clean content from URLs
* feat: add required SERPER_API_KEY env var validation to SerperScrapeWebsiteTool
* Enhance EnterpriseActionTool with improved schema processing and error handling
- Added methods for sanitizing names and processing schema types, including support for nested models and nullable types.
- Improved error handling during schema creation and processing, with warnings for failures.
- Updated parameter handling in the `_run` method to clean up `kwargs` before sending requests.
- Introduced a detailed description generation for nested schema structures to enhance tool documentation.
* Add tests for EnterpriseActionTool schema conversion and validation
- Introduced a new test class for validating complex nested schemas in EnterpriseActionTool.
- Added tests for schema conversion, optional fields, enum validation, and required nested fields.
- Implemented execution tests to ensure the tool can handle complex validated input correctly.
- Verified model naming conventions and added tests for simpler schemas with basic enum validation.
- Enhanced overall test coverage for the EnterpriseActionTool functionality.
* Update chromadb dependency version in pyproject.toml and uv.lock
- Changed chromadb version from >=0.4.22 to ==0.5.23 in both pyproject.toml and uv.lock to ensure compatibility and stability.
* Update test workflow configuration
- Changed EMBEDCHAIN_DB_URI to point to a temporary test database location.
- Added CHROMA_PERSIST_PATH for specifying the path to the Chroma test database.
- Cleaned up the test run command in the workflow file.
* reverted
* INTPYTHON-580 Design and Implement MongoDBVectorSearchTool
* add implementation
* wip
* wip
* finish tests
* add todo
* refactor to wrap langchain-mongodb
* cleanup
* address review
* Fix usage of EnvVar class
* inline code
* lint
* lint
* fix usage of SearchIndexModel
* Refactor: Update EnvVar import path and remove unused tests.utils module
- Changed import of EnvVar from tests.utils to crewai.tools in multiple files.
- Updated README.md for MongoDB vector search tool with additional context.
- Modified subprocess command in vector_search.py for package installation.
- Cleaned up test_generate_tool_specs.py to improve mock patching syntax.
- Deleted unused tests/utils.py file.
* update the crewai dep and the lockfile
* chore: update package versions and dependencies in uv.lock
- Removed `auth0-python` package.
- Updated `crewai` version to 0.140.0 and adjusted its dependencies.
- Changed `json-repair` version to 0.25.2.
- Updated `litellm` version to 1.72.6.
- Modified dependency markers for several packages to improve compatibility with Python versions.
* refactor: improve MongoDB vector search tool with enhanced error handling and new dimensions field
- Added logging for error handling in the _run method and during client cleanup.
- Introduced a new 'dimensions' field in the MongoDBVectorSearchConfig for embedding vector size.
- Refactored the _run method to return JSON formatted results and handle exceptions gracefully.
- Cleaned up import statements and improved code readability.
* address review
* update tests
* debug
* fix test
* fix test
* fix test
* support azure openai
---------
Co-authored-by: lorenzejay <lorenzejaytech@gmail.com>
* - Added CouchbaseFTSVectorStore as a CrewAI tool.
- Wrote a README to setup the tool.
- Wrote test cases.
- Added Couchbase as an optional dependency in the project.
* Fixed naming in some places. Added docstrings. Added instructions on how to create a vector search index.
* Fixed pyproject.toml
* error handling and response format
- Removed unnecessary ImportError for missing 'couchbase' package.
- Changed response format from a concatenated string to a JSON array for search results.
- Updated error handling to return error messages instead of raising exceptions in certain cases.
- Adjusted tests to reflect changes in response format and error handling.
* Update dependencies in pyproject.toml and uv.lock
- Changed pydantic version from 2.6.1 to 2.10.6 in both pyproject.toml and uv.lock.
- Updated crewai-tools version from 0.42.2 to 0.42.3 in uv.lock.
- Adjusted pydantic-core version from 2.33.1 to 2.27.2 in uv.lock, reflecting the new pydantic version.
* Removed restrictive pydantic version and updated uv.lock
* synced lockfile
* regenerated lockfile
* updated lockfile
* regenerated lockfile
* Update tool specifications for
* Fix test cases
---------
Co-authored-by: AayushTyagi1 <tyagiaayush5@gmail.com>
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
* refactor: enhance schema handling in EnterpriseActionTool
- Extracted schema property and required field extraction into separate methods for better readability and maintainability.
- Introduced methods to analyze field types and create Pydantic field definitions based on nullability and requirement status.
- Updated the _run method to handle required nullable fields, ensuring they are set to None if not provided in kwargs.
* refactor: streamline nullable field handling in EnterpriseActionTool
- Removed commented-out code related to handling required nullable fields for clarity.
- Simplified the logic in the _run method to focus on processing parameters without unnecessary comments.
- Added TYPE_CHECKING imports for FirecrawlApp to enhance type safety.
- Updated configuration keys in FirecrawlCrawlWebsiteTool and FirecrawlScrapeWebsiteTool to camelCase for consistency.
- Introduced error handling in the _run methods of both tools to ensure FirecrawlApp is properly initialized before usage.
- Adjusted parameters passed to crawl_url and scrape_url methods to use 'params' instead of unpacking the config dictionary directly.
* feat: add support for parsing actions list from environment variables
This commit introduces a new function, _parse_actions_list, to handle the parsing of a string representation of a list of tool names from environment variables. The CrewaiEnterpriseTools now utilizes this function to filter tools based on the parsed actions list, enhancing flexibility in tool selection. Additionally, a new test case is added to verify the correct usage of the environment actions list.
* test: simplify environment actions list test setup
This commit refactors the test for CrewaiEnterpriseTools to streamline the setup of environment variables. The environment token and actions list are now set in a single patch.dict call, improving readability and reducing redundancy in the test code.
* refactor: remove token validation from EnterpriseActionKitToolAdapter and CrewaiEnterpriseTools
This commit simplifies the initialization of the EnterpriseActionKitToolAdapter and CrewaiEnterpriseTools by removing the explicit validation for the enterprise action token. The token can now be set to None without raising an error, allowing for more flexible usage.
* added loggers for monitoring
* fixed typo
* fix: enhance token handling in EnterpriseActionKitToolAdapter and CrewaiEnterpriseTools
This commit improves the handling of the enterprise action token by allowing it to be fetched from environment variables if not provided. It adds checks to ensure the token is set before making API requests, enhancing robustness and flexibility.
* removed redundancy
* test: add new test for environment token fallback in CrewaiEnterpriseTools
This update introduces a new test case to verify that the environment token is used when no token is provided during the initialization of CrewaiEnterpriseTools. Additionally, minor formatting adjustments were made to existing assertions for consistency.
* test: update environment token test to clear environment variables
This change modifies the test for CrewaiEnterpriseTools to ensure that the environment variables are cleared before setting the test token. This ensures a clean test environment and prevents potential interference from other tests.
* drop redundancy
* feat: support complex filters on ToolCollection
* refactor: use the proper ToolCollection method to filter tools in CrewAiEnterpriseTools
* feat: allow to filter available MCP tools
This change allows accessing tools by name (tools["tool_name"]) in addition to
index (tools[0]), making it more intuitive and convenient to work with multiple
tools without needing to remember their position in the list
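Name-or-index access like this is typically a `__getitem__` override; a minimal self-contained sketch (not the actual ToolCollection code, and `StubTool` is a stand-in):

```python
class ToolCollection(list):
    """Sketch: tools addressable by position or by their name attribute."""

    def __getitem__(self, key):
        if isinstance(key, str):
            # String keys do a linear lookup by tool name.
            for tool in self:
                if getattr(tool, "name", None) == key:
                    return tool
            raise KeyError(f"no tool named {key!r}")
        return super().__getitem__(key)


class StubTool:
    def __init__(self, name: str):
        self.name = name


tools = ToolCollection([StubTool("scrape"), StubTool("search")])
```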
* feat: explicitly add package_dependencies in the Tools
* feat: collect package_dependencies from Tool to add to tool.specs.json
* feat: add default values to run_params in Tool specs
* fix: support getting boolean values
This commit also refactors tests to make it easier to define the newest attributes on a Tool
* feat: generate tool specs file based on their schema definition
* generate tool spec after publishing a new release
* feat: support adding available env vars to tool.specs.json
* refactor: use better identifier names on tool specs
* feat: change tool specs generation to run daily
* feat: add auth token to notify api about tool changes
* refactor: use humanized_name instead of verbose_name
* refactor: generate tool spec after pushing to main
This commit also fixes the remote upstream and updates the notify API
* feat: add ZapierActionTool and ZapierActionsAdapter for integrating with Zapier actions
- Introduced ZapierActionTool to execute Zapier actions with dynamic parameter handling.
- Added ZapierActionsAdapter to fetch available Zapier actions and convert them into BaseTool instances.
- Updated __init__.py files to include new tools and ensure proper imports.
- Created README.md for ZapierActionTools with installation instructions and usage examples.
* fix: restore ZapierActionTool import and enhance logging in Zapier adapter
- Reintroduced the import of ZapierActionTool in __init__.py for proper accessibility.
- Added logging for error handling in ZapierActionsAdapter to improve debugging.
- Updated ZapierActionTools factory function to include logging for missing API key.
* feat(tavily): add TavilyExtractorTool and TavilySearchTool with documentation
* feat(tavily): enhance TavilyExtractorTool and TavilySearchTool with additional parameters and improved error handling
* fix(tavily): update installation instructions for 'tavily-python' package in TavilyExtractorTool and TavilySearchTool
---------
Co-authored-by: lorenzejay <lorenzejaytech@gmail.com>
The built-in `callable` type is not subscriptable, and thus not usable
in a type annotation. When this tool is used, this warning is generated:
```
.../_generate_schema.py:623: UserWarning: <built-in function callable> is not a Python type (it may be an instance of an object), Pydantic will allow any object with no validation since we cannot even enforce that the input is an instance of the given type. To get rid of this error wrap the type with `pydantic.SkipValidation`.
```
This change fixes the warning.
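The corresponding fix is to annotate with `typing.Callable` instead of the built-in `callable`; a small illustration (model and field names here are hypothetical, not the tool's actual schema):

```python
from typing import Callable, Optional

from pydantic import BaseModel


class HookConfig(BaseModel):
    # The built-in `callable` is a predicate function, not a type, so using
    # it as an annotation makes Pydantic skip validation and emit the
    # UserWarning quoted above. typing.Callable is the subscriptable
    # annotation Pydantic understands and validates.
    on_result: Optional[Callable[[str], None]] = None
```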
* FileCompressorTool with support for files and subdirectories
* README.md
* Updated files_compressor_tool.py
* Enhanced FileCompressorTool with different compression formats
* Update README.md
* Updated with lookup tables
* Updated files_compressor_tool.py
* Added Test Cases
* Removing Test_Cases.md in order to update with the correct test cases per the review
* Added Test Cases
* Test cases with patch, MagicMock
* Removed empty lines
* Updated test cases, ensured maximum scenario coverage
* Deleting old one
* Updated __init__.py to include FileCompressorTool
* Update __init__.py to add FileCompressorTool
* fix FirecrawlScrapeWebsiteTool: add missing config parameter and correct Dict type annotation
- Add required config parameter when creating the tool
- Change type hint from `dict` to `Dict` to resolve Pydantic validation issues
* Update firecrawl_scrape_website_tool.py
- removing optional config
- removing timeout from Pydantic model
* Removing config from __init__
- removing config from __init__
* Update firecrawl_scrape_website_tool.py
- removing timeout
* fix: remove kwargs from all (except mysql & pg) RagTools
The agent uses the tool description to decide what to propagate when a tool with **kwargs is found, but this often leads to failures during the tool invocation step.
This happens because the final description ends up like this:
```
CrewStructuredTool(name='Knowledge base', description='Tool Name: Knowledge base
Tool Arguments: {'query': {'description': None, 'type': 'str'}, 'kwargs': {'description': None, 'type': 'Any'}}
Tool Description: A knowledge base that can be used to answer questions.')
```
The agent then tries to infer and pass a kwargs parameter, which isn’t supported by the schema at all.
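In other words, the fix is to declare only real arguments in the tool's run signature; a schematic before/after (simplified stand-in, not the actual RagTool):

```python
from typing import Any


class RagTool:
    """Schematic only; shows the signature change, not the real tool."""

    # Before: **kwargs shows up in the generated argument schema as
    # {'kwargs': {'type': 'Any'}}, so the agent tries to pass it literally.
    def _run_with_kwargs(self, query: str, **kwargs: Any) -> str:
        return f"answer for {query}"

    # After: only arguments the tool actually supports are declared, so the
    # schema advertises exactly {'query': {'type': 'str'}}.
    def _run(self, query: str) -> str:
        return f"answer for {query}"
```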
* feat: adding test to search tools
* feat: add db (chromadb folder) to .gitignore
* fix: fix github search integration
A few attributes were missing when calling the .add method: data_type and loader.
Also, updated the search query: according to the EmbedChain documentation, the query must include the type and repo keys
* fix: rollback YoutubeChannel parameter
* chore: fix type hinting for CodeDocs search
* fix: ensure proper configuration when call `add`
According to the documentation, some search methods must be defined as either a loader or a data_type. This commit ensures that.
* build: add optional-dependencies for github and xml search
* test: mocking external requests from search_tool tests
* build: add pytest-recording as a dev dependency
* Enhance EnterpriseActionKitToolAdapter to support custom project IDs
- Updated the EnterpriseActionKitToolAdapter and EnterpriseActionTool classes to accept an optional project_id parameter, allowing for greater flexibility in API interactions.
- Modified API URL construction to utilize the provided project_id instead of a hardcoded default.
- Updated the CrewaiEnterpriseTools factory function to accept and pass the project_id to the adapter.
* for factory in mind
* Allow setting custom LLM for the vision tool
Defaults to gpt-4o-mini otherwise
* Enhance VisionTool with model management and improved initialization
- Added support for setting a custom model identifier with a default of "gpt-4o-mini".
- Introduced properties for model management, allowing dynamic updates and resetting of the LLM instance.
- Updated the initialization method to accept an optional LLM and model parameter.
- Refactored the image processing logic for clarity and efficiency.
* docstrings
* Add stop config
---------
Co-authored-by: lorenzejay <lorenzejaytech@gmail.com>
* Corrected to adapt to firecrawl package usage
Was leading to a 'too many arguments' error when calling the crawl_url() function
* Corrected to adapt to firecrawl package usage
Corrected to avoid a 'too many arguments' error when calling the firecrawl scrape_url function
* Corrected to adapt to firecrawl package usage
Corrected to avoid a 'too many arguments' error when calling the firecrawl search() function
* fix: fix firecrawl integration
* feat: support define Firecrawl using any config
Previously we pre-defined the available parameters for calling Firecrawl; this commit adds support for receiving any parameter and propagating it
* docs: added docstrings to Firecrawl classes
---------
Co-authored-by: Lucas Gomide <lucaslg200@gmail.com>
- Removed the main execution block that included token validation and agent/task setup for testing.
- This change streamlines the adapter's code, focusing on its core functionality without execution logic.
- Introduced EnterpriseActionTool to execute specific enterprise actions with dynamic parameter validation.
- Added EnterpriseActionKitToolAdapter to manage and create tool instances for available enterprise actions.
- Implemented methods for fetching action schemas from the API and creating corresponding tools.
- Enhanced error handling and provided detailed descriptions for tool parameters.
- Included a main execution block for testing the adapter with a sample agent and task setup.
* feat: add a safety sandbox to run Python code
This sandbox blocks a bunch of dangerous imports and built-in functions
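A toy version of such a sandbox, shown only to illustrate the blocklist idea; the blocked sets are illustrative and this is not a real security boundary (nor the actual crewai sandbox):

```python
import builtins

# Illustrative blocklists; a real sandbox needs far more than this.
BLOCKED_IMPORTS = {"os", "sys", "subprocess", "shutil", "socket"}
BLOCKED_BUILTINS = {"open", "exec", "eval", "__import__", "input"}


def run_sandboxed(code: str) -> dict:
    """Run code with dangerous imports and builtins removed (toy sketch)."""

    def guarded_import(name, *args, **kwargs):
        # Reject blocked top-level modules; delegate the rest.
        if name.split(".")[0] in BLOCKED_IMPORTS:
            raise ImportError(f"import of {name!r} is blocked")
        return __import__(name, *args, **kwargs)

    safe_builtins = {
        name: getattr(builtins, name)
        for name in dir(builtins)
        if name not in BLOCKED_BUILTINS
    }
    safe_builtins["__import__"] = guarded_import

    scope: dict = {"__builtins__": safe_builtins}
    exec(code, scope)  # the guarded builtins apply inside this exec
    return scope
```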
* feat: add more logs and warning about code execution
* test: add tests to cover sandbox code execution
* docs: add Google-style docstrings and type hints to printer and code_interpreter
* chore: renaming globals and locals parameters
---------
Co-authored-by: Greyson Lalonde <greyson.r.lalonde@gmail.com>
* Add chunk reading functionality to FileReadTool
- Added start_line parameter to specify which line to start reading from
- Added line_count parameter to specify how many lines to read
- Updated documentation with new parameters and examples
* [FIX] Bugs and discussions
Fixed: negative start_line value
Improved: file reading operations
* [IMPROVE] Simplify line selection
* [REFACTOR] use mock_open while preserving essential filesystem tests
* mcp server proposal
* Refactor MCP server implementation: rename MCPServer to MCPServerAdapter and update usage examples. Adjust error message for optional dependencies installation.
* Update MCPServerAdapter usage examples to remove unnecessary parameters in context manager instantiation.
* Refactor MCPServerAdapter to move optional dependency imports inside the class constructor, improving error handling for missing dependencies.
* Enhance MCPServerAdapter by adding type hinting for server parameters and improving error handling during server startup. Optional dependency imports are now conditionally loaded, ensuring clearer error messages for missing packages.
* Refactor MCPServerAdapter to improve error handling for missing 'mcp' package. Conditional imports are now used, prompting users to install the package if not found, enhancing user experience during server initialization.
* Refactor MCPServerAdapter to ensure proper cleanup after usage. Removed redundant exception handling and ensured that the server stops in a finally block, improving resource management.
* add documentation
* fix typo close -> stop
* add tests and fix double call with context manager
* Enhance MCPServerAdapter with logging capabilities and improved error handling during initialization. Added logging for cleanup errors and refined the structure for handling missing 'mcp' package dependencies.
---------
Co-authored-by: lorenzejay <lorenzejaytech@gmail.com>
- Refactor Selenium scraping tool to use single driver instance
- Add headless mode configuration for Chrome
- Improve error handling with try/finally
- Simplify code structure and improve maintainability
Fix Firecrawl request errors (`[{'code': 'unrecognized_keys', 'keys': ['crawlerOptions', 'timeout'], 'path': [], 'message': 'Unrecognized key in body -- please review the v1 API documentation for request body changes'}]`) caused by the API being updated to v1. I updated the sent parameters to match v1 and updated their descriptions in the README file
- Re-add country (gl), location, and locale (hl) parameters to SerperDevTool class
- Update payload construction in _make_api_request to include localization params
- Add schema validation for localization parameters
- Update documentation and examples to demonstrate parameter usage
These parameters were accidentally removed in the previous enhancement PR and are crucial for:
- Getting region-specific search results (via country/gl)
- Targeting searches to specific cities (via location)
- Getting results in specific languages (via locale/hl)
BREAKING CHANGE: None - This restores previously available functionality
This commit adds a new StagehandTool that integrates Stagehand's AI-powered web automation capabilities into CrewAI. The tool provides access to Stagehand's three core APIs:
- act: Perform web interactions
- extract: Extract information from web pages
- observe: Monitor web page changes
Each function takes atomic instructions to increase reliability.
Co-Authored-By: Joe Moura <joao@crewai.com>
- Add comprehensive URL validation in schema and _create_driver
- Add URL format, length, and character validation
- Add meaningful error messages for validation failures
- Add return_html usage examples in README.md
Co-Authored-By: Joe Moura <joao@crewai.com>
- When passing result_as_answer=True, it will return ToolOutput so it won't pass pydantic validation as a string
- Get content of ToolOutput before return
- Add support for multiple search types (general and news)
- Implement knowledge graph integration
- Add structured result processing for organic results, "People Also Ask", and related searches
- Enhance error handling with try-catch blocks and logging
- Update documentation with comprehensive feature list and usage examples
- Fixed an issue where multiple tools failed to function if parameters were provided after tool creation.
- Updated tools to correctly process source file/URL passed by the agent post-creation as per documentation.
Closes #47
Added two additional functionalities:
1) added the ability to save the search results to a file
2) added the ability to set the number of results returned
Can be used as follows:
```python
serper_tool = SerperDevTool(file_save=True, n_results=20)
```
- Wrapped the file reading functionality inside a `_run` method.
- Added error handling to return a descriptive error message if an exception occurs during file reading.
In the original code, n_results is always None, so you always get only 10 results from Serper. With this change, an explicitly set n_results parameter on a SerperDevTool object is taken into account.
```
@@ -161,6 +186,10 @@ class BaseAgent(ABC, BaseModel):
        default=None,
        description="Knowledge configuration for the agent such as limits and threshold",
    )
    apps: list[PlatformAppOrAction] | None = Field(
        default=None,
        description="List of applications or application/action combinations that the agent can access through CrewAI Platform. Can contain app names (e.g., 'gmail') or specific actions (e.g., 'gmail/send_email')",
    )

    @model_validator(mode="before")
    @classmethod
@@ -196,6 +225,24 @@ class BaseAgent(ABC, BaseModel):
        )
        return processed_tools

    @field_validator("apps")
    @classmethod
    def validate_apps(
        cls, apps: list[PlatformAppOrAction] | None
    ) -> list[PlatformAppOrAction] | None:
        if not apps:
            return apps

        validated_apps = []
        for app in apps:
            if app.count("/") > 1:
                raise ValueError(
                    f"Invalid app format '{app}'. Apps can only have one '/' for app/action format (e.g., 'gmail/send_email')"
                )
            validated_apps.append(app)

        return list(set(validated_apps))

    @model_validator(mode="after")
    def validate_and_set_attributes(self):
        # Validate required fields
@@ -266,6 +313,10 @@ class BaseAgent(ABC, BaseModel):
```