Serply API Documentation
Description
This tool is designed to perform web, news, and scholar searches for a specified query across the internet. It uses the Serply.io API to fetch and display the most relevant search results for the query provided by the user.
Installation
To incorporate this tool into your project, follow the installation instructions below:
pip install 'crewai[tools]'
Examples
Web Search
The following example demonstrates how to initialize the tool and execute a web search with a given query:
from crewai_tools import SerplyWebSearchTool
# Initialize the tool for internet searching capabilities
tool = SerplyWebSearchTool()
# increase search limits to 100 results
tool = SerplyWebSearchTool(limit=100)
# change results language (fr - French)
tool = SerplyWebSearchTool(hl="fr")
News Search
The following example demonstrates how to initialize the tool and execute a news search with a given query:
from crewai_tools import SerplyNewsSearchTool
# Initialize the tool for news searching capabilities
tool = SerplyNewsSearchTool()
# change the news country (JP - Japan)
tool = SerplyNewsSearchTool(proxy_location="JP")
Scholar Search
The following example demonstrates how to initialize the tool and execute a search for scholar articles with a given query:
from crewai_tools import SerplyScholarSearchTool
# Initialize the tool for scholar article searching
tool = SerplyScholarSearchTool()
# change the proxy location (GB - Great Britain)
tool = SerplyScholarSearchTool(proxy_location="GB")
Job Search
The following example demonstrates how to initialize the tool and search for jobs in the USA:
from crewai_tools import SerplyJobSearchTool
# Initialize the tool for job searching
tool = SerplyJobSearchTool()
Web Page To Markdown
The following example demonstrates how to initialize the tool, fetch a web page, and convert it to markdown:
from crewai_tools import SerplyWebpageToMarkdownTool
# Initialize the tool to fetch web pages and convert them to markdown
tool = SerplyWebpageToMarkdownTool()
# change the country the request is made from (DE - Germany)
tool = SerplyWebpageToMarkdownTool(proxy_location="DE")
Combining Multiple Tools
The following example demonstrates how to perform a web search to find relevant articles, then convert those articles to markdown format for easier extraction of key points.
from crewai import Agent
from crewai_tools import SerplyWebSearchTool, SerplyWebpageToMarkdownTool
search_tool = SerplyWebSearchTool()
convert_to_markdown = SerplyWebpageToMarkdownTool()
# Creating a senior researcher agent with memory and verbose mode
researcher = Agent(
    role='Senior Researcher',
    goal='Uncover groundbreaking technologies in {topic}',
    verbose=True,
    memory=True,
    backstory=(
        "Driven by curiosity, you're at the forefront of "
        "innovation, eager to explore and share knowledge that could change "
        "the world."
    ),
    tools=[search_tool, convert_to_markdown],
    allow_delegation=True
)
Steps to Get Started
To effectively use the Serply tools, follow these steps:
- Package Installation: Confirm that the crewai[tools] package is installed in your Python environment.
- API Key Acquisition: Acquire an API key by registering for a free account at Serply.io.
- Environment Configuration: Store your obtained API key in an environment variable named SERPLY_API_KEY to facilitate its use by the tool.
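The configuration step above can be sketched in Python. The key value here is a placeholder; substitute the API key from your own Serply.io account. This assumes, as the steps describe, that the tools read SERPLY_API_KEY from the environment when they are constructed:

```python
import os

# Store the Serply API key in the environment before constructing any tool.
# "your-serply-api-key" is a placeholder, not a real key.
os.environ["SERPLY_API_KEY"] = "your-serply-api-key"

# Any Serply tool created after this point can pick up the key,
# e.g. tool = SerplyWebSearchTool()
print(os.environ["SERPLY_API_KEY"])
```

Alternatively, export the variable in your shell or a .env file so it is set before the Python process starts.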
Conclusion
By integrating the Serply tools into Python projects, users gain the ability to conduct real-time web, news, and scholar searches directly from their applications. By adhering to the setup and usage guidelines provided, incorporating these tools into a project is streamlined and straightforward.