Compare commits

...

6 Commits

Author SHA1 Message Date
Devin AI
56ee5c1e86 Fix lint error: remove unused import os from docs styling test
Co-Authored-By: João <joao@crewai.com>
2025-06-23 19:37:44 +00:00
Devin AI
e9bf07cbdc Address AI crew review feedback
- Consolidate redundant CSS selectors into grouped rules
- Add CSS custom properties for better maintainability
- Implement responsive design with media queries for mobile
- Add vendor prefixes (-webkit-overflow-scrolling) for iOS
- Improve test suite with pytest fixtures to reduce redundancy
- Add better error messages and encoding specifications
- Add comprehensive test coverage for new CSS features

This addresses all actionable feedback from the AI crew review
while maintaining the core fix for Frame component width issues.

Co-Authored-By: João <joao@crewai.com>
2025-06-23 19:25:16 +00:00
Devin AI
6c8677a2d7 Fix documentation file structure component width issue
- Add custom CSS file (docs/style.css) to fix Frame component width
- Target Frame components with min-width: 300px to prevent collapse
- Add overflow-x: auto for horizontal scrolling when needed
- Include comprehensive CSS selectors for different Frame implementations
- Add tests (tests/docs_styling_test.py) to prevent regression
- Fixes issue #3049: file structure component has no width on installation page

The Frame component in the 'Generate Project Scaffolding' section was
collapsing due to lack of width styling. This fix ensures the file
structure displays properly with adequate width for readability.

Co-Authored-By: João <joao@crewai.com>
2025-06-23 19:21:03 +00:00
Rostyslav Borovyk
c96d4a6823 Add Oxylabs Web Scraping tools (#2905)
* Add Oxylabs tools

* Review updates

* Review updates

---------

Co-authored-by: Tony Kipkemboi <iamtonykipkemboi@gmail.com>
2025-06-23 13:58:16 -04:00
Lucas Gomide
59032817c7 docs: update recommendation filters for MCP and Enterprise tools (#3041)
2025-06-20 13:35:26 -04:00
Lucas Gomide
e9d8a853ea feat: support to initialize a tool from defined Tool attributes (#3023)
* feat: support to initialize a tool from defined Tool attributes

* fix: ensure Agent is able to load a list of Tools dynamically
2025-06-20 10:53:37 -04:00
9 changed files with 415 additions and 10 deletions

View File

@@ -134,7 +134,7 @@
"tools/web-scraping/stagehandtool",
"tools/web-scraping/firecrawlcrawlwebsitetool",
"tools/web-scraping/firecrawlscrapewebsitetool",
"tools/web-scraping/firecrawlsearchtool"
"tools/web-scraping/oxylabsscraperstool"
]
},
{

View File

@@ -124,7 +124,7 @@ from crewai_tools import CrewaiEnterpriseTools
 enterprise_tools = CrewaiEnterpriseTools(
     actions_list=["gmail_find_email"] # only gmail_find_email tool will be available
 )
-gmail_tool = enterprise_tools[0]
+gmail_tool = enterprise_tools["gmail_find_email"]
 gmail_agent = Agent(
     role="Gmail Manager",

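For context, this change swaps positional indexing for name-based lookup. A minimal sketch of the full pattern, assuming `CrewaiEnterpriseTools` supports string keys as the updated line implies (goal and backstory here are illustrative, not from the diff):

```python
from crewai import Agent
from crewai_tools import CrewaiEnterpriseTools

# Expose only the action that is actually needed
enterprise_tools = CrewaiEnterpriseTools(
    actions_list=["gmail_find_email"]
)

# Look the tool up by name instead of relying on its position in the collection
gmail_tool = enterprise_tools["gmail_find_email"]

gmail_agent = Agent(
    role="Gmail Manager",
    goal="Find and summarize relevant emails",        # illustrative
    backstory="Handles inbox lookups for the crew.",  # illustrative
    tools=[gmail_tool],
)
```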
View File

@@ -85,6 +85,22 @@ with MCPServerAdapter(server_params) as mcp_tools:
```
This general pattern shows how to integrate tools. For specific examples tailored to each transport, refer to the detailed guides below.
## Filtering Tools
```python
with MCPServerAdapter(server_params) as mcp_tools:
print(f"Available tools: {[tool.name for tool in mcp_tools]}")
my_agent = Agent(
role="MCP Tool User",
goal="Utilize tools from an MCP server.",
backstory="I can connect to MCP servers and use their tools.",
tools=mcp_tools["tool_name"], # Pass the loaded tools to your agent
reasoning=True,
verbose=True
)
# ... rest of your crew setup ...
```
## Explore MCP Integrations
<CardGroup cols={2}>

docs/style.css — new file (47 lines)
View File

@@ -0,0 +1,47 @@
/* Fix for Frame component width issue in installation docs */
/* Ensures file structure displays with proper width */

/* CSS Custom Properties for maintainability */
:root {
  --frame-min-width: 300px;
  --frame-width: 100%;
  --frame-pre-min-width: 280px;
}

/* Consolidated Frame component selectors */
.frame-container,
[data-component="frame"],
.frame,
div[class*="frame"] {
  min-width: var(--frame-min-width);
  width: var(--frame-width);
  overflow-x: auto;
  -webkit-overflow-scrolling: touch; /* Smooth scrolling on iOS */
}

/* Pre elements within Frame components */
.frame-container pre,
[data-component="frame"] pre,
.frame pre,
div[class*="frame"] pre {
  min-width: var(--frame-pre-min-width);
  white-space: pre;
  overflow-x: auto;
}

/* Responsive design for smaller screens */
@media screen and (max-width: 768px) {
  .frame-container,
  [data-component="frame"],
  .frame,
  div[class*="frame"] {
    min-width: 100%;
  }

  .frame-container pre,
  [data-component="frame"] pre,
  .frame pre,
  div[class*="frame"] pre {
    min-width: 100%;
  }
}

View File

@@ -56,6 +56,10 @@ These tools enable your agents to interact with the web, extract data from websi
<Card title="Stagehand Tool" icon="hand" href="/tools/web-scraping/stagehandtool">
Intelligent browser automation with natural language commands.
</Card>
<Card title="Oxylabs Scraper Tool" icon="globe" href="/tools/web-scraping/oxylabsscraperstool">
Access web data at scale with Oxylabs.
</Card>
</CardGroup>
## **Common Use Cases**
@@ -100,4 +104,4 @@ agent = Agent(
- **JavaScript-Heavy Sites**: Use `SeleniumScrapingTool` for dynamic content
- **Scale & Performance**: Use `FirecrawlScrapeWebsiteTool` for high-volume scraping
- **Cloud Infrastructure**: Use `BrowserBaseLoadTool` for scalable browser automation
- **Complex Workflows**: Use `StagehandTool` for intelligent browser interactions
- **Complex Workflows**: Use `StagehandTool` for intelligent browser interactions

View File

@@ -0,0 +1,236 @@
---
title: Oxylabs Scrapers
description: >
  Oxylabs Scrapers allow you to easily access information from the respective sources. Please see the list of available sources below:
  - `Amazon Product`
  - `Amazon Search`
  - `Google Search`
  - `Universal`
icon: globe
---
## Installation
Get your credentials by creating an Oxylabs account [here](https://oxylabs.io).
```shell
pip install 'crewai[tools]' oxylabs
```
Check [Oxylabs Documentation](https://developers.oxylabs.io/scraping-solutions/web-scraper-api/targets) to get more information about API parameters.
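The examples below assume the credentials are picked up from the environment. A minimal sketch of setting them programmatically (the values are placeholders):

```python
import os

# The Oxylabs tools expect these environment variables to be set before construction
# (see the "make sure OXYLABS_USERNAME and OXYLABS_PASSWORD variables are set" comments below).
os.environ["OXYLABS_USERNAME"] = "your-oxylabs-username"  # placeholder
os.environ["OXYLABS_PASSWORD"] = "your-oxylabs-password"  # placeholder
```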
# `OxylabsAmazonProductScraperTool`
### Example
```python
from crewai_tools import OxylabsAmazonProductScraperTool
# make sure OXYLABS_USERNAME and OXYLABS_PASSWORD variables are set
tool = OxylabsAmazonProductScraperTool()
result = tool.run(query="AAAAABBBBCC")
print(result)
```
### Parameters
- `query` - 10-character ASIN code.
- `domain` - domain localization for Amazon.
- `geo_location` - the _Deliver to_ location.
- `user_agent_type` - device type and browser.
- `render` - enables JavaScript rendering when set to `html`.
- `callback_url` - URL to your callback endpoint.
- `context` - Additional advanced settings and controls for specialized requirements.
- `parse` - returns parsed data when set to true.
- `parsing_instructions` - define your own parsing and data transformation logic that will be executed on an HTML scraping result.
### Advanced example
```python
from crewai_tools import OxylabsAmazonProductScraperTool
# make sure OXYLABS_USERNAME and OXYLABS_PASSWORD variables are set
tool = OxylabsAmazonProductScraperTool(
    config={
        "domain": "com",
        "parse": True,
        "context": [
            {
                "key": "autoselect_variant",
                "value": True
            }
        ]
    }
)
result = tool.run(query="AAAAABBBBCC")
print(result)
```
# `OxylabsAmazonSearchScraperTool`
### Example
```python
from crewai_tools import OxylabsAmazonSearchScraperTool
# make sure OXYLABS_USERNAME and OXYLABS_PASSWORD variables are set
tool = OxylabsAmazonSearchScraperTool()
result = tool.run(query="headsets")
print(result)
```
### Parameters
- `query` - Amazon search term.
- `domain` - domain localization for Amazon.
- `start_page` - starting page number.
- `pages` - number of pages to retrieve.
- `geo_location` - the _Deliver to_ location.
- `user_agent_type` - device type and browser.
- `render` - enables JavaScript rendering when set to `html`.
- `callback_url` - URL to your callback endpoint.
- `context` - Additional advanced settings and controls for specialized requirements.
- `parse` - returns parsed data when set to true.
- `parsing_instructions` - define your own parsing and data transformation logic that will be executed on an HTML scraping result.
### Advanced example
```python
from crewai_tools import OxylabsAmazonSearchScraperTool
# make sure OXYLABS_USERNAME and OXYLABS_PASSWORD variables are set
tool = OxylabsAmazonSearchScraperTool(
    config={
        "domain": 'nl',
        "start_page": 2,
        "pages": 2,
        "parse": True,
        "context": [
            {'key': 'category_id', 'value': 16391693031}
        ],
    }
)
result = tool.run(query='nirvana tshirt')
print(result)
```
# `OxylabsGoogleSearchScraperTool`
### Example
```python
from crewai_tools import OxylabsGoogleSearchScraperTool
# make sure OXYLABS_USERNAME and OXYLABS_PASSWORD variables are set
tool = OxylabsGoogleSearchScraperTool()
result = tool.run(query="iPhone 16")
print(result)
```
### Parameters
- `query` - search keyword.
- `domain` - domain localization for Google.
- `start_page` - starting page number.
- `pages` - number of pages to retrieve.
- `limit` - number of results to retrieve in each page.
- `locale` - `Accept-Language` header value which changes your Google search page web interface language.
- `geo_location` - the geographical location that the result should be adapted for. Using this parameter correctly is extremely important to get the right data.
- `user_agent_type` - device type and browser.
- `render` - enables JavaScript rendering when set to `html`.
- `callback_url` - URL to your callback endpoint.
- `context` - Additional advanced settings and controls for specialized requirements.
- `parse` - returns parsed data when set to true.
- `parsing_instructions` - define your own parsing and data transformation logic that will be executed on an HTML scraping result.
### Advanced example
```python
from crewai_tools import OxylabsGoogleSearchScraperTool
# make sure OXYLABS_USERNAME and OXYLABS_PASSWORD variables are set
tool = OxylabsGoogleSearchScraperTool(
    config={
        "parse": True,
        "geo_location": "Paris, France",
        "user_agent_type": "tablet",
    }
)
result = tool.run(query="iPhone 16")
print(result)
```
# `OxylabsUniversalScraperTool`
### Example
```python
from crewai_tools import OxylabsUniversalScraperTool
# make sure OXYLABS_USERNAME and OXYLABS_PASSWORD variables are set
tool = OxylabsUniversalScraperTool()
result = tool.run(url="https://ip.oxylabs.io")
print(result)
```
### Parameters
- `url` - website url to scrape.
- `user_agent_type` - device type and browser.
- `geo_location` - sets the proxy's geolocation to retrieve data.
- `render` - enables JavaScript rendering when set to `html`.
- `callback_url` - URL to your callback endpoint.
- `context` - Additional advanced settings and controls for specialized requirements.
- `parse` - returns parsed data when set to `true`, as long as a dedicated parser exists for the submitted URL's page type.
- `parsing_instructions` - define your own parsing and data transformation logic that will be executed on an HTML scraping result.
### Advanced example
```python
from crewai_tools import OxylabsUniversalScraperTool
# make sure OXYLABS_USERNAME and OXYLABS_PASSWORD variables are set
tool = OxylabsUniversalScraperTool(
    config={
        "render": "html",
        "user_agent_type": "mobile",
        "context": [
            {"key": "force_headers", "value": True},
            {"key": "force_cookies", "value": True},
            {
                "key": "headers",
                "value": {
                    "Custom-Header-Name": "custom header content",
                },
            },
            {
                "key": "cookies",
                "value": [
                    {"key": "NID", "value": "1234567890"},
                    {"key": "1P JAR", "value": "0987654321"},
                ],
            },
            {"key": "http_method", "value": "get"},
            {"key": "follow_redirects", "value": True},
            {"key": "successful_status_codes", "value": [808, 909]},
        ],
    }
)
result = tool.run(url="https://ip.oxylabs.io")
print(result)
```
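The examples above call each tool's `run` method directly. A minimal sketch of handing one of these tools to an agent, assuming the standard crewAI `Agent`/`Task`/`Crew` API (role, goal, backstory, and task text are illustrative):

```python
from crewai import Agent, Task, Crew
from crewai_tools import OxylabsGoogleSearchScraperTool

# make sure OXYLABS_USERNAME and OXYLABS_PASSWORD variables are set
search_tool = OxylabsGoogleSearchScraperTool(config={"parse": True})

researcher = Agent(
    role="Market Researcher",                        # illustrative
    goal="Collect fresh search results on a topic",  # illustrative
    backstory="Uses web scraping tools to gather up-to-date data.",
    tools=[search_tool],
)

research_task = Task(
    description="Scrape Google results for 'iPhone 16' and summarize the top findings.",
    expected_output="A short summary of the top results.",
    agent=researcher,
)

crew = Crew(agents=[researcher], tasks=[research_task])
result = crew.kickoff()
print(result)
```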

View File

@@ -476,7 +476,14 @@ def load_agent_from_repository(from_repository: str) -> Dict[str, Any]:
         try:
             module = importlib.import_module(tool["module"])
             tool_class = getattr(module, tool["name"])
-            attributes[key].append(tool_class())
+            tool_value = tool_class(**tool["init_params"])
+
+            if isinstance(tool_value, list):
+                attributes[key].extend(tool_value)
+            else:
+                attributes[key].append(tool_value)
         except Exception as e:
             raise AgentRepositoryError(
                 f"Tool {tool['name']} could not be loaded: {e}"

View File

@@ -2099,7 +2099,7 @@ def mock_get_auth_token():
@patch("crewai.cli.plus_api.PlusAPI.get_agent")
def test_agent_from_repository(mock_get_agent, mock_get_auth_token):
from crewai_tools import SerperDevTool, XMLSearchTool
from crewai_tools import SerperDevTool, XMLSearchTool, CSVSearchTool, EnterpriseActionTool
mock_get_response = MagicMock()
mock_get_response.status_code = 200
@@ -2108,19 +2108,42 @@ def test_agent_from_repository(mock_get_agent, mock_get_auth_token):
"goal": "test goal",
"backstory": "test backstory",
"tools": [
{"module": "crewai_tools", "name": "SerperDevTool"},
{"module": "crewai_tools", "name": "XMLSearchTool"},
{"module": "crewai_tools", "name": "SerperDevTool", "init_params": {"n_results": 30}},
{"module": "crewai_tools", "name": "XMLSearchTool", "init_params": {"summarize": True}},
{"module": "crewai_tools", "name": "CSVSearchTool", "init_params": {}},
# using a tools that returns a list of BaseTools
{"module": "crewai_tools", "name": "CrewaiEnterpriseTools", "init_params": {"actions_list": [], "enterprise_token": "test_key"}},
],
}
mock_get_agent.return_value = mock_get_response
agent = Agent(from_repository="test_agent")
tool_action = EnterpriseActionTool(
name="test_name",
description="test_description",
enterprise_action_token="test_token",
action_name="test_action_name",
action_schema={"test": "test"},
)
with patch("crewai_tools.CrewaiEnterpriseTools", return_value=[tool_action]):
agent = Agent(from_repository="test_agent")
assert agent.role == "test role"
assert agent.goal == "test goal"
assert agent.backstory == "test backstory"
assert len(agent.tools) == 2
assert len(agent.tools) == 4
assert isinstance(agent.tools[0], SerperDevTool)
assert agent.tools[0].n_results == 30
assert isinstance(agent.tools[1], XMLSearchTool)
assert agent.tools[1].summarize
assert isinstance(agent.tools[2], CSVSearchTool)
assert not agent.tools[2].summarize
assert isinstance(agent.tools[3], EnterpriseActionTool)
assert agent.tools[3].name == "test_name"
@patch("crewai.cli.plus_api.PlusAPI.get_agent")
@@ -2133,7 +2156,7 @@ def test_agent_from_repository_override_attributes(mock_get_agent, mock_get_auth
"role": "test role",
"goal": "test goal",
"backstory": "test backstory",
"tools": [{"name": "SerperDevTool", "module": "crewai_tools"}],
"tools": [{"name": "SerperDevTool", "module": "crewai_tools", "init_params": {}}],
}
mock_get_agent.return_value = mock_get_response
agent = Agent(from_repository="test_agent", role="Custom Role")

View File

@@ -0,0 +1,72 @@
import pytest
from pathlib import Path


@pytest.fixture(scope="module")
def css_file_path():
    """Fixture providing the path to the CSS file."""
    return Path(__file__).parent.parent / "docs" / "style.css"


@pytest.fixture(scope="module")
def css_content(css_file_path):
    """Fixture providing the CSS file content."""
    with open(css_file_path, 'r', encoding='utf-8') as f:
        return f.read()


@pytest.fixture(scope="module")
def mdx_file_path():
    """Fixture providing the path to the installation MDX file."""
    return Path(__file__).parent.parent / "docs" / "installation.mdx"


def test_custom_css_file_exists(css_file_path):
    """Test that the custom CSS file exists in the docs directory."""
    assert css_file_path.exists(), (
        f"Custom CSS file not found at {css_file_path}. "
        "Please ensure style.css is present in the docs directory."
    )


def test_frame_component_styling(css_content):
    """Test that the CSS file contains proper Frame component styling."""
    assert 'frame' in css_content.lower(), "CSS should contain frame component styling"
    assert 'min-width' in css_content, "CSS should contain min-width property"
    assert 'overflow-x' in css_content, "CSS should contain overflow-x property"


def test_installation_mdx_has_frame_component(mdx_file_path):
    """Test that the installation.mdx file contains the Frame component."""
    with open(mdx_file_path, 'r', encoding='utf-8') as f:
        content = f.read()

    assert '<Frame>' in content, "installation.mdx should contain Frame component"
    assert 'my_project/' in content, "Frame should contain the file structure"


def test_css_contains_required_selectors(css_content):
    """Test that the CSS file contains all required selectors for Frame components."""
    required_selectors = [
        '.frame-container',
        '[data-component="frame"]',
        '.frame',
        'div[class*="frame"]'
    ]
    for selector in required_selectors:
        assert selector in css_content, f"CSS should contain selector: {selector}"


def test_css_has_proper_width_properties(css_content):
    """Test that the CSS file has proper width and overflow properties."""
    assert '--frame-min-width: 300px' in css_content, "CSS should define frame min-width variable"
    assert '--frame-width: 100%' in css_content, "CSS should define frame width variable"
    assert 'overflow-x: auto' in css_content, "CSS should set overflow-x to auto"
    assert 'white-space: pre' in css_content, "CSS should preserve whitespace in pre elements"


def test_css_values_are_valid(css_content):
    """Validate that critical CSS values are as expected."""
    assert '300px' in css_content, "Min-width should be 300px"
    assert '100%' in css_content, "Width should be 100%"
    assert 'var(--frame-min-width)' in css_content, "CSS should use custom properties"


def test_responsive_design_included(css_content):
    """Test that responsive design media queries are included."""
    assert '@media screen and (max-width: 768px)' in css_content, "CSS should include responsive media queries"


def test_vendor_prefixes_included(css_content):
    """Test that vendor prefixes are included for better browser compatibility."""
    assert '-webkit-overflow-scrolling: touch' in css_content, "CSS should include webkit overflow scrolling for iOS"
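These tests can be run on their own; a minimal sketch of invoking just this module programmatically, assuming pytest is available in the environment:

```python
import pytest

# Run only the docs styling tests, with verbose output
raise SystemExit(pytest.main(["tests/docs_styling_test.py", "-v"]))
```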