Expand MCP Integration documentation structure (#2922)
Some checks failed
Notify Downstream / notify-downstream (push) Has been cancelled
Mark stale issues and pull requests / stale (push) Has been cancelled

This commit is contained in:
Tony Kipkemboi
2025-05-30 17:38:36 -04:00
committed by GitHub
parent e07e11fbe7
commit 1da2fd2a5c
8 changed files with 819 additions and 424 deletions

View File

@@ -85,7 +85,12 @@
{
"group": "MCP Integration",
"pages": [
"mcp/crewai-mcp-integration"
"mcp/overview",
"mcp/stdio",
"mcp/sse",
"mcp/streamable-http",
"mcp/multiple-servers",
"mcp/security"
]
},
{

View File

@@ -1,423 +0,0 @@
---
title: 'MCP Servers as Tools in CrewAI'
description: 'Learn how to integrate MCP servers as tools in your CrewAI agents using the `crewai-tools` library.'
icon: 'plug'
---
## Overview
The [Model Context Protocol](https://modelcontextprotocol.io/introduction) (MCP) provides a standardized way for AI agents to provide context to LLMs by communicating with external services, known as MCP Servers.
The `crewai-tools` library extends CrewAI's capabilities by allowing you to seamlessly integrate tools from these MCP servers into your agents.
This gives your crews access to a vast ecosystem of functionalities.
We currently support the following transport mechanisms:
- **Stdio**: for local servers (communication via standard input/output between processes on the same machine)
- **Server-Sent Events (SSE)**: for remote servers (unidirectional, real-time data streaming from server to client over HTTP)
- **Streamable HTTP**: for remote servers (flexible, potentially bi-directional communication over HTTP, often utilizing SSE for server-to-client streams)
## Video Tutorial
Watch this video tutorial for a comprehensive guide on MCP integration with CrewAI:
<iframe
width="100%"
height="400"
src="https://www.youtube.com/embed/TpQ45lAZh48"
title="CrewAI MCP Integration Guide"
frameborder="0"
style={{ borderRadius: '10px' }}
allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture"
allowfullscreen
></iframe>
## Installation
Before you start using MCP with `crewai-tools`, you need to install the `mcp` extra `crewai-tools` dependency with the following command:
```shell
uv pip install 'crewai-tools[mcp]'
```
### Integrating MCP Tools with `MCPServerAdapter`
The `MCPServerAdapter` class from `crewai-tools` is the primary way to connect to an MCP server and make its tools available to your CrewAI agents.
It supports different transport mechanisms, including **Stdio** (for local servers), **SSE** (Server-Sent Events), and **Streamable HTTP** (for remote servers). You have two main options for managing the connection lifecycle:
### Option 1: Fully Managed Connection (Recommended)
Using a Python context manager (`with` statement) is the recommended approach. It automatically handles starting and stopping the connection to the MCP server.
**For a local Stdio-based MCP server:**
```python
from crewai import Agent, Task, Crew
from crewai_tools import MCPServerAdapter
from mcp import StdioServerParameters
import os
server_params=StdioServerParameters(
command="python3", # Or your specific python interpreter e.g., "python"
args=["path/to/your/stdio_server_script.py"], # Replace with the actual path to your server script
env={"UV_PYTHON": "3.12", **os.environ},
)
with MCPServerAdapter(server_params) as tools:
print(f"Available tools from Stdio MCP server: {[tool.name for tool in tools]}")
# Example: Using the tools from the Stdio MCP server in a CrewAI Agent
agent = Agent(
role="Web Information Retriever",
goal="Scrape content from a specified URL.",
backstory="An AI that can fetch and process web page data via an MCP tool.",
tools=tools,
verbose=True,
)
task = Task(
description="Scrape content from a specified URL.",
expected_output="Scraped content from the specified URL.",
agent=agent,
)
crew = Crew(
agents=[agent],
tasks=[task],
verbose=True,
)
result = crew.kickoff()
print(result)
```
**For a remote SSE-based MCP server:**
```python
from crewai_tools import MCPServerAdapter
from crewai import Agent, Task, Crew
server_params = {"url": "http://localhost:8000/sse"}
with MCPServerAdapter(server_params) as tools:
print(f"Available tools from SSE MCP server: {[tool.name for tool in tools]}")
# Example: Using the tools from the SSE MCP server in a CrewAI Agent
agent = Agent(
role="Web Information Retriever",
goal="Scrape content from a specified URL.",
backstory="An AI that can fetch and process web page data via an MCP tool.",
tools=tools,
verbose=True,
)
task = Task(
description="Scrape content from a specified URL.",
expected_output="Scraped content from the specified URL.",
agent=agent,
)
crew = Crew(
agents=[agent],
tasks=[task],
verbose=True,
)
result = crew.kickoff()
print(result)
```
**For a Streamable HTTP-based MCP server:**
```python
from crewai import Agent, Task, Crew
from crewai_tools import MCPServerAdapter
# Define parameters for the Streamable HTTP MCP server
server_params = {
"url": "http://localhost:8001/mcp", # Ensure this matches your server's URL
"transport": "streamable-http"
}
# Use MCPServerAdapter with a context manager
with MCPServerAdapter(server_params) as tools:
print(f"Available tools from Streamable HTTP MCP server: {[tool.name for tool in tools]}")
# Example: Using a tool from the server (e.g., a 'hello' tool)
hello_agent = Agent(
role="Greeter",
goal="Greet the user warmly.",
backstory="An expert in providing friendly greetings.",
tools=tools, # Assuming one of the tools is 'hello_tool' or similar
verbose=True
)
greeting_task = Task(
description="Greet the user named '{user}'.",
agent=hello_agent,
expected_output="A personalized and friendly greeting message.",
# If your tool or agent produces markdown, you can set markdown=True
# markdown=True
)
# Create the crew
greeting_crew = Crew(
agents=[hello_agent],
tasks=[greeting_task],
verbose=True
)
# Get user input for the task
user_name = input("What is your name? ")
result = greeting_crew.kickoff(inputs={"user": user_name})
print("\nFinal Greeting Output:\n", result)
```
### Option 2: More control over the MCP server connection lifecycle
If you need finer-grained control over the MCP server connection lifecycle, you can instantiate `MCPServerAdapter` directly and manage its `start()` and `stop()` methods.
<Info>
You **MUST** call `mcp_server_adapter.stop()` to ensure the connection is closed and resources are released. Using a `try...finally` block is highly recommended.
</Info>
#### Stdio Transport Example (Manual)
```python
from mcp import StdioServerParameters
from crewai_tools import MCPServerAdapter
from crewai import Agent, Task, Crew
import os
stdio_params = StdioServerParameters(
command="python3", # Or your specific python interpreter e.g., "python"
args=["path/to/your/stdio_server_script.py"], # Replace with the actual path to your server script
env={"UV_PYTHON": "3.12", **os.environ},
)
mcp_server_adapter = MCPServerAdapter(server_params=stdio_params)
try:
mcp_server_adapter.start() # Manually start the connection
tools = mcp_server_adapter.tools
print(f"Available tools (manual Stdio): {[tool.name for tool in tools]}")
# Use 'tools' with your Agent, Task, Crew setup as in Option 1
agent = Agent(
role="Medical Researcher",
goal="Find recent studies on a given topic using PubMed.",
backstory="An AI assistant specialized in biomedical literature research.",
tools=tools,
verbose=True
)
task = Task(
description="Search for recent articles on 'crispr gene editing'.",
expected_output="A summary of the top 3 recent articles.",
agent=agent
)
crew = Crew(
agents=[agent],
tasks=[task],
verbose=True,
process=Process.sequential
)
result = crew.kickoff()
print(result)
finally:
print("Stopping Stdio MCP server connection (manual)...")
mcp_server_adapter.stop() # **Crucial: Ensure stop is called**
```
#### SSE Transport Example (Manual)
```python
from crewai_tools import MCPServerAdapter
from crewai import Agent, Task, Crew, Process
from mcp import StdioServerParameters
server_params = {"url": "http://localhost:8000/sse"}
try:
mcp_server_adapter = MCPServerAdapter(server_params)
mcp_server_adapter.start()
tools = mcp_server_adapter.tools
print(f"Available tools (manual SSE): {[tool.name for tool in tools]}")
agent = Agent(
role="Medical Researcher",
goal="Find recent studies on a given topic using PubMed.",
backstory="An AI assistant specialized in biomedical literature research.",
tools=tools,
verbose=True
)
task = Task(
description="Search for recent articles on 'crispr gene editing'.",
expected_output="A summary of the top 3 recent articles.",
agent=agent
)
crew = Crew(
agents=[agent],
tasks=[task],
verbose=True,
process=Process.sequential
)
result = crew.kickoff()
print(result)
finally:
print("Stopping SSE MCP server connection (manual)...")
mcp_server_adapter.stop() # **Crucial: Ensure stop is called**
```
#### Streamable HTTP Transport Example (Manual)
```python
from crewai import Agent, Task, Crew
from crewai_tools import MCPServerAdapter
# Define parameters for the Streamable HTTP MCP server
server_params = {
"url": "http://localhost:8001/mcp", # Ensure this matches your server's URL
"transport": "streamable-http"
}
mcp_server_adapter = None # Initialize to ensure it's defined for the finally block
try:
mcp_server_adapter = MCPServerAdapter(server_params)
mcp_server_adapter.start()
tools = mcp_server_adapter.tools
print(f"Available tools (manual Streamable HTTP): {[tool.name for tool in tools]}")
hello_agent = Agent(
role="Greeter",
goal="Greet the user warmly using a manually managed Streamable HTTP connection.",
backstory="An expert in providing friendly greetings.",
tools=tools,
verbose=True
)
greeting_task = Task(
description="Greet the user named '{user}'.",
agent=hello_agent,
expected_output="A personalized and friendly greeting message."
)
greeting_crew = Crew(
agents=[hello_agent],
tasks=[greeting_task],
verbose=True
)
user_name = input("What is your name for the manual Streamable HTTP demo? ")
result = greeting_crew.kickoff(inputs={"user": user_name})
print("\nFinal Greeting Output (manual Streamable HTTP):\n", result)
finally:
if mcp_server_adapter:
print("Stopping Streamable HTTP MCP server connection (manual)...")
mcp_server_adapter.stop() # **Crucial: Ensure stop is called**
```
### Connecting to Multiple MCP Servers
`MCPServerAdapter` also supports connecting to multiple MCP servers simultaneously. This is useful when your agents need to access tools from different services, each exposed through its own MCP server (e.g., one for math operations, another for web searching, and a third for a specific API).
To connect to multiple servers, you can pass a list of server parameter objects directly to `MCPServerAdapter`. Each element in the list should be a valid server parameter configuration, such as a dictionary for HTTP/SSE servers or an `StdioServerParameters` object for Stdio-based servers.
The adapter will attempt to connect to all specified servers. The `tools` object obtained (e.g., via the context manager) will contain a combined list of all tools available from all successfully connected servers. This combined list of tools can then be provided to your CrewAI agents, allowing them to leverage capabilities from all connected MCP sources.
**Example:**
```python
from crewai import Agent, Task, Crew
from crewai_tools import MCPServerAdapter
from mcp import StdioServerParameters
import os
# Define configurations for multiple MCP servers (replace with actual details)
server_configurations = [
# Streamable HTTP Server
{
"url": "http://localhost:8001/mcp",
"transport": "streamable-http"
},
# SSE (Server-Sent Events) Server
{
"url": "http://localhost:8000/sse",
"transport": "sse"
},
# StdIO (Standard Input/Output) Server
StdioServerParameters(
command="python3",
args=["path/to/your/stdio_mcp_server.py"],
env={"PYTHONPATH": ".", **os.environ}
)
]
try:
with MCPServerAdapter(server_configurations) as all_tools:
print(f"Connected. Available tools: {[tool.name for tool in all_tools]}\n")
multi_tool_agent = Agent(
role="Versatile Assistant",
goal="Leverage diverse tools from multiple MCP servers.",
backstory="An AI agent skilled in using various tool sources.",
tools=all_tools,
verbose=True,
allow_delegation=False
)
# Generic task to demonstrate tool usage
reporting_task = Task(
description="List the names of all available tools.",
agent=multi_tool_agent,
expected_output="A list of tool names."
)
utility_crew = Crew(
agents=[multi_tool_agent],
tasks=[reporting_task],
verbose=True
)
result = utility_crew.kickoff()
print("\nCrew Task Result:\n", result)
except Exception as e:
print(f"An error occurred: {e}")
print("Ensure configured MCP servers are running and accessible.")
```
This approach allows for a flexible and centralized way to manage tool sources for your CrewAI agents. Remember to ensure that each MCP server is running and accessible when your CrewAI application starts.
Checkout this repository for full demos and examples of MCP integration with CrewAI! 👇
<Card
title="GitHub Repository"
icon="github"
href="https://github.com/tonykipkemboi/crewai-mcp-demo"
target="_blank"
>
CrewAI MCP Demo
</Card>
## Staying Safe with MCP
<Warning>
Always ensure that you trust an MCP Server before using it.
</Warning>
#### Security Warning: DNS Rebinding Attacks
SSE transports can be vulnerable to DNS rebinding attacks if not properly secured.
To prevent this:
1. **Always validate Origin headers** on incoming SSE connections to ensure they come from expected sources
2. **Avoid binding servers to all network interfaces** (0.0.0.0) when running locally - bind only to localhost (127.0.0.1) instead
3. **Implement proper authentication** for all SSE connections
Without these protections, attackers could use DNS rebinding to interact with local MCP servers from remote websites.
For more details, see the [MCP Transport Security](https://modelcontextprotocol.io/docs/concepts/transports#security-considerations) documentation.
### Limitations
* **Supported Primitives**: Currently, `MCPServerAdapter` primarily supports adapting MCP `tools`.
Other MCP primitives like `prompts` or `resources` are not directly integrated as CrewAI components through this adapter at this time.
* **Output Handling**: The adapter typically processes the primary text output from an MCP tool (e.g., `.content[0].text`). Complex or multi-modal outputs might require custom handling if not fitting this pattern.

View File

@@ -0,0 +1,64 @@
---
title: Connecting to Multiple MCP Servers
description: Learn how to use MCPServerAdapter in CrewAI to connect to multiple MCP servers simultaneously and aggregate their tools.
icon: layer-group
---
## Overview
`MCPServerAdapter` in `crewai-tools` allows you to connect to multiple MCP servers concurrently. This is useful when your agents need to access tools distributed across different services or environments. The adapter aggregates tools from all specified servers, making them available to your CrewAI agents.
## Configuration
To connect to multiple servers, you provide a list of server parameter dictionaries to `MCPServerAdapter`. Each dictionary in the list should define the parameters for one MCP server.
Supported transport types for each server in the list include `stdio`, `sse`, and `streamable-http`.
```python
from crewai import Agent, Task, Crew, Process
from crewai_tools import MCPServerAdapter
from mcp import StdioServerParameters # Needed for Stdio example
# Define parameters for multiple MCP servers
server_params_list = [
# Streamable HTTP Server
{
"url": "http://localhost:8001/mcp",
"transport": "streamable-http"
},
# SSE Server
{
"url": "http://localhost:8000/sse",
"transport": "sse"
},
# StdIO Server
StdioServerParameters(
command="python3",
args=["servers/your_stdio_server.py"],
env={"UV_PYTHON": "3.12", **os.environ},
)
]
try:
with MCPServerAdapter(server_params_list) as aggregated_tools:
print(f"Available aggregated tools: {[tool.name for tool in aggregated_tools]}")
multi_server_agent = Agent(
role="Versatile Assistant",
goal="Utilize tools from local Stdio, remote SSE, and remote HTTP MCP servers.",
backstory="An AI agent capable of leveraging a diverse set of tools from multiple sources.",
tools=aggregated_tools, # All tools are available here
verbose=True,
)
... # Your other agent, tasks, and crew code here
except Exception as e:
print(f"Error connecting to or using multiple MCP servers (Managed): {e}")
print("Ensure all MCP servers are running and accessible with correct configurations.")
```
## Connection Management
When using the context manager (`with` statement), `MCPServerAdapter` handles the lifecycle (start and stop) of all connections to the configured MCP servers. This simplifies resource management and ensures that all connections are properly closed when the context is exited.

164
docs/mcp/overview.mdx Normal file
View File

@@ -0,0 +1,164 @@
---
title: 'MCP Servers as Tools in CrewAI'
description: 'Learn how to integrate MCP servers as tools in your CrewAI agents using the `crewai-tools` library.'
icon: plug
---
## Overview
The [Model Context Protocol](https://modelcontextprotocol.io/introduction) (MCP) provides a standardized way for AI agents to provide context to LLMs by communicating with external services, known as MCP Servers.
The `crewai-tools` library extends CrewAI's capabilities by allowing you to seamlessly integrate tools from these MCP servers into your agents.
This gives your crews access to a vast ecosystem of functionalities.
We currently support the following transport mechanisms:
- **Stdio**: for local servers (communication via standard input/output between processes on the same machine)
- **Server-Sent Events (SSE)**: for remote servers (unidirectional, real-time data streaming from server to client over HTTP)
- **Streamable HTTP**: for remote servers (flexible, potentially bi-directional communication over HTTP, often utilizing SSE for server-to-client streams)
## Video Tutorial
Watch this video tutorial for a comprehensive guide on MCP integration with CrewAI:
<iframe
width="100%"
height="400"
src="https://www.youtube.com/embed/TpQ45lAZh48"
title="CrewAI MCP Integration Guide"
frameborder="0"
style={{ borderRadius: '10px' }}
allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture"
allowfullscreen
></iframe>
## Installation
Before you start using MCP with `crewai-tools`, you need to install the `mcp` extra `crewai-tools` dependency with the following command:
```shell
uv pip install 'crewai-tools[mcp]'
```
## Key Concepts & Getting Started
The `MCPServerAdapter` class from `crewai-tools` is the primary way to connect to an MCP server and make its tools available to your CrewAI agents. It supports different transport mechanisms and simplifies connection management.
Using a Python context manager (`with` statement) is the **recommended approach** for `MCPServerAdapter`. It automatically handles starting and stopping the connection to the MCP server.
```python
from crewai import Agent
from crewai_tools import MCPServerAdapter
from mcp import StdioServerParameters # For Stdio Server
# Example server_params (choose one based on your server type):
# 1. Stdio Server:
server_params=StdioServerParameters(
command="python3",
args=["servers/your_server.py"],
env={"UV_PYTHON": "3.12", **os.environ},
)
# 2. SSE Server:
server_params = {
"url": "http://localhost:8000/sse",
"transport": "sse"
}
# 3. Streamable HTTP Server:
server_params = {
"url": "http://localhost:8001/mcp",
"transport": "streamable-http"
}
# Example usage (uncomment and adapt once server_params is set):
with MCPServerAdapter(server_params) as mcp_tools:
print(f"Available tools: {[tool.name for tool in mcp_tools]}")
my_agent = Agent(
role="MCP Tool User",
goal="Utilize tools from an MCP server.",
backstory="I can connect to MCP servers and use their tools.",
tools=mcp_tools, # Pass the loaded tools to your agent
reasoning=True,
verbose=True
)
# ... rest of your crew setup ...
```
This general pattern shows how to integrate tools. For specific examples tailored to each transport, refer to the detailed guides below.
## Explore MCP Integrations
<CardGroup cols={2}>
<Card
title="Stdio Transport"
icon="server"
href="/mcp/stdio"
color="#3B82F6"
>
Connect to local MCP servers via standard input/output. Ideal for scripts and local executables.
</Card>
<Card
title="SSE Transport"
icon="wifi"
href="/mcp/sse"
color="#10B981"
>
Integrate with remote MCP servers using Server-Sent Events for real-time data streaming.
</Card>
<Card
title="Streamable HTTP Transport"
icon="globe"
href="/mcp/streamable-http"
color="#F59E0B"
>
Utilize flexible Streamable HTTP for robust communication with remote MCP servers.
</Card>
<Card
title="Connecting to Multiple Servers"
icon="layer-group"
href="/mcp/multiple-servers"
color="#8B5CF6"
>
Aggregate tools from several MCP servers simultaneously using a single adapter.
</Card>
<Card
title="Security Considerations"
icon="lock"
href="/mcp/security"
color="#EF4444"
>
Review important security best practices for MCP integration to keep your agents safe.
</Card>
</CardGroup>
Checkout this repository for full demos and examples of MCP integration with CrewAI! 👇
<Card
title="GitHub Repository"
icon="github"
href="https://github.com/tonykipkemboi/crewai-mcp-demo"
target="_blank"
>
CrewAI MCP Demo
</Card>
## Staying Safe with MCP
<Warning>
Always ensure that you trust an MCP Server before using it.
</Warning>
#### Security Warning: DNS Rebinding Attacks
SSE transports can be vulnerable to DNS rebinding attacks if not properly secured.
To prevent this:
1. **Always validate Origin headers** on incoming SSE connections to ensure they come from expected sources
2. **Avoid binding servers to all network interfaces** (0.0.0.0) when running locally - bind only to localhost (127.0.0.1) instead
3. **Implement proper authentication** for all SSE connections
Without these protections, attackers could use DNS rebinding to interact with local MCP servers from remote websites.
For more details, see the [Anthropic's MCP Transport Security docs](https://modelcontextprotocol.io/docs/concepts/transports#security-considerations).
### Limitations
* **Supported Primitives**: Currently, `MCPServerAdapter` primarily supports adapting MCP `tools`.
Other MCP primitives like `prompts` or `resources` are not directly integrated as CrewAI components through this adapter at this time.
* **Output Handling**: The adapter typically processes the primary text output from an MCP tool (e.g., `.content[0].text`). Complex or multi-modal outputs might require custom handling if not fitting this pattern.

166
docs/mcp/security.mdx Normal file
View File

@@ -0,0 +1,166 @@
---
title: MCP Security Considerations
description: Learn about important security best practices when integrating MCP servers with your CrewAI agents.
icon: lock
---
## Overview
<Warning>
The most critical aspect of MCP security is **trust**. You should **only** connect your CrewAI agents to MCP servers that you fully trust.
</Warning>
When integrating external services like MCP (Model Context Protocol) servers into your CrewAI agents, security is paramount.
MCP servers can execute code, access data, or interact with other systems based on the tools they expose.
It's crucial to understand the implications and follow best practices to protect your applications and data.
### Risks
- Execute arbitrary code on the machine where the agent is running (especially with `Stdio` transport if the server can control the command executed).
- Expose sensitive data from your agent or its environment.
- Manipulate your agent's behavior in unintended ways, including making unauthorized API calls on your behalf.
- Hijack your agent's reasoning process through sophisticated prompt injection techniques (see below).
### 1. Trusting MCP Servers
<Warning>
**Only connect to MCP servers that you trust.**
</Warning>
Before configuring `MCPServerAdapter` to connect to an MCP server, ensure you know:
- **Who operates the server?** Is it a known, reputable service, or an internal server under your control?
- **What tools does it expose?** Understand the capabilities of the tools. Could they be misused if an attacker gained control or if the server itself is malicious?
- **What data does it access or process?** Be aware of any sensitive information that might be sent to or handled by the MCP server.
Avoid connecting to unknown or unverified MCP servers, especially if your agents handle sensitive tasks or data.
### 2. Secure Prompt Injection via Tool Metadata: The "Model Control Protocol" Risk
A significant and subtle risk is the potential for prompt injection through tool metadata. Here's how it works:
1. When your CrewAI agent connects to an MCP server, it typically requests a list of available tools.
2. The MCP server responds with metadata for each tool, including its name, description, and parameter descriptions.
3. Your agent's underlying Language Model (LLM) uses this metadata to understand how and when to use the tools. This metadata is often incorporated into the LLM's system prompt or context.
4. A malicious MCP server can craft its tool metadata (names, descriptions) to include hidden or overt instructions. These instructions can act as a prompt injection, effectively telling your LLM to behave in a certain way, reveal sensitive information, or perform malicious actions.
**Crucially, this attack can occur simply by connecting to a malicious server and listing its tools, even if your agent never explicitly decides to *use* any of those tools.** The mere exposure to the malicious metadata can be enough to compromise the agent's behavior.
**Mitigation:**
* **Extreme Caution with Untrusted Servers:** Reiterate: *Do not connect to MCP servers you do not fully trust.* The risk of metadata injection makes this paramount.
### Stdio Transport Security
Stdio (Standard Input/Output) transport is typically used for local MCP servers running on the same machine as your CrewAI application.
- **Process Isolation**: While generally safer as it doesn't involve network exposure by default, ensure the script or command run by `StdioServerParameters` is from a trusted source and has appropriate file system permissions. A malicious Stdio server script could still harm your local system.
- **Input Sanitization**: If your Stdio server script takes complex inputs derived from agent interactions, ensure the script itself sanitizes these inputs to prevent command injection or other vulnerabilities within the script's logic.
- **Resource Limits**: Be mindful that a local Stdio server process consumes local resources (CPU, memory). Ensure it's well-behaved and won't exhaust system resources.
### Confused Deputy Attacks
The [Confused Deputy Problem](https://en.wikipedia.org/wiki/Confused_deputy_problem) is a classic security vulnerability that can manifest in MCP integrations, especially when an MCP server acts as a proxy to other third-party services (e.g., Google Calendar, GitHub) that use OAuth 2.0 for authorization.
**Scenario:**
1. An MCP server (let's call it `MCP-Proxy`) allows your agent to interact with `ThirdPartyAPI`.
2. `MCP-Proxy` uses its own single, static `client_id` when talking to `ThirdPartyAPI`'s authorization server.
3. You, as the user, legitimately authorize `MCP-Proxy` to access `ThirdPartyAPI` on your behalf. During this, `ThirdPartyAPI`'s auth server might set a cookie in your browser indicating your consent for `MCP-Proxy`'s `client_id`.
4. An attacker crafts a malicious link. This link initiates an OAuth flow with `MCP-Proxy`, but is designed to trick `ThirdPartyAPI`'s auth server.
5. If you click this link, and `ThirdPartyAPI`'s auth server sees your existing consent cookie for `MCP-Proxy`'s `client_id`, it might *skip* asking for your consent again.
6. `MCP-Proxy` might then be tricked into forwarding an authorization code (for `ThirdPartyAPI`) to the attacker, or an MCP authorization code that the attacker can use to impersonate you to `MCP-Proxy`.
**Mitigation (Primarily for MCP Server Developers):**
* MCP proxy servers using static client IDs for downstream services **must** obtain explicit user consent for *each client application or agent* connecting to them *before* initiating an OAuth flow with the third-party service. This means `MCP-Proxy` itself should show a consent screen.
**CrewAI User Implication:**
* Be cautious if an MCP server redirects you for multiple OAuth authentications, especially if it seems unexpected or if the permissions requested are overly broad.
* Prefer MCP servers that clearly delineate their own identity versus the third-party services they might proxy.
### Remote Transport Security (SSE & Streamable HTTP)
When connecting to remote MCP servers via Server-Sent Events (SSE) or Streamable HTTP, standard web security practices are essential.
### SSE Security Considerations
### a. DNS Rebinding Attacks (Especially for SSE)
<Critical>
**Protect against DNS Rebinding Attacks.**
</Critical>
DNS rebinding allows an attacker-controlled website to bypass the same-origin policy and make requests to servers on the user's local network (e.g., `localhost`) or intranet. This is particularly risky if you run an MCP server locally (e.g., for development) and an agent in a browser-like environment (though less common for typical CrewAI backend setups) or if the MCP server is on an internal network.
**Mitigation Strategies for MCP Server Implementers:**
- **Validate `Origin` and `Host` Headers**: MCP servers (especially SSE ones) should validate the `Origin` and/or `Host` HTTP headers to ensure requests are coming from expected domains/clients.
- **Bind to `localhost` (127.0.0.1)**: When running MCP servers locally for development, bind them to `127.0.0.1` instead of `0.0.0.0`. This prevents them from being accessible from other machines on the network.
- **Authentication**: Require authentication for all connections to your MCP server if it's not intended for public anonymous access.
### b. Use HTTPS
- **Encrypt Data in Transit**: Always use HTTPS (HTTP Secure) for the URLs of remote MCP servers. This encrypts the communication between your CrewAI application and the MCP server, protecting against eavesdropping and man-in-the-middle attacks. `MCPServerAdapter` will respect the scheme (`http` or `https`) provided in the URL.
### c. Token Passthrough (Anti-Pattern)
This is primarily a concern for MCP server developers but understanding it helps in choosing secure servers.
"Token passthrough" is when an MCP server accepts an access token from your CrewAI agent (which might be a token for a *different* service, say `ServiceA`) and simply passes it through to another downstream API (`ServiceB`) without proper validation. Specifically, `ServiceB` (or the MCP server itself) should only accept tokens that were explicitly issued *for them* (i.e., the 'audience' claim in the token matches the server/service).
**Risks:**
* Bypasses security controls (like rate limiting or fine-grained permissions) on the MCP server or the downstream API.
* Breaks audit trails and accountability.
* Allows misuse of stolen tokens.
**Mitigation (For MCP Server Developers):**
* MCP servers **MUST NOT** accept tokens that were not explicitly issued for them. They must validate the token's audience claim.
**CrewAI User Implication:**
* While not directly controllable by the user, this highlights the importance of connecting to well-designed MCP servers that adhere to security best practices.
#### Authentication and Authorization
- **Verify Identity**: If the MCP server provides sensitive tools or access to private data, it MUST implement strong authentication mechanisms to verify the identity of the client (your CrewAI application). This could involve API keys, OAuth tokens, or other standard methods.
- **Principle of Least Privilege**: Ensure the credentials used by `MCPServerAdapter` (if any) have only the necessary permissions to access the required tools.
### d. Input Validation and Sanitization
- **Input Validation is Critical**: MCP servers **must** rigorously validate all inputs received from agents *before* processing them or passing them to tools. This is a primary defense against many common vulnerabilities:
- **Command Injection:** If a tool constructs shell commands, SQL queries, or other interpreted language statements based on input, the server must meticulously sanitize this input to prevent malicious commands from being injected and executed.
- **Path Traversal:** If a tool accesses files based on input parameters, the server must validate and sanitize these paths to prevent access to unauthorized files or directories (e.g., by blocking `../` sequences).
- **Data Type & Range Checks:** Servers must ensure that input data conforms to the expected data types (e.g., string, number, boolean) and falls within acceptable ranges or adheres to defined formats (e.g., regex for URLs).
- **JSON Schema Validation:** All tool parameters should be strictly validated against their defined JSON schema. This helps catch malformed requests early.
- **Client-Side Awareness**: While server-side validation is paramount, as a CrewAI user, be mindful of the data your agents are constructed to send to MCP tools, especially if interacting with less-trusted or new MCP servers.
### e. Rate Limiting and Resource Management
- **Prevent Abuse**: MCP servers should implement rate limiting to prevent abuse, whether intentional (Denial of Service attacks) or unintentional (e.g., a misconfigured agent making too many requests).
- **Client-Side Retries**: Implement sensible retry logic in your CrewAI tasks if transient network issues or server rate limits are expected, but avoid aggressive retries that could exacerbate server load.
## 4. Secure MCP Server Implementation Advice (For Developers)
If you are developing an MCP server that CrewAI agents might connect to, consider these best practices in addition to the points above:
- **Follow Secure Coding Practices**: Adhere to standard secure coding principles for your chosen language and framework (e.g., OWASP Top 10).
- **Principle of Least Privilege**: Ensure the process running the MCP server (especially for `Stdio`) has only the minimum necessary permissions. Tools themselves should also operate with the least privilege required to perform their function.
- **Dependency Management**: Keep all server-side dependencies, including operating system packages, language runtimes, and third-party libraries, up-to-date to patch known vulnerabilities. Use tools to scan for vulnerable dependencies.
- **Secure Defaults**: Design your server and its tools to be secure by default. For example, features that could be risky should be off by default or require explicit opt-in with clear warnings.
- **Access Control for Tools**: Implement robust mechanisms to control which authenticated and authorized agents or users can access specific tools, especially those that are powerful, sensitive, or incur costs.
- **Secure Error Handling**: Servers should not expose detailed internal error messages, stack traces, or debugging information to the client, as these can reveal internal workings or potential vulnerabilities. Log errors comprehensively on the server-side for diagnostics.
- **Comprehensive Logging and Monitoring**: Implement detailed logging of security-relevant events (e.g., authentication attempts, tool invocations, errors, authorization changes). Monitor these logs for suspicious activity or abuse patterns.
- **Adherence to MCP Authorization Spec**: If implementing authentication and authorization, strictly follow the [MCP Authorization specification](https://modelcontextprotocol.io/specification/draft/basic/authorization) and relevant [OAuth 2.0 security best practices](https://datatracker.ietf.org/doc/html/rfc9700).
- **Regular Security Audits**: If your MCP server handles sensitive data, performs critical operations, or is publicly exposed, consider periodic security audits by qualified professionals.
## 5. Further Reading
For more detailed information on MCP security, refer to the official documentation:
- **[MCP Transport Security](https://modelcontextprotocol.io/docs/concepts/transports#security-considerations)**
By understanding these security considerations and implementing best practices, you can safely leverage the power of MCP servers in your CrewAI projects.
These are by no means exhaustive, but they cover the most common and critical security concerns.
The threats will continue to evolve, so it's important to stay informed and adapt your security measures accordingly.

150
docs/mcp/sse.mdx Normal file
View File

@@ -0,0 +1,150 @@
---
title: SSE Transport
description: Learn how to connect CrewAI to remote MCP servers using Server-Sent Events (SSE) for real-time communication.
icon: wifi
---
## Overview
Server-Sent Events (SSE) provide a standard way for a web server to send updates to a client over a single, long-lived HTTP connection. In the context of MCP, SSE is used for remote servers to stream data (like tool responses) to your CrewAI application in real-time.
## Key Concepts
- **Remote Servers**: SSE is suitable for MCP servers hosted remotely.
- **Unidirectional Stream**: Typically, SSE is a one-way communication channel from server to client.
- **`MCPServerAdapter` Configuration**: For SSE, you'll provide the server's URL and specify the transport type.
## Connecting via SSE
You can connect to an SSE-based MCP server using two main approaches for managing the connection lifecycle:
### 1. Fully Managed Connection (Recommended)
Using a Python context manager (`with` statement) is the recommended approach. It automatically handles establishing and closing the connection to the SSE MCP server.
```python
from crewai import Agent, Task, Crew, Process
from crewai_tools import MCPServerAdapter
server_params = {
"url": "http://localhost:8000/sse", # Replace with your actual SSE server URL
"transport": "sse"
}
# Using MCPServerAdapter with a context manager
try:
with MCPServerAdapter(server_params) as tools:
print(f"Available tools from SSE MCP server: {[tool.name for tool in tools]}")
# Example: Using a tool from the SSE MCP server
sse_agent = Agent(
role="Remote Service User",
goal="Utilize a tool provided by a remote SSE MCP server.",
backstory="An AI agent that connects to external services via SSE.",
tools=tools,
reasoning=True,
verbose=True,
)
sse_task = Task(
description="Fetch real-time stock updates for 'AAPL' using an SSE tool.",
expected_output="The latest stock price for AAPL.",
agent=sse_agent,
markdown=True
)
sse_crew = Crew(
agents=[sse_agent],
tasks=[sse_task],
verbose=True,
process=Process.sequential
)
if tools: # Only kickoff if tools were loaded
result = sse_crew.kickoff() # Add inputs={'stock_symbol': 'AAPL'} if tool requires it
print("\nCrew Task Result (SSE - Managed):\n", result)
else:
print("Skipping crew kickoff as tools were not loaded (check server connection).")
except Exception as e:
print(f"Error connecting to or using SSE MCP server (Managed): {e}")
print("Ensure the SSE MCP server is running and accessible at the specified URL.")
```
<Note>
Replace `"http://localhost:8000/sse"` with the actual URL of your SSE MCP server.
</Note>
### 2. Manual Connection Lifecycle
If you need finer-grained control, you can manage the `MCPServerAdapter` connection lifecycle manually.
<Info>
You **MUST** call `mcp_server_adapter.stop()` to ensure the connection is closed and resources are released. Using a `try...finally` block is highly recommended.
</Info>
```python
from crewai import Agent, Task, Crew, Process
from crewai_tools import MCPServerAdapter
server_params = {
"url": "http://localhost:8000/sse", # Replace with your actual SSE server URL
"transport": "sse"
}
mcp_server_adapter = None
try:
mcp_server_adapter = MCPServerAdapter(server_params)
mcp_server_adapter.start()
tools = mcp_server_adapter.tools
print(f"Available tools (manual SSE): {[tool.name for tool in tools]}")
manual_sse_agent = Agent(
role="Remote Data Analyst",
goal="Analyze data fetched from a remote SSE MCP server using manual connection management.",
backstory="An AI skilled in handling SSE connections explicitly.",
tools=tools,
verbose=True
)
analysis_task = Task(
description="Fetch and analyze the latest user activity trends from the SSE server.",
expected_output="A summary report of user activity trends.",
agent=manual_sse_agent
)
analysis_crew = Crew(
agents=[manual_sse_agent],
tasks=[analysis_task],
verbose=True,
process=Process.sequential
)
result = analysis_crew.kickoff()
print("\nCrew Task Result (SSE - Manual):\n", result)
except Exception as e:
print(f"An error occurred during manual SSE MCP integration: {e}")
print("Ensure the SSE MCP server is running and accessible.")
finally:
if mcp_server_adapter and mcp_server_adapter.is_connected:
print("Stopping SSE MCP server connection (manual)...")
mcp_server_adapter.stop() # **Crucial: Ensure stop is called**
elif mcp_server_adapter:
print("SSE MCP server adapter was not connected. No stop needed or start failed.")
```
## Security Considerations for SSE
<Warning>
**DNS Rebinding Attacks**: SSE transports can be vulnerable to DNS rebinding attacks if the MCP server is not properly secured. This could allow malicious websites to interact with local or intranet-based MCP servers.
</Warning>
To mitigate this risk:
- MCP server implementations should **validate `Origin` headers** on incoming SSE connections.
- When running local SSE MCP servers for development, **bind only to `localhost` (`127.0.0.1`)** rather than all network interfaces (`0.0.0.0`).
- Implement **proper authentication** for all SSE connections if they expose sensitive tools or data.
For a comprehensive overview of security best practices, please refer to our [Security Considerations](./security.mdx) page and the official [MCP Transport Security documentation](https://modelcontextprotocol.io/docs/concepts/transports#security-considerations).

134
docs/mcp/stdio.mdx Normal file
View File

@@ -0,0 +1,134 @@
---
title: Stdio Transport
description: Learn how to connect CrewAI to local MCP servers using the Stdio (Standard Input/Output) transport mechanism.
icon: server
---
## Overview
The Stdio (Standard Input/Output) transport is designed for connecting `MCPServerAdapter` to local MCP servers that communicate over their standard input and output streams. This is typically used when the MCP server is a script or executable running on the same machine as your CrewAI application.
## Key Concepts
- **Local Execution**: Stdio transport manages a locally running process for the MCP server.
- **`StdioServerParameters`**: This class from the `mcp` library is used to configure the command, arguments, and environment variables for launching the Stdio server.
## Connecting via Stdio
You can connect to an Stdio-based MCP server using two main approaches for managing the connection lifecycle:
### 1. Fully Managed Connection (Recommended)
Using a Python context manager (`with` statement) is the recommended approach. It automatically handles starting the MCP server process and stopping it when the context is exited.
```python
from crewai import Agent, Task, Crew, Process
from crewai_tools import MCPServerAdapter
from mcp import StdioServerParameters
import os
# Create a StdioServerParameters object
server_params=StdioServerParameters(
command="python3",
args=["servers/your_stdio_server.py"],
env={"UV_PYTHON": "3.12", **os.environ},
)
with MCPServerAdapter(server_params) as tools:
print(f"Available tools from Stdio MCP server: {[tool.name for tool in tools]}")
# Example: Using the tools from the Stdio MCP server in a CrewAI Agent
research_agent = Agent(
role="Local Data Processor",
goal="Process data using a local Stdio-based tool.",
backstory="An AI that leverages local scripts via MCP for specialized tasks.",
tools=tools,
reasoning=True,
verbose=True,
)
processing_task = Task(
description="Process the input data file 'data.txt' and summarize its contents.",
expected_output="A summary of the processed data.",
agent=research_agent,
markdown=True
)
data_crew = Crew(
agents=[research_agent],
tasks=[processing_task],
verbose=True,
process=Process.sequential
)
result = data_crew.kickoff()
print("\nCrew Task Result (Stdio - Managed):\n", result)
```
### 2. Manual Connection Lifecycle
If you need finer-grained control over when the Stdio MCP server process is started and stopped, you can manage the `MCPServerAdapter` lifecycle manually.
<Info>
You **MUST** call `mcp_server_adapter.stop()` to ensure the server process is terminated and resources are released. Using a `try...finally` block is highly recommended.
</Info>
```python
from crewai import Agent, Task, Crew, Process
from crewai_tools import MCPServerAdapter
from mcp import StdioServerParameters
import os
# Create a StdioServerParameters object
stdio_params=StdioServerParameters(
command="python3",
args=["servers/your_stdio_server.py"],
env={"UV_PYTHON": "3.12", **os.environ},
)
mcp_server_adapter = MCPServerAdapter(server_params=stdio_params)
try:
mcp_server_adapter.start() # Manually start the connection and server process
tools = mcp_server_adapter.tools
print(f"Available tools (manual Stdio): {[tool.name for tool in tools]}")
# Example: Using the tools with your Agent, Task, Crew setup
manual_agent = Agent(
role="Local Task Executor",
goal="Execute a specific local task using a manually managed Stdio tool.",
backstory="An AI proficient in controlling local processes via MCP.",
tools=tools,
verbose=True
)
manual_task = Task(
description="Execute the 'perform_analysis' command via the Stdio tool.",
expected_output="Results of the analysis.",
agent=manual_agent
)
manual_crew = Crew(
agents=[manual_agent],
tasks=[manual_task],
verbose=True,
process=Process.sequential
)
result = manual_crew.kickoff() # Actual inputs depend on your tool
print("\nCrew Task Result (Stdio - Manual):\n", result)
except Exception as e:
print(f"An error occurred during manual Stdio MCP integration: {e}")
finally:
if mcp_server_adapter and mcp_server_adapter.is_connected: # Check if connected before stopping
print("Stopping Stdio MCP server connection (manual)...")
mcp_server_adapter.stop() # **Crucial: Ensure stop is called**
elif mcp_server_adapter: # If adapter exists but not connected (e.g. start failed)
print("Stdio MCP server adapter was not connected. No stop needed or start failed.")
```
Remember to replace placeholder paths and commands with your actual Stdio server details. The `env` parameter in `StdioServerParameters` can
be used to set environment variables for the server process, which can be useful for configuring its behavior or providing necessary paths (like `PYTHONPATH`).

View File

@@ -0,0 +1,135 @@
---
title: Streamable HTTP Transport
description: Learn how to connect CrewAI to remote MCP servers using the flexible Streamable HTTP transport.
icon: globe
---
## Overview
Streamable HTTP transport provides a flexible way to connect to remote MCP servers. It's often built upon HTTP and can support various communication patterns, including request-response and streaming, sometimes utilizing Server-Sent Events (SSE) for server-to-client streams within a broader HTTP interaction.
## Key Concepts
- **Remote Servers**: Designed for MCP servers hosted remotely.
- **Flexibility**: Can support more complex interaction patterns than plain SSE, potentially including bi-directional communication if the server implements it.
- **`MCPServerAdapter` Configuration**: You'll need to provide the server's base URL for MCP communication and specify `"streamable-http"` as the transport type.
## Connecting via Streamable HTTP
You have two primary methods for managing the connection lifecycle with a Streamable HTTP MCP server:
### 1. Fully Managed Connection (Recommended)
The recommended approach is to use a Python context manager (`with` statement), which handles the connection's setup and teardown automatically.
```python
from crewai import Agent, Task, Crew, Process
from crewai_tools import MCPServerAdapter
server_params = {
"url": "http://localhost:8001/mcp", # Replace with your actual Streamable HTTP server URL
"transport": "streamable-http"
}
try:
with MCPServerAdapter(server_params) as tools:
print(f"Available tools from Streamable HTTP MCP server: {[tool.name for tool in tools]}")
http_agent = Agent(
role="HTTP Service Integrator",
goal="Utilize tools from a remote MCP server via Streamable HTTP.",
backstory="An AI agent adept at interacting with complex web services.",
tools=tools,
verbose=True,
)
http_task = Task(
description="Perform a complex data query using a tool from the Streamable HTTP server.",
expected_output="The result of the complex data query.",
agent=http_agent,
)
http_crew = Crew(
agents=[http_agent],
tasks=[http_task],
verbose=True,
process=Process.sequential
)
result = http_crew.kickoff()
print("\nCrew Task Result (Streamable HTTP - Managed):\n", result)
except Exception as e:
print(f"Error connecting to or using Streamable HTTP MCP server (Managed): {e}")
print("Ensure the Streamable HTTP MCP server is running and accessible at the specified URL.")
```
**Note:** Replace `"http://localhost:8001/mcp"` with the actual URL of your Streamable HTTP MCP server.
### 2. Manual Connection Lifecycle
For scenarios requiring more explicit control, you can manage the `MCPServerAdapter` connection manually.
<Info>
It is **critical** to call `mcp_server_adapter.stop()` when you are done to close the connection and free up resources. A `try...finally` block is the safest way to ensure this.
</Info>
```python
from crewai import Agent, Task, Crew, Process
from crewai_tools import MCPServerAdapter
server_params = {
"url": "http://localhost:8001/mcp", # Replace with your actual Streamable HTTP server URL
"transport": "streamable-http"
}
mcp_server_adapter = None
try:
mcp_server_adapter = MCPServerAdapter(server_params)
mcp_server_adapter.start()
tools = mcp_server_adapter.tools
print(f"Available tools (manual Streamable HTTP): {[tool.name for tool in tools]}")
manual_http_agent = Agent(
role="Advanced Web Service User",
goal="Interact with an MCP server using manually managed Streamable HTTP connections.",
backstory="An AI specialist in fine-tuning HTTP-based service integrations.",
tools=tools,
verbose=True
)
data_processing_task = Task(
description="Submit data for processing and retrieve results via Streamable HTTP.",
expected_output="Processed data or confirmation.",
agent=manual_http_agent
)
data_crew = Crew(
agents=[manual_http_agent],
tasks=[data_processing_task],
verbose=True,
process=Process.sequential
)
result = data_crew.kickoff()
print("\nCrew Task Result (Streamable HTTP - Manual):\n", result)
except Exception as e:
print(f"An error occurred during manual Streamable HTTP MCP integration: {e}")
print("Ensure the Streamable HTTP MCP server is running and accessible.")
finally:
if mcp_server_adapter and mcp_server_adapter.is_connected:
print("Stopping Streamable HTTP MCP server connection (manual)...")
mcp_server_adapter.stop() # **Crucial: Ensure stop is called**
elif mcp_server_adapter:
print("Streamable HTTP MCP server adapter was not connected. No stop needed or start failed.")
```
## Security Considerations
When using Streamable HTTP transport, general web security best practices are paramount:
- **Use HTTPS**: Always prefer HTTPS (HTTP Secure) for your MCP server URLs to encrypt data in transit.
- **Authentication**: Implement robust authentication mechanisms if your MCP server exposes sensitive tools or data.
- **Input Validation**: Ensure your MCP server validates all incoming requests and parameters.
For a comprehensive guide on securing your MCP integrations, please refer to our [Security Considerations](./security.mdx) page and the official [MCP Transport Security documentation](https://modelcontextprotocol.io/docs/concepts/transports#security-considerations).