Compare commits

..

17 Commits

Author SHA1 Message Date
Lorenze Jay
d29867bbb6 chore: update version numbers to 1.4.0
2025-11-06 23:04:44 -05:00
Lorenze Jay
b2c278ed22 refactor: improve MCP tool execution handling with concurrent futures (#3854)
- Enhanced the MCP tool execution in both synchronous and asynchronous contexts by utilizing `concurrent.futures` for better event loop management.
- Updated error handling to provide clearer messages for connection issues and task cancellations.
- Added tests to validate MCP tool execution in both sync and async scenarios, ensuring robust functionality across different contexts.
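The general shape of this sync-over-async pattern, as a rough sketch (illustrative only, not the project's actual code; `call_tool` is a hypothetical stand-in for an MCP invocation):

```python
import asyncio
import concurrent.futures


async def call_tool() -> str:
    """Hypothetical stand-in for an async MCP tool invocation."""
    await asyncio.sleep(0.1)
    return "result"


def execute_tool_sync() -> str:
    """Run the coroutine safely from synchronous code."""
    try:
        asyncio.get_running_loop()
    except RuntimeError:
        # No event loop running: drive one directly.
        return asyncio.run(call_tool())
    # A loop is already running: run the coroutine on a fresh loop in a
    # worker thread so the active loop is never blocked or re-entered.
    with concurrent.futures.ThreadPoolExecutor(max_workers=1) as pool:
        return pool.submit(asyncio.run, call_tool()).result()


print(execute_tool_sync())
```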
2025-11-06 19:28:08 -08:00
Greyson LaLonde
f6aed9798b feat: allow non-ast plot routes 2025-11-06 21:17:29 -05:00
Greyson LaLonde
40a2d387a1 fix: keep stopwords updated 2025-11-06 21:10:25 -05:00
Lorenze Jay
6f36d7003b Lorenze/feat mcp first class support (#3850)
* WIP transport support mcp

* refactor: streamline MCP tool loading and error handling

* linted

* replaced Self type from typing with typing_extensions in MCP transport modules

* added tests for mcp setup

* added tests for mcp setup

* docs: enhance MCP overview with detailed integration examples and structured configurations

* feat: implement MCP event handling and logging in event listener and client

- Added MCP event types and handlers for connection and tool execution events.
- Enhanced MCPClient to emit events on connection status and tool execution.
- Updated ConsoleFormatter to handle MCP event logging.
- Introduced new MCP event types for better integration and monitoring.
2025-11-06 17:45:16 -08:00
Greyson LaLonde
9e5906c52f feat: add pydantic validation dunder to BaseInterceptor
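A "pydantic validation dunder" usually means pydantic v2's `__get_pydantic_core_schema__` hook; a minimal sketch of that pattern (an assumption about the change, not the project's exact code):

```python
from typing import Any

from pydantic import BaseModel, GetCoreSchemaHandler
from pydantic_core import core_schema


class BaseInterceptor:
    """Plain class that pydantic models should accept as a field value."""

    @classmethod
    def __get_pydantic_core_schema__(
        cls, source_type: Any, handler: GetCoreSchemaHandler
    ) -> core_schema.CoreSchema:
        # Validate by isinstance check; no coercion or serialization logic.
        return core_schema.is_instance_schema(cls)


class LLMSettings(BaseModel):
    """Hypothetical model holding an interceptor field."""

    interceptor: BaseInterceptor | None = None


print(LLMSettings(interceptor=BaseInterceptor()))
```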
2025-11-06 15:27:07 -05:00
Lorenze Jay
fc521839e4 Lorenze/fix duplicating doc ids for knowledge (#3840)
* fix: update document ID handling in ChromaDB utility functions to use SHA-256 hashing and include index for uniqueness

* test: add tests for hash-based ID generation in ChromaDB utility functions

* drop idx for preventing dups, upsert should handle dups

* fix: update document ID extraction logic in ChromaDB utility functions to check for doc_id at the top level of the document

* fix: enhance document ID generation in ChromaDB utility functions to deduplicate documents and ensure unique hash-based IDs without suffixes

* fix: improve error handling and document ID generation in ChromaDB utility functions to ensure robust processing and uniqueness
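The underlying idea, sketched (illustrative, not the actual ChromaDB utility code): derive each document ID from a SHA-256 hash of its content so duplicates collapse to the same ID, then let upsert semantics absorb any repeats:

```python
import hashlib


def doc_id(text: str) -> str:
    """Content-addressed ID: identical documents map to the same ID."""
    return hashlib.sha256(text.encode("utf-8")).hexdigest()


def dedupe(documents: list[str]) -> dict[str, str]:
    """Duplicate content overwrites the same key, leaving unique IDs only."""
    return {doc_id(doc): doc for doc in documents}


print(len(dedupe(["alpha", "beta", "alpha"])))  # 2
```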
2025-11-06 10:59:52 -08:00
Greyson LaLonde
e4cc9a664c fix: handle unpickleable values in flow state
2025-11-06 01:29:21 -05:00
Greyson LaLonde
7e6171d5bc fix: ensure lite agents course-correct on validation errors
* fix: ensure lite agents course-correct on validation errors

* chore: update cassettes and test expectations

* fix: ensure multiple guardrails propagate
2025-11-05 19:02:11 -05:00
Greyson LaLonde
61ad1fb112 feat: add support for llm message interceptor hooks 2025-11-05 11:38:44 -05:00
Greyson LaLonde
54710a8711 fix: hash callback args correctly to ensure caching works
2025-11-05 07:19:09 -05:00
Lucas Gomide
5abf976373 fix: allow adding RAG source content from valid URLs (#3831)
2025-11-04 07:58:40 -05:00
Greyson LaLonde
329567153b fix: make plot node selection smoother
2025-11-03 07:49:31 -05:00
Greyson LaLonde
60332e0b19 feat: cache i18n prompts for efficient use 2025-11-03 07:39:05 -05:00
Lorenze Jay
40932af3fa feat: bump versions to 1.3.0 (#3820)
* feat: bump versions to 1.3.0

* chore: update crew and flow templates to use crewai[tools] version 1.3.0
2025-10-31 18:54:02 -07:00
Greyson LaLonde
e134e5305b Gl/feat/a2a refactor (#3793)
* feat: agent metaclass, refactor a2a to wrappers

* feat: a2a schemas and utils

* chore: move agent class, update imports

* refactor: organize imports to avoid circularity, add a2a to console

* feat: pass response_model through call chain

* feat: add standard openapi spec serialization to tools and structured output

* feat: a2a events

* chore: add a2a to pyproject

* docs: minimal base for learn docs

* fix: adjust a2a conversation flow, allow llm to decide exit until max_retries

* fix: inject agent skills into initial prompt

* fix: format agent card as json in prompt

* refactor: simplify A2A agent prompt formatting and improve skill display

* chore: wide cleanup

* chore: cleanup logic, add auth cache, use json for messages in prompt

* chore: update docs

* fix: doc snippets formatting

* feat: optimize A2A agent card fetching and improve error reporting

* chore: move imports to top of file

* chore: refactor hasattr check

* chore: add httpx-auth, update lockfile

* feat: create base public api

* chore: cleanup modules, add docstrings, types

* fix: exclude extra fields in prompt

* chore: update docs

* tests: update to correct import

* chore: lint for ruff, add missing import

* fix: tweak openai streaming logic for response model

* tests: add reimport for test

* tests: add reimport for test

* fix: don't set a2a attr if not set

* fix: don't set a2a attr if not set

* chore: update cassettes

* tests: fix tests

* fix: use instructor and dont pass response_format for litellm

* chore: consolidate event listeners, add typing

* fix: address race condition in test, update cassettes

* tests: add correct mocks, rerun cassette for json

* tests: update cassette

* chore: regenerate cassette after new run

* fix: make token manager access-safe

* fix: make token manager access-safe

* merge

* chore: update test and cassette for output pydantic

* fix: tweak to disallow deadlock

* chore: linter

* fix: adjust event ordering for threading

* fix: use conditional for batch check

* tests: tweak for emission

* tests: simplify api + event check

* fix: ensure non-function calling llms see json formatted string

* tests: tweak message comparison

* fix: use internal instructor for litellm structure responses

---------

Co-authored-by: Mike Plachta <mike@crewai.com>
2025-10-31 18:42:03 -07:00
Greyson LaLonde
e229ef4e19 refactor: improve flow handling, typing, and logging; update UI and tests
fix: refine nested flow conditionals and ensure router methods and routes are fully parsed
fix: improve docstrings, typing, and logging coverage across all events
feat: update flow.plot feature with new UI enhancements
chore: apply Ruff linting, reorganize imports, and remove deprecated utilities/files
chore: split constants and utils, clean JS comments, and add typing for linters
tests: strengthen test coverage for flow execution paths and router logic
2025-10-31 21:15:06 -04:00
184 changed files with 35541 additions and 22896 deletions

View File

@@ -19,6 +19,7 @@ repos:
         language: system
         pass_filenames: true
         types: [python]
+        exclude: ^(lib/crewai/src/crewai/cli/templates/|lib/crewai/tests/|lib/crewai-tools/tests/)
 - repo: https://github.com/astral-sh/uv-pre-commit
   rev: 0.9.3
   hooks:

View File

@@ -1200,6 +1200,52 @@ Learn how to get the most out of your LLM configuration:
    )
    ```
</Accordion>

<Accordion title="Transport Interceptors">
CrewAI provides message interceptors for several providers, allowing you to hook into request/response cycles at the transport layer.

**Supported Providers:**

- ✅ OpenAI
- ✅ Anthropic

**Basic Usage:**

```python
import httpx
from crewai import LLM
from crewai.llms.hooks import BaseInterceptor


class CustomInterceptor(BaseInterceptor[httpx.Request, httpx.Response]):
    """Custom interceptor to modify requests and responses."""

    def on_outbound(self, request: httpx.Request) -> httpx.Request:
        """Print request before sending to the LLM provider."""
        print(request)
        return request

    def on_inbound(self, response: httpx.Response) -> httpx.Response:
        """Process response after receiving from the LLM provider."""
        print(f"Status: {response.status_code}")
        print(f"Response time: {response.elapsed}")
        return response


# Use the interceptor with an LLM
llm = LLM(
    model="openai/gpt-4o",
    interceptor=CustomInterceptor()
)
```
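For message transformation, the outbound hook can also tag requests before they leave the process; a minimal sketch (the header name is illustrative, and note the caution below about mutating received objects):

```python
import uuid

import httpx
from crewai.llms.hooks import BaseInterceptor


class TaggingInterceptor(BaseInterceptor[httpx.Request, httpx.Response]):
    """Attach a correlation ID header to every outbound LLM request."""

    def on_outbound(self, request: httpx.Request) -> httpx.Request:
        # Hypothetical header for correlating requests in your own logs.
        request.headers["X-Correlation-ID"] = str(uuid.uuid4())
        return request

    def on_inbound(self, response: httpx.Response) -> httpx.Response:
        return response
```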
**Important Notes:**
- Both methods must return an object of the same type they received.
- Modifying received objects may result in unexpected behavior or application crashes.
- Not all providers support interceptors; check the supported providers list above.

<Info>
Interceptors operate at the transport layer. This is particularly useful for:
- Message transformation and filtering
- Debugging API interactions
</Info>
</Accordion>
</AccordionGroup>
## Common Issues and Solutions

View File

@@ -0,0 +1,291 @@
---
title: Agent-to-Agent (A2A) Protocol
description: Enable CrewAI agents to delegate tasks to remote A2A-compliant agents for specialized handling
icon: network-wired
mode: "wide"
---
## A2A Agent Delegation
CrewAI supports the Agent-to-Agent (A2A) protocol, allowing agents to delegate tasks to remote specialized agents. The agent's LLM automatically decides whether to handle a task directly or delegate to an A2A agent based on the task requirements.
<Note>
A2A delegation requires the `a2a-sdk` package. Install with: `uv add 'crewai[a2a]'` or `pip install 'crewai[a2a]'`
</Note>
## How It Works
When an agent is configured with A2A capabilities:
1. The LLM analyzes each task
2. It decides to either:
- Handle the task directly using its own capabilities
- Delegate to a remote A2A agent for specialized handling
3. If delegating, the agent communicates with the remote A2A agent through the protocol
4. Results are returned to the CrewAI workflow
## Basic Configuration
Configure an agent for A2A delegation by setting the `a2a` parameter:
```python Code
from crewai import Agent, Crew, Task
from crewai.a2a import A2AConfig

agent = Agent(
    role="Research Coordinator",
    goal="Coordinate research tasks efficiently",
    backstory="Expert at delegating to specialized research agents",
    llm="gpt-4o",
    a2a=A2AConfig(
        endpoint="https://example.com/.well-known/agent-card.json",
        timeout=120,
        max_turns=10
    )
)

task = Task(
    description="Research the latest developments in quantum computing",
    expected_output="A comprehensive research report",
    agent=agent
)

crew = Crew(agents=[agent], tasks=[task], verbose=True)
result = crew.kickoff()
```
## Configuration Options
The `A2AConfig` class accepts the following parameters:
<ParamField path="endpoint" type="str" required>
The A2A agent endpoint URL (typically points to `.well-known/agent-card.json`)
</ParamField>
<ParamField path="auth" type="AuthScheme" default="None">
Authentication scheme for the A2A agent. Supports Bearer tokens, OAuth2, API keys, and HTTP authentication.
</ParamField>
<ParamField path="timeout" type="int" default="120">
Request timeout in seconds
</ParamField>
<ParamField path="max_turns" type="int" default="10">
Maximum number of conversation turns with the A2A agent
</ParamField>
<ParamField path="response_model" type="type[BaseModel]" default="None">
Optional Pydantic model for requesting structured output from an A2A agent. A2A protocol does not
enforce this, so an A2A agent does not need to honor this request.
</ParamField>
<ParamField path="fail_fast" type="bool" default="True">
Whether to raise an error immediately if agent connection fails. When `False`, the agent continues with available agents and informs the LLM about unavailable ones.
</ParamField>
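For example, a minimal sketch of the `response_model` option (the `ResearchSummary` schema is hypothetical, and as noted above the remote agent may ignore the request):

```python Code
from pydantic import BaseModel

from crewai import Agent
from crewai.a2a import A2AConfig


class ResearchSummary(BaseModel):
    """Hypothetical schema for the structured response."""

    topic: str
    key_findings: list[str]


agent = Agent(
    role="Research Coordinator",
    goal="Coordinate research tasks efficiently",
    backstory="Expert at delegating to specialized research agents",
    llm="gpt-4o",
    a2a=A2AConfig(
        endpoint="https://research.example.com/.well-known/agent-card.json",
        response_model=ResearchSummary,  # requested, but not enforced by the protocol
    ),
)
```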
## Authentication
For A2A agents that require authentication, use one of the provided auth schemes:
<Tabs>
<Tab title="Bearer Token">
```python Code
from crewai.a2a import A2AConfig
from crewai.a2a.auth import BearerTokenAuth

agent = Agent(
    role="Secure Coordinator",
    goal="Coordinate tasks with secured agents",
    backstory="Manages secure agent communications",
    llm="gpt-4o",
    a2a=A2AConfig(
        endpoint="https://secure-agent.example.com/.well-known/agent-card.json",
        auth=BearerTokenAuth(token="your-bearer-token"),
        timeout=120
    )
)
```
</Tab>
<Tab title="API Key">
```python Code
from crewai.a2a import A2AConfig
from crewai.a2a.auth import APIKeyAuth

agent = Agent(
    role="API Coordinator",
    goal="Coordinate with API-based agents",
    backstory="Manages API-authenticated communications",
    llm="gpt-4o",
    a2a=A2AConfig(
        endpoint="https://api-agent.example.com/.well-known/agent-card.json",
        auth=APIKeyAuth(
            api_key="your-api-key",
            location="header",  # or "query" or "cookie"
            name="X-API-Key"
        ),
        timeout=120
    )
)
```
</Tab>
<Tab title="OAuth2">
```python Code
from crewai.a2a import A2AConfig
from crewai.a2a.auth import OAuth2ClientCredentials

agent = Agent(
    role="OAuth Coordinator",
    goal="Coordinate with OAuth-secured agents",
    backstory="Manages OAuth-authenticated communications",
    llm="gpt-4o",
    a2a=A2AConfig(
        endpoint="https://oauth-agent.example.com/.well-known/agent-card.json",
        auth=OAuth2ClientCredentials(
            token_url="https://auth.example.com/oauth/token",
            client_id="your-client-id",
            client_secret="your-client-secret",
            scopes=["read", "write"]
        ),
        timeout=120
    )
)
```
</Tab>
<Tab title="HTTP Basic">
```python Code
from crewai.a2a import A2AConfig
from crewai.a2a.auth import HTTPBasicAuth

agent = Agent(
    role="Basic Auth Coordinator",
    goal="Coordinate with basic auth agents",
    backstory="Manages basic authentication communications",
    llm="gpt-4o",
    a2a=A2AConfig(
        endpoint="https://basic-agent.example.com/.well-known/agent-card.json",
        auth=HTTPBasicAuth(
            username="your-username",
            password="your-password"
        ),
        timeout=120
    )
)
```
</Tab>
</Tabs>
## Multiple A2A Agents
Configure multiple A2A agents for delegation by passing a list:
```python Code
from crewai.a2a import A2AConfig
from crewai.a2a.auth import BearerTokenAuth

agent = Agent(
    role="Multi-Agent Coordinator",
    goal="Coordinate with multiple specialized agents",
    backstory="Expert at delegating to the right specialist",
    llm="gpt-4o",
    a2a=[
        A2AConfig(
            endpoint="https://research.example.com/.well-known/agent-card.json",
            timeout=120
        ),
        A2AConfig(
            endpoint="https://data.example.com/.well-known/agent-card.json",
            auth=BearerTokenAuth(token="data-token"),
            timeout=90
        )
    ]
)
```
The LLM will automatically choose which A2A agent to delegate to based on the task requirements.
## Error Handling
Control how agent connection failures are handled using the `fail_fast` parameter:
```python Code
from crewai.a2a import A2AConfig

# Fail immediately on connection errors (default)
agent = Agent(
    role="Research Coordinator",
    goal="Coordinate research tasks",
    backstory="Expert at delegation",
    llm="gpt-4o",
    a2a=A2AConfig(
        endpoint="https://research.example.com/.well-known/agent-card.json",
        fail_fast=True
    )
)

# Continue with available agents
agent = Agent(
    role="Multi-Agent Coordinator",
    goal="Coordinate with multiple agents",
    backstory="Expert at working with available resources",
    llm="gpt-4o",
    a2a=[
        A2AConfig(
            endpoint="https://primary.example.com/.well-known/agent-card.json",
            fail_fast=False
        ),
        A2AConfig(
            endpoint="https://backup.example.com/.well-known/agent-card.json",
            fail_fast=False
        )
    ]
)
```
When `fail_fast=False`:
- If some agents fail, the LLM is informed which agents are unavailable and can delegate to working agents
- If all agents fail, the LLM receives a notice about unavailable agents and handles the task directly
- Connection errors are captured and included in the context for better decision-making
## Best Practices
<CardGroup cols={2}>
<Card title="Set Appropriate Timeouts" icon="clock">
Configure timeouts based on expected A2A agent response times. Longer-running tasks may need higher timeout values.
</Card>
<Card title="Limit Conversation Turns" icon="comments">
Use `max_turns` to prevent excessive back-and-forth. The agent will automatically conclude conversations before hitting the limit.
</Card>
<Card title="Use Resilient Error Handling" icon="shield-check">
Set `fail_fast=False` for production environments with multiple agents to gracefully handle connection failures and maintain workflow continuity.
</Card>
<Card title="Secure Your Credentials" icon="lock">
Store authentication tokens and credentials as environment variables, not in code.
</Card>
<Card title="Monitor Delegation Decisions" icon="eye">
Use verbose mode to observe when the LLM chooses to delegate versus handle tasks directly.
</Card>
</CardGroup>
## Supported Authentication Methods
- **Bearer Token** - Simple token-based authentication
- **OAuth2 Client Credentials** - OAuth2 flow for machine-to-machine communication
- **OAuth2 Authorization Code** - OAuth2 flow requiring user authorization
- **API Key** - Key-based authentication (header, query param, or cookie)
- **HTTP Basic** - Username/password authentication
- **HTTP Digest** - Digest authentication (requires `httpx-auth` package)
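HTTP Digest follows the same configuration pattern as the other schemes; a minimal sketch (the endpoint is illustrative, and the `httpx-auth` package must be installed):

```python Code
from crewai import Agent
from crewai.a2a import A2AConfig
from crewai.a2a.auth import HTTPDigestAuth

agent = Agent(
    role="Digest Auth Coordinator",
    goal="Coordinate with digest-protected agents",
    backstory="Manages digest-authenticated communications",
    llm="gpt-4o",
    a2a=A2AConfig(
        endpoint="https://digest-agent.example.com/.well-known/agent-card.json",
        auth=HTTPDigestAuth(
            username="your-username",
            password="your-password"
        ),
    ),
)
```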
## Learn More
For more information about the A2A protocol and reference implementations:
- [A2A Protocol Documentation](https://a2a-protocol.org)
- [A2A Sample Implementations](https://github.com/a2aproject/a2a-samples)
- [A2A Python SDK](https://github.com/a2aproject/a2a-python)

View File

@@ -11,9 +11,13 @@ The [Model Context Protocol](https://modelcontextprotocol.io/introduction) (MCP)
CrewAI offers **two approaches** for MCP integration:
### 🚀 **Simple DSL Integration** (Recommended)

Use the `mcps` field directly on agents for seamless MCP tool integration. The DSL supports both **string references** (for quick setup) and **structured configurations** (for full control).
#### String-Based References (Quick Setup)
Perfect for remote HTTPS servers and CrewAI AMP marketplace:
```python
from crewai import Agent
@@ -32,6 +36,46 @@ agent = Agent(
# MCP tools are now automatically available to your agent!
```
#### Structured Configurations (Full Control)

For complete control over connection settings, tool filtering, and all transport types:

```python
from crewai import Agent
from crewai.mcp import MCPServerStdio, MCPServerHTTP, MCPServerSSE
from crewai.mcp.filters import create_static_tool_filter

agent = Agent(
    role="Advanced Research Analyst",
    goal="Research with full control over MCP connections",
    backstory="Expert researcher with advanced tool access",
    mcps=[
        # Stdio transport for local servers
        MCPServerStdio(
            command="npx",
            args=["-y", "@modelcontextprotocol/server-filesystem"],
            env={"API_KEY": "your_key"},
            tool_filter=create_static_tool_filter(
                allowed_tool_names=["read_file", "list_directory"]
            ),
            cache_tools_list=True,
        ),
        # HTTP/Streamable HTTP transport for remote servers
        MCPServerHTTP(
            url="https://api.example.com/mcp",
            headers={"Authorization": "Bearer your_token"},
            streamable=True,
            cache_tools_list=True,
        ),
        # SSE transport for real-time streaming
        MCPServerSSE(
            url="https://stream.example.com/mcp/sse",
            headers={"Authorization": "Bearer your_token"},
        ),
    ]
)
```
### 🔧 **Advanced: MCPServerAdapter** (For Complex Scenarios)
For advanced use cases requiring manual connection management, the `crewai-tools` library provides the `MCPServerAdapter` class.
@@ -68,12 +112,14 @@ uv pip install 'crewai-tools[mcp]'
## Quick Start: Simple DSL Integration

The easiest way to integrate MCP servers is using the `mcps` field on your agents. You can use either string references or structured configurations.

### Quick Start with String References

```python
from crewai import Agent, Task, Crew

# Create agent with MCP tools using string references
research_agent = Agent(
    role="Research Analyst",
    goal="Find and analyze information using advanced search tools",
@@ -96,13 +142,53 @@ crew = Crew(agents=[research_agent], tasks=[research_task])
result = crew.kickoff()
```
### Quick Start with Structured Configurations

```python
from crewai import Agent, Task, Crew
from crewai.mcp import MCPServerStdio, MCPServerHTTP, MCPServerSSE

# Create agent with structured MCP configurations
research_agent = Agent(
    role="Research Analyst",
    goal="Find and analyze information using advanced search tools",
    backstory="Expert researcher with access to multiple data sources",
    mcps=[
        # Local stdio server
        MCPServerStdio(
            command="python",
            args=["local_server.py"],
            env={"API_KEY": "your_key"},
        ),
        # Remote HTTP server
        MCPServerHTTP(
            url="https://api.research.com/mcp",
            headers={"Authorization": "Bearer your_token"},
        ),
    ]
)

# Create task
research_task = Task(
    description="Research the latest developments in AI agent frameworks",
    expected_output="Comprehensive research report with citations",
    agent=research_agent
)

# Create and run crew
crew = Crew(agents=[research_agent], tasks=[research_task])
result = crew.kickoff()
```

That's it! The MCP tools are automatically discovered and available to your agent.
## MCP Reference Formats

The `mcps` field supports both **string references** (for quick setup) and **structured configurations** (for full control). You can mix both formats in the same list.

### String-Based References

#### External MCP Servers
```python
mcps=[
@@ -117,7 +203,7 @@ mcps=[
]
```
#### CrewAI AMP Marketplace
```python
mcps=[
@@ -133,17 +219,166 @@ mcps=[
]
```
### Structured Configurations

#### Stdio Transport (Local Servers)

Perfect for local MCP servers that run as processes:

```python
from crewai.mcp import MCPServerStdio
from crewai.mcp.filters import create_static_tool_filter

mcps=[
    MCPServerStdio(
        command="npx",
        args=["-y", "@modelcontextprotocol/server-filesystem"],
        env={"API_KEY": "your_key"},
        tool_filter=create_static_tool_filter(
            allowed_tool_names=["read_file", "write_file"]
        ),
        cache_tools_list=True,
    ),
    # Python-based server
    MCPServerStdio(
        command="python",
        args=["path/to/server.py"],
        env={"UV_PYTHON": "3.12", "API_KEY": "your_key"},
    ),
]
```
#### HTTP/Streamable HTTP Transport (Remote Servers)

For remote MCP servers over HTTP/HTTPS:

```python
from crewai.mcp import MCPServerHTTP

mcps=[
    # Streamable HTTP (default)
    MCPServerHTTP(
        url="https://api.example.com/mcp",
        headers={"Authorization": "Bearer your_token"},
        streamable=True,
        cache_tools_list=True,
    ),
    # Standard HTTP
    MCPServerHTTP(
        url="https://api.example.com/mcp",
        headers={"Authorization": "Bearer your_token"},
        streamable=False,
    ),
]
```
#### SSE Transport (Real-Time Streaming)

For remote servers using Server-Sent Events:

```python
from crewai.mcp import MCPServerSSE

mcps=[
    MCPServerSSE(
        url="https://stream.example.com/mcp/sse",
        headers={"Authorization": "Bearer your_token"},
        cache_tools_list=True,
    ),
]
```
### Mixed References

You can combine string references and structured configurations:

```python
from crewai.mcp import MCPServerStdio, MCPServerHTTP

mcps=[
    # String references
    "https://external-api.com/mcp",  # External server
    "crewai-amp:financial-insights",  # AMP service
    # Structured configurations
    MCPServerStdio(
        command="npx",
        args=["-y", "@modelcontextprotocol/server-filesystem"],
    ),
    MCPServerHTTP(
        url="https://api.example.com/mcp",
        headers={"Authorization": "Bearer token"},
    ),
]
```
### Tool Filtering

Structured configurations support advanced tool filtering:

```python
from crewai.mcp import MCPServerStdio
from crewai.mcp.filters import create_static_tool_filter, create_dynamic_tool_filter, ToolFilterContext

# Static filtering (allow/block lists)
static_filter = create_static_tool_filter(
    allowed_tool_names=["read_file", "write_file"],
    blocked_tool_names=["delete_file"],
)

# Dynamic filtering (context-aware)
def dynamic_filter(context: ToolFilterContext, tool: dict) -> bool:
    # Block dangerous tools for certain agent roles
    if context.agent.role == "Code Reviewer":
        if "delete" in tool.get("name", "").lower():
            return False
    return True

mcps=[
    MCPServerStdio(
        command="npx",
        args=["-y", "@modelcontextprotocol/server-filesystem"],
        tool_filter=static_filter,  # or dynamic_filter
    ),
]
```
## Configuration Parameters
Each transport type supports specific configuration options:
### MCPServerStdio Parameters
- **`command`** (required): Command to execute (e.g., `"python"`, `"node"`, `"npx"`, `"uvx"`)
- **`args`** (optional): List of command arguments (e.g., `["server.py"]` or `["-y", "@mcp/server"]`)
- **`env`** (optional): Dictionary of environment variables to pass to the process
- **`tool_filter`** (optional): Tool filter function for filtering available tools
- **`cache_tools_list`** (optional): Whether to cache the tool list for faster subsequent access (default: `False`)
### MCPServerHTTP Parameters
- **`url`** (required): Server URL (e.g., `"https://api.example.com/mcp"`)
- **`headers`** (optional): Dictionary of HTTP headers for authentication or other purposes
- **`streamable`** (optional): Whether to use streamable HTTP transport (default: `True`)
- **`tool_filter`** (optional): Tool filter function for filtering available tools
- **`cache_tools_list`** (optional): Whether to cache the tool list for faster subsequent access (default: `False`)
### MCPServerSSE Parameters
- **`url`** (required): Server URL (e.g., `"https://api.example.com/mcp/sse"`)
- **`headers`** (optional): Dictionary of HTTP headers for authentication or other purposes
- **`tool_filter`** (optional): Tool filter function for filtering available tools
- **`cache_tools_list`** (optional): Whether to cache the tool list for faster subsequent access (default: `False`)
### Common Parameters
All transport types support:
- **`tool_filter`**: Filter function to control which tools are available. Can be:
- `None` (default): All tools are available
- Static filter: Created with `create_static_tool_filter()` for allow/block lists
- Dynamic filter: Created with `create_dynamic_tool_filter()` for context-aware filtering
- **`cache_tools_list`**: When `True`, caches the tool list after first discovery to improve performance on subsequent connections
## Key Features
- 🔄 **Automatic Tool Discovery**: Tools are automatically discovered and integrated
@@ -152,26 +387,47 @@ mcps=[
- 🛡️ **Error Resilience**: Graceful handling of unavailable servers
- ⏱️ **Timeout Protection**: Built-in timeouts prevent hanging connections
- 📊 **Transparent Integration**: Works seamlessly with existing CrewAI features
- 🔧 **Full Transport Support**: Stdio, HTTP/Streamable HTTP, and SSE transports
- 🎯 **Advanced Filtering**: Static and dynamic tool filtering capabilities
- 🔐 **Flexible Authentication**: Support for headers, environment variables, and query parameters
## Error Handling

The MCP DSL integration is designed to be resilient and handles failures gracefully:

```python
from crewai import Agent
from crewai.mcp import MCPServerStdio, MCPServerHTTP

agent = Agent(
    role="Resilient Agent",
    goal="Continue working despite server issues",
    backstory="Agent that handles failures gracefully",
    mcps=[
        # String references
        "https://reliable-server.com/mcp",  # Will work
        "https://unreachable-server.com/mcp",  # Will be skipped gracefully
        "https://slow-server.com/mcp",  # Will timeout gracefully
        "crewai-amp:working-service",  # Will work
        # Structured configs
        MCPServerStdio(
            command="python",
            args=["reliable_server.py"],  # Will work
        ),
        MCPServerHTTP(
            url="https://slow-server.com/mcp",  # Will timeout gracefully
        ),
    ]
)
# Agent will use tools from working servers and log warnings for failing ones
```
All connection errors are handled gracefully:
- **Connection failures**: Logged as warnings, agent continues with available tools
- **Timeout errors**: Connections timeout after 30 seconds (configurable)
- **Authentication errors**: Logged clearly for debugging
- **Invalid configurations**: Validation errors are raised at agent creation time
## Advanced: MCPServerAdapter
For complex scenarios requiring manual connection management, use the `MCPServerAdapter` class from `crewai-tools`. Using a Python context manager (`with` statement) is the recommended approach as it automatically handles starting and stopping the connection to the MCP server.

View File

@@ -12,7 +12,7 @@ dependencies = [
     "pytube>=15.0.0",
     "requests>=2.32.5",
     "docker>=7.1.0",
-    "crewai==1.2.1",
+    "crewai==1.4.0",
     "lancedb>=0.5.4",
     "tiktoken>=0.8.0",
     "beautifulsoup4>=4.13.4",

View File

@@ -287,4 +287,4 @@ __all__ = [
     "ZapierActionTools",
 ]

-__version__ = "1.2.1"
+__version__ = "1.4.0"

View File

@@ -229,6 +229,7 @@ class CrewAIRagAdapter(Adapter):
                     continue
                 else:
                     metadata: dict[str, Any] = base_metadata.copy()
+                source_content = SourceContent(source_ref)
                 if data_type in [
                     DataType.PDF_FILE,
@@ -239,13 +240,12 @@ class CrewAIRagAdapter(Adapter):
                     DataType.XML,
                     DataType.MDX,
                 ]:
-                    if not os.path.isfile(source_ref):
+                    if not source_content.is_url() and not source_content.path_exists():
                         raise FileNotFoundError(f"File does not exist: {source_ref}")
                     loader = data_type.get_loader()
                     chunker = data_type.get_chunker()
-                    source_content = SourceContent(source_ref)
                     loader_result: LoaderResult = loader.load(source_content)
                     chunks = chunker.chunk(loader_result.content)
View File

@@ -23,7 +23,6 @@ dependencies = [
     "chromadb~=1.1.0",
     "tokenizers>=0.20.3",
     "openpyxl>=3.1.5",
-    "pyvis>=0.3.2",
     # Authentication and Security
     "python-dotenv>=1.1.1",
     "pyjwt>=2.9.0",
@@ -49,7 +48,7 @@ Repository = "https://github.com/crewAIInc/crewAI"
 [project.optional-dependencies]
 tools = [
-    "crewai-tools==1.2.1",
+    "crewai-tools==1.4.0",
 ]
 embeddings = [
     "tiktoken~=0.8.0"
@@ -94,10 +93,11 @@ azure-ai-inference = [
 anthropic = [
     "anthropic>=0.69.0",
 ]
-# a2a = [
-#     "a2a-sdk~=0.3.9",
-#     "httpx-sse>=0.4.0",
-# ]
+a2a = [
+    "a2a-sdk~=0.3.10",
+    "httpx-auth>=0.23.1",
+    "httpx-sse>=0.4.0",
+]

 [project.scripts]

View File

@@ -3,7 +3,7 @@ from typing import Any
 import urllib.request
 import warnings

-from crewai.agent import Agent
+from crewai.agent.core import Agent
 from crewai.crew import Crew
 from crewai.crews.crew_output import CrewOutput
 from crewai.flow.flow import Flow
@@ -40,7 +40,7 @@ def _suppress_pydantic_deprecation_warnings() -> None:

 _suppress_pydantic_deprecation_warnings()

-__version__ = "1.2.1"
+__version__ = "1.4.0"

 _telemetry_submitted = False

View File

@@ -0,0 +1,6 @@
"""Agent-to-Agent (A2A) protocol communication module for CrewAI."""
from crewai.a2a.config import A2AConfig
__all__ = ["A2AConfig"]

View File

@@ -0,0 +1,20 @@
"""A2A authentication schemas."""
from crewai.a2a.auth.schemas import (
APIKeyAuth,
BearerTokenAuth,
HTTPBasicAuth,
HTTPDigestAuth,
OAuth2AuthorizationCode,
OAuth2ClientCredentials,
)
__all__ = [
"APIKeyAuth",
"BearerTokenAuth",
"HTTPBasicAuth",
"HTTPDigestAuth",
"OAuth2AuthorizationCode",
"OAuth2ClientCredentials",
]

View File

@@ -0,0 +1,392 @@
"""Authentication schemes for A2A protocol agents.
Supported authentication methods:
- Bearer tokens
- OAuth2 (Client Credentials, Authorization Code)
- API Keys (header, query, cookie)
- HTTP Basic authentication
- HTTP Digest authentication
"""
from __future__ import annotations
from abc import ABC, abstractmethod
import base64
from collections.abc import Awaitable, Callable, MutableMapping
import time
from typing import Literal
import urllib.parse
import httpx
from httpx import DigestAuth
from pydantic import BaseModel, Field, PrivateAttr
class AuthScheme(ABC, BaseModel):
"""Base class for authentication schemes."""
@abstractmethod
async def apply_auth(
self, client: httpx.AsyncClient, headers: MutableMapping[str, str]
) -> MutableMapping[str, str]:
"""Apply authentication to request headers.
Args:
client: HTTP client for making auth requests.
headers: Current request headers.
Returns:
Updated headers with authentication applied.
"""
...
class BearerTokenAuth(AuthScheme):
"""Bearer token authentication (Authorization: Bearer <token>).
Attributes:
token: Bearer token for authentication.
"""
token: str = Field(description="Bearer token")
async def apply_auth(
self, client: httpx.AsyncClient, headers: MutableMapping[str, str]
) -> MutableMapping[str, str]:
"""Apply Bearer token to Authorization header.
Args:
client: HTTP client for making auth requests.
headers: Current request headers.
Returns:
Updated headers with Bearer token in Authorization header.
"""
headers["Authorization"] = f"Bearer {self.token}"
return headers
class HTTPBasicAuth(AuthScheme):
"""HTTP Basic authentication.
Attributes:
username: Username for Basic authentication.
password: Password for Basic authentication.
"""
username: str = Field(description="Username")
password: str = Field(description="Password")
async def apply_auth(
self, client: httpx.AsyncClient, headers: MutableMapping[str, str]
) -> MutableMapping[str, str]:
"""Apply HTTP Basic authentication.
Args:
client: HTTP client for making auth requests.
headers: Current request headers.
Returns:
Updated headers with Basic auth in Authorization header.
"""
credentials = f"{self.username}:{self.password}"
encoded = base64.b64encode(credentials.encode()).decode()
headers["Authorization"] = f"Basic {encoded}"
return headers
class HTTPDigestAuth(AuthScheme):
"""HTTP Digest authentication.
Note: Uses httpx-auth library for digest implementation.
Attributes:
username: Username for Digest authentication.
password: Password for Digest authentication.
"""
username: str = Field(description="Username")
password: str = Field(description="Password")
async def apply_auth(
self, client: httpx.AsyncClient, headers: MutableMapping[str, str]
) -> MutableMapping[str, str]:
"""Digest auth is handled by httpx auth flow, not headers.
Args:
client: HTTP client for making auth requests.
headers: Current request headers.
Returns:
Unchanged headers (Digest auth handled by httpx auth flow).
"""
return headers
def configure_client(self, client: httpx.AsyncClient) -> None:
"""Configure client with Digest auth.
Args:
client: HTTP client to configure with Digest authentication.
"""
client.auth = DigestAuth(self.username, self.password)
class APIKeyAuth(AuthScheme):
"""API Key authentication (header, query, or cookie).
Attributes:
api_key: API key value for authentication.
location: Where to send the API key (header, query, or cookie).
name: Parameter name for the API key (default: X-API-Key).
"""
api_key: str = Field(description="API key value")
location: Literal["header", "query", "cookie"] = Field(
default="header", description="Where to send the API key"
)
name: str = Field(default="X-API-Key", description="Parameter name for the API key")
async def apply_auth(
self, client: httpx.AsyncClient, headers: MutableMapping[str, str]
) -> MutableMapping[str, str]:
"""Apply API key authentication.
Args:
client: HTTP client for making auth requests.
headers: Current request headers.
Returns:
Updated headers with API key (for header/cookie locations).
"""
if self.location == "header":
headers[self.name] = self.api_key
elif self.location == "cookie":
headers["Cookie"] = f"{self.name}={self.api_key}"
return headers
def configure_client(self, client: httpx.AsyncClient) -> None:
"""Configure client for query param API keys.
Args:
client: HTTP client to configure with query param API key hook.
"""
if self.location == "query":
async def _add_api_key_param(request: httpx.Request) -> None:
url = httpx.URL(request.url)
request.url = url.copy_add_param(self.name, self.api_key)
client.event_hooks["request"].append(_add_api_key_param)
class OAuth2ClientCredentials(AuthScheme):
"""OAuth2 Client Credentials flow authentication.
Attributes:
token_url: OAuth2 token endpoint URL.
client_id: OAuth2 client identifier.
client_secret: OAuth2 client secret.
scopes: List of required OAuth2 scopes.
"""
token_url: str = Field(description="OAuth2 token endpoint")
client_id: str = Field(description="OAuth2 client ID")
client_secret: str = Field(description="OAuth2 client secret")
scopes: list[str] = Field(
default_factory=list, description="Required OAuth2 scopes"
)
_access_token: str | None = PrivateAttr(default=None)
_token_expires_at: float | None = PrivateAttr(default=None)
async def apply_auth(
self, client: httpx.AsyncClient, headers: MutableMapping[str, str]
) -> MutableMapping[str, str]:
"""Apply OAuth2 access token to Authorization header.
Args:
client: HTTP client for making token requests.
headers: Current request headers.
Returns:
Updated headers with OAuth2 access token in Authorization header.
"""
if (
self._access_token is None
or self._token_expires_at is None
or time.time() >= self._token_expires_at
):
await self._fetch_token(client)
if self._access_token:
headers["Authorization"] = f"Bearer {self._access_token}"
return headers
async def _fetch_token(self, client: httpx.AsyncClient) -> None:
"""Fetch OAuth2 access token using client credentials flow.
Args:
client: HTTP client for making token request.
Raises:
httpx.HTTPStatusError: If token request fails.
"""
data = {
"grant_type": "client_credentials",
"client_id": self.client_id,
"client_secret": self.client_secret,
}
if self.scopes:
data["scope"] = " ".join(self.scopes)
response = await client.post(self.token_url, data=data)
response.raise_for_status()
token_data = response.json()
self._access_token = token_data["access_token"]
expires_in = token_data.get("expires_in", 3600)
self._token_expires_at = time.time() + expires_in - 60
class OAuth2AuthorizationCode(AuthScheme):
"""OAuth2 Authorization Code flow authentication.
Note: Requires interactive authorization.
Attributes:
authorization_url: OAuth2 authorization endpoint URL.
token_url: OAuth2 token endpoint URL.
client_id: OAuth2 client identifier.
client_secret: OAuth2 client secret.
redirect_uri: OAuth2 redirect URI for callback.
scopes: List of required OAuth2 scopes.
"""
authorization_url: str = Field(description="OAuth2 authorization endpoint")
token_url: str = Field(description="OAuth2 token endpoint")
client_id: str = Field(description="OAuth2 client ID")
client_secret: str = Field(description="OAuth2 client secret")
redirect_uri: str = Field(description="OAuth2 redirect URI")
scopes: list[str] = Field(
default_factory=list, description="Required OAuth2 scopes"
)
_access_token: str | None = PrivateAttr(default=None)
_refresh_token: str | None = PrivateAttr(default=None)
_token_expires_at: float | None = PrivateAttr(default=None)
_authorization_callback: Callable[[str], Awaitable[str]] | None = PrivateAttr(
default=None
)
def set_authorization_callback(
self, callback: Callable[[str], Awaitable[str]] | None
) -> None:
"""Set callback to handle authorization URL.
Args:
callback: Async function that receives authorization URL and returns auth code.
"""
self._authorization_callback = callback
async def apply_auth(
self, client: httpx.AsyncClient, headers: MutableMapping[str, str]
) -> MutableMapping[str, str]:
"""Apply OAuth2 access token to Authorization header.
Args:
client: HTTP client for making token requests.
headers: Current request headers.
Returns:
Updated headers with OAuth2 access token in Authorization header.
Raises:
ValueError: If authorization callback is not set.
"""
if self._access_token is None:
if self._authorization_callback is None:
msg = "Authorization callback not set. Use set_authorization_callback()"
raise ValueError(msg)
await self._fetch_initial_token(client)
elif self._token_expires_at and time.time() >= self._token_expires_at:
await self._refresh_access_token(client)
if self._access_token:
headers["Authorization"] = f"Bearer {self._access_token}"
return headers
async def _fetch_initial_token(self, client: httpx.AsyncClient) -> None:
"""Fetch initial access token using authorization code flow.
Args:
client: HTTP client for making token request.
Raises:
ValueError: If authorization callback is not set.
httpx.HTTPStatusError: If token request fails.
"""
params = {
"response_type": "code",
"client_id": self.client_id,
"redirect_uri": self.redirect_uri,
"scope": " ".join(self.scopes),
}
auth_url = f"{self.authorization_url}?{urllib.parse.urlencode(params)}"
if self._authorization_callback is None:
msg = "Authorization callback not set"
raise ValueError(msg)
auth_code = await self._authorization_callback(auth_url)
data = {
"grant_type": "authorization_code",
"code": auth_code,
"client_id": self.client_id,
"client_secret": self.client_secret,
"redirect_uri": self.redirect_uri,
}
response = await client.post(self.token_url, data=data)
response.raise_for_status()
token_data = response.json()
self._access_token = token_data["access_token"]
self._refresh_token = token_data.get("refresh_token")
expires_in = token_data.get("expires_in", 3600)
self._token_expires_at = time.time() + expires_in - 60
async def _refresh_access_token(self, client: httpx.AsyncClient) -> None:
"""Refresh the access token using refresh token.
Args:
client: HTTP client for making token request.
Raises:
httpx.HTTPStatusError: If token refresh request fails.
"""
if not self._refresh_token:
await self._fetch_initial_token(client)
return
data = {
"grant_type": "refresh_token",
"refresh_token": self._refresh_token,
"client_id": self.client_id,
"client_secret": self.client_secret,
}
response = await client.post(self.token_url, data=data)
response.raise_for_status()
token_data = response.json()
self._access_token = token_data["access_token"]
if "refresh_token" in token_data:
self._refresh_token = token_data["refresh_token"]
expires_in = token_data.get("expires_in", 3600)
self._token_expires_at = time.time() + expires_in - 60

View File

@@ -0,0 +1,236 @@
"""Authentication utilities for A2A protocol agent communication.
Provides validation and retry logic for various authentication schemes including
OAuth2, API keys, and HTTP authentication methods.
"""
import asyncio
from collections.abc import Awaitable, Callable, MutableMapping
import re
from typing import Final
from a2a.client.errors import A2AClientHTTPError
from a2a.types import (
APIKeySecurityScheme,
AgentCard,
HTTPAuthSecurityScheme,
OAuth2SecurityScheme,
)
from httpx import AsyncClient, Response
from crewai.a2a.auth.schemas import (
APIKeyAuth,
AuthScheme,
BearerTokenAuth,
HTTPBasicAuth,
HTTPDigestAuth,
OAuth2AuthorizationCode,
OAuth2ClientCredentials,
)
_auth_store: dict[int, AuthScheme | None] = {}
_SCHEME_PATTERN: Final[re.Pattern[str]] = re.compile(r"(\w+)\s+(.+?)(?=,\s*\w+\s+|$)")
_PARAM_PATTERN: Final[re.Pattern[str]] = re.compile(r'(\w+)=(?:"([^"]*)"|([^\s,]+))')
_SCHEME_AUTH_MAPPING: Final[dict[type, tuple[type[AuthScheme], ...]]] = {
OAuth2SecurityScheme: (
OAuth2ClientCredentials,
OAuth2AuthorizationCode,
BearerTokenAuth,
),
APIKeySecurityScheme: (APIKeyAuth,),
}
_HTTP_SCHEME_MAPPING: Final[dict[str, type[AuthScheme]]] = {
"basic": HTTPBasicAuth,
"digest": HTTPDigestAuth,
"bearer": BearerTokenAuth,
}
def _raise_auth_mismatch(
expected_classes: type[AuthScheme] | tuple[type[AuthScheme], ...],
provided_auth: AuthScheme,
) -> None:
"""Raise authentication mismatch error.
Args:
expected_classes: Expected authentication class or tuple of classes.
provided_auth: Actually provided authentication instance.
Raises:
A2AClientHTTPError: Always raises with 401 status code.
"""
if isinstance(expected_classes, tuple):
if len(expected_classes) == 1:
required = expected_classes[0].__name__
else:
names = [cls.__name__ for cls in expected_classes]
required = f"one of ({', '.join(names)})"
else:
required = expected_classes.__name__
msg = (
f"AgentCard requires {required} authentication, "
f"but {type(provided_auth).__name__} was provided"
)
raise A2AClientHTTPError(401, msg)
def parse_www_authenticate(header_value: str) -> dict[str, dict[str, str]]:
"""Parse WWW-Authenticate header into auth challenges.
Args:
header_value: The WWW-Authenticate header value.
Returns:
Dictionary mapping auth scheme to its parameters.
Example: {"Bearer": {"realm": "api", "scope": "read write"}}
"""
if not header_value:
return {}
challenges: dict[str, dict[str, str]] = {}
for match in _SCHEME_PATTERN.finditer(header_value):
scheme = match.group(1)
params_str = match.group(2)
params: dict[str, str] = {}
for param_match in _PARAM_PATTERN.finditer(params_str):
key = param_match.group(1)
value = param_match.group(2) or param_match.group(3)
params[key] = value
challenges[scheme] = params
return challenges
def validate_auth_against_agent_card(
agent_card: AgentCard, auth: AuthScheme | None
) -> None:
"""Validate that provided auth matches AgentCard security requirements.
Args:
agent_card: The A2A AgentCard containing security requirements.
auth: User-provided authentication scheme (or None).
Raises:
A2AClientHTTPError: If auth doesn't match AgentCard requirements (status_code=401).
"""
if not agent_card.security or not agent_card.security_schemes:
return
if not auth:
msg = "AgentCard requires authentication but no auth scheme provided"
raise A2AClientHTTPError(401, msg)
first_security_req = agent_card.security[0] if agent_card.security else {}
for scheme_name in first_security_req.keys():
security_scheme_wrapper = agent_card.security_schemes.get(scheme_name)
if not security_scheme_wrapper:
continue
scheme = security_scheme_wrapper.root
if allowed_classes := _SCHEME_AUTH_MAPPING.get(type(scheme)):
if not isinstance(auth, allowed_classes):
_raise_auth_mismatch(allowed_classes, auth)
return
if isinstance(scheme, HTTPAuthSecurityScheme):
if required_class := _HTTP_SCHEME_MAPPING.get(scheme.scheme.lower()):
if not isinstance(auth, required_class):
_raise_auth_mismatch(required_class, auth)
return
msg = "Could not validate auth against AgentCard security requirements"
raise A2AClientHTTPError(401, msg)
async def retry_on_401(
request_func: Callable[[], Awaitable[Response]],
auth_scheme: AuthScheme | None,
client: AsyncClient,
headers: MutableMapping[str, str],
max_retries: int = 3,
) -> Response:
"""Retry a request on 401 authentication error.
Handles 401 errors by:
1. Parsing WWW-Authenticate header
2. Re-acquiring credentials
3. Retrying the request
Args:
request_func: Async function that makes the HTTP request.
auth_scheme: Authentication scheme to refresh credentials with.
client: HTTP client for making requests.
headers: Request headers to update with new auth.
max_retries: Maximum number of retry attempts (default: 3).
Returns:
HTTP response from the request.
Raises:
httpx.HTTPStatusError: If retries are exhausted or auth scheme is None.
"""
last_response: Response | None = None
last_challenges: dict[str, dict[str, str]] = {}
for attempt in range(max_retries):
response = await request_func()
if response.status_code != 401:
return response
last_response = response
if auth_scheme is None:
response.raise_for_status()
return response
www_authenticate = response.headers.get("WWW-Authenticate", "")
challenges = parse_www_authenticate(www_authenticate)
last_challenges = challenges
if attempt >= max_retries - 1:
break
backoff_time = 2**attempt
await asyncio.sleep(backoff_time)
await auth_scheme.apply_auth(client, headers)
if last_response:
last_response.raise_for_status()
return last_response
msg = "retry_on_401 failed without making any requests"
if last_challenges:
challenge_info = ", ".join(
f"{scheme} (realm={params.get('realm', 'N/A')})"
for scheme, params in last_challenges.items()
)
msg = f"{msg}. Server challenges: {challenge_info}"
raise RuntimeError(msg)
def configure_auth_client(
auth: HTTPDigestAuth | APIKeyAuth, client: AsyncClient
) -> None:
"""Configure HTTP client with auth-specific settings.
Only HTTPDigestAuth and APIKeyAuth need client configuration.
Args:
auth: Authentication scheme that requires client configuration.
client: HTTP client to configure.
"""
auth.configure_client(client)

View File

@@ -0,0 +1,59 @@
"""A2A configuration types.
This module is separate from experimental.a2a to avoid circular imports.
"""
from __future__ import annotations
from typing import Annotated
from pydantic import (
BaseModel,
BeforeValidator,
Field,
HttpUrl,
TypeAdapter,
)
from crewai.a2a.auth.schemas import AuthScheme
http_url_adapter = TypeAdapter(HttpUrl)
Url = Annotated[
str,
BeforeValidator(
lambda value: str(http_url_adapter.validate_python(value, strict=True))
),
]
class A2AConfig(BaseModel):
"""Configuration for A2A protocol integration.
Attributes:
endpoint: A2A agent endpoint URL.
auth: Authentication scheme (Bearer, OAuth2, API Key, HTTP Basic/Digest).
timeout: Request timeout in seconds (default: 120).
max_turns: Maximum conversation turns with A2A agent (default: 10).
response_model: Optional Pydantic model for structured A2A agent responses.
fail_fast: If True, raise error when agent unreachable; if False, skip and continue (default: True).
"""
endpoint: Url = Field(description="A2A agent endpoint URL")
auth: AuthScheme | None = Field(
default=None,
description="Authentication scheme (Bearer, OAuth2, API Key, HTTP Basic/Digest)",
)
timeout: int = Field(default=120, description="Request timeout in seconds")
max_turns: int = Field(
default=10, description="Maximum conversation turns with A2A agent"
)
response_model: type[BaseModel] | None = Field(
default=None,
description="Optional Pydantic model for structured A2A agent responses. When specified, the A2A agent is expected to return JSON matching this schema.",
)
fail_fast: bool = Field(
default=True,
description="If True, raise an error immediately when the A2A agent is unreachable. If False, skip the A2A agent and continue execution.",
)

View File

@@ -0,0 +1,29 @@
"""String templates for A2A (Agent-to-Agent) protocol messaging and status."""
from string import Template
from typing import Final
AVAILABLE_AGENTS_TEMPLATE: Final[Template] = Template(
"\n<AVAILABLE_A2A_AGENTS>\n $available_a2a_agents\n</AVAILABLE_A2A_AGENTS>\n"
)
PREVIOUS_A2A_CONVERSATION_TEMPLATE: Final[Template] = Template(
"\n<PREVIOUS_A2A_CONVERSATION>\n"
" $previous_a2a_conversation"
"\n</PREVIOUS_A2A_CONVERSATION>\n"
)
CONVERSATION_TURN_INFO_TEMPLATE: Final[Template] = Template(
"\n<CONVERSATION_PROGRESS>\n"
' turn="$turn_count"\n'
' max_turns="$max_turns"\n'
" $warning"
"\n</CONVERSATION_PROGRESS>\n"
)
UNAVAILABLE_AGENTS_NOTICE_TEMPLATE: Final[Template] = Template(
"\n<A2A_AGENTS_STATUS>\n"
" NOTE: A2A agents were configured but are currently unavailable.\n"
" You cannot delegate to remote agents for this task.\n\n"
" Unavailable Agents:\n"
" $unavailable_agents"
"\n</A2A_AGENTS_STATUS>\n"
)

View File

@@ -0,0 +1,38 @@
"""Type definitions for A2A protocol message parts."""
from typing import Any, Literal, Protocol, TypedDict, runtime_checkable
from typing_extensions import NotRequired
@runtime_checkable
class AgentResponseProtocol(Protocol):
"""Protocol for the dynamically created AgentResponse model."""
a2a_ids: tuple[str, ...]
message: str
is_a2a: bool
class PartsMetadataDict(TypedDict, total=False):
"""Metadata for A2A message parts.
Attributes:
mimeType: MIME type for the part content.
schema: JSON schema for the part content.
"""
mimeType: Literal["application/json"]
schema: dict[str, Any]
class PartsDict(TypedDict):
"""A2A message part containing text and optional metadata.
Attributes:
text: The text content of the message part.
metadata: Optional metadata describing the part content.
"""
text: str
metadata: NotRequired[PartsMetadataDict]

View File

@@ -0,0 +1,755 @@
"""Utility functions for A2A (Agent-to-Agent) protocol delegation."""
from __future__ import annotations
import asyncio
from collections.abc import AsyncIterator, MutableMapping
from contextlib import asynccontextmanager
from functools import lru_cache
import time
from typing import TYPE_CHECKING, Any
import uuid
from a2a.client import Client, ClientConfig, ClientFactory
from a2a.client.errors import A2AClientHTTPError
from a2a.types import (
AgentCard,
Message,
Part,
Role,
TaskArtifactUpdateEvent,
TaskState,
TaskStatusUpdateEvent,
TextPart,
TransportProtocol,
)
import httpx
from pydantic import BaseModel, Field, create_model
from crewai.a2a.auth.schemas import APIKeyAuth, HTTPDigestAuth
from crewai.a2a.auth.utils import (
_auth_store,
configure_auth_client,
retry_on_401,
validate_auth_against_agent_card,
)
from crewai.a2a.config import A2AConfig
from crewai.a2a.types import PartsDict, PartsMetadataDict
from crewai.events.event_bus import crewai_event_bus
from crewai.events.types.a2a_events import (
A2AConversationStartedEvent,
A2ADelegationCompletedEvent,
A2ADelegationStartedEvent,
A2AMessageSentEvent,
A2AResponseReceivedEvent,
)
from crewai.types.utils import create_literals_from_strings
if TYPE_CHECKING:
from a2a.types import Message, Task as A2ATask
from crewai.a2a.auth.schemas import AuthScheme
@lru_cache()
def _fetch_agent_card_cached(
endpoint: str,
auth_hash: int,
timeout: int,
_ttl_hash: int,
) -> AgentCard:
"""Cached version of fetch_agent_card with auth support.
Args:
endpoint: A2A agent endpoint URL
auth_hash: Hash of the auth object
timeout: Request timeout
_ttl_hash: Time-based hash for cache invalidation (unused in body)
Returns:
Cached AgentCard
"""
auth = _auth_store.get(auth_hash)
loop = asyncio.new_event_loop()
asyncio.set_event_loop(loop)
try:
return loop.run_until_complete(
_fetch_agent_card_async(endpoint=endpoint, auth=auth, timeout=timeout)
)
finally:
loop.close()
def fetch_agent_card(
endpoint: str,
auth: AuthScheme | None = None,
timeout: int = 30,
use_cache: bool = True,
cache_ttl: int = 300,
) -> AgentCard:
"""Fetch AgentCard from an A2A endpoint with optional caching.
Args:
endpoint: A2A agent endpoint URL (AgentCard URL)
auth: Optional AuthScheme for authentication
timeout: Request timeout in seconds
use_cache: Whether to use caching (default True)
cache_ttl: Cache TTL in seconds (default 300 = 5 minutes)
Returns:
AgentCard object with agent capabilities and skills
Raises:
httpx.HTTPStatusError: If the request fails
A2AClientHTTPError: If authentication fails
"""
if use_cache:
auth_hash = hash((type(auth).__name__, id(auth))) if auth else 0
_auth_store[auth_hash] = auth
ttl_hash = int(time.time() // cache_ttl)
return _fetch_agent_card_cached(endpoint, auth_hash, timeout, ttl_hash)
loop = asyncio.new_event_loop()
asyncio.set_event_loop(loop)
try:
return loop.run_until_complete(
_fetch_agent_card_async(endpoint=endpoint, auth=auth, timeout=timeout)
)
finally:
loop.close()
async def _fetch_agent_card_async(
endpoint: str,
auth: AuthScheme | None,
timeout: int,
) -> AgentCard:
"""Async implementation of AgentCard fetching.
Args:
endpoint: A2A agent endpoint URL
auth: Optional AuthScheme for authentication
timeout: Request timeout in seconds
Returns:
AgentCard object
"""
if "/.well-known/agent-card.json" in endpoint:
base_url = endpoint.replace("/.well-known/agent-card.json", "")
agent_card_path = "/.well-known/agent-card.json"
else:
url_parts = endpoint.split("/", 3)
base_url = f"{url_parts[0]}//{url_parts[2]}"
agent_card_path = f"/{url_parts[3]}" if len(url_parts) > 3 else "/"
headers: MutableMapping[str, str] = {}
if auth:
async with httpx.AsyncClient(timeout=timeout) as temp_auth_client:
if isinstance(auth, (HTTPDigestAuth, APIKeyAuth)):
configure_auth_client(auth, temp_auth_client)
headers = await auth.apply_auth(temp_auth_client, {})
async with httpx.AsyncClient(timeout=timeout, headers=headers) as temp_client:
if auth and isinstance(auth, (HTTPDigestAuth, APIKeyAuth)):
configure_auth_client(auth, temp_client)
agent_card_url = f"{base_url}{agent_card_path}"
async def _fetch_agent_card_request() -> httpx.Response:
return await temp_client.get(agent_card_url)
try:
response = await retry_on_401(
request_func=_fetch_agent_card_request,
auth_scheme=auth,
client=temp_client,
headers=temp_client.headers,
max_retries=2,
)
response.raise_for_status()
return AgentCard.model_validate(response.json())
except httpx.HTTPStatusError as e:
if e.response.status_code == 401:
error_details = ["Authentication failed"]
www_auth = e.response.headers.get("WWW-Authenticate")
if www_auth:
error_details.append(f"WWW-Authenticate: {www_auth}")
if not auth:
error_details.append("No auth scheme provided")
msg = " | ".join(error_details)
raise A2AClientHTTPError(401, msg) from e
raise
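For reference, the URL handling above splits a bare endpoint into a base URL and a card path with a single split; an illustrative input/output pair (values are examples):

endpoint = "https://agents.example.com/custom/card.json"
url_parts = endpoint.split("/", 3)
# url_parts == ["https:", "", "agents.example.com", "custom/card.json"]
base_url = f"{url_parts[0]}//{url_parts[2]}"  # "https://agents.example.com"
agent_card_path = f"/{url_parts[3]}"          # "/custom/card.json"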
def execute_a2a_delegation(
endpoint: str,
auth: AuthScheme | None,
timeout: int,
task_description: str,
context: str | None = None,
context_id: str | None = None,
task_id: str | None = None,
reference_task_ids: list[str] | None = None,
metadata: dict[str, Any] | None = None,
extensions: dict[str, Any] | None = None,
conversation_history: list[Message] | None = None,
agent_id: str | None = None,
agent_role: Role | None = None,
agent_branch: Any | None = None,
response_model: type[BaseModel] | None = None,
turn_number: int | None = None,
) -> dict[str, Any]:
"""Execute a task delegation to a remote A2A agent with multi-turn support.
Handles:
- AgentCard discovery
- Authentication setup
- Message creation and sending
- Response parsing
- Multi-turn conversations
Args:
endpoint: A2A agent endpoint URL (AgentCard URL)
auth: Optional AuthScheme for authentication (Bearer, OAuth2, API Key, HTTP Basic/Digest)
timeout: Request timeout in seconds
task_description: The task to delegate
context: Optional context information
context_id: Context ID for correlating messages/tasks
task_id: Specific task identifier
reference_task_ids: List of related task IDs
metadata: Additional metadata (external_id, request_id, etc.)
extensions: Protocol extensions for custom fields
conversation_history: Previous Message objects from conversation
agent_id: Agent identifier for logging
agent_role: Role of the CrewAI agent delegating the task
agent_branch: Optional agent tree branch for logging
response_model: Optional Pydantic model for structured outputs
turn_number: Optional turn number for multi-turn conversations
Returns:
Dictionary with:
- status: "completed", "input_required", "failed", etc.
- result: Result string (if completed)
- error: Error message (if failed)
- history: List of new Message objects from this exchange
Raises:
ImportError: If a2a-sdk is not installed
"""
is_multiturn = bool(conversation_history)
if turn_number is None:
turn_number = (
len([m for m in (conversation_history or []) if m.role == Role.user]) + 1
)
crewai_event_bus.emit(
agent_branch,
A2ADelegationStartedEvent(
endpoint=endpoint,
task_description=task_description,
agent_id=agent_id,
is_multiturn=is_multiturn,
turn_number=turn_number,
),
)
loop = asyncio.new_event_loop()
asyncio.set_event_loop(loop)
try:
result = loop.run_until_complete(
_execute_a2a_delegation_async(
endpoint=endpoint,
auth=auth,
timeout=timeout,
task_description=task_description,
context=context,
context_id=context_id,
task_id=task_id,
reference_task_ids=reference_task_ids,
metadata=metadata,
extensions=extensions,
conversation_history=conversation_history or [],
is_multiturn=is_multiturn,
turn_number=turn_number,
agent_branch=agent_branch,
agent_id=agent_id,
agent_role=agent_role,
response_model=response_model,
)
)
crewai_event_bus.emit(
agent_branch,
A2ADelegationCompletedEvent(
status=result["status"],
result=result.get("result"),
error=result.get("error"),
is_multiturn=is_multiturn,
),
)
return result
finally:
loop.close()
async def _execute_a2a_delegation_async(
endpoint: str,
auth: AuthScheme | None,
timeout: int,
task_description: str,
context: str | None,
context_id: str | None,
task_id: str | None,
reference_task_ids: list[str] | None,
metadata: dict[str, Any] | None,
extensions: dict[str, Any] | None,
conversation_history: list[Message],
is_multiturn: bool = False,
turn_number: int = 1,
agent_branch: Any | None = None,
agent_id: str | None = None,
agent_role: str | None = None,
response_model: type[BaseModel] | None = None,
) -> dict[str, Any]:
"""Async implementation of A2A delegation with multi-turn support.
Args:
endpoint: A2A agent endpoint URL
auth: Optional AuthScheme for authentication
timeout: Request timeout in seconds
task_description: Task to delegate
context: Optional context
context_id: Context ID for correlation
task_id: Specific task identifier
reference_task_ids: Related task IDs
metadata: Additional metadata
extensions: Protocol extensions
conversation_history: Previous Message objects
is_multiturn: Whether this is a multi-turn conversation
turn_number: Current turn number
agent_branch: Agent tree branch for logging
agent_id: Agent identifier for logging
agent_role: Agent role for logging
response_model: Optional Pydantic model for structured outputs
Returns:
Dictionary with status, result/error, and new history
"""
agent_card = await _fetch_agent_card_async(endpoint, auth, timeout)
validate_auth_against_agent_card(agent_card, auth)
headers: MutableMapping[str, str] = {}
if auth:
async with httpx.AsyncClient(timeout=timeout) as temp_auth_client:
if isinstance(auth, (HTTPDigestAuth, APIKeyAuth)):
configure_auth_client(auth, temp_auth_client)
headers = await auth.apply_auth(temp_auth_client, {})
a2a_agent_name = agent_card.name or None
if turn_number == 1:
agent_id_for_event = agent_id or endpoint
crewai_event_bus.emit(
agent_branch,
A2AConversationStartedEvent(
agent_id=agent_id_for_event,
endpoint=endpoint,
a2a_agent_name=a2a_agent_name,
),
)
message_parts = []
if context:
message_parts.append(f"Context:\n{context}\n\n")
message_parts.append(task_description)
message_text = "".join(message_parts)
if is_multiturn and conversation_history and not task_id:
if first_task_id := conversation_history[0].task_id:
task_id = first_task_id
parts: PartsDict = {"text": message_text}
if response_model:
parts.update(
{
"metadata": PartsMetadataDict(
mimeType="application/json",
schema=response_model.model_json_schema(),
)
}
)
message = Message(
role=Role.user,
message_id=str(uuid.uuid4()),
parts=[Part(root=TextPart(**parts))],
context_id=context_id,
task_id=task_id,
reference_task_ids=reference_task_ids,
metadata=metadata,
extensions=extensions,
)
transport_protocol = TransportProtocol("JSONRPC")
new_messages: list[Message] = [*conversation_history, message]
crewai_event_bus.emit(
None,
A2AMessageSentEvent(
message=message_text,
turn_number=turn_number,
is_multiturn=is_multiturn,
agent_role=agent_role,
),
)
async with _create_a2a_client(
agent_card=agent_card,
transport_protocol=transport_protocol,
timeout=timeout,
headers=headers,
streaming=True,
auth=auth,
) as client:
result_parts: list[str] = []
final_result: dict[str, Any] | None = None
event_stream = client.send_message(message)
try:
async for event in event_stream:
if isinstance(event, Message):
new_messages.append(event)
for part in event.parts:
if part.root.kind == "text":
text = part.root.text
result_parts.append(text)
elif isinstance(event, tuple):
a2a_task, update = event
if isinstance(update, TaskArtifactUpdateEvent):
artifact = update.artifact
result_parts.extend(
part.root.text
for part in artifact.parts
if part.root.kind == "text"
)
is_final_update = False
if isinstance(update, TaskStatusUpdateEvent):
is_final_update = update.final
if not is_final_update and a2a_task.status.state not in [
TaskState.completed,
TaskState.input_required,
TaskState.failed,
TaskState.rejected,
TaskState.auth_required,
TaskState.canceled,
]:
continue
if a2a_task.status.state == TaskState.completed:
extracted_parts = _extract_task_result_parts(a2a_task)
result_parts.extend(extracted_parts)
if a2a_task.history:
new_messages.extend(a2a_task.history)
response_text = " ".join(result_parts) if result_parts else ""
crewai_event_bus.emit(
None,
A2AResponseReceivedEvent(
response=response_text,
turn_number=turn_number,
is_multiturn=is_multiturn,
status="completed",
agent_role=agent_role,
),
)
final_result = {
"status": "completed",
"result": response_text,
"history": new_messages,
"agent_card": agent_card,
}
break
if a2a_task.status.state == TaskState.input_required:
if a2a_task.history:
new_messages.extend(a2a_task.history)
response_text = _extract_error_message(
a2a_task, "Additional input required"
)
if response_text and not a2a_task.history:
agent_message = Message(
role=Role.agent,
message_id=str(uuid.uuid4()),
parts=[Part(root=TextPart(text=response_text))],
context_id=a2a_task.context_id
if hasattr(a2a_task, "context_id")
else None,
task_id=a2a_task.task_id
if hasattr(a2a_task, "task_id")
else None,
)
new_messages.append(agent_message)
crewai_event_bus.emit(
None,
A2AResponseReceivedEvent(
response=response_text,
turn_number=turn_number,
is_multiturn=is_multiturn,
status="input_required",
agent_role=agent_role,
),
)
final_result = {
"status": "input_required",
"error": response_text,
"history": new_messages,
"agent_card": agent_card,
}
break
if a2a_task.status.state in [TaskState.failed, TaskState.rejected]:
error_msg = _extract_error_message(
a2a_task, "Task failed without error message"
)
if a2a_task.history:
new_messages.extend(a2a_task.history)
final_result = {
"status": "failed",
"error": error_msg,
"history": new_messages,
}
break
if a2a_task.status.state == TaskState.auth_required:
error_msg = _extract_error_message(
a2a_task, "Authentication required"
)
final_result = {
"status": "auth_required",
"error": error_msg,
"history": new_messages,
}
break
if a2a_task.status.state == TaskState.canceled:
error_msg = _extract_error_message(
a2a_task, "Task was canceled"
)
final_result = {
"status": "canceled",
"error": error_msg,
"history": new_messages,
}
break
except Exception as e:
current_exception: Exception | BaseException | None = e
while current_exception:
if hasattr(current_exception, "response"):
response = current_exception.response
if hasattr(response, "text"):
break
if current_exception and hasattr(current_exception, "__cause__"):
current_exception = current_exception.__cause__
raise
finally:
if hasattr(event_stream, "aclose"):
await event_stream.aclose()
if final_result:
return final_result
return {
"status": "completed",
"result": " ".join(result_parts) if result_parts else "",
"history": new_messages,
}
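Callers receive a plain dict rather than a typed result object. A hedged sketch of how the return shape might be consumed (endpoint and task values are placeholders):

result = execute_a2a_delegation(
    endpoint="https://agents.example.com/.well-known/agent-card.json",  # placeholder
    auth=None,
    timeout=30,
    task_description="Summarize the quarterly report",
)
if result["status"] == "completed":
    print(result["result"])
elif result["status"] == "input_required":
    # Pass result["history"] back as conversation_history on the next turn.
    print(result["error"])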
@asynccontextmanager
async def _create_a2a_client(
agent_card: AgentCard,
transport_protocol: TransportProtocol,
timeout: int,
headers: MutableMapping[str, str],
streaming: bool,
auth: AuthScheme | None = None,
) -> AsyncIterator[Client]:
"""Create and configure an A2A client.
Args:
agent_card: The A2A agent card
transport_protocol: Transport protocol to use
timeout: Request timeout in seconds
headers: HTTP headers (already with auth applied)
streaming: Enable streaming responses
auth: Optional AuthScheme for client configuration
Yields:
Configured A2A client instance
"""
async with httpx.AsyncClient(
timeout=timeout,
headers=headers,
) as httpx_client:
if auth and isinstance(auth, (HTTPDigestAuth, APIKeyAuth)):
configure_auth_client(auth, httpx_client)
config = ClientConfig(
httpx_client=httpx_client,
supported_transports=[str(transport_protocol.value)],
streaming=streaming,
accepted_output_modes=["application/json"],
)
factory = ClientFactory(config)
client = factory.create(agent_card)
yield client
def _extract_task_result_parts(a2a_task: A2ATask) -> list[str]:
"""Extract result parts from A2A task history and artifacts.
Args:
a2a_task: A2A Task object with history and artifacts
Returns:
List of result text parts
"""
result_parts: list[str] = []
if a2a_task.history:
for history_msg in reversed(a2a_task.history):
if history_msg.role == Role.agent:
result_parts.extend(
part.root.text
for part in history_msg.parts
if part.root.kind == "text"
)
break
if a2a_task.artifacts:
result_parts.extend(
part.root.text
for artifact in a2a_task.artifacts
for part in artifact.parts
if part.root.kind == "text"
)
return result_parts
def _extract_error_message(a2a_task: A2ATask, default: str) -> str:
"""Extract error message from A2A task.
Args:
a2a_task: A2A Task object
default: Default message if no error found
Returns:
Error message string
"""
if a2a_task.status and a2a_task.status.message:
msg = a2a_task.status.message
if msg:
for part in msg.parts:
if part.root.kind == "text":
return str(part.root.text)
return str(msg)
if a2a_task.history:
for history_msg in reversed(a2a_task.history):
for part in history_msg.parts:
if part.root.kind == "text":
return str(part.root.text)
return default
def create_agent_response_model(agent_ids: tuple[str, ...]) -> type[BaseModel]:
"""Create a dynamic AgentResponse model with Literal types for agent IDs.
Args:
agent_ids: Tuple of available A2A agent endpoint IDs
Returns:
Dynamically created Pydantic model with Literal-constrained a2a_ids field
"""
DynamicLiteral = create_literals_from_strings(agent_ids) # noqa: N806
return create_model(
"AgentResponse",
a2a_ids=(
tuple[DynamicLiteral, ...], # type: ignore[valid-type]
Field(
default_factory=tuple,
max_length=len(agent_ids),
description="A2A agent IDs to delegate to.",
),
),
message=(
str,
Field(
description="The message content. If is_a2a=true, this is sent to the A2A agent. If is_a2a=false, this is your final answer ending the conversation."
),
),
is_a2a=(
bool,
Field(
description="Set to true to continue the conversation by sending this message to the A2A agent and awaiting their response. Set to false ONLY when you are completely done and providing your final answer (not when asking questions)."
),
),
__base__=BaseModel,
)
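Because a2a_ids is constrained to a Literal built from the known endpoints, a hallucinated agent ID fails validation before any delegation happens. A small usage sketch (the endpoint string is illustrative):

AgentResponse = create_agent_response_model(("https://a.example.com",))
reply = AgentResponse.model_validate(
    {"a2a_ids": ("https://a.example.com",), "message": "Please review", "is_a2a": True}
)
# The same call with an unknown endpoint in a2a_ids raises ValidationError.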
def extract_a2a_agent_ids_from_config(
a2a_config: list[A2AConfig] | A2AConfig | None,
) -> tuple[list[A2AConfig], tuple[str, ...]]:
"""Extract A2A agent IDs from A2A configuration.
Args:
a2a_config: A2A configuration
Returns:
Tuple of (list of A2A agent configs, tuple of agent endpoint IDs)
"""
if a2a_config is None:
return [], ()
if isinstance(a2a_config, A2AConfig):
a2a_agents = [a2a_config]
else:
a2a_agents = a2a_config
return a2a_agents, tuple(config.endpoint for config in a2a_agents)
def get_a2a_agents_and_response_model(
a2a_config: list[A2AConfig] | A2AConfig | None,
) -> tuple[list[A2AConfig], type[BaseModel]]:
"""Get A2A agent IDs and response model.
Args:
a2a_config: A2A configuration
Returns:
Tuple of (list of A2A agent configs, dynamically created agent response model)
"""
a2a_agents, agent_ids = extract_a2a_agent_ids_from_config(a2a_config=a2a_config)
return a2a_agents, create_agent_response_model(agent_ids)

View File

@@ -0,0 +1,570 @@
"""A2A agent wrapping logic for metaclass integration.
Wraps agent classes with A2A delegation capabilities.
"""
from __future__ import annotations
from collections.abc import Callable
from concurrent.futures import ThreadPoolExecutor, as_completed
from functools import wraps
from types import MethodType
from typing import TYPE_CHECKING, Any, cast
from a2a.types import Role
from pydantic import BaseModel, ValidationError
from crewai.a2a.config import A2AConfig
from crewai.a2a.templates import (
AVAILABLE_AGENTS_TEMPLATE,
CONVERSATION_TURN_INFO_TEMPLATE,
PREVIOUS_A2A_CONVERSATION_TEMPLATE,
UNAVAILABLE_AGENTS_NOTICE_TEMPLATE,
)
from crewai.a2a.types import AgentResponseProtocol
from crewai.a2a.utils import (
execute_a2a_delegation,
fetch_agent_card,
get_a2a_agents_and_response_model,
)
from crewai.events.event_bus import crewai_event_bus
from crewai.events.types.a2a_events import (
A2AConversationCompletedEvent,
A2AMessageSentEvent,
)
if TYPE_CHECKING:
from a2a.types import AgentCard, Message
from crewai.agent.core import Agent
from crewai.task import Task
from crewai.tools.base_tool import BaseTool
def wrap_agent_with_a2a_instance(agent: Agent) -> None:
"""Wrap an agent instance's execute_task method with A2A support.
This function modifies the agent instance by wrapping its execute_task
method to add A2A delegation capabilities. Should only be called when
the agent has a2a configuration set.
Args:
agent: The agent instance to wrap
"""
original_execute_task = agent.execute_task.__func__
@wraps(original_execute_task)
def execute_task_with_a2a(
self: Agent,
task: Task,
context: str | None = None,
tools: list[BaseTool] | None = None,
) -> str:
"""Execute task with A2A delegation support.
Args:
self: The agent instance
task: The task to execute
context: Optional context for task execution
tools: Optional tools available to the agent
Returns:
Task execution result
"""
if not self.a2a:
return original_execute_task(self, task, context, tools)
a2a_agents, agent_response_model = get_a2a_agents_and_response_model(self.a2a)
return _execute_task_with_a2a(
self=self,
a2a_agents=a2a_agents,
original_fn=original_execute_task,
task=task,
agent_response_model=agent_response_model,
context=context,
tools=tools,
)
object.__setattr__(agent, "execute_task", MethodType(execute_task_with_a2a, agent))
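The object.__setattr__ plus MethodType combination patches only this instance, sidestepping Pydantic's attribute interception, and leaves other agents untouched. A stripped-down sketch of the same pattern outside Pydantic (class and names are illustrative):

from types import MethodType

class Worker:
    def run(self, job: str) -> str:
        return f"ran {job}"

def _with_logging(original):
    def run_logged(self, job: str) -> str:
        print(f"about to run: {job}")  # extra behavior added per instance
        return original(self, job)
    return run_logged

w = Worker()
object.__setattr__(w, "run", MethodType(_with_logging(Worker.run), w))
w.run("report")  # wrapped; a fresh Worker() instance is not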
def _fetch_card_from_config(
config: A2AConfig,
) -> tuple[A2AConfig, AgentCard | Exception]:
"""Fetch agent card from A2A config.
Args:
config: A2A configuration
Returns:
Tuple of (config, card or exception)
"""
try:
card = fetch_agent_card(
endpoint=config.endpoint,
auth=config.auth,
timeout=config.timeout,
)
return config, card
except Exception as e:
return config, e
def _fetch_agent_cards_concurrently(
a2a_agents: list[A2AConfig],
) -> tuple[dict[str, AgentCard], dict[str, str]]:
"""Fetch agent cards concurrently for multiple A2A agents.
Args:
a2a_agents: List of A2A agent configurations
Returns:
Tuple of (agent_cards dict, failed_agents dict mapping endpoint to error message)
"""
agent_cards: dict[str, AgentCard] = {}
failed_agents: dict[str, str] = {}
with ThreadPoolExecutor(max_workers=len(a2a_agents)) as executor:
futures = {
executor.submit(_fetch_card_from_config, config): config
for config in a2a_agents
}
for future in as_completed(futures):
config, result = future.result()
if isinstance(result, Exception):
if config.fail_fast:
raise RuntimeError(
f"Failed to fetch agent card from {config.endpoint}. "
f"Ensure the A2A agent is running and accessible. Error: {result}"
) from result
failed_agents[config.endpoint] = str(result)
else:
agent_cards[config.endpoint] = result
return agent_cards, failed_agents
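The fan-out/collect pattern above only aborts the whole batch when a failing config sets fail_fast; otherwise errors are collected per endpoint. The same skeleton in isolation (a sketch with a stand-in fetch function):

from concurrent.futures import ThreadPoolExecutor, as_completed

def fetch(url: str) -> str:
    return f"card-for-{url}"  # stand-in for a network call; may raise

urls = ["https://a.example.com", "https://b.example.com"]
results: dict[str, str] = {}
failures: dict[str, str] = {}
with ThreadPoolExecutor(max_workers=len(urls)) as executor:
    futures = {executor.submit(fetch, u): u for u in urls}
    for future in as_completed(futures):
        url = futures[future]
        try:
            results[url] = future.result()
        except Exception as e:  # collect instead of aborting the batch
            failures[url] = str(e)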
def _execute_task_with_a2a(
self: Agent,
a2a_agents: list[A2AConfig],
original_fn: Callable[..., str],
task: Task,
agent_response_model: type[BaseModel],
context: str | None,
tools: list[BaseTool] | None,
) -> str:
"""Wrap execute_task with A2A delegation logic.
Args:
self: The agent instance
a2a_agents: List of A2A agent configurations
original_fn: The original execute_task method
task: The task to execute
agent_response_model: Model used to parse the agent's delegation decision
context: Optional context for task execution
tools: Optional tools available to the agent
Returns:
Task execution result (either from LLM or A2A agent)
"""
original_description: str = task.description
original_output_pydantic = task.output_pydantic
original_response_model = task.response_model
agent_cards, failed_agents = _fetch_agent_cards_concurrently(a2a_agents)
if not agent_cards and a2a_agents and failed_agents:
unavailable_agents_text = ""
for endpoint, error in failed_agents.items():
unavailable_agents_text += f" - {endpoint}: {error}\n"
notice = UNAVAILABLE_AGENTS_NOTICE_TEMPLATE.substitute(
unavailable_agents=unavailable_agents_text
)
task.description = f"{original_description}{notice}"
try:
return original_fn(self, task, context, tools)
finally:
task.description = original_description
task.description = _augment_prompt_with_a2a(
a2a_agents=a2a_agents,
task_description=original_description,
agent_cards=agent_cards,
failed_agents=failed_agents,
)
task.response_model = agent_response_model
try:
raw_result = original_fn(self, task, context, tools)
agent_response = _parse_agent_response(
raw_result=raw_result, agent_response_model=agent_response_model
)
if isinstance(agent_response, BaseModel) and isinstance(
agent_response, AgentResponseProtocol
):
if agent_response.is_a2a:
return _delegate_to_a2a(
self,
agent_response=agent_response,
task=task,
original_fn=original_fn,
context=context,
tools=tools,
agent_cards=agent_cards,
original_task_description=original_description,
)
return str(agent_response.message)
return raw_result
finally:
task.description = original_description
task.output_pydantic = original_output_pydantic
task.response_model = original_response_model
def _augment_prompt_with_a2a(
a2a_agents: list[A2AConfig],
task_description: str,
agent_cards: dict[str, AgentCard],
conversation_history: list[Message] | None = None,
turn_num: int = 0,
max_turns: int | None = None,
failed_agents: dict[str, str] | None = None,
) -> str:
"""Add A2A delegation instructions to prompt.
Args:
a2a_agents: List of A2A agent configurations
task_description: Original task description
agent_cards: Dictionary mapping agent endpoints to AgentCards
conversation_history: Previous A2A Messages from conversation
turn_num: Current turn number (0-indexed)
max_turns: Maximum allowed turns (from config)
failed_agents: Dictionary mapping failed agent endpoints to error messages
Returns:
Augmented task description with A2A instructions
"""
if not agent_cards:
return task_description
agents_text = ""
for config in a2a_agents:
if config.endpoint in agent_cards:
card = agent_cards[config.endpoint]
agents_text += f"\n{card.model_dump_json(indent=2, exclude_none=True, include={'description', 'url', 'skills'})}\n"
failed_agents = failed_agents or {}
if failed_agents:
agents_text += "\n<!-- Unavailable Agents -->\n"
for endpoint, error in failed_agents.items():
agents_text += f"\n<!-- Agent: {endpoint}\n Status: Unavailable\n Error: {error} -->\n"
agents_text = AVAILABLE_AGENTS_TEMPLATE.substitute(available_a2a_agents=agents_text)
history_text = ""
if conversation_history:
for msg in conversation_history:
history_text += f"\n{msg.model_dump_json(indent=2, exclude_none=True, exclude={'message_id'})}\n"
history_text = PREVIOUS_A2A_CONVERSATION_TEMPLATE.substitute(
previous_a2a_conversation=history_text
)
turn_info = ""
if max_turns is not None and conversation_history:
turn_count = turn_num + 1
warning = ""
if turn_count >= max_turns:
warning = (
"CRITICAL: This is the FINAL turn. You MUST conclude the conversation now.\n"
"Set is_a2a=false and provide your final response to complete the task."
)
elif turn_count == max_turns - 1:
warning = "WARNING: Next turn will be the last. Consider wrapping up the conversation."
turn_info = CONVERSATION_TURN_INFO_TEMPLATE.substitute(
turn_count=turn_count,
max_turns=max_turns,
warning=warning,
)
return f"""{task_description}
IMPORTANT: You have the ability to delegate this task to remote A2A agents.
{agents_text}
{history_text}{turn_info}
"""
def _parse_agent_response(
raw_result: str | dict[str, Any], agent_response_model: type[BaseModel]
) -> BaseModel | str:
"""Parse LLM output as AgentResponse or return raw agent response.
Args:
raw_result: Raw output from LLM
agent_response_model: The agent response model
Returns:
Parsed AgentResponse or string
"""
if agent_response_model:
try:
if isinstance(raw_result, str):
return agent_response_model.model_validate_json(raw_result)
if isinstance(raw_result, dict):
return agent_response_model.model_validate(raw_result)
except ValidationError:
return cast(str, raw_result)
return cast(str, raw_result)
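Note the deliberate fallback: output that fails validation degrades to the raw string instead of raising, so a malformed delegation decision becomes a normal answer. Illustrative behavior of the helper above (the model is an example):

from pydantic import BaseModel

class Reply(BaseModel):
    message: str
    is_a2a: bool

_parse_agent_response('{"message": "hi", "is_a2a": false}', Reply)  # -> Reply instance
_parse_agent_response("just plain text", Reply)                     # -> "just plain text"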
def _handle_agent_response_and_continue(
self: Agent,
a2a_result: dict[str, Any],
agent_id: str,
agent_cards: dict[str, AgentCard] | None,
a2a_agents: list[A2AConfig],
original_task_description: str,
conversation_history: list[Message],
turn_num: int,
max_turns: int,
task: Task,
original_fn: Callable[..., str],
context: str | None,
tools: list[BaseTool] | None,
agent_response_model: type[BaseModel],
) -> tuple[str | None, str | None]:
"""Handle A2A result and get CrewAI agent's response.
Args:
self: The agent instance
a2a_result: Result from A2A delegation
agent_id: ID of the A2A agent
agent_cards: Pre-fetched agent cards
a2a_agents: List of A2A configurations
original_task_description: Original task description
conversation_history: Conversation history
turn_num: Current turn number
max_turns: Maximum turns allowed
task: The task being executed
original_fn: Original execute_task method
context: Optional context
tools: Optional tools
agent_response_model: Response model for parsing
Returns:
Tuple of (final_result, current_request) where:
- final_result is not None if conversation should end
- current_request is the next message to send if continuing
"""
agent_cards_dict = agent_cards or {}
if "agent_card" in a2a_result and agent_id not in agent_cards_dict:
agent_cards_dict[agent_id] = a2a_result["agent_card"]
task.description = _augment_prompt_with_a2a(
a2a_agents=a2a_agents,
task_description=original_task_description,
conversation_history=conversation_history,
turn_num=turn_num,
max_turns=max_turns,
agent_cards=agent_cards_dict,
)
raw_result = original_fn(self, task, context, tools)
llm_response = _parse_agent_response(
raw_result=raw_result, agent_response_model=agent_response_model
)
if isinstance(llm_response, BaseModel) and isinstance(
llm_response, AgentResponseProtocol
):
if not llm_response.is_a2a:
final_turn_number = turn_num + 1
crewai_event_bus.emit(
None,
A2AMessageSentEvent(
message=str(llm_response.message),
turn_number=final_turn_number,
is_multiturn=True,
agent_role=self.role,
),
)
crewai_event_bus.emit(
None,
A2AConversationCompletedEvent(
status="completed",
final_result=str(llm_response.message),
error=None,
total_turns=final_turn_number,
),
)
return str(llm_response.message), None
return None, str(llm_response.message)
return str(raw_result), None
def _delegate_to_a2a(
self: Agent,
agent_response: AgentResponseProtocol,
task: Task,
original_fn: Callable[..., str],
context: str | None,
tools: list[BaseTool] | None,
agent_cards: dict[str, AgentCard] | None = None,
original_task_description: str | None = None,
) -> str:
"""Delegate to A2A agent with multi-turn conversation support.
Args:
self: The agent instance
agent_response: The AgentResponse indicating delegation
task: The task being executed (for extracting A2A fields)
original_fn: The original execute_task method for follow-ups
context: Optional context for task execution
tools: Optional tools available to the agent
agent_cards: Pre-fetched agent cards from _execute_task_with_a2a
original_task_description: The original task description before A2A augmentation
Returns:
Result from A2A agent
Raises:
ImportError: If a2a-sdk is not installed
"""
a2a_agents, agent_response_model = get_a2a_agents_and_response_model(self.a2a)
agent_ids = tuple(config.endpoint for config in a2a_agents)
current_request = str(agent_response.message)
agent_id = agent_response.a2a_ids[0]
if agent_id not in agent_ids:
raise ValueError(
f"Unknown A2A agent ID(s): {agent_response.a2a_ids} not in {agent_ids}"
)
agent_config = next(filter(lambda x: x.endpoint == agent_id, a2a_agents))
task_config = task.config or {}
context_id = task_config.get("context_id")
task_id_config = task_config.get("task_id")
reference_task_ids = task_config.get("reference_task_ids")
metadata = task_config.get("metadata")
extensions = task_config.get("extensions")
if original_task_description is None:
original_task_description = task.description
conversation_history: list[Message] = []
max_turns = agent_config.max_turns
try:
for turn_num in range(max_turns):
console_formatter = getattr(crewai_event_bus, "_console", None)
agent_branch = None
if console_formatter:
agent_branch = getattr(
console_formatter, "current_agent_branch", None
) or getattr(console_formatter, "current_task_branch", None)
a2a_result = execute_a2a_delegation(
endpoint=agent_config.endpoint,
auth=agent_config.auth,
timeout=agent_config.timeout,
task_description=current_request,
context_id=context_id,
task_id=task_id_config,
reference_task_ids=reference_task_ids,
metadata=metadata,
extensions=extensions,
conversation_history=conversation_history,
agent_id=agent_id,
agent_role=Role.user,
agent_branch=agent_branch,
response_model=agent_config.response_model,
turn_number=turn_num + 1,
)
conversation_history = a2a_result.get("history", [])
if a2a_result["status"] in ["completed", "input_required"]:
final_result, next_request = _handle_agent_response_and_continue(
self=self,
a2a_result=a2a_result,
agent_id=agent_id,
agent_cards=agent_cards,
a2a_agents=a2a_agents,
original_task_description=original_task_description,
conversation_history=conversation_history,
turn_num=turn_num,
max_turns=max_turns,
task=task,
original_fn=original_fn,
context=context,
tools=tools,
agent_response_model=agent_response_model,
)
if final_result is not None:
return final_result
if next_request is not None:
current_request = next_request
continue
error_msg = a2a_result.get("error", "Unknown error")
crewai_event_bus.emit(
None,
A2AConversationCompletedEvent(
status="failed",
final_result=None,
error=error_msg,
total_turns=turn_num + 1,
),
)
raise Exception(f"A2A delegation failed: {error_msg}")
if conversation_history:
for msg in reversed(conversation_history):
if msg.role == Role.agent:
text_parts = [
part.root.text for part in msg.parts if part.root.kind == "text"
]
final_message = (
" ".join(text_parts) if text_parts else "Conversation completed"
)
crewai_event_bus.emit(
None,
A2AConversationCompletedEvent(
status="completed",
final_result=final_message,
error=None,
total_turns=max_turns,
),
)
return final_message
crewai_event_bus.emit(
None,
A2AConversationCompletedEvent(
status="failed",
final_result=None,
error=f"Conversation exceeded maximum turns ({max_turns})",
total_turns=max_turns,
),
)
raise Exception(f"A2A conversation exceeded maximum turns ({max_turns})")
finally:
task.description = original_task_description

View File

@@ -0,0 +1,5 @@
from crewai.agent.core import Agent
from crewai.utilities.training_handler import CrewTrainingHandler
__all__ = ["Agent", "CrewTrainingHandler"]

View File

@@ -2,27 +2,27 @@ from __future__ import annotations
import asyncio
from collections.abc import Sequence
import json
import shutil
import subprocess
import time
from typing import (
TYPE_CHECKING,
Any,
Final,
Literal,
cast,
)
from urllib.parse import urlparse
from pydantic import BaseModel, Field, InstanceOf, PrivateAttr, model_validator
from typing_extensions import Self
from crewai.a2a.config import A2AConfig
from crewai.agents.agent_builder.base_agent import BaseAgent
from crewai.agents.cache.cache_handler import CacheHandler
from crewai.agents.crew_agent_executor import CrewAgentExecutor
from crewai.events.event_bus import crewai_event_bus
from crewai.events.types.agent_events import (
AgentExecutionCompletedEvent,
AgentExecutionErrorEvent,
AgentExecutionStartedEvent,
)
from crewai.events.types.knowledge_events import (
KnowledgeQueryCompletedEvent,
KnowledgeQueryFailedEvent,
@@ -40,6 +40,16 @@ from crewai.knowledge.source.base_knowledge_source import BaseKnowledgeSource
from crewai.knowledge.utils.knowledge_utils import extract_knowledge_context
from crewai.lite_agent import LiteAgent
from crewai.llms.base_llm import BaseLLM
from crewai.mcp import (
MCPClient,
MCPServerConfig,
MCPServerHTTP,
MCPServerSSE,
MCPServerStdio,
)
from crewai.mcp.transports.http import HTTPTransport
from crewai.mcp.transports.sse import SSETransport
from crewai.mcp.transports.stdio import StdioTransport
from crewai.memory.contextual.contextual_memory import ContextualMemory
from crewai.rag.embeddings.types import EmbedderConfig
from crewai.security.fingerprint import Fingerprint
@@ -70,14 +80,14 @@ if TYPE_CHECKING:
# MCP Connection timeout constants (in seconds)
MCP_CONNECTION_TIMEOUT = 10
MCP_TOOL_EXECUTION_TIMEOUT = 30
MCP_DISCOVERY_TIMEOUT = 15
MCP_MAX_RETRIES = 3
MCP_CONNECTION_TIMEOUT: Final[int] = 10
MCP_TOOL_EXECUTION_TIMEOUT: Final[int] = 30
MCP_DISCOVERY_TIMEOUT: Final[int] = 15
MCP_MAX_RETRIES: Final[int] = 3
# Simple in-memory cache for MCP tool schemas (duration: 5 minutes)
_mcp_schema_cache = {}
_cache_ttl = 300 # 5 minutes
_mcp_schema_cache: dict[str, Any] = {}
_cache_ttl: Final[int] = 300 # 5 minutes
class Agent(BaseAgent):
@@ -108,6 +118,7 @@ class Agent(BaseAgent):
"""
_times_executed: int = PrivateAttr(default=0)
_mcp_clients: list[Any] = PrivateAttr(default_factory=list)
max_execution_time: int | None = Field(
default=None,
description="Maximum execution time for an agent to execute a task",
@@ -197,6 +208,10 @@ class Agent(BaseAgent):
guardrail_max_retries: int = Field(
default=3, description="Maximum number of retries when guardrail fails"
)
a2a: list[A2AConfig] | A2AConfig | None = Field(
default=None,
description="A2A (Agent-to-Agent) configuration for delegating tasks to remote agents. Can be a single A2AConfig or a dict mapping agent IDs to configs.",
)
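With this field, delegation is enabled declaratively at construction time. A hedged configuration sketch (the endpoint is a placeholder, unrelated Agent arguments are elided, and the top-level Agent import path is assumed):

from crewai import Agent  # assumed import path
from crewai.a2a.config import A2AConfig

agent = Agent(
    role="Researcher",
    goal="Answer questions, delegating to a remote specialist when useful",
    backstory="...",
    a2a=A2AConfig(
        endpoint="https://remote-agent.example.com/.well-known/agent-card.json"
    ),
)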
@model_validator(mode="before")
def validate_from_repository(cls, v: Any) -> dict[str, Any] | None | Any: # noqa: N805
@@ -305,17 +320,19 @@ class Agent(BaseAgent):
# If the task requires output in JSON or Pydantic format,
# append specific instructions to the task prompt to ensure
# that the final answer does not include any code block markers
if task.output_json or task.output_pydantic:
# Skip this if task.response_model is set, as native structured outputs handle schema automatically
if (task.output_json or task.output_pydantic) and not task.response_model:
# Generate the schema based on the output format
if task.output_json:
# schema = json.dumps(task.output_json, indent=2)
schema = generate_model_description(task.output_json)
schema_dict = generate_model_description(task.output_json)
schema = json.dumps(schema_dict["json_schema"]["schema"], indent=2)
task_prompt += "\n" + self.i18n.slice(
"formatted_task_instructions"
).format(output_format=schema)
elif task.output_pydantic:
schema = generate_model_description(task.output_pydantic)
schema_dict = generate_model_description(task.output_pydantic)
schema = json.dumps(schema_dict["json_schema"]["schema"], indent=2)
task_prompt += "\n" + self.i18n.slice(
"formatted_task_instructions"
).format(output_format=schema)
@@ -438,6 +455,13 @@ class Agent(BaseAgent):
else:
task_prompt = self._use_trained_data(task_prompt=task_prompt)
# Import agent events locally to avoid circular imports
from crewai.events.types.agent_events import (
AgentExecutionCompletedEvent,
AgentExecutionErrorEvent,
AgentExecutionStartedEvent,
)
try:
crewai_event_bus.emit(
self,
@@ -513,6 +537,9 @@ class Agent(BaseAgent):
self,
event=AgentExecutionCompletedEvent(agent=self, task=task, output=result),
)
self._cleanup_mcp_clients()
return result
def _execute_with_timeout(self, task_prompt: str, task: Task, timeout: int) -> Any:
@@ -618,6 +645,7 @@ class Agent(BaseAgent):
self._rpm_controller.check_or_wait if self._rpm_controller else None
),
callbacks=[TokenCalcHandler(self._token_process)],
response_model=task.response_model if task else None,
)
def get_delegation_tools(self, agents: list[BaseAgent]) -> list[BaseTool]:
@@ -635,30 +663,70 @@ class Agent(BaseAgent):
self._logger.log("error", f"Error getting platform tools: {e!s}")
return []
def get_mcp_tools(self, mcps: list[str]) -> list[BaseTool]:
"""Convert MCP server references to CrewAI tools."""
def get_mcp_tools(self, mcps: list[str | MCPServerConfig]) -> list[BaseTool]:
"""Convert MCP server references/configs to CrewAI tools.
Supports both string references (backwards compatible) and structured
configuration objects (MCPServerStdio, MCPServerHTTP, MCPServerSSE).
Args:
mcps: List of MCP server references (strings) or configurations.
Returns:
List of BaseTool instances from MCP servers.
"""
all_tools = []
clients = []
for mcp_ref in mcps:
try:
if mcp_ref.startswith("crewai-amp:"):
tools = self._get_amp_mcp_tools(mcp_ref)
elif mcp_ref.startswith("https://"):
tools = self._get_external_mcp_tools(mcp_ref)
else:
continue
for mcp_config in mcps:
if isinstance(mcp_config, str):
tools = self._get_mcp_tools_from_string(mcp_config)
else:
tools, client = self._get_native_mcp_tools(mcp_config)
if client:
clients.append(client)
all_tools.extend(tools)
self._logger.log(
"info", f"Successfully loaded {len(tools)} tools from {mcp_ref}"
)
except Exception as e:
self._logger.log("warning", f"Skipping MCP {mcp_ref} due to error: {e}")
continue
all_tools.extend(tools)
# Store clients for cleanup
self._mcp_clients.extend(clients)
return all_tools
def _cleanup_mcp_clients(self) -> None:
"""Cleanup MCP client connections after task execution."""
if not self._mcp_clients:
return
async def _disconnect_all() -> None:
for client in self._mcp_clients:
if client and hasattr(client, "connected") and client.connected:
await client.disconnect()
try:
asyncio.run(_disconnect_all())
except Exception as e:
self._logger.log("error", f"Error during MCP client cleanup: {e}")
finally:
self._mcp_clients.clear()
def _get_mcp_tools_from_string(self, mcp_ref: str) -> list[BaseTool]:
"""Get tools from legacy string-based MCP references.
This method maintains backwards compatibility with string-based
MCP references (https://... and crewai-amp:...).
Args:
mcp_ref: String reference to MCP server.
Returns:
List of BaseTool instances.
"""
if mcp_ref.startswith("crewai-amp:"):
return self._get_amp_mcp_tools(mcp_ref)
if mcp_ref.startswith("https://"):
return self._get_external_mcp_tools(mcp_ref)
return []
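Both entry styles can be mixed in one list; strings take the legacy path above while config objects go through the native client. A usage sketch (server values are placeholders):

from crewai.mcp import MCPServerStdio

tools = agent.get_mcp_tools([
    "https://mcp.example.com/sse#search",                     # legacy string reference
    MCPServerStdio(command="uvx", args=["some-mcp-server"]),  # structured config
])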
def _get_external_mcp_tools(self, mcp_ref: str) -> list[BaseTool]:
"""Get tools from external HTTPS MCP server with graceful error handling."""
from crewai.tools.mcp_tool_wrapper import MCPToolWrapper
@@ -709,7 +777,7 @@ class Agent(BaseAgent):
f"Specific tool '{specific_tool}' not found on MCP server: {server_url}",
)
return tools
return cast(list[BaseTool], tools)
except Exception as e:
self._logger.log(
@@ -717,6 +785,164 @@ class Agent(BaseAgent):
)
return []
def _get_native_mcp_tools(
self, mcp_config: MCPServerConfig
) -> tuple[list[BaseTool], Any | None]:
"""Get tools from MCP server using structured configuration.
This method creates an MCP client based on the configuration type,
connects to the server, discovers tools, applies filtering, and
returns wrapped tools along with the client instance for cleanup.
Args:
mcp_config: MCP server configuration (MCPServerStdio, MCPServerHTTP, or MCPServerSSE).
Returns:
Tuple of (list of BaseTool instances, MCPClient instance for cleanup).
"""
from crewai.tools.base_tool import BaseTool
from crewai.tools.mcp_native_tool import MCPNativeTool
if isinstance(mcp_config, MCPServerStdio):
transport = StdioTransport(
command=mcp_config.command,
args=mcp_config.args,
env=mcp_config.env,
)
server_name = f"{mcp_config.command}_{'_'.join(mcp_config.args)}"
elif isinstance(mcp_config, MCPServerHTTP):
transport = HTTPTransport(
url=mcp_config.url,
headers=mcp_config.headers,
streamable=mcp_config.streamable,
)
server_name = self._extract_server_name(mcp_config.url)
elif isinstance(mcp_config, MCPServerSSE):
transport = SSETransport(
url=mcp_config.url,
headers=mcp_config.headers,
)
server_name = self._extract_server_name(mcp_config.url)
else:
raise ValueError(f"Unsupported MCP server config type: {type(mcp_config)}")
client = MCPClient(
transport=transport,
cache_tools_list=mcp_config.cache_tools_list,
)
async def _setup_client_and_list_tools() -> list[dict[str, Any]]:
"""Async helper to connect and list tools in same event loop."""
try:
if not client.connected:
await client.connect()
tools_list = await client.list_tools()
try:
await client.disconnect()
# Small delay to allow background tasks to finish cleanup
# This helps prevent "cancel scope in different task" errors
# when asyncio.run() closes the event loop
await asyncio.sleep(0.1)
except Exception as e:
self._logger.log("error", f"Error during disconnect: {e}")
return tools_list
except Exception as e:
if client.connected:
await client.disconnect()
await asyncio.sleep(0.1)
raise RuntimeError(
f"Error during setup client and list tools: {e}"
) from e
try:
try:
asyncio.get_running_loop()
import concurrent.futures
with concurrent.futures.ThreadPoolExecutor() as executor:
future = executor.submit(
asyncio.run, _setup_client_and_list_tools()
)
tools_list = future.result()
except RuntimeError:
try:
tools_list = asyncio.run(_setup_client_and_list_tools())
except RuntimeError as e:
error_msg = str(e).lower()
if "cancel scope" in error_msg or "task" in error_msg:
raise ConnectionError(
"MCP connection failed due to event loop cleanup issues. "
"This may be due to authentication errors or server unavailability."
) from e
raise
except asyncio.CancelledError as e:
raise ConnectionError(
"MCP connection was cancelled. This may indicate an authentication "
"error or server unavailability."
) from e
if mcp_config.tool_filter:
filtered_tools = []
for tool in tools_list:
if callable(mcp_config.tool_filter):
try:
from crewai.mcp.filters import ToolFilterContext
context = ToolFilterContext(
agent=self,
server_name=server_name,
run_context=None,
)
if mcp_config.tool_filter(context, tool):
filtered_tools.append(tool)
except (TypeError, AttributeError):
if mcp_config.tool_filter(tool):
filtered_tools.append(tool)
else:
# Not callable - include tool
filtered_tools.append(tool)
tools_list = filtered_tools
tools = []
for tool_def in tools_list:
tool_name = tool_def.get("name", "")
if not tool_name:
continue
# Convert inputSchema to Pydantic model if present
args_schema = None
if tool_def.get("inputSchema"):
args_schema = self._json_schema_to_pydantic(
tool_name, tool_def["inputSchema"]
)
tool_schema = {
"description": tool_def.get("description", ""),
"args_schema": args_schema,
}
try:
native_tool = MCPNativeTool(
mcp_client=client,
tool_name=tool_name,
tool_schema=tool_schema,
server_name=server_name,
)
tools.append(native_tool)
except Exception as e:
self._logger.log("error", f"Failed to create native MCP tool: {e}")
continue
return cast(list[BaseTool], tools), client
except Exception as e:
if client.connected:
asyncio.run(client.disconnect())
raise RuntimeError(f"Failed to get native MCP tools: {e}") from e
def _get_amp_mcp_tools(self, amp_ref: str) -> list[BaseTool]:
"""Get tools from CrewAI AMP MCP marketplace."""
# Parse: "crewai-amp:mcp-name" or "crewai-amp:mcp-name#tool_name"
@@ -739,9 +965,9 @@ class Agent(BaseAgent):
return tools
def _extract_server_name(self, server_url: str) -> str:
@staticmethod
def _extract_server_name(server_url: str) -> str:
"""Extract clean server name from URL for tool prefixing."""
from urllib.parse import urlparse
parsed = urlparse(server_url)
domain = parsed.netloc.replace(".", "_")
@@ -778,7 +1004,9 @@ class Agent(BaseAgent):
)
return {}
async def _get_mcp_tool_schemas_async(self, server_params: dict) -> dict[str, dict]:
async def _get_mcp_tool_schemas_async(
self, server_params: dict[str, Any]
) -> dict[str, dict]:
"""Async implementation of MCP tool schema retrieval with timeouts and retries."""
server_url = server_params["url"]
return await self._retry_mcp_discovery(
@@ -787,7 +1015,7 @@ class Agent(BaseAgent):
async def _retry_mcp_discovery(
self, operation_func, server_url: str
) -> dict[str, dict]:
) -> dict[str, dict[str, Any]]:
"""Retry MCP discovery operation with exponential backoff, avoiding try-except in loop."""
last_error = None
@@ -815,9 +1043,10 @@ class Agent(BaseAgent):
f"Failed to discover MCP tools after {MCP_MAX_RETRIES} attempts: {last_error}"
)
@staticmethod
async def _attempt_mcp_discovery(
self, operation_func, server_url: str
) -> tuple[dict[str, dict] | None, str, bool]:
operation_func, server_url: str
) -> tuple[dict[str, dict[str, Any]] | None, str, bool]:
"""Attempt single MCP discovery operation and return (result, error_message, should_retry)."""
try:
result = await operation_func(server_url)
@@ -851,13 +1080,13 @@ class Agent(BaseAgent):
async def _discover_mcp_tools_with_timeout(
self, server_url: str
) -> dict[str, dict]:
) -> dict[str, dict[str, Any]]:
"""Discover MCP tools with timeout wrapper."""
return await asyncio.wait_for(
self._discover_mcp_tools(server_url), timeout=MCP_DISCOVERY_TIMEOUT
)
async def _discover_mcp_tools(self, server_url: str) -> dict[str, dict]:
async def _discover_mcp_tools(self, server_url: str) -> dict[str, dict[str, Any]]:
"""Discover tools from MCP server with proper timeout handling."""
from mcp import ClientSession
from mcp.client.streamable_http import streamablehttp_client
@@ -889,7 +1118,9 @@ class Agent(BaseAgent):
}
return schemas
def _json_schema_to_pydantic(self, tool_name: str, json_schema: dict) -> type:
def _json_schema_to_pydantic(
self, tool_name: str, json_schema: dict[str, Any]
) -> type:
"""Convert JSON Schema to Pydantic model for tool arguments.
Args:
@@ -926,7 +1157,7 @@ class Agent(BaseAgent):
model_name = f"{tool_name.replace('-', '_').replace(' ', '_')}Schema"
return create_model(model_name, **field_definitions)
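For a typical MCP inputSchema, the conversion amounts to mapping properties to typed fields and feeding them to create_model; a sketch of expected behavior (the schema and tool name are examples, and agent is assumed to be an Agent instance):

schema = {
    "type": "object",
    "properties": {
        "query": {"type": "string"},
        "limit": {"type": "integer"},
    },
    "required": ["query"],
}
Model = agent._json_schema_to_pydantic("search-tool", schema)
Model(query="crewai", limit=5)  # validates; a missing required field should fail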
def _json_type_to_python(self, field_schema: dict) -> type:
def _json_type_to_python(self, field_schema: dict[str, Any]) -> type:
"""Convert JSON Schema type to Python type.
Args:
@@ -935,7 +1166,6 @@ class Agent(BaseAgent):
Returns:
Python type
"""
from typing import Any
json_type = field_schema.get("type")
@@ -965,13 +1195,15 @@ class Agent(BaseAgent):
return type_mapping.get(json_type, Any)
def _fetch_amp_mcp_servers(self, mcp_name: str) -> list[dict]:
@staticmethod
def _fetch_amp_mcp_servers(mcp_name: str) -> list[dict]:
"""Fetch MCP server configurations from CrewAI AMP API."""
# TODO: Implement AMP API call to "integrations/mcps" endpoint
# Should return list of server configs with URLs
return []
def get_multimodal_tools(self) -> Sequence[BaseTool]:
@staticmethod
def get_multimodal_tools() -> Sequence[BaseTool]:
from crewai.tools.agent_tools.add_image_tool import AddImageTool
return [AddImageTool()]
@@ -991,8 +1223,9 @@ class Agent(BaseAgent):
)
return []
@staticmethod
def get_output_converter(
self, llm: BaseLLM, text: str, model: type[BaseModel], instructions: str
llm: BaseLLM, text: str, model: type[BaseModel], instructions: str
) -> Converter:
return Converter(llm=llm, text=text, model=model, instructions=instructions)
@@ -1022,7 +1255,8 @@ class Agent(BaseAgent):
)
return task_prompt
def _render_text_description(self, tools: list[Any]) -> str:
@staticmethod
def _render_text_description(tools: list[Any]) -> str:
"""Render the tool name and description in plain text.
Output will be in the format of:

View File

@@ -0,0 +1,76 @@
"""Generic metaclass for agent extensions.
This metaclass enables extension capabilities for agents by detecting
extension fields in class annotations and applying appropriate wrappers.
"""
import warnings
from functools import wraps
from typing import Any
from pydantic import model_validator
from pydantic._internal._model_construction import ModelMetaclass
class AgentMeta(ModelMetaclass):
"""Generic metaclass for agent extensions.
Detects extension fields (like 'a2a') in class annotations and applies
the appropriate wrapper logic to enable extension functionality.
"""
def __new__(
mcs,
name: str,
bases: tuple[type, ...],
namespace: dict[str, Any],
**kwargs: Any,
) -> type:
"""Create a new class with extension support.
Args:
name: The name of the class being created
bases: Base classes
namespace: Class namespace dictionary
**kwargs: Additional keyword arguments
Returns:
The newly created class with extension support if applicable
"""
orig_post_init_setup = namespace.get("post_init_setup")
if orig_post_init_setup is not None:
original_func = (
orig_post_init_setup.wrapped
if hasattr(orig_post_init_setup, "wrapped")
else orig_post_init_setup
)
def post_init_setup_with_extensions(self: Any) -> Any:
"""Wrap post_init_setup to apply extensions after initialization.
Args:
self: The agent instance
Returns:
The agent instance
"""
result = original_func(self)
a2a_value = getattr(self, "a2a", None)
if a2a_value is not None:
from crewai.a2a.wrapper import wrap_agent_with_a2a_instance
wrap_agent_with_a2a_instance(self)
return result
with warnings.catch_warnings():
warnings.filterwarnings(
"ignore", message=".*overrides an existing Pydantic.*"
)
namespace["post_init_setup"] = model_validator(mode="after")(
post_init_setup_with_extensions
)
return super().__new__(mcs, name, bases, namespace, **kwargs)
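The hook order matters: Pydantic finishes validation, the original post_init_setup body runs, and only then is the instance inspected for a truthy a2a value. A stripped-down sketch of the same namespace-rewrite idea without the Pydantic machinery (names are illustrative):

class Meta(type):
    def __new__(mcs, name, bases, namespace, **kwargs):
        original = namespace.get("setup")
        if original is not None:
            def setup_with_hook(self):
                result = original(self)
                if getattr(self, "extension", None) is not None:
                    print("applying extension wrapper")  # wrap here
                return result
            namespace["setup"] = setup_with_hook
        return super().__new__(mcs, name, bases, namespace, **kwargs)

class Thing(metaclass=Meta):
    extension = True
    def setup(self):
        return self

Thing().setup()  # prints "applying extension wrapper"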

View File

@@ -7,7 +7,7 @@ output conversion for OpenAI agents, supporting JSON and Pydantic model formats.
from typing import Any
from crewai.agents.agent_adapters.base_converter_adapter import BaseConverterAdapter
from crewai.utilities.i18n import I18N
from crewai.utilities.i18n import get_i18n
class OpenAIConverterAdapter(BaseConverterAdapter):
@@ -59,7 +59,7 @@ class OpenAIConverterAdapter(BaseConverterAdapter):
return base_prompt
output_schema: str = (
I18N()
get_i18n()
.slice("formatted_task_instructions")
.format(output_format=self._schema)
)

View File

@@ -18,17 +18,19 @@ from pydantic import (
from pydantic_core import PydanticCustomError
from typing_extensions import Self
from crewai.agent.internal.meta import AgentMeta
from crewai.agents.agent_builder.utilities.base_token_process import TokenProcess
from crewai.agents.cache.cache_handler import CacheHandler
from crewai.agents.tools_handler import ToolsHandler
from crewai.knowledge.knowledge import Knowledge
from crewai.knowledge.knowledge_config import KnowledgeConfig
from crewai.knowledge.source.base_knowledge_source import BaseKnowledgeSource
from crewai.mcp.config import MCPServerConfig
from crewai.rag.embeddings.types import EmbedderConfig
from crewai.security.security_config import SecurityConfig
from crewai.tools.base_tool import BaseTool, Tool
from crewai.utilities.config import process_config
from crewai.utilities.i18n import I18N
from crewai.utilities.i18n import I18N, get_i18n
from crewai.utilities.logger import Logger
from crewai.utilities.rpm_controller import RPMController
from crewai.utilities.string_utils import interpolate_only
@@ -56,7 +58,7 @@ PlatformApp = Literal[
PlatformAppOrAction = PlatformApp | str
class BaseAgent(BaseModel, ABC):
class BaseAgent(BaseModel, ABC, metaclass=AgentMeta):
"""Abstract Base Class for all third party agents compatible with CrewAI.
Attributes:
@@ -106,7 +108,7 @@ class BaseAgent(BaseModel, ABC):
Set private attributes.
"""
__hash__ = object.__hash__ # type: ignore
__hash__ = object.__hash__
_logger: Logger = PrivateAttr(default_factory=lambda: Logger(verbose=False))
_rpm_controller: RPMController | None = PrivateAttr(default=None)
_request_within_rpm_limit: Any = PrivateAttr(default=None)
@@ -149,7 +151,7 @@ class BaseAgent(BaseModel, ABC):
)
crew: Any = Field(default=None, description="Crew to which the agent belongs.")
i18n: I18N = Field(
default_factory=I18N, description="Internationalization settings."
default_factory=get_i18n, description="Internationalization settings."
)
cache_handler: CacheHandler | None = Field(
default=None, description="An instance of the CacheHandler class."
@@ -179,8 +181,8 @@ class BaseAgent(BaseModel, ABC):
default_factory=SecurityConfig,
description="Security configuration for the agent, including fingerprinting.",
)
callbacks: list[Callable] = Field(
default=[], description="Callbacks to be used for the agent"
callbacks: list[Callable[[Any], Any]] = Field(
default_factory=list, description="Callbacks to be used for the agent"
)
adapted_agent: bool = Field(
default=False, description="Whether the agent is adapted"
@@ -193,14 +195,14 @@ class BaseAgent(BaseModel, ABC):
default=None,
description="List of applications or application/action combinations that the agent can access through CrewAI Platform. Can contain app names (e.g., 'gmail') or specific actions (e.g., 'gmail/send_email')",
)
mcps: list[str] | None = Field(
mcps: list[str | MCPServerConfig] | None = Field(
default=None,
description="List of MCP server references. Supports 'https://server.com/path' for external servers and 'crewai-amp:mcp-name' for AMP marketplace. Use '#tool_name' suffix for specific tools.",
)
@model_validator(mode="before")
@classmethod
def process_model_config(cls, values):
def process_model_config(cls, values: Any) -> dict[str, Any]:
return process_config(values, cls)
@field_validator("tools")
@@ -252,23 +254,39 @@ class BaseAgent(BaseModel, ABC):
@field_validator("mcps")
@classmethod
def validate_mcps(cls, mcps: list[str] | None) -> list[str] | None:
def validate_mcps(
cls, mcps: list[str | MCPServerConfig] | None
) -> list[str | MCPServerConfig] | None:
"""Validate MCP server references and configurations.
Supports both string references (for backwards compatibility) and
structured configuration objects (MCPServerStdio, MCPServerHTTP, MCPServerSSE).
"""
if not mcps:
return mcps
validated_mcps = []
for mcp in mcps:
if mcp.startswith(("https://", "crewai-amp:")):
if isinstance(mcp, str):
if mcp.startswith(("https://", "crewai-amp:")):
validated_mcps.append(mcp)
else:
raise ValueError(
f"Invalid MCP reference: {mcp}. "
"String references must start with 'https://' or 'crewai-amp:'"
)
elif isinstance(mcp, MCPServerConfig):
validated_mcps.append(mcp)
else:
raise ValueError(
f"Invalid MCP reference: {mcp}. Must start with 'https://' or 'crewai-amp:'"
f"Invalid MCP configuration: {type(mcp)}. "
"Must be a string reference or MCPServerConfig instance."
)
return list(set(validated_mcps))
return validated_mcps
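Concretely, three entry shapes pass validation and everything else raises; an illustrative list (values are placeholders):

mcps = [
    "https://mcp.example.com/tools",                   # external server, optional "#tool_name"
    "crewai-amp:search-mcp",                           # AMP marketplace reference
    MCPServerHTTP(url="https://mcp.example.com/mcp"),  # structured config
]
# An "http://..." string or any other object type raises ValueError here.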
@model_validator(mode="after")
def validate_and_set_attributes(self):
def validate_and_set_attributes(self) -> Self:
# Validate required fields
for field in ["role", "goal", "backstory"]:
if getattr(self, field) is None:
@@ -300,7 +318,7 @@ class BaseAgent(BaseModel, ABC):
)
@model_validator(mode="after")
def set_private_attrs(self):
def set_private_attrs(self) -> Self:
"""Set private attributes."""
self._logger = Logger(verbose=self.verbose)
if self.max_rpm and not self._rpm_controller:
@@ -312,7 +330,7 @@ class BaseAgent(BaseModel, ABC):
return self
@property
def key(self):
def key(self) -> str:
source = [
self._original_role or self.role,
self._original_goal or self.goal,
@@ -330,7 +348,7 @@ class BaseAgent(BaseModel, ABC):
pass
@abstractmethod
def create_agent_executor(self, tools=None) -> None:
def create_agent_executor(self, tools: list[BaseTool] | None = None) -> None:
pass
@abstractmethod
@@ -342,7 +360,7 @@ class BaseAgent(BaseModel, ABC):
"""Get platform tools for the specified list of applications and/or application/action combinations."""
@abstractmethod
def get_mcp_tools(self, mcps: list[str]) -> list[BaseTool]:
def get_mcp_tools(self, mcps: list[str | MCPServerConfig]) -> list[BaseTool]:
"""Get MCP tools for the specified list of MCP server references."""
def copy(self) -> Self: # type: ignore # Signature of "copy" incompatible with supertype "BaseModel"
@@ -442,5 +460,5 @@ class BaseAgent(BaseModel, ABC):
self._rpm_controller = rpm_controller
self.create_agent_executor()
def set_knowledge(self, crew_embedder: EmbedderConfig | None = None):
def set_knowledge(self, crew_embedder: EmbedderConfig | None = None) -> None:
pass

View File

@@ -9,7 +9,7 @@ from __future__ import annotations
from collections.abc import Callable
from typing import TYPE_CHECKING, Any, Literal, cast
from pydantic import GetCoreSchemaHandler
from pydantic import BaseModel, GetCoreSchemaHandler
from pydantic_core import CoreSchema, core_schema
from crewai.agents.agent_builder.base_agent_executor_mixin import CrewAgentExecutorMixin
@@ -37,7 +37,7 @@ from crewai.utilities.agent_utils import (
process_llm_response,
)
from crewai.utilities.constants import TRAINING_DATA_FILE
from crewai.utilities.i18n import I18N
from crewai.utilities.i18n import I18N, get_i18n
from crewai.utilities.printer import Printer
from crewai.utilities.tool_utils import execute_tool_and_check_finality
from crewai.utilities.training_handler import CrewTrainingHandler
@@ -65,7 +65,7 @@ class CrewAgentExecutor(CrewAgentExecutorMixin):
def __init__(
self,
llm: BaseLLM | Any,
llm: BaseLLM,
task: Task,
crew: Crew,
agent: Agent,
@@ -82,6 +82,7 @@ class CrewAgentExecutor(CrewAgentExecutorMixin):
respect_context_window: bool = False,
request_within_rpm_limit: Callable[[], bool] | None = None,
callbacks: list[Any] | None = None,
response_model: type[BaseModel] | None = None,
) -> None:
"""Initialize executor.
@@ -103,8 +104,9 @@ class CrewAgentExecutor(CrewAgentExecutorMixin):
respect_context_window: Respect context limits.
request_within_rpm_limit: RPM limit check function.
callbacks: Optional callbacks list.
response_model: Optional Pydantic model for structured outputs.
"""
self._i18n: I18N = I18N()
self._i18n: I18N = get_i18n()
self.llm = llm
self.task = task
self.agent = agent
@@ -123,6 +125,7 @@ class CrewAgentExecutor(CrewAgentExecutorMixin):
self.function_calling_llm = function_calling_llm
self.respect_context_window = respect_context_window
self.request_within_rpm_limit = request_within_rpm_limit
self.response_model = response_model
self.ask_for_human_input = False
self.messages: list[LLMMessage] = []
self.iterations = 0
@@ -221,6 +224,7 @@ class CrewAgentExecutor(CrewAgentExecutorMixin):
printer=self._printer,
from_task=self.task,
from_agent=self.agent,
response_model=self.response_model,
)
formatted_answer = process_llm_response(answer, self.use_stop_words)
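For context on the new response_model parameter threaded through the executor: a Pydantic model validates the raw LLM output into a typed object. A minimal sketch with a hypothetical schema (TaskResult is not part of this diff):

from pydantic import BaseModel


class TaskResult(BaseModel):
    """Hypothetical structured-output schema."""

    summary: str
    confidence: float


raw = '{"summary": "done", "confidence": 0.9}'  # stand-in for an LLM reply
result = TaskResult.model_validate_json(raw)
print(result.summary, result.confidence)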

View File

@@ -18,10 +18,10 @@ from crewai.agents.constants import (
MISSING_ACTION_INPUT_AFTER_ACTION_ERROR_MESSAGE,
UNABLE_TO_REPAIR_JSON_RESULTS,
)
from crewai.utilities.i18n import I18N
from crewai.utilities.i18n import get_i18n
_I18N = I18N()
_I18N = get_i18n()
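The repeated I18N() -> get_i18n() swap across this changeset suggests a shared, cached instance rather than a fresh object per module. A plausible sketch of such a factory, assuming a cache keyed on the prompt_file argument (the real implementation may differ):

from functools import lru_cache


class I18N:
    """Placeholder for crewai.utilities.i18n.I18N."""

    def __init__(self, prompt_file: str | None = None) -> None:
        self.prompt_file = prompt_file


@lru_cache(maxsize=None)
def get_i18n(prompt_file: str | None = None) -> I18N:
    # Same arguments -> same shared instance instead of a new object per call.
    return I18N(prompt_file=prompt_file)


assert get_i18n() is get_i18n()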
@dataclass

View File

@@ -3,10 +3,17 @@ import json
import os
from pathlib import Path
import sys
from typing import BinaryIO, cast
from cryptography.fernet import Fernet
if sys.platform == "win32":
import msvcrt
else:
import fcntl
class TokenManager:
def __init__(self, file_path: str = "tokens.enc") -> None:
"""
@@ -18,21 +25,74 @@ class TokenManager:
self.key = self._get_or_create_key()
self.fernet = Fernet(self.key)
@staticmethod
def _acquire_lock(file_handle: BinaryIO) -> None:
"""
Acquire an exclusive lock on a file handle.
Args:
file_handle: Open file handle to lock.
"""
if sys.platform == "win32":
msvcrt.locking(file_handle.fileno(), msvcrt.LK_LOCK, 1)
else:
fcntl.flock(file_handle.fileno(), fcntl.LOCK_EX)
@staticmethod
def _release_lock(file_handle: BinaryIO) -> None:
"""
Release the lock on a file handle.
Args:
file_handle: Open file handle to unlock.
"""
if sys.platform == "win32":
msvcrt.locking(file_handle.fileno(), msvcrt.LK_UNLCK, 1)
else:
fcntl.flock(file_handle.fileno(), fcntl.LOCK_UN)
def _get_or_create_key(self) -> bytes:
"""
Get or create the encryption key.
Get or create the encryption key with file locking to prevent race conditions.
:return: The encryption key.
Returns:
The encryption key.
"""
key_filename = "secret.key"
key = self.read_secure_file(key_filename)
storage_path = self.get_secure_storage_path()
if key is not None:
key = self.read_secure_file(key_filename)
if key is not None and len(key) == 44:
return key
new_key = Fernet.generate_key()
self.save_secure_file(key_filename, new_key)
return new_key
lock_file_path = storage_path / f"{key_filename}.lock"
try:
lock_file_path.touch()
with open(lock_file_path, "r+b") as lock_file:
self._acquire_lock(lock_file)
try:
key = self.read_secure_file(key_filename)
if key is not None and len(key) == 44:
return key
new_key = Fernet.generate_key()
self.save_secure_file(key_filename, new_key)
return new_key
finally:
try:
self._release_lock(lock_file)
except OSError:
pass
except OSError:
key = self.read_secure_file(key_filename)
if key is not None and len(key) == 44:
return key
new_key = Fernet.generate_key()
self.save_secure_file(key_filename, new_key)
return new_key
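The helpers above implement a cross-platform exclusive lock plus a check-lock-recheck dance around key creation. The same locking pattern as a self-contained context manager (a sketch, not the module's API):

import sys
from contextlib import contextmanager
from typing import BinaryIO, Iterator

if sys.platform == "win32":
    import msvcrt
else:
    import fcntl


@contextmanager
def exclusive_lock(handle: BinaryIO) -> Iterator[None]:
    # Lock one byte on Windows, the whole file elsewhere; both exclude peers.
    if sys.platform == "win32":
        msvcrt.locking(handle.fileno(), msvcrt.LK_LOCK, 1)
    else:
        fcntl.flock(handle.fileno(), fcntl.LOCK_EX)
    try:
        yield
    finally:
        if sys.platform == "win32":
            msvcrt.locking(handle.fileno(), msvcrt.LK_UNLCK, 1)
        else:
            fcntl.flock(handle.fileno(), fcntl.LOCK_UN)


with open("demo.lock", "a+b") as f, exclusive_lock(f):
    pass  # critical section: re-check, then create the key only if still missing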
def save_tokens(self, access_token: str, expires_at: int) -> None:
"""
@@ -59,14 +119,14 @@ class TokenManager:
if encrypted_data is None:
return None
decrypted_data = self.fernet.decrypt(encrypted_data) # type: ignore
decrypted_data = self.fernet.decrypt(encrypted_data)
data = json.loads(decrypted_data)
expiration = datetime.fromisoformat(data["expiration"])
if expiration <= datetime.now():
return None
return data["access_token"]
return cast(str | None, data["access_token"])
def clear_tokens(self) -> None:
"""
@@ -74,20 +134,18 @@ class TokenManager:
"""
self.delete_secure_file(self.file_path)
def get_secure_storage_path(self) -> Path:
@staticmethod
def get_secure_storage_path() -> Path:
"""
Get the secure storage path based on the operating system.
:return: The secure storage path.
"""
if sys.platform == "win32":
# Windows: Use %LOCALAPPDATA%
base_path = os.environ.get("LOCALAPPDATA")
elif sys.platform == "darwin":
# macOS: Use ~/Library/Application Support
base_path = os.path.expanduser("~/Library/Application Support")
else:
# Linux and other Unix-like: Use ~/.local/share
base_path = os.path.expanduser("~/.local/share")
app_name = "crewai/credentials"
@@ -110,7 +168,6 @@ class TokenManager:
with open(file_path, "wb") as f:
f.write(content)
# Set appropriate permissions (read/write for owner only)
os.chmod(file_path, 0o600)
def read_secure_file(self, filename: str) -> bytes | None:

View File

@@ -5,7 +5,7 @@ description = "{{name}} using crewAI"
authors = [{ name = "Your Name", email = "you@example.com" }]
requires-python = ">=3.10,<3.14"
dependencies = [
"crewai[tools]==1.2.1"
"crewai[tools]==1.4.0"
]
[project.scripts]

View File

@@ -5,7 +5,7 @@ description = "{{name}} using crewAI"
authors = [{ name = "Your Name", email = "you@example.com" }]
requires-python = ">=3.10,<3.14"
dependencies = [
"crewai[tools]==1.2.1"
"crewai[tools]==1.4.0"
]
[project.scripts]

View File

@@ -27,6 +27,7 @@ from pydantic import (
model_validator,
)
from pydantic_core import PydanticCustomError
from typing_extensions import Self
from crewai.agent import Agent
from crewai.agents.agent_builder.base_agent import BaseAgent
@@ -70,7 +71,7 @@ from crewai.task import Task
from crewai.tasks.conditional_task import ConditionalTask
from crewai.tasks.task_output import TaskOutput
from crewai.tools.agent_tools.agent_tools import AgentTools
from crewai.tools.base_tool import BaseTool, Tool
from crewai.tools.base_tool import BaseTool
from crewai.types.usage_metrics import UsageMetrics
from crewai.utilities.constants import NOT_SPECIFIED, TRAINING_DATA_FILE
from crewai.utilities.crew.models import CrewContext
@@ -81,7 +82,7 @@ from crewai.utilities.formatter import (
aggregate_raw_outputs_from_task_outputs,
aggregate_raw_outputs_from_tasks,
)
from crewai.utilities.i18n import I18N
from crewai.utilities.i18n import get_i18n
from crewai.utilities.llm_utils import create_llm
from crewai.utilities.logger import Logger
from crewai.utilities.planning_handler import CrewPlanner
@@ -195,7 +196,7 @@ class Crew(FlowTrackable, BaseModel):
function_calling_llm: str | InstanceOf[LLM] | Any | None = Field(
description="Language model that will run the agent.", default=None
)
config: Json | dict[str, Any] | None = Field(default=None)
config: Json[dict[str, Any]] | dict[str, Any] | None = Field(default=None)
id: UUID4 = Field(default_factory=uuid.uuid4, frozen=True)
share_crew: bool | None = Field(default=False)
step_callback: Any | None = Field(
@@ -294,7 +295,9 @@ class Crew(FlowTrackable, BaseModel):
@field_validator("config", mode="before")
@classmethod
def check_config_type(cls, v: Json | dict[str, Any]) -> Json | dict[str, Any]:
def check_config_type(
cls, v: Json[dict[str, Any]] | dict[str, Any]
) -> dict[str, Any]:
"""Validates that the config is a valid type.
Args:
v: The config to be validated.
@@ -310,7 +313,7 @@ class Crew(FlowTrackable, BaseModel):
"""set private attributes."""
self._cache_handler = CacheHandler()
event_listener = EventListener()
event_listener = EventListener() # type: ignore[no-untyped-call]
if (
is_tracing_enabled()
@@ -330,13 +333,13 @@ class Crew(FlowTrackable, BaseModel):
return self
def _initialize_default_memories(self):
self._long_term_memory = self._long_term_memory or LongTermMemory()
self._short_term_memory = self._short_term_memory or ShortTermMemory(
def _initialize_default_memories(self) -> None:
self._long_term_memory = self._long_term_memory or LongTermMemory() # type: ignore[no-untyped-call]
self._short_term_memory = self._short_term_memory or ShortTermMemory( # type: ignore[no-untyped-call]
crew=self,
embedder_config=self.embedder,
)
self._entity_memory = self.entity_memory or EntityMemory(
self._entity_memory = self.entity_memory or EntityMemory( # type: ignore[no-untyped-call]
crew=self, embedder_config=self.embedder
)
@@ -380,7 +383,7 @@ class Crew(FlowTrackable, BaseModel):
return self
@model_validator(mode="after")
def check_manager_llm(self):
def check_manager_llm(self) -> Self:
"""Validates that the language model is set when using hierarchical process."""
if self.process == Process.hierarchical:
if not self.manager_llm and not self.manager_agent:
@@ -405,7 +408,7 @@ class Crew(FlowTrackable, BaseModel):
return self
@model_validator(mode="after")
def check_config(self):
def check_config(self) -> Self:
"""Validates that the crew is properly configured with agents and tasks."""
if not self.config and not self.tasks and not self.agents:
raise PydanticCustomError(
@@ -426,23 +429,20 @@ class Crew(FlowTrackable, BaseModel):
return self
@model_validator(mode="after")
def validate_tasks(self):
def validate_tasks(self) -> Self:
if self.process == Process.sequential:
for task in self.tasks:
if task.agent is None:
raise PydanticCustomError(
"missing_agent_in_task",
(
f"Sequential process error: Agent is missing in the task "
f"with the following description: {task.description}"
), # type: ignore # Dynamic string in error message
{},
"Sequential process error: Agent is missing in the task with the following description: {description}",
{"description": task.description},
)
return self
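The rewrite above moves the dynamic description out of an f-string and into PydanticCustomError's context dict, which interpolates {description} into a static message template. In isolation:

from pydantic_core import PydanticCustomError

try:
    raise PydanticCustomError(
        "missing_agent_in_task",
        "Sequential process error: Agent is missing in the task with the following description: {description}",
        {"description": "Summarize the quarterly report"},
    )
except PydanticCustomError as exc:
    print(exc.message())  # template rendered with the context values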
@model_validator(mode="after")
def validate_end_with_at_most_one_async_task(self):
def validate_end_with_at_most_one_async_task(self) -> Self:
"""Validates that the crew ends with at most one asynchronous task."""
final_async_task_count = 0
@@ -505,7 +505,9 @@ class Crew(FlowTrackable, BaseModel):
return self
@model_validator(mode="after")
def validate_async_task_cannot_include_sequential_async_tasks_in_context(self):
def validate_async_task_cannot_include_sequential_async_tasks_in_context(
self,
) -> Self:
"""
Validates that if a task is set to be executed asynchronously,
it cannot include other asynchronous tasks in its context unless
@@ -527,7 +529,7 @@ class Crew(FlowTrackable, BaseModel):
return self
@model_validator(mode="after")
def validate_context_no_future_tasks(self):
def validate_context_no_future_tasks(self) -> Self:
"""Validates that a task's context does not include future tasks."""
task_indices = {id(task): i for i, task in enumerate(self.tasks)}
@@ -561,7 +563,7 @@ class Crew(FlowTrackable, BaseModel):
"""
return self.security_config.fingerprint
def _setup_from_config(self):
def _setup_from_config(self) -> None:
"""Initializes agents and tasks from the provided config."""
if self.config is None:
raise ValueError("Config should not be None.")
@@ -628,12 +630,12 @@ class Crew(FlowTrackable, BaseModel):
for agent in train_crew.agents:
if training_data.get(str(agent.id)):
result = TaskEvaluator(agent).evaluate_training_data(
result = TaskEvaluator(agent).evaluate_training_data( # type: ignore[arg-type]
training_data=training_data, agent_id=str(agent.id)
)
CrewTrainingHandler(filename).save_trained_data(
agent_id=str(agent.role),
trained_data=result.model_dump(), # type: ignore[arg-type]
trained_data=result.model_dump(),
)
crewai_event_bus.emit(
@@ -684,12 +686,8 @@ class Crew(FlowTrackable, BaseModel):
self._set_tasks_callbacks()
self._set_allow_crewai_trigger_context_for_first_task()
i18n = I18N(prompt_file=self.prompt_file)
for agent in self.agents:
agent.i18n = i18n
# type: ignore[attr-defined] # Argument 1 to "_interpolate_inputs" of "Crew" has incompatible type "dict[str, Any] | None"; expected "dict[str, Any]"
agent.crew = self # type: ignore[attr-defined]
agent.crew = self
agent.set_knowledge(crew_embedder=self.embedder)
# TODO: Create an AgentFunctionCalling protocol for future refactoring
if not agent.function_calling_llm: # type: ignore # "BaseAgent" has no attribute "function_calling_llm"
@@ -753,10 +751,12 @@ class Crew(FlowTrackable, BaseModel):
inputs = inputs or {}
return await asyncio.to_thread(self.kickoff, inputs)
async def kickoff_for_each_async(self, inputs: list[dict]) -> list[CrewOutput]:
async def kickoff_for_each_async(
self, inputs: list[dict[str, Any]]
) -> list[CrewOutput]:
crew_copies = [self.copy() for _ in inputs]
async def run_crew(crew, input_data):
async def run_crew(crew: Self, input_data: Any) -> CrewOutput:
return await crew.kickoff_async(inputs=input_data)
tasks = [
@@ -775,7 +775,7 @@ class Crew(FlowTrackable, BaseModel):
self._task_output_handler.reset()
return results
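The copy-per-input fan-out in kickoff_for_each_async, reduced to its skeleton (ToyCrew is a stand-in; the real method returns CrewOutput objects):

import asyncio
from typing import Any


class ToyCrew:
    """Minimal stand-in for Crew."""

    async def kickoff_async(self, inputs: dict[str, Any]) -> str:
        await asyncio.sleep(0)  # pretend to do work
        return f"ran with {inputs}"

    def copy(self) -> "ToyCrew":
        return ToyCrew()  # each input gets an isolated copy


async def kickoff_for_each_async(crew: ToyCrew, inputs: list[dict[str, Any]]) -> list[str]:
    copies = [crew.copy() for _ in inputs]
    return await asyncio.gather(
        *(c.kickoff_async(inputs=data) for c, data in zip(copies, inputs))
    )


print(asyncio.run(kickoff_for_each_async(ToyCrew(), [{"x": 1}, {"x": 2}])))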
def _handle_crew_planning(self):
def _handle_crew_planning(self) -> None:
"""Handles the Crew planning."""
self._logger.log("info", "Planning the crew execution")
result = CrewPlanner(
@@ -793,7 +793,7 @@ class Crew(FlowTrackable, BaseModel):
output: TaskOutput,
task_index: int,
was_replayed: bool = False,
):
) -> None:
if self._inputs:
inputs = self._inputs
else:
@@ -825,19 +825,21 @@ class Crew(FlowTrackable, BaseModel):
self._create_manager_agent()
return self._execute_tasks(self.tasks)
def _create_manager_agent(self):
i18n = I18N(prompt_file=self.prompt_file)
def _create_manager_agent(self) -> None:
if self.manager_agent is not None:
self.manager_agent.allow_delegation = True
manager = self.manager_agent
if manager.tools is not None and len(manager.tools) > 0:
self._logger.log(
"warning", "Manager agent should not have tools", color="orange"
"warning",
"Manager agent should not have tools",
color="bold_yellow",
)
manager.tools = []
raise Exception("Manager agent should not have tools")
else:
self.manager_llm = create_llm(self.manager_llm)
i18n = get_i18n(prompt_file=self.prompt_file)
manager = Agent(
role=i18n.retrieve("hierarchical_manager_agent", "role"),
goal=i18n.retrieve("hierarchical_manager_agent", "goal"),
@@ -895,7 +897,7 @@ class Crew(FlowTrackable, BaseModel):
tools_for_task = self._prepare_tools(
agent_to_use,
task,
cast(list[Tool] | list[BaseTool], tools_for_task),
tools_for_task,
)
self._log_task_start(task, agent_to_use.role)
@@ -915,7 +917,7 @@ class Crew(FlowTrackable, BaseModel):
future = task.execute_async(
agent=agent_to_use,
context=context,
tools=cast(list[BaseTool], tools_for_task),
tools=tools_for_task,
)
futures.append((task, future, task_index))
else:
@@ -927,7 +929,7 @@ class Crew(FlowTrackable, BaseModel):
task_output = task.execute_sync(
agent=agent_to_use,
context=context,
tools=cast(list[BaseTool], tools_for_task),
tools=tools_for_task,
)
task_outputs.append(task_output)
self._process_task_result(task, task_output)
@@ -965,7 +967,7 @@ class Crew(FlowTrackable, BaseModel):
return None
def _prepare_tools(
self, agent: BaseAgent, task: Task, tools: list[Tool] | list[BaseTool]
self, agent: BaseAgent, task: Task, tools: list[BaseTool]
) -> list[BaseTool]:
# Add delegation tools if agent allows delegation
if hasattr(agent, "allow_delegation") and getattr(
@@ -1002,21 +1004,21 @@ class Crew(FlowTrackable, BaseModel):
tools = self._add_mcp_tools(task, tools)
# Return a list[BaseTool] compatible with Task.execute_sync and execute_async
return cast(list[BaseTool], tools)
return tools
def _get_agent_to_use(self, task: Task) -> BaseAgent | None:
if self.process == Process.hierarchical:
return self.manager_agent
return task.agent
@staticmethod
def _merge_tools(
self,
existing_tools: list[Tool] | list[BaseTool],
new_tools: list[Tool] | list[BaseTool],
existing_tools: list[BaseTool],
new_tools: list[BaseTool],
) -> list[BaseTool]:
"""Merge new tools into existing tools list, avoiding duplicates."""
if not new_tools:
return cast(list[BaseTool], existing_tools)
return existing_tools
# Create mapping of tool names to new tools
new_tool_map = {tool.name: tool for tool in new_tools}
@@ -1027,63 +1029,62 @@ class Crew(FlowTrackable, BaseModel):
# Add all new tools
tools.extend(new_tools)
return cast(list[BaseTool], tools)
return tools
def _inject_delegation_tools(
self,
tools: list[Tool] | list[BaseTool],
tools: list[BaseTool],
task_agent: BaseAgent,
agents: list[BaseAgent],
) -> list[BaseTool]:
if hasattr(task_agent, "get_delegation_tools"):
delegation_tools = task_agent.get_delegation_tools(agents)
# Cast delegation_tools to the expected type for _merge_tools
return self._merge_tools(tools, cast(list[BaseTool], delegation_tools))
return cast(list[BaseTool], tools)
return self._merge_tools(tools, delegation_tools)
return tools
def _inject_platform_tools(
self,
tools: list[Tool] | list[BaseTool],
tools: list[BaseTool],
task_agent: BaseAgent,
) -> list[BaseTool]:
apps = getattr(task_agent, "apps", None) or []
if hasattr(task_agent, "get_platform_tools") and apps:
platform_tools = task_agent.get_platform_tools(apps=apps)
return self._merge_tools(tools, cast(list[BaseTool], platform_tools))
return cast(list[BaseTool], tools)
return self._merge_tools(tools, platform_tools)
return tools
def _inject_mcp_tools(
self,
tools: list[Tool] | list[BaseTool],
tools: list[BaseTool],
task_agent: BaseAgent,
) -> list[BaseTool]:
mcps = getattr(task_agent, "mcps", None) or []
if hasattr(task_agent, "get_mcp_tools") and mcps:
mcp_tools = task_agent.get_mcp_tools(mcps=mcps)
return self._merge_tools(tools, cast(list[BaseTool], mcp_tools))
return cast(list[BaseTool], tools)
return self._merge_tools(tools, mcp_tools)
return tools
def _add_multimodal_tools(
self, agent: BaseAgent, tools: list[Tool] | list[BaseTool]
self, agent: BaseAgent, tools: list[BaseTool]
) -> list[BaseTool]:
if hasattr(agent, "get_multimodal_tools"):
multimodal_tools = agent.get_multimodal_tools()
# Cast multimodal_tools to the expected type for _merge_tools
return self._merge_tools(tools, cast(list[BaseTool], multimodal_tools))
return cast(list[BaseTool], tools)
return tools
def _add_code_execution_tools(
self, agent: BaseAgent, tools: list[Tool] | list[BaseTool]
self, agent: BaseAgent, tools: list[BaseTool]
) -> list[BaseTool]:
if hasattr(agent, "get_code_execution_tools"):
code_tools = agent.get_code_execution_tools()
# Cast code_tools to the expected type for _merge_tools
return self._merge_tools(tools, cast(list[BaseTool], code_tools))
return cast(list[BaseTool], tools)
return tools
def _add_delegation_tools(
self, task: Task, tools: list[Tool] | list[BaseTool]
self, task: Task, tools: list[BaseTool]
) -> list[BaseTool]:
agents_for_delegation = [agent for agent in self.agents if agent != task.agent]
if len(self.agents) > 1 and len(agents_for_delegation) > 0 and task.agent:
@@ -1092,25 +1093,21 @@ class Crew(FlowTrackable, BaseModel):
tools = self._inject_delegation_tools(
tools, task.agent, agents_for_delegation
)
return cast(list[BaseTool], tools)
return tools
def _add_platform_tools(
self, task: Task, tools: list[Tool] | list[BaseTool]
) -> list[BaseTool]:
def _add_platform_tools(self, task: Task, tools: list[BaseTool]) -> list[BaseTool]:
if task.agent:
tools = self._inject_platform_tools(tools, task.agent)
return cast(list[BaseTool], tools or [])
return tools or []
def _add_mcp_tools(
self, task: Task, tools: list[Tool] | list[BaseTool]
) -> list[BaseTool]:
def _add_mcp_tools(self, task: Task, tools: list[BaseTool]) -> list[BaseTool]:
if task.agent:
tools = self._inject_mcp_tools(tools, task.agent)
return cast(list[BaseTool], tools or [])
return tools or []
def _log_task_start(self, task: Task, role: str = "None"):
def _log_task_start(self, task: Task, role: str = "None") -> None:
if self.output_log_file:
self._file_handler.log(
task_name=task.name, # type: ignore[arg-type]
@@ -1120,7 +1117,7 @@ class Crew(FlowTrackable, BaseModel):
)
def _update_manager_tools(
self, task: Task, tools: list[Tool] | list[BaseTool]
self, task: Task, tools: list[BaseTool]
) -> list[BaseTool]:
if self.manager_agent:
if task.agent:
@@ -1129,7 +1126,7 @@ class Crew(FlowTrackable, BaseModel):
tools = self._inject_delegation_tools(
tools, self.manager_agent, self.agents
)
return cast(list[BaseTool], tools)
return tools
def _get_context(self, task: Task, task_outputs: list[TaskOutput]) -> str:
if not task.context:
@@ -1280,7 +1277,7 @@ class Crew(FlowTrackable, BaseModel):
return required_inputs
def copy(self):
def copy(self) -> Crew: # type: ignore[override]
"""
Creates a deep copy of the Crew instance.
@@ -1311,7 +1308,7 @@ class Crew(FlowTrackable, BaseModel):
manager_agent = self.manager_agent.copy() if self.manager_agent else None
manager_llm = shallow_copy(self.manager_llm) if self.manager_llm else None
task_mapping = {}
task_mapping: dict[str, Any] = {}
cloned_tasks = []
existing_knowledge_sources = shallow_copy(self.knowledge_sources)
@@ -1373,7 +1370,6 @@ class Crew(FlowTrackable, BaseModel):
)
for task in self.tasks
]
# type: ignore # "interpolate_inputs" of "Agent" does not return a value (it only ever returns None)
for agent in self.agents:
agent.interpolate_inputs(inputs)
@@ -1463,7 +1459,7 @@ class Crew(FlowTrackable, BaseModel):
)
raise
def __repr__(self):
def __repr__(self) -> str:
return (
f"Crew(id={self.id}, process={self.process}, "
f"number_of_agents={len(self.agents)}, "
@@ -1520,7 +1516,9 @@ class Crew(FlowTrackable, BaseModel):
if (system := config.get("system")) is not None:
name = config.get("name")
try:
reset_fn: Callable = cast(Callable, config.get("reset"))
reset_fn: Callable[[Any], Any] = cast(
Callable[[Any], Any], config.get("reset")
)
reset_fn(system)
self._logger.log(
"info",
@@ -1551,7 +1549,9 @@ class Crew(FlowTrackable, BaseModel):
raise RuntimeError(f"{name} memory system is not initialized")
try:
reset_fn: Callable = cast(Callable, config.get("reset"))
reset_fn: Callable[[Any], Any] = cast(
Callable[[Any], Any], config.get("reset")
)
reset_fn(system)
self._logger.log(
"info",
@@ -1564,7 +1564,7 @@ class Crew(FlowTrackable, BaseModel):
f"Failed to reset {name} memory: {e!s}"
) from e
def _get_memory_systems(self):
def _get_memory_systems(self) -> dict[str, Any]:
"""Get all available memory systems with their configuration.
Returns:
@@ -1572,10 +1572,10 @@ class Crew(FlowTrackable, BaseModel):
display names.
"""
def default_reset(memory):
def default_reset(memory: Any) -> Any:
return memory.reset()
def knowledge_reset(memory):
def knowledge_reset(memory: Any) -> Any:
return self.reset_knowledge(memory)
# Get knowledge for agents
@@ -1635,7 +1635,7 @@ class Crew(FlowTrackable, BaseModel):
for ks in knowledges:
ks.reset()
def _set_allow_crewai_trigger_context_for_first_task(self):
def _set_allow_crewai_trigger_context_for_first_task(self) -> None:
crewai_trigger_payload = self._inputs and self._inputs.get(
"crewai_trigger_payload"
)

View File

@@ -8,21 +8,14 @@ This module provides the event infrastructure that allows users to:
- Declare handler dependencies for ordered execution
"""
from __future__ import annotations
from typing import TYPE_CHECKING
from crewai.events.base_event_listener import BaseEventListener
from crewai.events.depends import Depends
from crewai.events.event_bus import crewai_event_bus
from crewai.events.handler_graph import CircularDependencyError
from crewai.events.types.agent_events import (
AgentEvaluationCompletedEvent,
AgentEvaluationFailedEvent,
AgentEvaluationStartedEvent,
AgentExecutionCompletedEvent,
AgentExecutionErrorEvent,
AgentExecutionStartedEvent,
LiteAgentExecutionCompletedEvent,
LiteAgentExecutionErrorEvent,
LiteAgentExecutionStartedEvent,
)
from crewai.events.types.crew_events import (
CrewKickoffCompletedEvent,
CrewKickoffFailedEvent,
@@ -67,6 +60,14 @@ from crewai.events.types.logging_events import (
AgentLogsExecutionEvent,
AgentLogsStartedEvent,
)
from crewai.events.types.mcp_events import (
MCPConnectionCompletedEvent,
MCPConnectionFailedEvent,
MCPConnectionStartedEvent,
MCPToolExecutionCompletedEvent,
MCPToolExecutionFailedEvent,
MCPToolExecutionStartedEvent,
)
from crewai.events.types.memory_events import (
MemoryQueryCompletedEvent,
MemoryQueryFailedEvent,
@@ -100,6 +101,20 @@ from crewai.events.types.tool_usage_events import (
)
if TYPE_CHECKING:
from crewai.events.types.agent_events import (
AgentEvaluationCompletedEvent,
AgentEvaluationFailedEvent,
AgentEvaluationStartedEvent,
AgentExecutionCompletedEvent,
AgentExecutionErrorEvent,
AgentExecutionStartedEvent,
LiteAgentExecutionCompletedEvent,
LiteAgentExecutionErrorEvent,
LiteAgentExecutionStartedEvent,
)
__all__ = [
"AgentEvaluationCompletedEvent",
"AgentEvaluationFailedEvent",
@@ -145,6 +160,12 @@ __all__ = [
"LiteAgentExecutionCompletedEvent",
"LiteAgentExecutionErrorEvent",
"LiteAgentExecutionStartedEvent",
"MCPConnectionCompletedEvent",
"MCPConnectionFailedEvent",
"MCPConnectionStartedEvent",
"MCPToolExecutionCompletedEvent",
"MCPToolExecutionFailedEvent",
"MCPToolExecutionStartedEvent",
"MemoryQueryCompletedEvent",
"MemoryQueryFailedEvent",
"MemoryQueryStartedEvent",
@@ -170,3 +191,27 @@ __all__ = [
"ToolValidateInputErrorEvent",
"crewai_event_bus",
]
_AGENT_EVENT_MAPPING = {
"AgentEvaluationCompletedEvent": "crewai.events.types.agent_events",
"AgentEvaluationFailedEvent": "crewai.events.types.agent_events",
"AgentEvaluationStartedEvent": "crewai.events.types.agent_events",
"AgentExecutionCompletedEvent": "crewai.events.types.agent_events",
"AgentExecutionErrorEvent": "crewai.events.types.agent_events",
"AgentExecutionStartedEvent": "crewai.events.types.agent_events",
"LiteAgentExecutionCompletedEvent": "crewai.events.types.agent_events",
"LiteAgentExecutionErrorEvent": "crewai.events.types.agent_events",
"LiteAgentExecutionStartedEvent": "crewai.events.types.agent_events",
}
def __getattr__(name: str):
"""Lazy import for agent events to avoid circular imports."""
if name in _AGENT_EVENT_MAPPING:
import importlib
module_path = _AGENT_EVENT_MAPPING[name]
module = importlib.import_module(module_path)
return getattr(module, name)
msg = f"module {__name__!r} has no attribute {name!r}"
raise AttributeError(msg)
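The _AGENT_EVENT_MAPPING plus module-level __getattr__ above is the PEP 562 lazy-import idiom; the same mechanism in a stripped-down module:

# lazy_demo.py
import importlib

_LAZY = {"sqrt": "math"}  # attribute name -> defining module


def __getattr__(name: str):
    if name in _LAZY:
        return getattr(importlib.import_module(_LAZY[name]), name)
    raise AttributeError(f"module {__name__!r} has no attribute {name!r}")

# elsewhere: `from lazy_demo import sqrt` imports math only on first access,
# which is how the agent events above dodge a circular import at module load.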

View File

@@ -1,16 +1,26 @@
"""Base event listener for CrewAI event system."""
from abc import ABC, abstractmethod
from crewai.events.event_bus import CrewAIEventsBus, crewai_event_bus
class BaseEventListener(ABC):
"""Abstract base class for event listeners."""
verbose: bool = False
def __init__(self):
def __init__(self) -> None:
"""Initialize the event listener and register handlers."""
super().__init__()
self.setup_listeners(crewai_event_bus)
crewai_event_bus.validate_dependencies()
@abstractmethod
def setup_listeners(self, crewai_event_bus: CrewAIEventsBus):
def setup_listeners(self, crewai_event_bus: CrewAIEventsBus) -> None:
"""Setup event listeners on the event bus.
Args:
crewai_event_bus: The event bus to register listeners on.
"""
pass
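A concrete listener built on the abstract base above; the handler body is illustrative, but the imports and event type appear elsewhere in this changeset:

from crewai.events.base_event_listener import BaseEventListener
from crewai.events.event_bus import CrewAIEventsBus
from crewai.events.types.crew_events import CrewKickoffStartedEvent


class MyListener(BaseEventListener):
    def setup_listeners(self, crewai_event_bus: CrewAIEventsBus) -> None:
        @crewai_event_bus.on(CrewKickoffStartedEvent)
        def on_started(source, event: CrewKickoffStartedEvent) -> None:
            print(f"crew {event.crew_name} started")


MyListener()  # __init__ registers the handlers and validates dependencies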

View File

@@ -1,12 +1,21 @@
from __future__ import annotations
from io import StringIO
from typing import Any
import threading
from typing import TYPE_CHECKING, Any
from pydantic import Field, PrivateAttr
from crewai.events.base_event_listener import BaseEventListener
from crewai.events.listeners.memory_listener import MemoryListener
from crewai.events.listeners.tracing.trace_listener import TraceCollectionListener
from crewai.events.types.a2a_events import (
A2AConversationCompletedEvent,
A2AConversationStartedEvent,
A2ADelegationCompletedEvent,
A2ADelegationStartedEvent,
A2AMessageSentEvent,
A2AResponseReceivedEvent,
)
from crewai.events.types.agent_events import (
AgentExecutionCompletedEvent,
AgentExecutionStartedEvent,
@@ -56,6 +65,14 @@ from crewai.events.types.logging_events import (
AgentLogsExecutionEvent,
AgentLogsStartedEvent,
)
from crewai.events.types.mcp_events import (
MCPConnectionCompletedEvent,
MCPConnectionFailedEvent,
MCPConnectionStartedEvent,
MCPToolExecutionCompletedEvent,
MCPToolExecutionFailedEvent,
MCPToolExecutionStartedEvent,
)
from crewai.events.types.reasoning_events import (
AgentReasoningCompletedEvent,
AgentReasoningFailedEvent,
@@ -79,6 +96,10 @@ from crewai.utilities import Logger
from crewai.utilities.constants import EMITTER_COLOR
if TYPE_CHECKING:
from crewai.events.event_bus import CrewAIEventsBus
class EventListener(BaseEventListener):
_instance = None
_telemetry: Telemetry = PrivateAttr(default_factory=lambda: Telemetry())
@@ -88,6 +109,7 @@ class EventListener(BaseEventListener):
text_stream = StringIO()
knowledge_retrieval_in_progress = False
knowledge_query_in_progress = False
method_branches: dict[str, Any] = Field(default_factory=dict)
def __new__(cls):
if cls._instance is None:
@@ -101,21 +123,27 @@ class EventListener(BaseEventListener):
self._telemetry = Telemetry()
self._telemetry.set_tracer()
self.execution_spans = {}
self.method_branches = {}
self._initialized = True
self.formatter = ConsoleFormatter(verbose=True)
self._crew_tree_lock = threading.Condition()
MemoryListener(formatter=self.formatter)
# Initialize trace listener with formatter for memory event handling
trace_listener = TraceCollectionListener()
trace_listener.formatter = self.formatter
# ----------- CREW EVENTS -----------
def setup_listeners(self, crewai_event_bus):
def setup_listeners(self, crewai_event_bus: CrewAIEventsBus) -> None:
@crewai_event_bus.on(CrewKickoffStartedEvent)
def on_crew_started(source, event: CrewKickoffStartedEvent):
self.formatter.create_crew_tree(event.crew_name or "Crew", source.id)
self._telemetry.crew_execution_span(source, event.inputs)
def on_crew_started(source, event: CrewKickoffStartedEvent) -> None:
with self._crew_tree_lock:
self.formatter.create_crew_tree(event.crew_name or "Crew", source.id)
self._telemetry.crew_execution_span(source, event.inputs)
self._crew_tree_lock.notify_all()
@crewai_event_bus.on(CrewKickoffCompletedEvent)
def on_crew_completed(source, event: CrewKickoffCompletedEvent):
def on_crew_completed(source, event: CrewKickoffCompletedEvent) -> None:
# Handle telemetry
final_string_output = event.output.raw
self._telemetry.end_crew(source, final_string_output)
@@ -129,7 +157,7 @@ class EventListener(BaseEventListener):
)
@crewai_event_bus.on(CrewKickoffFailedEvent)
def on_crew_failed(source, event: CrewKickoffFailedEvent):
def on_crew_failed(source, event: CrewKickoffFailedEvent) -> None:
self.formatter.update_crew_tree(
self.formatter.current_crew_tree,
event.crew_name or "Crew",
@@ -138,23 +166,23 @@ class EventListener(BaseEventListener):
)
@crewai_event_bus.on(CrewTrainStartedEvent)
def on_crew_train_started(source, event: CrewTrainStartedEvent):
def on_crew_train_started(source, event: CrewTrainStartedEvent) -> None:
self.formatter.handle_crew_train_started(
event.crew_name or "Crew", str(event.timestamp)
)
@crewai_event_bus.on(CrewTrainCompletedEvent)
def on_crew_train_completed(source, event: CrewTrainCompletedEvent):
def on_crew_train_completed(source, event: CrewTrainCompletedEvent) -> None:
self.formatter.handle_crew_train_completed(
event.crew_name or "Crew", str(event.timestamp)
)
@crewai_event_bus.on(CrewTrainFailedEvent)
def on_crew_train_failed(source, event: CrewTrainFailedEvent):
def on_crew_train_failed(source, event: CrewTrainFailedEvent) -> None:
self.formatter.handle_crew_train_failed(event.crew_name or "Crew")
@crewai_event_bus.on(CrewTestResultEvent)
def on_crew_test_result(source, event: CrewTestResultEvent):
def on_crew_test_result(source, event: CrewTestResultEvent) -> None:
self._telemetry.individual_test_result_span(
source.crew,
event.quality,
@@ -165,14 +193,22 @@ class EventListener(BaseEventListener):
# ----------- TASK EVENTS -----------
@crewai_event_bus.on(TaskStartedEvent)
def on_task_started(source, event: TaskStartedEvent):
def on_task_started(source, event: TaskStartedEvent) -> None:
span = self._telemetry.task_started(crew=source.agent.crew, task=source)
self.execution_spans[source] = span
# Pass both task ID and task name (if set)
task_name = source.name if hasattr(source, "name") and source.name else None
self.formatter.create_task_branch(
self.formatter.current_crew_tree, source.id, task_name
)
with self._crew_tree_lock:
self._crew_tree_lock.wait_for(
lambda: self.formatter.current_crew_tree is not None, timeout=5.0
)
if self.formatter.current_crew_tree is not None:
task_name = (
source.name if hasattr(source, "name") and source.name else None
)
self.formatter.create_task_branch(
self.formatter.current_crew_tree, source.id, task_name
)
@crewai_event_bus.on(TaskCompletedEvent)
def on_task_completed(source, event: TaskCompletedEvent):
@@ -263,7 +299,8 @@ class EventListener(BaseEventListener):
@crewai_event_bus.on(FlowCreatedEvent)
def on_flow_created(source, event: FlowCreatedEvent):
self._telemetry.flow_creation_span(event.flow_name)
self.formatter.create_flow_tree(event.flow_name, str(source.flow_id))
tree = self.formatter.create_flow_tree(event.flow_name, str(source.flow_id))
self.formatter.current_flow_tree = tree
@crewai_event_bus.on(FlowStartedEvent)
def on_flow_started(source, event: FlowStartedEvent):
@@ -280,30 +317,36 @@ class EventListener(BaseEventListener):
@crewai_event_bus.on(MethodExecutionStartedEvent)
def on_method_execution_started(source, event: MethodExecutionStartedEvent):
self.formatter.update_method_status(
self.formatter.current_method_branch,
method_branch = self.method_branches.get(event.method_name)
updated_branch = self.formatter.update_method_status(
method_branch,
self.formatter.current_flow_tree,
event.method_name,
"running",
)
self.method_branches[event.method_name] = updated_branch
@crewai_event_bus.on(MethodExecutionFinishedEvent)
def on_method_execution_finished(source, event: MethodExecutionFinishedEvent):
self.formatter.update_method_status(
self.formatter.current_method_branch,
method_branch = self.method_branches.get(event.method_name)
updated_branch = self.formatter.update_method_status(
method_branch,
self.formatter.current_flow_tree,
event.method_name,
"completed",
)
self.method_branches[event.method_name] = updated_branch
@crewai_event_bus.on(MethodExecutionFailedEvent)
def on_method_execution_failed(source, event: MethodExecutionFailedEvent):
self.formatter.update_method_status(
self.formatter.current_method_branch,
method_branch = self.method_branches.get(event.method_name)
updated_branch = self.formatter.update_method_status(
method_branch,
self.formatter.current_flow_tree,
event.method_name,
"failed",
)
self.method_branches[event.method_name] = updated_branch
# ----------- TOOL USAGE EVENTS -----------
@@ -524,5 +567,123 @@ class EventListener(BaseEventListener):
event.verbose,
)
@crewai_event_bus.on(A2ADelegationStartedEvent)
def on_a2a_delegation_started(source, event: A2ADelegationStartedEvent):
self.formatter.handle_a2a_delegation_started(
event.endpoint,
event.task_description,
event.agent_id,
event.is_multiturn,
event.turn_number,
)
@crewai_event_bus.on(A2ADelegationCompletedEvent)
def on_a2a_delegation_completed(source, event: A2ADelegationCompletedEvent):
self.formatter.handle_a2a_delegation_completed(
event.status,
event.result,
event.error,
event.is_multiturn,
)
@crewai_event_bus.on(A2AConversationStartedEvent)
def on_a2a_conversation_started(source, event: A2AConversationStartedEvent):
# Store A2A agent name for display in conversation tree
if event.a2a_agent_name:
self.formatter._current_a2a_agent_name = event.a2a_agent_name
self.formatter.handle_a2a_conversation_started(
event.agent_id,
event.endpoint,
)
@crewai_event_bus.on(A2AMessageSentEvent)
def on_a2a_message_sent(source, event: A2AMessageSentEvent):
self.formatter.handle_a2a_message_sent(
event.message,
event.turn_number,
event.agent_role,
)
@crewai_event_bus.on(A2AResponseReceivedEvent)
def on_a2a_response_received(source, event: A2AResponseReceivedEvent):
self.formatter.handle_a2a_response_received(
event.response,
event.turn_number,
event.status,
event.agent_role,
)
@crewai_event_bus.on(A2AConversationCompletedEvent)
def on_a2a_conversation_completed(source, event: A2AConversationCompletedEvent):
self.formatter.handle_a2a_conversation_completed(
event.status,
event.final_result,
event.error,
event.total_turns,
)
# ----------- MCP EVENTS -----------
@crewai_event_bus.on(MCPConnectionStartedEvent)
def on_mcp_connection_started(source, event: MCPConnectionStartedEvent):
self.formatter.handle_mcp_connection_started(
event.server_name,
event.server_url,
event.transport_type,
event.is_reconnect,
event.connect_timeout,
)
@crewai_event_bus.on(MCPConnectionCompletedEvent)
def on_mcp_connection_completed(source, event: MCPConnectionCompletedEvent):
self.formatter.handle_mcp_connection_completed(
event.server_name,
event.server_url,
event.transport_type,
event.connection_duration_ms,
event.is_reconnect,
)
@crewai_event_bus.on(MCPConnectionFailedEvent)
def on_mcp_connection_failed(source, event: MCPConnectionFailedEvent):
self.formatter.handle_mcp_connection_failed(
event.server_name,
event.server_url,
event.transport_type,
event.error,
event.error_type,
)
@crewai_event_bus.on(MCPToolExecutionStartedEvent)
def on_mcp_tool_execution_started(source, event: MCPToolExecutionStartedEvent):
self.formatter.handle_mcp_tool_execution_started(
event.server_name,
event.tool_name,
event.tool_args,
)
@crewai_event_bus.on(MCPToolExecutionCompletedEvent)
def on_mcp_tool_execution_completed(
source, event: MCPToolExecutionCompletedEvent
):
self.formatter.handle_mcp_tool_execution_completed(
event.server_name,
event.tool_name,
event.tool_args,
event.result,
event.execution_duration_ms,
)
@crewai_event_bus.on(MCPToolExecutionFailedEvent)
def on_mcp_tool_execution_failed(source, event: MCPToolExecutionFailedEvent):
self.formatter.handle_mcp_tool_execution_failed(
event.server_name,
event.tool_name,
event.tool_args,
event.error,
event.error_type,
)
event_listener = EventListener()
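The _crew_tree_lock added above is a threading.Condition used so task handlers wait until another handler has created the crew tree. That producer/consumer handoff in isolation:

import threading

cond = threading.Condition()
state: dict[str, str | None] = {"tree": None}


def producer() -> None:
    with cond:
        state["tree"] = "crew tree"
        cond.notify_all()  # wake any handler waiting on the predicate


def consumer() -> None:
    with cond:
        if cond.wait_for(lambda: state["tree"] is not None, timeout=5.0):
            print("attached task branch to", state["tree"])


t = threading.Thread(target=consumer)
t.start()
producer()
t.join()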

View File

@@ -40,6 +40,14 @@ from crewai.events.types.llm_guardrail_events import (
LLMGuardrailCompletedEvent,
LLMGuardrailStartedEvent,
)
from crewai.events.types.mcp_events import (
MCPConnectionCompletedEvent,
MCPConnectionFailedEvent,
MCPConnectionStartedEvent,
MCPToolExecutionCompletedEvent,
MCPToolExecutionFailedEvent,
MCPToolExecutionStartedEvent,
)
from crewai.events.types.memory_events import (
MemoryQueryCompletedEvent,
MemoryQueryFailedEvent,
@@ -115,4 +123,10 @@ EventTypes = (
| MemoryQueryFailedEvent
| MemoryRetrievalStartedEvent
| MemoryRetrievalCompletedEvent
| MCPConnectionStartedEvent
| MCPConnectionCompletedEvent
| MCPConnectionFailedEvent
| MCPToolExecutionStartedEvent
| MCPToolExecutionCompletedEvent
| MCPToolExecutionFailedEvent
)

View File

@@ -1,106 +0,0 @@
from crewai.events.base_event_listener import BaseEventListener
from crewai.events.types.memory_events import (
MemoryQueryCompletedEvent,
MemoryQueryFailedEvent,
MemoryRetrievalCompletedEvent,
MemoryRetrievalStartedEvent,
MemorySaveCompletedEvent,
MemorySaveFailedEvent,
MemorySaveStartedEvent,
)
class MemoryListener(BaseEventListener):
def __init__(self, formatter):
super().__init__()
self.formatter = formatter
self.memory_retrieval_in_progress = False
self.memory_save_in_progress = False
def setup_listeners(self, crewai_event_bus):
@crewai_event_bus.on(MemoryRetrievalStartedEvent)
def on_memory_retrieval_started(source, event: MemoryRetrievalStartedEvent):
if self.memory_retrieval_in_progress:
return
self.memory_retrieval_in_progress = True
self.formatter.handle_memory_retrieval_started(
self.formatter.current_agent_branch,
self.formatter.current_crew_tree,
)
@crewai_event_bus.on(MemoryRetrievalCompletedEvent)
def on_memory_retrieval_completed(source, event: MemoryRetrievalCompletedEvent):
if not self.memory_retrieval_in_progress:
return
self.memory_retrieval_in_progress = False
self.formatter.handle_memory_retrieval_completed(
self.formatter.current_agent_branch,
self.formatter.current_crew_tree,
event.memory_content,
event.retrieval_time_ms,
)
@crewai_event_bus.on(MemoryQueryCompletedEvent)
def on_memory_query_completed(source, event: MemoryQueryCompletedEvent):
if not self.memory_retrieval_in_progress:
return
self.formatter.handle_memory_query_completed(
self.formatter.current_agent_branch,
event.source_type,
event.query_time_ms,
self.formatter.current_crew_tree,
)
@crewai_event_bus.on(MemoryQueryFailedEvent)
def on_memory_query_failed(source, event: MemoryQueryFailedEvent):
if not self.memory_retrieval_in_progress:
return
self.formatter.handle_memory_query_failed(
self.formatter.current_agent_branch,
self.formatter.current_crew_tree,
event.error,
event.source_type,
)
@crewai_event_bus.on(MemorySaveStartedEvent)
def on_memory_save_started(source, event: MemorySaveStartedEvent):
if self.memory_save_in_progress:
return
self.memory_save_in_progress = True
self.formatter.handle_memory_save_started(
self.formatter.current_agent_branch,
self.formatter.current_crew_tree,
)
@crewai_event_bus.on(MemorySaveCompletedEvent)
def on_memory_save_completed(source, event: MemorySaveCompletedEvent):
if not self.memory_save_in_progress:
return
self.memory_save_in_progress = False
self.formatter.handle_memory_save_completed(
self.formatter.current_agent_branch,
self.formatter.current_crew_tree,
event.save_time_ms,
event.source_type,
)
@crewai_event_bus.on(MemorySaveFailedEvent)
def on_memory_save_failed(source, event: MemorySaveFailedEvent):
if not self.memory_save_in_progress:
return
self.formatter.handle_memory_save_failed(
self.formatter.current_agent_branch,
event.error,
event.source_type,
self.formatter.current_crew_tree,
)

View File

@@ -73,15 +73,19 @@ class FirstTimeTraceHandler:
self.is_first_time = should_auto_collect_first_time_traces()
return self.is_first_time
def set_batch_manager(self, batch_manager: TraceBatchManager):
"""Set reference to batch manager for sending events."""
def set_batch_manager(self, batch_manager: TraceBatchManager) -> None:
"""Set reference to batch manager for sending events.
Args:
batch_manager: The trace batch manager instance.
"""
self.batch_manager = batch_manager
def mark_events_collected(self):
def mark_events_collected(self) -> None:
"""Mark that events have been collected during execution."""
self.collected_events = True
def handle_execution_completion(self):
def handle_execution_completion(self) -> None:
"""Handle the completion flow as shown in your diagram."""
if not self.is_first_time or not self.collected_events:
return

View File

@@ -44,6 +44,7 @@ class TraceBatchManager:
def __init__(self) -> None:
self._init_lock = Lock()
self._batch_ready_cv = Condition(self._init_lock)
self._pending_events_lock = Lock()
self._pending_events_cv = Condition(self._pending_events_lock)
self._pending_events_count = 0
@@ -94,6 +95,8 @@ class TraceBatchManager:
)
self.backend_initialized = True
self._batch_ready_cv.notify_all()
return self.current_batch
def _initialize_backend_batch(
@@ -161,13 +164,13 @@ class TraceBatchManager:
f"Error initializing trace batch: {e}. Continuing without tracing."
)
def begin_event_processing(self):
"""Mark that an event handler started processing (for synchronization)"""
def begin_event_processing(self) -> None:
"""Mark that an event handler started processing (for synchronization)."""
with self._pending_events_lock:
self._pending_events_count += 1
def end_event_processing(self):
"""Mark that an event handler finished processing (for synchronization)"""
def end_event_processing(self) -> None:
"""Mark that an event handler finished processing (for synchronization)."""
with self._pending_events_cv:
self._pending_events_count -= 1
if self._pending_events_count == 0:
@@ -385,6 +388,22 @@ class TraceBatchManager:
"""Check if batch is initialized"""
return self.current_batch is not None
def wait_for_batch_initialization(self, timeout: float = 2.0) -> bool:
"""Wait for batch to be initialized.
Args:
timeout: Maximum time to wait in seconds (default: 2.0)
Returns:
True if batch was initialized, False if timeout occurred
"""
with self._batch_ready_cv:
if self.current_batch is not None:
return True
return self._batch_ready_cv.wait_for(
lambda: self.current_batch is not None, timeout=timeout
)
def record_start_time(self, key: str):
"""Record start time for duration calculation"""
self.execution_start_times[key] = datetime.now(timezone.utc)

View File

@@ -1,10 +1,16 @@
"""Trace collection listener for orchestrating trace collection."""
import os
from typing import Any, ClassVar
import uuid
from typing_extensions import Self
from crewai.cli.authentication.token import AuthError, get_auth_token
from crewai.cli.version import get_crewai_version
from crewai.events.base_event_listener import BaseEventListener
from crewai.events.event_bus import CrewAIEventsBus
from crewai.events.utils.console_formatter import ConsoleFormatter
from crewai.events.listeners.tracing.first_time_trace_handler import (
FirstTimeTraceHandler,
)
@@ -53,6 +59,8 @@ from crewai.events.types.memory_events import (
MemoryQueryCompletedEvent,
MemoryQueryFailedEvent,
MemoryQueryStartedEvent,
MemoryRetrievalCompletedEvent,
MemoryRetrievalStartedEvent,
MemorySaveCompletedEvent,
MemorySaveFailedEvent,
MemorySaveStartedEvent,
@@ -75,9 +83,7 @@ from crewai.events.types.tool_usage_events import (
class TraceCollectionListener(BaseEventListener):
"""
Trace collection listener that orchestrates trace collection
"""
"""Trace collection listener that orchestrates trace collection."""
complex_events: ClassVar[list[str]] = [
"task_started",
@@ -88,11 +94,12 @@ class TraceCollectionListener(BaseEventListener):
"agent_execution_completed",
]
_instance = None
_initialized = False
_listeners_setup = False
_instance: Self | None = None
_initialized: bool = False
_listeners_setup: bool = False
def __new__(cls, batch_manager: TraceBatchManager | None = None):
def __new__(cls, batch_manager: TraceBatchManager | None = None) -> Self:
"""Create or return singleton instance."""
if cls._instance is None:
cls._instance = super().__new__(cls)
return cls._instance
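The __new__ override plus _initialized flag above is the usual re-entrant singleton; distilled:

class Singleton:
    _instance = None
    _initialized = False

    def __new__(cls) -> "Singleton":
        if cls._instance is None:
            cls._instance = super().__new__(cls)
        return cls._instance

    def __init__(self) -> None:
        if self._initialized:
            return  # __init__ runs on every construction; only the first counts
        self._initialized = True


assert Singleton() is Singleton()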
@@ -100,7 +107,14 @@ class TraceCollectionListener(BaseEventListener):
def __init__(
self,
batch_manager: TraceBatchManager | None = None,
):
formatter: ConsoleFormatter | None = None,
) -> None:
"""Initialize trace collection listener.
Args:
batch_manager: Optional trace batch manager instance.
formatter: Optional console formatter for output.
"""
if self._initialized:
return
@@ -108,19 +122,22 @@ class TraceCollectionListener(BaseEventListener):
self.batch_manager = batch_manager or TraceBatchManager()
self._initialized = True
self.first_time_handler = FirstTimeTraceHandler()
self.formatter = formatter
self.memory_retrieval_in_progress = False
self.memory_save_in_progress = False
if self.first_time_handler.initialize_for_first_time_user():
self.first_time_handler.set_batch_manager(self.batch_manager)
def _check_authenticated(self) -> bool:
"""Check if tracing should be enabled"""
"""Check if tracing should be enabled."""
try:
return bool(get_auth_token())
except AuthError:
return False
def _get_user_context(self) -> dict[str, str]:
"""Extract user context for tracing"""
"""Extract user context for tracing."""
return {
"user_id": os.getenv("CREWAI_USER_ID", "anonymous"),
"organization_id": os.getenv("CREWAI_ORG_ID", ""),
@@ -128,9 +145,12 @@ class TraceCollectionListener(BaseEventListener):
"trace_id": str(uuid.uuid4()),
}
def setup_listeners(self, crewai_event_bus):
"""Setup event listeners - delegates to specific handlers"""
def setup_listeners(self, crewai_event_bus: CrewAIEventsBus) -> None:
"""Setup event listeners - delegates to specific handlers.
Args:
crewai_event_bus: The event bus to register listeners on.
"""
if self._listeners_setup:
return
@@ -140,50 +160,52 @@ class TraceCollectionListener(BaseEventListener):
self._listeners_setup = True
def _register_flow_event_handlers(self, event_bus):
"""Register handlers for flow events"""
def _register_flow_event_handlers(self, event_bus: CrewAIEventsBus) -> None:
"""Register handlers for flow events."""
@event_bus.on(FlowCreatedEvent)
def on_flow_created(source, event):
def on_flow_created(source: Any, event: FlowCreatedEvent) -> None:
pass
@event_bus.on(FlowStartedEvent)
def on_flow_started(source, event):
def on_flow_started(source: Any, event: FlowStartedEvent) -> None:
if not self.batch_manager.is_batch_initialized():
self._initialize_flow_batch(source, event)
self._handle_trace_event("flow_started", source, event)
@event_bus.on(MethodExecutionStartedEvent)
def on_method_started(source, event):
def on_method_started(source: Any, event: MethodExecutionStartedEvent) -> None:
self._handle_trace_event("method_execution_started", source, event)
@event_bus.on(MethodExecutionFinishedEvent)
def on_method_finished(source, event):
def on_method_finished(
source: Any, event: MethodExecutionFinishedEvent
) -> None:
self._handle_trace_event("method_execution_finished", source, event)
@event_bus.on(MethodExecutionFailedEvent)
def on_method_failed(source, event):
def on_method_failed(source: Any, event: MethodExecutionFailedEvent) -> None:
self._handle_trace_event("method_execution_failed", source, event)
@event_bus.on(FlowFinishedEvent)
def on_flow_finished(source, event):
def on_flow_finished(source: Any, event: FlowFinishedEvent) -> None:
self._handle_trace_event("flow_finished", source, event)
@event_bus.on(FlowPlotEvent)
def on_flow_plot(source, event):
def on_flow_plot(source: Any, event: FlowPlotEvent) -> None:
self._handle_action_event("flow_plot", source, event)
def _register_context_event_handlers(self, event_bus):
"""Register handlers for context events (start/end)"""
def _register_context_event_handlers(self, event_bus: CrewAIEventsBus) -> None:
"""Register handlers for context events (start/end)."""
@event_bus.on(CrewKickoffStartedEvent)
def on_crew_started(source, event):
def on_crew_started(source: Any, event: CrewKickoffStartedEvent) -> None:
if not self.batch_manager.is_batch_initialized():
self._initialize_crew_batch(source, event)
self._handle_trace_event("crew_kickoff_started", source, event)
@event_bus.on(CrewKickoffCompletedEvent)
def on_crew_completed(source, event):
def on_crew_completed(source: Any, event: CrewKickoffCompletedEvent) -> None:
self._handle_trace_event("crew_kickoff_completed", source, event)
if self.batch_manager.batch_owner_type == "crew":
if self.first_time_handler.is_first_time:
@@ -193,7 +215,7 @@ class TraceCollectionListener(BaseEventListener):
self.batch_manager.finalize_batch()
@event_bus.on(CrewKickoffFailedEvent)
def on_crew_failed(source, event):
def on_crew_failed(source: Any, event: CrewKickoffFailedEvent) -> None:
self._handle_trace_event("crew_kickoff_failed", source, event)
if self.first_time_handler.is_first_time:
self.first_time_handler.mark_events_collected()
@@ -202,134 +224,245 @@ class TraceCollectionListener(BaseEventListener):
self.batch_manager.finalize_batch()
@event_bus.on(TaskStartedEvent)
def on_task_started(source, event):
def on_task_started(source: Any, event: TaskStartedEvent) -> None:
self._handle_trace_event("task_started", source, event)
@event_bus.on(TaskCompletedEvent)
def on_task_completed(source, event):
def on_task_completed(source: Any, event: TaskCompletedEvent) -> None:
self._handle_trace_event("task_completed", source, event)
@event_bus.on(TaskFailedEvent)
def on_task_failed(source, event):
def on_task_failed(source: Any, event: TaskFailedEvent) -> None:
self._handle_trace_event("task_failed", source, event)
@event_bus.on(AgentExecutionStartedEvent)
def on_agent_started(source, event):
def on_agent_started(source: Any, event: AgentExecutionStartedEvent) -> None:
self._handle_trace_event("agent_execution_started", source, event)
@event_bus.on(AgentExecutionCompletedEvent)
def on_agent_completed(source, event):
def on_agent_completed(
source: Any, event: AgentExecutionCompletedEvent
) -> None:
self._handle_trace_event("agent_execution_completed", source, event)
@event_bus.on(LiteAgentExecutionStartedEvent)
def on_lite_agent_started(source, event):
def on_lite_agent_started(
source: Any, event: LiteAgentExecutionStartedEvent
) -> None:
self._handle_trace_event("lite_agent_execution_started", source, event)
@event_bus.on(LiteAgentExecutionCompletedEvent)
def on_lite_agent_completed(source, event):
def on_lite_agent_completed(
source: Any, event: LiteAgentExecutionCompletedEvent
) -> None:
self._handle_trace_event("lite_agent_execution_completed", source, event)
@event_bus.on(LiteAgentExecutionErrorEvent)
def on_lite_agent_error(source, event):
def on_lite_agent_error(
source: Any, event: LiteAgentExecutionErrorEvent
) -> None:
self._handle_trace_event("lite_agent_execution_error", source, event)
@event_bus.on(AgentExecutionErrorEvent)
def on_agent_error(source, event):
def on_agent_error(source: Any, event: AgentExecutionErrorEvent) -> None:
self._handle_trace_event("agent_execution_error", source, event)
@event_bus.on(LLMGuardrailStartedEvent)
def on_guardrail_started(source, event):
def on_guardrail_started(source: Any, event: LLMGuardrailStartedEvent) -> None:
self._handle_trace_event("llm_guardrail_started", source, event)
@event_bus.on(LLMGuardrailCompletedEvent)
def on_guardrail_completed(source, event):
def on_guardrail_completed(
source: Any, event: LLMGuardrailCompletedEvent
) -> None:
self._handle_trace_event("llm_guardrail_completed", source, event)
def _register_action_event_handlers(self, event_bus):
"""Register handlers for action events (LLM calls, tool usage)"""
def _register_action_event_handlers(self, event_bus: CrewAIEventsBus) -> None:
"""Register handlers for action events (LLM calls, tool usage)."""
@event_bus.on(LLMCallStartedEvent)
def on_llm_call_started(source, event):
def on_llm_call_started(source: Any, event: LLMCallStartedEvent) -> None:
self._handle_action_event("llm_call_started", source, event)
@event_bus.on(LLMCallCompletedEvent)
def on_llm_call_completed(source, event):
def on_llm_call_completed(source: Any, event: LLMCallCompletedEvent) -> None:
self._handle_action_event("llm_call_completed", source, event)
@event_bus.on(LLMCallFailedEvent)
def on_llm_call_failed(source, event):
def on_llm_call_failed(source: Any, event: LLMCallFailedEvent) -> None:
self._handle_action_event("llm_call_failed", source, event)
@event_bus.on(ToolUsageStartedEvent)
def on_tool_started(source, event):
def on_tool_started(source: Any, event: ToolUsageStartedEvent) -> None:
self._handle_action_event("tool_usage_started", source, event)
@event_bus.on(ToolUsageFinishedEvent)
def on_tool_finished(source, event):
def on_tool_finished(source: Any, event: ToolUsageFinishedEvent) -> None:
self._handle_action_event("tool_usage_finished", source, event)
@event_bus.on(ToolUsageErrorEvent)
def on_tool_error(source, event):
def on_tool_error(source: Any, event: ToolUsageErrorEvent) -> None:
self._handle_action_event("tool_usage_error", source, event)
@event_bus.on(MemoryQueryStartedEvent)
def on_memory_query_started(source, event):
def on_memory_query_started(
source: Any, event: MemoryQueryStartedEvent
) -> None:
self._handle_action_event("memory_query_started", source, event)
@event_bus.on(MemoryQueryCompletedEvent)
def on_memory_query_completed(source, event):
def on_memory_query_completed(
source: Any, event: MemoryQueryCompletedEvent
) -> None:
self._handle_action_event("memory_query_completed", source, event)
if self.formatter and self.memory_retrieval_in_progress:
self.formatter.handle_memory_query_completed(
self.formatter.current_agent_branch,
event.source_type or "memory",
event.query_time_ms,
self.formatter.current_crew_tree,
)
@event_bus.on(MemoryQueryFailedEvent)
def on_memory_query_failed(source, event):
def on_memory_query_failed(source: Any, event: MemoryQueryFailedEvent) -> None:
self._handle_action_event("memory_query_failed", source, event)
if self.formatter and self.memory_retrieval_in_progress:
self.formatter.handle_memory_query_failed(
self.formatter.current_agent_branch,
self.formatter.current_crew_tree,
event.error,
event.source_type or "memory",
)
@event_bus.on(MemorySaveStartedEvent)
def on_memory_save_started(source, event):
def on_memory_save_started(source: Any, event: MemorySaveStartedEvent) -> None:
self._handle_action_event("memory_save_started", source, event)
if self.formatter:
if self.memory_save_in_progress:
return
self.memory_save_in_progress = True
self.formatter.handle_memory_save_started(
self.formatter.current_agent_branch,
self.formatter.current_crew_tree,
)
@event_bus.on(MemorySaveCompletedEvent)
def on_memory_save_completed(source, event):
def on_memory_save_completed(
source: Any, event: MemorySaveCompletedEvent
) -> None:
self._handle_action_event("memory_save_completed", source, event)
if self.formatter:
if not self.memory_save_in_progress:
return
self.memory_save_in_progress = False
self.formatter.handle_memory_save_completed(
self.formatter.current_agent_branch,
self.formatter.current_crew_tree,
event.save_time_ms,
event.source_type or "memory",
)
@event_bus.on(MemorySaveFailedEvent)
def on_memory_save_failed(source, event):
def on_memory_save_failed(source: Any, event: MemorySaveFailedEvent) -> None:
self._handle_action_event("memory_save_failed", source, event)
if self.formatter and self.memory_save_in_progress:
self.formatter.handle_memory_save_failed(
self.formatter.current_agent_branch,
event.error,
event.source_type or "memory",
self.formatter.current_crew_tree,
)
@event_bus.on(MemoryRetrievalStartedEvent)
def on_memory_retrieval_started(
source: Any, event: MemoryRetrievalStartedEvent
) -> None:
if self.formatter:
if self.memory_retrieval_in_progress:
return
self.memory_retrieval_in_progress = True
self.formatter.handle_memory_retrieval_started(
self.formatter.current_agent_branch,
self.formatter.current_crew_tree,
)
@event_bus.on(MemoryRetrievalCompletedEvent)
def on_memory_retrieval_completed(
source: Any, event: MemoryRetrievalCompletedEvent
) -> None:
if self.formatter:
if not self.memory_retrieval_in_progress:
return
self.memory_retrieval_in_progress = False
self.formatter.handle_memory_retrieval_completed(
self.formatter.current_agent_branch,
self.formatter.current_crew_tree,
event.memory_content,
event.retrieval_time_ms,
)
@event_bus.on(AgentReasoningStartedEvent)
def on_agent_reasoning_started(
source: Any, event: AgentReasoningStartedEvent
) -> None:
self._handle_action_event("agent_reasoning_started", source, event)
@event_bus.on(AgentReasoningCompletedEvent)
def on_agent_reasoning_completed(
source: Any, event: AgentReasoningCompletedEvent
) -> None:
self._handle_action_event("agent_reasoning_completed", source, event)
@event_bus.on(AgentReasoningFailedEvent)
def on_agent_reasoning_failed(
source: Any, event: AgentReasoningFailedEvent
) -> None:
self._handle_action_event("agent_reasoning_failed", source, event)
@event_bus.on(KnowledgeRetrievalStartedEvent)
def on_knowledge_retrieval_started(
source: Any, event: KnowledgeRetrievalStartedEvent
) -> None:
self._handle_action_event("knowledge_retrieval_started", source, event)
@event_bus.on(KnowledgeRetrievalCompletedEvent)
def on_knowledge_retrieval_completed(
source: Any, event: KnowledgeRetrievalCompletedEvent
) -> None:
self._handle_action_event("knowledge_retrieval_completed", source, event)
@event_bus.on(KnowledgeQueryStartedEvent)
def on_knowledge_query_started(
source: Any, event: KnowledgeQueryStartedEvent
) -> None:
self._handle_action_event("knowledge_query_started", source, event)
@event_bus.on(KnowledgeQueryCompletedEvent)
def on_knowledge_query_completed(
source: Any, event: KnowledgeQueryCompletedEvent
) -> None:
self._handle_action_event("knowledge_query_completed", source, event)
@event_bus.on(KnowledgeQueryFailedEvent)
def on_knowledge_query_failed(
source: Any, event: KnowledgeQueryFailedEvent
) -> None:
self._handle_action_event("knowledge_query_failed", source, event)
def _initialize_crew_batch(self, source: Any, event: Any) -> None:
"""Initialize trace batch.
Args:
source: Source object that triggered the event.
event: Event object containing crew information.
"""
user_context = self._get_user_context()
execution_metadata = {
"crew_name": getattr(event, "crew_name", "Unknown Crew"),
@@ -342,8 +475,13 @@ class TraceCollectionListener(BaseEventListener):
self._initialize_batch(user_context, execution_metadata)
def _initialize_flow_batch(self, source: Any, event: Any) -> None:
"""Initialize trace batch for Flow execution.
Args:
source: Source object that triggered the event.
event: Event object containing flow information.
"""
user_context = self._get_user_context()
execution_metadata = {
"flow_name": getattr(event, "flow_name", "Unknown Flow"),
@@ -359,21 +497,32 @@ class TraceCollectionListener(BaseEventListener):
def _initialize_batch(
self, user_context: dict[str, str], execution_metadata: dict[str, Any]
) -> None:
"""Initialize trace batch - auto-enable ephemeral for first-time users.
Args:
user_context: User context information.
execution_metadata: Metadata about the execution.
"""
if self.first_time_handler.is_first_time:
self.batch_manager.initialize_batch(
user_context, execution_metadata, use_ephemeral=True
)
return
use_ephemeral = not self._check_authenticated()
self.batch_manager.initialize_batch(
user_context, execution_metadata, use_ephemeral=use_ephemeral
)
def _handle_trace_event(self, event_type: str, source: Any, event: Any) -> None:
"""Generic handler for context end events.
Args:
event_type: Type of the event.
source: Source object that triggered the event.
event: Event object.
"""
self.batch_manager.begin_event_processing()
try:
trace_event = self._create_trace_event(event_type, source, event)
@@ -381,9 +530,14 @@ class TraceCollectionListener(BaseEventListener):
finally:
self.batch_manager.end_event_processing()
def _handle_action_event(self, event_type: str, source: Any, event: Any) -> None:
"""Generic handler for action events (LLM calls, tool usage).
Args:
event_type: Type of the event.
source: Source object that triggered the event.
event: Event object.
"""
if not self.batch_manager.is_batch_initialized():
user_context = self._get_user_context()
execution_metadata = {

View File

@@ -0,0 +1,141 @@
"""Events for A2A (Agent-to-Agent) delegation.
This module defines events emitted during A2A protocol delegation,
including both single-turn and multiturn conversation flows.
"""
from typing import Any, Literal
from crewai.events.base_events import BaseEvent
class A2AEventBase(BaseEvent):
"""Base class for A2A events with task/agent context."""
from_task: Any | None = None
from_agent: Any | None = None
def __init__(self, **data):
"""Initialize A2A event, extracting task and agent metadata."""
if data.get("from_task"):
task = data["from_task"]
data["task_id"] = str(task.id)
data["task_name"] = task.name or task.description
data["from_task"] = None
if data.get("from_agent"):
agent = data["from_agent"]
data["agent_id"] = str(agent.id)
data["agent_role"] = agent.role
data["from_agent"] = None
super().__init__(**data)
class A2ADelegationStartedEvent(A2AEventBase):
"""Event emitted when A2A delegation starts.
Attributes:
endpoint: A2A agent endpoint URL (AgentCard URL)
task_description: Task being delegated to the A2A agent
agent_id: A2A agent identifier
is_multiturn: Whether this is part of a multiturn conversation
turn_number: Current turn number (1-indexed, 1 for single-turn)
"""
type: str = "a2a_delegation_started"
endpoint: str
task_description: str
agent_id: str
is_multiturn: bool = False
turn_number: int = 1
class A2ADelegationCompletedEvent(A2AEventBase):
"""Event emitted when A2A delegation completes.
Attributes:
status: Completion status (completed, input_required, failed, etc.)
result: Result message if status is completed
error: Error/response message (error for failed, response for input_required)
is_multiturn: Whether this is part of a multiturn conversation
"""
type: str = "a2a_delegation_completed"
status: str
result: str | None = None
error: str | None = None
is_multiturn: bool = False
class A2AConversationStartedEvent(A2AEventBase):
"""Event emitted when a multiturn A2A conversation starts.
This is emitted once at the beginning of a multiturn conversation,
before the first message exchange.
Attributes:
agent_id: A2A agent identifier
endpoint: A2A agent endpoint URL
a2a_agent_name: Name of the A2A agent from agent card
"""
type: str = "a2a_conversation_started"
agent_id: str
endpoint: str
a2a_agent_name: str | None = None
class A2AMessageSentEvent(A2AEventBase):
"""Event emitted when a message is sent to the A2A agent.
Attributes:
message: Message content sent to the A2A agent
turn_number: Current turn number (1-indexed)
is_multiturn: Whether this is part of a multiturn conversation
agent_role: Role of the CrewAI agent sending the message
"""
type: str = "a2a_message_sent"
message: str
turn_number: int
is_multiturn: bool = False
agent_role: str | None = None
class A2AResponseReceivedEvent(A2AEventBase):
"""Event emitted when a response is received from the A2A agent.
Attributes:
response: Response content from the A2A agent
turn_number: Current turn number (1-indexed)
is_multiturn: Whether this is part of a multiturn conversation
status: Response status (input_required, completed, etc.)
agent_role: Role of the CrewAI agent (for display)
"""
type: str = "a2a_response_received"
response: str
turn_number: int
is_multiturn: bool = False
status: str
agent_role: str | None = None
class A2AConversationCompletedEvent(A2AEventBase):
"""Event emitted when a multiturn A2A conversation completes.
This is emitted once at the end of a multiturn conversation.
Attributes:
status: Final status (completed, failed, etc.)
final_result: Final result if completed successfully
error: Error message if failed
total_turns: Total number of turns in the conversation
"""
type: str = "a2a_conversation_completed"
status: Literal["completed", "failed"]
final_result: str | None = None
error: str | None = None
total_turns: int
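Reviewer note: a minimal consumption sketch for these new events, using the same `(source, event)` handler shape the listener registrations above use. The import path `crewai.events.types.a2a_events` is an assumption inferred from the repo layout, not confirmed by this diff.

```python
# Hedged sketch: subscribing to the new A2A events via the global event bus.
from crewai.events.event_bus import crewai_event_bus
from crewai.events.types.a2a_events import (  # assumed module path
    A2ADelegationCompletedEvent,
    A2ADelegationStartedEvent,
)


@crewai_event_bus.on(A2ADelegationStartedEvent)
def log_delegation_start(source, event):
    # turn_number is 1 for single-turn delegations, per the docstring above
    print(f"[A2A] turn {event.turn_number} -> {event.agent_id}: {event.task_description[:80]}")


@crewai_event_bus.on(A2ADelegationCompletedEvent)
def log_delegation_end(source, event):
    print(f"[A2A] finished with status={event.status!r}")
```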

View File

@@ -0,0 +1,85 @@
from datetime import datetime
from typing import Any
from crewai.events.base_events import BaseEvent
class MCPEvent(BaseEvent):
"""Base event for MCP operations."""
server_name: str
server_url: str | None = None
transport_type: str | None = None # "stdio", "http", "sse"
agent_id: str | None = None
agent_role: str | None = None
from_agent: Any | None = None
from_task: Any | None = None
def __init__(self, **data):
super().__init__(**data)
self._set_agent_params(data)
self._set_task_params(data)
class MCPConnectionStartedEvent(MCPEvent):
"""Event emitted when starting to connect to an MCP server."""
type: str = "mcp_connection_started"
connect_timeout: int | None = None
is_reconnect: bool = (
False # True if this is a reconnection, False for first connection
)
class MCPConnectionCompletedEvent(MCPEvent):
"""Event emitted when successfully connected to an MCP server."""
type: str = "mcp_connection_completed"
started_at: datetime | None = None
completed_at: datetime | None = None
connection_duration_ms: float | None = None
is_reconnect: bool = (
False # True if this was a reconnection, False for first connection
)
class MCPConnectionFailedEvent(MCPEvent):
"""Event emitted when connection to an MCP server fails."""
type: str = "mcp_connection_failed"
error: str
error_type: str | None = None # "timeout", "authentication", "network", etc.
started_at: datetime | None = None
failed_at: datetime | None = None
class MCPToolExecutionStartedEvent(MCPEvent):
"""Event emitted when starting to execute an MCP tool."""
type: str = "mcp_tool_execution_started"
tool_name: str
tool_args: dict[str, Any] | None = None
class MCPToolExecutionCompletedEvent(MCPEvent):
"""Event emitted when MCP tool execution completes."""
type: str = "mcp_tool_execution_completed"
tool_name: str
tool_args: dict[str, Any] | None = None
result: Any | None = None
started_at: datetime | None = None
completed_at: datetime | None = None
execution_duration_ms: float | None = None
class MCPToolExecutionFailedEvent(MCPEvent):
"""Event emitted when MCP tool execution fails."""
type: str = "mcp_tool_execution_failed"
tool_name: str
tool_args: dict[str, Any] | None = None
error: str
error_type: str | None = None # "timeout", "validation", "server_error", etc.
started_at: datetime | None = None
failed_at: datetime | None = None
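Likewise for the MCP telemetry events, a hedged sketch of a consumer; the module path `crewai.events.types.mcp_events` is an assumption.

```python
# Hedged sketch: logging MCP tool durations as they complete.
from crewai.events.event_bus import crewai_event_bus
from crewai.events.types.mcp_events import MCPToolExecutionCompletedEvent  # assumed path


@crewai_event_bus.on(MCPToolExecutionCompletedEvent)
def on_mcp_tool_done(source, event):
    duration = event.execution_duration_ms or 0.0
    print(f"[MCP] {event.server_name}.{event.tool_name} completed in {duration:.1f} ms")
```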

View File

@@ -17,9 +17,16 @@ class ConsoleFormatter:
current_method_branch: Tree | None = None
current_lite_agent_branch: Tree | None = None
tool_usage_counts: ClassVar[dict[str, int]] = {}
current_reasoning_branch: Tree | None = None
_live_paused: bool = False
current_llm_tool_tree: Tree | None = None
current_a2a_conversation_branch: Tree | None = None
current_a2a_turn_count: int = 0
_pending_a2a_message: str | None = None
_pending_a2a_agent_role: str | None = None
_pending_a2a_turn_number: int | None = None
_a2a_turn_branches: ClassVar[dict[int, Tree]] = {}
_current_a2a_agent_name: str | None = None
def __init__(self, verbose: bool = False):
self.console = Console(width=None)
@@ -192,7 +199,12 @@ class ConsoleFormatter:
style,
ID=source_id,
)
content.append(f"Final Output: {final_string_output}\n", style="white")
if status == "failed" and final_string_output:
content.append("Error:\n", style="white bold")
content.append(f"{final_string_output}\n", style="red")
else:
content.append(f"Final Output: {final_string_output}\n", style="white")
self.print_panel(content, title, style)
@@ -357,7 +369,14 @@ class ConsoleFormatter:
return flow_tree
def start_flow(self, flow_name: str, flow_id: str) -> Tree | None:
"""Initialize a flow execution tree."""
"""Initialize or update a flow execution tree."""
if self.current_flow_tree is not None:
for child in self.current_flow_tree.children:
if "Starting Flow" in str(child.label):
child.label = Text("🚀 Flow Started", style="green")
break
return self.current_flow_tree
flow_tree = Tree("")
flow_label = Text()
flow_label.append("🌊 Flow: ", style="blue bold")
@@ -436,27 +455,38 @@ class ConsoleFormatter:
prefix, style = "🔄 Running:", "yellow"
elif status == "completed":
prefix, style = "✅ Completed:", "green"
# Update initialization node when a method completes successfully
for child in flow_tree.children:
if "Starting Flow" in str(child.label):
child.label = Text("Flow Method Step", style="white")
break
else:
prefix, style = "❌ Failed:", "red"
# Update initialization node on failure
for child in flow_tree.children:
if "Starting Flow" in str(child.label):
child.label = Text("❌ Flow Step Failed", style="red")
break
if not method_branch:
for branch in flow_tree.children:
label_str = str(branch.label)
if f" {method_name}" in label_str and (
"Running:" in label_str
or "Completed:" in label_str
or "Failed:" in label_str
):
method_branch = branch
break
if method_branch is None:
method_branch = flow_tree.add("")
method_branch.label = Text(prefix, style=f"{style} bold") + Text(
f" {method_name}", style=style
@@ -464,6 +494,7 @@ class ConsoleFormatter:
self.print(flow_tree)
self.print()
return method_branch
def get_llm_tree(self, tool_name: str):
@@ -1455,22 +1486,37 @@ class ConsoleFormatter:
self.print()
elif isinstance(formatted_answer, AgentFinish):
is_a2a_delegation = False
try:
output_data = json.loads(formatted_answer.output)
if isinstance(output_data, dict):
if output_data.get("is_a2a") is True:
is_a2a_delegation = True
elif "output" in output_data:
nested_output = output_data["output"]
if (
isinstance(nested_output, dict)
and nested_output.get("is_a2a") is True
):
is_a2a_delegation = True
except (json.JSONDecodeError, TypeError, ValueError):
pass
if not is_a2a_delegation:
content = Text()
content.append("Agent: ", style="white")
content.append(f"{agent_role}\n\n", style="bright_green bold")
content.append("Final Answer:\n", style="white")
content.append(f"{formatted_answer.output}", style="bright_green")
finish_panel = Panel(
content,
title="✅ Agent Final Answer",
border_style="green",
padding=(1, 2),
)
self.print(finish_panel)
self.print()
def handle_memory_retrieval_started(
self,
@@ -1770,3 +1816,635 @@ class ConsoleFormatter:
Attempts=f"{retry_count + 1}",
)
self.print_panel(content, "🛡️ Guardrail Failed", "red")
def handle_a2a_delegation_started(
self,
endpoint: str,
task_description: str,
agent_id: str,
is_multiturn: bool = False,
turn_number: int = 1,
) -> Tree | None:
"""Handle A2A delegation started event.
Args:
endpoint: A2A agent endpoint URL
task_description: Task being delegated
agent_id: A2A agent identifier
is_multiturn: Whether this is part of a multiturn conversation
turn_number: Current turn number in conversation (1-indexed)
"""
branch_to_use = self.current_lite_agent_branch or self.current_task_branch
tree_to_use = self.current_crew_tree or branch_to_use
a2a_branch: Tree | None = None
if is_multiturn:
if self.current_a2a_turn_count == 0 and not isinstance(
self.current_a2a_conversation_branch, Tree
):
if branch_to_use is not None and tree_to_use is not None:
self.current_a2a_conversation_branch = branch_to_use.add("")
self.update_tree_label(
self.current_a2a_conversation_branch,
"💬",
f"Multiturn A2A Conversation ({agent_id})",
"cyan",
)
self.print(tree_to_use)
self.print()
else:
self.current_a2a_conversation_branch = "MULTITURN_NO_TREE"
content = Text()
content.append(
"Multiturn A2A Conversation Started\n\n", style="cyan bold"
)
content.append("Agent ID: ", style="white")
content.append(f"{agent_id}\n", style="cyan")
content.append("Note: ", style="white dim")
content.append(
"Conversation will be tracked in tree view", style="cyan dim"
)
panel = self.create_panel(
content, "💬 Multiturn Conversation", "cyan"
)
self.print(panel)
self.print()
self.current_a2a_turn_count = turn_number
return (
self.current_a2a_conversation_branch
if isinstance(self.current_a2a_conversation_branch, Tree)
else None
)
if branch_to_use is not None and tree_to_use is not None:
a2a_branch = branch_to_use.add("")
self.update_tree_label(
a2a_branch,
"🔗",
f"Delegating to A2A Agent ({agent_id})",
"cyan",
)
self.print(tree_to_use)
self.print()
content = Text()
content.append("A2A Delegation Started\n\n", style="cyan bold")
content.append("Agent ID: ", style="white")
content.append(f"{agent_id}\n", style="cyan")
content.append("Endpoint: ", style="white")
content.append(f"{endpoint}\n\n", style="cyan dim")
content.append("Task Description:\n", style="white")
task_preview = (
task_description
if len(task_description) <= 200
else task_description[:197] + "..."
)
content.append(task_preview, style="cyan")
panel = self.create_panel(content, "🔗 A2A Delegation", "cyan")
self.print(panel)
self.print()
return a2a_branch
def handle_a2a_delegation_completed(
self,
status: str,
result: str | None = None,
error: str | None = None,
is_multiturn: bool = False,
) -> None:
"""Handle A2A delegation completed event.
Args:
status: Completion status
result: Optional result message
error: Optional error message (or response for input_required)
is_multiturn: Whether this is part of a multiturn conversation
"""
tree_to_use = self.current_crew_tree or self.current_task_branch
a2a_branch = None
if is_multiturn and self.current_a2a_conversation_branch:
has_tree = isinstance(self.current_a2a_conversation_branch, Tree)
if status == "input_required" and error:
pass
elif status == "completed":
if has_tree:
final_turn = self.current_a2a_conversation_branch.add("")
self.update_tree_label(
final_turn,
"",
"Conversation Completed",
"green",
)
if tree_to_use:
self.print(tree_to_use)
self.print()
self.current_a2a_conversation_branch = None
self.current_a2a_turn_count = 0
elif status == "failed":
if has_tree:
error_turn = self.current_a2a_conversation_branch.add("")
error_msg = (
error[:150] + "..." if error and len(error) > 150 else error
)
self.update_tree_label(
error_turn,
"",
f"Failed: {error_msg}" if error else "Conversation Failed",
"red",
)
if tree_to_use:
self.print(tree_to_use)
self.print()
self.current_a2a_conversation_branch = None
self.current_a2a_turn_count = 0
return
if a2a_branch and tree_to_use:
if status == "completed":
self.update_tree_label(
a2a_branch,
"",
"A2A Delegation Completed",
"green",
)
elif status == "failed":
self.update_tree_label(
a2a_branch,
"",
"A2A Delegation Failed",
"red",
)
else:
self.update_tree_label(
a2a_branch,
"⚠️",
f"A2A Delegation {status.replace('_', ' ').title()}",
"yellow",
)
self.print(tree_to_use)
self.print()
if status == "completed" and result:
content = Text()
content.append("A2A Delegation Completed\n\n", style="green bold")
content.append("Result:\n", style="white")
result_preview = result if len(result) <= 500 else result[:497] + "..."
content.append(result_preview, style="green")
panel = self.create_panel(content, "✅ A2A Success", "green")
self.print(panel)
self.print()
elif status == "input_required" and error:
content = Text()
content.append("A2A Response\n\n", style="cyan bold")
content.append("Message:\n", style="white")
response_preview = error if len(error) <= 500 else error[:497] + "..."
content.append(response_preview, style="cyan")
panel = self.create_panel(content, "💬 A2A Response", "cyan")
self.print(panel)
self.print()
elif error:
content = Text()
content.append(
"A2A Delegation Issue\n\n",
style="red bold" if status == "failed" else "yellow bold",
)
content.append("Status: ", style="white")
content.append(
f"{status}\n\n", style="red" if status == "failed" else "yellow"
)
content.append("Message:\n", style="white")
content.append(error, style="red" if status == "failed" else "yellow")
panel_style = "red" if status == "failed" else "yellow"
panel_title = "❌ A2A Failed" if status == "failed" else "⚠️ A2A Status"
panel = self.create_panel(content, panel_title, panel_style)
self.print(panel)
self.print()
def handle_a2a_conversation_started(
self,
agent_id: str,
endpoint: str,
) -> None:
"""Handle A2A conversation started event.
Args:
agent_id: A2A agent identifier
endpoint: A2A agent endpoint URL
"""
branch_to_use = self.current_lite_agent_branch or self.current_task_branch
tree_to_use = self.current_crew_tree or branch_to_use
if not isinstance(self.current_a2a_conversation_branch, Tree):
if branch_to_use is not None and tree_to_use is not None:
self.current_a2a_conversation_branch = branch_to_use.add("")
self.update_tree_label(
self.current_a2a_conversation_branch,
"💬",
f"Multiturn A2A Conversation ({agent_id})",
"cyan",
)
self.print(tree_to_use)
self.print()
else:
self.current_a2a_conversation_branch = "MULTITURN_NO_TREE"
def handle_a2a_message_sent(
self,
message: str,
turn_number: int,
agent_role: str | None = None,
) -> None:
"""Handle A2A message sent event.
Args:
message: Message content sent to the A2A agent
turn_number: Current turn number
agent_role: Role of the CrewAI agent sending the message
"""
self._pending_a2a_message = message
self._pending_a2a_agent_role = agent_role
self._pending_a2a_turn_number = turn_number
def handle_a2a_response_received(
self,
response: str,
turn_number: int,
status: str,
agent_role: str | None = None,
) -> None:
"""Handle A2A response received event.
Args:
response: Response content from the A2A agent
turn_number: Current turn number
status: Response status (input_required, completed, etc.)
agent_role: Role of the CrewAI agent (for display)
"""
if self.current_a2a_conversation_branch and isinstance(
self.current_a2a_conversation_branch, Tree
):
if turn_number in self._a2a_turn_branches:
turn_branch = self._a2a_turn_branches[turn_number]
else:
turn_branch = self.current_a2a_conversation_branch.add("")
self.update_tree_label(
turn_branch,
"💬",
f"Turn {turn_number}",
"cyan",
)
self._a2a_turn_branches[turn_number] = turn_branch
crewai_agent_role = self._pending_a2a_agent_role or agent_role or "User"
message_content = self._pending_a2a_message or "sent message"
message_preview = (
message_content[:100] + "..."
if len(message_content) > 100
else message_content
)
user_node = turn_branch.add("")
self.update_tree_label(
user_node,
f"{crewai_agent_role} 👤 : ",
f'"{message_preview}"',
"blue",
)
agent_node = turn_branch.add("")
response_preview = (
response[:100] + "..." if len(response) > 100 else response
)
a2a_agent_display = f"{self._current_a2a_agent_name} \U0001f916: "
if status == "completed":
response_color = "green"
status_indicator = ""
elif status == "input_required":
response_color = "yellow"
status_indicator = ""
elif status == "failed":
response_color = "red"
status_indicator = ""
elif status == "auth_required":
response_color = "magenta"
status_indicator = "🔒"
elif status == "canceled":
response_color = "dim"
status_indicator = ""
else:
response_color = "cyan"
status_indicator = ""
label = f'"{response_preview}"'
if status_indicator:
label = f"{status_indicator} {label}"
self.update_tree_label(
agent_node,
a2a_agent_display,
label,
response_color,
)
self._pending_a2a_message = None
self._pending_a2a_agent_role = None
self._pending_a2a_turn_number = None
tree_to_use = self.current_crew_tree or self.current_task_branch
if tree_to_use:
self.print(tree_to_use)
self.print()
def handle_a2a_conversation_completed(
self,
status: str,
final_result: str | None,
error: str | None,
total_turns: int,
) -> None:
"""Handle A2A conversation completed event.
Args:
status: Final status (completed, failed, etc.)
final_result: Final result if completed successfully
error: Error message if failed
total_turns: Total number of turns in the conversation
"""
if self.current_a2a_conversation_branch and isinstance(
self.current_a2a_conversation_branch, Tree
):
if status == "completed":
if self._pending_a2a_message and self._pending_a2a_agent_role:
if total_turns in self._a2a_turn_branches:
turn_branch = self._a2a_turn_branches[total_turns]
else:
turn_branch = self.current_a2a_conversation_branch.add("")
self.update_tree_label(
turn_branch,
"💬",
f"Turn {total_turns}",
"cyan",
)
self._a2a_turn_branches[total_turns] = turn_branch
crewai_agent_role = self._pending_a2a_agent_role
message_content = self._pending_a2a_message
message_preview = (
message_content[:100] + "..."
if len(message_content) > 100
else message_content
)
user_node = turn_branch.add("")
self.update_tree_label(
user_node,
f"{crewai_agent_role} 👤 : ",
f'"{message_preview}"',
"green",
)
self._pending_a2a_message = None
self._pending_a2a_agent_role = None
self._pending_a2a_turn_number = None
elif status == "failed":
error_turn = self.current_a2a_conversation_branch.add("")
error_msg = error[:150] + "..." if error and len(error) > 150 else error
self.update_tree_label(
error_turn,
"",
f"Failed: {error_msg}" if error else "Conversation Failed",
"red",
)
tree_to_use = self.current_crew_tree or self.current_task_branch
if tree_to_use:
self.print(tree_to_use)
self.print()
self.current_a2a_conversation_branch = None
self.current_a2a_turn_count = 0
# ----------- MCP EVENTS -----------
def handle_mcp_connection_started(
self,
server_name: str,
server_url: str | None = None,
transport_type: str | None = None,
is_reconnect: bool = False,
connect_timeout: int | None = None,
) -> None:
"""Handle MCP connection started event."""
if not self.verbose:
return
content = Text()
reconnect_text = " (Reconnecting)" if is_reconnect else ""
content.append(f"MCP Connection Started{reconnect_text}\n\n", style="cyan bold")
content.append("Server: ", style="white")
content.append(f"{server_name}\n", style="cyan")
if server_url:
content.append("URL: ", style="white")
content.append(f"{server_url}\n", style="cyan dim")
if transport_type:
content.append("Transport: ", style="white")
content.append(f"{transport_type}\n", style="cyan")
if connect_timeout:
content.append("Timeout: ", style="white")
content.append(f"{connect_timeout}s\n", style="cyan")
panel = self.create_panel(content, "🔌 MCP Connection", "cyan")
self.print(panel)
self.print()
def handle_mcp_connection_completed(
self,
server_name: str,
server_url: str | None = None,
transport_type: str | None = None,
connection_duration_ms: float | None = None,
is_reconnect: bool = False,
) -> None:
"""Handle MCP connection completed event."""
if not self.verbose:
return
content = Text()
reconnect_text = " (Reconnected)" if is_reconnect else ""
content.append(
f"MCP Connection Completed{reconnect_text}\n\n", style="green bold"
)
content.append("Server: ", style="white")
content.append(f"{server_name}\n", style="green")
if server_url:
content.append("URL: ", style="white")
content.append(f"{server_url}\n", style="green dim")
if transport_type:
content.append("Transport: ", style="white")
content.append(f"{transport_type}\n", style="green")
if connection_duration_ms is not None:
content.append("Duration: ", style="white")
content.append(f"{connection_duration_ms:.2f}ms\n", style="green")
panel = self.create_panel(content, "✅ MCP Connected", "green")
self.print(panel)
self.print()
def handle_mcp_connection_failed(
self,
server_name: str,
server_url: str | None = None,
transport_type: str | None = None,
error: str = "",
error_type: str | None = None,
) -> None:
"""Handle MCP connection failed event."""
if not self.verbose:
return
content = Text()
content.append("MCP Connection Failed\n\n", style="red bold")
content.append("Server: ", style="white")
content.append(f"{server_name}\n", style="red")
if server_url:
content.append("URL: ", style="white")
content.append(f"{server_url}\n", style="red dim")
if transport_type:
content.append("Transport: ", style="white")
content.append(f"{transport_type}\n", style="red")
if error_type:
content.append("Error Type: ", style="white")
content.append(f"{error_type}\n", style="red")
if error:
content.append("\nError: ", style="white bold")
error_preview = error[:500] + "..." if len(error) > 500 else error
content.append(f"{error_preview}\n", style="red")
panel = self.create_panel(content, "❌ MCP Connection Failed", "red")
self.print(panel)
self.print()
def handle_mcp_tool_execution_started(
self,
server_name: str,
tool_name: str,
tool_args: dict[str, Any] | None = None,
) -> None:
"""Handle MCP tool execution started event."""
if not self.verbose:
return
content = self.create_status_content(
"MCP Tool Execution Started",
tool_name,
"yellow",
tool_args=tool_args or {},
Server=server_name,
)
panel = self.create_panel(content, "🔧 MCP Tool", "yellow")
self.print(panel)
self.print()
def handle_mcp_tool_execution_completed(
self,
server_name: str,
tool_name: str,
tool_args: dict[str, Any] | None = None,
result: Any | None = None,
execution_duration_ms: float | None = None,
) -> None:
"""Handle MCP tool execution completed event."""
if not self.verbose:
return
content = self.create_status_content(
"MCP Tool Execution Completed",
tool_name,
"green",
tool_args=tool_args or {},
Server=server_name,
)
if execution_duration_ms is not None:
content.append("Duration: ", style="white")
content.append(f"{execution_duration_ms:.2f}ms\n", style="green")
if result is not None:
result_str = str(result)
if len(result_str) > 500:
result_str = result_str[:497] + "..."
content.append("\nResult: ", style="white bold")
content.append(f"{result_str}\n", style="green")
panel = self.create_panel(content, "✅ MCP Tool Completed", "green")
self.print(panel)
self.print()
def handle_mcp_tool_execution_failed(
self,
server_name: str,
tool_name: str,
tool_args: dict[str, Any] | None = None,
error: str = "",
error_type: str | None = None,
) -> None:
"""Handle MCP tool execution failed event."""
if not self.verbose:
return
content = self.create_status_content(
"MCP Tool Execution Failed",
tool_name,
"red",
tool_args=tool_args or {},
Server=server_name,
)
if error_type:
content.append("Error Type: ", style="white")
content.append(f"{error_type}\n", style="red")
if error:
content.append("\nError: ", style="white bold")
error_preview = error[:500] + "..." if len(error) > 500 else error
content.append(f"{error_preview}\n", style="red")
panel = self.create_panel(content, "❌ MCP Tool Failed", "red")
self.print(panel)
self.print()

View File

@@ -1,65 +0,0 @@
"""A2A (Agent-to-Agent) Protocol adapter for CrewAI.
This module provides integration with A2A protocol-compliant agents,
enabling CrewAI to orchestrate external agents like ServiceNow, Bedrock Agents,
Glean, and other A2A-compliant systems.
Example:
```python
from crewai.experimental.a2a import A2AAgentAdapter
# Create A2A agent
servicenow_agent = A2AAgentAdapter(
agent_card_url="https://servicenow.example.com/.well-known/agent-card.json",
auth_token="your-token",
role="ServiceNow Incident Manager",
goal="Create and manage IT incidents",
backstory="Expert at incident management",
)
# Use in crew
crew = Crew(agents=[servicenow_agent], tasks=[task])
```
"""
from crewai.experimental.a2a.a2a_adapter import A2AAgentAdapter
from crewai.experimental.a2a.auth import (
APIKeyAuth,
AuthScheme,
BearerTokenAuth,
HTTPBasicAuth,
HTTPDigestAuth,
OAuth2AuthorizationCode,
OAuth2ClientCredentials,
create_auth_from_agent_card,
)
from crewai.experimental.a2a.exceptions import (
A2AAuthenticationError,
A2AConfigurationError,
A2AConnectionError,
A2AError,
A2AInputRequiredError,
A2ATaskCanceledError,
A2ATaskFailedError,
)
__all__ = [
"A2AAgentAdapter",
"A2AAuthenticationError",
"A2AConfigurationError",
"A2AConnectionError",
"A2AError",
"A2AInputRequiredError",
"A2ATaskCanceledError",
"A2ATaskFailedError",
"APIKeyAuth",
# Authentication
"AuthScheme",
"BearerTokenAuth",
"HTTPBasicAuth",
"HTTPDigestAuth",
"OAuth2AuthorizationCode",
"OAuth2ClientCredentials",
"create_auth_from_agent_card",
]

File diff suppressed because it is too large

View File

@@ -1,424 +0,0 @@
"""Authentication schemes for A2A protocol agents.
This module provides support for various authentication methods:
- Bearer tokens (existing)
- OAuth2 (Client Credentials, Authorization Code)
- API Keys (header, query, cookie)
- HTTP Basic authentication
- HTTP Digest authentication
"""
from __future__ import annotations
from abc import ABC, abstractmethod
import base64
from collections.abc import Awaitable, Callable
from typing import TYPE_CHECKING, Any, Literal
import httpx
from pydantic import BaseModel, Field
if TYPE_CHECKING:
from a2a.types import AgentCard
class AuthScheme(ABC, BaseModel):
"""Base class for authentication schemes."""
@abstractmethod
async def apply_auth(
self, client: httpx.AsyncClient, headers: dict[str, str]
) -> dict[str, str]:
"""Apply authentication to request headers.
Args:
client: HTTP client for making auth requests.
headers: Current request headers.
Returns:
Updated headers with authentication applied.
"""
...
@abstractmethod
def configure_client(self, client: httpx.AsyncClient) -> None:
"""Configure the HTTP client for this auth scheme.
Args:
client: HTTP client to configure.
"""
...
class BearerTokenAuth(AuthScheme):
"""Bearer token authentication (Authorization: Bearer <token>)."""
token: str = Field(description="Bearer token")
async def apply_auth(
self, client: httpx.AsyncClient, headers: dict[str, str]
) -> dict[str, str]:
"""Apply Bearer token to Authorization header."""
headers["Authorization"] = f"Bearer {self.token}"
return headers
def configure_client(self, client: httpx.AsyncClient) -> None:
"""No client configuration needed for Bearer tokens."""
class HTTPBasicAuth(AuthScheme):
"""HTTP Basic authentication."""
username: str = Field(description="Username")
password: str = Field(description="Password")
async def apply_auth(
self, client: httpx.AsyncClient, headers: dict[str, str]
) -> dict[str, str]:
"""Apply HTTP Basic authentication."""
credentials = f"{self.username}:{self.password}"
encoded = base64.b64encode(credentials.encode()).decode()
headers["Authorization"] = f"Basic {encoded}"
return headers
def configure_client(self, client: httpx.AsyncClient) -> None:
"""No client configuration needed for Basic auth."""
class HTTPDigestAuth(AuthScheme):
"""HTTP Digest authentication.
Note: Uses httpx-auth library for proper digest implementation.
"""
username: str = Field(description="Username")
password: str = Field(description="Password")
async def apply_auth(
self, client: httpx.AsyncClient, headers: dict[str, str]
) -> dict[str, str]:
"""Digest auth is handled by httpx auth flow, not headers."""
return headers
def configure_client(self, client: httpx.AsyncClient) -> None:
"""Configure client with Digest auth."""
try:
from httpx_auth import DigestAuth # type: ignore[import-not-found]
client.auth = DigestAuth(self.username, self.password) # type: ignore[import-not-found]
except ImportError as e:
msg = "httpx-auth required for Digest authentication. Install with: pip install httpx-auth"
raise ImportError(msg) from e
class APIKeyAuth(AuthScheme):
"""API Key authentication (header, query, or cookie)."""
api_key: str = Field(description="API key value")
location: Literal["header", "query", "cookie"] = Field(
default="header", description="Where to send the API key"
)
name: str = Field(default="X-API-Key", description="Parameter name for the API key")
async def apply_auth(
self, client: httpx.AsyncClient, headers: dict[str, str]
) -> dict[str, str]:
"""Apply API key authentication."""
if self.location == "header":
headers[self.name] = self.api_key
elif self.location == "cookie":
headers["Cookie"] = f"{self.name}={self.api_key}"
# Query params are handled in configure_client via event hooks
return headers
def configure_client(self, client: httpx.AsyncClient) -> None:
"""Configure client for query param API keys."""
if self.location == "query":
# Add API key to all requests via event hook
async def add_api_key_param(request: httpx.Request) -> None:
url = httpx.URL(request.url)
request.url = url.copy_add_param(self.name, self.api_key)
client.event_hooks["request"].append(add_api_key_param)
class OAuth2ClientCredentials(AuthScheme):
"""OAuth2 Client Credentials flow authentication."""
token_url: str = Field(description="OAuth2 token endpoint")
client_id: str = Field(description="OAuth2 client ID")
client_secret: str = Field(description="OAuth2 client secret")
scopes: list[str] = Field(
default_factory=list, description="Required OAuth2 scopes"
)
_access_token: str | None = None
_token_expires_at: float | None = None
async def apply_auth(
self, client: httpx.AsyncClient, headers: dict[str, str]
) -> dict[str, str]:
"""Apply OAuth2 access token to Authorization header."""
# Get or refresh token if needed
import time
if (
self._access_token is None
or self._token_expires_at is None
or time.time() >= self._token_expires_at
):
await self._fetch_token(client)
if self._access_token:
headers["Authorization"] = f"Bearer {self._access_token}"
return headers
async def _fetch_token(self, client: httpx.AsyncClient) -> None:
"""Fetch OAuth2 access token using client credentials flow."""
import time
data = {
"grant_type": "client_credentials",
"client_id": self.client_id,
"client_secret": self.client_secret,
}
if self.scopes:
data["scope"] = " ".join(self.scopes)
response = await client.post(self.token_url, data=data)
response.raise_for_status()
token_data = response.json()
self._access_token = token_data["access_token"]
# Calculate expiration time (default to 3600 seconds if not provided)
expires_in = token_data.get("expires_in", 3600)
self._token_expires_at = time.time() + expires_in - 60 # 60s buffer
def configure_client(self, client: httpx.AsyncClient) -> None:
"""No client configuration needed for OAuth2."""
class OAuth2AuthorizationCode(AuthScheme):
"""OAuth2 Authorization Code flow authentication.
Note: This requires interactive authorization and is typically used
for user-facing applications. For server-to-server, use ClientCredentials.
"""
authorization_url: str = Field(description="OAuth2 authorization endpoint")
token_url: str = Field(description="OAuth2 token endpoint")
client_id: str = Field(description="OAuth2 client ID")
client_secret: str = Field(description="OAuth2 client secret")
redirect_uri: str = Field(description="OAuth2 redirect URI")
scopes: list[str] = Field(
default_factory=list, description="Required OAuth2 scopes"
)
_access_token: str | None = None
_refresh_token: str | None = None
_token_expires_at: float | None = None
_authorization_callback: Callable[[str], Awaitable[str]] | None = None
def set_authorization_callback(
self, callback: Callable[[str], Awaitable[str]] | None
) -> None:
"""Set callback to handle authorization URL.
The callback receives the authorization URL and should return
the authorization code after user completes the flow.
"""
self._authorization_callback = callback
async def apply_auth(
self, client: httpx.AsyncClient, headers: dict[str, str]
) -> dict[str, str]:
"""Apply OAuth2 access token to Authorization header."""
import time
# Get or refresh token if needed
if self._access_token is None:
if self._authorization_callback is None:
msg = "Authorization callback not set. Use set_authorization_callback()"
raise ValueError(msg)
await self._fetch_initial_token(client)
elif self._token_expires_at and time.time() >= self._token_expires_at:
await self._refresh_access_token(client)
if self._access_token:
headers["Authorization"] = f"Bearer {self._access_token}"
return headers
async def _fetch_initial_token(self, client: httpx.AsyncClient) -> None:
"""Fetch initial access token using authorization code flow."""
import time
import urllib.parse
# Build authorization URL
params = {
"response_type": "code",
"client_id": self.client_id,
"redirect_uri": self.redirect_uri,
"scope": " ".join(self.scopes),
}
auth_url = f"{self.authorization_url}?{urllib.parse.urlencode(params)}"
# Get authorization code from callback
if self._authorization_callback is None:
msg = "Authorization callback not set"
raise ValueError(msg)
auth_code = await self._authorization_callback(auth_url)
# Exchange code for token
data = {
"grant_type": "authorization_code",
"code": auth_code,
"client_id": self.client_id,
"client_secret": self.client_secret,
"redirect_uri": self.redirect_uri,
}
response = await client.post(self.token_url, data=data)
response.raise_for_status()
token_data = response.json()
self._access_token = token_data["access_token"]
self._refresh_token = token_data.get("refresh_token")
expires_in = token_data.get("expires_in", 3600)
self._token_expires_at = time.time() + expires_in - 60
async def _refresh_access_token(self, client: httpx.AsyncClient) -> None:
"""Refresh the access token using refresh token."""
import time
if not self._refresh_token:
# Re-authorize if no refresh token
await self._fetch_initial_token(client)
return
data = {
"grant_type": "refresh_token",
"refresh_token": self._refresh_token,
"client_id": self.client_id,
"client_secret": self.client_secret,
}
response = await client.post(self.token_url, data=data)
response.raise_for_status()
token_data = response.json()
self._access_token = token_data["access_token"]
if "refresh_token" in token_data:
self._refresh_token = token_data["refresh_token"]
expires_in = token_data.get("expires_in", 3600)
self._token_expires_at = time.time() + expires_in - 60
def configure_client(self, client: httpx.AsyncClient) -> None:
"""No client configuration needed for OAuth2."""
def create_auth_from_agent_card(
agent_card: AgentCard, credentials: dict[str, Any]
) -> AuthScheme | None:
"""Create an appropriate authentication scheme from AgentCard security config.
Args:
agent_card: The A2A AgentCard containing security requirements.
credentials: User-provided credentials (passwords, tokens, keys, etc.).
Returns:
Configured AuthScheme, or None if no authentication required.
Example:
```python
# For OAuth2
credentials = {
"client_id": "my-app",
"client_secret": "secret123",
}
auth = create_auth_from_agent_card(agent_card, credentials)
# For API Key
credentials = {"api_key": "key-12345"}
auth = create_auth_from_agent_card(agent_card, credentials)
# For HTTP Basic
credentials = {"username": "user", "password": "pass"}
auth = create_auth_from_agent_card(agent_card, credentials)
```
"""
if not agent_card.security or not agent_card.security_schemes:
return None
# Get the first required security scheme
first_security_req = agent_card.security[0] if agent_card.security else {}
for scheme_name, _scopes in first_security_req.items():
security_scheme_obj = agent_card.security_schemes.get(scheme_name)
if not security_scheme_obj:
continue
# SecurityScheme is a dict-like object
security_scheme = dict(security_scheme_obj) # type: ignore[arg-type]
scheme_type = str(security_scheme.get("type", "")).lower()
# OAuth2
if scheme_type == "oauth2":
flows = security_scheme.get("flows", {})
if "clientCredentials" in flows:
flow = flows["clientCredentials"]
return OAuth2ClientCredentials(
token_url=str(flow["tokenUrl"]),
client_id=str(credentials.get("client_id", "")),
client_secret=str(credentials.get("client_secret", "")),
scopes=list(flow.get("scopes", {}).keys()),
)
if "authorizationCode" in flows:
flow = flows["authorizationCode"]
return OAuth2AuthorizationCode(
authorization_url=str(flow["authorizationUrl"]),
token_url=str(flow["tokenUrl"]),
client_id=str(credentials.get("client_id", "")),
client_secret=str(credentials.get("client_secret", "")),
redirect_uri=str(credentials.get("redirect_uri", "")),
scopes=list(flow.get("scopes", {}).keys()),
)
# API Key
elif scheme_type == "apikey":
location = str(security_scheme.get("in", "header"))
name = str(security_scheme.get("name", "X-API-Key"))
return APIKeyAuth(
api_key=str(credentials.get("api_key", "")),
location=location, # type: ignore[arg-type]
name=name,
)
# HTTP Auth
elif scheme_type == "http":
http_scheme = str(security_scheme.get("scheme", "")).lower()
if http_scheme == "basic":
return HTTPBasicAuth(
username=str(credentials.get("username", "")),
password=str(credentials.get("password", "")),
)
if http_scheme == "digest":
return HTTPDigestAuth(
username=str(credentials.get("username", "")),
password=str(credentials.get("password", "")),
)
if http_scheme == "bearer":
return BearerTokenAuth(token=str(credentials.get("token", "")))
return None

View File

@@ -1,56 +0,0 @@
"""Custom exceptions for A2A Agent Adapter."""
class A2AError(Exception):
"""Base exception for A2A adapter errors."""
class A2ATaskFailedError(A2AError):
"""Raised when A2A agent task fails or is rejected.
This exception is raised when the A2A agent reports a task
in the 'failed' or 'rejected' state.
"""
class A2AInputRequiredError(A2AError):
"""Raised when A2A agent requires additional input.
This exception is raised when the A2A agent reports a task
in the 'input_required' state, indicating that it needs more
information to complete the task.
"""
class A2AConfigurationError(A2AError):
"""Raised when A2A adapter configuration is invalid.
This exception is raised during initialization or setup when
the adapter configuration is invalid or incompatible.
"""
class A2AConnectionError(A2AError):
"""Raised when connection to A2A agent fails.
This exception is raised when the adapter cannot establish
a connection to the A2A agent or when network errors occur.
"""
class A2AAuthenticationError(A2AError):
"""Raised when A2A agent requires authentication.
This exception is raised when the A2A agent reports a task
in the 'auth_required' state, indicating that authentication
is needed before the task can continue.
"""
class A2ATaskCanceledError(A2AError):
"""Raised when A2A task is canceled.
This exception is raised when the A2A agent reports a task
in the 'canceled' state, indicating the task was canceled
either by the user or the system.
"""

View File

@@ -1,56 +0,0 @@
"""Type protocols for A2A SDK components.
These protocols define the expected interfaces for A2A SDK types,
allowing for type checking without requiring the SDK to be installed.
"""
from collections.abc import AsyncIterator
from typing import Any, Protocol, runtime_checkable
@runtime_checkable
class AgentCardProtocol(Protocol):
"""Protocol for A2A AgentCard."""
name: str
version: str
description: str
skills: list[Any]
capabilities: Any
@runtime_checkable
class ClientProtocol(Protocol):
"""Protocol for A2A Client."""
async def send_message(self, message: Any) -> AsyncIterator[Any]:
"""Send message to A2A agent."""
...
async def get_card(self) -> AgentCardProtocol:
"""Get agent card."""
...
async def close(self) -> None:
"""Close client connection."""
...
@runtime_checkable
class MessageProtocol(Protocol):
"""Protocol for A2A Message."""
role: Any
message_id: str
parts: list[Any]
@runtime_checkable
class TaskProtocol(Protocol):
"""Protocol for A2A Task."""
id: str
context_id: str
status: Any
history: list[Any] | None
artifacts: list[Any] | None
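Since these protocols are decorated with `@runtime_checkable`, `isinstance()` checks verify that an object exposes the required members (attribute presence only, not signatures), which is exactly what lets the adapter validate duck-typed SDK objects without importing the SDK; a small sketch:

```python
# Sketch: runtime Protocol check against a duck-typed agent card.
def describe_card(card: object) -> str:
    if isinstance(card, AgentCardProtocol):
        return f"{card.name} v{card.version}: {card.description}"
    raise TypeError("object does not look like an A2A AgentCard")
```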

View File

@@ -2,7 +2,7 @@ from collections.abc import Sequence
import threading
from typing import Any
from crewai.agent.core import Agent
from crewai.agents.agent_builder.base_agent import BaseAgent
from crewai.events.event_bus import crewai_event_bus
from crewai.events.types.agent_events import (

View File

@@ -1,5 +1,21 @@
from crewai.flow.flow import Flow, and_, listen, or_, router, start
from crewai.flow.persistence import persist
from crewai.flow.visualization import (
FlowStructure,
build_flow_structure,
visualize_flow_structure,
)
__all__ = ["Flow", "and_", "listen", "or_", "persist", "router", "start"]
__all__ = [
"Flow",
"FlowStructure",
"and_",
"build_flow_structure",
"listen",
"or_",
"persist",
"router",
"start",
"visualize_flow_structure",
]
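A hedged sketch of the newly exported visualization helpers; the call signatures are assumptions inferred from the names in this diff, not confirmed by it.

```python
# Sketch: building and rendering a flow structure with the new exports.
from crewai.flow import Flow, build_flow_structure, start, visualize_flow_structure


class PingFlow(Flow):
    @start()
    def ping(self):
        return "pong"


structure = build_flow_structure(PingFlow())  # assumed: takes a Flow instance
visualize_flow_structure(structure)           # assumed: renders the structure
```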

View File

@@ -1,93 +0,0 @@
<!DOCTYPE html>
<html>
<head>
<meta charset="utf-8" />
<title>{{ title }}</title>
<script
src="https://cdnjs.cloudflare.com/ajax/libs/vis-network/9.1.2/dist/vis-network.min.js"
integrity="sha512-LnvoEWDFrqGHlHmDD2101OrLcbsfkrzoSpvtSQtxK3RMnRV0eOkhhBN2dXHKRrUU8p2DGRTk35n4O8nWSVe1mQ=="
crossorigin="anonymous"
referrerpolicy="no-referrer"
></script>
<link
rel="stylesheet"
href="https://cdnjs.cloudflare.com/ajax/libs/vis-network/9.1.2/dist/dist/vis-network.min.css"
integrity="sha512-WgxfT5LWjfszlPHXRmBWHkV2eceiWTOBvrKCNbdgDYTHrT2AeLCGbF4sZlZw3UMN3WtL0tGUoIAKsu8mllg/XA=="
crossorigin="anonymous"
referrerpolicy="no-referrer"
/>
<style type="text/css">
body {
font-family: verdana;
margin: 0;
padding: 0;
}
.container {
display: flex;
flex-direction: column;
height: 100vh;
}
#mynetwork {
flex-grow: 1;
width: 100%;
height: 750px;
background-color: #ffffff;
}
.card {
border: none;
}
.legend-container {
display: flex;
align-items: center;
justify-content: center;
padding: 10px;
background-color: #f8f9fa;
position: fixed; /* Make the legend fixed */
bottom: 0; /* Position it at the bottom */
width: 100%; /* Make it span the full width */
}
.legend-item {
display: flex;
align-items: center;
margin-right: 20px;
}
.legend-color-box {
width: 20px;
height: 20px;
margin-right: 5px;
}
.logo {
height: 50px;
margin-right: 20px;
}
.legend-dashed {
border-bottom: 2px dashed #666666;
width: 20px;
height: 0;
margin-right: 5px;
}
.legend-solid {
border-bottom: 2px solid #666666;
width: 20px;
height: 0;
margin-right: 5px;
}
</style>
</head>
<body>
<div class="container">
<div class="card" style="width: 100%">
<div id="mynetwork" class="card-body"></div>
</div>
<div class="legend-container">
<img
src="data:image/svg+xml;base64,{{ logo_svg_base64 }}"
alt="CrewAI logo"
class="logo"
/>
<!-- LEGEND_ITEMS_PLACEHOLDER -->
</div>
</div>
{{ network_content }}
</body>
</html>

File diff suppressed because one or more lines are too long


View File

@@ -0,0 +1,4 @@
from typing import Final, Literal
AND_CONDITION: Final[Literal["AND"]] = "AND"
OR_CONDITION: Final[Literal["OR"]] = "OR"

View File

@@ -1,3 +1,9 @@
"""Core flow execution framework with decorators and state management.
This module provides the Flow class and decorators (@start, @listen, @router)
for building event-driven workflows with conditional execution and routing.
"""
from __future__ import annotations
import asyncio
@@ -38,7 +44,7 @@ from crewai.events.types.flow_events import (
MethodExecutionFinishedEvent,
MethodExecutionStartedEvent,
)
from crewai.flow.flow_visualizer import plot_flow
from crewai.flow.constants import AND_CONDITION, OR_CONDITION
from crewai.flow.flow_wrappers import (
FlowCondition,
FlowConditions,
@@ -51,14 +57,16 @@ from crewai.flow.flow_wrappers import (
from crewai.flow.persistence.base import FlowPersistence
from crewai.flow.types import FlowExecutionData, FlowMethodName, PendingListenerKey
from crewai.flow.utils import (
_extract_all_methods,
_normalize_condition,
get_possible_return_constants,
is_flow_condition_dict,
is_flow_condition_list,
is_flow_method,
is_flow_method_callable,
is_flow_method_name,
is_simple_flow_condition,
)
from crewai.flow.visualization import build_flow_structure, render_interactive
from crewai.utilities.printer import Printer, PrinterColor
@@ -74,95 +82,63 @@ class FlowState(BaseModel):
)
# type variables with explicit bounds
T = TypeVar("T", bound=dict[str, Any] | BaseModel) # Generic flow state type parameter
StateT = TypeVar(
"StateT", bound=dict[str, Any] | BaseModel
) # State validation type parameter
P = ParamSpec("P") # ParamSpec for preserving function signatures in decorators
R = TypeVar("R") # Generic return type for decorated methods
F = TypeVar("F", bound=Callable[..., Any]) # Function type for decorator preservation
def ensure_state_type(state: Any, expected_type: type[StateT]) -> StateT:
"""Ensure state matches expected type with proper validation.
Args:
state: State instance to validate
expected_type: Expected type for the state
Returns:
Validated state instance
Raises:
TypeError: If state doesn't match expected type
ValueError: If state validation fails
"""
if expected_type is dict:
if not isinstance(state, dict):
raise TypeError(f"Expected dict, got {type(state).__name__}")
return cast(StateT, state)
if isinstance(expected_type, type) and issubclass(expected_type, BaseModel):
if not isinstance(state, expected_type):
raise TypeError(
f"Expected {expected_type.__name__}, got {type(state).__name__}"
)
return state
raise TypeError(f"Invalid expected_type: {expected_type}")
T = TypeVar("T", bound=dict[str, Any] | BaseModel)
P = ParamSpec("P")
R = TypeVar("R")
F = TypeVar("F", bound=Callable[..., Any])
def start(
condition: str | FlowCondition | Callable[..., Any] | None = None,
) -> Callable[[Callable[P, R]], StartMethod[P, R]]:
"""
Marks a method as a flow's starting point.
"""Marks a method as a flow's starting point.
This decorator designates a method as an entry point for the flow execution.
It can optionally specify conditions that trigger the start based on other
method executions.
Args:
condition: Defines when the start method should execute. Can be:
- str: Name of a method that triggers this start
- FlowCondition: Result from or_() or and_(), including nested conditions
- Callable[..., Any]: A method reference that triggers this start
Default is None, meaning unconditional start.
Returns:
A decorator function that wraps the method as a flow start point and preserves its signature.
Raises:
ValueError: If the condition format is invalid.
Examples:
>>> @start() # Unconditional start
>>> def begin_flow(self):
... pass
>>> @start("method_name") # Start after specific method
>>> def conditional_start(self):
... pass
>>> @start("method_name") # Start after specific method
>>> def conditional_start(self):
... pass
>>> @start(and_("method1", "method2")) # Start after multiple methods
>>> def complex_start(self):
... pass
>>> @start(and_("method1", "method2")) # Start after multiple methods
>>> def complex_start(self):
... pass
"""
def decorator(func: Callable[P, R]) -> StartMethod[P, R]:
"""Decorator that wraps a function as a start method.
Args:
func: The function to wrap as a start method.
Returns:
A StartMethod wrapper around the function.
"""
wrapper = StartMethod(func)
if condition is not None:
if is_flow_method_name(condition):
wrapper.__trigger_methods__ = [condition]
wrapper.__condition_type__ = "OR"
wrapper.__condition_type__ = OR_CONDITION
elif is_flow_condition_dict(condition):
if "conditions" in condition:
wrapper.__trigger_condition__ = condition
@@ -177,7 +153,7 @@ def start(
)
elif is_flow_method_callable(condition):
wrapper.__trigger_methods__ = [condition.__name__]
wrapper.__condition_type__ = "OR"
wrapper.__condition_type__ = OR_CONDITION
else:
raise ValueError(
"Condition must be a method, string, or a result of or_() or and_()"
@@ -190,49 +166,45 @@ def start(
def listen(
condition: str | FlowCondition | Callable[..., Any],
) -> Callable[[Callable[P, R]], ListenMethod[P, R]]:
"""
Creates a listener that executes when specified conditions are met.
"""Creates a listener that executes when specified conditions are met.
This decorator sets up a method to execute in response to other method
executions in the flow. It supports both simple and complex triggering
conditions.
Parameters
----------
condition : Union[str, FlowCondition, Callable[..., Any]]
Specifies when the listener should execute. Can be:
- str: Name of a method that triggers this listener
- FlowCondition: Result from or_() or and_(), including nested conditions
- Callable[..., Any]: A method reference that triggers this listener
Args:
condition: Specifies when the listener should execute.
Returns
-------
Callable[[Callable[P, R]], ListenMethod[P, R]]
A decorator function that wraps the method as a listener
and preserves its signature.
Returns:
A decorator function that wraps the method as a flow listener and preserves its signature.
Raises
------
ValueError
If the condition format is invalid.
Raises:
ValueError: If the condition format is invalid.
Examples
--------
>>> @listen("process_data") # Listen to single method
>>> def handle_processed_data(self):
... pass
Examples:
>>> @listen("process_data")
>>> def handle_processed_data(self):
... pass
>>> @listen(or_("success", "failure")) # Listen to multiple methods
>>> def handle_completion(self):
... pass
>>> @listen("method_name")
>>> def handle_completion(self):
... pass
"""
def decorator(func: Callable[P, R]) -> ListenMethod[P, R]:
"""Decorator that wraps a function as a listener method.
Args:
func: The function to wrap as a listener method.
Returns:
A ListenMethod wrapper around the function.
"""
wrapper = ListenMethod(func)
if is_flow_method_name(condition):
wrapper.__trigger_methods__ = [condition]
wrapper.__condition_type__ = OR_CONDITION
elif is_flow_condition_dict(condition):
if "conditions" in condition:
wrapper.__trigger_condition__ = condition
@@ -247,7 +219,7 @@ def listen(
)
elif is_flow_method_callable(condition):
wrapper.__trigger_methods__ = [condition.__name__]
wrapper.__condition_type__ = OR_CONDITION
else:
raise ValueError(
"Condition must be a method, string, or a result of or_() or and_()"
@@ -260,54 +232,53 @@ def listen(
def router(
condition: str | FlowCondition | Callable[..., Any],
) -> Callable[[Callable[P, R]], RouterMethod[P, R]]:
"""
Creates a routing method that directs flow execution based on conditions.
"""Creates a routing method that directs flow execution based on conditions.
This decorator marks a method as a router, which can dynamically determine
the next steps in the flow based on its return value. Routers are triggered
by specified conditions and can return constants that determine which path
the flow should take.
Parameters
----------
condition : Union[str, FlowCondition, Callable[..., Any]]
Specifies when the router should execute. Can be:
- str: Name of a method that triggers this router
- FlowCondition: Result from or_() or and_(), including nested conditions
- Callable[..., Any]: A method reference that triggers this router
Args:
condition: Specifies when the router should execute. Can be:
- str: Name of a method that triggers this router
- FlowCondition: Result from or_() or and_(), including nested conditions
- Callable[..., Any]: A method reference that triggers this router
Returns
-------
Callable[[Callable[P, R]], RouterMethod[P, R]]
A decorator function that wraps the method as a router
and preserves its signature.
Returns:
A decorator function that wraps the method as a router and preserves its signature.
Raises
------
ValueError
If the condition format is invalid.
Raises:
ValueError: If the condition format is invalid.
Examples
--------
>>> @router("check_status")
>>> def route_based_on_status(self):
... if self.state.status == "success":
... return SUCCESS
... return FAILURE
Examples:
>>> @router("check_status")
>>> def route_based_on_status(self):
... if self.state.status == "success":
... return "SUCCESS"
... return "FAILURE"
>>> @router(and_("validate", "process"))
>>> def complex_routing(self):
... if all([self.state.valid, self.state.processed]):
... return CONTINUE
... return STOP
>>> @router(and_("validate", "process"))
>>> def complex_routing(self):
... if all([self.state.valid, self.state.processed]):
... return "CONTINUE"
... return "STOP"
"""
def decorator(func: Callable[P, R]) -> RouterMethod[P, R]:
"""Decorator that wraps a function as a router method.
Args:
func: The function to wrap as a router method.
Returns:
A RouterMethod wrapper around the function.
"""
wrapper = RouterMethod(func)
if is_flow_method_name(condition):
wrapper.__trigger_methods__ = [condition]
wrapper.__condition_type__ = OR_CONDITION
elif is_flow_condition_dict(condition):
if "conditions" in condition:
wrapper.__trigger_condition__ = condition
@@ -322,7 +293,7 @@ def router(
)
elif is_flow_method_callable(condition):
wrapper.__trigger_methods__ = [condition.__name__]
wrapper.__condition_type__ = OR_CONDITION
else:
raise ValueError(
"Condition must be a method, string, or a result of or_() or and_()"
@@ -333,42 +304,29 @@ def router(
def or_(*conditions: str | FlowCondition | Callable[..., Any]) -> FlowCondition:
"""
Combines multiple conditions with OR logic for flow control.
"""Combines multiple conditions with OR logic for flow control.
Creates a condition that is satisfied when any of the specified conditions
are met. This is used with @start, @listen, or @router decorators to create
complex triggering conditions.
Parameters
----------
*conditions : Union[str, dict[str, Any], Callable[..., Any]]
Variable number of conditions that can be:
- str: Method names
- dict[str, Any]: Existing condition dictionaries (nested conditions)
- Callable[..., Any]: Method references
Args:
conditions: Variable number of conditions that can be method names, existing condition dictionaries, or method references.
Returns
-------
dict[str, Any]
A condition dictionary with format:
{"type": "OR", "conditions": list_of_conditions}
where each condition can be a string (method name) or a nested dict
Returns:
A condition dictionary with format {"type": "OR", "conditions": list_of_conditions} where each condition can be a string (method name) or a nested dict
Raises
------
ValueError
If any condition is invalid.
Raises:
ValueError: If condition format is invalid.
Examples
--------
>>> @listen(or_("success", "timeout"))
>>> def handle_completion(self):
... pass
Examples:
>>> @listen(or_("success", "timeout"))
>>> def handle_completion(self):
... pass
>>> @listen(or_(and_("step1", "step2"), "step3"))
>>> def handle_nested(self):
... pass
>>> @listen(or_(and_("step1", "step2"), "step3"))
>>> def handle_nested(self):
... pass
"""
processed_conditions: FlowConditions = []
for condition in conditions:
@@ -378,46 +336,34 @@ def or_(*conditions: str | FlowCondition | Callable[..., Any]) -> FlowCondition:
processed_conditions.append(condition.__name__)
else:
raise ValueError("Invalid condition in or_()")
return {"type": "OR", "conditions": processed_conditions}
return {"type": OR_CONDITION, "conditions": processed_conditions}
def and_(*conditions: str | FlowCondition | Callable[..., Any]) -> FlowCondition:
"""
Combines multiple conditions with AND logic for flow control.
"""Combines multiple conditions with AND logic for flow control.
Creates a condition that is satisfied only when all specified conditions
are met. This is used with @start, @listen, or @router decorators to create
complex triggering conditions.
Parameters
----------
*conditions : Union[str, dict[str, Any], Callable[..., Any]]
Variable number of conditions that can be:
- str: Method names
- dict[str, Any]: Existing condition dictionaries (nested conditions)
- Callable[..., Any]: Method references
Args:
*conditions: Variable number of conditions that can be method names, existing condition dictionaries, or method references.
Returns
-------
dict[str, Any]
A condition dictionary with format:
{"type": "AND", "conditions": list_of_conditions}
Returns:
A condition dictionary with format {"type": "AND", "conditions": list_of_conditions}
where each condition can be a string (method name) or a nested dict
Raises
------
ValueError
If any condition is invalid.
Raises:
ValueError: If any condition is invalid.
Examples
--------
>>> @listen(and_("validated", "processed"))
>>> def handle_complete_data(self):
... pass
Examples:
>>> @listen(and_("validated", "processed"))
>>> def handle_complete_data(self):
... pass
>>> @listen(and_(or_("step1", "step2"), "step3"))
>>> def handle_nested(self):
... pass
>>> @listen(and_(or_("step1", "step2"), "step3"))
>>> def handle_nested(self):
... pass
"""
processed_conditions: FlowConditions = []
for condition in conditions:
@@ -427,59 +373,7 @@ def and_(*conditions: str | FlowCondition | Callable[..., Any]) -> FlowCondition
processed_conditions.append(condition.__name__)
else:
raise ValueError("Invalid condition in and_()")
return {"type": "AND", "conditions": processed_conditions}
def _normalize_condition(
condition: FlowConditions | FlowCondition | FlowMethodName,
) -> FlowCondition:
"""Normalize a condition to standard format with 'conditions' key.
Args:
condition: Can be a string (method name), dict (condition), or list
Returns:
Normalized dict with 'type' and 'conditions' keys
"""
if is_flow_method_name(condition):
return {"type": "OR", "conditions": [condition]}
if is_flow_condition_dict(condition):
if "conditions" in condition:
return condition
if "methods" in condition:
return {"type": condition["type"], "conditions": condition["methods"]}
return condition
if is_flow_condition_list(condition):
return {"type": "OR", "conditions": condition}
raise ValueError(f"Cannot normalize condition: {condition}")
def _extract_all_methods(
condition: str | FlowCondition | dict[str, Any] | list[Any],
) -> list[FlowMethodName]:
"""Extract all method names from a condition (including nested).
Args:
condition: Can be a string, dict, or list
Returns:
List of all method names in the condition tree
"""
if is_flow_method_name(condition):
return [condition]
if is_flow_condition_dict(condition):
normalized = _normalize_condition(condition)
methods = []
for sub_cond in normalized.get("conditions", []):
methods.extend(_extract_all_methods(sub_cond))
return methods
if isinstance(condition, list):
methods = []
for item in condition:
methods.extend(_extract_all_methods(item))
return methods
return []
return {"type": AND_CONDITION, "conditions": processed_conditions}
class FlowMeta(type):
@@ -515,7 +409,9 @@ class FlowMeta(type):
and attr_value.__trigger_methods__ is not None
):
methods = attr_value.__trigger_methods__
condition_type = getattr(
attr_value, "__condition_type__", OR_CONDITION
)
if (
hasattr(attr_value, "__trigger_condition__")
and attr_value.__trigger_condition__ is not None
@@ -532,6 +428,8 @@ class FlowMeta(type):
possible_returns = get_possible_return_constants(attr_value)
if possible_returns:
router_paths[attr_name] = possible_returns
else:
router_paths[attr_name] = []
cls._start_methods = start_methods # type: ignore[attr-defined]
cls._listeners = listeners # type: ignore[attr-defined]
@@ -556,7 +454,7 @@ class Flow(Generic[T], metaclass=FlowMeta):
name: str | None = None
tracing: bool | None = False
def __class_getitem__(cls: type[Flow[T]], item: type[T]) -> type[Flow[T]]:
class _FlowGeneric(cls): # type: ignore
_initial_state_t = item
@@ -596,7 +494,7 @@ class Flow(Generic[T], metaclass=FlowMeta):
or should_auto_collect_first_time_traces()
):
trace_listener = TraceCollectionListener()
trace_listener.setup_listeners(crewai_event_bus)
# Apply any additional kwargs
if kwargs:
self._initialize_state(kwargs)
@@ -702,7 +600,26 @@ class Flow(Generic[T], metaclass=FlowMeta):
)
def _copy_state(self) -> T:
"""Create a copy of the current state.

Returns:
    A copy of the current state.
"""
if isinstance(self._state, BaseModel):
try:
return self._state.model_copy(deep=True)
except (TypeError, AttributeError):
try:
state_dict = self._state.model_dump()
model_class = type(self._state)
return model_class(**state_dict)
except Exception:
return self._state.model_copy(deep=False)
else:
try:
return copy.deepcopy(self._state)
except (TypeError, AttributeError):
return cast(T, self._state.copy())
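# Illustrative note on the fallbacks above (hypothetical state, not from the
# source): a BaseModel field holding an unpickleable handle (e.g. a thread
# lock) makes model_copy(deep=True) raise, so the copy falls back to
# re-validating model_dump() output, and finally to a shallow model_copy().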
@property
def state(self) -> T:
@@ -1027,8 +944,8 @@ class Flow(Generic[T], metaclass=FlowMeta):
trace_listener = TraceCollectionListener()
if trace_listener.batch_manager.batch_owner_type == "flow":
if trace_listener.first_time_handler.is_first_time:
trace_listener.first_time_handler.mark_events_collected()
trace_listener.first_time_handler.handle_execution_completion()
else:
trace_listener.batch_manager.finalize_batch()
@@ -1037,24 +954,20 @@ class Flow(Generic[T], metaclass=FlowMeta):
detach(flow_token)
async def _execute_start_method(self, start_method_name: FlowMethodName) -> None:
"""
Executes a flow's start method and its triggered listeners.
"""Executes a flow's start method and its triggered listeners.
This internal method handles the execution of methods marked with @start
decorator and manages the subsequent chain of listener executions.
Parameters
----------
start_method_name : str
The name of the start method to execute.
Args:
start_method_name: The name of the start method to execute.
Notes
-----
- Executes the start method and captures its result
- Triggers execution of any listeners waiting on this start method
- Part of the flow's initialization sequence
- Skips execution if method was already completed (e.g., after reload)
- Automatically injects crewai_trigger_payload if available in flow inputs
Note:
- Executes the start method and captures its result
- Triggers execution of any listeners waiting on this start method
- Part of the flow's initialization sequence
- Skips execution if method was already completed (e.g., after reload)
- Automatically injects crewai_trigger_payload if available in flow inputs
"""
if start_method_name in self._completed_methods:
if self._is_execution_resuming:
@@ -1174,27 +1087,21 @@ class Flow(Generic[T], metaclass=FlowMeta):
async def _execute_listeners(
self, trigger_method: FlowMethodName, result: Any
) -> None:
"""
Executes all listeners and routers triggered by a method completion.
"""Executes all listeners and routers triggered by a method completion.
This internal method manages the execution flow by:
1. First executing all triggered routers sequentially
2. Then executing all triggered listeners in parallel
Parameters
----------
trigger_method : str
The name of the method that triggered these listeners.
result : Any
The result from the triggering method, passed to listeners
that accept parameters.
Args:
trigger_method: The name of the method that triggered these listeners.
result: The result from the triggering method, passed to listeners that accept parameters.
Notes
-----
- Routers are executed sequentially to maintain flow control
- Each router's result becomes a new trigger_method
- Normal listeners are executed in parallel for efficiency
- Listeners can receive the trigger method's result as a parameter
Note:
- Routers are executed sequentially to maintain flow control
- Each router's result becomes a new trigger_method
- Normal listeners are executed in parallel for efficiency
- Listeners can receive the trigger method's result as a parameter
"""
# First, handle routers repeatedly until no router triggers anymore
router_results = []
@@ -1281,16 +1188,16 @@ class Flow(Generic[T], metaclass=FlowMeta):
if is_flow_condition_dict(condition):
normalized = _normalize_condition(condition)
cond_type = normalized.get("type", "OR")
cond_type = normalized.get("type", OR_CONDITION)
sub_conditions = normalized.get("conditions", [])
if cond_type == "OR":
if cond_type == OR_CONDITION:
return any(
self._evaluate_condition(sub_cond, trigger_method, listener_name)
for sub_cond in sub_conditions
)
if cond_type == "AND":
if cond_type == AND_CONDITION:
pending_key = PendingListenerKey(f"{listener_name}:{id(condition)}")
if pending_key not in self._pending_and_listeners:
@@ -1300,7 +1207,20 @@ class Flow(Generic[T], metaclass=FlowMeta):
if trigger_method in self._pending_and_listeners[pending_key]:
self._pending_and_listeners[pending_key].discard(trigger_method)
direct_methods_satisfied = not self._pending_and_listeners[pending_key]
nested_conditions_satisfied = all(
(
self._evaluate_condition(
sub_cond, trigger_method, listener_name
)
if is_flow_condition_dict(sub_cond)
else True
)
for sub_cond in sub_conditions
)
if direct_methods_satisfied and nested_conditions_satisfied:
self._pending_and_listeners.pop(pending_key, None)
return True
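# Illustration (assumed semantics): for and_("a", or_("b", "c")), the pending
# set tracks the direct method "a", while the nested or_ branch is re-evaluated
# against the current trigger; the listener fires only when both are satisfied.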
@@ -1311,30 +1231,22 @@ class Flow(Generic[T], metaclass=FlowMeta):
def _find_triggered_methods(
self, trigger_method: FlowMethodName, router_only: bool
) -> list[FlowMethodName]:
"""
Finds all methods that should be triggered based on conditions.
"""Finds all methods that should be triggered based on conditions.
This internal method evaluates both OR and AND conditions to determine
which methods should be executed next in the flow. Supports nested conditions.
Parameters
----------
trigger_method : str
The name of the method that just completed execution.
router_only : bool
If True, only consider router methods.
If False, only consider non-router methods.
Args:
trigger_method: The name of the method that just completed execution.
router_only: If True, only consider router methods. If False, only consider non-router methods.
Returns
-------
list[str]
Returns:
Names of methods that should be triggered.
Notes
-----
- Handles both OR and AND conditions, including nested combinations
- Maintains state for AND conditions using _pending_and_listeners
- Separates router and normal listener evaluation
Note:
- Handles both OR and AND conditions, including nested combinations
- Maintains state for AND conditions using _pending_and_listeners
- Separates router and normal listener evaluation
"""
triggered: list[FlowMethodName] = []
@@ -1350,10 +1262,10 @@ class Flow(Generic[T], metaclass=FlowMeta):
if is_simple_flow_condition(condition_data):
condition_type, methods = condition_data
if condition_type == "OR":
if condition_type == OR_CONDITION:
if trigger_method in methods:
triggered.append(listener_name)
elif condition_type == "AND":
elif condition_type == AND_CONDITION:
pending_key = PendingListenerKey(listener_name)
if pending_key not in self._pending_and_listeners:
self._pending_and_listeners[pending_key] = set(methods)
@@ -1375,33 +1287,23 @@ class Flow(Generic[T], metaclass=FlowMeta):
async def _execute_single_listener(
self, listener_name: FlowMethodName, result: Any
) -> None:
"""
Executes a single listener method with proper event handling.
"""Executes a single listener method with proper event handling.
This internal method manages the execution of an individual listener,
including parameter inspection, event emission, and error handling.
Parameters
----------
listener_name : str
The name of the listener method to execute.
result : Any
The result from the triggering method, which may be passed
to the listener if it accepts parameters.
Args:
listener_name: The name of the listener method to execute.
result: The result from the triggering method, which may be passed to the listener if it accepts parameters.
Notes
-----
- Inspects method signature to determine if it accepts the trigger result
- Emits events for method execution start and finish
- Handles errors gracefully with detailed logging
- Recursively triggers listeners of this listener
- Supports both parameterized and parameter-less listeners
- Skips execution if method was already completed (e.g., after reload)
Error Handling
-------------
Catches and logs any exceptions during execution, preventing
individual listener failures from breaking the entire flow.
Note:
- Inspects method signature to determine if it accepts the trigger result
- Emits events for method execution start and finish
- Handles errors gracefully with detailed logging
- Recursively triggers listeners of this listener
- Supports both parameterized and parameter-less listeners
- Skips execution if method was already completed (e.g., after reload)
- Catches and logs any exceptions during execution, preventing individual listener failures from breaking the entire flow
"""
if listener_name in self._completed_methods:
if self._is_execution_resuming:
@@ -1460,7 +1362,16 @@ class Flow(Generic[T], metaclass=FlowMeta):
logger.warning(message)
def plot(self, filename: str = "crewai_flow") -> None:
def plot(self, filename: str = "crewai_flow.html", show: bool = True) -> str:
"""Create interactive HTML visualization of Flow structure.
Args:
filename: Output HTML filename (default: "crewai_flow.html").
show: Whether to open in browser (default: True).
Returns:
Absolute path to generated HTML file.
"""
crewai_event_bus.emit(
self,
FlowPlotEvent(
@@ -1468,4 +1379,5 @@ class Flow(Generic[T], metaclass=FlowMeta):
flow_name=self.name or self.__class__.__name__,
),
)
structure = build_flow_structure(self)
return render_interactive(structure, filename=filename, show=show)
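A minimal usage sketch of the new plot() API (MyFlow is a hypothetical subclass, shown only to illustrate the signature above):

from crewai.flow.flow import Flow, start

class MyFlow(Flow):
    @start()
    def begin(self):
        pass

path = MyFlow().plot(filename="my_flow.html", show=False)
print(path)  # absolute path to the generated HTML file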

View File

@@ -1,234 +0,0 @@
# flow_visualizer.py
from __future__ import annotations
import os
from typing import TYPE_CHECKING, Any
from pyvis.network import Network # type: ignore[import-untyped]
from crewai.flow.config import COLORS, NODE_STYLES, NodeStyles
from crewai.flow.html_template_handler import HTMLTemplateHandler
from crewai.flow.legend_generator import generate_legend_items_html, get_legend_items
from crewai.flow.path_utils import safe_path_join
from crewai.flow.utils import calculate_node_levels
from crewai.flow.visualization_utils import (
add_edges,
add_nodes_to_network,
compute_positions,
)
from crewai.utilities.printer import Printer
if TYPE_CHECKING:
from crewai.flow.flow import Flow
_printer = Printer()
class FlowPlot:
"""Handles the creation and rendering of flow visualization diagrams."""
def __init__(self, flow: Flow[Any]) -> None:
"""
Initialize FlowPlot with a flow object.
Parameters
----------
flow : Flow
A Flow instance to visualize.
Raises
------
ValueError
If flow object is invalid or missing required attributes.
"""
self.flow = flow
self.colors = COLORS
self.node_styles: NodeStyles = NODE_STYLES
def plot(self, filename: str) -> None:
"""
Generate and save an HTML visualization of the flow.
Parameters
----------
filename : str
Name of the output file (without extension).
Raises
------
ValueError
If filename is invalid or network generation fails.
IOError
If file operations fail or visualization cannot be generated.
RuntimeError
If network visualization generation fails.
"""
try:
# Initialize network
net = Network(directed=True, height="750px", bgcolor=self.colors["bg"])
# Set options to disable physics
net.set_options(
"""
var options = {
"nodes": {
"font": {
"multi": "html"
}
},
"physics": {
"enabled": false
}
}
"""
)
# Calculate levels for nodes
try:
node_levels = calculate_node_levels(self.flow)
except Exception as e:
raise ValueError(f"Failed to calculate node levels: {e!s}") from e
# Compute positions
try:
node_positions = compute_positions(self.flow, node_levels)
except Exception as e:
raise ValueError(f"Failed to compute node positions: {e!s}") from e
# Add nodes to the network
try:
add_nodes_to_network(net, self.flow, node_positions, self.node_styles)
except Exception as e:
raise RuntimeError(f"Failed to add nodes to network: {e!s}") from e
# Add edges to the network
try:
add_edges(net, self.flow, node_positions, self.colors)
except Exception as e:
raise RuntimeError(f"Failed to add edges to network: {e!s}") from e
# Generate HTML
try:
network_html = net.generate_html()
final_html_content = self._generate_final_html(network_html)
except Exception as e:
raise RuntimeError(
f"Failed to generate network visualization: {e!s}"
) from e
# Save the final HTML content to the file
try:
with open(f"{filename}.html", "w", encoding="utf-8") as f:
f.write(final_html_content)
_printer.print(f"Plot saved as {filename}.html", color="green")
except IOError as e:
raise IOError(
f"Failed to save flow visualization to {filename}.html: {e!s}"
) from e
except (ValueError, RuntimeError, IOError) as e:
raise e
except Exception as e:
raise RuntimeError(
f"Unexpected error during flow visualization: {e!s}"
) from e
finally:
self._cleanup_pyvis_lib(filename)
def _generate_final_html(self, network_html: str) -> str:
"""
Generate the final HTML content with network visualization and legend.
Parameters
----------
network_html : str
HTML content generated by pyvis Network.
Returns
-------
str
Complete HTML content with styling and legend.
Raises
------
IOError
If template or logo files cannot be accessed.
ValueError
If network_html is invalid.
"""
if not network_html:
raise ValueError("Invalid network HTML content")
try:
# Extract just the body content from the generated HTML
current_dir = os.path.dirname(__file__)
template_path = safe_path_join(
"assets", "crewai_flow_visual_template.html", root=current_dir
)
logo_path = safe_path_join("assets", "crewai_logo.svg", root=current_dir)
if not os.path.exists(template_path):
raise IOError(f"Template file not found: {template_path}")
if not os.path.exists(logo_path):
raise IOError(f"Logo file not found: {logo_path}")
html_handler = HTMLTemplateHandler(template_path, logo_path)
network_body = html_handler.extract_body_content(network_html)
# Generate the legend items HTML
legend_items = get_legend_items(self.colors)
legend_items_html = generate_legend_items_html(legend_items)
return html_handler.generate_final_html(network_body, legend_items_html)
except Exception as e:
raise IOError(f"Failed to generate visualization HTML: {e!s}") from e
@staticmethod
def _cleanup_pyvis_lib(filename: str) -> None:
"""
Clean up the generated lib folder from pyvis.
This method safely removes the temporary lib directory created by pyvis
during network visualization generation. The lib folder is created in the
same directory as the output HTML file.
Parameters
----------
filename : str
The output filename (without .html extension) used for the visualization.
"""
try:
import shutil
output_dir = os.path.dirname(os.path.abspath(filename)) or os.getcwd()
lib_folder = os.path.join(output_dir, "lib")
if os.path.exists(lib_folder) and os.path.isdir(lib_folder):
vis_js = os.path.join(lib_folder, "vis-network.min.js")
if os.path.exists(vis_js):
shutil.rmtree(lib_folder)
except Exception as e:
_printer.print(f"Error cleaning up lib folder: {e}", color="red")
def plot_flow(flow: Flow[Any], filename: str = "flow_plot") -> None:
"""
Convenience function to create and save a flow visualization.
Parameters
----------
flow : Flow
Flow instance to visualize.
filename : str, optional
Output filename without extension, by default "flow_plot".
Raises
------
ValueError
If flow object or filename is invalid.
IOError
If file operations fail.
"""
visualizer = FlowPlot(flow)
visualizer.plot(filename)

View File

@@ -5,7 +5,6 @@ from __future__ import annotations
from collections.abc import Callable, Sequence
import functools
import inspect
from typing import Any, Generic, Literal, ParamSpec, TypeAlias, TypeVar, TypedDict
from typing_extensions import Required, Self
@@ -17,8 +16,6 @@ P = ParamSpec("P")
R = TypeVar("R")
FlowConditionType: TypeAlias = Literal["OR", "AND"]
# Simple flow condition stored as tuple (condition_type, method_list)
SimpleFlowCondition: TypeAlias = tuple[FlowConditionType, list[FlowMethodName]]
@@ -26,6 +23,11 @@ class FlowCondition(TypedDict, total=False):
"""Type definition for flow trigger conditions.
This is a recursive structure where conditions can contain nested FlowConditions.
Attributes:
    type: The condition type ("OR" or "AND").
    conditions: A list of nested conditions or method names.
    methods: A list of method names (legacy key, normalized to conditions).
"""
type: Required[FlowConditionType]
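# Example shape (for illustration):
# {"type": "AND", "conditions": ["step_a", {"type": "OR", "conditions": ["step_b", "step_c"]}]}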
@@ -79,8 +81,7 @@ class FlowMethod(Generic[P, R]):
The result of calling the wrapped method.
"""
if self._instance is not None:
return self._meth(self._instance, *args, **kwargs)
return self._meth(*args, **kwargs)
def unwrap(self) -> Callable[P, R]:

View File

@@ -1,91 +0,0 @@
"""HTML template processing and generation for flow visualization diagrams."""
import base64
import re
from typing import Any
from crewai.flow.path_utils import validate_path_exists
class HTMLTemplateHandler:
"""Handles HTML template processing and generation for flow visualization diagrams."""
def __init__(self, template_path: str, logo_path: str) -> None:
"""
Initialize HTMLTemplateHandler with validated template and logo paths.
Parameters
----------
template_path : str
Path to the HTML template file.
logo_path : str
Path to the logo image file.
Raises
------
ValueError
If template or logo paths are invalid or files don't exist.
"""
try:
self.template_path = validate_path_exists(template_path, "file")
self.logo_path = validate_path_exists(logo_path, "file")
except ValueError as e:
raise ValueError(f"Invalid template or logo path: {e}") from e
def read_template(self) -> str:
"""Read and return the HTML template file contents."""
with open(self.template_path, "r", encoding="utf-8") as f:
return f.read()
def encode_logo(self) -> str:
"""Convert the logo SVG file to base64 encoded string."""
with open(self.logo_path, "rb") as logo_file:
logo_svg_data = logo_file.read()
return base64.b64encode(logo_svg_data).decode("utf-8")
def extract_body_content(self, html: str) -> str:
"""Extract and return content between body tags from HTML string."""
match = re.search("<body.*?>(.*?)</body>", html, re.DOTALL)
return match.group(1) if match else ""
def generate_legend_items_html(self, legend_items: list[dict[str, Any]]) -> str:
"""Generate HTML markup for the legend items."""
legend_items_html = ""
for item in legend_items:
if "border" in item:
legend_items_html += f"""
<div class="legend-item">
<div class="legend-color-box" style="background-color: {item["color"]}; border: 2px dashed {item["border"]};"></div>
<div>{item["label"]}</div>
</div>
"""
elif item.get("dashed") is not None:
style = "dashed" if item["dashed"] else "solid"
legend_items_html += f"""
<div class="legend-item">
<div class="legend-{style}" style="border-bottom: 2px {style} {item["color"]};"></div>
<div>{item["label"]}</div>
</div>
"""
else:
legend_items_html += f"""
<div class="legend-item">
<div class="legend-color-box" style="background-color: {item["color"]};"></div>
<div>{item["label"]}</div>
</div>
"""
return legend_items_html
def generate_final_html(
self, network_body: str, legend_items_html: str, title: str = "Flow Plot"
) -> str:
"""Combine all components into final HTML document with network visualization."""
html_template = self.read_template()
logo_svg_base64 = self.encode_logo()
return (
html_template.replace("{{ title }}", title)
.replace("{{ network_content }}", network_body)
.replace("{{ logo_svg_base64 }}", logo_svg_base64)
.replace("<!-- LEGEND_ITEMS_PLACEHOLDER -->", legend_items_html)
)

View File

@@ -1,84 +0,0 @@
"""Legend generation for flow visualization diagrams."""
from typing import Any
from crewai.flow.config import FlowColors
def get_legend_items(colors: FlowColors) -> list[dict[str, Any]]:
"""Generate legend items based on flow colors.
Parameters
----------
colors : FlowColors
Dictionary containing color definitions for flow elements.
Returns
-------
list[dict[str, Any]]
List of legend item dictionaries with labels and styling.
"""
return [
{"label": "Start Method", "color": colors["start"]},
{"label": "Method", "color": colors["method"]},
{
"label": "Crew Method",
"color": colors["bg"],
"border": colors["start"],
"dashed": False,
},
{
"label": "Router",
"color": colors["router"],
"border": colors["router_border"],
"dashed": True,
},
{"label": "Trigger", "color": colors["edge"], "dashed": False},
{"label": "AND Trigger", "color": colors["edge"], "dashed": True},
{
"label": "Router Trigger",
"color": colors["router_edge"],
"dashed": True,
},
]
def generate_legend_items_html(legend_items: list[dict[str, Any]]) -> str:
"""Generate HTML markup for legend items.
Parameters
----------
legend_items : list[dict[str, Any]]
List of legend item dictionaries containing labels and styling.
Returns
-------
str
HTML string containing formatted legend items.
"""
legend_items_html = ""
for item in legend_items:
if "border" in item:
style = "dashed" if item["dashed"] else "solid"
legend_items_html += f"""
<div class="legend-item">
<div class="legend-color-box" style="background-color: {item["color"]}; border: 2px {style} {item["border"]}; border-radius: 5px;"></div>
<div>{item["label"]}</div>
</div>
"""
elif item.get("dashed") is not None:
style = "dashed" if item["dashed"] else "solid"
legend_items_html += f"""
<div class="legend-item">
<div class="legend-{style}" style="border-bottom: 2px {style} {item["color"]}; border-radius: 5px;"></div>
<div>{item["label"]}</div>
</div>
"""
else:
legend_items_html += f"""
<div class="legend-item">
<div class="legend-color-box" style="background-color: {item["color"]}; border-radius: 5px;"></div>
<div>{item["label"]}</div>
</div>
"""
return legend_items_html

View File

@@ -1,133 +0,0 @@
"""
Path utilities for secure file operations in CrewAI flow module.
This module provides utilities for secure path handling to prevent directory
traversal attacks and ensure paths remain within allowed boundaries.
"""
from pathlib import Path
def safe_path_join(*parts: str, root: str | Path | None = None) -> str:
"""
Safely join path components and ensure the result is within allowed boundaries.
Parameters
----------
*parts : str
Variable number of path components to join.
root : Union[str, Path, None], optional
Root directory to use as base. If None, uses current working directory.
Returns
-------
str
String representation of the resolved path.
Raises
------
ValueError
If the resulting path would be outside the root directory
or if any path component is invalid.
"""
if not parts:
raise ValueError("No path components provided")
try:
# Convert all parts to strings and clean them
clean_parts = [str(part).strip() for part in parts if part]
if not clean_parts:
raise ValueError("No valid path components provided")
# Establish root directory
root_path = Path(root).resolve() if root else Path.cwd()
# Join and resolve the full path
full_path = Path(root_path, *clean_parts).resolve()
# Check if the resolved path is within root
if not str(full_path).startswith(str(root_path)):
raise ValueError(
f"Invalid path: Potential directory traversal. Path must be within {root_path}"
)
return str(full_path)
except Exception as e:
if isinstance(e, ValueError):
raise
raise ValueError(f"Invalid path components: {e!s}") from e
def validate_path_exists(path: str | Path, file_type: str = "file") -> str:
"""
Validate that a path exists and is of the expected type.
Parameters
----------
path : Union[str, Path]
Path to validate.
file_type : str, optional
Expected type ('file' or 'directory'), by default 'file'.
Returns
-------
str
Validated path as string.
Raises
------
ValueError
If path doesn't exist or is not of expected type.
"""
try:
path_obj = Path(path).resolve()
if not path_obj.exists():
raise ValueError(f"Path does not exist: {path}")
if file_type == "file" and not path_obj.is_file():
raise ValueError(f"Path is not a file: {path}")
if file_type == "directory" and not path_obj.is_dir():
raise ValueError(f"Path is not a directory: {path}")
return str(path_obj)
except Exception as e:
if isinstance(e, ValueError):
raise
raise ValueError(f"Invalid path: {e!s}") from e
def list_files(directory: str | Path, pattern: str = "*") -> list[str]:
"""
Safely list files in a directory matching a pattern.
Parameters
----------
directory : Union[str, Path]
Directory to search in.
pattern : str, optional
Glob pattern to match files against, by default "*".
Returns
-------
List[str]
List of matching file paths.
Raises
------
ValueError
If directory is invalid or inaccessible.
"""
try:
dir_path = Path(directory).resolve()
if not dir_path.is_dir():
raise ValueError(f"Not a directory: {directory}")
return [str(p) for p in dir_path.glob(pattern) if p.is_file()]
except Exception as e:
if isinstance(e, ValueError):
raise
raise ValueError(f"Error listing files: {e!s}") from e

View File

@@ -21,6 +21,7 @@ P = ParamSpec("P")
R = TypeVar("R", covariant=True)
FlowMethodName = NewType("FlowMethodName", str)
FlowRouteName = NewType("FlowRouteName", str)
PendingListenerKey = NewType(
"PendingListenerKey",
Annotated[str, "nested flow conditions use 'listener_name:object_id'"],
)

View File

@@ -13,14 +13,17 @@ Example
>>> ancestors = build_ancestor_dict(flow)
"""
from __future__ import annotations
import ast
from collections import defaultdict, deque
import inspect
import textwrap
from typing import TYPE_CHECKING, Any
from typing_extensions import TypeIs
from crewai.flow.constants import AND_CONDITION, OR_CONDITION
from crewai.flow.flow_wrappers import (
FlowCondition,
FlowConditions,
@@ -31,10 +34,29 @@ from crewai.flow.types import FlowMethodCallable, FlowMethodName
from crewai.utilities.printer import Printer
if TYPE_CHECKING:
from crewai.flow.flow import Flow
_printer = Printer()
def get_possible_return_constants(function: Any) -> list[str] | None:
"""Extract possible string return values from a function using AST parsing.
This function analyzes the source code of a router method to identify
all possible string values it might return. It handles:
- Direct string literals: return "value"
- Variable assignments: x = "value"; return x
- Dictionary lookups: d = {"k": "v"}; return d[key]
- Conditional returns: return "a" if cond else "b"
- State attributes: return self.state.attr (infers from class context)
Args:
function: The function to analyze.
Returns:
List of possible string return values, or None if analysis fails.
"""
try:
source = inspect.getsource(function)
except OSError:
@@ -74,11 +96,34 @@ def get_possible_return_constants(function: Any) -> list[str] | None:
_printer.print(f"Source code:\n{source}", color="yellow")
return None
return_values: set[str] = set()
dict_definitions: dict[str, list[str]] = {}
variable_values: dict[str, list[str]] = {}
state_attribute_values: dict[str, list[str]] = {}
def extract_string_constants(node: ast.expr) -> list[str]:
"""Recursively extract all string constants from an AST node."""
strings: list[str] = []
if isinstance(node, ast.Constant) and isinstance(node.value, str):
strings.append(node.value)
elif isinstance(node, ast.IfExp):
strings.extend(extract_string_constants(node.body))
strings.extend(extract_string_constants(node.orelse))
elif isinstance(node, ast.Call):
if (
isinstance(node.func, ast.Attribute)
and node.func.attr == "get"
and len(node.args) >= 2
):
default_arg = node.args[1]
if isinstance(default_arg, ast.Constant) and isinstance(
default_arg.value, str
):
strings.append(default_arg.value)
return strings
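# e.g. extract_string_constants on the AST of `"a" if flag else d.get(k, "b")`
# yields ["a", "b"] (the IfExp body plus the .get() default); non-constant
# branches are ignored.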
class VariableAssignmentVisitor(ast.NodeVisitor):
def visit_Assign(self, node: ast.Assign) -> None:
# Check if this assignment is assigning a dictionary literal to a variable
if isinstance(node.value, ast.Dict) and len(node.targets) == 1:
target = node.targets[0]
@@ -92,29 +137,142 @@ def get_possible_return_constants(function: Any) -> list[str] | None:
]
if dict_values:
dict_definitions[var_name] = dict_values
if len(node.targets) == 1:
target = node.targets[0]
var_name_alt: str | None = None
if isinstance(target, ast.Name):
var_name_alt = target.id
elif isinstance(target, ast.Attribute):
var_name_alt = f"{target.value.id if isinstance(target.value, ast.Name) else '_'}.{target.attr}"
if var_name_alt:
strings = extract_string_constants(node.value)
if strings:
variable_values[var_name_alt] = strings
self.generic_visit(node)
def get_attribute_chain(node: ast.expr) -> str | None:
"""Extract the full attribute chain from an AST node.
Examples:
self.state.run_type -> "self.state.run_type"
x.y.z -> "x.y.z"
simple_var -> "simple_var"
"""
if isinstance(node, ast.Name):
return node.id
if isinstance(node, ast.Attribute):
base = get_attribute_chain(node.value)
if base:
return f"{base}.{node.attr}"
return None
class ReturnVisitor(ast.NodeVisitor):
def visit_Return(self, node: ast.Return) -> None:
# Direct string return
if (
node.value
and isinstance(node.value, ast.Constant)
and isinstance(node.value.value, str)
):
return_values.add(node.value.value)
# Dictionary-based return, like return paths[result]
elif node.value and isinstance(node.value, ast.Subscript):
if isinstance(node.value.value, ast.Name):
var_name_dict = node.value.value.id
if var_name_dict in dict_definitions:
for v in dict_definitions[var_name_dict]:
return_values.add(v)
elif node.value:
var_name_ret = get_attribute_chain(node.value)
if var_name_ret and var_name_ret in variable_values:
for v in variable_values[var_name_ret]:
return_values.add(v)
elif var_name_ret and var_name_ret in state_attribute_values:
for v in state_attribute_values[var_name_ret]:
return_values.add(v)
self.generic_visit(node)
def visit_If(self, node: ast.If) -> None:
self.generic_visit(node)
# Try to get the class context to infer state attribute values
try:
if hasattr(function, "__self__"):
# Method is bound, get the class
class_obj = function.__self__.__class__
elif hasattr(function, "__qualname__") and "." in function.__qualname__:
# Method is unbound but we can try to get class from module
class_name = function.__qualname__.rsplit(".", 1)[0]
if hasattr(function, "__globals__"):
class_obj = function.__globals__.get(class_name)
else:
class_obj = None
else:
class_obj = None
if class_obj is not None:
try:
class_source = inspect.getsource(class_obj)
class_source = textwrap.dedent(class_source)
class_ast = ast.parse(class_source)
# Look for comparisons and assignments involving state attributes
class StateAttributeVisitor(ast.NodeVisitor):
def visit_Compare(self, node: ast.Compare) -> None:
"""Find comparisons like: self.state.attr == "value" """
left_attr = get_attribute_chain(node.left)
if left_attr:
for comparator in node.comparators:
if isinstance(comparator, ast.Constant) and isinstance(
comparator.value, str
):
if left_attr not in state_attribute_values:
state_attribute_values[left_attr] = []
if (
comparator.value
not in state_attribute_values[left_attr]
):
state_attribute_values[left_attr].append(
comparator.value
)
# Also check right side
for comparator in node.comparators:
right_attr = get_attribute_chain(comparator)
if (
right_attr
and isinstance(node.left, ast.Constant)
and isinstance(node.left.value, str)
):
if right_attr not in state_attribute_values:
state_attribute_values[right_attr] = []
if (
node.left.value
not in state_attribute_values[right_attr]
):
state_attribute_values[right_attr].append(
node.left.value
)
self.generic_visit(node)
StateAttributeVisitor().visit(class_ast)
except Exception as e:
_printer.print(
f"Could not analyze class context for {function.__name__}: {e}",
color="yellow",
)
except Exception as e:
_printer.print(
f"Could not introspect class for {function.__name__}: {e}",
color="yellow",
)
VariableAssignmentVisitor().visit(code_ast)
ReturnVisitor().visit(code_ast)
return list(return_values) if return_values else None
@@ -158,7 +316,15 @@ def calculate_node_levels(flow: Any) -> dict[str, int]:
# Precompute listener dependencies
or_listeners = defaultdict(list)
and_listeners = defaultdict(set)
for listener_name, condition_data in flow._listeners.items():
if isinstance(condition_data, tuple):
condition_type, trigger_methods = condition_data
elif isinstance(condition_data, dict):
trigger_methods = _extract_all_methods_recursive(condition_data, flow)
condition_type = condition_data.get("type", "OR")
else:
continue
if condition_type == "OR":
for method in trigger_methods:
or_listeners[method].append(listener_name)
@@ -192,9 +358,13 @@ def calculate_node_levels(flow: Any) -> dict[str, int]:
if listener_name not in visited:
queue.append(listener_name)
# Handle router connections
process_router_paths(flow, current, current_level, levels, queue)
max_level = max(levels.values()) if levels else 0
for method_name in flow._methods:
if method_name not in levels:
levels[method_name] = max_level + 1
return levels
@@ -215,8 +385,14 @@ def count_outgoing_edges(flow: Any) -> dict[str, int]:
counts = {}
for method_name in flow._methods:
counts[method_name] = 0
for condition_data in flow._listeners.values():
if isinstance(condition_data, tuple):
_, trigger_methods = condition_data
elif isinstance(condition_data, dict):
trigger_methods = _extract_all_methods_recursive(condition_data, flow)
else:
continue
for trigger in trigger_methods:
if trigger in flow._methods:
counts[trigger] += 1
@@ -271,21 +447,34 @@ def dfs_ancestors(
return
visited.add(node)
# Handle regular listeners
for listener_name, condition_data in flow._listeners.items():
if isinstance(condition_data, tuple):
_, trigger_methods = condition_data
elif isinstance(condition_data, dict):
trigger_methods = _extract_all_methods_recursive(condition_data, flow)
else:
continue
if node in trigger_methods:
ancestors[listener_name].add(node)
ancestors[listener_name].update(ancestors[node])
dfs_ancestors(listener_name, ancestors, visited, flow)
# Handle router methods separately
if node in flow._routers:
router_method_name = node
paths = flow._router_paths.get(router_method_name, [])
for path in paths:
for listener_name, condition_data in flow._listeners.items():
if isinstance(condition_data, tuple):
_, trigger_methods = condition_data
elif isinstance(condition_data, dict):
trigger_methods = _extract_all_methods_recursive(
condition_data, flow
)
else:
continue
if path in trigger_methods:
# Only propagate the ancestors of the router method, not the router method itself
ancestors[listener_name].update(ancestors[node])
dfs_ancestors(listener_name, ancestors, visited, flow)
@@ -335,19 +524,32 @@ def build_parent_children_dict(flow: Any) -> dict[str, list[str]]:
"""
parent_children: dict[str, list[str]] = {}
# Map listeners to their trigger methods
for listener_name, condition_data in flow._listeners.items():
if isinstance(condition_data, tuple):
_, trigger_methods = condition_data
elif isinstance(condition_data, dict):
trigger_methods = _extract_all_methods_recursive(condition_data, flow)
else:
continue
for trigger in trigger_methods:
if trigger not in parent_children:
parent_children[trigger] = []
if listener_name not in parent_children[trigger]:
parent_children[trigger].append(listener_name)
# Map router methods to their paths and to listeners
for router_method_name, paths in flow._router_paths.items():
for path in paths:
# Map router method to listeners of each path
for listener_name, condition_data in flow._listeners.items():
if isinstance(condition_data, tuple):
_, trigger_methods = condition_data
elif isinstance(condition_data, dict):
trigger_methods = _extract_all_methods_recursive(
condition_data, flow
)
else:
continue
if path in trigger_methods:
if router_method_name not in parent_children:
parent_children[router_method_name] = []
@@ -382,17 +584,27 @@ def get_child_index(
return children.index(child)
def process_router_paths(
flow: Any,
current: str,
current_level: int,
levels: dict[str, int],
queue: deque[str],
) -> None:
"""Handle the router connections for the current node."""
if current in flow._routers:
paths = flow._router_paths.get(current, [])
for path in paths:
for listener_name, condition_data in flow._listeners.items():
if isinstance(condition_data, tuple):
_condition_type, trigger_methods = condition_data
elif isinstance(condition_data, dict):
trigger_methods = _extract_all_methods_recursive(
condition_data, flow
)
else:
continue
if path in trigger_methods:
if (
listener_name not in levels
@@ -413,7 +625,7 @@ def is_flow_method_name(obj: Any) -> TypeIs[FlowMethodName]:
return isinstance(obj, str)
def is_flow_method_callable(obj: Any) -> TypeIs[FlowMethodCallable[..., Any]]:
"""Check if the object is a callable flow method.
Args:
@@ -517,3 +729,107 @@ def is_flow_condition_dict(obj: Any) -> TypeIs[FlowCondition]:
return False
return True
def _extract_all_methods_recursive(
condition: str | FlowCondition | dict[str, Any] | list[Any],
flow: Flow[Any] | None = None,
) -> list[FlowMethodName]:
"""Extract ALL method names from a condition tree recursively.
This function recursively extracts every method name from the entire
condition tree, regardless of nesting. Used for visualization and debugging.
Note: Only extracts actual method names, not router output strings.
If flow is provided, it will filter out strings that are not in flow._methods.
Args:
condition: Can be a string, dict, or list
flow: Optional flow instance to filter out non-method strings
Returns:
List of all method names found in the condition tree
"""
if is_flow_method_name(condition):
if flow is not None:
if condition in flow._methods:
return [condition]
return []
return [condition]
if is_flow_condition_dict(condition):
normalized = _normalize_condition(condition)
methods = []
for sub_cond in normalized.get("conditions", []):
methods.extend(_extract_all_methods_recursive(sub_cond, flow))
return methods
if isinstance(condition, list):
methods = []
for item in condition:
methods.extend(_extract_all_methods_recursive(item, flow))
return methods
return []
def _normalize_condition(
condition: FlowConditions | FlowCondition | FlowMethodName,
) -> FlowCondition:
"""Normalize a condition to standard format with 'conditions' key.
Args:
condition: Can be a string (method name), dict (condition), or list
Returns:
Normalized dict with 'type' and 'conditions' keys
"""
if is_flow_method_name(condition):
return {"type": OR_CONDITION, "conditions": [condition]}
if is_flow_condition_dict(condition):
if "conditions" in condition:
return condition
if "methods" in condition:
return {"type": condition["type"], "conditions": condition["methods"]}
return condition
if is_flow_condition_list(condition):
return {"type": OR_CONDITION, "conditions": condition}
raise ValueError(f"Cannot normalize condition: {condition}")
def _extract_all_methods(
condition: str | FlowCondition | dict[str, Any] | list[Any],
) -> list[FlowMethodName]:
"""Extract all method names from a condition (including nested).
For AND conditions, this extracts methods that must ALL complete.
For OR conditions nested inside AND, we don't extract their methods
since only one branch of the OR needs to trigger, not all methods.
This function is used for runtime execution logic, where we need to know
which methods must complete for AND conditions. For visualization purposes,
use _extract_all_methods_recursive() instead.
Args:
condition: Can be a string, dict, or list
Returns:
List of all method names in the condition tree that must complete
"""
if is_flow_method_name(condition):
return [condition]
if is_flow_condition_dict(condition):
normalized = _normalize_condition(condition)
cond_type = normalized.get("type", OR_CONDITION)
if cond_type == AND_CONDITION:
return [
sub_cond
for sub_cond in normalized.get("conditions", [])
if is_flow_method_name(sub_cond)
]
return []
if isinstance(condition, list):
methods = []
for item in condition:
methods.extend(_extract_all_methods(item))
return methods
return []
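A quick contrast of the two extractors, assuming the helpers above (illustrative only):

cond = {"type": "AND", "conditions": ["a", {"type": "OR", "conditions": ["b", "c"]}]}
_extract_all_methods(cond)            # -> ["a"] (nested OR methods are not all required)
_extract_all_methods_recursive(cond)  # -> ["a", "b", "c"] (everything, for visualization)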

View File

@@ -0,0 +1,21 @@
"""Flow structure visualization utilities."""
from crewai.flow.visualization.builder import (
build_flow_structure,
calculate_execution_paths,
)
from crewai.flow.visualization.renderers import render_interactive
from crewai.flow.visualization.types import FlowStructure, NodeMetadata, StructureEdge
visualize_flow_structure = render_interactive
__all__ = [
"FlowStructure",
"NodeMetadata",
"StructureEdge",
"build_flow_structure",
"calculate_execution_paths",
"render_interactive",
"visualize_flow_structure",
]

File diff suppressed because it is too large Load Diff

View File

@@ -0,0 +1,152 @@
<!DOCTYPE html>
<html lang="EN">
<head>
<title>CrewAI Flow Visualization</title>
<link rel="preconnect" href="https://fonts.googleapis.com">
<link rel="preconnect" href="https://fonts.gstatic.com" crossorigin>
<link href="https://fonts.googleapis.com/css2?family=Inter:wght@400;500;600;700&display=swap" rel="stylesheet">
<link rel="stylesheet" href="'{{ css_path }}'" />
<script src="https://unpkg.com/lucide@latest"></script>
<script src="https://cdnjs.cloudflare.com/ajax/libs/prism/1.29.0/prism.min.js"></script>
<script src="https://cdnjs.cloudflare.com/ajax/libs/prism/1.29.0/components/prism-python.min.js"></script>
<script src="'{{ js_path }}'"></script>
</head>
<body>
<!-- Drawer overlay -->
<div id="drawer-overlay"></div>
<!-- Highlight canvas for active nodes/edges above overlay -->
<canvas id="highlight-canvas"></canvas>
<!-- Side drawer -->
<div id="drawer" style="visibility: hidden;">
<div class="drawer-header">
<div class="drawer-title" id="drawer-node-name">Node Details</div>
<div style="display: flex; align-items: center;">
<button class="drawer-open-ide" id="drawer-open-ide" style="display: none;">
<i data-lucide="file-code" style="width: 16px; height: 16px;"></i>
Open in IDE
</button>
<button class="drawer-close" id="drawer-close">
<i data-lucide="x" style="width: 20px; height: 20px;"></i>
</button>
</div>
</div>
<div class="drawer-content" id="drawer-content"></div>
</div>
<div id="info">
<div style="text-align: center;">
<img src="https://cdn.prod.website-files.com/68de1ee6d7c127849807d7a6/68de1ee6d7c127849807d7ef_Logo.svg"
alt="CrewAI Logo"
style="width: 144px; height: auto;">
</div>
</div>
<!-- Custom navigation controls -->
<div class="nav-controls">
<div class="nav-button" id="theme-toggle" title="Toggle Dark Mode">
<i data-lucide="moon" style="width: 18px; height: 18px;"></i>
</div>
<div class="nav-button" id="zoom-in" title="Zoom In">
<i data-lucide="zoom-in" style="width: 18px; height: 18px;"></i>
</div>
<div class="nav-button" id="zoom-out" title="Zoom Out">
<i data-lucide="zoom-out" style="width: 18px; height: 18px;"></i>
</div>
<div class="nav-button" id="fit" title="Fit to Screen">
<i data-lucide="maximize-2" style="width: 18px; height: 18px;"></i>
</div>
<div class="nav-button" id="export-png" title="Export to PNG">
<i data-lucide="image" style="width: 18px; height: 18px;"></i>
</div>
<div class="nav-button" id="export-pdf" title="Export to PDF">
<i data-lucide="file-text" style="width: 18px; height: 18px;"></i>
</div>
<!-- <div class="nav-button" id="export-json" title="Export to JSON">
<i data-lucide="braces" style="width: 18px; height: 18px;"></i>
</div> -->
</div>
<div id="network-container">
<div id="network"></div>
</div>
<!-- Info panel at bottom -->
<div id="legend-panel">
<!-- Stats Section -->
<div class="legend-section">
<div class="legend-stats-row">
<div class="legend-stat-item">
<span class="stat-value">'{{ dag_nodes_count }}'</span>
<span class="stat-label">Nodes</span>
</div>
<div class="legend-stat-item">
<span class="stat-value">'{{ dag_edges_count }}'</span>
<span class="stat-label">Edges</span>
</div>
<div class="legend-stat-item">
<span class="stat-value">'{{ execution_paths }}'</span>
<span class="stat-label">Paths</span>
</div>
</div>
</div>
<!-- Node Types Section -->
<div class="legend-section">
<div class="legend-group">
<div class="legend-item-compact">
<div class="legend-color-small" style="background: var(--node-bg-start);"></div>
<span>Start</span>
</div>
<div class="legend-item-compact">
<div class="legend-color-small" style="background: var(--node-bg-router); border: 2px solid var(--node-border-start);"></div>
<span>Router</span>
</div>
<div class="legend-item-compact">
<div class="legend-color-small" style="background: var(--node-bg-listen); border: 2px solid var(--node-border-listen);"></div>
<span>Listen</span>
</div>
</div>
</div>
<!-- Edge Types Section -->
<div class="legend-section">
<div class="legend-group">
<div class="legend-item-compact">
<svg>
<line x1="0" y1="7" x2="29" y2="7" stroke="var(--edge-router-color)" stroke-width="2" stroke-dasharray="4,4"/>
</svg>
<span>Router</span>
</div>
<div class="legend-item-compact">
<svg class="legend-or-line">
<line x1="0" y1="7" x2="29" y2="7" stroke="var(--edge-or-color)" stroke-width="2"/>
</svg>
<span>OR</span>
</div>
<div class="legend-item-compact">
<svg>
<line x1="0" y1="7" x2="29" y2="7" stroke="var(--edge-router-color)" stroke-width="2"/>
</svg>
<span>AND</span>
</div>
</div>
</div>
<!-- IDE Selector Section -->
<div class="legend-section">
<div class="legend-ide-column">
<label class="legend-ide-label">IDE</label>
<select id="ide-selector" class="legend-ide-select">
<option value="auto">Auto-detect</option>
<option value="pycharm">PyCharm</option>
<option value="vscode">VS Code</option>
<option value="jetbrains">JetBrains</option>
</select>
</div>
</div>
</div>
</body>
</html>

File diff suppressed because it is too large

View File

@@ -0,0 +1,496 @@
"""Flow structure builder for analyzing Flow execution."""
from __future__ import annotations
from collections import defaultdict
from collections.abc import Iterable
import inspect
from typing import TYPE_CHECKING, Any
from crewai.flow.constants import AND_CONDITION, OR_CONDITION
from crewai.flow.flow_wrappers import FlowCondition
from crewai.flow.types import FlowMethodName, FlowRouteName
from crewai.flow.utils import (
is_flow_condition_dict,
is_simple_flow_condition,
)
from crewai.flow.visualization.schema import extract_method_signature
from crewai.flow.visualization.types import FlowStructure, NodeMetadata, StructureEdge
if TYPE_CHECKING:
from crewai.flow.flow import Flow
def _extract_direct_or_triggers(
condition: str | dict[str, Any] | list[Any] | FlowCondition,
) -> list[str]:
"""Extract direct OR-level trigger strings from a condition.
This function extracts strings that would directly trigger a listener,
meaning they appear at the top level of an OR condition. Strings nested
inside AND conditions are NOT considered direct triggers for router paths.
For example:
- or_("a", "b") -> ["a", "b"] (both are direct triggers)
- and_("a", "b") -> [] (neither are direct triggers, both required)
- or_(and_("a", "b"), "c") -> ["c"] (only "c" is a direct trigger)
Args:
condition: Can be a string, dict, or list.
Returns:
List of direct OR-level trigger strings.
"""
if isinstance(condition, str):
return [condition]
if isinstance(condition, dict):
cond_type = condition.get("type", OR_CONDITION)
conditions_list = condition.get("conditions", [])
if cond_type == OR_CONDITION:
strings = []
for sub_cond in conditions_list:
strings.extend(_extract_direct_or_triggers(sub_cond))
return strings
return []
if isinstance(condition, list):
strings = []
for item in condition:
strings.extend(_extract_direct_or_triggers(item))
return strings
if callable(condition) and hasattr(condition, "__name__"):
return [condition.__name__]
return []
def _extract_all_trigger_names(
condition: str | dict[str, Any] | list[Any] | FlowCondition,
) -> list[str]:
"""Extract ALL trigger names from a condition for display purposes.
Unlike _extract_direct_or_triggers, this extracts ALL strings and method
names from the entire condition tree, including those nested in AND conditions.
This is used for displaying trigger information in the UI.
For example:
- or_("a", "b") -> ["a", "b"]
- and_("a", "b") -> ["a", "b"]
- or_(and_("a", method_6), method_4) -> ["a", "method_6", "method_4"]
Args:
condition: Can be a string, dict, or list.
Returns:
List of all trigger names found in the condition.
"""
if isinstance(condition, str):
return [condition]
if isinstance(condition, dict):
conditions_list = condition.get("conditions", [])
strings = []
for sub_cond in conditions_list:
strings.extend(_extract_all_trigger_names(sub_cond))
return strings
if isinstance(condition, list):
strings = []
for item in condition:
strings.extend(_extract_all_trigger_names(item))
return strings
if callable(condition) and hasattr(condition, "__name__"):
return [condition.__name__]
return []
def _create_edges_from_condition(
condition: str | dict[str, Any] | list[Any] | FlowCondition,
target: str,
nodes: dict[str, NodeMetadata],
) -> list[StructureEdge]:
"""Create edges from a condition tree, preserving AND/OR semantics.
This function recursively processes the condition tree and creates edges
with the appropriate condition_type for each trigger.
For AND conditions, all triggers get edges with condition_type="AND".
For OR conditions, triggers get edges with condition_type="OR".
Args:
condition: The condition tree (string, dict, or list).
target: The target node name.
nodes: Dictionary of all nodes for validation.
Returns:
List of StructureEdge objects representing the condition.
"""
edges: list[StructureEdge] = []
if isinstance(condition, str):
if condition in nodes:
edges.append(
StructureEdge(
source=condition,
target=target,
condition_type=OR_CONDITION,
is_router_path=False,
)
)
elif callable(condition) and hasattr(condition, "__name__"):
method_name = condition.__name__
if method_name in nodes:
edges.append(
StructureEdge(
source=method_name,
target=target,
condition_type=OR_CONDITION,
is_router_path=False,
)
)
elif isinstance(condition, dict):
cond_type = condition.get("type", OR_CONDITION)
conditions_list = condition.get("conditions", [])
if cond_type == AND_CONDITION:
triggers = _extract_all_trigger_names(condition)
edges.extend(
StructureEdge(
source=trigger,
target=target,
condition_type=AND_CONDITION,
is_router_path=False,
)
for trigger in triggers
if trigger in nodes
)
else:
for sub_cond in conditions_list:
edges.extend(_create_edges_from_condition(sub_cond, target, nodes))
elif isinstance(condition, list):
for item in condition:
edges.extend(_create_edges_from_condition(item, target, nodes))
return edges
def build_flow_structure(flow: Flow[Any]) -> FlowStructure:
"""Build a structure representation of a Flow's execution.
Args:
flow: Flow instance to analyze.
Returns:
FlowStructure with nodes, edges, start_methods, and router_methods.
"""
nodes: dict[str, NodeMetadata] = {}
edges: list[StructureEdge] = []
start_methods: list[str] = []
router_methods: list[str] = []
for method_name, method in flow._methods.items():
node_metadata: NodeMetadata = {"type": "listen"}
if hasattr(method, "__is_start_method__") and method.__is_start_method__:
node_metadata["type"] = "start"
start_methods.append(method_name)
if hasattr(method, "__is_router__") and method.__is_router__:
node_metadata["is_router"] = True
node_metadata["type"] = "router"
router_methods.append(method_name)
if method_name in flow._router_paths:
node_metadata["router_paths"] = [
str(p) for p in flow._router_paths[method_name]
]
if hasattr(method, "__trigger_methods__") and method.__trigger_methods__:
node_metadata["trigger_methods"] = [
str(m) for m in method.__trigger_methods__
]
if hasattr(method, "__condition_type__") and method.__condition_type__:
node_metadata["trigger_condition_type"] = method.__condition_type__
if "condition_type" not in node_metadata:
node_metadata["condition_type"] = method.__condition_type__
if node_metadata.get("is_router") and "condition_type" not in node_metadata:
node_metadata["condition_type"] = "IF"
if (
hasattr(method, "__trigger_condition__")
and method.__trigger_condition__ is not None
):
node_metadata["trigger_condition"] = method.__trigger_condition__
if "trigger_methods" not in node_metadata:
extracted = _extract_all_trigger_names(method.__trigger_condition__)
if extracted:
node_metadata["trigger_methods"] = extracted
node_metadata["method_signature"] = extract_method_signature(
method, method_name
)
try:
source_code = inspect.getsource(method)
node_metadata["source_code"] = source_code
try:
source_lines, start_line = inspect.getsourcelines(method)
node_metadata["source_lines"] = source_lines
node_metadata["source_start_line"] = start_line
except (OSError, TypeError):
pass
try:
source_file = inspect.getsourcefile(method)
if source_file:
node_metadata["source_file"] = source_file
except (OSError, TypeError):
try:
class_file = inspect.getsourcefile(flow.__class__)
if class_file:
node_metadata["source_file"] = class_file
except (OSError, TypeError):
pass
except (OSError, TypeError):
pass
try:
class_obj = flow.__class__
if class_obj:
class_name = class_obj.__name__
bases = class_obj.__bases__
if bases:
base_strs = []
for base in bases:
if hasattr(base, "__name__"):
if hasattr(base, "__origin__"):
base_strs.append(str(base))
else:
base_strs.append(base.__name__)
else:
base_strs.append(str(base))
try:
source_lines = inspect.getsource(class_obj).split("\n")
_, class_start_line = inspect.getsourcelines(class_obj)
for idx, line in enumerate(source_lines):
stripped = line.strip()
if stripped.startswith("class ") and class_name in stripped:
class_signature = stripped.rstrip(":")
node_metadata["class_signature"] = class_signature
node_metadata["class_line_number"] = (
class_start_line + idx
)
break
except (OSError, TypeError):
class_signature = f"class {class_name}({', '.join(base_strs)})"
node_metadata["class_signature"] = class_signature
else:
class_signature = f"class {class_name}"
node_metadata["class_signature"] = class_signature
node_metadata["class_name"] = class_name
except (OSError, TypeError, AttributeError):
pass
nodes[method_name] = node_metadata
for listener_name, condition_data in flow._listeners.items():
if listener_name in router_methods:
continue
if is_simple_flow_condition(condition_data):
cond_type, methods = condition_data
edges.extend(
StructureEdge(
source=str(trigger_method),
target=str(listener_name),
condition_type=cond_type,
is_router_path=False,
)
for trigger_method in methods
if str(trigger_method) in nodes
)
elif is_flow_condition_dict(condition_data):
edges.extend(
_create_edges_from_condition(condition_data, str(listener_name), nodes)
)
for method_name, node_metadata in nodes.items(): # type: ignore[assignment]
if node_metadata.get("is_router") and "trigger_methods" in node_metadata:
trigger_methods = node_metadata["trigger_methods"]
condition_type = node_metadata.get("trigger_condition_type", OR_CONDITION)
if "trigger_condition" in node_metadata:
edges.extend(
_create_edges_from_condition(
node_metadata["trigger_condition"], # type: ignore[arg-type]
method_name,
nodes,
)
)
else:
edges.extend(
StructureEdge(
source=trigger_method,
target=method_name,
condition_type=condition_type,
is_router_path=False,
)
for trigger_method in trigger_methods
if trigger_method in nodes
)
for router_method_name in router_methods:
if router_method_name not in flow._router_paths:
flow._router_paths[FlowMethodName(router_method_name)] = []
inferred_paths: Iterable[FlowMethodName | FlowRouteName] = set(
flow._router_paths.get(FlowMethodName(router_method_name), [])
)
for condition_data in flow._listeners.values():
trigger_strings: list[str] = []
if is_simple_flow_condition(condition_data):
_, methods = condition_data
trigger_strings = [str(m) for m in methods]
elif is_flow_condition_dict(condition_data):
trigger_strings = _extract_direct_or_triggers(condition_data)
for trigger_str in trigger_strings:
if trigger_str not in nodes:
# This is likely a router path output
inferred_paths.add(trigger_str) # type: ignore[attr-defined]
if inferred_paths:
flow._router_paths[FlowMethodName(router_method_name)] = list(
inferred_paths # type: ignore[arg-type]
)
if router_method_name in nodes:
nodes[router_method_name]["router_paths"] = list(inferred_paths)
for router_method_name in router_methods:
if router_method_name not in flow._router_paths:
continue
router_paths = flow._router_paths[FlowMethodName(router_method_name)]
for path in router_paths:
for listener_name, condition_data in flow._listeners.items():
trigger_strings_from_cond: list[str] = []
if is_simple_flow_condition(condition_data):
_, methods = condition_data
trigger_strings_from_cond = [str(m) for m in methods]
elif is_flow_condition_dict(condition_data):
trigger_strings_from_cond = _extract_direct_or_triggers(
condition_data
)
if str(path) in trigger_strings_from_cond:
edges.append(
StructureEdge(
source=router_method_name,
target=str(listener_name),
condition_type=None,
is_router_path=True,
router_path_label=str(path),
)
)
for start_method in flow._start_methods:
if start_method not in nodes and start_method in flow._methods:
method = flow._methods[start_method]
nodes[str(start_method)] = NodeMetadata(type="start")
if hasattr(method, "__trigger_methods__") and method.__trigger_methods__:
nodes[str(start_method)]["trigger_methods"] = [
str(m) for m in method.__trigger_methods__
]
if hasattr(method, "__condition_type__") and method.__condition_type__:
nodes[str(start_method)]["condition_type"] = method.__condition_type__
return FlowStructure(
nodes=nodes,
edges=edges,
start_methods=start_methods,
router_methods=router_methods,
)
def calculate_execution_paths(structure: FlowStructure) -> int:
"""Calculate number of possible execution paths through the flow.
Args:
structure: FlowStructure to analyze.
Returns:
Number of possible execution paths.
"""
graph = defaultdict(list)
for edge in structure["edges"]:
graph[edge["source"]].append(
{
"target": edge["target"],
"is_router": edge["is_router_path"],
"condition": edge["condition_type"],
}
)
all_nodes = set(structure["nodes"].keys())
nodes_with_outgoing = set(edge["source"] for edge in structure["edges"])
terminal_nodes = all_nodes - nodes_with_outgoing
if not structure["start_methods"] or not terminal_nodes:
return 0
def count_paths_from(node: str, visited: set[str]) -> int:
"""Recursively count execution paths from a given node.
Args:
node: Node name to start counting from.
visited: Set of already visited nodes to prevent cycles.
Returns:
Number of execution paths from this node to terminal nodes.
"""
if node in terminal_nodes:
return 1
if node in visited:
return 0
visited.add(node)
outgoing = graph[node]
if not outgoing:
visited.remove(node)
return 1
if node in structure["router_methods"]:
total = 0
for edge_info in outgoing:
target = str(edge_info["target"])
total += count_paths_from(target, visited.copy())
visited.remove(node)
return total
total = 0
for edge_info in outgoing:
target = str(edge_info["target"])
total += count_paths_from(target, visited.copy())
visited.remove(node)
return total if total > 0 else 1
total_paths = 0
for start in structure["start_methods"]:
total_paths += count_paths_from(start, set())
return max(total_paths, 1)
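A small sketch of the condition dictionaries the helpers above walk, assuming AND_CONDITION and OR_CONDITION serialize to the strings "AND" and "OR" (consistent with the string comparisons in the renderer):

cond = {
    "type": "OR",
    "conditions": [
        {"type": "AND", "conditions": ["a", "b"]},  # AND-nested: not direct triggers
        "c",                                        # top-level OR string: direct trigger
    ],
}
# _extract_direct_or_triggers(cond) -> ["c"]
# _extract_all_trigger_names(cond)  -> ["a", "b", "c"]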

View File

@@ -0,0 +1,8 @@
"""Flow structure visualization renderers."""
from crewai.flow.visualization.renderers.interactive import render_interactive
__all__ = [
"render_interactive",
]

View File

@@ -0,0 +1,466 @@
"""Interactive HTML renderer for Flow structure visualization."""
import json
from pathlib import Path
import tempfile
from typing import Any, ClassVar
import webbrowser
from jinja2 import Environment, FileSystemLoader, nodes, select_autoescape
from jinja2.ext import Extension
from jinja2.parser import Parser
from crewai.flow.visualization.builder import calculate_execution_paths
from crewai.flow.visualization.types import FlowStructure
class CSSExtension(Extension):
"""Jinja2 extension for rendering CSS link tags.
Provides {% css 'path/to/file.css' %} tag syntax.
"""
tags: ClassVar[set[str]] = {"css"} # type: ignore[misc]
def parse(self, parser: Parser) -> nodes.Node:
"""Parse {% css 'styles.css' %} tag.
Args:
parser: Jinja2 parser instance.
Returns:
Output node with rendered CSS link tag.
"""
lineno: int = next(parser.stream).lineno
args: list[nodes.Expr] = [parser.parse_expression()]
return nodes.Output([self.call_method("_render_css", args)]).set_lineno(lineno)
def _render_css(self, href: str) -> str:
"""Render CSS link tag.
Args:
href: Path to CSS file.
Returns:
HTML link tag string.
"""
return f'<link rel="stylesheet" href="{href}">'
class JSExtension(Extension):
"""Jinja2 extension for rendering script tags.
Provides {% js 'path/to/file.js' %} tag syntax.
"""
tags: ClassVar[set[str]] = {"js"} # type: ignore[misc]
def parse(self, parser: Parser) -> nodes.Node:
"""Parse {% js 'script.js' %} tag.
Args:
parser: Jinja2 parser instance.
Returns:
Output node with rendered script tag.
"""
lineno: int = next(parser.stream).lineno
args: list[nodes.Expr] = [parser.parse_expression()]
return nodes.Output([self.call_method("_render_js", args)]).set_lineno(lineno)
def _render_js(self, src: str) -> str:
"""Render script tag.
Args:
src: Path to JavaScript file.
Returns:
HTML script tag string.
"""
return f'<script src="{src}"></script>'
CREWAI_ORANGE = "#FF5A50"
DARK_GRAY = "#333333"
WHITE = "#FFFFFF"
GRAY = "#666666"
BG_DARK = "#0d1117"
BG_CARD = "#161b22"
BORDER_SUBTLE = "#30363d"
TEXT_PRIMARY = "#e6edf3"
TEXT_SECONDARY = "#7d8590"
def calculate_node_positions(
dag: FlowStructure,
) -> dict[str, dict[str, int | float]]:
"""Calculate hierarchical positions (level, x, y) for each node.
Args:
dag: FlowStructure containing nodes and edges.
Returns:
Dictionary mapping node names to their position data (level, x, y).
"""
children: dict[str, list[str]] = {name: [] for name in dag["nodes"]}
parents: dict[str, list[str]] = {name: [] for name in dag["nodes"]}
for edge in dag["edges"]:
source = edge["source"]
target = edge["target"]
if source in children and target in children:
children[source].append(target)
parents[target].append(source)
levels: dict[str, int] = {}
queue: list[tuple[str, int]] = []
for start_method in dag["start_methods"]:
if start_method in dag["nodes"]:
levels[start_method] = 0
queue.append((start_method, 0))
visited: set[str] = set()
while queue:
node, level = queue.pop(0)
if node in visited:
continue
visited.add(node)
if node not in levels or levels[node] < level:
levels[node] = level
for child in children.get(node, []):
if child not in visited:
child_level = level + 1
if child not in levels or levels[child] < child_level:
levels[child] = child_level
queue.append((child, child_level))
for name in dag["nodes"]:
if name not in levels:
levels[name] = 0
nodes_by_level: dict[int, list[str]] = {}
for node, level in levels.items():
if level not in nodes_by_level:
nodes_by_level[level] = []
nodes_by_level[level].append(node)
positions: dict[str, dict[str, int | float]] = {}
level_separation = 300 # Vertical spacing between levels
node_spacing = 400 # Horizontal spacing between nodes
parent_count: dict[str, int] = {}
for node, parent_list in parents.items():
parent_count[node] = len(parent_list)
for level, nodes_at_level in sorted(nodes_by_level.items()):
y = level * level_separation
if level == 0:
num_nodes = len(nodes_at_level)
for i, node in enumerate(nodes_at_level):
x = (i - (num_nodes - 1) / 2) * node_spacing
positions[node] = {"level": level, "x": x, "y": y}
else:
for i, node in enumerate(nodes_at_level):
parent_list = parents.get(node, [])
parent_positions: list[float] = [
positions[parent]["x"]
for parent in parent_list
if parent in positions
]
if parent_positions:
if len(parent_positions) > 1 and len(set(parent_positions)) == 1:
base_x = parent_positions[0]
avg_x = base_x + node_spacing * 0.4
else:
avg_x = sum(parent_positions) / len(parent_positions)
else:
avg_x = i * node_spacing * 0.5
positions[node] = {"level": level, "x": avg_x, "y": y}
nodes_at_level_sorted = sorted(
nodes_at_level, key=lambda n: positions[n]["x"]
)
min_spacing = node_spacing * 0.6 # Minimum horizontal distance
for i in range(len(nodes_at_level_sorted) - 1):
current_node = nodes_at_level_sorted[i]
next_node = nodes_at_level_sorted[i + 1]
current_x = positions[current_node]["x"]
next_x = positions[next_node]["x"]
if next_x - current_x < min_spacing:
positions[next_node]["x"] = current_x + min_spacing
return positions
def render_interactive(
dag: FlowStructure,
filename: str = "flow_dag.html",
show: bool = True,
) -> str:
"""Create interactive HTML visualization of Flow structure.
Generates three output files in a temporary directory: the rendered HTML
page, a CSS stylesheet, and a JavaScript file. Optionally opens the
visualization in the default browser.
Args:
dag: FlowStructure to visualize.
filename: Output HTML filename (basename only, no path).
show: Whether to open in browser.
Returns:
Absolute path to generated HTML file in temporary directory.
"""
node_positions = calculate_node_positions(dag)
nodes_list: list[dict[str, Any]] = []
for name, metadata in dag["nodes"].items():
node_type: str = metadata.get("type", "listen")
color_config: dict[str, Any]
font_color: str
border_width: int
if node_type == "start":
color_config = {
"background": "var(--node-bg-start)",
"border": "var(--node-border-start)",
"highlight": {
"background": "var(--node-bg-start)",
"border": "var(--node-border-start)",
},
}
font_color = "var(--node-text-color)"
border_width = 3
elif node_type == "router":
color_config = {
"background": "var(--node-bg-router)",
"border": CREWAI_ORANGE,
"highlight": {
"background": "var(--node-bg-router)",
"border": CREWAI_ORANGE,
},
}
font_color = "var(--node-text-color)"
border_width = 3
else:
color_config = {
"background": "var(--node-bg-listen)",
"border": "var(--node-border-listen)",
"highlight": {
"background": "var(--node-bg-listen)",
"border": "var(--node-border-listen)",
},
}
font_color = "var(--node-text-color)"
border_width = 3
title_parts: list[str] = []
type_badge_bg: str = (
CREWAI_ORANGE if node_type in ["start", "router"] else DARK_GRAY
)
title_parts.append(f"""
<div style="border-bottom: 1px solid rgba(102,102,102,0.15); padding-bottom: 8px; margin-bottom: 10px;">
<div style="font-size: 13px; font-weight: 700; color: {DARK_GRAY}; margin-bottom: 6px;">{name}</div>
<span style="display: inline-block; background: {type_badge_bg}; color: white; padding: 2px 8px; border-radius: 4px; font-size: 10px; font-weight: 600; text-transform: uppercase; letter-spacing: 0.5px;">{node_type}</span>
</div>
""")
if metadata.get("condition_type"):
condition = metadata["condition_type"]
if condition == "AND":
condition_badge_bg = "rgba(255,90,80,0.12)"
condition_color = CREWAI_ORANGE
elif condition == "IF":
condition_badge_bg = "rgba(255,90,80,0.18)"
condition_color = CREWAI_ORANGE
else:
condition_badge_bg = "rgba(102,102,102,0.12)"
condition_color = GRAY
title_parts.append(f"""
<div style="margin-bottom: 8px;">
<div style="font-size: 10px; text-transform: uppercase; color: {GRAY}; letter-spacing: 0.5px; margin-bottom: 3px; font-weight: 600;">Condition</div>
<span style="display: inline-block; background: {condition_badge_bg}; color: {condition_color}; padding: 3px 8px; border-radius: 4px; font-size: 11px; font-weight: 700;">{condition}</span>
</div>
""")
if metadata.get("trigger_methods"):
triggers = metadata["trigger_methods"]
triggers_items = "".join(
[
f'<li style="margin: 3px 0;"><code style="background: rgba(102,102,102,0.08); padding: 2px 6px; border-radius: 3px; font-size: 10px; color: {DARK_GRAY}; border: 1px solid rgba(102,102,102,0.12);">{t}</code></li>'
for t in triggers
]
)
title_parts.append(f"""
<div style="margin-bottom: 8px;">
<div style="font-size: 10px; text-transform: uppercase; color: {GRAY}; letter-spacing: 0.5px; margin-bottom: 4px; font-weight: 600;">Triggers</div>
<ul style="list-style: none; padding: 0; margin: 0;">{triggers_items}</ul>
</div>
""")
if metadata.get("router_paths"):
paths = metadata["router_paths"]
paths_items = "".join(
[
f'<li style="margin: 3px 0;"><code style="background: rgba(255,90,80,0.08); padding: 2px 6px; border-radius: 3px; font-size: 10px; color: {CREWAI_ORANGE}; border: 1px solid rgba(255,90,80,0.2); font-weight: 600;">{p}</code></li>'
for p in paths
]
)
title_parts.append(f"""
<div>
<div style="font-size: 10px; text-transform: uppercase; color: {GRAY}; letter-spacing: 0.5px; margin-bottom: 4px; font-weight: 600;">Router Paths</div>
<ul style="list-style: none; padding: 0; margin: 0;">{paths_items}</ul>
</div>
""")
bg_color = color_config["background"]
border_color = color_config["border"]
position_data = node_positions.get(name, {"level": 0, "x": 0, "y": 0})
node_data: dict[str, Any] = {
"id": name,
"label": name,
"title": "".join(title_parts),
"shape": "custom",
"size": 30,
"level": position_data["level"],
"nodeStyle": {
"name": name,
"bgColor": bg_color,
"borderColor": border_color,
"borderWidth": border_width,
"fontColor": font_color,
},
"opacity": 1.0,
"glowSize": 0,
"glowColor": None,
}
# Add x,y only for graphs with 3-4 nodes
total_nodes = len(dag["nodes"])
if 3 <= total_nodes <= 4:
node_data["x"] = position_data["x"]
node_data["y"] = position_data["y"]
nodes_list.append(node_data)
execution_paths: int = calculate_execution_paths(dag)
edges_list: list[dict[str, Any]] = []
for edge in dag["edges"]:
edge_label: str = ""
edge_color: str = GRAY
edge_dashes: bool | list[int] = False
if edge["is_router_path"]:
edge_color = CREWAI_ORANGE
edge_dashes = [15, 10]
if "router_path_label" in edge:
edge_label = edge["router_path_label"]
elif edge["condition_type"] == "AND":
edge_label = "AND"
edge_color = CREWAI_ORANGE
elif edge["condition_type"] == "OR":
edge_label = "OR"
edge_color = GRAY
edge_data: dict[str, Any] = {
"from": edge["source"],
"to": edge["target"],
"label": edge_label,
"arrows": "to",
"width": 2,
"selectionWidth": 0,
"color": {
"color": edge_color,
"highlight": edge_color,
},
}
if edge_dashes is not False:
edge_data["dashes"] = edge_dashes
edges_list.append(edge_data)
template_dir = Path(__file__).parent.parent / "assets"
env = Environment(
loader=FileSystemLoader(template_dir),
autoescape=select_autoescape(["html", "xml", "css", "js"]),
variable_start_string="'{{",
variable_end_string="}}'",
extensions=[CSSExtension, JSExtension],
)
temp_dir = Path(tempfile.mkdtemp(prefix="crewai_flow_"))
output_path = temp_dir / Path(filename).name
css_filename = output_path.stem + "_style.css"
css_output_path = temp_dir / css_filename
js_filename = output_path.stem + "_script.js"
js_output_path = temp_dir / js_filename
css_file = template_dir / "style.css"
css_content = css_file.read_text(encoding="utf-8")
css_content = css_content.replace("'{{ WHITE }}'", WHITE)
css_content = css_content.replace("'{{ DARK_GRAY }}'", DARK_GRAY)
css_content = css_content.replace("'{{ GRAY }}'", GRAY)
css_content = css_content.replace("'{{ CREWAI_ORANGE }}'", CREWAI_ORANGE)
css_output_path.write_text(css_content, encoding="utf-8")
js_file = template_dir / "interactive.js"
js_content = js_file.read_text(encoding="utf-8")
dag_nodes_json = json.dumps(dag["nodes"])
dag_full_json = json.dumps(dag)
js_content = js_content.replace("{{ WHITE }}", WHITE)
js_content = js_content.replace("{{ DARK_GRAY }}", DARK_GRAY)
js_content = js_content.replace("{{ GRAY }}", GRAY)
js_content = js_content.replace("{{ CREWAI_ORANGE }}", CREWAI_ORANGE)
js_content = js_content.replace("'{{ nodeData }}'", dag_nodes_json)
js_content = js_content.replace("'{{ dagData }}'", dag_full_json)
js_content = js_content.replace("'{{ nodes_list_json }}'", json.dumps(nodes_list))
js_content = js_content.replace("'{{ edges_list_json }}'", json.dumps(edges_list))
js_output_path.write_text(js_content, encoding="utf-8")
template = env.get_template("interactive_flow.html.j2")
html_content = template.render(
CREWAI_ORANGE=CREWAI_ORANGE,
DARK_GRAY=DARK_GRAY,
WHITE=WHITE,
GRAY=GRAY,
BG_DARK=BG_DARK,
BG_CARD=BG_CARD,
BORDER_SUBTLE=BORDER_SUBTLE,
TEXT_PRIMARY=TEXT_PRIMARY,
TEXT_SECONDARY=TEXT_SECONDARY,
nodes_list_json=json.dumps(nodes_list),
edges_list_json=json.dumps(edges_list),
dag_nodes_count=len(dag["nodes"]),
dag_edges_count=len(dag["edges"]),
execution_paths=execution_paths,
css_path=css_filename,
js_path=js_filename,
)
output_path.write_text(html_content, encoding="utf-8")
if show:
webbrowser.open(f"file://{output_path.absolute()}")
return str(output_path.absolute())
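The non-default '{{ ... }}' delimiters configured above let literal JavaScript braces survive templating; a standalone sketch of the same Environment setting:

from jinja2 import Environment

env = Environment(variable_start_string="'{{", variable_end_string="}}'")
template = env.from_string("const count = '{{ n }}'; if (x) { return; }")
print(template.render(n=3))  # const count = 3; if (x) { return; }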

View File

@@ -0,0 +1,104 @@
"""OpenAPI schema conversion utilities for Flow methods."""
import inspect
from typing import Any, get_args, get_origin
def type_to_openapi_schema(type_hint: Any) -> dict[str, Any]:
"""Convert Python type hint to OpenAPI schema.
Args:
type_hint: Python type hint to convert.
Returns:
OpenAPI schema dictionary.
"""
if type_hint is inspect.Parameter.empty:
return {}
if type_hint is None or type_hint is type(None):
return {"type": "null"}
if hasattr(type_hint, "__module__") and hasattr(type_hint, "__name__"):
if type_hint.__module__ == "typing" and type_hint.__name__ == "Any":
return {}
type_str = str(type_hint)
if type_str == "typing.Any" or type_str == "<class 'typing.Any'>":
return {}
if isinstance(type_hint, str):
return {"type": type_hint}
origin = get_origin(type_hint)
args = get_args(type_hint)
if type_hint is str:
return {"type": "string"}
if type_hint is int:
return {"type": "integer"}
if type_hint is float:
return {"type": "number"}
if type_hint is bool:
return {"type": "boolean"}
if type_hint is dict or origin is dict:
if args and len(args) > 1:
return {
"type": "object",
"additionalProperties": type_to_openapi_schema(args[1]),
}
return {"type": "object"}
if type_hint is list or origin is list:
if args:
return {"type": "array", "items": type_to_openapi_schema(args[0])}
return {"type": "array"}
if hasattr(type_hint, "__name__"):
return {"type": "object", "className": type_hint.__name__}
return {}
def extract_method_signature(method: Any, method_name: str) -> dict[str, Any]:
"""Extract method signature as OpenAPI schema with documentation.
Args:
method: Method to analyze.
method_name: Method name.
Returns:
Dictionary with operationId, parameters, returns, summary, and description.
"""
try:
sig = inspect.signature(method)
parameters = {}
for param_name, param in sig.parameters.items():
if param_name == "self":
continue
parameters[param_name] = type_to_openapi_schema(param.annotation)
return_type = type_to_openapi_schema(sig.return_annotation)
docstring = inspect.getdoc(method)
result: dict[str, Any] = {
"operationId": method_name,
"parameters": parameters,
"returns": return_type,
}
if docstring:
lines = docstring.strip().split("\n")
summary = lines[0].strip()
if summary:
result["summary"] = summary
if len(lines) > 1:
description = "\n".join(line.strip() for line in lines[1:]).strip()
if description:
result["description"] = description
return result
except Exception:
return {"operationId": method_name, "parameters": {}, "returns": {}}

View File

@@ -0,0 +1,42 @@
"""Type definitions for Flow structure visualization."""
from typing import Any, TypedDict
class NodeMetadata(TypedDict, total=False):
"""Metadata for a single node in the flow structure."""
type: str
is_router: bool
router_paths: list[str]
condition_type: str | None
trigger_condition_type: str | None
trigger_methods: list[str]
trigger_condition: dict[str, Any] | None
method_signature: dict[str, Any]
source_code: str
source_lines: list[str]
source_start_line: int
source_file: str
class_signature: str
class_name: str
class_line_number: int
class StructureEdge(TypedDict, total=False):
"""Represents a connection in the flow structure."""
source: str
target: str
condition_type: str | None
is_router_path: bool
router_path_label: str
class FlowStructure(TypedDict):
"""Complete structure representation of a Flow."""
nodes: dict[str, NodeMetadata]
edges: list[StructureEdge]
start_methods: list[str]
router_methods: list[str]
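A hand-built instance for reference (a sketch; the values mirror what build_flow_structure produces for a two-step flow):

from crewai.flow.visualization.types import FlowStructure, NodeMetadata, StructureEdge

structure: FlowStructure = {
    "nodes": {
        "fetch": NodeMetadata(type="start"),
        "process": NodeMetadata(type="listen", trigger_methods=["fetch"]),
    },
    "edges": [
        StructureEdge(source="fetch", target="process", condition_type="OR", is_router_path=False),
    ],
    "start_methods": ["fetch"],
    "router_methods": [],
}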

View File

@@ -1,342 +0,0 @@
"""
Utilities for creating visual representations of flow structures.
This module provides functions for generating network visualizations of flows,
including node placement, edge creation, and visual styling. It handles the
conversion of flow structures into visual network graphs with appropriate
styling and layout.
Example
-------
>>> flow = Flow()
>>> net = Network(directed=True)
>>> node_positions = compute_positions(flow, node_levels)
>>> add_nodes_to_network(net, flow, node_positions, node_styles)
>>> add_edges(net, flow, node_positions, colors)
"""
import ast
import inspect
from typing import Any
from crewai.flow.config import (
CrewNodeStyle,
FlowColors,
MethodNodeStyle,
NodeStyles,
RouterNodeStyle,
StartNodeStyle,
)
from crewai.flow.utils import (
build_ancestor_dict,
build_parent_children_dict,
get_child_index,
is_ancestor,
)
from crewai.utilities.printer import Printer
_printer = Printer()
def method_calls_crew(method: Any) -> bool:
"""
Check if the method contains a call to `.crew()`, `.kickoff()`, or `.kickoff_async()`.
Parameters
----------
method : Any
The method to analyze for crew or agent execution calls.
Returns
-------
bool
True if the method calls .crew(), .kickoff(), or .kickoff_async(), False otherwise.
Notes
-----
Uses AST analysis to detect method calls, specifically looking for
attribute access of 'crew', 'kickoff', or 'kickoff_async'.
This includes both traditional Crew execution (.crew()) and Agent/LiteAgent
execution (.kickoff() or .kickoff_async()).
"""
try:
source = inspect.getsource(method)
source = inspect.cleandoc(source)
tree = ast.parse(source)
except Exception as e:
_printer.print(f"Could not parse method {method.__name__}: {e}", color="red")
return False
class CrewCallVisitor(ast.NodeVisitor):
"""AST visitor to detect .crew(), .kickoff(), or .kickoff_async() method calls."""
def __init__(self) -> None:
self.found = False
def visit_Call(self, node: ast.Call) -> None:
if isinstance(node.func, ast.Attribute):
if node.func.attr in ("crew", "kickoff", "kickoff_async"):
self.found = True
self.generic_visit(node)
visitor = CrewCallVisitor()
visitor.visit(tree)
return visitor.found
def add_nodes_to_network(
net: Any,
flow: Any,
node_positions: dict[str, tuple[float, float]],
node_styles: NodeStyles,
) -> None:
"""
Add nodes to the network visualization with appropriate styling.
Parameters
----------
net : Any
The pyvis Network instance to add nodes to.
flow : Any
The flow instance containing method information.
node_positions : Dict[str, Tuple[float, float]]
Dictionary mapping node names to their (x, y) positions.
node_styles : Dict[str, Dict[str, Any]]
Dictionary containing style configurations for different node types.
Notes
-----
Node types include:
- Start methods
- Router methods
- Crew methods
- Regular methods
"""
def human_friendly_label(method_name: str) -> str:
return method_name.replace("_", " ").title()
node_style: (
StartNodeStyle | RouterNodeStyle | CrewNodeStyle | MethodNodeStyle | None
)
for method_name, (x, y) in node_positions.items():
method = flow._methods.get(method_name)
if hasattr(method, "__is_start_method__"):
node_style = node_styles["start"]
elif hasattr(method, "__is_router__"):
node_style = node_styles["router"]
elif method_calls_crew(method):
node_style = node_styles["crew"]
else:
node_style = node_styles["method"]
node_style = node_style.copy()
label = human_friendly_label(method_name)
node_style.update(
{
"label": label,
"shape": "box",
"font": {
"multi": "html",
"color": node_style.get("font", {}).get("color", "#FFFFFF"),
},
}
)
net.add_node(
method_name,
x=x,
y=y,
fixed=True,
physics=False,
**node_style,
)
def compute_positions(
flow: Any,
node_levels: dict[str, int],
y_spacing: float = 150,
x_spacing: float = 300,
) -> dict[str, tuple[float, float]]:
"""
Compute the (x, y) positions for each node in the flow graph.
Parameters
----------
flow : Any
The flow instance to compute positions for.
node_levels : Dict[str, int]
Dictionary mapping node names to their hierarchical levels.
y_spacing : float, optional
Vertical spacing between levels, by default 150.
x_spacing : float, optional
Horizontal spacing between nodes, by default 300.
Returns
-------
Dict[str, Tuple[float, float]]
Dictionary mapping node names to their (x, y) coordinates.
"""
level_nodes: dict[int, list[str]] = {}
node_positions: dict[str, tuple[float, float]] = {}
for method_name, level in node_levels.items():
level_nodes.setdefault(level, []).append(method_name)
for level, nodes in level_nodes.items():
x_offset = -(len(nodes) - 1) * x_spacing / 2 # Center nodes horizontally
for i, method_name in enumerate(nodes):
x = x_offset + i * x_spacing
y = level * y_spacing
node_positions[method_name] = (x, y)
return node_positions
def add_edges(
net: Any,
flow: Any,
node_positions: dict[str, tuple[float, float]],
colors: FlowColors,
) -> None:
"""
Add edges to the network visualization with appropriate styling.
Parameters
----------
net : Any
The pyvis Network instance to add edges to.
flow : Any
The flow instance containing edge information.
node_positions : Dict[str, Tuple[float, float]]
Dictionary mapping node names to their positions.
colors : Dict[str, str]
Dictionary mapping edge types to their colors.
Notes
-----
- Handles both normal listener edges and router edges
- Applies appropriate styling (color, dashes) based on edge type
- Adds curvature to edges when needed (cycles or multiple children)
"""
edge_smooth: dict[str, str | float] = {"type": "continuous"}  # Default value
ancestors = build_ancestor_dict(flow)
parent_children = build_parent_children_dict(flow)
# Edges for normal listeners
for method_name in flow._listeners:
condition_type, trigger_methods = flow._listeners[method_name]
is_and_condition = condition_type == "AND"
for trigger in trigger_methods:
# Check if nodes exist before adding edges
if trigger in node_positions and method_name in node_positions:
is_router_edge = any(
trigger in paths for paths in flow._router_paths.values()
)
edge_color = colors["router_edge"] if is_router_edge else colors["edge"]
is_cycle_edge = is_ancestor(trigger, method_name, ancestors)
parent_has_multiple_children = len(parent_children.get(trigger, [])) > 1
needs_curvature = is_cycle_edge or parent_has_multiple_children
if needs_curvature:
source_pos = node_positions.get(trigger)
target_pos = node_positions.get(method_name)
if source_pos and target_pos:
dx = target_pos[0] - source_pos[0]
smooth_type = "curvedCCW" if dx <= 0 else "curvedCW"
index = get_child_index(trigger, method_name, parent_children)
edge_smooth = {
"type": smooth_type,
"roundness": 0.2 + (0.1 * index),
}
else:
edge_smooth = {"type": "cubicBezier"}
else:
edge_smooth.update({"type": "continuous"})
edge_style = {
"color": edge_color,
"width": 2,
"arrows": "to",
"dashes": True if is_router_edge or is_and_condition else False,
"smooth": edge_smooth,
}
net.add_edge(trigger, method_name, **edge_style)
else:
# Nodes not found in node_positions. Check if it's a known router outcome and a known method.
is_router_edge = any(
trigger in paths for paths in flow._router_paths.values()
)
# Check if method_name is a known method
method_known = method_name in flow._methods
# If it's a known router edge and the method is known, don't warn.
# This means the path is legitimate, just not reflected as nodes here.
if not (is_router_edge and method_known):
_printer.print(
f"Warning: No node found for '{trigger}' or '{method_name}'. Skipping edge.",
color="yellow",
)
# Edges for router return paths
for router_method_name, paths in flow._router_paths.items():
for path in paths:
for listener_name, (
_condition_type,
trigger_methods,
) in flow._listeners.items():
if path in trigger_methods:
if (
router_method_name in node_positions
and listener_name in node_positions
):
is_cycle_edge = is_ancestor(
router_method_name, listener_name, ancestors
)
parent_has_multiple_children = (
len(parent_children.get(router_method_name, [])) > 1
)
needs_curvature = is_cycle_edge or parent_has_multiple_children
if needs_curvature:
source_pos = node_positions.get(router_method_name)
target_pos = node_positions.get(listener_name)
if source_pos and target_pos:
dx = target_pos[0] - source_pos[0]
smooth_type = "curvedCCW" if dx <= 0 else "curvedCW"
index = get_child_index(
router_method_name, listener_name, parent_children
)
edge_smooth = {
"type": smooth_type,
"roundness": 0.2 + (0.1 * index),
}
else:
edge_smooth = {"type": "cubicBezier"}
else:
edge_smooth.update({"type": "continuous"})
edge_style = {
"color": colors["router_edge"],
"width": 2,
"arrows": "to",
"dashes": True,
"smooth": edge_smooth,
}
net.add_edge(router_method_name, listener_name, **edge_style)
else:
# Same check here: known router edge and known method?
method_known = listener_name in flow._methods
if not method_known:
_printer.print(
f"Warning: No node found for '{router_method_name}' or '{listener_name}'. Skipping edge.",
color="yellow",
)
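For reference, a condensed sketch of the AST check that method_calls_crew performed (the module is removed in this change; ast.walk stands in for the visitor class here):

import ast
import inspect

def step(self):
    return self.crew().kickoff()

# Parse the method source and look for .crew()/.kickoff()/.kickoff_async() calls.
tree = ast.parse(inspect.cleandoc(inspect.getsource(step)))
calls_crew = any(
    isinstance(node, ast.Call)
    and isinstance(node.func, ast.Attribute)
    and node.func.attr in ("crew", "kickoff", "kickoff_async")
    for node in ast.walk(tree)
)
print(calls_crew)  # True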

View File

@@ -51,7 +51,6 @@ class CrewDoclingSource(BaseKnowledgeSource):
chunks: list[str] = Field(default_factory=list)
safe_file_paths: list[Path | str] = Field(default_factory=list)
content: list[DoclingDocument] = Field(default_factory=list)
content_paths: list[Path | str] = Field(default_factory=list)
document_converter: DocumentConverter = Field(
default_factory=lambda: DocumentConverter(
allowed_formats=[
@@ -95,29 +94,14 @@ class CrewDoclingSource(BaseKnowledgeSource):
def add(self) -> None:
if self.content is None:
return
for doc_index, doc in enumerate(self.content):
filepath = self.content_paths[doc_index] if doc_index < len(self.content_paths) else "unknown"
chunk_index = 0
for chunk_text in self._chunk_doc(doc):
self.chunks.append({
"content": chunk_text,
"metadata": {
"filepath": str(filepath),
"chunk_index": chunk_index,
"source_type": "docling",
}
})
chunk_index += 1
for doc in self.content:
new_chunks_iterable = self._chunk_doc(doc)
self.chunks.extend(list(new_chunks_iterable))
self._save_documents()
def _convert_source_to_docling_documents(self) -> list[DoclingDocument]:
conv_results_iter = self.document_converter.convert_all(self.safe_file_paths)
documents = []
self.content_paths = []
for result in conv_results_iter:
documents.append(result.document)
self.content_paths.append(result.input.file)
return documents
return [result.document for result in conv_results_iter]
def _chunk_doc(self, doc: DoclingDocument) -> Iterator[str]:
chunker = HierarchicalChunker()

View File

@@ -21,20 +21,14 @@ class CSVKnowledgeSource(BaseFileKnowledgeSource):
def add(self) -> None:
"""
Add CSV file content to the knowledge source, chunk it with metadata,
Add CSV file content to the knowledge source, chunk it, compute embeddings,
and save the embeddings.
"""
for filepath, text in self.content.items():
text_chunks = self._chunk_text(text)
for chunk_index, chunk in enumerate(text_chunks):
self.chunks.append({
"content": chunk,
"metadata": {
"filepath": str(filepath),
"chunk_index": chunk_index,
"source_type": "csv",
}
})
content_str = (
str(self.content) if isinstance(self.content, dict) else self.content
)
new_chunks = self._chunk_text(content_str)
self.chunks.extend(new_chunks)
self._save_documents()
def _chunk_text(self, text: str) -> list[str]:

View File

@@ -142,34 +142,21 @@ class ExcelKnowledgeSource(BaseKnowledgeSource):
def add(self) -> None:
"""
Add Excel file content to the knowledge source, chunk it with metadata,
Add Excel file content to the knowledge source, chunk it, compute embeddings,
and save the embeddings.
"""
for filepath, sheets in self.content.items():
if isinstance(sheets, dict):
for sheet_name, sheet_content in sheets.items():
text_chunks = self._chunk_text(str(sheet_content))
for chunk_index, chunk in enumerate(text_chunks):
self.chunks.append({
"content": chunk,
"metadata": {
"filepath": str(filepath),
"sheet_name": sheet_name,
"chunk_index": chunk_index,
"source_type": "excel",
}
})
# Convert dictionary values to a single string if content is a dictionary
# Updated to account for .xlsx workbooks with multiple tabs/sheets
content_str = ""
for value in self.content.values():
if isinstance(value, dict):
for sheet_value in value.values():
content_str += str(sheet_value) + "\n"
else:
text_chunks = self._chunk_text(str(sheets))
for chunk_index, chunk in enumerate(text_chunks):
self.chunks.append({
"content": chunk,
"metadata": {
"filepath": str(filepath),
"chunk_index": chunk_index,
"source_type": "excel",
}
})
content_str += str(value) + "\n"
new_chunks = self._chunk_text(content_str)
self.chunks.extend(new_chunks)
self._save_documents()
def _chunk_text(self, text: str) -> list[str]:
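The reverted add() flattens every sheet into one string before chunking; a sketch with hypothetical content (content maps filepath -> {sheet name -> sheet text}):

content = {"report.xlsx": {"Q1": "alpha", "Q2": "beta"}}
content_str = ""
for value in content.values():
    if isinstance(value, dict):
        for sheet_value in value.values():
            content_str += str(sheet_value) + "\n"
    else:
        content_str += str(value) + "\n"
print(repr(content_str))  # 'alpha\nbeta\n'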

View File

@@ -34,20 +34,14 @@ class JSONKnowledgeSource(BaseFileKnowledgeSource):
def add(self) -> None:
"""
Add JSON file content to the knowledge source, chunk it with metadata,
Add JSON file content to the knowledge source, chunk it, compute embeddings,
and save the embeddings.
"""
for filepath, text in self.content.items():
text_chunks = self._chunk_text(text)
for chunk_index, chunk in enumerate(text_chunks):
self.chunks.append({
"content": chunk,
"metadata": {
"filepath": str(filepath),
"chunk_index": chunk_index,
"source_type": "json",
}
})
content_str = (
str(self.content) if isinstance(self.content, dict) else self.content
)
new_chunks = self._chunk_text(content_str)
self.chunks.extend(new_chunks)
self._save_documents()
def _chunk_text(self, text: str) -> list[str]:

View File

@@ -36,20 +36,12 @@ class PDFKnowledgeSource(BaseFileKnowledgeSource):
def add(self) -> None:
"""
Add PDF file content to the knowledge source, chunk it with metadata,
Add PDF file content to the knowledge source, chunk it, compute embeddings,
and save the embeddings.
"""
for filepath, text in self.content.items():
text_chunks = self._chunk_text(text)
for chunk_index, chunk in enumerate(text_chunks):
self.chunks.append({
"content": chunk,
"metadata": {
"filepath": str(filepath),
"chunk_index": chunk_index,
"source_type": "pdf",
}
})
for text in self.content.values():
new_chunks = self._chunk_text(text)
self.chunks.extend(new_chunks)
self._save_documents()
def _chunk_text(self, text: str) -> list[str]:

View File

@@ -19,16 +19,9 @@ class StringKnowledgeSource(BaseKnowledgeSource):
raise ValueError("StringKnowledgeSource only accepts string content")
def add(self) -> None:
"""Add string content to the knowledge source, chunk it with metadata, and save them."""
text_chunks = self._chunk_text(self.content)
for chunk_index, chunk in enumerate(text_chunks):
self.chunks.append({
"content": chunk,
"metadata": {
"chunk_index": chunk_index,
"source_type": "string",
}
})
"""Add string content to the knowledge source, chunk it, compute embeddings, and save them."""
new_chunks = self._chunk_text(self.content)
self.chunks.extend(new_chunks)
self._save_documents()
def _chunk_text(self, text: str) -> list[str]:

View File

@@ -17,20 +17,12 @@ class TextFileKnowledgeSource(BaseFileKnowledgeSource):
def add(self) -> None:
"""
Add text file content to the knowledge source, chunk it with metadata,
Add text file content to the knowledge source, chunk it, compute embeddings,
and save the embeddings.
"""
for filepath, text in self.content.items():
text_chunks = self._chunk_text(text)
for chunk_index, chunk in enumerate(text_chunks):
self.chunks.append({
"content": chunk,
"metadata": {
"filepath": str(filepath),
"chunk_index": chunk_index,
"source_type": "text_file",
}
})
for text in self.content.values():
new_chunks = self._chunk_text(text)
self.chunks.extend(new_chunks)
self._save_documents()
def _chunk_text(self, text: str) -> list[str]:

View File

@@ -1,4 +1,3 @@
from collections.abc import Mapping, Sequence
import logging
import traceback
from typing import Any, cast
@@ -17,72 +16,6 @@ from crewai.rag.types import BaseRecord, SearchResult
from crewai.utilities.logger import Logger
def _coerce_to_records(documents: Sequence[Any]) -> list[BaseRecord]:
"""Convert various document formats to BaseRecord format.
Supports:
- str: Simple string content
- dict: With 'content' key and optional 'metadata' and 'doc_id'
Args:
documents: Sequence of documents in various formats
Returns:
List of BaseRecord dictionaries with content and optional metadata
"""
records: list[BaseRecord] = []
for d in documents:
if isinstance(d, str):
records.append({"content": d})
elif isinstance(d, Mapping):
if "content" not in d:
continue
content = d["content"]
if content is None or (isinstance(content, str) and not content):
continue
content_str = str(content)
rec: BaseRecord = {"content": content_str}
if "metadata" in d:
metadata_raw = d["metadata"]
if isinstance(metadata_raw, Mapping):
sanitized_metadata: dict[str, str | int | float | bool] = {}
for k, v in metadata_raw.items():
if isinstance(v, (str, int, float, bool)):
sanitized_metadata[str(k)] = v
elif v is None:
sanitized_metadata[str(k)] = ""
else:
sanitized_metadata[str(k)] = str(v)
rec["metadata"] = sanitized_metadata
elif isinstance(metadata_raw, list):
sanitized_list: list[Mapping[str, str | int | float | bool]] = []
for item in metadata_raw:
if isinstance(item, Mapping):
sanitized_item: dict[str, str | int | float | bool] = {}
for k, v in item.items():
if isinstance(v, (str, int, float, bool)):
sanitized_item[str(k)] = v
elif v is None:
sanitized_item[str(k)] = ""
else:
sanitized_item[str(k)] = str(v)
sanitized_list.append(sanitized_item)
if sanitized_list:
rec["metadata"] = sanitized_list
if "doc_id" in d and isinstance(d["doc_id"], str):
rec["doc_id"] = d["doc_id"]
records.append(rec)
return records
class KnowledgeStorage(BaseKnowledgeStorage):
"""
Extends Storage to handle embeddings for memory entries, improving
@@ -165,7 +98,7 @@ class KnowledgeStorage(BaseKnowledgeStorage):
f"Error during knowledge reset: {e!s}\n{traceback.format_exc()}"
)
def save(self, documents: list[str] | list[dict[str, Any]]) -> None:
def save(self, documents: list[str]) -> None:
try:
client = self._get_client()
collection_name = (
@@ -175,7 +108,7 @@ class KnowledgeStorage(BaseKnowledgeStorage):
)
client.get_or_create_collection(collection_name=collection_name)
rag_documents: list[BaseRecord] = _coerce_to_records(documents)
rag_documents: list[BaseRecord] = [{"content": doc} for doc in documents]
client.add_documents(
collection_name=collection_name, documents=rag_documents
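With _coerce_to_records removed, save() accepts plain strings again; a sketch (storage is a hypothetical KnowledgeStorage instance):

docs = ["CrewAI supports flows.", "Knowledge is chunked before embedding."]
storage.save(docs)
# internally: rag_documents = [{"content": d} for d in docs]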

View File

@@ -1,6 +1,7 @@
import asyncio
from collections.abc import Callable
import inspect
import json
from typing import (
Any,
Literal,
@@ -58,10 +59,14 @@ from crewai.utilities.agent_utils import (
process_llm_response,
render_text_description_and_args,
)
from crewai.utilities.converter import generate_model_description
from crewai.utilities.converter import (
Converter,
ConverterError,
generate_model_description,
)
from crewai.utilities.guardrail import process_guardrail
from crewai.utilities.guardrail_types import GuardrailCallable, GuardrailType
from crewai.utilities.i18n import I18N
from crewai.utilities.i18n import I18N, get_i18n
from crewai.utilities.llm_utils import create_llm
from crewai.utilities.printer import Printer
from crewai.utilities.token_counter_callback import TokenCalcHandler
@@ -90,8 +95,6 @@ class LiteAgent(FlowTrackable, BaseModel):
"""
model_config = {"arbitrary_types_allowed": True}
# Core Agent Properties
id: UUID4 = Field(default_factory=uuid.uuid4, frozen=True)
role: str = Field(description="Role of the agent")
goal: str = Field(description="Goal of the agent")
@@ -102,8 +105,6 @@ class LiteAgent(FlowTrackable, BaseModel):
tools: list[BaseTool] = Field(
default_factory=list, description="Tools at agent's disposal"
)
# Execution Control Properties
max_iterations: int = Field(
default=15, description="Maximum number of iterations for tool usage"
)
@@ -120,24 +121,17 @@ class LiteAgent(FlowTrackable, BaseModel):
)
request_within_rpm_limit: Callable[[], bool] | None = Field(
default=None,
description="Callback to check if the request is within the RPM limit",
description="Callback to check if the request is within the RPM8 limit",
)
i18n: I18N = Field(
default_factory=I18N, description="Internationalization settings."
default_factory=get_i18n, description="Internationalization settings."
)
# Output and Formatting Properties
response_format: type[BaseModel] | None = Field(
default=None, description="Pydantic model for structured output"
)
verbose: bool = Field(
default=False, description="Whether to print execution details"
)
callbacks: list[Callable] = Field(
default_factory=list, description="Callbacks to be used for the agent"
)
# Guardrail Properties
guardrail: GuardrailType | None = Field(
default=None,
description="Function or string description of a guardrail to validate agent output",
@@ -145,17 +139,12 @@ class LiteAgent(FlowTrackable, BaseModel):
guardrail_max_retries: int = Field(
default=3, description="Maximum number of retries when guardrail fails"
)
# State and Results
tools_results: list[dict[str, Any]] = Field(
default_factory=list, description="Results of the tools used by the agent."
)
# Reference of Agent
original_agent: BaseAgent | None = Field(
default=None, description="Reference to the agent that created this LiteAgent"
)
# Private Attributes
_parsed_tools: list[CrewStructuredTool] = PrivateAttr(default_factory=list)
_token_process: TokenProcess = PrivateAttr(default_factory=TokenProcess)
_cache_handler: CacheHandler = PrivateAttr(default_factory=CacheHandler)
@@ -165,6 +154,7 @@ class LiteAgent(FlowTrackable, BaseModel):
_printer: Printer = PrivateAttr(default_factory=Printer)
_guardrail: GuardrailCallable | None = PrivateAttr(default=None)
_guardrail_retry_count: int = PrivateAttr(default=0)
_callbacks: list[TokenCalcHandler] = PrivateAttr(default_factory=list)
@model_validator(mode="after")
def setup_llm(self) -> Self:
@@ -174,15 +164,13 @@ class LiteAgent(FlowTrackable, BaseModel):
raise ValueError(
f"Expected LLM instance of type BaseLLM, got {type(self.llm).__name__}"
)
# Initialize callbacks
token_callback = TokenCalcHandler(token_cost_process=self._token_process)
self._callbacks = [token_callback]
return self
@model_validator(mode="after")
def parse_tools(self):
def parse_tools(self) -> Self:
"""Parse the tools and convert them to CrewStructuredTool instances."""
self._parsed_tools = parse_tools(self.tools)
@@ -201,7 +189,7 @@ class LiteAgent(FlowTrackable, BaseModel):
)
self._guardrail = cast(
GuardrailCallable,
LLMGuardrail(description=self.guardrail, llm=self.llm),
cast(object, LLMGuardrail(description=self.guardrail, llm=self.llm)),
)
return self
@@ -209,8 +197,8 @@ class LiteAgent(FlowTrackable, BaseModel):
@field_validator("guardrail", mode="before")
@classmethod
def validate_guardrail_function(
cls, v: Callable | str | None
) -> Callable | str | None:
cls, v: GuardrailCallable | str | None
) -> GuardrailCallable | str | None:
"""Validate that the guardrail function has the correct signature.
If v is a callable, validate that it has the correct signature.
@@ -258,7 +246,11 @@ class LiteAgent(FlowTrackable, BaseModel):
"""Return the original role for compatibility with tool interfaces."""
return self.role
def kickoff(self, messages: str | list[LLMMessage]) -> LiteAgentOutput:
def kickoff(
self,
messages: str | list[LLMMessage],
response_format: type[BaseModel] | None = None,
) -> LiteAgentOutput:
"""
Execute the agent with the given messages.
@@ -266,6 +258,8 @@ class LiteAgent(FlowTrackable, BaseModel):
messages: Either a string query or a list of message dictionaries.
If a string is provided, it will be converted to a user message.
If a list is provided, each dict should have 'role' and 'content' keys.
response_format: Optional Pydantic model for structured output. If provided,
overrides self.response_format for this execution.
Returns:
LiteAgentOutput: The result of the agent execution.
@@ -286,9 +280,13 @@ class LiteAgent(FlowTrackable, BaseModel):
self.tools_results = []
# Format messages for the LLM
self._messages = self._format_messages(messages)
self._messages = self._format_messages(
messages, response_format=response_format
)
return self._execute_core(agent_info=agent_info)
return self._execute_core(
agent_info=agent_info, response_format=response_format
)
except Exception as e:
self._printer.print(
@@ -306,7 +304,9 @@ class LiteAgent(FlowTrackable, BaseModel):
)
raise e
def _execute_core(self, agent_info: dict[str, Any]) -> LiteAgentOutput:
def _execute_core(
self, agent_info: dict[str, Any], response_format: type[BaseModel] | None = None
) -> LiteAgentOutput:
# Emit event for agent execution start
crewai_event_bus.emit(
self,
@@ -320,15 +320,29 @@ class LiteAgent(FlowTrackable, BaseModel):
# Execute the agent using invoke loop
agent_finish = self._invoke_loop()
formatted_result: BaseModel | None = None
if self.response_format:
active_response_format = response_format or self.response_format
if active_response_format:
try:
# Cast to BaseModel to ensure type safety
result = self.response_format.model_validate_json(agent_finish.output)
model_schema = generate_model_description(active_response_format)
schema = json.dumps(model_schema, indent=2)
instructions = self.i18n.slice("formatted_task_instructions").format(
output_format=schema
)
converter = Converter(
llm=self.llm,
text=agent_finish.output,
model=active_response_format,
instructions=instructions,
)
result = converter.to_pydantic()
if isinstance(result, BaseModel):
formatted_result = result
except Exception as e:
except ConverterError as e:
self._printer.print(
content=f"Failed to parse output into response format: {e!s}",
content=f"Failed to parse output into response format after retries: {e.message}",
color="yellow",
)
@@ -417,8 +431,14 @@ class LiteAgent(FlowTrackable, BaseModel):
"""
return await asyncio.to_thread(self.kickoff, messages)
def _get_default_system_prompt(self) -> str:
"""Get the default system prompt for the agent."""
def _get_default_system_prompt(
self, response_format: type[BaseModel] | None = None
) -> str:
"""Get the default system prompt for the agent.
Args:
response_format: Optional response format to use instead of self.response_format
"""
base_prompt = ""
if self._parsed_tools:
# Use the prompt template for agents with tools
@@ -439,21 +459,31 @@ class LiteAgent(FlowTrackable, BaseModel):
goal=self.goal,
)
# Add response format instructions if specified
if self.response_format:
schema = generate_model_description(self.response_format)
active_response_format = response_format or self.response_format
if active_response_format:
model_description = generate_model_description(active_response_format)
schema_json = json.dumps(model_description, indent=2)
base_prompt += self.i18n.slice("lite_agent_response_format").format(
response_format=schema
response_format=schema_json
)
return base_prompt
def _format_messages(self, messages: str | list[LLMMessage]) -> list[LLMMessage]:
"""Format messages for the LLM."""
def _format_messages(
self,
messages: str | list[LLMMessage],
response_format: type[BaseModel] | None = None,
) -> list[LLMMessage]:
"""Format messages for the LLM.
Args:
messages: Input messages to format
response_format: Optional response format to use instead of self.response_format
"""
if isinstance(messages, str):
messages = [{"role": "user", "content": messages}]
system_prompt = self._get_default_system_prompt()
system_prompt = self._get_default_system_prompt(response_format=response_format)
# Add system message at the beginning
formatted_messages: list[LLMMessage] = [
@@ -523,6 +553,10 @@ class LiteAgent(FlowTrackable, BaseModel):
self._append_message(formatted_answer.text, role="assistant")
except OutputParserError as e: # noqa: PERF203
self._printer.print(
content="Failed to parse LLM output. Retrying...",
color="yellow",
)
formatted_answer = handle_output_parser_exception(
e=e,
messages=self._messages,
@@ -559,7 +593,7 @@ class LiteAgent(FlowTrackable, BaseModel):
self._show_logs(formatted_answer)
return formatted_answer
def _show_logs(self, formatted_answer: AgentAction | AgentFinish):
def _show_logs(self, formatted_answer: AgentAction | AgentFinish) -> None:
"""Show logs for the agent's execution."""
crewai_event_bus.emit(
self,
@@ -574,4 +608,4 @@ class LiteAgent(FlowTrackable, BaseModel):
self, text: str, role: Literal["user", "assistant", "system"] = "assistant"
) -> None:
"""Append a message to the message list with the given role."""
self._messages.append(cast(LLMMessage, format_message_for_llm(text, role=role)))
self._messages.append(format_message_for_llm(text, role=role))

View File
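The LiteAgent changes above add a per-call structured-output override. A minimal usage sketch (import path and the pydantic attribute on LiteAgentOutput are assumed from context; field values are illustrative):

from pydantic import BaseModel

from crewai import LiteAgent  # import path assumed


class Answer(BaseModel):
    summary: str
    confidence: float


agent = LiteAgent(
    role="Analyst",
    goal="Summarize findings",
    backstory="Reads reports",
    llm="gpt-4o-mini",
)

# response_format overrides agent.response_format for this call only: the
# schema is injected into the system prompt, and if the raw output is not
# valid JSON it is re-parsed through Converter before giving up.
result = agent.kickoff("Summarize the Q3 report", response_format=Answer)
print(result.pydantic)  # Answer instance, or None if conversion failed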

@@ -20,7 +20,9 @@ from typing import (
)
from dotenv import load_dotenv
import httpx
from pydantic import BaseModel, Field
from typing_extensions import Self
from crewai.events.event_bus import crewai_event_bus
from crewai.events.types.llm_events import (
@@ -36,30 +38,35 @@ from crewai.events.types.tool_usage_events import (
ToolUsageStartedEvent,
)
from crewai.llms.base_llm import BaseLLM
from crewai.utilities import InternalInstructor
from crewai.utilities.exceptions.context_window_exceeding_exception import (
LLMContextLengthExceededError,
)
from crewai.utilities.logger_utils import suppress_warnings
from crewai.utilities.types import LLMMessage
if TYPE_CHECKING:
from litellm import Choices
from litellm.exceptions import ContextWindowExceededError
from litellm.litellm_core_utils.get_supported_openai_params import (
get_supported_openai_params,
)
from litellm.types.utils import ChatCompletionDeltaToolCall, ModelResponse
from litellm.types.utils import ChatCompletionDeltaToolCall, Choices, ModelResponse
from litellm.utils import supports_response_schema
from crewai.agent.core import Agent
from crewai.llms.hooks.base import BaseInterceptor
from crewai.task import Task
from crewai.tools.base_tool import BaseTool
from crewai.utilities.types import LLMMessage
try:
import litellm
from litellm import Choices, CustomLogger
from litellm.exceptions import ContextWindowExceededError
from litellm.integrations.custom_logger import CustomLogger
from litellm.litellm_core_utils.get_supported_openai_params import (
get_supported_openai_params,
)
from litellm.types.utils import ChatCompletionDeltaToolCall, ModelResponse
from litellm.types.utils import ChatCompletionDeltaToolCall, Choices, ModelResponse
from litellm.utils import supports_response_schema
LITELLM_AVAILABLE = True
@@ -72,6 +79,7 @@ except ImportError:
ChatCompletionDeltaToolCall = None # type: ignore
ModelResponse = None # type: ignore
supports_response_schema = None # type: ignore
CustomLogger = None # type: ignore
load_dotenv()
@@ -104,11 +112,13 @@ class FilteredStream(io.TextIOBase):
return self._original_stream.write(s)
def flush(self):
with self._lock:
return self._original_stream.flush()
def flush(self) -> None:
if self._lock:
with self._lock:
return self._original_stream.flush()
return None
def __getattr__(self, name):
def __getattr__(self, name: str) -> Any:
"""Delegate attribute access to the wrapped original stream.
This ensures compatibility with libraries (e.g., Rich) that rely on
@@ -122,16 +132,16 @@ class FilteredStream(io.TextIOBase):
# confuses Rich). These explicit pass-throughs ensure the wrapped Console
# still sees a fully-featured stream.
@property
def encoding(self):
def encoding(self) -> str | Any: # type: ignore[override]
return getattr(self._original_stream, "encoding", "utf-8")
def isatty(self):
def isatty(self) -> bool:
return self._original_stream.isatty()
def fileno(self):
def fileno(self) -> int:
return self._original_stream.fileno()
def writable(self):
def writable(self) -> bool:
return True
@@ -312,7 +322,7 @@ class AccumulatedToolArgs(BaseModel):
class LLM(BaseLLM):
completion_cost: float | None = None
def __new__(cls, model: str, is_litellm: bool = False, **kwargs) -> LLM:
def __new__(cls, model: str, is_litellm: bool = False, **kwargs: Any) -> LLM:
"""Factory method that routes to native SDK or falls back to LiteLLM."""
if not model or not isinstance(model, str):
raise ValueError("Model must be a non-empty string")
@@ -323,7 +333,11 @@ class LLM(BaseLLM):
if native_class and not is_litellm and provider in SUPPORTED_NATIVE_PROVIDERS:
try:
model_string = model.partition("/")[2] if "/" in model else model
return native_class(model=model_string, provider=provider, **kwargs)
return cast(
Self, native_class(model=model_string, provider=provider, **kwargs)
)
except NotImplementedError:
raise
except Exception as e:
raise ImportError(f"Error importing native provider: {e}") from e
@@ -393,13 +407,22 @@ class LLM(BaseLLM):
callbacks: list[Any] | None = None,
reasoning_effort: Literal["none", "low", "medium", "high"] | None = None,
stream: bool = False,
**kwargs,
):
interceptor: BaseInterceptor[httpx.Request, httpx.Response] | None = None,
**kwargs: Any,
) -> None:
"""Initialize LLM instance.
Note: This __init__ method is only called for fallback instances.
Native provider instances handle their own initialization in their respective classes.
"""
super().__init__(
model=model,
temperature=temperature,
api_key=api_key,
base_url=base_url,
timeout=timeout,
**kwargs,
)
self.model = model
self.timeout = timeout
self.temperature = temperature
@@ -424,6 +447,7 @@ class LLM(BaseLLM):
self.additional_params = kwargs
self.is_anthropic = self._is_anthropic_model(model)
self.stream = stream
self.interceptor = interceptor
litellm.drop_params = True
@@ -454,7 +478,7 @@ class LLM(BaseLLM):
def _prepare_completion_params(
self,
messages: str | list[LLMMessage],
tools: list[dict] | None = None,
tools: list[dict[str, BaseTool]] | None = None,
) -> dict[str, Any]:
"""Prepare parameters for the completion call.
@@ -505,9 +529,10 @@ class LLM(BaseLLM):
params: dict[str, Any],
callbacks: list[Any] | None = None,
available_functions: dict[str, Any] | None = None,
from_task: Any | None = None,
from_agent: Any | None = None,
) -> str:
from_task: Task | None = None,
from_agent: Agent | None = None,
response_model: type[BaseModel] | None = None,
) -> Any:
"""Handle a streaming response from the LLM.
Args:
@@ -516,6 +541,7 @@ class LLM(BaseLLM):
available_functions: Dict of available functions
from_task: Optional task object
from_agent: Optional agent object
response_model: Optional response model
Returns:
str: The complete response text
@@ -716,14 +742,30 @@ class LLM(BaseLLM):
tool_calls = message.tool_calls
except Exception as e:
logging.debug(f"Error checking for tool calls: {e}")
# --- 8) If no tool calls or no available functions, return the text response directly
if not tool_calls or not available_functions:
# Track token usage and log callbacks if available in streaming mode
if usage_info:
self._track_token_usage_internal(usage_info)
self._handle_streaming_callbacks(callbacks, usage_info, last_chunk)
# Emit completion event and return response
if response_model and self.is_litellm:
instructor_instance = InternalInstructor(
content=full_response,
model=response_model,
llm=self,
)
result = instructor_instance.to_pydantic()
structured_response = result.model_dump_json()
self._handle_emit_call_events(
response=structured_response,
call_type=LLMCallType.LLM_CALL,
from_task=from_task,
from_agent=from_agent,
messages=params["messages"],
)
return structured_response
self._handle_emit_call_events(
response=full_response,
call_type=LLMCallType.LLM_CALL,
@@ -784,9 +826,9 @@ class LLM(BaseLLM):
tool_calls: list[ChatCompletionDeltaToolCall],
accumulated_tool_args: defaultdict[int, AccumulatedToolArgs],
available_functions: dict[str, Any] | None = None,
from_task: Any | None = None,
from_agent: Any | None = None,
) -> None | str:
from_task: Task | None = None,
from_agent: Agent | None = None,
) -> Any:
for tool_call in tool_calls:
current_tool_accumulator = accumulated_tool_args[tool_call.index]
@@ -869,8 +911,9 @@ class LLM(BaseLLM):
params: dict[str, Any],
callbacks: list[Any] | None = None,
available_functions: dict[str, Any] | None = None,
from_task: Any | None = None,
from_agent: Any | None = None,
from_task: Task | None = None,
from_agent: Agent | None = None,
response_model: type[BaseModel] | None = None,
) -> str | Any:
"""Handle a non-streaming response from the LLM.
@@ -880,23 +923,69 @@ class LLM(BaseLLM):
available_functions: Dict of available functions
from_task: Optional Task that invoked the LLM
from_agent: Optional Agent that invoked the LLM
response_model: Optional Response model
Returns:
str: The response text
"""
# --- 1) Make the completion call
# --- 1) Handle response_model with InternalInstructor for LiteLLM
if response_model and self.is_litellm:
from crewai.utilities.internal_instructor import InternalInstructor
messages = params.get("messages", [])
if not messages:
raise ValueError("Messages are required when using response_model")
# Combine all message content for InternalInstructor
combined_content = "\n\n".join(
f"{msg['role'].upper()}: {msg['content']}" for msg in messages
)
instructor_instance = InternalInstructor(
content=combined_content,
model=response_model,
llm=self,
)
result = instructor_instance.to_pydantic()
structured_response = result.model_dump_json()
self._handle_emit_call_events(
response=structured_response,
call_type=LLMCallType.LLM_CALL,
from_task=from_task,
from_agent=from_agent,
messages=params["messages"],
)
return structured_response
try:
# Attempt to make the completion call, but catch context window errors
# and convert them to our own exception type for consistent handling
# across the codebase. This allows CrewAgentExecutor to handle context
# length issues appropriately.
if response_model:
params["response_model"] = response_model
response = litellm.completion(**params)
except ContextWindowExceededError as e:
# Convert litellm's context window error to our own exception type
# for consistent handling in the rest of the codebase
raise LLMContextLengthExceededError(str(e)) from e
# --- 2) Extract response message and content
# --- 2) Handle structured output response (when response_model is provided)
if response_model is not None:
# When using instructor/response_model, litellm returns a Pydantic model instance
if isinstance(response, BaseModel):
structured_response = response.model_dump_json()
self._handle_emit_call_events(
response=structured_response,
call_type=LLMCallType.LLM_CALL,
from_task=from_task,
from_agent=from_agent,
messages=params["messages"],
)
return structured_response
# --- 3) Extract response message and content (standard response)
response_message = cast(Choices, cast(ModelResponse, response).choices)[
0
].message
@@ -951,9 +1040,9 @@ class LLM(BaseLLM):
self,
tool_calls: list[Any],
available_functions: dict[str, Any] | None = None,
from_task: Any | None = None,
from_agent: Any | None = None,
) -> str | None:
from_task: Task | None = None,
from_agent: Agent | None = None,
) -> Any:
"""Handle a tool call from the LLM.
Args:
@@ -1039,11 +1128,12 @@ class LLM(BaseLLM):
def call(
self,
messages: str | list[LLMMessage],
tools: list[dict] | None = None,
tools: list[dict[str, BaseTool]] | None = None,
callbacks: list[Any] | None = None,
available_functions: dict[str, Any] | None = None,
from_task: Any | None = None,
from_agent: Any | None = None,
from_task: Task | None = None,
from_agent: Agent | None = None,
response_model: type[BaseModel] | None = None,
) -> str | Any:
"""High-level LLM call method.
@@ -1060,6 +1150,7 @@ class LLM(BaseLLM):
that can be invoked by the LLM.
from_task: Optional Task that invoked the LLM
from_agent: Optional Agent that invoked the LLM
response_model: Optional Model that contains a pydantic response model.
Returns:
Union[str, Any]: Either a text response from the LLM (str) or
@@ -1105,11 +1196,21 @@ class LLM(BaseLLM):
# --- 7) Make the completion call and handle response
if self.stream:
return self._handle_streaming_response(
params, callbacks, available_functions, from_task, from_agent
params=params,
callbacks=callbacks,
available_functions=available_functions,
from_task=from_task,
from_agent=from_agent,
response_model=response_model,
)
return self._handle_non_streaming_response(
params, callbacks, available_functions, from_task, from_agent
params=params,
callbacks=callbacks,
available_functions=available_functions,
from_task=from_task,
from_agent=from_agent,
response_model=response_model,
)
except LLMContextLengthExceededError:
# Re-raise LLMContextLengthExceededError as it should be handled
@@ -1141,6 +1242,7 @@ class LLM(BaseLLM):
available_functions=available_functions,
from_task=from_task,
from_agent=from_agent,
response_model=response_model,
)
crewai_event_bus.emit(
@@ -1155,10 +1257,10 @@ class LLM(BaseLLM):
self,
response: Any,
call_type: LLMCallType,
from_task: Any | None = None,
from_agent: Any | None = None,
messages: str | list[dict[str, Any]] | None = None,
):
from_task: Task | None = None,
from_agent: Agent | None = None,
messages: str | list[LLMMessage] | None = None,
) -> None:
"""Handle the events for the LLM call.
Args:
@@ -1324,7 +1426,7 @@ class LLM(BaseLLM):
return self.context_window_size
@staticmethod
def set_callbacks(callbacks: list[Any]):
def set_callbacks(callbacks: list[Any]) -> None:
"""
Attempt to keep a single set of callbacks in litellm by removing old
duplicates and adding new ones.
@@ -1377,7 +1479,7 @@ class LLM(BaseLLM):
litellm.success_callback = success_callbacks
litellm.failure_callback = failure_callbacks
def __copy__(self):
def __copy__(self) -> LLM:
"""Create a shallow copy of the LLM instance."""
# Filter out parameters that are already explicitly passed to avoid conflicts
filtered_params = {
@@ -1437,7 +1539,7 @@ class LLM(BaseLLM):
**filtered_params,
)
def __deepcopy__(self, memo):
def __deepcopy__(self, memo: dict[int, Any] | None) -> LLM:
"""Create a deep copy of the LLM instance."""
import copy

View File
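On the LiteLLM fallback path above, response_model now short-circuits both the streaming and non-streaming handlers into InternalInstructor and returns the validated model serialized as JSON. A hedged sketch of the call surface (model name illustrative; API key assumed in the environment):

from pydantic import BaseModel

from crewai.llm import LLM


class City(BaseModel):
    name: str
    population: int


llm = LLM(model="gpt-4o-mini")

# With response_model set, call() returns the structured model as a JSON
# string rather than free text, on both streaming and non-streaming paths.
raw = llm.call("Name one large city with its population.", response_model=City)
city = City.model_validate_json(raw)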

@@ -10,6 +10,7 @@ from abc import ABC, abstractmethod
from datetime import datetime
import json
import logging
import re
from typing import TYPE_CHECKING, Any, Final
from pydantic import BaseModel
@@ -31,11 +32,15 @@ from crewai.types.usage_metrics import UsageMetrics
if TYPE_CHECKING:
from crewai.agent.core import Agent
from crewai.task import Task
from crewai.tools.base_tool import BaseTool
from crewai.utilities.types import LLMMessage
DEFAULT_CONTEXT_WINDOW_SIZE: Final[int] = 4096
DEFAULT_SUPPORTS_STOP_WORDS: Final[bool] = True
_JSON_EXTRACTION_PATTERN: Final[re.Pattern[str]] = re.compile(r"\{.*}", re.DOTALL)
class BaseLLM(ABC):
@@ -65,9 +70,8 @@ class BaseLLM(ABC):
temperature: float | None = None,
api_key: str | None = None,
base_url: str | None = None,
timeout: float | None = None,
provider: str | None = None,
**kwargs,
**kwargs: Any,
) -> None:
"""Initialize the BaseLLM with default attributes.
@@ -93,8 +97,10 @@ class BaseLLM(ABC):
self.stop: list[str] = []
elif isinstance(stop, str):
self.stop = [stop]
else:
elif isinstance(stop, list):
self.stop = stop
else:
self.stop = []
self._token_usage = {
"total_tokens": 0,
@@ -118,11 +124,12 @@ class BaseLLM(ABC):
def call(
self,
messages: str | list[LLMMessage],
tools: list[dict] | None = None,
tools: list[dict[str, BaseTool]] | None = None,
callbacks: list[Any] | None = None,
available_functions: dict[str, Any] | None = None,
from_task: Any | None = None,
from_agent: Any | None = None,
from_task: Task | None = None,
from_agent: Agent | None = None,
response_model: type[BaseModel] | None = None,
) -> str | Any:
"""Call the LLM with the given messages.
@@ -139,6 +146,7 @@ class BaseLLM(ABC):
that can be invoked by the LLM.
from_task: Optional task caller to be used for the LLM call.
from_agent: Optional agent caller to be used for the LLM call.
response_model: Optional response model to be used for the LLM call.
Returns:
Either a text response from the LLM (str) or
@@ -150,7 +158,9 @@ class BaseLLM(ABC):
RuntimeError: If the LLM request fails for other reasons.
"""
def _convert_tools_for_interference(self, tools: list[dict]) -> list[dict]:
def _convert_tools_for_interference(
self, tools: list[dict[str, BaseTool]]
) -> list[dict[str, BaseTool]]:
"""Convert tools to a format that can be used for interference.
Args:
@@ -237,11 +247,11 @@ class BaseLLM(ABC):
def _emit_call_started_event(
self,
messages: str | list[LLMMessage],
tools: list[dict] | None = None,
tools: list[dict[str, BaseTool]] | None = None,
callbacks: list[Any] | None = None,
available_functions: dict[str, Any] | None = None,
from_task: Any | None = None,
from_agent: Any | None = None,
from_task: Task | None = None,
from_agent: Agent | None = None,
) -> None:
"""Emit LLM call started event."""
if not hasattr(crewai_event_bus, "emit"):
@@ -264,8 +274,8 @@ class BaseLLM(ABC):
self,
response: Any,
call_type: LLMCallType,
from_task: Any | None = None,
from_agent: Any | None = None,
from_task: Task | None = None,
from_agent: Agent | None = None,
messages: str | list[dict[str, Any]] | None = None,
) -> None:
"""Emit LLM call completed event."""
@@ -284,8 +294,8 @@ class BaseLLM(ABC):
def _emit_call_failed_event(
self,
error: str,
from_task: Any | None = None,
from_agent: Any | None = None,
from_task: Task | None = None,
from_agent: Agent | None = None,
) -> None:
"""Emit LLM call failed event."""
if not hasattr(crewai_event_bus, "emit"):
@@ -303,8 +313,8 @@ class BaseLLM(ABC):
def _emit_stream_chunk_event(
self,
chunk: str,
from_task: Any | None = None,
from_agent: Any | None = None,
from_task: Task | None = None,
from_agent: Agent | None = None,
tool_call: dict[str, Any] | None = None,
) -> None:
"""Emit stream chunk event."""
@@ -326,8 +336,8 @@ class BaseLLM(ABC):
function_name: str,
function_args: dict[str, Any],
available_functions: dict[str, Any],
from_task: Any | None = None,
from_agent: Any | None = None,
from_task: Task | None = None,
from_agent: Agent | None = None,
) -> str | None:
"""Handle tool execution with proper event emission.
@@ -443,10 +453,10 @@ class BaseLLM(ABC):
f"Message at index {i} must have 'role' and 'content' keys"
)
return messages # type: ignore[return-value]
return messages
@staticmethod
def _validate_structured_output(
self,
response: str,
response_format: type[BaseModel] | None,
) -> str | BaseModel:
@@ -471,10 +481,7 @@ class BaseLLM(ABC):
data = json.loads(response)
return response_format.model_validate(data)
# Try to extract JSON from response
import re
json_match = re.search(r"\{.*\}", response, re.DOTALL)
json_match = _JSON_EXTRACTION_PATTERN.search(response)
if json_match:
data = json.loads(json_match.group())
return response_format.model_validate(data)
@@ -487,7 +494,8 @@ class BaseLLM(ABC):
f"Failed to parse response into {response_format.__name__}: {e}"
) from e
def _extract_provider(self, model: str) -> str:
@staticmethod
def _extract_provider(model: str) -> str:
"""Extract provider from model string.
Args:

View File
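The precompiled _JSON_EXTRACTION_PATTERN replaces the inline re.search in _validate_structured_output. The fallback logic, restated as a standalone sketch (simplified; not the exact method body):

import json
import re
from typing import Final

from pydantic import BaseModel

_JSON_EXTRACTION_PATTERN: Final[re.Pattern[str]] = re.compile(r"\{.*}", re.DOTALL)


def extract_structured(response: str, response_format: type[BaseModel]) -> BaseModel:
    # First try the response as-is; fall back to the first brace-delimited
    # span, which tolerates prose wrapped around the JSON object.
    try:
        return response_format.model_validate(json.loads(response))
    except (json.JSONDecodeError, ValueError):
        match = _JSON_EXTRACTION_PATTERN.search(response)
        if match is None:
            raise ValueError("No JSON object found in response")
        return response_format.model_validate(json.loads(match.group()))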

@@ -0,0 +1,6 @@
"""Interceptor contracts for crewai"""
from crewai.llms.hooks.base import BaseInterceptor
__all__ = ["BaseInterceptor"]

View File
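The re-export keeps user code on the package root, independent of where the contract lives:

from crewai.llms.hooks import BaseInterceptor  # same object as crewai.llms.hooks.base.BaseInterceptor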

@@ -0,0 +1,133 @@
"""Base classes for LLM transport interceptors.
This module provides abstract base classes for intercepting and modifying
outbound and inbound messages at the transport level.
"""
from __future__ import annotations
from abc import ABC, abstractmethod
from typing import TYPE_CHECKING, Any, Generic, TypeVar
from pydantic_core import core_schema
if TYPE_CHECKING:
from pydantic import GetCoreSchemaHandler
from pydantic_core import CoreSchema
T = TypeVar("T")
U = TypeVar("U")
class BaseInterceptor(ABC, Generic[T, U]):
"""Abstract base class for intercepting transport-level messages.
Provides hooks to intercept and modify outbound and inbound messages
at the transport layer.
Type parameters:
T: Outbound message type (e.g., httpx.Request)
U: Inbound message type (e.g., httpx.Response)
Example:
>>> import httpx
>>> class CustomInterceptor(BaseInterceptor[httpx.Request, httpx.Response]):
... def on_outbound(self, message: httpx.Request) -> httpx.Request:
... message.headers["X-Custom-Header"] = "value"
... return message
...
... def on_inbound(self, message: httpx.Response) -> httpx.Response:
... print(f"Status: {message.status_code}")
... return message
"""
@abstractmethod
def on_outbound(self, message: T) -> T:
"""Intercept outbound message before sending.
Args:
message: Outbound message object.
Returns:
Modified message object.
"""
...
@abstractmethod
def on_inbound(self, message: U) -> U:
"""Intercept inbound message after receiving.
Args:
message: Inbound message object.
Returns:
Modified message object.
"""
...
async def aon_outbound(self, message: T) -> T:
"""Async version of on_outbound.
Args:
message: Outbound message object.
Returns:
Modified message object.
"""
raise NotImplementedError
async def aon_inbound(self, message: U) -> U:
"""Async version of on_inbound.
Args:
message: Inbound message object.
Returns:
Modified message object.
"""
raise NotImplementedError
@classmethod
def __get_pydantic_core_schema__(
cls, _source_type: Any, _handler: GetCoreSchemaHandler
) -> CoreSchema:
"""Generate Pydantic core schema for BaseInterceptor.
This allows the generic BaseInterceptor to be used in Pydantic models
without requiring arbitrary_types_allowed=True. The schema validates
that the value is an instance of BaseInterceptor.
Args:
_source_type: The source type being validated (unused).
_handler: Handler for generating schemas (unused).
Returns:
A Pydantic core schema that validates BaseInterceptor instances.
"""
return core_schema.no_info_plain_validator_function(
_validate_interceptor,
serialization=core_schema.plain_serializer_function_ser_schema(
lambda x: x, return_schema=core_schema.any_schema()
),
)
def _validate_interceptor(value: Any) -> BaseInterceptor[T, U]:
"""Validate that the value is a BaseInterceptor instance.
Args:
value: The value to validate.
Returns:
The validated BaseInterceptor instance.
Raises:
ValueError: If the value is not a BaseInterceptor instance.
"""
if not isinstance(value, BaseInterceptor):
raise ValueError(
f"Expected BaseInterceptor instance, got {type(value).__name__}"
)
return value

View File
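The __get_pydantic_core_schema__ dunder is what lets an interceptor sit directly on a Pydantic model field. A self-contained sketch (ClientConfig is hypothetical, for illustration only):

import httpx
from pydantic import BaseModel

from crewai.llms.hooks.base import BaseInterceptor


class HeaderInterceptor(BaseInterceptor[httpx.Request, httpx.Response]):
    def on_outbound(self, message: httpx.Request) -> httpx.Request:
        message.headers["X-Trace-Id"] = "demo"
        return message

    def on_inbound(self, message: httpx.Response) -> httpx.Response:
        return message


class ClientConfig(BaseModel):
    # Validated via the instance check in _validate_interceptor; no
    # arbitrary_types_allowed escape hatch required.
    interceptor: BaseInterceptor[httpx.Request, httpx.Response] | None = None


cfg = ClientConfig(interceptor=HeaderInterceptor())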

@@ -0,0 +1,123 @@
"""HTTP transport implementations for LLM request/response interception.
This module provides internal transport classes that integrate with BaseInterceptor
to enable request/response modification at the transport level.
"""
from __future__ import annotations
from collections.abc import Iterable
from typing import TYPE_CHECKING, TypedDict
from httpx import (
AsyncHTTPTransport as _AsyncHTTPTransport,
HTTPTransport as _HTTPTransport,
)
from typing_extensions import NotRequired, Unpack
if TYPE_CHECKING:
from ssl import SSLContext
from httpx import Limits, Request, Response
from httpx._types import CertTypes, ProxyTypes
from crewai.llms.hooks.base import BaseInterceptor
class HTTPTransportKwargs(TypedDict, total=False):
"""Typed dictionary for httpx.HTTPTransport initialization parameters.
These parameters configure the underlying HTTP transport behavior including
SSL verification, proxies, connection limits, and low-level socket options.
"""
verify: bool | str | SSLContext
cert: NotRequired[CertTypes]
trust_env: bool
http1: bool
http2: bool
limits: Limits
proxy: NotRequired[ProxyTypes]
uds: NotRequired[str]
local_address: NotRequired[str]
retries: int
socket_options: NotRequired[
Iterable[
tuple[int, int, int]
| tuple[int, int, bytes | bytearray]
| tuple[int, int, None, int]
]
]
class HTTPTransport(_HTTPTransport):
"""HTTP transport that uses an interceptor for request/response modification.
This transport is used internally when a user provides a BaseInterceptor.
Users should not instantiate this class directly - instead, pass an interceptor
to the LLM client and this transport will be created automatically.
"""
def __init__(
self,
interceptor: BaseInterceptor[Request, Response],
**kwargs: Unpack[HTTPTransportKwargs],
) -> None:
"""Initialize transport with interceptor.
Args:
interceptor: HTTP interceptor for modifying raw request/response objects.
**kwargs: HTTPTransport configuration parameters (verify, cert, proxy, etc.).
"""
super().__init__(**kwargs)
self.interceptor = interceptor
def handle_request(self, request: Request) -> Response:
"""Handle request with interception.
Args:
request: The HTTP request to handle.
Returns:
The HTTP response.
"""
request = self.interceptor.on_outbound(request)
response = super().handle_request(request)
return self.interceptor.on_inbound(response)
class AsyncHTTPTransport(_AsyncHTTPTransport):
"""Async HTTP transport that uses an interceptor for request/response modification.
This transport is used internally when a user provides a BaseInterceptor.
Users should not instantiate this class directly - instead, pass an interceptor
to the LLM client and this transport will be created automatically.
"""
def __init__(
self,
interceptor: BaseInterceptor[Request, Response],
**kwargs: Unpack[HTTPTransportKwargs],
) -> None:
"""Initialize async transport with interceptor.
Args:
interceptor: HTTP interceptor for modifying raw request/response objects.
**kwargs: HTTPTransport configuration parameters (verify, cert, proxy, etc.).
"""
super().__init__(**kwargs)
self.interceptor = interceptor
async def handle_async_request(self, request: Request) -> Response:
"""Handle async request with interception.
Args:
request: The HTTP request to handle.
Returns:
The HTTP response.
"""
request = await self.interceptor.aon_outbound(request)
response = await super().handle_async_request(request)
return await self.interceptor.aon_inbound(response)

View File
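Both transports simply wrap the parent's handle method between on_outbound and on_inbound. A sketch against a bare httpx.Client (the LLM clients construct this wiring automatically when an interceptor is passed):

import httpx

from crewai.llms.hooks.base import BaseInterceptor
from crewai.llms.hooks.transport import HTTPTransport


class LoggingInterceptor(BaseInterceptor[httpx.Request, httpx.Response]):
    def on_outbound(self, message: httpx.Request) -> httpx.Request:
        print(f"-> {message.method} {message.url}")
        return message

    def on_inbound(self, message: httpx.Response) -> httpx.Response:
        print(f"<- {message.status_code}")
        return message


client = httpx.Client(transport=HTTPTransport(interceptor=LoggingInterceptor()))
response = client.get("https://example.com")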

@@ -1,9 +1,15 @@
from __future__ import annotations
import json
import logging
import os
from typing import Any, cast
from typing import TYPE_CHECKING, Any, cast
from pydantic import BaseModel
from crewai.events.types.llm_events import LLMCallType
from crewai.llms.base_llm import BaseLLM
from crewai.llms.hooks.transport import HTTPTransport
from crewai.utilities.agent_utils import is_context_length_exceeded
from crewai.utilities.exceptions.context_window_exceeding_exception import (
LLMContextLengthExceededError,
@@ -11,10 +17,14 @@ from crewai.utilities.exceptions.context_window_exceeding_exception import (
from crewai.utilities.types import LLMMessage
if TYPE_CHECKING:
from crewai.llms.hooks.base import BaseInterceptor
try:
from anthropic import Anthropic
from anthropic.types import Message
from anthropic.types.tool_use_block import ToolUseBlock
import httpx
except ImportError:
raise ImportError(
'Anthropic native provider not available, to install: uv add "crewai[anthropic]"'
@@ -41,7 +51,8 @@ class AnthropicCompletion(BaseLLM):
stop_sequences: list[str] | None = None,
stream: bool = False,
client_params: dict[str, Any] | None = None,
**kwargs,
interceptor: BaseInterceptor[httpx.Request, httpx.Response] | None = None,
**kwargs: Any,
):
"""Initialize Anthropic chat completion client.
@@ -57,6 +68,7 @@ class AnthropicCompletion(BaseLLM):
stop_sequences: Stop sequences (Anthropic uses stop_sequences, not stop)
stream: Enable streaming responses
client_params: Additional parameters for the Anthropic client
interceptor: HTTP interceptor for modifying requests/responses at transport level.
**kwargs: Additional parameters
"""
super().__init__(
@@ -64,6 +76,7 @@ class AnthropicCompletion(BaseLLM):
)
# Client params
self.interceptor = interceptor
self.client_params = client_params
self.base_url = base_url
self.timeout = timeout
@@ -81,6 +94,30 @@ class AnthropicCompletion(BaseLLM):
self.is_claude_3 = "claude-3" in model.lower()
self.supports_tools = self.is_claude_3 # Claude 3+ supports tool use
@property
def stop(self) -> list[str]:
"""Get stop sequences sent to the API."""
return self.stop_sequences
@stop.setter
def stop(self, value: list[str] | str | None) -> None:
"""Set stop sequences.
Synchronizes stop_sequences to ensure values set by CrewAgentExecutor
are properly sent to the Anthropic API.
Args:
value: Stop sequences as a list, single string, or None
"""
if value is None:
self.stop_sequences = []
elif isinstance(value, str):
self.stop_sequences = [value]
elif isinstance(value, list):
self.stop_sequences = value
else:
self.stop_sequences = []
def _get_client_params(self) -> dict[str, Any]:
"""Get client parameters."""
@@ -96,6 +133,11 @@ class AnthropicCompletion(BaseLLM):
"max_retries": self.max_retries,
}
if self.interceptor:
transport = HTTPTransport(interceptor=self.interceptor)
http_client = httpx.Client(transport=transport)
client_params["http_client"] = http_client # type: ignore[assignment]
if self.client_params:
client_params.update(self.client_params)
@@ -104,11 +146,12 @@ class AnthropicCompletion(BaseLLM):
def call(
self,
messages: str | list[LLMMessage],
tools: list[dict] | None = None,
tools: list[dict[str, Any]] | None = None,
callbacks: list[Any] | None = None,
available_functions: dict[str, Any] | None = None,
from_task: Any | None = None,
from_agent: Any | None = None,
response_model: type[BaseModel] | None = None,
) -> str | Any:
"""Call Anthropic messages API.
@@ -126,7 +169,7 @@ class AnthropicCompletion(BaseLLM):
try:
# Emit call started event
self._emit_call_started_event(
messages=messages, # type: ignore[arg-type]
messages=messages,
tools=tools,
callbacks=callbacks,
available_functions=available_functions,
@@ -136,7 +179,7 @@ class AnthropicCompletion(BaseLLM):
# Format messages for Anthropic
formatted_messages, system_message = self._format_messages_for_anthropic(
messages # type: ignore[arg-type]
messages
)
# Prepare completion parameters
@@ -147,11 +190,19 @@ class AnthropicCompletion(BaseLLM):
# Handle streaming vs non-streaming
if self.stream:
return self._handle_streaming_completion(
completion_params, available_functions, from_task, from_agent
completion_params,
available_functions,
from_task,
from_agent,
response_model,
)
return self._handle_completion(
completion_params, available_functions, from_task, from_agent
completion_params,
available_functions,
from_task,
from_agent,
response_model,
)
except Exception as e:
@@ -166,7 +217,7 @@ class AnthropicCompletion(BaseLLM):
self,
messages: list[LLMMessage],
system_message: str | None = None,
tools: list[dict] | None = None,
tools: list[dict[str, Any]] | None = None,
) -> dict[str, Any]:
"""Prepare parameters for Anthropic messages API.
@@ -203,7 +254,9 @@ class AnthropicCompletion(BaseLLM):
return params
def _convert_tools_for_interference(self, tools: list[dict]) -> list[dict]:
def _convert_tools_for_interference(
self, tools: list[dict[str, Any]]
) -> list[dict[str, Any]]:
"""Convert CrewAI tool format to Anthropic tool use format."""
anthropic_tools = []
@@ -290,8 +343,19 @@ class AnthropicCompletion(BaseLLM):
available_functions: dict[str, Any] | None = None,
from_task: Any | None = None,
from_agent: Any | None = None,
response_model: type[BaseModel] | None = None,
) -> str | Any:
"""Handle non-streaming message completion."""
if response_model:
structured_tool = {
"name": "structured_output",
"description": "Returns structured data according to the schema",
"input_schema": response_model.model_json_schema(),
}
params["tools"] = [structured_tool]
params["tool_choice"] = {"type": "tool", "name": "structured_output"}
try:
response: Message = self.client.messages.create(**params)
@@ -304,6 +368,24 @@ class AnthropicCompletion(BaseLLM):
usage = self._extract_anthropic_token_usage(response)
self._track_token_usage_internal(usage)
if response_model and response.content:
tool_uses = [
block for block in response.content if isinstance(block, ToolUseBlock)
]
if tool_uses and tool_uses[0].name == "structured_output":
structured_data = tool_uses[0].input
structured_json = json.dumps(structured_data)
self._emit_call_completed_event(
response=structured_json,
call_type=LLMCallType.LLM_CALL,
from_task=from_task,
from_agent=from_agent,
messages=params["messages"],
)
return structured_json
# Check if Claude wants to use tools
if response.content and available_functions:
tool_uses = [
@@ -349,8 +431,19 @@ class AnthropicCompletion(BaseLLM):
available_functions: dict[str, Any] | None = None,
from_task: Any | None = None,
from_agent: Any | None = None,
response_model: type[BaseModel] | None = None,
) -> str:
"""Handle streaming message completion."""
if response_model:
structured_tool = {
"name": "structured_output",
"description": "Returns structured data according to the schema",
"input_schema": response_model.model_json_schema(),
}
params["tools"] = [structured_tool]
params["tool_choice"] = {"type": "tool", "name": "structured_output"}
full_response = ""
# Remove 'stream' parameter as messages.stream() doesn't accept it
@@ -374,6 +467,26 @@ class AnthropicCompletion(BaseLLM):
usage = self._extract_anthropic_token_usage(final_message)
self._track_token_usage_internal(usage)
if response_model and final_message.content:
tool_uses = [
block
for block in final_message.content
if isinstance(block, ToolUseBlock)
]
if tool_uses and tool_uses[0].name == "structured_output":
structured_data = tool_uses[0].input
structured_json = json.dumps(structured_data)
self._emit_call_completed_event(
response=structured_json,
call_type=LLMCallType.LLM_CALL,
from_task=from_task,
from_agent=from_agent,
messages=params["messages"],
)
return structured_json
if final_message.content and available_functions:
tool_uses = [
block

View File
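Anthropic has no JSON-schema response format, so the diff coerces structured output through a forced tool call. The pattern, condensed into a standalone sketch (model name illustrative):

import json

from anthropic import Anthropic
from anthropic.types.tool_use_block import ToolUseBlock
from pydantic import BaseModel


class Weather(BaseModel):
    city: str
    temp_c: float


client = Anthropic()
response = client.messages.create(
    model="claude-3-5-sonnet-latest",
    max_tokens=512,
    messages=[{"role": "user", "content": "Paris, 21C, as structured data."}],
    # Force the model to "call" a schema-shaped tool instead of writing prose.
    tools=[{
        "name": "structured_output",
        "description": "Returns structured data according to the schema",
        "input_schema": Weather.model_json_schema(),
    }],
    tool_choice={"type": "tool", "name": "structured_output"},
)
tool_uses = [b for b in response.content if isinstance(b, ToolUseBlock)]
structured_json = json.dumps(tool_uses[0].input) if tool_uses else None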

@@ -1,7 +1,11 @@
from __future__ import annotations
import json
import logging
import os
from typing import Any
from typing import TYPE_CHECKING, Any
from pydantic import BaseModel
from crewai.utilities.agent_utils import is_context_length_exceeded
from crewai.utilities.exceptions.context_window_exceeding_exception import (
@@ -10,19 +14,24 @@ from crewai.utilities.exceptions.context_window_exceeding_exception import (
from crewai.utilities.types import LLMMessage
if TYPE_CHECKING:
from crewai.llms.hooks.base import BaseInterceptor
from crewai.tools.base_tool import BaseTool
try:
from azure.ai.inference import ( # type: ignore[import-not-found]
from azure.ai.inference import (
ChatCompletionsClient,
)
from azure.ai.inference.models import ( # type: ignore[import-not-found]
from azure.ai.inference.models import (
ChatCompletions,
ChatCompletionsToolCall,
StreamingChatCompletionsUpdate,
)
from azure.core.credentials import ( # type: ignore[import-not-found]
from azure.core.credentials import (
AzureKeyCredential,
)
from azure.core.exceptions import ( # type: ignore[import-not-found]
from azure.core.exceptions import (
HttpResponseError,
)
@@ -57,7 +66,8 @@ class AzureCompletion(BaseLLM):
max_tokens: int | None = None,
stop: list[str] | None = None,
stream: bool = False,
**kwargs,
interceptor: BaseInterceptor[Any, Any] | None = None,
**kwargs: Any,
):
"""Initialize Azure AI Inference chat completion client.
@@ -75,8 +85,15 @@ class AzureCompletion(BaseLLM):
max_tokens: Maximum tokens in response
stop: Stop sequences
stream: Enable streaming responses
interceptor: HTTP interceptor (not yet supported for Azure).
**kwargs: Additional parameters
"""
if interceptor is not None:
raise NotImplementedError(
"HTTP interceptors are not yet supported for Azure AI Inference provider. "
"Interceptors are currently supported for OpenAI and Anthropic providers only."
)
super().__init__(
model=model, temperature=temperature, stop=stop or [], **kwargs
)
@@ -114,7 +131,7 @@ class AzureCompletion(BaseLLM):
if self.api_version:
client_kwargs["api_version"] = self.api_version
self.client = ChatCompletionsClient(**client_kwargs)
self.client = ChatCompletionsClient(**client_kwargs) # type: ignore[arg-type]
self.top_p = top_p
self.frequency_penalty = frequency_penalty
@@ -157,11 +174,12 @@ class AzureCompletion(BaseLLM):
def call(
self,
messages: str | list[LLMMessage],
tools: list[dict] | None = None,
tools: list[dict[str, BaseTool]] | None = None,
callbacks: list[Any] | None = None,
available_functions: dict[str, Any] | None = None,
from_task: Any | None = None,
from_agent: Any | None = None,
response_model: type[BaseModel] | None = None,
) -> str | Any:
"""Call Azure AI Inference chat completions API.
@@ -192,17 +210,25 @@ class AzureCompletion(BaseLLM):
# Prepare completion parameters
completion_params = self._prepare_completion_params(
formatted_messages, tools
formatted_messages, tools, response_model
)
# Handle streaming vs non-streaming
if self.stream:
return self._handle_streaming_completion(
completion_params, available_functions, from_task, from_agent
completion_params,
available_functions,
from_task,
from_agent,
response_model,
)
return self._handle_completion(
completion_params, available_functions, from_task, from_agent
completion_params,
available_functions,
from_task,
from_agent,
response_model,
)
except HttpResponseError as e:
@@ -233,13 +259,15 @@ class AzureCompletion(BaseLLM):
def _prepare_completion_params(
self,
messages: list[LLMMessage],
tools: list[dict] | None = None,
tools: list[dict[str, Any]] | None = None,
response_model: type[BaseModel] | None = None,
) -> dict[str, Any]:
"""Prepare parameters for Azure AI Inference chat completion.
Args:
messages: Formatted messages for Azure
tools: Tool definitions
response_model: Pydantic model for structured output
Returns:
Parameters dictionary for Azure API
@@ -249,6 +277,15 @@ class AzureCompletion(BaseLLM):
"stream": self.stream,
}
if response_model and self.is_openai_model:
params["response_format"] = {
"type": "json_schema",
"json_schema": {
"name": response_model.__name__,
"schema": response_model.model_json_schema(),
},
}
# Only include model parameter for non-Azure OpenAI endpoints
# Azure OpenAI endpoints have the deployment name in the URL
if not self.is_azure_openai_endpoint:
@@ -275,7 +312,9 @@ class AzureCompletion(BaseLLM):
return params
def _convert_tools_for_interference(self, tools: list[dict]) -> list[dict]:
def _convert_tools_for_interference(
self, tools: list[dict[str, Any]]
) -> list[dict[str, Any]]:
"""Convert CrewAI tool format to Azure OpenAI function calling format."""
from crewai.llms.providers.utils.common import safe_tool_conversion
@@ -334,6 +373,7 @@ class AzureCompletion(BaseLLM):
available_functions: dict[str, Any] | None = None,
from_task: Any | None = None,
from_agent: Any | None = None,
response_model: type[BaseModel] | None = None,
) -> str | Any:
"""Handle non-streaming chat completion."""
# Make API call
@@ -350,6 +390,26 @@ class AzureCompletion(BaseLLM):
usage = self._extract_azure_token_usage(response)
self._track_token_usage_internal(usage)
if response_model and self.is_openai_model:
content = message.content or ""
try:
structured_data = response_model.model_validate_json(content)
structured_json = structured_data.model_dump_json()
self._emit_call_completed_event(
response=structured_json,
call_type=LLMCallType.LLM_CALL,
from_task=from_task,
from_agent=from_agent,
messages=params["messages"],
)
return structured_json
except Exception as e:
error_msg = f"Failed to validate structured output with model {response_model.__name__}: {e}"
logging.error(error_msg)
raise ValueError(error_msg) from e
# Handle tool calls
if message.tool_calls and available_functions:
tool_call = message.tool_calls[0] # Handle first tool call
@@ -409,6 +469,7 @@ class AzureCompletion(BaseLLM):
available_functions: dict[str, Any] | None = None,
from_task: Any | None = None,
from_agent: Any | None = None,
response_model: type[BaseModel] | None = None,
) -> str:
"""Handle streaming chat completion."""
full_response = ""

View File
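For OpenAI-model deployments, Azure reuses the standard json_schema response format. The extra parameter the diff injects, shaped in isolation:

from pydantic import BaseModel


class Invoice(BaseModel):
    number: str
    total: float


# Added to the completion params only when response_model is set and the
# deployment hosts an OpenAI model; the reply is then validated with
# Invoice.model_validate_json(message.content).
params = {
    "response_format": {
        "type": "json_schema",
        "json_schema": {
            "name": Invoice.__name__,
            "schema": Invoice.model_json_schema(),
        },
    }
}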

@@ -5,6 +5,7 @@ import logging
import os
from typing import TYPE_CHECKING, Any, TypedDict, cast
from pydantic import BaseModel
from typing_extensions import Required
from crewai.events.types.llm_events import LLMCallType
@@ -29,6 +30,8 @@ if TYPE_CHECKING:
ToolTypeDef,
)
from crewai.llms.hooks.base import BaseInterceptor
try:
from boto3.session import Session
@@ -156,8 +159,9 @@ class BedrockCompletion(BaseLLM):
guardrail_config: dict[str, Any] | None = None,
additional_model_request_fields: dict[str, Any] | None = None,
additional_model_response_field_paths: list[str] | None = None,
**kwargs,
):
interceptor: BaseInterceptor[Any, Any] | None = None,
**kwargs: Any,
) -> None:
"""Initialize AWS Bedrock completion client.
Args:
@@ -175,8 +179,15 @@ class BedrockCompletion(BaseLLM):
guardrail_config: Guardrail configuration for content filtering
additional_model_request_fields: Model-specific request parameters
additional_model_response_field_paths: Custom response field paths
interceptor: HTTP interceptor (not yet supported for Bedrock).
**kwargs: Additional parameters
"""
if interceptor is not None:
raise NotImplementedError(
"HTTP interceptors are not yet supported for AWS Bedrock provider. "
"Interceptors are currently supported for OpenAI and Anthropic providers only."
)
# Extract provider from kwargs to avoid duplicate argument
kwargs.pop("provider", None)
@@ -232,6 +243,30 @@ class BedrockCompletion(BaseLLM):
# Handle inference profiles for newer models
self.model_id = model
@property
def stop(self) -> list[str]:
"""Get stop sequences sent to the API."""
return list(self.stop_sequences)
@stop.setter
def stop(self, value: Sequence[str] | str | None) -> None:
"""Set stop sequences.
Synchronizes stop_sequences to ensure values set by CrewAgentExecutor
are properly sent to the Bedrock API.
Args:
value: Stop sequences as a Sequence, single string, or None
"""
if value is None:
self.stop_sequences = []
elif isinstance(value, str):
self.stop_sequences = [value]
elif isinstance(value, Sequence):
self.stop_sequences = list(value)
else:
self.stop_sequences = []
def call(
self,
messages: str | list[LLMMessage],
@@ -240,12 +275,13 @@ class BedrockCompletion(BaseLLM):
available_functions: dict[str, Any] | None = None,
from_task: Any | None = None,
from_agent: Any | None = None,
response_model: type[BaseModel] | None = None,
) -> str | Any:
"""Call AWS Bedrock Converse API."""
try:
# Emit call started event
self._emit_call_started_event(
messages=messages, # type: ignore[arg-type]
messages=messages,
tools=tools,
callbacks=callbacks,
available_functions=available_functions,
@@ -738,7 +774,9 @@ class BedrockCompletion(BaseLLM):
return converse_messages, system_message
@staticmethod
def _format_tools_for_converse(tools: list[dict]) -> list[ConverseToolTypeDef]:
def _format_tools_for_converse(
tools: list[dict[str, Any]],
) -> list[ConverseToolTypeDef]:
"""Convert CrewAI tools to Converse API format following AWS specification."""
from crewai.llms.providers.utils.common import safe_tool_conversion

View File

@@ -2,8 +2,11 @@ import logging
import os
from typing import Any, cast
from pydantic import BaseModel
from crewai.events.types.llm_events import LLMCallType
from crewai.llms.base_llm import BaseLLM
from crewai.llms.hooks.base import BaseInterceptor
from crewai.utilities.agent_utils import is_context_length_exceeded
from crewai.utilities.exceptions.context_window_exceeding_exception import (
LLMContextLengthExceededError,
@@ -42,7 +45,8 @@ class GeminiCompletion(BaseLLM):
stream: bool = False,
safety_settings: dict[str, Any] | None = None,
client_params: dict[str, Any] | None = None,
**kwargs,
interceptor: BaseInterceptor[Any, Any] | None = None,
**kwargs: Any,
):
"""Initialize Google Gemini chat completion client.
@@ -60,8 +64,15 @@ class GeminiCompletion(BaseLLM):
safety_settings: Safety filter settings
client_params: Additional parameters to pass to the Google Gen AI Client constructor.
Supports parameters like http_options, credentials, debug_config, etc.
interceptor: HTTP interceptor (not yet supported for Gemini).
**kwargs: Additional parameters
"""
if interceptor is not None:
raise NotImplementedError(
"HTTP interceptors are not yet supported for Google Gemini provider. "
"Interceptors are currently supported for OpenAI and Anthropic providers only."
)
super().__init__(
model=model, temperature=temperature, stop=stop_sequences or [], **kwargs
)
@@ -93,7 +104,31 @@ class GeminiCompletion(BaseLLM):
self.is_gemini_1_5 = "gemini-1.5" in model.lower()
self.supports_tools = self.is_gemini_1_5 or self.is_gemini_2
def _initialize_client(self, use_vertexai: bool = False) -> genai.Client:
@property
def stop(self) -> list[str]:
"""Get stop sequences sent to the API."""
return self.stop_sequences
@stop.setter
def stop(self, value: list[str] | str | None) -> None:
"""Set stop sequences.
Synchronizes stop_sequences to ensure values set by CrewAgentExecutor
are properly sent to the Gemini API.
Args:
value: Stop sequences as a list, single string, or None
"""
if value is None:
self.stop_sequences = []
elif isinstance(value, str):
self.stop_sequences = [value]
elif isinstance(value, list):
self.stop_sequences = value
else:
self.stop_sequences = []
def _initialize_client(self, use_vertexai: bool = False) -> genai.Client: # type: ignore[no-any-unimported]
"""Initialize the Google Gen AI client with proper parameter handling.
Args:
@@ -168,11 +203,12 @@ class GeminiCompletion(BaseLLM):
def call(
self,
messages: str | list[LLMMessage],
tools: list[dict] | None = None,
tools: list[dict[str, Any]] | None = None,
callbacks: list[Any] | None = None,
available_functions: dict[str, Any] | None = None,
from_task: Any | None = None,
from_agent: Any | None = None,
response_model: type[BaseModel] | None = None,
) -> str | Any:
"""Call Google Gemini generate content API.
@@ -189,7 +225,7 @@ class GeminiCompletion(BaseLLM):
"""
try:
self._emit_call_started_event(
messages=messages, # type: ignore[arg-type]
messages=messages,
tools=tools,
callbacks=callbacks,
available_functions=available_functions,
@@ -199,10 +235,12 @@ class GeminiCompletion(BaseLLM):
self.tools = tools
formatted_content, system_instruction = self._format_messages_for_gemini(
messages # type: ignore[arg-type]
messages
)
config = self._prepare_generation_config(system_instruction, tools)
config = self._prepare_generation_config(
system_instruction, tools, response_model
)
if self.stream:
return self._handle_streaming_completion(
@@ -211,6 +249,7 @@ class GeminiCompletion(BaseLLM):
available_functions,
from_task,
from_agent,
response_model,
)
return self._handle_completion(
@@ -220,6 +259,7 @@ class GeminiCompletion(BaseLLM):
available_functions,
from_task,
from_agent,
response_model,
)
except APIError as e:
@@ -237,16 +277,18 @@ class GeminiCompletion(BaseLLM):
)
raise
def _prepare_generation_config(
def _prepare_generation_config( # type: ignore[no-any-unimported]
self,
system_instruction: str | None = None,
tools: list[dict] | None = None,
tools: list[dict[str, Any]] | None = None,
response_model: type[BaseModel] | None = None,
) -> types.GenerateContentConfig:
"""Prepare generation config for Google Gemini API.
Args:
system_instruction: System instruction for the model
tools: Tool definitions
response_model: Pydantic model for structured output
Returns:
GenerateContentConfig object for Gemini API
@@ -274,6 +316,10 @@ class GeminiCompletion(BaseLLM):
if self.stop_sequences:
config_params["stop_sequences"] = self.stop_sequences
if response_model:
config_params["response_mime_type"] = "application/json"
config_params["response_schema"] = response_model.model_json_schema()
# Handle tools for supported models
if tools and self.supports_tools:
config_params["tools"] = self._convert_tools_for_interference(tools)
@@ -283,7 +329,9 @@ class GeminiCompletion(BaseLLM):
return types.GenerateContentConfig(**config_params)
def _convert_tools_for_interference(self, tools: list[dict]) -> list[types.Tool]:
def _convert_tools_for_interference( # type: ignore[no-any-unimported]
self, tools: list[dict[str, Any]]
) -> list[types.Tool]:
"""Convert CrewAI tool format to Gemini function declaration format."""
gemini_tools = []
@@ -306,7 +354,7 @@ class GeminiCompletion(BaseLLM):
return gemini_tools
def _format_messages_for_gemini(
def _format_messages_for_gemini( # type: ignore[no-any-unimported]
self, messages: str | list[LLMMessage]
) -> tuple[list[types.Content], str | None]:
"""Format messages for Gemini API.
@@ -350,7 +398,7 @@ class GeminiCompletion(BaseLLM):
return contents, system_instruction
def _handle_completion(
def _handle_completion( # type: ignore[no-any-unimported]
self,
contents: list[types.Content],
system_instruction: str | None,
@@ -358,6 +406,7 @@ class GeminiCompletion(BaseLLM):
available_functions: dict[str, Any] | None = None,
from_task: Any | None = None,
from_agent: Any | None = None,
response_model: type[BaseModel] | None = None,
) -> str | Any:
"""Handle non-streaming content generation."""
api_params = {
@@ -416,13 +465,14 @@ class GeminiCompletion(BaseLLM):
return content
def _handle_streaming_completion(
def _handle_streaming_completion( # type: ignore[no-any-unimported]
self,
contents: list[types.Content],
config: types.GenerateContentConfig,
available_functions: dict[str, Any] | None = None,
from_task: Any | None = None,
from_agent: Any | None = None,
response_model: type[BaseModel] | None = None,
) -> str:
"""Handle streaming content generation."""
full_response = ""
@@ -544,8 +594,9 @@ class GeminiCompletion(BaseLLM):
}
return {"total_tokens": 0}
def _convert_contents_to_dict(
self, contents: list[types.Content]
def _convert_contents_to_dict( # type: ignore[no-any-unimported]
self,
contents: list[types.Content],
) -> list[dict[str, str]]:
"""Convert contents to dict format."""
return [

View File
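Gemini takes the schema directly on the generation config, as _prepare_generation_config now does when response_model is set. A condensed sketch against the google-genai SDK (model name illustrative; API key assumed in the environment):

from google import genai
from google.genai import types
from pydantic import BaseModel


class Fact(BaseModel):
    topic: str
    detail: str


client = genai.Client()
config = types.GenerateContentConfig(
    # Mirrors what the diff adds when response_model is provided.
    response_mime_type="application/json",
    response_schema=Fact.model_json_schema(),
)
response = client.models.generate_content(
    model="gemini-2.0-flash",
    contents="One fact about CPUs.",
    config=config,
)
fact = Fact.model_validate_json(response.text)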

@@ -1,9 +1,12 @@
from __future__ import annotations
from collections.abc import Iterator
import json
import logging
import os
from typing import Any
from typing import TYPE_CHECKING, Any
import httpx
from openai import APIConnectionError, NotFoundError, OpenAI
from openai.types.chat import ChatCompletion, ChatCompletionChunk
from openai.types.chat.chat_completion import Choice
@@ -12,6 +15,7 @@ from pydantic import BaseModel
from crewai.events.types.llm_events import LLMCallType
from crewai.llms.base_llm import BaseLLM
from crewai.llms.hooks.transport import HTTPTransport
from crewai.utilities.agent_utils import is_context_length_exceeded
from crewai.utilities.exceptions.context_window_exceeding_exception import (
LLMContextLengthExceededError,
@@ -19,6 +23,13 @@ from crewai.utilities.exceptions.context_window_exceeding_exception import (
from crewai.utilities.types import LLMMessage
if TYPE_CHECKING:
from crewai.agent.core import Agent
from crewai.llms.hooks.base import BaseInterceptor
from crewai.task import Task
from crewai.tools.base_tool import BaseTool
class OpenAICompletion(BaseLLM):
"""OpenAI native completion implementation.
@@ -51,13 +62,15 @@ class OpenAICompletion(BaseLLM):
top_logprobs: int | None = None,
reasoning_effort: str | None = None,
provider: str | None = None,
**kwargs,
):
interceptor: BaseInterceptor[httpx.Request, httpx.Response] | None = None,
**kwargs: Any,
) -> None:
"""Initialize OpenAI chat completion client."""
if provider is None:
provider = kwargs.pop("provider", "openai")
self.interceptor = interceptor
# Client configuration attributes
self.organization = organization
self.project = project
@@ -80,6 +93,11 @@ class OpenAICompletion(BaseLLM):
)
client_config = self._get_client_params()
if self.interceptor:
transport = HTTPTransport(interceptor=self.interceptor)
http_client = httpx.Client(transport=transport)
client_config["http_client"] = http_client
self.client = OpenAI(**client_config)
# Completion parameters
@@ -129,11 +147,12 @@ class OpenAICompletion(BaseLLM):
def call(
self,
messages: str | list[LLMMessage],
tools: list[dict] | None = None,
tools: list[dict[str, BaseTool]] | None = None,
callbacks: list[Any] | None = None,
available_functions: dict[str, Any] | None = None,
from_task: Any | None = None,
from_agent: Any | None = None,
from_task: Task | None = None,
from_agent: Agent | None = None,
response_model: type[BaseModel] | None = None,
) -> str | Any:
"""Call OpenAI chat completion API.
@@ -144,13 +163,14 @@ class OpenAICompletion(BaseLLM):
available_functions: Available functions for tool calling
from_task: Task that initiated the call
from_agent: Agent that initiated the call
response_model: Response model for structured output.
Returns:
Chat completion response or tool call result
"""
try:
self._emit_call_started_event(
messages=messages, # type: ignore[arg-type]
messages=messages,
tools=tools,
callbacks=callbacks,
available_functions=available_functions,
@@ -158,19 +178,27 @@ class OpenAICompletion(BaseLLM):
from_agent=from_agent,
)
formatted_messages = self._format_messages(messages) # type: ignore[arg-type]
formatted_messages = self._format_messages(messages)
completion_params = self._prepare_completion_params(
formatted_messages, tools
messages=formatted_messages, tools=tools
)
if self.stream:
return self._handle_streaming_completion(
completion_params, available_functions, from_task, from_agent
params=completion_params,
available_functions=available_functions,
from_task=from_task,
from_agent=from_agent,
response_model=response_model,
)
return self._handle_completion(
completion_params, available_functions, from_task, from_agent
params=completion_params,
available_functions=available_functions,
from_task=from_task,
from_agent=from_agent,
response_model=response_model,
)
except Exception as e:
@@ -182,14 +210,15 @@ class OpenAICompletion(BaseLLM):
raise
def _prepare_completion_params(
self, messages: list[LLMMessage], tools: list[dict] | None = None
self, messages: list[LLMMessage], tools: list[dict[str, BaseTool]] | None = None
) -> dict[str, Any]:
"""Prepare parameters for OpenAI chat completion."""
params = {
params: dict[str, Any] = {
"model": self.model,
"messages": messages,
"stream": self.stream,
}
if self.stream:
params["stream"] = self.stream
params.update(self.additional_params)
@@ -216,22 +245,6 @@ class OpenAICompletion(BaseLLM):
if self.is_o1_model and self.reasoning_effort:
params["reasoning_effort"] = self.reasoning_effort
# Handle response format for structured outputs
if self.response_format:
if isinstance(self.response_format, type) and issubclass(
self.response_format, BaseModel
):
# Convert Pydantic model to OpenAI response format
params["response_format"] = {
"type": "json_schema",
"json_schema": {
"name": self.response_format.__name__,
"schema": self.response_format.model_json_schema(),
},
}
else:
params["response_format"] = self.response_format
if tools:
params["tools"] = self._convert_tools_for_interference(tools)
params["tool_choice"] = "auto"
@@ -251,7 +264,9 @@ class OpenAICompletion(BaseLLM):
return {k: v for k, v in params.items() if k not in crewai_specific_params}
def _convert_tools_for_interference(self, tools: list[dict]) -> list[dict]:
def _convert_tools_for_interference(
self, tools: list[dict[str, BaseTool]]
) -> list[dict[str, Any]]:
"""Convert CrewAI tool format to OpenAI function calling format."""
from crewai.llms.providers.utils.common import safe_tool_conversion
@@ -283,9 +298,35 @@ class OpenAICompletion(BaseLLM):
available_functions: dict[str, Any] | None = None,
from_task: Any | None = None,
from_agent: Any | None = None,
response_model: type[BaseModel] | None = None,
) -> str | Any:
"""Handle non-streaming chat completion."""
try:
if response_model:
parsed_response = self.client.beta.chat.completions.parse(
**params,
response_format=response_model,
)
parsed_message = parsed_response.choices[0].message
if parsed_message.refusal:
# Model refused to produce structured output; fall through to the raw completion path
pass
usage = self._extract_openai_token_usage(parsed_response)
self._track_token_usage_internal(usage)
parsed_object = parsed_response.choices[0].message.parsed
if parsed_object:
structured_json = parsed_object.model_dump_json()
self._emit_call_completed_event(
response=structured_json,
call_type=LLMCallType.LLM_CALL,
from_task=from_task,
from_agent=from_agent,
messages=params["messages"],
)
return structured_json
response: ChatCompletion = self.client.chat.completions.create(**params)
usage = self._extract_openai_token_usage(response)
@@ -380,12 +421,57 @@ class OpenAICompletion(BaseLLM):
available_functions: dict[str, Any] | None = None,
from_task: Any | None = None,
from_agent: Any | None = None,
response_model: type[BaseModel] | None = None,
) -> str:
"""Handle streaming chat completion."""
full_response = ""
tool_calls = {}
# Make streaming API call
if response_model:
completion_stream: Iterator[ChatCompletionChunk] = (
self.client.chat.completions.create(**params)
)
accumulated_content = ""
for chunk in completion_stream:
if not chunk.choices:
continue
choice = chunk.choices[0]
delta: ChoiceDelta = choice.delta
if delta.content:
accumulated_content += delta.content
self._emit_stream_chunk_event(
chunk=delta.content,
from_task=from_task,
from_agent=from_agent,
)
try:
parsed_object = response_model.model_validate_json(accumulated_content)
structured_json = parsed_object.model_dump_json()
self._emit_call_completed_event(
response=structured_json,
call_type=LLMCallType.LLM_CALL,
from_task=from_task,
from_agent=from_agent,
messages=params["messages"],
)
return structured_json
except Exception as e:
logging.error(f"Failed to parse structured output from stream: {e}")
self._emit_call_completed_event(
response=accumulated_content,
call_type=LLMCallType.LLM_CALL,
from_task=from_task,
from_agent=from_agent,
messages=params["messages"],
)
return accumulated_content
stream: Iterator[ChatCompletionChunk] = self.client.chat.completions.create(
**params
)
@@ -395,20 +481,18 @@ class OpenAICompletion(BaseLLM):
continue
choice = chunk.choices[0]
delta: ChoiceDelta = choice.delta
chunk_delta: ChoiceDelta = choice.delta
# Handle content streaming
if delta.content:
full_response += delta.content
if chunk_delta.content:
full_response += chunk_delta.content
self._emit_stream_chunk_event(
chunk=delta.content,
chunk=chunk_delta.content,
from_task=from_task,
from_agent=from_agent,
)
# Handle tool call streaming
if delta.tool_calls:
for tool_call in delta.tool_calls:
if chunk_delta.tool_calls:
for tool_call in chunk_delta.tool_calls:
call_id = tool_call.id or "default"
if call_id not in tool_calls:
tool_calls[call_id] = {
@@ -454,10 +538,8 @@ class OpenAICompletion(BaseLLM):
if result is not None:
return result
# Apply stop words to full response
full_response = self._apply_stop_words(full_response)
# Emit completion event and return full response
self._emit_call_completed_event(
response=full_response,
call_type=LLMCallType.LLM_CALL,
@@ -523,12 +605,9 @@ class OpenAICompletion(BaseLLM):
}
return {"total_tokens": 0}
def _format_messages( # type: ignore[override]
self, messages: str | list[LLMMessage]
) -> list[LLMMessage]:
def _format_messages(self, messages: str | list[LLMMessage]) -> list[LLMMessage]:
"""Format messages for OpenAI API."""
# Use base class formatting first
base_formatted = super()._format_messages(messages) # type: ignore[arg-type]
base_formatted = super()._format_messages(messages)
# Apply OpenAI-specific formatting
formatted_messages: list[LLMMessage] = []
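The `response_model` branch added above maps a Pydantic class onto OpenAI's parse endpoint and returns the serialized JSON string. A minimal caller-side sketch, assuming the module path and that the constructor accepts a `model` keyword (both inferred, not shown in this diff):
```python
from pydantic import BaseModel

# Module path assumed from other imports in this diff
from crewai.llms.providers.openai.completion import OpenAICompletion


class TicketTriage(BaseModel):
    category: str
    priority: int


llm = OpenAICompletion(model="gpt-4o-mini")  # model kwarg assumed from BaseLLM
raw_json = llm.call(
    messages=[{"role": "user", "content": "Triage: checkout page returns 500"}],
    response_model=TicketTriage,
)
ticket = TicketTriage.model_validate_json(raw_json)  # call() returns a JSON string here
```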

View File

@@ -0,0 +1,37 @@
"""MCP (Model Context Protocol) client support for CrewAI agents.
This module provides native MCP client functionality, allowing CrewAI agents
to connect to any MCP-compliant server using various transport types.
"""
from crewai.mcp.client import MCPClient
from crewai.mcp.config import (
MCPServerConfig,
MCPServerHTTP,
MCPServerSSE,
MCPServerStdio,
)
from crewai.mcp.filters import (
StaticToolFilter,
ToolFilter,
ToolFilterContext,
create_dynamic_tool_filter,
create_static_tool_filter,
)
from crewai.mcp.transports.base import BaseTransport, TransportType
__all__ = [
"BaseTransport",
"MCPClient",
"MCPServerConfig",
"MCPServerHTTP",
"MCPServerSSE",
"MCPServerStdio",
"StaticToolFilter",
"ToolFilter",
"ToolFilterContext",
"TransportType",
"create_dynamic_tool_filter",
"create_static_tool_filter",
]
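A quick smoke test of the public surface re-exported here, with the server command purely illustrative:
```python
from crewai.mcp import MCPServerStdio, create_static_tool_filter

server = MCPServerStdio(
    command="uvx",  # illustrative command
    args=["my-mcp-server"],
    tool_filter=create_static_tool_filter(allowed_tool_names=["search"]),
)
```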

View File

@@ -0,0 +1,742 @@
"""MCP client with session management for CrewAI agents."""
import asyncio
from collections.abc import Callable
from contextlib import AsyncExitStack
from datetime import datetime
import logging
import time
from typing import Any
from typing_extensions import Self
# BaseExceptionGroup is available in Python 3.11+
try:
from builtins import BaseExceptionGroup
except ImportError:
# Fallback for Python < 3.11 (shouldn't happen in practice)
BaseExceptionGroup = Exception
from crewai.events.event_bus import crewai_event_bus
from crewai.events.types.mcp_events import (
MCPConnectionCompletedEvent,
MCPConnectionFailedEvent,
MCPConnectionStartedEvent,
MCPToolExecutionCompletedEvent,
MCPToolExecutionFailedEvent,
MCPToolExecutionStartedEvent,
)
from crewai.mcp.transports.base import BaseTransport
from crewai.mcp.transports.http import HTTPTransport
from crewai.mcp.transports.sse import SSETransport
from crewai.mcp.transports.stdio import StdioTransport
# MCP Connection timeout constants (in seconds)
MCP_CONNECTION_TIMEOUT = 30 # Increased for slow servers
MCP_TOOL_EXECUTION_TIMEOUT = 30
MCP_DISCOVERY_TIMEOUT = 30 # Increased for slow servers
MCP_MAX_RETRIES = 3
# Simple in-memory cache for MCP tool schemas (duration: 5 minutes)
_mcp_schema_cache: dict[str, tuple[dict[str, Any], float]] = {}
_cache_ttl = 300 # 5 minutes
class MCPClient:
"""MCP client with session management.
This client manages connections to MCP servers and provides a high-level
interface for interacting with MCP tools, prompts, and resources.
Example:
```python
transport = StdioTransport(command="python", args=["server.py"])
client = MCPClient(transport)
async with client:
tools = await client.list_tools()
result = await client.call_tool("tool_name", {"arg": "value"})
```
"""
def __init__(
self,
transport: BaseTransport,
connect_timeout: int = MCP_CONNECTION_TIMEOUT,
execution_timeout: int = MCP_TOOL_EXECUTION_TIMEOUT,
discovery_timeout: int = MCP_DISCOVERY_TIMEOUT,
max_retries: int = MCP_MAX_RETRIES,
cache_tools_list: bool = False,
logger: logging.Logger | None = None,
) -> None:
"""Initialize MCP client.
Args:
transport: Transport instance for MCP server connection.
connect_timeout: Connection timeout in seconds.
execution_timeout: Tool execution timeout in seconds.
discovery_timeout: Tool discovery timeout in seconds.
max_retries: Maximum retry attempts for operations.
cache_tools_list: Whether to cache tool list results.
logger: Optional logger instance.
"""
self.transport = transport
self.connect_timeout = connect_timeout
self.execution_timeout = execution_timeout
self.discovery_timeout = discovery_timeout
self.max_retries = max_retries
self.cache_tools_list = cache_tools_list
self._logger = logger or logging.getLogger(__name__)
self._session: Any = None
self._initialized = False
self._exit_stack = AsyncExitStack()
self._was_connected = False
@property
def connected(self) -> bool:
"""Check if client is connected to server."""
return self.transport.connected and self._initialized
@property
def session(self) -> Any:
"""Get the MCP session."""
if self._session is None:
raise RuntimeError("Client not connected. Call connect() first.")
return self._session
def _get_server_info(self) -> tuple[str, str | None, str | None]:
"""Get server information for events.
Returns:
Tuple of (server_name, server_url, transport_type).
"""
if isinstance(self.transport, StdioTransport):
server_name = f"{self.transport.command} {' '.join(self.transport.args)}"
server_url = None
transport_type = self.transport.transport_type.value
elif isinstance(self.transport, HTTPTransport):
server_name = self.transport.url
server_url = self.transport.url
transport_type = self.transport.transport_type.value
elif isinstance(self.transport, SSETransport):
server_name = self.transport.url
server_url = self.transport.url
transport_type = self.transport.transport_type.value
else:
server_name = "Unknown MCP Server"
server_url = None
transport_type = (
self.transport.transport_type.value
if hasattr(self.transport, "transport_type")
else None
)
return server_name, server_url, transport_type
async def connect(self) -> Self:
"""Connect to MCP server and initialize session.
Returns:
Self for method chaining.
Raises:
ConnectionError: If connection fails.
ImportError: If MCP SDK not available.
"""
if self.connected:
return self
# Get server info for events
server_name, server_url, transport_type = self._get_server_info()
is_reconnect = self._was_connected
# Emit connection started event
started_at = datetime.now()
crewai_event_bus.emit(
self,
MCPConnectionStartedEvent(
server_name=server_name,
server_url=server_url,
transport_type=transport_type,
is_reconnect=is_reconnect,
connect_timeout=self.connect_timeout,
),
)
try:
from mcp import ClientSession
# Use AsyncExitStack to manage transport and session contexts together
# This ensures they're in the same async scope and prevents cancel scope errors
# Always enter transport context via exit stack (it handles already-connected state)
await self._exit_stack.enter_async_context(self.transport)
# Create ClientSession with transport streams
self._session = ClientSession(
self.transport.read_stream,
self.transport.write_stream,
)
# Enter the session's async context manager via exit stack
await self._exit_stack.enter_async_context(self._session)
# Initialize the session (required by MCP protocol)
try:
await asyncio.wait_for(
self._session.initialize(),
timeout=self.connect_timeout,
)
except asyncio.CancelledError:
# If initialization was cancelled (e.g., event loop closing),
# cleanup and re-raise - don't suppress cancellation
await self._cleanup_on_error()
raise
except BaseExceptionGroup as eg:
# Handle exception groups from anyio task groups
# Extract the actual meaningful error (not GeneratorExit)
actual_error = None
for exc in eg.exceptions:
if isinstance(exc, Exception) and not isinstance(
exc, GeneratorExit
):
# Check if it's an HTTP error (like 401)
error_msg = str(exc).lower()
if "401" in error_msg or "unauthorized" in error_msg:
actual_error = exc
break
if "cancel scope" not in error_msg and "task" not in error_msg:
actual_error = exc
break
await self._cleanup_on_error()
if actual_error:
raise ConnectionError(
f"Failed to connect to MCP server: {actual_error}"
) from actual_error
raise ConnectionError(f"Failed to connect to MCP server: {eg}") from eg
self._initialized = True
self._was_connected = True
completed_at = datetime.now()
connection_duration_ms = (completed_at - started_at).total_seconds() * 1000
crewai_event_bus.emit(
self,
MCPConnectionCompletedEvent(
server_name=server_name,
server_url=server_url,
transport_type=transport_type,
started_at=started_at,
completed_at=completed_at,
connection_duration_ms=connection_duration_ms,
is_reconnect=is_reconnect,
),
)
return self
except ImportError as e:
await self._cleanup_on_error()
error_msg = (
"MCP library not available. Please install with: pip install mcp"
)
self._emit_connection_failed(
server_name,
server_url,
transport_type,
error_msg,
"import_error",
started_at,
)
raise ImportError(error_msg) from e
except asyncio.TimeoutError as e:
await self._cleanup_on_error()
error_msg = f"MCP connection timed out after {self.connect_timeout} seconds. The server may be slow or unreachable."
self._emit_connection_failed(
server_name,
server_url,
transport_type,
error_msg,
"timeout",
started_at,
)
raise ConnectionError(error_msg) from e
except asyncio.CancelledError:
# Re-raise cancellation - don't suppress it
await self._cleanup_on_error()
self._emit_connection_failed(
server_name,
server_url,
transport_type,
"Connection cancelled",
"cancelled",
started_at,
)
raise
except BaseExceptionGroup as eg:
# Handle exception groups from anyio task groups at outer level
actual_error = None
for exc in eg.exceptions:
if isinstance(exc, Exception) and not isinstance(exc, GeneratorExit):
error_msg = str(exc).lower()
if "401" in error_msg or "unauthorized" in error_msg:
actual_error = exc
break
if "cancel scope" not in error_msg and "task" not in error_msg:
actual_error = exc
break
await self._cleanup_on_error()
error_type = (
"authentication"
if actual_error
and (
"401" in str(actual_error).lower()
or "unauthorized" in str(actual_error).lower()
)
else "network"
)
error_msg = str(actual_error) if actual_error else str(eg)
self._emit_connection_failed(
server_name,
server_url,
transport_type,
error_msg,
error_type,
started_at,
)
if actual_error:
raise ConnectionError(
f"Failed to connect to MCP server: {actual_error}"
) from actual_error
raise ConnectionError(f"Failed to connect to MCP server: {eg}") from eg
except Exception as e:
await self._cleanup_on_error()
error_type = (
"authentication"
if "401" in str(e).lower() or "unauthorized" in str(e).lower()
else "network"
)
self._emit_connection_failed(
server_name, server_url, transport_type, str(e), error_type, started_at
)
raise ConnectionError(f"Failed to connect to MCP server: {e}") from e
def _emit_connection_failed(
self,
server_name: str,
server_url: str | None,
transport_type: str | None,
error: str,
error_type: str,
started_at: datetime,
) -> None:
"""Emit connection failed event."""
failed_at = datetime.now()
crewai_event_bus.emit(
self,
MCPConnectionFailedEvent(
server_name=server_name,
server_url=server_url,
transport_type=transport_type,
error=error,
error_type=error_type,
started_at=started_at,
failed_at=failed_at,
),
)
async def _cleanup_on_error(self) -> None:
"""Cleanup resources when an error occurs during connection."""
try:
await self._exit_stack.aclose()
except Exception as e:
# Cleanup itself failed; surface it so callers aren't left with a half-closed client
raise RuntimeError(f"Error during MCP client cleanup: {e}") from e
finally:
self._session = None
self._initialized = False
self._exit_stack = AsyncExitStack()
async def disconnect(self) -> None:
"""Disconnect from MCP server and cleanup resources."""
if not self.connected:
return
try:
await self._exit_stack.aclose()
except Exception as e:
raise RuntimeError(f"Error during MCP client disconnect: {e}") from e
finally:
self._session = None
self._initialized = False
self._exit_stack = AsyncExitStack()
async def list_tools(self, use_cache: bool | None = None) -> list[dict[str, Any]]:
"""List available tools from MCP server.
Args:
use_cache: Whether to use cached results. If None, uses
client's cache_tools_list setting.
Returns:
List of tool definitions with name, description, and inputSchema.
"""
if not self.connected:
await self.connect()
# Check cache if enabled
use_cache = use_cache if use_cache is not None else self.cache_tools_list
if use_cache:
cache_key = self._get_cache_key("tools")
if cache_key in _mcp_schema_cache:
cached_data, cache_time = _mcp_schema_cache[cache_key]
if time.time() - cache_time < _cache_ttl:
# Cache hit and still fresh; return the cached tool list
return cached_data
# List tools with timeout and retries
tools = await self._retry_operation(
self._list_tools_impl,
timeout=self.discovery_timeout,
)
# Cache results if enabled
if use_cache:
cache_key = self._get_cache_key("tools")
_mcp_schema_cache[cache_key] = (tools, time.time())
return tools
async def _list_tools_impl(self) -> list[dict[str, Any]]:
"""Internal implementation of list_tools."""
tools_result = await asyncio.wait_for(
self.session.list_tools(),
timeout=self.discovery_timeout,
)
return [
{
"name": tool.name,
"description": getattr(tool, "description", ""),
"inputSchema": getattr(tool, "inputSchema", {}),
}
for tool in tools_result.tools
]
async def call_tool(
self, tool_name: str, arguments: dict[str, Any] | None = None
) -> Any:
"""Call a tool on the MCP server.
Args:
tool_name: Name of the tool to call.
arguments: Tool arguments.
Returns:
Tool execution result.
"""
if not self.connected:
await self.connect()
arguments = arguments or {}
cleaned_arguments = self._clean_tool_arguments(arguments)
# Get server info for events
server_name, server_url, transport_type = self._get_server_info()
# Emit tool execution started event
started_at = datetime.now()
crewai_event_bus.emit(
self,
MCPToolExecutionStartedEvent(
server_name=server_name,
server_url=server_url,
transport_type=transport_type,
tool_name=tool_name,
tool_args=cleaned_arguments,
),
)
try:
result = await self._retry_operation(
lambda: self._call_tool_impl(tool_name, cleaned_arguments),
timeout=self.execution_timeout,
)
completed_at = datetime.now()
execution_duration_ms = (completed_at - started_at).total_seconds() * 1000
crewai_event_bus.emit(
self,
MCPToolExecutionCompletedEvent(
server_name=server_name,
server_url=server_url,
transport_type=transport_type,
tool_name=tool_name,
tool_args=cleaned_arguments,
result=result,
started_at=started_at,
completed_at=completed_at,
execution_duration_ms=execution_duration_ms,
),
)
return result
except Exception as e:
failed_at = datetime.now()
error_type = (
"timeout"
if isinstance(e, (asyncio.TimeoutError, ConnectionError))
and "timeout" in str(e).lower()
else "server_error"
)
crewai_event_bus.emit(
self,
MCPToolExecutionFailedEvent(
server_name=server_name,
server_url=server_url,
transport_type=transport_type,
tool_name=tool_name,
tool_args=cleaned_arguments,
error=str(e),
error_type=error_type,
started_at=started_at,
failed_at=failed_at,
),
)
raise
def _clean_tool_arguments(self, arguments: dict[str, Any]) -> dict[str, Any]:
"""Clean tool arguments by removing None values and fixing formats.
Args:
arguments: Raw tool arguments.
Returns:
Cleaned arguments ready for MCP server.
"""
cleaned = {}
for key, value in arguments.items():
# Skip None values
if value is None:
continue
# Fix sources array format: convert ["web"] to [{"type": "web"}]
if key == "sources" and isinstance(value, list):
fixed_sources = []
for item in value:
if isinstance(item, str):
# Convert string to object format
fixed_sources.append({"type": item})
elif isinstance(item, dict):
# Already in correct format
fixed_sources.append(item)
else:
# Keep as is if unknown format
fixed_sources.append(item)
if fixed_sources:
cleaned[key] = fixed_sources
continue
# Recursively clean nested dictionaries
if isinstance(value, dict):
nested_cleaned = self._clean_tool_arguments(value)
if nested_cleaned: # Only add if not empty
cleaned[key] = nested_cleaned
elif isinstance(value, list):
# Clean list items
cleaned_list = []
for item in value:
if isinstance(item, dict):
cleaned_item = self._clean_tool_arguments(item)
if cleaned_item:
cleaned_list.append(cleaned_item)
elif item is not None:
cleaned_list.append(item)
if cleaned_list:
cleaned[key] = cleaned_list
else:
# Keep primitive values
cleaned[key] = value
return cleaned
async def _call_tool_impl(self, tool_name: str, arguments: dict[str, Any]) -> Any:
"""Internal implementation of call_tool."""
result = await asyncio.wait_for(
self.session.call_tool(tool_name, arguments),
timeout=self.execution_timeout,
)
# Extract result content
if hasattr(result, "content") and result.content:
if isinstance(result.content, list) and len(result.content) > 0:
content_item = result.content[0]
if hasattr(content_item, "text"):
return str(content_item.text)
return str(content_item)
return str(result.content)
return str(result)
async def list_prompts(self) -> list[dict[str, Any]]:
"""List available prompts from MCP server.
Returns:
List of prompt definitions.
"""
if not self.connected:
await self.connect()
return await self._retry_operation(
self._list_prompts_impl,
timeout=self.discovery_timeout,
)
async def _list_prompts_impl(self) -> list[dict[str, Any]]:
"""Internal implementation of list_prompts."""
prompts_result = await asyncio.wait_for(
self.session.list_prompts(),
timeout=self.discovery_timeout,
)
return [
{
"name": prompt.name,
"description": getattr(prompt, "description", ""),
"arguments": getattr(prompt, "arguments", []),
}
for prompt in prompts_result.prompts
]
async def get_prompt(
self, prompt_name: str, arguments: dict[str, Any] | None = None
) -> dict[str, Any]:
"""Get a prompt from the MCP server.
Args:
prompt_name: Name of the prompt to get.
arguments: Optional prompt arguments.
Returns:
Prompt content and metadata.
"""
if not self.connected:
await self.connect()
arguments = arguments or {}
return await self._retry_operation(
lambda: self._get_prompt_impl(prompt_name, arguments),
timeout=self.execution_timeout,
)
async def _get_prompt_impl(
self, prompt_name: str, arguments: dict[str, Any]
) -> dict[str, Any]:
"""Internal implementation of get_prompt."""
result = await asyncio.wait_for(
self.session.get_prompt(prompt_name, arguments),
timeout=self.execution_timeout,
)
return {
"name": prompt_name,
"messages": [
{
"role": msg.role,
"content": msg.content,
}
for msg in result.messages
],
"arguments": arguments,
}
async def _retry_operation(
self,
operation: Callable[[], Any],
timeout: int | None = None,
) -> Any:
"""Retry an operation with exponential backoff.
Args:
operation: Async operation to retry.
timeout: Operation timeout in seconds.
Returns:
Operation result.
"""
last_error = None
timeout = timeout or self.execution_timeout
for attempt in range(self.max_retries):
try:
if timeout:
return await asyncio.wait_for(operation(), timeout=timeout)
return await operation()
except asyncio.TimeoutError as e: # noqa: PERF203
last_error = f"Operation timed out after {timeout} seconds"
if attempt < self.max_retries - 1:
wait_time = 2**attempt
await asyncio.sleep(wait_time)
else:
raise ConnectionError(last_error) from e
except Exception as e:
error_str = str(e).lower()
# Classify errors as retryable or non-retryable
if "authentication" in error_str or "unauthorized" in error_str:
raise ConnectionError(f"Authentication failed: {e}") from e
if "not found" in error_str:
raise ValueError(f"Resource not found: {e}") from e
# Retryable errors
last_error = str(e)
if attempt < self.max_retries - 1:
wait_time = 2**attempt
await asyncio.sleep(wait_time)
else:
raise ConnectionError(
f"Operation failed after {self.max_retries} attempts: {last_error}"
) from e
raise ConnectionError(f"Operation failed: {last_error}")
def _get_cache_key(self, resource_type: str) -> str:
"""Generate cache key for resource.
Args:
resource_type: Type of resource (e.g., "tools", "prompts").
Returns:
Cache key string.
"""
# Use transport type and URL/command as cache key
if isinstance(self.transport, StdioTransport):
key = f"stdio:{self.transport.command}:{':'.join(self.transport.args)}"
elif isinstance(self.transport, HTTPTransport):
key = f"http:{self.transport.url}"
elif isinstance(self.transport, SSETransport):
key = f"sse:{self.transport.url}"
else:
key = f"{self.transport.transport_type}:unknown"
return f"mcp:{key}:{resource_type}"
async def __aenter__(self) -> Self:
"""Async context manager entry."""
return await self.connect()
async def __aexit__(
self,
exc_type: type[BaseException] | None,
exc_val: BaseException | None,
exc_tb: Any,
) -> None:
"""Async context manager exit."""
await self.disconnect()
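With `cache_tools_list=True`, a second `list_tools()` inside the 5-minute TTL is served from the module-level schema cache instead of a server round trip. A hedged usage sketch (the server script and tool name are placeholders):
```python
import asyncio

from crewai.mcp import MCPClient
from crewai.mcp.transports.stdio import StdioTransport


async def main() -> None:
    client = MCPClient(
        StdioTransport(command="python", args=["server.py"]),  # placeholder server
        cache_tools_list=True,
    )
    async with client:
        tools = await client.list_tools()        # hits the server
        tools_again = await client.list_tools()  # served from the in-memory cache
        result = await client.call_tool("echo", {"text": "hi"})  # assumed tool name


asyncio.run(main())
```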

View File

@@ -0,0 +1,124 @@
"""MCP server configuration models for CrewAI agents.
This module provides Pydantic models for configuring MCP servers with
various transport types, similar to OpenAI's Agents SDK.
"""
from pydantic import BaseModel, Field
from crewai.mcp.filters import ToolFilter
class MCPServerStdio(BaseModel):
"""Stdio MCP server configuration.
This configuration is used for connecting to local MCP servers
that run as processes and communicate via standard input/output.
Example:
```python
mcp_server = MCPServerStdio(
command="python",
args=["path/to/server.py"],
env={"API_KEY": "..."},
tool_filter=create_static_tool_filter(
allowed_tool_names=["read_file", "write_file"]
),
)
```
"""
command: str = Field(
...,
description="Command to execute (e.g., 'python', 'node', 'npx', 'uvx').",
)
args: list[str] = Field(
default_factory=list,
description="Command arguments (e.g., ['server.py'] or ['-y', '@mcp/server']).",
)
env: dict[str, str] | None = Field(
default=None,
description="Environment variables to pass to the process.",
)
tool_filter: ToolFilter | None = Field(
default=None,
description="Optional tool filter for filtering available tools.",
)
cache_tools_list: bool = Field(
default=False,
description="Whether to cache the tool list for faster subsequent access.",
)
class MCPServerHTTP(BaseModel):
"""HTTP/Streamable HTTP MCP server configuration.
This configuration is used for connecting to remote MCP servers
over HTTP/HTTPS using streamable HTTP transport.
Example:
```python
mcp_server = MCPServerHTTP(
url="https://api.example.com/mcp",
headers={"Authorization": "Bearer ..."},
cache_tools_list=True,
)
```
"""
url: str = Field(
..., description="Server URL (e.g., 'https://api.example.com/mcp')."
)
headers: dict[str, str] | None = Field(
default=None,
description="Optional HTTP headers for authentication or other purposes.",
)
streamable: bool = Field(
default=True,
description="Whether to use streamable HTTP transport (default: True).",
)
tool_filter: ToolFilter | None = Field(
default=None,
description="Optional tool filter for filtering available tools.",
)
cache_tools_list: bool = Field(
default=False,
description="Whether to cache the tool list for faster subsequent access.",
)
class MCPServerSSE(BaseModel):
"""Server-Sent Events (SSE) MCP server configuration.
This configuration is used for connecting to remote MCP servers
using Server-Sent Events for real-time streaming communication.
Example:
```python
mcp_server = MCPServerSSE(
url="https://api.example.com/mcp/sse",
headers={"Authorization": "Bearer ..."},
)
```
"""
url: str = Field(
...,
description="Server URL (e.g., 'https://api.example.com/mcp/sse').",
)
headers: dict[str, str] | None = Field(
default=None,
description="Optional HTTP headers for authentication or other purposes.",
)
tool_filter: ToolFilter | None = Field(
default=None,
description="Optional tool filter for filtering available tools.",
)
cache_tools_list: bool = Field(
default=False,
description="Whether to cache the tool list for faster subsequent access.",
)
# Type alias for all MCP server configurations
MCPServerConfig = MCPServerStdio | MCPServerHTTP | MCPServerSSE
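Because `MCPServerConfig` is a plain union type, downstream code can accept any of the three configurations under one annotation and dispatch with `isinstance`. A small sketch:
```python
from crewai.mcp import MCPServerConfig, MCPServerHTTP, MCPServerStdio


def describe(config: MCPServerConfig) -> str:
    # Dispatch on the concrete configuration type
    if isinstance(config, MCPServerStdio):
        return f"stdio: {config.command} {' '.join(config.args)}"
    if isinstance(config, MCPServerHTTP):
        return f"http: {config.url} (streamable={config.streamable})"
    return f"sse: {config.url}"


print(describe(MCPServerHTTP(url="https://api.example.com/mcp")))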

View File

@@ -0,0 +1,166 @@
"""Tool filtering support for MCP servers.
This module provides utilities for filtering tools from MCP servers,
including static allow/block lists and dynamic context-aware filtering.
"""
from collections.abc import Callable
from typing import Any
from pydantic import BaseModel, Field
class ToolFilterContext(BaseModel):
"""Context for dynamic tool filtering.
This context is passed to dynamic tool filters to provide
information about the agent, run context, and server.
"""
agent: Any = Field(..., description="The agent requesting tools.")
server_name: str = Field(..., description="Name of the MCP server.")
run_context: dict[str, Any] | None = Field(
default=None,
description="Optional run context for additional filtering logic.",
)
# Type alias for tool filter functions
ToolFilter = (
Callable[[ToolFilterContext, dict[str, Any]], bool]
| Callable[[dict[str, Any]], bool]
)
class StaticToolFilter:
"""Static tool filter with allow/block lists.
This filter provides simple allow/block list filtering based on
tool names. Useful for restricting which tools are available
from an MCP server.
Example:
```python
filter = StaticToolFilter(
allowed_tool_names=["read_file", "write_file"],
blocked_tool_names=["delete_file"],
)
```
"""
def __init__(
self,
allowed_tool_names: list[str] | None = None,
blocked_tool_names: list[str] | None = None,
) -> None:
"""Initialize static tool filter.
Args:
allowed_tool_names: List of tool names to allow. If None,
all tools are allowed (unless blocked).
blocked_tool_names: List of tool names to block. Blocked tools
take precedence over allowed tools.
"""
self.allowed_tool_names = set(allowed_tool_names or [])
self.blocked_tool_names = set(blocked_tool_names or [])
def __call__(self, tool: dict[str, Any]) -> bool:
"""Filter tool based on allow/block lists.
Args:
tool: Tool definition dictionary with at least 'name' key.
Returns:
True if tool should be included, False otherwise.
"""
tool_name = tool.get("name", "")
# Blocked tools take precedence
if self.blocked_tool_names and tool_name in self.blocked_tool_names:
return False
# If allow list exists, tool must be in it
if self.allowed_tool_names:
return tool_name in self.allowed_tool_names
# No restrictions - allow all
return True
def create_static_tool_filter(
allowed_tool_names: list[str] | None = None,
blocked_tool_names: list[str] | None = None,
) -> Callable[[dict[str, Any]], bool]:
"""Create a static tool filter function.
This is a convenience function for creating static tool filters
with allow/block lists.
Args:
allowed_tool_names: List of tool names to allow. If None,
all tools are allowed (unless blocked).
blocked_tool_names: List of tool names to block. Blocked tools
take precedence over allowed tools.
Returns:
Tool filter function that returns True for allowed tools.
Example:
```python
filter_fn = create_static_tool_filter(
allowed_tool_names=["read_file", "write_file"],
blocked_tool_names=["delete_file"],
)
# Use in MCPServerStdio
mcp_server = MCPServerStdio(
command="npx",
args=["-y", "@modelcontextprotocol/server-filesystem"],
tool_filter=filter_fn,
)
```
"""
return StaticToolFilter(
allowed_tool_names=allowed_tool_names,
blocked_tool_names=blocked_tool_names,
)
def create_dynamic_tool_filter(
filter_func: Callable[[ToolFilterContext, dict[str, Any]], bool],
) -> Callable[[ToolFilterContext, dict[str, Any]], bool]:
"""Create a dynamic tool filter function.
This function wraps a dynamic filter function that has access
to the tool filter context (agent, server, run context).
Args:
filter_func: Function that takes (context, tool) and returns bool.
Returns:
Tool filter function that can be used with MCP server configs.
Example:
```python
def context_aware_filter(
context: ToolFilterContext, tool: dict[str, Any]
) -> bool:
# Block dangerous tools for code reviewers
if context.agent.role == "Code Reviewer":
if tool["name"].startswith("danger_"):
return False
return True
filter_fn = create_dynamic_tool_filter(context_aware_filter)
mcp_server = MCPServerStdio(
command="python", args=["server.py"], tool_filter=filter_fn
)
```
"""
return filter_func
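One precedence rule worth making explicit: the block list wins over the allow list, and an allow list, once present, excludes everything not on it. A quick check against the class as defined above:
```python
from crewai.mcp.filters import StaticToolFilter

f = StaticToolFilter(
    allowed_tool_names=["read_file", "delete_file"],
    blocked_tool_names=["delete_file"],
)

assert f({"name": "read_file"}) is True
assert f({"name": "delete_file"}) is False  # blocked even though also allowed
assert f({"name": "write_file"}) is False   # not on the allow list
```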

View File

@@ -0,0 +1,15 @@
"""MCP transport implementations for various connection types."""
from crewai.mcp.transports.base import BaseTransport, TransportType
from crewai.mcp.transports.http import HTTPTransport
from crewai.mcp.transports.sse import SSETransport
from crewai.mcp.transports.stdio import StdioTransport
__all__ = [
"BaseTransport",
"HTTPTransport",
"SSETransport",
"StdioTransport",
"TransportType",
]

View File

@@ -0,0 +1,125 @@
"""Base transport interface for MCP connections."""
from abc import ABC, abstractmethod
from enum import Enum
from typing import Any, Protocol
from typing_extensions import Self
class TransportType(str, Enum):
"""MCP transport types."""
STDIO = "stdio"
HTTP = "http"
STREAMABLE_HTTP = "streamable-http"
SSE = "sse"
class ReadStream(Protocol):
"""Protocol for read streams."""
async def read(self, n: int = -1) -> bytes:
"""Read bytes from stream."""
...
class WriteStream(Protocol):
"""Protocol for write streams."""
async def write(self, data: bytes) -> None:
"""Write bytes to stream."""
...
class BaseTransport(ABC):
"""Base class for MCP transport implementations.
This abstract base class defines the interface that all transport
implementations must follow. Transports handle the low-level communication
with MCP servers.
"""
def __init__(self, **kwargs: Any) -> None:
"""Initialize the transport.
Args:
**kwargs: Transport-specific configuration options.
"""
self._read_stream: ReadStream | None = None
self._write_stream: WriteStream | None = None
self._connected = False
@property
@abstractmethod
def transport_type(self) -> TransportType:
"""Return the transport type."""
...
@property
def connected(self) -> bool:
"""Check if transport is connected."""
return self._connected
@property
def read_stream(self) -> ReadStream:
"""Get the read stream."""
if self._read_stream is None:
raise RuntimeError("Transport not connected. Call connect() first.")
return self._read_stream
@property
def write_stream(self) -> WriteStream:
"""Get the write stream."""
if self._write_stream is None:
raise RuntimeError("Transport not connected. Call connect() first.")
return self._write_stream
@abstractmethod
async def connect(self) -> Self:
"""Establish connection to MCP server.
Returns:
Self for method chaining.
Raises:
ConnectionError: If connection fails.
"""
...
@abstractmethod
async def disconnect(self) -> None:
"""Close connection to MCP server."""
...
@abstractmethod
async def __aenter__(self) -> Self:
"""Async context manager entry."""
...
@abstractmethod
async def __aexit__(
self,
exc_type: type[BaseException] | None,
exc_val: BaseException | None,
exc_tb: Any,
) -> None:
"""Async context manager exit."""
...
def _set_streams(self, read: ReadStream, write: WriteStream) -> None:
"""Set the read and write streams.
Args:
read: Read stream.
write: Write stream.
"""
self._read_stream = read
self._write_stream = write
self._connected = True
def _clear_streams(self) -> None:
"""Clear the read and write streams."""
self._read_stream = None
self._write_stream = None
self._connected = False
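Subclassing `BaseTransport` means supplying `transport_type`, `connect`, `disconnect`, and the async context-manager pair. A toy transport sketch, with no-op streams standing in for a real wire protocol:
```python
from typing_extensions import Self

from crewai.mcp.transports.base import BaseTransport, TransportType


class _NullStream:
    """Stand-in satisfying the ReadStream/WriteStream protocols."""

    async def read(self, n: int = -1) -> bytes:
        return b""

    async def write(self, data: bytes) -> None:
        return None


class InMemoryTransport(BaseTransport):
    """Toy transport illustrating the BaseTransport contract."""

    @property
    def transport_type(self) -> TransportType:
        return TransportType.STDIO  # placeholder; a real transport declares its own type

    async def connect(self) -> Self:
        stream = _NullStream()
        self._set_streams(read=stream, write=stream)
        return self

    async def disconnect(self) -> None:
        self._clear_streams()

    async def __aenter__(self) -> Self:
        return await self.connect()

    async def __aexit__(self, exc_type, exc_val, exc_tb) -> None:
        await self.disconnect()
```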

View File

@@ -0,0 +1,174 @@
"""HTTP and Streamable HTTP transport for MCP servers."""
import asyncio
from typing import Any
from typing_extensions import Self
# BaseExceptionGroup is available in Python 3.11+
try:
from builtins import BaseExceptionGroup
except ImportError:
# Fallback for Python < 3.11 (shouldn't happen in practice)
BaseExceptionGroup = Exception
from crewai.mcp.transports.base import BaseTransport, TransportType
class HTTPTransport(BaseTransport):
"""HTTP/Streamable HTTP transport for connecting to remote MCP servers.
This transport connects to MCP servers over HTTP/HTTPS using the
streamable HTTP client from the MCP SDK.
Example:
```python
transport = HTTPTransport(
url="https://api.example.com/mcp",
headers={"Authorization": "Bearer ..."}
)
async with transport:
# Use transport...
```
"""
def __init__(
self,
url: str,
headers: dict[str, str] | None = None,
streamable: bool = True,
**kwargs: Any,
) -> None:
"""Initialize HTTP transport.
Args:
url: Server URL (e.g., "https://api.example.com/mcp").
headers: Optional HTTP headers.
streamable: Whether to use streamable HTTP (default: True).
**kwargs: Additional transport options.
"""
super().__init__(**kwargs)
self.url = url
self.headers = headers or {}
self.streamable = streamable
self._transport_context: Any = None
@property
def transport_type(self) -> TransportType:
"""Return the transport type."""
return TransportType.STREAMABLE_HTTP if self.streamable else TransportType.HTTP
async def connect(self) -> Self:
"""Establish HTTP connection to MCP server.
Returns:
Self for method chaining.
Raises:
ConnectionError: If connection fails.
ImportError: If MCP SDK not available.
"""
if self._connected:
return self
try:
from mcp.client.streamable_http import streamablehttp_client
self._transport_context = streamablehttp_client(
self.url,
headers=self.headers if self.headers else None,
terminate_on_close=True,
)
try:
read, write, _ = await asyncio.wait_for(
self._transport_context.__aenter__(), timeout=30.0
)
except asyncio.TimeoutError as e:
self._transport_context = None
raise ConnectionError(
"Transport context entry timed out after 30 seconds. "
"Server may be slow or unreachable."
) from e
except Exception as e:
self._transport_context = None
raise ConnectionError(f"Failed to enter transport context: {e}") from e
self._set_streams(read=read, write=write)
return self
except ImportError as e:
raise ImportError(
"MCP library not available. Please install with: pip install mcp"
) from e
except Exception as e:
self._clear_streams()
if self._transport_context is not None:
self._transport_context = None
raise ConnectionError(f"Failed to connect to MCP server: {e}") from e
async def disconnect(self) -> None:
"""Close HTTP connection."""
if not self._connected:
return
try:
# Clear streams first
self._clear_streams()
# Exit transport context - this cleans up background tasks;
# give them a moment to complete before tearing down
if self._transport_context is not None:
try:
# Wait briefly for any pending operations
await asyncio.sleep(0.1)
await self._transport_context.__aexit__(None, None, None)
except (RuntimeError, asyncio.CancelledError) as e:
# Ignore "exit cancel scope in different task" errors and cancellation
# These happen when asyncio.run() closes the event loop
# while background tasks are still running
error_msg = str(e).lower()
if "cancel scope" not in error_msg and "task" not in error_msg:
# Only suppress cancel scope/task errors, re-raise others
if isinstance(e, RuntimeError):
raise
# For CancelledError, just suppress it
except BaseExceptionGroup as eg:
# Handle exception groups from anyio task groups
# Suppress if they contain cancel scope errors
should_suppress = False
for exc in eg.exceptions:
error_msg = str(exc).lower()
if "cancel scope" in error_msg or "task" in error_msg:
should_suppress = True
break
if not should_suppress:
raise
except Exception as e:
raise RuntimeError(
f"Error during HTTP transport disconnect: {e}"
) from e
self._connected = False
except Exception as e:
# Log but don't raise - cleanup should be best effort
import logging
logger = logging.getLogger(__name__)
logger.warning(f"Error during HTTP transport disconnect: {e}")
async def __aenter__(self) -> Self:
"""Async context manager entry."""
return await self.connect()
async def __aexit__(
self,
exc_type: type[BaseException] | None,
exc_val: BaseException | None,
exc_tb: Any,
) -> None:
"""Async context manager exit."""
await self.disconnect()
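Note that `streamable` only affects the reported `transport_type`; `connect()` always goes through `streamablehttp_client`. For example:
```python
from crewai.mcp.transports.http import HTTPTransport

t1 = HTTPTransport(url="https://api.example.com/mcp")
t2 = HTTPTransport(url="https://api.example.com/mcp", streamable=False)

print(t1.transport_type)  # TransportType.STREAMABLE_HTTP
print(t2.transport_type)  # TransportType.HTTP
```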

View File

@@ -0,0 +1,113 @@
"""Server-Sent Events (SSE) transport for MCP servers."""
from typing import Any
from typing_extensions import Self
from crewai.mcp.transports.base import BaseTransport, TransportType
class SSETransport(BaseTransport):
"""SSE transport for connecting to remote MCP servers.
This transport connects to MCP servers using Server-Sent Events (SSE)
for real-time streaming communication.
Example:
```python
transport = SSETransport(
url="https://api.example.com/mcp/sse",
headers={"Authorization": "Bearer ..."}
)
async with transport:
# Use transport...
```
"""
def __init__(
self,
url: str,
headers: dict[str, str] | None = None,
**kwargs: Any,
) -> None:
"""Initialize SSE transport.
Args:
url: Server URL (e.g., "https://api.example.com/mcp/sse").
headers: Optional HTTP headers.
**kwargs: Additional transport options.
"""
super().__init__(**kwargs)
self.url = url
self.headers = headers or {}
self._transport_context: Any = None
@property
def transport_type(self) -> TransportType:
"""Return the transport type."""
return TransportType.SSE
async def connect(self) -> Self:
"""Establish SSE connection to MCP server.
Returns:
Self for method chaining.
Raises:
ConnectionError: If connection fails.
ImportError: If MCP SDK not available.
"""
if self._connected:
return self
try:
from mcp.client.sse import sse_client
self._transport_context = sse_client(
self.url,
headers=self.headers if self.headers else None,
terminate_on_close=True,
)
read, write = await self._transport_context.__aenter__()
self._set_streams(read=read, write=write)
return self
except ImportError as e:
raise ImportError(
"MCP library not available. Please install with: pip install mcp"
) from e
except Exception as e:
self._clear_streams()
raise ConnectionError(f"Failed to connect to SSE MCP server: {e}") from e
async def disconnect(self) -> None:
"""Close SSE connection."""
if not self._connected:
return
try:
self._clear_streams()
if self._transport_context is not None:
await self._transport_context.__aexit__(None, None, None)
except Exception as e:
import logging
logger = logging.getLogger(__name__)
logger.warning(f"Error during SSE transport disconnect: {e}")
async def __aenter__(self) -> Self:
"""Async context manager entry."""
return await self.connect()
async def __aexit__(
self,
exc_type: type[BaseException] | None,
exc_val: BaseException | None,
exc_tb: Any,
) -> None:
"""Async context manager exit."""
await self.disconnect()

View File

@@ -0,0 +1,153 @@
"""Stdio transport for MCP servers running as local processes."""
import asyncio
import os
import subprocess
from typing import Any
from typing_extensions import Self
from crewai.mcp.transports.base import BaseTransport, TransportType
class StdioTransport(BaseTransport):
"""Stdio transport for connecting to local MCP servers.
This transport connects to MCP servers running as local processes,
communicating via standard input/output streams. Supports Python,
Node.js, and other command-line servers.
Example:
```python
transport = StdioTransport(
command="python",
args=["path/to/server.py"],
env={"API_KEY": "..."}
)
async with transport:
# Use transport...
```
"""
def __init__(
self,
command: str,
args: list[str] | None = None,
env: dict[str, str] | None = None,
**kwargs: Any,
) -> None:
"""Initialize stdio transport.
Args:
command: Command to execute (e.g., "python", "node", "npx").
args: Command arguments (e.g., ["server.py"] or ["-y", "@mcp/server"]).
env: Environment variables to pass to the process.
**kwargs: Additional transport options.
"""
super().__init__(**kwargs)
self.command = command
self.args = args or []
self.env = env or {}
self._process: subprocess.Popen[bytes] | None = None
self._transport_context: Any = None
@property
def transport_type(self) -> TransportType:
"""Return the transport type."""
return TransportType.STDIO
async def connect(self) -> Self:
"""Start the MCP server process and establish connection.
Returns:
Self for method chaining.
Raises:
ConnectionError: If process fails to start.
ImportError: If MCP SDK not available.
"""
if self._connected:
return self
try:
from mcp import StdioServerParameters
from mcp.client.stdio import stdio_client
process_env = os.environ.copy()
process_env.update(self.env)
server_params = StdioServerParameters(
command=self.command,
args=self.args,
env=process_env if process_env else None,
)
self._transport_context = stdio_client(server_params)
try:
read, write = await self._transport_context.__aenter__()
except Exception as e:
self._transport_context = None
raise ConnectionError(
f"Failed to enter stdio transport context: {e}"
) from e
self._set_streams(read=read, write=write)
return self
except ImportError as e:
raise ImportError(
"MCP library not available. Please install with: pip install mcp"
) from e
except Exception as e:
self._clear_streams()
if self._transport_context is not None:
self._transport_context = None
raise ConnectionError(f"Failed to start MCP server process: {e}") from e
async def disconnect(self) -> None:
"""Terminate the MCP server process and close connection."""
if not self._connected:
return
try:
self._clear_streams()
if self._transport_context is not None:
await self._transport_context.__aexit__(None, None, None)
if self._process is not None:
try:
self._process.terminate()
try:
await asyncio.wait_for(self._process.wait(), timeout=5.0)
except asyncio.TimeoutError:
self._process.kill()
await self._process.wait()
finally:
self._process = None
except Exception as e:
# Log but don't raise - cleanup should be best effort
import logging
logger = logging.getLogger(__name__)
logger.warning(f"Error during stdio transport disconnect: {e}")
async def __aenter__(self) -> Self:
"""Async context manager entry."""
return await self.connect()
async def __aexit__(
self,
exc_type: type[BaseException] | None,
exc_val: BaseException | None,
exc_tb: Any,
) -> None:
"""Async context manager exit."""
await self.disconnect()

View File

@@ -1,21 +1,75 @@
"""Utility functions for the crewai project module."""
from collections.abc import Callable
from functools import lru_cache
from typing import ParamSpec, TypeVar, cast
from functools import wraps
from typing import Any, ParamSpec, TypeVar, cast
from pydantic import BaseModel
from crewai.agents.cache.cache_handler import CacheHandler
P = ParamSpec("P")
R = TypeVar("R")
cache = CacheHandler()
def _make_hashable(arg: Any) -> Any:
"""Convert argument to hashable form for caching.
Args:
arg: The argument to convert.
Returns:
Hashable representation of the argument.
"""
if isinstance(arg, BaseModel):
return arg.model_dump_json()
if isinstance(arg, dict):
return tuple(sorted((k, _make_hashable(v)) for k, v in arg.items()))
if isinstance(arg, list):
return tuple(_make_hashable(item) for item in arg)
if hasattr(arg, "__dict__"):
return ("__instance__", id(arg))
return arg
def memoize(meth: Callable[P, R]) -> Callable[P, R]:
"""Memoize a method by caching its results based on arguments.
Handles Pydantic BaseModel instances by converting them to JSON strings
before hashing for cache lookup.
Args:
meth: The method to memoize.
Returns:
A memoized version of the method that caches results.
"""
return cast(Callable[P, R], lru_cache(typed=True)(meth))
@wraps(meth)
def wrapper(*args: P.args, **kwargs: P.kwargs) -> R:
"""Wrapper that converts arguments to hashable form before caching.
Args:
*args: Positional arguments to the memoized method.
**kwargs: Keyword arguments to the memoized method.
Returns:
The result of the memoized method call.
"""
hashable_args = tuple(_make_hashable(arg) for arg in args)
hashable_kwargs = tuple(
sorted((k, _make_hashable(v)) for k, v in kwargs.items())
)
cache_key = str((hashable_args, hashable_kwargs))
cached_result: R | None = cache.read(tool=meth.__name__, input=cache_key)
if cached_result is not None:
return cached_result
result = meth(*args, **kwargs)
cache.add(tool=meth.__name__, input=cache_key, output=result)
return result
return cast(Callable[P, R], wrapper)
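The rewrite above swaps `lru_cache` for an explicit `CacheHandler`-backed key so that unhashable arguments such as Pydantic models can be memoized. A sketch of the effect, with the import path assumed from this module's location:
```python
from pydantic import BaseModel

from crewai.project.utils import memoize  # assumed import path for this module


class Query(BaseModel):
    text: str


@memoize
def expensive(q: Query) -> str:
    print("computing...")
    return q.text.upper()


print(expensive(Query(text="hi")))  # computing... then HI
print(expensive(Query(text="hi")))  # cache hit: HI, no recompute
```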

View File

@@ -67,31 +67,44 @@ def _prepare_documents_for_chromadb(
ids: list[str] = []
texts: list[str] = []
metadatas: list[Mapping[str, str | int | float | bool]] = []
seen_ids: dict[str, int] = {}
try:
for doc in documents:
if "doc_id" in doc:
doc_id = str(doc["doc_id"])
else:
metadata = doc.get("metadata")
if metadata and isinstance(metadata, dict) and "doc_id" in metadata:
doc_id = str(metadata["doc_id"])
else:
content_for_hash = doc["content"]
if metadata:
metadata_str = json.dumps(metadata, sort_keys=True)
content_for_hash = f"{content_for_hash}|{metadata_str}"
doc_id = hashlib.sha256(content_for_hash.encode()).hexdigest()
for doc in documents:
if "doc_id" in doc:
ids.append(doc["doc_id"])
else:
content_for_hash = doc["content"]
metadata = doc.get("metadata")
if metadata:
metadata_str = json.dumps(metadata, sort_keys=True)
content_for_hash = f"{content_for_hash}|{metadata_str}"
content_hash = hashlib.blake2b(
content_for_hash.encode(), digest_size=32
).hexdigest()
ids.append(content_hash)
texts.append(doc["content"])
metadata = doc.get("metadata")
if metadata:
if isinstance(metadata, list):
metadatas.append(metadata[0] if metadata and metadata[0] else {})
if isinstance(metadata, list):
processed_metadata = metadata[0] if metadata and metadata[0] else {}
else:
processed_metadata = metadata
else:
metadatas.append(metadata)
else:
metadatas.append({})
processed_metadata = {}
if doc_id in seen_ids:
idx = seen_ids[doc_id]
texts[idx] = doc["content"]
metadatas[idx] = processed_metadata
else:
idx = len(ids)
ids.append(doc_id)
texts.append(doc["content"])
metadatas.append(processed_metadata)
seen_ids[doc_id] = idx
except Exception as e:
raise ValueError(f"Error preparing documents for ChromaDB: {e}") from e
return PreparedDocuments(ids, texts, metadatas)
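The net effect of the `seen_ids` pass is last-write-wins deduplication: a repeated `doc_id` overwrites the earlier text and metadata slot instead of appending, and documents without an ID fall back to a SHA-256 content hash. Sketching the intended behavior (the function is module-internal, so this is illustrative only):
```python
docs = [
    {"content": "hello", "metadata": {"doc_id": "a"}},
    {"content": "hello v2", "metadata": {"doc_id": "a"}},  # same doc_id: overwrites the first
    {"content": "world"},                                  # no doc_id: SHA-256 of content
]

prepared = _prepare_documents_for_chromadb(docs)
assert prepared.ids.count("a") == 1
assert "hello v2" in prepared.texts and "hello" not in prepared.texts
```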

View File

@@ -1,6 +1,5 @@
from __future__ import annotations
from collections.abc import Callable
from concurrent.futures import Future
from copy import copy as shallow_copy
import datetime
@@ -29,10 +28,11 @@ from pydantic import (
model_validator,
)
from pydantic_core import PydanticCustomError
from typing_extensions import Self
from crewai.agents.agent_builder.base_agent import BaseAgent
from crewai.events.event_bus import crewai_event_bus
from crewai.events.event_types import (
from crewai.events.types.task_events import (
TaskCompletedEvent,
TaskFailedEvent,
TaskStartedEvent,
@@ -52,7 +52,7 @@ from crewai.utilities.guardrail_types import (
GuardrailType,
GuardrailsType,
)
from crewai.utilities.i18n import I18N
from crewai.utilities.i18n import I18N, get_i18n
from crewai.utilities.printer import Printer
from crewai.utilities.string_utils import interpolate_only
@@ -90,7 +90,7 @@ class Task(BaseModel):
used_tools: int = 0
tools_errors: int = 0
delegations: int = 0
i18n: I18N = Field(default_factory=I18N)
i18n: I18N = Field(default_factory=get_i18n)
name: str | None = Field(default=None)
prompt_context: str | None = None
description: str = Field(description="Description of the actual task.")
@@ -123,6 +123,10 @@ class Task(BaseModel):
description="A Pydantic model to be used to create a Pydantic output.",
default=None,
)
response_model: type[BaseModel] | None = Field(
description="A Pydantic model for structured LLM outputs using native provider features.",
default=None,
)
output_file: str | None = Field(
description="A file path to be used to create a file output.",
default=None,
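The new `response_model` field feeds the provider-native structured-output path shown earlier in this diff for `OpenAICompletion`. A brief sketch of the wiring, with agent assignment elided:
```python
from pydantic import BaseModel

from crewai import Task


class Summary(BaseModel):
    title: str
    bullet_points: list[str]


task = Task(
    description="Summarize the attached report.",
    expected_output="A structured summary.",
    response_model=Summary,  # provider returns JSON matching this schema
)
```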
@@ -203,8 +207,8 @@ class Task(BaseModel):
@field_validator("guardrail")
@classmethod
def validate_guardrail_function(
cls, v: str | Callable | None
) -> str | Callable | None:
cls, v: str | GuardrailCallable | None
) -> str | GuardrailCallable | None:
"""
If v is a callable, validate that the guardrail function has the correct signature and behavior.
If v is a string, return it as is.
@@ -261,11 +265,11 @@ class Task(BaseModel):
@model_validator(mode="before")
@classmethod
def process_model_config(cls, values):
def process_model_config(cls, values: dict[str, Any]) -> dict[str, Any]:
return process_config(values, cls)
@model_validator(mode="after")
def validate_required_fields(self):
def validate_required_fields(self) -> Self:
required_fields = ["description", "expected_output"]
for field in required_fields:
if getattr(self, field) is None:
@@ -414,14 +418,14 @@ class Task(BaseModel):
return self
@model_validator(mode="after")
def check_tools(self):
def check_tools(self) -> Self:
"""Check if the tools are set."""
if not self.tools and self.agent and self.agent.tools:
self.tools.extend(self.agent.tools)
self.tools = self.agent.tools
return self
@model_validator(mode="after")
def check_output(self):
def check_output(self) -> Self:
"""Check if an output type is set."""
output_types = [self.output_json, self.output_pydantic]
if len([type for type in output_types if type]) > 1:
@@ -433,7 +437,7 @@ class Task(BaseModel):
return self
@model_validator(mode="after")
def handle_max_retries_deprecation(self):
def handle_max_retries_deprecation(self) -> Self:
if self.max_retries is not None:
warnings.warn(
"The 'max_retries' parameter is deprecated and will be removed in CrewAI v1.0.0. "
@@ -514,14 +518,18 @@ class Task(BaseModel):
tools = tools or self.tools or []
self.processed_by_agents.add(agent.role)
crewai_event_bus.emit(self, TaskStartedEvent(context=context, task=self))
crewai_event_bus.emit(self, TaskStartedEvent(context=context, task=self)) # type: ignore[no-untyped-call]
result = agent.execute_task(
task=self,
context=context,
tools=tools,
)
pydantic_output, json_output = self._export_output(result)
if not self._guardrails and not self._guardrail:
pydantic_output, json_output = self._export_output(result)
else:
pydantic_output, json_output = None, None
task_output = TaskOutput(
name=self.name or self.description,
description=self.description,
@@ -572,12 +580,13 @@ class Task(BaseModel):
)
self._save_file(content)
crewai_event_bus.emit(
self, TaskCompletedEvent(output=task_output, task=self)
self,
TaskCompletedEvent(output=task_output, task=self), # type: ignore[no-untyped-call]
)
return task_output
except Exception as e:
self.end_time = datetime.datetime.now()
crewai_event_bus.emit(self, TaskFailedEvent(error=str(e), task=self))
crewai_event_bus.emit(self, TaskFailedEvent(error=str(e), task=self)) # type: ignore[no-untyped-call]
raise e # Re-raise the exception after emitting the event
def prompt(self) -> str:
@@ -782,7 +791,7 @@ Follow these guidelines:
return OutputFormat.PYDANTIC
return OutputFormat.RAW
def _save_file(self, result: dict | str | Any) -> None:
def _save_file(self, result: dict[str, Any] | str | Any) -> None:
"""Save task output to a file.
Note:
@@ -834,7 +843,7 @@ Follow these guidelines:
) from e
return
def __repr__(self):
def __repr__(self) -> str:
return f"Task(description={self.description}, expected_output={self.expected_output})"
@property

Some files were not shown because too many files have changed in this diff Show More