Compare commits

..

9 Commits

Author SHA1 Message Date
Lucas Gomide
ccd98cc511 docs: update Python version requirement from <=3.13 to <3.14
This correctly reflects support for all 3.13.x patch versions
2025-06-10 13:36:36 -03:00
Lucas Gomide
5c51349a85 Support async tool executions (#2983)
* test: fix structured tool tests

No tests were being executed from this file

* feat: support to run async tool

Some tools require async execution. This commit allows us to collect tool results from coroutines

* docs: add docs about asynchronous tool support
2025-06-10 12:17:06 -04:00
Richard Luo
5b740467cb docs: fix the guide on persistence (#2849)
Co-authored-by: Lucas Gomide <lucaslg200@gmail.com>
2025-06-09 14:09:56 -04:00
hegasz
e9d9dd2a79 Fix missing manager_agent tokens in usage_metrics from kickoff (#2848)
* fix(metrics): prevent usage_metrics from dropping manager_agent tokens

* Add test to verify hierarchical kickoff aggregates manager and agent usage metrics

---------

Co-authored-by: Lucas Gomide <lucaslg200@gmail.com>
2025-06-09 13:16:05 -04:00
Lorenze Jay
3e74cb4832 docs: add integrations documentation and images for enterprise features (#2981)
- Introduced a new documentation file for Integrations, detailing supported services and setup instructions.
- Updated the main docs.json to include the new "integrations" feature in the contextual options.
- Added several images related to integrations to enhance the documentation.

Co-authored-by: Tony Kipkemboi <iamtonykipkemboi@gmail.com>
2025-06-09 12:46:09 -04:00
Lucas Gomide
db3c8a49bd feat: improve docs and logging for Multi-Org actions in CLI (#2980)
* docs: add organization management in our CLI docs

* feat: improve user feedback when user is not authenticated

* feat: improve logging about the current organization while publishing/installing a Tool

* feat: improve logging when Agent repository is not found during fetch

* fix linter offences

* test: fix auth token error
2025-06-09 12:21:12 -04:00
Lucas Gomide
8a37b535ed docs: improve docs about planning LLM usage (#2977) 2025-06-09 10:17:04 -04:00
Lucas Gomide
e6ac1311e7 build: upgrade LiteLLM to support latest Openai version (#2963)
Co-authored-by: Tony Kipkemboi <iamtonykipkemboi@gmail.com>
2025-06-09 08:55:12 -04:00
Akshit Madan
b0d89698fd docs: added Maxim support for Agent Observability (#2861)
* docs: added Maxim support for Agent Observability

* enhanced the maxim integration doc page as per the github PR reviewer bot suggestions

* Update maxim-observability.mdx

* Update maxim-observability.mdx

- Fixed Python version, >=3.10
- added expected_output field in Task
- Removed marketing links and added github link

* added maxim in observability

---------

Co-authored-by: Tony Kipkemboi <iamtonykipkemboi@gmail.com>
2025-06-08 13:39:01 -04:00
44 changed files with 1958 additions and 1587 deletions

View File

@@ -1,184 +0,0 @@
# A2A Protocol Integration
CrewAI supports the A2A (Agent-to-Agent) protocol, enabling your crews to participate in remote agent interoperability. This allows CrewAI crews to be exposed as remotely accessible agents that can communicate with other A2A-compatible systems.
## Overview
The A2A protocol is Google's standard for agent interoperability that enables bidirectional communication between agents. CrewAI's A2A integration provides:
- **Remote Interoperability**: Expose crews as A2A-compatible agents
- **Bidirectional Communication**: Enable full-duplex agent interactions
- **Protocol Compliance**: Full support for A2A specifications
- **Transport Flexibility**: Support for multiple transport protocols
## Installation
A2A support is available as an optional dependency:
```bash
pip install crewai[a2a]
```
## Basic Usage
### Creating an A2A Server
```python
from crewai import Agent, Crew, Task
from crewai.a2a import CrewAgentExecutor, start_a2a_server

# Create your crew
agent = Agent(
    role="Assistant",
    goal="Help users with their queries",
    backstory="A helpful AI assistant"
)

task = Task(
    description="Help with: {query}",
    agent=agent
)

crew = Crew(agents=[agent], tasks=[task])

# Create A2A executor
executor = CrewAgentExecutor(crew)

# Start A2A server
start_a2a_server(executor, host="0.0.0.0", port=10001)
```
### Custom Configuration
```python
import uvicorn

from crewai.a2a import CrewAgentExecutor, create_a2a_app

# Create executor with custom content types
executor = CrewAgentExecutor(
    crew=crew,
    supported_content_types=['text', 'application/json', 'image/png']
)

# Create custom A2A app
app = create_a2a_app(
    executor,
    agent_name="My Research Crew",
    agent_description="A specialized research and analysis crew",
    transport="starlette"
)

# Run with custom ASGI server
uvicorn.run(app, host="0.0.0.0", port=8080)
```
## Key Features
### CrewAgentExecutor
The `CrewAgentExecutor` class wraps CrewAI crews to implement the A2A `AgentExecutor` interface:
- **Asynchronous Execution**: Crews run asynchronously within the A2A protocol
- **Task Management**: Automatic handling of task lifecycle and cancellation
- **Error Handling**: Robust error handling with A2A-compliant responses
- **Output Conversion**: Automatic conversion of crew outputs to A2A artifacts
### Server Utilities
Convenience functions for starting A2A servers:
- `start_a2a_server()`: Quick server startup with default configuration
- `create_a2a_app()`: Create custom A2A applications for advanced use cases
## Protocol Compliance
CrewAI's A2A integration provides full protocol compliance:
- **Agent Cards**: Automatic generation of agent capability descriptions
- **Task Execution**: Asynchronous task processing with event queues
- **Artifact Management**: Conversion of crew outputs to A2A artifacts
- **Error Handling**: A2A-compliant error responses and status codes
## Use Cases
### Remote Agent Networks
Expose CrewAI crews as part of larger agent networks:
```python
# Multi-agent system with specialized crews
research_crew = create_research_crew()
analysis_crew = create_analysis_crew()
writing_crew = create_writing_crew()
# Expose each as A2A agents on different ports
start_a2a_server(CrewAgentExecutor(research_crew), port=10001)
start_a2a_server(CrewAgentExecutor(analysis_crew), port=10002)
start_a2a_server(CrewAgentExecutor(writing_crew), port=10003)
```
### Cross-Platform Integration
Enable CrewAI crews to work with other agent frameworks:
```python
# CrewAI crew accessible to other A2A-compatible systems
executor = CrewAgentExecutor(crew)
start_a2a_server(executor, host="0.0.0.0", port=10001)
# Other systems can now invoke this crew remotely
```
## Advanced Configuration
### Custom Agent Cards
```python
from a2a.types import AgentCard, AgentCapabilities, AgentSkill

# Custom agent card for specialized capabilities
agent_card = AgentCard(
    name="Specialized Research Crew",
    description="Advanced research and analysis capabilities",
    version="2.0.0",
    capabilities=AgentCapabilities(
        streaming=True,
        pushNotifications=False
    ),
    skills=[
        AgentSkill(
            id="research",
            name="Research Analysis",
            description="Comprehensive research and analysis",
            tags=["research", "analysis", "data"]
        )
    ]
)
```
### Error Handling
The A2A integration includes comprehensive error handling:
- **Validation Errors**: Input validation with clear error messages
- **Execution Errors**: Crew execution errors converted to A2A artifacts
- **Cancellation**: Proper task cancellation support
- **Timeouts**: Configurable timeout handling
## Best Practices
1. **Resource Management**: Monitor crew resource usage in server environments
2. **Error Handling**: Implement proper error handling in crew tasks
3. **Security**: Use appropriate authentication and authorization
4. **Monitoring**: Monitor A2A server performance and health
5. **Scaling**: Consider load balancing for high-traffic scenarios
## Limitations
- **Optional Dependency**: A2A support requires additional dependencies
- **Transport Support**: Currently supports Starlette transport only
- **Synchronous Crews**: Crews execute synchronously within async A2A context
## Examples
See the `examples/a2a_integration_example.py` file for a complete working example of A2A integration with CrewAI.

View File

@@ -200,6 +200,37 @@ Deploy the crew or flow to [CrewAI Enterprise](https://app.crewai.com).
```
- Reads your local project configuration.
- Prompts you to confirm the environment variables (like `OPENAI_API_KEY`, `SERPER_API_KEY`) found locally. These will be securely stored with the deployment on the Enterprise platform. Ensure your sensitive keys are correctly configured locally (e.g., in a `.env` file) before running this.
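The variables mentioned above typically live in a local `.env` file. A minimal example with placeholder values (the exact keys depend on your project) might look like:

```
OPENAI_API_KEY=sk-your-openai-key
SERPER_API_KEY=your-serper-key
```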
### 11. Organization Management
Manage your CrewAI Enterprise organizations.
```shell Terminal
crewai org [COMMAND] [OPTIONS]
```
#### Commands:
- `list`: List all organizations you belong to
```shell Terminal
crewai org list
```
- `current`: Display your currently active organization
```shell Terminal
crewai org current
```
- `switch`: Switch to a specific organization
```shell Terminal
crewai org switch <organization_id>
```
<Note>
You must be authenticated to CrewAI Enterprise to use these organization management commands.
</Note>
- **Create a deployment** (continued):
- Links the deployment to the corresponding remote GitHub repository (it usually detects this automatically).
- **Deploy the Crew**: Once you are authenticated, you can deploy your crew or flow to CrewAI Enterprise.

View File

@@ -29,6 +29,10 @@ my_crew = Crew(
From this point on, your crew will have planning enabled, and the tasks will be planned before each iteration.
<Warning>
When planning is enabled, CrewAI uses `gpt-4o-mini` as the default planning LLM, which requires a valid OpenAI API key. Since your agents might be using different LLMs, this can cause confusing failures if you don't have an OpenAI API key configured, or unexpected behavior related to LLM API calls.
</Warning>
#### Planning LLM
Now you can define the LLM that will be used to plan the tasks.

View File

@@ -32,6 +32,7 @@ The Enterprise Tools Repository includes:
- **Customizability**: Provides the flexibility to develop custom tools or utilize existing ones, catering to the specific needs of agents.
- **Error Handling**: Incorporates robust error handling mechanisms to ensure smooth operation.
- **Caching Mechanism**: Features intelligent caching to optimize performance and reduce redundant operations.
- **Asynchronous Support**: Handles both synchronous and asynchronous tools, enabling non-blocking operations.
## Using CrewAI Tools
@@ -177,6 +178,62 @@ class MyCustomTool(BaseTool):
return "Tool's result"
```
## Asynchronous Tool Support
CrewAI supports asynchronous tools, allowing you to implement tools that perform non-blocking operations like network requests, file I/O, or other async operations without blocking the main execution thread.
### Creating Async Tools
You can create async tools in two ways:
#### 1. Using the `tool` Decorator with Async Functions
```python Code
import asyncio

from crewai.tools import tool

@tool("fetch_data_async")
async def fetch_data_async(query: str) -> str:
    """Asynchronously fetch data based on the query."""
    # Simulate async operation
    await asyncio.sleep(1)
    return f"Data retrieved for {query}"
```
#### 2. Implementing Async Methods in Custom Tool Classes
```python Code
import asyncio

from crewai.tools import BaseTool

class AsyncCustomTool(BaseTool):
    name: str = "async_custom_tool"
    description: str = "An asynchronous custom tool"

    async def _run(self, query: str = "") -> str:
        """Asynchronously run the tool"""
        # Your async implementation here
        await asyncio.sleep(1)
        return f"Processed {query} asynchronously"
```
### Using Async Tools
Async tools work seamlessly in both standard Crew workflows and Flow-based workflows:
```python Code
from crewai import Agent, Crew
from crewai.flow.flow import Flow, start

# In a standard Crew: async tools are passed like any other tool
async_custom_tool = AsyncCustomTool()  # instance of the custom tool defined above
agent = Agent(role="researcher", tools=[async_custom_tool])

# In a Flow
class MyFlow(Flow):
    @start()
    async def begin(self):
        crew = Crew(agents=[agent])
        result = await crew.kickoff_async()
        return result
```
The CrewAI framework automatically handles the execution of both synchronous and asynchronous tools, so you don't need to worry about how to call them differently.
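Under the hood, dispatching both kinds of tools only requires checking whether a callable is a coroutine function. The sketch below is illustrative only (these are not CrewAI's actual internals, and all names are made up):

```python
import asyncio
import inspect

def run_tool(tool_fn, *args, **kwargs):
    """Run a tool whether it is sync or async (illustrative sketch)."""
    if inspect.iscoroutinefunction(tool_fn):
        # Collect the result from the coroutine
        return asyncio.run(tool_fn(*args, **kwargs))
    return tool_fn(*args, **kwargs)

def sync_lookup(query: str) -> str:
    return f"sync result for {query}"

async def async_lookup(query: str) -> str:
    await asyncio.sleep(0)  # stand-in for real non-blocking I/O
    return f"async result for {query}"

print(run_tool(sync_lookup, "crews"))   # sync result for crews
print(run_tool(async_lookup, "crews"))  # async result for crews
```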
### Utilizing the `tool` Decorator
```python Code

View File

@@ -9,7 +9,12 @@
},
"favicon": "images/favicon.svg",
"contextual": {
"options": ["copy", "view", "chatgpt", "claude"]
"options": [
"copy",
"view",
"chatgpt",
"claude"
]
},
"navigation": {
"tabs": [
@@ -201,6 +206,7 @@
"observability/arize-phoenix",
"observability/langfuse",
"observability/langtrace",
"observability/maxim",
"observability/mlflow",
"observability/openlit",
"observability/opik",
@@ -256,7 +262,8 @@
"enterprise/features/tool-repository",
"enterprise/features/webhook-streaming",
"enterprise/features/traces",
"enterprise/features/hallucination-guardrail"
"enterprise/features/hallucination-guardrail",
"enterprise/features/integrations"
]
},
{

View File

@@ -0,0 +1,185 @@
---
title: Integrations
description: "Connected applications for your agents to take actions."
icon: "plug"
---
## Overview
Enable your agents to authenticate with any OAuth-enabled provider and take actions. From Salesforce and HubSpot to Google and GitHub, we've got you covered with 16+ integrated services.
<Frame>
![Integrations](/images/enterprise/crew_connectors.png)
</Frame>
## Supported Integrations
### **Communication & Collaboration**
- **Gmail** - Manage emails and drafts
- **Slack** - Workspace notifications and alerts
- **Microsoft** - Office 365 and Teams integration
### **Project Management**
- **Jira** - Issue tracking and project management
- **ClickUp** - Task and productivity management
- **Asana** - Team task and project coordination
- **Notion** - Page and database management
- **Linear** - Software project and bug tracking
- **GitHub** - Repository and issue management
### **Customer Relationship Management**
- **Salesforce** - CRM account and opportunity management
- **HubSpot** - Sales pipeline and contact management
- **Zendesk** - Customer support ticket management
### **Business & Finance**
- **Stripe** - Payment processing and customer management
- **Shopify** - E-commerce store and product management
### **Productivity & Storage**
- **Google Sheets** - Spreadsheet data synchronization
- **Google Calendar** - Event and schedule management
- **Box** - File storage and document management
and more to come!
## Prerequisites
Before using Authentication Integrations, ensure you have:
- A [CrewAI Enterprise](https://app.crewai.com) account. You can get started with a free trial.
## Setting Up Integrations
### 1. Connect Your Account
1. Navigate to [CrewAI Enterprise](https://app.crewai.com)
2. Go to the **Integrations** tab: https://app.crewai.com/crewai_plus/connectors
3. Click **Connect** on your desired service from the Authentication Integrations section
4. Complete the OAuth authentication flow
5. Grant necessary permissions for your use case
6. Get your Enterprise Token from your [CrewAI Enterprise](https://app.crewai.com) account page - https://app.crewai.com/crewai_plus/settings/account
<Frame>
![Integrations](/images/enterprise/enterprise_action_auth_token.png)
</Frame>
### 2. Install Integration Tools
All you need is the latest version of the `crewai-tools` package.
```bash
uv add crewai-tools
```
## Usage Examples
### Basic Usage
<Tip>
All the services you are authenticated into will be available as tools. So all you need to do is add the `CrewaiEnterpriseTools` to your agent and you are good to go.
</Tip>
```python
from crewai import Agent, Task, Crew
from crewai_tools import CrewaiEnterpriseTools

# Get enterprise tools (Gmail tool will be included)
enterprise_tools = CrewaiEnterpriseTools(
    enterprise_token="your_enterprise_token"
)

# Print the available tools
print(enterprise_tools)

# Create an agent with Gmail capabilities
email_agent = Agent(
    role="Email Manager",
    goal="Manage and organize email communications",
    backstory="An AI assistant specialized in email management and communication.",
    tools=enterprise_tools  # CrewaiEnterpriseTools yields a list of tools
)

# Task to send an email
email_task = Task(
    description="Draft and send a follow-up email to john@example.com about the project update",
    agent=email_agent,
    expected_output="Confirmation that email was sent successfully"
)

# Run the crew
crew = Crew(
    agents=[email_agent],
    tasks=[email_task]
)
crew.kickoff()
```
### Filtering Tools
```python
from crewai import Agent, Task, Crew
from crewai_tools import CrewaiEnterpriseTools

enterprise_tools = CrewaiEnterpriseTools(
    enterprise_token="your_enterprise_token",
    actions_list=["gmail_find_email"]  # only the gmail_find_email tool will be available
)
gmail_tool = enterprise_tools[0]

gmail_agent = Agent(
    role="Gmail Manager",
    goal="Manage gmail communications and notifications",
    backstory="An AI assistant that helps coordinate gmail communications.",
    tools=[gmail_tool]
)

notification_task = Task(
    description="Find the email from john@example.com",
    agent=gmail_agent,
    expected_output="Email found from john@example.com"
)

# Run the task
crew = Crew(
    agents=[gmail_agent],
    tasks=[notification_task]
)
crew.kickoff()
```
## Best Practices
### Security
- **Principle of Least Privilege**: Only grant the minimum permissions required for your agents' tasks
- **Regular Audits**: Periodically review connected integrations and their permissions
- **Secure Credentials**: Never hardcode credentials; use CrewAI's secure authentication flow
### Filtering Tools
On a deployed crew, you can specify which actions are available for each integration from the settings page of the service you connected to.
<Frame>
![Integrations](/images/enterprise/filtering_enterprise_action_tools.png)
</Frame>
### Scoped Deployments for Multi-User Organizations
You can deploy your crew and scope each integration to a specific user. For example, a crew that connects to Google can use a specific user's Gmail account.
<Tip>
This is useful for multi-user organizations where you want to scope each integration to a specific user.
</Tip>
Use the `user_bearer_token` to scope the integration to a specific user: when the crew is kicked off, it authenticates with the integration using that user's bearer token. If the user is not logged in, the crew will not use any connected integrations. Use the default bearer token to authenticate with the integrations that are deployed with the crew.
<Frame>
![Integrations](/images/enterprise/user_bearer_token.png)
</Frame>
### Getting Help
<Card title="Need Help?" icon="headset" href="mailto:support@crewai.com">
Contact our support team for assistance with integration setup or troubleshooting.
</Card>

View File

@@ -277,22 +277,23 @@ This pattern allows you to combine direct data passing with state updates for ma
One of CrewAI's most powerful features is the ability to persist flow state across executions. This enables workflows that can be paused, resumed, and even recovered after failures.
### The @persist Decorator
### The @persist() Decorator
The `@persist` decorator automates state persistence, saving your flow's state at key points in execution.
The `@persist()` decorator automates state persistence, saving your flow's state at key points in execution.
#### Class-Level Persistence
When applied at the class level, `@persist` saves state after every method execution:
When applied at the class level, `@persist()` saves state after every method execution:
```python
from crewai.flow.flow import Flow, listen, persist, start
from crewai.flow.flow import Flow, listen, start
from crewai.flow.persistence import persist
from pydantic import BaseModel
class CounterState(BaseModel):
value: int = 0
@persist # Apply to the entire flow class
@persist() # Apply to the entire flow class
class PersistentCounterFlow(Flow[CounterState]):
@start()
def increment(self):
@@ -319,10 +320,11 @@ print(f"Second run result: {result2}") # Will be higher due to persisted state
#### Method-Level Persistence
For more granular control, you can apply `@persist` to specific methods:
For more granular control, you can apply `@persist()` to specific methods:
```python
from crewai.flow.flow import Flow, listen, persist, start
from crewai.flow.flow import Flow, listen, start
from crewai.flow.persistence import persist
class SelectivePersistFlow(Flow):
@start()
@@ -330,7 +332,7 @@ class SelectivePersistFlow(Flow):
self.state["count"] = 1
return "First step"
@persist # Only persist after this method
@persist() # Only persist after this method
@listen(first_step)
def important_step(self, prev_result):
self.state["count"] += 1

Four binary image files added (not shown): 1.2 MiB, 54 KiB, 362 KiB, 49 KiB.

View File

@@ -22,7 +22,7 @@ Watch this video tutorial for a step-by-step demonstration of the installation p
<Note>
**Python Version Requirements**
CrewAI requires `Python >=3.10 and <=3.13`. Here's how to check your version:
CrewAI requires `Python >=3.10 and <3.14`. Here's how to check your version:
```bash
python3 --version
```
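The updated range means every 3.13.x patch release is supported while 3.14 is not. Expressed programmatically (an illustrative check; the function name is made up):

```python
import sys

def in_supported_range(version_info=None) -> bool:
    """Check the documented CrewAI range: >=3.10 and <3.14."""
    vi = version_info or sys.version_info
    return (3, 10) <= (vi[0], vi[1]) < (3, 14)

print(in_supported_range((3, 13, 5)))  # True: all 3.13.x patch releases qualify
print(in_supported_range((3, 14, 0)))  # False
print(in_supported_range((3, 9, 18)))  # False
```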

View File

@@ -0,0 +1,152 @@
---
title: Maxim Integration
description: Start Agent monitoring, evaluation, and observability
icon: bars-staggered
---
# Maxim Integration
Maxim AI provides comprehensive agent monitoring, evaluation, and observability for your CrewAI applications. With Maxim's one-line integration, you can easily trace and analyse agent interactions, performance metrics, and more.
## Features: One Line Integration
- **End-to-End Agent Tracing**: Monitor the complete lifecycle of your agents
- **Performance Analytics**: Track latency, tokens consumed, and costs
- **Hyperparameter Monitoring**: View the configuration details of your agent runs
- **Tool Call Tracking**: Observe when and how agents use their tools
- **Advanced Visualisation**: Understand agent trajectories through intuitive dashboards
## Getting Started
### Prerequisites
- Python version >=3.10
- A Maxim account ([sign up here](https://getmaxim.ai/))
- A CrewAI project
### Installation
Install the Maxim SDK via pip (quoting prevents the shell from treating `>=` as a redirect):
```bash
pip install "maxim-py>=3.6.2"
```
Or add it to your `requirements.txt`:
```
maxim-py>=3.6.2
```
### Basic Setup
### 1. Set up environment variables
Create a `.env` file in your project root:

```bash
# Maxim API Configuration
MAXIM_API_KEY=your_api_key_here
MAXIM_LOG_REPO_ID=your_repo_id_here
```
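A quick sanity check that both variables are visible to your process (illustrative only; the placeholder values stand in for your real credentials):

```python
import os

# Placeholder values for illustration; in practice these come from your .env
os.environ.setdefault("MAXIM_API_KEY", "your_api_key_here")
os.environ.setdefault("MAXIM_LOG_REPO_ID", "your_repo_id_here")

missing = [key for key in ("MAXIM_API_KEY", "MAXIM_LOG_REPO_ID")
           if not os.environ.get(key)]
print(missing)  # [] when both variables are set
```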
### 2. Import the required packages
```python
from crewai import Agent, Task, Crew, Process
from maxim import Maxim
from maxim.logger.crewai import instrument_crewai
```
### 3. Initialise Maxim with your API key
```python
# Initialize Maxim and grab its logger
maxim = Maxim()
logger = maxim.logger()

# Instrument CrewAI with just one line
instrument_crewai(logger)
```
### 4. Create and run your CrewAI application as usual
```python
# Create your agent
researcher = Agent(
    role='Senior Research Analyst',
    goal='Uncover cutting-edge developments in AI',
    backstory="You are an expert researcher at a tech think tank...",
    verbose=True,
    llm=llm  # any LLM you have configured for your project
)

# Define the task
research_task = Task(
    description="Research the latest AI advancements...",
    expected_output="A summary of the latest AI advancements",
    agent=researcher
)

# Configure and run the crew
crew = Crew(
    agents=[researcher],
    tasks=[research_task],
    verbose=True
)

try:
    result = crew.kickoff()
finally:
    maxim.cleanup()  # Ensure cleanup happens even if errors occur
```
That's it! All your CrewAI agent interactions will now be logged and available in your Maxim dashboard.
Check this Google Colab Notebook for a quick reference - [Notebook](https://colab.research.google.com/drive/1ZKIZWsmgQQ46n8TH9zLsT1negKkJA6K8?usp=sharing)
## Viewing Your Traces
After running your CrewAI application:
![Example trace in Maxim showing agent interactions](https://raw.githubusercontent.com/maximhq/maxim-docs/master/images/Screenshot2025-05-14at12.10.58PM.png)
1. Log in to your [Maxim Dashboard](https://getmaxim.ai/dashboard)
2. Navigate to your repository
3. View detailed agent traces, including:
- Agent conversations
- Tool usage patterns
- Performance metrics
- Cost analytics
## Troubleshooting
### Common Issues
- **No traces appearing**: Ensure your API key and repository ID are correct
- Ensure you've **called `instrument_crewai()`** ***before*** running your crew. This initializes logging hooks correctly.
- Set `debug=True` in your `instrument_crewai()` call to surface any internal errors:
```python
instrument_crewai(logger, debug=True)
```
- Configure your agents with `verbose=True` to capture detailed logs:
```python
agent = Agent(..., verbose=True)
```
- Double-check that `instrument_crewai()` is called **before** creating or executing agents. This might be obvious, but it's a common oversight.
### Support
If you encounter any issues:
- Check the [Maxim Documentation](https://getmaxim.ai/docs)
- Maxim Github [Link](https://github.com/maximhq)

View File

@@ -1,64 +0,0 @@
"""Example: CrewAI A2A Integration
This example demonstrates how to expose a CrewAI crew as an A2A (Agent-to-Agent)
protocol server for remote interoperability.
Requirements:
pip install crewai[a2a]
"""
from crewai import Agent, Crew, Task
from crewai.a2a import CrewAgentExecutor, start_a2a_server
def main():
    """Create and start an A2A server with a CrewAI crew."""
    researcher = Agent(
        role="Research Analyst",
        goal="Provide comprehensive research and analysis on any topic",
        backstory=(
            "You are an experienced research analyst with expertise in "
            "gathering, analyzing, and synthesizing information from various sources."
        ),
        verbose=True
    )

    research_task = Task(
        description=(
            "Research and analyze the topic: {query}\n"
            "Provide a comprehensive overview including:\n"
            "- Key concepts and definitions\n"
            "- Current trends and developments\n"
            "- Important considerations\n"
            "- Relevant examples or case studies"
        ),
        agent=researcher,
        expected_output="A detailed research report with analysis and insights"
    )

    research_crew = Crew(
        agents=[researcher],
        tasks=[research_task],
        verbose=True
    )

    executor = CrewAgentExecutor(
        crew=research_crew,
        supported_content_types=['text', 'text/plain', 'application/json']
    )

    print("Starting A2A server with CrewAI research crew...")
    print("Server will be available at http://localhost:10001")
    print("Use the A2A CLI or SDK to interact with the crew remotely")

    start_a2a_server(
        executor,
        host="0.0.0.0",
        port=10001,
        transport="starlette"
    )

if __name__ == "__main__":
    main()

View File

@@ -11,7 +11,7 @@ dependencies = [
# Core Dependencies
"pydantic>=2.4.2",
"openai>=1.13.3",
"litellm==1.68.0",
"litellm==1.72.0",
"instructor>=1.3.3",
# Text Processing
"pdfplumber>=0.11.4",
@@ -65,8 +65,8 @@ mem0 = ["mem0ai>=0.1.94"]
docling = [
"docling>=2.12.0",
]
a2a = [
"a2a-sdk>=0.0.1",
aisuite = [
"aisuite>=0.1.10",
]
[tool.uv]

View File

@@ -32,13 +32,3 @@ __all__ = [
"TaskOutput",
"LLMGuardrail",
]
try:
    from crewai.a2a import (  # noqa: F401
        CrewAgentExecutor,
        start_a2a_server,
        create_a2a_app
    )

    __all__.extend(["CrewAgentExecutor", "start_a2a_server", "create_a2a_app"])
except ImportError:
    pass

View File

@@ -1,62 +0,0 @@
"""A2A (Agent-to-Agent) protocol integration for CrewAI.
This module provides integration with the A2A protocol to enable remote agent
interoperability. It allows CrewAI crews to be exposed as A2A-compatible agents
that can communicate with other agents following the A2A protocol standard.
The integration is optional and requires the 'a2a' extra dependency:
pip install crewai[a2a]
Example:
from crewai import Agent, Crew, Task
from crewai.a2a import CrewAgentExecutor, start_a2a_server
agent = Agent(role="Assistant", goal="Help users", backstory="Helpful AI")
task = Task(description="Help with {query}", agent=agent)
crew = Crew(agents=[agent], tasks=[task])
executor = CrewAgentExecutor(crew)
start_a2a_server(executor, host="localhost", port=8080)
"""
try:
    from .crew_agent_executor import CrewAgentExecutor
    from .server import start_a2a_server, create_a2a_app
    from .server_config import ServerConfig
    from .task_info import TaskInfo
    from .exceptions import A2AServerError, TransportError, ExecutionError

    __all__ = [
        "CrewAgentExecutor",
        "start_a2a_server",
        "create_a2a_app",
        "ServerConfig",
        "TaskInfo",
        "A2AServerError",
        "TransportError",
        "ExecutionError"
    ]
except ImportError:
    import warnings

    warnings.warn(
        "A2A integration requires the 'a2a' extra dependency. "
        "Install with: pip install crewai[a2a]",
        ImportWarning
    )

    def _missing_dependency(*args, **kwargs):
        raise ImportError(
            "A2A integration requires the 'a2a' extra dependency. "
            "Install with: pip install crewai[a2a]"
        )

    CrewAgentExecutor = _missing_dependency  # type: ignore
    start_a2a_server = _missing_dependency  # type: ignore
    create_a2a_app = _missing_dependency  # type: ignore
    ServerConfig = _missing_dependency  # type: ignore
    TaskInfo = _missing_dependency  # type: ignore
    A2AServerError = _missing_dependency  # type: ignore
    TransportError = _missing_dependency  # type: ignore
    ExecutionError = _missing_dependency  # type: ignore

    __all__ = []

View File

@@ -1,255 +0,0 @@
"""CrewAI Agent Executor for A2A Protocol Integration.
This module implements the A2A AgentExecutor interface to enable CrewAI crews
to participate in the Agent-to-Agent protocol for remote interoperability.
"""
import asyncio
import json
import logging
from typing import Any, Dict, Optional
from crewai import Crew
from crewai.crew import CrewOutput
from .task_info import TaskInfo
try:
from a2a.server.agent_execution.agent_executor import AgentExecutor
from a2a.server.agent_execution.context import RequestContext
from a2a.server.events.event_queue import EventQueue
from a2a.types import (
InvalidParamsError,
Part,
Task,
TextPart,
UnsupportedOperationError,
)
from a2a.utils import completed_task, new_artifact
from a2a.utils.errors import ServerError
except ImportError:
raise ImportError(
"A2A integration requires the 'a2a' extra dependency. "
"Install with: pip install crewai[a2a]"
)
logger = logging.getLogger(__name__)
class CrewAgentExecutor(AgentExecutor):
"""A2A Agent Executor that wraps CrewAI crews for remote interoperability.
This class implements the A2A AgentExecutor interface to enable CrewAI crews
to be exposed as remotely interoperable agents following the A2A protocol.
Args:
crew: The CrewAI crew to expose as an A2A agent
supported_content_types: List of supported content types for input
Example:
from crewai import Agent, Crew, Task
from crewai.a2a import CrewAgentExecutor
agent = Agent(role="Assistant", goal="Help users", backstory="Helpful AI")
task = Task(description="Help with {query}", agent=agent)
crew = Crew(agents=[agent], tasks=[task])
executor = CrewAgentExecutor(crew)
"""
def __init__(
self,
crew: Crew,
supported_content_types: Optional[list[str]] = None
):
"""Initialize the CrewAgentExecutor.
Args:
crew: The CrewAI crew to wrap
supported_content_types: List of supported content types
"""
self.crew = crew
self.supported_content_types = supported_content_types or [
'text', 'text/plain'
]
self._running_tasks: Dict[str, TaskInfo] = {}
async def execute(
self,
context: RequestContext,
event_queue: EventQueue,
) -> None:
"""Execute the crew with the given context and publish results to event queue.
This method extracts the user input from the request context, executes
the CrewAI crew, and publishes the results as A2A artifacts.
Args:
context: The A2A request context containing task details
event_queue: Queue for publishing execution events and results
Raises:
ServerError: If validation fails or execution encounters an error
"""
error = self._validate_request(context)
if error:
logger.error(f"Request validation failed: {error}")
raise ServerError(error=InvalidParamsError())
query = context.get_user_input()
task_id = context.task_id
context_id = context.context_id
if not task_id or not context_id:
raise ServerError(error=InvalidParamsError())
logger.info(f"Executing crew for task {task_id} with query: {query}")
try:
inputs = {"query": query}
execution_task = asyncio.create_task(
self._execute_crew_async(inputs)
)
from datetime import datetime
self._running_tasks[task_id] = TaskInfo(
task=execution_task,
started_at=datetime.now(),
status="running"
)
result = await execution_task
self._running_tasks.pop(task_id, None)
logger.info(f"Crew execution completed for task {task_id}")
parts = self._convert_output_to_parts(result)
messages = [context.message] if context.message else []
event_queue.enqueue_event(
completed_task(
task_id,
context_id,
[new_artifact(parts, f"crew_output_{task_id}")],
messages,
)
)
except asyncio.CancelledError:
logger.info(f"Task {task_id} was cancelled")
self._running_tasks.pop(task_id, None)
raise
except Exception as e:
logger.error(f"Error executing crew for task {task_id}: {e}")
self._running_tasks.pop(task_id, None)
error_parts = [
Part(root=TextPart(text=f"Error executing crew: {str(e)}"))
]
messages = [context.message] if context.message else []
event_queue.enqueue_event(
completed_task(
task_id,
context_id,
[new_artifact(error_parts, f"error_{task_id}")],
messages,
)
)
raise ServerError(
error=InvalidParamsError()
) from e
async def cancel(
self,
request: RequestContext,
event_queue: EventQueue
) -> Task | None:
"""Cancel a running crew execution.
Args:
request: The A2A request context for the task to cancel
event_queue: Event queue for publishing cancellation events
Returns:
None (cancellation is handled internally)
Raises:
ServerError: If the task cannot be cancelled
"""
task_id = request.task_id
if task_id in self._running_tasks:
task_info = self._running_tasks[task_id]
task_info.task.cancel()
task_info.update_status("cancelled")
try:
await task_info.task
except asyncio.CancelledError:
logger.info(f"Successfully cancelled task {task_id}")
self._running_tasks.pop(task_id, None)
return None
else:
logger.warning(f"Task {task_id} not found for cancellation")
raise ServerError(error=UnsupportedOperationError())
async def _execute_crew_async(self, inputs: Dict[str, Any]) -> CrewOutput:
"""Execute the crew asynchronously.
Args:
inputs: Input parameters for the crew
Returns:
The crew execution output
"""
loop = asyncio.get_running_loop()
return await loop.run_in_executor(None, self.crew.kickoff, inputs)
def _convert_output_to_parts(self, result: CrewOutput) -> list[Part]:
"""Convert CrewAI output to A2A Parts.
Args:
result: The crew execution result
Returns:
List of A2A Parts representing the output
"""
parts = []
if hasattr(result, 'raw') and result.raw:
parts.append(Part(root=TextPart(text=str(result.raw))))
elif result:
parts.append(Part(root=TextPart(text=str(result))))
if hasattr(result, 'json_dict') and result.json_dict:
json_output = json.dumps(result.json_dict, indent=2)
parts.append(Part(root=TextPart(text=json_output)))
if not parts:
parts.append(Part(root=TextPart(text="Crew execution completed successfully")))
return parts
def _validate_request(self, context: RequestContext) -> Optional[str]:
"""Validate the incoming request context.
Args:
context: The A2A request context to validate
Returns:
Error message if validation fails, None if valid
"""
try:
user_input = context.get_user_input()
if not user_input or not user_input.strip():
return "Empty or missing user input"
return None
except Exception as e:
return f"Failed to extract user input: {e}"
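The cancel flow above — look up the tracked asyncio task, cancel it, await it while swallowing `CancelledError`, then drop it from the registry — can be sketched standalone. This is an illustrative reduction of the pattern, not crewAI's executor:

```python
import asyncio

# Standalone sketch of the cancellation pattern used by the executor above:
# a registry maps task ids to asyncio.Tasks; cancel() removes the entry
# after the task has been cancelled and awaited.
running: dict[str, asyncio.Task] = {}

async def main() -> bool:
    async def long_job():
        await asyncio.sleep(60)

    running["task-1"] = asyncio.create_task(long_job())
    task = running["task-1"]
    task.cancel()
    try:
        await task
    except asyncio.CancelledError:
        # Expected: awaiting a cancelled task re-raises CancelledError.
        pass
    running.pop("task-1", None)
    return task.cancelled()

cancelled = asyncio.run(main())
```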


@@ -1,16 +0,0 @@
"""Custom exceptions for A2A integration."""
class A2AServerError(Exception):
"""Base exception for A2A server errors."""
pass
class TransportError(A2AServerError):
"""Error related to transport configuration."""
pass
class ExecutionError(A2AServerError):
"""Error during crew execution."""
pass
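Since both subclasses derive from `A2AServerError`, callers can catch either a specific failure or anything A2A-related via the base class. A standalone sketch (classes redefined locally for illustration):

```python
# Standalone sketch of the exception hierarchy above; the classes are
# redefined here for illustration rather than imported from crewai.
class A2AServerError(Exception):
    """Base exception for A2A server errors."""

class TransportError(A2AServerError):
    """Error related to transport configuration."""

def configure(transport: str) -> str:
    # Hypothetical helper used only to demonstrate raising TransportError.
    if transport != "starlette":
        raise TransportError(f"Unsupported transport: {transport}")
    return transport

# A TransportError is also catchable as the A2AServerError base class.
try:
    configure("fastapi")
except A2AServerError as e:
    caught = type(e).__name__
```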


@@ -1,151 +0,0 @@
"""A2A Server utilities for CrewAI integration.
This module provides convenience functions for starting A2A servers with CrewAI
crews, supporting multiple transport protocols and configurations.
"""
import logging
from typing import Optional
from .exceptions import TransportError
from .server_config import ServerConfig
try:
from a2a.server.agent_execution.agent_executor import AgentExecutor
from a2a.server.apps import A2AStarletteApplication
from a2a.server.request_handlers.default_request_handler import DefaultRequestHandler
from a2a.server.tasks import InMemoryTaskStore
from a2a.types import AgentCard, AgentCapabilities, AgentSkill
except ImportError:
raise ImportError(
"A2A integration requires the 'a2a' extra dependency. "
"Install with: pip install crewai[a2a]"
)
logger = logging.getLogger(__name__)
def start_a2a_server(
agent_executor: AgentExecutor,
host: str = "localhost",
port: int = 10001,
transport: str = "starlette",
config: Optional[ServerConfig] = None,
**kwargs
) -> None:
"""Start an A2A server with the given agent executor.
This is a convenience function that creates and starts an A2A server
with the specified configuration.
Args:
agent_executor: The A2A agent executor to serve
host: Host address to bind the server to
port: Port number to bind the server to
transport: Transport protocol to use ("starlette" or "fastapi")
config: Optional ServerConfig object to override individual parameters
**kwargs: Additional arguments passed to the server
Example:
from crewai import Agent, Crew, Task
from crewai.a2a import CrewAgentExecutor, start_a2a_server
agent = Agent(role="Assistant", goal="Help users", backstory="Helpful AI")
task = Task(description="Help with {query}", agent=agent)
crew = Crew(agents=[agent], tasks=[task])
executor = CrewAgentExecutor(crew)
start_a2a_server(executor, host="0.0.0.0", port=8080)
"""
if config:
host = config.host
port = config.port
transport = config.transport
app = create_a2a_app(
agent_executor,
transport=transport,
agent_name=config.agent_name if config else None,
agent_description=config.agent_description if config else None,
**kwargs
)
logger.info(f"Starting A2A server on {host}:{port} using {transport} transport")
try:
import uvicorn
uvicorn.run(app, host=host, port=port)
except ImportError:
raise ImportError("uvicorn is required to run the A2A server. Install with: pip install uvicorn")
def create_a2a_app(
agent_executor: AgentExecutor,
transport: str = "starlette",
agent_name: Optional[str] = None,
agent_description: Optional[str] = None,
**kwargs
):
"""Create an A2A application with the given agent executor.
This function creates an A2A server application that can be run
with any ASGI server.
Args:
agent_executor: The A2A agent executor to serve
transport: Transport protocol to use ("starlette" or "fastapi")
agent_name: Optional name for the agent
agent_description: Optional description for the agent
**kwargs: Additional arguments passed to the transport
Returns:
ASGI application ready to be served
Example:
from crewai.a2a import CrewAgentExecutor, create_a2a_app
executor = CrewAgentExecutor(crew)
app = create_a2a_app(
executor,
agent_name="My Crew Agent",
agent_description="A helpful CrewAI agent"
)
import uvicorn
uvicorn.run(app, host="0.0.0.0", port=8080)
"""
agent_card = AgentCard(
name=agent_name or "CrewAI Agent",
description=agent_description or "A CrewAI agent exposed via A2A protocol",
version="1.0.0",
capabilities=AgentCapabilities(
streaming=True,
pushNotifications=False
),
defaultInputModes=["text"],
defaultOutputModes=["text"],
skills=[
AgentSkill(
id="crew_execution",
name="Crew Execution",
description="Execute CrewAI crew tasks with multiple agents",
examples=["Process user queries", "Coordinate multi-agent workflows"],
tags=["crewai", "multi-agent", "workflow"]
)
],
url="https://github.com/crewAIInc/crewAI"
)
task_store = InMemoryTaskStore()
request_handler = DefaultRequestHandler(agent_executor, task_store)
if transport.lower() == "fastapi":
raise TransportError("FastAPI transport is not available in the current A2A SDK version")
else:
app_instance = A2AStarletteApplication(
agent_card=agent_card,
http_handler=request_handler,
**kwargs
)
return app_instance.build()


@@ -1,25 +0,0 @@
"""Server configuration for A2A integration."""
from dataclasses import dataclass
from typing import Optional
@dataclass
class ServerConfig:
"""Configuration for A2A server.
This class encapsulates server settings to improve readability
and flexibility for server setups.
Attributes:
host: Host address to bind the server to
port: Port number to bind the server to
transport: Transport protocol to use ("starlette" or "fastapi")
agent_name: Optional name for the agent
agent_description: Optional description for the agent
"""
host: str = "localhost"
port: int = 10001
transport: str = "starlette"
agent_name: Optional[str] = None
agent_description: Optional[str] = None
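Because `ServerConfig` is a plain dataclass, any field not passed keeps its default. A standalone sketch mirroring the fields above (the class is redefined locally for illustration):

```python
from dataclasses import dataclass
from typing import Optional

# Local stand-in mirroring the ServerConfig fields shown above.
@dataclass
class ServerConfig:
    host: str = "localhost"
    port: int = 10001
    transport: str = "starlette"
    agent_name: Optional[str] = None
    agent_description: Optional[str] = None

# Partial override: unspecified fields keep their defaults.
config = ServerConfig(port=8080, agent_name="My Crew Agent")
```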


@@ -1,47 +0,0 @@
"""Task information tracking for A2A integration."""
from dataclasses import dataclass
from datetime import datetime
from typing import Optional
import asyncio
@dataclass
class TaskInfo:
"""Information about a running task in the A2A executor.
This class tracks the lifecycle and status of tasks being executed
by the CrewAgentExecutor, providing better task management capabilities.
Attributes:
task: The asyncio task being executed
started_at: When the task was started
status: Current status of the task ("running", "completed", "cancelled", "failed")
"""
task: asyncio.Task
started_at: datetime
status: str = "running"
def update_status(self, new_status: str) -> None:
"""Update the task status.
Args:
new_status: The new status to set
"""
self.status = new_status
@property
def is_running(self) -> bool:
"""Check if the task is currently running."""
return self.status == "running" and not self.task.done()
@property
def duration(self) -> Optional[float]:
"""Get the duration of the task in seconds.
Returns:
Duration in seconds if task is completed, None if still running
"""
if self.task.done():
return (datetime.now() - self.started_at).total_seconds()
return None
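The `is_running` property combines the stored status string with `asyncio.Task.done()`. A standalone sketch of the same pattern (a local stand-in class, not crewAI's `TaskInfo`):

```python
import asyncio
from dataclasses import dataclass
from datetime import datetime

# Minimal local copy of the tracking pattern above, for illustration only.
@dataclass
class TrackedTask:
    task: asyncio.Task
    started_at: datetime
    status: str = "running"

    @property
    def is_running(self) -> bool:
        return self.status == "running" and not self.task.done()

async def main():
    async def work():
        await asyncio.sleep(0)

    info = TrackedTask(task=asyncio.create_task(work()), started_at=datetime.now())
    was_running = info.is_running  # task scheduled but not yet finished
    await info.task                # after this, task.done() is True
    return was_running, info.is_running

was_running, still_running = asyncio.run(main())
```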


@@ -1,6 +1,7 @@
from rich.console import Console
from rich.table import Table
from requests import HTTPError
from crewai.cli.command import BaseCommand, PlusAPIMixin
from crewai.cli.config import Settings
@@ -16,7 +17,7 @@ class OrganizationCommand(BaseCommand, PlusAPIMixin):
response = self.plus_api_client.get_organizations()
response.raise_for_status()
orgs = response.json()
if not orgs:
console.print("You don't belong to any organizations yet.", style="yellow")
return
@@ -26,8 +27,14 @@ class OrganizationCommand(BaseCommand, PlusAPIMixin):
table.add_column("ID", style="green")
for org in orgs:
table.add_row(org["name"], org["uuid"])
console.print(table)
except HTTPError as e:
if e.response.status_code == 401:
console.print("You are not logged in to any organization. Use 'crewai login' to log in.", style="bold red")
return
console.print(f"Failed to retrieve organization list: {str(e)}", style="bold red")
raise SystemExit(1)
except Exception as e:
console.print(f"Failed to retrieve organization list: {str(e)}", style="bold red")
raise SystemExit(1)
@@ -37,18 +44,24 @@ class OrganizationCommand(BaseCommand, PlusAPIMixin):
response = self.plus_api_client.get_organizations()
response.raise_for_status()
orgs = response.json()
org = next((o for o in orgs if o["uuid"] == org_id), None)
if not org:
console.print(f"Organization with id '{org_id}' not found.", style="bold red")
return
settings = Settings()
settings.org_name = org["name"]
settings.org_uuid = org["uuid"]
settings.dump()
console.print(f"Successfully switched to {org['name']} ({org['uuid']})", style="bold green")
except HTTPError as e:
if e.response.status_code == 401:
console.print("You are not logged in to any organization. Use 'crewai login' to log in.", style="bold red")
return
console.print(f"Failed to retrieve organization list: {str(e)}", style="bold red")
raise SystemExit(1)
except Exception as e:
console.print(f"Failed to switch organization: {str(e)}", style="bold red")
raise SystemExit(1)


@@ -91,6 +91,7 @@ class ToolCommand(BaseCommand, PlusAPIMixin):
console.print(
f"[green]Found these tools to publish: {', '.join([e['name'] for e in available_exports])}[/green]"
)
self._print_current_organization()
with tempfile.TemporaryDirectory() as temp_build_dir:
subprocess.run(
@@ -136,6 +137,7 @@ class ToolCommand(BaseCommand, PlusAPIMixin):
)
def install(self, handle: str):
self._print_current_organization()
get_response = self.plus_api_client.get_tool(handle)
if get_response.status_code == 404:
@@ -182,7 +184,7 @@ class ToolCommand(BaseCommand, PlusAPIMixin):
settings.dump()
console.print(
"Successfully authenticated to the tool repository.", style="bold green"
f"Successfully authenticated to the tool repository as {settings.org_name} ({settings.org_uuid}).", style="bold green"
)
def _add_package(self, tool_details: dict[str, Any]):
@@ -240,3 +242,10 @@ class ToolCommand(BaseCommand, PlusAPIMixin):
)
return env
def _print_current_organization(self):
settings = Settings()
if settings.org_uuid:
console.print(f"Current organization: {settings.org_name} ({settings.org_uuid})", style="bold blue")
else:
console.print("No organization currently set. We recommend setting one with the `crewai org switch <org_id>` command.", style="yellow")


@@ -655,8 +655,6 @@ class Crew(FlowTrackable, BaseModel):
if self.planning:
self._handle_crew_planning()
metrics: List[UsageMetrics] = []
if self.process == Process.sequential:
result = self._run_sequential_process()
elif self.process == Process.hierarchical:
@@ -669,11 +667,8 @@ class Crew(FlowTrackable, BaseModel):
for after_callback in self.after_kickoff_callbacks:
result = after_callback(result)
metrics += [agent._token_process.get_summary() for agent in self.agents]
self.usage_metrics = self.calculate_usage_metrics()
self.usage_metrics = UsageMetrics()
for metric in metrics:
self.usage_metrics.add_usage_metrics(metric)
return result
except Exception as e:
crewai_event_bus.emit(
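The hunk above drops the manual per-agent summation in favor of `calculate_usage_metrics()`, so the manager agent's tokens are no longer lost. The aggregation idea can be sketched standalone (names here are illustrative, not crewAI's API):

```python
from dataclasses import dataclass

# Hedged sketch of the aggregation fix above: the total must include the
# manager agent's token summary, not only the worker agents'.
@dataclass
class UsageSummary:
    total_tokens: int = 0

    def add(self, other: "UsageSummary") -> None:
        self.total_tokens += other.total_tokens

agent_summaries = [UsageSummary(100), UsageSummary(250)]
manager_summary = UsageSummary(40)  # previously dropped from the total

total = UsageSummary()
for summary in agent_summaries + [manager_summary]:
    total.add(summary)
```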


@@ -1,5 +1,7 @@
from __future__ import annotations
import asyncio
import inspect
import textwrap
from typing import Any, Callable, Optional, Union, get_type_hints
@@ -239,7 +241,17 @@ class CrewStructuredTool:
) -> Any:
"""Main method for tool execution."""
parsed_args = self._parse_args(input)
return self.func(**parsed_args, **kwargs)
if inspect.iscoroutinefunction(self.func):
result = asyncio.run(self.func(**parsed_args, **kwargs))
return result
result = self.func(**parsed_args, **kwargs)
if asyncio.iscoroutine(result):
return asyncio.run(result)
return result
@property
def args(self) -> dict:
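The new dispatch in `invoke` can be illustrated standalone: run coroutine functions (and coroutine results) to completion with `asyncio.run`, and pass synchronous results through unchanged. `invoke` below is a local sketch of the pattern, not the `CrewStructuredTool` method:

```python
import asyncio
import inspect

# Standalone sketch of the sync/async dispatch added above.
def invoke(func, **kwargs):
    if inspect.iscoroutinefunction(func):
        return asyncio.run(func(**kwargs))
    result = func(**kwargs)
    if asyncio.iscoroutine(result):
        # func was sync but returned a coroutine; run it to completion.
        return asyncio.run(result)
    return result

async def async_tool(x: int) -> int:
    return x * 2

def sync_tool(x: int) -> int:
    return x + 1
```

Note that `asyncio.run` raises if an event loop is already running in the calling thread, so this pattern suits synchronous call sites.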


@@ -20,7 +20,10 @@ from crewai.utilities.errors import AgentRepositoryError
from crewai.utilities.exceptions.context_window_exceeding_exception import (
LLMContextLengthExceededException,
)
from rich.console import Console
from crewai.cli.config import Settings
console = Console()
def parse_tools(tools: List[BaseTool]) -> List[CrewStructuredTool]:
"""Parse tools to be used for the task."""
@@ -435,6 +438,13 @@ def show_agent_logs(
)
def _print_current_organization():
settings = Settings()
if settings.org_uuid:
console.print(f"Fetching agent from organization: {settings.org_name} ({settings.org_uuid})", style="bold blue")
else:
console.print("No organization currently set. We recommend setting one with the `crewai org switch <org_id>` command.", style="yellow")
def load_agent_from_repository(from_repository: str) -> Dict[str, Any]:
attributes: Dict[str, Any] = {}
if from_repository:
@@ -444,15 +454,18 @@ def load_agent_from_repository(from_repository: str) -> Dict[str, Any]:
from crewai.cli.plus_api import PlusAPI
client = PlusAPI(api_key=get_auth_token())
_print_current_organization()
response = client.get_agent(from_repository)
if response.status_code == 404:
raise AgentRepositoryError(
f"Agent {from_repository} does not exist, make sure the name is correct or the agent is available on your organization"
f"Agent {from_repository} does not exist; make sure the name is correct and the agent is available in your organization."
f"\nIf you are using the wrong organization, switch to the correct one with the `crewai org switch <org_id>` command.",
)
if response.status_code != 200:
raise AgentRepositoryError(
f"Agent {from_repository} could not be loaded: {response.text}"
f"\nIf you are using the wrong organization, switch to the correct one with the `crewai org switch <org_id>` command.",
)
agent = response.json()


@@ -1 +0,0 @@
"""Tests for CrewAI A2A integration."""


@@ -1,198 +0,0 @@
"""Tests for CrewAgentExecutor class."""
import asyncio
import pytest
from unittest.mock import Mock, patch
from crewai.crews.crew_output import CrewOutput
try:
from crewai.a2a import CrewAgentExecutor
from a2a.server.agent_execution import RequestContext
from a2a.server.events import EventQueue
from a2a.utils.errors import ServerError
A2A_AVAILABLE = True
except ImportError:
A2A_AVAILABLE = False
@pytest.mark.skipif(not A2A_AVAILABLE, reason="A2A integration not available")
class TestCrewAgentExecutor:
"""Test cases for CrewAgentExecutor."""
@pytest.fixture
def sample_crew(self):
"""Create a sample crew for testing."""
from unittest.mock import Mock
mock_crew = Mock()
mock_crew.agents = []
mock_crew.tasks = []
return mock_crew
@pytest.fixture
def crew_executor(self, sample_crew):
"""Create a CrewAgentExecutor for testing."""
return CrewAgentExecutor(sample_crew)
@pytest.fixture
def mock_context(self):
"""Create a mock RequestContext."""
from a2a.types import Message, Part, TextPart
context = Mock(spec=RequestContext)
context.task_id = "test-task-123"
context.context_id = "test-context-456"
context.message = Message(
messageId="msg-123",
taskId="test-task-123",
contextId="test-context-456",
role="user",
parts=[Part(root=TextPart(text="Test message"))]
)
context.get_user_input.return_value = "Test query"
return context
@pytest.fixture
def mock_event_queue(self):
"""Create a mock EventQueue."""
return Mock(spec=EventQueue)
def test_init(self, sample_crew):
"""Test CrewAgentExecutor initialization."""
executor = CrewAgentExecutor(sample_crew)
assert executor.crew == sample_crew
assert executor.supported_content_types == ['text', 'text/plain']
assert executor._running_tasks == {}
def test_init_with_custom_content_types(self, sample_crew):
"""Test CrewAgentExecutor initialization with custom content types."""
custom_types = ['text', 'application/json']
executor = CrewAgentExecutor(sample_crew, supported_content_types=custom_types)
assert executor.supported_content_types == custom_types
@pytest.mark.asyncio
async def test_execute_success(self, crew_executor, mock_context, mock_event_queue):
"""Test successful crew execution."""
mock_output = CrewOutput(raw="Test response", json_dict=None)
with patch.object(crew_executor, '_execute_crew_async', return_value=mock_output):
await crew_executor.execute(mock_context, mock_event_queue)
mock_event_queue.enqueue_event.assert_called_once()
assert len(crew_executor._running_tasks) == 0
@pytest.mark.asyncio
async def test_execute_with_validation_error(self, crew_executor, mock_event_queue):
"""Test execution with validation error."""
bad_context = Mock(spec=RequestContext)
bad_context.get_user_input.return_value = ""
with pytest.raises(ServerError):
await crew_executor.execute(bad_context, mock_event_queue)
@pytest.mark.asyncio
async def test_execute_with_crew_error(self, crew_executor, mock_context, mock_event_queue):
"""Test execution when crew raises an error."""
with patch.object(crew_executor, '_execute_crew_async', side_effect=Exception("Crew error")):
with pytest.raises(ServerError):
await crew_executor.execute(mock_context, mock_event_queue)
mock_event_queue.enqueue_event.assert_called_once()
@pytest.mark.asyncio
async def test_cancel_existing_task(self, crew_executor, mock_event_queue):
"""Test cancelling an existing task."""
cancel_context = Mock(spec=RequestContext)
cancel_context.task_id = "test-task-123"
async def dummy_task():
await asyncio.sleep(10)
mock_task = asyncio.create_task(dummy_task())
from crewai.a2a.crew_agent_executor import TaskInfo
from datetime import datetime
task_info = TaskInfo(task=mock_task, started_at=datetime.now())
crew_executor._running_tasks["test-task-123"] = task_info
result = await crew_executor.cancel(cancel_context, mock_event_queue)
assert result is None
assert "test-task-123" not in crew_executor._running_tasks
assert mock_task.cancelled()
@pytest.mark.asyncio
async def test_cancel_nonexistent_task(self, crew_executor, mock_event_queue):
"""Test cancelling a task that doesn't exist."""
cancel_context = Mock(spec=RequestContext)
cancel_context.task_id = "nonexistent-task"
with pytest.raises(ServerError):
await crew_executor.cancel(cancel_context, mock_event_queue)
def test_convert_output_to_parts_with_raw(self, crew_executor):
"""Test converting crew output with raw content to A2A parts."""
output = Mock()
output.raw = "Test response"
output.json_dict = None
parts = crew_executor._convert_output_to_parts(output)
assert len(parts) == 1
assert parts[0].root.text == "Test response"
def test_convert_output_to_parts_with_json(self, crew_executor):
"""Test converting crew output with JSON data to A2A parts."""
output = Mock()
output.raw = "Test response"
output.json_dict = {"key": "value"}
parts = crew_executor._convert_output_to_parts(output)
assert len(parts) == 2
assert parts[0].root.text == "Test response"
assert '"key": "value"' in parts[1].root.text
def test_convert_output_to_parts_empty(self, crew_executor):
"""Test converting empty crew output to A2A parts."""
output = ""
parts = crew_executor._convert_output_to_parts(output)
assert len(parts) == 1
assert parts[0].root.text == "Crew execution completed successfully"
def test_validate_request_valid(self, crew_executor, mock_context):
"""Test request validation with valid input."""
error = crew_executor._validate_request(mock_context)
assert error is None
def test_validate_request_empty_input(self, crew_executor):
"""Test request validation with empty input."""
context = Mock(spec=RequestContext)
context.get_user_input.return_value = ""
error = crew_executor._validate_request(context)
assert error == "Empty or missing user input"
def test_validate_request_whitespace_input(self, crew_executor):
"""Test request validation with whitespace-only input."""
context = Mock(spec=RequestContext)
context.get_user_input.return_value = " \n\t "
error = crew_executor._validate_request(context)
assert error == "Empty or missing user input"
def test_validate_request_exception(self, crew_executor):
"""Test request validation when get_user_input raises exception."""
context = Mock(spec=RequestContext)
context.get_user_input.side_effect = Exception("Input error")
error = crew_executor._validate_request(context)
assert "Failed to extract user input" in error
@pytest.mark.skipif(A2A_AVAILABLE, reason="Testing import error handling")
def test_import_error_handling():
"""Test that import errors are handled gracefully when A2A is not available."""
with pytest.raises(ImportError, match="A2A integration requires"):
import crewai.a2a  # noqa: F401


@@ -1,56 +0,0 @@
"""Tests for A2A custom exceptions."""
import pytest
try:
from crewai.a2a.exceptions import (
A2AServerError,
TransportError,
ExecutionError
)
A2A_AVAILABLE = True
except ImportError:
A2A_AVAILABLE = False
@pytest.mark.skipif(not A2A_AVAILABLE, reason="A2A integration not available")
class TestA2AExceptions:
"""Test A2A custom exception classes."""
def test_a2a_server_error_base(self):
"""Test A2AServerError base exception."""
error = A2AServerError("Base error message")
assert str(error) == "Base error message"
assert isinstance(error, Exception)
def test_transport_error_inheritance(self):
"""Test TransportError inherits from A2AServerError."""
error = TransportError("Transport configuration failed")
assert str(error) == "Transport configuration failed"
assert isinstance(error, A2AServerError)
assert isinstance(error, Exception)
def test_execution_error_inheritance(self):
"""Test ExecutionError inherits from A2AServerError."""
error = ExecutionError("Crew execution failed")
assert str(error) == "Crew execution failed"
assert isinstance(error, A2AServerError)
assert isinstance(error, Exception)
def test_exception_raising(self):
"""Test that exceptions can be raised and caught."""
with pytest.raises(TransportError) as exc_info:
raise TransportError("Test transport error")
assert str(exc_info.value) == "Test transport error"
with pytest.raises(ExecutionError) as exc_info:
raise ExecutionError("Test execution error")
assert str(exc_info.value) == "Test execution error"
with pytest.raises(A2AServerError):
raise TransportError("Should be caught as base class")


@@ -1,122 +0,0 @@
"""Integration tests for CrewAI A2A functionality."""
import pytest
from unittest.mock import Mock, patch
try:
from crewai.a2a import CrewAgentExecutor, create_a2a_app
A2A_AVAILABLE = True
except ImportError:
A2A_AVAILABLE = False
@pytest.mark.skipif(not A2A_AVAILABLE, reason="A2A integration not available")
class TestA2AIntegration:
"""Integration tests for A2A functionality."""
@pytest.fixture
def sample_crew(self):
"""Create a sample crew for integration testing."""
from unittest.mock import Mock
mock_crew = Mock()
mock_crew.agents = []
mock_crew.tasks = []
return mock_crew
def test_end_to_end_integration(self, sample_crew):
"""Test end-to-end A2A integration."""
executor = CrewAgentExecutor(sample_crew)
assert executor.crew == sample_crew
assert isinstance(executor.supported_content_types, list)
with patch('crewai.a2a.server.A2AStarletteApplication') as mock_app_class:
with patch('crewai.a2a.server.DefaultRequestHandler') as mock_handler_class:
with patch('crewai.a2a.server.InMemoryTaskStore') as mock_task_store_class:
mock_handler = Mock()
mock_app_instance = Mock()
mock_built_app = Mock()
mock_task_store = Mock()
mock_task_store_class.return_value = mock_task_store
mock_handler_class.return_value = mock_handler
mock_app_class.return_value = mock_app_instance
mock_app_instance.build.return_value = mock_built_app
app = create_a2a_app(executor)
mock_task_store_class.assert_called_once()
mock_handler_class.assert_called_once_with(executor, mock_task_store)
mock_app_class.assert_called_once()
assert app == mock_built_app
def test_crew_with_multiple_agents(self):
"""Test A2A integration with multi-agent crew."""
from unittest.mock import Mock
crew = Mock()
crew.agents = [Mock(), Mock()]
crew.tasks = [Mock(), Mock()]
executor = CrewAgentExecutor(crew)
assert executor.crew == crew
assert len(executor.crew.agents) == 2
assert len(executor.crew.tasks) == 2
def test_custom_content_types(self, sample_crew):
"""Test A2A integration with custom content types."""
custom_types = ['text', 'application/json', 'image/png']
executor = CrewAgentExecutor(
sample_crew,
supported_content_types=custom_types
)
assert executor.supported_content_types == custom_types
@patch('uvicorn.run')
def test_server_startup_integration(self, mock_uvicorn_run, sample_crew):
"""Test server startup integration."""
from crewai.a2a import start_a2a_server
executor = CrewAgentExecutor(sample_crew)
with patch('crewai.a2a.server.create_a2a_app') as mock_create_app:
mock_app = Mock()
mock_create_app.return_value = mock_app
start_a2a_server(
executor,
host="127.0.0.1",
port=9999,
transport="starlette"
)
mock_create_app.assert_called_once_with(
executor,
transport="starlette",
agent_name=None,
agent_description=None
)
mock_uvicorn_run.assert_called_once_with(
mock_app,
host="127.0.0.1",
port=9999
)
def test_optional_import_in_main_module():
"""Test that A2A classes are optionally imported in main module."""
import crewai
if A2A_AVAILABLE:
assert hasattr(crewai, 'CrewAgentExecutor')
assert hasattr(crewai, 'start_a2a_server')
assert hasattr(crewai, 'create_a2a_app')
assert 'CrewAgentExecutor' in crewai.__all__
assert 'start_a2a_server' in crewai.__all__
assert 'create_a2a_app' in crewai.__all__
else:
assert not hasattr(crewai, 'CrewAgentExecutor')
assert not hasattr(crewai, 'start_a2a_server')
assert not hasattr(crewai, 'create_a2a_app')


@@ -1,134 +0,0 @@
"""Tests for A2A server utilities."""
import pytest
from unittest.mock import Mock, patch
try:
from crewai.a2a import start_a2a_server, create_a2a_app
from a2a.server.agent_execution.agent_executor import AgentExecutor
A2A_AVAILABLE = True
except ImportError:
A2A_AVAILABLE = False
@pytest.mark.skipif(not A2A_AVAILABLE, reason="A2A integration not available")
class TestA2AServer:
"""Test cases for A2A server utilities."""
@pytest.fixture
def mock_agent_executor(self):
"""Create a mock AgentExecutor."""
return Mock(spec=AgentExecutor)
@patch('uvicorn.run')
@patch('crewai.a2a.server.create_a2a_app')
def test_start_a2a_server_default(self, mock_create_app, mock_uvicorn_run, mock_agent_executor):
"""Test starting A2A server with default parameters."""
mock_app = Mock()
mock_create_app.return_value = mock_app
start_a2a_server(mock_agent_executor)
mock_create_app.assert_called_once_with(
mock_agent_executor,
transport="starlette",
agent_name=None,
agent_description=None
)
mock_uvicorn_run.assert_called_once_with(
mock_app,
host="localhost",
port=10001
)
@patch('uvicorn.run')
@patch('crewai.a2a.server.create_a2a_app')
def test_start_a2a_server_custom(self, mock_create_app, mock_uvicorn_run, mock_agent_executor):
"""Test starting A2A server with custom parameters."""
mock_app = Mock()
mock_create_app.return_value = mock_app
start_a2a_server(
mock_agent_executor,
host="0.0.0.0",
port=8080,
transport="fastapi"
)
mock_create_app.assert_called_once_with(
mock_agent_executor,
transport="fastapi",
agent_name=None,
agent_description=None
)
mock_uvicorn_run.assert_called_once_with(
mock_app,
host="0.0.0.0",
port=8080
)
@patch('crewai.a2a.server.A2AStarletteApplication')
@patch('crewai.a2a.server.DefaultRequestHandler')
@patch('crewai.a2a.server.InMemoryTaskStore')
def test_create_a2a_app_starlette(self, mock_task_store_class, mock_handler_class, mock_app_class, mock_agent_executor):
"""Test creating A2A app with Starlette transport."""
mock_handler = Mock()
mock_app_instance = Mock()
mock_built_app = Mock()
mock_task_store = Mock()
mock_task_store_class.return_value = mock_task_store
mock_handler_class.return_value = mock_handler
mock_app_class.return_value = mock_app_instance
mock_app_instance.build.return_value = mock_built_app
result = create_a2a_app(mock_agent_executor, transport="starlette")
mock_task_store_class.assert_called_once()
mock_handler_class.assert_called_once_with(mock_agent_executor, mock_task_store)
mock_app_class.assert_called_once()
mock_app_instance.build.assert_called_once()
assert result == mock_built_app
def test_create_a2a_app_fastapi(self, mock_agent_executor):
"""Test creating A2A app with FastAPI transport raises error."""
from crewai.a2a.exceptions import TransportError
with pytest.raises(TransportError, match="FastAPI transport is not available"):
create_a2a_app(
mock_agent_executor,
transport="fastapi",
agent_name="Custom Agent",
agent_description="Custom description"
)
@patch('crewai.a2a.server.A2AStarletteApplication')
@patch('crewai.a2a.server.DefaultRequestHandler')
@patch('crewai.a2a.server.InMemoryTaskStore')
def test_create_a2a_app_default_transport(self, mock_task_store_class, mock_handler_class, mock_app_class, mock_agent_executor):
"""Test creating A2A app with default transport."""
mock_handler = Mock()
mock_app_instance = Mock()
mock_built_app = Mock()
mock_task_store = Mock()
mock_task_store_class.return_value = mock_task_store
mock_handler_class.return_value = mock_handler
mock_app_class.return_value = mock_app_instance
mock_app_instance.build.return_value = mock_built_app
result = create_a2a_app(mock_agent_executor)
mock_task_store_class.assert_called_once()
mock_handler_class.assert_called_once_with(mock_agent_executor, mock_task_store)
mock_app_class.assert_called_once()
assert result == mock_built_app
@pytest.mark.skipif(A2A_AVAILABLE, reason="Testing import error handling")
def test_server_import_error_handling():
"""Test that import errors are handled gracefully when A2A is not available."""
with pytest.raises(ImportError, match="A2A integration requires"):
pass


@@ -1,53 +0,0 @@
"""Tests for ServerConfig dataclass."""
import pytest
try:
from crewai.a2a.server import ServerConfig
A2A_AVAILABLE = True
except ImportError:
A2A_AVAILABLE = False
@pytest.mark.skipif(not A2A_AVAILABLE, reason="A2A integration not available")
class TestServerConfig:
"""Test ServerConfig dataclass functionality."""
def test_server_config_defaults(self):
"""Test ServerConfig with default values."""
config = ServerConfig()
assert config.host == "localhost"
assert config.port == 10001
assert config.transport == "starlette"
assert config.agent_name is None
assert config.agent_description is None
def test_server_config_custom_values(self):
"""Test ServerConfig with custom values."""
config = ServerConfig(
host="0.0.0.0",
port=8080,
transport="custom",
agent_name="Test Agent",
agent_description="A test agent"
)
assert config.host == "0.0.0.0"
assert config.port == 8080
assert config.transport == "custom"
assert config.agent_name == "Test Agent"
assert config.agent_description == "A test agent"
def test_server_config_partial_override(self):
"""Test ServerConfig with partial value override."""
config = ServerConfig(
port=9000,
agent_name="Custom Agent"
)
assert config.host == "localhost" # default
assert config.port == 9000 # custom
assert config.transport == "starlette" # default
assert config.agent_name == "Custom Agent" # custom
assert config.agent_description is None # default
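Taken together, the defaults and overrides asserted above pin down the shape of the config object. A minimal sketch of an equivalent dataclass (hypothetical — the real `crewai.a2a.server.ServerConfig` may carry more fields or validation):

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class ServerConfig:
    """A2A server settings; defaults mirror the values the tests assert."""
    host: str = "localhost"
    port: int = 10001
    transport: str = "starlette"
    agent_name: Optional[str] = None
    agent_description: Optional[str] = None


# Unspecified fields keep their defaults, field by field
config = ServerConfig(port=9000, agent_name="Custom Agent")
print(config.host, config.port, config.agent_name)  # → localhost 9000 Custom Agent
```

Because it is a plain dataclass, partial overrides need no special handling: every field not passed to the constructor simply retains its default.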


@@ -1,51 +0,0 @@
"""Tests for TaskInfo dataclass."""
import pytest
from datetime import datetime
from unittest.mock import Mock
try:
from crewai.a2a.crew_agent_executor import TaskInfo
A2A_AVAILABLE = True
except ImportError:
A2A_AVAILABLE = False
@pytest.mark.skipif(not A2A_AVAILABLE, reason="A2A integration not available")
class TestTaskInfo:
"""Test TaskInfo dataclass functionality."""
def test_task_info_creation(self):
"""Test TaskInfo creation with required fields."""
mock_task = Mock()
started_at = datetime.now()
task_info = TaskInfo(task=mock_task, started_at=started_at)
assert task_info.task == mock_task
assert task_info.started_at == started_at
assert task_info.status == "running"
def test_task_info_with_custom_status(self):
"""Test TaskInfo creation with custom status."""
mock_task = Mock()
started_at = datetime.now()
task_info = TaskInfo(
task=mock_task,
started_at=started_at,
status="completed"
)
assert task_info.status == "completed"
def test_task_info_status_update(self):
"""Test TaskInfo status can be updated."""
mock_task = Mock()
started_at = datetime.now()
task_info = TaskInfo(task=mock_task, started_at=started_at)
assert task_info.status == "running"
task_info.status = "cancelled"
assert task_info.status == "cancelled"
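The three tests above describe a small mutable record: two required fields, a `status` defaulting to `"running"`, and in-place status updates. A sketch of such a dataclass (hypothetical — the real `crewai.a2a.crew_agent_executor.TaskInfo` may differ):

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Any


@dataclass
class TaskInfo:
    """Bookkeeping record for an in-flight A2A task, per the tests above."""
    task: Any
    started_at: datetime
    status: str = "running"


info = TaskInfo(task=object(), started_at=datetime.now())
info.status = "cancelled"  # plain dataclass fields stay mutable after creation
```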


@@ -2126,3 +2126,60 @@ def test_agent_from_repository_agent_not_found(mock_get_agent, mock_get_auth_tok
match="Agent test_agent does not exist, make sure the name is correct or the agent is available on your organization",
):
Agent(from_repository="test_agent")
@patch("crewai.cli.plus_api.PlusAPI.get_agent")
@patch("crewai.utilities.agent_utils.Settings")
@patch("crewai.utilities.agent_utils.console")
def test_agent_from_repository_displays_org_info(mock_console, mock_settings, mock_get_agent, mock_get_auth_token):
mock_settings_instance = MagicMock()
mock_settings_instance.org_uuid = "test-org-uuid"
mock_settings_instance.org_name = "Test Organization"
mock_settings.return_value = mock_settings_instance
mock_get_response = MagicMock()
mock_get_response.status_code = 200
mock_get_response.json.return_value = {
"role": "test role",
"goal": "test goal",
"backstory": "test backstory",
"tools": []
}
mock_get_agent.return_value = mock_get_response
agent = Agent(from_repository="test_agent")
mock_console.print.assert_any_call(
"Fetching agent from organization: Test Organization (test-org-uuid)",
style="bold blue"
)
assert agent.role == "test role"
assert agent.goal == "test goal"
assert agent.backstory == "test backstory"
@patch("crewai.cli.plus_api.PlusAPI.get_agent")
@patch("crewai.utilities.agent_utils.Settings")
@patch("crewai.utilities.agent_utils.console")
def test_agent_from_repository_without_org_set(mock_console, mock_settings, mock_get_agent, mock_get_auth_token):
mock_settings_instance = MagicMock()
mock_settings_instance.org_uuid = None
mock_settings_instance.org_name = None
mock_settings.return_value = mock_settings_instance
mock_get_response = MagicMock()
mock_get_response.status_code = 401
mock_get_response.text = "Unauthorized access"
mock_get_agent.return_value = mock_get_response
with pytest.raises(
AgentRepositoryError,
match="Agent test_agent could not be loaded: Unauthorized access"
):
Agent(from_repository="test_agent")
mock_console.print.assert_any_call(
"No organization currently set. We recommend setting one before using: `crewai org switch <org_id>` command.",
style="yellow"
)

File diff suppressed because one or more lines are too long

@@ -33,9 +33,9 @@ def mock_settings():
def test_org_list_command(mock_org_command_class, runner):
mock_org_instance = MagicMock()
mock_org_command_class.return_value = mock_org_instance
result = runner.invoke(list)
assert result.exit_code == 0
mock_org_command_class.assert_called_once()
mock_org_instance.list.assert_called_once()
@@ -45,9 +45,9 @@ def test_org_list_command(mock_org_command_class, runner):
def test_org_switch_command(mock_org_command_class, runner):
mock_org_instance = MagicMock()
mock_org_command_class.return_value = mock_org_instance
result = runner.invoke(switch, ['test-id'])
assert result.exit_code == 0
mock_org_command_class.assert_called_once()
mock_org_instance.switch.assert_called_once_with('test-id')
@@ -57,9 +57,9 @@ def test_org_switch_command(mock_org_command_class, runner):
def test_org_current_command(mock_org_command_class, runner):
mock_org_instance = MagicMock()
mock_org_command_class.return_value = mock_org_instance
result = runner.invoke(current)
assert result.exit_code == 0
mock_org_command_class.assert_called_once()
mock_org_instance.current.assert_called_once()
@@ -70,7 +70,7 @@ class TestOrganizationCommand(unittest.TestCase):
with patch.object(OrganizationCommand, '__init__', return_value=None):
self.org_command = OrganizationCommand()
self.org_command.plus_api_client = MagicMock()
@patch('crewai.cli.organization.main.console')
@patch('crewai.cli.organization.main.Table')
def test_list_organizations_success(self, mock_table, mock_console):
@@ -82,11 +82,11 @@ class TestOrganizationCommand(unittest.TestCase):
]
self.org_command.plus_api_client = MagicMock()
self.org_command.plus_api_client.get_organizations.return_value = mock_response
mock_console.print = MagicMock()
self.org_command.list()
self.org_command.plus_api_client.get_organizations.assert_called_once()
mock_table.assert_called_once_with(title="Your Organizations")
mock_table.return_value.add_column.assert_has_calls([
@@ -105,12 +105,12 @@ class TestOrganizationCommand(unittest.TestCase):
mock_response.json.return_value = []
self.org_command.plus_api_client = MagicMock()
self.org_command.plus_api_client.get_organizations.return_value = mock_response
self.org_command.list()
self.org_command.plus_api_client.get_organizations.assert_called_once()
mock_console.print.assert_called_once_with(
"You don't belong to any organizations yet.",
style="yellow"
)
@@ -118,14 +118,14 @@ class TestOrganizationCommand(unittest.TestCase):
def test_list_organizations_api_error(self, mock_console):
self.org_command.plus_api_client = MagicMock()
self.org_command.plus_api_client.get_organizations.side_effect = requests.exceptions.RequestException("API Error")
with pytest.raises(SystemExit):
self.org_command.list()
self.org_command.plus_api_client.get_organizations.assert_called_once()
mock_console.print.assert_called_once_with(
"Failed to retrieve organization list: API Error",
style="bold red"
)
@@ -140,12 +140,12 @@ class TestOrganizationCommand(unittest.TestCase):
]
self.org_command.plus_api_client = MagicMock()
self.org_command.plus_api_client.get_organizations.return_value = mock_response
mock_settings_instance = MagicMock()
mock_settings_class.return_value = mock_settings_instance
self.org_command.switch("test-id")
self.org_command.plus_api_client.get_organizations.assert_called_once()
mock_settings_instance.dump.assert_called_once()
assert mock_settings_instance.org_name == "Test Org"
@@ -165,9 +165,9 @@ class TestOrganizationCommand(unittest.TestCase):
]
self.org_command.plus_api_client = MagicMock()
self.org_command.plus_api_client.get_organizations.return_value = mock_response
self.org_command.switch("non-existent-id")
self.org_command.plus_api_client.get_organizations.assert_called_once()
mock_console.print.assert_called_once_with(
"Organization with id 'non-existent-id' not found.",
@@ -181,9 +181,9 @@ class TestOrganizationCommand(unittest.TestCase):
mock_settings_instance.org_name = "Test Org"
mock_settings_instance.org_uuid = "test-id"
mock_settings_class.return_value = mock_settings_instance
self.org_command.current()
self.org_command.plus_api_client.get_organizations.assert_not_called()
mock_console.print.assert_called_once_with(
"Currently logged in to organization Test Org (test-id)",
@@ -196,11 +196,49 @@ class TestOrganizationCommand(unittest.TestCase):
mock_settings_instance = MagicMock()
mock_settings_instance.org_uuid = None
mock_settings_class.return_value = mock_settings_instance
self.org_command.current()
assert mock_console.print.call_count == 3
mock_console.print.assert_any_call(
"You're not currently logged in to any organization.",
style="yellow"
)
@patch('crewai.cli.organization.main.console')
def test_list_organizations_unauthorized(self, mock_console):
mock_response = MagicMock()
mock_http_error = requests.exceptions.HTTPError(
"401 Client Error: Unauthorized",
response=MagicMock(status_code=401)
)
mock_response.raise_for_status.side_effect = mock_http_error
self.org_command.plus_api_client.get_organizations.return_value = mock_response
self.org_command.list()
self.org_command.plus_api_client.get_organizations.assert_called_once()
mock_console.print.assert_called_once_with(
"You are not logged in to any organization. Use 'crewai login' to login.",
style="bold red"
)
@patch('crewai.cli.organization.main.console')
def test_switch_organization_unauthorized(self, mock_console):
mock_response = MagicMock()
mock_http_error = requests.exceptions.HTTPError(
"401 Client Error: Unauthorized",
response=MagicMock(status_code=401)
)
mock_response.raise_for_status.side_effect = mock_http_error
self.org_command.plus_api_client.get_organizations.return_value = mock_response
self.org_command.switch("test-id")
self.org_command.plus_api_client.get_organizations.assert_called_once()
mock_console.print.assert_called_once_with(
"You are not logged in to any organization. Use 'crewai login' to login.",
style="bold red"
)


@@ -56,7 +56,8 @@ def test_create_success(mock_subprocess, capsys, tool_command):
@patch("crewai.cli.tools.main.subprocess.run")
@patch("crewai.cli.plus_api.PlusAPI.get_tool")
def test_install_success(mock_get, mock_subprocess_run, capsys, tool_command):
@patch("crewai.cli.tools.main.ToolCommand._print_current_organization")
def test_install_success(mock_print_org, mock_get, mock_subprocess_run, capsys, tool_command):
mock_get_response = MagicMock()
mock_get_response.status_code = 200
mock_get_response.json.return_value = {
@@ -85,6 +86,9 @@ def test_install_success(mock_get, mock_subprocess_run, capsys, tool_command):
env=unittest.mock.ANY,
)
# Verify _print_current_organization was called
mock_print_org.assert_called_once()
@patch("crewai.cli.tools.main.subprocess.run")
@patch("crewai.cli.plus_api.PlusAPI.get_tool")
def test_install_success_from_pypi(mock_get, mock_subprocess_run, capsys, tool_command):
@@ -166,7 +170,9 @@ def test_publish_when_not_in_sync(mock_is_synced, capsys, tool_command):
@patch("crewai.cli.plus_api.PlusAPI.publish_tool")
@patch("crewai.cli.tools.main.git.Repository.is_synced", return_value=False)
@patch("crewai.cli.tools.main.extract_available_exports", return_value=[{"name": "SampleTool"}])
@patch("crewai.cli.tools.main.ToolCommand._print_current_organization")
def test_publish_when_not_in_sync_and_force(
mock_print_org,
mock_available_exports,
mock_is_synced,
mock_publish,
@@ -202,6 +208,7 @@ def test_publish_when_not_in_sync_and_force(
encoded_file=unittest.mock.ANY,
available_exports=[{"name": "SampleTool"}],
)
mock_print_org.assert_called_once()
@patch("crewai.cli.tools.main.get_project_name", return_value="sample-tool")
@@ -329,3 +336,27 @@ def test_publish_api_error(
assert "Request to Enterprise API failed" in output
mock_publish.assert_called_once()
@patch("crewai.cli.tools.main.Settings")
def test_print_current_organization_with_org(mock_settings, capsys, tool_command):
mock_settings_instance = MagicMock()
mock_settings_instance.org_uuid = "test-org-uuid"
mock_settings_instance.org_name = "Test Organization"
mock_settings.return_value = mock_settings_instance
tool_command._print_current_organization()
output = capsys.readouterr().out
assert "Current organization: Test Organization (test-org-uuid)" in output
@patch("crewai.cli.tools.main.Settings")
def test_print_current_organization_without_org(mock_settings, capsys, tool_command):
mock_settings_instance = MagicMock()
mock_settings_instance.org_uuid = None
mock_settings_instance.org_name = None
mock_settings.return_value = mock_settings_instance
tool_command._print_current_organization()
output = capsys.readouterr().out
assert "No organization currently set" in output
assert "org switch <org_id>" in output
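The two tests above only constrain what `_print_current_organization` writes to stdout in each case. A sketch of the underlying logic as a pure formatting helper (hypothetical name and signature; the real method prints via the CLI's console):

```python
from types import SimpleNamespace


def format_current_organization(settings) -> str:
    """Render the org banner the CLI shows before tool actions (sketch)."""
    if settings.org_uuid:
        return f"Current organization: {settings.org_name} ({settings.org_uuid})"
    return ("No organization currently set. We recommend setting one before "
            "using: `crewai org switch <org_id>` command.")


banner = format_current_organization(
    SimpleNamespace(org_uuid="test-org-uuid", org_name="Test Organization")
)
print(banner)  # → Current organization: Test Organization (test-org-uuid)
```

Keeping the formatting separate from the printing is what makes assertions like the capsys ones above cheap to write.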


@@ -1765,6 +1765,50 @@ def test_agent_usage_metrics_are_captured_for_hierarchical_process():
)
def test_hierarchical_kickoff_usage_metrics_include_manager(researcher):
"""Ensure Crew.kickoff() sums UsageMetrics from both regular and manager agents."""
# ── 1. Build the manager and a simple task ──────────────────────────────────
manager = Agent(
role="Manager",
goal="Coordinate everything.",
backstory="Keeps the project on track.",
allow_delegation=False,
)
task = Task(
description="Say hello",
expected_output="Hello",
agent=researcher, # *regular* agent
)
# ── 2. Stub out each agent's _token_process.get_summary() ──────────────────
researcher_metrics = UsageMetrics(total_tokens=120, prompt_tokens=80, completion_tokens=40, successful_requests=2)
manager_metrics = UsageMetrics(total_tokens=30, prompt_tokens=20, completion_tokens=10, successful_requests=1)
# Replace the internal _token_process objects with simple mocks
researcher._token_process = MagicMock(get_summary=MagicMock(return_value=researcher_metrics))
manager._token_process = MagicMock(get_summary=MagicMock(return_value=manager_metrics))
# ── 3. Create the crew (hierarchical!) and kick it off ──────────────────────
crew = Crew(
agents=[researcher], # regular agents
manager_agent=manager, # manager to be included
tasks=[task],
process=Process.hierarchical,
)
# We don't care about LLM output here; patch execute_sync to avoid network
with patch.object(Task, "execute_sync", return_value=TaskOutput(description="dummy", raw="Hello", agent=researcher.role)):
crew.kickoff()
# ── 4. Assert the aggregated numbers are the *sum* of both agents ───────────
assert crew.usage_metrics.total_tokens == researcher_metrics.total_tokens + manager_metrics.total_tokens
assert crew.usage_metrics.prompt_tokens == researcher_metrics.prompt_tokens + manager_metrics.prompt_tokens
assert crew.usage_metrics.completion_tokens == researcher_metrics.completion_tokens + manager_metrics.completion_tokens
assert crew.usage_metrics.successful_requests == researcher_metrics.successful_requests + manager_metrics.successful_requests
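The aggregation this test pins down is plain field-wise addition across every agent, manager included. A sketch (field names taken from the assertions above, not from the actual `UsageMetrics` implementation):

```python
from dataclasses import dataclass


@dataclass
class UsageMetrics:
    total_tokens: int = 0
    prompt_tokens: int = 0
    completion_tokens: int = 0
    successful_requests: int = 0

    def add(self, other: "UsageMetrics") -> None:
        # Field-wise sum — the bug being fixed was skipping the manager's share
        self.total_tokens += other.total_tokens
        self.prompt_tokens += other.prompt_tokens
        self.completion_tokens += other.completion_tokens
        self.successful_requests += other.successful_requests


total = UsageMetrics()
for metrics in (
    UsageMetrics(120, 80, 40, 2),  # researcher
    UsageMetrics(30, 20, 10, 1),   # manager
):
    total.add(metrics)
print(total.total_tokens)  # → 150
```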
@pytest.mark.vcr(filter_headers=["authorization"])
def test_hierarchical_crew_creation_tasks_with_agents(researcher, writer):
"""
@@ -4564,5 +4608,3 @@ def test_reset_agent_knowledge_with_only_agent_knowledge(researcher,writer):
crew.reset_memories(command_type='agent_knowledge')
mock_reset_agent_knowledge.assert_called_once_with([mock_ks_research,mock_ks_writer])


@@ -25,122 +25,206 @@ def schema_class():
return TestSchema
def test_initialization(basic_function, schema_class):
    """Test basic initialization of CrewStructuredTool"""
    tool = CrewStructuredTool(
        name="test_tool",
        description="Test tool description",
        func=basic_function,
        args_schema=schema_class,
    )
    assert tool.name == "test_tool"
    assert tool.description == "Test tool description"
    assert tool.func == basic_function
    assert tool.args_schema == schema_class

def test_from_function(basic_function):
    """Test creating tool from function"""
    tool = CrewStructuredTool.from_function(
        func=basic_function, name="test_tool", description="Test description"
    )
    assert tool.name == "test_tool"
    assert tool.description == "Test description"
    assert tool.func == basic_function
    assert isinstance(tool.args_schema, type(BaseModel))

def test_validate_function_signature(basic_function, schema_class):
    """Test function signature validation"""
    tool = CrewStructuredTool(
        name="test_tool",
        description="Test tool",
        func=basic_function,
        args_schema=schema_class,
    )
    # Should not raise any exceptions
    tool._validate_function_signature()

@pytest.mark.asyncio
async def test_ainvoke(basic_function):
    """Test asynchronous invocation"""
    tool = CrewStructuredTool.from_function(func=basic_function, name="test_tool")
    result = await tool.ainvoke(input={"param1": "test"})
    assert result == "test 0"

def test_parse_args_dict(basic_function):
    """Test parsing dictionary arguments"""
    tool = CrewStructuredTool.from_function(func=basic_function, name="test_tool")
    parsed = tool._parse_args({"param1": "test", "param2": 42})
    assert parsed["param1"] == "test"
    assert parsed["param2"] == 42

def test_parse_args_string(basic_function):
    """Test parsing string arguments"""
    tool = CrewStructuredTool.from_function(func=basic_function, name="test_tool")
    parsed = tool._parse_args('{"param1": "test", "param2": 42}')
    assert parsed["param1"] == "test"
    assert parsed["param2"] == 42

def test_complex_types():
    """Test handling of complex parameter types"""
    def complex_func(nested: dict, items: list) -> str:
        """Process complex types."""
        return f"Processed {len(items)} items with {len(nested)} nested keys"
    tool = CrewStructuredTool.from_function(
        func=complex_func, name="test_tool", description="Test complex types"
    )
    result = tool.invoke({"nested": {"key": "value"}, "items": [1, 2, 3]})
    assert result == "Processed 3 items with 1 nested keys"

def test_schema_inheritance():
    """Test tool creation with inherited schema"""
    def extended_func(base_param: str, extra_param: int) -> str:
        """Test function with inherited schema."""
        return f"{base_param} {extra_param}"
    class BaseSchema(BaseModel):
        base_param: str
    class ExtendedSchema(BaseSchema):
        extra_param: int
    tool = CrewStructuredTool.from_function(
        func=extended_func, name="test_tool", args_schema=ExtendedSchema
    )
    result = tool.invoke({"base_param": "test", "extra_param": 42})
    assert result == "test 42"

def test_default_values_in_schema():
    """Test handling of default values in schema"""
    def default_func(
        required_param: str,
        optional_param: str = "default",
        nullable_param: Optional[int] = None,
    ) -> str:
        """Test function with default values."""
        return f"{required_param} {optional_param} {nullable_param}"
    tool = CrewStructuredTool.from_function(
        func=default_func, name="test_tool", description="Test defaults"
    )
    # Test with minimal parameters
    result = tool.invoke({"required_param": "test"})
    assert result == "test default None"
    # Test with all parameters
    result = tool.invoke(
        {"required_param": "test", "optional_param": "custom", "nullable_param": 42}
    )
    assert result == "test custom 42"
@pytest.fixture
def custom_tool_decorator():
from crewai.tools import tool
@tool("custom_tool", result_as_answer=True)
async def custom_tool():
"""This is a tool that does something"""
return "Hello World from Custom Tool"
return custom_tool
@pytest.fixture
def custom_tool():
from crewai.tools import BaseTool
class CustomTool(BaseTool):
name: str = "my_tool"
description: str = "This is a tool that does something"
result_as_answer: bool = True
async def _run(self):
return "Hello World from Custom Tool"
return CustomTool()
def build_simple_crew(tool):
from crewai import Agent, Task, Crew
agent1 = Agent(role="Simple role", goal="Simple goal", backstory="Simple backstory", tools=[tool])
say_hi_task = Task(
description="Use the custom tool result as answer.", agent=agent1, expected_output="Use the tool result"
)
crew = Crew(agents=[agent1], tasks=[say_hi_task])
return crew
@pytest.mark.vcr(filter_headers=["authorization"])
def test_async_tool_using_within_isolated_crew(custom_tool):
crew = build_simple_crew(custom_tool)
result = crew.kickoff()
assert result.raw == "Hello World from Custom Tool"
@pytest.mark.vcr(filter_headers=["authorization"])
def test_async_tool_using_decorator_within_isolated_crew(custom_tool_decorator):
crew = build_simple_crew(custom_tool_decorator)
result = crew.kickoff()
assert result.raw == "Hello World from Custom Tool"
@pytest.mark.vcr(filter_headers=["authorization"])
def test_async_tool_within_flow(custom_tool):
from crewai.flow.flow import Flow
class StructuredExampleFlow(Flow):
from crewai.flow.flow import start
@start()
async def start(self):
crew = build_simple_crew(custom_tool)
result = await crew.kickoff_async()
return result
flow = StructuredExampleFlow()
result = flow.kickoff()
assert result.raw == "Hello World from Custom Tool"
@pytest.mark.vcr(filter_headers=["authorization"])
def test_async_tool_using_decorator_within_flow(custom_tool_decorator):
from crewai.flow.flow import Flow
class StructuredExampleFlow(Flow):
from crewai.flow.flow import start
@start()
async def start(self):
crew = build_simple_crew(custom_tool_decorator)
result = await crew.kickoff_async()
return result
flow = StructuredExampleFlow()
result = flow.kickoff()
assert result.raw == "Hello World from Custom Tool"
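The fixtures above define tools whose `_run` is a coroutine function, so a synchronous kickoff gets back a coroutine rather than a value. The gist of the new support is detecting that case and resolving the coroutine before handing the result on. A minimal sketch of the idea (not the actual crewAI internals):

```python
import asyncio
import inspect
from typing import Any, Callable


def collect_tool_result(func: Callable, *args: Any, **kwargs: Any) -> Any:
    """Invoke a tool function, awaiting its result when it is a coroutine."""
    result = func(*args, **kwargs)
    if inspect.iscoroutine(result):
        # Synchronous call site with no running event loop: drive it here
        result = asyncio.run(result)
    return result


async def async_tool() -> str:
    return "Hello World from Custom Tool"


def sync_tool() -> str:
    return "hello"


print(collect_tool_result(async_tool))  # → Hello World from Custom Tool
print(collect_tool_result(sync_tool))   # → hello
```

Inside an already-running loop (the Flow tests above), `asyncio.run` would raise, so real code has to branch on whether a loop is running before deciding how to await.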

uv.lock (generated)

@@ -820,7 +820,7 @@ requires-dist = [
{ name = "json-repair", specifier = ">=0.25.2" },
{ name = "json5", specifier = ">=0.10.0" },
{ name = "jsonref", specifier = ">=1.1.0" },
{ name = "litellm", specifier = "==1.68.0" },
{ name = "litellm", specifier = "==1.72.0" },
{ name = "mem0ai", marker = "extra == 'mem0'", specifier = ">=0.1.94" },
{ name = "onnxruntime", specifier = "==1.22.0" },
{ name = "openai", specifier = ">=1.13.3" },
@@ -2245,7 +2245,7 @@ wheels = [
[[package]]
name = "litellm"
version = "1.68.0"
version = "1.72.0"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "aiohttp" },
@@ -2260,9 +2260,9 @@ dependencies = [
{ name = "tiktoken" },
{ name = "tokenizers" },
]
sdist = { url = "https://files.pythonhosted.org/packages/ba/22/138545b646303ca3f4841b69613c697b9d696322a1386083bb70bcbba60b/litellm-1.68.0.tar.gz", hash = "sha256:9fb24643db84dfda339b64bafca505a2eef857477afbc6e98fb56512c24dbbfa", size = 7314051 }
sdist = { url = "https://files.pythonhosted.org/packages/55/d3/f1a8c9c9ffdd3bab1bc410254c8140b1346f05a01b8c6b37c48b56abb4b0/litellm-1.72.0.tar.gz", hash = "sha256:135022b9b8798f712ffa84e71ac419aa4310f1d0a70f79dd2007f7ef3a381e43", size = 8082337 }
wheels = [
{ url = "https://files.pythonhosted.org/packages/10/af/1e344bc8aee41445272e677d802b774b1f8b34bdc3bb5697ba30f0fb5d52/litellm-1.68.0-py3-none-any.whl", hash = "sha256:3bca38848b1a5236b11aa6b70afa4393b60880198c939e582273f51a542d4759", size = 7684460 },
{ url = "https://files.pythonhosted.org/packages/c2/98/bec08f5a3e504013db6f52b5fd68375bd92b463c91eb454d5a6460e957af/litellm-1.72.0-py3-none-any.whl", hash = "sha256:88360a7ae9aa9c96278ae1bb0a459226f909e711c5d350781296d0640386a824", size = 7979630 },
]
[[package]]
@@ -3123,7 +3123,7 @@ wheels = [
[[package]]
name = "openai"
version = "1.68.2"
version = "1.78.0"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "anyio" },
@@ -3135,9 +3135,9 @@ dependencies = [
{ name = "tqdm" },
{ name = "typing-extensions" },
]
sdist = { url = "https://files.pythonhosted.org/packages/3f/6b/6b002d5d38794645437ae3ddb42083059d556558493408d39a0fcea608bc/openai-1.68.2.tar.gz", hash = "sha256:b720f0a95a1dbe1429c0d9bb62096a0d98057bcda82516f6e8af10284bdd5b19", size = 413429 }
sdist = { url = "https://files.pythonhosted.org/packages/d1/7c/7c48bac9be52680e41e99ae7649d5da3a0184cd94081e028897f9005aa03/openai-1.78.0.tar.gz", hash = "sha256:254aef4980688468e96cbddb1f348ed01d274d02c64c6c69b0334bf001fb62b3", size = 442652 }
wheels = [
{ url = "https://files.pythonhosted.org/packages/fd/34/cebce15f64eb4a3d609a83ac3568d43005cc9a1cba9d7fde5590fd415423/openai-1.68.2-py3-none-any.whl", hash = "sha256:24484cb5c9a33b58576fdc5acf0e5f92603024a4e39d0b99793dfa1eb14c2b36", size = 606073 },
{ url = "https://files.pythonhosted.org/packages/cc/41/d64a6c56d0ec886b834caff7a07fc4d43e1987895594b144757e7a6b90d7/openai-1.78.0-py3-none-any.whl", hash = "sha256:1ade6a48cd323ad8a7715e7e1669bb97a17e1a5b8a916644261aaef4bf284778", size = 680407 },
]
[[package]]