Compare commits

...

8 Commits

Author SHA1 Message Date
Lucas Gomide
ccd98cc511 docs: update Python version requirement from <=3.13 to <3.14
This correctly reflects support for all 3.13.x patch versions
2025-06-10 13:36:36 -03:00
Lucas Gomide
5c51349a85 Support async tool executions (#2983)
* test: fix structured tool tests

No tests were being executed from this file

* feat: support running async tools

Some tools require async execution. This commit allows us to collect tool results from coroutines

* docs: add docs about asynchronous tool support
2025-06-10 12:17:06 -04:00
Richard Luo
5b740467cb docs: fix the guide on persistence (#2849)
Co-authored-by: Lucas Gomide <lucaslg200@gmail.com>
2025-06-09 14:09:56 -04:00
hegasz
e9d9dd2a79 Fix missing manager_agent tokens in usage_metrics from kickoff (#2848)
* fix(metrics): prevent usage_metrics from dropping manager_agent tokens

* Add test to verify hierarchical kickoff aggregates manager and agent usage metrics

---------

Co-authored-by: Lucas Gomide <lucaslg200@gmail.com>
2025-06-09 13:16:05 -04:00
Lorenze Jay
3e74cb4832 docs: add integrations documentation and images for enterprise features (#2981)
- Introduced a new documentation file for Integrations, detailing supported services and setup instructions.
- Updated the main docs.json to include the new "integrations" feature in the contextual options.
- Added several images related to integrations to enhance the documentation.

Co-authored-by: Tony Kipkemboi <iamtonykipkemboi@gmail.com>
2025-06-09 12:46:09 -04:00
Lucas Gomide
db3c8a49bd feat: improve docs and logging for Multi-Org actions in CLI (#2980)
* docs: add organization management in our CLI docs

* feat: improve user feedback when the user is not authenticated

* feat: improve logging about the current organization while publishing/installing a Tool

* feat: improve logging when Agent repository is not found during fetch

* fix linter offences

* test: fix auth token error
2025-06-09 12:21:12 -04:00
Lucas Gomide
8a37b535ed docs: improve docs about planning LLM usage (#2977) 2025-06-09 10:17:04 -04:00
Lucas Gomide
e6ac1311e7 build: upgrade LiteLLM to support latest OpenAI version (#2963)
Co-authored-by: Tony Kipkemboi <iamtonykipkemboi@gmail.com>
2025-06-09 08:55:12 -04:00
27 changed files with 1803 additions and 156 deletions

View File

@@ -200,6 +200,37 @@ Deploy the crew or flow to [CrewAI Enterprise](https://app.crewai.com).
```
- Reads your local project configuration.
- Prompts you to confirm the environment variables (like `OPENAI_API_KEY`, `SERPER_API_KEY`) found locally. These will be securely stored with the deployment on the Enterprise platform. Ensure your sensitive keys are correctly configured locally (e.g., in a `.env` file) before running this.
### 11. Organization Management
Manage your CrewAI Enterprise organizations.
```shell Terminal
crewai org [COMMAND] [OPTIONS]
```
#### Commands:
- `list`: List all organizations you belong to
```shell Terminal
crewai org list
```
- `current`: Display your currently active organization
```shell Terminal
crewai org current
```
- `switch`: Switch to a specific organization
```shell Terminal
crewai org switch <organization_id>
```
<Note>
You must be authenticated to CrewAI Enterprise to use these organization management commands.
</Note>
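For example, a typical session might look like this (the organization ID below is a placeholder):
```shell Terminal
# See the organizations you belong to
crewai org list

# Switch the active organization (replace with a real ID from the list)
crewai org switch 1234-abcd-5678

# Confirm which organization is now active
crewai org current
```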
- **Create a deployment** (continued):
- Links the deployment to the corresponding remote GitHub repository (it usually detects this automatically).
- **Deploy the Crew**: Once you are authenticated, you can deploy your crew or flow to CrewAI Enterprise.

View File

@@ -29,6 +29,10 @@ my_crew = Crew(
From this point on, your crew will have planning enabled, and the tasks will be planned before each iteration.
<Warning>
When planning is enabled, CrewAI uses `gpt-4o-mini` as the default planning LLM, which requires a valid OpenAI API key. Since your agents might be configured with different LLMs, a missing OpenAI API key can lead to confusing errors or unexpected behavior in LLM API calls.
</Warning>
#### Planning LLM
Now you can define the LLM that will be used to plan the tasks.
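For example, a minimal sketch (the agent and task names are placeholders standing in for the ones defined earlier in this guide; `planning_llm` accepts the same model identifiers you would pass to an agent's LLM):
```python Code
from crewai import Crew, Process

my_crew = Crew(
    agents=[researcher, writer],        # agents defined earlier in this guide
    tasks=[research_task, write_task],  # tasks defined earlier in this guide
    process=Process.sequential,
    planning=True,
    planning_llm="gpt-4o",  # overrides the default gpt-4o-mini planning LLM
)
```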

View File

@@ -32,6 +32,7 @@ The Enterprise Tools Repository includes:
- **Customizability**: Provides the flexibility to develop custom tools or utilize existing ones, catering to the specific needs of agents.
- **Error Handling**: Incorporates robust error handling mechanisms to ensure smooth operation.
- **Caching Mechanism**: Features intelligent caching to optimize performance and reduce redundant operations.
- **Asynchronous Support**: Handles both synchronous and asynchronous tools, enabling non-blocking operations.
## Using CrewAI Tools
@@ -177,6 +178,62 @@ class MyCustomTool(BaseTool):
return "Tool's result"
```
## Asynchronous Tool Support
CrewAI supports asynchronous tools, allowing you to implement tools that perform non-blocking operations like network requests, file I/O, or other async operations without blocking the main execution thread.
### Creating Async Tools
You can create async tools in two ways:
#### 1. Using the `tool` Decorator with Async Functions
```python Code
import asyncio

from crewai.tools import tool

@tool("fetch_data_async")
async def fetch_data_async(query: str) -> str:
    """Asynchronously fetch data based on the query."""
    # Simulate async operation
    await asyncio.sleep(1)
    return f"Data retrieved for {query}"
```
#### 2. Implementing Async Methods in Custom Tool Classes
```python Code
import asyncio

from crewai.tools import BaseTool

class AsyncCustomTool(BaseTool):
    name: str = "async_custom_tool"
    description: str = "An asynchronous custom tool"

    async def _run(self, query: str = "") -> str:
        """Asynchronously run the tool"""
        # Your async implementation here
        await asyncio.sleep(1)
        return f"Processed {query} asynchronously"
```
### Using Async Tools
Async tools work seamlessly in both standard Crew workflows and Flow-based workflows:
```python Code
from crewai import Agent, Crew
from crewai.flow.flow import Flow, start

# In standard Crew (async_custom_tool is an instance of AsyncCustomTool above)
agent = Agent(
    role="researcher",
    goal="Research topics using asynchronous data sources",
    backstory="An analyst who relies on async tools for fast lookups.",
    tools=[async_custom_tool],
)

# In Flow
class MyFlow(Flow):
    @start()
    async def begin(self):
        crew = Crew(agents=[agent])
        result = await crew.kickoff_async()
        return result
```
The CrewAI framework automatically handles the execution of both synchronous and asynchronous tools, so you don't need to worry about how to call them differently.
### Utilizing the `tool` Decorator
```python Code

View File

@@ -9,7 +9,12 @@
},
"favicon": "images/favicon.svg",
"contextual": {
"options": ["copy", "view", "chatgpt", "claude"]
"options": [
"copy",
"view",
"chatgpt",
"claude"
]
},
"navigation": {
"tabs": [
@@ -257,7 +262,8 @@
"enterprise/features/tool-repository",
"enterprise/features/webhook-streaming",
"enterprise/features/traces",
"enterprise/features/hallucination-guardrail"
"enterprise/features/hallucination-guardrail",
"enterprise/features/integrations"
]
},
{

View File

@@ -0,0 +1,185 @@
---
title: Integrations
description: "Connected applications for your agents to take actions."
icon: "plug"
---
## Overview
Enable your agents to authenticate with any OAuth-enabled provider and take actions. From Salesforce and HubSpot to Google and GitHub, we've got you covered with 16+ integrated services.
<Frame>
![Integrations](/images/enterprise/crew_connectors.png)
</Frame>
## Supported Integrations
### **Communication & Collaboration**
- **Gmail** - Manage emails and drafts
- **Slack** - Workspace notifications and alerts
- **Microsoft** - Office 365 and Teams integration
### **Project Management**
- **Jira** - Issue tracking and project management
- **ClickUp** - Task and productivity management
- **Asana** - Team task and project coordination
- **Notion** - Page and database management
- **Linear** - Software project and bug tracking
- **GitHub** - Repository and issue management
### **Customer Relationship Management**
- **Salesforce** - CRM account and opportunity management
- **HubSpot** - Sales pipeline and contact management
- **Zendesk** - Customer support ticket management
### **Business & Finance**
- **Stripe** - Payment processing and customer management
- **Shopify** - E-commerce store and product management
### **Productivity & Storage**
- **Google Sheets** - Spreadsheet data synchronization
- **Google Calendar** - Event and schedule management
- **Box** - File storage and document management
and more to come!
## Prerequisites
Before using Authentication Integrations, ensure you have:
- A [CrewAI Enterprise](https://app.crewai.com) account. You can get started with a free trial.
## Setting Up Integrations
### 1. Connect Your Account
1. Navigate to [CrewAI Enterprise](https://app.crewai.com)
2. Go to the **Integrations** tab - https://app.crewai.com/crewai_plus/connectors
3. Click **Connect** on your desired service from the Authentication Integrations section
4. Complete the OAuth authentication flow
5. Grant necessary permissions for your use case
6. Get your Enterprise Token from your [CrewAI Enterprise](https://app.crewai.com) account page - https://app.crewai.com/crewai_plus/settings/account
<Frame>
![Integrations](/images/enterprise/enterprise_action_auth_token.png)
</Frame>
### 2. Install Integration Tools
All you need is the latest version of the `crewai-tools` package.
```bash
uv add crewai-tools
```
## Usage Examples
### Basic Usage
<Tip>
All the services you have authenticated with will be available as tools, so all you need to do is add `CrewaiEnterpriseTools` to your agent and you are good to go.
</Tip>
```python
from crewai import Agent, Task, Crew
from crewai_tools import CrewaiEnterpriseTools

# Get enterprise tools (Gmail tool will be included)
enterprise_tools = CrewaiEnterpriseTools(
    enterprise_token="your_enterprise_token"
)

# print the tools
print(enterprise_tools)

# Create an agent with Gmail capabilities
email_agent = Agent(
    role="Email Manager",
    goal="Manage and organize email communications",
    backstory="An AI assistant specialized in email management and communication.",
    tools=[enterprise_tools]
)

# Task to send an email
email_task = Task(
    description="Draft and send a follow-up email to john@example.com about the project update",
    agent=email_agent,
    expected_output="Confirmation that email was sent successfully"
)

# Run the task
crew = Crew(
    agents=[email_agent],
    tasks=[email_task]
)

# Run the crew
crew.kickoff()
```
### Filtering Tools
```python
from crewai import Agent, Task, Crew
from crewai_tools import CrewaiEnterpriseTools

enterprise_tools = CrewaiEnterpriseTools(
    actions_list=["gmail_find_email"]  # only gmail_find_email tool will be available
)

gmail_tool = enterprise_tools[0]

gmail_agent = Agent(
    role="Gmail Manager",
    goal="Manage gmail communications and notifications",
    backstory="An AI assistant that helps coordinate gmail communications.",
    tools=[gmail_tool]
)

notification_task = Task(
    description="Find the email from john@example.com",
    agent=gmail_agent,
    expected_output="Email found from john@example.com"
)

# Run the task
crew = Crew(
    agents=[gmail_agent],
    tasks=[notification_task]
)
```
## Best Practices
### Security
- **Principle of Least Privilege**: Only grant the minimum permissions required for your agents' tasks
- **Regular Audits**: Periodically review connected integrations and their permissions
- **Secure Credentials**: Never hardcode credentials; use CrewAI's secure authentication flow
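For example, rather than hardcoding the Enterprise token, you can read it from an environment variable (a minimal sketch; the variable name `CREWAI_ENTERPRISE_TOKEN` is an arbitrary choice, not a CrewAI convention):
```python
import os

from crewai_tools import CrewaiEnterpriseTools

# Read the token from the environment instead of embedding it in source control.
# The variable name here is arbitrary; use whatever your secret manager provides.
enterprise_tools = CrewaiEnterpriseTools(
    enterprise_token=os.environ["CREWAI_ENTERPRISE_TOKEN"]
)
```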
### Filtering Tools
On a deployed crew, you can specify which actions are available for each integration from the settings page of the service you connected to.
<Frame>
![Integrations](/images/enterprise/filtering_enterprise_action_tools.png)
</Frame>
### Scoped Deployments for Multi-User Organizations
You can deploy your crew and scope each integration to a specific user. For example, a crew that connects to Google can use a specific user's Gmail account.
<Tip>
This is useful for multi-user organizations where you want to scope each integration to a specific user.
</Tip>
Use the `user_bearer_token` to scope the integration to a specific user: when the crew is kicked off, it will use that user's bearer token to authenticate with the integration. If the user is not logged in, the crew will not use any connected integrations. Use the default bearer token to authenticate with the integrations that are deployed with the crew.
<Frame>
![Integrations](/images/enterprise/user_bearer_token.png)
</Frame>
### Getting Help
<Card title="Need Help?" icon="headset" href="mailto:support@crewai.com">
Contact our support team for assistance with integration setup or troubleshooting.
</Card>

View File

@@ -277,22 +277,23 @@ This pattern allows you to combine direct data passing with state updates for ma
One of CrewAI's most powerful features is the ability to persist flow state across executions. This enables workflows that can be paused, resumed, and even recovered after failures.
### The @persist Decorator
### The @persist() Decorator
The `@persist` decorator automates state persistence, saving your flow's state at key points in execution.
The `@persist()` decorator automates state persistence, saving your flow's state at key points in execution.
#### Class-Level Persistence
When applied at the class level, `@persist` saves state after every method execution:
When applied at the class level, `@persist()` saves state after every method execution:
```python
from crewai.flow.flow import Flow, listen, persist, start
from crewai.flow.flow import Flow, listen, start
from crewai.flow.persistence import persist
from pydantic import BaseModel
class CounterState(BaseModel):
value: int = 0
@persist # Apply to the entire flow class
@persist() # Apply to the entire flow class
class PersistentCounterFlow(Flow[CounterState]):
@start()
def increment(self):
@@ -319,10 +320,11 @@ print(f"Second run result: {result2}") # Will be higher due to persisted state
#### Method-Level Persistence
For more granular control, you can apply `@persist` to specific methods:
For more granular control, you can apply `@persist()` to specific methods:
```python
from crewai.flow.flow import Flow, listen, persist, start
from crewai.flow.flow import Flow, listen, start
from crewai.flow.persistence import persist
class SelectivePersistFlow(Flow):
@start()
@@ -330,7 +332,7 @@ class SelectivePersistFlow(Flow):
self.state["count"] = 1
return "First step"
@persist # Only persist after this method
@persist() # Only persist after this method
@listen(first_step)
def important_step(self, prev_result):
self.state["count"] += 1

Binary files not shown: four images added (1.2 MiB, 54 KiB, 362 KiB, 49 KiB).

View File

@@ -22,7 +22,7 @@ Watch this video tutorial for a step-by-step demonstration of the installation p
<Note>
**Python Version Requirements**
CrewAI requires `Python >=3.10 and <=3.13`. Here's how to check your version:
CrewAI requires `Python >=3.10 and <3.14`. Here's how to check your version:
```bash
python3 --version
```

View File

@@ -11,7 +11,7 @@ dependencies = [
# Core Dependencies
"pydantic>=2.4.2",
"openai>=1.13.3",
"litellm==1.68.0",
"litellm==1.72.0",
"instructor>=1.3.3",
# Text Processing
"pdfplumber>=0.11.4",

View File

@@ -1,6 +1,7 @@
from rich.console import Console
from rich.table import Table
from requests import HTTPError
from crewai.cli.command import BaseCommand, PlusAPIMixin
from crewai.cli.config import Settings
@@ -16,7 +17,7 @@ class OrganizationCommand(BaseCommand, PlusAPIMixin):
response = self.plus_api_client.get_organizations()
response.raise_for_status()
orgs = response.json()
if not orgs:
console.print("You don't belong to any organizations yet.", style="yellow")
return
@@ -26,8 +27,14 @@ class OrganizationCommand(BaseCommand, PlusAPIMixin):
table.add_column("ID", style="green")
for org in orgs:
table.add_row(org["name"], org["uuid"])
console.print(table)
except HTTPError as e:
if e.response.status_code == 401:
console.print("You are not logged in to any organization. Use 'crewai login' to login.", style="bold red")
return
console.print(f"Failed to retrieve organization list: {str(e)}", style="bold red")
raise SystemExit(1)
except Exception as e:
console.print(f"Failed to retrieve organization list: {str(e)}", style="bold red")
raise SystemExit(1)
@@ -37,18 +44,24 @@ class OrganizationCommand(BaseCommand, PlusAPIMixin):
response = self.plus_api_client.get_organizations()
response.raise_for_status()
orgs = response.json()
org = next((o for o in orgs if o["uuid"] == org_id), None)
if not org:
console.print(f"Organization with id '{org_id}' not found.", style="bold red")
return
settings = Settings()
settings.org_name = org["name"]
settings.org_uuid = org["uuid"]
settings.dump()
console.print(f"Successfully switched to {org['name']} ({org['uuid']})", style="bold green")
except HTTPError as e:
if e.response.status_code == 401:
console.print("You are not logged in to any organization. Use 'crewai login' to login.", style="bold red")
return
console.print(f"Failed to retrieve organization list: {str(e)}", style="bold red")
raise SystemExit(1)
except Exception as e:
console.print(f"Failed to switch organization: {str(e)}", style="bold red")
raise SystemExit(1)

View File

@@ -91,6 +91,7 @@ class ToolCommand(BaseCommand, PlusAPIMixin):
console.print(
f"[green]Found these tools to publish: {', '.join([e['name'] for e in available_exports])}[/green]"
)
self._print_current_organization()
with tempfile.TemporaryDirectory() as temp_build_dir:
subprocess.run(
@@ -136,6 +137,7 @@ class ToolCommand(BaseCommand, PlusAPIMixin):
)
def install(self, handle: str):
self._print_current_organization()
get_response = self.plus_api_client.get_tool(handle)
if get_response.status_code == 404:
@@ -182,7 +184,7 @@ class ToolCommand(BaseCommand, PlusAPIMixin):
settings.dump()
console.print(
"Successfully authenticated to the tool repository.", style="bold green"
f"Successfully authenticated to the tool repository as {settings.org_name} ({settings.org_uuid}).", style="bold green"
)
def _add_package(self, tool_details: dict[str, Any]):
@@ -240,3 +242,10 @@ class ToolCommand(BaseCommand, PlusAPIMixin):
)
return env
def _print_current_organization(self):
settings = Settings()
if settings.org_uuid:
console.print(f"Current organization: {settings.org_name} ({settings.org_uuid})", style="bold blue")
else:
console.print("No organization currently set. We recommend setting one before using: `crewai org switch <org_id>` command.", style="yellow")

View File

@@ -655,8 +655,6 @@ class Crew(FlowTrackable, BaseModel):
if self.planning:
self._handle_crew_planning()
metrics: List[UsageMetrics] = []
if self.process == Process.sequential:
result = self._run_sequential_process()
elif self.process == Process.hierarchical:
@@ -669,11 +667,8 @@ class Crew(FlowTrackable, BaseModel):
for after_callback in self.after_kickoff_callbacks:
result = after_callback(result)
metrics += [agent._token_process.get_summary() for agent in self.agents]
self.usage_metrics = self.calculate_usage_metrics()
self.usage_metrics = UsageMetrics()
for metric in metrics:
self.usage_metrics.add_usage_metrics(metric)
return result
except Exception as e:
crewai_event_bus.emit(

View File

@@ -1,5 +1,7 @@
from __future__ import annotations
import asyncio
import inspect
import textwrap
from typing import Any, Callable, Optional, Union, get_type_hints
@@ -239,7 +241,17 @@ class CrewStructuredTool:
) -> Any:
"""Main method for tool execution."""
parsed_args = self._parse_args(input)
return self.func(**parsed_args, **kwargs)
if inspect.iscoroutinefunction(self.func):
result = asyncio.run(self.func(**parsed_args, **kwargs))
return result
result = self.func(**parsed_args, **kwargs)
if asyncio.iscoroutine(result):
return asyncio.run(result)
return result
@property
def args(self) -> dict:

View File

@@ -20,7 +20,10 @@ from crewai.utilities.errors import AgentRepositoryError
from crewai.utilities.exceptions.context_window_exceeding_exception import (
LLMContextLengthExceededException,
)
from rich.console import Console
from crewai.cli.config import Settings
console = Console()
def parse_tools(tools: List[BaseTool]) -> List[CrewStructuredTool]:
"""Parse tools to be used for the task."""
@@ -435,6 +438,13 @@ def show_agent_logs(
)
def _print_current_organization():
settings = Settings()
if settings.org_uuid:
console.print(f"Fetching agent from organization: {settings.org_name} ({settings.org_uuid})", style="bold blue")
else:
console.print("No organization currently set. We recommend setting one before using: `crewai org switch <org_id>` command.", style="yellow")
def load_agent_from_repository(from_repository: str) -> Dict[str, Any]:
attributes: Dict[str, Any] = {}
if from_repository:
@@ -444,15 +454,18 @@ def load_agent_from_repository(from_repository: str) -> Dict[str, Any]:
from crewai.cli.plus_api import PlusAPI
client = PlusAPI(api_key=get_auth_token())
_print_current_organization()
response = client.get_agent(from_repository)
if response.status_code == 404:
raise AgentRepositoryError(
f"Agent {from_repository} does not exist, make sure the name is correct or the agent is available on your organization"
f"Agent {from_repository} does not exist, make sure the name is correct or the agent is available on your organization."
f"\nIf you are using the wrong organization, switch to the correct one using `crewai org switch <org_id>` command.",
)
if response.status_code != 200:
raise AgentRepositoryError(
f"Agent {from_repository} could not be loaded: {response.text}"
f"\nIf you are using the wrong organization, switch to the correct one using `crewai org switch <org_id>` command.",
)
agent = response.json()

View File

@@ -2126,3 +2126,60 @@ def test_agent_from_repository_agent_not_found(mock_get_agent, mock_get_auth_tok
match="Agent test_agent does not exist, make sure the name is correct or the agent is available on your organization",
):
Agent(from_repository="test_agent")
@patch("crewai.cli.plus_api.PlusAPI.get_agent")
@patch("crewai.utilities.agent_utils.Settings")
@patch("crewai.utilities.agent_utils.console")
def test_agent_from_repository_displays_org_info(mock_console, mock_settings, mock_get_agent, mock_get_auth_token):
mock_settings_instance = MagicMock()
mock_settings_instance.org_uuid = "test-org-uuid"
mock_settings_instance.org_name = "Test Organization"
mock_settings.return_value = mock_settings_instance
mock_get_response = MagicMock()
mock_get_response.status_code = 200
mock_get_response.json.return_value = {
"role": "test role",
"goal": "test goal",
"backstory": "test backstory",
"tools": []
}
mock_get_agent.return_value = mock_get_response
agent = Agent(from_repository="test_agent")
mock_console.print.assert_any_call(
"Fetching agent from organization: Test Organization (test-org-uuid)",
style="bold blue"
)
assert agent.role == "test role"
assert agent.goal == "test goal"
assert agent.backstory == "test backstory"
@patch("crewai.cli.plus_api.PlusAPI.get_agent")
@patch("crewai.utilities.agent_utils.Settings")
@patch("crewai.utilities.agent_utils.console")
def test_agent_from_repository_without_org_set(mock_console, mock_settings, mock_get_agent, mock_get_auth_token):
mock_settings_instance = MagicMock()
mock_settings_instance.org_uuid = None
mock_settings_instance.org_name = None
mock_settings.return_value = mock_settings_instance
mock_get_response = MagicMock()
mock_get_response.status_code = 401
mock_get_response.text = "Unauthorized access"
mock_get_agent.return_value = mock_get_response
with pytest.raises(
AgentRepositoryError,
match="Agent test_agent could not be loaded: Unauthorized access"
):
Agent(from_repository="test_agent")
mock_console.print.assert_any_call(
"No organization currently set. We recommend setting one before using: `crewai org switch <org_id>` command.",
style="yellow"
)

File diffs suppressed for four files because one or more lines are too long.

View File

@@ -33,9 +33,9 @@ def mock_settings():
def test_org_list_command(mock_org_command_class, runner):
mock_org_instance = MagicMock()
mock_org_command_class.return_value = mock_org_instance
result = runner.invoke(list)
assert result.exit_code == 0
mock_org_command_class.assert_called_once()
mock_org_instance.list.assert_called_once()
@@ -45,9 +45,9 @@ def test_org_list_command(mock_org_command_class, runner):
def test_org_switch_command(mock_org_command_class, runner):
mock_org_instance = MagicMock()
mock_org_command_class.return_value = mock_org_instance
result = runner.invoke(switch, ['test-id'])
assert result.exit_code == 0
mock_org_command_class.assert_called_once()
mock_org_instance.switch.assert_called_once_with('test-id')
@@ -57,9 +57,9 @@ def test_org_switch_command(mock_org_command_class, runner):
def test_org_current_command(mock_org_command_class, runner):
mock_org_instance = MagicMock()
mock_org_command_class.return_value = mock_org_instance
result = runner.invoke(current)
assert result.exit_code == 0
mock_org_command_class.assert_called_once()
mock_org_instance.current.assert_called_once()
@@ -70,7 +70,7 @@ class TestOrganizationCommand(unittest.TestCase):
with patch.object(OrganizationCommand, '__init__', return_value=None):
self.org_command = OrganizationCommand()
self.org_command.plus_api_client = MagicMock()
@patch('crewai.cli.organization.main.console')
@patch('crewai.cli.organization.main.Table')
def test_list_organizations_success(self, mock_table, mock_console):
@@ -82,11 +82,11 @@ class TestOrganizationCommand(unittest.TestCase):
]
self.org_command.plus_api_client = MagicMock()
self.org_command.plus_api_client.get_organizations.return_value = mock_response
mock_console.print = MagicMock()
self.org_command.list()
self.org_command.plus_api_client.get_organizations.assert_called_once()
mock_table.assert_called_once_with(title="Your Organizations")
mock_table.return_value.add_column.assert_has_calls([
@@ -105,12 +105,12 @@ class TestOrganizationCommand(unittest.TestCase):
mock_response.json.return_value = []
self.org_command.plus_api_client = MagicMock()
self.org_command.plus_api_client.get_organizations.return_value = mock_response
self.org_command.list()
self.org_command.plus_api_client.get_organizations.assert_called_once()
mock_console.print.assert_called_once_with(
"You don't belong to any organizations yet.",
"You don't belong to any organizations yet.",
style="yellow"
)
@@ -118,14 +118,14 @@ class TestOrganizationCommand(unittest.TestCase):
def test_list_organizations_api_error(self, mock_console):
self.org_command.plus_api_client = MagicMock()
self.org_command.plus_api_client.get_organizations.side_effect = requests.exceptions.RequestException("API Error")
with pytest.raises(SystemExit):
self.org_command.list()
self.org_command.plus_api_client.get_organizations.assert_called_once()
mock_console.print.assert_called_once_with(
"Failed to retrieve organization list: API Error",
"Failed to retrieve organization list: API Error",
style="bold red"
)
@@ -140,12 +140,12 @@ class TestOrganizationCommand(unittest.TestCase):
]
self.org_command.plus_api_client = MagicMock()
self.org_command.plus_api_client.get_organizations.return_value = mock_response
mock_settings_instance = MagicMock()
mock_settings_class.return_value = mock_settings_instance
self.org_command.switch("test-id")
self.org_command.plus_api_client.get_organizations.assert_called_once()
mock_settings_instance.dump.assert_called_once()
assert mock_settings_instance.org_name == "Test Org"
@@ -165,9 +165,9 @@ class TestOrganizationCommand(unittest.TestCase):
]
self.org_command.plus_api_client = MagicMock()
self.org_command.plus_api_client.get_organizations.return_value = mock_response
self.org_command.switch("non-existent-id")
self.org_command.plus_api_client.get_organizations.assert_called_once()
mock_console.print.assert_called_once_with(
"Organization with id 'non-existent-id' not found.",
@@ -181,9 +181,9 @@ class TestOrganizationCommand(unittest.TestCase):
mock_settings_instance.org_name = "Test Org"
mock_settings_instance.org_uuid = "test-id"
mock_settings_class.return_value = mock_settings_instance
self.org_command.current()
self.org_command.plus_api_client.get_organizations.assert_not_called()
mock_console.print.assert_called_once_with(
"Currently logged in to organization Test Org (test-id)",
@@ -196,11 +196,49 @@ class TestOrganizationCommand(unittest.TestCase):
mock_settings_instance = MagicMock()
mock_settings_instance.org_uuid = None
mock_settings_class.return_value = mock_settings_instance
self.org_command.current()
assert mock_console.print.call_count == 3
mock_console.print.assert_any_call(
"You're not currently logged in to any organization.",
"You're not currently logged in to any organization.",
style="yellow"
)
@patch('crewai.cli.organization.main.console')
def test_list_organizations_unauthorized(self, mock_console):
mock_response = MagicMock()
mock_http_error = requests.exceptions.HTTPError(
"401 Client Error: Unauthorized",
response=MagicMock(status_code=401)
)
mock_response.raise_for_status.side_effect = mock_http_error
self.org_command.plus_api_client.get_organizations.return_value = mock_response
self.org_command.list()
self.org_command.plus_api_client.get_organizations.assert_called_once()
mock_console.print.assert_called_once_with(
"You are not logged in to any organization. Use 'crewai login' to login.",
style="bold red"
)
@patch('crewai.cli.organization.main.console')
def test_switch_organization_unauthorized(self, mock_console):
mock_response = MagicMock()
mock_http_error = requests.exceptions.HTTPError(
"401 Client Error: Unauthorized",
response=MagicMock(status_code=401)
)
mock_response.raise_for_status.side_effect = mock_http_error
self.org_command.plus_api_client.get_organizations.return_value = mock_response
self.org_command.switch("test-id")
self.org_command.plus_api_client.get_organizations.assert_called_once()
mock_console.print.assert_called_once_with(
"You are not logged in to any organization. Use 'crewai login' to login.",
style="bold red"
)

View File

@@ -56,7 +56,8 @@ def test_create_success(mock_subprocess, capsys, tool_command):
@patch("crewai.cli.tools.main.subprocess.run")
@patch("crewai.cli.plus_api.PlusAPI.get_tool")
def test_install_success(mock_get, mock_subprocess_run, capsys, tool_command):
@patch("crewai.cli.tools.main.ToolCommand._print_current_organization")
def test_install_success(mock_print_org, mock_get, mock_subprocess_run, capsys, tool_command):
mock_get_response = MagicMock()
mock_get_response.status_code = 200
mock_get_response.json.return_value = {
@@ -85,6 +86,9 @@ def test_install_success(mock_get, mock_subprocess_run, capsys, tool_command):
env=unittest.mock.ANY,
)
# Verify _print_current_organization was called
mock_print_org.assert_called_once()
@patch("crewai.cli.tools.main.subprocess.run")
@patch("crewai.cli.plus_api.PlusAPI.get_tool")
def test_install_success_from_pypi(mock_get, mock_subprocess_run, capsys, tool_command):
@@ -166,7 +170,9 @@ def test_publish_when_not_in_sync(mock_is_synced, capsys, tool_command):
@patch("crewai.cli.plus_api.PlusAPI.publish_tool")
@patch("crewai.cli.tools.main.git.Repository.is_synced", return_value=False)
@patch("crewai.cli.tools.main.extract_available_exports", return_value=[{"name": "SampleTool"}])
@patch("crewai.cli.tools.main.ToolCommand._print_current_organization")
def test_publish_when_not_in_sync_and_force(
mock_print_org,
mock_available_exports,
mock_is_synced,
mock_publish,
@@ -202,6 +208,7 @@ def test_publish_when_not_in_sync_and_force(
encoded_file=unittest.mock.ANY,
available_exports=[{"name": "SampleTool"}],
)
mock_print_org.assert_called_once()
@patch("crewai.cli.tools.main.get_project_name", return_value="sample-tool")
@@ -329,3 +336,27 @@ def test_publish_api_error(
assert "Request to Enterprise API failed" in output
mock_publish.assert_called_once()
@patch("crewai.cli.tools.main.Settings")
def test_print_current_organization_with_org(mock_settings, capsys, tool_command):
mock_settings_instance = MagicMock()
mock_settings_instance.org_uuid = "test-org-uuid"
mock_settings_instance.org_name = "Test Organization"
mock_settings.return_value = mock_settings_instance
tool_command._print_current_organization()
output = capsys.readouterr().out
assert "Current organization: Test Organization (test-org-uuid)" in output
@patch("crewai.cli.tools.main.Settings")
def test_print_current_organization_without_org(mock_settings, capsys, tool_command):
mock_settings_instance = MagicMock()
mock_settings_instance.org_uuid = None
mock_settings_instance.org_name = None
mock_settings.return_value = mock_settings_instance
tool_command._print_current_organization()
output = capsys.readouterr().out
assert "No organization currently set" in output
assert "org switch <org_id>" in output

View File

@@ -1765,6 +1765,50 @@ def test_agent_usage_metrics_are_captured_for_hierarchical_process():
)
def test_hierarchical_kickoff_usage_metrics_include_manager(researcher):
"""Ensure Crew.kickoff() sums UsageMetrics from both regular and manager agents."""
# ── 1. Build the manager and a simple task ──────────────────────────────────
manager = Agent(
role="Manager",
goal="Coordinate everything.",
backstory="Keeps the project on track.",
allow_delegation=False,
)
task = Task(
description="Say hello",
expected_output="Hello",
agent=researcher, # *regular* agent
)
# ── 2. Stub out each agent's _token_process.get_summary() ───────────────────
researcher_metrics = UsageMetrics(total_tokens=120, prompt_tokens=80, completion_tokens=40, successful_requests=2)
manager_metrics = UsageMetrics(total_tokens=30, prompt_tokens=20, completion_tokens=10, successful_requests=1)
# Replace the internal _token_process objects with simple mocks
researcher._token_process = MagicMock(get_summary=MagicMock(return_value=researcher_metrics))
manager._token_process = MagicMock(get_summary=MagicMock(return_value=manager_metrics))
# ── 3. Create the crew (hierarchical!) and kick it off ──────────────────────
crew = Crew(
agents=[researcher], # regular agents
manager_agent=manager, # manager to be included
tasks=[task],
process=Process.hierarchical,
)
# We don't care about LLM output here; patch execute_sync to avoid network
with patch.object(Task, "execute_sync", return_value=TaskOutput(description="dummy", raw="Hello", agent=researcher.role)):
crew.kickoff()
# ── 4. Assert the aggregated numbers are the *sum* of both agents ───────────
assert crew.usage_metrics.total_tokens == researcher_metrics.total_tokens + manager_metrics.total_tokens
assert crew.usage_metrics.prompt_tokens == researcher_metrics.prompt_tokens + manager_metrics.prompt_tokens
assert crew.usage_metrics.completion_tokens == researcher_metrics.completion_tokens + manager_metrics.completion_tokens
assert crew.usage_metrics.successful_requests == researcher_metrics.successful_requests + manager_metrics.successful_requests
@pytest.mark.vcr(filter_headers=["authorization"])
def test_hierarchical_crew_creation_tasks_with_agents(researcher, writer):
"""
@@ -4564,5 +4608,3 @@ def test_reset_agent_knowledge_with_only_agent_knowledge(researcher,writer):
crew.reset_memories(command_type='agent_knowledge')
mock_reset_agent_knowledge.assert_called_once_with([mock_ks_research,mock_ks_writer])

View File

@@ -25,122 +25,206 @@ def schema_class():
return TestSchema
class InternalCrewStructuredTool:
def test_initialization(self, basic_function, schema_class):
"""Test basic initialization of CrewStructuredTool"""
tool = CrewStructuredTool(
name="test_tool",
description="Test tool description",
func=basic_function,
args_schema=schema_class,
)
def test_initialization(basic_function, schema_class):
"""Test basic initialization of CrewStructuredTool"""
tool = CrewStructuredTool(
name="test_tool",
description="Test tool description",
func=basic_function,
args_schema=schema_class,
)
assert tool.name == "test_tool"
assert tool.description == "Test tool description"
assert tool.func == basic_function
assert tool.args_schema == schema_class
assert tool.name == "test_tool"
assert tool.description == "Test tool description"
assert tool.func == basic_function
assert tool.args_schema == schema_class
def test_from_function(self, basic_function):
"""Test creating tool from function"""
tool = CrewStructuredTool.from_function(
func=basic_function, name="test_tool", description="Test description"
)
def test_from_function(basic_function):
"""Test creating tool from function"""
tool = CrewStructuredTool.from_function(
func=basic_function, name="test_tool", description="Test description"
)
assert tool.name == "test_tool"
assert tool.description == "Test description"
assert tool.func == basic_function
assert isinstance(tool.args_schema, type(BaseModel))
assert tool.name == "test_tool"
assert tool.description == "Test description"
assert tool.func == basic_function
assert isinstance(tool.args_schema, type(BaseModel))
def test_validate_function_signature(self, basic_function, schema_class):
"""Test function signature validation"""
tool = CrewStructuredTool(
name="test_tool",
description="Test tool",
func=basic_function,
args_schema=schema_class,
)
def test_validate_function_signature(basic_function, schema_class):
"""Test function signature validation"""
tool = CrewStructuredTool(
name="test_tool",
description="Test tool",
func=basic_function,
args_schema=schema_class,
)
# Should not raise any exceptions
tool._validate_function_signature()
# Should not raise any exceptions
tool._validate_function_signature()
@pytest.mark.asyncio
async def test_ainvoke(self, basic_function):
"""Test asynchronous invocation"""
tool = CrewStructuredTool.from_function(func=basic_function, name="test_tool")
@pytest.mark.asyncio
async def test_ainvoke(basic_function):
"""Test asynchronous invocation"""
tool = CrewStructuredTool.from_function(func=basic_function, name="test_tool")
result = await tool.ainvoke(input={"param1": "test"})
assert result == "test 0"
result = await tool.ainvoke(input={"param1": "test"})
assert result == "test 0"
def test_parse_args_dict(self, basic_function):
"""Test parsing dictionary arguments"""
tool = CrewStructuredTool.from_function(func=basic_function, name="test_tool")
def test_parse_args_dict(basic_function):
"""Test parsing dictionary arguments"""
tool = CrewStructuredTool.from_function(func=basic_function, name="test_tool")
parsed = tool._parse_args({"param1": "test", "param2": 42})
assert parsed["param1"] == "test"
assert parsed["param2"] == 42
parsed = tool._parse_args({"param1": "test", "param2": 42})
assert parsed["param1"] == "test"
assert parsed["param2"] == 42
def test_parse_args_string(self, basic_function):
"""Test parsing string arguments"""
tool = CrewStructuredTool.from_function(func=basic_function, name="test_tool")
def test_parse_args_string(basic_function):
"""Test parsing string arguments"""
tool = CrewStructuredTool.from_function(func=basic_function, name="test_tool")
parsed = tool._parse_args('{"param1": "test", "param2": 42}')
assert parsed["param1"] == "test"
assert parsed["param2"] == 42
parsed = tool._parse_args('{"param1": "test", "param2": 42}')
assert parsed["param1"] == "test"
assert parsed["param2"] == 42
def test_complex_types(self):
"""Test handling of complex parameter types"""
def test_complex_types():
"""Test handling of complex parameter types"""
def complex_func(nested: dict, items: list) -> str:
"""Process complex types."""
return f"Processed {len(items)} items with {len(nested)} nested keys"
def complex_func(nested: dict, items: list) -> str:
"""Process complex types."""
return f"Processed {len(items)} items with {len(nested)} nested keys"
tool = CrewStructuredTool.from_function(
func=complex_func, name="test_tool", description="Test complex types"
)
result = tool.invoke({"nested": {"key": "value"}, "items": [1, 2, 3]})
assert result == "Processed 3 items with 1 nested keys"
tool = CrewStructuredTool.from_function(
func=complex_func, name="test_tool", description="Test complex types"
)
result = tool.invoke({"nested": {"key": "value"}, "items": [1, 2, 3]})
assert result == "Processed 3 items with 1 nested keys"
def test_schema_inheritance(self):
"""Test tool creation with inherited schema"""
def test_schema_inheritance():
"""Test tool creation with inherited schema"""
def extended_func(base_param: str, extra_param: int) -> str:
"""Test function with inherited schema."""
return f"{base_param} {extra_param}"
def extended_func(base_param: str, extra_param: int) -> str:
"""Test function with inherited schema."""
return f"{base_param} {extra_param}"
class BaseSchema(BaseModel):
base_param: str
class BaseSchema(BaseModel):
base_param: str
class ExtendedSchema(BaseSchema):
extra_param: int
class ExtendedSchema(BaseSchema):
extra_param: int
tool = CrewStructuredTool.from_function(
func=extended_func, name="test_tool", args_schema=ExtendedSchema
)
tool = CrewStructuredTool.from_function(
func=extended_func, name="test_tool", args_schema=ExtendedSchema
)
result = tool.invoke({"base_param": "test", "extra_param": 42})
assert result == "test 42"
result = tool.invoke({"base_param": "test", "extra_param": 42})
assert result == "test 42"
def test_default_values_in_schema(self):
"""Test handling of default values in schema"""
def test_default_values_in_schema():
"""Test handling of default values in schema"""
def default_func(
required_param: str,
optional_param: str = "default",
nullable_param: Optional[int] = None,
) -> str:
"""Test function with default values."""
return f"{required_param} {optional_param} {nullable_param}"
def default_func(
required_param: str,
optional_param: str = "default",
nullable_param: Optional[int] = None,
) -> str:
"""Test function with default values."""
return f"{required_param} {optional_param} {nullable_param}"
tool = CrewStructuredTool.from_function(
func=default_func, name="test_tool", description="Test defaults"
)
tool = CrewStructuredTool.from_function(
func=default_func, name="test_tool", description="Test defaults"
)
# Test with minimal parameters
result = tool.invoke({"required_param": "test"})
assert result == "test default None"
# Test with minimal parameters
result = tool.invoke({"required_param": "test"})
assert result == "test default None"
# Test with all parameters
result = tool.invoke(
{"required_param": "test", "optional_param": "custom", "nullable_param": 42}
)
assert result == "test custom 42"
# Test with all parameters
result = tool.invoke(
{"required_param": "test", "optional_param": "custom", "nullable_param": 42}
)
assert result == "test custom 42"
@pytest.fixture
def custom_tool_decorator():
from crewai.tools import tool
@tool("custom_tool", result_as_answer=True)
async def custom_tool():
"""This is a tool that does something"""
return "Hello World from Custom Tool"
return custom_tool
@pytest.fixture
def custom_tool():
from crewai.tools import BaseTool
class CustomTool(BaseTool):
name: str = "my_tool"
description: str = "This is a tool that does something"
result_as_answer: bool = True
async def _run(self):
return "Hello World from Custom Tool"
return CustomTool()
def build_simple_crew(tool):
from crewai import Agent, Task, Crew
agent1 = Agent(role="Simple role", goal="Simple goal", backstory="Simple backstory", tools=[tool])
say_hi_task = Task(
description="Use the custom tool result as answer.", agent=agent1, expected_output="Use the tool result"
)
crew = Crew(agents=[agent1], tasks=[say_hi_task])
return crew
@pytest.mark.vcr(filter_headers=["authorization"])
def test_async_tool_using_within_isolated_crew(custom_tool):
crew = build_simple_crew(custom_tool)
result = crew.kickoff()
assert result.raw == "Hello World from Custom Tool"
@pytest.mark.vcr(filter_headers=["authorization"])
def test_async_tool_using_decorator_within_isolated_crew(custom_tool_decorator):
crew = build_simple_crew(custom_tool_decorator)
result = crew.kickoff()
assert result.raw == "Hello World from Custom Tool"
@pytest.mark.vcr(filter_headers=["authorization"])
def test_async_tool_within_flow(custom_tool):
from crewai.flow.flow import Flow
class StructuredExampleFlow(Flow):
from crewai.flow.flow import start
@start()
async def start(self):
crew = build_simple_crew(custom_tool)
result = await crew.kickoff_async()
return result
flow = StructuredExampleFlow()
result = flow.kickoff()
assert result.raw == "Hello World from Custom Tool"
@pytest.mark.vcr(filter_headers=["authorization"])
def test_async_tool_using_decorator_within_flow(custom_tool_decorator):
from crewai.flow.flow import Flow
class StructuredExampleFlow(Flow):
from crewai.flow.flow import start
@start()
async def start(self):
crew = build_simple_crew(custom_tool_decorator)
result = await crew.kickoff_async()
return result
flow = StructuredExampleFlow()
result = flow.kickoff()
assert result.raw == "Hello World from Custom Tool"

uv.lock generated
View File

@@ -820,7 +820,7 @@ requires-dist = [
{ name = "json-repair", specifier = ">=0.25.2" },
{ name = "json5", specifier = ">=0.10.0" },
{ name = "jsonref", specifier = ">=1.1.0" },
{ name = "litellm", specifier = "==1.68.0" },
{ name = "litellm", specifier = "==1.72.0" },
{ name = "mem0ai", marker = "extra == 'mem0'", specifier = ">=0.1.94" },
{ name = "onnxruntime", specifier = "==1.22.0" },
{ name = "openai", specifier = ">=1.13.3" },
@@ -2245,7 +2245,7 @@ wheels = [
[[package]]
name = "litellm"
version = "1.68.0"
version = "1.72.0"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "aiohttp" },
@@ -2260,9 +2260,9 @@ dependencies = [
{ name = "tiktoken" },
{ name = "tokenizers" },
]
sdist = { url = "https://files.pythonhosted.org/packages/ba/22/138545b646303ca3f4841b69613c697b9d696322a1386083bb70bcbba60b/litellm-1.68.0.tar.gz", hash = "sha256:9fb24643db84dfda339b64bafca505a2eef857477afbc6e98fb56512c24dbbfa", size = 7314051 }
sdist = { url = "https://files.pythonhosted.org/packages/55/d3/f1a8c9c9ffdd3bab1bc410254c8140b1346f05a01b8c6b37c48b56abb4b0/litellm-1.72.0.tar.gz", hash = "sha256:135022b9b8798f712ffa84e71ac419aa4310f1d0a70f79dd2007f7ef3a381e43", size = 8082337 }
wheels = [
{ url = "https://files.pythonhosted.org/packages/10/af/1e344bc8aee41445272e677d802b774b1f8b34bdc3bb5697ba30f0fb5d52/litellm-1.68.0-py3-none-any.whl", hash = "sha256:3bca38848b1a5236b11aa6b70afa4393b60880198c939e582273f51a542d4759", size = 7684460 },
{ url = "https://files.pythonhosted.org/packages/c2/98/bec08f5a3e504013db6f52b5fd68375bd92b463c91eb454d5a6460e957af/litellm-1.72.0-py3-none-any.whl", hash = "sha256:88360a7ae9aa9c96278ae1bb0a459226f909e711c5d350781296d0640386a824", size = 7979630 },
]
[[package]]
@@ -3123,7 +3123,7 @@ wheels = [
[[package]]
name = "openai"
version = "1.68.2"
version = "1.78.0"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "anyio" },
@@ -3135,9 +3135,9 @@ dependencies = [
{ name = "tqdm" },
{ name = "typing-extensions" },
]
sdist = { url = "https://files.pythonhosted.org/packages/3f/6b/6b002d5d38794645437ae3ddb42083059d556558493408d39a0fcea608bc/openai-1.68.2.tar.gz", hash = "sha256:b720f0a95a1dbe1429c0d9bb62096a0d98057bcda82516f6e8af10284bdd5b19", size = 413429 }
sdist = { url = "https://files.pythonhosted.org/packages/d1/7c/7c48bac9be52680e41e99ae7649d5da3a0184cd94081e028897f9005aa03/openai-1.78.0.tar.gz", hash = "sha256:254aef4980688468e96cbddb1f348ed01d274d02c64c6c69b0334bf001fb62b3", size = 442652 }
wheels = [
{ url = "https://files.pythonhosted.org/packages/fd/34/cebce15f64eb4a3d609a83ac3568d43005cc9a1cba9d7fde5590fd415423/openai-1.68.2-py3-none-any.whl", hash = "sha256:24484cb5c9a33b58576fdc5acf0e5f92603024a4e39d0b99793dfa1eb14c2b36", size = 606073 },
{ url = "https://files.pythonhosted.org/packages/cc/41/d64a6c56d0ec886b834caff7a07fc4d43e1987895594b144757e7a6b90d7/openai-1.78.0-py3-none-any.whl", hash = "sha256:1ade6a48cd323ad8a7715e7e1669bb97a17e1a5b8a916644261aaef4bf284778", size = 680407 },
]
[[package]]