Compare commits


2 Commits

Iris Clawd
71c7293197 docs: remove --public flag references (public tools no longer supported)
Remove all references to --public visibility flag since public tools
have been removed from the platform (crewai-plus#2380, ENG-1453).
Tools are now organization-scoped (private) by default.

Co-authored-by: Diego Nogues <diego@crewai.com>
2026-05-14 09:52:10 -03:00
Iris Clawd
288568110f docs: add Platform Tools CLI guide for crewai tool create/publish/install
Add documentation for the CLI commands referenced in the Create Tool
modal on the platform (crewai tool create, crewai tool publish,
crewai tool install). These commands manage tools on the CrewAI
platform registry — distinct from PyPI publishing.

Changes:
- New guide: docs/en/guides/tools/platform-tools-cli.mdx
  Full lifecycle: create → implement → publish → install
  Covers visibility flags (--public/--private/--force)
  Includes platform vs PyPI comparison
- Updated create-custom-tools.mdx tip to cross-reference both guides
- Added new page to docs.json navigation (all versions)

Resolves EPD-76

Co-authored-by: Diego Nogues <diego@crewai.com>
2026-05-14 09:52:10 -03:00
11 changed files with 1389 additions and 1277 deletions

File diff suppressed because it is too large

docs/en/guides/tools/platform-tools-cli.mdx

@@ -0,0 +1,131 @@
---
title: Platform Tools CLI
description: Create, publish, and install custom tools on the CrewAI platform using the CLI.
icon: terminal
mode: "wide"
---
## Overview
The CrewAI CLI provides commands to manage custom tools on the **CrewAI platform** — a hosted tool registry that lets you share tools within your organization without publishing to PyPI.
| Command | Purpose |
|---------|---------|
| `crewai tool create <handle>` | Scaffold a new tool project |
| `crewai tool publish` | Publish the tool to the CrewAI platform |
| `crewai tool install <handle>` | Install a platform tool into your crew project |
<Note type="info" title="Platform vs PyPI">
These commands manage tools on the **CrewAI platform registry**. If you want to publish a standalone Python package to PyPI instead, see the [Publish Custom Tools to PyPI](/en/guides/tools/publish-custom-tools) guide.
</Note>
## Prerequisites
- **CrewAI CLI** installed (`pip install crewai`)
- **Authenticated** with the platform — run `crewai login` first
---
## Step 1: Create a Tool Project
Scaffold a new tool project:
```bash
crewai tool create my_custom_tool
```
This generates a project structure with the boilerplate you need to start building your tool.
<Tip>
The `handle` is the unique identifier for your tool on the platform. Choose a descriptive name that reflects what the tool does.
</Tip>
### Implement Your Tool
Edit the generated tool file to add your logic. The tool follows the standard CrewAI tools contract — you can subclass `BaseTool` or use the `@tool` decorator:
```python
from crewai.tools import BaseTool


class MyCustomTool(BaseTool):
    name: str = "My Custom Tool"
    description: str = "Description of what this tool does — be specific so agents know when to use it."

    def _run(self, argument: str) -> str:
        # Your tool logic here
        return "result"
```
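
If you prefer the `@tool` decorator mentioned above, the equivalent tool is a plain function whose docstring becomes the description. A minimal sketch (the function name is illustrative):
```python
from crewai.tools import tool


@tool("My Custom Tool")
def my_custom_tool(argument: str) -> str:
    """Description of what this tool does; agents read this to decide when to use it."""
    # Your tool logic here
    return "result"
```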
For the full tools API reference (input schemas, caching, async support, error handling), see the [Create Custom Tools](/en/learn/create-custom-tools) guide.
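
As one example of those features, a typed input schema can be attached via `args_schema` so agents pass validated arguments. A minimal sketch, assuming a weather-style tool (the `WeatherInput` model and field names are illustrative):
```python
from typing import Type

from pydantic import BaseModel, Field
from crewai.tools import BaseTool


class WeatherInput(BaseModel):
    """Validated arguments the agent must supply."""

    location: str = Field(..., description="City and state, e.g. 'New York, NY'")


class WeatherLookupTool(BaseTool):
    name: str = "Weather Lookup"
    description: str = "Look up the current weather for a location."
    args_schema: Type[BaseModel] = WeatherInput

    def _run(self, location: str) -> str:
        # A real implementation would call a weather API here.
        return f"Sunny in {location}"
```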
---
## Step 2: Publish to the Platform
From your tool project directory, publish it to the CrewAI platform:
```bash
crewai tool publish
```
### Options
| Flag | Description |
|------|-------------|
| `--force` | Bypass Git remote validation checks |
Tools are published privately to your organization by default.
---
## Step 3: Install a Platform Tool
To install a tool that's been published to the platform:
```bash
crewai tool install my_custom_tool
```
Once installed, you can use the tool in your crew like any other tool — assign it to an agent via the `tools` parameter.
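
For instance, assuming the installed package exposes the `MyCustomTool` class from Step 1 (the import path below is illustrative; it depends on the generated package layout):
```python
from crewai import Agent

from my_custom_tool.tool import MyCustomTool  # hypothetical import path

agent = Agent(
    role="Research Assistant",
    goal="Answer questions using the organization's custom tools",
    backstory="An analyst with access to internal tooling.",
    tools=[MyCustomTool()],
)
```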
---
## Full Lifecycle Example
```bash
# 1. Authenticate with the platform
crewai login
# 2. Scaffold a new tool
crewai tool create weather_lookup
# 3. Implement your logic in the generated project
cd weather_lookup
# ... edit the tool file ...
# 4. Publish to the platform
crewai tool publish
# 5. In another project, install and use it
crewai tool install weather_lookup
```
---
## Platform Tools vs PyPI Packages
| | Platform Tools | PyPI Packages |
|---|---|---|
| **Publish** | `crewai tool publish` | `uv build` + `uv publish` |
| **Registry** | CrewAI platform | PyPI |
| **Install** | `crewai tool install <handle>` | `pip install <package>` |
| **Auth** | `crewai login` | PyPI account + token |
| **Visibility** | Organization-scoped (private) | Always public |
| **Guide** | This page | [Publish Custom Tools](/en/guides/tools/publish-custom-tools) |
---
## Related
- [Create Custom Tools](/en/learn/create-custom-tools) — Python API reference for building tools (BaseTool, @tool decorator)
- [Publish Custom Tools to PyPI](/en/guides/tools/publish-custom-tools) — package and distribute tools as standalone Python libraries

docs/en/learn/create-custom-tools.mdx

@@ -12,7 +12,9 @@ incorporating the latest functionalities such as tool delegation, error handling
enabling agents to perform a wide range of actions.
<Tip>
**Want to publish your tool for the community?** If you're building a tool that others could benefit from, check out the [Publish Custom Tools](/en/guides/tools/publish-custom-tools) guide to learn how to package and distribute your tool on PyPI.
**Want to publish your tool to the CrewAI platform?** Use the CLI to scaffold, publish, and share tools directly on the platform — see the [Platform Tools CLI](/en/guides/tools/platform-tools-cli) guide.
**Prefer publishing to PyPI?** Check out the [Publish Custom Tools](/en/guides/tools/publish-custom-tools) guide to package and distribute your tool as a standalone Python library.
</Tip>
### Subclassing `BaseTool`


@@ -220,11 +220,7 @@ class Agent(BaseAgent):
str | BaseLLM | None,
BeforeValidator(_validate_llm_ref),
PlainSerializer(_serialize_llm_ref, return_type=dict | None, when_used="json"),
] = Field(
description="Language model that will run the agent.",
default=None,
deprecated="function_calling_llm is deprecated and will be removed in a future release.",
)
] = Field(description="Language model that will run the agent.", default=None)
system_template: str | None = Field(
default=None, description="System format for the agent."
)


@@ -51,10 +51,7 @@ class LangGraphAgentAdapter(BaseAgentAdapter):
_graph: Any = PrivateAttr(default=None)
_memory: Any = PrivateAttr(default=None)
_max_iterations: int = PrivateAttr(default=10)
function_calling_llm: Any = Field(
default=None,
deprecated="function_calling_llm is deprecated and will be removed in a future release.",
)
function_calling_llm: Any = Field(default=None)
step_callback: SerializableCallable | None = Field(default=None)
model: str = Field(default="gpt-4o")


@@ -60,10 +60,7 @@ class OpenAIAgentAdapter(BaseAgentAdapter):
_openai_agent: OpenAIAgentProtocol = PrivateAttr()
_logger: Logger = PrivateAttr(default_factory=Logger)
_active_thread: str | None = PrivateAttr(default=None)
function_calling_llm: Any = Field(
default=None,
deprecated="function_calling_llm is deprecated and will be removed in a future release.",
)
function_calling_llm: Any = Field(default=None)
step_callback: Any = Field(default=None)
_tool_adapter: OpenAIAgentToolAdapter = PrivateAttr()
_converter_adapter: OpenAIConverterAdapter = PrivateAttr()


@@ -251,11 +251,7 @@ class Crew(FlowTrackable, BaseModel):
str | LLM | None,
BeforeValidator(_validate_llm_ref),
PlainSerializer(_serialize_llm_ref, return_type=dict | None, when_used="json"),
] = Field(
description="Language model that will run the agent.",
default=None,
deprecated="function_calling_llm is deprecated and will be removed in a future release.",
)
] = Field(description="Language model that will run the agent.", default=None)
config: Json[dict[str, Any]] | dict[str, Any] | None = Field(default=None)
id: UUID4 = Field(default_factory=uuid.uuid4, frozen=True)
share_crew: bool | None = Field(default=False)


@@ -940,21 +940,6 @@ class LLM(BaseLLM):
self._track_token_usage_internal(usage_info)
self._handle_streaming_callbacks(callbacks, usage_info, last_chunk)
if accumulated_tool_args and not available_functions:
tool_calls_list: list[ChatCompletionDeltaToolCall] = [
ChatCompletionDeltaToolCall(
index=idx,
function=Function(
name=tool_arg.function.name,
arguments=tool_arg.function.arguments,
),
)
for idx, tool_arg in sorted(accumulated_tool_args.items())
if tool_arg.function.name
]
if tool_calls_list:
return tool_calls_list
if not tool_calls or not available_functions:
if response_model and self.is_litellm:
instructor_instance = InternalInstructor(
@@ -1550,7 +1535,8 @@ class LLM(BaseLLM):
if usage_info:
self._track_token_usage_internal(usage_info)
if accumulated_tool_args:
if accumulated_tool_args and available_functions:
# Convert accumulated tool args to ChatCompletionDeltaToolCall objects
tool_calls_list: list[ChatCompletionDeltaToolCall] = [
ChatCompletionDeltaToolCall(
index=idx,
@@ -1559,24 +1545,21 @@ class LLM(BaseLLM):
arguments=tool_arg.function.arguments,
),
)
for idx, tool_arg in sorted(accumulated_tool_args.items())
for idx, tool_arg in accumulated_tool_args.items()
if tool_arg.function.name
]
if tool_calls_list:
if available_functions:
result = self._handle_streaming_tool_calls(
tool_calls=tool_calls_list,
accumulated_tool_args=accumulated_tool_args,
available_functions=available_functions,
from_task=from_task,
from_agent=from_agent,
response_id=response_id,
)
if result is not None:
return result
else:
return tool_calls_list
result = self._handle_streaming_tool_calls(
tool_calls=tool_calls_list,
accumulated_tool_args=accumulated_tool_args,
available_functions=available_functions,
from_task=from_task,
from_agent=from_agent,
response_id=response_id,
)
if result is not None:
return result
usage_dict = self._usage_to_dict(usage_info)
self._handle_emit_call_events(


@@ -624,15 +624,12 @@ def test_handle_streaming_tool_calls_no_available_functions(
],
tools=[get_weather_tool_schema],
)
assert isinstance(response, list)
assert len(response) == 1
assert response[0].function.name == "get_weather"
assert response[0].function.arguments == '{"location":"New York, NY"}'
assert response == ""
assert_event_count(
mock_emit=mock_emit,
expected_stream_chunk=9,
expected_completed_llm_call=0,
expected_completed_llm_call=1,
expected_final_chunk_result='{"location":"New York, NY"}',
)


@@ -185,7 +185,7 @@ exclude-newer = "3 days"
# python-multipart <0.0.27 has GHSA-pp6c-gr5w-3c5g (DoS via unbounded multipart headers).
# gitpython <3.1.50 has GHSA-mv93-w799-cj2w (config_writer newline injection bypassing the 3.1.49 patch -> RCE via core.hooksPath).
# urllib3 <2.7.0 has GHSA-qccp-gfcp-xxvc (ProxyManager cross-origin redirect leaks Authorization/Cookie) and GHSA-mf9v-mfxr-j63j (streaming decompression-bomb bypass); force 2.7.0+.
# langsmith <0.8.0 has GHSA-3644-q5cj-c5c7 (public prompt manifest deserialization, SSRF/secret disclosure); force 0.8.0+.
# langsmith <0.7.31 has GHSA-rr7j-v2q5-chgv (streaming token redaction bypass); force 0.7.31+.
# authlib <1.6.11 has GHSA-jj8c-mmj3-mmgv (CSRF bypass in cache-based state storage).
# litellm 1.83.8+ hard-pins openai==2.24.0, missing openai.types.responses used by crewai;
# override to >=2.30.0 (the version litellm 1.83.7 used) until upstream relaxes the pin.
@@ -203,7 +203,7 @@ override-dependencies = [
"uv>=0.11.6,<1",
"python-multipart>=0.0.27,<1",
"gitpython>=3.1.50,<4",
"langsmith>=0.8.0,<1",
"langsmith>=0.7.31,<0.8",
"authlib>=1.6.11",
]

uv.lock (generated)

@@ -13,7 +13,7 @@ resolution-markers = [
]
[options]
exclude-newer = "2026-05-12T13:27:48.906744Z"
exclude-newer = "2026-05-08T16:33:02.834109Z"
exclude-newer-span = "P3D"
[manifest]
@@ -31,7 +31,7 @@ overrides = [
{ name = "gitpython", specifier = ">=3.1.50,<4" },
{ name = "langchain-core", specifier = ">=1.3.3,<2" },
{ name = "langchain-text-splitters", specifier = ">=1.1.2,<2" },
{ name = "langsmith", specifier = ">=0.8.0,<1" },
{ name = "langsmith", specifier = ">=0.7.31,<0.8" },
{ name = "onnxruntime", marker = "python_full_version < '3.11'", specifier = "<1.24" },
{ name = "openai", specifier = ">=2.30.0,<3" },
{ name = "pillow", specifier = ">=12.1.1" },
@@ -3888,7 +3888,7 @@ sdist = { url = "https://files.pythonhosted.org/packages/0e/72/a3add0e4eec4eb9e2
[[package]]
name = "langsmith"
version = "0.8.3"
version = "0.7.32"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "httpx" },
@@ -3901,9 +3901,9 @@ dependencies = [
{ name = "xxhash" },
{ name = "zstandard" },
]
sdist = { url = "https://files.pythonhosted.org/packages/de/8a/1e8ea5e8bab2a65fa95bd36229ef38e8723ec46e430e20ca2d953487a7f1/langsmith-0.8.3.tar.gz", hash = "sha256:767ff7a8d136ed42926bf99059ac631dc6883542d6e3104b32e71c7625e1fa05", size = 4460330, upload-time = "2026-05-07T19:56:56.18Z" }
sdist = { url = "https://files.pythonhosted.org/packages/2f/b4/a0b4a501bee6b8a741ce29f8c48155b132118483cddc6f9247735ddb38fa/langsmith-0.7.32.tar.gz", hash = "sha256:b59b8e106d0e4c4842e158229296086e2aa7c561e3f602acda73d3ad0062e915", size = 1184518, upload-time = "2026-04-15T23:42:41.885Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/98/a9/51e644c1f1dbc3dd7d22dfd6412eab206d538c81e024e4f287373544bdcb/langsmith-0.8.3-py3-none-any.whl", hash = "sha256:b2e40e308222fa0beb2dccee3b4b30bfee9062d7a4f20a3e3e93df3c51a08ab4", size = 399048, upload-time = "2026-05-07T19:56:53.994Z" },
{ url = "https://files.pythonhosted.org/packages/62/bc/148f98ac7dad73ac5e1b1c985290079cfeeb9ba13d760a24f25002beb2c9/langsmith-0.7.32-py3-none-any.whl", hash = "sha256:e1fde928990c4c52f47dc5132708cec674355d9101723d564183e965f383bf5f", size = 378272, upload-time = "2026-04-15T23:42:39.905Z" },
]
[[package]]