Compare commits

...

12 Commits

Author SHA1 Message Date
Greyson LaLonde
007b17ca66 Merge branch 'main' into gl/refactor/agent-adapter-typing-and-docs 2025-09-19 17:15:47 -04:00
Lorenze Jay
c062826779 chore: update dependencies and versioning for CrewAI 0.193.0 (#3542)
* chore: update dependencies and versioning for CrewAI

- Bump `crewai-tools` dependency version from `0.71.0` to `0.73.0` in `pyproject.toml`.
- Update CrewAI version from `0.186.1` to `0.193.0` in `__init__.py`.
- Adjust dependency versions in CLI templates for crew, flow, and tool to reflect the new CrewAI version.

This update ensures compatibility with the latest features and improvements in CrewAI.

* remove embedchain mock

* fix: remove last embedchain mocks

* fix: remove langchain_openai from tests

---------

Co-authored-by: Greyson LaLonde <greyson.r.lalonde@gmail.com>
2025-09-19 16:01:55 -03:00
João Moura
9491fe8334 Adding Ability for user to get deeper observability (#3541)
* feat(tracing): enhance first-time trace display and auto-open browser

* avoiding line breaking

* set tracing if user enables it

* linted

---------

Co-authored-by: lorenzejay <lorenzejaytech@gmail.com>
2025-09-18 21:47:09 -03:00
Greyson LaLonde
6f2ea013a7 docs: update RagTool references from EmbedChain to CrewAI native RAG (#3537)
* docs: update RagTool references from EmbedChain to CrewAI native RAG

* change ref to qdrant

* docs: update RAGTool to use Qdrant and add embedding_model example
2025-09-18 16:06:44 -07:00
Greyson LaLonde
39e8792ae5 fix: add l2 distance metric support for backward compatibility (#3540) 2025-09-18 18:36:33 -04:00
Greyson LaLonde
e4ff96bc05 Merge branch 'main' into gl/refactor/agent-adapter-typing-and-docs 2025-09-12 17:32:05 -04:00
Greyson LaLonde
b379e3d012 Merge branch 'main' into gl/refactor/agent-adapter-typing-and-docs 2025-09-12 15:57:27 -04:00
Greyson Lalonde
e9dc590fc6 refactor: make sanitize_tool_name a static method 2025-09-12 11:21:48 -04:00
Greyson Lalonde
aec730f04a fix: remove @abstractmethod decorator from sanitize_tool_name 2025-09-12 11:14:25 -04:00
Greyson LaLonde
80ad6961cd Merge branch 'main' into gl/refactor/agent-adapter-typing-and-docs 2025-09-12 10:42:05 -04:00
Greyson LaLonde
a6293689d1 Merge branch 'main' into gl/refactor/agent-adapter-typing-and-docs 2025-09-12 10:39:22 -04:00
Greyson LaLonde
2a1a9a3cb7 refactor: improve type hints and add docstrings to agent adapters 2025-09-11 23:07:04 -04:00
15 changed files with 3146 additions and 3508 deletions

View File

@@ -9,7 +9,7 @@ mode: "wide"
 
 ## Description
 
-The `RagTool` is designed to answer questions by leveraging the power of Retrieval-Augmented Generation (RAG) through EmbedChain.
+The `RagTool` is designed to answer questions by leveraging the power of Retrieval-Augmented Generation (RAG) through CrewAI's native RAG system.
 It provides a dynamic knowledge base that can be queried to retrieve relevant information from various data sources.
 This tool is particularly useful for applications that require access to a vast array of information and need to provide contextually relevant answers.
@@ -76,8 +76,8 @@ The `RagTool` can be used with a wide variety of data sources, including:
 The `RagTool` accepts the following parameters:
 
 - **summarize**: Optional. Whether to summarize the retrieved content. Default is `False`.
-- **adapter**: Optional. A custom adapter for the knowledge base. If not provided, an EmbedchainAdapter will be used.
-- **config**: Optional. Configuration for the underlying EmbedChain App.
+- **adapter**: Optional. A custom adapter for the knowledge base. If not provided, a CrewAIRagAdapter will be used.
+- **config**: Optional. Configuration for the underlying CrewAI RAG system.
 
 ## Adding Content
@@ -130,44 +130,23 @@ from crewai_tools import RagTool
 
 # Create a RAG tool with custom configuration
 config = {
-    "app": {
-        "name": "custom_app",
-    },
-    "llm": {
-        "provider": "openai",
+    "vectordb": {
+        "provider": "qdrant",
         "config": {
-            "model": "gpt-4",
+            "collection_name": "my-collection"
         }
     },
     "embedding_model": {
         "provider": "openai",
         "config": {
-            "model": "text-embedding-ada-002"
+            "model": "text-embedding-3-small"
         }
-    },
-    "vectordb": {
-        "provider": "elasticsearch",
-        "config": {
-            "collection_name": "my-collection",
-            "cloud_id": "deployment-name:xxxx",
-            "api_key": "your-key",
-            "verify_certs": False
-        }
-    },
-    "chunker": {
-        "chunk_size": 400,
-        "chunk_overlap": 100,
-        "length_function": "len",
-        "min_chunk_size": 0
     }
 }
 rag_tool = RagTool(config=config, summarize=True)
 ```
 
-The internal RAG tool utilizes the Embedchain adapter, allowing you to pass any configuration options that are supported by Embedchain.
-You can refer to the [Embedchain documentation](https://docs.embedchain.ai/components/introduction) for details.
-Make sure to review the configuration options available in the .yaml file.
## Conclusion
The `RagTool` provides a powerful way to create and query knowledge bases from various data sources. By leveraging Retrieval-Augmented Generation, it enables agents to access and retrieve relevant information efficiently, enhancing their ability to provide accurate and contextually appropriate responses.
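The config change above boils down to dropping the EmbedChain-era top-level sections (`app`, `llm`, `chunker`) while `vectordb` and `embedding_model` keep the same provider/config shape. A rough sketch of that mapping, using a hypothetical `migrate_rag_config` helper that is not part of crewai_tools:

```python
# Hypothetical helper illustrating the config change in this diff: the
# EmbedChain-era "app", "llm", and "chunker" sections go away, while
# "vectordb" and "embedding_model" carry over unchanged.
def migrate_rag_config(old: dict) -> dict:
    new = {}
    if "vectordb" in old:
        new["vectordb"] = old["vectordb"]
    if "embedding_model" in old:
        new["embedding_model"] = old["embedding_model"]
    return new


old_config = {
    "app": {"name": "custom_app"},
    "llm": {"provider": "openai", "config": {"model": "gpt-4"}},
    "embedding_model": {
        "provider": "openai",
        "config": {"model": "text-embedding-3-small"},
    },
    "vectordb": {"provider": "qdrant", "config": {"collection_name": "my-collection"}},
    "chunker": {"chunk_size": 400},
}
print(sorted(migrate_rag_config(old_config)))  # ['embedding_model', 'vectordb']
```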

View File

@@ -48,7 +48,7 @@ Documentation = "https://docs.crewai.com"
 Repository = "https://github.com/crewAIInc/crewAI"
 
 [project.optional-dependencies]
-tools = ["crewai-tools~=0.71.0"]
+tools = ["crewai-tools~=0.73.0"]
 embeddings = [
     "tiktoken~=0.8.0"
 ]

View File

@@ -40,7 +40,7 @@ def _suppress_pydantic_deprecation_warnings() -> None:
 
 _suppress_pydantic_deprecation_warnings()
 
-__version__ = "0.186.1"
+__version__ = "0.193.0"
 
 _telemetry_submitted = False

View File

@@ -1,10 +1,42 @@
-from abc import ABC, abstractmethod
-from typing import Any, Dict, List, Optional
+"""Base adapter for integrating external agent implementations with CrewAI."""
 
-from pydantic import PrivateAttr
+from abc import ABC, abstractmethod
+from collections.abc import Callable
+from typing import Any, TypedDict
+
+from pydantic import ConfigDict, PrivateAttr
+from typing_extensions import Unpack
 
 from crewai.agent import BaseAgent
 from crewai.knowledge.knowledge import Knowledge
 from crewai.knowledge.source.base_knowledge_source import BaseKnowledgeSource
 from crewai.security.security_config import SecurityConfig
 from crewai.tools import BaseTool
 from crewai.utilities import I18N
+
+
+class AgentKwargs(TypedDict, total=False):
+    """TypedDict for BaseAgent initialization arguments."""
+
+    role: str
+    goal: str
+    backstory: str
+    config: dict[str, Any] | None
+    cache: bool
+    verbose: bool
+    max_rpm: int | None
+    allow_delegation: bool
+    tools: list[BaseTool] | None
+    max_iter: int
+    llm: Any
+    crew: Any
+    i18n: I18N
+    max_tokens: int | None
+    knowledge: Knowledge | None
+    knowledge_sources: list[BaseKnowledgeSource] | None
+    knowledge_storage: Any
+    security_config: SecurityConfig
+    callbacks: list[Callable[..., Any]]
+
 
 class BaseAgentAdapter(BaseAgent, ABC):
@@ -16,22 +48,31 @@ class BaseAgentAdapter(BaseAgent, ABC):
     """
 
     adapted_structured_output: bool = False
-    _agent_config: Optional[Dict[str, Any]] = PrivateAttr(default=None)
+    _agent_config: dict[str, Any] | None = PrivateAttr(default=None)
 
-    model_config = {"arbitrary_types_allowed": True}
+    model_config = ConfigDict(arbitrary_types_allowed=True)
 
-    def __init__(self, agent_config: Optional[Dict[str, Any]] = None, **kwargs: Any):
+    def __init__(
+        self,
+        agent_config: dict[str, Any] | None = None,
+        **kwargs: Unpack[AgentKwargs],
+    ) -> None:
+        """Initialize the base agent adapter.
+
+        Args:
+            agent_config: Optional configuration dictionary for the adapted agent.
+            **kwargs: BaseAgent initialization arguments (role, goal, backstory, etc).
+        """
         super().__init__(adapted_agent=True, **kwargs)
         self._agent_config = agent_config
 
     @abstractmethod
-    def configure_tools(self, tools: Optional[List[BaseTool]] = None) -> None:
+    def configure_tools(self, tools: list[BaseTool] | None = None) -> None:
         """Configure and adapt tools for the specific agent implementation.
 
         Args:
             tools: Optional list of BaseTool instances to be configured
         """
-        pass
 
     def configure_structured_output(self, structured_output: Any) -> None:
         """Configure the structured output for the specific agent implementation.
@@ -39,4 +80,3 @@ class BaseAgentAdapter(BaseAgent, ABC):
         Args:
             structured_output: The structured output to be configured
         """
-        pass
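The `AgentKwargs` TypedDict combined with `Unpack` lets type checkers validate `**kwargs` keys and value types at call sites, with no runtime change. A minimal sketch of the pattern with hypothetical names (`GreetKwargs`, `greet` are illustrations, not CrewAI API):

```python
# Minimal sketch of the TypedDict + Unpack pattern from the diff above.
from typing import TypedDict

try:  # Unpack lives in typing from Python 3.11; fall back for older versions
    from typing import Unpack
except ImportError:
    from typing_extensions import Unpack


class GreetKwargs(TypedDict, total=False):
    """Typed keyword arguments; total=False makes every key optional."""

    name: str
    punctuation: str


def greet(**kwargs: Unpack[GreetKwargs]) -> str:
    # A type checker now flags unknown keys or wrong value types at call
    # sites, while runtime behavior is identical to plain **kwargs.
    name = kwargs.get("name", "world")
    punctuation = kwargs.get("punctuation", "!")
    return f"hello, {name}{punctuation}"


print(greet(name="crew"))  # hello, crew!
```

This is why `BaseAgentAdapter.__init__` can forward `**kwargs` to `super().__init__` untouched: the typing is purely static.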

View File

@@ -1,5 +1,7 @@
+"""Base adapter for tool conversions across different agent frameworks."""
+
 from abc import ABC, abstractmethod
-from typing import Any, List, Optional
+from typing import Any
 
 from crewai.tools.base_tool import BaseTool
@@ -12,26 +14,39 @@ class BaseToolAdapter(ABC):
     different frameworks and platforms.
     """
 
-    original_tools: List[BaseTool]
-    converted_tools: List[Any]
-
-    def __init__(self, tools: Optional[List[BaseTool]] = None):
-        self.original_tools = tools or []
-        self.converted_tools = []
+    def __init__(self, tools: list[BaseTool] | None = None) -> None:
+        """Initialize the tool adapter.
+
+        Args:
+            tools: Optional list of BaseTool instances to be adapted.
+        """
+        self.original_tools: list[BaseTool] = tools or []
+        self.converted_tools: list[Any] = []
 
     @abstractmethod
-    def configure_tools(self, tools: List[BaseTool]) -> None:
+    def configure_tools(self, tools: list[BaseTool]) -> None:
         """Configure and convert tools for the specific implementation.
 
         Args:
-            tools: List of BaseTool instances to be configured and converted
+            tools: List of BaseTool instances to be configured and converted.
         """
-        pass
 
-    def tools(self) -> List[Any]:
-        """Return all converted tools."""
+    def tools(self) -> list[Any]:
+        """Return all converted tools.
+
+        Returns:
+            List of tools converted to the target framework format.
+        """
         return self.converted_tools
 
-    def sanitize_tool_name(self, tool_name: str) -> str:
-        """Sanitize tool name for API compatibility."""
+    @staticmethod
+    def sanitize_tool_name(tool_name: str) -> str:
+        """Sanitize tool name for API compatibility.
+
+        Args:
+            tool_name: Original tool name that may contain spaces or special characters.
+
+        Returns:
+            Sanitized tool name with underscores replacing spaces.
+        """
        return tool_name.replace(" ", "_")
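Making `sanitize_tool_name` a static method (the subject of commits `e9dc590fc6` and `aec730f04a`) means it needs no adapter instance and can be called directly on the class. A standalone sketch of the behavior, using a hypothetical `ToolNameDemo` stand-in for `BaseToolAdapter`:

```python
# Standalone sketch of the static-method behavior shown in the diff.
class ToolNameDemo:  # hypothetical stand-in for BaseToolAdapter
    @staticmethod
    def sanitize_tool_name(tool_name: str) -> str:
        """Replace spaces with underscores for API compatibility."""
        return tool_name.replace(" ", "_")


# No instance required: the method carries no self/state.
print(ToolNameDemo.sanitize_tool_name("web search tool"))  # web_search_tool
```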

View File

@@ -5,7 +5,7 @@ description = "{{name}} using crewAI"
 authors = [{ name = "Your Name", email = "you@example.com" }]
 requires-python = ">=3.10,<3.14"
 dependencies = [
-    "crewai[tools]>=0.186.1,<1.0.0"
+    "crewai[tools]>=0.193.0,<1.0.0"
 ]
 
 [project.scripts]

View File

@@ -5,7 +5,7 @@ description = "{{name}} using crewAI"
 authors = [{ name = "Your Name", email = "you@example.com" }]
 requires-python = ">=3.10,<3.14"
 dependencies = [
-    "crewai[tools]>=0.186.1,<1.0.0",
+    "crewai[tools]>=0.193.0,<1.0.0",
 ]
 
 [project.scripts]

View File

@@ -5,7 +5,7 @@ description = "Power up your crews with {{folder_name}}"
 readme = "README.md"
 requires-python = ">=3.10,<3.14"
 dependencies = [
-    "crewai[tools]>=0.186.1"
+    "crewai[tools]>=0.193.0"
 ]
 
 [tool.crewai]

View File

@@ -1,5 +1,7 @@
 import logging
 import uuid
+import webbrowser
+from pathlib import Path
 
 from rich.console import Console
 from rich.panel import Panel
@@ -14,6 +16,47 @@ from crewai.events.listeners.tracing.utils import (
 logger = logging.getLogger(__name__)
 
 
+def _update_or_create_env_file():
+    """Update or create .env file with CREWAI_TRACING_ENABLED=true."""
+    env_path = Path(".env")
+    env_content = ""
+    variable_name = "CREWAI_TRACING_ENABLED"
+    variable_value = "true"
+
+    # Read existing content if file exists
+    if env_path.exists():
+        with open(env_path, "r") as f:
+            env_content = f.read()
+
+    # Check if CREWAI_TRACING_ENABLED is already set
+    lines = env_content.splitlines()
+    variable_exists = False
+    updated_lines = []
+
+    for line in lines:
+        if line.strip().startswith(f"{variable_name}="):
+            # Update existing variable
+            updated_lines.append(f"{variable_name}={variable_value}")
+            variable_exists = True
+        else:
+            updated_lines.append(line)
+
+    # Add variable if it doesn't exist
+    if not variable_exists:
+        if updated_lines and not updated_lines[-1].strip():
+            # If last line is empty, replace it
+            updated_lines[-1] = f"{variable_name}={variable_value}"
+        else:
+            # Add new line and then the variable
+            updated_lines.append(f"{variable_name}={variable_value}")
+
+    # Write updated content
+    with open(env_path, "w") as f:
+        f.write("\n".join(updated_lines))
+        if updated_lines:  # Add final newline if there's content
+            f.write("\n")
+
+
 class FirstTimeTraceHandler:
     """Handles the first-time user trace collection and display flow."""
@@ -48,6 +91,12 @@ class FirstTimeTraceHandler:
         if user_wants_traces:
             self._initialize_backend_and_send_events()
 
+            # Enable tracing for future runs by updating .env file
+            try:
+                _update_or_create_env_file()
+            except Exception:  # noqa: S110
+                pass
+
             if self.ephemeral_url:
                 self._display_ephemeral_trace_link()
@@ -108,9 +157,14 @@ class FirstTimeTraceHandler:
             self._gracefully_fail(f"Backend initialization failed: {e}")
 
     def _display_ephemeral_trace_link(self):
-        """Display the ephemeral trace link to the user."""
+        """Display the ephemeral trace link to the user and automatically open browser."""
         console = Console()
 
+        try:
+            webbrowser.open(self.ephemeral_url)
+        except Exception:  # noqa: S110
+            pass
+
         panel_content = f"""
 🎉 Your First CrewAI Execution Trace is Ready!
@@ -123,7 +177,8 @@ This trace shows:
 • Tool usage and results
 • LLM calls and responses
 
-To use traces add tracing=True to your Crew(tracing=True) / Flow(tracing=True)
+✅ Tracing has been enabled for future runs! (CREWAI_TRACING_ENABLED=true added to .env)
+You can also add tracing=True to your Crew(tracing=True) / Flow(tracing=True) for more control.
 
 📝 Note: This link will expire in 24 hours.
 """.strip()
@@ -158,8 +213,8 @@ Unfortunately, we couldn't upload them to the server right now, but here's what
 • Execution duration: {self.batch_manager.calculate_duration("execution")}ms
 • Batch ID: {self.batch_manager.trace_batch_id}
 
+Tracing has been enabled for future runs! (CREWAI_TRACING_ENABLED=true added to .env)
 The traces include agent decisions, task execution, and tool usage.
-Try running with CREWAI_TRACING_ENABLED=true next time for persistent traces.
 """.strip()
 
         panel = Panel(
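The `.env` update logic above is easiest to reason about as a pure function over text: update the variable's line if it exists, append it otherwise, and the operation is idempotent. A hedged sketch (hypothetical `upsert_env_var`; the real code in the diff reads and writes the `.env` file directly):

```python
# Sketch of the .env upsert logic from the diff, as a testable pure function.
def upsert_env_var(content: str, name: str, value: str) -> str:
    """Set name=value in dotenv-style content, updating an existing line if present."""
    updated = []
    found = False
    for line in content.splitlines():
        if line.strip().startswith(f"{name}="):
            updated.append(f"{name}={value}")  # overwrite existing assignment
            found = True
        else:
            updated.append(line)
    if not found:
        updated.append(f"{name}={value}")  # append when the variable is absent
    return "\n".join(updated) + "\n"


once = upsert_env_var("OPENAI_API_KEY=sk-...\n", "CREWAI_TRACING_ENABLED", "true")
twice = upsert_env_var(once, "CREWAI_TRACING_ENABLED", "true")
print(once == twice)  # True: applying it again changes nothing
```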

View File

@@ -138,13 +138,6 @@ class TraceBatchManager:
                 if not use_ephemeral
                 else response_data["ephemeral_trace_id"]
             )
-
-            console = Console()
-            panel = Panel(
-                f"✅ Trace batch initialized with session ID: {self.trace_batch_id}",
-                title="Trace Batch Initialization",
-                border_style="green",
-            )
-            console.print(panel)
         else:
             logger.warning(
                 f"Trace batch initialization returned status {response.status_code}. Continuing without tracing."
@@ -258,12 +251,23 @@ class TraceBatchManager:
                 if self.is_current_batch_ephemeral:
                     self.ephemeral_trace_url = return_link
 
+                # Create a properly formatted message with URL on its own line
+                message_parts = [
+                    f"✅ Trace batch finalized with session ID: {self.trace_batch_id}",
+                    "",
+                    f"🔗 View here: {return_link}",
+                ]
+                if access_code:
+                    message_parts.append(f"🔑 Access Code: {access_code}")
+
                 panel = Panel(
-                    f"✅ Trace batch finalized with session ID: {self.trace_batch_id}. View here: {return_link} {f', Access Code: {access_code}' if access_code else ''}",
+                    "\n".join(message_parts),
                     title="Trace Batch Finalization",
                     border_style="green",
                 )
-                console.print(panel)
+                if not should_auto_collect_first_time_traces():
+                    console.print(panel)
             else:
                 logger.error(

View File

@@ -133,6 +133,9 @@ def _convert_distance_to_score(
     if distance_metric == "cosine":
         score = 1.0 - 0.5 * distance
         return max(0.0, min(1.0, score))
+    if distance_metric == "l2":
+        score = 1.0 / (1.0 + distance)
+        return max(0.0, min(1.0, score))
     raise ValueError(f"Unsupported distance metric: {distance_metric}")
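This hunk (commit `39e8792ae5`) maps both metrics into a common [0, 1] relevance score: cosine distance lies in [0, 2], so `1 - d/2` spans the full range, while L2 distance is unbounded, so `1 / (1 + d)` squashes it, sending 0 to a perfect 1.0 and large distances toward 0.0. A self-contained copy of the conversion:

```python
# Self-contained copy of the distance-to-score conversion from the diff.
def convert_distance_to_score(distance: float, distance_metric: str) -> float:
    if distance_metric == "cosine":
        score = 1.0 - 0.5 * distance  # cosine distance in [0, 2] -> [0, 1]
        return max(0.0, min(1.0, score))
    if distance_metric == "l2":
        score = 1.0 / (1.0 + distance)  # L2 distance in [0, inf) -> (0, 1]
        return max(0.0, min(1.0, score))
    raise ValueError(f"Unsupported distance metric: {distance_metric}")


print(convert_distance_to_score(0.0, "l2"))      # 1.0 (identical vectors)
print(convert_distance_to_score(3.0, "l2"))      # 0.25
print(convert_distance_to_score(2.0, "cosine"))  # 0.0 (opposite vectors)
```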

View File

@@ -137,35 +137,6 @@ def test_custom_llm():
     assert agent.llm.model == "gpt-4"
 
 
-def test_custom_llm_with_langchain():
-    from langchain_openai import ChatOpenAI
-
-    agent = Agent(
-        role="test role",
-        goal="test goal",
-        backstory="test backstory",
-        llm=ChatOpenAI(temperature=0, model="gpt-4"),
-    )
-
-    assert agent.llm.model == "gpt-4"
-
-
-def test_custom_llm_temperature_preservation():
-    from langchain_openai import ChatOpenAI
-
-    langchain_llm = ChatOpenAI(temperature=0.7, model="gpt-4")
-    agent = Agent(
-        role="temperature test role",
-        goal="temperature test goal",
-        backstory="temperature test backstory",
-        llm=langchain_llm,
-    )
-
-    assert isinstance(agent.llm, LLM)
-    assert agent.llm.model == "gpt-4"
-    assert agent.llm.temperature == 0.7
-
-
 @pytest.mark.vcr(filter_headers=["authorization"])
 def test_agent_execution():
     agent = Agent(
@@ -2361,13 +2332,11 @@ def mock_get_auth_token():
 @patch("crewai.cli.plus_api.PlusAPI.get_agent")
 def test_agent_from_repository(mock_get_agent, mock_get_auth_token):
-    # Mock embedchain initialization to prevent race conditions in parallel CI execution
-    with patch("embedchain.client.Client.setup"):
-        from crewai_tools import (
-            EnterpriseActionTool,
-            FileReadTool,
-            SerperDevTool,
-        )
+    from crewai_tools import (
+        EnterpriseActionTool,
+        FileReadTool,
+        SerperDevTool,
+    )
 
     mock_get_response = MagicMock()
     mock_get_response.status_code = 200
@@ -2423,9 +2392,7 @@ def test_agent_from_repository(mock_get_agent, mock_get_auth_token):
 @patch("crewai.cli.plus_api.PlusAPI.get_agent")
 def test_agent_from_repository_override_attributes(mock_get_agent, mock_get_auth_token):
-    # Mock embedchain initialization to prevent race conditions in parallel CI execution
-    with patch("embedchain.client.Client.setup"):
-        from crewai_tools import SerperDevTool
+    from crewai_tools import SerperDevTool
 
     mock_get_response = MagicMock()
     mock_get_response.status_code = 200

View File

@@ -3818,10 +3818,7 @@ def test_task_tools_preserve_code_execution_tools():
     """
     Test that task tools don't override code execution tools when allow_code_execution=True
     """
-    # Mock embedchain initialization to prevent race conditions in parallel CI execution
-    with patch("embedchain.client.Client.setup"):
-        from crewai_tools import CodeInterpreterTool
+    from crewai_tools import CodeInterpreterTool
 
     from pydantic import BaseModel, Field
 
     from crewai.tools import BaseTool

View File

@@ -112,9 +112,9 @@ def test_agent_memoization():
     first_call_result = crew.simple_agent()
     second_call_result = crew.simple_agent()
 
-    assert (
-        first_call_result is second_call_result
-    ), "Agent memoization is not working as expected"
+    assert first_call_result is second_call_result, (
+        "Agent memoization is not working as expected"
+    )
 
 
 def test_task_memoization():
@@ -122,9 +122,9 @@ def test_task_memoization():
     first_call_result = crew.simple_task()
     second_call_result = crew.simple_task()
 
-    assert (
-        first_call_result is second_call_result
-    ), "Task memoization is not working as expected"
+    assert first_call_result is second_call_result, (
+        "Task memoization is not working as expected"
+    )
 
 
 def test_crew_memoization():
@@ -132,35 +132,35 @@ def test_crew_memoization():
     first_call_result = crew.crew()
     second_call_result = crew.crew()
 
-    assert (
-        first_call_result is second_call_result
-    ), "Crew references should point to the same object"
+    assert first_call_result is second_call_result, (
+        "Crew references should point to the same object"
+    )
 
 
 def test_task_name():
     simple_task = SimpleCrew().simple_task()
-    assert (
-        simple_task.name == "simple_task"
-    ), "Task name is not inferred from function name as expected"
+    assert simple_task.name == "simple_task", (
+        "Task name is not inferred from function name as expected"
+    )
 
     custom_named_task = SimpleCrew().custom_named_task()
-    assert (
-        custom_named_task.name == "Custom"
-    ), "Custom task name is not being set as expected"
+    assert custom_named_task.name == "Custom", (
+        "Custom task name is not being set as expected"
+    )
 
 
 def test_agent_function_calling_llm():
     crew = InternalCrew()
     llm = crew.local_llm()
     obj_llm_agent = crew.researcher()
-    assert (
-        obj_llm_agent.function_calling_llm is llm
-    ), "agent's function_calling_llm is incorrect"
+    assert obj_llm_agent.function_calling_llm is llm, (
+        "agent's function_calling_llm is incorrect"
+    )
 
     str_llm_agent = crew.reporting_analyst()
-    assert (
-        str_llm_agent.function_calling_llm.model == "online_llm"
-    ), "agent's function_calling_llm is incorrect"
+    assert str_llm_agent.function_calling_llm.model == "online_llm", (
+        "agent's function_calling_llm is incorrect"
+    )
 
 
 def test_task_guardrail():
@@ -186,9 +186,9 @@ def test_after_kickoff_modification():
     # Assuming the crew execution returns a dict
     result = crew.crew().kickoff({"topic": "LLMs"})
 
-    assert (
-        "post processed" in result.raw
-    ), "After kickoff function did not modify outputs"
+    assert "post processed" in result.raw, (
+        "After kickoff function did not modify outputs"
+    )
 
 
 @pytest.mark.vcr(filter_headers=["authorization"])
@@ -274,10 +274,8 @@ def another_simple_tool():
 def test_internal_crew_with_mcp():
-    # Mock embedchain initialization to prevent race conditions in parallel CI execution
-    with patch("embedchain.client.Client.setup"):
-        from crewai_tools import MCPServerAdapter
-        from crewai_tools.adapters.mcp_adapter import ToolCollection
+    from crewai_tools import MCPServerAdapter
+    from crewai_tools.adapters.mcp_adapter import ToolCollection
 
     mock = Mock(spec=MCPServerAdapter)
     mock.tools = ToolCollection([simple_tool, another_simple_tool])
@@ -287,6 +285,5 @@ def test_internal_crew_with_mcp():
     assert crew.researcher().tools == [simple_tool]
 
     adapter_mock.assert_called_once_with(
-        {"host": "localhost", "port": 8000},
-        connect_timeout=120
+        {"host": "localhost", "port": 8000}, connect_timeout=120
     )

6317
uv.lock generated

File diff suppressed because it is too large.