Compare commits

...

11 Commits

Author SHA1 Message Date
Devin AI
1784636e93 Add explicit type annotations for agents_config and tasks_config
Fixes #3801

This commit addresses the issue where agents_config and tasks_config
in CrewBase-decorated classes lacked explicit type annotations, making
the code un-Pythonic and reducing IDE/type checker support.

Changes:
- Added AgentsConfigDict and TasksConfigDict type aliases
- Updated load_configurations to use cast() for proper typing
- Updated CrewInstance Protocol to use typed config dictionaries
- Exported type aliases from crewai.project for user convenience
- Updated TypedDicts to support both raw YAML and processed values
- Added comprehensive tests for config loading and type annotations

The type aliases allow users to import and use proper types:
  from crewai.project import AgentConfig, AgentsConfigDict

This provides better IDE autocomplete and static type checking when
accessing self.agents_config and self.tasks_config in CrewBase classes.

Co-Authored-By: João <joao@crewai.com>
2025-10-27 10:49:01 +00:00
Lorenze Jay
494ed7e671 liteagent supports apps and mcps (#3794)
* liteagent supports apps and mcps

* generated cassettes for these
2025-10-24 18:42:08 -07:00
Lorenze Jay
a83c57a2f2 feat: bump versions to 1.2.0 (#3787)
* feat: bump versions to 1.2.0

* also include projects
2025-10-23 18:04:34 -07:00
Lorenze Jay
08e15ab267 fix: update default LLM model and improve error logging in LLM utilities (#3785)
* fix: update default LLM model and improve error logging in LLM utilities

* Updated the default LLM model from "gpt-4o-mini" to "gpt-4.1-mini" for better performance.
* Enhanced error logging in the LLM utilities to use logger.error instead of logger.debug, ensuring that errors are properly reported and raised.
* Added tests to verify behavior when OpenAI API key is missing and when Anthropic dependency is not available, improving robustness and error handling in LLM creation.

* fix: update test for default LLM model usage

* Refactored the test_create_llm_with_none_uses_default_model to use the imported DEFAULT_LLM_MODEL constant instead of a hardcoded string.
* Ensured that the test correctly asserts the model used is the current default, improving maintainability and consistency across tests.

* change default model to gpt-4.1-mini

* change default model to use default
2025-10-23 17:54:11 -07:00
Greyson LaLonde
9728388ea7 fix: change flow viz del dir; method inspection
* chore: update flow viz deletion dir, add typing
* tests: add flow viz tests to ensure lib dir is not deleted
2025-10-22 19:32:38 -04:00
Greyson LaLonde
4371cf5690 chore: remove aisuite
Little usage + blocking some features
2025-10-21 23:18:06 -04:00
Lorenze Jay
d28daa26cd feat: bump versions to 1.1.0 (#3770)
* feat: bump versions to 1.1.0

* chore: bump template versions

---------

Co-authored-by: Greyson Lalonde <greyson.r.lalonde@gmail.com>
2025-10-21 15:52:44 -07:00
Lorenze Jay
a850813f2b feat: enhance InternalInstructor to support multiple LLM providers (#3767)
* feat: enhance InternalInstructor to support multiple LLM providers

- Updated InternalInstructor to conditionally create an instructor client based on the LLM provider.
- Introduced a new method _create_instructor_client to handle client creation using the modern from_provider pattern.
- Added functionality to extract the provider from the LLM model name.
- Implemented tests for InternalInstructor with various LLM providers including OpenAI, Anthropic, Gemini, and Azure, ensuring robust integration and error handling.

This update improves flexibility and extensibility for different LLM integrations.
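The provider-extraction step mentioned above can be sketched like this (the helper name and the `"openai"` default are assumptions for illustration; the extracted prefix would then feed something like `instructor.from_provider(f"{provider}/{model}")`):

```python
# Split the provider prefix off a "provider/model" string,
# defaulting to "openai" when no prefix is present.
def extract_provider(model: str) -> str:
    if "/" in model:
        return model.split("/", 1)[0]
    return "openai"

print(extract_provider("anthropic/claude-3-5-sonnet"))  # -> anthropic
print(extract_provider("gpt-4.1-mini"))  # -> openai
```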

* fix test
2025-10-21 15:24:59 -07:00
Cameron Warren
5944a39629 fix: correct broken integration documentation links
Fix navigation paths for two integration tool cards that were redirecting to the
introduction page instead of their intended documentation pages.

Fixes #3516

Co-authored-by: Cwarre33 <cwarre33@charlotte.edu>
Co-authored-by: Greyson LaLonde <greyson.r.lalonde@gmail.com>
2025-10-21 18:12:08 -04:00
Greyson LaLonde
c594859ed0 feat: mypy plugin base
* feat: base mypy plugin with CrewBase

* fix: add crew method to protocol
2025-10-21 17:36:08 -04:00
Daniel Barreto
2ee27efca7 feat: improvements on QdrantVectorSearchTool
* Implement improvements on QdrantVectorSearchTool

- Allow search filters to be set at the constructor level
- Fix issue that prevented multiple records from being returned

* Implement improvements on QdrantVectorSearchTool

- Allow search filters to be set at the constructor level
- Fix issue that prevented multiple records from being returned

---------

Co-authored-by: Greyson LaLonde <greyson.r.lalonde@gmail.com>
2025-10-21 16:50:08 -04:00
33 changed files with 5511 additions and 4458 deletions

View File

@@ -11,7 +11,7 @@ mode: "wide"
<Card
title="Bedrock Invoke Agent Tool"
icon="cloud"
href="/en/tools/tool-integrations/bedrockinvokeagenttool"
href="/en/tools/integration/bedrockinvokeagenttool"
color="#0891B2"
>
Invoke Amazon Bedrock Agents from CrewAI to orchestrate actions across AWS services.
@@ -20,7 +20,7 @@ mode: "wide"
<Card
title="CrewAI Automation Tool"
icon="bolt"
href="/en/tools/tool-integrations/crewaiautomationtool"
href="/en/tools/integration/crewaiautomationtool"
color="#7C3AED"
>
Automate deployment and operations by integrating CrewAI with external platforms and workflows.

View File

@@ -12,7 +12,7 @@ dependencies = [
"pytube>=15.0.0",
"requests>=2.32.5",
"docker>=7.1.0",
"crewai==1.0.0",
"crewai==1.2.0",
"lancedb>=0.5.4",
"tiktoken>=0.8.0",
"beautifulsoup4>=4.13.4",

View File

@@ -287,4 +287,4 @@ __all__ = [
"ZapierActionTools",
]
__version__ = "1.0.0"
__version__ = "1.2.0"

View File

@@ -1,80 +1,42 @@
from collections.abc import Callable
from __future__ import annotations
import importlib
import json
import os
from collections.abc import Callable
from typing import Any
try:
from qdrant_client import QdrantClient
from qdrant_client.http.models import FieldCondition, Filter, MatchValue
QDRANT_AVAILABLE = True
except ImportError:
QDRANT_AVAILABLE = False
QdrantClient = Any # type: ignore[assignment,misc] # type placeholder
Filter = Any # type: ignore[assignment,misc]
FieldCondition = Any # type: ignore[assignment,misc]
MatchValue = Any # type: ignore[assignment,misc]
from crewai.tools import BaseTool, EnvVar
from pydantic import BaseModel, ConfigDict, Field
from pydantic import BaseModel, ConfigDict, Field, model_validator
from pydantic.types import ImportString
class QdrantToolSchema(BaseModel):
"""Input for QdrantTool."""
query: str = Field(..., description="Query to search in Qdrant DB.")
filter_by: str | None = None
filter_value: str | None = None
query: str = Field(
...,
description="The query used to retrieve relevant information from the Qdrant database. Pass only the query, not the question.",
)
filter_by: str | None = Field(
default=None,
description="Filter by properties. Pass only the properties, not the question.",
)
filter_value: str | None = Field(
default=None,
description="Filter by value. Pass only the value, not the question.",
)
class QdrantConfig(BaseModel):
"""All Qdrant connection and search settings."""
qdrant_url: str
qdrant_api_key: str | None = None
collection_name: str
limit: int = 3
score_threshold: float = 0.35
filter_conditions: list[tuple[str, Any]] = Field(default_factory=list)
class QdrantVectorSearchTool(BaseTool):
"""Tool to query and filter results from a Qdrant database.
This tool enables vector similarity search on internal documents stored in Qdrant,
with optional filtering capabilities.
Attributes:
client: Configured QdrantClient instance
collection_name: Name of the Qdrant collection to search
limit: Maximum number of results to return
score_threshold: Minimum similarity score threshold
qdrant_url: Qdrant server URL
qdrant_api_key: Authentication key for Qdrant
"""
"""Vector search tool for Qdrant."""
model_config = ConfigDict(arbitrary_types_allowed=True)
client: QdrantClient = None # type: ignore[assignment]
# --- Metadata ---
name: str = "QdrantVectorSearchTool"
description: str = "A tool to search the Qdrant database for relevant information on internal documents."
description: str = "Search Qdrant vector DB for relevant documents."
args_schema: type[BaseModel] = QdrantToolSchema
query: str | None = None
filter_by: str | None = None
filter_value: str | None = None
collection_name: str | None = None
limit: int | None = Field(default=3)
score_threshold: float = Field(default=0.35)
qdrant_url: str = Field(
...,
description="The URL of the Qdrant server",
)
qdrant_api_key: str | None = Field(
default=None,
description="The API key for the Qdrant server",
)
custom_embedding_fn: Callable | None = Field(
default=None,
description="A custom embedding function to use for vectorization. If not provided, the default model will be used.",
)
package_dependencies: list[str] = Field(default_factory=lambda: ["qdrant-client"])
env_vars: list[EnvVar] = Field(
default_factory=lambda: [
@@ -83,107 +45,81 @@ class QdrantVectorSearchTool(BaseTool):
)
]
)
qdrant_config: QdrantConfig
qdrant_package: ImportString[Any] = Field(
default="qdrant_client",
description="Base package path for Qdrant. Will dynamically import client and models.",
)
custom_embedding_fn: ImportString[Callable[[str], list[float]]] | None = Field(
default=None,
description="Optional embedding function or import path.",
)
client: Any | None = None
def __init__(self, **kwargs):
super().__init__(**kwargs)
if QDRANT_AVAILABLE:
self.client = QdrantClient(
url=self.qdrant_url,
api_key=self.qdrant_api_key if self.qdrant_api_key else None,
@model_validator(mode="after")
def _setup_qdrant(self) -> QdrantVectorSearchTool:
# Import the qdrant_package if it's a string
if isinstance(self.qdrant_package, str):
self.qdrant_package = importlib.import_module(self.qdrant_package)
if not self.client:
self.client = self.qdrant_package.QdrantClient(
url=self.qdrant_config.qdrant_url,
api_key=self.qdrant_config.qdrant_api_key or None,
)
else:
import click
if click.confirm(
"The 'qdrant-client' package is required to use the QdrantVectorSearchTool. "
"Would you like to install it?"
):
import subprocess
subprocess.run(["uv", "add", "qdrant-client"], check=True) # noqa: S607
else:
raise ImportError(
"The 'qdrant-client' package is required to use the QdrantVectorSearchTool. "
"Please install it with: uv add qdrant-client"
)
return self
def _run(
self,
query: str,
filter_by: str | None = None,
filter_value: str | None = None,
filter_value: Any | None = None,
) -> str:
"""Execute vector similarity search on Qdrant.
"""Perform vector similarity search."""
filter_ = self.qdrant_package.http.models.Filter
field_condition = self.qdrant_package.http.models.FieldCondition
match_value = self.qdrant_package.http.models.MatchValue
conditions = self.qdrant_config.filter_conditions.copy()
if filter_by and filter_value is not None:
conditions.append((filter_by, filter_value))
Args:
query: Search query to vectorize and match
filter_by: Optional metadata field to filter on
filter_value: Optional value to filter by
Returns:
JSON string containing search results with metadata and scores
Raises:
ImportError: If qdrant-client is not installed
ValueError: If Qdrant credentials are missing
"""
if not self.qdrant_url:
raise ValueError("QDRANT_URL is not set")
# Create filter if filter parameters are provided
search_filter = None
if filter_by and filter_value:
search_filter = Filter(
search_filter = (
filter_(
must=[
FieldCondition(key=filter_by, match=MatchValue(value=filter_value))
field_condition(key=k, match=match_value(value=v))
for k, v in conditions
]
)
# Search in Qdrant using the built-in query method
query_vector = (
self._vectorize_query(query, embedding_model="text-embedding-3-large")
if not self.custom_embedding_fn
else self.custom_embedding_fn(query)
if conditions
else None
)
search_results = self.client.query_points(
collection_name=self.collection_name, # type: ignore[arg-type]
query_vector = (
self.custom_embedding_fn(query)
if self.custom_embedding_fn
else (
lambda: __import__("openai")
.Client(api_key=os.getenv("OPENAI_API_KEY"))
.embeddings.create(input=[query], model="text-embedding-3-large")
.data[0]
.embedding
)()
)
results = self.client.query_points(
collection_name=self.qdrant_config.collection_name,
query=query_vector,
query_filter=search_filter,
limit=self.limit, # type: ignore[arg-type]
score_threshold=self.score_threshold,
limit=self.qdrant_config.limit,
score_threshold=self.qdrant_config.score_threshold,
)
# Format results similar to storage implementation
results = []
# Extract the list of ScoredPoint objects from the tuple
for point in search_results:
result = {
"metadata": point[1][0].payload.get("metadata", {}),
"context": point[1][0].payload.get("text", ""),
"distance": point[1][0].score,
}
results.append(result)
return json.dumps(results, indent=2)
def _vectorize_query(self, query: str, embedding_model: str) -> list[float]:
"""Default vectorization function with openai.
Args:
query (str): The query to vectorize
embedding_model (str): The embedding model to use
Returns:
list[float]: The vectorized query
"""
import openai
client = openai.Client(api_key=os.getenv("OPENAI_API_KEY"))
return (
client.embeddings.create(
input=[query],
model=embedding_model,
)
.data[0]
.embedding
return json.dumps(
[
{
"distance": p.score,
"metadata": p.payload.get("metadata", {}) if p.payload else {},
"context": p.payload.get("text", "") if p.payload else {},
}
for p in results.points
],
indent=2,
)

View File

@@ -49,7 +49,7 @@ Repository = "https://github.com/crewAIInc/crewAI"
[project.optional-dependencies]
tools = [
"crewai-tools==1.0.0",
"crewai-tools==1.2.0",
]
embeddings = [
"tiktoken~=0.8.0"
@@ -66,11 +66,6 @@ openpyxl = [
mem0 = ["mem0ai>=0.1.94"]
docling = [
"docling>=2.12.0",
]
aisuite = [
"aisuite>=0.1.11",
]
qdrant = [
"qdrant-client[fastembed]>=1.14.3",
@@ -137,13 +132,3 @@ build-backend = "hatchling.build"
[tool.hatch.version]
path = "src/crewai/__init__.py"
# Declare mutually exclusive extras due to conflicting httpx requirements
# a2a requires httpx>=0.28.1, while aisuite requires httpx>=0.27.0,<0.28.0
# [tool.uv]
# conflicts = [
# [
# { extra = "a2a" },
# { extra = "aisuite" },
# ],
# ]

View File

@@ -40,7 +40,7 @@ def _suppress_pydantic_deprecation_warnings() -> None:
_suppress_pydantic_deprecation_warnings()
__version__ = "1.0.0"
__version__ = "1.2.0"
_telemetry_submitted = False

View File

@@ -1186,6 +1186,15 @@ class Agent(BaseAgent):
Returns:
LiteAgentOutput: The result of the agent execution.
"""
if self.apps:
platform_tools = self.get_platform_tools(self.apps)
if platform_tools:
self.tools.extend(platform_tools)
if self.mcps:
mcps = self.get_mcp_tools(self.mcps)
if mcps:
self.tools.extend(mcps)
lite_agent = LiteAgent(
id=self.id,
role=self.role,

View File

@@ -322,7 +322,7 @@ MODELS = {
],
}
DEFAULT_LLM_MODEL = "gpt-4o-mini"
DEFAULT_LLM_MODEL = "gpt-4.1-mini"
JSON_URL = "https://raw.githubusercontent.com/BerriAI/litellm/main/model_prices_and_context_window.json"

View File

@@ -5,7 +5,7 @@ description = "{{name}} using crewAI"
authors = [{ name = "Your Name", email = "you@example.com" }]
requires-python = ">=3.10,<3.14"
dependencies = [
"crewai[tools]==1.0.0"
"crewai[tools]==1.2.0"
]
[project.scripts]

View File

@@ -5,7 +5,7 @@ description = "{{name}} using crewAI"
authors = [{ name = "Your Name", email = "you@example.com" }]
requires-python = ">=3.10,<3.14"
dependencies = [
"crewai[tools]==1.0.0"
"crewai[tools]==1.2.0"
]
[project.scripts]

View File

@@ -2,7 +2,7 @@
from __future__ import annotations
import os
from typing import TYPE_CHECKING
from typing import TYPE_CHECKING, Any
from pyvis.network import Network # type: ignore[import-untyped]
@@ -29,7 +29,7 @@ _printer = Printer()
class FlowPlot:
"""Handles the creation and rendering of flow visualization diagrams."""
def __init__(self, flow: Flow) -> None:
def __init__(self, flow: Flow[Any]) -> None:
"""
Initialize FlowPlot with a flow object.
@@ -136,7 +136,7 @@ class FlowPlot:
f"Unexpected error during flow visualization: {e!s}"
) from e
finally:
self._cleanup_pyvis_lib()
self._cleanup_pyvis_lib(filename)
def _generate_final_html(self, network_html: str) -> str:
"""
@@ -186,26 +186,33 @@ class FlowPlot:
raise IOError(f"Failed to generate visualization HTML: {e!s}") from e
@staticmethod
def _cleanup_pyvis_lib() -> None:
def _cleanup_pyvis_lib(filename: str) -> None:
"""
Clean up the generated lib folder from pyvis.
This method safely removes the temporary lib directory created by pyvis
during network visualization generation.
during network visualization generation. The lib folder is created in the
same directory as the output HTML file.
Parameters
----------
filename : str
The output filename (without .html extension) used for the visualization.
"""
try:
lib_folder = safe_path_join("lib", root=os.getcwd())
if os.path.exists(lib_folder) and os.path.isdir(lib_folder):
import shutil
import shutil
shutil.rmtree(lib_folder)
except ValueError as e:
_printer.print(f"Error validating lib folder path: {e}", color="red")
output_dir = os.path.dirname(os.path.abspath(filename)) or os.getcwd()
lib_folder = os.path.join(output_dir, "lib")
if os.path.exists(lib_folder) and os.path.isdir(lib_folder):
vis_js = os.path.join(lib_folder, "vis-network.min.js")
if os.path.exists(vis_js):
shutil.rmtree(lib_folder)
except Exception as e:
_printer.print(f"Error cleaning up lib folder: {e}", color="red")
def plot_flow(flow: Flow, filename: str = "flow_plot") -> None:
def plot_flow(flow: Flow[Any], filename: str = "flow_plot") -> None:
"""
Convenience function to create and save a flow visualization.
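The safer cleanup introduced above can be reduced to this sketch: remove the pyvis `lib` folder next to the output HTML, but only when it actually contains the pyvis asset, so an unrelated `lib` directory is left alone.

```python
import os
import shutil

def cleanup_pyvis_lib(filename: str) -> None:
    """Remove pyvis's generated lib/ folder beside the output file,
    guarding on the vis-network asset to avoid deleting user dirs."""
    output_dir = os.path.dirname(os.path.abspath(filename)) or os.getcwd()
    lib_folder = os.path.join(output_dir, "lib")
    if os.path.isdir(lib_folder):
        if os.path.exists(os.path.join(lib_folder, "vis-network.min.js")):
            shutil.rmtree(lib_folder)
```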

View File

@@ -1,5 +1,8 @@
"""HTML template processing and generation for flow visualization diagrams."""
import base64
import re
from typing import Any
from crewai.flow.path_utils import validate_path_exists
@@ -7,7 +10,7 @@ from crewai.flow.path_utils import validate_path_exists
class HTMLTemplateHandler:
"""Handles HTML template processing and generation for flow visualization diagrams."""
def __init__(self, template_path, logo_path):
def __init__(self, template_path: str, logo_path: str) -> None:
"""
Initialize HTMLTemplateHandler with validated template and logo paths.
@@ -29,23 +32,23 @@ class HTMLTemplateHandler:
except ValueError as e:
raise ValueError(f"Invalid template or logo path: {e}") from e
def read_template(self):
def read_template(self) -> str:
"""Read and return the HTML template file contents."""
with open(self.template_path, "r", encoding="utf-8") as f:
return f.read()
def encode_logo(self):
def encode_logo(self) -> str:
"""Convert the logo SVG file to base64 encoded string."""
with open(self.logo_path, "rb") as logo_file:
logo_svg_data = logo_file.read()
return base64.b64encode(logo_svg_data).decode("utf-8")
def extract_body_content(self, html):
def extract_body_content(self, html: str) -> str:
"""Extract and return content between body tags from HTML string."""
match = re.search("<body.*?>(.*?)</body>", html, re.DOTALL)
return match.group(1) if match else ""
def generate_legend_items_html(self, legend_items):
def generate_legend_items_html(self, legend_items: list[dict[str, Any]]) -> str:
"""Generate HTML markup for the legend items."""
legend_items_html = ""
for item in legend_items:
@@ -73,7 +76,9 @@ class HTMLTemplateHandler:
"""
return legend_items_html
def generate_final_html(self, network_body, legend_items_html, title="Flow Plot"):
def generate_final_html(
self, network_body: str, legend_items_html: str, title: str = "Flow Plot"
) -> str:
"""Combine all components into final HTML document with network visualization."""
html_template = self.read_template()
logo_svg_base64 = self.encode_logo()

View File

@@ -1,4 +1,23 @@
def get_legend_items(colors):
"""Legend generation for flow visualization diagrams."""
from typing import Any
from crewai.flow.config import FlowColors
def get_legend_items(colors: FlowColors) -> list[dict[str, Any]]:
"""Generate legend items based on flow colors.
Parameters
----------
colors : FlowColors
Dictionary containing color definitions for flow elements.
Returns
-------
list[dict[str, Any]]
List of legend item dictionaries with labels and styling.
"""
return [
{"label": "Start Method", "color": colors["start"]},
{"label": "Method", "color": colors["method"]},
@@ -24,7 +43,19 @@ def get_legend_items(colors):
]
def generate_legend_items_html(legend_items):
def generate_legend_items_html(legend_items: list[dict[str, Any]]) -> str:
"""Generate HTML markup for legend items.
Parameters
----------
legend_items : list[dict[str, Any]]
List of legend item dictionaries containing labels and styling.
Returns
-------
str
HTML string containing formatted legend items.
"""
legend_items_html = ""
for item in legend_items:
if "border" in item:

View File

@@ -36,28 +36,29 @@ from crewai.flow.utils import (
from crewai.utilities.printer import Printer
_printer = Printer()
def method_calls_crew(method: Any) -> bool:
"""
Check if the method contains a call to `.crew()`.
Check if the method contains a call to `.crew()`, `.kickoff()`, or `.kickoff_async()`.
Parameters
----------
method : Any
The method to analyze for crew() calls.
The method to analyze for crew or agent execution calls.
Returns
-------
bool
True if the method calls .crew(), False otherwise.
True if the method calls .crew(), .kickoff(), or .kickoff_async(), False otherwise.
Notes
-----
Uses AST analysis to detect method calls, specifically looking for
attribute access of 'crew'.
attribute access of 'crew', 'kickoff', or 'kickoff_async'.
This includes both traditional Crew execution (.crew()) and Agent/LiteAgent
execution (.kickoff() or .kickoff_async()).
"""
try:
source = inspect.getsource(method)
@@ -68,14 +69,14 @@ def method_calls_crew(method: Any) -> bool:
return False
class CrewCallVisitor(ast.NodeVisitor):
"""AST visitor to detect .crew() method calls."""
"""AST visitor to detect .crew(), .kickoff(), or .kickoff_async() method calls."""
def __init__(self):
def __init__(self) -> None:
self.found = False
def visit_Call(self, node):
def visit_Call(self, node: ast.Call) -> None:
if isinstance(node.func, ast.Attribute):
if node.func.attr == "crew":
if node.func.attr in ("crew", "kickoff", "kickoff_async"):
self.found = True
self.generic_visit(node)
@@ -113,7 +114,7 @@ def add_nodes_to_network(
- Regular methods
"""
def human_friendly_label(method_name):
def human_friendly_label(method_name: str) -> str:
return method_name.replace("_", " ").title()
node_style: (
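The widened AST check above boils down to a small visitor; this standalone sketch takes source text directly rather than a method object:

```python
import ast

def calls_crew(source: str) -> bool:
    """Detect whether the source calls .crew(), .kickoff(),
    or .kickoff_async() anywhere, via AST attribute inspection."""
    class CrewCallVisitor(ast.NodeVisitor):
        def __init__(self) -> None:
            self.found = False

        def visit_Call(self, node: ast.Call) -> None:
            if isinstance(node.func, ast.Attribute):
                if node.func.attr in ("crew", "kickoff", "kickoff_async"):
                    self.found = True
            self.generic_visit(node)

    visitor = CrewCallVisitor()
    visitor.visit(ast.parse(source))
    return visitor.found

print(calls_crew("def m(self):\n    return self.crew().kickoff()"))  # -> True
print(calls_crew("def m(self):\n    return 1"))  # -> False
```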

View File

@@ -1,99 +0,0 @@
"""AI Suite LLM integration for CrewAI.
This module provides integration with AI Suite for LLM capabilities.
"""
from typing import Any
import aisuite as ai # type: ignore
from crewai.llms.base_llm import BaseLLM
class AISuiteLLM(BaseLLM):
"""AI Suite LLM implementation.
This class provides integration with AI Suite models through the BaseLLM interface.
"""
def __init__(
self,
model: str,
temperature: float | None = None,
stop: list[str] | None = None,
**kwargs: Any,
) -> None:
"""Initialize the AI Suite LLM.
Args:
model: The model identifier for AI Suite.
temperature: Optional temperature setting for response generation.
stop: Optional list of stop sequences for generation.
**kwargs: Additional keyword arguments passed to the AI Suite client.
"""
super().__init__(model=model, temperature=temperature, stop=stop)
self.client = ai.Client()
self.kwargs = kwargs
def call( # type: ignore[override]
self,
messages: str | list[dict[str, str]],
tools: list[dict] | None = None,
callbacks: list[Any] | None = None,
available_functions: dict[str, Any] | None = None,
from_task: Any | None = None,
from_agent: Any | None = None,
) -> str | Any:
"""Call the AI Suite LLM with the given messages.
Args:
messages: Input messages for the LLM.
tools: Optional list of tool schemas for function calling.
callbacks: Optional list of callback functions.
available_functions: Optional dict mapping function names to callables.
from_task: Optional task caller.
from_agent: Optional agent caller.
Returns:
The text response from the LLM.
"""
completion_params = self._prepare_completion_params(messages, tools)
response = self.client.chat.completions.create(**completion_params)
return response.choices[0].message.content
def _prepare_completion_params(
self,
messages: str | list[dict[str, str]],
tools: list[dict] | None = None,
) -> dict[str, Any]:
"""Prepare parameters for the AI Suite completion call.
Args:
messages: Input messages for the LLM.
tools: Optional list of tool schemas.
Returns:
Dictionary of parameters for the completion API.
"""
params: dict[str, Any] = {
"model": self.model,
"messages": messages,
"temperature": self.temperature,
"tools": tools,
**self.kwargs,
}
if self.stop:
params["stop"] = self.stop
return params
@staticmethod
def supports_function_calling() -> bool:
"""Check if the LLM supports function calling.
Returns:
False, as AI Suite does not currently support function calling.
"""
return False

View File

@@ -0,0 +1,60 @@
"""Mypy plugin for CrewAI decorator type checking.
This plugin informs mypy about attributes injected by the @CrewBase decorator.
"""
from collections.abc import Callable
from mypy.nodes import MDEF, SymbolTableNode, Var
from mypy.plugin import ClassDefContext, Plugin
from mypy.types import AnyType, TypeOfAny
class CrewAIPlugin(Plugin):
"""Mypy plugin that handles @CrewBase decorator attribute injection."""
def get_class_decorator_hook(
self, fullname: str
) -> Callable[[ClassDefContext], None] | None:
"""Return hook for class decorators.
Args:
fullname: Fully qualified name of the decorator.
Returns:
Hook function if this is a CrewBase decorator, None otherwise.
"""
if fullname in ("crewai.project.CrewBase", "crewai.project.crew_base.CrewBase"):
return self._crew_base_hook
return None
@staticmethod
def _crew_base_hook(ctx: ClassDefContext) -> None:
"""Add injected attributes to @CrewBase decorated classes.
Args:
ctx: Context for the class being decorated.
"""
any_type = AnyType(TypeOfAny.explicit)
str_type = ctx.api.named_type("builtins.str")
dict_type = ctx.api.named_type("builtins.dict", [str_type, any_type])
agents_config_var = Var("agents_config", dict_type)
agents_config_var.info = ctx.cls.info
agents_config_var._fullname = f"{ctx.cls.info.fullname}.agents_config"
ctx.cls.info.names["agents_config"] = SymbolTableNode(MDEF, agents_config_var)
tasks_config_var = Var("tasks_config", dict_type)
tasks_config_var.info = ctx.cls.info
tasks_config_var._fullname = f"{ctx.cls.info.fullname}.tasks_config"
ctx.cls.info.names["tasks_config"] = SymbolTableNode(MDEF, tasks_config_var)
def plugin(_: str) -> type[Plugin]:
"""Entry point for mypy plugin.
Args:
_: Mypy version string.
Returns:
Plugin class.
"""
return CrewAIPlugin
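A plugin like the one above is enabled through mypy's configuration. A minimal sketch, assuming the plugin module is importable as `crewai.mypy_plugin` (the actual module path is not shown in this diff):

```toml
# pyproject.toml -- module path is hypothetical
[tool.mypy]
plugins = ["crewai.mypy_plugin"]
```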

View File

@@ -13,11 +13,21 @@ from crewai.project.annotations import (
task,
tool,
)
from crewai.project.crew_base import CrewBase
from crewai.project.crew_base import (
AgentConfig,
AgentsConfigDict,
CrewBase,
TaskConfig,
TasksConfigDict,
)
__all__ = [
"AgentConfig",
"AgentsConfigDict",
"CrewBase",
"TaskConfig",
"TasksConfigDict",
"after_kickoff",
"agent",
"before_kickoff",

View File

@@ -52,11 +52,11 @@ class AgentConfig(TypedDict, total=False):
allow_delegation: bool
max_iter: int
max_tokens: int
callbacks: list[str]
callbacks: list[str] | list[Any]
# LLM configuration
llm: str
function_calling_llm: str
# LLM configuration (can be string references or resolved instances)
llm: str | Any
function_calling_llm: str | Any
use_system_prompt: bool
# Template configuration
@@ -66,7 +66,7 @@ class AgentConfig(TypedDict, total=False):
# Tools and handlers (can be string references or instances)
tools: list[str] | list[BaseTool]
step_callback: str
step_callback: str | Any
cache_handler: str | CacheHandler
# Code execution
@@ -111,18 +111,18 @@ class TaskConfig(TypedDict, total=False):
description: str
expected_output: str
# Agent and context
agent: str
context: list[str]
# Agent and context (can be string references or resolved instances)
agent: str | Any
context: list[str] | list[Any]
# Tools and callbacks (can be string references or instances)
tools: list[str] | list[BaseTool]
callback: str
callbacks: list[str]
callback: str | Any
callbacks: list[str] | list[Any]
# Output configuration
output_json: str
output_pydantic: str
# Output configuration (can be string references or resolved class wrappers)
output_json: str | Any
output_pydantic: str | Any
output_file: str
create_directory: bool
@@ -139,6 +139,10 @@ class TaskConfig(TypedDict, total=False):
allow_crewai_trigger_context: bool
AgentsConfigDict = dict[str, AgentConfig]
TasksConfigDict = dict[str, TaskConfig]
load_dotenv()
CallableT = TypeVar("CallableT", bound=Callable[..., Any])
@@ -378,8 +382,14 @@ def load_configurations(self: CrewInstance) -> None:
Args:
self: Crew instance with configuration paths.
"""
self.agents_config = self._load_config(self.original_agents_config_path, "agent")
self.tasks_config = self._load_config(self.original_tasks_config_path, "task")
self.agents_config = cast(
AgentsConfigDict,
self._load_config(self.original_agents_config_path, "agent"),
)
self.tasks_config = cast(
TasksConfigDict,
self._load_config(self.original_tasks_config_path, "task"),
)
def load_yaml(config_path: Path) -> dict[str, Any]:

View File

@@ -20,8 +20,14 @@ from typing_extensions import Self
if TYPE_CHECKING:
from crewai import Agent, Task
from crewai import Agent, Crew, Task
from crewai.crews.crew_output import CrewOutput
from crewai.project.crew_base import (
AgentConfig,
AgentsConfigDict,
TaskConfig,
TasksConfigDict,
)
from crewai.tools import BaseTool
@@ -75,8 +81,8 @@ class CrewInstance(Protocol):
base_directory: Path
original_agents_config_path: str
original_tasks_config_path: str
agents_config: dict[str, Any]
tasks_config: dict[str, Any]
agents_config: AgentsConfigDict
tasks_config: TasksConfigDict
mcp_server_params: Any
mcp_connect_timeout: int
@@ -90,7 +96,7 @@ class CrewInstance(Protocol):
def _map_agent_variables(
self,
agent_name: str,
agent_info: dict[str, Any],
agent_info: AgentConfig,
llms: dict[str, Callable[..., Any]],
tool_functions: dict[str, Callable[..., Any]],
cache_handler_functions: dict[str, Callable[..., Any]],
@@ -99,7 +105,7 @@ class CrewInstance(Protocol):
def _map_task_variables(
self,
task_name: str,
task_info: dict[str, Any],
task_info: TaskConfig,
agents: dict[str, Callable[..., Any]],
tasks: dict[str, Callable[..., Any]],
output_json_functions: dict[str, Callable[..., Any]],
@@ -129,6 +135,7 @@ class CrewClass(Protocol):
_map_agent_variables: Callable[..., None]
map_all_task_variables: Callable[..., None]
_map_task_variables: Callable[..., None]
crew: Callable[..., Crew]
class DecoratedMethod(Generic[P, R]):

View File

@@ -4,6 +4,8 @@ from typing import TYPE_CHECKING, Any, Generic, TypeGuard, TypeVar
from pydantic import BaseModel
from crewai.utilities.logger_utils import suppress_warnings
if TYPE_CHECKING:
from crewai.agent import Agent
@@ -11,9 +13,6 @@ if TYPE_CHECKING:
from crewai.llms.base_llm import BaseLLM
from crewai.utilities.types import LLMMessage
from crewai.utilities.logger_utils import suppress_warnings
T = TypeVar("T", bound=BaseModel)
@@ -62,9 +61,59 @@ class InternalInstructor(Generic[T]):
with suppress_warnings():
import instructor # type: ignore[import-untyped]
from litellm import completion
self._client = instructor.from_litellm(completion)
if (
self.llm is not None
and hasattr(self.llm, "is_litellm")
and self.llm.is_litellm
):
from litellm import completion
self._client = instructor.from_litellm(completion)
else:
self._client = self._create_instructor_client()
def _create_instructor_client(self) -> Any:
"""Create instructor client using the modern from_provider pattern.
Returns:
Instructor client configured for the LLM provider
Raises:
ValueError: If the provider is not supported
"""
import instructor
if isinstance(self.llm, str):
model_string = self.llm
elif self.llm is not None and hasattr(self.llm, "model"):
model_string = self.llm.model
else:
raise ValueError("LLM must be a string or have a model attribute")
if isinstance(self.llm, str):
provider = self._extract_provider()
elif self.llm is not None and hasattr(self.llm, "provider"):
provider = self.llm.provider
else:
provider = "openai" # Default fallback
return instructor.from_provider(f"{provider}/{model_string}")
def _extract_provider(self) -> str:
"""Extract provider from LLM model name.
Returns:
Provider name (e.g., 'openai' or 'anthropic')
"""
if self.llm is not None and hasattr(self.llm, "provider") and self.llm.provider:
return self.llm.provider
if isinstance(self.llm, str):
return self.llm.partition("/")[0] or "openai"
if self.llm is not None and hasattr(self.llm, "model"):
return self.llm.model.partition("/")[0] or "openai"
return "openai"
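The `_extract_provider` method above leans on `str.partition("/")` to pull the provider prefix out of a model string. A standalone sketch of that idea, using a hypothetical `provider_of` helper (the real method additionally checks a `provider` attribute on the LLM object first, and its bare-name fallback behavior differs slightly):

```python
def provider_of(model: str, default: str = "openai") -> str:
    # str.partition splits on the first "/":
    # "anthropic/claude-3-5-sonnet" -> ("anthropic", "/", "claude-3-5-sonnet")
    head, sep, _ = model.partition("/")
    # If there was no "/", the separator is empty and we fall back to the default
    return head if sep else default


assert provider_of("anthropic/claude-3-5-sonnet") == "anthropic"
assert provider_of("gpt-4o") == "openai"
```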
def to_json(self) -> str:
"""Convert the structured output to JSON format.
@@ -96,6 +145,6 @@ class InternalInstructor(Generic[T]):
else:
model_name = self.llm.model
return self._client.chat.completions.create(
return self._client.chat.completions.create( # type: ignore[no-any-return]
model=model_name, response_model=self.model, messages=messages
)

View File

@@ -29,8 +29,8 @@ def create_llm(
try:
return LLM(model=llm_value)
except Exception as e:
logger.debug(f"Failed to instantiate LLM with model='{llm_value}': {e}")
return None
logger.error(f"Error instantiating LLM from string: {e}")
raise e
if llm_value is None:
return _llm_via_environment_or_fallback()
@@ -62,8 +62,8 @@ def create_llm(
)
except Exception as e:
logger.debug(f"Error instantiating LLM from unknown object type: {e}")
return None
logger.error(f"Error instantiating LLM from unknown object type: {e}")
raise e
UNACCEPTED_ATTRIBUTES: Final[list[str]] = [
@@ -176,10 +176,10 @@ def _llm_via_environment_or_fallback() -> LLM | None:
try:
return LLM(**llm_params)
except Exception as e:
logger.debug(
logger.error(
f"Error instantiating LLM from environment/fallback: {type(e).__name__}: {e}"
)
return None
raise e
def _normalize_key_name(key_name: str) -> str:

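The hunks above flip `create_llm` from "log at debug and return None" to "log at error and re-raise", so misconfigured LLMs fail loudly at construction instead of surfacing later as a confusing `None`. A minimal sketch of that fail-fast pattern with an unrelated, hypothetical parser:

```python
import logging

logger = logging.getLogger(__name__)


def parse_timeout(value: str) -> int:
    try:
        return int(value)
    except ValueError as e:
        # Log at error level and re-raise so the caller sees the real failure,
        # rather than swallowing it and returning None
        logger.error(f"Error parsing timeout from string: {e}")
        raise


assert parse_timeout("30") == 30
```

Re-raising with a bare `raise` preserves the original traceback, which `raise e` in the diff also does here since `e` is the active exception.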
View File

@@ -6,6 +6,7 @@ from unittest import mock
from unittest.mock import MagicMock, patch
from crewai.agents.crew_agent_executor import AgentFinish, CrewAgentExecutor
from crewai.cli.constants import DEFAULT_LLM_MODEL
from crewai.events.event_bus import crewai_event_bus
from crewai.events.types.tool_usage_events import ToolUsageFinishedEvent
from crewai.knowledge.knowledge import Knowledge
@@ -135,7 +136,7 @@ def test_agent_with_missing_response_template():
def test_agent_default_values():
agent = Agent(role="test role", goal="test goal", backstory="test backstory")
assert agent.llm.model == "gpt-4o-mini"
assert agent.llm.model == DEFAULT_LLM_MODEL
assert agent.allow_delegation is False
@@ -225,7 +226,7 @@ def test_logging_tool_usage():
verbose=True,
)
assert agent.llm.model == "gpt-4o-mini"
assert agent.llm.model == DEFAULT_LLM_MODEL
assert agent.tools_handler.last_used_tool is None
task = Task(
description="What is 3 times 4?",
@@ -902,7 +903,8 @@ def test_agent_step_callback():
@pytest.mark.vcr(filter_headers=["authorization"])
def test_agent_function_calling_llm():
llm = "gpt-4o"
from crewai.llm import LLM
llm = LLM(model="gpt-4o", is_litellm=True)
@tool
def learn_about_ai() -> str:

View File

@@ -591,3 +591,81 @@ def test_lite_agent_with_invalid_llm():
llm="invalid-model",
)
assert "Expected LLM instance of type BaseLLM" in str(exc_info.value)
@patch.dict("os.environ", {"CREWAI_PLATFORM_INTEGRATION_TOKEN": "test_token"})
@patch("crewai_tools.tools.crewai_platform_tools.crewai_platform_tool_builder.requests.get")
@pytest.mark.vcr(filter_headers=["authorization"])
def test_agent_kickoff_with_platform_tools(mock_get):
"""Test that Agent.kickoff() properly integrates platform tools with LiteAgent"""
mock_response = Mock()
mock_response.raise_for_status.return_value = None
mock_response.json.return_value = {
"actions": {
"github": [
{
"name": "create_issue",
"description": "Create a GitHub issue",
"parameters": {
"type": "object",
"properties": {
"title": {"type": "string", "description": "Issue title"},
"body": {"type": "string", "description": "Issue body"},
},
"required": ["title"],
},
}
]
}
}
mock_get.return_value = mock_response
agent = Agent(
role="Test Agent",
goal="Test goal",
backstory="Test backstory",
llm=LLM(model="gpt-3.5-turbo"),
apps=["github"],
verbose=True
)
result = agent.kickoff("Create a GitHub issue")
assert isinstance(result, LiteAgentOutput)
assert result.raw is not None
@patch.dict("os.environ", {"EXA_API_KEY": "test_exa_key"})
@patch("crewai.agent.Agent._get_external_mcp_tools")
@pytest.mark.vcr(filter_headers=["authorization"])
def test_agent_kickoff_with_mcp_tools(mock_get_mcp_tools):
"""Test that Agent.kickoff() properly integrates MCP tools with LiteAgent"""
    # Set up a mock MCP tool - create a proper BaseTool instance
class MockMCPTool(BaseTool):
name: str = "exa_search"
description: str = "Search the web using Exa"
def _run(self, query: str) -> str:
return f"Mock search results for: {query}"
mock_get_mcp_tools.return_value = [MockMCPTool()]
# Create agent with MCP servers
agent = Agent(
role="Test Agent",
goal="Test goal",
backstory="Test backstory",
llm=LLM(model="gpt-3.5-turbo"),
mcps=["https://mcp.exa.ai/mcp?api_key=test_exa_key&profile=research"],
verbose=True
)
# Execute kickoff
result = agent.kickoff("Search for information about AI")
# Verify the result is a LiteAgentOutput
assert isinstance(result, LiteAgentOutput)
assert result.raw is not None
# Verify MCP tools were retrieved
mock_get_mcp_tools.assert_called_once_with("https://mcp.exa.ai/mcp?api_key=test_exa_key&profile=research")

View File

@@ -0,0 +1,244 @@
interactions:
- request:
body: '{"messages": [{"role": "system", "content": "You are Test Agent. Test backstory\nYour
personal goal is: Test goal\n\nYou ONLY have access to the following tools,
and should NEVER make up tools that are not listed here:\n\nTool Name: exa_search\nTool
Arguments: {''query'': {''description'': None, ''type'': ''str''}}\nTool Description:
Search the web using Exa\n\nIMPORTANT: Use the following format in your response:\n\n```\nThought:
you should always think about what to do\nAction: the action to take, only one
name of [exa_search], just the name, exactly as it''s written.\nAction Input:
the input to the action, just a simple JSON object, enclosed in curly braces,
using \" to wrap keys and values.\nObservation: the result of the action\n```\n\nOnce
all necessary information is gathered, return the following format:\n\n```\nThought:
I now know the final answer\nFinal Answer: the final answer to the original
input question\n```"}, {"role": "user", "content": "Search for information about
AI"}], "model": "gpt-3.5-turbo", "stream": false}'
headers:
accept:
- application/json
accept-encoding:
- gzip, deflate
connection:
- keep-alive
content-length:
- '1038'
content-type:
- application/json
host:
- api.openai.com
user-agent:
- OpenAI/Python 1.109.1
x-stainless-arch:
- arm64
x-stainless-async:
- 'false'
x-stainless-lang:
- python
x-stainless-os:
- MacOS
x-stainless-package-version:
- 1.109.1
x-stainless-read-timeout:
- '600'
x-stainless-retry-count:
- '0'
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.13.3
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
body:
string: !!binary |
H4sIAAAAAAAAAwAAAP//jFNNb9swDL3nVxA6J0GTLgnmW7pLDGzrPi+dC0ORaVurLHoSVaQI8t8H
OR92twzYxYD4+MjHR3o/AhC6EAkIVUtWTWsm776/320+fbzbPLfy893br8vi6WGePnywNW3uxTgy
aPsTFZ9ZU0VNa5A12SOsHErGWHW2Ws5uFzfz1awDGirQRFrV8uR2uphwcFua3MzmixOzJq3QiwR+
jAAA9t03arQF7kQCN+NzpEHvZYUiuSQBCEcmRoT0XnuWlsW4BxVZRtvJ/lZTqGpOIAVfUzAFBI/A
NQLuZO5ROlUDExlggtOzJAfaluQaGUcFuaXAsE6nmV2rGEkG5HMMUtsGTmCfiV8B3UsmEsjEOs3E
IbP3W4/uWR65X9AHwx4cmmhebLxOoXTUXNM1zexwNIdl8DJaa4MxA0BaS9x16Ex9PCGHi42GqtbR
1v9BFaW22te5Q+nJRss8Uys69DACeOzWFV5tQLSOmpZzpifs2s1ns2M90V9Ij75ZnkAmlmbAWqzG
V+rlBbLUxg8WLpRUNRY9tb8OGQpNA2A0mPpvNddqHyfXtvqf8j2gFLaMRd46LLR6PXGf5jD+QP9K
u7jcCRbxSLTCnDW6uIkCSxnM8bSFf/GMTV5qW6Frne7uO25ydBj9BgAA//8DAChlpSTeAwAA
headers:
CF-RAY:
- 993d6b3e6b64ffb8-SJC
Connection:
- keep-alive
Content-Encoding:
- gzip
Content-Type:
- application/json
Date:
- Fri, 24 Oct 2025 23:57:52 GMT
Server:
- cloudflare
Set-Cookie:
- __cf_bm=cXZeAPPk9o5VuaArJFruIKai9Oj2X9ResvQgx_qCwdg-1761350272-1.0.1.1-42v7QDan6OIFJYT2vOisNB0AeLg3KsbAiCGsrrsPgH1N13l8o_Vy6HvQCVCIRAqPaHCcvybK8xTxrHKqZgLBRH4XM7.l5IYkFLhgl8IIUA0;
path=/; expires=Sat, 25-Oct-25 00:27:52 GMT; domain=.api.openai.com; HttpOnly;
Secure; SameSite=None
- _cfuvid=wGtD6dA8GfZzwvY_uzLiXlAVzOIOJPtIPQYQRS_19oo-1761350272656-0.0.1.1-604800000;
path=/; domain=.api.openai.com; HttpOnly; Secure; SameSite=None
Strict-Transport-Security:
- max-age=31536000; includeSubDomains; preload
Transfer-Encoding:
- chunked
X-Content-Type-Options:
- nosniff
access-control-expose-headers:
- X-Request-ID
alt-svc:
- h3=":443"; ma=86400
cf-cache-status:
- DYNAMIC
openai-organization:
- crewai-iuxna1
openai-processing-ms:
- '718'
openai-project:
- proj_xitITlrFeen7zjNSzML82h9x
openai-version:
- '2020-10-01'
x-envoy-upstream-service-time:
- '791'
x-openai-proxy-wasm:
- v0.1
x-ratelimit-limit-requests:
- '10000'
x-ratelimit-limit-tokens:
- '50000000'
x-ratelimit-remaining-requests:
- '9999'
x-ratelimit-remaining-tokens:
- '49999774'
x-ratelimit-reset-requests:
- 6ms
x-ratelimit-reset-tokens:
- 0s
x-request-id:
- req_a2e42e9d98bc4c3db1a4de14cf1a94ec
status:
code: 200
message: OK
- request:
body: '{"messages": [{"role": "system", "content": "You are Test Agent. Test backstory\nYour
personal goal is: Test goal\n\nYou ONLY have access to the following tools,
and should NEVER make up tools that are not listed here:\n\nTool Name: exa_search\nTool
Arguments: {''query'': {''description'': None, ''type'': ''str''}}\nTool Description:
Search the web using Exa\n\nIMPORTANT: Use the following format in your response:\n\n```\nThought:
you should always think about what to do\nAction: the action to take, only one
name of [exa_search], just the name, exactly as it''s written.\nAction Input:
the input to the action, just a simple JSON object, enclosed in curly braces,
using \" to wrap keys and values.\nObservation: the result of the action\n```\n\nOnce
all necessary information is gathered, return the following format:\n\n```\nThought:
I now know the final answer\nFinal Answer: the final answer to the original
input question\n```"}, {"role": "user", "content": "Search for information about
AI"}, {"role": "assistant", "content": "Thought: I should use the exa_search
tool to search for information about AI.\nAction: exa_search\nAction Input:
{\"query\": \"AI\"}\nObservation: Mock search results for: AI"}], "model": "gpt-3.5-turbo",
"stream": false}'
headers:
accept:
- application/json
accept-encoding:
- gzip, deflate
connection:
- keep-alive
content-length:
- '1250'
content-type:
- application/json
cookie:
- __cf_bm=cXZeAPPk9o5VuaArJFruIKai9Oj2X9ResvQgx_qCwdg-1761350272-1.0.1.1-42v7QDan6OIFJYT2vOisNB0AeLg3KsbAiCGsrrsPgH1N13l8o_Vy6HvQCVCIRAqPaHCcvybK8xTxrHKqZgLBRH4XM7.l5IYkFLhgl8IIUA0;
_cfuvid=wGtD6dA8GfZzwvY_uzLiXlAVzOIOJPtIPQYQRS_19oo-1761350272656-0.0.1.1-604800000
host:
- api.openai.com
user-agent:
- OpenAI/Python 1.109.1
x-stainless-arch:
- arm64
x-stainless-async:
- 'false'
x-stainless-lang:
- python
x-stainless-os:
- MacOS
x-stainless-package-version:
- 1.109.1
x-stainless-read-timeout:
- '600'
x-stainless-retry-count:
- '0'
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.13.3
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
body:
string: !!binary |
H4sIAAAAAAAAAwAAAP//jFNNaxsxEL3vrxh06cU2/sBJs5diCi0phULr0EMaFlma3VWs1ajSbG0T
/N+L1o5306bQi0B6743evJGeMgBhtMhBqFqyarwdv7/7vP9kb+jjt8dV/Ln/otdrh3ezjdx9DUaM
koI2j6j4WTVR1HiLbMidYBVQMqaqs+ur2WI5nV8vOqAhjTbJKs/jxWQ55jZsaDydzZdnZU1GYRQ5
3GcAAE/dmjw6jXuRw3T0fNJgjLJCkV9IACKQTSdCxmgiS8di1IOKHKPrbK9raquac7gFRzvYpoVr
hNI4aUG6uMPww33odqtul6iKWqvdG040DRKiR2VKo86CCXxPBDhQC9ZsERoEJogog6qhpADSHbg2
rgK0ESGgTTElzur23dBpwLKNMiXlWmsHgHSOWKaku4wezsjxkoqlygfaxD+kojTOxLoIKCO5lEBk
8qJDjxnAQ5d++yJQ4QM1ngumLXbXzZdXp3qiH3iPLhZnkImlHaje3oxeqVdoZGlsHMxPKKlq1L20
H7ZstaEBkA26/tvNa7VPnRtX/U/5HlAKPaMufEBt1MuOe1rA9B/+Rbuk3BkWEcMvo7BggyFNQmMp
W3t6qSIeImNTlMZVGHww3XNNk8yO2W8AAAD//wMA7uEpt60DAAA=
headers:
CF-RAY:
- 993d6b44dc97ffb8-SJC
Connection:
- keep-alive
Content-Encoding:
- gzip
Content-Type:
- application/json
Date:
- Fri, 24 Oct 2025 23:57:53 GMT
Server:
- cloudflare
Strict-Transport-Security:
- max-age=31536000; includeSubDomains; preload
Transfer-Encoding:
- chunked
X-Content-Type-Options:
- nosniff
access-control-expose-headers:
- X-Request-ID
alt-svc:
- h3=":443"; ma=86400
cf-cache-status:
- DYNAMIC
openai-organization:
- crewai-iuxna1
openai-processing-ms:
- '446'
openai-project:
- proj_xitITlrFeen7zjNSzML82h9x
openai-version:
- '2020-10-01'
x-envoy-upstream-service-time:
- '655'
x-openai-proxy-wasm:
- v0.1
x-ratelimit-limit-requests:
- '10000'
x-ratelimit-limit-tokens:
- '50000000'
x-ratelimit-remaining-requests:
- '9999'
x-ratelimit-remaining-tokens:
- '49999732'
x-ratelimit-reset-requests:
- 6ms
x-ratelimit-reset-tokens:
- 0s
x-request-id:
- req_9ce6b4f80d9546eba4ce23b5fac77153
status:
code: 200
message: OK
version: 1

View File

@@ -0,0 +1,126 @@
interactions:
- request:
body: '{"messages": [{"role": "system", "content": "You are Test Agent. Test backstory\nYour
personal goal is: Test goal\n\nYou ONLY have access to the following tools,
and should NEVER make up tools that are not listed here:\n\nTool Name: create_issue\nTool
Arguments: {''title'': {''description'': ''Issue title'', ''type'': ''str''},
''body'': {''description'': ''Issue body'', ''type'': ''Union[str, NoneType]''}}\nTool
Description: Create a GitHub issue\nDetailed Parameter Structure:\nObject with
properties:\n - title: Issue title (required)\n - body: Issue body (optional)\n\nIMPORTANT:
Use the following format in your response:\n\n```\nThought: you should always
think about what to do\nAction: the action to take, only one name of [create_issue],
just the name, exactly as it''s written.\nAction Input: the input to the action,
just a simple JSON object, enclosed in curly braces, using \" to wrap keys and
values.\nObservation: the result of the action\n```\n\nOnce all necessary information
is gathered, return the following format:\n\n```\nThought: I now know the final
answer\nFinal Answer: the final answer to the original input question\n```"},
{"role": "user", "content": "Create a GitHub issue"}], "model": "gpt-3.5-turbo",
"stream": false}'
headers:
accept:
- application/json
accept-encoding:
- gzip, deflate
connection:
- keep-alive
content-length:
- '1233'
content-type:
- application/json
host:
- api.openai.com
user-agent:
- OpenAI/Python 1.109.1
x-stainless-arch:
- arm64
x-stainless-async:
- 'false'
x-stainless-lang:
- python
x-stainless-os:
- MacOS
x-stainless-package-version:
- 1.109.1
x-stainless-read-timeout:
- '600'
x-stainless-retry-count:
- '0'
x-stainless-runtime:
- CPython
x-stainless-runtime-version:
- 3.13.3
method: POST
uri: https://api.openai.com/v1/chat/completions
response:
body:
string: !!binary |
H4sIAAAAAAAAAwAAAP//jFNNbxMxEL3vrxj5nET5aGjIBUGoIMAFCRASqiLHns0O9Xose7ZtqPLf
0XrTbApF4rKHefOe37yZfSgAFFm1BGUqLaYObrj6+un+45er9ZvF/PW3tZirz+OXvy6+0/jDav1W
DVoGb3+ikUfWyHAdHAqx72ATUQu2qpPLF5PZfDy9vMhAzRZdS9sFGc5G86E0ccvD8WQ6PzIrJoNJ
LeFHAQDwkL+tR2/xXi1hPHis1JiS3qFanpoAVGTXVpROiZJoL2rQg4a9oM+213BHzoFHtFBzREgB
DZVkgHzJsdbtMCAM3Sig4R3J+2YLlFKDI1hx4yzsuYHgUCeEEPmWLHZiFkWTS5AaU4FOIBWCkDgE
7S1s2e6By1zNclnnLis6usH+2Vfn7iOWTdJter5x7gzQ3rNkwzm36yNyOCXleBcib9MfVFWSp1Rt
IurEvk0lCQeV0UMBcJ030jwJWYXIdZCN8A3m56bzeaen+iPo0dnsCAqLdmesxWLwjN7mGNzZTpXR
pkLbU/sD0I0lPgOKs6n/dvOcdjc5+d3/yPeAMRgE7SZEtGSeTty3RWz/kX+1nVLOhlXCeEsGN0IY
201YLHXjuutVaZ8E601JfocxRMon3G6yOBS/AQAA//8DABKn8+vBAwAA
headers:
CF-RAY:
- 993d6b4be9862379-SJC
Connection:
- keep-alive
Content-Encoding:
- gzip
Content-Type:
- application/json
Date:
- Fri, 24 Oct 2025 23:57:54 GMT
Server:
- cloudflare
Set-Cookie:
- __cf_bm=WY9bgemMDI_hUYISAPlQ2a.DBGeZfM6AjVEa3SKNg1c-1761350274-1.0.1.1-K3Qm2cl6IlDAgmocoKZ8IMUTmue6Q81hH9stECprUq_SM8LF8rR9d1sHktvRCN3.jEM.twEuFFYDNpBnN8NBRJFZcea1yvpm8Uo0G_UhyDs;
path=/; expires=Sat, 25-Oct-25 00:27:54 GMT; domain=.api.openai.com; HttpOnly;
Secure; SameSite=None
- _cfuvid=JklLS4i3hBGELpS9cz1KMpTbj72hCwP41LyXDSxWIv8-1761350274521-0.0.1.1-604800000;
path=/; domain=.api.openai.com; HttpOnly; Secure; SameSite=None
Strict-Transport-Security:
- max-age=31536000; includeSubDomains; preload
Transfer-Encoding:
- chunked
X-Content-Type-Options:
- nosniff
access-control-expose-headers:
- X-Request-ID
alt-svc:
- h3=":443"; ma=86400
cf-cache-status:
- DYNAMIC
openai-organization:
- crewai-iuxna1
openai-processing-ms:
- '487'
openai-project:
- proj_xitITlrFeen7zjNSzML82h9x
openai-version:
- '2020-10-01'
x-envoy-upstream-service-time:
- '526'
x-openai-proxy-wasm:
- v0.1
x-ratelimit-limit-requests:
- '10000'
x-ratelimit-limit-tokens:
- '50000000'
x-ratelimit-remaining-requests:
- '9999'
x-ratelimit-remaining-tokens:
- '49999727'
x-ratelimit-reset-requests:
- 6ms
x-ratelimit-reset-tokens:
- 0s
x-request-id:
- req_1708dc0928c64882aaa5bc2c168c140f
status:
code: 200
message: OK
version: 1

View File

@@ -0,0 +1,193 @@
"""Tests for CrewBase configuration type annotations."""
from pathlib import Path
import pytest
from crewai.project import AgentConfig, AgentsConfigDict, CrewBase, TaskConfig, TasksConfigDict, agent, task
def test_agents_config_loads_as_dict(tmp_path: Path) -> None:
"""Test that agents_config loads as a properly typed dictionary."""
agents_yaml = tmp_path / "agents.yaml"
agents_yaml.write_text(
"""
researcher:
role: "Research Analyst"
goal: "Find accurate information"
backstory: "Expert researcher with years of experience"
"""
)
tasks_yaml = tmp_path / "tasks.yaml"
tasks_yaml.write_text(
"""
research_task:
description: "Research the topic"
expected_output: "A comprehensive report"
"""
)
@CrewBase
class TestCrew:
agents_config = str(agents_yaml)
tasks_config = str(tasks_yaml)
@agent
def researcher(self):
from crewai import Agent
return Agent(config=self.agents_config["researcher"])
@task
def research_task(self):
from crewai import Task
return Task(config=self.tasks_config["research_task"])
crew_instance = TestCrew()
assert isinstance(crew_instance.agents_config, dict)
assert "researcher" in crew_instance.agents_config
assert crew_instance.agents_config["researcher"]["role"] == "Research Analyst"
assert crew_instance.agents_config["researcher"]["goal"] == "Find accurate information"
assert crew_instance.agents_config["researcher"]["backstory"] == "Expert researcher with years of experience"
def test_tasks_config_loads_as_dict(tmp_path: Path) -> None:
"""Test that tasks_config loads as a properly typed dictionary."""
agents_yaml = tmp_path / "agents.yaml"
agents_yaml.write_text(
"""
writer:
role: "Content Writer"
goal: "Write engaging content"
backstory: "Experienced content writer"
"""
)
tasks_yaml = tmp_path / "tasks.yaml"
tasks_yaml.write_text(
"""
writing_task:
description: "Write an article"
expected_output: "A well-written article"
agent: "writer"
"""
)
@CrewBase
class TestCrew:
agents_config = str(agents_yaml)
tasks_config = str(tasks_yaml)
@agent
def writer(self):
from crewai import Agent
return Agent(config=self.agents_config["writer"])
@task
def writing_task(self):
from crewai import Task
return Task(config=self.tasks_config["writing_task"])
crew_instance = TestCrew()
assert isinstance(crew_instance.tasks_config, dict)
assert "writing_task" in crew_instance.tasks_config
assert crew_instance.tasks_config["writing_task"]["description"] == "Write an article"
assert crew_instance.tasks_config["writing_task"]["expected_output"] == "A well-written article"
from crewai import Agent
assert isinstance(crew_instance.tasks_config["writing_task"]["agent"], Agent)
assert crew_instance.tasks_config["writing_task"]["agent"].role == "Content Writer"
def test_empty_config_files_load_as_empty_dicts(tmp_path: Path) -> None:
"""Test that empty config files load as empty dictionaries."""
agents_yaml = tmp_path / "agents.yaml"
agents_yaml.write_text("")
tasks_yaml = tmp_path / "tasks.yaml"
tasks_yaml.write_text("")
@CrewBase
class TestCrew:
agents_config = str(agents_yaml)
tasks_config = str(tasks_yaml)
crew_instance = TestCrew()
assert isinstance(crew_instance.agents_config, dict)
assert isinstance(crew_instance.tasks_config, dict)
assert len(crew_instance.agents_config) == 0
assert len(crew_instance.tasks_config) == 0
def test_missing_config_files_load_as_empty_dicts(tmp_path: Path) -> None:
"""Test that missing config files load as empty dictionaries with warning."""
nonexistent_agents = tmp_path / "nonexistent_agents.yaml"
nonexistent_tasks = tmp_path / "nonexistent_tasks.yaml"
@CrewBase
class TestCrew:
agents_config = str(nonexistent_agents)
tasks_config = str(nonexistent_tasks)
crew_instance = TestCrew()
assert isinstance(crew_instance.agents_config, dict)
assert isinstance(crew_instance.tasks_config, dict)
assert len(crew_instance.agents_config) == 0
assert len(crew_instance.tasks_config) == 0
def test_config_types_are_exported() -> None:
"""Test that AgentConfig, TaskConfig, and type aliases are properly exported."""
from crewai.project import AgentConfig, AgentsConfigDict, TaskConfig, TasksConfigDict
assert AgentConfig is not None
assert TaskConfig is not None
assert AgentsConfigDict is not None
assert TasksConfigDict is not None
def test_agents_config_type_annotation_exists(tmp_path: Path) -> None:
"""Test that agents_config has proper type annotation at runtime."""
agents_yaml = tmp_path / "agents.yaml"
agents_yaml.write_text(
"""
analyst:
role: "Data Analyst"
goal: "Analyze data"
"""
)
tasks_yaml = tmp_path / "tasks.yaml"
tasks_yaml.write_text(
"""
analysis:
description: "Analyze the data"
expected_output: "Analysis report"
"""
)
@CrewBase
class TestCrew:
agents_config = str(agents_yaml)
tasks_config = str(tasks_yaml)
@agent
def analyst(self):
from crewai import Agent
return Agent(config=self.agents_config["analyst"])
@task
def analysis(self):
from crewai import Task
return Task(config=self.tasks_config["analysis"])
crew_instance = TestCrew()
assert hasattr(crew_instance, "agents_config")
assert hasattr(crew_instance, "tasks_config")
assert isinstance(crew_instance.agents_config, dict)
assert isinstance(crew_instance.tasks_config, dict)

View File

@@ -850,6 +850,31 @@ def test_flow_plotting():
assert isinstance(received_events[0].timestamp, datetime)
def test_method_calls_crew_detection():
"""Test that method_calls_crew() detects .crew(), .kickoff(), and .kickoff_async() calls."""
from crewai.flow.visualization_utils import method_calls_crew
from crewai import Agent
# Test with a real Flow that uses agent.kickoff()
class FlowWithAgentKickoff(Flow):
@start()
def run_agent(self):
agent = Agent(role="test", goal="test", backstory="test")
return agent.kickoff("query")
flow = FlowWithAgentKickoff()
assert method_calls_crew(flow.run_agent) is True
# Test with a Flow that has no crew/agent calls
class FlowWithoutCrewCalls(Flow):
@start()
def simple_method(self):
return "Just a regular method"
flow2 = FlowWithoutCrewCalls()
assert method_calls_crew(flow2.simple_method) is False
def test_multiple_routers_from_same_trigger():
"""Test that multiple routers triggered by the same method all activate their listeners."""
execution_order = []

View File

@@ -22,7 +22,7 @@ import pytest
@pytest.fixture(scope="module")
def vcr_config(request) -> dict:
def vcr_config(request: pytest.FixtureRequest) -> dict[str, str]:
return {
"cassette_library_dir": os.path.join(os.path.dirname(__file__), "cassettes"),
}
@@ -65,7 +65,7 @@ class CustomConverter(Converter):
# Fixtures
@pytest.fixture
def mock_agent():
def mock_agent() -> Mock:
agent = Mock()
agent.function_calling_llm = None
agent.llm = Mock()
@@ -73,7 +73,7 @@ def mock_agent():
# Tests for convert_to_model
def test_convert_to_model_with_valid_json():
def test_convert_to_model_with_valid_json() -> None:
result = '{"name": "John", "age": 30}'
output = convert_to_model(result, SimpleModel, None, None)
assert isinstance(output, SimpleModel)
@@ -81,7 +81,7 @@ def test_convert_to_model_with_valid_json():
assert output.age == 30
def test_convert_to_model_with_invalid_json():
def test_convert_to_model_with_invalid_json() -> None:
result = '{"name": "John", "age": "thirty"}'
with patch("crewai.utilities.converter.handle_partial_json") as mock_handle:
mock_handle.return_value = "Fallback result"
@@ -89,13 +89,13 @@ def test_convert_to_model_with_invalid_json():
assert output == "Fallback result"
def test_convert_to_model_with_no_model():
def test_convert_to_model_with_no_model() -> None:
result = "Plain text"
output = convert_to_model(result, None, None, None)
assert output == "Plain text"
def test_convert_to_model_with_special_characters():
def test_convert_to_model_with_special_characters() -> None:
json_string_test = """
{
"responses": [
@@ -114,7 +114,7 @@ def test_convert_to_model_with_special_characters():
)
def test_convert_to_model_with_escaped_special_characters():
def test_convert_to_model_with_escaped_special_characters() -> None:
json_string_test = json.dumps(
{
"responses": [
@@ -133,7 +133,7 @@ def test_convert_to_model_with_escaped_special_characters():
)
def test_convert_to_model_with_multiple_special_characters():
def test_convert_to_model_with_multiple_special_characters() -> None:
json_string_test = """
{
"responses": [
@@ -153,7 +153,7 @@ def test_convert_to_model_with_multiple_special_characters():
# Tests for validate_model
def test_validate_model_pydantic_output():
def test_validate_model_pydantic_output() -> None:
result = '{"name": "Alice", "age": 25}'
output = validate_model(result, SimpleModel, False)
assert isinstance(output, SimpleModel)
@@ -161,7 +161,7 @@ def test_validate_model_pydantic_output():
assert output.age == 25
def test_validate_model_json_output():
def test_validate_model_json_output() -> None:
result = '{"name": "Bob", "age": 40}'
output = validate_model(result, SimpleModel, True)
assert isinstance(output, dict)
@@ -169,7 +169,7 @@ def test_validate_model_json_output():
# Tests for handle_partial_json
def test_handle_partial_json_with_valid_partial():
def test_handle_partial_json_with_valid_partial() -> None:
result = 'Some text {"name": "Charlie", "age": 35} more text'
output = handle_partial_json(result, SimpleModel, False, None)
assert isinstance(output, SimpleModel)
@@ -177,7 +177,7 @@ def test_handle_partial_json_with_valid_partial():
assert output.age == 35
def test_handle_partial_json_with_invalid_partial(mock_agent):
def test_handle_partial_json_with_invalid_partial(mock_agent: Mock) -> None:
result = "No valid JSON here"
with patch("crewai.utilities.converter.convert_with_instructions") as mock_convert:
mock_convert.return_value = "Converted result"
@@ -189,8 +189,8 @@ def test_handle_partial_json_with_invalid_partial(mock_agent):
@patch("crewai.utilities.converter.create_converter")
@patch("crewai.utilities.converter.get_conversion_instructions")
def test_convert_with_instructions_success(
mock_get_instructions, mock_create_converter, mock_agent
):
mock_get_instructions: Mock, mock_create_converter: Mock, mock_agent: Mock
) -> None:
mock_get_instructions.return_value = "Instructions"
mock_converter = Mock()
mock_converter.to_pydantic.return_value = SimpleModel(name="David", age=50)
@@ -207,8 +207,8 @@ def test_convert_with_instructions_success(
@patch("crewai.utilities.converter.create_converter")
@patch("crewai.utilities.converter.get_conversion_instructions")
def test_convert_with_instructions_failure(
mock_get_instructions, mock_create_converter, mock_agent
):
mock_get_instructions: Mock, mock_create_converter: Mock, mock_agent: Mock
) -> None:
mock_get_instructions.return_value = "Instructions"
mock_converter = Mock()
mock_converter.to_pydantic.return_value = ConverterError("Conversion failed")
@@ -222,7 +222,7 @@ def test_convert_with_instructions_failure(
# Tests for get_conversion_instructions
def test_get_conversion_instructions_gpt():
def test_get_conversion_instructions_gpt() -> None:
llm = LLM(model="gpt-4o-mini")
with patch.object(LLM, "supports_function_calling") as supports_function_calling:
supports_function_calling.return_value = True
@@ -237,7 +237,7 @@ def test_get_conversion_instructions_gpt():
assert instructions == expected_instructions
def test_get_conversion_instructions_non_gpt():
def test_get_conversion_instructions_non_gpt() -> None:
llm = LLM(model="ollama/llama3.1", base_url="http://localhost:11434")
with patch.object(LLM, "supports_function_calling", return_value=False):
instructions = get_conversion_instructions(SimpleModel, llm)
@@ -246,17 +246,17 @@ def test_get_conversion_instructions_non_gpt():
# Tests for is_gpt
def test_supports_function_calling_true():
def test_supports_function_calling_true() -> None:
llm = LLM(model="gpt-4o")
assert llm.supports_function_calling() is True
def test_supports_function_calling_false():
def test_supports_function_calling_false() -> None:
llm = LLM(model="non-existent-model", is_litellm=True)
assert llm.supports_function_calling() is False
def test_create_converter_with_mock_agent():
def test_create_converter_with_mock_agent() -> None:
mock_agent = MagicMock()
mock_agent.get_output_converter.return_value = MagicMock(spec=Converter)
@@ -272,7 +272,7 @@ def test_create_converter_with_mock_agent():
mock_agent.get_output_converter.assert_called_once()
def test_create_converter_with_custom_converter():
def test_create_converter_with_custom_converter() -> None:
converter = create_converter(
converter_cls=CustomConverter,
llm=LLM(model="gpt-4o-mini"),
@@ -284,7 +284,7 @@ def test_create_converter_with_custom_converter():
assert isinstance(converter, CustomConverter)
def test_create_converter_fails_without_agent_or_converter_cls():
def test_create_converter_fails_without_agent_or_converter_cls() -> None:
with pytest.raises(
ValueError, match="Either agent or converter_cls must be provided"
):
@@ -293,13 +293,13 @@ def test_create_converter_fails_without_agent_or_converter_cls():
)
def test_generate_model_description_simple_model():
def test_generate_model_description_simple_model() -> None:
description = generate_model_description(SimpleModel)
expected_description = '{\n "name": str,\n "age": int\n}'
assert description == expected_description
def test_generate_model_description_nested_model():
def test_generate_model_description_nested_model() -> None:
description = generate_model_description(NestedModel)
expected_description = (
'{\n "id": int,\n "data": {\n "name": str,\n "age": int\n}\n}'
@@ -307,7 +307,7 @@ def test_generate_model_description_nested_model():
assert description == expected_description
def test_generate_model_description_optional_field():
def test_generate_model_description_optional_field() -> None:
class ModelWithOptionalField(BaseModel):
name: str
age: int | None
@@ -317,7 +317,7 @@ def test_generate_model_description_optional_field():
assert description == expected_description
def test_generate_model_description_list_field():
def test_generate_model_description_list_field() -> None:
class ModelWithListField(BaseModel):
items: list[int]
@@ -326,7 +326,7 @@ def test_generate_model_description_list_field():
assert description == expected_description
def test_generate_model_description_dict_field():
def test_generate_model_description_dict_field() -> None:
class ModelWithDictField(BaseModel):
attributes: dict[str, int]
@@ -336,7 +336,7 @@ def test_generate_model_description_dict_field():
@pytest.mark.vcr(filter_headers=["authorization"])
def test_convert_with_instructions():
def test_convert_with_instructions() -> None:
llm = LLM(model="gpt-4o-mini")
sample_text = "Name: Alice, Age: 30"
@@ -358,7 +358,7 @@ def test_convert_with_instructions():
@pytest.mark.vcr(filter_headers=["authorization"])
def test_converter_with_llama3_2_model():
def test_converter_with_llama3_2_model() -> None:
llm = LLM(model="openrouter/meta-llama/llama-3.2-3b-instruct")
sample_text = "Name: Alice Llama, Age: 30"
instructions = get_conversion_instructions(SimpleModel, llm)
@@ -375,7 +375,7 @@ def test_converter_with_llama3_2_model():
@pytest.mark.vcr(filter_headers=["authorization"])
def test_converter_with_llama3_1_model():
def test_converter_with_llama3_1_model() -> None:
llm = LLM(model="ollama/llama3.1", base_url="http://localhost:11434")
sample_text = "Name: Alice Llama, Age: 30"
instructions = get_conversion_instructions(SimpleModel, llm)
@@ -392,7 +392,7 @@ def test_converter_with_llama3_1_model():
@pytest.mark.vcr(filter_headers=["authorization"])
def test_converter_with_nested_model():
def test_converter_with_nested_model() -> None:
llm = LLM(model="gpt-4o-mini")
sample_text = "Name: John Doe\nAge: 30\nAddress: 123 Main St, Anytown, 12345"
@@ -416,7 +416,7 @@ def test_converter_with_nested_model():
# Tests for error handling
def test_converter_error_handling():
def test_converter_error_handling() -> None:
llm = Mock(spec=LLM)
llm.supports_function_calling.return_value = False
llm.call.return_value = "Invalid JSON"
@@ -437,7 +437,7 @@ def test_converter_error_handling():
# Tests for retry logic
def test_converter_retry_logic():
def test_converter_retry_logic() -> None:
llm = Mock(spec=LLM)
llm.supports_function_calling.return_value = False
llm.call.side_effect = [
@@ -465,7 +465,7 @@ def test_converter_retry_logic():
# Tests for optional fields
def test_converter_with_optional_fields():
def test_converter_with_optional_fields() -> None:
class OptionalModel(BaseModel):
name: str
age: int | None
@@ -492,7 +492,7 @@ def test_converter_with_optional_fields():
# Tests for list fields
def test_converter_with_list_field():
def test_converter_with_list_field() -> None:
class ListModel(BaseModel):
items: list[int]
@@ -515,7 +515,7 @@ def test_converter_with_list_field():
assert output.items == [1, 2, 3]
def test_converter_with_enum():
def test_converter_with_enum() -> None:
class Color(Enum):
RED = "red"
GREEN = "green"
@@ -546,7 +546,7 @@ def test_converter_with_enum():
# Tests for ambiguous input
def test_converter_with_ambiguous_input():
def test_converter_with_ambiguous_input() -> None:
llm = Mock(spec=LLM)
llm.supports_function_calling.return_value = False
llm.call.return_value = '{"name": "Charlie", "age": "Not an age"}'
@@ -567,7 +567,7 @@ def test_converter_with_ambiguous_input():
# Tests for function calling support
def test_converter_with_function_calling():
def test_converter_with_function_calling() -> None:
llm = Mock(spec=LLM)
llm.supports_function_calling.return_value = True
@@ -580,20 +580,359 @@ def test_converter_with_function_calling():
model=SimpleModel,
instructions="Convert this text.",
)
converter._create_instructor = Mock(return_value=instructor)
with patch.object(converter, '_create_instructor', return_value=instructor):
output = converter.to_pydantic()
output = converter.to_pydantic()
assert isinstance(output, SimpleModel)
assert output.name == "Eve"
assert output.age == 35
assert isinstance(output, SimpleModel)
assert output.name == "Eve"
assert output.age == 35
instructor.to_pydantic.assert_called_once()
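The hunk above swaps direct attribute assignment (`converter._create_instructor = Mock(...)`) for `patch.object`. A minimal sketch of why that matters, using toy names (the `Converter` class here is illustrative, not CrewAI's): the context manager restores the original attribute on exit, so one test's mock cannot leak into later tests.

```python
from unittest.mock import patch

class Converter:
    def _create_instructor(self) -> str:
        return "real"

conv = Converter()

# Inside the context manager the method is a mock with the given return value.
with patch.object(conv, "_create_instructor", return_value="mocked"):
    assert conv._create_instructor() == "mocked"

# On exit, patch.object restores the original method automatically.
assert conv._create_instructor() == "real"
```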
def test_generate_model_description_union_field():
def test_generate_model_description_union_field() -> None:
class UnionModel(BaseModel):
field: int | str | None
description = generate_model_description(UnionModel)
expected_description = '{\n "field": int | str | None\n}'
assert description == expected_description
def test_internal_instructor_with_openai_provider() -> None:
"""Test InternalInstructor with OpenAI provider using registry pattern."""
from crewai.utilities.internal_instructor import InternalInstructor
# Mock LLM with OpenAI provider
mock_llm = Mock()
mock_llm.is_litellm = False
mock_llm.model = "gpt-4o"
mock_llm.provider = "openai"
# Mock instructor client
mock_client = Mock()
mock_client.chat.completions.create.return_value = SimpleModel(name="Test", age=25)
# Patch the instructor import at the method level
with patch.object(InternalInstructor, '_create_instructor_client') as mock_create_client:
mock_create_client.return_value = mock_client
instructor = InternalInstructor(
content="Test content",
model=SimpleModel,
llm=mock_llm
)
result = instructor.to_pydantic()
assert isinstance(result, SimpleModel)
assert result.name == "Test"
assert result.age == 25
# Verify the method was called with the correct LLM
mock_create_client.assert_called_once()
def test_internal_instructor_with_anthropic_provider() -> None:
"""Test InternalInstructor with Anthropic provider using registry pattern."""
from crewai.utilities.internal_instructor import InternalInstructor
# Mock LLM with Anthropic provider
mock_llm = Mock()
mock_llm.is_litellm = False
mock_llm.model = "claude-3-5-sonnet-20241022"
mock_llm.provider = "anthropic"
# Mock instructor client
mock_client = Mock()
mock_client.chat.completions.create.return_value = SimpleModel(name="Bob", age=25)
# Patch the instructor import at the method level
with patch.object(InternalInstructor, '_create_instructor_client') as mock_create_client:
mock_create_client.return_value = mock_client
instructor = InternalInstructor(
content="Name: Bob, Age: 25",
model=SimpleModel,
llm=mock_llm
)
result = instructor.to_pydantic()
assert isinstance(result, SimpleModel)
assert result.name == "Bob"
assert result.age == 25
# Verify the method was called with the correct LLM
mock_create_client.assert_called_once()
def test_factory_pattern_registry_extensibility() -> None:
"""Test that the factory pattern registry works with different providers."""
from crewai.utilities.internal_instructor import InternalInstructor
# Test with OpenAI provider
mock_llm_openai = Mock()
mock_llm_openai.is_litellm = False
mock_llm_openai.model = "gpt-4o-mini"
mock_llm_openai.provider = "openai"
mock_client_openai = Mock()
mock_client_openai.chat.completions.create.return_value = SimpleModel(name="Alice", age=30)
with patch.object(InternalInstructor, '_create_instructor_client') as mock_create_client:
mock_create_client.return_value = mock_client_openai
instructor_openai = InternalInstructor(
content="Name: Alice, Age: 30",
model=SimpleModel,
llm=mock_llm_openai
)
result_openai = instructor_openai.to_pydantic()
assert isinstance(result_openai, SimpleModel)
assert result_openai.name == "Alice"
assert result_openai.age == 30
# Test with Anthropic provider
mock_llm_anthropic = Mock()
mock_llm_anthropic.is_litellm = False
mock_llm_anthropic.model = "claude-3-5-sonnet-20241022"
mock_llm_anthropic.provider = "anthropic"
mock_client_anthropic = Mock()
mock_client_anthropic.chat.completions.create.return_value = SimpleModel(name="Bob", age=25)
with patch.object(InternalInstructor, '_create_instructor_client') as mock_create_client:
mock_create_client.return_value = mock_client_anthropic
instructor_anthropic = InternalInstructor(
content="Name: Bob, Age: 25",
model=SimpleModel,
llm=mock_llm_anthropic
)
result_anthropic = instructor_anthropic.to_pydantic()
assert isinstance(result_anthropic, SimpleModel)
assert result_anthropic.name == "Bob"
assert result_anthropic.age == 25
# Test with Bedrock provider
mock_llm_bedrock = Mock()
mock_llm_bedrock.is_litellm = False
mock_llm_bedrock.model = "claude-3-5-sonnet-20241022"
mock_llm_bedrock.provider = "bedrock"
mock_client_bedrock = Mock()
mock_client_bedrock.chat.completions.create.return_value = SimpleModel(name="Charlie", age=35)
with patch.object(InternalInstructor, '_create_instructor_client') as mock_create_client:
mock_create_client.return_value = mock_client_bedrock
instructor_bedrock = InternalInstructor(
content="Name: Charlie, Age: 35",
model=SimpleModel,
llm=mock_llm_bedrock
)
result_bedrock = instructor_bedrock.to_pydantic()
assert isinstance(result_bedrock, SimpleModel)
assert result_bedrock.name == "Charlie"
assert result_bedrock.age == 35
# Test with Google provider
mock_llm_google = Mock()
mock_llm_google.is_litellm = False
mock_llm_google.model = "gemini-1.5-flash"
mock_llm_google.provider = "google"
mock_client_google = Mock()
mock_client_google.chat.completions.create.return_value = SimpleModel(name="Diana", age=28)
with patch.object(InternalInstructor, '_create_instructor_client') as mock_create_client:
mock_create_client.return_value = mock_client_google
instructor_google = InternalInstructor(
content="Name: Diana, Age: 28",
model=SimpleModel,
llm=mock_llm_google
)
result_google = instructor_google.to_pydantic()
assert isinstance(result_google, SimpleModel)
assert result_google.name == "Diana"
assert result_google.age == 28
# Test with Azure provider
mock_llm_azure = Mock()
mock_llm_azure.is_litellm = False
mock_llm_azure.model = "gpt-4o"
mock_llm_azure.provider = "azure"
mock_client_azure = Mock()
mock_client_azure.chat.completions.create.return_value = SimpleModel(name="Eve", age=32)
with patch.object(InternalInstructor, '_create_instructor_client') as mock_create_client:
mock_create_client.return_value = mock_client_azure
instructor_azure = InternalInstructor(
content="Name: Eve, Age: 32",
model=SimpleModel,
llm=mock_llm_azure
)
result_azure = instructor_azure.to_pydantic()
assert isinstance(result_azure, SimpleModel)
assert result_azure.name == "Eve"
assert result_azure.age == 32
def test_internal_instructor_with_bedrock_provider() -> None:
"""Test InternalInstructor with AWS Bedrock provider using registry pattern."""
from crewai.utilities.internal_instructor import InternalInstructor
# Mock LLM with Bedrock provider
mock_llm = Mock()
mock_llm.is_litellm = False
mock_llm.model = "claude-3-5-sonnet-20241022"
mock_llm.provider = "bedrock"
# Mock instructor client
mock_client = Mock()
mock_client.chat.completions.create.return_value = SimpleModel(name="Charlie", age=35)
# Patch the instructor import at the method level
with patch.object(InternalInstructor, '_create_instructor_client') as mock_create_client:
mock_create_client.return_value = mock_client
instructor = InternalInstructor(
content="Name: Charlie, Age: 35",
model=SimpleModel,
llm=mock_llm
)
result = instructor.to_pydantic()
assert isinstance(result, SimpleModel)
assert result.name == "Charlie"
assert result.age == 35
# Verify the method was called with the correct LLM
mock_create_client.assert_called_once()
def test_internal_instructor_with_gemini_provider() -> None:
"""Test InternalInstructor with Google Gemini provider using registry pattern."""
from crewai.utilities.internal_instructor import InternalInstructor
# Mock LLM with Gemini provider
mock_llm = Mock()
mock_llm.is_litellm = False
mock_llm.model = "gemini-1.5-flash"
mock_llm.provider = "google"
# Mock instructor client
mock_client = Mock()
mock_client.chat.completions.create.return_value = SimpleModel(name="Diana", age=28)
# Patch the instructor import at the method level
with patch.object(InternalInstructor, '_create_instructor_client') as mock_create_client:
mock_create_client.return_value = mock_client
instructor = InternalInstructor(
content="Name: Diana, Age: 28",
model=SimpleModel,
llm=mock_llm
)
result = instructor.to_pydantic()
assert isinstance(result, SimpleModel)
assert result.name == "Diana"
assert result.age == 28
# Verify the method was called with the correct LLM
mock_create_client.assert_called_once()
def test_internal_instructor_with_azure_provider() -> None:
"""Test InternalInstructor with Azure OpenAI provider using registry pattern."""
from crewai.utilities.internal_instructor import InternalInstructor
# Mock LLM with Azure provider
mock_llm = Mock()
mock_llm.is_litellm = False
mock_llm.model = "gpt-4o"
mock_llm.provider = "azure"
# Mock instructor client
mock_client = Mock()
mock_client.chat.completions.create.return_value = SimpleModel(name="Eve", age=32)
# Patch the instructor import at the method level
with patch.object(InternalInstructor, '_create_instructor_client') as mock_create_client:
mock_create_client.return_value = mock_client
instructor = InternalInstructor(
content="Name: Eve, Age: 32",
model=SimpleModel,
llm=mock_llm
)
result = instructor.to_pydantic()
assert isinstance(result, SimpleModel)
assert result.name == "Eve"
assert result.age == 32
# Verify the method was called with the correct LLM
mock_create_client.assert_called_once()
def test_internal_instructor_unsupported_provider() -> None:
"""Test InternalInstructor with unsupported provider raises appropriate error."""
from crewai.utilities.internal_instructor import InternalInstructor
# Mock LLM with unsupported provider
mock_llm = Mock()
mock_llm.is_litellm = False
mock_llm.model = "unsupported-model"
mock_llm.provider = "unsupported"
# Mock the _create_instructor_client method to raise an error for unsupported providers
with patch.object(InternalInstructor, '_create_instructor_client') as mock_create_client:
mock_create_client.side_effect = Exception("Unsupported provider: unsupported")
# This should raise an error when trying to create the instructor client
with pytest.raises(Exception) as exc_info:
instructor = InternalInstructor(
content="Test content",
model=SimpleModel,
llm=mock_llm
)
instructor.to_pydantic()
# Verify it's the expected error
assert "Unsupported provider" in str(exc_info.value)
def test_internal_instructor_real_unsupported_provider() -> None:
"""Test InternalInstructor with real unsupported provider using actual instructor library."""
from crewai.utilities.internal_instructor import InternalInstructor
# Mock LLM with unsupported provider that would actually fail with instructor
mock_llm = Mock()
mock_llm.is_litellm = False
mock_llm.model = "unsupported-model"
mock_llm.provider = "unsupported"
# This should raise a ConfigurationError from the real instructor library
with pytest.raises(Exception) as exc_info:
instructor = InternalInstructor(
content="Test content",
model=SimpleModel,
llm=mock_llm
)
instructor.to_pydantic()
# Verify it's a configuration error about unsupported provider
assert "Unsupported provider" in str(exc_info.value) or "unsupported" in str(exc_info.value).lower()
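The provider tests above all exercise the same registry/factory shape. A hedged sketch of that pattern (names like `PROVIDER_REGISTRY` and `create_client` are illustrative, not `InternalInstructor`'s actual internals): a dict maps provider strings to client factories, and an unknown provider fails fast with a clear error, which is what the unsupported-provider tests assert on.

```python
from collections.abc import Callable

def _openai_client() -> str:
    return "openai-client"

def _anthropic_client() -> str:
    return "anthropic-client"

# Registry: adding a provider is one entry, no branching logic to touch.
PROVIDER_REGISTRY: dict[str, Callable[[], str]] = {
    "openai": _openai_client,
    "anthropic": _anthropic_client,
}

def create_client(provider: str) -> str:
    try:
        factory = PROVIDER_REGISTRY[provider]
    except KeyError:
        # Unknown providers raise eagerly with a descriptive message.
        raise ValueError(f"Unsupported provider: {provider}") from None
    return factory()
```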


@@ -1,77 +1,79 @@
import os
from typing import Any
from unittest.mock import patch
from crewai.cli.constants import DEFAULT_LLM_MODEL
from crewai.llm import LLM
from crewai.llms.base_llm import BaseLLM
from crewai.utilities.llm_utils import create_llm
import pytest
try:
from litellm.exceptions import BadRequestError
except ImportError:
BadRequestError = Exception
def test_create_llm_with_llm_instance():
existing_llm = LLM(model="gpt-4o")
llm = create_llm(llm_value=existing_llm)
assert llm is existing_llm
def test_create_llm_with_valid_model_string():
llm = create_llm(llm_value="gpt-4o")
assert isinstance(llm, BaseLLM)
assert llm.model == "gpt-4o"
def test_create_llm_with_invalid_model_string():
# For invalid model strings, create_llm succeeds but call() fails with API error
llm = create_llm(llm_value="invalid-model")
assert llm is not None
assert isinstance(llm, BaseLLM)
# The error should occur when making the actual API call
# We expect some kind of API error (NotFoundError, etc.)
with pytest.raises(Exception): # noqa: B017
llm.call(messages=[{"role": "user", "content": "Hello, world!"}])
def test_create_llm_with_unknown_object_missing_attributes():
class UnknownObject:
pass
unknown_obj = UnknownObject()
llm = create_llm(llm_value=unknown_obj)
# Should succeed because str(unknown_obj) provides a model name
assert llm is not None
assert isinstance(llm, BaseLLM)
def test_create_llm_with_none_uses_default_model():
def test_create_llm_with_llm_instance() -> None:
with patch.dict(os.environ, {"OPENAI_API_KEY": "fake-key"}, clear=True):
with patch("crewai.utilities.llm_utils.DEFAULT_LLM_MODEL", "gpt-4o-mini"):
existing_llm = LLM(model="gpt-4o")
llm = create_llm(llm_value=existing_llm)
assert llm is existing_llm
def test_create_llm_with_valid_model_string() -> None:
with patch.dict(os.environ, {"OPENAI_API_KEY": "fake-key"}, clear=True):
llm = create_llm(llm_value="gpt-4o")
assert isinstance(llm, BaseLLM)
assert llm.model == "gpt-4o"
def test_create_llm_with_invalid_model_string() -> None:
with patch.dict(os.environ, {"OPENAI_API_KEY": "fake-key"}, clear=True):
# For invalid model strings, create_llm succeeds but call() fails with API error
llm = create_llm(llm_value="invalid-model")
assert llm is not None
assert isinstance(llm, BaseLLM)
# The error should occur when making the actual API call
# We expect some kind of API error (NotFoundError, etc.)
with pytest.raises(Exception): # noqa: B017
llm.call(messages=[{"role": "user", "content": "Hello, world!"}])
def test_create_llm_with_unknown_object_missing_attributes() -> None:
with patch.dict(os.environ, {"OPENAI_API_KEY": "fake-key"}, clear=True):
class UnknownObject:
pass
unknown_obj = UnknownObject()
llm = create_llm(llm_value=unknown_obj)
# Should succeed because str(unknown_obj) provides a model name
assert llm is not None
assert isinstance(llm, BaseLLM)
def test_create_llm_with_none_uses_default_model() -> None:
with patch.dict(os.environ, {"OPENAI_API_KEY": "fake-key"}, clear=True):
with patch("crewai.utilities.llm_utils.DEFAULT_LLM_MODEL", DEFAULT_LLM_MODEL):
llm = create_llm(llm_value=None)
assert isinstance(llm, BaseLLM)
assert llm.model == "gpt-4o-mini"
assert llm.model == DEFAULT_LLM_MODEL
def test_create_llm_with_unknown_object():
class UnknownObject:
model_name = "gpt-4o"
temperature = 0.7
max_tokens = 1500
def test_create_llm_with_unknown_object() -> None:
with patch.dict(os.environ, {"OPENAI_API_KEY": "fake-key"}, clear=True):
class UnknownObject:
model_name = "gpt-4o"
temperature = 0.7
max_tokens = 1500
unknown_obj = UnknownObject()
llm = create_llm(llm_value=unknown_obj)
assert isinstance(llm, BaseLLM)
assert llm.model == "gpt-4o"
assert llm.temperature == 0.7
assert llm.max_tokens == 1500
unknown_obj = UnknownObject()
llm = create_llm(llm_value=unknown_obj)
assert isinstance(llm, BaseLLM)
assert llm.model == "gpt-4o"
assert llm.temperature == 0.7
if hasattr(llm, 'max_tokens'):
assert llm.max_tokens == 1500
def test_create_llm_from_env_with_unaccepted_attributes():
def test_create_llm_from_env_with_unaccepted_attributes() -> None:
with patch.dict(
os.environ,
{
@@ -90,25 +92,47 @@ def test_create_llm_from_env_with_unaccepted_attributes():
assert not hasattr(llm, "AWS_REGION_NAME")
def test_create_llm_with_partial_attributes():
class PartialAttributes:
model_name = "gpt-4o"
# temperature is missing
def test_create_llm_with_partial_attributes() -> None:
with patch.dict(os.environ, {"OPENAI_API_KEY": "fake-key"}, clear=True):
class PartialAttributes:
model_name = "gpt-4o"
# temperature is missing
obj = PartialAttributes()
llm = create_llm(llm_value=obj)
assert isinstance(llm, BaseLLM)
assert llm.model == "gpt-4o"
assert llm.temperature is None # Should handle missing attributes gracefully
obj = PartialAttributes()
llm = create_llm(llm_value=obj)
assert isinstance(llm, BaseLLM)
assert llm.model == "gpt-4o"
assert llm.temperature is None # Should handle missing attributes gracefully
def test_create_llm_with_invalid_type():
# For integers, create_llm succeeds because str(42) becomes "42"
llm = create_llm(llm_value=42)
assert llm is not None
assert isinstance(llm, BaseLLM)
assert llm.model == "42"
def test_create_llm_with_invalid_type() -> None:
with patch.dict(os.environ, {"OPENAI_API_KEY": "fake-key"}, clear=True):
# For integers, create_llm succeeds because str(42) becomes "42"
llm = create_llm(llm_value=42)
assert llm is not None
assert isinstance(llm, BaseLLM)
assert llm.model == "42"
# The error should occur when making the actual API call
with pytest.raises(Exception): # noqa: B017
llm.call(messages=[{"role": "user", "content": "Hello, world!"}])
# The error should occur when making the actual API call
with pytest.raises(Exception): # noqa: B017
llm.call(messages=[{"role": "user", "content": "Hello, world!"}])
def test_create_llm_openai_missing_api_key() -> None:
"""Test that create_llm raises error when OpenAI API key is missing"""
with patch.dict(os.environ, {}, clear=True):
with pytest.raises((ValueError, ImportError)) as exc_info:
create_llm(llm_value="gpt-4o")
error_message = str(exc_info.value).lower()
assert "openai_api_key" in error_message or "api_key" in error_message
def test_create_llm_anthropic_missing_dependency() -> None:
"""Test that create_llm raises error when Anthropic dependency is missing"""
with patch.dict(os.environ, {"ANTHROPIC_API_KEY": "fake-key"}, clear=True):
with patch("crewai.llm.LLM.__new__", side_effect=ImportError('Anthropic native provider not available, to install: uv add "crewai[anthropic]"')):
with pytest.raises(ImportError) as exc_info:
create_llm(llm_value="anthropic/claude-3-sonnet")
assert "Anthropic native provider not available, to install: uv add \"crewai[anthropic]\"" in str(exc_info.value)
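Every rewritten test in this file wraps its body in `patch.dict(os.environ, {...}, clear=True)`. A minimal standalone demonstration of that pattern: inside the context manager only the supplied variables exist (`clear=True` wipes the rest), and the original environment is restored on exit, so tests cannot see or leak real API keys. `SENTINEL_VAR` is just a demo variable.

```python
import os
from unittest.mock import patch

os.environ["SENTINEL_VAR"] = "outside"

with patch.dict(os.environ, {"OPENAI_API_KEY": "fake-key"}, clear=True):
    assert os.environ["OPENAI_API_KEY"] == "fake-key"
    # clear=True removed everything else, including our sentinel.
    assert "SENTINEL_VAR" not in os.environ

# On exit, the original environment is restored intact.
assert os.environ["SENTINEL_VAR"] == "outside"
```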


@@ -1,3 +1,3 @@
"""CrewAI development tools."""
__version__ = "1.0.0"
__version__ = "1.2.0"


@@ -124,7 +124,7 @@ exclude = [
"lib/crewai-tools/tests/",
"lib/crewai/src/crewai/experimental/a2a"
]
plugins = ["pydantic.mypy"]
plugins = ["pydantic.mypy", "crewai.mypy"]
[tool.bandit]

uv.lock (generated, 8009 lines changed): diff suppressed because it is too large.