Mirror of https://github.com/crewAIInc/crewAI.git
Synced 2026-01-29 18:18:13 +00:00

Compare commits: bugfix/dro ... ec89e003c8 (7 commits)

| SHA1 |
|---|
| ec89e003c8 |
| 0b0f2d30ab |
| 1df61aba4c |
| da9220fa81 |
| da4f356fab |
| d932b20c6e |
| 2f9a2afd9e |
.github/workflows/linter.yml (vendored): 2 changes

@@ -13,4 +13,4 @@ jobs:
           pip install ruff
 
       - name: Run Ruff Linter
-        run: ruff check --exclude "templates","__init__.py"
+        run: ruff check
@@ -1,9 +1,7 @@
 repos:
   - repo: https://github.com/astral-sh/ruff-pre-commit
-    rev: v0.4.4
+    rev: v0.8.2
     hooks:
       - id: ruff
         args: ["--fix"]
-        exclude: "templates"
       - id: ruff-format
-        exclude: "templates"
.ruff.toml (new file): 9 lines

@@ -0,0 +1,9 @@
+exclude = [
+    "templates",
+    "__init__.py",
+]
+
+[lint]
+select = [
+    "I", # isort rules
+]
@@ -32,7 +32,6 @@ A crew in crewAI represents a collaborative group of agents working together to
 | **Share Crew** _(optional)_ | `share_crew` | Whether you want to share the complete crew information and execution with the crewAI team to make the library better, and allow us to train models. |
 | **Output Log File** _(optional)_ | `output_log_file` | Whether you want to have a file with the complete crew output and execution. You can set it using True and it will default to the folder you are currently in and it will be called logs.txt or passing a string with the full path and name of the file. |
 | **Manager Agent** _(optional)_ | `manager_agent` | `manager` sets a custom agent that will be used as a manager. |
-| **Manager Callbacks** _(optional)_ | `manager_callbacks` | `manager_callbacks` takes a list of callback handlers to be executed by the manager agent when a hierarchical process is used. |
 | **Prompt File** _(optional)_ | `prompt_file` | Path to the prompt JSON file to be used for the crew. |
 | **Planning** *(optional)* | `planning` | Adds planning ability to the Crew. When activated before each Crew iteration, all Crew data is sent to an AgentPlanner that will plan the tasks and this plan will be added to each task description. |
 | **Planning LLM** *(optional)* | `planning_llm` | The language model used by the AgentPlanner in a planning process. |
@@ -29,7 +29,7 @@ Large Language Models (LLMs) are the core intelligence behind CrewAI agents. The
 
 ## Available Models and Their Capabilities
 
-Here's a detailed breakdown of supported models and their capabilities:
+Here's a detailed breakdown of supported models and their capabilities, you can compare performance at [lmarena.ai](https://lmarena.ai/):
 
 <Tabs>
 <Tab title="OpenAI">
@@ -43,6 +43,17 @@ Here's a detailed breakdown of supported models and their capabilities:
 1 token ≈ 4 characters in English. For example, 8,192 tokens ≈ 32,768 characters or about 6,000 words.
 </Note>
 </Tab>
+<Tab title="Gemini">
+| Model | Context Window | Best For |
+|-------|---------------|-----------|
+| Gemini 1.5 Flash | 1M tokens | Balanced multimodal model, good for most tasks |
+| Gemini 1.5 Flash 8B | 1M tokens | Fastest, most cost-efficient, good for high-frequency tasks |
+| Gemini 1.5 Pro | 2M tokens | Best performing, wide variety of reasoning tasks including logical reasoning, coding, and creative collaboration |
+
+<Tip>
+Google's Gemini models are all multimodal, supporting audio, images, video and text, supporting context caching, json schema, function calling, etc.
+</Tip>
+</Tab>
 <Tab title="Groq">
 | Model | Context Window | Best For |
 |-------|---------------|-----------|
@@ -128,10 +139,10 @@ There are three ways to configure LLMs in CrewAI. Choose the method that best fi
 # llm: anthropic/claude-2.1
 # llm: anthropic/claude-2.0
 
-# Google Models - Good for general tasks
-# llm: gemini/gemini-pro
+# Google Models - Strong reasoning, large cachable context window, multimodal
 # llm: gemini/gemini-1.5-pro-latest
-# llm: gemini/gemini-1.0-pro-latest
+# llm: gemini/gemini-1.5-flash-latest
+# llm: gemini/gemini-1.5-flash-8b-latest
 
 # AWS Bedrock Models - Enterprise-grade
 # llm: bedrock/anthropic.claude-3-sonnet-20240229-v1:0
@@ -350,13 +361,18 @@ Learn how to get the most out of your LLM configuration:
 
 <Accordion title="Google">
 ```python Code
+# Option 1. Gemini accessed with an API key.
+# https://ai.google.dev/gemini-api/docs/api-key
 GEMINI_API_KEY=<your-api-key>
+
+# Option 2. Vertex AI IAM credentials for Gemini, Anthropic, and anything in the Model Garden.
+# https://cloud.google.com/vertex-ai/generative-ai/docs/overview
 ```
 
 Example usage:
 ```python Code
 llm = LLM(
-    model="gemini/gemini-pro",
+    model="gemini/gemini-1.5-pro-latest",
     temperature=0.7
 )
 ```
@@ -29,6 +29,7 @@ dependencies = [
     "chromadb>=0.5.18",
     "pdfplumber>=0.11.4",
     "openpyxl>=3.1.5",
+    "blinker>=1.9.0",
 ]
 
 [project.urls]
@@ -53,7 +54,7 @@ mem0 = ["mem0ai>=0.1.29"]
 
 [tool.uv]
 dev-dependencies = [
-    "ruff>=0.4.10",
+    "ruff>=0.8.2",
     "mypy>=1.10.0",
     "pre-commit>=3.6.0",
     "mkdocs>=1.4.3",
@@ -413,7 +413,6 @@ class CrewAgentExecutor(CrewAgentExecutorMixin):
         """
         while self.ask_for_human_input:
             human_feedback = self._ask_human_input(formatted_answer.output)
-            print("Human feedback: ", human_feedback)
 
             if self.crew and self.crew._train:
                 self._handle_crew_training_output(formatted_answer, human_feedback)
@@ -1,5 +1,6 @@
 import re
 from typing import Any, Union
+
 from json_repair import repair_json
 
 from crewai.utilities import I18N
@@ -5,9 +5,10 @@ from typing import Any, Dict
 import requests
 from rich.console import Console
 
+from crewai.cli.tools.main import ToolCommand
+
 from .constants import AUTH0_AUDIENCE, AUTH0_CLIENT_ID, AUTH0_DOMAIN
 from .utils import TokenManager, validate_token
-from crewai.cli.tools.main import ToolCommand
 
 console = Console()
 
@@ -79,7 +80,9 @@ class AuthenticationCommand:
                 style="yellow",
             )
 
-            console.print("\n[bold green]Welcome to CrewAI Enterprise![/bold green]\n")
+            console.print(
+                "\n[bold green]Welcome to CrewAI Enterprise![/bold green]\n"
+            )
             return
 
         if token_data["error"] not in ("authorization_pending", "slow_down"):
@@ -1,10 +1,9 @@
 from .utils import TokenManager
 
 
 def get_auth_token() -> str:
     """Get the authentication token."""
     access_token = TokenManager().get_token()
     if not access_token:
         raise Exception()
     return access_token
-
@@ -1,8 +1,9 @@
 import requests
 from requests.exceptions import JSONDecodeError
 from rich.console import Console
-from crewai.cli.plus_api import PlusAPI
+
 from crewai.cli.authentication.token import get_auth_token
+from crewai.cli.plus_api import PlusAPI
 from crewai.telemetry.telemetry import Telemetry
 
 console = Console()
@@ -1,13 +1,19 @@
 import json
 from pathlib import Path
-from pydantic import BaseModel, Field
 from typing import Optional
 
+from pydantic import BaseModel, Field
+
 DEFAULT_CONFIG_PATH = Path.home() / ".config" / "crewai" / "settings.json"
 
+
 class Settings(BaseModel):
-    tool_repository_username: Optional[str] = Field(None, description="Username for interacting with the Tool Repository")
-    tool_repository_password: Optional[str] = Field(None, description="Password for interacting with the Tool Repository")
+    tool_repository_username: Optional[str] = Field(
+        None, description="Username for interacting with the Tool Repository"
+    )
+    tool_repository_password: Optional[str] = Field(
+        None, description="Password for interacting with the Tool Repository"
+    )
     config_path: Path = Field(default=DEFAULT_CONFIG_PATH, exclude=True)
 
     def __init__(self, config_path: Path = DEFAULT_CONFIG_PATH, **data):
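For reference, a minimal usage sketch of the reformatted Settings model, mirroring the CLI config tests further down in this comparison; the temporary path below is hypothetical, and only the fields shown in this diff are used:

```python
from pathlib import Path

from crewai.cli.config import Settings

# Hypothetical path used only for this example; the default is
# ~/.config/crewai/settings.json (DEFAULT_CONFIG_PATH above).
settings = Settings(
    config_path=Path("/tmp/crewai-settings.json"),
    tool_repository_username="user1",
)
settings.dump()  # writes the settings as JSON to config_path (see the tests below)
print(settings.tool_repository_password)  # None until explicitly set
```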
@@ -1,9 +1,11 @@
-from typing import Optional
-import requests
 from os import getenv
-from crewai.cli.version import get_crewai_version
+from typing import Optional
 from urllib.parse import urljoin
 
+import requests
+
+from crewai.cli.version import get_crewai_version
+
 
 class PlusAPI:
     """
@@ -1,11 +1,12 @@
 import subprocess
 
 import click
 
+from crewai.knowledge.storage.knowledge_storage import KnowledgeStorage
 from crewai.memory.entity.entity_memory import EntityMemory
 from crewai.memory.long_term.long_term_memory import LongTermMemory
 from crewai.memory.short_term.short_term_memory import ShortTermMemory
 from crewai.utilities.task_output_storage_handler import TaskOutputStorageHandler
-from crewai.knowledge.storage.knowledge_storage import KnowledgeStorage
 
 
 def reset_memories_command(
@@ -117,7 +117,7 @@ class ToolCommand(BaseCommand, PlusAPIMixin):
 
         published_handle = publish_response.json()["handle"]
         console.print(
-            f"Succesfully published {published_handle} ({project_version}).\nInstall it in other projects with crewai tool install {published_handle}",
+            f"Successfully published {published_handle} ({project_version}).\nInstall it in other projects with crewai tool install {published_handle}",
             style="bold green",
         )
 
@@ -138,7 +138,7 @@ class ToolCommand(BaseCommand, PlusAPIMixin):
 
         self._add_package(get_response.json())
 
-        console.print(f"Succesfully installed {handle}", style="bold green")
+        console.print(f"Successfully installed {handle}", style="bold green")
 
     def login(self):
         login_response = self.plus_api_client.login_to_tool_repository()
@@ -1,6 +1,6 @@
 import importlib.metadata
 
+
 def get_crewai_version() -> str:
     """Get the version number of CrewAI running the CLI"""
     return importlib.metadata.version("crewai")
-
@@ -1032,6 +1032,7 @@ class Crew(BaseModel):
             agentops.end_session(
                 end_state="Success",
                 end_state_reason="Finished Execution",
+                is_auto_end=True,
             )
         self._telemetry.end_crew(self, final_string_output)
 
@@ -14,8 +14,15 @@ from typing import (
     cast,
 )
 
+from blinker import Signal
 from pydantic import BaseModel, ValidationError
 
+from crewai.flow.flow_events import (
+    FlowFinishedEvent,
+    FlowStartedEvent,
+    MethodExecutionFinishedEvent,
+    MethodExecutionStartedEvent,
+)
 from crewai.flow.flow_visualizer import plot_flow
 from crewai.flow.utils import get_possible_return_constants
 from crewai.telemetry import Telemetry
@@ -159,6 +166,7 @@ class Flow(Generic[T], metaclass=FlowMeta):
     _routers: Dict[str, str] = {}
     _router_paths: Dict[str, List[str]] = {}
     initial_state: Union[Type[T], T, None] = None
+    event_emitter = Signal("event_emitter")
 
     def __class_getitem__(cls: Type["Flow"], item: Type[T]) -> Type["Flow"]:
         class _FlowGeneric(cls):  # type: ignore
@@ -253,6 +261,14 @@ class Flow(Generic[T], metaclass=FlowMeta):
         Returns:
             The final output from the flow execution.
         """
+        self.event_emitter.send(
+            self,
+            event=FlowStartedEvent(
+                type="flow_started",
+                flow_name=self.__class__.__name__,
+            ),
+        )
+
         if inputs is not None:
             self._initialize_state(inputs)
         return asyncio.run(self.kickoff_async())
@@ -267,8 +283,6 @@ class Flow(Generic[T], metaclass=FlowMeta):
         Returns:
             The final output from the flow execution.
         """
-        if inputs is not None:
-            self._initialize_state(inputs)
         if not self._start_methods:
             raise ValueError("No start method defined")
 
@@ -285,11 +299,19 @@ class Flow(Generic[T], metaclass=FlowMeta):
         # Run all start methods concurrently
         await asyncio.gather(*tasks)
 
-        # Return the final output (from the last executed method)
-        if self._method_outputs:
-            return self._method_outputs[-1]
-        else:
-            return None  # Or raise an exception if no methods were executed
+        # Determine the final output (from the last executed method)
+        final_output = self._method_outputs[-1] if self._method_outputs else None
+
+        self.event_emitter.send(
+            self,
+            event=FlowFinishedEvent(
+                type="flow_finished",
+                flow_name=self.__class__.__name__,
+                result=final_output,
+            ),
+        )
+
+        return final_output
 
     async def _execute_start_method(self, start_method_name: str) -> None:
         result = await self._execute_method(
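To make the new return path concrete, here is a minimal sketch of a flow from the caller's side, assuming the public Flow, start, and listen helpers that the flow tests in this comparison import; per the hunk above, kickoff() now emits FlowStartedEvent, runs the start and listener methods, emits FlowFinishedEvent carrying the last method's output, and returns that output:

```python
from crewai.flow.flow import Flow, listen, start


class GreetFlow(Flow):
    @start()
    def begin(self):
        return "hello"

    @listen(begin)
    def shout(self, result):
        # Receives the previous method's output, as _execute_single_listener passes it in.
        return result.upper()


final = GreetFlow().kickoff()
print(final)  # "HELLO", the last executed method's return value
```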
@@ -352,6 +374,16 @@ class Flow(Generic[T], metaclass=FlowMeta):
     async def _execute_single_listener(self, listener_name: str, result: Any) -> None:
         try:
             method = self._methods[listener_name]
+
+            self.event_emitter.send(
+                self,
+                event=MethodExecutionStartedEvent(
+                    type="method_execution_started",
+                    method_name=listener_name,
+                    flow_name=self.__class__.__name__,
+                ),
+            )
+
             sig = inspect.signature(method)
             params = list(sig.parameters.values())
 
@@ -367,6 +399,15 @@ class Flow(Generic[T], metaclass=FlowMeta):
                 # If listener does not expect parameters, call without arguments
                 listener_result = await self._execute_method(listener_name, method)
 
+            self.event_emitter.send(
+                self,
+                event=MethodExecutionFinishedEvent(
+                    type="method_execution_finished",
+                    method_name=listener_name,
+                    flow_name=self.__class__.__name__,
+                ),
+            )
+
             # Execute listeners of this listener
             await self._execute_listeners(listener_name, listener_result)
         except Exception as e:
src/crewai/flow/flow_events.py (new file): 33 lines

@@ -0,0 +1,33 @@
+from dataclasses import dataclass, field
+from datetime import datetime
+from typing import Any, Optional
+
+
+@dataclass
+class Event:
+    type: str
+    flow_name: str
+    timestamp: datetime = field(init=False)
+
+    def __post_init__(self):
+        self.timestamp = datetime.now()
+
+
+@dataclass
+class FlowStartedEvent(Event):
+    pass
+
+
+@dataclass
+class MethodExecutionStartedEvent(Event):
+    method_name: str
+
+
+@dataclass
+class MethodExecutionFinishedEvent(Event):
+    method_name: str
+
+
+@dataclass
+class FlowFinishedEvent(Event):
+    result: Optional[Any] = None
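A minimal sketch of observing these events from user code, assuming the blinker-based event_emitter added to Flow above; blinker delivers the sender plus the keyword arguments passed to send(), so the handler receives the event object as a keyword argument:

```python
from crewai.flow.flow import Flow
from crewai.flow.flow_events import Event


def on_flow_event(sender: Flow, event: Event, **kwargs) -> None:
    # event.type is one of "flow_started", "method_execution_started",
    # "method_execution_finished", or "flow_finished".
    print(f"[{event.timestamp:%H:%M:%S}] {event.flow_name}: {event.type}")


# Connect once; every Flow instance shares the class-level Signal.
Flow.event_emitter.connect(on_flow_event)
```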
@@ -38,7 +38,7 @@ class BaseFileKnowledgeSource(BaseKnowledgeSource, ABC):
         if not path.exists():
             self._logger.log(
                 "error",
-                f"File not found: {path}. Try adding sources to the knowledge directory. If its inside the knowledge directory, use the relative path.",
+                f"File not found: {path}. Try adding sources to the knowledge directory. If it's inside the knowledge directory, use the relative path.",
                 color="red",
             )
             raise FileNotFoundError(f"File not found: {path}")
@@ -1,5 +1,5 @@
 from abc import ABC, abstractmethod
-from typing import Dict, Any, List, Optional
+from typing import Any, Dict, List, Optional
 
 
 class BaseKnowledgeStorage(ABC):
@@ -14,9 +14,9 @@ from chromadb.config import Settings
 
 from crewai.knowledge.storage.base_knowledge_storage import BaseKnowledgeStorage
 from crewai.utilities import EmbeddingConfigurator
+from crewai.utilities.constants import KNOWLEDGE_DIRECTORY
 from crewai.utilities.logger import Logger
 from crewai.utilities.paths import db_storage_path
-from crewai.utilities.constants import KNOWLEDGE_DIRECTORY
 
 
 @contextlib.contextmanager
@@ -43,6 +43,10 @@ LLM_CONTEXT_WINDOW_SIZES = {
     "gpt-4-turbo": 128000,
     "o1-preview": 128000,
     "o1-mini": 128000,
+    # gemini
+    "gemini-1.5-pro": 2097152,
+    "gemini-1.5-flash": 1048576,
+    "gemini-1.5-flash-8b": 1048576,
     # deepseek
     "deepseek-chat": 128000,
     # groq
@@ -61,6 +65,9 @@ LLM_CONTEXT_WINDOW_SIZES = {
     "mixtral-8x7b-32768": 32768,
 }
 
+DEFAULT_CONTEXT_WINDOW_SIZE = 8192
+CONTEXT_WINDOW_USAGE_RATIO = 0.75
+
 
 @contextmanager
 def suppress_warnings():
@@ -124,6 +131,7 @@ class LLM:
         self.api_version = api_version
         self.api_key = api_key
         self.callbacks = callbacks
+        self.context_window_size = 0
         self.kwargs = kwargs
 
         litellm.drop_params = True
@@ -191,7 +199,16 @@ class LLM:
 
     def get_context_window_size(self) -> int:
         # Only using 75% of the context window size to avoid cutting the message in the middle
-        return int(LLM_CONTEXT_WINDOW_SIZES.get(self.model, 8192) * 0.75)
+        if self.context_window_size != 0:
+            return self.context_window_size
+
+        self.context_window_size = int(
+            DEFAULT_CONTEXT_WINDOW_SIZE * CONTEXT_WINDOW_USAGE_RATIO
+        )
+        for key, value in LLM_CONTEXT_WINDOW_SIZES.items():
+            if self.model.startswith(key):
+                self.context_window_size = int(value * CONTEXT_WINDOW_USAGE_RATIO)
+        return self.context_window_size
 
     def set_callbacks(self, callbacks: List[Any]):
         callback_types = [type(callback) for callback in callbacks]
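Restating the new lookup outside the class for illustration (the constants are copied from this diff; usable_context_window is a hypothetical helper name, not part of the crewai API, and the real method additionally caches the result on the instance): the default is 75% of 8192 tokens, and the last dictionary key that prefix-matches the model name wins.

```python
LLM_CONTEXT_WINDOW_SIZES = {
    "gemini-1.5-pro": 2097152,
    "gemini-1.5-flash": 1048576,
    "gemini-1.5-flash-8b": 1048576,
}
DEFAULT_CONTEXT_WINDOW_SIZE = 8192
CONTEXT_WINDOW_USAGE_RATIO = 0.75


def usable_context_window(model: str) -> int:
    # Fall back to 75% of the default window, then let any matching prefix
    # (the last one in insertion order) override it.
    size = int(DEFAULT_CONTEXT_WINDOW_SIZE * CONTEXT_WINDOW_USAGE_RATIO)
    for key, value in LLM_CONTEXT_WINDOW_SIZES.items():
        if model.startswith(key):
            size = int(value * CONTEXT_WINDOW_USAGE_RATIO)
    return size


print(usable_context_window("gemini-1.5-flash-8b-latest"))  # 786432
print(usable_context_window("some-unknown-model"))          # 6144
```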
@@ -1,4 +1,4 @@
-from typing import Optional, Dict, Any
+from typing import Any, Dict, Optional
 
 from crewai.memory import EntityMemory, LongTermMemory, ShortTermMemory, UserMemory
 
@@ -1,4 +1,4 @@
-from typing import Any, Dict, Optional, List
+from typing import Any, Dict, List, Optional
 
 from crewai.memory.storage.rag_storage import RAGStorage
 
@@ -1,4 +1,5 @@
 from typing import Any, Dict, Optional
+
 from crewai.memory.memory import Memory
 from crewai.memory.short_term.short_term_memory_item import ShortTermMemoryItem
 from crewai.memory.storage.rag_storage import RAGStorage
@@ -32,7 +33,10 @@ class ShortTermMemory(Memory):
             storage
             if storage
             else RAGStorage(
-                type="short_term", embedder_config=embedder_config, crew=crew, path=path
+                type="short_term",
+                embedder_config=embedder_config,
+                crew=crew,
+                path=path,
             )
         )
         super().__init__(storage)
@@ -2,6 +2,7 @@ import os
 from typing import Any, Dict, List
 
 from mem0 import MemoryClient
+
 from crewai.memory.storage.interface import Storage
 
 
@@ -1,5 +1,6 @@
 from functools import wraps
 
+
 def memoize(func):
     cache = {}
 
@@ -1,11 +1,11 @@
 import datetime
 import json
-from pathlib import Path
 import threading
 import uuid
 from concurrent.futures import Future
 from copy import copy
 from hashlib import md5
+from pathlib import Path
 from typing import Any, Dict, List, Optional, Set, Tuple, Type, Union
 
 from opentelemetry.trace import Span
@@ -21,7 +21,9 @@ with suppress_warnings():
 
 
 from opentelemetry import trace  # noqa: E402
-from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter  # noqa: E402
+from opentelemetry.exporter.otlp.proto.http.trace_exporter import (
+    OTLPSpanExporter,  # noqa: E402
+)
 from opentelemetry.sdk.resources import SERVICE_NAME, Resource  # noqa: E402
 from opentelemetry.sdk.trace import TracerProvider  # noqa: E402
 from opentelemetry.sdk.trace.export import BatchSpanProcessor  # noqa: E402
@@ -1,9 +1,9 @@
-from crewai.tools.base_tool import BaseTool
 from crewai.agents.agent_builder.base_agent import BaseAgent
+from crewai.tools.base_tool import BaseTool
 from crewai.utilities import I18N
 
-from .delegate_work_tool import DelegateWorkTool
 from .ask_question_tool import AskQuestionTool
+from .delegate_work_tool import DelegateWorkTool
 
 
 class AgentTools:
@@ -1,7 +1,9 @@
-from crewai.tools.agent_tools.base_agent_tools import BaseAgentTool
 from typing import Optional
 
 from pydantic import BaseModel, Field
 
+from crewai.tools.agent_tools.base_agent_tools import BaseAgentTool
+
 
 class AskQuestionToolSchema(BaseModel):
     question: str = Field(..., description="The question to ask")
@@ -1,9 +1,10 @@
 from typing import Optional, Union
 
 from pydantic import Field
 
-from crewai.tools.base_tool import BaseTool
 from crewai.agents.agent_builder.base_agent import BaseAgent
 from crewai.task import Task
+from crewai.tools.base_tool import BaseTool
 from crewai.utilities import I18N
 
 
@@ -44,14 +45,14 @@ class BaseAgentTool(BaseTool):
                 if available_agent.role.casefold().replace("\n", "") == agent_name
             ]
         except Exception as _:
-            return self.i18n.errors("agent_tool_unexsiting_coworker").format(
+            return self.i18n.errors("agent_tool_unexisting_coworker").format(
                 coworkers="\n".join(
                     [f"- {agent.role.casefold()}" for agent in self.agents]
                 )
             )
 
         if not agent:
-            return self.i18n.errors("agent_tool_unexsiting_coworker").format(
+            return self.i18n.errors("agent_tool_unexisting_coworker").format(
                 coworkers="\n".join(
                     [f"- {agent.role.casefold()}" for agent in self.agents]
                 )
@@ -1,8 +1,9 @@
-from crewai.tools.agent_tools.base_agent_tools import BaseAgentTool
 from typing import Optional
 
 from pydantic import BaseModel, Field
 
+from crewai.tools.agent_tools.base_agent_tools import BaseAgentTool
+
 
 class DelegateWorkToolSchema(BaseModel):
     task: str = Field(..., description="The task to delegate")
@@ -1,6 +1,7 @@
-from typing import Any, Dict
-from pydantic import BaseModel
 from datetime import datetime
+from typing import Any, Dict
+
+from pydantic import BaseModel
 
 
 class ToolUsageEvent(BaseModel):
@@ -28,7 +28,7 @@
   "errors": {
     "force_final_answer_error": "You can't keep going, this was the best you could do.\n {formatted_answer.text}",
     "force_final_answer": "Now it's time you MUST give your absolute best final answer. You'll ignore all previous instructions, stop using any tools, and just return your absolute BEST Final answer.",
-    "agent_tool_unexsiting_coworker": "\nError executing tool. coworker mentioned not found, it must be one of the following options:\n{coworkers}\n",
+    "agent_tool_unexisting_coworker": "\nError executing tool. coworker mentioned not found, it must be one of the following options:\n{coworkers}\n",
    "task_repeated_usage": "I tried reusing the same input, I must stop using this action input. I'll try something else instead.\n\n",
    "tool_usage_error": "I encountered an error: {error}",
    "tool_arguments_error": "Error: the Action Input is not a valid key, value dictionary.",
@@ -1,8 +1,9 @@
-from datetime import datetime, date
 import json
-from uuid import UUID
-from pydantic import BaseModel
+from datetime import date, datetime
 from decimal import Decimal
+from uuid import UUID
+
+from pydantic import BaseModel
 
 
 class CrewJSONEncoder(json.JSONEncoder):
@@ -1,10 +1,11 @@
 import json
-import regex
 from typing import Any, Type
 
-from crewai.agents.parser import OutputParserException
+import regex
 from pydantic import BaseModel, ValidationError
 
+from crewai.agents.parser import OutputParserException
+
 
 class CrewPydanticOutputParser:
     """Parses the text into pydantic models"""
@@ -1,6 +1,7 @@
 import os
 from typing import Any, Dict, cast
-from chromadb import EmbeddingFunction, Documents, Embeddings
+
+from chromadb import Documents, EmbeddingFunction, Embeddings
 from chromadb.api.types import validate_embedding_function
 
 
@@ -1,13 +1,14 @@
 from collections import defaultdict
 
+from pydantic import BaseModel, Field
+from rich.box import HEAVY_EDGE
+from rich.console import Console
+from rich.table import Table
+
 from crewai.agent import Agent
 from crewai.task import Task
 from crewai.tasks.task_output import TaskOutput
 from crewai.telemetry import Telemetry
-from pydantic import BaseModel, Field
-from rich.box import HEAVY_EDGE
-from rich.console import Console
-from rich.table import Table
 
 
 class TaskEvaluationPydanticOutput(BaseModel):
@@ -1,7 +1,7 @@
-from typing import Any, Callable, Generic, List, Dict, Type, TypeVar
 from functools import wraps
-from pydantic import BaseModel
+from typing import Any, Callable, Dict, Generic, List, Type, TypeVar
+
+from pydantic import BaseModel
 
 T = TypeVar("T")
 EVT = TypeVar("EVT", bound=BaseModel)
@@ -1,4 +1,5 @@
 from typing import Any, List, Optional
+
 from pydantic import BaseModel, Field
 
 from crewai.agent import Agent
@@ -1,5 +1,7 @@
-from pydantic import BaseModel, Field
 from typing import Any, Optional
 
+from pydantic import BaseModel, Field
+
 from crewai.utilities import I18N
 
+
@@ -1,4 +1,4 @@
-from typing import Type, get_args, get_origin, Union
+from typing import Type, Union, get_args, get_origin
 
 from pydantic import BaseModel
 
@@ -1,6 +1,8 @@
-from pydantic import BaseModel, Field
 from datetime import datetime
-from typing import Dict, Any, Optional, List
+from typing import Any, Dict, List, Optional
+
+from pydantic import BaseModel, Field
 
 from crewai.memory.storage.kickoff_task_outputs_storage import (
     KickoffTaskOutputsSQLiteStorage,
 )
@@ -1,5 +1,6 @@
 from litellm.integrations.custom_logger import CustomLogger
 from litellm.types.utils import Usage
+
 from crewai.agents.agent_builder.utilities.base_token_process import TokenProcess
 
 
@@ -11,7 +12,7 @@ class TokenCalcHandler(CustomLogger):
         if self.token_cost_process is None:
             return
 
-        usage : Usage = response_obj["usage"]
+        usage: Usage = response_obj["usage"]
         self.token_cost_process.sum_successful_requests(1)
         self.token_cost_process.sum_prompt_tokens(usage.prompt_tokens)
         self.token_cost_process.sum_completion_tokens(usage.completion_tokens)
@@ -1,9 +1,10 @@
 import hashlib
 from typing import Any, List, Optional
 
+from pydantic import BaseModel
+
 from crewai.agents.agent_builder.base_agent import BaseAgent
 from crewai.tools.base_tool import BaseTool
-from pydantic import BaseModel
 
 
 class TestAgent(BaseAgent):
@@ -1,10 +1,11 @@
 import pytest
-from crewai.agents.parser import CrewAgentParser
+
 from crewai.agents.crew_agent_executor import (
     AgentAction,
     AgentFinish,
     OutputParserException,
 )
+from crewai.agents.parser import CrewAgentParser
 
 
 @pytest.fixture
@@ -2,6 +2,7 @@ import unittest
 from unittest.mock import MagicMock, patch
 
 import requests
+
 from crewai.cli.authentication.main import AuthenticationCommand
 
 
@@ -47,7 +48,9 @@ class TestAuthenticationCommand(unittest.TestCase):
|
|||||||
@patch("crewai.cli.authentication.main.requests.post")
|
@patch("crewai.cli.authentication.main.requests.post")
|
||||||
@patch("crewai.cli.authentication.main.validate_token")
|
@patch("crewai.cli.authentication.main.validate_token")
|
||||||
@patch("crewai.cli.authentication.main.console.print")
|
@patch("crewai.cli.authentication.main.console.print")
|
||||||
def test_poll_for_token_success(self, mock_print, mock_validate_token, mock_post, mock_tool):
|
def test_poll_for_token_success(
|
||||||
|
self, mock_print, mock_validate_token, mock_post, mock_tool
|
||||||
|
):
|
||||||
mock_response = MagicMock()
|
mock_response = MagicMock()
|
||||||
mock_response.status_code = 200
|
mock_response.status_code = 200
|
||||||
mock_response.json.return_value = {
|
mock_response.json.return_value = {
|
||||||
@@ -62,7 +65,9 @@ class TestAuthenticationCommand(unittest.TestCase):
         self.auth_command._poll_for_token({"device_code": "123456"})
 
         mock_validate_token.assert_called_once_with("TOKEN")
-        mock_print.assert_called_once_with("\n[bold green]Welcome to CrewAI Enterprise![/bold green]\n")
+        mock_print.assert_called_once_with(
+            "\n[bold green]Welcome to CrewAI Enterprise![/bold green]\n"
+        )
 
     @patch("crewai.cli.authentication.main.requests.post")
     @patch("crewai.cli.authentication.main.console.print")
@@ -3,9 +3,10 @@ import unittest
 from datetime import datetime, timedelta
 from unittest.mock import MagicMock, patch
 
-from crewai.cli.authentication.utils import TokenManager, validate_token
 from cryptography.fernet import Fernet
 
+from crewai.cli.authentication.utils import TokenManager, validate_token
+
 
 class TestValidateToken(unittest.TestCase):
     @patch("crewai.cli.authentication.utils.AsymmetricSignatureVerifier")
@@ -1,10 +1,12 @@
-import unittest
 import json
-import tempfile
 import shutil
+import tempfile
+import unittest
 from pathlib import Path
 
 from crewai.cli.config import Settings
 
 
 class TestSettings(unittest.TestCase):
     def setUp(self):
         self.test_dir = Path(tempfile.mkdtemp())
@@ -20,8 +22,7 @@ class TestSettings(unittest.TestCase):
 
     def test_initialization_with_data(self):
         settings = Settings(
-            config_path=self.config_path,
-            tool_repository_username="user1"
+            config_path=self.config_path, tool_repository_username="user1"
         )
         self.assertEqual(settings.tool_repository_username, "user1")
         self.assertIsNone(settings.tool_repository_password)
@@ -37,22 +38,23 @@ class TestSettings(unittest.TestCase):
     def test_merge_file_and_input_data(self):
         self.config_path.parent.mkdir(parents=True, exist_ok=True)
         with self.config_path.open("w") as f:
-            json.dump({
-                "tool_repository_username": "file_user",
-                "tool_repository_password": "file_pass"
-            }, f)
+            json.dump(
+                {
+                    "tool_repository_username": "file_user",
+                    "tool_repository_password": "file_pass",
+                },
+                f,
+            )
 
         settings = Settings(
-            config_path=self.config_path,
-            tool_repository_username="new_user"
+            config_path=self.config_path, tool_repository_username="new_user"
         )
         self.assertEqual(settings.tool_repository_username, "new_user")
         self.assertEqual(settings.tool_repository_password, "file_pass")
 
     def test_dump_new_settings(self):
         settings = Settings(
-            config_path=self.config_path,
-            tool_repository_username="user1"
+            config_path=self.config_path, tool_repository_username="user1"
         )
         settings.dump()
 
@@ -67,8 +69,7 @@ class TestSettings(unittest.TestCase):
             json.dump({"existing_setting": "value"}, f)
 
         settings = Settings(
-            config_path=self.config_path,
-            tool_repository_username="user1"
+            config_path=self.config_path, tool_repository_username="user1"
         )
         settings.dump()
 
@@ -79,10 +80,7 @@ class TestSettings(unittest.TestCase):
         self.assertEqual(saved_data["tool_repository_username"], "user1")
 
     def test_none_values(self):
-        settings = Settings(
-            config_path=self.config_path,
-            tool_repository_username=None
-        )
+        settings = Settings(config_path=self.config_path, tool_repository_username=None)
         settings.dump()
 
         with self.config_path.open("r") as f:
@@ -1,6 +1,7 @@
-from crewai.cli.git import Repository
 import pytest
 
+from crewai.cli.git import Repository
+
 
 @pytest.fixture()
 def repository(fp):
@@ -1,6 +1,7 @@
 import os
 import unittest
 from unittest.mock import MagicMock, patch
 
 from crewai.cli.plus_api import PlusAPI
 
+
@@ -1,7 +1,9 @@
-import pytest
+import os
 import shutil
 import tempfile
-import os
+
+import pytest
 
 from crewai.cli import utils
 
@@ -85,7 +85,7 @@ def test_install_success(mock_get, mock_subprocess_run):
         env=unittest.mock.ANY
     )
 
-    assert "Succesfully installed sample-tool" in output
+    assert "Successfully installed sample-tool" in output
 
 
 @patch("crewai.cli.plus_api.PlusAPI.get_tool")
@@ -3,6 +3,7 @@
 import asyncio
 
 import pytest
+
 from crewai.flow.flow import Flow, and_, listen, or_, router, start
 
 
@@ -3,6 +3,8 @@
 from pathlib import Path
 from unittest.mock import patch
 
+import pytest
+
 from crewai.knowledge.source.csv_knowledge_source import CSVKnowledgeSource
 from crewai.knowledge.source.excel_knowledge_source import ExcelKnowledgeSource
 from crewai.knowledge.source.json_knowledge_source import JSONKnowledgeSource
@@ -10,8 +12,6 @@ from crewai.knowledge.source.pdf_knowledge_source import PDFKnowledgeSource
 from crewai.knowledge.source.string_knowledge_source import StringKnowledgeSource
 from crewai.knowledge.source.text_file_knowledge_source import TextFileKnowledgeSource
 
-import pytest
-
 
 @pytest.fixture(autouse=True)
 def mock_vector_db():
@@ -1,5 +1,7 @@
-import pytest
 from unittest.mock import patch
 
+import pytest
+
 from crewai.agent import Agent
 from crewai.crew import Crew
 from crewai.memory.short_term.short_term_memory import ShortTermMemory
@@ -6,12 +6,13 @@ import os
 from unittest.mock import MagicMock, patch
 
 import pytest
+from pydantic import BaseModel
+from pydantic_core import ValidationError
 
 from crewai import Agent, Crew, Process, Task
 from crewai.tasks.conditional_task import ConditionalTask
 from crewai.tasks.task_output import TaskOutput
 from crewai.utilities.converter import Converter
-from pydantic import BaseModel
-from pydantic_core import ValidationError
 
 
 def test_task_tool_reflect_agent_tools():
@@ -6,8 +6,8 @@ import pytest
 from pydantic import BaseModel, Field
 
 from crewai import Agent, Task
-from crewai.tools.tool_usage import ToolUsage
 from crewai.tools import BaseTool
+from crewai.tools.tool_usage import ToolUsage
 
 
 class RandomNumberToolInput(BaseModel):
@@ -1,6 +1,7 @@
 from unittest import mock
 
 import pytest
+
 from crewai.agent import Agent
 from crewai.crew import Crew
 from crewai.task import Task
@@ -26,7 +26,7 @@
   },
   "errors": {
     "force_final_answer": "Lorem ipsum dolor sit amet",
-    "agent_tool_unexsiting_coworker": "Lorem ipsum dolor sit amet",
+    "agent_tool_unexisting_coworker": "Lorem ipsum dolor sit amet",
    "task_repeated_usage": "Lorem ipsum dolor sit amet",
    "tool_usage_error": "Lorem ipsum dolor sit amet",
    "tool_arguments_error": "Lorem ipsum dolor sit amet",
uv.lock (generated): 51 changes

@@ -272,6 +272,15 @@ wheels = [
     { url = "https://files.pythonhosted.org/packages/b1/fe/e8c672695b37eecc5cbf43e1d0638d88d66ba3a44c4d321c796f4e59167f/beautifulsoup4-4.12.3-py3-none-any.whl", hash = "sha256:b80878c9f40111313e55da8ba20bdba06d8fa3969fc68304167741bbf9e082ed", size = 147925 },
 ]
 
+[[package]]
+name = "blinker"
+version = "1.9.0"
+source = { registry = "https://pypi.org/simple" }
+sdist = { url = "https://files.pythonhosted.org/packages/21/28/9b3f50ce0e048515135495f198351908d99540d69bfdc8c1d15b73dc55ce/blinker-1.9.0.tar.gz", hash = "sha256:b4ce2265a7abece45e7cc896e98dbebe6cead56bcf805a3d23136d145f5445bf", size = 22460 }
+wheels = [
+    { url = "https://files.pythonhosted.org/packages/10/cb/f2ad4230dc2eb1a74edf38f1a38b9b52277f75bef262d8908e60d957e13c/blinker-1.9.0-py3-none-any.whl", hash = "sha256:ba0efaa9080b619ff2f3459d1d500c57bddea4a6b424b60a91141db6fd2f08bc", size = 8458 },
+]
+
 [[package]]
 name = "build"
 version = "1.2.2.post1"
@@ -568,6 +577,7 @@ source = { editable = "." }
 dependencies = [
     { name = "appdirs" },
     { name = "auth0-python" },
+    { name = "blinker" },
     { name = "chromadb" },
     { name = "click" },
     { name = "instructor" },
@@ -637,6 +647,7 @@ requires-dist = [
     { name = "agentops", marker = "extra == 'agentops'", specifier = ">=0.3.0" },
     { name = "appdirs", specifier = ">=1.4.4" },
     { name = "auth0-python", specifier = ">=4.7.1" },
+    { name = "blinker", specifier = ">=1.9.0" },
     { name = "chromadb", specifier = ">=0.5.18" },
     { name = "click", specifier = ">=8.1.7" },
     { name = "crewai-tools", marker = "extra == 'tools'", specifier = ">=0.14.0" },
@@ -681,7 +692,7 @@ dev = [
     { name = "pytest-subprocess", specifier = ">=1.5.2" },
     { name = "pytest-vcr", specifier = ">=1.0.2" },
     { name = "python-dotenv", specifier = ">=1.0.0" },
-    { name = "ruff", specifier = ">=0.4.10" },
+    { name = "ruff", specifier = ">=0.8.2" },
 ]

 [[package]]
@@ -3865,27 +3876,27 @@ wheels = [

 [[package]]
 name = "ruff"
-version = "0.7.0"
+version = "0.8.2"
 source = { registry = "https://pypi.org/simple" }
-sdist = { url = "https://files.pythonhosted.org/packages/2c/c7/f3367d1da5d568192968c5c9e7f3d51fb317b9ac04828493b23d8fce8ce6/ruff-0.7.0.tar.gz", hash = "sha256:47a86360cf62d9cd53ebfb0b5eb0e882193fc191c6d717e8bef4462bc3b9ea2b", size = 3146645 }
+sdist = { url = "https://files.pythonhosted.org/packages/5e/2b/01245f4f3a727d60bebeacd7ee6d22586c7f62380a2597ddb22c2f45d018/ruff-0.8.2.tar.gz", hash = "sha256:b84f4f414dda8ac7f75075c1fa0b905ac0ff25361f42e6d5da681a465e0f78e5", size = 3349020 }
 wheels = [
-    { url = "https://files.pythonhosted.org/packages/48/59/a0275a0913f3539498d116046dd679cd657fe3b7caf5afe1733319414932/ruff-0.7.0-py3-none-linux_armv6l.whl", hash = "sha256:0cdf20c2b6ff98e37df47b2b0bd3a34aaa155f59a11182c1303cce79be715628", size = 10434007 },
+    { url = "https://files.pythonhosted.org/packages/91/29/366be70216dba1731a00a41f2f030822b0c96c7c4f3b2c0cdce15cbace74/ruff-0.8.2-py3-none-linux_armv6l.whl", hash = "sha256:c49ab4da37e7c457105aadfd2725e24305ff9bc908487a9bf8d548c6dad8bb3d", size = 10530649 },
-    { url = "https://files.pythonhosted.org/packages/cd/94/da0ba5f956d04c90dd899209904210600009dcda039ce840d83eb4298c7d/ruff-0.7.0-py3-none-macosx_10_12_x86_64.whl", hash = "sha256:496494d350c7fdeb36ca4ef1c9f21d80d182423718782222c29b3e72b3512737", size = 10048066 },
+    { url = "https://files.pythonhosted.org/packages/63/82/a733956540bb388f00df5a3e6a02467b16c0e529132625fe44ce4c5fb9c7/ruff-0.8.2-py3-none-macosx_10_12_x86_64.whl", hash = "sha256:ec016beb69ac16be416c435828be702ee694c0d722505f9c1f35e1b9c0cc1bf5", size = 10274069 },
-    { url = "https://files.pythonhosted.org/packages/57/1d/e5cc149ecc46e4f203403a79ccd170fad52d316f98b87d0f63b1945567db/ruff-0.7.0-py3-none-macosx_11_0_arm64.whl", hash = "sha256:214b88498684e20b6b2b8852c01d50f0651f3cc6118dfa113b4def9f14faaf06", size = 9711389 },
+    { url = "https://files.pythonhosted.org/packages/3d/12/0b3aa14d1d71546c988a28e1b412981c1b80c8a1072e977a2f30c595cc4a/ruff-0.8.2-py3-none-macosx_11_0_arm64.whl", hash = "sha256:f05cdf8d050b30e2ba55c9b09330b51f9f97d36d4673213679b965d25a785f3c", size = 9909400 },
-    { url = "https://files.pythonhosted.org/packages/05/67/fb7ea2c869c539725a16c5bc294e9aa34f8b1b6fe702f1d173a5da517c2b/ruff-0.7.0-py3-none-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:630fce3fefe9844e91ea5bbf7ceadab4f9981f42b704fae011bb8efcaf5d84be", size = 10755174 },
+    { url = "https://files.pythonhosted.org/packages/23/08/f9f08cefb7921784c891c4151cce6ed357ff49e84b84978440cffbc87408/ruff-0.8.2-py3-none-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:60f578c11feb1d3d257b2fb043ddb47501ab4816e7e221fbb0077f0d5d4e7b6f", size = 10766782 },
-    { url = "https://files.pythonhosted.org/packages/5f/f0/13703bc50536a0613ea3dce991116e5f0917a1f05528c6ab738b33c08d3f/ruff-0.7.0-py3-none-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:211d877674e9373d4bb0f1c80f97a0201c61bcd1e9d045b6e9726adc42c156aa", size = 10196040 },
+    { url = "https://files.pythonhosted.org/packages/e4/71/bf50c321ec179aa420c8ec40adac5ae9cc408d4d37283a485b19a2331ceb/ruff-0.8.2-py3-none-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:cbd5cf9b0ae8f30eebc7b360171bd50f59ab29d39f06a670b3e4501a36ba5897", size = 10286316 },
-    { url = "https://files.pythonhosted.org/packages/99/c1/77b04ab20324ab03d333522ee55fb0f1c38e3ca0d326b4905f82ce6b6c70/ruff-0.7.0-py3-none-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:194d6c46c98c73949a106425ed40a576f52291c12bc21399eb8f13a0f7073495", size = 11033684 },
+    { url = "https://files.pythonhosted.org/packages/f2/83/c82688a2a6117539aea0ce63fdf6c08e60fe0202779361223bcd7f40bd74/ruff-0.8.2-py3-none-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:b402ddee3d777683de60ff76da801fa7e5e8a71038f57ee53e903afbcefdaa58", size = 11338270 },
-    { url = "https://files.pythonhosted.org/packages/f2/97/f463334dc4efeea3551cd109163df15561c18a1c3ec13d51643740fd36ba/ruff-0.7.0-py3-none-manylinux_2_17_ppc64.manylinux2014_ppc64.whl", hash = "sha256:82c2579b82b9973a110fab281860403b397c08c403de92de19568f32f7178598", size = 11803700 },
+    { url = "https://files.pythonhosted.org/packages/7f/d7/bc6a45e5a22e627640388e703160afb1d77c572b1d0fda8b4349f334fc66/ruff-0.8.2-py3-none-manylinux_2_17_ppc64.manylinux2014_ppc64.whl", hash = "sha256:705832cd7d85605cb7858d8a13d75993c8f3ef1397b0831289109e953d833d29", size = 12058579 },
-    { url = "https://files.pythonhosted.org/packages/b4/f8/a31d40c4bb92933d376a53e7c5d0245d9b27841357e4820e96d38f54b480/ruff-0.7.0-py3-none-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:9af971fe85dcd5eaed8f585ddbc6bdbe8c217fb8fcf510ea6bca5bdfff56040e", size = 11347848 },
+    { url = "https://files.pythonhosted.org/packages/da/3b/64150c93946ec851e6f1707ff586bb460ca671581380c919698d6a9267dc/ruff-0.8.2-py3-none-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:32096b41aaf7a5cc095fa45b4167b890e4c8d3fd217603f3634c92a541de7248", size = 11615172 },
-    { url = "https://files.pythonhosted.org/packages/83/62/0c133b35ddaf91c65c30a56718b80bdef36bfffc35684d29e3a4878e0ea3/ruff-0.7.0-py3-none-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:b641c7f16939b7d24b7bfc0be4102c56562a18281f84f635604e8a6989948914", size = 12480632 },
+    { url = "https://files.pythonhosted.org/packages/e4/9e/cf12b697ea83cfe92ec4509ae414dc4c9b38179cc681a497031f0d0d9a8e/ruff-0.8.2-py3-none-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:e769083da9439508833cfc7c23e351e1809e67f47c50248250ce1ac52c21fb93", size = 12882398 },
-    { url = "https://files.pythonhosted.org/packages/46/96/464058dd1d980014fb5aa0a1254e78799efb3096fc7a4823cd66a1621276/ruff-0.7.0-py3-none-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:d71672336e46b34e0c90a790afeac8a31954fd42872c1f6adaea1dff76fd44f9", size = 10941919 },
+    { url = "https://files.pythonhosted.org/packages/a9/27/96d10863accf76a9c97baceac30b0a52d917eb985a8ac058bd4636aeede0/ruff-0.8.2-py3-none-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:5fe716592ae8a376c2673fdfc1f5c0c193a6d0411f90a496863c99cd9e2ae25d", size = 11176094 },
-    { url = "https://files.pythonhosted.org/packages/a0/f7/bda37ec77986a435dde44e1f59374aebf4282a5fa9cf17735315b847141f/ruff-0.7.0-py3-none-musllinux_1_2_aarch64.whl", hash = "sha256:ab7d98c7eed355166f367597e513a6c82408df4181a937628dbec79abb2a1fe4", size = 10745519 },
+    { url = "https://files.pythonhosted.org/packages/eb/10/cd2fd77d4a4e7f03c29351be0f53278a393186b540b99df68beb5304fddd/ruff-0.8.2-py3-none-musllinux_1_2_aarch64.whl", hash = "sha256:81c148825277e737493242b44c5388a300584d73d5774defa9245aaef55448b0", size = 10771884 },
-    { url = "https://files.pythonhosted.org/packages/c2/33/5f77fc317027c057b61a848020a47442a1cbf12e592df0e41e21f4d0f3bd/ruff-0.7.0-py3-none-musllinux_1_2_armv7l.whl", hash = "sha256:1eb54986f770f49edb14f71d33312d79e00e629a57387382200b1ef12d6a4ef9", size = 10284872 },
+    { url = "https://files.pythonhosted.org/packages/71/5d/beabb2ff18870fc4add05fa3a69a4cb1b1d2d6f83f3cf3ae5ab0d52f455d/ruff-0.8.2-py3-none-musllinux_1_2_armv7l.whl", hash = "sha256:d261d7850c8367704874847d95febc698a950bf061c9475d4a8b7689adc4f7fa", size = 10382535 },
-    { url = "https://files.pythonhosted.org/packages/ff/50/98aec292bc9537f640b8d031c55f3414bf15b6ed13b3e943fed75ac927b9/ruff-0.7.0-py3-none-musllinux_1_2_i686.whl", hash = "sha256:dc452ba6f2bb9cf8726a84aa877061a2462afe9ae0ea1d411c53d226661c601d", size = 10600334 },
+    { url = "https://files.pythonhosted.org/packages/ae/29/6b3fdf3ad3e35b28d87c25a9ff4c8222ad72485ab783936b2b267250d7a7/ruff-0.8.2-py3-none-musllinux_1_2_i686.whl", hash = "sha256:1ca4e3a87496dc07d2427b7dd7ffa88a1e597c28dad65ae6433ecb9f2e4f022f", size = 10886995 },
-    { url = "https://files.pythonhosted.org/packages/f2/85/12607ae3201423a179b8cfadc7cb1e57d02cd0135e45bd0445acb4cef327/ruff-0.7.0-py3-none-musllinux_1_2_x86_64.whl", hash = "sha256:4b406c2dce5be9bad59f2de26139a86017a517e6bcd2688da515481c05a2cb11", size = 11017333 },
+    { url = "https://files.pythonhosted.org/packages/e9/dc/859d889b4d9356a1a2cdbc1e4a0dda94052bc5b5300098647e51a58c430b/ruff-0.8.2-py3-none-musllinux_1_2_x86_64.whl", hash = "sha256:729850feed82ef2440aa27946ab39c18cb4a8889c1128a6d589ffa028ddcfc22", size = 11220750 },
-    { url = "https://files.pythonhosted.org/packages/d4/7f/3b85a56879e705d5f46ec14daf8a439fca05c3081720fe3dc3209100922d/ruff-0.7.0-py3-none-win32.whl", hash = "sha256:f6c968509f767776f524a8430426539587d5ec5c662f6addb6aa25bc2e8195ec", size = 8570962 },
+    { url = "https://files.pythonhosted.org/packages/0b/08/e8f519f61f1d624264bfd6b8829e4c5f31c3c61193bc3cff1f19dbe7626a/ruff-0.8.2-py3-none-win32.whl", hash = "sha256:ac42caaa0411d6a7d9594363294416e0e48fc1279e1b0e948391695db2b3d5b1", size = 8729396 },
-    { url = "https://files.pythonhosted.org/packages/39/9f/c5ee2b40d377354dabcc23cff47eb299de4b4d06d345068f8f8cc1eadac8/ruff-0.7.0-py3-none-win_amd64.whl", hash = "sha256:ff4aabfbaaba880e85d394603b9e75d32b0693152e16fa659a3064a85df7fce2", size = 9365544 },
+    { url = "https://files.pythonhosted.org/packages/f8/d4/ba1c7ab72aba37a2b71fe48ab95b80546dbad7a7f35ea28cf66fc5cea5f6/ruff-0.8.2-py3-none-win_amd64.whl", hash = "sha256:2aae99ec70abf43372612a838d97bfe77d45146254568d94926e8ed5bbb409ea", size = 9594729 },
-    { url = "https://files.pythonhosted.org/packages/89/8b/ee1509f60148cecba644aa718f6633216784302458340311898aaf0b1bed/ruff-0.7.0-py3-none-win_arm64.whl", hash = "sha256:10842f69c245e78d6adec7e1db0a7d9ddc2fff0621d730e61657b64fa36f207e", size = 8695763 },
+    { url = "https://files.pythonhosted.org/packages/23/34/db20e12d3db11b8a2a8874258f0f6d96a9a4d631659d54575840557164c8/ruff-0.8.2-py3-none-win_arm64.whl", hash = "sha256:fb88e2a506b70cfbc2de6fae6681c4f944f7dd5f2fe87233a7233d888bad73e8", size = 9035131 },
 ]

 [[package]]