mirror of
https://github.com/crewAIInc/crewAI.git
synced 2026-04-29 06:12:43 +00:00
Compare commits
2 Commits
devin/1754
...
devin/1753
| Author | SHA1 | Date | |
|---|---|---|---|
| | 8ab236a68e | | |
| | dce11df0b7 | | |
356
docs/en/observability/langdb.mdx
Normal file
@@ -0,0 +1,356 @@
---
title: LangDB Integration
description: How to use LangDB AI Gateway with CrewAI
icon: database
---

<img src="https://raw.githubusercontent.com/LangDB/assets/main/langdb-crewai-header.png" alt="LangDB CrewAI Header Image" width="70%" />

## Introduction

LangDB is a high-performance enterprise AI gateway that enhances CrewAI with production-ready observability and optimization features. It provides:

- **Complete end-to-end tracing** of every agent interaction and LLM call
- **Real-time cost monitoring** and optimization across 250+ LLMs
- **Performance analytics** with detailed metrics and insights
- **Secure governance** for enterprise AI deployments
- **OpenAI-compatible APIs** for seamless integration
- **Fine-grained control** over agent workflows and resource usage

### Installation & Setup

<Steps>
<Step title="Install the required packages">
```bash
pip install -U crewai langdb
```
</Step>

<Step title="Set up environment variables" icon="lock">
Configure your LangDB credentials from the [LangDB dashboard](https://app.langdb.ai/):

```bash
export LANGDB_API_KEY="your_langdb_api_key"
export LANGDB_PROJECT_ID="your_project_id"
```
</Step>

<Step title="Initialize LangDB with CrewAI">
The integration requires a single initialization call before creating your agents:

```python
from langdb import LangDB
from crewai import Agent, Task, Crew, LLM

# Initialize LangDB tracing
LangDB.init()

# Create an LLM instance - LangDB automatically traces all calls
llm = LLM(
    model="gpt-4o",
    temperature=0.7
)

# Create your agents as usual (decorator-style definitions
# like this one live inside a @CrewBase-decorated class)
@agent
def research_agent(self) -> Agent:
    return Agent(
        role="Senior Research Analyst",
        goal="Conduct comprehensive research on assigned topics",
        backstory="You are an expert researcher with deep analytical skills.",
        llm=llm,
        verbose=True
    )
```
</Step>
</Steps>

## Key Features

### 1. Comprehensive Observability

LangDB provides complete visibility into your CrewAI agent workflows with minimal setup overhead.

<Tabs>
<Tab title="Request Tracing">
LangDB automatically captures every LLM interaction in your crew execution:

```python
from langdb import LangDB
from crewai import Agent, Task, Crew, LLM

# Initialize with custom trace metadata
LangDB.init(
    metadata={
        "environment": "production",
        "crew_type": "research_workflow",
        "user_id": "user_123"
    }
)

# All agent interactions are automatically traced
crew = Crew(
    agents=[research_agent, writer_agent],
    tasks=[research_task, writing_task],
    verbose=True
)

# Execute with full tracing
result = crew.kickoff(inputs={"topic": "AI trends 2025"})
```

View detailed traces in the LangDB dashboard showing:
- Complete agent conversation flows
- Tool usage and function calls
- Task execution timelines
- LLM request/response pairs
</Tab>

<Tab title="Performance Metrics">
LangDB tracks comprehensive performance metrics for your crews:

- **Execution Time**: Total and per-task execution duration
- **Token Usage**: Input/output tokens for cost optimization
- **Success Rates**: Task completion and failure analytics
- **Latency Analysis**: Response times and bottleneck identification

```python
# Access metrics programmatically
from langdb import LangDB

# Get crew execution metrics
metrics = LangDB.get_metrics(
    project_id="your_project_id",
    filters={
        "crew_type": "research_workflow",
        "time_range": "last_24h"
    }
)

print(f"Average execution time: {metrics.avg_execution_time}")
print(f"Total cost: ${metrics.total_cost}")
print(f"Success rate: {metrics.success_rate}%")
```
</Tab>

<Tab title="Cost Monitoring">
Track and optimize AI spending across your CrewAI deployments:

```python
from langdb import LangDB

# Initialize with cost tracking
LangDB.init(
    cost_tracking=True,
    budget_alerts={
        "daily_limit": 100.0,   # $100 daily limit
        "alert_threshold": 0.8  # Alert at 80% of limit
    }
)

# LangDB automatically tracks costs for all LLM calls
crew = Crew(agents=[agent], tasks=[task])
result = crew.kickoff()

# View cost breakdown
cost_report = LangDB.get_cost_report(
    breakdown_by=["model", "agent", "task"]
)
```
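A report produced by a call like the one above reduces to grouping per-call costs by one or more attributes. Here is a minimal stdlib sketch of that aggregation step (the record layout is an illustrative assumption, not LangDB's actual data model):

```python
from collections import defaultdict

def cost_breakdown(calls, keys):
    """Sum per-call costs, grouped by the given attribute keys."""
    totals = defaultdict(float)
    for call in calls:
        group = tuple(call[k] for k in keys)
        totals[group] += call["cost"]
    return dict(totals)

# Hypothetical per-call records, for illustration only
calls = [
    {"model": "gpt-4o", "agent": "researcher", "cost": 0.12},
    {"model": "gpt-4o", "agent": "writer", "cost": 0.08},
    {"model": "gpt-4o-mini", "agent": "writer", "cost": 0.01},
]
print(cost_breakdown(calls, ["model"]))
print(cost_breakdown(calls, ["model", "agent"]))
```

Passing more keys (`["model", "agent", "task"]`) simply produces finer-grained groups, which is all `breakdown_by` expresses.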

Features include:
- Real-time cost tracking across all models
- Budget alerts and spending limits
- Cost optimization recommendations
- Detailed cost attribution by agent and task
</Tab>
</Tabs>

### 2. Advanced Analytics & Insights

LangDB provides powerful analytics to optimize your CrewAI workflows.

<Tabs>
<Tab title="Agent Performance Analysis">
Analyze individual agent performance and identify optimization opportunities:

```python
from langdb import LangDB

# Get agent-specific analytics
analytics = LangDB.get_agent_analytics(
    agent_role="Senior Research Analyst",
    time_range="last_week"
)

print(f"Average task completion time: {analytics.avg_completion_time}")
print(f"Most used tools: {analytics.top_tools}")
print(f"Success rate: {analytics.success_rate}%")
print(f"Cost per task: ${analytics.cost_per_task}")
```
</Tab>

<Tab title="Workflow Optimization">
Identify bottlenecks and optimization opportunities in your crew workflows:

```python
from langdb import LangDB

# Analyze crew workflow patterns
workflow_analysis = LangDB.analyze_workflow(
    crew_id="research_crew_v1",
    optimization_focus=["speed", "cost", "quality"]
)

# Get optimization recommendations
recommendations = workflow_analysis.recommendations
for rec in recommendations:
    print(f"Optimization: {rec.type}")
    print(f"Potential savings: {rec.estimated_savings}")
    print(f"Implementation: {rec.implementation_guide}")
```
</Tab>
</Tabs>

### 3. Production-Ready Features

<CardGroup cols="2">
<Card title="Error Monitoring" icon="exclamation-triangle" href="https://docs.langdb.ai/features/error-monitoring">
Automatic detection and alerting for agent failures, LLM errors, and workflow issues.
</Card>
<Card title="Rate Limiting" icon="gauge" href="https://docs.langdb.ai/features/rate-limiting">
Intelligent rate limiting to prevent API quota exhaustion and optimize throughput.
</Card>
<Card title="Caching" icon="bolt" href="https://docs.langdb.ai/features/caching">
Smart caching of LLM responses to reduce costs and improve response times.
</Card>
<Card title="Load Balancing" icon="scale-balanced" href="https://docs.langdb.ai/features/load-balancing">
Distribute requests across multiple LLM providers for reliability and performance.
</Card>
</CardGroup>
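Response caching, as described in the card above, comes down to keying a response store by the full request parameters. A minimal in-process sketch of the idea (illustrative only; LangDB applies this at the gateway, not inside your process):

```python
import hashlib
import json

class ResponseCache:
    """Cache LLM responses keyed by a hash of (model, prompt, params)."""

    def __init__(self):
        self._store = {}
        self.hits = 0

    def _key(self, model, prompt, **params):
        # Canonical JSON so identical requests always hash identically
        payload = json.dumps({"model": model, "prompt": prompt, **params}, sort_keys=True)
        return hashlib.sha256(payload.encode()).hexdigest()

    def get_or_call(self, model, prompt, call, **params):
        key = self._key(model, prompt, **params)
        if key in self._store:
            self.hits += 1
            return self._store[key]
        self._store[key] = call(model, prompt, **params)
        return self._store[key]

cache = ResponseCache()
fake_llm = lambda model, prompt, **p: f"response to {prompt!r}"  # stand-in for a real call
cache.get_or_call("gpt-4o", "Hello", fake_llm, temperature=0.7)
cache.get_or_call("gpt-4o", "Hello", fake_llm, temperature=0.7)  # served from cache
```

Because the key hashes every parameter, changing the prompt or `temperature` is a deliberate cache miss, which is the safe default for reusing model output.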

### 4. Enterprise Security & Governance

LangDB provides enterprise-grade security features for production CrewAI deployments:

```python
from langdb import LangDB

# Initialize with security configurations
LangDB.init(
    security_config={
        "pii_detection": True,
        "content_filtering": True,
        "audit_logging": True,
        "data_retention_days": 90
    }
)

# All crew interactions are automatically secured
crew = Crew(agents=[agent], tasks=[task])
result = crew.kickoff()
```

Security features include:
- **PII Detection**: Automatic detection and redaction of sensitive information
- **Content Filtering**: Block inappropriate or harmful content
- **Audit Logging**: Complete audit trails for compliance
- **Data Governance**: Configurable data retention and privacy controls
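PII detection in this sense is pattern- and model-based matching of sensitive spans before text leaves your infrastructure. A minimal regex sketch of the concept (two patterns only; not LangDB's detector, which covers many more entity types):

```python
import re

# Illustrative patterns -- real detectors cover many more entity types
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact_pii(text):
    """Replace matched PII spans with type placeholders."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(redact_pii("Contact jane.doe@example.com, SSN 123-45-6789."))
```

Redaction placeholders like `[EMAIL]` keep the surrounding context usable for tracing and analytics while removing the sensitive value itself.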

## Advanced Configuration

### Custom Metadata and Filtering

Add custom metadata to enable powerful filtering and analytics:

```python
from langdb import LangDB
from crewai import Agent, Crew, Task

# Initialize with rich metadata
LangDB.init(
    metadata={
        "environment": "production",
        "team": "research_team",
        "version": "v2.1.0",
        "customer_tier": "enterprise"
    }
)

# Add task-specific metadata (decorator-style definitions
# like this one live inside a @CrewBase-decorated class)
@task
def research_task(self) -> Task:
    return Task(
        description="Research the latest AI trends",
        expected_output="Comprehensive research report",
        agent=research_agent,
        metadata={
            "task_type": "research",
            "priority": "high",
            "estimated_duration": "30min"
        }
    )
```

### Multi-Environment Setup

Configure different LangDB projects for different environments:

```python
import os
from langdb import LangDB

# Environment-specific configuration
environment = os.getenv("ENVIRONMENT", "development")

if environment == "production":
    LangDB.init(
        project_id="prod_project_id",
        sampling_rate=1.0,  # Trace all requests
        cost_tracking=True
    )
elif environment == "staging":
    LangDB.init(
        project_id="staging_project_id",
        sampling_rate=0.5,  # Sample 50% of requests
        cost_tracking=False
    )
else:
    LangDB.init(
        project_id="dev_project_id",
        sampling_rate=0.1,  # Sample 10% of requests
        cost_tracking=False
    )
```
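A `sampling_rate` like those above is conventionally a per-request Bernoulli draw: each request is traced with that probability. A stdlib sketch of the idea (an assumption about the semantics, not LangDB's implementation):

```python
import random

def should_trace(sampling_rate, rng=random):
    """Decide per request whether to record a trace."""
    return rng.random() < sampling_rate

rng = random.Random(42)  # seeded for reproducibility
traced = sum(should_trace(0.5, rng) for _ in range(10_000))
print(f"traced {traced} of 10000 requests")
```

A rate of `1.0` traces every request and `0.0` traces none; in between, the traced share converges to the configured rate over many requests.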

## Best Practices

### Development Phase
- Use detailed tracing to understand agent behavior patterns
- Monitor resource usage during testing and development
- Set up cost alerts to prevent unexpected spending
- Implement comprehensive error handling and monitoring
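The cost-alert practice pairs with the `budget_alerts` settings shown earlier; the underlying check is a simple threshold comparison against cumulative spend. A minimal sketch of that logic (illustrative only, not LangDB's alerting code):

```python
class BudgetMonitor:
    """Track cumulative spend and flag threshold crossings."""

    def __init__(self, daily_limit, alert_threshold=0.8):
        self.daily_limit = daily_limit
        self.alert_threshold = alert_threshold
        self.spent = 0.0

    def record(self, cost):
        """Add a cost and return the current budget status."""
        self.spent += cost
        if self.spent >= self.daily_limit:
            return "limit_exceeded"
        if self.spent >= self.daily_limit * self.alert_threshold:
            return "alert"
        return "ok"

monitor = BudgetMonitor(daily_limit=100.0, alert_threshold=0.8)
print(monitor.record(50.0))  # within budget
print(monitor.record(35.0))  # crosses 80% of the daily limit
```

This mirrors the `daily_limit`/`alert_threshold` pair from the Cost Monitoring example: the alert fires early so you can intervene before the hard limit is hit.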

### Production Phase
- Enable full request tracing for complete observability
- Set up automated alerts for performance degradation
- Implement cost optimization strategies based on analytics
- Use metadata for detailed filtering and analysis

### Continuous Improvement
- Regular performance reviews using LangDB analytics
- A/B testing of different agent configurations
- Cost optimization based on usage patterns
- Workflow optimization using bottleneck analysis

## Getting Started

1. **Sign up** for a LangDB account at [app.langdb.ai](https://app.langdb.ai)
2. **Install** the LangDB package: `pip install langdb`
3. **Initialize** LangDB in your CrewAI application
4. **Deploy** your crews with automatic observability
5. **Monitor** and optimize using the LangDB dashboard

<Card title="LangDB Documentation" icon="book" href="https://docs.langdb.ai">
Explore comprehensive LangDB documentation and advanced features
</Card>

LangDB turns your CrewAI agents into production-ready, observable, and optimized AI workflows with minimal code changes.
@@ -56,6 +56,10 @@ Observability is crucial for understanding how your CrewAI agents perform, ident
 <Card title="Weave" icon="network-wired" href="/en/observability/weave">
   Weights & Biases platform for tracking and evaluating AI applications.
 </Card>
+
+<Card title="LangDB" icon="database" href="/en/observability/langdb">
+  Enterprise AI gateway with comprehensive tracing, cost optimization, and performance analytics.
+</Card>
 </CardGroup>

 ### Evaluation & Quality Assurance

@@ -48,7 +48,7 @@ Documentation = "https://docs.crewai.com"
 Repository = "https://github.com/crewAIInc/crewAI"

 [project.optional-dependencies]
-tools = ["crewai-tools~=0.59.0"]
+tools = ["crewai-tools~=0.58.0"]
 embeddings = [
     "tiktoken~=0.8.0"
 ]

@@ -54,7 +54,7 @@ def _track_install_async():

 _track_install_async()

-__version__ = "0.152.0"
+__version__ = "0.150.0"
 __all__ = [
     "Agent",
     "Crew",

@@ -3,7 +3,6 @@ from typing import Optional

 import click
 from crewai.cli.config import Settings
-from crewai.cli.settings.main import SettingsCommand
 from crewai.cli.add_crew_to_flow import add_crew_to_flow
 from crewai.cli.create_crew import create_crew
 from crewai.cli.create_flow import create_flow
@@ -228,7 +227,7 @@ def update():
 @crewai.command()
 def login():
     """Sign Up/Login to CrewAI Enterprise."""
-    Settings().clear_user_settings()
+    Settings().clear()
     AuthenticationCommand().login()


@@ -370,8 +369,8 @@ def org():
     pass


-@org.command("list")
-def org_list():
+@org.command()
+def list():
     """List available organizations."""
     org_command = OrganizationCommand()
     org_command.list()
@@ -392,34 +391,5 @@ def current():
     org_command.current()


-@crewai.group()
-def config():
-    """CLI Configuration commands."""
-    pass
-
-
-@config.command("list")
-def config_list():
-    """List all CLI configuration parameters."""
-    config_command = SettingsCommand()
-    config_command.list()
-
-
-@config.command("set")
-@click.argument("key")
-@click.argument("value")
-def config_set(key: str, value: str):
-    """Set a CLI configuration parameter."""
-    config_command = SettingsCommand()
-    config_command.set(key, value)
-
-
-@config.command("reset")
-def config_reset():
-    """Reset all CLI configuration parameters to default values."""
-    config_command = SettingsCommand()
-    config_command.reset_all_settings()
-
-
 if __name__ == "__main__":
     crewai()

@@ -4,47 +4,10 @@ from typing import Optional

 from pydantic import BaseModel, Field

-from crewai.cli.constants import DEFAULT_CREWAI_ENTERPRISE_URL
-
 DEFAULT_CONFIG_PATH = Path.home() / ".config" / "crewai" / "settings.json"

-# Settings that are related to the user's account
-USER_SETTINGS_KEYS = [
-    "tool_repository_username",
-    "tool_repository_password",
-    "org_name",
-    "org_uuid",
-]
-
-# Settings that are related to the CLI
-CLI_SETTINGS_KEYS = [
-    "enterprise_base_url",
-]
-
-# Default values for CLI settings
-DEFAULT_CLI_SETTINGS = {
-    "enterprise_base_url": DEFAULT_CREWAI_ENTERPRISE_URL,
-}
-
-# Readonly settings - cannot be set by the user
-READONLY_SETTINGS_KEYS = [
-    "org_name",
-    "org_uuid",
-]
-
-# Hidden settings - not displayed by the 'list' command and cannot be set by the user
-HIDDEN_SETTINGS_KEYS = [
-    "config_path",
-    "tool_repository_username",
-    "tool_repository_password",
-]
-

 class Settings(BaseModel):
-    enterprise_base_url: Optional[str] = Field(
-        default=DEFAULT_CREWAI_ENTERPRISE_URL,
-        description="Base URL of the CrewAI Enterprise instance",
-    )
     tool_repository_username: Optional[str] = Field(
         None, description="Username for interacting with the Tool Repository"
     )
@@ -57,7 +20,7 @@ class Settings(BaseModel):
     org_uuid: Optional[str] = Field(
         None, description="UUID of the currently active organization"
     )
-    config_path: Path = Field(default=DEFAULT_CONFIG_PATH, frozen=True, exclude=True)
+    config_path: Path = Field(default=DEFAULT_CONFIG_PATH, exclude=True)

     def __init__(self, config_path: Path = DEFAULT_CONFIG_PATH, **data):
         """Load Settings from config path"""
@@ -74,16 +37,9 @@ class Settings(BaseModel):
         merged_data = {**file_data, **data}
         super().__init__(config_path=config_path, **merged_data)

-    def clear_user_settings(self) -> None:
-        """Clear all user settings"""
-        self._reset_user_settings()
-        self.dump()
-
-    def reset(self) -> None:
-        """Reset all settings to default values"""
-        self._reset_user_settings()
-        self._reset_cli_settings()
-        self.dump()
+    def clear(self) -> None:
+        """Clear all settings"""
+        self.config_path.unlink(missing_ok=True)

     def dump(self) -> None:
         """Save current settings to settings.json"""
@@ -96,13 +52,3 @@ class Settings(BaseModel):
         updated_data = {**existing_data, **self.model_dump(exclude_unset=True)}
         with self.config_path.open("w") as f:
             json.dump(updated_data, f, indent=4)
-
-    def _reset_user_settings(self) -> None:
-        """Reset all user settings to default values"""
-        for key in USER_SETTINGS_KEYS:
-            setattr(self, key, None)
-
-    def _reset_cli_settings(self) -> None:
-        """Reset all CLI settings to default values"""
-        for key in CLI_SETTINGS_KEYS:
-            setattr(self, key, DEFAULT_CLI_SETTINGS[key])

@@ -1,5 +1,3 @@
-DEFAULT_CREWAI_ENTERPRISE_URL = "https://app.crewai.com"
-
 ENV_VARS = {
     "openai": [
         {
@@ -322,4 +320,5 @@ DEFAULT_LLM_MODEL = "gpt-4o-mini"

 JSON_URL = "https://raw.githubusercontent.com/BerriAI/litellm/main/model_prices_and_context_window.json"

+
 LITELLM_PARAMS = ["api_key", "api_base", "api_version"]

@@ -1,3 +1,4 @@
+from os import getenv
 from typing import List, Optional
 from urllib.parse import urljoin

@@ -5,7 +6,6 @@ import requests

 from crewai.cli.config import Settings
 from crewai.cli.version import get_crewai_version
-from crewai.cli.constants import DEFAULT_CREWAI_ENTERPRISE_URL


 class PlusAPI:
@@ -29,10 +29,7 @@ class PlusAPI:
         settings = Settings()
         if settings.org_uuid:
             self.headers["X-Crewai-Organization-Id"] = settings.org_uuid

-        self.base_url = (
-            str(settings.enterprise_base_url) or DEFAULT_CREWAI_ENTERPRISE_URL
-        )
+        self.base_url = getenv("CREWAI_BASE_URL", "https://app.crewai.com")

     def _make_request(self, method: str, endpoint: str, **kwargs) -> requests.Response:
         url = urljoin(self.base_url, endpoint)
@@ -111,6 +108,7 @@ class PlusAPI:

     def create_crew(self, payload) -> requests.Response:
         return self._make_request("POST", self.CREWS_RESOURCE, json=payload)
+

     def get_organizations(self) -> requests.Response:
         return self._make_request("GET", self.ORGANIZATIONS_RESOURCE)

@@ -1,67 +0,0 @@
-from rich.console import Console
-from rich.table import Table
-from crewai.cli.command import BaseCommand
-from crewai.cli.config import Settings, READONLY_SETTINGS_KEYS, HIDDEN_SETTINGS_KEYS
-from typing import Any
-
-console = Console()
-
-
-class SettingsCommand(BaseCommand):
-    """A class to handle CLI configuration commands."""
-
-    def __init__(self, settings_kwargs: dict[str, Any] = {}):
-        super().__init__()
-        self.settings = Settings(**settings_kwargs)
-
-    def list(self) -> None:
-        """List all CLI configuration parameters."""
-        table = Table(title="CrewAI CLI Configuration")
-        table.add_column("Setting", style="cyan", no_wrap=True)
-        table.add_column("Value", style="green")
-        table.add_column("Description", style="yellow")
-
-        # Add all settings to the table
-        for field_name, field_info in Settings.model_fields.items():
-            if field_name in HIDDEN_SETTINGS_KEYS:
-                # Do not display hidden settings
-                continue
-
-            current_value = getattr(self.settings, field_name)
-            description = field_info.description or "No description available"
-            display_value = (
-                str(current_value) if current_value is not None else "Not set"
-            )
-
-            table.add_row(field_name, display_value, description)
-
-        console.print(table)
-
-    def set(self, key: str, value: str) -> None:
-        """Set a CLI configuration parameter."""
-
-        readonly_settings = READONLY_SETTINGS_KEYS + HIDDEN_SETTINGS_KEYS
-
-        if not hasattr(self.settings, key) or key in readonly_settings:
-            console.print(
-                f"Error: Unknown or readonly configuration key '{key}'",
-                style="bold red",
-            )
-            console.print("Available keys:", style="yellow")
-            for field_name in Settings.model_fields.keys():
-                if field_name not in readonly_settings:
-                    console.print(f"  - {field_name}", style="yellow")
-            raise SystemExit(1)
-
-        setattr(self.settings, key, value)
-        self.settings.dump()
-
-        console.print(f"Successfully set '{key}' to '{value}'", style="bold green")
-
-    def reset_all_settings(self) -> None:
-        """Reset all CLI configuration parameters to default values."""
-        self.settings.reset()
-        console.print(
-            "Successfully reset all configuration parameters to default values. It is recommended to run [bold yellow]'crewai login'[/bold yellow] to re-authenticate.",
-            style="bold green",
-        )

@@ -5,7 +5,7 @@ description = "{{name}} using crewAI"
 authors = [{ name = "Your Name", email = "you@example.com" }]
 requires-python = ">=3.10,<3.14"
 dependencies = [
-    "crewai[tools]>=0.152.0,<1.0.0"
+    "crewai[tools]>=0.150.0,<1.0.0"
 ]

 [project.scripts]
@@ -5,7 +5,7 @@ description = "{{name}} using crewAI"
 authors = [{ name = "Your Name", email = "you@example.com" }]
 requires-python = ">=3.10,<3.14"
 dependencies = [
-    "crewai[tools]>=0.152.0,<1.0.0",
+    "crewai[tools]>=0.150.0,<1.0.0",
 ]

 [project.scripts]
@@ -5,7 +5,7 @@ description = "Power up your crews with {{folder_name}}"
 readme = "README.md"
 requires-python = ">=3.10,<3.14"
 dependencies = [
-    "crewai[tools]>=0.152.0"
+    "crewai[tools]>=0.150.0"
 ]

 [tool.crewai]

@@ -1,8 +1,7 @@
 import os
 from typing import Any, Dict, List
-from collections import defaultdict

 from mem0 import Memory, MemoryClient
 from crewai.utilities.chromadb import sanitize_collection_name

 from crewai.memory.storage.interface import Storage
@@ -71,32 +70,26 @@ class Mem0Storage(Storage):
         """
         Returns:
             dict: A filter dictionary containing AND conditions for querying data.
                 - Includes user_id and agent_id if both are present.
                 - Includes user_id if only user_id is present.
                 - Includes agent_id if only agent_id is present.
                 - Includes user_id if memory_type is 'external'.
                 - Includes run_id if memory_type is 'short_term' and mem0_run_id is present.
         """
-        filter = defaultdict(list)
+        filter = {
+            "AND": []
+        }

         # Add user_id condition if the memory type is external
         if self.memory_type == "external":
             filter["AND"].append({"user_id": self.config.get("user_id", "")})

         # Add run_id condition if the memory type is short_term and a run ID is set
         if self.memory_type == "short_term" and self.mem0_run_id:
             filter["AND"].append({"run_id": self.mem0_run_id})
         else:
             user_id = self.config.get("user_id", "")
             agent_id = self.config.get("agent_id", "")

             if user_id and agent_id:
                 filter["OR"].append({"user_id": user_id})
                 filter["OR"].append({"agent_id": agent_id})
             elif user_id:
                 filter["AND"].append({"user_id": user_id})
             elif agent_id:
                 filter["AND"].append({"agent_id": agent_id})

         return filter

     def save(self, value: Any, metadata: Dict[str, Any]) -> None:
         user_id = self.config.get("user_id", "")
         assistant_message = [{"role": "assistant", "content": value}]

         base_metadata = {
             "short_term": "short_term",
@@ -111,32 +104,31 @@ class Mem0Storage(Storage):
             "infer": self.infer
         }

-        # MemoryClient-specific overrides
-        if isinstance(self.memory, MemoryClient):
-            params["includes"] = self.includes
-            params["excludes"] = self.excludes
-            params["output_format"] = "v1.1"
-            params["version"] = "v2"
-
-        if self.memory_type == "short_term" and self.mem0_run_id:
-            params["run_id"] = self.mem0_run_id
-
-        if user_id:
+        if self.memory_type == "external":
             params["user_id"] = user_id

         if agent_id := self.config.get("agent_id", self._get_agent_name()):
             params["agent_id"] = agent_id

         if params:
+            # MemoryClient-specific overrides
+            if isinstance(self.memory, MemoryClient):
+                params["includes"] = self.includes
+                params["excludes"] = self.excludes
+                params["output_format"] = "v1.1"
+                params["version"] = "v2"

-        self.memory.add(assistant_message, **params)
+            if self.memory_type == "short_term":
+                params["run_id"] = self.mem0_run_id
+
+            self.memory.add(assistant_message, **params)

     def search(self, query: str, limit: int = 3, score_threshold: float = 0.35) -> List[Any]:
-        params = {
-            "query": query,
-            "limit": limit,
-        }
+        params = {
+            "query": query,
+            "limit": limit,
+            "version": "v2",
+            "output_format": "v1.1"
+        }

         if user_id := self.config.get("user_id", ""):
             params["user_id"] = user_id
@@ -146,7 +138,7 @@ class Mem0Storage(Storage):
             "entities": {"type": "entity"},
             "external": {"type": "external"},
         }

         if self.memory_type in memory_type_map:
             params["metadata"] = memory_type_map[self.memory_type]
             if self.memory_type == "short_term":
@@ -159,28 +151,11 @@ class Mem0Storage(Storage):
             params['threshold'] = score_threshold

         if isinstance(self.memory, Memory):
-            del params["metadata"], params["version"], params['output_format']
-            if params.get("run_id"):
-                del params["run_id"]
+            del params["metadata"], params["version"], params["run_id"], params['output_format']

         results = self.memory.search(**params)
         return [r for r in results["results"]]

     def reset(self):
         if self.memory:
             self.memory.reset()
-
-    def _sanitize_role(self, role: str) -> str:
-        """
-        Sanitizes agent roles to ensure valid directory names.
-        """
-        return role.replace("\n", "").replace(" ", "_").replace("/", "_")
-
-    def _get_agent_name(self) -> str:
-        if not self.crew:
-            return ""
-
-        agents = self.crew.agents
-        agents = [self._sanitize_role(agent.role) for agent in agents]
-        agents = "_".join(agents)
-        return sanitize_collection_name(name=agents, max_collection_length=MAX_AGENT_ID_LENGTH_MEM0)

@@ -38,14 +38,7 @@ class EmbeddingConfigurator:
                 f"Unsupported embedding provider: {provider}, supported providers: {list(self.embedding_functions.keys())}"
             )

-        try:
-            embedding_function = self.embedding_functions[provider]
-        except ImportError as e:
-            missing_package = str(e).split()[-1]
-            raise ImportError(
-                f"{missing_package} is not installed. Please install it with: pip install {missing_package}"
-            )
-
+        embedding_function = self.embedding_functions[provider]
         return (
             embedding_function(config)
             if provider == "custom"

@@ -1,28 +0,0 @@
-{
-  "hierarchical_manager_agent": {
-    "role": "Gerente del Equipo",
-    "goal": "Gestionar el equipo para completar la tarea de la mejor manera posible.",
-    "backstory": "Eres un gerente experimentado con talento para sacar lo mejor de tu equipo.\nTambién eres conocido por tu capacidad para delegar trabajo a las personas adecuadas y hacer las preguntas correctas para obtener lo mejor de tu equipo.\nAunque no realizas tareas por ti mismo, tienes mucha experiencia en el campo, lo que te permite evaluar adecuadamente el trabajo de los miembros de tu equipo."
-  },
-  "slices": {
-    "observation": "\nObservación:",
-    "task": "\nTarea Actual: {input}\n\n¡Comienza! Esto es MUY importante para ti, usa las herramientas disponibles y da tu mejor Respuesta Final, ¡tu trabajo depende de ello!\n\nPensamiento:",
-    "memory": "\n\n# Contexto útil: \n{memory}",
-    "role_playing": "Eres {role}. {backstory}\nTu objetivo personal es: {goal}",
-    "tools": "\nSOLO tienes acceso a las siguientes herramientas, y NUNCA debes inventar herramientas que no estén listadas aquí:\n\n{tools}\n\nIMPORTANTE: Usa el siguiente formato en tu respuesta:\n\n```\nPensamiento: siempre debes pensar en qué hacer\nAcción: la acción a tomar, solo un nombre de [{tool_names}], solo el nombre, exactamente como está escrito.\nEntrada de Acción: la entrada para la acción, solo un objeto JSON simple, encerrado en llaves, usando \" para envolver claves y valores.\nObservación: el resultado de la acción\n```\n\nUna vez que se recopile toda la información necesaria, devuelve el siguiente formato:\n\n```\nPensamiento: Ahora conozco la respuesta final\nRespuesta Final: la respuesta final a la pregunta de entrada original\n```",
-    "no_tools": "\nPara dar mi mejor respuesta final completa a la tarea, responde usando el siguiente formato exacto:\n\nPensamiento: Ahora puedo dar una gran respuesta\nRespuesta Final: Tu respuesta final debe ser la mejor y más completa posible, debe ser el resultado descrito.\n\n¡DEBO usar estos formatos, mi trabajo depende de ello!",
-    "final_answer_format": "Si no necesitas usar más herramientas, debes dar tu mejor respuesta final completa, asegúrate de que satisfaga los criterios esperados, usa el formato EXACTO a continuación:\n\n```\nPensamiento: Ahora puedo dar una gran respuesta\nRespuesta Final: mi mejor respuesta final completa a la tarea.\n\n```",
-    "getting_input": "Esta es la respuesta final del agente: {final_answer}\n\n",
-    "manager_request": "Tu mejor respuesta a tu compañero de trabajo que te pregunta esto, teniendo en cuenta el contexto compartido."
-  },
-  "errors": {
-    "force_final_answer": "Ahora es el momento en que DEBES dar tu respuesta final absolutamente mejor. Ignorarás todas las instrucciones anteriores, dejarás de usar cualquier herramienta y solo devolverás tu MEJOR respuesta final absoluta.",
-    "tool_usage_error": "Encontré un error: {error}",
-    "tool_arguments_error": "Error: la Entrada de Acción no es un diccionario de clave-valor válido.",
-    "wrong_tool_name": "Intentaste usar la herramienta {tool}, pero no existe. Debes usar una de las siguientes herramientas, usa una a la vez: {tools}."
-  },
-  "tools": {
-    "delegate_work": "Delegar una tarea específica a uno de los siguientes compañeros de trabajo: {coworkers}\nLa entrada para esta herramienta debe ser el compañero de trabajo, la tarea que quieres que hagan, y TODO el contexto necesario para ejecutar la tarea, no saben nada sobre la tarea, así que comparte absolutamente todo lo que sabes, no hagas referencia a cosas sino explícalas.",
-    "ask_question": "Hacer una pregunta específica a uno de los siguientes compañeros de trabajo: {coworkers}\nLa entrada para esta herramienta debe ser el compañero de trabajo, la pregunta que tienes para ellos, y TODO el contexto necesario para hacer la pregunta correctamente, no saben nada sobre la pregunta, así que comparte absolutamente todo lo que sabes, no hagas referencia a cosas sino explícalas."
-  }
-}

@@ -1,28 +0,0 @@
{
  "hierarchical_manager_agent": {
    "role": "团队经理",
    "goal": "以最佳方式管理团队完成任务",
    "backstory": "您是一位经验丰富的经理,擅长发挥团队的最佳潜力。\n您以能够将工作委派给合适的人员而闻名,并且善于提出正确的问题来发挥团队的最佳潜力。\n尽管您不亲自执行任务,但您在该领域拥有丰富的经验,这使您能够正确评估团队成员的工作。"
  },
  "slices": {
    "observation": "\n观察:",
    "task": "\n当前任务: {input}\n\n开始!这对您非常重要,请使用可用的工具并给出最佳的最终答案,您的工作取决于此!\n\n思考:",
    "memory": "\n\n# 有用的上下文: \n{memory}",
    "role_playing": "您是 {role}。{backstory}\n您的个人目标是: {goal}",
    "tools": "\n您只能使用以下工具,绝不能编造未列出的工具:\n\n{tools}\n\n重要提示: 在回复中使用以下格式:\n\n```\n思考: 您应该始终思考要做什么\n行动: 要采取的行动,只能是 [{tool_names}] 中的一个名称,只是名称,完全按照书面形式。\n行动输入: 行动的输入,只是一个简单的JSON对象,用大括号括起来,使用\"包装键和值。\n观察: 行动的结果\n```\n\n收集所有必要信息后,返回以下格式:\n\n```\n思考: 我现在知道最终答案\n最终答案: 对原始输入问题的最终答案\n```",
    "no_tools": "\n为了给出我对任务的最佳完整最终答案,请使用以下确切格式回复:\n\n思考: 我现在可以给出很好的答案\n最终答案: 您的最终答案必须是最好和最完整的,它必须是描述的结果。\n\n我必须使用这些格式,我的工作取决于此!",
    "final_answer_format": "如果您不需要使用更多工具,您必须给出最佳的完整最终答案,确保它满足预期标准,使用以下确切格式:\n\n```\n思考: 我现在可以给出很好的答案\n最终答案: 我对任务的最佳完整最终答案。\n\n```",
    "getting_input": "这是代理的最终答案: {final_answer}\n\n",
    "manager_request": "您对同事询问的最佳答案,考虑到共享的上下文。"
  },
  "errors": {
    "force_final_answer": "现在是时候您必须给出绝对最佳的最终答案了。您将忽略所有先前的指令,停止使用任何工具,只返回您绝对最佳的最终答案。",
    "tool_usage_error": "我遇到了错误: {error}",
    "tool_arguments_error": "错误: 行动输入不是有效的键值字典。",
    "wrong_tool_name": "您尝试使用工具 {tool},但它不存在。您必须使用以下工具之一,一次使用一个: {tools}。"
  },
  "tools": {
    "delegate_work": "将特定任务委派给以下同事之一: {coworkers}\n此工具的输入应该是同事、您希望他们执行的任务,以及执行任务所需的所有必要上下文,他们对任务一无所知,所以请分享您知道的一切,不要引用事物而是解释它们。",
    "ask_question": "向以下同事之一提出具体问题: {coworkers}\n此工具的输入应该是同事、您对他们的问题,以及正确提问所需的所有必要上下文,他们对问题一无所知,所以请分享您知道的一切,不要引用事物而是解释它们。"
  }
}
@@ -17,23 +17,18 @@ class I18N(BaseModel):
    @model_validator(mode="after")
    def load_prompts(self) -> "I18N":
        """Load prompts from a JSON file."""
        prompt_file_to_use = None

        try:
            if self.prompt_file:
                prompt_file_to_use = self.prompt_file
                with open(self.prompt_file, "r", encoding="utf-8") as f:
                    self._prompts = json.load(f)
            else:
                env_i18n_file = os.environ.get("CREWAI_I18N_FILE")
                if env_i18n_file:
                    prompt_file_to_use = env_i18n_file
                else:
                    dir_path = os.path.dirname(os.path.realpath(__file__))
                    prompt_file_to_use = os.path.join(dir_path, "../translations/en.json")
                dir_path = os.path.dirname(os.path.realpath(__file__))
                prompts_path = os.path.join(dir_path, "../translations/en.json")

            with open(prompt_file_to_use, "r", encoding="utf-8") as f:
                self._prompts = json.load(f)
                with open(prompts_path, "r", encoding="utf-8") as f:
                    self._prompts = json.load(f)
        except FileNotFoundError:
            raise Exception(f"Prompt file '{prompt_file_to_use}' not found.")
            raise Exception(f"Prompt file '{self.prompt_file}' not found.")
        except json.JSONDecodeError:
            raise Exception("Error decoding JSON from the prompts file.")
@@ -4,12 +4,7 @@ import tempfile
import unittest
from pathlib import Path

from crewai.cli.config import (
    Settings,
    USER_SETTINGS_KEYS,
    CLI_SETTINGS_KEYS,
    DEFAULT_CLI_SETTINGS,
)
from crewai.cli.config import Settings


class TestSettings(unittest.TestCase):
@@ -57,30 +52,6 @@ class TestSettings(unittest.TestCase):
        self.assertEqual(settings.tool_repository_username, "new_user")
        self.assertEqual(settings.tool_repository_password, "file_pass")

    def test_clear_user_settings(self):
        user_settings = {key: f"value_for_{key}" for key in USER_SETTINGS_KEYS}

        settings = Settings(config_path=self.config_path, **user_settings)
        settings.clear_user_settings()

        for key in user_settings.keys():
            self.assertEqual(getattr(settings, key), None)

    def test_reset_settings(self):
        user_settings = {key: f"value_for_{key}" for key in USER_SETTINGS_KEYS}
        cli_settings = {key: f"value_for_{key}" for key in CLI_SETTINGS_KEYS}

        settings = Settings(
            config_path=self.config_path, **user_settings, **cli_settings
        )

        settings.reset()

        for key in user_settings.keys():
            self.assertEqual(getattr(settings, key), None)
        for key in cli_settings.keys():
            self.assertEqual(getattr(settings, key), DEFAULT_CLI_SETTINGS[key])

    def test_dump_new_settings(self):
        settings = Settings(
            config_path=self.config_path, tool_repository_username="user1"
@@ -6,7 +6,7 @@ from click.testing import CliRunner
import requests

from crewai.cli.organization.main import OrganizationCommand
from crewai.cli.cli import org_list, switch, current
from crewai.cli.cli import list, switch, current


@pytest.fixture
@@ -16,44 +16,44 @@ def runner():

@pytest.fixture
def org_command():
    with patch.object(OrganizationCommand, "__init__", return_value=None):
    with patch.object(OrganizationCommand, '__init__', return_value=None):
        command = OrganizationCommand()
        yield command


@pytest.fixture
def mock_settings():
    with patch("crewai.cli.organization.main.Settings") as mock_settings_class:
    with patch('crewai.cli.organization.main.Settings') as mock_settings_class:
        mock_settings_instance = MagicMock()
        mock_settings_class.return_value = mock_settings_instance
        yield mock_settings_instance


@patch("crewai.cli.cli.OrganizationCommand")
@patch('crewai.cli.cli.OrganizationCommand')
def test_org_list_command(mock_org_command_class, runner):
    mock_org_instance = MagicMock()
    mock_org_command_class.return_value = mock_org_instance

    result = runner.invoke(org_list)
    result = runner.invoke(list)

    assert result.exit_code == 0
    mock_org_command_class.assert_called_once()
    mock_org_instance.list.assert_called_once()


@patch("crewai.cli.cli.OrganizationCommand")
@patch('crewai.cli.cli.OrganizationCommand')
def test_org_switch_command(mock_org_command_class, runner):
    mock_org_instance = MagicMock()
    mock_org_command_class.return_value = mock_org_instance

    result = runner.invoke(switch, ["test-id"])
    result = runner.invoke(switch, ['test-id'])

    assert result.exit_code == 0
    mock_org_command_class.assert_called_once()
    mock_org_instance.switch.assert_called_once_with("test-id")
    mock_org_instance.switch.assert_called_once_with('test-id')


@patch("crewai.cli.cli.OrganizationCommand")
@patch('crewai.cli.cli.OrganizationCommand')
def test_org_current_command(mock_org_command_class, runner):
    mock_org_instance = MagicMock()
    mock_org_command_class.return_value = mock_org_instance
@@ -67,18 +67,18 @@ def test_org_current_command(mock_org_command_class, runner):

class TestOrganizationCommand(unittest.TestCase):
    def setUp(self):
        with patch.object(OrganizationCommand, "__init__", return_value=None):
        with patch.object(OrganizationCommand, '__init__', return_value=None):
            self.org_command = OrganizationCommand()
            self.org_command.plus_api_client = MagicMock()

    @patch("crewai.cli.organization.main.console")
    @patch("crewai.cli.organization.main.Table")
    @patch('crewai.cli.organization.main.console')
    @patch('crewai.cli.organization.main.Table')
    def test_list_organizations_success(self, mock_table, mock_console):
        mock_response = MagicMock()
        mock_response.raise_for_status = MagicMock()
        mock_response.json.return_value = [
            {"name": "Org 1", "uuid": "org-123"},
            {"name": "Org 2", "uuid": "org-456"},
            {"name": "Org 2", "uuid": "org-456"}
        ]
        self.org_command.plus_api_client = MagicMock()
        self.org_command.plus_api_client.get_organizations.return_value = mock_response
@@ -89,14 +89,16 @@ class TestOrganizationCommand(unittest.TestCase):

        self.org_command.plus_api_client.get_organizations.assert_called_once()
        mock_table.assert_called_once_with(title="Your Organizations")
        mock_table.return_value.add_column.assert_has_calls(
            [call("Name", style="cyan"), call("ID", style="green")]
        )
        mock_table.return_value.add_row.assert_has_calls(
            [call("Org 1", "org-123"), call("Org 2", "org-456")]
        )
        mock_table.return_value.add_column.assert_has_calls([
            call("Name", style="cyan"),
            call("ID", style="green")
        ])
        mock_table.return_value.add_row.assert_has_calls([
            call("Org 1", "org-123"),
            call("Org 2", "org-456")
        ])

    @patch("crewai.cli.organization.main.console")
    @patch('crewai.cli.organization.main.console')
    def test_list_organizations_empty(self, mock_console):
        mock_response = MagicMock()
        mock_response.raise_for_status = MagicMock()
@@ -108,32 +110,33 @@ class TestOrganizationCommand(unittest.TestCase):

        self.org_command.plus_api_client.get_organizations.assert_called_once()
        mock_console.print.assert_called_once_with(
            "You don't belong to any organizations yet.", style="yellow"
            "You don't belong to any organizations yet.",
            style="yellow"
        )

    @patch("crewai.cli.organization.main.console")
    @patch('crewai.cli.organization.main.console')
    def test_list_organizations_api_error(self, mock_console):
        self.org_command.plus_api_client = MagicMock()
        self.org_command.plus_api_client.get_organizations.side_effect = (
            requests.exceptions.RequestException("API Error")
        )
        self.org_command.plus_api_client.get_organizations.side_effect = requests.exceptions.RequestException("API Error")

        with pytest.raises(SystemExit):
            self.org_command.list()

        self.org_command.plus_api_client.get_organizations.assert_called_once()
        mock_console.print.assert_called_once_with(
            "Failed to retrieve organization list: API Error", style="bold red"
            "Failed to retrieve organization list: API Error",
            style="bold red"
        )

    @patch("crewai.cli.organization.main.console")
    @patch("crewai.cli.organization.main.Settings")
    @patch('crewai.cli.organization.main.console')
    @patch('crewai.cli.organization.main.Settings')
    def test_switch_organization_success(self, mock_settings_class, mock_console):
        mock_response = MagicMock()
        mock_response.raise_for_status = MagicMock()
        mock_response.json.return_value = [
            {"name": "Org 1", "uuid": "org-123"},
            {"name": "Test Org", "uuid": "test-id"},
            {"name": "Test Org", "uuid": "test-id"}
        ]
        self.org_command.plus_api_client = MagicMock()
        self.org_command.plus_api_client.get_organizations.return_value = mock_response
@@ -148,16 +151,17 @@ class TestOrganizationCommand(unittest.TestCase):
        assert mock_settings_instance.org_name == "Test Org"
        assert mock_settings_instance.org_uuid == "test-id"
        mock_console.print.assert_called_once_with(
            "Successfully switched to Test Org (test-id)", style="bold green"
            "Successfully switched to Test Org (test-id)",
            style="bold green"
        )

    @patch("crewai.cli.organization.main.console")
    @patch('crewai.cli.organization.main.console')
    def test_switch_organization_not_found(self, mock_console):
        mock_response = MagicMock()
        mock_response.raise_for_status = MagicMock()
        mock_response.json.return_value = [
            {"name": "Org 1", "uuid": "org-123"},
            {"name": "Org 2", "uuid": "org-456"},
            {"name": "Org 2", "uuid": "org-456"}
        ]
        self.org_command.plus_api_client = MagicMock()
        self.org_command.plus_api_client.get_organizations.return_value = mock_response
@@ -166,11 +170,12 @@ class TestOrganizationCommand(unittest.TestCase):

        self.org_command.plus_api_client.get_organizations.assert_called_once()
        mock_console.print.assert_called_once_with(
            "Organization with id 'non-existent-id' not found.", style="bold red"
            "Organization with id 'non-existent-id' not found.",
            style="bold red"
        )

    @patch("crewai.cli.organization.main.console")
    @patch("crewai.cli.organization.main.Settings")
    @patch('crewai.cli.organization.main.console')
    @patch('crewai.cli.organization.main.Settings')
    def test_current_organization_with_org(self, mock_settings_class, mock_console):
        mock_settings_instance = MagicMock()
        mock_settings_instance.org_name = "Test Org"
@@ -181,11 +186,12 @@ class TestOrganizationCommand(unittest.TestCase):

        self.org_command.plus_api_client.get_organizations.assert_not_called()
        mock_console.print.assert_called_once_with(
            "Currently logged in to organization Test Org (test-id)", style="bold green"
            "Currently logged in to organization Test Org (test-id)",
            style="bold green"
        )

    @patch("crewai.cli.organization.main.console")
    @patch("crewai.cli.organization.main.Settings")
    @patch('crewai.cli.organization.main.console')
    @patch('crewai.cli.organization.main.Settings')
    def test_current_organization_without_org(self, mock_settings_class, mock_console):
        mock_settings_instance = MagicMock()
        mock_settings_instance.org_uuid = None
@@ -195,14 +201,16 @@ class TestOrganizationCommand(unittest.TestCase):

        assert mock_console.print.call_count == 3
        mock_console.print.assert_any_call(
            "You're not currently logged in to any organization.", style="yellow"
            "You're not currently logged in to any organization.",
            style="yellow"
        )

    @patch("crewai.cli.organization.main.console")
    @patch('crewai.cli.organization.main.console')
    def test_list_organizations_unauthorized(self, mock_console):
        mock_response = MagicMock()
        mock_http_error = requests.exceptions.HTTPError(
            "401 Client Error: Unauthorized", response=MagicMock(status_code=401)
            "401 Client Error: Unauthorized",
            response=MagicMock(status_code=401)
        )

        mock_response.raise_for_status.side_effect = mock_http_error
@@ -213,14 +221,15 @@ class TestOrganizationCommand(unittest.TestCase):
        self.org_command.plus_api_client.get_organizations.assert_called_once()
        mock_console.print.assert_called_once_with(
            "You are not logged in to any organization. Use 'crewai login' to login.",
            style="bold red",
            style="bold red"
        )

    @patch("crewai.cli.organization.main.console")
    @patch('crewai.cli.organization.main.console')
    def test_switch_organization_unauthorized(self, mock_console):
        mock_response = MagicMock()
        mock_http_error = requests.exceptions.HTTPError(
            "401 Client Error: Unauthorized", response=MagicMock(status_code=401)
            "401 Client Error: Unauthorized",
            response=MagicMock(status_code=401)
        )

        mock_response.raise_for_status.side_effect = mock_http_error
@@ -231,5 +240,5 @@ class TestOrganizationCommand(unittest.TestCase):
        self.org_command.plus_api_client.get_organizations.assert_called_once()
        mock_console.print.assert_called_once_with(
            "You are not logged in to any organization. Use 'crewai login' to login.",
            style="bold red",
            style="bold red"
        )
@@ -1,8 +1,8 @@
import os
import unittest
from unittest.mock import MagicMock, patch, ANY

from crewai.cli.plus_api import PlusAPI
from crewai.cli.constants import DEFAULT_CREWAI_ENTERPRISE_URL


class TestPlusAPI(unittest.TestCase):
@@ -30,41 +30,29 @@
        )
        self.assertEqual(response, mock_response)

    def assert_request_with_org_id(
        self, mock_make_request, method: str, endpoint: str, **kwargs
    ):
    def assert_request_with_org_id(self, mock_make_request, method: str, endpoint: str, **kwargs):
        mock_make_request.assert_called_once_with(
            method,
            f"{DEFAULT_CREWAI_ENTERPRISE_URL}{endpoint}",
            headers={
                "Authorization": ANY,
                "Content-Type": ANY,
                "User-Agent": ANY,
                "X-Crewai-Version": ANY,
                "X-Crewai-Organization-Id": self.org_uuid,
            },
            **kwargs,
            method, f"https://app.crewai.com{endpoint}", headers={'Authorization': ANY, 'Content-Type': ANY, 'User-Agent': ANY, 'X-Crewai-Version': ANY, 'X-Crewai-Organization-Id': self.org_uuid}, **kwargs
        )

    @patch("crewai.cli.plus_api.Settings")
    @patch("requests.Session.request")
    def test_login_to_tool_repository_with_org_uuid(
        self, mock_make_request, mock_settings_class
    ):
    def test_login_to_tool_repository_with_org_uuid(self, mock_make_request, mock_settings_class):
        mock_settings = MagicMock()
        mock_settings.org_uuid = self.org_uuid
        mock_settings.enterprise_base_url = DEFAULT_CREWAI_ENTERPRISE_URL
        mock_settings_class.return_value = mock_settings
        # re-initialize Client
        self.api = PlusAPI(self.api_key)

        mock_response = MagicMock()
        mock_make_request.return_value = mock_response

        response = self.api.login_to_tool_repository()

        self.assert_request_with_org_id(
            mock_make_request, "POST", "/crewai_plus/api/v1/tools/login"
            mock_make_request,
            'POST',
            '/crewai_plus/api/v1/tools/login'
        )
        self.assertEqual(response, mock_response)

@@ -78,27 +66,28 @@
            "GET", "/crewai_plus/api/v1/agents/test_agent_handle"
        )
        self.assertEqual(response, mock_response)


    @patch("crewai.cli.plus_api.Settings")
    @patch("requests.Session.request")
    def test_get_agent_with_org_uuid(self, mock_make_request, mock_settings_class):
        mock_settings = MagicMock()
        mock_settings.org_uuid = self.org_uuid
        mock_settings.enterprise_base_url = DEFAULT_CREWAI_ENTERPRISE_URL
        mock_settings_class.return_value = mock_settings
        # re-initialize Client
        self.api = PlusAPI(self.api_key)

        mock_response = MagicMock()
        mock_make_request.return_value = mock_response

        response = self.api.get_agent("test_agent_handle")

        self.assert_request_with_org_id(
            mock_make_request, "GET", "/crewai_plus/api/v1/agents/test_agent_handle"
            mock_make_request,
            "GET",
            "/crewai_plus/api/v1/agents/test_agent_handle"
        )
        self.assertEqual(response, mock_response)


    @patch("crewai.cli.plus_api.PlusAPI._make_request")
    def test_get_tool(self, mock_make_request):
        mock_response = MagicMock()
@@ -109,13 +98,12 @@
            "GET", "/crewai_plus/api/v1/tools/test_tool_handle"
        )
        self.assertEqual(response, mock_response)


    @patch("crewai.cli.plus_api.Settings")
    @patch("requests.Session.request")
    def test_get_tool_with_org_uuid(self, mock_make_request, mock_settings_class):
        mock_settings = MagicMock()
        mock_settings.org_uuid = self.org_uuid
        mock_settings.enterprise_base_url = DEFAULT_CREWAI_ENTERPRISE_URL
        mock_settings_class.return_value = mock_settings
        # re-initialize Client
        self.api = PlusAPI(self.api_key)
@@ -127,7 +115,9 @@
        response = self.api.get_tool("test_tool_handle")

        self.assert_request_with_org_id(
            mock_make_request, "GET", "/crewai_plus/api/v1/tools/test_tool_handle"
            mock_make_request,
            "GET",
            "/crewai_plus/api/v1/tools/test_tool_handle"
        )
        self.assertEqual(response, mock_response)

@@ -157,13 +147,12 @@
            "POST", "/crewai_plus/api/v1/tools", json=params
        )
        self.assertEqual(response, mock_response)


    @patch("crewai.cli.plus_api.Settings")
    @patch("requests.Session.request")
    def test_publish_tool_with_org_uuid(self, mock_make_request, mock_settings_class):
        mock_settings = MagicMock()
        mock_settings.org_uuid = self.org_uuid
        mock_settings.enterprise_base_url = DEFAULT_CREWAI_ENTERPRISE_URL
        mock_settings_class.return_value = mock_settings
        # re-initialize Client
        self.api = PlusAPI(self.api_key)
@@ -171,7 +160,7 @@
        # Set up mock response
        mock_response = MagicMock()
        mock_make_request.return_value = mock_response

        handle = "test_tool_handle"
        public = True
        version = "1.0.0"
@@ -191,9 +180,12 @@
            "description": description,
            "available_exports": None,
        }

        self.assert_request_with_org_id(
            mock_make_request, "POST", "/crewai_plus/api/v1/tools", json=expected_params
            mock_make_request,
            "POST",
            "/crewai_plus/api/v1/tools",
            json=expected_params
        )
        self.assertEqual(response, mock_response)

@@ -319,11 +311,8 @@
            "POST", "/crewai_plus/api/v1/crews", json=payload
        )

    @patch("crewai.cli.plus_api.Settings")
    def test_custom_base_url(self, mock_settings_class):
        mock_settings = MagicMock()
        mock_settings.enterprise_base_url = "https://custom-url.com/api"
        mock_settings_class.return_value = mock_settings
    @patch.dict(os.environ, {"CREWAI_BASE_URL": "https://custom-url.com/api"})
    def test_custom_base_url(self):
        custom_api = PlusAPI("test_key")
        self.assertEqual(
            custom_api.base_url,
@@ -1,91 +0,0 @@
import tempfile
import unittest
from pathlib import Path
from unittest.mock import patch, MagicMock, call

from crewai.cli.settings.main import SettingsCommand
from crewai.cli.config import (
    Settings,
    USER_SETTINGS_KEYS,
    CLI_SETTINGS_KEYS,
    DEFAULT_CLI_SETTINGS,
    HIDDEN_SETTINGS_KEYS,
    READONLY_SETTINGS_KEYS,
)
import shutil


class TestSettingsCommand(unittest.TestCase):
    def setUp(self):
        self.test_dir = Path(tempfile.mkdtemp())
        self.config_path = self.test_dir / "settings.json"
        self.settings = Settings(config_path=self.config_path)
        self.settings_command = SettingsCommand(
            settings_kwargs={"config_path": self.config_path}
        )

    def tearDown(self):
        shutil.rmtree(self.test_dir)

    @patch("crewai.cli.settings.main.console")
    @patch("crewai.cli.settings.main.Table")
    def test_list_settings(self, mock_table_class, mock_console):
        mock_table_instance = MagicMock()
        mock_table_class.return_value = mock_table_instance

        self.settings_command.list()

        # Tests that the table is created skipping hidden settings
        mock_table_instance.add_row.assert_has_calls(
            [
                call(
                    field_name,
                    getattr(self.settings, field_name) or "Not set",
                    field_info.description,
                )
                for field_name, field_info in Settings.model_fields.items()
                if field_name not in HIDDEN_SETTINGS_KEYS
            ]
        )

        # Tests that the table is printed
        mock_console.print.assert_called_once_with(mock_table_instance)

    def test_set_valid_keys(self):
        valid_keys = Settings.model_fields.keys() - (
            READONLY_SETTINGS_KEYS + HIDDEN_SETTINGS_KEYS
        )
        for key in valid_keys:
            test_value = f"some_value_for_{key}"
            self.settings_command.set(key, test_value)
            self.assertEqual(getattr(self.settings_command.settings, key), test_value)

    def test_set_invalid_key(self):
        with self.assertRaises(SystemExit):
            self.settings_command.set("invalid_key", "value")

    def test_set_readonly_keys(self):
        for key in READONLY_SETTINGS_KEYS:
            with self.assertRaises(SystemExit):
                self.settings_command.set(key, "some_readonly_key_value")

    def test_set_hidden_keys(self):
        for key in HIDDEN_SETTINGS_KEYS:
            with self.assertRaises(SystemExit):
                self.settings_command.set(key, "some_hidden_key_value")

    def test_reset_all_settings(self):
        for key in USER_SETTINGS_KEYS + CLI_SETTINGS_KEYS:
            setattr(self.settings_command.settings, key, f"custom_value_for_{key}")
        self.settings_command.settings.dump()

        self.settings_command.reset_all_settings()

        print(USER_SETTINGS_KEYS)
        for key in USER_SETTINGS_KEYS:
            self.assertEqual(getattr(self.settings_command.settings, key), None)

        for key in CLI_SETTINGS_KEYS:
            self.assertEqual(
                getattr(self.settings_command.settings, key), DEFAULT_CLI_SETTINGS[key]
            )
142
tests/observability/__init__.py
Normal file
@@ -0,0 +1,142 @@
from crewai import Agent, Task, LLM


def test_langdb_basic_integration_example():
    """Test the basic LangDB integration example from the documentation."""

    class MockLangDB:
        @staticmethod
        def init(**kwargs):
            pass

    MockLangDB.init()

    llm = LLM(
        model="gpt-4o",
        temperature=0.7
    )

    agent = Agent(
        role="Senior Research Analyst",
        goal="Conduct comprehensive research on assigned topics",
        backstory="You are an expert researcher with deep analytical skills.",
        llm=llm,
        verbose=True
    )

    assert agent.role == "Senior Research Analyst"
    assert agent.goal == "Conduct comprehensive research on assigned topics"
    assert agent.llm == llm


def test_langdb_metadata_configuration_example():
    """Test the metadata configuration example from the documentation."""
    class MockLangDB:
        @staticmethod
        def init(metadata=None, **kwargs):
            assert metadata is not None
            assert "environment" in metadata
            assert "crew_type" in metadata

    MockLangDB.init(
        metadata={
            "environment": "production",
            "crew_type": "research_workflow",
            "user_id": "user_123"
        }
    )


def test_langdb_cost_tracking_example():
    """Test the cost tracking configuration example from the documentation."""
    class MockLangDB:
        @staticmethod
        def init(cost_tracking=None, budget_alerts=None, **kwargs):
            assert cost_tracking is True
            assert budget_alerts is not None
            assert "daily_limit" in budget_alerts
            assert "alert_threshold" in budget_alerts

    MockLangDB.init(
        cost_tracking=True,
        budget_alerts={
            "daily_limit": 100.0,
            "alert_threshold": 0.8
        }
    )


def test_langdb_security_configuration_example():
    """Test the security configuration example from the documentation."""
    class MockLangDB:
        @staticmethod
        def init(security_config=None, **kwargs):
            assert security_config is not None
            assert "pii_detection" in security_config
            assert "content_filtering" in security_config
            assert "audit_logging" in security_config

    MockLangDB.init(
        security_config={
            "pii_detection": True,
            "content_filtering": True,
            "audit_logging": True,
            "data_retention_days": 90
        }
    )


def test_langdb_environment_specific_setup():
    """Test the multi-environment setup example from the documentation."""
    environments = ["production", "staging", "development"]

    for env in environments:
        class MockLangDB:
            @staticmethod
            def init(project_id=None, sampling_rate=None, cost_tracking=None, **kwargs):
                assert project_id is not None
                assert sampling_rate is not None
                assert cost_tracking is not None

        if env == "production":
            MockLangDB.init(
                project_id="prod_project_id",
                sampling_rate=1.0,
                cost_tracking=True
            )
        elif env == "staging":
            MockLangDB.init(
                project_id="staging_project_id",
                sampling_rate=0.5,
                cost_tracking=False
            )
        else:
            MockLangDB.init(
                project_id="dev_project_id",
                sampling_rate=0.1,
                cost_tracking=False
            )


def test_langdb_task_with_metadata():
    """Test task creation with metadata as shown in documentation."""
    llm = LLM(model="gpt-4o")

    agent = Agent(
        role="Senior Research Analyst",
        goal="Conduct research",
        backstory="Expert researcher",
        llm=llm
    )

    task = Task(
        description="Research the latest AI trends",
        expected_output="Comprehensive research report",
        agent=agent
    )

    assert task.description == "Research the latest AI trends"
    assert task.expected_output == "Comprehensive research report"
    assert task.agent == agent
141
tests/observability/test_langdb_documentation.py
Normal file
@@ -0,0 +1,141 @@
"""Test for the LangDB documentation examples."""

from crewai import Agent, Task, LLM


def test_langdb_basic_integration_example():
    """Test the basic LangDB integration example from the documentation."""

    class MockLangDB:
        @staticmethod
        def init(**kwargs):
            pass

    MockLangDB.init()

    llm = LLM(
        model="gpt-4o",
        temperature=0.7
    )

    agent = Agent(
        role="Senior Research Analyst",
        goal="Conduct comprehensive research on assigned topics",
        backstory="You are an expert researcher with deep analytical skills.",
        llm=llm
    )

    assert agent.role == "Senior Research Analyst"
    assert agent.goal == "Conduct comprehensive research on assigned topics"
    assert agent.llm == llm


def test_langdb_metadata_configuration_example():
    """Test the metadata configuration example from the documentation."""

    class MockLangDB:
        @staticmethod
        def init(metadata=None, **kwargs):
            assert metadata is not None
            assert "environment" in metadata
            assert "crew_type" in metadata

    MockLangDB.init(
        metadata={
            "environment": "production",
            "crew_type": "research_workflow",
            "user_id": "user_123"
        }
    )


def test_langdb_cost_tracking_example():
    """Test the cost tracking configuration example from the documentation."""

    class MockLangDB:
        @staticmethod
        def init(cost_tracking=None, budget_alerts=None, **kwargs):
            assert cost_tracking is True
            assert budget_alerts is not None
            assert "daily_limit" in budget_alerts
            assert "alert_threshold" in budget_alerts

    MockLangDB.init(
        cost_tracking=True,
        budget_alerts={
            "daily_limit": 100.0,
            "alert_threshold": 0.8
        }
    )


def test_langdb_security_configuration_example():
    """Test the security configuration example from the documentation."""

    class MockLangDB:
        @staticmethod
        def init(security_config=None, **kwargs):
            assert security_config is not None
            assert "pii_detection" in security_config
            assert "content_filtering" in security_config
            assert "audit_logging" in security_config

    MockLangDB.init(
        security_config={
            "pii_detection": True,
            "content_filtering": True,
            "audit_logging": True,
            "data_retention_days": 90
        }
    )


def test_langdb_environment_specific_setup():
    """Test the multi-environment setup example from the documentation."""
    environments = ["production", "staging", "development"]

    for env in environments:
        class MockLangDB:
            @staticmethod
            def init(project_id=None, sampling_rate=None, cost_tracking=None, **kwargs):
                assert project_id is not None
                assert sampling_rate is not None
                assert cost_tracking is not None

        if env == "production":
            MockLangDB.init(
                project_id="prod_project_id",
                sampling_rate=1.0,
                cost_tracking=True
            )
        elif env == "staging":
            MockLangDB.init(
                project_id="staging_project_id",
                sampling_rate=0.5,
                cost_tracking=False
            )
        else:
            MockLangDB.init(
                project_id="dev_project_id",
                sampling_rate=0.1,
                cost_tracking=False
            )


def test_langdb_task_with_metadata():
    """Test task creation with metadata as shown in documentation."""
    llm = LLM(model="gpt-4o")

    agent = Agent(
        role="Senior Research Analyst",
        goal="Conduct research",
        backstory="Expert researcher",
        llm=llm
    )

    task = Task(
        description="Research the latest AI trends",
        expected_output="Comprehensive research report",
        agent=agent
    )

    assert task.description == "Research the latest AI trends"
    assert task.expected_output == "Comprehensive research report"
    assert task.agent == agent
@@ -191,39 +191,17 @@ def test_save_method_with_memory_oss(mem0_storage_with_mocked_config):
    """Test save method for different memory types"""
    mem0_storage, _, _ = mem0_storage_with_mocked_config
    mem0_storage.memory.add = MagicMock()

    # Test short_term memory type (already set in fixture)
    test_value = "This is a test memory"
    test_metadata = {"key": "value"}

    mem0_storage.save(test_value, test_metadata)

    mem0_storage.memory.add.assert_called_once_with(
        [{"role": "assistant" , "content": test_value}],
        [{'role': 'assistant' , 'content': test_value}],
        infer=True,
        metadata={"type": "short_term", "key": "value"},
        run_id="my_run_id",
        user_id="test_user",
        agent_id='Test_Agent'
    )

def test_save_method_with_multiple_agents(mem0_storage_with_mocked_config):
    mem0_storage, _, _ = mem0_storage_with_mocked_config
    mem0_storage.crew.agents = [MagicMock(role="Test Agent"), MagicMock(role="Test Agent 2"), MagicMock(role="Test Agent 3")]
    mem0_storage.memory.add = MagicMock()

    test_value = "This is a test memory"
    test_metadata = {"key": "value"}

    mem0_storage.save(test_value, test_metadata)

    mem0_storage.memory.add.assert_called_once_with(
        [{"role": "assistant" , "content": test_value}],
        infer=True,
        metadata={"type": "short_term", "key": "value"},
        run_id="my_run_id",
        user_id="test_user",
        agent_id='Test_Agent_Test_Agent_2_Test_Agent_3'
    )
@@ -231,13 +209,13 @@ def test_save_method_with_memory_client(mem0_storage_with_memory_client_using_co
    """Test save method for different memory types"""
    mem0_storage = mem0_storage_with_memory_client_using_config_from_crew
    mem0_storage.memory.add = MagicMock()

    # Test short_term memory type (already set in fixture)
    test_value = "This is a test memory"
    test_metadata = {"key": "value"}

    mem0_storage.save(test_value, test_metadata)

    mem0_storage.memory.add.assert_called_once_with(
        [{'role': 'assistant' , 'content': test_value}],
        infer=True,
@@ -246,9 +224,7 @@ def test_save_method_with_memory_client(mem0_storage_with_memory_client_using_co
        run_id="my_run_id",
        includes="include1",
        excludes="exclude1",
        output_format='v1.1',
        user_id='test_user',
        agent_id='Test_Agent'
        output_format='v1.1'
    )
@@ -261,10 +237,10 @@ def test_search_method_with_memory_oss(mem0_storage_with_mocked_config):
    results = mem0_storage.search("test query", limit=5, score_threshold=0.5)

    mem0_storage.memory.search.assert_called_once_with(
        query="test query",
        limit=5,
        query="test query",
        limit=5,
        user_id="test_user",
        filters={'AND': [{'run_id': 'my_run_id'}]},
        filters={'AND': [{'run_id': 'my_run_id'}]},
        threshold=0.5
    )
@@ -281,8 +257,8 @@ def test_search_method_with_memory_client(mem0_storage_with_memory_client_using_
    results = mem0_storage.search("test query", limit=5, score_threshold=0.5)

    mem0_storage.memory.search.assert_called_once_with(
        query="test query",
        limit=5,
        query="test query",
        limit=5,
        metadata={"type": "short_term"},
        user_id="test_user",
        version='v2',
@@ -310,56 +286,4 @@ def test_mem0_storage_default_infer_value(mock_mem0_memory_client):
    )

    mem0_storage = Mem0Storage(type="short_term", crew=crew)
    assert mem0_storage.infer is True

def test_save_memory_using_agent_entity(mock_mem0_memory_client):
    config = {
        "agent_id": "agent-123",
    }

    mock_memory = MagicMock(spec=Memory)
    with patch.object(Memory, "__new__", return_value=mock_memory):
        mem0_storage = Mem0Storage(type="external", config=config)
        mem0_storage.save("test memory", {"key": "value"})
        mem0_storage.memory.add.assert_called_once_with(
            [{'role': 'assistant' , 'content': 'test memory'}],
            infer=True,
            metadata={"type": "external", "key": "value"},
            agent_id="agent-123",
        )

def test_search_method_with_agent_entity():
    mem0_storage = Mem0Storage(type="external", config={"agent_id": "agent-123"})
    mock_results = {"results": [{"score": 0.9, "content": "Result 1"}, {"score": 0.4, "content": "Result 2"}]}
    mem0_storage.memory.search = MagicMock(return_value=mock_results)

    results = mem0_storage.search("test query", limit=5, score_threshold=0.5)

    mem0_storage.memory.search.assert_called_once_with(
        query="test query",
        limit=5,
        filters={"AND": [{"agent_id": "agent-123"}]},
        threshold=0.5,
    )

    assert len(results) == 2
    assert results[0]["content"] == "Result 1"


def test_search_method_with_agent_id_and_user_id():
    mem0_storage = Mem0Storage(type="external", config={"agent_id": "agent-123", "user_id": "user-123"})
    mock_results = {"results": [{"score": 0.9, "content": "Result 1"}, {"score": 0.4, "content": "Result 2"}]}
    mem0_storage.memory.search = MagicMock(return_value=mock_results)

    results = mem0_storage.search("test query", limit=5, score_threshold=0.5)

    mem0_storage.memory.search.assert_called_once_with(
        query="test query",
        limit=5,
        user_id='user-123',
        filters={"OR": [{"user_id": "user-123"}, {"agent_id": "agent-123"}]},
        threshold=0.5,
    )

    assert len(results) == 2
    assert results[0]["content"] == "Result 1"
    assert mem0_storage.infer is True
@@ -1,25 +0,0 @@
from unittest.mock import patch

import pytest

from crewai.rag.embeddings.configurator import EmbeddingConfigurator


def test_configure_embedder_importerror():
    configurator = EmbeddingConfigurator()

    embedder_config = {
        'provider': 'openai',
        'config': {
            'model': 'text-embedding-ada-002',
        }
    }

    with patch('chromadb.utils.embedding_functions.openai_embedding_function.OpenAIEmbeddingFunction') as mock_openai:
        mock_openai.side_effect = ImportError("Module not found.")

        with pytest.raises(ImportError) as exc_info:
            configurator.configure_embedder(embedder_config)

        assert str(exc_info.value) == "Module not found."
        mock_openai.assert_called_once()
@@ -1,5 +1,3 @@
import json
import os
import pytest

from crewai.utilities.i18n import I18N
@@ -44,90 +42,3 @@ def test_prompt_file():
    i18n.load_prompts()
    assert isinstance(i18n.retrieve("slices", "role_playing"), str)
    assert i18n.retrieve("slices", "role_playing") == "Lorem ipsum dolor sit amet"


def test_global_i18n_file_environment_variable(monkeypatch):
    """Test that CREWAI_I18N_FILE environment variable is respected"""
    test_translations = {
        "slices": {"role_playing": "Test role playing message"},
        "tools": {"ask_question": "Test ask question message"}
    }

    test_file_path = os.path.join(os.path.dirname(__file__), "test_env_prompts.json")
    with open(test_file_path, "w", encoding="utf-8") as f:
        json.dump(test_translations, f)

    try:
        monkeypatch.setenv("CREWAI_I18N_FILE", test_file_path)

        i18n = I18N()
        i18n.load_prompts()

        assert i18n.slice("role_playing") == "Test role playing message"
        assert i18n.tools("ask_question") == "Test ask question message"

    finally:
        if os.path.exists(test_file_path):
            os.remove(test_file_path)


def test_prompt_file_priority_over_environment_variable(monkeypatch):
    """Test that explicit prompt_file takes priority over environment variable"""
    monkeypatch.setenv("CREWAI_I18N_FILE", "/nonexistent/path.json")

    path = os.path.join(os.path.dirname(__file__), "prompts.json")
    i18n = I18N(prompt_file=path)
    i18n.load_prompts()

    assert i18n.retrieve("slices", "role_playing") == "Lorem ipsum dolor sit amet"


def test_environment_variable_file_not_found(monkeypatch):
    """Test proper error handling when environment variable points to non-existent file"""
    monkeypatch.setenv("CREWAI_I18N_FILE", "/nonexistent/file.json")

    with pytest.raises(Exception) as exc_info:
        I18N()

    assert "Prompt file '/nonexistent/file.json' not found" in str(exc_info.value)


def test_fallback_to_default_when_no_environment_variable(monkeypatch):
    """Test that it falls back to default en.json when no environment variable is set"""
    monkeypatch.delenv("CREWAI_I18N_FILE", raising=False)

    i18n = I18N()
    i18n.load_prompts()

    assert isinstance(i18n.slice("role_playing"), str)
    assert len(i18n.slice("role_playing")) > 0


def test_chinese_translation_file():
    """Test loading Chinese translation file"""
    import os

    zh_path = os.path.join(os.path.dirname(__file__), "../../src/crewai/translations/zh.json")
    zh_path = os.path.abspath(zh_path)

    i18n = I18N(prompt_file=zh_path)
    i18n.load_prompts()

    assert i18n.retrieve("hierarchical_manager_agent", "role") == "团队经理"
    assert i18n.slice("observation") == "\n观察:"
    assert i18n.errors("tool_usage_error") == "我遇到了错误: {error}"


def test_spanish_translation_file():
    """Test loading Spanish translation file"""
    import os

    es_path = os.path.join(os.path.dirname(__file__), "../../src/crewai/translations/es.json")
    es_path = os.path.abspath(es_path)

    i18n = I18N(prompt_file=es_path)
    i18n.load_prompts()

    assert i18n.retrieve("hierarchical_manager_agent", "role") == "Gerente del Equipo"
    assert i18n.slice("observation") == "\nObservación:"
    assert i18n.errors("tool_usage_error") == "Encontré un error: {error}"
9 uv.lock generated
@@ -798,7 +798,7 @@ requires-dist = [
    { name = "blinker", specifier = ">=1.9.0" },
    { name = "chromadb", specifier = ">=0.5.23" },
    { name = "click", specifier = ">=8.1.7" },
    { name = "crewai-tools", marker = "extra == 'tools'", specifier = "~=0.59.0" },
    { name = "crewai-tools", marker = "extra == 'tools'", specifier = "~=0.58.0" },
    { name = "docling", marker = "extra == 'docling'", specifier = ">=2.12.0" },
    { name = "instructor", specifier = ">=1.3.3" },
    { name = "json-repair", specifier = "==0.25.2" },
@@ -850,7 +850,7 @@ dev = [

[[package]]
name = "crewai-tools"
version = "0.59.0"
version = "0.58.0"
source = { registry = "https://pypi.org/simple" }
dependencies = [
    { name = "chromadb" },
@@ -860,7 +860,6 @@ dependencies = [
    { name = "embedchain" },
    { name = "lancedb" },
    { name = "openai" },
    { name = "portalocker" },
    { name = "pydantic" },
    { name = "pyright" },
    { name = "pytube" },
@@ -868,9 +867,9 @@ dependencies = [
    { name = "stagehand" },
    { name = "tiktoken" },
]
sdist = { url = "https://files.pythonhosted.org/packages/10/cd/af005e7dca5a35ed6000db2d1594acad890997b92172d8af2c1d9a83784e/crewai_tools-0.59.0.tar.gz", hash = "sha256:030e4b65446f4c6eccdcba5380bbdc90896de74589cb1bdb3cabc2c22c83f326", size = 1032093, upload-time = "2025-07-30T21:15:54.182Z" }
sdist = { url = "https://files.pythonhosted.org/packages/1f/bf/72c3a0cb5a8be1f635a4e3b07ee2ad81a6d427e63b7748c2727a33ade0d4/crewai_tools-0.58.0.tar.gz", hash = "sha256:ea82d5df8611ae22a8291934c4cd0b7ed5b77eca475f81014f018b7eca4d3350", size = 1026853, upload-time = "2025-07-23T17:45:53.228Z" }
wheels = [
    { url = "https://files.pythonhosted.org/packages/7e/b2/6b11d770fda6df99ddc49bdb26d3cb0b5efd026ff395a72ca5eeb704c335/crewai_tools-0.59.0-py3-none-any.whl", hash = "sha256:816297934eb352368ff0869a0eb11dd9febb4c40b6d0eda8e6eba8f3e426f446", size = 657025, upload-time = "2025-07-30T21:15:52.531Z" },
    { url = "https://files.pythonhosted.org/packages/34/bd/1de36fbf8fb717817d3bf72a94da38e27af6cc5b888d7c1203a3f0b0cc2f/crewai_tools-0.58.0-py3-none-any.whl", hash = "sha256:151688bf0fa8c90e27dcdbaa8619f3dee2a14e97f1b420a38187b12d88305175", size = 650113, upload-time = "2025-07-23T17:45:51.056Z" },
]

[[package]]