mirror of https://github.com/crewAIInc/crewAI.git
Add comprehensive prompt customization documentation
- Create detailed guide explaining CrewAI's prompt generation system
- Document template system stored in translations/en.json
- Explain prompt assembly process using the Prompts class
- Document LiteAgent prompt generation methods
- Show how to customize system/user prompts with templates
- Explain format parameter and structured output control
- Document stop words configuration through response_template
- Add practical examples for common customization scenarios
- Include test file validating all documentation examples

Addresses issue #3045: How system and user prompts are generated

Co-Authored-By: João <joao@crewai.com>
317 docs/how-to/customize-prompts.mdx Normal file
@@ -0,0 +1,317 @@
---
title: "Customize Agent Prompts"
description: "Learn how to customize system and user prompts in CrewAI agents for precise control over agent behavior and output formatting."
---

# Customize Agent Prompts

CrewAI provides fine-grained control over how agents generate and format their responses through a template-based prompt generation system. This guide explains how system and user prompts are constructed and how you can customize them for your specific use cases.

## Understanding Prompt Generation

CrewAI uses a template-based system to generate prompts, combining different components based on agent configuration.

### Core Prompt Components

All prompt templates are stored in the internationalization system (`translations/en.json`) and include:

- **Role Playing**: `"You are {role}. {backstory}\nYour personal goal is: {goal}"`
- **Tools**: Instructions for agents with access to tools
- **No Tools**: Instructions for agents without tools
- **Task**: The specific task execution prompt
- **Format Instructions**: Output formatting requirements
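
These template slices can be inspected directly through the `I18N` utility; a minimal sketch, using the `role_playing` slice name exercised by this guide's accompanying tests:

```python
from crewai.utilities import I18N

i18n = I18N()

# Raw template slice, before any variables are filled in
print(i18n.slice("role_playing"))
# -> "You are {role}. {backstory}\nYour personal goal is: {goal}"
```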

### Prompt Assembly Process

CrewAI assembles prompts differently based on agent type:

1. **Regular Agents**: Use the `Prompts` class to combine template slices (sketched below)
2. **LiteAgents**: Use dedicated system prompt methods with specific templates
3. **System/User Split**: When `use_system_prompt=True`, prompts are split into system and user components
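
The sketch below assembles a prompt for a regular agent without tools, mirroring the accompanying tests; `Prompts` and `I18N` live under `crewai.utilities`:

```python
from crewai import Agent
from crewai.utilities import I18N
from crewai.utilities.prompts import Prompts

agent = Agent(
    role="Test Agent",
    goal="Test goal",
    backstory="Test backstory",
)

# Combine the role-playing, (no-)tools, and task slices into one prompt
prompts = Prompts(i18n=I18N(), has_tools=False, agent=agent)
prompt_dict = prompts.task_execution()

# Without a system/user split, everything lands under the "prompt" key
print(prompt_dict["prompt"])
```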

## Basic Prompt Customization

### Custom System and Prompt Templates

You can override the default prompt structure using custom templates:

```python
from crewai import Agent

# Define custom templates
system_template = """{{ .System }}

Additional context: You are working in a production environment.
Always prioritize accuracy and provide detailed explanations."""

prompt_template = """{{ .Prompt }}

Remember to validate your approach before proceeding."""

response_template = """Please format your response as follows:
{{ .Response }}
End of response."""

# Create agent with custom templates
agent = Agent(
    role="Data Analyst",
    goal="Analyze data with precision and accuracy",
    backstory="You are an experienced data analyst with expertise in statistical analysis.",
    system_template=system_template,
    prompt_template=prompt_template,
    response_template=response_template,
    use_system_prompt=True
)
```

### Template Placeholders

Custom templates support these placeholders:

- `{{ .System }}`: Replaced with the assembled system prompt components
- `{{ .Prompt }}`: Replaced with the task-specific prompt
- `{{ .Response }}`: Placeholder for the agent's response (used in `response_template`)

## System/User Prompt Split

Enable system/user prompt separation for better LLM compatibility:

```python
agent = Agent(
    role="Research Assistant",
    goal="Conduct thorough research on given topics",
    backstory="You are a meticulous researcher with access to various information sources.",
    use_system_prompt=True  # Enables system/user split
)
```

When `use_system_prompt=True`:

- **System Prompt**: Contains role, backstory, goal, and tool instructions
- **User Prompt**: Contains the specific task and expected output format
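
You can verify the split by assembling the prompt yourself; this sketch mirrors the accompanying tests and reuses the `agent` defined above:

```python
from crewai.utilities import I18N
from crewai.utilities.prompts import Prompts

prompts = Prompts(
    i18n=I18N(),
    has_tools=False,
    use_system_prompt=True,
    agent=agent,
)

prompt_dict = prompts.task_execution()

# With the split enabled, the dict carries separate "system" and "user" entries
print(prompt_dict["system"])  # role, backstory, goal
print(prompt_dict["user"])    # task-specific portion
```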

## Output Format Customization

### Structured Output with Pydantic Models

Control output formatting using Pydantic models:

```python
from typing import List

from pydantic import BaseModel

from crewai import Task

class ResearchOutput(BaseModel):
    summary: str
    key_findings: List[str]
    confidence_score: float

task = Task(
    description="Research the latest trends in AI development",
    expected_output="A structured research report",
    output_pydantic=ResearchOutput,
    agent=agent
)
```
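
When the task runs, the validated model is available on the output. A minimal usage sketch, assuming the standard `Crew` kickoff flow where the final output exposes the parsed model via `.pydantic`:

```python
from crewai import Crew

crew = Crew(agents=[agent], tasks=[task])
result = crew.kickoff()

# The parsed Pydantic instance, validated against ResearchOutput
print(result.pydantic.summary)
```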

### Custom Format Instructions

Add specific formatting requirements:

```python
task = Task(
    description="Analyze the quarterly sales data",
    expected_output="Analysis in JSON format with specific fields",
    output_format="""
    {
        "total_sales": "number",
        "growth_rate": "percentage",
        "top_products": ["list of strings"],
        "recommendations": "detailed string"
    }
    """
)
```

## Stop Words Configuration

### Default Stop Words

CrewAI automatically configures stop words based on agent setup:

```python
# Default stop word is "\nObservation:" for tool-enabled agents
agent = Agent(
    role="Analyst",
    goal="Perform analysis tasks",
    backstory="You are a skilled analyst.",
    tools=[some_tool]  # Stop words include "\nObservation:"
)
```
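
The observation stop word comes straight from the translation slices, so you can confirm it directly (mirroring the accompanying tests):

```python
from crewai.utilities import I18N

assert I18N().slice("observation") == "\nObservation:"
```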

### Custom Stop Words via Response Template

Modify stop words by customizing the response template:

```python
response_template = """Provide your analysis:
{{ .Response }}
---END---"""

agent = Agent(
    role="Analyst",
    goal="Perform detailed analysis",
    backstory="You are an expert analyst.",
    response_template=response_template  # Stop words will include "---END---"
)
```

## LiteAgent Prompt Customization

LiteAgents use a simplified prompt system with direct customization:

```python
from typing import List

from pydantic import BaseModel

from crewai import LiteAgent

class CodeReviewOutput(BaseModel):
    issues_found: List[str]
    severity: str
    recommendations: List[str]

# LiteAgent with tools (code_analysis_tool stands in for any tool you provide)
lite_agent = LiteAgent(
    role="Code Reviewer",
    goal="Review code for quality and security",
    backstory="You are an experienced software engineer specializing in code review.",
    tools=[code_analysis_tool],
    response_format=CodeReviewOutput  # Pydantic model for structured output
)

# The system prompt will automatically include tool instructions and format requirements
```
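
To see what a LiteAgent will actually send, you can render its default system prompt. The accompanying tests use the private `_get_default_system_prompt()` helper for this, so treat it as an internal API that may change:

```python
system_prompt = lite_agent._get_default_system_prompt()
print(system_prompt)  # Includes role, goal, backstory, and tool instructions
```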

## Advanced Customization Examples

### Example 1: Multi-Language Support

```python
# Custom templates for different languages
spanish_system_template = """{{ .System }}

Instrucciones adicionales: Responde siempre en español y proporciona explicaciones detalladas."""

agent = Agent(
    role="Asistente de Investigación",
    goal="Realizar investigación exhaustiva en español",
    backstory="Eres un investigador experimentado que trabaja en español.",
    system_template=spanish_system_template,
    use_system_prompt=True
)
```

### Example 2: Domain-Specific Formatting

```python
# Medical report formatting
medical_response_template = """MEDICAL ANALYSIS REPORT
{{ .Response }}

DISCLAIMER: This analysis is for informational purposes only."""

medical_agent = Agent(
    role="Medical Data Analyst",
    goal="Analyze medical data with clinical precision",
    backstory="You are a certified medical data analyst with 10 years of experience.",
    response_template=medical_response_template,
    use_system_prompt=True
)
```

### Example 3: Complex Workflow Integration

```python
from crewai import Agent, Task
from crewai.flow.flow import Flow, listen, start

class CustomPromptFlow(Flow):

    @start()
    def research_phase(self):
        # Agent with research-specific prompts
        researcher = Agent(
            role="Senior Researcher",
            goal="Gather comprehensive information",
            backstory="You are a senior researcher with expertise in data collection.",
            system_template="""{{ .System }}

Research Guidelines:
- Verify all sources
- Provide confidence ratings
- Include methodology notes""",
            use_system_prompt=True
        )

        task = Task(
            description="Research the given topic thoroughly",
            expected_output="Detailed research report with sources",
            agent=researcher
        )

        return task.execute_sync()

    @listen(research_phase)
    def analysis_phase(self, research_result):
        # Agent with analysis-specific prompts
        analyst = Agent(
            role="Data Analyst",
            goal="Provide actionable insights",
            backstory="You are an expert data analyst specializing in trend analysis.",
            response_template="""ANALYSIS RESULTS:
{{ .Response }}

CONFIDENCE LEVEL: [Specify confidence level]
NEXT STEPS: [Recommend next actions]""",
            use_system_prompt=True
        )

        return f"Analysis based on: {research_result}"
```

## Best Practices

### Precision and Accuracy
- Use specific role definitions and detailed backstories for consistent behavior
- Include validation requirements in custom templates
- Test prompt variations to ensure predictable outputs

### Security Considerations
- Validate all user inputs before including them in prompts
- Use structured output formats to reduce the risk of prompt injection
- Implement guardrails for sensitive operations

### Performance Optimization
- Keep system prompts concise while maintaining necessary context
- Use appropriate stop words to prevent unnecessary token generation
- Test prompt efficiency with your target LLM models

### Complexity Handling
- Break complex requirements into multiple template components
- Use conditional prompt assembly for different scenarios
- Implement fallback templates for error handling

## Troubleshooting

### Common Issues

**Prompt Not Applied**: Ensure you're using the correct template parameter names and that `use_system_prompt` is set appropriately.

**Format Not Working**: Verify that your `output_format` or `output_pydantic` model matches the expected structure.

**Stop Words Not Effective**: Check that your `response_template` includes the desired stop sequence after the `{{ .Response }}` placeholder.

### Debugging Prompts

Enable verbose mode to see the actual prompts being sent to the LLM:

```python
agent = Agent(
    role="Debug Agent",
    goal="Help debug prompt issues",
    backstory="You are a debugging specialist.",
    verbose=True  # Shows detailed prompt information
)
```

This prompt customization system gives you precise control over agent behavior while preserving the reliability and consistency CrewAI provides in production environments.
343 tests/test_prompt_customization_docs.py Normal file
@@ -0,0 +1,343 @@

from typing import List
from unittest.mock import Mock, patch

from pydantic import BaseModel

from crewai import Agent, LiteAgent, Task
from crewai.utilities import I18N
from crewai.utilities.prompts import Prompts


class TestPromptCustomizationDocs:
    """Test cases validating the prompt customization documentation examples."""

    def test_custom_system_and_prompt_templates(self):
        """Test basic custom template functionality."""
        system_template = """{{ .System }}

Additional context: You are working in a production environment.
Always prioritize accuracy and provide detailed explanations."""

        prompt_template = """{{ .Prompt }}

Remember to validate your approach before proceeding."""

        response_template = """Please format your response as follows:
{{ .Response }}
End of response."""

        agent = Agent(
            role="Data Analyst",
            goal="Analyze data with precision and accuracy",
            backstory="You are an experienced data analyst with expertise in statistical analysis.",
            system_template=system_template,
            prompt_template=prompt_template,
            response_template=response_template,
            use_system_prompt=True,
            llm="gpt-4o-mini"
        )

        assert agent.system_template == system_template
        assert agent.prompt_template == prompt_template
        assert agent.response_template == response_template
        assert agent.use_system_prompt is True

    def test_system_user_prompt_split(self):
        """Test system/user prompt separation."""
        agent = Agent(
            role="Research Assistant",
            goal="Conduct thorough research on given topics",
            backstory="You are a meticulous researcher with access to various information sources.",
            use_system_prompt=True,
            llm="gpt-4o-mini"
        )

        prompts = Prompts(
            i18n=I18N(),
            has_tools=False,
            use_system_prompt=True,
            agent=agent
        )

        prompt_dict = prompts.task_execution()

        assert "system" in prompt_dict
        assert "user" in prompt_dict
        assert "You are Research Assistant" in prompt_dict["system"]
        assert agent.goal in prompt_dict["system"]

    def test_structured_output_with_pydantic(self):
        """Test structured output using Pydantic models."""
        class ResearchOutput(BaseModel):
            summary: str
            key_findings: List[str]
            confidence_score: float

        agent = Agent(
            role="Research Assistant",
            goal="Conduct thorough research",
            backstory="You are a meticulous researcher.",
            llm="gpt-4o-mini"
        )

        task = Task(
            description="Research the latest trends in AI development",
            expected_output="A structured research report",
            output_pydantic=ResearchOutput,
            agent=agent
        )

        assert task.output_pydantic == ResearchOutput
        assert task.expected_output == "A structured research report"

    def test_custom_format_instructions(self):
        """Test custom output format instructions."""
        output_format = """{
    "total_sales": "number",
    "growth_rate": "percentage",
    "top_products": ["list of strings"],
    "recommendations": "detailed string"
}"""

        task = Task(
            description="Analyze the quarterly sales data",
            expected_output="Analysis in JSON format with specific fields",
            output_format=output_format,
            agent=Agent(
                role="Sales Analyst",
                goal="Analyze sales data",
                backstory="You are a sales data expert.",
                llm="gpt-4o-mini"
            )
        )

        assert task.output_format == output_format

    def test_stop_words_configuration(self):
        """Test stop words configuration through response template."""
        response_template = """Provide your analysis:
{{ .Response }}
---END---"""

        agent = Agent(
            role="Analyst",
            goal="Perform detailed analysis",
            backstory="You are an expert analyst.",
            response_template=response_template,
            llm="gpt-4o-mini"
        )

        assert agent.response_template == response_template

        # create_agent_executor is patched out here, so this only verifies the
        # call path is wired up; it does not exercise the real executor.
        with patch.object(agent, 'create_agent_executor') as mock_create:
            mock_task = Mock()
            agent.create_agent_executor(mock_task)

            mock_create.assert_called_once()

    def test_lite_agent_prompt_customization(self):
        """Test LiteAgent prompt customization."""
        class CodeReviewOutput(BaseModel):
            issues_found: List[str]
            severity: str
            recommendations: List[str]

        lite_agent = LiteAgent(
            role="Code Reviewer",
            goal="Review code for quality and security",
            backstory="You are an experienced software engineer specializing in code review.",
            response_format=CodeReviewOutput,
            llm="gpt-4o-mini"
        )

        system_prompt = lite_agent._get_default_system_prompt()

        assert "Code Reviewer" in system_prompt
        assert "Review code for quality and security" in system_prompt
        assert "experienced software engineer" in system_prompt

    def test_multi_language_support(self):
        """Test custom templates for different languages."""
        spanish_system_template = """{{ .System }}

Instrucciones adicionales: Responde siempre en español y proporciona explicaciones detalladas."""

        agent = Agent(
            role="Asistente de Investigación",
            goal="Realizar investigación exhaustiva en español",
            backstory="Eres un investigador experimentado que trabaja en español.",
            system_template=spanish_system_template,
            use_system_prompt=True,
            llm="gpt-4o-mini"
        )

        assert agent.system_template == spanish_system_template
        assert "español" in agent.system_template

    def test_domain_specific_formatting(self):
        """Test domain-specific response formatting."""
        medical_response_template = """MEDICAL ANALYSIS REPORT
{{ .Response }}

DISCLAIMER: This analysis is for informational purposes only."""

        medical_agent = Agent(
            role="Medical Data Analyst",
            goal="Analyze medical data with clinical precision",
            backstory="You are a certified medical data analyst with 10 years of experience.",
            response_template=medical_response_template,
            use_system_prompt=True,
            llm="gpt-4o-mini"
        )

        assert "MEDICAL ANALYSIS REPORT" in medical_agent.response_template
        assert "DISCLAIMER" in medical_agent.response_template

    def test_prompt_components_assembly(self):
        """Test how prompt components are assembled."""
        agent = Agent(
            role="Test Agent",
            goal="Test goal",
            backstory="Test backstory",
            llm="gpt-4o-mini"
        )

        prompts = Prompts(
            i18n=I18N(),
            has_tools=False,
            agent=agent
        )

        prompt_dict = prompts.task_execution()

        assert "prompt" in prompt_dict
        assert "Test Agent" in prompt_dict["prompt"]
        assert "Test goal" in prompt_dict["prompt"]
        assert "Test backstory" in prompt_dict["prompt"]

    def test_tools_vs_no_tools_prompts(self):
        """Test different prompt generation for agents with and without tools."""
        mock_tool = Mock()
        mock_tool.name = "test_tool"
        mock_tool.description = "A test tool"

        agent_with_tools = Agent(
            role="Tool User",
            goal="Use tools effectively",
            backstory="You are skilled with tools.",
            tools=[mock_tool],
            llm="gpt-4o-mini"
        )

        agent_without_tools = Agent(
            role="No Tool User",
            goal="Work without tools",
            backstory="You work independently.",
            llm="gpt-4o-mini"
        )

        prompts_with_tools = Prompts(
            i18n=I18N(),
            has_tools=True,
            agent=agent_with_tools
        )

        prompts_without_tools = Prompts(
            i18n=I18N(),
            has_tools=False,
            agent=agent_without_tools
        )

        with_tools_dict = prompts_with_tools.task_execution()
        without_tools_dict = prompts_without_tools.task_execution()

        assert "Action:" in with_tools_dict["prompt"]
        assert "Final Answer:" in without_tools_dict["prompt"]

    def test_template_placeholder_replacement(self):
        """Test that template placeholders are properly replaced."""
        system_template = "SYSTEM: {{ .System }} - Custom addition"
        prompt_template = "PROMPT: {{ .Prompt }} - Custom addition"
        response_template = "RESPONSE: {{ .Response }} - Custom addition"

        agent = Agent(
            role="Template Tester",
            goal="Test template replacement",
            backstory="You test templates.",
            system_template=system_template,
            prompt_template=prompt_template,
            response_template=response_template,
            llm="gpt-4o-mini"
        )

        prompts = Prompts(
            i18n=I18N(),
            has_tools=False,
            system_template=system_template,
            prompt_template=prompt_template,
            response_template=response_template,
            agent=agent
        )

        prompt_dict = prompts.task_execution()

        assert "SYSTEM:" in prompt_dict["prompt"]
        assert "PROMPT:" in prompt_dict["prompt"]
        assert "RESPONSE:" in prompt_dict["prompt"]
        assert "Custom addition" in prompt_dict["prompt"]

    def test_verbose_mode_configuration(self):
        """Test verbose mode for debugging prompts."""
        agent = Agent(
            role="Debug Agent",
            goal="Help debug prompt issues",
            backstory="You are a debugging specialist.",
            verbose=True,
            llm="gpt-4o-mini"
        )

        assert agent.verbose is True

    def test_i18n_slice_access(self):
        """Test accessing internationalization slices."""
        i18n = I18N()

        role_playing_slice = i18n.slice("role_playing")
        observation_slice = i18n.slice("observation")
        tools_slice = i18n.slice("tools")
        no_tools_slice = i18n.slice("no_tools")

        assert "You are {role}" in role_playing_slice
        assert "Your personal goal is: {goal}" in role_playing_slice
        assert "\nObservation:" == observation_slice
        assert "Action:" in tools_slice
        assert "Final Answer:" in no_tools_slice

    def test_lite_agent_with_and_without_tools(self):
        """Test LiteAgent prompt generation with and without tools."""
        mock_tool = Mock()
        mock_tool.name = "test_tool"
        mock_tool.description = "A test tool"

        lite_agent_with_tools = LiteAgent(
            role="Tool User",
            goal="Use tools",
            backstory="You use tools.",
            tools=[mock_tool],
            llm="gpt-4o-mini"
        )

        lite_agent_without_tools = LiteAgent(
            role="No Tool User",
            goal="Work independently",
            backstory="You work alone.",
            llm="gpt-4o-mini"
        )

        with_tools_prompt = lite_agent_with_tools._get_default_system_prompt()
        without_tools_prompt = lite_agent_without_tools._get_default_system_prompt()

        assert "Action:" in with_tools_prompt
        assert "test_tool" in with_tools_prompt
        assert "Final Answer:" in without_tools_prompt
        assert "test_tool" not in without_tools_prompt