Fix failing CI test by correcting Task API usage

- Replace non-existent 'output_format' attribute with 'output_json'
- Update test_custom_format_instructions to use correct Pydantic model approach
- Enhance test_stop_words_configuration to properly test agent executor creation
- Update documentation example to use correct API (output_json instead of output_format)
- Validate API corrections with a local test script

Co-Authored-By: João <joao@crewai.com>
Author: Devin AI
Date: 2025-06-21 20:45:55 +00:00
parent a6d9741d18
commit 16110623b5
2 changed files with 39 additions and 23 deletions


@@ -116,20 +116,22 @@ task = Task(
### Custom Format Instructions
-Add specific formatting requirements:
+Add specific formatting requirements using Pydantic models:
```python showLineNumbers
+from pydantic import BaseModel
+from typing import List
+
+class SalesAnalysisOutput(BaseModel):
+    total_sales: float
+    growth_rate: str
+    top_products: List[str]
+    recommendations: str
+
task = Task(
    description="Analyze the quarterly sales data",
    expected_output="Analysis in JSON format with specific fields",
-    output_format="""
-    {
-        "total_sales": "number",
-        "growth_rate": "percentage",
-        "top_products": ["list of strings"],
-        "recommendations": "detailed string"
-    }
-    """
+    output_json=SalesAnalysisOutput
)
```
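To make the change concrete outside of CrewAI, here is a minimal, dependency-free sketch of what `output_json` enforces: the structured model parses and shapes the raw JSON an agent returns. The `raw` payload is hypothetical, and the dataclass is a standard-library stand-in for the Pydantic model above.

```python
import json
from dataclasses import dataclass

# Stand-in for the Pydantic model, using only the standard library
# so this sketch runs without CrewAI or Pydantic installed.
@dataclass
class SalesAnalysisOutput:
    total_sales: float
    growth_rate: str
    top_products: list
    recommendations: str

# Hypothetical raw JSON, as an agent might return it
raw = (
    '{"total_sales": 42000.0, "growth_rate": "12%", '
    '"top_products": ["Widget A", "Widget B"], '
    '"recommendations": "Double down on Widget A."}'
)

# Parse the JSON and bind it to the declared fields
parsed = SalesAnalysisOutput(**json.loads(raw))
```

When `output_json` is set, CrewAI performs an equivalent parse step internally (with full Pydantic validation) and exposes the structured result on the task output.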
@@ -287,7 +289,7 @@ NEXT STEPS: [Recommend next actions]""",
- Use structured output formats to prevent prompt injection
- Implement guardrails for sensitive operations
-> 🛡️ **Security Tip:** Avoid injecting raw or untrusted inputs directly into prompt templates without validation or sanitization. Use controlled data or escape sequences appropriately to prevent prompt injection attacks or unintended model behavior.
+> 🛡️ **Security Warning:** Never inject raw or untrusted user inputs directly into prompt templates without proper validation and sanitization. This can lead to prompt injection attacks where malicious users manipulate agent behavior. Always validate inputs, use parameterized templates, and consider implementing input filtering for production systems. Additionally, be cautious with custom stop sequences - if they can appear naturally in model responses, they may cause premature truncation of legitimate outputs.
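As a rough illustration of the validation this warning recommends (the function name, character set, and length cap below are illustrative choices, not CrewAI APIs), untrusted input can be filtered and capped before it is interpolated into a template:

```python
import re

MAX_INPUT_LENGTH = 500  # arbitrary cap for illustration; tune for your use case

def sanitize_for_prompt(user_input: str) -> str:
    """Strip template-control characters and cap length before interpolation."""
    cleaned = re.sub(r"[{}<>`]", "", user_input)
    return cleaned[:MAX_INPUT_LENGTH].strip()

# Braces are removed, so a literal "{{ .System }}" cannot be smuggled
# into a template placeholder via user input
safe = sanitize_for_prompt("Summarize Q3 {{ .System }} sales")
```

This is a sketch of the idea, not a complete defense; production systems typically layer allow-lists, output checks, and guardrails on top of simple character filtering.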
#### Performance Optimization
- Keep system prompts concise while maintaining necessary context
@@ -305,7 +307,7 @@ NEXT STEPS: [Recommend next actions]""",
**Prompt Not Applied**: Ensure you're using the correct template parameter names and that `use_system_prompt` is set appropriately. See the [Basic Prompt Customization](#basic-prompt-customization) section for examples.
-**Format Not Working**: Verify that your `output_format` or `output_pydantic` model matches the expected structure. Refer to [Output Format Customization](#output-format-customization) for details.
+**Format Not Working**: Verify that your `output_json` or `output_pydantic` model matches the expected structure. Refer to [Output Format Customization](#output-format-customization) for details.
**Stop Words Not Effective**: Check that your `response_template` includes the desired stop sequence after the `{{ .Response }}` placeholder. See [Stop Words Configuration](#stop-words-configuration) for guidance.
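A minimal sketch of why an ill-chosen stop sequence truncates output (this models generic stop-word behavior, not CrewAI's internal implementation):

```python
def apply_stop_sequence(response: str, stop: str) -> str:
    """Cut the response at the first occurrence of the stop sequence."""
    idx = response.find(stop)
    return response if idx == -1 else response[:idx]

# Intended use: everything after the marker is discarded
report = "Sales grew 12% this quarter.\n---END---\nleftover tokens"
trimmed = apply_stop_sequence(report, "---END---")

# Failure mode: a stop sequence that occurs naturally in prose truncates early
risky = apply_stop_sequence("Use a horizontal rule --- then continue.", "---")
```

The second call shows the premature-truncation hazard: because `---` can appear in ordinary text, everything after its first occurrence is lost, which is why distinctive markers like `---END---` are safer choices.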
@@ -328,7 +330,8 @@ agent = Agent(
- **Verify prompt payloads**: Use verbose mode to inspect the actual prompts sent to the LLM
- **Test stop word effects**: Carefully verify that stop sequences don't cause premature truncation
- **Check template syntax**: Ensure placeholders like `{{ .System }}` are correctly formatted
+- **Validate security**: Review custom templates for potential injection vulnerabilities as described in [Security Considerations](#security-considerations)
-For more troubleshooting guidance, see the [Troubleshooting](#troubleshooting) section above.
+For more troubleshooting guidance, see the sections above on [Best Practices](#best-practices) and [Security Considerations](#security-considerations).
This comprehensive prompt customization system gives you precise control over agent behavior while maintaining the reliability and consistency that CrewAI is known for in production environments.


@@ -123,18 +123,22 @@ End of response."""
        assert task.expected_output == "A structured research report"

    def test_custom_format_instructions(self):
-        """Test custom output format instructions."""
-        output_format = """{
-            "total_sales": "number",
-            "growth_rate": "percentage",
-            "top_products": ["list of strings"],
-            "recommendations": "detailed string"
-        }"""
+        """Test custom output format instructions using output_json.
+
+        Validates:
+        - Task can be configured with structured JSON output using Pydantic models
+        - Output format is properly stored and accessible via _get_output_format method
+        """
+        class SalesAnalysisOutput(BaseModel):
+            total_sales: float
+            growth_rate: str
+            top_products: List[str]
+            recommendations: str
+
        task = Task(
            description="Analyze the quarterly sales data",
            expected_output="Analysis in JSON format with specific fields",
-            output_format=output_format,
+            output_json=SalesAnalysisOutput,
            agent=Agent(
                role="Sales Analyst",
                goal="Analyze sales data",
@@ -143,10 +147,17 @@ End of response."""
            )
        )

-        assert task.output_format == output_format
+        assert task.output_json == SalesAnalysisOutput
+        assert task._get_output_format().value == "json"

    def test_stop_words_configuration(self):
-        """Test stop words configuration through response template."""
+        """Test stop words configuration through response template.
+
+        Validates:
+        - Response template is properly stored on agent
+        - Agent has create_agent_executor method for setting up execution
+        - Stop words are configured based on response template content
+        """
        response_template = """Provide your analysis:
{{ .Response }}
---END---"""
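The `_get_output_format().value == "json"` assertion above implies an enum-backed format selector on `Task`. A hedged sketch of that shape follows; the `PYDANTIC` and `RAW` members and the selection order are assumptions for illustration, not confirmed CrewAI internals — only the `"json"` value is attested by the test.

```python
from enum import Enum
from typing import Any, Optional

class OutputFormat(Enum):
    JSON = "json"
    PYDANTIC = "pydantic"
    RAW = "raw"

def get_output_format(
    output_json: Optional[Any], output_pydantic: Optional[Any]
) -> OutputFormat:
    """Pick the output format from whichever structured option is set."""
    if output_json is not None:
        return OutputFormat.JSON
    if output_pydantic is not None:
        return OutputFormat.PYDANTIC
    return OutputFormat.RAW
```

Under this sketch, setting `output_json` to any model class yields the `"json"` format the test asserts, while leaving both options unset falls back to raw text.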
@@ -160,9 +171,11 @@ End of response."""
        )

        assert agent.response_template == response_template
        assert hasattr(agent, 'create_agent_executor')
        assert callable(getattr(agent, 'create_agent_executor'))
+        agent.create_agent_executor()
+        assert agent.agent_executor is not None

    def test_lite_agent_prompt_customization(self):
        """Test LiteAgent prompt customization."""