---
title: Customizing Prompts
description: Dive deeper into low-level prompt customization for CrewAI, enabling super custom and complex use cases for different models and languages.
icon: message-pen
---

## Why Customize Prompts?

Although CrewAI's default prompts work well for many scenarios, low-level customization opens the door to significantly more flexible and powerful agent behavior. Here's why you might want to take advantage of this deeper control:

1. **Optimize for specific LLMs** – Different models (such as GPT-4, Claude, or Llama) thrive with prompt formats tailored to their unique architectures.
2. **Change the language** – Build agents that operate exclusively in languages beyond English, handling nuances with precision.
3. **Specialize for complex domains** – Adapt prompts for highly specialized industries like healthcare, finance, or legal.
4. **Adjust tone and style** – Make agents more formal, casual, creative, or analytical.
5. **Support super custom use cases** – Utilize advanced prompt structures and formatting to meet intricate, project-specific requirements.

This guide explores how to tap into CrewAI's prompts at a lower level, giving you fine-grained control over how agents think and interact.

## Understanding CrewAI's Prompt System

Under the hood, CrewAI employs a modular prompt system that you can customize extensively:

- **Agent templates** – Govern each agent's approach to their assigned role.
- **Prompt slices** – Control specialized behaviors such as tasks, tool usage, and output structure.
- **Error handling** – Direct how agents respond to failures, exceptions, or timeouts.
- **Tool-specific prompts** – Define detailed instructions for how tools are invoked or utilized.

Check out the [original prompt templates in CrewAI's repository](https://github.com/crewAIInc/crewAI/blob/main/src/crewai/translations/en.json) to see how these elements are organized. From there, you can override or adapt them as needed to unlock advanced behaviors.

## Best Practices for Managing Prompt Files

When engaging in low-level prompt customization, follow these guidelines to keep things organized and maintainable:

1. **Keep files separate** – Store your customized prompts in dedicated JSON files outside your main codebase.
2. **Version control** – Track changes within your repository, ensuring clear documentation of prompt adjustments over time.
3. **Organize by model or language** – Use naming schemes like `prompts_llama.json` or `prompts_es.json` to quickly identify specialized configurations.
4. **Document changes** – Provide comments or maintain a README detailing the purpose and scope of your customizations.
5. **Minimize alterations** – Only override the specific slices you genuinely need to adjust, keeping default functionality intact for everything else.

## The Simplest Way to Customize Prompts

One straightforward approach is to create a JSON file for the prompts you want to override and then point your Crew at that file:

1. Craft a JSON file with your updated prompt slices.
2. Reference that file via the `prompt_file` parameter in your Crew.

CrewAI then merges your customizations with the defaults, so you don't have to redefine every prompt. Here's how:

### Example: Basic Prompt Customization

Create a `custom_prompts.json` file with the prompts you want to modify.
Ensure you list all top-level prompts it should contain, not just your changes:

```json
{
  "slices": {
    "format": "When responding, follow this structure:\n\nTHOUGHTS: Your step-by-step thinking\nACTION: Any tool you're using\nRESULT: Your final answer or conclusion"
  }
}
```

Then integrate it like so:

```python
from crewai import Agent, Crew, Task, Process

# Create agents and tasks as normal
researcher = Agent(
    role="Research Specialist",
    goal="Find information on quantum computing",
    backstory="You are a quantum physics expert",
    verbose=True
)

research_task = Task(
    description="Research quantum computing applications",
    expected_output="A summary of practical applications",
    agent=researcher
)

# Create a crew with your custom prompt file
crew = Crew(
    agents=[researcher],
    tasks=[research_task],
    prompt_file="path/to/custom_prompts.json",
    verbose=True
)

# Run the crew
result = crew.kickoff()
```

With these few edits, you gain low-level control over how your agents communicate and solve tasks.

## Optimizing for Specific Models

Different models thrive on differently structured prompts. Making deeper adjustments can significantly boost performance by aligning your prompts with a model's nuances.

### Example: Llama 3.3 Prompting Template

For instance, when working with Meta's Llama 3.3, you may want your templates to follow the prompt format Meta recommends, described at:
https://www.llama.com/docs/model-cards-and-prompt-formats/llama3_1/#prompt-template

Here's an example of how you might fine-tune an Agent to use Llama 3.3 in code:

```python
from crewai import Agent, Crew, Task, Process
from crewai_tools import DirectoryReadTool, FileReadTool

# Define templates for system, user (prompt), and assistant (response) messages
system_template = """<|begin_of_text|><|start_header_id|>system<|end_header_id|>{{ .System }}<|eot_id|>"""
prompt_template = """<|start_header_id|>user<|end_header_id|>{{ .Prompt }}<|eot_id|>"""
response_template = """<|start_header_id|>assistant<|end_header_id|>{{ .Response }}<|eot_id|>"""

# Create an Agent using Llama-specific layouts
principal_engineer = Agent(
    role="Principal Engineer",
    goal="Oversee AI architecture and make high-level decisions",
    backstory="You are the lead engineer responsible for critical AI systems",
    verbose=True,
    llm="groq/llama-3.3-70b-versatile",  # Using the Llama 3.3 model served by Groq
    system_template=system_template,
    prompt_template=prompt_template,
    response_template=response_template,
    tools=[DirectoryReadTool(), FileReadTool()]
)

# Define a sample task
engineering_task = Task(
    description="Review AI implementation files for potential improvements",
    expected_output="A summary of key findings and recommendations",
    agent=principal_engineer
)

# Create a Crew for the task
llama_crew = Crew(
    agents=[principal_engineer],
    tasks=[engineering_task],
    process=Process.sequential,
    verbose=True
)

# Execute the crew
result = llama_crew.kickoff()
print(result.raw)
```

Through this deeper configuration, you can exercise comprehensive, low-level control over your Llama-based workflows without needing a separate JSON file.
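If you do prefer a JSON file, the same `prompt_file` mechanism also covers the language use case mentioned earlier. As a minimal sketch, a `prompts_es.json` file (the name simply follows the naming convention above; the Spanish wording is illustrative, not an official translation) could override the same `format` slice used in the basic example:

```json
{
  "slices": {
    "format": "Al responder, sigue esta estructura:\n\nPENSAMIENTOS: Tu razonamiento paso a paso\nACCIÓN: La herramienta que estás usando\nRESULTADO: Tu respuesta o conclusión final"
  }
}
```

Point your Crew at it with `prompt_file="prompts_es.json"`, exactly as in the basic example above.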
## Conclusion

Low-level prompt customization in CrewAI opens the door to super custom, complex use cases. By establishing well-organized prompt files (or direct inline templates), you can accommodate various models, languages, and specialized domains. This level of flexibility ensures you can craft precisely the AI behavior you need, all while knowing CrewAI still provides reliable defaults when you don't override them.

You now have the foundation for advanced prompt customizations in CrewAI. Whether you're adapting for model-specific structures or domain-specific constraints, this low-level approach lets you shape agent interactions in highly specialized ways.