Prompts · Apr 6, 2026 · 2 min read

Prompt Perfect — System Prompt Engineering Templates

Battle-tested system prompt templates for building LLM personas, agents, and workflows. Structured formats for role definition, constraints, and output control. 4,000+ GitHub stars.

TL;DR
Prompt Perfect provides battle-tested system prompt templates for defining LLM personas, agent behaviors, and output constraints.
§01

What it is

Prompt Perfect is a collection of system prompt engineering templates designed for building LLM personas, agents, and workflows. It provides structured formats for defining roles, capabilities, constraints, and output formatting. Instead of writing system prompts from scratch, you start with a proven template and customize it.

This resource targets developers building LLM-powered applications who need consistent, reliable system prompts. It covers common patterns: customer support agents, code reviewers, content writers, data analysts, and task-specific assistants.

§02

How it saves time or tokens

A well-structured system prompt reduces the need for follow-up corrections. Prompt Perfect templates encode best practices: clear role definition, explicit constraints, output format specification, and edge case handling. A template runs roughly 2,800 tokens, but that up-front cost replaces the iterative prompt refinement that can otherwise consume thousands of tokens across correction round-trips.

The templates also include anti-patterns to avoid, helping you skip common mistakes.
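The break-even arithmetic can be sketched in a few lines. A minimal sketch: the 2,800-token figure comes from the article above, while the per-round correction cost is an illustrative assumption, not a measurement.

```python
# Break-even sketch: the template's ~2,800-token cost vs. the correction
# round-trips it prevents. ROUND_TRIP_TOKENS is an assumed figure.
TEMPLATE_TOKENS = 2_800
ROUND_TRIP_TOKENS = 1_500  # assumed cost of one follow-up correction

def net_tokens_saved(rounds_avoided: int) -> int:
    """Net tokens saved per conversation when the template prevents
    `rounds_avoided` correction round-trips."""
    return rounds_avoided * ROUND_TRIP_TOKENS - TEMPLATE_TOKENS

# Two avoided corrections already pay for the template:
print(net_tokens_saved(2))  # 2 * 1500 - 2800 = 200
```

Under these assumptions the template pays for itself after two avoided corrections; with your own measured round-trip cost the break-even point shifts accordingly.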

§03

How to use

  1. Copy the base template and customize:
# IDENTITY AND PURPOSE
You are a [ROLE] specializing in [DOMAIN].
Your audience is [TARGET USERS].

# CAPABILITIES
- You CAN: [list specific abilities]
- You CANNOT: [list restrictions]

# OUTPUT FORMAT
- Format: [markdown/json/plain text]
- Length: [specific range]
- Tone: [professional/casual/technical]

# CONSTRAINTS
- Never [specific prohibition]
- Always [required behavior]
- When uncertain, [fallback behavior]
  2. Add domain-specific rules:
# EXAMPLES
User: [sample input]
Assistant: [expected output]

# EDGE CASES
- If asked about [topic], respond with [approach]
- If input is ambiguous, ask for clarification
  3. Test the prompt with representative inputs and iterate on constraints.
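The fill-in step can be automated so that any unfilled [PLACEHOLDER] slot fails loudly instead of reaching the model. A minimal sketch; the `fill` helper and its bracket convention mirror the template style above but are not part of Prompt Perfect itself:

```python
import re

# Abbreviated version of the base template shown above.
BASE_TEMPLATE = """\
# IDENTITY AND PURPOSE
You are a [ROLE] specializing in [DOMAIN].
Your audience is [TARGET USERS].

# OUTPUT FORMAT
- Format: [FORMAT]
"""

def fill(template: str, values: dict[str, str]) -> str:
    """Replace [PLACEHOLDER] slots; raise if any slot is left unfilled."""
    out = template
    for key, val in values.items():
        out = out.replace(f"[{key}]", val)
    leftover = re.findall(r"\[([A-Z ]+)\]", out)
    if leftover:
        raise ValueError(f"unfilled placeholders: {leftover}")
    return out

prompt = fill(BASE_TEMPLATE, {
    "ROLE": "PostgreSQL DBA",
    "DOMAIN": "query optimization",
    "TARGET USERS": "backend engineers",
    "FORMAT": "markdown",
})
```

Failing on leftover placeholders catches the common mistake of shipping a half-customized template to production.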
§04

Example

A complete system prompt for a code review agent:

# IDENTITY
You are a senior software engineer performing code reviews.

# REVIEW CRITERIA
1. Correctness: Does the code do what it claims?
2. Security: Are there injection, auth, or data exposure risks?
3. Performance: Are there O(n^2) loops or N+1 queries?
4. Readability: Can a new team member understand this?

# OUTPUT FORMAT
For each issue found:
- Severity: critical / warning / suggestion
- Line: [file:line]
- Issue: [one sentence]
- Fix: [code snippet]

# CONSTRAINTS
- Do not rewrite the entire file
- Do not comment on style unless it impacts readability
- Limit to top 5 issues per review
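Because the output format is fixed, review responses can be parsed mechanically. A sketch assuming the model follows the four-field format exactly; the sample review text is invented for illustration:

```python
import re

# Invented sample output in the format the prompt above specifies.
SAMPLE_REVIEW = """\
- Severity: critical
- Line: app/db.py:42
- Issue: User input is interpolated directly into the SQL string.
- Fix: cur.execute("SELECT * FROM users WHERE id = %s", (user_id,))

- Severity: suggestion
- Line: app/api.py:10
- Issue: Function name does not describe what it returns.
- Fix: def fetch_active_users() -> list: ...
"""

ISSUE_RE = re.compile(
    r"- Severity: (?P<severity>critical|warning|suggestion)\n"
    r"- Line: (?P<line>\S+)\n"
    r"- Issue: (?P<issue>.+)\n"
    r"- Fix: (?P<fix>.+)"
)

issues = [m.groupdict() for m in ISSUE_RE.finditer(SAMPLE_REVIEW)]
print(len(issues))  # 2
```

Structured output plus a strict parser is what turns a persona prompt into a usable pipeline stage; if parsing fails, that is itself a signal the prompt's format rules need tightening.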
§05

Common pitfalls

  • Vague role definitions. 'You are a helpful assistant' gives the LLM no useful constraints. Be specific: 'You are a PostgreSQL DBA who reviews slow queries.'
  • Missing output format specification. Without explicit format rules, LLM output varies unpredictably. Specify format, length, and structure.
  • Overloading a single prompt with too many roles. If your prompt tries to be a coder, reviewer, and project manager simultaneously, quality drops. Create separate prompts for distinct tasks.
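These pitfalls are mechanical enough to lint for. A rough sketch: the required section names follow the templates above, while the vague-phrase list is an arbitrary assumption you would extend for your own prompts:

```python
# Section names taken from the template structure above; the vague-phrase
# list is an illustrative assumption.
REQUIRED_SECTIONS = ("IDENTITY", "OUTPUT FORMAT", "CONSTRAINTS")
VAGUE_PHRASES = ("helpful assistant", "answer questions")

def lint_prompt(prompt: str) -> list[str]:
    """Return warnings for the pitfalls above; an empty list means clean."""
    warnings = []
    upper = prompt.upper()
    for section in REQUIRED_SECTIONS:
        if section not in upper:
            warnings.append(f"missing section: {section}")
    lower = prompt.lower()
    for phrase in VAGUE_PHRASES:
        if phrase in lower:
            warnings.append(f"vague role phrase: '{phrase}'")
    return warnings

# Flags all three missing sections plus the vague role phrase:
print(lint_prompt("You are a helpful assistant."))
```

Running such a check in CI keeps template customizations from silently dropping a constraints section.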

Frequently Asked Questions

What makes a good system prompt?

A good system prompt has four sections: identity (who the AI is), capabilities (what it can and cannot do), output format (how to structure responses), and constraints (behavioral boundaries). Each section should be specific and actionable.

How long should a system prompt be?

Aim for 500 to 3,000 tokens. Shorter prompts miss important constraints. Longer prompts waste context window and may cause the LLM to ignore later instructions. Focus on rules the LLM would otherwise violate.
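A rough way to sanity-check length is the common ~4-characters-per-token heuristic for English prose. This is an approximation only; use your target model's real tokenizer when accuracy matters:

```python
def estimate_tokens(text: str) -> int:
    """Rough heuristic: ~4 characters per token for English prose."""
    return max(1, len(text) // 4)

def check_length(prompt: str, lo: int = 500, hi: int = 3000) -> str:
    """Compare an estimated token count against the suggested range."""
    t = estimate_tokens(prompt)
    if t < lo:
        return f"~{t} tokens: likely too short to encode constraints"
    if t > hi:
        return f"~{t} tokens: risks diluting later instructions"
    return f"~{t} tokens: within the suggested range"
```

The 500–3,000 bounds here are the range suggested above, exposed as parameters so you can tune them per model.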

Do system prompts work the same across different LLMs?

The general principles apply across models, but each LLM has preferences. Claude responds well to XML tags and numbered rules. GPT-4 prefers markdown headers. Test your prompt on your target model and adjust formatting.

Can I use few-shot examples in system prompts?

Yes, and you should for complex output formats. One or two examples of expected input-output pairs dramatically improve consistency. Place examples after the rules section so the LLM sees constraints before examples.
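That ordering can be enforced when the prompt is assembled programmatically. A sketch; the `build_prompt` helper is a hypothetical convenience, not part of any library:

```python
def build_prompt(rules: str, examples: list) -> str:
    """Append few-shot examples after the rules section, so the model
    reads constraints before it sees the demonstrations."""
    shots = "\n\n".join(
        f"User: {user}\nAssistant: {assistant}" for user, assistant in examples
    )
    return f"{rules}\n\n# EXAMPLES\n{shots}"

prompt = build_prompt(
    "# CONSTRAINTS\n- Respond in JSON only",
    [('{"q": "status?"}', '{"status": "ok"}')],
)
```

Keeping examples in a separate list also makes it easy to swap demonstrations per task without touching the rules.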

How do I handle edge cases in system prompts?

Add an explicit EDGE CASES section listing scenarios and desired behaviors. Common edges: ambiguous input, out-of-scope requests, harmful content, and missing context. Define fallback behaviors for each.


Source & Thanks

Created by the prompt engineering community. Licensed under MIT.

prompt-perfect — ⭐ 4,000+

Thanks to the community for codifying what makes system prompts actually work.
