Prompts · Apr 8, 2026 · 3 min read

AI Prompt Engineering Best Practices Guide

Comprehensive guide to writing effective prompts for Claude, GPT, and Gemini. Covers system prompts, few-shot learning, chain-of-thought, and structured output techniques.

TL;DR
A guide to writing effective prompts covering system prompts, few-shot, chain-of-thought, and structured output.
§01

What it is

This is a comprehensive guide to writing effective prompts for modern AI models including Claude, GPT-4, and Gemini. It covers the fundamental techniques that make prompts more reliable: system prompts, few-shot learning, chain-of-thought reasoning, structured output with JSON schemas, and role-based prompting.

The guide targets developers and non-developers who interact with AI models regularly and want to get more consistent, higher-quality results without trial and error.

§02

How it saves time or tokens

Poorly written prompts produce inconsistent outputs that require multiple retries. This guide teaches techniques that get the right output on the first attempt. Structured output with JSON schemas eliminates parsing errors. Chain-of-thought prompting improves reasoning accuracy for complex problems. Few-shot examples anchor the model's behavior to your specific use case. These techniques reduce total token usage by avoiding the retry loop.
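As a back-of-envelope illustration of the retry-loop savings (the numbers below are invented for illustration, not benchmarks):

```python
# Illustrative numbers, not measurements: a vague prompt that needs
# three attempts vs. a specific prompt that succeeds on the first try.
prompt_tokens, output_tokens = 50, 400
vague_total = 3 * (prompt_tokens + output_tokens)        # three attempts
specific_prompt_tokens = 150                             # longer, but specific
specific_total = specific_prompt_tokens + output_tokens  # one attempt
savings = 1 - specific_total / vague_total               # ~59% fewer tokens
```

Even though the specific prompt is three times longer, eliminating two retries cuts total usage roughly in half.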

§03

How to use

1. Start with the five core rules:
   1. Be specific - 'Write a Python function that validates email addresses using regex' beats 'help me with email'
   2. Provide context - Tell the model its role, the user's need, and the constraints
   3. Show examples - Few-shot examples anchor the output format
   4. Specify output format - JSON, markdown, bullet points
   5. Break complex tasks into steps - Chain subtasks together
2. Apply system prompts for consistent behavior:
import anthropic

client = anthropic.Anthropic()
response = client.messages.create(
    model='claude-sonnet-4-20250514',
    system='You are a senior Python developer. Write clean, well-documented code. Use type hints. Include error handling.',
    messages=[{'role': 'user', 'content': 'Write a function to parse CSV files'}]
)
3. Use chain-of-thought for reasoning tasks:
Think step by step:
1. First, identify the input format
2. Then, determine the validation rules
3. Finally, implement the solution

Show your reasoning before the final answer.
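The scaffold above can be prepended to any task programmatically. A minimal sketch (the helper name `with_chain_of_thought` is illustrative, not from the guide):

```python
COT_SCAFFOLD = """Think step by step:
1. First, identify the input format
2. Then, determine the validation rules
3. Finally, implement the solution

Show your reasoning before the final answer.

Task: {task}"""


def with_chain_of_thought(task: str) -> str:
    """Prefix a task with the step-by-step reasoning scaffold."""
    return COT_SCAFFOLD.format(task=task)
```

The wrapped string can then be sent as the user message in any provider's chat API.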
§04

Example

Few-shot prompting for consistent output formatting:

prompt = '''
Extract product information from the text.

Example 1:
Text: 'The MacBook Pro M3 starts at $1,599'
Output: {"name": "MacBook Pro M3", "price": 1599, "currency": "USD"}

Example 2:
Text: 'Galaxy S24 Ultra is available for EUR 1,449'
Output: {"name": "Galaxy S24 Ultra", "price": 1449, "currency": "EUR"}

Now extract from:
Text: 'The Pixel 9 Pro costs $999 in the US'
'''
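With the output format anchored by the examples, the model's reply should parse directly as JSON. A sketch of the parsing side (the sample reply string is illustrative, not real model output):

```python
import json


def parse_product(raw: str) -> dict:
    """Parse the model's JSON reply, raising a clear error on malformed output."""
    try:
        product = json.loads(raw.strip())
    except json.JSONDecodeError as exc:
        raise ValueError(f"Model did not return valid JSON: {raw!r}") from exc
    missing = {"name", "price", "currency"} - product.keys()
    if missing:
        raise ValueError(f"Missing fields: {missing}")
    return product


# Illustrative reply in the format the few-shot examples anchor:
reply = '{"name": "Pixel 9 Pro", "price": 999, "currency": "USD"}'
product = parse_product(reply)
```

Failing loudly on malformed replies makes retry logic explicit instead of silently propagating bad data.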
§05

Common pitfalls

  • Writing vague prompts ('make it better') produces unpredictable results. Always specify what 'better' means: faster, shorter, more formal, more detailed.
  • Overloading a single prompt with too many instructions causes the model to miss some of them. Break complex tasks into sequential prompts.
  • Not testing prompts with edge cases. A prompt that works for normal inputs may fail on empty strings, special characters, or unexpected formats.
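A lightweight guard against the edge-case pitfall is to render your prompt template over a small suite of tricky inputs before anything reaches the model. A sketch (the template and cases are illustrative):

```python
TEMPLATE = "Extract product information from the text.\n\nText: {text}"

# Edge cases a prompt is rarely tested against: empty, whitespace-only,
# zero values, embedded quotes/braces, and non-ASCII text.
EDGE_CASES = ["", "  ", "Price: $0", 'He said "free" {sic}', "é ☃ unicode"]


def render(template: str, text: str) -> str:
    """Substitute the input into the template verbatim."""
    return template.replace("{text}", text)


rendered = [render(TEMPLATE, case) for case in EDGE_CASES]
```

Running the real model over these rendered prompts in a review pass then shows where the instructions need tightening.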

Frequently Asked Questions

What is the most important prompting technique?

Being specific about what you want. A detailed prompt with clear constraints, expected output format, and context produces dramatically better results than a vague request. This single change eliminates most prompt failures.

When should I use few-shot examples?

Use few-shot examples when you need consistent output formatting, when the task has domain-specific conventions, or when the model's zero-shot output does not match your expectations. Two to three examples usually suffice.

How does chain-of-thought prompting work?

Chain-of-thought asks the model to show its reasoning step by step before producing the final answer. This improves accuracy on math, logic, and multi-step reasoning tasks by forcing the model to work through intermediate steps.

What is structured output?

Structured output means asking the model to respond in a specific format like JSON with a defined schema. This makes outputs machine-parseable and eliminates format inconsistencies. Most modern models support JSON mode natively.
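A provider-agnostic fallback, when native JSON mode is unavailable, is to embed the schema in the prompt itself. A sketch using a JSON Schema literal (the fields mirror the product-extraction example earlier in the guide):

```python
import json

# JSON Schema describing the expected reply shape.
schema = {
    "type": "object",
    "properties": {
        "name": {"type": "string"},
        "price": {"type": "number"},
        "currency": {"type": "string"},
    },
    "required": ["name", "price", "currency"],
}

instruction = (
    "Respond with a single JSON object matching this schema, "
    "and nothing else:\n" + json.dumps(schema, indent=2)
)
```

Appending `instruction` to the user message gives the model an unambiguous contract that downstream code can validate against.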

Do these techniques work with all AI models?

The core techniques (system prompts, few-shot, chain-of-thought, structured output) work with Claude, GPT-4, Gemini, and most modern language models. Some syntax details vary by provider, but the principles are universal.
