AI Prompt Engineering Best Practices Guide
What it is
This is a comprehensive guide to writing effective prompts for modern AI models including Claude, GPT-4, and Gemini. It covers the fundamental techniques that make prompts more reliable: system prompts, few-shot learning, chain-of-thought reasoning, structured output with JSON schemas, and role-based prompting.
The guide targets developers and non-developers who interact with AI models regularly and want to get more consistent, higher-quality results without trial and error.
How it saves time or tokens
Poorly written prompts produce inconsistent outputs that require multiple retries. This guide teaches techniques that get the right output on the first attempt. Structured output with JSON schemas eliminates parsing errors. Chain-of-thought prompting improves reasoning accuracy for complex problems. Few-shot examples anchor the model's behavior to your specific use case. These techniques reduce total token usage by avoiding the retry loop.
How to use
- Start with the five core rules:
1. Be specific - 'Write a Python function that validates email addresses using regex' beats 'help me with email'
2. Provide context - Tell the model its role, the user's need, and the constraints
3. Show examples - Few-shot examples anchor the output format
4. Specify output format - JSON, markdown, bullet points
5. Break complex tasks into steps - Chain subtasks together
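The five rules above can be combined mechanically into a single prompt. A minimal sketch, assuming nothing beyond the rules themselves; the function name and fields are illustrative, not part of the guide:

```python
def build_prompt(task: str, role: str, constraints: list[str],
                 example: str, output_format: str) -> str:
    """Compose the five core rules into one prompt string."""
    constraint_lines = "\n".join(f"- {c}" for c in constraints)
    return (
        f"You are {role}.\n"                          # rule 2: context / role
        f"Task: {task}\n"                             # rule 1: be specific
        f"Constraints:\n{constraint_lines}\n"         # rule 2: constraints
        f"Example of expected output:\n{example}\n"   # rule 3: show examples
        f"Respond in {output_format}.\n"              # rule 4: output format
        "Work through the task step by step before answering."  # rule 5
    )

prompt = build_prompt(
    task="Write a Python function that validates email addresses using regex",
    role="a senior Python developer",
    constraints=["Use type hints", "Include error handling"],
    example='is_valid_email("a@b.com")  # True',
    output_format="a single fenced Python code block",
)
print(prompt)
```

Templating like this keeps every rule applied on every call instead of relying on ad hoc phrasing.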
- Apply system prompts for consistent behavior:
import anthropic

client = anthropic.Anthropic()
response = client.messages.create(
    model='claude-sonnet-4-20250514',
    system='You are a senior Python developer. Write clean, well-documented code. Use type hints. Include error handling.',
    messages=[{'role': 'user', 'content': 'Write a function to parse CSV files'}]
)
- Use chain-of-thought for reasoning tasks:
Think step by step:
1. First, identify the input format
2. Then, determine the validation rules
3. Finally, implement the solution
Show your reasoning before the final answer.
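Structured output, the remaining technique from the overview, can be requested and checked without any provider-specific API: state the schema in the prompt, then validate the reply before using it. A minimal, provider-agnostic sketch; the schema wording and helper names are illustrative assumptions:

```python
import json

# Illustrative schema instruction to append to an extraction prompt.
SCHEMA_PROMPT = (
    "Respond with JSON only, matching this schema:\n"
    '{"name": string, "price": number, "currency": string}\n'
    "Output no prose before or after the JSON."
)

def parse_reply(reply: str) -> dict:
    """Parse a model reply and minimally validate it against the schema."""
    data = json.loads(reply)
    checks = {"name": str, "price": (int, float), "currency": str}
    for field, expected in checks.items():
        if not isinstance(data.get(field), expected):
            raise ValueError(f"missing or mistyped field: {field}")
    return data

# Stand-in reply; a real API call would return text like this.
reply = '{"name": "Pixel 9 Pro", "price": 999, "currency": "USD"}'
print(parse_reply(reply))
```

Validating before use is what actually eliminates parsing errors downstream; many providers also offer a native JSON mode that enforces the format at generation time.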
Example
Few-shot prompting for consistent output formatting:
prompt = '''
Extract product information from the text.
Example 1:
Text: 'The MacBook Pro M3 starts at $1,599'
Output: {"name": "MacBook Pro M3", "price": 1599, "currency": "USD"}
Example 2:
Text: 'Galaxy S24 Ultra is available for EUR 1,449'
Output: {"name": "Galaxy S24 Ultra", "price": 1449, "currency": "EUR"}
Now extract from:
Text: 'The Pixel 9 Pro costs $999 in the US'
'''
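The few-shot prompt above can also be built programmatically, which keeps the examples and the final query in one place and makes them easy to extend. A sketch; the helper name is illustrative, and the example data mirrors the prompt above:

```python
import json

# (text, expected output) pairs taken from the example prompt.
EXAMPLES = [
    ("The MacBook Pro M3 starts at $1,599",
     {"name": "MacBook Pro M3", "price": 1599, "currency": "USD"}),
    ("Galaxy S24 Ultra is available for EUR 1,449",
     {"name": "Galaxy S24 Ultra", "price": 1449, "currency": "EUR"}),
]

def few_shot_prompt(query: str) -> str:
    """Render the examples and the final query into one prompt string."""
    parts = ["Extract product information from the text."]
    for i, (text, output) in enumerate(EXAMPLES, start=1):
        parts.append(f"Example {i}:")
        parts.append(f"Text: '{text}'")
        parts.append(f"Output: {json.dumps(output)}")
    parts.append("Now extract from:")
    parts.append(f"Text: '{query}'")
    return "\n".join(parts)

print(few_shot_prompt("The Pixel 9 Pro costs $999 in the US"))
```

Adding a third edge-case example (say, a product with no listed price) is then a one-line change to EXAMPLES rather than a prompt rewrite.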
Related on TokRepo
- Prompt library — Browse curated prompts and templates on TokRepo.
- AI tools for coding — Developer tools for AI-assisted workflows.
Common pitfalls
- Writing vague prompts ('make it better') produces unpredictable results. Always specify what 'better' means: faster, shorter, more formal, more detailed.
- Overloading a single prompt with too many instructions causes the model to miss some of them. Break complex tasks into sequential prompts.
- Failing to test prompts with edge cases. A prompt that works for normal inputs may fail on empty strings, special characters, or unexpected formats.
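For that last pitfall, one lightweight remedy is a small harness that runs the same template over a suite of edge-case inputs and checks that each reply still parses. A sketch; call_model stands in for your provider's API call, and the edge cases are illustrative:

```python
import json

# Edge cases of the kind the pitfall warns about: empty strings,
# special characters, missing data, and very long inputs.
EDGE_CASES = ["", "???", "Pixel 9 Pro (no price listed)", "a" * 10_000]

def check_prompt(template: str, call_model, cases=EDGE_CASES) -> list[str]:
    """Return the inputs whose model replies fail to parse as JSON."""
    failures = []
    for case in cases:
        reply = call_model(template.format(text=case))
        try:
            json.loads(reply)
        except (json.JSONDecodeError, TypeError):
            failures.append(case)
    return failures

# Stub model for illustration: always replies with an empty JSON object,
# so every edge case parses and no failures are reported.
print(check_prompt("Extract product info from: {text}", lambda p: "{}"))
```

Swapping the stub for a real API call turns this into a cheap regression check you can run whenever the prompt changes.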
Frequently Asked Questions
What is the single most important prompt engineering technique?
Being specific about what you want. A detailed prompt with clear constraints, expected output format, and context produces dramatically better results than a vague request. This single change eliminates most prompt failures.
When should I use few-shot examples?
Use few-shot examples when you need consistent output formatting, when the task has domain-specific conventions, or when the model's zero-shot output does not match your expectations. Two to three examples usually suffice.
What is chain-of-thought prompting?
Chain-of-thought asks the model to show its reasoning step by step before producing the final answer. This improves accuracy on math, logic, and multi-step reasoning tasks by forcing the model to work through intermediate steps.
What is structured output?
Structured output means asking the model to respond in a specific format like JSON with a defined schema. This makes outputs machine-parseable and eliminates format inconsistencies. Most modern models support JSON mode natively.
Do these techniques work across different AI models?
The core techniques (system prompts, few-shot, chain-of-thought, structured output) work with Claude, GPT-4, Gemini, and most modern language models. Some syntax details vary by provider, but the principles are universal.
Citations (3)
- Anthropic Documentation — Anthropic prompt engineering best practices
- OpenAI Documentation — OpenAI prompt engineering guide
- Google Research — Chain-of-thought prompting research
Related Assets
- Claude-Flow (Multi-Agent Orchestration for Claude Code) — Layers swarm and hive-mind multi-agent orchestration on top of Claude Code with 64 specialized agents, SQLite memory, and parallel execution.
- ccusage (Real-Time Token Cost Tracker for Claude Code) — CLI that reads ~/.claude logs and breaks down Claude Code token spend by day, session, and project, pluggable into your statusline.
- SuperClaude (Workflow Framework for Claude Code) — Adds 16+ slash commands, 9 cognitive personas, and a smart flag system to Claude Code in one pipx install.