What is Prompt Engineering?
Prompt engineering is the practice of crafting inputs to LLMs that produce reliable, high-quality outputs. As AI coding tools become central to development, prompt engineering is no longer optional — it directly impacts code quality, agent reliability, and development speed. This guide covers techniques that work across Claude, GPT, and Gemini.
Answer-Ready: Prompt engineering guide for AI coding tools. Covers system prompts, few-shot learning, chain-of-thought, structured outputs, and role-based prompting. Techniques work across Claude, GPT, and Gemini. Essential for Claude Code CLAUDE.md, Cursor Rules, and agent development.
Best for: Developers using AI coding tools who want better results. Works with: Claude Code, Cursor, Codex CLI, any LLM.
Core Techniques
1. System Prompts (Role Setting)
You are a senior Python developer specializing in FastAPI.
You write clean, typed, well-tested code.
You prefer composition over inheritance.
You always handle errors explicitly.
Where to use: CLAUDE.md (Claude Code), .cursorrules (Cursor), system message (API)
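In API settings, the same role text goes into the system parameter once rather than being repeated in every message. A minimal sketch of keeping the rules in one place (the helper and variable names are illustrative):

```python
# Sketch: store role rules once and join them into a single system prompt.
# The resulting string is what you would pass as a system message.

ROLE_RULES = [
    "You are a senior Python developer specializing in FastAPI.",
    "You write clean, typed, well-tested code.",
    "You prefer composition over inheritance.",
    "You always handle errors explicitly.",
]

def build_system_prompt(rules: list[str]) -> str:
    """Join role rules into one system prompt string."""
    return "\n".join(rules)

system_prompt = build_system_prompt(ROLE_RULES)
```

Keeping the rules as a list makes it easy to share the same role definition between a CLAUDE.md file and an API system message.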
2. Few-Shot Learning
Convert natural language to SQL.
Example 1:
Input: "How many users signed up last month?"
Output: SELECT COUNT(*) FROM users WHERE created_at >= DATE_TRUNC('month', CURRENT_DATE - INTERVAL '1 month')
Example 2:
Input: "Top 5 products by revenue"
Output: SELECT product_name, SUM(price * quantity) as revenue FROM orders GROUP BY product_name ORDER BY revenue DESC LIMIT 5
Now convert:
Input: "Average order value by country"
3. Chain-of-Thought (CoT)
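The numbered steps in a prompt like the one below can also be generated from a plain step list, which keeps CoT templates reusable across tasks. A minimal sketch (the helper name is illustrative):

```python
def chain_of_thought_prompt(task: str, steps: list[str]) -> str:
    """Wrap a task with numbered 'think step by step' instructions."""
    numbered = "\n".join(f"{i}. {step}" for i, step in enumerate(steps, start=1))
    return f"{task}\nThink step by step:\n{numbered}"

prompt = chain_of_thought_prompt(
    "Analyze this code for security vulnerabilities.",
    [
        "First, identify all user inputs",
        "Then, trace how each input flows through the code",
        "Check if any input reaches a sensitive operation without validation",
        "For each vulnerability found, classify severity (Critical/High/Medium/Low)",
        "Suggest a specific fix for each vulnerability",
    ],
)
```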
Analyze this code for security vulnerabilities.
Think step by step:
1. First, identify all user inputs
2. Then, trace how each input flows through the code
3. Check if any input reaches a sensitive operation without validation
4. For each vulnerability found, classify severity (Critical/High/Medium/Low)
5. Suggest a specific fix for each vulnerability
4. Structured Output Prompting
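Even when a reply is requested in the JSON shape shown below, models occasionally deviate, so validate before use. A defensive parsing sketch (field names mirror this section's schema; the sample reply is invented for illustration):

```python
import json

REQUIRED_FIELDS = {"error_type", "root_cause", "affected_files", "suggested_fix", "severity"}
VALID_SEVERITIES = {"critical", "high", "medium", "low"}

def parse_error_report(raw: str) -> dict:
    """Parse a model reply and check it against the expected schema."""
    report = json.loads(raw)
    missing = REQUIRED_FIELDS - report.keys()
    if missing:
        raise ValueError(f"missing fields: {sorted(missing)}")
    if report["severity"] not in VALID_SEVERITIES:
        raise ValueError(f"unexpected severity: {report['severity']}")
    return report

# Invented sample reply, shaped like the schema above.
sample = ('{"error_type": "TypeError", "root_cause": "None passed to len()", '
          '"affected_files": ["app/utils.py"], '
          '"suggested_fix": "guard with `if items is not None`", '
          '"severity": "high"}')
report = parse_error_report(sample)
```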
Extract the following from this error log and return as JSON:
{
"error_type": "string (e.g., TypeError, ConnectionError)",
"root_cause": "string (one sentence)",
"affected_files": ["list of file paths"],
"suggested_fix": "string (specific code change)",
"severity": "critical | high | medium | low"
}
5. Constraint-Based Prompting
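Constraints you can state can often also be checked mechanically after generation. A sketch for the line-count constraint used below (the other constraints would need a linter or type checker):

```python
def within_line_limit(code: str, max_lines: int = 100) -> bool:
    """Check a 'maximum N lines of code' constraint, ignoring blank lines."""
    lines = [ln for ln in code.splitlines() if ln.strip()]
    return len(lines) <= max_lines
```

Running a check like this after generation lets you retry automatically instead of reviewing by hand.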
Generate a React component with these constraints:
- Use TypeScript with strict types
- Use Tailwind CSS only (no inline styles, no CSS modules)
- Must be a functional component with hooks
- Must handle loading, error, and empty states
- Must be accessible (ARIA labels, keyboard navigation)
- Maximum 100 lines of code
6. Negative Prompting (What NOT to Do)
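Some "Do NOT" rules can be enforced after generation rather than trusted. A rough sketch that flags imports not declared in package.json (regex-based, so only an approximation, and the function name is illustrative):

```python
import json
import re

def undeclared_imports(ts_code: str, package_json: str) -> set[str]:
    """Return package names imported in ts_code but absent from package.json."""
    deps: set[str] = set()
    pkg = json.loads(package_json)
    for key in ("dependencies", "devDependencies"):
        deps |= set(pkg.get(key, {}))
    # Match `from "pkg"` imports, skipping relative paths like "./util".
    imported = set(re.findall(r"from\s+['\"]([^'\"./][^'\"]*)['\"]", ts_code))
    roots = set()
    for name in imported:
        if name.startswith("@"):
            roots.add("/".join(name.split("/")[:2]))  # keep scoped root, e.g. @types/node
        else:
            roots.add(name.split("/")[0])
    return roots - deps
```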
Do NOT:
- Add comments explaining obvious code
- Create helper functions for one-time operations
- Add error handling for impossible cases
- Import libraries not already in package.json
- Modify files outside the specified scope
Technique Selection Guide
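The pairings below can also be kept as data, so tooling can suggest a starting template per task. A sketch mirroring the table:

```python
# Scenario -> recommended techniques, mirroring the selection guide table.
TECHNIQUE_GUIDE = {
    "code generation": ["constraints", "examples"],
    "bug fixing": ["chain-of-thought"],
    "data extraction": ["structured output", "examples"],
    "code review": ["role setting", "chain-of-thought"],
    "refactoring": ["constraints", "negative prompting"],
    "documentation": ["role setting", "examples"],
}
```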
| Scenario | Best Technique |
|---|---|
| Code generation | Constraints + Examples |
| Bug fixing | Chain-of-thought |
| Data extraction | Structured output + Examples |
| Code review | Role setting + CoT |
| Refactoring | Constraints + Negative prompting |
| Documentation | Role setting + Examples |
Platform-Specific Tips
Claude Code (CLAUDE.md)
# CLAUDE.md
- Use TypeScript strict mode
- Run `npm test` after changes
- Prefer composition over inheritance
- Never modify files in /vendor/
Cursor (.cursorrules)
You are an expert in Next.js 14, TypeScript, and Tailwind.
Always use server components unless client interactivity is needed.
API (System Message)
import anthropic

client = anthropic.Anthropic()
response = client.messages.create(
    model="claude-sonnet-4-20250514",
    max_tokens=1024,
    system="You are a code review expert. Be concise and specific.",
    messages=[{"role": "user", "content": code}],
)
FAQ
Q: Does prompt engineering still matter with advanced models? A: Yes. Better prompts = more reliable outputs, fewer retries, lower costs. The gap between good and bad prompts is smaller with advanced models but still significant.
Q: How long should prompts be? A: As short as possible, as long as necessary. A 5-line prompt with a good example beats a 50-line prompt without one.
Q: Should I use different prompts for different models? A: Core techniques work across models. Adjust for model-specific features (e.g., Claude's tool use vs GPT's function calling).