Prompts · Apr 8, 2026 · 3 min read

AI Prompt Engineering Best Practices Guide

Comprehensive guide to writing effective prompts for Claude, GPT, and Gemini. Covers system prompts, few-shot learning, chain-of-thought, and structured output techniques.

Skill Factory · Community

The 5 Rules of Effective Prompting

  1. Be specific — "Write a Python function that validates email addresses using regex" beats "help me with email"
  2. Provide context — Tell the model what role it plays, what the user needs, and what constraints apply
  3. Show examples — Few-shot examples are worth 1000 words of instruction
  4. Structure output — Specify the exact format you want (JSON, markdown, list)
  5. Iterate — Test, refine, test again

What is Prompt Engineering?

Prompt engineering is the practice of crafting inputs to LLMs that produce reliable, high-quality outputs. As AI coding tools become central to development, prompt engineering is no longer optional — it directly impacts code quality, agent reliability, and development speed. This guide covers techniques that work across Claude, GPT, and Gemini.


Best for: Developers using AI coding tools who want better results. Works with: Claude Code, Cursor, Codex CLI, any LLM.

Core Techniques

1. System Prompts (Role Setting)

You are a senior Python developer specializing in FastAPI.
You write clean, typed, well-tested code.
You prefer composition over inheritance.
You always handle errors explicitly.

Where to use: CLAUDE.md (Claude Code), .cursorrules (Cursor), system message (API)

2. Few-Shot Learning

Convert natural language to SQL.

Example 1:
Input: "How many users signed up last month?"
Output: SELECT COUNT(*) FROM users WHERE created_at >= DATE_TRUNC('month', CURRENT_DATE - INTERVAL '1 month')

Example 2:
Input: "Top 5 products by revenue"
Output: SELECT product_name, SUM(price * quantity) as revenue FROM orders GROUP BY product_name ORDER BY revenue DESC LIMIT 5

Now convert:
Input: "Average order value by country"
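Few-shot prompts like the one above are usually assembled programmatically from a list of worked examples. A minimal sketch in Python (the `build_few_shot_prompt` helper and its data are illustrative, not part of any SDK):

```python
def build_few_shot_prompt(task: str, examples: list[tuple[str, str]], query: str) -> str:
    """Assemble a few-shot prompt: task description, worked examples, then the new input."""
    parts = [task, ""]
    for i, (inp, out) in enumerate(examples, start=1):
        parts += [f"Example {i}:", f"Input: {inp}", f"Output: {out}", ""]
    parts += ["Now convert:", f"Input: {query}"]
    return "\n".join(parts)

prompt = build_few_shot_prompt(
    "Convert natural language to SQL.",
    [("How many users signed up last month?",
      "SELECT COUNT(*) FROM users WHERE created_at >= "
      "DATE_TRUNC('month', CURRENT_DATE - INTERVAL '1 month')")],
    "Average order value by country",
)
```

Keeping examples as data makes it easy to add, reorder, or A/B test them without rewriting the prompt text.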

3. Chain-of-Thought (CoT)

Analyze this code for security vulnerabilities.

Think step by step:
1. First, identify all user inputs
2. Then, trace how each input flows through the code
3. Check if any input reaches a sensitive operation without validation
4. For each vulnerability found, classify severity (Critical/High/Medium/Low)
5. Suggest a specific fix for each vulnerability
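A checklist like this is typically kept as a reusable template with the code under review interpolated in. A minimal sketch (the template mirrors the steps above; the helper name is illustrative):

```python
COT_SECURITY_TEMPLATE = """Analyze this code for security vulnerabilities.

Think step by step:
1. First, identify all user inputs
2. Then, trace how each input flows through the code
3. Check if any input reaches a sensitive operation without validation
4. For each vulnerability found, classify severity (Critical/High/Medium/Low)
5. Suggest a specific fix for each vulnerability

Code:
{code}"""

def security_review_prompt(code: str) -> str:
    """Wrap the code under review in the chain-of-thought checklist."""
    return COT_SECURITY_TEMPLATE.format(code=code)
```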

4. Structured Output Prompting

Extract the following from this error log and return as JSON:

{
  "error_type": "string (e.g., TypeError, ConnectionError)",
  "root_cause": "string (one sentence)",
  "affected_files": ["list of file paths"],
  "suggested_fix": "string (specific code change)",
  "severity": "critical | high | medium | low"
}
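Even when you request JSON like the schema above, models sometimes wrap the object in prose or a markdown code fence, so responses are usually parsed defensively. A minimal sketch (the fence-stripping heuristic is one common convention, not guaranteed model behavior):

```python
import json

FENCE = "`" * 3  # a literal three-backtick markdown fence

def parse_json_response(text: str) -> dict:
    """Extract a JSON object from a model response, tolerating markdown code fences."""
    cleaned = text.strip()
    if cleaned.startswith(FENCE):
        cleaned = cleaned.split("\n", 1)[1]    # drop opening fence line (and any language tag)
        cleaned = cleaned.rsplit(FENCE, 1)[0]  # drop closing fence
    return json.loads(cleaned)

raw = FENCE + "json\n" + '{"error_type": "TypeError", "severity": "high"}\n' + FENCE
result = parse_json_response(raw)
```

If parsing must never fail, pair this with a retry that feeds the `json.JSONDecodeError` message back to the model.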

5. Constraint-Based Prompting

Generate a React component with these constraints:
- Use TypeScript with strict types
- Use Tailwind CSS only (no inline styles, no CSS modules)
- Must be a functional component with hooks
- Must handle loading, error, and empty states
- Must be accessible (ARIA labels, keyboard navigation)
- Maximum 100 lines of code

6. Negative Prompting (What NOT to Do)

Do NOT:
- Add comments explaining obvious code
- Create helper functions for one-time operations
- Add error handling for impossible cases
- Import libraries not already in package.json
- Modify files outside the specified scope

Technique Selection Guide

| Scenario | Best Technique |
| --- | --- |
| Code generation | Constraints + Examples |
| Bug fixing | Chain-of-thought |
| Data extraction | Structured output + Examples |
| Code review | Role setting + CoT |
| Refactoring | Constraints + Negative prompting |
| Documentation | Role setting + Examples |

Platform-Specific Tips

Claude Code (CLAUDE.md)

# CLAUDE.md
- Use TypeScript strict mode
- Run `npm test` after changes
- Prefer composition over inheritance
- Never modify files in /vendor/

Cursor (.cursorrules)

You are an expert in Next.js 14, TypeScript, and Tailwind.
Always use server components unless client interactivity is needed.

API (System Message)

import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

response = client.messages.create(
    model="claude-sonnet-4-20250514",
    max_tokens=1024,  # required by the Messages API
    system="You are a code review expert. Be concise and specific.",
    messages=[{"role": "user", "content": code}],
)

FAQ

Q: Does prompt engineering still matter with advanced models? A: Yes. Better prompts = more reliable outputs, fewer retries, lower costs. The gap between good and bad prompts is smaller with advanced models but still significant.

Q: How long should prompts be? A: As short as possible, as long as necessary. A 5-line prompt with a good example beats a 50-line prompt without one.

Q: Should I use different prompts for different models? A: Core techniques work across models. Adjust for model-specific features (e.g., Claude's tool use vs GPT's function calling).
