Prompt Engineering Techniques — 20+ Methods
A comprehensive prompt engineering collection covering 20+ techniques, including Chain-of-Thought, ReAct, and Tree-of-Thought, with runnable notebooks. 7.3K+ stars on GitHub.
What it is
This is a comprehensive collection of prompt engineering techniques covering 20+ methods with runnable implementations. The repository includes techniques ranging from foundational approaches like Chain-of-Thought (CoT) and few-shot prompting to advanced strategies like ReAct, Tree-of-Thought, and self-consistency. Each technique comes with a notebook you can run directly.
The collection is designed for developers, researchers, and AI practitioners who want to move beyond basic prompting. Whether you are building AI agents, optimizing LLM outputs, or studying prompt design patterns, this resource provides concrete implementations rather than just theory.
How it saves time or tokens
Structured prompting techniques reduce wasted tokens on malformed outputs. Chain-of-Thought prompting, for example, improves accuracy on reasoning tasks, meaning fewer retry cycles. ReAct patterns combine reasoning and action in a single prompt structure, eliminating the overhead of separate planning and execution steps. By selecting the right technique for your task, you avoid the trial-and-error of discovering effective patterns from scratch.
How to use
- Browse the technique catalog and select the method that matches your task type (reasoning, classification, generation, tool use).
- Copy the template and adapt it to your specific domain:
# Chain-of-Thought (CoT) template
Solve the following problem step by step.
Problem: [your problem here]
Let's think through this step by step:
Step 1: ...
Step 2: ...
Step 3: ...
Therefore, the answer is: ...
- Run the associated notebook to see the technique in action with real LLM calls and compare output quality.
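The template above can be filled in programmatically before sending it to your model. This is a minimal sketch: `COT_TEMPLATE` and `build_cot_prompt` are illustrative names, and the actual LLM call (not shown) would use whatever client your provider offers.

```python
# Hypothetical helper for filling the Chain-of-Thought template above.
# The downstream LLM call is omitted; use your provider's own client.
COT_TEMPLATE = """Solve the following problem step by step.
Problem: {problem}
Let's think through this step by step:"""

def build_cot_prompt(problem: str) -> str:
    """Insert the task into the Chain-of-Thought template."""
    return COT_TEMPLATE.format(problem=problem)

prompt = build_cot_prompt(
    "If a train travels 60 km in 45 minutes, what is its speed in km/h?"
)
print(prompt)
```

Keeping the template as a constant makes it easy to A/B test wording changes against the notebook versions.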
Example
ReAct pattern combining reasoning and action:
Question: What is the population of the capital of France?
Thought: I need to find the capital of France first, then look up its population.
Action: Search('capital of France')
Observation: Paris is the capital of France.
Thought: Now I need to find the population of Paris.
Action: Search('population of Paris 2024')
Observation: The population of Paris is approximately 2.1 million.
Thought: I have the answer.
Answer: The population of Paris, the capital of France, is approximately 2.1 million.
This pattern grounds the model's reasoning in observable actions, reducing hallucination on factual queries.
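The Thought/Action/Observation loop above can be driven by a small parser that extracts the requested action and feeds the tool result back as an observation. This is a sketch under stated assumptions: the `Search` tool is stubbed with canned data, and the model call that would produce each Thought/Action line is left out.

```python
import re

# Stubbed tool registry; a real agent would call an actual search API here.
TOOLS = {
    "Search": lambda q: {
        "capital of France": "Paris is the capital of France.",
    }.get(q, "No result."),
}

def step(model_output: str):
    """Parse an Action line like Search('query') and run the matching tool.

    Returns the Observation string, or None if the model produced no
    action (i.e. it emitted a final Answer instead).
    """
    m = re.search(r"Action:\s*(\w+)\('([^']*)'\)", model_output)
    if not m:
        return None
    name, arg = m.groups()
    return f"Observation: {TOOLS[name](arg)}"

out = step("Thought: find the capital.\nAction: Search('capital of France')")
print(out)  # Observation: Paris is the capital of France.
```

In a full loop you would append each observation to the prompt and call the model again until it emits an Answer line.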
Related on TokRepo
- Prompt Library — Browse curated prompts organized by task type and model
- AI Tools for Coding — Coding assistants that benefit from advanced prompting techniques
Common pitfalls
- Chain-of-Thought increases token usage. Use it for complex reasoning tasks where accuracy matters, not for simple classification where a direct answer suffices.
- Tree-of-Thought requires multiple LLM calls per decision point. Budget for 3-5x token cost compared to single-pass prompting.
- Not all techniques work equally well across models. Test your chosen technique on your specific model before committing to it in production.
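The cost figures in the pitfalls above can be turned into a back-of-envelope budget check before you commit to a technique. The multipliers below are the rough figures quoted in this article (50-200 extra tokens for CoT, 3-5x for Tree-of-Thought), not measured constants; treat them as worst-case planning numbers.

```python
# Rough per-request token estimate; multipliers mirror the pitfalls above
# (upper bounds, not measurements for any specific model).
def estimate_tokens(base_tokens: int, technique: str) -> int:
    if technique == "direct":
        return base_tokens
    if technique == "cot":
        return base_tokens + 200          # upper end of the 50-200 token overhead
    if technique == "tot":
        return (base_tokens + 200) * 5    # worst case: 5 explored reasoning paths
    raise ValueError(f"unknown technique: {technique}")

print(estimate_tokens(300, "tot"))  # 2500
```

Running this over your expected request volume gives a quick sanity check on whether Tree-of-Thought fits your token budget.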
Frequently Asked Questions
Which techniques are included?
The collection covers 20+ techniques including Chain-of-Thought, few-shot prompting, ReAct, Tree-of-Thought, self-consistency, role prompting, decomposition, and more. Each technique has a runnable notebook with real examples.
Which technique should I start with?
Start with Chain-of-Thought for reasoning tasks and few-shot prompting for classification or formatting tasks. These two techniques cover the majority of common use cases and are well-supported across all major LLM providers.
Do these techniques work with any model?
Most techniques are model-agnostic and work with OpenAI, Anthropic, Google, and open-source models. Some advanced techniques like function calling patterns may require provider-specific syntax adjustments.
How much extra token usage do these techniques add?
Chain-of-Thought typically adds 50-200 tokens per response for the reasoning steps. Tree-of-Thought can multiply total token usage by 3-5x since it explores multiple reasoning paths. Few-shot prompting adds tokens proportional to the number of examples in the prompt.
Can I combine multiple techniques in one prompt?
Yes. Combining techniques is common in production systems. For example, you can use role prompting with Chain-of-Thought and few-shot examples in a single prompt. The key is to test combined approaches, since interactions between techniques can be unpredictable.
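Stacking techniques in a single prompt can be as simple as concatenating the pieces in a fixed order. The sketch below combines role prompting, few-shot examples, and a Chain-of-Thought trigger; the function name and section ordering are illustrative choices, not a standard from the repository.

```python
# Illustrative combiner: role prompting + few-shot examples + CoT trigger.
def build_combined_prompt(role, examples, task):
    parts = [f"You are {role}."]                  # role prompting
    for q, a in examples:                         # few-shot examples
        parts.append(f"Q: {q}\nA: {a}")
    parts.append(f"Q: {task}\nA: Let's think step by step.")  # CoT trigger
    return "\n\n".join(parts)

print(build_combined_prompt(
    "a careful math tutor",
    [("2 + 2?", "4")],
    "What is 17 * 3?",
))
```

Because the pieces are composed programmatically, you can ablate each one (drop the role, vary the example count) and measure its effect in isolation.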
Citations (3)
- Prompt Engineering GitHub — Prompt engineering techniques repository with 7.3K+ stars
- arXiv CoT Paper — Chain-of-Thought prompting improves reasoning in LLMs
- arXiv ReAct Paper — ReAct: Synergizing Reasoning and Acting in Language Models
Source & Thanks
- GitHub: https://github.com/NirDiamant/Prompt_Engineering (7.3K+ stars)
- Author: Nir Diamant
- License: Open Source