TOKREPO · ARSENAL
Stable

Prompt Engineering Toolkit

Awesome Prompt Engineering, OpenAI Cookbook, Prompt Architect's 27 frameworks, Prompt Master, plus Claude Code's prompt-engineer subagent.

6 assets

What's in this pack

This pack assembles six high-signal prompt-engineering assets, including a Claude Code subagent that actually applies the others. The mix is deliberate: two encyclopedic references, two opinionated framework decks, and two operational tools you can drop into your editor.

| # | Asset | Type | What it gives you |
|---|-------|------|-------------------|
| 1 | Awesome Prompt Engineering | curated list | Index of papers, courses, libraries |
| 2 | OpenAI Cookbook | reference repo | 200+ working examples for OpenAI APIs |
| 3 | Prompt Architect — 27 frameworks | framework deck | CRISPE, RACE, RICE, RTF and 23 more |
| 4 | Prompt Master | framework deck | Pattern library with red-team examples |
| 5 | prompt-engineer subagent | Claude Code agent | Rewrites a prompt against the chosen framework |
| 6 | Prompt scaffolds | snippet pack | Reusable system messages for common tasks |

The collection is opinionated about the order of operations: read the awesome list to map the territory, copy a framework that matches your task, then run the subagent on your draft and iterate.

Why a "toolkit" rather than another listicle

Search results for "prompt engineering" have collapsed into the same five tips on every page. This pack solves a different problem: once you know the basics, what do you reach for to get better?

The answer turns out to be three things in tension:

  • Breadth — see how prompts vary across domains, models, and provider quirks. The Awesome list and Cookbook cover this.
  • Structure — pick a frame so your prompts are auditable, comparable, and reusable. The 27 frameworks and Prompt Master deck cover this.
  • Iteration — get from draft N to draft N+1 quickly, with a rationale. The Claude Code subagent covers this.

Owning all three at once is what compounds. Picking one or two leaves you re-discovering the same mistakes for months.

Install in one command

# Install the entire pack
tokrepo install pack/prompt-engineering-toolkit

# Or pick the subagent alone
tokrepo install prompt-engineer

The subagent activates inside Claude Code when you send a request such as "@prompt-engineer rewrite this prompt for clarity and falsifiability". It uses one of the 27 frameworks (default: CRISPE) and outputs a diff plus a rationale section. The framework decks are stored under .claude/skills/prompt-engineering/ so you can reference them from any session.
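For orientation, a Claude Code subagent is a Markdown file with YAML frontmatter under .claude/agents/. The sketch below is illustrative only: the field values and instructions are hypothetical, not the pack's actual file, though the frontmatter keys (name, description, tools) follow Claude Code's documented subagent format.

```markdown
---
name: prompt-engineer
description: Rewrites a draft prompt against a named framework and emits a diff plus a rationale. Use when a prompt needs tightening.
tools: Read, Grep
---

You are a prompt engineer. Given a draft prompt:
1. Pick a framework (default: CRISPE), or honor an explicit --framework=<name> request.
2. Rewrite the draft to fill that framework's slots.
3. Output a unified diff and a one-line rationale per change.
```

Because the subagent is just a Markdown file, it is itself version-controllable and reviewable, which matches the pack's "prompts are code" stance.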

Common pitfalls

  • Treating frameworks as gospel. CRISPE / RACE / RTF are scaffolds, not laws. The subagent picks one based on the task; if its choice feels wrong, override with --framework=<name> rather than fighting the output.
  • Skipping eval. A prompt rewrite that "feels better" might score worse on your real test set. Pair this pack with LLM Eval & Guardrails (Promptfoo / DeepEval) so every change has a quantified delta.
  • Provider drift. OpenAI Cookbook examples assume OpenAI APIs. The Claude / Gemini equivalents differ in subtle ways (system message handling, tool use schemas). When porting, check the provider's own prompting doc first.
  • Over-prompting. Long prompts hide bugs. If a prompt exceeds ~400 tokens, factor parts into a tool definition or retrieval call instead of cramming everything into the system message.
  • No version control. Prompts are code. Commit them, diff them, code-review them. The subagent emits diffs precisely so this lifecycle works.
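The "over-prompting" pitfall above can be mechanized as a pre-commit check. This is a minimal sketch assuming the common rough heuristic of ~4 characters per token for English prose; a real tokenizer such as tiktoken would be more accurate, and the 400-token threshold is the one suggested above, not a hard rule.

```python
PROMPT_TOKEN_BUDGET = 400  # threshold suggested in the pitfalls list above


def approx_tokens(text: str) -> int:
    """Crude token estimate: roughly 4 characters per token for English prose."""
    return max(1, len(text) // 4)


def check_prompt(text: str, budget: int = PROMPT_TOKEN_BUDGET) -> str:
    """Flag prompts that exceed the budget so parts can be factored out."""
    n = approx_tokens(text)
    if n > budget:
        return (f"over budget ({n} est. tokens > {budget}); "
                f"consider moving detail into a tool definition or retrieval call")
    return f"ok ({n} est. tokens)"


print(check_prompt("You are a helpful assistant. Answer concisely."))
```

Wired into CI, a check like this turns the pitfall into a review comment instead of a production surprise.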

Common misconceptions

  • "Prompt engineering is dead because models got smarter." The opposite — better models reward more structured prompts because they can follow more constraints reliably. The skill that died is trick-prompting (jailbreak phrases, magic words). Structured prompting is more valuable than ever.
  • "You don't need this if you're using a framework like LangChain." Frameworks compose prompts; they don't write them. The system messages and tool descriptions inside a LangChain chain are still prompts you have to author.
  • "The OpenAI Cookbook is OpenAI-specific." The patterns (function calling, structured output, evaluators) port cleanly to Claude and Gemini. The bindings differ; the approach doesn't.

Where this pack stops

The toolkit is end-to-end for single-prompt engineering. It does not cover multi-step agent design (where the prompt is just one node in a graph), retrieval-augmented prompting (where context is injected at runtime), or fine-tuning (where you change the model itself). For each of those, pair this pack with the appropriate adjacency: agent frameworks for graphs, RAG pipelines for retrieval, and fine-tuning recipes for the model layer. The prompt-engineer subagent is honest about its scope — when you ask it to "fix" something that needs RAG instead, it will tell you so explicitly rather than producing a longer prompt that papers over the gap.

What's inside

6 assets in this pack

Prompt#01
Awesome Prompt Engineering — Papers, Tools & Courses

Hand-curated collection of 60+ papers, 50+ tools, benchmarks, and courses for prompt engineering and context engineering. Covers CoT, RAG, agents, security, and multimodal. Apache 2.0.

by Prompt Lab
$ tokrepo install awesome-prompt-engineering-papers-tools-courses-1b3fa22b
Prompt#02
OpenAI Cookbook — Official Prompting Guides

Official prompting guides from OpenAI: GPT-5.2, Codex, Meta Prompting, and Realtime API guides. The definitive reference for OpenAI model optimization.

by OpenAI
$ tokrepo install openai-cookbook-official-prompting-guides-26b9b7dd
Prompt#03
Prompt Architect — 27 Frameworks for Expert Prompts

Transform vague prompts into structured, expert-level prompts using 27 research-backed frameworks across 7 intent categories. Works with Claude Code, ChatGPT, Cursor, and 30+ AI tools.

by Prompt Lab
$ tokrepo install prompt-architect-27-frameworks-expert-prompts-08f51e3b
Skill#04
Prompt Master — Zero-Waste AI Prompt Generator Skill

Claude Code skill that generates optimized prompts for 30+ AI tools. Auto-detects target tool, applies 5 safe techniques, catches 35 credit-killing patterns. 4.8K+ stars, MIT license.

by Prompt Lab
$ tokrepo install prompt-master-zero-waste-ai-prompt-generator-skill-0994566a
Prompt#05
AI Prompt Engineering Best Practices Guide

Comprehensive guide to writing effective prompts for Claude, GPT, and Gemini. Covers system prompts, few-shot learning, chain-of-thought, and structured output techniques.

by Skill Factory
$ tokrepo install ai-prompt-engineering-best-practices-guide-15f82b68
Skill#06
Claude Code Agent: Prompt Engineer — Design & Test Prompts

Claude Code agent for designing, optimizing, and testing LLM prompts. Improves accuracy, reduces token usage, and benchmarks results.

by Skill Factory
$ tokrepo install claude-code-agent-prompt-engineer-design-test-prompts-57eff515
FAQ

Frequently asked questions

Is the pack free?

Yes. Every asset is open-source — five GitHub repos plus the Anthropic-format subagent. The TokRepo install is free and does not introduce a proxy or token. You only pay for the LLM API calls when you actually run the subagent against a draft prompt, and those bill against whichever provider you use (Claude, OpenAI, Gemini).

How does this compare to using ChatGPT to rewrite my prompts?

ChatGPT can rewrite a prompt, but it picks an implicit framework and gives you no rationale. The prompt-engineer subagent picks a framework explicitly, lists the constraints it added, and emits a unified diff so you can review what changed and why. That makes the rewrite auditable and lets you reject specific choices instead of accepting the whole thing.
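The review loop described above can be seen in miniature with Python's standard difflib. The prompt texts and annotations here are invented for illustration, and the subagent's actual output format may differ; the point is why a unified diff makes a rewrite reviewable line by line.

```python
import difflib

before = [
    "You are a helpful assistant.",
    "Answer the question.",
]
after = [
    "You are a senior data analyst.",                    # role made explicit
    "Answer the question in at most 3 bullet points.",   # output constraint added
    "If data is missing, say so instead of guessing.",   # falsifiability clause
]

# lineterm="" because our lines carry no trailing newlines
diff = "\n".join(
    difflib.unified_diff(before, after, fromfile="prompt.v1", tofile="prompt.v2", lineterm="")
)
print(diff)
```

Each "+" line is a constraint you can accept or reject individually, which is the auditability the FAQ answer claims.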

Does it work with Claude Code or Cursor?

The subagent is Claude Code native (it's a .claude/agents/*.md file). The framework decks and reference repos are language-agnostic — they install as Markdown and can be read by any AI editor. Cursor users typically reference them via @-mentions; Codex CLI users put them in AGENTS.md. The subagent specifically requires Claude Code's agent invocation syntax.

How does this differ from writing prompts by hand?

By-hand prompts are great when you've prompted that exact task before. This toolkit shines when you're starting from scratch or when prompts are misbehaving in ways you can't articulate. The frameworks give you vocabulary (Role, Context, Specificity, Examples) for naming what's missing. Once you've internalized them you may stop using the subagent — that's success, not failure.
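The "vocabulary" point can be made concrete: once prompt slots have names, a missing slot is a detectable gap rather than a vague feeling. The slot names below follow the wording in this FAQ answer (Role, Context, Specificity, Examples), not any one framework's canonical fields; the class itself is a hypothetical sketch, not part of the pack.

```python
from dataclasses import dataclass, field


@dataclass
class PromptDraft:
    role: str = ""
    context: str = ""
    specificity: str = ""
    examples: list[str] = field(default_factory=list)

    def missing_slots(self) -> list[str]:
        """Name the slots the draft has not filled yet."""
        gaps = [name for name in ("role", "context", "specificity")
                if not getattr(self, name).strip()]
        if not self.examples:
            gaps.append("examples")
        return gaps

    def render(self) -> str:
        """Assemble the filled slots into a single prompt string."""
        parts = [f"Role: {self.role}", f"Context: {self.context}",
                 f"Task: {self.specificity}"]
        parts += [f"Example: {e}" for e in self.examples]
        return "\n".join(parts)


draft = PromptDraft(role="Senior reviewer", specificity="Summarize the PR in 3 bullets")
print(draft.missing_slots())  # context and examples are still empty
```

Internalizing the slot names is the "success, not failure" endpoint the answer describes: eventually you run this check in your head.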

What's the biggest operational gotcha?

The biggest mistake teams make is not committing their prompts. A prompt change is a code change with the same risk profile (regression, drift, attribution). Treat the rewritten prompt the same as a refactored function: PR, review, eval-run, merge. The subagent emits diffs precisely to make this workflow natural rather than aspirational.

MORE FROM THE ARSENAL

12 packs · 80+ hand-picked assets

Browse every curated bundle on the home page

Back to all packs