Claude Official Skill: skill-creator
Create new skills, modify and improve existing skills, and measure skill performance. Use when users want to create a skill from scratch, edit, or optimize an existing skill, ru...
What it is
Skill-creator is an official Claude Code skill from Anthropic for creating new skills, modifying existing ones, and measuring skill performance. It provides an iterative workflow: draft a skill, run test prompts, evaluate results both qualitatively and quantitatively, then refine based on feedback.
The skill targets developers and teams who want to extend their AI agent's capabilities with custom domain expertise. It works with Claude Code, GitHub Copilot, and Gemini CLI.
How it saves time or tokens
Without a structured skill creation process, writing effective AI agent skills requires trial and error. Skill-creator provides a repeatable workflow that front-loads evaluation: it generates test prompts, runs them against the draft skill, and helps you assess results with quantitative metrics. This reduces the number of iterations needed to produce a working skill. The evaluation framework catches edge cases early, preventing skills from failing in production.
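The evaluation-first loop described above can be sketched in a few lines. Every name here (`run_agent`, `Criterion`, `evaluate`) is a hypothetical stand-in for whatever your agent harness provides, not skill-creator's actual API:

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Criterion:
    """One pass/fail judgment applied to the agent's output."""
    name: str
    check: Callable[[str], bool]

def evaluate(skill_text: str,
             prompts: List[str],
             criteria: List[Criterion],
             run_agent: Callable[[str, str], str]) -> Dict[str, Dict[str, bool]]:
    """Run every test prompt against the draft skill and record pass/fail
    per criterion -- the quantitative half of skill-creator's workflow."""
    return {
        p: {c.name: c.check(run_agent(skill_text, p)) for c in criteria}
        for p in prompts
    }

def pass_rate(results: Dict[str, Dict[str, bool]]) -> float:
    """Fraction of (prompt, criterion) checks that passed; assumes at least one check."""
    checks = [ok for per_prompt in results.values() for ok in per_prompt.values()]
    return sum(checks) / len(checks)
```

When `pass_rate` stalls below your target, you rewrite the underperforming sections of the skill and re-run the same prompts, which is exactly the refine step the workflow front-loads.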
How to use
- Install the skill via Claude Code:
claude skill install anthropics/skills/skill-creator
- Or manually place the SKILL.md file:
mkdir -p .claude/skills/skill-creator
# Copy SKILL.md content to .claude/skills/skill-creator/SKILL.md
- Start creating a skill by describing what you want it to do. The skill-creator guides you through drafting, testing, and refining.
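A SKILL.md is ordinary markdown with a short YAML frontmatter. The sketch below shows the general shape; the `name` and `description` frontmatter fields follow Anthropic's documented skill format, while the skill name, body rules, and output convention are illustrative, not taken from skill-creator:

```markdown
---
name: sql-migration-reviewer
description: Review SQL migrations for dangerous operations. Use when the user asks to review a migration file.
---

# SQL Migration Review

When reviewing a migration:
1. Flag any UPDATE or DELETE statement without a WHERE clause.
2. Check new index names against the project naming convention.
3. Report findings under a "## Risk" heading, ordered by severity.
```

The `description` matters most: it is what the agent matches against the task context to decide whether to activate the skill.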
Example
The skill creation workflow in practice:
# Step 1: Define the skill
User: 'Create a skill for reviewing SQL migrations'
# Step 2: Claude drafts the skill SKILL.md
# Includes: trigger conditions, review checklist, output format
# Step 3: Generate test prompts
- 'Review this migration that adds a new index'
- 'Review this migration that drops a column'
- 'Review this migration with no WHERE clause on UPDATE'
# Step 4: Run tests and evaluate
# Claude runs each test prompt against the draft skill
# Generates quantitative metrics:
# - Did it catch the dangerous UPDATE? (pass/fail)
# - Did it flag the index naming convention? (pass/fail)
# - Was the output format consistent? (pass/fail)
# Step 5: Refine based on results
# Rewrite sections that underperformed
# Add edge cases to the skill's instructions
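The pass/fail criteria from step 4 can be written as small predicate functions over the review text. This is a minimal sketch assuming the skill's output format mandates a "## Risk" heading; the regex and the phrases being matched are illustrative, not part of skill-creator:

```python
import re

def caught_dangerous_update(review: str) -> bool:
    """Did the review flag an UPDATE/DELETE missing its WHERE clause?"""
    return bool(re.search(r"(missing|no)\s+WHERE", review, re.IGNORECASE))

def flagged_index_naming(review: str) -> bool:
    """Did the review mention the index naming convention?"""
    return "naming convention" in review.lower()

def consistent_format(review: str) -> bool:
    """Did the output follow the assumed '## Risk' heading convention?"""
    return review.startswith("## Risk")

def score(review: str) -> dict:
    """Pass/fail metrics for one test run, matching the three checks above."""
    return {
        "dangerous_update": caught_dangerous_update(review),
        "index_naming": flagged_index_naming(review),
        "format": consistent_format(review),
    }
```

Mechanical checks like these make step 5 concrete: any key that comes back False points at the section of the skill to rewrite.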
Related on TokRepo
- Prompt library — Browse existing skills and prompts for AI agents.
- Featured workflows — Discover curated tools and frameworks on TokRepo.
Common pitfalls
- Writing skills that are too broad (e.g., 'review all code') leads to unfocused results. Scope each skill to a specific domain or task type.
- Skipping the evaluation step means you deploy skills without knowing their failure modes. Always run at least 3 test prompts before using a skill in production.
- Not versioning skills alongside your codebase makes it hard to track changes. Store SKILL.md files in your repository and treat them like code.
Frequently Asked Questions

What is a skill?
A skill is a markdown file (SKILL.md) stored in .claude/skills/ that gives Claude Code domain expertise for specific tasks. Skills activate automatically when the task context matches their trigger conditions.

How does skill-creator test a skill?
Skill-creator generates test prompts, runs Claude with the draft skill on each prompt, and produces both qualitative results (you review the output) and quantitative metrics (pass/fail on specific criteria). You iterate on the skill based on the results.

Can I share skills with my team?
Yes. Skills are markdown files that can be committed to your repository. Team members who have Claude Code configured will automatically load skills from the .claude/skills/ directory.

Does it work with agents other than Claude Code?
The SKILL.md format is standard markdown. While skill-creator is optimized for Claude Code, the generated skills can be loaded by GitHub Copilot, Gemini CLI, and other agents that support the skills protocol.

How do I know when a skill is ready?
Skill-creator provides quantitative metrics across your test prompts. When the skill passes all test cases consistently and the qualitative review shows no major issues, it is ready for use. You can always refine based on production feedback.
Citations (3)
- Anthropic Skills GitHub — Official Claude Code skill for skill creation
- Anthropic Documentation — Claude Code skill installation and usage
- Anthropic Eval Guide — AI agent evaluation best practices
Source & Thanks
Created by Anthropic. Licensed under MIT. anthropics/skills
Related Assets
Claude-Flow — Multi-Agent Orchestration for Claude Code
Layers swarm and hive-mind multi-agent orchestration on top of Claude Code with 64 specialized agents, SQLite memory, and parallel execution.
ccusage — Real-Time Token Cost Tracker for Claude Code
CLI that reads ~/.claude logs and breaks down Claude Code token spend by day, session, and project — pluggable into your statusline.
SuperClaude — Workflow Framework for Claude Code
Adds 16+ slash commands, 9 cognitive personas, and a smart flag system to Claude Code in one pipx install.