OpenSpec — Spec-Driven AI Development
OpenSpec provides structured specifications that AI coding agents follow to produce consistent code. 36K+ stars. Works with Cursor, Claude Code, Copilot. MIT.
What it is
OpenSpec is a specification framework that gives AI coding agents structured instructions to follow when generating code. Instead of relying on ad-hoc prompts, you write formal specs that define architecture, coding standards, file structure, and behavior. AI agents like Cursor, Claude Code, and Copilot then follow these specs to produce consistent output.
This tool targets development teams who use AI coding assistants daily and want reproducible results across team members and sessions. It ships under the MIT license.
How it saves time or tokens
Without specs, developers repeat the same instructions in every prompt: use this framework, follow this pattern, name files this way. OpenSpec centralizes those instructions into reusable spec files. A typical spec adds an estimated 420 tokens of context per session, a modest cost compared with retyping conventions in every prompt. The real savings come from reduced iteration: agents get it right on the first try more often when following a clear spec.
How to use
- Install OpenSpec or add spec files to your project root.
- Define your project specifications in files under the .openspec/ directory.
- Point your AI coding agent to the spec directory.
- The agent reads specs before generating code.
# Initialize OpenSpec in your project
npx openspec init
# This creates .openspec/ directory with template files
# Edit the specs to match your project conventions
Example
A typical spec file:
# .openspec/architecture.yaml
name: my-api
framework: express
language: typescript
patterns:
  - name: route-handler
    template: |
      import { Router } from 'express'
      import { validate } from '../middleware/validate'

      const router = Router()

      router.post('/', validate(schema), async (req, res) => {
        const result = await service.create(req.body)
        res.status(201).json(result)
      })

      export default router
rules:
  - Always use async/await, never callbacks
  - Validate input with middleware, not in handler
  - Return proper HTTP status codes
Related on TokRepo
- AI coding tools — More tools for AI-assisted development
- Prompt library — Reusable prompts and templates
Common pitfalls
- Specs that are too detailed become maintenance burdens. Keep them at the right abstraction level: conventions and patterns, not line-by-line templates.
- Different AI agents interpret specs with varying fidelity. Test your specs across agents before standardizing.
- Spec files add context tokens to every request. Keep total spec size reasonable to avoid hitting context limits.
- OpenSpec works best for greenfield projects. Retrofitting specs onto legacy codebases requires careful alignment with existing patterns.
- Version your spec files alongside code. Specs that drift from actual code cause agent confusion.
Frequently Asked Questions
Which AI coding agents does OpenSpec work with?
OpenSpec works with Cursor, Claude Code, GitHub Copilot, and any AI coding agent that reads project files for context. The spec files are plain YAML or Markdown, so any agent that processes project-level configuration can use them.
Does OpenSpec affect my application at runtime?
No. OpenSpec files are read-time configuration for AI agents. They do not affect your application at runtime. There is no build step or runtime library to install.
How is OpenSpec different from CLAUDE.md or Cursor rules?
OpenSpec provides a standardized, cross-agent format for specifications. While CLAUDE.md targets Claude specifically and Cursor rules target Cursor, OpenSpec aims to work across all AI coding agents with a single spec format.
Can I use separate specs for different parts of my stack?
Yes. You can define separate spec files for different parts of your stack. A typical setup has architecture.yaml for the backend, components.yaml for the frontend, and shared.yaml for cross-cutting concerns like naming conventions.
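One way an agent-side tool could assemble such a layout is by concatenating the spec files into a single labeled context block. A sketch under that assumption; buildSpecContext and the labeling format are hypothetical, not an OpenSpec API (a real tool would read the contents from .openspec/ on disk):

```typescript
// Hypothetical helper: merge several spec files into one labeled context
// block so an agent can tell which conventions come from which file.
// Specs are passed as { name, content } pairs for simplicity.
function buildSpecContext(specs: { name: string; content: string }[]): string {
  return specs
    .map((s) => `## ${s.name}\n${s.content.trim()}`)
    .join('\n\n');
}

// Usage with the layout described above (contents abbreviated):
const context = buildSpecContext([
  { name: 'architecture.yaml', content: 'framework: express' },
  { name: 'components.yaml', content: 'framework: react' },
  { name: 'shared.yaml', content: 'naming: kebab-case' },
]);
```

Labeling each file keeps cross-cutting rules in shared.yaml visibly distinct from backend- or frontend-specific conventions.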
How large should spec files be?
Keep total spec size under 2,000 tokens for best results. Larger specs consume context window space and may cause agents to miss important details. Focus on high-impact conventions rather than exhaustive rules.
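The 2,000-token guideline can be sanity-checked with a rough heuristic of about four characters per token. This is an approximation, not a real tokenizer, and the helper names here are illustrative:

```typescript
// Rough token estimate: ~4 characters per token is a common heuristic
// for English text and code. Use a real tokenizer for exact counts.
function estimateTokens(text: string): number {
  return Math.ceil(text.length / 4);
}

// Check a set of spec file contents against a token budget.
function withinSpecBudget(specs: string[], limit = 2000): boolean {
  const total = specs.reduce((sum, s) => sum + estimateTokens(s), 0);
  return total <= limit;
}
```

Running this over your spec contents before committing them is a cheap way to notice when specs have grown past the budget.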
Citations (3)
- OpenSpec GitHub — OpenSpec provides structured specifications for AI coding agents
- Anthropic Claude Code Docs — Claude Code reads project-level configuration files
- Cursor Documentation — AI coding agents benefit from structured context
Source & Thanks
- GitHub: https://github.com/Fission-AI/OpenSpec (36K+ stars)
- License: MIT
- Docs: https://openspec.dev
Related Assets
Cucumber.js — BDD Testing with Plain Language Scenarios
Cucumber.js is a JavaScript implementation of Cucumber that runs automated tests written in Gherkin plain language.
WireMock — Flexible API Mocking for Java and Beyond
WireMock is an HTTP mock server for stubbing and verifying API calls in integration tests and development.
Google Benchmark — Microbenchmark Library for C++
Google Benchmark is a library for measuring and reporting the performance of C++ code with statistical rigor.