Headless CMS for AI
Strapi, Medusa, Vendure, Directus, Hasura — content and commerce backends with REST + GraphQL out of the box for agent integrations.
What's in this pack
This pack collects the five headless backends worth installing when you need an AI agent to read or mutate real business data — content, products, orders, users — without writing custom CRUD endpoints.
| # | Tool | Domain | Why agents like it |
|---|---|---|---|
| 1 | Strapi | content | REST + GraphQL out of the box, plugin ecosystem |
| 2 | Medusa | commerce | Headless commerce engine with full admin API |
| 3 | Vendure | commerce | TypeScript-first commerce with strong typing |
| 4 | Directus | data → API | Generates REST + GraphQL from any SQL database |
| 5 | Hasura | data → API | Compiles GraphQL straight to Postgres queries |
The split is intentional: Strapi for editorial content, Medusa / Vendure for product and order data, Directus / Hasura for arbitrary SQL surfaces. Most production systems pick one tool per category depending on which use case dominates.
Why headless matters for agents
Traditional CMSs ship HTML; agents do not consume HTML well. Headless CMSs ship typed JSON and GraphQL schemas. The difference compounds:
- Schema introspection. GraphQL endpoints are self-documenting. An agent can call `__schema` and discover what fields exist, what they return, and what arguments they accept. No manual API docs to keep in sync.
- Mutation safety. REST + GraphQL endpoints declare write operations explicitly with input types. Agents can validate before sending, and the server rejects malformed payloads at the transport layer rather than after a partial write.
- Permissions. Each tool here ships role-based access control. You can give the agent a scoped token that can only read certain content types or only mutate specific resources, with audit logs.
- Multi-language and i18n. Strapi and Directus have first-class locale support, which matters when the agent is generating or translating content.
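The introspection step above can be sketched in a few lines. The `__schema` query is standard GraphQL, so any of these backends will answer it; the `discoverTypes` helper and its input shape are illustrative, not part of any tool's SDK:

```typescript
// Standard GraphQL introspection query — works against Strapi, Hasura,
// Vendure, or any other spec-compliant /graphql endpoint.
const INTROSPECTION_QUERY = `
  query {
    __schema {
      types { name kind fields { name } }
    }
  }
`;

type FieldInfo = { name: string };
type TypeInfo = { name: string; kind: string; fields: FieldInfo[] | null };

// Given a raw introspection response, list the object types an agent can
// query and the fields each one exposes.
function discoverTypes(result: {
  data: { __schema: { types: TypeInfo[] } };
}): Map<string, string[]> {
  const out = new Map<string, string[]>();
  for (const t of result.data.__schema.types) {
    // Skip GraphQL's built-in meta types (their names start with "__").
    if (t.kind === "OBJECT" && !t.name.startsWith("__")) {
      out.set(t.name, (t.fields ?? []).map((f) => f.name));
    }
  }
  return out;
}
```

In a session, the agent would POST `INTROSPECTION_QUERY` to the server's `/graphql` endpoint and feed the JSON response to `discoverTypes` to build its local picture of the schema.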
These properties are why a headless CMS, even when overkill for the human content team, is often the right substrate for an agent-driven product.
Install in one command
```shell
# Install the entire pack
tokrepo install pack/headless-cms-for-ai

# Or pick a single backend
tokrepo install strapi
tokrepo install hasura
```
Each tool installs as a Docker Compose snippet plus a Claude Code skill that knows the tool's API surface. The skill exposes typed methods like `strapi.list_articles({ filters })` so the agent doesn't have to handcraft REST URLs.
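What such a wrapper does under the hood can be sketched as a URL builder. The bracketed `filters[field][$op]=value` query syntax is Strapi v4's real REST convention; the `buildStrapiListUrl` helper itself is a hypothetical illustration, not part of the shipped skill:

```typescript
// Hypothetical sketch of the URL a typed wrapper would build for a
// Strapi v4 list call like strapi.list_articles({ filters }).
type Filters = Record<string, Record<string, string | number>>;

function buildStrapiListUrl(
  base: string,
  collection: string,
  filters: Filters,
): string {
  const params = new URLSearchParams();
  for (const [field, ops] of Object.entries(filters)) {
    for (const [op, value] of Object.entries(ops)) {
      // Strapi v4 filter convention: filters[title][$contains]=agent
      params.set(`filters[${field}][${op}]`, String(value));
    }
  }
  return `${base}/api/${collection}?${params.toString()}`;
}
```

The point of the skill layer is that the agent composes the typed object on the left and never sees the percent-encoded bracket soup on the right.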
Common pitfalls
- Schema drift between agent and server. When you add a content type in the admin UI, the agent's local schema cache goes stale. Re-introspect on every session start, or wire a webhook so the agent invalidates on content-type changes.
- N+1 queries from agents. Agents tend to fetch one record at a time. Use the bulk endpoints (Strapi's
populate=*, Hasura's nested selections) and budget queries — set a per-agent rate limit. - Public read tokens. It's tempting to give the agent a wide-open read token. Don't. Scope to specific collections; revoke on role change. Each of these tools has fine-grained permission UI.
- Mutation idempotency. Agents retry. If an agent retries a
createOrdermutation after a network blip, you'll get duplicates. Use idempotency keys (Medusa supports them natively; for others, layer a request_id field). - Backups. Headless CMSs centralize content; if the database is corrupted you lose a year of editorial work. Set up daily snapshots before you let the agent write to production.
Common misconceptions
- "Hasura and Directus are interchangeable." They overlap on "expose SQL as GraphQL" but differ in audience: Hasura is for engineers who want a thin GraphQL veneer over Postgres; Directus is for teams that also want a content admin UI. If you only need the API, use Hasura. If non-engineers will edit data, use Directus.
- "Strapi can replace Medusa for commerce." Strapi handles content beautifully but lacks order workflow, payment, and inventory primitives. Use Medusa or Vendure for those, then point Strapi at content adjacent to products (blog, marketing pages).
- "GraphQL is always better than REST for agents." Sometimes — schema introspection is great. But for simple list/get operations, REST endpoints are easier to cache and easier for the agent to compose with HTTP-level tools. All five tools speak both; pick per call.
Relationship to other packs
- Postgres for Agents is the data layer beneath Hasura and Directus.
- MCP Server Stack includes MCP wrappers around these CMSs so the agent can use them via MCP rather than raw HTTP.
- AI App Builders consume these GraphQL endpoints from the frontend side.
5 assets in this pack
Frequently asked questions
Is the pack free?
Yes — Strapi, Medusa, Vendure, Directus, and Hasura are all open-source under permissive licenses, and the TokRepo install is free. Some vendors offer paid cloud hosting (Strapi Cloud, Hasura Cloud) but the self-hosted versions in this pack are fully featured. You only pay for the database and compute you run them on.
How does this compare to Contentful or Sanity?
Contentful and Sanity are SaaS headless CMSs — easier to start with, but they keep your data on their servers and bill per API call, which gets expensive when an agent is the primary consumer. The five tools in this pack are self-hostable, which means predictable cost when an agent makes 10× more calls than human users would.
Works with Claude Code or Cursor?
Both. Each backend ships a Claude Code skill (typed wrapper over its REST + GraphQL endpoints) plus a generic OpenAPI/SDL schema that Cursor and Codex CLI can consume. The Claude Code path is more polished — schema introspection on session start, typed errors — but the underlying APIs are vendor-standard, so any AI editor can call them.
Diff vs writing CRUD endpoints by hand?
Hand-written CRUD is faster for one or two endpoints and slower for ten. These tools auto-generate the API from your data model, so adding a new content type or commerce attribute takes minutes instead of a PR. They also enforce consistency — every endpoint follows the same auth, pagination, and error patterns, which matters when an agent has to learn the surface.
Operational gotcha?
Permission scopes get crufty fast. Each agent action initially demands a new permission, the team grants it, and three months later the agent has god-mode access to everything. Schedule a quarterly audit: list every active token, check which scopes it actually used in the last 30 days, prune the rest. All five tools log token usage; the audit is a SQL query, not a guess.
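The pruning half of that audit can be sketched in a few lines, assuming you have already exported each token's granted scopes and a usage log from the CMS's access logs (the `UsageEvent` shape here is an assumption, not any tool's export format):

```typescript
// Sketch of the quarterly scope audit: given a token's granted scopes and
// its usage log, return every scope with no use in the last N days.
type UsageEvent = { scope: string; at: Date };

function scopesToPrune(
  granted: string[],
  usage: UsageEvent[],
  now: Date,
  windowDays = 30,
): string[] {
  const cutoff = now.getTime() - windowDays * 24 * 60 * 60 * 1000;
  const recentlyUsed = new Set(
    usage.filter((e) => e.at.getTime() >= cutoff).map((e) => e.scope),
  );
  return granted.filter((scope) => !recentlyUsed.has(scope));
}
```

Run it per token, revoke what comes back, and the agent's access shrinks back to what it actually exercises.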