CLI Tools · May 11, 2026 · 3 min read

files-to-prompt — Concat Files Into LLM-Ready Prompts

Simon Willison's CLI that walks a directory and concats files into one LLM-ready prompt with path markers. Pipes straight into Claude or LLM CLI.

Agent-ready

This asset can be read and installed directly by agents.

TokRepo exposes the CLI command, JSON metadata, install plan, and raw content so agents can evaluate compatibility, risk, and next steps.

Stage only · 5/100
Target
Claude Code, Gemini CLI
Type
CLI Tool
Install
Stage only
Trust
New
Entry
Asset
Install CLI command
npx tokrepo install 65aa017c-0160-45d9-adab-e5191c9f00be --target codex
Introduction

files-to-prompt is Simon Willison's CLI that walks a directory tree and concatenates every file into one LLM-ready prompt, with path markers, gitignore awareness, and Claude-style XML wrapping options. It pipes naturally into Simon's LLM CLI or any model that reads stdin. Best for: 'paste my whole repo into Claude' workflows, codebase Q&A, refactor briefings, custom RAG ingestion. Works with: any shell, Python 3.10+. Setup time: 1 minute.


Install + basic use

pip install files-to-prompt

# Concat a whole repo
files-to-prompt . > prompt.txt

# Specific extensions
files-to-prompt . --extension .py --extension .ts > code.txt

# .gitignore is honored by default; pass --ignore-gitignore to include ignored files
files-to-prompt . --ignore-gitignore > everything.txt

Pipe into Claude / LLM CLI

# Via Simon's llm CLI
files-to-prompt src/ | llm -m claude-3-5-sonnet "Where is the auth bug?"

# Via plain curl to Anthropic
files-to-prompt src/ | jq -Rs '{model:"claude-3-5-sonnet-20241022",max_tokens:4096,messages:[{role:"user",content:.}]}' \
  | curl -X POST https://api.anthropic.com/v1/messages \
    -H "anthropic-version: 2023-06-01" \
    -H "x-api-key: $ANTHROPIC_API_KEY" \
    -H "content-type: application/json" \
    -d @-

Claude-friendly XML wrapping

files-to-prompt src/ --cxml > prompt.xml
# Wraps each file in <document index="N"><source>path</source><document_contents>...</document_contents></document>, inside an outer <documents> element
# This format gets cited cleanly by Claude
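As a rough illustration of the structure (not the tool's actual source), the wrapping can be sketched in a few lines of Python:

```python
from pathlib import Path


def to_cxml(paths):
    """Sketch of the Claude-style XML wrapping: each file becomes a
    numbered <document> carrying its path in <source>."""
    parts = ["<documents>"]
    for i, p in enumerate(paths, start=1):
        text = Path(p).read_text()
        parts.append(
            f'<document index="{i}">\n<source>{p}</source>\n'
            f"<document_contents>\n{text}\n</document_contents>\n</document>"
        )
    parts.append("</documents>")
    return "\n".join(parts)
```

The numbered `index` attribute is what lets Claude cite individual files back to you by position.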

Exclude noise

files-to-prompt . \
  --extension .py \
  --ignore "*test*" \
  --ignore "*.pyc" \
  --ignore "venv/" \
  --ignore "node_modules/" \
  > prompt.txt
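Ignore patterns of this kind are ordinary glob matches; a hypothetical sketch using Python's `fnmatch` (illustration only, not the tool's implementation):

```python
from fnmatch import fnmatch


def is_ignored(path: str, patterns) -> bool:
    """Glob-style ignore check: skip a path if its filename or any
    parent directory matches one of the patterns."""
    parts = path.split("/")
    return any(
        fnmatch(part, pat.rstrip("/")) for part in parts for pat in patterns
    )
```

Matching each path component separately is what makes a bare `venv/` pattern exclude everything underneath it.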

Output format

path: src/main.py
---
def main():
    ...

path: src/utils.py
---
def helper():
    ...

LLMs handle this format reliably — every chunk has its origin file path inline.
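The default layout is simple enough to approximate in a few lines; this sketch mirrors the format shown above (the real tool additionally handles ignore patterns, hidden files, and binary detection):

```python
from pathlib import Path


def concat_for_prompt(root, suffixes=(".py",)):
    """Rough sketch of the default output: each file's path, a ---
    separator, then its contents, one chunk per file."""
    chunks = []
    for p in sorted(Path(root).rglob("*")):
        if p.is_file() and p.suffix in suffixes:
            chunks.append(f"path: {p}\n---\n{p.read_text()}")
    return "\n\n".join(chunks)
```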


FAQ

Q: How big a repo can I dump? A: Bounded by your model's context window. With Claude's 200K context, ~500K characters works. For 1M+ context models (Grok-3, Gemini), entire mid-size repos fit. For larger: chunk + RAG, or use Claude's prompt caching.
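A quick way to sanity-check size before sending is the common ~4 characters per token rule of thumb (a heuristic, not an exact tokenizer):

```python
def rough_token_estimate(text: str) -> int:
    """Estimate token count with the ~4 chars/token heuristic.
    Reach for a real tokenizer when you're close to the budget."""
    return max(1, len(text) // 4)
```

For example, a 500K-character dump estimates at roughly 125K tokens, comfortably inside Claude's 200K window.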

Q: Versus repomix / aider? A: Simpler. repomix has token counting and AI-aware splitting; aider runs as an interactive coding agent. files-to-prompt does one thing — concat. Pipe its output anywhere. Pick repomix when you need token math, aider when you want an actual coding agent.

Q: Binary files? A: Auto-skipped. files-to-prompt detects content it can't read as text and leaves it out of the output, warning as it goes, so images and compiled artifacts never pollute the prompt.
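A common binary-detection heuristic (an illustration, not necessarily the tool's exact check) looks for NUL bytes or a failed UTF-8 decode in the file's first chunk:

```python
def looks_binary(data: bytes, sample_size: int = 8192) -> bool:
    """Heuristic: treat data as binary if its first chunk contains a
    NUL byte or doesn't decode as UTF-8. Sampling only the head keeps
    the check cheap; a truncated multi-byte character can give a rare
    false positive."""
    sample = data[:sample_size]
    if b"\x00" in sample:
        return True
    try:
        sample.decode("utf-8")
    except UnicodeDecodeError:
        return True
    return False
```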


Quick Use

  1. pip install files-to-prompt
  2. files-to-prompt src/ --cxml > prompt.xml
  3. Pipe into Claude: files-to-prompt src/ | llm -m claude-3-5-sonnet 'your question'


Source & Thanks

Built by Simon Willison. Licensed under Apache-2.0.

simonw/files-to-prompt — ⭐ 750+

🙏
