# code2prompt — Turn Repos into LLM-Ready Prompts

> code2prompt ingests a codebase and outputs structured, token-aware prompts, helping you move from ad-hoc copy/paste to repeatable context engineering.

## Quick Use

1. Install (Cargo):

   ```bash
   cargo install code2prompt
   ```

2. Generate a prompt from the current repo:

   ```bash
   code2prompt .
   ```

3. Verify:
   - Save the output to a file once (e.g. with `--output-file`) and confirm it includes the files you expected and respects `.gitignore`.

## Intro

- **Best for:** developers building agent workflows who need consistent, sharable repo context
- **Works with:** any Git repo; outputs text you can paste into ChatGPT/Claude/Cursor/Codex
- **Setup time:** 5–15 minutes

## Practical Notes

- Quant: run on 2 repos and compare output size (chars/tokens) before and after adding ignore rules.
- Quant: keep a baseline prompt snapshot and diff it weekly to spot context drift.

## Pattern: context as an artifact

Treat the generated prompt like a build artifact:

- Check it into a scratch folder (or attach it to PRs) when you need reproducibility.
- Add ignore rules aggressively to keep noise down.
- Prefer smaller, task-scoped prompts over one giant dump.

## Agent workflow tip

Use `code2prompt` as a pre-step before:

- running an agent to refactor,
- generating docs,
- or doing a security review of changes.

If you record the prompt file, you can replay the same context across models and compare results.

### FAQ

**Q: Is this only a CLI?**
A: No. The repo describes a wider ecosystem, but the CLI alone is already useful for most workflows.

**Q: How do I reduce prompt size?**
A: Add ignores and scope to a subdirectory; prioritize only the files needed for the task.
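To verify that ignores and scoping actually shrink a prompt (as the Practical Notes also suggest), a quick size check can help. Below is a minimal sketch; the snapshot strings are hypothetical stand-ins for saved prompt files, and the 4-characters-per-token ratio is a coarse heuristic, not a real tokenizer:

```python
CHARS_PER_TOKEN = 4  # coarse heuristic for English-ish text, not a real tokenizer


def prompt_size(text: str) -> tuple[int, int]:
    """Return (characters, approximate tokens) for a prompt snapshot."""
    chars = len(text)
    return chars, chars // CHARS_PER_TOKEN


def reduction_pct(before: str, after: str) -> float:
    """Percentage reduction in characters from `before` to `after`."""
    b = len(before)
    a = len(after)
    return 100.0 * (b - a) / b if b else 0.0


# Stand-ins for two snapshots; in practice, read the files you saved
# from a full-repo run and a scoped run with ignore rules applied.
full = "x" * 8000
scoped = "x" * 2000

print(prompt_size(full))            # (8000, 2000)
print(reduction_pct(full, scoped))  # 75.0
```

The same two helpers work for the weekly drift check: diff the character/token counts of this week's snapshot against the baseline.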
**Q: Can I use it for reviews?**
A: Yes. Generate a snapshot prompt and attach it to a PR so reviewers and agents share the same context.

## Source & Thanks

> Source: https://github.com/mufeedvh/code2prompt
> License: MIT
> GitHub stars: 7,342 · forks: 422

---

Source: https://tokrepo.com/en/workflows/code2prompt-turn-repos-into-llm-ready-prompts
Author: Script Depot