# llm-guard — Secure LLM Inputs & Outputs > Harden LLM apps with a scanner pipeline for prompt injection, PII leakage, toxicity, and unsafe output. Install in minutes and gate requests in code. ## Install Save as a script file and run: # llm-guard — Secure LLM Inputs & Outputs > Harden LLM apps with a scanner pipeline for prompt injection, PII leakage, toxicity, and unsafe output. Install in minutes and gate requests in code. ## Quick Use 1. Install: ```bash pip install llm-guard ``` 2. Run: ```bash python -c "import llm_guard; print('llm-guard ok')" ``` 3. Verify: - Run one scan on a known bad prompt and confirm the pipeline blocks or redacts as expected. --- ## Intro Harden LLM apps with a scanner pipeline for prompt injection, PII leakage, toxicity, and unsafe output. Install in minutes and gate requests in code. - **Best for:** Teams shipping LLM features who need a practical, code-first safety layer before production - **Works with:** Python, any LLM provider, sync/async app servers (FastAPI, Celery, etc.) - **Setup time:** 10 minutes ### Quantitative Notes - Setup time ~10 minutes (pip install + one scanner chain) - GitHub stars + forks (verified): see Source & Thanks - Typical pipeline: 3–6 scanners (prompt injection + secrets/PII + output safety) --- ## Practical Notes A reliable rollout pattern is: start with one high-signal guard (prompt injection / secrets) in monitor mode, log detections, then switch to block/redact. Keep scanner configs versioned, and add allowlists for known-safe internal tools to reduce false positives. **Safety note:** Do not rely on a single prompt to prevent injection—enforce guardrails in code with logs, tests, and allowlists. ### FAQ **Q: What problem does it solve?** A: It adds an explicit scanning/guard layer to LLM inputs and outputs to reduce prompt injection, leakage, and harmful content. **Q: Is it a model or a rule engine?** A: It’s a toolkit. You compose scanners/filters (rules + detectors) around whichever LLM you already use. **Q: Where should I enforce it?** A: Enforce on both edges: before the model call (prompt) and before returning to users (output). --- ## Source & Thanks > GitHub: https://github.com/protectai/llm-guard > Owner avatar: https://avatars.githubusercontent.com/u/102992336?v=4 > License (SPDX): MIT > GitHub stars (verified via `api.github.com/repos/protectai/llm-guard`): 2,941 > GitHub forks (verified via `api.github.com/repos/protectai/llm-guard`): 391 --- # llm-guard——加固 LLM 输入与输出安全 > 用可组合的扫描/过滤流水线加固 LLM 应用:覆盖 prompt injection、PII 泄露、毒性内容与不安全输出;几分钟接入,在代码里做请求拦截、脱敏、审计日志与阈值策略,并支持分场景配置。 ## 快速使用 1. 安装: ```bash pip install llm-guard ``` 2. 运行: ```bash python -c "import llm_guard; print('llm-guard ok')" ``` 3. 验证: - Run one scan on a known bad prompt and confirm the pipeline blocks or redacts as expected. --- ## 简介 用可组合的扫描/过滤流水线加固 LLM 应用:覆盖 prompt injection、PII 泄露、毒性内容与不安全输出;几分钟接入,在代码里做请求拦截、脱敏、审计日志与阈值策略,并支持分场景配置。 - **适合谁(Best for):** 要把 LLM 功能上线、希望在生产前落地安全防护与审计的团队 - **兼容工具(Works with):** Python、任意 LLM 供应商、同步/异步服务(FastAPI、Celery 等) - **安装时间(Setup time):** 10 分钟 ### 量化信息 - 接入时间约 10 分钟(pip 安装 + 配一条 scanner chain) - GitHub stars + forks(已核验):见「来源与感谢」 - 常见流水线:3–6 个扫描器(注入 + 机密/PII + 输出安全) --- ## 实战要点 推荐落地路径:先用 1 个高信号防护(注入/机密)以监控模式上线,记录命中与误报,再逐步切换到阻断/脱敏。把 scanner 配置做版本管理,并对内部可信工具加入白名单,降低误报。 **安全提示:** 不要只靠一条 prompt 抗注入;需要在代码里落地防护、日志、测试与白名单机制。 ### FAQ **Q: 它解决什么问题?** A: 它为 LLM 的输入/输出提供显式的扫描与防护层,降低注入、泄露与有害内容风险。 **Q: 它是模型还是规则引擎?** A: 它是工具箱:把规则/检测器组合成流水线,包裹你现有的 LLM 调用。 **Q: 应该在哪一层做防护?** A: 建议输入与输出两端都做:模型调用前扫 prompt,返回用户前扫 output。 --- ## 来源与感谢 > GitHub:https://github.com/protectai/llm-guard > Owner avatar:https://avatars.githubusercontent.com/u/102992336?v=4 > 许可证(SPDX):MIT > GitHub stars(已通过 `api.github.com/repos/protectai/llm-guard` 核验):2,941 > GitHub forks(已通过 `api.github.com/repos/protectai/llm-guard` 核验):391 --- Source: https://tokrepo.com/en/workflows/llm-guard-secure-llm-inputs-outputs Author: Script Depot