Esta página se muestra en inglés. Una traducción al español está en curso.
ScriptsMay 11, 2026·2 min de lectura

llm-guard — Secure LLM Inputs & Outputs

Harden LLM apps with a scanner pipeline for prompt injection, PII leakage, toxicity, and unsafe output. Install in minutes and gate requests in code.

Listo para agents

Este activo puede ser leído e instalado directamente por agents

TokRepo expone un comando CLI universal, contrato de instalación, metadata JSON, plan según adaptador y contenido raw para que los agents evalúen compatibilidad, riesgo y próximos pasos.

Stage only · 29/100Stage only
Superficie agent
Cualquier agent MCP/CLI
Tipo
Script
Instalación
Single
Confianza
Confianza: Established
Entrada
README.md
Comando CLI universal
npx tokrepo install d1888a22-7087-4310-bcaa-dca6663a2e18
Introducción

Harden LLM apps with a scanner pipeline for prompt injection, PII leakage, toxicity, and unsafe output. Install in minutes and gate requests in code.

  • Best for: Teams shipping LLM features who need a practical, code-first safety layer before production
  • Works with: Python, any LLM provider, sync/async app servers (FastAPI, Celery, etc.)
  • Setup time: 10 minutes

Quantitative Notes

  • Setup time ~10 minutes (pip install + one scanner chain)
  • GitHub stars + forks (verified): see Source & Thanks
  • Typical pipeline: 3–6 scanners (prompt injection + secrets/PII + output safety)

Practical Notes

A reliable rollout pattern is: start with one high-signal guard (prompt injection / secrets) in monitor mode, log detections, then switch to block/redact. Keep scanner configs versioned, and add allowlists for known-safe internal tools to reduce false positives.

Safety note: Do not rely on a single prompt to prevent injection—enforce guardrails in code with logs, tests, and allowlists.

FAQ

Q: What problem does it solve? A: It adds an explicit scanning/guard layer to LLM inputs and outputs to reduce prompt injection, leakage, and harmful content.

Q: Is it a model or a rule engine? A: It’s a toolkit. You compose scanners/filters (rules + detectors) around whichever LLM you already use.

Q: Where should I enforce it? A: Enforce on both edges: before the model call (prompt) and before returning to users (output).


🙏

Fuente y agradecimientos

GitHub: https://github.com/protectai/llm-guard Owner avatar: https://avatars.githubusercontent.com/u/102992336?v=4 License (SPDX): MIT GitHub stars (verified via api.github.com/repos/protectai/llm-guard): 2,941 GitHub forks (verified via api.github.com/repos/protectai/llm-guard): 391

Discusión

Inicia sesión para unirte a la discusión.
Aún no hay comentarios. Sé el primero en compartir tus ideas.

Activos relacionados