Scripts · May 11, 2026 · 2 min read

llm-guard — Secure LLM Inputs & Outputs

Harden LLM apps with a scanner pipeline for prompt injection, PII leakage, toxicity, and unsafe output. Install in minutes and gate requests in code.

Agent-ready

This asset can be read and installed directly by agents.

TokRepo exposes a universal CLI command, an install contract, JSON metadata, an adapter-specific plan, and the raw content, to help agents judge fit, risk, and next actions.

Score: 29/100 (Stage only)
Agent surface: Any MCP/CLI agent
Type: Script
Installation: Single
Trust: Established
Entry point: README.md
Universal CLI command: npx tokrepo install d1888a22-7087-4310-bcaa-dca6663a2e18
Introduction

llm-guard wraps your existing LLM calls in a configurable chain of input and output scanners, so you can detect or block prompt injection, PII leakage, toxicity, and unsafe responses without switching providers; a minimal gate is sketched after the list below.

  • Best for: Teams shipping LLM features who need a practical, code-first safety layer before production
  • Works with: Python, any LLM provider, sync/async app servers (FastAPI, Celery, etc.)
  • Setup time: 10 minutes
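
A minimal sketch of that code-first gate, using llm-guard's documented `scan_prompt` helper and `PromptInjection` input scanner; the threshold value here is an assumption to tune against your own traffic:

```python
# pip install llm-guard
from llm_guard import scan_prompt
from llm_guard.input_scanners import PromptInjection

# Start with one high-signal scanner; the threshold is tunable.
scanners = [PromptInjection(threshold=0.5)]

def gate_prompt(prompt: str) -> str:
    sanitized, results_valid, results_score = scan_prompt(scanners, prompt)
    # results_valid maps scanner name -> bool; block if any scanner flags the input.
    if not all(results_valid.values()):
        raise ValueError(f"Prompt blocked, scores: {results_score}")
    return sanitized
```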

Quantitative Notes

  • Setup time ~10 minutes (pip install + one scanner chain)
  • GitHub stars + forks (verified): see Source & Thanks
  • Typical pipeline: 3–6 scanners (prompt injection + secrets/PII + output safety)
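
For reference, a plausible chain at the small end of that range, built from scanners llm-guard ships (`PromptInjection`, `Secrets`, `Toxicity`); defaults are used here, so treat thresholds and redaction behavior as assumptions to verify against the version you install:

```python
from llm_guard import scan_prompt
from llm_guard.input_scanners import PromptInjection, Secrets, Toxicity

# Injection + secrets + toxicity: a common three-scanner input chain.
input_scanners = [PromptInjection(), Secrets(), Toxicity()]

user_prompt = "Summarize this support ticket: ..."
sanitized_prompt, results_valid, results_score = scan_prompt(input_scanners, user_prompt)
print(results_valid, results_score)
```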

Practical Notes

A reliable rollout pattern: start with one high-signal guard (prompt injection or secrets) in monitor mode, log detections, then switch to block/redact, as sketched below. Keep scanner configs versioned, and add allowlists for known-safe internal tools to reduce false positives.
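
A sketch of that rollout in code; the `MONITOR_ONLY` flag and logger name are illustrative conventions, not part of llm-guard:

```python
import logging

from llm_guard import scan_prompt
from llm_guard.input_scanners import PromptInjection

logger = logging.getLogger("llm-guard-rollout")
MONITOR_ONLY = True  # flip to False once logged detections look trustworthy

scanners = [PromptInjection()]

def check_prompt(prompt: str) -> str:
    sanitized, results_valid, results_score = scan_prompt(scanners, prompt)
    if not all(results_valid.values()):
        # Monitor mode: record the detection but let the request through.
        logger.warning("guard detection, scores: %s", results_score)
        if not MONITOR_ONLY:
            raise ValueError("Prompt rejected by guard pipeline")
    return sanitized
```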

Safety note: do not rely on a single prompt to prevent injection; enforce guardrails in code, with logs, tests, and allowlists.

FAQ

Q: What problem does it solve? A: It adds an explicit scanning/guard layer to LLM inputs and outputs to reduce prompt injection, leakage, and harmful content.

Q: Is it a model or a rule engine? A: It’s a toolkit. You compose scanners/filters (rules + detectors) around whichever LLM you already use.

Q: Where should I enforce it? A: Enforce on both edges: before the model call (prompt) and before returning to users (output).
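
A sketch of that both-edge enforcement with llm-guard's `scan_prompt` and `scan_output`; `call_model` is a placeholder for whatever LLM client you already use:

```python
from llm_guard import scan_output, scan_prompt
from llm_guard.input_scanners import PromptInjection
from llm_guard.output_scanners import Toxicity

input_scanners = [PromptInjection()]
output_scanners = [Toxicity()]

def call_model(prompt: str) -> str:
    # Placeholder for your existing LLM provider call.
    raise NotImplementedError

def guarded_completion(prompt: str) -> str:
    # Edge 1: scan the prompt before the model call.
    sanitized_prompt, in_valid, in_scores = scan_prompt(input_scanners, prompt)
    if not all(in_valid.values()):
        raise ValueError(f"Input blocked, scores: {in_scores}")

    response = call_model(sanitized_prompt)

    # Edge 2: scan the response before returning it to the user.
    sanitized_response, out_valid, out_scores = scan_output(
        output_scanners, sanitized_prompt, response
    )
    if not all(out_valid.values()):
        raise ValueError(f"Output blocked, scores: {out_scores}")
    return sanitized_response
```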



Source & Thanks

GitHub: https://github.com/protectai/llm-guard
License (SPDX): MIT
GitHub stars (verified via api.github.com/repos/protectai/llm-guard): 2,941
GitHub forks (verified via api.github.com/repos/protectai/llm-guard): 391
