ScriptsMay 11, 2026·2 min read

llm-guard — Secure LLM Inputs & Outputs

Harden LLM apps with a scanner pipeline for prompt injection, PII leakage, toxicity, and unsafe output. Install in minutes and gate requests in code.

Agent ready

This asset can be read and installed directly by agents

TokRepo exposes a universal CLI command, install contract, metadata JSON, adapter-aware plan, and raw content links so agents can judge fit, risk, and next actions.

Stage only · 29/100Stage only
Agent surface
Any MCP/CLI agent
Kind
Script
Install
Single
Trust
Trust: Established
Entrypoint
README.md
Universal CLI install command
npx tokrepo install d1888a22-7087-4310-bcaa-dca6663a2e18
Intro

Harden LLM apps with a scanner pipeline for prompt injection, PII leakage, toxicity, and unsafe output. Install in minutes and gate requests in code.

  • Best for: Teams shipping LLM features who need a practical, code-first safety layer before production
  • Works with: Python, any LLM provider, sync/async app servers (FastAPI, Celery, etc.)
  • Setup time: 10 minutes

Quantitative Notes

  • Setup time ~10 minutes (pip install + one scanner chain)
  • GitHub stars + forks (verified): see Source & Thanks
  • Typical pipeline: 3–6 scanners (prompt injection + secrets/PII + output safety)

Practical Notes

A reliable rollout pattern is: start with one high-signal guard (prompt injection / secrets) in monitor mode, log detections, then switch to block/redact. Keep scanner configs versioned, and add allowlists for known-safe internal tools to reduce false positives.

Safety note: Do not rely on a single prompt to prevent injection—enforce guardrails in code with logs, tests, and allowlists.

FAQ

Q: What problem does it solve? A: It adds an explicit scanning/guard layer to LLM inputs and outputs to reduce prompt injection, leakage, and harmful content.

Q: Is it a model or a rule engine? A: It’s a toolkit. You compose scanners/filters (rules + detectors) around whichever LLM you already use.

Q: Where should I enforce it? A: Enforce on both edges: before the model call (prompt) and before returning to users (output).


🙏

Source & Thanks

GitHub: https://github.com/protectai/llm-guard Owner avatar: https://avatars.githubusercontent.com/u/102992336?v=4 License (SPDX): MIT GitHub stars (verified via api.github.com/repos/protectai/llm-guard): 2,941 GitHub forks (verified via api.github.com/repos/protectai/llm-guard): 391

Discussion

Sign in to join the discussion.
No comments yet. Be the first to share your thoughts.

Related Assets