Scripts · Mar 31, 2026 · 2 min read

Guardrails — Validate & Secure LLM Outputs

Guardrails is a Python framework for validating LLM inputs/outputs to detect risks and generate structured data. 6.6K+ GitHub stars. Pre-built validators, Pydantic models. Apache 2.0.

TL;DR
Python framework for validating LLM inputs/outputs with pre-built validators, Pydantic models, and risk detection.
§01

What it is

Guardrails is a Python framework that wraps LLM calls with validation logic. It intercepts inputs and outputs, runs them through configurable validators, and ensures the LLM produces structured, safe, and correct responses. The library ships with pre-built validators for common risks like PII detection, toxic content, JSON schema compliance, and hallucination checks. You define your output schema using Pydantic models, and Guardrails enforces it.

Guardrails is aimed at developers building production LLM applications who need reliable, structured outputs. It is particularly useful where incorrect or unsafe LLM responses carry real consequences.

§02

How it saves time or tokens

Without validation, developers manually inspect and retry LLM outputs when they fail to meet requirements. Guardrails automates this loop: if an output fails validation, it re-prompts the LLM with correction instructions. This automatic retry mechanism reduces manual debugging time and avoids wasted tokens on outputs that would be discarded anyway.
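The validate-and-retry loop can be sketched in plain Python. This is a toy illustration of the mechanism, not Guardrails internals: `call_llm` is a mock that only produces valid output on a retry, and the JSON check stands in for a real validator.

```python
import json

def call_llm(prompt: str, attempt: int) -> str:
    # Stand-in for a real model call: only returns valid JSON on a retry.
    return '{"answer": "ok"}' if attempt > 0 else "not json"

def validate(output: str):
    # Toy validator: require parseable JSON.
    try:
        return json.loads(output), None
    except json.JSONDecodeError as err:
        return None, str(err)

def guarded_call(prompt: str, max_retries: int = 2):
    for attempt in range(max_retries + 1):
        parsed, error = validate(call_llm(prompt, attempt))
        if error is None:
            return parsed
        # Re-prompt with a correction instruction, as Guardrails does on failure.
        prompt += f"\nPrevious answer failed validation ({error}); return valid JSON."
    raise ValueError("output failed validation after all retries")

print(guarded_call("Summarize this document as JSON."))  # → {'answer': 'ok'}
```

In Guardrails itself the retry cap is a configurable setting on the guard call rather than a hand-written loop, and the corrective prompt is generated from the validator's error message.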

§03

How to use

  1. Install Guardrails via pip and define your output schema with Pydantic
  2. Wrap your LLM call with a Guardrails guard that applies your chosen validators
  3. Call the guard instead of the LLM directly; it returns validated, structured output
§04

Example

from guardrails import Guard
from guardrails.hub import DetectPII, ToxicLanguage
from pydantic import BaseModel

class UserResponse(BaseModel):
    answer: str
    confidence: float

# Build a guard that enforces the Pydantic schema and runs both validators.
guard = Guard.for_pydantic(UserResponse).use_many(
    DetectPII(pii_entities=["EMAIL_ADDRESS", "PHONE_NUMBER"]),
    ToxicLanguage(threshold=0.8),
)

# The guard forwards the call to the model and validates the response.
result = guard(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Summarize this document."}],
)

print(result.validated_output)
§05


Common pitfalls

  • Stacking too many validators increases latency and token cost per call; validate only what matters for your use case
  • Some validators require external models or APIs (e.g., PII detection); check dependencies before deploying
  • Automatic retries can loop indefinitely if the LLM consistently fails validation; always set a max retry count

Frequently Asked Questions

What validators come pre-built with Guardrails?

Guardrails Hub offers validators for PII detection, toxic language filtering, JSON schema compliance, regex matching, competitor mention detection, and more. You can also write custom validators as Python functions.
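The custom-validator pattern can be sketched without the library: a validator is a function that inspects a value and returns a pass or fail result. The result classes below are simplified stand-ins, not Guardrails' actual types, and `no_competitor_mentions` is a hypothetical example validator.

```python
from dataclasses import dataclass

# Simplified stand-ins for the pass/fail result types a validator returns;
# not the library's actual classes.
@dataclass
class PassResult:
    pass

@dataclass
class FailResult:
    error_message: str

def no_competitor_mentions(value: str, competitors=("AcmeCorp",)):
    """Custom-validator sketch: fail if the output names a competitor."""
    for name in competitors:
        if name.lower() in value.lower():
            return FailResult(error_message=f"output mentions competitor {name}")
    return PassResult()

print(type(no_competitor_mentions("We beat AcmeCorp")).__name__)  # → FailResult
```

A real Guardrails validator follows the same shape: it receives the value, applies a check, and its failure message feeds the corrective re-prompt.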

Does Guardrails work with any LLM provider?

Yes. Guardrails wraps LLM calls and supports OpenAI, Anthropic, Cohere, and any provider accessible through LiteLLM. You pass the model name and Guardrails handles the API call with validation.

How does the retry mechanism work?

When an LLM output fails validation, Guardrails sends a corrective prompt explaining what went wrong and asks for a new response. You configure the maximum number of retries. Each retry consumes additional tokens.

Can I use Guardrails for input validation too?

Yes. Guards can validate both inputs and outputs. Input validation is useful for filtering user prompts that contain PII, injection attempts, or other risks before they reach the LLM.
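Input screening can be illustrated in plain Python: scan the prompt for risky patterns before it reaches the model. The regexes below are deliberately crude stand-ins for a real PII validator, and `screen_prompt` is a hypothetical helper, not a Guardrails function.

```python
import re

# Illustrative-only patterns; a real PII validator uses an NER model,
# not regexes like these.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
}

def screen_prompt(prompt: str) -> list[str]:
    """Return the PII entity types found in a user prompt."""
    return [name for name, pat in PII_PATTERNS.items() if pat.search(prompt)]

print(screen_prompt("Email me at jane@example.com"))  # → ['EMAIL']
```

In Guardrails, the same idea is expressed by attaching a validator to the input side of a guard so the prompt is checked before the API call is made.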

Is Guardrails suitable for production use?

Yes. Guardrails is designed for production with features like async support, streaming validation, telemetry, and caching. The Apache 2.0 license allows commercial use without restrictions.


Source & Thanks

Created by Guardrails AI. Licensed under Apache 2.0. guardrails-ai/guardrails — 6,600+ GitHub stars

