Scripts · Apr 1, 2026 · 2 min read

Guardrails — Validate & Secure LLM Outputs

Guardrails is a Python framework for validating LLM inputs/outputs to detect risks and generate structured data. 6.6K+ GitHub stars. Pre-built validators, Pydantic models. Apache 2.0.

TokRepo Picks · Community
Quick Use

Use it first, then decide how deep to go

The block below shows both the user and the agent what to copy, install, and run first.

# Install
pip install guardrails-ai

# Example: validate LLM output as structured data
# (requires OPENAI_API_KEY set in the environment)
python -c "
import guardrails as gd
from pydantic import BaseModel

class Pet(BaseModel):
    name: str
    species: str
    age: int

guard = gd.Guard.from_pydantic(Pet)
result = guard(
    model='gpt-4o-mini',
    messages=[{'role': 'user', 'content': 'Tell me about a pet.'}]
)
print(result.validated_output)  # e.g. {'name': 'Buddy', 'species': 'Dog', 'age': 3}
"
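Under the hood, this is a validate-and-reask loop: parse the model's output, check it against the schema, and if validation fails, reprompt with the error. A minimal stdlib sketch of that pattern (toy names and a stubbed LLM, not the Guardrails API):

```python
import json
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Pet:
    name: str
    species: str
    age: int

def validate_pet(raw: str) -> Optional[Pet]:
    """Parse model output as JSON and check the expected fields and types."""
    try:
        pet = Pet(**json.loads(raw))
    except (json.JSONDecodeError, TypeError):
        return None
    if not isinstance(pet.age, int) or pet.age < 0:
        return None
    return pet

def guarded_call(llm: Callable[[str], str], prompt: str, max_retries: int = 2) -> Pet:
    """Call the LLM, validate the output, and reask with an error hint on failure."""
    for _ in range(max_retries + 1):
        pet = validate_pet(llm(prompt))
        if pet is not None:
            return pet
        prompt += "\nYour last answer was not valid JSON matching the Pet schema. Try again."
    raise ValueError("validation failed after retries")

# Stub LLM: fails once with free text, then returns valid JSON
calls = {"n": 0}
def fake_llm(prompt: str) -> str:
    calls["n"] += 1
    if calls["n"] == 1:
        return "Buddy is a dog"
    return '{"name": "Buddy", "species": "Dog", "age": 3}'

pet = guarded_call(fake_llm, "Tell me about a pet as JSON.")
print(pet)  # Pet(name='Buddy', species='Dog', age=3)
```

Guardrails wraps this loop (plus validator libraries and schema generation) behind the Guard object, so you don't write the retry logic yourself.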

Intro

Guardrails is a Python framework for building reliable AI applications: it validates LLM inputs and outputs to detect and mitigate risks, and it generates structured data. With 6,600+ GitHub stars and an Apache 2.0 license, it provides pre-built validators through Guardrails Hub covering common risk categories (PII, toxicity, hallucination, SQL injection), input/output guards that intercept LLM interactions, structured data generation from Pydantic models, standalone server deployment via a REST API, and support for both proprietary and open-source LLMs.

Best for: Teams building production AI apps that need output validation, safety guardrails, and structured outputs
Works with: Claude Code, OpenAI Codex, Cursor, Gemini CLI, Windsurf
LLMs: OpenAI, Anthropic, Cohere, HuggingFace, and any other LLM


Key Features

  • Pre-built validators: PII detection, toxicity, hallucination, SQL injection via Hub
  • Input/output guards: Intercept and validate every LLM interaction
  • Structured outputs: Generate typed data using Pydantic models
  • REST API server: Deploy guards as a standalone service
  • Custom validators: Create domain-specific validation rules
  • Any LLM: Works with proprietary and open-source models
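Custom validators on Guardrails Hub follow a register-then-apply pattern: each validator is registered under a name and a guard runs the named validators against every output. The sketch below imitates that idea in plain Python (the registry and decorator are hypothetical stand-ins, not the Guardrails API):

```python
from typing import Callable, Dict, List

# Toy registry standing in for Guardrails Hub's validator registry
VALIDATORS: Dict[str, Callable[[str], bool]] = {}

def register_validator(name: str):
    """Decorator that registers a validation function under a name."""
    def decorator(fn: Callable[[str], bool]) -> Callable[[str], bool]:
        VALIDATORS[name] = fn
        return fn
    return decorator

@register_validator("no-sql-keywords")
def no_sql_keywords(value: str) -> bool:
    """Reject outputs containing destructive SQL keywords."""
    banned = {"drop", "delete", "truncate"}
    return not banned & set(value.lower().split())

@register_validator("max-length")
def max_length(value: str) -> bool:
    """Reject outputs longer than 200 characters."""
    return len(value) <= 200

def run_guards(value: str, names: List[str]) -> bool:
    """Apply the named validators to an LLM output."""
    return all(VALIDATORS[n](value) for n in names)

print(run_guards("A friendly dog named Buddy", ["no-sql-keywords", "max-length"]))  # True
print(run_guards("DROP TABLE users", ["no-sql-keywords"]))  # False
```

In Guardrails itself, registered validators additionally return structured pass/fail results and can trigger automatic reasks rather than a plain boolean.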

FAQ

Q: What is Guardrails?
A: Guardrails is a Python framework with 6.6K+ GitHub stars for validating LLM inputs and outputs. It ships pre-built validators for PII, toxicity, and hallucination, and generates structured outputs via Pydantic. Apache 2.0 licensed.

Q: How do I install Guardrails?
A: pip install guardrails-ai. Then use Guard.from_pydantic(Model) to create validated LLM calls.


🙏

Source & Thanks

Created by Guardrails AI. Licensed under Apache 2.0. Repo: guardrails-ai/guardrails (6,600+ GitHub stars).
