# Guardrails AI — Validate LLM Outputs in Production

> Add validation and guardrails to any LLM output. Guardrails AI checks for hallucination, toxicity, PII leakage, and format compliance with 50+ built-in validators.

## Install

```bash
pip install guardrails-ai
guardrails hub install hub://guardrails/regex_match
```

## Quick Use

```python
from guardrails import Guard
from guardrails.hub import RegexMatch

guard = Guard().use(RegexMatch(regex=r"^\d{3}-\d{2}-\d{4}$"))

result = guard.validate("123-45-6789")
print(result.validation_passed)  # True

result = guard.validate("not-a-ssn")
print(result.validation_passed)  # False
```

## What is Guardrails AI?

Guardrails AI is a framework for adding validation, safety checks, and structural constraints to LLM outputs. It provides 50+ pre-built validators from the Guardrails Hub, covering hallucination detection, PII filtering, toxicity checking, format validation, and more. Wrap any LLM call with a Guard to automatically validate and fix outputs before they reach users.

**Answer-Ready**: Guardrails AI validates LLM outputs with 50+ validators from the Guardrails Hub. It checks hallucination, PII, toxicity, and format compliance, auto-retries on validation failure, and works with OpenAI, Claude, or any LLM. Production-ready with Guardrails Server. 4k+ GitHub stars.

**Best for**: AI teams deploying LLMs to production that need output safety.

**Works with**: OpenAI, Anthropic Claude, LangChain, any LLM.

**Setup time**: under 5 minutes.

## Core Features

### 1. Guardrails Hub (50+ Validators)

| Category | Validators |
|----------|------------|
| Safety   | ToxicLanguage, NSFWText, ProfanityFree |
| Privacy  | DetectPII, AnonymizePII |
| Accuracy | FactualConsistency, NoHallucination |
| Format   | ValidJSON, ValidURL, RegexMatch |
| Quality  | ReadingLevel, Conciseness, Relevancy |
| Code     | ValidSQL, ValidPython, BugFreePython |
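As a rough illustration of what a Format-category validator does under the hood (a framework-free sketch, not the library's actual implementation), a RegexMatch-style check reduces to a whole-string pattern match:

```python
import re

def regex_match(value: str, pattern: str) -> bool:
    """Minimal RegexMatch-style check: pass only if the entire
    value matches the pattern (mirrors the Quick Use SSN example)."""
    return re.fullmatch(pattern, value) is not None

SSN = r"\d{3}-\d{2}-\d{4}"
print(regex_match("123-45-6789", SSN))  # True
print(regex_match("not-a-ssn", SSN))    # False
```

Hub validators wrap checks like this in a common interface so they can be composed on a Guard and paired with an `on_fail` policy.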
### 2. LLM Integration

```python
from guardrails import Guard
from guardrails.hub import ToxicLanguage, DetectPII

guard = Guard().use_many(
    ToxicLanguage(on_fail="filter"),
    DetectPII(on_fail="fix"),
)

# Wrap any LLM call
result = guard(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Summarize this customer complaint"}],
)
print(result.validated_output)  # PII removed, toxicity filtered
```

### 3. Auto-Retry on Failure

```python
from guardrails import Guard
from guardrails.hub import ValidJSON

guard = Guard().use(ValidJSON(on_fail="reask"))

# If the LLM returns invalid JSON, Guardrails automatically
# retries with a corrective prompt, up to max_reasks times.
result = guard(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Return user data as JSON"}],
    max_reasks=3,
)
```

### 4. Structured Output

```python
from pydantic import BaseModel
from guardrails import Guard

class UserProfile(BaseModel):
    name: str
    age: int
    email: str

guard = Guard.for_pydantic(output_class=UserProfile)

result = guard(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Extract user info from: John, 30, john@example.com"}],
)
print(result.validated_output)  # UserProfile(name="John", age=30, ...)
```

### 5. Guardrails Server

```bash
# Deploy as an API server for production
guardrails start --config guard_config.py

# POST /guards/{guard_name}/validate
```

## FAQ

**Q: Does it work with Claude?**
A: Yes, pass `model="anthropic/claude-sonnet-4-20250514"` to the guard call.

**Q: What happens when validation fails?**
A: Configurable per validator: filter (remove), fix (correct), reask (retry with the LLM), or raise (throw an error).

**Q: Can I write custom validators?**
A: Yes, extend the Validator base class. Custom validators can use LLMs, APIs, or rule-based logic.

## Source & Thanks

> Created by [Guardrails AI](https://github.com/guardrails-ai). Licensed under Apache 2.0.
>
> [guardrails-ai/guardrails](https://github.com/guardrails-ai/guardrails) — 4k+ stars
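The FAQ above notes that custom validators extend the Validator base class and can be purely rule-based. The real base class requires the installed library; as a framework-free sketch of the same shape (hypothetical names, not the actual Guardrails API), a rule-based validator boils down to a callable returning pass/fail plus an optional fix for the `on_fail="fix"` policy:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ValidationResult:
    passed: bool
    fixed_value: Optional[str] = None  # set when a "fix" repair is possible

def no_shouting(value: str) -> ValidationResult:
    """Rule-based check: fail if the text is mostly uppercase,
    and offer a lowercased version as the fix."""
    letters = [c for c in value if c.isalpha()]
    if letters and sum(c.isupper() for c in letters) / len(letters) > 0.8:
        return ValidationResult(passed=False, fixed_value=value.lower())
    return ValidationResult(passed=True)

print(no_shouting("PLEASE HELP ME NOW").passed)  # False
print(no_shouting("Please help me now").passed)  # True
```

A real custom validator would subclass Guardrails' base class and return its result types instead, but the validate-then-optionally-fix contract is the same.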
---

Source: https://tokrepo.com/en/workflows/ffbad589-cd32-4eca-9518-fdcf9167ca21
Author: Agent Toolkit