Key Features
- Pre-built validators: PII detection, toxicity, hallucination, and SQL injection checks via the Guardrails Hub
- Input/output guards: Intercept and validate every LLM interaction (see the sketch after this list)
- Structured outputs: Generate typed data using Pydantic models
- REST API server: Deploy guards as a standalone service
- Custom validators: Create domain-specific validation rules
- Any LLM: Works with proprietary and open-source models
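For example, guarding LLM output with a Hub validator looks roughly like this. A minimal sketch, assuming the `ToxicLanguage` validator has already been installed with `guardrails hub install hub://guardrails/toxic_language`; exact import paths and parameters can vary between Guardrails versions:

```python
from guardrails import Guard
from guardrails.hub import ToxicLanguage  # assumes the Hub validator is installed

# Attach the validator to a guard; raise an exception when validation fails.
guard = Guard().use(
    ToxicLanguage(threshold=0.5, validation_method="sentence", on_fail="exception")
)

# Validate text (e.g., an LLM response) before passing it downstream.
guard.validate("Here is a perfectly polite answer.")  # passes
# guard.validate("<some toxic output>")               # would raise
```

The same guard object can also wrap an LLM call directly, so every response is checked before it reaches your application.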
FAQ
Q: What is Guardrails?
A: Guardrails is an Apache 2.0-licensed Python framework (6.6K+ GitHub stars) for validating LLM inputs and outputs. It ships pre-built validators for PII, toxicity, and hallucination, and generates structured outputs via Pydantic models.
Q: How do I install Guardrails?
A: Run `pip install guardrails-ai`, then use `Guard.from_pydantic(...)` to create schema-validated LLM calls (see the sketch below).
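A minimal sketch of structured output validation. The `Person` model is a hypothetical example, and `guard.parse(...)` here validates a raw LLM string against the schema; call signatures may differ slightly across Guardrails versions:

```python
from pydantic import BaseModel, Field
from guardrails import Guard

# Hypothetical schema for illustration.
class Person(BaseModel):
    name: str = Field(description="Full name")
    age: int = Field(description="Age in years")

guard = Guard.from_pydantic(output_class=Person)

# Validate a raw LLM response against the schema; in practice you would
# invoke guard(...) with your LLM of choice instead of hard-coding output.
result = guard.parse('{"name": "Ada Lovelace", "age": 36}')
print(result.validated_output)  # {'name': 'Ada Lovelace', 'age': 36}
```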