Workflows · May 14, 2026 · 2 min read

Pydantic AI Shields — Guardrails for Pydantic AI

Drop-in guardrail capabilities for Pydantic AI agents: cost budgets, tool permissions, and input/output guards; verified 63★, pushed 2026-05-05.

Agent-ready

This asset can be read and installed directly by agents

TokRepo exposes a universal CLI command, an install contract, the JSON metadata, a per-adapter plan, and the raw content to help agents judge fit, risk, and next actions.

Native · 94/100 · Policy: allow
Agent surface
Any MCP/CLI agent
Type
Workflow
Installation
Pip
Trust
Established
Entry point
python -c "import pydantic_ai_shields; print('pydantic-ai-shields ready')"
Universal CLI command
npx tokrepo install 465f033a-7847-5937-8925-29c7c91bfb5a
Introduction


Best for: Pydantic AI users who want safety + budget controls as first-class capabilities (not ad hoc wrappers)

Works with: Python 3.10+ and Pydantic AI agents using the capabilities API

Setup time: 8-15 minutes

Key facts (verified)

  • GitHub: 63 stars · 10 forks · pushed 2026-05-05.
  • License: MIT · owner avatar + repo URL verified via GitHub API.
  • README-backed entrypoint: python -c "import pydantic_ai_shields; print('pydantic-ai-shields ready')".

Main

  • Budgeting by default: add CostTracking(budget_usd=...) to stop runaway agent loops and to record total tokens/cost per run.

  • Tool permissions: use ToolGuard(blocked=[...], require_approval=[...]) so unsafe tools never appear (or require explicit approval).

  • Input/output controls: InputGuard blocks risky user prompts early; OutputGuard can enforce redaction or policy checks post-run.

  • Prefer incremental rollout: start with cost tracking + tool allowlist, then add input/output guards for the highest-risk surfaces.

Source-backed notes

  • README shows pip install pydantic-ai-shields and a Quick Start example using CostTracking, ToolGuard, and InputGuard capabilities.
  • README describes CostTracking as tracking tokens/cost with optional budget enforcement and raising BudgetExceededError.
  • README explains ToolGuard supports blocking tools entirely and requiring approvals via a callback.
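The ToolGuard behavior the README describes, blocking some tools outright and gating others behind an approval callback, can be illustrated with a small filter. `ToolGuard`, `blocked`, and `require_approval` are README names; the standalone function below is a hypothetical sketch of the filtering logic, not the library's API.

```python
# Hypothetical sketch of ToolGuard-style filtering: blocked tools are
# removed from the tool list entirely, and tools on the approval list are
# kept only if an approval callback says yes. Names follow the README;
# the logic is illustrative only.
from typing import Callable, Iterable

def guard_tools(
    tools: Iterable[str],
    blocked: set[str],
    require_approval: set[str],
    approve: Callable[[str], bool],
) -> list[str]:
    allowed = []
    for tool in tools:
        if tool in blocked:
            continue  # never shown to the model
        if tool in require_approval and not approve(tool):
            continue  # approval callback denied this tool
        allowed.append(tool)
    return allowed

tools = ["search", "delete_file", "send_email", "read_file"]
visible = guard_tools(
    tools,
    blocked={"delete_file"},
    require_approval={"send_email"},
    approve=lambda name: False,  # e.g. a human-in-the-loop prompt that said no
)
print(visible)  # ['search', 'read_file']
```

Filtering the tool list before the model sees it (rather than intercepting calls afterward) is what makes "unsafe tools never appear" possible: the model cannot request a tool it was never offered.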

FAQ

  • Is this a full agent framework?: No — README positions it as drop-in capabilities for Pydantic AI, not a separate agent runtime.
  • Can I block a tool completely?: Yes — README shows ToolGuard(blocked=[...]) removes tools from the model’s tool list.
  • What’s the first guardrail to add?: Cost tracking + tool permissions; then add input/output guards for your highest-risk prompts.

Source and acknowledgments

Source: https://github.com/vstorm-co/pydantic-ai-shields · License: MIT · GitHub stars: 63 · forks: 10

