WorkflowsMay 14, 2026·2 min read

Pydantic AI Shields — Guardrails for Pydantic AI

Drop-in guardrail capabilities for Pydantic AI agents: cost budgets, tool permissions, and input/output guards; verified 63★, pushed 2026-05-05.

Agent ready

This asset can be read and installed directly by agents

TokRepo exposes a universal CLI command, install contract, metadata JSON, adapter-aware plan, and raw content links so agents can judge fit, risk, and next actions.

Native · 94/100Policy: allow
Agent surface
Any MCP/CLI agent
Kind
Workflow
Install
Pip
Trust
Trust: Established
Entrypoint
python -c "import pydantic_ai_shields; print('pydantic-ai-shields ready')"
Universal CLI install command
npx tokrepo install 465f033a-7847-5937-8925-29c7c91bfb5a
Intro

Drop-in guardrail capabilities for Pydantic AI agents: cost budgets, tool permissions, and input/output guards; verified 63★, pushed 2026-05-05.

Best for: Pydantic AI users who want safety + budget controls as first-class capabilities (not ad hoc wrappers)

Works with: Python 3.10+ and Pydantic AI agents using the capabilities API

Setup time: 8-15 minutes

Key facts (verified)

  • GitHub: 63 stars · 10 forks · pushed 2026-05-05.
  • License: MIT · owner avatar + repo URL verified via GitHub API.
  • README-backed entrypoint: python -c "import pydantic_ai_shields; print('pydantic-ai-shields ready')".

Main

  • Budgeting by default: add CostTracking(budget_usd=...) to stop runaway agent loops and to record total tokens/cost per run.

  • Tool permissions: use ToolGuard(blocked=[...], require_approval=[...]) so unsafe tools never appear (or require explicit approval).

  • Input/output controls: InputGuard blocks risky user prompts early; OutputGuard can enforce redaction or policy checks post-run.

  • Prefer incremental rollout: start with cost tracking + tool allowlist, then add input/output guards for the highest-risk surfaces.

Source-backed notes

  • README shows pip install pydantic-ai-shields and a Quick Start example using CostTracking, ToolGuard, and InputGuard capabilities.
  • README describes CostTracking as tracking tokens/cost with optional budget enforcement and raising BudgetExceededError.
  • README explains ToolGuard supports blocking tools entirely and requiring approvals via a callback.

FAQ

  • Is this a full agent framework?: No — README positions it as drop-in capabilities for Pydantic AI, not a separate agent runtime.
  • Can I block a tool completely?: Yes — README shows ToolGuard(blocked=[...]) removes tools from the model’s tool list.
  • What’s the first guardrail to add?: Cost tracking + tool permissions; then add input/output guards for your highest-risk prompts.
🙏

Source & Thanks

Source: https://github.com/vstorm-co/pydantic-ai-shields > License: MIT > GitHub stars: 63 · forks: 10

Discussion

Sign in to join the discussion.
No comments yet. Be the first to share your thoughts.

Related Assets