Scripts · May 11, 2026 · 2 min read

Instructor — Typed Structured Outputs for LLMs

Instructor turns LLM replies into validated Pydantic models with retries. `pip install instructor`, then extract typed objects across major providers.

Agent-ready

This asset can be read and installed directly by agents.

TokRepo exposes a universal CLI command, an install contract, JSON metadata, an adapter-specific plan, and raw content so agents can assess compatibility, risk, and next steps.

Stage only · 29/100
Agent surface: Any MCP/CLI agent
Type: Script
Install: Single
Trust: Established
Entry point: README.md
Universal CLI command
npx tokrepo install e4780821-245b-49db-9315-ba6260689aa5
Introduction

Instructor turns LLM replies into validated Pydantic models with retries. `pip install instructor`, then extract typed objects across major providers.

  • Best for: backend teams who need reliable, typed extraction (JSON-safe) without hand-written parsers and fragile regex cleanup
  • Works with: Python, Pydantic models, OpenAI/Anthropic/Google/Ollama providers (per repo examples)
  • Setup time: 8 minutes
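To make "typed extraction" concrete, here is a stdlib-only sketch of the pattern Instructor implements: parse a model reply into a declared schema and reject anything that does not fit. This is not the Instructor API; `llm_complete` is a stub standing in for a real provider call, and in practice Instructor does this with Pydantic models and a provider client.

```python
import json
from dataclasses import dataclass

@dataclass
class User:
    name: str
    age: int

def llm_complete(prompt: str) -> str:
    # Stub: a real call would go to OpenAI/Anthropic/Google/Ollama.
    return '{"name": "Ada", "age": 36}'

def extract_user(prompt: str) -> User:
    raw = json.loads(llm_complete(prompt))   # reply must be valid JSON
    user = User(**raw)                        # reply must match the schema's fields
    if not isinstance(user.age, int) or user.age < 0:
        raise ValueError("age must be a non-negative integer")
    return user

user = extract_user("Extract: Ada is 36 years old.")
print(user)  # User(name='Ada', age=36)
```

The point of the library is that the schema, parsing, constraint checks, and the retry loop below are handled for you instead of being hand-written like this.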

Quantitative Notes

  • GitHub stars (verified): see Source & Thanks
  • Setup time ~8 minutes
  • Install command: pip install instructor (repo)

Practical Notes

Use Instructor when you already know the shape of the answer and you want the model to fill it in with high reliability. Start with one Pydantic model per call (e.g., User, ProductReview), add tight field constraints (enums, ranges), then layer in retry budgets. Once stable, treat the schema as an API contract: version it, add regression examples, and monitor validation failures as a quality metric.
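The "retry budget" idea above can be sketched as a loop that validates each reply and, on failure, re-asks with the validation error attached. This is a conceptual stdlib sketch, not Instructor's implementation; `ask`, `validate`, and the failing-then-succeeding stub replies are all hypothetical.

```python
import json
from dataclasses import dataclass

@dataclass
class ProductReview:
    rating: int    # constrained to 1..5 by validate()
    summary: str

def validate(raw: dict) -> ProductReview:
    review = ProductReview(**raw)
    if not (1 <= review.rating <= 5):
        raise ValueError("rating must be between 1 and 5")
    return review

def extract_with_retries(ask, prompt: str, budget: int = 3) -> ProductReview:
    last_error = None
    for _ in range(budget):
        # On a retry, feed the previous validation error back to the model.
        full_prompt = (prompt if last_error is None
                       else f"{prompt}\nPrevious output was invalid: {last_error}")
        reply = ask(full_prompt)
        try:
            return validate(json.loads(reply))
        except (ValueError, TypeError, json.JSONDecodeError) as exc:
            last_error = exc
    raise RuntimeError(f"no valid output after {budget} attempts: {last_error}")

# Stub model: produces an out-of-range rating once, then a valid object.
replies = iter(['{"rating": 9, "summary": "great"}',
                '{"rating": 5, "summary": "great"}'])
result = extract_with_retries(lambda p: next(replies), "Review the product.")
print(result.rating)  # 5
```

Monitoring how often this loop needs a second attempt is exactly the "validation failures as a quality metric" suggestion above.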

Safety note: schema discipline matters; oversized models and ambiguous fields drive up retries, latency, and cost.

FAQ

Q: What problem does Instructor solve? A: It enforces a typed schema on LLM outputs (Pydantic) and retries invalid generations, so you ship structured results instead of brittle parsing.

Q: Is it only for OpenAI? A: No. The repo shows provider strings for OpenAI, Anthropic, Google, and local Ollama; the API stays consistent.

Q: How do I reduce failures? A: Keep the schema minimal, constrain enums, and ask for one object per call; smaller schemas validate more reliably and retry less.
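"Constrain enums" from the answer above can be shown with a stdlib sketch: declaring the legal labels as an Enum means any off-list value fails loudly at parse time instead of leaking downstream. The `Sentiment` schema and the JSON shape here are hypothetical examples, not from the Instructor repo.

```python
import json
from enum import Enum

class Sentiment(str, Enum):
    POSITIVE = "positive"
    NEGATIVE = "negative"
    NEUTRAL = "neutral"

def parse_sentiment(reply: str) -> Sentiment:
    label = json.loads(reply)["sentiment"]
    return Sentiment(label)  # raises ValueError for any label not in the enum

print(parse_sentiment('{"sentiment": "positive"}').value)  # positive
```

With Pydantic, the same constraint is a field typed as an Enum (or `Literal`), which is what makes small, tightly constrained schemas validate more reliably.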



Source & Thanks

GitHub: https://github.com/567-labs/instructor
Owner avatar: https://avatars.githubusercontent.com/u/152629781?v=4
License (SPDX): MIT
GitHub stars (verified via api.github.com/repos/instructor-ai/instructor): 12,947
