# Instructor — Typed Structured Outputs for LLMs

> Instructor turns LLM replies into validated Pydantic models with retries. `pip install instructor`, then extract typed objects across major providers.

## Quick Use

1. Install:

   ```bash
   pip install instructor
   ```

2. Run:

   ```bash
   python -c "import instructor; print('instructor installed')"
   ```

3. Verify:
   - Run one extraction and confirm the result is a validated Pydantic model (not a raw JSON string)

---

## Intro

Instructor turns LLM replies into validated Pydantic models with retries. `pip install instructor`, then extract typed objects across major providers.

- **Best for:** backend teams that need reliable, typed extraction (JSON-safe) without hand-written parsers and fragile regex cleanup
- **Works with:** Python, Pydantic models, OpenAI/Anthropic/Google/Ollama providers (per repo examples)
- **Setup time:** ~8 minutes

### Quantitative Notes

- GitHub stars (verified): see Source & Thanks
- Setup time: ~8 minutes
- Install command: `pip install instructor` (repo)

---

## Practical Notes

Use Instructor when you already know the *shape* of the answer and want the model to fill it in reliably. Start with one Pydantic model per call (e.g., `User`, `ProductReview`), add tight field constraints (enums, ranges), then layer in retry budgets. Once stable, treat the schema as an API contract: version it, add regression examples, and monitor validation failures as a quality metric.

**Safety note:** Schema discipline matters: oversized models and ambiguous fields cause retries, latency spikes, and cost spikes.
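The workflow above can be sketched as follows. This is a minimal, hedged example: the `ProductReview` schema, model name, and prompt are illustrative choices, not taken from the repo; the provider-string style (`instructor.from_provider(...)`), `response_model=`, and `max_retries=` follow Instructor's documented API in recent versions, but check the repo for your installed version.

```python
# Sketch: one small Pydantic model per call, tight constraints, a retry budget.
# Assumes the OpenAI provider; schema and prompt are illustrative.
import os
from enum import Enum
from pydantic import BaseModel, Field


class Sentiment(str, Enum):
    # Constrained enum: anything outside these values fails validation.
    positive = "positive"
    negative = "negative"
    neutral = "neutral"


class ProductReview(BaseModel):
    # One minimal model per call, as recommended in Practical Notes.
    product: str
    rating: int = Field(ge=1, le=5)  # tight range constraint
    sentiment: Sentiment


def extract_review(text: str) -> ProductReview:
    """Ask the LLM to fill the schema; Instructor retries invalid generations."""
    import instructor

    # Provider-string style per the repo's examples (swap e.g. for Anthropic/Ollama).
    client = instructor.from_provider("openai/gpt-4o-mini")
    return client.chat.completions.create(
        response_model=ProductReview,  # the typed contract
        max_retries=2,                 # retry budget for failed validation
        messages=[{"role": "user", "content": f"Extract the review: {text}"}],
    )


# The schema enforces the contract even without a live call:
review = ProductReview(product="headphones", rating=5, sentiment="positive")
print(review.sentiment)  # Sentiment.positive

if os.environ.get("OPENAI_API_KEY"):  # live extraction only if a key is set
    print(extract_review("Great headphones, 5/5, totally worth it."))
```

The payoff of the constraints is that bad generations fail loudly: a `rating` of 9 or a made-up sentiment raises a validation error, which Instructor turns into a retry instead of a silently wrong result.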
### FAQ

**Q: What problem does Instructor solve?**
A: It enforces a typed schema on LLM outputs (Pydantic) and retries invalid generations, so you ship structured results instead of brittle string parsing.

**Q: Is it only for OpenAI?**
A: No. The repo shows provider strings for OpenAI, Anthropic, Google, and local Ollama; the API stays consistent.

**Q: How do I reduce failures?**
A: Keep the schema minimal, constrain enums, and ask for one object per call; smaller schemas validate more reliably and retry less.

---

## Source & Thanks

> GitHub: https://github.com/567-labs/instructor
> License (SPDX): MIT
> GitHub stars (verified via `api.github.com/repos/instructor-ai/instructor`): 12,947

---

Source: https://tokrepo.com/en/workflows/instructor-typed-structured-outputs-for-llms
Author: Agent Toolkit