Scripts · March 31, 2026 · 1 min read

Instructor — Structured Outputs from LLMs

Get structured, validated outputs from LLMs using Pydantic models. Works with OpenAI, Anthropic, Google, Ollama, and more. Retry logic, streaming, partial responses. 12.6K+ stars.

TokRepo Picks · Community
Quick Start

Use it first, then decide whether to dig deeper

This section shows both users and agents what to copy first, what to install, and where it goes.

pip install instructor

import instructor
from pydantic import BaseModel
from openai import OpenAI

client = instructor.from_openai(OpenAI())

class User(BaseModel):
    name: str
    age: int
    bio: str

user = client.chat.completions.create(
    model="gpt-4o",
    response_model=User,
    messages=[{"role": "user", "content": "Extract: Jason is 25 and loves hiking."}],
)
print(user)  # User(name='Jason', age=25, bio='Loves hiking')

Introduction

Instructor makes it easy to get structured, validated data from LLMs. Define a Pydantic model, and Instructor handles prompting, parsing, validation, and retries automatically. Works with OpenAI, Anthropic, Google, Ollama, LiteLLM, and any OpenAI-compatible API. Supports streaming, partial responses, and complex nested schemas. 12,600+ GitHub stars, MIT licensed.

Best for: Developers who need reliable structured data extraction from LLMs, not free text. Works with: OpenAI, Anthropic, Google, Ollama, LiteLLM, Mistral, Cohere


Key Features

Type-Safe Extraction

Define output schema with Pydantic, get validated objects back:

from typing import Optional

class Contact(BaseModel):
    name: str
    email: str
    company: Optional[str] = None

Automatic Retries

If the LLM returns invalid data, Instructor retries with the validation error as context:

client.chat.completions.create(
    response_model=Contact,
    max_retries=3,  # auto-retry on validation failure
    ...
)
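The core idea behind `max_retries` can be sketched in plain Python with the standard library. All names below (`validate_user`, `extract_with_retries`, `fake_model`) are hypothetical stand-ins, not Instructor's API; the point is that each validation error is appended to the conversation so the model can correct itself on the next attempt:

```python
# Minimal sketch of retry-with-error-feedback, using a stand-in
# "model" function instead of a real LLM call.
import json


def validate_user(data: dict) -> dict:
    """Toy validator: require a string 'name' and an int 'age'."""
    if not isinstance(data.get("name"), str):
        raise ValueError("field 'name' must be a string")
    if not isinstance(data.get("age"), int):
        raise ValueError("field 'age' must be an integer")
    return data


def extract_with_retries(model, messages, max_retries=3):
    for _ in range(max_retries):
        raw = model(messages)
        try:
            return validate_user(json.loads(raw))
        except (ValueError, json.JSONDecodeError) as err:
            # Feed the validation error back as context, as Instructor does.
            messages = messages + [
                {"role": "user", "content": f"Fix this error and retry: {err}"}
            ]
    raise RuntimeError("validation failed after retries")


# Stand-in model: answers wrongly first, correctly once it sees an error.
def fake_model(messages):
    if any("Fix this error" in m["content"] for m in messages):
        return '{"name": "Jason", "age": 25}'
    return '{"name": "Jason", "age": "twenty-five"}'
```

Here `fake_model` first returns `age` as a string, the validator rejects it, and the second attempt succeeds because the error message is now part of the prompt.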

Streaming & Partial Responses

Stream structured objects as they're generated — great for UIs.
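To illustrate what "partial responses" means, here is a standard-library sketch of the underlying idea (Instructor's real implementation streams typed Pydantic objects; `iter_partial` and the naive closing strategy below are hypothetical, for illustration only): as token chunks arrive, the incomplete JSON buffer is speculatively closed and re-parsed, so the UI can render each snapshot immediately.

```python
import json


def iter_partial(chunks):
    """Yield a best-effort parsed object after each chunk by naively
    appending closers to the JSON buffer. Real partial parsers are smarter."""
    buf = ""
    for chunk in chunks:
        buf += chunk
        for closer in ("", '"}', "}"):
            try:
                yield json.loads(buf + closer)
                break
            except json.JSONDecodeError:
                continue
```

For example, streaming the chunks `'{"name": "Ja'`, `'son"'`, `', "age": 25}'` yields a growing object: first `{"name": "Ja"}`, then `{"name": "Jason"}`, then the complete `{"name": "Jason", "age": 25}`.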

Multi-Provider

One API across OpenAI, Anthropic, Google Gemini, Ollama, Mistral, and more.

Complex Schemas

Nested models, lists, enums, optional fields, custom validators — full Pydantic support.
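A sketch of what "full Pydantic support" buys you, assuming Pydantic v2 (the model names here are illustrative, not from Instructor): nested models, optional fields, and a custom validator all compose, and any such model can be passed as `response_model`.

```python
# Nested models, optional fields, and a custom validator (Pydantic v2).
from typing import List, Optional

from pydantic import BaseModel, field_validator


class Address(BaseModel):
    city: str
    country: str


class Person(BaseModel):
    name: str
    age: int
    addresses: List[Address] = []
    nickname: Optional[str] = None

    @field_validator("age")
    @classmethod
    def age_must_be_plausible(cls, v: int) -> int:
        # Reject obviously wrong extractions before they reach your code.
        if not 0 <= v <= 150:
            raise ValueError("age out of range")
        return v
```

Nested dicts are coerced into `Address` instances automatically, and a failing validator raises a `ValidationError`, which with Instructor would trigger the retry loop described above.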


FAQ

Q: What is Instructor? A: A Python library that gets structured, validated outputs from LLMs using Pydantic models. Handles prompting, parsing, validation, and retries. 12.6K+ stars.

Q: Does it work with Claude? A: Yes, Instructor supports Anthropic Claude, OpenAI, Google Gemini, Ollama, and many more providers.


🙏

Sources & Acknowledgements

Created by Jason Liu. Licensed under MIT. instructor-ai/instructor — 12,600+ GitHub stars

Related Assets