Instructor — Structured Outputs from LLMs
Get structured, validated outputs from LLMs using Pydantic models. Works with OpenAI, Anthropic, Google, Ollama, and more. Retry logic, streaming, partial responses. 12.6K+ stars.
What it is
Instructor is a Python library that wraps LLM API clients to return structured, validated outputs instead of raw text. You define a Pydantic model, pass it to Instructor, and the library handles injecting the schema into the prompt, parsing the JSON response, and validating it against your model. If the LLM returns invalid data, Instructor retries automatically.
Instructor is for developers building applications that need reliable structured data from LLMs: extracting entities from text, classifying inputs, generating structured reports, or creating API responses. It supports OpenAI, Anthropic, Google, Ollama, and other providers.
How it saves time or tokens
Without Instructor, you write custom parsing code for every LLM call, extract JSON by hand, and build retry logic yourself. Instructor replaces all of that with a single function call. The sections below provide the pip install command and working code examples that produce typed Python objects from LLM responses in minutes.
How to use
- Install Instructor:
pip install instructor
- Patch your LLM client and define a Pydantic model:
import instructor
from pydantic import BaseModel
from openai import OpenAI
client = instructor.from_openai(OpenAI())
class User(BaseModel):
    name: str
    age: int
    email: str

user = client.chat.completions.create(
    model='gpt-4o',
    response_model=User,
    messages=[{'role': 'user', 'content': 'Extract: John is 30, john@example.com'}]
)

print(user.name)   # John
print(user.age)    # 30
print(user.email)  # john@example.com
- The returned object is a validated Pydantic instance with type checking and field validation.
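Because the result is an ordinary Pydantic model, you can attach custom validators, and any validation error is what Instructor feeds back to the LLM on a retry. The sketch below uses plain Pydantic (no API call) to show that mechanism; the email check is illustrative, not part of Instructor:

```python
from pydantic import BaseModel, ValidationError, field_validator

class User(BaseModel):
    name: str
    age: int
    email: str

    # Illustrative custom rule: Instructor would surface this validator's
    # error message to the LLM and retry if the extraction violated it.
    @field_validator('email')
    @classmethod
    def must_look_like_email(cls, v: str) -> str:
        if '@' not in v:
            raise ValueError('not a valid email address')
        return v

# Valid data constructs normally.
user = User(name='John', age=30, email='john@example.com')

# Invalid data raises ValidationError -- the signal that triggers a retry.
try:
    User(name='John', age=30, email='not-an-email')
except ValidationError as e:
    print(e.error_count(), 'validation error')
```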
Example
from typing import List
import instructor
from pydantic import BaseModel, Field
from anthropic import Anthropic
client = instructor.from_anthropic(Anthropic())
class ExtractedEntity(BaseModel):
    name: str
    entity_type: str = Field(description='person, org, or location')
    confidence: float = Field(ge=0.0, le=1.0)

class ExtractionResult(BaseModel):
    entities: List[ExtractedEntity]
    summary: str

result = client.messages.create(
    model='claude-sonnet-4-20250514',
    max_tokens=1024,
    response_model=ExtractionResult,
    messages=[{'role': 'user', 'content': 'Extract entities from: Apple CEO Tim Cook visited Berlin for a meeting with SAP executives.'}]
)

for entity in result.entities:
    print(f'{entity.name} ({entity.entity_type}): {entity.confidence}')
Related on TokRepo
- AI tools for coding -- Developer tools for building with LLMs
- Prompt library -- Reusable prompt patterns for structured outputs
Common pitfalls
- Complex nested Pydantic models increase token usage because Instructor injects the schema into the prompt. Keep models flat when possible.
- Retry logic consumes additional API calls and tokens. Set max_retries to a reasonable number (2-3) to avoid runaway costs.
- Not all LLM providers support function calling natively. For providers without native support, Instructor falls back to JSON mode, which may be less reliable.
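To bound retry costs as described above, pass max_retries to the create call. A minimal sketch, assuming an OpenAI key is configured; the Sentiment model and its validator are illustrative, and the strict validator is what would trigger retries:

```python
from pydantic import BaseModel, field_validator

class Sentiment(BaseModel):
    label: str

    # Strict rule: any off-vocabulary label fails validation and
    # would cause Instructor to retry the call.
    @field_validator('label')
    @classmethod
    def known_label(cls, v: str) -> str:
        if v not in {'positive', 'negative', 'neutral'}:
            raise ValueError('label must be positive, negative, or neutral')
        return v

def classify(text: str) -> Sentiment:
    # Network call kept inside the function; requires
    # `pip install instructor openai` and an OPENAI_API_KEY.
    import instructor
    from openai import OpenAI
    client = instructor.from_openai(OpenAI())
    return client.chat.completions.create(
        model='gpt-4o',
        response_model=Sentiment,
        max_retries=2,  # cap: at most 2 additional calls on validation failure
        messages=[{'role': 'user', 'content': f'Classify the sentiment: {text}'}],
    )
```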
Frequently Asked Questions
Which providers does Instructor support?
Instructor supports OpenAI, Anthropic, Google (Gemini), Ollama, LiteLLM, Cohere, and any provider with an OpenAI-compatible API. Each provider has a dedicated from_* function for patching the client.
What happens when the LLM returns invalid data?
When the LLM returns data that fails Pydantic validation, Instructor sends the validation error back to the LLM with a corrected prompt and retries. This continues up to max_retries times. Each retry is a separate API call.
Does Instructor support streaming?
Yes. Instructor supports partial streaming where fields are populated as the LLM generates them. You can iterate over partial results and update your UI progressively using create_partial.
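A minimal sketch of partial streaming, based on Instructor's create_partial API (check the docs for your installed version); the Report model and prompt are illustrative:

```python
from pydantic import BaseModel

class Report(BaseModel):
    title: str
    summary: str

def stream_report(topic: str) -> None:
    # Network call kept inside the function; requires
    # `pip install instructor openai` and an OPENAI_API_KEY.
    import instructor
    from openai import OpenAI
    client = instructor.from_openai(OpenAI())
    # create_partial yields Report objects whose fields fill in
    # incrementally; not-yet-generated fields are None.
    for partial in client.chat.completions.create_partial(
        model='gpt-4o',
        response_model=Report,
        messages=[{'role': 'user', 'content': f'Write a short report on {topic}'}],
    ):
        print(partial.title, partial.summary)
```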
Can I use Instructor with local models?
Yes. Use instructor.from_openai with an Ollama or vLLM client that exposes an OpenAI-compatible API. Local models work best when they support function calling or structured output modes.
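A sketch of the local-model setup, assuming Ollama's default endpoint at localhost:11434; the model name, JSON mode choice, and City model are assumptions to adjust for your setup:

```python
from pydantic import BaseModel

class City(BaseModel):
    name: str
    country: str

def extract_city(text: str) -> City:
    # Network call kept inside the function; requires
    # `pip install instructor openai` and a running Ollama server.
    import instructor
    from openai import OpenAI
    client = instructor.from_openai(
        OpenAI(base_url='http://localhost:11434/v1', api_key='ollama'),
        mode=instructor.Mode.JSON,  # fallback mode for models without native function calling
    )
    return client.chat.completions.create(
        model='llama3',
        response_model=City,
        messages=[{'role': 'user', 'content': f'Extract the city mentioned in: {text}'}],
    )
```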
How does Instructor compare to LangChain output parsers?
Instructor is focused exclusively on structured extraction with Pydantic validation and automatic retries. LangChain output parsers are part of a larger framework. Instructor is simpler and more reliable for the specific task of getting validated structured data from LLMs.
Citations (3)
- Instructor GitHub -- Instructor provides structured outputs from LLMs using Pydantic
- Instructor Documentation -- Supports OpenAI, Anthropic, Google, Ollama and more
- Pydantic Documentation -- Pydantic validation for data models
Source & Thanks
Created by Jason Liu. Licensed under MIT. instructor-ai/instructor — 12,600+ GitHub stars