Scripts · Mar 30, 2026 · 2 min read

Instructor — Structured Outputs from LLMs

Get structured, validated outputs from LLMs using Pydantic models. Works with OpenAI, Anthropic, Google, Ollama, and more. Retry logic, streaming, partial responses. 12.6K+ stars.

TL;DR
Instructor extracts structured, validated data from LLM responses using Pydantic models with retry logic and streaming.
§01

What it is

Instructor is a Python library that wraps LLM API clients to return structured, validated outputs instead of raw text. You define a Pydantic model, pass it to Instructor, and the library handles prompt injection, JSON parsing, and validation. If the LLM returns invalid data, Instructor retries automatically.
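The validation step Instructor relies on is plain Pydantic. A minimal sketch of what happens to well-formed versus malformed model output (the dicts stand in for parsed LLM replies):

```python
from pydantic import BaseModel, ValidationError

class User(BaseModel):
    name: str
    age: int

# Well-formed output parses into a typed object.
user = User.model_validate({"name": "John", "age": 30})

# Malformed output raises ValidationError; Instructor sends this
# error text back to the model and retries.
try:
    User.model_validate({"name": "John", "age": "thirty"})
except ValidationError as err:
    print(err.error_count(), "field failed validation")
```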

Instructor is for developers building applications that need reliable structured data from LLMs: extracting entities from text, classifying inputs, generating structured reports, or creating API responses. It supports OpenAI, Anthropic, Google, Ollama, and other providers.

§02

How it saves time or tokens

Without Instructor, you write custom parsing code for every LLM call, extract JSON from raw text by hand, and build retry logic yourself. Instructor replaces all of that with a single patched client call: after a pip install, a few lines of working code produce typed Python objects from LLM responses.
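For contrast, a rough sketch of the hand-rolled extraction Instructor replaces (the raw reply string here is invented):

```python
import json
import re

# An invented raw LLM reply that wraps JSON in conversational prose.
raw = 'Sure! Here is the extracted data: {"name": "John", "age": 30} Hope that helps.'

# Hand-rolled extraction: find the JSON object, parse it, hope it is valid.
match = re.search(r"\{.*\}", raw, re.DOTALL)
data = json.loads(match.group(0))
print(data["name"], data["age"])  # John 30
```

This approach breaks as soon as the model wraps the JSON differently or mistypes a field, which is exactly the failure mode Instructor's validation-plus-retry loop handles.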

§03

How to use

  1. Install Instructor:
pip install instructor
  2. Patch your LLM client and define a Pydantic model:
import instructor
from pydantic import BaseModel
from openai import OpenAI

client = instructor.from_openai(OpenAI())

class User(BaseModel):
    name: str
    age: int
    email: str

user = client.chat.completions.create(
    model='gpt-4o',
    response_model=User,
    messages=[{'role': 'user', 'content': 'Extract: John is 30, john@example.com'}]
)

print(user.name)   # John
print(user.age)    # 30
print(user.email)  # john@example.com
  3. The returned object is a validated Pydantic instance with type checking and field validation.
§04

Example

from typing import List
import instructor
from pydantic import BaseModel, Field
from anthropic import Anthropic

client = instructor.from_anthropic(Anthropic())

class ExtractedEntity(BaseModel):
    name: str
    entity_type: str = Field(description='person, org, or location')
    confidence: float = Field(ge=0.0, le=1.0)

class ExtractionResult(BaseModel):
    entities: List[ExtractedEntity]
    summary: str

result = client.messages.create(
    model='claude-sonnet-4-20250514',
    max_tokens=1024,
    response_model=ExtractionResult,
    messages=[{'role': 'user', 'content': 'Extract entities from: Apple CEO Tim Cook visited Berlin for a meeting with SAP executives.'}]
)

for entity in result.entities:
    print(f'{entity.name} ({entity.entity_type}): {entity.confidence}')
§05

Common pitfalls

  • Complex nested Pydantic models increase token usage because Instructor injects the schema into the prompt. Keep models flat when possible.
  • Retry logic consumes additional API calls and tokens. Set max_retries to a reasonable number (2-3) to avoid runaway costs.
  • Not all LLM providers support function calling natively. For providers without native support, Instructor falls back to JSON mode, which may be less reliable.
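The first pitfall is easy to quantify: the serialized JSON schema of a model is a rough proxy for the prompt tokens its definition costs (a sketch; the exact injection format varies by provider and mode):

```python
import json
from pydantic import BaseModel

class Address(BaseModel):
    street: str
    city: str

class NestedUser(BaseModel):
    name: str
    address: Address

class FlatUser(BaseModel):
    name: str
    street: str
    city: str

# The nested model's schema carries an extra $defs entry and $ref,
# so it serializes larger than the flat equivalent.
nested = len(json.dumps(NestedUser.model_json_schema()))
flat = len(json.dumps(FlatUser.model_json_schema()))
print(nested > flat)  # True
```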

Frequently Asked Questions

Which LLM providers does Instructor support?

Instructor supports OpenAI, Anthropic, Google (Gemini), Ollama, LiteLLM, Cohere, and any provider with an OpenAI-compatible API. Each provider has a dedicated from_* factory function (for example, from_openai or from_anthropic) for patching the client.

How does retry logic work?

When the LLM returns data that fails Pydantic validation, Instructor sends the validation error back to the LLM with a corrected prompt and retries. This continues up to max_retries times. Each retry is a separate API call.
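A toy sketch of that loop, with a list of canned replies standing in for real API calls (not Instructor's actual implementation):

```python
from pydantic import BaseModel, ValidationError

class User(BaseModel):
    name: str
    age: int

# Canned "LLM replies": the first fails validation, the second is corrected.
replies = [{"name": "John", "age": "thirty"}, {"name": "John", "age": 30}]

user = None
max_retries = 3
for reply in replies[:max_retries]:
    try:
        user = User.model_validate(reply)
        break  # success: stop retrying
    except ValidationError as err:
        # Instructor appends the validation error to the conversation
        # and asks the model to try again.
        feedback = str(err)

print(user.age if user else "gave up")  # 30
```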

Does Instructor support streaming?

Yes. Instructor supports partial streaming, where fields are populated as the LLM generates them, so you can iterate over partial results and update your UI progressively. Use the create_partial method, or pass stream=True with an instructor.Partial response model.
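Conceptually, partial streaming keeps re-parsing the growing response buffer as chunks arrive; a toy illustration in plain Python (not Instructor's real parser):

```python
import json

# Simulated token chunks of a streamed JSON reply.
chunks = ['{"name": "Jo', 'hn", "age"', ': 30}']

buffer = ""
partial = {}
for chunk in chunks:
    buffer += chunk
    # Naively try to complete the object and parse what we have so far.
    for candidate in (buffer, buffer + '"}', buffer + "}"):
        try:
            partial = json.loads(candidate)
            print(partial)  # fields show up as soon as they are parseable
            break
        except json.JSONDecodeError:
            pass
```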

Can I use Instructor with local models?

Yes. Use instructor.from_openai with an Ollama or vLLM client that exposes an OpenAI-compatible API. Local models work best when they support function calling or structured output modes.

How does Instructor compare to LangChain output parsers?

Instructor is focused exclusively on structured extraction with Pydantic validation and automatic retries. LangChain output parsers are part of a larger framework. Instructor is simpler and more reliable for the specific task of getting validated structured data from LLMs.


Source & Thanks

Created by Jason Liu. Licensed under MIT. instructor-ai/instructor — 12,600+ GitHub stars
