# Instructor — Structured Outputs from LLMs

> Get structured, validated outputs from LLMs using Pydantic models. Works with OpenAI, Anthropic, Google, Ollama, and more. Retry logic, streaming, partial responses. 12.6K+ stars.

## Install

```bash
pip install instructor
```

## Quick Use

Save as a script file and run:

```python
import instructor
from pydantic import BaseModel
from openai import OpenAI

client = instructor.from_openai(OpenAI())

class User(BaseModel):
    name: str
    age: int
    bio: str

user = client.chat.completions.create(
    model="gpt-4o",
    response_model=User,
    messages=[{"role": "user", "content": "Extract: Jason is 25 and loves hiking."}],
)
print(user)
# User(name='Jason', age=25, bio='Loves hiking')
```

---

## Intro

Instructor makes it easy to get structured, validated data from LLMs. Define a Pydantic model, and Instructor handles prompting, parsing, validation, and retries automatically.

Works with OpenAI, Anthropic, Google, Ollama, LiteLLM, and any OpenAI-compatible API. Supports streaming, partial responses, and complex nested schemas. 12,600+ GitHub stars, MIT licensed.

**Best for**: Developers who need reliable structured data extraction from LLMs — not free-text parsing.

**Works with**: OpenAI, Anthropic, Google, Ollama, LiteLLM, Mistral, Cohere

---

## Key Features

### Type-Safe Extraction

Define the output schema with Pydantic and get validated objects back:

```python
from typing import Optional

from pydantic import BaseModel

class Contact(BaseModel):
    name: str
    email: str
    company: Optional[str] = None
```

### Automatic Retries

If the LLM returns invalid data, Instructor retries the request with the validation error included as context:

```python
client.chat.completions.create(
    response_model=Contact,
    max_retries=3,  # auto-retry on validation failure
    ...
)
```

### Streaming & Partial Responses

Stream structured objects as they're generated — great for UIs.

### Multi-Provider

One API across OpenAI, Anthropic, Google Gemini, Ollama, Mistral, and more.
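The retry loop described under Automatic Retries is driven by ordinary Pydantic validation errors. Below is a minimal sketch (plain Pydantic, no API call, assuming Pydantic v2; the validator name is invented for the example) of a custom validator whose error message Instructor would feed back to the model on a failed attempt:

```python
from pydantic import BaseModel, field_validator

class Contact(BaseModel):
    name: str
    email: str

    @field_validator("email")
    @classmethod
    def email_must_contain_at(cls, value: str) -> str:
        # On retry, the error message below is included in the context
        # sent back to the LLM, so make it descriptive.
        if "@" not in value:
            raise ValueError("email must contain an '@' sign")
        return value
```

Passing this model as `response_model` means any extraction missing a plausible email raises a `ValidationError`, which triggers a retry (up to `max_retries`) rather than silently returning bad data.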
### Complex Schemas

Nested models, lists, enums, optional fields, custom validators — full Pydantic support.

---

## FAQ

**Q: What is Instructor?**
A: A Python library that gets structured, validated outputs from LLMs using Pydantic models. It handles prompting, parsing, validation, and retries. 12.6K+ stars.

**Q: Does it work with Claude?**
A: Yes. Instructor supports Anthropic Claude, OpenAI, Google Gemini, Ollama, and many more providers.

---

## Source & Thanks

> Created by [Jason Liu](https://github.com/instructor-ai). Licensed under MIT.
> [instructor-ai/instructor](https://github.com/instructor-ai/instructor) — 12,600+ GitHub stars
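As an illustration of the full-Pydantic support noted under Complex Schemas, here is a minimal sketch of the kind of nested schema you could pass as `response_model` (assuming Pydantic v2; the model and field names are invented for the example):

```python
from enum import Enum
from typing import List, Optional

from pydantic import BaseModel

class Priority(str, Enum):
    low = "low"
    high = "high"

class Task(BaseModel):
    title: str
    priority: Priority = Priority.low

class Project(BaseModel):
    name: str
    tasks: List[Task]          # nested list of models
    owner: Optional[str] = None

# Instructor would populate this from the LLM's JSON output;
# here it is validated locally to show the schema in action.
project = Project.model_validate(
    {"name": "Demo", "tasks": [{"title": "Ship v1", "priority": "high"}]}
)
```

Because validation is recursive, a bad enum value or a malformed task anywhere in the list fails the whole extraction, and the retry mechanism applies to the nested structure as well.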