Key Features
- Programming over prompting: Write Python modules instead of fragile prompt strings
- Automatic optimization: DSPy compiles and optimizes prompts and weights algorithmically
- Modular design: Compose classifiers, retrieval, reasoning, and agent loops as reusable modules
- Self-improving pipelines: Compilation learns from examples to improve output quality
- Provider agnostic: Works with OpenAI, Anthropic, Google, local models, and any LLM API
- Built-in modules: ChainOfThought and ReAct, plus support for RAG and multi-hop reasoning pipelines out of the box
Example
from typing import Literal

import dspy

# Configure the default LLM
lm = dspy.LM('openai/gpt-4o-mini')
dspy.configure(lm=lm)

# Define a module with a typed signature
class Classify(dspy.Signature):
    """Classify the sentiment of the given text."""
    text: str = dspy.InputField()
    sentiment: Literal['positive', 'negative', 'neutral'] = dspy.OutputField()

classify = dspy.ChainOfThought(Classify)

# Use it
result = classify(text="DSPy is amazing for building reliable AI systems")
print(result.sentiment)  # e.g. 'positive'

# Optimize with labeled examples (my_metric and examples are user-defined)
optimizer = dspy.MIPROv2(metric=my_metric, auto="medium")
optimized = optimizer.compile(classify, trainset=examples)

FAQ
Q: What is DSPy?
A: DSPy is a framework from Stanford NLP (33.3K+ GitHub stars, MIT licensed) for programming language models instead of prompting them. You write Python modules, and DSPy automatically optimizes the prompts and weights for reliable, high-quality outputs.
Q: How do I install DSPy?
A: Run pip install dspy. Then configure your LLM provider with dspy.configure(lm=dspy.LM('openai/gpt-4o-mini')) and start building modules.
Q: How is DSPy different from LangChain?
A: LangChain chains prompts and tools together manually. DSPy compiles and optimizes prompts algorithmically: you declare what you want, and DSPy searches for the best prompt automatically. DSPy's focus is reliability and optimization.
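The optimizer call in the Example section takes a metric and a trainset that the snippet leaves undefined. Below is a minimal sketch of what my_metric could look like: the (example, prediction, trace) calling convention follows DSPy's documented metric interface, while the exact-match scoring logic is an illustrative choice, not the only option.

```python
# Hypothetical metric for the sentiment classifier in the Example section.
# DSPy calls a metric with (example, prediction, trace) and expects a
# score (bool or float); exact label match is used here for illustration.
def my_metric(example, prediction, trace=None):
    # Score 1.0 when the predicted label matches the gold label, else 0.0
    return float(example.sentiment == prediction.sentiment)
```

The trainset passed to optimizer.compile would then be a list of dspy.Example objects carrying gold text/sentiment pairs, built with dspy.Example(...).with_inputs('text') per DSPy's Example API.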