# DSPy — Programming Foundation Models Declaratively

> Replace hand-written prompts with modular programs. DSPy compiles declarative AI pipelines into optimized prompts automatically, boosting reliability and performance.

## Install

```bash
pip install dspy
```

## Quick Use

```python
import dspy

lm = dspy.LM("openai/gpt-4o-mini")
dspy.configure(lm=lm)

# Define a simple QA module
qa = dspy.ChainOfThought("question -> answer")
result = qa(question="What is the capital of France?")
print(result.answer)
```

## What is DSPy?

DSPy is a framework that replaces hand-written prompts with modular, compilable programs. Instead of tweaking prompt strings, you define what your AI pipeline should do declaratively — then DSPy optimizes the prompts automatically through compilation. It treats LLM calls as optimizable modules, much as PyTorch treats neural network layers.

**Answer-Ready**: DSPy is a framework for programming (not prompting) LLMs. Define AI pipelines declaratively, compile them into optimized prompts automatically. Created at Stanford NLP. Replaces prompt engineering with systematic optimization. 22k+ GitHub stars.

**Best for**: AI engineers building reliable LLM pipelines.

**Works with**: OpenAI, Anthropic Claude, local models.

**Setup time**: Under 3 minutes.

## Core Concepts

### 1. Signatures (Define I/O)

```python
# Simple inline signature: inputs -> outputs
classify = dspy.Predict("sentence -> sentiment")

# Detailed class-based signature with typed, described fields
class FactCheck(dspy.Signature):
    """Verify a factual claim."""

    claim: str = dspy.InputField(desc="A factual claim to verify")
    evidence: str = dspy.OutputField(desc="Supporting or refuting evidence")
    verdict: str = dspy.OutputField(desc="True, False, or Uncertain")
```
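An inline signature like `"context, question -> answer"` is just a compact description of named input and output fields. As a rough, DSPy-free illustration of that idea (the `parse_signature` helper below is invented for this sketch, not DSPy's actual parser):

```python
def parse_signature(sig: str) -> tuple[list[str], list[str]]:
    """Split an inline signature string into input and output field names."""
    inputs_part, outputs_part = sig.split("->")
    inputs = [name.strip() for name in inputs_part.split(",")]
    outputs = [name.strip() for name in outputs_part.split(",")]
    return inputs, outputs

print(parse_signature("context, question -> answer"))
# (['context', 'question'], ['answer'])
```

The field names on the left become the module's keyword arguments; the names on the right become attributes of its prediction object.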
### 2. Modules (Build Pipelines)

```python
class RAGPipeline(dspy.Module):
    def __init__(self):
        super().__init__()  # required when subclassing dspy.Module
        self.retrieve = dspy.Retrieve(k=5)
        self.generate = dspy.ChainOfThought("context, question -> answer")

    def forward(self, question):
        context = self.retrieve(question).passages
        return self.generate(context=context, question=question)
```

### 3. Optimizers (Compile Prompts)

```python
from dspy.teleprompt import BootstrapFewShot

# Provide training examples
trainset = [
    dspy.Example(question="...", answer="...").with_inputs("question"),
]

# Compile: auto-generate optimized prompts
optimizer = BootstrapFewShot(metric=my_metric, max_bootstrapped_demos=4)
compiled_rag = optimizer.compile(RAGPipeline(), trainset=trainset)
```

### 4. Metrics

```python
def my_metric(example, prediction, trace=None):
    # Case-insensitive exact match on the answer text
    return prediction.answer.strip().lower() == example.answer.strip().lower()
```

## Why DSPy over Prompt Engineering?

| Aspect | Prompt Engineering | DSPy |
|--------|-------------------|------|
| Approach | Manual string tweaking | Declarative programming |
| Optimization | Trial and error | Automatic compilation |
| Reliability | Fragile | Systematic |
| Modularity | Copy-paste | Composable modules |
| Model switching | Rewrite prompts | Recompile |

## FAQ

**Q: Does it work with Claude?**
A: Yes, it supports Anthropic Claude via `dspy.LM("anthropic/claude-sonnet-4-20250514")`.

**Q: How is it different from LangChain?**
A: LangChain chains manually written prompts together. DSPy optimizes prompts automatically through compilation — you define the task, and DSPy figures out the best prompt.

**Q: Is it production-ready?**
A: Yes, it is used in production for RAG, classification, and extraction pipelines.

## Source & Thanks

> Created by [Stanford NLP](https://github.com/stanfordnlp). Licensed under MIT.
>
> [stanfordnlp/dspy](https://github.com/stanfordnlp/dspy) — 22k+ stars
---

Source: https://tokrepo.com/en/workflows/023c142e-eba9-40ff-b598-4c0774814726
Author: Prompt Lab