Haystack — Production RAG & Agent Framework
Build composable AI pipelines for RAG, agents, and search. Model-agnostic, production-ready, by deepset. 18K+ stars.
What it is
Haystack is an open-source framework by deepset for building composable AI pipelines. It supports retrieval-augmented generation (RAG), document search, conversational agents, and custom NLP workflows. Haystack is model-agnostic, working with OpenAI, Anthropic, Cohere, Hugging Face, and local models.
It targets AI engineers and product teams building search, Q&A, or agent systems that need to connect retrievers, generators, and tools into production-ready pipelines.
How it saves time or tokens
Haystack's component-based architecture lets you swap models, retrievers, and stores without rewriting pipeline logic. Built-in evaluation tools measure retrieval quality and generation accuracy so you can tune before shipping. Pipeline YAML serialization means you can version-control and deploy pipeline configurations without code changes. Token usage per pipeline run depends on document length, prompt size, and the number of retrieved passages, so measure it for your own pipelines rather than relying on a fixed estimate.
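Serialized pipelines are plain YAML you can diff and version-control. Roughly the shape `Pipeline.dumps()` produces in Haystack 2.x (field names illustrative; check the output of your installed version):

```yaml
components:
  prompt:
    type: haystack.components.builders.prompt_builder.PromptBuilder
    init_parameters:
      template: |-
        Answer the question based on the context.
        Context: {{context}}
        Question: {{question}}
  llm:
    type: haystack.components.generators.openai.OpenAIGenerator
    init_parameters:
      model: gpt-4o
connections:
  - sender: prompt.prompt
    receiver: llm.prompt
```

Because the component types and init parameters live in the YAML, swapping `OpenAIGenerator` for a local generator is a config change, not a code change.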
How to use
- Install Haystack:
pip install haystack-ai
- Build a simple RAG pipeline:
from haystack import Pipeline
from haystack.components.generators import OpenAIGenerator
from haystack.components.builders import PromptBuilder
template = '''Answer the question based on the context.
Context: {{context}}
Question: {{question}}'''
pipe = Pipeline()
pipe.add_component('prompt', PromptBuilder(template=template))
pipe.add_component('llm', OpenAIGenerator(model='gpt-4o'))
pipe.connect('prompt', 'llm')
result = pipe.run({
    'prompt': {
        'context': 'Haystack is an AI framework by deepset.',
        'question': 'Who built Haystack?'
    }
})
print(result['llm']['replies'][0])
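PromptBuilder renders the Jinja-style `{{name}}` placeholders with the values passed at run time. A naive standalone stand-in for that substitution (illustrative only; Haystack uses real Jinja2 templating):

```python
def render(template, **vars):
    # Naive stand-in for Jinja2 {{name}} substitution.
    out = template
    for name, value in vars.items():
        out = out.replace("{{" + name + "}}", str(value))
    return out

template = '''Answer the question based on the context.
Context: {{context}}
Question: {{question}}'''

print(render(template,
             context="Haystack is an AI framework by deepset.",
             question="Who built Haystack?"))
```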
- Add a retriever for document-based RAG:
from haystack.document_stores.in_memory import InMemoryDocumentStore
from haystack.components.retrievers.in_memory import InMemoryBM25Retriever
store = InMemoryDocumentStore()
retriever = InMemoryBM25Retriever(document_store=store)
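InMemoryBM25Retriever ranks documents with the BM25 term-weighting formula rather than embeddings, so it needs no model or API key. A standalone sketch of the scoring idea (simplified Okapi BM25, not Haystack's actual implementation):

```python
import math
from collections import Counter

def bm25_scores(query, docs, k1=1.5, b=0.75):
    """Score each document against the query with simplified Okapi BM25."""
    tokenized = [d.lower().split() for d in docs]
    avgdl = sum(len(t) for t in tokenized) / len(tokenized)
    n = len(docs)
    scores = []
    for toks in tokenized:
        tf = Counter(toks)
        score = 0.0
        for term in query.lower().split():
            df = sum(1 for t in tokenized if term in t)  # document frequency
            if df == 0:
                continue
            idf = math.log((n - df + 0.5) / (df + 0.5) + 1)
            freq = tf[term]
            # Term frequency saturation (k1) and length normalization (b).
            score += idf * freq * (k1 + 1) / (freq + k1 * (1 - b + b * len(toks) / avgdl))
        scores.append(score)
    return scores

docs = ["Haystack supports RAG pipelines.", "Deepset is based in Berlin."]
print(bm25_scores("What does Haystack support?", docs))
```

The first document scores higher because it shares the term "haystack" with the query; the real retriever adds tokenization and normalization on top of this idea.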
Example
A complete indexing and query pipeline:
from haystack import Document
docs = [
Document(content='Haystack supports RAG pipelines.'),
Document(content='Deepset is based in Berlin.'),
]
store.write_documents(docs)
results = retriever.run(query='What does Haystack support?')
for doc in results['documents']:
    print(doc.content)
Related on TokRepo
- RAG tools — retrieval-augmented generation frameworks and utilities
- AI agent tools — agent frameworks for building autonomous systems
Common pitfalls
- Using InMemoryDocumentStore in production fails under load. Switch to Elasticsearch, Weaviate, or Qdrant for persistent, scalable storage.
- Forgetting to set the OPENAI_API_KEY environment variable causes silent failures. Haystack does not always surface clear error messages for missing credentials.
- Pipeline connections must match component input/output names exactly. A typo in connect() calls produces runtime errors, not compile-time warnings.
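The missing-credentials pitfall is easy to guard against with a plain-Python check at startup (a hypothetical helper, not part of Haystack's API):

```python
import os

def require_env(name):
    """Fail fast with a clear error instead of a silent downstream failure."""
    value = os.environ.get(name)
    if not value:
        raise RuntimeError(f"{name} is not set; export it before running the pipeline")
    return value

# Call once at startup, before constructing the pipeline:
# require_env("OPENAI_API_KEY")
```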
Frequently Asked Questions
How does Haystack compare to LangChain?
Haystack focuses on pipeline-based composition with typed inputs and outputs, making it easier to test and debug individual components. LangChain uses a chain abstraction that offers more flexibility but less structure. Haystack also ships stronger built-in evaluation tools.
Which document stores does Haystack support?
Haystack supports Elasticsearch, OpenSearch, Weaviate, Qdrant, Pinecone, Chroma, pgvector, and an in-memory store. Each has a dedicated DocumentStore integration package.
Can Haystack run local models?
Yes. Haystack integrates with Hugging Face Transformers, Ollama, and vLLM for local inference. Use the corresponding generator component instead of OpenAIGenerator.
Does Haystack support streaming responses?
Yes. Generator components support streaming callbacks, so you can stream tokens to your frontend as they are generated, reducing perceived latency for users.
Is Haystack production-ready?
Yes. Enterprises run Haystack in production for document search, customer support automation, and internal knowledge bases. Version 2.x introduced a more stable API with better type safety and component validation.
Citations (3)
- Haystack GitHub — Haystack is an open-source AI framework by deepset for composable pipelines
- Haystack Documentation — Supports RAG, document search, and agent workflows with multiple model providers
- Haystack Concepts — Component-based pipeline architecture with typed inputs and outputs
Source & Thanks
- GitHub: deepset-ai/haystack
- License: Apache 2.0
- Stars: 18,000+
- Maintainer: deepset GmbH
Thanks to the deepset team for building one of the most production-oriented RAG frameworks, proving that AI pipelines can be both composable and reliable enough for enterprise deployment.
Related Assets
Claude-Flow — Multi-Agent Orchestration for Claude Code
Layers swarm and hive-mind multi-agent orchestration on top of Claude Code with 64 specialized agents, SQLite memory, and parallel execution.
ccusage — Real-Time Token Cost Tracker for Claude Code
CLI that reads ~/.claude logs and breaks down Claude Code token spend by day, session, and project — pluggable into your statusline.
SuperClaude — Workflow Framework for Claude Code
Adds 16+ slash commands, 9 cognitive personas, and a smart flag system to Claude Code in one pipx install.