Skills · Apr 2, 2026 · 3 min read

Haystack — Production RAG & Agent Framework

Build composable AI pipelines for RAG, agents, and search. Model-agnostic, production-ready, by deepset. 18K+ stars.

TL;DR
Haystack builds composable, model-agnostic AI pipelines for retrieval-augmented generation and agent workflows.
§01

What it is

Haystack is an open-source framework by deepset for building composable AI pipelines. It supports retrieval-augmented generation (RAG), document search, conversational agents, and custom NLP workflows. Haystack is model-agnostic, working with OpenAI, Anthropic, Cohere, Hugging Face, and local models.

It targets AI engineers and product teams building search, Q&A, or agent systems that need to connect retrievers, generators, and tools into production-ready pipelines.

§02

How it saves time or tokens

Haystack's component-based architecture lets you swap models, retrievers, and stores without rewriting pipeline logic. Built-in evaluation tools measure retrieval quality and generation accuracy so you can optimize before shipping. Pipeline YAML serialization means you can version-control and deploy pipeline configurations without code changes. Token usage per pipeline run varies with document length, retrieval depth, and prompt template size, so measure on your own corpus rather than relying on a fixed estimate.

§03

How to use

  1. Install Haystack:
pip install haystack-ai
  2. Build a simple RAG pipeline:
from haystack import Pipeline
from haystack.components.generators import OpenAIGenerator
from haystack.components.builders import PromptBuilder

template = '''Answer the question based on the context.
Context: {{context}}
Question: {{question}}'''

pipe = Pipeline()
pipe.add_component('prompt', PromptBuilder(template=template))
pipe.add_component('llm', OpenAIGenerator(model='gpt-4o'))  # reads OPENAI_API_KEY from the environment
pipe.connect('prompt', 'llm')  # PromptBuilder's 'prompt' output feeds the generator's 'prompt' input

result = pipe.run({
    'prompt': {
        'context': 'Haystack is an AI framework by deepset.',
        'question': 'Who built Haystack?'
    }
})
print(result['llm']['replies'][0])
  3. Add a retriever for document-based RAG:
from haystack.document_stores.in_memory import InMemoryDocumentStore
from haystack.components.retrievers.in_memory import InMemoryBM25Retriever

store = InMemoryDocumentStore()
retriever = InMemoryBM25Retriever(document_store=store)
§04

Example

A complete indexing and query pipeline:

from haystack import Document

docs = [
    Document(content='Haystack supports RAG pipelines.'),
    Document(content='Deepset is based in Berlin.'),
]
store.write_documents(docs)

results = retriever.run(query='What does Haystack support?')
for doc in results['documents']:
    print(doc.content)
§05

Related on TokRepo

  • RAG tools — retrieval-augmented generation frameworks and utilities
  • AI agent tools — agent frameworks for building autonomous systems
§06

Common pitfalls

  • Using InMemoryDocumentStore in production fails under load. Switch to Elasticsearch, Weaviate, or Qdrant for persistent, scalable storage.
  • Forgetting to set the OPENAI_API_KEY environment variable causes silent failures. Haystack does not always surface clear error messages for missing credentials.
  • Pipeline connections must match component input/output names exactly. A typo in connect() calls produces runtime errors, not compile-time warnings.

Frequently Asked Questions

How is Haystack different from LangChain?

Haystack focuses on pipeline-based composition with typed inputs and outputs, making it easier to test and debug individual components. LangChain uses a chain abstraction with more flexibility but less structure. Haystack has stronger built-in evaluation tools.

Which vector databases does Haystack support?

Haystack supports Elasticsearch, OpenSearch, Weaviate, Qdrant, Pinecone, Chroma, pgvector, and an in-memory store. Each has a dedicated DocumentStore integration package.

Can Haystack run with local models?

Yes. Haystack integrates with Hugging Face Transformers, Ollama, and vLLM for local inference. Use the corresponding generator component instead of OpenAIGenerator.

Does Haystack support streaming responses?

Yes. Generator components support streaming callbacks. You can stream tokens to your frontend as they are generated, reducing perceived latency for users.

Is Haystack production-ready?

Yes. Haystack is used in production by enterprises for document search, customer support automation, and internal knowledge bases. Version 2.x introduced a more stable API with better type safety and component validation.

Citations (3)
  • Haystack GitHub — Haystack is an open-source AI framework by deepset for composable pipelines
  • Haystack Documentation — Supports RAG, document search, and agent workflows with multiple model providers
  • Haystack Concepts — Component-based pipeline architecture with typed inputs and outputs

Source & Thanks

Thanks to the deepset team for building the most production-oriented RAG framework, proving that AI pipelines can be both composable and reliable enough for enterprise deployment.
