Self-Hosted AI Starter Kit — Local AI with n8n
Docker Compose template by n8n that bootstraps a complete local AI environment with n8n workflow automation, Ollama LLMs, Qdrant vector database, and PostgreSQL. 14,500+ stars.
What it is
The Self-Hosted AI Starter Kit is a Docker Compose template created by n8n that sets up a complete local AI environment. It bundles n8n (workflow automation), Ollama (local LLM inference), Qdrant (vector database), and PostgreSQL into a single deployment. No cloud API keys are required -- everything runs locally on your hardware.
Developers and teams who want to experiment with AI workflows without sending data to external APIs benefit most. The starter kit provides a pre-configured environment for building RAG pipelines, chatbots, document processing, and automation workflows using local models.
How it saves time or tokens
Setting up each component individually (n8n, Ollama, Qdrant, PostgreSQL) and wiring them together takes hours. The starter kit handles networking, volumes, and inter-service connections out of the box. Running models locally eliminates per-token API costs entirely. For teams processing sensitive data, local execution avoids the data-residency and privacy concerns that come with cloud APIs.
How to use
- Clone the starter kit and start the stack:
git clone https://github.com/n8n-io/self-hosted-ai-starter-kit.git
cd self-hosted-ai-starter-kit
docker compose up -d
- Open n8n at http://localhost:5678 and import the example workflows.
- The stack includes:
- n8n on port 5678 (workflow automation)
- Ollama on port 11434 (local LLM)
- Qdrant on port 6333 (vector database)
- PostgreSQL on port 5432 (structured data)
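After 'docker compose up -d', the ports above can be probed with a short standard-library Python check (a connectivity probe only, not a real health check; service names and ports are as listed):

```python
import socket

# Ports exposed by the starter kit stack
SERVICES = {"n8n": 5678, "ollama": 11434, "qdrant": 6333, "postgres": 5432}

def is_listening(port, host="localhost", timeout=1.0):
    """Return True if something accepts TCP connections on host:port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

for name, port in SERVICES.items():
    print(f"{name:10s} port {port}: {'up' if is_listening(port) else 'down'}")
```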
Example
# docker-compose.yml excerpt
services:
  n8n:
    image: n8nio/n8n:latest
    ports:
      - '5678:5678'
    environment:
      - N8N_AI_ENABLED=true
  ollama:
    image: ollama/ollama:latest
    ports:
      - '11434:11434'
    volumes:
      - ollama-data:/root/.ollama
  qdrant:
    image: qdrant/qdrant:latest
    ports:
      - '6333:6333'
    volumes:
      - qdrant-data:/qdrant/storage

volumes:
  ollama-data:
  qdrant-data:

# Pull a model into Ollama (run after the stack is up)
docker exec ollama ollama pull llama3
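Once a model is pulled, any HTTP client can talk to Ollama's /api/generate endpoint. A minimal standard-library sketch (the endpoint and the model/prompt/stream fields are Ollama's documented API; actually sending the request assumes the stack is running with llama3 pulled):

```python
import json
import urllib.request

def build_generate_request(prompt, model="llama3", host="http://localhost:11434"):
    """Build a non-streaming request for Ollama's /api/generate endpoint."""
    payload = {"model": model, "prompt": prompt, "stream": False}
    return urllib.request.Request(
        f"{host}/api/generate",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )

def ask(prompt, **kwargs):
    """Send the request and return the model's reply (needs the stack running)."""
    with urllib.request.urlopen(build_generate_request(prompt, **kwargs)) as resp:
        return json.loads(resp.read())["response"]

req = build_generate_request("Why is the sky blue?")
print(req.full_url)  # http://localhost:11434/api/generate
```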
Related on TokRepo
- Local LLM with Ollama -- Ollama setup and model management
- Self-Hosted Solutions -- Self-hosted AI infrastructure
Common pitfalls
- Ollama requires significant GPU memory for larger models. Llama 3 8B needs at least 8GB VRAM. Without a GPU, inference runs on CPU and is much slower.
- The starter kit uses default credentials. Change passwords for PostgreSQL and n8n before exposing the services beyond localhost.
- Qdrant vector storage grows with the number of embedded documents. Monitor disk usage if you index large document collections.
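For the default-credentials pitfall, set your own values before the first start. A sketch of a .env file (POSTGRES_USER/POSTGRES_PASSWORD/POSTGRES_DB are the standard postgres image variables and N8N_ENCRYPTION_KEY is n8n's; the starter kit's own .env lists the exact names it reads, so treat these as illustrative):

```
# .env -- set before the first `docker compose up`
POSTGRES_USER=n8n
POSTGRES_PASSWORD=change-me-to-a-long-random-string
POSTGRES_DB=n8n
N8N_ENCRYPTION_KEY=generate-a-long-random-key
```

Note that changing PostgreSQL credentials after the data volume has been initialized has no effect; wipe the volume or change the password inside the database instead.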
Frequently Asked Questions
Do I need a GPU to run the starter kit?
A GPU is recommended for reasonable inference speed with Ollama. Without one, models run on CPU and are significantly slower. Small models (around 3B parameters) are usable on CPU; larger models require GPU acceleration.
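To give the Ollama container GPU access (assuming an NVIDIA card with the NVIDIA Container Toolkit installed; the starter kit also ships GPU compose profiles, so check its README for the exact invocation), the standard Compose device reservation looks like:

```yaml
services:
  ollama:
    image: ollama/ollama:latest
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: 1
              capabilities: [gpu]
```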
Which models can I run with Ollama?
Ollama supports a wide range of open-source models, including Llama 3, Mistral, Phi, and Gemma. Run 'ollama pull model-name' to download one. The starter kit does not include a pre-downloaded model.
Is n8n free to self-host?
n8n Community Edition is open source and free for self-hosted use; the Docker image included in the starter kit is the Community Edition. n8n also offers a cloud-hosted version with additional features.
Can I add other services to the stack?
Yes. The Docker Compose file is extensible: add services such as Elasticsearch, Redis, or additional databases to the compose file, and n8n can connect to any service reachable on the Docker network.
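As a sketch of that extensibility, adding a Redis service is a few lines in the compose file (the image tag and published port here are illustrative):

```yaml
services:
  redis:
    image: redis:7-alpine
    ports:
      - '6379:6379'
```

n8n can then reach it at redis:6379 over the shared Docker network; publishing the port is only needed if you also want access from the host.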
What role does Qdrant play?
Qdrant serves as the vector database for RAG (Retrieval-Augmented Generation) workflows. Documents are embedded into vectors, stored in Qdrant, and retrieved by similarity search when the LLM needs context to answer a question.
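The similarity search at the heart of that flow can be illustrated in plain Python (three-dimensional toy vectors stand in for real embeddings; Qdrant performs the same cosine comparison at scale, with indexing):

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Toy "embedded documents" keyed by id
docs = {
    "doc-a": [0.9, 0.1, 0.0],
    "doc-b": [0.1, 0.8, 0.3],
    "doc-c": [0.0, 0.2, 0.9],
}
query = [0.85, 0.15, 0.05]

# Retrieval = pick the stored vector most similar to the query
best = max(docs, key=lambda name: cosine(query, docs[name]))
print(best)  # doc-a
```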
Citations (3)
- n8n Self-Hosted AI Starter Kit GitHub — Self-Hosted AI Starter Kit by n8n with Ollama and Qdrant
- n8n Official Website — n8n workflow automation platform
- Ollama Official Website — Ollama local LLM inference
Source & Thanks
Created by n8n-io. Licensed under Apache 2.0.
self-hosted-ai-starter-kit — ⭐ 14,500+
Thank you to the n8n team for making self-hosted AI infrastructure accessible with one command.