Configs · Apr 5, 2026 · 3 min read

Self-Hosted AI Starter Kit — Local AI with n8n

Docker Compose template by n8n that bootstraps a complete local AI environment with n8n workflow automation, Ollama LLMs, Qdrant vector database, and PostgreSQL. 14,500+ stars.

Quick Use

Use it first, then decide how deep to go

Everything you and your agent need to copy, install, and run first:

```bash
git clone https://github.com/n8n-io/self-hosted-ai-starter-kit.git
cd self-hosted-ai-starter-kit
cp .env.example .env
docker compose --profile cpu up
```

For GPU acceleration:

- **NVIDIA**: `docker compose --profile gpu-nvidia up`
- **AMD**: `docker compose --profile gpu-amd up`
- **Apple Silicon**: `docker compose up` (uses CPU, or connect to local Ollama)

Access n8n at `http://localhost:5678/` after startup.

---
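The quick start copies `.env.example` to `.env`, which still contains placeholder secrets. Before the first `docker compose up`, you can fill them with random values. A minimal sketch, assuming `openssl` is installed and the variable names match the kit's `.env.example` (verify them in your checkout):

```shell
# Generate random secrets and append them to .env
# (a sketch; check variable names against .env.example first)
N8N_ENCRYPTION_KEY=$(openssl rand -hex 32)
N8N_USER_MANAGEMENT_JWT_SECRET=$(openssl rand -hex 32)
POSTGRES_PASSWORD=$(openssl rand -hex 16)

cat >> .env <<EOF
N8N_ENCRYPTION_KEY=$N8N_ENCRYPTION_KEY
N8N_USER_MANAGEMENT_JWT_SECRET=$N8N_USER_MANAGEMENT_JWT_SECRET
POSTGRES_PASSWORD=$POSTGRES_PASSWORD
EOF
```

Changing `N8N_ENCRYPTION_KEY` after n8n has stored credentials will make them unreadable, so set it once before first startup.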
Intro
Self-Hosted AI Starter Kit is an official open-source Docker Compose template by n8n that bootstraps a complete local AI development environment in one command. With 14,500+ GitHub stars and an Apache 2.0 license, it bundles n8n (400+ integrations), Ollama (local LLMs), Qdrant (vector database), and PostgreSQL into a pre-configured stack. Start building AI workflows, RAG pipelines, and chatbots on your own hardware: no cloud APIs needed, and your data never leaves your machine.

Best for: developers and teams who want a self-hosted AI workflow platform with zero cloud dependencies.

Works with: n8n, Ollama, Qdrant, PostgreSQL, any Ollama-supported model.

Setup time: under 5 minutes.

---
## Self-Hosted AI Starter Kit — Architecture & Components

### Included Services

| Service | Purpose | Port |
|---------|---------|------|
| **n8n** | Low-code workflow automation with 400+ integrations and AI components | 5678 |
| **Ollama** | Run LLMs locally (Llama 3, Mistral, CodeLlama, etc.) | 11434 |
| **Qdrant** | High-performance vector database for embeddings and RAG | 6333 |
| **PostgreSQL** | Persistent data storage for n8n workflows and credentials | 5432 |

### What You Can Build

1. **RAG Chatbots** — Upload documents, embed with Ollama, store in Qdrant, query via n8n chat interface
2. **AI Workflow Automation** — Trigger AI tasks from emails, webhooks, schedules, or 400+ app integrations
3. **Document Processing** — Extract, summarize, and classify documents using local LLMs
4. **Code Generation** — Use CodeLlama for automated code review and generation workflows
5. **Data Analysis** — Connect databases to LLMs for natural language data queries

### GPU Support

| Platform | Command | Notes |
|----------|---------|-------|
| CPU only | `docker compose --profile cpu up` | Slower but works everywhere |
| NVIDIA GPU | `docker compose --profile gpu-nvidia up` | Requires NVIDIA Docker runtime |
| AMD GPU | `docker compose --profile gpu-amd up` | Linux only |
| Apple Silicon | `docker compose up` | CPU mode, or use local Ollama |

### Included Workflow Templates

The starter kit comes with pre-built n8n workflow templates:

- AI chatbot with document RAG
- Email classification and auto-response
- Webhook-triggered AI processing

### Configuration

Key `.env` settings:

```bash
# n8n settings
N8N_ENCRYPTION_KEY=your-encryption-key
N8N_USER_MANAGEMENT_JWT_SECRET=your-jwt-secret

# Ollama model (pulled on first start)
OLLAMA_MODEL=llama3

# PostgreSQL
POSTGRES_PASSWORD=your-password
```

### FAQ

**Q: What is the Self-Hosted AI Starter Kit?**
A: A Docker Compose template by n8n that sets up a complete local AI environment with n8n (workflow automation), Ollama (local LLMs), Qdrant (vector DB), and PostgreSQL in one command.

**Q: Is it free?**
A: Yes, fully open-source under Apache 2.0. All included services are free and self-hosted.

**Q: What hardware do I need?**
A: Minimum 8GB RAM for CPU mode. For GPU acceleration, an NVIDIA or AMD GPU with Docker support. Apple Silicon Macs work with CPU mode.

---
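If you want to tweak the stack beyond `.env` (extra ports, resource limits), Docker Compose automatically merges a `docker-compose.override.yml` alongside the main compose file, so you can customize without editing the kit's own files. A hypothetical override, not shipped with the kit; the `postgres` service name is an assumption you should check against the kit's `docker-compose.yml`:

```yaml
# docker-compose.override.yml — hypothetical local override, not part of the kit
services:
  postgres:
    ports:
      - "5432:5432"   # publish Postgres on the host for tools like psql
```

Because the override lives in a separate file, it survives `git pull` updates to the starter kit.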
🙏

Source & Thanks

> Created by [n8n-io](https://github.com/n8n-io). Licensed under Apache 2.0.
>
> [self-hosted-ai-starter-kit](https://github.com/n8n-io/self-hosted-ai-starter-kit) — ⭐ 14,500+

Thank you to the n8n team for making self-hosted AI infrastructure accessible with one command.
