# Self-Hosted AI Starter Kit — Local AI with n8n

> Docker Compose template by n8n that bootstraps a complete local AI environment with n8n workflow automation, Ollama LLMs, Qdrant vector database, and PostgreSQL. 14,500+ stars.

## Quick Use

```bash
git clone https://github.com/n8n-io/self-hosted-ai-starter-kit.git
cd self-hosted-ai-starter-kit
cp .env.example .env
docker compose --profile cpu up
```

For GPU acceleration:

- **NVIDIA**: `docker compose --profile gpu-nvidia up`
- **AMD**: `docker compose --profile gpu-amd up`
- **Apple Silicon**: `docker compose up` (runs Ollama on the CPU, or connect to a locally installed Ollama)

Access n8n at `http://localhost:5678/` after startup.

---

## Intro

Self-Hosted AI Starter Kit is an official open-source Docker Compose template by n8n that bootstraps a complete local AI development environment in one command. With 14,500+ GitHub stars and an Apache 2.0 license, it bundles n8n (400+ integrations), Ollama (local LLMs), Qdrant (vector database), and PostgreSQL into a pre-configured stack. Start building AI workflows, RAG pipelines, and chatbots on your own hardware — no cloud APIs needed, and your data stays on your machine.

Best for: developers and teams who want a self-hosted AI workflow platform with zero cloud dependencies.

Works with: n8n, Ollama, Qdrant, PostgreSQL, and any Ollama-supported model.

Setup time: under 5 minutes.

---

## Self-Hosted AI Starter Kit — Architecture & Components

### Included Services

| Service | Purpose | Port |
|---------|---------|------|
| **n8n** | Low-code workflow automation with 400+ integrations and AI components | 5678 |
| **Ollama** | Run LLMs locally (Llama 3, Mistral, CodeLlama, etc.) | 11434 |
| **Qdrant** | High-performance vector database for embeddings and RAG | 6333 |
| **PostgreSQL** | Persistent data storage for n8n workflows and credentials | 5432 |

### What You Can Build

1. **RAG Chatbots** — Upload documents, embed with Ollama, store in Qdrant, query via the n8n chat interface
2. **AI Workflow Automation** — Trigger AI tasks from emails, webhooks, schedules, or 400+ app integrations
3. **Document Processing** — Extract, summarize, and classify documents using local LLMs
4. **Code Generation** — Use CodeLlama for automated code review and generation workflows
5. **Data Analysis** — Connect databases to LLMs for natural-language data queries

### GPU Support

| Platform | Command | Notes |
|----------|---------|-------|
| CPU only | `docker compose --profile cpu up` | Slower, but works everywhere |
| NVIDIA GPU | `docker compose --profile gpu-nvidia up` | Requires the NVIDIA Container Toolkit (Docker runtime) |
| AMD GPU | `docker compose --profile gpu-amd up` | Linux only |
| Apple Silicon | `docker compose up` | CPU mode, or use a locally installed Ollama |

### Included Workflow Templates

The starter kit ships with pre-built n8n workflow templates:

- AI chatbot with document RAG
- Email classification and auto-response
- Webhook-triggered AI processing

### Configuration

Key `.env` settings:

```bash
# n8n settings (use long random values, e.g. from `openssl rand -hex 32`)
N8N_ENCRYPTION_KEY=your-encryption-key
N8N_USER_MANAGEMENT_JWT_SECRET=your-jwt-secret

# Ollama model (pulled on first start)
OLLAMA_MODEL=llama3

# PostgreSQL
POSTGRES_PASSWORD=your-password
```

### FAQ

**Q: What is the Self-Hosted AI Starter Kit?**
A: A Docker Compose template by n8n that sets up a complete local AI environment with n8n (workflow automation), Ollama (local LLMs), Qdrant (vector DB), and PostgreSQL in one command.

**Q: Is it free?**
A: Yes, fully open source under the Apache 2.0 license. All included services are free and self-hosted.

**Q: What hardware do I need?**
A: A minimum of 8 GB RAM for CPU mode. For GPU acceleration, an NVIDIA or AMD GPU with Docker support. Apple Silicon Macs work in CPU mode.

---

## Source & Thanks

> Created by [n8n-io](https://github.com/n8n-io). Licensed under Apache 2.0.
>
> [self-hosted-ai-starter-kit](https://github.com/n8n-io/self-hosted-ai-starter-kit) — ⭐ 14,500+

Thank you to the n8n team for making self-hosted AI infrastructure accessible with one command.

---

Source: https://tokrepo.com/en/workflows/92d3cc62-6199-4b1c-a7f1-1b73a1da86a0
Author: TokRepo精选
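---

## Post-Install Smoke Test

Not part of the upstream kit: once the stack is up, a short Python sketch can confirm that each service is listening on its default port. The ports come from the services table above; the `localhost` binding is an assumption based on this document's defaults.

```python
import socket

# Default ports from the starter kit's services table
# (assumed to be published on localhost by the compose file).
SERVICES = {
    "n8n": 5678,
    "Ollama": 11434,
    "Qdrant": 6333,
    "PostgreSQL": 5432,
}


def service_up(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # connection refused, timed out, or unresolvable host
        return False


if __name__ == "__main__":
    for name, port in SERVICES.items():
        state = "up" if service_up("localhost", port) else "DOWN"
        print(f"{name:>10} :{port:<5} {state}")
```

Run it after `docker compose up` finishes pulling images; any `DOWN` line usually means the corresponding container is still starting or has failed, and `docker compose ps` shows which.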