Configs · Apr 5, 2026 · 3 min read

Self-Hosted AI Starter Kit — Local AI with n8n

Docker Compose template by n8n that bootstraps a complete local AI environment with n8n workflow automation, Ollama LLMs, Qdrant vector database, and PostgreSQL. 14,500+ stars.

TL;DR
Docker Compose template that bootstraps a local AI stack with n8n workflow automation, Ollama, Qdrant, and PostgreSQL.
§01

What it is

The Self-Hosted AI Starter Kit is a Docker Compose template created by n8n that sets up a complete local AI environment. It bundles n8n (workflow automation), Ollama (local LLM inference), Qdrant (vector database), and PostgreSQL into a single deployment. No cloud API keys are required; everything runs locally on your hardware.

Developers and teams who want to experiment with AI workflows without sending data to external APIs benefit most. The starter kit provides a pre-configured environment for building RAG pipelines, chatbots, document processing, and automation workflows using local models.

§02

How it saves time or tokens

Setting up each component individually (n8n, Ollama, Qdrant, PostgreSQL) and wiring them together takes hours of configuration. The starter kit handles networking, volumes, and inter-service connections out of the box. Running models locally eliminates per-token API costs entirely. For teams processing sensitive data, local execution avoids the data-residency and privacy concerns that come with cloud APIs.
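The per-token saving is easy to quantify. As a rough sketch, assuming a placeholder price of $1.00 per million tokens (not any provider's actual rate):

```python
# Back-of-envelope comparison of metered cloud API cost vs. local inference.
# The per-million-token price used below is an ASSUMED placeholder value;
# substitute your provider's real pricing.

def monthly_api_cost(tokens_per_month: int, usd_per_million_tokens: float) -> float:
    """Cost of pushing tokens_per_month through a metered cloud API."""
    return tokens_per_month / 1_000_000 * usd_per_million_tokens

# Example: 50M tokens/month at an assumed $1.00 per million tokens.
cloud = monthly_api_cost(50_000_000, 1.00)
local = 0.0  # local Ollama inference carries no per-token charge

print(f"cloud: ${cloud:.2f}/mo, local: ${local:.2f}/mo")
```

The trade, of course, is that local inference pays in hardware and electricity instead of per-token fees.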

§03

How to use

  1. Clone the starter kit and start the stack:
git clone https://github.com/n8n-io/self-hosted-ai-starter-kit.git
cd self-hosted-ai-starter-kit
docker compose up -d
  2. Open n8n at http://localhost:5678 and import the example workflows.
  3. The stack includes:
  • n8n on port 5678 (workflow automation)
  • Ollama on port 11434 (local LLM)
  • Qdrant on port 6333 (vector database)
  • PostgreSQL on port 5432 (structured data)
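Once the stack is up, the other services talk to Ollama over its HTTP API on port 11434. As a minimal sketch, a call to Ollama's documented /api/generate endpoint can be assembled like this (the helper name is illustrative; actually sending the request requires the stack to be running):

```python
import json
from urllib import request

# Ollama's generate endpoint on the default port from the stack above.
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_generate_request(model: str, prompt: str) -> request.Request:
    """Build (but do not send) a POST request for Ollama's /api/generate."""
    body = json.dumps({"model": model, "prompt": prompt, "stream": False})
    return request.Request(
        OLLAMA_URL,
        data=body.encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_generate_request("llama3", "Why is the sky blue?")
# With the stack running: request.urlopen(req) returns the model's reply.
```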
§04

Example

# docker-compose.yml excerpt
services:
  n8n:
    image: n8nio/n8n:latest
    ports:
      - '5678:5678'
    environment:
      - N8N_AI_ENABLED=true

  ollama:
    image: ollama/ollama:latest
    ports:
      - '11434:11434'
    volumes:
      - ollama-data:/root/.ollama

  qdrant:
    image: qdrant/qdrant:latest
    ports:
      - '6333:6333'
    volumes:
      - qdrant-data:/qdrant/storage

# Named volumes must be declared at the top level,
# or Compose rejects the file
volumes:
  ollama-data:
  qdrant-data:

# Pull a model into Ollama (run after the stack is up)
docker exec ollama ollama pull llama3
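With Qdrant running on port 6333, documents are stored via its points-upsert endpoint (PUT /collections/&lt;name&gt;/points). A minimal sketch of the request body, assuming an illustrative collection and a toy 4-dimensional vector (real embeddings are far larger):

```python
# Sketch of the JSON body for Qdrant's points-upsert REST endpoint.
# The point id, vector, and payload text below are illustrative only.

def upsert_body(point_id: int, vector: list[float], text: str) -> dict:
    """Build the request body that stores one embedded document in Qdrant."""
    return {
        "points": [
            {"id": point_id, "vector": vector, "payload": {"text": text}}
        ]
    }

body = upsert_body(1, [0.1, 0.2, 0.3, 0.4], "hello world")
# PUT this body to http://localhost:6333/collections/<name>/points
# once a collection with a matching vector size exists.
```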
§05

Common pitfalls

  • Ollama requires significant GPU memory for larger models. Llama 3 8B needs at least 8GB VRAM. Without a GPU, inference runs on CPU and is much slower.
  • The starter kit uses default credentials. Change passwords for PostgreSQL and n8n before exposing the services beyond localhost.
  • Qdrant vector storage grows with the number of embedded documents. Monitor disk usage if you index large document collections.

Frequently Asked Questions

Do I need a GPU to run this starter kit?

A GPU is recommended for reasonable inference speed with Ollama. Without a GPU, models run on CPU and are significantly slower. Small models (3B parameters) are usable on CPU; larger models require GPU acceleration.

What models can I use with Ollama?

Ollama supports a wide range of open-source models, including Llama 3, Mistral, Phi, and Gemma. Run 'ollama pull model-name' (or 'docker exec ollama ollama pull model-name' when Ollama runs in the container, as in this stack) to download one. The starter kit does not include a pre-downloaded model.

Is n8n free?

n8n Community Edition is open source and free for self-hosted use. The Docker image included in the starter kit is the community edition. n8n also offers a cloud-hosted version with additional features.

Can I add more services to the stack?

Yes. The Docker Compose file is extensible. Add services like Elasticsearch, Redis, or additional databases by adding them to the compose file. n8n can connect to any service accessible on the Docker network.
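For instance, a Redis cache could be sketched as one more entry under the existing services: key (the image tag and published port here are illustrative, not part of the starter kit):

```yaml
# Hypothetical addition to docker-compose.yml: a Redis cache that n8n
# can reach by the hostname "redis" on the shared Docker network.
  redis:
    image: redis:7-alpine
    ports:
      - '6379:6379'
```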

What is Qdrant used for in this stack?

Qdrant serves as the vector database for RAG (Retrieval Augmented Generation) workflows. Documents are embedded into vectors, stored in Qdrant, and retrieved by similarity search when the LLM needs context to answer questions.
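The similarity search behind that retrieval step typically scores vectors by cosine similarity. A minimal sketch of the scoring function (Qdrant computes this internally; the toy 2-dimensional vectors are illustrative):

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Score used for vector search: 1.0 for identical directions,
    0.0 for orthogonal (unrelated) vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# A query vector is compared against every stored document vector; the
# highest-scoring documents become context for the LLM's answer.
print(cosine_similarity([1.0, 0.0], [1.0, 0.0]))  # identical -> 1.0
```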


Source & Thanks

Created by n8n-io. Licensed under Apache 2.0.

self-hosted-ai-starter-kit — ⭐ 14,500+

Thank you to the n8n team for making self-hosted AI infrastructure accessible with one command.
