Best Self-Hosted AI Tools (2026)
Run AI locally with full privacy. Open-source LLMs, chat interfaces, knowledge bases, and development tools you can self-host on your own infrastructure.
Self-Hosted AI Starter Kit — Local AI with n8n
Docker Compose template by n8n that bootstraps a complete local AI environment with n8n workflow automation, Ollama LLMs, Qdrant vector database, and PostgreSQL. 14,500+ stars.
Continue — Open-Source AI Code Assistant
Open-source AI code assistant for VS Code and JetBrains. Tab autocomplete, chat, inline editing with any model — OpenAI, Anthropic, Ollama, or self-hosted.
Ollama Model Library — Best AI Models for Local Use
Curated guide to the best models available on Ollama for coding, chat, and reasoning. Compare Llama, Mistral, Gemma, Phi, and Qwen models for local AI development.
OpenCode — Open-Source AI Coding Agent for Terminal
Open-source AI coding agent with 140K+ stars. TUI-first design, LSP integration, works with Claude, OpenAI, Google, or local models. Two built-in agents: build and plan. MIT license.
Immich — High-Performance Self-Hosted Photo & Video Management
Immich is an open-source Google Photos alternative with auto-backup, AI-powered search, face recognition, and mobile apps — self-hosted for complete privacy.
Actual Budget — Local-First Personal Finance App
Actual is an open-source personal finance app with envelope budgeting, bank sync, multi-device sync, and local-first architecture — a YNAB alternative.
AFFiNE — Open-Source Notion Alternative
Docs, whiteboards, and databases in one privacy-first workspace. Local-first with real-time collaboration. 66K+ GitHub stars.
Devika — Open Source AI Software Engineer
Open-source AI software engineer that plans, researches, and writes code autonomously. Supports Claude, GPT, and local models with browser and terminal access.
Documenso — Open Source Document Signing Platform
Documenso is an open-source DocuSign alternative for self-hosted document signing with PDF e-signatures, audit trails, and Next.js stack.
Linkwarden — Self-Hosted Collaborative Bookmark Manager
Linkwarden is an open-source bookmark manager that saves, organizes, and preserves web pages with full-page screenshots, PDF snapshots, and collaborative collections.
Firefly III — Self-Hosted Personal Finance Manager
Firefly III is an open-source personal finance manager for tracking expenses, budgets, and bank accounts. Self-hosted with full privacy, multi-currency, and powerful reporting.
Logseq — Privacy-First Knowledge Management Platform
Logseq is a privacy-first, open-source platform for knowledge management and collaboration. Outliner-based with bidirectional links, block references, queries, and graph visualization. Local-first with optional sync. The open-source alternative to Roam Research.
Nextcloud — Self-Hosted Cloud Platform for Files, Calendar & More
Nextcloud is the leading open-source cloud platform for file sync, sharing, calendar, contacts, email, video calls, and office collaboration — a Google Workspace alternative.
Authentik — Open Source Identity Provider & SSO Platform
Authentik is a flexible open-source identity provider with SSO, MFA, user enrollment flows, and application proxy — the authentication glue for your self-hosted stack.
Stirling PDF — Self-Hosted PDF Editor & Toolkit
Stirling PDF is the #1 open-source PDF tool on GitHub. Merge, split, convert, compress, OCR, sign, and edit PDFs — all self-hosted with no data leaving your server.
Mattermost — Open Source Slack Alternative for Team Collaboration
Mattermost is an open-source messaging platform for secure team collaboration. Channels, threads, voice/video calls, playbooks, and integrations — self-hosted Slack alternative.
Jan — Offline AI Desktop App with Full Privacy
Jan is an open-source ChatGPT alternative that runs LLMs locally with full privacy. 41.4K+ GitHub stars. Desktop app for Windows/macOS/Linux, OpenAI-compatible API, MCP support. Apache 2.0.
Audiobookshelf — Self-Hosted Audiobook & Podcast Server
Audiobookshelf is an open-source audiobook and podcast server with progress sync, chapter navigation, mobile apps, and multi-user support — a self-hosted Audible alternative.
Langfuse — Open Source LLM Observability
Langfuse is an open-source LLM engineering platform for tracing, prompt management, evaluation, and debugging AI apps. 24.1K+ GitHub stars. Self-hosted or cloud. MIT.
Open WebUI — Self-Hosted AI Chat Interface
User-friendly, self-hosted AI chat interface. Supports Ollama, OpenAI, Anthropic, and any OpenAI-compatible API. RAG, web search, voice, image gen, and plugins. 129K+ stars.
LibreTranslate — Self-Hosted Translation API with No Rate Limits
LibreTranslate is a self-hostable translation API powered by open-source Argos Translate models. No API keys, no rate limits, no data sent to third parties — a drop-in replacement for Google Translate when privacy matters.
Mealie — Self-Hosted Recipe Manager & Meal Planner
Mealie is an open-source recipe management app with URL import, meal planning, shopping lists, and family sharing. Beautiful UI for organizing your kitchen.
Paperless-ngx — Self-Hosted Document Management with OCR
Paperless-ngx is an open-source document management system that scans, OCRs, indexes, and archives all your physical and digital documents for full-text search.
Verba — The Golden RAGtriever by Weaviate
Verba is an open-source RAG (Retrieval-Augmented Generation) chatbot from the Weaviate team. Drop in PDFs, web pages, or notes; pick a model (OpenAI, Ollama, Anthropic); and get a polished chat UI with semantic search built in.
LocalAI — Run Any AI Model Locally, No GPU
LocalAI is an open-source AI engine running LLMs, vision, voice, and image models locally. 44.6K+ GitHub stars. OpenAI/Anthropic-compatible API, 35+ backends, MCP, agents. MIT licensed.
Ollama — Run LLMs Locally
Run large language models locally on your machine. Supports Llama 3, Mistral, Gemma, Phi, and dozens more. One-command install, OpenAI-compatible API.
Ghost — Professional Publishing Platform for Modern Journalism
Ghost is an open-source publishing platform built for professional publishers. It bundles a blazing-fast Node.js CMS, Substack-style paid memberships, email newsletters, and SEO — everything a modern publication needs, self-hosted.
Jellyfin — Free Software Media System
Jellyfin is a free, open-source media server for streaming movies, TV shows, music, and live TV. Self-hosted Plex/Emby alternative with no premium or tracking.
Databasus — Self-Hosted Database Backup Tool with Web Dashboard
An open-source database backup tool supporting PostgreSQL, MySQL/MariaDB, and MongoDB. Provides scheduled backups, S3-compatible storage, encryption, and a clean web UI for managing backup jobs.
The Self-Hosted AI Stack
Self-hosted AI has matured from a hobbyist pursuit to an enterprise requirement. Privacy regulations, data sovereignty laws, and the desire for predictable costs drive organizations to run AI on their own infrastructure.
Local LLM Inference — Ollama, Jan, and GPT4All make running models like Llama, Mistral, and Qwen as simple as installing an app, with support for GPU acceleration, quantization, and model management.
Chat Interfaces — Open WebUI, LibreChat, LobeChat, and AnythingLLM provide ChatGPT-like interfaces for your self-hosted models, with conversation history, file upload, RAG integration, and multi-model switching.
Knowledge Bases — Onyx, Quivr, and PrivateGPT let you build private RAG systems over your documents — no data leaves your servers.
Development Tools — Tabby (self-hosted Copilot), SearXNG (private search), and Puter (cloud desktop) provide developer infrastructure without external dependencies. TokRepo hosts deployment configs and Docker Compose files for the entire self-hosted AI stack.
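The inference and chat layers of such a stack can be sketched as a minimal Docker Compose file. This is an illustrative sketch, not the actual TokRepo config: the port mappings and volume name are assumptions, though the images and the OLLAMA_BASE_URL variable are the documented ones.

```yaml
# Minimal sketch: Ollama for inference + Open WebUI as the chat front end.
# Host ports and the volume name are illustrative assumptions.
services:
  ollama:
    image: ollama/ollama:latest
    volumes:
      - ollama-data:/root/.ollama
    ports:
      - "11434:11434"   # Ollama's default API port

  open-webui:
    image: ghcr.io/open-webui/open-webui:main
    environment:
      # Point the UI at the ollama service on the Compose network
      - OLLAMA_BASE_URL=http://ollama:11434
    ports:
      - "3000:8080"     # browse to http://localhost:3000
    depends_on:
      - ollama

volumes:
  ollama-data:
```

Adding a vector database (e.g. Qdrant, as in n8n's starter kit above) for RAG is one more service block in the same file.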
Self-hosting AI isn't about avoiding costs — it's about owning your intelligence infrastructure.
Frequently Asked Questions
What hardware do I need to self-host AI?
It depends on the model size. For 7B parameter models (good for most tasks): 16GB RAM + any modern GPU with 8GB VRAM. For 70B models (GPT-4 class): 64GB RAM + GPU with 48GB VRAM (A6000 or dual 3090). For CPU-only inference: Ollama with quantized models runs on any modern laptop, just slower. Apple Silicon Macs with 32GB+ unified memory are excellent for local AI.
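The rule of thumb behind these numbers is that weight memory scales with parameter count times bits per weight, plus headroom for the KV cache and activations. A rough sketch (the 1.2× overhead factor is an assumption for illustration, not a published constant):

```python
def estimate_vram_gb(params_billion: float, bits: int = 4, overhead: float = 1.2) -> float:
    """Rough VRAM estimate for LLM inference.

    Weights take params * (bits / 8) bytes; the overhead factor
    (assumed ~20%) covers KV cache and activations.
    """
    weight_gb = params_billion * bits / 8  # 1e9 params * bytes/param / 1e9 bytes
    return weight_gb * overhead

print(estimate_vram_gb(7))            # 7B at 4-bit ≈ 4.2 GB: fits an 8GB GPU
print(estimate_vram_gb(70))           # 70B at 4-bit ≈ 42 GB: needs ~48GB VRAM
print(estimate_vram_gb(7, bits=16))   # 7B at fp16 ≈ 16.8 GB: why quantization matters
```

The same arithmetic explains why Apple Silicon works well: unified memory lets the GPU address the full 32GB+ pool.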
Is self-hosted AI as good as cloud APIs?
For many tasks, yes. Open-source models like Llama 3.1 70B and Qwen 2.5 72B match GPT-4 on coding, analysis, and general reasoning. They fall short on the most complex multi-step reasoning and creative tasks where Claude Opus or GPT-4o still lead. The gap narrows every quarter. For most business applications, self-hosted models are "good enough" with dramatically better privacy and cost.
What is the easiest way to start with self-hosted AI?
Install Ollama (one command on Mac/Linux/Windows), pull a model ("ollama pull llama3.1"), then install Open WebUI for a ChatGPT-like interface. Total setup time: under 10 minutes. TokRepo hosts Docker Compose configs that bundle Ollama + Open WebUI + RAG pipeline into a single deployment.
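Because Ollama exposes an OpenAI-compatible API, any OpenAI client can talk to it once it's running. A minimal sketch of the request body such a client POSTs to the local endpoint (the model name assumes you already ran "ollama pull llama3.1"):

```python
import json

# Request body for Ollama's OpenAI-compatible endpoint:
# POST http://localhost:11434/v1/chat/completions
# "llama3.1" assumes the model was pulled with `ollama pull llama3.1`.
payload = {
    "model": "llama3.1",
    "messages": [
        {"role": "user", "content": "Summarize self-hosted AI in one line."}
    ],
    "stream": False,  # set True for token-by-token streaming
}
body = json.dumps(payload)
print(body)
```

Open WebUI, Continue, and most tools on this page send essentially this shape, which is why they can switch between cloud APIs and local models with a one-line base-URL change.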