Configs · Apr 2, 2026 · 3 min read

Open WebUI — Self-Hosted AI Chat Platform

Feature-rich, offline-capable AI interface for Ollama, OpenAI, and local LLMs. Built-in RAG, voice, model builder. 130K+ stars.

AI · Open Source · Community
Quick Use

Use it first, then decide how deep to go

Start here: copy one of the install options below and run it before digging into configuration.

# Quickest: pip install
pip install open-webui
open-webui serve

# With Docker (recommended for production)
docker run -d -p 3000:8080 \
  -v open-webui:/app/backend/data \
  --name open-webui \
  ghcr.io/open-webui/open-webui:main

Open http://localhost:3000 and create your admin account. Connect Ollama or add your OpenAI/Anthropic API keys in Settings.
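Connections can also be preconfigured when the container starts, via environment variables, instead of through the Settings UI. A hedged sketch (the variable names `OLLAMA_BASE_URL` and `OPENAI_API_KEY` are taken from the project's configuration docs; verify them against your version):

```shell
# Point the container at a host-side Ollama instance and
# preload an OpenAI API key (replace the placeholder value).
docker run -d -p 3000:8080 \
  -e OLLAMA_BASE_URL=http://host.docker.internal:11434 \
  -e OPENAI_API_KEY=sk-your-key-here \
  -v open-webui:/app/backend/data \
  --name open-webui \
  ghcr.io/open-webui/open-webui:main
```

On Linux, `host.docker.internal` may need `--add-host=host.docker.internal:host-gateway`.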

Introduction

Open WebUI is the most popular self-hosted AI chat interface, providing a polished ChatGPT-like experience for any LLM. It works completely offline with local models via Ollama, or connects to cloud APIs like OpenAI, Anthropic, and Google.

Core capabilities:

  • Multi-Model Chat — Switch between local (Ollama, llama.cpp) and cloud (OpenAI, Anthropic) models in the same conversation
  • Built-in RAG — Upload documents (PDF, DOCX, TXT) and chat with them using integrated vector search. Supports 9 vector database backends
  • Model Builder — Create custom Modelfiles from the web UI. Fine-tune system prompts, parameters, and personas
  • Voice & Video — Built-in speech-to-text and text-to-speech. Have voice conversations with your AI
  • Python Function Tools — Extend model capabilities with custom Python functions that run server-side
  • Multi-User Support — Role-based access control, user management, and chat history per user
  • Plugin System — Pipelines architecture for adding custom processing, filters, and integrations
  • Offline-First — Runs entirely on your machine or server. No data leaves your infrastructure
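To make the Python function tools concrete: a server-side tool is roughly a plain Python class whose typed, documented methods the model can invoke. The `Tools` class name and docstring-driven convention below are a sketch of the format as commonly described; check the current Tools documentation before relying on it:

```python
# Hedged sketch of an Open WebUI-style server-side tool:
# a plain Python class whose methods a model can call.
from datetime import datetime, timezone


class Tools:
    def current_utc_time(self) -> str:
        """Return the current UTC time as an ISO 8601 string."""
        return datetime.now(timezone.utc).isoformat()

    def word_count(self, text: str) -> int:
        """Count whitespace-separated words in the given text."""
        return len(text.split())
```

Because it is ordinary Python, a tool like this can be unit-tested outside the server before you upload it.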

130,000+ GitHub stars. 280+ million Docker pulls. The de facto standard for self-hosted AI chat.

FAQ

Q: How does Open WebUI compare to LibreChat? A: Both are self-hosted chat UIs. Open WebUI has deeper Ollama integration, built-in RAG, voice features, and a model builder. LibreChat focuses on multi-provider API routing. Open WebUI has a substantially larger star count and more community activity.

Q: Do I need a GPU to run it? A: Open WebUI itself doesn't need a GPU — it's just the web interface. If you use cloud APIs (OpenAI, etc.), no GPU needed at all. For local models via Ollama, a GPU significantly improves inference speed.

Q: Can I use it for a team? A: Yes. It supports multi-user mode with admin, user, and pending roles. Each user gets their own chat history and settings. LDAP and OAuth authentication are available.

Q: Is it really free? A: Yes, fully open source and free to self-host. There's an optional cloud plan for managed hosting, but the self-hosted version has all features.

Works With

  • Ollama for local model inference
  • OpenAI / Anthropic / Google / any OpenAI-compatible API
  • Docker / Kubernetes for deployment
  • ChromaDB / Milvus / Qdrant / pgvector for RAG
  • Whisper for speech-to-text
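Since Open WebUI itself exposes an OpenAI-compatible chat API, other tools can talk to it like any provider. A minimal sketch of assembling such a request (the `/api/chat/completions` path and bearer-token auth are assumptions drawn from the project docs; adjust for your deployment):

```python
# Hedged sketch: build the pieces of an OpenAI-style chat request
# aimed at a local Open WebUI instance. Nothing is sent here; pass
# the result to any HTTP client.
import json


def build_chat_request(model: str, prompt: str, api_key: str) -> dict:
    """Return url, headers, and a JSON body for a chat completion call."""
    return {
        "url": "http://localhost:3000/api/chat/completions",
        "headers": {
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        "body": json.dumps({
            "model": model,
            "messages": [{"role": "user", "content": prompt}],
        }),
    }
```

The API key comes from Settings → Account in the web UI.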

Source & Thanks

  • GitHub: open-webui/open-webui
  • License: Open WebUI License (MIT-based with branding clause)
  • Stars: 130,000+
  • Maintainer: Timothy Baek & Open WebUI community

Thanks to Timothy Baek and the Open WebUI community for building the definitive self-hosted AI interface, making it possible for anyone to run a private, feature-rich AI chat platform on their own hardware.
