## Key Features
### Multi-Model Support
Use any model from any provider in one interface:
| Provider | Models |
|---|---|
| Anthropic | Claude Opus, Sonnet, Haiku |
| OpenAI | GPT-4o, o1, GPT-3.5 |
| Google | Gemini 2.5 Pro, Flash |
| Ollama | Llama 3, Mistral, Codestral |
| Azure | Azure OpenAI |
| AWS | Bedrock models |
Switch models mid-conversation or compare responses side-by-side.
### Plugin System
Install plugins from the marketplace:
- Web search (Tavily, Google)
- Code interpreter
- Image generation (DALL-E, Midjourney)
- Weather, calculator, translator
- Custom plugins via OpenAPI spec
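A custom plugin is described by an OpenAPI spec that tells the model which endpoints it can call. The sketch below is illustrative only: the plugin name, host, and `/price` endpoint are hypothetical, not part of any real marketplace plugin.

```yaml
openapi: 3.0.0
info:
  title: Stock Price Plugin   # hypothetical example plugin
  version: 1.0.0
servers:
  - url: https://example.com/api   # placeholder host
paths:
  /price:
    get:
      operationId: getStockPrice
      summary: Look up the latest price for a ticker symbol
      parameters:
        - name: symbol
          in: query
          required: true
          schema:
            type: string
      responses:
        "200":
          description: Latest price as JSON
```

The `operationId` and parameter descriptions matter most, since the model uses them to decide when and how to invoke the plugin.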
### Knowledge Base (RAG)
Upload documents and chat with them:
- PDF, DOCX, TXT, Markdown
- Automatic chunking and embedding
- Citation tracking
- Multi-document conversations
### Text-to-Speech
Built-in TTS with multiple voices:
- OpenAI TTS
- Edge TTS (free)
- ElevenLabs
### Beautiful UI
- Dark/light mode
- Mobile responsive
- Custom themes
- Conversation folders
- Markdown rendering with code highlighting
### Self-Hosting Options
```shell
# Docker
docker run -d -p 3210:3210 lobehub/lobe-chat
```
```shell
# Docker Compose (with database)
docker compose up -d
```
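For reference, a database-backed deployment might look roughly like the compose sketch below. The service names, image tags, and environment variables here are assumptions for illustration; use the compose file shipped in the LobeChat repository for a real deployment.

```yaml
# Illustrative sketch only — not the official compose file.
services:
  lobe-chat:
    image: lobehub/lobe-chat
    ports:
      - "3210:3210"
    environment:
      - OPENAI_API_KEY=sk-...   # your own provider key
  postgres:
    image: postgres:16
    environment:
      - POSTGRES_PASSWORD=change-me
```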
```shell
# Vercel (one-click)
# Use the "Deploy to Vercel" button on GitHub
```

## Key Stats
- 55,000+ GitHub stars
- 50+ model providers
- Plugin marketplace
- RAG knowledge base
- Self-hostable with Docker
## FAQ
Q: What is LobeChat?
A: LobeChat is an open-source chat UI supporting 50+ AI model providers with plugins, knowledge base, TTS, and self-hosting — like a self-hosted ChatGPT Pro.
Q: Is LobeChat free?
A: Yes, fully open-source under Apache 2.0. Self-host for free with your own API keys.
Q: Can I use LobeChat with local models?
A: Yes, connect Ollama at http://localhost:11434 and use any local model.
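Before pointing LobeChat at Ollama, you can confirm the local server is reachable using Ollama's own HTTP API (this assumes you have already run `ollama pull llama3` so the model is available locally):

```shell
# Query the Ollama server directly on its default port.
# A JSON response here means LobeChat will be able to reach it too.
curl http://localhost:11434/api/generate -d '{
  "model": "llama3",
  "prompt": "Say hello in one word.",
  "stream": false
}'
```

If the request fails, check that the Ollama service is running and that nothing else is bound to port 11434.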