Key Features
- Multiple backends: llama.cpp, Transformers, ExLlamaV3, TensorRT-LLM
- 100% offline: Zero telemetry, no external requests
- Multimodal vision: Image attachments and analysis
- LoRA training: Fine-tune on chat and text datasets
- Image generation: Built-in support for diffusers models
- OpenAI-compatible API: Tool-calling support
- Extension system: Built-in and community extensions
FAQ
Q: What is Text Generation WebUI?
A: A Gradio-based web interface with 46.4K+ GitHub stars for running LLMs locally. It supports multiple backends, vision input, LoRA training, image generation, and an OpenAI-compatible API, and it runs 100% offline with zero telemetry.
Q: How do I install it?
A: Download a portable build from the GitHub releases page (no setup required), or clone the repository and run `python server.py`. The interface is served at `localhost:7860`.
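The OpenAI-compatible API mentioned above can be exercised with a short Python sketch. This is a minimal example, not the project's documented client: the endpoint URL, port, and model name below are assumptions, so adjust them to match the flags your server was started with.

```python
import json

# Assumed local endpoint following the OpenAI chat-completions convention;
# the host, port, and path may differ depending on your server configuration.
API_URL = "http://127.0.0.1:5000/v1/chat/completions"

def build_chat_request(user_message, model="local-model", temperature=0.7):
    """Build an OpenAI-style chat completion payload as a JSON string.

    `model` is a placeholder name -- a local server typically serves
    whatever model is currently loaded regardless of this field.
    """
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": user_message}],
        "temperature": temperature,
    }
    return json.dumps(payload)

# To actually send the request (requires a running server with the API enabled):
# import urllib.request
# req = urllib.request.Request(
#     API_URL,
#     data=build_chat_request("Hello!").encode(),
#     headers={"Content-Type": "application/json"},
# )
# print(urllib.request.urlopen(req).read().decode())
```

Because the request body follows the OpenAI schema, the same payload works with the official `openai` Python client pointed at the local base URL.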