Scripts · Apr 14, 2026 · 3 min read

Stable Diffusion Web UI by AUTOMATIC1111 — The Definitive Local AI Image Generator

AUTOMATIC1111's Stable Diffusion Web UI is the most popular interface for running Stable Diffusion locally. It supports text-to-image, image-to-image, inpainting, ControlNet, LoRA, embeddings, extensions, and all major model variants (SD 1.5, SD 2.x, SDXL) — all in a self-hosted browser UI.

TL;DR
AUTOMATIC1111's Web UI runs Stable Diffusion locally with full control over models, LoRAs, and extensions.
§01

What it is

AUTOMATIC1111's Stable Diffusion Web UI is a browser-based interface for running Stable Diffusion models locally. It supports text-to-image generation, image-to-image transformation, inpainting, ControlNet integration, LoRA model loading, textual inversion embeddings, and a large ecosystem of community extensions. Everything runs on your own GPU with no cloud dependency.

The project targets artists, designers, and developers who want full control over their image generation pipeline without per-image API costs or content restrictions imposed by hosted services.

§02

How it saves time or tokens

Running Stable Diffusion locally eliminates per-image costs from services like DALL-E or Midjourney. A single GPU can generate hundreds of images per hour at zero marginal cost after the initial setup. The extension ecosystem means features like ControlNet, ADetailer, and batch processing are available without building custom pipelines. The Web UI stores all generation parameters in PNG metadata, making it easy to reproduce or iterate on results.
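That metadata lives in a PNG text chunk named "parameters", which any image library can read back. A minimal sketch using Pillow — here the chunk is written by hand to stand in for a real Web UI output, but reading it works the same way on generated images:

```python
import io
from PIL import Image
from PIL.PngImagePlugin import PngInfo

# The Web UI writes generation settings into a PNG tEXt chunk keyed
# "parameters". Simulate one, then read it back as you would from a
# real output image.
meta = PngInfo()
meta.add_text(
    "parameters",
    "a serene mountain lake at sunset, photorealistic, 8k\n"
    "Negative prompt: blurry, low quality, watermark\n"
    "Steps: 30, Sampler: DPM++ 2M Karras, CFG scale: 7",
)

buf = io.BytesIO()
Image.new("RGB", (64, 64)).save(buf, format="PNG", pnginfo=meta)
buf.seek(0)

# Recover the full generation parameters from the PNG:
params = Image.open(buf).text.get("parameters", "")
print(params.splitlines()[0])
```

Dropping a generated PNG into the Web UI's "PNG Info" tab does the same extraction without code.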

§03

How to use

  1. Clone the repository and launch:
git clone https://github.com/AUTOMATIC1111/stable-diffusion-webui
cd stable-diffusion-webui
./webui.sh  # Linux/macOS
# Windows: run webui-user.bat
  2. Open http://127.0.0.1:7860 in your browser
  3. Download a model checkpoint (e.g., Stable Diffusion XL) and place it in models/Stable-diffusion/
  4. Enter a prompt and click Generate:
Prompt: a serene mountain lake at sunset, photorealistic, 8k
Negative: blurry, low quality, watermark
Steps: 30, Sampler: DPM++ 2M Karras, CFG Scale: 7
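Launch options can be made persistent by putting them in webui-user.sh (or webui-user.bat on Windows) rather than typing them on every start. A minimal sketch — the flags shown are real Web UI command-line options, but which ones you want depends on your setup:

```shell
# webui-user.sh — persistent launch options, read by webui.sh.
# On Windows, set the equivalent in webui-user.bat with:
#   set COMMANDLINE_ARGS=...
#
# --api      enable the REST API at /sdapi/v1/
# --listen   expose the UI on your LAN instead of localhost only
# --xformers memory-efficient attention on NVIDIA GPUs
export COMMANDLINE_ARGS="--api --xformers"
```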
§04

Example

# Launch with API access enabled for programmatic use
./webui.sh --api

# Generate via API
curl -X POST http://127.0.0.1:7860/sdapi/v1/txt2img \
  -H 'Content-Type: application/json' \
  -d '{
    "prompt": "cyberpunk city at night, neon lights, rain",
    "negative_prompt": "blurry, low quality",
    "steps": 30,
    "width": 1024,
    "height": 1024,
    "sampler_name": "DPM++ 2M Karras"
  }'
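The same endpoint is easy to call from a script. A sketch of a stdlib-only Python client, assuming the default address and an `--api` launch; the API returns generated images as base64-encoded PNG strings under an "images" key:

```python
import base64
import json
import urllib.request

API_URL = "http://127.0.0.1:7860"  # default Web UI address

def decode_images(body: dict) -> list[bytes]:
    # The API returns images as base64-encoded PNG strings.
    return [base64.b64decode(b64) for b64 in body.get("images", [])]

def txt2img(prompt: str, steps: int = 30) -> list[bytes]:
    """POST to /sdapi/v1/txt2img and return raw PNG bytes per image."""
    payload = json.dumps({"prompt": prompt, "steps": steps}).encode()
    req = urllib.request.Request(
        f"{API_URL}/sdapi/v1/txt2img",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return decode_images(json.load(resp))
```

With a server running, `txt2img("a red bicycle")[0]` yields PNG bytes you can write straight to disk.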
§05


Common pitfalls

  • Requires a GPU with at least 6GB VRAM for SD 1.5 and 8GB+ for SDXL; CPU-only generation is technically possible but impractically slow
  • Extensions can conflict with each other; install one at a time and test before adding more
  • Model checkpoints are large (2-7GB each); ensure sufficient disk space before downloading multiple models

Frequently Asked Questions

What GPU do I need to run Stable Diffusion Web UI?

For Stable Diffusion 1.5 models, a GPU with 6GB VRAM (like NVIDIA GTX 1060 6GB) is the minimum. For SDXL models, 8GB or more is recommended (RTX 3060 12GB or better). AMD GPUs are supported on Linux via ROCm but with reduced feature compatibility.
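If your card sits near these minimums, the Web UI's memory-saving launch flags can make the difference. A sketch of the common tiers — the flag names come from the project's command-line options, and the VRAM ranges are rough guidance:

```shell
# Memory-saving launch options, by available VRAM (pick one tier):
./webui.sh --medvram             # ~6-8GB: split model between VRAM and RAM
./webui.sh --lowvram             # ~4GB: aggressive offloading, much slower
./webui.sh --xformers --medvram  # add memory-efficient attention on NVIDIA
```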

How does the Web UI compare to ComfyUI?

AUTOMATIC1111 Web UI provides a traditional form-based interface that is easier for beginners. ComfyUI uses a node-based workflow graph that offers more flexibility for complex pipelines. The Web UI has a larger extension ecosystem, while ComfyUI is better for advanced users who want visual workflow composition.

Can I use LoRA models with the Web UI?

Yes. Place LoRA files in the models/Lora directory. Reference them in your prompt with the syntax '<lora:model_name:weight>' where weight is typically between 0.5 and 1.0. Multiple LoRAs can be combined in a single generation.

Is there an API for batch generation?

Yes. Launch with the --api flag to enable a REST API at /sdapi/v1/. The API supports txt2img, img2img, and all major features. This is useful for integrating Stable Diffusion into automated pipelines or building custom frontends.
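For a batch run, the simplest pattern is one request body per prompt with shared settings. A sketch of a hypothetical helper (`batch_payloads` is not part of the API — each returned body would be POSTed to /sdapi/v1/txt2img as in the example above):

```python
import json

def batch_payloads(prompts: list[str], **overrides) -> list[str]:
    """Build one txt2img JSON request body per prompt, sharing settings."""
    base = {"steps": 20, "width": 512, "height": 512}
    base.update(overrides)  # per-batch tweaks, e.g. steps=30
    return [json.dumps({**base, "prompt": p}) for p in prompts]

bodies = batch_payloads(["misty forest", "desert dunes"], steps=30)
```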

How do I install extensions?

Go to the Extensions tab in the Web UI, click 'Install from URL', and paste the GitHub repository URL of the extension. Alternatively, use the 'Available' tab to browse and install popular extensions directly. Restart the Web UI after installation.
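Extensions are ordinary git repositories cloned into the extensions/ folder, so a manual command-line install also works. A sketch, using the ControlNet extension's repository as the example:

```shell
# Manual install: extensions are plain git repos under extensions/
cd stable-diffusion-webui/extensions
git clone https://github.com/Mikubill/sd-webui-controlnet
# Restart the Web UI so the new extension is loaded
```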
