Configs · Mar 31, 2026 · 2 min read

Unsloth — 2x Faster Local LLM Training & Inference


TL;DR
Unsloth accelerates LLM fine-tuning by 2x while cutting VRAM usage by 70%, supporting 500+ models with a simple Web UI or CLI.
§01

What it is

Unsloth is a unified local interface for running and training AI models. It provides up to 2x faster training with 70% less VRAM usage across 500+ models including Qwen, DeepSeek, Llama, and Gemma. It includes a web UI with one-click fine-tuning, a CLI for automated workflows, and full compatibility with the Hugging Face ecosystem.

It targets ML engineers and developers who want to fine-tune LLMs on consumer GPUs without expensive cloud compute, and researchers who need faster iteration cycles.

§02

How it saves time or tokens

Unsloth's memory optimizations let you fine-tune models that would otherwise require multiple expensive GPUs on a single consumer GPU. A model that needs 48GB VRAM with standard training may need only 14GB with Unsloth. This means you can fine-tune on an RTX 4090 instead of renting an A100, saving significant compute costs.
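The savings come mostly from 4-bit weight quantization (QLoRA) plus training only small LoRA adapters instead of the full weights. A rough back-of-envelope sketch (illustrative numbers only, not Unsloth's exact accounting; real usage also depends on sequence length, batch size, and activation memory):

```python
# Rough VRAM estimate for fine-tuning a 7B-parameter model (illustrative only).
def full_finetune_gb(params_b: float) -> float:
    # fp16 weights (2 bytes) + fp16 grads (2) + fp32 Adam moments (8) per param
    return params_b * (2 + 2 + 8)

def qlora_gb(params_b: float, adapter_frac: float = 0.01) -> float:
    # 4-bit base weights (0.5 bytes/param); the frozen base needs no grads or
    # optimizer state. Only the LoRA adapters (~1% of params here, an assumed
    # figure) carry fp16 weights, grads, and Adam state.
    base = params_b * 0.5
    adapters = params_b * adapter_frac * (2 + 2 + 8)
    return base + adapters

print(f"full fine-tune ~{full_finetune_gb(7):.0f} GB")  # ~84 GB
print(f"QLoRA          ~{qlora_gb(7):.1f} GB")          # ~4.3 GB
```

The exact numbers vary by setup, but the shape of the saving is the same: the optimizer state for billions of frozen parameters simply disappears.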

§03

How to use

  1. Install:
curl -fsSL https://unsloth.ai/install.sh | sh

Or via pip:

pip install unsloth
  2. Fine-tune a model:
from unsloth import FastLanguageModel

# Load the base model in 4-bit (QLoRA) to minimize VRAM
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name='unsloth/Llama-3.2-3B-Instruct',
    max_seq_length=2048,
    load_in_4bit=True,
)

# Attach LoRA adapters to the attention projections
model = FastLanguageModel.get_peft_model(
    model, r=16, lora_alpha=16,
    target_modules=['q_proj', 'k_proj', 'v_proj', 'o_proj'],
)

# Train with your dataset using the standard Hugging Face Trainer
  3. Or use the web UI for no-code fine-tuning.
§04

Example

Metric | Standard training | Unsloth
Training speed | 1x | 2x
VRAM usage | 100% | 30%
Max model on RTX 4090 | 7B | 20B+
Cost (cloud A100, $3/hr) | full run time | half the run time (~half the cost)
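The cost row follows directly from the speedup: the A100's hourly rate is unchanged, but the run finishes in half the time. A quick sanity check with a hypothetical 10-hour baseline run:

```python
# Hypothetical cost comparison for a run that takes 10 hours at 1x speed.
rate_per_hr = 3.00      # cloud A100 rate in USD (illustrative)
baseline_hours = 10.0
speedup = 2.0           # Unsloth's claimed training speedup

baseline_cost = rate_per_hr * baseline_hours
unsloth_cost = rate_per_hr * (baseline_hours / speedup)

print(baseline_cost, unsloth_cost)  # 30.0 15.0
```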
§05

Common pitfalls

  • Unsloth optimizations are specific to certain model architectures. Check compatibility before starting a training run with a new model.
  • 4-bit training (QLoRA) reduces VRAM usage further but may slightly affect model quality compared to full-precision LoRA.
  • The web UI is convenient for getting started but the Python API provides more control for advanced training configurations.

Frequently Asked Questions

Which GPUs does Unsloth support?

Unsloth supports NVIDIA GPUs with CUDA (RTX 3060 and newer are recommended). Apple Silicon support is available through the MLX backend. AMD GPUs have experimental support via ROCm. The VRAM savings are most impactful on consumer GPUs like the RTX 4090 where memory is limited.

Does Unsloth support LoRA and QLoRA?

Yes. Unsloth fully supports LoRA (Low-Rank Adaptation) and QLoRA (Quantized LoRA) training methods. QLoRA combines 4-bit quantization with LoRA to minimize VRAM usage. Both methods produce models compatible with the standard Hugging Face ecosystem.
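To see why LoRA trains so few parameters, count the adapter weights for the r=16 config shown earlier: each targeted d×d projection gains two low-rank matrices, A (r×d) and B (d×r). A rough count for a Llama-3.2-3B-sized model (hidden size 3072 and 28 layers are approximate, and square projections are assumed for simplicity, which ignores grouped-query attention's smaller k/v dims):

```python
# Approximate LoRA trainable-parameter count (illustrative assumptions above).
def lora_params(d: int, r: int, n_layers: int, n_proj: int) -> int:
    per_proj = r * d * 2          # A is (r, d) and B is (d, r)
    return per_proj * n_proj * n_layers

n = lora_params(d=3072, r=16, n_layers=28, n_proj=4)
print(f"{n:,} trainable params")  # 11,010,048
```

That is about 11M trainable parameters, well under 1% of a 3B-parameter model, which is why the gradients and optimizer state fit so easily.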

Can I export Unsloth-trained models to GGUF?

Yes. Unsloth can export trained models to GGUF format for use with llama.cpp, Ollama, and other inference engines. This lets you train with Unsloth and deploy with your preferred serving solution. The export handles quantization and format conversion automatically.

Is Unsloth free?

Unsloth has an open-source version that is free for personal and commercial use. A Pro version offers additional features like longer context support, more model architectures, and priority support. The free version covers most common fine-tuning use cases.

How does Unsloth achieve 2x speedup?

Unsloth uses custom CUDA kernels optimized for transformer attention patterns, intelligent memory management that reduces fragmentation, and efficient gradient checkpointing. These optimizations are applied automatically when you load a model through Unsloth's API. No manual tuning is needed.


Source & Thanks

Created by Unsloth AI. Licensed under Apache 2.0 / AGPL-3.0. unslothai/unsloth — 58,700+ GitHub stars
