# Unsloth — 2x Faster Local LLM Training & Inference

> Unsloth is a unified local interface for running and training AI models. 58.7K+ GitHub stars. 2x faster training with 70% less VRAM across 500+ models including Qwen, DeepSeek, Llama, and Gemma. Web UI (Unsloth Studio) for local inference and training.

## Install

```bash
# macOS / Linux / WSL
curl -fsSL https://unsloth.ai/install.sh | sh

# Windows PowerShell
irm https://unsloth.ai/install.ps1 | iex

# Or via pip (training library only)
pip install unsloth
```

---

## Intro

Unsloth is a unified local interface for running and fine-tuning open-source AI models. With 58,700+ GitHub stars, it provides a web UI (Unsloth Studio) for model inference and training on your own hardware. Unsloth achieves 2x faster training with up to 70% reduced VRAM across 500+ supported models, including Qwen 3.5, DeepSeek, Llama 3.1/3.2, Gemma, Mistral, and Phi-4. It supports the GGUF, LoRA, and safetensors formats, with features like automatic data recipes, tool calling, code execution sandboxes, and multi-GPU training.
**Best for**: Developers and researchers fine-tuning LLMs locally without expensive cloud GPUs

**Works with**: Claude Code, OpenAI Codex, Cursor, Gemini CLI, Windsurf

**Platforms**: macOS, Linux, Windows (WSL), multi-GPU setups

---

## Key Features

- **2x faster training**: Up to 70% reduced VRAM across 500+ models
- **Unsloth Studio**: Web UI for local model management, inference, and training
- **Data recipes**: Automatically create training datasets from PDF, CSV, and DOCX files
- **Model support**: Qwen 3.5, DeepSeek, gpt-oss, Llama 3.1/3.2, Gemma, Mistral, Phi-4
- **Reinforcement learning**: GRPO with 80% less VRAM
- **Multi-format**: GGUF, LoRA, and safetensors download and execution
- **Tool calling**: Self-healing tool use with a code execution sandbox
- **Multi-file upload**: Images, audio, PDFs, and DOCX for multimodal workflows

---

## Agent Integration

AI coding agents can use Unsloth to set up local model training pipelines, configure fine-tuning jobs, and manage model inference. Agents can automate dataset preparation using data recipes and orchestrate training workflows across multiple GPUs.

---

## FAQ

**Q: What is Unsloth?**
A: Unsloth is an open-source tool with 58.7K+ stars for running and training AI models locally. It provides 2x faster training with 70% less VRAM, a web UI, and supports 500+ models including Qwen, DeepSeek, Llama, and Gemma.

**Q: How do I install Unsloth?**
A: Run `curl -fsSL https://unsloth.ai/install.sh | sh` on macOS/Linux, or `irm https://unsloth.ai/install.ps1 | iex` on Windows. For the Python library only: `pip install unsloth`.

**Q: Which models does Unsloth support?**
A: Qwen 3.5, DeepSeek, gpt-oss, Llama 3.1/3.2, Gemma, Mistral, Phi-4, and 500+ other open-source models in GGUF, LoRA, and safetensors formats.

---

## Source & Thanks

> Created by [Unsloth AI](https://github.com/unslothai). Licensed under Apache 2.0 / AGPL-3.0.
> [unslothai/unsloth](https://github.com/unslothai/unsloth) — 58,700+ GitHub stars

---

Source: https://tokrepo.com/en/workflows/a69b498a-76d7-4cb4-b4fd-d4006a89b5a0
Author: AI Open Source