# LitGPT — Fine-Tune and Deploy AI Models Simply

> Lightning AI's framework for fine-tuning and serving 20+ LLM families. LitGPT supports LoRA, QLoRA, and full fine-tuning, with one-command training on consumer hardware.

## Install

```bash
pip install litgpt
```

## Quick Use

```bash
# Download a model
litgpt download meta-llama/Llama-3.1-8B-Instruct

# Chat locally
litgpt chat meta-llama/Llama-3.1-8B-Instruct

# Fine-tune with LoRA
litgpt finetune_lora meta-llama/Llama-3.1-8B-Instruct \
  --data JSON --data.json_path training_data.json

# Serve as API
litgpt serve meta-llama/Llama-3.1-8B-Instruct
```

## What is LitGPT?

LitGPT is Lightning AI's framework for fine-tuning, pretraining, and deploying large language models. It supports 20+ model families (Llama, Mistral, Phi, Gemma, etc.) with multiple fine-tuning methods (LoRA, QLoRA, full). One-command workflows make it accessible on consumer GPUs, and it is built on PyTorch Lightning for scalable training.

**Answer-Ready**: LitGPT is Lightning AI's LLM fine-tuning and serving framework. Supports 20+ model families; LoRA/QLoRA/full fine-tuning; one-command training on consumer GPUs. Built on PyTorch Lightning. Download, fine-tune, and serve with simple CLI commands. 10k+ GitHub stars.

**Best for**: ML engineers fine-tuning open-source models on their own hardware.

**Works with**: Llama, Mistral, Phi, Gemma, Falcon, and 15+ other model families.

**Setup time**: Under 5 minutes.

## Core Features

### 1. One-Command Workflows

```bash
litgpt download       # Download model
litgpt chat           # Interactive chat
litgpt finetune_lora  # LoRA fine-tuning
litgpt finetune_full  # Full fine-tuning
litgpt serve          # API server
litgpt evaluate       # Benchmark evaluation
litgpt pretrain       # Pretrain from scratch
```
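The LoRA command in Quick Use reads its training set from `training_data.json`. A minimal sketch of producing that file with Python's standard library — the records here are hypothetical placeholders; the `instruction`/`input`/`output` key layout follows the training-data format documented later on this page:

```python
import json

# Hypothetical instruction-tuning records in the instruction/input/output
# layout that `litgpt finetune_lora --data JSON` expects.
records = [
    {
        "instruction": "Summarize this text",
        "input": "LitGPT is a framework for fine-tuning and serving LLMs...",
        "output": "A framework for fine-tuning and serving LLMs.",
    },
    {
        "instruction": "Translate to French",
        "input": "Hello world",
        "output": "Bonjour le monde",
    },
]

# Sanity check: every record carries the three expected keys.
required = {"instruction", "input", "output"}
for rec in records:
    missing = required - rec.keys()
    assert not missing, f"record missing keys: {missing}"

with open("training_data.json", "w", encoding="utf-8") as f:
    json.dump(records, f, ensure_ascii=False, indent=2)
```

The resulting file can be passed directly via `--data.json_path training_data.json`.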
### 2. Supported Model Families (20+)

| Family | Models |
|--------|--------|
| Meta | Llama 3.1, Llama 3.2, CodeLlama |
| Mistral | Mistral 7B, Mixtral |
| Google | Gemma, Gemma 2 |
| Microsoft | Phi-3, Phi-3.5 |
| Alibaba | Qwen 2.5 |
| TII | Falcon |
| StabilityAI | StableLM |

### 3. Fine-Tuning Methods

| Method | VRAM Needed | Quality |
|--------|-------------|---------|
| QLoRA (4-bit) | 6GB | Good |
| Adapter | 8GB | Good |
| LoRA | 12GB | Very good |
| Full fine-tune | 40GB+ | Best |

### 4. Training Data Format

```json
[
  {"instruction": "Summarize this text", "input": "Long article...", "output": "Brief summary..."},
  {"instruction": "Translate to French", "input": "Hello world", "output": "Bonjour le monde"}
]
```

### 5. Multi-GPU Training

```bash
# 4-GPU training with FSDP
litgpt finetune_lora meta-llama/Llama-3.1-8B-Instruct \
  --devices 4 --strategy fsdp
```

## FAQ

**Q: Can I fine-tune on a single consumer GPU?**
A: Yes. QLoRA needs only 6GB of VRAM, so a 7B model fine-tunes on an RTX 3060.

**Q: How does it compare to Unsloth?**
A: Unsloth is faster for single-GPU LoRA. LitGPT offers more model families, multi-GPU training, and full pretraining support.

**Q: Can I serve the fine-tuned model?**
A: Yes, `litgpt serve` exposes an OpenAI-compatible API endpoint.
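The FAQ notes that `litgpt serve` exposes an OpenAI-compatible endpoint. A hedged client sketch using only the standard library — the host, port, and `/v1/chat/completions` path are assumptions following the OpenAI API convention, not verified LitGPT defaults:

```python
import json
import urllib.request


def build_chat_request(prompt: str, base_url: str = "http://localhost:8000"):
    """Build an OpenAI-style chat-completions request.

    The base URL, port, and endpoint path are assumptions based on the
    OpenAI API convention; check your `litgpt serve` output for the
    actual address.
    """
    payload = {
        "model": "meta-llama/Llama-3.1-8B-Instruct",
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        f"{base_url}/v1/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )


req = build_chat_request("Summarize LitGPT in one sentence.")
# With a live server:  urllib.request.urlopen(req)  returns the JSON reply,
# whose text sits at choices[0]["message"]["content"] in the OpenAI schema.
```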
## Source & Thanks

> Created by [Lightning AI](https://github.com/Lightning-AI). Licensed under Apache 2.0.
>
> [Lightning-AI/litgpt](https://github.com/Lightning-AI/litgpt) — 10k+ stars

---

Source: https://tokrepo.com/en/workflows/cd28bce4-e8ac-4e69-b06d-e580e7d33f75
Author: Prompt Lab