# Axolotl — Streamlined LLM Fine-Tuning

> Axolotl streamlines post-training and fine-tuning for LLMs. 11.6K+ GitHub stars. LoRA, QLoRA, DPO, GRPO, multimodal training. Single YAML config. Flash Attention, multi-GPU. Apache 2.0.

## Quick Use

```bash
# Install
pip3 install "axolotl[flash-attn,deepspeed]"

# Fetch example configs
axolotl fetch examples

# Fine-tune Llama 3 with LoRA
axolotl train examples/llama-3/lora-1b.yml

# Or use Docker
docker run --gpus all axolotlai/axolotl train examples/llama-3/lora-1b.yml
```

---

## Intro

Axolotl is a free, open-source tool for streamlined post-training and fine-tuning of large language models. With 11,600+ GitHub stars and an Apache 2.0 license, it supports full fine-tuning, LoRA, QLoRA, GPTQ, DPO, GRPO, and reward modeling across GPT-OSS, LLaMA, Mistral, Mixtral, and Hugging Face models. Axolotl handles multimodal training (vision-language and audio models), Flash Attention, multipacking, sequence parallelism, and multi-GPU/multi-node training — all configured through a single YAML file.

**Best for**: ML engineers who want simple, configurable LLM fine-tuning without writing training loops

**Works with**: Claude Code, OpenAI Codex, Cursor, Gemini CLI, Windsurf

**Requirements**: NVIDIA Ampere+ or AMD GPU, Python 3.11+

---

## Key Features

- **Single YAML config**: Entire pipeline from data preprocessing to inference
- **Training methods**: Full fine-tune, LoRA, QLoRA, GPTQ, QAT, DPO, GRPO
- **Multimodal**: Fine-tune vision-language (Qwen2-VL, LLaVA) and audio models
- **Performance**: Flash Attention, multipacking, sequence parallelism
- **Multi-GPU/node**: Distributed training with DeepSpeed and FSDP
- **50+ model architectures**: LLaMA, Mistral, Mixtral, Pythia, and more

---

### FAQ

**Q: What is Axolotl?**
A: Axolotl is an LLM fine-tuning tool with 11.6K+ stars. Single YAML config for LoRA, QLoRA, DPO, GRPO, multimodal training. Multi-GPU support. Apache 2.0.
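To illustrate the single-YAML workflow, here is a minimal LoRA fine-tuning config sketch. The field names follow Axolotl's config schema; the base model, dataset, hyperparameter values, and output path are placeholders chosen for illustration, not recommendations:

```yaml
# Sketch of an Axolotl LoRA config (save as e.g. lora.yml).
# Model and dataset below are illustrative placeholders.
base_model: NousResearch/Meta-Llama-3-8B  # any Hugging Face model id
load_in_8bit: true                        # quantize the frozen base model

datasets:
  - path: teknium/GPT4-LLM-Cleaned        # example instruction dataset
    type: alpaca                          # prompt format of the dataset

adapter: lora
lora_r: 16
lora_alpha: 32
lora_dropout: 0.05
lora_target_modules:
  - q_proj
  - v_proj

sequence_len: 4096
sample_packing: true      # multipacking: pack short samples into one sequence
flash_attention: true

micro_batch_size: 2
gradient_accumulation_steps: 4
num_epochs: 1
learning_rate: 0.0002
output_dir: ./outputs/lora-llama3
```

Training then runs with `axolotl train lora.yml`; the same file drives the other pipeline stages (`axolotl preprocess`, `axolotl inference`), which is what "entire pipeline from a single YAML config" refers to.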
**Q: How do I install Axolotl?**
A: Run `pip3 install "axolotl[flash-attn,deepspeed]"`. Then run `axolotl train config.yml` to start fine-tuning.

---

## Source & Thanks

> Created by [Axolotl AI](https://github.com/axolotl-ai-cloud). Licensed under Apache 2.0.
> [axolotl-ai-cloud/axolotl](https://github.com/axolotl-ai-cloud/axolotl) — 11,600+ GitHub stars

---

Source: https://tokrepo.com/en/workflows/ae1cbf21-fc40-43c7-bba7-e598af255458
Author: Script Depot