Key Features
- Single YAML config: Entire pipeline from data preprocessing to inference
- Training methods: full fine-tuning, LoRA, QLoRA, GPTQ, QAT, DPO, GRPO
- Multimodal: Fine-tune vision-language (Qwen2-VL, LLaVA) and audio models
- Performance: Flash Attention, multipacking, sequence parallelism
- Multi-GPU/node: Distributed training with DeepSpeed and FSDP
- 50+ model architectures: LLaMA, Mistral, Mixtral, Pythia, and more
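To illustrate the single-YAML workflow, here is a minimal LoRA fine-tuning config sketch. Key names follow Axolotl's published example configs, but the model name, dataset path, and hyperparameter values below are placeholders for illustration; verify them against the current Axolotl config schema before use:

```yaml
# Minimal LoRA fine-tune sketch (illustrative; check keys against Axolotl docs)
base_model: NousResearch/Llama-2-7b-hf   # placeholder base model

datasets:
  - path: mhenrichsen/alpaca_2k_test     # placeholder dataset
    type: alpaca                         # prompt format

adapter: lora
lora_r: 16
lora_alpha: 32
lora_dropout: 0.05
lora_target_modules:
  - q_proj
  - v_proj

sequence_len: 2048
micro_batch_size: 2
gradient_accumulation_steps: 4
num_epochs: 1
learning_rate: 0.0002
output_dir: ./outputs/lora-out
```

The same file format extends to preprocessing and inference options, which is what allows one config to describe the whole pipeline.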
FAQ
Q: What is Axolotl? A: Axolotl is an open-source LLM fine-tuning tool (Apache 2.0, 11.6K+ GitHub stars). A single YAML config drives LoRA, QLoRA, DPO, GRPO, and multimodal training, with multi-GPU support.
Q: How do I install Axolotl?
A: Run `pip3 install axolotl[flash-attn,deepspeed]`, then start fine-tuning with `axolotl train config.yml`.
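The install and train commands above, written out with the extras quoted so that shells like zsh do not try to expand the square brackets (the quoting is a general pip convention, not specific to Axolotl):

```shell
# Install Axolotl with the optional flash-attn and deepspeed extras
pip3 install "axolotl[flash-attn,deepspeed]"

# Launch fine-tuning from a YAML config file
axolotl train config.yml
```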