Scripts · April 1, 2026 · 1 min read

Axolotl — Streamlined LLM Fine-Tuning

Axolotl streamlines post-training and fine-tuning for LLMs: LoRA, QLoRA, DPO, GRPO, and multimodal training, all driven by a single YAML config, with Flash Attention and multi-GPU support. 11.6K+ GitHub stars, Apache 2.0.

TokRepo Picks · Community
Quick Start

Try it first; decide whether to dig deeper later.

This section tells both users and agents what to copy first, what to install, and where it ends up.

# Install
pip3 install "axolotl[flash-attn,deepspeed]"

# Fetch example configs
axolotl fetch examples

# Fine-tune Llama 3 with LoRA
axolotl train examples/llama-3/lora-1b.yml

# Or use Docker
docker run --gpus all axolotlai/axolotl train examples/llama-3/lora-1b.yml

Introduction

Axolotl is a free, open-source tool for streamlined post-training and fine-tuning of large language models. With 11,600+ GitHub stars and Apache 2.0 license, it supports full fine-tuning, LoRA, QLoRA, GPTQ, DPO, GRPO, and reward modeling across GPT-OSS, LLaMA, Mistral, Mixtral, and Hugging Face models. Axolotl handles multimodal training (vision-language and audio models), Flash Attention, multipacking, sequence parallelism, and multi-GPU/multi-node training — all configured through a single YAML file.
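To make the single-YAML workflow concrete, here is a minimal LoRA config sketch. The field names follow the conventions of Axolotl's bundled example configs, but the specific model, dataset, and hyperparameter values below are illustrative placeholders, not a recommended recipe:

```yaml
# Minimal LoRA fine-tuning sketch (illustrative values;
# see axolotl's bundled examples for complete, tested configs)
base_model: NousResearch/Meta-Llama-3-8B   # placeholder model
datasets:
  - path: teknium/GPT4-LLM-Cleaned         # placeholder dataset
    type: alpaca

adapter: lora
lora_r: 16
lora_alpha: 32
lora_dropout: 0.05
lora_target_linear: true

sequence_len: 2048
micro_batch_size: 2
gradient_accumulation_steps: 4
num_epochs: 1
learning_rate: 2e-4
optimizer: adamw_torch

output_dir: ./outputs/lora-out
```

With a file like this saved as config.yml, the entire run reduces to `axolotl train config.yml`.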

Best for: ML engineers who want simple, configurable LLM fine-tuning without writing training loops
Works with: Claude Code, OpenAI Codex, Cursor, Gemini CLI, Windsurf
Requirements: NVIDIA Ampere+ or AMD GPU, Python 3.11+


Key Features

  • Single YAML config: Entire pipeline from data preprocessing to inference
  • Training methods: Full fine-tune, LoRA, QLoRA, GPTQ, QAT, DPO, GRPO
  • Multimodal: Fine-tune vision-language (Qwen2-VL, LLaVA) and audio models
  • Performance: Flash Attention, multipacking, sequence parallelism
  • Multi-GPU/node: Distributed training with DeepSpeed and FSDP
  • 50+ model architectures: LLaMA, Mistral, Mixtral, Pythia, and more
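Multi-GPU training is also configured declaratively. As a hedged sketch: Axolotl ships DeepSpeed preset JSON files under a deepspeed_configs/ directory, and a config can reference one via the deepspeed key (the exact preset path below is an assumption; check the presets in your installation):

```yaml
# Multi-GPU sketch: reference a DeepSpeed ZeRO preset from the YAML config
# (axolotl ships presets under deepspeed_configs/; this path is illustrative)
deepspeed: deepspeed_configs/zero2.json
```

The same `axolotl train config.yml` command then distributes training across available GPUs; no training-loop code changes are needed.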

FAQ

Q: What is Axolotl? A: Axolotl is an LLM fine-tuning tool with 11.6K+ stars. Single YAML config for LoRA, QLoRA, DPO, GRPO, multimodal training. Multi-GPU support. Apache 2.0.

Q: How do I install Axolotl? A: Run pip3 install "axolotl[flash-attn,deepspeed]", then axolotl train config.yml to start fine-tuning.


🙏

Sources & Acknowledgments

Created by Axolotl AI. Licensed under Apache 2.0. axolotl-ai-cloud/axolotl — 11,600+ GitHub stars
