# Together AI Fine-Tuning Skill for Claude Code

> Skill that teaches Claude Code Together AI's fine-tuning API. Covers LoRA, full fine-tuning, DPO preference tuning, VLM training, and function-calling fine-tuning.

## Install

Save the content below to `.claude/skills/` or append it to your `CLAUDE.md`.

## Quick Use

```bash
npx skills add togethercomputer/skills
```

## What Is This Skill?

This skill teaches AI coding agents how to fine-tune models on Together AI. It covers LoRA (parameter-efficient), full fine-tuning, DPO preference tuning, vision-language model (VLM) training, and function-calling fine-tuning, with correct API calls, data formats, and training parameters.

**Answer-Ready**: Together AI Fine-Tuning Skill for coding agents. Covers LoRA, full fine-tuning, DPO, VLM training, and function-calling tuning. Correct data formats, hyperparameters, and job management. Part of the official 12-skill collection.

**Best for**: ML engineers fine-tuning open-source models on Together AI.

**Works with**: Claude Code, Cursor, Codex CLI.
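Before uploading training data, it can help to lint it locally. Below is a minimal sketch of a hypothetical validator (not part of the Together SDK; the function name and checks are assumptions) for the chat-style JSONL format this skill covers:

```python
import json

# Hypothetical helper, not part of the Together SDK: checks that one JSONL
# line follows the {"messages": [...]} chat shape used for conversational
# fine-tuning data.
VALID_ROLES = {"system", "user", "assistant"}

def validate_jsonl_line(line):
    """Return a list of problems found in one JSONL record ([] if clean)."""
    try:
        record = json.loads(line)
    except json.JSONDecodeError as exc:
        return [f"invalid JSON: {exc}"]
    messages = record.get("messages")
    if not isinstance(messages, list) or not messages:
        return ["missing or empty 'messages' list"]
    problems = []
    for i, msg in enumerate(messages):
        if msg.get("role") not in VALID_ROLES:
            problems.append(f"message {i}: unknown role {msg.get('role')!r}")
        if not isinstance(msg.get("content"), str):
            problems.append(f"message {i}: 'content' must be a string")
    if messages[-1].get("role") != "assistant":
        problems.append("last message should be an assistant turn")
    return problems
```

Running every line of a dataset through a check like this before upload catches format errors early, rather than after a fine-tuning job has been created and billed.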
## What the Agent Learns

### LoRA Fine-Tuning

```python
from together import Together

client = Together()

job = client.fine_tuning.create(
    training_file="file-abc123",
    model="meta-llama/Llama-3.1-8B-Instruct",
    n_epochs=3,
    learning_rate=1e-5,
    lora=True,
    lora_r=16,
)
print(f"Job ID: {job.id}")
```

### Training Data Format (JSONL)

```json
{"messages": [{"role": "system", "content": "You are helpful."}, {"role": "user", "content": "Hi"}, {"role": "assistant", "content": "Hello!"}]}
```

### Supported Methods

| Method | Use Case | Cost |
|--------|----------|------|
| LoRA | Most tasks, efficient | Low |
| Full fine-tuning | Maximum quality | High |
| DPO | Preference alignment | Medium |
| VLM training | Vision + language | Medium |
| Function-calling | Tool-use training | Low |

### Job Management

```python
# Check status
status = client.fine_tuning.retrieve(job.id)

# List jobs
jobs = client.fine_tuning.list()

# Cancel
client.fine_tuning.cancel(job.id)
```

## FAQ

**Q: Which method should I use?**
A: Start with LoRA: it is faster, cheaper, and works well for most use cases. Use full fine-tuning only if LoRA quality is insufficient.

## Source & Thanks

> Part of [togethercomputer/skills](https://github.com/togethercomputer/skills) — MIT licensed.

---

Source: https://tokrepo.com/en/workflows/9f339e9b-52fc-410d-af7f-f7c7779ab24f
Author: Script Depot
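As an appendix, the job-management calls covered above are often combined into a wait loop. Below is a minimal sketch, assuming the `retrieve` call shown in this skill; the terminal status strings (`completed`, `cancelled`, `error`) and the `status` attribute are assumptions and should be checked against your SDK version:

```python
import time

# Assumed terminal states; verify against the job objects your SDK returns.
TERMINAL = {"completed", "cancelled", "error"}

def wait_for_job(client, job_id, poll_seconds=30, timeout=3600):
    """Poll a fine-tuning job until it reaches a terminal state or times out."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        job = client.fine_tuning.retrieve(job_id)
        if getattr(job, "status", None) in TERMINAL:
            return job
        time.sleep(poll_seconds)
    raise TimeoutError(f"Job {job_id} did not finish within {timeout}s")
```

A fixed polling interval of 30 to 60 seconds is usually enough here, since fine-tuning jobs take minutes to hours; the timeout guards against a script hanging on a stuck job.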