Together AI Fine-Tuning Skill for Claude Code
Skill that teaches Claude Code Together AI's fine-tuning API. Covers LoRA, full fine-tuning, DPO preference tuning, VLM training, and function-calling fine-tuning.
What it is
This is a Claude Code skill that teaches the AI agent how to use Together AI's fine-tuning API. It covers LoRA (efficient adapter tuning), full fine-tuning, DPO preference tuning, VLM (vision-language model) training, and function-calling fine-tuning.
The skill is designed for developers who want their Claude Code agent to handle fine-tuning tasks without repeatedly explaining Together AI's API surface. Install the skill once and the agent knows how to create training jobs, format datasets, and monitor runs.
How it saves time or tokens
Without this skill, every fine-tuning conversation starts with pasting Together AI docs into the chat. The skill pre-loads structured knowledge about API endpoints, dataset formats, hyperparameter defaults, and common workflows. This reduces per-conversation token usage by an estimated 2,700 tokens.
The skill also encodes best practices: when to use LoRA vs full fine-tuning, how to structure DPO preference pairs, and what hyperparameters to start with for different model sizes.
How to use
- Install the skill:
npx skills add togethercomputer/skills
- The skill is now available in your Claude Code sessions. Ask Claude Code to fine-tune a model:
You: Fine-tune Llama 3 on my customer support dataset using LoRA
Claude: I will use Together AI's fine-tuning API with LoRA.
Here is the configuration...
- Claude Code generates the correct API calls, dataset formatting, and monitoring commands.
Example
# Together AI fine-tuning API call (generated by the skill)
import together

client = together.Together(api_key='your-key')

# Create a LoRA fine-tuning job
response = client.fine_tuning.create(
    training_file='file-abc123',
    model='meta-llama/Llama-3-8b',
    n_epochs=3,
    learning_rate=1e-5,
    lora=True,
    lora_r=16,
    lora_alpha=32,
)

print(f'Job ID: {response.id}')
print(f'Status: {response.status}')
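After submitting a job, the skill also generates monitoring code. A minimal polling sketch is shown below; `fetch_status` is a placeholder for whatever status call your client exposes (for example, a wrapper around the SDK's job-retrieval endpoint), injected as a function so the loop itself stays testable.

```python
import time

def wait_for_job(fetch_status, poll_interval=30,
                 terminal=('completed', 'error', 'cancelled')):
    """Poll fetch_status() until the job reaches a terminal state.

    fetch_status: zero-argument callable returning the job's status string.
    Returns the final status.
    """
    while True:
        status = fetch_status()
        if status in terminal:
            return status
        time.sleep(poll_interval)
```

In practice you would pass a lambda that retrieves the job by ID and returns its status field; the terminal-state names above are illustrative, not an exhaustive list from the API.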
Related on TokRepo
- AI Tools for Coding -- Other AI coding tools and skills for development workflows
- Prompt Library -- Curated prompt templates and skills for AI agents
Common pitfalls
- Together AI requires an API key with billing enabled. Fine-tuning jobs incur compute costs that vary by model size and training duration.
- Dataset format matters: chat fine-tuning expects JSONL with 'messages' arrays, while completion fine-tuning expects 'prompt'/'completion' pairs. The skill handles this distinction.
- LoRA is the right default for most use cases. Full fine-tuning is only necessary when you need to change the model's behavior fundamentally, and it costs significantly more.
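The two dataset formats in the pitfalls above can be sketched as follows. The records are hypothetical examples; the field names ('messages', 'prompt', 'completion') are the ones described in this document.

```python
import json

# Chat fine-tuning: each JSONL line holds a 'messages' array.
chat_example = {
    'messages': [
        {'role': 'user', 'content': 'Where is my order?'},
        {'role': 'assistant', 'content': 'Let me look that up for you.'},
    ]
}

# Completion fine-tuning: each JSONL line holds 'prompt'/'completion' fields.
completion_example = {
    'prompt': 'Translate to French: Hello',
    'completion': 'Bonjour',
}

def to_jsonl(records):
    """Serialize records as JSONL: one JSON object per line."""
    return '\n'.join(json.dumps(r) for r in records)

chat_jsonl = to_jsonl([chat_example])
completion_jsonl = to_jsonl([completion_example])
```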
Frequently Asked Questions
Which models can I fine-tune?
Together AI supports fine-tuning for Llama, Mistral, and other open-source model families. The available models change as new releases come out. Check Together AI's documentation for the current list of supported base models.
What is the difference between LoRA and full fine-tuning?
LoRA trains small adapter layers on top of a frozen base model, using a fraction of the compute. Full fine-tuning updates all model weights. LoRA is faster, cheaper, and sufficient for most use cases. Full fine-tuning is needed only for fundamental behavior changes.
What is DPO?
DPO (Direct Preference Optimization) trains a model to prefer one response over another given pairs of chosen/rejected examples. It is used for alignment and quality improvement without a separate reward model.
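A DPO training record pairs one prompt with a chosen and a rejected response. The sketch below shows the general shape; the exact field names vary by trainer, so treat these keys as illustrative and check Together AI's documentation for the expected schema.

```python
import json

# Hypothetical DPO preference pair: same prompt, one preferred and one
# rejected completion. Field names here are illustrative placeholders.
pair = {
    'prompt': 'Summarize this support ticket in one sentence.',
    'chosen': 'Customer reports a duplicate charge on the March invoice.',
    'rejected': 'There was some kind of billing thing, probably.',
}

# One preference pair per JSONL line, like the other dataset formats.
line = json.dumps(pair)
```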
Do I need this skill to use Together AI with Claude Code?
No. You can always paste Together AI documentation into Claude Code manually. The skill saves time by pre-loading API knowledge, dataset format rules, and best practices so the agent is immediately productive.
What dataset format does fine-tuning expect?
For chat fine-tuning, use JSONL files where each line has a 'messages' array with role/content objects. For completion tasks, use 'prompt' and 'completion' fields. The skill includes format validation guidance.
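A minimal validation sketch for the two formats described above. This is a local check of the field names named in this document, not the skill's own validator.

```python
import json

def classify_jsonl_line(line):
    """Return 'chat' or 'completion' for a valid line; raise ValueError otherwise."""
    record = json.loads(line)
    messages = record.get('messages')
    if isinstance(messages, list):
        # Chat format: every message needs 'role' and 'content'.
        if all({'role', 'content'} <= set(m) for m in messages):
            return 'chat'
        raise ValueError("each message needs 'role' and 'content'")
    if {'prompt', 'completion'} <= set(record):
        return 'completion'
    raise ValueError("expected 'messages' or 'prompt'/'completion' fields")
```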
Citations (3)
- Together AI Documentation -- Together AI provides fine-tuning APIs for open-source models
- LoRA Paper -- LoRA enables efficient adapter-based fine-tuning
- DPO Paper -- DPO provides alignment without a separate reward model
Source & Thanks
Part of togethercomputer/skills — MIT licensed.