Skills · Apr 8, 2026 · 2 min read

Together AI Fine-Tuning Skill for Claude Code

Skill that teaches Claude Code Together AI's fine-tuning API. Covers LoRA, full fine-tuning, DPO preference tuning, VLM training, and function-calling fine-tuning.

TL;DR
A Claude Code skill that provides structured knowledge for fine-tuning models on Together AI via LoRA, DPO, and full fine-tuning.
§01

What it is

This is a Claude Code skill that teaches the AI agent how to use Together AI's fine-tuning API. It covers LoRA (efficient adapter tuning), full fine-tuning, DPO preference tuning, VLM (vision-language model) training, and function-calling fine-tuning.

The skill is designed for developers who want their Claude Code agent to handle fine-tuning tasks without repeatedly explaining Together AI's API surface. Install the skill once and the agent knows how to create training jobs, format datasets, and monitor runs.

§02

How it saves time or tokens

Without this skill, every fine-tuning conversation starts with pasting Together AI docs into the chat. The skill pre-loads structured knowledge about API endpoints, dataset formats, hyperparameter defaults, and common workflows, saving an estimated 2,700 tokens per conversation.

The skill also encodes best practices: when to use LoRA vs full fine-tuning, how to structure DPO preference pairs, and what hyperparameters to start with for different model sizes.
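As an illustration of how a DPO preference pair is structured, here is a sketch of one training record. The field names below are an assumption for illustration, not Together AI's documented schema, so check the current DPO dataset docs before relying on them:

```python
import json

# Illustrative DPO preference record. Field names ("input",
# "preferred_output", "non_preferred_output") are assumptions --
# verify against Together AI's current DPO documentation.
preference_pair = {
    "input": {
        "messages": [
            {"role": "user", "content": "Summarize our refund policy."}
        ]
    },
    "preferred_output": [
        {"role": "assistant",
         "content": "Refunds are issued within 14 days of purchase."}
    ],
    "non_preferred_output": [
        {"role": "assistant", "content": "I don't know."}
    ],
}

# DPO training files are JSONL: one preference record per line.
line = json.dumps(preference_pair)
```

The key idea is the same regardless of exact field names: each record pairs one prompt with a chosen and a rejected response.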

§03

How to use

  1. Install the skill:

     npx skills add togethercomputer/skills

  2. The skill is now available in your Claude Code sessions. Ask Claude Code to fine-tune a model:

     You: Fine-tune Llama 3 on my customer support dataset using LoRA

     Claude: I will use Together AI's fine-tuning API with LoRA.
     Here is the configuration...

  3. Claude Code generates the correct API calls, dataset formatting, and monitoring commands.
§04

Example

# Together AI fine-tuning API call (generated by the skill)
import together

client = together.Together(api_key='your-key')

# Create a LoRA fine-tuning job
response = client.fine_tuning.create(
    training_file='file-abc123',
    model='meta-llama/Llama-3-8b',
    n_epochs=3,
    learning_rate=1e-5,
    lora=True,
    lora_r=16,
    lora_alpha=32
)

print(f'Job ID: {response.id}')
print(f'Status: {response.status}')
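After creating a job, the agent typically polls its status until training finishes. A minimal polling helper, sketched as a plain function so it stays SDK-agnostic: wrap it around something like `client.fine_tuning.retrieve(job_id).status` (method name and the terminal status strings below are assumptions, not Together AI's documented values):

```python
import time

def wait_for_job(fetch_status, poll_seconds=30, timeout_seconds=3600):
    """Poll a fine-tuning job until it reaches a terminal state.

    fetch_status: zero-argument callable returning the job's current
    status string, e.g. lambda: client.fine_tuning.retrieve(job_id).status.
    Terminal status names here are illustrative assumptions.
    """
    terminal = {"completed", "error", "cancelled"}
    waited = 0
    while waited <= timeout_seconds:
        status = fetch_status()
        if status in terminal:
            return status
        time.sleep(poll_seconds)
        waited += poll_seconds
    raise TimeoutError("fine-tuning job did not finish in time")
```

Passing a callable rather than a client object keeps the helper testable and independent of any particular SDK version.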
§06

Common pitfalls

  • Together AI requires an API key with billing enabled. Fine-tuning jobs incur compute costs that vary by model size and training duration.
  • Dataset format matters: chat fine-tuning expects JSONL with messages arrays, while completion fine-tuning expects prompt/completion pairs. The skill handles this distinction.
  • LoRA is the right default for most use cases. Full fine-tuning is only necessary when you need to change the model's behavior fundamentally, and it costs significantly more.
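The chat-vs-completion distinction above can be made concrete with a small JSONL validator (a sketch for illustration, not part of the skill itself):

```python
import json

def check_training_line(line):
    """Classify one JSONL training line as 'chat' or 'completion'.

    Chat fine-tuning expects a 'messages' array; completion
    fine-tuning expects 'prompt' and 'completion' fields.
    Raises ValueError on anything else.
    """
    record = json.loads(line)
    if isinstance(record.get("messages"), list):
        return "chat"
    if "prompt" in record and "completion" in record:
        return "completion"
    raise ValueError("line matches neither chat nor completion format")

chat_line = json.dumps({"messages": [
    {"role": "user", "content": "Hi"},
    {"role": "assistant", "content": "Hello!"},
]})
completion_line = json.dumps({"prompt": "2+2=", "completion": "4"})

print(check_training_line(chat_line))        # chat
print(check_training_line(completion_line))  # completion
```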

Frequently Asked Questions

What models can I fine-tune on Together AI?

Together AI supports fine-tuning for Llama, Mistral, and other open-source model families. The available models change as new releases come out. Check Together AI's documentation for the current list of supported base models.

What is the difference between LoRA and full fine-tuning?

LoRA trains small adapter layers on top of a frozen base model, using a fraction of the compute. Full fine-tuning updates all model weights. LoRA is faster, cheaper, and sufficient for most use cases. Full fine-tuning is needed only for fundamental behavior changes.
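The cost difference is easy to see from parameter counts: a LoRA adapter of rank r on a d_in × d_out weight matrix trains r·(d_in + d_out) parameters instead of d_in·d_out. A back-of-the-envelope sketch (the 4096×4096 projection size is a hypothetical example):

```python
def lora_trainable_params(d_in, d_out, r):
    """Parameters in one LoRA adapter pair (A: r x d_in, B: d_out x r)."""
    return r * (d_in + d_out)

# Hypothetical 4096 x 4096 attention projection, rank 16:
full = 4096 * 4096
lora = lora_trainable_params(4096, 4096, r=16)
print(f"full: {full:,}  lora: {lora:,}  ratio: {lora / full:.2%}")
```

At rank 16 the adapter trains well under 1% of the matrix's parameters, which is why LoRA is the cheaper default.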

What is DPO preference tuning?

DPO (Direct Preference Optimization) trains a model to prefer one response over another given pairs of chosen/rejected examples. It is used for alignment and quality improvement without a separate reward model.

Do I need the skill to use Together AI?

No. You can always paste Together AI documentation into Claude Code manually. The skill saves time by pre-loading API knowledge, dataset format rules, and best practices so the agent is immediately productive.

How do I format my training data?

For chat fine-tuning, use JSONL files where each line has a 'messages' array with role/content objects. For completion tasks, use 'prompt' and 'completion' fields. The skill includes format validation guidance.


Source & Thanks

Part of togethercomputer/skills — MIT licensed.

