# Together AI GPU Clusters Skill for Claude Code

> Skill that teaches Claude Code Together AI's GPU cluster API. Provision on-demand and reserved H100, H200, and B200 GPU clusters for large-scale training and inference.

## Install

Save the skill content to `.claude/skills/` or append it to your `CLAUDE.md`.

## Quick Use

```bash
npx skills add togethercomputer/skills
```

## What is This Skill?

This skill teaches AI coding agents how to provision and manage GPU clusters on Together AI. Request on-demand or reserved clusters of H100, H200, and B200 GPUs for large-scale model training, distributed inference, and research workloads.

**Answer-Ready**: Together AI GPU Clusters Skill for coding agents. Provision H100/H200/B200 GPU clusters on-demand or reserved. Large-scale training and distributed inference. Part of the official 12-skill collection.

**Best for**: Teams needing GPU clusters for training or large-scale inference.

**Works with**: Claude Code, Cursor, Codex CLI.

## What the Agent Learns

### Provision a Cluster

```python
from together import Together

client = Together()

cluster = client.clusters.create(
    name="training-cluster",
    gpu_type="h100-80gb",
    gpu_count=8,
    reservation_type="on-demand",
)
print(f"Cluster ID: {cluster.id}")
```

### GPU Options

| GPU       | VRAM  | Interconnect | Best For               |
|-----------|-------|--------------|------------------------|
| H100 80GB | 80GB  | NVLink       | Standard training      |
| H200      | 141GB | NVLink       | Large models           |
| B200      | 192GB | NVLink       | Cutting-edge workloads |

### Reservation Types

| Type      | Billing    | Commitment  |
|-----------|------------|-------------|
| On-demand | Per hour   | None        |
| Reserved  | Discounted | 1-12 months |

### Cluster Management

```python
# Check cluster status and utilization
status = client.clusters.retrieve(cluster.id)

# Resize the cluster to 16 GPUs
client.clusters.update(cluster.id, gpu_count=16)

# Release the cluster when finished
client.clusters.delete(cluster.id)
```

## FAQ

**Q: How many GPUs can I request?**
A: From a single GPU to clusters of 1000+ for large training runs. Contact Together AI for very large allocations.
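The reservation types above trade flexibility for price: on-demand bills only the hours you use, while reserved bills a committed period at a discount. As a rough illustration of that trade-off, here is a minimal sketch of a break-even estimate. The rates, discount, and 730-hour month are assumptions for the example, not Together AI's published pricing:

```python
# Hypothetical rates for illustration only -- check Together AI's
# pricing page for actual per-GPU-hour costs and reserved discounts.
ON_DEMAND_RATE = 2.50     # assumed $/GPU-hour, on-demand
RESERVED_DISCOUNT = 0.30  # assumed 30% discount for a reserved commitment
HOURS_PER_MONTH = 730.0   # average hours in a month


def monthly_cost(gpu_count: int, hours_used: float, reserved: bool = False) -> float:
    """Estimate one month's cluster cost under the assumed rates above.

    On-demand bills only the hours actually used; reserved bills the
    full month at the discounted rate regardless of utilization.
    """
    if reserved:
        rate = ON_DEMAND_RATE * (1 - RESERVED_DISCOUNT)
        billed_hours = HOURS_PER_MONTH
    else:
        rate = ON_DEMAND_RATE
        billed_hours = hours_used
    return gpu_count * billed_hours * rate


if __name__ == "__main__":
    # An 8-GPU cluster used 400 h/month: on-demand wins here, because
    # reserved bills all 730 h even when the cluster sits idle.
    print(f"on-demand: ${monthly_cost(8, 400):,.2f}")
    print(f"reserved:  ${monthly_cost(8, 400, reserved=True):,.2f}")
```

Under these assumed numbers, reserved only pays off once monthly utilization climbs past roughly 70% of the month; bursty experimentation favors on-demand, sustained training runs favor reserved.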
## Source & Thanks

> Part of [togethercomputer/skills](https://github.com/togethercomputer/skills) — MIT licensed.