Skills · Apr 8, 2026 · 2 min read

Together AI GPU Clusters Skill for Claude Code

A skill that teaches Claude Code how to use Together AI's GPU cluster API: provision on-demand and reserved H100, H200, and B200 GPU clusters for large-scale training and inference.

What is This Skill?

This skill teaches AI coding agents how to provision and manage GPU clusters on Together AI. Request on-demand or reserved clusters of H100, H200, and B200 GPUs for large-scale model training, distributed inference, and research workloads.


Best for: Teams needing GPU clusters for training or large-scale inference. Works with: Claude Code, Cursor, Codex CLI.

What the Agent Learns

Provision Cluster

from together import Together

# The client reads TOGETHER_API_KEY from the environment by default.
client = Together()
cluster = client.clusters.create(
    name="training-cluster",
    gpu_type="h100-80gb",
    gpu_count=8,
    reservation_type="on-demand",
)
print(f"Cluster ID: {cluster.id}")
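Before calling the API, it can help to sanity-check the request locally. A minimal sketch, assuming the GPU types and reservation types shown on this page are the allowed values (the real list lives in Together AI's docs) and that the SDK reads `TOGETHER_API_KEY` from the environment:

```python
import os

# Illustrative allowed values taken from this page; not an exhaustive list.
KNOWN_GPU_TYPES = {"h100-80gb", "h200-141gb", "b200-192gb"}
KNOWN_RESERVATIONS = {"on-demand", "reserved"}

def validate_cluster_config(gpu_type: str, gpu_count: int, reservation_type: str) -> list:
    """Return a list of problems with a cluster request; empty means it looks OK."""
    problems = []
    if gpu_type not in KNOWN_GPU_TYPES:
        problems.append(f"unknown gpu_type: {gpu_type!r}")
    if gpu_count < 1:
        problems.append("gpu_count must be at least 1")
    if reservation_type not in KNOWN_RESERVATIONS:
        problems.append(f"unknown reservation_type: {reservation_type!r}")
    if "TOGETHER_API_KEY" not in os.environ:
        problems.append("TOGETHER_API_KEY is not set")
    return problems
```

Running the check before `client.clusters.create(...)` turns a failed API call into an immediate, readable error list.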

GPU Options

GPU         VRAM    Interconnect   Best For
H100 80GB   80GB    NVLink         Standard training
H200        141GB   NVLink         Large models
B200        192GB   NVLink         Cutting-edge
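The VRAM column above suggests a simple rule of thumb: pick the smallest GPU whose memory covers the per-GPU footprint of your workload. A hedged sketch (the footprint numbers you pass in are your own estimates, and real sizing also depends on batch size, optimizer state, and parallelism strategy):

```python
# Per-GPU VRAM in GB, from the table above.
GPU_VRAM_GB = {"h100-80gb": 80, "h200-141gb": 141, "b200-192gb": 192}

def smallest_fitting_gpu(footprint_gb: float):
    """Return the cheapest-by-VRAM GPU type that fits, or None if sharding is needed."""
    for gpu, vram in sorted(GPU_VRAM_GB.items(), key=lambda kv: kv[1]):
        if vram >= footprint_gb:
            return gpu
    return None  # footprint exceeds a single GPU; shard across the cluster instead
```

For example, a 100 GB per-GPU footprint selects `h200-141gb`, while anything over 192 GB returns `None` and points you toward multi-GPU sharding.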

Reservation Types

Type        Billing      Commitment
On-demand   Per hour     None
Reserved    Discounted   1-12 months
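Reserved capacity trades a 1-12 month commitment for a discounted rate. A back-of-envelope comparison helper (the hourly rate and discount fraction here are placeholders, not Together AI pricing):

```python
def monthly_cost(hourly_rate: float, gpu_count: int,
                 hours: float = 730.0, reserved_discount: float = 0.0) -> float:
    """Estimated monthly cluster cost in the rate's currency.

    reserved_discount is a fraction, e.g. 0.25 for 25% off on-demand;
    730 is the average number of hours in a month.
    """
    return hourly_rate * gpu_count * hours * (1.0 - reserved_discount)
```

Comparing `monthly_cost(rate, n)` against `monthly_cost(rate, n, reserved_discount=d)` makes the break-even point of a reservation explicit before you commit.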

Cluster Management

# Monitor utilization
status = client.clusters.retrieve(cluster.id)
# Resize
client.clusters.update(cluster.id, gpu_count=16)
# Release
client.clusters.delete(cluster.id)
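Provisioning is not instantaneous, so scripts usually poll until the cluster is usable. A sketch of a generic polling loop; the status field name and the `"ready"` value are assumptions about the API response, which is why the fetcher is passed in as a callable rather than hard-coded:

```python
import time

def wait_until_ready(fetch_status, timeout_s: float = 600.0, poll_s: float = 5.0) -> bool:
    """Poll fetch_status() until it returns 'ready' or the timeout elapses.

    fetch_status is any zero-argument callable, e.g. (field name assumed):
        lambda: client.clusters.retrieve(cluster.id).status
    Returns True if the cluster became ready, False on timeout.
    """
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if fetch_status() == "ready":
            return True
        time.sleep(poll_s)
    return False
```

Injecting the fetcher also makes the loop trivial to test with a stub instead of a live cluster.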

FAQ

Q: How many GPUs can I request?
A: From a single GPU to clusters of 1,000+ for large training runs. Contact Together AI for very large allocations.


Source and acknowledgments

Part of togethercomputer/skills — MIT licensed.
