# Ceph — Unified Distributed Storage at Scale

> Ceph is a massively scalable software-defined storage platform that presents a single cluster as block, object, and file storage, powering private clouds and petabyte-scale backups.

## Quick Use

```bash
# Bootstrap a single-node cluster with cephadm
curl --silent --remote-name --location https://github.com/ceph/ceph/raw/main/src/cephadm/cephadm
chmod +x cephadm
./cephadm add-repo --release reef
./cephadm install
cephadm bootstrap --mon-ip 192.168.1.10

# Create a pool and map an RBD image
ceph osd pool create rbd 32
rbd pool init rbd
rbd create demo --size 1G --pool rbd
rbd map demo -p rbd
mkfs.ext4 /dev/rbd0 && mount /dev/rbd0 /mnt/demo
```

## Introduction

Ceph is the reference open-source distributed storage system, originally developed at UC Santa Cruz and now stewarded by the Ceph Foundation under the Linux Foundation. It powers OpenStack deployments, Rook on Kubernetes, and thousands of private clouds.

## What Ceph Does

- Serves RBD block devices for VMs and databases
- Exposes S3/Swift-compatible object storage via RGW
- Provides POSIX file access via CephFS with snapshots
- Self-heals data by replicating or erasure-coding placement groups
- Scales from three nodes to exabytes with no single point of failure

## Architecture Overview

A Ceph cluster is built from MON daemons (cluster membership and quorum), MGR daemons (metrics and orchestration), OSDs (one per disk, holding the actual data), MDS daemons (metadata for CephFS), and RGW gateways (the S3/Swift front end). Objects are grouped into placement groups, which CRUSH, a deterministic pseudo-random placement algorithm, maps onto OSDs without any central lookup table.
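The core idea behind CRUSH (hash-based, weight-aware placement that every client can compute independently) can be illustrated with a toy sketch. This is not the real straw2 algorithm, and the object name, OSD names, and weights below are invented for illustration:

```python
import hashlib

def _draw(pg: int, osd: str, weight: float) -> float:
    """Deterministic pseudo-random 'straw length' for one OSD,
    scaled by its weight (a simplification of Ceph's straw2 buckets)."""
    h = hashlib.sha256(f"{pg}:{osd}".encode()).digest()
    r = int.from_bytes(h[:8], "big") / 2**64  # uniform in [0, 1)
    return r * weight

def place(obj: str, pg_count: int, osds: dict[str, float], replicas: int = 3) -> list[str]:
    """Hash an object name to a placement group, rank OSDs by their
    weighted draw for that PG, and take the top `replicas`. Any client
    with the same map computes the same answer, so there is no central
    lookup table to query or keep consistent."""
    pg = int.from_bytes(hashlib.sha256(obj.encode()).digest()[:4], "big") % pg_count
    ranked = sorted(osds, key=lambda o: _draw(pg, o, osds[o]), reverse=True)
    return ranked[:replicas]

# Hypothetical 4-OSD cluster; osd.3 carries double weight (a larger disk)
cluster = {"osd.0": 1.0, "osd.1": 1.0, "osd.2": 1.0, "osd.3": 2.0}
print(place("demo", 32, cluster))  # identical output on every node
```

Real CRUSH layers hierarchical buckets (host, rack, row) and failure-domain rules on top of this weighted ranking, and it is engineered so that changing one weight moves only a small fraction of placement groups.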
## Self-Hosting & Configuration

- Use `cephadm` (containerized) or Rook for Kubernetes-native installs
- Run at least 3 MONs across 3 hosts for production
- A typical media mix is NVMe for WAL/DB and spinning disks for bulk OSDs
- Tune CRUSH rules to respect failure domains (host, rack, row)
- Enable the dashboard MGR module for a web UI on port 8443

## Key Features

- Three storage interfaces (block, object, file) from one cluster
- Erasure coding cuts raw-capacity overhead by roughly 30–60 percent versus replication
- RBD mirroring and RGW multisite replication for disaster recovery
- Rolling upgrades with zero downtime
- Integrates natively with OpenStack, Proxmox, and Kubernetes via Rook

## Comparison with Similar Tools

- **MinIO** — S3 only, simpler, single-tier
- **GlusterFS** — file focused, smaller community, less actively developed
- **Longhorn** — block only, Kubernetes-scoped, lighter
- **OpenEBS** — container-attached storage, not a unified cluster
- **SeaweedFS** — object/file focused, strong small-file performance

## FAQ

**Q:** How many nodes do I need?
**A:** Three is the production minimum; single-node labs work with `osd_crush_chooseleaf_type = 0`.

**Q:** Do I need dedicated networks?
**A:** Recommended — separate public and cluster networks at 10 GbE or faster keep recovery traffic from starving clients.

**Q:** Is Rook a fork?
**A:** No, Rook is a Kubernetes operator that deploys upstream Ceph.

**Q:** Can Ceph replace my SAN?
**A:** Yes for most block workloads, provided the network and disks are sized correctly.

## Sources

- https://github.com/ceph/ceph
- https://docs.ceph.com/

---

Source: https://tokrepo.com/en/workflows/609469f1-38fa-11f1-9bc6-00163e2b0d79
Author: Script Depot