
Rook — Cloud-Native Storage Orchestrator for Kubernetes

Rook turns Ceph into a first-class Kubernetes citizen by deploying, configuring, and healing distributed block, file, and object storage through an operator and a family of CRDs.

TL;DR
Rook automates Ceph deployment on Kubernetes, providing distributed block, file, and object storage through a Kubernetes operator.
§01

What it is

Rook is a CNCF-graduated storage orchestrator for Kubernetes. It deploys and manages Ceph, a distributed storage system, as native Kubernetes resources. Through a Kubernetes operator and custom resource definitions (CRDs), Rook handles provisioning, scaling, healing, and upgrading of storage clusters.

Rook targets platform teams that need persistent storage in Kubernetes without managing Ceph infrastructure manually. It provides block storage (RBD), shared filesystems (CephFS), and S3-compatible object storage (RGW).
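
Each storage type is declared through its own CRD. As a minimal sketch, a shared CephFS filesystem might look like this (the myfs name and pool sizes are illustrative, not prescriptive):

# filesystem.yaml
apiVersion: ceph.rook.io/v1
kind: CephFilesystem
metadata:
  name: myfs
  namespace: rook-ceph
spec:
  metadataPool:
    replicated:
      size: 3        # three copies of filesystem metadata
  dataPools:
    - name: replicated
      replicated:
        size: 3      # three copies of file data
  metadataServer:
    activeCount: 1   # one active MDS daemon
    activeStandby: true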

§02

How it saves time or tokens

Managing Ceph manually requires deep expertise in distributed storage, monitor quorums, OSD placement, and failure recovery. Rook encodes this expertise in a Kubernetes operator that handles the operational complexity. A CephCluster custom resource replaces pages of manual configuration.

Rook also handles recovery automatically: when an OSD fails, Ceph re-replicates its data onto healthy OSDs, while the operator takes care of replacing the failed daemon.
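
Once a cluster is running (see §03) and the optional toolbox pod from the same examples directory is deployed, you can watch this from Ceph's own tooling:

kubectl create -f toolbox.yaml
kubectl -n rook-ceph exec -it deploy/rook-ceph-tools -- ceph status
kubectl -n rook-ceph exec -it deploy/rook-ceph-tools -- ceph osd tree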

§03

How to use

  1. Deploy the Rook operator:
git clone --single-branch --branch release-1.14 https://github.com/rook/rook.git
cd rook/deploy/examples
kubectl create -f crds.yaml -f common.yaml -f operator.yaml
  2. Create a Ceph cluster:
# cluster.yaml
apiVersion: ceph.rook.io/v1
kind: CephCluster
metadata:
  name: rook-ceph
  namespace: rook-ceph
spec:
  dataDirHostPath: /var/lib/rook
  mon:
    count: 3
  storage:
    useAllNodes: true
    useAllDevices: true
kubectl create -f cluster.yaml
  3. Create a StorageClass for dynamic provisioning:
kubectl create -f csi/rbd/storageclass.yaml
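
Once the operator finishes reconciling, verify that the cluster reports Ready and the StorageClass exists (names match the stock example manifests used above):

kubectl -n rook-ceph get cephcluster rook-ceph
kubectl get storageclass rook-ceph-block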
§04

Example

# PersistentVolumeClaim using Rook-managed Ceph
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data
spec:
  accessModes: [ReadWriteOnce]
  storageClassName: rook-ceph-block
  resources:
    requests:
      storage: 10Gi
---
# Pod using the PVC
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
  - name: app
    image: myapp:latest
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: app-data
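
If provisioning succeeds, the claim binds within seconds; its STATUS column should read Bound once the CSI driver has created the backing RBD image:

kubectl get pvc app-data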
§05


Common pitfalls

  • Rook requires raw block devices or empty partitions on Kubernetes nodes. Using devices that already have filesystems causes deployment failures.
  • A production Ceph cluster needs at least three nodes with storage devices. Single-node deployments work for testing but provide no redundancy.
  • Ceph is resource-intensive: each OSD daemon needs roughly 2-4 GB of RAM. Plan node resources accordingly (see the sketch below).
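
Resource budgets can be pinned directly in the CephCluster spec. A minimal sketch, extending the cluster.yaml from §03 (the figures are illustrative; tune them to your hardware):

# cluster.yaml (excerpt)
spec:
  resources:
    osd:
      requests:
        cpu: "1"
        memory: 4Gi   # roughly one OSD's working set
      limits:
        memory: 8Gi
    mon:
      requests:
        cpu: 500m
        memory: 1Gi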

Frequently Asked Questions

What is the relationship between Rook and Ceph?

Rook is the Kubernetes operator. Ceph is the underlying distributed storage system. Rook automates Ceph's lifecycle (deploy, scale, upgrade, heal) on Kubernetes. You interact with Rook CRDs; Rook manages Ceph.
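
To see the split in practice, list the CRDs the operator registers:

kubectl api-resources --api-group=ceph.rook.io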

Does Rook support storage other than Ceph?

Rook historically supported multiple storage backends, but the project now focuses exclusively on Ceph. NFS support is available through CephNFS resources.
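
A minimal CephNFS sketch, assuming a running rook-ceph cluster (the my-nfs name and server count are illustrative):

# nfs.yaml
apiVersion: ceph.rook.io/v1
kind: CephNFS
metadata:
  name: my-nfs
  namespace: rook-ceph
spec:
  server:
    active: 1   # number of active NFS-Ganesha servers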

Can I use Rook for object storage?

Yes. Rook deploys Ceph's RADOS Gateway (RGW) for S3-compatible object storage. Create a CephObjectStore resource and Rook provisions the gateway endpoints.
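
A minimal sketch of such a resource (the my-store name, pool sizes, and port are illustrative):

# object.yaml
apiVersion: ceph.rook.io/v1
kind: CephObjectStore
metadata:
  name: my-store
  namespace: rook-ceph
spec:
  metadataPool:
    replicated:
      size: 3
  dataPool:
    replicated:
      size: 3
  gateway:
    port: 80      # S3 endpoint exposed by RGW
    instances: 1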

Is Rook production-ready?

Yes. Rook is a CNCF-graduated project used in production by many organizations. The operator handles rolling upgrades, automatic OSD recovery, and cluster scaling.

How does Rook compare to Longhorn?

Longhorn is simpler and lighter, suitable for small clusters. Rook/Ceph is more feature-rich (block, file, and object storage) and handles larger-scale deployments. Choose Longhorn for simplicity, Rook for full-featured distributed storage.
