Scripts · Apr 11, 2026 · 1 min read

Longhorn — Cloud Native Distributed Block Storage for Kubernetes

Longhorn is a lightweight, reliable distributed block storage system for Kubernetes. Persistent volumes with replication, snapshots, backups, and disaster recovery.

Script Depot · Community
Quick Use

Use it first, then decide how deep to go

Copy, install, and apply the commands below to get Longhorn running in minutes.

# Install Longhorn via kubectl
kubectl apply -f https://raw.githubusercontent.com/longhorn/longhorn/v1.6.0/deploy/longhorn.yaml

# Wait until all Longhorn pods are Ready (this can take a few minutes)
kubectl -n longhorn-system get pods
kubectl -n longhorn-system wait --for=condition=ready pod --all --timeout=10m

# Access UI
kubectl port-forward -n longhorn-system svc/longhorn-frontend 8080:80

Open http://localhost:8080 — Longhorn UI for managing volumes.

Intro

Longhorn is an open-source, distributed block storage system for Kubernetes. Originally developed by Rancher Labs (now SUSE) and now a CNCF incubating project, Longhorn provides persistent volumes with enterprise features like replication, snapshots, backups, and cross-cluster disaster recovery — all managed through a simple Kubernetes-native interface.

With 7.6K+ GitHub stars and an Apache-2.0 license, Longhorn is a popular choice for teams running stateful workloads on Kubernetes who want neither the complexity of Ceph nor the cost of commercial storage solutions.

What Longhorn Does

  • Block Storage: Distributed block storage for Kubernetes PVs
  • Replication: Synchronous replication across multiple nodes (2-3 replicas default)
  • Snapshots: Point-in-time volume snapshots
  • Backups: Incremental backups to S3, NFS, or other object storage
  • Disaster Recovery: Restore backups to other clusters
  • Thin Provisioning: Allocate volumes without consuming disk space upfront
  • Live Migration: Move volumes between nodes without downtime
  • UI Dashboard: Web-based volume management
  • CSI Compliant: Standard Container Storage Interface implementation
  • Backup Consistency: Crash-consistent snapshots by default, with hooks for application-consistent backups
  • Volume Encryption: At-rest encryption with Kubernetes secrets
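Replication multiplies raw disk consumption: thin provisioning means a volume only consumes what is actually written, but its worst case is volume size times replica count. A quick sizing sanity check (a hypothetical helper, plain shell arithmetic only):

```shell
# Worst-case raw disk a replicated Longhorn volume can consume, in GiB
# (hypothetical helper for capacity planning; not part of Longhorn)
max_raw_gib() {
  local size_gib=$1 replicas=$2
  echo $(( size_gib * replicas ))
}

max_raw_gib 20 3   # a 20 GiB volume with 3 replicas -> up to 60 GiB raw
```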

Architecture

┌─────────────────────────────────────┐
│         Kubernetes Cluster          │
│                                     │
│ ┌─────────┐ ┌─────────┐ ┌─────────┐ │
│ │  Node 1 │ │  Node 2 │ │  Node 3 │ │
│ │ Longhorn│ │ Longhorn│ │ Longhorn│ │
│ │ Manager │ │ Manager │ │ Manager │ │
│ │ ┌──────┐│ │ ┌──────┐│ │ ┌──────┐│ │
│ │ │Engine││ │ │Engine││ │ │Engine││ │
│ │ └──┬───┘│ │ └──┬───┘│ │ └──┬───┘│ │
│ │┌───┴───┐│ │┌───┴───┐│ │┌───┴───┐│ │
│ ││Replica││ ││Replica││ ││Replica││ │
│ │└───────┘│ │└───────┘│ │└───────┘│ │
│ │   Disk  │ │   Disk  │ │   Disk  │ │
│ └─────────┘ └─────────┘ └─────────┘ │
│                                     │
│       Synchronous Replication       │
└──────────────────┬──────────────────┘
                   │
            ┌──────┴──────┐
            │  S3 Backup  │
            │   Target    │
            └─────────────┘

Installation

Prerequisites

  • Kubernetes 1.21+
  • Linux nodes with open-iscsi installed
  • At least 3GB RAM per node
  • Dedicated disk space for volume storage (in addition to the OS disk)

# Install open-iscsi on all nodes (Ubuntu/Debian)
sudo apt install open-iscsi

# On RHEL/CentOS
sudo yum install iscsi-initiator-utils

Via kubectl

kubectl apply -f https://raw.githubusercontent.com/longhorn/longhorn/v1.6.0/deploy/longhorn.yaml

Via Helm

helm repo add longhorn https://charts.longhorn.io
helm repo update
helm install longhorn longhorn/longhorn \
  --namespace longhorn-system --create-namespace \
  --set defaultSettings.defaultReplicaCount=3 \
  --set persistence.defaultClassReplicaCount=3
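If different workloads need different replica counts, you can define an additional StorageClass alongside the default one. This sketch uses Longhorn's documented `driver.longhorn.io` provisioner and `numberOfReplicas`/`staleReplicaTimeout` parameters; the class name `longhorn-two-replica` is hypothetical:

```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: longhorn-two-replica   # hypothetical name
provisioner: driver.longhorn.io
allowVolumeExpansion: true
parameters:
  numberOfReplicas: "2"        # fewer replicas for non-critical data
  staleReplicaTimeout: "2880"  # minutes before a failed replica is cleaned up
```

PVCs that reference `storageClassName: longhorn-two-replica` then get 2-replica volumes instead of the cluster default.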

Using Longhorn

Create PVC

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myapp-data
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: longhorn
  resources:
    requests:
      storage: 10Gi

Mount in Pod

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: postgres
spec:
  serviceName: postgres
  replicas: 1
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
        - name: postgres
          image: postgres:16
          env:
            - name: POSTGRES_PASSWORD
              value: secret  # demo only; reference a Secret in production
          volumeMounts:
            - name: data
              mountPath: /var/lib/postgresql/data
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        storageClassName: longhorn
        resources:
          requests:
            storage: 20Gi

Key Features

Snapshots

apiVersion: longhorn.io/v1beta2
kind: Snapshot
metadata:
  name: postgres-snapshot-20240410
  namespace: longhorn-system
spec:
  volume: pvc-abc123-xyz

Or via UI: Select volume → Create Snapshot.

Backups to S3

  1. Configure the backup target in the Longhorn UI:

Backup Target: s3://my-backup-bucket@us-west-2/
Backup Target Credential Secret: aws-creds

  2. Create a backup:

apiVersion: longhorn.io/v1beta2
kind: Backup
metadata:
  name: postgres-backup-daily
  namespace: longhorn-system
spec:
  snapshotName: postgres-snapshot-20240410

  3. Or schedule recurring backups:

apiVersion: longhorn.io/v1beta2
kind: RecurringJob
metadata:
  name: daily-backup
  namespace: longhorn-system
spec:
  cron: "0 2 * * *"  # Daily at 2am
  task: backup
  groups:
    - default
  retain: 7          # Keep 7 backups
  concurrency: 2
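Volumes join a RecurringJob group through Longhorn's documented recurring-job labels. Assuming the `myapp-data` PVC from earlier, opting its volume into the `default` group looks roughly like this (a sketch; verify label names against your Longhorn version's docs):

```shell
# Tell Longhorn to take recurring-job labels from this PVC
kubectl label pvc myapp-data recurring-job.longhorn.io/source=enabled

# Opt the PVC's volume into the "default" recurring-job group
kubectl label pvc myapp-data recurring-job-group.longhorn.io/default=enabled
```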

Disaster Recovery

Cluster A (Primary):
  1. Take snapshots regularly
  2. Backup to S3

Cluster B (DR):
  1. Configure same S3 backup target
  2. Restore from backups
  3. Create new volumes from backup
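For step 3 on the DR cluster, one documented pattern is a StorageClass whose `fromBackup` parameter points at a specific backup URL; the bucket, backup, and volume names below are hypothetical placeholders:

```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: longhorn-restore   # hypothetical name
provisioner: driver.longhorn.io
parameters:
  numberOfReplicas: "3"
  # Hypothetical backup URL; copy the real one from the Longhorn UI's Backup page
  fromBackup: "s3://my-backup-bucket@us-west-2/?backup=backup-abc&volume=pvc-abc123-xyz"
```

A PVC created with this class provisions a new volume pre-populated from that backup.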

Volume Expansion

# Resize the PVC (only growing is supported; volumes cannot shrink)
kubectl patch pvc myapp-data -p '{"spec":{"resources":{"requests":{"storage":"20Gi"}}}}'

# Longhorn expands the volume online (no downtime); verify the new size
kubectl get pvc myapp-data -o jsonpath='{.status.capacity.storage}'

Live Migration

Move volumes between nodes without downtime:

# Via Longhorn UI: Volume → Update Replicas Count to add a replica on the
# target node, or request eviction (Node → Edit Node → Eviction Requested)
# so Longhorn rebuilds that node's replicas elsewhere with no downtime

Longhorn vs Alternatives

Feature          Longhorn             Rook/Ceph            OpenEBS              Portworx
Open source      Yes (Apache-2.0)     Yes (Apache-2.0)     Yes (Apache-2.0)     No (commercial)
Complexity       Low                  High                 Medium               Medium
Resource usage   Low (~1GB/node)      High (~2-4GB/node)   Low                  Medium
Block storage    Yes                  Yes                  Yes                  Yes
File storage     No                   Yes (CephFS)         Yes (Jiva/cStor)     Yes
Object storage   No                   Yes (RGW)            No                   Yes
Snapshots        Yes                  Yes                  Yes                  Yes
Backups          S3/NFS               S3/NFS               Velero               Built-in
UI               Polished dashboard   Ceph Dashboard       Basic                Yes
Best for         Small-medium         Large scale          Flexibility          Enterprise

FAQ

Q: Is Longhorn production-ready? A: Yes. Longhorn is a CNCF incubating project running in thousands of production clusters. It is a good fit for small-to-medium workloads (tens of TB). For very large scale (PB range), Ceph/Rook is a better fit.

Q: How is performance? A: Longhorn performance depends on the underlying disks, network bandwidth, and replica count. On a typical NVMe SSD setup, a single volume can reach 50K+ IOPS and 500MB/s+ throughput. A 10Gbps+ network is recommended for best results.

Q: What should the default replica count be? A: 3 replicas are recommended for production (tolerating the loss of 2 nodes). Development environments can use 2 replicas to save storage. 1 replica is not recommended (no data protection). Replicas only help if they are spread across different nodes.
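The cluster-wide default can also be changed after installation. One way, assuming Longhorn's documented `default-replica-count` setting resource (the same setting is editable in the UI under Settings), is roughly:

```shell
# Set the cluster-wide default replica count to 3 (sketch; the
# settings.longhorn.io resource and its "value" field are per Longhorn docs)
kubectl -n longhorn-system patch settings.longhorn.io default-replica-count \
  --type=merge -p '{"value":"3"}'
```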
