
GlusterFS — Scalable Network-Attached Distributed File System

GlusterFS aggregates commodity storage servers into a single distributed file system, scaling to petabytes and thousands of clients without a centralized metadata server.

Agent-Ready

This asset can be read and installed directly by an agent.

TokRepo also provides a generic CLI command, an install contract, metadata JSON, per-adapter install plans, and a link to the raw content, so an agent can judge fit, risk, and next steps.

Native · 98/100 · Policy: Allowed
Agent entry: Any MCP/CLI agent
Type: Skill
Install: Single
Trust level: Established
Entry: GlusterFS Overview

Generic CLI install command:
npx tokrepo install 52cf2156-5058-11f1-9bc6-00163e2b0d79

Introduction

GlusterFS is a free, open-source, software-defined distributed storage system that can scale to several petabytes. It runs entirely in user space using FUSE, requires no kernel modifications, and eliminates single points of failure by distributing data and metadata across commodity hardware.

What GlusterFS Does

  • Combines disk storage from multiple servers into a single global namespace
  • Provides replication, distribution, and erasure-coding volume types for data protection
  • Exposes storage via POSIX (FUSE mount), NFS-Ganesha, and an S3-compatible object interface (see the mount example after this list)
  • Scales horizontally by adding bricks (storage units) without downtime
  • Supports geo-replication for asynchronous cross-site disaster recovery
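
For example, a client with the native FUSE client installed can mount a volume directly; the hostnames, volume name, and mount point below are placeholders:

  # Mount a Gluster volume via the native FUSE client
  mount -t glusterfs server1:/gv0 /mnt/gluster

  # Optionally list fallback servers for fetching the volume file at mount time
  mount -t glusterfs -o backup-volfile-servers=server2:server3 server1:/gv0 /mnt/gluster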

Architecture Overview

GlusterFS uses a server-client model with no centralized metadata server. Each server runs the glusterd management daemon and one glusterfsd brick process per volume brick. The client-side translator stack (loaded via FUSE) handles hashing, replication, and self-healing. DHT (Distributed Hash Table) translators map file names to bricks using consistent hashing, while AFR (Automatic File Replication) translators maintain copies across replicas.
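
A minimal operational sketch of this model, with placeholder hostnames and volume name: every node runs glusterd, one node probes the others into a trusted pool, and starting a volume spawns the per-brick glusterfsd processes:

  # Start the management daemon on every storage node
  systemctl enable --now glusterd

  # From any one node, probe the others to form the trusted storage pool
  gluster peer probe server2
  gluster peer probe server3
  gluster peer status

  # Once a volume is started, list its glusterfsd brick processes and ports
  gluster volume status gv0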

Self-Hosting & Configuration

  • Available as RPM/DEB packages and container images for major Linux distributions
  • Volumes are created with gluster volume create, specifying replica count, disperse settings, and brick paths (see the sketch after this list)
  • Tunable options (performance.cache-size, network.ping-timeout) are set per volume via gluster volume set
  • Kubernetes integration via Heketi or the GlusterFS CSI driver for dynamic PersistentVolume provisioning
  • Geo-replication sessions sync volumes to remote clusters for DR
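
A hedged end-to-end sketch of that lifecycle follows. Package names, hostnames, brick paths, option values, and the remote volume remote1::gv0-dr are assumptions for illustration; geo-replication also requires SSH key setup that is omitted here:

  # Install server packages (Debian/Ubuntu shown; use dnf on RPM-based distros)
  apt install glusterfs-server

  # Create a 3-way replicated volume with one brick per node, then start it
  gluster volume create gv0 replica 3 \
    server1:/data/brick1/gv0 server2:/data/brick1/gv0 server3:/data/brick1/gv0
  gluster volume start gv0

  # Tune options per volume
  gluster volume set gv0 performance.cache-size 256MB
  gluster volume set gv0 network.ping-timeout 20

  # Create and start an asynchronous geo-replication session to a remote cluster
  gluster volume geo-replication gv0 remote1::gv0-dr create push-pem
  gluster volume geo-replication gv0 remote1::gv0-dr start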

Key Features

  • No centralized metadata server eliminates a common bottleneck and single point of failure
  • Elastic scaling: add or remove bricks and rebalance data online (see the example after this list)
  • Multiple access protocols: POSIX, NFS, SMB/CIFS, and S3-compatible object storage
  • Self-healing automatically repairs files after a brick recovers from failure
  • Snapshot support for point-in-time volume copies using LVM thin provisioning
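
For instance, growing a replica-3 volume online and taking a snapshot look roughly like this (brick paths and names are placeholders; snapshots require bricks on thinly provisioned LVM):

  # Add one replica set of three bricks, then rebalance layout and data online
  gluster volume add-brick gv0 replica 3 \
    server4:/data/brick1/gv0 server5:/data/brick1/gv0 server6:/data/brick1/gv0
  gluster volume rebalance gv0 start
  gluster volume rebalance gv0 status

  # Take a point-in-time snapshot of the volume
  gluster snapshot create gv0-snap1 gv0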

Comparison with Similar Tools

  • Ceph — Unified object/block/file storage with CRUSH algorithm; more complex but supports block devices natively
  • MinIO — S3-compatible object store; lighter for object workloads but no POSIX file semantics
  • SeaweedFS — Lightweight distributed file/object store with a central master node; simpler to operate, but the master coordinates volume metadata, unlike Gluster's fully distributed design
  • Longhorn — Kubernetes-native block storage; narrower scope but tighter K8s integration
  • Lustre — HPC-focused parallel file system; higher throughput for sequential I/O but harder to operate

FAQ

Q: Does GlusterFS require special hardware? A: No. It runs on commodity x86 servers with standard disks. XFS is the recommended underlying file system for bricks.
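
For example, a brick on a dedicated disk is typically prepared like this (device and paths are placeholders; the 512-byte inode size follows common Gluster guidance for extended attributes):

  # Format the brick device with XFS and a larger inode size
  mkfs.xfs -i size=512 /dev/sdb

  # Mount it persistently and create the brick directory
  mkdir -p /data/brick1
  echo '/dev/sdb /data/brick1 xfs defaults 0 0' >> /etc/fstab
  mount /data/brick1
  mkdir -p /data/brick1/gv0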

Q: Can I use GlusterFS with Kubernetes? A: Yes. The GlusterFS CSI driver or Heketi REST API enables dynamic provisioning of PersistentVolumes backed by Gluster volumes.

Q: How does GlusterFS handle node failures? A: Replicated volumes serve reads from surviving replicas. When the failed node returns, the self-heal daemon automatically repairs inconsistent files.
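
The heal state can be inspected and a heal triggered manually; the volume name is a placeholder:

  # List files with pending heals on each brick
  gluster volume heal gv0 info

  # Trigger healing of files known to need it (the self-heal daemon also runs automatically)
  gluster volume heal gv0

  # Summarize split-brain entries, if any
  gluster volume heal gv0 info split-brain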

Q: Is GlusterFS still actively maintained? A: The project is community-maintained. Red Hat shifted focus to Ceph for new deployments, but GlusterFS continues to receive community patches and releases.

