
MinIO — High-Performance S3-Compatible Object Storage

MinIO is an open-source, S3-compatible object storage server. Deploy private cloud storage with AWS S3 API compatibility, erasure coding, and multi-site replication.

Quick Use

Use it first, then decide how deep to go

This block shows what to copy, install, and run first; everything else on this page builds on it.

docker run -d --name minio -p 9000:9000 -p 9001:9001 \
  -e MINIO_ROOT_USER=minioadmin \
  -e MINIO_ROOT_PASSWORD=minioadmin \
  -v minio-data:/data \
  minio/minio server /data --console-address ":9001"

Open http://localhost:9001, log in with minioadmin / minioadmin, and create your first bucket.
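
If you prefer the terminal to the web console, the same first steps work with MinIO's mc client. A minimal sketch, assuming mc is installed and the container above is running (the alias local and the bucket name are arbitrary examples):

# Register the local server under an alias, then create a bucket and copy a file into it
mc alias set local http://localhost:9000 minioadmin minioadmin
mc mb local/my-first-bucket
mc cp ./hello.txt local/my-first-bucket/
mc ls local/my-first-bucket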

Intro

MinIO is a high-performance, S3-compatible object storage server designed for cloud-native workloads. It provides a complete, AWS S3-compatible API that works with any S3 client, SDK, or tool — making it perfect for building private cloud storage, data lakes, backup targets, and artifact repositories.

With over 60K GitHub stars and an AGPL-3.0 license, MinIO is one of the most popular open-source object storage solutions, used by organizations ranging from startups to Fortune 500 companies for on-premises and hybrid cloud deployments.

What MinIO Does

  • S3-Compatible API: Full AWS S3 API compatibility — use existing S3 tools, SDKs, and applications
  • High Performance: Very high throughput; MinIO's published multi-node NVMe benchmarks report 325 GiB/s reads and 165 GiB/s writes
  • Erasure Coding: Data protection with configurable redundancy (survive disk/node failures)
  • Encryption: Server-side and client-side encryption with KMS integration (Vault, AWS KMS)
  • Bucket Versioning: Keep multiple versions of objects for data protection (see the example after this list)
  • Object Locking: WORM (Write Once Read Many) compliance for regulatory requirements
  • Replication: Site-to-site, bucket-to-bucket replication for DR and geo-distribution
  • Console UI: Built-in web console for bucket management, user admin, and monitoring
  • Identity Management: LDAP, OpenID Connect, and built-in IAM policies
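
Because everything above is exposed through the S3 API, features like versioning are enabled with the same tooling you would point at AWS. A minimal sketch using the AWS CLI against the Quick Use instance, assuming the CLI is configured with the MinIO credentials (see "Usage with AWS CLI" below) and that a bucket named my-bucket already exists:

# Enable versioning on an existing bucket
aws --endpoint-url http://localhost:9000 s3api put-bucket-versioning \
  --bucket my-bucket \
  --versioning-configuration Status=Enabled

# Confirm the change
aws --endpoint-url http://localhost:9000 s3api get-bucket-versioning --bucket my-bucket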

Architecture

Single Node (Development)

┌──────────────┐     ┌──────────────┐
│  S3 Client   │────▶│  MinIO       │
│  (aws-cli /  │     │  Server      │
│  SDK / App)  │     │  /data       │
└──────────────┘     └──────────────┘

Distributed (Production)

┌──────────────┐     ┌────────────────────────────┐
│  S3 Clients  │────▶│  MinIO Cluster             │
│  + Console   │     │  ┌──────┐ ┌──────┐ ┌──────┐│
└──────────────┘     │  │Node 1│ │Node 2│ │Node 3││
                     │  │4 disk│ │4 disk│ │4 disk││
                     │  └──────┘ └──────┘ └──────┘│
                     │  Erasure Coding across nodes│
                     └────────────────────────────┘
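
A distributed deployment like the one sketched above is started by running the same command on every node, listing the full set of drive endpoints. A minimal sketch, assuming four hypothetical hosts node1 through node4 under example.com, each with drives mounted at /mnt/disk1 through /mnt/disk4; MinIO expands the {1...4} ranges and erasure-codes objects across all 16 drives:

# Run this on every node with identical credentials
export MINIO_ROOT_USER=admin
export MINIO_ROOT_PASSWORD=your-secure-password
minio server http://node{1...4}.example.com/mnt/disk{1...4}/minio \
  --console-address ":9001"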

Self-Hosting

Docker Compose

services:
  minio:
    image: minio/minio:latest
    command: server /data --console-address ":9001"
    ports:
      - "9000:9000"   # S3 API
      - "9001:9001"   # Console UI
    environment:
      MINIO_ROOT_USER: minioadmin
      MINIO_ROOT_PASSWORD: your-secure-password
    volumes:
      - minio-data:/data
    restart: unless-stopped

volumes:
  minio-data:
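
Bring the stack up with the usual Compose workflow; the console is then reachable on port 9001 just like in Quick Use:

docker compose up -d
docker compose ps
docker compose logs -f minio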

Binary Installation

# Linux
wget https://dl.min.io/server/minio/release/linux-amd64/minio
chmod +x minio
MINIO_ROOT_USER=admin MINIO_ROOT_PASSWORD=password ./minio server /data
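
With either install method, a quick way to verify the server is up is MinIO's unauthenticated liveness endpoint:

# Returns HTTP 200 when the server process is healthy
curl -i http://localhost:9000/minio/health/live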

Usage with AWS CLI

MinIO is fully compatible with the AWS CLI:

# Configure AWS CLI for MinIO
aws configure set aws_access_key_id minioadmin
aws configure set aws_secret_access_key minioadmin
aws configure set region us-east-1   # MinIO accepts any region, but the CLI needs one set for request signing

# Create bucket
aws --endpoint-url http://localhost:9000 s3 mb s3://my-bucket

# Upload file
aws --endpoint-url http://localhost:9000 s3 cp myfile.txt s3://my-bucket/

# List objects
aws --endpoint-url http://localhost:9000 s3 ls s3://my-bucket/

# Download file
aws --endpoint-url http://localhost:9000 s3 cp s3://my-bucket/myfile.txt ./
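
Presigned URLs work the same way they do against AWS, which is useful for sharing objects without handing out credentials:

# Generate a download URL that expires in one hour
aws --endpoint-url http://localhost:9000 s3 presign s3://my-bucket/myfile.txt --expires-in 3600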

SDK Usage

Python (boto3)

import boto3

s3 = boto3.client('s3',
    endpoint_url='http://localhost:9000',
    aws_access_key_id='minioadmin',
    aws_secret_access_key='minioadmin'
)

# Upload
s3.upload_file('local-file.txt', 'my-bucket', 'remote-file.txt')

# Download
s3.download_file('my-bucket', 'remote-file.txt', 'downloaded.txt')

# List objects
response = s3.list_objects_v2(Bucket='my-bucket')
for obj in response.get('Contents', []):
    print(obj['Key'], obj['Size'])

JavaScript (AWS SDK v3)

import { S3Client, PutObjectCommand } from '@aws-sdk/client-s3';

const s3 = new S3Client({
  endpoint: 'http://localhost:9000',
  region: 'us-east-1',
  credentials: {
    accessKeyId: 'minioadmin',
    secretAccessKey: 'minioadmin',
  },
  forcePathStyle: true,
});

await s3.send(new PutObjectCommand({
  Bucket: 'my-bucket',
  Key: 'hello.txt',
  Body: 'Hello MinIO!',
}));

Common Use Cases

Use Case          Description
Backup storage    S3-compatible target for Restic, Velero, Duplicati (see the example below)
Docker Registry   Backend storage for Harbor, Distribution
Data Lake         Store Parquet, CSV, JSON for analytics (Spark, Trino)
ML Artifacts      Model storage for MLflow, DVC, Kubeflow
Media Storage     Image/video storage for web apps
Log Archive       Long-term log storage from Loki, Elasticsearch
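
As a concrete example of the backup-storage row, Restic can use a MinIO bucket as its repository. A minimal sketch, assuming a bucket named restic-backups already exists and the default Quick Use credentials:

# Restic reads S3 credentials from the standard AWS environment variables
export AWS_ACCESS_KEY_ID=minioadmin
export AWS_SECRET_ACCESS_KEY=minioadmin

# Initialize the repository, then back up a directory
restic -r s3:http://localhost:9000/restic-backups init
restic -r s3:http://localhost:9000/restic-backups backup /home/user/documents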

MinIO vs Alternatives

Feature          MinIO            AWS S3    Ceph           SeaweedFS
Open Source      Yes (AGPL-3.0)   No        Yes (LGPL)     Yes (Apache)
S3 Compatible    Full             Native    Partial        Partial
Performance      Very high        High      Moderate       High
Complexity       Simple           Managed   Very complex   Moderate
Erasure coding   Yes              Yes       Yes            Yes
Single binary    Yes              N/A       No             Yes

FAQ

Q: Can MinIO replace AWS S3? A: For self-hosted scenarios, yes. MinIO implements the full S3 API, so any application that uses an AWS S3 SDK can switch to MinIO seamlessly. However, MinIO does not offer some of S3's managed features, such as Lambda triggers or the Glacier storage class.

Q: How is data durability ensured? A: MinIO uses erasure coding to protect data. With the default configuration it can tolerate the failure of up to half of the drives. For production, configure at least 4 drives and enable server-side encryption.

Q: How much data can a single node store? A: There is no built-in limit; capacity depends on the attached disks. A single-node MinIO can reach GB/s-level throughput on consumer SSDs, and distributed deployments scale capacity and performance linearly.
