## What MinIO Does
- S3-Compatible API: Full AWS S3 API compatibility — use existing S3 tools, SDKs, and applications
- High Performance: Industry-leading throughput — up to 325 GiB/s reads and 165 GiB/s writes in MinIO's published multi-node NVMe benchmarks
- Erasure Coding: Data protection with configurable redundancy (survive disk/node failures)
- Encryption: Server-side and client-side encryption with KMS integration (Vault, AWS KMS)
- Bucket Versioning: Keep multiple versions of objects for data protection
- Object Locking: WORM (Write Once Read Many) compliance for regulatory requirements
- Replication: Site-to-site, bucket-to-bucket replication for DR and geo-distribution
- Console UI: Built-in web console for bucket management, user admin, and monitoring
- Identity Management: LDAP, OpenID Connect, and built-in IAM policies
## Architecture

### Single Node (Development)
```
┌──────────────┐     ┌──────────────┐
│  S3 Client   │────▶│    MinIO     │
│  (aws-cli /  │     │    Server    │
│   SDK / App) │     │    /data     │
└──────────────┘     └──────────────┘
```

### Distributed (Production)
```
┌──────────────┐     ┌────────────────────────────┐
│  S3 Clients  │────▶│        MinIO Cluster       │
│  + Console   │     │ ┌──────┐ ┌──────┐ ┌──────┐ │
└──────────────┘     │ │Node 1│ │Node 2│ │Node 3│ │
                     │ │4 disk│ │4 disk│ │4 disk│ │
                     │ └──────┘ └──────┘ └──────┘ │
                     │ Erasure Coding across nodes│
                     └────────────────────────────┘
```

## Self-Hosting
### Docker Compose
```yaml
services:
  minio:
    image: minio/minio:latest
    command: server /data --console-address ":9001"
    ports:
      - "9000:9000"   # S3 API
      - "9001:9001"   # Console UI
    environment:
      MINIO_ROOT_USER: minioadmin
      MINIO_ROOT_PASSWORD: your-secure-password
    volumes:
      - minio-data:/data
    restart: unless-stopped

volumes:
  minio-data:
```

### Binary Installation
```sh
# Linux
wget https://dl.min.io/server/minio/release/linux-amd64/minio
chmod +x minio
MINIO_ROOT_USER=admin MINIO_ROOT_PASSWORD=password ./minio server /data
```

## Usage with AWS CLI
MinIO is fully compatible with the AWS CLI:
```sh
# Configure AWS CLI for MinIO
aws configure set aws_access_key_id minioadmin
aws configure set aws_secret_access_key minioadmin

# Create bucket
aws --endpoint-url http://localhost:9000 s3 mb s3://my-bucket

# Upload file
aws --endpoint-url http://localhost:9000 s3 cp myfile.txt s3://my-bucket/

# List objects
aws --endpoint-url http://localhost:9000 s3 ls s3://my-bucket/

# Download file
aws --endpoint-url http://localhost:9000 s3 cp s3://my-bucket/myfile.txt ./
```

## SDK Usage
### Python (boto3)
```python
import boto3

s3 = boto3.client(
    's3',
    endpoint_url='http://localhost:9000',
    aws_access_key_id='minioadmin',
    aws_secret_access_key='minioadmin',
)

# Upload
s3.upload_file('local-file.txt', 'my-bucket', 'remote-file.txt')

# Download
s3.download_file('my-bucket', 'remote-file.txt', 'downloaded.txt')

# List objects
response = s3.list_objects_v2(Bucket='my-bucket')
for obj in response.get('Contents', []):
    print(obj['Key'], obj['Size'])
```

### JavaScript (AWS SDK v3)
```javascript
import { S3Client, PutObjectCommand } from '@aws-sdk/client-s3';

const s3 = new S3Client({
  endpoint: 'http://localhost:9000',
  region: 'us-east-1',
  credentials: {
    accessKeyId: 'minioadmin',
    secretAccessKey: 'minioadmin',
  },
  // Path-style URLs (http://host/bucket/key) instead of
  // virtual-hosted buckets, which MinIO expects by default
  forcePathStyle: true,
});

await s3.send(new PutObjectCommand({
  Bucket: 'my-bucket',
  Key: 'hello.txt',
  Body: 'Hello MinIO!',
}));
```

## Common Use Cases
| Use Case | Description |
|---|---|
| Backup storage | S3-compatible target for Restic, Velero, Duplicati |
| Docker Registry | Backend storage for Harbor, Distribution |
| Data Lake | Store Parquet, CSV, JSON for analytics (Spark, Trino) |
| ML Artifacts | Model storage for MLflow, DVC, Kubeflow |
| Media Storage | Image/video storage for web apps |
| Log Archive | Long-term log storage from Loki, Elasticsearch |
## MinIO vs Alternatives
| Feature | MinIO | AWS S3 | Ceph | SeaweedFS |
|---|---|---|---|---|
| Open Source | Yes (AGPL-3.0) | No | Yes (LGPL) | Yes (Apache) |
| S3 Compatible | Full | Native | Partial | Partial |
| Performance | Very high | High | Moderate | High |
| Complexity | Simple | Managed | Very complex | Moderate |
| Erasure coding | Yes | Yes | Yes | Yes |
| Single binary | Yes | N/A | No | Yes |
## FAQ
**Q: Can MinIO replace AWS S3?**
A: In self-hosted scenarios, yes. MinIO implements the core S3 API, so an application using an AWS S3 SDK can typically switch to MinIO by changing only the endpoint and credentials. However, MinIO doesn't provide S3's managed extras (such as Lambda event triggers or Glacier storage classes).
**Q: How is data reliability guaranteed?**
A: MinIO uses Reed-Solomon erasure coding to ensure durability. With maximum parity (EC:N/2), a deployment can lose up to half its drives without data loss; the default parity is lower (typically EC:4 on larger erasure sets). For production, configure at least 4 drives and enable server-side encryption.
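
The durability/capacity trade-off above can be sketched numerically. The helper below models a single erasure set — a simplification, since real MinIO deployments stripe objects across multiple erasure sets and allow per-storage-class parity:

```python
def erasure_set_capacity(drives: int, drive_size_tib: float, parity: int) -> dict:
    """Usable capacity and fault tolerance for one erasure set with EC:parity.

    Each object is split into (drives - parity) data shards plus `parity`
    parity shards, so any `parity` drives can fail without data loss.
    """
    if not 0 < parity <= drives // 2:
        raise ValueError("parity must be between 1 and drives // 2")
    data_shards = drives - parity
    return {
        "usable_tib": data_shards * drive_size_tib,
        "raw_tib": drives * drive_size_tib,
        "tolerated_drive_failures": parity,
    }

# 8 x 4 TiB drives with EC:4 parity: 16 TiB usable out of 32 TiB raw,
# surviving any 4 simultaneous drive failures.
print(erasure_set_capacity(8, 4.0, 4))
```

Lowering parity (e.g. EC:2 on the same 8 drives) raises usable capacity to 24 TiB but tolerates only 2 drive failures.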
**Q: How much data can a single node store?**
A: There is no hard limit — capacity depends on attached disk space. A single MinIO node on commodity SSDs can reach GB/s-level throughput, and distributed deployments scale capacity and performance near-linearly as nodes are added.