Scripts · Apr 10, 2026 · 3 min read

MinIO — High-Performance S3-Compatible Object Storage

MinIO is an open-source, S3-compatible object storage server. Deploy private cloud storage with AWS S3 API compatibility, erasure coding, and multi-site replication.

TL;DR
MinIO provides S3-compatible object storage you can self-host with erasure coding and multi-site replication.
§01

What it is

MinIO is an open-source, high-performance object storage server that implements the AWS S3 API. It is designed for cloud-native workloads including AI/ML data lakes, backup targets, and application storage.

MinIO targets DevOps teams and developers who need S3-compatible storage without AWS dependency. It runs on any hardware from a single laptop to multi-node distributed clusters.

§02

How it saves time or tokens

MinIO provides a drop-in replacement for AWS S3. Existing code using the AWS SDK, boto3, or any S3 client works with MinIO by changing the endpoint URL. This eliminates vendor lock-in and reduces cloud storage costs for on-premises or hybrid deployments.

§03

How to use

  1. Run MinIO with Docker:
docker run -d --name minio -p 9000:9000 -p 9001:9001 \
  -e MINIO_ROOT_USER=minioadmin \
  -e MINIO_ROOT_PASSWORD=minioadmin \
  -v minio-data:/data \
  minio/minio server /data --console-address ':9001'
  2. Open http://localhost:9001 and log in with minioadmin/minioadmin.
  3. Create a bucket and start uploading files via the web console or any S3 client.
§04

Example

import boto3

s3 = boto3.client(
    's3',
    endpoint_url='http://localhost:9000',
    aws_access_key_id='minioadmin',
    aws_secret_access_key='minioadmin',
)

# Create a bucket
s3.create_bucket(Bucket='my-data')

# Upload a file
s3.upload_file('report.pdf', 'my-data', 'reports/2026/report.pdf')

# List objects ('Contents' is absent from the response when the bucket is empty)
for obj in s3.list_objects_v2(Bucket='my-data').get('Contents', []):
    print(obj['Key'])
§05


Common pitfalls

  • The default minioadmin credentials are public knowledge. Change MINIO_ROOT_USER and MINIO_ROOT_PASSWORD before any non-local deployment.
  • MinIO erasure coding requires a minimum of 4 drives for data protection. Single-drive setups provide no redundancy.
  • MinIO uses port 9000 for the S3 API and 9001 for the web console. Ensure both ports are accessible if running behind a firewall.
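The erasure-coding pitfall above comes down to simple arithmetic, sketched below. The EC:4 figure used in the example is MinIO's commonly cited default for larger stripes; exact defaults vary by version, so treat the numbers as illustrative assumptions:

```python
def tolerable_drive_failures(drives: int, parity: int) -> int:
    """How many failed drives an erasure stripe can lose and still serve
    reads. Illustrative sketch, not MinIO's actual implementation."""
    if drives < 4:
        return 0  # below 4 drives, erasure coding offers no redundancy
    if not 1 <= parity <= drives // 2:
        raise ValueError("parity shards must be between 1 and drives // 2")
    return parity  # each parity shard covers one lost drive
```

So an 8-drive stripe with 4 parity shards survives 4 failed drives, while a single-drive deployment survives none.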

Frequently Asked Questions

Is MinIO fully compatible with AWS S3?

MinIO implements the core AWS S3 API including GET, PUT, DELETE, multipart upload, versioning, and lifecycle policies. Most S3 client libraries and tools work with MinIO by changing the endpoint URL.

What is erasure coding in MinIO?

Erasure coding splits each object into data and parity blocks spread across multiple drives, so MinIO can tolerate drive failures without data loss. A distributed MinIO cluster with 4+ drives enables erasure coding automatically.

Can MinIO replicate across data centers?

Yes. MinIO supports site-to-site replication for disaster recovery. You configure replication rules between two or more MinIO clusters to keep data synchronized across geographic locations.

How does MinIO handle large files?

MinIO supports multipart uploads for large files, consistent with the S3 API. Files are split into parts, uploaded in parallel, and assembled on the server. The maximum object size is 5TB.
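A back-of-envelope planner for those limits can be sketched as below. The 64 MiB default part size is an arbitrary assumption; the 5 MiB minimum part, 5 GiB maximum part, and 10,000-part ceiling are standard S3 API limits:

```python
import math

MIB = 1024 ** 2
GIB = 1024 ** 3
TIB = 1024 ** 4


def plan_multipart(object_size: int, part_size: int = 64 * MIB) -> int:
    """Return the number of parts a multipart upload needs,
    validating against standard S3 API limits."""
    if object_size > 5 * TIB:
        raise ValueError("object exceeds the 5 TB S3 object-size cap")
    if not 5 * MIB <= part_size <= 5 * GIB:
        raise ValueError("part size must be between 5 MiB and 5 GiB")
    parts = math.ceil(object_size / part_size)
    if parts > 10_000:
        raise ValueError("too many parts; increase part_size")
    return parts
```

For example, a 640 MiB file at the 64 MiB part size uploads as 10 parts.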

What clients work with MinIO?

Any S3-compatible client works with MinIO. This includes AWS CLI, boto3, the MinIO client (mc), rclone, and S3-compatible libraries in every major programming language.
