MinIO — High-Performance S3-Compatible Object Storage
MinIO is an open-source, S3-compatible object storage server. Deploy private cloud storage with AWS S3 API compatibility, erasure coding, and multi-site replication.
What it is
MinIO is an open-source, high-performance object storage server that implements the AWS S3 API. It is designed for cloud-native workloads including AI/ML data lakes, backup targets, and application storage.
MinIO targets DevOps teams and developers who need S3-compatible storage without AWS dependency. It runs on any hardware from a single laptop to multi-node distributed clusters.
How it saves time or tokens
MinIO provides a drop-in replacement for AWS S3. Existing code using the AWS SDK, boto3, or any S3 client works with MinIO by changing the endpoint URL. This eliminates vendor lock-in and reduces cloud storage costs for on-premises or hybrid deployments.
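For example, a boto3 client that normally targets AWS needs only a different endpoint, and some setups also benefit from path-style addressing since MinIO buckets are not DNS subdomains by default. A minimal sketch, assuming MinIO is already running locally on port 9000 with the default credentials:
import boto3
from botocore.config import Config
# Same boto3 code path as AWS S3; only the endpoint, credentials, and addressing style differ.
s3 = boto3.client(
    's3',
    endpoint_url='http://localhost:9000',  # point at MinIO instead of AWS
    aws_access_key_id='minioadmin',        # assumption: default local credentials
    aws_secret_access_key='minioadmin',
    config=Config(s3={'addressing_style': 'path'}),  # path-style URLs avoid per-bucket DNS names
)
print(s3.list_buckets()['Buckets'])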
How to use
- Run MinIO with Docker:
docker run -d --name minio -p 9000:9000 -p 9001:9001 \
  -e MINIO_ROOT_USER=minioadmin \
  -e MINIO_ROOT_PASSWORD=minioadmin \
  -v minio-data:/data \
  minio/minio server /data --console-address ':9001'
- Open http://localhost:9001 and log in with minioadmin/minioadmin; if the console does not load, see the reachability check sketched after this list.
- Create a bucket and start uploading files via the web console or any S3 client.
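If the console does not come up, a quick probe against MinIO's documented liveness endpoint can confirm the container is serving requests. A minimal sketch, assuming the default port mapping shown above:
import urllib.request
# MinIO exposes an unauthenticated liveness probe on the S3 API port.
resp = urllib.request.urlopen('http://localhost:9000/minio/health/live', timeout=5)
print('MinIO is live' if resp.status == 200 else f'unexpected status {resp.status}')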
Example
import boto3
s3 = boto3.client(
    's3',
    endpoint_url='http://localhost:9000',
    aws_access_key_id='minioadmin',
    aws_secret_access_key='minioadmin',
)
# Create a bucket
s3.create_bucket(Bucket='my-data')
# Upload a file
s3.upload_file('report.pdf', 'my-data', 'reports/2026/report.pdf')
# List objects
for obj in s3.list_objects_v2(Bucket='my-data').get('Contents', []):
    print(obj['Key'])
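The same client can hand objects back out, either directly or through a time-limited presigned URL. A short follow-on sketch using standard boto3 calls, reusing the s3 client from the example above:
# Download the object back to a local file
s3.download_file('my-data', 'reports/2026/report.pdf', 'report-copy.pdf')
# Generate a presigned URL granting temporary read access (1 hour)
url = s3.generate_presigned_url(
    'get_object',
    Params={'Bucket': 'my-data', 'Key': 'reports/2026/report.pdf'},
    ExpiresIn=3600,
)
print(url)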
Related on TokRepo
- Self-Hosted Tools -- Self-hosted infrastructure and storage
- Database Tools -- Data storage and management solutions
Common pitfalls
- The default minioadmin credentials are public knowledge. Change MINIO_ROOT_USER and MINIO_ROOT_PASSWORD before any non-local deployment; see the environment-variable sketch after this list.
- MinIO erasure coding requires a minimum of 4 drives for data protection. Single-drive setups provide no redundancy.
- MinIO uses port 9000 for the S3 API and 9001 for the web console. Ensure both ports are accessible if running behind a firewall.
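One way to avoid hardcoding root credentials in client code is to read them from the environment. A minimal sketch, assuming MINIO_ROOT_USER and MINIO_ROOT_PASSWORD are exported in the shell that runs the script; MINIO_ENDPOINT is a variable name chosen for this sketch, not a MinIO convention:
import os
import boto3
# Credentials come from the environment rather than source code.
s3 = boto3.client(
    's3',
    endpoint_url=os.environ.get('MINIO_ENDPOINT', 'http://localhost:9000'),  # hypothetical variable name
    aws_access_key_id=os.environ['MINIO_ROOT_USER'],
    aws_secret_access_key=os.environ['MINIO_ROOT_PASSWORD'],
)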
Frequently Asked Questions
Is MinIO compatible with the AWS S3 API?
MinIO implements the core AWS S3 API, including GET, PUT, DELETE, multipart upload, versioning, and lifecycle policies. Most S3 client libraries and tools work with MinIO by changing the endpoint URL.
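For instance, bucket versioning can be enabled with the same boto3 call used against AWS. A minimal sketch, assuming an s3 client configured as in the example above and an erasure-coded MinIO deployment (versioning is not available on single-drive setups):
# Enable versioning on the bucket via the standard S3 API call
s3.put_bucket_versioning(
    Bucket='my-data',
    VersioningConfiguration={'Status': 'Enabled'},
)
print(s3.get_bucket_versioning(Bucket='my-data').get('Status'))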
How does erasure coding protect data?
Erasure coding splits objects into data and parity blocks spread across multiple drives, so MinIO can tolerate drive failures without data loss. A distributed MinIO cluster with 4+ drives enables erasure coding automatically.
Does MinIO support multi-site replication?
Yes. MinIO supports site-to-site replication for disaster recovery. You configure replication rules between two or more MinIO clusters to keep data synchronized across geographic locations.
How does MinIO handle large files?
MinIO supports multipart uploads for large files, consistent with the S3 API. Files are split into parts, uploaded in parallel, and assembled on the server. The maximum object size is 5 TB.
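With boto3, multipart behavior is controlled by a TransferConfig. A minimal sketch reusing the s3 client from the example above; the 100 MB threshold, 16 MB part size, and file names are illustrative values, not MinIO requirements:
from boto3.s3.transfer import TransferConfig
# Upload in 16 MB parts once the file exceeds 100 MB, with parallel part uploads
config = TransferConfig(
    multipart_threshold=100 * 1024 * 1024,
    multipart_chunksize=16 * 1024 * 1024,
    max_concurrency=8,
)
s3.upload_file('big-dataset.tar', 'my-data', 'datasets/big-dataset.tar', Config=config)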
Which clients and tools work with MinIO?
Any S3-compatible client works with MinIO, including the AWS CLI, boto3, the MinIO Client (mc), rclone, and S3-compatible libraries in every major programming language.
Citations (3)
- MinIO GitHub -- MinIO is a high-performance S3-compatible object storage server
- MinIO Documentation -- MinIO documentation and deployment guides
- AWS S3 Documentation -- AWS S3 API specification for object storage
Source & Thanks
- GitHub: minio/minio — 60.7K+ ⭐ | AGPL-3.0
- Website: min.io