Deploy production-grade MinIO distributed object storage on Kubernetes in minutes. S3-compatible, erasure-coded, with a built-in web console and TLS support.
Supported Versions
Min Nodes (Production)
Open Source
Deploy MinIO in 2 steps
Install KubeBlocks & MinIO Addon
# Add Helm repo
helm repo add kubeblocks https://apecloud.github.io/helm-charts
helm repo update

# Install KubeBlocks
helm install kubeblocks kubeblocks/kubeblocks \
  --namespace kb-system --create-namespace

# Install MinIO addon
helm upgrade -i kb-addon-minio kubeblocks/minio \
  --namespace kb-system
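Once both charts are installed, a quick check confirms the control plane and addon are in place (run against your own cluster):

```shell
# Confirm the KubeBlocks control plane pods are Running
kubectl get pods -n kb-system

# Confirm both releases are deployed
helm list -n kb-system
```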
Create a MinIO Cluster
apiVersion: apps.kubeblocks.io/v1
kind: Cluster
metadata:
  name: minio-cluster
  namespace: demo
spec:
  terminationPolicy: Delete
  componentSpecs:
    - name: minio
      componentDef: minio
      replicas: 4
      serviceVersion: "2025.10.15"
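Assuming the manifest above is saved as minio-cluster.yaml, creating and watching the cluster looks like this (names exactly as defined in the manifest):

```shell
# Create the target namespace and submit the Cluster resource
kubectl create namespace demo
kubectl apply -f minio-cluster.yaml

# Watch until the cluster status reaches Running
kubectl get cluster minio-cluster -n demo -w

# Inspect the individual MinIO pods
kubectl get pods -n demo -l app.kubernetes.io/instance=minio-cluster
```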
KubeBlocks deploys MinIO in distributed mode where all nodes are equal and data is protected by erasure coding — in a symmetric single-drive-per-node layout, data remains intact even when up to half the nodes fail.
KubeBlocks deploys MinIO with 4 or more nodes (must be a multiple of 2). Each node stores a portion of the erasure-coded data shards — when a node fails, surviving nodes reconstruct the data from parity shards. KubeBlocks restarts the failed pod; it rejoins and syncs automatically.
All nodes are symmetric — no primary/replica distinction, no single point of failure
Erasure coding distributes data and parity shards across all nodes for data durability
S3-compatible API on :9000 — works with any S3 SDK, CLI, or tool out of the box
Built-in web console on :9001 — browse buckets, manage objects, configure users
Peer discovery via Kubernetes headless service DNS — no external coordinator needed
TLS support for both S3 API and web console via KubeBlocks cert management
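To try the S3 API from a workstation, one approach is a port-forward plus the MinIO `mc` client — a sketch assuming the service follows the `<cluster>-<component>` naming pattern and that ACCESS_KEY/SECRET_KEY hold the credentials from the cluster's Secret:

```shell
# Forward the S3 port to localhost
kubectl port-forward svc/minio-cluster-minio 9000:9000 -n demo &

# Register the endpoint with the MinIO client
# (ACCESS_KEY / SECRET_KEY come from the auto-generated credentials Secret)
mc alias set kb-minio http://127.0.0.1:9000 "$ACCESS_KEY" "$SECRET_KEY"

# Create a bucket and list it to verify round-trip access
mc mb kb-minio/test-bucket
mc ls kb-minio
```

Any S3 SDK works the same way against the forwarded endpoint.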
S3 endpoint: minio-cluster-minio:9000 · Web console: minio-cluster-minio:9001
No SSH into pods, no shell scripts. Submit an OpsRequest and KubeBlocks handles the rest.
Availability & Scaling
Horizontal Scale-Out
Add nodes in multiples of 2. New nodes join the cluster and participate in erasure coding after a cluster restart. Min 4 nodes for production.
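A scale-out OpsRequest might look like the following sketch (API group, and the `scaleOut.replicaChanges` field, as in recent KubeBlocks releases — verify against your installed version):

```yaml
apiVersion: operations.kubeblocks.io/v1alpha1
kind: OpsRequest
metadata:
  name: minio-scale-out
  namespace: demo
spec:
  clusterName: minio-cluster
  type: HorizontalScaling
  horizontalScaling:
    - componentName: minio
      scaleOut:
        replicaChanges: 2   # keep the total even, minimum 4 for production
```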
Vertical Scaling
Resize CPU and memory on running nodes via OpsRequest with a rolling pod restart strategy — the cluster continues serving during the operation.
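A vertical-scaling OpsRequest is a small manifest along these lines (resource values are illustrative; field names per recent KubeBlocks releases):

```yaml
apiVersion: operations.kubeblocks.io/v1alpha1
kind: OpsRequest
metadata:
  name: minio-vscale
  namespace: demo
spec:
  clusterName: minio-cluster
  type: VerticalScaling
  verticalScaling:
    - componentName: minio
      requests:
        cpu: "1"
        memory: 2Gi
      limits:
        cpu: "2"
        memory: 4Gi
```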
Rolling Restart
Controlled pod restarts that keep the cluster serving S3 requests throughout the operation.
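A restart is likewise a one-shot OpsRequest (sketch, same caveats on API version):

```yaml
apiVersion: operations.kubeblocks.io/v1alpha1
kind: OpsRequest
metadata:
  name: minio-restart
  namespace: demo
spec:
  clusterName: minio-cluster
  type: Restart
  restart:
    - componentName: minio
```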
Stop / Start
Suspend the cluster to eliminate compute cost; resume with full object data and bucket configuration intact.
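Stopping and starting use the same pattern — a minimal sketch (submit a second OpsRequest with `type: Start` and a new name to resume):

```yaml
apiVersion: operations.kubeblocks.io/v1alpha1
kind: OpsRequest
metadata:
  name: minio-stop
  namespace: demo
spec:
  clusterName: minio-cluster
  type: Stop
```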
Expose via LoadBalancer
Expose the S3 API or web console externally via a LoadBalancer or NodePort service for out-of-cluster access.
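An Expose OpsRequest could be sketched as below — the exact `expose` sub-fields have changed across KubeBlocks versions, so treat this as an illustration rather than a copy-paste manifest:

```yaml
apiVersion: operations.kubeblocks.io/v1alpha1
kind: OpsRequest
metadata:
  name: minio-expose
  namespace: demo
spec:
  clusterName: minio-cluster
  type: Expose
  expose:
    - componentName: minio
      switch: Enable
      services:
        - name: internet
          serviceType: LoadBalancer   # or NodePort
          ports:
            - name: s3
              port: 9000
              targetPort: 9000
```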
Configuration & Observability
TLS Encryption
Enable TLS for both the S3 API (:9000) and web console (:9001) via KubeBlocks certificate management.
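In the Cluster spec this is a per-component toggle — a fragment of the manifest from the quick start, assuming KubeBlocks' self-signed issuer (use `UserProvided` to bring your own certificates):

```yaml
spec:
  componentSpecs:
    - name: minio
      tls: true
      issuer:
        name: KubeBlocks   # certificates generated and rotated by KubeBlocks
```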
Auto-Create Buckets
Set the MINIO_BUCKETS env var to automatically create buckets during cluster initialization.
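As a sketch, the variable can be set in the component spec — the exact list separator expected by the addon's init logic is an assumption here:

```yaml
spec:
  componentSpecs:
    - name: minio
      env:
        - name: MINIO_BUCKETS
          value: "logs,backups,artifacts"   # assumed comma-separated bucket names
```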
Credential Management
Root credentials are auto-generated and stored in a Kubernetes Secret. Retrieve via kubectl for SDK or console access.
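Retrieval is two kubectl calls — the Secret name below assumes the conventional `<cluster>-<component>-account-root` pattern, which can vary by addon version, so list the Secrets first:

```shell
# Find the credentials Secret for this cluster
kubectl get secrets -n demo -l app.kubernetes.io/instance=minio-cluster

# Decode the root username and password
# (assuming the Secret is named minio-cluster-minio-account-root)
kubectl get secret minio-cluster-minio-account-root -n demo \
  -o jsonpath='{.data.username}' | base64 -d; echo
kubectl get secret minio-cluster-minio-account-root -n demo \
  -o jsonpath='{.data.password}' | base64 -d; echo
```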
Prometheus Metrics
Built-in metrics endpoint with authentication disabled (MINIO_PROMETHEUS_AUTH_TYPE=public), so Prometheus can scrape it without a bearer token. Compatible with Prometheus scraping and Grafana dashboards.
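With public auth, MinIO's standard cluster-level Prometheus endpoint can be scraped directly — for example, from a pod inside the cluster network (or through a port-forward):

```shell
# MinIO serves Prometheus metrics on the S3 port under /minio/v2/metrics/
curl -s http://minio-cluster-minio:9000/minio/v2/metrics/cluster | head
```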
Multi-Engine Consistency
Manage MinIO alongside PostgreSQL, MySQL, Redis, and 35+ other engines using the same OpsRequest API and kubectl tooling.
KubeBlocks handles the operational complexity of running MinIO on Kubernetes — so your team can focus on building.
Deploy a production-grade MinIO distributed object storage cluster in minutes with erasure coding, TLS, built-in web console, and full Day-2 operations — all open source.