This page describes how KubeBlocks deploys a MinIO distributed object storage cluster on Kubernetes — covering the resource hierarchy, pod internals, erasure coding, and traffic routing.
KubeBlocks models a MinIO cluster as a hierarchy of Kubernetes custom resources:
Cluster → Component → InstanceSet → Pod × N
| Resource | Role |
|---|---|
| Cluster | User-facing declaration — specifies the number of server nodes, drives per node, storage size, and resources |
| Component | Generated automatically; references a ComponentDefinition that describes container specs, lifecycle actions, and services |
| InstanceSet | KubeBlocks custom workload (replaces StatefulSet); manages pods with stable identities |
| Pod | Actual running MinIO server node; each pod gets a unique ordinal and its own PVC |
MinIO requires a minimum of 2 pods (nodes) for a distributed deployment. For production, the chart example recommends at least 4 replicas, and the count must be a multiple of 2 (e.g., 4, 6, 8, 12) to maintain balanced erasure coding. All pods are peers — there is no primary/replica distinction at the S3 layer. KubeBlocks assigns an internal readwrite or notready role label via roleProbe (mc admin info) purely for rolling-update ordering; these roles carry no S3 routing semantics.
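The hierarchy and sizing rules above can be sketched as a Cluster manifest. This is an illustrative fragment: the field names follow the KubeBlocks Cluster API, but the exact `apiVersion` and `componentDef` values should be verified against your KubeBlocks release.

```yaml
# Illustrative KubeBlocks Cluster manifest for a 4-node MinIO deployment.
# Verify apiVersion and componentDef against your installed KubeBlocks version.
apiVersion: apps.kubeblocks.io/v1
kind: Cluster
metadata:
  name: minio-cluster
  namespace: demo
spec:
  terminationPolicy: Delete
  componentSpecs:
    - name: minio
      componentDef: minio        # references the MinIO ComponentDefinition
      replicas: 4                # even count; >= 4 recommended for production
      resources:
        requests:
          cpu: "1"
          memory: 2Gi
      volumeClaimTemplates:
        - name: data             # mounted at /data; one drive per pod
          spec:
            accessModes: [ReadWriteOnce]
            resources:
              requests:
                storage: 20Gi
```

KubeBlocks expands this single declaration into the Component, InstanceSet, and per-pod PVCs described in the table.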
Every MinIO pod runs one main application container, plus an init container that runs replicas-history-config.sh to set up the replica-history configuration before the main process starts:
| Container | Port | Purpose |
|---|---|---|
| minio | 9000 (S3 API + metrics), 9001 (web console) | MinIO object storage server handling S3-compatible API requests. Exposes Prometheus metrics on port 9000 at /minio/v2/metrics/cluster (auth disabled via MINIO_PROMETHEUS_AUTH_TYPE=public). MinIO also provides /minio/health/live and /minio/health/ready endpoints, but this addon does not declare HTTP liveness/readiness probes on the Pod; KubeBlocks instead uses an exec roleProbe (mc admin info) to label each pod readwrite or notready for rolling-update ordering |
Each pod mounts its own PVC (/data), which MinIO treats as a single drive. Multiple PVCs per pod can be configured for higher throughput in advanced deployments.
MinIO uses Reed-Solomon erasure coding instead of traditional replication. Data objects are split into data shards and parity shards across all drives in the deployment:
| Erasure Coding Concept | Description |
|---|---|
| Data shards | Portions of the original object stored across drives |
| Parity shards | Redundancy blocks calculated from the data shards; used to reconstruct lost or corrupted data |
| Default split | MinIO automatically determines an optimal EC ratio (e.g., EC:4 means 4 data + 4 parity shards in an 8-node cluster) |
| Read quorum | N/2 drives must be available to read an object |
| Write quorum | N/2 + 1 drives must be available to write an object |
| Drive failure tolerance | Up to N/2 drives (or nodes) can fail without data loss; reads continue even at exactly N/2 failures, while writes require the N/2 + 1 quorum and therefore tolerate at most N/2 − 1 failures |
Unlike primary/replica replication, erasure coding stores no full copies of any object on any single node. Storage efficiency is significantly higher than 3× replication while providing equivalent or better durability.
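The quorum and efficiency figures above reduce to simple arithmetic. The sketch below uses a hypothetical helper name and assumes the table's worked case of maximum parity (parity = N/2), where read quorum is N/2 and write quorum is N/2 + 1:

```python
# Quorum and storage-efficiency arithmetic for MinIO-style erasure coding,
# assuming maximum parity (parity = N/2) as in the table above.
# ec_profile is an illustrative helper, not a MinIO API.

def ec_profile(total_drives: int) -> dict:
    """Derive shard counts, quorums, and efficiency for parity = N/2."""
    parity = total_drives // 2
    data = total_drives - parity
    return {
        "data_shards": data,
        "parity_shards": parity,
        "read_quorum": total_drives // 2,        # drives needed to read
        "write_quorum": total_drives // 2 + 1,   # drives needed to write
        # Fraction of raw capacity holding user data (vs 1/3 for 3x replication):
        "storage_efficiency": data / total_drives,
    }

# 8-node cluster with EC:4 -> 4 data + 4 parity shards, as in the example above.
profile = ec_profile(total_drives=8)
print(profile["storage_efficiency"])  # 0.5
```

At 50% efficiency this layout stores twice as much user data per raw terabyte as 3× replication (33% efficiency) while still surviving the loss of half the drives for reads.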
MinIO nodes discover each other at startup using the headless service DNS. Each node is started with the full list of peer addresses:
http://{cluster}-minio-{0...N-1}.{cluster}-minio-headless.{namespace}.svc.cluster.local:9000/data
MinIO uses an internal distributed locking mechanism to coordinate object writes and ensure read-after-write consistency across all nodes. There is no external coordinator — all consensus is handled within the MinIO process.
KubeBlocks creates two services for each MinIO cluster:
| Service | Type | Ports | Selector |
|---|---|---|---|
| {cluster}-minio | ClusterIP | 9000 (S3 API), 9001 (console) | all pods |
| {cluster}-minio-headless | Headless | 9000 | all pods |
Because all MinIO nodes are peers and can handle any S3 API request, the ClusterIP service load-balances across all available pods. Client applications connect using the standard S3 API on port 9000:
http://{cluster}-minio.{namespace}.svc.cluster.local:9000
The web console on port 9001 provides a management interface for bucket management, object browsing, and access key administration.
KubeBlocks automatically manages the following MinIO system account. The password is auto-generated and stored in a Secret named {cluster}-{component}-account-root.
| Account | Role | Purpose |
|---|---|---|
root | Admin (superuser) | MinIO root user used for cluster setup, bucket operations, and access-key administration; credentials injected as MINIO_ROOT_USER / MINIO_ROOT_PASSWORD |
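A client that fetches this Secret from the Kubernetes API receives base64-encoded values. The sketch below shows the decoding step with a hard-coded example payload; the `username`/`password` key names are an assumption about the Secret's layout, so check your cluster's actual Secret data:

```python
# Decode root credentials from a KubeBlocks-generated account Secret, the way
# a client would after fetching it via the Kubernetes API.
# The key names ("username"/"password") and values here are assumptions for
# illustration; inspect your actual Secret's .data to confirm.
import base64

# Example of what the Secret's .data map might look like
# (Kubernetes base64-encodes every value):
secret_data = {
    "username": base64.b64encode(b"root").decode(),
    "password": base64.b64encode(b"s3cr3t-example").decode(),
}

username = base64.b64decode(secret_data["username"]).decode()
password = base64.b64decode(secret_data["password"]).decode()
print(username)  # root
```

These decoded values correspond to the MINIO_ROOT_USER / MINIO_ROOT_PASSWORD environment variables injected into each pod.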
MinIO does not require a traditional failover mechanism because erasure coding ensures continuous availability: if up to N/2 − 1 nodes fail simultaneously, all objects remain fully readable and writable, and at exactly N/2 failures objects stay readable while writes pause until the N/2 + 1 write quorum is restored.