MinIO Architecture in KubeBlocks

This page describes how KubeBlocks deploys a MinIO distributed object storage cluster on Kubernetes — covering the resource hierarchy, pod internals, erasure coding, and traffic routing.

[Architecture diagram] Client S3 API and console traffic enters through the ClusterIP service minio-cluster-minio (:9000 S3 API, :9001 web console), which round-robins across all pods; the cluster is symmetric, with no primary. Each pod (minio-cluster-minio-0 through minio-cluster-minio-3 in a 4-node example) runs a single minio container exposing :9000 (S3 API plus Prometheus metrics at /minio/v2/metrics/cluster) and :9001 (console), backed by its own 100Gi PVC for object storage. Reed-Solomon erasure coding spans all drives, with the EC ratio determined by MinIO from the drive count (e.g. EC:4, meaning 4 parity drives, in an 8-node layout; actual parameters vary with topology). A headless service provides stable per-pod DNS for internal use (replication, HA heartbeat, operator probes) and is not a client endpoint.

Resource Hierarchy

KubeBlocks models a MinIO cluster as a hierarchy of Kubernetes custom resources:

Cluster  →  Component  →  InstanceSet  →  Pod × N
| Resource | Role |
|---|---|
| Cluster | User-facing declaration — specifies the number of server nodes, drives per node, storage size, and resources |
| Component | Generated automatically; references a ComponentDefinition that describes container specs, lifecycle actions, and services |
| InstanceSet | KubeBlocks custom workload (replaces StatefulSet); manages pods with stable identities |
| Pod | Actual running MinIO server node; each pod gets a unique ordinal and its own PVC |

MinIO requires a minimum of 2 pods (nodes) for a distributed deployment. For production, the chart example recommends at least 4 replicas, and the count must be a multiple of 2 (e.g., 4, 6, 8, 12) to maintain balanced erasure coding. All pods are peers — there is no primary/replica distinction at the S3 layer. KubeBlocks assigns an internal readwrite or notready role label via roleProbe (mc admin info) purely for rolling-update ordering; these roles carry no S3 routing semantics.
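A minimal Cluster manifest might look like the following. This is a hedged sketch: the field layout follows the KubeBlocks Cluster API, but the exact componentDef name and API version for the MinIO addon may differ in your installation.

```yaml
apiVersion: apps.kubeblocks.io/v1
kind: Cluster
metadata:
  name: minio-cluster
  namespace: demo                   # illustrative namespace
spec:
  terminationPolicy: Delete
  componentSpecs:
    - name: minio                   # pods become minio-cluster-minio-0..3
      componentDef: minio           # assumed addon ComponentDefinition name
      replicas: 4                   # >= 2, and a multiple of 2
      resources:
        requests: { cpu: "1", memory: 2Gi }
        limits: { cpu: "1", memory: 2Gi }
      volumeClaimTemplates:
        - name: data                # mounted at /data in the minio container
          spec:
            accessModes: [ReadWriteOnce]
            resources:
              requests:
                storage: 100Gi
```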

Containers Inside Each Pod

Every MinIO pod runs one main application container, plus an init container that runs replicas-history-config.sh to set up the replica-history configuration before the main process starts:

| Container | Port | Purpose |
|---|---|---|
| minio | 9000 (S3 API + metrics), 9001 (web console) | MinIO object storage server; handles S3-compatible API requests; exposes Prometheus metrics on port 9000 at /minio/v2/metrics/cluster (auth disabled via MINIO_PROMETHEUS_AUTH_TYPE=public). MinIO also provides /minio/health/live and /minio/health/ready endpoints, but this addon does not declare HTTP liveness/readiness probes on the Pod; KubeBlocks uses an exec roleProbe (mc admin info) to label each pod readwrite or notready for rolling-update ordering |

Each pod mounts its own PVC (/data), which MinIO treats as a single drive. Multiple PVCs per pod can be configured for higher throughput in advanced deployments.
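Because the metrics endpoint is unauthenticated, a Prometheus scrape job can target the pods directly. A sketch, assuming KubeBlocks' standard app.kubernetes.io/instance pod label; the job name and namespace are illustrative:

```yaml
scrape_configs:
  - job_name: minio-cluster              # illustrative name
    metrics_path: /minio/v2/metrics/cluster
    kubernetes_sd_configs:
      - role: pod
        namespaces:
          names: [demo]                  # assumed namespace
    relabel_configs:
      # keep only pods belonging to this MinIO cluster, on port 9000
      - source_labels: [__meta_kubernetes_pod_label_app_kubernetes_io_instance]
        regex: minio-cluster
        action: keep
      - source_labels: [__meta_kubernetes_pod_container_port_number]
        regex: "9000"
        action: keep
```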

Erasure Coding and Data Durability

MinIO uses Reed-Solomon erasure coding instead of traditional replication. Data objects are split into data shards and parity shards across all drives in the deployment:

| Erasure Coding Concept | Description |
|---|---|
| Data shards | Portions of the original object stored across drives |
| Parity shards | Redundancy blocks calculated from the data shards; used to reconstruct lost or corrupted data |
| Default split | MinIO automatically determines an optimal EC ratio (e.g., EC:4 means 4 parity shards, giving 4 data + 4 parity in an 8-drive stripe) |
| Read quorum | N/2 drives must be available to read an object |
| Write quorum | N/2 + 1 drives must be available to write an object |
| Drive failure tolerance | Up to N/2 drives (or nodes) can fail without data loss or service interruption |

Unlike primary/replica replication, erasure coding stores no full copies of any object on any single node. Storage efficiency is significantly higher than 3× replication while providing equivalent or better durability.
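As a back-of-the-envelope illustration of the quorum and efficiency figures above (a simplified sketch, not MinIO's actual internals):

```python
def ec_summary(total_drives: int, parity: int) -> dict:
    """Rough erasure-coding math for an N-drive MinIO stripe."""
    data = total_drives - parity
    return {
        "data_shards": data,
        "parity_shards": parity,
        "read_quorum": total_drives // 2,        # N/2 drives to read
        "write_quorum": total_drives // 2 + 1,   # N/2 + 1 drives to write
        "usable_fraction": data / total_drives,  # storage efficiency
    }

# EC:4 in an 8-drive layout: 4 data + 4 parity shards
s = ec_summary(8, 4)
print(s["usable_fraction"])  # 0.5 usable, vs ~0.33 for 3x replication
```

At EC:4 on 8 drives, half the raw capacity is usable, compared with one third under 3× replication, while still tolerating 4 drive failures for reads.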

Peer Discovery and Distributed Coordination

MinIO nodes discover each other at startup using the headless service DNS. Each node is started with the full list of peer addresses:

http://{cluster}-minio-{0...N-1}.{cluster}-minio-headless.{namespace}.svc.cluster.local:9000/data

MinIO uses an internal distributed locking mechanism to coordinate object writes and ensure read-after-write consistency across all nodes. There is no external coordinator — all consensus is handled within the MinIO process.
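The peer address pattern above can be expanded mechanically. For example (a sketch; the cluster name and namespace are illustrative):

```python
def minio_peer_urls(cluster: str, namespace: str, replicas: int) -> list[str]:
    """Expand the {0...N-1} pod ordinal range into concrete peer URLs."""
    return [
        f"http://{cluster}-minio-{i}.{cluster}-minio-headless."
        f"{namespace}.svc.cluster.local:9000/data"
        for i in range(replicas)
    ]

urls = minio_peer_urls("minio-cluster", "demo", 4)
print(urls[0])
# http://minio-cluster-minio-0.minio-cluster-minio-headless.demo.svc.cluster.local:9000/data
```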

Traffic Routing

KubeBlocks creates two services for each MinIO cluster:

| Service | Type | Ports | Selector |
|---|---|---|---|
| {cluster}-minio | ClusterIP | 9000 (S3 API), 9001 (console) | all pods |
| {cluster}-minio-headless | Headless | 9000 | all pods |

Because all MinIO nodes are peers and can handle any S3 API request, the ClusterIP service load-balances across all available pods. Client applications connect using the standard S3 API on port 9000:

http://{cluster}-minio.{namespace}.svc.cluster.local:9000

The web console on port 9001 provides a management interface for bucket management, object browsing, and access key administration.
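An application consuming the cluster would typically point its S3 client at the ClusterIP service. For example, as environment variables in a Deployment (a hedged sketch: the variable names depend on your S3 client, and the Secret key names are assumptions):

```yaml
# Fragment of an application Deployment (illustrative)
env:
  - name: S3_ENDPOINT
    value: http://minio-cluster-minio.demo.svc.cluster.local:9000
  - name: S3_FORCE_PATH_STYLE        # MinIO is usually addressed path-style
    value: "true"
  - name: AWS_ACCESS_KEY_ID          # from the root account Secret (see System Accounts)
    valueFrom:
      secretKeyRef:
        name: minio-cluster-minio-account-root
        key: username                # assumed key name
  - name: AWS_SECRET_ACCESS_KEY
    valueFrom:
      secretKeyRef:
        name: minio-cluster-minio-account-root
        key: password                # assumed key name
```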

System Accounts

KubeBlocks automatically manages the following MinIO system account. The password is auto-generated and stored in a Secret named {cluster}-{component}-account-root.

| Account | Role | Purpose |
|---|---|---|
| root | Admin (superuser) | MinIO root user used for cluster setup, bucket operations, and access-key administration; credentials injected as MINIO_ROOT_USER / MINIO_ROOT_PASSWORD |

Data Durability Without Traditional Failover

MinIO does not require a traditional failover mechanism because erasure coding ensures continuous availability:

  • Node failure: MinIO transparently reconstructs requested objects from the remaining shards on healthy nodes
  • Node recovery: When a failed node restarts and its drives become available, MinIO performs a background healing process to repair any degraded objects
  • Drive failure: Healing runs continuously in the background, rebuilding parity from remaining drives
  • No data loss: As long as fewer than N/2 nodes fail simultaneously, all objects remain fully readable and writable
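The failure-tolerance claim can be illustrated with a toy model: in a D-data + P-parity stripe, an object stays readable as long as at least D of the D+P shards survive (a simplification of real MinIO behavior):

```python
def object_readable(data_shards: int, parity_shards: int, failed: int) -> bool:
    """Reed-Solomon property: any `data_shards` surviving shards rebuild the object."""
    surviving = data_shards + parity_shards - failed
    return surviving >= data_shards

# 4 data + 4 parity (EC:4 on 8 drives): tolerates up to 4 failed drives
print([object_readable(4, 4, f) for f in range(6)])
# [True, True, True, True, True, False]
```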

© 2026 KUBEBLOCKS INC