MinIO Architecture in KubeBlocks

This page describes how KubeBlocks deploys a MinIO distributed object storage cluster on Kubernetes — covering the resource hierarchy, pod internals, erasure coding, and traffic routing.

Resource Hierarchy

KubeBlocks models a MinIO cluster as a hierarchy of Kubernetes custom resources:

```
Cluster  →  Component  →  InstanceSet  →  Pod × N
```
| Resource | Role |
| --- | --- |
| Cluster | User-facing declaration — specifies the number of server nodes, drives per node, storage size, and resources |
| Component | Generated automatically; references a ComponentDefinition that describes container specs, lifecycle actions, and services |
| InstanceSet | KubeBlocks custom workload (replaces StatefulSet); manages pods with stable identities |
| Pod | Actual running MinIO server node; each pod gets a unique ordinal and its own PVC |

MinIO requires a minimum of 4 pods (nodes) for a distributed deployment to enable erasure coding. All pods in a deployment are peers — there is no primary/replica distinction.
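A Cluster declaration along these lines would produce the hierarchy above. This is a minimal sketch: the `minio` ComponentDefinition reference, names, and sizes are illustrative assumptions — check the installed addon for exact values.

```yaml
# Hypothetical example — the componentDef value and resource sizes are
# assumptions, not values confirmed by this page.
apiVersion: apps.kubeblocks.io/v1
kind: Cluster
metadata:
  name: demo
  namespace: default
spec:
  terminationPolicy: Delete
  componentSpecs:
    - name: minio
      componentDef: minio        # assumed ComponentDefinition name
      replicas: 4                # minimum for distributed mode / erasure coding
      resources:
        requests:
          cpu: "1"
          memory: 2Gi
      volumeClaimTemplates:
        - name: data             # mounted at /data inside the minio container
          spec:
            accessModes: [ReadWriteOnce]
            resources:
              requests:
                storage: 20Gi
```

KubeBlocks then generates the Component and InstanceSet, which in turn creates pods `demo-minio-0` through `demo-minio-3`, each with its own PVC.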

Containers Inside Each Pod

Every MinIO pod runs three containers:

| Container | Port | Purpose |
| --- | --- | --- |
| minio | 9000 (S3 API), 9001 (web console) | MinIO object storage server handling S3-compatible API requests |
| kbagent | 5001 | Health probe endpoint — KubeBlocks queries GET /v1.0/getrole periodically |
| metrics-exporter | 9187 | Prometheus metrics exporter |

Each pod mounts its own PVC (/data), which MinIO treats as a single drive. Multiple PVCs per pod can be configured for higher throughput in advanced deployments.

Erasure Coding and Data Durability

MinIO uses Reed-Solomon erasure coding instead of traditional replication. Data objects are split into data shards and parity shards across all drives in the deployment:

| Erasure Coding Concept | Description |
| --- | --- |
| Data shards | Portions of the original object stored across drives |
| Parity shards | Redundancy blocks calculated from the data shards; used to reconstruct lost or corrupted data |
| Default split | MinIO automatically determines an optimal EC ratio (e.g., EC:4 means 4 parity shards, so an 8-drive stripe holds 4 data + 4 parity shards) |
| Read quorum | At least N/2 drives must be available to read an object |
| Write quorum | At least N/2 + 1 drives must be available to write an object |
| Drive failure tolerance | Up to N/2 drives (or nodes) can fail without data loss or read interruption |

Unlike primary/replica replication, erasure coding stores no full copies of any object on any single node. Storage efficiency is significantly higher than 3× replication while providing equivalent or better durability.
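The storage-efficiency claim can be checked with some back-of-the-envelope arithmetic. This sketch only models the shard counts from the table above — it is an illustration of the ratios, not MinIO's Reed-Solomon implementation:

```python
# Illustration of erasure-coding storage arithmetic — not MinIO's actual
# implementation, which applies Reed-Solomon coding per object stripe.

def ec_layout(total_drives: int, parity: int) -> dict:
    """Shard counts, storage efficiency, and read tolerance for one stripe."""
    data = total_drives - parity
    return {
        "data_shards": data,
        "parity_shards": parity,
        # Fraction of raw capacity usable for data (vs. 1/3 for 3x replication).
        "storage_efficiency": data / total_drives,
        # Reads survive as long as any `data` shards remain reachable.
        "read_failure_tolerance": parity,
    }

# The EC:4 example from the table: 8 drives, 4 parity shards.
layout = ec_layout(total_drives=8, parity=4)
print(layout)  # storage_efficiency 0.5, versus ~0.33 for 3x replication
```

With the default N/2 parity split, half the raw capacity stores data — a 50% overhead rather than the 200% overhead of keeping three full copies.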

Peer Discovery and Distributed Coordination

MinIO nodes discover each other at startup using the headless service DNS. Each node is started with the full list of peer addresses:

```
http://{cluster}-minio-{0..N-1}.{cluster}-minio-headless.{namespace}.svc.cluster.local:9000/data
```

MinIO uses an internal distributed locking mechanism to coordinate object writes and ensure read-after-write consistency across all nodes. There is no external coordinator — all consensus is handled within the MinIO process.
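The peer-address template above expands to one URL per pod. A small sketch of that expansion (cluster and namespace names here are placeholders, not values from this page):

```python
# Sketch: expanding the headless-service peer template for an N-pod cluster.
# Each pod ordinal maps to a stable per-pod DNS name.

def peer_addresses(cluster: str, namespace: str, replicas: int) -> list[str]:
    """One peer URL per pod, as passed to each MinIO server at startup."""
    return [
        f"http://{cluster}-minio-{i}.{cluster}-minio-headless."
        f"{namespace}.svc.cluster.local:9000/data"
        for i in range(replicas)
    ]

for url in peer_addresses("demo", "default", 4):
    print(url)
```

Because the InstanceSet gives every pod a stable ordinal and DNS identity, this list stays valid across pod restarts — a node that comes back rejoins under the same address.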

Traffic Routing

KubeBlocks creates two services for each MinIO cluster:

| Service | Type | Ports | Selector |
| --- | --- | --- | --- |
| {cluster}-minio | ClusterIP | 9000 (S3 API), 9001 (console) | all pods |
| {cluster}-minio-headless | Headless | 9000 | all pods |

Because all MinIO nodes are peers and can handle any S3 API request, the ClusterIP service load-balances across all available pods. Client applications connect using the standard S3 API on port 9000:

```
http://{cluster}-minio.{namespace}.svc.cluster.local:9000
```

The web console on port 9001 provides a management interface for bucket management, object browsing, and access key administration.
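Putting the endpoint patterns together, a client-side helper might look like this. It is a sketch under the naming scheme shown above; any S3-compatible SDK (boto3, minio-py, the AWS SDKs) can then be pointed at the S3 endpoint, typically with path-style addressing:

```python
# Sketch: in-cluster endpoints derived from the service names above.
# Cluster/namespace values are placeholders.

def minio_endpoints(cluster: str, namespace: str) -> dict:
    base = f"{cluster}-minio.{namespace}.svc.cluster.local"
    return {
        "s3_api": f"http://{base}:9000",   # S3 API, load-balanced across all pods
        "console": f"http://{base}:9001",  # web management console
    }

print(minio_endpoints("demo", "default")["s3_api"])
```

Since any pod can serve any request, there is no need for a separate read or write service — a single ClusterIP endpoint suffices.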

Data Durability Without Traditional Failover

MinIO does not require a traditional failover mechanism because erasure coding ensures continuous availability:

  • Node failure: MinIO transparently reconstructs requested objects from the remaining shards on healthy nodes
  • Node recovery: When a failed node restarts and its drives become available, MinIO performs a background healing process to repair any degraded objects
  • Drive failure: Healing runs continuously in the background, rebuilding parity from remaining drives
  • No data loss: with the default N/2 parity split, objects remain fully readable with up to N/2 drives down; writes additionally require a quorum of N/2 + 1 available drives
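The quorum rules above can be sketched as a simple availability check, assuming the default N/2 parity split (an illustration of the arithmetic, not MinIO's code):

```python
# Availability under drive failures, for the default EC split of N/2 parity.
# Illustrative only — MinIO evaluates quorum per erasure set, not cluster-wide.

def availability(total_drives: int, failed: int) -> dict:
    parity = total_drives // 2                 # default split: N/2 parity shards
    healthy = total_drives - failed
    return {
        "readable": healthy >= total_drives - parity,   # read quorum: N/2 drives
        "writable": healthy >= total_drives // 2 + 1,   # write quorum: N/2 + 1
    }

print(availability(8, failed=3))  # {'readable': True, 'writable': True}
print(availability(8, failed=4))  # reads still succeed; writes lose quorum
```

This is why exactly half the drives failing is the boundary case: reads still meet quorum, but the cluster becomes read-only until healing restores enough drives for write quorum.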
