Open Source · Production-Grade · S3-Compatible

KubeBlocks MinIO Operator for Kubernetes

Deploy production-grade MinIO distributed object storage on Kubernetes in minutes. S3-compatible, erasure-coded, with a built-in web console and TLS support.

Try Playground Free → Read the Docs

2 Supported Versions
4+ Min Nodes (Production)
100% Open Source

Deploy MinIO in 2 steps

1

Install KubeBlocks & MinIO Addon

# Add Helm repo
helm repo add kubeblocks https://apecloud.github.io/helm-charts
helm repo update

# Install KubeBlocks
helm install kubeblocks kubeblocks/kubeblocks \
  --namespace kb-system --create-namespace

# Install MinIO addon
helm upgrade -i kb-addon-minio kubeblocks/minio \
  --namespace kb-system
2

Create a MinIO Cluster

apiVersion: apps.kubeblocks.io/v1
kind: Cluster
metadata:
  name: minio-cluster
  namespace: demo
spec:
  terminationPolicy: Delete
  componentSpecs:
    - name: minio
      componentDef: minio
      replicas: 4
      serviceVersion: "2025.10.15"

Trusted by Engineering Teams at Scale

BONC Cloud
China Mobile Cloud
China Telecom Cloud
Tencent
Xiaomi
Ping An
VIP.com
Kwai
Tiger Brokers
CITIC Securities
SealOS
FastGPT
KubeSphere
Momenta
360
Tilaa
Olares
Changan Automobile
State Grid
MinIO Architecture

Distributed Object Storage. One Operator.

KubeBlocks deploys MinIO in distributed mode where all nodes are equal and data is protected by erasure coding — in a symmetric single-drive-per-node layout, data remains intact even when up to half the nodes fail.

Clusters start at 4 nodes and grow in multiples of 2. Each node stores a portion of the erasure-coded data shards; when a node fails, the surviving nodes reconstruct its data from parity shards. KubeBlocks restarts the failed pod, and it rejoins and syncs automatically.

✓ All nodes are symmetric: no primary/replica distinction, no single point of failure
✓ Erasure coding distributes data and parity shards across all nodes for data durability
✓ S3-compatible API on :9000: works with any S3 SDK, CLI, or tool out of the box
✓ Built-in web console on :9001: browse buckets, manage objects, configure users
✓ Peer discovery via Kubernetes headless service DNS: no external coordinator needed
✓ TLS support for both S3 API and web console via KubeBlocks cert management

Architecture at a glance:

Application / client traffic reaches the Service minio-cluster-minio (ClusterIP), which exposes the S3 API on :9000 and the web console on :9001. Requests round-robin across all pods; there is no primary, so any pod can serve any request.

Pods minio-0 through minio-3 (cluster name minio-cluster → pod minio-cluster-minio-0) are identically configured: :9000 serves the S3 API plus metrics (/minio/v2/metrics/cluster), :9001 serves the console, and each pod mounts its own 100Gi PVC (data-0 … data-3) for object storage.

Reed-Solomon erasure coding protects the data; the EC ratio is determined by MinIO from the drive count (e.g. EC:4, meaning 4 parity drives, in an 8-node layout), and actual parameters vary with topology.

A headless service provides stable per-pod DNS for internal use (replication, HA heartbeat, operator probes); it is not a client endpoint.
Day-2 Operations

Every Operation Declared as a Kubernetes Resource

No SSH into pods, no shell scripts. Submit an OpsRequest and KubeBlocks handles the rest.

Availability & Scaling

✓ Horizontal Scale-Out

Add nodes in multiples of 2. New nodes join the cluster and participate in erasure coding after a cluster restart. Min 4 nodes for production.
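As an illustrative sketch, a scale-out from 4 to 6 nodes can be declared as an OpsRequest (the API group and field names below follow the KubeBlocks 1.x OpsRequest schema; verify them against your installed version):

```yaml
apiVersion: operations.kubeblocks.io/v1alpha1
kind: OpsRequest
metadata:
  name: minio-scale-out
  namespace: demo
spec:
  clusterName: minio-cluster
  type: HorizontalScaling
  horizontalScaling:
    - componentName: minio
      scaleOut:
        replicaChanges: 2   # 4 -> 6 nodes; keep changes in multiples of 2
```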

✓ Vertical Scaling

Resize CPU and memory on running nodes via OpsRequest with a rolling pod restart strategy — the cluster continues serving during the operation.
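A vertical scaling request might look like this (resource values are illustrative; field names follow the KubeBlocks OpsRequest API and may differ by version):

```yaml
apiVersion: operations.kubeblocks.io/v1alpha1
kind: OpsRequest
metadata:
  name: minio-scale-up
  namespace: demo
spec:
  clusterName: minio-cluster
  type: VerticalScaling
  verticalScaling:
    - componentName: minio
      requests:
        cpu: "1"
        memory: 2Gi
      limits:
        cpu: "2"
        memory: 4Gi
```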

✓ Rolling Restart

Controlled pod restarts that keep the cluster serving S3 requests throughout the operation.
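A restart is one of the simplest OpsRequests; as a sketch (API group per the KubeBlocks 1.x schema):

```yaml
apiVersion: operations.kubeblocks.io/v1alpha1
kind: OpsRequest
metadata:
  name: minio-restart
  namespace: demo
spec:
  clusterName: minio-cluster
  type: Restart
  restart:
    - componentName: minio   # pods restart one at a time, rolling
```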

✓ Stop / Start

Suspend the cluster to eliminate compute cost; resume with full object data and bucket configuration intact.
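Suspending the cluster is likewise a single declarative request (sketch, per the KubeBlocks OpsRequest API):

```yaml
apiVersion: operations.kubeblocks.io/v1alpha1
kind: OpsRequest
metadata:
  name: minio-stop
  namespace: demo
spec:
  clusterName: minio-cluster
  type: Stop   # PVCs are retained; a type: Start request resumes the cluster
```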

✓ Expose via LoadBalancer

Expose the S3 API or web console externally via a LoadBalancer or NodePort service for out-of-cluster access.
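An Expose request can attach an external service; the shape below is a sketch, and the exact fields of the expose spec (switch, service list) should be checked against your KubeBlocks version:

```yaml
apiVersion: operations.kubeblocks.io/v1alpha1
kind: OpsRequest
metadata:
  name: minio-expose
  namespace: demo
spec:
  clusterName: minio-cluster
  type: Expose
  expose:
    - componentName: minio
      switch: Enable
      services:
        - name: internet
          serviceType: LoadBalancer   # or NodePort
```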

Configuration & Observability

✓ TLS Encryption

Enable TLS for both the S3 API (:9000) and web console (:9001) via KubeBlocks certificate management.
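TLS is requested declaratively on the component spec; as a sketch (the tls/issuer fields follow the KubeBlocks Cluster API, where issuer name KubeBlocks asks the operator for self-signed certs):

```yaml
  componentSpecs:
    - name: minio
      componentDef: minio
      replicas: 4
      tls: true
      issuer:
        name: KubeBlocks   # operator-managed certs; UserProvided lets you supply your own CA
```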

✓ Auto-Create Buckets

Set the MINIO_BUCKETS env var to automatically create buckets during cluster initialization.
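For example, the env var can be set on the component spec; the bucket names and the comma-separated value format here are assumptions for illustration:

```yaml
  componentSpecs:
    - name: minio
      componentDef: minio
      replicas: 4
      env:
        - name: MINIO_BUCKETS
          value: "logs,backups"   # hypothetical bucket names; check the addon docs for the expected format
```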

✓ Credential Management

Root credentials are auto-generated and stored in a Kubernetes Secret. Retrieve via kubectl for SDK or console access.

✓ Prometheus Metrics

Built-in metrics endpoint with public auth type. Compatible with Prometheus scraping and Grafana dashboards.
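Since the metrics path is /minio/v2/metrics/cluster on the S3 port, a minimal Prometheus scrape job might look like this (the service DNS name assumes the demo namespace used above):

```yaml
scrape_configs:
  - job_name: minio
    metrics_path: /minio/v2/metrics/cluster
    static_configs:
      - targets: ["minio-cluster-minio.demo.svc.cluster.local:9000"]
```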

✓ Multi-Engine Consistency

Manage MinIO alongside PostgreSQL, MySQL, Redis, and 35+ other engines using the same OpsRequest API and kubectl tooling.

Capabilities

Built for Production MinIO

KubeBlocks handles the operational complexity of running MinIO on Kubernetes — so your team can focus on building.

🛡️
Erasure Coding — Data Durability Across Nodes
MinIO splits each object into data and parity shards distributed across all nodes. Surviving nodes reconstruct lost data from parity shards automatically.
Illustrative 4-node EC:2 layout: minio-0 and minio-1 hold data shards D0–D3, while minio-2 and minio-3 hold parity shards P0–P3. Result: up to 2 node failures tolerated in the 4-node cluster, with 0 data loss.
No RAID, no replication. Erasure coding uses less storage than full replication while tolerating more failures. In this symmetric 4-node, single-drive layout, 2 of 4 nodes can fail without data loss. Actual tolerance depends on drive count and pool layout.
From the Blog

Go Deeper on MinIO on Kubernetes

Managing Over 6,000 Self-Hosted Databases Without a DBA

How Sealos used KubeBlocks to manage 6,000+ self-hosted databases across four availability zones — architecture, HA, backup, and operations.

We Let an AI Agent Manage Our Databases. Here's Why Most Operators Failed It.

We tested AI agents against traditional Kubernetes database operators. The results revealed a fundamental mismatch between fragmented operator APIs and how LLMs reason.

Validating KubeBlocks Addon High Availability with Chaos Mesh

How to leverage Chaos Mesh for chaos engineering to validate and enhance KubeBlocks high availability capabilities through systematic fault injection testing.

Get Started

MinIO on Kubernetes, the Easy Way

Deploy a production-grade MinIO distributed object storage cluster in minutes with erasure coding, TLS, built-in web console, and full Day-2 operations — all open source.

Try Playground Free → Talk to the Team

© 2026 KUBEBLOCKS INC