  1. Semisync Architecture
    1. Resource Hierarchy
    2. Containers Inside Each Pod
    3. High Availability
    4. Automatic Failover
    5. Traffic Routing
    6. Semisync + ProxySQL
  2. MGR Architecture (Group Replication)
    1. Resource Hierarchy
    2. Containers Inside Each Pod
    3. High Availability
    4. Automatic Failover
    5. Traffic Routing
    6. MGR + ProxySQL
  3. Orchestrator Architecture
    1. Resource Hierarchy
    2. Containers Inside Each Pod
    3. How Orchestrator Manages Failover
    4. Traffic Routing
    5. Orchestrator + ProxySQL
  4. System Accounts

MySQL Architecture in KubeBlocks

KubeBlocks supports three distinct MySQL HA architectures:

| Architecture | HA Mechanism | Use Case |
| --- | --- | --- |
| Semisync | Async/semi-sync binlog replication; syncer roleProbe + KubeBlocks drive failover | General HA; single primary + replicas |
| MGR (Group Replication) | Paxos-based group consensus on :33061; group elects new primary automatically | Stronger consistency guarantees; multi-region deployments |
| Orchestrator | Semisync replication + external Orchestrator component manages failover | Large-scale clusters; topology visualization and manual control |

Each architecture can optionally be combined with a ProxySQL frontend component, which provides connection pooling, query routing, and read/write splitting.


Semisync Architecture

MySQL Semisync uses native binlog replication between a primary and one or more replicas. Semi-synchronous mode requires at least one replica to acknowledge each transaction before the primary commits, reducing the risk of data loss on failover. KubeBlocks drives failover via a syncer-based roleProbe that runs /tools/syncerctl getrole inside the mysql container.

[Architecture diagram: clients connect through the optional ProxySQL Service ({cluster}-proxysql, ClusterIP :6033) or directly through the {cluster}-mysql ClusterIP Service (:3306, selector role=primary). ProxySQL splits traffic: writes (INSERT / UPDATE / DELETE) go to the primary only, reads (SELECT) are load-balanced across replicas. Pods mysql-0 (primary) and mysql-1 / mysql-2 (replicas) each run the mysql (:3306), syncer (role probe via syncerctl), and mysql-exporter (:9104 metrics) containers, each with its own PVC (data-{n}, 20Gi); the primary streams binlogs to the replicas via async / semi-sync replication. Direct pod access goes through the headless service by pod FQDN on :3306.]

Resource Hierarchy

Cluster  →  Component (mysql)  →  InstanceSet  →  Pod × N
| Resource | Role |
| --- | --- |
| Cluster | User-facing declaration — specifies topology, replica count, storage size, and resources |
| Component | Generated automatically; references a ComponentDefinition describing container specs, lifecycle actions, and services |
| InstanceSet | KubeBlocks custom workload (replaces StatefulSet); manages pods with stable identities and role awareness |
| Pod | Actual running MySQL instance; each pod gets a unique ordinal and its own PVC |
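
The hierarchy above starts from a single user-authored resource. A minimal Cluster manifest for the semisync topology might look like the following sketch (the cluster name, namespace, service version, and sizing are illustrative, and field names follow the KubeBlocks v1 Cluster API, which may differ across releases):

```yaml
apiVersion: apps.kubeblocks.io/v1
kind: Cluster
metadata:
  name: mycluster          # illustrative name
  namespace: demo
spec:
  clusterDef: mysql
  topology: semisync       # one of the topologies listed above
  componentSpecs:
  - name: mysql
    serviceVersion: "8.0.35"   # assumed available version
    replicas: 3                # 1 primary + 2 replicas
    resources:
      requests: {cpu: "1", memory: 1Gi}
      limits: {cpu: "1", memory: 1Gi}
    volumeClaimTemplates:
    - name: data             # becomes PVC data-{ordinal}
      spec:
        accessModes: [ReadWriteOnce]
        resources:
          requests:
            storage: 20Gi
```

KubeBlocks expands this into the Component, InstanceSet, and three Pods described in the table.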

Containers Inside Each Pod

| Container | Port | Purpose |
| --- | --- | --- |
| mysql | 3306, 3601 (ha replication) | MySQL database engine; roleProbe runs /tools/syncerctl getrole inside this container |
| mysql-exporter | 9104 | Prometheus metrics exporter (mysqld_exporter) |

Each pod also runs multiple init containers on startup: init-syncer (copies syncer and syncerctl to /tools), init-data (sets up data directories and copies MySQL plugins), init-xtrabackup (copies Percona XtraBackup to /tools), and init-jemalloc (copies the jemalloc library to /tools).

High Availability

| Concept | Description |
| --- | --- |
| Primary | Receives all writes; streams binlog events to replicas |
| Semi-sync replica | Must acknowledge each transaction before the primary commits (when semi-sync is enabled) |
| Failover trigger | syncer roleProbe fails repeatedly → KubeBlocks selects the replica with the most advanced binlog position |
| Promotion | KubeBlocks calls the switchover API to promote the chosen replica; remaining replicas repoint to the new primary |
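
The semi-sync acknowledgement window is tunable at runtime. As a hedged sketch, a Reconfiguring OpsRequest could adjust the timeout before the primary falls back to async replication (the operations.kubeblocks.io/v1alpha1 API group, the reconfigures field shape, and the cluster name are assumptions; the exact variable name depends on the MySQL version and which semisync plugin the addon loads):

```yaml
apiVersion: operations.kubeblocks.io/v1alpha1
kind: OpsRequest
metadata:
  name: mycluster-set-semisync-timeout   # illustrative name
  namespace: demo
spec:
  clusterName: mycluster
  type: Reconfiguring
  reconfigures:
  - componentName: mysql
    parameters:
    # Wait up to 10 s for a replica ACK before degrading to async.
    - key: rpl_semi_sync_master_timeout   # assumed plugin variable name
      value: "10000"
```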

Automatic Failover

  1. Primary pod crashes — replicas stop receiving binlog events
  2. syncer roleProbe fails — syncerctl getrole returns an error repeatedly; detection takes approximately 30 seconds
  3. KubeBlocks marks the primary unavailable and selects the replica with the most advanced binlog position as the promotion candidate
  4. Chosen replica is promoted — KubeBlocks calls the switchover lifecycle action; the replica stops replicating and takes over as primary
  5. Remaining replicas repoint to the new primary via CHANGE REPLICATION SOURCE TO
  6. Pod label updated — kubeblocks.io/role=primary applied to the new primary pod
  7. Service endpoints switch — the {cluster}-mysql ClusterIP service automatically routes writes to the new primary

Failover typically completes within 30–60 seconds.
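
The same promotion path can also be exercised proactively as a planned switchover, for example before maintenance on the primary node. A hedged sketch (names are illustrative; the OpsRequest API group/version and switchover field names vary by KubeBlocks release):

```yaml
apiVersion: operations.kubeblocks.io/v1alpha1
kind: OpsRequest
metadata:
  name: mycluster-switchover   # illustrative name
  namespace: demo
spec:
  clusterName: mycluster
  type: Switchover
  switchover:
  - componentName: mysql
    # The instance currently holding the primary role; KubeBlocks
    # promotes a healthy replica and repoints the others.
    instanceName: mycluster-mysql-0
```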

Traffic Routing

| Service | Type | Port | Selector |
| --- | --- | --- | --- |
| {cluster}-mysql | ClusterIP | 3306 | kubeblocks.io/role=primary |
| {cluster}-mysql-headless | Headless | 3306, 3601 (ha), chart metrics port (default 9104) | all pods |

KubeBlocks headless Services follow the InstanceSet pattern: every declared container port appears as a ServicePort (semisync / non-Orchestrator builds: 3306, 3601, exporter).

  • Write traffic: {cluster}-mysql:3306 (always routes to current primary)
  • Direct pod access: {pod-name}.{cluster}-mysql-headless.{namespace}.svc.cluster.local:3306
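
Beyond the built-in Services, a cluster can declare additional role-selecting Services, e.g. to expose the current primary outside the Kubernetes cluster. A sketch of such a fragment, assuming the spec.services field of the KubeBlocks v1 Cluster API (the service name and LoadBalancer choice are illustrative):

```yaml
# Fragment of a Cluster spec — adds an external endpoint that
# always targets the pod labeled kubeblocks.io/role=primary.
spec:
  services:
  - name: mysql-external        # illustrative name
    serviceName: mysql-external
    componentSelector: mysql
    roleSelector: primary
    spec:
      type: LoadBalancer
      ports:
      - name: mysql
        port: 3306
        targetPort: mysql
```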

Semisync + ProxySQL

Adding proxysql to the topology inserts a ProxySQL Component in front of MySQL. ProxySQL provides connection pooling, automatic read/write splitting, and query caching. Client traffic goes to {cluster}-proxysql-proxy-server:6033 (the ClusterIP Service from the ProxySQL ComponentDefinition, where serviceName is proxy-server) instead of directly to MySQL pods. Per-pod Services {cluster}-proxysql-proxy-ordinal-<n> expose the same DB port on each ProxySQL pod when you need ordinal-specific endpoints.
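
Selecting the ProxySQL variant is a topology choice plus a second componentSpec. A hedged sketch (names and sizing illustrative; field names per the KubeBlocks v1 Cluster API):

```yaml
apiVersion: apps.kubeblocks.io/v1
kind: Cluster
metadata:
  name: mycluster          # illustrative name
  namespace: demo
spec:
  clusterDef: mysql
  topology: semisync-proxysql
  componentSpecs:
  - name: mysql
    replicas: 3
    volumeClaimTemplates:
    - name: data
      spec:
        accessModes: [ReadWriteOnce]
        resources: {requests: {storage: 20Gi}}
  - name: proxysql         # the frontend component
    replicas: 2
```

Applications would then connect to mycluster-proxysql-proxy-server:6033 rather than to the MySQL Services directly.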


MGR Architecture (Group Replication)

MySQL Group Replication (MGR) runs a Paxos-based group consensus protocol on port :33061 across all member pods. All transaction commits require certification by the group. In single-primary mode, one pod is elected PRIMARY by the group; the others are SECONDARY. On primary failure, the group autonomously elects a new primary without external coordination.

[Architecture diagram: MySQL Group Replication (MGR) in KubeBlocks — single-primary mode, Paxos-based consensus on :33061, automatic group failover. Clients connect through the optional ProxySQL Service ({cluster}-proxysql, ClusterIP :6033, read/write splitting: writes to the primary only, SELECTs load-balanced across secondaries) or the {cluster}-mysql ClusterIP Service (:3306, roleSelector: primary). Pods mysql-0 (PRIMARY) and mysql-1 / mysql-2 (SECONDARY) each run the mysql (:3306, :33061 GR), syncer (syncerctl), and mysql-exporter (:9104) containers, each with a PVC mounted at /data/mysql. All members exchange heartbeats and transaction certification messages over the :33061 group communication channel.]

Resource Hierarchy

Cluster  →  Component (mysql)  →  InstanceSet  →  Pod × N

The resource hierarchy is identical to Semisync. The difference is the replication protocol running inside each pod (group_replication_start_on_boot=off, group_replication_single_primary_mode=ON). KubeBlocks starts group replication manually via the lifecycle action rather than relying on start_on_boot.
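
From the user's side, selecting MGR is again just a topology choice; the addon wires up the group-replication settings and lifecycle actions. A hedged sketch (names and sizing illustrative; field names per the KubeBlocks v1 Cluster API):

```yaml
apiVersion: apps.kubeblocks.io/v1
kind: Cluster
metadata:
  name: mycluster          # illustrative name
  namespace: demo
spec:
  clusterDef: mysql
  topology: mgr            # Group Replication instead of semisync
  componentSpecs:
  - name: mysql
    # Odd member counts keep a clear Paxos majority (see quorum table below is
    # not referenced here: 3 members tolerate 1 failure, 5 tolerate 2).
    replicas: 3
    volumeClaimTemplates:
    - name: data
      spec:
        accessModes: [ReadWriteOnce]
        resources: {requests: {storage: 20Gi}}
```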

Containers Inside Each Pod

| Container | Port | Purpose |
| --- | --- | --- |
| mysql | 3306, 3601 (ha replication), 33061 (GCS) | MySQL database engine; port 33061 is used for Group Replication communication (GCS — Group Communication System); port 3601 is the ha replication port |
| mysql-exporter | 9104 | Prometheus metrics exporter |

The roleProbe runs /tools/syncerctl getrole inside the mysql container. Each pod also runs multiple init containers on startup: init-syncer (copies syncer and syncerctl to /tools), init-data (sets up data directories and copies plugins), and init-jemalloc (copies the jemalloc library to /tools).

High Availability

| HA Mechanism | Description |
| --- | --- |
| Paxos group consensus | All members exchange heartbeats and transaction certification messages on :33061 |
| Conflict detection | Each transaction is certified by the group before commit — conflicts are detected and rolled back |
| Group-driven election | When the primary fails, the GCS expels it and the remaining members elect a new primary |
| syncer role update | syncer roleProbe detects the new primary role → updates the kubeblocks.io/role label → ClusterIP service endpoints switch |
| Quorum tolerance | A 3-member group tolerates 1 failure; a 5-member group tolerates 2 |
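
To move from tolerating one failure to tolerating two, the group can be grown from 3 to 5 members with a horizontal-scaling operation. A hedged sketch (the scaleOut/replicaChanges shape is assumed from newer OpsRequest APIs and may be a plain replicas field in older releases):

```yaml
apiVersion: operations.kubeblocks.io/v1alpha1
kind: OpsRequest
metadata:
  name: mycluster-scale-out   # illustrative name
  namespace: demo
spec:
  clusterName: mycluster
  type: HorizontalScaling
  horizontalScaling:
  - componentName: mysql
    scaleOut:
      replicaChanges: 2   # 3 -> 5 members; quorum now survives 2 failures
```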

Automatic Failover

  1. Primary pod becomes unreachable — group communication times out for the failed member
  2. GCS expulsion — the remaining members detect the failure via the Group Communication System (GCS) and expel the unreachable member
  3. Group elects a new PRIMARY — the remaining certified secondaries autonomously elect a new primary; no external coordinator is needed
  4. syncer roleProbe detects the new PRIMARY — syncerctl getrole returns primary for the elected pod → kubeblocks.io/role=primary label updated
  5. Service endpoints switch — the {cluster}-mysql ClusterIP service automatically routes writes to the new primary

Failover typically completes within 5–15 seconds. Group-internal primary election is near-instant after expulsion; the subsequent label update and service endpoint switch depend on the syncer roleProbe cycle.

Traffic Routing

| Service | Type | Port | Selector |
| --- | --- | --- | --- |
| {cluster}-mysql | ClusterIP | 3306 | kubeblocks.io/role=primary |
| {cluster}-mysql-headless | Headless | 3306, 3601, 33061 (Group Replication), exporter (default 9104) | all pods |

Traffic routing matches Semisync except the headless Service also publishes 33061 from the mysql container.

The roleSelector: primary mechanism on the ClusterIP service works the same way — the syncer roleProbe detects the current GR role and updates the pod label accordingly.

MGR + ProxySQL

The mgr-proxysql topology adds ProxySQL for connection pooling and read/write splitting, identical to the Semisync+ProxySQL variant — clients connect to {cluster}-proxysql-proxy-server:6033 (and optionally per-pod {cluster}-proxysql-proxy-ordinal-<n>).


Orchestrator Architecture

The Orchestrator topology pairs semi-sync MySQL replication with an external Orchestrator component. Orchestrator continuously monitors the replication topology via MySQL's SHOW SLAVE STATUS and performance_schema, detects failures, and drives automated failover and topology recovery.

[Architecture diagram: MySQL + Orchestrator in KubeBlocks — semi-sync binlog replication; Orchestrator monitors the topology and drives automated failover. Clients connect through the optional ProxySQL Service (:6033, read/write splitting) or the {cluster}-mysql-server ClusterIP Service (:3306, no roleSelector). Pods mysql-0 (primary) and mysql-1 / mysql-2 (replicas) each run the mysql (:3306) and exporter (:9104) containers with their own PVCs; the primary streams binlogs to the replicas. A separate Orchestrator cluster (orchestrator-0, :3000) polls every MySQL pod on :3306 and provides topology discovery, failure detection, automatic failover, and a web UI + API.]

Resource Hierarchy

Orchestrator is a separate KubeBlocks addon — it is deployed as an independent KubeBlocks Cluster (using the orchestrator ClusterDefinition), not as a Component inside the MySQL Cluster. The MySQL orc topology provisions only the MySQL Component, which then connects to a separately deployed Orchestrator cluster via serviceRefs (backed by the serviceRefDeclarations in the ComponentDefinition).

MySQL Cluster   →  Component (mysql)  →  InstanceSet  →  Pod × N

Orchestrator Cluster  →  Component (orchestrator)  →  InstanceSet  →  Pod × N
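
The serviceRefs link between the two Clusters might be sketched as follows (a hedged sketch: the serviceRef name, the Orchestrator cluster name myorc, and the clusterServiceSelector field shape are assumptions against the KubeBlocks v1 Cluster API):

```yaml
apiVersion: apps.kubeblocks.io/v1
kind: Cluster
metadata:
  name: mycluster          # illustrative name
  namespace: demo
spec:
  clusterDef: mysql
  topology: orc            # MySQL component only; Orchestrator is external
  componentSpecs:
  - name: mysql
    replicas: 3
    serviceRefs:
    # Must match a serviceRefDeclaration in the MySQL ComponentDefinition.
    - name: orchestrator
      namespace: demo
      clusterServiceSelector:
        cluster: myorc     # the separately deployed Orchestrator Cluster
        service:
          component: orchestrator
          service: orchestrator
          port: orc-http
```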

Containers Inside Each Pod

| Container | Port | Purpose |
| --- | --- | --- |
| mysql | 3306 | MySQL database engine; does not use syncer as a process wrapper (port 3601 is absent); roleProbe runs /kubeblocks/orchestrator-client -c which-cluster-master -i ${KB_AGENT_POD_NAME} |
| mysql-exporter | 9104 | Prometheus metrics exporter (mysqld_exporter) |

Each pod also runs two init containers on startup: init-data (sets up data directories and copies MySQL plugins) and init-jq (copies jq, orchestrator-client, and curl to /kubeblocks for use by the roleProbe and lifecycle scripts).

How Orchestrator Manages Failover

  1. Orchestrator polls all MySQL pods on port 3306 at a configurable interval
  2. On primary failure, Orchestrator identifies the most eligible replica (most advanced relay log)
  3. Orchestrator promotes the chosen replica and reconnects all other replicas to it
  4. KubeBlocks detects the new primary via an exec roleProbe that runs orchestrator-client -c which-cluster-master and updates the kubeblocks.io/role label on the new primary pod
  5. The kubeblocks.io/role label update is consumed by the control plane and observability tooling (e.g. kbcli describe cluster); the mysql-server ClusterIP service has no roleSelector and continues to load-balance across all ready pods — it does not automatically become a single-primary endpoint

Orchestrator also provides a web UI for visualizing and manually managing the replication topology, which is useful for large clusters with complex topologies or when manual intervention is needed.

Traffic Routing

| Service | Type | Port | Notes |
| --- | --- | --- | --- |
| {cluster}-mysql-server | ClusterIP | 3306 | All MySQL pods — service name is {cluster}-{component}-{serviceName} = {cluster}-mysql-server; no roleSelector (Orchestrator drives failover externally); load-balances to all ready pods |
| {cluster}-mysql-mysql-{n} | ClusterIP (per-pod) | 3306 | One per MySQL pod (podService: true); created so Orchestrator can poll each instance individually by a stable address |
| {cluster}-mysql-headless | Headless | 3306, 9104 | All MySQL pods (the Orchestrator topology omits 3601 on the mysql container) |
| {orc-cluster}-orchestrator | ClusterIP | 80 | Orchestrator web UI + HTTP API (container listens on :3000, Service exposes :80 as orc-http; roleSelector: primary routes to the active Orchestrator leader) |
NOTE

Because mysql-server has no roleSelector, it load-balances across all ready pods and does not automatically route writes to the current primary. For write-only routing, use the orc-proxysql topology (ProxySQL queries Orchestrator for the current master) or have the application discover the primary via Orchestrator's HTTP API and connect directly via the headless service.

Orchestrator + ProxySQL

The orc-proxysql topology adds ProxySQL for connection pooling and read/write splitting, as in other variants — clients connect to {cluster}-proxysql-proxy-server:6033 (and optionally per-pod {cluster}-proxysql-proxy-ordinal-<n>).


System Accounts

KubeBlocks automatically creates and manages the following MySQL system accounts. Passwords are stored in Secrets named {cluster}-{component}-account-{name}.

| Account | Role | Purpose | Topologies |
| --- | --- | --- | --- |
| root | Superuser | Default administrative account | All |
| kbadmin | Superuser | KubeBlocks internal management | All |
| kbdataprotection | Admin | Backup and restore (xtrabackup, mysqldump) | semisync, semisync-proxysql, mgr, mgr-proxysql |
| kbprobe | Monitor (read-only) | Health check queries; role detection is handled by the lifecycle roleProbe (syncerctl / orchestrator-client), not by this account | semisync, semisync-proxysql, mgr, mgr-proxysql |
| kbmonitoring | Monitor | Prometheus metrics collection | semisync, semisync-proxysql, mgr, mgr-proxysql |
| kbreplicator | Replication | Binlog replication between primary and replicas | semisync, semisync-proxysql, mgr, mgr-proxysql |
| proxysql | Monitor | Used by ProxySQL for health checking and query routing | semisync-proxysql, mgr-proxysql, orc-proxysql |
NOTE

The orc and orc-proxysql topologies provision only root, kbadmin, and proxysql (present in orc-proxysql only). The accounts kbdataprotection, kbprobe, kbmonitoring, and kbreplicator are defined only in the semisync and MGR ComponentDefinitions.
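
The Secret naming convention described above can be sketched for the root account (a hedged example: the cluster name, namespace, and the username/password key names are illustrative of the usual KubeBlocks layout, and the password value shown is a placeholder, not a real generated value):

```yaml
apiVersion: v1
kind: Secret
metadata:
  # Follows {cluster}-{component}-account-{name}
  name: mycluster-mysql-account-root
  namespace: demo
type: Opaque
stringData:
  username: root
  password: "<generated-by-kubeblocks>"   # placeholder
```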

© 2026 KUBEBLOCKS INC