Topologies
KubeBlocks supports three distinct MySQL HA architectures:
| Architecture | HA Mechanism | Use Case |
|---|---|---|
| Semisync | Async/semi-sync binlog replication; syncer roleProbe + KubeBlocks drives failover | General HA; single primary + replicas |
| MGR (Group Replication) | Paxos-based group consensus on :33061; group elects new primary automatically | Stronger consistency guarantees; multi-region deployments |
| Orchestrator | Semisync replication + external Orchestrator component manages failover | Large-scale clusters; topology visualization and manual control |
Each architecture can optionally be combined with a ProxySQL frontend component, which provides connection pooling, query routing, and read/write splitting.
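As a sketch, selecting one of these architectures comes down to a `topology` field on the Cluster object. The manifest below is illustrative: the field layout follows the KubeBlocks Cluster API, but the exact `clusterDef` and topology identifiers depend on the installed mysql addon version.

```yaml
# Hypothetical example: a two-replica semisync MySQL cluster.
apiVersion: apps.kubeblocks.io/v1
kind: Cluster
metadata:
  name: mysql-demo
  namespace: default
spec:
  clusterDef: mysql
  topology: semisync          # or mgr, orc, semisync-proxysql, ...
  terminationPolicy: Delete
  componentSpecs:
    - name: mysql
      replicas: 2             # one primary + one semi-sync replica
      resources:
        requests: { cpu: "1", memory: 1Gi }
        limits: { cpu: "1", memory: 1Gi }
      volumeClaimTemplates:
        - name: data
          spec:
            accessModes: [ReadWriteOnce]
            resources:
              requests:
                storage: 20Gi
```

Applying this manifest causes KubeBlocks to generate the Component, InstanceSet, pods, and Services described in the sections below.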
MySQL Semisync uses native binlog replication between a primary and one or more replicas. Semi-synchronous mode requires at least one replica to acknowledge each transaction before the primary commits, reducing the risk of data loss on failover. KubeBlocks drives failover via a syncer-based roleProbe that runs /tools/syncerctl getrole inside the mysql container.
Clients connect on :6033 (ProxySQL) or directly on :3306; the primary-only Service selects the pod labeled role=primary.

Cluster → Component (mysql) → InstanceSet → Pod × N
| Resource | Role |
|---|---|
| Cluster | User-facing declaration — specifies topology, replica count, storage size, and resources |
| Component | Generated automatically; references a ComponentDefinition describing container specs, lifecycle actions, and services |
| InstanceSet | KubeBlocks custom workload (replaces StatefulSet); manages pods with stable identities and role awareness |
| Pod | Actual running MySQL instance; each pod gets a unique ordinal and its own PVC |
| Container | Port | Purpose |
|---|---|---|
| mysql | 3306, 3601 (ha replication) | MySQL database engine; roleProbe runs /tools/syncerctl getrole inside this container |
| mysql-exporter | 9104 | Prometheus metrics exporter (mysqld_exporter) |
Each pod also runs multiple init containers on startup: init-syncer (copies syncer and syncerctl to /tools), init-data (sets up data directories and copies MySQL plugins), init-xtrabackup (copies Percona XtraBackup to /tools), and init-jemalloc (copies the jemalloc library to /tools).
| Concept | Description |
|---|---|
| Primary | Receives all writes; streams binlog events to replicas |
| Semi-sync replica | Must acknowledge each transaction before the primary commits (when semi-sync is enabled) |
| Failover trigger | syncer roleProbe fails repeatedly → KubeBlocks selects replica with most advanced binlog position |
| Promotion | KubeBlocks calls the switchover API to promote the chosen replica; remaining replicas repoint to new primary |
Failover typically completes within 30–60 seconds.
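Besides automatic failover, a planned switchover can be requested explicitly. The sketch below uses the KubeBlocks OpsRequest API; the API group/version and the instance name are assumptions that vary by KubeBlocks release, so treat this as a shape rather than a copy-paste recipe.

```yaml
# Hypothetical example: promote the replica pod mysql-demo-mysql-1
# to primary via a Switchover OpsRequest.
apiVersion: operations.kubeblocks.io/v1alpha1
kind: OpsRequest
metadata:
  name: mysql-demo-switchover
  namespace: default
spec:
  clusterName: mysql-demo
  type: Switchover
  switchover:
    - componentName: mysql
      instanceName: mysql-demo-mysql-1   # desired new primary
```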
| Service | Type | Port | Selector |
|---|---|---|---|
| {cluster}-mysql | ClusterIP | 3306 | kubeblocks.io/role=primary |
| {cluster}-mysql-headless | Headless | 3306, 3601 (ha), chart metrics port (default 9104) | all pods |
KubeBlocks headless Services follow the InstanceSet pattern: every declared container port appears as a ServicePort (semisync / non‑Orchestrator builds: 3306, 3601, exporter).
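The primary-only ClusterIP Service that KubeBlocks generates is conceptually equivalent to the sketch below. The real object carries additional KubeBlocks labels and owner references; the cluster name and exact selector labels here are illustrative.

```yaml
# Illustrative shape of the generated primary-only Service.
apiVersion: v1
kind: Service
metadata:
  name: mysql-demo-mysql        # {cluster}-mysql
spec:
  type: ClusterIP
  ports:
    - name: mysql
      port: 3306
      targetPort: 3306
  selector:
    app.kubernetes.io/instance: mysql-demo
    apps.kubeblocks.io/component-name: mysql
    kubeblocks.io/role: primary   # only the current primary matches
```

When the syncer roleProbe relabels a different pod as primary, the Service endpoints switch without any client-side change.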
Connection endpoints:

- {cluster}-mysql:3306 — always routes to the current primary
- {pod-name}.{cluster}-mysql-headless.{namespace}.svc.cluster.local:3306 — direct per-pod access

Adding proxysql to the topology inserts a ProxySQL Component in front of MySQL. ProxySQL provides connection pooling, automatic read/write splitting, and query caching. Client traffic goes to {cluster}-proxysql-proxy-server:6033 (the ClusterIP Service from the ProxySQL ComponentDefinition, where serviceName is proxy-server) instead of directly to MySQL pods. Per-pod Services {cluster}-proxysql-proxy-ordinal-<n> expose the same DB port on each ProxySQL pod when you need ordinal-specific endpoints.
MySQL Group Replication (MGR) runs a Paxos-based group consensus protocol on port :33061 across all member pods. All transaction commits require certification by the group. In single-primary mode, one pod is elected PRIMARY by the group; the others are SECONDARY. On primary failure, the group autonomously elects a new primary without external coordination.
Clients connect on :6033 (ProxySQL) or directly on :3306; the primary-only Service uses roleSelector: primary.

Cluster → Component (mysql) → InstanceSet → Pod × N
The resource hierarchy is identical to Semisync. The difference is the replication protocol running inside each pod (group_replication_start_on_boot=off, group_replication_single_primary_mode=ON). KubeBlocks starts group replication manually via the lifecycle action rather than relying on start_on_boot.
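The MGR-related server settings can be sketched as a configuration fragment. The ConfigMap name, the group UUID placeholder, and the local-address value are illustrative; only the two parameters named above are taken from this document.

```yaml
# Illustrative ConfigMap wrapping MGR-related my.cnf settings.
apiVersion: v1
kind: ConfigMap
metadata:
  name: mysql-demo-mgr-config         # hypothetical name
data:
  my.cnf: |
    [mysqld]
    # KubeBlocks starts GR via a lifecycle action, not on boot
    group_replication_start_on_boot = OFF
    group_replication_single_primary_mode = ON
    # Assumed additional settings a working MGR member needs:
    group_replication_group_name = "<group-uuid>"
    group_replication_local_address = "<pod-fqdn>:33061"  # GCS port
    plugin_load_add = "group_replication.so"
```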
| Container | Port | Purpose |
|---|---|---|
| mysql | 3306, 3601 (ha replication), 33061 (GCS) | MySQL database engine; port 33061 is used for Group Replication communication (GCS — Group Communication System); port 3601 is the ha replication port |
| mysql-exporter | 9104 | Prometheus metrics exporter |
The roleProbe runs /tools/syncerctl getrole inside the mysql container. Each pod also runs multiple init containers on startup: init-syncer (copies syncer and syncerctl to /tools), init-data (sets up data directories and copies plugins), and init-jemalloc (copies the jemalloc library to /tools).
| HA Mechanism | Description |
|---|---|
| Paxos group consensus | All members exchange heartbeats and transaction certification messages on :33061 |
| Conflict detection | Each transaction is certified by the group before commit — conflicts are detected and rolled back |
| Group-driven election | When the primary fails, the GCS expels it and the remaining members elect a new primary |
| syncer role update | syncer roleProbe detects the new primary role → updates kubeblocks.io/role label → ClusterIP service endpoints switch |
| Quorum tolerance | 3-member group tolerates 1 failure; 5-member tolerates 2 |
Failover typically completes within 5–15 seconds. Group-internal primary election is near-instant after expulsion; the subsequent kubeblocks.io/role label update and Service endpoint switch still depend on the syncer roleProbe cycle.
| Service | Type | Port | Selector |
|---|---|---|---|
| {cluster}-mysql | ClusterIP | 3306 | kubeblocks.io/role=primary |
| {cluster}-mysql-headless | Headless | 3306, 3601, 33061 (Group Replication), exporter (default 9104) | all pods |
Traffic routing matches Semisync except the headless Service also publishes 33061 from the mysql container.
The roleSelector: primary mechanism on the ClusterIP service works the same way — the syncer roleProbe detects the current GR role and updates the pod label accordingly.
The mgr-proxysql topology adds ProxySQL for connection pooling and read/write splitting, identical to the Semisync+ProxySQL variant — clients connect to {cluster}-proxysql-proxy-server:6033 (and optionally per-pod {cluster}-proxysql-proxy-ordinal-<n>).
The Orchestrator topology pairs semi-sync MySQL replication with an external Orchestrator component. Orchestrator continuously monitors the replication topology via MySQL's SHOW SLAVE STATUS and performance_schema, detects failures, and drives automated failover and topology recovery.
Clients connect on :6033 (ProxySQL) or directly on :3306; the MySQL Service has no roleSelector.

Orchestrator is a separate KubeBlocks addon — it is deployed as an independent KubeBlocks Cluster (using the orchestrator ClusterDefinition), not as a Component inside the MySQL Cluster. The MySQL orc topology provisions only the MySQL Component, which then connects to a separately deployed Orchestrator cluster via serviceRefs (backed by the serviceRefDeclarations in the ComponentDefinition).
MySQL Cluster → Component (mysql) → InstanceSet → Pod × N
Orchestrator Cluster → Component (orchestrator) → InstanceSet → Pod × N
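Wiring the MySQL Cluster to the separately deployed Orchestrator cluster can be sketched with serviceRefs. The cluster and service names below are assumptions; the serviceRef name must match the serviceRefDeclaration in the MySQL ComponentDefinition.

```yaml
# Hypothetical excerpt of the MySQL Cluster spec for the orc topology.
spec:
  clusterDef: mysql
  topology: orc
  componentSpecs:
    - name: mysql
      replicas: 2
      serviceRefs:
        - name: orchestrator        # must match the serviceRefDeclaration
          clusterServiceSelector:
            cluster: myorc          # the Orchestrator Cluster's name
            service: orchestrator   # its ClusterIP Service (port 80)
```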
| Container | Port | Purpose |
|---|---|---|
| mysql | 3306 | MySQL database engine; does not use syncer as a process wrapper — port 3601 is absent; roleProbe runs /kubeblocks/orchestrator-client -c which-cluster-master -i ${KB_AGENT_POD_NAME} |
| mysql-exporter | 9104 | Prometheus metrics exporter (mysqld_exporter) |
Each pod also runs two init containers on startup: init-data (sets up data directories and copies MySQL plugins) and init-jq (copies jq, orchestrator-client, and curl to /kubeblocks for use by the roleProbe and lifecycle scripts).
- On failure, Orchestrator promotes a replica; the roleProbe then runs orchestrator-client -c which-cluster-master and updates the kubeblocks.io/role label on the new primary pod.
- The kubeblocks.io/role label update is consumed by the control plane and observability tooling (e.g. kbcli describe cluster); the mysql-server ClusterIP service has no roleSelector and continues to load-balance across all ready pods — it does not automatically become a single-primary endpoint.
- Orchestrator also provides a web UI for visualizing and manually managing the replication topology, which is useful for large clusters with complex topologies or when manual intervention is needed.
| Service | Type | Port | Notes |
|---|---|---|---|
| {cluster}-mysql-server | ClusterIP | 3306 | All MySQL pods — service name is {cluster}-{component}-{serviceName} = {cluster}-mysql-server; no roleSelector (Orchestrator drives failover externally); load-balances to all ready pods |
| {cluster}-mysql-mysql-{n} | ClusterIP (per-pod) | 3306 | One per MySQL pod (podService: true); created so Orchestrator can poll each instance individually by stable address |
| {cluster}-mysql-headless | Headless | 3306, 9104 (Orchestrator topology omits 3601 on the mysql container) | all MySQL pods |
| {orc-cluster}-orchestrator | ClusterIP | 80 | Orchestrator web UI + HTTP API (container listens on :3000, Service exposes :80 as orc-http; roleSelector: primary routes to the active Orchestrator leader) |
Because mysql-server has no roleSelector, it load-balances across all ready pods and does not automatically route writes to the current primary. For write-only routing, use the orc-proxysql topology (ProxySQL queries Orchestrator for the current master) or have the application discover the primary via Orchestrator's HTTP API and connect directly via the headless service.
The orc-proxysql topology adds ProxySQL for connection pooling and read/write splitting, as in other variants — clients connect to {cluster}-proxysql-proxy-server:6033 (and optionally per-pod {cluster}-proxysql-proxy-ordinal-<n>).
KubeBlocks automatically creates and manages the following MySQL system accounts. Passwords are stored in Secrets named {cluster}-{component}-account-{name}.
| Account | Role | Purpose | Topologies |
|---|---|---|---|
| root | Superuser | Default administrative account | All |
| kbadmin | Superuser | KubeBlocks internal management | All |
| kbdataprotection | Admin | Backup and restore (xtrabackup, mysqldump) | semisync, semisync-proxysql, mgr, mgr-proxysql |
| kbprobe | Monitor (read-only) | Health check queries; role detection is handled by the lifecycle roleProbe (syncerctl / orchestrator-client), not by this account | semisync, semisync-proxysql, mgr, mgr-proxysql |
| kbmonitoring | Monitor | Prometheus metrics collection | semisync, semisync-proxysql, mgr, mgr-proxysql |
| kbreplicator | Replication | Binlog replication between primary and replicas | semisync, semisync-proxysql, mgr, mgr-proxysql |
| proxysql | Monitor | Used by ProxySQL for health checking and query routing | semisync-proxysql, mgr-proxysql, orc-proxysql |
The orc and orc-proxysql topologies provision only root, kbadmin, and proxysql (present in orc-proxysql only). The accounts kbdataprotection, kbprobe, kbmonitoring, and kbreplicator are defined only in the semisync and MGR ComponentDefinitions.
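Following the naming rule above, the root account of a cluster named mysql-demo would land in a Secret shaped roughly like this. The key names and the use of stringData are illustrative; real Secrets store base64-encoded data.

```yaml
# Illustrative shape of a generated system-account Secret.
# Name follows {cluster}-{component}-account-{name}.
apiVersion: v1
kind: Secret
metadata:
  name: mysql-demo-mysql-account-root
  namespace: default
type: Opaque
stringData:
  username: root
  password: "<generated>"   # randomly generated by KubeBlocks
```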