MySQL High Availability Architecture in KubeBlocks

This page describes how KubeBlocks deploys an ApeCloud MySQL (wesql-server) cluster on Kubernetes — covering the resource hierarchy, pod internals, built-in Raft-based HA, traffic routing, and automatic failover.

Resource Hierarchy

KubeBlocks models a MySQL cluster as a hierarchy of Kubernetes custom resources:

Cluster  →  Component  →  InstanceSet  →  Pod × N
| Resource | Role |
|---|---|
| Cluster | User-facing declaration; specifies topology, replica count, storage size, and resources |
| Component | Generated automatically; references a ComponentDefinition that describes container specs, lifecycle actions, and services |
| InstanceSet | KubeBlocks custom workload (replaces StatefulSet); manages pods with stable identities and role awareness |
| Pod | Actual running MySQL instance; each pod gets a unique ordinal and its own PVC |
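The hierarchy above is driven entirely by the top-level Cluster object. The manifest below is a minimal sketch rather than a verbatim API reference; the addon name, API version, and exact field layout are assumptions based on common KubeBlocks conventions, so verify them against the CRDs installed in your cluster.

```yaml
# Hypothetical minimal Cluster manifest. The user declares only this object;
# KubeBlocks derives the Component, InstanceSet, and Pods from it.
apiVersion: apps.kubeblocks.io/v1
kind: Cluster
metadata:
  name: mycluster
spec:
  clusterDef: apecloud-mysql   # assumed addon/ClusterDefinition name
  topology: raftGroup
  componentSpecs:
    - name: mysql
      replicas: 3              # 1 primary + 2 followers
      resources:
        requests:
          cpu: "1"
          memory: 1Gi
      volumeClaimTemplates:
        - name: data           # becomes one PVC per pod
          spec:
            accessModes: [ReadWriteOnce]
            resources:
              requests:
                storage: 20Gi
```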

Topologies

KubeBlocks supports three MySQL topologies:

| Topology | Description | Use Case |
|---|---|---|
| standalone | Single MySQL pod; no replication or HA | Development, testing, non-critical workloads |
| replication | Traditional async/semi-sync replication (primary + replicas); an external coordinator manages failover | General HA workloads |
| raftGroup | Raft consensus built into the wesql-server engine (primary + N followers); no external coordinator required | Production workloads requiring strong consistency |

Containers Inside Each Pod

Every MySQL pod runs three containers:

| Container | Port | Purpose |
|---|---|---|
| mysql | 3306 (MySQL protocol) | wesql-server (ApeCloud MySQL) database engine with built-in Raft consensus |
| kbagent | 5001 | Role probe endpoint; KubeBlocks queries GET /v1.0/getrole every second to determine primary vs. follower |
| metrics-exporter | 9187 | Prometheus metrics exporter |

Each pod mounts its own PVC for the MySQL data directory (/data/mysql), providing independent persistent storage per replica.
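The role probe can also be exercised by hand. The sketch below uses example pod and namespace names (the port and path come from the table above); it requires a running cluster, so treat it as an illustration rather than a copy-paste recipe.

```shell
# Query kbagent's role endpoint the same way KubeBlocks does.
kubectl -n demo exec mycluster-mysql-0 -c kbagent -- \
  curl -s http://127.0.0.1:5001/v1.0/getrole
# The response reports the pod's current role (primary or follower).
```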

High Availability via Built-in Raft (raftGroup Topology)

The wesql-server engine embeds a Raft consensus module directly into the MySQL storage layer. Unlike Patroni (PostgreSQL) or MySQL Group Replication with an external coordinator, wesql-server handles leader election and log replication natively:

| Raft Concept | Description |
|---|---|
| Primary (leader) | Receives all write transactions; replicates log entries to followers before committing |
| Follower | Replicates from the primary via the Raft log; can serve read queries when readPreference allows |
| Write quorum | A majority (N/2 + 1) of Raft members must acknowledge a transaction before it commits |
| Election timeout | When the primary is unreachable, followers trigger a new election after a configurable timeout |
| No external coordinator | All leader election and log replication is handled inside the wesql-server process; no ZooKeeper, etcd, or Patroni required |

A raftGroup of 3 members (1 primary + 2 followers) tolerates 1 failure while maintaining a voting majority and write quorum.
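The quorum arithmetic is plain integer math and can be sanity-checked in the shell:

```shell
# Majority (write quorum) for an N-member Raft group, using integer division:
# quorum = N/2 + 1; the group tolerates N - quorum failed members.
n=3
quorum=$(( n / 2 + 1 ))       # 2 for a 3-member group
tolerated=$(( n - quorum ))   # 1 failure tolerated
echo "members=$n quorum=$quorum tolerates=$tolerated"
# A 5-member group would give quorum=3 and tolerate 2 failures.
```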

Traffic Routing

KubeBlocks creates two services for each MySQL cluster:

| Service | Type | Port | Selector |
|---|---|---|---|
| {cluster}-mysql | ClusterIP | 3306 | kubeblocks.io/role=primary |
| {cluster}-mysql-headless | Headless | 3306 | all pods |

The key mechanism is roleSelector: primary on the ClusterIP service. KubeBlocks probes each pod via kbagent every second and updates the pod label kubeblocks.io/role. The service Endpoints always point at the current primary — no VIP or external load balancer required.

  • Write traffic: connect to {cluster}-mysql:3306
  • Direct replica access (read replicas, replication monitoring): use the headless service DNS {pod-name}.{cluster}-mysql-headless.{namespace}.svc.cluster.local:3306
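For example, with a cluster named mycluster in namespace demo (both names are illustrative), the two routing paths look like this:

```shell
# Writes: the role-selecting ClusterIP service resolves to the current primary.
mysql -h mycluster-mysql.demo.svc.cluster.local -P 3306 -u root -p

# Reads from one specific replica via its stable headless-service DNS name:
mysql -h mycluster-mysql-1.mycluster-mysql-headless.demo.svc.cluster.local \
  -P 3306 -u root -p

# Check which pod currently carries the primary role label:
kubectl -n demo get pods -L kubeblocks.io/role
```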

Automatic Failover

When the primary pod fails in a raftGroup topology, the following sequence restores service automatically:

  1. Primary becomes unreachable — followers stop receiving Raft heartbeats
  2. Election triggered — after the election timeout, an eligible follower increments its term and requests votes
  3. New primary elected — the follower that collects a majority of votes wins, promoting itself to primary and resuming write operations
  4. KubeBlocks detects role change — kbagent returns primary for the new winner; pod labels are updated
  5. Service Endpoints switch — the ClusterIP service automatically routes traffic to the new primary

Total failover time is typically within 5–15 seconds, bounded by the Raft election timeout.
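The sequence above can be observed from the API side by following the role labels. This sketch assumes the standard app.kubernetes.io/instance label and an example cluster name:

```shell
# Role labels flip to the newly elected primary within the election window.
kubectl get pods -l app.kubernetes.io/instance=mycluster \
  -L kubeblocks.io/role --watch
```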

For a planned switchover (e.g., maintenance), KubeBlocks invokes a graceful switchover operation that demotes the current primary and promotes a chosen follower with zero data loss.
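Planned switchover is typically requested through an OpsRequest object. The sketch below is illustrative only; the API group/version and field names differ across KubeBlocks releases, so treat them as assumptions to be checked against your installed CRDs.

```yaml
# Hypothetical switchover request: demote the current primary and
# promote the named follower.
apiVersion: operations.kubeblocks.io/v1alpha1  # assumed; may differ by release
kind: OpsRequest
metadata:
  name: mycluster-switchover
spec:
  clusterName: mycluster
  type: Switchover
  switchover:
    - componentName: mysql
      instanceName: mycluster-mysql-1   # the follower to promote
```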

System Accounts

KubeBlocks automatically creates and manages the following MySQL system accounts. Passwords are auto-generated and stored in Secrets named {cluster}-{component}-account-{name}.

| Account | Role | Purpose |
|---|---|---|
| root | Superuser | Default administrative account; full privileges on all databases |
| kbadmin | Superuser | KubeBlocks internal management operations |
| kbdataprotection | Admin | Backup and restore operations (xtrabackup, mysqldump) |
| kbprobe | Monitor (read-only) | Health check queries; used by kbagent for role detection |
| kbmonitoring | Monitor | Prometheus metrics collection via mysqld_exporter |
| kbreplicator | Replication | Binary log replication between primary and followers |
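Given the Secret naming pattern above, a generated password can be read back with kubectl; the cluster and component names here are examples:

```shell
# Decode the auto-generated root password for cluster "mycluster",
# component "mysql" (Secret name follows {cluster}-{component}-account-{name}).
kubectl get secret mycluster-mysql-account-root \
  -o jsonpath='{.data.password}' | base64 -d && echo
```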
