Semi-synchronous replication enhances data consistency between primary and replica nodes by requiring acknowledgment from at least one replica before committing transactions.
This guide demonstrates how to deploy a MySQL cluster using KubeBlocks with Orchestrator for high availability and failover management, and ProxySQL for advanced query routing and load balancing. Together, these tools create a robust and efficient MySQL cluster infrastructure.
Orchestrator is a powerful MySQL High Availability (HA) and failover management tool. It automates monitoring, fault detection, and topology management for MySQL clusters, making it ideal for managing large-scale deployments. Key features include automatic topology discovery, failure detection with automated failover, and a web UI and HTTP API for inspecting and managing replication topologies.
ProxySQL is a high-performance MySQL proxy that acts as middleware between MySQL clients and database servers. It enhances cluster performance with features such as query routing, read/write splitting, connection pooling, and query caching.
Before proceeding, ensure that KubeBlocks is installed and that kubectl can access your Kubernetes cluster. Then create a dedicated namespace for this tutorial:
kubectl create ns demo
namespace/demo created
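If the KubeBlocks Helm repository has not been added to your local Helm client yet, add it first. The repository URL below is the commonly used ApeCloud chart repository; adjust it if your environment mirrors charts elsewhere:
# Add and refresh the KubeBlocks Helm repository (URL assumed; replace with your mirror if needed)
helm repo add kubeblocks https://apecloud.github.io/helm-charts
helm repo update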
# including pre-release versions
helm search repo kubeblocks/orchestrator --devel --versions
helm install kb-addon-orc kubeblocks/orchestrator --namespace kb-system --create-namespace --version x.y.z
helm list -A
Expected Output:
NAME           NAMESPACE   REVISION   UPDATED                                 STATUS     CHART                APP VERSION
kb-addon-orc   kb-system   1          2025-02-14 11:12:32.286516 +0800 CST    deployed   orchestrator-1.0.0   3.2.6
If the STATUS shows deployed, the Addon has been installed successfully.
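Optionally, confirm that the cluster definitions used in this guide are registered before deploying; both names come from the Addons installed above:
# Verify that the MySQL and Orchestrator ClusterDefinitions are available
kubectl get clusterdefinition mysql orchestrator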
KubeBlocks provides a declarative approach to deploying MySQL clusters. Below is an example configuration for deploying a MySQL cluster with 2 nodes (1 primary and 1 replica) in semi-synchronous mode. This configuration also integrates Orchestrator (3 nodes) for failover management and ProxySQL (2 nodes) for query routing and load balancing.
Cluster Configuration
kubectl apply -f - <<EOF
apiVersion: apps.kubeblocks.io/v1
kind: Cluster
metadata:
name: example-mysql-cluster
namespace: demo
spec:
clusterDef: mysql
topology: orc-proxysql
terminationPolicy: Delete
componentSpecs:
- name: mysql
serviceVersion: 8.0.35
replicas: 2
serviceRefs:
- name: orchestrator
namespace: demo
clusterServiceSelector:
cluster: example-orc-cluster
service:
component: orchestrator
service: orchestrator
port: orc-http
credential:
component: orchestrator
name: orchestrator
resources:
limits:
cpu: '0.5'
memory: 0.5Gi
requests:
cpu: '0.5'
memory: 0.5Gi
- name: proxysql
serviceVersion: 2.4.4
replicas: 2
resources:
limits:
cpu: '0.5'
memory: 0.5Gi
requests:
cpu: '0.5'
memory: 0.5Gi
volumeClaimTemplates:
- name: data
spec:
storageClassName: ""
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 20Gi
---
apiVersion: apps.kubeblocks.io/v1
kind: Cluster
metadata:
name: example-orc-cluster
namespace: demo
spec:
clusterDef: orchestrator
topology: raft
terminationPolicy: Delete
services:
- name: orchestrator
componentSelector: orchestrator
spec:
ports:
- name: orc-http
port: 80
componentSpecs:
- name: orchestrator
disableExporter: true
replicas: 3
resources:
requests:
cpu: '0.5'
memory: 0.5Gi
limits:
cpu: '0.5'
memory: 0.5Gi
volumeClaimTemplates:
- name: data
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 20Gi
EOF
Explanation of Key Fields

MySQL Semi-Synchronous Cluster Configuration

clusterDef: Specifies the ClusterDefinition CR for the cluster. The ClusterDefinition 'mysql' contains multiple topologies, such as 'semisync', 'semisync-proxysql', 'mgr', 'mgr-proxysql', 'orc', and 'orc-proxysql'.
topology: orc-proxysql: Integrates Orchestrator for failover management and ProxySQL for query routing.
serviceRefs: Configures the MySQL cluster to connect to the Orchestrator cluster.
  name: orchestrator: The name of the service reference for Orchestrator.
  namespace: demo: Specifies that the Orchestrator cluster is in the same demo namespace.
  clusterServiceSelector:
    cluster: example-orc-cluster: Selects the Orchestrator cluster named example-orc-cluster.
    service:
      component: orchestrator: Uses the Orchestrator component from the example-orc-cluster cluster.
      service: orchestrator: Specifies the service name.
      port: orc-http: Refers to the HTTP port of the Orchestrator service.

Orchestrator Cluster Configuration

topology: raft: Runs Orchestrator in Raft high-availability mode (backed by the orchestrator-raft component definition).
replicas: 3: Configures the Orchestrator cluster to run with 3 nodes for Raft-based high availability.

The first part of the YAML file configures a MySQL cluster ('example-mysql-cluster') with 1 primary and 1 replica in semi-synchronous mode and integrates it with the Orchestrator cluster. The second part defines an Orchestrator cluster ('example-orc-cluster') using Raft high-availability mode; this cluster manages the MySQL cluster, monitoring its topology and handling failover. The serviceRefs entry in the MySQL cluster configuration establishes the connection between the two clusters.
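After applying the manifests, you can confirm that the Orchestrator service referenced by serviceRefs exists; its name follows the '<cluster name>-<service name>' pattern and matches the endpoint shown later in this guide:
# Check the Orchestrator service that the MySQL cluster references
kubectl get svc example-orc-cluster-orchestrator -n demo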
Monitor the cluster status until it transitions to the Running state:
kubectl get cluster -n demo
Example Output:
NAME CLUSTER-DEFINITION TERMINATION-POLICY STATUS AGE
example-mysql-cluster mysql Delete Running 2m32s
example-orc-cluster orchestrator Delete Running 2m31s
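If the clusters are still being created, you can watch them until the STATUS column changes to Running:
# Watch both clusters until they report Running
kubectl get cluster -n demo -w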
To get detailed information about the MySQL cluster:
kbcli cluster describe example-mysql-cluster -n demo
Example Output:
Name: example-mysql-cluster Created Time: Mar 10,2025 16:43 UTC+0800
NAMESPACE CLUSTER-DEFINITION TOPOLOGY STATUS TERMINATION-POLICY
demo        mysql                orc-proxysql   Running   Delete
Endpoints:
COMPONENT INTERNAL EXTERNAL
mysql example-mysql-cluster-mysql-0.demo.svc.cluster.local:3306 <none>
example-mysql-cluster-mysql-1.demo.svc.cluster.local:3306
example-mysql-cluster-mysql-server.demo.svc.cluster.local:3306
Topology:
COMPONENT SERVICE-VERSION INSTANCE ROLE STATUS AZ NODE CREATED-TIME
mysql 8.0.35 example-mysql-cluster-mysql-0 primary Running ap-southeast-1c ip-10-0-3-68.ap-southeast-1.compute.internal/10.0.3.68 Mar 10,2025 16:43 UTC+0800
mysql 8.0.35 example-mysql-cluster-mysql-1 secondary Running ap-southeast-1c ip-10-0-3-225.ap-southeast-1.compute.internal/10.0.3.225 Mar 10,2025 16:44 UTC+0800
Resources Allocation:
COMPONENT INSTANCE-TEMPLATE CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE-SIZE STORAGE-CLASS
mysql 500m / 500m 512Mi / 512Mi data:20Gi <none>
Images:
COMPONENT COMPONENT-DEFINITION IMAGE
mysql mysql-orc-8.0-1.0.0 docker.io/apecloud/mysql:8.0.35
docker.io/apecloud/mysqld-exporter:0.15.1
apecloud-registry.cn-zhangjiakou.cr.aliyuncs.com/apecloud/kubeblocks-tools:1.0.0-beta.30
Show cluster events: kbcli cluster list-events -n demo example-mysql-cluster
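You can also list every Pod that belongs to the MySQL cluster, including the ProxySQL instances, using the standard KubeBlocks instance label (label key assumed here):
# List all Pods of the MySQL cluster (MySQL and ProxySQL components)
kubectl get pods -n demo -l app.kubernetes.io/instance=example-mysql-cluster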
To get detailed information about the Orchestrator cluster:
kbcli cluster describe example-orc-cluster -n demo
Example Output:
Name: example-orc-cluster Created Time: Mar 10,2025 16:43 UTC+0800
NAMESPACE CLUSTER-DEFINITION TOPOLOGY STATUS TERMINATION-POLICY
demo orchestrator raft Running Delete
Endpoints:
COMPONENT INTERNAL EXTERNAL
orchestrator example-orc-cluster-orchestrator.demo.svc.cluster.local:80 <none>
Topology:
COMPONENT SERVICE-VERSION INSTANCE ROLE STATUS AZ NODE CREATED-TIME
orchestrator 3.2.6 example-orc-cluster-orchestrator-0 primary Running ap-southeast-1c ip-10-0-3-225.ap-southeast-1.compute.internal/10.0.3.225 Mar 10,2025 16:43 UTC+0800
orchestrator 3.2.6 example-orc-cluster-orchestrator-1 secondary Running ap-southeast-1c ip-10-0-3-68.ap-southeast-1.compute.internal/10.0.3.68 Mar 10,2025 16:43 UTC+0800
orchestrator 3.2.6 example-orc-cluster-orchestrator-2 secondary Running ap-southeast-1c ip-10-0-3-225.ap-southeast-1.compute.internal/10.0.3.225 Mar 10,2025 16:44 UTC+0800
Resources Allocation:
COMPONENT INSTANCE-TEMPLATE CPU(REQUEST/LIMIT) MEMORY(REQUEST/LIMIT) STORAGE-SIZE STORAGE-CLASS
orchestrator 500m / 500m 512Mi / 512Mi data:20Gi kb-default-sc
Images:
COMPONENT COMPONENT-DEFINITION IMAGE
orchestrator orchestrator-raft docker.io/apecloud/orchestrator:v3.2.6
Show cluster events: kbcli cluster list-events -n demo example-orc-cluster
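Orchestrator also serves a web UI and HTTP API on the service port defined in the cluster manifest (port 80). To inspect the topology it manages, forward the service to a local port (3000 below is arbitrary) and open http://localhost:3000 in a browser:
# Forward the Orchestrator service to a local port
kubectl port-forward svc/example-orc-cluster-orchestrator -n demo 3000:80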
KubeBlocks automatically creates a secret containing the MySQL root credentials. Retrieve the credentials with the following commands:
kubectl get secrets -n demo example-mysql-cluster-mysql-account-root -o jsonpath='{.data.username}' | base64 -d
root
kubectl get secrets -n demo example-mysql-cluster-mysql-account-root -o jsonpath='{.data.password}' | base64 -d
GX596H32Oz
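For convenience, the same values can be captured into shell variables instead of copying them by hand:
# Store the root credentials in shell variables for later use
MYSQL_USER=$(kubectl get secrets -n demo example-mysql-cluster-mysql-account-root -o jsonpath='{.data.username}' | base64 -d)
MYSQL_PASSWORD=$(kubectl get secrets -n demo example-mysql-cluster-mysql-account-root -o jsonpath='{.data.password}' | base64 -d)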
To connect to the cluster's primary node, use the MySQL client:
kubectl exec -it -n demo example-mysql-cluster-mysql-0 -c mysql -- mysql -h example-mysql-cluster-mysql-server.demo.svc.cluster.local -uroot -pGX596H32Oz
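Client traffic can also be routed through ProxySQL instead of a specific MySQL Pod. The service name below is an assumption based on the KubeBlocks naming pattern for the proxysql component, and 6033 is ProxySQL's default MySQL-protocol port; check the services in the demo namespace for the exact endpoint in your deployment:
# Find the ProxySQL service exposed for this cluster
kubectl get svc -n demo | grep proxysql
# Connect through ProxySQL (service name assumed; adjust to the output above)
kubectl exec -it -n demo example-mysql-cluster-mysql-0 -c mysql -- mysql -h example-mysql-cluster-proxysql-proxy-server.demo.svc.cluster.local -P6033 -uroot -pGX596H32Oz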
In this section, we will test the semi-synchronous replication of the MySQL cluster by verifying the roles of the pods and checking their replication statuses.
First, list all the pods in the cluster, along with their roles, to identify the primary and secondary instances:
kubectl get pods -n demo -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.metadata.labels.kubeblocks\.io/role}{"\n"}{end}'
Example Output:
example-mysql-cluster-mysql-0 primary
example-mysql-cluster-mysql-1 secondary
From the output, we can see that example-mysql-cluster-mysql-0 currently holds the primary role, while example-mysql-cluster-mysql-1 is the secondary.
Next, connect to the primary instance ('example-mysql-cluster-mysql-0') and check its semi-synchronous replication status. Use the following command to execute a query inside the MySQL pod:
kubectl exec -it -n demo example-mysql-cluster-mysql-0 -c mysql -- mysql -h example-mysql-cluster-mysql-0.demo.svc.cluster.local -uroot -pGX596H32Oz -e "show status like 'Rpl%_status';"
mysql: [Warning] Using a password on the command line interface can be insecure.
+------------------------------+-------+
| Variable_name | Value |
+------------------------------+-------+
| Rpl_semi_sync_replica_status | OFF |
| Rpl_semi_sync_source_status | ON |
+------------------------------+-------+
Explanation: On the primary instance, Rpl_semi_sync_source_status is ON, meaning it acts as the semi-synchronous source and waits for acknowledgment from a replica before completing a commit, while Rpl_semi_sync_replica_status is OFF because it does not replicate from another node. Run the same query against the secondary instance ('example-mysql-cluster-mysql-1'):
kubectl exec -it -n demo example-mysql-cluster-mysql-0 -c mysql -- mysql -h example-mysql-cluster-mysql-1.demo.svc.cluster.local -uroot -pGX596H32Oz -e "show status like 'Rpl%_status';"
mysql: [Warning] Using a password on the command line interface can be insecure.
+------------------------------+-------+
| Variable_name | Value |
+------------------------------+-------+
| Rpl_semi_sync_replica_status | ON |
| Rpl_semi_sync_source_status | OFF |
+------------------------------+-------+
Explanation: On the secondary instance, Rpl_semi_sync_replica_status is ON, confirming it acts as a semi-synchronous replica and acknowledges transactions received from the source, while Rpl_semi_sync_source_status is OFF because it is not serving as the source.
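To further confirm that changes on the primary are replicated, write a row on the primary and read it back from the secondary. The database and table names below are arbitrary examples:
# Create test data on the primary instance
kubectl exec -it -n demo example-mysql-cluster-mysql-0 -c mysql -- mysql -h example-mysql-cluster-mysql-0.demo.svc.cluster.local -uroot -pGX596H32Oz -e "CREATE DATABASE IF NOT EXISTS repl_test; CREATE TABLE IF NOT EXISTS repl_test.t1 (id INT PRIMARY KEY); INSERT INTO repl_test.t1 VALUES (1);"
# Read the same row back from the secondary instance
kubectl exec -it -n demo example-mysql-cluster-mysql-0 -c mysql -- mysql -h example-mysql-cluster-mysql-1.demo.svc.cluster.local -uroot -pGX596H32Oz -e "SELECT * FROM repl_test.t1;"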
The following steps demonstrate how to trigger a failover in a MySQL cluster and verify the role changes of the pods.
To initiate a failover, delete the Pod currently assigned the primary role:
kubectl delete pod example-mysql-cluster-mysql-0 -n demo
pod "example-mysql-cluster-mysql-0" deleted
This will trigger a failover, and the secondary instance will be promoted to the primary role. After a while, the killed pod will be recreated and will take the secondary role:
kubectl get pods -n demo -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.metadata.labels.kubeblocks\.io/role}{"\n"}{end}'
Expected Output:
example-mysql-cluster-mysql-0 secondary
example-mysql-cluster-mysql-1 primary
This process demonstrates how the failover mechanism ensures high availability by automatically promoting a secondary instance to the primary role in the event of a failure.
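You can also confirm the new topology from Orchestrator's point of view. Reusing the port-forward shown earlier (local port 3000), query Orchestrator's HTTP API; the /api/clusters route lists the replication clusters it currently manages:
# List the clusters known to Orchestrator through its HTTP API
curl -s http://localhost:3000/api/clusters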
To remove all created resources, delete the MySQL cluster along with its namespace:
kubectl delete cluster example-mysql-cluster -n demo
kubectl delete cluster example-orc-cluster -n demo
kubectl delete ns demo
This guide demonstrated how to deploy a MySQL cluster with semi-synchronous replication, Orchestrator integration for high availability, and ProxySQL for query routing and load balancing using KubeBlocks. By leveraging the declarative configuration approach, you can easily scale and manage MySQL clusters in Kubernetes environments.