Semi-synchronous replication improves data consistency between primary and replica nodes by requiring acknowledgment from at least one replica before committing transactions.
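As an illustration, semi-synchronous behavior is governed by the semisync plugin's server variables. This sketch (using MySQL 8.0's 'source'/'replica' variable naming, which this guide's clusters also use) shows how to inspect them; the host is a placeholder:

```bash
# Inspect semi-sync tuning knobs on a MySQL 8.0 server, e.g. the
# acknowledgment timeout and the number of replicas to wait for.
# <host> is a placeholder for any reachable MySQL instance.
mysql -h <host> -uroot -p -e "SHOW VARIABLES LIKE 'rpl_semi_sync_%';"
```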
Orchestrator is a robust MySQL High Availability (HA) and failover management tool. It provides automated monitoring, fault detection, and topology management for MySQL clusters, making it an essential component for managing large-scale MySQL deployments. With Orchestrator, you can discover and visualize replication topologies, refactor them, and recover automatically from primary failures.
This guide walks you through the process of setting up a MySQL semi-synchronous replication cluster using KubeBlocks, alongside Orchestrator for effective failover and recovery management.
Before proceeding, ensure that you have a running Kubernetes cluster with KubeBlocks installed, and that kubectl and Helm are configured to access it. Then create a namespace for this tutorial:

```bash
kubectl create ns demo
```

```
namespace/demo created
```
```bash
# including pre-release versions
helm search repo kubeblocks/orchestrator --devel --versions

# install the Addon, replacing x.y.z with a version from the search output
helm install kb-addon-orc kubeblocks/orchestrator --namespace kb-system --create-namespace --version x.y.z

# verify the installation
helm list -A
```
Expected Output:
```
NAME           NAMESPACE   REVISION   UPDATED                                STATUS     CHART                APP VERSION
orchestrator   kb-system   1          2025-02-14 11:12:32.286516 +0800 CST   deployed   orchestrator-1.0.0   3.2.6
```
If the STATUS is deployed, the Addon has been installed successfully.
KubeBlocks uses a declarative approach for managing MySQL clusters. Below is an example configuration for deploying a MySQL cluster with 2 nodes (1 primary, 1 replica) in semi-synchronous mode. Additionally, it creates an Orchestrator cluster using the Raft high-availability mode and configures the relationship between the MySQL semi-synchronous cluster and the Orchestrator cluster.
Cluster Configuration
```bash
kubectl apply -f - <<EOF
apiVersion: apps.kubeblocks.io/v1
kind: Cluster
metadata:
  name: example-mysql-cluster
  namespace: demo
spec:
  clusterDef: mysql
  topology: orc
  terminationPolicy: Delete
  componentSpecs:
    - name: mysql
      serviceVersion: 8.0.35
      replicas: 2
      serviceRefs:
        - name: orchestrator
          namespace: demo
          clusterServiceSelector:
            cluster: example-orc-cluster
            service:
              component: orchestrator
              service: orchestrator
              port: orc-http
            credential:
              component: orchestrator
              name: orchestrator
      resources:
        limits:
          cpu: '0.5'
          memory: 0.5Gi
        requests:
          cpu: '0.5'
          memory: 0.5Gi
      volumeClaimTemplates:
        - name: data
          spec:
            storageClassName: ""
            accessModes:
              - ReadWriteOnce
            resources:
              requests:
                storage: 20Gi
---
apiVersion: apps.kubeblocks.io/v1
kind: Cluster
metadata:
  name: example-orc-cluster
  namespace: demo
spec:
  clusterDef: orchestrator
  topology: raft
  terminationPolicy: Delete
  services:
    - name: orchestrator
      componentSelector: orchestrator
      spec:
        ports:
          - name: orc-http
            port: 80
  componentSpecs:
    - name: orchestrator
      disableExporter: true
      replicas: 3
      resources:
        requests:
          cpu: '0.5'
          memory: 0.5Gi
        limits:
          cpu: '0.5'
          memory: 0.5Gi
      volumeClaimTemplates:
        - name: data
          spec:
            accessModes:
              - ReadWriteOnce
            resources:
              requests:
                storage: 20Gi
EOF
```
Explanation of Key Fields: MySQL Semi-Synchronous Cluster Configuration
- `clusterDef`: Specifies the ClusterDefinition CR for the cluster. The 'mysql' ClusterDefinition contains multiple topologies, such as 'semisync', 'semisync-proxysql', 'mgr', 'mgr-proxysql', 'orc', and 'orc-proxysql'.
- `topology: orc`: Specifies the topology of the cluster. Use the 'orc' topology to set up MySQL with Orchestrator integration for managing failover and replication.
- `serviceRefs`: Configures the MySQL cluster to connect to the Orchestrator cluster.
  - `name: orchestrator`: The name of the service reference for Orchestrator.
  - `namespace: demo`: Specifies that the Orchestrator cluster is in the same demo namespace.
  - `clusterServiceSelector`:
    - `cluster: example-orc-cluster`: Selects the Orchestrator cluster named example-orc-cluster.
    - `service`:
      - `component: orchestrator`: Uses the orchestrator component from the example-orc-cluster cluster.
      - `service: orchestrator`: Specifies the service name.
      - `port: orc-http`: Refers to the HTTP port of the Orchestrator service.

Orchestrator Cluster Configuration

- `topology: raft`: Specifies that the Orchestrator cluster uses the Raft high-availability model (backed by the orchestrator-raft component definition).
- `replicas: 3`: Configures the Orchestrator cluster to run with 3 nodes for Raft-based high availability.

The first part of the YAML file configures a MySQL cluster ('example-mysql-cluster') with 1 primary and 1 replica in semi-synchronous mode, and integrates it with the Orchestrator cluster.
The second part of the YAML file defines an Orchestrator cluster ('example-orc-cluster') using Raft high-availability mode. This cluster manages the MySQL cluster, monitoring its topology and handling failover.
The serviceRefs section in the MySQL cluster configuration establishes the connection between the MySQL semi-synchronous cluster and the Orchestrator cluster.
Monitor the cluster status until it transitions to the Running state:
```bash
kubectl get cluster -n demo
```
Example Output:
```
NAME                    CLUSTER-DEFINITION   TERMINATION-POLICY   STATUS    AGE
example-mysql-cluster   mysql                Delete               Running   13m
example-orc-cluster     orchestrator         Delete               Running   13m
```
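Instead of polling, you can block until both clusters are ready. This sketch uses kubectl wait with a JSONPath condition; the .status.phase field path is an assumption based on the Cluster CRD's status:

```bash
# Block until both clusters report Running (adjust the timeout as needed).
kubectl wait cluster/example-mysql-cluster -n demo \
  --for=jsonpath='{.status.phase}'=Running --timeout=600s
kubectl wait cluster/example-orc-cluster -n demo \
  --for=jsonpath='{.status.phase}'=Running --timeout=600s
```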
To get detailed information about the MySQL cluster:
```bash
kbcli cluster describe example-mysql-cluster -n demo
```
Example Output:
```
Name: example-mysql-cluster	 Created Time: Mar 11,2025 10:21 UTC+0800
NAMESPACE   CLUSTER-DEFINITION   TOPOLOGY       STATUS    TERMINATION-POLICY
demo        mysql                orc-proxysql   Running   Delete

Endpoints:
COMPONENT   INTERNAL                                                                      EXTERNAL
mysql       example-mysql-cluster-mysql-0.demo.svc.cluster.local:3306                     <none>
            example-mysql-cluster-mysql-1.demo.svc.cluster.local:3306
            example-mysql-cluster-mysql-server.demo.svc.cluster.local:3306
proxysql    example-mysql-cluster-proxysql-proxy-ordinal-0.demo.svc.cluster.local:6032    <none>
            example-mysql-cluster-proxysql-proxy-ordinal-0.demo.svc.cluster.local:6033
            example-mysql-cluster-proxysql-proxy-ordinal-1.demo.svc.cluster.local:6032
            example-mysql-cluster-proxysql-proxy-ordinal-1.demo.svc.cluster.local:6033
            example-mysql-cluster-proxysql-proxy-server.demo.svc.cluster.local:6033

Topology:
COMPONENT   SERVICE-VERSION   INSTANCE                           ROLE        STATUS    AZ                NODE                                                       CREATED-TIME
mysql       8.0.35            example-mysql-cluster-mysql-0      primary     Running   ap-southeast-1c   ip-10-0-3-233.ap-southeast-1.compute.internal/10.0.3.233   Mar 11,2025 10:21 UTC+0800
mysql       8.0.35            example-mysql-cluster-mysql-1      secondary   Running   ap-southeast-1c   ip-10-0-3-233.ap-southeast-1.compute.internal/10.0.3.233   Mar 11,2025 10:22 UTC+0800
proxysql    2.4.4             example-mysql-cluster-proxysql-0   <none>      Running   ap-southeast-1c   ip-10-0-3-55.ap-southeast-1.compute.internal/10.0.3.55     Mar 11,2025 10:23 UTC+0800
proxysql    2.4.4             example-mysql-cluster-proxysql-1   <none>      Running   ap-southeast-1c   ip-10-0-3-40.ap-southeast-1.compute.internal/10.0.3.40     Mar 11,2025 10:23 UTC+0800

Resources Allocation:
COMPONENT   INSTANCE-TEMPLATE   CPU(REQUEST/LIMIT)   MEMORY(REQUEST/LIMIT)   STORAGE-SIZE   STORAGE-CLASS
mysql                           500m / 500m          512Mi / 512Mi           <none>         <none>
proxysql                        500m / 500m          512Mi / 512Mi           data:20Gi      <none>

Images:
COMPONENT   COMPONENT-DEFINITION   IMAGE
mysql       mysql-orc-8.0-1.0.0    docker.io/apecloud/mysql:8.0.35
                                   docker.io/apecloud/mysqld-exporter:0.15.1
                                   apecloud-registry.cn-zhangjiakou.cr.aliyuncs.com/apecloud/kubeblocks-tools:1.0.0-beta.30
proxysql    proxysql-mysql-1.0.0   docker.io/apecloud/proxysql:2.4.4

Show cluster events: kbcli cluster list-events -n demo example-mysql-cluster
```
To get detailed information about the Orchestrator cluster:
```bash
kbcli cluster describe example-orc-cluster -n demo
```
Example Output:
```
Name: example-orc-cluster	 Created Time: Mar 11,2025 10:21 UTC+0800
NAMESPACE   CLUSTER-DEFINITION   TOPOLOGY   STATUS    TERMINATION-POLICY
demo        orchestrator         raft       Running   Delete

Endpoints:
COMPONENT      INTERNAL                                                     EXTERNAL
orchestrator   example-orc-cluster-orchestrator.demo.svc.cluster.local:80   <none>

Topology:
COMPONENT      SERVICE-VERSION   INSTANCE                             ROLE        STATUS    AZ                NODE                                                       CREATED-TIME
orchestrator   3.2.6             example-orc-cluster-orchestrator-0   primary     Running   ap-southeast-1c   ip-10-0-3-55.ap-southeast-1.compute.internal/10.0.3.55     Mar 11,2025 10:21 UTC+0800
orchestrator   3.2.6             example-orc-cluster-orchestrator-1   secondary   Running   ap-southeast-1c   ip-10-0-3-233.ap-southeast-1.compute.internal/10.0.3.233   Mar 11,2025 10:21 UTC+0800
orchestrator   3.2.6             example-orc-cluster-orchestrator-2   secondary   Running   ap-southeast-1c   ip-10-0-3-55.ap-southeast-1.compute.internal/10.0.3.55     Mar 11,2025 10:22 UTC+0800

Resources Allocation:
COMPONENT      INSTANCE-TEMPLATE   CPU(REQUEST/LIMIT)   MEMORY(REQUEST/LIMIT)   STORAGE-SIZE   STORAGE-CLASS
orchestrator                       500m / 500m          512Mi / 512Mi           data:20Gi      kb-default-sc

Images:
COMPONENT      COMPONENT-DEFINITION   IMAGE
orchestrator   orchestrator-raft      docker.io/apecloud/orchestrator:v3.2.6

Show cluster events: kbcli cluster list-events -n demo example-orc-cluster
```
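Once both clusters are running, you can ask Orchestrator which MySQL clusters it has discovered through the service created above. The /api/clusters endpoint is part of Orchestrator's standard HTTP API; the availability of curl inside the orchestrator container is an assumption of this sketch:

```bash
# Query Orchestrator's HTTP API for the clusters it has discovered.
# If curl is not present in the container, run the same request from
# any other pod that can reach the service.
kubectl exec -n demo example-orc-cluster-orchestrator-0 -- \
  curl -s http://example-orc-cluster-orchestrator.demo.svc.cluster.local:80/api/clusters
```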
KubeBlocks automatically creates a secret containing the MySQL root credentials. Retrieve the credentials with the following commands:
```bash
kubectl get secrets -n demo example-mysql-cluster-mysql-account-root -o jsonpath='{.data.username}' | base64 -d
```

```
root
```

```bash
kubectl get secrets -n demo example-mysql-cluster-mysql-account-root -o jsonpath='{.data.password}' | base64 -d
```

```
d3a5iS499Z
```
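If you script against these secrets, a small helper keeps the decoding in one place. The decode function below is a hypothetical convenience, not part of KubeBlocks:

```bash
# decode: turn a base64-encoded secret field back into plain text.
decode() { printf '%s' "$1" | base64 -d; }

# Against the live cluster (same secret as above):
#   decode "$(kubectl get secrets -n demo example-mysql-cluster-mysql-account-root -o jsonpath='{.data.username}')"
# Local demonstration with the base64 encoding of "root":
decode cm9vdA==
```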
Use ProxySQL to connect to the MySQL cluster:
```bash
kubectl exec -it -n demo example-mysql-cluster-mysql-0 -c mysql -- mysql -h example-mysql-cluster-proxysql-proxy-server.demo.svc.cluster.local -P6033 -uroot -pd3a5iS499Z
```
Alternatively, connect directly to the MySQL instance:
```bash
kubectl exec -it -n demo example-mysql-cluster-mysql-0 -c mysql -- mysql -h example-mysql-cluster-mysql-server.demo.svc.cluster.local -uroot -pd3a5iS499Z
```
In this section, we will test the semi-synchronous replication of the MySQL cluster by verifying the roles of the pods and checking their replication statuses.
First, list all the pods in the cluster, along with their roles, to identify the primary and secondary instances:
```bash
kubectl get pods -n demo -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.metadata.labels.kubeblocks\.io/role}{"\n"}{end}'
```
Example Output:
```
example-mysql-cluster-mysql-0	primary
example-mysql-cluster-mysql-1	secondary
```
From the output, we can see that example-mysql-cluster-mysql-0 holds the primary role and example-mysql-cluster-mysql-1 holds the secondary role.
Next, connect to the primary instance ('example-mysql-cluster-mysql-0') and check its semi-synchronous replication status. Use the following command to execute a query inside the MySQL pod:
```bash
kubectl exec -it -n demo example-mysql-cluster-mysql-0 -c mysql -- mysql -h example-mysql-cluster-mysql-0.demo.svc.cluster.local -uroot -pd3a5iS499Z -e "show status like 'Rpl%_status';"
```

```
mysql: [Warning] Using a password on the command line interface can be insecure.
+------------------------------+-------+
| Variable_name                | Value |
+------------------------------+-------+
| Rpl_semi_sync_replica_status | OFF   |
| Rpl_semi_sync_source_status  | ON    |
+------------------------------+-------+
```
Explanation: Rpl_semi_sync_source_status is ON, confirming that this instance is acting as the semi-synchronous source (primary). Next, run the same query against the secondary instance ('example-mysql-cluster-mysql-1'):
```bash
kubectl exec -it -n demo example-mysql-cluster-mysql-0 -c mysql -- mysql -h example-mysql-cluster-mysql-1.demo.svc.cluster.local -uroot -pd3a5iS499Z -e "show status like 'Rpl%_status';"
```

```
mysql: [Warning] Using a password on the command line interface can be insecure.
+------------------------------+-------+
| Variable_name                | Value |
+------------------------------+-------+
| Rpl_semi_sync_replica_status | ON    |
| Rpl_semi_sync_source_status  | OFF   |
+------------------------------+-------+
```
Explanation: Rpl_semi_sync_replica_status is ON, confirming that this instance is acting as a semi-synchronous replica (secondary).
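To see replication in action end to end, you can write a row on the primary and read it back on the secondary. This is a sketch: it reuses the pod names from this guide, the password must be replaced with the one from your cluster's secret, and the smoke database name is made up for the test:

```bash
PASS='d3a5iS499Z'   # substitute the root password retrieved from the secret

# Write a row on the primary...
kubectl exec -n demo example-mysql-cluster-mysql-0 -c mysql -- \
  mysql -uroot -p"$PASS" -e "CREATE DATABASE IF NOT EXISTS smoke; \
    CREATE TABLE IF NOT EXISTS smoke.t (id INT PRIMARY KEY); \
    REPLACE INTO smoke.t VALUES (1);"

# ...and read it back on the secondary.
kubectl exec -n demo example-mysql-cluster-mysql-1 -c mysql -- \
  mysql -uroot -p"$PASS" -e "SELECT id FROM smoke.t;"
```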
The following steps demonstrate how to trigger a failover in a MySQL cluster and verify the role changes of the pods.
To initiate a failover, delete the Pod currently assigned the primary role:
```bash
kubectl delete pod example-mysql-cluster-mysql-0 -n demo
```

```
pod "example-mysql-cluster-mysql-0" deleted
```
This will trigger a failover, and the secondary instance will be promoted to the primary role. After a while, the killed pod will be recreated and will take the secondary role:
```bash
kubectl get pods -n demo -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.metadata.labels.kubeblocks\.io/role}{"\n"}{end}'
```
Expected Output:
```
example-mysql-cluster-mysql-0	secondary
example-mysql-cluster-mysql-1	primary
```
This process demonstrates how the failover mechanism ensures high availability by automatically promoting a secondary instance to the primary role in the event of a failure.
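After the failover, you can re-run the semi-sync status check against the new primary to confirm it has taken over the source role (pod names follow the role output above; substitute your own password):

```bash
# The former secondary should now report Rpl_semi_sync_source_status = ON.
kubectl exec -n demo example-mysql-cluster-mysql-1 -c mysql -- \
  mysql -uroot -pd3a5iS499Z -e "show status like 'Rpl%_status';"
```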
To remove all created resources, delete the MySQL and Orchestrator clusters along with their namespace:
```bash
kubectl delete cluster example-mysql-cluster -n demo
kubectl delete cluster example-orc-cluster -n demo
kubectl delete ns demo
```
This guide demonstrated how to deploy a MySQL cluster with semi-synchronous replication and integrate it with Orchestrator for high availability and failover management using KubeBlocks. With the declarative configuration approach, you can easily scale and manage MySQL clusters in Kubernetes environments.