Failure simulation and automatic recovery
As an open-source data management platform, KubeBlocks currently supports over thirty database engines and is continuously expanding. Because the built-in high availability capabilities of these databases vary, KubeBlocks has designed and implemented a high availability (HA) system for database instances. The KubeBlocks HA system uses a unified HA framework to provide high availability for databases, so that different databases on KubeBlocks can achieve similar high availability capabilities and experiences.
This tutorial uses ApeCloud MySQL RaftGroup Cluster as an example to demonstrate its fault simulation and recovery capabilities.
Recovery simulation
The faults here are all simulated by deleting a pod. When there are sufficient resources, the fault can also be simulated by machine downtime or container deletion, and the automatic recovery behaves the same as described here.
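For example, machine downtime can be approximated by draining the node that hosts a pod. This is only a rough sketch, using the example pod name mycluster-mysql-2 from the steps below; <node-name> is a placeholder for the node running the target pod, and draining evicts every pod on that node, not just the database pod.
# Find the node that hosts the pod, then drain it to simulate node downtime
kubectl get pod mycluster-mysql-2 -n demo -o wide
kubectl drain <node-name> --ignore-daemonsets --delete-emptydir-data
# Make the node schedulable again afterwards
kubectl uncordon <node-name>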
Before you start
Create an ApeCloud MySQL RaftGroup Cluster. For details, refer to Create a MySQL cluster.
Run kubectl get cd apecloud-mysql -o yaml to check whether rolechangedprobe is enabled in the ApeCloud MySQL RaftGroup (it is enabled by default). If the following configuration exists, it indicates that the probe is enabled:
probes:
  roleProbe:
    failureThreshold: 2
    periodSeconds: 1
    timeoutSeconds: 1
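To confirm only the probe settings, the relevant lines can be filtered out of the full output, for example with grep (a minimal sketch):
# Print the roleProbe block from the ClusterDefinition
kubectl get cd apecloud-mysql -o yaml | grep -A 3 roleProbe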
Leader pod fault
Steps:
- kbcli
- kubectl
View the ApeCloud MySQL RaftGroup information. View the leader pod name in Topology. In this example, the leader pod's name is mycluster-mysql-2.
kbcli cluster describe mycluster -n demo
Delete the leader pod mycluster-mysql-2 to simulate a pod fault.
kubectl delete pod mycluster-mysql-2 -n demo
Run kbcli cluster describe and kbcli cluster connect to check the status of the pods and RaftGroup connection.
Results:
The following example shows that the roles of pods have changed after the old leader pod was deleted and mycluster-mysql-1 is elected as the new leader pod.
kbcli cluster describe mycluster -n demo
It shows that this ApeCloud MySQL RaftGroup can be connected within seconds.
kbcli cluster connect mycluster -n demo
How the automatic recovery works
After the leader pod is deleted, the ApeCloud MySQL RaftGroup Cluster elects a new leader. In this example, mycluster-mysql-1 is elected as the new leader. KubeBlocks detects that the leader has changed and sends a notification to update the access link. The original exception node automatically rebuilds and recovers to the normal RaftGroup Cluster state. It normally takes 30 seconds from exception to recovery.
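To observe the role change as it happens, the same label query used throughout this tutorial can be run in watch mode (a minimal sketch; -w streams updates until interrupted with Ctrl+C):
# Stream pod label changes and filter for the role label
kubectl get pods --show-labels -n demo -w | grep role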
View the pod roles of the ApeCloud MySQL RaftGroup Cluster. In this example, the leader pod's name is mycluster-mysql-1.
kubectl get pods --show-labels -n demo | grep role
Delete the leader pod mycluster-mysql-1 to simulate a pod fault.
kubectl delete pod mycluster-mysql-1 -n demo
Check the status of the pods and RaftGroup Cluster connection.
The following example shows that the roles of pods have changed after the old leader pod was deleted and mycluster-mysql-0 is elected as the new leader pod.
kubectl get pods --show-labels -n demo | grep role
Connect to this cluster to check the pod roles and status. This cluster can be connected within seconds.
kubectl get secrets -n demo mycluster-conn-credential -o jsonpath='{.data.username}' | base64 -d
> root
kubectl get secrets -n demo mycluster-conn-credential -o jsonpath='{.data.password}' | base64 -d
> pt2mmdlp4
kubectl exec -ti -n demo mycluster-mysql-0 -- bash
mysql -uroot -pt2mmdlp4
How the automatic recovery works
After the leader pod is deleted, the ApeCloud MySQL RaftGroup Cluster elects a new leader. In this example, mycluster-mysql-0 is elected as the new leader. KubeBlocks detects that the leader has changed and sends a notification to update the access link. The original exception node automatically rebuilds and recovers to the normal RaftGroup Cluster state. It normally takes 30 seconds from exception to recovery.
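To confirm that the deleted pod has been rebuilt and has rejoined as a follower, its labels can be checked directly (a sketch; the exact role label key may vary across KubeBlocks versions):
# The rebuilt pod should now carry a follower role label
kubectl get pod mycluster-mysql-1 -n demo --show-labels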
Single follower pod exception
Steps:
- kbcli
- kubectl
View the ApeCloud MySQL RaftGroup information and view the follower pod names in Topology. In this example, the follower pods are mycluster-mysql-0 and mycluster-mysql-2.
kbcli cluster describe mycluster -n demo
Delete the follower pod mycluster-mysql-0.
kubectl delete pod mycluster-mysql-0 -n demo
View the RaftGroup status and you can find the follower pod is being terminated.
kbcli cluster describe mycluster -n demo
Connect to the RaftGroup and you can find this single follower exception doesn't affect the R/W of the cluster.
kbcli cluster connect mycluster -n demo
View the pod roles again. In this example, the follower pods are mycluster-mysql-1 and mycluster-mysql-2.
kubectl get pods --show-labels -n demo | grep role
Delete the follower pod mycluster-mysql-1.
kubectl delete pod mycluster-mysql-1 -n demo
Open another terminal window and view the pod status. You can find the follower pod mycluster-mysql-1 is Terminating.
kubectl get pod -n demo
View the pod roles again.
kubectl get pods --show-labels -n demo | grep role
Connect to this cluster and you can find this single follower exception doesn't affect the R/W of the cluster.
kubectl exec -ti -n demo mycluster-mysql-0 -- bash
mysql -uroot -pt2mmdlp4
How the automatic recovery works
One follower exception doesn't trigger re-election of the leader or a switch of the access link, so the R/W of the cluster is not affected. A follower exception triggers pod recreation and recovery, and the process takes no more than 30 seconds.
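As a rough way to verify this while the follower pod is rebuilding, a write and a read can be issued against the leader (a sketch; ha_check is a hypothetical database name, and <password> stands for the value decoded from the mycluster-conn-credential secret shown earlier):
# Write check: CREATE DATABASE requires the cluster to accept writes; <password> is a placeholder
kubectl exec -n demo mycluster-mysql-0 -- mysql -uroot -p<password> -e "CREATE DATABASE IF NOT EXISTS ha_check;"
# Read check: a simple query against the leader
kubectl exec -n demo mycluster-mysql-0 -- mysql -uroot -p<password> -e "SELECT 1;"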
Two pods exception
The availability of the cluster generally requires a majority of pods to be in a normal state. When a majority of pods are exceptional, the original leader is automatically demoted to a follower, so any two exceptional pods leave only one follower pod remaining. In this way, whether the exceptions occur to one leader and one follower or to two followers, the failure behavior and automatic recovery are the same.
Steps:
- kbcli
- kubectl
View the ApeCloud MySQL RaftGroup information and view the follower pod names in Topology. In this example, the follower pods are mycluster-mysql-0 and mycluster-mysql-2.
kbcli cluster describe mycluster -n demo
Delete these two follower pods.
kubectl delete pod mycluster-mysql-0 mycluster-mysql-2 -n demo
View the RaftGroup status and you can find the follower pods are pending and a new leader pod is selected.
kbcli cluster describe mycluster -n demo
Run kbcli cluster connect mycluster again after a few seconds and you can find the pods in the RaftGroup work normally again.
kbcli cluster connect mycluster -n demo
View the pod roles again. In this example, the follower pods are mycluster-mysql-1 and mycluster-mysql-2.
kubectl get pods --show-labels -n demo | grep role
Delete these two follower pods.
kubectl delete pod mycluster-mysql-1 mycluster-mysql-2 -n demo
Open another terminal window and view the pod status. You can find the follower pods mycluster-mysql-1 and mycluster-mysql-2 are Terminating.
kubectl get pod -n demo
View the pod roles and you can find a new leader pod is selected.
kubectl get pods --show-labels -n demo | grep role
Connect to this cluster after a few seconds and you can find the pods in the RaftGroup Cluster work normally again.
kubectl exec -ti -n demo mycluster-mysql-0 -- bash
mysql -uroot -pt2mmdlp4
How the automatic recovery works
When two pods of the ApeCloud MySQL RaftGroup are exceptional, the cluster loses its majority, so the pods are unavailable and cluster R/W is unavailable. After the pods are recreated, a new leader is elected and the cluster recovers to R/W status. The process takes less than 30 seconds.
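One hedged way to watch the cluster come back is to watch the Cluster object itself (assuming, as in the examples above, that the Cluster is named mycluster; the exact status values shown depend on the KubeBlocks version):
# Watch the cluster status; it should return to Running once a new leader is elected and the pods are rebuilt
kubectl get cluster mycluster -n demo -w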
All pods exception
Steps:
- kbcli
- kubectl
Run the command below to view the ApeCloud MySQL RaftGroup information and view the pods' names in Topology.
kbcli cluster describe mycluster -n demo
Delete all pods.
kubectl delete pod mycluster-mysql-1 mycluster-mysql-0 mycluster-mysql-2 -n demo
Run the command below to view the cluster and pod status. After a few seconds, you can find all pods are running again and a new leader is selected.
kbcli cluster describe mycluster -n demo
Connect to the cluster and the connection works normally again.
kbcli cluster connect mycluster -n demo
View the role of pods.
kubectl get pods --show-labels -n demo | grep role
Delete all pods.
kubectl delete pod mycluster-mysql-1 mycluster-mysql-0 mycluster-mysql-2 -n demo
Open another terminal window and view the pod status. You can find the pods are Terminating.
kubectl get pod -n demo
Run the command below to view the pod status. After a few seconds, you can find all pods are running again and a new leader is selected.
kubectl get pods --show-labels -n demo | grep role
Connect to the cluster and the connection works normally again.
kubectl exec -ti -n demo mycluster-mysql-0 -- bash
mysql -uroot -pt2mmdlp4
How the automatic recovery works
Whenever a pod is deleted, recreation is triggered, and ApeCloud MySQL then automatically completes the cluster recovery and the election of a new leader. Once the leader election is completed, KubeBlocks detects the new leader and updates the access link. This process takes less than 30 seconds.
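To observe this process end to end, the recent events in the namespace can be inspected alongside the role labels (a rough sketch; the exact event messages depend on the KubeBlocks version):
# Recent events for the namespace, newest last
kubectl get events -n demo --sort-by=.lastTimestamp | grep mycluster
# Current role of each pod; one pod should carry the leader role again
kubectl get pods --show-labels -n demo | grep role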