
Failure simulation and automatic recovery

As an open-source data management platform, KubeBlocks currently supports over thirty database engines and is continuously expanding. Because these engines differ in their built-in high-availability capabilities, KubeBlocks has designed and implemented a high availability (HA) system for database instances. The HA system is built on a unified framework, so different databases running on KubeBlocks achieve similar high-availability capabilities and a consistent experience.

This tutorial uses ApeCloud MySQL as an example to demonstrate the HA system's fault simulation and recovery capabilities.

Recovery simulation

note

The faults here are all simulated by deleting a pod. When there are sufficient resources, a fault can also be simulated by machine downtime or container deletion, and the automatic recovery works the same as described here.
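
For example, a container-level fault can be approximated by killing the database process inside the pod, which makes the kubelet restart the container. This is a hypothetical sketch; the container name (mysql) is an assumption and may differ in your ClusterDefinition:

    # Kill PID 1 in the (assumed) mysql container to force a container restart.
    kubectl exec mysql-cluster-mysql-1 -c mysql -- kill 1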

Before you start

  • Install KubeBlocks.

  • Create an ApeCloud MySQL RaftGroup. Refer to Create a MySQL cluster.

  • Run kubectl get cd apecloud-mysql -o yaml to check whether rolechangedprobe is enabled in the ApeCloud MySQL RaftGroup (it is enabled by default). If the following configuration exists, the probe is enabled:

    probes:
      roleProbe:
        failureThreshold: 2
        periodSeconds: 1
        timeoutSeconds: 1
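
    If you don't want to scan the full YAML, one way to check just the probe block is to filter the output; a minimal sketch (output formatting may vary with your kubectl version):

    # Show the roleProbe line and the three lines that follow it.
    kubectl get cd apecloud-mysql -o yaml | grep -A 3 'roleProbe:'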

Leader pod fault

Steps:

  1. View the ApeCloud MySQL RaftGroup information and find the leader pod's name in the Topology section. In this example, the leader pod is mysql-cluster-mysql-1.

    kbcli cluster describe mysql-cluster

  2. Delete the leader pod mysql-cluster-mysql-1 to simulate a pod fault.

    kubectl delete pod mysql-cluster-mysql-1

  3. Run kbcli cluster describe and kbcli cluster connect to check the status of the pods and RaftGroup connection.

    Results

    The following example shows that the pods' roles have changed after the old leader pod was deleted, and that mysql-cluster-mysql-2 has been elected as the new leader.

    kbcli cluster describe mysql-cluster

    It shows that this ApeCloud MySQL RaftGroup can be connected within seconds.

    kbcli cluster connect mysql-cluster

    How the automatic recovery works

    After the leader pod is deleted, the ApeCloud MySQL RaftGroup elects a new leader; in this example, mysql-cluster-mysql-2 is elected. KubeBlocks detects the leader change and sends a notification to update the access link. The failed node is automatically rebuilt and rejoins the RaftGroup in its normal state. It normally takes about 30 seconds from failure to full recovery.
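
    To watch the switchover as it happens, you can observe the pods' role labels from a second terminal before deleting the leader. This is a sketch that assumes KubeBlocks's standard labels, app.kubernetes.io/instance for the cluster name and kubeblocks.io/role for the Raft role; verify them against your release:

    # Print each pod with its role label as an extra column and watch for changes.
    kubectl get pods -l app.kubernetes.io/instance=mysql-cluster -L kubeblocks.io/role --watch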

Single follower pod exception

Steps:

  1. View the ApeCloud MySQL RaftGroup information and find the follower pods' names in the Topology section. In this example, the follower pods are mysql-cluster-mysql-0 and mysql-cluster-mysql-2.

    kbcli cluster describe mysql-cluster

  2. Delete the follower pod mysql-cluster-mysql-0.

    kubectl delete pod mysql-cluster-mysql-0

  3. View the RaftGroup status. In Component.Instance, you can find that the follower pod is being terminated.

    kbcli cluster describe mysql-cluster

  4. Connect to the RaftGroup and you can find that this single follower exception doesn't affect the R/W of the cluster.

    kbcli cluster connect mysql-cluster

    How the automatic recovery works

    A single follower exception doesn't trigger a leader re-election or an access-link switch, so the cluster's R/W is not affected. The follower exception triggers recreation and recovery, and the process takes no more than 30 seconds.
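
    To confirm that the recreated follower comes back within that window, one option is kubectl wait; a minimal sketch using the pod deleted above:

    # Block until the recreated follower reports Ready, or fail after 60 seconds.
    kubectl wait --for=condition=Ready pod/mysql-cluster-mysql-0 --timeout=60s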

Two pods exception

The availability of the cluster generally requires a majority of pods to be healthy: a Raft group of three replicas needs a quorum of two, so losing any two pods breaks the quorum. When the majority of pods are exceptional, the original leader is automatically demoted to a follower, which means any two exceptional pods leave only a single follower remaining.

In this way, whether the exceptions hit one leader and one follower or two followers, the failure behavior and the automatic recovery are the same.

Steps:

  1. View the ApeCloud MySQL RaftGroup information and find the follower pods' names in the Topology section. In this example, the follower pods are mysql-cluster-mysql-1 and mysql-cluster-mysql-0.

    kbcli cluster describe mysql-cluster

  2. Delete these two follower pods.

    kubectl delete pod mysql-cluster-mysql-1 mysql-cluster-mysql-0

  3. View the RaftGroup status. You can find that the follower pods are pending and a new leader pod has been elected.

    kbcli cluster describe mysql-cluster

  4. Run kbcli cluster connect mysql-cluster again after a few seconds and you can find that the pods in the RaftGroup are working normally again in Component.Instance.

    kbcli cluster connect mysql-cluster

    How the automatic recovery works

    When two pods of the ApeCloud MySQL RaftGroup are exceptional, the group loses its quorum, so the pods are unavailable and cluster R/W stops. After the pods are recreated, a new leader is elected and the cluster recovers to R/W status. The process takes less than 30 seconds.
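
    One way to time the recovery is to poll the Cluster object until its phase returns to Running. This is a sketch; the status.phase field and the Running value follow the KubeBlocks Cluster CRD and may differ across versions:

    # Poll once per second until the cluster reports Running again.
    until [ "$(kubectl get cluster mysql-cluster -o jsonpath='{.status.phase}')" = "Running" ]; do
      sleep 1
    done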

All pods exception

Steps:

  1. Run the command below to view the ApeCloud MySQL RaftGroup information and find the pods' names in the Topology section.

    kbcli cluster describe mysql-cluster

  2. Delete all pods.

    kubectl delete pod mysql-cluster-mysql-1 mysql-cluster-mysql-0 mysql-cluster-mysql-2

  3. Run the command below to view the deletion process. You can find that the pods are pending.

    kbcli cluster describe mysql-cluster

  4. Run kbcli cluster connect mysql-cluster again after a few seconds and you can find that the pods in the RaftGroup are working normally again.

    kbcli cluster connect mysql-cluster

    How the automatic recovery works

    Every pod deletion triggers a recreation. ApeCloud MySQL then automatically completes the cluster recovery and elects a new leader. Once the election finishes, KubeBlocks detects the new leader and updates the access link. The whole process takes less than 30 seconds.
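
    To verify end-to-end recovery after all three pods are deleted, one sketch is to wait for every pod in the cluster to become Ready again and then list the reassigned roles (the label names are the same assumptions as above):

    # Wait for all cluster pods to become Ready, then print their roles.
    kubectl wait --for=condition=Ready pod -l app.kubernetes.io/instance=mysql-cluster --timeout=120s
    kubectl get pods -l app.kubernetes.io/instance=mysql-cluster -L kubeblocks.io/role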