Restore a ZooKeeper Cluster from Backup

This guide demonstrates how to restore a ZooKeeper cluster from a backup created by the zoocreeper method. KubeBlocks restores data into a new cluster rather than overwriting the existing one.

Prerequisites

    Before proceeding, verify your environment meets these requirements:

    • A functional Kubernetes cluster (v1.21+ recommended)
    • kubectl v1.21+ installed and configured with cluster access
    • Helm installed (installation guide)
    • KubeBlocks installed (installation guide)
    • ZooKeeper Add-on installed and a ZooKeeper cluster running (see Quickstart)
    • A ZooKeeper backup in Completed status (see Backup)

    Step 1: Locate the Backup

    List available backups:

    kubectl get backup -n demo -l app.kubernetes.io/instance=zookeeper-cluster
    Example Output
    NAME                POLICY                                      METHOD       REPO         STATUS      TOTAL-SIZE   DURATION   DELETION-POLICY   CREATION-TIME
    zk-cluster-backup   zookeeper-cluster-zookeeper-backup-policy   zoocreeper   minio-repo   Completed   588          17s        Delete            2026-04-03T12:37:31Z

    Note the backup name (zk-cluster-backup) and namespace (demo) — you'll need them in the next step.
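If several backups exist, you can filter a saved listing for those in Completed status before picking one. A minimal sketch, assuming the listing has been saved to a file and mirrors the example output above (column 5 is STATUS):

```shell
# Save a sample listing (assumed data mirroring the example output above).
cat <<'EOF' > /tmp/backups.txt
NAME                POLICY                                      METHOD       REPO         STATUS      TOTAL-SIZE   DURATION   DELETION-POLICY   CREATION-TIME
zk-cluster-backup   zookeeper-cluster-zookeeper-backup-policy   zoocreeper   minio-repo   Completed   588          17s        Delete            2026-04-03T12:37:31Z
EOF
# Print the names of backups whose STATUS column reads "Completed".
awk 'NR > 1 && $5 == "Completed" { print $1 }' /tmp/backups.txt
```

In a live environment you would pipe `kubectl get backup -n demo ...` into the same `awk` filter instead of using a saved file.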

    Step 2: Create a Restored Cluster

    Restore creates a new ZooKeeper cluster from the backup data. The source cluster continues running unaffected.

    apiVersion: apps.kubeblocks.io/v1
    kind: Cluster
    metadata:
      name: zk-cluster-restore
      namespace: demo
      annotations:
        # Replace "zk-cluster-backup" with your backup name
        # Replace "demo" with your backup namespace
        kubeblocks.io/restore-from-backup: '{"zookeeper":{"name":"zk-cluster-backup","namespace":"demo","volumeRestorePolicy":"Parallel"}}'
    spec:
      terminationPolicy: Delete
      componentSpecs:
        - name: zookeeper
          componentDef: zookeeper
          serviceVersion: "3.9.2"
          replicas: 3
          resources:
            limits:
              cpu: '0.5'
              memory: 0.5Gi
            requests:
              cpu: '0.5'
              memory: 0.5Gi
          volumeClaimTemplates:
            - name: data
              spec:
                storageClassName: ""
                accessModes:
                  - ReadWriteOnce
                resources:
                  requests:
                    storage: 20Gi
            - name: snapshot-log
              spec:
                storageClassName: ""
                accessModes:
                  - ReadWriteOnce
                resources:
                  requests:
                    storage: 20Gi

    Apply it:

    kubectl apply -f https://raw.githubusercontent.com/apecloud/kubeblocks-addons/refs/heads/main/examples/zookeeper/restore.yaml

    The annotation format:

    {
      "zookeeper": {
        "name": "<BACKUP_NAME>",
        "namespace": "<BACKUP_NAMESPACE>",
        "volumeRestorePolicy": "Parallel"
      }
    }
    • zookeeper — the component name (must match spec.componentSpecs[].name)
    • name — the backup CR name
    • namespace — namespace where the backup CR lives
    • volumeRestorePolicy — Parallel (all volumes restored simultaneously) or Serial
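To avoid quoting mistakes when editing the manifest by hand, the annotation value can be generated from shell variables and round-tripped through a JSON parser before use. A minimal sketch, assuming `python3` is available locally (the variable values are the example names from this guide):

```shell
# Build the restore-from-backup annotation value from shell variables.
BACKUP_NAME="zk-cluster-backup"   # your backup CR name
BACKUP_NS="demo"                  # namespace where the backup CR lives
ANNOTATION=$(printf '{"zookeeper":{"name":"%s","namespace":"%s","volumeRestorePolicy":"Parallel"}}' \
  "$BACKUP_NAME" "$BACKUP_NS")
# Verify it is well-formed JSON; prints the backup name on success.
echo "$ANNOTATION" | python3 -c 'import json,sys; print(json.load(sys.stdin)["zookeeper"]["name"])'
```

The resulting `$ANNOTATION` string can be pasted into the manifest's `kubeblocks.io/restore-from-backup` annotation.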

    Step 3: Monitor the Restore

    kubectl get cluster zk-cluster-restore -n demo -w
    Example Output
    NAME                 CLUSTER-DEFINITION   TERMINATION-POLICY   STATUS     AGE
    zk-cluster-restore                        Delete               Creating   95s
    zk-cluster-restore                        Delete               Running    2m41s

    Check pod status and roles:

    kubectl get pods -n demo -l app.kubernetes.io/instance=zk-cluster-restore -L kubeblocks.io/role
    Example Output
    NAME                             READY   STATUS    RESTARTS   AGE     ROLE
    zk-cluster-restore-zookeeper-0   2/2     Running   0          2m40s   follower
    zk-cluster-restore-zookeeper-1   2/2     Running   0          2m20s   follower
    zk-cluster-restore-zookeeper-2   2/2     Running   0          2m      leader

    Step 4: Verify the Restored Data

    Connect to the restored cluster and verify that the data is intact:

    kubectl exec -n demo zk-cluster-restore-zookeeper-2 -- \
      bash -c "echo 'ls /' | /opt/bitnami/zookeeper/bin/zkCli.sh -server localhost:2181 2>/dev/null | grep -v WATCHER | grep -v WatchedEvent | grep -v '\[zk:' | grep -v INFO | grep -v Exiting | grep -v '^$' | grep -v Connecting | grep -v Welcome | grep -v JLine"
    Example Output
    [test-node, zookeeper]

    Verify a specific node:

    kubectl exec -n demo zk-cluster-restore-zookeeper-2 -- \
      bash -c "echo 'get /test-node' | /opt/bitnami/zookeeper/bin/zkCli.sh -server localhost:2181 2>/dev/null | tail -3"
    Example Output
    hello-kubeblocks
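Rather than eyeballing the listing, you can compare it against the set of znodes you expect to survive the restore. A minimal sketch; `test-node` and `zookeeper` are the example znodes from this guide, and `ACTUAL` is where you would paste the real output of the `ls /` command above:

```shell
# Assert that every expected znode appears in the zkCli listing.
EXPECTED="test-node zookeeper"     # assumption: znodes present before the backup
ACTUAL="[test-node, zookeeper]"    # paste the output of the 'ls /' command here
for node in $EXPECTED; do
  case "$ACTUAL" in
    *"$node"*) echo "ok: $node" ;;
    *)         echo "missing: $node"; exit 1 ;;
  esac
done
```

The loop exits non-zero on the first missing znode, so it can gate a scripted restore check.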

    Troubleshooting

    If the restore is stuck or fails:

    1. Check restore resources created during the process:

      kubectl get restore -n demo
    2. Describe the Component for events:

      kubectl describe cmp zk-cluster-restore-zookeeper -n demo
    3. Check the restore job logs:

      kubectl get job -n demo | grep restore
      kubectl logs -n demo job/<RESTORE_JOB_NAME>

    Cleanup

    kubectl delete cluster zk-cluster-restore -n demo
    kubectl delete cluster zookeeper-cluster -n demo
    kubectl delete backup zk-cluster-backup -n demo
    kubectl delete ns demo

    © 2026 KUBEBLOCKS INC