Restore an etcd Cluster from Backup

This guide demonstrates how to restore an etcd cluster from a backup created by the datafile method. KubeBlocks restores data into a new cluster rather than overwriting the existing one.

Prerequisites

    Before proceeding, verify your environment meets these requirements:

    • A functional Kubernetes cluster (v1.21+ recommended)
    • kubectl v1.21+ installed and configured with cluster access
    • Helm installed (installation guide)
    • KubeBlocks installed (installation guide)
    • etcd Add-on installed and an etcd cluster running (see Quickstart)
    • An etcd backup in `Completed` status (see Backup)

    Step 1: Locate the Backup

    List available backups:

    kubectl get backup -n demo -l app.kubernetes.io/instance=etcd-cluster
    Example Output
    NAME                  POLICY                            METHOD     REPO         STATUS      TOTAL-SIZE   DURATION   DELETION-POLICY   CREATION-TIME
    etcd-cluster-backup   etcd-cluster-etcd-backup-policy   datafile   minio-repo   Completed   853          10s        Delete            2026-04-03T17:20:00Z

    Note the backup name (etcd-cluster-backup) and namespace (demo) — you'll need them in the next step.
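    You can also script this lookup instead of reading the table by eye. The sketch below filters on the STATUS column with awk; the heredoc mirrors the example output above, and in a live cluster you would pipe the `kubectl get backup` output into the same filter instead:

    ```shell
    # Pick the first backup whose STATUS column (field 5) is "Completed".
    # The heredoc stands in for live `kubectl get backup -n demo \
    #   -l app.kubernetes.io/instance=etcd-cluster` output.
    backups=$(cat <<'EOF'
    NAME                  POLICY                            METHOD     REPO         STATUS      TOTAL-SIZE   DURATION   DELETION-POLICY   CREATION-TIME
    etcd-cluster-backup   etcd-cluster-etcd-backup-policy   datafile   minio-repo   Completed   853          10s        Delete            2026-04-03T17:20:00Z
    EOF
    )
    # NR > 1 skips the header row; exit after the first match.
    BACKUP_NAME=$(printf '%s\n' "$backups" | awk 'NR > 1 && $5 == "Completed" { print $1; exit }')
    echo "$BACKUP_NAME"
    ```

    Column positions depend on the table layout; for anything less throwaway, `kubectl get backup -o jsonpath` is more robust than awk.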

    Step 2: Create a Restored Cluster

    Restore creates a new etcd cluster from the backup data. The source cluster continues running unaffected.

    apiVersion: apps.kubeblocks.io/v1
    kind: Cluster
    metadata:
      name: etcd-cluster-restore
      namespace: demo
      annotations:
        # Replace "etcd-cluster-backup" with your backup name
        # Replace "demo" with your backup namespace
        kubeblocks.io/restore-from-backup: '{"etcd":{"name":"etcd-cluster-backup","namespace":"demo","volumeRestorePolicy":"Parallel"}}'
    spec:
      terminationPolicy: Delete
      componentSpecs:
        - name: etcd
          componentDef: etcd
          serviceVersion: 3.6.1
          disableExporter: false
          replicas: 3
          resources:
            limits:
              cpu: "0.5"
              memory: "0.5Gi"
            requests:
              cpu: "0.5"
              memory: "0.5Gi"
          volumeClaimTemplates:
            - name: data
              spec:
                storageClassName: ""
                accessModes:
                  - ReadWriteOnce
                resources:
                  requests:
                    storage: 20Gi

    Apply it:

    kubectl apply -f https://raw.githubusercontent.com/apecloud/kubeblocks-addons/refs/heads/main/examples/etcd/restore.yaml

    The annotation format:

    {
      "etcd": {
        "name": "<BACKUP_NAME>",
        "namespace": "<BACKUP_NAMESPACE>",
        "volumeRestorePolicy": "Parallel"
      }
    }
    • etcd — the component name (must match spec.componentSpecs[].name)
    • name — the backup CR name
    • namespace — namespace where the backup CR lives
    • volumeRestorePolicy — Parallel (all volumes restored simultaneously) or Serial
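    If you template the manifest rather than hand-editing it, you can assemble the annotation value from variables so the backup name and namespace are set in one place. A minimal sketch (`BACKUP_NAME` and `BACKUP_NAMESPACE` are illustrative variable names, not KubeBlocks conventions):

    ```shell
    # Build the kubeblocks.io/restore-from-backup annotation value from variables.
    BACKUP_NAME="etcd-cluster-backup"
    BACKUP_NAMESPACE="demo"
    ANNOTATION=$(printf '{"etcd":{"name":"%s","namespace":"%s","volumeRestorePolicy":"Parallel"}}' \
      "$BACKUP_NAME" "$BACKUP_NAMESPACE")
    echo "$ANNOTATION"
    ```

    The resulting string can then be substituted into the Cluster manifest (for example with `envsubst` or a templating tool of your choice) before `kubectl apply`.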

    Step 3: Monitor the Restore

    kubectl get cluster etcd-cluster-restore -n demo -w
    Example Output
    NAME                   CLUSTER-DEFINITION   TERMINATION-POLICY   STATUS     AGE
    etcd-cluster-restore                        Delete               Creating   30s
    etcd-cluster-restore                        Delete               Running    2m1s

    Check pod status and roles:

    kubectl get pods -n demo -l app.kubernetes.io/instance=etcd-cluster-restore -L kubeblocks.io/role
    Example Output
    NAME                          READY   STATUS    RESTARTS   AGE    ROLE
    etcd-cluster-restore-etcd-0   2/2     Running   0          2m1s   leader
    etcd-cluster-restore-etcd-1   2/2     Running   0          2m1s   follower
    etcd-cluster-restore-etcd-2   2/2     Running   0          2m1s   follower
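    Later steps exec into the leader pod, so it can help to extract its name programmatically. A sketch that filters the ROLE column with awk; the heredoc mirrors the example output above, and in a live cluster you would pipe the `kubectl get pods ... -L kubeblocks.io/role` output into the same filter:

    ```shell
    # Find the pod whose ROLE column (field 6) is "leader".
    # The heredoc stands in for live `kubectl get pods` output.
    pods=$(cat <<'EOF'
    NAME                          READY   STATUS    RESTARTS   AGE    ROLE
    etcd-cluster-restore-etcd-0   2/2     Running   0          2m1s   leader
    etcd-cluster-restore-etcd-1   2/2     Running   0          2m1s   follower
    etcd-cluster-restore-etcd-2   2/2     Running   0          2m1s   follower
    EOF
    )
    # NR > 1 skips the header row.
    LEADER=$(printf '%s\n' "$pods" | awk 'NR > 1 && $6 == "leader" { print $1 }')
    echo "$LEADER"
    ```

    Since the role is exposed as a pod label (that is what `-L kubeblocks.io/role` displays), selecting on it directly, e.g. `kubectl get pods -n demo -l kubeblocks.io/role=leader`, should give the same answer without text parsing.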

    Step 4: Verify the Restored Data

    Connect to the restored cluster and verify that the data is intact:

    kubectl exec -n demo etcd-cluster-restore-etcd-0 -c etcd -- \
      etcdctl get test-key \
      --endpoints=http://localhost:2379
    Example Output
    test-key
    hello-kubeblocks

    Troubleshooting

    If the restore is stuck or fails:

    1. Check restore resources created during the process:

      kubectl get restore -n demo
    2. Describe the Component for events:

      kubectl describe cmp etcd-cluster-restore-etcd -n demo
    3. Check the restore job logs:

      kubectl get job -n demo | grep restore
      kubectl logs -n demo job/<RESTORE_JOB_NAME>

    Cleanup

    kubectl delete cluster etcd-cluster-restore -n demo
    kubectl delete cluster etcd-cluster -n demo
    kubectl delete backup etcd-cluster-backup -n demo
    kubectl delete ns demo

    © 2026 KUBEBLOCKS INC