This guide demonstrates how to restore an etcd cluster from a backup created by the datafile method. KubeBlocks restores data into a new cluster rather than overwriting the existing one.
Before proceeding, verify your environment meets these requirements:
- kubectl v1.21+ installed and configured with cluster access
- A completed etcd backup (see Backup)

List available backups:
kubectl get backup -n demo -l app.kubernetes.io/instance=etcd-cluster
NAME                  POLICY                            METHOD     REPO         STATUS      TOTAL-SIZE   DURATION   DELETION-POLICY   CREATION-TIME
etcd-cluster-backup   etcd-cluster-etcd-backup-policy   datafile   minio-repo   Completed   853          10s        Delete            2026-04-03T17:20:00Z
Note the backup name (etcd-cluster-backup) and namespace (demo) — you'll need them in the next step.
Restore creates a new etcd cluster from the backup data. The source cluster continues running unaffected.
apiVersion: apps.kubeblocks.io/v1
kind: Cluster
metadata:
  name: etcd-cluster-restore
  namespace: demo
  annotations:
    # Replace "etcd-cluster-backup" with your backup name
    # Replace "demo" with your backup namespace
    kubeblocks.io/restore-from-backup: '{"etcd":{"name":"etcd-cluster-backup","namespace":"demo","volumeRestorePolicy":"Parallel"}}'
spec:
  terminationPolicy: Delete
  componentSpecs:
    - name: etcd
      componentDef: etcd
      serviceVersion: 3.6.1
      disableExporter: false
      replicas: 3
      resources:
        limits:
          cpu: "0.5"
          memory: "0.5Gi"
        requests:
          cpu: "0.5"
          memory: "0.5Gi"
      volumeClaimTemplates:
        - name: data
          spec:
            storageClassName: ""
            accessModes:
              - ReadWriteOnce
            resources:
              requests:
                storage: 20Gi
Apply it:
kubectl apply -f https://raw.githubusercontent.com/apecloud/kubeblocks-addons/refs/heads/main/examples/etcd/restore.yaml
The annotation format:
{
  "etcd": {
    "name": "<BACKUP_NAME>",
    "namespace": "<BACKUP_NAMESPACE>",
    "volumeRestorePolicy": "Parallel"
  }
}
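Because this JSON document is embedded as a single-quoted YAML string, quoting mistakes are easy to make. A small Python sketch that builds the annotation value in the compact form used in the manifest above (the helper is hypothetical, not a KubeBlocks API):

```python
import json

def restore_annotation(component: str, backup_name: str,
                       backup_namespace: str,
                       policy: str = "Parallel") -> str:
    """Build the kubeblocks.io/restore-from-backup annotation value."""
    if policy not in ("Parallel", "Serial"):
        raise ValueError("volumeRestorePolicy must be Parallel or Serial")
    value = {component: {"name": backup_name,
                         "namespace": backup_namespace,
                         "volumeRestorePolicy": policy}}
    # separators=(",", ":") emits the compact JSON used in the manifest.
    return json.dumps(value, separators=(",", ":"))

print(restore_annotation("etcd", "etcd-cluster-backup", "demo"))
# {"etcd":{"name":"etcd-cluster-backup","namespace":"demo","volumeRestorePolicy":"Parallel"}}
```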
etcd — the component name (must match spec.componentSpecs[].name)
name — the backup CR name
namespace — the namespace where the backup CR lives
volumeRestorePolicy — Parallel (all volumes restored simultaneously) or Serial (one volume at a time)

Watch the new cluster until it reaches Running:

kubectl get cluster etcd-cluster-restore -n demo -w
NAME                   CLUSTER-DEFINITION   TERMINATION-POLICY   STATUS     AGE
etcd-cluster-restore                        Delete               Creating   30s
etcd-cluster-restore                        Delete               Running    2m1s
Check pod status and roles:
kubectl get pods -n demo -l app.kubernetes.io/instance=etcd-cluster-restore -L kubeblocks.io/role
NAME                          READY   STATUS    RESTARTS   AGE    ROLE
etcd-cluster-restore-etcd-0   2/2     Running   0          2m1s   leader
etcd-cluster-restore-etcd-1   2/2     Running   0          2m1s   follower
etcd-cluster-restore-etcd-2   2/2     Running   0          2m1s   follower
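A healthy etcd cluster has exactly one leader. If you automate this check, you can read the kubeblocks.io/role labels from `kubectl get pods -o json` output. A minimal sketch, with a hypothetical helper and sample data:

```python
def check_roles(pod_list: dict, replicas: int = 3) -> None:
    """Assert one leader and replicas-1 followers among the pods.

    `pod_list` is assumed to be shaped like `kubectl get pods -o json`;
    the role sits in the kubeblocks.io/role label, as shown in the
    -L column of the listing above.
    """
    roles = [p["metadata"]["labels"].get("kubeblocks.io/role")
             for p in pod_list["items"]]
    assert roles.count("leader") == 1, f"expected 1 leader, got {roles}"
    assert roles.count("follower") == replicas - 1, roles

# Hypothetical sample matching the pod listing above.
sample = {"items": [
    {"metadata": {"labels": {"kubeblocks.io/role": r}}}
    for r in ("leader", "follower", "follower")
]}
check_roles(sample)
print("role check passed")
```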
Connect to the restored cluster and verify that the data is intact:
kubectl exec -n demo etcd-cluster-restore-etcd-0 -c etcd -- \
etcdctl get test-key \
--endpoints=http://localhost:2379
test-key
hello-kubeblocks
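To verify more than a single key, you can dump all keys from both the source and restored clusters (for example with `etcdctl get "" --prefix`) and compare. etcdctl's plain output prints each key on one line and its value on the next, as in the output above; a minimal parsing sketch (the helper and sample strings are illustrative):

```python
def parse_etcdctl_get(output: str) -> dict[str, str]:
    """Parse plain `etcdctl get --prefix` output: alternating key/value lines.

    Note: this simple sketch drops blank lines, so it would misparse
    empty values; fine for a spot check, not for binary data.
    """
    lines = [line for line in output.splitlines() if line != ""]
    if len(lines) % 2:
        raise ValueError("expected alternating key/value lines")
    return dict(zip(lines[0::2], lines[1::2]))

# Hypothetical dumps from the source and restored clusters.
source = "test-key\nhello-kubeblocks\napp/config\nv1\n"
restored = "test-key\nhello-kubeblocks\napp/config\nv1\n"
assert parse_etcdctl_get(source) == parse_etcdctl_get(restored)
print("restored data matches source")
```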
If the restore is stuck or fails:
Check restore resources created during the process:
kubectl get restore -n demo
Describe the Component for events:
kubectl describe cmp etcd-cluster-restore-etcd -n demo
Check the restore job logs:
kubectl get job -n demo | grep restore
kubectl logs -n demo job/<RESTORE_JOB_NAME>
When you are done, clean up the restored cluster, the source cluster, the backup, and the namespace:
kubectl delete cluster etcd-cluster-restore -n demo
kubectl delete cluster etcd-cluster -n demo
kubectl delete backup etcd-cluster-backup -n demo
kubectl delete ns demo