This guide provides a comprehensive walkthrough for deploying and managing etcd clusters using the KubeBlocks etcd Add-on, covering installation, cluster creation (with and without TLS), deployment verification, and lifecycle operations such as stopping, starting, and deleting a cluster.
Before proceeding, verify your environment meets these requirements:
kubectl v1.21+ installed and configured with cluster access

Check if the etcd Add-on is installed:
helm list -n kb-system | grep etcd
NAME            NAMESPACE   REVISION   UPDATED      STATUS     CHART
kb-addon-etcd   kb-system   1          2026-04-03   deployed   etcd-1.0.2
If the add-on isn't installed, choose an installation method:
# Add Helm repo
helm repo add kubeblocks https://apecloud.github.io/helm-charts
# For users in Mainland China, if GitHub is inaccessible or slow, use this alternative repo:
#helm repo add kubeblocks https://jihulab.com/api/v4/projects/150246/packages/helm/stable
# Update helm repo
helm repo update
# Search available Add-on versions
helm search repo kubeblocks/etcd --versions
# Install your desired version (replace <VERSION> with your chosen version)
helm upgrade -i kb-addon-etcd kubeblocks/etcd --version <VERSION> -n kb-system
# Add an index (kubeblocks is added by default)
kbcli addon index add kubeblocks https://github.com/apecloud/block-index.git
# Update the index
kbcli addon index update kubeblocks
To search and install an addon:
# Search Add-on
kbcli addon search etcd
# Install Add-on with your desired version (replace <VERSION> with your chosen version)
kbcli addon install etcd --version <VERSION>
To enable or disable an addon:
# Enable Add-on
kbcli addon enable etcd
# Disable Add-on
kbcli addon disable etcd
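After enabling the Add-on, you can confirm its status before moving on (a quick check, assuming kbcli is installed and connected to your cluster):

```shell
# List installed add-ons and filter for etcd;
# STATUS should report "Enabled" once the add-on is active
kbcli addon list | grep etcd
```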
Version Compatibility
Always verify that the etcd Add-on version matches your KubeBlocks major version to avoid compatibility issues.
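One way to check the installed KubeBlocks version before picking an Add-on version (this sketch assumes KubeBlocks was installed via Helm into the kb-system namespace):

```shell
# Show the KubeBlocks release and its chart version
helm list -n kb-system | grep kubeblocks

# Or query kbcli, which reports both the CLI and server versions
kbcli version
```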
List available etcd versions:
kubectl get componentversion etcd
NAME   VERSIONS             STATUS      AGE
etcd   3.6.1,3.5.15,3.5.6   Available   5m
etcd requires persistent storage. Verify available options:
kubectl get storageclass
Recommended storage characteristics: SSD-backed volumes with low write latency (etcd is highly sensitive to disk latency) and a StorageClass that supports dynamic provisioning.
Deploy a basic etcd cluster with 3 nodes:
kubectl apply -f https://raw.githubusercontent.com/apecloud/kubeblocks-addons/refs/heads/main/examples/etcd/cluster.yaml
This applies the following manifest, which creates a three-node etcd cluster:
apiVersion: apps.kubeblocks.io/v1
kind: Cluster
metadata:
  name: etcd-cluster
  namespace: demo
spec:
  # Specifies the behavior when a Cluster is deleted.
  # Valid options are: [DoNotTerminate, Delete, WipeOut]
  # - `DoNotTerminate`: Prevents deletion of the Cluster. This policy ensures that all resources remain intact.
  # - `Delete`: Deletes all Cluster resources, including PVCs, leading to a thorough cleanup that removes all persistent data.
  # - `WipeOut`: An aggressive policy that deletes all Cluster resources, including volume snapshots and backups in external storage.
  terminationPolicy: Delete
  componentSpecs:
    - name: etcd
      componentDef: etcd
      # ServiceVersion specifies the version of the Service expected to be
      # provisioned by this Component.
      # Valid options are: [3.6.1, 3.5.15, 3.5.6]
      serviceVersion: 3.6.1
      disableExporter: false
      replicas: 3
      resources:
        limits:
          cpu: "0.5"
          memory: "0.5Gi"
        requests:
          cpu: "0.5"
          memory: "0.5Gi"
      volumeClaimTemplates:
        - name: data
          spec:
            storageClassName: ""
            accessModes:
              - ReadWriteOnce
            resources:
              requests:
                storage: 20Gi
For more API fields and descriptions, refer to the API Reference.
To enable mutual TLS for all client and peer communication, set tls: true. KubeBlocks automatically provisions certificates using cert-manager:
apiVersion: apps.kubeblocks.io/v1
kind: Cluster
metadata:
  name: etcd-cluster-tls
  namespace: demo
spec:
  terminationPolicy: Delete
  componentSpecs:
    - name: etcd
      componentDef: etcd
      tls: true
      issuer:
        name: KubeBlocks # KubeBlocks manages certificate issuance
      serviceVersion: 3.6.1
      replicas: 3
      resources:
        limits:
          cpu: "0.5"
          memory: "0.5Gi"
        requests:
          cpu: "0.5"
          memory: "0.5Gi"
      volumeClaimTemplates:
        - name: data
          spec:
            storageClassName: ""
            accessModes:
              - ReadWriteOnce
            resources:
              requests:
                storage: 20Gi
Apply it:
kubectl apply -f https://raw.githubusercontent.com/apecloud/kubeblocks-addons/refs/heads/main/examples/etcd/cluster-with-tls.yaml
When deploying an etcd cluster, KubeBlocks automatically configures a headless service for client and peer traffic, stable per-member DNS names, and role labels (leader/follower) on the pods.
Confirm successful deployment:
kubectl get cluster etcd-cluster -n demo
NAME           CLUSTER-DEFINITION   TERMINATION-POLICY   STATUS    AGE
etcd-cluster                        Delete               Running   2m
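Rather than polling, you can block until the cluster reaches the Running phase (a sketch; `kubectl wait --for=jsonpath` requires kubectl v1.23+):

```shell
# Block until the Cluster's status.phase becomes Running (up to 5 minutes)
kubectl wait cluster/etcd-cluster -n demo \
  --for=jsonpath='{.status.phase}'=Running \
  --timeout=300s
```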
kubectl get pods -n demo -l app.kubernetes.io/instance=etcd-cluster -L kubeblocks.io/role
NAME                  READY   STATUS    RESTARTS   AGE   ROLE
etcd-cluster-etcd-0   2/2     Running   0          2m    follower
etcd-cluster-etcd-1   2/2     Running   0          2m    follower
etcd-cluster-etcd-2   2/2     Running   0          2m    leader
KubeBlocks creates a headless service for etcd. Each member is accessible via its pod DNS name.
kubectl get svc -n demo | grep etcd-cluster
NAME                         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)             AGE
etcd-cluster-etcd-headless   ClusterIP   None         <none>        2379/TCP,2380/TCP   3m
| Port | Name | Description |
|---|---|---|
| 2379 | client | Client connections |
| 2380 | peer | Raft peer communication |
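For quick local testing you can forward the client port to your workstation (a sketch; this targets a single pod, which is sufficient for smoke tests but not for production access):

```shell
# Forward local port 2379 to the etcd client port of one member
kubectl port-forward -n demo pod/etcd-cluster-etcd-0 2379:2379

# In another terminal, assuming a local etcdctl binary:
# etcdctl --endpoints=http://localhost:2379 endpoint health
```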
Connect to an etcd pod and verify the cluster health:
kubectl exec -n demo etcd-cluster-etcd-0 -c etcd -- \
etcdctl endpoint health \
--endpoints=http://localhost:2379
http://localhost:2379 is healthy: successfully committed proposal: took = 2.345ms
List all cluster members:
kubectl exec -n demo etcd-cluster-etcd-0 -c etcd -- \
etcdctl member list \
--endpoints=http://localhost:2379
2e7e91b4d4b2a6c1, started, etcd-cluster-etcd-0, http://etcd-cluster-etcd-0.etcd-cluster-etcd-headless.demo.svc.cluster.local:2380, http://etcd-cluster-etcd-0.etcd-cluster-etcd-headless.demo.svc.cluster.local:2379, false
3a5f8d12c9e14b07, started, etcd-cluster-etcd-1, http://etcd-cluster-etcd-1.etcd-cluster-etcd-headless.demo.svc.cluster.local:2380, http://etcd-cluster-etcd-1.etcd-cluster-etcd-headless.demo.svc.cluster.local:2379, false
8c1d4e7f2a9b3055, started, etcd-cluster-etcd-2, http://etcd-cluster-etcd-2.etcd-cluster-etcd-headless.demo.svc.cluster.local:2380, http://etcd-cluster-etcd-2.etcd-cluster-etcd-headless.demo.svc.cluster.local:2379, false
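etcdctl can also summarize per-endpoint state, including which member is currently the leader, in table form:

```shell
# Show DB size, leader flag, and raft term for every member;
# --cluster expands the endpoint list from the member list
kubectl exec -n demo etcd-cluster-etcd-0 -c etcd -- \
  etcdctl endpoint status --cluster -w table \
  --endpoints=http://localhost:2379
```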
Write and read a key to verify basic functionality:
# Write
kubectl exec -n demo etcd-cluster-etcd-0 -c etcd -- \
etcdctl put test-key hello-kubeblocks \
--endpoints=http://localhost:2379
# Read
kubectl exec -n demo etcd-cluster-etcd-0 -c etcd -- \
etcdctl get test-key \
--endpoints=http://localhost:2379
OK
test-key
hello-kubeblocks
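After the smoke test, remove the key so test data does not linger in the cluster:

```shell
# Delete the test key; etcdctl prints the number of keys removed (1)
kubectl exec -n demo etcd-cluster-etcd-0 -c etcd -- \
  etcdctl del test-key \
  --endpoints=http://localhost:2379
```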
For TLS-enabled clusters, certificates are stored at /etc/pki/tls/:
kubectl exec -n demo etcd-cluster-tls-etcd-0 -c etcd -- \
etcdctl endpoint health \
--endpoints=https://localhost:2379 \
--cert=/etc/pki/tls/cert.pem \
--key=/etc/pki/tls/key.pem \
--cacert=/etc/pki/tls/ca.pem
https://localhost:2379 is healthy: successfully committed proposal: took = 2.123ms
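To inspect the provisioned certificate itself, e.g. its subject and expiry, you can read it with openssl (an assumption: openssl is available in the etcd container; if not, copy the file out with `kubectl cp` first):

```shell
# Print the certificate subject and validity window
kubectl exec -n demo etcd-cluster-tls-etcd-0 -c etcd -- \
  openssl x509 -in /etc/pki/tls/cert.pem -noout -subject -dates
```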
Stopping a cluster temporarily suspends operations while preserving all data and configuration:
kubectl apply -f https://raw.githubusercontent.com/apecloud/kubeblocks-addons/refs/heads/main/examples/etcd/stop.yaml
apiVersion: operations.kubeblocks.io/v1alpha1
kind: OpsRequest
metadata:
  name: etcd-stop
  namespace: demo
spec:
  clusterName: etcd-cluster
  type: Stop
Alternatively, stop the cluster by setting the component's `stop` field directly:
kubectl patch cluster etcd-cluster -n demo --type='json' -p='[
  {
    "op": "add",
    "path": "/spec/componentSpecs/0/stop",
    "value": true
  }
]'
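Either way, once the stop completes the pods are removed while the PVCs (and therefore the data) are retained; you can confirm this:

```shell
# No pods should remain for the stopped cluster...
kubectl get pods -n demo -l app.kubernetes.io/instance=etcd-cluster

# ...but the data PVCs are still bound
kubectl get pvc -n demo -l app.kubernetes.io/instance=etcd-cluster
```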
To resume a stopped cluster, apply a Start OpsRequest:
kubectl apply -f https://raw.githubusercontent.com/apecloud/kubeblocks-addons/refs/heads/main/examples/etcd/start.yaml
apiVersion: operations.kubeblocks.io/v1alpha1
kind: OpsRequest
metadata:
  name: etcd-start
  namespace: demo
spec:
  clusterName: etcd-cluster
  type: Start
Alternatively, remove the `stop` field from the component:
kubectl patch cluster etcd-cluster -n demo --type='json' -p='[
  {
    "op": "remove",
    "path": "/spec/componentSpecs/0/stop"
  }
]'
Choose carefully based on your data retention needs:
| Policy | Resources Removed | Data Removed | Recommended For |
|---|---|---|---|
| DoNotTerminate | None | None | Critical production clusters |
| Delete | All resources | PVCs deleted | Non-critical environments |
| WipeOut | All resources | Everything* | Test environments only |
*Includes snapshots and backups in external storage
For test environments, use this complete cleanup:
kubectl patch cluster etcd-cluster -p '{"spec":{"terminationPolicy":"WipeOut"}}' --type="merge" -n demo
kubectl delete cluster etcd-cluster -n demo
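To confirm that the WipeOut deletion removed everything, check that no cluster objects or volume claims remain:

```shell
# Both lists should come back empty once WipeOut completes
kubectl get cluster -n demo
kubectl get pvc -n demo -l app.kubernetes.io/instance=etcd-cluster
```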