This guide explains how to perform horizontal scaling (scale-out and scale-in) on an etcd cluster managed by KubeBlocks.
etcd Quorum Requirements
etcd uses the Raft consensus algorithm. For correct operation, a majority (quorum) of members must remain healthy: a 3-member cluster tolerates 1 failure, and a 5-member cluster tolerates 2. Odd member counts are recommended, because adding an even member raises the quorum size without improving fault tolerance.
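The quorum rule is simple arithmetic; this shell sketch (not part of the guide) prints the quorum size and fault tolerance for common cluster sizes:

```shell
# Raft quorum: floor(n/2) + 1 members must be healthy to commit writes.
# Fault tolerance is whatever remains above quorum.
for n in 1 3 4 5; do
  quorum=$(( n / 2 + 1 ))
  tolerance=$(( n - quorum ))
  echo "members=$n quorum=$quorum tolerates=$tolerance"
done
```

Note that a 4-member cluster tolerates no more failures than a 3-member one, which is why odd sizes are preferred for steady-state operation.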
Known Limitation
After a scale-out operation completes, the KubeBlocks component controller may loop on the memberJoin health check, leaving the cluster in Updating state indefinitely even though the new member is fully started and functional. This is a known issue (#2541).
The cluster remains functional and accessible during this state. To restore the Running status, delete the cluster and recreate it at the desired replica count.
Before proceeding, verify your environment meets these requirements:
kubectl v1.21+ installed and configured with cluster access

Expected Workflow:
The new replica is registered with etcdctl member add; once the member has started and joined the cluster, the OpsRequest status moves to Succeed.

Scale out the etcd cluster by adding 1 replica (from 3 to 4):
apiVersion: operations.kubeblocks.io/v1alpha1
kind: OpsRequest
metadata:
  name: etcd-scale-out
  namespace: demo
spec:
  clusterName: etcd-cluster
  type: HorizontalScaling
  horizontalScaling:
  - componentName: etcd
    scaleOut:
      replicaChanges: 1
Apply it:
kubectl apply -f https://raw.githubusercontent.com/apecloud/kubeblocks-addons/refs/heads/main/examples/etcd/scale-out.yaml
Monitor the progress:
kubectl get ops etcd-scale-out -n demo -w
NAME TYPE CLUSTER STATUS PROGRESS AGE
etcd-scale-out HorizontalScaling etcd-cluster Running 0/1 10s
etcd-scale-out HorizontalScaling etcd-cluster Succeed 1/1 78s
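Instead of watching interactively, automation can poll the OpsRequest phase. A minimal sketch, assuming the demo namespace and OpsRequest names from this guide (`wait_for_ops` is a hypothetical helper, not a KubeBlocks command):

```shell
# Poll an OpsRequest until it reaches a terminal phase.
# Assumes kubectl is configured for the target cluster.
wait_for_ops() {
  local name="$1" ns="$2"
  local phase
  while true; do
    phase=$(kubectl get ops "$name" -n "$ns" -o jsonpath='{.status.phase}')
    case "$phase" in
      Succeed) echo "done";   return 0 ;;
      Failed)  echo "failed"; return 1 ;;
      *)       sleep 5 ;;
    esac
  done
}

# Example: wait_for_ops etcd-scale-out demo
```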
Alternatively, update the replicas field in the Cluster resource directly:
kubectl patch cluster etcd-cluster -n demo --type=json \
-p='[{"op": "replace", "path": "/spec/componentSpecs/0/replicas", "value": 4}]'
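The patch is equivalent to editing the Cluster spec declaratively. A trimmed sketch of the relevant fields (other spec fields omitted; verify the apiVersion against your KubeBlocks release):

```yaml
apiVersion: apps.kubeblocks.io/v1
kind: Cluster
metadata:
  name: etcd-cluster
  namespace: demo
spec:
  componentSpecs:
    - name: etcd
      replicas: 4   # target count after scale-out
```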
Verify that the new pod is running and has been assigned a role:

kubectl get pods -n demo -l app.kubernetes.io/instance=etcd-cluster -L kubeblocks.io/role
NAME READY STATUS RESTARTS AGE ROLE
etcd-cluster-etcd-0 2/2 Running 0 5m follower
etcd-cluster-etcd-1 2/2 Running 0 5m follower
etcd-cluster-etcd-2 2/2 Running 0 5m leader
etcd-cluster-etcd-3 2/2 Running 0 78s follower
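The role column is also easy to consume in scripts. This sketch inlines the sample output above; in practice, pipe the kubectl get pods command into awk:

```shell
# Extract the leader pod name from "<pod> <role>" pairs.
pods='etcd-cluster-etcd-0 follower
etcd-cluster-etcd-1 follower
etcd-cluster-etcd-2 leader
etcd-cluster-etcd-3 follower'

leader=$(echo "$pods" | awk '$2 == "leader" { print $1 }')
echo "leader is $leader"   # prints: leader is etcd-cluster-etcd-2
```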
Verify the new member is registered in the etcd cluster:
kubectl exec -n demo etcd-cluster-etcd-0 -c etcd -- \
etcdctl member list --endpoints=http://localhost:2379
..., started, etcd-cluster-etcd-0, ...
..., started, etcd-cluster-etcd-1, ...
..., started, etcd-cluster-etcd-2, ...
..., started, etcd-cluster-etcd-3, ...
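A scripted version of this check counts how many members report started (sample lines inlined; in practice, pipe the etcdctl member list output in):

```shell
# Count members in the "started" state from etcdctl member list output.
members='..., started, etcd-cluster-etcd-0, ...
..., started, etcd-cluster-etcd-1, ...
..., started, etcd-cluster-etcd-2, ...
..., started, etcd-cluster-etcd-3, ...'

started=$(echo "$members" | grep -c ', started,')
echo "started members: $started"   # prints: started members: 4
```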
Expected Workflow:
The departing replica is deregistered with etcdctl member remove; once the member is removed and its pod terminated, the OpsRequest status moves to Succeed.

Scale in the etcd cluster by removing 1 replica (from 4 to 3):
apiVersion: operations.kubeblocks.io/v1alpha1
kind: OpsRequest
metadata:
  name: etcd-scale-in
  namespace: demo
spec:
  clusterName: etcd-cluster
  type: HorizontalScaling
  horizontalScaling:
  - componentName: etcd
    scaleIn:
      replicaChanges: 1
Apply it:
kubectl apply -f https://raw.githubusercontent.com/apecloud/kubeblocks-addons/refs/heads/main/examples/etcd/scale-in.yaml
Monitor progress:
kubectl get ops etcd-scale-in -n demo -w
NAME TYPE CLUSTER STATUS PROGRESS AGE
etcd-scale-in HorizontalScaling etcd-cluster Running 0/1 10s
etcd-scale-in HorizontalScaling etcd-cluster Succeed 1/1 25s
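etcd members should be removed one at a time so the cluster re-establishes quorum between steps. A small guard for automation scripts (`check_scale_in` is a hypothetical helper, not part of KubeBlocks):

```shell
# Refuse scale-in requests that drop more than one member at once
# or would leave the cluster empty.
check_scale_in() {
  local current="$1" target="$2"
  if [ "$target" -lt 1 ]; then
    echo "refuse: at least 1 member required"; return 1
  fi
  if [ $(( current - target )) -gt 1 ]; then
    echo "refuse: remove one replica per OpsRequest"; return 1
  fi
  echo "ok"
}

# Example: check_scale_in 4 3   -> ok
```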
Alternatively, update the replicas field in the Cluster resource directly:

kubectl patch cluster etcd-cluster -n demo --type=json \
-p='[{"op": "replace", "path": "/spec/componentSpecs/0/replicas", "value": 3}]'
Verify that the extra pod has been removed:

kubectl get pods -n demo -l app.kubernetes.io/instance=etcd-cluster -L kubeblocks.io/role
NAME READY STATUS RESTARTS AGE ROLE
etcd-cluster-etcd-0 2/2 Running 0 8m follower
etcd-cluster-etcd-1 2/2 Running 0 8m follower
etcd-cluster-etcd-2 2/2 Running 0 8m leader
To clean up, delete the cluster and the namespace:

kubectl delete cluster etcd-cluster -n demo
kubectl delete ns demo