This guide explains how to vertically scale an etcd cluster managed by KubeBlocks by adjusting CPU and memory resources.
Before proceeding, verify your environment meets these requirements:
- kubectl v1.21+ installed and configured with cluster access

Check the cluster's current resource allocation:

```bash
kubectl get cluster etcd-cluster -n demo \
  -o jsonpath='{.spec.componentSpecs[0].resources}' | jq .
```

```json
{
  "limits": { "cpu": "0.5", "memory": "0.5Gi" },
  "requests": { "cpu": "0.5", "memory": "0.5Gi" }
}
```
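These values use Kubernetes quantity notation: CPU can be written either as decimal cores (`0.5`) or millicores (`500m`), and the two forms are interchangeable. A minimal illustrative converter (the `to_millicores` helper is hypothetical, not part of kubectl or KubeBlocks):

```shell
# Convert a Kubernetes CPU quantity to millicores.
# Illustrative helper only; not part of kubectl or KubeBlocks.
to_millicores() {
  case "$1" in
    *m) echo "${1%m}" ;;   # already in millicores, e.g. "500m"
    *)  awk -v cores="$1" 'BEGIN { printf "%d\n", cores * 1000 }' ;;  # decimal cores, e.g. "0.5"
  esac
}

to_millicores 0.5    # -> 500
to_millicores 500m   # -> 500
```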
Scale the etcd cluster from 0.5 CPU / 0.5Gi memory to 1 CPU / 1Gi memory by creating a `VerticalScaling` OpsRequest:
```yaml
apiVersion: operations.kubeblocks.io/v1alpha1
kind: OpsRequest
metadata:
  name: etcd-verticalscaling
  namespace: demo
spec:
  clusterName: etcd-cluster
  type: VerticalScaling
  verticalScaling:
  - componentName: etcd
    requests:
      cpu: '1'
      memory: 1Gi
    limits:
      cpu: '1'
      memory: 1Gi
```
Apply it:

```bash
kubectl apply -f https://raw.githubusercontent.com/apecloud/kubeblocks-addons/refs/heads/main/examples/etcd/verticalscale.yaml
```
Monitor the progress:

```bash
kubectl get ops etcd-verticalscaling -n demo -w
```

Example output:

```text
NAME                   TYPE              CLUSTER        STATUS    PROGRESS   AGE
etcd-verticalscaling   VerticalScaling   etcd-cluster   Running   0/3        10s
etcd-verticalscaling   VerticalScaling   etcd-cluster   Running   1/3        28s
etcd-verticalscaling   VerticalScaling   etcd-cluster   Running   2/3        50s
etcd-verticalscaling   VerticalScaling   etcd-cluster   Running   3/3        66s
etcd-verticalscaling   VerticalScaling   etcd-cluster   Succeed   3/3        66s
```
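The PROGRESS column reports how many of the component's pods have been updated (`completed/total`); the OpsRequest moves to `Succeed` once the two match. A small illustrative parse of a captured status line (the sample line is copied from the output above):

```shell
# Parse the PROGRESS column ("completed/total") from a captured `kubectl get ops` line.
# Sample line taken from the monitoring output above.
line='etcd-verticalscaling   VerticalScaling   etcd-cluster   Succeed   3/3        66s'

progress=$(echo "$line" | awk '{print $5}')   # fifth column is PROGRESS
completed=${progress%/*}
total=${progress#*/}
echo "$completed of $total pods updated"      # -> 3 of 3 pods updated
```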
Alternatively, update the resources directly in the Cluster spec:

```bash
kubectl patch cluster etcd-cluster -n demo --type=json \
  -p='[
    {"op": "replace", "path": "/spec/componentSpecs/0/resources/requests/cpu", "value": "1"},
    {"op": "replace", "path": "/spec/componentSpecs/0/resources/requests/memory", "value": "1Gi"},
    {"op": "replace", "path": "/spec/componentSpecs/0/resources/limits/cpu", "value": "1"},
    {"op": "replace", "path": "/spec/componentSpecs/0/resources/limits/memory", "value": "1Gi"}
  ]'
```
Monitor the cluster status:

```bash
kubectl get cluster etcd-cluster -n demo -w
```

Check the pods and their roles:

```bash
kubectl get pods -n demo -l app.kubernetes.io/instance=etcd-cluster
```

```text
NAME                  READY   STATUS    RESTARTS   AGE   ROLE
etcd-cluster-etcd-0   2/2     Running   0          90s   follower
etcd-cluster-etcd-1   2/2     Running   0          65s   follower
etcd-cluster-etcd-2   2/2     Running   0          35s   leader
```
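The rolling restart replaces one pod at a time, and etcd re-elects as needed, so a healthy cluster should always show exactly one pod in the `leader` role. An illustrative check against a captured listing (sample data copied from the output above):

```shell
# Count leader pods in a captured `kubectl get pods` listing.
# Sample data copied from the pod listing above.
pods='etcd-cluster-etcd-0   2/2   Running   0   90s   follower
etcd-cluster-etcd-1   2/2   Running   0   65s   follower
etcd-cluster-etcd-2   2/2   Running   0   35s   leader'

# The sixth column is the ROLE; a healthy cluster has exactly one leader.
leaders=$(echo "$pods" | awk '$6 == "leader"' | wc -l | tr -d ' ')
echo "leaders: $leaders"   # -> leaders: 1
```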
Confirm the new resource allocation:

```bash
kubectl get cluster etcd-cluster -n demo \
  -o jsonpath='{.spec.componentSpecs[0].resources}' | jq .
```

```json
{
  "limits": { "cpu": "1", "memory": "1Gi" },
  "requests": { "cpu": "1", "memory": "1Gi" }
}
```
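Because requests equal limits for every resource, the restarted pods should land in the Guaranteed QoS class (assuming all containers in the pod satisfy that condition), which is generally what you want for etcd. An illustrative check of that invariant, run against JSON like the output above (uses `jq`, which this guide already relies on):

```shell
# Guaranteed QoS requires requests == limits for every resource.
# Sample JSON copied from the cluster spec output above.
resources='{"limits":{"cpu":"1","memory":"1Gi"},"requests":{"cpu":"1","memory":"1Gi"}}'

# Canonicalize both objects (sorted keys) before comparing.
limits=$(echo "$resources" | jq -cS '.limits')
requests=$(echo "$resources" | jq -cS '.requests')

if [ "$limits" = "$requests" ]; then
  echo "requests match limits: Guaranteed QoS"
fi
```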
When you are done, clean up the demo resources:

```bash
kubectl delete cluster etcd-cluster -n demo
kubectl delete ns demo
```