Vertical Scaling for etcd Clusters with KubeBlocks

This guide explains how to vertically scale an etcd cluster managed by KubeBlocks by adjusting CPU and memory resources.

Prerequisites

    Before proceeding, verify your environment meets these requirements:

    • A functional Kubernetes cluster (v1.21+ recommended)
    • kubectl v1.21+ installed and configured with cluster access
    • Helm installed (installation guide)
    • KubeBlocks installed (installation guide)
    • etcd Add-on installed and an etcd cluster running (see Quickstart)

    Check Current Resources

    Inspect the current CPU and memory configuration of the etcd component:

    kubectl get cluster etcd-cluster -n demo \
      -o jsonpath='{.spec.componentSpecs[0].resources}' | jq .
    Example Output
    {
      "limits": {
        "cpu": "0.5",
        "memory": "0.5Gi"
      },
      "requests": {
        "cpu": "0.5",
        "memory": "0.5Gi"
      }
    }
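The jsonpath output above can be checked programmatically before scaling. A minimal sketch (illustrative only, not part of KubeBlocks tooling) that parses the resources JSON and confirms requests equal limits, which gives the pods the Guaranteed QoS class in Kubernetes:

```python
import json

# Example output from the kubectl jsonpath query above
raw = '{"limits": {"cpu": "0.5", "memory": "0.5Gi"}, "requests": {"cpu": "0.5", "memory": "0.5Gi"}}'

resources = json.loads(raw)

def is_guaranteed(res: dict) -> bool:
    """A pod gets the Guaranteed QoS class when requests equal limits
    for every resource; KubeBlocks sets both in the component spec."""
    return res.get("requests") == res.get("limits")

print(is_guaranteed(resources))  # True for the cluster above
```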

    Apply Vertical Scaling

    Scale the etcd cluster from 0.5 CPU / 0.5Gi memory to 1 CPU / 1Gi memory:

    apiVersion: operations.kubeblocks.io/v1alpha1
    kind: OpsRequest
    metadata:
      name: etcd-verticalscaling
      namespace: demo
    spec:
      clusterName: etcd-cluster
      type: VerticalScaling
      verticalScaling:
      - componentName: etcd
        requests:
          cpu: '1'
          memory: 1Gi
        limits:
          cpu: '1'
          memory: 1Gi
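If you generate manifests in code rather than committing YAML, the same OpsRequest can be built as a plain dictionary and submitted with any Kubernetes client. A hedged sketch (the helper name and the parametrized metadata name are illustrative, not a KubeBlocks API):

```python
def vertical_scaling_ops(cluster: str, component: str, cpu: str, memory: str) -> dict:
    """Construct a KubeBlocks VerticalScaling OpsRequest manifest with
    the same fields as the YAML above; apply it with kubectl or a
    Kubernetes client of your choice."""
    resources = {"cpu": cpu, "memory": memory}
    return {
        "apiVersion": "operations.kubeblocks.io/v1alpha1",
        "kind": "OpsRequest",
        "metadata": {"name": f"{cluster}-verticalscaling", "namespace": "demo"},
        "spec": {
            "clusterName": cluster,
            "type": "VerticalScaling",
            "verticalScaling": [
                {"componentName": component, "requests": resources, "limits": resources}
            ],
        },
    }

manifest = vertical_scaling_ops("etcd-cluster", "etcd", "1", "1Gi")
print(manifest["spec"]["type"])  # VerticalScaling
```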

    Apply it:

    kubectl apply -f https://raw.githubusercontent.com/apecloud/kubeblocks-addons/refs/heads/main/examples/etcd/verticalscale.yaml

    Monitor the progress:

    kubectl get ops etcd-verticalscaling -n demo -w
    Example Output
    NAME                   TYPE              CLUSTER        STATUS    PROGRESS   AGE
    etcd-verticalscaling   VerticalScaling   etcd-cluster   Running   0/3        10s
    etcd-verticalscaling   VerticalScaling   etcd-cluster   Running   1/3        28s
    etcd-verticalscaling   VerticalScaling   etcd-cluster   Running   2/3        50s
    etcd-verticalscaling   VerticalScaling   etcd-cluster   Running   3/3        66s
    etcd-verticalscaling   VerticalScaling   etcd-cluster   Succeed   3/3        66s
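In automation, the same progress can be read from the default kubectl table output rather than watched interactively. A minimal parsing sketch (assumes the six-column layout shown above; not a KubeBlocks utility):

```python
def parse_ops_line(line: str) -> dict:
    """Split one row of `kubectl get ops` default table output into
    named fields (NAME TYPE CLUSTER STATUS PROGRESS AGE)."""
    name, type_, cluster, status, progress, age = line.split()
    done, total = (int(n) for n in progress.split("/"))
    return {"name": name, "status": status, "done": done, "total": total}

row = parse_ops_line(
    "etcd-verticalscaling VerticalScaling etcd-cluster Succeed 3/3 66s"
)
print(row["status"], row["done"] == row["total"])  # Succeed True
```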

    Alternatively, instead of creating an OpsRequest, update the resources directly in the Cluster spec:

    kubectl patch cluster etcd-cluster -n demo --type=json \
      -p='[
        {"op": "replace", "path": "/spec/componentSpecs/0/resources/requests/cpu", "value": "1"},
        {"op": "replace", "path": "/spec/componentSpecs/0/resources/requests/memory", "value": "1Gi"},
        {"op": "replace", "path": "/spec/componentSpecs/0/resources/limits/cpu", "value": "1"},
        {"op": "replace", "path": "/spec/componentSpecs/0/resources/limits/memory", "value": "1Gi"}
      ]'
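The four replace operations above follow a fixed pattern, so they can be generated instead of hand-written. A small illustrative helper (the function name is hypothetical) that emits one JSON Patch op per resource field:

```python
import json

def resource_patch(cpu: str, memory: str, component_index: int = 0) -> str:
    """Build a JSON Patch that sets requests and limits for one
    component in a KubeBlocks Cluster spec, in the same shape as the
    kubectl patch command above."""
    base = f"/spec/componentSpecs/{component_index}/resources"
    ops = [
        {"op": "replace", "path": f"{base}/{kind}/{res}", "value": val}
        for kind in ("requests", "limits")
        for res, val in (("cpu", cpu), ("memory", memory))
    ]
    return json.dumps(ops)

patch = resource_patch("1", "1Gi")
print(patch)
```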

    Monitor the cluster status:

    kubectl get cluster etcd-cluster -n demo -w

    Verify Scaling

    kubectl get pods -n demo -l app.kubernetes.io/instance=etcd-cluster
    Example Output
    NAME                  READY   STATUS    RESTARTS   AGE   ROLE
    etcd-cluster-etcd-0   2/2     Running   0          90s   follower
    etcd-cluster-etcd-1   2/2     Running   0          65s   follower
    etcd-cluster-etcd-2   2/2     Running   0          35s   leader
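Note that the pods restart one at a time (youngest last) and the cluster keeps one leader throughout, so the scaling is a rolling operation. Identifying the leader from the pod listing can be sketched as (assumes the ROLE column is the last field, as above):

```python
def find_leader(lines: list[str]) -> str:
    """Return the name of the pod whose ROLE column is 'leader' from
    `kubectl get pods` output rows that include the ROLE column."""
    for line in lines:
        fields = line.split()
        if fields and fields[-1] == "leader":
            return fields[0]
    raise ValueError("no leader found")

pods = [
    "etcd-cluster-etcd-0 2/2 Running 0 90s follower",
    "etcd-cluster-etcd-1 2/2 Running 0 65s follower",
    "etcd-cluster-etcd-2 2/2 Running 0 35s leader",
]
print(find_leader(pods))  # etcd-cluster-etcd-2
```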

    Confirm the new resource allocation:

    kubectl get cluster etcd-cluster -n demo \
      -o jsonpath='{.spec.componentSpecs[0].resources}' | jq .
    Example Output
    {
      "limits": {
        "cpu": "1",
        "memory": "1Gi"
      },
      "requests": {
        "cpu": "1",
        "memory": "1Gi"
      }
    }

    Cleanup

    Delete the etcd cluster and the demo namespace:

    kubectl delete cluster etcd-cluster -n demo
    kubectl delete ns demo
