
Vertical Scaling for Elasticsearch Clusters with KubeBlocks

This guide demonstrates how to vertically scale an Elasticsearch Cluster managed by KubeBlocks by adjusting compute resources (CPU and memory) while keeping the number of replicas unchanged.

Vertical scaling modifies compute resources (CPU and memory) for Elasticsearch instances while maintaining replica count. Key characteristics:

  • Non-disruptive: When properly configured, maintains availability during scaling
  • Granular: Adjust CPU, memory, or both independently
  • Reversible: Scale up or down as needed

KubeBlocks ensures minimal impact during scaling operations by following a controlled, role-aware update strategy:

Role-Aware Replicas (Primary/Secondary Replicas)

  • Secondary replicas update first – Non-leader pods are upgraded to minimize disruption.
  • Primary updates last – Only after all secondaries are healthy does the primary pod restart.
  • Cluster state progresses from Updating → Running once all replicas are stable.

Role-Unaware Replicas (Ordinal-Based Scaling)

If replicas have no defined roles, updates follow Kubernetes pod ordinal order:

  • Highest ordinal first (e.g., pod-2 → pod-1 → pod-0) to ensure deterministic rollouts.
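
During a rolling update, you can watch the role labels that KubeBlocks attaches to pods to confirm the update order. A minimal sketch, assuming the es-multinode cluster created later in this guide and the kubeblocks.io/role label used for role-aware components:

kubectl get pods -l app.kubernetes.io/instance=es-multinode -n demo -L kubeblocks.io/role -w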

Prerequisites

Before proceeding, ensure the following:

  • Environment Setup:
    • A Kubernetes cluster is up and running.
    • The kubectl CLI tool is configured to communicate with your cluster.
    • KubeBlocks CLI and KubeBlocks Operator are installed. Follow the installation instructions here.
  • Namespace Preparation: To keep resources isolated, create a dedicated namespace for this tutorial:

kubectl create ns demo
namespace/demo created
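
As an additional sanity check, you can confirm the KubeBlocks Operator itself is running. A quick sketch, assuming the default kb-system installation namespace:

kubectl get pods -n kb-system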
    

Deploy an Elasticsearch Cluster

KubeBlocks uses a declarative approach for managing Elasticsearch Clusters. Below is an example configuration that deploys an Elasticsearch Cluster with separate groups of replicas for different node roles.

Apply the following YAML configuration to deploy the cluster:

apiVersion: apps.kubeblocks.io/v1
kind: Cluster
metadata:
  name: es-multinode
  namespace: demo
spec:
  terminationPolicy: Delete
  componentSpecs:
    - name: dit
      componentDef: elasticsearch-8
      serviceVersion: 8.8.2
      configs:
        - name: es-cm
          variables:
            # use key `roles` to specify the roles this component assumes
            roles: data,ingest,transform
      replicas: 3
      disableExporter: false
      resources:
        limits:
          cpu: "1"
          memory: "2Gi"
        requests:
          cpu: "1"
          memory: "2Gi"
      volumeClaimTemplates:
        - name: data
          spec:
            accessModes:
              - ReadWriteOnce
            resources:
              requests:
                storage: 20Gi
    - name: master
      componentDef: elasticsearch-8
      serviceVersion: 8.8.2
      configs:
        - name: es-cm
          variables:
            # use key `roles` to specify the roles this component assumes
            roles: master
      replicas: 3
      disableExporter: false
      resources:
        limits:
          cpu: "1"
          memory: "2Gi"
        requests:
          cpu: "1"
          memory: "2Gi"
      volumeClaimTemplates:
        - name: data
          spec:
            accessModes:
              - ReadWriteOnce
            resources:
              requests:
                storage: 20Gi
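
Save the manifest to a file and apply it; the file name es-multinode.yaml below is just an example:

kubectl apply -f es-multinode.yaml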
      

Verifying the Deployment

Monitor the cluster status until it transitions to the Running state:

kubectl get cluster es-multinode -n demo -w

Expected Output:

NAME           CLUSTER-DEFINITION   TERMINATION-POLICY   STATUS     AGE
es-multinode                        Delete               Creating   10s
es-multinode                        Delete               Updating   41s
es-multinode                        Delete               Running    42s

Check the pod status and roles:

kubectl get pods -l app.kubernetes.io/instance=es-multinode -n demo

Expected Output:

NAME                    READY   STATUS    RESTARTS   AGE
es-multinode-dit-0      3/3     Running   0          6m21s
es-multinode-dit-1      3/3     Running   0          6m21s
es-multinode-dit-2      3/3     Running   0          6m21s
es-multinode-master-0   3/3     Running   0          6m21s
es-multinode-master-1   3/3     Running   0          6m21s
es-multinode-master-2   3/3     Running   0          6m21s

Once the cluster status becomes Running, your Elasticsearch cluster is ready for use.

TIP

If you are creating the cluster for the very first time, it may take some time to pull images before running.
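
Optionally, confirm that Elasticsearch itself reports a healthy cluster by port-forwarding to one of the pods and querying the cluster health API. A sketch, assuming the default Elasticsearch HTTP port 9200 and that the addon exposes HTTP without authentication; if security is enabled, you will need to supply credentials:

kubectl -n demo port-forward pod/es-multinode-master-0 9200:9200

Then, in another terminal:

curl -s "http://127.0.0.1:9200/_cluster/health?pretty"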

Vertical Scale

Expected Workflow:

1. Pods are updated in pod ordinal order, from highest to lowest (e.g., pod-2 → pod-1 → pod-0)
2. Cluster status transitions from Updating to Running

Option 1: Using VerticalScaling OpsRequest

Apply the following YAML to update the resources for the dit component:

apiVersion: operations.kubeblocks.io/v1alpha1
kind: OpsRequest
metadata:
  name: es-multinode-vscale-ops
  namespace: demo
spec:
  clusterName: es-multinode
  type: VerticalScaling
  verticalScaling:
  - componentName: dit
    requests:
      cpu: '1'
      memory: 1Gi
    limits:
      cpu: '1'
      memory: 1Gi
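
Save and apply the OpsRequest; the file name es-vscale-ops.yaml is just an example:

kubectl apply -f es-vscale-ops.yaml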
        

You can check the progress of the scaling operation with the following command:

kubectl -n demo get ops es-multinode-vscale-ops -w

Expected Result:

NAME                      TYPE              CLUSTER        STATUS    PROGRESS   AGE
es-multinode-vscale-ops   VerticalScaling   es-multinode   Running   0/3        57s
es-multinode-vscale-ops   VerticalScaling   es-multinode   Running   1/3        60s
es-multinode-vscale-ops   VerticalScaling   es-multinode   Running   2/3        118s
es-multinode-vscale-ops   VerticalScaling   es-multinode   Running   3/3        2m51s
es-multinode-vscale-ops   VerticalScaling   es-multinode   Succeed   3/3        2m51s
        

Option 2: Direct Cluster API Update

Alternatively, you may update the spec.componentSpecs.resources field directly to the desired values:

apiVersion: apps.kubeblocks.io/v1
kind: Cluster
spec:
  componentSpecs:
    - name: dit
      replicas: 3
      resources:
        requests:
          cpu: "1"       # Update the resources to your need.
          memory: "1Gi"  # Update the resources to your need.
        limits:
          cpu: "1"       # Update the resources to your need.
          memory: "1Gi"  # Update the resources to your need.
  ...
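
The same change can also be applied as a patch instead of editing the full manifest. A sketch, assuming dit is the first entry under spec.componentSpecs:

kubectl -n demo patch cluster es-multinode --type json -p '[
  {
    "op": "replace",
    "path": "/spec/componentSpecs/0/resources",
    "value": {
      "requests": {"cpu": "1", "memory": "1Gi"},
      "limits": {"cpu": "1", "memory": "1Gi"}
    }
  }
]'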
        

Best Practices & Considerations

Planning:

  • Scale during maintenance windows or low-traffic periods
  • Verify the Kubernetes cluster has sufficient resources
  • Check for any ongoing operations before starting

Execution:

  • Maintain balanced CPU-to-memory ratios
  • Set identical requests/limits for guaranteed QoS (see the check after this list)

Post-Scaling:

  • Monitor resource utilization and application performance
  • Consider adjusting Elasticsearch parameters if needed
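
Because the examples set identical requests and limits, the pods should receive the Guaranteed QoS class, provided every container in the pod (including sidecars) also has matching requests and limits. You can check this with the standard status.qosClass field:

kubectl -n demo get pod es-multinode-dit-0 -o jsonpath='{.status.qosClass}'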

Verification

Verify the updated resources by inspecting the cluster configuration or Pod details:

kbcli cluster describe es-multinode -n demo

Expected Output:

Resources Allocation:
COMPONENT          INSTANCE-TEMPLATE   CPU(REQUEST/LIMIT)   MEMORY(REQUEST/LIMIT)   STORAGE-SIZE   STORAGE-CLASS
dit                                    1 / 1                1Gi / 1Gi               data:20Gi      <none>
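
If kbcli is not available, plain kubectl can read the same information from the pod spec. A sketch, assuming the Elasticsearch container is the first container in the pod:

kubectl -n demo get pod es-multinode-dit-0 -o jsonpath='{.spec.containers[0].resources}'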
        

Key Benefits of Vertical Scaling with KubeBlocks

  • Seamless Scaling: Pods are recreated in a specific order to ensure minimal disruption.
  • Dynamic Resource Adjustments: Easily scale CPU and memory based on workload requirements.
  • Flexibility: Choose between OpsRequest for dynamic scaling or direct API updates for precise control.
  • Improved Availability: The cluster remains operational during the scaling process, maintaining high availability.

Cleanup

To remove all created resources, delete the Elasticsearch Cluster along with its namespace:

kubectl delete cluster es-multinode -n demo
kubectl delete ns demo
        

Summary

In this guide, you learned how to:

1. Deploy an Elasticsearch Cluster managed by KubeBlocks.
2. Perform vertical scaling by increasing or decreasing resources for a component (dit in this example).
3. Use both OpsRequest and direct Cluster API updates to adjust resource allocations.

Vertical scaling is a powerful tool for optimizing resource utilization and adapting to changing workload demands, ensuring your Elasticsearch Cluster remains performant and resilient.
