
  1. Prerequisites
  2. Deploy an Elasticsearch Cluster
  3. Verifying the Deployment
  4. Cluster Lifecycle Operations
    1. Stopping the Cluster
    2. Verifying Cluster Stop
    3. Starting the Cluster
    4. Verifying Cluster Start
    5. Restarting the Cluster
  5. Summary

Elasticsearch Cluster Lifecycle Management

This guide demonstrates how to manage an Elasticsearch Cluster's operational state in KubeBlocks, including:

  • Stopping the cluster to conserve resources
  • Starting a stopped cluster
  • Restarting cluster components

These operations help optimize resource usage and reduce operational costs in Kubernetes environments.

Lifecycle management operations in KubeBlocks:

Operation   Effect                                  Use Case
Stop        Suspends the cluster, retains storage   Cost savings, maintenance
Start       Resumes cluster operation               Restore service after a pause
Restart     Recreates pods for a component          Configuration changes, troubleshooting
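Each of these operations can be driven through the OpsRequest API used later in this guide; they all share the same basic shape. The sketch below shows the common fields (the metadata name is illustrative):

```yaml
apiVersion: operations.kubeblocks.io/v1alpha1
kind: OpsRequest
metadata:
  name: my-lifecycle-ops        # illustrative name
  namespace: demo
spec:
  clusterName: es-multinode     # the Cluster this operation targets
  type: Stop                    # one of Stop, Start, Restart
```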

Prerequisites

    Before proceeding, ensure the following:

    • Environment Setup:
      • A Kubernetes cluster is up and running.
      • The kubectl CLI tool is configured to communicate with your cluster.
      • KubeBlocks CLI and KubeBlocks Operator are installed. Follow the installation instructions here.
    • Namespace Preparation: To keep resources isolated, create a dedicated namespace for this tutorial:
    kubectl create ns demo

    Expected Output:

    namespace/demo created

    Deploy an Elasticsearch Cluster

      KubeBlocks uses a declarative approach for managing Elasticsearch Clusters. Below is an example configuration that deploys an Elasticsearch Cluster with dedicated replicas for different node roles.

      Apply the following YAML configuration to deploy the cluster:

      apiVersion: apps.kubeblocks.io/v1
      kind: Cluster
      metadata:
        name: es-multinode
        namespace: demo
      spec:
        terminationPolicy: Delete
        componentSpecs:
          - name: dit
            componentDef: elasticsearch-8
            serviceVersion: 8.8.2
            configs:
              - name: es-cm
                variables:
                  # use key `roles` to specify the roles this component assumes
                  roles: data,ingest,transform
            replicas: 3
            disableExporter: false
            resources:
              limits:
                cpu: "1"
                memory: "2Gi"
              requests:
                cpu: "1"
                memory: "2Gi"
            volumeClaimTemplates:
              - name: data
                spec:
                  accessModes:
                    - ReadWriteOnce
                  resources:
                    requests:
                      storage: 20Gi
          - name: master
            componentDef: elasticsearch-8
            serviceVersion: 8.8.2
            configs:
              - name: es-cm
                variables:
                  # use key `roles` to specify the roles this component assumes
                  roles: master
            replicas: 3
            disableExporter: false
            resources:
              limits:
                cpu: "1"
                memory: "2Gi"
              requests:
                cpu: "1"
                memory: "2Gi"
            volumeClaimTemplates:
              - name: data
                spec:
                  accessModes:
                    - ReadWriteOnce
                  resources:
                    requests:
                      storage: 20Gi

      Verifying the Deployment

        Monitor the cluster status until it transitions to the Running state:

        kubectl get cluster es-multinode -n demo -w

        Expected Output:

        NAME           CLUSTER-DEFINITION   TERMINATION-POLICY   STATUS     AGE
        es-multinode                        Delete               Creating   10s
        es-multinode                        Delete               Updating   41s
        es-multinode                        Delete               Running    42s

        Check the pod status and roles:

        kubectl get pods -l app.kubernetes.io/instance=es-multinode -n demo

        Expected Output:

        NAME                    READY   STATUS    RESTARTS   AGE
        es-multinode-dit-0      3/3     Running   0          6m21s
        es-multinode-dit-1      3/3     Running   0          6m21s
        es-multinode-dit-2      3/3     Running   0          6m21s
        es-multinode-master-0   3/3     Running   0          6m21s
        es-multinode-master-1   3/3     Running   0          6m21s
        es-multinode-master-2   3/3     Running   0          6m21s

        Once the cluster status becomes Running, your Elasticsearch cluster is ready for use.

        TIP

        If you are creating the cluster for the very first time, it may take some time to pull images before running.

        Cluster Lifecycle Operations

        Stopping the Cluster

        Stopping an Elasticsearch Cluster in KubeBlocks will:

        1. Terminate all running pods
        2. Retain persistent storage (PVCs)
        3. Maintain the cluster configuration

        This operation is ideal for:

        • Temporary cost savings
        • Maintenance windows
        • Development environment pauses

        Option 1: OpsRequest API

        Create a Stop operation request:

        apiVersion: operations.kubeblocks.io/v1alpha1
        kind: OpsRequest
        metadata:
          name: es-multinode-stop-ops
          namespace: demo
        spec:
          clusterName: es-multinode
          type: Stop

        Option 2: Cluster API Patch

        Modify the cluster spec directly by patching the stop field:

        kubectl patch cluster es-multinode -n demo --type='json' -p='[
          { "op": "add", "path": "/spec/componentSpecs/0/stop", "value": true },
          { "op": "add", "path": "/spec/componentSpecs/1/stop", "value": true }
        ]'
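Conceptually, this patch sets a stop flag on each component entry. After it is applied, the cluster spec is equivalent to declaring the following fragment (all other fields unchanged):

```yaml
spec:
  componentSpecs:
    - name: dit
      stop: true
      # ...other fields as originally declared
    - name: master
      stop: true
      # ...other fields as originally declared
```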

        Verifying Cluster Stop

        To confirm a successful stop operation:

        1. Check cluster status transition:

          kubectl get cluster es-multinode -n demo -w

          Example Output:

          NAME           CLUSTER-DEFINITION   TERMINATION-POLICY   STATUS     AGE
          es-multinode                        Delete               Stopping   8m6s
          es-multinode                        Delete               Stopped    9m41s
        2. Verify no running pods:

          kubectl get pods -l app.kubernetes.io/instance=es-multinode -n demo

          Example Output:

          No resources found in demo namespace.
        3. Confirm persistent volumes remain:

          kubectl get pvc -l app.kubernetes.io/instance=es-multinode -n demo

          Example Output:

          NAME                         STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   VOLUMEATTRIBUTESCLASS   AGE
          data-es-multinode-dit-0      Bound    pvc-aa8136e5-a69a-4117-bb4c-8978978bb77f   20Gi       RWO            standard       <unset>                 8m25s
          data-es-multinode-dit-1      Bound    pvc-408fe4d5-b3a9-4984-b6e5-48ec133307eb   20Gi       RWO            standard       <unset>                 8m25s
          data-es-multinode-dit-2      Bound    pvc-cf6c3c7c-bb5f-4fa6-8dff-33e0862f8ef9   20Gi       RWO            standard       <unset>                 8m25s
          data-es-multinode-master-0   Bound    pvc-5793e794-8c91-4bba-b6e8-52c414ec0ade   20Gi       RWO            standard       <unset>                 8m25s
          data-es-multinode-master-1   Bound    pvc-044dae8d-82ee-41f3-867d-c8f27ec08fbe   20Gi       RWO            standard       <unset>                 8m25s
          data-es-multinode-master-2   Bound    pvc-2af7cedb-2f5f-4846-be43-ff6da8109880   20Gi       RWO            standard       <unset>                 8m25s

        Starting the Cluster

        Starting a stopped Elasticsearch Cluster:

        1. Recreates all pods
        2. Reattaches persistent storage
        3. Restores service endpoints

        Expected behavior:

        • Cluster returns to previous state
        • No data loss occurs
        • Services resume automatically

        Option 1: OpsRequest API

        Initiate a Start operation request:

        apiVersion: operations.kubeblocks.io/v1alpha1
        kind: OpsRequest
        metadata:
          name: es-multinode-start-ops
          namespace: demo
        spec:
          # Specifies the name of the Cluster resource that this operation is targeting.
          clusterName: es-multinode
          type: Start
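If you create many OpsRequest objects over time, finished requests can accumulate. Assuming your KubeBlocks version supports the `ttlSecondsAfterSucceed` field on OpsRequest (check your installed API version), a variant that cleans itself up could look like:

```yaml
apiVersion: operations.kubeblocks.io/v1alpha1
kind: OpsRequest
metadata:
  name: es-multinode-start-ops-ttl   # illustrative name
  namespace: demo
spec:
  clusterName: es-multinode
  type: Start
  # assumption: this field is available in your KubeBlocks version;
  # it garbage-collects the OpsRequest 300s after it succeeds
  ttlSecondsAfterSucceed: 300
```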

        Option 2: Cluster API Patch

        Modify the cluster spec to resume operation by either setting stop: false or removing the stop field entirely:

        kubectl patch cluster es-multinode -n demo --type='json' -p='[
          { "op": "remove", "path": "/spec/componentSpecs/0/stop" },
          { "op": "remove", "path": "/spec/componentSpecs/1/stop" }
        ]'
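Setting stop: false instead of removing the field achieves the same result. Declaratively, the resumed spec corresponds to this fragment (other fields omitted):

```yaml
spec:
  componentSpecs:
    - name: dit
      stop: false
    - name: master
      stop: false
```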

        Verifying Cluster Start

        To confirm a successful start operation:

        1. Check cluster status transition:

          kubectl get cluster es-multinode -n demo -w

          Example Output:

          NAME           CLUSTER-DEFINITION   TERMINATION-POLICY   STATUS     AGE
          es-multinode                        Delete               Updating   24m
          es-multinode                        Delete               Running    24m
          es-multinode                        Delete               Running    24m
        2. Verify pod recreation:

          kubectl get pods -n demo -l app.kubernetes.io/instance=es-multinode

          Example Output:

          NAME                    READY   STATUS    RESTARTS   AGE
          es-multinode-dit-0      3/3     Running   0          24m
          es-multinode-dit-1      3/3     Running   0          24m
          es-multinode-dit-2      3/3     Running   0          24m
          es-multinode-master-0   3/3     Running   0          24m
          es-multinode-master-1   3/3     Running   0          24m
          es-multinode-master-2   3/3     Running   0          24m

        Restarting the Cluster

        Restart operations provide:

        • Pod recreation without full cluster stop
        • Component-level granularity
        • Minimal service disruption

        Use cases:

        • Configuration changes requiring restart
        • Resource refresh
        • Troubleshooting

        Check Components

        This Elasticsearch Cluster has two components. To get the list of components:

        kubectl get cluster -n demo es-multinode -oyaml | yq '.spec.componentSpecs[].name'

        Expected Output:

        dit
        master

        Restart the dit Component via OpsRequest API

        List specific components to be restarted:

        apiVersion: operations.kubeblocks.io/v1alpha1
        kind: OpsRequest
        metadata:
          name: es-multinode-restart-ops
          namespace: demo
        spec:
          clusterName: es-multinode
          type: Restart
          restart:
            - componentName: dit
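Because the restart list is component-scoped, restarting every component is a matter of listing them all. A sketch (the metadata name is illustrative):

```yaml
apiVersion: operations.kubeblocks.io/v1alpha1
kind: OpsRequest
metadata:
  name: es-multinode-restart-all-ops   # illustrative name
  namespace: demo
spec:
  clusterName: es-multinode
  type: Restart
  restart:                             # both components from this guide
    - componentName: dit
    - componentName: master
```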

        Verifying Restart Completion

        To verify a successful component restart:

        1. Track OpsRequest progress:

          kubectl get opsrequest es-multinode-restart-ops -n demo -w

          Example Output:

          NAME                       TYPE      CLUSTER        STATUS    PROGRESS   AGE
          es-multinode-restart-ops   Restart   es-multinode   Running   0/3        8s
          es-multinode-restart-ops   Restart   es-multinode   Running   1/3        59s
          es-multinode-restart-ops   Restart   es-multinode   Running   2/3        117s
          es-multinode-restart-ops   Restart   es-multinode   Running   3/3        2m55s
          es-multinode-restart-ops   Restart   es-multinode   Running   3/3        2m55s
          es-multinode-restart-ops   Restart   es-multinode   Succeed   3/3        2m55s
        2. Check pod status:

          kubectl get pods -n demo -l app.kubernetes.io/instance=es-multinode

          Note: Pods show new creation timestamps after the restart. Only pods belonging to the dit component have been restarted.

        Once the operation is complete, the cluster will return to the Running state.

        Summary

        In this guide, you learned how to:

        1. Stop an Elasticsearch Cluster to suspend operations while retaining persistent storage.
        2. Start a stopped cluster to bring it back online.
        3. Restart specific cluster components to recreate their Pods without stopping the entire cluster.

        By managing the lifecycle of your Elasticsearch Cluster, you can optimize resource utilization, reduce costs, and maintain flexibility in your Kubernetes environment. KubeBlocks provides a seamless way to perform these operations, ensuring high availability and minimal disruption.

        © 2025 ApeCloud PTE. Ltd.