Volume Expansion for ZooKeeper Clusters with KubeBlocks

This guide demonstrates how to expand the persistent storage volumes of a ZooKeeper cluster managed by KubeBlocks.

NOTE

Volume expansion requires a StorageClass that allows it (allowVolumeExpansion: true).
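For reference, an expandable class sets the flag at the top level of the StorageClass object; a minimal sketch (the name and provisioner here are placeholders, not from this guide):

```yaml
# Illustrative StorageClass; name and provisioner are placeholders.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: expandable-sc
provisioner: ebs.csi.aws.com
allowVolumeExpansion: true   # must be true for PVC resize
```

You can check an existing class with kubectl get storageclass <name> -o jsonpath='{.allowVolumeExpansion}'.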

Prerequisites

Before proceeding, verify your environment meets these requirements:

• A functional Kubernetes cluster (v1.21+ recommended)
• kubectl v1.21+ installed and configured with cluster access
• Helm installed (installation guide)
• KubeBlocks installed (installation guide)
• ZooKeeper Add-on installed and a ZooKeeper cluster running (see Quickstart)

Expand Volume

ZooKeeper uses two persistent volume types:

• data — stores ZooKeeper data snapshots
• snapshot-log — stores transaction logs
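Each entry under volumeExpansion targets one component, and volumeClaimTemplates lists the claims to grow. Both volumes can be expanded in a single OpsRequest; a sketch (the name and the 25Gi snapshot-log size are illustrative, not from this guide):

```yaml
# Sketch: one OpsRequest growing both claim templates.
# The metadata.name and the 25Gi figure are illustrative.
apiVersion: operations.kubeblocks.io/v1alpha1
kind: OpsRequest
metadata:
  name: zookeeper-expand-both
  namespace: demo
spec:
  clusterName: zookeeper-cluster
  type: VolumeExpansion
  volumeExpansion:
  - componentName: zookeeper
    volumeClaimTemplates:
    - name: data
      storage: 30Gi
    - name: snapshot-log
      storage: 25Gi
```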

Expand the data volume from 20Gi to 30Gi:

apiVersion: operations.kubeblocks.io/v1alpha1
kind: OpsRequest
metadata:
  name: zookeeper-volumeexpansion
  namespace: demo
spec:
  clusterName: zookeeper-cluster
  type: VolumeExpansion
  volumeExpansion:
  - componentName: zookeeper
    volumeClaimTemplates:
    - name: data
      storage: 30Gi

Apply it:

kubectl apply -f https://raw.githubusercontent.com/apecloud/kubeblocks-addons/refs/heads/main/examples/zookeeper/volumeexpand.yaml

Monitor progress:

kubectl get ops zookeeper-volumeexpansion -n demo -w

Example Output:

NAME                        TYPE              CLUSTER             STATUS    PROGRESS   AGE
zookeeper-volumeexpansion   VolumeExpansion   zookeeper-cluster   Running   0/3        10s
zookeeper-volumeexpansion   VolumeExpansion   zookeeper-cluster   Succeed   3/3        64s

Alternatively, update the storage request directly in the Cluster resource:

apiVersion: apps.kubeblocks.io/v1
kind: Cluster
metadata:
  name: zookeeper-cluster
  namespace: demo
spec:
  terminationPolicy: Delete
  componentSpecs:
  - name: zookeeper
    componentDef: zookeeper
    serviceVersion: 3.9.2
    replicas: 3
    resources:
      requests:
        cpu: "0.5"
        memory: "0.5Gi"
      limits:
        cpu: "0.5"
        memory: "0.5Gi"
    volumeClaimTemplates:
    - name: data
      spec:
        storageClassName: ""
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 30Gi  # Updated from 20Gi
    - name: snapshot-log
      spec:
        storageClassName: ""
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 20Gi

Verify

kubectl get pvc -n demo -l app.kubernetes.io/instance=zookeeper-cluster

Example Output:

NAME                                         STATUS   VOLUME    CAPACITY   ACCESS MODES   STORAGECLASS    AGE
data-zookeeper-cluster-zookeeper-0           Bound    pvc-xxx   30Gi       RWO            kb-default-sc   14m
data-zookeeper-cluster-zookeeper-1           Bound    pvc-xxx   30Gi       RWO            kb-default-sc   13m
data-zookeeper-cluster-zookeeper-2           Bound    pvc-xxx   30Gi       RWO            kb-default-sc   13m
snapshot-log-zookeeper-cluster-zookeeper-0   Bound    pvc-xxx   20Gi       RWO            kb-default-sc   14m
snapshot-log-zookeeper-cluster-zookeeper-1   Bound    pvc-xxx   20Gi       RWO            kb-default-sc   13m
snapshot-log-zookeeper-cluster-zookeeper-2   Bound    pvc-xxx   20Gi       RWO            kb-default-sc   13m
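To spot any data PVC that has not yet reached the requested size, you can filter this listing; a sketch using jsonpath plus awk (the label selector and 30Gi target mirror this guide's example — adjust both for your cluster):

```shell
# Flag any data PVC whose reported capacity is not yet 30Gi.
# Selector and target size are taken from this guide's example.
kubectl get pvc -n demo -l app.kubernetes.io/instance=zookeeper-cluster \
  -o jsonpath='{range .items[*]}{.metadata.name}{" "}{.status.capacity.storage}{"\n"}{end}' \
  | awk '$1 ~ /^data-/ && $2 != "30Gi" {print $1 " is still " $2}'
```

No output means every data PVC has reached the target capacity.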

Cleanup

kubectl delete cluster zookeeper-cluster -n demo
kubectl delete ns demo
