Version: Preview

Scale for a Pulsar cluster

Vertical scaling

You can vertically scale a cluster by changing its resource requests and limits (CPU and memory). For example, if you need to change the resource class from 1C2G to 2C4G, vertical scaling is what you need.

Before you start

Check whether the cluster status is Running. Otherwise, the following operations may fail.

kbcli cluster list pulsar-cluster

Steps

  1. Configure the parameters --components, --memory, and --cpu and run the command.

    kbcli cluster vscale pulsar-cluster --cpu=3 --memory=10Gi --components=broker,bookies  
    • --components specifies the names of the components to scale vertically (comma-separated).
    • --memory sets both the memory request and limit for the components.
    • --cpu sets both the CPU request and limit for the components.
  2. Check the cluster status to validate the vertical scaling.

    kbcli cluster list pulsar-cluster
    • STATUS=Updating: the vertical scaling is in progress.

    • STATUS=Running: the vertical scaling operation has been applied.

    • STATUS=Abnormal: the vertical scaling is abnormal. The reason may be that the number of normal instances is less than the total number of instances, or that the leader instance is running properly while other instances are abnormal.

      To solve the problem, first check manually whether the error is caused by insufficient resources. If the Kubernetes cluster supports AutoScaling, the system recovers automatically once enough resources become available. Otherwise, provision enough resources manually and troubleshoot with the kubectl describe command.

      note

      Vertical scaling does not synchronize parameters related to CPU and memory; you need to manually trigger a configuration OpsRequest to change these parameters accordingly. Refer to Configuration for instructions.

  3. Check whether the corresponding resources change.

    kbcli cluster describe pulsar-cluster
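As the note above mentions, vertical scaling does not update Pulsar's own CPU/memory-related parameters, which have to be changed through a configuration OpsRequest. A minimal sketch with kbcli follows; PULSAR_MEM is a hypothetical example parameter, and the broker component name is an assumption — check your cluster's actual components and parameter list first:

```shell
# List the tunable parameters exposed by the broker component
# (component name is an assumption; verify it with
# `kbcli cluster describe pulsar-cluster`).
kbcli cluster explain-config pulsar-cluster --components=broker

# Issue a configuration OpsRequest. PULSAR_MEM is a hypothetical example;
# replace it with the memory-related parameter your engine actually exposes.
kbcli cluster configure pulsar-cluster --components=broker --set=PULSAR_MEM="-Xms2g -Xmx4g"
```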

Horizontal scaling

Horizontal scaling changes the number of pods. For example, you can apply horizontal scaling to scale the pods up from three to five. The scaling process includes the backup and restoration of data.

Before you start

  • It is recommended to keep three ZooKeeper nodes and not scale them; other components can be scaled horizontally, individually or several at a time.
  • Scale the bookie nodes with caution. Data replication depends on the EnsembleSize, Write Quorum, and Ack Quorum configurations, so careless scaling may cause data loss. Check the official Pulsar documentation for detailed information.
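Before scaling bookies, it may help to confirm which replication settings are in effect. A minimal check with pulsar-admin, assuming it can reach the cluster (for example, from inside a broker pod) and using the example namespace public/default:

```shell
# Show the persistence policies (bookkeeper ensemble / write quorum / ack quorum)
# for a namespace; "public/default" is an example namespace name.
pulsar-admin namespaces get-persistence public/default

# Rule of thumb: the number of available bookies must stay >= the ensemble size,
# otherwise new ledgers cannot be created and writes will fail.
```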

Steps

  1. Change configuration.

    Configure the parameters --components and --replicas, and run the command.

    kbcli cluster hscale pulsar-cluster --replicas=5 --components=broker,bookies    
    • --components specifies the names of the components to scale horizontally (comma-separated).
    • --replicas sets the number of replicas for the specified components.
  2. Validate the horizontal scaling operation.

    Check the cluster STATUS to identify the horizontal scaling status.

    kubectl get ops
    >
    NAME                                     TYPE                CLUSTER          STATUS    PROGRESS   AGE
    pulsar-cluster-horizontalscaling-9lfvc   HorizontalScaling   pulsar-cluster   Succeed   3/3        8m49s
  3. Check whether the corresponding resources change.

    kbcli cluster describe pulsar-cluster

Handle the snapshot exception

If STATUS=ConditionsError occurs during the horizontal scaling process, you can find the cause in cluster.status.conditions[].message for troubleshooting.
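The conditions can be inspected with standard kubectl commands; a quick sketch, assuming the KubeBlocks Cluster object is named pulsar-cluster:

```shell
# Dump the status conditions of the KubeBlocks Cluster object.
kubectl get cluster pulsar-cluster -o yaml | grep -A 8 "conditions:"

# Or get a human-readable summary, including recent events.
kubectl describe cluster pulsar-cluster
```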

In the example below, a snapshot exception occurs.

Status:
  conditions:
  - lastTransitionTime: "2023-02-08T04:20:26Z"
    message: VolumeSnapshot/pulsar-cluster-pulsar-scaling-dbqgp: Failed to set default snapshot
      class with error cannot find default snapshot class
    reason: ApplyResourcesFailed
    status: "False"
    type: ApplyResources

Reason

This exception occurs because no VolumeSnapshotClass is configured. Configuring a VolumeSnapshotClass fixes the root cause, but the horizontal scaling still cannot continue, because the failed backup (the volumesnapshot is generated by the backup) and the previously generated volumesnapshot still exist. Delete these two stale resources, and KubeBlocks then re-generates new ones.

Steps:

  1. Configure the VolumeSnapshotClass by running the command below.

    kubectl create -f - <<EOF
    apiVersion: snapshot.storage.k8s.io/v1
    kind: VolumeSnapshotClass
    metadata:
      name: csi-aws-vsc
      annotations:
        snapshot.storage.kubernetes.io/is-default-class: "true"
    driver: ebs.csi.aws.com
    deletionPolicy: Delete
    EOF
  2. Delete the failed backup and volumesnapshot resources (the volumesnapshot is generated by the backup).

    kubectl delete backup -l app.kubernetes.io/instance=pulsar-cluster

    kubectl delete volumesnapshot -l app.kubernetes.io/instance=pulsar-cluster

Result

The horizontal scaling continues after the backup and volumesnapshot are deleted, and the cluster returns to Running status.