Version: release-0.9

Scale a Kafka cluster

You can scale a Kafka cluster in two ways: vertical scaling and horizontal scaling.

Vertical scaling

You can vertically scale a cluster by changing resource requirements and limits (CPU and memory). For example, you can change the resource class from 1C2G to 2C4G by performing vertical scaling.

Before you start

Check whether the cluster status is Running. Otherwise, the following operations may fail.

kbcli cluster list mycluster -n demo
>
NAME        NAMESPACE   CLUSTER-DEFINITION   VERSION       TERMINATION-POLICY   STATUS    CREATED-TIME
mycluster   demo        kafka                kafka-3.3.2   Delete               Running   Sep 27,2024 15:15 UTC+0800

Steps

  1. Configure the parameters --components, --memory, and --cpu and run the command.

     kbcli cluster vscale mycluster -n demo --components="broker" --memory="4Gi" --cpu="2" 
    • --components value can be broker or controller.
      • broker: all nodes in the combined mode, or all broker nodes in the separated mode.
      • controller: the controller nodes in the separated mode.
    • --memory sets both the request and limit for the component's memory.
    • --cpu sets both the request and limit for the component's CPU.
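     If you prefer to apply a manifest instead of using kbcli, the same change can be expressed as an OpsRequest of type VerticalScaling. The sketch below is an assumption based on the apps.kubeblocks.io/v1alpha1 API; the OpsRequest name is hypothetical and field names may differ slightly between KubeBlocks releases.

     kubectl apply -f - <<EOF
     apiVersion: apps.kubeblocks.io/v1alpha1
     kind: OpsRequest
     metadata:
       name: mycluster-vscale-ops    # hypothetical name
       namespace: demo
     spec:
       clusterName: mycluster        # older releases use clusterRef instead
       type: VerticalScaling
       verticalScaling:
       - componentName: broker
         requests:
           cpu: "2"
           memory: 4Gi
         limits:
           cpu: "2"
           memory: 4Gi
     EOF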
  2. Validate the vertical scaling operation.

    • View the OpsRequest progress.

      KubeBlocks outputs a command automatically for you to view the OpsRequest progress. The output includes the status of this OpsRequest and Pods. When the status is Succeed, this OpsRequest is completed.

      kbcli cluster describe-ops mycluster-verticalscaling-g67k9 -n demo
    • Check the cluster status.

      kbcli cluster list mycluster -n demo
      >
      NAME        NAMESPACE   CLUSTER-DEFINITION   VERSION       TERMINATION-POLICY   STATUS     CREATED-TIME
      mycluster   demo        kafka                kafka-3.3.2   Delete               Updating   Sep 27,2024 15:15 UTC+0800
    • STATUS=Updating: the vertical scaling is in progress.

    • STATUS=Running: the vertical scaling has been applied.

    • STATUS=Abnormal: the vertical scaling is abnormal. A possible reason is that the number of normal instances is less than the total number of instances, or that the leader instance is running properly while other instances are abnormal.

      To solve the problem, first check manually whether this error is caused by insufficient resources. If the Kubernetes cluster supports AutoScaling, the system recovers automatically once enough resources are available. Otherwise, provision enough resources and troubleshoot with the kubectl describe command.
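
      For example, a quick check for Pods stuck due to insufficient resources (the Pod name below is hypothetical and follows the usual <cluster>-<component>-<ordinal> pattern):

      kubectl get pods -n demo -l app.kubernetes.io/instance=mycluster
      kubectl describe pod mycluster-broker-0 -n demo    # look for FailedScheduling or "Insufficient cpu/memory" events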

note

Vertical scaling does not synchronize parameters related to CPU and memory; you need to manually trigger a configuration OpsRequest to change these parameters accordingly. Refer to Configuration for instructions.
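
For example, assuming placeholder parameter names and values, a configuration change can be submitted with kbcli (the exact parameter names depend on the Kafka configuration template):

kbcli cluster configure mycluster -n demo --set <parameter>=<value>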

  3. After the OpsRequest status is Succeed or the cluster status is Running again, check whether the corresponding resources have changed.

    kbcli cluster describe mycluster -n demo
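
    To double-check at the Kubernetes level, you can also print the Pod resource settings with a standard kubectl query; the label selector below reuses the app.kubernetes.io/instance label shown later on this page.

    kubectl get pods -n demo -l app.kubernetes.io/instance=mycluster \
      -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.containers[0].resources}{"\n"}{end}'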

Horizontal scaling

Horizontal scaling changes the number of pods. For example, you can scale out replicas from three to five.

From v0.9.0, besides replicas, KubeBlocks also supports scaling instances in and out. Refer to the Horizontal Scale tutorial for more details and examples.

Before you start

  • Check whether the cluster STATUS is Running. Otherwise, the following operations may fail.

    kbcli cluster list mycluster -n demo
    >
    NAME        NAMESPACE   CLUSTER-DEFINITION   VERSION       TERMINATION-POLICY   STATUS    CREATED-TIME
    mycluster   demo        kafka                kafka-3.3.2   Delete               Running   Sep 27,2024 15:15 UTC+0800
  • Performing horizontal scaling on the controller nodes is not recommended, whether the cluster runs in combined mode or separated mode.

  • When scaling in, make sure you know how topic partitions are distributed across brokers. If a topic has a replication factor of 1, scaling in the brokers may cause data loss. You can check the replication factor of your topics with the command shown below.
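
    A sketch of checking the replication factor before scaling in, assuming a broker Pod named mycluster-broker-0 and that the Kafka CLI scripts are on the PATH inside the container:

    kubectl exec -it mycluster-broker-0 -n demo -- kafka-topics.sh --bootstrap-server localhost:9092 --describe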

Steps

  1. Configure the parameters --components and --replicas, and run the command.

    kbcli cluster hscale mycluster -n demo --components="broker" --replicas=3
    • --components specifies the name of the component to scale horizontally.
    • --replicas specifies the target number of replicas for the specified component. Adjust the number based on your needs to scale replicas in or out.
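
    A declarative alternative is an OpsRequest of type HorizontalScaling. This is a sketch under the same assumptions as the vertical scaling example above (apps.kubeblocks.io/v1alpha1 API, hypothetical OpsRequest name):

    kubectl apply -f - <<EOF
    apiVersion: apps.kubeblocks.io/v1alpha1
    kind: OpsRequest
    metadata:
      name: mycluster-hscale-ops    # hypothetical name
      namespace: demo
    spec:
      clusterName: mycluster        # older releases use clusterRef instead
      type: HorizontalScaling
      horizontalScaling:
      - componentName: broker
        replicas: 3
    EOF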
  2. Validate the horizontal scaling operation.

    • View the OpsRequest progress.

      KubeBlocks outputs a command automatically for you to view the OpsRequest progress. The output includes the status of this OpsRequest and Pods. When the status is Succeed, this OpsRequest is completed.

      kbcli cluster describe-ops mycluster-horizontalscaling-ffp9p -n demo
    • View the cluster status.

      kbcli cluster list mycluster -n demo
      • STATUS=Updating: horizontal scaling is in progress.
      • STATUS=Running: horizontal scaling has been applied.
  3. Check whether the corresponding resources have changed.

    kbcli cluster describe mycluster -n demo
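
    You can also confirm the Pod count with a plain kubectl query:

    kubectl get pods -n demo -l app.kubernetes.io/instance=mycluster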

Handle the snapshot exception

If STATUS=ConditionsError occurs during the horizontal scaling process, you can find the cause in cluster.status.conditions.message for troubleshooting.
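
For example, you can print the cluster conditions with kubectl:

kubectl get cluster mycluster -n demo -o yaml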

In the example below, a snapshot exception occurs.

Status:
  conditions:
  - lastTransitionTime: "2023-02-08T04:20:26Z"
    message: VolumeSnapshot/mycluster-kafka-scaling-dbqgp: Failed to set default snapshot
      class with error cannot find default snapshot class
    reason: ApplyResourcesFailed
    status: "False"
    type: ApplyResources

Reason

This exception occurs because the VolumeSnapshotClass is not configured. The exception can be fixed by configuring a VolumeSnapshotClass, but the horizontal scaling still cannot continue, because the failed backup (the volumesnapshot is generated by the backup) and the volumesnapshot generated earlier still exist. Delete these two resources, and KubeBlocks then re-generates new ones.

Steps:

  1. Configure the VolumeSnapshotClass by running the command below.

    kubectl create -f - <<EOF
    apiVersion: snapshot.storage.k8s.io/v1
    kind: VolumeSnapshotClass
    metadata:
      name: csi-aws-vsc
      annotations:
        snapshot.storage.kubernetes.io/is-default-class: "true"
    driver: ebs.csi.aws.com
    deletionPolicy: Delete
    EOF
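
    To confirm that the class exists and is marked as the default, list the VolumeSnapshotClasses:

    kubectl get volumesnapshotclass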
  2. Delete the failed backup (the volumesnapshot is generated by the backup) and the volumesnapshot resources.

    kubectl delete backup -l app.kubernetes.io/instance=mycluster -n demo

    kubectl delete volumesnapshot -l app.kubernetes.io/instance=mycluster -n demo
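
    To verify that both resources are gone before the scaling resumes:

    kubectl get backup,volumesnapshot -n demo -l app.kubernetes.io/instance=mycluster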

Result

The horizontal scaling continues after the backup and volumesnapshot are deleted, and the cluster returns to the Running status.