  1. Prerequisites
    1. System Requirements
    2. Verify Kafka Add-on
  2. Deploy a Kafka Cluster
  3. Verify Cluster Status
  4. Access Kafka Cluster
  5. Stop the Kafka Cluster
  6. Start the Kafka Cluster
  7. Delete Kafka Cluster

Kafka Quickstart

This guide provides a comprehensive walkthrough for deploying and managing Kafka Clusters using the KubeBlocks Kafka Add-on, covering:

  • System prerequisites and add-on installation
  • Cluster creation and configuration
  • Operational management including start/stop procedures
  • Connection methods and cluster monitoring

Prerequisites

System Requirements

Before proceeding, verify your environment meets these requirements:

  • A functional Kubernetes cluster (v1.21+ recommended)
  • kubectl v1.21+ installed and configured with cluster access
  • Helm installed (installation guide)
  • KubeBlocks installed (installation guide)
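As a quick sanity check before proceeding, the minimal sketch below reports whether each required CLI tool is on your PATH (kbcli is optional, but used in later steps of this guide):

```shell
# Preflight check: report whether each required CLI tool is installed.
# kbcli is optional but used by some commands later in this guide.
missing=0
for tool in kubectl helm kbcli; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: found"
  else
    echo "$tool: MISSING"
    missing=$((missing + 1))
  fi
done
echo "$missing tool(s) missing"
```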

Verify Kafka Add-on

The Kafka Add-on is included with KubeBlocks by default. Check its status:

helm list -n kb-system | grep kafka
Example Output:
NAME             NAMESPACE   REVISION   UPDATED      STATUS     CHART
kb-addon-kafka   kb-system   1          2025-05-21   deployed   kafka-1.0.0

If the add-on isn't enabled, choose an installation method:

# Add Helm repo
helm repo add kubeblocks-addons https://apecloud.github.io/helm-charts
# For users in Mainland China, if GitHub is inaccessible or slow, use this alternative repo:
# helm repo add kubeblocks-addons https://jihulab.com/api/v4/projects/150246/packages/helm/stable

# Update helm repo
helm repo update

# Search available Add-on versions
helm search repo kubeblocks-addons/kafka --versions

# Install your desired version (replace <VERSION> with your chosen version)
helm upgrade -i kb-addon-kafka kubeblocks-addons/kafka --version <VERSION> -n kb-system
Alternatively, install via kbcli. First configure the add-on index:

# Add an index (kubeblocks is added by default)
kbcli addon index add kubeblocks https://github.com/apecloud/block-index.git

# Update the index
kbcli addon index update kubeblocks

# Update all indexes
kbcli addon index update --all

To search and install an addon:

# Search Add-on
kbcli addon search kafka

# Install Add-on with your desired version (replace <VERSION> with your chosen version)
kbcli addon install kafka --version <VERSION>

Example Output:

ADDON   VERSION   INDEX
kafka   0.9.0     kubeblocks
kafka   0.9.1     kubeblocks
kafka   1.0.0     kubeblocks

To enable or disable an addon:

# Enable Add-on
kbcli addon enable kafka

# Disable Add-on
kbcli addon disable kafka
NOTE

Version Compatibility

Always verify that the Kafka Add-on version matches your KubeBlocks major version to avoid compatibility issues.
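The comparison itself is simple string handling. As an illustrative sketch, the snippet below compares major versions; the two version values are placeholders you would read from `kbcli version` and `helm list -n kb-system`:

```shell
# Illustrative major-version check. The two values below are placeholders;
# substitute the versions reported by your own installation.
KB_VERSION="1.0.0"      # e.g. from `kbcli version`
ADDON_VERSION="1.0.0"   # e.g. from `helm list -n kb-system`

if [ "${KB_VERSION%%.*}" = "${ADDON_VERSION%%.*}" ]; then
  echo "major versions match"
else
  echo "major version mismatch: ${KB_VERSION%%.*} vs ${ADDON_VERSION%%.*}"
fi
```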

Deploy a Kafka Cluster

Deploy a basic Kafka Cluster with default settings:

kubectl apply -f https://raw.githubusercontent.com/apecloud/kubeblocks-addons/refs/heads/main/examples/kafka/cluster-separated.yaml

This creates:

  • A Kafka Cluster with 3 components: a Kafka controller with 1 replica, a Kafka broker with 1 replica, and a Kafka exporter with 1 replica
  • Default resource allocations (0.5 CPU, 0.5Gi memory)
  • 20Gi persistent storage
apiVersion: apps.kubeblocks.io/v1
kind: Cluster
metadata:
  name: kafka-separated-cluster
  namespace: demo
spec:
  # Specifies the behavior when a Cluster is deleted.
  # Valid options are: [DoNotTerminate, Delete, WipeOut] (`Halt` is deprecated since KB 0.9)
  # - `DoNotTerminate`: Prevents deletion of the Cluster. This policy ensures that all resources remain intact.
  # - `Delete`: Extends the `Halt` policy by also removing PVCs, leading to a thorough cleanup while removing all persistent data.
  # - `WipeOut`: An aggressive policy that deletes all Cluster resources, including volume snapshots and backups in external storage. This results in complete data removal and should be used cautiously, primarily in non-production environments to avoid irreversible data loss.
  terminationPolicy: Delete
  # Specifies the name of the ClusterDefinition to use when creating a Cluster.
  # Note: DO NOT UPDATE THIS FIELD
  # The value must be `kafka` to create a Kafka Cluster
  clusterDef: kafka
  # Specifies the name of the ClusterTopology to be used when creating the Cluster.
  # - combined: combined Kafka controller (KRaft) and broker in one Component
  # - combined_monitor: combined mode with monitor component
  # - separated: separated KRaft and Broker Components
  # - separated_monitor: separated mode with monitor component
  # Valid options are: [combined, combined_monitor, separated, separated_monitor]
  topology: separated_monitor
  # Specifies a list of ClusterComponentSpec objects used to define the
  # individual Components that make up a Cluster.
  # This field allows for detailed configuration of each Component within the Cluster
  componentSpecs:
    - name: kafka-broker
      replicas: 1
      resources:
        limits:
          cpu: "0.5"
          memory: "0.5Gi"
        requests:
          cpu: "0.5"
          memory: "0.5Gi"
      env:
        - name: KB_KAFKA_BROKER_HEAP # use this ENV to set BROKER HEAP
          value: "-XshowSettings:vm -XX:MaxRAMPercentage=100 -Ddepth=64"
        - name: KB_KAFKA_CONTROLLER_HEAP # use this ENV to set CONTROLLER HEAP
          value: "-XshowSettings:vm -XX:MaxRAMPercentage=100 -Ddepth=64"
        # Whether to enable direct Pod IP address access mode.
        # - If set to 'true', Kafka clients will connect to Brokers using the Pod IP address directly.
        # - If set to 'false', Kafka clients will connect to Brokers using the Headless Service's FQDN.
        - name: KB_BROKER_DIRECT_POD_ACCESS
          value: "true"
      volumeClaimTemplates:
        - name: data
          spec:
            storageClassName: ""
            accessModes:
              - ReadWriteOnce
            resources:
              requests:
                storage: 20Gi
        - name: metadata
          spec:
            storageClassName: ""
            accessModes:
              - ReadWriteOnce
            resources:
              requests:
                storage: 1Gi
    - name: kafka-controller
      replicas: 1
      resources:
        limits:
          cpu: "0.5"
          memory: "0.5Gi"
        requests:
          cpu: "0.5"
          memory: "0.5Gi"
      volumeClaimTemplates:
        - name: metadata
          spec:
            storageClassName: ""
            accessModes:
              - ReadWriteOnce
            resources:
              requests:
                storage: 1Gi
    - name: kafka-exporter
      replicas: 1
      resources:
        limits:
          cpu: "0.5"
          memory: "1Gi"
        requests:
          cpu: "0.1"
          memory: "0.2Gi"

For more API fields and descriptions, refer to the API Reference.

Verify Cluster Status

Confirm successful deployment by checking:

  1. Cluster phase is Running
  2. All pods are operational

Check status using either method:

kubectl get cluster kafka-separated-cluster -n demo -w

NAME                      CLUSTER-DEFINITION   TERMINATION-POLICY   STATUS    AGE
kafka-separated-cluster   kafka                Delete               Running   2m48s

kubectl get pods -l app.kubernetes.io/instance=kafka-separated-cluster -n demo

NAME                                         READY   STATUS    RESTARTS   AGE
kafka-separated-cluster-kafka-broker-0       2/2     Running   0          2m33s
kafka-separated-cluster-kafka-controller-0   2/2     Running   0          2m58s
kafka-separated-cluster-kafka-exporter-0     1/1     Running   0          2m9s

With kbcli installed, you can view comprehensive cluster information:

kbcli cluster describe kafka-separated-cluster -n demo

Name: kafka-separated-cluster	 Created Time: May 19,2025 16:56 UTC+0800
NAMESPACE   CLUSTER-DEFINITION   TOPOLOGY            STATUS    TERMINATION-POLICY
demo        kafka                separated_monitor   Running   Delete

Endpoints:
COMPONENT      INTERNAL                                                                                 EXTERNAL
kafka-broker   kafka-separated-cluster-kafka-broker-advertised-listener-0.demo.svc.cluster.local:9092   <none>

Topology:
COMPONENT          SERVICE-VERSION   INSTANCE                                     ROLE     STATUS    AZ       NODE    CREATED-TIME
kafka-broker       3.3.2             kafka-separated-cluster-kafka-broker-0       <none>   Running   zone-x   x.y.z   May 19,2025 16:57 UTC+0800
kafka-controller   3.3.2             kafka-separated-cluster-kafka-controller-0   <none>   Running   zone-x   x.y.z   May 19,2025 16:56 UTC+0800
kafka-exporter     1.6.0             kafka-separated-cluster-kafka-exporter-0     <none>   Running   zone-x   x.y.z   May 19,2025 16:57 UTC+0800

Resources Allocation:
COMPONENT          INSTANCE-TEMPLATE   CPU(REQUEST/LIMIT)   MEMORY(REQUEST/LIMIT)   STORAGE-SIZE             STORAGE-CLASS
kafka-controller                       500m / 500m          512Mi / 512Mi           metadata:1Gi             <none>
kafka-broker                           500m / 500m          512Mi / 512Mi           data:20Gi metadata:1Gi
kafka-exporter                         100m / 500m          214748364800m / 1Gi     <none>                   <none>

Images:
COMPONENT          COMPONENT-DEFINITION     IMAGE
kafka-controller   kafka-controller-1.0.0   docker.io/bitnami/kafka:3.3.2-debian-11-r54
                                            docker.io/bitnami/jmx-exporter:0.18.0-debian-11-r20
kafka-broker       kafka-broker-1.0.0       docker.io/bitnami/kafka:3.3.2-debian-11-r54
                                            docker.io/bitnami/jmx-exporter:0.18.0-debian-11-r20
kafka-exporter     kafka-exporter-1.0.0     docker.io/bitnami/kafka-exporter:1.6.0-debian-11-r67

Show cluster events: kbcli cluster list-events -n demo kafka-separated-cluster

Access Kafka Cluster

Step 1. Get the address of the Kafka service

kubectl get svc -l app.kubernetes.io/instance=kafka-separated-cluster -n demo

Expected Output:

NAME                                                         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
kafka-separated-cluster-kafka-broker-advertised-listener-0   ClusterIP   10.96.131.175   <none>        9092/TCP   5m8s

The service name is kafka-separated-cluster-kafka-broker-advertised-listener-0 in namespace demo.
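The client commands in the next step embed this address. As a small sketch, it can also be derived from the cluster name and namespace; the naming convention here is an assumption based on this example's output:

```shell
# Derive the in-cluster bootstrap address from the cluster name and namespace.
# This mirrors the service name shown above; the naming pattern is an
# assumption drawn from this example, not a guaranteed API contract.
CLUSTER=kafka-separated-cluster
NS=demo
BOOTSTRAP="${CLUSTER}-kafka-broker-advertised-listener-0.${NS}:9092"
echo "$BOOTSTRAP"
```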

Step 2. Connect to the Kafka cluster using the service address and port

  1. Start the client pods.
kubectl run kafka-producer --restart='Never' --image docker.io/bitnami/kafka:3.3.2-debian-11-r54 --command -- sleep infinity
kubectl run kafka-consumer --restart='Never' --image docker.io/bitnami/kafka:3.3.2-debian-11-r54 --command -- sleep infinity
  2. Log in to kafka-producer.
kubectl exec -ti kafka-producer -- bash
  3. Create a topic.
kafka-topics.sh --create --topic quickstart-events --bootstrap-server kafka-separated-cluster-kafka-broker-advertised-listener-0.demo:9092
  4. Start a producer.
kafka-console-producer.sh --topic quickstart-events --bootstrap-server kafka-separated-cluster-kafka-broker-advertised-listener-0.demo:9092
  5. Enter "Hello, KubeBlocks" and press Enter.
  6. Open a new terminal session and log in to kafka-consumer.
kubectl exec -ti kafka-consumer -- bash
  7. Start a consumer on the same topic, reading messages from the beginning.
kafka-console-consumer.sh --topic quickstart-events --from-beginning --bootstrap-server kafka-separated-cluster-kafka-broker-advertised-listener-0.demo:9092

The consumer prints 'Hello, KubeBlocks'.

Stop the Kafka Cluster

Stopping a cluster temporarily suspends operations while preserving all data and configuration:

Key Effects:

  • Compute resources (Pods) are released
  • Persistent storage (PVCs) remains intact
  • Service definitions are maintained
  • Cluster configuration is preserved
  • Operational costs are reduced
kubectl apply -f https://raw.githubusercontent.com/apecloud/kubeblocks-addons/refs/heads/main/examples/kafka/stop.yaml
apiVersion: operations.kubeblocks.io/v1alpha1
kind: OpsRequest
metadata:
  name: kafka-stop
  namespace: demo
spec:
  clusterName: kafka-separated-cluster
  type: Stop

Alternatively, stop by setting spec.componentSpecs.stop to true:

kubectl patch cluster kafka-separated-cluster -n demo --type='json' -p='[
  { "op": "add", "path": "/spec/componentSpecs/0/stop", "value": true },
  { "op": "add", "path": "/spec/componentSpecs/1/stop", "value": true },
  { "op": "add", "path": "/spec/componentSpecs/2/stop", "value": true }
]'
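If your topology has a different number of components (for example, a combined topology), the same patch can be generated for any count. A hedged sketch, assuming three entries in spec.componentSpecs as in this example:

```shell
# Build the JSON patch that sets `stop: true` on every componentSpec.
# N is the number of entries in spec.componentSpecs (3 in this example cluster).
N=3
patch="["
i=0
while [ "$i" -lt "$N" ]; do
  patch="${patch}{\"op\": \"add\", \"path\": \"/spec/componentSpecs/${i}/stop\", \"value\": true}"
  i=$((i + 1))
  if [ "$i" -lt "$N" ]; then patch="${patch}, "; fi
done
patch="${patch}]"
echo "$patch"
# Then apply it with:
#   kubectl patch cluster kafka-separated-cluster -n demo --type='json' -p="$patch"
```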

Start the Kafka Cluster

Restarting a stopped cluster resumes operations with all data and configuration intact.

Key Effects:

  • Compute resources (Pods) are recreated
  • Services become available again
  • Cluster returns to previous state
apiVersion: operations.kubeblocks.io/v1alpha1
kind: OpsRequest
metadata:
  name: kafka-start
  namespace: demo
spec:
  clusterName: kafka-separated-cluster
  type: Start

Alternatively, restart by removing the stop field (or setting it to false) on each component in spec.componentSpecs:

kubectl patch cluster kafka-separated-cluster -n demo --type='json' -p='[
  { "op": "remove", "path": "/spec/componentSpecs/0/stop" },
  { "op": "remove", "path": "/spec/componentSpecs/1/stop" },
  { "op": "remove", "path": "/spec/componentSpecs/2/stop" }
]'

Delete Kafka Cluster

Choose carefully based on your data retention needs:

POLICY           RESOURCES REMOVED   DATA REMOVED   RECOMMENDED FOR
DoNotTerminate   None                None           Critical production clusters
Delete           All resources       PVCs deleted   Non-critical environments
WipeOut          All resources       Everything*    Test environments only

*Includes snapshots and backups in external storage
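The policy semantics in the table above can be summarized programmatically. A small illustrative helper (not a KubeBlocks command, purely for reference):

```shell
# Illustrative helper mirroring the terminationPolicy table above.
# Not part of KubeBlocks; for local reference only.
describe_policy() {
  case "$1" in
    DoNotTerminate) echo "deletion blocked; all resources remain" ;;
    Delete)         echo "cluster resources and PVCs removed" ;;
    WipeOut)        echo "everything removed, including snapshots and external backups" ;;
    *)              echo "unknown policy: $1"; return 1 ;;
  esac
}

describe_policy WipeOut
```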

Pre-Deletion Checklist:

  1. Verify no applications are using the cluster
  2. Ensure required backups exist
  3. Confirm proper terminationPolicy is set
  4. Check for dependent resources

For test environments, use this complete cleanup:

kubectl patch cluster kafka-separated-cluster -p '{"spec":{"terminationPolicy":"WipeOut"}}' --type="merge" -n demo
kubectl delete cluster kafka-separated-cluster -n demo

© 2025 ApeCloud PTE. Ltd.