Create a Kafka cluster

This document shows how to create a Kafka cluster.

Before you start

  • Install kbcli if you want to create a Kafka cluster with kbcli.

  • Install KubeBlocks.

  • Make sure the Kafka Addon is enabled by running kbcli addon list. If this Addon is not enabled, enable it first (see the example after the output below).

    kubectl get addons.extensions.kubeblocks.io kafka
    >
    NAME    TYPE   VERSION   PROVIDER   STATUS    AGE
    kafka   Helm                        Enabled   13m
    
    kbcli addon list
    >
    NAME                           TYPE   STATUS     EXTRAS         AUTO-INSTALL
    ...
    kafka                          Helm   Enabled                   true
    ...
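
    If the Addon is not enabled, you can enable it with kbcli before moving on, for example:

    kbcli addon enable kafka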
    
  • To keep things isolated, create a separate namespace called demo to use throughout this tutorial.

    kubectl create namespace demo
    
NOTE
  • KubeBlocks integrates Kafka v3.3.2 and runs it in KRaft mode.
  • Running a KRaft cluster in combined mode is not recommended for production environments.
  • The suggested number of controllers ranges from 3 to 5, balancing complexity and availability.

Create a Kafka cluster

  1. Create a Kafka cluster. If you only have one node for deploying a cluster with multiple replicas, set spec.affinity.topologyKeys to null (see the fragment after the combined-mode example below). For a production environment, however, deploying all replicas on one node is not recommended, as it reduces cluster availability.

    • Create a Kafka cluster in combined mode.

      # create kafka in combined mode
      kubectl apply -f - <<EOF
      apiVersion: apps.kubeblocks.io/v1alpha1
      kind: Cluster
      metadata:
        name: mycluster
        namespace: demo
        annotations:
          "kubeblocks.io/extra-env": '{"KB_KAFKA_ENABLE_SASL":"false","KB_KAFKA_BROKER_HEAP":"-XshowSettings:vm -XX:MaxRAMPercentage=100 -Ddepth=64","KB_KAFKA_CONTROLLER_HEAP":"-XshowSettings:vm -XX:MaxRAMPercentage=100 -Ddepth=64","KB_KAFKA_PUBLIC_ACCESS":"false"}'
      spec:
        terminationPolicy: Delete
        componentSpecs:
        - name: broker
          componentDef: kafka-combine
          tls: false
          replicas: 1
          serviceVersion: 3.3.2
          services:
          affinity:
            podAntiAffinity: Preferred
            topologyKeys:
            - kubernetes.io/hostname
            tenancy: SharedNode
          tolerations:
          - key: kb-data
            operator: Equal
            value: 'true'
            effect: NoSchedule
          resources:
            limits:
              cpu: '0.5'
              memory: 0.5Gi
            requests:
              cpu: '0.5'
              memory: 0.5Gi
          volumeClaimTemplates:
          - name: data
            spec:
              accessModes:
              - ReadWriteOnce
              resources:
                requests:
                  storage: 20Gi
          - name: metadata
            spec:
              storageClassName: null
              accessModes:
              - ReadWriteOnce
              resources:
                requests:
                  storage: 20Gi
        - name: metrics-exp
          componentDef: kafka-exporter
          replicas: 1
          resources:
            limits:
              cpu: '0.5'
              memory: 0.5Gi
            requests:
              cpu: '0.5'
              memory: 0.5Gi
      EOF
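
      If you are deploying on a single node, as mentioned above, you can relax the anti-affinity constraint by setting topologyKeys to null. A minimal sketch of the affinity section for that case (not suitable for production):

      affinity:
        podAntiAffinity: Preferred
        topologyKeys: null
        tenancy: SharedNode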
      
    • Create a Kafka cluster in separated mode.

      # Create kafka cluster in separated mode
      kubectl apply -f - <<EOF
      apiVersion: apps.kubeblocks.io/v1alpha1
      kind: Cluster
      metadata:
        name: mycluster
        namespace: demo
        annotations:
          "kubeblocks.io/extra-env": '{"KB_KAFKA_ENABLE_SASL":"false","KB_KAFKA_BROKER_HEAP":"-XshowSettings:vm -XX:MaxRAMPercentage=100 -Ddepth=64","KB_KAFKA_CONTROLLER_HEAP":"-XshowSettings:vm -XX:MaxRAMPercentage=100 -Ddepth=64","KB_KAFKA_PUBLIC_ACCESS":"false"}'
      spec:
        terminationPolicy: Delete
        componentSpecs:
        - name: broker
          componentDef: kafka-broker
          tls: false
          replicas: 1
          affinity:
            podAntiAffinity: Preferred
            topologyKeys:
            - kubernetes.io/hostname
            tenancy: SharedNode
          tolerations:
          - key: kb-data
            operator: Equal
            value: 'true'
            effect: NoSchedule
          resources:
            limits:
              cpu: '0.5'
              memory: 0.5Gi
            requests:
              cpu: '0.5'
              memory: 0.5Gi
          volumeClaimTemplates:
          - name: data
            spec:
              accessModes:
              - ReadWriteOnce
              resources:
                requests:
                  storage: 20Gi
          - name: metadata
            spec:
              storageClassName: null
              accessModes:
              - ReadWriteOnce
              resources:
                requests:
                  storage: 5Gi
        - name: controller
          componentDefRef: controller
          componentDef: kafka-controller
          tls: false
          replicas: 1
          resources:
            limits:
              cpu: '0.5'
              memory: 0.5Gi
            requests:
              cpu: '0.5'
              memory: 0.5Gi
          volumeClaimTemplates:
          - name: metadata
            spec:
              storageClassName: null
              accessModes:
              - ReadWriteOnce
              resources:
                requests:
                  storage: 20Gi
        - name: metrics-exp
          componentDef: kafka-exporter
          replicas: 1
          resources:
            limits:
              cpu: '0.5'
              memory: 0.5Gi
            requests:
              cpu: '0.5'
              memory: 0.5Gi
      EOF
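
      Both examples above use a single replica per component to keep the footprint small. For production, following the note above, you would typically run 3 to 5 controllers; in the separated-mode manifest this means raising the controller component's replica count, for example:

      - name: controller
        componentDef: kafka-controller
        replicas: 3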
      
    Field definitions:
    • metadata.annotations."kubeblocks.io/extra-env": Defines extra environment variables for the cluster, including the Kafka broker's and controller's JVM heap settings.
    • spec.terminationPolicy: The policy of cluster termination. The default value is Delete. Valid values are DoNotTerminate, Delete, and WipeOut. For the detailed definition, refer to Termination Policy.
    • spec.affinity: Defines a set of node affinity scheduling rules for the cluster's Pods. This field helps control the placement of Pods on nodes within the cluster.
    • spec.affinity.podAntiAffinity: Specifies the anti-affinity level of Pods within a component. It determines how Pods should spread across nodes to improve availability and performance.
    • spec.affinity.topologyKeys: The key of node labels used to define the topology domain for Pod anti-affinity and Pod spread constraints.
    • spec.tolerations: An array of tolerations attached to the cluster's Pods, allowing them to be scheduled onto nodes with matching taints.
    • spec.services: Defines the services used to access the cluster.
    • spec.componentSpecs: The list of components that make up the cluster. This field allows customized configuration of each component within a cluster.
    • spec.componentSpecs.componentDefRef: The name of the component definition that is defined in the cluster definition. You can list the component definition names with kubectl get clusterdefinition kafka -o json | jq '.spec.componentDefs[].name'.
    • spec.componentSpecs.name: The name of the component.
    • spec.componentSpecs.replicas: The number of replicas of the component.
    • spec.componentSpecs.resources: The resource requirements of the component.
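
    The terminationPolicy above is set to Delete. If you later want to change it on a running cluster, for example to guard against accidental deletion, you can patch the field; a minimal sketch using a standard kubectl merge patch:

    kubectl patch cluster mycluster -n demo --type merge -p '{"spec":{"terminationPolicy":"DoNotTerminate"}}'
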
  2. Verify whether this cluster is created successfully.

    kubectl get cluster mycluster -n demo
    >
    NAME        CLUSTER-DEFINITION   VERSION       TERMINATION-POLICY   STATUS    AGE
    mycluster   kafka                kafka-3.3.2   Delete               Running   2m2s
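
    To look one level deeper, you can also list the Pods that make up the cluster. The label selector below assumes the standard app.kubernetes.io/instance label that KubeBlocks sets on the Pods it manages:

    kubectl get pods -n demo -l app.kubernetes.io/instance=mycluster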
    
  1. Create a Kafka cluster with kbcli.

    The cluster creation command is simply kbcli cluster create. You can further customize your cluster resources as needed with the --set flag.

    kbcli cluster create kafka mycluster -n demo
    

    kbcli provides more options for creating a Kafka cluster, such as setting the cluster version, termination policy, CPU, and memory. You can view these options by adding the --help or -h flag.

    kbcli cluster create kafka --help
    
    kbcli cluster create kafka -h
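
    For example, assuming your kbcli version accepts these keys through --set (check the --help output above for what it actually supports), a customized create might look like:

    kbcli cluster create kafka mycluster -n demo --set cpu=1,memory=2Gi,storage=30Gi,replicas=3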
    
  2. Verify whether this cluster is created successfully.

    kbcli cluster list -n demo
    >
    NAME        NAMESPACE   CLUSTER-DEFINITION   VERSION       TERMINATION-POLICY   STATUS    CREATED-TIME
    mycluster   demo        kafka                kafka-3.3.2   Delete               Running   Sep 27,2024 15:15 UTC+0800
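
    Once the cluster reports Running, you can inspect its components, endpoints, and resources in more detail with kbcli cluster describe:

    kbcli cluster describe mycluster -n demo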
    
