KubeBlocks for Pulsar

  1. Introduction
  2. Environment Recommendation
  3. Before you start
  4. Create Pulsar cluster

Introduction

KubeBlocks integrates new engines quickly through its abstraction layer. Its Pulsar support covers daily operations: basic lifecycle operations such as cluster creation, deletion, and restart, as well as advanced operations such as horizontal and vertical scaling of Pulsar cluster components, storage expansion, and configuration changes.

Environment Recommendation

Refer to the official Pulsar documentation for the recommended configuration, such as memory, CPU, and storage, of each component.

Component            Replicas
zookeeper            1 for a test environment, or 3 for a production environment
bookies              at least 3 for a test environment, at least 4 for a production environment
broker               at least 1; 3 replicas recommended for a production environment
recovery (optional)  1; if autoRecovery is not enabled for bookies, at least 3 replicas are needed
proxy (optional)     1; 3 replicas recommended for a production environment

Before you start

  • Install kbcli if you want to manage the Pulsar cluster with kbcli.

  • Install KubeBlocks.

  • Check whether the Pulsar Addon is enabled. If this Addon is disabled, enable it first.

  • View all the database types and versions available for creating a cluster.

    kubectl get clusterdefinition pulsar
    >
    NAME     TOPOLOGIES                                     SERVICEREFS   STATUS      AGE
    pulsar   pulsar-basic-cluster,pulsar-enhanced-cluster                 Available   16m

    View all available versions for creating a cluster.

    kubectl get clusterversions -l clusterdefinition.kubeblocks.io/name=pulsar
    >
    NAME            CLUSTER-DEFINITION   STATUS      AGE
    pulsar-2.11.2   pulsar               Available   16m
    pulsar-3.0.2    pulsar               Available   16m

    Or with kbcli:

    kbcli clusterdefinition list
    kbcli clusterversion list
  • To keep things isolated, create a separate namespace called demo.

    kubectl create namespace demo
    >
    namespace/demo created
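The Addon check in the steps above can also be done from the command line; a minimal sketch with kbcli, assuming the Addon is published under the standard name pulsar:

```shell
# List Addons and check whether the pulsar Addon shows as Enabled
kbcli addon list

# Enable the pulsar Addon if it is currently disabled
kbcli addon enable pulsar
```

Both commands operate against the KubeBlocks installation in the current kubectl context.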

Create Pulsar cluster

  1. Create a Pulsar cluster.

    cat <<EOF | kubectl apply -f -
    apiVersion: apps.kubeblocks.io/v1alpha1
    kind: Cluster
    metadata:
      name: mycluster
      namespace: demo
      annotations:
        "kubeblocks.io/extra-env": '{"KB_PULSAR_BROKER_NODEPORT": "false"}'
    spec:
      terminationPolicy: Delete
      services:
        - name: proxy
          serviceName: proxy
          componentSelector: pulsar-proxy
          spec:
            type: ClusterIP
            ports:
              - name: pulsar
                port: 6650
                targetPort: 6650
              - name: http
                port: 80
                targetPort: 8080
        - name: broker-bootstrap
          serviceName: broker-bootstrap
          componentSelector: pulsar-broker
          spec:
            type: ClusterIP
            ports:
              - name: pulsar
                port: 6650
                targetPort: 6650
              - name: http
                port: 80
                targetPort: 8080
              - name: kafka-client
                port: 9092
                targetPort: 9092
      componentSpecs:
        - name: pulsar-broker
          componentDef: pulsar-broker
          disableExporter: true
          replicas: 1
          resources:
            limits:
              cpu: '0.5'
              memory: 0.5Gi
            requests:
              cpu: '0.5'
              memory: 0.5Gi
          volumeClaimTemplates:
            - name: data
              spec:
                accessModes:
                  - ReadWriteOnce
                resources:
                  requests:
                    storage: 20Gi
        - name: pulsar-proxy
          componentDef: pulsar-proxy
          replicas: 1
          resources:
            limits:
              cpu: '0.5'
              memory: 0.5Gi
            requests:
              cpu: '0.5'
              memory: 0.5Gi
        - name: bookies
          componentDef: pulsar-bookkeeper
          replicas: 3
          resources:
            limits:
              cpu: '0.5'
              memory: 0.5Gi
            requests:
              cpu: '0.5'
              memory: 0.5Gi
          volumeClaimTemplates:
            - name: journal
              spec:
                accessModes:
                  - ReadWriteOnce
                resources:
                  requests:
                    storage: 20Gi
            - name: ledgers
              spec:
                accessModes:
                  - ReadWriteOnce
                resources:
                  requests:
                    storage: 20Gi
        - name: bookies-recovery
          componentDef: pulsar-bkrecovery
          replicas: 1
          resources:
            limits:
              cpu: '0.5'
              memory: 0.5Gi
            requests:
              cpu: '0.5'
              memory: 0.5Gi
        - name: zookeeper
          componentDef: pulsar-zookeeper
          replicas: 3
          resources:
            limits:
              cpu: '0.5'
              memory: 0.5Gi
            requests:
              cpu: '0.5'
              memory: 0.5Gi
          volumeClaimTemplates:
            - name: data
              spec:
                accessModes:
                  - ReadWriteOnce
                resources:
                  requests:
                    storage: 20Gi
    EOF
    Field definitions:

    metadata.annotations."kubeblocks.io/extra-env": Specifies extra environment variables; here, whether NodePort services are enabled for the broker.
    spec.terminationPolicy: The cluster termination policy. The default value is Delete. Valid values are DoNotTerminate, Delete, and WipeOut. For the detailed definition, refer to Termination Policy.
    spec.affinity: Defines a set of node affinity scheduling rules for the cluster's Pods, controlling their placement on nodes within the cluster.
    spec.affinity.podAntiAffinity: Specifies the anti-affinity level of Pods within a component, determining how Pods spread across nodes to improve availability and performance.
    spec.affinity.topologyKeys: The node label keys that define the topology domain for Pod anti-affinity and Pod spread constraints.
    spec.tolerations: An array of tolerations attached to the cluster's Pods, allowing them to be scheduled onto nodes with matching taints.
    spec.componentSpecs: The list of components that make up the cluster, allowing customized configuration of each component.
    spec.componentSpecs.componentDef: The name of the component definition the component is based on; this example uses the Pulsar component definitions pulsar-broker, pulsar-proxy, pulsar-bookkeeper, pulsar-bkrecovery, and pulsar-zookeeper.
    spec.componentSpecs.name: The name of the component.
    spec.componentSpecs.disableExporter: Defines whether the metrics exporter is disabled; set it to true to turn monitoring off.
    spec.componentSpecs.replicas: The number of replicas of the component.
    spec.componentSpecs.resources: The resource requirements of the component.
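    The extra-env annotation above keeps the broker behind ClusterIP services. As a sketch, assuming the Addon reads this flag as its name suggests, exposing the broker through NodePort services would mean flipping the annotation value before applying the manifest:

    ```yaml
    metadata:
      annotations:
        # Assumption: setting the flag to "true" makes the Addon render
        # NodePort services for the broker instead of ClusterIP
        "kubeblocks.io/extra-env": '{"KB_PULSAR_BROKER_NODEPORT": "true"}'
    ```

    Whether existing services are switched in place on an already-running cluster depends on the Addon; the safe path is to set the annotation at creation time.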
  2. Verify that the cluster has been created.

    kubectl get cluster mycluster -n demo

    When the status shows Running, the cluster has been created successfully.
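Beyond a one-off get, you can watch the cluster converge and inspect its component Pods; a minimal sketch, where the label selector is an assumption based on common Kubernetes labeling conventions rather than something guaranteed by this page:

```shell
# Watch the cluster object until its status reaches Running
kubectl get cluster mycluster -n demo -w

# List the Pods of all components (instance label assumed)
kubectl get pods -n demo -l app.kubernetes.io/instance=mycluster

# If the cluster stays in Creating or Failed, check its events
kubectl describe cluster mycluster -n demo
```

A Pulsar cluster brings up ZooKeeper and bookies before the brokers become ready, so expect a few minutes of Creating status on first deployment.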

© 2025 ApeCloud PTE. Ltd.