Deploy production-grade Apache RocketMQ clusters on Kubernetes in minutes. Multi-shard broker topology with DLedger Raft HA, scalable NameServer, built-in metrics, and a web dashboard — all managed via a single operator.
Raft HA per Shard
Open Source
Deploy RocketMQ in 2 steps
Install KubeBlocks & RocketMQ Addon
# Add Helm repo
helm repo add kubeblocks https://apecloud.github.io/helm-charts
helm repo update

# Install KubeBlocks
helm install kubeblocks kubeblocks/kubeblocks \
  --namespace kb-system --create-namespace

# Install RocketMQ addon
helm upgrade -i kb-addon-rocketmq kubeblocks/rocketmq \
  --namespace kb-system
Create a RocketMQ Cluster
apiVersion: apps.kubeblocks.io/v1
kind: Cluster
metadata:
  name: rocketmq-cluster
  namespace: demo
spec:
  clusterDef: rocketmq
  topology: master-slave
  terminationPolicy: Delete
  componentSpecs:
    - name: namesrv
      resources:
        limits:
          memory: "2Gi"
    - name: exporter
    - name: dashboard
  shardings:
    - name: broker
      shards: 2
      template:
        name: rocketmq-broker
        resources:
          limits:
            memory: "4Gi"
KubeBlocks deploys the full RocketMQ stack — NameServer, Broker shards with DLedger Raft, Exporter, and Dashboard — and manages each component independently via Kubernetes CRDs.
RocketMQ's architecture separates service discovery (NameServer) from message storage (Broker). KubeBlocks models each as an independent component and uses the Sharding API to manage multiple broker shards. Each shard runs DLedger Raft — if a broker leader fails, the surviving replicas elect a new one automatically within that shard.
NameServer is stateless and horizontally scalable — clients connect to any replica for route discovery
Broker shards each run DLedger Raft consensus — a leader failure triggers automatic re-election within the shard
Multiple broker shards distribute topic partitions across independent Raft groups for horizontal throughput
Dedicated Exporter component serves Prometheus metrics on :5557; Broker pods also run a JMX exporter sidecar on :5556
Dashboard component provides a web admin console accessible via port-forward (18080:8080)
KubeBlocks Sharding API manages broker shards as a single logical scaling unit
No SSH into pods, no shell scripts. Declare an OpsRequest or Cluster resource and KubeBlocks handles the rest.
Availability & Scaling
Horizontal Scaling (NameServer)
Add or remove NameServer replicas online. NameServer is stateless — clients automatically use any running replica for route discovery.
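A scale-out request can be sketched as an OpsRequest like the one below. This assumes the KubeBlocks operations.kubeblocks.io/v1alpha1 API; field names may differ across KubeBlocks versions, and the metadata name is illustrative.

apiVersion: operations.kubeblocks.io/v1alpha1
kind: OpsRequest
metadata:
  name: rocketmq-scale-namesrv   # illustrative name
  namespace: demo
spec:
  clusterName: rocketmq-cluster
  type: HorizontalScaling
  horizontalScaling:
    - componentName: namesrv
      scaleOut:
        replicaChanges: 1   # add one NameServer replica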
Vertical Scaling
Resize CPU and memory for NameServer, Broker, Exporter, or Dashboard components via OpsRequest. KubeBlocks applies changes with a rolling strategy.
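A resize might look like the sketch below, again assuming the operations.kubeblocks.io/v1alpha1 OpsRequest API; the resource values and metadata name are placeholders.

apiVersion: operations.kubeblocks.io/v1alpha1
kind: OpsRequest
metadata:
  name: rocketmq-vscale-namesrv   # illustrative name
  namespace: demo
spec:
  clusterName: rocketmq-cluster
  type: VerticalScaling
  verticalScaling:
    - componentName: namesrv
      requests:
        cpu: "1"
        memory: 2Gi
      limits:
        cpu: "2"
        memory: 4Gi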
Volume Expansion
Expand PVC storage for broker message data without pod restarts on supported storage classes.
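As a sketch, a volume-expansion OpsRequest could look like the following; the volume claim template name (data) is an assumption about the addon's PVC naming, and the API group may vary by KubeBlocks version.

apiVersion: operations.kubeblocks.io/v1alpha1
kind: OpsRequest
metadata:
  name: rocketmq-expand-broker   # illustrative name
  namespace: demo
spec:
  clusterName: rocketmq-cluster
  type: VolumeExpansion
  volumeExpansion:
    - componentName: broker
      volumeClaimTemplates:
        - name: data      # assumed PVC template name
          storage: 40Gi   # new size; must exceed the current size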
Rolling Restart
Controlled pod restarts across all components. Broker restarts respect DLedger Raft — the cluster stays available if quorum is maintained.
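A restart of the broker pods can be requested with a minimal OpsRequest along these lines (operations.kubeblocks.io/v1alpha1 assumed; name illustrative):

apiVersion: operations.kubeblocks.io/v1alpha1
kind: OpsRequest
metadata:
  name: rocketmq-restart-broker   # illustrative name
  namespace: demo
spec:
  clusterName: rocketmq-cluster
  type: Restart
  restart:
    - componentName: broker   # pods restart one at a time, preserving quorum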
Stop / Start
Suspend the entire cluster to eliminate compute cost; resume with full state intact.
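Stopping the cluster is a one-field OpsRequest; a later Start request with type: Start resumes it. This sketch assumes the operations.kubeblocks.io/v1alpha1 API.

apiVersion: operations.kubeblocks.io/v1alpha1
kind: OpsRequest
metadata:
  name: rocketmq-stop   # illustrative name
  namespace: demo
spec:
  clusterName: rocketmq-cluster
  type: Stop   # use type: Start in a new OpsRequest to resume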
Configuration, Observability & Access
Dynamic Configuration
Update broker parameters (e.g. enableMultiDispatch) via a Reconfiguring OpsRequest. KubeBlocks applies the change with a rolling pod restart.
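A reconfiguration request can be sketched as follows; the exact field layout (reconfigures vs. reconfigure) has changed across KubeBlocks releases, so treat this as an assumption to check against your installed CRD.

apiVersion: operations.kubeblocks.io/v1alpha1
kind: OpsRequest
metadata:
  name: rocketmq-reconfigure-broker   # illustrative name
  namespace: demo
spec:
  clusterName: rocketmq-cluster
  type: Reconfiguring
  reconfigures:
    - componentName: broker
      parameters:
        - key: enableMultiDispatch
          value: "true"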
Prometheus Metrics
Exporter component serves Prometheus metrics on :5557. Broker pods additionally run a JMX exporter sidecar on :5556. Both are compatible with standard Prometheus scrape configs.
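A minimal Prometheus scrape config targeting both endpoints might look like the sketch below; the service DNS names are assumptions based on the cluster name rocketmq-cluster in namespace demo, so substitute the Services KubeBlocks actually creates.

scrape_configs:
  - job_name: rocketmq-exporter
    static_configs:
      - targets:
          - rocketmq-cluster-exporter.demo.svc:5557   # assumed Exporter service name
  - job_name: rocketmq-broker-jmx
    static_configs:
      - targets:
          - rocketmq-cluster-broker.demo.svc:5556     # assumed Broker service name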
Web Dashboard
Built-in admin console on :8080. Access via kubectl port-forward (18080:8080) or expose via LoadBalancer.
Expose via LoadBalancer
Expose NameServer or Dashboard externally via a LoadBalancer or NodePort service using an Expose OpsRequest.
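An Expose request can be sketched like this; the services block (name, serviceType, switch) follows the operations.kubeblocks.io/v1alpha1 conventions as an assumption and may differ in your KubeBlocks version.

apiVersion: operations.kubeblocks.io/v1alpha1
kind: OpsRequest
metadata:
  name: rocketmq-expose-namesrv   # illustrative name
  namespace: demo
spec:
  clusterName: rocketmq-cluster
  type: Expose
  expose:
    - componentName: namesrv
      switch: Enable
      services:
        - name: internet
          serviceType: LoadBalancer   # or NodePort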
Credential Management
Broker credentials are generated and stored in Kubernetes Secrets at cluster creation time.
KubeBlocks automates the most complex RocketMQ operational tasks.
Each broker shard runs an independent DLedger Raft group. When the leader pod fails, the surviving followers elect a new leader automatically. Producers and consumers reconnect to the new leader without manual intervention.
View Docs →
Normal: 2-replica shard
broker-0-0 (leader) + broker-0-1 (follower) — DLedger replicates every message commit
Leader pod fails
KubeBlocks detects the pod failure; broker-0-1 (follower) enters Raft election
Election completes
broker-0-1 becomes the new DLedger leader; NameServer route table updates automatically
Cluster recovers
Producers reconnect; KubeBlocks restarts broker-0-0 and re-joins it as a follower
KubeBlocks handles provisioning, failover, scaling, and configuration for RocketMQ and 35+ other database engines — with a single unified operator.