This page describes how KubeBlocks deploys an Apache RocketMQ cluster on Kubernetes — covering the resource hierarchy, NameServer and Broker component roles, master/slave HA, and traffic routing.
KubeBlocks models a RocketMQ cluster as a hierarchy of Kubernetes custom resources:
```
Cluster ─┬─ Component (namesrv)   → InstanceSet → Pod × N
         ├─ Sharding (broker)     → Shard × N → InstanceSet → Pod × replicas
         ├─ Component (exporter)  → InstanceSet → Pod × 1
         └─ Component (dashboard) → InstanceSet → Pod × 1
```
| Resource | Role |
|---|---|
| Cluster | User-facing declaration — specifies the topology, replica counts for each component, storage, and resources |
| Component (namesrv) | Generated automatically; references the NameServer ComponentDefinition; stateless routing registry |
| Sharding (broker) | KubeBlocks Sharding resource backed by a ShardingDefinition; manages N independent broker groups (shards), each with its own replica set |
| Component (exporter) | Prometheus metrics bridge — scrapes RocketMQ metrics from brokers via NameServer and exposes them on port 5557 |
| Component (dashboard) | Web management console — provides a UI for topics, consumers, and brokers; exposes HTTP on port 8080 |
| InstanceSet | KubeBlocks custom workload (replaces StatefulSet); manages pods with stable identities and role awareness |
| Pod | Actual running RocketMQ process; each pod gets a unique ordinal and its own PVC (except dashboard which uses emptyDir) |
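The whole hierarchy above is generated from a single user-authored Cluster resource. A minimal sketch of what that declaration looks like (the API version, clusterDef/topology names, and field layout follow the usual KubeBlocks Cluster API but are illustrative; check the add-on for exact values):

```yaml
# Illustrative Cluster spec; exact clusterDef and topology names come from the add-on.
apiVersion: apps.kubeblocks.io/v1
kind: Cluster
metadata:
  name: my-rocketmq
spec:
  clusterDef: rocketmq          # assumed ClusterDefinition name
  topology: master-slave
  componentSpecs:
    - name: namesrv
      replicas: 3               # N independent NameServer pods
    - name: exporter
      replicas: 1
    - name: dashboard
      replicas: 1
  shardings:
    - name: broker
      shards: 2                 # N independent broker groups (shards)
      template:
        name: broker
        replicas: 2             # 1 master + 1 slave per group
```

Each entry under `componentSpecs` becomes a Component, and the `broker` sharding expands into one Component per shard, each backed by its own InstanceSet.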
A RocketMQ cluster consists of NameServer (service discovery) and Broker (message storage) as the core data-plane components, with exporter (metrics) and dashboard (web console) as optional management-plane components. All four are defined by the master-slave topology in the ClusterDefinition.
| Container | Port | Purpose |
|---|---|---|
| rocketmq-namesrv | 9876 (TCP) | RocketMQ NameServer — maintains topic routing information; acts as a service registry for brokers and clients |
| jmx-exporter | 5556 | Prometheus JMX metrics exporter |
| Container | Port | Purpose |
|---|---|---|
| rocketmq-broker | 10911 (remoting), 10909 (vip-channel), 10912 (HA replication), 40911 (DLedger/Raft) | RocketMQ Broker — stores message queues, handles producer/consumer requests, and replicates data within a broker group; roleProbe runs /scripts/get-role.sh inside this container |
| jmx-exporter | 5556 | Prometheus JMX metrics exporter |
| agent | 8999 | Lightweight HTTP server for topic and subscription group info |
Each Broker pod mounts its own PVC for the message store directory (/home/rocketmq/store), providing persistent storage for commit logs and consume queues.
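In the Cluster spec this persistence is requested per broker shard through a volume claim template. A sketch of the fragment (the volume name and size are illustrative; the mount path to /home/rocketmq/store is wired in the ComponentDefinition):

```yaml
# Fragment of the broker sharding template; the claim backs /home/rocketmq/store.
volumeClaimTemplates:
  - name: data                  # assumed volume name
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 20Gi         # sized for commit logs and consume queues
```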
| Container | Port | Purpose |
|---|---|---|
| rocketmq-exporter | 5557 (metrics) | Scrapes RocketMQ metrics via NameServer and exposes them in Prometheus format; connects to brokers using admin credentials |
| Container | Port | Purpose |
|---|---|---|
| rocketmq-dashboard | 8080 (console) | RocketMQ web management console — view and manage topics, consumer groups, and broker status; uses emptyDir for configuration (no PVC) |
NameServer is a lightweight, stateless routing registry. Each NameServer instance is independent — there is no leader election or data replication between NameServer nodes. Producers and consumers connect to all known NameServer instances and use any available one for routing lookups:
| NameServer Responsibility | Description |
|---|---|
| Broker registration | Brokers register their topic routing information with all NameServer instances at startup and periodically |
| Topic routing queries | Producers and consumers query NameServer to discover which brokers host the topics they need |
| Stale broker detection | NameServer removes brokers that have not sent a heartbeat within 120 seconds |
Brokers are organized into broker groups (a master plus N slaves). Each group handles a distinct set of topic partitions:
| Broker Role | Description |
|---|---|
| Master broker | Accepts producer writes; assigned brokerId=0 at startup; optionally serves consumer reads |
| Slave broker | Replicates from the master via the HA replication port (10912); assigned brokerId≥1; serves consumer reads when configured |
KubeBlocks deploys RocketMQ brokers in ASYNC_MASTER/SLAVE mode by default. Each broker group has one master (brokerId=0) and one or more slaves that replicate asynchronously from the master via port 10912. The role assignment is determined at startup by broker-setup.sh based on the pod's ordinal index.
| Concept | Description |
|---|---|
| Master broker | Accepts all producer writes; the only member that can confirm message durability |
| Slave broker | Replicates the master's commit log asynchronously via port 10912; serves consumer reads |
| HA replication | Slaves maintain a persistent connection to the master on port 10912 for log replication |
| Quorum | None — async replication; acknowledged writes are durable only on the master |
DLedger (Raft) mode is an optional alternative that provides automatic master election via Raft consensus. It requires setting ENABLE_DLEDGER=true in the broker container environment. This environment variable is not set in the KubeBlocks RocketMQ add-on by default, so DLedger is not active unless explicitly configured.
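If you do want DLedger, the flag can be set as an environment variable on the broker shard in the Cluster spec. A hedged sketch (whether the add-on's startup scripts honor this exact path is version-dependent; verify against your add-on release):

```yaml
# Fragment of the broker sharding in the Cluster spec.
shardings:
  - name: broker
    shards: 1
    template:
      name: broker
      replicas: 3               # DLedger needs at least 3 replicas for a Raft quorum
      env:
        - name: ENABLE_DLEDGER  # switches broker startup into DLedger (Raft) mode
          value: "true"
```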
KubeBlocks creates ClusterIP Services for NameServer and the dashboard. Broker has no ClusterIP service by default: the cmpd-broker ComponentDefinition contains no services block, so the workload controller creates only a headless service per shard:
| Service | Type | Port | Notes |
|---|---|---|---|
| {cluster}-namesrv | ClusterIP | 9876 | All NameServer pods; clients use this for topic routing lookups |
| {cluster}-namesrv-headless | Headless | 9876 | All NameServer pods; always created by the workload controller |
| {cluster}-<shard>-headless | Headless | 10911, 10912, 10909 | Per broker shard; all pods in the shard; used for HA replication and operator probes — no broker ClusterIP exists by default |
| {cluster}-dashboard | ClusterIP | 8080 | Dashboard web console; named default in the ComponentDefinition with no explicit serviceName, resolving to {cluster}-{component} |
Producers and consumers connect to NameServer (port 9876) at startup to fetch topic routing information. NameServer returns the direct pod addresses of broker masters — clients then connect to brokers directly using these addresses, bypassing any ClusterIP. HA replication traffic between broker replicas flows over port 10912 via the per-shard headless service.
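A producer or consumer workload therefore only needs the NameServer ClusterIP service address; broker addresses are discovered dynamically. A sketch of a client container fragment (the image and names are placeholders; NAMESRV_ADDR is the standard RocketMQ client variable):

```yaml
# Fragment of an application container spec pointing at the NameServer service.
containers:
  - name: my-producer
    image: my-app:latest        # placeholder image
    env:
      - name: NAMESRV_ADDR      # standard RocketMQ client address variable
        value: "my-rocketmq-namesrv.default.svc.cluster.local:9876"
```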
The exec roleProbe (/scripts/get-role.sh) detects each broker's role and updates the kubeblocks.io/role pod label; this drives KubeBlocks rolling update ordering (master updated last) but does not affect service routing since there is no broker ClusterIP service.
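In the broker ComponentDefinition this probe is declared as a lifecycleActions entry, roughly as follows (the probe interval is illustrative; the script path matches the container description above):

```yaml
# Fragment of the broker ComponentDefinition.
lifecycleActions:
  roleProbe:
    periodSeconds: 5            # illustrative probe interval
    exec:
      command: ["/scripts/get-role.sh"]   # runs inside the rocketmq-broker container
```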
When a RocketMQ component fails, recovery follows from the roles described above:

| Failure | Behavior |
|---|---|
| NameServer pod | Restarted by its InstanceSet; instances are independent, so clients keep using the remaining NameServer instances in the meantime |
| Slave broker pod | Restarted; resumes asynchronous replication from its master over port 10912 |
| Master broker pod | Restarted with the same identity (brokerId=0); in the default ASYNC_MASTER/SLAVE mode there is no automatic master election, so writes to that broker group pause until the master pod returns (DLedger mode adds Raft-based election) |
The RocketMQ add-on declares KubeBlocks systemAccounts on the broker and Dashboard ComponentDefinitions. NameServer and other components do not use this mechanism in the same way. Passwords are generated according to each account’s policy unless overridden on the Cluster.
Secrets follow {cluster}-{component}-account-{accountName} — where {component} is each component’s name in the Cluster spec (for example dashboard for the Dashboard, and each broker shard component name such as broker-0, broker-1, …).
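For example, the dashboard login secret for a cluster named my-rocketmq would look roughly like the following (the username/password key layout follows the usual KubeBlocks account-secret convention; verify in your version):

```yaml
# Illustrative account secret generated by KubeBlocks.
apiVersion: v1
kind: Secret
metadata:
  name: my-rocketmq-dashboard-account-console-admin
type: Opaque
stringData:
  username: console-admin       # the account name
  password: "<generated>"       # generated per the account's password policy
```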
| Account | Component (typical) | Role | Purpose |
|---|---|---|---|
| rocketmq-admin | Per broker shard (broker-*) | Broker admin / ACL user | Injected into broker pods as ROCKETMQ_USER and ROCKETMQ_PASSWORD for broker authentication configuration |
| console-admin | dashboard | Dashboard login | Injected into Dashboard pods as CONSOLE_USER and CONSOLE_PASSWORD for the RocketMQ Dashboard web UI |
The Dashboard also needs the broker admin identity to talk to the cluster: it reads rocketmq-admin credentials from the broker ComponentDefinition via credentialVarRef (same username/password as the broker shard’s rocketmq-admin account).
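In the Dashboard ComponentDefinition this cross-component reference is expressed with credentialVarRef vars, roughly as below (field names follow the KubeBlocks vars API; the compDef value and var names are illustrative):

```yaml
# Fragment of the Dashboard ComponentDefinition vars.
vars:
  - name: ROCKETMQ_USER
    valueFrom:
      credentialVarRef:
        compDef: rocketmq-broker      # assumed broker ComponentDefinition name
        name: rocketmq-admin          # the broker shard's admin account
        optional: false
        username: Required            # resolve the account's username
  - name: ROCKETMQ_PASSWORD
    valueFrom:
      credentialVarRef:
        compDef: rocketmq-broker
        name: rocketmq-admin
        optional: false
        password: Required            # resolve the account's password
```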