This guide demonstrates how to configure comprehensive monitoring for RabbitMQ clusters in KubeBlocks using the Prometheus Operator, a PodMonitor resource, and Grafana dashboards.
Before proceeding, ensure the following:
```bash
kubectl create ns demo
```

```
namespace/demo created
```
Deploy the kube-prometheus-stack using Helm:
```bash
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm install prometheus prometheus-community/kube-prometheus-stack \
  -n monitoring \
  --create-namespace
```
Check that all components are running:

```bash
kubectl get pods -n monitoring
```
Expected Output:
```
NAME                                                     READY   STATUS    RESTARTS   AGE
alertmanager-prometheus-kube-prometheus-alertmanager-0   2/2     Running   0          114s
prometheus-grafana-75bb7d6986-9zfkx                      3/3     Running   0          2m
prometheus-kube-prometheus-operator-7986c9475-wkvlk      1/1     Running   0          2m
prometheus-kube-state-metrics-645c667b6-2s4qx            1/1     Running   0          2m
prometheus-prometheus-kube-prometheus-prometheus-0       2/2     Running   0          114s
prometheus-prometheus-node-exporter-47kf6                1/1     Running   0          2m1s
prometheus-prometheus-node-exporter-6ntsl                1/1     Running   0          2m1s
prometheus-prometheus-node-exporter-gvtxs                1/1     Running   0          2m1s
prometheus-prometheus-node-exporter-jmxg8                1/1     Running   0          2m1s
```
KubeBlocks uses a declarative approach for managing RabbitMQ Clusters. Below is an example configuration for deploying a RabbitMQ Cluster with 3 replicas.
Apply the following YAML configuration to deploy the cluster:
```yaml
apiVersion: apps.kubeblocks.io/v1
kind: Cluster
metadata:
  name: rabbitmq-cluster
  namespace: demo
spec:
  terminationPolicy: Delete
  clusterDef: rabbitmq
  topology: clustermode
  componentSpecs:
    - name: rabbitmq
      serviceVersion: 3.13.7
      replicas: 3
      resources:
        limits:
          cpu: "0.5"
          memory: "0.5Gi"
        requests:
          cpu: "0.5"
          memory: "0.5Gi"
      volumeClaimTemplates:
        - name: data
          spec:
            storageClassName: ""
            accessModes:
              - ReadWriteOnce
            resources:
              requests:
                storage: 20Gi
```
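The manifest above can be applied with kubectl; the file name used here is just an example:

```shell
# save the Cluster manifest above as rabbitmq-cluster.yaml (example name), then apply it
kubectl apply -f rabbitmq-cluster.yaml
```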
Monitor the cluster status until it transitions to the Running state:
```bash
kubectl get cluster rabbitmq-cluster -n demo -w
```
Expected Output:
```
NAME               CLUSTER-DEFINITION   TERMINATION-POLICY   STATUS     AGE
rabbitmq-cluster   rabbitmq             Delete               Creating   15s
rabbitmq-cluster   rabbitmq             Delete               Running    83s
```
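If you prefer a blocking check over `-w`, `kubectl wait` can be used instead; the jsonpath below assumes KubeBlocks records the state in the Cluster's `status.phase` field, which is worth verifying on your version:

```shell
# block until the Cluster reports Running (or time out after 5 minutes)
kubectl wait cluster/rabbitmq-cluster -n demo \
  --for=jsonpath='{.status.phase}'=Running --timeout=300s
```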
Check the pod status and roles:
```bash
kubectl get pods -l app.kubernetes.io/instance=rabbitmq-cluster -n demo
```
Expected Output:
```
NAME                          READY   STATUS    RESTARTS   AGE
rabbitmq-cluster-rabbitmq-0   2/2     Running   0          106s
rabbitmq-cluster-rabbitmq-1   2/2     Running   0          82s
rabbitmq-cluster-rabbitmq-2   2/2     Running   0          47s
```
Once the cluster status becomes Running, your RabbitMQ cluster is ready for use.
If you are creating the cluster for the very first time, it may take some time to pull images before running.
```bash
# port-forward the exporter's metrics port
kubectl -n demo port-forward pods/rabbitmq-cluster-rabbitmq-0 15692:15692

# check metrics
curl -s http://127.0.0.1:15692/metrics | head -n 50
```
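With the port-forward still running, you can also spot-check the RabbitMQ-specific series; the built-in Prometheus plugin prefixes its metric names with `rabbitmq_`:

```shell
# count the rabbitmq_-prefixed series exposed by the exporter
curl -s http://127.0.0.1:15692/metrics | grep -c '^rabbitmq_'

# and list a few of them
curl -s http://127.0.0.1:15692/metrics | grep '^rabbitmq_' | head -n 10
```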
```yaml
apiVersion: monitoring.coreos.com/v1
kind: PodMonitor
metadata:
  name: rabbitmq-cluster-pod-monitor
  namespace: demo
  labels: # Must match the setting in 'prometheus.spec.podMonitorSelector'
    release: prometheus
spec:
  jobLabel: app.kubernetes.io/managed-by
  # defines the labels which are transferred from the
  # associated Kubernetes 'Pod' object onto the ingested metrics;
  # adjust the labels to your own needs
  podTargetLabels:
    - app.kubernetes.io/instance
    - app.kubernetes.io/managed-by
    - apps.kubeblocks.io/component-name
    - apps.kubeblocks.io/pod-name
  podMetricsEndpoints:
    - path: /metrics
      port: prometheus # Must match the exporter port name
      scheme: http
  namespaceSelector:
    matchNames:
      - demo # Target namespace
  selector:
    matchLabels:
      app.kubernetes.io/instance: rabbitmq-cluster
```
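Apply the PodMonitor and confirm it was created; the file name below is an example:

```shell
# save the PodMonitor above as rabbitmq-pod-monitor.yaml (example name), then apply it
kubectl apply -f rabbitmq-pod-monitor.yaml

# confirm the resource exists in the target namespace
kubectl get podmonitor rabbitmq-cluster-pod-monitor -n demo
```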
PodMonitor Configuration Guide
| Parameter | Required | Description |
|---|---|---|
| `port` | Yes | Must match the exporter port name (`prometheus`) |
| `namespaceSelector` | Yes | Targets the namespace where RabbitMQ runs |
| `labels` | Yes | Must match Prometheus's `podMonitorSelector` |
| `path` | No | Metrics endpoint path (default: `/metrics`) |
| `interval` | No | Scrape interval (default: `30s`) |
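For example, the optional fields from the table can be set directly on the endpoint entry; the values below are illustrative:

```yaml
podMetricsEndpoints:
  - path: /metrics     # optional, defaults to /metrics
    port: prometheus   # must match the exporter port name
    scheme: http
    interval: 15s      # optional, overrides the 30s default scrape interval
```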
Forward and access Prometheus UI:
```bash
kubectl port-forward svc/prometheus-kube-prometheus-prometheus -n monitoring 9090:9090
```
Open your browser and navigate to: http://localhost:9090/targets
Check if there is a scrape job corresponding to the PodMonitor (the job name is 'demo/rabbitmq-cluster-pod-monitor').
Expected State: the rabbitmq-cluster pods should appear as targets with state UP.
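As an alternative to the UI, you can query the targets API while the port-forward is active; the `jq` filter below is one way to slice the response:

```shell
# list active targets in the demo namespace with their scrape pool and health
curl -s http://localhost:9090/api/v1/targets \
  | jq '.data.activeTargets[]
        | select(.labels.namespace == "demo")
        | {scrapePool, health, scrapeUrl}'
```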
Verify metrics are being scraped:
```bash
curl -sG "http://localhost:9090/api/v1/query" --data-urlencode 'query=up{app_kubernetes_io_instance="rabbitmq-cluster"}' | jq
```
Example Output:
```json
{
  "status": "success",
  "data": {
    "resultType": "vector",
    "result": [
      {
        "metric": {
          "__name__": "up",
          "app_kubernetes_io_instance": "rabbitmq-cluster",
          "app_kubernetes_io_managed_by": "kubeblocks",
          "apps_kubeblocks_io_component_name": "rabbitmq",
          "apps_kubeblocks_io_pod_name": "rabbitmq-cluster-rabbitmq-0",
          "container": "rabbitmq",
          "endpoint": "prometheus",
          "instance": "10.244.0.78:15692",
          "job": "kubeblocks",
          "namespace": "demo",
          "pod": "rabbitmq-cluster-rabbitmq-0"
        },
        "value": [
          1747622160.396,
          "1"
        ]
      },
      {
        "metric": {
          "__name__": "up",
          "app_kubernetes_io_instance": "rabbitmq-cluster",
          "app_kubernetes_io_managed_by": "kubeblocks",
          "apps_kubeblocks_io_component_name": "rabbitmq",
          "apps_kubeblocks_io_pod_name": "rabbitmq-cluster-rabbitmq-1",
          "container": "rabbitmq",
          "endpoint": "prometheus",
          "instance": "10.244.0.80:15692",
          "job": "kubeblocks",
          "namespace": "demo",
          "pod": "rabbitmq-cluster-rabbitmq-1"
        },
        "value": [
          1747622160.396,
          "1"
        ]
      }
    ]
  }
}
```
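Beyond the `up` series, RabbitMQ-specific metrics can be queried the same way; `rabbitmq_queue_messages` is used here as an example and assumes the exporter exposes that series:

```shell
# total messages across all queues in the demo namespace
curl -sG "http://localhost:9090/api/v1/query" \
  --data-urlencode 'query=sum(rabbitmq_queue_messages{namespace="demo"})' \
  | jq '.data.result'
```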
Port-forward and login:
```bash
kubectl port-forward svc/prometheus-grafana -n monitoring 3000:80
```
Open your browser and navigate to http://localhost:3000, then log in with Grafana's default credentials.
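If you don't have the credentials at hand, they can be read from the Secret created by the chart; the Secret name below assumes the Helm release name `prometheus` used earlier:

```shell
# default Grafana username
kubectl get secret prometheus-grafana -n monitoring \
  -o jsonpath='{.data.admin-user}' | base64 -d; echo

# default Grafana password
kubectl get secret prometheus-grafana -n monitoring \
  -o jsonpath='{.data.admin-password}' | base64 -d; echo
```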
Import the KubeBlocks RabbitMQ dashboard:
To delete all the created resources, run the following commands:
```bash
kubectl delete cluster rabbitmq-cluster -n demo
kubectl delete podmonitor rabbitmq-cluster-pod-monitor -n demo
kubectl delete ns demo
```
In this tutorial, we set up observability for a RabbitMQ cluster in KubeBlocks using the Prometheus Operator.
By configuring a PodMonitor, we enabled Prometheus to scrape metrics from the RabbitMQ exporter. Finally, we visualized these metrics in Grafana. This setup provides valuable insight into the health and performance of your RabbitMQ clusters.