Monitor a database
This tutorial demonstrates how to configure the monitoring function for a PostgreSQL cluster using Prometheus and Grafana.
Step 1. Install Prometheus Operator and Grafana
Install the Prometheus Operator and Grafana to monitor database performance. Skip this step if the Prometheus Operator is already installed in your environment.
- Create a new namespace for the Prometheus Operator.
kubectl create namespace monitoring
- Add the Prometheus Operator Helm repository.
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
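If the repository was added previously, you may want to refresh the local chart index before installing (an optional extra step):
helm repo update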
- Install the Prometheus Operator.
helm install prometheus-operator prometheus-community/kube-prometheus-stack --namespace monitoring
- Verify the deployment of the Prometheus Operator. Make sure all pods are in the Ready state.
kubectl get pods -n monitoring
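If you prefer to block until everything is ready instead of polling manually, you can wait on pod readiness (an optional alternative):
kubectl wait --for=condition=Ready pods --all -n monitoring --timeout=300s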
- Access the Prometheus and Grafana dashboards.
  - Check the service endpoints of Prometheus and Grafana.
kubectl get svc -n monitoring
  - Use port forwarding to access the Prometheus dashboard locally.
kubectl port-forward svc/prometheus-operator-kube-p-prometheus -n monitoring 9090:9090
You can then access the Prometheus dashboard by opening "http://localhost:9090" in your browser.
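Once the port forward is running, you can optionally confirm from another terminal that Prometheus is ready:
curl -s http://localhost:9090/-/ready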
  - Retrieve Grafana's login credentials from the secret.
kubectl get secrets prometheus-operator-grafana -n monitoring -o yaml
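To extract just the credentials instead of reading the whole secret, you can decode the relevant keys directly (admin-user and admin-password are the keys created by the bundled Grafana chart; key names may vary by chart version):
kubectl get secret prometheus-operator-grafana -n monitoring -o jsonpath='{.data.admin-user}' | base64 -d; echo
kubectl get secret prometheus-operator-grafana -n monitoring -o jsonpath='{.data.admin-password}' | base64 -d; echo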
  - Use port forwarding to access the Grafana dashboard locally.
kubectl port-forward svc/prometheus-operator-grafana -n monitoring 3000:80
You can then access the Grafana dashboard by opening "http://localhost:3000" in your browser.
- Configure the selectors for PodMonitor and ServiceMonitor to match your monitoring requirements.
The Prometheus Operator uses the Prometheus CRD to set up a Prometheus instance and to customize configurations such as replicas and PVCs.
To update the PodMonitor and ServiceMonitor configuration, modify the Prometheus CR according to your needs:
apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
spec:
  podMonitorNamespaceSelector: {} # Namespaces to match for PodMonitor discovery
  # PodMonitors to be selected for target discovery. An empty label selector
  # matches all objects.
  podMonitorSelector:
    matchLabels:
      release: prometheus # Make sure your PodMonitor CR labels match the selector
  serviceMonitorNamespaceSelector: {} # Namespaces to match for ServiceMonitor discovery
  # ServiceMonitors to be selected for target discovery. An empty label selector
  # matches all objects.
  serviceMonitorSelector:
    matchLabels:
      release: prometheus # Make sure your ServiceMonitor CR labels match the selector
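To see which selectors the deployed Prometheus instance is actually using (with kube-prometheus-stack they typically default to matching the Helm release name), you can inspect the live CR, for example:
kubectl get prometheus -n monitoring -o jsonpath='{.items[0].spec.podMonitorSelector}{"\n"}{.items[0].spec.serviceMonitorSelector}{"\n"}'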
Step 2. Monitor a database cluster
This section demonstrates how to use Prometheus and Grafana for monitoring a database cluster.
Enable the monitoring function for a database cluster
For a new cluster
Create a new cluster with the following command, ensuring the monitoring exporter is enabled. Make sure spec.componentSpecs.disableExporter is set to false when creating the cluster.
cat <<EOF | kubectl apply -f -
apiVersion: apps.kubeblocks.io/v1
kind: Cluster
metadata:
  name: mycluster
  namespace: demo
spec:
  terminationPolicy: Delete
  clusterDef: postgresql
  topology: replication
  componentSpecs:
  - name: postgresql
    componentDef: postgresql
    serviceVersion: "14.7.2"
    disableExporter: false
    labels:
      apps.kubeblocks.postgres.patroni/scope: mycluster-postgresql
    replicas: 2
    resources:
      limits:
        cpu: "0.5"
        memory: "0.5Gi"
      requests:
        cpu: "0.5"
        memory: "0.5Gi"
    volumeClaimTemplates:
    - name: data
      spec:
        storageClassName: ""
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 20Gi
EOF
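Before enabling monitoring, you can wait for the cluster to reach the Running state and confirm its Pods are up. These checks are optional; the instance label below is the same one used by the PodMonitor later in this tutorial:
kubectl get cluster mycluster -n demo -w
kubectl get pods -n demo -l app.kubernetes.io/instance=mycluster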
For an existing cluster
If a cluster already exists, run the command below to check whether the monitoring exporter is enabled.
kubectl get cluster mycluster -n demo -o yaml
View the output.
apiVersion: apps.kubeblocks.io/v1
kind: Cluster
metadata:
...
spec:
  ...
  componentSpecs:
    ...
    disableExporter: false
Setting disableExporter: false, or leaving the field unset, enables the monitoring exporter, which is a prerequisite for the monitoring function. If the output shows disableExporter: true, change it to false to enable the exporter.
Note that updating disableExporter restarts all pods in the cluster.
- kubectl patch
kubectl patch cluster mycluster -n demo --type "json" -p '[{"op":"add","path":"/spec/componentSpecs/0/disableExporter","value":false}]'
- Edit cluster YAML file
You can also edit the cluster YAML to enable or disable the monitoring function.
kubectl edit cluster mycluster -n demo
Edit the value of disableExporter.
...
componentSpecs:
  ...
  disableExporter: true # Set to `false` to enable the exporter
...
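Either way, you can read the field back to confirm the change took effect. This quick check assumes the postgresql component is the first entry in componentSpecs, as in this tutorial:
kubectl get cluster mycluster -n demo -o jsonpath='{.spec.componentSpecs[0].disableExporter}'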
When the cluster is running, each Pod should have a sidecar container named exporter that runs postgres-exporter.
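A quick, optional way to confirm the sidecar is present is to list the Pod's containers and look for exporter in the output (the Pod name assumes the naming pattern used in the next step):
kubectl get pod mycluster-postgresql-0 -n demo -o jsonpath='{.spec.containers[*].name}'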
Create PodMonitor
- Query scrapePath and scrapePort.
Retrieve the scrapePath and scrapePort from the Pod's exporter container.
kubectl get po mycluster-postgresql-0 -n demo -oyaml | yq '.spec.containers[] | select(.name=="exporter") | .ports'
Expected output:
  - containerPort: 9187
    name: http-metrics
    protocol: TCP
- Create PodMonitor.
Apply the PodMonitor file to monitor the cluster. Below are examples for different engines; edit the values according to your needs. You can also find the latest example YAML files in the KubeBlocks Addons repo.
  - ApeCloud MySQL
kubectl apply -f - <<EOF
apiVersion: monitoring.coreos.com/v1
kind: PodMonitor
metadata:
  name: mycluster-pod-monitor
  namespace: monitoring # Note: this is the namespace of the Prometheus Operator
  labels: # These labels are set in `prometheus.spec.podMonitorSelector`
    release: prometheus
spec:
  jobLabel: kubeblocks-service
  # Define the labels that are transferred from the associated
  # Kubernetes Pod object onto the ingested metrics.
  # Set the labels according to your own needs.
  podTargetLabels:
  - app.kubernetes.io/instance
  - app.kubernetes.io/managed-by
  - apps.kubeblocks.io/component-name
  - apps.kubeblocks.io/pod-name
  podMetricsEndpoints:
  - path: /metrics
    port: http-metrics
    scheme: http
  namespaceSelector:
    matchNames:
    - demo # Namespace where the database cluster is deployed
  selector:
    matchLabels:
      app.kubernetes.io/instance: mycluster
      apps.kubeblocks.io/component-name: mysql
EOF
  - MySQL Community Edition
kubectl apply -f - <<EOF
apiVersion: monitoring.coreos.com/v1
kind: PodMonitor
metadata:
  name: mycluster-pod-monitor
  namespace: monitoring # Note: this is the namespace of the Prometheus Operator
  labels: # These labels are set in `prometheus.spec.podMonitorSelector`
    release: prometheus
spec:
  jobLabel: kubeblocks-service
  # Define the labels that are transferred from the associated
  # Kubernetes Pod object onto the ingested metrics.
  # Set the labels according to your own needs.
  podTargetLabels:
  - app.kubernetes.io/instance
  - app.kubernetes.io/managed-by
  - apps.kubeblocks.io/component-name
  - apps.kubeblocks.io/pod-name
  podMetricsEndpoints:
  - path: /metrics
    port: http-metrics
    scheme: http
  namespaceSelector:
    matchNames:
    - demo # Namespace where the database cluster is deployed
  selector:
    matchLabels:
      app.kubernetes.io/instance: mycluster
      apps.kubeblocks.io/component-name: mysql
EOF
  - PostgreSQL
kubectl apply -f - <<EOF
apiVersion: monitoring.coreos.com/v1
kind: PodMonitor
metadata:
  name: mycluster-pod-monitor
  namespace: monitoring # Note: this is the namespace of the Prometheus Operator
  labels: # These labels are set in `prometheus.spec.podMonitorSelector`
    release: prometheus
spec:
  jobLabel: kubeblocks-service
  # Define the labels that are transferred from the associated
  # Kubernetes Pod object onto the ingested metrics.
  # Set the labels according to your own needs.
  podTargetLabels:
  - app.kubernetes.io/instance
  - app.kubernetes.io/managed-by
  - apps.kubeblocks.io/component-name
  - apps.kubeblocks.io/pod-name
  podMetricsEndpoints:
  - path: /metrics
    port: http-metrics
    scheme: http
  namespaceSelector:
    matchNames:
    - demo # Namespace where the database cluster is deployed
  selector:
    matchLabels:
      app.kubernetes.io/instance: mycluster
      apps.kubeblocks.io/component-name: postgresql
EOF
  - Redis
kubectl apply -f - <<EOF
apiVersion: monitoring.coreos.com/v1
kind: PodMonitor
metadata:
  name: mycluster-pod-monitor
  namespace: monitoring # Note: this is the namespace of the Prometheus Operator
  labels: # These labels are set in `prometheus.spec.podMonitorSelector`
    release: prometheus
spec:
  jobLabel: kubeblocks-service
  # Define the labels that are transferred from the associated
  # Kubernetes Pod object onto the ingested metrics.
  # Set the labels according to your own needs.
  podTargetLabels:
  - app.kubernetes.io/instance
  - app.kubernetes.io/managed-by
  - apps.kubeblocks.io/component-name
  - apps.kubeblocks.io/pod-name
  podMetricsEndpoints:
  - path: /metrics
    port: http-metrics
    scheme: http
  namespaceSelector:
    matchNames:
    - demo # Namespace where the database cluster is deployed
  selector:
    matchLabels:
      app.kubernetes.io/instance: mycluster
      apps.kubeblocks.io/component-name: redis
EOF
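After applying the PodMonitor, you can optionally confirm that it was created and that Prometheus picked it up as a scrape target, for example:
kubectl get podmonitor mycluster-pod-monitor -n monitoring
With the port forward from Step 1 still running, the new target should also appear under Status > Targets at "http://localhost:9090".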
- Access the Grafana dashboard.
Log in to the Grafana dashboard and import the dashboard.
There is a pre-configured dashboard for PostgreSQL under the APPS / PostgreSQL folder in the Grafana dashboard. More dashboards are available in the Grafana dashboard store.
Make sure the labels (such as the values of path and port in the endpoint) are set correctly in the PodMonitor file to match your dashboard.
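If the imported dashboard stays empty, a quick sanity check (not part of the original steps) is to query a basic postgres-exporter metric such as pg_up through the Prometheus API while the port forward is running; a value of 1 for your instance means metrics are being scraped:
curl -s 'http://localhost:9090/api/v1/query?query=pg_up'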