Decommission a Specific Pod in a MongoDB Cluster
This guide explains how to decommission (take offline) specific Pods in MongoDB clusters managed by KubeBlocks. Decommissioning provides precise control over cluster resources while maintaining availability. Use it for workload rebalancing, node maintenance, or failure remediation.
In traditional StatefulSet-based deployments, Kubernetes cannot decommission a specific Pod. StatefulSets guarantee the order and identity of Pods, and scaling down always removes the Pod with the highest ordinal (e.g., scaling down from 3 replicas removes Pod-2 first). This prevents precise control over which Pod to take offline, which complicates maintenance, workload distribution, and failure handling.
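For example, with a plain StatefulSet (the name web below is illustrative):

```bash
# Scaling a StatefulSet from 3 replicas to 2 always deletes the
# highest-ordinal Pod (web-2); there is no way to target web-1 instead.
kubectl scale statefulset web --replicas=2
```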
KubeBlocks overcomes this limitation by enabling administrators to decommission specific Pods directly. This fine-grained control ensures high availability and allows better resource management without disrupting the entire cluster.
Before proceeding, ensure that KubeBlocks is installed and running in your Kubernetes cluster, then create a namespace for this guide:
```bash
kubectl create ns demo
```

Expected Output:

```
namespace/demo created
```
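Optionally, confirm the KubeBlocks operator is running before continuing (kb-system is the default install namespace; adjust if you installed KubeBlocks elsewhere):

```bash
kubectl get pods -n kb-system
```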
KubeBlocks manages MongoDB clusters declaratively. Below is an example configuration that deploys a MongoDB replica set cluster with one primary and two secondary replicas.
Apply the following YAML configuration to deploy the cluster:
```yaml
apiVersion: apps.kubeblocks.io/v1
kind: Cluster
metadata:
  name: mongo-cluster
  namespace: demo
spec:
  terminationPolicy: Delete
  clusterDef: mongodb
  topology: replicaset
  componentSpecs:
    - name: mongodb
      serviceVersion: "6.0.16"
      replicas: 3
      resources:
        limits:
          cpu: '0.5'
          memory: 0.5Gi
        requests:
          cpu: '0.5'
          memory: 0.5Gi
      volumeClaimTemplates:
        - name: data
          spec:
            storageClassName: ""
            accessModes:
              - ReadWriteOnce
            resources:
              requests:
                storage: 20Gi
```
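Save the manifest to a file and apply it (the filename mongo-cluster.yaml is arbitrary):

```bash
kubectl apply -f mongo-cluster.yaml
```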
Monitor the cluster status until it transitions to the Running state:
```bash
kubectl get cluster mongo-cluster -n demo -w
```
Expected Output:

```
NAME            CLUSTER-DEFINITION   TERMINATION-POLICY   STATUS     AGE
mongo-cluster   mongodb              Delete               Creating   49s
mongo-cluster   mongodb              Delete               Running    62s
```
Check the pod status and roles:
```bash
kubectl get pods -l app.kubernetes.io/instance=mongo-cluster -L kubeblocks.io/role -n demo
```
Expected Output:
```
NAME                      READY   STATUS    RESTARTS   AGE   ROLE
mongo-cluster-mongodb-0   2/2     Running   0          78s   primary
mongo-cluster-mongodb-1   2/2     Running   0          63s   secondary
mongo-cluster-mongodb-2   2/2     Running   0          48s   secondary
```
Once the cluster status becomes Running, your MongoDB cluster is ready for use.
If you are creating the cluster for the very first time, it may take some time to pull images before running.
Expected Workflow:

1. The Pod specified in onlineInstancesToOffline is removed.
2. The cluster status changes from Updating to Running.
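You can watch the cluster move through these states while the operation runs:

```bash
kubectl get cluster mongo-cluster -n demo -w
```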
To decommission a specific Pod (e.g., 'mongo-cluster-mongodb-1'), you can use one of the following methods:
Option 1: Using OpsRequest
Create an OpsRequest to mark the Pod as offline:
```yaml
apiVersion: operations.kubeblocks.io/v1alpha1
kind: OpsRequest
metadata:
  name: mongo-cluster-decommission-ops
  namespace: demo
spec:
  clusterName: mongo-cluster
  type: HorizontalScaling
  horizontalScaling:
    - componentName: mongodb
      scaleIn:
        onlineInstancesToOffline:
          - 'mongo-cluster-mongodb-1' # Specifies the instance names that need to be taken offline
```
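Save and apply the OpsRequest (the filename decommission-ops.yaml is arbitrary):

```bash
kubectl apply -f decommission-ops.yaml
```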
Check the progress of the decommissioning operation:
```bash
kubectl get ops mongo-cluster-decommission-ops -n demo -w
```
Example Output:
```
NAME                             TYPE                CLUSTER         STATUS    PROGRESS   AGE
mongo-cluster-decommission-ops   HorizontalScaling   mongo-cluster   Succeed   1/1        5s
```
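Once the OpsRequest reports Succeed, confirm that the target Pod has been removed:

```bash
kubectl get pods -n demo -l app.kubernetes.io/instance=mongo-cluster
```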
Option 2: Using Cluster API
Alternatively, update the Cluster resource directly to decommission the Pod:
```yaml
apiVersion: apps.kubeblocks.io/v1
kind: Cluster
spec:
  componentSpecs:
    - name: mongodb
      replicas: 1 # expected replicas after decommission
      offlineInstances:
        - mongo-cluster-mongodb-1 # <----- Specify Pod to be decommissioned
  ...
```
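One way to make this change is to edit the Cluster object in place; alternatively, re-apply a full manifest containing the fields above:

```bash
# Opens the Cluster for editing; set replicas to 1 and add offlineInstances
# under the mongodb component spec, as shown in the snippet above.
kubectl edit cluster mongo-cluster -n demo
```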
After applying the updated configuration, verify the remaining Pods in the cluster:
```bash
kubectl get pods -n demo -l app.kubernetes.io/instance=mongo-cluster
```
Example Output:
```
NAME                      READY   STATUS    RESTARTS   AGE
mongo-cluster-mongodb-0   2/2     Running   0          25m
mongo-cluster-mongodb-2   2/2     Running   0          24m
```
Log in to any remaining MongoDB replica and check the replica set status:

```
# log in to any remaining mongodb replica, then run:
mongo-cluster-mongodb [direct: secondary] admin> rs.status()
```

Verify that the decommissioned Pod no longer appears in the members list.
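If you need a shell on a replica first, something like the following usually works (a sketch: it assumes mongosh is available in the container image and that you have credentials for the admin database if authentication is enforced):

```bash
# Open a mongosh session on a surviving replica; pass credentials if required.
kubectl exec -it mongo-cluster-mongodb-0 -n demo -- mongosh admin
```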
Key takeaways:

- Unlike plain StatefulSets, which always scale down from the highest ordinal, KubeBlocks can take a specific Pod offline.
- A Pod can be decommissioned either with a HorizontalScaling OpsRequest (onlineInstancesToOffline) or by setting offlineInstances directly on the Cluster resource.
- Decommissioning provides granular cluster management while maintaining availability.