Decommission a Specific Pod in KubeBlocks-Managed PostgreSQL Clusters

This guide explains how to decommission (take offline) specific Pods in PostgreSQL clusters managed by KubeBlocks. Decommissioning provides precise control over cluster resources while maintaining availability. Use this for workload rebalancing, node maintenance, or addressing failures.

Why Decommission Pods with KubeBlocks?

In traditional StatefulSet-based deployments, Kubernetes lacks the ability to decommission specific Pods. StatefulSets ensure the order and identity of Pods, and scaling down always removes the Pod with the highest ordinal number (e.g., scaling down from 3 replicas removes Pod-2 first). This limitation prevents precise control over which Pod to take offline, which can complicate maintenance, workload distribution, or failure handling.
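
To make the limitation concrete, here is a minimal sketch of what a plain StatefulSet scale-down does (the StatefulSet name web and label app=web are hypothetical, used only for illustration):

# Scale a 3-replica StatefulSet down to 2.
kubectl scale statefulset web --replicas=2

# Kubernetes always terminates the highest-ordinal Pod (web-2 here),
# regardless of which Pod you actually wanted to take offline.
kubectl get pods -l app=web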

KubeBlocks overcomes this limitation by enabling administrators to decommission specific Pods directly. This fine-grained control ensures high availability and allows better resource management without disrupting the entire cluster.

Prerequisites

Before proceeding, ensure the following:

• Environment Setup:
  • A Kubernetes cluster is up and running.
  • The kubectl CLI tool is configured to communicate with your cluster.
  • KubeBlocks CLI and KubeBlocks Operator are installed. Follow the installation instructions here.
• Namespace Preparation: To keep resources isolated, create a dedicated namespace for this tutorial:

  kubectl create ns demo

  Expected output:

  namespace/demo created
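
As an optional sanity check before continuing, confirm the namespace exists and the operator is healthy (kb-system is the default install namespace for the KubeBlocks Operator; adjust if you installed it elsewhere):

kubectl get ns demo
kubectl get pods -n kb-system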

Deploy a PostgreSQL Cluster

KubeBlocks uses a declarative approach for managing PostgreSQL clusters. Below is an example configuration for deploying a PostgreSQL cluster with 2 replicas (1 primary, 1 replica).

Apply the following YAML configuration to deploy the cluster:

apiVersion: apps.kubeblocks.io/v1
kind: Cluster
metadata:
  name: pg-cluster
  namespace: demo
spec:
  terminationPolicy: Delete
  clusterDef: postgresql
  topology: replication
  componentSpecs:
    - name: postgresql
      serviceVersion: 16.4.0
      disableExporter: true
      replicas: 2
      resources:
        limits:
          cpu: "0.5"
          memory: "0.5Gi"
        requests:
          cpu: "0.5"
          memory: "0.5Gi"
      volumeClaimTemplates:
        - name: data
          spec:
            accessModes:
              - ReadWriteOnce
            resources:
              requests:
                storage: 20Gi
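
Assuming the manifest above is saved locally (the filename pg-cluster.yaml is just an example), apply it with:

kubectl apply -f pg-cluster.yaml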

Verifying the Deployment

Monitor the cluster status until it transitions to the Running state:

kubectl get cluster pg-cluster -n demo -w

Expected Output:

NAME         CLUSTER-DEFINITION   TERMINATION-POLICY   STATUS     AGE
pg-cluster   postgresql           Delete               Creating   50s
pg-cluster   postgresql           Delete               Running    4m2s

Once the cluster status becomes Running, your PostgreSQL cluster is ready for use.

TIP

If you are creating the cluster for the first time, it may take some time to pull images before the Pods start running.
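
You can also inspect the individual Pods and their replication roles; this assumes the default kubeblocks.io/role label that KubeBlocks sets on managed Pods (with values such as primary and secondary):

kubectl get pods -n demo -l app.kubernetes.io/instance=pg-cluster -L kubeblocks.io/role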

Decommission a Pod

Expected Workflow:

1. Replica specified in onlineInstancesToOffline is removed
2. Pod terminates gracefully
3. Cluster transitions from Updating to Running

To decommission a specific Pod (e.g., 'pg-cluster-postgresql-1'), you can use one of the following methods:

Option 1: Using OpsRequest

Create an OpsRequest to mark the Pod as offline:

apiVersion: operations.kubeblocks.io/v1alpha1
kind: OpsRequest
metadata:
  name: pg-cluster-decommission-ops
  namespace: demo
spec:
  clusterName: pg-cluster
  type: HorizontalScaling
  horizontalScaling:
    - componentName: postgresql
      scaleIn:
        onlineInstancesToOffline:
          - 'pg-cluster-postgresql-1' # Specifies the instance names that need to be taken offline
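
As with the cluster manifest, save the OpsRequest to a file (the name decommission-ops.yaml is assumed here) and apply it:

kubectl apply -f decommission-ops.yaml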

Monitor the Decommissioning Process

Check the progress of the decommissioning operation:

kubectl get ops pg-cluster-decommission-ops -n demo -w

Example Output:

NAME                          TYPE                CLUSTER      STATUS    PROGRESS   AGE
pg-cluster-decommission-ops   HorizontalScaling   pg-cluster   Succeed   1/1        33s

Option 2: Using Cluster API

Alternatively, update the Cluster resource directly to decommission the Pod:

apiVersion: apps.kubeblocks.io/v1
kind: Cluster
metadata:
  name: pg-cluster
  namespace: demo
spec:
  terminationPolicy: Delete
  clusterDef: postgresql
  topology: replication
  componentSpecs:
    - name: postgresql
      serviceVersion: 16.4.0
      disableExporter: true
      replicas: 1
      offlineInstances:
        - pg-cluster-postgresql-1 # <----- Specify Pod to be decommissioned
      resources:
        limits:
          cpu: '0.5'
          memory: 0.5Gi
        requests:
          cpu: '0.5'
          memory: 0.5Gi
      volumeClaimTemplates:
        - name: data
          spec:
            storageClassName: ""
            accessModes:
              - ReadWriteOnce
            resources:
              requests:
                storage: 20Gi
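
If you prefer not to re-apply the full manifest, the same change can be sketched as a JSON patch against the live Cluster object (this assumes postgresql is the first entry in componentSpecs):

kubectl patch cluster pg-cluster -n demo --type='json' -p='[
  {"op": "replace", "path": "/spec/componentSpecs/0/replicas", "value": 1},
  {"op": "add", "path": "/spec/componentSpecs/0/offlineInstances", "value": ["pg-cluster-postgresql-1"]}
]'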

Verify the Decommissioning

After applying the updated configuration, verify the remaining Pods in the cluster:

kubectl get pods -n demo -l app.kubernetes.io/instance=pg-cluster

Example Output:

NAME                      READY   STATUS    RESTARTS   AGE
pg-cluster-postgresql-0   4/4     Running   0          6m12s
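
Finally, you may want to confirm that the cluster itself has settled back into the Running state after the scale-in:

kubectl get cluster pg-cluster -n demo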

Summary

Key takeaways:

• Traditional StatefulSets lack precise Pod removal control
• KubeBlocks enables targeted Pod decommissioning
• Two implementation methods: OpsRequest or Cluster API

This provides granular cluster management while maintaining availability.
