  1. Prerequisites
  2. Prepare for PITR Restoration
    1. Verify Continuous Backup
    2. Check Backup Time Range
    3. Identify Full Backup
  3. Cluster Annotation Restoration
    1. Step 1: Create Restored Cluster
    2. Step 2: Monitor Restoration
  4. Cleanup
  5. Summary

Restore a PostgreSQL Cluster from Backup with Point-in-Time Recovery (PITR) on KubeBlocks

This guide demonstrates how to perform Point-In-Time Recovery (PITR) for PostgreSQL clusters in KubeBlocks using:

  1. A full base backup
  2. Continuous WAL (Write-Ahead Log) backups
  3. A declarative restoration method: the cluster annotation (restoring through kbcli or the OpsRequest API is not currently supported for PostgreSQL)

PITR enables recovery to any moment within the timeRange specified.

Prerequisites

    Before proceeding, ensure the following:

    • Environment Setup:
      • A Kubernetes cluster is up and running.
      • The kubectl CLI tool is configured to communicate with your cluster.
      • KubeBlocks CLI and KubeBlocks Operator are installed. If not, follow the KubeBlocks installation documentation.
    • Namespace Preparation: To keep resources isolated, create a dedicated namespace for this tutorial:

    kubectl create ns demo

    Expected Output:

    namespace/demo created
    

    Prepare for PITR Restoration

    To perform a PITR restoration, both a full backup and continuous backup are required. Refer to the documentation to configure these backups if they are not already set up.

    • Completed full backup
    • Active continuous WAL backup
    • Backup repository accessible
    • Sufficient resources for new cluster

    To identify the available full and continuous backups, follow these steps:

    1. Verify Continuous Backup

    Confirm you have a continuous WAL backup, either running or completed:

    # expect EXACTLY ONE continuous backup per cluster
    kubectl get backup -n demo -l dataprotection.kubeblocks.io/backup-type=Continuous,app.kubernetes.io/instance=pg-cluster
    
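    A continuous backup typically remains in the Running phase while WAL archiving is active. To confirm the backup is healthy, you can also check its phase directly (a quick check assuming the standard dataprotection.kubeblocks.io Backup status fields; replace <continuous-backup-name> with the name returned above):

    kubectl get backup <continuous-backup-name> -n demo -o jsonpath='{.status.phase}'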

    2. Check Backup Time Range

    Get the valid recovery window:

    kubectl get backup <continuous-backup-name> -n demo -o yaml | yq '.status.timeRange'
    

    Expected Output:

    start: "2025-05-07T09:12:47Z"
    end: "2025-05-07T09:22:50Z"
    
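    Any restoreTime you pick must fall inside this window. The sketch below, which assumes GNU date and yq v4 (the backup name and target time are illustrative placeholders), sanity-checks a candidate restore point before you use it:

    # check that a candidate restore point lies inside the recovery window
    START=$(kubectl get backup <continuous-backup-name> -n demo -o yaml | yq '.status.timeRange.start')
    END=$(kubectl get backup <continuous-backup-name> -n demo -o yaml | yq '.status.timeRange.end')
    TARGET="2025-05-07T09:20:00Z"   # candidate restore point (illustrative)
    if [ "$(date -d "$START" +%s)" -le "$(date -d "$TARGET" +%s)" ] && \
       [ "$(date -d "$TARGET" +%s)" -le "$(date -d "$END" +%s)" ]; then
      echo "OK: $TARGET is within the recovery window"
    else
      echo "ERROR: $TARGET is outside [$START, $END]"
    fi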

    3. Identify Full Backup

    Find available full backups that meet the following criteria:

    • Status: Completed
    • Completion time after the continuous backup's start time

    # expect one or more Full backups
    kubectl get backup -n demo -l dataprotection.kubeblocks.io/backup-type=Full,app.kubernetes.io/instance=pg-cluster
    
    TIP

    KubeBlocks automatically selects the most recent qualifying full backup as the base. Make sure at least one full backup meets this condition: its stopTime/completionTimestamp must be AFTER the continuous backup's startTime; otherwise, PITR restoration will fail.
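
    To compare these timestamps at a glance, you can print the relevant status fields side by side (the field paths below assume the dataprotection.kubeblocks.io Backup API):

    kubectl get backup -n demo \
      -l dataprotection.kubeblocks.io/backup-type=Full,app.kubernetes.io/instance=pg-cluster \
      -o custom-columns=NAME:.metadata.name,PHASE:.status.phase,COMPLETED:.status.completionTimestamp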

    Cluster Annotation Restoration

    Step 1: Create Restored Cluster

    Configure the PITR parameters in the cluster's kubeblocks.io/restore-from-backup annotation:

    Key parameters:

    • name: Continuous backup name
    • restoreTime: Target recovery time (within backup timeRange)

    Apply this YAML configuration:

    apiVersion: apps.kubeblocks.io/v1
    kind: Cluster
    metadata:
      name: pg-restore-pitr
      namespace: demo
      annotations:
        # NOTE: replace <CONTINUOUS_BACKUP_NAME> with the continuous backup name
        # NOTE: replace <RESTORE_POINT_TIME> with a valid time within the backup timeRange.
        kubeblocks.io/restore-from-backup: '{"postgresql":{"name":"<CONTINUOUS_BACKUP_NAME>","namespace":"demo","restoreTime":"<RESTORE_POINT_TIME>","volumeRestorePolicy":"Parallel"}}'
    spec:
      terminationPolicy: Delete
      clusterDef: postgresql
      topology: replication
      componentSpecs:
        - name: postgresql
          serviceVersion: "14.7.2"
          disableExporter: true
          labels:
            # NOTE: update the label accordingly
            apps.kubeblocks.postgres.patroni/scope: pg-restore-pitr-postgresql
          replicas: 1
          resources:
            limits:
              cpu: "0.5"
              memory: "0.5Gi"
            requests:
              cpu: "0.5"
              memory: "0.5Gi"
          volumeClaimTemplates:
            - name: data
              spec:
                storageClassName: ""
                accessModes:
                  - ReadWriteOnce
                resources:
                  requests:
                    storage: 20Gi
    
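    Save the manifest (for example as pg-restore-pitr.yaml, a filename chosen here for illustration) and apply it:

    kubectl apply -f pg-restore-pitr.yaml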

    Step 2: Monitor Restoration

    Track restore progress with:

    # Watch restore status
    kubectl get restore -n demo -w
    
    # Watch cluster status
    kubectl get cluster -n demo -w
    
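    Alternatively, to block until the restored cluster reports Running (the jsonpath form of kubectl wait requires kubectl v1.23+):

    kubectl wait cluster/pg-restore-pitr -n demo \
      --for=jsonpath='{.status.phase}'=Running --timeout=600s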
    NOTE

    Restoring a PostgreSQL cluster through kbcli or an OpsRequest is not supported for now.

    You can restore a PostgreSQL cluster through kubectl by following the steps above.
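
    Once the cluster is Running, connect and spot-check that the data matches your chosen restore point. The secret and pod names below follow the usual KubeBlocks naming conventions (<cluster>-postgresql-account-postgres and <cluster>-postgresql-0); verify the actual names in your environment first:

    # fetch the postgres password (secret name assumed from the naming convention)
    PASSWD=$(kubectl get secret pg-restore-pitr-postgresql-account-postgres -n demo \
      -o jsonpath='{.data.password}' | base64 -d)
    # run a query inside the primary pod to inspect the restored data
    kubectl exec -it pg-restore-pitr-postgresql-0 -n demo -- \
      env PGPASSWORD="$PASSWD" psql -U postgres -c '\l'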

    Cleanup

    To remove all created resources, delete the PostgreSQL cluster along with its namespace:

    kubectl delete cluster pg-cluster -n demo
    kubectl delete cluster pg-restore-pitr -n demo
    kubectl delete ns demo
    

    Summary

    This guide demonstrated how to restore a PostgreSQL cluster in KubeBlocks using a full backup and continuous backup for Point-In-Time Recovery (PITR). Key steps included:

    • Verifying available full and continuous backups.
    • Checking the continuous backup's recovery time range.
    • Creating a new PostgreSQL cluster with the PITR restoration annotation.
    • Monitoring the restoration process.

    With this approach, you can restore a PostgreSQL cluster to a specific point in time, ensuring minimal data loss and operational continuity.
