
Restore a PostgreSQL Cluster from Backup with Point-In-Time Recovery (PITR) on KubeBlocks

This guide demonstrates how to perform Point-In-Time Recovery (PITR) for PostgreSQL clusters in KubeBlocks using:

  1. A full base backup
  2. Continuous WAL (Write-Ahead Log) backups
  3. Two restoration methods:
    • Cluster Annotation (declarative approach)
    • OpsRequest API (operational control)

PITR enables recovery to any moment within the continuous backup's timeRange.

Prerequisites

    Before proceeding, ensure the following:

    • Environment Setup:
      • A Kubernetes cluster is up and running.
      • The kubectl CLI tool is configured to communicate with your cluster.
      • KubeBlocks CLI and KubeBlocks Operator are installed. Follow the installation instructions here.
    • Namespace Preparation: To keep resources isolated, create a dedicated namespace for this tutorial:
    kubectl create ns demo
    namespace/demo created

    Prepare for PITR Restoration

    To perform a PITR restoration, both a full backup and a continuous backup are required. Refer to the documentation to configure these backups if they are not already set up. In summary, you need:

    • Completed full backup
    • Active continuous WAL backup
    • Backup repository accessible
    • Sufficient resources for new cluster

    To identify the available full and continuous backups, follow these steps:

    1. Verify Continuous Backup

    Confirm you have a continuous WAL backup, either running or completed:

    # expect EXACTLY ONE continuous backup per cluster
    kubectl get backup -n demo -l dataprotection.kubeblocks.io/backup-type=Continuous,app.kubernetes.io/instance=pg-cluster
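
    To check that this backup is healthy, you can also print its phase explicitly. The following is a minimal sketch, assuming the Backup resource exposes its state under .status.phase:

    # Print the name and phase of the continuous backup (expected: Running or Completed)
    kubectl get backup -n demo \
      -l dataprotection.kubeblocks.io/backup-type=Continuous,app.kubernetes.io/instance=pg-cluster \
      -o custom-columns=NAME:.metadata.name,PHASE:.status.phase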

    2. Check Backup Time Range

    Get the valid recovery window:

    kubectl get backup <continuous-backup-name> -n demo -o yaml | yq '.status.timeRange'

    Expected Output:

    start: "2025-05-07T09:12:47Z" end: "2025-05-07T09:22:50Z"

    3. Identify Full Backup

    Find available full backups that meet:

    • Status: Completed
    • Completion time AFTER continuous backup start time
    # expect one or more Full backups
    kubectl get backup -n demo -l dataprotection.kubeblocks.io/backup-type=Full,app.kubernetes.io/instance=pg-cluster
    TIP

    KubeBlocks automatically selects the most recent qualifying full backup as the base.

    Make sure at least one full backup meets this condition: its stopTime/completionTimestamp must be AFTER the continuous backup's startTime; otherwise, PITR restoration will fail.
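
    To compare times at a glance, you can list the full backups together with their completion timestamps. This is a sketch that assumes the Backup status exposes phase and completionTimestamp fields:

    # Show each Full backup with its phase and completion time
    kubectl get backup -n demo \
      -l dataprotection.kubeblocks.io/backup-type=Full,app.kubernetes.io/instance=pg-cluster \
      -o custom-columns=NAME:.metadata.name,PHASE:.status.phase,COMPLETED:.status.completionTimestamp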

    Restore a Cluster from Continuous Backup

    Option 1: Restore a Cluster via Cluster Annotation

    Configure the PITR parameters in the cluster annotations:

    Key parameters:

    • name: Continuous backup name
    • restoreTime: Target recovery time (within backup timeRange)

    Apply this YAML configuration:

    apiVersion: apps.kubeblocks.io/v1
    kind: Cluster
    metadata:
      name: pg-restore-pitr
      namespace: demo
      annotations:
        # NOTE:
        # 1. replace <CONTINUOUS_BACKUP_NAME> with the continuous backup name
        # 2. replace <RESTORE_POINT_TIME> with a valid time within the backup timeRange.
        # 3. replace <BACKUP_NAMESPACE> with the namespace of the backup
        kubeblocks.io/restore-from-backup: '{"postgresql":{"name":"<CONTINUOUS_BACKUP_NAME>","namespace":"<BACKUP_NAMESPACE>","restoreTime":"<RESTORE_POINT_TIME>","volumeRestorePolicy":"Parallel"}}'
    spec:
      terminationPolicy: Delete
      clusterDef: postgresql
      topology: replication
      componentSpecs:
        - name: postgresql
          serviceVersion: "16.4.0"
          disableExporter: true
          replicas: 2
          resources:
            limits:
              cpu: "0.5"
              memory: "0.5Gi"
            requests:
              cpu: "0.5"
              memory: "0.5Gi"
          volumeClaimTemplates:
            - name: data
              spec:
                storageClassName: ""
                accessModes:
                  - ReadWriteOnce
                resources:
                  requests:
                    storage: 20Gi
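
    Save the manifest to a file and apply it; the filename below is only an example:

    kubectl apply -f pg-restore-pitr.yaml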

    The JSON string in the annotation has the following structure:

    { "postgresql": { "name": "<CONTINUOUS_BACKUP_NAME>", "namespace": "<BACKUP_NAMESPACE>", "restoreTime": "<RESTORE_POINT_TIME>", "volumeRestorePolicy": "Parallel" } }
    • postgresql: the component name in the cluster (check cluster.spec.componentSpecs[].name)
    • name: the continuous backup name
    • namespace: the namespace of the backup
    • restoreTime: the restore time, must be within the continuous backup timeRange
    • volumeRestorePolicy: the volume restore policy, Parallel or Serial
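
    For illustration only, a fully filled-in annotation might look like the line below; the backup name and timestamp are hypothetical placeholders, so substitute the values from your own environment:

    kubeblocks.io/restore-from-backup: '{"postgresql":{"name":"pg-cluster-postgresql-archive-wal","namespace":"demo","restoreTime":"2025-05-07T09:20:00Z","volumeRestorePolicy":"Parallel"}}'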

    Option 2: Restore a Cluster via Restore OpsRequest

    Create a Restore OpsRequest:

    apiVersion: operations.kubeblocks.io/v1alpha1
    kind: OpsRequest
    metadata:
      name: pg-restore-pitr-ops
      namespace: demo
    spec:
      clusterName: pg-restore-pitr  # restored cluster name
      restore:
        backupName: <CONTINUOUS_BACKUP_NAME>      # replace it with your continuous backup name
        backupNamespace: <BACKUP_NAMESPACE>       # replace it with the namespace of the backup
        restorePointInTime: <RESTORE_POINT_TIME>  # replace it with a valid time within the backup timeRange, e.g. 2025-09-03T12:34:56Z
      type: Restore
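
    Apply the OpsRequest and watch its progress until it completes; the filename is only an example:

    kubectl apply -f pg-restore-pitr-ops.yaml
    kubectl get opsrequest pg-restore-pitr-ops -n demo -w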

    Monitor Restoration

    1. Check component events:

      # describe component postgresql
      kubectl describe cmp pg-restore-pitr-postgresql -n demo

      It will show events similar to the following. When all restore tasks are completed, the component enters the Running state.

      Events:
        Type     Reason                    Age                    From                  Message
        ----     ------                    ----                   ----                  -------
        Warning  Warning                   6m51s                  component-controller  config/script template has no template specified: postgresql-configuration
        Normal   NeedWaiting               6m44s (x7 over 6m51s)  component-controller  waiting for restore "pg-restore-pitr-postgresql-a6b02251-preparedata" successfully
        Normal   Unknown                   6m31s                  component-controller  the component phase is unknown
        Normal   ComponentPhaseTransition  6m30s (x4 over 6m31s)  component-controller  component is Creating
        Normal   Unavailable               6m30s (x4 over 6m31s)  component-controller  the component phase is Creating
        Normal   ComponentPhaseTransition  6m                     component-controller  component is Running
        Normal   Available                 6m                     component-controller  the component phase is Running
        Normal   NeedWaiting               6m (x3 over 6m)        component-controller  waiting for restore "pg-restore-pitr-postgresql-a6b02251-postready" successfully
    2. Check restore status:

      # Watch restore status
      kubectl get restore -n demo

      Two Restore resources are created: one for data preparation and one for the post-ready tasks.

      NAME                                              BACKUP                                       RESTORE-TIME           STATUS      DURATION   CREATION-TIME          COMPLETION-TIME
      pg-restore-pitr-postgresql-a6b02251-postready     b387c27b-pg-cluster-postgresql-archive-wal   2025-05-16T08:03:50Z   Completed   4s         2025-05-16T08:07:57Z   2025-05-16T08:08:00Z
      pg-restore-pitr-postgresql-a6b02251-preparedata   b387c27b-pg-cluster-postgresql-archive-wal   2025-05-16T08:03:50Z   Completed   21s        2025-05-16T08:07:06Z   2025-05-16T08:07:26Z
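
      Once both Restore resources report Completed, you can confirm that the restored cluster itself has reached the Running phase:

      kubectl get cluster pg-restore-pitr -n demo -w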

    Troubleshooting

    If you encounter backup issues, such as a Backup whose status is Failed or stuck in Running for a long time, follow these steps to diagnose and resolve the problem:

    1. Inspect the Backup resource for any error events or status updates:
    kubectl describe backup <BACKUP_NAME> -n demo
    2. Verify the backup job status and examine its logs: KubeBlocks runs a Job to create a full backup. If the backup task gets stuck, you can track the Job progress:
    kubectl -n demo get job -l app.kubernetes.io/instance=pg-cluster,app.kubernetes.io/managed-by=kubeblocks-dataprotection

    And check pod logs:

    kubectl -n demo logs <POD_NAME>

    This job will be deleted when the backup completes.
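
    If you are unsure which pod belongs to the backup job, you can list candidate pods first; this sketch assumes the job's labels are propagated to its pods, which is the usual behavior:

    # List pods created by the data-protection jobs, then pick the one to inspect
    kubectl -n demo get pod -l app.kubernetes.io/managed-by=kubeblocks-dataprotection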

    3. Review KubeBlocks controller logs for detailed error information:
    kubectl -n kb-system logs deploy/kubeblocks -f
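
    For restore-specific problems, the same approach applies to the Restore resources created during PITR; describe them to see error events (their names come from kubectl get restore -n demo):

    kubectl describe restore <RESTORE_NAME> -n demo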

    Cleanup

    To remove all resources created in this tutorial, delete the PostgreSQL clusters along with the namespace:

    kubectl delete cluster pg-cluster -n demo
    kubectl delete cluster pg-restore-pitr -n demo
    kubectl delete ns demo

    Summary

    This guide demonstrated how to restore a PostgreSQL cluster in KubeBlocks using a full backup and continuous backup for Point-In-Time Recovery (PITR). Key steps included:

    • Verifying available backups.
    • Creating a new PostgreSQL cluster with restoration configuration.
    • Monitoring the restoration process.

    With this approach, you can restore a PostgreSQL cluster to a specific point in time, ensuring minimal data loss and operational continuity.
