Restore a PostgreSQL Cluster from Backup with Point-In-Time Recovery (PITR) on KubeBlocks
This guide demonstrates how to perform Point-In-Time Recovery (PITR) for PostgreSQL clusters in KubeBlocks using:
- A full base backup
- Continuous WAL (Write-Ahead Log) backups
- Two restoration methods:
- Cluster Annotation (declarative approach)
- OpsRequest API (operational control)
PITR enables recovery to any moment within the specified timeRange of the continuous backup.
Prerequisites
Before proceeding, ensure the following:
- Environment Setup:
- A Kubernetes cluster is up and running.
- The kubectl CLI tool is configured to communicate with your cluster.
- KubeBlocks CLI and KubeBlocks Operator are installed. Follow the installation instructions here.
- Namespace Preparation: To keep resources isolated, create a dedicated namespace for this tutorial:
kubectl create ns demo
namespace/demo created
Prepare for PITR Restoration
To perform a PITR restoration, both a full backup and a continuous backup are required. Refer to the backup documentation to configure them if they are not already set up (a sketch of triggering a full backup on demand follows the checklist below). Before restoring, confirm the following:
- Completed full backup
- Active continuous WAL backup
- Backup repository accessible
- Sufficient resources for new cluster
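If you still need a full backup, the minimal sketch below triggers one on demand. The backup method pg-basebackup and the policy name pg-cluster-postgresql-backup-policy are assumptions, as is the backup name; list the actual values with kubectl get backuppolicy -n demo and adjust accordingly. Continuous WAL backups are enabled through the cluster's backup schedule as described in the backup documentation.
# Sketch: trigger an on-demand full backup (backupMethod and backupPolicyName are assumed values)
cat <<'EOF' | kubectl apply -f -
apiVersion: dataprotection.kubeblocks.io/v1alpha1
kind: Backup
metadata:
  name: pg-cluster-full-backup
  namespace: demo
spec:
  backupMethod: pg-basebackup
  backupPolicyName: pg-cluster-postgresql-backup-policy
EOF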
To identify the available full and continuous backups, follow these steps:
1. Verify Continuous Backup
Confirm you have a continuous WAL backup, either running or completed:
# expect EXACTLY ONE continuous backup per cluster
kubectl get backup -n demo -l dataprotection.kubeblocks.io/backup-type=Continuous,app.kubernetes.io/instance=pg-cluster
2. Check Backup Time Range
Get the valid recovery window:
kubectl get backup <continuous-backup-name> -n demo -o yaml | yq '.status.timeRange'
Expected Output:
start: "2025-05-07T09:12:47Z"
end: "2025-05-07T09:22:50Z"
3. Identify Full Backup
Find available full backups that meet:
- Status: Completed
- Completion time after continuous backup start time
# expect one or more Full backups
kubectl get backup -n demo -l dataprotection.kubeblocks.io/backup-type=Full,app.kubernetes.io/instance=pg-cluster
KubeBlocks automatically selects the most recent qualifying full backup as the base.
Make sure at least one full backup meets this condition: its stopTime/completionTimestamp must be AFTER the continuous backup's startTime; otherwise, PITR restoration will fail.
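To check this quickly, you can list the full backups together with their completion timestamps and compare them against the startTime shown earlier; the column expressions below use standard kubectl custom-columns output.
# Compare each full backup's completion time against the continuous backup's startTime
kubectl get backup -n demo \
  -l dataprotection.kubeblocks.io/backup-type=Full,app.kubernetes.io/instance=pg-cluster \
  -o custom-columns=NAME:.metadata.name,PHASE:.status.phase,COMPLETED:.status.completionTimestamp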
Option 1: Cluster Annotation Restoration
Step 1: Retrieve System Credentials
Get the encrypted system account credentials from the continuous backup:
kubectl get backup <continuous-backup-name> -n demo -o json | \
jq -r '.metadata.annotations | ."kubeblocks.io/encrypted-system-accounts" | fromjson .postgresql | tojson | gsub("\""; "\\\"")'
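If you plan to script the next step, the same command can be captured in a shell variable and echoed when filling in the placeholder:
# Store the escaped credentials for use in the <ENCRYPTED-SYSTEM-ACCOUNTS> placeholder below
ENCRYPTED_ACCOUNTS=$(kubectl get backup <continuous-backup-name> -n demo -o json | \
  jq -r '.metadata.annotations | ."kubeblocks.io/encrypted-system-accounts" | fromjson .postgresql | tojson | gsub("\""; "\\\"")')
echo "$ENCRYPTED_ACCOUNTS"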
Step 2: Create Restored Cluster
Configure the PITR parameters through the cluster annotation.
Key parameters:
- encryptedSystemAccounts: from the previous step
- name: the continuous backup name
- restoreTime: the target recovery time (within the backup timeRange)
Apply this YAML configuration:
apiVersion: apps.kubeblocks.io/v1
kind: Cluster
metadata:
  name: pg-restore-pitr
  namespace: demo
  annotations:
    # NOTE: replace <ENCRYPTED-SYSTEM-ACCOUNTS> with the accounts info from your backup
    # NOTE: replace <CONTINUOUS_BACKUP_NAME> with the continuous backup name
    # NOTE: replace <RESTORE_POINT_TIME> with a valid time within the backup timeRange
    kubeblocks.io/restore-from-backup: '{"postgresql":{"encryptedSystemAccounts":"<ENCRYPTED-SYSTEM-ACCOUNTS>","name":"<CONTINUOUS_BACKUP_NAME>","namespace":"demo","restoreTime":"<RESTORE_POINT_TIME>","volumeRestorePolicy":"Parallel"}}'
spec:
  terminationPolicy: Delete
  clusterDef: postgresql
  topology: replication
  componentSpecs:
    - name: postgresql
      serviceVersion: "14.7.2"
      disableExporter: true
      labels:
        # NOTE: update the label accordingly
        apps.kubeblocks.postgres.patroni/scope: pg-restore-pitr-postgresql
      replicas: 1
      resources:
        limits:
          cpu: "0.5"
          memory: "0.5Gi"
        requests:
          cpu: "0.5"
          memory: "0.5Gi"
      volumeClaimTemplates:
        - name: data
          spec:
            storageClassName: ""
            accessModes:
              - ReadWriteOnce
            resources:
              requests:
                storage: 20Gi
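Save the manifest to a file (the filename below is arbitrary) and apply it:
kubectl apply -f pg-restore-pitr.yaml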
Step 3: Monitor Restoration
Track restore progress with:
# Watch restore status
kubectl get restore -n demo -w
# Watch cluster status
kubectl get cluster -n demo -w
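Once the restore completes, you can optionally block until the new cluster reports the Running phase; the timeout below is arbitrary.
# Wait for the restored cluster to reach Running (requires kubectl >= 1.23 for jsonpath waits)
kubectl wait cluster/pg-restore-pitr -n demo \
  --for=jsonpath='{.status.phase}'=Running --timeout=600s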
Option 2: OpsRequest API Restoration
For operational control and monitoring, use the OpsRequest API:
apiVersion: operations.kubeblocks.io/v1alpha1
kind: OpsRequest
metadata:
  name: pg-cluster-restore
  namespace: demo
spec:
  clusterName: pg-cluster-restore
  force: false
  restore:
    backupName: <CONTINUOUS_BACKUP_NAME>
    backupNamespace: demo
    restorePointInTime: <RESTORE_POINT_TIME>
  type: Restore
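Save the OpsRequest to a file (the filename below is arbitrary) and apply it:
kubectl apply -f pg-cluster-restore-ops.yaml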
Monitor Restoration
Track progress with:
# Watch restore operation
kubectl get restore -n demo -w
# Verify cluster status
kubectl get cluster -n demo -w
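You can also watch the OpsRequest itself; ops is the short name KubeBlocks registers for OpsRequest resources.
# Watch the restore OpsRequest until it reports Succeed
kubectl get ops pg-cluster-restore -n demo -w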
Cleanup
To remove all created resources, delete the PostgreSQL clusters along with the demo namespace:
kubectl delete cluster pg-cluster -n demo
kubectl delete cluster pg-cluster-restore -n demo
kubectl delete ns demo
Summary
This guide demonstrated how to restore a PostgreSQL cluster in KubeBlocks using a full backup and continuous backup for Point-In-Time Recovery (PITR). Key steps included:
- Verifying available backups.
- Extracting encrypted system account credentials.
- Creating a new PostgreSQL cluster with restoration configuration.
- Monitoring the restoration process.
With this approach, you can restore a PostgreSQL cluster to a specific point in time, ensuring minimal data loss and operational continuity.