Using Extensions


  1. Connect to your cluster
  2. pgvector — AI / Vector search
  3. PostGIS — Geospatial
  4. TimescaleDB — Time-series
  5. pg_duckdb — Analytics
  6. roaringbitmap — Analytics (PG18 only)
  7. pg_trgm — Fuzzy text search
  8. pg_stat_statements — Query performance monitoring
  9. pgaudit — Audit logging
  10. pg_cron — Scheduled jobs
  11. pg_partman — Partition management
  12. pglogical — Logical replication

Using Pre-installed Extensions

All KubeBlocks PostgreSQL images ship with a curated set of extensions already compiled and installed. Most extensions only need a one-time CREATE EXTENSION statement per database to become active.

The addon’s postgresql.conf templates always include pg_stat_statements, pg_cron, timescaledb, and pgaudit in shared_preload_libraries. pg_duckdb is added there only on supported minor versions (e.g. PostgreSQL 14.18+, 15.13+, 16.9+); it is not preloaded on PG12 or on older 14/15/16 minors such as 14.7.2 / 16.4.0 (see the Overview table and the semverCompare branches in addons/postgresql/config/pg*-config.tpl). Preloading is separate from CREATE EXTENSION: you normally still run CREATE EXTENSION where noted so that each database gets the extension’s catalog objects.

For the full list of available extensions and their versions across PostgreSQL 12–18, see the Overview.

Connect to your cluster

Before enabling any extension, connect to your cluster. The examples below use kubectl exec:

```bash
kubectl exec -it <pod-name> -n <namespace> -- \
  env PGUSER=kbadmin PGPASSWORD=<password> PGDATABASE=postgres \
  psql
```

Replace <pod-name> with a pod running the primary if you are about to run DDL (CREATE EXTENSION, CREATE TABLE, etc.). Check the kubeblocks.io/role (Patroni) label to find it; the primary is not always postgresql-0.
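To locate the primary without guessing, you can select on the role label. A sketch, assuming your pods carry the kubeblocks.io/role label mentioned above and the standard app.kubernetes.io/instance label (substitute your own names for the angle-bracket placeholders):

```bash
# Print the name of the pod currently labeled as primary
kubectl get pods -n <namespace> \
  -l app.kubernetes.io/instance=<cluster-name>,kubeblocks.io/role=primary \
  -o jsonpath='{.items[0].metadata.name}'
```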

pgvector — AI / Vector search

pgvector adds a vector data type and approximate nearest-neighbor (ANN) search operators to PostgreSQL. Use it to store and query embedding vectors generated by AI models — the foundation for semantic search and RAG pipelines.

```sql
-- Enable the extension
CREATE EXTENSION IF NOT EXISTS vector;

-- Create a table to store text and its embedding
CREATE TABLE documents (
    id serial PRIMARY KEY,
    content text,
    embedding vector(1536)  -- dimension must match your embedding model
);

-- Insert a row (use a real vector literal with 1536 floats from your model;
-- placeholder below is only illustrative)
INSERT INTO documents (content, embedding)
VALUES ('KubeBlocks manages databases on Kubernetes',
        array_fill(0.01::real, ARRAY[1536])::vector);

-- Create an HNSW index for fast approximate search
CREATE INDEX ON documents USING hnsw (embedding vector_cosine_ops);

-- Find the 5 most similar documents to a query embedding (same dimension as the column)
SELECT id, content,
       embedding <=> array_fill(0.02::real, ARRAY[1536])::vector AS distance
FROM documents
ORDER BY distance
LIMIT 5;
```

Supported distance operators:

| Operator | Metric |
|----------|--------|
| `<->` | Euclidean (L2) distance |
| `<=>` | Cosine distance |
| `<#>` | Negative inner product |
NOTE

Available on PG12–PG18. Index types: IVFFlat (training required) and HNSW (recommended for most use cases).
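If you prefer IVFFlat, build the index only after loading data, since it trains list centroids from existing rows. A sketch; the lists and probes values below are illustrative starting points, not tuned settings:

```sql
-- IVFFlat needs existing rows to train on; create it after bulk-loading
CREATE INDEX ON documents USING ivfflat (embedding vector_cosine_ops)
    WITH (lists = 100);

-- Probe more lists at query time for better recall (session-level setting)
SET ivfflat.probes = 10;
```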

PostGIS — Geospatial

PostGIS adds geometry and geography data types, spatial indexes, and hundreds of spatial functions. It is the standard choice for location-aware applications.

CAUTION

If pgaudit.log is set to a non-empty value (the KubeBlocks default is 'ddl'), PostGIS installation will fail. Temporarily disable it first:

```sql
SET pgaudit.log = 'none';
CREATE EXTENSION IF NOT EXISTS postgis;
```

Re-enable after installation: ALTER DATABASE <dbname> SET pgaudit.log = 'ddl';

```sql
-- Enable the extension
SET pgaudit.log = 'none';  -- required if pgaudit is active in this database
CREATE EXTENSION IF NOT EXISTS postgis;

-- Store locations in a table
CREATE TABLE locations (
    id serial PRIMARY KEY,
    name text,
    geom geometry(Point, 4326)  -- SRID 4326 = WGS84
);

INSERT INTO locations (name, geom) VALUES
    ('Beijing',  ST_SetSRID(ST_MakePoint(116.4, 39.9), 4326)),
    ('Shanghai', ST_SetSRID(ST_MakePoint(121.5, 31.2), 4326));

-- Create a spatial index
CREATE INDEX ON locations USING GIST (geom);

-- Distance between two cities in meters
SELECT ST_Distance(
    ST_MakePoint(116.4, 39.9)::geography,
    ST_MakePoint(121.5, 31.2)::geography
) AS distance_meters;

-- Find all locations within 500 km of Beijing
SELECT name FROM locations
WHERE ST_DWithin(
    geom::geography,
    ST_MakePoint(116.4, 39.9)::geography,
    500000  -- meters
);
```
NOTE

Available on PG12–PG18.

TimescaleDB — Time-series

TimescaleDB extends PostgreSQL with hypertables: automatically partitioned tables optimized for time-ordered inserts and range queries. It also provides continuous aggregates, data retention policies, and native compression.

```sql
-- Enable the extension
CREATE EXTENSION IF NOT EXISTS timescaledb;

-- Create a regular table, then convert it to a hypertable
CREATE TABLE metrics (
    time   TIMESTAMPTZ NOT NULL,
    device text,
    value  DOUBLE PRECISION
);
SELECT create_hypertable('metrics', 'time');

-- Insert data
INSERT INTO metrics VALUES
    (NOW(), 'sensor-1', 23.5),
    (NOW() - INTERVAL '1 hour', 'sensor-1', 22.1);

-- Time-bucketed aggregation (1-hour buckets)
SELECT time_bucket('1 hour', time) AS bucket,
       device,
       avg(value) AS avg_value
FROM metrics
GROUP BY bucket, device
ORDER BY bucket DESC;

-- Create a continuous aggregate for real-time rollups
CREATE MATERIALIZED VIEW metrics_hourly
WITH (timescaledb.continuous) AS
SELECT time_bucket('1 hour', time) AS bucket,
       device,
       avg(value) AS avg_value
FROM metrics
GROUP BY bucket, device;

-- Set a data retention policy (keep last 30 days)
SELECT add_retention_policy('metrics', INTERVAL '30 days');
```
NOTE

Available on PG12–PG18. TimescaleDB appears in shared_preload_libraries in this addon’s templates, but you still run CREATE EXTENSION timescaledb once per database so TimescaleDB’s SQL objects are installed there.

pg_duckdb — Analytics

pg_duckdb embeds the DuckDB OLAP engine inside PostgreSQL. Complex aggregations and wide-table scans run through DuckDB's vectorized execution engine, which can be orders of magnitude faster than PostgreSQL's row-based executor.

```sql
-- Enable the extension (PG14+ only)
CREATE EXTENSION IF NOT EXISTS pg_duckdb;

-- Analytical queries run via DuckDB's vectorized engine transparently
-- (no syntax change required — use standard SQL)
SELECT region,
       product_category,
       sum(revenue)     AS total_revenue,
       count(*)         AS order_count,
       avg(order_value) AS avg_order
FROM orders
GROUP BY region, product_category
ORDER BY total_revenue DESC;

-- Verify DuckDB execution is active
SELECT duckdb.raw_query($$ SELECT 42 AS answer $$);

-- Read directly from S3 (requires DuckDB secrets configured first)
SELECT * FROM read_parquet('s3://my-bucket/data/*.parquet') LIMIT 100;
```
NOTE

Not available on PG12. On PG14–PG16, the addon preloads pg_duckdb only from specific minor versions upward (14.18+, 15.13+, 16.9+). On older minors such as 14.7.2 or 16.4.0 the image may still ship the extension, but it is not listed in shared_preload_libraries until you move to a supported minor or reconfigure (which requires a restart). On PG17+ the current templates always include pg_duckdb in the preload line. Bundled extension versions by major release are listed in the Overview table (e.g. 0.3.0 vs 1.1.0 on PG18).
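If you need to preload pg_duckdb on a minor where the template does not include it, the same Reconfiguring OpsRequest pattern shown for pglogical later on this page applies. A sketch, not a verified manifest; the angle-bracket names are placeholders and the value must list every library you want preloaded, not just pg_duckdb:

```yaml
apiVersion: operations.kubeblocks.io/v1alpha1
kind: OpsRequest
metadata:
  name: pg-add-pg-duckdb
  namespace: <namespace>
spec:
  clusterName: <cluster-name>
  type: Reconfiguring
  reconfigures:
    - componentName: postgresql
      parameters:
        - key: shared_preload_libraries
          value: "'pg_stat_statements,pg_cron,timescaledb,pgaudit,pg_duckdb'"
```

shared_preload_libraries is static, so applying this restarts the cluster.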

roaringbitmap — Analytics (PG18 only)

roaringbitmap provides a compressed bitmap data type for large integer set operations. Intersections, unions, and cardinality counts on sets of millions of IDs run in milliseconds.

```sql
-- Enable the extension (PG18 only)
CREATE EXTENSION IF NOT EXISTS roaringbitmap;

-- Create a table to store pre-aggregated user segments
CREATE TABLE user_segments (
    segment_name text PRIMARY KEY,
    user_ids     roaringbitmap
);

-- Build segments from a users table
INSERT INTO user_segments (segment_name, user_ids)
SELECT 'active', rb_build_agg(user_id)
FROM users
WHERE last_login > NOW() - INTERVAL '30 days';

-- Audience intersection: active female users in Beijing
SELECT rb_cardinality(
    rb_and(
        (SELECT user_ids FROM user_segments WHERE segment_name = 'active'),
        rb_and(
            (SELECT user_ids FROM user_segments WHERE segment_name = 'female'),
            (SELECT user_ids FROM user_segments WHERE segment_name = 'beijing')
        )
    )
) AS audience_size;
```
NOTE

Available on PG18 only.

pg_trgm — Fuzzy text search

pg_trgm enables trigram-based fuzzy text search. Its primary benefit is making LIKE '%keyword%' queries index-friendly via GiST or GIN indexes, eliminating full-table scans on large text columns.

```sql
-- Enable the extension
CREATE EXTENSION IF NOT EXISTS pg_trgm;

-- Create a GIN trigram index on a product name column
CREATE INDEX ON products USING GIN (name gin_trgm_ops);

-- Fuzzy search: find products similar to "iphone"
SELECT name, similarity(name, 'iphone') AS score
FROM products
WHERE name % 'iphone'  -- similarity threshold (default 0.3)
ORDER BY score DESC
LIMIT 10;

-- LIKE with index support (no full-table scan)
SELECT * FROM products WHERE name ILIKE '%samsung%';
```
NOTE

Available on PG12–PG18.
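The cutoff used by the % operator is tunable per session via the pg_trgm.similarity_threshold setting. A small sketch of loosening it (0.3 is the default; lower values match more loosely):

```sql
-- Show the current trigram similarity threshold (default 0.3)
SHOW pg_trgm.similarity_threshold;

-- Lower it so the % operator matches more loosely for this session
SET pg_trgm.similarity_threshold = 0.2;
```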

pg_stat_statements — Query performance monitoring

pg_stat_statements records cumulative execution statistics for every distinct SQL statement. The library is always preloaded in this addon, but PostgreSQL still expects CREATE EXTENSION pg_stat_statements in each database where you want the extension’s objects (including the pg_stat_statements view) to exist.

```sql
CREATE EXTENSION IF NOT EXISTS pg_stat_statements;

-- List slowest queries by total execution time
SELECT left(query, 80) AS query_snippet,
       calls,
       round(mean_exec_time::numeric, 2)  AS mean_ms,
       round(total_exec_time::numeric, 2) AS total_ms,
       rows
FROM pg_stat_statements
ORDER BY total_exec_time DESC
LIMIT 10;

-- Reset accumulated statistics
SELECT pg_stat_statements_reset();
```
NOTE

Preload is configured in shared_preload_libraries; run CREATE EXTENSION (above) per database before querying the view.

pgaudit — Audit logging

pgaudit produces detailed audit logs for SQL activity beyond PostgreSQL's standard log_statement. Use it to satisfy compliance requirements such as SOC 2, HIPAA, and PCI DSS.

```sql
-- Enable the extension (pgaudit is already preloaded; CREATE EXTENSION activates the catalog)
CREATE EXTENSION IF NOT EXISTS pgaudit;

-- Log all DDL and data-modification statements
SET pgaudit.log = 'ddl, write';

-- Log all statements for the current session
SET pgaudit.log = 'all';

-- Apply a role-level audit policy
ALTER ROLE audit_user SET pgaudit.log = 'all';
```

Audit entries appear in the PostgreSQL log file with an AUDIT: prefix.
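To spot-check that entries are being written, you can grep the pod logs. A sketch; it assumes the PostgreSQL server log reaches the container's stdout, which is the usual setup for KubeBlocks-managed pods:

```bash
# Show recent audit entries from the primary pod's log output
kubectl logs <pod-name> -n <namespace> --tail=200 | grep 'AUDIT:'
```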

NOTE

Available on PG12–PG18. pgaudit is loaded via shared_preload_libraries; CREATE EXTENSION pgaudit is still required per database to activate the catalog objects.

pg_cron — Scheduled jobs

pg_cron runs SQL tasks on a cron schedule directly inside the database. No external scheduler or application code is needed.

```sql
-- Library is preloaded; create the extension objects in this database
-- (usually `postgres`), then schedule jobs
CREATE EXTENSION IF NOT EXISTS pg_cron;

-- Run VACUUM ANALYZE every night at 3 AM
SELECT cron.schedule('nightly-vacuum', '0 3 * * *', 'VACUUM ANALYZE orders');

-- Archive old records every Sunday at midnight
SELECT cron.schedule('weekly-archive', '0 0 * * 0',
    $$DELETE FROM events WHERE created_at < NOW() - INTERVAL '90 days'$$);

-- Refresh a materialized view every 5 minutes
SELECT cron.schedule('refresh-summary', '*/5 * * * *',
    'REFRESH MATERIALIZED VIEW CONCURRENTLY sales_summary');

-- List all scheduled jobs
SELECT jobid, schedule, command, nodename, active FROM cron.job;

-- Remove a job
SELECT cron.unschedule('nightly-vacuum');
```
NOTE

The pg_cron library is listed in shared_preload_libraries; use CREATE EXTENSION pg_cron in each database that will own schedules. By default, keep jobs in the postgres database or use cron.schedule_in_database() when targeting another database.
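A sketch of targeting another database from the postgres database with cron.schedule_in_database (the appdb name is a placeholder for one of your own databases):

```sql
-- Schedule a job that executes inside a different database
SELECT cron.schedule_in_database(
    'appdb-nightly-vacuum',  -- job name
    '0 3 * * *',             -- cron schedule
    'VACUUM ANALYZE',        -- command to run
    'appdb'                  -- target database
);
```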

pg_partman — Partition management

pg_partman automates the lifecycle of native PostgreSQL partitioned tables: it creates future partitions on schedule, drops or detaches expired ones, and handles bookkeeping that is otherwise error-prone when done manually.

```sql
-- Enable the extension
CREATE EXTENSION IF NOT EXISTS pg_partman;

-- Create a partitioned parent table (partitioned by time, native)
CREATE TABLE logs (
    id bigserial,
    created_at timestamptz NOT NULL DEFAULT NOW(),
    message text
) PARTITION BY RANGE (created_at);

-- Set up daily partitions with a 7-day pre-creation window
-- pg_partman v5+: schema is public; p_interval uses PostgreSQL interval values
SELECT create_parent(
    p_parent_table => 'public.logs',
    p_control      => 'created_at',
    p_interval     => '1 day',
    p_premake      => 7
);

-- Run the maintenance procedure to create/drop partitions on schedule
-- Typically called via pg_cron:
SELECT cron.schedule('partman-maintenance', '*/30 * * * *',
    'CALL run_maintenance_proc()');

-- Set a 30-day retention policy (detach partitions older than 30 days)
UPDATE part_config
SET retention = '30 days',
    retention_keep_table = false
WHERE parent_table = 'public.logs';
```
NOTE

Available on PG12–PG18. pg_partman works with both time-based and integer-based partition keys.

pglogical — Logical replication

pglogical implements logical replication at the table level, independent of PostgreSQL's built-in streaming replication. Key uses: selective table replication, cross-major-version migration (e.g., PG14 → PG17), and bidirectional replication topologies.

CAUTION

Prerequisite: add pglogical to shared_preload_libraries. pglogical is not preloaded by default. You must add it to shared_preload_libraries via a Reconfiguring OpsRequest and restart before CREATE EXTENSION will succeed:

```yaml
apiVersion: operations.kubeblocks.io/v1alpha1
kind: OpsRequest
metadata:
  name: pg-add-pglogical
  namespace: <namespace>
spec:
  clusterName: <cluster-name>
  type: Reconfiguring
  reconfigures:
    - componentName: postgresql
      parameters:
        - key: shared_preload_libraries
          value: "'pg_stat_statements,pg_cron,timescaledb,pgaudit,pglogical'"
```

shared_preload_libraries is a static parameter; the cluster will restart automatically.

```sql
-- Enable on both provider and subscriber nodes (after shared_preload_libraries restart)
CREATE EXTENSION IF NOT EXISTS pglogical;

-- === On the provider (source) node ===

-- Create a provider node
SELECT pglogical.create_node(
    node_name := 'provider',
    dsn := 'host=pg-cluster-provider-headless.demo.svc.cluster.local dbname=mydb user=kbadmin password=...'
);

-- Add all tables in the public schema to the default replication set
SELECT pglogical.replication_set_add_all_tables('default', ARRAY['public']);

-- === On the subscriber (destination) node ===

-- Create a subscriber node
SELECT pglogical.create_node(
    node_name := 'subscriber',
    dsn := 'host=pg-cluster-subscriber-headless.demo.svc.cluster.local dbname=mydb user=kbadmin password=...'
);

-- Subscribe to the provider
SELECT pglogical.create_subscription(
    subscription_name := 'sub_from_provider',
    provider_dsn := 'host=pg-cluster-provider-headless.demo.svc.cluster.local dbname=mydb user=kbadmin password=...'
);

-- Check subscription status
SELECT * FROM pglogical.show_subscription_status('sub_from_provider');
```
NOTE

Available on PG12–PG18. Requires adding pglogical to shared_preload_libraries before installation (see above). The replication user needs REPLICATION privilege and SELECT access to replicated tables.
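A minimal sketch of preparing such a user on the provider; repl_user is a placeholder name, and the GRANT statements assume the replicated tables live in the public schema:

```sql
-- Create a dedicated role with replication rights
CREATE ROLE repl_user WITH LOGIN REPLICATION PASSWORD '...';

-- Allow it to read the tables being replicated
GRANT USAGE ON SCHEMA public TO repl_user;
GRANT SELECT ON ALL TABLES IN SCHEMA public TO repl_user;
```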

© 2026 KUBEBLOCKS INC