Open Source · Production-Grade · CNCF Landscape

KubeBlocks Qdrant Operator for Kubernetes

Deploy production-grade Qdrant vector database clusters on Kubernetes in minutes. Automate distributed HA with per-shard Raft replication and rolling upgrades.

Try Playground Free → · Read the Docs

8 Supported Versions · 100% Open Source

Deploy Qdrant in 2 steps

1. Install KubeBlocks & Qdrant Addon

# Add Helm repo
helm repo add kubeblocks https://apecloud.github.io/helm-charts
helm repo update

# Install KubeBlocks
helm install kubeblocks kubeblocks/kubeblocks \
  --namespace kb-system --create-namespace

# Install Qdrant addon
helm upgrade -i kb-addon-qdrant kubeblocks/qdrant \
  --namespace kb-system
2. Create a Qdrant Cluster

apiVersion: apps.kubeblocks.io/v1
kind: Cluster
metadata:
  name: qdrant-cluster
  namespace: demo
spec:
  clusterDef: qdrant
  topology: cluster
  terminationPolicy: Delete
  componentSpecs:
    - name: qdrant
      replicas: 3
      serviceVersion: "1.16.3"
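
For reference, the same Cluster can pin compute and storage explicitly. A sketch — the resource and volumeClaimTemplates values are illustrative, and the field names assume the apps.kubeblocks.io/v1 Cluster API shown above:

```yaml
apiVersion: apps.kubeblocks.io/v1
kind: Cluster
metadata:
  name: qdrant-cluster
  namespace: demo
spec:
  clusterDef: qdrant
  topology: cluster
  terminationPolicy: Delete
  componentSpecs:
    - name: qdrant
      replicas: 3
      serviceVersion: "1.16.3"
      resources:                       # illustrative sizing
        requests: {cpu: "1", memory: 2Gi}
        limits: {cpu: "1", memory: 2Gi}
      volumeClaimTemplates:
        - name: data                   # PVC for vector data
          spec:
            accessModes: ["ReadWriteOnce"]
            resources:
              requests:
                storage: 20Gi
```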

Trusted by Engineering Teams at Scale

BONC Cloud · China Mobile Cloud · China Telecom Cloud · Tencent · Xiaomi · Ping An · VIP.com · Kwai · Tiger Brokers · CITIC Securities · SealOS · FastGPT · KubeSphere · Momenta · 360 · Tilaa · Olares · Changan Automobile · State Grid
Qdrant Topology

Production-Grade Cluster. One Operator.

KubeBlocks deploys Qdrant in distributed mode with 3 or more nodes. Each collection shard is replicated across nodes using Raft consensus — when a node fails, the surviving replicas elect a new shard leader and continue serving vector search queries. KubeBlocks restarts the failed pod; it rejoins the cluster and syncs shard data automatically.

✓

Per-shard Raft consensus — every collection write is replicated to a quorum before being acknowledged

✓

Automatic shard leader re-election when a node fails — replicated shards on surviving nodes continue serving queries

✓

All nodes accept REST and gRPC search requests — clients connect to any surviving node

✓

Peer discovery via Kubernetes headless service DNS (no external etcd or ZooKeeper)

✓

HNSW indexing with configurable m and ef_construct parameters per collection

✓

REST API on :6333 (includes GET /metrics), gRPC on :6334, Raft P2P on :6335

Distributed Raft
Sharding
HNSW Index
REST + gRPC
Topology (diagram summary):

- Client access: REST at {cluster}-qdrant-qdrant:6333 and gRPC at {cluster}-qdrant-qdrant:6334. REST/gRPC requests go to all pods, since vector search is distributed.
- Kubernetes Service: ClusterIP {cluster}-qdrant-qdrant (name = cluster + component + serviceName "qdrant") exposing :6333 REST and :6334 gRPC, with a selector matching all pods and load balancing across them.
- Pods on worker nodes (illustrative names): qdrant-0, qdrant-1, and qdrant-2 are symmetric peers, each exposing :6333 REST + /metrics, :6334 gRPC, and :6335 P2P/Raft, and each backed by persistent storage (PVC data-0/1/2 · 20Gi).
- Shard replication via Raft consensus: collection shards are distributed across pods, each replicated with a configurable replication_factor. Per-shard Raft, not one cluster leader — each collection shard elects its own replica leader, and a node may host many shards.
- Headless service: stable pod DNS for internal use (replication, HA heartbeat, operator probes); not a client endpoint.
Day-2 Operations

Every Operation Declared as a Kubernetes Resource

No SSH into pods, no shell scripts. Submit an OpsRequest and KubeBlocks handles the rest.

Availability & Scaling

✓

Horizontal Scaling

Add or remove nodes online. KubeBlocks joins new pods to the Qdrant cluster; shards are redistributed automatically to utilize new capacity.
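
A minimal sketch of that declaration, assuming the operations.kubeblocks.io/v1alpha1 OpsRequest API and the cluster from step 2 (the name qdrant-scale-out and the replica delta are illustrative):

```yaml
apiVersion: operations.kubeblocks.io/v1alpha1
kind: OpsRequest
metadata:
  name: qdrant-scale-out        # illustrative name
  namespace: demo
spec:
  clusterName: qdrant-cluster
  type: HorizontalScaling
  horizontalScaling:
    - componentName: qdrant
      scaleOut:
        replicaChanges: 2       # e.g. 3 -> 5 nodes
```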

✓

Vertical Scaling

Resize CPU and memory on running nodes with a rolling pod restart strategy — the cluster continues serving during the operation.
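
Sketched as an OpsRequest — the name and sizes are illustrative; verify the verticalScaling fields against your KubeBlocks version:

```yaml
apiVersion: operations.kubeblocks.io/v1alpha1
kind: OpsRequest
metadata:
  name: qdrant-vscale           # illustrative name
  namespace: demo
spec:
  clusterName: qdrant-cluster
  type: VerticalScaling
  verticalScaling:
    - componentName: qdrant
      requests: {cpu: "2", memory: 4Gi}
      limits: {cpu: "2", memory: 4Gi}
```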

✓

Volume Expansion

Expand PVC storage for vector data without pod restarts on supported storage classes.
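
A hedged example — the 50Gi target and request name are illustrative, and it assumes the volume claim template is named data:

```yaml
apiVersion: operations.kubeblocks.io/v1alpha1
kind: OpsRequest
metadata:
  name: qdrant-volume-expand    # illustrative name
  namespace: demo
spec:
  clusterName: qdrant-cluster
  type: VolumeExpansion
  volumeExpansion:
    - componentName: qdrant
      volumeClaimTemplates:
        - name: data
          storage: 50Gi         # new size; must exceed current
```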

✓

Rolling Restart

Controlled pod restarts that maintain shard availability throughout — the cluster keeps serving queries.
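
Declared as an OpsRequest, a rolling restart might look like this (name illustrative):

```yaml
apiVersion: operations.kubeblocks.io/v1alpha1
kind: OpsRequest
metadata:
  name: qdrant-restart          # illustrative name
  namespace: demo
spec:
  clusterName: qdrant-cluster
  type: Restart
  restart:
    - componentName: qdrant
```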

✓

Stop / Start

Suspend the cluster to eliminate compute cost; resume with full vector data and shard state intact.
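
For example, stopping could be declared as below; a matching request with type: Start resumes the cluster. Names are illustrative:

```yaml
apiVersion: operations.kubeblocks.io/v1alpha1
kind: OpsRequest
metadata:
  name: qdrant-stop             # illustrative name
  namespace: demo
spec:
  clusterName: qdrant-cluster
  type: Stop                    # PVCs and shard state are retained
```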

✓

Backup & Restore

Back up vector data to S3-compatible storage via the datafile method. Restore to a new cluster from any backup snapshot.
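
A sketch, assuming a BackupPolicy already exists for the cluster — the policy name below is hypothetical, since KubeBlocks generates it per cluster (check with kubectl get backuppolicy):

```yaml
apiVersion: operations.kubeblocks.io/v1alpha1
kind: OpsRequest
metadata:
  name: qdrant-backup           # illustrative name
  namespace: demo
spec:
  clusterName: qdrant-cluster
  type: Backup
  backup:
    backupMethod: datafile
    backupPolicyName: qdrant-cluster-qdrant-backup-policy  # hypothetical
```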

Configuration & Observability

✓

Multi-Engine Consistency

Manage Qdrant alongside PostgreSQL, MySQL, Redis, and 35+ other engines using the same OpsRequest API and kubectl tooling.

✓

Version Upgrade

Rolling upgrades across supported versions (e.g. 1.13.4 → 1.16.3) with health checks between each pod restart.
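
An illustrative Upgrade request — field names assume the v1alpha1 operations API, and serviceVersion must be one of the addon's supported versions:

```yaml
apiVersion: operations.kubeblocks.io/v1alpha1
kind: OpsRequest
metadata:
  name: qdrant-upgrade          # illustrative name
  namespace: demo
spec:
  clusterName: qdrant-cluster
  type: Upgrade
  upgrade:
    components:
      - componentName: qdrant
        serviceVersion: "1.16.3"   # target version
```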

✓

Prometheus Metrics

Per-node metrics via GET /metrics on :6333 (built-in, no sidecar needed). Compatible with Grafana dashboards.

✓

Expose via LoadBalancer

Expose the REST or gRPC API via a LoadBalancer or NodePort service for client access from outside the cluster.
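
A hedged Expose sketch — the service name internet is illustrative; confirm the expose fields for your KubeBlocks release:

```yaml
apiVersion: operations.kubeblocks.io/v1alpha1
kind: OpsRequest
metadata:
  name: qdrant-expose           # illustrative name
  namespace: demo
spec:
  clusterName: qdrant-cluster
  type: Expose
  expose:
    - componentName: qdrant
      switch: Enable
      services:
        - name: internet        # illustrative service name
          serviceType: LoadBalancer
```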

Capabilities

Built for Production Qdrant

KubeBlocks automates the hardest parts of running Qdrant on Kubernetes — so your team doesn't have to.

🛡️
Automatic Shard Leader Re-election
When a node fails, each affected shard's Raft protocol elects a new leader from the surviving replicas. KubeBlocks restarts the failed pod and rejoins it to the cluster automatically.
Zero Manual Steps
Failover timeline:

- T+0s — Normal: qdrant-0 is shard leader; qdrant-1 and qdrant-2 are peers.
- T+1s — Failure: qdrant-0 goes down.
- T+5s — Detect: qdrant-0 is unreachable; qdrant-1 becomes a candidate.
- T+10s — Elect: qdrant-1 is elected shard leader.
- T+20s — Recovered: qdrant-1 leads; qdrant-0 rejoins as a peer and syncs.

< 30s Shard Recovery Time · 0 Manual Steps Required · Raft Consensus Protocol
No human intervention needed. Qdrant's per-shard Raft protocol detects the failed node and promotes a new shard leader. Replicated shards on the surviving nodes continue serving vector search queries throughout. KubeBlocks restarts the failed pod and rejoins it automatically.
From the Blog

Go Deeper on Qdrant on Kubernetes

We Let an AI Agent Manage Our Databases. Here's Why Most Operators Failed It.

We tested AI agents against traditional Kubernetes database operators. The results revealed a fundamental mismatch between fragmented operator APIs and how LLMs actually reason.

Validating KubeBlocks Addon High Availability with Chaos Mesh

How to leverage Chaos Mesh for chaos engineering to validate and enhance KubeBlocks' high availability capabilities through systematic fault injection testing.

Managing Over 6,000 Self-Hosted Databases Without a DBA

How Sealos used KubeBlocks to manage 6,000+ self-hosted databases across four availability zones — architecture, HA, backup, and operations.

Get Started

Qdrant on Kubernetes, the Easy Way

Deploy a production-grade Qdrant vector database cluster in minutes with distributed HA, per-shard Raft replication, and full Day-2 operations — all open source.

Try Playground Free → · Talk to the Team

© 2026 KUBEBLOCKS INC