This guide explains how to perform horizontal scaling (scale-out and scale-in) on a Qdrant cluster managed by KubeBlocks. You'll learn how to use both OpsRequest and direct Cluster API updates to achieve this.
Before proceeding, ensure the following:

- A running Kubernetes cluster with KubeBlocks and the Qdrant addon installed
- `kubectl` configured to access the cluster

Create a dedicated namespace for this guide:

```bash
kubectl create ns demo
namespace/demo created
```
KubeBlocks uses a declarative approach for managing Qdrant Clusters. Below is an example configuration for deploying a Qdrant Cluster with 3 replicas.
Apply the following YAML configuration to deploy the cluster:
```yaml
apiVersion: apps.kubeblocks.io/v1
kind: Cluster
metadata:
  name: qdrant-cluster
  namespace: demo
spec:
  terminationPolicy: Delete
  clusterDef: qdrant
  topology: cluster
  componentSpecs:
    - name: qdrant
      serviceVersion: 1.10.0
      replicas: 3
      resources:
        limits:
          cpu: "0.5"
          memory: "0.5Gi"
        requests:
          cpu: "0.5"
          memory: "0.5Gi"
      volumeClaimTemplates:
        - name: data
          spec:
            storageClassName: ""
            accessModes:
              - ReadWriteOnce
            resources:
              requests:
                storage: 20Gi
```
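To apply the manifest, save it to a file first. The file name `qdrant-cluster.yaml` below is arbitrary; use any name you like:

```bash
# Apply the Cluster manifest saved above
kubectl apply -f qdrant-cluster.yaml
```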
Monitor the cluster status until it transitions to the Running state:

```bash
kubectl get cluster qdrant-cluster -n demo -w
```

Expected Output:

```bash
NAME             CLUSTER-DEFINITION   TERMINATION-POLICY   STATUS     AGE
qdrant-cluster   qdrant               Delete               Creating   49s
qdrant-cluster   qdrant               Delete               Running    62s
```
Check the pod status and roles:

```bash
kubectl get pods -l app.kubernetes.io/instance=qdrant-cluster -n demo
```

Expected Output:

```bash
NAME                      READY   STATUS    RESTARTS   AGE
qdrant-cluster-qdrant-0   2/2     Running   0          1m43s
qdrant-cluster-qdrant-1   2/2     Running   0          1m28s
qdrant-cluster-qdrant-2   2/2     Running   0          1m14s
```
Once the cluster status becomes Running, your Qdrant cluster is ready for use.

If you are creating the cluster for the first time, it may take some time to pull images before the Pods start running.

Expected Workflow:

1. Pods transition from `Pending` to `Running`
2. Cluster status transitions from `Updating` to `Running`
Qdrant uses the Raft consensus protocol to keep the cluster topology and collection structure consistent. After scaling out or in, keep an odd number of replicas (e.g., 3, 5, 7) to avoid split-brain scenarios.
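To inspect the Raft peer list after a scaling operation, you can query Qdrant's REST `GET /cluster` endpoint through a port-forward to one of the pods. This is a sketch; it assumes the Qdrant REST API is served on port 6333 of the pod, which is Qdrant's default:

```bash
# Forward the Qdrant REST port (6333) from the first pod to localhost
kubectl port-forward pod/qdrant-cluster-qdrant-0 6333:6333 -n demo &

# The "peers" map in the response should contain one entry per replica
curl -s http://localhost:6333/cluster
```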
## Scale Out

### Option 1: Using Horizontal Scaling OpsRequest

Scale out the Qdrant cluster by adding one replica to the `qdrant` component:
```yaml
apiVersion: operations.kubeblocks.io/v1alpha1
kind: OpsRequest
metadata:
  name: qdrant-cluster-scale-out-ops
  namespace: demo
spec:
  clusterName: qdrant-cluster
  type: HorizontalScaling
  horizontalScaling:
    - componentName: qdrant
      # Specifies the replica changes for scaling out the component
      scaleOut:
        # Add one replica to the current component
        replicaChanges: 1
```
Monitor the progress of the scaling operation:

```bash
kubectl get ops qdrant-cluster-scale-out-ops -n demo -w
```

Expected Result:

```bash
NAME                           TYPE                CLUSTER          STATUS    PROGRESS   AGE
qdrant-cluster-scale-out-ops   HorizontalScaling   qdrant-cluster   Running   0/1        9s
qdrant-cluster-scale-out-ops   HorizontalScaling   qdrant-cluster   Running   1/1        16s
qdrant-cluster-scale-out-ops   HorizontalScaling   qdrant-cluster   Succeed   1/1        16s
```
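Once the OpsRequest succeeds, you can confirm that it updated the Cluster spec to 4 replicas:

```bash
# The OpsRequest propagates the change to .spec.componentSpecs[0].replicas
kubectl get cluster qdrant-cluster -n demo \
  -o jsonpath='{.spec.componentSpecs[0].replicas}'
```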
### Option 2: Direct Cluster API Update

Alternatively, you can scale out by directly updating the `replicas` field in the Cluster resource:
```yaml
apiVersion: apps.kubeblocks.io/v1
kind: Cluster
spec:
  componentSpecs:
    - name: qdrant
      replicas: 4 # increase replicas to scale out
  ...
```
Or you can patch the Cluster CR with the following command:

```bash
kubectl patch cluster qdrant-cluster -n demo --type=json -p='[{"op": "replace", "path": "/spec/componentSpecs/0/replicas", "value": 4}]'
```
After applying the operation, a new Pod is created, the Qdrant cluster status transitions from `Updating` to `Running`, and the newly created Pod takes the `secondary` role.
```bash
kubectl get pods -n demo -l app.kubernetes.io/instance=qdrant-cluster
```

Example Output:

```bash
NAME                      READY   STATUS    RESTARTS   AGE
qdrant-cluster-qdrant-0   2/2     Running   0          6m24s
qdrant-cluster-qdrant-1   2/2     Running   0          7m19s
qdrant-cluster-qdrant-2   2/2     Running   0          5m57s
qdrant-cluster-qdrant-3   2/2     Running   0          3m54s
```
Expected Workflow:

1. Cluster status transitions from `Updating` to `Running`
## Scale In

On scale-in, Qdrant redistributes data among the remaining replicas, so make sure the cluster has enough capacity to accommodate the data. The redistribution may take some time depending on the amount of data. It is handled by the Qdrant `MemberLeave` operation, and Pods won't be deleted until the data redistribution (i.e., the `MemberLeave` action) completes successfully.
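Before scaling in, you may want to check how shards are placed across peers. A sketch using Qdrant's per-collection cluster endpoint, assuming a port-forward to one pod and a placeholder collection name `my_collection`:

```bash
# Forward the Qdrant REST port (6333, Qdrant's default) from one pod
kubectl port-forward pod/qdrant-cluster-qdrant-0 6333:6333 -n demo &

# Shard placement for a collection; "my_collection" is a placeholder name
curl -s http://localhost:6333/collections/my_collection/cluster
```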
### Option 1: Using Horizontal Scaling OpsRequest

Scale in the Qdrant cluster by removing one replica:
```yaml
apiVersion: operations.kubeblocks.io/v1alpha1
kind: OpsRequest
metadata:
  name: qdrant-cluster-scale-in-ops
  namespace: demo
spec:
  clusterName: qdrant-cluster
  type: HorizontalScaling
  horizontalScaling:
    - componentName: qdrant
      # Specifies the replica changes for scaling in the component
      scaleIn:
        # Remove one replica from the current component
        replicaChanges: 1
```
Monitor progress:

```bash
kubectl get ops qdrant-cluster-scale-in-ops -n demo -w
```

Expected Result:

```bash
NAME                          TYPE                CLUSTER          STATUS    PROGRESS   AGE
qdrant-cluster-scale-in-ops   HorizontalScaling   qdrant-cluster   Running   0/1        8s
qdrant-cluster-scale-in-ops   HorizontalScaling   qdrant-cluster   Running   1/1        24s
qdrant-cluster-scale-in-ops   HorizontalScaling   qdrant-cluster   Succeed   1/1        24s
```
### Option 2: Direct Cluster API Update

Alternatively, you can scale in by directly updating the `replicas` field in the Cluster resource:
```yaml
apiVersion: apps.kubeblocks.io/v1
kind: Cluster
spec:
  componentSpecs:
    - name: qdrant
      replicas: 1 # decrease replicas to scale in
```
Or you can patch the Cluster CR with the following command:

```bash
kubectl patch cluster qdrant-cluster -n demo --type=json -p='[{"op": "replace", "path": "/spec/componentSpecs/0/replicas", "value": 1}]'
```
Example Output (one Pod remaining):

```bash
kubectl get pods -n demo -l app.kubernetes.io/instance=qdrant-cluster
NAME                      READY   STATUS    RESTARTS   AGE
qdrant-cluster-qdrant-0   2/2     Running   0          18m
```
On scale-in, KubeBlocks redistributes the Qdrant data among the remaining replicas before removing each Pod.
If the scale-in operation gets stuck for a long time, check these resources:

```bash
# Check agent logs on the Pod that is leaving the cluster
kubectl logs -n demo <pod-name> -c kbagent

# Check cluster events for errors
kubectl get events -n demo --field-selector involvedObject.name=qdrant-cluster

# Check KubeBlocks operator logs
kubectl -n kb-system logs deploy/kubeblocks
```
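The OpsRequest itself also records conditions and per-Pod progress, which `kubectl describe` surfaces. This assumes the scale-in OpsRequest name used earlier in this guide:

```bash
# Inspect conditions and progress details of the scale-in operation
kubectl describe ops qdrant-cluster-scale-in-ops -n demo
```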
When performing horizontal scaling, keep the earlier guidance in mind: prefer an odd number of replicas, and verify that the remaining replicas have enough capacity before scaling in.
To remove all created resources, delete the Qdrant cluster along with its namespace:

```bash
kubectl delete cluster qdrant-cluster -n demo
kubectl delete ns demo
```
In this guide you learned how to:

- Deploy a Qdrant cluster with KubeBlocks
- Scale out by adding replicas, using either an OpsRequest or a direct Cluster API update
- Scale in by removing replicas, with data redistributed automatically before Pods are deleted

KubeBlocks ensures seamless scaling with minimal disruption to your database operations.