Manage Qdrant with KubeBlocks
The popularity of generative AI has drawn widespread attention and has fully ignited the vector database market. KubeBlocks supports the management of vector databases such as Qdrant, Milvus, and Weaviate.
In this chapter, we take Qdrant as an example to show how to create and manage a vector database cluster with KubeBlocks, by using kubectl or a YAML file. You can find the YAML examples in the GitHub repository.
Before you start
View all the database types and versions available for creating a cluster.
Make sure the qdrant cluster definition is installed. If the cluster definition is not available, refer to this doc to enable it first.
kubectl get clusterdefinition qdrant
>
NAME     TOPOLOGIES   SERVICEREFS   STATUS      AGE
qdrant                              Available   30m
View all available versions for creating a cluster.
kubectl get clusterversions -l clusterdefinition.kubeblocks.io/name=qdrant
To keep things isolated, create a separate namespace called demo for this tutorial.
kubectl create namespace demo
Create a cluster
KubeBlocks implements a Cluster CRD to define a cluster. Here is an example of creating a Qdrant cluster with two replicas. The replicas are distributed on different nodes by default, but if you only have one node for deploying the cluster, set spec.affinity.topologyKeys to null.
cat <<EOF | kubectl apply -f -
apiVersion: apps.kubeblocks.io/v1alpha1
kind: Cluster
metadata:
  name: mycluster
  namespace: demo
spec:
  clusterDefinitionRef: qdrant
  clusterVersionRef: qdrant-1.8.1
  terminationPolicy: Delete
  affinity:
    podAntiAffinity: Preferred
    topologyKeys:
    - kubernetes.io/hostname
  tolerations:
  - key: kb-data
    operator: Equal
    value: 'true'
    effect: NoSchedule
  componentSpecs:
  - name: qdrant
    componentDefRef: qdrant
    disableExporter: true
    serviceAccountName: kb-mycluster
    replicas: 2
    resources:
      limits:
        cpu: '0.5'
        memory: 0.5Gi
      requests:
        cpu: '0.5'
        memory: 0.5Gi
    volumeClaimTemplates:
    - name: data
      spec:
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 20Gi
EOF
Field | Definition |
---|---|
spec.clusterDefinitionRef | It specifies the name of the ClusterDefinition for creating a specific type of cluster. |
spec.clusterVersionRef | It is the name of the cluster version CRD that defines the cluster version. |
spec.terminationPolicy | It is the policy of cluster termination. The default value is Delete. Valid values are DoNotTerminate, Halt, Delete, WipeOut. DoNotTerminate blocks the deletion operation. Halt deletes workload resources such as statefulset and deployment workloads but keeps PVCs. Delete is based on Halt and deletes PVCs. WipeOut is based on Delete and wipes out all volume snapshots and snapshot data from a backup storage location. |
spec.affinity | It defines a set of node affinity scheduling rules for the cluster's Pods. This field helps control the placement of Pods on nodes within the cluster. |
spec.affinity.podAntiAffinity | It specifies the anti-affinity level of Pods within a component. It determines how pods should spread across nodes to improve availability and performance. |
spec.affinity.topologyKeys | It represents the key of node labels used to define the topology domain for Pod anti-affinity and Pod spread constraints. |
spec.tolerations | It is an array that specifies tolerations attached to the cluster's Pods, allowing them to be scheduled onto nodes with matching taints. |
spec.componentSpecs | It is the list of components that define the cluster components. This field allows customized configuration of each component within a cluster. |
spec.componentSpecs.componentDefRef | It is the name of the component definition that is defined in the cluster definition and you can get the component definition names with kubectl get clusterdefinition qdrant -o json \| jq '.spec.componentDefs[].name' . |
spec.componentSpecs.name | It specifies the name of the component. |
spec.componentSpecs.disableExporter | It defines whether the monitoring function is enabled. |
spec.componentSpecs.replicas | It specifies the number of replicas of the component. |
spec.componentSpecs.resources | It specifies the resource requirements of the component. |
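For example, to list the component definition names that componentDefRef can reference (assuming jq is installed locally), run the query mentioned in the table above:
kubectl get clusterdefinition qdrant -o json | jq '.spec.componentDefs[].name'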
The KubeBlocks operator watches for the Cluster CRD and creates the cluster and all dependent resources. You can list all the resources created for the cluster with the following command.
kubectl get all,secret,rolebinding,serviceaccount -l app.kubernetes.io/instance=mycluster -n demo
Run the following command to see the created Qdrant cluster object:
kubectl get cluster mycluster -n demo -o yaml
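Provisioning takes a moment. If you prefer to wait until the cluster turns Running before moving on, you can watch it with the standard -w flag:
kubectl get cluster mycluster -n demo -w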
Connect to a Qdrant cluster
Qdrant provides both HTTP and gRPC protocols for client access on ports 6333 and 6334 respectively. You can also port forward the service to connect to a database from your local machine.
Run the following command to port forward the service.
kubectl port-forward svc/mycluster-qdrant 6333:6333 -n demo
Open a new terminal and run the following command to connect to the database.
curl http://127.0.0.1:6333/collections
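As a further smoke test, you can create a collection through the same HTTP API; the collection name test_collection and the vector parameters below are arbitrary example values, not required settings:
curl -X PUT http://127.0.0.1:6333/collections/test_collection \
  -H 'Content-Type: application/json' \
  -d '{"vectors": {"size": 4, "distance": "Cosine"}}'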
Refer to the official Qdrant documentation for cluster operations.
Scale
Scale horizontally
Horizontal scaling changes the number of pods. For example, you can scale out replicas from three to five.
From v0.9.0, besides scaling replicas, KubeBlocks also supports scaling specified instances in and out. Refer to Horizontal Scale for more details and examples.
Before you start
Check whether the cluster STATUS is Running. Otherwise, the following operations may fail.
kubectl get cluster mycluster -n demo
>
NAME        CLUSTER-DEFINITION   VERSION        TERMINATION-POLICY   STATUS    AGE
mycluster   qdrant               qdrant-1.8.1   Delete               Running   47m
Steps
There are two ways to apply horizontal scaling.
- OpsRequest
- Edit cluster YAML file
Apply an OpsRequest to a specified cluster. Configure the parameters according to your needs.
The example below adds two replicas.
kubectl apply -f - <<EOF
apiVersion: apps.kubeblocks.io/v1alpha1
kind: OpsRequest
metadata:
  name: ops-horizontal-scaling
  namespace: demo
spec:
  clusterName: mycluster
  type: HorizontalScaling
  horizontalScaling:
  - componentName: qdrant
    scaleOut:
      replicaChanges: 2
EOF
If you want to scale in replicas, replace scaleOut with scaleIn. The example below deletes two replicas.
kubectl apply -f - <<EOF
apiVersion: apps.kubeblocks.io/v1alpha1
kind: OpsRequest
metadata:
  name: ops-horizontal-scaling
  namespace: demo
spec:
  clusterName: mycluster
  type: HorizontalScaling
  horizontalScaling:
  - componentName: qdrant
    scaleIn:
      replicaChanges: 2
EOF
Check the operation status to validate the horizontal scaling.
kubectl get ops -n demo
>
NAMESPACE   NAME                     TYPE                CLUSTER     STATUS    PROGRESS   AGE
demo        ops-horizontal-scaling   HorizontalScaling   mycluster   Succeed   3/3        6m
If an error occurs, you can troubleshoot with the kubectl describe ops -n demo command to view the events of this operation.
Check whether the corresponding resources change.
kubectl describe cluster mycluster -n demo
Change the configuration of spec.componentSpecs.replicas in the YAML file. spec.componentSpecs.replicas stands for the pod amount, and changing this value triggers a horizontal scaling of the cluster.
kubectl edit cluster mycluster -n demo
apiVersion: apps.kubeblocks.io/v1alpha1
kind: Cluster
metadata:
  name: mycluster
  namespace: demo
spec:
  clusterDefinitionRef: qdrant
  clusterVersionRef: qdrant-1.8.1
  componentSpecs:
  - name: qdrant
    componentDefRef: qdrant
    replicas: 2 # Change the amount
    volumeClaimTemplates:
    - name: data
      spec:
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 1Gi
  terminationPolicy: Delete
Check whether the corresponding resources change.
kubectl describe cluster mycluster -n demo
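If you want to follow the new replicas as they come up, one option is to watch the pods directly, reusing the instance label from earlier in this tutorial:
kubectl get pods -l app.kubernetes.io/instance=mycluster -n demo -w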
Handle the snapshot exception
If STATUS=ConditionsError
occurs during the horizontal scaling process, you can find the cause from cluster.status.condition.message
for troubleshooting.
In the example below, a snapshot exception occurs.
Status:
  conditions:
  - lastTransitionTime: "2024-04-25T17:40:26Z"
    message: VolumeSnapshot/mycluster-qdrant-scaling-dbqgp: Failed to set default snapshot
      class with error cannot find default snapshot class
    reason: ApplyResourcesFailed
    status: "False"
    type: ApplyResources
Reason
This exception occurs because the VolumeSnapshotClass is not configured. Configuring the VolumeSnapshotClass fixes the exception, but the horizontal scaling still cannot continue, because the failed backup and the VolumeSnapshot generated by that backup still exist. First delete these two resources, and then KubeBlocks re-generates new ones.
Steps:
Configure the VolumeSnapshotClass by running the command below.
kubectl create -f - <<EOF
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: csi-aws-vsc
  annotations:
    snapshot.storage.kubernetes.io/is-default-class: "true"
driver: ebs.csi.aws.com
deletionPolicy: Delete
EOF
Delete the failed backup and VolumeSnapshot resources.
kubectl delete backup -l app.kubernetes.io/instance=mycluster -n demo
kubectl delete volumesnapshot -l app.kubernetes.io/instance=mycluster -n demo
Result
The horizontal scaling continues after the backup and VolumeSnapshot are deleted, and the cluster restores to Running status.
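To confirm the recovery, check that the cluster status is back to Running:
kubectl get cluster mycluster -n demo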
Scale vertically
You can vertically scale a cluster by changing resource requests and limits (CPU and memory). For example, you can change the resource class from 1C2G to 2C4G by performing vertical scaling.
Before you start
Check whether the cluster status is Running. Otherwise, the following operations may fail.
kubectl get cluster mycluster -n demo
>
NAME        CLUSTER-DEFINITION   VERSION        TERMINATION-POLICY   STATUS    AGE
mycluster   qdrant               qdrant-1.8.1   Delete               Running   47m
Steps
There are two ways to apply vertical scaling.
- OpsRequest
- Edit cluster YAML file
Apply an OpsRequest to the specified cluster. Configure the parameters according to your needs.
kubectl apply -f - <<EOF
apiVersion: apps.kubeblocks.io/v1alpha1
kind: OpsRequest
metadata:
  name: ops-vertical-scaling
  namespace: demo
spec:
  clusterName: mycluster
  type: VerticalScaling
  verticalScaling:
  - componentName: qdrant
    requests:
      memory: "2Gi"
      cpu: "1"
    limits:
      memory: "4Gi"
      cpu: "2"
EOF
Check the operation status to validate the vertical scaling.
kubectl get ops -n demo
>
NAMESPACE   NAME                   TYPE              CLUSTER     STATUS    PROGRESS   AGE
demo        ops-vertical-scaling   VerticalScaling   mycluster   Succeed   3/3        6m
If an error occurs, you can troubleshoot with the kubectl describe ops -n demo command to view the events of this operation.
Check whether the corresponding resources change.
kubectl describe cluster mycluster -n demo
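Besides describing the cluster, one way to read the effective container resources straight from the pod specs is a jsonpath query like the sketch below:
kubectl get pods -l app.kubernetes.io/instance=mycluster -n demo \
  -o jsonpath='{range .items[*]}{.metadata.name}{": "}{.spec.containers[0].resources}{"\n"}{end}'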
Change the configuration of spec.componentSpecs.resources in the YAML file. spec.componentSpecs.resources controls the requests and limits of resources, and changing them triggers a vertical scaling.
kubectl edit cluster mycluster -n demo
apiVersion: apps.kubeblocks.io/v1alpha1
kind: Cluster
metadata:
  name: mycluster
  namespace: demo
spec:
  clusterDefinitionRef: qdrant
  clusterVersionRef: qdrant-1.8.1
  componentSpecs:
  - name: qdrant
    componentDefRef: qdrant
    replicas: 1
    resources: # Change the values of resources.
      requests:
        memory: "2Gi"
        cpu: "1"
      limits:
        memory: "4Gi"
        cpu: "2"
    volumeClaimTemplates:
    - name: data
      spec:
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 1Gi
  terminationPolicy: Delete
Check whether the corresponding resources change.
kubectl describe cluster mycluster -n demo
Volume Expansion
Before you start
Check whether the cluster status is Running. Otherwise, the following operations may fail.
kubectl get cluster mycluster -n demo
>
NAME        CLUSTER-DEFINITION   VERSION        TERMINATION-POLICY   STATUS    AGE
mycluster   qdrant               qdrant-1.8.1   Delete               Running   4m29s
Steps
There are two ways to apply volume expansion.
- OpsRequest
- Edit cluster YAML file
Change the value of storage according to your needs and run the command below to expand the volume of a cluster.
kubectl apply -f - <<EOF
apiVersion: apps.kubeblocks.io/v1alpha1
kind: OpsRequest
metadata:
  name: ops-volume-expansion
  namespace: demo
spec:
  clusterName: mycluster
  type: VolumeExpansion
  volumeExpansion:
  - componentName: qdrant
    volumeClaimTemplates:
    - name: data
      storage: "40Gi"
EOF
Validate the volume expansion operation.
kubectl get ops -n demo
>
NAMESPACE   NAME                   TYPE              CLUSTER     STATUS    PROGRESS   AGE
demo        ops-volume-expansion   VolumeExpansion   mycluster   Succeed   3/3        6m
If an error occurs, you can troubleshoot with the kubectl describe ops -n demo command to view the events of this operation.
Check whether the corresponding cluster resources change.
kubectl describe cluster mycluster -n demo
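You can also verify the new capacity on the PVCs themselves:
kubectl get pvc -l app.kubernetes.io/instance=mycluster -n demo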
Change the value of spec.componentSpecs.volumeClaimTemplates.spec.resources in the cluster YAML file. spec.componentSpecs.volumeClaimTemplates.spec.resources is the storage resource information of the pod, and changing this value triggers the volume expansion of a cluster.
kubectl edit cluster mycluster -n demo
apiVersion: apps.kubeblocks.io/v1alpha1
kind: Cluster
metadata:
  name: mycluster
  namespace: demo
spec:
  clusterDefinitionRef: qdrant
  clusterVersionRef: qdrant-1.8.1
  componentSpecs:
  - name: qdrant
    componentDefRef: qdrant
    replicas: 2
    volumeClaimTemplates:
    - name: data
      spec:
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 1Gi # Change the volume storage size.
  terminationPolicy: Delete
Check whether the corresponding cluster resources change.
kubectl describe cluster mycluster -n demo
Stop/Start a cluster
You can stop/start a cluster to save computing resources. When a cluster is stopped, its computing resources are released, which means the Kubernetes pods are deleted, but the storage resources are reserved. Start the cluster again when you want to restore it from the reserved storage.
Stop a cluster
- OpsRequest
- Edit cluster YAML file
Run the command below to stop a cluster.
kubectl apply -f - <<EOF
apiVersion: apps.kubeblocks.io/v1alpha1
kind: OpsRequest
metadata:
  name: ops-stop
  namespace: demo
spec:
  clusterName: mycluster
  type: Stop
EOF
Set replicas to 0 to delete the pods.
apiVersion: apps.kubeblocks.io/v1alpha1
kind: Cluster
metadata:
  name: mycluster
  namespace: demo
spec:
  clusterDefinitionRef: qdrant
  clusterVersionRef: qdrant-1.8.1
  terminationPolicy: Delete
  componentSpecs:
  - name: qdrant
    componentDefRef: qdrant
    disableExporter: true
    replicas: 0
    volumeClaimTemplates:
    - name: data
      spec:
        storageClassName: standard
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 20Gi
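Whichever method you use, you can verify that the pods are deleted while the PVCs are kept:
kubectl get pods,pvc -l app.kubernetes.io/instance=mycluster -n demo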
Start a cluster
- OpsRequest
- Edit cluster YAML file
Run the command below to start a cluster.
kubectl apply -f - <<EOF
apiVersion: apps.kubeblocks.io/v1alpha1
kind: OpsRequest
metadata:
  name: ops-start
  namespace: demo
spec:
  clusterName: mycluster
  type: Start
EOF
Change replicas back to the original amount to start this cluster again.
apiVersion: apps.kubeblocks.io/v1alpha1
kind: Cluster
metadata:
  name: mycluster
  namespace: demo
spec:
  clusterDefinitionRef: qdrant
  clusterVersionRef: qdrant-1.8.1
  terminationPolicy: Delete
  componentSpecs:
  - name: qdrant
    componentDefRef: qdrant
    disableExporter: true
    replicas: 1
    volumeClaimTemplates:
    - name: data
      spec:
        storageClassName: standard
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 20Gi
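After the cluster is started, the pods are recreated and the cluster should return to Running:
kubectl get cluster mycluster -n demo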
Backup and restore
The backup and restore operations for Qdrant are the same as those for other clusters.
Refer to the backup and restore docs for details.
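As a starting point, you can list the backup policies KubeBlocks created for this cluster; this sketch assumes the cluster resources carry the same app.kubernetes.io/instance label used elsewhere in this tutorial:
kubectl get backuppolicy -n demo -l app.kubernetes.io/instance=mycluster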