To get the full list of associated resources created by KubeBlocks for a given cluster:
kubectl get cmp,its,po -l app.kubernetes.io/instance=<CLUSTER_NAME> -n demo # cluster and workload resources
kubectl get backuppolicy,backupschedule,backup -l app.kubernetes.io/instance=<CLUSTER_NAME> -n demo # data protection resources
kubectl get componentparameter,parameter -l app.kubernetes.io/instance=<CLUSTER_NAME> -n demo # configuration resources
kubectl get opsrequest -l app.kubernetes.io/instance=<CLUSTER_NAME> -n demo # opsrequest resources
kubectl get svc,secret,cm,pvc -l app.kubernetes.io/instance=<CLUSTER_NAME> -n demo # k8s native resources
For troubleshooting, you can use the following commands:
kubectl describe TYPE NAME
kubectl logs <podName> -c <containerName>
kubectl -n kb-system logs deployments/kubeblocks -f
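For example, to inspect a cluster and one of its Pods (the cluster, Pod, and container names below are placeholders):
kubectl -n demo describe cluster <CLUSTER_NAME> # show status conditions and recent events of the Cluster
kubectl -n demo logs <CLUSTER_NAME>-postgresql-0 -c postgresql --tail=100 # tail the logs of a database container (names are illustrative)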
Details of each backup method are defined in an ActionSet in KubeBlocks.
For example, to get the ActionSet that defines the behavior of the backup method named wal-g-archive in PostgreSQL:
kubectl -n demo get bp pg-cluster-postgresql-backup-policy -oyaml | yq '.spec.backupMethods[] | select(.name=="wal-g-archive") | .actionSetName'
The command prints the name of the ActionSet defined for this backup method.
You may check the details of each ActionSet to find out how backup and restore will be performed.
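As a sketch, you can dump the ActionSet returned by the previous command to see the actions it defines (the placeholder below stands for that returned name):
kubectl get actionset <ACTIONSET_NAME> -oyaml # inspect the backup and restore actions defined by this ActionSet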
Versions and their compatibility rules are embedded in the ComponentVersion CR in KubeBlocks.
To get the list of compatible versions:
kubectl get cmpv postgresql -ojson | jq '.spec.compatibilityRules'
[
  {
    "compDefs": [
      "postgresql-12-"
    ],
    "releases": [
      "12.14.0",
      "12.14.1",
      "12.15.0"
    ]
  },
  {
    "compDefs": [
      "postgresql-14-"
    ],
    "releases": [
      "14.7.2",
      "14.8.0"
    ]
  }
]
Releases are grouped by component definitions, and each group has a list of compatible releases.
In this example, you can upgrade from version 12.14.0 to 12.14.1 or 12.15.0, and from 14.7.2 to 14.8.0, but you cannot upgrade from 12.14.0 to 14.8.0.
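Before upgrading, you can check which service version each component of a cluster is currently running; a minimal sketch, assuming the cluster's componentSpecs carry a serviceVersion field:
kubectl -n demo get cluster <CLUSTER_NAME> -ojson | jq '.spec.componentSpecs[] | {name, serviceVersion}' # print each component's name and current service version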
If you have made changes to a ComponentDefinition, its status may turn to Unavailable.
KubeBlocks sets the ComponentDefinition to Unavailable to prevent the changes from affecting existing clusters.
By describing the ComponentDefinition, you can see the following message:
Status:
  Message:              immutable fields can't be updated
  Observed Generation:  3
  Phase:                Unavailable
If the changes are intentional, you can annotate the ComponentDefinition by running the following command:
kubectl annotate componentdefinition <COMPONENT_DEFINITION_NAME> apps.kubeblocks.io/skip-immutable-check=true
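After adding the annotation, you can confirm that the ComponentDefinition becomes Available again (a quick check against the phase field shown above):
kubectl get componentdefinition <COMPONENT_DEFINITION_NAME> -ojson | jq -r '.status.phase' # expect the phase to return to Available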
If you are using K8s <= 1.23, you may encounter the following error when installing KubeBlocks:
unknown field "x-kubernetes-validations" .... if you choose to ignore these errors, turn validation off with --validate=false
This is because the x-kubernetes-validations field is not supported in K8s <= 1.23.
You can fix this by running the following command:
kubectl create -f https://github.com/apecloud/kubeblocks/releases/download/v1.0.0/kubeblocks_crds.yaml --validate=false
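Once the command finishes, a simple sanity check is to confirm the KubeBlocks CRDs are present:
kubectl get crd | grep kubeblocks.io # list the installed KubeBlocks CRDs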
KubeBlocks supports cancelling an OpsRequest that meets the following conditions: it is in the Running state, and its type is VerticalScaling or HorizontalScaling.
To cancel a running OpsRequest, you can run the following command:
kubectl patch opsrequest <OPSREQUEST_NAME> -p '{"spec":{"cancel":true}}' --type=merge
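After patching, you can watch the OpsRequest to confirm the cancellation takes effect (the phase typically moves to Cancelling and then Cancelled):
kubectl -n demo get opsrequest <OPSREQUEST_NAME> -w # watch the OpsRequest phase change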
Cluster/Component stuck in Updating status

If you find that a cluster/component is stuck in Updating status, check:
- whether all of its Pods are in Running status;
- whether the roles of the Pods are set, if required. To check the roles of a Pod, you can run the following command:
kubectl get po <POD_NAME> -L kubeblocks.io/role
- whether the image in the Pod spec matches the image in the Pod status:
apiVersion: v1
kind: Pod
metadata:
  name:
spec:
  containers:
  - image: repo/image:tag              # <==== image in spec
    name: c1
status:
  containerStatuses:
  - containerID: containerd://123456
    image: repo/image:tag              # <====== image in status
    imageID: repo/image:tag@sha256:123456
    name: c1
If the two fields do not match, check whether two or more images on your node share the same IMAGE ID but have different IMAGE tags.
If so, remove those images from your node and create a new Cluster.
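As a sketch, on a containerd-based node with crictl available, you can list image digests to find tags that share the same digest and remove the offending image (the image reference below is a placeholder):
crictl images --digests # list images with their digests; look for different tags sharing one digest
crictl rmi repo/image:tag # remove the offending image by tag or ID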
Cluster stuck in Deleting status, with the KubeBlocks log message: has no pods to running the pre-terminate action

When deleting a cluster, you may find it stuck in Deleting status. Check the KubeBlocks logs:
kubectl -n kb-system logs deployments/kubeblocks -f
You may see the following error:
> INFO build error: has no pods to running the pre-terminate action
This is because KubeBlocks runs the pre-terminate lifecycle action if one is defined in the corresponding ComponentDefinition.
If there are no Pods to run the pre-terminate action, the cluster stays stuck in Deleting status until the action is completed.
To skip the pre-terminate action, you can annotate the Component by running the following command:
kubectl annotate component <COMPONENT_NAME> apps.kubeblocks.io/skip-pre-terminate-action=true
This case typically happens when you create a cluster but, for some reason, it fails to create any Pod (e.g. the image cannot be pulled, there is a network issue, or there are not enough resources).
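After adding the annotation, you can confirm that the deletion proceeds (a quick check):
kubectl -n demo get cluster <CLUSTER_NAME> -w # watch until the Cluster object disappears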