This guide demonstrates how to manage a ZooKeeper cluster's operational state in KubeBlocks, including:
| Operation | Effect | Use Case |
|---|---|---|
| Stop | Suspends cluster, retains storage | Cost savings, maintenance |
| Start | Resumes cluster operation | Restore service after pause |
| Restart | Recreates pods for component | Configuration changes, troubleshooting |
Before proceeding, verify your environment meets these requirements:

- kubectl v1.21+ installed and configured with cluster access

Stopping a ZooKeeper cluster in KubeBlocks suspends all of its pods while retaining persistent storage, so the cluster can later be resumed without data loss. Create an OpsRequest of type `Stop`:
```yaml
apiVersion: operations.kubeblocks.io/v1alpha1
kind: OpsRequest
metadata:
  name: zookeeper-stop
  namespace: demo
spec:
  clusterName: zookeeper-cluster
  type: Stop
```
Apply it:

```bash
kubectl apply -f https://raw.githubusercontent.com/apecloud/kubeblocks-addons/refs/heads/main/examples/zookeeper/stop.yaml
```
Alternatively, stop the cluster by patching the `stop` field on its component spec:

```bash
kubectl patch cluster zookeeper-cluster -n demo --type='json' -p='[
  {
    "op": "add",
    "path": "/spec/componentSpecs/0/stop",
    "value": true
  }
]'
```
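The stop and start patches differ only in the JSON Patch operation (`add` with `value: true` versus `remove`). If you toggle clusters often, a small helper can generate either body. This is a sketch; the `stop_patch` function is hypothetical, not part of KubeBlocks or kubectl:

```shell
# Hypothetical helper: emit a JSON patch that stops or starts the
# component at index $2 (index 0 is the single zookeeper component here).
stop_patch() {
  local op="$1" idx="$2"
  if [ "$op" = "stop" ]; then
    printf '[{"op":"add","path":"/spec/componentSpecs/%s/stop","value":true}]' "$idx"
  else
    printf '[{"op":"remove","path":"/spec/componentSpecs/%s/stop"}]' "$idx"
  fi
}

# Usage with the patch command from this guide:
#   kubectl patch cluster zookeeper-cluster -n demo --type='json' -p="$(stop_patch stop 0)"
```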
Watch the cluster status until it reaches `Stopped`:

```bash
kubectl get cluster zookeeper-cluster -n demo -w
```

Example output:

```
NAME                CLUSTER-DEFINITION   TERMINATION-POLICY   STATUS     AGE
zookeeper-cluster                        Delete               Stopping   6m
zookeeper-cluster                        Delete               Stopped    6m55s
```
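In scripts, a polling loop is often more convenient than an interactive `-w` watch. A minimal sketch, assuming the cluster's phase is readable at the jsonpath `.status.phase` (verify the exact field on your KubeBlocks version); the `poll_until` helper is hypothetical:

```shell
# Poll until the given command prints the wanted status, or give up
# after N tries (sleeping 5s between attempts). Example invocation:
#   poll_until Stopped 60 kubectl get cluster zookeeper-cluster -n demo \
#     -o jsonpath='{.status.phase}'
poll_until() {
  local want="$1" tries="$2"; shift 2
  local i status
  for i in $(seq "$tries"); do
    status="$("$@")"                 # run the status command passed as args
    [ "$status" = "$want" ] && return 0
    sleep 5
  done
  echo "timed out waiting for status '$want' (last seen: '$status')" >&2
  return 1
}
```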
Verify no running pods:
```bash
kubectl get pods -n demo -l app.kubernetes.io/instance=zookeeper-cluster
```

Example output:

```
No resources found in demo namespace.
```
Confirm persistent volumes remain:
```bash
kubectl get pvc -n demo -l app.kubernetes.io/instance=zookeeper-cluster
```

Example output:

```
NAME                                         STATUS   VOLUME    CAPACITY   ACCESS MODES   STORAGECLASS    AGE
data-zookeeper-cluster-zookeeper-0           Bound    pvc-xxx   20Gi       RWO            kb-default-sc   8m
data-zookeeper-cluster-zookeeper-1           Bound    pvc-xxx   20Gi       RWO            kb-default-sc   8m
data-zookeeper-cluster-zookeeper-2           Bound    pvc-xxx   20Gi       RWO            kb-default-sc   8m
snapshot-log-zookeeper-cluster-zookeeper-0   Bound    pvc-xxx   20Gi       RWO            kb-default-sc   8m
snapshot-log-zookeeper-cluster-zookeeper-1   Bound    pvc-xxx   20Gi       RWO            kb-default-sc   8m
snapshot-log-zookeeper-cluster-zookeeper-2   Bound    pvc-xxx   20Gi       RWO            kb-default-sc   8m
```
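A stopped cluster still incurs storage cost for these retained volumes. To quantify it, you can sum the requested capacities. A sketch with the six sample values inlined (on a live cluster, capture them with the jsonpath shown in the comment); it assumes every capacity is denominated in Gi:

```shell
# On a live cluster, capture capacities with:
#   caps=$(kubectl get pvc -n demo -l app.kubernetes.io/instance=zookeeper-cluster \
#     -o jsonpath='{.items[*].spec.resources.requests.storage}')
# Sample values matching the listing above (six 20Gi volumes):
caps="20Gi 20Gi 20Gi 20Gi 20Gi 20Gi"

total=0
for cap in $caps; do
  total=$((total + ${cap%Gi}))   # strip the Gi suffix; assumes all values are in Gi
done
echo "retained storage: ${total}Gi"
```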
To resume the stopped cluster, create an OpsRequest of type `Start`:

```yaml
apiVersion: operations.kubeblocks.io/v1alpha1
kind: OpsRequest
metadata:
  name: zookeeper-start
  namespace: demo
spec:
  clusterName: zookeeper-cluster
  type: Start
```
Apply it:

```bash
kubectl apply -f https://raw.githubusercontent.com/apecloud/kubeblocks-addons/refs/heads/main/examples/zookeeper/start.yaml
```
Alternatively, start the cluster by removing the `stop` field from its component spec:

```bash
kubectl patch cluster zookeeper-cluster -n demo --type='json' -p='[
  {
    "op": "remove",
    "path": "/spec/componentSpecs/0/stop"
  }
]'
```
Watch the cluster status until it returns to `Running`:

```bash
kubectl get cluster zookeeper-cluster -n demo -w
```

Example output:

```
NAME                CLUSTER-DEFINITION   TERMINATION-POLICY   STATUS     AGE
zookeeper-cluster                        Delete               Updating   9m
zookeeper-cluster                        Delete               Running    9m48s
```
Verify that all pods are running and have been assigned roles:

```bash
kubectl get pods -n demo -l app.kubernetes.io/instance=zookeeper-cluster -L kubeblocks.io/role
```

Example output:

```
NAME                            READY   STATUS    RESTARTS   AGE    ROLE
zookeeper-cluster-zookeeper-0   2/2     Running   0          107s   follower
zookeeper-cluster-zookeeper-1   2/2     Running   0          94s    leader
zookeeper-cluster-zookeeper-2   2/2     Running   0          80s    follower
```
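A healthy ZooKeeper ensemble has exactly one leader. Rather than eyeballing the ROLE column, you can count the roles from the pod labels. A sketch with sample values inlined (on a live cluster, capture them with the jsonpath in the comment; note the escaped dots in the label key):

```shell
# On a live cluster, capture the role labels with:
#   roles=$(kubectl get pods -n demo -l app.kubernetes.io/instance=zookeeper-cluster \
#     -o jsonpath='{range .items[*]}{.metadata.labels.kubeblocks\.io/role}{"\n"}{end}')
# Sample values matching the listing above:
roles="follower
leader
follower"

leaders=$(printf '%s\n' "$roles" | grep -c '^leader$')
followers=$(printf '%s\n' "$roles" | grep -c '^follower$')
echo "leaders=$leaders followers=$followers"
[ "$leaders" -eq 1 ] || echo "WARNING: expected exactly one leader" >&2
```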
Restarting recreates a component's pods without stopping the whole cluster, which is useful for applying configuration changes or troubleshooting. Create an OpsRequest of type `Restart`, listing the components to restart:

```yaml
apiVersion: operations.kubeblocks.io/v1alpha1
kind: OpsRequest
metadata:
  name: zookeeper-restart
  namespace: demo
spec:
  clusterName: zookeeper-cluster
  type: Restart
  restart:
  - componentName: zookeeper
```
Apply it:

```bash
kubectl apply -f https://raw.githubusercontent.com/apecloud/kubeblocks-addons/refs/heads/main/examples/zookeeper/restart.yaml
```
Watch the OpsRequest progress as pods are restarted one at a time:

```bash
kubectl get ops zookeeper-restart -n demo -w
```

Example output:

```
NAME                TYPE      CLUSTER             STATUS    PROGRESS   AGE
zookeeper-restart   Restart   zookeeper-cluster   Running   0/3        10s
zookeeper-restart   Restart   zookeeper-cluster   Running   1/3        94s
zookeeper-restart   Restart   zookeeper-cluster   Running   2/3        2m40s
zookeeper-restart   Restart   zookeeper-cluster   Running   3/3        3m45s
zookeeper-restart   Restart   zookeeper-cluster   Succeed   3/3        4m51s
```
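In automation, you typically want to block until the OpsRequest finishes and fail fast on a terminal error rather than watch interactively. A sketch that maps the phase to an exit code; `Succeed` matches the STATUS column above, while the failure phase names are assumptions to confirm against your KubeBlocks version:

```shell
# Map an OpsRequest phase to an exit code:
#   0 = finished successfully, 1 = terminal failure, 2 = still in progress.
# On a live cluster, fetch the phase with (field name assumed):
#   phase=$(kubectl get ops zookeeper-restart -n demo -o jsonpath='{.status.phase}')
ops_done() {
  case "$1" in
    Succeed) return 0 ;;
    Failed|Cancelled) return 1 ;;
    *) return 2 ;;
  esac
}

# Example polling loop:
#   while :; do
#     phase=$(kubectl get ops zookeeper-restart -n demo -o jsonpath='{.status.phase}')
#     ops_done "$phase"; rc=$?
#     [ "$rc" -ne 2 ] && exit "$rc"   # stop on success or terminal failure
#     sleep 5
#   done
```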
Confirm that all pods have been recreated and a leader has been re-elected:

```bash
kubectl get pods -n demo -l app.kubernetes.io/instance=zookeeper-cluster -L kubeblocks.io/role
```

Example output:

```
NAME                            READY   STATUS    RESTARTS   AGE     ROLE
zookeeper-cluster-zookeeper-0   2/2     Running   0          3m43s   follower
zookeeper-cluster-zookeeper-1   2/2     Running   0          2m1s    follower
zookeeper-cluster-zookeeper-2   2/2     Running   0          4m51s   leader
```
To clean up, delete the cluster and its namespace:

```bash
kubectl delete cluster zookeeper-cluster -n demo
kubectl delete ns demo
```