This guide provides a comprehensive walkthrough for deploying and managing ZooKeeper clusters using the KubeBlocks ZooKeeper Add-on.
Before proceeding, verify your environment meets these requirements:
kubectl v1.21+ installed and configured with cluster access
Check if the ZooKeeper Add-on is installed:
helm list -n kb-system | grep zookeeper
NAME NAMESPACE REVISION UPDATED STATUS CHART
kb-addon-zookeeper kb-system 1 2026-04-03 deployed zookeeper-1.0.2
If the add-on isn't installed, choose an installation method:
# Add Helm repo
helm repo add kubeblocks https://apecloud.github.io/helm-charts
# For users in Mainland China, if GitHub is inaccessible or slow, use this alternative repo:
# helm repo add kubeblocks https://jihulab.com/api/v4/projects/150246/packages/helm/stable
# Update helm repo
helm repo update
# Search available Add-on versions
helm search repo kubeblocks/zookeeper --versions
# Install your desired version (replace <VERSION> with your chosen version)
helm upgrade -i kb-addon-zookeeper kubeblocks/zookeeper --version <VERSION> -n kb-system
# Add an index (kubeblocks is added by default)
kbcli addon index add kubeblocks https://github.com/apecloud/block-index.git
# Update the index
kbcli addon index update kubeblocks
To search and install an addon:
# Search Add-on
kbcli addon search zookeeper
# Install Add-on with your desired version (replace <VERSION> with your chosen version)
kbcli addon install zookeeper --version <VERSION>
To enable or disable an addon:
# Enable Add-on
kbcli addon enable zookeeper
# Disable Add-on
kbcli addon disable zookeeper
Version Compatibility
Always verify that the ZooKeeper Add-on version matches your KubeBlocks major version to avoid compatibility issues.
List available ZooKeeper versions:
kubectl get cmpv zookeeper
NAME VERSIONS STATUS AGE
zookeeper 3.9.4,3.9.2,3.8.4,3.7.2,3.6.4,3.4.14 Available 5m
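The VERSIONS column is a comma-separated list. If a script needs the newest supported version, one way to pick it is with a version-aware sort; a small sketch over the sample output above:

```shell
# Pick the newest ZooKeeper version from the comma-separated VERSIONS column.
# The string below is the sample output shown above; in a live cluster you
# could capture it with: kubectl get cmpv zookeeper -o jsonpath='{.status.serviceVersions}'
versions='3.9.4,3.9.2,3.8.4,3.7.2,3.6.4,3.4.14'
latest=$(printf '%s' "$versions" | tr ',' '\n' | sort -V | tail -n 1)
echo "latest: $latest"   # prints "latest: 3.9.4"
```

Note that sort -V compares version components numerically, so 3.9.4 correctly sorts after 3.9.2.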
ZooKeeper requires persistent storage. Verify available options:
kubectl get storageclass
Recommended storage characteristics:
Enough capacity and throughput for both volume claims the cluster defines (data and snapshot-log)
Deploy a basic ZooKeeper ensemble with 3 nodes:
kubectl apply -f https://raw.githubusercontent.com/apecloud/kubeblocks-addons/refs/heads/main/examples/zookeeper/cluster.yaml
This creates a 3-node ZooKeeper cluster from the following manifest:
apiVersion: apps.kubeblocks.io/v1
kind: Cluster
metadata:
name: zookeeper-cluster
namespace: demo
spec:
# Specifies the behavior when a Cluster is deleted.
# Valid options are: [DoNotTerminate, Delete, WipeOut]
# - `DoNotTerminate`: Prevents deletion of the Cluster. This policy ensures that all resources remain intact.
# - `Delete`: Deletes all Cluster resources, including PVCs, leading to a thorough cleanup and removal of all persistent data.
# - `WipeOut`: An aggressive policy that deletes all Cluster resources, including volume snapshots and backups in external storage.
terminationPolicy: Delete
componentSpecs:
- name: zookeeper
componentDef: zookeeper
# ServiceVersion specifies the version of the Service expected to be
# provisioned by this Component.
# Valid options are: [3.4.14,3.6.4,3.7.2,3.8.4,3.9.2,3.9.4]
serviceVersion: "3.9.2"
# Update `replicas` to your need.
# ZooKeeper requires an odd number of nodes for quorum (e.g., 3, 5, 7)
replicas: 3
# Specifies the resources required by the Component.
resources:
limits:
cpu: '0.5'
memory: 0.5Gi
requests:
cpu: '0.5'
memory: 0.5Gi
# Specifies a list of PersistentVolumeClaim templates that define the storage
# requirements for the Component.
volumeClaimTemplates:
- name: data
spec:
storageClassName: ""
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 20Gi
- name: snapshot-log
spec:
storageClassName: ""
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 20Gi
For more API fields and descriptions, refer to the API Reference.
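The odd-replica guidance in the manifest follows from ZooKeeper's majority quorum: an ensemble of N nodes needs floor(N/2)+1 votes to make progress, so it tolerates floor((N-1)/2) failures. A quick arithmetic sketch:

```shell
# Majority quorum size and failure tolerance for common ensemble sizes.
for replicas in 3 5 7; do
  quorum=$(( replicas / 2 + 1 ))          # votes needed for a majority
  tolerated=$(( replicas - quorum ))      # nodes that can fail
  echo "replicas=$replicas quorum=$quorum tolerates=$tolerated failure(s)"
done
```

This is also why even sizes add no resilience: a 4-node ensemble needs 3 votes and tolerates only one failure, the same as 3 nodes.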
When deploying a ZooKeeper ensemble, KubeBlocks automatically configures the pods, services, and credentials described below.
Confirm successful deployment:
kubectl get cluster zookeeper-cluster -n demo
NAME CLUSTER-DEFINITION TERMINATION-POLICY STATUS AGE
zookeeper-cluster Delete Running 3m
kubectl get pods -n demo -l app.kubernetes.io/instance=zookeeper-cluster -L kubeblocks.io/role
NAME READY STATUS RESTARTS AGE ROLE
zookeeper-cluster-zookeeper-0 2/2 Running 0 2m54s follower
zookeeper-cluster-zookeeper-1 2/2 Running 0 2m18s leader
zookeeper-cluster-zookeeper-2 2/2 Running 0 94s follower
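To locate the leader pod in a script, you can filter on the ROLE column that the kubeblocks.io/role label adds; a small awk sketch over a copy of the sample output above:

```shell
# Sample `kubectl get pods ... -L kubeblocks.io/role` output; in a live cluster,
# capture it with:
#   pods=$(kubectl get pods -n demo -l app.kubernetes.io/instance=zookeeper-cluster -L kubeblocks.io/role)
pods='NAME                           READY  STATUS   RESTARTS  AGE    ROLE
zookeeper-cluster-zookeeper-0  2/2    Running  0         2m54s  follower
zookeeper-cluster-zookeeper-1  2/2    Running  0         2m18s  leader
zookeeper-cluster-zookeeper-2  2/2    Running  0         94s    follower'

# The role is the last column; print the pod name whose role is "leader".
leader=$(printf '%s\n' "$pods" | awk '$NF == "leader" {print $1}')
echo "leader: $leader"
```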
KubeBlocks automatically provisions:
- A credentials Secret: zookeeper-cluster-zookeeper-account-admin
- A Service zookeeper-cluster-zookeeper (routes to leader)
- A Service zookeeper-cluster-zookeeper-readable (routes to all nodes)
Retrieve the admin credentials from the Secret:
# Get username
NAME=$(kubectl get secret -n demo zookeeper-cluster-zookeeper-account-admin \
-o jsonpath='{.data.username}' | base64 --decode)
# Get password
PASSWD=$(kubectl get secret -n demo zookeeper-cluster-zookeeper-account-admin \
-o jsonpath='{.data.password}' | base64 --decode)
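Kubernetes Secrets store values base64-encoded, which is why the commands above pipe through base64 --decode. A self-contained round-trip, using a made-up value rather than the real credentials:

```shell
# Encode a sample value the way a Secret stores it, then decode it back.
encoded=$(printf 'admin' | base64)
echo "encoded: $encoded"
decoded=$(printf '%s' "$encoded" | base64 --decode)
echo "decoded: $decoded"   # prints "decoded: admin"
```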
Connect directly to a pod using the srvr command to verify it's running:
kubectl exec -n demo zookeeper-cluster-zookeeper-0 -- \
bash -c "echo 'srvr' | nc localhost 2181"
Zookeeper version: 3.9.2-e454e8c7283100c7caec6dcae2bc82aaecb63023, built on 2024-02-12 20:59 UTC
Latency min/avg/max: 0/0.0/0
Received: 35
Sent: 34
Connections: 1
Outstanding: 0
Zxid: 0x400000009
Mode: follower
Node count: 5
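In health checks it is often enough to extract a single field from the srvr output, such as Mode. A parsing sketch over a trimmed copy of the output above:

```shell
# Trimmed `srvr` output; live output comes from:
#   echo 'srvr' | nc localhost 2181
srvr_output='Zookeeper version: 3.9.2
Latency min/avg/max: 0/0.0/0
Mode: follower
Node count: 5'

# Split on ": " and print the value of the Mode line.
mode=$(printf '%s\n' "$srvr_output" | awk -F': ' '/^Mode:/ {print $2}')
echo "role: $mode"   # prints "role: follower"
```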
Forward client port:
kubectl port-forward svc/zookeeper-cluster-zookeeper 2181:2181 -n demo
Connect using zkCli or any ZooKeeper client:
zkCli.sh -server 127.0.0.1:2181
| Port | Name | Description |
|---|---|---|
| 2181 | client | Client connections |
| 2888 | quorum | Follower connections to leader |
| 3888 | election | Leader election |
| 8080 | admin | Admin server (HTTP) |
| 7000 | metrics | Prometheus metrics endpoint |
Stopping a cluster temporarily suspends operations while preserving all data and configuration: the pods are deleted, but PVCs, Services, and the Cluster object remain, so the cluster can be restarted later.
kubectl apply -f https://raw.githubusercontent.com/apecloud/kubeblocks-addons/refs/heads/main/examples/zookeeper/stop.yaml
apiVersion: operations.kubeblocks.io/v1alpha1
kind: OpsRequest
metadata:
name: zookeeper-stop
namespace: demo
spec:
clusterName: zookeeper-cluster
type: Stop
Alternatively, stop the cluster by setting spec.componentSpecs[0].stop to true:
kubectl patch cluster zookeeper-cluster -n demo --type='json' -p='[
{
"op": "add",
"path": "/spec/componentSpecs/0/stop",
"value": true
}
]'
To resume operations, apply a Start OpsRequest:
kubectl apply -f https://raw.githubusercontent.com/apecloud/kubeblocks-addons/refs/heads/main/examples/zookeeper/start.yaml
apiVersion: operations.kubeblocks.io/v1alpha1
kind: OpsRequest
metadata:
name: zookeeper-start
namespace: demo
spec:
clusterName: zookeeper-cluster
type: Start
Alternatively, start the cluster by removing the stop field:
kubectl patch cluster zookeeper-cluster -n demo --type='json' -p='[
{
"op": "remove",
"path": "/spec/componentSpecs/0/stop"
}
]'
Choose carefully based on your data retention needs:
| Policy | Resources Removed | Data Removed | Recommended For |
|---|---|---|---|
| DoNotTerminate | None | None | Critical production clusters |
| Delete | All resources | PVCs deleted | Non-critical environments |
| WipeOut | All resources | Everything* | Test environments only |
*Includes snapshots and backups in external storage
For test environments, use this complete cleanup:
kubectl patch cluster zookeeper-cluster -p '{"spec":{"terminationPolicy":"WipeOut"}}' --type="merge" -n demo
kubectl delete cluster zookeeper-cluster -n demo