  1. Prerequisites
  2. Deploying the Redis Sharding Cluster
  3. Verifying the Deployment
    1. Check the Cluster Status
    2. Verify Component and Pod Status
  4. Cleanup

Deploying a Redis Sharding Cluster (Cluster Mode) with KubeBlocks

Redis Cluster distributes data across multiple nodes (shards) using hash-based partitioning, allowing horizontal scaling for both reads and writes.

Use Cases

  • Large-scale applications requiring high throughput.
  • Distributed caching and session storage.
  • Write-heavy workloads (e.g., real-time analytics).

Prerequisites

Before proceeding, ensure the following:

  • Environment Setup:
    • A Kubernetes cluster is up and running.
    • The kubectl CLI tool is configured to communicate with your cluster.
    • The KubeBlocks CLI and KubeBlocks Operator are installed. Follow the installation instructions here (a quick way to verify the operator is shown after this list).
  • Namespace Preparation: To keep resources isolated, create a dedicated namespace for this tutorial:

      kubectl create ns demo

    Expected Output:

      namespace/demo created
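
As a quick check before continuing, you can confirm that the KubeBlocks operator pods are running. The kb-system namespace below is the default used by a standard installation and is an assumption here; adjust it if you installed the operator elsewhere.

  # The operator normally runs in the kb-system namespace (adjust if needed)
  kubectl get pods -n kb-system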

Deploying the Redis Sharding Cluster

To create a Redis sharding cluster (cluster mode) with 3 shards and 2 replicas per shard, define the following Cluster resource:

  apiVersion: apps.kubeblocks.io/v1
  kind: Cluster
  metadata:
    name: redis-sharding
    namespace: demo
  spec:
    terminationPolicy: Delete
    shardings:
      - name: shard
        shards: 3
        template:
          name: redis
          componentDef: redis-cluster-7
          disableExporter: true
          replicas: 2
          resources:
            limits:
              cpu: '1'
              memory: 1Gi
            requests:
              cpu: '1'
              memory: 1Gi
          serviceVersion: 7.2.4
          volumeClaimTemplates:
            - name: data
              spec:
                accessModes:
                  - ReadWriteOnce
                resources:
                  requests:
                    storage: 20Gi
          services:
            - name: redis-advertised # This is a per-pod svc, and will be used to parse advertised endpoints
              podService: true
              # - NodePort
              # - LoadBalancer
              serviceType: NodePort
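
Assuming the manifest above is saved to a file (the name redis-sharding.yaml below is only an example), apply it to create the cluster:

  # Create the Cluster resource in the demo namespace
  kubectl apply -f redis-sharding.yaml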

Key Configuration Details:

  • shardings: Specifies a list of ShardingSpec objects that configure the sharding topology for components of a Cluster. Here a single sharding named shard is split into 3 shards (shards: 3), each created from the redis component template with 2 replicas.

Verifying the Deployment

Check the Cluster Status

Once the cluster is deployed, check its status:

  kubectl get cluster redis-sharding -n demo -w

Expected Output:

  NAME             CLUSTER-DEFINITION   TERMINATION-POLICY   STATUS    AGE
  redis-sharding                        Delete               Running   103s
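
Instead of watching interactively, you can block until the cluster reports a Running phase. This is a minimal sketch: it assumes the Cluster resource exposes its phase at .status.phase (the field shown in the STATUS column above) and requires kubectl v1.23+ for the jsonpath condition.

  # Wait up to 5 minutes for the cluster phase to become Running
  kubectl wait cluster/redis-sharding -n demo \
    --for=jsonpath='{.status.phase}'=Running --timeout=300s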

Verify Component and Pod Status

List all components belonging to this cluster:

  kubectl get cmp -l app.kubernetes.io/instance=redis-sharding -n demo

Expected Output:

  NAME                       DEFINITION              SERVICE-VERSION   STATUS    AGE
  redis-sharding-shard-5cd   redis-cluster-7-1.0.0   7.2.4             Running   2m34s
  redis-sharding-shard-drg   redis-cluster-7-1.0.0   7.2.4             Running   2m34s
  redis-sharding-shard-tgf   redis-cluster-7-1.0.0   7.2.4             Running   2m34s

Each component represents one shard, identified by the hash suffix in its name.
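
Because the manifest declares a per-pod service (podService: true for redis-advertised with serviceType: NodePort), KubeBlocks also creates one advertised Service per replica. You can list the cluster's Services with the same label selector used above; the exact service names are generated by KubeBlocks and will differ between deployments.

  # List Services created for the cluster, including the per-pod advertised ones
  kubectl get svc -l app.kubernetes.io/instance=redis-sharding -n demo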

Check the pods and their roles:

  kubectl get pods -l app.kubernetes.io/instance=redis-sharding -L kubeblocks.io/role -n demo

Expected Output:

  NAME                         READY   STATUS    RESTARTS   AGE     ROLE
  redis-sharding-shard-5cd-0   2/2     Running   0          3m55s   primary
  redis-sharding-shard-5cd-1   2/2     Running   0          3m35s   secondary
  redis-sharding-shard-drg-0   2/2     Running   0          3m53s   primary
  redis-sharding-shard-drg-1   2/2     Running   0          3m35s   secondary
  redis-sharding-shard-tgf-0   2/2     Running   0          3m54s   primary
  redis-sharding-shard-tgf-1   2/2     Running   0          3m36s   secondary

There are six replicas in total, two per component (one primary and one secondary).
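
To confirm that the shards have formed a single Redis Cluster, you can run redis-cli cluster info inside one of the primary pods. This is a hedged sketch: the secret name (redis-sharding-shard-5cd-account-default) follows a common KubeBlocks naming convention for the default account but may differ in your deployment, and the default exec container may not be the Redis server in your setup.

  # Fetch the password for the default account (secret name is an assumption;
  # run `kubectl get secrets -n demo` to find the right one if it differs)
  PASSWORD=$(kubectl get secret redis-sharding-shard-5cd-account-default -n demo \
    -o jsonpath='{.data.password}' | base64 -d)

  # cluster_state should report ok and cluster_size should be 3 (one per shard);
  # add -c <container> if the default container is not the Redis server
  kubectl exec -it redis-sharding-shard-5cd-0 -n demo -- \
    redis-cli -a "$PASSWORD" cluster info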

Cleanup

To remove all resources created during this tutorial:

  kubectl delete cluster redis-sharding -n demo
  kubectl delete ns demo
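
With terminationPolicy: Delete, deleting the cluster also removes its PVCs, while any backups are kept. If you want backups removed as well, KubeBlocks supports the WipeOut policy; a hedged example of switching the policy before deletion:

  # Optionally switch to WipeOut before deleting, so backups are removed too
  kubectl patch cluster redis-sharding -n demo --type merge \
    -p '{"spec":{"terminationPolicy":"WipeOut"}}'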
