Deploying a Redis Cluster on K8s (three-master, three-slave mode) - deployment notes

1, Redis introduction

  • Redis stands for REmote DIctionary Server. It is an open-source, in-memory data store, commonly used as a database, cache, or message broker. It can store and manipulate high-level data types such as lists, hashes, sets, and sorted sets.
  • Because Redis accepts keys in many formats, operations can be performed on the server side, reducing the client's workload.
  • It keeps all data in memory and uses the disk only for persistence.
  • Redis is a popular data storage solution, used by technology giants such as GitHub, Pinterest, Snapchat, Twitter, Stack Overflow, and Flickr.
 
2, Why Redis
  • It is very fast. It is written in ANSI C and runs on POSIX systems such as Linux, macOS, and Solaris.
  • Redis is often ranked the most popular key/value store and the most popular NoSQL database used with containers.
  • As a caching layer, it reduces the number of calls to the cloud database backend.
  • Applications can access it through their client API libraries.
  • All popular programming languages support Redis.
  • It is open source and stable.
 
3, What is a Redis Cluster
  • Redis Cluster is a group of Redis instances that scales the database by partitioning it across nodes, making it more elastic.
  • Each member of the cluster, whether master or slave, manages a subset of the hash slots. If a master becomes unreachable, its slave is promoted to master. In a minimal Redis Cluster of three master nodes, each master has one slave (the minimum for failover) and is assigned part of the hash slot range 0-16383: node A holds slots 0 to 5000, node B 5001 to 10000, and node C 10001 to 16383.
  • Nodes communicate over an internal cluster bus, using a gossip protocol to propagate cluster information and discover new nodes.
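The key-to-slot mapping can be sketched in a few lines: Redis Cluster computes CRC16(key) mod 16384, using the XModem variant of CRC16 and honoring {...} hash tags. The sketch below is for illustration only (it is not redis-cli's own implementation); the reference check value CRC16("123456789") = 0x31C3 comes from the cluster specification.

```python
def crc16_xmodem(data: bytes) -> int:
    """CRC16-CCITT (XModem variant), the checksum Redis Cluster uses for key hashing."""
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ 0x1021) if crc & 0x8000 else (crc << 1)
            crc &= 0xFFFF
    return crc

def key_slot(key: str) -> int:
    """Map a key to one of the 16384 hash slots, honoring non-empty {...} hash tags."""
    start = key.find("{")
    if start != -1:
        end = key.find("}", start + 1)
        if end != -1 and end != start + 1:  # only a non-empty tag counts
            key = key[start + 1:end]
    return crc16_xmodem(key.encode()) % 16384

# Reference value from the cluster spec: CRC16 of "123456789" is 0x31C3 = 12739
print(key_slot("123456789"))  # 12739
# Keys sharing a hash tag land in the same slot (useful for multi-key operations):
print(key_slot("{user1000}.following") == key_slot("{user1000}.followers"))  # True
```

Hash tags are what let clients keep related keys on the same master despite the partitioning described above.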
 
4, Process record of deploying Redis Cluster in Kubernetes
Deploying a Redis cluster on Kubernetes is challenging, because each Redis instance relies on a configuration file that tracks the other cluster instances and their roles. For this we need a combination of the StatefulSet controller and persistent volumes.
 
Design principles of the StatefulSet model:
  • Topology status:
The instances of an application are not all equal; they must be started in a certain order. For example, the application's master node A must start before its slave node B. If Pods A and B are deleted, they must be recreated in exactly that order, and a newly created Pod must keep the same network identity as the one it replaces, so that existing clients can reach the new Pod in the same way.
 
  • Storage status:
Each instance of the application is bound to its own storage. The data Pod A reads the first time and the data it reads again ten minutes later should be identical, even if Pod A was recreated in between. A database with multiple storage instances is the typical example.
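A StatefulSet behind a headless Service provides exactly this kind of stable network identity: each Pod gets a DNS name of the form <pod>.<service>.<namespace>.svc.<cluster-domain>. A small illustration, using the StatefulSet and Service names from this deployment (cluster.local is the default cluster domain and may differ in your cluster):

```python
def stable_dns_names(statefulset, service, namespace, replicas,
                     domain="cluster.local"):
    """Per-Pod DNS names a StatefulSet gets from its governing headless Service."""
    return ["%s-%d.%s.%s.svc.%s" % (statefulset, i, service, namespace, domain)
            for i in range(replicas)]

# The six Pods of the redis-cluster StatefulSet in namespace wiseco:
for name in stable_dns_names("redis-cluster", "redis-cluster", "wiseco", 6):
    print(name)
# redis-cluster-0.redis-cluster.wiseco.svc.cluster.local
# ... through redis-cluster-5
```

These names survive Pod recreation even though the Pod IP changes, which is the property the topology-state requirement asks for.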
 
Storage volume
With the StatefulSet semantics understood, we need to prepare a storage volume for the data. It can be created statically or dynamically: the static approach is to create the PV and PVC by hand and reference them from the Pod. Here we use dynamically provisioned NFS as the mounted volume, so an NFS dynamic StorageClass must be deployed.
 
1. Configuring dynamic persistent storage for StatefulSets using NFS
1) On the NFS server (172.16.60.238), create the shared directory for the Redis Cluster
[root@k8s-harbor01 ~]# mkdir -p /data/storage/k8s/redis

  

2) Create the RBAC objects for the NFS provisioner

[root@k8s-master01 ~]# mkdir -p /opt/k8s/k8s_project/redis
[root@k8s-master01 ~]# cd /opt/k8s/k8s_project/redis
[root@k8s-master01 redis]# vim nfs-rbac.yaml
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-provisioner
  namespace: wiseco
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-provisioner-runner
  namespace: wiseco
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["watch", "create", "update", "patch"]
  - apiGroups: [""]
    resources: ["services", "endpoints"]
    verbs: ["get", "create", "list", "watch", "update"]
  - apiGroups: ["extensions"]
    resources: ["podsecuritypolicies"]
    resourceNames: ["nfs-provisioner"]
    verbs: ["use"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-provisioner
    namespace: wiseco
roleRef:
  kind: ClusterRole
  name: nfs-provisioner-runner
  apiGroup: rbac.authorization.k8s.io

Create and view

[root@k8s-master01 redis]# kubectl apply -f nfs-rbac.yaml
serviceaccount/nfs-provisioner created
clusterrole.rbac.authorization.k8s.io/nfs-provisioner-runner created
clusterrolebinding.rbac.authorization.k8s.io/run-nfs-provisioner created

[root@k8s-master01 redis]# kubectl get sa -n wiseco|grep nfs
nfs-provisioner                1         24s
[root@k8s-master01 redis]# kubectl get clusterrole -n wiseco|grep nfs
nfs-provisioner-runner                 2021-02-04T02:21:11Z
[root@k8s-master01 redis]# kubectl get clusterrolebinding -n wiseco|grep nfs
run-nfs-provisioner                    ClusterRole/nfs-provisioner-runner                 34s

  

3) Create the StorageClass for the Redis Cluster
[root@k8s-master01 redis]# ll
total 4
-rw-r--r-- 1 root root 1216 Feb  4 10:20 nfs-rbac.yaml

[root@k8s-master01 redis]# vim redis-nfs-class.yaml
apiVersion: storage.k8s.io/v1beta1
kind: StorageClass
metadata:
  name: redis-nfs-storage
  namespace: wiseco
provisioner: redis/nfs
reclaimPolicy: Retain

Create and view

[root@k8s-master01 redis]# kubectl apply -f redis-nfs-class.yaml
storageclass.storage.k8s.io/redis-nfs-storage created

[root@k8s-master01 redis]# kubectl get sc -n wiseco
NAME                PROVISIONER   RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
redis-nfs-storage   redis/nfs     Retain          Immediate           false

  

4) Create the NFS client provisioner for the Redis Cluster
[root@k8s-master01 redis]# ll
total 8
-rw-r--r-- 1 root root 1216 Feb  4 10:20 nfs-rbac.yaml
-rw-r--r-- 1 root root  155 Feb  4 10:24 redis-nfs-class.yaml

[root@k8s-master01 redis]# vim redis-nfs.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-nfs-client-provisioner
  namespace: wiseco
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis-nfs-client-provisioner
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: redis-nfs-client-provisioner
    spec:
      serviceAccount: nfs-provisioner
      containers:
        - name: redis-nfs-client-provisioner
          image: registry.cn-hangzhou.aliyuncs.com/open-ali/nfs-client-provisioner
          imagePullPolicy: IfNotPresent
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: redis/nfs
            - name: NFS_SERVER
              value: 172.16.60.238
            - name: NFS_PATH
              value: /data/storage/k8s/redis
      volumes:
        - name: nfs-client-root
          nfs:
            server: 172.16.60.238
            path: /data/storage/k8s/redis

Create and view

[root@k8s-master01 redis]# kubectl apply -f redis-nfs.yml
deployment.apps/redis-nfs-client-provisioner created

[root@k8s-master01 redis]# kubectl get pods -n wiseco|grep nfs
redis-nfs-client-provisioner-58b46549dd-h87gg   1/1     Running   0          40s

  

2. Deploy the Redis Cluster
The namespace used in this deployment is wiseco.

1) Prepare the image
Copy the redis-trib.rb tool from the Redis source tree into the current directory, then build the image.

[root@k8s-master01 redis]# pwd
/opt/k8s/k8s_project/redis
[root@k8s-master01 redis]# ll
total 12
-rw-r--r-- 1 root root 1216 Feb  4 15:31 nfs-rbac.yaml
-rw-r--r-- 1 root root  155 Feb  4 15:32 redis-nfs-class.yaml
-rw-r--r-- 1 root root 1006 Feb  4 15:32 redis-nfs.yml

[root@k8s-master01 redis]# mkdir image
[root@k8s-master01 redis]# cd image
[root@k8s-master01 image]# ll
total 64
-rw-r--r-- 1 root root   191 Feb  4 18:14 Dockerfile
-rwxr-xr-x 1 root root 60578 Feb  4 15:49 redis-trib.rb

[root@k8s-master01 image]# cat Dockerfile
FROM redis:4.0.11
RUN apt-get update -y
RUN apt-get install -y ruby \
rubygems
RUN apt-get clean all
RUN gem install redis
RUN apt-get install dnsutils -y
COPY redis-trib.rb /usr/local/bin/

  

Build the image and push it to the Harbor registry

[root@k8s-master01 image]# docker build -t 172.16.60.238/wiseco/redis:4.0.11 .
[root@k8s-master01 image]# docker push 172.16.60.238/wiseco/redis:4.0.11

  

2) Create the ConfigMap
The Redis configuration file is mounted via a ConfigMap. If the configuration were baked into the Docker image, we would have to rebuild the image every time the configuration changes, which is cumbersome, so the configuration is mounted from a ConfigMap instead.
[root@k8s-master01 redis]# pwd
/opt/k8s/k8s_project/redis
[root@k8s-master01 redis]# ll
total 12
drwxr-xr-x 2 root root   45 Feb  4 18:14 image
-rw-r--r-- 1 root root 1216 Feb  4 15:31 nfs-rbac.yaml
-rw-r--r-- 1 root root  155 Feb  4 15:32 redis-nfs-class.yaml
-rw-r--r-- 1 root root 1006 Feb  4 15:32 redis-nfs.yml

[root@k8s-master01 redis]# mkdir conf
[root@k8s-master01 redis]# cd conf/

[root@k8s-master01 conf]# vim redis-configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: redis-cluster
  namespace: wiseco
data:
  fix-ip.sh: |
    #!/bin/sh
    CLUSTER_CONFIG="/data/nodes.conf"
    if [ -f ${CLUSTER_CONFIG} ]; then
      if [ -z "${POD_IP}" ]; then
        echo "Unable to determine Pod IP address!"
        exit 1
      fi
      echo "Updating my IP to ${POD_IP} in ${CLUSTER_CONFIG}"
      sed -i.bak -e '/myself/ s/[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}/'${POD_IP}'/' ${CLUSTER_CONFIG}
    fi
    exec "$@"
  redis.conf: |
    cluster-enabled yes
    cluster-config-file /data/nodes.conf
    cluster-node-timeout 10000
    protected-mode no
    daemonize no
    pidfile /var/run/redis.pid
    port 6379
    tcp-backlog 511
    bind 0.0.0.0
    timeout 3600
    tcp-keepalive 1
    loglevel verbose
    logfile /data/redis.log
    databases 16
    save 900 1
    save 300 10
    save 60 10000
    stop-writes-on-bgsave-error yes
    rdbcompression yes
    rdbchecksum yes
    dbfilename dump.rdb
    dir /data
    #requirepass yl123456
    appendonly yes
    appendfilename "appendonly.aof"
    appendfsync everysec
    no-appendfsync-on-rewrite no
    auto-aof-rewrite-percentage 100
    auto-aof-rewrite-min-size 64mb
    lua-time-limit 20000
    slowlog-log-slower-than 10000
    slowlog-max-len 128
    #rename-command FLUSHALL ""
    latency-monitor-threshold 0
    notify-keyspace-events ""
    hash-max-ziplist-entries 512
    hash-max-ziplist-value 64
    list-max-ziplist-entries 512
    list-max-ziplist-value 64
    set-max-intset-entries 512
    zset-max-ziplist-entries 128
    zset-max-ziplist-value 64
    hll-sparse-max-bytes 3000
    activerehashing yes
    client-output-buffer-limit normal 0 0 0
    client-output-buffer-limit slave 256mb 64mb 60
    client-output-buffer-limit pubsub 32mb 8mb 60
    hz 10
    aof-rewrite-incremental-fsync yes

  

Note: the fix-ip.sh script replaces the old Pod IP with the new one in /data/nodes.conf whenever a Pod of the Redis cluster is rebuilt and comes back with a different IP. Without it, the cluster breaks.
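The effect of that sed one-liner can be mirrored in Python to make the logic explicit: only the line containing myself has its first IPv4 address replaced with the Pod's current IP. This is a sketch for illustration; the nodes.conf line below is a made-up example in the Redis 4.x format.

```python
import re

IPV4 = re.compile(r"\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}")

def fix_node_ip(nodes_conf, pod_ip):
    """Replace the first IPv4 address on the 'myself' line, like fix-ip.sh's sed."""
    fixed = []
    for line in nodes_conf.splitlines():
        if "myself" in line:
            line = IPV4.sub(pod_ip, line, count=1)  # sed without /g: first match only
        fixed.append(line)
    return "\n".join(fixed)

# Hypothetical nodes.conf entry for illustration:
old = "e5a3154a 172.30.217.99:6379@16379 myself,master - 0 0 1 connected 0-5460"
print(fix_node_ip(old, "172.30.217.83"))
# e5a3154a 172.30.217.83:6379@16379 myself,master - 0 0 1 connected 0-5460
```

Lines for other nodes are left untouched; the cluster updates those itself via gossip once this node rejoins with its correct address.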
 
Create and view
[root@k8s-master01 conf]# kubectl apply -f redis-configmap.yaml

[root@k8s-master01 conf]# kubectl get cm -n wiseco|grep redis
redis-cluster                     2      8m55s

  

3) Prepare the StatefulSet
volumeClaimTemplates give each Pod of the StatefulSet its own PersistentVolumeClaim:
[root@k8s-master01 redis]# pwd
/opt/k8s/k8s_project/redis
[root@k8s-master01 redis]# ll
total 12
drwxr-xr-x 2 root root   34 Feb  4 18:52 conf
drwxr-xr-x 2 root root   45 Feb  4 18:14 image
-rw-r--r-- 1 root root 1216 Feb  4 15:31 nfs-rbac.yaml
-rw-r--r-- 1 root root  155 Feb  4 15:32 redis-nfs-class.yaml
-rw-r--r-- 1 root root 1006 Feb  4 15:32 redis-nfs.yml

[root@k8s-master01 redis]# mkdir deploy
[root@k8s-master01 redis]# cd deploy/
[root@k8s-master01 deploy]# cat redis-cluster.yml
---
apiVersion: v1
kind: Service
metadata:
  namespace: wiseco
  name: redis-cluster
spec:
  clusterIP: None
  ports:
  - port: 6379
    targetPort: 6379
    name: client
  - port: 16379
    targetPort: 16379
    name: gossip
  selector:
    app: redis-cluster
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  namespace: wiseco
  name: redis-cluster
spec:
  serviceName: redis-cluster
  replicas: 6
  selector:
    matchLabels:
      app: redis-cluster
  template:
    metadata:
      labels:
        app: redis-cluster
    spec:
      containers:
      - name: redis
        image: 172.16.60.238/wiseco/redis:4.0.11
        ports:
        - containerPort: 6379
          name: client
        - containerPort: 16379
          name: gossip
        command: ["/etc/redis/fix-ip.sh", "redis-server", "/etc/redis/redis.conf"]
        env:
        - name: POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        volumeMounts:
        - name: conf
          mountPath: /etc/redis/
          readOnly: false
        - name: data
          mountPath: /data
          readOnly: false
      volumes:
      - name: conf
        configMap:
          name: redis-cluster
          defaultMode: 0755
  volumeClaimTemplates:
  - metadata:
      name: data
      annotations:
        volume.beta.kubernetes.io/storage-class: "redis-nfs-storage"
    spec:
      accessModes:
        - ReadWriteMany
      resources:
        requests:
          storage: 10Gi

  

Create and view

[root@k8s-master01 deploy]# kubectl apply -f redis-cluster.yml

[root@k8s-master01 deploy]# kubectl get pods -n wiseco|grep redis-cluster
redis-cluster-0                                 1/1     Running   0          10m
redis-cluster-1                                 1/1     Running   0          10m
redis-cluster-2                                 1/1     Running   0          10m
redis-cluster-3                                 1/1     Running   0          10m
redis-cluster-4                                 1/1     Running   0          9m35s
redis-cluster-5                                 1/1     Running   0          9m25s

[root@k8s-master01 deploy]# kubectl get svc -n wiseco|grep redis-cluster
redis-cluster    ClusterIP   None             <none>        6379/TCP,16379/TCP           10m

  

View PV, PVC

[root@k8s-master01 deploy]# kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS        CLAIM                         STORAGECLASS        REASON   AGE
pvc-20bcb3be-90e1-4354-bd11-4f442a3bd562   10Gi       RWX            Delete           Bound         wiseco/data-redis-cluster-0   redis-nfs-storage            19m
pvc-3b53a31b-9a53-4bd4-93ff-2cf9fed551de   10Gi       RWX            Delete           Bound         wiseco/data-redis-cluster-2   redis-nfs-storage            12m
pvc-43c0cba2-54a9-4416-afb6-8b7730a199dc   10Gi       RWX            Delete           Bound         wiseco/data-redis-cluster-1   redis-nfs-storage            12m
pvc-66daade5-1b97-41ce-a9e0-4cf88d63894d   10Gi       RWX            Delete           Terminating   wiseco/data-redis-cluster-5   redis-nfs-storage            11m
pvc-dd62a086-1802-446a-9f9d-35620f7f0b4a   10Gi       RWX            Delete           Bound         wiseco/data-redis-cluster-4   redis-nfs-storage            11m
pvc-e5aa9802-b983-471c-a7da-32eebc497610   10Gi       RWX            Delete           Bound         wiseco/data-redis-cluster-3   redis-nfs-storage            12m

[root@k8s-master01 deploy]# kubectl get pvc -n wiseco
NAME                   STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS        AGE
data-redis-cluster-0   Bound    pvc-20bcb3be-90e1-4354-bd11-4f442a3bd562   10Gi       RWX            redis-nfs-storage   19m
data-redis-cluster-1   Bound    pvc-43c0cba2-54a9-4416-afb6-8b7730a199dc   10Gi       RWX            redis-nfs-storage   12m
data-redis-cluster-2   Bound    pvc-3b53a31b-9a53-4bd4-93ff-2cf9fed551de   10Gi       RWX            redis-nfs-storage   12m
data-redis-cluster-3   Bound    pvc-e5aa9802-b983-471c-a7da-32eebc497610   10Gi       RWX            redis-nfs-storage   12m
data-redis-cluster-4   Bound    pvc-dd62a086-1802-446a-9f9d-35620f7f0b4a   10Gi       RWX            redis-nfs-storage   11m
data-redis-cluster-5   Bound    pvc-66daade5-1b97-41ce-a9e0-4cf88d63894d   10Gi       RWX            redis-nfs-storage   11m

  

4) View the NFS shared storage
On the NFS server (172.16.60.238), view the shared directory /data/storage/k8s/redis:
[root@k8s-harbor01 redis]# pwd
/data/storage/k8s/redis
[root@k8s-harbor01 redis]# ll
total 0
drwxrwxrwx 2 root root 63 Feb  4 18:59 wiseco-data-redis-cluster-0-pvc-20bcb3be-90e1-4354-bd11-4f442a3bd562
drwxrwxrwx 2 root root 63 Feb  4 18:59 wiseco-data-redis-cluster-1-pvc-43c0cba2-54a9-4416-afb6-8b7730a199dc
drwxrwxrwx 2 root root 63 Feb  4 18:59 wiseco-data-redis-cluster-2-pvc-3b53a31b-9a53-4bd4-93ff-2cf9fed551de
drwxrwxrwx 2 root root 63 Feb  4 19:00 wiseco-data-redis-cluster-3-pvc-e5aa9802-b983-471c-a7da-32eebc497610
drwxrwxrwx 2 root root 63 Feb  4 19:00 wiseco-data-redis-cluster-4-pvc-dd62a086-1802-446a-9f9d-35620f7f0b4a
drwxrwxrwx 2 root root 63 Feb  4 19:00 wiseco-data-redis-cluster-5-pvc-66daade5-1b97-41ce-a9e0-4cf88d63894d
[root@k8s-harbor01 redis]# ls ./*
./wiseco-data-redis-cluster-0-pvc-20bcb3be-90e1-4354-bd11-4f442a3bd562:
appendonly.aof  nodes.conf  redis.log

./wiseco-data-redis-cluster-1-pvc-43c0cba2-54a9-4416-afb6-8b7730a199dc:
appendonly.aof  nodes.conf  redis.log

./wiseco-data-redis-cluster-2-pvc-3b53a31b-9a53-4bd4-93ff-2cf9fed551de:
appendonly.aof  nodes.conf  redis.log

./wiseco-data-redis-cluster-3-pvc-e5aa9802-b983-471c-a7da-32eebc497610:
appendonly.aof  nodes.conf  redis.log

./wiseco-data-redis-cluster-4-pvc-dd62a086-1802-446a-9f9d-35620f7f0b4a:
appendonly.aof  nodes.conf  redis.log

./wiseco-data-redis-cluster-5-pvc-66daade5-1b97-41ce-a9e0-4cf88d63894d:
appendonly.aof  nodes.conf  redis.log

  

3. Initialize Redis Cluster
Next, form the Redis Cluster. Run the command below and type yes to accept the configuration.
Cluster layout: the first three nodes become master nodes and the last three become their slave nodes.

Note:
redis-trib.rb must be given IP addresses to initialize the Redis cluster. If domain names are used, the following error is reported: ******/redis/client.rb:126:in `call': ERR Invalid node address specified: redis-cluster-0.redis-headless.sts-app.svc.cluster.local:6379 (Redis::CommandError)

Here are the commands for Redis Cluster initialization:
Use the following command and type yes to accept the configuration. The first three nodes become master nodes and the last three nodes become slave nodes.
kubectl exec -it redis-cluster-0 -n wiseco -- redis-trib.rb create --replicas 1 $(kubectl get pods -l app=redis-cluster -n wiseco -o jsonpath='{range.items[*]}{.status.podIP}:6379 ')

First, get the Pod IP addresses of the six Redis Cluster nodes:

[root@k8s-master01 redis]# kubectl get pods -n wiseco -o wide|grep redis-cluster
redis-cluster-0                                 1/1     Running   0          4h34m   172.30.217.83    k8s-node04   <none>           <none>
redis-cluster-1                                 1/1     Running   0          4h34m   172.30.85.217    k8s-node01   <none>           <none>
redis-cluster-2                                 1/1     Running   0          4h34m   172.30.135.181   k8s-node03   <none>           <none>
redis-cluster-3                                 1/1     Running   0          4h34m   172.30.58.251    k8s-node02   <none>           <none>
redis-cluster-4                                 1/1     Running   0          4h33m   172.30.85.216    k8s-node01   <none>           <none>
redis-cluster-5                                 1/1     Running   0          4h33m   172.30.217.82    k8s-node04   <none>           <none>

[root@k8s-master01 redis]# kubectl get pods -l app=redis-cluster -n wiseco -o jsonpath='{range.items[*]}{.status.podIP}:6379 '
172.30.217.83:6379 172.30.85.217:6379 172.30.135.181:6379 172.30.58.251:6379 172.30.85.216:6379 172.30.217.82:6379

Pay special attention here: the closing single quote of the command above must be preceded by a space, because the ip:port pairs passed to the Redis Cluster initialization must be separated by spaces.

[root@k8s-master01 redis]# kubectl exec -it redis-cluster-0 -n wiseco -- redis-trib.rb create --replicas 1 $(kubectl get pods -l app=redis-cluster -n wiseco -o jsonpath='{range.items[*]}{.status.podIP}:6379 ')
>>> Creating cluster
>>> Performing hash slots allocation on 6 nodes...
Using 3 masters:
172.30.217.83:6379
172.30.85.217:6379
172.30.135.181:6379
Adding replica 172.30.58.251:6379 to 172.30.217.83:6379
Adding replica 172.30.85.216:6379 to 172.30.85.217:6379
Adding replica 172.30.217.82:6379 to 172.30.135.181:6379
M: e5a3154a17131075f35fb32953b8cf8d6cfc7df0 172.30.217.83:6379
   slots:0-5460 (5461 slots) master
M: 961398483262f505a115957e7e4eda7ff3e64900 172.30.85.217:6379
   slots:5461-10922 (5462 slots) master
M: 2d1440e37ea4f4e9f6d39d240367deaa609d324d 172.30.135.181:6379
   slots:10923-16383 (5461 slots) master
S: 0d7bf40bf18d474509116437959b65551cd68b03 172.30.58.251:6379
   replicates e5a3154a17131075f35fb32953b8cf8d6cfc7df0
S: 8cbf699a850c0dafe51524127a594fdbf0a27784 172.30.85.216:6379
   replicates 961398483262f505a115957e7e4eda7ff3e64900
S: 2987a33f4ce2e412dcc11c1c1daa2538591cd930 172.30.217.82:6379
   replicates 2d1440e37ea4f4e9f6d39d240367deaa609d324d
Can I set the above configuration? (type 'yes' to accept): yes
>>> Nodes configuration updated
>>> Assign a different config epoch to each node
>>> Sending CLUSTER MEET messages to join the cluster
Waiting for the cluster to join......
>>> Performing Cluster Check (using node 172.30.217.83:6379)
M: e5a3154a17131075f35fb32953b8cf8d6cfc7df0 172.30.217.83:6379
   slots:0-5460 (5461 slots) master
M: 961398483262f505a115957e7e4eda7ff3e64900 172.30.85.217:6379
   slots:5461-10922 (5462 slots) master
M: 2d1440e37ea4f4e9f6d39d240367deaa609d324d 172.30.135.181:6379
   slots:10923-16383 (5461 slots) master
M: 0d7bf40bf18d474509116437959b65551cd68b03 172.30.58.251:6379
   slots: (0 slots) master
   replicates e5a3154a17131075f35fb32953b8cf8d6cfc7df0
M: 8cbf699a850c0dafe51524127a594fdbf0a27784 172.30.85.216:6379
   slots: (0 slots) master
   replicates 961398483262f505a115957e7e4eda7ff3e64900
M: 2987a33f4ce2e412dcc11c1c1daa2538591cd930 172.30.217.82:6379
   slots: (0 slots) master
   replicates 2d1440e37ea4f4e9f6d39d240367deaa609d324d
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
From the above initialization information, we can see the cluster relationship:
redis-cluster-0 is the master node and redis-cluster-3 is its slave node.
redis-cluster-1 is the master node and redis-cluster-4 is its slave node.
redis-cluster-2 is the master node and redis-cluster-5 is its slave node.
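The slot allocation in the initialization output (5461 + 5462 + 5461 = 16384) follows from dividing the slot space as evenly as possible among the masters. A small sketch of that arithmetic (it mirrors the output above, not redis-trib.rb's actual Ruby code):

```python
def slot_ranges(n_masters, total_slots=16384):
    """Split the hash slots into n roughly equal contiguous (first, last) ranges."""
    per_master = total_slots / n_masters
    ranges, first, cursor = [], 0, 0.0
    for _ in range(n_masters):
        cursor += per_master        # ideal (fractional) end of this master's share
        last = round(cursor) - 1    # snap to the nearest whole slot boundary
        ranges.append((first, last))
        first = last + 1
    return ranges

print(slot_ranges(3))  # [(0, 5460), (5461, 10922), (10923, 16383)]
```

The middle master ends up with one extra slot (5462) because 16384 is not divisible by 3, exactly as shown in the redis-trib.rb output.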
 
4. Verify Redis Cluster deployment
[root@k8s-master01 redis]# kubectl exec -it redis-cluster-0 -n wiseco -- redis-cli cluster info
cluster_state:ok
cluster_slots_assigned:16384
cluster_slots_ok:16384
cluster_slots_pfail:0
cluster_slots_fail:0
cluster_known_nodes:6
cluster_size:3
cluster_current_epoch:6
cluster_my_epoch:1
cluster_stats_messages_ping_sent:130
cluster_stats_messages_pong_sent:137
cluster_stats_messages_sent:267
cluster_stats_messages_ping_received:132
cluster_stats_messages_pong_received:130
cluster_stats_messages_meet_received:5
cluster_stats_messages_received:267

[root@k8s-master01 redis]# for x in $(seq 0 5); do echo "redis-cluster-$x"; kubectl exec redis-cluster-$x -n wiseco -- redis-cli role; echo; done
redis-cluster-0
master
168
172.30.58.251
6379
168

redis-cluster-1
master
168
172.30.85.216
6379
168

redis-cluster-2
master
182
172.30.217.82
6379
168

redis-cluster-3
slave
172.30.217.83
6379
connected
182

redis-cluster-4
slave
172.30.85.217
6379
connected
168

redis-cluster-5
slave
172.30.135.181
6379
connected
182
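When scripting this health check, the cluster info output is straightforward to parse into key/value pairs. A minimal sketch (the field names are taken from the output above; the sample string is a shortened excerpt of it):

```python
def parse_cluster_info(raw):
    """Parse 'redis-cli cluster info' output (key:value lines) into a dict."""
    info = {}
    for line in raw.strip().splitlines():
        if ":" in line:
            key, _, value = line.partition(":")
            info[key.strip()] = value.strip()
    return info

sample = """cluster_state:ok
cluster_slots_assigned:16384
cluster_known_nodes:6
cluster_size:3"""

info = parse_cluster_info(sample)
print(info["cluster_state"])                # ok
print(int(info["cluster_slots_assigned"]))  # 16384
```

A monitoring script would typically alert when cluster_state is anything other than ok or cluster_slots_assigned drops below 16384.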


Added by Braimaster on Mon, 29 Nov 2021 08:50:08 +0200