Session sharing based on Redis 5 [research on Redis 5.x cluster application]

We are building a session-sharing module based on Spring Session. The Redis cluster behind it (Redis 5.0.3) is packaged as a POM component and pulled in as a dependency in each subsystem's pom.xml, so that sessions can be shared across all subsystems of the IoT platform and each subsystem can use it with minimal effort.

 

Today's topic is setting up the Redis 5.0.3 environment.

The process is similar to the Redis 3.2.8 setup I introduced earlier (https://www.cnblogs.com/shihuc/p/7882004.html). The difference is that Redis 5 no longer relies on the ruby script for cluster management; that functionality has been reimplemented in C and folded into redis-cli itself. So rather than repeating the basics, this post focuses on how to configure a cluster with redis-cli.

 

First, the configuration items that need to be modified are as follows:

bind 10.95.200.12
protected-mode no
port 7380
daemonize yes
pidfile /var/run/redis_7380.pid
dbfilename dump-7380.rdb
appendonly yes
appendfilename "appendonly-7380.aof"
cluster-enabled yes
cluster-config-file nodes-7380.conf
cluster-node-timeout 15000
notify-keyspace-events "Ex"
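
Since the six instances differ only in port-derived values, the per-node config files can be generated from a template rather than edited by hand. A minimal sketch; the template file name and the PORT placeholder are my own convention, not from the post:

```shell
# Write a shared template once per machine (the bind address varies per host).
cat > redis-template.conf <<'EOF'
bind 10.95.200.12
protected-mode no
port PORT
daemonize yes
pidfile /var/run/redis_PORT.pid
dbfilename dump-PORT.rdb
appendonly yes
appendfilename "appendonly-PORT.aof"
cluster-enabled yes
cluster-config-file nodes-PORT.conf
cluster-node-timeout 15000
notify-keyspace-events "Ex"
EOF

# Expand the placeholder for each instance on this machine.
for port in 7380 7381; do
  sed "s/PORT/${port}/g" redis-template.conf > "redis-${port}.conf"
done
```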

My environment is three master/slave pairs. The configuration above is for one node, IP 10.95.200.12, port 7380; the remaining nodes are configured the same way. I have three virtual machines (10.95.200.12, 10.95.200.13, 10.95.200.14), and each machine hosts two instances, on ports 7380 and 7381.

 

Once each instance is configured, it needs to be started. For example, the following starts the instance on 10.95.200.12, port 7380.

[tkiot@tkwh-kfcs-app2 redis]$ ./bin/redis-server redis-7380.conf 
23941:C 10 Jul 2019 08:34:05.607 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
23941:C 10 Jul 2019 08:34:05.607 # Redis version=5.0.3, bits=64, commit=00000000, modified=0, pid=23941, just started
23941:C 10 Jul 2019 08:34:05.607 # Configuration loaded

Here's a little episode:

The configuration file that works correctly on each server is named redis-7380.conf. At first, through carelessness, I named the configuration file nodes-7380.conf, the same name as the cluster-config-file value. This is a real trap: startup prints exactly the same output shown above, just as if the server had started correctly, but ps shows no redis process at all. That pit cost me a whole morning.
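
One way to guard against this mistake is a small wrapper that refuses to start redis-server when the config file's name collides with its own cluster-config-file value. A sketch; the start_redis helper is my own, and the redis-server path is the one used in this post:

```shell
# Refuse to start an instance whose config file name equals its
# cluster-config-file value (the trap described above).
start_redis() {
  conf="$1"
  nodesfile=$(awk '/^cluster-config-file/ {print $2}' "$conf")
  if [ "$(basename "$conf")" = "$nodesfile" ]; then
    echo "refusing to start: $conf collides with cluster-config-file ($nodesfile)" >&2
    return 1
  fi
  ./bin/redis-server "$conf"
}
```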

 

After starting with redis-7380.conf as above, you can inspect the cluster nodes:

[tkiot@tkwh-kfcs-app2 redis]$ redis-cli -c -h 10.95.200.12 -p 7380
10.95.200.12:7380> cluster nodes
7f4cf1bffc7e42a0e2d15bcc5a0a5386711813e8 :7380@17380 myself,master - 0 0 0 connected

 

First, let's see what cluster-related subcommands redis-cli offers:

[tkiot@tkwh-kfcs-app2 redis]$ redis-cli --cluster help           
Cluster Manager Commands:
  create         host1:port1 ... hostN:portN
                 --cluster-replicas <arg>
  check          host:port
                 --cluster-search-multiple-owners
  info           host:port
  fix            host:port
                 --cluster-search-multiple-owners
  reshard        host:port
                 --cluster-from <arg>
                 --cluster-to <arg>
                 --cluster-slots <arg>
                 --cluster-yes
                 --cluster-timeout <arg>
                 --cluster-pipeline <arg>
                 --cluster-replace
  rebalance      host:port
                 --cluster-weight <node1=w1...nodeN=wN>
                 --cluster-use-empty-masters
                 --cluster-timeout <arg>
                 --cluster-simulate
                 --cluster-pipeline <arg>
                 --cluster-threshold <arg>
                 --cluster-replace
  add-node       new_host:new_port existing_host:existing_port
                 --cluster-slave
                 --cluster-master-id <arg>
  del-node       host:port node_id
  call           host:port command arg arg .. arg
  set-timeout    host:port milliseconds
  import         host:port
                 --cluster-from <arg>
                 --cluster-copy
                 --cluster-replace
  help           

Next, create the cluster through redis-cli; I wrapped the command in a shell script:

#!/bin/bash
/u02/redis/bin/redis-cli --cluster create 10.95.200.12:7380 10.95.200.13:7380 10.95.200.14:7380 10.95.200.12:7381 10.95.200.13:7381 10.95.200.14:7381 --cluster-replicas 1

The script lists six nodes. The master and slave roles and the master-slave pairings are assigned automatically when the cluster is built.
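
After creation, it is worth verifying the result with the check and info subcommands from the help output above. A sketch; the commands are only echoed here, and the last line can be uncommented to run against the live cluster:

```shell
CLI=/u02/redis/bin/redis-cli      # path from the create script above
SEED=10.95.200.12:7380            # any node of the cluster works as a seed
CHECK_CMD="$CLI --cluster check $SEED"
INFO_CMD="$CLI --cluster info $SEED"
printf '%s\n' "$CHECK_CMD" "$INFO_CMD"
# $CLI --cluster check "$SEED"    # uncomment to run against the live cluster
```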

 

Common commands for inspecting cluster nodes:

CLUSTER INFO   prints cluster information.
CLUSTER NODES  lists all the nodes currently known to the cluster, along with their state.
[tkiot@tkwh-kfcs-app2 redis]$ redis-cli -c -h 10.95.200.12 -p 7380
10.95.200.12:7380> 
10.95.200.12:7380> cluster nodes
ed309033dbefe2b0b64ad7fb643c4d2531e53b95 10.95.200.13:7380@17380 master - 0 1562720545267 2 connected 5461-10922
2cb8c6a7c86f512db2c6ca88eb67449197f5b885 10.95.200.12:7381@17381 slave cf6ca00cb36850762fdff1223684edf1fb9bd4ba 0 1562720546000 4 connected
cf6ca00cb36850762fdff1223684edf1fb9bd4ba 10.95.200.14:7380@17380 master - 0 1562720546000 3 connected 10923-16383
26c3a7cd48fe4bddf9cf0c60a3c30b6b1f273135 10.95.200.12:7380@17380 myself,slave 467d4c7508d1cb371ed52c4c6574506cba40c328 0 1562720546000 1 connected
467d4c7508d1cb371ed52c4c6574506cba40c328 10.95.200.13:7381@17381 master - 0 1562720547271 8 connected 0-5460
fbece4571b50904d93a45afbcce66941d53a45b5 10.95.200.14:7381@17381 slave ed309033dbefe2b0b64ad7fb643c4d2531e53b95 0 1562720546269 6 connected
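
The cluster nodes output is line-oriented and easy to post-process. As a sketch, the following filters it down to the masters and their slot ranges; the sample lines are copied from the output above, and against a live node the here-doc would be replaced by `redis-cli -c -h 10.95.200.12 -p 7380 cluster nodes`:

```shell
# Fields: id, addr, flags, master-id, ping-sent, pong-recv, epoch, link-state, slots.
MASTERS=$(awk '$3 ~ /master/ {print $2, $9}' <<'EOF'
ed309033dbefe2b0b64ad7fb643c4d2531e53b95 10.95.200.13:7380@17380 master - 0 1562720545267 2 connected 5461-10922
2cb8c6a7c86f512db2c6ca88eb67449197f5b885 10.95.200.12:7381@17381 slave cf6ca00cb36850762fdff1223684edf1fb9bd4ba 0 1562720546000 4 connected
cf6ca00cb36850762fdff1223684edf1fb9bd4ba 10.95.200.14:7380@17380 master - 0 1562720546000 3 connected 10923-16383
EOF
)
echo "$MASTERS"
```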

 

Make the cluster forget a specified node (the node id is the leftmost column of the cluster nodes output above):

10.95.200.12:7380> cluster forget ed309033dbefe2b0b64ad7fb643c4d2531e53b95
OK
10.95.200.12:7380> 
10.95.200.12:7380> cluster nodes
2cb8c6a7c86f512db2c6ca88eb67449197f5b885 10.95.200.12:7381@17381 slave cf6ca00cb36850762fdff1223684edf1fb9bd4ba 0 1562720579324 4 connected
cf6ca00cb36850762fdff1223684edf1fb9bd4ba 10.95.200.14:7380@17380 master - 0 1562720578323 3 connected 10923-16383
26c3a7cd48fe4bddf9cf0c60a3c30b6b1f273135 10.95.200.12:7380@17380 myself,slave 467d4c7508d1cb371ed52c4c6574506cba40c328 0 1562720576000 1 connected
467d4c7508d1cb371ed52c4c6574506cba40c328 10.95.200.13:7381@17381 master - 0 1562720577320 8 connected 0-5460
fbece4571b50904d93a45afbcce66941d53a45b5 10.95.200.14:7381@17381 slave - 0 1562720577000 6 connected
10.95.200.12:7380>

You can't forget yourself.

10.95.200.12:7380> cluster forget 26c3a7cd48fe4bddf9cf0c60a3c30b6b1f273135
(error) ERR I tried hard but I can't forget myself...
10.95.200.12:7380>

 

You can't forget your master either.

10.95.200.12:7380> cluster forget 467d4c7508d1cb371ed52c4c6574506cba40c328
(error) ERR Can't forget my master!
10.95.200.12:7380> 

The forget command above does not actually delete the node; it only removes it from the node table temporarily (the node is put on a ban list for about 60 seconds). After that period, gossip propagates it back, and running cluster nodes again shows the complete cluster once more.

 

To actually delete a cluster node, use the following command:

redis-cli --cluster del-node <ip>:<port> <node_id>

 

Now look at my attempt. The following operation is wrong; the result shows why:

[tkiot@tkwh-kfcs-app2 redis]$ redis-cli --cluster del-node 10.95.200.12:7380 ed309033dbefe2b0b64ad7fb643c4d2531e53b95
>>> Removing node ed309033dbefe2b0b64ad7fb643c4d2531e53b95 from cluster 10.95.200.12:7380
[ERR] Node 10.95.200.13:7380 is not empty! Reshard data away and try again.

1) Check whether node 10.95.200.13:7380 holds any data:

10.95.200.13:7380> keys *
1) "taikang#session:sessions:expires:b6cd8269-dff0-463e-aefb-03167a167292"

2) Clear all of its data:

10.95.200.13:7380> flushdb 
OK

At this point, running redis-cli --cluster del-node 10.95.200.12:7380 ed309033dbefe2b0b64ad7fb643c4d2531e53b95 again still fails for the same reason: the node is not empty. Why? Because the node still owns hash slots; flushing the keys is not enough. The del-node command was simply not being used the way Redis intends.

 

Note that deleting a master this way is obviously wrong: the "Node is not empty" message means its hash slots must first be resharded to other nodes. In a Redis cluster the masters are peers, each owning a share of the 16384 slots, so removing a master/slave pair takes three steps:

A. Delete the slave node first: redis-cli --cluster del-node <ip>:<port> <node_id>
B. Reshard the slots of the master corresponding to the deleted slave over to another node: redis-cli --cluster reshard <master's ip:port> --cluster-from <node_id of the master being removed> --cluster-to <node_id of the master receiving the slots> --cluster-slots <number of slots to move; here, all slots owned by the master being removed> --cluster-yes
C. After the reshard, delete the now-empty master: redis-cli --cluster del-node <ip>:<port> <node_id>
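
The three steps above can be sketched as a script. The addresses, node ids, and slot count below are taken from the example cluster in this post purely as an illustration; the commands are only echoed, and the eval line can be uncommented to actually run them:

```shell
CLI=redis-cli
SLAVE_ADDR=10.95.200.14:7381
SLAVE_ID=fbece4571b50904d93a45afbcce66941d53a45b5
MASTER_ADDR=10.95.200.13:7381
MASTER_ID=467d4c7508d1cb371ed52c4c6574506cba40c328
DEST_ID=ed309033dbefe2b0b64ad7fb643c4d2531e53b95   # master that receives the slots
SLOTS=5460                                         # all slots owned by MASTER_ID

STEP_A="$CLI --cluster del-node $SLAVE_ADDR $SLAVE_ID"
STEP_B="$CLI --cluster reshard $MASTER_ADDR --cluster-from $MASTER_ID --cluster-to $DEST_ID --cluster-slots $SLOTS --cluster-yes"
STEP_C="$CLI --cluster del-node $MASTER_ADDR $MASTER_ID"
printf '%s\n' "$STEP_A" "$STEP_B" "$STEP_C"
# eval "$STEP_A" && eval "$STEP_B" && eval "$STEP_C"   # run against a live cluster
```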

 

The following demonstrates these basic redis cluster operations.

Node information before operation:

[tkiot@tkwh-kfcs-app2 redis]$ redis-cli -c -h 10.95.200.12 -p 7380 cluster nodes
ed309033dbefe2b0b64ad7fb643c4d2531e53b95 10.95.200.13:7380@17380 master - 0 1563276949623 11 connected 0-5459 5461-16382
2cb8c6a7c86f512db2c6ca88eb67449197f5b885 10.95.200.12:7381@17381 slave cf6ca00cb36850762fdff1223684edf1fb9bd4ba 0 1563276951624 10 connected
cf6ca00cb36850762fdff1223684edf1fb9bd4ba 10.95.200.14:7380@17380 master - 0 1563276950000 10 connected 16383
26c3a7cd48fe4bddf9cf0c60a3c30b6b1f273135 10.95.200.12:7380@17380 myself,slave 467d4c7508d1cb371ed52c4c6574506cba40c328 0 1563276950000 1 connected
fbece4571b50904d93a45afbcce66941d53a45b5 10.95.200.14:7381@17381 slave ed309033dbefe2b0b64ad7fb643c4d2531e53b95 0 1563276950000 11 connected
467d4c7508d1cb371ed52c4c6574506cba40c328 10.95.200.13:7381@17381 master - 0 1563276950624 8 connected 5460

1. Delete nodes:

1) Delete the slave node first

[tkiot@tkwh-kfcs-app2 redis]$ redis-cli --cluster del-node 10.95.200.14:7381 fbece4571b50904d93a45afbcce66941d53a45b5
>>> Removing node fbece4571b50904d93a45afbcce66941d53a45b5 from cluster 10.95.200.14:7381
>>> Sending CLUSTER FORGET messages to the cluster...
>>> SHUTDOWN the node.

The structure of slave node after deletion:

[tkiot@tkwh-kfcs-app2 redis]$ redis-cli -c -h 10.95.200.12 -p 7380 cluster nodes                                     
ed309033dbefe2b0b64ad7fb643c4d2531e53b95 10.95.200.13:7380@17380 master - 0 1563277493737 11 connected 0-5459 5461-16382
2cb8c6a7c86f512db2c6ca88eb67449197f5b885 10.95.200.12:7381@17381 slave cf6ca00cb36850762fdff1223684edf1fb9bd4ba 0 1563277492000 10 connected
cf6ca00cb36850762fdff1223684edf1fb9bd4ba 10.95.200.14:7380@17380 master - 0 1563277493000 10 connected 16383
26c3a7cd48fe4bddf9cf0c60a3c30b6b1f273135 10.95.200.12:7380@17380 myself,slave 467d4c7508d1cb371ed52c4c6574506cba40c328 0 1563277493000 1 connected
467d4c7508d1cb371ed52c4c6574506cba40c328 10.95.200.13:7381@17381 master - 0 1563277494738 8 connected 5460

 

2) Reshard the slots of the master whose slave was just deleted:

[tkiot@tkwh-kfcs-app2 redis]$ redis-cli --cluster reshard 10.95.200.13:7381 --cluster-from 467d4c7508d1cb371ed52c4c6574506cba40c328 --cluster-to ed309033dbefe2b0b64ad7fb643c4d2531e53b95 --cluster-slots 5460 --cluster-yes                
>>> Performing Cluster Check (using node 10.95.200.13:7381)
M: 467d4c7508d1cb371ed52c4c6574506cba40c328 10.95.200.13:7381
   slots:[5460] (1 slots) master
   1 additional replica(s)
M: ed309033dbefe2b0b64ad7fb643c4d2531e53b95 10.95.200.13:7380
   slots:[0-5459],[5461-16382] (16382 slots) master
S: 26c3a7cd48fe4bddf9cf0c60a3c30b6b1f273135 10.95.200.12:7380
   slots: (0 slots) slave
   replicates 467d4c7508d1cb371ed52c4c6574506cba40c328
M: cf6ca00cb36850762fdff1223684edf1fb9bd4ba 10.95.200.14:7380
   slots:[16383] (1 slots) master
   1 additional replica(s)
S: 2cb8c6a7c86f512db2c6ca88eb67449197f5b885 10.95.200.12:7381
   slots: (0 slots) slave
   replicates cf6ca00cb36850762fdff1223684edf1fb9bd4ba
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
Ready to move 5460 slots.
  Source nodes:
    M: 467d4c7508d1cb371ed52c4c6574506cba40c328 10.95.200.13:7381
       slots:[5460] (1 slots) master
       1 additional replica(s)
  Destination node:
    M: ed309033dbefe2b0b64ad7fb643c4d2531e53b95 10.95.200.13:7380
       slots:[0-5459],[5461-16382] (16382 slots) master
  Resharding plan:
    Moving slot 5460 from 467d4c7508d1cb371ed52c4c6574506cba40c328
Moving slot 5460 from 10.95.200.13:7381 to 10.95.200.13:7380:

Basic description of the parameters:

--cluster-from: the source node(s) of the reshard, i.e. slots are taken away from the node(s) given by this option. Multiple node ids can be given, separated by commas.

--cluster-to: the opposite of --cluster-from; identifies the node that receives the slots. Only one node can be specified.

--cluster-slots: the number of slots to move in the reshard.

--cluster-yes: apply the resharding plan directly without asking for confirmation.

 

Cluster information after operation:

[tkiot@tkwh-kfcs-app2 redis]$ redis-cli -c -h 10.95.200.12 -p 7380 cluster nodes
ed309033dbefe2b0b64ad7fb643c4d2531e53b95 10.95.200.13:7380@17380 master - 0 1563277825000 11 connected 0-16382
2cb8c6a7c86f512db2c6ca88eb67449197f5b885 10.95.200.12:7381@17381 slave cf6ca00cb36850762fdff1223684edf1fb9bd4ba 0 1563277825000 10 connected
cf6ca00cb36850762fdff1223684edf1fb9bd4ba 10.95.200.14:7380@17380 master - 0 1563277825386 10 connected 16383
26c3a7cd48fe4bddf9cf0c60a3c30b6b1f273135 10.95.200.12:7380@17380 myself,slave ed309033dbefe2b0b64ad7fb643c4d2531e53b95 0 1563277824000 1 connected
467d4c7508d1cb371ed52c4c6574506cba40c328 10.95.200.13:7381@17381 master - 0 1563277826390 8 connected

The node 10.95.200.13:7381 no longer owns any slot; before the reshard it held slot 5460.

 

3) Take the master offline:

[tkiot@tkwh-kfcs-app2 redis]$ redis-cli --cluster del-node 10.95.200.13:7381 467d4c7508d1cb371ed52c4c6574506cba40c328
>>> Removing node 467d4c7508d1cb371ed52c4c6574506cba40c328 from cluster 10.95.200.13:7381
>>> Sending CLUSTER FORGET messages to the cluster...
>>> SHUTDOWN the node.

Cluster information after operation (from 6 nodes to 4 nodes now):

[tkiot@tkwh-kfcs-app2 redis]$ redis-cli -c -h 10.95.200.12 -p 7380 cluster nodes                                     
ed309033dbefe2b0b64ad7fb643c4d2531e53b95 10.95.200.13:7380@17380 master - 0 1563278285304 11 connected 0-16382
2cb8c6a7c86f512db2c6ca88eb67449197f5b885 10.95.200.12:7381@17381 slave cf6ca00cb36850762fdff1223684edf1fb9bd4ba 0 1563278286304 10 connected
cf6ca00cb36850762fdff1223684edf1fb9bd4ba 10.95.200.14:7380@17380 master - 0 1563278284000 10 connected 16383
26c3a7cd48fe4bddf9cf0c60a3c30b6b1f273135 10.95.200.12:7380@17380 myself,slave ed309033dbefe2b0b64ad7fb643c4d2531e53b95 0 1563278284000 1 connected

 

Following this procedure, after all the deletions only one master is left at the end, so I simply killed it and stopped the service.

[tkiot@tkwh-kfcs-app2 redis]$ redis-cli -c -h 10.95.200.13 -p 7380 cluster nodes 
ed309033dbefe2b0b64ad7fb643c4d2531e53b95 10.95.200.13:7380@17380 myself,master - 0 1563321180000 11 connected 0-16383

 

The following records the process of creating a cluster with four nodes and then adding a master-slave pair.

[tkiot@tkwh-kfcs-app2 redis]$ redis-cli --cluster create 10.95.200.12:7380 10.95.200.13:7380 10.95.200.12:7381 10.95.200.13:7381 --cluster-replicas 1
*** ERROR: Invalid configuration for cluster creation.
*** Redis Cluster requires at least 3 master nodes.
*** This is not possible with 4 nodes and 1 replicas per node.
*** At least 6 nodes are required.

This attempt failed, of course. So: create with six nodes first, then delete two and add them back.

 

[tkiot@tkwh-kfcs-app2 redis]$ redis-cli --cluster create 10.95.200.12:7380 10.95.200.13:7380 10.95.200.14:7380 10.95.200.12:7381 10.95.200.13:7381 10.95.200.14:7381 --cluster-replicas 1
>>> Performing hash slots allocation on 6 nodes...
Master[0] -> Slots 0 - 5460
Master[1] -> Slots 5461 - 10922
Master[2] -> Slots 10923 - 16383
Adding replica 10.95.200.13:7381 to 10.95.200.12:7380
Adding replica 10.95.200.12:7381 to 10.95.200.13:7380
Adding replica 10.95.200.14:7381 to 10.95.200.14:7380
>>> Trying to optimize slaves allocation for anti-affinity
[OK] Perfect anti-affinity obtained!
. . . . . 
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.

 

Delete two nodes, 10.95.200.13:7380 and 10.95.200.14:7381 (the deletion steps are not recorded here); the cluster is now down to four nodes.

[tkiot@tkwh-kfcs-app2 redis]$ redis-cli -c -h 10.95.200.12 -p 7381 cluster nodes                                     
f855a20340af517e1502f610d82e8280cc6cd803 10.95.200.12:7381@17381 myself,slave 1591155d7df58218a26974b16996eeeba88c84f1 0 1563331430000 4 connected
1cf9c2ef22a7ce7831cfee357aca36bfb6f275b5 10.95.200.13:7381@17381 slave d46ae46cf66445ddeba923d6af84b78ca5f789cb 0 1563331431232 7 connected
d46ae46cf66445ddeba923d6af84b78ca5f789cb 10.95.200.12:7380@17380 master - 0 1563331430000 7 connected 0-10922
1591155d7df58218a26974b16996eeeba88c84f1 10.95.200.14:7380@17380 master - 0 1563331430229 3 connected 10923-16383

Remember that del-node actually shuts the deleted nodes down; bring them back up before performing the add operations.

 

1) Add the new node 10.95.200.13:7380 to the cluster (the second address can be any node already in the cluster):

[tkiot@tkwh-kfcs-app2 redis]$ redis-cli --cluster add-node 10.95.200.13:7380 10.95.200.14:7380
>>> Adding node 10.95.200.13:7380 to cluster 10.95.200.14:7380
>>> Performing Cluster Check (using node 10.95.200.14:7380)
M: 1591155d7df58218a26974b16996eeeba88c84f1 10.95.200.14:7380
   slots:[10923-16383] (5461 slots) master
   1 additional replica(s)
S: 1cf9c2ef22a7ce7831cfee357aca36bfb6f275b5 10.95.200.13:7381
   slots: (0 slots) slave
   replicates d46ae46cf66445ddeba923d6af84b78ca5f789cb
S: f855a20340af517e1502f610d82e8280cc6cd803 10.95.200.12:7381
   slots: (0 slots) slave
   replicates 1591155d7df58218a26974b16996eeeba88c84f1
M: d46ae46cf66445ddeba923d6af84b78ca5f789cb 10.95.200.12:7380
   slots:[0-10922] (10923 slots) master
   1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
>>> Send CLUSTER MEET to node 10.95.200.13:7380 to make it join the cluster.
[OK] New node added correctly.

After that, look at the cluster:

[tkiot@tkwh-kfcs-app2 redis]$ redis-cli -c -h 10.95.200.12 -p 7381 cluster nodes              
f855a20340af517e1502f610d82e8280cc6cd803 10.95.200.12:7381@17381 myself,slave 1591155d7df58218a26974b16996eeeba88c84f1 0 1563332042000 4 connected
1cf9c2ef22a7ce7831cfee357aca36bfb6f275b5 10.95.200.13:7381@17381 slave d46ae46cf66445ddeba923d6af84b78ca5f789cb 0 1563332044428 7 connected
4fd334f64bc5b121f5810da3b5800a29d4e8c3ee 10.95.200.13:7380@17380 master - 0 1563332044000 0 connected
d46ae46cf66445ddeba923d6af84b78ca5f789cb 10.95.200.12:7380@17380 master - 0 1563332043426 7 connected 0-10922
1591155d7df58218a26974b16996eeeba88c84f1 10.95.200.14:7380@17380 master - 0 1563332043000 3 connected 10923-16383

 

2) A newly added node becomes a master by default, but owns no slots yet. Allocating slots to it is a reshard:

[tkiot@tkwh-kfcs-app2 redis]$ redis-cli --cluster reshard 10.95.200.13:7380 --cluster-from d46ae46cf66445ddeba923d6af84b78ca5f789cb,1591155d7df58218a26974b16996eeeba88c84f1 --cluster-to 4fd334f64bc5b121f5810da3b5800a29d4e8c3ee --cluster-slots 5642

After the reshard, look at the cluster:

[tkiot@tkwh-kfcs-app2 redis]$ redis-cli -c -h 10.95.200.12 -p 7381 cluster nodes
f855a20340af517e1502f610d82e8280cc6cd803 10.95.200.12:7381@17381 myself,slave 1591155d7df58218a26974b16996eeeba88c84f1 0 1563332267000 4 connected
1cf9c2ef22a7ce7831cfee357aca36bfb6f275b5 10.95.200.13:7381@17381 slave d46ae46cf66445ddeba923d6af84b78ca5f789cb 0 1563332267021 7 connected
4fd334f64bc5b121f5810da3b5800a29d4e8c3ee 10.95.200.13:7380@17380 master - 0 1563332268022 8 connected 0-3761 10923-12802
d46ae46cf66445ddeba923d6af84b78ca5f789cb 10.95.200.12:7380@17380 master - 0 1563332267521 7 connected 3762-10922
1591155d7df58218a26974b16996eeeba88c84f1 10.95.200.14:7380@17380 master - 0 1563332267000 3 connected 12803-16383

 

3) Add a slave node to the master node just added

[tkiot@tkwh-kfcs-app2 redis]$ redis-cli --cluster add-node 10.95.200.14:7381 10.95.200.13:7380 --cluster-slave --cluster-master-id 4fd334f64bc5b121f5810da3b5800a29d4e8c3ee
>>> Adding node 10.95.200.14:7381 to cluster 10.95.200.13:7380
>>> Performing Cluster Check (using node 10.95.200.13:7380)
M: 4fd334f64bc5b121f5810da3b5800a29d4e8c3ee 10.95.200.13:7380
   slots:[0-3761],[10923-12802] (5642 slots) master
   1 additional replica(s)
S: 1cf9c2ef22a7ce7831cfee357aca36bfb6f275b5 10.95.200.13:7381
   slots: (0 slots) slave
   replicates 4fd334f64bc5b121f5810da3b5800a29d4e8c3ee
M: d46ae46cf66445ddeba923d6af84b78ca5f789cb 10.95.200.12:7380
   slots:[3762-10922] (7161 slots) master
M: 1591155d7df58218a26974b16996eeeba88c84f1 10.95.200.14:7380
   slots:[12803-16383] (3581 slots) master
   1 additional replica(s)
S: f855a20340af517e1502f610d82e8280cc6cd803 10.95.200.12:7381
   slots: (0 slots) slave
   replicates 1591155d7df58218a26974b16996eeeba88c84f1
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
>>> Send CLUSTER MEET to node 10.95.200.14:7381 to make it join the cluster.
Waiting for the cluster to join

>>> Configure node as replica of 10.95.200.13:7380.
[OK] New node added correctly.

Finally, look at the cluster:

[tkiot@tkwh-kfcs-app2 redis]$ redis-cli -c -h 10.95.200.12 -p 7381 cluster nodes
f855a20340af517e1502f610d82e8280cc6cd803 10.95.200.12:7381@17381 myself,slave 1591155d7df58218a26974b16996eeeba88c84f1 0 1563332620000 4 connected
f6093d443470ae37b8330407d20291ae959fc22f :0@0 slave,fail,noaddr d46ae46cf66445ddeba923d6af84b78ca5f789cb 1563332505335 1563332504432 7 disconnected
1cf9c2ef22a7ce7831cfee357aca36bfb6f275b5 10.95.200.13:7381@17381 slave d46ae46cf66445ddeba923d6af84b78ca5f789cb 0 1563332621533 8 connected
4fd334f64bc5b121f5810da3b5800a29d4e8c3ee 10.95.200.13:7380@17380 master - 0 1563332621633 8 connected 0-3761 10923-12802
d46ae46cf66445ddeba923d6af84b78ca5f789cb 10.95.200.12:7380@17380 master - 0 1563332621533 7 connected 3762-10922
1591155d7df58218a26974b16996eeeba88c84f1 10.95.200.14:7380@17380 master - 0 1563332621000 3 connected 12803-16383
9f069b1f62cc93b95cb432f3726bc5bbfb0c8c76 10.95.200.14:7381@17381 slave 4fd334f64bc5b121f5810da3b5800a29d4e8c3ee 0 1563332620533 8 connected

 

 

To summarize the process of adding cluster nodes:
A) Add the node (the corresponding server must already be running); it joins as a master by default: redis-cli --cluster add-node <new_ip>:<new_port> <existing_ip>:<existing_port>
B) Allocate slots to the newly added master: redis-cli --cluster reshard <master_ip>:<master_port> --cluster-from <node ids of existing masters in the cluster, comma-separated if more than one> --cluster-to <node id of the new master> --cluster-slots <number of slots to allocate>
C) Add a slave for that master; no slots are needed since it is a slave: redis-cli --cluster add-node <slave_ip>:<slave_port> <master_ip>:<master_port> --cluster-slave --cluster-master-id <master node_id>
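
These three steps can likewise be sketched as a script. The addresses, node ids, and slot count below come from the example in this section, purely as an illustration; the commands are echoed rather than executed:

```shell
CLI=redis-cli
NEW_MASTER=10.95.200.13:7380
EXISTING=10.95.200.14:7380
NEW_MASTER_ID=4fd334f64bc5b121f5810da3b5800a29d4e8c3ee
FROM_IDS=d46ae46cf66445ddeba923d6af84b78ca5f789cb,1591155d7df58218a26974b16996eeeba88c84f1
NEW_SLAVE=10.95.200.14:7381
SLOTS=5642

STEP_A="$CLI --cluster add-node $NEW_MASTER $EXISTING"
STEP_B="$CLI --cluster reshard $NEW_MASTER --cluster-from $FROM_IDS --cluster-to $NEW_MASTER_ID --cluster-slots $SLOTS"
STEP_C="$CLI --cluster add-node $NEW_SLAVE $NEW_MASTER --cluster-slave --cluster-master-id $NEW_MASTER_ID"
printf '%s\n' "$STEP_A" "$STEP_B" "$STEP_C"
# eval "$STEP_A" && eval "$STEP_B" && eval "$STEP_C"   # run against a live cluster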

 

Finally, one issue remains unclear to me: when a node is deleted from the cluster and then added again, add-node keeps complaining that the node is not an empty node, and the addition fails. The workaround I adopted was to delete all the data files of the node to be added, restart it, and then run the add command again; that succeeded.
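
For what it's worth, my guess (an assumption, not something verified in this post) is that the cause is stale cluster state left in the node's data and nodes-*.conf file. A lighter-weight alternative to deleting the files is to wipe the node with FLUSHALL followed by CLUSTER RESET HARD before re-adding it (CLUSTER RESET refuses to run while the node still holds keys, hence the flush first). A sketch; the commands are echoed rather than executed:

```shell
HOST=10.95.200.13
PORT=7380
SEED=10.95.200.14:7380   # any node already in the cluster

FLUSH_CMD="redis-cli -h $HOST -p $PORT flushall"
RESET_CMD="redis-cli -h $HOST -p $PORT cluster reset hard"
ADD_CMD="redis-cli --cluster add-node $HOST:$PORT $SEED"
printf '%s\n' "$FLUSH_CMD" "$RESET_CMD" "$ADD_CMD"
# eval "$FLUSH_CMD" && eval "$RESET_CMD" && eval "$ADD_CMD"   # run against the live node
```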


Added by jboy6t9 on Fri, 19 Jul 2019 06:55:59 +0300