Redis master-slave replication
brief introduction
A single Redis instance can handle only a limited amount of read and write traffic. To improve concurrent processing capacity we can use multiple Redis instances, and how those instances cooperate requires a certain amount of architecture design. Here we first analyze and implement the master/slave architecture.
Basic architecture
redis master-slave architecture is shown in the figure:
Among them, the master is responsible for reads and writes and synchronizes its data to the slaves, while the slave nodes are only responsible for reads.
Quick start practice
Design a Master-Slave architecture based on Redis with one Master and two Slaves. The Master is responsible for Redis reads and writes and for synchronizing data to the Slaves; the Slaves are only responsible for reads. The steps are as follows:
Step 1: delete all original redis containers, for example:
docker rm -f <redis container name>
Step 2: enter the docker directory on your host machine, and then make two copies of redis01, for example:
cp -r redis01/ redis02
cp -r redis01/ redis03
Step 3: start three new redis containers, for example:
docker run -p 6379:6379 --name redis6379 \
  -v /usr/local/docker/redis01/data:/data \
  -v /usr/local/docker/redis01/conf/redis.conf:/etc/redis/redis.conf \
  -d redis redis-server /etc/redis/redis.conf \
  --appendonly yes

docker run -p 6380:6379 --name redis6380 \
  -v /usr/local/docker/redis02/data:/data \
  -v /usr/local/docker/redis02/conf/redis.conf:/etc/redis/redis.conf \
  -d redis redis-server /etc/redis/redis.conf \
  --appendonly yes

docker run -p 6381:6379 --name redis6381 \
  -v /usr/local/docker/redis03/data:/data \
  -v /usr/local/docker/redis03/conf/redis.conf:/etc/redis/redis.conf \
  -d redis redis-server /etc/redis/redis.conf \
  --appendonly yes
Step 4: check the redis service role
Start three clients, log in to each of the three redis container services, and view the roles through the info command. By default, all three newly started redis services have the master role.
127.0.0.1:6379> info replication
# Replication
role:master
connected_slaves:0
master_repl_offset:3860
repl_backlog_active:1
repl_backlog_size:1048576
repl_backlog_first_byte_offset:2
repl_backlog_histlen:3859
Step 5: check the ip settings of redis6379
docker inspect redis6379
...... "Networks": { "bridge": { "IPAMConfig": null, "Links": null, "Aliases": null, "NetworkID": "c33071765cb48acb1efed6611615c767b04b98e6e298caa0dc845420e6112b73", "EndpointID": "4c77e3f458ea64b7fc45062c5b2b3481fa32005153b7afc211117d0f7603e154", "Gateway": "172.17.0.1", "IPAddress": "172.17.0.2", "IPPrefixLen": 16, "IPv6Gateway": "", "GlobalIPv6Address": "", "GlobalIPv6PrefixLen": 0, "MacAddress": "02:42:ac:11:00:02", "DriverOpts": null } }
Step 6: set the Master/Slave architecture
Log in to redis6380/redis6381 respectively, and then execute the following statement
slaveof 172.17.0.2 6379
Note: if the master has a password, you need to add the line "masterauth <your password>" to the slave's redis.conf configuration file, then restart redis and execute the slaveof instruction.
Step 7: log in to redis6379 again and check info
[root@centos7964 ~]# docker exec -it redis6379 redis-cli
127.0.0.1:6379> info replication
# Replication
role:master
connected_slaves:2
slave0:ip=172.17.0.3,port=6379,state=online,offset=2004,lag=1
slave1:ip=172.17.0.4,port=6379,state=online,offset=2004,lag=1
master_failover_state:no-failover
master_replid:5baf174fd40e97663998abf5d8e89a51f7458488
master_replid2:0000000000000000000000000000000000000000
master_repl_offset:2004
second_repl_offset:-1
repl_backlog_active:1
repl_backlog_size:1048576
repl_backlog_first_byte_offset:1
repl_backlog_histlen:2004
Step 8: log in to redis6379 to test. The master can read and write
[root@centos7964 ~]# docker exec -it redis6379 redis-cli
127.0.0.1:6379> set role master6379
OK
127.0.0.1:6379> get role
"master6379"
127.0.0.1:6379>
Step 9: log in to redis6380 to test. slave can only read.
[root@centos7964 ~]# docker exec -it redis6380 redis-cli
127.0.0.1:6379> get role
"master6379"
127.0.0.1:6379> set role slave6380
(error) READONLY You can't write against a read only replica.
127.0.0.1:6379>
Read/write test analysis in Java. The code is as follows:
@SpringBootTest
public class MasterSlaveTests {
    @Autowired
    private RedisTemplate redisTemplate;

    @Test
    void testMasterReadWrite(){ //The profile port is 6379
        ValueOperations valueOperations = redisTemplate.opsForValue();
        valueOperations.set("role", "master6379");
        Object role = valueOperations.get("role");
        System.out.println(role);
    }

    @Test
    void testSlaveRead(){ //The profile port is 6380
        ValueOperations valueOperations = redisTemplate.opsForValue();
        Object role = valueOperations.get("role");
        System.out.println(role);
    }
}
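The tests above point RedisTemplate at a single node by changing the port in the Spring profile. As a complementary illustration (not part of the original steps), read/write splitting can also be configured on the client side, so that writes go to the master and reads prefer the slaves. A minimal sketch, assuming Spring Data Redis with Lettuce and assuming the Docker host's ip is 192.168.126.129 with the 6379/6380/6381 port mappings created above:

import io.lettuce.core.ReadFrom;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.redis.connection.RedisStaticMasterReplicaConfiguration;
import org.springframework.data.redis.connection.lettuce.LettuceClientConfiguration;
import org.springframework.data.redis.connection.lettuce.LettuceConnectionFactory;

@Configuration
public class MasterReplicaConfig {

    @Bean
    public LettuceConnectionFactory redisConnectionFactory() {
        // Host ip and ports are assumptions taken from this article's setup.
        RedisStaticMasterReplicaConfiguration serverConfig =
                new RedisStaticMasterReplicaConfiguration("192.168.126.129", 6379); // master
        serverConfig.addNode("192.168.126.129", 6380); // slave
        serverConfig.addNode("192.168.126.129", 6381); // slave

        // Route reads to replicas when available, writes always to the master.
        LettuceClientConfiguration clientConfig = LettuceClientConfiguration.builder()
                .readFrom(ReadFrom.REPLICA_PREFERRED)
                .build();
        return new LettuceConnectionFactory(serverConfig, clientConfig);
    }
}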
Principle analysis of master-slave synchronization
The redis master-slave structure can be one master with one slave or one master with multiple slaves. Depending on whether all of the data is transferred, redis master-slave replication can be divided into full synchronization and incremental synchronization.
Redis full synchronization:
Redis full replication usually occurs in the Slave initialization stage. At this time, the Slave needs to copy all the data on the Master. The specific steps are as follows:
1) The slave server connects to the master server and sends the SYNC command;
2) After receiving the SYNC command, the master server starts executing the BGSAVE command to generate an RDB file, and uses a buffer to record all write commands executed from then on;
3) After BGSAVE finishes, the master server sends the snapshot file to all slave servers and continues recording the write commands executed while sending;
4) After receiving the snapshot file, the slave server discards all of its old data and loads the received snapshot;
5) After the master server finishes sending the snapshot, it starts sending the write commands in the buffer to the slave server;
6) After the slave server finishes loading the snapshot, it starts accepting command requests and executes the write commands from the master server's buffer.
Redis incremental synchronization
Redis incremental replication refers to the process, after the Slave has been initialized and is working normally, of synchronizing the write operations performed on the master server to the Slave server. The main flow of incremental replication is that every time the master server executes a write command, it sends the same write command to the Slave server, and the Slave server receives and executes it.
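To observe synchronization from the client side, the replication offsets reported by the master and a slave can be compared: once incremental replication has caught up, the slave's offset equals the master's. A minimal sketch with Jedis (an illustration only; the host ip 192.168.126.129 and the 6379/6380 port mappings are assumptions based on this article's setup):

import redis.clients.jedis.Jedis;

public class ReplicationOffsetCheck {
    public static void main(String[] args) {
        try (Jedis master = new Jedis("192.168.126.129", 6379);
             Jedis slave = new Jedis("192.168.126.129", 6380)) {
            // "info replication" reports master_repl_offset on the master and
            // slave_repl_offset on the slave; matching values mean the slave has caught up.
            System.out.println(master.info("replication"));
            System.out.println(slave.info("replication"));
        }
    }
}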
Section interview analysis
What would you do if redis needed to support 100,000+ concurrent requests?
It is almost impossible for a single redis instance to reach a QPS of 100,000+, except in special circumstances such as exceptionally good machine performance and configuration, a well-maintained physical machine, and operations that are not too complex; a single machine generally handles tens of thousands of requests per second. To truly achieve high concurrency with redis, read-write separation is required. A cache is generally used to support high read concurrency: write requests are relatively few, perhaps a few thousand per second, while read requests are far more numerous, for example 200,000 per second. Therefore, high concurrency in redis can be achieved based on the master-slave architecture and a read-write separation mechanism.
What is the replication mechanism of Redis?
(1) Redis replicates data to slave nodes asynchronously.
(2) One master node can be configured with multiple slave nodes.
(3) When a slave node performs replication, it does not block the master node's normal work.
(4) While replicating, a slave node does not block its own query operations; it serves requests with the old data set. However, when replication completes, the old data set must be deleted and the new one loaded, and during that moment external service is briefly suspended.
(5) Slave nodes are mainly used for horizontal scaling and read-write separation; the added slave nodes can increase read throughput.
Redis sentinel mode
brief introduction
Sentinel is a mechanism to achieve high availability under the master-slave architecture mode of Redis.
A Sentinel system composed of one or more Sentinel instances can monitor any number of master servers and all of the slave servers under them. When a monitored master server goes offline, the Sentinel system automatically promotes one of its slave servers to be the new master server, which then continues to process command requests in place of the offline master.
Basic architecture
Sentinel quick start
Step 1: open three redis client windows, enter each of the three redis containers, and execute the following statement in the /etc/redis directory inside the container:
cat <<EOF > /etc/redis/sentinel.conf
sentinel monitor redis6379 172.17.0.2 6379 1
EOF
In the instruction above, redis6379 is the service name of the master to monitor, 172.17.0.2 and 6379 are the master's ip and port, and 1 indicates how many sentinels must consider the master failed before it is really treated as failed.
Step 2: execute the following instructions in the / etc/redis directory inside each redis container to start the sentinel service
redis-sentinel sentinel.conf
Step 3: open a new client connection window and stop the redis6379 service (this service is currently the master):
docker stop redis6379
In the other client windows, watch the log output, for example:
410:X 11 Jul 2021 09:54:27.383 # +switch-master redis6379 172.17.0.2 6379 172.17.0.4 6379
410:X 11 Jul 2021 09:54:27.383 * +slave slave 172.17.0.3:6379 172.17.0.3 6379 @ redis6379 172.17.0.4 6379
410:X 11 Jul 2021 09:54:27.383 * +slave slave 172.17.0.2:6379 172.17.0.2 6379 @ redis6379 172.17.0.4 6379
Step 4: log in to the service whose ip is 172.17.0.4 and check its replication info, for example:
127.0.0.1:6379> info replication
# Replication
role:master
connected_slaves:1
slave0:ip=172.17.0.3,port=6379,state=online,offset=222807,lag=0
master_failover_state:no-failover
master_replid:3d63e8474dd7bcb282ff38027d4a78c413cede53
master_replid2:5baf174fd40e97663998abf5d8e89a51f7458488
master_repl_offset:222807
second_repl_offset:110197
repl_backlog_active:1
repl_backlog_size:1048576
repl_backlog_first_byte_offset:29
repl_backlog_histlen:222779
127.0.0.1:6379>
From the above information output, it is found that the redis6381 service has now become a master.
Sentinel configuration advanced
The contents of the sentinel.conf file can also be enhanced based on actual requirements, for example:
sentinel monitor redis6379 172.17.0.2 6379 1
daemonize yes #Background operation
logfile "/var/log/sentinel_log.log" #Run log
sentinel down-after-milliseconds redis6379 30000 #Default 30 seconds
Of which:
1) daemonize yes means run in the background (the default is no)
2) logfile is used to specify the location and name of the log file
3) sentinel down-after-milliseconds specifies how long a master must be unreachable before it is considered failed
For example, create the sentinel.conf file with the cat instruction and add the relevant contents:
cat <<EOF > /etc/redis/sentinel.conf
sentinel monitor redis6379 172.17.0.2 6379 1
daemonize yes
logfile "/var/log/sentinel_log.log"
sentinel down-after-milliseconds redis6379 30000
EOF
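Once sentinel is running, an application should discover the current master through the sentinels instead of hard-coding its address, so that it keeps working after a failover. A minimal Jedis sketch (an illustration only: it assumes the sentinel's default port 26379 is reachable from the client, e.g. mapped out of the container, and uses the master name redis6379 configured above):

import java.util.HashSet;
import java.util.Set;
import redis.clients.jedis.Jedis;
import redis.clients.jedis.JedisSentinelPool;

public class SentinelClientDemo {
    public static void main(String[] args) {
        // Sentinel address is an assumption; 26379 is the default sentinel port.
        Set<String> sentinels = new HashSet<>();
        sentinels.add("192.168.126.129:26379");

        // "redis6379" is the master name from sentinel.conf above.
        try (JedisSentinelPool pool = new JedisSentinelPool("redis6379", sentinels);
             Jedis jedis = pool.getResource()) {
            // The pool asks the sentinels for the current master before connecting,
            // so writes continue to work after a master switch.
            jedis.set("role", "master");
            System.out.println(jedis.get("role"));
        }
    }
}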
Analysis of sentinel working principle
1) Each Sentinel sends a PING command once per second to the Masters, Slaves, and other Sentinel instances it knows about.
2) If an instance's last valid reply to the PING command is older than the value of the down-after-milliseconds option (this item specifies, in milliseconds, how long a master may be unresponsive before a Sentinel subjectively considers it unavailable; the default is 30 seconds), the instance is marked by that Sentinel as subjectively offline.
3) If a Master is marked as subjectively offline, all Sentinels monitoring that Master confirm once per second whether it has really entered the subjectively offline state.
4) When a sufficient number of Sentinels (greater than or equal to the quorum specified in the configuration file) confirm within the specified time range that the Master has indeed entered the subjectively offline state, the Master is marked as objectively offline.
5) Under normal circumstances, each Sentinel sends an INFO command to all of the Masters and Slaves it knows about every 10 seconds.
6) When a Master is marked as objectively offline by Sentinel, the frequency at which Sentinel sends INFO commands to all Slaves of that offline Master changes from once every 10 seconds to once per second.
7) If not enough Sentinels agree that the Master is offline, the Master's objectively offline status is removed.
8) If the Master returns a valid reply to a Sentinel's PING command again, its subjectively offline status is removed.
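In addition to the Jedis pool sketch earlier, Spring Data Redis can also follow the +switch-master events described above when it is configured with the sentinel addresses rather than a fixed master. A minimal sketch (an illustration only; the sentinel address, the default port 26379, and its reachability from the client are assumptions):

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.redis.connection.RedisSentinelConfiguration;
import org.springframework.data.redis.connection.lettuce.LettuceConnectionFactory;

@Configuration
public class SentinelConnectionConfig {

    @Bean
    public LettuceConnectionFactory redisConnectionFactory() {
        // The factory queries the sentinels for the current master, so after a
        // failover the application transparently talks to the newly promoted master.
        RedisSentinelConfiguration sentinelConfig = new RedisSentinelConfiguration()
                .master("redis6379")                 // master name from sentinel.conf
                .sentinel("192.168.126.129", 26379); // assumed sentinel address
        return new LettuceConnectionFactory(sentinelConfig);
    }
}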
Redis cluster high availability
Overview
The reliability of single-machine redis is hard to guarantee and it is prone to single points of failure; at the same time, its performance is limited by the processing capacity of the CPU. Redis must be highly available in real-world development, so single-machine mode is not our final destination, and we need to upgrade the current redis architecture.
Sentinel mode achieves high availability, but in essence there is still only one master providing service (even with read-write separation, it is essentially the single master that serves writes). When the memory of the master node's machine is not enough to hold the system's data, a cluster needs to be considered.
The redis cluster architecture realizes horizontal scaling of redis: start N redis nodes and distribute the entire data set across those N nodes, with each node storing 1/N of the total data. A redis cluster provides a certain degree of availability through partitioning; even if some nodes in the cluster fail or cannot communicate, the cluster can continue to process command requests.
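To make the partitioning idea concrete (an aside, not part of the original steps): Redis Cluster maps every key to one of 16384 hash slots and assigns slot ranges to the master nodes, which is how the data ends up spread roughly 1/N per master. A minimal sketch using the Jedis CRC16 helper (assuming a recent Jedis version where the class lives in redis.clients.jedis.util):

import redis.clients.jedis.util.JedisClusterCRC16;

public class SlotDemo {
    public static void main(String[] args) {
        // Each key hashes to one of 16384 slots; with 3 masters, each master
        // typically owns roughly a third of the slot range.
        for (String key : new String[]{"role", "city", "test"}) {
            System.out.println(key + " -> slot " + JedisClusterCRC16.getSlot(key));
        }
    }
}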
Basic architecture
A redis cluster is generally set up with at least 6 nodes, 3 masters and 3 slaves. Its simple architecture is as follows:
Create cluster
Step 1: prepare the network environment
Create a virtual network card, mainly so that the redis cluster can communicate with the outside world; bridge mode is commonly used.
docker network create redis-net
To view the network card information of docker, you can use the following instructions
docker network ls
To view docker network details, you can use the command
docker network inspect redis-net
Step 2: prepare redis template
mkdir -p /usr/local/docker/redis-cluster
cd /usr/local/docker/redis-cluster
vim redis-cluster.tmpl
Enter the following in redis-cluster.tmpl:
port ${PORT}
cluster-enabled yes
cluster-config-file nodes.conf
cluster-node-timeout 5000
cluster-announce-ip 192.168.126.129
cluster-announce-port ${PORT}
cluster-announce-bus-port 1${PORT}
appendonly yes
Of which, each configuration item is explained as follows:
port: node port, i.e. the port used for external communication
cluster-enabled: whether to enable cluster mode
cluster-config-file: cluster configuration file
cluster-node-timeout: connection timeout
cluster-announce-ip: host ip
cluster-announce-port: cluster node mapping port
cluster-announce-bus-port: cluster bus port
appendonly: persistence mode
Step 3: create a node configuration file
Execute the following command in the redis-cluster directory:
for port in $(seq 8010 8015); \
do \
  mkdir -p ./${port}/conf \
  && PORT=${port} envsubst < ./redis-cluster.tmpl > ./${port}/conf/redis.conf \
  && mkdir -p ./${port}/data; \
done
Of which:
for variable in $(seq var1 var2); do ...; done is a shell loop script in linux, for example:
[root@centos7964 ~]# for i in $(seq 1 5);
> do echo $i;
> done;
1
2
3
4
5
[root@centos7964 ~]#
The instruction envsubst < source_file > target_file substitutes the environment variables referenced in the source file and writes the result to the target file.
View the content of configuration file through cat instruction
cat /usr/local/docker/redis-cluster/801{0..5}/conf/redis.conf
Step 4: create a redis node container in the cluster
for port in $(seq 8010 8015); \
do \
  docker run -it -d -p ${port}:${port} -p 1${port}:1${port} \
  --privileged=true -v /usr/local/docker/redis-cluster/${port}/conf/redis.conf:/usr/local/etc/redis/redis.conf \
  --privileged=true -v /usr/local/docker/redis-cluster/${port}/data:/data \
  --restart always --name redis-${port} --net redis-net \
  --sysctl net.core.somaxconn=1024 redis redis-server /usr/local/etc/redis/redis.conf; \
done
Where --privileged=true means the user of the started container has real root permission, and --sysctl net.core.somaxconn=1024 sets a linux kernel parameter that controls the size of the connection request queue (the default is 128); connection requests to redis are first placed in this queue and then processed in turn.
After the creation is successful, view the node content through the docker ps instruction.
Step 5: create redis cluster configuration
docker exec -it redis-8010 bash
redis-cli --cluster create 192.168.126.129:8010 192.168.126.129:8011 192.168.126.129:8012 192.168.126.129:8013 192.168.126.129:8014 192.168.126.129:8015 --cluster-replicas 1
The above instruction should be executed on a single line if possible; the final 1 represents the master-to-slave ratio. When the confirmation prompt appears, enter yes. After the cluster is created, you can view cluster information through some related instructions, for example:
cluster nodes   #View the cluster nodes
cluster info    #View basic cluster information
Step 6: connect the redis cluster and add data to redis
redis-cli -c -h 192.168.126.129 -p 8010
Where -c means cluster mode, -h means the host (generally an ip address), and -p means the port.
Other notes:
During the build process, if a problem occurs, you may need to stop or delete the docker containers. The following reference commands can be used:
Batch stop docker container, for example:
docker ps -a | grep -i "redis-801*" | awk '{print $1}' | xargs docker stop
Batch delete docker containers, for example
docker ps -a | grep -i "redis-801*" | awk '{print $1}' | xargs docker rm -f
Delete files and directories in batch, such as:
rm -rf 801{0..5}/conf/redis.conf
rm -rf 801{0..5}
The above are the simple steps for building a redis cluster based on docker; practical applications may be more complex. This article is for reference only.
Jedis read / write data test
@Test
void testJedisCluster() throws Exception {
    Set<HostAndPort> nodes = new HashSet<>();
    nodes.add(new HostAndPort("192.168.126.129", 8010));
    nodes.add(new HostAndPort("192.168.126.129", 8011));
    nodes.add(new HostAndPort("192.168.126.129", 8012));
    nodes.add(new HostAndPort("192.168.126.129", 8013));
    nodes.add(new HostAndPort("192.168.126.129", 8014));
    nodes.add(new HostAndPort("192.168.126.129", 8015));
    JedisCluster jedisCluster = new JedisCluster(nodes);
    //Operate redis through jedisCluster
    jedisCluster.set("test", "cluster");
    String str = jedisCluster.get("test");
    System.out.println(str);
    //Close the connection pool
    jedisCluster.close();
}
RedisTemplate read / write data test
Step 1: configure application.yml, for example:
spring:
  redis:
    cluster: #redis cluster configuration
      nodes: 192.168.126.129:8010,192.168.126.129:8011,192.168.126.129:8012,192.168.126.129:8013,192.168.126.129:8014,192.168.126.129:8015
      max-redirects: 3 #Maximum number of redirects
    timeout: 5000 #Timeout
    database: 0
    jedis: #Connection pool
      pool:
        max-idle: 8
        max-wait: 0
Step 2: write the unit test class. The code is as follows:
package com.cy.redis;

@SpringBootTest
public class RedisClusterTests {
    @Autowired
    private RedisTemplate redisTemplate;

    @Test
    void testMasterReadWrite(){
        //1. Get the data operation object
        ValueOperations valueOperations = redisTemplate.opsForValue();
        //2. Read and write data
        valueOperations.set("city", "beijing");
        Object city = valueOperations.get("city");
        System.out.println(city);
    }
}
Section interview analysis
What is the relationship between the high concurrency of Redis and the high concurrency of the whole system?
First: to achieve high concurrency, a solid underlying cache layer is indispensable. By comparison, mysql reaches high concurrency only through a series of complex schemes such as splitting databases and tables; for an order system with transactional requirements, a QPS of tens of thousands is already considered high.
Second: for scenarios like e-commerce product detail pages with truly ultra-high concurrency, QPS of hundreds of thousands or even millions of requests per second, redis alone is not enough; but redis is a very important link in the overall large-scale cache architecture that supports such high concurrency.
Third: the underlying cache middleware, the cache system, must first be able to support the level of concurrency we are talking about; then, with a well-designed overall cache architecture (multi-level caching and hotspot caching), the system can support truly high concurrency of hundreds of thousands or even millions of requests.
Summary
This chapter analyzes and practices the master-slave architecture, read-write separation, the sentinel mechanism, and cluster mode of redis from the perspective of high performance and high reliability. These are also common redis architecture designs in medium and large projects.