Redis data types
1. String
2. List
3. Set
4. Hash
5. Sorted set (zset)
1. String
String type is the most basic data type in Redis. String can store any form of string, including binary data.
The maximum capacity of data allowed to be stored in a string type is 512MB.
set sets the value of the specified key
Syntax: set key value
127.0.0.1:6379 > set name zhaoliu OK 127.0.0.1:6379 > set name "li si" OK
- set sets the value of the given key. If the key has stored other values, the old value is overwritten
- If there are other characters such as spaces, use quotation marks.
get gets the value of the specified key
Syntax: get key
127.0.0.1:6379 > get name "li si" # Get nonexistent key 127.0.0.1:6379> get name1 (nil) # Gets a key that is not of type String 127.0.0.1:6379> lpush a '111' (integer) 3 127.0.0.1:6379> get a (error) WRONGTYPE Operation against a key holding the wrong kind of value
- Gets the value of the specified key. If the key does not exist, nil is returned. If the value stored by the key is not of String type, an error is returned
mset sets one or more key value pairs
Syntax: mset key1 value1 key2 value2
127.0.0.1:6379> mset keya valuea keyb valueb OK # Set an existing key 127.0.0.1:6379> mset keya 'i ma a' keyb 'i am b' OK
- If the key exists, the new value overwrites the old value (the same as set)
mget gets the values of one or more given keys
# Get the value of a key 127.0.0.1:6379> mget keya 1) "i ma a" # Get the value of multiple key s 127.0.0.1:6379> mget keya keyb 1) "i ma a" 2) "i am b"
**setnx sets the value of a key only if the key does not exist**
127.0.0.1:6379> setnx age 18 (integer) 1 # key exists, setting is unsuccessful, and 0 is returned 127.0.0.1:6379> setnx name 'xiao bai' (integer) 0 127.0.0.1:6379> get name "wang wu"
- Can only be used to set keys that do not exist
msetnx sets one or more key value pairs if and only if all keys do not exist
Syntax: msetnx key1 value1 key2 value2
127.0.0.1:6379> msetnx a1 10 a2 20 (integer) 1 # key exists, setting failed 127.0.0.1:6379> msetnx num 10 age 10 (integer) 0
- When multiple keys are set, as long as one of the keys exists, the setting of all values fails
getset sets the value of the specified key and returns the old value
127.0.0.1:6379> getset name 'wang wu' "li si" 127.0.0.1:6379> get name "wang wu" # key does not exist, return nil 127.0.0.1:6379> getset aa 'i am aa' (nil)
- When the key does not exist, nil is returned
- When the key is not a string, an error is reported
**setrange overwrites part of the string stored at the specified key, starting at the given offset**
Syntax: setrange key offset value
127.0.0.1:6379> set greeting 'hello world' OK 127.0.0.1:6379> setrange greeting 6 redis (integer) 11 127.0.0.1:6379> get greeting "hello redis"
getrange gets a substring of the value stored at the key, over the given range of offsets
Syntax: getrange key start end
127.0.0.1:6379> get keya "i ma a hello" 127.0.0.1:6379> getrange keya 0 -1 "i ma a hello" # Get hello in keya 127.0.0.1:6379> getrange keya 7 -1 "hello" 127.0.0.1:6379> getrange keya 7 12 "hello"
append appends the value for the specified key
Syntax: append key value
# Existing key 127.0.0.1:6379> append keya ' hello' (integer) 12 127.0.0.1:6379> get keya "i ma a hello" # Nonexistent key 127.0.0.1:6379> append name 'li si' (integer) 5 127.0.0.1:6379> get name "li si"
- If the key exists, append the value to the end of the original key value
- If the key does not exist, set it. The value of the key is value, which can be understood as performing the set operation
strlen gets the length of the value in the specified key
Syntax: strlen key
127.0.0.1:6379> strlen keya (integer) 12 # key does not exist, return 0 127.0.0.1:6379> strlen ziruchu (integer) 0 # If the key is not a string, an error occurs 127.0.0.1:6379> lpush db mysql redis (integer) 2 127.0.0.1:6379> strlen db (error) WRONGTYPE Operation against a key holding the wrong kind of value
- Only the length of the value whose key type is string can be obtained, otherwise an error will be reported
- When the key does not exist, 0 is returned
incr increments the numeric value stored in the key by 1
Syntax: incr key
127.0.0.1:6379> incr num (integer) 1 127.0.0.1:6379> incr num (integer) 2 # The value in key is not of numeric type 127.0.0.1:6379> incr name (error) ERR value is not an integer or out of range
- If the key does not exist, set it and the value is 1; If the value of key is not numeric, an error is reported
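As an illustration of how incr is typically used from application code, here is a minimal page-view counter sketch using the redis-py Python client (an assumption; this tutorial itself only uses redis-cli). It assumes a local Redis on 127.0.0.1:6379 with no password; the key name `page:home:views` is invented for the example.

```python
import redis

# assumes a local Redis on 127.0.0.1:6379 with no password
r = redis.Redis(host="127.0.0.1", port=6379, decode_responses=True)

def count_view(page: str) -> int:
    # INCR is atomic, so concurrent clients never lose an increment;
    # if the key does not exist it is created and incremented from 0 to 1
    return r.incr(f"page:{page}:views")

print(count_view("home"))  # 1 on the first call, 2 on the second, ...
```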
decr subtracts the numeric value existing in the key by 1
Syntax: decr key
127.0.0.1:6379> decr num (integer) 1 # The value in key is not of numeric type 127.0.0.1:6379> decr name (error) ERR value is not an integer or out of range
incrby adds the given increment to the value of the specified key
Syntax: incrby key increment
127.0.0.1:6379> set num3 6 OK 127.0.0.1:6379> incrby num3 4 (integer) 10 # key does not exist; it is created and incremented from 0 127.0.0.1:6379> incrby num5 5 (integer) 5
decrby subtracts the given decrement from the value of the specified key
Syntax: decrby key decrement
127.0.0.1:6379> decrby num3 2 (integer) 8 # key does not exist 127.0.0.1:6379> decrby num4 10 (integer) -10 # The key value is not numeric 127.0.0.1:6379> decrby name 10 (error) ERR value is not an integer or out of range
incrbyfloat adds the value stored by the key to the given floating-point increment value
Syntax: incrbyfloat key increment
127.0.0.1:6379> get num "1" 127.0.0.1:6379> incrbyfloat num 5.5 "6.5" 127.0.0.1:6379> incrbyfloat num 5 "11.5" # Nonexistent key 127.0.0.1:6379> incrbyfloat num10 10.1 "10.1" # If it is not a numeric value, an error is reported 127.0.0.1:6379> get name "wang hello" 127.0.0.1:6379> incrbyfloat name 5.5 (error) ERR value is not a valid float
setex sets the expiration time for the specified key
Syntax: setex key seconds value
127.0.0.1:6379> setex db 60 db # View expiration time 127.0.0.1:6379> ttl db (integer) 56 # The nonexistent key is equivalent to set, and the expiration time is set 127.0.0.1:6379> setex db 10 mysql OK # Replace old values and set expiration time 127.0.0.1:6379> setex age 60 19 OK 127.0.0.1:6379> get age "19"
- If the key exists, replace the old value with the new value; If the key does not exist, set the value and set the expiration time
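A common way to apply setex is caching a computed value with an expiry. A minimal sketch with the redis-py Python client (an assumption, not part of this tutorial); the cache key and the 60-second TTL are arbitrary choices for the example.

```python
import redis

r = redis.Redis(host="127.0.0.1", port=6379, decode_responses=True)

def get_user_profile(user_id: int) -> str:
    key = f"cache:user:{user_id}"
    cached = r.get(key)
    if cached is not None:
        return cached                  # cache hit
    profile = f"profile-of-{user_id}"  # stand-in for an expensive lookup
    r.setex(key, 60, profile)          # SETEX key 60 value: expires after 60 seconds
    return profile

print(get_user_profile(1))
print(r.ttl("cache:user:1"))           # remaining lifetime in seconds
```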
psetex sets the effective time of the key in milliseconds
Syntax: psetex key milliseconds value
127.0.0.1:6379> psetex gender 10000 nan OK
Bit operation
A byte consists of 8 binary bits. Redis has four bit operation commands: setbit, getbit, bitcount and bitop.
# Set a key value pair 127.0.0.1:6379> set foo bar OK
The three letters b, a and r correspond to the ASCII codes 98, 97 and 114, which in binary are 01100010, 01100001 and 01110010 respectively:
b | a | r |
---|---|---|
01100010 | 01100001 | 01110010 |
getbit gets the bit (bit) of the specified offset in the key value, and the offset index starts from 0
Syntax: getbit key offset
The getbit command obtains the binary value (0 or 1) of the specified position of a string type key, and the index starts from 0
# Gets the 0th bit of bar, i.e. the first bit of the ASCII binary of b 127.0.0.1:6379> getbit foo 0 (integer) 0 127.0.0.1:6379> getbit foo 1 (integer) 1
- If the index of the obtained binary bit exceeds the actual length of the binary bit of the key value, the default value is 0
setbit sets or clears the bit on the specified offset for the key value
Syntax: setbit key offset value
# Set the bit at offset 6 (the 7th bit, counting from 0) of b in bar from 1 to 0; the old value 1 is returned 127.0.0.1:6379> setbit foo 6 0 (integer) 1
- If the offset exceeds the current bit length of the value, setbit automatically fills the bits in between with 0
- Setting a bit on a nonexistent key creates the key and fills all preceding bits with 0
bitcount gets the number of binary bits with the value of 1 in the string class key
Syntax: bitcount key [start end]
1) Gets the total number of all binaries with 1
127.0.0.1:6379> bitcount foo (integer) 11 127.0.0.1:6379> bitcount foo 0 -1 (integer) 11
2) Gets the number of 1 bits within the first 2 bytes (the range is given in bytes)
127.0.0.1:6379> bitcount foo 0 1 (integer) 7
bitop performs bit operations between string keys and stores the result in the specified key
Syntax: bitop operation destkey key [key1 key2...]
Parameter interpretation:
Bitwise operators include: AND, OR, XOR, NOT
127.0.0.1:6379> set foo1 bar OK 127.0.0.1:6379> set foo2 aar OK 127.0.0.1:6379> bitop OR result foo1 foo2 (integer) 3 127.0.0.1:6379> get result "car"
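A typical application of these bit commands is a bitmap, for example tracking which days a user signed in, with one bit per day. A minimal sketch with the redis-py Python client (an assumption; key name and day numbering are invented for the example):

```python
import redis

r = redis.Redis(host="127.0.0.1", port=6379)

key = "signin:user:1001:2021-12"   # one bitmap per user per month (invented key)
r.setbit(key, 0, 1)                # signed in on day 1 (offset 0)
r.setbit(key, 2, 1)                # signed in on day 3

print(r.getbit(key, 0))            # 1 -> signed in on day 1
print(r.getbit(key, 1))            # 0 -> missed day 2
print(r.bitcount(key))             # 2 -> total days signed in this month
```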
Hash

- Redis uses a dictionary structure to store data in the form of key-value pairs.
- The value of a hash key is itself a dictionary structure that stores a mapping from fields to field values. Field values can only be strings; other data types are not supported.
- Hash types cannot be nested with other data types.
- A hash key can contain at most 2^32 - 1 fields.
- The hash type is suitable for storing objects: the object category and ID form the key name, fields represent the object's attributes, and field values store the attribute values.
For example: store the article object with ID 1, using the three fields author, time and content to hold the article's information (see the sketch below).
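The article object described above can be written as a single hash. A minimal sketch with the redis-py Python client (an assumption; field values are placeholders, and the mapping argument needs redis-py >= 3.5):

```python
import redis

r = redis.Redis(host="127.0.0.1", port=6379, decode_responses=True)

# key name: object category + id; fields: the object's attributes
r.hset("article:1", mapping={
    "author": "ziruchu",
    "time": "2020",
    "content": "redis hash example",
})

print(r.hgetall("article:1"))          # {'author': 'ziruchu', 'time': '2020', ...}
print(r.hget("article:1", "author"))   # ziruchu
```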
hset sets the value of the key field in the hash table to value
Syntax: hset key field value
# hset setting individual values 127.0.0.1:6379> hset car price 10000 (integer) 1 127.0.0.1:6379> hset car name BMW (integer) 1 # The field already exists. Perform the update operation and return 0 127.0.0.1:6379> hset car name BMWM (integer) 0
- If the hash does not exist, it is created; If the field already exists, the new value overwrites the old value.
- Hset does not distinguish between insert and update operations. When the insert operation is performed (the field does not exist), the hset command returns 1; when the update operation is performed (the field exists), the hset command returns 0. When the key does not exist, hset will automatically create the key.
hget gets the value of a field in the hash table
Syntax: hget key field
127.0.0.1:6379> hget car name "BMWM"
hmset sets multiple field value pairs into the hash table key
Syntax: hmset key field1 value1 [field2 Value2]
127.0.0.1:6379> hmset cat color white age 2 OK
hmget gets the value of the field specified in the hash table
Syntax: hmget key field1 [field2,field3...]
127.0.0.1:6379> hmget cat color age 1) "white" 2) "2"
hgetall gets all fields and values of the specified key in the specified hash table
Syntax: hgetall key
127.0.0.1:6379> hgetall cat 1) "color" 2) "white" 3) "age" 4) "2"
hexists determines whether a field exists
Syntax: hexists key field
127.0.0.1:6379> hexists cat age (integer) 1 127.0.0.1:6379> hexists cat a (integer) 0
- hexists determines whether a field in the hash table exists. If it exists, it returns 1 and if not, it returns 0
hdel Deletes one or more fields
Syntax: hdel key field1 [field2...]
127.0.0.1:6379> hdel car name (integer) 1 127.0.0.1:6379> hdel cat color age (integer) 2
- Returns the number of deleted fields
hkeys gets all the fields in a Hash table
Syntax: hkeys key
127.0.0.1:6379> hkeys article 1) "author" 2) "time"
hvals gets all the values in a hash table
Syntax: hvals key
127.0.0.1:6379> hvals article 1) "ziruchu" 2) "2020"
hlen gets the number of fields
Syntax: hlen key
127.0.0.1:6379> hlen article (integer) 2
hincrby adds an incremental value to the integer value of the specified field in the hash table key
Syntax: hincrby key field increment
127.0.0.1:6379> hmset number id 1 total 3 OK 127.0.0.1:6379> hincrby number id 10 (integer) 11
hincrbyfloat adds an incremental value to the floating-point number of the specified field in the hash table key
Syntax: hincrbyfloat key field increment
127.0.0.1:6379> hincrbyfloat number total 10.34 "13.34"
hscan iterates key value pairs in the hash table based on the key
Syntax: hscan key cursor [match pattern] [count count]
Parameters: cursor is the iteration cursor; pattern is the match pattern; count specifies roughly how many elements to return per call (default 10)
127.0.0.1:6379> hscan car 0 MATCH "p*" COUNT 1 1) "0" 2) 1) "price" 2) "1001" 3) "produce" 4) "china"
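When iterating from application code, the cursor bookkeeping shown above is usually wrapped by the client library. A minimal sketch with redis-py's hscan_iter helper (an assumption; the car hash mirrors the example above):

```python
import redis

r = redis.Redis(host="127.0.0.1", port=6379, decode_responses=True)

r.hset("car", mapping={"price": "1001", "produce": "china", "name": "BMW"})

# hscan_iter repeatedly calls HSCAN and follows the cursor until it returns 0
for field, value in r.hscan_iter("car", match="p*", count=1):
    print(field, value)    # price 1001 / produce china
```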
list
The list type can store an ordered list of strings. Common operations are to add elements to both ends of the list or get a fragment of the list.
The list type is implemented internally as a doubly linked list, so adding an element at either end is O(1), and the closer an element is to either end, the faster it is to access
Disadvantages: it is slow to access elements through index
Classic usage scenario: time effective content such as hot search on microblog
lpush inserts one or more values into the list header
Syntax: lpush key value [value1 value2...]
127.0.0.1:6379> lpush nunber 1 (integer) 1 127.0.0.1:6379> lpush number 2 3 (integer) 2
- lpush adds elements to the left of the list
lpop pops an element from the left (head) of the list and returns it
Syntax: lpop key
127.0.0.1:6379> lpop number "3"
rpush adds one or more values to the right of the list
Syntax: rpush key value1 [value2...]
127.0.0.1:6379> rpush number 0 -1 (integer) 5
rpop pops an element from the right (tail) of the list and returns it
Syntax: rpop key
127.0.0.1:6379> rpop number "-1"
llen gets the number of elements in the list
Syntax: llen key
127.0.0.1:6379> llen number (integer) 3 # key does not exist, return 0 127.0.0.1:6379> llen nokey (integer) 0
- If the key does not exist, 0 is returned
lrange gets the element of the specified range
Syntax: lrange key start end
# Get all elements 127.0.0.1:6379> lrange number 0 -1 1) "2" 2) "1" 3) "0" # Get the first 2 elements 127.0.0.1:6379> lrange number 0 1 1) "2" 2) "1"
- If the start index position is later than the end index position, an empty list is returned
- If end is larger than the actual index range, the rightmost element of the list is returned
lrem deletes the value specified in the list
Syntax: lrem key count value
127.0.0.1:6379> rpush number 2 (integer) 4 127.0.0.1:6379> lrange number 0 -1 1) "2" 2) "1" 3) "0" 4) "2" # Delete the element with value 2 from the right 127.0.0.1:6379> lrange number 0 -1 1) "2" 2) "1" 3) "0" 4) "2" 127.0.0.1:6379> lrem number -1 2 (integer) 1 127.0.0.1:6379> lrange number 0 -1 1) "2" 2) "1" 3) "0"
lrem removes occurrences of value from the list, up to count of them, and returns the number of elements actually removed. The behaviour depends on the sign of count:
- When count > 0, the lrem command deletes the first count elements with value from the left side of the list
- When count < 0, the lrem command deletes the first count elements with value from the right side of the list
- When count=0, the lrem command deletes all elements with value
lindex gets the elements in the list by index
Syntax: lindex key index
127.0.0.1:6379> lrange number 0 -1 1) "1" 2) "1" 3) "1" 4) "3" 5) "2" 6) "2" 127.0.0.1:6379> lindex number 5 "2" 127.0.0.1:6379> lindex number 3 "3"
lset sets the value of a list element by index
Syntax: lset key index value
# Set the element at index 1 to 10 127.0.0.1:6379> lset number 1 10 OK
ltrim keeps only the elements within the specified range of the list
Syntax: ltrim key start stop
127.0.0.1:6379> lrange number 0 -1 1) "1" 2) "1" 3) "1" 4) "3" 5) "2" 6) "2" 127.0.0.1:6379> ltrim number 1 5 OK 127.0.0.1:6379> lrange number 0 -1 1) "1" 2) "1" 3) "3" 4) "2" 5) "2"
- ltrim deletes every element outside the specified range
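Combining lpush with ltrim gives the capped "latest N items" pattern mentioned in the list introduction (the hot-search scenario). A minimal sketch with the redis-py Python client (an assumption; the key name and cap of 5 are invented):

```python
import redis

r = redis.Redis(host="127.0.0.1", port=6379, decode_responses=True)

def record_search(term: str, max_items: int = 5) -> None:
    key = "hot:search:latest"
    r.lpush(key, term)               # newest item goes to the head
    r.ltrim(key, 0, max_items - 1)   # keep only the newest max_items entries

for t in ["redis", "mysql", "mongodb", "nginx", "kafka", "docker"]:
    record_search(t)

print(r.lrange("hot:search:latest", 0, -1))  # 5 newest terms, newest first
```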
linsert inserts elements before or after elements in the list
Syntax: linsert key before|after pivot value
127.0.0.1:6379> lrange number 0 -1 1) "1" 2) "1" 3) "3" 4) "2" 5) "2" 127.0.0.1:6379> linsert number before 2 4 (integer) 6 127.0.0.1:6379> lrange number 0 -1 1) "1" 2) "1" 3) "3" 4) "4" 5) "2" 6) "2" 127.0.0.1:6379> linsert number after 2 4 (integer) 7 127.0.0.1:6379> 127.0.0.1:6379> lrange number 0 -1 1) "1" 2) "1" 3) "3" 4) "4" 5) "2" 6) "4" 7) "2"
- linsert searches the list from left to right for the first element whose value equals pivot, then inserts value before or after that element according to the before|after argument
rpoplpush moves an element from one list to another
Syntax: rpoplpush source destination
127.0.0.1:6379> rpoplpush nunber number "a" 127.0.0.1:6379> rpoplpush nunber number "b" 127.0.0.1:6379> lrange number 0 -1 1) "b" 2) "a" 3) "1" 4) "1" 5) "3" 6) "4" 7) "2" 8) "4" 9) "2"
- rpoplpush first pops an element from the right side of the source list, then pushes it onto the left side of the destination list and returns that element. The whole process is atomic
rpushx adds a value to an existing list
Syntax: rpushx key value
127.0.0.1:6379> rpushx number + (integer) 10
lpushx inserts a value into the header of an existing list
Syntax: lpushx key value
127.0.0.1:6379> lpushx n - (integer) 0 127.0.0.1:6379> lpush nnumber - (integer) 1 127.0.0.1:6379> lrange nmber 0 -1 (empty array) 127.0.0.1:6379> lrange number 0 -1 1) "b" 2) "a" 3) "1" 4) "1" 5) "3" 6) "4" 7) "2" 8) "4" 9) "2" 10) "+"
blpop removes and gets the first element of the list
Syntax: blpop key1 [key2] timeout
127.0.0.1:6379> blpop number1 300 1) "number1" 2) "3"
- Removes and returns the first element of the list. If the list has no element, the command blocks until the timeout expires or an element becomes available
brpop removes and gets the last element of the list
Syntax: brpop key1 [key2] timeout
127.0.0.1:6379> brpop number 200 1) "number" 2) "2"
**brpoplpush pops up a value from the list and inserts the popped value into another list**
Syntax: brpoplpush source destination timeout
127.0.0.1:6379> brpoplpush number number1 500 "1"
- Pops the last element from the source list and inserts it at the head of the destination list
- If the source list has no element, the command blocks until the timeout expires or an element becomes available
Set
Each element in the collection is different and has no order. A Set key can store up to 2 ^ 32 - 1 strings.
Common set operations include adding and deleting elements and testing whether an element exists in the set. Because the set type is implemented in Redis with a hash table whose values are empty, the time complexity of these operations is O(1). Most conveniently, intersection, union and difference operations can be performed between sets.
For example: Store Article tags
Comparison between collection type and list type:
 | Set type | List type |
---|---|---|
Ordered | no | yes |
Unique elements | yes | no |
sadd adds one or more elements to the collection
Syntax: sadd key member1 [member2...]
127.0.0.1:6379> sadd letters a (integer) 1 # Because a already exists, it will not be added 127.0.0.1:6379> sadd letters a b c (integer) 2
There cannot be the same element in the collection. If there is a duplicate of the added element, the duplicate element will be ignored. This command returns the number of elements successfully added to the collection
smembers gets all the elements in the collection
Syntax: smembers key
127.0.0.1:6379> smembers letters 1) "c" 2) "b" 3) "a"
srem Deletes one or more elements from the collection
Syntax: srem key member1 [members...]
127.0.0.1:6379> srem letters a b (integer) 2 127.0.0.1:6379> smembers letters 1) "c"
sismember determines whether an element is in the collection
Syntax: sismember key member
127.0.0.1:6379> sismember letters c (integer) 1 # element not in the set, return 0 127.0.0.1:6379> sismember letters f (integer) 0
sdiff performs subtraction operations on multiple sets
Syntax: sdiff key1 [key2 key3...]
127.0.0.1:6379> sadd setA 1 2 3 (integer) 3 127.0.0.1:6379> sadd setB 2 3 4 (integer) 3 127.0.0.1:6379> sdiff setA setB 1) "1" 127.0.0.1:6379> sdiff setB setA 1) "4" 127.0.0.1:6379> sadd setC 2 3 (integer) 2 127.0.0.1:6379> sdiff setA setB setC 1) "1"
Difference over setA, setB and setC: first compute setA - setB, then take the difference of that result with setC
sinter performs intersection operations on multiple sets
Syntax: sinter key1 [key2...]
127.0.0.1:6379> sinter setA setB 1) "2" 2) "3" 127.0.0.1:6379> sinter setA setB setC 1) "2" 2) "3"
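The article-tag scenario mentioned at the start of this section maps directly onto sadd and sinter. A minimal sketch with the redis-py Python client (an assumption; key names are invented for the example):

```python
import redis

r = redis.Redis(host="127.0.0.1", port=6379, decode_responses=True)

# one set of tags per article
r.sadd("article:1:tags", "redis", "cache", "nosql")
r.sadd("article:2:tags", "redis", "cluster")

print(r.sismember("article:1:tags", "redis"))        # True
# tags the two articles have in common
print(r.sinter("article:1:tags", "article:2:tags"))  # {'redis'}
```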
sunion performs union operations on multiple sets
Syntax: sunion key1 [key2...]
127.0.0.1:6379> sunion setA setB 1) "1" 2) "2" 3) "3" 4) "4" 127.0.0.1:6379> sunion setA setB setC 1) "1" 2) "2" 3) "3" 4) "4"
sdiffstore computes the difference of all given sets and stores it in destination
Syntax: sdiffstore destination key1 [key2...]
127.0.0.1:6379> sdiffstore new_set setA setB (integer) 1 127.0.0.1:6379> smembers new_set 1) "1"
Stores the difference of the given sets in the destination set. If the destination key already exists, it is overwritten.
sinterstore returns the intersection of all sets and stores it in destination
Syntax: sinterstore destination key1 [key2...]
127.0.0.1:6379> sinterstore new_set1 setA setB (integer) 2 127.0.0.1:6379> smembers new_set1 1) "2" 2) "3"
sunionstore stores the union of all given sets in destination
Syntax: sunionstore destination key1 [key2...]
127.0.0.1:6379> sunionstore setUnionJi setA setB (integer) 4
scard gets the number of members in the collection
127.0.0.1:6379> smembers new_set2 1) "1" 2) "2" 3) "3" 4) "4" 127.0.0.1:6379> scard setA (integer) 3 127.0.0.1:6379> scard new_set2 (integer) 4
smove moves the member element from the source collection to the destination collection
Syntax: smove source destination member
127.0.0.1:6379> sadd name lisi wangwu (integer) 2 127.0.0.1:6379> sadd age 17 18 (integer) 2 127.0.0.1:6379> smove age name 17 (integer) 1 127.0.0.1:6379> smembers name 1) "lisi" 2) "17" 3) "wangwu"
spop removes and returns a random element in the collection
Syntax: spop key [count]
127.0.0.1:6379> smembers name 1) "wangwu" 2) "17" 3) "lisi" 127.0.0.1:6379> 127.0.0.1:6379> spop name "lisi" 127.0.0.1:6379> spop name "17" 127.0.0.1:6379> spop new_set1 2 1) "2" 2) "3" 127.0.0.1:6379> smembers new_set1 (empty array)
srandmember returns one or more random elements from the set
Syntax: srandmember key [count]
127.0.0.1:6379> srandmember setC "3" 127.0.0.1:6379> srandmember setC "2"
sscan iterates over the elements in the set
Syntax: sscan key cursor [match pattern] [count count]
127.0.0.1:6379> sscan name 0 match w* 1) "0" 2) 1) "wangwu"
sorted set
Redis ordered collections, like collections, are collections of string elements, and duplicate members are not allowed
The difference is that each element is associated with a score of type double. redis sorts the members of the collection from small to large by score
Members of a sorted set are unique, but scores may repeat.
The sorted set is implemented with a skip list plus a hash table, so adding, deleting and looking up an element by score is O(log N) (looking up a member's score via the hash table is O(1)). The maximum number of members in a set is 2^32 - 1
Usage scenarios: flash-sale ordering by user weight (a priority queue where higher weight goes first), student score leaderboards
zadd adds one or more elements and their scores to the collection
Syntax: zadd key score1 member1 [score2 member2...]
127.0.0.1:6379> zadd board 89 tom 60 peter 100 hipi (integer) 3 # Modify score 127.0.0.1:6379> zadd board 87 hipi (integer) 0
zscore gets the element score
Syntax: zscore key member
127.0.0.1:6379> zscore board hipi "87"
zrange gets a list of elements ranked in a range
Syntax: zrange key start stop [withscores]
127.0.0.1:6379> zrange board 0 -1 1) "peter" 2) "hipi" 3) "tom" 127.0.0.1:6379> zrange board 0 2 1) "peter" 2) "hipi" 3) "tom" 127.0.0.1:6379> zrange board 0 -1 withscores 1) "peter" 2) "60" 3) "hipi" 4) "87" 5) "tom" 6) "89"
zrange returns members sorted by score from low to high; the start and stop indexes are 0-based
zrevrange returns the members in the specified interval in the ordered set. Through the index, the score is from high to low
Syntax: zrevrange key start stop [withscores]
127.0.0.1:6379> zrevrange board 0 -1 1) "tom" 2) "hipi" 3) "peter"
The positions of members are arranged according to the decreasing score value
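The score-ranking scenario mentioned in the introduction is where zadd, zincrby and zrevrange come together. A minimal leaderboard sketch with the redis-py Python client (an assumption; names, scores and the key are invented):

```python
import redis

r = redis.Redis(host="127.0.0.1", port=6379, decode_responses=True)

# zadd takes a {member: score} mapping in redis-py
r.zadd("exam:rank", {"tom": 89, "peter": 60, "hipi": 87})
r.zincrby("exam:rank", 5, "peter")           # peter gains 5 points -> 65

# top 3, highest score first, with scores
print(r.zrevrange("exam:rank", 0, 2, withscores=True))
# [('tom', 89.0), ('hipi', 87.0), ('peter', 65.0)]
```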
zrevrangebyscore returns the members in the specified partition interval in the ordered set, and the scores are sorted from high to low
Syntax: zrevrangebyscore key max min [withscores]
127.0.0.1:6379> zrevrangebyscore board 100 80 1) "tom" 2) "hipi"
zincrby increases the score of an element
Syntax: zincrby key increment member
# The return value is the changed score 127.0.0.1:6379> zincrby board 4 jerry "60"
zcard gets the number of elements in the collection
Syntax: zcard key
127.0.0.1:6379> zcard board (integer) 4
zcount gets the number of elements whose scores fall within the given range
Syntax: zcount key min max
127.0.0.1:6379> zcount board 90 100 (integer) 1
zrem Deletes one or more elements
Syntax: zrem key member [member1, member2...]
127.0.0.1:6379> zrem board tom (integer) 1
zremrangebyrank deletes elements by rank range
127.0.0.1:6379> zadd testrem 1 a 2 b 3 c 4 d 5 f 6 g (integer) 6 127.0.0.1:6379> zremrangebyrank testrem 0 1 (integer) 2 127.0.0.1:6379> zrange testrem 0 -1 1) "c" 2) "d" 3) "f" 4) "g"
zremrangebyscore deletes elements by score range
Syntax: zremrangebyscore key min max
127.0.0.1:6379> zremrangebyscore testrem 2 4 (integer) 2 127.0.0.1:6379> zrange testrem 0 -1 withscores 1) "f" 2) "5" 3) "g" 4) "6"
zrank gets the ranking of elements
Syntax: zrank key member
127.0.0.1:6379> zrank board hipi (integer) 5
Returns the rank of the specified member (starting from 0), with members ordered by score from low to high
zinterstore calculates the intersection of ordered sets
Syntax: zinterstore destination numkeys key1 [key2...] [weights weight [weight...]] [aggregate sum|min|max]
Explanation: this command is used to calculate the intersection of multiple ordered sets and store the results in the destination key. The return value is the number of elements in the destination key;
127.0.0.1:6379> zadd sort1 1 a 2 b (integer) 2 127.0.0.1:6379> zadd sort2 10 a 20 b (integer) 2 127.0.0.1:6379> zinterstore storeResult 2 sort1 sort2 (integer) 2 127.0.0.1:6379> zrange sorteResult 0 -1 withscores (empty array) 127.0.0.1:6379> zrange storeResult 0 -1 withscores 1) "a" 2) "11" 3) "b" 4) "22"
zunionstore is the union of one or more sets
Syntax: zunionstore destination numkeys key1 [key2...]
127.0.0.1:6379> zunionstore uresult 2 sort1 sort2 (integer) 2 127.0.0.1:6379> zrange uresult 0 -1 1) "a" 2) "b" 127.0.0.1:6379> zrange uresult 0 -1 withscores 1) "a" 2) "11" 3) "b" 4) "22" 127.0.0.1:6379> zrange sort1 0 -1 withscores 1) "a" 2) "1" 3) "b" 4) "2"
HyperLogLog
Redis HyperLogLog is an algorithm used for cardinality statistics, i.e. estimating the number of distinct elements.
What is cardinality?
For example, if the data set {1,3,5,10,12}, the cardinality set of the data set is {1,3,5,10,12}, and the cardinality (non repeating element) is 5; cardinality estimation is to quickly calculate the cardinality within the acceptable error range.
Advantages: when the number or volume of input elements is very large, the space required to calculate the cardinality is always fixed and very small.
Usage scenarios: counting the number of distinct IPs that visit each day, real-time page UV, the number of online users, and the number of distinct terms users search for each day
pfadd adds the specified element to the hyperlog
Syntax: pfadd key element [element...]
127.0.0.1:6379> PFADD w3c "redis" (integer) 1 127.0.0.1:6379> PFAdd w3c mongodb (integer) 1 127.0.0.1:6379> pfcount w3c (integer) 2
pfcount returns the cardinality estimate in the given hyperlog
Syntax: pfcount key
127.0.0.1:6379> PFAdd php thinkphp (integer) 1 127.0.0.1:6379> pfadd php laravel (integer) 1 127.0.0.1:6379> pfadd php yii2 (integer) 1 127.0.0.1:6379> pfadd php symfony (integer) 1 127.0.0.1:6379> pfcount php (integer) 4
pfmerge combines multiple hyperloglogs into one HyperLogLog
Syntax: pfmerge destkey sourcekey [sourcekey...]
127.0.0.1:6379> pfmerge php w3c OK 127.0.0.1:6379> pfcount php (integer) 7 127.0.0.1:6379> pfcount w3c (integer) 3
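The daily-UV scenario mentioned above is the canonical use of these three commands. A minimal sketch with the redis-py Python client (an assumption; the per-day key names are invented, and counts are approximate by design):

```python
import redis

r = redis.Redis(host="127.0.0.1", port=6379)

# record visitors per day; duplicates of the same user id are ignored
r.pfadd("uv:2021-12-28", "user1", "user2", "user3")
r.pfadd("uv:2021-12-29", "user2", "user4")

print(r.pfcount("uv:2021-12-28"))            # ~3 unique visitors that day
# merge two days into one HyperLogLog, then estimate it
r.pfmerge("uv:2021-12-28~29", "uv:2021-12-28", "uv:2021-12-29")
print(r.pfcount("uv:2021-12-28~29"))         # ~4 unique visitors over both days
```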
Transactions
A transaction in Redis is a collection of commands. Like a single command, a transaction is a minimal execution unit of Redis.
Transaction execution process:
A transaction goes through three stages: 1) starting a transaction, 2) listing commands, and 3) executing a transaction
Application scenario: optimistic lock
multi start a transaction
exec executes a transaction
127.0.0.1:6379> multi OK 127.0.0.1:6379(TX)> set city shanghai QUEUED 127.0.0.1:6379(TX)> exec 1) OK #Syntax error: executing exec will directly return an error 127.0.0.1:6379> multi OK 127.0.0.1:6379(TX)> set city aa QUEUED 127.0.0.1:6379(TX)> set cit (error) ERR wrong number of arguments for 'set' command 127.0.0.1:6379(TX)> exec (error) EXECABORT Transaction discarded because of previous errors. 127.0.0.1:6379> get city "shanghai" #Run error: all commands will be executed except the wrong command 127.0.0.1:6379> multi OK 127.0.0.1:6379(TX)> set name test QUEUED 127.0.0.1:6379(TX)> sadd name php QUEUED 127.0.0.1:6379(TX)> set name mysql QUEUED 127.0.0.1:6379(TX)> exec 1) OK 2) (error) WRONGTYPE Operation against a key holding the wrong kind of value 3) OK 127.0.0.1:6379> get name "mysql"
discard cancel transaction
127.0.0.1:6379> get name "mysql" 127.0.0.1:6379> multi OK 127.0.0.1:6379(TX)> set name oracle QUEUED 127.0.0.1:6379(TX)> discard OK 127.0.0.1:6379> get name "mysql"
**watch monitors one or more keys (determines whether a transaction is executed or discarded)**
#A command line window 127.0.0.1:6379> get name "mysql" 127.0.0.1:6379> watch name OK 127.0.0.1:6379> multi OK 127.0.0.1:6379(TX)> set name mysqli QUEUED 127.0.0.1:6379(TX)> exec 1) OK 127.0.0.1:6379> get name "mysqli" #Window 1 127.0.0.1:6379> get name "mysqli" 127.0.0.1:6379> watch name OK 127.0.0.1:6379> multi OK 127.0.0.1:6379(TX)> set name oracle QUEUED #Pause, reopen a window, enter the command for window 2, and then continue to enter the following command 127.0.0.1:6379(TX)> exec (nil) 127.0.0.1:6379> get name "pdo" #Window 2 127.0.0.1:6379> set name pdo OK
- You can listen to one or more keys. Once one of the keys is modified or deleted, subsequent transactions will not be executed
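The optimistic-lock pattern described here (watch, then multi/exec, retry on conflict) is usually wrapped in a small loop in application code. A minimal sketch with the redis-py Python client (an assumption; the stock key and quantities are invented):

```python
import redis

r = redis.Redis(host="127.0.0.1", port=6379, decode_responses=True)
r.set("stock", 10)

def buy_one() -> bool:
    with r.pipeline() as pipe:
        while True:
            try:
                pipe.watch("stock")              # WATCH the key
                stock = int(pipe.get("stock"))
                if stock <= 0:
                    pipe.unwatch()
                    return False
                pipe.multi()                     # start the transaction
                pipe.set("stock", stock - 1)
                pipe.execute()                   # EXEC; raises if "stock" was changed
                return True
            except redis.WatchError:
                continue                         # another client modified it: retry

print(buy_one(), r.get("stock"))                 # True 9
```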
unwatch cancels the watch command's monitoring of all keys
127.0.0.1:6379> unwatch OK
Redis publish and subscribe
Redis publish/subscribe (pub/sub) is a message communication mode: publishers send messages and subscribers receive them
Redis client can subscribe to any number of channels
subscribe subscribes to one or more channels
Syntax: subscribe channel
#Window 1 127.0.0.1:6379> subscribe redisChat Reading messages... (press Ctrl-C to quit) #Don't turn it off. Now send the specified information to this channel
publish sends information to the specified channel
Syntax: publish channel message
#Window 2 127.0.0.1:6379> publish redisChat "redis is a caching technique" (integer) 1 #Window 1 (appears automatically) Reading messages... (press Ctrl-C to quit) 1) "subscribe" 2) "redisChat" 3) (integer) 1 1) "message" 2) "redisChat" 3) "redis is a caching technique"
- The return value is the number of subscribers that received the message
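The two-window demo above can be reproduced in a single script with the redis-py Python client (an assumption; the channel name matches the example):

```python
import redis

r = redis.Redis(host="127.0.0.1", port=6379, decode_responses=True)

# subscriber side
p = r.pubsub()
p.subscribe("redisChat")
p.get_message(timeout=1)                 # consume the initial "subscribe" confirmation

# publisher side; returns the number of subscribers that received the message
print(r.publish("redisChat", "redis is a caching technique"))   # 1 if the subscriber above is listening

msg = p.get_message(timeout=1)           # {'type': 'message', 'channel': 'redisChat', ...}
if msg:
    print(msg["data"])                   # redis is a caching technique
```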
pubsub view subscription and publishing system status
127.0.0.1:6379> pubsub channels 1) "redisChat"
unsubscribe unsubscribes from one or more given channels
127.0.0.1:6379> unsubscribe redisChat 1) "unsubscribe" 2) "redisChat" 3) (integer) 0
psubscribe subscribes to one or more channels that match a given pattern
127.0.0.1:6379> psubscribe cctv?*
Reading messages... (press Ctrl-C to quit)
1) "psubscribe"
2) "cctv?*"
3) (integer) 1
1) "pmessage"
2) "cctv?*"
3) "cctv1"
4) "\xe6\x96\xb0\xe9\x97\xbb"
punsubscribe unsubscribes from channels matching the given pattern
127.0.0.1:6379> punsubscribe cctv?* 1) "punsubscribe" 2) "cctv?*" 3) (integer) 0
Redis has two persistence methods
Redis persistence means saving the data held in memory to disk so that it is not lost across restarts.
Why does Redis need persistence?
The powerful performance of Redis is largely due to the fact that all data is stored in memory. When Redis is restarted, all data in memory will be lost.
Redis supports RDB and AOF persistence methods:
RDB stores the data in memory to the hard disk regularly according to the specified rules;
AOF records the command itself after each execution of the command
One of the two persistence methods can be used alone, and more often the two can be used together.
RDB implementation of persistence
RDB persistence is done through snapshotting. When certain conditions are met, Redis automatically writes a copy of all the data in memory to the hard disk; this process is called taking a snapshot.
redis will snapshot the data when:
- Automatic snapshot according to configuration rules
- The user executes the save or bgsave command
- Execute the flushall command
- When performing replication
(1) Automatic snapshots according to configuration rules

# take a snapshot if at least 1 key is modified within 900 seconds
save 900 1
# take a snapshot if at least 10 keys are modified within 300 seconds
save 300 10
# take a snapshot if at least 10000 keys are modified within 60 seconds
save 60 10000

- As shown above, each snapshot rule starts with save and occupies its own line. Multiple rules can exist at the same time, and the relationship between them is OR.
Automatic snapshot according to configuration rules
realization:
Step 1) annotate the default configuration file and set a new snapshot
#save 900 1
#save 300 10
#save 60 10000
save 60 3
Step 2) delete the dump.rdb file
Step 3) set the value and wait for 60s to check whether a dump file is generated
127.0.0.1:6379> set k1 v1 OK 127.0.0.1:6379> set k2 v2 OK 127.0.0.1:6379> set k3 v3 OK 127.0.0.1:6379> set k4 v4 OK 127.0.0.1:6379> set k5 v5 OK
Step 4) within 60 seconds the dump file has been generated; now forcibly kill redis (or restart the virtual machine) to simulate a sudden power failure
Step 5) restart redis to check whether the data is recovered
127.0.0.1:6379> get k1 "v1"
You can see that the data has been restored, which is the automatic execution of snapshots.
The user executes the save or bgsave command
In addition to automatic snapshots, you can also take manual snapshots. When the service is restarted, migrated and backed up manually, you can use manual snapshot operation.
save command
When the Save command is executed, redis performs snapshot operations synchronously. All requests from the client will be blocked during snapshot execution. When there is too much data, save will cause redis not to respond for a long time.
127.0.0.1:6379> set k8 v8 OK 127.0.0.1:6379> save OK
Restore data: just move the backup file (dump.rdb) to the redis installation directory and start the service.
#Get redis directory command 127.0.0.1:6379> config get dir 1) "dir" 2) "/www/server/redis"
bgsave command
For manual snapshot execution, bgsave is recommended. The bgsave command can perform snapshot operations asynchronously in the background, and the service can continue to respond to requests from the client while taking snapshots. After bgsave is executed, redis will immediately return OK, indicating that the snapshot operation is started. Use the lastsave command to view the time of the last successful snapshot execution.
127.0.0.1:6379> set k9 v9 OK 127.0.0.1:6379> bgsave Background saving started 127.0.0.1:6379> lastsave (integer) 1640742064
flushall command
When the flushall command is executed, Redis clears all data in the server. As long as at least one automatic snapshot rule is configured and is not empty, Redis performs a snapshot after emptying, regardless of whether the rule's condition has been triggered.
When copying
When the master-slave mode is set, Redis will take an automatic snapshot during replication initialization.
Snapshot principle: by default Redis stores the snapshot file as dump.rdb in the working directory of the current Redis process. The storage path and the file name of the snapshot can be changed with the dir and dbfilename configuration parameters respectively.
The snapshot execution process is as follows:
1) Redis uses the fork function to copy a copy (child process) of the current process (parent process)
2) The parent process continues to receive and process commands from the client, while the child process begins to write data in memory to temporary files on the hard disk
3) When the child process writes all the data, it will replace the old RDB file with the temporary file. So far, a snapshot operation is completed.
Mode 2: AOF mode
When Redis is used to store non-temporary data, it is generally necessary to turn on AOF persistence to reduce the data loss caused by the process going down.
realization
Step 1): turn on AOF
Redis does not enable AOF (append only file) persistence by default. It is started by modifying the configuration file parameters
appendonly yes
appendfsync always
Step 2): write command
127.0.0.1:6379> set name wangwu OK
Step 3) view the appendonly.aof file
[root@dcdfa0e9eb71 redis]# ll appendonly.aof -rwxr-xr-x 1 redis redis 0 Dec 29 10:04 appendonly.aof
In this way, aof is realized.
AOF rewrite
Why rewrite? Redundant commands are also recorded, so as commands keep being written the AOF file keeps growing. Rewriting optimizes the file: redundant commands are removed and only what is needed to rebuild the current data is kept.
Manual background rewrite: bgrewriteaof
#Window 1 127.0.0.1:6379> bgrewriteaof Background append only file rewriting started #Window 2 [root@dcdfa0e9eb71 redis]# ll appendonly.aof -rwxr-xr-x 1 redis redis 148 Dec 29 10:15 appendonly.aof #Pause, run bgrewriteaof again in window 1, then check again in window 2 [root@dcdfa0e9eb71 redis]# ll appendonly.aof -rwxr-xr-x 1 redis redis 112 Dec 29 10:15 appendonly.aof
You can clearly see that the aof file has become smaller
Auto rewrite
#redis.conf default configuration
auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 64mb
Role of rewriting
1) Reduce the AOF file's disk usage and improve disk utilization
2) Improve persistence efficiency, reduce persistence time, and improve I/O performance
3) Reduce data recovery time and improve data recovery efficiency
Synchronize hard disk data
Although aof records the command in the aof file every time it executes the command, in fact, due to the caching mechanism of the operating system, the data is not really written to the disk, but into the hard disk cache of the system. By default, the system will perform a synchronization operation every 30 seconds, and then it is really written to the disk.
If the system exits abnormally within this 30-second window, the data still in the hard disk cache is lost. When AOF persistence cannot tolerate such a loss, the synchronization timing can be set with the appendfsync parameter:

appendfsync always   # sync after every write
appendfsync no       # never actively sync; leave it to the operating system
appendfsync everysec # sync once per second
Master-slave replication in Redis (one master, multiple slaves)
Master-slave replication means copying data from one master Redis server to one or more slave Redis servers. Replication between master and slave is unidirectional: data flows only from the master node to the slave nodes.
A master database can have multiple slave databases, while a slave database can only have one master database
Functions: data redundancy, fault recovery, load balancing, read-write separation, high availability cornerstone (the foundation of sentinel and cluster)
Redis can realize master-slave replication through slaveof and configuration file. It is recommended to use configuration file.
Method 1: master-slave replication via the slaveof command line option
Realize one master and two slaves
1) Start the redis service directly; Without any parameters, it listens to port 6379 by default and takes this as the main service
#Start main service [root@dcdfa0e9eb71 bin]# /www/server/redis/src/redis-server 57811:C 29 Dec 2021 10:34:10.057 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo clock: POSIX clock_gettime
2) The client connects to the Redis master server on port 6379
#View configuration information 127.0.0.1:6379> info replication # Replication role:master #Master master database connected_slaves:0 master_failover_state:no-failover master_replid:26d969624f32fcdb591ca1b65639b53d7df769b6 master_replid2:0000000000000000000000000000000000000000 master_repl_offset:0 second_repl_offset:-1 repl_backlog_active:0 repl_backlog_size:1048576 repl_backlog_first_byte_offset:0 repl_backlog_histlen:0 127.0.0.1:6379> set name master OK
3) Use port 6380 to start Redis slave server
You need to reopen the window for server configuration
[root@dcdfa0e9eb71 /]# /www/server/redis/src/redis-server --port 6380 --slaveof 127.0.0.1 6379
After successful connection, any data changes in the primary Redis server database are automatically synchronized to the secondary Redis database (6380).
4) Client connects to 6380 Redis server
You need to reopen the window to connect
[root@dcdfa0e9eb71 /]# redis-cli -p 6380
View master-slave information
127.0.0.1:6380> info replication # Replication role:slave #6380 is slave library master_host:127.0.0.1 master_port:6379 master_link_status:up master_last_io_seconds_ago:4 master_sync_in_progress:0 slave_read_repl_offset:70 slave_repl_offset:70 slave_priority:100 slave_read_only:1 replica_announced:1 connected_slaves:0 master_failover_state:no-failover master_replid:83f3e0a67b4f509ada3e0a0cb0ec3a12209142c7 master_replid2:0000000000000000000000000000000000000000 master_repl_offset:70 second_repl_offset:-1 repl_backlog_active:1 repl_backlog_size:1048576 repl_backlog_first_byte_offset:1 repl_backlog_histlen:70
5) Start the service using port 6381
[root@dcdfa0e9eb71 /]# /www/server/redis/src/redis-server --port 6381 --slaveof 127.0.0.1 6379
6) View the configuration information
[root@dcdfa0e9eb71 /]# redis-cli -p 6381 127.0.0.1:6381> INFO replication # Replication role:slave #6381 is slave Library master_host:127.0.0.1 master_port:6379 master_link_status:up master_last_io_seconds_ago:5 master_sync_in_progress:0 slave_read_repl_offset:238 slave_repl_offset:238 slave_priority:100 slave_read_only:1 replica_announced:1 connected_slaves:0 master_failover_state:no-failover master_replid:83f3e0a67b4f509ada3e0a0cb0ec3a12209142c7 master_replid2:0000000000000000000000000000000000000000 master_repl_offset:238 second_repl_offset:-1 repl_backlog_active:1 repl_backlog_size:1048576 repl_backlog_first_byte_offset:1 repl_backlog_histlen:238
7) Test
#Main warehouse 6379 127.0.0.1:6379> set name master OK #From library 6380 127.0.0.1:6380> get name (nil) 127.0.0.1:6380> get name "master" #From library 6381 127.0.0.1:6381> get name (nil) 127.0.0.1:6381> get name "master"
Method 2: master-slave replication via the conf configuration file
Realize one master and two slaves
1) Create 6380 and 6381 configuration files. Since these two configuration files only need to modify the port, only one configuration file is listed
cp redis.conf redis6380.conf
vim /www/server/redis/redis6380.conf

# The following configuration is for the 6380 instance
# port
port 6380
# run in the background as a daemon
daemonize yes
pidfile /www/server/redis/redis_6380.pid
loglevel notice
# log file
logfile "/www/server/redis/redis_6380.log"
# snapshot file
dbfilename 6380_dump.rdb
dir /www/server/redis/6380
replicaof 127.0.0.1 6379
replica-read-only yes
2) Start redis main service
[root@dcdfa0e9eb71 bin]# /www/server/redis/src/redis-server
3) The client logs in to the master server and views the master-slave information
[root@dcdfa0e9eb71 /]# redis-cli -p 6380 127.0.0.1:6380> info replication # Replication role:slave master_host:127.0.0.1 master_port:6379 master_link_status:up master_last_io_seconds_ago:4 master_sync_in_progress:0 slave_read_repl_offset:70 slave_repl_offset:70 slave_priority:100 ...
4) Start Redis server on port 6380
- Create the 6380 (and 6381) directories first, otherwise startup will fail
[root@dcdfa0e9eb71 redis]# mkdir 6380 [root@dcdfa0e9eb71 redis]# /www/server/redis/src/redis-server /www/server/redis/redis6380.conf
6379 client view link information
[root@dcdfa0e9eb71 /]# redis-cli -p 6379 127.0.0.1:6379> info replication # Replication role:master connected_slaves:0 master_failover_state:no-failover master_replid:2350fbf04aebef085c7def0684a7b8f052f13283 master_replid2:0000000000000000000000000000000000000000 master_repl_offset:0 second_repl_offset:-1 repl_backlog_active:0 repl_backlog_size:1048576 repl_backlog_first_byte_offset:0 repl_backlog_histlen:0 127.0.0.1:6379> info replication # Replication role:master connected_slaves:1 slave0:ip=127.0.0.1,port=6380,state=online,offset=14,lag=1 master_failover_state:no-failover master_replid:66026b04b66fe9cadf2c0e995afd85b09d5f0d88 master_replid2:0000000000000000000000000000000000000000 master_repl_offset:14 second_repl_offset:-1 repl_backlog_active:1 repl_backlog_size:1048576 repl_backlog_first_byte_offset:1 repl_backlog_histlen:14
You can see that one is already connected
7) Testing
#6379 main 127.0.0.1:6379> set age 18 OK #6380 from 127.0.0.1:6380> get age "18"
Principle of master-slave replication
1) When a slave database is started, the slave database will send the SYNC command to the master database
2) After receiving the SYNC command, the master database starts to save the snapshot in the background (RDB persistence process) and caches the commands received during snapshot saving. After the snapshot is completed, Redis will send the snapshot file and all cached commands to the slave database
3) When the slave database receives them, it loads the snapshot file and executes the cached commands
The above process is the initialization phase of replication. After initialization, whenever the master database receives a write command it synchronizes the command to the slave databases, keeping master and slaves consistent.
Read write separation consistency
Through replication, read-write separation can be realized, and the load capacity of the server can be improved. In the application scenario where the read frequency is greater than the write frequency, when the single Redis cannot cope with a large number of read requests, multiple slave database nodes can be established through the replication function. The master database only performs write operations, while the slave database is responsible for read operations.
Persistence on the slave database
Persistence is a time-consuming operation. To improve performance, you can create one or more slave databases through replication, enable persistence only on the slaves, and disable persistence on the master. When a slave crashes and restarts, the master automatically resynchronizes the data, so there is no need to worry about data loss
If the primary database crashes, the situation becomes much more complicated. Strictly follow the following steps
1) In the slave database, use the replicaof no one command to promote the slave database to the master database to continue the service
2) Start the crashed master database, and then use the replicaof command to set it as the slave database of the new master database to synchronize the data back.
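Those two recovery steps can also be issued from application code. A minimal sketch with the redis-py Python client (an assumption), using the 6379/6380 ports from the earlier setup; run it only against the intended instances:

```python
import redis

# step 1: promote the surviving replica (6380) to master
new_master = redis.Redis(host="127.0.0.1", port=6380)
new_master.slaveof()                    # no arguments == REPLICAOF NO ONE

# step 2: once the crashed 6379 instance is back up, attach it to the new master
old_master = redis.Redis(host="127.0.0.1", port=6379)
old_master.slaveof("127.0.0.1", 6380)   # it will sync the data back as a replica
```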
Diskless replication
By default, master-slave replication is implemented on top of RDB persistence: the master saves an RDB snapshot in the background and the slave receives and loads the snapshot file. With diskless replication (the repl-diskless-sync option), the master can instead send the snapshot directly to the slave over the network, without writing it to its own disk first.
Incremental replication
Without incremental replication, whenever the connection to the master is lost the slave resends the SYNC command and performs a full copy again; even if little changed during the disconnection, all data in the database has to be snapshotted and transferred once more.
Incremental replication is based on the following three points:
1) The slave database stores the master database run ID(run id). Each Redis running instance will have a unique running ID, and a new running ID will be automatically generated whenever the instance is restarted
2) In the replication synchronization phase, when the master database transmits a command to the slave database, it will store the command in a backlog at the same time, and record the offset range of the command stored in the current backlog queue
3) When the slave database receives a command from the master database, it records the offset of that command
These three points are the basis for incremental replication. When the master-slave connection is ready, the slave database will send a PSYNC command to tell the master database that all data can be synchronized.
Its format is: PSYNC <master run ID> <offset of the last command received before the disconnection>
Steps:
① First, the master database checks whether the run ID sent by the slave matches its own run ID. This ensures the slave was previously synchronizing with this very master, avoiding incremental data from the wrong instance
② Secondly, judge whether the command offset of the last successful synchronization from the database is in the backlog queue. If so, you can perform incremental replication and send the corresponding commands in the backlog queue to the slave database
The backlog queue is essentially a fixed-length circular queue, 1MB by default; its size can be adjusted with the repl-backlog-size option in the configuration file. The larger the backlog queue, the longer a master-slave disconnection can last while still allowing incremental replication.
Implementation of Redis sentinel mode configuration
Redis sentinel mode is based on master-slave replication.
When the master database goes down, the sentinel selects a new master from the slaves and turns the previous master into a slave of the new master. This process is completed automatically by the sentinel, without manual intervention.
What is a sentry
A sentinel is an independent process. It monitors multiple running Redis instances by sending them commands and waiting for the servers' responses
The role of Sentinels
- Send commands to the master and slave servers and have them report their running state
- When the sentinel detects that the master is down, it automatically promotes a slave to master and then notifies the other slave servers, through the publish/subscribe mode, to update their configuration and switch to the new master.
* Monitoring can be done with a single sentinel or with several. In a real production environment, multiple sentinels (an odd number is recommended) are used.
Implementation steps
1) Configure and enable master-slave replication (it has been implemented above and can be used directly)
[root@dcdfa0e9eb71 redis]# /www/server/redis/src/redis-server /www/server/redis/redis.conf [root@dcdfa0e9eb71 redis]# /www/server/redis/src/redis-server /www/server/redis/redis6380.conf [root@dcdfa0e9eb71 redis]# /www/server/redis/src/redis-server /www/server/redis/redis6381.conf [root@dcdfa0e9eb71 redis]# redis-cli -p 6379 127.0.0.1:6379> info replication # Replication role:master connected_slaves:2 slave0:ip=127.0.0.1,port=6380,state=online,offset=70,lag=1 slave1:ip=127.0.0.1,port=6381,state=online,offset=70,lag=1 master_failover_state:no-failover master_replid:517ffcafa445b9cc6acc2b4a7ee5700773746bbf master_replid2:0000000000000000000000000000000000000000 master_repl_offset:70 second_repl_offset:-1 repl_backlog_active:1 repl_backlog_size:1048576 repl_backlog_first_byte_offset:1 repl_backlog_histlen:70
2) Configuring sentinels with profiles
① Preparatory work
# Switch to the redis directory
[root@192 redis]# cd /www/server/redis/
# Create the sentinel configuration directory
[root@192 redis]# mkdir /www/server/redis/redis_sentinel_conf
# Copy the default sentinel file into that directory
[root@192 redis]# cp ./sentinel.conf /www/server/redis/redis_sentinel_conf/sentinel_6379.conf
② Edit sentry profile
[root@192 redis]# cd /www/server/redis/redis_sentinel_conf/ [root@192 redis_sentinel_conf]# vim sentinel_6379.conf
The revised content is:
# port
port 26379
# Run as a daemon
daemonize no
# working directory
dir /tmp
# Declare the monitored master database as mymaster, with ip 127.0.0.1 and port 6379;
# the trailing 2 means at least 2 sentinels must agree during the sentinel leader election / failover
sentinel monitor mymaster 127.0.0.1 6379 2
# Mark mymaster as down after it has been unreachable for 30 seconds
sentinel down-after-milliseconds mymaster 30000
# During a failover, at most 1 slave may synchronize with the new master at the same time
sentinel parallel-syncs mymaster 1
# Set the failover timeout to 180 seconds
sentinel failover-timeout mymaster 180000
③ Copy sentinel_6379.conf to sentinel_6380.conf and sentinel_6381.conf, and change their ports to 26380 and 26381 respectively
[root@192 redis_sentinel_conf]# cp sentinel_6379.conf ./sentinel_6380.conf [root@192 redis_sentinel_conf]# cp sentinel_6379.conf ./sentinel_6381.conf
3) Start the three sentinel services configured
① Start 26379 service and connect
[root@dcdfa0e9eb71 /]# /www/server/redis/src/redis-sentinel /www/server/redis/redis_sentinel_conf/sentinel_6379.conf
The client connects to the sentinel server using port 26379
You can use info to view the information displayed by the sentinel:
[root@dcdfa0e9eb71 redis]# redis-cli -p 26379
127.0.0.1:26379> info
# Server
redis_version:6.2.6
redis_git_sha1:00000000
redis_git_dirty:0
redis_build_id:36599a7c3e389a76
redis_mode:sentinel
os:Linux 5.10.47-linuxkit x86_64
arch_bits:64
multiplexing_api:epoll
atomicvar_api:c11-builtin
gcc_version:8.4.1
process_id:73653
process_supervised:no
run_id:82fa9039e9928d9809429365592ef2f732e2a94d
tcp_port:26379
......
# Sentinel
sentinel_masters:1
sentinel_tilt:0
sentinel_running_scripts:0
sentinel_scripts_queue_length:0
sentinel_simulate_failure_flags:0
master0:name=mymaster,status=ok,address=127.0.0.1:6379,slaves=2,sentinels=2
You can see that the configuration file changes automatically
② Start 26380 sentinel service
[root@dcdfa0e9eb71 redis_sentinel_conf]# /www/server/redis/src/redis-sentinel /www/server/redis/redis_sentinel_conf/sentinel_6380.conf
Check the 26379 service. There are several more configurations
73653:X 29 Dec 2021 14:29:03.124 # Sentinel ID is a4fe46846ee24ff8a183c949ff813aa620d8ef90
73653:X 29 Dec 2021 14:29:03.125 # +monitor master mymaster 127.0.0.1 6379 quorum 2
73653:X 29 Dec 2021 14:29:03.126 * +slave slave 127.0.0.1:6380 127.0.0.1 6380 @ mymaster 127.0.0.1 6379
73653:X 29 Dec 2021 14:29:03.180 * +slave slave 127.0.0.1:6381 127.0.0.1 6381 @ mymaster 127.0.0.1 6379
73653:X 29 Dec 2021 14:41:22.742 * +sentinel sentinel 0b968d92eeb26074330ab631221c1ace39be42fe 127.0.0.1 26380 @ mymaster 127.0.0.1 6379
73653:X 29 Dec 2021 14:43:29.244 * +sentinel sentinel bd67baf85fcc6a7664336bb06e725b9321082bac 127.0.0.1 26381 @ mymaster 127.0.0.1 6379
③ Start 26381 sentinel service
[root@dcdfa0e9eb71 /]# /www/server/redis/src/redis-sentinel /www/server/redis/redis_sentinel_conf/sentinel_6381.conf
74632:X 29 Dec 2021 14:43:27.178 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
74632:X 29 Dec 2021 14:43:27.179 # Redis version=6.2.6, bits=64, commit=00000000, modified=0, pid=74632, just started
74632:X 29 Dec 2021 14:43:27.179 # Configuration loaded
74632:X 29 Dec 2021 14:43:27.180 * monotonic clock: POSIX clock_gettime
(Redis 6.2.6 startup banner: running in sentinel mode on port 26381, PID 74632)
74632:X 29 Dec 2021 14:43:27.206 # Sentinel ID is bd67baf85fcc6a7664336bb06e725b9321082bac
74632:X 29 Dec 2021 14:43:27.206 # +monitor master mymaster 127.0.0.1 6379 quorum 2
74632:X 29 Dec 2021 14:43:27.208 * +slave slave 127.0.0.1:6380 127.0.0.1 6380 @ mymaster 127.0.0.1 6379
74632:X 29 Dec 2021 14:43:27.228 * +slave slave 127.0.0.1:6381 127.0.0.1 6381 @ mymaster 127.0.0.1 6379
74632:X 29 Dec 2021 14:43:28.993 * +sentinel sentinel a4fe46846ee24ff8a183c949ff813aa620d8ef90 127.0.0.1 26379 @ mymaster 127.0.0.1 6379
74632:X 29 Dec 2021 14:43:29.032 * +sentinel sentinel 0b968d92eeb26074330ab631221c1ace39be42fe 127.0.0.1 26380 @ mymaster 127.0.0.1 6379
4) Actual test
① Write a piece of data to the 6379 master server
127.0.0.1:6379> set ziruchu hello OK
② Read it from the 6380 and 6381 slave servers
127.0.0.1:6380> get ziruchu "hello" 127.0.0.1:6381> get ziruchu "hello"
③ Bring down the 6379 master server
Since the 6379 master was not started as a daemon, you can simply press Ctrl+C to stop it, or use ps -aux to find the process ID of the 6379 service and then run kill -9 <pid> to simulate downtime.
④ View a sentinel at random
[root@dcdfa0e9eb71 /]# /www/server/redis/src/redis-sentinel /www/server/redis/redis_sentinel_conf/sentinel_6381.conf
(the same startup output as shown when starting the 26381 sentinel above, followed by the failover events:)
74632:X 29 Dec 2021 15:18:10.518 # +sdown master mymaster 127.0.0.1 6379
74632:X 29 Dec 2021 15:18:10.581 # +odown master mymaster 127.0.0.1 6379 #quorum 3/2
74632:X 29 Dec 2021 15:18:10.581 # +new-epoch 1
74632:X 29 Dec 2021 15:18:10.581 # +try-failover master mymaster 127.0.0.1 6379
74632:X 29 Dec 2021 15:18:10.613 # +vote-for-leader bd67baf85fcc6a7664336bb06e725b9321082bac 1
74632:X 29 Dec 2021 15:18:10.613 # a4fe46846ee24ff8a183c949ff813aa620d8ef90 voted for a4fe46846ee24ff8a183c949ff813aa620d8ef90 1
74632:X 29 Dec 2021 15:18:10.650 # 0b968d92eeb26074330ab631221c1ace39be42fe voted for a4fe46846ee24ff8a183c949ff813aa620d8ef90 1
74632:X 29 Dec 2021 15:18:10.859 # +config-update-from sentinel a4fe46846ee24ff8a183c949ff813aa620d8ef90 127.0.0.1 26379 @ mymaster 127.0.0.1 6379
74632:X 29 Dec 2021 15:18:10.859 # +switch-master mymaster 127.0.0.1 6379 127.0.0.1 6381
74632:X 29 Dec 2021 15:18:10.859 * +slave slave 127.0.0.1:6380 127.0.0.1 6380 @ mymaster 127.0.0.1 6381
74632:X 29 Dec 2021 15:18:10.859 * +slave slave 127.0.0.1:6379 127.0.0.1 6379 @ mymaster 127.0.0.1 6381
74632:X 29 Dec 2021 15:18:40.861 # +sdown slave 127.0.0.1:6379 127.0.0.1 6379 @ mymaster 127.0.0.1 6381
74632:X 29 Dec 2021 15:21:53.454 # -sdown slave 127.0.0.1:6379 127.0.0.1 6379 @ mymaster 127.0.0.1 6381
You can see that 6381 was elected as the new master server.
⑤ Set a value on the new 6381 master and read it from the slave
# Master server 6381
[root@dcdfa0e9eb71 /]# redis-cli -p 6381
127.0.0.1:6381> info replication
# Replication
role:master
connected_slaves:1
slave0:ip=127.0.0.1,port=6380,state=online,offset=493296,lag=0
master_failover_state:no-failover
master_replid:4648e7de667cfee4e37c14b9a08c605f05e56681
master_replid2:517ffcafa445b9cc6acc2b4a7ee5700773746bbf
master_repl_offset:493296
second_repl_offset:470776
repl_backlog_active:1
repl_backlog_size:1048576
repl_backlog_first_byte_offset:43
repl_backlog_histlen:493254
127.0.0.1:6381> set name 6381
OK
# Slave server 6380
[root@dcdfa0e9eb71 redis]# redis-cli -p 6380
127.0.0.1:6380> get name
"a"
127.0.0.1:6380> get name
"b"
127.0.0.1:6380> get name
Error: Server closed the connection
127.0.0.1:6380> get name
"6381"
127.0.0.1:6380> info replication
# Replication
role:slave
master_host:127.0.0.1
master_port:6381
master_link_status:up
master_last_io_seconds_ago:0
master_sync_in_progress:0
slave_read_repl_offset:499356
slave_repl_offset:499356
slave_priority:100
slave_read_only:1
replica_announced:1
connected_slaves:0
master_failover_state:no-failover
master_replid:4648e7de667cfee4e37c14b9a08c605f05e56681
master_replid2:517ffcafa445b9cc6acc2b4a7ee5700773746bbf
master_repl_offset:499356
second_repl_offset:470776
repl_backlog_active:1
repl_backlog_size:1048576
repl_backlog_first_byte_offset:1
repl_backlog_histlen:499356
⑥ Restart the 6379 service and connect with the client
[root@dcdfa0e9eb71 redis]# /www/server/redis/src/redis-server
77182:C 29 Dec 2021 15:21:53.079 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
77182:C 29 Dec 2021 15:21:53.079 # Redis version=6.2.6, bits=64, commit=00000000, modified=0, pid=77182, just started
77182:C 29 Dec 2021 15:21:53.079 # Warning: no config file specified, using the default config. In order to specify a config file use /www/server/redis/src/redis-server /path/to/redis.conf
77182:M 29 Dec 2021 15:21:53.081 * monotonic clock: POSIX clock_gettime
(Redis ASCII-art banner: Redis 6.2.6, running in standalone mode, port 6379, PID 77182)
77182:M 29 Dec 2021 15:21:53.082 # Server initialized
77182:M 29 Dec 2021 15:21:53.092 * Loading RDB produced by version 6.2.6
77182:M 29 Dec 2021 15:21:53.092 * RDB age 3404 seconds
77182:M 29 Dec 2021 15:21:53.092 * RDB memory usage when created 1.85 Mb
77182:M 29 Dec 2021 15:21:53.092 # Done loading RDB, keys loaded: 2, keys expired: 0.
77182:M 29 Dec 2021 15:21:53.092 * DB loaded from disk: 0.007 seconds
77182:M 29 Dec 2021 15:21:53.092 * Ready to accept connections
77182:S 29 Dec 2021 15:22:03.410 * Before turning into a replica, using my own master parameters to synthesize a cached master: I may be able to synchronize with the new master with just a partial transfer.
77182:S 29 Dec 2021 15:22:03.410 * Connecting to MASTER 127.0.0.1:6381
77182:S 29 Dec 2021 15:22:03.410 * MASTER <-> REPLICA sync started
77182:S 29 Dec 2021 15:22:03.411 * REPLICAOF 127.0.0.1:6381 enabled (user request from 'id=5 addr=127.0.0.1:34810 laddr=127.0.0.1:6379 fd=10 name=sentinel-0b968d92-cmd age=10 idle=0 flags=x db=0 sub=0 psub=0 multi=4 qbuf=196 qbuf-free=40758 argv-mem=4 obl=45 oll=0 omem=0 tot-mem=61468 events=r cmd=exec user=default redir=-1')
77182:S 29 Dec 2021 15:22:03.411 * Non blocking connect for SYNC fired the event.
77182:S 29 Dec 2021 15:22:03.412 * Master replied to PING, replication can continue...
77182:S 29 Dec 2021 15:22:03.412 * Trying a partial resynchronization (request e662599320845936adbf62088ab352d6137febea:1).
77182:S 29 Dec 2021 15:22:03.490 * Full resync from master: 4648e7de667cfee4e37c14b9a08c605f05e56681:516772
77182:S 29 Dec 2021 15:22:03.490 * Discarding previously cached master state.
77182:S 29 Dec 2021 15:22:03.604 * MASTER <-> REPLICA sync: receiving 199 bytes from master to disk
77182:S 29 Dec 2021 15:22:03.611 * MASTER <-> REPLICA sync: Flushing old data
77182:S 29 Dec 2021 15:22:03.611 * MASTER <-> REPLICA sync: Loading DB in memory
77182:S 29 Dec 2021 15:22:03.678 * Loading RDB produced by version 6.2.6
77182:S 29 Dec 2021 15:22:03.678 * RDB age 0 seconds
77182:S 29 Dec 2021 15:22:03.678 * RDB memory usage when created 2.03 Mb
77182:S 29 Dec 2021 15:22:03.678 # Done loading RDB, keys loaded: 2, keys expired: 0.
77182:S 29 Dec 2021 15:22:03.678 * MASTER <-> REPLICA sync: Finished with success
⑦ 6379 client view
[root@dcdfa0e9eb71 /]# redis-cli -p 6379
127.0.0.1:6379> info replication
# Replication
role:slave
master_host:127.0.0.1
master_port:6381
master_link_status:up
master_last_io_seconds_ago:1
master_sync_in_progress:0
slave_read_repl_offset:525629
slave_repl_offset:525629
slave_priority:100
slave_read_only:1
replica_announced:1
connected_slaves:0
master_failover_state:no-failover
master_replid:4648e7de667cfee4e37c14b9a08c605f05e56681
master_replid2:0000000000000000000000000000000000000000
master_repl_offset:525629
second_repl_offset:-1
repl_backlog_active:1
repl_backlog_size:1048576
repl_backlog_first_byte_offset:516773
repl_backlog_histlen:8857
127.0.0.1:6379> get name
"6381"
At this point, the sentinel configuration is complete.
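For an application, the point of this setup is to connect through the sentinels rather than to a fixed master address, so that it keeps working after a failover. A minimal sketch, assuming the redis-py package and the three sentinels above on ports 26379-26381:

```python
from redis.sentinel import Sentinel

sentinel = Sentinel(
    [("127.0.0.1", 26379), ("127.0.0.1", 26380), ("127.0.0.1", 26381)],
    socket_timeout=0.5,
)

# Ask the sentinels which node currently serves as the master of "mymaster"
print(sentinel.discover_master("mymaster"))   # e.g. ('127.0.0.1', 6381) after the failover above

# master_for / slave_for return connections that are re-resolved through the sentinels on failover
master = sentinel.master_for("mymaster", socket_timeout=0.5, decode_responses=True)
replica = sentinel.slave_for("mymaster", socket_timeout=0.5, decode_responses=True)
master.set("ziruchu", "hello")
print(replica.get("ziruchu"))                 # "hello"
```

Because the master address is looked up through the sentinels on every reconnect, the application does not need to be restarted or reconfigured when the master moves from 6379 to 6381.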
The following is supplementary information: the startup logs of the three sentinels.
6381 sentinel log: the same output as shown for the 26381 sentinel in step ④ above (startup, +monitor, +slave, +sentinel, then +sdown/+odown, the vote, +switch-master and the +sdown/-sdown of the old master).

6380 sentinel log:
[root@dcdfa0e9eb71 redis_sentinel_conf]# /www/server/redis/src/redis-sentinel /www/server/redis/redis_sentinel_conf/sentinel_6380.conf
74461:X 29 Dec 2021 14:41:20.739 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
74461:X 29 Dec 2021 14:41:20.739 # Redis version=6.2.6, bits=64, commit=00000000, modified=0, pid=74461, just started
74461:X 29 Dec 2021 14:41:20.739 # Configuration loaded
74461:X 29 Dec 2021 14:41:20.742 * monotonic clock: POSIX clock_gettime
(Redis ASCII-art banner: Redis 6.2.6, running in sentinel mode, port 26380, PID 74461)
74461:X 29 Dec 2021 14:41:20.762 # Sentinel ID is 0b968d92eeb26074330ab631221c1ace39be42fe
74461:X 29 Dec 2021 14:41:20.762 # +monitor master mymaster 127.0.0.1 6379 quorum 2
74461:X 29 Dec 2021 14:41:20.766 * +slave slave 127.0.0.1:6380 127.0.0.1 6380 @ mymaster 127.0.0.1 6379
74461:X 29 Dec 2021 14:41:20.785 * +slave slave 127.0.0.1:6381 127.0.0.1 6381 @ mymaster 127.0.0.1 6379
74461:X 29 Dec 2021 14:41:22.600 * +sentinel sentinel a4fe46846ee24ff8a183c949ff813aa620d8ef90 127.0.0.1 26379 @ mymaster 127.0.0.1 6379
74461:X 29 Dec 2021 14:43:29.244 * +sentinel sentinel bd67baf85fcc6a7664336bb06e725b9321082bac 127.0.0.1 26381 @ mymaster 127.0.0.1 6379
74461:X 29 Dec 2021 15:18:10.496 # +sdown master mymaster 127.0.0.1 6379
74461:X 29 Dec 2021 15:18:10.627 # +new-epoch 1
74461:X 29 Dec 2021 15:18:10.649 # +vote-for-leader a4fe46846ee24ff8a183c949ff813aa620d8ef90 1
74461:X 29 Dec 2021 15:18:10.858 # +config-update-from sentinel a4fe46846ee24ff8a183c949ff813aa620d8ef90 127.0.0.1 26379 @ mymaster 127.0.0.1 6379
74461:X 29 Dec 2021 15:18:10.859 # +switch-master mymaster 127.0.0.1 6379 127.0.0.1 6381
74461:X 29 Dec 2021 15:18:10.859 * +slave slave 127.0.0.1:6380 127.0.0.1 6380 @ mymaster 127.0.0.1 6381
74461:X 29 Dec 2021 15:18:10.859 * +slave slave 127.0.0.1:6379 127.0.0.1 6379 @ mymaster 127.0.0.1 6381
74461:X 29 Dec 2021 15:18:40.889 # +sdown slave 127.0.0.1:6379 127.0.0.1 6379 @ mymaster 127.0.0.1 6381
74461:X 29 Dec 2021 15:21:53.460 # -sdown slave 127.0.0.1:6379 127.0.0.1 6379 @ mymaster 127.0.0.1 6381
74461:X 29 Dec 2021 15:22:03.410 * +convert-to-slave slave 127.0.0.1:6379 127.0.0.1 6379 @ mymaster 127.0.0.1 6381

6379 sentinel log:
[root@dcdfa0e9eb71 /]# /www/server/redis/src/redis-sentinel /www/server/redis/redis_sentinel_conf/sentinel_6379.conf
73653:X 29 Dec 2021 14:29:03.047 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
73653:X 29 Dec 2021 14:29:03.052 # Redis version=6.2.6, bits=64, commit=00000000, modified=0, pid=73653, just started
73653:X 29 Dec 2021 14:29:03.055 # Configuration loaded
73653:X 29 Dec 2021 14:29:03.056 * monotonic clock: POSIX clock_gettime
(Redis ASCII-art banner: Redis 6.2.6, running in sentinel mode, port 26379, PID 73653)
73653:X 29 Dec 2021 14:29:03.124 # Sentinel ID is a4fe46846ee24ff8a183c949ff813aa620d8ef90
73653:X 29 Dec 2021 14:29:03.125 # +monitor master mymaster 127.0.0.1 6379 quorum 2
73653:X 29 Dec 2021 14:29:03.126 * +slave slave 127.0.0.1:6380 127.0.0.1 6380 @ mymaster 127.0.0.1 6379
73653:X 29 Dec 2021 14:29:03.180 * +slave slave 127.0.0.1:6381 127.0.0.1 6381 @ mymaster 127.0.0.1 6379
73653:X 29 Dec 2021 14:41:22.742 * +sentinel sentinel 0b968d92eeb26074330ab631221c1ace39be42fe 127.0.0.1 26380 @ mymaster 127.0.0.1 6379
73653:X 29 Dec 2021 14:43:29.244 * +sentinel sentinel bd67baf85fcc6a7664336bb06e725b9321082bac 127.0.0.1 26381 @ mymaster 127.0.0.1 6379
73653:X 29 Dec 2021 15:18:10.514 # +sdown master mymaster 127.0.0.1 6379
73653:X 29 Dec 2021 15:18:10.573 # +odown master mymaster 127.0.0.1 6379 #quorum 2/2
73653:X 29 Dec 2021 15:18:10.573 # +new-epoch 1
73653:X 29 Dec 2021 15:18:10.573 # +try-failover master mymaster 127.0.0.1 6379
73653:X 29 Dec 2021 15:18:10.601 # +vote-for-leader a4fe46846ee24ff8a183c949ff813aa620d8ef90 1
73653:X 29 Dec 2021 15:18:10.613 # bd67baf85fcc6a7664336bb06e725b9321082bac voted for bd67baf85fcc6a7664336bb06e725b9321082bac 1
73653:X 29 Dec 2021 15:18:10.650 # 0b968d92eeb26074330ab631221c1ace39be42fe voted for a4fe46846ee24ff8a183c949ff813aa620d8ef90 1
73653:X 29 Dec 2021 15:18:10.657 # +elected-leader master mymaster 127.0.0.1 6379
73653:X 29 Dec 2021 15:18:10.657 # +failover-state-select-slave master mymaster 127.0.0.1 6379
73653:X 29 Dec 2021 15:18:10.715 # +selected-slave slave 127.0.0.1:6381 127.0.0.1 6381 @ mymaster 127.0.0.1 6379
73653:X 29 Dec 2021 15:18:10.715 * +failover-state-send-slaveof-noone slave 127.0.0.1:6381 127.0.0.1 6381 @ mymaster 127.0.0.1 6379
73653:X 29 Dec 2021 15:18:10.792 * +failover-state-wait-promotion slave 127.0.0.1:6381 127.0.0.1 6381 @ mymaster 127.0.0.1 6379
73653:X 29 Dec 2021 15:18:10.847 # +promoted-slave slave 127.0.0.1:6381 127.0.0.1 6381 @ mymaster 127.0.0.1 6379
73653:X 29 Dec 2021 15:18:10.847 # +failover-state-reconf-slaves master mymaster 127.0.0.1 6379
73653:X 29 Dec 2021 15:18:10.855 * +slave-reconf-sent slave 127.0.0.1:6380 127.0.0.1 6380 @ mymaster 127.0.0.1 6379
73653:X 29 Dec 2021 15:18:11.805 # -odown master mymaster 127.0.0.1 6379
73653:X 29 Dec 2021 15:18:11.806 * +slave-reconf-inprog slave 127.0.0.1:6380 127.0.0.1 6380 @ mymaster 127.0.0.1 6379
73653:X 29 Dec 2021 15:18:11.806 * +slave-reconf-done slave 127.0.0.1:6380 127.0.0.1 6380 @ mymaster 127.0.0.1 6379
73653:X 29 Dec 2021 15:18:11.882 # +failover-end master mymaster 127.0.0.1 6379
73653:X 29 Dec 2021 15:18:11.882 # +switch-master mymaster 127.0.0.1 6379 127.0.0.1 6381
73653:X 29 Dec 2021 15:18:11.882 * +slave slave 127.0.0.1:6380 127.0.0.1 6380 @ mymaster 127.0.0.1 6381
73653:X 29 Dec 2021 15:18:11.882 * +slave slave 127.0.0.1:6379 127.0.0.1 6379 @ mymaster 127.0.0.1 6381

Interpretation of the startup and failover log entries:
+slave indicates that a new slave database has been found;
+sdown indicates that the sentinel subjectively believes the master database is out of service;
+odown indicates that the sentinel objectively believes the master database is out of service;
+try-failover indicates that the sentinel has started failure recovery;
+failover-end indicates that the sentinel has completed failure recovery. What happens in between is relatively complex, including the election of the leading sentinel, the selection of the replacement database, and so on;
+switch-master indicates that the master database was switched from port 6379 to port 6381, that is, 6381 was promoted to master.
For example, after 6379 goes down, 6381 is elected as the new master database. At that moment 6379 is still down; once 6379 comes back, it becomes a slave of the 6381 instance, so the sentinel rewrites 6379's instance information to mark it as a slave of 6381.
Sentinel implementation principle:
After a sentinel process starts, it reads its configuration file and finds the master database to monitor through: sentinel monitor <master-name> <ip> <redis-port> <quorum>
For example, in sentinel_6379.conf: sentinel monitor mymaster 127.0.0.1 6381 2
A sentinel node can monitor multiple Redis master-slave systems at the same time. Its configuration is as follows:
# Example: one sentinel monitoring two master-slave systems (not configured in this walkthrough)
sentinel monitor mymaster 127.0.0.1 6379 2
sentinel monitor othermaster 127.0.0.1 6380 2
Multiple sentinels can also monitor one Redis master-slave system at the same time; the walkthrough recorded above is exactly this setup.
Explanation of configuration meaning
sentinel monitor mymaster 127.0.0.1 6379 2
mymaster is the name of the master database to monitor. It can be customized; the name may only contain letters, digits and the characters '.', '-' and '_'
127.0.0.1 indicates the master database address
6379 indicates the primary database port number
2 is the quorum, the minimum number of votes: if two of the three sentinels consider the master database to be down, it is treated as down
# Other sentinel.conf settings
sentinel down-after-milliseconds mymaster 60000      # mark mymaster as subjectively down after 60000 ms without a valid ping reply
sentinel down-after-milliseconds othermaster 10000   # mark othermaster as subjectively down after 10000 ms without a valid ping reply
Sentinel principle can be understood as three steps: monitoring, notification and failover
Monitoring
After a sentinel starts, it establishes two connections with the monitored master database. One of them is used to subscribe to the master's __sentinel__:hello channel to obtain information about the other sentinel nodes that also monitor this database;
Sentinels also need to regularly send commands such as INFO to the main database to obtain the information of the main database itself.
After the connections to the master database are established, the sentinel uses the other connection to send commands periodically:
- Every 10 s, the sentinel sends the info command to the master and slave databases
- Every 2 s, the sentinel publishes its own information to the master and slave databases via the __sentinel__:hello channel
- Every 1 s, the sentinel sends a ping command to the master database, the slave databases and the other sentinel nodes
In more detail:
First, the info command lets the sentinel obtain the relevant information about the current database (run ID, replication details, etc.) so that new nodes can be discovered automatically;
Second, after startup the sentinel sends the info command to the master database, learns the list of slave databases by parsing the result, and then establishes two connections to each slave database;
Third, every 10 s the sentinel sends the info command to the known master and slave databases to pick up updated information and act on it;
Fourth, the sentinel publishes messages to the __sentinel__:hello channel of the master and slave databases, sharing its own information with the other sentinels monitoring the same database. The message contains: sentinel address + sentinel port + sentinel run ID + sentinel configuration version + master database name + master database address + master database port + master database configuration version.
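You can watch these hello announcements yourself by subscribing to the channel from any client. A small sketch, assuming the redis-py package and the post-failover master on port 6381:

```python
import redis

# Connect to the current master and listen for a few sentinel announcements
r = redis.Redis(host="127.0.0.1", port=6381, decode_responses=True)
p = r.pubsub()
p.subscribe("__sentinel__:hello")

for _ in range(10):
    msg = p.get_message(timeout=3)
    if msg and msg["type"] == "message":
        # Payload fields: sentinel ip, port, run ID, config epoch, master name, master ip, master port, master config epoch
        print(msg["data"])
```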
Notification
Once configuration is complete, monitoring begins and ping commands are sent.
If a pinged database or node fails to reply within the time specified by down-after-milliseconds, the sentinel considers it subjectively offline.
Subjectively offline means the node looks down from the point of view of the current sentinel process. If that node is the master database, the sentinel then decides whether failure recovery is needed: it sends the SENTINEL is-master-down-by-addr command to the other sentinel nodes, in effect saying "I found that the master at this address is offline, do you agree?". When the specified number (quorum) of sentinels consider the master down, it is marked as objectively offline.
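The state a sentinel holds for the master can also be inspected directly; a hedged sketch, again assuming redis-py (whose sentinel_master helper wraps the SENTINEL MASTER command):

```python
import redis

s = redis.Redis(host="127.0.0.1", port=26379, decode_responses=True)
state = s.sentinel_master("mymaster")   # equivalent to: SENTINEL MASTER mymaster
print(state)                            # includes the address, flags (s_down / o_down when down) and quorum
```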
Failover
After the master database is marked objectively offline, the sentinels hold an internal vote to elect a leader, and the leader carries out the failure recovery. The leader election proceeds as follows:
1. The sentinel node (node A) that detected the objective offline state of the master sends a command to every other sentinel node asking them to choose it as the leading sentinel
2. If the receiving sentinel has not yet voted for any other node, it agrees to elect node A as the leading sentinel
3. If node A finds that more than half of the sentinel nodes, and no fewer than the quorum parameter value, have voted for it, node A becomes the leading sentinel
4. If several nodes campaign at the same time, it may happen that no node gets enough votes. In that case each candidate waits a random amount of time and launches a new round of election until one succeeds
5. Once the leading sentinel is elected, it starts recovering the master database failure, as follows:
- 5.1 From all the slave databases that are online, pick the one with the highest priority (priority is configured with slave-priority);
- 5.2 If several databases share the highest priority, the one with the largest replication offset wins;
- 5.3 If they are still tied, pick the slave database with the smaller run ID
After a slave database is selected, the leading sentinel sends the replicaof no one command to it to promote it to master, and sends the replicaof command to the other slave databases so that they become slaves of the new master.
Finally, the internal records are updated: the stopped old master is recorded as a slave of the new master, so that it rejoins as a slave when it comes back and the service continues.
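The selection rules in 5.1-5.3 amount to a simple ordering. The following is not the real Sentinel code, just a sketch of that ordering, assuming each candidate replica is described by its slave-priority (in Redis a lower value means more preferred, and 0 means never promote), its replication offset and its run ID:

```python
# Hypothetical candidate list; the field names are ours, not Sentinel's internal structures
replicas = [
    {"addr": "127.0.0.1:6380", "priority": 100, "offset": 493296, "run_id": "b2..."},
    {"addr": "127.0.0.1:6381", "priority": 100, "offset": 499356, "run_id": "a1..."},
]

# Skip priority 0 ("never promote"); then prefer the lowest slave-priority value,
# then the largest replication offset, then the lexicographically smaller run ID.
candidates = [r for r in replicas if r["priority"] > 0]
best = min(candidates, key=lambda r: (r["priority"], -r["offset"], r["run_id"]))
print(best["addr"])   # -> 127.0.0.1:6381, matching the failover in the walkthrough above
```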
Redis Cluster
Implementation
1) Create six instance configuration files. Take 9000.conf as an example; copy it to the remaining five configuration files and change the port in each.
[root@dcdfa0e9eb71 redis]# pwd
/www/server/redis
[root@dcdfa0e9eb71 redis]# mkdir clusterConf
[root@dcdfa0e9eb71 redis]# cp /www/server/redis/redis.conf clusterConf/9000.conf
[root@dcdfa0e9eb71 clusterConf]# cp 9000.conf 9001.conf
[root@dcdfa0e9eb71 clusterConf]# cp 9000.conf 9002.conf
[root@dcdfa0e9eb71 clusterConf]# cp 9000.conf 9003.conf
[root@dcdfa0e9eb71 clusterConf]# cp 9000.conf 9004.conf
[root@dcdfa0e9eb71 clusterConf]# cp 9000.conf 9005.conf
Tip: replace text in vi with :%s/source string/target string/, for example :%s/9000/9005/
- Because this runs on a single local machine, the bind-address step is omitted; on a real server you would bind the server's actual IP address.
- Since this is a local learning environment, the daemon is not enabled; in a real server environment you would start Redis as a daemon.
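The copy-and-edit step above is easy to script. A convenience sketch that generates 9001.conf through 9005.conf from 9000.conf by substituting the port, mirroring the vi substitution (the paths are the ones used in this walkthrough):

```python
# Like :%s/9000/900x/, this replaces every occurrence of "9000" in the copied file
base = open("/www/server/redis/clusterConf/9000.conf").read()
for port in range(9001, 9006):
    with open(f"/www/server/redis/clusterConf/{port}.conf", "w") as f:
        f.write(base.replace("9000", str(port)))
```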
2) Start the configured 6 servers
[root@dcdfa0e9eb71 redis]# /www/server/redis/src/redis-server ./clusterConf/9000.conf &
[root@dcdfa0e9eb71 redis]# /www/server/redis/src/redis-server ./clusterConf/9001.conf &
[root@dcdfa0e9eb71 redis]# /www/server/redis/src/redis-server ./clusterConf/9002.conf &
[root@dcdfa0e9eb71 redis]# /www/server/redis/src/redis-server ./clusterConf/9003.conf &
[root@dcdfa0e9eb71 redis]# /www/server/redis/src/redis-server ./clusterConf/9004.conf &
[root@dcdfa0e9eb71 redis]# /www/server/redis/src/redis-server ./clusterConf/9005.conf &
View the redis process after startup
[root@dcdfa0e9eb71 clusterConf]# ps -aux | grep redis
root 85259 0.1 0.1 64540 5600 pts/11 Sl+ 17:21 0:00 /www/server/redis/src/redis-server 127.0.0.1:9000 [cluster]
root 85305 0.1 0.1 64540 5608 pts/7  Sl+ 17:22 0:00 /www/server/redis/src/redis-server 127.0.0.1:9001 [cluster]
root 85414 0.1 0.1 64540 5552 pts/2  Sl  17:23 0:00 /www/server/redis/src/redis-server 127.0.0.1:9002 [cluster]
root 85429 0.2 0.1 64540 5492 pts/2  Sl  17:23 0:00 /www/server/redis/src/redis-server 127.0.0.1:9003 [cluster]
root 85436 0.2 0.1 64540 5644 pts/2  Sl  17:23 0:00 /www/server/redis/src/redis-server 127.0.0.1:9004 [cluster]
root 85445 0.5 0.1 64540 5536 pts/2  Sl  17:24 0:00 /www/server/redis/src/redis-server 127.0.0.1:9005 [cluster]
root 85472 0.0 0.0 12136 1092 pts/2  S+  17:24 0:00 grep --color=auto redis
Notice that, unlike a standalone instance, each process carries an additional [cluster] flag.
3) Create cluster
- Create cluster
[root@dcdfa0e9eb71 clusterConf]# redis-cli --cluster create 127.0.0.1:9000 127.0.0.1:9001 127.0.0.1:9002 127.0.0.1:9003 127.0.0.1:9004 127.0.0.1:9005 --cluster-replicas 1
>>> Performing hash slots allocation on 6 nodes...
Master[0] -> Slots 0 - 5460
Master[1] -> Slots 5461 - 10922
Master[2] -> Slots 10923 - 16383
Adding replica 127.0.0.1:9004 to 127.0.0.1:9000
Adding replica 127.0.0.1:9005 to 127.0.0.1:9001
Adding replica 127.0.0.1:9003 to 127.0.0.1:9002
>>> Trying to optimize slaves allocation for anti-affinity
[WARNING] Some slaves are in the same host as their master
M: e4256cad47611df0aa3f6a41c89d24b91c1aa89e 127.0.0.1:9000
   slots:[0-5460] (5461 slots) master
M: abc2fa081d403c505b5d5127d3c3af9ffaa38538 127.0.0.1:9001
   slots:[5461-10922] (5462 slots) master
M: f70b68962d8fb9b485cd420b1731b997feac12ad 127.0.0.1:9002
   slots:[10923-16383] (5461 slots) master
S: 0071f328a2c0a6a58cfe373ce0d8c6137bf15e0c 127.0.0.1:9003
   replicates abc2fa081d403c505b5d5127d3c3af9ffaa38538
S: b8bf4748d7039934d4ba43fe558a464592958eff 127.0.0.1:9004
   replicates f70b68962d8fb9b485cd420b1731b997feac12ad
S: 1baffc136c23418c05bda6805614fc4a29d02004 127.0.0.1:9005
   replicates e4256cad47611df0aa3f6a41c89d24b91c1aa89e
Can I set the above configuration? (type 'yes' to accept): yes
- Enter yes and the following interface information will appear, showing which nodes have joined
>>> Nodes configuration updated
>>> Assign a different config epoch to each node
>>> Sending CLUSTER MEET messages to join the cluster
Waiting for the cluster to join
.
>>> Performing Cluster Check (using node 127.0.0.1:9000)
M: e4256cad47611df0aa3f6a41c89d24b91c1aa89e 127.0.0.1:9000
   slots:[0-5460] (5461 slots) master
   1 additional replica(s)
S: 0071f328a2c0a6a58cfe373ce0d8c6137bf15e0c 127.0.0.1:9003
   slots: (0 slots) slave
   replicates abc2fa081d403c505b5d5127d3c3af9ffaa38538
M: f70b68962d8fb9b485cd420b1731b997feac12ad 127.0.0.1:9002
   slots:[10923-16383] (5461 slots) master
   1 additional replica(s)
S: 1baffc136c23418c05bda6805614fc4a29d02004 127.0.0.1:9005
   slots: (0 slots) slave
   replicates e4256cad47611df0aa3f6a41c89d24b91c1aa89e
S: b8bf4748d7039934d4ba43fe558a464592958eff 127.0.0.1:9004
   slots: (0 slots) slave
   replicates f70b68962d8fb9b485cd420b1731b997feac12ad
M: abc2fa081d403c505b5d5127d3c3af9ffaa38538 127.0.0.1:9001
   slots:[5461-10922] (5462 slots) master
   1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
- View the nodes-9000.conf configuration file (generated automatically)
[root@dcdfa0e9eb71 9000]# cat nodes-9000.conf
0071f328a2c0a6a58cfe373ce0d8c6137bf15e0c 127.0.0.1:9003@19003 slave abc2fa081d403c505b5d5127d3c3af9ffaa38538 0 1640769962000 2 connected
f70b68962d8fb9b485cd420b1731b997feac12ad 127.0.0.1:9002@19002 master - 0 1640769963139 3 connected 10923-16383
1baffc136c23418c05bda6805614fc4a29d02004 127.0.0.1:9005@19005 slave e4256cad47611df0aa3f6a41c89d24b91c1aa89e 0 1640769962000 1 connected
b8bf4748d7039934d4ba43fe558a464592958eff 127.0.0.1:9004@19004 slave f70b68962d8fb9b485cd420b1731b997feac12ad 0 1640769962000 3 connected
abc2fa081d403c505b5d5127d3c3af9ffaa38538 127.0.0.1:9001@19001 master - 0 1640769962134 2 connected 5461-10922
e4256cad47611df0aa3f6a41c89d24b91c1aa89e 127.0.0.1:9000@19000 myself,master - 0 1640769961000 1 connected 0-5460
4) Actual test
1. Connect the client to the server
[root@dcdfa0e9eb71 9000]# redis-cli -c -p 9000
127.0.0.1:9000> set cluster success
-> Redirected to slot [14041] located at 127.0.0.1:9002
OK
127.0.0.1:9002> get cluster
"success"
127.0.0.1:9002>
You can see that this differs from a standalone write: the data is redirected to slot 14041 on the node that owns it.
- Connect the client to node 9003
[root@dcdfa0e9eb71 /]# redis-cli -c -p 9003
127.0.0.1:9003> get cluster
-> Redirected to slot [14041] located at 127.0.0.1:9002
"success"
127.0.0.1:9002>
You can see that when the client reads through 9003, the request is redirected to slot 14041 (on 9002) to fetch the data.
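With the -c flag, redis-cli follows the MOVED redirect for you; cluster-aware client libraries do the same transparently. A minimal sketch, assuming redis-py 4.1 or later, which ships redis.cluster.RedisCluster:

```python
from redis.cluster import RedisCluster

# Connect to any node; the client fetches the slot map and routes each command itself
rc = RedisCluster(host="127.0.0.1", port=9000, decode_responses=True)
rc.set("cluster", "success")   # routed to the node owning slot 14041 (127.0.0.1:9002 above)
print(rc.get("cluster"))       # "success", regardless of which node we first connected to
```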
5) Fault recovery
Slave library downtime test
[root@dcdfa0e9eb71 /]# redis-cli -p 9003 shutdown
After it stops, the other master nodes receive the failure message. Looking at the 9001 log file:
# Slave 9003 lost
85305:M 29 Dec 2021 17:33:55.819 # Connection with replica 127.0.0.1:9003 lost.
85305:M 29 Dec 2021 17:34:15.057 * Marking node 0071f328a2c0a6a58cfe373ce0d8c6137bf15e0c as failing (quorum reached).
# After 9003 restarts, the recovery information is received
85305:M 29 Dec 2021 17:36:08.901 * Clear FAIL state for node 0071f328a2c0a6a58cfe373ce0d8c6137bf15e0c: replica is reachable again.
85305:M 29 Dec 2021 17:36:08.919 * Replica 127.0.0.1:9003 asks for synchronization
85305:M 29 Dec 2021 17:36:08.923 * Partial resynchronization not accepted: Replication ID mismatch (Replica asked for 'e67366f3da7d5999da896d02ff440b6a58d8280d', my replication IDs are 'd05670eb8e506f706cec220981513baa93ece8d3' and '0000000000000000000000000000000000000000')
85305:M 29 Dec 2021 17:36:08.927 * Starting BGSAVE for SYNC with target: disk
85305:M 29 Dec 2021 17:36:08.943 * Background saving started by pid 86375
86375:C 29 Dec 2021 17:36:08.986 * DB saved on disk
86375:C 29 Dec 2021 17:36:08.994 * RDB: 0 MB of memory used by copy-on-write
85305:M 29 Dec 2021 17:36:09.060 * Background saving terminated with success
85305:M 29 Dec 2021 17:36:09.082 * Synchronization with replica 127.0.0.1:9003 succeeded
Main library downtime test
Stop main library 9000
[root@192 /]# redis-cli -p 9000 shutdown
View slave library 9005 logs
......
# Master library disconnected
85445:S 29 Dec 2021 17:37:39.203 * Connecting to MASTER 127.0.0.1:9000
85445:S 29 Dec 2021 17:37:39.207 * MASTER <-> REPLICA sync started
85445:S 29 Dec 2021 17:37:39.210 # Error condition on socket for SYNC: Connection refused
85445:S 29 Dec 2021 17:37:40.216 * Connecting to MASTER 127.0.0.1:9000
85445:S 29 Dec 2021 17:37:40.220 * MASTER <-> REPLICA sync started
85445:S 29 Dec 2021 17:37:40.224 # Error condition on socket for SYNC: Connection refused
85445:S 29 Dec 2021 17:37:41.229 * Connecting to MASTER 127.0.0.1:9000
85445:S 29 Dec 2021 17:37:41.235 * MASTER <-> REPLICA sync started
85445:S 29 Dec 2021 17:37:41.238 # Error condition on socket for SYNC: Connection refused
85445:S 29 Dec 2021 17:37:42.243 * Connecting to MASTER 127.0.0.1:9000
85445:S 29 Dec 2021 17:37:42.247 * MASTER <-> REPLICA sync started
85445:S 29 Dec 2021 17:37:42.251 # Error condition on socket for SYNC: Connection refused
85445:S 29 Dec 2021 17:37:42.750 * FAIL message received from abc2fa081d403c505b5d5127d3c3af9ffaa38538 about e4256cad47611df0aa3f6a41c89d24b91c1aa89e
85445:S 29 Dec 2021 17:37:42.758 # Start of election delayed for 589 milliseconds (rank #0, offset 952).
85445:S 29 Dec 2021 17:37:42.790 # Cluster state changed: fail
85445:S 29 Dec 2021 17:37:43.284 * Connecting to MASTER 127.0.0.1:9000
85445:S 29 Dec 2021 17:37:43.287 * MASTER <-> REPLICA sync started
85445:S 29 Dec 2021 17:37:43.290 # Error condition on socket for SYNC: Connection refused
85445:S 29 Dec 2021 17:37:43.392 # Starting a failover election for epoch 7.
85445:S 29 Dec 2021 17:37:43.407 # Failover election won: I'm the new master.
# Promoted to master
85445:S 29 Dec 2021 17:37:43.410 # configEpoch set to 7 after successful failover
85445:M 29 Dec 2021 17:37:43.413 * Discarding previously cached master state.
85445:M 29 Dec 2021 17:37:43.416 # Setting secondary replication ID to 7a352c533381b86b535eb666c7e9732ad531c23d, valid up to offset: 953. New replication ID is 27d71ccdd20522d54e8190f70c6bf1f64c454f92
85445:M 29 Dec 2021 17:37:43.420 # Cluster state changed: ok
Concepts
To use cluster mode, the relevant settings must be enabled; it is recommended to keep them in a separate configuration file.
cluster-enabled: whether to enable cluster mode
cluster-config-file: path of the cluster node configuration file (generated automatically after startup)
cluster-node-timeout: node timeout
Slot assignment
In a cluster, all keys are assigned to 16384 slots, and each database is responsible for processing some of them.
127.0.0.1:9001> cluster nodes
b8bf4748d7039934d4ba43fe558a464592958eff 127.0.0.1:9004@19004 slave f70b68962d8fb9b485cd420b1731b997feac12ad 0 1640770784000 3 connected
abc2fa081d403c505b5d5127d3c3af9ffaa38538 127.0.0.1:9001@19001 myself,master - 0 1640770785000 2 connected 5461-10922
e4256cad47611df0aa3f6a41c89d24b91c1aa89e 127.0.0.1:9000@19000 master,fail - 1640770645974 1640770641000 1 disconnected
1baffc136c23418c05bda6805614fc4a29d02004 127.0.0.1:9005@19005 master - 0 1640770784709 7 connected 0-5460
0071f328a2c0a6a58cfe373ce0d8c6137bf15e0c 127.0.0.1:9003@19003 slave abc2fa081d403c505b5d5127d3c3af9ffaa38538 0 1640770782000 2 connected
f70b68962d8fb9b485cd420b1731b997feac12ad 127.0.0.1:9002@19002 master - 0 1640770785716 3 connected 10923-16383
As you can see, slots 0-5460 were assigned to node 9000 when the cluster was initialized (in the output above they are now served by 9005 because of the failover test). Slots assigned at initialization are contiguous, but Redis does not actually require this: any slot can be assigned to any node.
Relationship between key and slot: Redis computes a CRC16 hash of the effective part of each key name and takes the remainder modulo 16384.
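The mapping can be reproduced in a few lines; a minimal sketch of CRC16 (the XMODEM variant used by Redis Cluster) modulo 16384, including the hash-tag rule: if the key contains a non-empty {...} section, only the part between the first '{' and the following '}' is hashed.

```python
def crc16(data: bytes) -> int:
    # CRC-16/XMODEM: polynomial 0x1021, initial value 0, no reflection
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ 0x1021) & 0xFFFF if crc & 0x8000 else (crc << 1) & 0xFFFF
    return crc

def key_slot(key: str) -> int:
    start = key.find("{")
    if start != -1:
        end = key.find("}", start + 1)
        if end > start + 1:                 # non-empty hash tag: hash only the tag
            key = key[start + 1:end]
    return crc16(key.encode()) % 16384

print(key_slot("cluster"))                                      # 14041, as in the walkthrough above
print(key_slot("user:{42}:name"), key_slot("user:{42}:age"))    # same slot thanks to the {42} tag
```

Keys sharing a hash tag such as {42} land in the same slot, which is what makes multi-key operations possible in a cluster.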
View the information of all nodes in the cluster
[root@dcdfa0e9eb71 redis]# redis-cli -p 9001
127.0.0.1:9001> cluster nodes
b8bf4748d7039934d4ba43fe558a464592958eff 127.0.0.1:9004@19004 slave f70b68962d8fb9b485cd420b1731b997feac12ad 0 1640770784000 3 connected
abc2fa081d403c505b5d5127d3c3af9ffaa38538 127.0.0.1:9001@19001 myself,master - 0 1640770785000 2 connected 5461-10922
e4256cad47611df0aa3f6a41c89d24b91c1aa89e 127.0.0.1:9000@19000 master,fail - 1640770645974 1640770641000 1 disconnected
1baffc136c23418c05bda6805614fc4a29d02004 127.0.0.1:9005@19005 master - 0 1640770784709 7 connected 0-5460
0071f328a2c0a6a58cfe373ce0d8c6137bf15e0c 127.0.0.1:9003@19003 slave abc2fa081d403c505b5d5127d3c3af9ffaa38538 0 1640770782000 2 connected
f70b68962d8fb9b485cd420b1731b997feac12ad 127.0.0.1:9002@19002 master - 0 1640770785716 3 connected 10923-16383
Check whether the cluster is enabled
127.0.0.1:9002> info cluster
# Cluster
cluster_enabled:1    # 1 means enabled
View slot allocation
127.0.0.1:9002> cluster slots
1) 1) (integer) 0
   2) (integer) 5460
   3) 1) "127.0.0.1"
      2) (integer) 9005
      3) "1baffc136c23418c05bda6805614fc4a29d02004"
2) 1) (integer) 5461
   2) (integer) 10922
   3) 1) "127.0.0.1"
      2) (integer) 9001
      3) "abc2fa081d403c505b5d5127d3c3af9ffaa38538"
   4) 1) "127.0.0.1"
      2) (integer) 9003
      3) "0071f328a2c0a6a58cfe373ce0d8c6137bf15e0c"
3) 1) (integer) 10923
   2) (integer) 16383
   3) 1) "127.0.0.1"
      2) (integer) 9002
      3) "f70b68962d8fb9b485cd420b1731b997feac12ad"
   4) 1) "127.0.0.1"
      2) (integer) 9004
      3) "b8bf4748d7039934d4ba43fe558a464592958eff"
Fault recovery
In a cluster, each node periodically sends ping messages to other nodes and judges whether the target is offline by whether a reply is received.
Specifically, each node in the cluster will randomly select five nodes every second, and then select the node that has not responded for the longest time to send the ping command.
If the node does not respond within a certain time, the node initiating the PING command will consider that the target node is suspected to be offline.
How to determine whether a node is offline?
1) Once node A thinks that node B is suspected of going offline, it will broadcast the message in the cluster, and all other nodes will record this information after receiving the message;
2) When a node C in the cluster receives that more than half of the nodes think that B is suspected to be offline, it will mark B as offline and spread the information to other nodes in the cluster, so as to make B offline in the whole cluster.
- In the cluster, when a primary database goes offline, some slots cannot be written. If the master database has at least one slave database, the cluster will perform fault recovery to switch one of the slave databases to the master database.
| Service type | Primary server? | IP address | Port  |
|--------------|-----------------|------------|-------|
| Redis-server | yes             | 127.0.0.1  | 6379  |
| Redis-server | no              | 127.0.0.1  | 6380  |
| Redis-server | no              | 127.0.0.1  | 6381  |
| Sentinel     | -               | 127.0.0.1  | 26379 |
| Sentinel     | -               | 127.0.0.1  | 26380 |
| Sentinel     | -               | 127.0.0.1  | 26381 |