Redis learning notes

Redis

Redis (REmote DIctionary Server) is a key-value storage system originally written by Salvatore Sanfilippo.

Redis is an open-source, log-structured key-value database written in ANSI C and released under the BSD license. It supports networking, is memory-based with optional persistence, and provides APIs in many languages.

It is often called a data structure server because values can be of types such as strings, hashes, lists, sets, and sorted sets.

NoSQL

Redis is a NoSQL (non-relational) database.

NoSQL evolution history reference: https://www.cnblogs.com/lukelook/p/11135209.html

NoSQL overview

NoSQL (Not Only SQL) means "not just SQL" and generally refers to non-relational databases. With the rise of Web 2.0 sites, traditional relational databases struggled to cope, especially with super-large-scale, highly concurrent, purely dynamic SNS-type Web 2.0 sites, exposing many insurmountable problems, while non-relational databases developed very rapidly thanks to their own characteristics. NoSQL databases arose to solve the challenges posed by large-scale data collections and diverse data types, especially in big-data applications, including the storage of very large data sets.

(For example, Google and Facebook collect trillions of bits of data from their users every day.) These data stores do not need a fixed schema and can scale horizontally without extra work.

Scalability bottleneck of MySQL: MySQL databases often store large text fields, which makes tables very large and database recovery very slow. For example, 10 million 4KB texts come to nearly 40GB; if that data could be moved out of MySQL, the database would become very small. Relational databases are powerful, but they cannot cover every scenario: poor horizontal scalability (achievable only with complex techniques), high IO pressure under big data, and difficult table-structure changes are the problems developers using MySQL face today.

NoSQL representatives: MongoDB, Redis, Memcached

NoSQL features: decoupling!

  1. Easy to scale out (there are no relationships between data, so scaling is easy!)

  2. Large data volumes with high performance (Redis can handle roughly 110,000 reads and 81,000 writes per second; NoSQL caches at the record level, a fine-grained cache, so performance is high)

  3. Diverse data types (no need to design the database schema in advance; use it as needed)

  4. Traditional RDBMS vs NoSQL

    Traditional RDBMS
    - Structured organization
    - SQL
    - Data and relationships stored in separate tables
    - Data definition language
    - Strict consistency
    - ...
    
    NoSQL
    - Not just data
    - No fixed query language
    - Key-value storage, column storage, document storage, graph databases
    - Eventual consistency
    - CAP theorem and BASE theory (multi-site active-active)
    - High performance, high availability, high scalability
    - ...
    

Four categories of NoSQL

KV key-value pairs:

  • Sina: Redis
  • Meituan: Redis+Tair
  • Alibaba, Baidu: Redis + Memcache

Document database (bson format, similar to json):

  • MongoDB
    • MongoDB is a database based on distributed file storage, written in C++, mainly used to handle large numbers of documents.
    • MongoDB sits between relational and non-relational databases; among non-relational databases it is the most feature-rich and the most like a relational database.
  • CouchDB

Column storage database

  • HBase
  • Distributed file systems

Graph database

  • It stores relationships rather than pictures, for example friend circles and social networks
  • Neo4j,infoGrid
Classification, examples, typical application scenarios, data model, advantages, and shortcomings:

Key-value store
  Examples: Tokyo Cabinet/Tyrant, Redis, Voldemort, Oracle BDB
  Typical scenarios: content caching, mainly for high access load on large amounts of data, and some log systems
  Data model: key-value pairs where a key points to a value, usually implemented with a hash table
  Advantages: fast lookups
  Shortcomings: data is unstructured and usually treated only as strings or binary data

Column storage database
  Examples: Cassandra, HBase, Riak
  Typical scenarios: distributed file systems
  Data model: stored in column families, keeping the same column of data together
  Advantages: fast lookups, strong scalability, easier distributed expansion
  Shortcomings: relatively limited functionality

Document database
  Examples: CouchDB, MongoDB
  Typical scenarios: web applications (similar to key-value, but the value is structured and the database can understand its content)
  Data model: key-value pairs where the value is structured data
  Advantages: loose data-structure requirements; the table structure is variable and need not be defined in advance as in a relational database
  Shortcomings: query performance is not high, and there is no unified query syntax

Graph database
  Examples: Neo4J, InfoGrid, Infinite Graph
  Typical scenarios: social networks, recommendation systems, etc.; focused on building relationship graphs
  Data model: graph structure
  Advantages: graph-structure algorithms, such as shortest-path addressing and N-degree relationship search
  Shortcomings: often the whole graph must be computed to get the required information, and the structure does not lend itself to distributed clustering

Getting started with Redis

summary

Redis (Remote Dictionary Server), i.e. a remote dictionary service, is an open-source key-value database written in ANSI C that supports networking, is memory-based with persistent logging, and provides APIs in many languages.

Like Memcached, Redis caches data in memory to ensure efficiency. The difference is that Redis periodically writes the updated data to disk, or appends each modification operation to a log file, and on that basis also implements master-slave synchronization.

Free and open source, it is one of the most popular NoSQL technologies; it is also called a data-structure database.

What can Redis do:

  1. Memory storage with persistence; memory contents are lost on power-off, so persistence is very important (RDB, AOF).
  2. It is efficient and can be used for caching.
  3. Publish and subscribe system.
  4. Geographic (map) information analysis.
  5. Timers, counters (e.g. view counts).
  6. ...

Features:

  1. Support for multiple languages
  2. Persistence
  3. Clustering
  4. Transactions
  5. ...

Redis Chinese website: https://www.redis.net.cn/

Windows setup

  1. Download the installation package (on github)
  2. Unzip the package
  3. Start Redis by double-clicking redis-server.exe
  4. Use the Redis client to connect to the server (ping to test the connection; set name zr to store a value; get name to read it)

It is very easy to use under Windows, but Redis officially recommends Linux for development and deployment.
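A quick sanity check on Windows might look like this (a sketch; the directory and prompt are illustrative):

C:\redis> redis-cli.exe -p 6379
127.0.0.1:6379> ping
PONG
127.0.0.1:6379> set name zr
OK
127.0.0.1:6379> get name
"zr"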

Linux Installation

  1. Download the installation package (download from the official website)

  2. Unzip the Redis archive into the target directory

  3. After entering the extracted file, you can see the Redis configuration file (redis.conf)

  4. Environment installation

    [root@zhourui redis-6.0.9]# yum install gcc-c++
    
    [root@zhourui redis-6.0.9]# make
    
    [root@zhourui redis-6.0.9]# make install
    
    (Running make a second time just confirms everything is built; make install then copies the binaries.)

  5. Default installation path of Redis: /usr/local/bin

  6. Copy the redis configuration file (redis.conf) to the current directory

    [root@zhourui bin]# mkdir zconfig
    
    [root@zhourui bin]# cp /www/server/redis/redis-6.0.9/redis.conf zconfig/
    
    
  7. redis does not run in the background by default; edit the configuration file and change daemonize no to yes

  8. Start the Redis service through the specified configuration file

    [root@zhourui bin]# redis-server zconfig/redis.conf 
    
    
  9. Start the client: redis-cli

    [root@zhourui bin]# redis-cli -p 6379
    127.0.0.1:6379> ping  #Test connection
    PONG
    127.0.0.1:6379> set name zr  #Stored value
    OK
    127.0.0.1:6379> get name  #Value
    "zr"
    127.0.0.1:6379> keys *  #View all keys
    1) "name"
    127.0.0.1:6379> 
    
    
  10. Check whether the redis process is started

  11. To close the redis service, enter shutdown in a connected client

  12. Check again whether the process still exists: ps -ef|grep redis

  13. A single machine can run multiple Redis instances by configuring different ports (see the sketch below)
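For example (a sketch: the second config file name and port 6380 are assumptions), copy the config, change port 6379 to 6380 inside it, and start a second instance:

[root@zhourui bin]# cp zconfig/redis.conf zconfig/redis6380.conf   # then edit it and set "port 6380"
[root@zhourui bin]# redis-server zconfig/redis6380.conf
[root@zhourui bin]# redis-cli -p 6380 ping
PONG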

performance testing

redis-benchmark is the official Redis stress-testing tool.

Its optional parameters are as follows:

Option   Description                                                Default
-h       Specify the server hostname                                127.0.0.1
-p       Specify the server port                                    6379
-s       Specify the server socket
-c       Number of parallel connections                             50
-n       Total number of requests                                   10000
-d       Data size of SET/GET values, in bytes                      2
-k       1 = keep alive, 0 = reconnect                              1
-r       Use random keys for SET/GET/INCR, random values for SADD
-P       Pipeline requests                                          1
-q       Quiet mode: show only the query/sec values
--csv    Output in CSV format
-l       Loop: run the tests forever
-t       Run only a comma-separated list of test commands
-I       Idle mode: open only N idle connections and wait

Test: after redis is started, open a new window and execute

# Test: 100 concurrent connections, 100000 requests
redis-benchmark -h localhost -p 6379 -c 100 -n 100000
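For instance, to run only the SET and GET tests and print just the throughput numbers, the -t and -q options from the table above can be combined (a sketch):

# Only SET/GET, quiet output
redis-benchmark -h 127.0.0.1 -p 6379 -t set,get -n 100000 -q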

Basic knowledge

Redis has 16 databases by default (you can see from redis.conf).

The default is 0.

You can use select to switch.

flushdb: clear the current database; flushall: clear all databases; dbsize: database size; keys *: view all keys.

127.0.0.1:6379> select 3  #Switch database
OK
127.0.0.1:6379[3]> dbsize
(integer) 0
127.0.0.1:6379[3]> set name zhou
OK
127.0.0.1:6379[3]> dbsize  #Size of the database
(integer) 1
127.0.0.1:6379[3]> select 6
OK
127.0.0.1:6379[6]> dbsize
(integer) 0
127.0.0.1:6379[6]> get name
(nil)
127.0.0.1:6379[6]> select 3
OK
127.0.0.1:6379[3]> get name
"zhou"
127.0.0.1:6379[3]> flushdb  #clear database 
OK
127.0.0.1:6379[3]> get name
(nil)
127.0.0.1:6379> flushall  #Empty all databases
OK
127.0.0.1:6379[3]> 

Redis is single threaded!

Officially, Redis is based on in-memory operations, so the CPU is not its performance bottleneck; the bottlenecks are the machine's memory and network bandwidth. Since single-threading is sufficient, single-threading is used.

Redis is written in C. The official figure is 100,000+ QPS, no worse than the key-value store Memcached!

Redis is a single thread. Why is it so fast?

Myth 1: a high-performance server must be multithreaded.

Myth 2: multithreading (with CPU context switching) must be more efficient than a single thread.

Core point: Redis keeps all data in memory, so operating with a single thread is the most efficient. Multithreading requires CPU context switches, a time-consuming operation; for an in-memory system, avoiding context switches gives the highest efficiency, with all reads and writes handled on one CPU. In the in-memory case this is the best solution.

Five data types

Redis is an open-source (BSD-licensed) in-memory data structure store, used as a database, cache, and message broker. It supports many data structures, such as strings, hashes, lists, sets, sorted sets with range queries, bitmaps, hyperloglogs, and geospatial indexes with radius queries. Redis has built-in replication, Lua scripting, LRU eviction, transactions, and different levels of on-disk persistence, and provides high availability via Redis Sentinel and automatic partitioning with Redis Cluster.

Redis-Key

127.0.0.1:6379> exists name  #Determine whether the key exists
(integer) 1
127.0.0.1:6379> move name 1  #Move the key name to database 1
(integer) 1
127.0.0.1:6379> set name zzrr
OK
127.0.0.1:6379> keys *  #View all keys
1) "name"
2) "age"
127.0.0.1:6379> expire name 10  #Set the expiration time of key in seconds
(integer) 1
127.0.0.1:6379> keys *
1) "name"
2) "age"
127.0.0.1:6379> ttl name  #View the remaining time of the current key
(integer) 6
127.0.0.1:6379> ttl name
(integer) 2
127.0.0.1:6379> keys *
1) "age"
127.0.0.1:6379> type name  #View the type of current key
string
127.0.0.1:6379> type age
string
127.0.0.1:6379> 

String (string)

###############################################################################
127.0.0.1:6379> set name zr  #Set value
OK
127.0.0.1:6379> get name
"zr"
127.0.0.1:6379> keys *  #View all keys
1) "name"
127.0.0.1:6379> append name "hello"  #Append string. If the current key does not exist, it is equivalent to set key
(integer) 7
127.0.0.1:6379> get name
"zrhello"
127.0.0.1:6379> strlen name  #View the length of the string
(integer) 7
127.0.0.1:6379> 

###############################################################################
# increment
127.0.0.1:6379> set views 0  #The initial value is 0
OK
127.0.0.1:6379> get views  
"0"
127.0.0.1:6379> incr views  #Increment by 1
(integer) 1
127.0.0.1:6379> incr views
(integer) 2
127.0.0.1:6379> get views  
"2"
127.0.0.1:6379> 
127.0.0.1:6379> decr views  #Decrement by 1
(integer) 1
127.0.0.1:6379> get views
"1"
127.0.0.1:6379> incrby views 10  #Set the step size and specify the increment
(integer) 11
127.0.0.1:6379> decrby views 10  #Set the step size and specify the decrement
(integer) 1
127.0.0.1:6379> 

###############################################################################
# String range
127.0.0.1:6379> flushdb  #clear database 
OK
127.0.0.1:6379> set name zhour  #Set value
OK
127.0.0.1:6379> get name  #Get value
"zhour"
127.0.0.1:6379> getrange name 0 3  #Intercept string [0,3]
"zhou"
127.0.0.1:6379> getrange name 0 -1  #Get the whole string, same as get name
"zhour"
127.0.0.1:6379> 

# replace
127.0.0.1:6379> set key2 abcdefg
OK
127.0.0.1:6379> get key2
"abcdefg"
127.0.0.1:6379> setrange key2 1 xx  #Replace the string starting at the specified position
(integer) 7
127.0.0.1:6379> get key2
"axxdefg"

###############################################################################
# setex (set with expire) sets the expiration time
# setnx (set if not exist) does not exist. It is often used in distributed locks

127.0.0.1:6379> setex key3 30 "hello"   #Set the value of key3 to hello and expire in 30 seconds
OK
127.0.0.1:6379> ttl key3  #See how long it will expire
(integer) 25
127.0.0.1:6379> setnx key4 "redis"  #If key4 does not exist, create key4
(integer) 1
127.0.0.1:6379> setnx key4 "MongoDB"  #If key4 exists, the creation fails
(integer) 0
127.0.0.1:6379> get key4
"redis"
127.0.0.1:6379> 

###############################################################################
# mset
# mget
127.0.0.1:6379> mset k1 v1 k2 v2 k3 v3  #Set multiple values at the same time
OK
127.0.0.1:6379> keys *
1) "k2"
2) "k1"
3) "k3"
127.0.0.1:6379> mget k1 k2 k3  #Get multiple values at the same time
1) "v1"
2) "v2"
3) "v3"
127.0.0.1:6379> msetnx k1 v1 k4 v4  #msetnx is an atomic operation that either succeeds or fails together
(integer) 0
127.0.0.1:6379> get k4
(nil)

# object
set user:1 {name:zhour,age:3}  #Set a user:1 object value to json string to save an object

# user:{id}:{filed}. This design is possible in redis
127.0.0.1:6379> mset user:1:name zhour user:1:age 3
OK
127.0.0.1:6379> mget user:1:name user:1:age
1) "zhour"
2) "3"
127.0.0.1:6379> 

###############################################################################
# getset: get the old value, then set a new one
127.0.0.1:6379> getset db redis  #nil if no value exists
(nil)
127.0.0.1:6379> get db
"redis"
127.0.0.1:6379> getset db mongodb  #If a value exists, get the original value and set a new value
"redis"
127.0.0.1:6379> get db
"mongodb"
127.0.0.1:6379> 

Usage scenarios for the String type: the value can be not only a string but also a number (a counter sketch follows the list).

  • Counter
  • Count the quantity of multiple units
  • Number of fans
  • Object cache storage
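A minimal counter sketch (the key name article:1000:views is an assumption):

127.0.0.1:6379> incr article:1000:views  # one more view
(integer) 1
127.0.0.1:6379> incrby article:1000:views 10  # batch increment
(integer) 11
127.0.0.1:6379> get article:1000:views
"11"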

List

Basic data types, lists.

In Redis, lists can be used as stacks, queues, and blocking queues.

Most list commands start with l (or r for the right end). Redis commands are not case sensitive.

###############################################################################
# lpush
127.0.0.1:6379> lpush list one  #Insert one or more values into the head of the list (left)
(integer) 1
127.0.0.1:6379> lpush list two  
(integer) 2
127.0.0.1:6379> lpush list three
(integer) 3
127.0.0.1:6379> lrange list 0 -1  #Get the value of list
1) "three"
2) "two"
3) "one"
127.0.0.1:6379> lrange list 0 1  #Get the specific value in the interval
1) "three"
2) "two"
127.0.0.1:6379> rpush list right  #Insert one or more values at the end of the list (right)
(integer) 4
127.0.0.1:6379> lrange list 0 -1
1) "three"
2) "two"
3) "one"
4) "right"
127.0.0.1:6379> 

###############################################################################
# lpop
# rpop
127.0.0.1:6379> lrange list 0 -1
1) "three"
2) "two"
3) "one"
4) "right"
127.0.0.1:6379> lpop list  #Remove the first element of the list
"three"
127.0.0.1:6379> rpop list  #Removes the last element of the list
"right"
127.0.0.1:6379> lrange list 0 -1
1) "two"
2) "one"
127.0.0.1:6379> 

###############################################################################
# lindex
127.0.0.1:6379> lindex list 0  #Gets the value of the specified subscript through the subscript
"two"
127.0.0.1:6379> lindex list 1
"one"
127.0.0.1:6379> 

###############################################################################
#  llen
127.0.0.1:6379> lrange list 0 -1
1) "two"
2) "one"
127.0.0.1:6379> llen list  #Gets the length of the list
(integer) 2
127.0.0.1:6379> 

###############################################################################
# lrem removes the specified value
127.0.0.1:6379> lrange list 0 -1
1) "four"
2) "four"
3) "three"
4) "two"
5) "one"
127.0.0.1:6379> lrem list 1 one  #Remove the specified number of value s in the list set and match them exactly
(integer) 1
127.0.0.1:6379> lrem list 1 four
(integer) 1
127.0.0.1:6379> lrange list 0 -1
1) "four"
2) "three"
3) "two"
127.0.0.1:6379> lpush list four
(integer) 4
127.0.0.1:6379> lrem list 2 four  #Remove the two four in the list
(integer) 2
127.0.0.1:6379> lrange list 0 -1
1) "three"
2) "two"

###############################################################################
# ltrim: trim the list
127.0.0.1:6379> rpush mylist hello
(integer) 1
127.0.0.1:6379> rpush mylist hello1
(integer) 2
127.0.0.1:6379> rpush mylist hello12
(integer) 3
127.0.0.1:6379> rpush mylist hello13
(integer) 4
127.0.0.1:6379> ltrim mylist 1 2  #Intercept the specified length through the subscript, leaving only the intercepted elements
OK
127.0.0.1:6379> lrange mylist 0 -1
1) "hello1"
2) "hello12"
127.0.0.1:6379> 

###############################################################################
# rpoplpush removes the last element of a list and appends it to another list
127.0.0.1:6379> rpush mylist hello
(integer) 1
127.0.0.1:6379> rpush mylist hello1
(integer) 2
127.0.0.1:6379> rpush mylist hello2
(integer) 3
127.0.0.1:6379> rpoplpush mylist myotherlist  #Removes the last element of the list and adds it to a new list
"hello2"
127.0.0.1:6379> lrange mylist 0 -1  #View the original list
1) "hello"
2) "hello1"
127.0.0.1:6379> lrange myotherlist 0 -1  #View new list
1) "hello2"
127.0.0.1:6379> 

###############################################################################
# lset replaces the value of the specified subscript in the list with another value
127.0.0.1:6379> exists list  #Determine whether this list exists
(integer) 0
127.0.0.1:6379> lset list 0 item  #If there is no list, an error will be reported if you update it
(error) ERR no such key
127.0.0.1:6379> lpush list value1
(integer) 1
127.0.0.1:6379> lrange list 0 0
1) "value1"
127.0.0.1:6379> lset list 0 item  #If it exists, the value of the current subscript is updated
OK
127.0.0.1:6379> lrange list 0 0
1) "item"
127.0.0.1:6379> lset list 1 other  #If this subscript does not exist, an error will be reported
(error) ERR index out of range

###############################################################################
# linsert inserts a specific value before or after an element
127.0.0.1:6379> rpush mylist hello
(integer) 1
127.0.0.1:6379> rpush mylist world
(integer) 2
127.0.0.1:6379> linsert mylist before world other
(integer) 3
127.0.0.1:6379> lrange mylist 0 -1
1) "hello"
2) "other"
3) "world"
127.0.0.1:6379> linsert mylist after  world new
(integer) 4
127.0.0.1:6379> lrange mylist 0 -1
1) "hello"
2) "other"
3) "world"
4) "new"

List: it is actually a linked list; values can be inserted before or after a node, at the left or the right end.

  • If the key does not exist, a new list is created
  • If the key exists, content is appended
  • If all values are removed, the empty list means the key no longer exists
  • Inserting or updating at either end is most efficient; operating on middle elements is relatively slow

Message queue (lpush + rpop), stack (lpush + lpop). A sketch of both follows.
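A quick sketch of both patterns (the key names q and s are assumptions): a queue pushes on one end and pops from the other, while a stack pushes and pops on the same end.

127.0.0.1:6379> lpush q a b c  # queue: lpush + rpop = FIFO
(integer) 3
127.0.0.1:6379> rpop q  # "a" went in first and comes out first
"a"
127.0.0.1:6379> lpush s a b c  # stack: lpush + lpop = LIFO
(integer) 3
127.0.0.1:6379> lpop s  # "c" went in last and comes out first
"c"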

Set

The value in set cannot be repeated!

###############################################################################
# Stored value
127.0.0.1:6379> sadd myset hello  #Save value in set
(integer) 1
127.0.0.1:6379> sadd myset hellozr
(integer) 1
127.0.0.1:6379> sadd myset hellozhou
(integer) 1
127.0.0.1:6379> smembers myset  #View all values of the specified set
1) "hellozr"
2) "hello"
3) "hellozhou"
127.0.0.1:6379> sismember myset hello  #Judge whether an element exists in the set, and return 1
(integer) 1
127.0.0.1:6379> sismember myset world  #If this element does not exist, it will return 0
(integer) 0
127.0.0.1:6379> 

###############################################################################
127.0.0.1:6379> scard myset  #Gets the number of elements in the set
(integer) 3

###############################################################################
# srem
127.0.0.1:6379> srem myset hello  # Removes the specified element from the set
(integer) 1
127.0.0.1:6379> scard myset
(integer) 2
127.0.0.1:6379> smembers myset  #View elements in set
1) "hellozr"
2) "hellozhou"
127.0.0.1:6379> 

###############################################################################
# Set unordered non repeating set 
127.0.0.1:6379> sadd myset zhourr
(integer) 1
127.0.0.1:6379> smembers myset
1) "zhourr"
2) "hellozr"
3) "hellozhou"
127.0.0.1:6379> srandmember myset  #Select an element at random
"hellozr"
127.0.0.1:6379> srandmember myset
"hellozhou"
127.0.0.1:6379> srandmember myset
"hellozr"
127.0.0.1:6379> srandmember myset
"hellozr"
127.0.0.1:6379> 

###############################################################################
# spop: randomly remove elements
127.0.0.1:6379> smembers myset
1) "zhourr"
2) "hellozr"
3) "hellozhou"
127.0.0.1:6379> spop myset  #Randomly remove elements
"hellozr"
127.0.0.1:6379> spop myset
"hellozhou"
127.0.0.1:6379> smembers myset
1) "zhourr"
127.0.0.1:6379> 

###############################################################################
# smove: move a specified value to another set
127.0.0.1:6379> sadd myset hello
(integer) 1
127.0.0.1:6379> sadd myset world
(integer) 1
127.0.0.1:6379> sadd myset zhour
(integer) 1
127.0.0.1:6379> sadd myset2 set2
(integer) 1
127.0.0.1:6379> smove myset myset2 zhour   # Move the specified value into another set
(integer) 1
127.0.0.1:6379> SMEMBERS myset
1) "hello"
2) "world"
127.0.0.1:6379> SMEMBERS myset2
1) "zhour"
2) "set2"
127.0.0.1:6379> 

###############################################################################
# Common follows (intersection)
# SDIFF difference set
# SINTER intersection
# Union of SUNION
127.0.0.1:6379> sadd key1 a
(integer) 1
127.0.0.1:6379> sadd key1 b
(integer) 1
127.0.0.1:6379> sadd key1 c
(integer) 1
127.0.0.1:6379> sadd key2 c
(integer) 1
127.0.0.1:6379> sadd key2 d
(integer) 1
127.0.0.1:6379> sadd key2 e
(integer) 1
127.0.0.1:6379> SDIFF key1 key2  # Difference set
1) "a"
2) "b"
127.0.0.1:6379> SINTER key1 key2  # Meet common friends
1) "c"
127.0.0.1:6379> SUNION key1 key2  # Union
1) "a"
2) "b"
3) "c"
4) "e"
5) "d"
127.0.0.1:6379> 

You can put everything A follows in one set and A's fans in another set!

Common friends, common hobbies, second-degree friends, friend recommendations! (six degrees of separation) A sketch follows.
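A sketch of "common follows" (the key names follow:a and follow:b are assumptions):

127.0.0.1:6379> sadd follow:a userB userC userD  # everyone a follows
(integer) 3
127.0.0.1:6379> sadd follow:b userC userD userE  # everyone b follows
(integer) 3
127.0.0.1:6379> sinter follow:a follow:b  # accounts both a and b follow
1) "userC"
2) "userD"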

Hash (hash)

A hash is a map collection: the key maps to a map of field-value pairs, so the value is itself a small key-value set! In essence it is not very different from the String type, just a key with structured values.

127.0.0.1:6379> hset myhash field1 zhourr  # set a specific key value
(integer) 1
127.0.0.1:6379> hget myhash field1  # Get a field value
"zhourr"
127.0.0.1:6379> hmset myhash field1 hello field2 world  # set multiple key value s
OK
127.0.0.1:6379> hmget myhash field1 field2  # Get multiple field values
1) "hello"
2) "world"
127.0.0.1:6379> hgetall myhash  # Get all the data key value
1) "field1"
2) "hello"
3) "field2"
4) "world"
127.0.0.1:6379> 

###############################################################################

127.0.0.1:6379> hdel myhash field1  # Delete the key field specified by hash, and the corresponding value value will also be deleted
(integer) 1
127.0.0.1:6379> hgetall myhash
1) "field2"
2) "world"
127.0.0.1:6379> 

###############################################################################
# hlen 
127.0.0.1:6379> hmset myhash field1 hello field2 world
OK
127.0.0.1:6379> hgetall myhash
1) "field2"
2) "world"
3) "field1"
4) "hello"
127.0.0.1:6379> hlen myhash  # Get the number of fields in the hash table
(integer) 2
127.0.0.1:6379> 

###############################################################################
# Judge whether the specified field exists in the hash
127.0.0.1:6379> HEXISTS myhash field1  #Judge whether the specified field exists in the hash
(integer) 1
127.0.0.1:6379> HEXISTS myhash field3
(integer) 0

###############################################################################
#Get all fields
#Get all values
127.0.0.1:6379> hkeys myhash  #Get all fields
1) "field2"
2) "field1"
127.0.0.1:6379> hvals myhash  #Get all values
1) "world"
2) "hello"
127.0.0.1:6379> 

###############################################################################
# hincrby
127.0.0.1:6379> hset myhash field3 5  # Set an initial value
(integer) 1
127.0.0.1:6379> HINCRBY myhash field3 2  
(integer) 7
127.0.0.1:6379> HINCRBY myhash field3 -3
(integer) 4
127.0.0.1:6379> hsetnx myhash field4 hello  # If it does not exist, it can be set
(integer) 1
127.0.0.1:6379> hsetnx myhash field4 world  # If present, the settings are not available
(integer) 0
127.0.0.1:6379> 

Hash suits frequently changing object data, such as a user's name and age: user info or other often-modified fields. Hash is more suitable for storing objects; String is more suitable for storing strings. A sketch follows.
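A sketch of object storage with a hash (the key name user:2 is an assumption):

127.0.0.1:6379> hmset user:2 name zhour age 3
OK
127.0.0.1:6379> hgetall user:2
1) "name"
2) "zhour"
3) "age"
4) "3"
127.0.0.1:6379> hincrby user:2 age 1  # update one field without touching the others
(integer) 4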

Zset (ordered set)

Zset adds a score to set: where set is k v, zset is k score v.

###############################################################################
127.0.0.1:6379> zadd myset 1 one  #Add a value
(integer) 1
127.0.0.1:6379> zadd myset 2 two 3 three  # Add multiple values
(integer) 2
127.0.0.1:6379> zrange myset 0 -1  # Get all values
1) "one"
2) "two"
3) "three"
127.0.0.1:6379> 

###############################################################################
# sort
127.0.0.1:6379> zadd salary 2500 xiaohong  # Add three users
(integer) 1
127.0.0.1:6379> zadd salary 5000 zhangsan
(integer) 1
127.0.0.1:6379> zadd salary 4000 zhour
(integer) 1
127.0.0.1:6379> ZRANGEBYSCORE salary -inf +inf  #Displays all users, sorted from negative infinity to positive infinity
1) "xiaohong"
2) "zhour"
3) "zhangsan"
127.0.0.1:6379> ZREVRANGE salary 0 -1  # Sort from large to small
1) "zhangsan"
2) "zhour"
3) "xiaohong"
127.0.0.1:6379> ZRANGEBYSCORE salary -inf +inf withscores  # Ascending, with scores
1) "xiaohong"
2) "2500"
3) "zhour"
4) "4000"
5) "zhangsan"
6) "5000"
127.0.0.1:6379> ZRANGEBYSCORE salary -inf 4000 withscores  # Ranking of employees below 4000
1) "xiaohong"
2) "2500"
3) "zhour"
4) "4000"

127.0.0.1:6379> ZREVRANGEBYSCORE salary +inf -inf  # Sort from large to small
1) "zhangsan"
2) "zhour"
3) "xiaohong"

###############################################################################
# Removing Elements 
127.0.0.1:6379> zrange salary 0 -1
1) "xiaohong"
2) "zhour"
3) "zhangsan"
127.0.0.1:6379> zrem salary xiaohong  # Removing Elements 
(integer) 1
127.0.0.1:6379> zrange salary 0 -1
1) "zhour"
2) "zhangsan"
127.0.0.1:6379> zcard salary  # Get the number of members in the sorted set
(integer) 2
127.0.0.1:6379> 


###############################################################################
# Gets the number of members in the specified interval
127.0.0.1:6379> zadd myset 1 hello
(integer) 1
127.0.0.1:6379> zadd myset 2 world 3 zhour
(integer) 2
127.0.0.1:6379> zcount myset 1 3  # Gets the number of members in the specified interval
(integer) 3


Zset use cases: sorted data such as class grade tables and salary tables; weighted data for judging importance; leaderboards taking the top N (a sketch follows).
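A leaderboard sketch (the key name board and the player names are assumptions):

127.0.0.1:6379> zadd board 100 playerA 200 playerB 150 playerC
(integer) 3
127.0.0.1:6379> zrevrange board 0 2 withscores  # top 3, highest score first
1) "playerB"
2) "200"
3) "playerC"
4) "150"
5) "playerA"
6) "100"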

For more operations, please check the official documents!!!

Three special data types

Geospatial (geographical location)

Friends' locations, people nearby, taxi distance calculation.

Redis GEO was introduced in version 3.2! It can compute geographic information: the distance between two places, people nearby, and so on.

Online geographic location information: http://www.jsons.cn/lngcode/

Related commands

geoadd:

# Add geographic location
# The two poles cannot be added directly. We usually download city data and import it in one batch via a Java program!
# Add the specified geospatial location (longitude, latitude, name) to the specified key
127.0.0.1:6379> geoadd china:city 116.40 39.90 beijing
(integer) 1
127.0.0.1:6379> geoadd china:city 121.47 31.23 shanghai
(integer) 1
127.0.0.1:6379> geoadd china:city 106.50 29.53 chongqin 114.05 22.52 shengzhen
(integer) 2
127.0.0.1:6379> geoadd china:city 120.16 30.34 hangzhou 108.96 34.26 xian
(integer) 2
127.0.0.1:6379> 

  • The effective longitude is from - 180 degrees to 180 degrees.
  • The effective latitude ranges from -85.05112878 degrees to 85.05112878 degrees.

When the coordinate position exceeds the above specified range, the command will return an error.

geopos: obtain the current position; the result is a coordinate pair

# Gets the longitude and latitude of the specified city
127.0.0.1:6379> geopos china:city beijing
1) 1) "116.39999896287918091"
   2) "39.90000009167092543"
127.0.0.1:6379> geopos china:city beijing chongqin
1) 1) "116.39999896287918091"
   2) "39.90000009167092543"
2) 1) "106.49999767541885376"
   2) "29.52999957900659211"
127.0.0.1:6379> 

geodist:

Returns the distance between two given positions.

If one of the two locations does not exist, the command returns a null value.

The parameter unit of the specified unit must be one of the following units:

  • m is expressed in meters.
  • km is expressed in kilometers.
  • mi is expressed in miles.
  • ft is expressed in feet.

If you do not explicitly specify the unit parameter, GEODIST defaults to meters.

127.0.0.1:6379> geodist china:city beijing shanghai  # The linear distance from Beijing to Shanghai is in meters
"1067378.7564"
127.0.0.1:6379> geodist china:city beijing shanghai km  # The linear distance from Beijing to Shanghai is in kilometers
"1067.3788"
127.0.0.1:6379> geodist china:city beijing chongqin km  # The linear distance from Beijing to Chongqing is in kilometers
"1464.0708"
127.0.0.1:6379> 

georadius: find the elements within a given radius, centered on the given longitude and latitude.

Nearby people (get the address and location of nearby people) query through the radius.

127.0.0.1:6379> georadius china:city 110 30 1000 km  # Obtain the city with 110, 30 as the center and 1000 km as the radius
1) "chongqin"
2) "xian"
3) "shengzheng"
4) "hangzhou"
127.0.0.1:6379> georadius china:city 110 30 500 km  # Obtain the city with 110, 30 as the center and 500 km as the radius
1) "chongqin"
2) "xian"
127.0.0.1:6379> georadius china:city 110 30 500 km withdist  # Displays the distance to the center point
1) 1) "chongqin"
   2) "341.9374"
2) 1) "xian"
   2) "483.8340"
127.0.0.1:6379> georadius china:city 110 30 500 km withcoord  # Displays the location information of cities within the range
1) 1) "chongqin"
   2) 1) "106.49999767541885376"
      2) "29.52999957900659211"
2) 1) "xian"
   2) 1) "108.96000176668167114"
      2) "34.25999964418929977"
127.0.0.1:6379> georadius china:city 110 30 500 km withdist withcoord count 1  # Limit the output to 1 result
1) 1) "chongqin"
   2) "341.9374"
   3) 1) "106.49999767541885376"
      2) "29.52999957900659211"
127.0.0.1:6379> 

GEORADIUSBYMEMBER:

127.0.0.1:6379> GEORADIUSBYMEMBER china:city beijing 1000 km  #Other locations within the specified location range
1) "beijing"
2) "xian"
127.0.0.1:6379> GEORADIUSBYMEMBER china:city shanghai 400 km
1) "hangzhou"
2) "shanghai"
127.0.0.1:6379> 

geohash: returns the geohash representation of one or more position elements.

This command will return an 11 character Geohash string.

# Convert two-dimensional longitude and latitude into one-dimensional string (the closer the two strings are, the closer the distance is)
127.0.0.1:6379> geohash china:city beijing chongqin
1) "wx4fbxxfke0"
2) "wm5xzrybty0"

The underlying implementation of GEO is actually a zset! So zset commands can be used to operate on GEO data.

127.0.0.1:6379> zrange china:city 0 -1  #View all elements
1) "chongqin"
2) "xian"
3) "shengzheng"
4) "hangzhou"
5) "shanghai"
6) "beijing"
127.0.0.1:6379> zrem china:city chongqin  #Delete element
(integer) 1
127.0.0.1:6379> zrange china:city 0 -1
1) "xian"
2) "shengzheng"
3) "hangzhou"
4) "shanghai"
5) "beijing"
127.0.0.1:6379> 

Hyperloglog

Cardinality: the number of distinct (non-repeating) elements.

Redis added the hyperloglog data structure in version 2.8.9.

Redis hyperloglog is a cardinality-estimation algorithm.

Advantage: the memory footprint is fixed; counting the cardinality of up to 2^64 different elements takes only 12KB of memory! From a memory perspective, hyperloglog is the first choice.

Unique visitors (UV): the same person visiting multiple times still counts as one.

The traditional way: a set saves each user's id, and the set's element count serves as the UV.

Saving large numbers of user ids this way is troublesome! The goal is counting, not storing the ids themselves.

The standard error is 0.81%, which is negligible for UV statistics.

127.0.0.1:6379> pfadd myket a b c d e f g h i j  # Save a set of values
(integer) 1
127.0.0.1:6379> pfcount myket  # Count the cardinality number of a group of elements
(integer) 10
127.0.0.1:6379> pfadd myket2 i j z x c v b n m
(integer) 1
127.0.0.1:6379> pfcount myket2
(integer) 9
127.0.0.1:6379> pfmerge mykey3 myket myket2  # Merge two sets of Union
OK
127.0.0.1:6379> pfcount mykey3  # View the combined quantity
(integer) 15
127.0.0.1:6379> 

If a small error is tolerable, hyperloglog is a good fit.

Bitmap

Bit storage.

Scenarios with only two states: active/inactive users, logged in/not logged in, clocked in/not clocked in. Bitmaps fit all of these.

A bitmap is a bit-array data structure! Records are made by operating on binary bits, and there are only two states: 0 and 1.

# Use bitmaps to record the clocking from Monday to Sunday. 1 is clocking in and 0 is not clocking in
127.0.0.1:6379> setbit sign 0 1
(integer) 0
127.0.0.1:6379> setbit sign 1 0
(integer) 0
127.0.0.1:6379> setbit sign 2 0
(integer) 0
127.0.0.1:6379> setbit sign 3 1
(integer) 0
127.0.0.1:6379> setbit sign 4 1
(integer) 0
127.0.0.1:6379> setbit sign 5 0
(integer) 0
127.0.0.1:6379> setbit sign 6 0
(integer) 0
127.0.0.1:6379> 

Check whether a given day was clocked in:

127.0.0.1:6379> getbit sign 3
(integer) 1
127.0.0.1:6379> getbit sign 5
(integer) 0
127.0.0.1:6379> 

Count the number of clocked-in days:

127.0.0.1:6379> bitcount sign
(integer) 3
127.0.0.1:6379> 

Transactions

A single Redis command is atomic, but transactions are not atomic.

Redis transactions have no concept of isolation levels. Commands are not executed when queued inside a transaction; they run only when the exec command is issued.

Redis transaction essence: a collection of commands. All commands in a transaction will be serialized and executed in sequence during the execution of the transaction.

One-off, sequential, exclusive! A transaction executes a series of commands.

========== queue: set set set, then exec ==========

Redis transaction:

  • Open the transaction (multi)
  • Queue commands (...)
  • Execute the transaction (exec)

Execute transactions normally!

127.0.0.1:6379> multi  # Open transaction
OK
127.0.0.1:6379> set k1 v1  # Order to join the team
QUEUED
127.0.0.1:6379> set k2 v2
QUEUED
127.0.0.1:6379> get k2
QUEUED
127.0.0.1:6379> set k3 v3
QUEUED
127.0.0.1:6379> exec  # Execute transaction
1) OK
2) OK
3) "v2"
4) OK
127.0.0.1:6379> 

Abandon the transaction!

127.0.0.1:6379> multi  # Open transaction
OK
127.0.0.1:6379> set k1 v1
QUEUED
127.0.0.1:6379> set k2 v2
QUEUED
127.0.0.1:6379> set k4 v4
QUEUED
127.0.0.1:6379> DISCARD  # Cancel the transaction. No command in the transaction queue will be executed
OK
127.0.0.1:6379> get k4
(nil)
127.0.0.1:6379> 


Compile-time exception! (a malformed command: none of the commands in the transaction will be executed)

127.0.0.1:6379> multi
OK
127.0.0.1:6379> set k1 v1
QUEUED
127.0.0.1:6379> set k2 v2 
QUEUED
127.0.0.1:6379> set k3 v3
QUEUED
127.0.0.1:6379> getset k3  # Wrong command
(error) ERR wrong number of arguments for 'getset' command
127.0.0.1:6379> set k4 v4
QUEUED
127.0.0.1:6379> set k5 v5
QUEUED
127.0.0.1:6379> exec  # Executing the transaction also errors; none of the commands run
(error) EXECABORT Transaction discarded because of previous errors.
127.0.0.1:6379> get k5
(nil)
127.0.0.1:6379> 

Runtime exception! (if a queued command is syntactically valid but fails when executed, the other commands still run normally and only the failing command throws an error) So Redis transactions do not guarantee atomicity!!!

127.0.0.1:6379> set k1 v1
OK
127.0.0.1:6379> multi
OK
127.0.0.1:6379> incr k1  # When the string is incremented, it will fail
QUEUED
127.0.0.1:6379> set k2 v2
QUEUED
127.0.0.1:6379> set k3 v3
QUEUED
127.0.0.1:6379> get k3
QUEUED
127.0.0.1:6379> exec  # The first command reported an error, but the other commands were executed successfully
1) (error) ERR value is not an integer or out of range
2) OK
3) OK
4) "v3"
127.0.0.1:6379> get k2
"v2"
127.0.0.1:6379> 

Monitoring: watch

Optimistic lock: very optimistic; assume nothing will go wrong at any time, so never lock. When updating data, check whether anyone else modified it in the meantime using a version field: read the version, and compare it when updating.

Pessimistic lock: very pessimistic; assume something can go wrong at any time, so lock for every operation.

Redis monitoring test:

# Normal execution successful
127.0.0.1:6379> set money 100
OK
127.0.0.1:6379> set out 0
OK
127.0.0.1:6379> watch money   # Monitor money
OK
127.0.0.1:6379> multi  # The transaction completes normally because the data was not changed during it
OK
127.0.0.1:6379> DECRBY money 20
QUEUED
127.0.0.1:6379> incrby out 20
QUEUED
127.0.0.1:6379> exec
1) (integer) 80
2) (integer) 20
127.0.0.1:6379> 

Test multithreading, modify the value, and use watch as an optimistic lock operation of redis.

127.0.0.1:6379> watch money  # Monitor money
OK
127.0.0.1:6379> multi
OK
127.0.0.1:6379> decrby money 10 
QUEUED
127.0.0.1:6379> incrby out 10
QUEUED
127.0.0.1:6379> exec  # Before execution, if another thread modifies the money value, the transaction execution will fail
(nil)
127.0.0.1:6379> 

If the transaction fails, first release the lock with unwatch, then watch money again to pick up the latest value, and then open a new transaction, as in the sketch below.
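A retry sketch (the values shown are illustrative): release the stale watch, watch again to read the latest value, then re-run the transaction.

127.0.0.1:6379> unwatch  # release the old watch
OK
127.0.0.1:6379> watch money  # watch again, picking up the latest value
OK
127.0.0.1:6379> multi
OK
127.0.0.1:6379> decrby money 10
QUEUED
127.0.0.1:6379> incrby out 10
QUEUED
127.0.0.1:6379> exec  # succeeds if money was not modified in the meantime
1) (integer) 80
2) (integer) 30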

Jedis

Use Java to operate redis.

Jedis is the officially recommended Java client for Redis! It is middleware for operating Redis from Java; to use Redis from Java you should know Jedis well.

Test:

  1. Import corresponding dependencies
<!--    Import the jedis dependency -->
    <dependencies>
        <dependency>
            <groupId>redis.clients</groupId>
            <artifactId>jedis</artifactId>
            <version>3.2.0</version>
        </dependency>

    <!--    fastjson-->
        <dependency>
            <groupId>com.alibaba</groupId>
            <artifactId>fastjson</artifactId>
            <version>1.2.74</version>
        </dependency>
    </dependencies>
  2. Coding test

    • Connect database

    • Operation command

    • Disconnect

      package com.zr;
      
      import redis.clients.jedis.Jedis;
      
      public class TestPing {
          public static void main(String[] args) {
              // new jedis object
              Jedis jedis = new Jedis("39.105.48.232",6379);
              System.out.println(jedis.ping());
          }
      }
      

Output: PONG

Common API

package com.zr;

import redis.clients.jedis.Jedis;

import java.util.Set;

public class TestKey {
    public static void main(String[] args) {
        Jedis jedis = new Jedis("127.0.0.1",6379);

        System.out.println("Clear data:"+jedis.flushDB());
        System.out.println("Determine whether a key exists:"+jedis.exists("username"));
        System.out.println("New key value pair:"+jedis.set("username","zr"));
        System.out.println("New key value pair:"+jedis.set("password","813794474"));
        System.out.println("The keys in the system are as follows:");
        Set<String> keys = jedis.keys("*");
        System.out.println(keys);

        System.out.println("Delete key"+jedis.del("password"));
        System.out.println("judge password Whether the key exists:"+jedis.exists("password"));
        System.out.println("judge username Type of:"+jedis.type("username"));
        System.out.println("Random return:"+jedis.randomKey());
        System.out.println("Random return:"+jedis.rename("username","name"));
        System.out.println("Fetch value:"+jedis.get("name"));
        System.out.println("Index query:"+jedis.select(0));
        System.out.println("Delete the current database:"+jedis.flushDB());
        System.out.println("Returns the of the current database key number:"+jedis.dbSize());
        System.out.println("Delete all:"+jedis.flushAll());

    }
}

String

package com.zr;

import redis.clients.jedis.Jedis;

import java.util.Set;
import java.util.concurrent.TimeUnit;

public class TestString {
    public static void main(String[] args) {
        Jedis jedis = new Jedis("127.0.0.1",6379);

        System.out.println("Clear data:"+jedis.flushDB());

        System.out.println("=======Add data========");
        System.out.println(jedis.set("k1","v1"));
        System.out.println(jedis.set("k2","v2"));
        System.out.println(jedis.set("k3","v3"));
        System.out.println("Delete:"+jedis.del("k2"));
        System.out.println("Value:"+jedis.get("k2"));
        System.out.println("modify k1: "+jedis.set("k1","v111"));
        System.out.println("k3 Added after:"+jedis.append("k3","zhour"));
        System.out.println("k3: "+jedis.get("k3"));
        System.out.println("Add multiple:"+jedis.mset("k4","v4","k5","v5","k6","v6"));
        System.out.println("Get multiple:"+jedis.mget("k4","k5","k6"));

        jedis.flushDB();
        System.out.println("=======Add key value to prevent overwriting======");
        System.out.println(jedis.setnx("k1","v1"));
        System.out.println(jedis.setnx("k2","v2"));
        System.out.println(jedis.setnx("k3","v3"));

        System.out.println("=======Add key and set effective time======");
        System.out.println(jedis.setex("k3",6,"v3"));
        System.out.println(jedis.get("k3"));
        try{
            TimeUnit.SECONDS.sleep(6);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
        System.out.println(jedis.get("k3"));

        System.out.println("=======Get the original value and update it to the new value=======");
        System.out.println(jedis.getSet("k2","k222gdsg"));
        System.out.println(jedis.get("k2"));
        System.out.println("intercept:"+jedis.getrange("k2",2,5));
    }
}

list

package com.zr;

import redis.clients.jedis.Jedis;

public class TestList {
    public static void main(String[] args) {

        Jedis jedis = new Jedis("127.0.0.1",6379);
        jedis.flushDB();
        System.out.println("=========increase list=====");
        jedis.lpush("collection","Aeeaylist","Vector","Stack","HashMap","WeakHashMap","LinkHashMap");
        jedis.lpush("collection","HashSet");
        jedis.lpush("collection","TreeSet");
        System.out.println("collection:"+jedis.lrange("collection",0,-1));
        System.out.println("collection Medium 0-3 section:"+jedis.lrange("collection",0,3));
        System.out.println("=========================");
        System.out.println("Delete:"+jedis.lrem("collection",2,"HashMap"));
        System.out.println("collection:"+jedis.lrange("collection",0,-1));
        System.out.println("Delete specified interval:"+jedis.ltrim("collection",0,2));
        System.out.println("collection:"+jedis.lrange("collection",0,-1));

        System.out.println("Left end of stack"+jedis.lpop("collection"));
        System.out.println("Right end of stack"+jedis.rpop("collection"));
        System.out.println("Add element at the right end"+jedis.rpush("collection","right"));
        System.out.println("Add element at left end"+jedis.lpush("collection","left"));
        System.out.println("Modify the specified subscript:"+jedis.lset("collection",1,"LinkHashMap"));
        System.out.println("========length==========");
        System.out.println("collection:"+jedis.lrange("collection",0,-1));
        System.out.println(jedis.llen("collection"));

    }
}

Set, Hash, Zset examples refer to five basic data types!!!!!

Transactions!

package com.zr;

import com.alibaba.fastjson.JSONObject;
import redis.clients.jedis.Jedis;
import redis.clients.jedis.Transaction;

public class TestTX {
    public static void main(String[] args) {
        Jedis jedis = new Jedis("127.0.0.1", 6379);

        jedis.flushDB();
        JSONObject jsonObject = new JSONObject();
        jsonObject.put("hello","world");
        jsonObject.put("name","zhour");
        //Open transaction
        Transaction multi = jedis.multi();
        String result = jsonObject.toJSONString();
        //jedis.watch(result);

        try {
            multi.set("user1",result);
            multi.set("user2",result);
            //int i = 1/0;  // Code exception, execution failed
            multi.exec();  //Execute transaction
        } catch (Exception e) {
            multi.discard();  //Abandon transaction
            e.printStackTrace();
        }finally {
            System.out.println(jedis.get("user1"));
            System.out.println(jedis.get("user2"));
            jedis.close();  //Close connection
        }
    }
}

SpringBoot integration

SpringBoot operates data through Spring Data: JPA, JDBC, Redis, and so on!

Spring Data is as famous as Spring Boot.

Note: after Spring Boot 2.x, jedis was replaced with lettuce!

Jedis: uses a direct connection; with multiple threads operating on it, it is unsafe. To avoid that, use a JedisPool connection pool! More like the BIO model.

lettuce: built on netty; instances can be shared across multiple threads, so there is no thread-safety problem! More like the NIO model.

Source code analysis:

@Bean
@ConditionalOnMissingBean(
    name = {"redisTemplate"}  //It takes effect only if it does not exist. We can define one to replace the default one
)
@ConditionalOnSingleCandidate(RedisConnectionFactory.class)
public RedisTemplate<Object, Object> redisTemplate(RedisConnectionFactory redisConnectionFactory) {
    //The default RedisTemplate does not have too many settings, and redis objects need to be serialized
    //Both generic types are Object types, which need to be cast later
    RedisTemplate<Object, Object> template = new RedisTemplate();
    template.setConnectionFactory(redisConnectionFactory);
    return template;
}

@Bean
@ConditionalOnMissingBean  //Since String is the most commonly used method in redis, a bean is proposed separately
@ConditionalOnSingleCandidate(RedisConnectionFactory.class)
public StringRedisTemplate stringRedisTemplate(RedisConnectionFactory redisConnectionFactory) {
    StringRedisTemplate template = new StringRedisTemplate();
    template.setConnectionFactory(redisConnectionFactory);
    return template;
}

Integration test

  1. Import dependency

    <!--operation redis-->
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-data-redis</artifactId>
    </dependency>
    
  2. configure connections

    # Every Spring Boot auto-configuration class binds a properties class
    # Here RedisAutoConfiguration binds RedisProperties
    
    #Configure redis
    spring.redis.host=127.0.0.1
    spring.redis.port=6379
    spring.red
    
  3. test

    package com.zr;
    
    @SpringBootTest
    class Redis02SpringbootApplicationTests {
    
        @Autowired
        private RedisTemplate redisTemplate;
    
        @Test
        void contextLoads() {
            //opsForValue operation string. Similar to string
            //opsForList operation list
            //opsForSet
            //opsForHash
            //opsForZSet
            //opsForGeo
    
            //In addition to basic operations, other methods can use redisTemplat to operate, such as transactions and basic CRUD
    
            //Get the connection object of redis
            // RedisConnection connection= redisTemplate.getConnectionFactory().getConnection();
            // connection.flushDb();
            // connection.flushAll();
    
            redisTemplate.opsForValue().set("mykey","zhour");
            System.out.println(redisTemplate.opsForValue().get("mykey"));
        }
    }
    

Serialization configuration (the default is JDK serialization; we can use JSON serialization instead)

Test:

User

package com.zr.config.pojo;

@Component
@Data
@AllArgsConstructor
@NoArgsConstructor
//In enterprise development, all POJOs are serialized
public class User implements Serializable {
    private String name;
    private Integer age;
}
@Test
void test() throws JsonProcessingException {
    //Real world development generally uses json to pass objects
    User user = new User("Zhou Zhou", 8);
    // String jsonUser = new ObjectMapper().writeValueAsString(user);
    // redisTemplate.opsForValue().set("user",jsonUser);

    redisTemplate.opsForValue().set("user",user);  //If the object is passed directly here, an error will be reported and the object needs to be serialized
    System.out.println(redisTemplate.opsForValue().get("user"));
}

Write your own RedisTemplate

package com.zr.config;

@Configuration
public class RedisConfig {
    //Write our own redisTemplate
    @Bean
    public RedisTemplate<String, Object> redisTemplate(RedisConnectionFactory factory) {
        RedisTemplate<String, Object> template = new RedisTemplate<String,Object>();
        template.setConnectionFactory(factory);
        //json serialization configuration
        Jackson2JsonRedisSerializer jackson2JsonRedisSerializer = new Jackson2JsonRedisSerializer(Object.class);
        ObjectMapper om = new ObjectMapper();
        om.setVisibility(PropertyAccessor.ALL, JsonAutoDetect.Visibility.ANY);
        om.enableDefaultTyping(ObjectMapper.DefaultTyping.NON_FINAL);
        jackson2JsonRedisSerializer.setObjectMapper(om);
        //String serialization
        StringRedisSerializer stringRedisSerializer = new StringRedisSerializer();
        //key adopts String serialization
        template.setKeySerializer(stringRedisSerializer);
        //The key of hash also adopts String serialization
        template.setHashKeySerializer(stringRedisSerializer);
        //value is serialized by jackson
        template.setValueSerializer(jackson2JsonRedisSerializer);
        //The value of hash is serialized by jackson
        template.setHashValueSerializer(jackson2JsonRedisSerializer);
        template.afterPropertiesSet();

        return template;
    }
}

test

 @Autowired
    @Qualifier("redisTemplate")
    private RedisTemplate redisTemplate;

@Test
void test() throws JsonProcessingException {
    //Real world development generally uses json to pass objects
    User user = new User("Zhou Zhou", 8);
    // String jsonUser = new ObjectMapper().writeValueAsString(user);
    // redisTemplate.opsForValue().set("user",jsonUser);

    redisTemplate.opsForValue().set("user",user);  //If the object is passed directly here, an error will be reported and the object needs to be serialized
    System.out.println(redisTemplate.opsForValue().get("user"));
}

All redis operations can be encapsulated into a tool class, similar to the previous JDBC utils.

All redis operations are very simple. It is important for us to understand the concept of redis and the specific application scenarios of each data structure!!

Redis.conf detailed explanation

When starting, it is started through the configuration file!

In the configuration file, units are not case sensitive!

INCLUDES: other configuration files can be included

NETWORK (network)

bind 127.0.0.1   # Bound ip
protected-mode yes  # Protection mode
port 6379  # port settings

GENERAL (general)

daemonize yes  # Run as a daemon. The default is no, and we set it to yes
pidfile /var/run/redis_6379.pid  # If you run in the background mode, you need to bind a pid file

# Logging
# Specify the server verbosity level.
# This can be one of:
# debug (a lot of information, useful for development/testing)
# verbose (many rarely useful info, but not a mess like the debug level)
# notice (moderately verbose, what you want in production probably)  #Production environment usage
# warning (only very important / critical messages are logged)
loglevel notice

# Location name of the log file
logfile ""

databases 16 # There are 16 databases by default

always-show-logo yes  # Always show the ASCII logo on startup

SNAPSHOTTING (snapshot)

Persistence: if the given number of write operations happens within the given time window, persist to a file (.rdb / .aof)

redis is an in-memory database; without persistence, data is lost on power-off.

# If at least 1 key is modified within 900 seconds, perform the persistence operation
save 900 1
# If at least 10 keys are modified within 300 seconds, perform the persistence operation
save 300 10
# If at least 10000 keys are modified within 60 seconds, perform the persistence operation
save 60 10000
# You can define your own settings

stop-writes-on-bgsave-error yes  # Whether to keep accepting writes if persistence fails
rdbcompression yes  # Whether to compress rdb files requires some cpu resources
rdbchecksum yes  # Error checking is performed when saving rdb files
dir ./  # Directory where rdb files are saved

REPLICATION: related to master-slave replication; explained in the master-slave replication section.

SECURITY (security)

You can set the redis password here. There is no password by default.

config get requirepass  # Get the password (empty by default)
config set requirepass "123456"  # Set a password; config set requirepass "" clears it

config get requirepass  # Now reports no permission until you authenticate

auth 123456  # Log in with the password

config get requirepass  # Now the configured password can be retrieved

CLIENTS (limits)

# maxclients 10000  # Set the maximum number of connections that can be connected to redis clients
# maxmemory <bytes>  # Maximum memory capacity
# maxmemory-policy noeviction  # Processing strategy after the memory reaches the upper limit
The default eviction policy in redis.conf is noeviction.

maxmemory-policy has six options:
1. volatile-lru: apply LRU only to keys with an expire set
2. allkeys-lru: apply LRU to all keys
3. volatile-random: randomly remove keys that have an expire set
4. allkeys-random: randomly remove any key
5. volatile-ttl: remove the keys closest to expiring
6. noeviction: never evict; return an error on writes (default)
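The current policy can be inspected and changed at runtime (a sketch):

127.0.0.1:6379> config get maxmemory-policy
1) "maxmemory-policy"
2) "noeviction"
127.0.0.1:6379> config set maxmemory-policy allkeys-lru
OK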

APPEND ONLY MODE: aof configuration

appendonly no  # aof mode is not enabled by default. rdb is used for persistence by default. In most cases, rdb is fully sufficient
appendfilename "appendonly.aof"  # The name of the persistent file

# appendfsync always  # Every time you modify, sync consumes performance
appendfsync everysec  # sync is performed once per second, and 1 second of data may be lost
# appendfsync no  # Do not execute sync. At this time, the operating system automatically synchronizes data, and the speed is the fastest

Redis persistence

Redis is an in-memory database. If the data in memory is not persisted to disk, it disappears once the server process terminates. Therefore redis provides persistence!

RDB(Redis DataBase)

Writes a snapshot of the in-memory dataset to disk at specified time intervals; during recovery, the snapshot file is read directly back into memory.

Redis separately creates (forks) a child process for persistence. The child first writes the data to a temporary file; when the persistence process completes, redis replaces the previous persistent file with this temporary file. Throughout the process, the main process performs no disk IO, which ensures high performance. If large-scale data recovery is needed and perfect data integrity is not critical, RDB is more efficient than AOF. The disadvantage of RDB is that the data since the last snapshot may be lost. RDB is the default and generally does not need to be changed.

In production environments, the dump.rdb file is usually backed up!! In master-slave replication, the rdb file serves as the standby copy, kept on the slave.

The file rdb saves is dump.rdb, configured in the SNAPSHOTTING section of the configuration file.

Trigger mechanism (generate dump.rdb)

  1. When a save rule is satisfied, an rdb snapshot is triggered automatically
  2. Executing the FLUSHALL command also triggers the rdb rules
  3. Shutting down redis (SHUTDOWN) also generates an rdb file

A backup automatically generates a dump.rdb file.
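Besides the triggers above, a snapshot can also be requested explicitly with BGSAVE. A minimal sketch, assuming redis-py and a local instance:

```
import redis

r = redis.Redis(host="127.0.0.1", port=6379, decode_responses=True)
r.set("k1", "v1")
r.bgsave()                  # fork a child process and write dump.rdb
print(r.lastsave())         # time of the last successful save
print(r.config_get("dir"))  # the directory that now contains dump.rdb
```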

How to recover rdb files

Just place the dump.rdb file in redis's startup directory; when redis starts, it automatically checks for dump.rdb and loads its data.

Check the storage location

127.0.0.1:6379> config get dir
1) "dir"
2) "/usr/local/bin"  # If dump exists in this directory RDB file, the data in it will be recovered automatically after startup

advantage:

  1. Suitable for large-scale data recovery! (dump.rdb)
  2. Suitable when the requirements for data integrity are not high

Disadvantages:

  1. The operation happens at set intervals; if redis goes down unexpectedly, the data since the last snapshot is lost.
  2. The forked child process occupies additional memory!

AOF(Append Only File)

Records all our write commands in a history file, and replays the entire file during recovery.

Every write operation is recorded in the form of a log; redis records all write commands it executes (read operations are not recorded). The file is append-only: redis never overwrites it. On startup, redis reads the file and executes the write instructions from front to back to reconstruct the data.

AOF saves to the appendonly.aof file.

It is not enabled by default. Manual configuration is required for use! Just change appendonly no to yes to enable it!

Restart redis and it will take effect!

If the appendonly.aof file is corrupted, redis cannot start; we need to repair the aof file.

Redis provides a tool for this: redis-check-aof --fix

After successful repair, restart!

Rewrite rule

By default the file is appended to without limit, so it grows larger and larger.

If the aof file grows beyond 64mb (auto-aof-rewrite-min-size), redis forks a new process to rewrite the file.

Advantages and disadvantages:

advantage:

  1. Sync on every modification: file integrity is the best!
  2. Default, sync once per second: up to one second of data may be lost
  3. Never sync: the highest efficiency

Disadvantages:

  1. An aof file is much larger than the corresponding rdb file, and repairing it is slower than rdb
  2. aof also runs slower than rdb, so redis's default configuration is rdb persistence

Redis publish and subscribe

Redis publish/subscribe (pub/sub) is a message communication mode: the sender (pub) sends messages and subscribers (sub) receive them. It suits follow/notification systems such as official accounts and microblogs!

Redis client can subscribe to any number of channels.

Consider the relationship between channel channel1 and the three clients subscribed to it (client2, client5 and client1).

When a new message is sent to channel1 via the PUBLISH command, the message is delivered to the three clients subscribed to it.

Redis publish subscribe command

The following table lists the common commands for redis publishing and subscribing:

| No. | Command and description |
| --- | ----------------------- |
| 1 | `PSUBSCRIBE pattern [pattern ...]`: subscribe to one or more channels matching the given patterns. |
| 2 | `PUBSUB subcommand [argument [argument ...]]`: inspect the state of the publish/subscribe system. |
| 3 | `PUBLISH channel message`: send a message to the specified channel. |
| 4 | `PUNSUBSCRIBE [pattern [pattern ...]]`: unsubscribe from all channels matching the given patterns. |
| 5 | `SUBSCRIBE channel [channel ...]`: subscribe to the given channel(s). |
| 6 | `UNSUBSCRIBE [channel [channel ...]]`: unsubscribe from the given channel(s). |

Test:

Subscriber

127.0.0.1:6379> SUBSCRIBE zhour  # Subscribe to a channel
Reading messages... (press Ctrl-C to quit)
1) "subscribe"
2) "zhour"
3) (integer) 1
# Waiting to receive push messages
1) "message"  #news
2) "zhour"    #Which channel
3) "hello,zhour" #Specific content of the message

1) "message"
2) "zhour"
3) "hello,redis"

Sender

127.0.0.1:6379> PUBLISH zhour "hello,zhour"  # Post message to channel
(integer) 1
127.0.0.1:6379> PUBLISH zhour "hello,redis"  # Post message to channel
(integer) 1
127.0.0.1:6379> 
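The same test can be driven from application code. A minimal sketch assuming redis-py (channel name as above); in practice the subscriber usually runs in its own process or thread:

```
import redis

r = redis.Redis(host="127.0.0.1", port=6379, decode_responses=True)

p = r.pubsub()
p.subscribe("zhour")               # subscriber side

r.publish("zhour", "hello,zhour")  # publisher side (normally another process)

# the first event is the subscribe confirmation, then the message itself
for _ in range(2):
    msg = p.get_message(timeout=1.0)
    if msg:
        print(msg["type"], msg["channel"], msg["data"])
```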

Redis is implemented in C; by reading the pubsub.c source file you can understand the underlying implementation of the publish and subscribe mechanism, which deepens your understanding of redis.

After subscribing to a channel through the subscribe command, a dictionary is maintained in redis server. The key of the dictionary is each channel, and the value of the dictionary is a linked list. All clients subscribing to the channel are saved in the linked list. The key of the subscribe command is to add the client to the subscription linked list of the given channel.

Send a message to subscribers through the publish command. Redis server will use the specified channel as the key, find the linked list of all clients subscribing to this channel in the channel dictionary maintained by it, traverse all linked lists, and send the message to all subscribers.

Pub/Sub literally means publish and subscribe. In redis, clients can publish messages to a channel and subscribe to channels; when a message is published to a channel, all clients subscribed to it receive the corresponding information. The most obvious uses of this feature are real-time message systems, instant chat, group chat and similar functions.

For slightly more complex scenarios, message oriented middleware MQ can be used.

Redis master-slave replication

Master-Slave replication means copying data from one redis server to other redis servers. The former is called the master/leader, the latter the slaves/followers. Data replication is one-way: it flows only from the master node to the slave nodes. The master handles mainly writes; the slaves handle mainly reads and can only copy data.

By default, every redis server is a master node. A master node can have multiple slave nodes (or none), but a slave node can only have one master node.

The main functions of master-slave replication include:

  1. Data redundancy: master-slave replication realizes the hot backup of data, which is a way of data redundancy other than persistence.
  2. Fault recovery: when the master node fails, the slave node can provide services to achieve rapid fault recovery. In fact, it is a kind of redundancy of services.
  3. Load balancing: on top of master-slave replication, combined with read-write separation, the master node provides write services and the slave nodes provide read services (the application connects to the master when writing redis data and to a slave when reading). This shares the server load; especially in write-less, read-more scenarios, spreading reads across multiple slave nodes can greatly improve the concurrency of the redis server.
  4. Cornerstone of high availability (cluster): in addition to the above functions, master-slave replication is also the basis for sentinel and cluster implementation. Therefore, master-slave replication is the basis for high availability of redis.

Generally speaking, to apply redis to engineering projects, it is absolutely impossible to use only one redis. The reasons are as follows:

  1. Structurally, a single redis server is a single point of failure, and one server handling the entire request load is under great pressure.
  2. In terms of capacity, a single redis server's memory is limited. Even if a server has 256G of memory, not all of it can be used for redis; as a rule of thumb, a single redis instance should use no more than 20G.

Commodities on e-commerce websites are generally uploaded at one time and browsed countless times, that is, "read more and write less".

Environment configuration

Configure only slave libraries, not master libraries!

127.0.0.1:6379> info replication  #View information about the current library
# Replication
role:master  # role
connected_slaves:0  # No slaves connected
master_replid:df9e0065e32fd82d44fc257454f2101cecd2aa10
master_replid2:0000000000000000000000000000000000000000
master_repl_offset:0
second_repl_offset:-1
repl_backlog_active:0
repl_backlog_size:1048576
repl_backlog_first_byte_offset:0
repl_backlog_histlen:0
127.0.0.1:6379> 

Copy three configuration files and modify the corresponding information.

  1. Port number
  2. pid name
  3. Log name
  4. dump.rdb name

After modification, start three redis services to view the process information

One master and two slaves

By default, each redis server is the master node. We only configure the slave.

Acknowledge the master! One master (6379) and two slaves (6380, 6381).

Configure slaveof on the slaves (both 6380 and 6381 are configured this way):

127.0.0.1:6380> SLAVEOF 127.0.0.1 6379  # SLAVEOF host port
OK
127.0.0.1:6380> info replication
# Replication
role:slave  # Current role slave
master_host:127.0.0.1  # Host information
master_port:6379
master_link_status:up
master_last_io_seconds_ago:4
master_sync_in_progress:0
slave_repl_offset:0
slave_priority:100
slave_read_only:1
connected_slaves:0
master_replid:d07d76bbf57965ec5eb6dfa2696c7fb50a1dab70
master_replid2:0000000000000000000000000000000000000000
master_repl_offset:0
second_repl_offset:-1
repl_backlog_active:1
repl_backlog_size:1048576
repl_backlog_first_byte_offset:1
repl_backlog_histlen:0
127.0.0.1:6380> 

View on the host

127.0.0.1:6379> info replication
# Replication
role:master # host
connected_slaves:2  # Two slaves connected
slave0:ip=127.0.0.1,port=6380,state=online,offset=294,lag=0  # Slave information
slave1:ip=127.0.0.1,port=6381,state=online,offset=294,lag=0  # Slave information
master_replid:d07d76bbf57965ec5eb6dfa2696c7fb50a1dab70
master_replid2:0000000000000000000000000000000000000000
master_repl_offset:294
second_repl_offset:-1
repl_backlog_active:1
repl_backlog_size:1048576
repl_backlog_first_byte_offset:1
repl_backlog_histlen:294
127.0.0.1:6379> 

For a permanent master-slave setup, configure it in the configuration file; the commands used here are temporary.

# Configuration in configuration file
# replicaof <masterip> <masterport>

The host can write; the slaves cannot write, they can only read! All data written on the host is saved by the slaves.

host

127.0.0.1:6379> set k1 v1
OK
127.0.0.1:6379> get k1
"v1"
127.0.0.1:6379> 

Slave

127.0.0.1:6380> get k1
"v1"
127.0.0.1:6380> set k2 v2
(error) READONLY You can't write against a read only replica.
127.0.0.1:6380> 

Test: when the host disconnects, the slaves are still connected to it, but there are no write operations. If the host comes back, the slaves can still obtain the information it writes.

If a slave was configured on the command line, it reverts to being a master after a restart. As soon as it becomes a slave again, it immediately fetches the values from the host!

Replication principle

After the Slave starts successfully and connects to the Master, it sends a SYNC command.

After receiving the command, the master starts a background save process and buffers all commands received that modify the dataset. When the background process finishes, the master transfers the entire data file to the slave, completing one full synchronization.

Full copy: after receiving the data, the slave saves it and loads it into memory.

Incremental replication: the master continues to transmit all the new collected modification commands to the slave in turn to complete the synchronization.

Whenever a slave reconnects to the master, a full synchronization (full replication) is automatically performed.

Layer by layer link

6379 → 6380 → 6381: the three nodes are chained in turn. 6380 is the master of 6381 yet still reports itself as a slave node.

At this time, master-slave replication can also be completed!

In this case, if 6379 hangs up, a master node must be configured manually: run SLAVEOF NO ONE on a node to make it the master, then manually point the other nodes at the new master. If 6379 recovers, the configuration has to be redone.
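A sketch of that manual failover, assuming redis-py and the three local instances above; calling slaveof() with no arguments issues SLAVEOF NO ONE:

```
import redis

replica = redis.Redis(host="127.0.0.1", port=6380, decode_responses=True)
replica.slaveof()                           # no arguments => SLAVEOF NO ONE
print(replica.info("replication")["role"])  # now 'master'

other = redis.Redis(host="127.0.0.1", port=6381, decode_responses=True)
other.slaveof("127.0.0.1", 6380)            # re-point the remaining node at the new master
```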

Sentinel mode

Automatic election mode!

The traditional master-slave switching technique works like this: when the master server goes down, someone manually switches a slave server over to be the master. This requires human intervention, takes time and effort, and leaves the service unavailable for a while. The approach is not desirable in practice, so there is Sentinel mode; Redis has officially provided the Sentinel architecture since 2.8 to solve this problem.

Sentinel mode is the automated version of manual master switching: sentinels monitor the hosts in the background, and if the master fails, a slave is elected as the new master according to the votes.

Sentinel is a special mode. Redis provides the sentinel command. A sentinel is an independent process that runs on its own. The principle: the sentinel monitors multiple running redis instances by sending commands and waiting for the redis servers' responses.

A single sentinel monitoring the services may itself have problems. So multiple sentinels can be used; they also monitor each other, forming a multi-sentinel mode.

How the sentinel process works

  1. Each Sentinel process sends a PING command once per second to the master, the slaves, and the other Sentinel processes in the cluster.
  2. If an instance takes longer than the value of the down-after-milliseconds option to give a valid reply to PING, it is marked as subjectively down (SDOWN) by that Sentinel process.
  3. If the master is marked SDOWN, all Sentinel processes monitoring it confirm once per second that it has indeed entered the SDOWN state.
  4. When a sufficient number of Sentinel processes (at least the quorum value specified in the configuration file) confirm within the specified time range that the master is SDOWN, the master is marked as objectively down (ODOWN).
  5. In general, each Sentinel process sends an INFO command to every master and slave in the cluster every 10 seconds.
  6. When the master is marked **ODOWN**, the Sentinel processes increase the frequency of INFO commands to all slaves of the offline master from once every 10 seconds to once per second.
  7. If not enough Sentinel processes agree that the master is offline, its objective offline status is removed. If the master again returns a valid reply to a Sentinel's PING command, its subjective offline status is removed.

Test:

Our current model is one master and two slaves.

Configure the sentinel configuration file

# sentinel monitor <monitored-name (chosen by yourself)> <host> <port> <quorum: votes needed before objective offline>
sentinel monitor myredis 127.0.0.1 6379 1

Start the sentinel

[root@zhourui bin]# redis-sentinel zconfig/sentinel.conf 
1971005:X 06 Jan 2021 20:41:13.713 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
1971005:X 06 Jan 2021 20:41:13.713 # Redis version=6.0.9, bits=64, commit=00000000, modified=0, pid=1971005, just started
1971005:X 06 Jan 2021 20:41:13.713 # Configuration loaded
                _._                                                  
           _.-``__ ''-._                                             
      _.-``    `.  `_.  ''-._           Redis 6.0.9 (00000000/0) 64 bit
  .-`` .-```.  ```\/    _.,_ ''-._                                   
 (    '      ,       .-`  | `,    )     Running in sentinel mode
 |`-._`-...-` __...-.``-._|'` _.-'|     Port: 26379
 |    `-._   `._    /     _.-'    |     PID: 1971005
  `-._    `-._  `-./  _.-'    _.-'                                   
 |`-._`-._    `-.__.-'    _.-'_.-'|                                  
 |    `-._`-._        _.-'_.-'    |           http://redis.io        
  `-._    `-._`-.__.-'_.-'    _.-'                                   
 |`-._`-._    `-.__.-'    _.-'_.-'|                                  
 |    `-._`-._        _.-'_.-'    |                                  
  `-._    `-._`-.__.-'_.-'    _.-'                                   
      `-._    `-.__.-'    _.-'                                       
          `-._        _.-'                                           
              `-.__.-'                                               

1971005:X 06 Jan 2021 20:41:13.714 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.
1971005:X 06 Jan 2021 20:41:13.717 # Sentinel ID is bc602d0d0bcb457a46c117f1b87970ec13b67f73
1971005:X 06 Jan 2021 20:41:13.717 # +monitor master myredis 127.0.0.1 6379 quorum 1
1971005:X 06 Jan 2021 20:41:13.717 * +slave slave 127.0.0.1:6380 127.0.0.1 6380 @ myredis 127.0.0.1 6379
1971005:X 06 Jan 2021 20:41:13.719 * +slave slave 127.0.0.1:6381 127.0.0.1 6381 @ myredis 127.0.0.1 6379

# The log after the master hangs up and a new master is elected; the newly elected master node is 6381
1971005:X 06 Jan 2021 20:44:12.650 # +sdown master myredis 127.0.0.1 6379
1971005:X 06 Jan 2021 20:44:12.650 # +odown master myredis 127.0.0.1 6379 #quorum 1/1
1971005:X 06 Jan 2021 20:44:12.650 # +new-epoch 1
1971005:X 06 Jan 2021 20:44:12.650 # +try-failover master myredis 127.0.0.1 6379
1971005:X 06 Jan 2021 20:44:12.653 # +vote-for-leader bc602d0d0bcb457a46c117f1b87970ec13b67f73 1
1971005:X 06 Jan 2021 20:44:12.653 # +elected-leader master myredis 127.0.0.1 6379
1971005:X 06 Jan 2021 20:44:12.653 # +failover-state-select-slave master myredis 127.0.0.1 6379
1971005:X 06 Jan 2021 20:44:12.719 # +selected-slave slave 127.0.0.1:6381 127.0.0.1 6381 @ myredis 127.0.0.1 6379
1971005:X 06 Jan 2021 20:44:12.719 * +failover-state-send-slaveof-noone slave 127.0.0.1:6381 127.0.0.1 6381 @ myredis 127.0.0.1 6379
1971005:X 06 Jan 2021 20:44:12.785 * +failover-state-wait-promotion slave 127.0.0.1:6381 127.0.0.1 6381 @ myredis 127.0.0.1 6379
1971005:X 06 Jan 2021 20:44:13.054 # +promoted-slave slave 127.0.0.1:6381 127.0.0.1 6381 @ myredis 127.0.0.1 6379
1971005:X 06 Jan 2021 20:44:13.054 # +failover-state-reconf-slaves master myredis 127.0.0.1 6379
1971005:X 06 Jan 2021 20:44:13.110 * +slave-reconf-sent slave 127.0.0.1:6380 127.0.0.1 6380 @ myredis 127.0.0.1 6379
1971005:X 06 Jan 2021 20:44:14.082 * +slave-reconf-inprog slave 127.0.0.1:6380 127.0.0.1 6380 @ myredis 127.0.0.1 6379
1971005:X 06 Jan 2021 20:44:14.082 * +slave-reconf-done slave 127.0.0.1:6380 127.0.0.1 6380 @ myredis 127.0.0.1 6379
1971005:X 06 Jan 2021 20:44:14.153 # +failover-end master myredis 127.0.0.1 6379
1971005:X 06 Jan 2021 20:44:14.153 # +switch-master myredis 127.0.0.1 6379 127.0.0.1 6381
1971005:X 06 Jan 2021 20:44:14.154 * +slave slave 127.0.0.1:6380 127.0.0.1 6380 @ myredis 127.0.0.1 6381
1971005:X 06 Jan 2021 20:44:14.154 * +slave slave 127.0.0.1:6379 127.0.0.1 6379 @ myredis 127.0.0.1 6381
1971005:X 06 Jan 2021 20:44:44.157 # +sdown slave 127.0.0.1:6379 127.0.0.1 6379 @ myredis 127.0.0.1 6381

If the old master comes back at this point, it can only join the new master as a slave. That is the rule of sentinel mode!

Advantages and disadvantages of sentinel mode

advantage:

  1. Sentinel mode is based on master-slave replication mode. It has the advantages of all master-slave configurations
  2. The master-slave can be switched and the fault can be transferred, so the availability of the system will be better
  3. Sentinel mode is the upgrade of master-slave mode, which is more robust from manual to automatic

Disadvantages:

  1. Redis is not easy to scale online; once cluster capacity reaches its upper limit, online expansion is very difficult
  2. Sentinel mode configuration is cumbersome, with many configuration options

Full configuration of sentinel mode:

# The port of sentry sentinel instance is 26379 by default. The sentry cluster needs to configure the port of each sentry
port 26379

# Sentry's working directory
dir /tmp

#sentinel monitors the ip and port of the redis master node
#master-name: a name for the master node, chosen by yourself
#quorum: how many sentinels must agree the master is lost before it is objectively considered down
#sentinel monitor <master-name> <host> <port> <quorum>
sentinel monitor mymaster 127.0.0.1 6379 1

#When requirepass foobared authorization is enabled in redis, all clients connecting to the instance must provide the password
#Set the password the sentinel uses to connect the master and slaves; note the master and slaves must be set to the same password
#sentinel auth-pass <master-name> <password>
sentinel auth-pass mymaster MySUPER--secret-0123passw0rd

#After this many milliseconds without a valid response, the sentinel subjectively considers the master node offline; the default is 30000 ms (30 seconds)
#sentinel down-after-milliseconds <master-name> <milliseconds>
sentinel down-after-milliseconds mymaster 30000

#How many slaves may synchronize with the new master at the same time during a failover.
#The smaller the number, the longer the failover takes;
#a larger number means more slaves are temporarily unavailable because of replication.
#Setting the value to 1 guarantees that only one slave at a time is unable to process command requests.
#sentinel parallel-syncs <master-name> <numslaves>
sentinel parallel-syncs mymaster 1

#Failover timeout, 3 minutes by default
#sentinel failover-timeout <master-name> <milliseconds>
sentinel failover-timeout mymaster 180000

#Script to execute when an event occurs; it can be used to notify the administrator,
#for example by sending an email when the system is not running normally
#sentinel notification-script <master-name> <script-path>
sentinel notification-script mymaster /var/redis/notify.sh

#Client reconfiguration master node parameter script
#When a master is changed due to failover, this script will be called to inform relevant clients of the change of the master address
#sentinel client-reconfig-script <master-name> <script-path>
sentinel client-reconfig-script mymaster /var/redis/reconfig.sh
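On the client side, applications should connect through the sentinels rather than to a fixed master address, so they follow a failover automatically. A minimal sketch assuming redis-py's Sentinel support and the mymaster name configured above:

```
from redis.sentinel import Sentinel

sentinel = Sentinel([("127.0.0.1", 26379)], socket_timeout=0.5)
print(sentinel.discover_master("mymaster"))  # current (ip, port) of the master

master = sentinel.master_for("mymaster", socket_timeout=0.5)
replica = sentinel.slave_for("mymaster", socket_timeout=0.5)
master.set("k1", "v1")      # writes always reach whichever node is master now
print(replica.get("k1"))    # reads can be served by a replica
```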


Redis cache penetration and avalanche

High availability of services!

The use of the Redis cache has greatly improved the efficiency and performance of applications, especially for data queries, but it also brings some problems. The most important is data consistency; strictly speaking, this problem has no solution. If strong consistency is required, the cache cannot be used.

Other typical problems are cache penetration, cache avalanche and cache breakdown. At present, there are more popular solutions in the industry.

Cache penetration (not found)

Cache penetration: a user queries for some data and finds nothing in the redis in-memory database (a cache miss), so the query goes to the persistence-layer database, which also has nothing, and the query fails. When there are many such users and the cache keeps missing (e.g. during a flash sale), all the requests hit the persistence-layer database, putting it under great pressure. That is cache penetration!

Solution:

Bloom filter: a bloom filter is a probabilistic data structure. All possible query keys are hashed into it, and requests are checked against it at the control layer; anything it reports as definitely absent is discarded, avoiding pressure on the underlying storage system.
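A toy illustration of the idea in pure Python (not a production implementation such as RedisBloom; the bit-array size and hashing scheme are arbitrary):

```
import hashlib

class BloomFilter:
    """Minimal Bloom filter: k hash positions over an m-bit array."""

    def __init__(self, m=1 << 20, k=5):
        self.m, self.k = m, k
        self.bits = bytearray(m // 8)

    def _positions(self, key):
        for i in range(self.k):
            h = hashlib.md5(f"{i}:{key}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.m

    def add(self, key):
        for pos in self._positions(key):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def might_contain(self, key):
        # False => definitely absent: reject before touching the database
        # True  => possibly present: false positives are possible
        return all(self.bits[p // 8] & (1 << (p % 8))
                   for p in self._positions(key))

bf = BloomFilter()
bf.add("user:1")
print(bf.might_contain("user:1"))    # True
print(bf.might_contain("user:999"))  # False: the request can be discarded
```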

Cache empty object: when the storage layer misses, cache the returned empty object anyway, with an expiration time set. Subsequent accesses to this data are then served from the cache, protecting the back-end data source.
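A sketch of the null-object caching just described, assuming redis-py; get_user, db_lookup and the NULL_SENTINEL marker are hypothetical names:

```
import redis

r = redis.Redis(host="127.0.0.1", port=6379, decode_responses=True)
NULL_SENTINEL = "__null__"  # hypothetical marker meaning "no such row"

def get_user(user_id, db_lookup, null_ttl=60, ttl=3600):
    key = f"user:{user_id}"
    cached = r.get(key)
    if cached is not None:
        return None if cached == NULL_SENTINEL else cached
    row = db_lookup(user_id)                    # hit the storage layer once
    if row is None:
        r.set(key, NULL_SENTINEL, ex=null_ttl)  # cache the miss, briefly
        return None
    r.set(key, row, ex=ttl)
    return row
```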

However, this method has the following problems:

If null values can be cached, this means that the cache needs more space to store null keys.

Even if the expiration time is set for a null value, there will still be inconsistency between the data of the cache layer and the storage layer for a period of time, which will have an impact on the business that needs to ensure consistency.

Cache breakdown (too much concurrency, cache expired)

Cache breakdown: it means that a key is very hot and is constantly carrying large concurrency. Large concurrency focuses on accessing this point. When the key fails, the continuous large concurrency will break through the cache and directly request the database, which is like cutting a hole in a barrier.

When a key expires, a large number of requests are accessed concurrently. This kind of data is generally hot data. Because the cache expires, the database will be accessed to query the latest data at the same time, and the cache will be written back, which will increase the instantaneous pressure on the database.

Solution:

Set hotspot data never to expire:

From the cache level, there is no expiration time set, so there will be no problem caused by hot key expiration.

Add mutex:

Distributed lock: a distributed lock guarantees that, for each key, only one thread at a time queries the back-end service; other threads that do not hold the lock simply wait. This shifts the pressure of high concurrency onto the distributed lock, which is therefore a great test for it.
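One common way to realize such a mutex on redis itself is SET with NX and EX, plus an atomic compare-and-delete release. A sketch assuming redis-py; rebuild_with_lock and the key names are illustrative:

```
import uuid
import redis

r = redis.Redis(host="127.0.0.1", port=6379, decode_responses=True)

# release only if we still own the lock; compare-and-delete must be atomic
RELEASE_LUA = """
if redis.call('get', KEYS[1]) == ARGV[1] then
    return redis.call('del', KEYS[1])
end
return 0
"""

def rebuild_with_lock(lock_key, ttl=10):
    token = str(uuid.uuid4())
    # SET key token NX EX ttl: only one client can acquire the lock
    if not r.set(lock_key, token, nx=True, ex=ttl):
        return False  # someone else holds the lock; wait and retry
    try:
        # query the database and write the cache back here
        return True
    finally:
        r.eval(RELEASE_LUA, 1, lock_key, token)
```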

Cache avalanche

Cache avalanche: in a certain period of time, a large batch of cached keys expires all at once, or redis itself goes down.

For example, during the Double 11 shopping festival a batch of goods is put into the cache. Suppose these cache entries expire after one hour: from then on, all queries for those goods fall on the database. For the database this is a periodic pressure peak; every request reaches the storage layer, whose call volume surges and may bring the database down.

In fact, centralized expiration is not the most fatal case. The truly fatal cache avalanche is a cache server node going down or disconnecting. A naturally formed avalanche (caches created in the same period and expiring together) produces pressure the database can usually withstand, since it is just periodic pressure on the data. But when a cache service node goes down, the pressure on the database server is unpredictable and may well crush the database in an instant.

Solution

redis high availability

The meaning of this idea: since redis may hang up, set up several more redis instances, so that when one hangs up the others keep working. That is, build a cluster. (multi-site active-active deployment)

Rate limiting and degradation

The idea of this solution: after the cache expires, control the number of threads that read the database and write the cache by locking or queuing. For example, for a given key, only one thread is allowed to query the data and write the cache, while the other threads wait.

Data preheating

The meaning of data preheating: before formal deployment, access the likely data in advance, so that part of the data that may be heavily accessed is already loaded into the cache. Before the large concurrent access arrives, manually trigger the loading of the various cache keys, and set different expiration times so that cache invalidation is spread as evenly as possible.
