What is Redis?
Redis is a NoSQL database. Unlike traditional databases, Redis keeps its data in memory, which gives it very high read/write performance and makes it widely used as a cache.
Besides caching, Redis is often used for distributed locks and message queues.
Redis provides five basic data types and three special data types to cover different business scenarios. It also supports transactions, persistence, Lua scripting, and several clustering schemes.
Redis 5+3 data types
String:
Introduction
String is the simplest Redis data type: a plain key-value pair whose value can be either a string or a number.
Although Redis is written in C, it does not use native C strings. Instead it implements its own simple dynamic string (SDS). Compared with a native C string, SDS can store binary data as well as text, and it returns the length of the string in O(1) time.
Usage scenarios
- Regular counting
- Number of microblog posts
- Number of followers
- Tokens
Basic usage commands
Basic
127.0.0.1:6379> set key value # Set a key-value pair
OK
127.0.0.1:6379> get key # Get the value for the key
"value"
127.0.0.1:6379> exists key # Check whether the key exists
(integer) 1
127.0.0.1:6379> strlen key # Return the length of the string value stored at the key
(integer) 5
127.0.0.1:6379> del key # Delete the value stored at the key
(integer) 1
127.0.0.1:6379> get key
(nil)
Batch
127.0.0.1:6379> mset key1 value1 key2 value2 # Set multiple key-value pairs in one command
OK
127.0.0.1:6379> mget key1 key2 # Get the values for multiple keys in one command
1) "value1"
2) "value2"
Counter
127.0.0.1:6379> set number 1
OK
127.0.0.1:6379> incr number # Increment the numeric value stored at the key by one
(integer) 2
127.0.0.1:6379> get number
"2"
127.0.0.1:6379> decr number # Decrement the numeric value stored at the key by one
(integer) 1
127.0.0.1:6379> get number
"1"
Expiration
127.0.0.1:6379> expire key 60 # The data expires after 60 s
(integer) 1
127.0.0.1:6379> setex key 60 value # Set the value and make it expire after 60 s (setex = [set] + [ex]pire)
OK
127.0.0.1:6379> ttl key # How long until the data expires
(integer) 56
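For a view from application code, here is a minimal sketch of the counter and token scenarios, assuming the Jedis Java client; the key names and values are illustrative, not from the article.

import redis.clients.jedis.Jedis;

public class StringExample {
    public static void main(String[] args) throws Exception {
        try (Jedis jedis = new Jedis("localhost", 6379)) {
            // Counter: atomically increment the view count of a post
            long views = jedis.incr("post:42:views");
            System.out.println("views = " + views);

            // Token: store a session token that expires after 30 minutes
            jedis.setex("token:user:1001", 30 * 60, "a-random-token-value");
            System.out.println("ttl = " + jedis.ttl("token:user:1001"));
        }
    }
}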
List:
Introduction
List is an ordered collection of multiple values under a single key, similar to a double-ended queue: you can insert and remove elements at both ends. If the key does not exist, the first push creates the list; once all values are removed, the key disappears.
Usage scenarios
- Chat systems
- Blog comments
- Paginated loading of comments
Basic usage commands
Implementing a queue
127.0.0.1:6379> rpush myList value1 # Push an element onto the right end (tail) of the list
(integer) 1
127.0.0.1:6379> rpush myList value2 value3 # Push multiple elements onto the right end (tail)
(integer) 3
127.0.0.1:6379> lpop myList # Pop the element at the left end (head) of the list
"value1"
127.0.0.1:6379> lrange myList 0 1 # View the elements in the given index range; 0 is start, 1 is stop
1) "value2"
2) "value3"
127.0.0.1:6379> lrange myList 0 -1 # View all elements in the list; -1 means the last element
1) "value2"
2) "value3"
Implementing a stack
127.0.0.1:6379> rpush myList2 value1 value2 value3
(integer) 3
127.0.0.1:6379> rpop myList2 # Pop the element at the right end (tail) of the list
"value3"
Viewing a range of elements by index and the length of the list
127.0.0.1:6379> rpush myList value1 value2 value3
(integer) 3
127.0.0.1:6379> lrange myList 0 1 # View the elements in the given index range; 0 is start, 1 is stop
1) "value1"
2) "value2"
127.0.0.1:6379> lrange myList 0 -1 # View all elements in the list; -1 means the last element
1) "value1"
2) "value2"
3) "value3"
127.0.0.1:6379> llen myList
(integer) 3
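The comment-pagination scenario can be sketched from application code as well. This is a minimal example assuming the Jedis Java client; the key name is illustrative.

import java.util.List;
import redis.clients.jedis.Jedis;

public class ListExample {
    public static void main(String[] args) throws Exception {
        try (Jedis jedis = new Jedis("localhost", 6379)) {
            String key = "article:1:comments";
            jedis.rpush(key, "first!", "nice post", "thanks for sharing");

            // Page 1: comments at indexes 0..1 (LRANGE start stop, both inclusive)
            List<String> page1 = jedis.lrange(key, 0, 1);
            System.out.println(page1);

            // Used as a queue: RPUSH to produce, LPOP to consume
            String oldest = jedis.lpop(key);
            System.out.println("consumed: " + oldest);
        }
    }
}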
Set:
Introduction
Set is an unordered collection of unique values under a single key. It supports set operations such as union, intersection, and difference.
Usage scenarios
- Mutual follows, mutual fans, shared circles
- Second-degree friends (friends of friends)
- Simulating card dealing / random draws
Basic usage commands
127.0.0.1:6379> sadd mySet value1 value2 # Add elements
(integer) 2
127.0.0.1:6379> sadd mySet value1 # Duplicate elements are not added
(integer) 0
127.0.0.1:6379> smembers mySet # View all elements in the set
1) "value1"
2) "value2"
127.0.0.1:6379> scard mySet # View the size of the set
(integer) 2
127.0.0.1:6379> sismember mySet value1 # Check whether an element is in the set (takes a single member)
(integer) 1
127.0.0.1:6379> sadd mySet2 value2 value3
(integer) 2
127.0.0.1:6379> sinterstore mySet3 mySet mySet2 # Store the intersection of mySet and mySet2 into mySet3
(integer) 1
127.0.0.1:6379> smembers mySet3
1) "value2"
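The "mutual follows" scenario maps directly onto SINTER/SINTERSTORE. A minimal sketch assuming the Jedis Java client; the key names are illustrative.

import java.util.Set;
import redis.clients.jedis.Jedis;

public class SetExample {
    public static void main(String[] args) throws Exception {
        try (Jedis jedis = new Jedis("localhost", 6379)) {
            jedis.sadd("follows:alice", "bob", "carol", "dave");
            jedis.sadd("follows:bob", "carol", "erin");

            // People that both alice and bob follow
            Set<String> mutual = jedis.sinter("follows:alice", "follows:bob");
            System.out.println(mutual);   // [carol]

            // Or persist the intersection for later reads
            jedis.sinterstore("follows:alice:and:bob", "follows:alice", "follows:bob");
        }
    }
}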
ZSet:
Introduction
ZSet (sorted set) is an ordered collection of multiple values under a single key. The order comes from a score (weight) attached to each value; the elements in the set are sorted by that score.
Usage scenarios
- Leaderboards
- Top-10 trending searches
- Top-N queries
Basic usage commands
127.0.0.1:6379> zadd myZset 3.0 value1 # Add an element to the sorted set with score (weight) 3.0
(integer) 1
127.0.0.1:6379> zadd myZset 2.0 value2 1.0 value3 # Add multiple elements at once
(integer) 2
127.0.0.1:6379> zcard myZset # View the number of elements in the sorted set
(integer) 3
127.0.0.1:6379> zscore myZset value1 # View the score of a value
"3"
127.0.0.1:6379> zrange myZset 0 -1 # Output a range of elements in ascending order; 0 to -1 means all elements
1) "value3"
2) "value2"
3) "value1"
127.0.0.1:6379> zrange myZset 0 1 # Output a range of elements in ascending order; 0 is start, 1 is stop
1) "value3"
2) "value2"
127.0.0.1:6379> zrevrange myZset 0 1 # Output a range of elements in descending order; 0 is start, 1 is stop
1) "value1"
2) "value2"
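The leaderboard scenario is essentially ZADD/ZINCRBY plus ZREVRANGE. A minimal sketch assuming the Jedis Java client; the key name, player names, and scores are illustrative.

import redis.clients.jedis.Jedis;

public class LeaderboardExample {
    public static void main(String[] args) throws Exception {
        try (Jedis jedis = new Jedis("localhost", 6379)) {
            String key = "game:leaderboard";
            jedis.zadd(key, 320, "alice");
            jedis.zadd(key, 450, "bob");
            jedis.zincrby(key, 50, "alice");   // alice scores 50 more points

            // Top 10 players, highest score first
            for (String player : jedis.zrevrange(key, 0, 9)) {
                System.out.println(player + " -> " + jedis.zscore(key, player));
            }
        }
    }
}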
Hash:
Introduction
Hash is a mapping table of string fields to values. Internally it is implemented as an array plus linked lists, which makes it convenient to modify a single field of an object without rewriting the whole object.
Usage scenarios
- User information
- Product information
Basic usage commands
127.0.0.1:6379> hset userInfoKey name "guide" description "dev" age "24"
(integer) 3
127.0.0.1:6379> hexists userInfoKey name # Check whether the specified field exists in the hash stored at the key
(integer) 1
127.0.0.1:6379> hget userInfoKey name # Get the value of the specified field in the hash
"guide"
127.0.0.1:6379> hget userInfoKey age
"24"
127.0.0.1:6379> hgetall userInfoKey # Get all fields and values of the hash stored at the key
1) "name"
2) "guide"
3) "description"
4) "dev"
5) "age"
6) "24"
127.0.0.1:6379> hkeys userInfoKey # Get the list of fields
1) "name"
2) "description"
3) "age"
127.0.0.1:6379> hvals userInfoKey # Get the list of values
1) "guide"
2) "dev"
3) "24"
127.0.0.1:6379> hset userInfoKey name "GuideGeGe" # Modify the value of a single field
(integer) 0
127.0.0.1:6379> hget userInfoKey name
"GuideGeGe"
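Storing a user profile as a hash looks like this from application code. A minimal sketch assuming the Jedis Java client; the key name and field values are illustrative.

import java.util.Map;
import redis.clients.jedis.Jedis;

public class HashExample {
    public static void main(String[] args) throws Exception {
        try (Jedis jedis = new Jedis("localhost", 6379)) {
            String key = "user:1001";
            jedis.hset(key, "name", "guide");
            jedis.hset(key, "description", "dev");
            jedis.hset(key, "age", "24");

            // Update a single field without touching the rest of the object
            jedis.hset(key, "name", "GuideGeGe");

            Map<String, String> profile = jedis.hgetAll(key);
            System.out.println(profile);
        }
    }
}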
GEO:
Introduction
GEO stores geographic coordinates supplied by the user and supports queries on them, such as distance and radius searches. Internally it is implemented on top of a sorted set (ZSet).
Usage scenarios
- People nearby
- "Shake" (discover nearby users)
- Calculating the distance between two points
Basic usage commands
127.0.0.1:6379> geoadd china:city 116.40 39.90 beijing
(integer) 1
127.0.0.1:6379> geoadd china:city 121.47 31.23 shanghai
(integer) 1
127.0.0.1:6379> geoadd china:city 106.50 29.53 chongqing 114.05 22.52 shenzhen
(integer) 2
127.0.0.1:6379> geoadd china:city 120.16 30.24 hangzhou
(integer) 1
127.0.0.1:6379> geoadd china:city 108.96 34.26 xian
(integer) 1
127.0.0.1:6379> geopos china:city beijing
1) 1) "116.39999896287918091"
   2) "39.90000009167092543"
127.0.0.1:6379> geodist china:city beijing shanghai
"1067378.7564"
127.0.0.1:6379> geodist china:city beijing shanghai km
"1067.3788"
127.0.0.1:6379> georadius china:city 110 30 1000 km
1) "chongqing"
2) "xian"
3) "shenzhen"
4) "hangzhou"
127.0.0.1:6379> georadius china:city 110 30 500 km
1) "chongqing"
2) "xian"
127.0.0.1:6379> georadius china:city 110 30 500 km withdist withcoord count 1
1) 1) "chongqing"
   2) "341.9374"
   3) 1) "106.49999767541885376"
      2) "29.52999957900659211"
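A "people nearby" style query from application code could look like the following minimal sketch. It assumes the Jedis 3.x Java client (the GeoUnit and GeoRadiusResponse package names moved in Jedis 4.x); the key name and coordinates mirror the transcript above.

import java.util.List;
import redis.clients.jedis.GeoRadiusResponse;
import redis.clients.jedis.GeoUnit;
import redis.clients.jedis.Jedis;

public class GeoExample {
    public static void main(String[] args) throws Exception {
        try (Jedis jedis = new Jedis("localhost", 6379)) {
            String key = "china:city";
            jedis.geoadd(key, 116.40, 39.90, "beijing");
            jedis.geoadd(key, 121.47, 31.23, "shanghai");
            jedis.geoadd(key, 106.50, 29.53, "chongqing");

            // Distance between two stored members, in kilometers
            System.out.println(jedis.geodist(key, "beijing", "shanghai", GeoUnit.KM));

            // Members within 1000 km of the point (110, 30)
            List<GeoRadiusResponse> nearby = jedis.georadius(key, 110, 30, 1000, GeoUnit.KM);
            for (GeoRadiusResponse r : nearby) {
                System.out.println(r.getMemberByString());
            }
        }
    }
}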
HyperLogLog:
Introduction
HyperLogLog is an algorithm for cardinality estimation. Its advantage is that when the number or volume of input elements becomes extremely large, the space needed to compute the cardinality stays fixed and small: in Redis, each HyperLogLog key needs only 12 KB of memory to count the cardinality of nearly 2^64 distinct elements. This is in sharp contrast to a set, whose memory usage grows with the number of elements. HyperLogLog is therefore an approximate (non-exact) deduplicating counting scheme.
Usage scenarios
- Counting page UV (unique visitors: the same user counts only once per day). The traditional approach is to store user ids in a Set and take the Set's size as the page UV. That only works for a small user base; once the number of users grows, storing all the ids costs a lot of memory, even though the goal is merely to count users rather than to store them, which makes it a thankless scheme. Redis's HyperLogLog can count a huge number of users in at most 12 KB. Although it has an error rate of about 0.81%, that is negligible for UV statistics, which do not need to be exact. (A Jedis sketch follows the commands below.)
Basic usage commands
127.0.0.1:6379> PFADD runoobkey "redis"
(integer) 1
127.0.0.1:6379> PFADD runoobkey "mongodb"
(integer) 1
127.0.0.1:6379> PFADD runoobkey "mysql"
(integer) 1
127.0.0.1:6379> PFCOUNT runoobkey
(integer) 3
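Here is the UV scenario as a minimal sketch, assuming the Jedis Java client; the key layout (one HyperLogLog per page per day) and the user ids are illustrative.

import redis.clients.jedis.Jedis;

public class UvExample {
    public static void main(String[] args) throws Exception {
        try (Jedis jedis = new Jedis("localhost", 6379)) {
            String key = "uv:homepage:2024-01-01";   // one HLL key per page per day

            // Record visits; duplicate user ids do not increase the count
            jedis.pfadd(key, "user:1", "user:2", "user:3");
            jedis.pfadd(key, "user:1");   // repeat visit from user:1

            // Approximate number of distinct visitors (about 0.81% error)
            System.out.println("UV ~= " + jedis.pfcount(key));   // 3
        }
    }
}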
BitMap:
Introduction
Bitmap is a bit-array data structure whose values can only be 0 or 1. Many scenarios can use it to save a large amount of space.
Usage scenarios
- Read/unread status
- Check-in (sign-in) tracking
- Counting COVID-19 infections
Basic usage commands
127.0.0.1:6379> setbit sign 0 1
(integer) 0
127.0.0.1:6379> setbit sign 1 0
(integer) 0
127.0.0.1:6379> setbit sign 2 1
(integer) 0
127.0.0.1:6379> setbit sign 3 1
(integer) 0
127.0.0.1:6379> setbit sign 4 0
(integer) 0
127.0.0.1:6379> setbit sign 5 1
(integer) 0
127.0.0.1:6379> setbit sign 6 0
(integer) 0
127.0.0.1:6379> getbit sign 3
(integer) 1
127.0.0.1:6379> bitcount sign
(integer) 4
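The check-in scenario uses one bit per day. A minimal sketch assuming the Jedis Java client; the key layout (one bitmap per user per month) is illustrative.

import redis.clients.jedis.Jedis;

public class SignInExample {
    public static void main(String[] args) throws Exception {
        try (Jedis jedis = new Jedis("localhost", 6379)) {
            String key = "sign:user:1001:2024-01";   // one bitmap per user per month

            jedis.setbit(key, 0, true);   // checked in on day 1 (offset 0)
            jedis.setbit(key, 2, true);   // checked in on day 3
            jedis.setbit(key, 3, true);   // checked in on day 4

            boolean day3 = jedis.getbit(key, 2);   // did the user check in on day 3?
            long total = jedis.bitcount(key);      // total check-ins this month
            System.out.println(day3 + ", " + total);
        }
    }
}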
How to ensure data consistency between Redis and the DB (double-write consistency)
Scheme 1: delayed double deletion
1. Perform a Redis del(key) both before and after writing the database, with a reasonable sleep interval in between.
public void write(String key, Object data) throws InterruptedException {
    redis.delKey(key);     // delete the cache first
    db.updateData(data);   // write the database
    Thread.sleep(500);     // sleep for 500 ms
    redis.delKey(key);     // delete the cache again
}
2. Specific steps:
- Delete the cache first
- Write the database
- Sleep for 500 ms
- Delete the cache again
3. Set a cache expiration time
In theory, setting an expiration time on the cache is what guarantees eventual consistency. All writes go to the database; once a cached entry expires, subsequent reads naturally fetch the new value from the database and backfill the cache (a read-path sketch follows point 4 below).
4. Disadvantages of the scheme
Combining the double deletion policy with a cache TTL, the worst case is that data stays inconsistent within the TTL window, and write requests take longer.
(In extreme cases, such as network jitter, the update call may take a long time. If the sleep interval is not long enough, the second cache deletion happens before the database update has completed; a thread that reads after that deletion then caches the old value, leaving dirty data in the cache.)
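To make point 3 concrete, here is a minimal read-path sketch in the same pseudo-code style as the write() method above. redis.get, db.queryData, and redis.setex are illustrative helpers rather than a specific client API, and the 60 s TTL is an arbitrary example.

public Object read(String key) {
    Object value = redis.get(key);      // try the cache first
    if (value == null) {
        value = db.queryData(key);      // cache miss: read from the database
        redis.setex(key, 60, value);    // backfill with a 60 s TTL as the safety net
    }
    return value;
}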
Scheme 2: serialize requests through a queue
When updating data, route the operation by the data's unique identifier and send it to an in-JVM queue. When reading data, if the value is not found in the cache, send a "read data + update cache" operation, routed by the same unique identifier, to the same in-JVM queue.
Each queue has a dedicated worker thread that takes operations off the queue and executes them one by one. For a data-change operation, the cache is deleted first and then the database is updated. If a read request arrives before the update has finished and misses the cache, it enqueues a cache-refresh request behind the pending update on the same queue and then waits synchronously for the cache refresh to complete.
One optimization: multiple pending cache-refresh requests for the same key in one queue are redundant, so they can be filtered out. If a refresh request for that key is already queued, there is no need to enqueue another one; just wait for the earlier request to finish.
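A minimal skeleton of this scheme is sketched below. It is illustrative, not the article's exact implementation: the Cache and Db interfaces stand in for the redis/db helpers used above, the queue count of 8 is arbitrary, and the duplicate-refresh filtering described above is omitted for brevity. Each single-threaded executor plays the role of one queue plus its worker thread.

import java.util.concurrent.*;

public class SerializedCacheUpdater {
    // Illustrative stand-ins for the cache and DB clients used elsewhere in the article
    interface Cache { Object get(String key); void set(String key, Object value); void delKey(String key); }
    interface Db { void updateData(Object data); Object queryData(String key); }

    private final Cache redis;
    private final Db db;
    private final ExecutorService[] workers = new ExecutorService[8];

    public SerializedCacheUpdater(Cache redis, Db db) {
        this.redis = redis;
        this.db = db;
        for (int i = 0; i < workers.length; i++) {
            // one single-threaded executor == one queue with one worker thread
            workers[i] = Executors.newSingleThreadExecutor();
        }
    }

    // Route operations on the same key to the same queue (same worker thread)
    private ExecutorService queueFor(String key) {
        return workers[Math.abs(key.hashCode() % workers.length)];
    }

    // Write path: delete the cache, then update the database, serialized per key
    public void update(String key, Object data) {
        queueFor(key).execute(() -> {
            redis.delKey(key);
            db.updateData(data);
        });
    }

    // Read path: on a cache miss, enqueue a cache refresh behind any pending
    // update of the same key, then wait synchronously for it to complete
    public Object read(String key) throws Exception {
        Object cached = redis.get(key);
        if (cached != null) {
            return cached;
        }
        Future<Object> refreshed = queueFor(key).submit(() -> {
            Object value = db.queryData(key);
            redis.set(key, value);
            return value;
        });
        return refreshed.get(200, TimeUnit.MILLISECONDS);
    }
}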