Redis – basic summary of the "Crazy God Talks Java" Redis course (end)

2021.6.12 to 2021.6.14: a summary written after studying the Redis basics over the Dragon Boat Festival. The more I learn, the more I realize how much I still don't know.

Reference link: Crazy God Talks Java – Redis course: https://www.bilibili.com/video/BV1S54y1R7SB?p=1




Redis has five basic data types:

  • String

    127.0.0.1:6380> set k1 1
    OK
    127.0.0.1:6380> INCR k1
    2
    127.0.0.1:6380> DECR k1
    1
    127.0.0.1:6380> keys *
    k1
    127.0.0.1:6380> set k2 huyuqiao
    OK
    127.0.0.1:6380> GETRANGE K2 0 3							#key names are case-sensitive: K2 does not exist
    
    127.0.0.1:6380> GETRANGE k2 0 3
    huyu
    127.0.0.1:6380> GETRANGE k2 0 -1
    huyuqiao
    127.0.0.1:6380> SETRANGE k2 1 XX
    8
    127.0.0.1:6380> get k2
    hXXuqiao
    127.0.0.1:6380> SETEX k3 30 "hello, world"				#set with expire: set k3 with a 30-second TTL
    OK
    127.0.0.1:6380> ttl k3
    28
    127.0.0.1:6380> ttl k2
    -1
    127.0.0.1:6380> keys *
    k1
    k2
    127.0.0.1:6380> ttl k3
    -2	
    127.0.0.1:6380> setnx mykey "redis"						#set if not exists: only sets when the key is absent; if it exists, the set fails and the old value is kept
    1
    127.0.0.1:6380> ttl mykey
    -1
    127.0.0.1:6380> setnx mykey "mongodb"
    0
    127.0.0.1:6380> get mykey
    redis
    127.0.0.1:6380> ttl k2									#a key with no expiry returns -1
    -1
    127.0.0.1:6380> ttl k3									#an expired or missing key returns -2
    -2
    127.0.0.1:6380> 
    127.0.0.1:6380> mset user:1:name huyuqiao user:1:age 22
    OK
    127.0.0.1:6380> mget user:1:name user:1:age
    huyuqiao
    22
    127.0.0.1:6380> 
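The SETEX/SETNX semantics and the TTL return conventions above (-1 = no expiry, -2 = expired or missing) can be sketched with a plain Python dict. This is a toy model for illustration, not how Redis is implemented:

```python
import time

class ToyRedis:
    """Toy model of Redis string keys with expiry."""
    def __init__(self):
        self.data = {}       # key -> value
        self.expire_at = {}  # key -> unix timestamp when the key dies

    def _alive(self, key):
        # lazily drop a key whose expiry has passed
        if key in self.expire_at and time.time() >= self.expire_at[key]:
            self.data.pop(key, None)
            self.expire_at.pop(key, None)
        return key in self.data

    def setex(self, key, seconds, value):
        # SETEX: set the value together with a TTL
        self.data[key] = value
        self.expire_at[key] = time.time() + seconds

    def setnx(self, key, value):
        # SETNX: only set when the key is absent; keep the old value otherwise
        if self._alive(key):
            return 0
        self.data[key] = value
        return 1

    def get(self, key):
        return self.data.get(key) if self._alive(key) else None

    def ttl(self, key):
        if not self._alive(key):
            return -2                     # expired or missing
        if key not in self.expire_at:
            return -1                     # no expiry set
        return int(self.expire_at[key] - time.time())
```

For example, `setnx("mykey", "mongodb")` after `setnx("mykey", "redis")` returns 0 and leaves the value as "redis", mirroring the transcript.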
    
    
    
  • List

    127.0.0.1:6380> FLUSHALL
    OK
    127.0.0.1:6380> clear
    127.0.0.1:6380> LPUSH list 1
    1
    127.0.0.1:6380> LPUSH list 2
    2
    127.0.0.1:6380> LPUSH list 3
    3
    127.0.0.1:6380> LRANGE list 0 -1
    3
    2
    1
    127.0.0.1:6380> RPUSH list a
    4
    127.0.0.1:6380> LPOP list
    3
    127.0.0.1:6380> LRANGE 0 -1							#wrong: the key name is missing
    ERR wrong number of arguments for 'lrange' command
    
    127.0.0.1:6380> LRANGE list 0 -1
    2
    1
    a
    127.0.0.1:6380> LINDEX list 0
    2
    127.0.0.1:6380> LLEN list
    3
    127.0.0.1:6380> FLUSHALL
    OK
    127.0.0.1:6380> clear
    127.0.0.1:6380> LPUSH list one 
    1
    127.0.0.1:6380> LPUSH list two
    2
    127.0.0.1:6380> LPUSH list two
    3
    127.0.0.1:6380> LREM list 1 one					#remove 1 occurrence of "one", searching from the head
    1
    127.0.0.1:6380> LRANGE list 0 -1
    two
    two
    127.0.0.1:6380> LREM list 2 one
    0
    127.0.0.1:6380> LREM list 2 two
    2
    127.0.0.1:6380> LRANGE list 0 -1
    
    127.0.0.1:6380> FLUSHALL
    OK
    127.0.0.1:6380> LPUSH list one 
    1
    127.0.0.1:6380> LPUSH list two
    2
    127.0.0.1:6380> LPUSH list three
    3
    127.0.0.1:6380> LPUSH list four
    4
    127.0.0.1:6380> LTRIM list 1 2
    OK
    127.0.0.1:6380> LRANGE list 0 -1
    three
    two
    127.0.0.1:6380> 
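The LREM count semantics (positive count searches from the head, negative from the tail, 0 removes all occurrences) can be modeled on a plain Python list. An illustrative sketch, not Redis internals:

```python
def lrem(lst, count, value):
    """Toy LREM: remove up to |count| occurrences of value in place.
    count > 0: search from the head; count < 0: from the tail;
    count == 0: remove all. Returns the number removed."""
    if count == 0:
        removed = lst.count(value)
        lst[:] = [v for v in lst if v != value]
        return removed
    removed = 0
    indices = range(len(lst)) if count > 0 else range(len(lst) - 1, -1, -1)
    to_delete = []
    for i in indices:
        if lst[i] == value:
            to_delete.append(i)
            removed += 1
            if removed == abs(count):
                break
    for i in sorted(to_delete, reverse=True):  # delete from the back to keep indices valid
        del lst[i]
    return removed
```

Replaying the transcript: starting from ["two", "two", "one"], `lrem(lst, 1, "one")` removes 1 element and `lrem(lst, 2, "two")` removes both remaining ones.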
    
    
    
  • Set

    127.0.0.1:6380> FLUSHALL
    OK
    127.0.0.1:6380> SADD myset "hello"
    1
    127.0.0.1:6380> sadd myset "world"
    1
    127.0.0.1:6380> sadd myset "huyuqiao"
    1
    127.0.0.1:6380> smembers myset 
    hello
    huyuqiao
    world
    127.0.0.1:6380> SISMEMBER myset huyuqiao
    1
    127.0.0.1:6380> sadd myset "HYQ"
    1
    127.0.0.1:6380> SMEMBERS myset
    hello
    huyuqiao
    HYQ
    world
    127.0.0.1:6380> SREM myset hello
    1
    127.0.0.1:6380> scard myset 
    3
    127.0.0.1:6380> SMEMBERS myset 
    huyuqiao
    HYQ
    world
    127.0.0.1:6380> SRANDMEMBER myset 
    world
    127.0.0.1:6380> SRANDMEMBER myset 
    huyuqiao
    127.0.0.1:6380> 
    
  • Hash

    127.0.0.1:6380> FLUSHALL
    OK
    127.0.0.1:6380> clear
    127.0.0.1:6380> hset myhash field1 huyuqiao
    1
    127.0.0.1:6380> hmset myhash field1 hello field2 world
    OK
    127.0.0.1:6380> hmget myhash field1 field2
    hello
    world
    127.0.0.1:6380> hgetall myhash
    field1
    hello
    field2
    world
    127.0.0.1:6380> hlen myhash
    2
    127.0.0.1:6380> HEXISTS myhash field1
    1
    
    127.0.0.1:6380> HKEYS myhash
    field1
    field2
    127.0.0.1:6380> HVALS myhash
    hello
    world
    127.0.0.1:6380> HSETNX myhash field4 hello			#set the field only if it does not already exist
    1
    127.0.0.1:6380> HGETALL myhash
    field1
    hello
    field2
    world
    field4
    hello
    127.0.0.1:6380> 
    
  • Zset

    127.0.0.1:6380> zadd salary 100 huyuqiao
    1
    127.0.0.1:6380> zadd salary 200 HUYUQIAO
    1
    127.0.0.1:6380> zadd salary 300 HYQ
    1
    127.0.0.1:6380> ZRANGEBYSCORE salary -inf +inf
    huyuqiao
    HUYUQIAO
    HYQ
    127.0.0.1:6380> zrange salary 0 -1
    huyuqiao
    HUYUQIAO
    HYQ
    127.0.0.1:6380> zrem salary huyuqiao
    1
    127.0.0.1:6380> ZRANGEBYSCORE salary -inf +inf
    HUYUQIAO
    HYQ
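Since a Zset orders its members by score, ZRANGEBYSCORE can be sketched over a plain member-to-score dict (illustrative only; Redis uses a skiplist plus hash internally):

```python
def zrangebyscore(zset, min_score, max_score):
    """Toy ZRANGEBYSCORE: members whose score lies in [min_score, max_score],
    in ascending score order (ties broken by member name, as Redis does)."""
    hits = [(score, member) for member, score in zset.items()
            if min_score <= score <= max_score]
    return [member for score, member in sorted(hits)]
```

With the salary data from the transcript, querying from -inf to +inf lists members from the lowest to the highest score.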
    
    



Redis has three special data types:

1.Geospatial

Applicable scenarios: distance to nearby people or friends, and positioning for ride-hailing services.

# Add positions: GEOADD key longitude latitude member
127.0.0.1:6379> GEOADD china:city 116.41667 39.91667 Beijing
(integer) 1
127.0.0.1:6379> GEOADD china:city 121.43333 31.23000 Shanghai
(integer) 1
127.0.0.1:6379> GEOADD china:city 106.45000 29.56667 Chongqing
(integer) 1
127.0.0.1:6379> GEOADD china:city 114.06667 22.61667 Shenzhen
(integer) 1
127.0.0.1:6379> GEOADD china:city 120.20000 30.26667 Hangzhou
(integer) 1
127.0.0.1:6379> GEOADD china:city 108.95000 34.26667 Xi'an
(integer) 1

# View the latitude and longitude of different positions
127.0.0.1:6379> GEOPOS china:city Beijing Xi'an

#View the distance between two positions (the default unit is meters)
127.0.0.1:6379> GEODIST china:city Beijing Shanghai
"1066981.1340"
127.0.0.1:6379> GEODIST china:city Beijing Shanghai km
"1066.9811"
127.0.0.1:6379> GEODIST china:city Beijing Chongqing km
"1465.8918"
127.0.0.1:6379> 
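The GEODIST result can be sanity-checked with the haversine great-circle formula. This is an independent check, not Redis's code; Redis uses its own Earth radius constant, so the numbers differ slightly:

```python
import math

def haversine_km(lon1, lat1, lon2, lat2, radius_km=6371.0):
    """Great-circle distance between two (longitude, latitude) points in km."""
    lon1, lat1, lon2, lat2 = map(math.radians, (lon1, lat1, lon2, lat2))
    a = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * radius_km * math.asin(math.sqrt(a))

# Beijing and Shanghai, with the coordinates used in GEOADD above
beijing_shanghai = haversine_km(116.41667, 39.91667, 121.43333, 31.23000)
```

The result lands within a few km of the 1066.98 km reported by GEODIST.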

# Find members within a given radius of a longitude/latitude (e.g. nearby people within a given range)
127.0.0.1:6379> GEORADIUS china:city 110 30 1000 km
 Chongqing
 Xi'an
 Shenzhen
 Hangzhou
127.0.0.1:6379> GEORADIUS china:city 110 30 500 km
 Chongqing
 Xi'an
127.0.0.1:6379> GEORADIUS china:city 110 30 500 km withdist	#also return each city's distance from the center (km here)
 Chongqing
346.0548
 Xi'an
484.7511
127.0.0.1:6379> GEORADIUS china:city 110 30 500 km withcoord #also return each city's longitude and latitude
 Chongqing
106.4500012993812561
29.56666939001875249
 Xi'an
108.95000249147415161
34.2666710302806834
127.0.0.1:6379> GEORADIUS china:city 110 30 500 km count 1 #limit the result to 1 city
 Chongqing
127.0.0.1:6379> GEORADIUS china:city 110 30 500 km count 3 #limit the result to 3 cities (only 2 match)
 Chongqing
 Xi'an
127.0.0.1:6379> 


#Find the cities around a given member (use a stored city as the search center)
127.0.0.1:6379> GEORADIUSBYMEMBER china:city Beijing 1000 km
 Beijing
 Xi'an
127.0.0.1:6379> GEORADIUSBYMEMBER china:city Shanghai 1000 km
 Hangzhou
 Shanghai
127.0.0.1:6379> 


#Convert a city's two-dimensional longitude/latitude into an 11-character geohash string
127.0.0.1:6379> GEOHASH china:city Beijing Chongqing
wx4g14s53n0
wm78nq6w2f0
127.0.0.1:6379> 

#The bottom layer of GEO is zset, which can be operated by zset command
127.0.0.1:6379> zrange china:city 0 -1		#View all cities
 Chongqing
 Xi'an
 Shenzhen
 Hangzhou
 Shanghai
 Beijing
127.0.0.1:6379> zrem china:city Beijing			#Delete the city of Beijing

2.HyperLogLog

Applicable scenario: counting website UV (unique visitors). Traditionally a Set is used, but storing a huge number of user IDs consumes too much memory. If you only need the count and can tolerate the standard error (0.81%), HyperLogLog is a good fit; otherwise, stick with a Set.

Cardinality: the number of distinct elements in a collection. For example, {1, 3, 5, 5, 7} deduplicates to {1, 3, 5, 7}, so its cardinality is 4.


127.0.0.1:6379> clear
127.0.0.1:6379> PFADD mykey a b c d e f g h i j				#add elements to the mykey HyperLogLog
(integer) 1
127.0.0.1:6379> PFCOUNT mykey								#Count the cardinality of mykey set
(integer) 10
127.0.0.1:6379> PFADD mykey2 i j z x c v b n m
(integer) 1
127.0.0.1:6379> PFCOUNT mykey2
(integer) 9
127.0.0.1:6379> PFMERGE mykey3 mykey mykey2					#merge: the union of the two
OK
127.0.0.1:6379> PFCOUNT mykey3
(integer) 15
127.0.0.1:6379>
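On inputs this small, the HyperLogLog counts above happen to be exact, so plain Python sets reproduce them. The point of PFMERGE is that it approximates exactly this union while each HyperLogLog key stays within about 12 KB, regardless of how many elements were added:

```python
# Exact cardinalities with plain sets, matching the PFCOUNT results above
mykey = set("abcdefghij")    # PFADD mykey a b c d e f g h i j
mykey2 = set("ijzxcvbnm")    # PFADD mykey2 i j z x c v b n m
mykey3 = mykey | mykey2      # PFMERGE mykey3 mykey mykey2 (approximate union)
```

With a Set the memory cost grows with the number of distinct elements; with HyperLogLog it does not, at the price of the ~0.81% standard error.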
3.Bitmaps

Applicable scenarios: any statistic with only two states (0 or 1), such as active vs. inactive users, logged in vs. not logged in, or daily sign-ins.


[root@VM-8-11-centos ~]# redis-cli -a root --raw
127.0.0.1:6379> setbit sign 0 1			#record whether the user signed in on day 0 (bit offset = day, bit value = signed in or not)
(integer) 0
127.0.0.1:6379> SETBIT sign 1 0
(integer) 0
127.0.0.1:6379> SETBIT sign 2 0
(integer) 0
127.0.0.1:6379> SETBIT sign 3 1
(integer) 0
127.0.0.1:6379> SETBIT sign 4 1
(integer) 0
127.0.0.1:6379> SETBIT sign 5 0
(integer) 0
127.0.0.1:6379> SETBIT sign 6 0
(integer) 0
127.0.0.1:6379> GETBIT sign 3			#check whether the user signed in on day 3
(integer) 1
127.0.0.1:6379> GETBIT sign 6
(integer) 0

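The sign-in bitmap above can be modeled with a Python int used as a bit array. A sketch of the semantics only; Redis additionally offers BITCOUNT to total the set bits:

```python
class ToyBitmap:
    """Toy model of SETBIT/GETBIT/BITCOUNT backed by a Python int."""
    def __init__(self):
        self.bits = 0

    def setbit(self, offset, value):
        old = (self.bits >> offset) & 1      # SETBIT returns the previous bit
        if value:
            self.bits |= (1 << offset)
        else:
            self.bits &= ~(1 << offset)
        return old

    def getbit(self, offset):
        return (self.bits >> offset) & 1

    def bitcount(self):
        return bin(self.bits).count("1")     # number of 1 bits, i.e. sign-in days
```

Replaying the week from the transcript (days 0, 3 and 4 signed in) gives a bitcount of 3.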

Redis transaction:

Redis transactions: there is no concept of isolation levels, and a transaction does not guarantee atomicity (a transaction contains multiple commands, and some may succeed while others fail). A single command, however, is atomic.

1.Redis transaction process:
  • Open transaction (multi)
  • Order to join the team
  • Execute transaction (exec)
#Start a transaction with MULTI, queue commands, then execute with EXEC
127.0.0.1:6379> multi 
OK
127.0.0.1:6379> set k1 v1
QUEUED
127.0.0.1:6379> set k2 v2
QUEUED
127.0.0.1:6379> get k2
QUEUED
127.0.0.1:6379> set k3 v3
QUEUED
127.0.0.1:6379> exec
OK
OK
v2
OK
127.0.0.1:6379>

#Discard a transaction: all queued commands are dropped without being executed
127.0.0.1:6379> MULTI
OK
127.0.0.1:6379> set k1 v1
QUEUED
127.0.0.1:6379> set k2 asdfasdf
QUEUED
127.0.0.1:6379> DISCARD
OK
127.0.0.1:6379> EXEC
ERR EXEC without MULTI

127.0.0.1:6379> get k2
v2

2.Redis transactions are not atomic

Transactions are non-atomic because two kinds of errors are handled differently:

  • Command (queue-time) error, like a checked/compile-time exception: the command itself is unknown or malformed. The error is reported when the command is queued, and the whole transaction is discarded at EXEC.

    127.0.0.1:6379> MULTI
    OK
    127.0.0.1:6379> set k1 v1
    QUEUED
    127.0.0.1:6379> set k2 v2
    QUEUED
    127.0.0.1:6379> gasdfa k3				#unknown command: rejected at queue time, and the whole transaction will be aborted at EXEC
    ERR unknown command 'gasdfa'
    
    127.0.0.1:6379> set k4 v4
    QUEUED
    127.0.0.1:6379> EXEC
    EXECABORT Transaction discarded because of previous errors.
    
    127.0.0.1:6379> get k4
    
    127.0.0.1:6379> get k1
    
    
  • Runtime error, like an unchecked exception: the command is valid but fails during execution (e.g. INCR on a non-numeric string). Only the failing command errors; the other queued commands still execute (no rollback).

    127.0.0.1:6379> set k1 "v1"
    OK
    127.0.0.1:6379> multi
    OK
    127.0.0.1:6379> incr k1
    QUEUED
    127.0.0.1:6379> set k2 v2
    QUEUED
    127.0.0.1:6379> get k2
    QUEUED
    127.0.0.1:6379> EXEC
    ERR value is not an integer or out of range
    
    OK
    v2									#the earlier error did not stop the later commands, which shows that Redis transactions are not atomic
    127.0.0.1:6379> 
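The "no rollback on runtime errors" behavior can be sketched as a command queue that keeps executing past a failure. A toy model supporting only set/get/incr, purely for illustration:

```python
def exec_transaction(store, queued):
    """Toy EXEC: run queued commands in order. A command that fails at
    runtime reports an error but does not roll back the others."""
    results = []
    for cmd, *args in queued:
        try:
            if cmd == "set":
                store[args[0]] = args[1]
                results.append("OK")
            elif cmd == "get":
                results.append(store.get(args[0]))
            elif cmd == "incr":
                store[args[0]] = int(store.get(args[0], 0)) + 1
                results.append(store[args[0]])
        except ValueError:
            results.append("ERR value is not an integer or out of range")
    return results
```

Replaying the transcript: INCR on the string "v1" fails, yet the following SET and GET still run, which is exactly why Redis transactions are not atomic.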
    



Redis optimistic lock:

Optimistic lock: no lock is taken. Only when the data is updated is the version checked; in a Redis transaction, if the watched data has been modified, the transaction fails.

Pessimistic lock: locks no matter what; inefficient but safe.

In Redis transactions, watch implements optimistic locking: if the watched key is modified in the meantime (its version changes), the whole transaction fails.

127.0.0.1:6379> set money 100
OK
127.0.0.1:6379> set out 0
OK
127.0.0.1:6379> watch money				#Redis optimistic lock: watch monitor money
OK
127.0.0.1:6379> MULTI 					#before EXEC runs, another window changes money to 101
OK
127.0.0.1:6379> DECRBY money 10
QUEUED
127.0.0.1:6379> incrby out 10
QUEUED
127.0.0.1:6379> EXEC					#returns nil: the watched key changed, so the queued commands were not executed
127.0.0.1:6379> get money
101
127.0.0.1:6379> 
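WATCH behaves like a check-and-set (CAS): EXEC commits only if the watched key is unchanged. A toy version-number sketch (the helper below is hypothetical, not Redis's API):

```python
def watched_transfer(store, versions, watched_version, key, delta):
    """Toy optimistic lock: commit only if the watched key's version is
    unchanged since WATCH; otherwise EXEC returns None (nil)."""
    if versions[key] != watched_version:
        return None            # another client modified the key: abort
    store[key] += delta        # commit
    versions[key] += 1
    return store[key]
```

Replaying the scenario above: after another window bumps money to 101, the transaction aborts and money stays at 101, just like the nil EXEC in the transcript.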



Spring Boot – Redis configuration source code

Jedis: direct connection; a shared instance is not thread-safe, similar to the BIO model (no longer the Spring Boot default since 2.x)

Lettuce: built on Netty; connections can be shared safely across threads, similar to the NIO model (the default since Spring Boot 2.x)

The Redis-related auto-configuration can be found via spring.factories (RedisAutoConfiguration):

@Configuration(proxyBeanMethods = false)
@ConditionalOnClass(RedisOperations.class)
@EnableConfigurationProperties(RedisProperties.class)
@Import({ LettuceConnectionConfiguration.class, JedisConnectionConfiguration.class })
public class RedisAutoConfiguration {

	@Bean
	@ConditionalOnMissingBean(name = "redisTemplate")
	public RedisTemplate<Object, Object> redisTemplate(RedisConnectionFactory redisConnectionFactory)
			throws UnknownHostException {
		RedisTemplate<Object, Object> template = new RedisTemplate<>();
		template.setConnectionFactory(redisConnectionFactory);
		return template;
	}

	@Bean
	@ConditionalOnMissingBean
	public StringRedisTemplate stringRedisTemplate(RedisConnectionFactory redisConnectionFactory)
			throws UnknownHostException {
		StringRedisTemplate template = new StringRedisTemplate();
		template.setConnectionFactory(redisConnectionFactory);
		return template;
	}

}



Spring Boot – customize RedisTemplate and RedisUtil

1.RedisTemplate serialization configuration
package com.empirefree.springboot.config;

import com.fasterxml.jackson.annotation.JsonAutoDetect;
import com.fasterxml.jackson.annotation.JsonTypeInfo;
import com.fasterxml.jackson.annotation.PropertyAccessor;
import com.fasterxml.jackson.databind.ObjectMapper;
import com.fasterxml.jackson.databind.jsontype.impl.LaissezFaireSubTypeValidator;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.redis.connection.RedisConnectionFactory;
import org.springframework.data.redis.core.RedisTemplate;
import org.springframework.data.redis.serializer.Jackson2JsonRedisSerializer;
import org.springframework.data.redis.serializer.StringRedisSerializer;

/**
 * @program: springboot
 * @description: RedisTemplate to configure
 * @author: huyuqiao
 * @create: 2021/06/13 16:05
 */

@Configuration
public class RedisConfig {
    @Bean
    @SuppressWarnings("all")
    public RedisTemplate<String, Object> redisTemplate(RedisConnectionFactory factory) {
        RedisTemplate<String, Object> template = new RedisTemplate<String, Object>();
        template.setConnectionFactory(factory);
        // Json serialization configuration
        Jackson2JsonRedisSerializer jackson2JsonRedisSerializer = new Jackson2JsonRedisSerializer(Object.class);
        ObjectMapper om = new ObjectMapper();
        om.setVisibility(PropertyAccessor.ALL, JsonAutoDetect.Visibility.ANY);
        om.activateDefaultTyping(LaissezFaireSubTypeValidator.instance, ObjectMapper.DefaultTyping.NON_FINAL,
                JsonTypeInfo.As.WRAPPER_ARRAY);
        jackson2JsonRedisSerializer.setObjectMapper(om);
        // Serialization of String
        StringRedisSerializer stringRedisSerializer = new StringRedisSerializer();
        // The key adopts the serialization method of String
        template.setKeySerializer(stringRedisSerializer);
        // The key of hash is also serialized by String
        template.setHashKeySerializer(stringRedisSerializer);
        // value serialization adopts jackson
        template.setValueSerializer(jackson2JsonRedisSerializer);
        // The value serialization method of hash adopts jackson
        template.setHashValueSerializer(jackson2JsonRedisSerializer);
        template.afterPropertiesSet();
        return template;
    }
}

2.RedisUtil utility class (CRUD operations for String, Map, List and Set)
package com.empirefree.springboot.utils;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.data.redis.core.RedisTemplate;
import org.springframework.stereotype.Component;
import org.springframework.util.CollectionUtils;

import java.util.Collection;
import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.concurrent.TimeUnit;

/**
 * @program: springboot
 * @description: Redis Tool class
 * @author: huyuqiao
 * @create: 2021/06/13 16:14
 */

@Component
public final class RedisUtil {

    @Autowired
    private RedisTemplate<String, Object> redisTemplate;

    // =============================common============================
    /**
     * Specify cache expiration time
     * @param key  key
     * @param time Time (seconds)
     */
    public boolean expire(String key, long time) {
        try {
            if (time > 0) {
                redisTemplate.expire(key, time, TimeUnit.SECONDS);
            }
            return true;
        } catch (Exception e) {
            e.printStackTrace();
            return false;
        }
    }

    /**
     * Get expiration time according to key
     * @param key Key cannot be null
     * @return Time (seconds); -1 means the key never expires
     */
    public long getExpire(String key) {
        return redisTemplate.getExpire(key, TimeUnit.SECONDS);
    }


    /**
     * Determine whether the key exists
     * @param key key
     * @return true Exists false does not exist
     */
    public boolean hasKey(String key) {
        try {
            return redisTemplate.hasKey(key);
        } catch (Exception e) {
            e.printStackTrace();
            return false;
        }
    }


    /**
     * Delete cache
     * @param key One or more values can be passed
     */
    @SuppressWarnings("unchecked")
    public void del(String... key) {
        if (key != null && key.length > 0) {
            if (key.length == 1) {
                redisTemplate.delete(key[0]);
            } else {
                redisTemplate.delete((Collection<String>) CollectionUtils.arrayToList(key));
            }
        }
    }


    // ============================String=============================

    /**
     * Normal cache fetch
     * @param key key
     * @return value
     */
    public Object get(String key) {
        return key == null ? null : redisTemplate.opsForValue().get(key);
    }

    /**
     * Normal cache put
     * @param key   key
     * @param value value
     * @return true Success false failure
     */

    public boolean set(String key, Object value) {
        try {
            redisTemplate.opsForValue().set(key, value);
            return true;
        } catch (Exception e) {
            e.printStackTrace();
            return false;
        }
    }


    /**
     * Normal cache put in and set time
     * @param key   key
     * @param value value
     * @param time  Time (seconds) time should be greater than 0. If time is less than or equal to 0, the indefinite period will be set
     * @return true Success false failure
     */

    public boolean set(String key, Object value, long time) {
        try {
            if (time > 0) {
                redisTemplate.opsForValue().set(key, value, time, TimeUnit.SECONDS);
            } else {
                set(key, value);
            }
            return true;
        } catch (Exception e) {
            e.printStackTrace();
            return false;
        }
    }


    /**
     * Increasing
     * @param key   key
     * @param delta How many to add (greater than 0)
     */
    public long incr(String key, long delta) {
        if (delta < 0) {
            throw new RuntimeException("The increment factor must be greater than 0");
        }
        return redisTemplate.opsForValue().increment(key, delta);
    }


    /**
     * Diminishing
     * @param key   key
     * @param delta How many to subtract (greater than 0)
     */
    public long decr(String key, long delta) {
        if (delta < 0) {
            throw new RuntimeException("Decrement factor must be greater than 0");
        }
        return redisTemplate.opsForValue().increment(key, -delta);
    }


    // ================================Map=================================

    /**
     * HashGet
     * @param key  Key cannot be null
     * @param item Item cannot be null
     */
    public Object hget(String key, String item) {
        return redisTemplate.opsForHash().get(key, item);
    }

    /**
     * Get all the corresponding hashKey values
     * @param key key
     * @return Corresponding multiple key values
     */
    public Map<Object, Object> hmget(String key) {
        return redisTemplate.opsForHash().entries(key);
    }

    /**
     * HashSet
     * @param key key
     * @param map Corresponding to multiple key values
     */
    public boolean hmset(String key, Map<String, Object> map) {
        try {
            redisTemplate.opsForHash().putAll(key, map);
            return true;
        } catch (Exception e) {
            e.printStackTrace();
            return false;
        }
    }


    /**
     * HashSet And set the time
     * @param key  key
     * @param map  Corresponding to multiple key values
     * @param time Time (seconds)
     * @return true Success false failure
     */
    public boolean hmset(String key, Map<String, Object> map, long time) {
        try {
            redisTemplate.opsForHash().putAll(key, map);
            if (time > 0) {
                expire(key, time);
            }
            return true;
        } catch (Exception e) {
            e.printStackTrace();
            return false;
        }
    }


    /**
     * Put data into a hash table. If it does not exist, it will be created
     *
     * @param key   key
     * @param item  term
     * @param value value
     * @return true Success false failure
     */
    public boolean hset(String key, String item, Object value) {
        try {
            redisTemplate.opsForHash().put(key, item, value);
            return true;
        } catch (Exception e) {
            e.printStackTrace();
            return false;
        }
    }

    /**
     * Put data into a hash table. If it does not exist, it will be created
     *
     * @param key   key
     * @param item  term
     * @param value value
     * @param time  Time (seconds): Note: if the existing hash table has time, the original time will be replaced here
     * @return true Success false failure
     */
    public boolean hset(String key, String item, Object value, long time) {
        try {
            redisTemplate.opsForHash().put(key, item, value);
            if (time > 0) {
                expire(key, time);
            }
            return true;
        } catch (Exception e) {
            e.printStackTrace();
            return false;
        }
    }


    /**
     * Delete values in hash table
     *
     * @param key  Key cannot be null
     * @param item Item can be one or more; cannot be null
     */
    public void hdel(String key, Object... item) {
        redisTemplate.opsForHash().delete(key, item);
    }


    /**
     * Judge whether there is the value of this item in the hash table
     *
     * @param key  Key cannot be null
     * @param item Item cannot be null
     * @return true Exists false does not exist
     */
    public boolean hHasKey(String key, String item) {
        return redisTemplate.opsForHash().hasKey(key, item);
    }


    /**
     * hash If increment does not exist, it will create one and return the added value
     *
     * @param key  key
     * @param item term
     * @param by   How many to add (greater than 0)
     */
    public double hincr(String key, String item, double by) {
        return redisTemplate.opsForHash().increment(key, item, by);
    }


    /**
     * hash Diminishing
     *
     * @param key  key
     * @param item term
     * @param by   How many to subtract (greater than 0)
     */
    public double hdecr(String key, String item, double by) {
        return redisTemplate.opsForHash().increment(key, item, -by);
    }


    // ============================set=============================

    /**
     * Get all the values in the Set according to the key
     * @param key key
     */
    public Set<Object> sGet(String key) {
        try {
            return redisTemplate.opsForSet().members(key);
        } catch (Exception e) {
            e.printStackTrace();
            return null;
        }
    }


    /**
     * Query from a set according to value whether it exists
     *
     * @param key   key
     * @param value value
     * @return true Exists false does not exist
     */
    public boolean sHasKey(String key, Object value) {
        try {
            return redisTemplate.opsForSet().isMember(key, value);
        } catch (Exception e) {
            e.printStackTrace();
            return false;
        }
    }


    /**
     * Put data into set cache
     *
     * @param key    key
     * @param values Values can be multiple
     * @return Number of successes
     */
    public long sSet(String key, Object... values) {
        try {
            return redisTemplate.opsForSet().add(key, values);
        } catch (Exception e) {
            e.printStackTrace();
            return 0;
        }
    }


    /**
     * Put set data into cache
     *
     * @param key    key
     * @param time   Time (seconds)
     * @param values Values can be multiple
     * @return Number of successes
     */
    public long sSetAndTime(String key, long time, Object... values) {
        try {
            Long count = redisTemplate.opsForSet().add(key, values);
            if (time > 0)
                expire(key, time);
            return count;
        } catch (Exception e) {
            e.printStackTrace();
            return 0;
        }
    }


    /**
     * Gets the length of the set cache
     *
     * @param key key
     */
    public long sGetSetSize(String key) {
        try {
            return redisTemplate.opsForSet().size(key);
        } catch (Exception e) {
            e.printStackTrace();
            return 0;
        }
    }


    /**
     * Remove with value
     *
     * @param key    key
     * @param values Values can be multiple
     * @return Number of removed
     */

    public long setRemove(String key, Object... values) {
        try {
            Long count = redisTemplate.opsForSet().remove(key, values);
            return count;
        } catch (Exception e) {
            e.printStackTrace();
            return 0;
        }
    }

    // ===============================list=================================

    /**
     * Get the contents of the list cache
     *
     * @param key   key
     * @param start start
     * @param end   End 0 to - 1 represent all values
     */
    public List<Object> lGet(String key, long start, long end) {
        try {
            return redisTemplate.opsForList().range(key, start, end);
        } catch (Exception e) {
            e.printStackTrace();
            return null;
        }
    }


    /**
     * Gets the length of the list cache
     *
     * @param key key
     */
    public long lGetListSize(String key) {
        try {
            return redisTemplate.opsForList().size(key);
        } catch (Exception e) {
            e.printStackTrace();
            return 0;
        }
    }


    /**
     * Get the value in the list through the index
     *
     * @param key   key
     * @param index When index index > = 0, 0 header, 1 second element, and so on; When index < 0, - 1, footer, - 2, the penultimate element, and so on
     */
    public Object lGetIndex(String key, long index) {
        try {
            return redisTemplate.opsForList().index(key, index);
        } catch (Exception e) {
            e.printStackTrace();
            return null;
        }
    }


    /**
     * Put the list into the cache
     *
     * @param key   key
     * @param value value
     */
    public boolean lSet(String key, Object value) {
        try {
            redisTemplate.opsForList().rightPush(key, value);
            return true;
        } catch (Exception e) {
            e.printStackTrace();
            return false;
        }
    }


    /**
     * Put the list into the cache
     * @param key   key
     * @param value value
     * @param time  Time (seconds)
     */
    public boolean lSet(String key, Object value, long time) {
        try {
            redisTemplate.opsForList().rightPush(key, value);
            if (time > 0)
                expire(key, time);
            return true;
        } catch (Exception e) {
            e.printStackTrace();
            return false;
        }

    }


    /**
     * Put the list into the cache
     *
     * @param key   key
     * @param value value
     * @return
     */
    public boolean lSet(String key, List<Object> value) {
        try {
            redisTemplate.opsForList().rightPushAll(key, value);
            return true;
        } catch (Exception e) {
            e.printStackTrace();
            return false;
        }

    }


    /**
     * Put the list into the cache
     *
     * @param key   key
     * @param value value
     * @param time  Time (seconds)
     * @return
     */
    public boolean lSet(String key, List<Object> value, long time) {
        try {
            redisTemplate.opsForList().rightPushAll(key, value);
            if (time > 0)
                expire(key, time);
            return true;
        } catch (Exception e) {
            e.printStackTrace();
            return false;
        }
    }


    /**
     * Modify a piece of data in the list according to the index
     *
     * @param key   key
     * @param index Indexes
     * @param value value
     * @return
     */

    public boolean lUpdateIndex(String key, long index, Object value) {
        try {
            redisTemplate.opsForList().set(key, index, value);
            return true;
        } catch (Exception e) {
            e.printStackTrace();
            return false;
        }
    }


    /**
     * Remove N values as value
     *
     * @param key   key
     * @param count How many to remove
     * @param value value
     * @return Number of removed
     */

    public long lRemove(String key, long count, Object value) {
        try {
            Long remove = redisTemplate.opsForList().remove(key, count, value);
            return remove;
        } catch (Exception e) {
            e.printStackTrace();
            return 0;
        }

    }

}



Redis – persistence

Persistence: write the in-memory data to disk at specified intervals so that, after a power failure or restart, the data can be recovered by reading the snapshot file back into memory.

1.Redis – RDB (recommended by default)
  • Saving process: the parent process forks a child process, which persists the data into a temporary file; when persistence finishes, the temporary file replaces the previous RDB file.

  • Trigger conditions:

    • Save rule: with the config directive save 900 1, an RDB snapshot is triggered if at least 1 key changes within 900 seconds (15 minutes).
    • Execute the FlushAll command
    • Exiting Redis will generate RDB files
  • Applicable scenario: suitable for large-scale data recovery and insensitive to data integrity.

2.Redis - AOF (on restart, AOF is loaded first by default, because its data is more complete)
  • Save process: every write command is appended to the log file (read commands are not recorded); when rewriting, the parent process forks a child that compacts the commands into a new AOF file, which then replaces the old one

  • Trigger condition: appendfsync always/everysec/no command

  • Applicable scenario: strict requirements on the integrity of recovered data

  • Rewrite scenario: if the file is continuously appended to a threshold, the aof file will be rewritten
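The rewrite threshold can also be sketched as a predicate. This is a simplified model of the `auto-aof-rewrite-percentage` / `auto-aof-rewrite-min-size` settings (commonly defaulting to 100 and 64mb); the method name `shouldRewrite` is hypothetical:

```java
// Sketch of the AOF auto-rewrite condition: rewrite only when the file has
// passed the minimum size AND has grown by at least the configured
// percentage since the last rewrite.
public class AofRewrite {
    static boolean shouldRewrite(long currentSize, long sizeAfterLastRewrite,
                                 long minSize, long growthPercent) {
        if (currentSize < minSize) return false;           // too small to bother
        long base = sizeAfterLastRewrite == 0 ? 1 : sizeAfterLastRewrite;
        long growth = (currentSize - sizeAfterLastRewrite) * 100 / base;
        return growth >= growthPercent;
    }

    public static void main(String[] args) {
        long mb = 1024 * 1024;
        // 130MB file that was 64MB after the last rewrite: ~103% growth
        System.out.println(shouldRewrite(130 * mb, 64 * mb, 64 * mb, 100));
        // 32MB file: below the 64MB minimum, no rewrite
        System.out.println(shouldRewrite(32 * mb, 16 * mb, 64 * mb, 100));
    }
}
```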



Redis – publish and subscribe

Publish and subscribe: can be used for message pushing, chat rooms, and similar features.

#Publisher: Send a message to a channel of Redis, and all subscribers can receive it
127.0.0.1:6379> PUBLISH huyuqiao "hello,world"
1


#Subscriber: subscribe to a channel in Redis
127.0.0.1:6379> SUBSCRIBE huyuqiao
subscribe
huyuqiao
1
message
huyuqiao
hello,world
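The PUBLISH/SUBSCRIBE exchange above follows the classic pub/sub pattern. As a minimal in-memory sketch (the class `MiniPubSub` is illustrative, not a Redis client): SUBSCRIBE registers a listener on a channel, and PUBLISH delivers the message to every current subscriber, returning how many received it, just as redis-cli prints "1" after PUBLISH.

```java
import java.util.*;
import java.util.function.Consumer;

// Minimal in-memory publish/subscribe sketch: a map from channel name to
// the listeners currently subscribed to it.
public class MiniPubSub {
    private final Map<String, List<Consumer<String>>> channels = new HashMap<>();

    public void subscribe(String channel, Consumer<String> listener) {
        channels.computeIfAbsent(channel, k -> new ArrayList<>()).add(listener);
    }

    /** Delivers message to all subscribers; returns how many received it. */
    public int publish(String channel, String message) {
        List<Consumer<String>> subs = channels.getOrDefault(channel, List.of());
        subs.forEach(l -> l.accept(message));
        return subs.size();
    }

    public static void main(String[] args) {
        MiniPubSub bus = new MiniPubSub();
        bus.subscribe("huyuqiao", msg -> System.out.println("received: " + msg));
        System.out.println(bus.publish("huyuqiao", "hello,world"));
    }
}
```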



Redis – master-slave replication (master writes, slaves read)

1.Linux configuration file
  • Change daemonize from no to yes
  • Change port, pidfile, logfile and dbfilename to match the new redis instance
  • If the master has a password, configure it (masterauth) on the slave, or remove the password on the master
#When the master is disconnected, the slaves still treat the original master as their master. However, if a slave is disconnected, the master loses that slave
>>slaveof no one #Leave slave status and become a master
>>shutdown       #When redis is stopped, its slave state stops too. The next time it starts it will come up as a master and will no longer follow the original master


redis-server redis80.conf #Start the redis instance on port 6380
kill -s 9 pid  #Force-kill a process by pid
2. Replication principle
  • Full replication: after a slave starts, it sends the sync command to the master and receives a copy of all the master's data
  • Incremental replication: after the initial sync, each write on the master is forwarded to the slaves in real time
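The two phases above can be sketched with in-memory maps standing in for the master and slave datasets (this is a conceptual model, not the real Redis protocol; `Node`, `addSlave`, etc. are hypothetical names):

```java
import java.util.*;

// Sketch of the two replication phases: full sync copies the master's
// entire dataset when the slave connects, then incremental replication
// forwards each subsequent write to every slave.
public class MiniReplication {
    static class Node {
        final Map<String, String> data = new HashMap<>();
        final List<Node> slaves = new ArrayList<>();

        void set(String k, String v) {
            data.put(k, v);
            slaves.forEach(s -> s.set(k, v)); // incremental: propagate each write
        }

        void addSlave(Node slave) {
            slave.data.putAll(data);          // full sync on connect
            slaves.add(slave);
        }
    }

    public static void main(String[] args) {
        Node master = new Node(), slave = new Node();
        master.set("k1", "v1");               // written before the slave joins
        master.addSlave(slave);               // slave receives existing data
        master.set("k2", "v2");               // slave receives new writes too
        System.out.println(slave.data.size()); // slave now holds both keys
    }
}
```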



Redis - sentinel mode

The process is as follows: after starting the sentinel, shut down the 6379 master. The sentinel elects a new master between 6380 and 6381 (here 6380 wins), and 6379 and 6381 are then treated as slaves. So the next time 6379 starts, it joins as a slave by default. In addition, because writes were replicated incrementally, the data is preserved on 6380, and 6379 performs a full copy from 6380 after it restarts.

#Sentinel log during the failover

8044:X 14 Jun 16:35:06.885 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.
8044:X 14 Jun 16:35:06.885 # Sentinel ID is c0ce22fc8365ff48663b7db710ce8c359529c3d9
8044:X 14 Jun 16:35:06.885 # +monitor master mymaster 127.0.0.1 6379 quorum 1
8044:X 14 Jun 16:35:51.556 # +sdown master mymaster 127.0.0.1 6379
8044:X 14 Jun 16:35:51.556 # +odown master mymaster 127.0.0.1 6379 #quorum 1/1
8044:X 14 Jun 16:35:51.556 # +new-epoch 1
8044:X 14 Jun 16:35:51.556 # +try-failover master mymaster 127.0.0.1 6379
8044:X 14 Jun 16:35:51.572 # +vote-for-leader c0ce22fc8365ff48663b7db710ce8c359529c3d9 1
8044:X 14 Jun 16:35:51.572 # +elected-leader master mymaster 127.0.0.1 6379
8044:X 14 Jun 16:35:51.572 # +failover-state-select-slave master mymaster 127.0.0.1 6379
8044:X 14 Jun 16:35:51.624 # +selected-slave slave 127.0.0.1:6380 127.0.0.1 6380 @ mymaster 127.0.0.1 6379
8044:X 14 Jun 16:35:51.624 * +failover-state-send-slaveof-noone slave 127.0.0.1:6380 127.0.0.1 6380 @ mymaster 127.0.0.1 6379
8044:X 14 Jun 16:35:51.684 * +failover-state-wait-promotion slave 127.0.0.1:6380 127.0.0.1 6380 @ mymaster 127.0.0.1 6379
8044:X 14 Jun 16:35:51.826 # +promoted-slave slave 127.0.0.1:6380 127.0.0.1 6380 @ mymaster 127.0.0.1 6379
8044:X 14 Jun 16:35:51.826 # +failover-state-reconf-slaves master mymaster 127.0.0.1 6379
8044:X 14 Jun 16:35:51.914 * +slave-reconf-sent slave 127.0.0.1:6381 127.0.0.1 6381 @ mymaster 127.0.0.1 6379
8044:X 14 Jun 16:35:52.831 * +slave-reconf-inprog slave 127.0.0.1:6381 127.0.0.1 6381 @ mymaster 127.0.0.1 6379
8044:X 14 Jun 16:35:52.831 * +slave-reconf-done slave 127.0.0.1:6381 127.0.0.1 6381 @ mymaster 127.0.0.1 6379
8044:X 14 Jun 16:35:52.912 # +failover-end master mymaster 127.0.0.1 6379
8044:X 14 Jun 16:35:52.912 # +switch-master mymaster 127.0.0.1 6379 127.0.0.1 6380
8044:X 14 Jun 16:35:52.912 * +slave slave 127.0.0.1:6381 127.0.0.1 6381 @ mymaster 127.0.0.1 6380
8044:X 14 Jun 16:35:52.912 * +slave slave 127.0.0.1:6379 127.0.0.1 6379 @ mymaster 127.0.0.1 6380
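The +sdown / +odown lines in the log reflect the quorum rule: each sentinel marks the master subjectively down (+sdown) on its own timeout, and once at least `quorum` sentinels agree, the master is objectively down (+odown) and failover begins. A minimal sketch of that check (class and method names are illustrative):

```java
// Sketch of the objective-down decision: the master is considered
// objectively down once the number of sentinels reporting it down
// reaches the configured quorum.
public class QuorumCheck {
    static boolean objectivelyDown(int sentinelsReportingDown, int quorum) {
        return sentinelsReportingDown >= quorum;
    }

    public static void main(String[] args) {
        // The log above shows "quorum 1", so one sentinel's sdown suffices
        System.out.println(objectivelyDown(1, 1)); // +odown, +try-failover
        System.out.println(objectivelyDown(1, 2)); // would stay +sdown only
    }
}
```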



Redis – cache penetration, breakdown, avalanche

  • Cache penetration: the requested data exists in neither the cache nor the database, so every request falls through to the database
    • Solutions: 1. Bloom filter 2. Store an empty object: after the database lookup misses, temporarily store an empty object in redis
  • Cache breakdown: a hot key expires just as a large number of requests for it arrive
    • Solutions: 1. Never expire the hot key 2. Distributed lock: one thread rebuilds the cache while the other threads wait
  • Cache avalanche: a large number of keys expire at the same time while a large number of requests arrive
    • Solutions: 1. Redis high availability: deploy more redis instances 2. Rate limiting and degradation: after the cache fails, control the number of threads reading and writing the database by locking or queuing 3. Data preheating: load a large amount of data into the cache in advance and set different expiration times based on access patterns
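The "store an empty object" mitigation for cache penetration can be sketched with HashMaps standing in for Redis and the database (all names here, such as `NullObjectCache` and the `NULL` sentinel, are illustrative; in real Redis the sentinel value would get a short TTL so it eventually expires):

```java
import java.util.*;

// Sketch of caching a sentinel "empty object" on database misses, so that
// repeated lookups for nonexistent keys stop hitting the database.
public class NullObjectCache {
    static final String NULL = "<null>";                // "known missing" marker
    final Map<String, String> cache = new HashMap<>();  // stand-in for Redis
    final Map<String, String> db;                       // stand-in for the database
    int dbHits = 0;                                     // counts database lookups

    NullObjectCache(Map<String, String> db) { this.db = db; }

    String get(String key) {
        String v = cache.get(key);
        if (v != null) return NULL.equals(v) ? null : v; // cache hit (maybe "missing")
        dbHits++;
        String fromDb = db.get(key);
        cache.put(key, fromDb == null ? NULL : fromDb);  // cache even the miss
        return fromDb;
    }

    public static void main(String[] args) {
        NullObjectCache c = new NullObjectCache(Map.of("k1", "v1"));
        c.get("missing"); c.get("missing"); c.get("missing");
        System.out.println(c.dbHits); // only the first lookup reached the DB
    }
}
```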

Added by misterm on Sun, 30 Jan 2022 05:08:58 +0200