Deep understanding of redis -- cache avalanche / cache breakdown / cache penetration

1. Cache avalanche

2. Cache breakdown

3. Cache penetration

4. Summary

1. Cache avalanche
How does a cache avalanche happen?
1) The Redis service goes down entirely (a full Redis crash)
2) A large number of keys in Redis expire at the same time, so a flood of queries hits MySQL directly

Solutions:
For cause 1):
1.1) Build a Redis cache cluster for high availability: master-slave replication + Sentinel
1.2) Use an Ehcache local cache, plus Hystrix or Alibaba Sentinel for rate limiting and degradation
1.3) Enable Redis persistence (AOF/RDB) so the cache cluster can be restored as quickly as possible

For cause 2):
2.1) Set each key's expiration as a fixed base time plus a random offset, so keys do not all expire at the same moment.
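The random-offset idea in 2.1) can be sketched in a few lines. The class name and the base/jitter values below are illustrative, not from the original article:

```java
import java.util.concurrent.ThreadLocalRandom;

public class TtlJitter {
    // Hypothetical values: a 1-hour base TTL plus up to 10 minutes of random jitter
    static final long BASE_TTL_SECONDS = 3600L;
    static final long MAX_JITTER_SECONDS = 600L;

    // Returns the TTL to use when writing a key, so that keys cached
    // in the same batch expire at slightly different times
    public static long ttlWithJitter() {
        return BASE_TTL_SECONDS + ThreadLocalRandom.current().nextLong(MAX_JITTER_SECONDS + 1);
    }

    public static void main(String[] args) {
        // e.g. redisTemplate.opsForValue().set(key, value, ttlWithJitter(), TimeUnit.SECONDS);
        System.out.println("TTL for this write: " + ttlWithJitter() + "s");
    }
}
```

Each write then gets an expiration somewhere in the window, instead of all keys expiring on the same tick.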

2. Cache breakdown
How does cache breakdown happen?
A large number of requests access the same key, and that key expires at exactly that moment, so all of those requests fall through to the database.

Harm:
MySQL comes under heavy pressure at a single moment, with a risk of going down.

Solution:
1) For highly hot keys, set them to never expire
2) Set a mutex to prevent cache breakdown:

    public TUser findById(Integer id) {
        TUser user = (TUser) redisTemplate.opsForValue().get(CACHE_KEY_USER + id);
        if (user == null) {
            // Naive version: every cache miss queries MySQL directly
            /* user = userMapper.selectById(id);
               if (user != null) {
                   redisTemplate.opsForValue().set(CACHE_KEY_USER + id, user);
               } */
            // Under high QPS, lock first so only one request queries MySQL;
            // the other requests wait briefly instead of stampeding the database
            synchronized (TUserServiceImpl.class) {
                // Double-check: query Redis again inside the lock
                user = (TUser) redisTemplate.opsForValue().get(CACHE_KEY_USER + id);
                if (user == null) {
                    // Still empty, so query MySQL and write the result back
                    user = userMapper.selectById(id);
                    if (user != null) {
                        redisTemplate.opsForValue().setIfAbsent(CACHE_KEY_USER + id, user, 7L, TimeUnit.DAYS);
                    }
                }
            }
        }
        return user;
    }

3) Dual caches, scheduled polling, mutually exclusive updates, staggered expiration times

Scenario: suppose a web page shows a bargain section that rotates to a new batch of goods every two hours, e.g. one batch from 8:00 to 10:00 and another from 10:00 to 12:00. If a scheduled task refreshes the goods at 8:00 but the data volume is large and the MySQL query is slow, the old cache may already have expired before MySQL returns the new batch; during that window every query hits MySQL directly, which can bring the service down.

Idea: keep two caches holding the same content but with different expiration times. Cache B expires later than cache A, which guarantees there is always data in at least one cache.

Query: query cache A first; if A misses, query cache B.

// Use the lrange command of the Redis list type to page through the goods
list = this.redisTemplate.opsForList().range(Constants.JHS_KEY_A, start, end);
if (CollectionUtils.isEmpty(list)) {
    log.info("========= Cache A has expired; remember to repair it manually. Cache B remains valid for another 5 days");
    // Cache A missed (e.g. it was deleted during an update), so fall back to cache B
    list = this.redisTemplate.opsForList().range(Constants.JHS_KEY_B, start, end);
}

Update: update cache B first and then cache A.

// Update cache B first
this.redisTemplate.delete(Constants.JHS_KEY_B);
this.redisTemplate.opsForList().leftPushAll(Constants.JHS_KEY_B, list);
this.redisTemplate.expire(Constants.JHS_KEY_B, 20L, TimeUnit.DAYS);
// Then update cache A
this.redisTemplate.delete(Constants.JHS_KEY_A);
this.redisTemplate.opsForList().leftPushAll(Constants.JHS_KEY_A, list);
this.redisTemplate.expire(Constants.JHS_KEY_A, 15L, TimeUnit.DAYS);

Note: updates and queries touch the two caches in opposite orders (mutually exclusive update and query), which ensures that at least one of the two keys always has a value.

3. Cache penetration
What is cache penetration:
A request queries a record that exists neither in Redis nor in MySQL, so every such request falls through to MySQL and the database load surges.

Harm: after the first query we normally write the result back to Redis, so an occasional miss is harmless; but if it happens frequently, e.g. a flood of requests for nonexistent keys, it is a potential security risk.

Solution:
1) Cache an empty object or default value
This usually works, but if attackers flood the system with large numbers of random nonexistent keys, each new key still triggers a database query, and Redis fills up with useless placeholder keys.
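A minimal sketch of the empty-object approach, using a ConcurrentHashMap as a stand-in for Redis; in real code the placeholder would go into Redis with a short TTL. All names here are illustrative:

```java
import java.util.concurrent.ConcurrentHashMap;

public class NullCaching {
    // Sentinel stored on a database miss so repeated lookups stop hitting MySQL
    static final Object NULL_PLACEHOLDER = new Object();
    // Stand-in for Redis; a real implementation would use redisTemplate with a short TTL
    static final ConcurrentHashMap<String, Object> cache = new ConcurrentHashMap<>();

    // Simulated database lookup that finds nothing
    static Object loadFromDb(String key) {
        return null;
    }

    public static Object findById(String key) {
        Object cached = cache.get(key);
        if (cached != null) {
            // A cached placeholder means "known absent": return null without querying MySQL
            return cached == NULL_PLACEHOLDER ? null : cached;
        }
        Object value = loadFromDb(key);
        // Cache the miss as well; with Redis, give the placeholder a short TTL (e.g. 60 seconds)
        cache.put(key, value == null ? NULL_PLACEHOLDER : value);
        return value;
    }
}
```

Only the first lookup for a missing key reaches the database; later lookups are answered by the placeholder until it expires.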

2) Use a Redis Bloom filter to solve cache penetration
See the companion article "Deep understanding of redis -- bloom filter", which explains how a Bloom filter solves cache penetration.
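To illustrate the idea (this is a toy in-memory sketch, not the Redis Bloom filter module): keys are registered in the filter when written to the cache, and a request whose key is definitely not in the filter can be rejected before touching Redis or MySQL. The hashing scheme below is deliberately simple.

```java
import java.util.BitSet;

public class SimpleBloomFilter {
    private final BitSet bits;
    private final int size;
    private final int hashCount;

    public SimpleBloomFilter(int size, int hashCount) {
        this.bits = new BitSet(size);
        this.size = size;
        this.hashCount = hashCount;
    }

    // A simple seeded string hash; production code would use stronger hash functions
    private int index(String key, int seed) {
        int h = seed;
        for (int i = 0; i < key.length(); i++) {
            h = h * 31 + key.charAt(i);
        }
        return Math.abs(h % size);
    }

    public void add(String key) {
        for (int seed = 1; seed <= hashCount; seed++) {
            bits.set(index(key, seed));
        }
    }

    // false means the key was definitely never added; true means "probably added"
    public boolean mightContain(String key) {
        for (int seed = 1; seed <= hashCount; seed++) {
            if (!bits.get(index(key, seed))) {
                return false;
            }
        }
        return true;
    }
}
```

On each request, check mightContain(key) first: a false answer means the key cannot exist, so the request is rejected without querying Redis or MySQL. False positives are possible; false negatives are not.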

4. Summary
Today we looked at the phenomena of cache avalanche, cache breakdown, and cache penetration, and the corresponding solutions.


Added by blackcow on Tue, 15 Feb 2022 11:24:34 +0200