For simple create, read, update and delete operations on a User table, I wrote nearly 200 lines of code




1, Basic version (the basic operations)

This is the most common way to write this kind of code in day-to-day work:

  • RedisUtil: encapsulates the Redis API operations (a minimal sketch follows below)
  • RedisKeyPreConst: the key prefixes used for Redis cache entries
  • UserMapper: the database access layer
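
Neither RedisUtil nor RedisKeyPreConst is shown in the original post, so here is a minimal sketch of what they might look like, assuming Spring's StringRedisTemplate as the backing client. The method names and signatures are inferred from the calls in the service code below; the real implementations may differ.

import java.util.concurrent.TimeUnit;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.data.redis.core.StringRedisTemplate;
import org.springframework.stereotype.Component;

// Hypothetical facade over StringRedisTemplate, exposed as static methods
// to match the RedisUtil.xxx(...) call style used throughout this post
@Component
public class RedisUtil {

	private static StringRedisTemplate redisTemplate;

	@Autowired
	public void setRedisTemplate(StringRedisTemplate template) {
		RedisUtil.redisTemplate = template;
	}

	public static void add(String key, String value) {
		redisTemplate.opsForValue().set(key, value);
	}

	public static void add(String key, String value, long timeout, TimeUnit unit) {
		redisTemplate.opsForValue().set(key, value, timeout, unit);
	}

	public static String get(String key) {
		return redisTemplate.opsForValue().get(key);
	}

	public static void update(String key, String value) {
		// for a plain string value, an update is just an overwrite
		redisTemplate.opsForValue().set(key, value);
	}

	public static void delete(String key) {
		redisTemplate.delete(key);
	}

	public static void expire(String key, long timeout, TimeUnit unit) {
		redisTemplate.expire(key, timeout, unit);
	}
}

// Hypothetical key-prefix constants, matching the usage below
public class RedisKeyPreConst {
	public static final String USER_CACHE = "USER_CACHE:";
}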

The code implementation details are as follows:

@Service
public class UserServiceImpl implements UserService {

	@Autowired
	UserMapper userMapper;

	@Override
	public User insertUser(User user) {
		User insertRet = userMapper.insertUser(user);
		RedisUtil.add(RedisKeyPreConst.USER_CACHE + user.getUserId(), JSON.toJSONString(user));
		return insertRet;
	}

	@Override
	public Integer deleteUser(Integer userId) {
		Integer num = userMapper.deleteUser(userId);
		RedisUtil.delete(RedisKeyPreConst.USER_CACHE + userId);
		return num;
	}

	@Override
	public User updateUser(User user) {
		User updateRet = userMapper.updateUser(user);
		RedisUtil.update(RedisKeyPreConst.USER_CACHE + user.getUserId(), JSON.toJSONString(user));
		return updateRet;
	}

	@Override
	public User findUser(Integer userId) {
		User user = null;

		// 1. Query in redis cache
		String userCacheKey = RedisKeyPreConst.USER_CACHE + userId;
		String jsonUser = RedisUtil.get(userCacheKey);
		if (!StringUtils.isEmpty(jsonUser)) {
			user = JSON.parseObject(jsonUser, User.class);
			return user;
		}
		
		// 2. Query in database
		user = userMapper.findUser(userId);
		if (user != null) {
			RedisUtil.add(userCacheKey, JSON.toJSONString(user));
		}
		return user;
	}
}





2, Upgraded version 1.1 (handling cache expiration, breakdown and penetration)

The basic version is the common approach, but it has the following problems:

  1. Too much data in the cache: every requested record gets loaded into the cache, so it grows without bound. Fix: set an expiration time on each cached entry
  2. Requested keys expiring at the same moment (cache breakdown): when writing to the cache, add a random offset to the base expiration time so hot keys do not all expire together
  3. A flood of requests for a nonexistent key (cache penetration): when the lookup finds nothing in the database, cache an empty object so that subsequent requests no longer reach the database
  4. A flood of requests for many different nonexistent keys, i.e. too many cached empty objects: give each empty object a short expiration time, and extend it whenever that empty object is accessed again
  5. Cache data consistency in a distributed scenario: use a distributed lock, and add a second cache check after acquiring it, in a pattern similar to DCL (double-checked locking)
  6. Handling double-write inconsistency takes quite a bit of extra code, so to keep this version easy to follow it is shown separately in version 1.2



Supplementary knowledge points (important):

  • Cache breakdown (mass expiry): a large batch of cached keys expires at once, and requests pass straight through the cache to the database. Mitigation: stagger the expiration times.
  • Cache penetration: a query for data that does not exist at all misses both the Redis cache and the database. Mitigation: cache empty objects, or use a Bloom filter (see the sketch below).
  • Cache avalanche: the knock-on effects of a cache breakdown cascade through the whole system. Mitigation: rate limiting and cache warm-up (see the second sketch below).
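
The Bloom filter mentioned above never appears in the code in this post; here is a minimal sketch of how it could be wired up with Redisson's RBloomFilter. The filter name, sizing parameters, and wrapper class are illustrative assumptions.

import org.redisson.api.RBloomFilter;
import org.redisson.api.RedissonClient;

// Hypothetical screen for user IDs: consult the filter before touching
// the cache or the database, complementing the empty-object caching below
public class UserIdBloomFilter {

	private final RBloomFilter<Integer> bloomFilter;

	public UserIdBloomFilter(RedissonClient redisson) {
		bloomFilter = redisson.getBloomFilter("user:id:bloom");
		// size for ~1 million user IDs with a 1% false-positive rate
		bloomFilter.tryInit(1_000_000L, 0.01);
	}

	public void register(Integer userId) {
		bloomFilter.add(userId); // call on every successful insert
	}

	public boolean mightExist(Integer userId) {
		// false means the ID definitely does not exist: reject immediately
		return bloomFilter.contains(userId);
	}
}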

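The rate-limiting mitigation for cache avalanche is also left abstract in the post. A minimal sketch, here using Guava's RateLimiter purely for illustration (production setups would more likely use dedicated middleware such as the ones mentioned in section 4):

import com.google.common.util.concurrent.RateLimiter;

// Hypothetical guard that sheds excess load before it reaches Redis or MySQL
public class UserQueryGuard {

	// allow at most 1000 findUser calls per second through to the backend
	private static final RateLimiter LIMITER = RateLimiter.create(1000.0);

	public static User guardedFind(UserService userService, Integer userId) {
		if (!LIMITER.tryAcquire()) {
			// reject instead of letting a stampede hit the backend
			return null;
		}
		return userService.findUser(userId);
	}
}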


The code implementation details are as follows:

@Service
public class UserServiceImpl2 implements UserService {

	@Autowired
	private Redisson redisson;

	private String nullUser = JSON.toJSONString(new User());

	private static final String HOT_USER_LOCK = "HOT_USER_LOCK_PRE";
	@Autowired
	UserMapper userMapper;

	@Override
	public User insertUser(User user) {
		User insertRet = userMapper.insertUser(user);
		RedisUtil.add(RedisKeyPreConst.USER_CACHE + user.getUserId(), JSON.toJSONString(user));
		return insertRet;
	}

	@Override
	public Integer deleteUser(Integer userId) {
		Integer num = userMapper.deleteUser(userId);
		RedisUtil.delete(RedisKeyPreConst.USER_CACHE + userId);
		return num;
	}

	@Override
	public User updateUser(User user) {
		User updateRet = userMapper.updateUser(user);
		RedisUtil.update(RedisKeyPreConst.USER_CACHE + user.getUserId(), JSON.toJSONString(user));
		return updateRet;
	}

	@Override
	public User findUser(Integer userId) {
		User user = null;
		String userCacheKey = RedisKeyPreConst.USER_CACHE + userId;

		// Problem 5: query in redis cache for the first time
		user = getUserAndSetExpire(userCacheKey);
		if (user != null) {
			return user;
		}

		// Solution 5: take a distributed lock so that concurrent rebuilds of the same hot key are serialized
		RLock lock = redisson.getLock(HOT_USER_LOCK + userId);
		lock.lock();
		try {
			// Problem 5: query in redis cache for the second time
			user = getUserAndSetExpire(userCacheKey);
			if (user != null) {
				return user;
			}

			// Query data in database
			user = userMapper.findUser(userId);
			if (user != null) {
				RedisUtil.add(userCacheKey, JSON.toJSONString(user));
			} else {
				// Solution 3: cache an empty object and set the expiration time for the empty object to avoid space waste
				RedisUtil.add(userCacheKey, nullUser, getExpireTime(60), TimeUnit.SECONDS);
			}
		} catch (Exception e) {
			throw new RuntimeException(e);
		} finally {
			lock.unlock();
		}
		return user;
	}

	private int getExpireTime(int time) {
		// Solution 2: add a random offset to the base expiration time so keys do not all expire at once, preventing cache breakdown
		return time + new Random().nextInt(30);
	}

	// Query data in redis cache
	private User getUserAndSetExpire(String userCacheKey) {
		User user = null;

		String jsonUser = RedisUtil.get(userCacheKey);
		if (!StringUtils.isEmpty(jsonUser)) {
			if (nullUser.equals(jsonUser)) {
				// Solution 4: if it is a cached empty object, extend the expiration time of the empty object
				RedisUtil.expire(userCacheKey, getExpireTime(60), TimeUnit.SECONDS);
				return new User();
			}
			user = JSON.parseObject(jsonUser, User.class);
			// Solution 1: refresh the expiration time on every access, so inactive data eventually expires
			RedisUtil.expire(userCacheKey, getExpireTime(60 * 60 * 24), TimeUnit.SECONDS);
		}
		return user;
	}
}





3, Upgraded version 1.2 (handling double-write consistency)

What is double-write inconsistency? I won't repeat it here; those unfamiliar with it can refer to my earlier post: Summary of Redis core knowledge points (15,000 words, please read it patiently)

  1. Double-write inconsistency: on top of the existing DCL structure, add a distributed lock between the second cache check and the database query

The code implementation details are as follows:

@Service
public class UserServiceImpl3 implements UserService {

	@Autowired
	private Redisson redisson;

	private String nullUser = JSON.toJSONString(new User());

	private static final String HOT_USER_LOCK = "HOT_USER_LOCK_PRE";
	private static final String UPDATE_USER_LOCK = "UPDATE_USER_LOCK";
	@Autowired
	UserMapper userMapper;

	@Override
	public User insertUser(User user) {
		User insertRet = new User();

		// Solution 6: a lock is also taken when inserting data, to avoid double-write inconsistency
		RLock updateLock = redisson.getLock(UPDATE_USER_LOCK + user.getUserId());
		updateLock.lock();
		try {
			insertRet = userMapper.insertUser(user);
			RedisUtil.add(RedisKeyPreConst.USER_CACHE + user.getUserId(), JSON.toJSONString(user));
		} catch (Exception e) {
			throw new RuntimeException(e);
		} finally {
			updateLock.unlock();
		}
		return insertRet;
	}

	@Override
	public Integer deleteUser(Integer userId) {
		Integer num;
		// Solution 6: a lock is also taken when deleting data, to avoid double-write inconsistency
		RLock updateLock = redisson.getLock(UPDATE_USER_LOCK + userId);
		updateLock.lock();
		try {
			num = userMapper.deleteUser(userId);
			RedisUtil.delete(RedisKeyPreConst.USER_CACHE + userId);
		} catch (Exception e) {
			throw new RuntimeException(e);
		} finally {
			updateLock.unlock();
		}
		return num;
	}

	@Override
	public User updateUser(User user) {
		// Solution 6: a lock is also taken when updating data, to avoid double-write inconsistency
		User updateRet = new User();
		RLock updateLock = redisson.getLock(UPDATE_USER_LOCK + user.getUserId());
		updateLock.lock();
		try {
			updateRet = userMapper.updateUser(user);
			RedisUtil.update(RedisKeyPreConst.USER_CACHE + user.getUserId(), JSON.toJSONString(user));

		} catch (Exception e) {
			throw new RuntimeException(e);
		} finally {
			updateLock.unlock();
		}

		return updateRet;
	}

	@Override
	public User findUser(Integer userId) {
		User user = null;
		String userCacheKey = RedisKeyPreConst.USER_CACHE + userId;

		// Problem 5: query in redis cache for the first time
		user = getUserAndSetExpire(userCacheKey);
		if (user != null) {
			return user;
		}

		// Solution 5: take a distributed lock so that concurrent rebuilds of the same hot key are serialized
		RLock lock = redisson.getLock(HOT_USER_LOCK + userId);
		lock.lock();
		try {
			// Problem 5: query in redis cache for the second time
			user = getUserAndSetExpire(userCacheKey);
			if (user != null) {
				return user;
			}


			// Solution 6: add a second distributed lock to deal with double write inconsistency
			RLock updateLock = redisson.getLock(UPDATE_USER_LOCK + userId);
			updateLock.lock();
			try {
				//  Query data in database
				user = userMapper.findUser(userId);
				if (user != null) {
					RedisUtil.add(userCacheKey, JSON.toJSONString(user));
				} else {
					// Solution 3: cache an empty object and set the expiration time for the empty object to avoid space waste
					RedisUtil.add(userCacheKey, nullUser, getExpireTime(60), TimeUnit.SECONDS);
				}
			} catch (Exception e) {
				throw new RuntimeException(e);
			} finally {
				updateLock.unlock();
			}
		} catch (Exception e) {
			throw new RuntimeException(e);
		} finally {
			lock.unlock();
		}
		return user;
	}

	private int getExpireTime(int time) {
		// Solution 2: add a random offset to the base expiration time so keys do not all expire at once, preventing cache breakdown
		return time + new Random().nextInt(30);
	}

	// Query data in redis cache
	private User getUserAndSetExpire(String userCacheKey) {
		User user = null;

		// 1. Query in redis cache
		String jsonUser = RedisUtil.get(userCacheKey);
		if (!StringUtils.isEmpty(jsonUser)) {
			if (nullUser.equals(jsonUser)) {
				// Solution 4: if it is a cached empty object, extend the expiration time of the empty object
				RedisUtil.expire(userCacheKey, getExpireTime(60), TimeUnit.SECONDS);
				return new User();
			}
			user = JSON.parseObject(jsonUser, User.class);
			// Solution 1: refresh the expiration time on every access, so inactive data eventually expires
			RedisUtil.expire(userCacheKey, getExpireTime(60 * 60 * 24), TimeUnit.SECONDS);
		}
		return user;
	}
}





4, Optimized version (a multi-level cache architecture)

The code above already addresses most cache data consistency problems, but if you need even higher concurrency (more than Redis alone can carry) or faster response times, the following optimizations are offered for reference only:

  1. Refine the lock granularity by introducing RReadWriteLock: this improves concurrency in read-heavy scenarios
  2. Estimate the business execution time and turn serial waiting into parallel execution: in the DCL cache structure, lock() deliberately blocks threads so that they reach the second cache check serially. Replacing lock() with tryLock(timeout) lets threads proceed in parallel, at the cost of having to estimate how long the business code takes. Personally I find it of limited value, but it is an optimization option
  3. Concurrency that Redis itself cannot handle: first, use rate-limiting middleware such as Sentinel or Hystrix; second, keep hot data in a JVM-local in-memory Map. Since such a Map lives inside a single JVM, a distributed deployment brings data consistency problems, and a plain Map has no expiration handling, which hurts scalability. Possible remedies (see the two sketches after this list):
    • Introduce middleware such as EhCache or Guava Cache to evict expired keys
    • Use ZooKeeper or MQ to publish change events: whenever the data in Redis changes, synchronize it to the in-memory map of every process (the JVM memory of each server in the distributed deployment)
    • With the help of a third-party hot-key real-time computing system, trigger an event once a key's access frequency passes a threshold and synchronize that key to every process's in-memory map
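
First sketch: swapping the raw in-memory Map in the code below for a Guava Cache gives the JVM-level hot cache a size bound and automatic eviction of inactive keys. The class name, capacity, and expiry values are illustrative assumptions.

import java.util.concurrent.TimeUnit;
import com.google.common.cache.Cache;
import com.google.common.cache.CacheBuilder;

// Hypothetical JVM-local hot cache with bounded size and idle-key eviction
public class LocalUserCache {

	private static final Cache<String, User> USER_CACHE = CacheBuilder.newBuilder()
			.maximumSize(10_000)                    // cap memory usage
			.expireAfterAccess(24, TimeUnit.HOURS)  // evict keys that go inactive
			.build();

	public static void put(String key, User user) {
		USER_CACHE.put(key, user);
	}

	public static User get(String key) {
		return USER_CACHE.getIfPresent(key); // null on miss, like Map.get
	}

	public static void remove(String key) {
		USER_CACHE.invalidate(key);
	}
}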

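Second sketch: cross-JVM synchronization of the local map. The post suggests ZooKeeper or MQ; purely for illustration this version uses a Redisson pub/sub topic as the message channel instead. The topic name and wrapper class are assumptions.

import java.util.Map;
import org.redisson.api.RTopic;
import org.redisson.api.RedissonClient;

// Hypothetical synchronizer: every instance subscribes, and whenever one
// instance changes a user in Redis it publishes the new JSON so that all
// instances refresh their local in-memory map
public class UserCacheSynchronizer {

	private final RTopic topic;

	public UserCacheSynchronizer(RedissonClient redisson, Map<String, User> localMap) {
		topic = redisson.getTopic("user:cache:sync");
		// listener runs in every JVM, keeping each local map in step with Redis
		topic.addListener(String.class, (channel, jsonUser) -> {
			User user = JSON.parseObject(jsonUser, User.class);
			localMap.put(RedisKeyPreConst.USER_CACHE + user.getUserId(), user);
		});
	}

	// call after any insert/update that has already written to Redis
	public void publishChange(User user) {
		topic.publish(JSON.toJSONString(user));
	}
}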


The code implementation details are as follows:

@Service
public class UserServiceImpl4 implements UserService {
	
	@Autowired
	private Redisson redisson;

	private String nullUser = JSON.toJSONString(new User());

	private static final String HOT_USER_LOCK = "HOT_USER_LOCK_PRE";
	private static final String UPDATE_USER_LOCK = "UPDATE_USER_LOCK";

	// Optimization scheme 3: a JVM-local in-memory Map absorbs concurrency beyond what Redis can handle;
	// it must be a ConcurrentHashMap because it is read and written concurrently
	private static Map<String, User> userMap = new ConcurrentHashMap<>();
	@Autowired
	UserMapper userMapper;

	@Override
	public User insertUser(User user) {
		User insertRet = new User();

		// Optimization scheme 2: use a read-write lock instead of a coarser exclusive lock;
		// this is a write path, so it must take the write lock
		RReadWriteLock readWriteLock = redisson.getReadWriteLock(UPDATE_USER_LOCK + user.getUserId());
		RLock insertWriteLock = readWriteLock.writeLock();
		insertWriteLock.lock();
		try {
			insertRet = userMapper.insertUser(user);
			RedisUtil.add(RedisKeyPreConst.USER_CACHE + user.getUserId(), JSON.toJSONString(user));
			// Optimization scheme 3: use the JVM's own memory Map to process larger concurrent requests
			userMap.put(RedisKeyPreConst.USER_CACHE + user.getUserId(), user);
		} catch (Exception e) {
			throw new RuntimeException(e);
		} finally {
			insertWriteLock.unlock();
		}
		return insertRet;
	}

	@Override
	public Integer deleteUser(Integer userId) {
		Integer num;
		// Optimization scheme 2: use a read-write lock instead of a coarser exclusive lock;
		// this is a write path, so it must take the write lock
		RReadWriteLock readWriteLock = redisson.getReadWriteLock(UPDATE_USER_LOCK + userId);
		RLock deleteWriteLock = readWriteLock.writeLock();
		deleteWriteLock.lock();
		try {
			num = userMapper.deleteUser(userId);
			RedisUtil.delete(RedisKeyPreConst.USER_CACHE + userId);
			// Optimization scheme 3: use the JVM's own memory Map to process larger concurrent requests
			userMap.remove(RedisKeyPreConst.USER_CACHE + userId);

		} catch (Exception e) {
			throw new RuntimeException(e);
		} finally {
			deleteWriteLock.unlock();
		}
		return num;
	}

	@Override
	public User updateUser(User user) {
		// Optimization scheme 2: use a read-write lock instead of a coarser exclusive lock;
		// this is a write path, so it must take the write lock
		User updateRet = new User();

		RReadWriteLock readWriteLock = redisson.getReadWriteLock(UPDATE_USER_LOCK + user.getUserId());
		RLock updateWriteLock = readWriteLock.writeLock();
		updateWriteLock.lock();
		try {
			updateRet = userMapper.updateUser(user);
			RedisUtil.update(RedisKeyPreConst.USER_CACHE + user.getUserId(), JSON.toJSONString(user));
			// Optimization scheme 3: use the JVM's own memory Map to process larger concurrent requests
			userMap.put(RedisKeyPreConst.USER_CACHE + user.getUserId(), user);


		} catch (Exception e) {
			throw new RuntimeException(e);
		} finally {
			updateWriteLock.unlock();
		}

		return updateRet;
	}

	@Override
	public User findUser(Integer userId) {
		User user = null;
		String userCacheKey = RedisKeyPreConst.USER_CACHE + userId;


		// Problem 5: query in redis cache for the first time
		user = getUserAndSetExpire(userCacheKey);
		if (user != null) {
			return user;
		}

		// Solution 5: take a distributed lock so that concurrent rebuilds of the same hot key are serialized
		RLock lock = redisson.getLock(HOT_USER_LOCK + userId);
		lock.lock();
		// Optimization scheme 1: if the business execution time can be estimated (say, 2 seconds),
		// the lock() above can be replaced with lock.tryLock(2, TimeUnit.SECONDS) to turn the
		// serial DCL wait into parallel execution. It is fragile: if the business code runs
		// longer than the estimate, consistency problems reappear, so use it with care.
		try {

			user = getUserAndSetExpire(userCacheKey);
			if (user != null) {
				return user;
			}

			// Optimization scheme 2: the read path takes the read lock of the same read-write lock
			RReadWriteLock readWriteLock = redisson.getReadWriteLock(UPDATE_USER_LOCK + userId);
			RLock findReadLock = readWriteLock.readLock();
			findReadLock.lock();


			try {
				// Query data in database
				user = userMapper.findUser(userId);
				if (user != null) {
					RedisUtil.add(userCacheKey, JSON.toJSONString(user));
					// Optimization scheme 3: use the JVM's own memory Map to process larger concurrent requests
					userMap.put(userCacheKey, user);
				} else {
					// Solution 3: cache an empty object and set the expiration time for the empty object to avoid space waste
					RedisUtil.add(userCacheKey, nullUser, getExpireTime(60), TimeUnit.SECONDS);
				}
			} catch (Exception e) {
				throw new RuntimeException(e);
			} finally {
				findReadLock.unlock();
			}
		} catch (Exception e) {
			throw new RuntimeException(e);
		} finally {
			lock.unlock();
		}
		return user;
	}

	private int getExpireTime(int time) {
		// Solution 2: add a random offset to the base expiration time so keys do not all expire at once, preventing cache breakdown
		return time + new Random().nextInt(30);
	}

	private User getUserAndSetExpire(String userCacheKey) {
		User user = null;

		// Optimization scheme 3: use the JVM's own memory Map to process larger concurrent requests
		user = userMap.get(userCacheKey);
		if (user != null) {
			return user;
		}

		// Query data in redis cache
		String jsonUser = RedisUtil.get(userCacheKey);
		if (!StringUtils.isEmpty(jsonUser)) {
			if (nullUser.equals(jsonUser)) {
				// Solution 4: if it is a cached empty object, extend the expiration time of the empty object
				RedisUtil.expire(userCacheKey, getExpireTime(60), TimeUnit.SECONDS);
				return new User();
			}
			user = JSON.parseObject(jsonUser, User.class);
			// Solution 1: refresh the expiration time on every access, so inactive data eventually expires
			RedisUtil.expire(userCacheKey, getExpireTime(60 * 60 * 24), TimeUnit.SECONDS);
		}
		return user;
	}
}
