Spring's Redis integration (RedisTemplate) uses two serialization strategies by default: String serialization and JDK serialization. The commonly used serializer classes are:

1. GenericToStringSerializer: can serialize any object by first converting it to a string (similar to StringRedisSerializer)
2. Jackson2JsonRedisSerializer: effectively the same as JacksonJsonRedisSerializer
3. JacksonJsonRedisSerializer: serializes objects to JSON strings
4. JdkSerializationRedisSerializer: serializes Java objects (the objects must implement the Serializable interface; objects that do not cannot be serialized)
5. StringRedisSerializer: simple string serialization
6. GenericJackson2JsonRedisSerializer: similar to Jackson2JsonRedisSerializer, but its constructor does not take a specific class, so it is not tied to one type

Copyright statement: this is an original article by the CSDN blogger "Forward to a Highlighted Place" and follows the CC 4.0 BY-SA copyright agreement. Please attach the original link and this statement when reposting. Original link: https://blog.csdn.net/y532798113/article/details/82690781
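To illustrate the Serializable requirement behind JdkSerializationRedisSerializer (point 4 above), here is a minimal, self-contained sketch of the round trip that serializer performs internally, using plain JDK streams and no Redis at all; the `User` class here is a stand-in invented for the example:

```java
import java.io.*;

public class JdkSerializationDemo {

    // A stand-in value class; it must implement Serializable,
    // otherwise ObjectOutputStream throws NotSerializableException.
    static class User implements Serializable {
        private static final long serialVersionUID = 1L;
        final String name;
        User(String name) { this.name = name; }
    }

    // Serialize an object to bytes, as JdkSerializationRedisSerializer does internally.
    static byte[] serialize(Object o) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
            oos.writeObject(o);
        }
        return bos.toByteArray();
    }

    // Deserialize bytes back into an object.
    static Object deserialize(byte[] bytes) throws IOException, ClassNotFoundException {
        try (ObjectInputStream ois = new ObjectInputStream(new ByteArrayInputStream(bytes))) {
            return ois.readObject();
        }
    }

    public static void main(String[] args) throws Exception {
        byte[] bytes = serialize(new User("alice"));
        User copy = (User) deserialize(bytes);
        System.out.println(copy.name); // prints "alice"
    }
}
```

Remove `implements Serializable` from the class and the `serialize` call fails at runtime, which is exactly the constraint noted in the list above.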
Below is a comparison of JDK, JSON, and Hash serialization performance when writing 100,000 (10W) objects to Redis.
```java
/**
 * Save 100,000 random User objects to Redis using JDK serialization and measure the elapsed time.
 * @return view name
 */
@RequestMapping("serializableUserInJDK")
public String serializableUserInJDK(Model model) {
    // Configure the JDK serializer
    redisTemplate.setValueSerializer(new JdkSerializationRedisSerializer());
    // Obtain the list operations template
    ListOperations<String, Object> opsForList = redisTemplate.opsForList();
    long time = System.currentTimeMillis();
    // Push the objects one by one
    for (int i = 0; i < 100000; i++) {
        User user = new User(i, ChineseName.getName(), getSex(), RandomNumber.getPhone("13"), getEmail(), getAge());
        opsForList.leftPush("jdkUser" + i, user);
    }
    model.addAttribute("message", "jdk");
    model.addAttribute("time", System.currentTimeMillis() - time);
    return "show";
}
```

**JDK serialization completed, elapsed time: 56464 ms**
```java
/**
 * Save 100,000 random User objects to Redis using JSON serialization and measure the elapsed time.
 * @return view name
 */
@RequestMapping("serializableUserInJSON")
public String serializableUserInJSON(Model model) {
    // Configure the JSON serializer
    redisTemplate.setValueSerializer(new Jackson2JsonRedisSerializer<>(User.class));
    ListOperations<String, Object> opsForList = redisTemplate.opsForList();
    long time = System.currentTimeMillis();
    for (int i = 0; i < 100000; i++) {
        User user = new User(i, ChineseName.getName(), getSex(), RandomNumber.getPhone("13"), getEmail(), getAge());
        Object json = JSON.toJSON(user);
        opsForList.leftPush("jsonUser" + i, json);
    }
    model.addAttribute("message", "json");
    model.addAttribute("time", System.currentTimeMillis() - time);
    return "show";
}
```

**JSON serialization completed, elapsed time: 56703 ms**
```java
/**
 * Save 100,000 random User objects to Redis using the Redis Hash type and measure the elapsed time.
 * @return view name
 */
@RequestMapping("serializableUserInHash")
public String serializableUserInHash(Model model) {
    // Configure the hash key/value serializers
    redisTemplate.setHashValueSerializer(new StringRedisSerializer());
    redisTemplate.setHashKeySerializer(new StringRedisSerializer());
    HashOperations<String, Object, Object> opsForHash = redisTemplate.opsForHash();
    long time = System.currentTimeMillis();
    for (int i = 0; i < 100000; i++) {
        User user = new User(i, ChineseName.getName(), getSex(), RandomNumber.getPhone("13"), getEmail(), getAge());
        opsForHash.put("hashUser", "user" + i, user.toString());
    }
    model.addAttribute("message", "Hash");
    model.addAttribute("time", System.currentTimeMillis() - time);
    return "show";
}
```

**Hash serialization completed, elapsed time: 56519 ms**
- With 100,000 (10W) records the differences are small. JdkSerializationRedisSerializer is the fastest in execution time (it is native to the JDK, after all), but its serialized output is the longest. Because the JSON format is compact, its serialized output is the smallest, though it takes slightly longer than JDK serialization. OxmSerializer took the longest (this depends on the specific Marshaller in use). So my personal choice is Jackson2JsonRedisSerializer as the POJO serializer.
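The claim about output size is easy to check without Redis. The sketch below (plain JDK only; the field names and values are made up for the example) serializes the same two fields once with ObjectOutputStream and once as a hand-built JSON string, then compares byte lengths; the JDK form carries a stream header, class descriptor, and field metadata, so it comes out noticeably larger:

```java
import java.io.*;
import java.nio.charset.StandardCharsets;

public class SerializedSizeDemo {

    static class User implements Serializable {
        private static final long serialVersionUID = 1L;
        final String name;
        final String phone;
        User(String name, String phone) { this.name = name; this.phone = phone; }
    }

    // JDK serialization: stream header + class descriptor + field data.
    static byte[] jdkBytes(User u) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
            oos.writeObject(u);
        }
        return bos.toByteArray();
    }

    // JSON built by hand to avoid a library dependency: just the field data.
    static byte[] jsonBytes(User u) {
        String json = "{\"name\":\"" + u.name + "\",\"phone\":\"" + u.phone + "\"}";
        return json.getBytes(StandardCharsets.UTF_8);
    }

    public static void main(String[] args) throws Exception {
        User u = new User("alice", "18012345678");
        System.out.println("jdk bytes:  " + jdkBytes(u).length);
        System.out.println("json bytes: " + jsonBytes(u).length);
    }
}
```

The gap narrows as payloads grow (the class-descriptor overhead is paid once per stream), but for small cached objects the JSON form stays the more compact of the two.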
Copyright statement: this is an original article by the CSDN blogger "keke_xin" and follows the CC 4.0 BY-SA copyright agreement. Please attach the original link and this statement when reposting.
Original link: https://blog.csdn.net/keke_Xin/article/details/84708633
## Pipeline testing (the preface draws on earlier writers; see the links below for more detail)
Redis's pipeline feature is not something you use from the command line, but Redis does support pipelining, and it is implemented in the clients for the various languages. Because of network latency, even a Redis server with strong processing power receives relatively few client messages, which results in low throughput. When a client sends commands through a pipeline, the Redis server has to queue some of the replies (which uses memory) and send the results back in one batch after execution. If a great many commands are sent at once, the queued replies will increase the memory used.
Pipelining is very useful in some scenarios: for example, when multiple commands need to be submitted promptly, the commands do not depend on each other's results, and the responses do not need to be obtained immediately. The pipeline then serves as a "batching" tool and can, to a certain extent, greatly improve performance. The performance gain comes mainly from reducing the number of request/response round trips over the TCP connection.
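The round-trip argument can be made concrete with a little arithmetic. Assuming a hypothetical 0.1 ms network round trip and 100,000 commands, sending them one at a time spends about 10 seconds in network waits alone, while a single pipelined batch pays that latency roughly once:

```java
public class RoundTripMath {
    public static void main(String[] args) {
        double rttMs = 0.1;   // assumed round-trip time in ms (hypothetical)
        int commands = 100_000;

        // Sequential: every command waits for its own reply.
        double sequentialMs = commands * rttMs;

        // Pipelined: one batch, so (roughly) one round trip covers all replies.
        double pipelinedMs = rttMs;

        System.out.println("sequential network wait: " + sequentialMs + " ms");
        System.out.println("pipelined network wait:  " + pipelinedMs + " ms");
    }
}
```

With the measured numbers later in this article (~56 s for 100,000 single pushes versus ~1.8-3.7 s pipelined), the implied per-command overhead is on the order of half a millisecond, which is consistent with round trips dominating the non-pipelined runs.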
Note, however, that while a pipeline is in use it monopolizes the connection: no non-pipeline operations can run on that connection until the pipeline is closed. If your pipeline contains a large batch of commands, consider creating a dedicated client connection for the pipeline so that it is kept separate from other normal operations on another client. Also note that the number of commands a pipeline can actually tolerate is strongly related to the size of the socket output buffer and the size of the returned data; this also means the number of pipelined connections each redis-server can support concurrently is limited by the server's physical memory and the capacity of its network interface buffers.
Copyright statement: this is an original article by the CSDN blogger "BugFree_Zhang Rui" and follows the CC 4.0 BY-SA copyright agreement. Please attach the original link and this statement when reposting.
Original link: https://blog.csdn.net/u011489043/article/details/78769428
```java
/**
 * Save 100,000 User objects to Redis using a pipeline with JDK serialization.
 */
@RequestMapping("serializableUserInJdkPip")
public String serializableUserInJdkPip() {
    redisTemplate.executePipelined(new SessionCallback<Object>() {
        @Override
        public <K, V> Object execute(RedisOperations<K, V> operations) throws DataAccessException {
            long time = System.currentTimeMillis();
            for (int i = 0; i < 100000; i++) {
                User user = new User(ChineseName.getName(), RandomNumber.getPhone("180"));
                redisTemplate.opsForList().leftPush("pipJdkUser" + i, user);
            }
            System.out.println("Elapsed time: " + (System.currentTimeMillis() - time));
            return null;
        }
    });
    return null;
}
```

**Elapsed time: 1795 ms**
```java
/**
 * Save 100,000 User objects to Redis using a pipeline with JSON serialization.
 */
@RequestMapping("serializableUserInJsonPip")
public String serializableUserInJsonPip() {
    redisTemplate.executePipelined(new SessionCallback<Object>() {
        @Override
        public <K, V> Object execute(RedisOperations<K, V> operations) throws DataAccessException {
            long time = System.currentTimeMillis();
            for (int i = 0; i < 100000; i++) {
                User user = new User(ChineseName.getName(), RandomNumber.getPhone("180"));
                Object json = JSON.toJSON(user);
                redisTemplate.opsForList().leftPush("pipJson" + i, json);
            }
            System.out.println("Elapsed time: " + (System.currentTimeMillis() - time));
            return null;
        }
    });
    return null;
}
```

**Elapsed time: 3653 ms**
```java
/**
 * Save 100,000 User objects to Redis using a pipeline with the Redis Hash type, measuring the elapsed time.
 */
@RequestMapping("serializeUserInHash")
public void serializeUserInHash() {
    redisTemplate.executePipelined(new SessionCallback<Object>() {
        @Override
        public <K, V> Object execute(RedisOperations<K, V> operations) throws DataAccessException {
            long time = System.currentTimeMillis();
            for (int i = 0; i < 100000; i++) {
                User user = new User(ChineseName.getName(), RandomNumber.getPhone("180"));
                Object json = JSON.toJSON(user);
                redisTemplate.opsForHash().put("hashUser", "user" + i, json);
            }
            System.out.println("Elapsed time: " + (System.currentTimeMillis() - time));
            return null;
        }
    });
}
```

**Elapsed time: 3069 ms**

---
Using pipelines shows a significant performance improvement: more than ten times faster than before. But it is not the case that the more you pipeline, the better; see the detailed introduction by CSDN blogger "BugFree_Zhang Rui" linked above.
In short, which serialization approach performs best depends on the specific development situation; used wisely, Redis can multiply your application's performance.
This test used 10W records, and 8 GB of RAM was just barely enough. Whether the Redis cache was emptied beforehand may also have affected the results.
This is my first post; comments and advice are welcome.