Background
- Many articles about Redis warn against creating large keys, because deleting one blocks the main thread. I have seen a comment claiming that deleting a 2 GB key in a test blocked the system for about 80 seconds.
- I was once asked in an interview how to delete a big key.
- I have recently been reading the Redis 5.0.8 source code. From it, deleting a large key appears to remove only the key-related metadata in the main thread, while the actual value and its memory release can be handed to the asynchronous deletion thread. If so, the operation should not be as terrible as the 80-second main-thread block described online (a quick sketch of the two related commands follows).
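For context, a minimal sketch of the two related commands (UNLINK has existed since Redis 4.0; as far as I can tell, a client-issued DEL in Redis 5.x still frees the value in the main thread):

127.0.0.1:6379> DEL bigkey      # value is reclaimed synchronously in the main thread
127.0.0.1:6379> UNLINK bigkey   # key is unlinked at once, memory is reclaimed by the lazy-free thread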
Test preparation
- A Redis 5.0.x server (for convenience it is run in Docker here; an example run command follows this list).
- PHP 7.3 (the version that ships with the system; any other scripting language would work too).
- A C compiler. The data-loading program is written in C, because the bundled PHP does not have the pcntl extension installed, so C threads are used to insert data concurrently. It is only used here to construct test data; other languages could be used instead.
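A typical way to start the server with Docker (the image tag 5.0.8 and the container name are my assumptions; adjust as needed):

docker run -d --name redis5 -p 6379:6379 redis:5.0.8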
Constructing the data
- Write the data-construction code. It targets macOS and may need slight adjustments for other operating systems.
- The code is rough: changing the amount of data or the server information requires editing the macro definitions and recompiling.
#include <stdio.h>
#include <pthread.h>
#include <unistd.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <sys/wait.h>
#include <sys/types.h>
#include <string.h>
#include <errno.h>
#include <arpa/inet.h>
#include <time.h>
#include <sys/time.h>
#include <signal.h>

#define RESV_BUFF_SIZE 512
#define SEND_BUFF_SIZE 1024
/**
 * IP address of the redis server
 */
#define DEST_ADDR "172.20.23.83"
/**
 * Port number of the redis server
 */
#define DEST_PORT 6379
/**
 * Total number of insert operations per thread
 */
#define TRANS_PER_THREAD 10000
/**
 * Controls how often a log line is printed
 */
#define LOG_STEP 255
/**
 * Total number of threads to start
 */
#define THREADS_NUM 100

int numLen(int num) {
    int i = 0;
    do {
        num /= 10;
        i++;
    } while (num != 0);
    return i;
}

/**
 * Thread body.
 * @param p thread number 0-99, which determines the data segment this thread inserts.
 *          For example, thread 1 inserts keys in the interval
 *          [TRANS_PER_THREAD*1, TRANS_PER_THREAD*1 + TRANS_PER_THREAD).
 */
void *threadMain(void *p) {
    int sockfd, *index = p, rlen;
    *index *= TRANS_PER_THREAD;
    int end = *index + TRANS_PER_THREAD;
    struct sockaddr_in dest_addr;
    bzero(&(dest_addr), sizeof(dest_addr));
    char resvbuf[RESV_BUFF_SIZE];
    char sendbuf[SEND_BUFF_SIZE];
    sockfd = socket(AF_INET, SOCK_STREAM, 0);
    if (sockfd == -1) {
        printf("socket create failed:%d!\n", sockfd);
    }
    dest_addr.sin_family = AF_INET;
    dest_addr.sin_port = htons(DEST_PORT);
    inet_pton(AF_INET, DEST_ADDR, &dest_addr.sin_addr);
    if (connect(sockfd, (struct sockaddr*)&dest_addr, sizeof(struct sockaddr)) == -1) {
        printf("connect failed:%d!\n", errno);
        perror("error: ");
    } else {
        printf("connect success!\n");
        struct timeval tv, ltv;
        for (int i = *index; i < end; ++i) {
            rlen = numLen(i);
            memset(sendbuf, 0, SEND_BUFF_SIZE);
            /**
             * Build the request for the current field.
             * It must follow the RESP protocol.
             */
            sprintf(sendbuf, "*4\r\n$4\r\nHSET\r\n$6\r\nmigkey\r\n$%d\r\nmest%d\r\n$%d\r\nmest67890%d\r\n", rlen + 4, i, rlen + 9, i);
            write(sockfd, sendbuf, strlen(sendbuf));
            /**
             * We do not care about the server's reply,
             * but still read it so the receive buffer does not fill up.
             */
            read(sockfd, resvbuf, RESV_BUFF_SIZE);
            /**
             * Print one log line every LOG_STEP + 1 inserts.
             * LOG_STEP must be of the form 2^n - 1.
             */
            if ((i & LOG_STEP) == 0) {
                gettimeofday(&tv, NULL);
                if (i > *index) {
                    printf("used %ld.%d seconds.\n", tv.tv_sec - ltv.tv_sec, tv.tv_usec - ltv.tv_usec);
                    fflush(stdout);
                }
                ltv = tv;
            }
        }
    }
    printf("finished index:%d.\n", *index);
    close(sockfd);
    return NULL;
}

void handle_pipe(int sig) {
    // printf("sig %d ignore.\n", sig);
}

int main() {
    /**
     * After the TCP client is closed, the server may still send data to us,
     * which would make this process receive a SIGPIPE signal,
     * so a signal handler is registered here.
     */
    struct sigaction sa;
    sa.sa_handler = handle_pipe;
    sigemptyset(&sa.sa_mask);
    sa.sa_flags = 0;
    sigaction(SIGPIPE, &sa, NULL);

    pthread_t pts[THREADS_NUM];
    int indexs[THREADS_NUM];
    struct timeval tv, ltv;
    gettimeofday(&ltv, NULL);
    /**
     * Create the worker threads.
     */
    for (int i = 0; i < THREADS_NUM; ++i) {
        indexs[i] = i;
        if (pthread_create(&pts[i], NULL, threadMain, &indexs[i]) != 0) {
            printf("create thread error!\n");
        } else {
            printf("thread %d created!\n", i);
        }
    }
    /**
     * Wait for all threads to finish.
     */
    for (int i = 0; i < THREADS_NUM; ++i) {
        pthread_join(pts[i], NULL);
    }
    gettimeofday(&tv, NULL);
    printf("\n------All finished!------\n"
           "used %ld.%d seconds.\n", tv.tv_sec - ltv.tv_sec, tv.tv_usec - ltv.tv_usec);
    return 0;
}
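One way to compile and run it (the source file name addkeys.c is an assumption; on macOS the pthread flag is optional but harmless):

cc -pthread -o addkeys addkeys.c
./addkeys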
- Compile and run the program. By modifying the code and running it several times, a total of three large keys were added to the server (a way to verify their sizes is sketched after this list):
- bigkey, with 1,000,000 hash fields, occupying about 58 MB of memory.
- migkey, with 1,000,000 hash fields, occupying about 58 MB of memory.
- sigkey, with 100,000 hash fields, occupying about 5.5 MB of memory.
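The sizes above can be verified on the server; MEMORY USAGE has been available since Redis 4.0 (SAMPLES 0 asks it to scan every field instead of sampling):

redis-cli -h 172.20.23.83 HLEN bigkey                     # number of hash fields (~1,000,000)
redis-cli -h 172.20.23.83 MEMORY USAGE bigkey SAMPLES 0   # approximate memory footprint in bytes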
Prepare test procedure
- The test code is a simple PHP script that sends a GET command to the server every 100 ms. After the delete command for a large key is issued, observe how the interval between GET replies changes to judge how badly the main thread is blocked.
- The script is as follows (it queries a string key named m):
<?php
$socket = socket_create(AF_INET, SOCK_STREAM, SOL_TCP);
socket_connect($socket, '172.20.23.83', 6379);
$st = explode(' ', microtime());
$st = $st[0] + $st[1];
while (true) {
    $msg = "*2\r\n$3\r\nGET\r\n$1\r\nm\r\n";
    socket_write($socket, $msg, strlen($msg));
    $s = socket_read($socket, 100);
    $et = explode(' ', microtime());
    $et = $et[0] + $et[1];
    if (($et - $st) > 0.13) {
        echo "-------------------xxx---------------------\n";
    }
    echo microtime() . '--' . $s;
    usleep(100000);
    $st = $et;
}
socket_close($socket);
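For reference, the RESP request the script builds by hand breaks down like this (standard RESP framing, written out for clarity):

*2\r\n           array of 2 elements: the command plus one argument
$3\r\nGET\r\n    bulk string of length 3: "GET"
$1\r\nm\r\n      bulk string of length 1: the key "m"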
Perform test
- Connect to the Redis server with redis-cli.
- Set a string key named m (an example session is shown after this list).
- Run the PHP script and watch its output.
- In redis-cli, execute the delete command for the large key.
- Stop the PHP script.
- Search the output for the -------------------xxx--------------------- marker.
- Look at the time interval between the output lines immediately before and after the marker.
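An example redis-cli session for the set and delete steps (the value mack matches the $4 mack replies in the results below; the value itself is arbitrary):

127.0.0.1:6379> SET m mack
OK
127.0.0.1:6379> DEL migkey
(integer) 1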
Test results
- migkey:
0.27018300 1643271634--$4
mack
-------------------xxx---------------------
0.66599900 1643271634--$4
mack
- bigkey (the intervals around the second and third markers also exceed 0.13 s, but only slightly, so they can be put down to network jitter; the interval around the first marker is the one to look at):
0.23842800 1643271538--$4
mack
-------------------xxx---------------------
0.76545200 1643271538--$4
mack
-------------------xxx---------------------
0.90037200 1643271538--$4
mack
0.00909800 1643271539--$4
mack
-------------------xxx---------------------
0.48198300 1643271539--$4
mack
- sigkey: no fluctuation was observed.
Summary of preliminary results
- The 5.5 MB key did not cause any obvious blocking.
- A key of about 58 MB caused a block of roughly 300-500 ms (judging from the polling gaps above, which grew from the usual ~0.1 s to 0.4-0.5 s).
Continue to construct larger data
- This is still far from the 2 GB key mentioned online, so larger data needs to be constructed.
- Inserting data through a normal network client is slow even with multiple threads, so the larger data set is loaded in a different way: with redis-cli and its --pipe option.
- A PHP script is used to build a file, 0.txt, containing 30 million Redis commands. The script is as follows:
<?php echo "start...\n"; $st = explode(' ', microtime()); $st = $st[0] + $st[1]; for ($i=0; $i < 30000000; $i++) { $rstr = "{$i}"; $slen = strlen($rstr); $klen = $slen + 4; $vlen = $slen + 9; $msg = "*4\r\n$4\r\nHSET\r\n$6\r\nligkey\r\n\${$klen}\r\ntest{$rstr}\r\n\${$vlen}\r\ntest67890{$rstr}\r\n"; file_put_contents('0.txt',$msg,FILE_APPEND); } $et = explode(' ', microtime()); $et = $et[0] + $et[1]; $used = $et - $st; echo "finished...\n","used $used seconds.\n";
- Run the script to generate the 0.txt file.
- Import the data with cat 0.txt | redis-cli --pipe (see the example below).
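Putting the two steps together (the generator file name gen.php is an assumption; the host matches the address used earlier):

php gen.php                                    # writes ~30 million HSET commands into 0.txt
cat 0.txt | redis-cli -h 172.20.23.83 --pipe   # load the file in pipe mode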
Observe again
- Redis reports that the new key occupies about 1.8 GB of memory.
- Run the observation script.
- Delete the key ligkey (example commands follow this list).
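The size can be checked before deleting, and the blocking time can be cross-checked by timing the delete directly (the note does not say which commands were used; these are illustrative):

redis-cli -h 172.20.23.83 MEMORY USAGE ligkey SAMPLES 0   # approximate size of the key in bytes
time redis-cli -h 172.20.23.83 DEL ligkey                 # wall-clock time of the blocking DEL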
Big data observation results
0.31861400 1643279405--$4
mack
-------------------xxx---------------------
0.39795600 1643279424--$4
mack
- Deleting the 1.8 GB key took about 19 s: the two timestamps above are 1643279424.397956 - 1643279405.318614 ≈ 19.08 s apart.
Conclusion
- Deleting a large key really does block the main thread for a long time, and the blocking time grows with the size of the data.
- In the next note I will dig into the source code to understand why the main thread is blocked and whether there is a way around it.