The concept of pool
Because a server's hardware resources are relatively "abundant", a straightforward way to improve its performance is to trade space for time, that is, to "waste" some hardware resources in exchange for running efficiency. This is the idea behind a pool. A pool is a collection of resources that are created and fully initialized when the server starts, which is called static resource allocation. When the server enters its normal operation stage, i.e. when it begins to process client requests, any resources it needs can be taken directly from the pool instead of being allocated dynamically. Obviously, taking a resource directly from the pool is much faster than allocating it dynamically, because the system calls that allocate system resources are time-consuming. When the server finishes handling a client connection, the related resources can be put back into the pool without performing any system call to release them. In effect, the pool acts as an application-level facility for managing system resources, sparing the server frequent trips into the kernel.
Pools come in many kinds, including the memory pool, process pool, thread pool, and connection pool.
Memory pool
A memory pool is a memory allocation technique. Usually we are accustomed to requesting memory directly with calls such as new and malloc. The drawback of this approach is that, because the sizes of the requested blocks vary, frequent use produces a large amount of memory fragmentation and degrades performance.
A memory pool instead pre-allocates a number of memory blocks, generally of equal size, before the memory is actually needed. When a new memory request arrives, a block is handed out from the pool; if the pool's blocks are insufficient, it requests a further batch of memory from the system. A significant advantage of this approach is that it makes memory allocation much more efficient.
Process pool and thread pool
A process pool is similar to a thread pool, so here we take the process pool as the example. Unless otherwise stated, the following description of process pools also applies to thread pools.
A process pool is a set of sub-processes pre-created by the server, with the number of these sub-processes typically ranging from 3 to 10 (a typical case, of course). The number of threads in a thread pool should be roughly the same as the number of CPUs.
All the sub-processes in the pool run the same code and have the same attributes, such as priority and PGID.
When a new task arrives, the main process somehow selects a sub-process from the pool to serve it. The cost of choosing an existing sub-process is far lower than that of dynamically creating one. As for which sub-process the main process chooses to serve the new task, there are two approaches:
1) The main process uses some algorithm to actively select a sub-process. The simplest and most commonly used algorithms are the random algorithm and Round Robin (rotation).
2) The main process and all the sub-processes synchronize through a shared work queue on which the sub-processes sleep. When a new task arrives, the main process adds it to the work queue. This wakes up the sub-processes waiting for tasks, but only one of them gains "ownership" of the new task: it removes the task from the queue and executes it, while the other sub-processes go back to sleep on the queue.
After selecting the sub-process, the main process still needs some notification mechanism to tell the target sub-process that a new task needs processing, and to pass it the necessary data. The simplest way is to pre-establish a pipe between the parent process and each child process, and then use that pipe for all inter-process communication. Passing data between a main thread and worker threads is much simpler, because such data can be defined as global and is then shared by all threads automatically.
A thread pool is mainly suited to:
1) Tasks that require a large number of threads but are individually short-lived. For example, a web server handling page requests is very well suited to thread pool technology, because each task is small while the number of tasks is huge. For long-lived tasks, however, such as a Telnet connection, the advantage of a thread pool is not obvious, since the session time is far longer than the thread creation time.
2) Applications with demanding performance requirements, such as servers that must respond to client requests quickly.
3) Applications that must absorb large bursts of requests without the server having to create a large number of threads to do so.
Process pool implementation source code:
For a Linux implementation of process pool technology, see the original post:
http://topic.csdn.net/u/20090206/16/b424e1c1-90dc-4589-a63f-1d90ed6560ae.html.