- Most nginx installation guides cover the same basics: install it through apt-get, tweak a few configuration lines here and there, and you have a web server. In most cases a stock nginx installation already serves a website well. However, if you really want to squeeze performance out of nginx, you have to go deeper. In this guide I explain which nginx settings can be tuned to improve performance when handling a large number of clients. Note that this is not a comprehensive tuning guide; it is a quick overview of settings that can be adjusted to improve performance. Your situation may differ.
- Basic (optimized) configuration
- The only file we will modify is nginx.conf, which contains the settings for all of nginx's modules. You should be able to find it in the server's /etc/nginx directory. First we cover some global settings, then walk through the file module by module, discussing which settings give good performance under a large number of concurrent clients and why they help. A complete configuration file appears at the end of this article.
1.worker_connections
Function: defines the maximum number of connections each worker process may have open simultaneously. This parameter is capped by the operating system's per-process open-file limit.
Default value: 1024
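As a rough sketch of how this interacts with the other worker directives (the values below are illustrative, not recommendations), the theoretical client ceiling is about worker_processes × worker_connections, and each worker's limit must stay below the per-process open-file limit:

```nginx
# Illustrative values only - tune to your own hardware and ulimit.
worker_processes auto;        # one worker per CPU core
worker_rlimit_nofile 8192;    # raise the worker's open-file limit first

events {
    # Max simultaneous connections per worker; with 4 workers this allows
    # roughly 4 * 4096 clients (halve that when proxying, since each request
    # then uses a client-side and an upstream-side connection).
    worker_connections 4096;
}
```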
2.accept_mutex on|off;
Default value: on before 1.11.3, off in 1.11.3 and later
Note: this only matters when multiple worker processes are configured. When it is on, workers take turns accepting new connections; when it is off, every worker is notified of each new connection, which can cause a degree of the "thundering herd" problem.
For applications dominated by short-lived connections, it is usually better to turn this parameter on, to avoid excessive context-switching overhead.
For long-lived connection applications, it is usually better to turn it off, so that connections do not pile up on one worker and drive that process's CPU usage too high.
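A minimal sketch of the two cases described above (the directive is real; choosing on or off is the workload-dependent judgment call):

```nginx
# Short-lived connections (e.g. small API requests): serialize accept()
# so every worker is not woken up for each new connection.
events {
    accept_mutex on;
}

# Long-lived connections (e.g. WebSocket, streaming): let all workers
# compete for new connections so load spreads more evenly.
# events {
#     accept_mutex off;
# }
```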
3.multi_accept on|off;
Function: when set to on, a worker process accepts all pending new connections at once. The default is off, meaning a worker process accepts only one new connection at a time. This directive is ignored when the kqueue connection-processing method is used.
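A short sketch of the directive in context, with the trade-off as a comment:

```nginx
events {
    # Accept every pending connection per event notification instead of one.
    # Helpful under connection bursts; with very slow request handling it can
    # make per-worker load less even. Ignored when `use kqueue` is in effect.
    multi_accept on;
}
```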
4.accept_mutex_delay
When accept_mutex is set to on, you may need to tune the accept_mutex_delay parameter for your workload. It specifies how long a worker process waits before trying to acquire the accept mutex again after failing to get it. An appropriate value helps reduce worker load imbalance. The default is 500ms.
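A sketch of the pair of directives together (the 200ms value is illustrative, not a recommendation):

```nginx
events {
    accept_mutex on;
    # How long a worker that failed to grab the accept mutex waits before
    # trying again. 500ms is the default; a smaller value lets idle workers
    # pick up new connections sooner at the cost of more lock contention.
    accept_mutex_delay 200ms;
}
```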
5.use epoll;
The Nginx server provides multiple event driver models to process network messages.
The supported methods are: select, poll, kqueue, epoll, rtsig, /dev/poll, and eventport.
select: the standard fallback method, and the only method available on Windows. Not recommended for high-load systems.
poll: another standard fallback method, but it is not available on all systems.
kqueue: the most efficient method on FreeBSD 4.1+, OpenBSD 2.9+, NetBSD 2.0, and Mac OS X.
epoll: the most efficient method on Linux 2.6+ kernels.
rtsig: real-time signals, usable on Linux kernels since 2.2.19, but unsuitable for high-traffic systems (and removed from modern nginx releases).
/dev/poll: the most efficient method on Solaris 7 11/99+, HP/UX 11.22+, IRIX 6.5.15+, and Tru64 UNIX 5.1A+.
eventport: the most efficient method on Solaris 10.
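In practice nginx autodetects the most efficient method for the platform it was built on, so `use` is mostly useful for being explicit or for debugging; a minimal sketch:

```nginx
events {
    use epoll;     # Linux 2.6+
    # use kqueue;  # FreeBSD / OpenBSD / NetBSD / macOS
}
```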
High-level configuration
In the nginx.conf file, a handful of top-level directives sit above the module sections.
user www-data;
pid /usr/local/nginx/logs/nginx.pid;
worker_processes auto;
# worker_processes defines the number of worker processes nginx uses to serve web traffic.
# The optimal value depends on many factors, including (but not limited to) the number of
# CPU cores, the number of disks storing data, and the load pattern. When unsure, setting
# it to the number of available CPU cores is a good start ("auto" tries to detect this automatically).
worker_rlimit_nofile 100000;
# worker_rlimit_nofile changes the open-file limit for worker processes. If unset, the
# value is the operating system's limit. Raising it lets nginx handle more files than
# "ulimit -a" would allow, so set it high enough that nginx never hits "too many open files".
Events module
The events module contains all of nginx's connection-processing settings.
events {
    worker_connections 2048;
    # worker_connections sets the maximum number of connections a worker process
    # may open simultaneously. Because we raised worker_rlimit_nofile above, we
    # can set this value quite high.
    multi_accept on;
    # multi_accept tells nginx to accept as many connections as possible after
    # receiving a notification about a new connection.
    use epoll;
    # use selects the event-processing method for multiplexing client connections.
    # On Linux 2.6+ use epoll; on *BSD use kqueue.
}
HTTP module
The HTTP module controls all the core features of nginx's HTTP handling. Since there are far too many directives to cover, we pull out only a small part of the configuration. All of these settings belong in the http module; some of them you may never have noticed before.
http {
    server_tokens off;
    # server_tokens does not make nginx faster, but it hides the nginx version
    # number on error pages, which is good for security.
    sendfile on;
    # sendfile enables the sendfile() system call. sendfile() copies data between
    # a disk file and a TCP socket (or any two file descriptors). Before
    # sendfile(), sending a file meant allocating a buffer in user space,
    # read()ing the file into it, and write()ing the buffer to the network.
    # sendfile() reads the data straight into the OS cache; because the copy
    # happens in the kernel, it is more efficient than a read()/write() pair.
    tcp_nopush on;
    # tcp_nopush tells nginx to send all header files in one packet rather than one by one.
    tcp_nodelay on;
    # tcp_nodelay tells nginx not to buffer small pieces of data but to send them
    # immediately. Set this when the application sends small chunks of data that
    # must arrive without delay.
    access_log off;
    # access_log controls whether nginx stores access logs. Turning it off
    # reduces disk I/O (aka, YOLO).
    error_log /var/log/nginx/error.log crit;
    # Common error log levels, from most to least verbose:
    # [debug | info | notice | warn | error | crit | alert | emerg].
    # The further right in the list, the less is recorded. Production systems
    # usually use warn, error, or crit. Here crit tells nginx to log only serious errors.
    keepalive_timeout 10;
    # keepalive_timeout assigns a keep-alive timeout to client connections; the
    # server closes a connection after this period. Setting it low frees
    # connections sooner so nginx can keep serving new clients.
    client_header_timeout 10;
    client_body_timeout 10;
    # client_header_timeout and client_body_timeout set the timeouts for reading
    # the request header and request body, respectively. These can also be set low.
    reset_timedout_connection on;
    # reset_timedout_connection tells nginx to reset connections of unresponsive
    # clients, freeing the memory those clients occupy.
    send_timeout 10;
    # send_timeout sets the response timeout toward the client. It does not apply
    # to the whole transfer, only to the interval between two client read
    # operations; if the client reads nothing within this time, nginx closes the connection.
    limit_conn_zone $binary_remote_addr zone=addr:5m;
    # limit_conn_zone defines a shared-memory zone that stores state for each key
    # (such as the current number of connections). 5m means 5 megabytes; the zone
    # should be large enough to store (32K*5) 32-byte states or (16K*5) 64-byte states.
    limit_conn addr 100;
    # limit_conn sets the maximum number of connections for a given key. Here the
    # key is addr and the limit is 100, i.e. each IP address may hold at most
    # 100 connections at once.
    include mime.types;
    default_type application/octet-stream;
    charset UTF-8;
    # charset sets the default character set in our response headers.
    fastcgi_connect_timeout 300;
    fastcgi_send_timeout 300;
    fastcgi_read_timeout 300;
    fastcgi_buffer_size 64k;
    fastcgi_buffers 4 64k;
    fastcgi_busy_buffers_size 128k;
    fastcgi_temp_file_write_size 256k;
    # The FastCGI parameters improve site performance by reducing resource usage
    # and speeding up access; their names are largely self-explanatory.
    gzip on;
    # gzip tells nginx to compress responses with gzip, reducing the amount of data we send.
    gzip_disable "msie6";
    # gzip_disable turns gzip off for the matched clients. We disable it for IE6
    # and earlier so our responses stay broadly compatible.
    gzip_proxied any;
    # gzip_proxied enables or disables compression of proxied responses based on
    # the request and response. "any" means all proxied requests are compressed.
    gzip_min_length 1k;
    # gzip_min_length sets the minimum response size to compress. Responses under
    # about 1000 bytes are better left uncompressed, since compressing such small
    # data slows down every process handling the request.
    gzip_buffers 4 16k;
    # Compression buffers.
    gzip_comp_level 2;
    # gzip_comp_level sets the compression level, any number from 1 to 9; 9 is
    # the slowest but gives the highest compression ratio.
    gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;
    # gzip_types sets the MIME types to compress. The list above covers the
    # common ones; you can add more formats.
    open_file_cache max=100000 inactive=20s;
    # open_file_cache sets the maximum number of cache entries and how long to
    # cache open file handles; entries inactive for more than 20 seconds are cleared.
    open_file_cache_valid 30s;
    # open_file_cache_valid sets how often to revalidate the information held in
    # open_file_cache.
    open_file_cache_min_uses 2;
    # open_file_cache_min_uses defines the minimum number of uses during the
    # inactive period for a file to stay in the cache.
    open_file_cache_errors on;
    # open_file_cache_errors specifies whether file-lookup errors are cached as well.
    # We would also include our server blocks here, which are usually defined in
    # separate files; if your server blocks live elsewhere, adjust the include
    # line to point at the correct location.
}
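The limit_conn pair above can also be applied more selectively inside a server block. A hypothetical sketch (the /downloads/ location and the limit of 10 are made up for illustration):

```nginx
http {
    # Zone as defined above: per-client-IP state in 5 MB of shared memory.
    limit_conn_zone $binary_remote_addr zone=addr:5m;

    server {
        listen 80;
        # Keep the global limit generous...
        limit_conn addr 100;

        location /downloads/ {
            # ...but cap bandwidth-heavy endpoints harder per client IP.
            limit_conn addr 10;
        }
    }
}
```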
Complete optimized configuration
user root;
worker_processes auto;
worker_cpu_affinity auto;
worker_rlimit_nofile 65535;
worker_priority -10;

events {
    worker_connections 65535;
    accept_mutex on;
    multi_accept on;
    accept_mutex_delay 500ms;
    use epoll;
}

http {
    server_tokens off;
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    access_log off;
    error_log /var/log/nginx/error.log crit;
    keepalive_timeout 60;
    client_header_timeout 10;
    client_body_timeout 10;
    reset_timedout_connection on;
    send_timeout 10;
    limit_conn_zone $binary_remote_addr zone=addr:5m;
    limit_conn addr 100;
    include mime.types;
    default_type application/octet-stream;
    charset UTF-8;
    fastcgi_connect_timeout 300;
    fastcgi_send_timeout 300;
    fastcgi_read_timeout 300;
    fastcgi_buffer_size 64k;
    fastcgi_buffers 4 64k;
    fastcgi_busy_buffers_size 128k;
    fastcgi_temp_file_write_size 256k;
    gzip on;
    gzip_disable "MSIE [1-6]\.";
    gzip_buffers 4 16k;
    gzip_proxied any;
    gzip_min_length 1k;
    gzip_comp_level 2;
    gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;
    open_file_cache max=100000 inactive=20s;
    open_file_cache_valid 30s;
    open_file_cache_min_uses 2;
    open_file_cache_errors on;
}