02 Architecture 03.3 Detailed configuration of Nginx

Static page serving speed: Nginx vs Tomcat

Of all common web servers, Nginx is the fastest at serving static resources.

Configure Nginx page

#Configure nginx
[root@web01 ~]# vim /etc/nginx/conf.d/ab.linux.com.conf
server {
    listen 80;
    server_name ab.linux.com;

    location / {
        root /code/ab;
        try_files $uri $uri/ @tomcat;
        index index.html;
    }

    location @tomcat {
        proxy_pass http://172.16.1.31:8080;
    }       
}

#Configure a static page for nginx
mkdir -p /code/ab
echo nginx_linux > /code/ab/index.html
chown -R www.www /code

#Restart Nginx
[root@web01 ~]# systemctl restart nginx

#Add resolution
C:\Windows\System32\drivers\etc\hosts
10.0.0.31 ab.linux.com

#visit
ab.linux.com = ab.linux.com/index.html
	#Display results
nginx_linux

Configure Tomcat page

#Configure a static page for Tomcat
echo tomcat_linux > /usr/local/tomcat/webapps/ROOT/tomcat.html
#Restart Tomcat
[root@web01 ~]# systemctl restart tomcat
#visit
ab.linux.com/tomcat.html
	#Display results
tomcat_linux

Stress testing with ab

	#Install the ab stress-testing tool
yum -y install httpd-tools

	#ab command options
ab options

-n	Total number of requests to perform in the test session (by default only one request is made)
-c	Number of requests issued concurrently, i.e. the number of simulated users (default: one at a time)
-t	Maximum number of seconds the test may take; it implies -n 50000 and limits the test to a fixed total time (by default there is no time limit)

	#Note: the URL being tested must end with / or a URI, and the test domain must resolve on the host running ab
	#Meaning of the test results
Server Software:		#Web server software and version
Server Hostname:		#Domain name or IP
Server Port:			#Service port
Document Path:			#Path of the requested static resource
Document Length:		#Size of the requested static resource
Concurrency Level:      #Number of concurrent requests
Time taken for tests:   #Total time taken
Complete requests:      #Number of completed requests
Failed requests:        #Number of failed requests
Write errors:           #Number of write errors
Total transferred:      #Total bytes transferred for all requests
HTML transferred:       #Bytes transferred excluding response headers
Requests per second:    #Requests handled per second
Time per request:       #Average time the client waits for each request
Time per request:       #Average time the server spends on each request [unit: ms]
Transfer rate:          #Amount of data transferred per second

	#Benchmark Nginx's handling of static resources
[root@web01 ~]# ab -n 10000 -c 200 http://ab.linux.com/
Server Software:        nginx/1.18.0
Server Hostname:        ab.linux.com
Server Port:            80

Document Path:          /
Document Length:        12 bytes

Concurrency Level:      200
Time taken for tests:   0.836 seconds
Complete requests:      10000
Failed requests:        0
Write errors:           0
Total transferred:      2420000 bytes
HTML transferred:       120000 bytes
Requests per second:    11961.61 [#/sec] (mean)
Time per request:       16.720 [ms] (mean)
Time per request:       0.084 [ms] (mean, across all concurrent requests)
Transfer rate:          2826.86 [Kbytes/sec] received

	#Stop Nginx before benchmarking Tomcat directly, so the comparison avoids the proxy hop and is more accurate [in practice the difference is small]
systemctl stop nginx
	#Benchmark Tomcat's handling of static resources
[root@web01 ~]# ab -n 10000 -c 200 http://ab.linux.com:8080/tomcat.html
Server Software:        
Server Hostname:        ab.linux.com
Server Port:            8080

Document Path:          /tomcat.html
Document Length:        13 bytes

Concurrency Level:      200
Time taken for tests:   2.131 seconds
Complete requests:      10000
Failed requests:        0
Write errors:           0
Total transferred:      2270000 bytes
HTML transferred:       130000 bytes
Requests per second:    4693.11 [#/sec] (mean)
Time per request:       42.616 [ms] (mean)
Time per request:       0.213 [ms] (mean, across all concurrent requests)
Transfer rate:          1040.37 [Kbytes/sec] received


http, server, location

Include file

One server often hosts multiple websites. If every site's configuration is written into the main nginx.conf, that file becomes very large and hard to read, and later maintenance becomes troublesome.

Suppose you need to take a site offline quickly. What do you do?
1. If everything is written in nginx.conf, the block has to be commented out by hand, which is tedious.
2. If each site is pulled in with include, you only need to change the extension of (or move) that site's configuration file. The purpose of include is to keep the main configuration file small and readable.

include /etc/nginx/online/*.conf #Online configurations

/etc/nginx/offline #Configurations kept but not enabled (move back to online to re-enable)
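
A minimal sketch (assuming the online/offline layout above and a hypothetical site file named ab.linux.com.conf) of taking a site offline and bringing it back by moving its configuration and reloading Nginx:

	#Take the site offline: move its config out of the included directory, test and reload
mv /etc/nginx/online/ab.linux.com.conf /etc/nginx/offline/
nginx -t && systemctl reload nginx

	#Bring it back online later
mv /etc/nginx/offline/ab.linux.com.conf /etc/nginx/online/
nginx -t && systemctl reload nginx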

Module syntax

# The documentation for each nginx module can be found at
# http://nginx.org/en/docs/ (see the "Modules reference" section at the bottom of that page)

Syntax:		how the directive is written
Default:	its default value
Context:	where the directive may be placed

1. Directory index module

Syntax
ngx_http_autoindex_module

Syntax:	autoindex on | off;
Default:autoindex off;
Context:http, server, location
example
#Set site directory
mkdir -p /code/web01
cp /etc/services /code/web01/a
echo 'The earth is a beautiful world' > /code/web01/The earth is beautiful.txt
chown -R www.www /code

#Set site profile
vim /etc/nginx/conf.d/web01.conf
server {
    listen 80;
    server_name www.web01.com;

	#Set the log file separately for viewing
    access_log /var/log/nginx/www.web01.com.log main;
    #If the Chinese character set is not set, the Chinese characters on the web page will appear garbled
    charset 'utf-8,gbk';
    
    location / {
        root /code/web01;
        #Enable directory listing [also make sure /code/web01/index.html does not exist, or it would be served instead]
        autoindex on;
        #off shows human-readable sizes (K/M/G); on shows the exact size in bytes
        autoindex_exact_size off;
        #on shows the file's last-modified time in server local time; off shows GMT, which is 8 hours behind China
        autoindex_localtime on;
        }
}

# Windows parsing
C:\Windows\System32\drivers\etc\hosts
10.0.0.31 www.web01.com
# Visit a web page to view
www.web01.com
# The results of the web page are as follows:
Index of /
-------------------------------
../
a                                                  08-Jul-2020 17:36    655K
 The earth is beautiful.txt                                08-Jul-2020 17:00      28
-------------------------------

2. Access control module

Syntax
ngx_http_access_module

#Allow access syntax
Syntax:	allow address | CIDR | unix: | all;
Default:	—
Context:	http, server, location, limit_except

#Access denied syntax
Syntax:	deny address | CIDR | unix: | all;
Default:	—
Context:	http, server, location, limit_except
example
#Typical scenario: only the company's internal network segment (or staff connected through the company VPN) may reach the admin backend; requests from all other addresses are denied

#Set site directory
mkdir -p /code/web02
echo 'web02' > /code/web02/index.html
chown -R www.www /code

#Set site profile
vim /etc/nginx/conf.d/web02.conf
server {
    listen 80;
    server_name www.web02.com;

    access_log /var/log/nginx/www.web02.com.log main;
    charset 'utf-8,gbk';

    location / {
        root /code/web02;
        index index.html;
        #Allow hosts from the 10.0.0.0/24 segment to access this location
        allow 10.0.0.0/24;
        #Deny all hosts outside the 10.0.0.0/24 segment
        deny all;

#Above: allow first, then deny all. Below (commented out): deny first, then allow all. You can also use allow alone or deny alone.
#The allow/deny target can be a network such as 10.0.0.0/24 or a single address such as 10.0.0.1
        
        #Deny hosts from the 10.0.0.0/24 segment
        #deny 10.0.0.0/24;
        #Allow all hosts outside the 10.0.0.0/24 segment
        #allow all;
        }
}

# Windows parsing
C:\Windows\System32\drivers\etc\hosts
10.0.0.31 www.web02.com
# Visit a web page to view
www.web02.com
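
A quick check of the allow/deny rules above, a sketch assuming it is run on web01 itself (a connection to 10.0.0.31 leaves with source address 10.0.0.31, inside 10.0.0.0/24; a connection to 172.16.1.31 leaves with source address 172.16.1.31, outside it):

	#Request arriving from the 10.0.0.0/24 segment: the page is returned
curl -H 'Host: www.web02.com' http://10.0.0.31/        #expected: web02
	#Request arriving from another segment: access is denied
curl -H 'Host: www.web02.com' http://172.16.1.31/      #expected: 403 Forbidden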

3. Access authentication module

Syntax
ngx_http_auth_basic_module

#When authentication is enabled, a realm string must be given (its content is arbitrary; Chrome does not display it, while e.g. the 360 browser in compatibility mode does)
Syntax:	auth_basic string | off;
Default:	auth_basic off;
Context:	http, server, location, limit_except

#File that holds the usernames and passwords
Syntax:	auth_basic_user_file file;
Default:	—
Context:	http, server, location, limit_except
example
#Typical scenario: a username and password must be entered to access the page

#Create a user name and password first
[root@web01 ~]# htpasswd -c /etc/nginx/conf.d/auth_basic mcy
New password: 
Re-type new password: 
Adding password for user mcy
#The password can also be set on the command line (here account abc, password 321), but this way it is easily seen by others
[root@web01 ~]# htpasswd -cb /etc/nginx/conf.d/auth_basic abc 321
Adding password for user abc
#Several password files can be created, but one location can reference only a single auth_basic_user_file
htpasswd -cb /etc/nginx/conf.d/m1 test01 1
htpasswd -cb /etc/nginx/conf.d/m2 test02 2


#Set site directory
mkdir -p /code/web03
echo 'web03' > /code/web03/index.html
chown -R www.www /code

#Set site profile
vim /etc/nginx/conf.d/web03.conf 
server {
    listen 80;
    server_name www.web03.com;

    access_log /var/log/nginx/www.web03.com.log main;
    charset 'utf-8,gbk';
    
    location / {
        root /code/web03;
        index index.html;
        #Enable authentication; the realm string shown to the visitor is "Do not allow illegal user access!" [a string must be given, but its content is arbitrary]
        auth_basic "Do not allow illegal user access!";
        #Password file to use; only users stored in this one file can log in here
        auth_basic_user_file /etc/nginx/conf.d/auth_basic;
        }
}

# Windows parsing
C:\Windows\System32\drivers\etc\hosts
10.0.0.31 www.web03.com
# Visit a web page to view
www.web03.com
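
A quick check of the authentication from the command line, a sketch (PASSWORD stands for whatever was set with htpasswd for the mcy user):

	#Without credentials Nginx answers 401
curl -o /dev/null -s -w '%{http_code}\n' -H 'Host: www.web03.com' http://10.0.0.31/    #expected: 401
	#With valid credentials the page is returned
curl -u mcy:PASSWORD -H 'Host: www.web03.com' http://10.0.0.31/                         #expected: web03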

4. Status module

Syntax
ngx_http_stub_status_module

Syntax:	stub_status;
Default:	—
Context:	server, location
example
#Set site profile
vim /etc/nginx/conf.d/web04.conf
server {
    listen 80;
    server_name www.web04.com;

    access_log /var/log/nginx/www.web04.com.log main;
    charset 'utf-8';

    #Add a URI for the status page [the /mcy directory does not need to exist on disk]
    location /mcy {
    	#Set the page where you can view the connection statistics
    	stub_status;
    }
}

# Windows parsing
C:\Windows\System32\drivers\etc\hosts
10.0.0.31 www.web04.com
# Visit a web page to view
www.web04.com/mcy
Nginx 7 states
#Visit http://www.web04.com/mcy ; the page looks like this
------------------------------------------------------------
Active connections: 2 
server accepts handled requests
         4 			4 		   8 
Reading: 0 Writing: 1 Waiting: 1 
------------------------------------------------------------
Active connections		# Number of currently active connections
accepts					# Total TCP connections accepted
handled					# TCP connections successfully handled
requests				# Total HTTP requests

Reading					# Connections currently reading request headers
Writing					# Connections currently writing responses back to clients
Waiting					# Idle keep-alive connections waiting for the next request [requires keepalive, e.g. keepalive_timeout 65; in the main configuration]

# Note that one TCP connection can carry multiple HTTP requests. This can be verified with the following parameter
keepalive_timeout  0;   # Effectively disables keep-alive (long connections)
keepalive_timeout  65;  # Close the connection after 65 seconds of inactivity
------------------------------------------------------------
Collecting the request count
#Method 3 is recommended: it is safe and does not require creating a new configuration file

# Method 1: with the settings above, simply open http://www.web04.com/mcy in a browser to see the counters

# Method 2: modify based on the above settings
vim /etc/nginx/conf.d/web04.conf
server {
    listen 80;
    server_name 127.0.0.1;

    access_log /var/log/nginx/www.web04.com.log main;
    charset 'utf-8';

    #Add a URI for the status page [the /mcy directory does not need to exist on disk]
    location /mcy {
        #Set the page where you can view the connection statistics
        stub_status;
    }
}

# Get the total request count from the command line:
curl -s 127.0.0.1/mcy | awk 'NR==3{print $3}'

# Method 3: add the status location to any existing Nginx configuration file and restrict it to local access
vim /etc/nginx/conf.d/web01.conf
server {
    listen 80;
    server_name www.web01.com;
    
    location / {
        root /code/web01;
        autoindex on;
    }
    
    location /mcy {
    	stub_status;
    	allow 127.0.0.1;
    	deny all;
    }
}

# Get the total request count from the command line:
curl -s 127.0.0.1/mcy | awk 'NR==3{print $3}'
# The collected value can be written to a file and shipped to a central monitoring host, as sketched below
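
A minimal sketch of such a collector (the log path is hypothetical), suitable for running from cron once a day:

#!/bin/bash
# read the total request counter from the stub_status page and append it with a timestamp
total=$(curl -s 127.0.0.1/mcy | awk 'NR==3{print $3}')
echo "$(date '+%F %T') requests=${total}" >> /var/log/nginx/request_count.log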

5. Connection restriction module

Syntax
ngx_http_limit_conn_module

#Limit the number of simultaneous connections per client IP; a shared memory zone stores the state for each IP

#Define the shared memory zone / key
	     #  key to track       zone = zone name : zone size
Syntax:     limit_conn_zone    key          zone=name:size;
Default: —
Context: http

#Reference the zone defined above
Syntax:	limit_conn zone number;
Default:	—
Context:	http, server, location
example
#In production this limit is rarely applied to a load-balanced service, because the more visitors it can serve, the better

#Set master profile
vim /etc/nginx/nginx.conf 

user  nginx;
worker_processes  1;

error_log  /var/log/nginx/error.log warn;
pid        /var/run/nginx.pid;

events {
    worker_connections  1024;
}

http {
    include       /etc/nginx/mime.types;
    default_type  application/octet-stream;

    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';

    access_log  /var/log/nginx/access.log  main;

    sendfile        on;
    #tcp_nopush     on;

    keepalive_timeout  65;

    #gzip  on;
    
    #Define the connection-limit zone, keyed by the client IP 	   zone = zone name : zone size
    limit_conn_zone $remote_addr zone=accessip:10m;
    #Reference the zone by name [the name is arbitrary but must match the definition above]; each client IP may hold only 1 connection at a time
    limit_conn accessip 1;

    include /etc/nginx/conf.d/*.conf;
}

#Test method (see the sketch below)
web01: configure the connection limit
web02 and web03: access web01 at the same time
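
A sketch of such a test: run ab from one client against web01. With limit_conn accessip 1, any connection beyond the single one that client IP already holds at that moment is answered with 503, which ab reports as Non-2xx responses (assumes www.web01.com resolves on the client).

ab -n 100 -c 10 http://www.web01.com/ | grep -E 'Complete requests|Non-2xx'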

6. Restriction request module

Syntax
ngx_http_limit_req_module

#Define the shared memory zone and request rate
		#                  key        zone = zone name : size    rate (e.g. 1r/s)
Syntax:  limit_req_zone 	key 	zone=name:size rate=rate;
Default: —
Context: http

#Reference the zone defined above
Syntax:    limit_req zone=name [burst=number] [nodelay];
Default: —
Context: http, server, location
example
#Before applying a limit, measure how many requests a normal page load of your site makes; if the limit is too low the page will not render completely

#Set site directory
mkdir -p /code/web06
echo 'web06' > /code/web06/index.html
chown -R www.www /code

#Configure request restriction module
vim /etc/nginx/nginx.conf
...slightly...
http {
...slightly...
	#Request-limit zone keyed by the client IP 		  zone = zone name : zone size (10m); at most 1 request per second is accepted per IP
	limit_req_zone $remote_addr zone=req_zone:10m rate=1r/s;
...slightly...
}

#Set site profile
vim /etc/nginx/conf.d/web06.conf
server {
    listen 80;
    server_name www.web06.com;

    access_log /var/log/nginx/www.web06.com.log main;
    charset 'utf-8,gbk';

    
    location / {
        root /code/web06;
        index index.html;
        
        #Method 1: each client IP is allowed only 1 request per second
        #Reference the request-limit zone by name
    	limit_req zone=req_zone;
    	
    	#Method 2: [burst queue] allow 1 request per second plus a burst of up to 5 extra requests
		#With nodelay the burst requests are served immediately and anything beyond the burst is rejected; without nodelay the excess requests are delayed instead of rejected
		#limit_req zone=req_zone burst=5 nodelay;
        }
}

# Windows parsing
C:\Windows\System32\drivers\etc\hosts
10.0.0.31 www.web06.com
# Open the page in a browser and hold F5 to refresh rapidly; 503 errors will appear from time to time
www.web06.com
Validating the request limit
#Resolve the domain name in / etc/hosts
echo '172.16.1.31 www.web06.com' >> /etc/hosts

#Install ab pressure measuring tool
yum -y install httpd-tools

	#The ab options and the meaning of the result fields are the same as in the stress-testing section at the top of this document

#Only 1 request is accepted per second
[root@web01 conf.d]# ab -n 20 -c 2 http://www.web06.com/
Server Software:        nginx/1.16.1
Server Hostname:        www.mario.com
Server Port:            80
Document Path:          /download/
Document Length:        179 bytes
Concurrency Level:      2
Time taken for tests:   0.002 seconds
Complete requests:      20
Failed requests:        19
   (Connect: 0, Receive: 0, Length: 19, Exceptions: 0)
   
#After configuring a burst of 5 requests
[root@web01 code]# ab -n 20 -c 2 http://www.web06.com/
Server Software:        nginx/1.16.1
Server Hostname:        www.mario.com
Server Port:            80
Document Path:          /download/
Document Length:        179 bytes
Concurrency Level:      2
Time taken for tests:   0.002 seconds
Complete requests:      20
Failed requests:        14
   (Connect: 0, Receive: 0, Length: 14, Exceptions: 0)

7. Upload file size

Syntax
Syntax:  client_max_body_size size;
Default: client_max_body_size 1m;
Context: http, server, location
example
#Place it in the http block for a global limit, or in a server or location block to give different pages different upload limits
server {
    listen 80;
    server_name _;
    client_max_body_size 200m;
}
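
A quick check, a sketch assuming the server block above is the one answering requests sent to 10.0.0.31: upload a body larger than the 200m limit and Nginx answers 413 Request Entity Too Large.

	#Create a 300M test file, then POST it
dd if=/dev/zero of=/tmp/300m.bin bs=1M count=300
curl -o /dev/null -s -w '%{http_code}\n' -X POST --data-binary @/tmp/300m.bin http://10.0.0.31/    #expected: 413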

Server layer

priority

1. First, the server_name that exactly matches the requested host name is used, e.g. www.mumusir.com (exact match)
2. Next, a server_name with a leading wildcard, e.g. *.mumusir.com
3. Then a server_name with a trailing wildcard, e.g. www.mumusir.*
4. Then a server_name matched by a regular expression, e.g. ~^www\.(.*)\.com$
5. If none of these match, the server block whose listen directive carries [default_server] is used,
	e.g. listen 80 default_server;
6. Failing that, the first server block that matches the listen port is used

When several configuration files declare the same server_name, the one with higher priority (loaded first) is used. It is therefore recommended to keep the port the same and the domain names different, so that no domain access conflict occurs (a sketch of the matching order follows).
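
A minimal sketch of that matching order (the domains are the hypothetical examples from the list above), with every block listening on port 80:

server { listen 80; server_name www.mumusir.com;    default_type text/plain; return 200 "exact match\n"; }
server { listen 80; server_name *.mumusir.com;      default_type text/plain; return 200 "leading wildcard\n"; }
server { listen 80; server_name www.mumusir.*;      default_type text/plain; return 200 "trailing wildcard\n"; }
server { listen 80; server_name ~^www\.(.*)\.com$;  default_type text/plain; return 200 "regex match\n"; }
server { listen 80 default_server; server_name _;   default_type text/plain; return 200 "default_server\n"; }

	#e.g. curl -H 'Host: www.mumusir.com' http://10.0.0.31/ hits the exact match; a host name matching none of the patterns falls through to default_server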

#If a single machine has multiple configuration files and page access goes wrong
1. If an error is reported outright, check the hosts resolution
2. If the page served is not the one expected:
	check whether the configuration file contains the wrong domain name
	check whether nginx was restarted/reloaded

Multiple ways to access web pages [multiple virtual hosts]

1. Access by IP

Access by IP [requires the host to have multiple IPs] [Scenario: company staff access the backend from the internal network, and outside users reach it through a VPN]
vim /etc/nginx/conf.d/ip1.conf
server {
    listen 10.0.0.8:80;
    server_name _;
    
    location / {
        root /code;
        index index.html;
    }
}

vim /etc/nginx/conf.d/ip2.conf
server {
    listen 172.16.1.8:80;
    server_name _;
    
    location / {
        root /code1;
        index index.html;
    }
}

2. Access by port

Access by port [the port must not already be in use]
vim /etc/nginx/conf.d/port1.conf
server {
    listen 80;
    server_name localhost;
    
    location / {
        root /code/a;
        index index.html;
    }
}

vim /etc/nginx/conf.d/port2.conf
server {
    listen 81;
    server_name localhost;
    
    location / {
        root /code/b;
        index index.html;
    }
}

3. Access by domain name

Access by domain name [the most common way; one IP can serve multiple domain names]
vim /etc/nginx/conf.d/server1.conf
server {
    listen 80;
    server_name www.web01.com;
    
    location / {
        root /web;
        index index.html;
    }
}

vim /etc/nginx/conf.d/server2.conf
server {
    listen 80;
    server_name www.web02.com;
    
    location /mcy {
        root /web;
        index index.html;
    }
}

#windows configuration domain name resolution
C:\windows\System32\drivers\etc\hosts
10.0.0.8  www.web01.com www.web02.com
 Actual access page:
www.web01.com /web/index.html
www.web02.com /web/mcy/index.html

Prohibit IP direct access

server {
    listen 80 default_server;
    server_name _;
    return 500;
}

Redirect to a specified page when the site is accessed by IP

server {
	listen 80 default_server;
	server_name _;
	return 302 http://www.baidu.com;
}
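
A quick check of the two blocks above, a sketch (only one of them can be the default_server for port 80 at a time):

	#With the "return 500" block active, a request by bare IP gets 500
curl -o /dev/null -s -w '%{http_code}\n' http://10.0.0.31/    #expected: 500
	#With the "return 302" block active, the Location header points to the target site
curl -I http://10.0.0.31/ | grep -i '^location'               #expected: Location: http://www.baidu.com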

Location layer

1. Syntax

Syntax:	location [ = | ~ | ~* | ^~ | / ] uri { ... }
		location @name { ... }
Default:	—
Context:	server, location

#The whole string is called the URL
https://timgsa.baidu.com/mhqcw16j30u01401kx.jpg
#The part after the host's / is called the URI
/mhqcw16j30u01401kx.jpg

example:
server {
    listen 80;
    server_name www.web01.com;
	location /web01 {
		root /code;
		index index.html;
	}
}
interpretation:
When we enter www.web01.com/web01 in the browser,
the lookup order is: the matched location /web01 is appended to the site directory /code, and index.html is looked up inside /code/web01.
The resource actually served is /code/web01/index.html

2. Matching rules

[root@Nginx conf.d]# cat testserver.conf 
server {
    listen 80;
    server_name www.server.com;
    location / {
        default_type text/html;
        return 200 "location /";
    }
 
    location =/ {
        default_type text/html;
        return 200 "location =/";
    }
 
    location ~ / {
        default_type text/html;
        return 200 "location ~/";
    }
 
 	# location ^~ / duplicates the URI of location /, so defining both at the same time causes a conflict
    # location ^~ / {
    #   default_type text/html;
    #   return 200 "location ^~";
    # }
}

3. Priority

Match character		Matching rule									Priority
=					Exact match										1
^~					Prefix match (URI begins with the string)		2
~					Case-sensitive regular expression match			3
~*					Case-insensitive regular expression match		4
/					Generic match, matches any request				5

4. Verification priority

[root@web01 conf.d]# cat testserver.conf 
server {
    listen 80;
    server_name www.server.com;

	location / {
    	root /code;
	}
	#The \ escapes the character that follows it (here the literal dot), as below
	location ~ \.php$ {
	    root /php;
	}
	 
	location ~ \.jsp$ {
	    root /jsp;
	}
	 
	location ~* .*\.(jpg|gif|png|js|css)$ {
	    root /pic;
	}
	 
	location ~* "\.(sql|bak|tgz|tar.gz|.git)$" {
	    root /package;
	}      
 
}

#Check which location matched by watching the log
tailf /var/log/nginx/error.log
#visit
www.server.com/test.php
www.server.com/test.jsp
www.server.com/test.jpg

5. Custom error pages

#Set site directory
mkdir -p /code/request
echo 'request' > /code/request/index.html
echo 'Something went wrong. You refreshed too fast' > /code/request/503.html
chown -R www.www /code

#Set site profile
vim /etc/nginx/conf.d/request.conf
limit_req_zone $remote_addr zone=req_zone:10m rate=1r/s;

server {
    listen 80;
    server_name www.request.com;
    access_log /var/log/nginx/www.request.com.log main;
    charset 'utf-8';
	
    location / {
	root /code/request;
	index index.html;
	limit_req zone=req_zone;
	#When a 503 error occurs, serve the specified page instead
	error_page 503 /503.html;
    }
}

# Windows parsing
C:\Windows\System32\drivers\etc\hosts
10.0.0.31 www.request.com
# Open the page in a browser and hold F5 to refresh rapidly; 503 errors will appear from time to time
www.request.com

# Example 
	#Redirect to a local page; the error page is served by this web server
vim server1.conf
server {
        listen 80;
        server_name test1.linux.com;
        root /code/test1;
        index index.html;

        error_page 404 /404.html;
        error_page 403 /403.html;
        error_page 502 /502.html;
        error_page 503 /503.html;
        
        location = /404.html {
            root /code/error;
        }
}

	#Jump domain name
vim server1.conf
server {
        listen 80;
        server_name test1.linux.com;
        root /code/test1;
        index index.html;

        error_page  404  http://www.baidu.com;
}

6. root and alias

The main difference between root and alias is how nginx interprets the URI that follows the location, which makes the two map requests to files on the server in different ways: alias defines an alias for a directory, while root defines the top-level directory.

root  resolves to: root path + location path
alias resolves to: the path defined by alias (the location part is replaced)

#With root, when a user requests http://image.com/picture/1.jpg, Nginx looks for the file 1.jpg in the /code/picture/ directory
server {
    listen 80;
    server_name image.com;
    
    location /picture {
        root /code;
    }
}

#With alias, when a user requests http://image.com/picture/1.jpg, Nginx looks for the file 1.jpg in the /code/ directory
server {
    listen 80;
    server_name image.com;
    
    location /picture {
        alias /code;
    }
    
    #location /picture/ {
    #    alias /code/;
    #}
}
#When configuring alias, if the location ends with '/', the alias path must also end with '/'; if they do not match, the file will not be found

#Common production configuration
server {
    listen 80;
    server_name image.driverzeng.com;

    location / {
        root /code;
    }

    location ~* ^.*\.(png|jpg|gif)$ {
        alias /code/images/;
    }
}

7. try_files path matching

Nginx's try_files checks for the existence of files and directories in the listed order (building the full file path from the root or alias directive)
and serves the first file it finds.
A trailing slash after a uri element means it refers to a directory.
If neither a file nor a directory is found, Nginx performs an internal redirect to the URI given by the last parameter of the directive.
#Important note: when using try_files, add a URI after the domain name when accessing, otherwise the last parameter is what ends up being matched



1. Example 1

	#1.try_file configuration
[root@lb01 conf.d]# vim try.conf
server {
    listen 80;
    server_name try.linux.com;
    root /code;
    index index.html;

    location / {
        try_files $uri /404.html;
    }
}
When a user accesses try.linux.com:
try_files $uri /404.html;		# Nginx looks for the URI first; if it is not found, 404.html is served
try_files $uri/ /404.html;		# Nginx looks for the URI as a directory; if the URI is empty, / is looked up, i.e. /code/index.html
try_files $uri $uri/ /404.html;	# Nginx looks for the URI, then for the URI as a directory; if either is found its data is returned, otherwise 404.html is served

	#2. Create instance directories and files
[root@lb01 conf.d]# echo try11111 > /code/index.html
[root@lb01 conf.d]# echo '404 404 404' > /code/404.html

	#3. Try to visit try.linux.com
[root@lb01 conf.d]# curl try.linux.com
404 404 404
#Because try.linux.com was requested with nothing after the domain name, $uri matches nothing, so the following parameter, 404.html, is served

	#4. Try to visit try.linux.com/index.html
[root@lb01 conf.d]# curl try.linux.com/index.html
try11111
#Because try.linux.com/index.html was requested, $uri is index.html, so the content of /code/index.html is returned

	#5. Modify the configuration to
location / {
    try_files $uri $uri/ /404.html;
}

	#6. Try to visit try.linux.com again
[root@lb01 conf.d]# curl try.linux.com
try11111
#We requested try.linux.com with an empty $uri, so the $uri/ rule ("empty /") matches, which resolves to /code/index.html


2. Example 2

	#1. Configure nginx
[root@lb01 conf.d]# cat try.conf 
server {
    listen 80;
    server_name try.linux.com;
    root /code;
    index index.html;

    location / {
        try_files $uri $uri/ @java;             
        #When neither $uri nor $uri/ matches, the request is handed to the backend java service; the name can be anything, but the @ must be included
        # @ means an internal jump: referencing @java calls the location @java below
    }

	# Configure @ java
    location @java {
    #Configure backend tomcat
    proxy_pass http://172.16.1.8:8080;          
    }
}

	#2. Configure back-end tomcat
[root@web02 ~]# cd /usr/share/tomcat/webapps/ROOT
[root@web02 ROOT]# echo 'i am tomcat' > index.html
[root@web02 ROOT]# systemctl start tomcat

	#3. Remove all the documents
[root@lb01 code]# mv index.html index1.html /tmp/

	#4. Test visit
[root@lb01 code]# curl http://try.linux.com/index.html
i am tomcat

rewrite

theory

# What is rewrite

Rewrite mainly implements URL rewriting and redirection, i.e. the process of redirecting an incoming `web` request to another `url`.

Rewrite usage scenario

1. Address redirection: when a user accesses the URL blog.linux.com/test, redirect it to http://www.baidu.com
server {
    listen 80;
    server_name blog.linux.com;
    location /test {
        rewrite ^(.*)$ http://www.baidu.com;
    }
}

2. Protocol redirection: when a user requests the site over the http protocol, redirect the request to https
server {
        listen 80;
        server_name www.mumusir.com mumusir.com;
        rewrite ^(.*)$ https://www.mumusir.com;
        #rewrite ^(.*)$ https://$server_name$1;
        #return 302 https://$server_name$request_uri;
}

3. Pseudo-static: a technique that presents dynamic pages behind static-looking URLs, which makes them easier for search engines to index and avoids exposing the many parameters a dynamic URL would carry, improving security.

4. Search engines: SEO benefits from short, memorable URL paths that are easier for search engines to index.

Rewrite configuration syntax

Syntax:	rewrite regex replacement [flag];
Default:	—
Context:	server, location, if

#rewrite    URL to match (regex supported)    address to jump to    flag
rewrite          regex                   replacement    [flag];

# Example with an if block
server {
        listen 80;
        server_name   www.jd.com;
        if ($http_user_agent ~* "Android|Iphone") {
                rewrite ^(.*)$ https://m.jd.com/ redirect;
        }       
}

flag

The rewrite directive redirects or rewrites the URL according to the regular expression. It can be used in the server, location and if contexts. Each rewrite line may end with a flag; the supported flags are:

flag		effect
last		Stop processing the current set of rewrite rules, then match the rewritten URI against the locations of this server block again
break		Stop processing the current set of rewrite rules and serve the rewritten URI within the current location
redirect	Return a 302 temporary redirect; the address bar shows the address after the jump
permanent	Return a 301 permanent redirect; the address bar shows the address after the jump [the browser keeps jumping until its cache is cleared]
1. Examples of differences between last and break
[root@web01 ~]# vim /etc/nginx/conf.d/rw.linux.com.conf
server {
        listen 80;
        server_name rw.linux.com;
        root /code/rw;

        location ~ ^/break {
                rewrite ^/break /test/ break;
        }
        location ~ ^/last {
                rewrite ^/last /test/ last;
        }
        location /test/ {
        		#Give the response a content type; without it the browser downloads the result instead of displaying it [jpg|txt|json and others can also be written]
                default_type application/json;
                return 200 "ok";
        }
}

# In common
break	once the rule matches, nginx looks for the requested file under the directory configured for this location;
last	once the rule matches, nginx re-issues the rewritten request to the server(...) block.

# Differences
break request:
1. Request rewrite.drz.com/break
2. The corresponding location matches and rewrite rewrites the request to rewrite.drz.com/test
3. Nginx then looks for the local file /code/rw/test/index.html
4. If it is found, the content of /code/rw/test/index.html is returned
5. If the directory is not found, a 404 error is returned; if the directory exists but the file does not, a 403 error is returned

last request:
1. Request rewrite.drz.com/last
2. The corresponding location matches and rewrite rewrites the request to rewrite.drz.com/test
3. Nginx then looks for the local file /code/rw/test/index.html
4. If it is found, the content of /code/rw/test/index.html is returned
5. If it is not found, the current server block re-issues a request for the URL rewrite.drz.com/test/
6. If a location matches this new request, the content of that location is returned directly
7. If no location matches it, 404 is returned

In short:
break	after matching, the rewritten URL is looked up only within this location; if nothing is found, processing ends
last	after matching, the rewritten URL is looked up within this location first; if nothing is found, the rewritten URL goes back to the server block to be matched against the other locations
2. Examples of differences between redirect and permanent
[root@web01 conf.d]# cat rw.linux.com.conf 
server {
        listen 80;
        server_name rw.linux.com;
        root /code;

        location /test {
                rewrite ^(.*)$  http://www.baidu.com redirect;
                #return 302 http://www.baidu.com;
                
                #rewrite ^(.*)$  http://www.163.com permanent;
                #return 301 http://www.163.com;
        }
}

# The difference between redirect and permanent
redirect: every request asks the server; if the server is unavailable, the jump fails.
permanent: only the first request asks the server; the browser records the jump address http://www.163.com.
If the nginx service on the web host is stopped before the second visit, visiting rw.linux.com again still jumps to http://www.163.com through the address cached by the browser.
It keeps jumping automatically as long as the cache is not cleared.

break configuration examples

0. To make troubleshooting easier, enable rewrite logging
[root@web01 ~]# vim /etc/nginx/nginx.conf
error_log  /var/log/nginx/error.log notice;

http {
	... ...
    rewrite_log on;
 	... ...
}
1. Example 1:
When a user accesses `/abc/1.html`, the file actually served is `/ccc/bbb/2.html`

#1. Configure the file after jump
[root@web01 ~]# mkdir /code/rw/ccc/bbb -p
[root@web01 ~]# echo "ccc_bbb_222" > /code/rw/ccc/bbb/2.html

#Configure nginx jump
[root@web01 ~]# vim /etc/nginx/conf.d/rw.linux.com.conf 
server {
    listen 80;
    server_name rw.linux.com;
    root /code/rw;

    location /abc {
        rewrite ^/abc /ccc/bbb/2.html break;
    }
}

#Page access test
http://rw.linux.com/abc/1.html
2. Example 2:
When a user accesses `/2018/ccc/2.html`, the file actually served is `/2014/ccc/bbb/2.html`

#Configure post jump page
[root@web01 ~]# mkdir /code/rw/2014/ccc/bbb/ -p
[root@web01 ~]# echo "/code/rw/2014/ccc/bbb/2.html" > /code/rw/2014/ccc/bbb/2.html

#Configure nginx
[root@web01 ~]# vim /etc/nginx/conf.d/rw.linux.com.conf 
server {
    listen 80;
    server_name rw.linux.com;
    root /code/rw;

    location /2018 {
        rewrite ^/2018/(.*).html /2014/ccc/bbb/$1.html break;
    }
}

#Access test
http://rw.linux.com/2018/2.html
3. Example 3:
When a user accesses /test, the request is actually sent on to http://www.baidu.com

#Configure nginx
[root@web01 ~]# vim /etc/nginx/conf.d/rw.linux.com.conf 
server {
    listen 80;
    server_name rw.linux.com;
    root /code/rw;

    location /test {
        rewrite (.*) http://www.baidu.com break;
    }
}
4. Example 4:
When a user accesses `course-11-22-33.html`, the file actually served is `/course/11/22/33/course_33.html`

#Configure post jump page
[root@web01 ~]# mkdir /code/rw/course/11/22/33 -p
[root@web01 ~]# echo "/course/11/22/33/course_33.html" > /code/rw/course/11/22/33/course_33.html

#Rigid matching
[root@web01 ~]# vim /etc/nginx/conf.d/rw.linux.com.conf 
server {
    listen 80;
    server_name rw.linux.com;
    root /code/rw;

    location /course {
        rewrite ^/course-11-22-33.html /course/11/22/33/course_33.html break;
    }
}

#Flexible matching
[root@web01 ~]# vim /etc/nginx/conf.d/rw.linux.com.conf 
server {
    listen 80;
    server_name rw.linux.com;
    root /code/rw;

    location / {
        rewrite ^/(.*)-(.*)-(.*)-(.*).html /$1/$2/$3/$4/$1_$4.html break;
        rewrite ^/course-11-22-33.html /course/11/22/33/course_33.html break;
    }
}

rewrite pseudo static

# Original link: https://blog.csdn.net/qq_41718455/article/details/80593029

# Truly static html pages:

Advantages: first, they reduce the load of generating responses on the server; second, pages load without touching the database, so responses are fast.

Disadvantages: first, maintenance is inconvenient because the pages have to be regenerated every time; second, they occupy comparatively more disk space; third, with too many generated files, serving the html files itself becomes a heavy burden for the server.

# Pseudo-static:

Benefits of URL rewriting (pseudo-static):
1. It is convenient for search engine optimization, even more so than generating static pages.
2. It does not take up much space.
3. The home page changes every day without maintenance. Website home pages usually carry hot rankings; these can be set as 24-hour rankings or weekly rankings, plus the latest articles, latest comments, etc., so the home page changes daily.
4. It makes rotating advertisements easier. For example, art1234.aspx can be virtually turned into n pages, such as art_1234.aspx, news_1234.aspx, top_1234.aspx, and different advertisements can be placed on the different pages.
In short, the pages stay dynamic, so they can be changed at will.

The drawback of URL rewriting is that it is not as efficient as generated html, because the pages are not truly static: every request still reads the database. This can be compensated for with caching.

1. Build discuz

#Upload code
[root@web01 ~]# cd /code/
[root@web01 code]# rz Discuz_X3.3_SC_GBK.zip

#Decompress the code
[root@web01 code]# unzip Discuz_X3.3_SC_GBK.zip

#Authorization directory
[root@web01 ~]# chown -R www.www /code/

#Configure site configuration nginx
[root@web01 ~]# vim /etc/nginx/conf.d/discuz.linux.com.conf
server {
    listen 80;
    server_name discuz.linux.com;

    location / {
        root /code/upload;
        index index.php;
    }

    location ~ \.php$ {
        root /code/upload;
        fastcgi_pass 127.0.0.1:9000;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        include fastcgi_params;
    }
}
[root@web01 ~]# systemctl restart nginx

#Create database
[root@db01 ~]# mysql -uroot -p
Enter password: 

MariaDB [(none)]> create database ultrax;
Query OK, 1 row affected (0.02 sec)

MariaDB [(none)]> grant all on <database>.* to <user>@'<host>' identified by '<password>';    #generic syntax
MariaDB [(none)]> grant all on *.* to root@'%' identified by '123456';

#Configure the host to access the configuration page

2.rewrite configuration pseudo static

[root@web01 ~]# cat /etc/nginx/conf.d/discuz.linux.com.conf
server {
    listen 80;
    server_name discuz.linux.com;

    location / {
	root /code/upload;
	index index.php;
	#The rewrite rules below are taken from the Discuz admin center page; see the "48-2 Rewrite pseudo-static" video for the exact steps
	rewrite ^([^\.]*)/topic-(.+)\.html$ $1/portal.php?mod=topic&topic=$2 last;
	rewrite ^([^\.]*)/article-([0-9]+)-([0-9]+)\.html$ $1/portal.php?mod=view&aid=$2&page=$3 last;
	rewrite ^([^\.]*)/forum-(\w+)-([0-9]+)\.html$ $1/forum.php?mod=forumdisplay&fid=$2&page=$3 last;
	rewrite ^([^\.]*)/thread-([0-9]+)-([0-9]+)-([0-9]+)\.html$ $1/forum.php?mod=viewthread&tid=$2&extra=page%3D$4&page=$3 last;
	rewrite ^([^\.]*)/group-([0-9]+)-([0-9]+)\.html$ $1/forum.php?mod=group&fid=$2&page=$3 last;
	rewrite ^([^\.]*)/space-(username|uid)-(.+)\.html$ $1/home.php?mod=space&$2=$3 last;
	rewrite ^([^\.]*)/blog-([0-9]+)-([0-9]+)\.html$ $1/home.php?mod=space&uid=$2&do=blog&id=$3 last;
	rewrite ^([^\.]*)/(fid|tid)-([0-9]+)\.html$ $1/archiver/index.php?action=$2&value=$3 last;
	rewrite ^([^\.]*)/([a-z]+[a-z0-9_]*)-([a-z0-9_\-]+)\.html$ $1/plugin.php?id=$2:$3 last;
	if (!-e $request_filename) {
       		return 404;
	}
    }

    location ~ \.php$ {
	root /code/upload;
	fastcgi_pass 127.0.0.1:9000;
	fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
	include fastcgi_params;
    }
}

#Specific steps
 Admin Center > Global > SEO Settings > URL Static: tick every page that needs a static URL, choose the rewrite compatibility option, and submit
> "View current rewrite rules" and select the rules for Nginx Web Server

rewrite extension

1.rewrite matching priority

1. rewrite directives in the server block are executed first
2. then the location matching rules are applied
3. finally the rewrite inside an if block within the location is executed

server {
    listen 80;
    server_name rew.linux.com;
    rewrite (.*) http://www.baidu.com;
    
    location / {
        rewrite (.*) http://www.jd.com;
        if ($http_user_agent ~* Windows) {
            rewrite (.*) http://www.taobao.com;
        }
    }
}

2.Rewrite and Nginx global variables

Some Nginx global variables are used during rewrite matching

$server_name    #The domain name requested by the current user

server {
        listen 80;
        server_name test.drz.com;
        rewrite ^(.*)$ https://$server_name$1;
}

$request_filename	#Path of the requested file including the site root directory (e.g. /code/images/test.jpg)
$request_uri		#Path of the current request without the site root directory (e.g. /images/test.jpg)

#Most commonly used to redirect the http protocol to https
server {
        listen 80;
        server_name php.drz.com;
        return 302 https://$server_name$request_uri;
}
$scheme		#Protocol in use, http or https
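
A small sketch (hypothetical server name var.drz.com and location /show) that simply echoes these variables so they can be compared with curl:

server {
        listen 80;
        server_name var.drz.com;

        location /show {
                default_type text/plain;
                #print the protocol, the request URI and the resolved file path for this request
                return 200 "scheme=$scheme\nrequest_uri=$request_uri\nrequest_filename=$request_filename\n";
        }
}

	#curl http://var.drz.com/show/test.jpg then shows that $request_uri keeps only /show/test.jpg while $request_filename carries the full path under the (default) root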

3.rewrite specification

server {
        listen 80;
        server_name www.drz.com drz.com;
        if ($http_host = drz.com){
            rewrite (.*) http://www.drz.com$1;
        }
}

#Recommended writing format
server {
        listen 80;
        server_name drz.com;
        rewrite ^ http://www.drz.com$request_uri;
}
server {
        listen 80;
        server_name www.drz.com;
}

Nginx optimization

Stress-testing tool: ab

	#Install the ab stress-testing tool
yum -y install httpd-tools

	#The ab options and the meaning of the result fields are the same as in the stress-testing section at the top of this document

ab -n 10000 -c 200 http://ab.linux.com/

Factors affecting performance

1. First, understand the structure and bottlenecks of the current system: what is deployed, which business runs on it, which services it provides, and how much concurrency each service can support.
For example, with nginx serving static resources: what concurrency can the service handle, where is the main bottleneck, how many QPS (queries per second) can it sustain? To find the bottlenecks you can use top to check the system CPU load, memory usage and the processes that are always running, analyse requests through the logs, use the stub_status module described above to check current connections, and run stress tests against the live business (during off-peak hours) to learn how many requests and how much concurrency the current system can take. This assessment is the first thing to consider in performance optimization.

2. Second, understand the business model. Although we are doing performance optimization, every optimization serves the business, so we need to understand the type of each business interface, for example the flash-sale mode on e-commerce sites: normally there is little traffic, but at sale time traffic surges.
We also need to understand the layered structure of the system, for example whether nginx acts as a proxy, does static/dynamic separation, or serves users directly in front of the backend, and sort out each layer accordingly in order to serve the business better.

3. Finally, balance performance and security. We often focus on performance and ignore security, or pay so much attention to security that performance suffers; for example, firewall rules that inspect too strictly will hurt performance, while pursuing performance alone and ignoring the security of the service creates serious risks. Evaluate the relationship between the two, decide which matters more, consider how they interact, and weigh the trade-offs.

Optimization directions considered from the OSI model

Hardware		proxying (CPU), static content (disk I/O), dynamic content (CPU, memory)
Network			bandwidth, packet loss, latency
System			file descriptors (number of file handles)
Application		keep persistent connections between services (HTTP/1.1)
Service			static resource service optimization

#Before optimizing, summarize what can affect user access
#Every service is connected to the others to some degree; layer the whole architecture, find the weak point of the corresponding system or service, and then optimize it

1. Network
    (1) Network traffic
    (2) Whether the network drops packets
    (3) Both affect http requests and calls
2. System
    (1) Hardware: disk damage, disk speed
    (2) System load, memory, system stability
3. Service
    (1) Connection optimization, request optimization
    (2) Service settings matched to the business type
4. Program
    (1) Interface performance
    (2) Processing speed
    (3) Program execution efficiency
5. Database

System performance optimization

introduce

File handles: in Linux everything is a file, and a file handle can be thought of as an index. The number of open file handles grows as our processes make frequent calls. The default limit is 1024 file handles per process, and a process cannot open files without limit, so the file-handle limit has to be set for each process and each service. It is an optimization parameter that must always be adjusted.

Ways to set the file handle limit:
1. System-wide modification
2. Per-user modification
3. Per-process modification [for services such as nginx]

set up

	#View the current number of file handles
[root@web01 ~]# ulimit -n
65535
#The value set cannot be greater than the value of cat /proc/sys/fs/nr_open, otherwise you will not be able to log in normally after logging out
[root@web01 ~]# cat /proc/sys/fs/nr_open
1048576

	# Global modification of the system [it can be larger according to actual needs, not exceeding the value of / proc/sys/fs/nr_open]
	# soft is less than or equal to hard
cat >> /etc/security/limits.conf << EOF
* soft nofile 32768
* hard nofile 65535
EOF

	# Local modification of www users
cat >> /etc/security/limits.conf << EOF
www soft nofile 32768
www hard nofile 65535
EOF

	# Process local modification
vim /etc/nginx/nginx.conf
user  www;
worker_processes  auto;
worker_rlimit_nofile 30000;	#Add this line
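
A quick check, a sketch, that the worker processes actually picked up the new limit after a reload:

systemctl reload nginx
grep 'open files' /proc/$(pgrep -f 'nginx: worker' | head -1)/limits    #should show 30000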

Port reuse

Keeping long (persistent) connections open reduces the number of TCP three-way handshakes and four-way teardowns; the drawback is that when a connection ends the socket sits in TIME_WAIT state and occupies a port
	#Kernel parameter: allow ports held in TIME_WAIT state to be reused
vim /etc/sysctl.conf
net.ipv4.tcp_tw_reuse = 1
	#Apply the change
sysctl -p
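
A quick way, a sketch, to see how many sockets are sitting in TIME_WAIT before and after the change:

	#Count TIME_WAIT sockets
ss -ant state time-wait | wc -l
	#Or look at the summary of all TCP states
ss -s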

Proxy server optimization

On a TCP server handling highly concurrent short connections, the server normally closes the connection actively after processing a request. In this scenario a large number of sockets end up in TIME_WAIT state.
If client concurrency stays high, some clients will find they cannot connect: whichever side actively and normally closes a TCP connection is the side that ends up in TIME_WAIT.

Why do highly concurrent short connections deserve attention? Two points:
1. High concurrency makes the server occupy a large number of ports at the same time within a short period, and ports only range from 0 to 65535; excluding those used by the system and other services, even fewer remain.
2. With short connections, the time spent on business processing + data transfer is far shorter than the TIME_WAIT timeout of the connection.

"Short" here is relative. Take fetching one web page over a 1-second http short connection: after the service finishes and the connection is closed, the port it used stays in TIME_WAIT for a couple of minutes, and during those minutes no other HTTP request can use that port (it holds the resource without using it). Measuring server utilization from this service alone, the ratio of time spent doing business to time the port (resource) hangs around unusable is roughly 1 to several hundred, which is a serious waste of server resources. (As an aside, from this perspective services based on long connections do not need to worry about TIME_WAIT when tuning, and if you are familiar with real business scenarios you will find that the concurrency of businesses that need long connections is generally not very high.)

#This directive appears in version 1.1.4
Syntax: keepalive connection;
Default: -
Context: upstream

#This directive appears in version 1.15.3
Syntax: keepalive_requests number;
Default: keepalive_requests 10000;	#A connection can serve this many requests before it is closed; the default usually does not need changing
Context: upstream

#This directive appears in version 1.15.3
Syntax: keepalive_timeout timeout;
Default: keepalive_timeout 60s;		#An idle connection is kept open for 60 seconds before it is closed; the default usually does not need changing
Context: upstream


# Layer 7 load balancing
upstream http_backend {
    server 172.16.1.31:8080;
    server 172.16.1.32:8080;
    keepalive 16;   #Keep up to 16 idle persistent connections to the proxied 31 and 32 web servers; more connections are opened when these are not enough
}					#Any number of requests can be sent over a persistent connection

server {
    ...
    location /http/ {
        proxy_pass http://http_backend;
        proxy_http_version 1.1;         #For http protocol, it should be specified as 1.1
        proxy_set_header Connection ""; #Clear the Connection request header so the upstream connection stays keep-alive [reportedly no longer needed after version 1.14]
        proxy_next_upstream error timeout http_500 http_502 http_503 http_504;  #If the request to 31 fails, retry it on 32 so the client sees no error
        proxy_set_header Host $http_host;								#Pass the requested domain name to the backend server
        #proxy_set_header X-Real-IP $remote_addr;						#Pass the client IP to the backend
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;		#Commonly used to pass the client IP chain to the backend
        proxy_connect_timeout 30s;      # Proxy connection web timeout
        proxy_read_timeout 60s;         # Timeout for agent waiting for web response
        proxy_send_timeout 60s;         # Timeout of web data return to proxy
        proxy_buffering on;             # Open the proxy buffer, the web returns data to the buffer, and the proxy returns to the client while receiving
        proxy_buffer_size 32k;          # The size of the buffer in which the proxy receives the header information of the web response
        proxy_buffers 4 128k;           # The number and size of web responses contained within a single long connection received by the buffering agent
    ...
    }
}

#Proxying PHP
#The local web server is 172.16.1.31; 41 and 42 are the PHP servers
#Suppose the user accesses ab.linux.com/fastcgi/1/2/3/1.php
upstream fastcgi_backend {
    server 172.16.1.41:9000;
    server 172.16.1.42:9000;
    keepalive 8;
}

server {
    ...
    location /fastcgi/ {
    	#Requests for /fastcgi/ are passed to the PHP servers 41 and 42
        fastcgi_pass fastcgi_backend;
        #	Standard form: 			site directory (/code/ab) + 	script path (1/2/3/1.php)
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        #Keep the FastCGI connection alive
        fastcgi_keep_conn on;
        #Connection timeout of 60 seconds
        fastcgi_connect_timeout 60s;
        #Include the standard FastCGI parameters
        include fastcgi_params;
        ...
    }
}

# Note: the scgi and uwsgi protocols have no concept of keepalive connections. However, the proxy, fastcgi and uwsgi modules all support caching, which can speed up website access (hardware dependent).
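The caching mentioned here is configured with the *_cache directives; a minimal hedged sketch of a proxy cache (the path, zone name, sizes and domain are illustrative values, not taken from this document):

	#http{} level: storage path for cached responses plus a shared-memory zone for cache keys
proxy_cache_path /var/cache/nginx/proxy levels=1:2 keys_zone=proxy_cache:10m max_size=1g inactive=60m;

server {
    listen 80;
    server_name cache.linux.com;                            #example domain

    location / {
        proxy_pass http://http_backend;
        proxy_cache proxy_cache;                            #use the zone defined above
        proxy_cache_valid 200 301 302 10m;                  #how long these status codes stay cached
        add_header X-Cache-Status $upstream_cache_status;   #HIT / MISS / EXPIRED, convenient for testing
    }
}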

Static resource optimization

Environmental preparation

Host name   Application environment                          Internet address   Intranet address
backup      rsync server                                      10.0.0.11          172.16.1.11
nfs         rsync client + inotify tools = sersync + NFS      10.0.0.31          172.16.1.21
web01       nginx+php                                         10.0.0.31          172.16.1.31
web02       nginx+php                                         10.0.0.32          172.16.1.32
db01        mysql                                             10.0.0.51          172.16.1.51

Introduction to static resources

Static resources refer to files generated by non WEB server-side running processing

Static resource type     Types
Browser rendering        HTML, CSS, JS
Picture files            JPEG, GIF, PNG
Video files              FLV, MP4, AVI
Other documents          TXT, DOC, PDF, ...

Static resource cache

introduce

[Figure: browser cache validation mechanism]

#Browser access process
0,# Check whether a local cache exists. If there is no cache, request the data from the web server directly; if there is a cache, make the following checks

1,# Before requesting the server, the browser first checks Expires and Cache-Control to see whether the cache has expired. If it has not expired, the file is read directly from the local cache
Expires: Sun, 11 Jul 2021 10:54:06 GMT				# Absolute time at which the locally cached file expires (specified by the server)
cache-control: max-age=31536000	# Expiration of the locally cached file given as an interval in seconds (the browser computes the concrete expiry time from it)

2,# If the cache has expired, check whether an ETag exists. If it does, the browser sends the request with If-None-Match; the server compares it with the file's ETag and decides whether to return 200 or 304
ETag: "5d8444bf-3790"								# Response header: file identifier [returned by the server]
If-None-Match: "5d8444bf-3790"						# Request header: file identifier [the ETag the server gave the browser on a previous request]

3,# If there is no ETag, the browser sends If-Modified-Since; the server compares it with the file's Last-Modified time and decides whether to return 200 or 304
Last-Modified: Fri, 20 Sep 2019 03:17:19 GMT		# Response header: last modification time of the file on the server
If-Modified-Since: Fri, 20 Sep 2019 03:17:19 GMT	# Request header: the last modification time the browser recorded for the file
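These headers are easy to observe with curl; a hedged sketch using the 2.jpg file from the hotlink demonstration later in this section (the ETag and Last-Modified values are the ones shown there and will differ on your system):

	#First request: note the ETag and Last-Modified response headers
[root@web01 ~]# curl -sI http://static.linux.com/2.jpg

	#Repeat the request conditionally with those values; for an unchanged file the server should answer 304 Not Modified
[root@web01 ~]# curl -sI -H 'If-None-Match: "5df983cd-30e5"' http://static.linux.com/2.jpg
[root@web01 ~]# curl -sI -H 'If-Modified-Since: Wed, 18 Dec 2019 01:41:33 GMT' http://static.linux.com/2.jpg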
Example
#grammar
#Role: add cache control expires header
Syntax: expires [modified] time;
        expires epoch | max | off;
Default: expires off;
Context: http, server, location, if in location

server {
    listen 80;
    server_name static.drz.com;

    location ~ .*\.(jpg|gif|png)$ {
        expires      7d;
    }
    location ~ .*\.(js|css)$ {
        expires      30d;
    }
}

#During development you usually do not want cached copies to be used; disable caching of static files such as js, css and html
location ~ .*\.(js|css|html)$ {
    add_header Cache-Control no-store;
    add_header Pragma no-cache;
    #Add a line in the response header to read Server: nginx/1.11.1
    add_header Server nginx/1.11.1;
}
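Whether these settings take effect can be verified from the response headers; a hedged check against the static.drz.com example above (test.jpg and test.js are assumed file names):

	#With "expires 7d;" the response should carry an Expires date 7 days ahead and "Cache-Control: max-age=604800"
[root@web01 ~]# curl -sI http://static.drz.com/test.jpg | grep -Ei 'expires|cache-control'

	#With the development setting above, js/css/html responses should instead show "Cache-Control: no-store" and "Pragma: no-cache"
[root@web01 ~]# curl -sI http://static.drz.com/test.js | grep -Ei 'cache-control|pragma'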

Static resource read

introduce
# Static resource efficient read on | off
Syntax: sendfile on | off;
Default: sendfile off;
Context: http, server, location, if in location

# Combine data into full packets before sending to improve network transmission efficiency; recommended for large files. Requires sendfile to be on
Syntax: tcp_nopush on | off;
Default: tcp_nopush off;
Context: http, server, location


# Improve real-time transmission: send each packet immediately instead of waiting to fill a buffer. Only takes effect on keepalive connections
Syntax: tcp_nodelay on | off;
Default: tcp_nodelay on;
Context: http, server, location
Example
#Combined use
    sendfile       on;
    tcp_nopush     on;

#Combined use
    keepalive_timeout  65;
    tcp_nodelay	   on;
    
#tcp_nopush and tcp_nodelay look like they should not be enabled at the same time, yet the configuration file installed by yum for version 1.20.1 enables both, and this has not caused problems in actual production.
#An explanation found online:
It only looks contradictory. When sendfile, tcp_nopush and tcp_nodelay are all enabled, nginx sends a resource roughly as follows:

1, It makes sure each packet is full before sending it to the client

2, For the last packet, tcp_nopush is removed, allowing TCP to send it immediately without delay

Static resource compression

introduce
#gzip transmission compression: responses are compressed before transmission and decompressed by the client afterwards; it works well on text and documents, while already-compressed images benefit little
Syntax: gzip on | off;
Default: gzip off;
Context: http, server, location, if in location

#Which files do gzip compress
Syntax: gzip_types mime-type ...;
Default: gzip_types text/html;
Context: http, server, location

#gzip compression level; a higher level shrinks the transfer more but compression itself costs CPU on the server
Syntax: gzip_comp_level level;
Default: gzip_comp_level 1;
Context: http, server, location

#Minimum HTTP protocol version of a request for which gzip compression is applied; 1.1 is the mainstream
Syntax: gzip_http_version 1.0 | 1.1;
Default: gzip_http_version 1.1;
Context: http, server, location
Example
[root@web01 code]# vim /etc/nginx/conf.d/ab.linux.com.conf
upstream daili {
        server 10.0.0.7:8080;
        keepalive 16;
}
server {
        listen 80;
        server_name ab.linux.com;
        root /code;
        index index.html;

        location / {
                proxy_pass http://daili;
                proxy_http_version 1.1;
        }

        location ~ \.(jpg|png|gif|txt)$ {
                gzip on;
                gzip_types image/jpeg image/gif image/png text/plain;
                gzip_comp_level 9;
                gzip_http_version 1.1;
        }
}
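gzip is only applied when the client announces that it accepts it, so a plain request will not show it; a hedged check (1.txt is an assumed text file under /code that is large enough to be worth compressing):

	#Without Accept-Encoding the file is served uncompressed (no Content-Encoding header is printed)
[root@web01 ~]# curl -s -o /dev/null -D - http://ab.linux.com/1.txt | grep -i content-encoding

	#With Accept-Encoding: gzip the response should carry "Content-Encoding: gzip"
[root@web01 ~]# curl -s -o /dev/null -D - -H "Accept-Encoding: gzip" http://ab.linux.com/1.txt | grep -i content-encoding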

Prevent resource theft (hotlinking)

1. Page used to test hotlinking

[root@web01 code]# vim /etc/nginx/conf.d/ab.linux.com.conf
server {
        listen 80;
        server_name ab.linux.com;
        root /code;
        index index.html;

        location ~ \.(jpg|png|gif|txt)$ {
                gzip on;
                gzip_types image/jpeg image/gif image/png text/plain;
                gzip_comp_level 9;
                gzip_http_version 1.1;
        }
}

2. Hotlink protection configuration

#grammar
Syntax: valid_referers none | blocked | server_names | string ...;
Default: -;
Context: server, location

#none: the Referer header is absent; requests with no Referer are allowed when none is configured 								 Source header is empty
(Log format: 10.0.0.7 - - [02/Jan/2020:15:26:08 +0800] "HEAD /2.jpg HTTP/1.1" 200 0 "-" 		"curl/7.29.0" "-" "-")

#blocked: the Referer header is present but its value does not start with http:// or https:// (e.g. stripped by a firewall or proxy); such requests are allowed when blocked is configured
(Log format: 10.0.0.7 - - [02/Jan/2020:15:27:48 +0800] "HEAD /2.jpg HTTP/1.1" 403 0 "www.baidu.com" "curl/7.29.0" "-" "-")

#server_names: the Referer header contains one of the current server names; such requests are allowed (regular-expression matching can also be configured)
(Log format: 10.0.0.7 - - [02/Jan/2020:15:31:42 +0800] "HEAD /2.jpg HTTP/1.1" 200 0 "http://static.linux.com" "curl/7.29.0" "-" "-")

#Hotlink protection example 1 [allowed: empty Referer, non-http(s) Referer, or a Referer matching the server name / *.linux.com]
    location ~ .*\.(jpg|png|gif) {
        root /code/pic;
        valid_referers none blocked server_names *.linux.com;
        if ( $invalid_referer ) {
			return 403;
		}
    }
#Hotlink protection example 2 [only a Referer matching the server name or *.linux.com is allowed; everything else is rewritten to a substitute image]
    location ~ .*\.(jpg|png|gif) {
        root /code;
        valid_referers server_names *.linux.com ~\.linux\.;
        if ( $invalid_referer ) {
			rewrite ^(.*)$ /pic/fangdao.jpg break;
		}
    }
#Allow certain websites [Google, Baidu] to hotlink
location ~ .*\.(jpg|png|gif) {
    root /data;
    valid_referers none blocked *.linux.com server_names ~\.google\. ~\.baidu\.;
    if ( $invalid_referer ) {
        return 403;
    }   
}

3. Configure web02 as the website of the resource

#Configure nginx
[root@web02 conf.d]# vim static.conf
server {
    listen 80;
    server_name static.linux.com;
    root /code;

    location / {
        index index.html;
    }
}

#Restart nginx
[root@web02 conf.d]# systemctl restart nginx

#Upload some pictures
#Configure hosts resolution and test access

4. Configure web01 as the hotlinking website

#1. Configure nginx
[root@web01 conf.d]# vim dl.conf
server {
    server_name dl.linux.com;
    listen 80;
    root /code;
    location / {
        index index.html;
    }
}

#2. Configure the hotlinking page
[root@web01 conf.d]# vim /code/index.html
<html>
<head>
    <meta charset="utf-8">
    <title>linux7.com</title>
</head>
<body style="background-color:pink;">
    <center><img src="http://static.linux.com/2.jpg"/></center>
</body>
</html>

#3. Configure the server hosts file and the computer hosts file
[root@web01 conf.d]# vim /etc/hosts
10.0.0.32 static.linux.com

5. Access page test

# Windows resolution
C:\Windows\System32\drivers\etc\hosts
10.0.0.31 dl.linux.com
10.0.0.32 static.linux.com
# Open the hotlinking page in a browser to check whether the image loads
dl.linux.com

6. Presentation

#Of course, this protection does not guarantee that resources cannot be stolen, because the Referer header can be forged from the command line
-e	Sets the Referer, simulating a request coming from http://static.linux.com
-I	Show only the response headers of the requested URL
[root@web01 conf.d]# curl -e "http://static.linux.com" -I http://static.linux.com/2.jpg
HTTP/1.1 200 OK
Server: nginx/1.16.1
Date: Thu, 02 Jan 2020 07:31:42 GMT
Content-Type: image/jpeg
Content-Length: 12517
Last-Modified: Wed, 18 Dec 2019 01:41:33 GMT
Connection: keep-alive
ETag: "5df983cd-30e5"
Accept-Ranges: bytes

Allow cross domain access

#Generally used in
1.Server migration
2.Domain name change

1. Configure the website that is accessed cross-domain (web02)

#Configure nginx
[root@web02 ~]# vim /etc/nginx/conf.d/beikuayu.conf
server {
	listen 80;
    server_name beikuayu.linux.com;
    root /code;

    location / {
		index index.html;
	}
}

#Configure site page file
[root@web02 ~]# echo 'test' > /code/index.html

2. Configure the website that initiates the cross-domain request (web01)

#Configure nginx
[root@web01 ~]# vim /etc/nginx/conf.d/kuayu.conf 
server {
        listen 80;
        server_name kuayu.linux.com;

        location / { 
                root /code;
        }
}

#Configure cross domain sites
[root@web01 ~]# vim /code/kuayu.html 
<html lang="en">
<head>
        <meta charset="UTF-8" />
        <title>Test Ajax and cross-domain access</title>
        <script src="http://libs.baidu.com/jquery/2.1.4/jquery.min.js"></script>
</head>
<script type="text/javascript">
$(document).ready(function(){
        $.ajax({
        type: "GET",
        url: "http://beikuayu.linux.com",
        success: function(data) {
                alert("sucess The trough trough trough trough trough trough trough trough trough trough trough trough trough trough trough trough trough trough trough trough trough trough trough trough trough trough trough trough trough trough trough trough trough trough trough trough trough trough trough trough trough trough trough trough trough trough trough trough trough trough trough trough trough trough trough trough trough!!!");
        },
        error: function() {
                alert("fail!!,I can't cross it. I won't let you in. I can only rub it!");
        }
        });
});
</script>
        <body>
                <h1>Test cross domain access</h1>
        </body>
</html>

3. Access test failed

# Access test failed
http://kuayu.linux.com/kuayu.html
# Windows parsing
C:\Windows\System32\drivers\etc\hosts
10.0.0.32 beikuayu.linux.com
10.0.0.31 kuayu.linux.com
# Visit a web page to view
kuayu.linux.com

4. Configure cross domain access

[root@web02 ~]# vim /etc/nginx/conf.d/beikuayu.conf
server {
        listen 80;
        server_name beikuayu.linux.com;
        root /code;

        location / {
                index index.html;
        }

        location ~ .*\.(html|htm)$ {
            add_header Access-Control-Allow-Origin *;
            add_header Access-Control-Allow-Methods GET,POST,PUT,DELETE,OPTIONS;
        }
}
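The CORS headers can be checked without a browser; a hedged curl check run from web01 against web02 (the Origin value is arbitrary here because Access-Control-Allow-Origin is set to *):

	#The html location should now add the Access-Control-Allow-* headers to the response
[root@web01 ~]# curl -sI -H "Origin: http://kuayu.linux.com" -H "Host: beikuayu.linux.com" http://10.0.0.32/index.html | grep -i access-control
	#Expected: Access-Control-Allow-Origin: *  and  Access-Control-Allow-Methods: GET,POST,PUT,DELETE,OPTIONS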

CPU affinity

CPU affinity binds each Nginx worker process to a specific CPU core, so that each worker always executes on the same core. This reduces frequent switching of processes between cores and the CPU cache misses such switching causes, giving better performance.

1. View cpu

[root@tomcat01 ~]# lscpu | grep "CPU(s)"
CPU(s):                8
On-line CPU(s) list:   0-7
NUMA node0 CPU(s):     0-7

#The above servers have a physical CPU with 8 cores

2. Configuration method

# The first binding method: one explicit CPU mask per worker (not recommended)
worker_processes 12;
worker_cpu_affinity 000000000001 000000000010 000000000100 000000001000 000000010000 000000100000 000001000000 000010000000 000100000000 001000000000 010000000000 100000000000;

# Second method (less used)
worker_processes 2;
worker_cpu_affinity 101010101010 010101010101;

# The third and best binding method: let nginx bind worker processes to CPUs automatically
worker_processes  auto;
worker_cpu_affinity auto;

3. View the bound cpu

[root@web01 ~]# ps -eo pid,args,psr|grep [n]ginx
  1242 nginx: master process /usr/   2
  1243 nginx: worker process         0
  1244 nginx: worker process         1
  1245 nginx: worker process         2
  1246 nginx: worker process         3
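The affinity mask can also be read back per worker with taskset (part of util-linux), which is a slightly more direct check than the psr column:

	#Print the CPU affinity list of every nginx worker process
[root@web01 ~]# for pid in $(pgrep -f "nginx: worker"); do taskset -cp $pid; done
	#Expected output is one line per worker, e.g. "pid 1243's current affinity list: 0"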

General optimized configuration

Online high concurrency optimization reference https://www.cnblogs.com/aaron-agu/p/12317211.html

	#Set up a unified user without login
groupadd -g 666 www
useradd -u 666 -g 666 -Ms /sbin/nologin www
	#Create a directory and authorize it
mkdir /web
chown -R www.www /web

1,#Set master profile
[root@web01 ~]# vim /etc/nginx/nginx.conf 
#------------------------------Core module--------------------------------
	#The original user is nginx, which is replaced by the previously set user and user group
user  www www;
worker_processes auto;      #The number of workers started should match the number of CPU cores, or be set to auto
worker_cpu_affinity auto;   #cpu affinity, multi-core
	#Log levels are: 	 debug/info/notice/warn/error/crit/alter/emerg
	#Meaning of each level 	 Debugging | information | notification | warning | error | critical | change | emergency state
	#Set the log level to warn [indicates that warn/error/crit/alter/emerg will be recorded]
error_log  /var/log/nginx/error.log warn;
pid        /var/run/nginx.pid;
	#Raise the number of file descriptors each worker may open to more than 10,000; under high load 20,000-30,000 is recommended
worker_rlimit_nofile 35535;

#------------------------------Event driven module-----------------------------
#Event-driven module: each worker process defined by worker_processes in the core module can handle up to 1024 connections at the same time
events {
    use epoll;                  #Use epoll efficient network model [if not written, this is the default]
    worker_connections  1024;	#Limit how many connections each process can handle
}

#-----------------------------http kernel module-----------------------------
http {
	#The types contained in the mime.types file are displayed in the web page
    include       /etc/nginx/mime.types;
    #By default, all types not included in the mime.types file will be downloaded
    default_type  application/octet-stream;
    charset 'utf-8,gbk';	#Unified use of utf-8 and gbk character sets

	#Log format content setting main
    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';

    #Log format content setting json_access
    log_format json_access '{"@timestamp":"$time_iso8601",'
                      '"host":"$server_addr",'
                      '"clientip":"$remote_addr",'
                      '"size":$body_bytes_sent,'
                      '"responsetime":$request_time,'
                      '"upstreamtime":"$upstream_response_time",'
                      '"upstreamhost":"$upstream_addr",'
                      '"http_host":"$host",'
                      '"url":"$uri",'
                      '"domain":"$host",'
                      '"xff":"$http_x_forwarded_for",'
                      '"referer":"$http_referer",'
                      '"status":"$status"}';

	#The log setting path and contents are the contents in main
    access_log  /var/log/nginx/access.log  main;

    server_tokens off;  #Disable the browser from displaying nginx version number
    client_max_body_size 200m;  #File upload size limit adjustment
    
	#Optimization part [set on the dynamic and static web server respectively, and the two settings should be separated]
    #For efficient file transfer, it is recommended to open the static resource server
    sendfile            on;
    tcp_nopush          on;		#[get all the contents of the web page and push them to users at one time]
    
    #For real-time file transmission; recommended on dynamic resource servers, and requires keepalive [data is sent as soon as it is available]
    tcp_nodelay         on;
	#65 second limit for long connection [recommended]
    keepalive_timeout	65;

	#Whether the content of the web page is compressed [if the picture compression is not obvious, the text compression effect is obvious]
    gzip  on;
    #gzip_disable "MSIE [1-6]\.";    #For IE browser version 1-6, it is not compressed and generally not used
    gzip_http_version 1.1;	#gzip is transmitted through http1.1 protocol
    gzip_comp_level 5;      #Compression level
    gzip_buffers 16 8k;     #Compressed buffer
    gzip_min_length 1024;   #The file is compressed only when it is larger than 1024 bytes. The default value is 20
    #Compressed file type
    gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript image/jpeg;

	#Virtual host containing the profile path for each web site
	#Files under /etc/nginx/conf.d/ are included in file-name order; if their contents conflict, that ordering decides which takes effect
    include /etc/nginx/conf.d/*.conf;
}
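worker_rlimit_nofile only raises nginx's own limit; the system limits usually need to keep up as well. A hedged sketch of the usual companion settings on CentOS (the values are examples, the file paths are the standard ones):

	#Check the limit a running worker actually got
[root@web01 ~]# cat /proc/$(pgrep -f "nginx: worker" | head -1)/limits | grep "open files"

	#Raise the per-process limits system-wide (takes effect for new sessions/services)
[root@web01 ~]# cat >> /etc/security/limits.conf << 'EOF'
*    soft    nofile    65535
*    hard    nofile    65535
EOF

	#And raise the kernel-wide maximum if necessary
[root@web01 ~]# echo 'fs.file-max = 655350' >> /etc/sysctl.conf
[root@web01 ~]# sysctl -p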

2, Set up the server-level configuration files [the path is the one included by the main configuration above; the file name just has to end with .conf]
	#Delete the default server tier profile
rm -f /etc/nginx/conf.d/default.conf
	#Edit a new server layer configuration file as needed
vim /etc/nginx/conf.d/web.conf
	#Use server{} blocks to configure websites; each server{} represents one website (a virtual host)
	#A file may contain multiple server{} blocks, but it is recommended to give each website its own file
server {
	#Listening port, default 80
    listen       80;
    #Domain name provided for user access
    server_name  www.web01.com;
    
    #Log saving path and content [call settings in main configuration]
    access_log /var/log/nginx/www.web01.com.log main;
    #Sometimes the configured character set does not take effect; wrapping the value in quotes as below usually helps:
    charset 'utf-8,gbk';
            
    #Control site access path
    location / {
    	#The directory location where the site source code is stored
        root   /code/html;
        #Returns the file of the web site by default
        index  index.html index.htm index.php;
    }
    #A file containing pages within each website [generally not used]
    #include /etc/nginx/location.d/*.conf
    #Upload file size limit
    client_max_body_size 200m;
    
    #When an error occurs, jump to different pages according to the return value
    error_page  404              /404.html;
    	location = /404.html {
        root   /usr/share/nginx/html;
    }
    error_page   500 502 503 504  /50x.html;
    location = /50x.html {
        root   /usr/share/nginx/html;
    }

    location ~ \.php$ {
        root /web/php;
        #The address and port of the associated php software
        fastcgi_pass 127.0.0.1:9000;
        #Associated pages
        fastcgi_index index.php;
        #Associated site directory and PHP file name
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        #Contains some contents of the fastcgi module
        include fastcgi_params;
    }
}
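A quick hedged way to confirm that the PHP location actually reaches php-fpm, assuming php-fpm is listening on 127.0.0.1:9000 as configured above (info.php is just an example file name):

	#Drop a test script into the PHP site directory
[root@web01 ~]# mkdir -p /web/php
[root@web01 ~]# echo '<?php phpinfo(); ?>' > /web/php/info.php
[root@web01 ~]# chown -R www.www /web

	#Check the syntax, reload, and request the script through nginx
[root@web01 ~]# nginx -t && systemctl reload nginx
[root@web01 ~]# curl -s -H "Host: www.web01.com" http://127.0.0.1/info.php | grep -i "PHP Version"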

# fastcgi_params is generally used in dynamic resource web servers

# The fastcgi_param directive of Nginx's fastcgi module handles these mapping relationships. The following Nginx configuration file translates Nginx variables into variables that PHP can understand.
cat /etc/nginx/fastcgi_params

fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
# Full path of the requested script: when 127.0.0.1/index.php is accessed, the index.php file under the website root directory must be read. If this parameter is not configured, nginx does not look for the .php file under the website root, so a blank page is returned

fastcgi_param QUERY_STRING $query_string;            #Requested parameters; Such as? app=123
fastcgi_param REQUEST_METHOD $request_method;        #Requested action (GET,POST)
fastcgi_param CONTENT_TYPE $content_type;            #Content type field in request header
fastcgi_param CONTENT_LENGTH $content_length;        #The content length field in the request header.

fastcgi_param SCRIPT_NAME $fastcgi_script_name;      #Script name 
fastcgi_param REQUEST_URI $request_uri;              #The requested address has no parameters
fastcgi_param DOCUMENT_URI $document_uri;            #Same as $uri. 
fastcgi_param DOCUMENT_ROOT $document_root;          #The root directory of the web site. In the server configuration, the value specified in the root command 
fastcgi_param SERVER_PROTOCOL $server_protocol;      #The protocol used for the request, usually HTTP/1.0 or HTTP/1.1.
fastcgi_param  REQUEST_SCHEME     $scheme;			 #Request scheme (http or https)
fastcgi_param  HTTPS              $https if_not_empty;	#Passed only when $https is not empty, i.e. when the request came over HTTPS

fastcgi_param GATEWAY_INTERFACE CGI/1.1;             #cgi version
fastcgi_param SERVER_SOFTWARE nginx/$nginx_version;  #nginx version number, which can be modified or hidden

fastcgi_param REMOTE_ADDR $remote_addr;              #Client IP
fastcgi_param REMOTE_PORT $remote_port;              #Client port
fastcgi_param SERVER_ADDR $server_addr;              #Server IP address
fastcgi_param SERVER_PORT $server_port;              #Server port
fastcgi_param SERVER_NAME $server_name;              #Server name (domain name), the server_name specified in the server configuration

fastcgi_param PATH_INFO $path_info;                  #Customizable variable

# PHP only, required if PHP was built with --enable-force-cgi-redirect
fastcgi_param REDIRECT_STATUS 200;


# proxy_params is generally used in load balancing servers

vim /etc/nginx/proxy_params
proxy_set_header Host $http_host;
proxy_http_version 1.1;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_connect_timeout 60s;
proxy_read_timeout 60s;
proxy_send_timeout 60s;
proxy_buffering on;
proxy_buffer_size 8k;
proxy_buffers 8 8k;
proxy_next_upstream error timeout http_500 http_502 http_503 http_504;

	#notes
proxy_set_header Host $http_host;	#Pass the domain name requested by the client to the back-end server
proxy_http_version 1.1;				#Use the HTTP version 1.1 protocol to access the web server

#Pass the client's IP so the back-end web server knows the real visitor IP [each hop appends its address, so the list accumulates]
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

#Pass the IP of the previous hop to the back end [overwritten at each hop, so it is recommended only on the first-layer load balancer]
#proxy_set_header X-Real-IP $remote_addr;

proxy_connect_timeout 60s;			#Timeout for establishing a connection from the load balancer to the back-end server
proxy_read_timeout 60s;				#Timeout for reading a response from the back-end server
proxy_send_timeout 60s;				#Timeout for sending the request to the back-end server
proxy_buffering on;					#Open buffer

#Set the size of the buffer to read the first part of the response received from the proxy server. This part usually contains a small response header. By default, the buffer size is equal to one memory page. Whether this is 4K or 8K depends on the platform. However, it can be made smaller.
proxy_buffer_size 8k;
proxy_buffers 8 8k;					#Set the number and size of buffers

#If the chosen back-end host returns an error or times out, the request is retried on another host
proxy_next_upstream error timeout http_500 http_502 http_503 http_504;
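proxy_params is only useful once it is pulled into a proxied location with include; a minimal hedged sketch of how it is typically used (the upstream name and servers are illustrative):

upstream web_pools {                        #example upstream name
    server 172.16.1.31:80;
    server 172.16.1.32:80;
}

server {
    listen 80;
    server_name www.web01.com;

    location / {
        proxy_pass http://web_pools;
        include proxy_params;               #pulls in all of the proxy_set_header / timeout / buffer settings above
    }
}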

Safety and optimization summary

1, CPU affinity; the number of worker processes; the number of files each worker process may open

2, Use the epoll network model; adjust the maximum number of connections per worker process

3, Efficient file reading: sendfile, tcp_nopush

4, Real-time file transmission: tcp_nodelay

5, Enable TCP long connections and the long-connection timeout: keepalive

6, Enable compression for file transfer: gzip

7, Enable expires caching for static files

8, Hide the nginx version number

9, Deny access by bare IP address; forbid malicious domain-name resolution; allow access only via the domain name

10, Configure hotlink protection and cross-domain access

11, Guard against DDoS and CC attacks; limit concurrent connections and HTTP requests per IP

12, Display friendly nginx error pages

13, nginx HTTPS encrypted transmission optimization

14, nginx proxy_cache, fastcgi_cache, uwsgi_cache caching; third-party tools (squid, varnish)

Optimization summary

1. Hardware
	As a proxy server, nginx needs more memory and CPU
	As a static file server, nginx needs large disks
2. Network
	Provide more bandwidth
	Handle packet loss
3. System
	Number of open file handles
	TIME_WAIT port reuse for long connections [must be done]
4. Application
	Configure long (keepalive) connections when nginx acts as a proxy
5. Service
	nginx as a static website: static caching, static resource reading, static resource compression, hotlink protection, cross-domain access, CPU affinity

Nginx application scenario

Static service             Proxy services                  Security services               Popular architecture
Browser cache              Protocol type                   Access control                  Nginx+PHP (fastcgi_pass)  LNMP
Hotlink protection         Forward proxy                   Access restrictions             Nginx+Java (proxy_pass)   LNMT
Resource classification    Reverse proxy                   Rate limiting                   Nginx+Python (uwsgi_pass)
Resource compression       Load balancing                  Attack interception
Resource caching           Proxy caching                   Abnormal-request interception
Cross-domain access        Dynamic/static separation       SQL-injection interception
