Introduction to nginx - load balancing

The purpose of load balancing:

Load balancing distributes very high concurrent front-end traffic across multiple back-end servers for processing. This solves the problem of a single node coming under too much pressure, which slows Web responses and, in severe cases, brings the service down so it cannot be provided at all.

Working principle:

Load balancing comes in two forms: layer-4 (transport layer) load balancing and layer-7 (application layer) load balancing.

Layer-4 load balancing works at the transport layer, the fourth layer of the OSI seven-layer model; its main job is forwarding.

After receiving the client's traffic, it forwards packets to an application server by rewriting the packet's address information (destination address and port).

Layer-7 load balancing works at the application layer, the seventh layer of the OSI model; its main job is proxying.

It first establishes a complete connection with the client and parses the application-layer request, then selects an application server according to the scheduling algorithm and opens a second connection to that server to deliver the request.
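
For contrast, layer-4 forwarding in nginx lives in the stream module rather than the http block. A minimal sketch, assuming nginx was built with stream support (--with-stream); the pool name tcp_pool and port 3306 are just for illustration:

    stream {
        upstream tcp_pool {
            server 192.168.1.5:3306;
            server 192.168.1.7:3306;
        }
        server {
            listen 3306;
            proxy_pass tcp_pool;   # raw TCP forwarding, no HTTP parsing
        }
    }

The rest of this article uses layer-7 balancing inside the http block.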

Example:

Front-end server: 192.168.1.6
Back-end server 1: 192.168.1.5
Back-end server 2: 192.168.1.7

The back-end servers here could also be implemented as virtual hosts on a single machine; a minimal example follows.
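
For testing, each back end only needs to serve a page identifying itself. A minimal virtual-host sketch (the web root and page content are assumptions for illustration):

    server {
        listen 80;
        # index.html contains e.g. "this is 1.5 page"
        root  /usr/share/nginx/html;
        index index.html;
    }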

To configure:

The front-end server mainly needs upstream and proxy_pass configured:

upstream defines the pool of back-end servers and the scheduling method.

proxy_pass points at the proxied server's IP or at the name of an upstream server group.

proxy_set_header sets the Host header forwarded to the back-end server and passes along the real client IP.

# Configure the upstream block inside the http block

    upstream web {
        server 192.168.1.5;
        server 192.168.1.7;
    }

# Configure proxy_pass inside the location block
    server {
        listen       80;
        server_name  localhost;
        location /  {
            proxy_pass http://web;
            proxy_next_upstream  error http_404 http_502;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
        }
    }
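
Once the configuration is in place, the usual routine is to validate and reload it; assuming the nginx binary is on the PATH:

    [root@192 ~]# nginx -t          # test the configuration syntax
    [root@192 ~]# nginx -s reload   # reload workers gracefully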

# proxy_next_upstream  error http_404 http_502;
With this directive, error responses such as 404 from a back-end server can be handled: instead of being returned to the client, the request is forwarded to another server.

# proxy_set_header Host $host;
This directive forwards the Host requested by the client to the back end. Without it, the Host header the back end receives would be "web" (the upstream group name), which may produce a wrong request header.

# proxy_set_header X-Real-IP $remote_addr;
This directive forwards the client's IP to the back-end server; in the back-end server's log format, add $http_x_real_ip to record the original client's IP.
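
For example, the back end's log format can record the forwarded client address; a sketch, where the format name proxied is an assumption for illustration:

    # In the back-end server's http block
    log_format proxied '$http_x_real_ip - $remote_addr [$time_local] "$request" $status';
    access_log /var/log/nginx/access.log proxied;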

Scheduling methods:

1. Round robin (the default)

upstream web {
    server 192.168.1.5;
    server 192.168.1.7;
}

Access front-end IP:

[root@192 ~]# while true;do curl 192.168.1.6;sleep 2;done
this is 1.5  page
                       this is 1.7  page
this is 1.5  page
                       this is 1.7  page
                       
# You can see that the back-end servers take turns handling requests, perfectly evenly.

2. Weighted round robin (weight)

upstream web {
    server 192.168.1.5 weight=3;
    server 192.168.1.7 weight=1;
}

Access front-end IP:

[root@192 ~]# while true;do curl 192.168.1.6;sleep 1;done
this is 1.5  page
this is 1.5  page
this is 1.5  page
                       this is 1.7  page
this is 1.5  page
this is 1.5  page
this is 1.5  page
                       this is 1.7  page

# The back-end servers process requests in proportion to their weights; this suits environments where server performance is uneven. Two other per-server parameters are sketched below.
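
Besides weight, each server line in an upstream block accepts other parameters; a minimal sketch, where the extra address 192.168.1.8 is hypothetical:

    upstream web {
        server 192.168.1.5 weight=3;
        server 192.168.1.7 weight=1;
        server 192.168.1.8 backup;    # spare: receives traffic only when the primaries are unavailable
        # server 192.168.1.8 down;    # alternatively, mark a server permanently offline
    }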

3. Round robin + weight + max_fails / fail_timeout

What counts as a failed connection is determined by directives such as proxy_next_upstream and fastcgi_next_upstream. By default, when a back-end server fails, nginx automatically forwards the request to another healthy server (because the default is proxy_next_upstream error timeout;).

So even if we never set this parameter ourselves, nginx already handles connection errors and timeouts for us; what it cannot handle by default are responses such as 404.

To see what is really happening, first set proxy_next_upstream to off, i.e. do not forward failed requests to another healthy server, so the result of a failed request becomes visible.

upstream web {
    server 192.168.1.5 weight=1 max_fails=3 fail_timeout=9s;
    # First stop nginx on 1.5.
    server 192.168.1.7 weight=1;
}

       
server {
        listen       80;
        server_name  localhost;
        location / {
            proxy_pass http://web;
            #proxy_next_upstream  error http_404 http_502;
            proxy_next_upstream off; 
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
        }
}
# Here fail_timeout is 9s and max_fails is 3, so the server gets at most three tries.
# Note that the three tries still follow the round-robin rules: it is not one request connecting three times,
# but three polling passes in which the server gets three chances to handle a request.

Access front-end IP:

[root@192 ~]# while true;do curl -I 192.168.1.6 2>/dev/null|grep HTTP/1.1 ;sleep 3;done
HTTP/1.1 502 Bad Gateway
HTTP/1.1 200 OK
HTTP/1.1 502 Bad Gateway
HTTP/1.1 200 OK
HTTP/1.1 502 Bad Gateway
HTTP/1.1 200 OK
HTTP/1.1 200 OK
HTTP/1.1 200 OK
HTTP/1.1 200 OK
HTTP/1.1 502 Bad Gateway

# We set fail_timeout to 9 seconds and send a request every 3 seconds.
# You can see that after the back-end server goes down, requests are not forwarded to the healthy server;
# 502 is returned directly instead. After three failures, nginx waits 9 seconds before trying that server again (hence the final 502).

# Note that the 200 on the second line is not the response to the client's first request; it is the response to the client's second, new request.

Now turn proxy_next_upstream back on and visit again to see the result:

upstream web {
    server 192.168.1.5 weight=1 max_fails=3 fail_timeout=9s;
    # nginx on 1.5 is still stopped.
    server 192.168.1.7 weight=1;
}
        
server {
        listen       80;
        server_name  localhost;
        location / {
            proxy_pass http://web;
            proxy_next_upstream  error http_404 http_502;
            #proxy_next_upstream off; 
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
        }
}

Access front-end IP:

[root@192 ~]# while true;do curl -I 192.168.1.6 2>/dev/null|grep HTTP/1.1 ;sleep 3;done
HTTP/1.1 200 OK
HTTP/1.1 200 OK
HTTP/1.1 200 OK
HTTP/1.1 200 OK
HTTP/1.1 200 OK
HTTP/1.1 200 OK

# You can see that no 502 has been returned so far and every request was handled: proxy_next_upstream caught the error response codes and forwarded those requests to the next healthy server.
# So everything shows 200, but be clear about which 200s come from the fallback server and which are normal responses.
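
If retrying on every error feels too aggressive, the number of retries can be capped. A sketch using proxy_next_upstream_tries and proxy_next_upstream_timeout (both available since nginx 1.7.5):

        location / {
            proxy_pass http://web;
            proxy_next_upstream  error http_404 http_502;
            proxy_next_upstream_tries 2;       # give up after trying 2 servers
            proxy_next_upstream_timeout 5s;    # or after 5 seconds, whichever comes first
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
        }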

4. ip_hash

upstream web {
    ip_hash;
    server 192.168.1.5 weight=1 max_fails=3 fail_timeout=9s;
    server 192.168.1.7 weight=1;
}
        
server {
        listen       80;
        server_name  localhost;
        location / {
            proxy_pass http://web;
            proxy_next_upstream  error http_404 http_502;
            #proxy_next_upstream off; 
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
        }
}

Access front-end IP:

[root@192 ~]# while true;do curl 192.168.1.6;sleep 2;done
this is 1.5  page
this is 1.5  page
this is 1.5  page
^C
[root@192 ~]# curl 192.168.1.7
                       this is 1.7  page
                       
# As you can see, 1.7's service is healthy, yet requests are not forwarded to it: this client's requests stay pinned to 1.5.
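
Stock nginx also ships scheduling methods beyond round robin and ip_hash; a minimal sketch of least_conn and of the generic hash method (available since nginx 1.7.2), where the group name web2 is just for illustration:

    upstream web {
        least_conn;                      # pick the server with the fewest active connections
        server 192.168.1.5;
        server 192.168.1.7;
    }

    upstream web2 {
        hash $request_uri consistent;    # pin each URI to one server; consistent = ketama hashing
        server 192.168.1.5;
        server 192.168.1.7;
    }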

When using load balancing we run into the problem of session persistence. The common approaches are:

a. ip_hash: requests are assigned to servers according to the client's IP;

b. cookie: the server sends the client a cookie, and requests carrying that cookie are assigned to the server that issued it (see the sketch after this list).

Note: cookies require browser support and can sometimes leak data.
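
For the cookie approach, the commercial NGINX Plus provides a built-in sticky directive (open-source nginx needs a third-party module such as nginx-sticky-module); a sketch of the Plus syntax, with srv_id as the example cookie name:

    upstream web {
        server 192.168.1.5;
        server 192.168.1.7;
        sticky cookie srv_id expires=1h path=/;   # NGINX Plus only
    }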

That covers the basics; we will come back to these features in the context of real business scenarios.

This will be a series of articles on nginx; follow the WeChat public account "Stupid Methodology" for the latest updates.
