preface
Nginx is used for load balancing at the front end or middle tier of the architecture. As traffic grows, the load balancer itself needs to be made highly available, so keepalived is used to remove the single point of risk: once nginx goes down, traffic quickly switches to the backup server.
Possible problems and solutions for VMware network configuration
- Start VMware DHCP Service and VMware NAT Service
- In the host network adapter settings, enable network sharing (allow other network users to connect), save, and restart the virtual machine
install
Node deployment
node | address | service |
---|---|---|
centos7_1 | 192.168.211.130 | Keepalived+Nginx |
centos7_2 | 192.168.211.131 | Keepalived+Nginx |
centos7_3 | 192.168.211.132 | Redis server |
web1 (physical machine) | 192.168.211.128 | FastApi+Celery |
web2 (physical machine) | 192.168.211.129 | FastApi+Celery |
web configuration
web1 start python http server
```bash
vim index.html
```

```html
<html>
<body>
<h1>Web Svr 1</h1>
</body>
</html>
```

```bash
nohup python -m SimpleHTTPServer 8080 > running.log 2>&1 &
```
web2 start python http server
```bash
vim index.html
```

```html
<html>
<body>
<h1>Web Svr 2</h1>
</body>
</html>
```

```bash
nohup python -m SimpleHTTPServer 8080 > running.log 2>&1 &
```
Turn off firewall
```bash
firewall-cmd --state
systemctl stop firewalld.service
systemctl disable firewalld.service
```
Now access from a browser works, and the pages display Web Svr 1 and Web Svr 2.
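As a quick sanity check from a shell (assuming curl is available on the test machine), each backend should return its own test page:

```bash
# Each request should return the corresponding test page
curl http://192.168.211.128:8080/   # expects <h1>Web Svr 1</h1>
curl http://192.168.211.129:8080/   # expects <h1>Web Svr 2</h1>
```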
CentOS 1 and 2 install Nginx
First, configure the source of Alibaba cloud
```bash
mv /etc/yum.repos.d/CentOS-Base.repo /etc/yum.repos.d/CentOS-Base.repo.backup
wget -O /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-7.repo
sed -i -e '/mirrors.cloud.aliyuncs.com/d' -e '/mirrors.aliyuncs.com/d' /etc/yum.repos.d/CentOS-Base.repo
```
Install dependent packages
```bash
yum -y install gcc
yum install -y pcre pcre-devel
yum install -y zlib zlib-devel
yum install -y openssl openssl-devel
```
Download nginx and unzip it
```bash
wget http://nginx.org/download/nginx-1.8.0.tar.gz
tar -zxvf nginx-1.8.0.tar.gz
```
Installing nginx
```bash
cd nginx-1.8.0
./configure --user=nobody --group=nobody --prefix=/usr/local/nginx \
  --with-http_stub_status_module --with-http_gzip_static_module \
  --with-http_realip_module --with-http_sub_module --with-http_ssl_module
make
make install
cd /usr/local/nginx/sbin/
# Check configuration file
./nginx -t
# start nginx
./nginx
```
Open port 80 for nginx in the firewall
```bash
firewall-cmd --zone=public --add-port=80/tcp --permanent
systemctl restart firewalld.service
```
At this point, visiting 130 and 131 shows the nginx welcome page.
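The same can be verified from the command line; a minimal check (assuming curl is installed) is to look at the response headers:

```bash
# Both nodes should answer with HTTP/1.1 200 OK and a Server: nginx header
curl -I http://192.168.211.130/
curl -I http://192.168.211.131/
```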
Create nginx startup file
Create the nginx startup script in the /etc/init.d directory so that the init process starts nginx automatically every time the server boots.
```bash
cd /etc/init.d/
vim nginx
```

```sh
#!/bin/sh
#
# nginx - this script starts and stops the nginx daemon
#
# chkconfig:   - 85 15
# description: Nginx is an HTTP(S) server, HTTP(S) reverse \
#              proxy and IMAP/POP3 proxy server
# processname: nginx
# config:      /etc/nginx/nginx.conf
# pidfile:     /var/run/nginx.pid
# user:        nginx

# Source function library.
. /etc/rc.d/init.d/functions

# Source networking configuration.
. /etc/sysconfig/network

# Check that networking is up.
[ "$NETWORKING" = "no" ] && exit 0

nginx="/usr/local/nginx/sbin/nginx"
prog=$(basename $nginx)

NGINX_CONF_FILE="/usr/local/nginx/conf/nginx.conf"

lockfile=/var/run/nginx.lock

start() {
    [ -x $nginx ] || exit 5
    [ -f $NGINX_CONF_FILE ] || exit 6
    echo -n $"Starting $prog: "
    daemon $nginx -c $NGINX_CONF_FILE
    retval=$?
    echo
    [ $retval -eq 0 ] && touch $lockfile
    return $retval
}

stop() {
    echo -n $"Stopping $prog: "
    killproc $prog -QUIT
    retval=$?
    echo
    [ $retval -eq 0 ] && rm -f $lockfile
    return $retval
}

restart() {
    configtest || return $?
    stop
    start
}

reload() {
    configtest || return $?
    echo -n $"Reloading $prog: "
    killproc $nginx -HUP
    RETVAL=$?
    echo
}

force_reload() {
    restart
}

configtest() {
    $nginx -t -c $NGINX_CONF_FILE
}

rh_status() {
    status $prog
}

rh_status_q() {
    rh_status >/dev/null 2>&1
}

case "$1" in
    start)
        rh_status_q && exit 0
        $1
        ;;
    stop)
        rh_status_q || exit 0
        $1
        ;;
    restart|configtest)
        $1
        ;;
    reload)
        rh_status_q || exit 7
        $1
        ;;
    force-reload)
        force_reload
        ;;
    status)
        rh_status
        ;;
    condrestart|try-restart)
        rh_status_q || exit 0
        ;;
    *)
        echo $"Usage: $0 {start|stop|status|restart|condrestart|try-restart|reload|force-reload|configtest}"
        exit 2
esac
```
Register the script with chkconfig so nginx starts on boot; enter the following commands in turn
```bash
chkconfig --add nginx
chkconfig --level 345 nginx on
```
Add execution permissions to this file
```bash
chmod +x nginx
ls
# functions  netconsole  network  nginx  README
```
Start Nginx service
```bash
service nginx start
service nginx status
service nginx reload
```
Nginx reverse proxy, load balancing (centos_1)
Modify the nginx.conf configuration file, keeping only the uncommented lines
```bash
cd /usr/local/nginx/conf/
mv nginx.conf nginx.conf.bak
# Progressively filter out commented and empty lines to preview the effective configuration
egrep -v '^#' nginx.conf.bak
egrep -v '^#|^[ ]*#' nginx.conf.bak
egrep -v '^#|^[ ]*#|^$' nginx.conf.bak
# Write the cleaned configuration to a new nginx.conf
egrep -v '^#|^[ ]*#|^$' nginx.conf.bak >> nginx.conf
cat nginx.conf
```
The output is as follows
```nginx
worker_processes  1;
events {
    worker_connections  1024;
}
http {
    include       mime.types;
    default_type  application/octet-stream;
    sendfile      on;
    keepalive_timeout  65;
    server {
        listen       80;
        server_name  localhost;
        location / {
            root   html;
            index  index.html index.htm;
        }
        error_page   500 502 503 504  /50x.html;
        location = /50x.html {
            root   html;
        }
    }
}
```
Reload nginx configuration
```bash
# Test whether the configuration file is normal
../sbin/nginx -t
# Reload nginx configuration
../sbin/nginx -s reload
```
Configure nginx reverse proxy and load balancing
```nginx
worker_processes  1;
events {
    worker_connections  1024;
}
http {
    include       mime.types;
    default_type  application/octet-stream;
    sendfile      on;
    keepalive_timeout  65;

    # websvr server cluster (also called a load balancing pool)
    upstream websvr {
        server 192.168.211.128:8080 weight=1;
        server 192.168.211.129:8080 weight=2;
    }

    server {
        listen       80;
        # IP address or domain name; multiple entries are separated by spaces
        server_name  192.168.211.130;
        location / {
            # Send all requests to the websvr cluster for processing
            proxy_pass http://websvr;
        }
        error_page   500 502 503 504  /50x.html;
        location = /50x.html {
            root   html;
        }
    }
}
```
Now reload the nginx configuration
```bash
sbin/nginx -s reload
```
The upstream name websvr can be anything that describes these servers. In other words, adding an upstream block and pointing proxy_pass at it is all that is needed to achieve load balancing.
Now when you visit 130, the page alternates between Web Svr 1 and Web Svr 2. The backend is chosen according to the weight: the larger the weight value, the more requests that server receives, so on repeated refreshes Web Svr 2 appears about twice as often as Web Svr 1.
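One rough way to see the weighted distribution is to send a batch of requests through the proxy and count which backend answered; this sketch assumes the test pages still contain the strings "Web Svr 1" and "Web Svr 2":

```bash
# Send 30 requests through nginx and count the responses per backend.
# With weight=1 vs weight=2, roughly 10 should come from Web Svr 1 and 20 from Web Svr 2.
for i in $(seq 1 30); do
  curl -s http://192.168.211.130/
done | grep -o 'Web Svr [12]' | sort | uniq -c
```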
So far this is not yet highly available. The web tier can tolerate a single server failing, but if the nginx service itself fails, the whole system becomes inaccessible, so multiple nginx instances are needed as a safeguard.
Multiple Nginx instances working together for high availability [two-node master/backup mode]
Add an nginx service on the 131 server (centos_2). Like the previous configuration, you only need to modify nginx.conf
```nginx
worker_processes  1;
events {
    worker_connections  1024;
}
http {
    include       mime.types;
    default_type  application/octet-stream;
    sendfile      on;
    keepalive_timeout  65;

    upstream websvr {
        server 192.168.211.128:8080 weight=1;
        server 192.168.211.129:8080 weight=2;
    }

    server {
        listen       80;
        server_name  192.168.211.131;
        location / {
            proxy_pass http://websvr;
        }
        error_page   500 502 503 504  /50x.html;
        location = /50x.html {
            root   html;
        }
    }
}
```

```bash
# Reload nginx
sbin/nginx -s reload
```
Now visiting http://192.168.211.130/ and http://192.168.211.131/ returns similar results.
The two nginx servers have different IP addresses, so how can they work together as a single entry point? This is where keepalived comes in.
Install keepalived on both CentOS nodes at the same time
```bash
yum install keepalived pcre-devel -y
```
Configure keepalived
Back up the configuration file on both nodes
```bash
cp /etc/keepalived/keepalived.conf keepalived.conf.bak
```
centos_1: configure the keepalived master
```
[root@localhost keepalived]# cat keepalived.conf
! Configuration File for keepalived

global_defs {
    script_user root
    enable_script_security
}

vrrp_script chk_nginx {
    # Monitoring script that checks whether the nginx service is running normally
    script "/etc/keepalived/chk_nginx.sh"
    # Run the check every 10 seconds
    interval 10
    # Priority change caused by the script result: if the check fails (non-zero exit), lower the priority by 5
    # weight -5
    # Only treat it as a real failure after 2 consecutive failed checks; weight then lowers the priority (1-255)
    # fall 2
    # A single successful check marks the service healthy again, without changing the priority
    # rise 1
}

vrrp_instance VI_1 {
    # Role of this keepalived node: MASTER on the host, BACKUP on the standby
    state MASTER
    # Network interface monitored for HA; on CentOS 7 use `ip addr` to find it
    interface ens33
    # virtual_router_id must be identical on master and backup; it can be set to the
    # last group of the IP and must be between 1 and 255
    virtual_router_id 51
    # Priority value; within the same vrrp_instance the MASTER must be higher than the BACKUP,
    # so the BACKUP hands the VIP back automatically once the MASTER recovers
    priority 100
    # VRRP advertisement interval in seconds; if no advertisement is seen,
    # the peer is considered down and a master/backup switch happens
    advert_int 1
    # Authentication type and password; master and backup must be identical
    authentication {
        # VRRP authentication type, mainly PASS or AH
        auth_type PASS
        # The password must match on both servers for them to communicate normally
        auth_pass 1111
    }
    track_script {
        # Reference the vrrp_script defined above; it is run periodically and can change the priority
        chk_nginx
    }
    virtual_ipaddress {
        # VRRP HA virtual address; if there are multiple VIPs, add one per line
        192.168.211.140
    }
}
```
Send the configuration file to node 131
```bash
scp /etc/keepalived/keepalived.conf 192.168.211.131:/etc/keepalived/keepalived.conf
```
On node 131, only the following two lines need to change:

```
state BACKUP
priority 90
```
The nginx monitoring script chk_nginx.sh configured in keepalived
Create a script to execute in keepalived
```bash
vi /etc/keepalived/chk_nginx.sh
```

```bash
#!/bin/bash
# Count the running nginx processes and assign the value to the variable counter
counter=$(ps -C nginx --no-header | wc -l)
# If there is no nginx process, the value is 0
if [ $counter -eq 0 ]; then
    # Try to start nginx
    echo "Keepalived Info: Try to start nginx" >> /var/log/messages
    /usr/local/nginx/sbin/nginx
    sleep 3
    if [ $(ps -C nginx --no-header | wc -l) -eq 0 ]; then
        # Write a log entry to the system messages file
        echo "Keepalived Info: Unable to start nginx" >> /var/log/messages
        # If nginx still cannot be started, stop the keepalived process so the VIP moves away
        # killall keepalived
        # or
        systemctl stop keepalived
        exit 1
    else
        echo "Keepalived Info: Nginx service has been restored" >> /var/log/messages
        exit 0
    fi
else
    # Status normal
    echo "Keepalived Info: Nginx detection is normal" >> /var/log/messages
    exit 0
fi
```
Next, grant execution permission and test
```bash
chmod +x chk_nginx.sh
./chk_nginx.sh
```
Restart keepalived on both nodes
```bash
systemctl restart keepalived
systemctl status keepalived
```
At this point, .140 can also be accessed and the page displays normally, which means the virtual IP has been bound successfully. To follow what the check script is doing, you can watch the log output in /var/log/messages in real time with the following command:
```bash
tail -f /var/log/messages
# If nginx is off:
#   Keepalived Info: Try to start nginx
#   Keepalived Info: Nginx service has been restored
# If nginx is running normally:
#   Keepalived Info: Nginx detection is normal
```
When the nginx check succeeds, the script returns 0; when nginx cannot be detected, it returns 1. In this setup, however, the failover does not appear to hinge on that return value: the script stops the keepalived service when nginx cannot be recovered, the local VIP is released once keepalived is gone, and the virtual IP is finally taken over by the other server.
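A rough failover test, using only the addresses and interface configured above, is to take the master down and confirm that the VIP moves to the backup:

```bash
# On the master (192.168.211.130): stop keepalived to force a failover
# (the same happens automatically when chk_nginx.sh cannot restart nginx)
systemctl stop keepalived

# On the backup (192.168.211.131): the VIP should now appear on ens33
ip addr show ens33 | grep 192.168.211.140

# From any machine: the site stays reachable through the VIP
curl http://192.168.211.140/
```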
Reference articles
https://www.jianshu.com/p/7e8e61d34960
https://www.cnblogs.com/zhangxingeng/p/10721083.html