Front-end and back-end separated deployment and operations in a Docker environment

1. Common Docker commands

  1. Update the system packages first

    yum -y update
    
  2. Install Docker

    yum install -y docker
    
  3. Start, restart and stop the Docker service

    service docker start
    service docker restart
    service docker stop
    
  4. Search for an image

    docker search <image name>
    
  5. Pull an image

    docker pull <image name>
    
  6. List images

    docker images
    
  7. Delete an image

    docker rmi <image name>
    
  8. Run a container

    docker run <startup parameters> <image name>
    
  9. View container list

    docker ps -a
    
  10. Stop, pause and resume a container

    docker stop <container ID>
    docker pause <container ID>
    docker unpause <container ID>
    
  11. View container information

    docker inspect <container ID>
    
  12. Delete a container

    docker rm <container ID>
    
  13. Manage data volumes

    docker volume create <volume name>  #Create a data volume
    docker volume rm <volume name>  #Delete a data volume
    docker volume inspect <volume name>  #View a data volume
    
  14. Manage networks

    docker network ls  #View network information
    docker network create --subnet=<segment> <network name>  #Create a network
    docker network rm <network name>  #Delete a network
    
  15. Prevent Docker networking from breaking after the VMware virtual machine is suspended and resumed

    vi /etc/sysctl.conf
    

    Add the line net.ipv4.ip_forward = 1 to the file, then restart the network service:

    #service network restart 
    systemctl  restart network
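
    The added setting can also be applied immediately, without restarting networking, using the standard sysctl reload:

    #Reload kernel parameters from /etc/sysctl.conf
    sysctl -p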
    

2. Install a PXC cluster with load balancing and dual-node hot standby

Permanently turn off the firewall and SELinux:

    #Stop the firewall
    systemctl stop firewalld
    #Keep the firewall disabled after reboot
    systemctl disable firewalld
    #Temporarily disable SELinux
    setenforce 0
    #Permanently disable SELinux
    vim /etc/selinux/config
    #Change SELINUX=enforcing to SELINUX=disabled, then restart with the reboot command
  1. Install PXC image

    docker pull percona/percona-xtradb-cluster:5.7.21
    

    It is strongly recommended that students install version 5.7.21 of the PXC image, which has the best compatibility: apt-get can be executed in the container to install packages. In the latest PXC images apt-get cannot be executed, so the hot backup tool cannot be installed.

  2. Rename PXC image

    docker tag percona/percona-xtradb-cluster:5.7.21 pxc
    
  3. Create net1 segment

    docker network create --subnet=172.18.0.0/16 net1
    
  4. Create 5 data volumes

    docker volume create --name v1
    docker volume create --name v2
    docker volume create --name v3
    docker volume create --name v4
    docker volume create --name v5
    
  5. Create backup data volume (for hot backup data)

    docker volume create --name backup
    
  6. Create a 5-node PXC cluster

    Note that after each MySQL container is created it must perform PXC initialization and join the cluster, so wait patiently for about 1 minute before connecting with a MySQL client. In addition, the first MySQL node must have started successfully and be accepting client connections before you create the other MySQL nodes (see the readiness-check sketch after this code block).

    #Create the first MySQL node
    docker run -d -p 3306:3306 -e MYSQL_ROOT_PASSWORD=abc123456 -e CLUSTER_NAME=PXC -e XTRABACKUP_PASSWORD=abc123456 -v v1:/var/lib/mysql -v backup:/data --privileged --name=node1 --net=net1 --ip 172.18.0.2 pxc
    #Create the second MySQL node
    docker run -d -p 3307:3306 -e MYSQL_ROOT_PASSWORD=abc123456 -e CLUSTER_NAME=PXC -e XTRABACKUP_PASSWORD=abc123456 -e CLUSTER_JOIN=node1 -v v2:/var/lib/mysql -v backup:/data --privileged --name=node2 --net=net1 --ip 172.18.0.3 pxc
    #Create the third MySQL node
    docker run -d -p 3308:3306 -e MYSQL_ROOT_PASSWORD=abc123456 -e CLUSTER_NAME=PXC -e XTRABACKUP_PASSWORD=abc123456 -e CLUSTER_JOIN=node1 -v v3:/var/lib/mysql --privileged --name=node3 --net=net1 --ip 172.18.0.4 pxc
    #Create the 4th MySQL node
    docker run -d -p 3309:3306 -e MYSQL_ROOT_PASSWORD=abc123456 -e CLUSTER_NAME=PXC -e XTRABACKUP_PASSWORD=abc123456 -e CLUSTER_JOIN=node1 -v v4:/var/lib/mysql --privileged --name=node4 --net=net1 --ip 172.18.0.5 pxc
    #Create the 5th MySQL node
    docker run -d -p 3310:3306 -e MYSQL_ROOT_PASSWORD=abc123456 -e CLUSTER_NAME=PXC -e XTRABACKUP_PASSWORD=abc123456 -e CLUSTER_JOIN=node1 -v v5:/var/lib/mysql -v backup:/data --privileged --name=node5 --net=net1 --ip 172.18.0.6 pxc
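
    A minimal readiness check, assuming the root password abc123456 used above — poll node1 with mysqladmin until it answers, and only then create the remaining nodes:

    #Wait until node1 accepts connections before starting node2..node5
    until docker exec node1 mysqladmin -uroot -pabc123456 ping --silent; do
        echo "waiting for node1..."
        sleep 5
    done
    echo "node1 is ready"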
    
  7. Install Haproxy image

    docker pull haproxy:1.9.7
    
  8. Write the Haproxy configuration file on the host

    vi /home/soft/haproxy/haproxy.cfg
    

    The configuration file is as follows:

    global
    	#working directory
    	chroot /usr/local/etc/haproxy
    	#Log file, using the local5 log device of the rsyslog service (/var/log/local5), level info
    	log 127.0.0.1 local5 info
    	#Daemon running
    	daemon
    
    defaults
    	log	global
    	mode	http
    	#Log format
    	option	httplog
    	#Heartbeat detection records of load balancing are not recorded in the log
    	option	dontlognull
    	#Connection timeout (ms)
    	timeout connect 5000
    	#Client timeout (ms)
    	timeout client  50000
    	#Server timeout (ms)
    	timeout server  50000
    
    #Monitoring interface	
    listen  admin_stats
    	#Access IP and port of monitoring interface
    	bind  0.0.0.0:8888
    	#access protocol
        mode        http
    	#URI relative address
        stats uri   /dbs
    	#Statistical report format
        stats realm     Global\ statistics
    	#Login account information
        stats auth  admin:abc123456
    #Database load balancing
    listen  proxy-mysql
    	#IP and port accessed
    	bind  0.0.0.0:3306  
        #Network protocol
    	mode  tcp
    	#Load balancing algorithm (polling algorithm)
    	#Polling algorithm: roundrobin
    	#Weight algorithm: static RR
    	#Least connection algorithm: leastconn
    	#Request source IP algorithm: source 
        balance  roundrobin
    	#Log format
        option  tcplog
    	#Create a haproxy user with no privileges and an empty password in MySQL; HAProxy uses this account to check the heartbeat of the MySQL nodes
        option  mysql-check user haproxy
        server  MySQL_1 172.18.0.2:3306 check weight 1 maxconn 2000  
        server  MySQL_2 172.18.0.3:3306 check weight 1 maxconn 2000  
    	server  MySQL_3 172.18.0.4:3306 check weight 1 maxconn 2000 
    	server  MySQL_4 172.18.0.5:3306 check weight 1 maxconn 2000
    	server  MySQL_5 172.18.0.6:3306 check weight 1 maxconn 2000
    	#Use TCP keepalive to detect dead connections
        option  tcpka  
    
  9. Create the heartbeat detection user in the database (running this on any single node is enough, because the PXC cluster is strongly consistent and replicates transactions to every node)

    CREATE USER 'haproxy'@'%' IDENTIFIED BY '';
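
    To verify the account, a quick check from the Docker host or any machine with the mysql client and access to the net1 segment (the node IP comes from the cluster above; the user has an empty password by design):

    #Should connect without prompting for a password and print the user
    mysql -h 172.18.0.2 -u haproxy -e "SELECT CURRENT_USER();"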
    
  10. Stop a node (to simulate a failure)

    docker stop node1
    
  11. Create two Haproxy containers

    #Create the first Haproxy load balancing server
    docker run -it -d -p 4001:8888 -p 4002:3306 -v /home/soft/haproxy:/usr/local/etc/haproxy --name h1 --privileged --net=net1 --ip 172.18.0.7 haproxy:1.9.7
    #Enter the h1 container and start Haproxy
    docker exec -it h1 bash
    haproxy -f /usr/local/etc/haproxy/haproxy.cfg
    #Create the second Haproxy load balancing server
    docker run -it -d -p 4003:8888 -p 4004:3306 -v /home/soft/haproxy:/usr/local/etc/haproxy --name h2 --privileged --net=net1 --ip 172.18.0.8 haproxy:1.9.7
    #Enter the h2 container and start Haproxy
    docker exec -it h2 bash
    haproxy -f /usr/local/etc/haproxy/haproxy.cfg
  12. Install Keepalived in the Haproxy containers and set up the virtual IP

    Note: virtual IPs are not supported on shared virtual hosts, and in many company networks a virtual IP cannot be created (do this exercise at home). In addition, **the host must have the firewall and SELinux turned off**. Many students fail because of this, so remember it.

    #Enter the h1 container
    docker exec -it h1 bash
    #Update packages
    apt-get update
    #Install vim
    apt-get install vim
    #Install Keepalived
    apt-get install keepalived
    #Edit the Keepalived configuration file (see below)
    vim /etc/keepalived/keepalived.conf
    #Start Keepalived
    service keepalived start
    #On the host, ping the virtual IP
    ping 172.18.0.201

    The configuration file content is as follows:

    vrrp_instance VI_1 {
        state MASTER
        interface eth0
        virtual_router_id 51
        priority 100
        advert_int 1
        authentication {
            auth_type PASS
            auth_pass 123456
        }
        virtual_ipaddress {
            172.18.0.201
        }
    }
    #Enter the h2 container
    docker exec -it h2 bash
    #Update packages
    apt-get update
    #Install vim
    apt-get install vim
    #Install Keepalived
    apt-get install keepalived
    #Edit the Keepalived configuration file (see below)
    vim /etc/keepalived/keepalived.conf
    #Start Keepalived
    service keepalived start
    #On the host, ping the virtual IP
    ping 172.18.0.201

    The configuration file content is as follows:

    vrrp_instance VI_1 {
        state MASTER
        interface eth0
        virtual_router_id 51
        priority 100
        advert_int 1
        authentication {
            auth_type PASS
            auth_pass 123456
        }
        virtual_ipaddress {
            172.18.0.201
        }
    }
  13. Install Keepalived on the host to complete the dual-node hot standby

    #Install Keepalived on the host
    yum -y install keepalived
    #Modify the Keepalived configuration file
    vi /etc/keepalived/keepalived.conf
    #Start Keepalived
    service keepalived start
    

    The Keepalived configuration file is as follows:

    vrrp_instance VI_1 {
        state MASTER
        interface ens33
        virtual_router_id 51
        priority 100
        advert_int 1
        authentication {
            auth_type PASS
            auth_pass 1111
        }
        virtual_ipaddress {
           	192.168.180.245
        }
    }
    
    virtual_server 192.168.180.245 8888 {
        delay_loop 3
        lb_algo rr 
        lb_kind NAT
        persistence_timeout 50
        protocol TCP
    
        real_server 172.18.0.201 8888 {
            weight 1
        }
    }
    
    virtual_server 192.168.180.245 3306 {
        delay_loop 3
        lb_algo rr 
        lb_kind NAT
        persistence_timeout 50
        protocol TCP
    
        real_server 172.18.0.201 3306 {
            weight 1
        }
    }
    
  14. Hot backup data

    #Enter node1 container
    docker exec -it node1 bash
    #Update package
    apt-get update
    #Install the hot backup tool
    apt-get install percona-xtrabackup-24
    #Perform a full hot backup
    innobackupex --user=root --password=abc123456 /data/backup/full
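
    innobackupex writes each backup into a timestamped subdirectory (such as the 2018-04-15_05-09-07 directory used in the restore step below); a quick sanity check inside node1:

    #The backup directory should contain the InnoDB files plus xtrabackup_* metadata
    ls /data/backup/full/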
    
  15. Cold restore data

    Stop the remaining 4 nodes and delete their containers:

    docker stop node2
    docker stop node3
    docker stop node4
    docker stop node5
    docker rm node2
    docker rm node3
    docker rm node4
    docker rm node5
    

    Delete the MySQL data inside the node1 container

    #Delete the data
    rm -rf /var/lib/mysql/*
    #Prepare the backup (replay the transaction logs)
    innobackupex --user=root --password=abc123456 --apply-log /data/backup/full/2018-04-15_05-09-07/
    #Restore the data
    innobackupex --user=root --password=abc123456 --copy-back /data/backup/full/2018-04-15_05-09-07/
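
    After --copy-back the restored files are owned by root, and MySQL will refuse to start until ownership is returned to the mysql user — a standard xtrabackup follow-up step:

    #Restore file ownership so mysqld can read the data directory
    chown -R mysql:mysql /var/lib/mysql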
    

    Recreate the remaining four nodes to rebuild the PXC cluster


3. PXC notes

What do the master node and slave nodes mean in PXC?

The master and slave nodes in PXC are very different from Replication master and slave nodes.

First, in a Replication cluster data can only be synchronized from the Master node to the Slave nodes, and node roles are fixed: the Master is always the Master and a Slave is always a Slave; they cannot be interchanged.

In PXC, however, the master node simply refers to the first node to start. It not only starts the MySQL service but also uses Galera to create the PXC cluster; once that work is done, it is automatically demoted to an ordinary node. The other nodes only need to start the MySQL service and then join the PXC cluster, so they are ordinary nodes from startup to shutdown.

Why does node1 start normally while the other PXC nodes flash back (exit immediately) after starting?

As mentioned above, node1 has more work to do when it starts. If the other PXC nodes are started too quickly, before node1 has created the PXC cluster, they cannot find the cluster and exit immediately.

The correct way is to start node1, wait about 10 seconds, confirm you can access it with Navicat, and only then start the other PXC nodes.

If the PXC cluster is running and you shut down the host directly or stop the Docker service, why does every PXC node flash back the next time it is started?

This comes down to how PXC manages its nodes. The data directory of a PXC node is /var/lib/mysql which, fortunately, is mapped to a data volume: look inside the v1 data volume, for example, and you can see node1's data directory. It contains a grastate.dat file with a safe_to_bootstrap parameter, which PXC uses to record the last node to exit the cluster. If node1 is the last node to shut down, PXC sets its safe_to_bootstrap to 1, meaning node1 exited last and holds the most recent data. The next time, node1 must be started first, and the other nodes then synchronize from it.

If you turn off the host's Docker service or its power while all PXC nodes are running normally, PXC has no time to determine which node exited last: every node is stopped in the same instant, and safe_to_bootstrap is 0 on all of them. The fault is easy to fix: pick node1, change the parameter to 1, start node1 normally, and then start the other nodes.
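
A minimal sketch of that fix, assuming the v1 volume is stored at Docker's default path (/var/lib/docker/volumes/v1/_data):

    #Mark node1 as safe to bootstrap the cluster, then start it first
    sed -i 's/safe_to_bootstrap: 0/safe_to_bootstrap: 1/' /var/lib/docker/volumes/v1/_data/grastate.dat
    docker start node1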

A PXC cluster has only one node; if that node's container is stopped, can it be started again?

Of course. Since the cluster contains only this one node, it must start as the master node: on startup it starts the MySQL service and creates the PXC cluster. Even if the container is stopped, the next start repeats exactly these steps and will not fail. But if the PXC cluster consists of multiple nodes and node1 is stopped while the others keep running, starting node1 makes it flash back: it exits just a few seconds after starting. This is because nodes such as node2 are still running in the existing PXC cluster, and the newly started node1 creates a PXC cluster with the same name, which inevitably conflicts, so node1 exits.

The correct solution in this case is to delete the node1 container. Don't be nervous: this does not delete the v1 data volume, so no data is lost. Then recreate node1 with the slave-node form of the command, setting CLUSTER_JOIN in the startup parameters to any PXC node that is currently running, and node1 will start.

4. Install Redis and configure a Redis Cluster

  1. Install Redis image

    docker pull yyyyttttwwww/redis
    
  2. Create net2 segment

    docker network create --subnet=172.19.0.0/16 net2
    
  3. Create a 6-node Redis container

    docker run -it -d --name r1 -p 5001:6379 --net=net2 --ip 172.19.0.2 redis bash
    docker run -it -d --name r2 -p 5002:6379 --net=net2 --ip 172.19.0.3 redis bash
    docker run -it -d --name r3 -p 5003:6379 --net=net2 --ip 172.19.0.4 redis bash
    docker run -it -d --name r4 -p 5004:6379 --net=net2 --ip 172.19.0.5 redis bash
    docker run -it -d --name r5 -p 5005:6379 --net=net2 --ip 172.19.0.6 redis bash
    docker run -it -d --name r6 -p 5006:6379 --net=net2 --ip 172.19.0.7 redis bash
    

    Note: bind 0.0.0.0 must be set in the Redis configuration file so that other IPs can access this Redis instance; without this parameter the Redis Cluster cannot be created. See the configuration sketch below.
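
    A sketch of the cluster-relevant lines in redis.conf (standard Redis options with common tutorial defaults; values are assumptions, not taken from this course's image):

    bind 0.0.0.0
    #Cluster mode must be enabled on every node
    cluster-enabled yes
    cluster-config-file nodes.conf
    cluster-node-timeout 15000
    #Run in the foreground so ./redis-server keeps the shell session
    daemonize no
    appendonly yes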

  4. Start the Redis server on all 6 nodes

    #Enter r1 node
    docker exec -it r1 bash
    cp /home/redis/redis.conf /usr/redis/redis.conf
    cd /usr/redis/src
    ./redis-server ../redis.conf
    #Enter r2 node
    docker exec -it r2 bash
    cp /home/redis/redis.conf /usr/redis/redis.conf
    cd /usr/redis/src
    ./redis-server ../redis.conf
    #Enter r3 node
    docker exec -it r3 bash
    cp /home/redis/redis.conf /usr/redis/redis.conf
    cd /usr/redis/src
    ./redis-server ../redis.conf
    #Enter r4 node
    docker exec -it r4 bash
    cp /home/redis/redis.conf /usr/redis/redis.conf
    cd /usr/redis/src
    ./redis-server ../redis.conf
    #Enter r5 node
    docker exec -it r5 bash
    cp /home/redis/redis.conf /usr/redis/redis.conf
    cd /usr/redis/src
    ./redis-server ../redis.conf
    #Enter r6 node
    docker exec -it r6 bash
    cp /home/redis/redis.conf /usr/redis/redis.conf
    cd /usr/redis/src
    ./redis-server ../redis.conf
    
  5. Create Cluster

    #Execute the following instructions on the r1 node
    cd /usr/redis/src
    mkdir -p ../cluster
    cp redis-trib.rb ../cluster/
    cd ../cluster
    #Create Cluster
    ./redis-trib.rb create --replicas 1 172.19.0.2:6379 172.19.0.3:6379 172.19.0.4:6379 172.19.0.5:6379 172.19.0.6:6379 172.19.0.7:6379
    #Answer yes when prompted
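
    Once created, the cluster state can be checked from any node with the standard redis-cli cluster commands:

    /usr/redis/src/redis-cli -h 172.19.0.2 cluster info
    /usr/redis/src/redis-cli -h 172.19.0.2 cluster nodes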
    

5. Packaging and deploying the back-end project

  1. In the renren-fast open-source back-end project, run the package command (modify the configuration file to change the port, and package three times to generate three JAR files — see the port sketch after this command)

    mvn clean install -Dmaven.test.skip=true
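
    renren-fast is a Spring Boot project, so as an alternative sketch you could build once and override the port per instance at launch (standard Spring Boot property override; ports 6001-6003 are assumed to match the Nginx upstream below):

    java -jar renren-fast.jar --server.port=6001
    java -jar renren-fast.jar --server.port=6002
    java -jar renren-fast.jar --server.port=6003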
    
  2. Install Java image

    docker pull java
    
  3. Create a 3-node Java container

    #Create data volume and upload JAR file
    docker volume create j1
    #Start container
    docker run -it -d --name j1 -v j1:/home/soft --net=host java
    #Enter j1 container
    docker exec -it j1 bash
    #Start the Java project in the background
    nohup java -jar /home/soft/renren-fast.jar &
    
    #Create data volume and upload JAR file
    docker volume create j2
    #Start container
    docker run -it -d --name j2 -v j2:/home/soft --net=host java
    #Enter j2 container
    docker exec -it j2 bash
    #Start the Java project in the background
    nohup java -jar /home/soft/renren-fast.jar &
    
    #Create data volume and upload JAR file
    docker volume create j3
    #Start container
    docker run -it -d --name j3 -v j3:/home/soft --net=host java
    #Enter j3 container
    docker exec -it j3 bash
    #Start the Java project in the background
    nohup java -jar /home/soft/renren-fast.jar &
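
    A quick liveness check from the host, assuming the three instances listen on ports 6001-6003 (the ports used by the Nginx upstream below) and renren-fast's default /renren-fast context path:

    #Each should print an HTTP status code rather than a connection error
    curl -s -o /dev/null -w "%{http_code}\n" http://192.168.99.104:6001/renren-fast/
    curl -s -o /dev/null -w "%{http_code}\n" http://192.168.99.104:6002/renren-fast/
    curl -s -o /dev/null -w "%{http_code}\n" http://192.168.99.104:6003/renren-fast/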
    
  4. Install Nginx image

    docker pull nginx
    
  5. Create Nginx containers and configure load balancing

    The contents of the /home/n1/nginx.conf configuration file on the host are as follows:

    user  nginx;
    worker_processes  1;
    error_log  /var/log/nginx/error.log warn;
    pid        /var/run/nginx.pid;
    
    events {
        worker_connections  1024;
    }
    
    http {
        include       /etc/nginx/mime.types;
        default_type  application/octet-stream;
    
        log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                          '$status $body_bytes_sent "$http_referer" '
                          '"$http_user_agent" "$http_x_forwarded_for"';
    
        access_log  /var/log/nginx/access.log  main;
    
        sendfile        on;
        #tcp_nopush     on;
    
        keepalive_timeout  65;
    
        #gzip  on;
    	
    	proxy_redirect          off;
    	proxy_set_header        Host $host;
    	proxy_set_header        X-Real-IP $remote_addr;
    	proxy_set_header        X-Forwarded-For $proxy_add_x_forwarded_for;
    	client_max_body_size    10m;
    	client_body_buffer_size   128k;
    	proxy_connect_timeout   5s;
    	proxy_send_timeout      5s;
    	proxy_read_timeout      5s;
    	proxy_buffer_size        4k;
    	proxy_buffers           4 32k;
    	proxy_busy_buffers_size  64k;
    	proxy_temp_file_write_size 64k;
    	
    	upstream tomcat {
    		server 192.168.99.104:6001;
    		server 192.168.99.104:6002;
    		server 192.168.99.104:6003;
    	}
    	server {
            listen       6101;
            server_name  192.168.99.104; 
            location / {  
                proxy_pass   http://tomcat;
                index  index.html index.htm;  
            }  
        }
    }
    

    Create the first Nginx node

    docker run -it -d --name n1 -v /home/n1/nginx.conf:/etc/nginx/nginx.conf --net=host --privileged nginx
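
    To confirm the mounted configuration parses, nginx's standard syntax check can be run inside the container:

    docker exec n1 nginx -t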
    
    

    The contents of the /home/n2/nginx.conf configuration file on the host are as follows:

    user  nginx;
    worker_processes  1;
    error_log  /var/log/nginx/error.log warn;
    pid        /var/run/nginx.pid;
    
    events {
        worker_connections  1024;
    }
    
    http {
        include       /etc/nginx/mime.types;
        default_type  application/octet-stream;
    
        log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                          '$status $body_bytes_sent "$http_referer" '
                          '"$http_user_agent" "$http_x_forwarded_for"';
    
        access_log  /var/log/nginx/access.log  main;
    
        sendfile        on;
        #tcp_nopush     on;
    
        keepalive_timeout  65;
    
        #gzip  on;
    	
    	proxy_redirect          off;
    	proxy_set_header        Host $host;
    	proxy_set_header        X-Real-IP $remote_addr;
    	proxy_set_header        X-Forwarded-For $proxy_add_x_forwarded_for;
    	client_max_body_size    10m;
    	client_body_buffer_size   128k;
    	proxy_connect_timeout   5s;
    	proxy_send_timeout      5s;
    	proxy_read_timeout      5s;
    	proxy_buffer_size        4k;
    	proxy_buffers           4 32k;
    	proxy_busy_buffers_size  64k;
    	proxy_temp_file_write_size 64k;
    	
    	upstream tomcat {
    		server 192.168.99.104:6001;
    		server 192.168.99.104:6002;
    		server 192.168.99.104:6003;
    	}
    	server {
            listen       6102;
            server_name  192.168.99.104; 
            location / {  
                proxy_pass   http://tomcat;
                index  index.html index.htm;  
            }  
        }
    }
    

    Create the second Nginx node

    docker run -it -d --name n2 -v /home/n2/nginx.conf:/etc/nginx/nginx.conf --net=host --privileged nginx
    
  6. Install Keepalived in Nginx container

    #Enter node n1
    docker exec -it n1 bash
    #Update package
    apt-get update
    #Install the VIM
    apt-get install vim
    #Install Keepalived
    apt-get install keepalived
    #Edit the Keepalived configuration file (as follows)
    vim /etc/keepalived/keepalived.conf
    #Start Keepalived
    service keepalived start
    
    vrrp_instance VI_1 {
        state MASTER
        interface ens33
        virtual_router_id 51
        priority 100
        advert_int 1
        authentication {
            auth_type PASS
            auth_pass 123456
        }
        virtual_ipaddress {
            192.168.99.151
        }
    }
    virtual_server 192.168.99.151 6201 {
        delay_loop 3
        lb_algo rr
        lb_kind NAT
        persistence_timeout 50
        protocol TCP
        real_server 192.168.99.104 6101 {
            weight 1
        }
    }
    
    #Enter node n2
    docker exec -it n2 bash
    #Update package
    apt-get update
    #Install the VIM
    apt-get install vim
    #Install Keepalived
    apt-get install keepalived
    #Edit the Keepalived configuration file (as follows)
    vim /etc/keepalived/keepalived.conf
    #Start Keepalived
    service keepalived start
    
    vrrp_instance VI_1 {
        state MASTER
        interface ens33
        virtual_router_id 51
        priority 100
        advert_int 1
        authentication {
            auth_type PASS
            auth_pass 123456
        }
        virtual_ipaddress {
            192.168.99.151
        }
    }
    virtual_server 192.168.99.151 6201 {
        delay_loop 3
        lb_algo rr
        lb_kind NAT
        persistence_timeout 50
        protocol TCP
        real_server 192.168.99.104 6102 {
            weight 1
        }
    }
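
    At this point the back end should be reachable through the virtual IP; a quick smoke test from another machine on the 192.168.99.x network (VIP and port taken from the config above):

    curl -I http://192.168.99.151:6201/renren-fast/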
    

6. Packaging and deploying the front-end project

  1. Run the package command in the front-end project directory

    npm run build
    
  2. Copy the files in the build output directory to /home/fn1/renren-vue, /home/fn2/renren-vue and /home/fn3/renren-vue on the host (see the copy sketch below)
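
    A minimal sketch of that copy step, assuming the build output lands in ./dist (the usual output directory of a renren-fast-vue build):

    mkdir -p /home/fn1/renren-vue /home/fn2/renren-vue /home/fn3/renren-vue
    cp -r dist/* /home/fn1/renren-vue/
    cp -r dist/* /home/fn2/renren-vue/
    cp -r dist/* /home/fn3/renren-vue/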

  3. Create a 3-node Nginx and deploy the front-end project

    The /home/fn1/nginx.conf configuration file on the host:

    user  nginx;
    worker_processes  1;
    error_log  /var/log/nginx/error.log warn;
    pid        /var/run/nginx.pid;
    
    events {
        worker_connections  1024;
    }
    
    http {
        include       /etc/nginx/mime.types;
        default_type  application/octet-stream;
    
        log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                          '$status $body_bytes_sent "$http_referer" '
                          '"$http_user_agent" "$http_x_forwarded_for"';
    
        access_log  /var/log/nginx/access.log  main;
    
        sendfile        on;
        #tcp_nopush     on;
    
        keepalive_timeout  65;
    
        #gzip  on;
    	
    	proxy_redirect          off;
    	proxy_set_header        Host $host;
    	proxy_set_header        X-Real-IP $remote_addr;
    	proxy_set_header        X-Forwarded-For $proxy_add_x_forwarded_for;
    	client_max_body_size    10m;
    	client_body_buffer_size   128k;
    	proxy_connect_timeout   5s;
    	proxy_send_timeout      5s;
    	proxy_read_timeout      5s;
    	proxy_buffer_size        4k;
    	proxy_buffers           4 32k;
    	proxy_busy_buffers_size  64k;
    	proxy_temp_file_write_size 64k;
    	
    	server {
    		listen 6501;
    		server_name  192.168.99.104;
    		location  /  {
    			root  /home/fn1/renren-vue;
    			index  index.html;
    		}
    	}
    }
    
    #Start node fn1
    docker run -it -d --name fn1 -v /home/fn1/nginx.conf:/etc/nginx/nginx.conf -v /home/fn1/renren-vue:/home/fn1/renren-vue --privileged --net=host nginx
    

    The /home/fn2/nginx.conf configuration file on the host:

    user  nginx;
    worker_processes  1;
    error_log  /var/log/nginx/error.log warn;
    pid        /var/run/nginx.pid;
    
    events {
        worker_connections  1024;
    }
    
    http {
        include       /etc/nginx/mime.types;
        default_type  application/octet-stream;
    
        log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                          '$status $body_bytes_sent "$http_referer" '
                          '"$http_user_agent" "$http_x_forwarded_for"';
    
        access_log  /var/log/nginx/access.log  main;
    
        sendfile        on;
        #tcp_nopush     on;
    
        keepalive_timeout  65;
    
        #gzip  on;
    	
    	proxy_redirect          off;
    	proxy_set_header        Host $host;
    	proxy_set_header        X-Real-IP $remote_addr;
    	proxy_set_header        X-Forwarded-For $proxy_add_x_forwarded_for;
    	client_max_body_size    10m;
    	client_body_buffer_size   128k;
    	proxy_connect_timeout   5s;
    	proxy_send_timeout      5s;
    	proxy_read_timeout      5s;
    	proxy_buffer_size        4k;
    	proxy_buffers           4 32k;
    	proxy_busy_buffers_size  64k;
    	proxy_temp_file_write_size 64k;
    	
    	server {
    		listen 6502;
    		server_name  192.168.99.104;
    		location  /  {
    			root  /home/fn2/renren-vue;
    			index  index.html;
    		}
    	}
    }
    
    #Start node fn2
    docker run -it -d --name fn2 -v /home/fn2/nginx.conf:/etc/nginx/nginx.conf -v /home/fn2/renren-vue:/home/fn2/renren-vue --privileged --net=host nginx
    

    The /home/fn3/nginx.conf configuration file on the host:

    user  nginx;
    worker_processes  1;
    error_log  /var/log/nginx/error.log warn;
    pid        /var/run/nginx.pid;
    
    events {
        worker_connections  1024;
    }
    
    http {
        include       /etc/nginx/mime.types;
        default_type  application/octet-stream;
    
        log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                          '$status $body_bytes_sent "$http_referer" '
                          '"$http_user_agent" "$http_x_forwarded_for"';
    
        access_log  /var/log/nginx/access.log  main;
    
        sendfile        on;
        #tcp_nopush     on;
    
        keepalive_timeout  65;
    
        #gzip  on;
    	
    	proxy_redirect          off;
    	proxy_set_header        Host $host;
    	proxy_set_header        X-Real-IP $remote_addr;
    	proxy_set_header        X-Forwarded-For $proxy_add_x_forwarded_for;
    	client_max_body_size    10m;
    	client_body_buffer_size   128k;
    	proxy_connect_timeout   5s;
    	proxy_send_timeout      5s;
    	proxy_read_timeout      5s;
    	proxy_buffer_size        4k;
    	proxy_buffers           4 32k;
    	proxy_busy_buffers_size  64k;
    	proxy_temp_file_write_size 64k;
    	
    	server {
    		listen 6503;
    		server_name  192.168.99.104;
    		location  /  {
    			root  /home/fn3/renren-vue;
    			index  index.html;
    		}
    	}
    }
    

    Start fn3 node

    #Start node fn3
    docker run -it -d --name fn3 -v /home/fn3/nginx.conf:/etc/nginx/nginx.conf -v /home/fn3/renren-vue:/home/fn3/renren-vue --privileged --net=host nginx
    
  4. Configure load balancing

    The /home/ff1/nginx.conf configuration file on the host:

    user  nginx;
    worker_processes  1;
    error_log  /var/log/nginx/error.log warn;
    pid        /var/run/nginx.pid;
    
    events {
        worker_connections  1024;
    }
    
    http {
        include       /etc/nginx/mime.types;
        default_type  application/octet-stream;
    
        log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                          '$status $body_bytes_sent "$http_referer" '
                          '"$http_user_agent" "$http_x_forwarded_for"';
    
        access_log  /var/log/nginx/access.log  main;
    
        sendfile        on;
        #tcp_nopush     on;
    
        keepalive_timeout  65;
    
        #gzip  on;
    	
    	proxy_redirect          off;
    	proxy_set_header        Host $host;
    	proxy_set_header        X-Real-IP $remote_addr;
    	proxy_set_header        X-Forwarded-For $proxy_add_x_forwarded_for;
    	client_max_body_size    10m;
    	client_body_buffer_size   128k;
    	proxy_connect_timeout   5s;
    	proxy_send_timeout      5s;
    	proxy_read_timeout      5s;
    	proxy_buffer_size        4k;
    	proxy_buffers           4 32k;
    	proxy_busy_buffers_size  64k;
    	proxy_temp_file_write_size 64k;
    	
    	upstream fn {
    		server 192.168.99.104:6501;
    		server 192.168.99.104:6502;
    		server 192.168.99.104:6503;
    	}
    	server {
            listen       6601;
            server_name  192.168.99.104; 
            location / {  
                proxy_pass   http://fn;
                index  index.html index.htm;  
            }  
        }
    }
    
    #Start ff1 node
    docker run -it -d --name ff1 -v /home/ff1/nginx.conf:/etc/nginx/nginx.conf --net=host --privileged nginx
    

    The /home/ff2/nginx.conf configuration file on the host:

    user  nginx;
    worker_processes  1;
    error_log  /var/log/nginx/error.log warn;
    pid        /var/run/nginx.pid;
    
    events {
        worker_connections  1024;
    }
    
    http {
        include       /etc/nginx/mime.types;
        default_type  application/octet-stream;
    
        log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                          '$status $body_bytes_sent "$http_referer" '
                          '"$http_user_agent" "$http_x_forwarded_for"';
    
        access_log  /var/log/nginx/access.log  main;
    
        sendfile        on;
        #tcp_nopush     on;
    
        keepalive_timeout  65;
    
        #gzip  on;
    	
    	proxy_redirect          off;
    	proxy_set_header        Host $host;
    	proxy_set_header        X-Real-IP $remote_addr;
    	proxy_set_header        X-Forwarded-For $proxy_add_x_forwarded_for;
    	client_max_body_size    10m;
    	client_body_buffer_size   128k;
    	proxy_connect_timeout   5s;
    	proxy_send_timeout      5s;
    	proxy_read_timeout      5s;
    	proxy_buffer_size        4k;
    	proxy_buffers           4 32k;
    	proxy_busy_buffers_size  64k;
    	proxy_temp_file_write_size 64k;
    	
    	upstream fn {
    		server 192.168.99.104:6501;
    		server 192.168.99.104:6502;
    		server 192.168.99.104:6503;
    	}
    	server {
            listen       6602;
            server_name  192.168.99.104; 
            location / {  
                proxy_pass   http://fn;
                index  index.html index.htm;  
            }  
        }
    }
    
    #Start ff2 node
    docker run -it -d --name ff2 -v /home/ff2/nginx.conf:/etc/nginx/nginx.conf --net=host --privileged nginx
    
  5. Configure dual-node hot standby

    #Enter ff1 node
    docker exec -it ff1 bash
    #Update package
    apt-get update
    #Install the VIM
    apt-get install vim
    #Install Keepalived
    apt-get install keepalived
    #Edit the Keepalived configuration file (as follows)
    vim /etc/keepalived/keepalived.conf
    #Start Keepalived
    service keepalived start
    
    vrrp_instance VI_1 {
        state MASTER
        interface ens33
        virtual_router_id 52
        priority 100
        advert_int 1
        authentication {
            auth_type PASS
            auth_pass 123456
        }
        virtual_ipaddress {
            192.168.99.152
        }
    }
    virtual_server 192.168.99.152 6701 {
        delay_loop 3
        lb_algo rr
        lb_kind NAT
        persistence_timeout 50
        protocol TCP
        real_server 192.168.99.104 6601 {
            weight 1
        }
    }
    
    #Enter ff2 node
    docker exec -it ff2 bash
    #Update package
    apt-get update
    #Install the VIM
    apt-get install vim
    #Install Keepalived
    apt-get install keepalived
    #Edit the Keepalived configuration file (as follows)
    vim /etc/keepalived/keepalived.conf
    #Start Keepalived
    service keepalived start
    
    vrrp_instance VI_1 {
        state MASTER
        interface ens33
        virtual_router_id 52
        priority 100
        advert_int 1
        authentication {
            auth_type PASS
            auth_pass 123456
        }
        virtual_ipaddress {
            192.168.99.152
        }
    }
    virtual_server 192.168.99.152 6701 {
        delay_loop 3
        lb_algo rr
        lb_kind NAT
        persistence_timeout 50
        protocol TCP
        real_server 192.168.99.104 6602 {
            weight 1
        }
    }
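
    With both Keepalived instances running, the front end should be reachable through the virtual IP; a final smoke test from another machine on the same network:

    curl -I http://192.168.99.152:6701/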
    
