Linux LVS cluster load balancing | working modes | scheduling algorithms | in detail

1, Meaning of cluster

Cluster
A cluster is composed of multiple hosts but appears externally as a single whole. It provides only one access portal (a domain name or IP address), so to clients it is equivalent to one large computer.

1. Why clusters exist
In Internet applications, as sites place ever higher demands on hardware performance, response speed, service stability, and data reliability, a single server can no longer meet the requirements for load balancing and high availability.

2. How the problem is solved
There are two general approaches:
1. Use expensive minicomputers and mainframes.
2. Build a service cluster from multiple relatively cheap ordinary servers.
With LVS, multiple servers can be integrated behind a single IP address to provide the same service with load balancing. This is the cluster technology commonly used in enterprises: LVS (Linux Virtual Server).

2, Types of clusters

Depending on their target, clusters can be divided into three types:
1. Load balancing clusters
2. High availability clusters
3. High performance computing clusters

1. Load balancing cluster (Load Balance Cluster)
1. Aims to improve the responsiveness of the application system, handle as many access requests as possible, and reduce latency, achieving high concurrency and high overall load balancing (LB) performance.

2. The load distribution of LB depends on the scheduling algorithm of the master node, which distributes client access requests across multiple server nodes, thereby relieving the load on the whole system.
2. High availability cluster (High Availability Cluster)
1. Aims to improve the reliability of the application system and reduce interruption time as much as possible, ensuring continuity of service and achieving the fault tolerance of high availability (HA).

2. HA works in either dual-active or master-slave mode. In dual-active mode all nodes are online at the same time; in master-slave mode only the master node is online, but on failure a slave node automatically switches to become the master.
Examples: "failover", "dual-machine hot standby".
3. High performance computing cluster (High Performance Computer Cluster)
1. Aims to improve the CPU computing speed of the application system and expand its hardware resources and analysis capability, obtaining high performance computing (HPC) power comparable to large-scale computers and supercomputers.

2. High performance relies on "distributed computing" and "parallel computing": specialized hardware and software integrate the CPU, memory, and other resources of multiple servers, achieving computing power that otherwise only large computers and supercomputers possess. Examples: "cloud computing", "grid computing".

3, Cluster load balancing architecture

Analysis of the load balancing architecture

Layer 1: load scheduler (Load Balancer or Director)
The single access entrance of the whole cluster system, using the VIP address shared by all servers, also known as the cluster IP address. A primary and a standby scheduler are usually configured for hot backup; when the primary scheduler fails, the standby scheduler takes over smoothly, ensuring high availability.
Layer 2: server pool (Server Pool)
The application services provided by the cluster are carried by the server pool. Each node has an independent RIP address (real IP) and handles only the client requests distributed by the scheduler. When a node fails temporarily, the fault-tolerance mechanism of the load scheduler isolates it until the error is cleared, after which it rejoins the server pool.
Layer 3: shared storage (Shared Storage)
Provides stable and consistent file access services for all nodes in the server pool, ensuring the consistency of the whole cluster. It can be a NAS device, or a dedicated server providing NFS shared services.
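
The three layers can be sketched roughly as follows (a simplified diagram of the structure described above):

                    clients
                       |
                VIP (cluster IP)
                       |
           load scheduler (Director)      <- primary + standby hot backup
             /         |         \
         node 1     node 2     node 3     <- server pool, each node with its own RIP
             \         |         /
        shared storage (NAS or NFS server)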

4, Analysis of load balancing cluster working modes

1. The load balancing cluster is the most widely used cluster type in enterprises.
2. Cluster load scheduling technology has three working modes:

Address translation (NAT mode)
IP tunnel (TUN mode)
Direct routing (DR mode)

5, Three load scheduling modes

(1) NAT mode
Address translation
Network Address Translation, abbreviated NAT
The load scheduler acts as the gateway of all server nodes, similar to the private network structure of a firewall: it is both the access entrance for clients and the exit through which each node responds to clients.
Server nodes use private IP addresses and sit on the same physical network as the load scheduler; security is better than in the other two modes.
(2) TUN mode
IP tunnel
IP Tunnel, abbreviated TUN
An open network structure is adopted: the load scheduler serves only as the access entrance for clients, and each node responds to clients directly over its own Internet connection, without passing back through the load scheduler.
Server nodes are scattered across different locations on the Internet, each has an independent public IP address, and they communicate with the load scheduler through dedicated IP tunnels.
(3) DR mode
Direct routing
Direct Routing, abbreviated DR
A semi-open network structure is adopted, similar to that of TUN mode, except that the nodes are not scattered: they sit on the same physical network as the scheduler.
The load scheduler connects to each node server over the local network, so there is no need to establish dedicated IP tunnels.
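
Summarizing the three modes (drawn from the descriptions above):

mode	scheduler role	node location	response path
NAT	client entrance and node exit (gateway)	same physical network, private IPs	replies pass back through the scheduler
TUN	client entrance only	scattered on the Internet, public IPs, dedicated IP tunnels	nodes reply to clients directly
DR	client entrance only	same physical network as the scheduler	nodes reply to clients directly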

6, LVS virtual server

1. Linux Virtual Server
A load balancing solution developed for the Linux kernel.
LVS is effectively an IP-address-based virtualization application, and it provides an efficient solution for load balancing based on IP addresses and content request distribution.

2. LVS is now part of the Linux kernel, compiled as the ip_vs module, which can be loaded automatically when needed. On a CentOS 7 system, the following commands manually load the ip_vs module and show the version information of the ip_vs module in the current system.

modprobe ip_vs
cat /proc/net/ip_vs    #Confirm that the kernel supports LVS
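
On a stock CentOS 7 kernel, the output of the second command typically begins like this (the exact version number may differ on your system):

IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port Forward Weight ActiveConn InActConn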

7, LVS cluster scheduling algorithms

1. Round Robin (rr)
Assigns received access requests to each node in the cluster (real server) in turn, treating every server equally regardless of its actual connection count and system load.

2. Weighted Round Robin (wrr)
Distributes requests according to the weight values set by the scheduler: nodes with higher weights receive tasks first and are assigned more requests.
This ensures that servers with stronger performance bear more of the access traffic.

3. Least Connections (lc)
Allocates requests according to the number of connections established by each real server, giving priority to the node with the fewest connections.

4. Weighted Least Connections (wlc)
When the performance of the server nodes differs greatly, the weight can be adjusted automatically for each real server.
Nodes with higher performance bear a larger proportion of the active connection load.
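
As a quick illustration, the algorithm is selected with the -s option of ipvsadm (the management tool described in the next section), and per-node weights with -w. A sketch reusing the VIP and real-server addresses from the deployment example below:

ipvsadm -A -t 12.0.0.1:80 -s wrr                         #Virtual server using weighted round robin
ipvsadm -a -t 12.0.0.1:80 -r 192.168.78.22:80 -m -w 3    #This node receives about three times...
ipvsadm -a -t 12.0.0.1:80 -r 192.168.78.33:80 -m -w 1    #...as many requests as this one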

8, ipvsadm tool

Description of ipvsadm functions and options
option	function
-A	Add a virtual server
-D	Delete the entire virtual server
-s	Specify the load scheduling algorithm (round robin: rr, weighted round robin: wrr, least connections: lc, weighted least connections: wlc)
-a	Add a real server (node server)
-d	Delete a node
-t	Specify the VIP address and TCP port
-r	Specify the RIP address and TCP port
-m	Use NAT cluster mode
-g	Use DR mode
-i	Use TUN mode
-w	Set the weight (a weight of 0 pauses the node)
-p 60	Keep a persistent connection for 60 seconds
-l	List LVS virtual servers (all by default)
-n	Display addresses, ports, and other information in numeric form, often combined with "-l": ipvsadm -ln
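
For example, to remove a single node or the whole virtual server (a sketch using the addresses from the deployment below):

ipvsadm -d -t 12.0.0.1:80 -r 192.168.78.33:80    #Delete one node from the virtual server
ipvsadm -D -t 12.0.0.1:80                        #Delete the entire virtual server
ipvsadm -ln                                      #Verify the result in numeric form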

9, NAT mode LVS load balancing cluster deployment

The LVS scheduler acts as the gateway of the Web server pool. It has two network cards, connected to the internal and external networks respectively, and uses the round robin (rr) scheduling algorithm.

LVS cluster simulation experiment configuration

host	operating system	IP address	installation packages
Load scheduler	CentOS 7	192.168.78.11 (ens33), 12.0.0.1 (ens37)	ipvsadm
Web node server 1	CentOS 7	192.168.78.22	rpcbind, nfs-utils, httpd
Web node server 2	CentOS 7	192.168.78.33	rpcbind, nfs-utils, httpd
NFS server	CentOS 7	192.168.78.44	rpcbind, nfs-utils

1. Deploy NFS shared storage (NFS server: 192.168.78.44)

Add two new hard disks to the system, format them, and mount them to /opt/accp and /opt/bccp; record the mounts in /etc/fstab so they persist across reboots.
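
A minimal sketch of this step, assuming the new disks appear as /dev/sdb and /dev/sdc and are formatted as XFS (device names and filesystem type are assumptions; adjust them to your system):

mkfs.xfs /dev/sdb                 #Format the first new disk (assumed device name)
mkfs.xfs /dev/sdc                 #Format the second new disk (assumed device name)
mkdir -p /opt/accp /opt/bccp      #Create the mount points
mount /dev/sdb /opt/accp
mount /dev/sdc /opt/bccp
vim /etc/fstab                    #Then add entries such as the following two lines:
/dev/sdb  /opt/accp  xfs  defaults  0 0
/dev/sdc  /opt/bccp  xfs  defaults  0 0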

systemctl stop firewalld.service
systemctl disable firewalld.service
setenforce 0

yum -y install nfs-utils rpcbind

systemctl start rpcbind.service
systemctl start nfs.service

systemctl enable nfs.service
systemctl enable rpcbind.service

mkdir /opt/bccp
mkdir /opt/accp

chmod 777 /opt/bccp
chmod 777 /opt/accp

vim /etc/exports
/opt/bccp 192.168.78.0/24(rw,sync,no_root_squash)
/opt/accp 192.168.78.0/24(rw,sync,no_root_squash)
exportfs -rv
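
A quick local check that both directories are exported:

showmount -e localhost
#Export list for localhost:
#/opt/accp 192.168.78.0/24
#/opt/bccp 192.168.78.0/24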


2. Configure node servers (192.168.78.22, 192.168.78.33)

systemctl stop firewalld.service
systemctl disable firewalld.service
setenforce 0

yum install httpd -y
systemctl start httpd.service
systemctl enable httpd.service

yum -y install nfs-utils rpcbind
showmount -e 192.168.78.44

systemctl start rpcbind
systemctl enable rpcbind


Web-1(192.168.78.22)

mount.nfs 192.168.78.44:/opt/bccp /var/www/html
echo 'this is bccp web' > /var/www/html/index.html

Web-2(192.168.78.33)

mount.nfs 192.168.78.44:/opt/accp /var/www/html
echo 'this is accp web' > /var/www/html/index.html
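
On each node, a quick check that the NFS share is mounted and the test page is served locally:

df -h /var/www/html      #Should show the 192.168.78.44 export as the mounted filesystem
curl http://localhost/   #Should print that node's test page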




3. Configure the load scheduler (internal gateway ens33: 192.168.78.11, external gateway ens37: 12.0.0.1)

The scheduler machine needs two network cards.
systemctl start firewalld.service       #The firewall stays on here; it is used to apply the SNAT security policy
systemctl enable firewalld.service
setenforce 0

(1) Configure SNAT forwarding rules

vim /etc/sysctl.conf
net.ipv4.ip_forward = 1                     #Enable IP forwarding persistently
 or
echo '1' > /proc/sys/net/ipv4/ip_forward    #Enable it immediately for the running system
sysctl -p                                   #Reload kernel parameters

iptables -t nat -F                          #Flush any existing NAT rules

iptables -t nat -A POSTROUTING -s 192.168.78.0/24 -o ens37 -j SNAT --to-source 12.0.0.1
iptables -t nat -nL                         #Verify the SNAT rule
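
If the rule was added, the listing should contain a line similar to the following (representative output; the column layout may vary slightly):

Chain POSTROUTING (policy ACCEPT)
target     prot opt source               destination
SNAT       all  --  192.168.78.0/24      0.0.0.0/0            to:12.0.0.1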

(2) Load LVS kernel module

modprobe ip_vs					#Load ip_vs module
cat /proc/net/ip_vs				#View ip_vs version information

(3) Install ipvsadm management tool

yum -y install ipvsadm

The load distribution policy file must exist before the service can start (the ipvsadm service on CentOS 7 refuses to start without /etc/sysconfig/ipvsadm), so save the policy first even though it is still empty:

ipvsadm-save > /etc/sysconfig/ipvsadm
 or
ipvsadm --save > /etc/sysconfig/ipvsadm

systemctl start ipvsadm.service

(4) Configure load distribution policy

ipvsadm -C 					#Clear the original policy
ipvsadm -A -t 12.0.0.1:80 -s rr			#Create the virtual server with round robin scheduling
ipvsadm -a -t 12.0.0.1:80 -r 192.168.78.22:80 -m	#Add node 1 in NAT mode
ipvsadm -a -t 12.0.0.1:80 -r 192.168.78.33:80 -m	#Add node 2 in NAT mode
ipvsadm						#With no options, lists the current rules (equivalent to ipvsadm -L)

ipvsadm -ln					#Check node status; Masq (masquerade) indicates NAT mode
ipvsadm-save > /etc/sysconfig/ipvsadm						#Save the policy
systemctl start ipvsadm.service
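
A healthy configuration should then list something like this (representative output; the counters will differ):

IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  12.0.0.1:80 rr
  -> 192.168.78.22:80             Masq    1      0          0
  -> 192.168.78.33:80             Masq    1      0          0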

4. Test the effect

From a client with IP address 12.0.0.12, use a browser to access http://12.0.0.1/ and refresh repeatedly to observe the load balancing effect. Leave a longer interval between refreshes; rapid refreshes may reuse the same browser connection and keep hitting the same node.
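
Alternatively, curl makes the round robin alternation easy to see, since each invocation opens a new connection (run from the client):

for i in 1 2 3 4; do curl -s http://12.0.0.1/; done
#Expected to alternate between the two test pages:
#this is bccp web
#this is accp web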
