Reference: Kubernetes Full-Stack Architect (binary high-availability k8s cluster deployment) - learning notes
1, Binary high-availability basic configuration
For the k8s high-availability architecture analysis, high-availability Kubernetes cluster planning, and static IP setup, please refer to the previous article.
1. Configure the hosts file on all nodes (send key input to all sessions)
vim /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
10.218.22.212 k8s-master01
10.218.22.234 k8s-master02
10.218.22.252 k8s-master03
10.218.22.218 k8s-node01
10.218.22.225 k8s-node02
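Optionally, a quick sanity check (not part of the original steps) that every hostname resolves via /etc/hosts and is reachable, assuming ICMP is allowed between the nodes:
for i in k8s-master01 k8s-master02 k8s-master03 k8s-node01 k8s-node02; do
  # resolve each name and send a single ping; print OK or FAILED per node
  ping -c 1 -W 1 $i > /dev/null && echo "$i OK" || echo "$i FAILED"
done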
2. Configure the CentOS 7 yum sources as follows:
curl -o /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-7.repo
yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
sed -i -e '/mirrors.cloud.aliyuncs.com/d' -e '/mirrors.aliyuncs.com/d' /etc/yum.repos.d/CentOS-Base.repo
3. Install necessary tools
yum install wget jq psmisc vim net-tools telnet yum-utils device-mapper-persistent-data lvm2 git -y
4. Disable firewalld, dnsmasq, and SELinux on all nodes (NetworkManager must also be disabled on CentOS 7, but not on CentOS 8)
systemctl disable --now firewalld
systemctl disable --now dnsmasq
systemctl disable --now NetworkManager
setenforce 0
sed -i 's#SELINUX=enforcing#SELINUX=disabled#g' /etc/sysconfig/selinux
sed -i 's#SELINUX=enforcing#SELINUX=disabled#g' /etc/selinux/config
Check the SELinux status (it will report Permissive now, and Disabled after a reboot)
getenforce
Disable the swap partition on all nodes and comment out the swap entry in /etc/fstab
swapoff -a && sysctl -w vm.swappiness=0
sed -ri '/^[^#]*swap/s@^@#@' /etc/fstab
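Optionally verify that swap is now off (a quick check, not in the original notes):
free -h | grep -i swap   # the Swap line should show 0B total and 0B used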
5. Synchronize time on all nodes (company machines usually already have time synchronization configured)
Install ntpdate
rpm -ivh http://mirrors.wlnmp.com/centos/wlnmp-release-centos.noarch.rpm
yum install ntpdate -y
Synchronize time on all nodes. The time synchronization configuration is as follows:
ln -sf /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
echo 'Asia/Shanghai' > /etc/timezone
ntpdate time2.aliyun.com
Check the time
date
Add a crontab entry
crontab -e
# Add the following
*/5 * * * * /usr/sbin/ntpdate time2.aliyun.com
6. Configure limits on all nodes:
ulimit -SHn 65535
vim /etc/security/limits.conf
# Add the following at the end
* soft nofile 655360
* hard nofile 131072
* soft nproc 655350
* hard nproc 655350
* soft memlock unlimited
* hard memlock unlimited
7. Configure passwordless login from the Master01 node to the other nodes (cancel sending key input to all sessions). The configuration files and certificates generated during installation are created on Master01, and cluster management is also performed from Master01. On Alibaba Cloud or AWS, a separate kubectl server is required. The key configuration is as follows:
ssh-keygen -t rsa
Configure passwordless login from Master01 to the other nodes
for i in k8s-master01 k8s-master02 k8s-master03 k8s-node01 k8s-node02;do ssh-copy-id -i .ssh/id_rsa.pub $i;done
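An optional sanity check (not part of the original steps) that passwordless login now works from Master01:
for i in k8s-master01 k8s-master02 k8s-master03 k8s-node01 k8s-node02; do
  # BatchMode fails instead of prompting, so each line should print the remote hostname
  ssh -o BatchMode=yes $i hostname
done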
8. Install basic tools on all nodes (send key input to all sessions)
yum install wget jq psmisc vim net-tools yum-utils device-mapper-persistent-data lvm2 git -y
9. Download the installation files on Master01 (cancel sending key input to all sessions)
cd /root/ ; git clone https://github.com/dotbalo/k8s-ha-install.git
10. Upgrade the system on all nodes (send key input to all sessions) and restart. The kernel is not upgraded here; it will be upgraded separately in the next section:
yum update -y --exclude=kernel* && reboot   # CentOS 7 needs to be upgraded; CentOS 8 can upgrade the system on demand
11. Binary system and kernel upgrade
CentOS 7 needs the kernel upgraded to 4.18+; here we upgrade to 4.19
Download the kernel on the master01 node (cancel sending key input to all sessions):
cd /root
wget http://193.49.22.109/elrepo/kernel/el7/x86_64/RPMS/kernel-ml-devel-4.19.12-1.el7.elrepo.x86_64.rpm
wget http://193.49.22.109/elrepo/kernel/el7/x86_64/RPMS/kernel-ml-4.19.12-1.el7.elrepo.x86_64.rpm
Transfer from master01 node to other nodes:
for i in k8s-master02 k8s-master03 k8s-node01 k8s-node02;do scp kernel-ml-4.19.12-1.el7.elrepo.x86_64.rpm kernel-ml-devel-4.19.12-1.el7.elrepo.x86_64.rpm $i:/root/ ; done
Install kernel on all nodes
cd /root && yum localinstall -y kernel-ml*
All nodes change the kernel boot order
grub2-set-default 0 && grub2-mkconfig -o /etc/grub2.cfg
grubby --args="user_namespace.enable=1" --update-kernel="$(grubby --default-kernel)"
Check if the default kernel is 4.19
grubby --default-kernel
Restart all nodes and check whether the kernel is 4.19
reboot
uname -a
12. Install ipvsadm on all nodes (load balancing):
yum install ipvsadm ipset sysstat conntrack libseccomp -y
Configure the ipvs modules on all nodes. In kernel 4.19+ the nf_conntrack_ipv4 module has been renamed to nf_conntrack; on kernels below 4.18, use nf_conntrack_ipv4 instead:
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack
vim /etc/modules-load.d/ipvs.conf
# Add the following
ip_vs
ip_vs_lc
ip_vs_wlc
ip_vs_rr
ip_vs_wrr
ip_vs_lblc
ip_vs_lblcr
ip_vs_dh
ip_vs_sh
ip_vs_fo
ip_vs_nq
ip_vs_sed
ip_vs_ftp
ip_vs_sh
nf_conntrack
ip_tables
ip_set
xt_set
ipt_set
ipt_rpfilter
ipt_REJECT
ipip
Then execute
systemctl enable --now systemd-modules-load.service
Check whether the modules are loaded (they should remain loaded after a reboot):
lsmod | grep -e ip_vs -e nf_conntrack
13. Enable the kernel parameters required by the k8s cluster and configure them on all nodes:
cat <<EOF > /etc/sysctl.d/k8s.conf
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
fs.may_detach_mounts = 1
vm.overcommit_memory=1
vm.panic_on_oom=0
fs.inotify.max_user_watches=89100
fs.file-max=52706963
fs.nr_open=52706963
net.netfilter.nf_conntrack_max=2310720
net.ipv4.tcp_keepalive_time = 600
net.ipv4.tcp_keepalive_probes = 3
net.ipv4.tcp_keepalive_intvl = 15
net.ipv4.tcp_max_tw_buckets = 36000
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_max_orphans = 327680
net.ipv4.tcp_orphan_retries = 3
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_max_syn_backlog = 16384
net.ipv4.ip_conntrack_max = 65536
net.ipv4.tcp_max_syn_backlog = 16384
net.ipv4.tcp_timestamps = 0
net.core.somaxconn = 16384
EOF
sysctl --system
If net.ipv4.ip_forward is not enabled, cross-host communication will not work.
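To confirm the forwarding setting actually took effect after sysctl --system, a quick check can be run on any node (a minimal verification sketch, not part of the original steps):
sysctl net.ipv4.ip_forward          # expected: net.ipv4.ip_forward = 1
cat /proc/sys/net/ipv4/ip_forward   # expected: 1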
After configuring the kernel parameters on all nodes, reboot the servers and make sure the modules are still loaded after the restart
reboot
lsmod | grep --color=auto -e ip_vs -e nf_conntrack
2, Binary basic component installation
1. Docker installation
Install Docker CE 19.03 on all nodes (the officially recommended version)
yum install docker-ce-19.03.* -y
Since the newer kubelet recommends systemd, change Docker's CgroupDriver to systemd
mkdir /etc/docker
cat > /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
All nodes are set to start Docker automatically:
systemctl daemon-reload && systemctl enable --now docker
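To confirm Docker picked up the systemd cgroup driver, a quick check like the following can be used (an optional verification; the exact output format may vary slightly by Docker version):
docker info 2>/dev/null | grep -i "cgroup driver"   # expected: Cgroup Driver: systemd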
2. K8s and etcd installation
(1) Download the Kubernetes installation package on Master01 (cancel sending key input to all sessions)
Visit the official website for the latest version: https://github.com/kubernetes/kubernetes
Go to the CHANGELOG directory; you can see the latest version is 1.22. Click Server Binaries to get the download link. If a newer version has been released, download the latest one.
wget https://dl.k8s.io/v1.22.0-beta.1/kubernetes-server-linux-amd64.tar.gz
(2) Download the etcd installation package (3.4.13 is the official recommended version and has been verified)
wget https://github.com/etcd-io/etcd/releases/download/v3.4.13/etcd-v3.4.13-linux-amd64.tar.gz
(3) Extract the Kubernetes installation file. In fact, the binary installation is essentially complete once the files are extracted
tar -xf kubernetes-server-linux-amd64.tar.gz --strip-components=3 -C /usr/local/bin kubernetes/server/bin/kube{let,ctl,-apiserver,-controller-manager,-scheduler,-proxy}
(4) Extract the etcd installation file
tar -zxvf etcd-v3.4.13-linux-amd64.tar.gz --strip-components=1 -C /usr/local/bin etcd-v3.4.13-linux-amd64/etcd{,ctl}
(5) Check the versions
# kubelet --version
Kubernetes v1.22.0-beta.1
# etcdctl version
etcdctl version: 3.4.13
API version: 3.4
(6) Send components to other nodes
MasterNodes='k8s-master02 k8s-master03'
WorkNodes='k8s-node01 k8s-node02'
for NODE in $MasterNodes; do
  echo $NODE
  scp /usr/local/bin/kube{let,ctl,-apiserver,-controller-manager,-scheduler,-proxy} $NODE:/usr/local/bin/
  scp /usr/local/bin/etcd* $NODE:/usr/local/bin/
done
for NODE in $WorkNodes; do
  scp /usr/local/bin/kube{let,-proxy} $NODE:/usr/local/bin/
done
(7) Create the /opt/cni/bin directory on all nodes (send key input to all sessions)
mkdir -p /opt/cni/bin
View branch
cd k8s-ha-install/
git branch -a
* master
  remotes/origin/HEAD -> origin/master
  remotes/origin/manual-installation
  remotes/origin/manual-installation-v1.16.x
  remotes/origin/manual-installation-v1.17.x
  remotes/origin/manual-installation-v1.18.x
  remotes/origin/manual-installation-v1.19.x
  remotes/origin/manual-installation-v1.20.x
  remotes/origin/manual-installation-v1.20.x-csi-hostpath
  remotes/origin/manual-installation-v1.21.x
  remotes/origin/master
On Master01, switch to the manual-installation-v1.20.x branch (for other versions, switch to the corresponding branch) (cancel sending key input to all sessions)
git checkout manual-installation-v1.20.x
3, Detailed explanation of certificate generation for the binary installation
This is the most critical part of the binary installation. If any single step is wrong, nothing afterwards will work, so pay close attention to the correctness of every step.
1. Download the certificate generation tools on Master01
wget "https://pkg.cfssl.org/R1.2/cfssl_linux-amd64" -O /usr/local/bin/cfssl
wget "https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64" -O /usr/local/bin/cfssljson
chmod +x /usr/local/bin/cfssl /usr/local/bin/cfssljson
2. Create the etcd certificate directory on all master nodes (send key input to all sessions, excluding the node sessions)
mkdir /etc/etcd/ssl -p
3. Create kubernetes related directories for all nodes (send key input to all sessions)
mkdir -p /etc/kubernetes/pki
4. Generate the etcd certificates on the Master01 node (cancel sending key input to all sessions)
(1) CSR files for generating the certificates: a CSR is a certificate signing request file, configured with the domain names, organization, and organizational unit information
# This directory contains the csr files we need to generate the certificates
cd /root/k8s-ha-install/pki
# View them
[root@k8s-master01 pki]# ls
admin-csr.json      ca-config.json  etcd-ca-csr.json  front-proxy-ca-csr.json      kubelet-csr.json     manager-csr.json
apiserver-csr.json  ca-csr.json     etcd-csr.json     front-proxy-client-csr.json  kube-proxy-csr.json  scheduler-csr.json
# Generate the etcd CA certificate and the CA certificate's key
cfssl gencert -initca etcd-ca-csr.json | cfssljson -bare /etc/etcd/ssl/etcd-ca
2021/07/23 13:58:44 [INFO] generating a new CA key and certificate from CSR
2021/07/23 13:58:44 [INFO] generate received request
2021/07/23 13:58:44 [INFO] received CSR
2021/07/23 13:58:44 [INFO] generating key: rsa-2048
2021/07/23 13:58:44 [INFO] encoded CSR
2021/07/23 13:58:44 [INFO] signed certificate with serial number 65355458767171380149641516060181865353335743374
(2) View the generated keys
ls /etc/etcd/ssl/
etcd-ca.csr  etcd-ca-key.pem  etcd-ca.pem
(3) Issue certificate
cfssl gencert \
  -ca=/etc/etcd/ssl/etcd-ca.pem \
  -ca-key=/etc/etcd/ssl/etcd-ca-key.pem \
  -config=ca-config.json \
  -hostname=127.0.0.1,k8s-master01,k8s-master02,k8s-master03,10.218.22.212,10.218.22.234,10.218.22.252 \
  -profile=kubernetes \
  etcd-csr.json | cfssljson -bare /etc/etcd/ssl/etcd
(4) View generated certificate
ls /etc/etcd/ssl/
# Generated content
etcd-ca.csr  etcd-ca-key.pem  etcd-ca.pem  etcd.csr  etcd-key.pem  etcd.pem
(5) Copy the certificates to the other nodes
MasterNodes='k8s-master02 k8s-master03'
WorkNodes='k8s-node01 k8s-node02'
for NODE in $MasterNodes; do
  ssh $NODE "mkdir -p /etc/etcd/ssl"
  for FILE in etcd-ca-key.pem etcd-ca.pem etcd-key.pem etcd.pem; do
    scp /etc/etcd/ssl/${FILE} $NODE:/etc/etcd/ssl/${FILE}
  done
done
5. k8s component certificates
(1) Generate the Kubernetes CA certificate on Master01
cd /root/k8s-ha-install/pki
cfssl gencert -initca ca-csr.json | cfssljson -bare /etc/kubernetes/pki/ca
(2) View the generated keys
ls /etc/kubernetes/pki
ca.csr  ca-key.pem  ca.pem
(3) Generate the apiserver certificate
10.96.0.1 is an address in the k8s service network segment; if you need to change the service segment, change 10.96.0.1 accordingly. 10.218.3.205 is the high-availability VIP used here; if the cluster is not highly available, use the Master01 IP instead.
cfssl gencert \
  -ca=/etc/kubernetes/pki/ca.pem \
  -ca-key=/etc/kubernetes/pki/ca-key.pem \
  -config=ca-config.json \
  -hostname=10.96.0.1,10.218.3.205,127.0.0.1,kubernetes,kubernetes.default,kubernetes.default.svc,kubernetes.default.svc.cluster,kubernetes.default.svc.cluster.local,10.218.22.212,10.218.22.234,10.218.22.252 \
  -profile=kubernetes \
  apiserver-csr.json | cfssljson -bare /etc/kubernetes/pki/apiserver
(4) View generated certificates
ls /etc/kubernetes/pki
apiserver.csr  apiserver-key.pem  apiserver.pem  ca.csr  ca-key.pem  ca.pem
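If you want to confirm that the VIP, the service IP, and the master IPs were all written into the apiserver certificate, the SANs can be inspected with openssl (an optional check, not part of the original steps):
openssl x509 -in /etc/kubernetes/pki/apiserver.pem -noout -text | grep -A1 "Subject Alternative Name"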
(5) Generate the aggregation certificates for the apiserver, used for the requestheader-client-xxx and requestheader-allowed-xxx settings (the aggregator)
cfssl gencert -initca front-proxy-ca-csr.json | cfssljson -bare /etc/kubernetes/pki/front-proxy-ca
cfssl gencert -ca=/etc/kubernetes/pki/front-proxy-ca.pem -ca-key=/etc/kubernetes/pki/front-proxy-ca-key.pem -config=ca-config.json -profile=kubernetes front-proxy-client-csr.json | cfssljson -bare /etc/kubernetes/pki/front-proxy-client
(6) Generate the controller-manager certificate
cfssl gencert \
  -ca=/etc/kubernetes/pki/ca.pem \
  -ca-key=/etc/kubernetes/pki/ca-key.pem \
  -config=ca-config.json \
  -profile=kubernetes \
  manager-csr.json | cfssljson -bare /etc/kubernetes/pki/controller-manager

# Note: if this is not a highly available cluster, change 10.218.3.205:6443 to the Master01 address; the apiserver port defaults to 6443
# set-cluster: set a cluster entry
kubectl config set-cluster kubernetes \
  --certificate-authority=/etc/kubernetes/pki/ca.pem \
  --embed-certs=true \
  --server=https://10.218.3.205:6443 \
  --kubeconfig=/etc/kubernetes/controller-manager.kubeconfig

# set-credentials: set a user entry
kubectl config set-credentials system:kube-controller-manager \
  --client-certificate=/etc/kubernetes/pki/controller-manager.pem \
  --client-key=/etc/kubernetes/pki/controller-manager-key.pem \
  --embed-certs=true \
  --kubeconfig=/etc/kubernetes/controller-manager.kubeconfig

# set-context: set a context entry
kubectl config set-context system:kube-controller-manager@kubernetes \
  --cluster=kubernetes \
  --user=system:kube-controller-manager \
  --kubeconfig=/etc/kubernetes/controller-manager.kubeconfig

# use-context: use this context as the default
kubectl config use-context system:kube-controller-manager@kubernetes \
  --kubeconfig=/etc/kubernetes/controller-manager.kubeconfig
(7) Generate the scheduler's certificate
cfssl gencert \
  -ca=/etc/kubernetes/pki/ca.pem \
  -ca-key=/etc/kubernetes/pki/ca-key.pem \
  -config=ca-config.json \
  -profile=kubernetes \
  scheduler-csr.json | cfssljson -bare /etc/kubernetes/pki/scheduler

# Note: if this is not a highly available cluster, change 10.218.3.205:6443 to the Master01 address; the apiserver port defaults to 6443
kubectl config set-cluster kubernetes \
  --certificate-authority=/etc/kubernetes/pki/ca.pem \
  --embed-certs=true \
  --server=https://10.218.3.205:6443 \
  --kubeconfig=/etc/kubernetes/scheduler.kubeconfig

kubectl config set-credentials system:kube-scheduler \
  --client-certificate=/etc/kubernetes/pki/scheduler.pem \
  --client-key=/etc/kubernetes/pki/scheduler-key.pem \
  --embed-certs=true \
  --kubeconfig=/etc/kubernetes/scheduler.kubeconfig

kubectl config set-context system:kube-scheduler@kubernetes \
  --cluster=kubernetes \
  --user=system:kube-scheduler \
  --kubeconfig=/etc/kubernetes/scheduler.kubeconfig

kubectl config use-context system:kube-scheduler@kubernetes \
  --kubeconfig=/etc/kubernetes/scheduler.kubeconfig
(8) Generate admin certificate
cfssl gencert \
  -ca=/etc/kubernetes/pki/ca.pem \
  -ca-key=/etc/kubernetes/pki/ca-key.pem \
  -config=ca-config.json \
  -profile=kubernetes \
  admin-csr.json | cfssljson -bare /etc/kubernetes/pki/admin

# Note: if this is not a highly available cluster, change 10.218.3.205:6443 to the Master01 address; the apiserver port defaults to 6443
kubectl config set-cluster kubernetes --certificate-authority=/etc/kubernetes/pki/ca.pem --embed-certs=true --server=https://10.218.3.205:6443 --kubeconfig=/etc/kubernetes/admin.kubeconfig

kubectl config set-credentials kubernetes-admin --client-certificate=/etc/kubernetes/pki/admin.pem --client-key=/etc/kubernetes/pki/admin-key.pem --embed-certs=true --kubeconfig=/etc/kubernetes/admin.kubeconfig

kubectl config set-context kubernetes-admin@kubernetes --cluster=kubernetes --user=kubernetes-admin --kubeconfig=/etc/kubernetes/admin.kubeconfig

kubectl config use-context kubernetes-admin@kubernetes --kubeconfig=/etc/kubernetes/admin.kubeconfig
6. How the certificates are differentiated
We used the same commands to generate admin.kubeconfig, scheduler.kubeconfig, and controller-manager.kubeconfig. How are they distinguished?
View admin-csr.json
cat admin-csr.json
{
  "CN": "admin",    # the user name for this certificate
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Beijing",
      "L": "Beijing",
      "O": "system:masters",    # the group that admin belongs to
      "OU": "Kubernetes-manual"
    }
  ]
}
The certificate we generated defines a user named admin that belongs to the system:masters group. When k8s is installed, a ClusterRole (a cluster-wide role, essentially a bundle of permissions) with the highest administrative authority over the cluster is created, together with a ClusterRoleBinding that binds the system:masters group to that ClusterRole, so every user in the group gets full cluster permissions.
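An optional way to see this distinction directly is to inspect the subject of each client certificate (CN is the user, O is the group), and, once the cluster is running, to look at the default binding that grants system:masters full access. These commands are a verification sketch, not part of the original steps:
openssl x509 -in /etc/kubernetes/pki/admin.pem -noout -subject                # O = system:masters, CN = admin
openssl x509 -in /etc/kubernetes/pki/controller-manager.pem -noout -subject  # a different CN, so a different user
kubectl get clusterrolebinding cluster-admin -o yaml                         # binds the cluster-admin ClusterRole to the system:masters group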
7. Create the ServiceAccount key -> secret
(1) ServiceAccount is an authentication mechanism in k8s. When a ServiceAccount is created, a Secret bound to it is created as well, and that Secret holds a generated token
openssl genrsa -out /etc/kubernetes/pki/sa.key 2048
openssl rsa -in /etc/kubernetes/pki/sa.key -pubout -out /etc/kubernetes/pki/sa.pub
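These two files are not certificates but an RSA key pair used to sign and verify ServiceAccount tokens. As a preview, they will be referenced later when the control-plane components are configured, roughly as follows (flag names are upstream Kubernetes flags; the paths match this guide):
# kube-apiserver:
#   --service-account-key-file=/etc/kubernetes/pki/sa.pub
#   --service-account-signing-key-file=/etc/kubernetes/pki/sa.key
# kube-controller-manager:
#   --service-account-private-key-file=/etc/kubernetes/pki/sa.key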
(2) Send the certificates to the other master nodes
for NODE in k8s-master02 k8s-master03; do
  for FILE in $(ls /etc/kubernetes/pki | grep -v etcd); do
    scp /etc/kubernetes/pki/${FILE} $NODE:/etc/kubernetes/pki/${FILE}
  done
  for FILE in admin.kubeconfig controller-manager.kubeconfig scheduler.kubeconfig; do
    scp /etc/kubernetes/${FILE} $NODE:/etc/kubernetes/${FILE}
  done
done
(3) View certificate files (23 files in total)
[root@k8s-master01 pki]# ls /etc/kubernetes/pki/
admin.csr      apiserver-key.pem  ca.pem                      front-proxy-ca.csr      front-proxy-client-key.pem  scheduler.csr
admin-key.pem  apiserver.pem      controller-manager.csr      front-proxy-ca-key.pem  front-proxy-client.pem      scheduler-key.pem
admin.pem      ca.csr             controller-manager-key.pem  front-proxy-ca.pem      sa.key                      scheduler.pem
apiserver.csr  ca-key.pem         controller-manager.pem      front-proxy-client.csr  sa.pub
[root@k8s-master01 pki]# ls /etc/kubernetes/pki/ | wc -l
23
(4) View certificate expiration time (expiration time: 100 years)
cat ca-config.json
{
  "signing": {
    "default": {
      "expiry": "876000h"
    },
    "profiles": {
      "kubernetes": {
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ],
        "expiry": "876000h"
      }
    }
  }
}
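876000h is roughly 100 years. To verify the actual dates on the issued certificates, openssl can be used (an optional check, not part of the original steps):
openssl x509 -in /etc/kubernetes/pki/ca.pem -noout -dates         # notBefore / notAfter of the CA
openssl x509 -in /etc/kubernetes/pki/apiserver.pem -noout -dates  # notBefore / notAfter of the apiserver certificate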