Kubeadm builds a K8s high-availability cluster (I)
1, Cluster planning
① Active and standby servers
Four virtual machines are prepared. Three of them form the K8s high-availability control plane + etcd cluster and run keepalived + nginx to provide a highly available K8s apiserver.
k8s-01   192.168.0.108   master + etcd + keepalived + nginx
k8s-02   192.168.0.109   master + etcd + keepalived + nginx
k8s-03   192.168.0.111   master + etcd + keepalived + nginx
k8s-04   192.168.0.112   work-node
VIP      192.168.0.110   keepalived + nginx highly available virtual IP
System: CentOS Linux release 7.9.2009 (Core)
Note: switch CentOS 7 to the Alibaba Cloud yum source
wget -O /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-7.repo
yum clean all
yum makecache
②,environment.sh script
[root@k8s01 ssl]# cat /data/etcd/ssl/environment.sh
#!/usr/bin/bash

# Encryption key required to generate EncryptionConfig
export ENCRYPTION_KEY=$(head -c 32 /dev/urandom | base64)

# IP array of the cluster machines
export NODE_IPS=(192.168.0.108 192.168.0.109 192.168.0.111)

# Host name array corresponding to each cluster IP
export NODE_NAMES=(k8s01 k8s02 k8s03)

# etcd cluster service address list
export ETCD_ENDPOINTS="https://192.168.0.108:2379,https://192.168.0.109:2379,https://192.168.0.111:2379"

# etcd IPs and ports used for inter-cluster communication
export ETCD_NODES="k8s01=https://192.168.0.108:2380,k8s02=https://192.168.0.109:2380,k8s03=https://192.168.0.111:2380"

# Reverse proxy (kube-nginx) address and port for kube-apiserver
export KUBE_APISERVER="https://192.168.0.110:8443"

# Name of the network interface used for inter-node communication
export IFACE="ens33"

# etcd data directory
export ETCD_DATA_DIR="/data/k8s/etcd/data"

# etcd WAL directory; ideally an SSD partition or a disk partition different from ETCD_DATA_DIR
export ETCD_WAL_DIR="/data/k8s/etcd/wal"

# Data directory for the k8s components
export K8S_DIR="/data/k8s/k8s"

## Use either DOCKER_DIR or CONTAINERD_DIR
# docker data directory
export DOCKER_DIR="/data/k8s/docker"

# containerd data directory
export CONTAINERD_DIR="/data/k8s/containerd"

## The following parameters generally do not need to be modified

# Token used by TLS Bootstrapping; can be generated with: head -c 16 /dev/urandom | od -An -t x | tr -d ' '
BOOTSTRAP_TOKEN="41f7e4ba8b7be874fcff18bf5cf41a7c"

# It is best to use currently unused network segments for the service and Pod segments
# The service segment is unreachable before deployment and reachable within the cluster afterwards (guaranteed by kube-proxy)
SERVICE_CIDR="10.254.0.0/16"

# Pod segment; a /16 block is recommended. Unreachable before deployment, reachable within the cluster afterwards (guaranteed by flanneld)
CLUSTER_CIDR="172.30.0.0/16"

# Service NodePort range
export NODE_PORT_RANGE="30000-32767"

# kubernetes service IP (usually the first IP in SERVICE_CIDR)
export CLUSTER_KUBERNETES_SVC_IP="10.254.0.1"

# Cluster DNS service IP (preallocated from SERVICE_CIDR)
export CLUSTER_DNS_SVC_IP="10.254.0.2"

# Cluster DNS domain name (without trailing dot)
export CLUSTER_DNS_DOMAIN="cluster.local"

# Add the binary directory /opt/k8s/bin to PATH
export PATH=/opt/k8s/bin:$PATH
③ . install docker on all nodes
yum -y install docker
Enable docker at boot and start it:
systemctl enable docker
systemctl start docker
2, Basic environmental preparation
① . set host name
The etcd and master clusters of this document are deployed on three machines.
[root@k8s01 ~]# hostnamectl set-hostname k8s01
If DNS does not resolve the host names, you also need to add the hostname-to-IP mappings to the /etc/hosts file on each machine:
cat >> /etc/hosts <<EOF
192.168.0.108 k8s01
192.168.0.109 k8s02
192.168.0.111 k8s03
192.168.0.112 k8s04
EOF
Exit and log in to the root account again. You can see that the host name takes effect.
② . add node trust relationship
Operate on k8s01
[root@k8s01 ~]# ssh-keygen -t rsa
[root@k8s01 ~]# ssh-copy-id root@k8s01
[root@k8s01 ~]# ssh-copy-id root@k8s02
[root@k8s01 ~]# ssh-copy-id root@k8s03
[root@k8s01 ~]# ssh-copy-id root@k8s04
③ . install dependent packages
yum install -y epel-release
yum install -y chrony conntrack ipvsadm ipset jq iptables curl sysstat libseccomp wget socat git
kube-proxy in this document uses ipvs mode, and ipvsadm is the management tool for ipvs (the required kernel modules can be loaded as sketched below);
Every machine in the etcd cluster needs time synchronization; chrony is used to synchronize the system time;
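Because kube-proxy will run in ipvs mode, the ip_vs kernel modules must be loadable. The original text does not include this step, so the following is only a sketch for CentOS 7 (on newer kernels nf_conntrack_ipv4 is replaced by nf_conntrack):

cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
# Kernel modules required by kube-proxy in ipvs mode
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF
chmod 755 /etc/sysconfig/modules/ipvs.modules
bash /etc/sysconfig/modules/ipvs.modules
# Verify the modules are loaded
lsmod | grep -e ip_vs -e nf_conntrack_ipv4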
④ Turn off the firewall
Close the firewall, clear the firewall rules, and set the default Forwarding Policy:
systemctl stop firewalld
systemctl disable firewalld
iptables -F && iptables -X && iptables -F -t nat && iptables -X -t nat
iptables -P FORWARD ACCEPT
⑤ , close swap
Close the swap partition, otherwise kubelet will fail to start (alternatively, you can set the kubelet startup parameter --fail-swap-on=false to disable the swap check):
swapoff -a
sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
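An optional check, not in the original steps, to confirm that swap is really disabled:

free -h      # the Swap line should show 0 used / 0 total
swapon -s    # should print nothing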
⑥ , close selinux
Close SELinux, otherwise kubelet may report a Permission denied error when mounting directories:
setenforce 0
sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
⑦ . optimize kernel parameters
cat > kubernetes.conf <<EOF
net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-ip6tables=1
net.ipv4.ip_forward=1
net.ipv4.tcp_tw_recycle=0
net.ipv4.neigh.default.gc_thresh1=1024
net.ipv4.neigh.default.gc_thresh2=2048
net.ipv4.neigh.default.gc_thresh3=4096
vm.swappiness=0
vm.overcommit_memory=1
vm.panic_on_oom=0
fs.inotify.max_user_instances=8192
fs.inotify.max_user_watches=1048576
fs.file-max=52706963
fs.nr_open=52706963
net.ipv6.conf.all.disable_ipv6=1
net.netfilter.nf_conntrack_max=2310720
EOF
cp kubernetes.conf /etc/sysctl.d/kubernetes.conf
sysctl -p /etc/sysctl.d/kubernetes.conf
Turn off tcp_tw_recycle, otherwise it conflicts with NAT, which may lead to service failure;
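If sysctl complains that the net.bridge.* or net.netfilter.* keys do not exist, the required kernel modules are probably not loaded yet. As an addition of mine (not in the original text), load them first and re-apply:

modprobe br_netfilter
modprobe nf_conntrack
sysctl -p /etc/sysctl.d/kubernetes.conf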
⑧ Set the time to update automatically
If the time zone is incorrect, set it with: timedatectl set-timezone Asia/Shanghai
systemctl enable chronyd
systemctl start chronyd
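To confirm time synchronization is actually working (an optional check I am adding):

chronyc sources -v    # lists the configured time sources and their reachability
timedatectl           # "NTP synchronized: yes" means the system clock is in sync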
3, Build a three node etcd cluster
etcd is a distributed service system developed by CoreOS that uses the Raft protocol as its consensus algorithm. As a service discovery system, it has the following characteristics:
Simple: installation and configuration are straightforward, and it provides an HTTP API for interaction
Secure: supports SSL certificate authentication
Fast: according to the official benchmark data, a single instance supports 2k+ reads per second
Reliable: uses the Raft algorithm to achieve availability and consistency of distributed data
etcd currently uses port 2379 by default for its HTTP API service and port 2380 for peer communication (both ports have been officially reserved for etcd by IANA).
Although etcd also supports single-node deployment, cluster deployment is recommended in production. The number of etcd nodes is usually 3, 5 or 7; etcd ensures that all nodes store the data and keeps it highly consistent and correct.
① Main functions of etcd
1. Basic key-value storage
2. Watch mechanism
3. Key expiration and renewal (lease) mechanism, used for monitoring and service discovery
4. Atomic CAS and CAD operations, used for distributed locks and leader election (a short etcdctl sketch of these operations follows this list)
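As an aside not in the original text, here is a minimal etcdctl (v3 API) sketch of the key-value, watch and lease functions. The key names are made up for illustration; when run against the TLS-enabled cluster built below, also add the --endpoints/--cacert/--cert/--key flags shown later in this section:

# Basic key-value storage
etcdctl put demo-key "hello"
etcdctl get demo-key
# Watch mechanism: blocks and prints every subsequent change to demo-key
etcdctl watch demo-key
# Lease (TTL): grant a 60s lease and attach a key to it; the key is removed when the lease expires
etcdctl lease grant 60
etcdctl put demo-ttl "temporary" --lease=<lease-ID-printed-by-the-previous-command>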
② Install cfssl on k8s01
Download certificate issuing tool
curl -s -L -o /usr/bin/cfssl https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
curl -s -L -o /usr/bin/cfssljson https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
curl -s -L -o /usr/bin/cfssl-certinfo https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
chmod +x /usr/bin/cfssl*
③ Download etcd binary files
Note: this step needs to be done on all three nodes (alternatively, download on k8s01 and copy the binaries to the other nodes as shown below)
wget https://github.com/etcd-io/etcd/releases/download/v3.4.18/etcd-v3.4.18-linux-amd64.tar.gz
tar zxf etcd-v3.4.18-linux-amd64.tar.gz
cd etcd-v3.4.18-linux-amd64
mv etcd etcdctl /usr/bin/
Copy the etcd and etcdctl binaries to the other two servers:
[root@k8s01 ~]# ls
anaconda-ks.cfg  etcd-v3.4.18-linux-amd64  etcd-v3.4.18-linux-amd64.tar.gz  kubernetes.conf
[root@k8s01 ~]# cd etcd-v3.4.18-linux-amd64
[root@k8s01 etcd-v3.4.18-linux-amd64]# ls
Documentation  etcd  etcdctl  README-etcdctl.md  README.md  READMEv2-etcdctl.md
[root@k8s01 etcd-v3.4.18-linux-amd64]# scp -r etcd etcdctl root@k8s02:/usr/bin/
etcd                                      100%   23MB 109.6MB/s   00:00
etcdctl                                   100%   17MB 120.9MB/s   00:00
[root@k8s01 etcd-v3.4.18-linux-amd64]# scp -r etcd etcdctl root@k8s03:/usr/bin/
etcd                                      100%   23MB 102.0MB/s   00:00
etcdctl                                   100%   17MB 133.9MB/s   00:00
④ Issue etcd certificate
ca certificate: a self-signed authoritative certificate used to sign the other certificates
server certificate: the etcd server certificate
client certificate: used by clients such as etcdctl
peer certificate: used for communication between etcd nodes
4.1 create directory
mkdir -p /data/etcd/ssl
cd /data/etcd/ssl
4.2 create CA profile
The CA configuration file is used to configure the usage scenario (profile) and specific parameters (usage, expiration time, server authentication, client authentication, encryption, etc.) of the root certificate:
vim ca-config.json
cat > ca-config.json <<EOF
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "kubernetes": {
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ],
        "expiry": "876000h"
      }
    }
  }
}
EOF
signing: indicates that the certificate can be used to sign other certificates (CA=TRUE in the generated ca.pem certificate);
server auth: indicates that the client can use the certificate to verify the certificate provided by the server;
client auth: indicates that the server can use the certificate to verify the certificate provided by the client;
"expiry": "876000h": the validity period of the certificate is set to 100 years;
4.3 create certificate signature request file
Create the certificate signing request file ca-csr.json:
cat > ca-csr.json <<EOF
{
  "CN": "kubernetes-ca",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "k8s",
      "OU": "devops"
    }
  ],
  "ca": {
    "expiry": "876000h"
  }
}
EOF
CN (Common Name): kube-apiserver extracts this field from the certificate as the requesting user name; browsers use this field to verify whether a website is legitimate;
O (Organization): kube-apiserver extracts this field from the certificate as the group the requesting user belongs to;
kube-apiserver uses the extracted User and Group as the identity for RBAC authorization;
Note:
The CN, C, ST, L, O, OU combinations of different certificate csr files must be different, otherwise a PEER'S CERTIFICATE HAS AN INVALID SIGNATURE error may occur;
When creating the csr files of subsequent certificates, the CN differs while C, ST, L, O, OU stay the same, which is enough to distinguish them;
4.4 generate CA certificate and private key
cfssl gencert -initca ca-csr.json | cfssljson -bare ca
ls ca*
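Optionally (my own addition), inspect the generated CA certificate with the cfssl-certinfo tool downloaded earlier, or with openssl:

cfssl-certinfo -cert ca.pem
openssl x509 -in ca.pem -noout -subject -dates   # check subject and validity period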
4.5 creating etcd certificate and private key
cat > etcd-csr.json <<EOF
{
  "CN": "etcd",
  "hosts": [
    "127.0.0.1",
    "192.168.0.108",
    "192.168.0.109",
    "192.168.0.111"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "k8s",
      "OU": "devops"
    }
  ]
}
EOF
hosts: the list of etcd node IPs authorized to use this certificate; all etcd cluster node IPs must be listed here;
Generate etcd certificate and private key:
cfssl gencert -ca=/data/etcd/ssl/ca.pem \
  -ca-key=/data/etcd/ssl/ca-key.pem \
  -config=/data/etcd/ssl/ca-config.json \
  -profile=kubernetes etcd-csr.json | cfssljson -bare etcd
ls etcd*pem
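Optionally (not part of the original steps), verify that the hosts list ended up in the certificate's Subject Alternative Names:

openssl x509 -in etcd.pem -noout -text | grep -A1 "Subject Alternative Name"
# The three etcd node IPs and 127.0.0.1 should appear as IP Address entries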
Distribute the generated certificate and private key to each etcd node:
export NODE_IPS=(192.168.0.108 192.168.0.109 192.168.0.111)
for node_ip in ${NODE_IPS[@]}
do
  echo ">>> ${node_ip}"
  ssh root@${node_ip} "mkdir -p /etc/kubernetes/pki/etcd"
  scp etcd*.pem root@${node_ip}:/etc/kubernetes/pki/etcd
  scp ca*.pem root@${node_ip}:/etc/kubernetes/pki/etcd
done
⑤ etcd systemd configuration template file
[root@k8s01 ssl]# source environment.sh
cat > etcd.service.template <<EOF
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
Documentation=https://github.com/coreos

[Service]
Type=notify
WorkingDirectory=/data/k8s/etcd/data
ExecStart=/usr/bin/etcd \\
  --data-dir=/data/k8s/etcd/data \\
  --wal-dir=/data/k8s/etcd/wal \\
  --name=##NODE_NAME## \\
  --cert-file=/etc/kubernetes/pki/etcd/etcd.pem \\
  --key-file=/etc/kubernetes/pki/etcd/etcd-key.pem \\
  --trusted-ca-file=/etc/kubernetes/pki/etcd/ca.pem \\
  --peer-cert-file=/etc/kubernetes/pki/etcd/etcd.pem \\
  --peer-key-file=/etc/kubernetes/pki/etcd/etcd-key.pem \\
  --peer-trusted-ca-file=/etc/kubernetes/pki/etcd/ca.pem \\
  --peer-client-cert-auth \\
  --client-cert-auth \\
  --listen-peer-urls=https://##NODE_IP##:2380 \\
  --initial-advertise-peer-urls=https://##NODE_IP##:2380 \\
  --listen-client-urls=https://##NODE_IP##:2379,http://127.0.0.1:2379 \\
  --advertise-client-urls=https://##NODE_IP##:2379 \\
  --initial-cluster-token=etcd-cluster-0 \\
  --initial-cluster=${ETCD_NODES} \\
  --initial-cluster-state=new \\
  --auto-compaction-mode=periodic \\
  --auto-compaction-retention=1 \\
  --max-request-bytes=33554432 \\
  --quota-backend-bytes=6442450944 \\
  --heartbeat-interval=250 \\
  --election-timeout=2000
Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF
WorkingDirectory, --data-dir: specify the working directory and data directory as ${ETCD_DATA_DIR}; this directory must be created before starting the service;
--wal-dir: specifies the WAL directory. To improve performance, it is usually placed on an SSD or on a disk different from --data-dir;
--name: specifies the node name. When --initial-cluster-state is new, the value of --name must be in the --initial-cluster list;
--cert-file, --key-file: the certificate and private key etcd uses when communicating with clients;
--trusted-ca-file: the CA certificate that signed the client certificates, used to verify client certificates;
--peer-cert-file, --peer-key-file: the certificate and private key etcd uses when communicating with peers;
--peer-trusted-ca-file: the CA certificate that signed the peer certificates, used to verify peer certificates;
Replace the variables in the template file and create a systemd unit file for each node:
cd /data/etcd/ssl
source /data/etcd/ssl/environment.sh
for (( i=0; i < 3; i++ ))
do
  sed -e "s/##NODE_NAME##/${NODE_NAMES[i]}/" -e "s/##NODE_IP##/${NODE_IPS[i]}/" etcd.service.template > etcd-${NODE_IPS[i]}.service
done
ls *.service
NODE_NAMES and NODE_IPS are bash arrays of the same length, holding the node names and the corresponding IPs respectively;
Distribute the generated systemd unit file:
cd /data/etcd/ssl
source /data/etcd/ssl/environment.sh
for node_ip in ${NODE_IPS[@]}
do
  echo ">>> ${node_ip}"
  scp etcd-${node_ip}.service root@${node_ip}:/etc/systemd/system/etcd.service
done
Start etcd service
cd /data/etcd/ssl
source /data/etcd/ssl/environment.sh
for node_ip in ${NODE_IPS[@]}
do
  echo ">>> ${node_ip}"
  ssh root@${node_ip} "mkdir -p ${ETCD_DATA_DIR} ${ETCD_WAL_DIR}"
  ssh root@${node_ip} "systemctl daemon-reload && systemctl enable etcd && systemctl restart etcd " &
done
Check startup results:
cd /data/etcd/ssl
source /data/etcd/ssl/environment.sh
for node_ip in ${NODE_IPS[@]}
do
  echo ">>> ${node_ip}"
  ssh root@${node_ip} "systemctl status etcd|grep Active"
done
Ensure that the status is active (running), otherwise check the log and confirm the reason:
journalctl -u etcd
Check the etcd service
cd /data/etcd/ssl
source /data/etcd/ssl/environment.sh
for node_ip in ${NODE_IPS[@]}
do
  echo ">>> ${node_ip}"
  /usr/bin/etcdctl \
    --endpoints=https://${node_ip}:2379 \
    --cacert=/etc/kubernetes/pki/etcd/ca.pem \
    --cert=/etc/kubernetes/pki/etcd/etcd.pem \
    --key=/etc/kubernetes/pki/etcd/etcd-key.pem endpoint health
done
>>> 192.168.0.108
https://192.168.0.108:2379 is healthy: successfully committed proposal: took = 20.123255ms
>>> 192.168.0.109
https://192.168.0.109:2379 is healthy: successfully committed proposal: took = 9.104908ms
>>> 192.168.0.111
https://192.168.0.111:2379 is healthy: successfully committed proposal: took = 10.23718ms
View current leader
cd /data/etcd/ssl
source /data/etcd/ssl/environment.sh
/usr/bin/etcdctl \
  -w table --cacert=/etc/kubernetes/pki/etcd/ca.pem \
  --cert=/etc/kubernetes/pki/etcd/etcd.pem \
  --key=/etc/kubernetes/pki/etcd/etcd-key.pem \
  --endpoints=${ETCD_ENDPOINTS} endpoint status
Output results
+----------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
|          ENDPOINT          |        ID        | VERSION | DB SIZE | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS |
+----------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
| https://192.168.0.108:2379 | dabc7582f4d7fb46 |  3.4.18 |   20 kB |      true |      false |         2 |          8 |                  8 |        |
| https://192.168.0.109:2379 | fb3f424f2c5de754 |  3.4.18 |   16 kB |     false |      false |         2 |          8 |                  8 |        |
| https://192.168.0.111:2379 | 92b71ce0c4151960 |  3.4.18 |   20 kB |     false |      false |         2 |          8 |                  8 |        |
+----------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
[root@k8s01 ssl]#
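Another optional check of mine: list the cluster members and confirm all three nodes have joined as started (non-learner) members:

cd /data/etcd/ssl
source /data/etcd/ssl/environment.sh
/usr/bin/etcdctl \
  -w table --cacert=/etc/kubernetes/pki/etcd/ca.pem \
  --cert=/etc/kubernetes/pki/etcd/etcd.pem \
  --key=/etc/kubernetes/pki/etcd/etcd-key.pem \
  --endpoints=${ETCD_ENDPOINTS} member list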
4, Install keepalived
① Execute on all three masters
yum -y install keepalived
configuration file
k8s01
! Configuration File for keepalived
global_defs {
   router_id master01
}
vrrp_instance VI_1 {
    state MASTER          # primary node
    interface ens33       # network interface name
    virtual_router_id 50
    priority 100          # weight
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 123456789
    }
    virtual_ipaddress {
        192.168.0.110     # VIP
    }
}
k8s02
! Configuration File for keepalived
global_defs {
   router_id master01
}
vrrp_instance VI_1 {
    state BACKUP
    interface ens33
    virtual_router_id 50
    priority 90
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 123456789
    }
    virtual_ipaddress {
        192.168.0.110     # VIP
    }
}
k8s03
! Configuration File for keepalived
global_defs {
   router_id master01
}
vrrp_instance VI_1 {
    state BACKUP
    interface ens32
    virtual_router_id 50
    priority 80
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 123456789
    }
    virtual_ipaddress {
        192.168.0.110     # VIP
    }
}
② Enable the service at boot and start it
systemctl enable keepalived
systemctl start keepalived
Verify whether the VIP is online
[root@k8s01 ssl]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:50:56:31:30:b1 brd ff:ff:ff:ff:ff:ff
    inet 192.168.0.108/24 brd 192.168.0.255 scope global noprefixroute dynamic ens33
       valid_lft 5374sec preferred_lft 5374sec
    inet 192.168.0.110/32 scope global ens33
       valid_lft forever preferred_lft forever
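An optional failover test (my own addition, not in the original steps): stop keepalived on k8s01 and confirm the VIP moves to the next-highest-priority node (k8s02), then start it again so the VIP moves back:

# On k8s01
systemctl stop keepalived
# On k8s02: the VIP 192.168.0.110 should now appear on its interface
ip addr show ens33 | grep 192.168.0.110
# Back on k8s01: restore the service; with priority 100 the VIP preempts back
systemctl start keepalived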
5, Install kubeadm, kubelet and kubectl
Install kubeadm and kubelet on all nodes. kubectl is optional: you can install it on all machines, or only on master1 (k8s01).
1. Add domestic yum source
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
2. Install the specified version
yum install -y kubelet-1.19.2 kubeadm-1.19.2 kubectl-1.19.2
3. On all nodes where kubelet is installed, set kubelet to start at boot
systemctl enable kubelet
6, Initialize kubeadm
① kubeadm generates the default configuration
kubeadm config print init-defaults > kubeadm-init.yaml
② . after modification:
[root@k8s01 src]# cat kubeadm-init.yaml
apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.0.108
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: k8s01
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns:
  type: CoreDNS
etcd:
#  local:
#    dataDir: /var/lib/etcd
  external:
    endpoints:
    - https://192.168.0.108:2379
    - https://192.168.0.109:2379
    - https://192.168.0.111:2379
    caFile: /etc/kubernetes/pki/etcd/ca.pem         # CA certificate generated when building the etcd cluster
    certFile: /etc/kubernetes/pki/etcd/etcd.pem     # client certificate generated when building the etcd cluster
    keyFile: /etc/kubernetes/pki/etcd/etcd-key.pem  # client key generated when building the etcd cluster
imageRepository: registry.aliyuncs.com/google_containers
kind: ClusterConfiguration
kubernetesVersion: v1.19.0
controlPlaneEndpoint: 192.168.0.110
networking:
  dnsDomain: cluster.local
  podSubnet: 10.244.0.0/16
  serviceSubnet: 10.96.0.0/12
scheduler: {}
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: "ipvs"
③ . execute initialization
[root@k8s01 src]# kubeadm init --config=kubeadm-init.yaml
Initialization succeeded
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:

  kubeadm join 192.168.0.110:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:453442e665b6186a6acfa60af95a3952d9529d3fea9745e04aaae32351a8e0e8 \
    --control-plane

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.0.110:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:453442e665b6186a6acfa60af95a3952d9529d3fea9745e04aaae32351a8e0e8
④ . copy the certificate under pki to the other two Master nodes
First, scp the cluster-wide CA certificates generated on master1 (k8s01) to the other master machines.
Note: only the CA-related certificates are copied; the apiserver certificates are not needed.
[root@k8s01 src]# scp /etc/kubernetes/pki/* root@k8s02:/etc/kubernetes/pki/
[root@k8s01 src]# scp /etc/kubernetes/pki/* root@k8s03:/etc/kubernetes/pki/
# On the other two master servers, delete the apiserver-related certificates
[root@k8s02 pki]# rm -rf apiserver*
[root@k8s03 pki]# rm -rf apiserver*
⑤ , Master k8s02 and k8s03 join the k8s cluster
kubeadm join 192.168.0.110:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:453442e665b6186a6acfa60af95a3952d9529d3fea9745e04aaae32351a8e0e8 \
    --control-plane
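If the bootstrap token has expired by the time additional nodes join (the default TTL is 24h), a fresh join command can be generated on k8s01; this is an aside of mine, not part of the original text:

kubeadm token create --print-join-command
# Append --control-plane to the printed command when joining a master node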
⑥ . view cluster node status
Note: set the kubectl environment variable before viewing the status
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
The Status is NotReady because the network plug-in is not installed
[root@k8s02 pki]# kubectl get node
NAME    STATUS     ROLES    AGE     VERSION
k8s01   NotReady   master   16m     v1.19.2
k8s02   NotReady   master   2m14s   v1.19.2
k8s03   NotReady   master   5m1s    v1.19.2
⑦ Install the network plug-in calico
[root@k8s01 src]# curl https://docs.projectcalico.org/manifests/calico.yaml -O
Modify configuration:
[root@k8s01 src]# cp calico.yaml calico.yaml.orig
[root@k8s01 src]# diff -U 5 calico.yaml.orig calico.yaml
--- calico.yaml.orig    2022-01-09 22:51:33.274483687 +0800
+++ calico.yaml 2022-01-09 23:05:04.278418520 +0800
@@ -4217,12 +4217,12 @@
                   name: calico-config
                   key: veth_mtu
             # The default IPv4 pool to create on startup if none exists. Pod IPs will be
             # chosen from this range. Changing this value after installation will have
             # no effect. This should fall within `--cluster-cidr`.
-            # - name: CALICO_IPV4POOL_CIDR
-            #   value: "192.168.0.0/16"
+            - name: CALICO_IPV4POOL_CIDR
+              value: "10.244.0.0/16"
             # Disable file logging so `kubectl logs` works.
             - name: CALICO_DISABLE_FILE_LOGGING
               value: "true"
             # Set Felix endpoint to host default action to ACCEPT.
             - name: FELIX_DEFAULTENDPOINTTOHOSTACTION
Set CALICO_IPV4POOL_CIDR to the Pod network segment 10.244.0.0/16, matching the podSubnet configured in kubeadm-init.yaml;
calico automatically detects the node's network interface. If a node has multiple network cards, you can configure an interface-matching regular expression for the interconnect interface, e.g. ens.* (adjust it to your server's interface names); see the sketch below;
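As an illustration of that remark (my own sketch, not in the original text), Calico's interface detection can be pinned by adding the IP_AUTODETECTION_METHOD environment variable to the calico-node container in calico.yaml; the ens.* pattern is an assumption based on this document's interface names:

            # Added to the env section of the calico-node container in calico.yaml
            - name: IP_AUTODETECTION_METHOD
              value: "interface=ens.*"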
Run calico plug-in:
[root@k8s01 src]# kubectl apply -f calico.yaml
[root@k8s01 src]# cat calico.yaml|grep image
          image: docker.io/calico/cni:v3.21.2
          image: docker.io/calico/cni:v3.21.2
          image: docker.io/calico/pod2daemon-flexvol:v3.21.2
          image: docker.io/calico/node:v3.21.2
          image: docker.io/calico/kube-controllers:v3.21.2
Note: downloading calico's four images is slow and should be handled in advance!
[root@k8s01 src]# kubectl get node
NAME    STATUS   ROLES    AGE   VERSION
k8s01   Ready    master   68m   v1.19.2
k8s02   Ready    master   54m   v1.19.2
k8s03   Ready    master   57m   v1.19.2
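Before joining worker nodes, it is worth confirming (my own optional check) that the calico and CoreDNS pods are actually Running:

kubectl get pods -n kube-system -o wide
# calico-node, calico-kube-controllers and coredns pods should all be Running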
⑧ Worker nodes join the cluster
[root@k8s04 ~]# kubeadm join 192.168.0.110:6443 --token abcdef.0123456789abcdef \
>     --discovery-token-ca-cert-hash sha256:453442e665b6186a6acfa60af95a3952d9529d3fea9745e04aaae32351a8e0e8
[preflight] Running pre-flight checks
        [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
View the node node again:
[root@k8s01 src]# kubectl get node
NAME    STATUS   ROLES    AGE   VERSION
k8s01   Ready    master   72m   v1.19.2
k8s02   Ready    master   57m   v1.19.2
k8s03   Ready    master   60m   v1.19.2
k8s04   Ready    <none>   49s   v1.19.2
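As a final smoke test (my suggestion, not part of the original text), deploy a small nginx and reach it through a NodePort to exercise scheduling, the pod network and kube-proxy; nginx-test is just an illustrative name:

kubectl create deployment nginx-test --image=nginx
kubectl expose deployment nginx-test --port=80 --type=NodePort
kubectl get pods,svc -o wide
# Replace <nodePort> with the port shown for the nginx-test service
curl http://192.168.0.112:<nodePort>
# Clean up afterwards
kubectl delete svc,deployment nginx-test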
7, Install rancher and import the cluster
For this demonstration, rancher can be deployed directly on the worker node (192.168.0.112).
Reference: https://blog.csdn.net/lswzw/article/details/109027255
Reference: https://www.cnblogs.com/huningfei/p/12759833.html