Environment information
| role | IP | system |
| --- | --- | --- |
| master | 10.4.7.152 | Ubuntu 18.04 |
| node | 10.4.7.162 | Ubuntu 18.04 |
1. Operating system configuration (execute on both nodes)
Operation nodes: master, node
- Disable swap
```shell
swapoff -a                             # disable swap temporarily
sed -ri 's/.*swap.*/#&/' /etc/fstab    # comment out the swap line in /etc/fstab (disables it permanently)
```
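The `sed` expression above comments out any fstab line that mentions swap. A safe habit is to preview the substitution on a sample line before touching the real `/etc/fstab` (the `/swapfile` path below is only an illustrative example):

```shell
# Preview the substitution on a sample fstab line before editing the real file
# (the /swapfile entry here is a made-up example)
line="/swapfile none swap sw 0 0"
result=$(printf '%s\n' "$line" | sed -r 's/.*swap.*/#&/')
echo "$result"   # the line comes back with a leading '#'
```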
- Turn off firewall
```shell
sudo ufw disable   # turn off the firewall
sudo ufw status    # check the firewall status; expect "Status: inactive"
```
- Close SELinux (skip if SELinux is not installed)
```shell
getenforce                   # check the SELinux status
sudo apt install selinux-utils
sudo setenforce 0            # disable temporarily
sed -i 's/enforcing/disabled/' /etc/selinux/config
# The sed command sets SELINUX=disabled in /etc/selinux/config (permanent); reboot afterwards
```
- Modify kernel parameters
```shell
cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
# Apply the settings
sysctl --system
```
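A typo in this fragment only surfaces later as a confusing `kubeadm init` preflight failure, so it is worth checking before installing it. A minimal sketch that renders the fragment to a temp file (via `mktemp`, an assumption for illustration) and confirms all three keys are set to 1:

```shell
# Render the sysctl fragment to a temp file and verify that all three
# keys are present and set to 1 before copying it into /etc/sysctl.d/
tmpconf=$(mktemp)
cat <<'EOF' > "$tmpconf"
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
keys=$(grep -c '= 1$' "$tmpconf")
echo "keys set to 1: $keys"
# sudo cp "$tmpconf" /etc/sysctl.d/k8s.conf && sudo sysctl --system
```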
- Load kernel module
```shell
modprobe br_netfilter
lsmod | grep br_netfilter
```
2. Install Docker (both nodes need to execute)
Operation node: master, node
KubeEdge, installed in a later step, requires Docker 19.03 or newer.
```shell
# Remove old Docker versions
sudo apt-get remove docker docker-engine docker-ce docker.io
# Install Docker
curl -sSL https://get.daocloud.io/docker | sh
```

Add the following to the /etc/docker/daemon.json file:

```
{
  "registry-mirrors": ["https://hub-mirror.c.163.com"]
}
```

Apply the configuration:

```shell
sudo systemctl daemon-reload
sudo systemctl restart docker
# Verify that the installation succeeded
sudo docker version
```
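A malformed /etc/docker/daemon.json keeps dockerd from starting at all, so it pays to validate the JSON before restarting Docker. A sketch that writes the config to a temp file first and checks it with python3's `json.tool` (assuming python3 is available on the host):

```shell
# Write the mirror config to a temp file and validate the JSON before
# installing it; broken JSON here would prevent dockerd from starting
conf=$(mktemp)
cat <<'EOF' > "$conf"
{
  "registry-mirrors": ["https://hub-mirror.c.163.com"]
}
EOF
python3 -m json.tool "$conf" > /dev/null && echo "daemon.json OK"
# sudo cp "$conf" /etc/docker/daemon.json && sudo systemctl restart docker
```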
3. Install kubectl, kubelet, and kubeadm (execute on both nodes)
Operation node: master, node
```shell
# Open the apt sources file
sudo vim /etc/apt/sources.list
```

Add the following line:

```
deb https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial main
```

```shell
# Add the repository public key
curl https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | sudo apt-key add -
# Update the apt index
sudo apt-get update
# Install the components (list installable versions with `apt-cache madison kubelet`)
sudo apt-get install -y kubelet=1.19.8-00 kubeadm=1.19.8-00 kubectl=1.19.8-00
# Check the installed version
kubelet --version
```
4. Deploy Kubernetes on the master node with kubeadm
Operation node: master
- Query required image
```shell
root@master-152:~# kubeadm config images list --kubernetes-version v1.19.8
k8s.gcr.io/kube-apiserver:v1.19.8
k8s.gcr.io/kube-controller-manager:v1.19.8
k8s.gcr.io/kube-scheduler:v1.19.8
k8s.gcr.io/kube-proxy:v1.19.8
k8s.gcr.io/pause:3.2
k8s.gcr.io/etcd:3.4.13-0
k8s.gcr.io/coredns:1.7.0
```
- Kubernetes installation
```shell
kubeadm init \
  --apiserver-advertise-address=10.4.7.152 \
  --image-repository registry.aliyuncs.com/google_containers \
  --kubernetes-version v1.19.8 \
  --service-cidr=10.96.0.0/12 \
  --pod-network-cidr=10.244.0.0/16
```
`kubeadm init` option descriptions:

- `--apiserver-advertise-address`: the IP address the API server advertises it is listening on; defaults to the default network interface if unset.
- `--image-repository`: the registry to pull control-plane images from (default "k8s.gcr.io").
- `--kubernetes-version`: the Kubernetes version for the control plane (default "stable-1").
- `--service-cidr`: the IP range for service virtual IPs (default "10.96.0.0/12").
- `--pod-network-cidr`: the CIDR for the pod network; the control plane allocates a subnet from this range to each node for the containers started on it.
- After successful installation, the output is as follows:
```
Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 10.4.7.152:6443 --token 7iddpr.00c75zjjoh78gpbi \
    --discovery-token-ca-cert-hash sha256:c5dad4cf76016b5e82e95ba4e69f53559835759b684672de7ea32b8548ef1184
```
- Copy the config file as prompted above
```shell
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
```
- At this point, the master node status is NotReady:

```shell
root@master-152:~# kubectl get nodes
NAME         STATUS     ROLES    AGE     VERSION
master-152   NotReady   master   6m36s   v1.19.8
```
- flannel deployment
```shell
# Download the flannel deployment manifest
wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
# Deploy flannel
kubectl apply -f kube-flannel.yml
# Once the flannel deployment succeeds, check the node status again
root@master-152:~# kubectl get nodes
NAME         STATUS   ROLES    AGE     VERSION
master-152   Ready    master   6m36s   v1.19.8
```
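In scripts it is handier to test for Ready programmatically than to eyeball `kubectl get nodes`. One portable approach is to parse the plain-text output with awk; the sketch below runs against a captured sample, and the `node_ready` helper name is made up for illustration:

```shell
# node_ready NAME: read `kubectl get nodes` output on stdin and succeed
# only if node NAME reports STATUS=Ready
node_ready() {
  awk -v n="$1" '$1 == n && $2 == "Ready" { ok = 1 } END { exit !ok }'
}
# Against a captured sample; in practice pipe in the live command:
#   kubectl get nodes | node_ready master-152
sample='NAME         STATUS   ROLES    AGE     VERSION
master-152   Ready    master   6m36s   v1.19.8'
printf '%s\n' "$sample" | node_ready master-152 && echo "master-152 is Ready"
```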
5. Add the Node to the K8s cluster
Operation node: node
- Run `kubeadm join` on the node to add it to the cluster. The command may take a few minutes to complete while images are pulled in the background.
```shell
kubeadm join 10.4.7.152:6443 --token mfymh2.rxvxyp2579coacbv \
    --discovery-token-ca-cert-hash sha256:9c5b395069f1b327bf4da4f91345674e1f316adf1a6af98a4ea7c466ebacaf68
```
- If the token has expired or been lost, generate a new join command on the master node:
```shell
kubeadm token create --print-join-command
kubeadm join 10.4.7.152:6443 --token k66jak.2c6yw8p50jj6g1e9 --discovery-token-ca-cert-hash sha256:9c5b395069f1b327bf4da4f91345674e1f316adf1a6af98a4ea7c466ebacaf68
```
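The `--discovery-token-ca-cert-hash` value can also be recomputed at any time from the cluster CA certificate using the standard openssl recipe. The sketch below demonstrates it on a throwaway self-signed CA so it can run anywhere; on the real master, point it at /etc/kubernetes/pki/ca.crt instead:

```shell
# Demonstrate the discovery-token-ca-cert-hash recipe on a throwaway CA;
# on the master, replace "$tmp/ca.crt" with /etc/kubernetes/pki/ca.crt
tmp=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 -subj "/CN=demo-ca" \
  -keyout "$tmp/ca.key" -out "$tmp/ca.crt" 2>/dev/null
hash=$(openssl x509 -pubkey -in "$tmp/ca.crt" \
  | openssl pkey -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 | awk '{print $NF}')
echo "sha256:$hash"
```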
- View node status
```shell
kubectl get nodes
NAME         STATUS   ROLES    AGE     VERSION
master-152   Ready    master   6m36s   v1.19.8
node-162     Ready    <none>   2m33s   v1.19.8
```
2. Offline K8s installation with kubeadm
| role | IP | system |
| --- | --- | --- |
| master | 10.4.7.152 | Ubuntu 18.04 |
1. System initialization
- Disable swap
```shell
swapoff -a                             # disable swap temporarily
sed -ri 's/.*swap.*/#&/' /etc/fstab    # comment out the swap line in /etc/fstab (disables it permanently)
```
- Turn off firewall
```shell
sudo ufw disable   # turn off the firewall
sudo ufw status    # check the firewall status; expect "Status: inactive"
```
- Close SELinux (skip if SELinux is not installed)
```shell
getenforce                   # check the SELinux status
sudo apt install selinux-utils
sudo setenforce 0            # disable temporarily
sed -i 's/enforcing/disabled/' /etc/selinux/config
# The sed command sets SELINUX=disabled in /etc/selinux/config (permanent); reboot afterwards
```
- Modify kernel parameters
```shell
cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
# Apply the settings
sysctl --system
```
- Load kernel module
```shell
modprobe br_netfilter
lsmod | grep br_netfilter
```
2. Docker deployment
KubeEdge, installed in a later step, requires Docker 19.03 or newer.
- Download the offline packages

  Site: https://download.docker.com/linux/ubuntu/dists/xenial/pool/stable/amd64/

  Download three packages (containerd.io, docker-ce-cli, docker-ce) at a version above 19.03, for example:

  - docker-ce_20.10.0~3-0~ubuntu-xenial_amd64.deb
  - docker-ce-cli_20.10.0~3-0~ubuntu-xenial_amd64.deb
  - containerd.io_1.4.3-1_amd64.deb

- Install the offline packages

```shell
dpkg -i containerd.io_1.4.3-1_amd64.deb
dpkg -i docker-ce-cli_20.10.0~3-0~ubuntu-xenial_amd64.deb
dpkg -i docker-ce_20.10.0~3-0~ubuntu-xenial_amd64.deb
```
- Verify that the installation was successful
```shell
docker version
```
3. Deploy kubectl, kubelet, and kubeadm
- Installation package download
The following steps are performed on an Ubuntu machine with network access.

Open the apt sources file:

```shell
vim /etc/apt/sources.list
```

Add the following line at the end of the file:

```
deb https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial main
```

```shell
# Add the repository public key
curl https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | sudo apt-key add -
# Update the apt index
sudo apt-get update
# Download the packages locally
sudo apt-get download kubelet=1.19.8-00 kubeadm=1.19.8-00 kubectl=1.19.8-00
# Download the dependency packages locally
sudo apt-get download cri-tools=1.13.0-01 socat=1.7.3.2-2ubuntu2 conntrack kubernetes-cni=0.8.7-00
# List the downloaded packages
ll
total 66904
drwxr-xr-x 2 root     root         4096 Aug  5 10:51 ./
drwxr-xr-x 6 lixingli lixingli     4096 Aug  5 07:13 ../
-rw-r--r-- 1 root     root        30580 Apr 16  2018 conntrack_1%3a1.4.4+snapshot20161117-6ubuntu2_amd64.deb
-rw-r--r-- 1 root     root      8775008 Jan  2  2021 cri-tools_1.13.0-01_amd64.deb
-rw-r--r-- 1 root     root      7763176 Feb 18 20:03 kubeadm_1.19.8-00_amd64.deb
-rw-r--r-- 1 root     root      8352916 Feb 18 20:03 kubectl_1.19.8-00_amd64.deb
-rw-r--r-- 1 root     root     18226180 Feb 18 20:03 kubelet_1.19.8-00_amd64.deb
-rw-r--r-- 1 root     root     24995436 Jan  2  2021 kubernetes-cni_0.8.7-00_amd64.deb
-rw-r--r-- 1 root     root       341772 Apr  4  2018 socat_1.7.3.2-2ubuntu2_amd64.deb
```
- Installation package installation
Copy the packages to the target machine, then install them in order (the order matters):
```shell
dpkg -i kubernetes-cni_0.8.7-00_amd64.deb
dpkg -i conntrack_1%3a1.4.4+snapshot20161117-6ubuntu2_amd64.deb
dpkg -i socat_1.7.3.2-2ubuntu2_amd64.deb
dpkg -i cri-tools_1.13.0-01_amd64.deb
dpkg -i kubectl_1.19.8-00_amd64.deb
dpkg -i kubelet_1.19.8-00_amd64.deb
dpkg -i kubeadm_1.19.8-00_amd64.deb
# Check that the installation succeeded
kubeadm version
kubectl version
kubelet --version
```
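Because dpkg will not fetch a missing dependency for you, it helps to confirm all seven packages made it onto the offline machine before starting. A small sketch (the `check_debs` helper name is made up for illustration):

```shell
# check_debs DIR: verify all seven offline .deb packages are present in DIR
check_debs() {
  for p in kubernetes-cni conntrack socat cri-tools kubectl kubelet kubeadm; do
    ls "$1/$p"*.deb >/dev/null 2>&1 || { echo "missing: $p"; return 1; }
  done
  echo "all packages present"
}
# Usage on the target machine, from the directory holding the packages:
#   check_debs . && echo "safe to start dpkg -i"
```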
4. Master node startup
- Image list query
```shell
root@master-152:~# kubeadm config images list --kubernetes-version v1.19.8 --image-repository registry.aliyuncs.com/google_containers
registry.aliyuncs.com/google_containers/kube-apiserver:v1.19.8
registry.aliyuncs.com/google_containers/kube-controller-manager:v1.19.8
registry.aliyuncs.com/google_containers/kube-scheduler:v1.19.8
registry.aliyuncs.com/google_containers/kube-proxy:v1.19.8
registry.aliyuncs.com/google_containers/pause:3.2
registry.aliyuncs.com/google_containers/etcd:3.4.13-0
registry.aliyuncs.com/google_containers/coredns:1.7.0
```
Option descriptions:

- `--kubernetes-version`: the Kubernetes version number.
- `--image-repository`: the registry to pull images from (default "k8s.gcr.io").
- Image packaging
On the networked master machine where K8s is installed, package the images as follows, then copy the archives to the target machine.
```shell
docker save -o kube-apiserver-1-19-8.tar registry.aliyuncs.com/google_containers/kube-apiserver:v1.19.8
docker save -o kube-controller-manager-1-19-8.tar registry.aliyuncs.com/google_containers/kube-controller-manager:v1.19.8
docker save -o kube-scheduler-1-19-8.tar registry.aliyuncs.com/google_containers/kube-scheduler:v1.19.8
docker save -o kube-proxy-1-19-8.tar registry.aliyuncs.com/google_containers/kube-proxy:v1.19.8
docker save -o pause-3-2.tar registry.aliyuncs.com/google_containers/pause:3.2
docker save -o etcd-3-4-13-0.tar registry.aliyuncs.com/google_containers/etcd:3.4.13-0
docker save -o coredns-1-7-0.tar registry.aliyuncs.com/google_containers/coredns:1.7.0
# List the packaged images
ll
drwxr-xr-x 2 root     root          4096 Aug  6 02:59 ./
drwxr-xr-x 7 lixingli lixingli      4096 Aug  6 02:58 ../
-rw-rw-r-- 1 lixingli lixingli  45365760 Jul 26 08:29 coredns-1-7-0.tar
-rw-rw-r-- 1 lixingli lixingli 254679040 Jul 26 08:27 etcd-3-4-13-0.tar
-rw-rw-r-- 1 lixingli lixingli 120077824 Jul 26 07:35 kube-apiserver-1-19-8.tar
-rw-rw-r-- 1 lixingli lixingli 112070144 Jul 26 07:46 kube-controller-manager-1-19-8.tar
-rw-rw-r-- 1 lixingli lixingli 119683072 Jul 26 07:36 kube-proxy-1-19-8.tar
-rw-rw-r-- 1 lixingli lixingli  47775232 Jul 26 07:45 kube-scheduler-1-19-8.tar
-rw-rw-r-- 1 lixingli lixingli    692736 Jul 26 08:28 pause-3-2.tar
```
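The seven `docker save` commands can also be generated from a list, which makes it harder to forget an image. A sketch that only prints the commands (shortened to three images for brevity; the real `docker save` call is left commented out for the connected machine, and the derived tar names use the tag's `v`, e.g. `kube-apiserver-v1-19-8.tar`, slightly different from the hand-written names above):

```shell
# Derive a tar filename from each image reference (':' and '.' become '-')
# and print the corresponding save command
images="registry.aliyuncs.com/google_containers/kube-apiserver:v1.19.8
registry.aliyuncs.com/google_containers/etcd:3.4.13-0
registry.aliyuncs.com/google_containers/pause:3.2"
for img in $images; do
  tarname="$(basename "$img" | tr ':.' '--').tar"
  echo "docker save -o $tarname $img"
  # docker save -o "$tarname" "$img"   # uncomment where docker is installed
done
```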
- Image loading
```shell
docker load -i kube-apiserver-1-19-8.tar
docker load -i coredns-1-7-0.tar
docker load -i etcd-3-4-13-0.tar
docker load -i kube-controller-manager-1-19-8.tar
docker load -i kube-proxy-1-19-8.tar
docker load -i kube-scheduler-1-19-8.tar
docker load -i pause-3-2.tar
# List the loaded images
docker images
REPOSITORY                                                        TAG        IMAGE ID       CREATED         SIZE
registry.aliyuncs.com/google_containers/kube-proxy                v1.19.8    ea03182b84a2   5 months ago    118MB
registry.aliyuncs.com/google_containers/kube-apiserver            v1.19.8    9ba91a90b7d1   5 months ago    119MB
registry.aliyuncs.com/google_containers/kube-controller-manager   v1.19.8    213ae7795128   5 months ago    111MB
registry.aliyuncs.com/google_containers/kube-scheduler            v1.19.8    919a3f36437d   5 months ago    46.5MB
registry.aliyuncs.com/google_containers/etcd                      3.4.13-0   0369cf4303ff   11 months ago   253MB
registry.aliyuncs.com/google_containers/coredns                   1.7.0      bfe3a36ebd25   13 months ago   45.2MB
registry.aliyuncs.com/google_containers/pause                     3.2        80d28bedfe5d   17 months ago   683kB
```
- K8s installation
```shell
kubeadm init \
  --apiserver-advertise-address=10.4.7.153 \
  --image-repository registry.aliyuncs.com/google_containers \
  --kubernetes-version v1.19.8 \
  --service-cidr=10.96.0.0/12 \
  --pod-network-cidr=10.244.0.0/16
```
The installation is successful, and the screen output is as follows:
```
Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 10.4.7.153:6443 --token yirah5.tcwkxhm9ui1kd8bg \
    --discovery-token-ca-cert-hash sha256:4e99dac259dd662932056b26fe8ff85208ad22ef8fe9700b86e7418af6bf92f6
```
Copy the config file as prompted above
```shell
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
```
At this point, the master node status is NotReady:

```shell
root@master-152:~# kubectl get nodes
NAME         STATUS     ROLES    AGE   VERSION
master-152   NotReady   master   20m   v1.19.8
```
5. flannel deployment
- Installation package download
On the networked master machine where K8s is installed, package the flannel image as follows, then copy it to the target machine.
```shell
# Download the flannel deployment manifest
wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
# Save the flannel image to a tar archive
docker save -o flannel-0-14-0.tar quay.io/coreos/flannel:v0.14.0
# List the downloaded files
ll
total 67324
drwxrwxr-x 2 lixingli lixingli     4096 Aug  6 06:43 ./
drwxr-xr-x 7 lixingli lixingli     4096 Aug  6 02:58 ../
-rw------- 1 root     root     68921344 Aug  6 06:42 flannel-0-14-0.tar
-rw-rw-r-- 1 lixingli lixingli     4813 Jul 25 14:30 kube-flannel.yml
```
- flannel deployment
```shell
# Load the image
docker load -i flannel-0-14-0.tar
# Check the loaded image
docker images | grep flannel
quay.io/coreos/flannel   v0.14.0   8522d622299c   2 months ago   67.9MB
# Deploy flannel
kubectl apply -f kube-flannel.yml
# Check the node status; it changes to Ready
root@master-152:~# kubectl get nodes
NAME         STATUS   ROLES    AGE   VERSION
master-152   Ready    master   20m   v1.19.8
# Check the pods; all should be in Running status
root@master-152:~# kubectl get pods -A
NAMESPACE     NAME                               READY   STATUS    RESTARTS   AGE
kube-system   coredns-6d56c8448f-shkd8           1/1     Running   0          22m
kube-system   coredns-6d56c8448f-t6gnk           1/1     Running   0          22m
kube-system   etcd-lixingli                      1/1     Running   0          22m
kube-system   kube-apiserver-lixingli            1/1     Running   0          22m
kube-system   kube-controller-manager-lixingli   1/1     Running   0          22m
kube-system   kube-flannel-ds-cmpqv              1/1     Running   0          2m24s
kube-system   kube-proxy-pd2bt                   1/1     Running   0          22m
kube-system   kube-scheduler-lixingli            1/1     Running   0          22m
```