k8s cluster deployment
kubeadm is a tool provided by the Kubernetes community for rapidly deploying Kubernetes clusters.
With it, a cluster can be stood up with just two commands:
```shell
# Create a Master node
$ kubeadm init

# Add a Node to the current cluster
$ kubeadm join <Master node IP and port>
```
Official website: https://kubernetes.io
Official documentation: https://kubernetes.io/docs/
Environment:
| Host | IP | Disk |
|---|---|---|
| master/CentOS8 | 192.168.220.17 | 30G |
| node1/CentOS8 | 192.168.220.20 | 30G |
| node2/CentOS8 | 192.168.220.21 | 30G |
Before you begin
- A compatible Linux host. The Kubernetes project provides generic instructions for Debian- and Red Hat-based Linux distributions, and for distributions without a package manager.
- 2 GB or more of RAM per machine (any less leaves little room for your applications).
- 2 CPU cores or more.
- Full network connectivity between all machines in the cluster (public or private network is fine).
- Unique hostname, MAC address, and product_uuid for every node. See the official documentation for more details.
- Certain ports open on the machines. See the official documentation for more details.
- Swap disabled. You must disable swap for the kubelet to work properly.
Preparation
```shell
# Set the hostname on each of the three hosts
[root@localhost ~]# hostnamectl set-hostname master.example.com
[root@localhost ~]# bash
[root@master ~]#

[root@localhost ~]# hostnamectl set-hostname node1.example.com
[root@localhost ~]# bash
[root@node1 ~]#

[root@localhost ~]# hostnamectl set-hostname node2.example.com
[root@localhost ~]# bash
[root@node2 ~]#

# Disable the firewall and SELinux on all three hosts
[root@master ~]# systemctl disable firewalld
Removed /etc/systemd/system/multi-user.target.wants/firewalld.service.
Removed /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
[root@master ~]# sed -ri 's/(SELINUX=).*/\1disabled/g' /etc/selinux/config
[root@master ~]# setenforce 0
[root@master ~]# getenforce
Permissive
[root@master ~]# reboot
[root@master ~]# getenforce
Disabled

# On all three hosts, delete or comment out the swap entry in /etc/fstab, then reboot
[root@master ~]# free -m        # before disabling swap
              total        used        free      shared  buff/cache   available
Mem:           2797         241        2342          16         213        2386
Swap:          3071           0        3071
[root@master ~]# cat /etc/fstab
#
# /etc/fstab
# Created by anaconda on Sat Dec 18 13:26:48 2021
#
# Accessible filesystems, by reference, are maintained under '/dev/disk/'.
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info.
#
# After editing this file, run 'systemctl daemon-reload' to update systemd
# units generated from this file.
#
/dev/mapper/cs-root   /      xfs    defaults   0 0
UUID=67bfc26e-88da-45ec-914c-3f28ec9571fb /boot xfs defaults 0 0
#/dev/mapper/cs-swap  none   swap   defaults   0 0
[root@master ~]# reboot
[root@master ~]# free -m        # after disabling swap
              total        used        free      shared  buff/cache   available
Mem:           1789         211        1359           8         218        1416
Swap:             0           0           0
```
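The fstab edit can also be scripted instead of done by hand. A minimal sketch, demonstrated here on a sample copy at /tmp/fstab.demo (an assumed demo path); on a real host you would run `swapoff -a` and point the `sed` at /etc/fstab itself, which avoids the reboot:

```shell
# Write a sample fstab, then comment out its swap line the same way the
# manual edit above does. Lines already commented are left untouched.
printf '%s\n' \
  'UUID=67bfc26e-88da-45ec-914c-3f28ec9571fb /boot xfs defaults 0 0' \
  '/dev/mapper/cs-swap none swap defaults 0 0' > /tmp/fstab.demo
sed -ri 's/^([^#].*[[:space:]]swap[[:space:]].*)$/#\1/' /tmp/fstab.demo
cat /tmp/fstab.demo
```

The `[[:space:]]swap[[:space:]]` anchor matches only the filesystem-type field, so a path that merely contains the word "swap" would not be commented out.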
Configure hostname resolution on the master
```shell
[root@master ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.220.17 master master.example.com
192.168.220.20 node1 node1.example.com
192.168.220.21 node2 node2.example.com
[root@master ~]# ping master
PING master (192.168.220.17) 56(84) bytes of data.
64 bytes from master (192.168.220.17): icmp_seq=1 ttl=64 time=0.020 ms
64 bytes from master (192.168.220.17): icmp_seq=2 ttl=64 time=0.022 ms
^Z
[1]+  Stopped                 ping master
[root@master ~]# ping node1
PING node1 (192.168.220.20) 56(84) bytes of data.
64 bytes from node1 (192.168.220.20): icmp_seq=1 ttl=64 time=0.325 ms
64 bytes from node1 (192.168.220.20): icmp_seq=2 ttl=64 time=0.422 ms
^Z
[2]+  Stopped                 ping node1
[root@master ~]# ping node2
PING node2 (192.168.220.21) 56(84) bytes of data.
64 bytes from node2 (192.168.220.21): icmp_seq=1 ttl=64 time=0.320 ms
64 bytes from node2 (192.168.220.21): icmp_seq=2 ttl=64 time=0.228 ms
^Z
[3]+  Stopped                 ping node2
```
Pass the bridged IPv4 traffic to the iptables chain on the master:
```shell
[root@master ~]# cat > /etc/sysctl.d/k8s.conf << EOF
> net.bridge.bridge-nf-call-ip6tables = 1
> net.bridge.bridge-nf-call-iptables = 1
> EOF
[root@master ~]# cat /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
[root@master ~]# sysctl --system
* Applying /usr/lib/sysctl.d/10-default-yama-scope.conf ...
kernel.yama.ptrace_scope = 0
* Applying /usr/lib/sysctl.d/50-coredump.conf ...
kernel.core_pattern = |/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h %e
* Applying /usr/lib/sysctl.d/50-default.conf ...
kernel.sysrq = 16
kernel.core_uses_pid = 1
kernel.kptr_restrict = 1
net.ipv4.conf.all.rp_filter = 1
net.ipv4.conf.all.accept_source_route = 0
net.ipv4.conf.all.promote_secondaries = 1
net.core.default_qdisc = fq_codel
fs.protected_hardlinks = 1
fs.protected_symlinks = 1
* Applying /usr/lib/sysctl.d/50-libkcapi-optmem_max.conf ...
net.core.optmem_max = 81920
* Applying /usr/lib/sysctl.d/50-pid-max.conf ...
kernel.pid_max = 4194304
* Applying /etc/sysctl.d/99-sysctl.conf ...
* Applying /etc/sysctl.d/k8s.conf ...
* Applying /etc/sysctl.conf ...
```
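One caveat: the `net.bridge.*` keys exist only while the `br_netfilter` kernel module is loaded, so on a freshly booted host `sysctl --system` may report them as unknown. It is safer to persist the module load alongside the sysctl file. A sketch, writing the two files to a demo directory (/tmp/k8s-demo is an assumption; the real paths would be /etc/modules-load.d/k8s.conf and /etc/sysctl.d/k8s.conf):

```shell
# Persist the br_netfilter module load next to the bridge sysctls.
# Demo paths under /tmp; the real paths are noted in the comments.
mkdir -p /tmp/k8s-demo
echo 'br_netfilter' > /tmp/k8s-demo/k8s.modules          # real: /etc/modules-load.d/k8s.conf
printf '%s\n' \
  'net.bridge.bridge-nf-call-ip6tables = 1' \
  'net.bridge.bridge-nf-call-iptables = 1' > /tmp/k8s-demo/k8s.sysctl   # real: /etc/sysctl.d/k8s.conf
# On a real host, apply immediately with: modprobe br_netfilter && sysctl --system
cat /tmp/k8s-demo/k8s.modules /tmp/k8s-demo/k8s.sysctl
```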
Install the time synchronization service on all three hosts
```shell
[root@master ~]# dnf -y install vim wget chrony
[root@node1 ~]# dnf -y install vim wget chrony
[root@node2 ~]# dnf -y install vim wget chrony

# On all three hosts, point time synchronization at Aliyun's NTP server
[root@master ~]# cat /etc/chrony.conf
# Use public servers from the pool.ntp.org project.
# Please consider joining the pool (http://www.pool.ntp.org/join.html).
pool time1.aliyun.com iburst        # changed from the default pool to time1.aliyun.com

# Record the rate at which the system clock gains/loses time.
driftfile /var/lib/chrony/drift

# Allow the system clock to be stepped in the first three updates
# if its offset is larger than 1 second.
makestep 1.0 3

# Enable kernel synchronization of the real-time clock (RTC).
rtcsync

# Enable hardware timestamping on all interfaces that support it.
#hwtimestamp *

# Increase the minimum number of selectable sources required to adjust
# the system clock.
#minsources 2

# Allow NTP client access from local network.
#allow 192.168.0.0/16

# Serve time even if not synchronized to a time source.
#local stratum 10

# Specify file containing keys for NTP authentication.
keyfile /etc/chrony.keys

# Get TAI-UTC offset and leap seconds from the system tz database.
leapsectz right/UTC

# Specify directory for log files.
logdir /var/log/chrony

# Select which information is logged.
#log measurements statistics tracking

[root@master ~]# systemctl enable --now chronyd
```
Set up passwordless SSH login on the master
```shell
[root@master ~]# ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Created directory '/root/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:dYbSEiqrqFYLgd4UggXxWJF81kftQKKDs/OoBrBoypU root@master.example.com
The key's randomart image is:
+---[RSA 3072]----+
|+=+o ..o+.       |
|.=ooo..ooo..     |
|o =o= ..oo+ o    |
|o. + + +.o       |
|+.* o S          |
|+= E             |
|* * o            |
|o= .             |
|=                |
+----[SHA256]-----+
[root@master ~]# ssh-copy-id master
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
The authenticity of host 'master (192.168.220.17)' can't be established.
ECDSA key fingerprint is SHA256:+w2iu/jKxDt9j9X0LelVpearhiefBgd+vm7AntCUiGo.
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@master's password:

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'master'"
and check to make sure that only the key(s) you wanted were added.

[root@master ~]# ssh-copy-id node1
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
The authenticity of host 'node1 (192.168.220.20)' can't be established.
ECDSA key fingerprint is SHA256:Kv8kDJNeSd2AjUNVDTPmvrvCAXL7GNUKWHUYNoIfSHo.
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@node1's password:

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'node1'"
and check to make sure that only the key(s) you wanted were added.

[root@master ~]# ssh-copy-id node2
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
The authenticity of host 'node2 (192.168.220.21)' can't be established.
ECDSA key fingerprint is SHA256:UlA5inIMH+HDVNyu7eeFEwSE/hFSPS3DNqY6uE2do88.
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@node2's password:

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'node2'"
and check to make sure that only the key(s) you wanted were added.
```
Verify that the time is consistent across the three hosts
```shell
[root@master ~]# for i in master node1 node2; do ssh $i 'date'; done
Sat Dec 18 01:37:40 EST 2021
Sat Dec 18 01:37:40 EST 2021
Sat Dec 18 01:37:40 EST 2021
```
Install Docker on all three hosts
Kubernetes talks to a container runtime through the CRI (Container Runtime Interface); here Docker is used as the runtime, so Docker is installed first.
```shell
# Download the Docker repo file
[root@master ~]# wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo

# Check
[root@master ~]# ls /etc/yum.repos.d/
CentOS-Stream-AppStream.repo  CentOS-Stream-Extras.repo            CentOS-Stream-PowerTools.repo
CentOS-Stream-BaseOS.repo     CentOS-Stream-HighAvailability.repo  CentOS-Stream-RealTime.repo
CentOS-Stream-Debuginfo.repo  CentOS-Stream-Media.repo             docker-ce.repo

# Install docker
[root@master ~]# yum -y install docker-ce
# (optionally run `systemctl enable --now docker` here, which also silences a
#  kubeadm preflight warning seen later)

# Configure the registry accelerator on all three hosts.
# NOTE: daemon.json is strict JSON and must not contain comments;
#       the options are explained below the file.
[root@master ~]# cat /etc/docker/daemon.json
{
  "registry-mirrors": ["https://wn5c7d7w.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2"
}
# exec-opts:      use systemd for cgroup management
# log-driver:     write container logs in JSON format
# log-opts:       start a new log file once a log exceeds 100 MB
# storage-driver: use the overlay2 storage driver

# Check the docker version
[root@master ~]# docker --version
Docker version 20.10.12, build e91ed57

# Confirm the accelerator took effect
[root@master ~]# docker info
Client:
 Context:    default
 Debug Mode: false
 Plugins:
  app: Docker App (Docker Inc., v0.9.1-beta3)
  buildx: Docker Buildx (Docker Inc., v0.7.1-docker)
  scan: Docker Scan (Docker Inc., v0.12.0)
.............
 Insecure Registries:
  127.0.0.0/8
 Registry Mirrors:
  https://wn5c7d7w.mirror.aliyuncs.com/        # accelerator in effect
 Live Restore Enabled: false
```
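Because daemon.json must be strict JSON (inline comments would keep Docker from starting), it is worth validating the file before restarting the daemon. A quick sketch, using an assumed demo path /tmp/daemon.json; on the real hosts you would check /etc/docker/daemon.json:

```shell
# Write the accelerator config and confirm it parses as strict JSON.
cat > /tmp/daemon.json <<'EOF'
{
  "registry-mirrors": ["https://wn5c7d7w.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": { "max-size": "100m" },
  "storage-driver": "overlay2"
}
EOF
# json.tool exits non-zero on a syntax error, so this catches stray comments
python3 -m json.tool /tmp/daemon.json > /dev/null && echo 'daemon.json: valid JSON'
```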
Add the Alibaba Cloud Kubernetes YUM repository on all three hosts
```shell
# On all three hosts, configure the Kubernetes repo and install kubelet, kubeadm and kubectl
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
setenforce 0
yum install -y kubelet kubeadm kubectl
systemctl enable kubelet && systemctl start kubelet

# Versions installed on all three hosts
kubectl-1.23.1 kubeadm-1.23.1 kubelet-1.23.1
```
Deploy Kubernetes Master on master
Execute on the Master (192.168.220.17).
```shell
# Option notes (inline comments after a trailing backslash would break the
# command, so they are listed here instead):
#   --apiserver-advertise-address  the master's IP
#   --image-repository             pull the control-plane images from Aliyun
#   --kubernetes-version           the Kubernetes version (match the installed packages)
#   --service-cidr                 the Service network (keep the default 10.96.0.0/12)
#   --pod-network-cidr             must match the pod network add-on (flannel uses 10.244.0.0/16)
[root@master ~]# kubeadm init --apiserver-advertise-address 192.168.220.17 \
  --image-repository registry.aliyuncs.com/google_containers \
  --kubernetes-version v1.23.1 \
  --service-cidr=10.96.0.0/12 \
  --pod-network-cidr=10.244.0.0/16
[init] Using Kubernetes version: v1.23.1
[preflight] Running pre-flight checks
        [WARNING Service-Docker]: docker service is not enabled, please run 'systemctl enable docker.service'
        [WARNING FileExisting-tc]: tc not found in system path
        [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
.........
To start using your cluster, you need to run the following as a regular user:

  # set the environment variables (ordinary user)
  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  # set the environment variable (root user)
  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

# save this join command to a file (here: init); it will be needed later
kubeadm join 192.168.220.17:6443 --token z9bkz4.8zl0ca032qqg3qwu \
        --discovery-token-ca-cert-hash sha256:2382b876b896591aeff33c2df6bf250a28d54e9b4628839dd40ed4d98e7ac3ca

[root@master ~]# cat init
kubeadm join 192.168.220.17:6443 --token z9bkz4.8zl0ca032qqg3qwu \
        --discovery-token-ca-cert-hash sha256:2382b876b896591aeff33c2df6bf250a28d54e9b4628839dd40ed4d98e7ac3ca

# Set the environment variable used by kubectl
[root@master ~]# echo 'export KUBECONFIG=/etc/kubernetes/admin.conf' > /etc/profile.d/k8s.sh
[root@master ~]# source /etc/profile.d/k8s.sh

# Check the pulled images
[root@master ~]# docker images
REPOSITORY                                                         TAG       IMAGE ID       CREATED        SIZE
registry.aliyuncs.com/google_containers/kube-apiserver             v1.23.1   b6d7abedde39   44 hours ago   135MB
registry.aliyuncs.com/google_containers/kube-proxy                 v1.23.1   b46c42588d51   44 hours ago   112MB
registry.aliyuncs.com/google_containers/kube-controller-manager    v1.23.1   f51846a4fd28   44 hours ago   125MB
registry.aliyuncs.com/google_containers/kube-scheduler             v1.23.1   71d575efe628   44 hours ago   53.5MB
registry.aliyuncs.com/google_containers/etcd                       3.5.1-0   25f8c7f3da61   6 weeks ago    293MB
registry.aliyuncs.com/google_containers/coredns                    v1.8.6    a4ca41631cc7   2 months ago   46.8MB
registry.aliyuncs.com/google_containers/pause                      3.6       6270bb605e12   3 months ago   683kB

# Check the running containers
[root@master ~]# docker ps
CONTAINER ID   IMAGE                                               COMMAND                  CREATED         STATUS         NAMES
70b8d97bb83f   b46c42588d51                                        "/usr/local/bin/kube..."  8 minutes ago   Up 8 minutes   k8s_kube-proxy_kube-proxy-j7hqc_kube-system_02f2421f-17f1-4708-bbaf-39f8c8291848_0
7fd8eff04c65   registry.aliyuncs.com/google_containers/pause:3.6   "/pause"                 8 minutes ago   Up 8 minutes   k8s_POD_kube-proxy-j7hqc_kube-system_02f2421f-17f1-4708-bbaf-39f8c8291848_0
a99e24d6990a   25f8c7f3da61                                        "etcd --advertise-cl..."  8 minutes ago   Up 8 minutes   k8s_etcd_etcd-master.example.com_kube-system_88f66f7493adcab2ec614fff53ea6c21_0
82b4584d727a   71d575efe628                                        "kube-scheduler --au..."  8 minutes ago   Up 8 minutes   k8s_kube-scheduler_kube-scheduler-master.example.com_kube-system_78d116366c5c52e663d3704a9b950ba6_0
5d8b0b477342   f51846a4fd28                                        "kube-controller-man..."  8 minutes ago   Up 8 minutes   k8s_kube-controller-manager_kube-controller-manager-master.example.com_kube-system_e3c7337cbdf9f732e45b211a57aa7a54_0
a3ae8429535d   b6d7abedde39                                        "kube-apiserver --ad..."  8 minutes ago   Up 8 minutes   k8s_kube-apiserver_kube-apiserver-master.example.com_kube-system_0bd35d96e524489e8ac2242562841834_0
f12debbb16f7   registry.aliyuncs.com/google_containers/pause:3.6   "/pause"                 8 minutes ago   Up 8 minutes   k8s_POD_kube-controller-manager-master.example.com_kube-system_e3c7337cbdf9f732e45b211a57aa7a54_0
c8dbb3ff416b   registry.aliyuncs.com/google_containers/pause:3.6   "/pause"                 8 minutes ago   Up 8 minutes   k8s_POD_kube-scheduler-master.example.com_kube-system_78d116366c5c52e663d3704a9b950ba6_0
c6486a53c89c   registry.aliyuncs.com/google_containers/pause:3.6   "/pause"                 8 minutes ago   Up 8 minutes   k8s_POD_kube-apiserver-master.example.com_kube-system_0bd35d96e524489e8ac2242562841834_0
641bd0770eb0   registry.aliyuncs.com/google_containers/pause:3.6   "/pause"                 8 minutes ago   Up 8 minutes   k8s_POD_etcd-master.example.com_kube-system_88f66f7493adcab2ec614fff53ea6c21_0

# Check the listening ports
[root@master ~]# ss -antl
State  Recv-Q Send-Q   Local Address:Port  Peer Address:Port Process
LISTEN 0      128          127.0.0.1:10248      0.0.0.0:*
LISTEN 0      128          127.0.0.1:10249      0.0.0.0:*
LISTEN 0      128     192.168.220.17:2379       0.0.0.0:*
LISTEN 0      128          127.0.0.1:2379       0.0.0.0:*
LISTEN 0      128     192.168.220.17:2380       0.0.0.0:*
LISTEN 0      128          127.0.0.1:2381       0.0.0.0:*
LISTEN 0      128          127.0.0.1:10257      0.0.0.0:*
LISTEN 0      128          127.0.0.1:10259      0.0.0.0:*
LISTEN 0      128          127.0.0.1:43669      0.0.0.0:*
LISTEN 0      128            0.0.0.0:22         0.0.0.0:*
LISTEN 0      128                  *:10250            *:*
LISTEN 0      128                  *:6443             *:*
LISTEN 0      128                  *:10256            *:*
LISTEN 0      128               [::]:22            [::]:*

# Check the node; "NotReady" means initialization is still finishing in the
# background -- the pod network is installed in the next step
[root@master ~]# kubectl get nodes
NAME                 STATUS     ROLES                  AGE     VERSION
master.example.com   NotReady   control-plane,master   9m42s   v1.23.1
```
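The token in the join command expires after 24 hours by default; if it is lost or stale, `kubeadm token create --print-join-command` on the master prints a fresh one. The `--discovery-token-ca-cert-hash` value is simply the SHA-256 of the cluster CA's public key, so it can also be recomputed from /etc/kubernetes/pki/ca.crt. The pipeline below is a sketch, demonstrated on a throwaway self-signed certificate (the /tmp paths are assumptions); on the master you would point the second command at the real CA file:

```shell
# Generate a throwaway CA cert to stand in for /etc/kubernetes/pki/ca.crt
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/ca.key -out /tmp/ca.crt \
  -subj '/CN=demo-ca' -days 1 2>/dev/null

# Hash the DER-encoded public key and format it the way kubeadm expects
openssl x509 -pubkey -in /tmp/ca.crt -noout \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex \
  | sed 's/^.* /sha256:/'
```

The output is a `sha256:<64 hex digits>` string suitable for pasting straight into `kubeadm join`.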
Install Pod network plug-in (CNI) on the master
Flannel can be added to any existing Kubernetes cluster, though it is simplest to add it before any pods that use the pod network have been started.

```shell
[root@master ~]# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
Warning: policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
podsecuritypolicy.policy/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds created
```
Join the Kubernetes nodes
Add node1 and node2 to the cluster using the join command saved earlier in the init file.
```shell
# Check on the master before joining: only one node so far
[root@master ~]# kubectl get nodes
NAME                 STATUS   ROLES                  AGE   VERSION
master.example.com   Ready    control-plane,master   23m   v1.23.1

# The join command output by kubeadm init, saved earlier
[root@master ~]# cat init
kubeadm join 192.168.220.17:6443 --token z9bkz4.8zl0ca032qqg3qwu \
        --discovery-token-ca-cert-hash sha256:2382b876b896591aeff33c2df6bf250a28d54e9b4628839dd40ed4d98e7ac3ca

# Join node1 to the cluster, on node1
[root@node1 ~]# kubeadm join 192.168.220.17:6443 --token z9bkz4.8zl0ca032qqg3qwu \
> --discovery-token-ca-cert-hash sha256:2382b876b896591aeff33c2df6bf250a28d54e9b4628839dd40ed4d98e7ac3ca
[preflight] Running pre-flight checks
        [WARNING Service-Docker]: docker service is not enabled, please run 'systemctl enable docker.service'
        [WARNING FileExisting-tc]: tc not found in system path
        [WARNING Hostname]: hostname "node1.example.com" could not be reached
        [WARNING Hostname]: hostname "node1.example.com": lookup node1.example.com on 114.114.114.114:53: no such host
        [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

# Join node2 to the cluster, on node2
[root@node2 ~]# kubeadm join 192.168.220.17:6443 --token z9bkz4.8zl0ca032qqg3qwu \
> --discovery-token-ca-cert-hash sha256:2382b876b896591aeff33c2df6bf250a28d54e9b4628839dd40ed4d98e7ac3ca
[preflight] Running pre-flight checks
        [WARNING Service-Docker]: docker service is not enabled, please run 'systemctl enable docker.service'
        [WARNING FileExisting-tc]: tc not found in system path
        [WARNING Hostname]: hostname "node2.example.com" could not be reached
        [WARNING Hostname]: hostname "node2.example.com": lookup node2.example.com on 114.114.114.114:53: no such host
        [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
```
After node1 and node2 have joined the cluster, check them from the master:
```shell
[root@master ~]# kubectl get nodes
NAME                 STATUS   ROLES                  AGE     VERSION
master.example.com   Ready    control-plane,master   26m     v1.23.1
node1.example.com    Ready    <none>                 2m25s   v1.23.1
node2.example.com    Ready    <none>                 2m21s   v1.23.1
```
Test the Kubernetes cluster
Create a pod in the Kubernetes cluster and verify that it works normally:
```shell
# Create a Deployment named nginx using the nginx image; no node is specified,
# so the scheduler picks one
[root@master ~]# kubectl create deployment nginx --image=nginx
deployment.apps/nginx created

# Expose port 80 of the nginx deployment as a NodePort service, reachable on every node
[root@master ~]# kubectl expose deployment nginx --port=80 --type=NodePort
service/nginx exposed

# Check
[root@master ~]# kubectl get pod,svc
NAME                         READY   STATUS    RESTARTS   AGE
pod/nginx-85b98978db-xd6wz   1/1     Running   0          68s

NAME                 TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
service/kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP        30m
service/nginx        NodePort    10.99.129.159   <none>        80:31343/TCP   48s

# See which node the pod is running on (here: node2)
[root@master ~]# kubectl get pod -o wide
NAME                     READY   STATUS    RESTARTS   AGE   IP           NODE                NOMINATED NODE   READINESS GATES
nginx-85b98978db-xd6wz   1/1     Running   0          87s   10.244.2.2   node2.example.com   <none>           <none>

# Access the service IP
[root@master ~]# curl http://10.99.129.159
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
..............

# Check the mapped random port (31343) on node2
[root@node2 ~]# ss -antl
State  Recv-Q Send-Q Local Address:Port  Peer Address:Port Process
LISTEN 0      128        127.0.0.1:37919      0.0.0.0:*
LISTEN 0      128        127.0.0.1:10248      0.0.0.0:*
LISTEN 0      128        127.0.0.1:10249      0.0.0.0:*
LISTEN 0      128          0.0.0.0:31343      0.0.0.0:*     # the NodePort
LISTEN 0      128          0.0.0.0:22         0.0.0.0:*
LISTEN 0      128                *:10250            *:*
LISTEN 0      128                *:10256            *:*
LISTEN 0      128             [::]:22            [::]:*
```
Finally, access node2's IP at the mapped random port (http://192.168.220.21:31343) to reach nginx from outside the cluster.