Deploying k8s on virtual machines keeps running into various problems the next day

Problem 1: checking the network status reports an error: resolving the "RTNETLINK answers: File exists" error

CentOS7 Failed to start LSB: Bring up/down networking
RTNETLINK answers: File exists error resolution

https://blog.csdn.net/u010719917/article/details/79423180

chkconfig --level 35 network on
chkconfig --level 0123456 NetworkManager off

service NetworkManager stop
service network stop

service network start

If that does not work, restart the system

The "RTNETLINK answers: File exists" error appears when running service network start, or when running

/etc/init.d/network start (the two are equivalent; the former actually executes the latter). Solution:

The reason this fails on CentOS is a conflict between the two services that bring up the network:

/etc/init.d/network and /etc/init.d/NetworkManager.

Basically, the conflict is caused by NetworkManager (NM). Stop NetworkManager first, then restart the network service.

1. Switch to the root account and use the chkconfig command to view the startup configuration of network and NetworkManager services;
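For step 1, the startup configuration can be checked as follows (a minimal sketch; it assumes CentOS 7, where network is still a SysV init script while NetworkManager is a native systemd unit):

chkconfig --list network
systemctl is-enabled NetworkManager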

=====
Executing the following three commands is enough to fix it
service NetworkManager stop
service network stop

service network start
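On CentOS 7 the service and chkconfig commands are largely wrappers around systemd, so the same fix can be expressed directly with systemctl (a sketch, assuming NetworkManager should stay disabled on this VM permanently):

systemctl stop NetworkManager
systemctl disable NetworkManager
systemctl restart network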

 

Problem 2: when pinging the Internet, ping reports "Destination Host Unreachable" from the intranet IP

Troubleshooting process

[root@mcw7 ~]$ ping www.baidu.com
PING www.a.shifen.com (220.181.38.149) 56(84) bytes of data.
From bogon (172.16.1.137) icmp_seq=1 Destination Host Unreachable

View the routing table of a host that can reach the external network
[root@mcw8 ~]$ route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         10.0.0.2        0.0.0.0         UG    100    0        0 ens33
0.0.0.0         172.16.1.2      0.0.0.0         UG    101    0        0 ens37
10.0.0.0        0.0.0.0         255.255.255.0   U     100    0        0 ens33
10.244.0.0      0.0.0.0         255.255.255.0   U     0      0        0 cni0
172.16.1.0      0.0.0.0         255.255.255.0   U     100    0        0 ens37
172.17.0.0      0.0.0.0         255.255.0.0     U     0      0        0 docker0

Check the routing table of the host that cannot reach the Internet: the default route via 10.0.0.2 is missing.
A default route like the one above should be added, i.e. 0.0.0.0         10.0.0.2        0.0.0.0         UG    100    0        0 ens33
[root@mcw7 ~]$ route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         172.16.1.2      0.0.0.0         UG    0      0        0 ens37
10.0.0.0        0.0.0.0         255.255.255.0   U     0      0        0 ens33
169.254.0.0     0.0.0.0         255.255.0.0     U     1002   0        0 ens33
169.254.0.0     0.0.0.0         255.255.0.0     U     1003   0        0 ens37
172.16.1.0      0.0.0.0         255.255.255.0   U     0      0        0 ens37


First I added the wrong route, which then had to be deleted:
route add -host 10.0.0.137  gw 10.0.0.2
[root@mcw7 ~]$ route add -host 10.0.0.137  gw 10.0.0.2
[root@mcw7 ~]$ route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         172.16.1.2      0.0.0.0         UG    0      0        0 ens37
10.0.0.0        0.0.0.0         255.255.255.0   U     0      0        0 ens33
10.0.0.137      10.0.0.2        255.255.255.255 UGH   0      0        0 ens33
169.254.0.0     0.0.0.0         255.255.0.0     U     1002   0        0 ens33
169.254.0.0     0.0.0.0         255.255.0.0     U     1003   0        0 ens37
172.16.1.0      0.0.0.0         255.255.255.0   U     0      0        0 ens37

Delete it with route del -host followed by the IP (the destination address in the first column of the route). What I actually need here is a destination of 0.0.0.0, i.e. any destination, with gw 10.0.0.2.
[root@mcw7 ~]$ route del -host 10.0.0.137 dev ens33
[root@mcw7 ~]$ route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         172.16.1.2      0.0.0.0         UG    0      0        0 ens37
10.0.0.0        0.0.0.0         255.255.255.0   U     0      0        0 ens33
169.254.0.0     0.0.0.0         255.255.0.0     U     1002   0        0 ens33
169.254.0.0     0.0.0.0         255.255.0.0     U     1003   0        0 ens37
172.16.1.0      0.0.0.0         255.255.255.0   U     0      0        0 ens37
[root@mcw7 ~]$ 


-host refers to a single destination host, so the genmask becomes 255.255.255.255 instead of the 0.0.0.0 I want, and the entry has to be deleted manually again. The extra H flag appears because this is a host route.
[root@mcw7 ~]$ route add -host 0.0.0.0  gw 10.0.0.2
[root@mcw7 ~]$ route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         10.0.0.2        255.255.255.255 UGH   0      0        0 ens33
0.0.0.0         172.16.1.2      0.0.0.0         UG    0      0        0 ens37
10.0.0.0        0.0.0.0         255.255.255.0   U     0      0        0 ens33
169.254.0.0     0.0.0.0         255.255.0.0     U     1002   0        0 ens33
169.254.0.0     0.0.0.0         255.255.0.0     U     1003   0        0 ens37
172.16.1.0      0.0.0.0         255.255.255.0   U     0      0        0 ens37

Delete the route, specifying the destination host and the network interface
[root@mcw7 ~]$ route del -host 0.0.0.0 dev ens33
[root@mcw7 ~]$ route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         172.16.1.2      0.0.0.0         UG    0      0        0 ens37
10.0.0.0        0.0.0.0         255.255.255.0   U     0      0        0 ens33
169.254.0.0     0.0.0.0         255.255.0.0     U     1002   0        0 ens33
169.254.0.0     0.0.0.0         255.255.0.0     U     1003   0        0 ens37
172.16.1.0      0.0.0.0         255.255.255.0   U     0      0        0 ens37


The usage prompt shows that the mask option is netmask (lowercase), not MASK. Even with netmask 0.0.0.0, -host still produces a host route with genmask 255.255.255.255.
[root@mcw7 ~]$ route add -host 0.0.0.0 MASK 0.0.0.0  gw 10.0.0.2
Usage: inet_route [-vF] del {-host|-net} Target[/prefix] [gw Gw] [metric M] [[dev] If]
       inet_route [-vF] add {-host|-net} Target[/prefix] [gw Gw] [metric M]
                              [netmask N] [mss Mss] [window W] [irtt I]
                              [mod] [dyn] [reinstate] [[dev] If]
       inet_route [-vF] add {-host|-net} Target[/prefix] [metric M] reject
       inet_route [-FC] flush      NOT supported
[root@mcw7 ~]$ 
[root@mcw7 ~]$ 
[root@mcw7 ~]$ route add -host 0.0.0.0 netmask 0.0.0.0  gw 10.0.0.2
[root@mcw7 ~]$ route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         10.0.0.2        255.255.255.255 UGH   0      0        0 ens33
0.0.0.0         172.16.1.2      0.0.0.0         UG    0      0        0 ens37
10.0.0.0        0.0.0.0         255.255.255.0   U     0      0        0 ens33
169.254.0.0     0.0.0.0         255.255.0.0     U     1002   0        0 ens33
169.254.0.0     0.0.0.0         255.255.0.0     U     1003   0        0 ens37
172.16.1.0      0.0.0.0         255.255.255.0   U     0      0        0 ens37
[root@mcw7 ~]$ 

Delete it again with route del, specifying the destination host and the interface
[root@mcw7 ~]$ route del -host 0.0.0.0 dev ens33
[root@mcw7 ~]$ route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         172.16.1.2      0.0.0.0         UG    0      0        0 ens37
10.0.0.0        0.0.0.0         255.255.255.0   U     0      0        0 ens33
169.254.0.0     0.0.0.0         255.255.0.0     U     1002   0        0 ens33
169.254.0.0     0.0.0.0         255.255.0.0     U     1003   0        0 ens37
172.16.1.0      0.0.0.0         255.255.255.0   U     0      0        0 ens37
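The reason -host kept producing the wrong entry is that a default route is a network route covering 0.0.0.0/0, not a host route. A sketch of the equivalent correct forms (use one or the other; either should create the 0.0.0.0 UG entry via 10.0.0.2):

route add -net 0.0.0.0 netmask 0.0.0.0 gw 10.0.0.2 dev ens33
ip route add default via 10.0.0.2 dev ens33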

Some notes before the real solution

Delete default gateway
[root@mcw7 ~]$ route del -host 0.0.0.0 dev ens33
SIOCDELRT: No such process
[root@mcw7 ~]$ 
[root@mcw7 ~]$ route del -host 0.0.0.0 dev ens37
SIOCDELRT: No such process
[root@mcw7 ~]$ route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         10.0.0.2        0.0.0.0         UG    100    0        0 ens33
0.0.0.0         172.16.1.2      0.0.0.0         UG    101    0        0 ens37
10.0.0.0        0.0.0.0         255.255.255.0   U     100    0        0 ens33
10.244.0.0      0.0.0.0         255.255.255.0   U     0      0        0 cni0
172.16.1.0      0.0.0.0         255.255.255.0   U     100    0        0 ens37
172.17.0.0      0.0.0.0         255.255.0.0     U     0      0        0 docker0
[root@mcw7 ~]$ route del -host 0.0.0.0
SIOCDELRT: No such process
[root@mcw7 ~]$ route del default
[root@mcw7 ~]$ route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         172.16.1.2      0.0.0.0         UG    100    0        0 ens37
10.0.0.0        0.0.0.0         255.255.255.0   U     100    0        0 ens33
10.244.0.0      0.0.0.0         255.255.255.0   U     0      0        0 cni0
172.16.1.0      0.0.0.0         255.255.255.0   U     100    0        0 ens37
172.17.0.0      0.0.0.0         255.255.0.0     U     0      0        0 docker0
[root@mcw7 ~]$ route del default
[root@mcw7 ~]$ route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
10.0.0.0        0.0.0.0         255.255.255.0   U     100    0        0 ens33
10.244.0.0      0.0.0.0         255.255.255.0   U     0      0        0 cni0
172.16.1.0      0.0.0.0         255.255.255.0   U     100    0        0 ens37
172.17.0.0      0.0.0.0         255.255.0.0     U     0      0        0 docker0
[root@mcw7 ~]$ 

Actually solving the problem

Reference: https://www.cnblogs.com/skgoo/p/13559964.html

On mcw8, ping cannot reach the external network; the reply shows the packet coming from the server's own intranet IP.
[root@mcw8 ~]$ ping www.baidu.com
PING www.a.shifen.com (39.156.66.14) 56(84) bytes of data.
From mcw8 (172.16.1.138) icmp_seq=1 Destination Host Unreachable

On mcw9, ping does reach the Internet; the replies come from Baidu's IP.
[root@mcw9 ~]$ ping www.baidu.com
PING www.a.shifen.com (39.156.66.18) 56(84) bytes of data.
64 bytes from 39.156.66.18 (39.156.66.18): icmp_seq=1 ttl=128 time=43.2 ms

Look at mcw9, whose network is normal: there is a default route via the gateway IP 10.0.0.2.
[root@mcw9 ~]$ route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         10.0.0.2        0.0.0.0         UG    100    0        0 ens33
0.0.0.0         172.16.1.2      0.0.0.0         UG    101    0        0 ens37
10.0.0.0        0.0.0.0         255.255.255.0   U     100    0        0 ens33
172.16.1.0      0.0.0.0         255.255.255.0   U     100    0        0 ens37
172.17.0.0      0.0.0.0         255.255.0.0     U     0      0        0 docker0

Look at mcw8, whose network is abnormal: the routing table has no default route via the external gateway 10.0.0.2.
[root@mcw8 ~]$ route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         172.16.1.2      0.0.0.0         UG    0      0        0 ens37
10.0.0.0        0.0.0.0         255.255.255.0   U     0      0        0 ens33
169.254.0.0     0.0.0.0         255.255.0.0     U     1002   0        0 ens33
169.254.0.0     0.0.0.0         255.255.0.0     U     1003   0        0 ens37
172.16.1.0      0.0.0.0         255.255.255.0   U     0      0        0 ens37
172.17.0.0      0.0.0.0         255.255.0.0     U     0      0        0 docker0

Add the default gateway on mcw8. The various routes added in the experiments above all came out wrong: the genmask never became 0.0.0.0. Only the command below creates the entry that is needed:
Destination 0.0.0.0, Gateway 10.0.0.2, Genmask 0.0.0.0, Flags UG, Iface ens33. After that, the Internet is reachable.
[root@mcw8 ~]$ route add default gw 10.0.0.2
[root@mcw8 ~]$ route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         10.0.0.2        0.0.0.0         UG    0      0        0 ens33
0.0.0.0         172.16.1.2      0.0.0.0         UG    0      0        0 ens37
10.0.0.0        0.0.0.0         255.255.255.0   U     0      0        0 ens33
169.254.0.0     0.0.0.0         255.255.0.0     U     1002   0        0 ens33
169.254.0.0     0.0.0.0         255.255.0.0     U     1003   0        0 ens37
172.16.1.0      0.0.0.0         255.255.255.0   U     0      0        0 ens37
172.17.0.0      0.0.0.0         255.255.0.0     U     0      0        0 docker0


Now mcw8 can access the Internet normally
[root@mcw8 ~]$ ping www.baidu.com
PING www.a.shifen.com (39.156.66.14) 56(84) bytes of data.
64 bytes from 39.156.66.14 (39.156.66.14): icmp_seq=1 ttl=128 time=23.5 ms
64 bytes from 39.156.66.14 (39.156.66.14): icmp_seq=2 ttl=128 time=36.7 ms
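A route added with route add default only lasts until the next network restart. A sketch to persist it, assuming the interface is configured through /etc/sysconfig/network-scripts/ifcfg-ens33 on this VM:

echo "GATEWAY=10.0.0.2" >> /etc/sysconfig/network-scripts/ifcfg-ens33
systemctl restart network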

 

When deploying flannel the second time, the network was blocked and the site could not be accessed (the domain cannot be resolved from here), but I had saved the contents of this file earlier, so I could simply paste the saved file and apply it directly, as follows:

https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
[machangwei@mcw7 ~]$ ls
mcw.txt  mm.yml  scripts  tools
[machangwei@mcw7 ~]$ kubectl apply -f mm.yml 
Warning: policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
podsecuritypolicy.policy/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds created

Because I forgot the join command printed by kubeadm init, I wanted to run kubeadm init again and therefore ran kubeadm reset first; after that, all the original containers were gone.

Troubleshooting process, plus exporting and importing iptables rules

[root@mcw7 ~]$ docker ps
CONTAINER ID   IMAGE     COMMAND   CREATED   STATUS    PORTS     NAMES
[root@mcw7 ~]$ docker ps -a
CONTAINER ID   IMAGE     COMMAND   CREATED   STATUS    PORTS     NAMES
[root@mcw7 ~]$ 
After resetting and then reinitializing, the network has no problem
[root@mcw7 ~]$ docker ps|grep kube-flannel
[root@mcw7 ~]$ 

An error is reported when an ordinary user redeploys the network
[machangwei@mcw7 ~]$ ls
mcw.txt  mm.yml  scripts  tools
[machangwei@mcw7 ~]$ kubectl apply -f mm.yml 
Unable to connect to the server: x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")
[machangwei@mcw7 ~]$ 


Looking back at the output of the earlier reset: it warns that the CNI configuration is not cleaned up
[root@mcw7 ~]$  echo y|kubeadm reset
[reset] Reading configuration from the cluster...
[reset] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[reset] WARNING: Changes made to this host by 'kubeadm init' or 'kubeadm join' will be reverted.
[reset] Are you sure you want to proceed? [y/N]: [preflight] Running pre-flight checks
[reset] Stopping the kubelet service
[reset] Unmounting mounted directories in "/var/lib/kubelet"
[reset] Deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
[reset] Deleting contents of stateful directories: [/var/lib/etcd /var/lib/kubelet /var/lib/dockershim /var/run/kubernetes /var/lib/cni]

The reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d

The reset process does not reset or clean up iptables rules or IPVS tables.
If you wish to reset iptables, you must do so manually by using the "iptables" command.

If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
to reset your system's IPVS tables.

The reset process does not clean your kubeconfig files and you must remove them manually.
Please, check the contents of the $HOME/.kube/config file.

Moving the CNI config directory away does not solve it, and ipvsadm is not even installed
[root@mcw7 ~]$ mv /etc/cni/net.d /etc/cni/net.dbak
[root@mcw7 ~]$ ipvsadm --clear
-bash: ipvsadm: command not found


I searched around a lot and could not figure out how to clean these rules up
[root@mcw7 ~]$ iptables -L
Chain INPUT (policy ACCEPT)
target     prot opt source               destination         
KUBE-NODEPORTS  all  --  anywhere             anywhere             /* kubernetes health check service ports */
KUBE-EXTERNAL-SERVICES  all  --  anywhere             anywhere             ctstate NEW /* kubernetes externally-visible service portals */
KUBE-FIREWALL  all  --  anywhere             anywhere            
ACCEPT     all  --  anywhere             anywhere             ctstate RELATED,ESTABLISHED
ACCEPT     all  --  anywhere             anywhere            
INPUT_direct  all  --  anywhere             anywhere            
INPUT_ZONES_SOURCE  all  --  anywhere             anywhere    


Since the rules cannot easily be cleaned by hand, export a clean rule set from another machine and import it here.
Export:
[root@mcw9 ~]$ iptables-save > /root/iptables_beifen.txt
[root@mcw9 ~]$ cat iptables_beifen.txt
# Generated by iptables-save v1.4.21 on Fri Jan  7 23:05:39 2022
*filter
:INPUT ACCEPT [1676:135745]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [896:67997]
:DOCKER - [0:0]
:DOCKER-ISOLATION-STAGE-1 - [0:0]
:DOCKER-ISOLATION-STAGE-2 - [0:0]
:DOCKER-USER - [0:0]
-A FORWARD -j DOCKER-USER
-A FORWARD -j DOCKER-ISOLATION-STAGE-1
-A FORWARD -o docker0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -o docker0 -j DOCKER
-A FORWARD -i docker0 ! -o docker0 -j ACCEPT
-A FORWARD -i docker0 -o docker0 -j ACCEPT
-A DOCKER-ISOLATION-STAGE-1 -i docker0 ! -o docker0 -j DOCKER-ISOLATION-STAGE-2
-A DOCKER-ISOLATION-STAGE-1 -j RETURN
-A DOCKER-ISOLATION-STAGE-2 -o docker0 -j DROP
-A DOCKER-ISOLATION-STAGE-2 -j RETURN
-A DOCKER-USER -j RETURN
COMMIT
# Completed on Fri Jan  7 23:05:39 2022
# Generated by iptables-save v1.4.21 on Fri Jan  7 23:05:39 2022
*nat
:PREROUTING ACCEPT [32:2470]
:INPUT ACCEPT [32:2470]
:OUTPUT ACCEPT [8:528]
:POSTROUTING ACCEPT [8:528]
:DOCKER - [0:0]
-A PREROUTING -m addrtype --dst-type LOCAL -j DOCKER
-A OUTPUT ! -d 127.0.0.0/8 -m addrtype --dst-type LOCAL -j DOCKER
-A POSTROUTING -s 172.17.0.0/16 ! -o docker0 -j MASQUERADE
-A DOCKER -i docker0 -j RETURN
COMMIT
# Completed on Fri Jan  7 23:05:39 2022
[root@mcw9 ~]$ cat 


Import the rules on mcw7. It reports an error; the problem is with the file's first line (the comment line got truncated).
[root@mcw7 ~]$ iptables-restore</root/daoru.txt 
iptables-restore: line 1 failed
[root@mcw7 ~]$ cat daoru.txt  #command
ptables-save v1.4.21 on Fri Jan  7 23:05:39 2022
*filter
:INPUT ACCEPT [1676:135745]




After importing successfully, the firewall rules are consistent with the exported ones
https://blog.csdn.net/jake_tian/article/details/102548306
[root@mcw7 ~]$ iptables-restore</root/daoru.txt 
[root@mcw7 ~]$ iptables -L
Chain INPUT (policy ACCEPT)
target     prot opt source               destination         

Chain FORWARD (policy ACCEPT)
target     prot opt source               destination         
DOCKER-USER  all  --  anywhere             anywhere            
DOCKER-ISOLATION-STAGE-1  all  --  anywhere             anywhere            
ACCEPT     all  --  anywhere             anywhere             ctstate RELATED,ESTABLISHED
DOCKER     all  --  anywhere             anywhere            
ACCEPT     all  --  anywhere             anywhere            
ACCEPT     all  --  anywhere             anywhere            

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination         

Chain DOCKER (1 references)
target     prot opt source               destination         

Chain DOCKER-ISOLATION-STAGE-1 (1 references)
target     prot opt source               destination         
DOCKER-ISOLATION-STAGE-2  all  --  anywhere             anywhere            
RETURN     all  --  anywhere             anywhere            

Chain DOCKER-ISOLATION-STAGE-2 (1 references)
target     prot opt source               destination         
DROP       all  --  anywhere             anywhere            
RETURN     all  --  anywhere             anywhere            

Chain DOCKER-USER (1 references)
target     prot opt source               destination         
RETURN     all  --  anywhere             anywhere         

============
Do it again and give it a try

retry 
[root@mcw7 ~]$ echo y|kubeadm reset

Check the firewall again; there seems to be no change after the reset
[root@mcw7 ~]$ iptables -L
Chain INPUT (policy ACCEPT)
target     prot opt source               destination         

Chain FORWARD (policy ACCEPT)
target     prot opt source               destination         
DOCKER-USER  all  --  anywhere             anywhere            
DOCKER-ISOLATION-STAGE-1  all  --  anywhere             anywhere            
ACCEPT     all  --  anywhere             anywhere             ctstate RELATED,ESTABLISHED
DOCKER     all  --  anywhere             anywhere            
ACCEPT     all  --  anywhere             anywhere            
ACCEPT     all  --  anywhere             anywhere            

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination         

Chain DOCKER (1 references)
target     prot opt source               destination         

Chain DOCKER-ISOLATION-STAGE-1 (1 references)
target     prot opt source               destination         
DOCKER-ISOLATION-STAGE-2  all  --  anywhere             anywhere            
RETURN     all  --  anywhere             anywhere            

Chain DOCKER-ISOLATION-STAGE-2 (1 references)
target     prot opt source               destination         
DROP       all  --  anywhere             anywhere            
RETURN     all  --  anywhere             anywhere            

Chain DOCKER-USER (1 references)
target     prot opt source               destination         
RETURN     all  --  anywhere             anywhere            
[root@mcw7 ~]$ 


After reinitialization
[root@mcw7 ~]$ kubeadm init --apiserver-advertise-address 10.0.0.137 --pod-network-cidr=10.244.0.0/24 --image-repository=registry.aliyuncs.com/google_containers
kubeadm join 10.0.0.137:6443 --token 1e2kkw.ivkth6zzkbx72z4u \
    --discovery-token-ca-cert-hash sha256:fb83146082fb33ca2bff56a525c1e575b5f2587ab1be566f9dd3d7e8d7845462
    
[root@mcw7 ~]$ iptables -L
Chain INPUT (policy ACCEPT)
target     prot opt source               destination         
KUBE-NODEPORTS  all  --  anywhere             anywhere             /* kubernetes health check service ports */
KUBE-EXTERNAL-SERVICES  all  --  anywhere             anywhere             ctstate NEW /* kubernetes externally-visible service portals */
KUBE-FIREWALL  all  --  anywhere             anywhere            

Chain FORWARD (policy ACCEPT)
target     prot opt source               destination         
KUBE-FORWARD  all  --  anywhere             anywhere             /* kubernetes forwarding rules */
KUBE-SERVICES  all  --  anywhere             anywhere             ctstate NEW /* kubernetes service portals */
KUBE-EXTERNAL-SERVICES  all  --  anywhere             anywhere             ctstate NEW /* kubernetes externally-visible service portals */
DOCKER-USER  all  --  anywhere             anywhere            
DOCKER-ISOLATION-STAGE-1  all  --  anywhere             anywhere            
ACCEPT     all  --  anywhere             anywhere             ctstate RELATED,ESTABLISHED
DOCKER     all  --  anywhere             anywhere            
ACCEPT     all  --  anywhere             anywhere            
ACCEPT     all  --  anywhere             anywhere            

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination         
KUBE-SERVICES  all  --  anywhere             anywhere             ctstate NEW /* kubernetes service portals */
KUBE-FIREWALL  all  --  anywhere             anywhere            

Chain DOCKER (1 references)
target     prot opt source               destination         

Chain DOCKER-ISOLATION-STAGE-1 (1 references)
target     prot opt source               destination         
DOCKER-ISOLATION-STAGE-2  all  --  anywhere             anywhere            
RETURN     all  --  anywhere             anywhere            

Chain DOCKER-ISOLATION-STAGE-2 (1 references)
target     prot opt source               destination         
DROP       all  --  anywhere             anywhere            
RETURN     all  --  anywhere             anywhere            

Chain DOCKER-USER (1 references)
target     prot opt source               destination         
RETURN     all  --  anywhere             anywhere            

Chain KUBE-EXTERNAL-SERVICES (2 references)
target     prot opt source               destination         

Chain KUBE-FIREWALL (2 references)
target     prot opt source               destination         
DROP       all  --  anywhere             anywhere             /* kubernetes firewall for dropping marked packets */ mark match 0x8000/0x8000


You may not remember every deployment step or every command you ran, but with these previous notes you can redo it quickly and be confident it is right.




It turns out that this problem has nothing to do with the firewall.
[machangwei@mcw7 ~]$ kubectl apply -f mm.yml 
Unable to connect to the server: x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")
[machangwei@mcw7 ~]$ 
[machangwei@mcw7 ~]$ kubectl get nodes
Unable to connect to the server: x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

 

The real solution:

The procedure is as follows: reconfigure kubectl for the ordinary user, because the previous configuration became invalid after the re-init

[machangwei@mcw7 ~]$ ls -a
.  ..  .bash_history  .bash_logout  .bash_profile  .bashrc  .kube  mcw.txt  mm.yml  scripts  tools  .viminfo
[machangwei@mcw7 ~]$ mv .kube kubebak
[machangwei@mcw7 ~]$ mkdir -p $HOME/.kube
[machangwei@mcw7 ~]$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[machangwei@mcw7 ~]$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
[machangwei@mcw7 ~]$ kubectl get node
NAME   STATUS     ROLES                  AGE   VERSION
mcw7   NotReady   control-plane,master   10m   v1.23.1

Recreate the network

[machangwei@mcw7 ~]$ ls 
kubebak  mcw.txt  mm.yml  scripts  tools
[machangwei@mcw7 ~]$ kubectl apply -f mm.yml 
Warning: policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
podsecuritypolicy.policy/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds created

At this point, run iptables -L again to view the firewall

[root@mcw7 ~]$ iptables -L
Chain INPUT (policy ACCEPT)
target     prot opt source               destination         
KUBE-NODEPORTS  all  --  anywhere             anywhere             /* kubernetes health check service ports */
KUBE-EXTERNAL-SERVICES  all  --  anywhere             anywhere             ctstate NEW /* kubernetes externally-visible service portals */
KUBE-FIREWALL  all  --  anywhere             anywhere            

Chain FORWARD (policy ACCEPT)
target     prot opt source               destination         
KUBE-FORWARD  all  --  anywhere             anywhere             /* kubernetes forwarding rules */
KUBE-SERVICES  all  --  anywhere             anywhere             ctstate NEW /* kubernetes service portals */
KUBE-EXTERNAL-SERVICES  all  --  anywhere             anywhere             ctstate NEW /* kubernetes externally-visible service portals */
DOCKER-USER  all  --  anywhere             anywhere            
DOCKER-ISOLATION-STAGE-1  all  --  anywhere             anywhere            
ACCEPT     all  --  anywhere             anywhere             ctstate RELATED,ESTABLISHED
DOCKER     all  --  anywhere             anywhere            
ACCEPT     all  --  anywhere             anywhere            
ACCEPT     all  --  anywhere             anywhere            


ACCEPT     all  --  mcw7/16              anywhere            
ACCEPT     all  --  anywhere             mcw7/16             

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination         
KUBE-SERVICES  all  --  anywhere             anywhere             ctstate NEW /* kubernetes service portals */
KUBE-FIREWALL  all  --  anywhere             anywhere            

Chain DOCKER (1 references)
target     prot opt source               destination         

Chain DOCKER-ISOLATION-STAGE-1 (1 references)
target     prot opt source               destination         
DOCKER-ISOLATION-STAGE-2  all  --  anywhere             anywhere            
RETURN     all  --  anywhere             anywhere            

Chain DOCKER-ISOLATION-STAGE-2 (1 references)
target     prot opt source               destination         
DROP       all  --  anywhere             anywhere            
RETURN     all  --  anywhere             anywhere            

Chain DOCKER-USER (1 references)
target     prot opt source               destination         
RETURN     all  --  anywhere             anywhere            

Chain KUBE-EXTERNAL-SERVICES (2 references)
target     prot opt source               destination         

Chain KUBE-FIREWALL (2 references)
target     prot opt source               destination         
DROP       all  --  anywhere             anywhere             /* kubernetes firewall for dropping marked packets */ mark match 0x8000/0x8000
^C
      

Looking at the full rules with iptables-save; the following is the rule set that should be in place
[root@mcw7 ~]$ iptables-save 
# Generated by iptables-save v1.4.21 on Sat Jan  8 07:35:11 2022
*nat
:PREROUTING ACCEPT [372:18270]
:INPUT ACCEPT [0:0]
:OUTPUT ACCEPT [239:14302]
:POSTROUTING ACCEPT [239:14302]
:DOCKER - [0:0]
:KUBE-KUBELET-CANARY - [0:0]
:KUBE-MARK-DROP - [0:0]
:KUBE-MARK-MASQ - [0:0]
:KUBE-NODEPORTS - [0:0]
:KUBE-POSTROUTING - [0:0]
:KUBE-PROXY-CANARY - [0:0]
:KUBE-SEP-6E7XQMQ4RAYOWTTM - [0:0]
:KUBE-SEP-IT2ZTR26TO4XFPTO - [0:0]
:KUBE-SEP-N4G2XR5TDX7PQE7P - [0:0]
:KUBE-SEP-XOVE7RWZIDAMLO2S - [0:0]
:KUBE-SEP-YIL6JZP7A3QYXJU2 - [0:0]
:KUBE-SEP-ZP3FB6NMPNCO4VBJ - [0:0]
:KUBE-SEP-ZXMNUKOKXUTL2MK2 - [0:0]
:KUBE-SERVICES - [0:0]
:KUBE-SVC-ERIFXISQEP7F7OF4 - [0:0]
:KUBE-SVC-JD5MR3NA4I4DYORP - [0:0]
:KUBE-SVC-NPX46M4PTMTKRN6Y - [0:0]
:KUBE-SVC-TCOU7JCQXEZGVUNU - [0:0]
-A PREROUTING -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A PREROUTING -m addrtype --dst-type LOCAL -j DOCKER
-A OUTPUT -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A OUTPUT ! -d 127.0.0.0/8 -m addrtype --dst-type LOCAL -j DOCKER
-A POSTROUTING -m comment --comment "kubernetes postrouting rules" -j KUBE-POSTROUTING
-A POSTROUTING -s 172.17.0.0/16 ! -o docker0 -j MASQUERADE
-A POSTROUTING -s 10.244.0.0/16 -d 10.244.0.0/16 -j RETURN
-A POSTROUTING -s 10.244.0.0/16 ! -d 224.0.0.0/4 -j MASQUERADE
-A POSTROUTING ! -s 10.244.0.0/16 -d 10.244.0.0/24 -j RETURN
-A POSTROUTING ! -s 10.244.0.0/16 -d 10.244.0.0/16 -j MASQUERADE
-A DOCKER -i docker0 -j RETURN
-A KUBE-MARK-DROP -j MARK --set-xmark 0x8000/0x8000
-A KUBE-MARK-MASQ -j MARK --set-xmark 0x4000/0x4000
-A KUBE-POSTROUTING -m mark ! --mark 0x4000/0x4000 -j RETURN
-A KUBE-POSTROUTING -j MARK --set-xmark 0x4000/0x0
-A KUBE-POSTROUTING -m comment --comment "kubernetes service traffic requiring SNAT" -j MASQUERADE
-A KUBE-SEP-6E7XQMQ4RAYOWTTM -s 10.244.0.3/32 -m comment --comment "kube-system/kube-dns:dns" -j KUBE-MARK-MASQ
-A KUBE-SEP-6E7XQMQ4RAYOWTTM -p udp -m comment --comment "kube-system/kube-dns:dns" -m udp -j DNAT --to-destination 10.244.0.3:53
-A KUBE-SEP-IT2ZTR26TO4XFPTO -s 10.244.0.2/32 -m comment --comment "kube-system/kube-dns:dns-tcp" -j KUBE-MARK-MASQ
-A KUBE-SEP-IT2ZTR26TO4XFPTO -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp" -m tcp -j DNAT --to-destination 10.244.0.2:53
-A KUBE-SEP-N4G2XR5TDX7PQE7P -s 10.244.0.2/32 -m comment --comment "kube-system/kube-dns:metrics" -j KUBE-MARK-MASQ
-A KUBE-SEP-N4G2XR5TDX7PQE7P -p tcp -m comment --comment "kube-system/kube-dns:metrics" -m tcp -j DNAT --to-destination 10.244.0.2:9153
-A KUBE-SEP-XOVE7RWZIDAMLO2S -s 10.0.0.137/32 -m comment --comment "default/kubernetes:https" -j KUBE-MARK-MASQ
-A KUBE-SEP-XOVE7RWZIDAMLO2S -p tcp -m comment --comment "default/kubernetes:https" -m tcp -j DNAT --to-destination 10.0.0.137:6443
-A KUBE-SEP-YIL6JZP7A3QYXJU2 -s 10.244.0.2/32 -m comment --comment "kube-system/kube-dns:dns" -j KUBE-MARK-MASQ
-A KUBE-SEP-YIL6JZP7A3QYXJU2 -p udp -m comment --comment "kube-system/kube-dns:dns" -m udp -j DNAT --to-destination 10.244.0.2:53
-A KUBE-SEP-ZP3FB6NMPNCO4VBJ -s 10.244.0.3/32 -m comment --comment "kube-system/kube-dns:metrics" -j KUBE-MARK-MASQ
-A KUBE-SEP-ZP3FB6NMPNCO4VBJ -p tcp -m comment --comment "kube-system/kube-dns:metrics" -m tcp -j DNAT --to-destination 10.244.0.3:9153
-A KUBE-SEP-ZXMNUKOKXUTL2MK2 -s 10.244.0.3/32 -m comment --comment "kube-system/kube-dns:dns-tcp" -j KUBE-MARK-MASQ
-A KUBE-SEP-ZXMNUKOKXUTL2MK2 -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp" -m tcp -j DNAT --to-destination 10.244.0.3:53
-A KUBE-SERVICES -d 10.96.0.1/32 -p tcp -m comment --comment "default/kubernetes:https cluster IP" -m tcp --dport 443 -j KUBE-SVC-NPX46M4PTMTKRN6Y
-A KUBE-SERVICES -d 10.96.0.10/32 -p udp -m comment --comment "kube-system/kube-dns:dns cluster IP" -m udp --dport 53 -j KUBE-SVC-TCOU7JCQXEZGVUNU
-A KUBE-SERVICES -d 10.96.0.10/32 -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp cluster IP" -m tcp --dport 53 -j KUBE-SVC-ERIFXISQEP7F7OF4
-A KUBE-SERVICES -d 10.96.0.10/32 -p tcp -m comment --comment "kube-system/kube-dns:metrics cluster IP" -m tcp --dport 9153 -j KUBE-SVC-JD5MR3NA4I4DYORP
-A KUBE-SERVICES -m comment --comment "kubernetes service nodeports; NOTE: this must be the last rule in this chain" -m addrtype --dst-type LOCAL -j KUBE-NODEPORTS
-A KUBE-SVC-ERIFXISQEP7F7OF4 ! -s 10.244.0.0/24 -d 10.96.0.10/32 -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp cluster IP" -m tcp --dport 53 -j KUBE-MARK-MASQ
-A KUBE-SVC-ERIFXISQEP7F7OF4 -m comment --comment "kube-system/kube-dns:dns-tcp" -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-IT2ZTR26TO4XFPTO
-A KUBE-SVC-ERIFXISQEP7F7OF4 -m comment --comment "kube-system/kube-dns:dns-tcp" -j KUBE-SEP-ZXMNUKOKXUTL2MK2
-A KUBE-SVC-JD5MR3NA4I4DYORP ! -s 10.244.0.0/24 -d 10.96.0.10/32 -p tcp -m comment --comment "kube-system/kube-dns:metrics cluster IP" -m tcp --dport 9153 -j KUBE-MARK-MASQ
-A KUBE-SVC-JD5MR3NA4I4DYORP -m comment --comment "kube-system/kube-dns:metrics" -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-N4G2XR5TDX7PQE7P
-A KUBE-SVC-JD5MR3NA4I4DYORP -m comment --comment "kube-system/kube-dns:metrics" -j KUBE-SEP-ZP3FB6NMPNCO4VBJ
-A KUBE-SVC-NPX46M4PTMTKRN6Y ! -s 10.244.0.0/24 -d 10.96.0.1/32 -p tcp -m comment --comment "default/kubernetes:https cluster IP" -m tcp --dport 443 -j KUBE-MARK-MASQ
-A KUBE-SVC-NPX46M4PTMTKRN6Y -m comment --comment "default/kubernetes:https" -j KUBE-SEP-XOVE7RWZIDAMLO2S
-A KUBE-SVC-TCOU7JCQXEZGVUNU ! -s 10.244.0.0/24 -d 10.96.0.10/32 -p udp -m comment --comment "kube-system/kube-dns:dns cluster IP" -m udp --dport 53 -j KUBE-MARK-MASQ
-A KUBE-SVC-TCOU7JCQXEZGVUNU -m comment --comment "kube-system/kube-dns:dns" -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-YIL6JZP7A3QYXJU2
-A KUBE-SVC-TCOU7JCQXEZGVUNU -m comment --comment "kube-system/kube-dns:dns" -j KUBE-SEP-6E7XQMQ4RAYOWTTM
COMMIT
# Completed on Sat Jan  8 07:35:11 2022
# Generated by iptables-save v1.4.21 on Sat Jan  8 07:35:11 2022
*mangle
:PREROUTING ACCEPT [376111:67516258]
:INPUT ACCEPT [369347:67204288]
:FORWARD ACCEPT [6764:311970]
:OUTPUT ACCEPT [369958:67425919]
:POSTROUTING ACCEPT [371215:67488646]
:FORWARD_direct - [0:0]
:INPUT_direct - [0:0]
:KUBE-KUBELET-CANARY - [0:0]
:KUBE-PROXY-CANARY - [0:0]
:OUTPUT_direct - [0:0]
:POSTROUTING_direct - [0:0]
:PREROUTING_ZONES - [0:0]
:PREROUTING_ZONES_SOURCE - [0:0]
:PREROUTING_direct - [0:0]
:PRE_docker - [0:0]
:PRE_docker_allow - [0:0]
:PRE_docker_deny - [0:0]
:PRE_docker_log - [0:0]
:PRE_public - [0:0]
:PRE_public_allow - [0:0]
:PRE_public_deny - [0:0]
:PRE_public_log - [0:0]
-A PREROUTING -j PREROUTING_direct
-A PREROUTING -j PREROUTING_ZONES_SOURCE
-A PREROUTING -j PREROUTING_ZONES
-A INPUT -j INPUT_direct
-A FORWARD -j FORWARD_direct
-A OUTPUT -j OUTPUT_direct
-A POSTROUTING -j POSTROUTING_direct
-A PREROUTING_ZONES -i ens33 -g PRE_public
-A PREROUTING_ZONES -i docker0 -j PRE_docker
-A PREROUTING_ZONES -i ens37 -g PRE_public
-A PREROUTING_ZONES -g PRE_public
-A PRE_docker -j PRE_docker_log
-A PRE_docker -j PRE_docker_deny
-A PRE_docker -j PRE_docker_allow
-A PRE_public -j PRE_public_log
-A PRE_public -j PRE_public_deny
-A PRE_public -j PRE_public_allow
COMMIT
# Completed on Sat Jan  8 07:35:11 2022
# Generated by iptables-save v1.4.21 on Sat Jan  8 07:35:11 2022
*security
:INPUT ACCEPT [591940:133664590]
:FORWARD ACCEPT [1257:62727]
:OUTPUT ACCEPT [596315:107591486]
:FORWARD_direct - [0:0]
:INPUT_direct - [0:0]
:OUTPUT_direct - [0:0]
-A INPUT -j INPUT_direct
-A FORWARD -j FORWARD_direct
-A OUTPUT -j OUTPUT_direct
COMMIT
# Completed on Sat Jan  8 07:35:11 2022
# Generated by iptables-save v1.4.21 on Sat Jan  8 07:35:11 2022
*raw
:PREROUTING ACCEPT [376111:67516258]
:OUTPUT ACCEPT [369958:67425919]
:OUTPUT_direct - [0:0]
:PREROUTING_ZONES - [0:0]
:PREROUTING_ZONES_SOURCE - [0:0]
:PREROUTING_direct - [0:0]
:PRE_docker - [0:0]
:PRE_docker_allow - [0:0]
:PRE_docker_deny - [0:0]
:PRE_docker_log - [0:0]
:PRE_public - [0:0]
:PRE_public_allow - [0:0]
:PRE_public_deny - [0:0]
:PRE_public_log - [0:0]
-A PREROUTING -j PREROUTING_direct
-A PREROUTING -j PREROUTING_ZONES_SOURCE
-A PREROUTING -j PREROUTING_ZONES
-A OUTPUT -j OUTPUT_direct
-A PREROUTING_ZONES -i ens33 -g PRE_public
-A PREROUTING_ZONES -i docker0 -j PRE_docker
-A PREROUTING_ZONES -i ens37 -g PRE_public
-A PREROUTING_ZONES -g PRE_public
-A PRE_docker -j PRE_docker_log
-A PRE_docker -j PRE_docker_deny
-A PRE_docker -j PRE_docker_allow
-A PRE_public -j PRE_public_log
-A PRE_public -j PRE_public_deny
-A PRE_public -j PRE_public_allow
COMMIT
# Completed on Sat Jan  8 07:35:11 2022
# Generated by iptables-save v1.4.21 on Sat Jan  8 07:35:11 2022
*filter
:INPUT ACCEPT [14882:2406600]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [15254:2447569]
:DOCKER - [0:0]
:DOCKER-ISOLATION-STAGE-1 - [0:0]
:DOCKER-ISOLATION-STAGE-2 - [0:0]
:DOCKER-USER - [0:0]
:KUBE-EXTERNAL-SERVICES - [0:0]
:KUBE-FIREWALL - [0:0]
:KUBE-FORWARD - [0:0]
:KUBE-KUBELET-CANARY - [0:0]
:KUBE-NODEPORTS - [0:0]
:KUBE-PROXY-CANARY - [0:0]
:KUBE-SERVICES - [0:0]
-A INPUT -m comment --comment "kubernetes health check service ports" -j KUBE-NODEPORTS
-A INPUT -m conntrack --ctstate NEW -m comment --comment "kubernetes externally-visible service portals" -j KUBE-EXTERNAL-SERVICES
-A INPUT -j KUBE-FIREWALL
-A FORWARD -m comment --comment "kubernetes forwarding rules" -j KUBE-FORWARD
-A FORWARD -m conntrack --ctstate NEW -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A FORWARD -m conntrack --ctstate NEW -m comment --comment "kubernetes externally-visible service portals" -j KUBE-EXTERNAL-SERVICES
-A FORWARD -j DOCKER-USER
-A FORWARD -j DOCKER-ISOLATION-STAGE-1
-A FORWARD -o docker0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -o docker0 -j DOCKER
-A FORWARD -i docker0 ! -o docker0 -j ACCEPT
-A FORWARD -i docker0 -o docker0 -j ACCEPT
-A FORWARD -s 10.244.0.0/16 -j ACCEPT
-A FORWARD -d 10.244.0.0/16 -j ACCEPT
-A OUTPUT -m conntrack --ctstate NEW -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A OUTPUT -j KUBE-FIREWALL
-A DOCKER-ISOLATION-STAGE-1 -i docker0 ! -o docker0 -j DOCKER-ISOLATION-STAGE-2
-A DOCKER-ISOLATION-STAGE-1 -j RETURN
-A DOCKER-ISOLATION-STAGE-2 -o docker0 -j DROP
-A DOCKER-ISOLATION-STAGE-2 -j RETURN
-A DOCKER-USER -j RETURN
-A KUBE-FIREWALL -m comment --comment "kubernetes firewall for dropping marked packets" -m mark --mark 0x8000/0x8000 -j DROP
-A KUBE-FIREWALL ! -s 127.0.0.0/8 -d 127.0.0.0/8 -m comment --comment "block incoming localnet connections" -m conntrack ! --ctstate RELATED,ESTABLISHED,DNAT -j DROP
-A KUBE-FORWARD -m conntrack --ctstate INVALID -j DROP
-A KUBE-FORWARD -m comment --comment "kubernetes forwarding rules" -m mark --mark 0x4000/0x4000 -j ACCEPT
-A KUBE-FORWARD -m comment --comment "kubernetes forwarding conntrack rule" -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
COMMIT
# Completed on Sat Jan  8 07:35:11 2022
[root@mcw7 ~]$ 

Back to the forgotten join command: how to regenerate it, and whether regenerating it affects the nodes that have already joined the cluster
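The forgotten join command can be regenerated on the master at any time; this only creates a new token and does not affect nodes that have already joined (a sketch):

kubeadm token list
kubeadm token create --print-join-command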

 

Solving the "/proc/sys/net/bridge/bridge-nf-call-iptables contents are not set to 1" problem

When re-joining the node there are warning messages, and they deserve attention. For example, the docker service is not enabled at boot; if the virtual machine is not configured to start docker automatically and the VM reboots, the containers will go down.

[root@mcw8 ~]$ kubeadm join 10.0.0.137:6443 --token 1e2kkw.ivkth6zzkbx72z4u \
> --discovery-token-ca-cert-hash sha256:fb83146082fb33ca2bff56a525c1e575b5f2587ab1be566f9dd3d7e8d7845462
[preflight] Running pre-flight checks
    [WARNING Service-Docker]: docker service is not enabled, please run 'systemctl enable docker.service'
    [WARNING Hostname]: hostname "mcw8" could not be reached
    [WARNING Hostname]: hostname "mcw8": lookup mcw8 on 10.0.0.2:53: no such host
error execution phase preflight: [preflight] Some fatal errors occurred:
    [ERROR FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables contents are not set to 1
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher


Solution
[root@mcw8 ~]$ echo "1" >/proc/sys/net/bridge/bridge-nf-call-iptables
[root@mcw8 ~]$ kubeadm join 10.0.0.137:6443 --token 1e2kkw.ivkth6zzkbx72z4u --discovery-token-ca-cert-hash sha256:fb83146082fb33ca2bff56a525c1e575b5f2587ab1be566f9dd3d7e8d7845462[preflight] Running pre-flight checks
    [WARNING Service-Docker]: docker service is not enabled, please run 'systemctl enable docker.service'
    [WARNING Hostname]: hostname "mcw8" could not be reached
    [WARNING Hostname]: hostname "mcw8": lookup mcw8 on 10.0.0.2:53: no such host
^C
[root@mcw8 ~]$ echo y|kubeadm reset
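The echo into /proc only lasts until the next reboot. A sketch to make the setting persistent, assuming the usual sysctl.d layout on CentOS 7:

cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF
sysctl --system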

After the above, the node still cannot join the mcw7 master. I remembered that the two worker nodes mcw8 and mcw9 had never had the k8s network deployed, so deploy it now and try again: configure kubectl for the ordinary user, then copy the network manifest over

[machangwei@mcw7 ~]$ scp mm.yml  10.0.0.138:/home/machangwei/
machangwei@10.0.0.138's password: 
mm.yml                                                                                                                                          100% 5412     8.5MB/s   00:00    
[machangwei@mcw7 ~]$ scp mm.yml  10.0.0.139:/home/machangwei/
machangwei@10.0.0.139's password: 
mm.yml     


However, the worker node does not get kubectl configured for the ordinary user, because the admin.conf file is missing there
[root@mcw8 ~]$ su - machangwei 
[machangwei@mcw8 ~]$ mkdir -p $HOME/.kube
[machangwei@mcw8 ~]$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
cp: cannot stat '/etc/kubernetes/admin.conf': No such file or directory
[machangwei@mcw8 ~]$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
chown: cannot access '/home/machangwei/.kube/config': No such file or directory


Joining the cluster keeps hanging. Add the --v=2 parameter to print details
[root@mcw8 ~]$ kubeadm join 10.0.0.137:6443 --token 1e2kkw.ivkth6zzkbx72z4u --discovery-token-ca-cert-hash sha256:fb83146082fb33ca2bff56a525c1e575b5f2587ab1be566f9dd3d7e8d7845462   --v=2
I0108 00:54:46.002913   32058 join.go:413] [preflight] found NodeName empty; using OS hostname as NodeName
I0108 00:54:46.068584   32058 initconfiguration.go:117] detected and using CRI socket: /var/run/dockershim.sock
[preflight] Running pre-flight checks
I0108 00:54:46.068919   32058 preflight.go:92] [preflight] Running general checks

Error message found
I0108 00:54:46.849380   32058 checks.go:620] validating kubelet version
I0108 00:54:46.927861   32058 checks.go:133] validating if the "kubelet" service is enabled and active
I0108 00:54:46.938910   32058 checks.go:206] validating availability of port 10250
I0108 00:54:46.960668   32058 checks.go:283] validating the existence of file /etc/kubernetes/pki/ca.crt
I0108 00:54:46.960707   32058 checks.go:433] validating if the connectivity type is via proxy or direct
I0108 00:54:46.960795   32058 join.go:530] [preflight] Discovering cluster-info
I0108 00:54:46.960846   32058 token.go:80] [discovery] Created cluster-info discovery client, requesting info from "10.0.0.137:6443"
I0108 00:54:46.997909   32058 token.go:118] [discovery] Requesting info from "10.0.0.137:6443" again to validate TLS against the pinned public key
I0108 00:54:47.003864   32058 token.go:217] [discovery] Failed to request cluster-info, will try again: Get "https://10.0.0.137:6443/api/v1/namespaces/kube-public/configmaps/cluster-info?timeout=10s": x509: certificate has expired or is not yet valid: current time 2022-01-08T00:54:47+08:00 is before 2022-01-07T23:18:44Z

The clocks are inconsistent. I set mcw8 back to the earlier time; mcw7 had also been changed to before 2022-01-07T23:18:44Z. Then the error turned into something else.
[root@mcw8 ~]$ date -s "2022-1-7 23:10:00"
Fri Jan  7 23:10:00 CST 2022
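Setting the date by hand is fragile; syncing every node against the same NTP source is more reliable (a sketch, assuming ntpdate can be installed on these CentOS 7 VMs; chronyd would work just as well):

yum install -y ntpdate
ntpdate ntp.aliyun.com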

According to the above, the error becomes net/http: request canceled (Client.Timeout exceeded while awaiting headers)


The error becomes as follows
ter-info?timeout=10s": net/http: request canceled (Client.Timeout exceeded while awaiting headers)
I0108 01:27:42.577217 32662 token.go:217] [discovery] Failed to request cluster-info, will try again: Get "https://10.0.0.137:6443/api/v1/namespaces/kube-public/configmaps/cluster-info?timeout=10s": net/http: request canceled (Client.Timeout exceeded while awaiting headers)

 

The k8s system containers keep failing to start and stopping, with the error below. I deleted all the stopped containers several times; when adding the node to the cluster again, the request is rejected because the container name is already in use:

rpc error: code = Unknown desc = failed to create a sandbox for pod \"coredns-6d8c4cb4d-8l99d\": Error response from daemon: Conflict. The container name \"/k8s_POD_coredns-6d8c4cb4d-8l99d_kube-system_e030f426-3e8e-46fe-9e05-6c42a332f650_2\" is already in use by container \"b2dbcdd338ab4b2c35d5386e50e7e116fd41f26a0053a84ec3f1329e09d454a4\". You have to remove (or rename) that container to be able to reuse that name." pod="kube-system/coredns-6d8c4cb4d-8l99d"
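The conflict means a leftover pause container with that name still exists on the node. A sketch for clearing all exited containers on the node (assuming none of the stopped containers on this host are still needed):

docker ps -aq -f status=exited | xargs -r docker rm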

 

[root@mcw8 ~]$ docker ps -a
CONTAINER ID   IMAGE                                                COMMAND                  CREATED              STATUS                          PORTS     NAMES
2edd274fd7b5   e6ea68648f0c                                         "/opt/bin/flanneld -..."   7 seconds ago        Exited (1) 5 seconds ago                  k8s_kube-flannel_kube-flannel-ds-tvz9q_kube-system_e62fa7b1-1cce-42dc-91d8-cdbd2bfda0f3_2
5b1715be012d   quay.io/coreos/flannel                               "cp -f /etc/kube-fla..."   28 seconds ago       Exited (0) 27 seconds ago                 k8s_install-cni_kube-flannel-ds-tvz9q_kube-system_e62fa7b1-1cce-42dc-91d8-cdbd2bfda0f3_0
7beb96ed15be   rancher/mirrored-flannelcni-flannel-cni-plugin       "cp -f /flannel /opt..."   About a minute ago   Exited (0) About a minute ago             k8s_install-cni-plugin_kube-flannel-ds-tvz9q_kube-system_e62fa7b1-1cce-42dc-91d8-cdbd2bfda0f3_0
4e998fdfce3e   registry.aliyuncs.com/google_containers/kube-proxy   "/usr/local/bin/kube..."   2 minutes ago        Up 2 minutes                              k8s_kube-proxy_kube-proxy-5p7dn_kube-system_92b1b38a-f6fa-4308-93fb-8045d2bae63f_0
fed18476d9a3   registry.aliyuncs.com/google_containers/pause:3.6    "/pause"                 3 minutes ago        Up 3 minutes                              k8s_POD_kube-flannel-ds-tvz9q_kube-system_e62fa7b1-1cce-42dc-91d8-cdbd2bfda0f3_0
ebc2403e3052   registry.aliyuncs.com/google_containers/pause:3.6    "/pause"                 3 minutes ago        Up 3 minutes                              k8s_POD_kube-proxy-5p7dn_kube-system_92b1b38a-f6fa-4308-93fb-8045d2bae63f_0

It is ok now
[machangwei@mcw7 ~]$ kubectl get nodes
NAME   STATUS   ROLES                  AGE     VERSION
mcw7   Ready    control-plane,master   7m22s   v1.23.1
mcw8   Ready    <none>                 4m51s   v1.23.1
mcw9   Ready    <none>                 3m45s   v1.23.1
[machangwei@mcw7 ~]$ 


Each node added to the cluster ends up with three containers. The join command talks to the apiserver service on the master node; the node then pulls the images and starts its containers:
k8s_kube-proxy_kube-
k8s_POD_kube-proxy-n
k8s_POD_kube-flannel

 

pod status: ContainerCreating, ErrImagePull, ImagePullBackOff

[machangwei@mcw7 ~]$ kubectl get pod
NAME                              READY   STATUS              RESTARTS   AGE
mcw01dep-nginx-5dd785954d-d2kwp   0/1     ContainerCreating   0          9m7s
mcw01dep-nginx-5dd785954d-szdjd   0/1     ErrImagePull        0          9m7s
mcw01dep-nginx-5dd785954d-v9x8j   0/1     ErrImagePull        0          9m7s
[machangwei@mcw7 ~]$ 
[machangwei@mcw7 ~]$ kubectl get pod
NAME                              READY   STATUS              RESTARTS   AGE
mcw01dep-nginx-5dd785954d-d2kwp   0/1     ContainerCreating   0          9m15s
mcw01dep-nginx-5dd785954d-szdjd   0/1     ImagePullBackOff    0          9m15s
mcw01dep-nginx-5dd785954d-v9x8j   0/1     ImagePullBackOff    0          9m15s

All containers on the node have been deleted, but the pod on the master side cannot be deleted normally, so force-delete it
[machangwei@mcw7 ~]$ kubectl get pod
NAME                              READY   STATUS        RESTARTS   AGE
mcw01dep-nginx-5dd785954d-v9x8j   0/1     Terminating   0          33m
[machangwei@mcw7 ~]$ kubectl delete pod mcw01dep-nginx-5dd785954d-v9x8j  --force
warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
pod "mcw01dep-nginx-5dd785954d-v9x8j" force deleted
[machangwei@mcw7 ~]$ kubectl get pod
No resources found in default namespace.


The image pull seems to have no effect??? Yet the containers on the node are all up
[machangwei@mcw7 ~]$ kubectl get pod
NAME                              READY   STATUS              RESTARTS   AGE
mcw01dep-nginx-5dd785954d-65zd4   0/1     ContainerCreating   0          118s
mcw01dep-nginx-5dd785954d-hfw2k   0/1     ContainerCreating   0          118s
mcw01dep-nginx-5dd785954d-qxzpl   0/1     ContainerCreating   0          118s

Events:
  Type    Reason     Age        From               Message
  ----    ------     ----       ----               -------
  Normal  Scheduled  112s       default-scheduler  Successfully assigned default/mcw01dep-nginx-5dd785954d-65zd4 to mcw8
  Normal  Pulling    <invalid>  kubelet            Pulling image "nginx"

Checking on the worker node: there is only the k8s_POD_mcw01dep-nginx pause container, not the k8s_mcw01dep-nginx container itself.
The master shows in the pod information that the "Pulling image nginx" event has an invalid age, so go to the worker node mcw8 and pull the image manually:
[root@mcw8 ~]$ docker pull nginx #The image is manually pulled successfully
Status: Downloaded newer image for nginx:latest
docker.io/library/nginx:latest


View the pod details again
Events:
  Type    Reason     Age        From               Message
  ----    ------     ----       ----               -------
  Normal  Scheduled  7m21s      default-scheduler  Successfully assigned default/mcw01dep-nginx-5dd785954d-65zd4 to mcw8
  Normal  Pulling    <invalid>  kubelet            Pulling image "nginx"

The first event line is the scheduling: every pod gets a pause (POD) container with the same name, and the default-scheduler message shows which node the pod was scheduled to.
Even though the mcw8 node had already pulled the image, the pod did not pick it up and did not retry the pull, so I deleted the pod and let it be recreated automatically; the new pod then used the image already present on mcw8.

Looking at the pods in all namespaces, the ages show <invalid>, which means there is a problem with the network pods on mcw8 and mcw9. Should these be regenerated? These network pods are created when the nodes join the cluster.
[machangwei@mcw7 ~]$ kubectl get pod --all-namespaces -o wide
NAMESPACE     NAME                           READY   STATUS             RESTARTS              AGE   IP           NODE   NOMINATED NODE   READINESS GATES
kube-system   kube-flannel-ds-tvz9q          0/1     CrashLoopBackOff   102 (<invalid> ago)   8h    10.0.0.138   mcw8   <none>           <none>
kube-system   kube-flannel-ds-v28gj          1/1     Running            102 (<invalid> ago)   8h    10.0.0.139   mcw9   <none>           <none>

To delete k8s system's pod, specify the namespace

[machangwei@mcw7 ~]$ kubectl get pod --all-namespaces -o wide
NAMESPACE     NAME                           READY   STATUS             RESTARTS              AGE   IP           NODE   NOMINATED NODE   READINESS GATES
kube-system   kube-flannel-ds-tvz9q          0/1     CrashLoopBackOff   103 (<invalid> ago)   8h    10.0.0.138   mcw8   <none>           <none>
kube-system   kube-flannel-ds-v28gj          0/1     CrashLoopBackOff   102 (<invalid> ago)   8h    10.0.0.139   mcw9   <none>           <none>
kube-system   kube-flannel-ds-vjfkz          1/1     Running            0                     8h    10.0.0.137   mcw7   <none>           <none>
[machangwei@mcw7 ~]$ kubectl delete pod kube-flannel-ds-tvz9q
Error from server (NotFound): pods "kube-flannel-ds-tvz9q" not found
[machangwei@mcw7 ~]$ kubectl delete pod kube-flannel-ds-tvz9q --namespace=kube-system
pod "kube-flannel-ds-tvz9q" deleted
[machangwei@mcw7 ~]$ kubectl delete pod kube-flannel-ds-v28gj --namespace=kube-system
pod "kube-flannel-ds-v28gj" deleted
[machangwei@mcw7 ~]$ kubectl get pod --all-namespaces -o wide #No change, still invalid
NAMESPACE     NAME                           READY   STATUS             RESTARTS            AGE   IP           NODE   NOMINATED NODE   READINESS GATES
kube-system   kube-flannel-ds-gr7ck          0/1     CrashLoopBackOff   1 (<invalid> ago)   21s   10.0.0.138   mcw8   <none>           <none>
kube-system   kube-flannel-ds-m6qgl          1/1     Running            1 (<invalid> ago)   6s    10.0.0.139   mcw9   <none>           <none>
kube-system   kube-flannel-ds-vjfkz          1/1     Running            0                   8h    10.0.0.137   mcw7   <none>           <non

Cloned virtual machines kept running into all kinds of container problems; virtual machines created from scratch did not have these issues.

So I recreated three virtual machines. During deployment the following problem appeared: coredns stays Pending.

[machangwei@mcwk8s-master ~]$ kubectl get pod --all-namespaces
NAMESPACE     NAME                                    READY   STATUS    RESTARTS   AGE
kube-system   coredns-6d8c4cb4d-nsv4x                 0/1     Pending   0          8m59s
kube-system   coredns-6d8c4cb4d-t7hr6                 0/1     Pending   0          8m59s

Troubleshooting process:

View error messages:
[machangwei@mcwk8s-master ~]$ kubectl describe pod coredns-6d8c4cb4d-nsv4x --namespace=kube-system
Events:
  Type     Reason            Age                 From               Message
  ----     ------            ----                ----               -------
  Warning  FailedScheduling  21s (x7 over 7m9s)  default-scheduler  0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.

Solution:
By default k8s does not schedule workloads onto the master node. To allow it, run: kubectl taint nodes --all node-role.kubernetes.io/master-

[machangwei@mcwk8s-master ~]$ kubectl get nodes #View node, master node is not ready. Execute the following command to make the master node also act as a node
NAME            STATUS     ROLES                  AGE   VERSION
mcwk8s-master   NotReady   control-plane,master   16m   v1.23.1
[machangwei@mcwk8s-master ~]$ kubectl taint nodes --all  node-role.kubernetes.io/master-
node/mcwk8s-master untainted
[machangwei@mcwk8s-master ~]$ 


Tolerations in the pod description:
Tolerations:                 CriticalAddonsOnly op=Exists
                             node-role.kubernetes.io/control-plane:NoSchedule
                             node-role.kubernetes.io/master:NoSchedule
                             node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s


To allow pods to be scheduled on the master node, use:
kubectl taint nodes --all node-role.kubernetes.io/master-
To prevent pods from being scheduled on the master node (k8s here is the node name):
kubectl taint nodes k8s node-role.kubernetes.io/master=true:NoSchedule
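To confirm which taints a node currently carries, the following quick checks can be used (a sketch; the node name mcwk8s-master is taken from this cluster, adjust to your own):

# Show the taints of a single node
kubectl describe node mcwk8s-master | grep -i taint
# Or list the taints of all nodes at once
kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.taints}{"\n"}{end}'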



Jan  9 11:51:52 mcw10 kubelet: I0109 11:51:52.636701   25612 cni.go:240] "Unable to update cni config" err="no networks found in /etc/cni/net.d"
Jan  9 11:51:53 mcw10 kubelet: E0109 11:51:53.909336   25612 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
Jan  9 11:51:57 mcw10 kubelet: I0109 11:51:57.637836   25612 cni.go:240] "Unable to update cni config" err="no networks found in /etc/cni/net.d"
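These kubelet messages mean that no CNI configuration has been written yet. A quick check on the affected node (a sketch; the flannel DaemonSet's install-cni init container normally drops a *.conflist file here once the pod runs, so an empty directory explains the error):

# Check whether the CNI config has appeared on the node
ls -l /etc/cni/net.d/
cat /etc/cni/net.d/*.conflist 2>/dev/null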


[machangwei@mcwk8s-master ~]$ kubectl get nodes
NAME            STATUS     ROLES                  AGE   VERSION
mcwk8s-master   NotReady   control-plane,master   43m   v1.23.1
[machangwei@mcwk8s-master ~]$ kubectl get pod --all-namespaces
NAMESPACE     NAME                                    READY   STATUS    RESTARTS   AGE
kube-system   coredns-6d8c4cb4d-t24gx                 0/1     Pending   0          18m
kube-system   coredns-6d8c4cb4d-t7hr6                 0/1     Pending   0          42m

It turns out this has little to do with the taint fix above. The real cause was that the network plugin had not been deployed yet; once the network is deployed, the two dns pods come up.

As follows:
[machangwei@mcwk8s-master ~]$ kubectl apply -f mm.yml  #Deploy network
Warning: policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
podsecuritypolicy.policy/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds created
[machangwei@mcwk8s-master ~]$ kubectl get nodes #Check the node; it is still NotReady
NAME            STATUS     ROLES                  AGE   VERSION
mcwk8s-master   NotReady   control-plane,master   45m   v1.23.1
[machangwei@mcwk8s-master ~]$ kubectl get pod --all-namespaces  #Check dns pod and flannel initialization
NAMESPACE     NAME                                    READY   STATUS     RESTARTS   AGE
kube-system   coredns-6d8c4cb4d-t24gx                 0/1     Pending    0          20m
kube-system   coredns-6d8c4cb4d-t7hr6                 0/1     Pending    0          44m
kube-system   kube-flannel-ds-w8v9s                   0/1     Init:0/2   0          14s

[machangwei@mcwk8s-master ~]$ kubectl get pod --all-namespaces #Check again: the flannel image pull has failed
NAMESPACE     NAME                                    READY   STATUS              RESTARTS   AGE
kube-system   coredns-6d8c4cb4d-t24gx                 0/1     Pending             0          20m
kube-system   coredns-6d8c4cb4d-t7hr6                 0/1     Pending             0          45m
kube-system   kube-flannel-ds-w8v9s                   0/1     Init:ErrImagePull   0          45s
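Instead of polling with repeated kubectl get calls, the --watch flag streams status changes until the flannel pod finishes its init containers (a small convenience, same output columns as above):

kubectl get pod --all-namespaces -o wide -w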

[machangwei@mcwk8s-master ~]$ kubectl describe pod kube-flannel-ds-w8v9s --namespace=kube-system #View description information
  Warning  Failed     4m26s                  kubelet            Error: ErrImagePull #The image pull kept failing; the network itself was fine when checked
  Warning  Failed     4m25s                  kubelet            Error: ImagePullBackOff #After about three minutes the image was finally pulled successfully (see below)
  Normal   BackOff    4m25s                  kubelet            Back-off pulling image "quay.io/coreos/flannel:v0.15.1"
  Normal   Pulling    4m15s (x2 over 4m45s)  kubelet            Pulling image "quay.io/coreos/flannel:v0.15.1"
  Normal   Pulled     3m36s                  kubelet            Successfully pulled image "quay.io/coreos/flannel:v0.15.1" in 39.090145025s
  Normal   Created    3m35s                  kubelet            Created container install-cni
  Normal   Started    3m35s                  kubelet            Started container install-cni
  Normal   Pulled     3m35s                  kubelet            Container image "quay.io/coreos/flannel:v0.15.1" already present on machine
  Normal   Created    3m35s                  kubelet            Created container kube-flannel
  Normal   Started    3m34s                  kubelet            Started container kube-flannel
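One possible way to avoid the slow or failed pull is to pre-pull the flannel image on every node before (or right after) applying the manifest. A sketch, assuming the node can reach quay.io:

# Pre-pull the image used by the flannel DaemonSet on each node
docker pull quay.io/coreos/flannel:v0.15.1
docker images | grep flannel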


Check the node again: it is now Ready. In other words, once the network is deployed, coredns comes up and the master node, acting as a worker, becomes Ready.
[machangwei@mcwk8s-master ~]$ kubectl get nodes
NAME            STATUS   ROLES                  AGE   VERSION
mcwk8s-master   Ready    control-plane,master   57m   v1.23.1
[machangwei@mcwk8s-master ~]$ kubectl get pod --all-namespaces
NAMESPACE     NAME                                    READY   STATUS    RESTARTS   AGE
kube-system   coredns-6d8c4cb4d-t24gx                 1/1     Running   0          32m
kube-system   coredns-6d8c4cb4d-t7hr6                 1/1     Running   0          56m
kube-system   etcd-mcwk8s-master                      1/1     Running   0          57m
kube-system   kube-apiserver-mcwk8s-master            1/1     Running   0          57m
kube-system   kube-controller-manager-mcwk8s-master   1/1     Running   0          57m
kube-system   kube-flannel-ds-w8v9s                   1/1     Running   0          12m
kube-system   kube-proxy-nvw6m                        1/1     Running   0          56m
kube-system   kube-scheduler-mcwk8s-master            1/1     Running   0          57m

 

After node1 joins the cluster, two additional flannel network pods show up in the master's pod list, and neither of them is Ready.

They are the network pods for the new nodes. This does not seem to affect usage, so it has no impact for the time being.

[root@mcwk8s-node1 ~]$ kubeadm join 10.0.0.140:6443 --token 8yficm.352yz89c44mqk4y6 \
> --discovery-token-ca-cert-hash sha256:bcd36381d3de0adb7e05a12f688eee4043833290ebd39366fc47dd5233c552bf
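If the original join token has expired or was lost, a fresh join command can be generated on the master (a sketch; run with root privileges where kubeadm is installed):

# Print a complete kubeadm join command with a new token
kubeadm token create --print-join-command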

On the master, two more pods appear that are not Ready; this means the network pod has not yet come up on those nodes.
[machangwei@mcwk8s-master ~]$ kubectl get pod --all-namespaces
NAMESPACE     NAME                                    READY   STATUS     RESTARTS   AGE
kube-system   kube-flannel-ds-75npz                   0/1     Init:1/2   0          99s
kube-system   kube-flannel-ds-lpmxf                   0/1     Init:1/2   0          111s
kube-system   kube-flannel-ds-w8v9s                   1/1     Running    0          16m

[machangwei@mcwk8s-master ~]$ kubectl get pod --all-namespaces -o wide
NAMESPACE     NAME                                    READY   STATUS                  RESTARTS      AGE     IP           NODE            NOMINATED NODE   READINESS GATES
kube-system   kube-flannel-ds-75npz                   0/1     CrashLoopBackOff        4 (50s ago)   4m37s   10.0.0.141   mcwk8s-node1    <none>           <none>
kube-system   kube-flannel-ds-lpmxf                   0/1     Init:ImagePullBackOff   0             4m49s   10.0.0.142   mcwk8s-node2    <none>           <none>
kube-system   kube-flannel-ds-w8v9s                   1/1     Running                 0             19m     10.0.0.140   mcwk8s-master   <none>           <none>


Check the node status: one of the new nodes is now Ready.
[machangwei@mcwk8s-master ~]$ kubectl get nodes
NAME            STATUS     ROLES                  AGE     VERSION
mcwk8s-master   Ready      control-plane,master   65m     v1.23.1
mcwk8s-node1    Ready      <none>                 5m22s   v1.23.1
mcwk8s-node2    NotReady   <none>                 5m35s   v1.23.1

Check the pods now: although the node is already Ready, the network (flannel) pods still show problem states.
[machangwei@mcwk8s-master ~]$ kubectl get pod --all-namespaces -o wide
NAMESPACE     NAME                                    READY   STATUS                  RESTARTS      AGE     IP           NODE            NOMINATED NODE   READINESS GATES
kube-system   kube-flannel-ds-75npz                   0/1     CrashLoopBackOff        5 (44s ago)   6m5s    10.0.0.141   mcwk8s-node1    <none>           <none>
kube-system   kube-flannel-ds-lpmxf                   0/1     Init:ImagePullBackOff   0             6m17s   10.0.0.142   mcwk8s-node2    <none>           <none>
kube-system   kube-flannel-ds-w8v9s                   1/1     Running                 0             21m     10.0.0.140   mcwk8s-master   <none>           <none>

Describe the pod in the CrashLoopBackOff state: the container keeps failing on restart, and the image is already present on the machine.
  Normal   Created    5m10s (x4 over 5m59s)  kubelet            Created container kube-flannel
  Normal   Started    5m10s (x4 over 5m58s)  kubelet            Started container kube-flannel
  Warning  BackOff    4m54s (x5 over 5m52s)  kubelet            Back-off restarting failed container
  Normal   Pulled     2m52s (x6 over 5m59s)  kubelet            Container image "quay.io/coreos/flannel:v0.15.1" already present on machine

Describe the pod in the Init:ImagePullBackOff state: the image pull is the problem.
  Warning  Failed     23s (x4 over 5m42s)   kubelet            Failed to pull image "quay.io/coreos/flannel:v0.15.1": rpc error: code = Unknown desc = context canceled
  Warning  Failed     23s (x4 over 5m42s)   kubelet            Error: ErrImagePull
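For a CrashLoopBackOff, the logs of the previous (failed) container run usually show the real error. A sketch using the pod name from the output above; the main container in the flannel manifest is assumed to be named kube-flannel:

# Logs of the last failed run of the flannel container
kubectl logs -n kube-system kube-flannel-ds-75npz -c kube-flannel --previous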

Image import and export (docker save/load vs. docker export/import)

Recommendations:
Choose the commands according to the specific scenario.

If you only want to back up images, save and load are enough.
If the contents of a container have changed since it was started and those changes need to be backed up, use export and import.

Example
docker save -o nginx.tar nginx:latest
or
docker save > nginx.tar nginx:latest
Here -o and > both mean output to a file; nginx.tar is the target file and nginx:latest is the source image (name:tag).


Example
docker load -i nginx.tar
or
docker load < nginx.tar
Here -i and < both mean input from a file. The image is imported together with its metadata, including the tag.


Example
docker export -o nginx-test.tar nginx-test
Here -o means output to a file; nginx-test.tar is the target file and nginx-test is the source container (name).


docker import nginx-test.tar nginx:imp
 or
cat nginx-test.tar | docker import - nginx:imp

Differences:
The tar file produced by export is slightly smaller than the one produced by save.

export creates the tar file from a container, while save creates it from an image.
As a consequence of the second point, when a tar produced by export is brought back with import, the image's layer history (the per-layer information; see Dockerfile layers if this is unfamiliar) is lost, so rollback is no longer possible. save works at the image level, so every layer is preserved when it is loaded back. In this example, nginx:latest was exported with save and re-imported with load, while nginx:imp was exported with export and re-imported with import.
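The layer-history difference can be seen directly with docker history; the image restored with load keeps all its layers, while the one created by import is flattened to a single layer (the two image names follow the example above):

docker history nginx:latest
docker history nginx:imp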

Original link: https://blog.csdn.net/ncdx111/article/details/79878098

Resolving the Init:ImagePullBackOff status

Check node2: the flannel image is missing
[root@mcwk8s-node2 ~]$ docker images
REPOSITORY                                           TAG       IMAGE ID       CREATED        SIZE
registry.aliyuncs.com/google_containers/kube-proxy   v1.23.1   b46c42588d51   3 weeks ago    112MB
rancher/mirrored-flannelcni-flannel-cni-plugin       v1.0.0    cd5235cd7dc2   2 months ago   9.03MB
registry.aliyuncs.com/google_containers/pause        3.6       6270bb605e12   4 months ago   683kB

Export the image from the master node and copy it to node2
[root@mcwk8s-master ~]$ docker images
REPOSITORY                                                        TAG       IMAGE ID       CREATED        SIZE
quay.io/coreos/flannel                                            v0.15.1   e6ea68648f0c   8 weeks ago    69.5MB
[root@mcwk8s-master ~]$ docker save quay.io/coreos/flannel >mcwflanel-image.tar.gz
[root@mcwk8s-master ~]$ ls
anaconda-ks.cfg  jiarujiqun.txt  mcwflanel-image.tar.gz
[root@mcwk8s-master ~]$ scp mcwflanel-image.tar.gz 10.0.0.142:/root/

The image is imported successfully on node2
[root@mcwk8s-node2 ~]$ docker images
REPOSITORY                                           TAG       IMAGE ID       CREATED        SIZE
registry.aliyuncs.com/google_containers/kube-proxy   v1.23.1   b46c42588d51   3 weeks ago    112MB
rancher/mirrored-flannelcni-flannel-cni-plugin       v1.0.0    cd5235cd7dc2   2 months ago   9.03MB
registry.aliyuncs.com/google_containers/pause        3.6       6270bb605e12   4 months ago   683kB
[root@mcwk8s-node2 ~]$ ls
anaconda-ks.cfg
[root@mcwk8s-node2 ~]$ ls
anaconda-ks.cfg  mcwflanel-image.tar.gz
[root@mcwk8s-node2 ~]$ docker load < mcwflanel-image.tar.gz 
ab9ef8fb7abb: Loading layer [==================================================>]  2.747MB/2.747MB
2ad3602f224f: Loading layer [==================================================>]  49.46MB/49.46MB
54089bc26b6b: Loading layer [==================================================>]   5.12kB/5.12kB
8c5368be4bdf: Loading layer [==================================================>]  9.216kB/9.216kB
5c32c759eea2: Loading layer [==================================================>]   7.68kB/7.68kB
Loaded image: quay.io/coreos/flannel:v0.15.1
[root@mcwk8s-node2 ~]$ docker images
REPOSITORY                                           TAG       IMAGE ID       CREATED        SIZE
registry.aliyuncs.com/google_containers/kube-proxy   v1.23.1   b46c42588d51   3 weeks ago    112MB
quay.io/coreos/flannel                               v0.15.1   e6ea68648f0c   8 weeks ago    69.5MB
rancher/mirrored-flannelcni-flannel-cni-plugin       v1.0.0    cd5235cd7dc2   2 months ago   9.03MB
registry.aliyuncs.com/google_containers/pause        3.6       6270bb605e12   4 months ago   683kB
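The same offline transfer, collected in one place and reusable for any image a node cannot pull (node IP and file name as used above; each command runs on the host named in the comment):

docker save quay.io/coreos/flannel:v0.15.1 -o mcwflanel-image.tar.gz   # on the master
scp mcwflanel-image.tar.gz 10.0.0.142:/root/                           # copy to node2
docker load -i mcwflanel-image.tar.gz                                  # on node2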


Check the pods from the master: the state has changed to CrashLoopBackOff after many restarts.
[machangwei@mcwk8s-master ~]$ kubectl get pod --all-namespaces
NAMESPACE     NAME                                    READY   STATUS             RESTARTS        AGE
kube-system   kube-flannel-ds-75npz                   0/1     CrashLoopBackOff   9 (4m47s ago)   28m
kube-system   kube-flannel-ds-lpmxf                   0/1     CrashLoopBackOff   4 (74s ago)     28m
kube-system   kube-flannel-ds-w8v9s                   1/1     Running            0               43m

Looking at the description, the container keeps failing on restart. The Init:ImagePullBackOff problem is resolved; what remains is the CrashLoopBackOff.

[machangwei@mcwk8s-master ~]$ kubectl describe pod kube-flannel-ds-lpmxf --namespace=kube-system
  Warning  BackOff    3m25s (x20 over 7m48s)  kubelet            Back-off restarting failed container
  

Although the two flannel pods on the nodes are still not Ready, the node status is already Ready, so set that aside for now and deploy an application to verify the cluster.
[machangwei@mcwk8s-master ~]$ kubectl get pod --all-namespaces
NAMESPACE     NAME                                    READY   STATUS             RESTARTS        AGE
kube-system   kube-flannel-ds-75npz                   0/1     CrashLoopBackOff   12 (114s ago)   41m
kube-system   kube-flannel-ds-lpmxf                   0/1     CrashLoopBackOff   8 (3m46s ago)   41m
[machangwei@mcwk8s-master ~]$ kubectl get nodes
NAME            STATUS   ROLES                  AGE    VERSION
mcwk8s-master   Ready    control-plane,master   100m   v1.23.1
mcwk8s-node1    Ready    <none>                 40m    v1.23.1
mcwk8s-node2    Ready    <none>                 41m    v1.23.1


Verify that the environment works: there is no problem, and test applications can be deployed.
[machangwei@mcwk8s-master ~]$ kubectl get deployment
NAME             READY   UP-TO-DATE   AVAILABLE   AGE
mcw01dep-nginx   1/1     1            1           5m58s
mcw02dep-nginx   1/2     2            1           71s
[machangwei@mcwk8s-master ~]$ kubectl get pod
NAME                              READY   STATUS    RESTARTS   AGE
mcw01dep-nginx-5dd785954d-z7s8m   1/1     Running   0          7m21s
mcw02dep-nginx-5b8b58857-7mlmh    1/1     Running   0          2m34s
mcw02dep-nginx-5b8b58857-pvwdd    1/1     Running   0          2m34s
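For reference, the two test deployments above were presumably created with commands along these lines (the image and replica count are assumptions, only the deployment names are taken from the output):

kubectl create deployment mcw01dep-nginx --image=nginx
kubectl create deployment mcw02dep-nginx --image=nginx --replicas=2
kubectl get deployment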



Delete the test resources, then take a virtual machine snapshot. If the k8s environment breaks later and needs to be redeployed, just restore the snapshot to save time.
[machangwei@mcwk8s-master ~]$ kubectl get pod
NAME                              READY   STATUS    RESTARTS   AGE
mcw01dep-nginx-5dd785954d-z7s8m   1/1     Running   0          7m21s
mcw02dep-nginx-5b8b58857-7mlmh    1/1     Running   0          2m34s
mcw02dep-nginx-5b8b58857-pvwdd    1/1     Running   0          2m34s
[machangwei@mcwk8s-master ~]$ 
[machangwei@mcwk8s-master ~]$ kubectl get deployment
NAME             READY   UP-TO-DATE   AVAILABLE   AGE
mcw01dep-nginx   1/1     1            1           7m39s
mcw02dep-nginx   2/2     2            2           2m52s
[machangwei@mcwk8s-master ~]$ kubectl delete deployment mcw01dep-nginx mcw02dep-nginx
deployment.apps "mcw01dep-nginx" deleted
deployment.apps "mcw02dep-nginx" deleted
[machangwei@mcwk8s-master ~]$ kubectl get deployment
No resources found in default namespace.
[machangwei@mcwk8s-master ~]$ 
[machangwei@mcwk8s-master ~]$ kubectl get pod
No resources found in default namespace.

 
