I learned Docker before, and now I want to study Kubernetes (k8s) systematically as a complete set.
Reference: https://www.kubernetes.org.cn/k8s
1. Overview of kubernetes
1. Basic introduction
k8s is short for kubernetes: the eight letters "ubernete" between the "k" and the "s" are abbreviated to "8". Kubernetes is an open-source system for managing containerized applications across multiple hosts on a cloud platform, and its goal is to make deploying containerized applications simple and efficient. It provides mechanisms for application deployment, scheduling, updating, and maintenance. It is Google's open-source container orchestration engine, supporting automated deployment, large-scale scaling, and application container management.
The modern deployment approach is container based. Containers are isolated from each other and each has its own file system; processes in different containers do not affect one another, and computing resources can be partitioned between them. Compared with virtual machines, containers can be deployed quickly, and because they are decoupled from the underlying infrastructure and the host file system, they can be migrated across different clouds and operating system versions.
When an application is deployed in a production environment, multiple instances are usually run so that requests to the application can be load balanced. In k8s, we can create multiple containers, run one instance in each container, and then manage, discover, and access this group of application instances through the built-in load balancing mechanism.
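A minimal sketch of this pattern with kubectl (the deployment name web and the replica count 3 are illustrative, not something created elsewhere in this post):

kubectl create deployment web --image=nginx   # a Deployment that manages the instances
kubectl scale deployment web --replicas=3     # run three instances (one pod each)
kubectl expose deployment web --port=80       # a Service that load balances across all three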
2. k8s features
1. Automatic bin packing: automatically places application containers based on the resource requirements of the application's running environment
2. Self-healing: restarts containers when they fail; when a node has problems, its containers are rescheduled and redeployed onto other nodes
3. Horizontal scaling: service discovery and load balancing are built into kubernetes itself, and the number of running instances can be scaled out or in
4. Rolling updates: updates the applications running in containers all at once or in batches, following changes to the application
5. Version rollback: rolls an application back to a previous version (items 3-5 are demonstrated in the sketch after this list)
6. Secret and configuration management: secrets and application configuration can be deployed and updated without rebuilding the image, similar to hot deployment
7. Storage orchestration: automatically mounts and provisions storage, which is especially useful for persisting the data of stateful applications; the storage can come from local directories, network storage, public cloud storage, and other services
8. Batch execution: provides one-off and scheduled tasks, covering batch data processing and analysis scenarios
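Items 3-5 above map directly onto kubectl commands. A minimal sketch, assuming a Deployment named nginx already exists (one is created in the cluster test section later in this post):

kubectl scale deployment nginx --replicas=5           # horizontal scaling
kubectl set image deployment/nginx nginx=nginx:1.17   # rolling update to a new image version
kubectl rollout status deployment/nginx               # watch the rolling update progress
kubectl rollout undo deployment/nginx                 # version rollback to the previous revision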
3. k8s cluster architecture and core concepts
1. Cluster architecture
(1) Master node: performs scheduling and management of the cluster, and receives operation requests from users outside the cluster. Its components:
apiserver: the single entry point for resource operations; it provides authentication, authorization, access control, API registration and discovery, exposes resources in a RESTful manner, and hands cluster state to etcd for storage
scheduler: schedules Pods onto nodes; it selects the node on which each application is deployed, placing the Pod on the appropriate machine according to the configured scheduling policy
controller-manager: maintains the state of the cluster, e.g. fault detection, automatic scaling, and rolling updates; each resource type corresponds to one controller
etcd: the storage system that holds cluster data, such as state data and pod data
(2) Worker node: the node that runs the user's business application containers. Its components:
kubelet: in short, the master's agent on each worker node, managing the operations of that node's local containers. It is responsible for maintaining the container life cycle, as well as Volume (CVI) and network (CNI) management
kube-proxy: the network proxy on each node, handling operations such as load balancing; it is responsible for providing Service discovery and load balancing inside the cluster
2. Core concepts
(1) Pod: the smallest deployable unit in k8s. A pod can contain multiple containers, i.e. it is a group of containers, and the containers inside a pod share the network namespace. A pod's life cycle is short: when it dies, a new pod is deployed in its place.
Pods are the foundation of all business types in a k8s cluster. A pod can be regarded as a small robot running in the cluster, and different types of business need different types of small robots to execute them. Currently the workloads in k8s can be divided into long-running, batch, node-daemon, and stateful application types; the corresponding small-robot controllers are Deployment, Job, DaemonSet, and PetSet (since renamed StatefulSet).
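As a concrete illustration, here is a minimal Pod manifest (the pod name, container names, and images are only examples): two containers in one pod share the network namespace, so they can reach each other via localhost:

cat > pod-demo.yaml << EOF
apiVersion: v1
kind: Pod
metadata:
  name: pod-demo
spec:
  containers:
  - name: web                # serves on localhost:80 inside the pod
    image: nginx
  - name: sidecar            # shares the pod's network, can reach localhost:80
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
EOF
kubectl apply -f pod-demo.yaml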
(2) Replication Controller (RC):
RC is the earliest API object in a k8s cluster for keeping Pods highly available. It monitors the running Pods and guarantees that the specified number of Pod replicas is running in the cluster; that number can be many or just 1. If fewer replicas are running than specified, RC starts new ones; if there are more, RC kills the excess Pods. Even when the desired count is 1, running a Pod through an RC is wiser than running the Pod directly, because the RC's high-availability guarantee keeps that single Pod running at all times. RC is an early k8s concept and only applies to long-running workload types, such as keeping a highly available web service running.
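A minimal RC manifest, to make the mechanism concrete (the name, label, and replica count are illustrative); the selector tells the RC which pods to count, and the template tells it how to create replacements:

cat > nginx-rc.yaml << EOF
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx-rc
spec:
  replicas: 3          # RC keeps exactly 3 pod replicas alive
  selector:
    app: nginx         # the pods this RC counts
  template:            # how to create a replacement pod
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
EOF
kubectl apply -f nginx-rc.yaml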
(3) Service: defines the access rules for a group of pods. Each service corresponds to a stable virtual IP inside the cluster, and the group of pods is accessed through that virtual IP.
In other words, the service defines the unified access rules, while a controller creates and deploys the pods behind it.
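A minimal Service manifest as a sketch (nginx-svc is an illustrative name); it selects the pods labeled app: nginx, for example those created by the RC sketch above, and load balances port 80 across them via the cluster virtual IP:

cat > nginx-svc.yaml << EOF
apiVersion: v1
kind: Service
metadata:
  name: nginx-svc
spec:
  selector:
    app: nginx         # route to every pod carrying this label
  ports:
  - port: 80           # port exposed on the service's virtual IP
    targetPort: 80     # container port behind it
EOF
kubectl apply -f nginx-svc.yaml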
3. Cluster construction method
Currently there are two ways to deploy k8s:
(1) kubeadm
kubeadm is a k8s deployment tool that provides kubeadm init and kubeadm join for rapid deployment of k8s clusters. Official documentation: https://kubernetes.io/docs/reference/setup-tools/kubeadm/
(2) Binary package
Download the release binary packages from GitHub and manually deploy each component to form a k8s cluster.
kubeadm deployment is relatively simple, but it hides many details, which can make troubleshooting difficult. Deploying a k8s cluster from binary packages teaches many of the underlying principles and also helps with later maintenance.
2. k8s cluster construction
We will simply build one master and two nodes. The machines and IP addresses are as follows; every machine needs Internet access to download dependencies:
k8smaster1   192.168.13.103
k8snode1     192.168.13.104
k8snode2     192.168.13.105
1. System initialization (all three nodes execute these steps unless master is specified)
All three machines need the following operations. I chose to prepare one virtual machine and then clone it.
1. Turn off the firewall
systemctl stop firewalld
systemctl disable firewalld
systemctl status firewalld   # view firewall status
2. Close selinux
sed -i 's/enforcing/disabled/' /etc/selinux/config   # permanent
setenforce 0                                         # temporary
3. Close swap
free -g                               # view swap status
swapoff -a                            # temporary
sed -ri 's/.*swap.*/#&/' /etc/fstab   # permanent
4. Modify host name
hostnamectl set-hostname <hostname>
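For this cluster that means one command per machine, matching the host table above:

hostnamectl set-hostname k8smaster1   # on 192.168.13.103
hostnamectl set-hostname k8snode1     # on 192.168.13.104
hostnamectl set-hostname k8snode2     # on 192.168.13.105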
5. Change the IP address to a static IP (note that DNS must also be configured when setting a static IP; refer to the earlier rocketmq cluster post)
vim /etc/sysconfig/network-scripts/ifcfg-ens33
6. Synchronization time
yum install ntpdate -y
ntpdate time.windows.com
7. On the master node, modify hosts so the host names are resolvable
cat >> /etc/hosts << EOF
192.168.13.103 k8smaster1
192.168.13.104 k8snode1
192.168.13.105 k8snode2
EOF
8. Pass bridged IPv4 traffic to the iptables chains
cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system # take effect
9. Install docker / kubeadm / kubelet on all nodes
By default, Kubernetes uses Docker as its container runtime (CRI), so Docker is installed first.
(1) Install Docker
wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo
yum -y install docker-ce-18.06.1.ce-3.el7
systemctl enable docker && systemctl start docker
docker --version
(2) Add alicloud YUM software source
Set warehouse address
cat > /etc/docker/daemon.json << EOF
{
  "registry-mirrors": ["https://b9pmyelo.mirror.aliyuncs.com"]
}
EOF
Add yum source:
cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
(3) Install kubeadm, kubelet, and kubectl
yum install -y kubelet-1.18.0 kubeadm-1.18.0 kubectl-1.18.0
systemctl enable kubelet
After a successful installation, verify the versions:
[root@k8smaster1 ~]# kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.0", GitCommit:"9e991415386e4cf155a24b1da15becaa390438d8", GitTreeState:"clean", BuildDate:"2020-03-25T14:56:30Z", GoVersion:"go1.13.8", Compiler:"gc", Platform:"linux/amd64"}
[root@k8smaster1 ~]# kubelet --version
Kubernetes v1.18.0
2. Deploy k8s master
1. On the master node:
kubeadm init \
  --apiserver-advertise-address=192.168.13.103 \
  --image-repository registry.aliyuncs.com/google_containers \
  --kubernetes-version v1.18.0 \
  --service-cidr=10.96.0.0/12 \
  --pod-network-cidr=10.244.0.0/16
--apiserver-advertise-address must be changed to the master node's IP; --image-repository specifies the Alibaba Cloud image repository; --kubernetes-version specifies the kubernetes version; the last two flags set the cluster-internal service and pod network CIDRs, which only need to not conflict with the current network segment. If the command above reports an error, you can add --v=6 to see a detailed log. For a detailed explanation of the kubeadm parameters, refer to the official documentation.
Executing the command above pulls a series of docker images. You can open a new terminal and use docker images / docker ps to watch the images and containers:
[root@k8smaster1 ~]# docker images
REPOSITORY                                                        TAG       IMAGE ID       CREATED         SIZE
registry.aliyuncs.com/google_containers/kube-proxy                v1.18.0   43940c34f24f   21 months ago   117MB
registry.aliyuncs.com/google_containers/kube-scheduler            v1.18.0   a31f78c7c8ce   21 months ago   95.3MB
registry.aliyuncs.com/google_containers/kube-apiserver            v1.18.0   74060cea7f70   21 months ago   173MB
registry.aliyuncs.com/google_containers/kube-controller-manager   v1.18.0   d3e55153f52f   21 months ago   162MB
registry.aliyuncs.com/google_containers/pause                     3.2       80d28bedfe5d   23 months ago   683kB
registry.aliyuncs.com/google_containers/coredns                   1.6.7     67da37a9a360   23 months ago   43.8MB
registry.aliyuncs.com/google_containers/etcd                      3.4.3-0   303ce5db0e90   2 years ago     288MB
[root@k8smaster1 ~]# docker ps
CONTAINER ID   IMAGE                                               COMMAND                    CREATED         STATUS         PORTS   NAMES
3877168ddb09   43940c34f24f                                        "/usr/local/bin/kube..."   3 minutes ago   Up 3 minutes           k8s_kube-proxy_kube-proxy-hn4gx_kube-system_fe8347ac-3c81-4cab-ba78-3da6ad598316_0
18a32d328d49   registry.aliyuncs.com/google_containers/pause:3.2   "/pause"                   3 minutes ago   Up 3 minutes           k8s_POD_kube-proxy-hn4gx_kube-system_fe8347ac-3c81-4cab-ba78-3da6ad598316_0
5f62d3184cd7   303ce5db0e90                                        "etcd --advertise-cl..."   3 minutes ago   Up 3 minutes           k8s_etcd_etcd-k8smaster1_kube-system_635d8dc817fc23c8a6c09070c81f668f_0
2af2a1b5d169   a31f78c7c8ce                                        "kube-scheduler --au..."   3 minutes ago   Up 3 minutes           k8s_kube-scheduler_kube-scheduler-k8smaster1_kube-system_ca2aa1b3224c37fa1791ef6c7d883bbe_0
c77506ee4dd2   d3e55153f52f                                        "kube-controller-man..."   3 minutes ago   Up 3 minutes           k8s_kube-controller-manager_kube-controller-manager-k8smaster1_kube-system_c4d2dd4abfffdee4d424ce839b0de402_0
303545e4eca9   74060cea7f70                                        "kube-apiserver --ad..."   3 minutes ago   Up 3 minutes           k8s_kube-apiserver_kube-apiserver-k8smaster1_kube-system_ba7276261300df6a615a2d947d86d3fa_0
f9da54e2bfae   registry.aliyuncs.com/google_containers/pause:3.2   "/pause"                   3 minutes ago   Up 3 minutes           k8s_POD_kube-scheduler-k8smaster1_kube-system_ca2aa1b3224c37fa1791ef6c7d883bbe_0
007e2a0cd10b   registry.aliyuncs.com/google_containers/pause:3.2   "/pause"                   3 minutes ago   Up 3 minutes           k8s_POD_kube-controller-manager-k8smaster1_kube-system_c4d2dd4abfffdee4d424ce839b0de402_0
0666c8b43c32   registry.aliyuncs.com/google_containers/pause:3.2   "/pause"                   3 minutes ago   Up 3 minutes           k8s_POD_kube-apiserver-k8smaster1_kube-system_ba7276261300df6a615a2d947d86d3fa_0
0ca472d7f2cd   registry.aliyuncs.com/google_containers/pause:3.2   "/pause"                   3 minutes ago   Up 3 minutes           k8s_POD_etcd-k8smaster1_kube-system_635d8dc817fc23c8a6c09070c81f668f_0
Finally, when the downloads complete, the main window prints the following (seeing "initialized successfully" means it worked):
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.13.103:6443 --token tcpixp.g14hyo8skehh9kcp \
    --discovery-token-ca-cert-hash sha256:6129c85d48cbf0ca946ea2c65fdc1055c12363be07b18dd7994c3c0a242a286a
Use the kubectl tool: after success, execute the commands from the init output above
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Check:
[root@k8smaster1 ~]# kubectl get nodes
NAME         STATUS     ROLES    AGE     VERSION
k8smaster1   NotReady   master   7m17s   v1.18.0
[root@k8smaster1 ~]# kubectl version
Client Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.0", GitCommit:"9e991415386e4cf155a24b1da15becaa390438d8", GitTreeState:"clean", BuildDate:"2020-03-25T14:58:59Z", GoVersion:"go1.13.8", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.0", GitCommit:"9e991415386e4cf155a24b1da15becaa390438d8", GitTreeState:"clean", BuildDate:"2020-03-25T14:50:46Z", GoVersion:"go1.13.8", Compiler:"gc", Platform:"linux/amd64"}
3. Join Kubernetes Node
Execute the following command on the k8snode1 and k8snode2 nodes to add them to the cluster, i.e. run the kubeadm join command (with the token) that kubeadm init printed:
kubeadm join 192.168.13.103:6443 --token tcpixp.g14hyo8skehh9kcp \
--discovery-token-ca-cert-hash sha256:6129c85d48cbf0ca946ea2c65fdc1055c12363be07b18dd7994c3c0a242a286a
Note that only the kubeadm join parameters printed on your own master's console will work here, because every cluster's token is different. The default token is valid for 24 hours; after it expires it can no longer be used and a new token has to be created. Refer to the official documentation for the kubeadm token commands.
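A minimal sketch of regenerating the join command on the master once the token has expired:

kubeadm token list                          # inspect existing tokens and their TTLs
kubeadm token create --print-join-command   # create a fresh token and print the full kubeadm join command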
After the join succeeds, the output log is as follows:
[root@k8snode1 ~]# kubeadm join 192.168.13.103:6443 --token tcpixp.g14hyo8skehh9kcp \
>     --discovery-token-ca-cert-hash sha256:6129c85d48cbf0ca946ea2c65fdc1055c12363be07b18dd7994c3c0a242a286a
W0108 21:20:24.380628   25524 join.go:346] [preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when control-plane flag is not set.
[preflight] Running pre-flight checks
	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	[WARNING Hostname]: hostname "k8snode1" could not be reached
	[WARNING Hostname]: hostname "k8snode1": lookup k8snode1 on 114.114.114.114:53: no such host
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.18" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
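The [WARNING IsDockerSystemdCheck] line above points at a real mismatch: Docker is using the cgroupfs driver while kubernetes recommends systemd. Optionally (this step is not part of the original walkthrough, but the keys below are standard Docker daemon options), you can switch the driver in daemon.json on each node and restart Docker:

cat > /etc/docker/daemon.json << EOF
{
  "registry-mirrors": ["https://b9pmyelo.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
systemctl daemon-reload && systemctl restart docker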
Finally, view the nodes from the master (the final cluster looks like this):
[root@k8smaster1 ~]# kubectl get nodes
NAME         STATUS     ROLES    AGE     VERSION
k8smaster1   NotReady   master   69m     v1.18.0
k8snode1     NotReady   <none>   49m     v1.18.0
k8snode2     NotReady   <none>   4m7s    v1.18.0
The status is NotReady because the network plugin has not been installed yet.
4. Deploy the CNI network plugin
Execute the following commands on the master node (if the flannel image cannot be pulled, you can first use sed to rewrite the image repository in kube-flannel.yml to a reachable Docker Hub mirror):
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
kubectl get pods -n kube-system
After the second command, wait until the relevant components are all Running, then check the cluster status again:
[root@k8smaster1 ~]# kubectl get pods -n kube-system
NAME                                 READY   STATUS    RESTARTS   AGE
coredns-7ff77c879f-stfqz             1/1     Running   0          106m
coredns-7ff77c879f-vhwr7             1/1     Running   0          106m
etcd-k8smaster1                      1/1     Running   0          107m
kube-apiserver-k8smaster1            1/1     Running   0          107m
kube-controller-manager-k8smaster1   1/1     Running   0          107m
kube-flannel-ds-9bx4w                1/1     Running   0          5m31s
kube-flannel-ds-qzqjq                1/1     Running   0          5m31s
kube-flannel-ds-tldt5                1/1     Running   0          5m31s
kube-proxy-6vcvj                     1/1     Running   1          86m
kube-proxy-hn4gx                     1/1     Running   0          106m
kube-proxy-qzwh6                     1/1     Running   0          41m
kube-scheduler-k8smaster1            1/1     Running   0          107m
[root@k8smaster1 ~]# kubectl get nodes
NAME         STATUS   ROLES    AGE    VERSION
k8smaster1   Ready    master   107m   v1.18.0
k8snode1     Ready    <none>   86m    v1.18.0
k8snode2     Ready    <none>   41m    v1.18.0
5. Test kubernetes cluster
kubectl create deployment nginx --image=nginx
kubectl expose deployment nginx --port=80 --type=NodePort
kubectl get pod,svc
The resulting objects are as follows:
[root@k8smaster1 ~]# kubectl get pod,svc
NAME                        READY   STATUS    RESTARTS   AGE
pod/nginx-f89759699-cnj62   1/1     Running   0          3m5s

NAME                 TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
service/kubernetes   ClusterIP   10.96.0.1      <none>        443/TCP        113m
service/nginx        NodePort    10.96.201.24   <none>        80:30951/TCP   2m40s
Test: the service can be accessed from any host via NodePort 30951
curl http://192.168.13.103:30951/
curl http://192.168.13.104:30951/
curl http://192.168.13.105:30951/
View the docker processes of the three machines:
1. k8smaster1
[root@k8smaster1 ~]# docker ps
CONTAINER ID   IMAGE                                               COMMAND                    CREATED          STATUS          PORTS   NAMES
e71930a745f3   67da37a9a360                                        "/coredns -conf /etc..."   14 minutes ago   Up 14 minutes           k8s_coredns_coredns-7ff77c879f-vhwr7_kube-system_9553d8cc-8efb-48d1-9790-4120e09869c7_0
5aaacb75700b   67da37a9a360                                        "/coredns -conf /etc..."   14 minutes ago   Up 14 minutes           k8s_coredns_coredns-7ff77c879f-stfqz_kube-system_81bbe584-a3d1-413a-a785-d8edaca7b4c1_0
756d66c75a56   registry.aliyuncs.com/google_containers/pause:3.2   "/pause"                   14 minutes ago   Up 14 minutes           k8s_POD_coredns-7ff77c879f-vhwr7_kube-system_9553d8cc-8efb-48d1-9790-4120e09869c7_0
658b02e25f89   registry.aliyuncs.com/google_containers/pause:3.2   "/pause"                   14 minutes ago   Up 14 minutes           k8s_POD_coredns-7ff77c879f-stfqz_kube-system_81bbe584-a3d1-413a-a785-d8edaca7b4c1_0
8a6f86753098   404fc3ab6749                                        "/opt/bin/flanneld -..."   14 minutes ago   Up 14 minutes           k8s_kube-flannel_kube-flannel-ds-qzqjq_kube-system_bf0155d2-0f27-492c-8029-2fe042869579_0
b047ca53a8fe   registry.aliyuncs.com/google_containers/pause:3.2   "/pause"                   14 minutes ago   Up 14 minutes           k8s_POD_kube-flannel-ds-qzqjq_kube-system_bf0155d2-0f27-492c-8029-2fe042869579_0
3877168ddb09   43940c34f24f                                        "/usr/local/bin/kube..."   2 hours ago      Up 2 hours              k8s_kube-proxy_kube-proxy-hn4gx_kube-system_fe8347ac-3c81-4cab-ba78-3da6ad598316_0
18a32d328d49   registry.aliyuncs.com/google_containers/pause:3.2   "/pause"                   2 hours ago      Up 2 hours              k8s_POD_kube-proxy-hn4gx_kube-system_fe8347ac-3c81-4cab-ba78-3da6ad598316_0
5f62d3184cd7   303ce5db0e90                                        "etcd --advertise-cl..."   2 hours ago      Up 2 hours              k8s_etcd_etcd-k8smaster1_kube-system_635d8dc817fc23c8a6c09070c81f668f_0
2af2a1b5d169   a31f78c7c8ce                                        "kube-scheduler --au..."   2 hours ago      Up 2 hours              k8s_kube-scheduler_kube-scheduler-k8smaster1_kube-system_ca2aa1b3224c37fa1791ef6c7d883bbe_0
c77506ee4dd2   d3e55153f52f                                        "kube-controller-man..."   2 hours ago      Up 2 hours              k8s_kube-controller-manager_kube-controller-manager-k8smaster1_kube-system_c4d2dd4abfffdee4d424ce839b0de402_0
303545e4eca9   74060cea7f70                                        "kube-apiserver --ad..."   2 hours ago      Up 2 hours              k8s_kube-apiserver_kube-apiserver-k8smaster1_kube-system_ba7276261300df6a615a2d947d86d3fa_0
f9da54e2bfae   registry.aliyuncs.com/google_containers/pause:3.2   "/pause"                   2 hours ago      Up 2 hours              k8s_POD_kube-scheduler-k8smaster1_kube-system_ca2aa1b3224c37fa1791ef6c7d883bbe_0
007e2a0cd10b   registry.aliyuncs.com/google_containers/pause:3.2   "/pause"                   2 hours ago      Up 2 hours              k8s_POD_kube-controller-manager-k8smaster1_kube-system_c4d2dd4abfffdee4d424ce839b0de402_0
0666c8b43c32   registry.aliyuncs.com/google_containers/pause:3.2   "/pause"                   2 hours ago      Up 2 hours              k8s_POD_kube-apiserver-k8smaster1_kube-system_ba7276261300df6a615a2d947d86d3fa_0
0ca472d7f2cd   registry.aliyuncs.com/google_containers/pause:3.2   "/pause"                   2 hours ago      Up 2 hours              k8s_POD_etcd-k8smaster1_kube-system_635d8dc817fc23c8a6c09070c81f668f_0
2. k8snode1
[root@k8snode1 ~]# docker ps
CONTAINER ID   IMAGE                                               COMMAND                    CREATED          STATUS          PORTS   NAMES
8189b507fc4a   404fc3ab6749                                        "/opt/bin/flanneld -..."   10 minutes ago   Up 10 minutes           k8s_kube-flannel_kube-flannel-ds-9bx4w_kube-system_aa9751c9-7dcf-494b-b720-606cb8950a6d_0
f8e8103639c1   43940c34f24f                                        "/usr/local/bin/kube..."   10 minutes ago   Up 10 minutes           k8s_kube-proxy_kube-proxy-6vcvj_kube-system_55bc4b97-f479-4dd5-977a-2a61e0fce705_1
6675466fcc0e   registry.aliyuncs.com/google_containers/pause:3.2   "/pause"                   11 minutes ago   Up 10 minutes           k8s_POD_kube-flannel-ds-9bx4w_kube-system_aa9751c9-7dcf-494b-b720-606cb8950a6d_0
51d248df0e8c   registry.aliyuncs.com/google_containers/pause:3.2   "/pause"                   11 minutes ago   Up 10 minutes           k8s_POD_kube-proxy-6vcvj_kube-system_55bc4b97-f479-4dd5-977a-2a61e0fce705_1
3. k8snode2
[root@k8snode2 ~]# docker ps
CONTAINER ID   IMAGE                                                COMMAND                    CREATED             STATUS             PORTS   NAMES
d8bbbe754ebc   nginx                                                "/docker-entrypoint...."   4 minutes ago       Up 4 minutes               k8s_nginx_nginx-f89759699-cnj62_default_9e3e4d90-42ac-4f41-9af7-f7939a52cb01_0
04fbdd617724   registry.aliyuncs.com/google_containers/pause:3.2    "/pause"                   7 minutes ago       Up 7 minutes               k8s_POD_nginx-f89759699-cnj62_default_9e3e4d90-42ac-4f41-9af7-f7939a52cb01_0
e9dc459f9664   404fc3ab6749                                         "/opt/bin/flanneld -..."   15 minutes ago      Up 15 minutes              k8s_kube-flannel_kube-flannel-ds-tldt5_kube-system_5b1ea760-a19e-4b5b-9612-6d9a6dda7f59_0
f1d0312d2308   registry.aliyuncs.com/google_containers/pause:3.2    "/pause"                   15 minutes ago      Up 15 minutes              k8s_POD_kube-flannel-ds-tldt5_kube-system_5b1ea760-a19e-4b5b-9612-6d9a6dda7f59_0
d6bae886cb61   registry.aliyuncs.com/google_containers/kube-proxy   "/usr/local/bin/kube..."   About an hour ago   Up About an hour           k8s_kube-proxy_kube-proxy-qzwh6_kube-system_7d72b4a8-c6ee-4982-9e88-df70d9745b2c_0
324507774c8e   registry.aliyuncs.com/google_containers/pause:3.2    "/pause"                   About an hour ago   Up About an hour           k8s_POD_kube-proxy-qzwh6_kube-system_7d72b4a8-c6ee-4982-9e88-df70d9745b2c_0
You can see that the container of nginx runs on the k8snode2 node.
You can also inspect where the pod is running with kubectl:
[root@k8smaster1 ~]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx-f89759699-cnj62 1/1 Running 0 10m 10.244.2.2 k8snode2 <none> <none>
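To dig deeper into a single pod, kubectl describe shows its node, IP, containers, and recent events (the pod name below is the one generated in this run; yours will differ):

kubectl describe pod nginx-f89759699-cnj62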
Output in yaml format:
[root@k8smaster1 ~]# kubectl get pods -o yaml
apiVersion: v1
items:
- apiVersion: v1
  kind: Pod
  metadata:
    creationTimestamp: "2022-01-09T03:49:49Z"
    generateName: nginx-f89759699-
    labels:
      app: nginx
    ...
View all pods in all namespaces with detailed output:
[root@k8smaster1 ~]# kubectl get pods --all-namespaces -o wide
NAMESPACE              NAME                                         READY   STATUS    RESTARTS   AGE    IP               NODE         NOMINATED NODE   READINESS GATES
default                nginx-f89759699-cnj62                        1/1     Running   0          51m    10.244.2.2       k8snode2     <none>           <none>
kube-system            coredns-7ff77c879f-stfqz                     1/1     Running   0          161m   10.244.0.3       k8smaster1   <none>           <none>
kube-system            coredns-7ff77c879f-vhwr7                     1/1     Running   0          161m   10.244.0.2       k8smaster1   <none>           <none>
kube-system            etcd-k8smaster1                              1/1     Running   0          161m   192.168.13.103   k8smaster1   <none>           <none>
kube-system            kube-apiserver-k8smaster1                    1/1     Running   0          161m   192.168.13.103   k8smaster1   <none>           <none>
kube-system            kube-controller-manager-k8smaster1           1/1     Running   0          161m   192.168.13.103   k8smaster1   <none>           <none>
kube-system            kube-flannel-ds-9bx4w                        1/1     Running   0          59m    192.168.13.104   k8snode1     <none>           <none>
kube-system            kube-flannel-ds-qzqjq                        1/1     Running   0          59m    192.168.13.103   k8smaster1   <none>           <none>
kube-system            kube-flannel-ds-tldt5                        1/1     Running   0          59m    192.168.13.105   k8snode2     <none>           <none>
kube-system            kube-proxy-6vcvj                             1/1     Running   1          140m   192.168.13.104   k8snode1     <none>           <none>
kube-system            kube-proxy-hn4gx                             1/1     Running   0          161m   192.168.13.103   k8smaster1   <none>           <none>
kube-system            kube-proxy-qzwh6                             1/1     Running   0          95m    192.168.13.105   k8snode2     <none>           <none>
kube-system            kube-scheduler-k8smaster1                    1/1     Running   0          161m   192.168.13.103   k8smaster1   <none>           <none>
kubernetes-dashboard   dashboard-metrics-scraper-78f5d9f487-sfjlr   1/1     Running   0          25m    10.244.2.3       k8snode2     <none>           <none>
kubernetes-dashboard   kubernetes-dashboard-577bd97bc-f2v5g         1/1     Running   0          25m    10.244.1.2       k8snode1     <none>           <none>
Supplement: when kubeadm init was executed on the master, the following error was reported locally:
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
Solution:
1. Create the file /etc/systemd/system/kubelet.service.d/10-kubeadm.conf with the following content:
Environment="KUBELET_SYSTEM_PODS_ARGS=--pod-manifest-path=/etc/kubernetes/manifests --allow-privileged=true --fail-swap-on=false"
2. Restart kubelet
systemctl daemon-reload
systemctl restart kubelet
3. Re-execute kubeadm init
Supplement: executing kubeadm init multiple times on the master reports the following errors:
error execution phase preflight: [preflight] Some fatal errors occurred:
	[ERROR FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml]: /etc/kubernetes/manifests/kube-apiserver.yaml already exists
	[ERROR FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml]: /etc/kubernetes/manifests/kube-controller-manager.yaml already exists
This happens because the previous kubeadm init run was not cleaned up. Solution:
kubeadm reset
Supplement: the master executes kubeadm init and reports the following error:
[kubelet-check] Initial timeout of 40s passed.

	Unfortunately, an error has occurred:
		timed out waiting for the condition

	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
Posts online say this is caused by timeouts while downloading some images, but I found no working solution. In the end I deleted the virtual machine, cloned a fresh one, and redid the initialization and installation steps above.
Supplement: if you run into assorted errors while deploying the master, try installing different versions of kubeadm, kubelet, and kubectl
1. List the available versions:
yum list kubeadm --showduplicates
2. Remove the installed packages
yum remove -y kubelet kubeadm kubectl
3. Reinstall the specified version (yum install <package>-<version>):
yum install -y kubelet-1.17.0 kubeadm-1.17.0 kubectl-1.17.0
Supplement: after a node joins the k8s cluster and is rebooted, the network fails to start with the following error
Failed to start LSB
Solution:
1. Shut down the NetworkManager service
systemctl stop NetworkManager
systemctl disable NetworkManager
2. Restart the network
systemctl restart network