I recorded these notes a long time ago, while I was still quite muddled; I will revise them later with a deeper understanding.
1, The Docker "three musketeers"
①, docker machine
Installs the container engine on virtual machines and manages those container hosts
②, docker compose
Defines and runs multi-container applications on a single host through a YAML file (see the sketch after this list)
③, docker swarm
Manages clusters of docker container hosts
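A minimal sketch of how compose works, assuming a hypothetical two-service app (nginx plus redis); the service names and images are illustrative, not from the original notes:

```bash
# Write a hypothetical two-service compose file, then bring it up
cat > docker-compose.yml <<'EOF'
version: "3"
services:
  web:
    image: nginx:alpine
    ports:
      - "8080:80"   # host port 8080 -> container port 80
  cache:
    image: redis:alpine
EOF
docker-compose up -d   # start both containers in the background
docker-compose ps      # verify they are running
```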
2, K8s functions
1. Automatic bin packing
Deploys application containers automatically, based on the resource requirements you configure
2. Self-healing
①. If a container fails, it is restarted automatically
②. If a deployed node has a problem, its containers are redeployed and rescheduled automatically
③. If a container fails its health checks, it is shut down
④. A container does not serve external traffic until it is running normally
3. Horizontal scaling
Application containers can be scaled up or down (see the kubectl sketch after this list)
4. Service discovery
Discovery and load balancing work automatically, without additional services
5. Rolling update
Updates can be applied all at once or in batches
6. Version rollback
Roll forward or backward between versions
7. Secret and configuration management
Secrets and application configuration can be deployed and updated in place, similar to hot deployment
8. Storage orchestration
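A hedged sketch of functions 3, 5 and 6 in practice, assuming a Deployment named web already exists (the name and image are illustrative):

```bash
kubectl scale deployment web --replicas=5         # 3. horizontal scaling: grow to 5 replicas
kubectl set image deployment/web web=nginx:1.19   # 5. rolling update to a new image
kubectl rollout status deployment/web             # watch the rollout progress
kubectl rollout undo deployment/web               # 6. version rollback to the previous revision
```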
3, Introduction to K8s cluster
A K8s cluster uses a central (master/worker) architecture, composed of Master Nodes and Worker Nodes.
Master Node: the control node; it schedules and manages the cluster, and accepts requests from users outside the cluster. Components: API Server, Scheduler, ETCD database, Controller Manager Server.
Worker Node: the work node, responsible for running the business application containers. Components: kubelet, kube-proxy, container runtime.
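Once the cluster is running, these components can be seen directly as pods; a quick check:

```bash
kubectl get pods -n kube-system -o wide   # control-plane pods (apiserver, scheduler, etcd, controller-manager) sit on the master; kube-proxy runs on every node
```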
4, Kubeadm cluster construction
1. Preliminary preparation
**Note:** prepare three CentOS 7 systems. CentOS 8 ships with podman instead, which makes installing docker troublesome.
1. Initial preparation on all hosts: disable the firewall, SELinux and the swap partition, and configure hostname resolution. For example:
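A sketch of the usual prep commands, run on every node (worker1 and the .101/.102 addresses are assumptions; master1, worker2 and 10.0.0.100 appear later in these notes):

```bash
systemctl disable --now firewalld                                      # stop the firewall
setenforce 0
sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config    # disable SELinux permanently
swapoff -a
sed -i '/swap/s/^/#/' /etc/fstab                                       # disable swap now and on reboot
cat >> /etc/hosts <<'EOF'
10.0.0.100 master1
10.0.0.101 worker1
10.0.0.102 worker2
EOF
```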
2. Add bridge filter rules:
①,
```bash
vim /etc/sysctl.d/k8s.conf

net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
vm.swappiness = 0   # if swap is already disabled, this line is not needed
```
②,
```bash
modprobe br_netfilter
lsmod | grep br_netfilter
```
③,
```bash
sysctl -p /etc/sysctl.d/k8s.conf
```
3. Enable ipvs (IP Virtual Server), which provides the virtual IP (VIP)
```bash
yum install -y ipset ipvsadm

vim /etc/sysconfig/modules/ipvs.modules

#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4

chmod 755 /etc/sysconfig/modules/ipvs.modules
bash /etc/sysconfig/modules/ipvs.modules
lsmod | grep ip_vs
```
4. Install Docker CE on all nodes
Because the k8s cluster cannot manage containers directly (its smallest management unit is the pod), it needs docker as the container management tool.
It is best to install the pinned version docker-ce-18.06.3.ce-3.el7. For other versions, edit /lib/systemd/system/docker.service and remove everything from -H onward in the ExecStart line.
Enable docker to start on boot and start the service, then point its cgroup driver at systemd (a sketch of the install-and-enable steps follows the snippet below):
```bash
vim /etc/docker/daemon.json

{
  "exec-opts": ["native.cgroupdriver=systemd"]
}

systemctl restart docker
```
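For reference, a hedged sketch of the install-and-enable steps described above, assuming the Aliyun mirror of the docker-ce repo:

```bash
yum install -y yum-utils                     # provides yum-config-manager
yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
yum install -y docker-ce-18.06.3.ce-3.el7    # the pinned version mentioned above
systemctl enable --now docker                # start on boot and start immediately
```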
5. k8s required software installation and configuration
① . installation
Note: configure the kubernetes yum repo from the Aliyun mirror
kubeadm: initializes the cluster, manages the cluster, etc.
kubelet: accepts instructions from the API server and manages the pod lifecycle
kubectl: the cluster management command-line tool
```bash
vim /etc/yum.repos.d/k8s.repo

[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg

yum list | grep kubeadm   # note: it will pause and wait for you to enter y to confirm importing the GPG key
yum install -y kubeadm kubelet kubectl
```
② Configure kubelet
Because kubelet's default cgroup driver is inconsistent with the one docker uses, modify the kubelet configuration file:
```bash
vim /etc/sysconfig/kubelet

KUBELET_EXTRA_ARGS="--cgroup-driver=systemd"
```
Since kubeadm has not initialized the cluster yet (the kubelet configuration file is only generated during initialization), for now just enable kubelet to start on boot.
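That boot-time step is just:

```bash
systemctl enable kubelet   # do not start it yet; kubeadm init will start it
```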
2. Initialize deployment
1. Since kubeadm deploys the cluster with all core components running as pods (containers), the images need to be prepared first.
Master node:
```bash
kubeadm config images list   # see which core components are required
kubeadm config images list >> images.list

# Write a pull script. Because Google's registry (k8s.gcr.io) is unreachable, the Aliyun mirror is used
vim images.list

#!/bin/bash
# Aliyun mirror repository
DOCKER_HUB=registry.cn-hangzhou.aliyuncs.com/google_containers
# Original repository
YUANLAI_HUB=k8s.gcr.io
# Image list
images='/kube-apiserver:v1.19.4
/kube-controller-manager:v1.19.4
/kube-scheduler:v1.19.4
/kube-proxy:v1.19.4
/pause:3.2
/etcd:3.4.13-0
/coredns:1.7.0'

# Pull from the mirror, retag with the original name, then drop the mirror tag
for img in ${images}; do
    docker pull ${DOCKER_HUB}${img}
    docker tag ${DOCKER_HUB}${img} ${YUANLAI_HUB}${img}
    docker rmi ${DOCKER_HUB}${img}
done
```
Worker node:
Only kube-proxy and pause are needed there; package them on the master node and transfer them to the worker nodes (a sketch follows).
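A sketch of that transfer, using docker save/load (the tar file name and worker path are illustrative):

```bash
# On the master node: bundle the two images into one tar file
docker save -o worker-images.tar k8s.gcr.io/kube-proxy:v1.19.4 k8s.gcr.io/pause:3.2
scp worker-images.tar worker1:/root/
# On each worker node: load them back
docker load -i /root/worker-images.tar
```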
2. Cluster initialization: Master node operation
Initialization does the following: checks the version to use; checks whether the images are ready (prepared in the previous step); checks that the swap partition is disabled; starts kubelet; generates certificates for each component; generates the various configuration files.
```bash
kubeadm init --apiserver-advertise-address=10.0.0.100 \
  --image-repository registry.aliyuncs.com/google_containers \
  --kubernetes-version v1.19.3 \
  --pod-network-cidr=10.244.0.0/16
```
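On success, kubeadm init prints the follow-up steps; the usual one is to copy the admin kubeconfig so that kubectl works on the master:

```bash
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
```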
Worker node:
```bash
kubeadm join 10.0.0.100:6443 --token 1gkiq9.bhlaqcdunhtd2jm8 \
  --discovery-token-ca-cert-hash sha256:f3a6042d206adc795187ea4c95a4c9bcc2f65e17c9b64357a60b12b13d05e0ae
[preflight] Running pre-flight checks
```
5, Problems
1. Problems with the master node:
```bash
[root@master1 ~]# kubectl get nodes
NAME      STATUS   ROLES    AGE   VERSION
master1   Ready    master   27h   v1.19.3
[root@master1 ~]# kubectl get cs
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS      MESSAGE                                                                                       ERROR
scheduler            Unhealthy   Get "http://127.0.0.1:10251/healthz": dial tcp 127.0.0.1:10251: connect: connection refused
controller-manager   Unhealthy   Get "http://127.0.0.1:10252/healthz": dial tcp 127.0.0.1:10252: connect: connection refused
etcd-0               Healthy     {"health":"true"}
```
Solution:
1. Check the local ports first: confirm that ports 10251 and 10252 are not listening; if they are not, the components really are failing. For example:
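That port check is just:

```bash
ss -ntl | grep -E '10251|10252'   # empty output means neither component is listening
```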
2. Modify the scheduler and controller-manager manifests
```bash
vim /etc/kubernetes/manifests/kube-scheduler.yaml
vim /etc/kubernetes/manifests/kube-controller-manager.yaml

# Comment out --port=0 in both files:
#    - --port=0

systemctl restart kubelet
```
Recheck:
```bash
[root@master1 ~]# ss -ntl
State   Recv-Q  Send-Q  Local Address:Port   Peer Address:Port
LISTEN  0       128     127.0.0.1:10248      *:*
LISTEN  0       128     127.0.0.1:10249      *:*
LISTEN  0       128     127.0.0.1:2379       *:*
LISTEN  0       128     10.0.0.100:2379      *:*
LISTEN  0       128     10.0.0.100:2380      *:*
LISTEN  0       128     127.0.0.1:2381       *:*
LISTEN  0       128     127.0.0.1:10257      *:*
LISTEN  0       128     127.0.0.1:10259      *:*
LISTEN  0       128     127.0.0.1:46580      *:*
LISTEN  0       128     *:22                 *:*
LISTEN  0       100     127.0.0.1:25         *:*
LISTEN  0       128     [::]:10250           [::]:*
LISTEN  0       128     [::]:10251           [::]:*
LISTEN  0       128     [::]:6443            [::]:*
LISTEN  0       128     [::]:10252           [::]:*
LISTEN  0       128     [::]:10256           [::]:*
LISTEN  0       128     [::]:22              [::]:*
LISTEN  0       100     [::1]:25             [::]:*
[root@master1 ~]# kubectl get nodes
NAME      STATUS   ROLES    AGE   VERSION
master1   Ready    master   27h   v1.19.3
[root@master1 ~]# kubectl get cs
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS    MESSAGE             ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-0               Healthy   {"health":"true"}
```
2. Error when joining from a worker node
Joining the master requires two credentials: the token and the CA cert hash generated when the master was created. The hash does not change, but the token had expired (by default a kubeadm token is valid for 24 hours).
```bash
[root@worker2 ~]# kubeadm join 10.0.0.100:6443 --token 1gkiq9.bhlaqcdunhtd2jm8 \
  --discovery-token-ca-cert-hash sha256:f3a6042d206adc795187ea4c95a4c9bcc2f65e17c9b64357a60b12b13d05e0ae
[preflight] Running pre-flight checks
        [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
error execution phase preflight: couldn't validate the identity of the API Server: could not find a JWS signature in the cluster-info ConfigMap for token ID "1gkiq9"
To see the stack trace of this error execute with --v=5 or higher
```
Solution: generate a new token on the master node, then join again from the worker node
```bash
[root@master1 ~]# kubeadm token create
W1113 17:09:58.951005   14398 configset.go:348] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
r0d1gj.qturo8s7g64gac60
```
Join from the worker node:
```bash
kubeadm join 10.0.0.100:6443 --token r0d1gj.qturo8s7g64gac60 \
  --discovery-token-ca-cert-hash sha256:f3a6042d206adc795187ea4c95a4c9bcc2f65e17c9b64357a60b12b13d05e0ae
```
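A convenient alternative that avoids copying the hash by hand: kubeadm can print the complete join command directly.

```bash
kubeadm token create --print-join-command   # prints a ready-to-run kubeadm join line with a fresh token
```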