Grain Mall - High Availability Cluster

1. K8s quick start

1) Introduction

Kubernetes is called K8s for short. It is an open-source system for automatically deploying, scaling, and managing containerized applications.
Chinese official website: https://kubernetes.io/Zh/
Chinese community: https://www.kubernetes.org.cn/
Official documents: https://kubernetes.io/zh/docs/home/
Community documentation: https://docs.kubernetes.org.cn/

Evolution of deployment:

(image: images/image-20200503105948619.png)

(image: images/image-202005031101659.png)

2) Architecture

(1) Overall master-slave mode

(image: images/image-20200503110244940.png)

(image: images/image-2020050310350256.png)

(2) master node architecture

(image: images/image-20200503110458806.png)

(image: images/image-20200503110631219.png)

(image: images/image-20200503110732773.png)

(3) Node architecture

(image: images/image-2020050310804361.png)

(image: images/image-2020050311032457.png)

3) Concept

(image: images/image-202005031122551188.png)

(image: images/image-202005031122627449.png)

(image: images/image-20200503112272377.png)

(image: images/image-202005031122810938.png)

(image: images/image-20200503113055314.png)

(image: images/image-20200503113619233.png)

(image: images/image-20200503113701902.png)

4) Quick experience

(1) Install minikube

https://github.com/kubernetes/minikube/releases
Download minikube-windows-amd64.exe and rename it to minikube.exe
Open VirtualBox, then open cmd
Run:
minikube start --vm-driver=virtualbox --registry-mirror=https://registry.docker-cn.com
Wait about 20 minutes.
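
Once minikube has started, the single-node cluster can be verified with the following (a sketch; it assumes minikube has set the current kubectl context):

minikube status        # host, kubelet and apiserver should all be Running
kubectl get nodes      # a single "minikube" node in Ready state
kubectl cluster-info   # shows the API server address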

(2) Experience nginx deployment upgrade

  1. Submit a nginx deployment
    kubectl apply -f https://k8s.io/examples/application/deployment.yaml

  2. Upgrade nginx deployment
    kubectl apply -f https://k8s.io/examples/application/deployment-update.yaml

  3. Expand nginx deployment
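
The original notes give no command for step 3; as a minimal sketch, assuming the Deployment created by the example yaml is named nginx-deployment:

    kubectl scale deployment nginx-deployment --replicas=4   # scale out to 4 replicas
    kubectl get pods -l app=nginx                            # verify the extra pods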

2. K8s cluster installation

1)kubeadm

kubeadm is a tool provided by the official community for rapidly deploying Kubernetes clusters.
With this tool, a Kubernetes cluster can be deployed using just two commands.

Create a master node

$ kubeadm init

Add a worker node to the current cluster

$ kubeadm join <master node IP and port>

2) Prerequisites

One or more machines running CentOS 7.x-86_x64
Hardware: 2 GB or more RAM, 2 or more CPUs, 30 GB or more hard disk
Network connectivity among all machines in the cluster
Internet access on each machine (needed to pull images)
Swap partition disabled
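
A quick way to check these prerequisites on each machine (a sketch using standard CentOS 7 commands):

cat /etc/redhat-release   # OS version
nproc                     # number of CPUs, should be 2 or more
free -g                   # RAM in GB (2 or more) and swap (0 after it is disabled)
df -h /                   # available disk space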

3) Deployment steps

  1. Install Docker and kubeadm on all nodes
  2. Deploy the Kubernetes Master
  3. Deploy a container network plug-in
  4. Deploy the Kubernetes worker nodes and join them to the cluster
  5. Deploy the DashBoard web UI to view Kubernetes resources visually

(image: images/image-2020050314120720.png)

4) Environment preparation

(1) Preparations

  • We can use Vagrant to create three virtual machines quickly. Before starting them, set up the VirtualBox host-only network; here it is unified as 192.168.56.1, and all the virtual machines will later get IP addresses in the 192.168.56.x range.

(image: images/image-20200503175351320.png)

  • In the VirtualBox global settings, choose a disk with plenty of free space to store the virtual machine images.

(image: images/image-20200503180202640.png)

(2) Start three virtual machines

  • Use the Vagrantfile provided with the course, copy it to a directory whose path contains no Chinese characters or spaces, and run vagrant up to start the three virtual machines. Vagrant can in fact also deploy an entire K8s cluster in one click; see for example:
    https://github.com/rootsongjc/kubernetes-vagrant-centos-cluster
    http://github.com/davidkbainbridge/k8s-playground

Here is the Vagrantfile, which creates three virtual machines: k8s-node1, k8s-node2 and k8s-node3.

Vagrant.configure("2") do |config|
   (1..3).each do |i|
        config.vm.define "k8s-node#{i}" do |node|
            # Set the Box of virtual machine
            node.vm.box = "centos/7"

            # Set the host name of the virtual machine
            node.vm.hostname="k8s-node#{i}"

            # Set IP of virtual machine
            node.vm.network "private_network", ip: "192.168.56.#{99+i}", netmask: "255.255.255.0"

            # Set the shared directory between the host and the virtual machine
            # node.vm.synced_folder "~/Documents/vagrant/share", "/home/vagrant/share"

            # VirtualBox related configuration
            node.vm.provider "virtualbox" do |v|
                # Set the name of the virtual machine
                v.name = "k8s-node#{i}"
                # Set the memory size of the virtual machine
                v.memory = 4096
                # Set the number of CPUs of the virtual machine
                v.cpus = 4
            end
        end
   end
end
  • Enter each of the three virtual machines and enable root password access over SSH.
After entering the system with vagrant ssh xxx, switch to root:

su root   # the password is vagrant

vi /etc/ssh/sshd_config

Modify:
PermitRootLogin yes
PasswordAuthentication yes

Then restart sshd so the change takes effect: service sshd restart

All virtual machines are set to 4-core 4G

With the default "network address translation" (NAT) connection mode, the eth0 IP addresses of the three nodes are all the same.

**Problem description:** view the routing table of k8s-node1:

[root@k8s-node1 ~]# ip route show
default via 10.0.2.2 dev eth0 proto dhcp metric 100 
10.0.2.0/24 dev eth0 proto kernel scope link src 10.0.2.15 metric 100 
192.168.56.0/24 dev eth1 proto kernel scope link src 192.168.56.100 metric 101 
[root@k8s-node1 ~]#

As the routing table shows, packets are sent and received through the eth0 interface by default.

Check the IP addresses bound to eth0 on k8s-node1, k8s-node2 and k8s-node3: they are all the same, 10.0.2.15, and this address is used for Kubernetes cluster communication. The addresses on eth1 differ between nodes and are used for remote management.

[root@k8s-node1 ~]# ip addr
...
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 52:54:00:8a:fe:e6 brd ff:ff:ff:ff:ff:ff
    inet 10.0.2.15/24 brd 10.0.2.255 scope global noprefixroute dynamic eth0
       valid_lft 84418sec preferred_lft 84418sec
    inet6 fe80::5054:ff:fe8a:fee6/64 scope link 
       valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 08:00:27:a3:ca:c0 brd ff:ff:ff:ff:ff:ff
    inet 192.168.56.100/24 brd 192.168.56.255 scope global noprefixroute eth1
       valid_lft forever preferred_lft forever
    inet6 fe80::a00:27ff:fea3:cac0/64 scope link 
       valid_lft forever preferred_lft forever
[root@k8s-node1 ~]# 

**Cause analysis:** this is because the machines use VirtualBox port-forwarding rules: they all share the same address and are distinguished only by different ports. This kind of port forwarding causes many unnecessary problems later on, so the network type needs to be changed to a NAT network.

(image: images/image-20200503184536343.png)

Solution (a command-line sketch follows the list):

  • In VirtualBox, open the Manage > Global Settings > Network preferences and add a NAT network.
  • For each of the three machines, change the network adapter type to that NAT network, and refresh to regenerate the MAC address.
  • Check the IP addresses of the three nodes again; they should now all be different.
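
The same change can also be made from the command line with VBoxManage; this is only a sketch, assuming the NAT network is named NatNetwork and the VM names used above:

VBoxManage natnetwork add --netname NatNetwork --network "10.0.2.0/24" --enable --dhcp on   # create the NAT network
VBoxManage modifyvm "k8s-node1" --nic1 natnetwork --nat-network1 NatNetwork                 # switch adapter 1 (repeat for node2/node3)
VBoxManage modifyvm "k8s-node1" --macaddress1 auto                                          # regenerate the MAC address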

(3) Setting up the Linux environment (all three nodes execute)

  • Turn off firewall
systemctl stop firewalld
systemctl disable firewalld
  • Turn off SELinux
sed -i 's/enforcing/disabled/' /etc/selinux/config
setenforce 0
  • Turn off swap
swapoff -a #Temporarily disable
sed -ri 's/.*swap.*/#&/' /etc/fstab #Permanently disable
free -g #Verify that swap is 0
  • Add host name and IP correspondence:

View host name:

hostname

If the hostname is not correct, you can change it with the command "hostnamectl set-hostname <newhostname>".

vi /etc/hosts
10.0.2.15 k8s-node1
10.0.2.4 k8s-node2
10.0.2.5 k8s-node3

Pass bridged IPv4 traffic to the iptables chains:

cat > /etc/sysctl.d/k8s.conf <<EOF

net.bridge.bridge-nf-call-ip6tables = 1

net.bridge.bridge-nf-call-iptables = 1

EOF

Apply the rules:

sysctl --system

Possible problem: if you encounter a read-only file system, run the following command:

mount -o remount rw /
  • Synchronize the time (optional)
yum -y install ntpdate

ntpdate time.windows.com #Synchronize with the time server

5) Install docker, kubeadm, kubelet and kubectl on all nodes

The default CRI (container runtime) for Kubernetes is Docker, so install Docker first.

(1) Install Docker

1. Uninstall previous versions of Docker

$ sudo yum remove docker \
                  docker-client \
                  docker-client-latest \
                  docker-common \
                  docker-latest \
                  docker-latest-logrotate \
                  docker-logrotate \
                  docker-engine

2. Install docker-ce

$ sudo yum install -y yum-utils

$ sudo yum-config-manager \
    --add-repo \
    https://download.docker.com/linux/centos/docker-ce.repo
    
$ sudo yum -y install docker-ce docker-ce-cli containerd.io   

3. Configure image acceleration

sudo mkdir -p /etc/docker
sudo tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://ke9h1pt4.mirror.aliyuncs.com"]
}
EOF
sudo systemctl daemon-reload
sudo systemctl restart docker

4. Start Docker && set Docker to start on boot

systemctl enable docker

The basic environment is now ready; back up the three virtual machines.

(image: images/image-20200503192940651.png)

(2) Add the Alibaba Cloud yum source

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

For more details, see: https://developer.aliyun.com/mirror/kubernetes

(3) Install kubeadm, kubelet and kubectl

yum list|grep kube

Install:

yum install -y kubelet-1.17.3 kubeadm-1.17.3 kubectl-1.17.3

Enable and start kubelet:

systemctl enable kubelet && systemctl start kubelet

To view the status of kubelet:

systemctl status kubelet

To view the kubelet version:

[root@k8s-node2 ~]# kubelet --version
Kubernetes v1.17.3

6) Deploy k8s master

(1) master node initialization

On the master node, create and execute master_images.sh:

#!/bin/bash

images=(
	kube-apiserver:v1.17.3
    kube-proxy:v1.17.3
	kube-controller-manager:v1.17.3
	kube-scheduler:v1.17.3
	coredns:1.6.5
	etcd:3.4.3-0
    pause:3.1
)

for imageName in ${images[@]} ; do
    docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName
#   docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName  k8s.gcr.io/$imageName
done

Initialize kubeadm

$ kubeadm init \
--apiserver-advertise-address=10.0.2.15 \
--image-repository registry.cn-hangzhou.aliyuncs.com/google_containers \
--kubernetes-version   v1.17.3 \
--service-cidr=10.96.0.0/16  \
--pod-network-cidr=10.244.0.0/16

Note:

  • --apiserver-advertise-address=10.0.2.15: the IP address here is the master's address, i.e. the address of the eth0 network card shown above;

Execution result:

[root@k8s-node1 opt]# kubeadm init \
> --apiserver-advertise-address=10.0.2.15 \
> --image-repository registry.cn-hangzhou.aliyuncs.com/google_containers \
> --kubernetes-version   v1.17.3 \
> --service-cidr=10.96.0.0/16  \
> --pod-network-cidr=10.244.0.0/16
W0503 14:07:12.594252   10124 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[init] Using Kubernetes version: v1.17.3
[preflight] Running pre-flight checks
        [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-node1 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.0.2.15]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-node1 localhost] and IPs [10.0.2.15 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-node1 localhost] and IPs [10.0.2.15 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
W0503 14:07:30.908642   10124 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[control-plane] Creating static Pod manifest for "kube-scheduler"
W0503 14:07:30.911330   10124 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 22.506521 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node k8s-node1 as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node k8s-node1 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: sg47f3.4asffoi6ijb8ljhq
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
#Indicates that kubernetes has been initialized successfully
Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 10.0.2.15:6443 --token sg47f3.4asffoi6ijb8ljhq \
    --discovery-token-ca-cert-hash sha256:81fccdd29970cbc1b7dc7f171ac0234d53825bdf9b05428fc9e6767436991bfb 
[root@k8s-node1 opt]# 

Because the default image registry k8s.gcr.io cannot be accessed from inside China, the Alibaba Cloud registry address is specified here. You can also pull the images in advance with the images script provided above.

Changing the registry address to registry.aliyuncs.com/google_containers also works.
Note: Classless Inter-Domain Routing (CIDR) is a method for allocating IP addresses to users and for routing IP packets on the Internet; for example, --pod-network-cidr=10.244.0.0/16 reserves the 2^16 addresses 10.244.0.0-10.244.255.255 for Pods.
The pull may still fail; in that case download the images manually first.

When initialization completes, copy the kubeadm join ... --token ... command printed at the end of the output; it will be used later to join nodes to the cluster.

(2) Test kubectl (execute on the master)

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Detailed deployment documents: https://kubernetes.io/docs/concepts/cluster-administration/addons/

$ kubectl get nodes #Get all nodes

At this point the master's status is NotReady; it becomes Ready once the network plug-in has been installed.

$ journalctl -u kubelet #View kubelet logs
kubeadm join 10.0.2.15:6443 --token sg47f3.4asffoi6ijb8ljhq \
    --discovery-token-ca-cert-hash sha256:81fccdd29970cbc1b7dc7f171ac0234d53825bdf9b05428fc9e6767436991bfb 

7) Install the Pod network plug-in (CNI)

Install the Pod network plug-in on the master node:

kubectl apply -f \
https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

The above address may be blocked; you can instead apply a locally downloaded kube-flannel.yml, for example:

[root@k8s-node1 k8s]# kubectl apply -f  kube-flannel.yml    
podsecuritypolicy.policy/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds-amd64 created
daemonset.apps/kube-flannel-ds-arm64 created
daemonset.apps/kube-flannel-ds-arm created
daemonset.apps/kube-flannel-ds-ppc64le created
daemonset.apps/kube-flannel-ds-s390x created
[root@k8s-node1 k8s]#

If the images specified in flannel.yml cannot be pulled either, search Docker Hub for a mirror, wget the yml and use vi to modify all of the amd64 image addresses in it.
Wait about 3 minutes.

kubectl get pods -n kube-system    # view pods in the specified namespace
kubectl get pods --all-namespaces  # view pods in all namespaces

If there is a problem with the network, run ip link set cni0 down to shut down cni0, then restart the virtual machine and continue the test.
Run watch kubectl get pod -n kube-system -o wide to monitor pod progress.
Wait 3-10 minutes until everything is Running, then continue.

View namespace:

[root@k8s-node1 k8s]# kubectl get ns
NAME              STATUS   AGE
default           Active   30m
kube-node-lease   Active   30m
kube-public       Active   30m
kube-system       Active   30m
[root@k8s-node1 k8s]#
[root@k8s-node1 k8s]# kubectl get pods --all-namespaces       
NAMESPACE     NAME                                READY   STATUS    RESTARTS   AGE
kube-system   coredns-546565776c-9sbmk            0/1     Pending   0          31m
kube-system   coredns-546565776c-t68mr            0/1     Pending   0          31m
kube-system   etcd-k8s-node1                      1/1     Running   0          31m
kube-system   kube-apiserver-k8s-node1            1/1     Running   0          31m
kube-system   kube-controller-manager-k8s-node1   1/1     Running   0          31m
kube-system   kube-flannel-ds-amd64-6xwth         1/1     Running   0          2m50s
kube-system   kube-proxy-sz2vz                    1/1     Running   0          31m
kube-system   kube-scheduler-k8s-node1            1/1     Running   0          31m
[root@k8s-node1 k8s]# 

To view the node information on the master:

[root@k8s-node1 k8s]# kubectl get nodes
NAME        STATUS   ROLES    AGE   VERSION
k8s-node1   Ready    master   34m   v1.17.3   #the status is Ready; now execute the following commands
[root@k8s-node1 k8s]#

Finally, execute the following join command on "k8s-node2" and "k8s-node3" respectively:

kubeadm join 10.0.2.15:6443 --token sg47f3.4asffoi6ijb8ljhq \
    --discovery-token-ca-cert-hash sha256:81fccdd29970cbc1b7dc7f171ac0234d53825bdf9b05428fc9e6767436991bfb 
[root@k8s-node1 opt]# kubectl get nodes;
NAME        STATUS     ROLES    AGE   VERSION
k8s-node1   Ready      master   47m   v1.17.3
k8s-node2   NotReady   <none>   75s   v1.17.3
k8s-node3   NotReady   <none>   76s   v1.17.3
[root@k8s-node1 opt]# 

Monitor pod progress

watch kubectl get pod -n kube-system -o wide

Wait until all status changes to running status, and check the node information again:

[root@k8s-node1 ~]#  kubectl get nodes;                         
NAME        STATUS   ROLES    AGE     VERSION
k8s-node1   Ready    master   3h50m   v1.17.3
k8s-node2   Ready    <none>   3h3m    v1.17.3
k8s-node3   Ready    <none>   3h3m    v1.17.3
[root@k8s-node1 ~]# 

8) Join worker nodes to Kubernetes

Execute on each worker node to add it to the cluster: run the kubeadm join command that was output by kubeadm init.
Confirm that the node joined successfully.
If the token has expired, generate a new join command:
kubeadm token create --print-join-command

9) Getting started with kubernetes cluster

1. Deploy a tomcat on the master node

kubectl create deployment tomcat6 --image=tomcat:6.0.53-jre8

Get all resources:

[root@k8s-node1 k8s]# kubectl get all
NAME                           READY   STATUS              RESTARTS   AGE
pod/tomcat6-7b84fb5fdc-cfd8g   0/1     ContainerCreating   0          41s

NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
service/kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   70m

NAME                      READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/tomcat6   0/1     1            0           41s

NAME                                 DESIRED   CURRENT   READY   AGE
replicaset.apps/tomcat6-7b84fb5fdc   1         1         0       41s
[root@k8s-node1 k8s]# 

kubectl get all -o wide shows the tomcat deployment information, and you can see that it has been deployed to k8s-node2:

[root@k8s-node1 k8s]# kubectl get all -o wide
NAME                           READY   STATUS    RESTARTS   AGE    IP           NODE        NOMINATED NODE   READINESS GATES
pod/tomcat6-7b84fb5fdc-cfd8g   1/1     Running   0          114s   10.244.2.2   k8s-node2   <none>           <none>

NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE   SELECTOR
service/kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   71m   <none>

NAME                      READY   UP-TO-DATE   AVAILABLE   AGE    CONTAINERS   IMAGES               SELECTOR
deployment.apps/tomcat6   1/1     1            1           114s   tomcat       tomcat:6.0.53-jre8   app=tomcat6

NAME                                 DESIRED   CURRENT   READY   AGE    CONTAINERS   IMAGES               SELECTOR
replicaset.apps/tomcat6-7b84fb5fdc   1         1         1       114s   tomcat       tomcat:6.0.53-jre8   app=tomcat6,pod-template-hash=7b84fb5fdc
[root@k8s-node1 k8s]# 

Check which images are downloaded on node2 node:

[root@k8s-node2 opt]# docker images
REPOSITORY                                                       TAG                 IMAGE ID            CREATED             SIZE
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy   v1.17.3             0d40868643c6        2 weeks ago         117MB
registry.cn-hangzhou.aliyuncs.com/google_containers/pause        3.2                 80d28bedfe5d        2 months ago        683kB
quay.io/coreos/flannel                                           v0.11.0-amd64       ff281650a721        15 months ago       52.6MB
tomcat                                                           6.0.53-jre8         49ab0583115a        2 years ago         290MB
[root@k8s-node2 opt]# 

To view the running containers on the Node2 node:

[root@k8s-node2 opt]# docker ps
CONTAINER ID        IMAGE                                                            COMMAND                  CREATED             STATUS              PORTS               NAMES
9194cc4f0b7a        tomcat                                                           "catalina.sh run"        2 minutes ago       Up 2 minutes                            k8s_tomcat_tomcat6-7b84fb5fdc-cfd8g_default_0c9ebba2-992d-4c0e-99ef-3c4c3294bc59_0
f44af0c7c345        registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.2    "/pause"                 3 minutes ago       Up 3 minutes                            k8s_POD_tomcat6-7b84fb5fdc-cfd8g_default_0c9ebba2-992d-4c0e-99ef-3c4c3294bc59_0
ef74c90491e4        ff281650a721                                                     "/opt/bin/flanneld -..."   20 minutes ago      Up 20 minutes                           k8s_kube-flannel_kube-flannel-ds-amd64-5xs5j_kube-system_11a94346-316d-470b-9668-c15ce183abec_0
c8a524e5a193        registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy   "/usr/local/bin/kube..."   25 minutes ago      Up 25 minutes                           k8s_kube-proxy_kube-proxy-mvlnk_kube-system_519de79a-e8d8-4b1c-a74e-94634cebabce_0
4590685c519a        registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.2    "/pause"                 26 minutes ago      Up 26 minutes                           k8s_POD_kube-flannel-ds-amd64-5xs5j_kube-system_11a94346-316d-470b-9668-c15ce183abec_0
54e00af5cde4        registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.2    "/pause"                 26 minutes ago      Up 26 minutes                           k8s_POD_kube-proxy-mvlnk_kube-system_519de79a-e8d8-4b1c-a74e-94634cebabce_0
[root@k8s-node2 opt]# 

On node1:

[root@k8s-node1 k8s]# kubectl get pods
NAME                       READY   STATUS    RESTARTS   AGE
tomcat6-7b84fb5fdc-cfd8g   1/1     Running   0          5m35s

[root@k8s-node1 k8s]# kubectl get pods --all-namespaces
NAMESPACE     NAME                                READY   STATUS    RESTARTS   AGE
default       tomcat6-7b84fb5fdc-cfd8g            1/1     Running   0          163m
kube-system   coredns-546565776c-9sbmk            1/1     Running   0          3h52m
kube-system   coredns-546565776c-t68mr            1/1     Running   0          3h52m
kube-system   etcd-k8s-node1                      1/1     Running   0          3h52m
kube-system   kube-apiserver-k8s-node1            1/1     Running   0          3h52m
kube-system   kube-controller-manager-k8s-node1   1/1     Running   0          3h52m
kube-system   kube-flannel-ds-amd64-5xs5j         1/1     Running   0          3h6m
kube-system   kube-flannel-ds-amd64-6xwth         1/1     Running   0          3h24m
kube-system   kube-flannel-ds-amd64-fvnvx         1/1     Running   0          3h6m
kube-system   kube-proxy-7tkvl                    1/1     Running   0          3h6m
kube-system   kube-proxy-mvlnk                    1/1     Running   0          3h6m
kube-system   kube-proxy-sz2vz                    1/1     Running   0          3h52m
kube-system   kube-scheduler-k8s-node1            1/1     Running   0          3h52m
[root@k8s-node1 ~]# 

From the above we can see that tomcat is deployed on node2. Now simulate a node failure: shut down node2 and observe what happens.

[root@k8s-node1 ~]# kubectl get nodes
NAME        STATUS     ROLES    AGE     VERSION
k8s-node1   Ready      master   4h4m    v1.17.3
k8s-node2   NotReady   <none>   3h18m   v1.17.3
k8s-node3   Ready      <none>   3h18m   v1.17.3
[root@k8s-node1 ~]# 
[root@k8s-node1 ~]# kubectl get pods -o wide
NAME                       READY   STATUS    RESTARTS   AGE    IP           NODE        NOMINATED NODE   READINESS GATES
tomcat6-7b84fb5fdc-cfd8g   1/1     Running   0          177m   10.244.2.2   k8s-node2   <none>           <none>
[root@k8s-node1 ~]# 

(image: images/image-20200504104925236.png)

2. Expose tomcat6 for access

Execute on master

kubectl expose deployment tomcat6 --port=80 --target-port=8080 --type=NodePort 

Port 80 of the Service maps to port 8080 of the container; because the type is NodePort, the Service's port 80 is additionally exposed on a randomly assigned port of every node.

View services:

[root@k8s-node1 ~]# kubectl get svc
NAME         TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP   10.96.0.1      <none>        443/TCP        12h
tomcat6      NodePort    10.96.24.191   <none>        80:30526/TCP   49s
[root@k8s-node1 ~]# 
[root@k8s-node1 ~]# kubectl get svc -o wide
NAME         TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE     SELECTOR
kubernetes   ClusterIP   10.96.0.1      <none>        443/TCP        12h     <none>
tomcat6      NodePort    10.96.24.191   <none>        80:30526/TCP   3m30s   app=tomcat6
[root@k8s-node1 ~]# 

http://192.168.56.100:30526/

(image: images/image-20200504105723874.png)

[root@k8s-node1 ~]# kubectl get all
NAME                           READY   STATUS    RESTARTS   AGE
pod/tomcat6-7b84fb5fdc-qt5jm   1/1     Running   0          13m

NAME                 TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
service/kubernetes   ClusterIP   10.96.0.1      <none>        443/TCP        12h
service/tomcat6      NodePort    10.96.24.191   <none>        80:30526/TCP   9m50s

NAME                      READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/tomcat6   1/1     1            1           11h

NAME                                 DESIRED   CURRENT   READY   AGE
replicaset.apps/tomcat6-7b84fb5fdc   1         1         1       11h
[root@k8s-node1 ~]#

3. Dynamic scaling test

kubectl get deployment

[root@k8s-node1 ~]# kubectl get deployment
NAME      READY   UP-TO-DATE   AVAILABLE   AGE
tomcat6   2/2     2            2           11h
[root@k8s-node1 ~]# 

Application upgrade: kubectl set image (use --help to view the options); a sketch is given at the end of this subsection.
Scale out: kubectl scale --replicas=3 deployment tomcat6

[root@k8s-node1 ~]# kubectl scale --replicas=3 deployment tomcat6
deployment.apps/tomcat6 scaled
[root@k8s-node1 ~]# 

[root@k8s-node1 ~]# kubectl get pods -o wide
NAME                       READY   STATUS    RESTARTS   AGE   IP           NODE        NOMINATED NODE   READINESS GATES
tomcat6-7b84fb5fdc-hdgmc   1/1     Running   0          61s   10.244.2.5   k8s-node2   <none>           <none>
tomcat6-7b84fb5fdc-qt5jm   1/1     Running   0          19m   10.244.1.2   k8s-node3   <none>           <none>
tomcat6-7b84fb5fdc-vlrh6   1/1     Running   0          61s   10.244.2.4   k8s-node2   <none>           <none>
[root@k8s-node1 ~]# kubectl get svc -o wide    
NAME         TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE   SELECTOR
kubernetes   ClusterIP   10.96.0.1      <none>        443/TCP        13h   <none>
tomcat6      NodePort    10.96.24.191   <none>        80:30526/TCP   16m   app=tomcat6
[root@k8s-node1 ~]#

The deployment has been scaled out to multiple replicas; tomcat6 can be reached on the assigned NodePort of any node:

http://192.168.56.101:30526/

(image: images/image-2020050411008668.png)

http://192.168.56.102:30526/

(image: images/image-2020050411102496.png)

Scale in: kubectl scale --replicas=2 deployment tomcat6

[root@k8s-node1 ~]#  kubectl scale --replicas=2 deployment tomcat6
deployment.apps/tomcat6 scaled
[root@k8s-node1 ~]# kubectl get pods -o wide                       
NAME                       READY   STATUS        RESTARTS   AGE     IP           NODE        NOMINATED NODE   READINESS GATES
tomcat6-7b84fb5fdc-hdgmc   0/1     Terminating   0          4m47s   <none>       k8s-node2   <none>           <none>
tomcat6-7b84fb5fdc-qt5jm   1/1     Running       0          22m     10.244.1.2   k8s-node3   <none>           <none>
tomcat6-7b84fb5fdc-vlrh6   1/1     Running       0          4m47s   10.244.2.4   k8s-node2   <none>           <none>
[root@k8s-node1 ~]# 
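
A minimal sketch of the application upgrade mentioned above; the target tag tomcat:7.0-jre8 is only an illustration and not part of the original notes:

kubectl set image deployment/tomcat6 tomcat=tomcat:7.0-jre8   # "tomcat" is the container name in the pod template
kubectl rollout status deployment/tomcat6                     # watch the rolling update
kubectl rollout undo deployment/tomcat6                       # roll back if needed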

4. Obtaining the yaml for the above operations
The yaml can be generated with --dry-run -o yaml; see the K8s details chapter below for the full walkthrough.
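As a quick example (the same --dry-run approach is shown with its full output in the K8s details chapter):

kubectl create deployment tomcat6 --image=tomcat:6.0.53-jre8 --dry-run -o yaml > tomcat6.yaml   # write the Deployment yaml to a file instead of creating it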

5. Delete
kubectl get all

#View all resources
[root@k8s-node1 ~]# kubectl get all
NAME                           READY   STATUS    RESTARTS   AGE
pod/tomcat6-7b84fb5fdc-qt5jm   1/1     Running   0          26m
pod/tomcat6-7b84fb5fdc-vlrh6   1/1     Running   0          8m16s

NAME                 TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
service/kubernetes   ClusterIP   10.96.0.1      <none>        443/TCP        13h
service/tomcat6      NodePort    10.96.24.191   <none>        80:30526/TCP   22m

NAME                      READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/tomcat6   2/2     2            2           11h

NAME                                 DESIRED   CURRENT   READY   AGE
replicaset.apps/tomcat6-7b84fb5fdc   2         2         2       11h
[root@k8s-node1 ~]#
#delete deployment.apps/tomcat6 
[root@k8s-node1 ~]# kubectl delete  deployment.apps/tomcat6 
deployment.apps "tomcat6" deleted

#View remaining resources
[root@k8s-node1 ~]# kubectl get all   
NAME                 TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
service/kubernetes   ClusterIP   10.96.0.1      <none>        443/TCP        13h
service/tomcat6      NodePort    10.96.24.191   <none>        80:30526/TCP   30m
[root@k8s-node1 ~]# 
[root@k8s-node1 ~]#
#Delete service/tomcat6 
[root@k8s-node1 ~]# kubectl delete service/tomcat6  
service "tomcat6" deleted
[root@k8s-node1 ~]# kubectl get all
NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
service/kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   13h
[root@k8s-node1 ~]#

kubectl delete deployment/nginx
kubectl delete service/nginx-service

3. K8s details

1. kubectl documentation

​ https://kubernetes.io/zh/docs/reference/kubectl/overview/

2. Resource type

https://kubernetes.io/zh/docs/reference/kubectl/overview/#%e8%b5%84%e6%ba%90%e7%b1%bb%e5%9e%8b

3. Format output

https://kubernetes.io/zh/docs/reference/kubectl/overview/

The default output format for all kubectl commands is human-readable plain text. To output details to the terminal window in a specific format, you can add the -o or --output parameter to a supported kubectl command.

Syntax

kubectl [command] [TYPE] [NAME] -o=<output_format>

According to the kubectl operation, the following output formats are supported:

| Output format | Description |
| --- | --- |
| -o custom-columns=<spec> | Print a table using a comma-separated list of custom columns. |
| -o custom-columns-file=<filename> | Print a table using the custom-columns template in the file. |
| -o json | Output the API object in JSON format. |
| -o jsonpath=<template> | Print the fields defined by the jsonpath expression. |
| -o jsonpath-file=<filename> | Print the fields defined by the jsonpath expression in the file. |
| -o name | Print only the resource name and nothing else. |
| -o wide | Output in plain-text format with additional information; for pods, the node name is included. |
| -o yaml | Output the API object in YAML format. |
Example

In this example, the following command outputs the details of a single pod as a YAML-formatted object:

kubectl get pod web-pod-13je7 -o yaml
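
A couple of further illustrative invocations of the formats listed above (the column names are arbitrary examples):

kubectl get pods -o name                                                      # only the resource names
kubectl get pods -o custom-columns=NAME:.metadata.name,NODE:.spec.nodeName   # a custom two-column table
kubectl get pods -o jsonpath='{.items[*].status.podIP}'                      # just the pod IPs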

Remember: for more information about which output formats each command supports, see the kubectl reference documentation.

--dry-run:

--dry-run='none': Must be "none", "server", or "client". If client strategy, only print the object that would be sent, without sending it. If server strategy, submit server-side request without persisting the resource.

That is, the value must be none, server or client. With the client strategy, the object that would be sent is only printed, not actually sent; with the server strategy, a server-side request is submitted without persisting the resource.

In other words, with the --dry-run option the command is not actually executed.

[root@k8s-node1 ~]# kubectl create deployment tomcat6 --image=tomcat:6.0.53-jre8 --dry-run -o yaml
W0504 03:39:08.389369    8107 helpers.go:535] --dry-run is deprecated and can be replaced with --dry-run=client.
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: tomcat6
  name: tomcat6
spec:
  replicas: 1
  selector:
    matchLabels:
      app: tomcat6
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: tomcat6
    spec:
      containers:
      - image: tomcat:6.0.53-jre8
        name: tomcat
        resources: {}
status: {}
[root@k8s-node1 ~]# 

In fact, we can also output this yaml to a file, and then use kubectl apply -f to apply it

#Output to tomcat6.yaml 
[root@k8s-node1 ~]# kubectl create deployment tomcat6 --image=tomcat:6.0.53-jre8 --dry-run -o yaml >tomcat6.yaml
W0504 03:46:18.180366   11151 helpers.go:535] --dry-run is deprecated and can be replaced with --dry-run=client.

#replicas modified to 3
[root@k8s-node1 ~]# cat tomcat6.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: tomcat6
  name: tomcat6
spec:
  replicas: 3     #replicas modified to 3
  selector:
    matchLabels:
      app: tomcat6
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: tomcat6
    spec:
      containers:
      - image: tomcat:6.0.53-jre8
        name: tomcat
        resources: {}
status: {}

#Apply tomcat6.yaml 
[root@k8s-node1 ~]# kubectl apply -f tomcat6.yaml 
deployment.apps/tomcat6 created
[root@k8s-node1 ~]# 

To view pods:

[root@k8s-node1 ~]# kubectl get pods  
NAME                       READY   STATUS    RESTARTS   AGE
tomcat6-7b84fb5fdc-5jh6t   1/1     Running   0          8s
tomcat6-7b84fb5fdc-8lhwv   1/1     Running   0          8s
tomcat6-7b84fb5fdc-j4qmh   1/1     Running   0          8s
[root@k8s-node1 ~]#

To view the specific information of a pod:

[root@k8s-node1 ~]# kubectl get pods tomcat6-7b84fb5fdc-5jh6t  -o yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: "2020-05-04T03:50:47Z"
  generateName: tomcat6-7b84fb5fdc-
  labels:
    app: tomcat6
    pod-template-hash: 7b84fb5fdc
  managedFields:
  - apiVersion: v1
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:generateName: {}
        f:labels:
          .: {}
          f:app: {}
          f:pod-template-hash: {}
        f:ownerReferences:
          .: {}
          k:{"uid":"292bfe3b-dd63-442e-95ce-c796ab5bdcc1"}:
            .: {}
            f:apiVersion: {}
            f:blockOwnerDeletion: {}
            f:controller: {}
            f:kind: {}
            f:name: {}
            f:uid: {}
      f:spec:
        f:containers:
          k:{"name":"tomcat"}:
            .: {}
            f:image: {}
            f:imagePullPolicy: {}
            f:name: {}
            f:resources: {}
            f:terminationMessagePath: {}
            f:terminationMessagePolicy: {}
        f:dnsPolicy: {}
        f:enableServiceLinks: {}
        f:restartPolicy: {}
        f:schedulerName: {}
        f:securityContext: {}
        f:terminationGracePeriodSeconds: {}
    manager: kube-controller-manager
    operation: Update
    time: "2020-05-04T03:50:47Z"
  - apiVersion: v1
    fieldsType: FieldsV1
    fieldsV1:
      f:status:
        f:conditions:
          k:{"type":"ContainersReady"}:
            .: {}
            f:lastProbeTime: {}
            f:lastTransitionTime: {}
            f:status: {}
            f:type: {}
          k:{"type":"Initialized"}:
            .: {}
            f:lastProbeTime: {}
            f:lastTransitionTime: {}
            f:status: {}
            f:type: {}
          k:{"type":"Ready"}:
            .: {}
            f:lastProbeTime: {}
            f:lastTransitionTime: {}
            f:status: {}
            f:type: {}
        f:containerStatuses: {}
        f:hostIP: {}
        f:phase: {}
        f:podIP: {}
        f:podIPs:
          .: {}
          k:{"ip":"10.244.2.7"}:
            .: {}
            f:ip: {}
        f:startTime: {}
    manager: kubelet
    operation: Update
    time: "2020-05-04T03:50:49Z"
  name: tomcat6-7b84fb5fdc-5jh6t
  namespace: default
  ownerReferences:
  - apiVersion: apps/v1
    blockOwnerDeletion: true
    controller: true
    kind: ReplicaSet
    name: tomcat6-7b84fb5fdc
    uid: 292bfe3b-dd63-442e-95ce-c796ab5bdcc1
  resourceVersion: "46229"
  selfLink: /api/v1/namespaces/default/pods/tomcat6-7b84fb5fdc-5jh6t
  uid: 2f661212-3b03-47e4-bcb8-79782d5c7578
spec:
  containers:
  - image: tomcat:6.0.53-jre8
    imagePullPolicy: IfNotPresent
    name: tomcat
    resources: {}
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: default-token-bxqtw
      readOnly: true
  dnsPolicy: ClusterFirst
  enableServiceLinks: true
  nodeName: k8s-node2
  priority: 0
  restartPolicy: Always
  schedulerName: default-scheduler
  securityContext: {}
  serviceAccount: default
  serviceAccountName: default
  terminationGracePeriodSeconds: 30
  tolerations:
  - effect: NoExecute
    key: node.kubernetes.io/not-ready
    operator: Exists
    tolerationSeconds: 300
  - effect: NoExecute
    key: node.kubernetes.io/unreachable
    operator: Exists
    tolerationSeconds: 300
  volumes:
  - name: default-token-bxqtw
    secret:
      defaultMode: 420
      secretName: default-token-bxqtw
status:
  conditions:
  - lastProbeTime: null
    lastTransitionTime: "2020-05-04T03:50:47Z"
    status: "True"
    type: Initialized
  - lastProbeTime: null
    lastTransitionTime: "2020-05-04T03:50:49Z"
    status: "True"
    type: Ready
  - lastProbeTime: null
    lastTransitionTime: "2020-05-04T03:50:49Z"
    status: "True"
    type: ContainersReady
  - lastProbeTime: null
    lastTransitionTime: "2020-05-04T03:50:47Z"
    status: "True"
    type: PodScheduled
  containerStatuses:
  - containerID: docker://18eb0798384ea44ff68712cda9be94b6fb96265206c554a15cee28c288879304
    image: tomcat:6.0.53-jre8
    imageID: docker-pullable://tomcat@sha256:8c643303012290f89c6f6852fa133b7c36ea6fbb8eb8b8c9588a432beb24dc5d
    lastState: {}
    name: tomcat
    ready: true
    restartCount: 0
    started: true
    state:
      running:
        startedAt: "2020-05-04T03:50:49Z"
  hostIP: 10.0.2.4
  phase: Running
  podIP: 10.244.2.7
  podIPs:
  - ip: 10.244.2.7
  qosClass: BestEffort
  startTime: "2020-05-04T03:50:47Z"

Command Reference

(image: images/image-2020050411582358.png)

The meaning of service

(image: images/image-2020050420856830.png)

Previously, we deployed and exposed tomcat through the command line, which can also be done through yaml.

#These operations just generate the yaml template of the Deployment
[root@k8s-node1 ~]#  kubectl create deployment tomcat6 --image=tomcat:6.0.53-jre8 --dry-run -o yaml >tomcat6-deployment.yaml
W0504 04:13:28.265432   24263 helpers.go:535] --dry-run is deprecated and can be replaced with --dry-run=client.
[root@k8s-node1 ~]# ls tomcat6-deployment.yaml
tomcat6-deployment.yaml
[root@k8s-node1 ~]# 

Modify "tomcat6-deployment.yaml ”The contents are as follows:

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: tomcat6
  name: tomcat6
spec:
  replicas: 3
  selector:
    matchLabels:
      app: tomcat6
  template:
    metadata: 
      labels:
        app: tomcat6
    spec:
      containers:
      - image: tomcat:6.0.53-jre8
        name: tomcat
#deploy
[root@k8s-node1 ~]# kubectl apply -f tomcat6-deployment.yaml
deployment.apps/tomcat6 configured


#View resources
[root@k8s-node1 ~]# kubectl get all
NAME                           READY   STATUS    RESTARTS   AGE
pod/tomcat6-7b84fb5fdc-5jh6t   1/1     Running   0          27m
pod/tomcat6-7b84fb5fdc-8lhwv   1/1     Running   0          27m
pod/tomcat6-7b84fb5fdc-j4qmh   1/1     Running   0          27m

NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
service/kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   14h

NAME                      READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/tomcat6   3/3     3            3           27m

NAME                                 DESIRED   CURRENT   READY   AGE
replicaset.apps/tomcat6-7b84fb5fdc   3         3         3       27m
[root@k8s-node1 ~]#
kubectl expose deployment tomcat6 --port=80 --target-port=8080 --type=NodePort  --dry-run -o yaml
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: null
  labels:
    app: tomcat6
  name: tomcat6
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 8080
  selector:
    app: tomcat6
  type: NodePort
status:
  loadBalancer: {}

Append this output to "tomcat6-deployment.yaml" so that a single file completes the deployment and exposes the service:

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: tomcat6
  name: tomcat6
spec:
  replicas: 3
  selector:
    matchLabels:
      app: tomcat6
  template:
    metadata: 
      labels:
        app: tomcat6
    spec:
      containers:
      - image: tomcat:6.0.53-jre8
        name: tomcat
---
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: null
  labels:
    app: tomcat6
  name: tomcat6
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 8080
  selector:
    app: tomcat6
  type: NodePort

Deploy and expose services

[root@k8s-node1 ~]# kubectl apply -f tomcat6-deployment.yaml
deployment.apps/tomcat6 created
service/tomcat6 created

View service and deployment information

[root@k8s-node1 ~]# kubectl get all
NAME                           READY   STATUS    RESTARTS   AGE
pod/tomcat6-7b84fb5fdc-dsqmb   1/1     Running   0          4s
pod/tomcat6-7b84fb5fdc-gbmxc   1/1     Running   0          5s
pod/tomcat6-7b84fb5fdc-kjlc6   1/1     Running   0          4s

NAME                 TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
service/kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP        14h
service/tomcat6      NodePort    10.96.147.210   <none>        80:30172/TCP   4s

NAME                      READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/tomcat6   3/3     3            3           5s

NAME                                 DESIRED   CURRENT   READY   AGE
replicaset.apps/tomcat6-7b84fb5fdc   3         3         3       5s
[root@k8s-node1 ~]#

To access port 30172 on node1, node2 and node3:

[root@k8s-node1 ~]# curl -I http://192.168.56.{100,101,102}:30172/
HTTP/1.1 200 OK
Server: Apache-Coyote/1.1
Accept-Ranges: bytes
ETag: W/"7454-1491118183000"
Last-Modified: Sun, 02 Apr 2017 07:29:43 GMT
Content-Type: text/html
Content-Length: 7454
Date: Mon, 04 May 2020 04:35:35 GMT

HTTP/1.1 200 OK
Server: Apache-Coyote/1.1
Accept-Ranges: bytes
ETag: W/"7454-1491118183000"
Last-Modified: Sun, 02 Apr 2017 07:29:43 GMT
Content-Type: text/html
Content-Length: 7454
Date: Mon, 04 May 2020 04:35:35 GMT

HTTP/1.1 200 OK
Server: Apache-Coyote/1.1
Accept-Ranges: bytes
ETag: W/"7454-1491118183000"
Last-Modified: Sun, 02 Apr 2017 07:29:43 GMT
Content-Type: text/html
Content-Length: 7454
Date: Mon, 04 May 2020 04:35:35 GMT

[root@k8s-node1 ~]# 

Ingress

Ingress locates Pods through the Service it is associated with and provides domain-name-based access.
The Ingress controller load-balances requests across the Pods.
It supports layer-4 (TCP/UDP) and layer-7 (HTTP) load balancing.

(image: images/image-2020050423948771.png)

Steps:
(1) Deploy the Ingress controller

Apply "k8s/ingress-controller.yaml":

[root@k8s-node1 k8s]# kubectl apply -f ingress-controller.yaml 
namespace/ingress-nginx created
configmap/nginx-configuration created
configmap/tcp-services created
configmap/udp-services created
serviceaccount/nginx-ingress-serviceaccount created
clusterrole.rbac.authorization.k8s.io/nginx-ingress-clusterrole created
role.rbac.authorization.k8s.io/nginx-ingress-role created
rolebinding.rbac.authorization.k8s.io/nginx-ingress-role-nisa-binding created
clusterrolebinding.rbac.authorization.k8s.io/nginx-ingress-clusterrole-nisa-binding created
daemonset.apps/nginx-ingress-controller created
service/ingress-nginx created
[root@k8s-node1 k8s]# 

Check:

[root@k8s-node1 k8s]# kubectl get pods --all-namespaces
NAMESPACE       NAME                                READY   STATUS              RESTARTS   AGE
default         tomcat6-7b84fb5fdc-dsqmb            1/1     Running             0          16m
default         tomcat6-7b84fb5fdc-gbmxc            1/1     Running             0          16m
default         tomcat6-7b84fb5fdc-kjlc6            1/1     Running             0          16m
ingress-nginx   nginx-ingress-controller-9q6cs      0/1     ContainerCreating   0          40s
ingress-nginx   nginx-ingress-controller-qx572      0/1     ContainerCreating   0          40s
kube-system     coredns-546565776c-9sbmk            1/1     Running             1          14h
kube-system     coredns-546565776c-t68mr            1/1     Running             1          14h
kube-system     etcd-k8s-node1                      1/1     Running             1          14h
kube-system     kube-apiserver-k8s-node1            1/1     Running             1          14h
kube-system     kube-controller-manager-k8s-node1   1/1     Running             1          14h
kube-system     kube-flannel-ds-amd64-5xs5j         1/1     Running             2          13h
kube-system     kube-flannel-ds-amd64-6xwth         1/1     Running             2          14h
kube-system     kube-flannel-ds-amd64-fvnvx         1/1     Running             1          13h
kube-system     kube-proxy-7tkvl                    1/1     Running             1          13h
kube-system     kube-proxy-mvlnk                    1/1     Running             2          13h
kube-system     kube-proxy-sz2vz                    1/1     Running             1          14h
kube-system     kube-scheduler-k8s-node1            1/1     Running             1          14h
[root@k8s-node1 k8s]#

Here the master node is responsible for scheduling, while the actual work is carried out on node2 and node3; you can see them downloading the images.

(image: images/image-20200504242468258.png)

(2) Create Ingress rule

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: web
spec:
  rules:
  - host: tomcat6.kubenetes.com
    http:
       paths: 
          - backend: 
              serviceName: tomcat6
              servicePort: 80
[root@k8s-node1 k8s]# touch ingress-tomcat6.yaml
#Add the above rule to the file ingress-tomcat6.yaml
[root@k8s-node1 k8s]# vi  ingress-tomcat6.yaml  
 
[root@k8s-node1 k8s]# kubectl apply -f ingress-tomcat6.yaml 
ingress.extensions/web created
[root@k8s-node1 k8s]# 

Modify the local hosts file and add the following domain name mapping:

192.168.56.102 tomcat6.kubenetes.com

Test: http://tomcat6.kubenetes.com/

(image: images/image-20200504113125267.png)

Even if one node in the cluster becomes unavailable, the overall operation is not affected.
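As a rough sanity check (the pod name below is just one of the replicas listed earlier, and the app=tomcat6 label is the default set by kubectl create deployment; substitute your own values), you can delete a single tomcat6 pod and watch the Deployment recreate it:

# Delete one replica; the Deployment controller schedules a replacement automatically
kubectl delete pod tomcat6-7b84fb5fdc-dsqmb
# Watch the replicas return to Running
kubectl get pods -l app=tomcat6 -w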

Install the Kubernetes visual interface - DashBoard

1. Deploy DashBoard

$ kubectl apply -f kubernetes-dashboard.yaml

The file is provided in the "k8s" source directory.

2. Expose DashBoard as public access

By default, DashBoard can only be accessed from inside the cluster. Change the Service to NodePort type to expose it outside the cluster:

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  type: NodePort
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30001
  selector:
    k8s-app: kubernetes-dashboard

Access address: https://NodeIP:30001 (the dashboard serves HTTPS)
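After changing the Service type, re-apply the manifest and check the assigned port (a routine check, assuming the Service lives in kube-system as in the YAML above):

kubectl apply -f kubernetes-dashboard.yaml
# The PORT(S) column should show 443:30001/TCP
kubectl get svc -n kube-system kubernetes-dashboard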

3. Create an authorized account

$ kubectl create serviceaccount dashboard-admin -n kube-system
$ kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=kube-system:dashboard-admin
$ kubectl describe secrets -n kube-system $( kubectl -n kube-system get secret |awk '/dashboard-admin/{print $1}' )

Log in to the dashboard with the token printed in the output.
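To print only the token value instead of the whole describe output (an optional convenience, using the same secret lookup as above):

kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | awk '/dashboard-admin/{print $1}') | grep '^token'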


KubeSphere

The default dashboard is of limited use. We can use KubeSphere instead, which covers the whole DevOps workflow. KubeSphere integrates many components and therefore requires a fairly powerful cluster.
https://kubesphere.io

Kuboard is also very good, and its cluster requirements are not high.
https://kuboard.cn/support/

1. Introduction

KubeSphere is an open-source project designed for cloud-native applications. It is a distributed, multi-tenant container management platform built on top of Kubernetes, the mainstream container orchestration platform. It provides a simple, easy-to-use interface and guided workflows, which lowers the learning cost of using a container platform and greatly reduces the complexity of daily development, testing, and operations work.

2. Prerequisites before installation

1. Install helm (executed on the master node)

Helm is the package manager for Kubernetes. A package manager is similar to apt on Ubuntu, yum on CentOS, or pip in Python: it can quickly find, download, and install packages. Helm consists of the client component helm and the server component Tiller. It packages a group of K8s resources for unified management and is the best way to find, share, and use software built for Kubernetes.
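A few typical Helm 2 commands, just to illustrate the workflow (the chart name is an example from the old stable repository, not something this tutorial installs):

helm search mysql                        # find a chart in the configured repositories
helm install stable/mysql --name my-db   # install a chart as a named release
helm ls                                  # list releases managed by Tiller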

1) Installation

curl -L https://git.io/get_helm.sh|bash

Because the download URL is blocked by the firewall, use the get_helm.sh script we provide.

[root@k8s-node1 k8s]# ll
total 68
-rw-r--r-- 1 root root  7149 Feb 27 01:58 get_helm.sh
-rw-r--r-- 1 root root  6310 Feb 28 05:16 ingress-controller.yaml
-rw-r--r-- 1 root root   209 Feb 28 13:18 ingress-demo.yml
-rw-r--r-- 1 root root   236 May  4 05:09 ingress-tomcat6.yaml
-rwxr--r-- 1 root root 15016 Feb 26 15:05 kube-flannel.yml
-rw-r--r-- 1 root root  4737 Feb 26 15:38 kubernetes-dashboard.yaml
-rw-r--r-- 1 root root  3841 Feb 27 01:09 kubesphere-complete-setup.yaml
-rw-r--r-- 1 root root   392 Feb 28 11:33 master_images.sh
-rw-r--r-- 1 root root   283 Feb 28 11:34 node_images.sh
-rw-r--r-- 1 root root  1053 Feb 28 03:53 product.yaml
-rw-r--r-- 1 root root   931 May  3 10:08 Vagrantfile
[root@k8s-node1 k8s]# sh get_helm.sh 
Downloading https://get.helm.sh/helm-v2.16.6-linux-amd64.tar.gz
Preparing to install helm and tiller into /usr/local/bin
helm installed into /usr/local/bin/helm
tiller installed into /usr/local/bin/tiller
Run 'helm init' to configure helm.
[root@k8s-node1 k8s]# 

2) Verify version

helm version

3) Create permissions (executed on the master node)

Create helm-rbac.yaml with the following content:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: tiller
    namespace: kube-system

Apply the configuration:

[root@k8s-node1 k8s]#  kubectl apply -f helm-rbac.yaml
serviceaccount/tiller created
clusterrolebinding.rbac.authorization.k8s.io/tiller created
[root@k8s-node1 k8s]#

2. Install Tiller (executed on the master node)

1. Initialization

[root@k8s-node1 k8s]# helm init --service-account=tiller --tiller-image=sapcc/tiller:v2.16.3 --history-max 300 
Creating /root/.helm 
Creating /root/.helm/repository 
Creating /root/.helm/repository/cache 
Creating /root/.helm/repository/local 
Creating /root/.helm/plugins 
Creating /root/.helm/starters 
Creating /root/.helm/cache/archive 
Creating /root/.helm/repository/repositories.yaml 
Adding stable repo with URL: https://kubernetes-charts.storage.googleapis.com 
Adding local repo with URL: http://127.0.0.1:8879/charts 
$HELM_HOME has been configured at /root/.helm.

Tiller (the Helm server-side component) has been installed into your Kubernetes Cluster.

Please note: by default, Tiller is deployed with an insecure 'allow unauthenticated users' policy.
To prevent this, run `helm init` with the --tiller-tls-verify flag.
For more information on securing your installation see: https://v2.helm.sh/docs/securing_installation/
[root@k8s-node1 k8s]# 

--tiller-image specifies the image to use; otherwise the pull will be blocked by the firewall. Wait for the Tiller pod deployed on the node to finish starting.
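Instead of polling manually, you can wait for the Tiller deployment to become ready (a standard kubectl check):

kubectl -n kube-system rollout status deployment/tiller-deploy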

[root@k8s-node1 k8s]#  kubectl get pods -n kube-system
NAME                                   READY   STATUS             RESTARTS   AGE
coredns-546565776c-9sbmk               1/1     Running            3          23h
coredns-546565776c-t68mr               1/1     Running            3          23h
etcd-k8s-node1                         1/1     Running            3          23h
kube-apiserver-k8s-node1               1/1     Running            3          23h
kube-controller-manager-k8s-node1      1/1     Running            3          23h
kube-flannel-ds-amd64-5xs5j            1/1     Running            4          22h
kube-flannel-ds-amd64-6xwth            1/1     Running            5          23h
kube-flannel-ds-amd64-fvnvx            1/1     Running            4          22h
kube-proxy-7tkvl                       1/1     Running            3          22h
kube-proxy-mvlnk                       1/1     Running            4          22h
kube-proxy-sz2vz                       1/1     Running            3          23h
kube-scheduler-k8s-node1               1/1     Running            3          23h
kubernetes-dashboard-975499656-jxczv   0/1     ImagePullBackOff   0          7h45m
tiller-deploy-8cc566858-67bxb          1/1     Running            0          31s
[root@k8s-node1 k8s]#

To view all node information of the cluster:

[root@k8s-node1 k8s]#  kubectl get node -o wide
NAME        STATUS   ROLES    AGE   VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION               CONTAINER-RUNTIME
k8s-node1   Ready    master   23h   v1.17.3   10.0.2.15     <none>        CentOS Linux 7 (Core)   3.10.0-957.12.2.el7.x86_64   docker://19.3.8
k8s-node2   Ready    <none>   22h   v1.17.3   10.0.2.4      <none>        CentOS Linux 7 (Core)   3.10.0-957.12.2.el7.x86_64   docker://19.3.8
k8s-node3   Ready    <none>   22h   v1.17.3   10.0.2.5      <none>        CentOS Linux 7 (Core)   3.10.0-957.12.2.el7.x86_64   docker://19.3.8
[root@k8s-node1 k8s]# 

2. Testing

helm install stable/nginx-ingress --name nginx-ingress
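If the installation succeeds, the release should be visible and can be cleaned up again afterwards (Helm 2 syntax; the release name matches the --name flag above):

helm status nginx-ingress           # show the resources created by the release
helm delete --purge nginx-ingress   # remove the test release and its history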

Minimal installation of KubeSphere

If the cluster has more than 1 CPU core and more than 2 GB of memory available, you can use the following command to perform a minimal installation of KubeSphere:

kubectl apply -f https://raw.githubusercontent.com/kubesphere/ks-installer/master/kubesphere-minimal.yaml

Tip: if your server cannot access GitHub, you can first save kubesphere-minimal.yaml or kubesphere-complete-setup.yaml locally as a static file, and then install from the local file using the same command as above.
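For example (a sketch of the local-file approach; the URL is the same one used above):

# Download the manifest once, then install from the local copy
wget https://raw.githubusercontent.com/kubesphere/ks-installer/master/kubesphere-minimal.yaml
kubectl apply -f kubesphere-minimal.yaml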

1. Follow the installation logs and wait patiently for the installation to succeed:
$ kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -f

Note: if you encounter problems during installation, you can also use the log command above to troubleshoot.
